Discussion on the Application of Cloud Computing in Agricultural Information Management
Abstract: To solve the "last mile" problem in agricultural information management, it is imperative to introduce the cloud computing platform. Cloud computing will bring revolutionary changes to the management of agricultural information resources. Starting from a description of the basic connotation, technology and characteristics of cloud computing, this study analyzes the feasibility of applying cloud computing to the management of agricultural information resources and the current problems in China. It then puts forward a model for managing agricultural information resources and analyzes its application prospects and the issues that require attention.
INTRODUCTION
In recent years, with the development of information technology, the information infrastructure in rural areas of China has been greatly improved. However, the "last mile" problem still exists and continues to trouble agricultural information departments. Cloud computing, as a mode of business service, is being applied in many fields and brings new ideas to agricultural information management and services. With its unique advantages, cloud computing can play a significant role in agricultural information management. Discussing the application of cloud computing in agricultural information therefore has important theoretical and practical significance.
Domestically, the government pays a great deal of attention to cloud computing. In December 2011, a telecom cloud computing data center project was settled in Hohhot, China. During the "Twelfth Five-Year" period, the scale of China's cloud computing industry chain is expected to reach between 750 and 1000 billion. At present, cloud computing development plans launched around the country include the "Xiangyun Plan" in Beijing, the "Yunhai Plan" in Shanghai, the "Tianyun Plan" in Guangzhou and the "Cloud Computing Industrial Union" in Wuhan.
Developing cloud computing is an important opportunity to catch up with the world's advanced level; at the same time, it is an important opportunity to carry out industry applications for agriculture and rural areas, and it is necessary for developing information-based agriculture and agricultural public services. Cloud computing is the future trend of information technology and a new mode of commercial service with broad application prospects; the development of agriculture and of information technology are closely linked.
The purpose of this study is to analyze the current status of, and the problems in, the development of agricultural information; to introduce the advantages of cloud computing and construct an agricultural information management system based on cloud computing technology; and to explore the theoretical foundation and technical support that cloud computing can provide for agricultural information, thereby providing a source of power for its development.
CLOUD COMPUTING OVERVIEW
Connotation of cloud computing: Cloud computing has attracted attention ever since it emerged and is known as the third IT industry revolution. Its goal is to provide a set of public facilities for computing, services and applications, making computing resources as easy to use as water, electricity and gas in daily life. However, no consensus on its definition has yet been formed at home or abroad. A representative definition is given in Wikipedia: based on the internet, shared hardware and software resources and information can be supplied to computers and other devices on demand (EB/OL, 2011a). Another opinion holds that cloud computing is a business computing model based on the internet: it uses the high-speed transmission capacity of the internet to move data processing from a personal computer or server to a cluster of servers on the internet (EB/OL, 2011b). "Cloud computing" is a new model of network application proposed by Google. In the narrow sense, cloud computing means obtaining the necessary resources through the network; in the broad sense, it refers to the delivery and usage mode of services, that is, acquiring needed services through the network on demand. In a word, through cloud computing, with its unique features such as ultra-large scale, virtualization and reliability, providers can handle a large amount of information within a few seconds and achieve the same powerful performance as a "supercomputer". This is shown in Fig. 1.
The characteristics of cloud computing:
The core of cloud computing is to manage and schedule information resources in a unified way, forming a pool of resources that provides services to users. The so-called "cloud" is the network that provides the resources; for users, the resources in the cloud can be extended without limit and paid for on demand at any time and in any place. Cloud computing therefore has the following characteristics (Chen et al., 2011):
• High elasticity: As a massive resource pool, cloud computing can stretch dynamically according to the resources applied for, expanding resources under high load and releasing excess resources under low load, thereby improving resource efficiency.
• High reliability: All data and programs are stored and run in the cloud, and computation is also carried out there. Cloud services are distributed across many servers, so failed nodes can be handled automatically and computing and related applications can keep running smoothly.
• High flexibility: Users can take advantage of the technology infrastructure resources quickly; the implementation mechanism of cloud services is transparent, so users can obtain the services they need without mastering the cloud computing mechanism.
• Low cost: Some users use cloud computing infrequently or only once; purchasing expensive equipment would increase their cost. The infrastructure of cloud computing is usually provided by a third party and billed by usage, which greatly reduces users' costs and the knowledge required of them.
• High sharing: The same infrastructure can be used by many users, which prevents individual users from bearing an excessive cost.
• High independence: Users can access the system from a PC or other devices without being constrained by location or time, just as they obtain information and services through the internet.
The related technology of cloud computing:
Cloud computing is a combination of traditional computer technology and network technology, including grid computing, distributed computing, parallel computing, utility computing, network storage technologies, resource reutilization and load balancing. It is designed to integrate these into a system with strong computing power and, with the help of SaaS, PaaS, IaaS and MSP, to distribute that computing power to end users (Ding and Yan, 2012). Next we take IaaS cloud computing as an example to outline the implementation mechanism of cloud computing, which is shown in Fig. 2.
The user interaction interface provides applications in the form of Web services and receives users' requirements. The service directory is a list of services that users can access. The system management module is responsible for managing and allocating all available resources; its core is load balancing. The configuration tool is responsible for preparing the operating environment at the allocated node. The monitoring and statistics module is responsible for monitoring the running status of nodes and compiling statistics. The implementation process is not complicated: the user interaction interface allows users to choose and invoke a service from the directory; the request is passed to the system management module, which allocates appropriate resources for the user and then invokes the configuration tool to prepare the operating environment (Huo et al., 2012).
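As an illustration of the flow just described, the following sketch (in Python, with hypothetical class, node and service names that are not taken from the paper) mimics how a request chosen from the service directory is allocated to a node by a load-balancing management module:

```python
# Illustrative sketch (not from the paper) of the IaaS request flow described
# above: a user picks a service from the catalog, the management module
# allocates the least-loaded node, and a configuration step would then
# prepare the environment. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    capacity: int
    load: int = 0

@dataclass
class ServiceRequest:
    user: str
    service: str
    demand: int

class SystemManager:
    """Allocates resources; its core policy here is simple load balancing."""
    def __init__(self, nodes, catalog):
        self.nodes = nodes
        self.catalog = catalog          # directory of services users may access

    def handle(self, request: ServiceRequest) -> str:
        if request.service not in self.catalog:
            raise ValueError("service not in directory")
        # load balancing: pick the node with the most free capacity
        node = max(self.nodes, key=lambda n: n.capacity - n.load)
        if node.capacity - node.load < request.demand:
            raise RuntimeError("no node can satisfy the request")
        node.load += request.demand     # configuration tool would now prepare the node
        return f"{request.service} for {request.user} deployed on {node.name}"

manager = SystemManager(
    nodes=[Node("node-a", 100), Node("node-b", 100)],
    catalog={"crop-price-query", "weather-forecast"},
)
print(manager.handle(ServiceRequest("farmer-01", "weather-forecast", demand=10)))
```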
The status and problems of agricultural information management in China: In recent years, China's agricultural informatization has focused on information technology infrastructure and agricultural information services. Agricultural management information systems with different functions and of different types have emerged. Through years of construction, the agricultural information infrastructure has made great achievements, such as the "villages" project, the "agriculture project" and the "three-in-one" works, which have laid a solid foundation for agricultural information services (Qian, 2012). Currently, however, the construction of agricultural information still has problems such as "emphasizing hardware while overlooking software, low information quality, failure to meet farmers' real needs, farmers being unable to apply the practical information, and information having little impact on farmers" (Qiao and Liu, 2006).
THE UTILITY AND THE MODEL OF AGRICULTURE INFORMATION RESOURCES UNDER THE CONTEXT OF CLOUD COMPUTING
Agricultural information resources and services take different approaches to provide farmers with the information they need, and the mode of service changes with the external environment. In the first stage, information was transferred to farmers mainly face to face. In the second stage, it was transferred mainly through media and printed materials. In the third stage, it was transferred mainly by telephone, the network and so on; in some new rural construction areas, "Rural Book Houses" were established to transfer information. The fourth stage is the cloud computing stage, in which farmers obtain the required information mainly through a variety of convenient terminals combined with the network. As a business computing and storage mode, cloud computing covers the fields of agricultural information services in its research content; as a technology, it brings new techniques, methods and ideas to agricultural informatization and has a great impact on agricultural information services. Thus, the development of cloud computing can drive the development of agricultural information services and transform their mode, which is embodied in the construction mode of agricultural information resources, the utilization mode of agricultural information resources and the operation mode of agricultural information services.
Construction mode:
Agricultural information resource services take the construction of agricultural information as a precondition; under the context of cloud computing, the construction of agricultural information resources gradually becomes a shared undertaking. Cloud computing is based on a wide range of computing and storage resources that can be dynamically invoked and allocated, so resource sharing must be achieved. The biggest problem in this process is unifying the data format: to improve service quality and develop deeper levels of service, service providers need to unify data standards and harmonize the data. From the point of view of agricultural information service organizations, different agencies hold agricultural information resources with different characteristics. Under cloud computing, these agencies can use a common space to store information and share the infrastructure provided by the cloud service provider without purchasing storage equipment; agricultural information resources are all stored on cloud servers, so the common information environment provides the basis for sharing agricultural information resources. With a unified data standard and a common physical storage environment, sharing agricultural information resources becomes very easy, which in turn guarantees agricultural information services.
Utility mode:
• Accurate calculation of quantity: To access agricultural information resources under the cloud computing mode, users need not purchase their own infrastructure or build a resource center and service platform; they only need to obtain the necessary resources and services. There is therefore no need for many separate centers; farmers can be provided with efficient information services and billing becomes more accurate.
• Convenient unified search: In rural areas, farmers' abilities are limited and their educational level is generally low, so operations must be as simple as possible. The cloud computing environment meets exactly these needs. The cloud platform no longer uses the traditional information retrieval mode; it offers integrated, unified retrieval to farmers. When a farmer submits a request, the client sends the query directly to the cloud; the resource dispatch center computes over the stored resources, collects all the resulting data and returns the results to the user. To improve the efficiency of the massive resources and obtain more profit, cloud service providers will strengthen the functions of the cloud platform and develop practical, efficient integrated retrieval systems, achieving findability for users and attracting users to the platform. The power of cloud computing can be expanded without limit when necessary, so information retrieval will no longer be constrained by hardware, while speed and accuracy will be further enhanced.
• Unified advisory services: Under the cloud computing mode, agricultural information service agencies use a unified cloud platform to provide information consulting services to farmers. Farmers do not need to consider who should solve their problem. The basic agricultural information consulting model under cloud computing is shown in Fig. 3.
After a farmer submits an advisory request and the cloud service platform receives it, the platform determines the type and characteristics of the problem and, according to the strengths of the different agricultural information service agencies, designates an agency to answer the user's question (a simple routing sketch follows this paragraph). The cloud consulting platform can be seen as a black box between farmers and the agricultural information services; it plays the important roles of advisory scheduling, resource allocation and communication assistance. To encourage agricultural information service agencies to participate in advisory services, the cloud platform gives appropriate awards to stimulate their enthusiasm for answering questions. The advantages of this model are that the agencies do not need to develop their own consulting platforms, which saves the cost of purchasing equipment and developing systems, and that it helps them bring their strengths into play to provide a unified, high-quality service. Because it adopts a joint-service model, it also helps farmers receive answers on time. In order to integrate agricultural science and technology resources and build a center for agricultural science and technology services, data sources and a wide-ranging agricultural science and technology service network, bringing together a first-class team of experts to carry out agricultural information services, tele-consultation, video diagnostics, online trading of agricultural products and so on, and to solve problems such as agricultural science and technology information silos and lagging services, a project team has explored the construction of an agricultural cloud structure (Li et al., 2011). Figure 4 shows the frame structure of the agriculture cloud.
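The following sketch (hypothetical agency names and a simple keyword-overlap match; not the platform's actual implementation) illustrates the advisory-scheduling role described above: a farmer's question is routed to the best-matching service agency, which is credited with a reward for answering.

```python
# Minimal sketch (assumed names, not the project's actual system) of the
# "black box" dispatch of the cloud consulting platform.

AGENCIES = {
    "plant-protection-station": {"pest", "disease", "pesticide"},
    "soil-fertilizer-institute": {"soil", "fertilizer", "irrigation"},
    "veterinary-service": {"livestock", "vaccine", "feed"},
}
rewards = {name: 0 for name in AGENCIES}

def route_question(keywords: set) -> str:
    """Pick the agency with the largest keyword overlap with the question."""
    agency = max(AGENCIES, key=lambda a: len(AGENCIES[a] & keywords))
    rewards[agency] += 1            # incentive for answering the question
    return agency

print(route_question({"pest", "rice"}))   # -> plant-protection-station
```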
Feasibility analysis of applying cloud computing in agricultural information management:
The operating feasibility for farmers: Most farmers do not need to know the details of the cloud infrastructure, do not need special expertise and do not need to control it directly. Under the cloud mode, the user's computer can become very simple, perhaps with little memory and no hard drive or locally installed software, and still meet their needs.
The user's computer does nothing except send and receive commands and data through the browser to and from the cloud, yet in this way it can still use the cloud's computing resources, storage space and various application software. It is as if the wires connecting the "display" and the "host" were infinitely long: the "display" sits in front of the user while the host is far away, and the user does not even know where it is.
Cloud computing turns the wire that connects "display" and "host" into a network and turns the "host" into the server clusters of service providers. The computer cluster acts as a "cloud" that can be used at any time; users only need to input the data to be processed and can get the results they want. The process requires users to pay a fee, but the cost is lower than purchasing and maintaining computer equipment and software systems themselves. This technology, known as "cloud technology", enables people who do not understand computers to use them easily, just as with household natural gas one only needs to know where the switch is, not every aspect of gas production and transportation. By using cloud technology, ordinary people can use formerly sophisticated technology at lower cost; this is a huge step forward for society and an advantage of the cloud computing platform that other technologies cannot match.
The cost feasibility for farmers: In cloud computing, the cost of the required PC configuration is very low. Compared with the grid, cloud computing is superior in computing power, comprehensiveness and security; its architecture is a collection of resources that can be managed dynamically and maintained at any time. Introducing cloud computing into the construction of agricultural information is therefore very feasible: agricultural departments can share infrastructure in which a large number of systems are connected together, the cost of the network will fall and efficiency will increase.
Existing practice for reference: Currently, many areas have launched cloud computing platforms and achieved good results.
The "agriculture engineering" in Shandong was the first to introduce cloud computing technology into the field of agricultural information in China and provides new ideas for national agricultural informatization (News, 2011). The engineering research center of Hebei Province studied the role that cloud computing plays in the agricultural Internet of Things and created an applied model of cloud computing with parallel algorithms. Their project, completed in 2009, a rural science and technology information service platform based on the cloud model (the agriculture information cloud), won the Hubei Province science and technology progress award (Dai and Chen, 2009). In December 2011, the telecom cloud computing data center project in China was officially settled in Hohhot, with a total investment expected to reach 12 billion yuan.
Cloud computing is a new and revolutionary concept; it liberates users from the personal computer or small server and moves them from the traditional desktop-centric application mode to an application mode based on virtual, storage-centric services. This mode allows even computers or servers with poor performance to play a maximal role. Agriculture-related agencies can build data centers to develop storage and computing services, which will improve the speed and quality of agricultural information construction.
THE APPLIED PROSPECTS OF CLOUD COMPUTING IN AGRICULTURAL INFORMATION MANAGEMENT
Prospects: Cloud computing will change the way people obtain agricultural information, and cloud computing infrastructure will become an indispensable basis for people's lives. At present, people are used to storing and using their own data and application software, but in the era of cloud computing users need not know where their data reside; they only need to state their needs to the cloud, and the cloud will meet all the requirements the farmers put forward. Users only need a computer with internet access and a browser to enjoy what cloud computing brings. They no longer need to worry about software viruses, reinstall anti-virus software and firewalls, or check whether their software is the latest version, because all of this is done on the other side of the "cloud" by professional IT staff who maintain and update what was previously handled on the PC. Under cloud computing, people's concept of using network resources will change radically. Cloud computing will let farmers access application services anywhere using various terminals; they request resources from the cloud rather than from a fixed physical entity. The applications run somewhere in the cloud, but farmers do not need to know or worry about their specific location; with only a laptop or a mobile phone they can accomplish what they want, even supercomputing tasks.
According to CCID Consulting, in the next three years cloud computing in China will gradually be used more and more by enterprises and institutions, with the market size growing from 16.7 billion in 2010 to 117.4 billion in 2013. At the "Cloud High Forum in China", many experts said that cloud computing will greatly promote the construction of information infrastructure in China and the process of informatization.
The problems that should be noticed in cloud-based agricultural information management: Using cloud technology to construct an agricultural information management system will directly benefit 800 million farmers: without paying much more or mastering advanced computer knowledge, they can still enjoy professional services, agricultural production efficiency will rise, and the guidelines made by governments will become more scientific, timely and accurate. When constructing agricultural information management based on cloud technology, attention must be paid to the following questions:
• Establish a national standard for agricultural information
• Expand publicity to increase the degree of concern
At present, the application of cloud computing in agricultural information technology does not receive enough attention and has not yet aroused widespread concern in society; applied research in agricultural information services should be strengthened, and cloud computing should be introduced, absorbed and promoted in this field.
"Computer Science",
"Agricultural and Food Sciences"
] |
SEMI-INFINITE-GEOMETRY BOUNDARY-PROBLEM FOR LIGHT MIGRATION IN HIGHLY SCATTERING MEDIA - A FREQUENCY-DOMAIN STUDY IN THE DIFFUSION-APPROXIMATION
We have studied light migration in highly scattering media theoretically and experimentally, using the diffusion approximation in a semi-infinite-geometry boundary condition. Both the light source and the detector were located on the surface of a semi-infinite medium. Working with frequency-domain spectroscopy, we approached the problem in three areas: (1) we derived theoretical expressions for the measured quantities in frequency-domain spectroscopy by applying appropriate boundary conditions to the diffusion equation; (2) we experimentally verified the theoretical expressions by performing measurements on a macroscopically homogeneous medium in quasi-semi-infinite-geometry conditions; (3) we applied Monte Carlo methods to simulate the semi-infinite-geometry boundary problem. The experimental results and the confirming Monte Carlo simulation show that the diffusion approximation, under the appropriate boundary conditions, accurately estimates the optical parameters of the medium.
INTRODUCTION
Light propagation in turbid media is described by transport theory, also called the theory of radiative transfer [1,2]. The Boltzmann transport equation, which is a balance relationship, treats light propagation as the transport of photons through a medium containing particles. In most practical cases the equation of transfer cannot be solved exactly, and it is often necessary to consider an approximate approach. One of these simplified approaches is the diffusion approximation [3-5], which is valid in the strongly scattering regime [6]. The observed optical properties of most biological tissues [7] are typified by a scattering coefficient that far exceeds the absorption coefficient. A number of studies employed the diffusion theory to investigate the optical properties of tissues. These studies used steady-state spectroscopy [10] and time-resolved spectroscopy [11] in both the time domain [12-14] and the frequency domain [15]. We present a frequency-domain study of the applicability of the diffusion approximation to the case of a semi-infinite geometry. Both the light source and the detector are placed at the interface between air and a strongly scattering medium; the interface extends indefinitely. The proper solution of this boundary problem has important practical implications because it represents a reasonable model for in vivo, noninvasive applications of light spectroscopy in medicine. When the light source and the detector are placed on a surface separating two media with different optical properties, the diffusion approximation is not rigorously applicable [16]. Nevertheless, the diffusion approximation has been applied to predict the time-domain and steady-state response in the reflectance geometry from quasi-semi-infinite tissues [10,12]. We derive the expression for the frequency-domain photon fluence rate and verify its equivalence with the corresponding expression derived in the time domain [12]. Experimentally, we test the expression's level of accuracy by performing a systematic study on a macroscopically homogeneous tissuelike phantom. Since the diffusion theory is highly accurate in predicting the results of experiments performed in an infinite geometry [17-19], we compare our results obtained in the semi-infinite geometry (i.e., at the surface of the medium) with the results of measurements conducted deep inside the bulk medium (i.e., in the infinite geometry). The comparison of experimental results is carried out for a wide range of values of $\mu_a$ and $\mu_s'$. A Monte Carlo simulation of the boundary problem has also been performed.
THEORY
The distribution of photons in random media is described by the angular photon density $u(\mathbf{r},\Omega,t)$, defined so that $u(\mathbf{r},\Omega,t)\,d^3r\,d\Omega$ is the expected number of photons in $d^3r$ around $\mathbf{r}$ moving in direction $\Omega$ within solid angle $d\Omega$ at time $t$. The temporal evolution of the angular photon density in a medium where absorption and elastic scattering take place is given by the Boltzmann transport equation [4]:

$$\frac{\partial u}{\partial t} = -\,\mathbf{v}\cdot\nabla u \;-\; v(\mu_a+\mu_s)\,u \;+\; \int_{4\pi} d\Omega'\, v\,\mu_s\, p_s(\Omega'\!\to\!\Omega)\, u(\mathbf{r},\Omega',t) \;+\; q(\mathbf{r},\Omega,t), \qquad (1)$$

where $\mathbf{v}$ is the velocity of photons in the medium (and $v$ its modulus), $v\mu_a$ and $v\mu_s$ are the rates of absorption and scattering, respectively, $p_s(\Omega'\!\to\!\Omega)$ is the normalized probability for scattering events that carry photons from $\Omega'$ into $\Omega$, and $q$ is the photon source term. The Boltzmann transport equation is an integro-differential equation containing both time and spatial derivatives, and its solution requires initial and boundary conditions for $u(\mathbf{r},\Omega,t)$.
In the multiply scattering regime the usual simplification is the diffusion approximation. The approximation assumes that the angular photon flux, defined as $\phi(\mathbf{r},\Omega,t) \equiv v\,u(\mathbf{r},\Omega,t)$, is quasi-isotropic [3-5]:

$$\phi(\mathbf{r},\Omega,t) \simeq \frac{1}{4\pi}\,\Phi(\mathbf{r},t) + \frac{3}{4\pi}\,\mathbf{J}(\mathbf{r},t)\cdot\hat{\Omega}, \qquad \Phi \gg |\mathbf{J}|, \qquad (2)$$

where $\Phi(\mathbf{r},t) \equiv \int_{4\pi} d\Omega\,\phi(\mathbf{r},\Omega,t)$ is the total photon flux and $\mathbf{J}(\mathbf{r},t) \equiv \int_{4\pi} d\Omega\,\phi(\mathbf{r},\Omega,t)\,\hat{\Omega}$ is the photon flux vector. Inserting this expansion into Eq. (1) yields the continuity equation (3) for the photon density $U(\mathbf{r},t)$ and the flux equation

$$\frac{1}{v}\frac{\partial \mathbf{J}}{\partial t} + \frac{v}{3}\,\nabla U + (\mu_a+\mu_s')\,\mathbf{J} = \mathbf{q}_1, \qquad (4)$$

where $\mu_s'$ [defined as $(1-g)\mu_s$, with $g$ the average cosine of the scattering angle] is the transport scattering coefficient, and $q_0$ and $\mathbf{q}_1$ are defined by introducing the corresponding expansion of the angular dependence of the source. If we assume that the photon source is isotropic ($\mathbf{q}_1 = 0$) and neglect the time derivative of $\mathbf{J}$, which is equivalent to saying that the variations of $\mathbf{J}$ occur on a time scale much larger than the time between photon collisions with the scattering particles of the medium, Eq. (4) yields

$$\mathbf{J} = -\,vD\,\nabla U, \qquad (6)$$

where $D = 1/(3\mu_a + 3\mu_s')$ is the diffusion constant. Finally, by using expression (6) for $\mathbf{J}$, we can rewrite Eq. (3) in the form of the photon-diffusion equation:

$$\frac{\partial U}{\partial t} - vD\,\nabla^2 U + v\mu_a\,U = q_0. \qquad (7)$$

It is important to be clear about the limitations of the diffusion equation. As discussed, its derivation requires the following approximations. It has been shown that the photon-flux quasi-isotropy condition is well satisfied [6,16]:
(a1) in strongly scattering media ($\mu_a \ll \mu_s'$),
(a2) far from boundaries,
(a3) far from sources,
where "far" in conditions (a2) and (a3) refers to distances much greater than the photon mean free path. In frequency-domain spectroscopy the intensity of the light source is modulated at a frequency ($\omega/2\pi$) typically of tens to hundreds of megahertz, so that the photon density is written as the sum of a dc component and an ac component oscillating at the modulation frequency [Eqs. (8)-(10)], where $\phi$ is the phase lag between the source (located at $r=0$) and the detector (located at a distance $r \gg 1/\mu_s'$ from the source) and $x$ is defined as $\omega/(v\mu_a)$, with $v$ the speed of light in the medium (given by $c/n$, with $n$ the index of refraction of the medium). Equations (8)-(10) have been experimentally verified [19-21] and provide a good description of the homogeneous infinite-medium problem in the multiply scattering regime.
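For orientation, the standard infinite-medium frequency-domain solutions of the photon-diffusion equation for a point source, written with the definitions above, take the following form (this is the standard diffusion-theory result, given here as a reconstruction consistent with the slope relations used later in the paper rather than as a verbatim quote of Eqs. (8)-(10)):

\begin{align*}
  r\,U_{\mathrm{dc}}(r) &\propto
      \exp\!\Big[-r\sqrt{3\mu_a(\mu_a+\mu_s')}\,\Big],\\[2pt]
  r\,U_{\mathrm{ac}}(r) &\propto
      \exp\!\Big[-r\sqrt{\tfrac{3}{2}\mu_a(\mu_a+\mu_s')}\,
      \big(\sqrt{1+x^{2}}+1\big)^{1/2}\Big],\\[2pt]
  \phi(r) &= r\sqrt{\tfrac{3}{2}\mu_a(\mu_a+\mu_s')}\,
      \big(\sqrt{1+x^{2}}-1\big)^{1/2} + \text{const},
  \qquad x \equiv \frac{\omega}{v\mu_a}.
\end{align*}

In the limit $\omega \to 0$ the ac decay reduces to the dc decay and the phase vanishes, as expected.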
In most medical applications the method for noninvasive, in vivo spectroscopy measurements is to place both the light source and the detector upon the surface to be examined. It is evident that the infinite-medium scheme is not appropriate for such a geometry. A better approach is to consider a uniform semi-infinite medium and to solve the diffusion equation [Eq. (7)] with the appropriate boundary conditions. Before proceeding, we note that the problem itself is beyond the limits of the diffusion approximation: both source and detector are placed on the boundary, where, as discussed, the diffusion equation does not approximate the transport equation as well as it does deep in the medium. However, it is still a reasonable starting point to treat the problem [12], even if the final results must be critically analyzed to verify the extent of their acceptability. The validity of the diffusion approximation can be quantified by evaluating the ratio between the isotropic and the directional photon flux. This ratio should be much greater than 1, as is required by relation (2). In the homogeneous infinite medium, where the diffusion approximation yields accurate results, for typical values of the physical parameters of tissue in the near infrared ($\mu_a = 0.05\ \mathrm{cm^{-1}}$, $\mu_s' = 15\ \mathrm{cm^{-1}}$, $r = 3$ cm, $v = 2.26 \times 10^{10}$ cm/s, corresponding to an index of refraction of $n = 1.33$, and $\omega = 2\pi \times 120$ MHz), such a ratio, the isotropy factor of Eq. (11), is about 8. The physical boundary condition required at a vacuum interface is that there be no incoming photons at the boundary [4]. Apparently at the vacuum boundary the diffusion approximation breaks down: the photon flux is nonzero only on half of the range of the solid angle, and the quasi-isotropy condition is not satisfied. On the other hand, a mismatch of the index of refraction at the interface between the strongly scattering medium and the outside nonscattering medium accounts for an inwardly directed component of the photon flux at the boundary. The boundary condition for the mismatched semi-infinite medium can be satisfied when the density of photons $U$ is equal to 0 on an extrapolated boundary at a distance $z_b = 2aD$, where $a$ is a constant related to the relative index of refraction ($n_{\mathrm{rel}}$) of the two media [22,23]. The distance $z_b$ for $n_{\mathrm{rel}} = 1.33$ (or $n_{\mathrm{rel}} = 1.4$, which is a typical value for a tissue-air interface in the red-near-infrared spectral region [24]) and for typical values of $D$ in tissues is about 0.15 cm. Furthermore, it has been shown that a light beam incident upon the surface can be well represented by a single scatter source at a depth $z_0$ equal to one effective photon mean free path [10,12], i.e., $z_0 = 1/(\mu_a + \mu_s')$. This parameter $z_0$ has a value of about 0.1 cm in tissues. We observe that this feature accounts for an effective isotropic photon source even if the photons are actually injected in a single direction. Finally, the boundary problem of setting $U = 0$ on the extrapolated boundary can be treated by introducing a negative image source of photons above the plane, one that is symmetric with respect to the actual photon source [25]. This approach enables one to take advantage of the solution that is valid for the infinite medium. In the semi-infinite-medium model, which is pictorially represented in Fig. 1, the diffusion equation [Eq. (7)] is used with $q_0 = q_a + q_i$ (where $a$ stands for the actual source and $i$ stands for the image source) to yield the solution obeying the required boundary conditions in the space $z \geq z_b$.
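The boundary construction just described can be summarized, in the notation of Fig. 1, as follows (collected here for convenience; the numerical values are the typical tissue values quoted in the text):

\begin{align*}
  z_b &= 2aD \approx 0.15\ \mathrm{cm}, & a &= a(n_{\mathrm{rel}}),\\
  z_0 &= \frac{1}{\mu_a+\mu_s'} \approx 0.1\ \mathrm{cm}, & &\text{(depth of the effective isotropic source)},\\
  U &= 0 \ \text{on the extrapolated boundary}, &
  U(\mathbf{r}) &= U_{\infty}(r_a) - U_{\infty}(r_i),
\end{align*}

where $U_\infty$ denotes the infinite-medium solution, $r_a$ is the distance from the actual source (at depth $z_0$ below the physical surface) and $r_i$ the distance from the negative image source placed symmetrically with respect to the extrapolated boundary.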
The solution, by application of the superposition principle, can immediately be written from expressions (8)-(10) [Eq. (12)], where, with the notation introduced in Fig. 1, the new coordinate $\rho$ is the projection of the source-detector distance $r_a$ onto the interface plane $z = z_b$, and the detector coordinate $z$ lies in the range $z_b \leq z \leq z_b + z_0$. Assuming that $1 \gg (z_b + z_0 \pm z)^2/\rho^2$, in Eq. (12) we carry out expansions to second order in $(z_b + z_0 \pm z)/\rho$. After the necessary calculations, using Eqs. (12), (2) and (6), we find that the dc and ac photon flux along $-z$ (in Fig. 1 the detector fiber receives photons in the inward direction $-z$) and the phase lag $\phi$ between source and detector are given by Eqs. (13)-(15).

Fig. 1. Semi-infinite-geometry model: $z_b$ is the distance between the extrapolated boundary and the surface of the medium and $z_0$ is the depth of the effective single scatter source inside the scattering medium. The strongly scattering medium extends in the space $z \geq z_b$. The detector optical fiber, which is parallel to the $z$ axis, is immersed in the scattering medium at a depth $z$ ranging from $z_b$ to $z_b + z_0$. The distance between the effective (image) photon source and the tip of the optical fiber is $r_a$ ($r_i$). The projection of the source-detector distance onto the plane $z = z_b$ is $\rho$.
In tissues, conditions (16) and (17) are better satisfied than condition (18). For the previously mentioned tissue optical properties in the near infrared, the quantities on the left-hand sides of inequalities (16) and (17) are about 0.001, and that on the left-hand side of inequality (18) is about 0.01.
We have compared our result for the frequency-domain quantities with the expression for the time-domain reflectance in the half-space geometry obtained by Patterson et al. [12]. Since the same boundary conditions have been applied, the two solutions should be related by a Fourier transform with respect to time. We have verified the correspondence of the two results in the limiting case considered:

$$\left|\int_0^{\infty} J(\rho,t)\,e^{i\omega t}\,dt\right| = \left|J(\rho,\omega)\right|. \qquad (19)$$

To verify experimentally the solutions found for the semi-infinite geometry, and to use the measurement protocol described in a previous paper [19], we rewrite Eqs. (13)-(15) to obtain quantities that show a linear dependence on $\rho$ [Eqs. (20)-(22)], where the $\rho$-dependent functions $F_{\mathrm{dc}}$, $F_{\mathrm{ac}}$, and $F_{\phi}$ and the $\rho$-independent functions $G_{\mathrm{dc}}$ and $G_{\mathrm{ac}}$ are defined by Eqs. (13)-(15). The determination of the slopes of the straight lines allows one to recover the values of $\mu_a$ and $\mu_s'$ of the medium. That the arguments of the logarithms are not dimensionless does not present a problem as far as the slopes of the straight lines are concerned; the particular choice of units introduces a constant, which does not affect the slopes. We also observe that the particular values of the parameters of the model (namely, $z_b$, $z_0$, and $z$) have no influence on the slopes of the lines ($z$ has no effect on their intercept either). This property is important, because the parameters $z_b$ and $z_0$ depend on the optical properties of the medium, namely, on $\mu_a$, $\mu_s'$ and $n$, and the positioning of the tip of the detector optical fiber (which is related to the parameter $z$) is in practice not exactly reproducible.
We conclude this theoretical section by observing that the isotropy factor defined by Eq. (11), for the same values of the parameters considered in Eq. (11), has a minimum value of about 2, which is marginally acceptable compared with the value of 8 in the infinite geometry.
This result indicates that the isotropic term is larger than the directional flux but not much larger, as required by the diffusion approximation. We evaluate the level of accuracy of the semi-infinite-medium expression (13) by performing a series of measurements in a macroscopically homogeneous, strongly scattering medium and by a Monte Carlo simulation.
EXPERIMENTAL METHODS
The experimental arrangement, shown in Fig. 2, is typical for frequency-domain spectroscopy. The light source, a diode laser (Sony SLD104AU) emitting at a wavelength of 780 nm, is intensity modulated at a frequency of 120 MHz by being supplied with the sine-wave output of a frequency synthesizer (Marconi 2022A) by means of an rf amplifier (ENI Model 403 LA). The average emitted light power is about 3 mW. In our measurements the laser diode is directly immersed in the medium, and the detected light is collected by a bundle of optical fibers (overall diameter 3 mm) and delivered to a photomultiplier tube (PMT) (Hamamatsu R928). The gain function of the PMT is modulated at a frequency of 120.00008 MHz, which is slightly different from the modulation frequency of the light source. The small frequency difference, which is selectable, produces beating between the detected signal and the gain function of the PMT, giving rise to a signal at the difference frequency (80 Hz in our case), which is sent to a computer card. A digital acquisition technique [26] and a fast-Fourier-transform algorithm provide the phase shift relative to a reference signal ($\phi$), the average intensity (dc), and the amplitude of the intensity oscillations (ac) of the detected light. The signal used as a reference for the phase measurement is a synchronous (with the frequency synthesizer) clock generated by the computer.
The multiply scattering medium used in our measurements is an aqueous solution of Liposyn III 10% [Abbott Laboratories (IL)]. It is an intravenous fat emulsion that has previously been used as a tissuelike phantom in both steady-state [10] and time-resolved spectroscopy [27]. We studied four different Liposyn concentrations to test the theoretically derived results over a range of values of $\mu_s'$. The concentrations employed are 4.5%, 9.0%, 13.5%, and 18% by volume, which correspond to solids contents of 0.45%, 0.90%, 1.35%, and 1.80%, respectively. Consequently the transport scattering coefficient $\mu_s'$ should range from about 4 to 16 cm$^{-1}$, as we verified with measurements in the infinite medium before performing the surface experiment. The aqueous Liposyn solution acts as the host medium, diluting the absorbing substance. For the absorber we chose black India ink, which is soluble in water. We measured its absorption spectrum in a nonscattering regime at the wavelength of the diode laser. As a result, we decided to increase the concentration of absorber in steps of 0.2 mL/L, so as to increase $\mu_a$ by 0.014 cm$^{-1}$ per step. After the first 10 steps we increased the amount of absorber added between successive measurements. We performed 24 measurements in 24 different conditions of scatterer and absorber content. First we increased the transport scattering coefficient (measurements 1-4, $\mu_s'$ ranging from about 4 to 16 cm$^{-1}$); then we increased the absorption coefficient without changing the scatterer solids content (measurements 5-24, $\mu_a$ ranging from about 0.026 to 0.4 cm$^{-1}$). The solution was held in a cylindrical container (22 cm in diameter by 13 cm in height).
Our measurement protocol consists of two series of measurements for each Liposyn-black-India-ink solution. The first series is conducted in the quasi-infinite geometry (shortened to infinite geometry in what follows), in which both the light source and the detector optical fiber are deeply immersed in the medium (at a depth of about 5 cm). The second series is performed in the quasi-semi-infinite geometry (shortened to semi-infinite geometry), in which both the light source and the detector optical fiber are positioned on the surface of the medium. In each of the two series of measurements we collect data at 5-8 different source-detector separations, ranging from a minimum of 1.6 cm to a maximum of 5.4 cm. The different source-detector separations are accomplished by means of a raster scanning device (Techno XYZ positioning table), which moves the light source with respect to the fixed detector optical fiber. The uncertainty in the variations of $r$ (or $\rho$) is about 10 $\mu$m. The experimental configurations in the infinite and semi-infinite geometries are sketched in Fig. 2. We observe that the source-detector separations are measured from the emitting point of the laser diode to the center of the detector fiber bundle. The effect of the finite size of the detector fiber (3 mm in diameter) on the measured values of $\mu_a$ and $\mu_s'$ is negligible when multiple source-detector distances are employed in the data analysis. We have experimentally verified that fibers with different diameters give the same values of $\mu_a$ and $\mu_s'$. The measurement of dc, ac, and phase at several source-detector distances enables us to determine the slopes of the straight lines associated with dc ($S_{\mathrm{dc}}$), ac ($S_{\mathrm{ac}}$), and phase ($S_{\phi}$). These straight lines are given by $\ln(rU_{\mathrm{dc}})$, $\ln(rU_{\mathrm{ac}})$, and $\phi$ in the infinite geometry [20] and by Eqs. (20)-(22) in the semi-infinite geometry. In the infinite geometry the way to recover $\mu_a$ and $\mu_s'$ has been described in detail [19]. In the semi-infinite geometry we treat the problem of recovering $\mu_a$ and $\mu_s'$ from Eqs. (20)-(22) iteratively: first we neglect the terms containing $\mu_a$ and $\mu_s'$ on the left-hand sides of the equations and obtain the slopes $S_{\mathrm{dc}}^{(0)}$, $S_{\mathrm{ac}}^{(0)}$, and $S_{\phi}^{(0)}$, from which we determine $\mu_a^{(0)}$ and $\mu_s'^{(0)}$. Then we use these values to obtain $S_{\mathrm{dc}}^{(1)}$, $S_{\mathrm{ac}}^{(1)}$, and $S_{\phi}^{(1)}$ and hence $\mu_a^{(1)}$ and $\mu_s'^{(1)}$, and we continue applying this procedure recursively until $\mu_a^{(i)}$ and $\mu_s'^{(i)}$ reproduce themselves within a given uncertainty of 0.1%. The convergence is reached after a few iterations.
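The structure of this iterative recovery can be outlined as follows. The closed-form inversion shown is the standard infinite-geometry one (dc slope squared equal to $3\mu_a(\mu_a+\mu_s')$, combined with the phase slope); the semi-infinite Eqs. (20)-(22) add $\rho$-dependent correction terms that are represented here only by a placeholder stub, so the sketch illustrates the shape of the loop rather than the paper's exact equations.

```python
# Illustrative sketch of the iterative inversion described above.
# The correction step is a stub; names and structure are assumptions.

import math

def invert_slopes(s_dc, s_phi, omega, v):
    """Recover (mu_a, mu_s') from dc and phase slopes (infinite-geometry forms)."""
    mu_a = omega * s_dc**2 / (2.0 * v * abs(s_phi) * math.hypot(s_dc, s_phi))
    mu_s = s_dc**2 / (3.0 * mu_a) - mu_a
    return mu_a, mu_s

def corrected_slopes(raw_dc, raw_phi, mu_a, mu_s):
    """Placeholder for re-evaluating the left-hand sides of Eqs. (20)-(22)
    with the current (mu_a, mu_s') estimate; returns updated slopes."""
    return raw_dc, raw_phi          # identity stub: no correction applied here

def iterate(raw_dc, raw_phi, omega, v, tol=1e-3):
    # step 0: neglect the (mu_a, mu_s')-dependent terms entirely
    mu_a, mu_s = invert_slopes(raw_dc, raw_phi, omega, v)
    while True:
        s_dc, s_phi = corrected_slopes(raw_dc, raw_phi, mu_a, mu_s)
        new_a, new_s = invert_slopes(s_dc, s_phi, omega, v)
        if abs(new_a - mu_a) / mu_a < tol and abs(new_s - mu_s) / mu_s < tol:
            return new_a, new_s
        mu_a, mu_s = new_a, new_s

# Example with the tissue-like values quoted in the Theory section
omega = 2 * math.pi * 120e6          # rad/s
v = 2.26e10                          # cm/s (n = 1.33)
print(iterate(-1.503, 0.478, omega, v))   # ~ (0.05 cm^-1, 15 cm^-1)
```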
EXPERIMENTAL RESULTS
On the basis of the discussion conducted in an earlier paper [19], we have recovered $\mu_a$ and $\mu_s'$ from the data pairs of dc and phase and of ac and phase. In what follows we present only the results obtained from dc and phase data, but we note that similar results are obtained from ac and phase data.
Infinite Geometry
The values of $\mu_s'$ and $\mu_a$ measured in the infinite geometry are plotted in Fig. 3 as a function of Liposyn and black-ink concentrations. In Fig. 3(a1), $\mu_s'$ shows a linear dependence on the scatterer solids content, in agreement with linear transport theory [3]. By contrast, $\mu_s'$ is essentially insensitive to the increase in the black-ink concentration [Fig. 3(a2)]. In the absence of black ink, the measured value of $\mu_a$ for diluted Liposyn (0.026 ± 0.001 cm$^{-1}$) is essentially due to water. In fact, the reported value [28] of $\mu_a$ for water at 780 nm, which is 0.023 cm$^{-1}$, is in good agreement with our measurement. The linear dependence of $\mu_a$ on black-ink concentration [Fig. 3(b2)] is also in agreement both with the theory ($\mu_a = \varepsilon[c]$, where $\varepsilon$ is the extinction coefficient and $[c]$ is the chromophore concentration) and with other experiments [17-19]. The slope of the straight line calculated with the points relative to ink concentrations smaller than 2 mL/L [(64.5 ± 0.4) × 10$^{-3}$ cm$^{-1}$ mL$^{-1}$ L], for which the diffusion theory provides an excellent approximation to the transport theory, is very close to the slope obtained spectrophotometrically in a nonscattering regime [(66.1 ± 0.3) × 10$^{-3}$ cm$^{-1}$ mL$^{-1}$ L]. The measured values of $\mu_a$ relative to ink concentrations greater than 2 mL/L deviate from the values measured on the spectrophotometer by less than 6%. On the basis of these observations we assume that the infinite-geometry measurements provide accurate results for the optical parameters of the medium. We therefore use these results as reference values for the semi-infinite-geometry measurements.
Semi-infinite Geometry
We have analyzed the surface data in two ways: (i) considering Eqs. (20)-(22), thereby taking into account the appropriate boundary conditions, and (ii) using the infinite-geometry equations (8)-(10). In these ways we quantify the corrections yielded by the application of the proper boundary conditions with respect to the infinite-geometry results. The results for $\mu_s'$ and $\mu_a$ in the 24 media variations examined are shown in Fig. 4, where they may be compared with the results of the infinite-geometry measurements. We have also compared the values of the slopes related to dc ($S_{\mathrm{dc}}$), ac ($S_{\mathrm{ac}}$), and phase ($S_{\phi}$) in the three cases considered (referred to as the infinite geometry, the semi-infinite geometry with boundary conditions, and the semi-infinite geometry without boundary conditions). This comparison, plotted in Fig. 5, provides information on the behavior of the frequency-domain parameters, namely, on their deviation from the accurate infinite-model predictions.
The sensitivity of the semi-infinite-geometry results to the positioning of the source and the detector relative to the surface plane can be evaluated by comparison of the data presented in Table 1. We measured the values of $\mu_a$ and $\mu_s'$ for slightly different positions of the laser diode and the tip of the detector optical fiber. That is, assigning to the medium surface the coordinate $z = 0$, we examined two positions relative to the boundary plane, i.e., a surface position ($z = 0$) and 1 mm into the medium ($z = 1$ mm). We then obtained four possible configurations for the source-detector system, that is, (0, 0), (1, 0), (0, 1), and (1, 1), where the first coordinate is relative to the source and the second is relative to the detector. Table 1 shows the results obtained for $\mu_a$ and $\mu_s'$ in the solution considered.
The results of the Monte Carlo simulation for a modulation frequency of 120 MHz (to match the experimental modulation-frequency condition) are shown in Fig. 6 and Table 2. In Fig. 6 we show a comparison of the straight lines associated with dc, ac, and phase in the case of the infinite geometry, the semi-infinite geometry with boundary conditions, and the semi-infinite geometry without boundary conditions. In Table 2 we list the values obtained for $\mu_a$ and $\mu_s'$ in the three cases.
Infinite-Geometry Results
The infinite-geometry results shown in Fig. 3 have been used as a framework to provide the correct values of the optical parameters in the multiply scattering medium. Several arguments have been presented above to justify this designation:
(i) the linear dependence of $\mu_s'$ on Liposyn solids content;
(ii) the independence of $\mu_s'$ from black-India-ink concentration;
(iii) the independence of $\mu_a$ from Liposyn solids content;
(iv) the linear dependence, quantitatively similar to the one obtained spectrophotometrically, of $\mu_a$ on black-India-ink concentration.
Whereas conditions (i) and (iii) are certainly well satisfied, conditions (ii) and (iv) hold rigorously only for black-India-ink concentrations smaller than about 3 mL/L. However, the deviations of the measured values of $\mu_s'$ and $\mu_a$ at the maximum ink concentration examined (6.1 mL/L) from the values that would satisfy conditions (ii) and (iv) are small (about 6%). We neglected these deviations in comparing the semi-infinite-geometry results. From a general standpoint these deviations are a sign of the shortcomings of the diffusion approximation for higher absorption coefficients. As discussed in Section 2, the diffusion approximation requires $\mu_s'/\mu_a$ to be much greater than 1.
The results of our measurements permit us to quantify this requirement: the values of the optical parameters of our medium are consistent with Mie theory and with spectrophotometric measurements when $\mu_s'/\mu_a > 80$, and they deviate by about 6% for $\mu_s'/\mu_a \approx 40$.
Semi-infinite-Geometry Results
The method used to recover the values of $\mu_a$ and $\mu_s'$ from the measured data is based on the determination of the slopes of the straight lines associated with dc, ac, and phase. In the semi-infinite geometry this method presents the advantage of being insensitive to the values of the distance parameters of the model, $z_b$, $z_0$, and $z$. This topic was discussed in Section 2 on the basis of the derived expressions for the dc, ac, and phase slopes. The results presented in Table 1 experimentally confirm the theoretical predictions relative to the parameter $z$. Therefore the combined theoretical and experimental results show that the relative index of refraction (influencing $z_b$), the value of the photon mean free path in the multiply scattering medium (related to $z_0$), and the exact positioning of the detector fiber tip (related to $z$) do not affect the recovered slopes. The straight-line parameters considered in this paper are obtained by least-squares fits. In all cases considered the linear fits are very good; the correlation coefficients typically exceed 0.999.
The comparison of the measured values of $\mu_a$ and $\mu_s'$ in the three cases considered (infinite geometry, semi-infinite geometry with boundary conditions, and semi-infinite geometry without boundary conditions) is presented in Fig. 4. A more quantitative comparison is made by analysis of the deviations of the semi-infinite-geometry results from the infinite-geometry results. These deviations are shown in Fig. 7. With the proper boundary conditions the semi-infinite measurements yield values of $\mu_a$ that differ by less than 4% from the values determined with the infinite geometry. The deviations relative to $\mu_s'$ are larger, ranging from about 5% to 15%, but the required independence of $\mu_s'$ from absorber concentration is retained. On the other hand, the analysis of the semi-infinite-measurement data with the infinite-geometry equations yields poor results for both $\mu_a$ and $\mu_s'$: $\mu_a$ typically deviates by 15% from the accurate values, whereas $\mu_s'$ shows a dependence on the absorber concentration. Obviously the use of the infinite-geometry model for analyzing the semi-infinite-geometry data is not expected to yield good results. Nevertheless the comparison presented in Figs. 4 and 5 allows us to evaluate quantitatively the correction that is due to the semi-infinite-geometry model. From Fig. 5 one can see that for absorber concentrations smaller than 3 mL/L, which correspond to $\mu_s'/\mu_a > 80$, the use of the semi-infinite-geometry boundary conditions gives rise to corrections in the right direction: the dc, ac, and phase slopes are systematically closer to the correct values. For ink concentrations higher than 3 mL/L ($\mu_s'/\mu_a < 80$) the corrections are less precise, especially in the case of the ac slopes.

Fig. 6. Straight lines associated with (a) dc, (b) ac, and (c) phase as a function of $r$ (infinite geometry) or $\rho$ (semi-infinite geometry), obtained from the Monte Carlo simulation. The different symbols refer to the three conditions examined (dc* and ac* refer to values relative to the maximum source-detector distance and $\phi$* refers to a value relative to the minimum source-detector distance). Circles: infinite-medium simulation, infinite-geometry equations, dc* = ln($rU_{\mathrm{dc}}$), ac* = ln($rU_{\mathrm{ac}}$), $\phi$* = $\phi$. Squares: semi-infinite-medium simulation, semi-infinite-geometry equations, dc*, ac*, and $\phi$* given by the left-hand sides of Eqs. (20)-(22). Triangles: semi-infinite-medium simulation, infinite-geometry equations, dc* = ln($\rho U_{\mathrm{dc}}$).
The Monte Carlo simulation provides an independent test of the semi-infinite-geometry boundary problem. The results presented in Fig. 6 and Table 2 are similar to the ones obtained experimentally. Use of the semi-infinite-geometry boundary conditions yields better accuracy for $\mu_a$ than for $\mu_s'$. The corrections provided by the boundary conditions are particularly evident and effective in the evaluation of $\mu_a$. The slopes of the straight lines associated with dc, ac, and phase are closer to the accurate ones when the boundary conditions are applied.
CONCLUSIONS
In this paper a systematic study of the applicability of the diffusion approximation to the semi-infinite-geometry boundary problem has been presented. The principal result is that in a macroscopically homogeneous, multiply scattering medium reasonably good estimates of the optical parameters are obtained from the diffusion theory, provided that the appropriate boundary conditions are applied. The fact, also shown in this paper, that the measurements are quite insensitive to the precise geometrical configuration at the surface, namely, the positions of the source and the detector relative to the surface plane of the medium, suggests that slightly different boundary geometries could be equally well represented.
"Mathematics"
] |
Prevention and Detection Methods for Enhancing Security in an RFID System
Low-cost radio frequency identification (RFID) tags are exposed to various security and privacy threats due to their computational constraints. This paper proposes the use of both prevention and detection techniques to solve the security and privacy issues. A mutual authentication protocol integrating the tag's unique electronic fingerprint is proposed to enhance the security level of RFID communication. A lightweight cryptographic algorithm that conforms to the EPCglobal Class-1 Generation-2 standard is proposed to prevent replay attack, denial of service, and data leakage. The security of the protocol is validated by using the formal analysis tool AVISPA. The received power of the tag is used as a unique electronic fingerprint to detect cloned tags. A t-test algorithm is used to analyze the received power of the tag at a single frequency band to distinguish between legitimate and counterfeit tags. False acceptance rate (FAR), false rejection rate (FRR), the receiver operating characteristic (ROC) curve, and equal error rate (EER) were used to justify the robustness of the t-test in detecting counterfeit tags. The received power of the tag at a single frequency band, analyzed by using the t-test, was shown to detect counterfeit tags efficiently, as the area under the ROC curve obtained is high (0.922).
Introduction
Radio frequency identification (RFID) tags that conform to EPC Class-1 Generation-2 (Gen 2) standards are broadly used in supply chain management, logistics, person identification, and access control. The global RFID market is expected to grow at a compound annual growth rate (CAGR) of roughly 17% to a value of approximately $9.7 billion in the period 2011-2013. However, the privacy and security of RFID technology are not guaranteed. The issues that raise security concerns are the possibility of tag cloning, denial of service (DoS) attacks, replay attacks, and data leakage.
Gen 2 tags are susceptible to cloning attacks due to the lack of explicit authentication and security functionalities. Complex cryptographic algorithms, including hash functions and symmetric and asymmetric algorithms, are not supported by Gen 2 tags [1][2][3], because Gen 2 tags have low computational capabilities and are only able to support simple mathematical functions. Hence, strong adversaries are capable of skimming on transmission channels to obtain tag information [4]. This information may be used to create counterfeit tags that bear the same information as a legitimate tag. Counterfeit tags can be attached to bogus products to disguise them as authentic products in the market. The counterfeit tag issue is very serious because it can cause harm ranging from public privacy and safety issues to loss of industry revenue.
Lightweight cryptographic algorithms (i.e., CRC, PRNG, and XOR functions) can be used to prevent the data leakage problem in Gen 2 tags. In addition, the received power of a tag can be used as the tag's unique electronic fingerprint to detect counterfeit tags. Detection techniques are deployed to minimize the negative effects of tag cloning threats [5]. Counterfeit tags can be detected by employing an electronic fingerprinting system in an RFID system, since each RFID tag is unique in its radio frequency behavior owing to manufacturing differences. The received power of a tag at a single frequency band is analyzed by using a t-test to distinguish between legitimate and counterfeit tags. Hence, the combination of prevention and detection methods can serve as a countermeasure to the privacy and security issues faced by Gen 2 tags.
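To make the detection side concrete, the following sketch (synthetic received-power samples and an arbitrary p-value threshold, not the paper's measured data) shows how a t-test comparison of received power and the FAR/FRR/ROC bookkeeping discussed later could be wired together:

```python
# Sketch of t-test-based counterfeit detection on received-power samples.
# Data, offsets, and the threshold are illustrative assumptions.

import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
enrolled = rng.normal(-62.0, 0.4, size=50)            # dBm, reference fingerprint

def p_value(test_samples):
    """Welch's t-test: high p-value -> same tag, low p-value -> likely counterfeit."""
    _, p = stats.ttest_ind(enrolled, test_samples, equal_var=False)
    return p

# Simulated trials: genuine tag re-reads vs. counterfeit tags with a small offset
genuine = [p_value(rng.normal(-62.0, 0.4, 50)) for _ in range(200)]
fake    = [p_value(rng.normal(-61.3, 0.4, 50)) for _ in range(200)]

labels = [1] * len(genuine) + [0] * len(fake)          # 1 = legitimate
scores = genuine + fake
print("AUC:", roc_auc_score(labels, scores))           # close to 1 for separable tags

threshold = 0.05                                       # accept if p >= threshold
frr = np.mean(np.array(genuine) < threshold)           # legitimate tags rejected
far = np.mean(np.array(fake) >= threshold)             # counterfeit tags accepted
print("FRR:", frr, "FAR:", far)
```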
The remainder of this paper is structured as follows: Section 2 describes related work, and Section 3 gives an overview of the proposed lightweight cryptographic mutual authentication protocol. Section 4 outlines the experimental setup and data collection for the fingerprint-matching method. Section 5 explains the t-test algorithm in detail, and Section 6 analyzes the accuracy and performance of the fingerprint-matching method. Section 7 presents the overall security analysis, and Section 8 concludes the paper.
Related Works
In Chien and Chen [2], PRNG, CRC, and XOR are used as the building blocks of the protocol. Two sets of authentication and access keys are designed to defend against DoS attacks. However, the scheme is vulnerable to replay attacks and information leakage. Chien and Huang [6] presented a lightweight mutual authentication protocol to solve the replay attack and secret disclosure problems of the scheme of Li et al. [7], but the cloning attack problem is not resolved. Song and Mitchell [8] proposed an authentication protocol that uses a challenge-response approach and simple functions such as right and left shifts and bitwise XOR operations. However, the scheme is vulnerable to tag impersonation and server impersonation attacks. Song [9] presented an authentication protocol for tag ownership transfer that meets new owner privacy, old owner privacy, and authorization recovery requirements. However, the ownership transfer protocol is vulnerable to a desynchronization attack that prevents a legitimate reader from authenticating a legitimate tag, and vice versa. Burmester and Munilla [10] proposed a lightweight mutual authentication protocol that supports session unlinkability and forward and backward secrecy. The protocol is optimistic with constant key lookup and can easily be implemented on a Gen 2 platform; however, it is susceptible to replay and cloning attacks. Chen and Deng [11] proposed a mutual authentication protocol that reduces database loading and ensures user privacy, but it does not take cloning attacks into consideration.
In [12], minimum power responses measured at multiple frequencies are used as a unique electronic fingerprint. The power is measured from 860 MHz to 960 MHz in increments of 1 MHz. Two-way analysis of variance (two-way ANOVA) is used to test the equality of means of two groups in terms of minimum power response and different physical characteristics of tags. 10-fold cross-validation on the classifier is used to validate the result, and the AUC obtained is 0.999. The average true positive rate and false positive rate are 0.905 and 0.001, respectively. That research focused on using minimum power responses at multiple frequencies as a unique electronic fingerprint for RFID tags; this paper extends the idea by showing that the received power of a tag at a single frequency band can be used to fingerprint RFID tags. In addition, UHF RFID tags have been shown to be uniquely identifiable in a controlled environment based on signal spectral features, with 0% EER. That physical-layer identification method is complex, and the reader used in the experiment was purpose-built. In contrast, the method proposed in this paper is simple and applicable to any Gen 2 reader.
Lightweight Cryptographic Mutual Authentication Protocol
A lightweight cryptographic mutual authentication protocol that conforms to the Gen 2 standard is proposed. The proposed protocol consists of an initialization phase and an authentication phase. The channel between the back-end server and the reader is assumed to be secure, whereas the channel between the reader and the tag is assumed to be insecure. The notations used in the description of the proposed protocol are shown in Table 1.
In the initialization phase, the back-end server and the tag store the information required to perform authentication. The back-end server initially stores seven values for each tag in its database: the new index, denoted CRC(E_T ⊕ K_{i+1}); the old index, denoted CRC(E_T ⊕ K_i); the tag's electronic product code, denoted E_T; the new session key, K_{i+1}; the old session key, K_i; the new random number, Rn_{i+1}; and the old random number, Rn_i. On the other hand, three values are stored in the tag: E_T, K_i, and Rn_i. The session key of the current session is denoted K_i, and the session key after a successful session is denoted K_{i+1}. The tag's temporary key is denoted K_t, and the server's temporary key is denoted K_s. The overall protocol scheme is shown in Figure 1.
In the authentication phase, the reader sends a query command to the tag. The tag computes the encrypted message M_1; at the same time, the PRNG generates the tag's temporary key, K_t, from the seed Rn_i ⊕ K_i. The encrypted message M_1 is sent via the reader to the back-end server. The back-end server searches its database for an index, CRC(E_T ⊕ K_i), that matches the encrypted message. If a matching index is found, the encrypted message is decrypted using the session key, K_i, stored in the same row as the index. Otherwise, the server searches for a match of M_1 against the old index, CRC(E_T ⊕ K_{i-1}); if the old index matches, the old session key, K_{i-1}, is used to decrypt the message. The authentication of the message is then verified. If the decrypted message does not match the message recorded in the database for either the new or the old index, an error message is sent to the reader.
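For illustration, the following Python sketch mimics the message construction described above. It is a simplified model only: the EPC, keys, and random numbers are invented 16-bit values, binascii.crc32 (truncated to 16 bits) stands in for the CRC-16 mandated by Gen 2, Python's random module stands in for the tag's PRNG, and the exact form of M_1 is assumed rather than taken from the protocol specification.

```python
import binascii
import random

def crc(value: int) -> int:
    """CRC of a 16-bit value (crc32 truncated to 16 bits as a stand-in for the Gen 2 CRC-16)."""
    return binascii.crc32(value.to_bytes(2, "big")) & 0xFFFF

def prng(seed: int) -> int:
    """Pseudo-random 16-bit number derived from a seed (placeholder for the tag's PRNG)."""
    random.seed(seed)
    return random.getrandbits(16)

# Hypothetical secrets shared between the tag and the back-end server.
E_T  = 0x3A7C          # tag's EPC (truncated to 16 bits for illustration)
K_i  = 0x91B2          # current session key
Rn_i = 0x5E0D          # current random number

# Tag side: derive a temporary key and build the index and authentication message.
K_t   = prng(Rn_i ^ K_i)        # tag's temporary key from the seed Rn_i XOR K_i
index = crc(E_T ^ K_i)          # index the server uses to locate the tag's record
M_1   = crc(E_T ^ K_t)          # authentication message sent to the server (assumed form)

# Server side: recompute the index from its stored copy of E_T and K_i and look it up.
server_index = crc(E_T ^ K_i)
print(f"index match: {index == server_index}, M_1 = {M_1:#06x}")
```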
Experimental Setup and Fingerprint Data Collection
The proposed RFID tag fingerprint-matching method, illustrated in Figure 2, consists of an initial phase and a detection phase. In the initial phase, the received power of each EPC tag is calculated using the Friis transmission equation. The reader transmitted power used in the equation is measured with a spectrum analyzer, and the received power is computed once the power is held constant. Each tag's received power is stored in the database. In the detection phase, the stored fingerprint and the measured fingerprint are compared using the t-test algorithm.
The tag being measured is accepted as legitimate if the P value of the t-test is greater than 0.05; otherwise, the tag is deemed counterfeit. The received power of the tag is calculated from the reader's transmitted power, which is measured at 919-923 MHz. This frequency band follows the Malaysian UHF RFID standard governed by the Malaysian Communications and Multimedia Commission (MCMC) [14]; however, the measurement is equally applicable to the RFID frequency bands of other countries. The measurement is performed for 100 passive RFID tags at fixed temperature in a controlled environment. The legitimate tag fingerprint template is obtained by averaging 50 received-power readings per tag. The received power, which acts as a unique fingerprint for each tag, is measured in dBm. The received power is stored only in the database in order to protect the secrecy of the fingerprint value from adversaries. The stored fingerprint value can be looked up by EPC, so the stored fingerprint and the fingerprint obtained from the experimental measurement can be compared to verify the genuineness of the tag. Figure 3 shows the platform for measuring the reader transmitted power. The setup consists of a passive RFID reader and antenna, a passive EPC tag, and a spectrum analyzer. The reader operates at UHF 919-923 MHz and supports the Gen 2 protocol. The antenna and tag are placed at fixed positions to obtain accurate and reliable results. To determine the precise reader transmitted power, cable loss and power loss within the power splitter must be considered; hence, the power value obtained from the spectrum analyzer is added to the total measured power loss. Figure 4 shows a measurement of reader transmitted power using the spectrum analyzer.
The tag received power is calculated using the Friis transmission equation, P_r = P_t G_t G_r (λ / (4πR))², where P_r is the power received by the tag antenna and P_t is the power input to the reader antenna. In addition, G_t is the gain of the reader antenna, G_r is the gain of the tag antenna, λ is the wavelength, and R is the distance between the reader and tag antennas. The Friis transmission equation is only applicable in the Fraunhofer region. Hence, the minimum Fraunhofer distance is determined from r_ff = 2D²/λ, where r_ff is the minimum far-field distance, D is the diameter of the transmitting antenna, and λ is the wavelength. The diameter of the transmitting antenna is 0.185 m, and the wavelength is 0.33 m because the chosen frequency is 919.73 MHz. Hence, the minimum far-field distance is 0.21 m, and the tag should be placed at a distance greater than 0.21 m so that it is in the Fraunhofer region. In this setup, the distance between the tag and the reader antenna is 0.3 m, which satisfies the Fraunhofer-region condition. The parameters used in the measurement are shown in Table 2.
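A short Python sketch of these two calculations is given below; the reader transmitted power and the antenna gains are placeholder values, since the actual figures are those listed in Table 2.

```python
import math

def friis_received_power_dbm(p_t_dbm, g_t_dbi, g_r_dbi, freq_hz, distance_m):
    """Received power (dBm) from the Friis transmission equation
    P_r = P_t * G_t * G_r * (lambda / (4*pi*R))^2, expressed here in dB form."""
    wavelength = 3e8 / freq_hz
    path_loss_db = 20 * math.log10(4 * math.pi * distance_m / wavelength)
    return p_t_dbm + g_t_dbi + g_r_dbi - path_loss_db

def min_far_field_distance(antenna_diameter_m, freq_hz):
    """Fraunhofer (far-field) boundary r_ff = 2 D^2 / lambda."""
    wavelength = 3e8 / freq_hz
    return 2 * antenna_diameter_m ** 2 / wavelength

freq = 919.73e6                   # chosen frequency within the 919-923 MHz band
print(f"far field starts at {min_far_field_distance(0.185, freq):.2f} m")   # ~0.21 m

# Placeholder link budget: 20 dBm reader power, 6 dBi reader antenna, 2 dBi tag antenna.
print(f"P_r = {friis_received_power_dbm(20, 6, 2, freq, 0.3):.1f} dBm at 0.3 m")
```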
t-test Algorithm
Cloned tags may be detected by comparing the extracted received power with the stored fingerprint using the t-test algorithm, t = (X̄_1 − X̄_2) / sqrt(S_p² (1/N_1 + 1/N_2)), where X̄_1 and X̄_2 are the means of the legitimate and suspicious tag groups, N_1 and N_2 are the numbers of samples in the legitimate and suspicious groups, respectively, and S_p² is the pooled variance. The t-test is a statistical test used to identify differences in the means and variances of two populations, here the legitimate-tag and suspicious-tag populations. A significance level of 0.05 is chosen for the t-test in order to bound the probability of a false rejection. The tag under test is considered counterfeit if the P value obtained from the t-test is less than the significance level of 0.05, because in that case the matching probability between the stored fingerprint and the measured fingerprint is low.
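The decision rule can be sketched as follows in Python, assuming the stored fingerprint is kept as a set of received-power readings (in dBm) rather than a single averaged value; the sample readings are invented for illustration.

```python
import numpy as np
from scipy import stats

ALPHA = 0.05  # significance level used in the paper

def is_legitimate(stored_readings, measured_readings, alpha=ALPHA):
    """Two-sample t-test (pooled variance) between stored and measured received power.
    The tag is accepted as legitimate when the P value exceeds the significance level."""
    t_stat, p_value = stats.ttest_ind(stored_readings, measured_readings, equal_var=True)
    return p_value > alpha, p_value

# Invented example: stored fingerprint of Tag A versus readings from a suspicious tag.
stored_tag_a = np.random.normal(loc=-52.3, scale=0.15, size=50)   # dBm
same_tag     = np.random.normal(loc=-52.3, scale=0.15, size=50)   # same tag -> should pass
counterfeit  = np.random.normal(loc=-54.1, scale=0.15, size=50)   # different tag -> should fail

print(is_legitimate(stored_tag_a, same_tag))
print(is_legitimate(stored_tag_a, counterfeit))
```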
When a tag is suspected of being counterfeit, a comparison of the stored and measured tag fingerprints is conducted. In Case 1, a suspicious tag claims to be Tag A according to the stored fingerprint. As shown in Table 3, the P value obtained from the t-test between Tag A and the suspicious tag is higher than 0.05, which indicates that there is no significant difference between the suspicious tag and Tag A. Hence, the suspicious tag is legitimate; the higher the P value, the more likely the two groups match. Otherwise, the tag is deemed counterfeit. In Case 2, a suspicious tag claims to be Tag B. A t-test is conducted between the suspicious tag and Tag B, and the P value obtained in Table 4 is less than 0.05. Hence, the suspicious tag in Case 2 is proved to be counterfeit.
Fingerprint-Matching Performance Analysis
The accuracy of the proposed fingerprint-matching method in distinguishing between legitimate and counterfeit tags, as in Case 2, is analyzed using FAR, FRR, the ROC curve, and the EER. A 2 × 2 contingency table is used to classify the four possible outcomes for the data obtained from Case 2. The outcome is a true acceptance (TA) when the measured fingerprint is verified as genuine and the tag identity is found in the database. When the measured fingerprint is verified as genuine but the tag identity is not found in the database, the outcome is a false acceptance (FA). Conversely, a true rejection (TR) is obtained when the measured fingerprint is verified as bogus and the tag identity is not found in the database. A false rejection (FR) is obtained when the measured fingerprint is verified as bogus but the tag identity is found in the database. Table 5 lists the four outcomes obtained from the fingerprint-matching method for Case 2.
The false acceptance rate (FAR) is the probability that the fingerprint-matching method falsely verifies different tags as identical, and the false rejection rate (FRR) is the probability that it falsely verifies identical tags as different. FAR and FRR are calculated as FAR = FA / (FA + TR) and FRR = FR / (FR + TA), respectively [15]; the FAR and FRR for Case 2 are shown in Table 6. The ROC curve and the EER are used to evaluate the performance of the t-test algorithm in verifying the measured fingerprint against the stored fingerprint. The ROC curve in Figure 5 plots the true acceptance rate (TAR) against the false acceptance rate (FAR). The EER is the rate at which FAR and FRR are equal. Based on the ROC curve, the EER for Case 2 is 0.16, which is a low value; the lower the EER, the more accurate the fingerprint-matching method.
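The following sketch shows how FAR, FRR, the ROC curve, and the EER can be computed from matching scores; the scores are synthetic, and the score convention (higher means a better match) is an assumption.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def far_frr(ta, fa, fr, tr):
    """FAR = FA / (FA + TR); FRR = FR / (FR + TA), the standard biometric definitions
    assumed here for Eqs. (4) and (5)."""
    return fa / (fa + tr), fr / (fr + ta)

# Invented matching scores: higher score means a better fingerprint match.
genuine_scores  = np.random.normal(0.8, 0.1, 50)   # measured tag really is the claimed tag
impostor_scores = np.random.normal(0.5, 0.1, 50)   # measured tag is a counterfeit

y_true  = np.r_[np.ones_like(genuine_scores), np.zeros_like(impostor_scores)]
y_score = np.r_[genuine_scores, impostor_scores]

fpr, tpr, thresholds = roc_curve(y_true, y_score)
frr = 1 - tpr
eer_index = np.argmin(np.abs(fpr - frr))           # operating point where FAR ~= FRR
print(f"AUC = {auc(fpr, tpr):.3f}, EER ~= {fpr[eer_index]:.3f}")
```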
The area under the ROC curve (AUC) measures the performance of the t-test algorithm in distinguishing between two fingerprint data sets. The accuracy of the t-test algorithm is assessed using a rough guide for classifying the accuracy of a test, shown in Table 7 [16,17].
The AUC for Case 2 obtained from the SPSS statistical analysis is 0.922, as shown in Table 8, which is considered excellent performance according to the accuracy guide. This demonstrates that the t-test algorithm distinguishes the fingerprints of two tags with high accuracy.
Security Analysis
The security of the proposed protocol, written in HLPSL, is validated using the AVISPA tool. Under the Dolev-Yao model, the intruder has full control over the network [18]: the intruder may intercept and analyze transmitted messages as well as impersonate any of the agents (tag, reader, or server) to send modified messages to the others. Data secrecy and mutual authentication are the security goals to be achieved in the AVISPA tool. E_T as well as the session keys K_i and K_s are kept secret on the transmission channel; an attack is considered to have occurred if the intruder is able to obtain any secret value. In addition, the tag and the back-end server may reveal their identity information only to authorized parties. The back-end server must ensure that the current session's message, M_1, was computed by a legitimate tag; this prevents a replay attack in which the intruder sends a previous session's message to the legitimate reader. The same applies to M_2: the tag must verify that M_2 is a legitimate message sent by the legitimate reader. Figure 6 shows that the OFMC and CL-AtSe back-ends found no man-in-the-middle attacks and that the stated security goals were satisfied for a bounded number of sessions, as specified in the environment role. Strong authentication between the tag and the back-end system is established, and the secrecy of the EPC and session keys is protected from eavesdropping. The analysis of the proposed protocol with SATMC and TA4SP is inconclusive because these back-ends only support protocols that are free of algebraic equations. Replay attacks are prevented in the proposed protocol because the values transmitted in each session are different. The proposed protocol is a challenge-response protocol: in each session, M_1 and M_2 are enciphered using the corresponding session keys, K_i and K_s, by the tag and the server, respectively. These session keys are synchronously updated by both tag and server during mutual authentication; hence an attacker cannot use the session keys K_i and K_s of a particular session to decipher encrypted messages from any of the following sessions. DoS attacks are defended against by using the updated session key. The legitimate tag is identified by verifying the encrypted message against the message recorded in the database; conversely, the tag authenticates the reader by comparing the decrypted message with the message recorded in the tag. The new and old indexes, session keys, and random numbers stored in the back-end server are used to prevent desynchronization, which occurs when the variables stored in the tag differ from those stored in the database; in that case, the server can use the old variables to resynchronize with the tag.
The secrecy of the tag's information is safe from eavesdropping attacks. E_T is enciphered with the session key, and the session key is updated after each complete session. In addition, the tag is hard to compromise because M_1 and M_2 are enciphered using different keys. If M_1 and M_2 are eavesdropped between the legitimate tag and the reader, the attacker cannot obtain any secret information: the attacker only sees enciphered values and cannot recover the original key values.
The proposed protocol prevents tag cloning by using the fingerprint information stored in the database to detect counterfeit tags. Each tag has its own unique received-power value. Even if adversaries are able to copy all the data from a tag, they are unable to create a counterfeit tag with exactly the same physical features as the original tag. Thus, a counterfeit tag is revealed whenever the fingerprint of the detected tag does not match the fingerprint information stored in the database. The proposed method analyzes one factor only, the received power of a tag at a single frequency, whereas two factors, namely minimum power responses at multiple frequencies and physical characteristics of tags, are tested using ANOVA in [12]. The accuracy of the proposed method and that of [12] are both excellent, with values of 0.922 and 0.999, respectively. The proposed method is simpler yet produces accuracy comparable to the method of [12], which analyzes two factors to detect cloned tags.
Table 9 compares the proposed scheme with related security schemes in terms of replay attack, DoS attack, cloning attack, forward secrecy, and Gen 2 standards compliance. The proposed lightweight cryptographic mutual authentication protocol is shown to provide more security protection than the existing schemes.
Conclusions
This paper proposed the use of both prevention and detection methods to enhance the security level of an RFID system. A lightweight cryptographic mutual authentication protocol built from lightweight cryptographic primitives, namely XOR, CRC, and PRNG functions, is used as the prevention method. The security of the proposed protocol is validated using the AVISPA tool and is proved safe from replay attacks, denial of service threats, and data leakage.
In addition, a tag fingerprint extraction and matching method is presented as the detection method for counterfeit tags. Each tag's received power is measured, calculated, and stored in the database for later reference. The received power of a tag can be used as a unique fingerprint because it differs significantly between tags in the 919-923 MHz frequency range. The t-test algorithm is used to determine the identity of the measured tag: the tag is deemed counterfeit if the P value of the t-test is less than 0.05. The accuracy of the fingerprint-matching method was tested, achieving 4% FAR and 0% FRR. In addition, fingerprint matching is shown to be an excellent method, as the area under the ROC curve is 0.922 and the EER is 0.16. Hence, the t-test algorithm is able to protect an RFID communication system from tag cloning attacks by efficiently distinguishing between legitimate and counterfeit tags.
Figure 3: Measurement of received power of tag platform.
Figure 4: Reader transmitted power measured with spectrum analyzer.
Figure 5: Receiver operating curve with equal error rate.
Table 1: Notations used in the protocol.
On the other hand, if the server successfully authenticates the tag, a server's temporary key, K_s, is generated; if M_1 was decrypted with the old index, then K_s is generated using K_{i-1} ⊕ Rn_{i-1} as the seed. The back-end server then computes M_2 = CRC(E_T ⊕ K_s). A new session key, K_{i+1}, is generated, and CRC(E_T ⊕ K_{i+1}) is computed and stored as the new index in the database. In addition, a new random number, Rn_{i+1}, is generated and concatenated with M_2. The new session key and random number are stored in the row indicated by the new index. Afterwards, the back-end server forwards M_2 concatenated with Rn_{i+1} to the tag through the reader. The tag computes M_t = CRC(E_T ⊕ K_t) and authenticates the reader by comparing M_2 and M_t. If the two messages match, the tag updates the session key to K_{i+1} = PRNG(K_t); otherwise, the current session key, K_i, is retained. The tag stores the received Rn_{i+1} in the user memory bank for use in the next session.
Table 2: Parameters used in the measurement.
Table 3: t-test for Tag A and suspicious tag.
Table 4: t-test for Tag B and suspicious tag.
Table 5: Four outcomes from the fingerprint-matching method.
Table 6: FAR and FRR for Case 2.
Table 7: Accuracy of test categorization.
Table 8: Accuracy of test categorization. | 5,506.2 | 2012-08-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Zero Distribution of System with Unknown Random Variables Case Study: Avoiding Collision Path
This paper presents a stochastic analysis for finding feasible trajectories of a robotic arm moving among obstacles. The unknown variables are the coefficients of the polynomial joint angles that yield collision-free motion; ã_k is the matrix of these unknown feasible polynomial coefficients. The pattern of feasible polynomials in an obstacle environment appears random. This paper proposes to model this randomness as a random polynomial whose coefficients are the unknown variables. The behavior of the system is then obtained from the zero distribution, which characterizes such a random polynomial. Results show that the pattern of the collision-avoiding random polynomial can be constructed from the zero distribution; the zero distribution acts as a building block of the system, with the obstacles as the uncertainty factor. Using a scale factor k̃, which lies within a certain range, the random coefficient pattern can be predicted.
Introduction
Modeling complex systems as random polynomials arises in many areas of mathematics, engineering, and physics. Pioneering work on random polynomials was conducted by Littlewood et al. [1,2]. Fifty years after the work of M. Kac [4], Shepp et al. [5] showed that when the polynomial degree N becomes large, the zeros of a random polynomial tend to concentrate near the unit circle in the complex plane. Nowadays, random polynomials are fascinating objects of study because the distribution of zeros of high-degree polynomials with random coefficients appears in quantum chaotic dynamics [7]. Applications of random polynomials in communication engineering were introduced in [8], which discusses the density of the zeros of a random polynomial with nonzero-mean correlated Gaussian coefficients.
In this paper, a sixth-degree polynomial is used as the joint-angle path of an arm robot moving in an obstacle environment. Unlike the conventional approach to collision avoidance for robotic arms, which first finds the end-effector trajectory and then converts it to joint-angle trajectories, the collision-avoiding random polynomial model finds the joint-angle trajectories directly, without searching for the end-effector trajectory.
The joint-angle matrix is composed of the joint trajectories of each link. This matrix is feasible if and only if the motion it produces is collision-free. Because of the obstacle positions, not every real-valued assignment of the joint-angle matrix yields feasible motion; the polynomial coefficient matrix must be composed of suitable real values. Machmudah et al. [9] used a genetic algorithm (GA) and particle swarm optimization (PSO) to find feasible polynomial coefficients. Although GA and PSO can find these feasible coefficients and show good performance in evolving them, the pattern of the collision-avoiding polynomial coefficients remains unknown, other than appearing random. A preliminary investigation of the difficulty of finding the pattern of the feasible joint-angle matrix was presented in [10]. This paper uses the zero distribution of random unknown variables to model the pattern of the collision-free polynomial of arm robot motion.
Polynomial joint angle of arm robot motion in the proximity of obstacles
Using a polynomial as the joint angle is essential in arm robot motion because the polynomial gives smooth and continuous trajectories. Without obstacles, finding the polynomial joint angle is a deterministic problem with no randomness involved.
When obstacles are placed in the environment, finding the polynomial joint-angle path is no longer as simple as in the obstacle-free case: every motion must satisfy the collision-free requirement, so the collision-free polynomial coefficients must be searched for. This paper uses a sixth-degree polynomial as the collision-avoiding joint-angle path, as in [9].
The sixth-degree collision-avoiding polynomial for joint k is defined as θ_k(t) = a_{0k} + a_{1k} t + a_{2k} t² + a_{3k} t³ + a_{4k} t⁴ + a_{5k} t⁵ + a_{6k} t⁶. For each link, the collision-avoiding polynomial has one unknown coefficient, a_{6k}, which is modeled as a random real number; for an n-link arm there are n unknown coefficients a_{6k}.
The feasible matrix of unknown polynomial coefficients can be defined as ã_k = [ã_{61} ã_{62} … ã_{6n}], where ã_k and ã_{6k} are the feasible matrix of polynomial coefficients and the proper leading coefficients of the collision-avoiding polynomials, respectively.
Zero Distribution of avoiding collision random polynomial
Previous research has shown that the zeros of random polynomials can model chaotic systems [5,7]. It is therefore worthwhile to investigate the zero distribution of the collision-avoiding path, since the obstacles transform the deterministic polynomial joint-angle path into a random polynomial.
It is well known that the zeros of high-degree random polynomials in the complex plane tend to concentrate near the unit circle [1][2][3][4][5][6][7][8], as shown in Figure 2a. The collision-avoiding random polynomial has n unknown variables and is therefore a low-degree polynomial. A preliminary investigation of the random pattern of the collision-avoidance problem using polynomial functions was carried out in [10]. To illustrate the zero distribution of the collision-avoiding path, a 5-DOF planar series robot is used. Figure 2b shows an example of the zero distribution of a collision-free polynomial, while Figure 2c shows the zero distribution of an infeasible polynomial for the five-link series robot. The collision-avoiding path results and the environment are shown in Figure 2d. With five unknown variables, the system is equivalent to a random polynomial of degree four, so there are four possible zeros, which can be four real values, two real values and a complex conjugate pair, or two complex conjugate pairs. Figures 2b and 2c show that the feasible and infeasible coefficients have similar zero distributions, so it is difficult to predict the feasible random coefficients from this phenomenon alone.
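The contrast between high-degree and low-degree random polynomials can be reproduced with a few lines of NumPy, as sketched below; the standard normal coefficients are an illustrative choice and do not correspond to the actual joint-angle coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

def zeros_of_random_polynomials(degree, trials=200):
    """Collect the complex zeros of `trials` polynomials with i.i.d. standard normal coefficients."""
    zeros = []
    for _ in range(trials):
        coeffs = rng.standard_normal(degree + 1)   # leading to constant coefficient
        zeros.extend(np.roots(coeffs))             # roots of coeffs[0]*x**degree + ... + coeffs[-1]
    return np.array(zeros)

# High degree: the moduli cluster around 1 (the unit circle), as in the cited literature.
high = zeros_of_random_polynomials(degree=50)
print("degree 50: median |zero| =", np.median(np.abs(high)).round(3))

# Low degree, as in the avoiding-collision system with few unknown coefficients.
low = zeros_of_random_polynomials(degree=4)
print("degree 4 : median |zero| =", np.median(np.abs(low)).round(3))
```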
Proposed method
The previous section showed that the random coefficients of collision-free and colliding joint angles tend to lie in similar regions, so it seems difficult to predict the pattern of the collision-avoiding polynomial from this zero distribution: the zero distribution shows no difference between infeasible and collision-free polynomials.
Instead of analyzing the behavior of the random coefficients to obtain the zero distribution, we propose to treat the zeros as the basic building blocks of the system. A set of chosen zeros is first expanded into a coefficient vector and normalized, c_n = b_n / b_{n1}, (4) where c_n and b_n are the normalized coefficients and the coefficient values in descending order, respectively; the first entry of c_n is always 1. This paper proposes to convert the normalized coefficients into the actual coefficients using ã_k = k̃ c_n, (5) where ã_k and k̃ are the feasible unknown random variables and a feasibility factor, respectively. The collision-free polynomial coefficients then follow the pattern [a_{61} a_{62} … a_{6n}] = k̃ c_n. (6) With this approach, ã_k is predicted from the zero distribution of the normalized coefficients together with the scale factor k̃. The random coefficients of the system are thus developed from the zero distribution; that is, we picture the unknown variables as a random polynomial. It should be noted that this random polynomial is not the random polynomial of the joint-angle path, but the random polynomial of the system. Random polynomial characteristics are also present in random matrix theory (RMT), so we should examine whether it can be used for a system with n unknown variables. Unlike RMT, which uses polynomials of large degree (n → ∞), for a system with n unknown variables the zero distribution is of degree n only, and the zeros will not tend to the unit circle because there are only a few of them.
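A minimal sketch of this construction is given below, under the interpretation of Eqs. (4)-(6) used here: a set of chosen zeros is expanded into a normalized coefficient vector and scaled by the feasibility factor k̃. The chosen zeros, the values of k̃, and the collision check (not shown) are placeholders.

```python
import numpy as np

def normalized_coefficients(zeros):
    """Expand a set of chosen zeros into polynomial coefficients (descending order) and
    normalize so that the leading entry is 1, giving the vector c_n of Eq. (4)."""
    coeffs = np.poly(zeros)          # monic polynomial with the given zeros
    return coeffs / coeffs[0]        # first entry is 1 by construction

def candidate_unknowns(zeros, k_tilde):
    """Scale the normalized coefficients by the feasibility factor k~ (Eqs. (5)-(6))
    to obtain a candidate vector of unknown leading coefficients a_6k."""
    return k_tilde * normalized_coefficients(zeros)

# Example for a 5-link arm: five unknowns correspond to a degree-four pattern, i.e. four zeros.
chosen_zeros = [0.6, -0.8, 0.3 + 0.4j, 0.3 - 0.4j]
for k_tilde in (0.5, 1.0, 2.0):          # k~ is searched over an interval in practice
    a6 = candidate_unknowns(chosen_zeros, k_tilde)
    print(k_tilde, np.round(np.real(a6), 3))
    # Each candidate would then be passed to the collision-detection routine of [9, 10].
```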
Numerical Experiment
The proposed method is applied to predict the pattern of ã_k for the 3-DOF planar robot of [9] and for a 5-DOF planar robot among very crowded obstacles. For the 3-DOF planar robot, the proposed method succeeds in predicting the collision-free polynomial coefficients. With three unknown variables, there are two possible zeros, which can be either two real values or a complex conjugate pair. The values of a_k predicted from the zeros of various randomly generated variables are shown in Table 1. The value of k̃ is found by scanning its interval and performing collision detection at each step. To avoid collision with the base, the second and third links must keep a minimum radius of 5 cm from the base.
The next simulation uses a 5-DOF planar robot among very crowded obstacles. The fitness function is the minimum joint-angle traveling distance and the minimum Cartesian distance with a weighting factor of 0.5. Table 2 shows the patterns predicted from the zero distribution. As for the 3-DOF planar robot, the zero distribution is successfully used as the basic pattern of the collision-free polynomial of the arm robot motion for the 5-DOF planar robot among very crowded obstacles. The system can be modeled as a random polynomial with a proper composition of zeros and k̃.
This paper also observes that if polynomials are generated purely at random, the value of k̃ is very difficult to find, because the zeros of the randomly chosen polynomial are not proper. For the collision-free case, the pattern of the collision-avoiding polynomial can be modeled with the proposed approach. This paper uses the computational approach of [9,10] as the benchmark case for predicting the randomness of the collision-avoiding polynomial with the proposed method.
Conclusion
A method to predict the pattern of the collision-free polynomial of arm robot motion has been presented. The random polynomial is constructed from the normalized random coefficients scaled by the factor k̃.
Depending on the position of the obstacles, the value of k̃ lies within a specific interval.
"Mathematics",
"Computer Science"
] |
Assisting the Visually Impaired in Multi-object Scene Description Using OWA-Based Fusion of CNN Models
Advances in technology can provide a lot of support for visually impaired (VI) persons. In particular, computer vision and machine learning can provide solutions for object detection and recognition. In this work, we propose a multi-label image classification solution for assisting a VI person in recognizing the presence of multiple objects in a scene. The solution is based on the fusion of two deep CNN models using the induced ordered weighted averaging (OWA) approach. Namely, we fuse the outputs of two pre-trained CNN models, VGG16 and SqueezeNet. To use the induced OWA approach, we need to estimate a confidence measure for the outputs of the two base CNN models. To this end, we propose the residual error between the predicted output and the true output as a measure of confidence. We estimate this residual error using another dedicated CNN model that is trained on the residual errors computed from the main CNN models. The OWA technique then uses these estimated residual errors as confidence measures to fuse the decisions of the two main CNN models. When tested on four image datasets of indoor environments from two separate locations, the proposed method improves the detection accuracy compared to both base CNN models. The results are also significantly better than state-of-the-art methods reported in the literature.
Introduction
Artificial intelligence (AI) is one of the main areas of research and development in the world today. Many types of AI systems have been developed recently and have provided impressive solutions in applications such as voice-powered virtual assistants (e.g., Siri and Alexa) [1], autonomous vehicles (e.g., Tesla) [2], robotics (e.g., in manufacturing cars) [3], and automatic translation (e.g., Google Translate) [4]. Many AI-based solutions have also been developed in the area of assistive technology, in particular for assisting visually impaired (VI) persons. Most of these systems deal with the autonomous navigation problem using wearable assistive devices such as infrared sensors, ultrasound sensors, RFID, BLE beacons, and cameras [5,6]. Besides autonomous navigation, VI persons also need other types of assistive technology, such as people and object detection and recognition. Computer vision techniques combined with machine learning provide the most suitable solutions to this problem. For example, Hasanuzzaman et al. [7] present a computer vision system to recognize currency using speeded-up robust features (SURF); the system recognizes US notes with a 100% true recognition rate and a 0% false recognition rate. Other systems, proposed in Refs. [8,9], assist VI persons in doing their shopping in the supermarket by detecting and reading barcodes and giving the VI person information (extracted from the shop database) about the product via voice communication. The authors in Ref. [10] proposed a system to assist VI persons in detecting and reading any text in their view. The system first detects candidate regions that may contain text using special statistical features; then, commercial optical character recognition (OCR) software is used to recognize text inside the candidate regions (or to decide that the content of the regions is non-text). Another interesting application is a travel assistant presented in Ref. [11] that detects and recognizes text in public transportation settings: the system detects text written on buses and stations and informs the VI person about station names and numbers, bus numbers and destinations, and so on. Another work, by Jia et al. [12], addresses the problem of finding staircases inside buildings and informing the user when they are within five meters of any staircase; the method is based on an iterative preemptive RANSAC algorithm to detect the steps of a staircase. The work by Yang and Tian [13], on the other hand, focuses on detecting doors inside buildings by detecting the most general and stable features of doors, namely edges and corners. Finally, a system for detecting restroom signage is presented in Ref. [14] based on scale-invariant feature transform (SIFT) features.
Object detection and recognition is a heavily studied problem in the computer vision field. The early object detection algorithms, such as the ones by Viola and Jones [15] and Dalal and Triggs [16], were built on the extraction of handcrafted features followed by a classification algorithm. After the rebirth of neural networks in 2012 and the appearance of deep learning and the convolutional neural network (CNN) [17], more advanced object detection algorithms based on these methods have appeared, including region CNN (RCNN) [18,19], you only look once (YOLO) [20,21], the single shot MultiBox detector (SSD) [22], pyramid networks [23], and RetinaNet networks [24]. These algorithms, while quite successful at detecting objects, have high computational costs and hence are difficult to execute on portable devices, unless one focuses on a single object class such as faces. This makes them less useful for the VI person, who prefers a portable device and wants to detect a wide range of objects encountered in daily life. Therefore, some researchers proposed a compromise solution that can detect multiple objects in a short amount of time by treating the problem from a multi-label classification perspective [25][26][27][28][29][30]. In this approach, the presence of multiple objects can be detected, but not their exact locations in the image. We believe it is reasonable to assume that the VI person is more interested in detecting multiple objects quickly than in knowing their exact locations.
Thus, in this work, we propose a scene description module that lists the objects present in a scene and informs the VI user about them via voice communication. The module is part of a larger assistive technology system for the visually impaired that is designed to assist them in (1) navigating indoor environments, (2) detecting and reading text information, (3) detecting and recognizing faces, and (4) detecting and listing objects present in the scene. The system uses a portable device connected to a camera placed on the VI person's chest.
Our solution for the scene description module is based on a multi-label classification approach using deep CNN models. Usually, CNNs do not perform well on small datasets as they are prone to overfitting. In this case, it has been shown in many studies [31][32][33][34] that it is more suitable to employ a knowledge transfer approach by using pre-trained CNNs such as the VGG family [35] and GoogLeNet (the Inception CNN family) [36]. The pre-trained CNNs have already been trained on very large image datasets and need only small modifications to adapt them to our specific dataset. One way to transfer knowledge from these pre-trained models is to use their learned feature representations as input to train another external classifier; the survey in [31] describes this option and discusses factors to consider when applying such an approach. In some cases, it is possible and more worthwhile to retrain the whole model again, but not starting from scratch, i.e., not from random model weights. In other words, we start with the pre-trained weights and retrain the whole model on the new dataset, which is known as the fine-tuning approach.
Many CNN models have been proposed in the literature for image classification, including VGG-VD [35], GoogLeNet [36], SqueezeNet [37], MobileNet [38], ResNet [39], and so on. These CNN models have different classification capabilities because of their architectural differences and because of the different datasets they are pre-trained on. Thus, it is wise to fuse them in a way that takes advantage of their respective strengths.
The fusion of ensembles of classifiers and/or of multiple features is an efficient technique for achieving better results in applications such as classification and recognition [40][41][42][43][44][45][46][47]. Usually, fusion is employed at three levels: the data input level, the feature level, and the decision level [42]. In data input-level fusion, images from multiple sources are combined to create a new image with a better signal-to-noise ratio than the input signals. Feature-level fusion combines the features extracted by feature extraction algorithms to create a richer and more descriptive feature vector. In decision-level fusion, the data are first classified separately using different methods, and the fusion then merges the classification outputs. For example, the work in [48] explores the fusion of images from different datasets to enhance the performance of a multi-task deep classifier. The authors in [44] propose a novel deep CNN model based on multiscale side-output fusion in a deeply supervised network; the model fuses feature vectors from the deep CNN at different scales in order to improve salient object detection.
For decision-level fusion, the work in [45] investigated several widely used ensemble methods in the context of deep neural network classifiers, including naive unweighted averaging, majority voting, and the Bayes optimal classifier. However, these methods are vulnerable to weak learners, are sensitive to overconfident learners, and may lead to information loss [45].
Another recent work, by Koh and Woo [46], presented a novel fusion approach for different multi-view classifications. The different multi-views are obtained by applying a classification model to different batches of the data. The fusion of these different views then involves computing co-occurrence matrices, weighted adjacency matrices, and Laplacian matrices, which is time-consuming.
In this work, we propose to improve the detection accuracy of CNN models by fusing their predictions using induced ordered weighted averaging (OWA) techniques. In particular, we use two base CNN models, namely the VGG16 model [35] and a lighter model called SqueezeNet [49]. We have selected these two models because they are not that deep and they are quite diverse. Our datasets are small (fewer than 160 images in the training set), and it is known from the literature that deeper models with a large number of weights need a huge dataset for training [35]. Furthermore, it is well known that fusion methods work best when the classifiers are diverse [45,50]. Our chosen models are quite diverse because VGG16 uses a basic convolutional architecture with a large number of weights, whereas SqueezeNet is a much lighter network that uses an advanced squeeze/expand architecture. We also increase diversity by using different training approaches: for SqueezeNet, we use a fine-tuning approach, where we update all its weights during training, whereas for the VGG16 model we update only the weights of the added upper layers.
The induced OWA technique fuses the output predictions of the CNN models by computing their weighted average. However, the weights are computed after ordering the predictions by their importance or level of confidence. As a measure of confidence for each prediction, we propose to use the residual error between the predicted output and the true output. The residual error can be computed at training time, because the true outputs are known, but not at test time. As a solution, we propose to estimate the residual errors directly from the input image by training another dedicated CNN for this purpose. In other words, for each dataset we train two CNN models: one to learn the actual output and one to learn the residual error. With this approach, each predicted output comes with an estimate of its residual error, which we use as a measure of confidence in the prediction. It is important to note that, unlike a regular weighted-average scheme, where the weights are the same for all input images, the OWA technique computes different weights for each input image. Thus, the most suitable weights are used for each input image, and this explains why OWA is able to improve on the accuracy of the two base models.
The contributions of this paper can be summarized in the following points: • Proposing a deep learning solution for image multi-label classification based on the fusion of two CNN models using OWA theory. • Proposing the residual errors between the model predictions and the true labels as measures of confidence in the model predictions, and proposing dedicated CNN models as a solution for estimating them from the input images. • Developing the mathematical model that formulates the use of the estimated residual errors to fuse the predictions of the CNN models using the induced OWA approach.
The rest of this paper is organized as follows. In Sect. 2, we provide a description of the proposed methods based on the fusion of CNN models using OWA. The experimental results and conclusions are presented in Sects. 3 and 4, respectively.
Materials and Methods
In this section, we first describe the two pre-trained CNN models, VGG16 and SqueezeNet, and the modifications made to their architectures to adapt them to our multi-label classification problem. Then, Sect. 2.2 introduces OWA theory. Finally, Sect. 2.3 describes the proposed method, including the mathematical formulation of the proposed fusion approach.
Pre-trained CNN Models Description
As mentioned earlier, we propose to fuse the outputs of two base CNN models, namely the VGG16 model [35] and the SqueezeNet model [49], using the induced OWA approach. Figure 1 shows the architectures of the pre-trained CNN models used. For the VGG16 model, we remove the last two layers of the original model and then add an extra dense layer with a LeakyReLU activation function [51] followed by a BatchNormalization layer [52].
As for the SqueezeNet CNN, we remove the layers after the fire9 block and replace them with an extra convolutional layer with LeakyReLU activation functions and a BatchNormalization layer, followed by a GlobalAveragePooling2D layer before the output layer. The output layer of both models has N_o neurons with linear activation functions, whose outputs are then converted to binary values (representing the presence or absence of each object) using a threshold T_p. For example, Fig. 1 shows sample outputs for some input images, where the outputs are converted to binary values using a threshold T_p = 0.5. A binary output of one indicates that the corresponding object is present in the image.
Another difference between the two base models lies in the training. For the VGG16-based model, we freeze the pre-trained layers because of the huge number of parameters they contain (> 14 million), whereas for the SqueezeNet CNN we employ a fine-tuning approach, because it is a small network with fewer than one million parameters and can be fine-tuned with reasonable computational resources. Furthermore, even without fine-tuning, the VGG16-based model achieves good results thanks to its rich architecture.
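As an illustration of this kind of modification, the following Keras sketch builds a VGG16-based branch with a frozen convolutional base, an extra dense layer with LeakyReLU and BatchNormalization, and a linear output layer of N_o units. It is a minimal approximation rather than the exact architecture of the paper: the pooling layer, the dense-layer width, and the number of objects are assumptions, and SqueezeNet (not available in keras.applications) would be built analogously and fine-tuned end to end.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

N_OBJECTS = 15              # N_o: number of objects/labels (dataset dependent, assumed here)
PRESENCE_THRESHOLD = 0.5    # T_p used in the Fig. 1 example; 0.3 is found to work best later

base = VGG16(weights="imagenet", include_top=False, input_shape=(480, 640, 3))
base.trainable = False                      # the VGG16 convolutional weights are kept fixed

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(256)(x)                    # extra dense layer (width is an assumption)
x = layers.LeakyReLU()(x)
x = layers.BatchNormalization()(x)
outputs = layers.Dense(N_OBJECTS, activation="linear")(x)   # real-valued presence scores

model = models.Model(base.input, outputs)
model.compile(optimizer="adam", loss="mse")

# At inference time the real-valued scores are thresholded into presence flags:
# present = (model.predict(images) > PRESENCE_THRESHOLD).astype(int)
```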
Induced OWA
Suppose we have a set of arguments a_j, representing the outputs of multiple estimators, which we want to fuse into one value. One simple way to do this is simple weighted averaging, where the weights represent our confidence in the corresponding estimators: if we have high confidence in a particular estimator, we assign it a bigger weight, and vice versa. Sometimes, however, the confidence is related to the output value rather than to the estimator itself; for example, we might favor outputs with large values over small values, or vice versa. For these cases, the ordered weighted averaging (OWA) operator is defined.
An OWA operator of dimension P is a mapping F_W : R^P → R with an associated weighting vector w = [w_1, w_2, …, w_P]^T such that w_j ∈ [0, 1] and ∑_{j=1}^{P} w_j = 1. The function F_W(a_1, a_2, …, a_P) determines the aggregated value of the arguments a_1, a_2, …, a_P as F_W(a_1, …, a_P) = ∑_{j=1}^{P} w_j b_j, where b_j is the jth largest of the a_j and P is the number of predictions.
It is important to observe the main difference between the OWA and a simple weighted average: the OWA involves ordering the arguments a_j from largest (or most confident) to smallest (or least confident). This makes the weights depend on the position in the ordering rather than on the arguments themselves, and it makes the OWA operator a nonlinear operator that provides a very rich family of aggregation operators parameterized by the weighting vector. For example, if the weights are all equal to 1/P, the OWA is simply the average operator. If the weight vector is [1, 0, …, 0], the OWA becomes the maximum operator. Conversely, if the weight vector is [0, …, 0, 1], the OWA becomes the minimum operator.
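These special cases are easy to verify with a short sketch of the plain OWA operator:

```python
import numpy as np

def owa(arguments, weights):
    """Ordered weighted average: sort the arguments in descending order (b_j) and
    return the weighted sum sum_j w_j * b_j."""
    b = np.sort(np.asarray(arguments, dtype=float))[::-1]
    w = np.asarray(weights, dtype=float)
    assert np.isclose(w.sum(), 1.0) and np.all((0 <= w) & (w <= 1))
    return float(np.dot(w, b))

a = [0.2, 0.9, 0.5]
print(owa(a, [1, 0, 0]))          # 0.9  -> maximum operator
print(owa(a, [0, 0, 1]))          # 0.2  -> minimum operator
print(owa(a, [1/3, 1/3, 1/3]))    # ~0.533 -> simple average
```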
By definition, the OWA operator orders the arguments to be aggregated based on their values. In other applications, the argument's value is not what makes it important; instead, an auxiliary value gives us a level of confidence in the argument. In this scenario, we can use the induced OWA operator, which relies on an auxiliary value, called the order-inducing variable, to order the arguments.
This scenario applies to our work, because the actual value of an estimator's output gives no indication of its importance or confidence. (Recall that in our case the arguments represent the outputs of multiple estimators.) Thus, we need an order-inducing variable that measures the confidence in the estimator's output. We discuss this issue in Sect. 2.4.
Prioritized Aggregation Operator (PAO)
An issue of considerable interest in applications of the induced OWA operator is the determination of the weights to be used, and various approaches have been suggested for obtaining them [53][54][55][56]. One elegant way is the prioritized aggregation operator (PAO) presented in [55]. Let the order-inducing variable be S_j (in our case, this will be derived from the predicted residual errors) and let S_0 = 1. The weights in the PAO approach are defined as follows. First, we define T_1 = S_0 = 1. Thus, in general, we can write T_j = ∏_{k=0}^{j-1} S_k. Finally, the weights are defined as w_j = T_j / ∑_{k=1}^{P} T_k. (4) The definition in (4) guarantees that ∑_{j=1}^{P} w_j = 1. Recall that P is the number of arguments and hence the number of estimators.
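A small sketch of these weight formulas, for arguments already sorted from most to least confident, is given below; it simply verifies that the weights are non-negative and sum to one.

```python
import numpy as np

def pao_weights(order_inducing):
    """Prioritized-aggregation weights for arguments already sorted by priority:
    T_1 = S_0 = 1, T_j = prod_{k=0}^{j-1} S_k, w_j = T_j / sum_k T_k."""
    s = np.concatenate(([1.0], np.asarray(order_inducing, dtype=float)))  # prepend S_0 = 1
    t = np.cumprod(s)[:-1]          # T_1 ... T_P
    return t / t.sum()

# Example with three estimators; with S values in [0, 1] the weights decrease down the ranking.
w = pao_weights([0.9, 0.6, 0.3])
print(np.round(w, 3), "sum =", w.sum())
```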
Proposed Fusion Approach Using Induced OWA
Recall that, in applications of the induced OWA operator, we need to define an order-inducing variable, i.e., a variable that allows us to order the predicted outputs of the estimators from most confident to least confident. In other words, the order-inducing variable measures the confidence in the predicted output of an estimator.
Our idea is to use the residual error between the predicted and true outputs as a measure of confidence in the predicted output, by analyzing and modeling the residual errors of the estimator in the input space. Obviously, computing the residual error of the model output requires knowing the true output, which is available during training but not during testing. Thus, the idea is to use another dedicated CNN model to learn how to predict the residual error from the input image. In other words, we have a pair of CNN models, one to predict the actual output and another to predict the residual error in that output. This pair of models is illustrated in Fig. 2, where Fig. 2a shows the main CNN models used for predicting object presence, and Fig. 2b shows the two models used to predict the residual errors of the main models' outputs. Figure 2a also illustrates the fusion operation performed at the decision level using the OWA approach.
The residual-error models shown in Fig. 2b can learn the residual error because the actual error can be computed during training from the predicted and true outputs. Accordingly, the proposed method involves the following steps: 1. Train the two main models shown in Fig. 2a to predict object presence. 2. Compute the residual errors of both main models on the training images, which is possible because the true outputs are known during training. 3. Train the two dedicated residual-error models shown in Fig. 2b on these residual errors. 4. At test time, use the models in Fig. 2a to predict the object presence from the input image. 5. Then, use the models in Fig. 2b to estimate the residual errors of both main CNN models. 6. Finally, use the estimated residual errors to fuse the outputs of the two main CNN models via the induced OWA approach.
Thus, given a sample input image, let y_j be the true label of object j (equal to one if object j is present). Furthermore, let ŷ_j1 and ŷ_j2 be the two real-valued predictions produced by the two main models for object j. Next, let e_j1 = abs(ŷ_j1 − y_j) and e_j2 = abs(ŷ_j2 − y_j) be the absolute residual errors of the two main models. Obviously, a lower residual error indicates higher confidence in the prediction. Recall, however, that the order-inducing variable must order the predictions from most confident to least confident; we therefore derive the order-inducing variables S_j1 and S_j2 of the two predictions from their estimated residual errors. Next, as explained earlier, the weights are defined by first ordering the predictions ŷ_j1 and ŷ_j2 according to their order-inducing variables S_jk and then computing the weights using the PAO approach. Therefore, we have two cases.
Figure 2: (a) The two pre-trained models are used to predict the object presence and are then fused using OWA; (b) two models used to predict the residual errors in the outputs of the models in part (a).
Case 2: S_j2 ≥ S_j1. In this case, the weights are w_j2 = S_j1 / (1 + S_j1) and, since the weights must sum to one, w_j1 = 1 / (1 + S_j1). Case 1 (S_j1 ≥ S_j2) is handled symmetrically, with the roles of the two models interchanged.
Finally, based on these weights, the final fused prediction for object j is computed as ŷ_j = w_j1 ŷ_j1 + w_j2 ŷ_j2. It is worth noting that the weights are not fixed for all objects; instead, they vary depending on the residual errors predicted from the test image by the dedicated CNN models. This is why this fusion approach is able to select the better prediction for each object and hence improve the accuracy over the whole testing set.
Lastly, recall that the final predicted output ŷ_j is compared with the presence threshold T_P to decide whether object j is present or not.
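Putting the pieces together, the following sketch fuses two per-object predictions using the estimated residual errors as order-inducing values. Treating the estimated error directly as S_jk (with a smaller value meaning a more confident prediction), as well as the illustrative scores, are assumptions of this sketch.

```python
PRESENCE_THRESHOLD = 0.3

def fuse_two_predictions(y1, y2, s1, s2):
    """Induced-OWA fusion of two per-object predictions y1, y2 with order-inducing
    values s1, s2 (smaller = more confident assumed here). The weights follow the
    two-model PAO form: w_first = 1/(1+S_first), w_second = S_first/(1+S_first)."""
    if s1 <= s2:                      # model 1 more confident -> ranked first
        w1 = 1.0 / (1.0 + s1)
        w2 = s1 / (1.0 + s1)
    else:                             # model 2 more confident -> ranked first
        w2 = 1.0 / (1.0 + s2)
        w1 = s2 / (1.0 + s2)
    return w1 * y1 + w2 * y2

# Toy example for one object: VGG16 and SqueezeNet scores with their estimated residual
# errors used directly as order-inducing values (this identification is an assumption).
y_hat = fuse_two_predictions(y1=0.72, y2=0.31, s1=0.05, s2=0.40)
print(round(y_hat, 3), y_hat > PRESENCE_THRESHOLD)
```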
Experimental Results
In the first part of this section, we describe the datasets used in this study and how they are collected. Then, we optimize the base CNN models for these datasets. In particular, we find experimentally the best presence threshold T p . Finally, we present preliminary results of the proposed solution.
Dataset Description
The experiments in this paper use four datasets of multi-labeled images for evaluating the efficiency of the proposed deep learning solution. The first two datasets pertain to the College of Computer and Information Sciences building at King Saud University (KSU), Saudi Arabia. They have
been collected by our team using a CMOS camera with the following features: 87.2 fps, 752 × 480 resolution, 0.36 MPix, 1/3″ sensor, ON Semiconductor, global shutter, and connection via USB 2.0.
The second two datasets were collected by the authors of [29] in two different buildings of the University of Trento, Italy. The cameras used to capture these images are from IDS Imaging [57]; the camera model used, the UI-1240LE-C-HQ, is a CMOS-based camera with 25.8 fps, 1280 × 1024 resolution, 1.31 MPix, 1/1.8″ sensor, e2v, global shutter/global start shutter/rolling shutter, and USB 2.0 support. The camera is equipped with a KOWA LM4NCL 1/2″ 3.5-mm F1.4 manual-iris C-mount lens from RMA Electronics Inc. [58]. The details of these four datasets are given in Table 1, which also presents the list of objects considered in each dataset.
It is noteworthy that we selected the objects deemed most important in the considered indoor environments. Also note that the datasets are not randomly split into training and testing images, because a random split cannot guarantee that all objects are represented in the training set; the split is performed manually beforehand and is fixed for all experiments. Table 2 presents the number of occurrences of each object in the training and test sets of each dataset, while Fig. 3 shows four sample images with the list of objects they contain.
Assessment Metrics
In order to assess the proposed solution, we need to define quantitative performance metrics. In single-label classification, we use metrics such as precision, sensitivity (recall), specificity, and accuracy. These metrics can also be used in the multi-label case, either per label/object or as overall metrics. Let x_i represent a sample image in the test set, where 1 ≤ i ≤ N_test, and let Y_i represent the set of true labels (objects) associated with it. In addition, let P be a multi-label classifier that returns the set of predicted labels for x_i. For label y_k, four basic quantities characterizing the binary classification performance on this label can be defined: TP_k, FP_k, FN_k, and TN_k, the numbers of true positives, false positives, false negatives, and true negatives with respect to label y_k, respectively. It can easily be checked that TP_k + FP_k + FN_k + TN_k = N_test. Based on these quantities, we define the metrics precision (PRE_k = TP_k / (TP_k + FP_k)), sensitivity or recall (SEN_k = TP_k / (TP_k + FN_k)), specificity (SPE_k = TN_k / (TN_k + FP_k)), and accuracy (ACC_k = (TP_k + TN_k) / N_test) in Eqs. (9)-(12).
To obtain overall metrics, we take the average of the individual per-label metrics. In the multi-label case, ACC is ambiguous [59,60]; the balanced, or average, accuracy (AVG) can be used instead. In regular classification, with a single label per image, the classification of an image is either correct or wrong. Multi-label classification, however, can be partially correct, because some labels may be detected while others are missed. Thus, there are evaluation metrics specific to multi-label classification that take this fact into account. The Hamming loss (HL) is probably the most widely used loss function in multi-label classification; it measures the fraction of incorrectly predicted labels. First, we define it per image sample as HL_i = |Y_i Δ P(x_i)| / N_labels, (14) where Δ stands for the symmetric difference between two sets and N_labels is the number of objects/labels. The overall HL is then computed by averaging over all sample images in the test set: HL = (1/N_test) ∑_i HL_i. The mean average precision (mAP) is a ranking metric that refers to the average fraction of relevant labels ranked higher than irrelevant ones. First, we compute the precision/recall curve for a particular label/object over the whole test set; the average precision per object, AP_k, is then the area under this precision-recall curve. Since the precision/recall curve is discrete, AP_k must be approximated, typically by the area under the curve (AUC). Finally, mAP is computed as the average of the individual AP_k values.
The label ranking loss (RL) is another metric that computes the average number of label pairs that are incorrectly ordered. In other words, it computes the fraction of reversely ordered label pairs, i.e., an irrelevant label is ranked higher than a relevant label. For more details, we refer the reader to [61].
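The following is a minimal NumPy sketch of how the per-label quantities of Eqs. (9)-(12) and the Hamming loss can be computed from binary label matrices; the function and variable names and the small epsilon guard against division by zero are our own, and for mAP and the ranking loss one could rely on sklearn.metrics.average_precision_score and label_ranking_loss instead of hand-rolling them.

```python
import numpy as np

def multilabel_metrics(Y_true, Y_pred):
    """Per-label PRE/SEN/SPE/ACC plus average accuracy (AVG) and Hamming loss (HL).

    Y_true, Y_pred: binary arrays of shape (N_test, N_labels).
    """
    Y_true = np.asarray(Y_true, dtype=bool)
    Y_pred = np.asarray(Y_pred, dtype=bool)
    n_test, n_labels = Y_true.shape

    tp = np.sum(Y_pred & Y_true, axis=0).astype(float)
    fp = np.sum(Y_pred & ~Y_true, axis=0).astype(float)
    fn = np.sum(~Y_pred & Y_true, axis=0).astype(float)
    tn = np.sum(~Y_pred & ~Y_true, axis=0).astype(float)

    eps = 1e-12                      # avoid division by zero for absent labels
    pre = tp / (tp + fp + eps)       # precision per label
    sen = tp / (tp + fn + eps)       # sensitivity / recall per label
    spe = tn / (tn + fp + eps)       # specificity per label
    acc = (tp + tn) / n_test         # accuracy per label

    # Hamming loss: fraction of wrongly predicted label entries per image,
    # i.e. the size of the symmetric difference divided by N_labels,
    # averaged over all test images.
    hl = np.mean(np.sum(Y_true ^ Y_pred, axis=1) / n_labels)

    return {"PRE": pre, "SEN": sen, "SPE": spe, "ACC": acc,
            "AVG": acc.mean(), "HL": hl}

# Tiny example with 3 test images and 4 labels
Y_true = [[1, 0, 1, 0], [0, 1, 1, 0], [1, 1, 0, 1]]
Y_pred = [[1, 0, 0, 0], [0, 1, 1, 0], [1, 0, 0, 1]]
print(multilabel_metrics(Y_true, Y_pred))
```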
Training the CNN Models
We implement the proposed deep CNN models for multi-label image classification in the Keras environment with TensorFlow as a back end. TensorFlow is an end-to-end open-source machine learning platform developed by Google that can be programmed in Python. However, due to its relatively low-level programming style, it is often used through a higher-level interface such as Keras (also written in Python).
All experiments are conducted on an HP laptop with an Intel Core i7-7700HQ CPU, an NVIDIA GeForce GTX 1060 Ti graphics card with 4 GB of dedicated memory, and 8 GB of RAM. The size of the images contained in the datasets is 640 × 480, and we set up the CNN base models to accept the images at their original size. Figure 4 shows the loss versus the epoch number when training the different CNN models on the first dataset (KSU1) as an example. We set the batch size to 16 and the learning rate to 0.001. From these plots, we can see that the loss function converges within 100 epochs for both models when training them to classify the actual images. To improve the stability of convergence, we reduce the learning rate after epoch 100 from 0.001 to 0.0001 and train for 20 more epochs. The CNN models dedicated to learning the residual error converge much more quickly; we therefore train them for only 30 epochs each, with a learning rate of 0.001 and a batch size of 16.
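A sketch of how this training setup could be expressed in Keras is given below. The sigmoid output layer with binary cross-entropy loss, the Adam optimizer, the ImageNet initialization, and the label count of 15 are our assumptions (the paper states the batch size, learning rates, and epoch schedule but not these details); SqueezeNet is not shipped with keras.applications and would have to be defined analogously.

```python
import tensorflow as tf
from tensorflow import keras

IMG_SHAPE = (480, 640, 3)   # original dataset image size (H, W, C)
N_LABELS = 15               # assumed number of objects per dataset

# Base feature extractor: VGG16 shown here; SqueezeNet would be built analogously.
base = keras.applications.VGG16(include_top=False, weights="imagenet",
                                input_shape=IMG_SHAPE, pooling="avg")
outputs = keras.layers.Dense(N_LABELS, activation="sigmoid")(base.output)
model = keras.Model(base.input, outputs)

# Multi-label setup: independent sigmoid outputs trained with binary cross-entropy
# (our assumption; the paper does not state the loss or optimizer explicitly).
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
              loss="binary_crossentropy")

def schedule(epoch, lr):
    # Keep lr = 1e-3 for the first 100 epochs, then drop to 1e-4 for 20 more.
    return 1e-3 if epoch < 100 else 1e-4

# model.fit(x_train, y_train, batch_size=16, epochs=120,
#           callbacks=[keras.callbacks.LearningRateScheduler(schedule)])
```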
Determining the Optimal Presence Threshold T P
Recall that an important parameter in our CNN models is the presence threshold T_P, which is used to convert the outputs into binary values. It is reasonable to assume that the best value for this parameter is 0.5, but an ablation study is needed to determine the optimal value according to our metrics. Thus, in this set of experiments we examine the AVG accuracy of the base models as a function of the threshold T_P. The results, shown in Fig. 5, clearly indicate that the best threshold value is 0.3 rather than 0.5, and this holds for all datasets. We note that we repeat each training experiment ten times and plot the mean of the AVG metric; the small bars at each point indicate the standard deviation over the ten runs.
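The threshold sweep itself is straightforward; a small sketch is shown below, assuming AVG is the per-label accuracy averaged over labels (one reading of the paper's metric) and using random placeholder scores in place of real network outputs.

```python
import numpy as np

def avg_accuracy(Y_true, scores, t_p):
    """Average per-label accuracy after binarizing sigmoid scores at t_p."""
    Y_pred = scores >= t_p
    Y_true = Y_true.astype(bool)
    tp = np.sum(Y_pred & Y_true, axis=0)
    tn = np.sum(~Y_pred & ~Y_true, axis=0)
    return np.mean((tp + tn) / Y_true.shape[0])

# Placeholder data: 50 test images, 15 labels (real scores would come from the CNN)
rng = np.random.default_rng(0)
scores = rng.random((50, 15))
Y_true = (rng.random((50, 15)) < 0.3).astype(int)

for t_p in np.arange(0.1, 0.9, 0.1):
    print(f"T_P = {t_p:.1f}  AVG = {avg_accuracy(Y_true, scores, t_p):.3f}")
```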
Results for a Sample Run
The proposed fusion algorithm clearly relies on the correct estimation of the residual values between the true and predicted outputs. To estimate the residual values, we can use either of the two CNN models; obviously, it is preferable to use the one that predicts the residuals better. To that end, we train both model types, SqueezeNet and VGG16, and evaluate them based on the mean squared error (MSE) between the true residuals and the predicted ones. Table 3 shows the obtained MSE results. Based on this comparison, it is clear that SqueezeNet is more accurate in predicting the residual values, so we use this model to estimate the residuals after each main model. From Tables 4 and 5, we can see that for all datasets the proposed fused model yields an improvement in AVG accuracy compared to both base models. We also observe that, in general, SqueezeNet outperforms VGG16 in terms of AVG results. Nevertheless, the fusion of the two models produces a good improvement on average for all datasets: even though SqueezeNet usually performs better, VGG16 does a better job for some images. For example, looking at the "people" object in Table 5 (found in the UTrento datasets), we clearly see that VGG16 outperforms SqueezeNet for this object type. Thus, the two models complement each other, as the VGG16 model in a sense corrects the SqueezeNet model for those specific object classes.
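To make the idea concrete, the sketch below combines two models' outputs per image using confidences derived from their estimated residuals. The inverse-mean-absolute-residual weighting is an illustrative choice of ours, not necessarily the exact OWA operator used in the paper; the shapes and the presence threshold follow the text.

```python
import numpy as np

def fuse_outputs(pred_a, pred_b, res_a, res_b, t_p=0.3):
    """Per-image weighted fusion of two models' label scores.

    pred_a, pred_b : predicted label scores, shape (n_images, n_labels)
    res_a, res_b   : residuals estimated by the dedicated residual CNNs
    """
    # Smaller predicted residual -> higher confidence in that model for this image.
    conf_a = 1.0 / (np.mean(np.abs(res_a), axis=1, keepdims=True) + 1e-6)
    conf_b = 1.0 / (np.mean(np.abs(res_b), axis=1, keepdims=True) + 1e-6)
    w_a = conf_a / (conf_a + conf_b)           # per-image weight for model A
    fused = w_a * pred_a + (1.0 - w_a) * pred_b
    return (fused >= t_p).astype(int)          # binarize with the presence threshold

# Toy call with random numbers, just to show the shapes involved
rng = np.random.default_rng(1)
pa, pb = rng.random((4, 15)), rng.random((4, 15))
ra, rb = 0.1 * rng.standard_normal((4, 15)), 0.3 * rng.standard_normal((4, 15))
print(fuse_outputs(pa, pb, ra, rb))
```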
When we look at the overall metrics, we observe that the fused model outperforms the separate models for almost all metrics. There are a few exceptions (highlighted in bold font), such as the mAP metric on the KSU2 dataset, where the fused model achieved 80.69 whereas the SqueezeNet model achieved 81.43. The other exception is the UTrento1 dataset with respect to the RL metric. Here, the two separate models achieved quite different RL values, namely 0.185 and 0.125. Ideally, the RL of the fused model would be lower than both, which is not the case here. However, the RL value of 0.132 achieved by the fused model is significantly lower than that of the VGG16 model and close to the better result of the SqueezeNet model.
Comparison to State-of-the-Art
We also present a comparison with state-of-the-art methods in Table 6. These methods include: (1) SURF matching and Gaussian process regression (SURF + GPR) [28], (2) compressive sensing and Gaussian process regression (CS + GP) [28], (3) multi-resolution random projection (MR-random-proj) [30], (4) a pre-trained GoogLeNet CNN [36], (5) a pre-trained ResNet CNN [39], (6) fine-tuning of the SqueezeNet CNN [25], and (7) a recent method based on convolutional SVM networks (convolutional SVM Net) [26]. Again, these results are obtained using T_P = 0.3 as the presence threshold. The results in Table 6 clearly show that the proposed method yields significant improvements over the state-of-the-art methods for all datasets except KSU1, where the improvement is only marginal.
From Tables 4, 5, and 6, we notice that SqueezeNet produces better results than VGG16 on average. It is important to note that the fusion technique computes different weights for each image; in other words, for each image a weighting scheme is computed that favors the model best suited to that image. Therefore, even though SqueezeNet does better than VGG16 on average, the latter does better for certain images, and the proposed OWA fusion technique is able to discover and exploit those cases by adjusting its weights accordingly. We also notice from Table 6 that the improvements are more significant for the UTrento1 and UTrento2 datasets. These two datasets are more challenging, as illustrated by the lower AVG accuracies achieved on them; one possible reason is that they contain barrel-like distortions, as can be observed in Fig. 3. The challenging nature of these two datasets partly explains why the proposed method provides a more significant improvement for them: the two CNN models disagree more on these datasets, and the fusion technique is able to exploit this disagreement in a complementary way to enhance the detection result.
Hardware Implementation Using FPGA
This work describes a module that is part of a larger project, called BlindSys, which aims to help visually impaired (VI) persons with vision tasks such as navigation, text recognition, object detection, and face recognition. The project involves a hardware part, illustrated in Fig. 6. The system hardware includes a wide-angle camera, an earphone, laser and inertial measurement unit (IMU) sensors, and a high-end 10-inch mobile device (tablet).
However, the tablet does not contain any graphics processing unit and is therefore unable to execute some tasks in real time. Thus, dedicated hardware circuitry to execute the network models, such as a field-programmable gate array (FPGA), is an attractive solution.
Modern neural networks are computationally expensive and require specialized hardware such as graphics processing units. Mobile devices without further optimization may not provide sufficient performance when high processing speed is required, as in our computer vision system supporting VI persons. We can speed up neural networks by moving the CNN computation from software to hardware, namely an FPGA implementation, and by using fixed-point calculations instead of floating point.
The unique flexibility of the FPGA fabric allows the logic precision to be adjusted to the minimum that a particular network design requires. By limiting the bit precision of the CNN calculations, the number of images that can be processed per second can be significantly increased, improving throughput and reducing power, while retaining the predictive performance of the corresponding software implementation.
For example, the authors in [62] proposed an FPGA implementation of deep neural networks pre-trained from VGG16. They used dynamic-precision quantization with 48-bit data representation and singular value decomposition to reduce the size of the fully connected layers, which reduced the number of weights that had to be transferred between the device and the external memory. Another work by Zhang et al. [63] analyzed the throughput and required memory bandwidth of various CNNs using optimization techniques such as loop tiling and transformation; they achieved a 17.42× speedup for the AlexNet CNN [17]. Suda et al. [64] consider a higher-level solution that uses an OpenCL compiler for deep networks. Using their method, they were able to implement two large-scale CNNs, namely AlexNet and VGG16, on two Altera Stratix-V FPGA platforms, the DE5-Net and P395-D8 boards, which have different hardware resources.
Recently, the authors in [65] implemented a fully connected neural network with six layers and 64 neurons on an Altera Cyclone IV GX FPGA (DE2i-150 board). The digital blocks are described using fixed-point notation, which provides high speed at a low cost in hardware resources. In this scheme, N = 32 bits are used for the fixed-point format: the most significant bit represents the sign, three bits represent the integer part, and 28 bits represent the fractional part. More recently, Duarte et al. [66] have proposed a protocol for automatically converting fully connected neural networks written in a high-level programming language into a high-level synthesis (HLS) intermediate format and then into an FPGA implementation.
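For illustration, the sketch below quantizes floating-point weights into the 32-bit fixed-point format described above (1 sign bit, 3 integer bits, 28 fractional bits, i.e. Q3.28). The helper names and the saturation behavior are our own choices; the bit allocation follows [65].

```python
import numpy as np

FRAC_BITS = 28
SCALE = 1 << FRAC_BITS          # 2**28

def to_fixed(x):
    """Quantize float weights to signed Q3.28, saturating at the format limits."""
    q = np.round(np.asarray(x, dtype=np.float64) * SCALE)
    return np.clip(q, -(1 << 31), (1 << 31) - 1).astype(np.int64)

def to_float(q):
    """Convert Q3.28 integers back to floating point."""
    return np.asarray(q, dtype=np.float64) / SCALE

w = np.array([0.731, -3.2, 1e-9])
print(to_float(to_fixed(w)))    # round-trip error is at most 2**-29 per weight
```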
However, implementing even deeper networks with many dozens of layers is problematic, since all the layer weights would not fit into the FPGA memory and would require the use of external RAM, which can decrease performance. Moreover, with a large number of layers, error accumulation increases and requires a wider bit range to store the fixed-point weight values. These findings make our solution more attractive, because it is based on the SqueezeNet and VGG16 CNNs, which are shallow models with a low number of layers compared to more recent models in the literature. In fact, as mentioned previously, FPGA implementations of the larger of the two models (VGG16) have already been proposed in the literature.
Conclusions
In this work, we present a novel computer vision method for detecting the presence of multiple objects in a scene. The method represents a module in a larger assistive technology system for the visually impaired. We propose an innovative idea to fuse two CNN models, namely VGG16 and SqueezeNet, using dedicated CNN models that estimate the residual error in the predicted outputs, in combination with an OWA approach.
The experimental results on four image datasets of indoor environments from two separate locations show significant improvements compared to state-of-the-art methods. The proposed OWA approach, based on estimating the residuals of the CNN outputs and using them as confidence values, is able to select the better of the two CNN outputs. One way to improve the results is to add more CNN models to the ensemble; however, this would also increase the computational time. Another, more promising direction is to employ augmentation techniques to increase the dataset size, because CNN models usually require large datasets for training. Finally, an interesting idea is to divide the objects of interest among multiple CNN models. For example, we could use three CNN models of the same or different types, each responsible for detecting only five of the 15 objects; reducing the number of objects per CNN is expected to increase its performance. | 9,088.8 | 2020-07-29T00:00:00.000 | [
"Computer Science"
] |
Potentials and Challenges of Promoting Bunno Bedelle Zone Major Tourism Destinations, South Western Ethiopia
Ethiopia in general, and Bunno Bedelle Zone in particular, is rich in both natural and cultural tourism resources. Nevertheless, the economic, socio-cultural, and environmental role of these resources is insignificant, mainly because of problems related to the lack of infrastructural development and destination marketing. To make tourism development more sustainable, it is important to examine the problems associated with promoting and marketing tourist destinations. To achieve the objective of the study, a mixed research methodology was implemented. Since the aim of the study was to assess the potentials and challenges of promoting the major tourism destination sites of Bunno Bedelle Zone, the research was descriptive in nature. Both primary and secondary data were gathered; primary data were collected through questionnaires, interviews, focus group discussions, and field observation. The collected data were coded, processed, and analyzed with the help of SPSS. The findings demonstrate that the promotional activities of Bunno Bedelle Zone have failed to establish the area as an important tourist destination because of limited community capacity building in tourism concepts, a lack of expertise in developing ecotourism, and a lack of basic infrastructure such as adequate transport, communication, health care, and accommodation services. Above all, awareness problems and a lack of cooperation between stakeholders have aggravated the problems that prevent sustainability. To establish ecotourism and minimize the challenges facing tourist destinations, there should be a broader awareness creation program for the wider community to develop a sense of ownership of their own resources, and further investigation is needed to identify and promote the potential tourism resources of the area.
Although Bunno Bedelle Zone, and the Oromia Region of Ethiopia more broadly, is rich in tourism potential, its economic impact is insignificant. Moreover, due consideration has not been given to marketing and promoting the tourism destinations of Bunno Bedelle Zone by national and regional tourism bureaus and destination administrators. Above all, there is a lack of studies on the challenges of promoting the tourist destinations of the study area. This study was therefore proposed to identify the potentials and the actual challenges that affect the tourism destination marketing and promotion activities of the zone administration, and to unpack the current trend.
Objectives of the study 1.2.1 General Objectives
To assess the potentials and challenges of promoting the major tourism destinations of Bunno Bedelle Zone.
Specific Objectives
To assess the current tourism potentials of Bunno Bedelle Zone. To look into the problems associated with promoting and marketing the major tourism destinations of Bunno Bedelle Zone. To identify appropriate tools for marketing and promoting the major tourism resources of Bunno Bedelle Zone.
Significance of the Study
This study is expected to contribute by showing the current tourism destination and marketing problems and by suggesting appropriate tourism promotion approaches in the study area, which will help Culture and Tourism Bureaus at different levels revise their policies so as to be more effective. Likewise, the findings will play an essential role in policy formulation and decision making at all levels (regional, zonal, woreda, kebele, and grassroots) in devising appropriate marketing strategies. Since the research results will be forwarded to the Culture and Tourism Offices and other development agents, they will improve people's knowledge of and attitude toward the tools that are suitable for marketing the zone's tourism resources.
LOCATION.
The study was conducted in Bunno Bedelle Zone, in the west of the Oromia Region. Bunno Bedelle is one of the newly established zones, comprising ten districts in the south-western part of the Oromia Regional State since 2008 E.C. It is bordered to the south by Jimma Zone, to the east by East Wollega Zone, to the west by Illu Aba Bor Zone, and to the north by West Wollega Zone. The capital of the zone is Bedelle town, located 480 km from Addis Ababa. The zone has good climatic conditions and is evergreen throughout the year. Figure 1: Map of the study area.
Socio-economic conditions of community
The community practices a mixed farming mode of life. Different kinds of crops, vegetables, livestock, oil crops, and coffee are produced in the woredas. Coffee, khat, teff, and maize are the important cash crops of the area.
Study site selections and Sampling design
The rationale for selecting Bunno Bedelle Zone was the presence of various natural and cultural resources with potential for ecotourism development. Purposive sampling was used to select target populations for the interviews and FGDs from officials working in the zonal and woreda Culture and Tourism offices, non-governmental development bodies, local community elders, and destination managers of the study areas. Participants were selected based on the experience, skills, and knowledge of experts on the theme of the research. A total of 171 respondents were selected from a target population of 300 based on the formula of Israel (1992):
n = N / (1 + N e^2) = 300 / (1 + 300 × 0.05^2) = 300 / 1.75 ≈ 171,
where n is the required sample size, N is the number of people in the population, and e is the precision level (±5%), with a 95% confidence level and P = 0.5 (maximum variability).
Methods of Data Collections
To undertake this study, both primary and secondary data were used, employing both qualitative and quantitative approaches. The primary data were collected using a structured questionnaire administered by the researcher. The questionnaire was pre-tested and the necessary corrections were made before actual use. During the interviews, the researcher provided enough information about the objectives of the study to avoid potential bias in the respondents' answers. Secondary data were collected from relevant sources such as the Oromia Forest and Wildlife Enterprise bureau of Bunno Bedelle Zone and the Culture and Tourism bureaus of the zone and the woredas; in addition, secondary data from the literature and previously published research articles were used.
Interview
Interviews were conducted with community representatives and with staff members of the Culture and Tourism bureau and the Oromia Forest and Wildlife Enterprise of the zone and the woredas. For this purpose, a list of questions was prepared, and respondents were interviewed to express their ideas, opinions, feelings, and knowledge regarding the potentials of, and obstacles to promoting, the tourism destinations. The role of the community in conserving biodiversity and its responsibility for developing ecotourism in the area were also part of the interview questions.
Focused group discussion
Focus group discussions were carried out with local elders and community leaders, including individuals with adequate knowledge of the tourism potentials of the areas. The issues discussed included the potential resources for ecotourism, the challenges of promoting the ecotourism potentials of the areas, the role of government and the community in developing ecotourism in the study area, and whether the community had received any training on awareness creation, biodiversity in general, and ecotourism in particular.
Questionnaire surveys
A questionnaire survey was carried out with respondents to gather data on socioeconomic characteristics, the potential tourism resources of the area, the challenges that hinder the promotion of the potential tourism resources of the destination sites, and the appropriate tools to promote the tourism destination sites of the area.
Key informants' interview
Key informant interviews were carried out with 15 respondents selected purposively through snowball methods. These included a community leader, the manager and staff members of the Oromia Forest and Wildlife Enterprise, and the manager of the zonal tourism bureau. The questions asked concerned the potential resources of Bunno Bedelle Zone and the different water bodies available in the area, the challenges that limit the promotion of the ecotourism destinations of the area, and the appropriate tools used to promote the tourism destinations of the area.
Field observation
Observation was a main instrument of data collection applied during site visits to examine the condition of biodiversity, the potential resources for ecotourism, and the challenges of promoting the tourism destinations of the area. In addition, the infrastructure facilities and the lifestyle of the local people near the destination sites were observed during the visits. A notebook and a photo camera were used to capture information.
METHODS OF DATA ANALYSIS
Data were analyzed both quantitatively and qualitatively. The data gathered through the questionnaire were first arranged and organized in tables, converted into frequencies and percentages, and then classified and tabulated; SPSS Statistics version 20 was used to produce percentages, frequencies, and graphs (bar charts) to present the results. Text explanations and descriptions were used for the qualitative data. Data collected through interviews were analyzed systematically based on the techniques of listening and transcription, reduction to units of relevant meaning, and summarization. Data collected through the focus group discussions were analyzed using the same guidelines as the interview responses. Data collected through field observation were analyzed in the form of text.
RESULTS AND DISCUSSION 4.1. Source of income
In order to identify the major sources of income, basic data were collected from the respondents. According to their responses, crop farming, livestock rearing, mixed farming (both livestock rearing and crop farming), and other income-generating activities such as handicrafts and beekeeping are the major sources of their income, as indicated in Figure 2.
Figure 2: Income sources of the community. Concerning the community's sources of income, 38.60% of respondents mentioned mixed farming (both crop farming and livestock rearing) as their income-generating activity, crop farming accounts for 30.41%, and the rest practice livestock rearing (26.32%) and other income-generating activities such as handicrafts and beekeeping (4.67%). Human population growth, the need for grazing land, and agricultural expansion will negatively affect the tourism destinations of the area; unless an alternative income-generating activity such as ecotourism is established, the potential resources of the area could be destroyed by overgrazing, settlement, and farmland expansion.
Natural resources for ecotourism development in Bunno Bedelle Zone
The area is drained by seasonal (intermittent) rivers and by principal rivers with high potential for a variety of economic activities. The principal rivers that flow throughout the year, such as the Dabena and the Didessa, have high potential for transportation and large-scale irrigation in their lowland areas and are potential sources of fish; the area also has scenic mountains with good climatic conditions and hot springs that are used to cure disease. The presence of these natural resources is the basis for ecotourism development; however, they need effective management and promotion (Figure 3). Aba Bekele waterfall: according to the focus group discussions, Dega woreda is full of attractive natural and cultural tourist attraction sites, among them Bala Dagna Cave, Bokicha Mountain, and Sida Aba Café (cultural and historical sites), as well as Sida Aba Mathi, Meko Mountain, and Tabala Kunbure (hot springs). According to the focus group discussions and the sampled respondents, all of these sites are natural and there are no facilities around them. The absence of facilities, the absence of gravelled road construction, the lack of a sufficient budget, the lack of a water pipeline, and low awareness are some of the obstacles that hinder the development of ecotourism in the area (Figure 3). Based on the focus group discussions and the researcher's field observations, Dangiwaj waterfall, associated with a natural cave found in Gechi woreda of Bunno Bedelle Zone, is another tourist attraction; the woreda has one natural cave attraction and three natural forest tourist attraction centres (Figure 4).
Haro Aba Diko Controlled Hunting area forest in Dabo Hana woreda
Tourism is an industry that brings both direct and indirect economic and social benefits and consequently supports other economic sectors. Despite the enormous tourism potential of the woreda, much of it has not yet been developed or well recognized in a way that contributes to overall regional development.
The dense natural forest found in Dabo Hana woreda is an important potential tourism resource of the area. The forest consists of old, large natural trees. According to the focus group discussions, this forest extends up to Mako woreda and contains various flora and fauna that are sources of tourist attraction. Some of the fauna found in this forest are African buffalo (Syncerus caffer), bush pig (Potamochoerus larvatus), black-and-white colobus monkey (Colobus guereza), common bushbuck (Tragelaphus scriptus), waterbuck (Kobus ellipsiprymnus), plains zebra (Equus quagga), and blue monkey (Cercopithecus mitis), and the vegetation ranges from herbaceous species to trees. Based on the respondents' and key informants' interviews, Yako Cave and Chayi Cave are other historical sites found in the woreda, together with Aba Café, one of the cultural tourist attractions (Figure 5).
Figure5: Forest of Haro Aba Diko Controlled Hunting area.
Bettere waterfall: Nowadays, waterfalls are among the most interesting natural tourism destinations, yet in the context of our country the use of waterfalls as a tourism resource is not well developed. Bettere waterfall, found in Chora woreda, is one of the most important potential tourist destinations if it is promoted and tourist facilities are provided. According to the focus group discussions, this waterfall includes natural caves that are habitats for various attractive bird species, and the community also practices beekeeping in these caves (Figure 6).
Figure 6: Betteeree (Cooraa) waterfall. Dabena waterfall.
This is one of the potential tourism destinations found near Bunno Bedelle zone. The information obtained from the respondents indicated that the Dabena River is rich in fish species, which could be a source of income for the local community around it. This shows that if ecotourism is started in the area, the community could benefit in various ways. However, due to poor management of the destination, the community nearby has not gained much benefit from this river and waterfall (Figure 7).
Figure7: Dabena waterfall
Dhambacha stone and Simbir waterfall: Dhambacha stone and the Simbir waterfall, found in Borecha woreda, are among the natural tourist attractions. According to the focus group discussions and interviews with sampled respondents, this waterfall is attractive in nature and is a place of pleasure for them during their leisure time.
Cultural attractions of Bunno Bedelle Zone
In addition to the natural tourism resources mentioned above, Bunno Bedelle zone is also endowed with historical and cultural resources. The following have been identified as the cultural tourism resources of the area.
Cultural food and drink of the Oromo ethnic group of Bunno Bedelle zone
According to the respondents' interviews and the community discussions, the Oromo ethnic group found in Bunno Bedelle zone practices both tangible and intangible cultural traditions. Cultural food and drink are among the tangible cultural resources that make Bunno Bedelle zone an ideal place for developing diversified ecotourism products for the sustainable benefit of the local community. During the study period, the researcher identified various cultural foods and drinks, namely Buna Qalaa, Anchootee, Marqaa, Caccabsa, Maqinoo (Irra Dibaa), Keenetoo, Bookaa, and Daadhii (Figure 9).
Coffee Ceremony
According to the community discussions and key informant interviews, drinking coffee is an essential part of daily life and plays a vital role in the socio-economic, political, cultural, and religious life of the people. Making coffee involves roasting coffee beans and boiling water, and the ceremony provides an occasion to discuss issues such as culture, work ethics, peace, health, education, and other societal matters. The traditional coffee ceremony plays a great role in creating opportunities to resolve conflicts among community members through the traditional conflict resolution mechanism called Jaarsummaa. The ceremony itself reflects that sharing it means the neighbours are at peace with each other, and the guests wish peace to the host family, saying 'bunaa fi nagaa hin dhabinaa', meaning 'let coffee and peace be upon you'. Figures 11 and 12 illustrate some of the cultural features that exist in Bunno Bedelle zone. Given that they share a similar culture with other localities in the zone, the local communities can utilize these cultural features if ecotourism is developed in their area. With regard to the benefits of ecotourism related to cultural tourism in Bunno Bedelle zone, the participants in the FGD and interview sessions pointed out that the local community can benefit greatly. The potential cultural features of the Oromo ethnic group in Bunno Bedelle zone include cultural dances, hairstyles, cultural food preparation, and traditional home construction, and the local communities can also open souvenir shops to sell traditional materials and clothing. Hence, the presence of such cultural features in Bunno Bedelle zone is an opportunity for the development of ecotourism.
Challenges faced to develop ecotourism at destinations of Bunno Bedelle Zone
The collected information indicates that Bunno Bedelle Zone has huge potential for ecotourism development. However, there are challenges that limit its initiation and development: a lack of awareness and limited participation of the community in decision making and management activities, limited community capacity building in tourism concepts, a lack of expertise in the area to develop community-based ecotourism, and a lack of basic infrastructure such as adequate transport facilities, communication facilities, health care facilities, accommodation services, and lodge services that are essential for visitors and the host community. These are some of the challenges that hinder the development of ecotourism in the area (Figure 12).
Figure 13: Responses of the respondents concerning the different challenges to developing ecotourism.
As shown in Figure 13, 24.2% of respondents mentioned limitations in community capacity building, and 22.9% mentioned a lack of awareness and limited community participation as obstacles to promoting the tourism destinations of the area. This indicates that a lack of knowledge and awareness can affect community participation in every aspect of conservation activity; given that community participation is an essential element of ecotourism development, a lack of awareness and participation can hamper the establishment and development of ecotourism. A further 19.41% of sampled respondents replied that the lack of facilities and services at the destination sites is another challenge that leads the community towards resource degradation. The participants in the FGD confirmed that there is not enough infrastructure and accommodation around the tourism destination sites; unless this infrastructure problem is solved, it will be challenging to develop ecotourism successfully in the area. This is in line with the studies of Demeke and Verma (2014) and Alemayehu (2011), which identified limited transportation and accommodation facilities as challenges for ecotourism in their study areas. Finally, 17.06% of sampled respondents mentioned that coordination between the regional, zonal, and woreda culture and tourism offices is very weak.
There is no strong link or bottom-up and top-down communication between the offices. Tourism is a multi-sectoral activity that requires the participation of different stakeholders; hence, its success is greatly determined by the role each stakeholder plays. A key to the success of ecotourism is the formation of strong partnerships so that the multiple goals of conservation and equitable development can be met.
Limited Tourism Research and Development
Limited research on tourism development is another challenge for promoting the potential tourism resources of the destinations in the study area. Little research has been conducted in Bunno Bedelle zone on tourism in general and ecotourism in particular. This indicates that the study area has not been promoted and that further study is needed to promote its potentials and to facilitate the establishment of community-based ecotourism, for the sake of minimizing resource degradation, providing income to the country in general, and improving the livelihoods of the local community surrounding the area in particular.
Therefore, to overcome such obstacles, urgent solutions are needed through mechanisms that facilitate the promotion of the tourism destination sites and the participation of the community in every aspect of conservation activity.
CONCLUSIONS AND RECOMMENDATIONS
This research examined the potentials and challenges of promoting the tourism destinations of Bunno Bedelle zone, south-western Ethiopia. The results show that the area has huge attractive natural and cultural resources needed for ecotourism development. These include the Dabena and Didessa rivers; a diversity of fauna such as African buffalo (Syncerus caffer), bush pig (Potamochoerus larvatus), black-and-white colobus monkey (Colobus guereza), common bushbuck (Tragelaphus scriptus), waterbuck (Kobus ellipsiprymnus), plains zebra (Equus quagga), and blue monkey (Cercopithecus mitis); rivers that are sources of fish; attractive forests; and cultural resources of the area such as the dressing style and the delicious cultural food and drink of the local community. All of these are valuable opportunities for developing ecotourism. However, despite these potential resources, ecotourism development in the destinations is constrained by a lack of community knowledge about ecotourism, limited community capacity building by government organizations, a lack of community social infrastructure that has led the local community to destroy the potential resources of the study area, a lack of promotion of the area's potential tourism resources, and a lack of cooperation among the community, governmental organizations, and private organizations in discussing the constraints and opportunities of the study area. To establish ecotourism and minimize the challenges facing the tourist destinations, the following recommendations are put forward.
There should be a broader awareness creation program for the wider community on the tourism business and its impacts on their lives, through the provision of adequate training.
To ensure the establishment of community-based ecotourism in the study area, local communities must undergo various capacity-building programs; this, in turn, provides the communities with the skills needed to manage tourism. Local community social infrastructure services should be provided in order to reduce community dependence on natural resources. Further investigation is needed to identify and promote the potential resources of the destinations.
6.Acknowledgements
My greatest and deepest appreciation goes to the study village communities for their kind support of this study. Thanks are also owed to all individuals and organizations who directly or indirectly participated in this research, and to the journal, which waived the fees. | 5,104.4 | 2020-12-01T00:00:00.000 | [
"Business",
"Economics",
"Environmental Science",
"Geography"
] |
Self-consistent assessment of Englert-Schwinger model on atomic properties
Our manuscript investigates a self-consistent solution of the statistical atom model proposed by Berthold-Georg Englert and Julian Schwinger (the ES model) and benchmarks it against atomic Kohn-Sham calculations and two orbital-free models of the Thomas-Fermi-Dirac (TFD)-$\lambda$vW family. The results show that the ES model generally offers the same accuracy as the well-known TFD-$\frac{1}{5}$vW model; however, the ES model corrects the failure of the Pauli potential in the near-nucleus region. We also point to the inability to describe low-$Z$ atoms as the foremost concern in improving the present model.
I. INTRODUCTION
Orbital-free density functional theory (OFDFT) is an attractive path to linear-scaling DFT methods. Modern OFDFT is mostly investigated in the setting of non-local or generalized gradient approximation (GGA)-style kinetic functionals. Potential functionals, such as those used in Berthold-Georg Englert and Julian Schwinger's statistical atom model [1-3], have fallen out of favor. Yet it has been shown that GGA-style orbital-free functionals usually suffer from a theoretical flaw: the atomic nucleus is not well described, as evidenced by the singular Pauli potential [4-6]. The Pauli potential describes a positive effective repulsion arising from the Pauli exclusion principle, and its violation implies a fundamentally wrong solution of the electronic problem. Potential functionals allow linear-scaling DFT to sidestep this flaw, and our intention here is to quantify the potential functionals' self-consistent accuracy by benchmarking Englert and Schwinger's model [2,3] against more well-known OFDFT models.
A flurry of recent activity uses the potential as a variable instead of the density. Much of this work focuses on the role of the Pauli potential, especially in atoms [5,7,8]. The potential variable has even been combined with machine learning [9], which has also had some applications in the search for kinetic energy functionals [10,11]. Pseudopotentials [6,12] have been useful in conventional approaches to extended systems with OFDFT, where an active effort to improve the transferability of local pseudopotentials is ongoing [13,14]. While this solution works remarkably well [12], a few questions remain for systematic improvement. In general, how much of the overall accuracy stems from the better description of the ion-electron interaction and how much from the better electronic kinetic energy functional? Even more importantly, in the case of fitted pseudopotentials, when is a pseudopotential overfitted compared to the kinetic energy functional? One approach is to investigate the full-potential solution of the OFDFT equations, as we did using the projector-augmented-wave method [15], but one of the problems here is the near-nucleus singularity of the Pauli potential, which brings the solution's plausibility into question.
The problem near the atomic nucleus is not a new one, and it motivated the development of the statistical atom model [1]. Englert and Schwinger's approach, however, is based on the effective potential instead of the density, which allows a natural energy-scale separation of the problematic core electrons from the bulk. In particular, we are interested in the model proposed in two papers [2,3] and later refined in Englert's book, Semiclassical Theory of Atoms [16].
The perturbative results from the statistical atom are of amazing accuracy, but because our focus is on extended systems, self-consistent solutions of atoms are more interesting to us. We therefore explore a self-consistent solution of Englert and Schwinger's model and assess its quality by comparing it to two other representative orbital-free models belonging to the Thomas-Fermi-Dirac (TFD)-$\lambda$vW family. The results show that the ES model generally offers the same accuracy as the well-known TFD-$\frac{1}{5}$vW model, but it works better for a few quantities, especially the Pauli potential. We also found that the model's current limitation is its inability to describe atoms with a low atomic number ($Z<12$).
II. THE MODEL
We briefly present the potential functional formalism in DFT and show how it can be used to correct the Thomas-Fermi description of atoms.
A. Formalism
We start with the usual DFT formalism, where the energy is separated as
$$E[n] = T_s[n] + \int d\mathbf{r}\, n(\mathbf{r})\, V_{\text{ext}}(\mathbf{r}) + E_{\text{int}}[n], \qquad (1)$$
where $E_{\text{int}}$ is the functional containing the electron-electron interaction. We enforce the particle-number restriction $N$ via a Lagrange multiplier $\zeta$ and apply a Legendre transformation to the kinetic energy,
$$T_s[n] = E_1[V+\zeta] - \int d\mathbf{r}\, n(\mathbf{r})\,\left[V(\mathbf{r})+\zeta\right],$$
where we have introduced an effective single-particle potential $V$.
By incorporating this into functional (1), we arrive at the joint functional
$$E[V,n,\zeta] = E_1[V+\zeta] - \int d\mathbf{r}\, n(\mathbf{r})\,\left[V(\mathbf{r})-V_{\text{ext}}(\mathbf{r})\right] + E_{\text{int}}[n] - \zeta N,$$
in which the variables $n$, $V$, and $\zeta$ are treated as independent [17]. The variations of this functional lead to the relations
$$n(\mathbf{r}) = \frac{\delta E_1}{\delta V(\mathbf{r})}, \quad (2) \qquad \int d\mathbf{r}\, n(\mathbf{r}) = N, \quad (3) \qquad V(\mathbf{r}) = V_{\text{ext}}(\mathbf{r}) + V_{\text{int}}(\mathbf{r}), \quad (4)$$
where $V_{\text{int}}$ is the single-particle interaction potential. Here we use the binding energy $\zeta$ instead of the more common chemical potential $\mu = -\zeta$. The energy $E_1[V+\zeta]$ can be approximated semiclassically [16,18].
We emphasize here that working with this joint functional is still working within the DFT framework given by the Hohenberg-Kohn theorems, although in this formalism we acknowledge the effective potential and the chemical potential as independent variables. For simple cases like the Thomas-Fermi theory, it is possible to reduce the potential-functional version of $E_1[V+\zeta]$ to density functionals and vice versa [16]. Because we are working with a single-particle potential and $E_1$ will be approximated with the help of a single-particle potential, the kinetic energy in this model is the non-interacting kinetic energy, which Kohn-Sham and other OFDFT schemes also use.
B. Semiclassical E 1
Englert and Schwinger derived a semiclassical Airy-averaged expression for $E_1$; this can be understood as a Thomas-Fermi expression that contains quantum corrections. The semiclassical expression is given in terms of Airy-average functions $F_m$, which are closely related to Airy functions.
The derivation is based on the approximation that each electron moves in a local harmonic potential. This is then expanded in terms of the $\mathbf{r}$, $\mathbf{p}$ commutators to first order (Thomas-Fermi corresponds to the zeroth-order approximation, i.e., position and momentum commute) [16].
The main quantity in the quantum-corrected theory is a scaled potential variable $y$, and the Airy-average functions $F_m$ are functions of this variable; the remaining Airy-average functions can be obtained recursively. The quantum-corrected Thomas-Fermi expression for $E_1$ includes corrections to second order in the potential, $O(\nabla^2 V)$. Reducing it to a kinetic-energy functional [16] results in the Thomas-Fermi kinetic energy functional plus one-ninth of the von Weizsäcker kinetic energy functional, which is more widely known as the gradient-corrected Thomas-Fermi functional.
We note that the semiclassical evaluation is not valid near the atomic nucleus, which is why it is reasonable to split the evaluation of the energy into two parts: electrons described well by the Thomas-Fermi theory, and strongly bound electrons (SBE) that are better described as hydrogenic states [19]. This is the main motivation for the joint-functional treatment, as a similar treatment is unavailable in terms of density functionals.
The energy $E_1$ is the trace over the non-interacting atomic Hamiltonian,
$$E_1[V+\zeta] = \mathrm{tr}\!\left[(H+\zeta)\,\eta(-H-\zeta)\right],$$
where the Hamiltonian is $H = \frac{1}{2}p^2 + V(\mathbf{r})$ and $\eta$ is the Heaviside step function. We can split this into two parts using an energy scale given by $\zeta_s$, so that the electrons with energy levels above $\zeta_s$ are treated semiclassically and the electrons with energy below $\zeta_s$ are treated as discrete quantum states. We accomplish this by adding an intelligent zero based on $\zeta_s$,
$$E_1[V+\zeta] = \underbrace{E_1[V+\zeta] - E_1[V+\zeta_s]}_{E_{\zeta\zeta_s}} + \underbrace{E_1[V+\zeta_s]}_{E_S},$$
where the functional $E_{\zeta\zeta_s}$ is approximated semiclassically and $E_S$ is evaluated with some other quantum-mechanical method. Formally, this can be thought of as evaluating the semiclassical approximation, subtracting the semiclassical evaluation on energy scales below $\zeta_s$, and finally adding the correct treatment for the electrons below the energy $\zeta_s$.
The critical part of the ES model is to treat the exact part $E_S$ with hydrogenic states. It is assumed that the potential near the nucleus is close enough to the Coulomb potential $-Z/r$ that a perturbative evaluation of the states is sufficiently accurate. Here $n_s$ denotes the uppermost electron shell treated with hydrogenic states and $n_{\mathrm{SBE}}$ the density of the electrons treated with hydrogenic states. By a shell we mean the collection of all hydrogenic states with the same energy; thus the shells are labeled by the principal quantum number $n$. The density $n_{\mathrm{SBE}}$ is obtained from spherically averaged hydrogenic states,
where $\psi_i^{\mathrm{av}}$ is the spherically averaged wavefunction of the $i$th hydrogenic shell; for justification and details, see [1,19]. Originally, Englert and Schwinger investigated corrections to the Thomas-Fermi model, and as they pointed out, the cut-off energy $\zeta_s$ has a certain ambiguity because we are trying to patch a continuous semiclassical model with a discrete quantum model. The most consistent way to achieve this for the Thomas-Fermi model is to average over two electronic shells with equal weights on the energy scale. This is also a good choice for the Thomas-Fermi model with corrections used here.
More concretely, this means averaging over the corrected binding energies of two shells, where $\zeta_i$ is the corrected binding energy of the $i$th electronic shell. For practical purposes, these corrections can be embedded inside the Airy-average functions $F_m$, which thus become corrected $F_m$. The corrections induce their own $y$ variables, and by making this replacement we can use the energy expression (5). The corrections for the strongly bound electrons are then contained in the functions $F_m$.
Our numerical results indicate that for all atoms of the relevant size ($Z<100$), the only energetically plausible correction is the one where only the lowest shell (i.e., the 1s electrons) is treated exactly, with the averaging performed over shells 1 and 2. This approximation naturally breaks down when the total number of electrons is low, namely $Z \sim 12$.
D. Semiclassical Density
The density can be calculated via the relation $n = \delta E_1/\delta V$, Eq. (2). The resulting density splits into two contributions, a semiclassical part $\tilde{n}$ and the contribution of the strongly bound electrons, where $e_1$ is the energy density of $E_1$. Note that the contribution from the strongly bound electrons contains the derivative with respect to the core binding energy $\zeta_s$, because it depends on the potential via (6); consequently, there are contributions from the hydrogenic states that depend on the potential.
The semiclassical contribution $\tilde{n}$ in spherical symmetry contains the Thomas-Fermi limit in its first term, $\frac{1}{2\pi^2}\nabla V\, F_2$. The expression is valid only for neutral atoms and positive ions, as it assumes that the gradient of the potential is positive.
Theoretically, using a simpler density expression, consisting of only the first two terms of (8), is also a viable option. For our benchmarking we use only the aforementioned density (8), but later we discuss the self-consistent accuracy of the simpler density.
As in the case of the energy, we can include the correction for the strongly bound electrons by using the corrected Airy-average functions $F_m$.
E. Interaction
So far, we have detailed how to calculate $E_1$ and the electronic density, which are independent of any particular interaction. Now we detail the electronic interaction, included via the effective potential $V$.
The interaction term $E_{\text{int}}$ is separated into an electrostatic (Hartree) term and an exchange term, and the corresponding interaction potential follows by functional differentiation. The Hartree term is the well-known Coulomb integral, $V_H(\mathbf{r}) = \int d\mathbf{r}'\, n(\mathbf{r}')/|\mathbf{r}-\mathbf{r}'|$, and its connection to the density will be used to achieve a self-consistent solution. We do not add correlation effects to the model, as the effect of correlation in atoms is below the accuracy of the semiclassical $E_1$ [16].
Before the model is complete, we must specify how to add the exchange effects. The aim is to include local exchange effects as described by the Dirac exchange functional [20],
$$E_{\text{ex}}[n] = -\frac{3}{4}\left(\frac{3}{\pi}\right)^{1/3}\int d\mathbf{r}\, n^{4/3}(\mathbf{r}). \qquad (9)$$
To be consistent, we discard exchange effects on the strongly bound electrons, because we have already approximated them as non-interacting (although they receive a marginal contribution via the potential). The exchange potential needed for self-consistency is given by the functional derivative of this expression. The most straightforward way to add exchange effects is simply to replace $n \to \tilde{n}$, so that only the smooth part is treated with exchange. Testing showed that while this method produces exchange effects of the correct magnitude, it is not the most accurate way to include exchange while discarding the contributions of the strongly bound electrons.
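For orientation, the snippet below evaluates the textbook Dirac (LDA) exchange energy and potential on a radial grid in Hartree atomic units; it illustrates Eq. (9) only, not the potential-based ES variants discussed next, and the grid, weights, and toy hydrogen-like density are our own choices.

```python
import numpy as np

# Dirac exchange in Hartree atomic units:
#   energy density  e_x[n] = -(3/4) (3/pi)^(1/3) n^(4/3)
#   potential       V_x[n] = -(3/pi)^(1/3) n^(1/3)
C_X = (3.0 / np.pi) ** (1.0 / 3.0)

def dirac_exchange(n, r, w):
    """Exchange energy and potential for a spherically symmetric density.

    n : density n(r) on the grid points r
    w : quadrature weights for the radial integration (non-uniform grids allowed)
    """
    v_x = -C_X * np.cbrt(n)
    e_x = -0.75 * C_X * n ** (4.0 / 3.0)
    E_x = np.sum(4.0 * np.pi * r**2 * e_x * w)   # dV = 4*pi*r^2 dr
    return E_x, v_x

# Toy example: hydrogen-like 1s density n(r) = exp(-2r)/pi (one electron)
r = np.linspace(1e-4, 20.0, 4000)
w = np.gradient(r)
n = np.exp(-2.0 * r) / np.pi
E_x, v_x = dirac_exchange(n, r, w)
print(E_x)   # roughly -0.21 Hartree for this density
```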
The second route, taken by Englert and Schwinger, is to start from the Thomas-Fermi theory to arrive at a potential description of the exchange. At the Thomas-Fermi level, the exchange potential can be calculated from $V_{\text{ex}} = \pi\,\partial n/\partial V$, and the exchange energy density $\epsilon_{\text{ex}}$ from the relation $\partial\epsilon_{\text{ex}}/\partial y = \frac{1}{2\pi^2}\nabla V\,\frac{4}{3}V_{\text{ex}}^{2}$. To include exchange effects at the Thomas-Fermi level, the obvious choice for $n$ here is the Thomas-Fermi part of the density, $\frac{1}{2\pi^2}\nabla V\, F_2$, resulting in the potential (11) and the corresponding exchange energy. We found this description of the exchange potential inadequate at larger distances in the atoms. Thus, we also use another version of the exchange potential [2], which uses the density expression $\tilde{n}$ with the Laplacian of the potential approximated away. The resulting exchange potential, Eq. (12), is the one we use in our implementation, and we expect the previous exchange energy description to be accurate enough for this potential. Later we discuss the different exchange approximations that are meaningful.
III. IMPLEMENTATION
We obtain the self-consistent solution via the relation $n = \delta E_1/\delta V$ and the connection between the single-particle potential $V$ and the electrostatic potential $V_H$. For a given effective potential we compute the density, solve the Poisson equation for the new electrostatic potential, and construct the new effective potential, where in $V_{\text{ex}}$ we use the old potential and density. This is iterated until the changes in the potential and the binding energy are small enough. With a good initial guess, this procedure gives the self-consistent solution; here the Thomas-Fermi potential is a sufficient initial guess.
We describe this idea in a bit more detail for spherically symmetric atoms. With relation (4) and the connection of the electrostatic potential $V_H$ to a Poisson equation, we arrive at a differential equation for the effective potential with the boundary conditions $rV(r\to 0) \to -Z$ and $rV(r\to\infty) = -Z+N$, where the zero of the potential is assigned infinitely far from the nucleus. Using the spherical symmetry of the system and introducing the auxiliary quantity $V(r) = -\frac{Z}{r}\Phi(r)$, we obtain a differential equation for $\Phi$ that also contains the term $rV_{\text{ex}}$, with the boundary conditions (13). After obtaining the effective potential $V$, we find the corresponding binding energy with Newton's method from the relation $N - \int d\mathbf{r}\, n(V+\zeta) = 0$. We use a non-uniform grid that is denser near the nucleus, where the values change more rapidly than in the tail (the grid is correspondingly sparser in the tail).
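The Newton step for the binding energy is simple to sketch. The code below is a schematic stand-in, not the authors' implementation: the function density_of_zeta represents the integrated electron number $\int d\mathbf{r}\, n(V+\zeta)$ at fixed effective potential, here replaced by an arbitrary monotonic toy function, and the numerical derivative and tolerances are our own choices.

```python
import numpy as np

def newton_zeta(particle_number, density_of_zeta, zeta0=1.0, tol=1e-10):
    """Newton iteration for zeta from  N - integral of n(V + zeta) = 0.

    density_of_zeta(zeta) must return the integrated electron number for the
    current (fixed) effective potential V.
    """
    zeta = zeta0
    for _ in range(100):
        g = particle_number - density_of_zeta(zeta)
        h = 1e-6 * max(abs(zeta), 1.0)                    # finite-difference step
        dg = (particle_number - density_of_zeta(zeta + h) - g) / h
        step = -g / dg                                    # Newton update
        zeta += step
        if abs(step) < tol:
            break
    return zeta

# Toy stand-in: integrated particle number grows monotonically with zeta.
toy = lambda zeta: 36.0 * (1.0 - np.exp(-0.5 * zeta))
print(newton_zeta(30.0, toy))   # converges to 2*ln(6) ~ 3.58
```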
A. Numerical Method
Englert and Schwinger's original paper [3] uses the shooting method to solve the resulting differential equation. We instead solve it with a simple 1D finite element method with linear elements, for which we derive the weak form of the differential equation with a test function $v$. During testing we noted that the exchange potential is the most numerically sensitive quantity. In the original numerical study [3], the solution for neutral atoms was not obtained because the long-range behavior of the effective potential was unknown.
We obtain the neutral-atom solution simply by setting the boundary condition to zero, as dictated by (13), and we then converge the result with respect to the grid size until the errors due to the finite grid are below the error threshold.
Our differential equation differs slightly from the one in Englert and Schwinger's book [16]: we use the more straightforward expression derived from the Poisson equation, whereas in the book all the terms containing the Laplacian of the effective potential are moved to the left-hand side of the equation.
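To illustrate the discretization, the sketch below assembles a linear-element finite element system on a non-uniform radial grid for a generic model problem $-u''(r) = f(r)$ with Dirichlet boundary values. It is not the actual Englert-Schwinger equation, whose right-hand side also depends on the unknown potential, and the grid construction and trapezoidal load lumping are our own simplifications.

```python
import numpy as np

def solve_fem_1d(r, f, u_left, u_right):
    """Linear 1D finite elements for -u'' = f with Dirichlet boundary conditions."""
    n = len(r)
    h = np.diff(r)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for e in range(n - 1):
        # element stiffness for linear shape functions
        k = (1.0 / h[e]) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        # trapezoidal load: each node receives half of the element integral
        fe = 0.5 * h[e] * np.array([f[e], f[e + 1]])
        A[e:e + 2, e:e + 2] += k
        b[e:e + 2] += fe
    # impose Dirichlet conditions by overwriting the boundary rows
    for idx, val in ((0, u_left), (n - 1, u_right)):
        A[idx, :] = 0.0
        A[idx, idx] = 1.0
        b[idx] = val
    return np.linalg.solve(A, b)

# Non-uniform grid, denser near the nucleus, sparser in the tail
r = np.unique(np.concatenate([np.linspace(0.0, 1.0, 200) ** 2 * 2.0,
                              np.linspace(2.0, 20.0, 200)]))
u = solve_fem_1d(r, f=np.ones_like(r), u_left=0.0, u_right=0.0)
```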
IV. RESULTS
We benchmark the numerical results against the perturbative results calculated by Englert and Schwinger, the Kohn-Sham results, and the TFD-$\lambda$vW models. First we evaluate our results against Englert and Schwinger's [16], and then compare the self-consistent results for the following methods: ES, the Englert-Schwinger model with exchange potential (12) and density expression (8); KS-LDA, the Kohn-Sham method with local (Dirac) exchange; and the TFD-$\lambda$vW models. All of these models are based on DFT, and in each model we treat the exchange effect with the Dirac exchange (9). We therefore choose to benchmark against Kohn-Sham as the most accurate of the approximations; originally, Englert and Schwinger opted to benchmark against Hartree-Fock data.
The exact difference between the orbital-free models and the ES model is a bit more subtle. Both the orbital-free models and ES are, in a sense, approximating the energy $E_1$: the ES model approximates it directly, including both kinetic and potential terms, while the TFD-$\lambda$vW models approximate it by approximating the kinetic energy density functional only. From a formal point of view, TFD-$\frac{1}{9}$vW and the ES model both expand the semiclassical trace to the same order in $\hbar$, but the TFD-$\frac{1}{9}$vW model disregards the strongly bound electron correction completely. The atomic Kohn-Sham solver used is available in the grid-based implementation of the projector-augmented-wave method (GPAW) DFT package [22]. The kinetic energy density functional methods are implemented within this same Kohn-Sham solver; the implementation details are described elsewhere [15,23].
A. Comparison to Englert-Schwinger Reference Numerical Data
We first study the model presented in the book Semiclassical Theory of Atoms 16 .Thus, we use density expression (8) and exchange potential (11) and solve the resulting differential equation.
We compare the numerical results against the ones provided in [16] in Table I for Krypton.
B. Effect of Exchange Potential
The exchange potential and energy have a few possible approximations, as indicated in Section II E. To choose the best one, we look at Krypton to see the effects of the different exchange functionals. We consider the total energy, the binding energy $\zeta$, and the density averages $\langle 1/r\rangle$, $\langle r\rangle$, and $\langle r^2\rangle$; the last of these probes the density at the outer reaches of the atom. The results for the different exchange expressions for neutral Krypton are tabulated in Table II.
From the change in $\langle r^2\rangle$ we can see that the choice of exchange potential has a strong effect near the edge of the atom, which is quite natural [16]. As the exchange energy is negative and the corresponding potential is attractive, we expect the correct inclusion of exchange to make the atomic size smaller.
The energy difference is not that informative on the semiclassical scale if we compare it to the highly accurate Englert-Schwinger prediction for the semiclassical energy, which is -2747.64 Hartree for Krypton; the deviation from this value is 0.7%, 0.1%, and 0.2% for methods (10), (11), and (12), respectively. Our choice should therefore be based on the quality of the density in the outer reaches of the atom. The experimental value of $\langle r^2\rangle$ can be obtained via diamagnetic susceptibilities [3], which are provided by the CRC Handbook of Chemistry and Physics [24]; the experimental value of $\langle r^2\rangle$ for Krypton is 1.010 Bohr². This indicates that all methods are somewhat insufficient, but that (12) is clearly the best for $\langle r^2\rangle$. Finally, we note that there is a deviation of 0.05-0.1 Bohr² from the reported $\langle r^2\rangle$ values [3], so the deviation from the experimental value may already be at the level of precision of the chosen methods.
We therefore determined that for the self-consistent ES atom model, the exchange potential (12) is preferred. Still, the model's accuracy in the outer reaches of the atom is somewhat limited by the exchange approximation, and it remains unclear how the exchange effect should be treated for the strongly bound electrons. We intend to address this point in future work.
In the following, we use exchange potential (12).
C. Assessment of ES Improvements in Neutral Atoms
We compute the energies and a few other descriptive quantities for the ES model and compare them to the other models for neutral systems, to assess the quality of the self-consistent solution. The total energy as such is not the most important quantity, as the real predictive power lies in energy differences, but it is still useful because it reflects the quality of the approximations made. As previously mentioned, we consider Kohn-Sham the "ground truth" here. Most of our quantities are density averages that probe the shape and quality of the density. The near-nucleus region is probed by ⟨1/r⟩, which is related to the shielding of the nuclear magnetic moment [16]. The average ⟨r⟩ is related to the electric polarizability [16]. Finally, as mentioned in the previous section, we measure ⟨r²⟩, which probes the density at the atom's outer edges and is related to the diamagnetic susceptibility.
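As a concrete illustration of how such density averages can be evaluated, the sketch below integrates a spherically symmetric density on a radial grid; the grid, the toy hydrogen-like density, and the choice of a total (rather than per-electron) average are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

def integrate(y, x):
    # trapezoidal rule, written out explicitly to stay version-agnostic
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def radial_moments(r, n_r):
    """Density averages <1/r>, <r>, <r^2> of a spherically symmetric density
    n(r) on a radial grid (atomic units), computed as
    the integral of n(r) * r^k * 4*pi*r^2 dr."""
    shell = 4.0 * np.pi * r**2 * n_r
    return {label: integrate(shell * r**k, r)
            for label, k in (("<1/r>", -1.0), ("<r>", 1.0), ("<r^2>", 2.0))}

# toy check: hydrogen 1s density n(r) = exp(-2r)/pi gives 1, 1.5 and 3
r = np.linspace(1e-6, 30.0, 20000)
print(radial_moments(r, np.exp(-2.0 * r) / np.pi))
```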
For atoms, energy differences are tested only by calculating ionization potentials, i.e., by comparing the total energies of the charged and neutral systems. As the semiclassical model incorporates no information about the valence electron shells, our primary interest is in the ionization potentials of the alkali metals, where the semiclassical approximation could produce good results.
First we show the general trends over the whole range of Z to get a sense of what is or is not a reasonable comparison. Figure 1 shows the relative error with respect to the Kohn-Sham energies. It is reassuring that the error is generally small and decreases for both methods as Z grows, which means that the methods are capturing the essence of the Thomas-Fermi theory. The larger deviation of ES at small Z can be explained by the strongly bound electron correction: averaging over shells is not as valid for small Z. For high Z, one possible explanation is that we are correcting for slightly too few electrons, but adding a second shell of strongly bound electrons does not improve the situation, as we then overcorrect.
The expectation value ⟨1/r⟩ in Figure 2 is quite smooth for all models, as mainly the strongly bound electrons contribute to the density near the nucleus. The ES model handles them explicitly, while the TFD-λvW model seems to handle them implicitly with the von Weizsäcker term.
In Figure 3, we start to see the so-called shell oscillations in ⟨r⟩. The main comment here is that we should not take this value too seriously when doing comparisons, as long as the semiclassical model has a decent average over the shell effects. As is obvious, both models satisfy this requirement.
Next we look at the general trend of ⟨r²⟩ in Figure 4. The shell oscillations already present in the case of ⟨r⟩ are magnified, as we are probing even farther reaches of the atoms. We see again that both OFDFT models are reasonable.
Comparing the ES model with KS-LDA and experimental numbers should only be taken seriously for the inert atoms, which have a closed-shell structure. For Xenon (Z = 54) we see that the experimental value is closer to the semiclassical value than to KS-LDA. The ES model is actually in better agreement with the experimental values, as it assigns smaller sizes to almost all atoms [25]. The effect is mostly due to the exchange term, as seen from Table II. The origin of these shell oscillations in the Kohn-Sham model is obvious: some outer orbitals are more delocalized than others.
Next we compare the different atomic models using the absolute values of the numbers presented as general trends in Figures 1 through 4. We feel it is informative enough to focus on three representative closed-shell systems: Argon (low Z, where the strongly bound electron approximation is still valid), Krypton (medium Z), and Xenon (high Z).
From Figures 1 through 4 and Tables III through V we can draw some conclusions. The first is the remarkable and well-known accuracy of the TFD+(1/5)vW model, and the fact that ES is much better than TFD+(1/9)vW, whose functional is formally expanded to the same order in ℏ but is missing the corrections for strongly bound electrons. We did not include TFD+(1/9)vW in Figures 1 through 4, because its accuracy is substantially lower than that of the other models: its deviation from KS-LDA for Argon, Krypton, and Xenon is 7.1%, 5.4%, and 4.6%, respectively, which is substantially higher than for TFD+(1/5)vW or ES, as seen from Figure 1. Overall, it seems that for Z > 40, TFD+(1/5)vW and the ES model offer relatively similar accuracy, with TFD+(1/5)vW being slightly better. For all models, the density-dependent quantities ⟨r⟩, ⟨r²⟩ and ⟨1/r⟩ are quite reasonable, surprisingly even for TFD+(1/9)vW. This reflects the fact that all the models have a reasonable density average over the shell effects in the KS-LDA densities, which are shown in Figure 5.
Near the nucleus, TFD+(1/5)vW has the best average description compared to KS-LDA if we look at the quantity ⟨1/r⟩. The worst deviation for TFD+(1/5)vW is for Krypton, 0.6% from KS-LDA, while for TFD+(1/9)vW and for ES the worst case is Argon, with deviations of 6.1% and 6.7%, respectively.
The first quantity to contain shell effects is ⟨r⟩. For these values, all the models give surprisingly similar results. The worst case for ES is Argon, where we have a deviation of 3.2%, while the worst case for the kinetic energy functionals is Krypton, where the deviation is about 5.1%. Again we note that the shell oscillations play a role here and that the Argon value for ES is probably particularly bad: in Figure 3 we see a bump in the ES values for small Z. This is most likely a slight artifact caused by the averaging procedure.
The quantity ⟨r²⟩ is an interesting one. For Argon, all the semiclassical models give similar results, which are significantly above the KS-LDA value, up to a deviation of about 10% for TFD+(1/5)vW. The case of Krypton is similar, except for ES, which is closer to KS-LDA than the kinetic energy functionals. Finally, for Xenon, ES lies below all the other models, which is a good thing if we look at the experimental values in Figure 4.
For the inert atoms there is a trend: all semiclassical values start above the KS-LDA values at Argon and end up below KS-LDA at Xenon. However, we should not read too much into this, as the quantity depends strongly on the outer orbitals of the particular element, as Figure 4 shows. Judged against the experimental values, the ES model clearly wins in the description of the atom's outer reaches.
The ionization potential is calculated from the difference of total energies, E(N) − E(N − 1); the results are in Table VI. We calculated it only for the alkali metals, as the ES model fails to provide reasonable ionization potentials for other elements because the shell effects are missing.
A deviation in the ionization potential emerges (up to 9% for Cesium), but the trend is similar for both methods.
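For completeness, the arithmetic behind Table VI amounts to a single subtraction; the sketch below uses made-up, roughly Lithium-like total energies (not the paper's numbers) and the convention in which the ionization potential comes out positive.

```python
HARTREE_TO_EV = 27.211386  # standard conversion factor

def ionization_potential(e_neutral, e_cation):
    """Ionization potential from total energies: the (positive) magnitude of
    the difference between the cation and the neutral-atom total energies."""
    return e_cation - e_neutral

# illustrative values only, in Hartree
print(ionization_potential(-7.478, -7.280) * HARTREE_TO_EV)  # about 5.4 eV
```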
Atomic densities for the closed-shell atoms are shown in Figure 5, where we also plot the Kohn-Sham densities as a reference. We can see that the ES densities are not completely structureless; they contain some structure due to the averaging over the hydrogenic shells. Yet, obviously, they do not contain the shell structure that Kohn-Sham acquires from the single-particle states.
The preceding results show that the ES model does not lose to the kinetic-energy-functional models and is even better than Kohn-Sham in some special cases. But what the ES model mostly supplies is theoretical clarity and rigor. One thorny issue with the TFD+λvW models and other GGA-based OFDFT models has been the negativity of the Pauli potential v_Θ, which is defined as
v_Θ(r) = δ(T_s[n] − T_vW[n]) / δn(r),
where T_s is the non-interacting kinetic energy and T_vW[n] is the von Weizsäcker kinetic energy functional. It has been shown that v_Θ should always be positive [4], but for TFD+λvW and many other GGA kinetic functionals it has been found to be negative near a nucleus [5].
This emerges because the semiclassical evaluation is not valid near a nucleus; see [16]. This raises the question of how the energetics of TFD+(1/5)vW can be so good (while having reasonable geometric properties, too) if it completely ignores the correction for strongly bound electrons. The answer must be related to the dual nature of the von Weizsäcker term, which plays two roles as a density functional: it is a gradient correction to Thomas-Fermi, but it is also exact for up to two non-interacting particles. These two roles naturally differ by a constant factor, but the form is the same. Kinetic energy functionals of GGA type can be built around this dual nature by interpolating between the two extremes [5,26,27]. Here we are inclined to support the view of Englert and Schwinger: it is more straightforward to exclude the strongly bound electrons from the semiclassical evaluation than to try to modify the functional to include them via the von Weizsäcker term. The near-nucleus Pauli potential singularity is also avoided by the use of pseudopotentials, as the strongly bound electrons are then excluded from the OFDFT calculation, and the non-negativity constraint might be a non-issue for the valence density [28]. We must also remember that the λ value of 1/5 is obtained by fitting, while the strongly bound electron correction is better motivated. If the ES model is compared to the gradient expansion in atoms, it is clearly preferable.
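For reference, the von Weizsäcker functional that appears throughout this discussion is usually written in the following standard form (Hartree atomic units); the paper's own normalization is assumed to agree up to the conventional factor.

```latex
% Standard forms assumed here (Hartree atomic units).
T_{\mathrm{vW}}[n] \;=\; \frac{1}{8}\int d^{3}r\,
  \frac{|\nabla n(\mathbf{r})|^{2}}{n(\mathbf{r})},
\qquad
T_{\lambda\mathrm{vW}}[n] \;=\; T_{\mathrm{TF}}[n] + \lambda\, T_{\mathrm{vW}}[n],
\quad \lambda = \tfrac{1}{9}\ \text{or}\ \tfrac{1}{5}.
```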
D. Accuracy of Self-Consistent Density Approximations
As mentioned earlier, it is not necessary to calculate the density with the full expression (8) to obtain the same semiclassical accuracy. Only the first line of (8) is necessary, as Englert argued [16]; the rest of the expression is a total divergence, i.e., it integrates to zero, so it does not contribute to the number of electrons, only to their distribution. We are interested in this approximation for future use, should the method be extended to larger systems. We also want to see the effect on ⟨r²⟩, to gauge how sensitive it is to density approximations, since we have already established the effect of exchange.
The simple density approximation is obtained by discarding the higher-order variations, which yields expression (14). We compare the two density expressions, (14) and (8), in self-consistent calculations. We focus on Krypton for simplicity, but the trends are reproduced over a range of Z. As both expressions integrate to the electron number, the difference is purely a redistribution of the density.
From Table VII we can see that the simple density assigns more density near the nucleus, while the full expression localizes the density more, as seen in ⟨r²⟩. The full density gives a slightly better energy when compared to KS-LDA. For the energy, the maximal deviation between the density expressions is 0.4% (in Xenon), and for ⟨r²⟩ the maximal deviation is 3%.
Given an effective potential V, we find the corresponding binding energy ζ from the relation N = ∫ dr n[V(r) + ζ]. This yields the density n[V(r) + ζ], which we can use to find a new Hartree potential V_H through a Poisson equation. The new effective potential is then obtained from this Hartree potential, together with the external potential and the chosen exchange potential, and the cycle is repeated until self-consistency.
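A minimal sketch of this self-consistency cycle is shown below, assuming a spherically symmetric problem on a radial grid; the function names, the bisection bracket, and the simple linear mixing are illustrative assumptions, and the model-specific density expression and radial Poisson solver are left as user-supplied callables.

```python
import numpy as np

def integrate(y, x):
    # simple trapezoidal rule (kept explicit to avoid NumPy version differences)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def scf_loop(r, v_ext, N, density_of_potential, hartree_from_density,
             mix=0.3, tol=1e-8, max_iter=200):
    """Schematic orbital-free self-consistency cycle.

    density_of_potential(r, v) -> n(r): the model's density expression,
        e.g. Eq. (8) or (14) of the text (supplied by the caller).
    hartree_from_density(r, n) -> V_H(r): radial Poisson solve.
    The exchange potential is omitted for brevity; it would be added
    to v_new in the same way as the Hartree term.
    """
    v_eff = np.array(v_ext, dtype=float)
    for _ in range(max_iter):
        # 1) fix the binding energy zeta so that the density integrates to N
        def excess_electrons(zeta):
            n = density_of_potential(r, v_eff + zeta)
            return integrate(4.0 * np.pi * r**2 * n, r) - N
        lo, hi = -50.0, 50.0                   # assumed bracketing interval
        for _ in range(80):                    # plain bisection
            mid_point = 0.5 * (lo + hi)
            if excess_electrons(lo) * excess_electrons(mid_point) <= 0.0:
                hi = mid_point
            else:
                lo = mid_point
        zeta = 0.5 * (lo + hi)

        # 2) new density and Hartree potential from the Poisson equation
        n = density_of_potential(r, v_eff + zeta)
        v_new = np.array(v_ext, dtype=float) + hartree_from_density(r, n)

        # 3) linear mixing and convergence check
        if np.max(np.abs(v_new - v_eff)) < tol:
            break
        v_eff = (1.0 - mix) * v_eff + mix * v_new
    return n, v_eff, zeta
```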
The numbers correspond to fair accuracy. The biggest error is the binding energy for Z = 38, charge = 2, where we have a deviation of 3.3%; deviations in the other values are below 2%. The core binding energies ζ₁ and ζ₂ show a strong similarity, which is to be expected, as the potential should have a −Z/r shape near the hydrogenic states. The neutral-atom results by Englert and Schwinger are the result of extrapolation, so they are omitted from the comparison.
Figure 1. Total energy error compared to the Kohn-Sham energy as a function of Z.
Figure 2. Expectation value ⟨1/r⟩ as a function of Z.
Figure 3. Expectation value ⟨r⟩ as a function of Z.
Figure 4. Expectation value ⟨r²⟩ as a function of Z. The experimental values are from [24].
Figure 6. Pauli potential v_Θ of four atomic models.
Table II. Krypton, Z = 36, with different exchange potentials and the full density expression (8). The classical radius is defined by V(r_classical) + ζ = 0.
Table III. Energies and averages of different models for Argon, Z = 18. Results are in atomic units.
Table IV. Energies and averages of different models for Krypton, Z = 36. Results are in atomic units.
Table V. Energies and averages of different models for Xenon, Z = 54. Results are in atomic units. | 7,579.6 | 2017-11-15T00:00:00.000 | [
"Physics"
] |
Temperate Propolis Has Anti-Inflammatory Effects and Is a Potent Inhibitor of Nitric Oxide Formation in Macrophages
Previous research has shown that propolis has immunomodulatory activity. Extracts from two UK propolis samples were assessed for their anti-inflammatory activities by investigating their ability to alter the production of the cytokines: tumour necrosis factor-α (TNF-α), interleukin-1β (IL-1β), IL-6, and IL-10 from mouse bone marrow-derived macrophages co-stimulated with lipopolysaccharide (LPS). The propolis extracts suppressed the secretion of IL-1β and IL-6 with less effect on TNFα. In addition, propolis reduced the levels of nitric oxide formed by LPS-stimulated macrophages. Metabolomic profiling was carried out by liquid chromatography (LC) coupled with mass spectrometry (MS) on a ZIC-pHILIC column. LPS increased the levels of intermediates involved in nitric oxide biosynthesis; propolis lowered many of these. In addition, LPS produced an increase in itaconate and citrate, and propolis treatment increased itaconate still further while greatly reducing citrate levels. Moreover, LPS treatment increased levels of glutathione (GSH) and intermediates in its biosynthesis, while propolis treatment boosted these still further. In addition, propolis treatment greatly increased levels of uridine diphosphate (UDP)–sugar conjugates. Overall, the results showed that propolis extracts exert an anti-inflammatory effect by the inhibition of pro-inflammatory cytokines and by the metabolic reprogramming of LPS activity in macrophages.
Introduction
Propolis is collected by bees from plants surrounding the beehive and mixed with bee saliva and with beeswax. It is used to cover surfaces within the hive prior to laying down the honeycomb and to plug gaps within the hive in order to exclude the outside world. In the UK, Northern Europe, and North America, propolis is collected from poplar trees and their relatives; in Southern Europe and North Africa, much of it comes from cypress trees; and in tropical regions such as Brazil and West Africa, propolis is usually collected from many different plant sources [1,2]. There are numerous literature reports on the immunomodulatory effects of propolis [3][4][5][6][7][8]. In a recent paper, we studied several types of propolis for their effects on the metabolic response of THP-1 cells to treatment with LPS. Figure 1 shows the effect of LPS on NO production in LPS-activated macrophages and the effect of two different types of propolis from the UK (from Essex and from the Midlands) in inhibiting NO production. Similarly, treatment with LPS elevated the levels of interleukin-1β (IL-1β), tumour necrosis factor-α (TNF-α), and IL-6 in macrophages (Figures 2-4), whereas the propolis samples clearly lowered IL-1β and IL-6, with less clear effects in lowering TNF-α. Propolis treatment lowers the production of IL-10, which is considered to be an anti-inflammatory cytokine (Figure 5). This might imply that the propolis does have some pro-inflammatory effects. Propolis is regarded as being immunomodulatory, and it is conceivable that it might both regulate and enhance the immune response. The regulation of IL-10 levels in immune cells is complex [26] and involves other cytokines such as interferons, which might also be regulated by the propolis.
Propolis Does Not Affect Cell Viability
The propolis samples did not affect the cell viability of the macrophages at the concentration used (Figure 6). Table 1 lists the numerous significant effects of the two propolis samples on the cellular metabolites in the response of the macrophages to LPS treatment.
Propolis Treatment Inhibits NO Formation
Analysis of the metabolomics data shows the clear effect of LPS in greatly elevating hydroxyarginine, the intermediate in NO production from arginine, and citrulline, the product remaining after NO formation. The UK propolis samples have a marked effect in lowering both hydroxyarginine and citrulline (Figure 7), which fits with the direct measurement indicating lowered NO formation (Figure 1). The propolis samples also increase argininosuccinate, which is involved in recycling citrulline back to arginine in order to make more NO; its accumulation could be due to inhibition of the conversion of this substrate into arginine. There were no direct effects on arginine levels, which were the same in the macrophages treated with LPS and with LPS + propolis.
Propolis Treatment May Promote Energy Metabolism and Stimulate the Formation of High Energy Phosphates
LPS produced moderate increases in NADH and ATP, which are largely derived from the TCA cycle or fatty acid oxidation, and propolis treatment increased NADH and ATP levels further (Figures 8 and 9). In addition, the levels of other high energy phosphates such as creatine phosphate and uridine triphosphate (UTP) were increased by propolis treatment (Figures 8 and 9). LPS treatment had a marked effect on some glycolysis intermediates with a large increase in glyceraldehyde 3-phosphate (G3P), which was increased still further by the propolis treatments (Table 1).
Propolis Treatment Stimulates Formation of Amino Sugars
The propolis treatments caused increases in the levels of the amino sugars N-acetyl glucosamine, neuraminate, and N-acetylglucosamine phosphate (Table 1) as well as increases in UDP-N-acetyl glucosamine and UDP-glucose ( Figure 10). All these intermediates are increased in the LPS-treated samples and markedly increased in the propolis-treated samples.
Propolis Treatment Lowers Citrate and Increases Itaconate
Citrate was markedly lowered by propolis treatment in comparison with LPS treatment alone, and the anti-inflammatory compound itaconic acid was markedly increased by propolis treatment (Figure 11).
Propolis Treatment Increases GSH Levels
Another important pathway in macrophages is glutathione metabolism, since GSH may modulate the response of the macrophages and may be responsible for damping down the respiratory burst following stimulation with a pathogen product [17,19,20]. The propolis treatments increase GSH levels, and the level of its oxidation product GSSG is also greatly elevated in comparison to LPS treatment alone (Figure 12). In addition, the propolis treatments elevate all the intermediates involved in GSH biosynthesis: glycine, cysteine, glutamate, and gamma-glutamyl cysteine (Table 1).
Propolis Treatment Increases Fatty Acid Metabolism
One of the major effects of the propolis treatment is on the levels of certain fatty acids, which are markedly increased, suggesting that this might be due to triglyceride hydrolysis and a switch towards fatty acid metabolism (Table 1). The increase in fatty acid metabolism is underlined by the elevation of several acyl carnitines, which are required for the transport of fatty acids into mitochondria so that β-oxidation can be carried out. A signature of M2 macrophages in comparison with M1 macrophages is increased fatty acid metabolism [17,19,20]. Some long-chain fatty acids are increased by LPS treatment but are markedly lowered by the propolis treatments, including eicosatetraenoic acid, which, as the precursor of prostaglandins, can be considered pro-inflammatory.
Discussion
It is clear from the viability data and the enhancement of energy metabolites such as ATP and creatine phosphate that propolis, if anything, increases the viability of the cultured macrophages. The cytokine assays largely support the reports [3][4][5][6] that propolis is anti-inflammatory, and the metabolomics data almost perfectly support the idea that propolis treatment is pushing the macrophages towards an M2-like character. The conversion of arginine to citrulline with the formation of NO is inhibited, and levels of GSH and GSSG are increased, pointing towards the promotion of activity against ROS generation; a number of intermediates required for GSH biosynthesis are elevated by propolis treatment. UDP-N-acetyl glucosamine, which is used for glycoprotein formation in M2-type macrophages, is elevated, as are intermediates required for its biosynthesis: UTP, N-acetylglucosamine, and N-acetylglucosamine phosphate. Of note, UDP-N-acetyl glucosamine is required as a substrate for the production of glycans on proteins such as the CD206 receptor, which is abundantly expressed in M2 macrophages. It is well established that in M2 macrophages some of the glycolytic flux is diverted into the amino sugar pathway [20]. The propolis-treated macrophages accumulate glyceraldehyde phosphate to an even greater extent than macrophages treated with LPS alone, suggesting the inhibition of glyceraldehyde phosphate dehydrogenase; this process can promote methylglyoxal formation from glyceraldehyde phosphate and dihydroxyacetone phosphate [27]. The elevation of methylglyoxal by the propolis treatments is supported by a large increase in lactoyl glutathione, which is the cellular detoxification product of methylglyoxal [28]. Methylglyoxal can react with arginine residues in proteins, generating the advanced glycation product carboxyethylarginine, which is elevated following LPS treatment and further elevated by propolis treatment. Carboxyethylarginine has been shown to be an inhibitor of NO production [29]. Citrate accumulates in M1-type macrophages and is important in sustaining the inflammatory response through supporting fatty acid biosynthesis [17,19,20,30] and the production of ROS. The propolis treatments hugely deplete citrate. LPS treatment increases the levels of the anti-inflammatory compound itaconic acid, and propolis treatment increases its levels still further; there is some evidence that itaconate stimulates M2 macrophage polarisation [23].
The metabolism of eicosatetraenoic acid is carried out by peroxisomes [31], and it has been shown that PPAR-γ receptor activation can affect macrophage polarisation [32]. Flavonoids have been shown to inhibit inducible cyclooxygenase and nitric oxide synthase by binding to the PPAR-γ receptor in macrophages [25]. Thus, it would seem that part of the action of propolis may be due to promoting PPAR-γ receptor activation. This has been found to be a feature of many natural products [33], including flavonoids that are present in the propolis extracts. It has been observed that fatty acid metabolism is promoted over glycolysis in M2 macrophages in comparison with M1 macrophages, and the presence of increased levels of several hydroxylated long-chain carnitines and greatly elevated levels of partially oxidised fatty acids in the propolis-treated samples suggests that the propolis treatments are promoting the M2 character in the macrophages [34,35].
The two samples used in this study were typical samples of temperate propolis collected from poplar buds, for which the composition is well established [24,36]. There are some variations between samples 224 and 225. The flavanones pinocembrin and pinobanksin (Figures S2 and S4) appear to be qualitatively similar in the two samples, whereas the flavone chrysin and the flavonols galangin, galangin methyl ethers, and kaempferol appear to be markedly higher in sample 224 (Figures S1, S3, S5 and S6). It was previously observed that flavones and flavonols had a greater effect in inhibiting nitric oxide production by macrophages than flavanones [37][38][39]. This might explain some of the difference in effect between 224 and 225 on the macrophages.
Chemicals and Reagents
Fetal bovine serum, DMEM (1X), phenol red-free DMEM, and RPMI-1640 medium were obtained from Gibco. Glutamine solution, PBS, and penicillin/streptomycin solution were obtained from Lonza. Six-well cell culture plates were purchased from Thermo Fisher Scientific; 96-well cell culture plates were purchased from TPP (Switzerland). Ethanol, trypan blue stain, bovine serum albumin, EDTA, carbonyl cyanide m-chlorophenyl hydrazine, oligomycin A, and ammonium carbonate were obtained from Sigma-Aldrich, Dorset, UK. HPLC grade methanol, acetonitrile, and water were obtained from Fisher Scientific, Leicestershire, UK.
Preparation of Propolis Extracts
The two UK propolis samples were obtained by James Fearnley; sample 224 was from Essex and sample 225 was from the Midlands. Ethanol extracts of approximately 10 g propolis were prepared by vigorous mixing and sonication for 60 min using a sonicating bath (Fisher Scientific, Loughborough, UK). The extracts were filtered and the propolis was re-extracted twice with 100 mL of ethanol (Fisher Scientific, Loughborough, UK). The extracts were combined and evaporated, and the residue was stored at room temperature until required for the assays.
Generation of Bone Marrow-Derived Macrophages (BMMs)
All experiments were approved by and conducted in accordance with the Animal Welfare and Ethical Review Board of the University of Strathclyde and UK Home Office Regulations. Bone marrow was collected from the femur and tibia bones of 6-8-week-old male or female BALB/c mice, which were bred at Strathclyde University and killed by cervical dislocation. The bones were dissected from adherent tissues and washed briefly with 70% ethanol. In sterile conditions, under a tissue culture hood, the bone ends were cut to allow bone marrow elution by washing the bone with DMEM medium. The eluted bone marrow was collected, filtered using a cell strainer, and centrifuged at 400× g for 5 min. The supernatants were then aspirated and replaced with a known amount of fresh complete DMEM medium, and the obtained cells were counted using trypan blue stain in order to culture them at the required density. Cells were plated and cultured on tissue culture Petri dishes at a density of 2 × 10⁶ cells/mL in complete DMEM with 20% L929 cell supernatant [40] and maintained at 37 °C in a humidified atmosphere of 5% (v/v) CO₂. Fresh complete DMEM supplemented with 20% L929 cell supernatant was added on day 4 to feed the macrophages. On day 7, the cells were harvested by scraping them into 5 mL of complete DMEM at 4 °C to allow adherent cell detachment, and they were then collected for further centrifugation at 400× g for 5 min. The viability and number of cells were checked using trypan blue stain, followed by identification by flow cytometry and plating according to the desired experiments.
Flow Cytometry
Re-suspended cells at a density of 0.5 × 10⁶ cells per fluorescence-activated cell sorting (FACS) tube were incubated with anti-mouse CD16/CD32 for 5 min to block subsequent nonspecific binding of antibodies to Fc receptors. Cells were next incubated with antibodies specific for CD11b-APC (BD Pharmingen) and F4/80-FITC (eBioscience), along with the Fluorescence Minus One (FMO) controls (eBioscience and BD Pharmingen), and placed at 4 °C for 25 min, after which they were washed in FACS buffer (2% Bovine Serum Albumin (Sigma-Aldrich, Poole, Dorset, UK) in PBS (Lonza, Slough, UK) with 2 mM EDTA). The cells were then re-suspended in FACS buffer to render them ready for flow cytometry analysis. Flow cytometry was carried out using a FACS Canto instrument (BD Pharmingen, San Jose, CA, USA). The purity of the differentiated bone marrow-derived macrophages was determined by measuring CD11b and F4/80 double-positive cells. Cultures of >90% purity were used for the cytokine and metabolomics assays.
Effect of Propolis Treatment on LPS-Activated Macrophages
The BMMs were plated at a concentration of 2 × 10⁶ cells/2 mL of complete RPMI medium (RPMI-1640 (Lonza), 2 mM glutamine (Lonza), 50 U/mL penicillin (Lonza), 50 µg/mL streptomycin (Lonza), 10% FCS (Gibco, Paisley, UK)) in 6-well plates, with 5 to 6 replicates per condition, and then rested for 5 h or overnight. To study the effect of adding propolis on the macrophage metabolome, the two sample extracts were added at a concentration of 50 µg/mL, while an equivalent amount of medium only was added to the control group. The treated BMMs were incubated for 18 h at 37 °C in a humidified atmosphere of 5% (v/v) CO₂. After 18 h of incubation, LPS (Escherichia coli, Sigma Aldrich) was added to give a concentration of 100 ng/mL, and incubation was continued for a further 24 h.
Cell extracts were prepared by washing the cells once with warm PBS before harvesting the cells in a chilled extraction solution (MeOH/MeCN/H₂O, 50:30:20 v/v/v) at a ratio of 1 mL of extraction mix per 2 × 10⁶ cells. The cell lysates were collected and shaken at 1200 rpm for 20 min at 4 °C before being centrifuged at 0 °C at 13,000 rpm for 15 min. The supernatants were then collected and transferred into autosampler vials for loading into the LC-MS autosampler or storage at −80 °C until analysis.
Measurement of Cell Viability
The BMM cells were seeded at a density of 2 × 10⁵ cells/well in 96-well plates and incubated for 24 h at 37 °C in a humidified atmosphere of 5% CO₂. After 4 h of resting, to allow adhesion, the cells were treated with medium only (negative control) or with propolis extract (50 µg/mL), and all were incubated for a further 24 h. After 24 h, LPS was added to all cells except the control incubations, and incubation was continued for 24 h. Four hours before the end of the incubation, Alamar blue [8] was added to all wells with slight shaking, and the plates were returned to the incubator until the end of the treatment. To read the cell viability, 100 µL from each condition was pipetted into new plates and the fluorescence was measured at excitation 530 nm and emission 590 nm using a Polarstar Omega plate reader (BMG Labtech, Aylesbury, UK).
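The percentage viability quoted from such readings is simply a normalization to the untreated control; a minimal sketch is given below, with invented fluorescence numbers that are not the study's data.

```python
import numpy as np

def percent_viability(sample_fluorescence, control_fluorescence, blank=0.0):
    """Alamar blue viability relative to untreated controls (530/590 nm readings)."""
    sample = np.asarray(sample_fluorescence, dtype=float) - blank
    control_mean = np.mean(control_fluorescence) - blank
    return 100.0 * sample / control_mean

control = [52000.0, 50800.0, 51500.0]        # illustrative raw fluorescence units
propolis_lps = [50100.0, 49500.0, 51200.0]
print(np.round(percent_viability(propolis_lps, control), 1))
```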
Measurement of NO Production in BMMs
BMMs were plated and stimulated with propolis pre-treatment followed by LPS activation. First, 50 µL aliquots of cell supernatant were collected and added into the wells of a 96-well plate. Griess reagents (A + B) were mixed in a ratio of 1:1 [2% (w/v) sulphanilamide in 5% (v/v) H₃PO₄ and 0.2% (w/v) naphthylethylenediamine HCl in water], and 50 µL of the mix was added to the cell supernatants in each well. The 96-well plate was incubated in the dark for 10 min, and the absorbance was then read at 540 nm using a Polarstar Omega plate reader. Nitrite production was determined relative to a standard curve constructed with solutions of sodium nitrite (NaNO₂), as described previously [41], from a 10 mM stock solution of NaNO₂ prepared in complete RPMI 1640 cell medium.
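The conversion from Griess absorbance to nitrite concentration via the NaNO₂ standard curve is a linear fit; a minimal sketch with invented calibration numbers (not the study's data) follows.

```python
import numpy as np

def nitrite_concentration(a540_samples, a540_standards, standard_conc_uM):
    """Convert Griess-assay absorbances (540 nm) to nitrite concentrations
    using a linear standard curve built from NaNO2 dilutions."""
    slope, intercept = np.polyfit(standard_conc_uM, a540_standards, 1)
    return (np.asarray(a540_samples) - intercept) / slope

# illustrative numbers only
standards_uM = np.array([0.0, 3.125, 6.25, 12.5, 25.0, 50.0, 100.0])
standards_abs = 0.005 + 0.004 * standards_uM      # pretend calibration readings
samples_abs = np.array([0.180, 0.095, 0.052])
print(nitrite_concentration(samples_abs, standards_abs, standards_uM))
```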
Cytokine Assays
BMMs were plated and stimulated with propolis pre-treatment followed by LPS activation. First, 50 µL of cell supernatant were collected for analysis using ELISA Ready-Set-Go kits purchased from Thermo Fisher Scientific (Loughborough, UK).
The assays were performed according to the manufacturer's instructions to quantify the release of cytokines: (TNF-α, IL-1β, IL-6, and IL-10). The reaction was stopped using 2 N sulphuric acid. The plates were read using a SpectraMax M5 plate reader (Molecular Devices, Sunnyvale, CA, USA) at 560 nm, and the absorbance values were corrected by subtracting readings. The data obtained were analysed using Gen5 and Prism 7.
Liquid Chromatography/Mass Spectroscopy (LC/MS)
The chromatographic conditions were set as follows: a ZIC-pHILIC column (150 × 4.6 mm × 5 µm, Hichrom, Reading, UK) was eluted with a linear gradient over 30 min, from 20 mM ammonium carbonate (pH 9.2)/MeCN (20:80) at 0 min to 20 mM ammonium carbonate (pH 9.2)/MeCN (80:20) at 30 min, at a flow rate of 0.3 mL/min, followed by washing with 20 mM ammonium carbonate/MeCN (95:5) for 5 min and then re-equilibration with the starting conditions for 10 min. LC/MS was carried out using a Dionex 3000 HPLC pump coupled to an Exactive (Orbitrap) mass spectrometer from Thermo Fisher Scientific (Bremen, Germany). The spray voltage was 4.5 kV for positive mode and 4.0 kV for negative mode. The temperature of the ion transfer capillary was 275 °C, and the sheath and auxiliary gas flows were 50 and 17 arbitrary units, respectively. The full scan range was 75 to 1200 m/z for both positive and negative modes. The data were recorded using the Xcalibur 2.1.0 software package (Thermo Fisher Scientific, Hemel Hempstead, UK). The signals at 83.0604 m/z (2 × ACN + H) and 91.0037 m/z (2 × formate − H) were selected as lock masses for the positive and negative modes, respectively, during each analytical run.
Metabolomic Data Analysis
Raw data from the untargeted metabolomic studies were processed and putatively identified using MZmine [42]. Prior to further analysis, the data were filtered: metabolites of low intensity (<1000 peak height) and metabolites that did not show any significant fold changes were excluded in order to simplify the interpretation. Accurate masses were searched against an in-house database and, in addition, retention times for 200 metabolites were matched against standard mixtures run at the same time as the samples [43]. Metabolite matches were based on accurate masses within 3 ppm and were assigned to MSI level 1 where the retention time also matched that of a standard, or to level 2 otherwise [44]. Details of the metabolite mixtures were described previously [43].
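A minimal sketch of the kind of intensity filtering and accurate-mass annotation described here is shown below; the threshold, the tolerance, the peak-table layout, and the two database entries are assumptions chosen only to make the example self-contained.

```python
import numpy as np

def ppm_error(observed_mz, reference_mz):
    return 1e6 * (observed_mz - reference_mz) / reference_mz

def filter_and_annotate(peaks, database, min_height=1000.0, ppm_tol=3.0):
    """peaks: list of dicts with 'mz' and 'height'; database: name -> exact m/z.
    Keeps peaks above the intensity threshold and annotates them with any
    database entry within the ppm tolerance (accurate-mass match only,
    i.e. MSI level 2 unless the retention time is also matched)."""
    annotated = []
    for p in peaks:
        if p["height"] < min_height:
            continue
        hits = [name for name, mz in database.items()
                if abs(ppm_error(p["mz"], mz)) <= ppm_tol]
        annotated.append({**p, "matches": hits})
    return annotated

db = {"citrate [M-H]-": 191.0197, "itaconate [M-H]-": 129.0193}
peaks = [{"mz": 191.0199, "height": 8.5e5}, {"mz": 400.1234, "height": 500.0}]
print(filter_and_annotate(peaks, db))
```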
Statistical Analysis
The extracted data were exported to Excel in order to calculate p values and metabolite ratios. Further analysis of the extracted data was carried out by using Metaboanalyst 4.0 [45] in order to determine ANOVA values between the different treatments. A PDF file with mean extracted peak areas and standard error of the mean (SEM) values is provided in Supplementary Materials Appendix 1.
Conclusions
One might ask what benefit immunomodulation might offer to the bees that collect it. Perhaps some clue to this is from the recent paper indicating that the honey bee microbiome is stabilised by propolis [46], and this could in part be mediated by modulation of the immune response as well as by the control of pathogens by the propolis. Propolis contains hundreds of components with flavonoids and phenylpropanoid compounds being the major components. However, it may act in the form of a complex with more minor components also playing a role in its overall activity. Generally, the well-established anti-protozoal activity of propolis [47] may depend on the activity of the whole mixture, since usually, isolated components are less active or not more active than the extract mixture. The current paper provides strong support for propolis being effective as an anti-inflammatory complex. Propolis has been studied as an immunomodulatory agent in a number of sizeable trials, and its safety in high doses is good [9][10][11][12][13]. There had been no studies examining whether or not the components in propolis are absorbed. However, recently, we carried out a small-scale trial looking at the absorption of the flavonoids from propolis from an oral dose, and it was clear that the flavonoid components within propolis were generally well absorbed [48]. Such variations in effect vs. composition across different samples could provide the basis for a subsequent study. The effects of propolis on the immune response are very interesting, and these in vitro results support the in vivo trials [10][11][12][13] aimed at producing immune modulation. There is a particularly strong need for a more comprehensive study of the absorption and metabolism of propolis components to support any claims with regard to its efficacy in providing immune support.
Supplementary Materials: The following are available online at http://www.mdpi.com/2218-1989/10/10/413/s1, Figure S1: Extracted ion traces for chrysin in samples 225 and 224, Figure S2: Extracted ion traces for pinocembrin in samples 225 and 224, Figure S3: Extracted ion traces for galangin in samples 225 and 224, Figure S4: Extracted ion traces for pinobanksin in samples 225 and 224, Figure S5: Extracted ion traces for galangin methyl ethers in samples 225 and 224, Figure S6: Extracted ion traces for kaempferol in samples 225 and 224, Appendix 1: PDF file including mean peak areas and associated SEM values for the features extracted from the raw data files. | 5,220.6 | 2020-09-14T00:00:00.000 | [
"Chemistry",
"Biology"
] |
Cooking pasta with Lie groups
We extend the (gauged) Skyrme model to the case in which the global isospin group (usually taken to be $SU(N)$) is a generic compact connected Lie group $G$. We analyze the corresponding field equations in (3+1) dimensions from a group-theoretical point of view. Several solutions can be constructed analytically and are determined by the embeddings of three-dimensional simple Lie groups into $G$ in a generic irreducible representation. These solutions represent the so-called nuclear pasta state configurations of nuclear matter at low energy. We employ Dynkin's explicit classification of all three-dimensional Lie subgroups of the exceptional Lie groups to classify all such solutions in the case where $G$ is an exceptional simple Lie group, and give all the ingredients needed to construct them explicitly. As an example, we construct the explicit solutions for $G=G_{2}$. We then extend our ansatz to include the minimal coupling of the Skyrme field to a $U(1)$ gauge field. We extend the definition of the topological charge to this case and then concentrate on the electromagnetic case. After imposing a "force-free condition" on the gauge field, the complete set of coupled field equations of the gauged Skyrme model minimally coupled to an Abelian gauge field is reduced to just one linear ODE, while keeping the topological charge alive. We discuss the cases in which this ODE belongs to the (Whittaker-)Hill and Mathieu types.
Introduction
Nuclear pasta is a phase of matter that appears, organized in ordered structures, when a large number of Baryons is confined in a finite volume [1], [2], [3], [4], [5], [6]. These configurations appear, for instance, in the crust of neutron stars. Such aggregations of Baryons may take the form of tubular structures, called Spaghetti states, of layers of finite width, called Lasagna states, or even of globular shapes, the Gnocchi. Until very recently, it was tacitly assumed that the nuclear pasta phase is the prototypical situation in which it is impossible to reach a good analytic grasp. This is related to the fact that such structures appear in the low energy limit of Quantum Chromodynamics (QCD), in which perturbation theory does not work and, at first glance, the strong non-linear interactions prevent any attempt to find exact solutions. Now, the low energy limit of QCD is described by the Skyrme model [7] at the leading order in the 't Hooft expansion (see [8], [9], [10], [11], [12], [13], as well as [14], [15] and references therein). Unsurprisingly, the highly non-linear character of the Skyrme field equations discouraged any mathematical description of this kind of structures. Consequently, as the above references show, numerical methods (which are computationally quite demanding) dominate in this regime. The situation is even worse when one wants to analyze the electromagnetic field generated in the nuclear pasta phase: when the minimal coupling with the U(1) gauge field is taken into account, even the available numerical methods are not effective.
On the other hand, one may ask: is the mathematical dream of an analytic description of nuclear pasta structure really out of reach? Analytical methods to infer the general dependence of the nuclear pasta phase on relevant physical parameters (such as the Baryon density) not only would greatly improve our understanding of the nuclear pasta phase itself, but they could also shed considerable light on the interactions of dense nuclear matter with the electromagnetic field.
From the mathematical viewpoint, the problem is very deep and yet simple to state: can we find analytic solutions of the (gauged) Skyrme model able to describe typical configurations of the nuclear pasta phase? Despite the fact that this model was introduced in the early sixties, for several years only numerical solutions had been available (the only exceptions being [16], in which the authors constructed analytic solutions of the Skyrme field equations in a suitable fixed curved background). Nevertheless, the mathematical beauty of the Skyrme model attracted the attention of many leading mathematicians and physicists. In particular, in [17], [18], [19], [20] and [21], the authors were able to disclose the geometrical structures of configurations with two Skyrmions, to analyze the interaction energy of well-separated solitons, to establish necessary conditions for the existence of Skyrmionic crystals, and so on. All these remarkable results have been obtained without the availability of analytic solutions of the Skyrme field equations. These efforts (together with the comparison with Yang-Mills theory, in which explicit solutions representing instantons and non-Abelian monopoles shed considerable light on the mathematical and physical properties of Yang-Mills theory itself) show very clearly the importance of searching for new analytic tools to analyze the gauged Skyrme model in sectors with high Baryonic charge.
Quite recently, new methods have been introduced that have allowed the construction of explicit analytical solutions of the Skyrme field equations. Such solutions are suitable for describing nuclear Lasagna and Spaghetti states; see [22], [23], [24], [25], [26], [27], [28], [29], [30], [31], [32], and [33]. Let us recall that the Skyrme model is a non-linear field theory for a scalar field U taking values in the SU(N) Lie group, where N is the flavor number. This theory possesses a conserved topological charge (the third homotopy class), which physically is interpreted as the Baryonic charge of the configuration.
Most of the solutions found so far have been constructed by employing ad hoc ansätze adapted to the properties of the SU(2) group, but it was soon realized that particular group structures seem to be at the root of the solvability of the Skyrme field equations. For example, the exponentiation of certain linear functions taking values in the Lie algebra leads to Spaghetti-like configurations, while an Euler parameterization of the field U, with suitable linear exponents, leads to Lasagna-like solutions. In all these cases, the solutions are also topologically non-trivial, with arbitrary Baryonic charge. A proper mathematical understanding and generalization of the strategy devised in [22], [23], [24], [25], [26], [27], [28], [29], [30], [31], [32] and [33] offers the unique opportunity to disclose the deep connections of the nuclear pasta phase with the theory of Lie groups, two topics which (until very recently) could have been considered extremely far from each other. The present paper is devoted to this opportunity: to provide nuclear pasta configurations of Lasagna and Spaghetti types with the mathematical basis of Lie group theory.
A first step in this direction was to link certain properties of semi-simple Lie groups to the possibility of obtaining explicit solutions of the Skyrme equations in Lasagna configurations for the SU(N) groups with arbitrary N [33]. More generally, using the methods developed in [26], [27], [28], [30], [31], [33], together with the generalization of the Euler angles to SU(N) of [34], [35], [36], it has been possible to construct non-embedded multi-Baryonic solutions of nuclear Spaghetti and nuclear Lasagna type, at least for the SU(N) groups; see [37].
A fundamental ingredient in the theory of Lie groups, with relevant applications in the Skyrme model, is the concept of non-embedded solutions introduced in [11] and [12]. These are solutions of the SU(N) Skyrme model which cannot be written as trivial embeddings of SU(2) into SU(N). However, the techniques used to obtain such results, for example in [33], were quite specific to the group SU(N). In fact, as we will show in the present manuscript, there is a very interesting relation between Lie group theory and such families of solutions, which makes it possible to generalize the above results to a much more general setting and to classify the solutions: this is exactly the main goal of the present paper.
Resume of the results
Firstly, we will prove that, having fixed a compact connected Lie group G with a given irreducible representation (irrep), the solutions are determined in general by deformations of embeddings of three dimensional Lie groups into G.
Secondly, we will prove that inequivalent families of solutions correspond to inequivalent embeddings (not related by conjugation in G). The problem of determining all possible three-dimensional subgroups of a simple Lie group was solved by E. B. Dynkin in [38]; in particular, in that paper all possible three-dimensional subalgebras of the exceptional Lie algebras are written down.
Thirdly, we will show that such classification also classifies the Spaghetti and Lasagna solutions determined via group theory methods. The difference between Spaghetti and Lasagna depends on the realization of the subgroup element of G: if it is generated by the exponentiation of a linear combination of the generators of a three-dimensional subalgebra of g = Lie(G), then we get Spaghetti-like solutions, while if the realization is through Euler parameterization we get Lasagna-like solutions. Then, we will compute explicitly relevant quantities such as the energy of these configurations.
Fourthly, we will extend this classification to the case of the gauged Skyrme model minimally coupled to Maxwell theory. In particular, we will extend the definition of the topological (Baryonic) charge to this case. We will reduce the complete set of coupled field equations, both in the gauged Lasagna case and in the gauged Spaghetti case, to a single linear equation, and we will analyze the integrable cases, which correspond to linear differential equations of Whittaker-Hill and Mathieu type.
Main tools employed in the analysis
In the present work, we will employ abstract techniques and general properties of semi-simple Lie groups in order to investigate their relation with the solvability of the Skyrme equations. This allows us to extend all the results found in [33] for the special unitary groups to an arbitrary semi-simple compact Lie group. Indeed, all results will be based on the properties of the roots and weights of the associated Lie algebras, while a generalized Euler parameterization of the Skyrme field U, taking values in G, will in general lead to Lasagna configurations. Similarly, the direct exponentiation of the algebra, as discussed above, will lead us to Spaghetti structures, extending the results of [37]. In either case, we will compute the energy of such configurations and will show that they always have a non-trivial Baryonic (topological) charge. Interestingly enough, a strategy for constructing non-trivial non-SU(2) solutions in the sense of [11] and [12] turns out to be strictly related to the classification of all three-dimensional subgroups of any given simple Lie group, provided by Dynkin in his PhD thesis work; see [38]. As an application of our general analysis, we will show how to construct all non-trivial Lasagna and Spaghetti configurations in any exceptional Lie group, making the case G = G₂ fully explicit.
The generalization of our ansätze which allows us to include the minimal coupling of the model to a U(1) electromagnetic field is introduced as follows. As usual, the gauge field acts as a connection, making all derivatives covariant under the action of the U(1) gauge group, while its dynamics is governed by the usual Maxwell action (although our methods also work in the Yang-Mills case). The covariant derivatives break the topological nature of the original term expressing the Baryonic charge. Therefore, generalizing the result in [8], we will deform the Baryonic density expression in order to recover topological invariance.
The introduction of the electromagnetic field makes the field equations of the gauged Skyrme model minimally coupled to Maxwell theory considerably more complicated than in the Skyrme case. Nevertheless, quite surprisingly, the equations are separable (in a suitable sense) and once again solvable after imposing the force-free condition on the gauge field. This condition appears quite naturally in plasma physics (see [39], [40], [41], [42], [43] and references therein). Quite interestingly, the condition implies that the gauge field disappears from the gauged Skyrme field equations (without being a trivial gauge field, of course), and therefore the gauged Skyrme field equations can be solved as in the ungauged case. It is a very non-trivial result that the remaining field equations (which correspond to the Maxwell equations with the source term arising from the gauged Skyrme model) reduce to just one linear equation for a suitable component of the gauge field, in which the Skyrmion acts as a source-like term. We will analyze the integrable cases, in which this last remaining equation takes the form of a Hill equation for the Lasagna states and of a Schrödinger equation with a bi-periodic potential of finite type in the Spaghetti case.
Interestingly enough, for the Lasagna case another nice coincidence shows up here: the relevant solutions we need are exactly the periodic solutions whose existence was investigated in [44], and whose explicit form, for the case of a Whittaker-Hill equation, was determined in [45].
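To give a concrete feel for the reduced problem, the sketch below integrates a Mathieu-type equation y'' + (a − 2q cos 2x) y = 0 numerically; the parameter values are illustrative placeholders and are not the coefficients derived from the gauged Skyrme equations.

```python
import numpy as np
from scipy.integrate import solve_ivp

def mathieu_rhs(x, y, a, q):
    # y = [y, y']; Mathieu equation  y'' + (a - 2 q cos 2x) y = 0
    return [y[1], -(a - 2.0 * q * np.cos(2.0 * x)) * y[0]]

a, q = 1.0, 0.5                      # illustrative values only
sol = solve_ivp(mathieu_rhs, (0.0, 4.0 * np.pi), [1.0, 0.0],
                args=(a, q), dense_output=True, rtol=1e-9, atol=1e-12)
x = np.linspace(0.0, 4.0 * np.pi, 9)
print(np.round(sol.sol(x)[0], 4))    # sample of y(x) over two periods
```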
It is a truly remarkable result that such a complicated phase as the nuclear pasta phase of the low energy limit of QCD (even taking into account the minimal coupling with Maxwell theory) can be understood so cleanly in terms of the theory of Lie groups.
Notations and conventions
Our conventions are as follows. The Skyrme model in (3+1) dimensions is defined by an action depending on two positive coupling constants K and λ, with g denoting the metric determinant. The Skyrme field U is a map from spacetime to a semi-simple compact Lie group G, and it can be written in exponential form in terms of a basis {T_i} of the Lie algebra g = Lie(G).
The system is confined in a box of finite volume with a flat metric. For Lasagna states we will use a flat metric with adimensional spatial coordinates (r, γ, φ), with ranges such that the solitons are confined in a box of volume V = (2π)³ L_r L_γ L_φ. For nuclear Spaghetti we will use a metric ansatz with adimensional coordinates (r, θ, φ), ranging so that the total volume is V = 4π³ L_r L_θ L_φ. The energy-momentum tensor associated to the Skyrme field takes the standard form, and the topological charge is defined (see Proposition 3) as an integral over the spatial region V spanned by the coordinates at any fixed time t, built from L = U⁻¹dU, where Tr is the trace over the matrix indices.
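For orientation, the normalizations most commonly used in this line of work are sketched below in LaTeX; the precise factors in the paper's own equations are not reproduced in this extraction, so these expressions should be read as assumptions rather than as the paper's definitions.

```latex
% Commonly used conventions (overall factors are assumptions):
S[U] \;=\; \frac{K}{2}\int d^{4}x\,\sqrt{-g}\;
  \mathrm{Tr}\!\left(L^{\mu}L_{\mu}
  + \frac{\lambda}{8}\,[L_{\mu},L_{\nu}][L^{\mu},L^{\nu}]\right),
\qquad L_{\mu}=U^{-1}\partial_{\mu}U,\quad U(x)\in G,
\\[4pt]
B \;=\; \frac{1}{24\pi^{2}}\int_{V}\mathrm{Tr}\bigl(L\wedge L\wedge L\bigr),
\qquad L=U^{-1}dU .
```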
Lasagna groups
In [46] it has been shown how Lasagna configurations can be determined as solutions of the Skyrme equations realized as Euler parameterizations of three-dimensional cycles in SU(N). Indeed, these cycles turn out to be suitable deformations of different non-trivial embeddings of SU(2) into SU(N). Here we want to prove that this construction can easily be extended to any simple Lie group (at least for the case of the undeformed embedding). Recall that in the case of SU(N) the embedding was defined [33] by a generalized Euler map built from a constant σ, an integer m, a suitable matrix κ in SU(N), and a function h(r), which turns out to be a linear function of r with values in the Cartan subalgebra H. Indeed, the main trick was to determine a suitable matrix κ able to make everything easily computable and to guarantee the periodicity of e^{mγκ}. The convenient strategy consisted of two steps: first, we took a basis of eigenmatrices of the simple roots, λ_j, j = 1, ..., r, where r = N − 1 is the rank of the group, and defined the matrix κ as a combination of the λ_j and their hermitian conjugates λ_j†, with complex constants c_j as coefficients. The second step consisted in determining the allowed values for the c_j. We want to do the same with a generic simple Lie group G replacing SU(N). The first problem we run into is the following: if λ ∈ g_C is an eigenmatrix of a root α of the Lie algebra g of G (so it belongs to the complexification g_C of g), then in general λ† does not belong to g_C unless G = SU(N) for some N. So, in general, the matrix κ defined above is not a matrix of g.
In order to overcome this problem, we notice that a compact simple Lie group G always contains a split maximal subgroup [47], that is, a maximal subgroup K with the property that 2 dim(K) + r = dim(G) and that there exists a Cartan subalgebra H of g entirely contained in p, the orthogonal complement (with respect to the Killing product) of the Lie algebra k of K in g. Of course k is a subalgebra of g, while p is not: rather, p is a representation space for G, and K is an isotropy group for p. One can easily show that a root matrix λ, associated to a root α, must have the form λ = k + ip with k ∈ k and p ∈ p. Then k − ip is also a root matrix, corresponding to the root −α. We therefore replace the hermitian conjugation with the ∼ conjugation defined by (k + ip)∼ = k − ip. In this way, if λ_j, j = 1, ..., r, are root matrices corresponding to the simple roots of g, then κ can be built from the λ_j and the λ̃_j. Notice that for G = SU(N) we have λ̃ = −λ†. If we choose normalizations as in Appendix A, we can use the matrices J_k to decompose h(z) = Σ_{j=1}^{r} y_j(z) J_j. The properties of the roots can be inferred case by case from the lists in Appendix A. Exactly the same calculations as in [46] then show that the field equations for the Skyrme field are equivalent to a system of two sets of equations. The first set has the solution h″ = 0, as a consequence of the strict positivity of the Cartan matrix for each simple group. The second set, using the fact that the λ_{α_j+α_k} are independent, reduces to a set of conditions involving the scalar products (α_j|α_k). Since (α_j|α_k) ≠ 0 if and only if α_j and α_k are linked, and since there are r − 1 links in a connected Dynkin diagram, these are exactly r − 1 equations. They are independent and, assuming a = α_1(h′) ≠ 0, have the general solution α_j(h′) = ε_j a, j = 2, ..., r, (2.11), where the ε_j are signs. As in [46], we can solve this by writing h′ as a linear combination with unknown coefficients; applying α_k to both sides and defining ε_1 = 1, we obtain a linear system whose matrix is the Cartan matrix C_G associated to G. The Cartan matrix is positive definite and therefore always invertible, so that the coefficients follow by applying C_G⁻¹ to the vector of signs (2.14). Therefore, we have proven the following generalization of Proposition 2 in [46].
The solutions for h′ are precisely those obtained above via the inverse Cartan matrix, where a is a real constant and the ε_j are signs, with ε_1 = 1.
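The linear-algebra content of this statement is simply the inversion of the Cartan matrix. A small numerical sketch is given below, using one common convention for the Cartan matrices; the convention, the examples, and the choice of sign vector are assumptions for illustration only.

```python
import numpy as np

# Cartan matrices (one common convention; transposes are also in use)
cartan = {
    "A2 (su(3))": np.array([[ 2.0, -1.0],
                            [-1.0,  2.0]]),
    "G2":         np.array([[ 2.0, -1.0],
                            [-3.0,  2.0]]),
}

def h_prime_coefficients(C, signs):
    """Solve C x = eps for the coefficients of h' in the chosen basis
    (the overall factor a is omitted); signs = (eps_1, ..., eps_r), eps_1 = +1."""
    return np.linalg.solve(C, np.asarray(signs, dtype=float))

for name, C in cartan.items():
    print(name, h_prime_coefficients(C, [1, 1]))
```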
Now we have to discuss which choices of the coefficients c_j are allowed. To this end, the solution must cover a topological cycle entirely. First, we notice that, as a consequence of our normalizations, if we want to achieve this with r varying in [0, 2π], we must fix the constants appropriately; see [46], Proposition 3. The second step is to guarantee the periodicity of e^{γκ}. This is the difficult part and determines the allowed values of the c_j. Notice that κ is diagonalizable (over C): since G is compact, in the adjoint representation κ is antihermitian and hence diagonalizable with imaginary eigenvalues, and it follows that it is diagonalizable in any representation with purely imaginary eigenvalues. If N is the dimension of the representation, then the eigenvalues iμ_1, ..., iμ_N must be in rational ratios, which means that for any μ_a ≠ 0 there must exist integers n_a ≠ 0 such that μ_a n_b = μ_b n_a, (2.19), or, equivalently, that there exists a non-vanishing real number μ and N integers n_a ∈ Z such that μ_a = μ n_a. (2.20) This condition in general depends on N, on G, and on the constants c_j. In [46] this problem was shown to have a set of solutions for the particular case of G = SU(N) in the fundamental representation. Here we have to generalize that procedure without exploiting a very explicit realization. Indeed, we can prove that there are solutions with all c_j different from zero by following a strategy developed by Dynkin in [38], which we will recall in the next section. Let us choose f in the Cartan subalgebra such that α_j(f) = b, a positive constant independent of j. We can easily determine f as follows. If h_j = i[λ_j, λ̃_j], then set f = Σ_{k=1}^{r} p_k h_k. The above condition is then equivalent to a linear system for the p_k, from which we immediately obtain the p_k in terms of the inverse Cartan matrix; by the properties of the Cartan matrix it follows that the p_j are all positive. Finally, we fix the moduli of the c_j in terms of these data, leaving their phases ψ_j free. Then, we have the following proposition: if κ is constructed with the above choice of c_j, then e^{κz} is periodic with period n·2π/b, where n may be 1 or 2 depending on the representation, with n = 1 for the adjoint representation.
Proof. We first show that periodicity is independent of the phases of c j . If e κz is periodic, then, for any fixed g ∈ G, ge κz g −1 is also periodic with the same period. Since the simple roots α j are linearly independent, for any fixed j we can find an element h j of the Cartan algebra such that α k (h j ) = δ kj . Let us set g = e ψhj . Then, which shows that gκg −1 differs from κ only by the phase of c j . This proves our assertion. So, it is sufficient to prove the proposition for ψ j = 0. In this case, T 3 := f, T 1 := κ and Therefore, as before, the periodicity of e κz is equivalent to the periodicity of e f z . But and Now, any given root α is of the form α = Σ j n j α j , with the n j all non-negative or all non-positive integers. Therefore, all these exponentials are periodic, with the longest period determined by the simple roots, for which e iα(f )z = e ibz , which has period T = 2π/b. But e iα(f )T = 1 for any root α implies that g = e f T is in the center of the group. Since the center of a simple compact group is finite, this means that g n = I is the unit matrix for some integer n. Now, since κ is not in the Cartan subalgebra, it follows from [48], Section VII, Theorem 8.5 (see also [47]) that n = 1 or n = 2 depending on the specific representation.
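As an aside, the rationality condition (2.19)-(2.20) is easy to test numerically for a given representation. The following snippet is our own illustrative sketch (the function name and the tolerance are assumptions, not part of the paper): it checks whether a list of eigenvalues mu_a admits a common divisor mu with integer quotients n_a, which is exactly the condition for e^{κz} to be periodic.

```python
import math
from fractions import Fraction

def commensurate(mus, max_den=1000, tol=1e-9):
    """Check condition (2.20): do all mu_a equal mu * n_a for a common mu and integers n_a?
    Assumes the first entry (and, for a meaningful answer, every entry) is non-zero."""
    ratios = [Fraction(m / mus[0]).limit_denominator(max_den) for m in mus]
    q = 1
    for r in ratios:
        q = q * r.denominator // math.gcd(q, r.denominator)   # lcm of the denominators
    mu = mus[0] / q
    ns = [round(m / mu) for m in mus]
    ok = all(abs(n * mu - m) <= tol for n, m in zip(ns, mus))
    return (mu, ns) if ok else None

print(commensurate([2.0, 3.0, 5.0]))       # (1.0, [2, 3, 5]) -> exp(kappa z) is periodic
print(commensurate([1.0, math.sqrt(2)]))   # None -> incommensurate eigenvalues, no period
```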
This shows that there always exists at least a set of solutions with all non-vanishing c j , parametrized by a torus of phases. In [46] it has been shown that, for the case of SU (N ) in the smallest irreducible representation, there is indeed a further set of deformations, which has been called a moduli space. Investigating this in general is a very difficult task and we will not consider it here.
On the physical meaning of the time-dependence in the ansatz
It is worth discussing the physical meaning of the time-dependent ansatz in Eq. (2.1) for the Lasagna-type configurations as well as the one in Eqs. (3.1), (3.2) and (3.3) for the spaghetti-type configurations. First of all, despite the time-dependence of the ansatz of the U field, the energy-momentum tensor is still stationary (so that it describes a static distribution of energy and momentum). This approach is inspired by the usual time-dependent ansatz that is used for Boson stars [58,59] (generalized here to an arbitrary Lie group), in which the U (1) charged scalar field depends on time in such a way as to avoid the Derrick theorem (see [60]). Secondly, the peculiar time-dependence is chosen in order to simplify as much as possible the field equations without losing the topological charge (as, until very recently, the Skyrme field equations have always been considered a very hard nut to crack from the analytic viewpoint). Thirdly (as will be discussed in the next sections on the minimal coupling with Maxwell), the present ansatz (both for lasagna and spaghetti type configurations) produces U (1) currents associated to the minimal coupling with Maxwell which have a manifest superconducting character. Indeed (as is clear from Eqs. (6.69) and (6.130)), the present U (1) current always has the form where Ω depends on either the Lasagna or the spaghetti profiles (see Eqs. (6.69) and (6.130)) while Φ is a field which is defined modulo 2π. Consequently, the following observations are important.
1) The current does not vanish even when the electromagnetic potential vanishes (A µ = 0).
2) Such a "left over" is maximal where Ω is maximal (and this corresponds to the local maxima of the energy density: see Eqs. (6.69) and (6.70)).
3) J (0)µ cannot be turned off continuously. One can try to eliminate J (0)µ either by deforming the profiles appearing in Ω to integer multiples of π (but this is impossible, as such a deformation would kill the topological charge as well) or by deforming Φ to a constant (but this deformation cannot be achieved either, for the same reason). Moreover, as is the case in [57], Φ is only defined modulo 2π. Consequently, J (0)µ defined in Eq. (2.33) is a superconducting current supported by the present gauged configurations.
These are three of the main physical reasons to choose this peculiar time-dependent ansatz. On the other hand, it is worth emphasizing that the peculiar time-dependence we have chosen (for the reasons explained above) prevents one from using the usual techniques (see, for instance, [13]) to "quantize" the present topologically non-trivial solutions. In particular, the typical hypothesis of a static SU (N )-valued field U is violated in our case (since, as has already been emphasized, the requirement to have a static T µν which describes a stationary distribution of energy and momentum does not imply that U itself is static). Therefore, to estimate the "classical isospin" of the present configurations we will proceed in a different manner in the next sections.
Energy and Baryon number
The energy of these solutions can be easily computed by means of Proposition 6 in Appendix B. We get and where σ depends on the representation and has to be chosen so that the solution correctly covers a cycle when m = 1 and φ varies from 0 to 2π. To specify it, let us investigate the Baryon number integral. To this end, let us take a closer look at Proposition 2. The fact that n = 1 or 2 obviously distinguishes the SO(3)-type solutions from the SU (2)-type ones (see [46]), since only in the first case does the period remain invariant when passing to the adjoint representation. The right ranges are then understood by considering the correct Euler parameterizations for SO(3) and for SU (2). If we write it generically as one finds that, if T is the period of the exponential functions, in both cases z must vary in a period and y in a range of T /4. The difference is in x, which has to vary in a period for SO(3) and half a period for SU (2); see, for example, Appendix C in [46]. If we set x = σφ, y = r and z = mγ and we want to normalize the ranges of the coordinates φ, r, γ, so that all vary in [0, 2π], we see that we always have to require where n is an integer. With these conventions we can state the following proposition.
Proposition 3. The Baryonic topological charge is
The proof is exactly the same as in Appendix F of [46], so we omit it. The energy per Baryon g = E/B is therefore
Spaghetti groups
Another kind of configuration is obtained by starting from a different ansatz, which leads to Spaghetti-like solutions. Spaghetti can be parameterized by the following ansatz: In the ansatz, T i are matrices of a given representation of the Lie algebra of G and are required to define a three dimensional subalgebra that we can choose to normalize so that and satisfy where I G,ρ is the Dynkin index of su(2) in G (see [38]), that is, the coefficient relating the trace product in the representation ρ of Lie(G) to the Killing product of su(2). We also define Together with τ 1 , they satisfy With these rules, we get for L µ = U −1 ∂ µ U : For the other terms, set α = Θ, Φ and using we get and This shows that the expression of the L µ is universal (it depends only on the algebra of the τ j ), so the field equations are always the same for any choice of the group. These are What is expected to change is just the topological charge and the energy. Given this universality property, we see immediately that, for any given group G, these kinds of solutions are classified by all possible ways of finding a three dimensional simple subalgebra of the Lie algebra g. Luckily, we don't need to tackle such a program, since it has already been solved by E. B. Dynkin in [38], Chapter III. This works as follows.
First, it is convenient to complexify the algebra, recombine and normalize the generators f, e + , e − of the subgroup so that Each complex three dimensional simple algebra is isomorphic to this. However, we must consider as equivalent only the ones which are isomorphic through an automorphism of the group. Let α j , j = 1, . . . , r be simple roots defined from a Cartan subalgebra containing f . Then, it turns out that (α j |f ) must be integers that can assume only the values 0, 1, 2. The set of numbers d j = α j (f ) is called the Dynkin characteristic of the subgroup. The main result of [38] is that the three dimensional simple subalgebras A are in one to one correspondence with the characteristics, and one can indeed classify the characteristics. A subalgebra is said to be regular if its roots are indeed roots of g. The subalgebra A is said to be integral if the projections of the roots of g along the direction of the roots of A are integer multiples of the simple root α A of A. Since α A (f ) = 2, we see that the dual of α A in the Cartan subalgebra H is From this it follows immediately that A is integral if and only if all the numbers of the Dynkin characteristic χ A = (d 1 , . . . , d r ) of A are even (i.e., are 0 or 2). All inequivalent characteristics for the exceptional Lie groups are listed in [38]. Furthermore, given such a characteristic χ = (d 1 , . . . , d r ), it is explained there how to construct explicitly the associated subalgebra.
and choose p j so that α j (f ) = d j . This gives From this we get As usual, C G is the Cartan matrix. In general, the construction of the remaining generators is non-trivial. To do it, one has to consider the subset of the root system Σ defined by Then, all roots are positive. If λ α are the corresponding eigenmatrices (normalized so that Tr(λ α λ̃ α ) = −1), one then has to look for real coefficients k α such that, setting then [e + , e − ] = −if . If χ G is an admissible characteristic, then in general there are infinitely many solutions, but we know that they are all equivalent, so it is sufficient to choose one, all the other ones being related to it by conjugation with elements of the group. Notice that the resulting equations are in general where we used that any positive root can be written as β = Σ j n β,j α j , with n β,j non-negative integers, and In the particular case when d j = 2 for all j, Σ χG consists of all simple roots and the solution is easily obtained as Finally, we can go back to our real case by taking Notice that this is the same construction we used to get a periodic generator κ for the Lasagna configurations. This also shows that indeed we can construct a κ matrix for each three dimensional subalgebra.
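For completeness, the combinatorial criteria recalled above (the entries of an admissible characteristic take values 0, 1, 2, and the subalgebra is integral precisely when all entries are even) are trivial to encode. The following lines are only an illustrative sketch with names of our own choosing:

```python
def is_admissible(chi):
    """A Dynkin characteristic has entries d_j in {0, 1, 2}."""
    return all(d in (0, 1, 2) for d in chi)

def is_integral(chi):
    """Integral subalgebra: every entry of the characteristic is even (i.e. 0 or 2)."""
    return is_admissible(chi) and all(d % 2 == 0 for d in chi)

print(is_integral((2, 2)))   # True  -- e.g. the principal characteristic chi_28 of G2
print(is_integral((1, 0)))   # False -- an odd entry makes the subalgebra non-integral
```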
Energy density and Baryon charge
Let us determine the energy density and the Baryon charge. The energy density is defined by the T tt component of the energy-momentum tensor A direct computation gives The Dynkin index I G,ρ can be computed as follows. First, observe that a generic root has the form By using (3.22), (3.28) and the definition of Σ χG , we get Therefore, and so The Baryon charge can be written as in which ρ B is the Baryonic charge density Recalling the ranges (1.5) for the coordinates and that q = 2v + 1 and χ(0) = 0, we get The boundary conditions on χ(r) depend on the periodicity of τ 1 , which corresponds to the periodicity of T 3 (T 3 = τ 1 (Θ = π)). We must have χ(2π) = nπT G,ρ , so that where T G,ρ = 1 for representations with even dimension and T G,ρ = 2 for representations with odd dimension.
On the "classical" isospin of these configurations
We have shown in previous sections that the inclusion of a suitable time-dependence in the ansätze, both for lasagna and spaghetti phases (see Eqs. (2.1) and (3.1)), is one of the key ingredients that allows the field equations to be considerably reduced, leading to a single integrable ODE for the profiles. This time-dependence offers a nice short-cut to estimate the "classical Isospin" of the configurations analyzed in the present paper (a relevant question is whether or not the classical Isospin is large when the Baryonic charge is large). In particular, one may evaluate the "cost" of removing such time-dependence. Such a cost is related to the internal Isospin symmetry of the theory. This is like trying to estimate the angular momentum of a spinning top by evaluating the cost of making the spinning top stop spinning. In the present case, the time-dependence of the configurations can be removed from the ansätze by introducing an Isospin chemical potential; then the Isospin chemical potential needed to remove such time-dependence is a measure of the classical Isospin of the present configurations. We will see how this works for the simplest SU (2) case, where the generators are T j = iσ j , σ j being the Pauli matrices (a general group G behaves in a similar way).
As is well known, the effects of the Isospin chemical potential can be taken into account by using the following covariant derivative Now, we will use exactly the same ansatz as before in the spaghetti SU (2) case, but this time without the time dependence: together with the introduction of the Isospin chemical potential in Eq. (3.46) in the theory. One can check directly that the complete set of Skyrme equations can still be reduced to the same ODE for the profile χ (r) in the case of the spaghetti phase in Eq. (3.18), only provided the Isospin chemical potential for the spaghetti phase is fixed to a specific value μ̃. In other words, the cost to eliminate the time-dependence is to introduce an Isospin chemical potential which is large when the Baryonic charge of the spaghetti is large. Something similar happens in the case of the lasagna phase. Let us consider the ansatz in terms of the Euler angles but without the time-dependence for the SU (2) case: Let us introduce the Isospin chemical potential, demanding that the profile h(r) should be the same as before. Then, as in the spaghetti case, the Skyrme field equations with chemical potential can still be satisfied by the very same profile h(r) provided we fix the Isospin chemical potential as At this point it is important to remember that in the SU (2) case the lasagna and spaghetti type solutions have the following values for the topological charges; see [25] and [26] for more details. These arguments show that the "classical Isospin" of configurations with high Baryonic charge is large. Finally, it is important to point out that the large Isospin case corresponds to either neutron-rich or proton-rich matter and, due to Coulomb effects (not taken into account in this model), the neutron-rich solution is preferred. This fact is very convenient as far as the physics of neutron stars is concerned.
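For concreteness, a standard way to implement an Isospin chemical potential in the SU(2) Skyrme model (which is what we have in mind above; the precise normalization of μ is conventional and is not fixed here) is the replacement

\[
\partial_\mu U \;\longrightarrow\; D_\mu U = \partial_\mu U + \mu\,\delta_\mu^{\,0}\,[T_3, U]\,,\qquad T_3 = i\sigma_3\,,
\]

so that the explicit time-dependence of the ansatz can be traded for a constant μ entering only through the covariant derivative.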
Examples: exceptional pasta
As an example we can consider the "basic exceptional Skyrmions", that is, solutions in the lowest dimensional representation when G is one of the exceptional Lie groups. There are five cases, which we now recall according to the dimension of the group. For each of them we know all inequivalent three dimensional subalgebras, each one determined by the Dynkin characteristic χ I = (d 1 , . . . , d r ), where I is the Dynkin index and d j are the coefficients of the characteristic, ordered as the simple roots listed in Appendix A.
The smallest exceptional group is G 2 , a 14 dimensional group of rank 2 whose smallest irrep is 7 dimensional. It contains four different three dimensional subalgebras, with characteristics χ 1 , χ 3 , χ 4 and χ 28 : χ 1 and χ 3 are regular but not integral, while χ 4 and χ 28 are not regular but are integral. In particular, the minimal regular subalgebra containing χ 4 is χ 1 ⊕ χ 3 , while χ 28 is maximal, so that the smallest regular subalgebra containing it is G 2 itself.
As an example, we will finally construct the explicit solutions for G 2 , which we can call "G 2 exceptional pasta".
G 2 exceptional Spaghetti
Here we consider explicit solutions case by case. Our deduction will be quite general and independent of the specific realization in terms of matrices, depending only on the chosen representation. Nevertheless, for the sake of completeness, in Appendix D we will provide an explicit matrix realization of the subalgebras in the lowest fundamental representation.
χ 28 -Spaghetti
This is the principal case, with χ 28 = (2, 2). Therefore p ≡ (p 1 , p 2 ) = (36, 20). We already know the solution in this case. The Spaghetti solution is Because of Proposition 2, we already know that, working in the adjoint representation, the solution is of SO(3)-type. In the representation 7, it is sufficient to verify that, for T − = 3λ 1 + √5 λ 2 and v the maximal vector of 7, the vectors ρ 7 (T − ) k (v), k = 0, . . . , 6, are all linearly independent. Here ρ 7 : G 2 → End(R 7 ) is the representation map of the algebra. This is proved in Appendix D and shows that R 7 is irreducible under χ 28 . Since it is odd dimensional, it is of SO(3)-type.
G 2 exceptional Lasagna
For the exceptional Lasagna we can use Proposition 2. Since we already know that n = b must be equal to 1, we get that (p 1 , p 2 ) = (18, 10) and, if we fix ψ j = 0 for simplicity, then Moreover, from Proposition 1 we get.
Extended ansatz
In order to allow for further generalizations, it is convenient to employ the Euler parameterization in a more general ansatz, after fixing the matrices κ and f . Let us consider the Skyrmionic field 1 where κ is specified in (2.2), and f has the same properties as in (2.21). One of the aims of this generalization is to provide a description of different pasta states without specifying them a priori. This could lead to a comprehensive description of Skyrmions in a finite volume and to an analytical definition of other possible states (such as gnocchi states) and the transitions between them. In this work, we did not analyze all these possibilities and all the limits of this model, but we outline the main properties which characterize them, namely the wave equations, the topological charge and the energy density. If we define then U (t, r, φ, γ) = e −α(t,r,φ,γ)κ e ξ(t,r,φ,γ)κ e χ(t,r,φ,γ)f e ξ(t,r,φ,γ)κ e α(t,r,φ,γ)κ , and f can be normalized so that tr(f 2 ) = tr(κ 2 ). (5.8) This leads to the condition (2.24) and, in particular, |c j | 2 = b 2 p j . Using these conventions we can now write the Skyrme equation explicitly.
Non-linear wave equations
We refer to the field equations for the functions α, ξ and χ as wave equations. These turn out to be Eqs. (5.9)-(5.11).
Baryon charge
The Baryon charge is with ρ B = −12 c 2 ε ijk ∂ i α ∂ j ξ ∂ k cos(bχ). (5.14) Up to now we have just written local expressions, but in order to compute the Baryonic charge it is necessary to define the ranges of α, ξ and χ. Proposition 2 tells us that the period of e gκ is T κ = ηn·2π/b, where η = 1, 2 depending on the representation, while n ∈ Z. Following [46], the ranges must be where σ = 1 for odd-dimensional representations and 1/2 for even-dimensional representations, and m, n are both integers. The integration of the charge density leads to We can compute the ratio c 2 /b 2 in the following way. From (5.8), we get where the definition f = Σ_{j=1}^{r} p j h j has been used. Now, we can replace the coefficients p j with (2.23) and use Tr(h j f ) = α j (f ) = ib to get The Baryon charge takes the form
Example: the Lasagna case
Let us now compare the results obtained in this section with the previous ones. Our quantities can be written in terms of the Lasagna ansatz as follows Moreover, the profile function depends only on the parameter r (χ = χ(r)). This leads to the following relations With these choices the equations (5.10) and (5.11) are automatically satisfied. Equation (5.9) becomes which leads to the solution where the boundary conditions χ(0) = 0 and χ(2π) = π/b have been used. Now, it is easy to compute the energy density, which results in The integration over the volume of the box gives the total energy of the Lasagna (5.24)
Coupling with U (1) gauge field
By employing the generalization presented in the previous section, it is now easy to couple the Skyrmion field to an electromagnetic field A µ . To this aim we introduce the action where and the hat stands for the replacement of the partial derivative with a covariant derivative, which means that L µ is replaced by the corresponding L̂ µ . Here T is any element of the Lie algebra of the group G, representing the direction of the U (1) gauge field. Later, we will identify T with the generator T 3 . The action (6.1) is now invariant under the gauge transformations The gauge invariance is also reflected in the fact that the theory depends on A µ through the quantity ∂ µ α − A µ , which is invariant under gauge transformations.
Covariant Baryonic charge
As in [49], in order to determine a topological invariant, one is tempted to start by directly generalizing (1.7) to the expression B̂ which, however, is not a topological invariant if the field-strength F is non-vanishing. Nevertheless, a topological invariant can be constructed after a simple subtraction, even for a non-Abelian gauge field. Indeed, we have: Proof. In order to prove the proposition, we have to prove that the first variation of B̂ w.r.t. U and ω (independently) vanishes at any functional point, that is, independently of whether U and ω are constrained by some equations of motion. Notice that in taking variations, δω is a well defined 1-form on S even though ω may not be. To keep notation compact we will use bold round brackets to indicate a trace ( ( (M) ) ) ≡ Tr(M). Moreover, we first recall the following properties. If a j , j = 1, . . . , k are Lie algebra valued 1-forms, then ( ( (a 1 ∧ · · · ∧ a k−1 ∧ a k ) ) ) = (−1) k−1 ( ( (a k ∧ a 1 ∧ · · · ∧ a k−1 ) ) ).
Notice that with our conventions in (6.4), we have to make the identifications (6.43) where, according to the conventions in [46], the orientation of the coordinates is such that ε rγφ = 1. In our case S is a closed three dimensional manifold in a semisimple compact Lie group G. Since in this case H 2 (G, Q) = 0, we get that the correction to the density does not contribute to the integral and we expect B̂ = B always. It is worth mentioning that in the construction of the solutions of the Skyrme equations, however, S is replaced by V, which is compact but is not a closed smooth manifold; rather, it is a hyperrectangle with boundary. Therefore, the above integral does not define a topological invariant unless we impose suitable boundary conditions. To understand which ones are the most suitable, let us first analyze the case A = 0. In this case the map U maps the hyperrectangle into a closed smooth submanifold of G, so L is the pull-back of a 1-form well defined on a closed compact manifold (indeed, the left-invariant Maurer-Cartan form), and this is the reason we get a topological invariant. This suggests the boundary conditions we are looking for. They have to be imposed so that A also is the pull-back of a well defined 1-form over G (or the image of the hyperrectangle in G). Under these conditions, the quantities A µ are not independent, due to the fact that Φ, Θ and χ define a map M : R 3+1 → R 3 . Locally, the embedding takes the form
Example: Lasagna states coupled to an electromagnetic field
To be explicit, we now work out the example of Lasagna states. For this case we choose T = κ. The covariant derivative determines the coupling of the gauge field to the Skyrmions, which appears in the definition of L̂ µ . Notice that the introduction of the gauge field in the direction κ causes a shift in L µ given by ∂ µ α → ∂ µ α − A µ . It follows that all the quantities we computed in the previous section are shifted in this way when the Skyrmions are coupled to a Maxwell field, so it is straightforward to convert the uncoupled theory into the coupled one. The covariant Baryon charge density now becomes where ρ B is the uncoupled density. The correction to ρ B is a total derivative, so it depends only on the boundary conditions, as discussed above. Differently from [49], our system lives in a box, so the electromagnetic field is not constrained to vanish at the boundaries. Therefore, the Baryonic charge is not necessarily a topological invariant and is not even expected to be an integer. As we said above, we can fix this problem by requiring A to be the pull-back of a well defined potential over the homology cycle of G selected by the map U . This is easily accomplished by looking at the form of the ansatz for the Lasagna states. As t is irrelevant, we fix t = 0 to simplify the expressions: U (r, γ, φ) = e −φσκ e χ(r)f e mγκ = e −φσκ e mγκ̃(r) e χ(r)f . (6.50) Since bχ(2π) = π, we see that κ̃(0) = −κ̃(2π) = κ, so that, if, for a generic fixed r, U (r, γ, φ) defines a two dimensional surface in G, for r = 0, 2π it collapses down to one dimensional circles: This degeneration means that well defined 1-forms on the whole manifold must have components only along the directions of the degeneration submanifolds, which in our case means A α (r = 0) = 0, (6.53) A ξ (r = 2π) = 0, (6.54) which in the original coordinates becomes 1 2m Also, one of φ and γ has to be identified periodically, while the other one is periodic or "antiperiodic" 3 according to whether the cycle is of SO(3) or SU (2) type, respectively. Therefore, in any case, the 1-forms in the image of the embedding have to be periodically identified so that the integrals at the "boundaries" φ = 0 and φ = 2π cancel out, and the same happens for the boundaries at γ = 0 and γ = 2π. So, the only boundaries that may contribute are the ones at r = 0 and r = 2π, which we collectively call ∂ r B. Therefore, the Baryonic charge results in (σA γ (0, γ, φ) + mA φ (0, γ, φ))dγdφ = B (6.57) because of the above boundary conditions, and we used that A α = σA γ + mA φ .
Decoupling of Skyrme equations and free-force conditions
To the Skyrmion equations coupled to a Maxwell field, obtained by shifting ∂ µ α → ∂ µ α − A µ in (5.9), (5.10) and (5.11), we have to add the Maxwell equations, which are given by In the generic Euler parameterization, they become To look for explicit solutions, we aim to decouple the Skyrme equations from the Maxwell field. Since A µ appears only through certain products, we can separate the Skyrme equations from the rest by looking for solutions where these terms are a priori fixed functions. Recall that the quantity ∂ µ α − A µ is gauge invariant.
Example: Lasagna, again
We can use the results of this section in order to study the behavior of the Lasagna when coupled to the U (1) gauge field. To simplify the results, we use the free-force conditions and for the gauge field we make the ansatz If we simply shift the gauge field, we can write These conditions are easily solved by using (5.20) (which also applies to Ã µ ) together with (6.72). This leads to ; so only one gauge field component turns out to be independent, and we can take it to be Ã φ . Thus, the wave equations and the Maxwell equation become The first equation can be rewritten as a total derivative d/dr, where M is an integration constant. This determines the boundary values of χ ′ from the ones of χ. Vice versa, we can write M in terms of χ ′ (0). We can put this result into the Maxwell equations, getting (6.83) From (6.78) we can locally write r in terms of χ as which leads to a definition of Ã φ in terms of χ; let us call B(χ) = Ã φ (r(χ)). This way, equation (6.82) can be entirely written in terms of χ where a prime indicates a derivative with respect to χ, and the following quantities have been introduced with We can use this to see that x must vary in the interval [0, π/2]. Also, we have seen that for the Lasagna states such an interval must be one quarter of the period over the cycle, which means that the solution we are looking for must be periodic with period 2π as a function of x. Therefore, it is worth mentioning the following result [44] (see also [50]): Proposition 5. Let y 1 (x) and y 2 (x) be the solutions of the Hill equation where f (x) is an even function of the form for κ an arbitrary constant. The question of the existence of such solutions is investigated in [44]. For example, the existence of the solution y (2) is guaranteed if and only if ω takes values for which the determinant of the infinite dimensional matrix vanishes. 4 However, for practical purposes such a route is impracticable when looking for explicit solutions in this very general case. Therefore, instead of pursuing this very general analysis, we now move to a particular but more tractable case.
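Since the infinite-determinant route is not practical, a numerical check of whether a given set of parameters admits a periodic solution can be done with a Floquet (monodromy) computation. The sketch below is ours, not the method of [44,45], and the explicit trigonometric form of f and the parameter names are assumptions used only for illustration: for a Hill equation y'' + f(x) y = 0 with f of period π, solutions that are periodic with period 2π exist exactly when the trace of the monodromy matrix over one period equals +2 (π-periodic) or −2 (anti-periodic over π, hence 2π-periodic).

```python
import numpy as np
from scipy.integrate import solve_ivp

def monodromy_trace(eta, omega, rho):
    # Illustrative Whittaker-Hill-type potential with period pi (assumed form).
    f = lambda x: eta + omega * np.cos(2 * x) + rho * np.cos(4 * x)
    rhs = lambda x, y: [y[1], -f(x) * y[0]]
    cols = []
    for y0 in ([1.0, 0.0], [0.0, 1.0]):           # two independent initial conditions
        sol = solve_ivp(rhs, (0.0, np.pi), y0, rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])
    M = np.array(cols).T                          # monodromy matrix over one period
    return np.trace(M)

# Scan eta at fixed omega, rho and look for |tr M| = 2 (root-find in practice):
for eta in np.linspace(0.0, 4.0, 9):
    print(f"eta = {eta:4.1f}   tr M = {monodromy_trace(eta, 1.0, 0.3):+.4f}")
```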
Linear solution of the Skyrme equations
In the particular case when M = 2 b 2 λ, we can find a very simple solution: which is a Whittaker-Hill equation [51,45]; see also [52]. It is convenient to introduce the variable change r̃ = r/4, so that r̃ has range 0 ≤ r̃ ≤ π/2. This way, equation (6.102) takes the canonical form We can therefore determine the solutions y (2) and y (3) , following [45]. Using the same notation as in that paper, we can identify In particular, η = −ωρ. For the function φ (3) we have to take (see [45]) A n B n cos((2n + 1)x), (6.108) (ρ + 2ji), j > 0, (6.109) and B n solves the recursion relations To find the periodic solution of period 2π, ω and η must be constrained by the following transcendental equation, expressed in terms of a continued fraction (see [45], formula (5.1)) which then gives the solution Finally, B 0 is fixed by the condition φ (3) (0) = 1.
As concerns the solution κφ (2) (x), we have to consider (ρ + (2j + 1)i), j > 1, (6.116) and D n solves the recursion relations To find the periodic solution of period π, ω and η must be constrained by the following transcendental equation, expressed in terms of a continued fraction (see [45], formula (5.3)) which then gives the solution Since κ is arbitrary, no normalization is required for D 1 . However, notice that κφ (2) (x) can be considered only if equation (6.119) has common solutions with (6.112).
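In practice, transcendental conditions of this kind are imposed by truncating the continued fraction at finite depth and root-finding in (ω, η). The generic evaluator below is our own utility and does not reproduce the actual partial numerators and denominators of [45]:

```python
def continued_fraction(b0, a, b, N):
    """Evaluate b0 + a(1)/(b(1) + a(2)/(b(2) + ...)) truncated at depth N (backward recursion)."""
    tail = 0.0
    for n in range(N, 0, -1):
        tail = a(n) / (b(n) + tail)
    return b0 + tail

# Sanity check on a classical example: sqrt(2) = 1 + 1/(2 + 1/(2 + 1/(2 + ...))).
approx = continued_fraction(1.0, lambda n: 1.0, lambda n: 2.0, 30)
print(approx)   # ~1.41421356
```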
Example: Spaghetti states coupled to an electromagnetic field
In the case of Spaghetti, we do not use a parameterization of Euler type but the exponential parameterization of Section 3. Still, the analysis can be easily extended to this case. Following [53], the gauge field is described by Also in this case, the free-force conditions decouple the Skyrme equations from the Maxwell field. In particular, we take A reasonable explicit form of a gauge field with these properties is given by θ)). which means that 2χ ′2 λq 2 sin 2 χ 2 + L 2 θ + 4q 2 L 2 r cos χ = 2Z, (6.128) where Z is a constant, which is equivalent to This is a stationary Schrödinger equation with a double periodic potential of finite type. In particular, here one is interested in the zero eigenvalue case. Both the direct and the inverse problem for this kind of equation are well studied and much more involved than the one dimensional case (which is already highly non-trivial). Here we simply refer the reader to the literature (see [54] and references therein), and limit ourselves to discussing the boundary conditions. As we discussed in the previous sections, the boundary conditions on A µ are dictated by the behavior of the Skyrme field at the edges of the box, namely (once again, we fix t = 0) This requires the following constraints The contribution of the gauge field to the Baryonic density can always be computed from (6.43). This gives where L r and L θ are specified in (3.9) and (3.16). We easily find U −1 T 3 U = T 3 + sin(qθ)τ 2 − sin(qθ) cos χ τ 2 + sin(qθ) sin χ τ 3 .
A Roots of simple algebras
Here we list the roots of all simple algebras.
A.1 A N
The corresponding complex algebra is sl(N + 1), while the compact form is su(N + 1). If e a , a = 1, . . . , N + 1, is the canonical basis 5 of R N +1 , then the real linear space generated by the roots is isomorphic to a hyperplane in R N +1 in which all non-vanishing roots are represented by the vectors α j,k = e j − e k , j ≠ k. The simple roots are α j = e j − e j+1 , j = 1, . . . , N . If λ j are the root matrices corresponding to the simple roots, then The split subalgebra is so(N + 1).
A.2 B N
The corresponding compact form is so(2N + 1). The real linear space generated by the roots is isomorphic to R N . If e a , a = 1, . . . , N , is the canonical basis of R N , then all non-vanishing roots are represented by the vectors e j − e k , j ≠ k, ±(e j + e k ), j < k, ±e j . The simple roots are α j = e j − e j+1 , j = 1, . . . , N − 1 and α N = e N . If λ j are the root matrices corresponding to the simple roots, then The split subalgebra is so(N ) ⊕ so(N + 1).
A.3 C N
The corresponding compact form is usp(2N ), the compact symplectic algebra. The real linear space generated by the roots is isomorphic to R N . If e a , a = 1, . . . , N , is the canonical basis of R N , then all non-vanishing roots are represented by the vectors e j − e k , j ≠ k, ±(e j + e k ), j < k, ±2e j . The simple roots are α j = e j − e j+1 , j = 1, . . . , N − 1 and α N = 2e N . If λ j are the root matrices corresponding to the simple roots, then The split subalgebra is u(N ).
A.4 D N
The corresponding compact form is so(2N ). The real linear space generated by the roots is isomorphic to R N . If e a , a = 1, . . . , N , is the canonical basis of R N , then all non-vanishing roots are represented by the vectors e j − e k , j ≠ k, ±(e j + e k ), j < k. The simple roots are α j = e j − e j+1 , j = 1, . . . , N − 1 and α N = e N −1 + e N . If λ j are the root matrices corresponding to the simple roots, the relevant non-vanishing commutators are The split subalgebra is so(N ) ⊕ so(N ).
A.5 G 2
The corresponding compact form is g 2 . The real linear space generated by the roots is isomorphic to a hyperplane in R 3 . If e a , a = 1, . . . , 3, is the canonical basis of R 3 , then all non-vanishing roots are represented by the vectors e j − e k , j ≠ k, and ±(e 1 + e 2 + e 3 − 3e s ), s = 1, 2, 3. The simple roots are α 1 = e 2 − e 3 and α 2 = e 1 − 2e 2 + e 3 . If λ j are the root matrices corresponding to the simple roots, the relevant non-vanishing commutator is [λ 1 , λ 2 ]. The split subalgebra is so(4).
A.6 F 4
The corresponding compact form is f 4 . The real linear space generated by the roots is isomorphic to R 4 . If e a , a = 1, . . . , 4, is the canonical basis of R 4 , then all non-vanishing roots are represented by the vectors e j − e k , j ≠ k, ±(e j + e k ), j < k, ±e j , (1/2)(±e 1 ± e 2 ± e 3 ± e 4 ). The simple roots are α 1 = e 2 − e 3 , α 2 = e 3 − e 4 , α 3 = e 4 , and α 4 = (1/2)(e 1 − e 2 − e 3 − e 4 ). If λ j are the root matrices corresponding to the simple roots, the relevant non-vanishing commutators are The split subalgebra is usp(6) ⊕ usp(2).
A.7 E 6
The corresponding compact form is e 6 . The real linear space generated by the roots is isomorphic to R 6 . If e a , a = 1, . . . , 6, is the canonical basis of R 6 , then all non-vanishing roots are represented by the vectors ±(e j − e k ), j < k < 6, ±(e j + e k ), j < k < 6, where in the parentheses only an even number of minus signs can appear. The simple roots are If λ j are the root matrices corresponding to the simple roots, the relevant non-vanishing commutators are The split subalgebra is usp(8).
A.8 E 7
The corresponding compact form is e 7 . The real linear space generated by the roots is isomorphic to R 7 . If e a , a = 1, . . . , 7, is the canonical basis of R 7 , then all non-vanishing roots are represented by the vectors ±(e j − e k ), j < k < 7, ±(e j + e k ), j < k < 7, ± √ 2 e 7 , where in the parenthesis only an odd number of minus signs can appear. The simple roots are If λ j are the root matrices corresponding to the simple roots, the relevant non-vanishing commutators are The split subalgebra is su(8).
A.9 E 8
The corresponding compact form is e 8 . The real linear space generated by the roots is isomorphic to R 8 . If e a , a = 1, . . . , 8, is the canonical basis of R 8 , then all non-vanishing roots are represented by the vectors ±(e j − e k ), j < k, ±(e j + e k ), j < k. If λ j are the root matrices corresponding to the simple roots, the relevant non-vanishing commutators are The split subalgebra is so(16).
A.10 Resuming
In conclusion, we see that the commutators we need are strictly related to the Dynkin diagram of the algebra: a commutator between eigenmatrices of two simple roots is non-zero only if the roots are linked, that is, if their scalar product is not zero. This is simply related to the fact that, with obvious notation, the commutator [λ αi , λ αj ] either is an eigenmatrix for α i + α j or it vanishes. We also recall here some very well known facts. The fact that Dynkin diagrams have no loops allows one to choose the normalization of the matrices λ j so that [λ αj , λ α k ] = −(α j |α k )λ αj +α k if j < k, and with the opposite sign if we exchange j and k. Here (|) is the scalar product in the space of roots. Notice also that the trace product is proportional to the Killing product and that the only root spaces that are not Killing-orthogonal are the ones corresponding to opposite roots. This allows one to fix a global normalization so that We also have, for the simple roots α j , where the J j are in a Cartan algebra. From the fact that the simple roots are linearly independent, it easily follows that the J j , j = 1, . . . , r, are a basis for the Cartan subalgebra. This is also sufficient to fix the scalar product in the space of roots so that (α j |α k ) = α j (J k ), (A.13) if the roots are defined as for any h ∈ H. In particular, using ad-invariance of the trace product we get Finally, recall that any given simple Lie algebra is characterized by the r × r Cartan matrix With this we can rewrite the normalization conditions as Tr(λ j λ̃ k ) = −δ jk , (A.17) The Cartan matrices of simple Lie groups can be found, for example, in [55], Table 6.
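As a cross-check of these conventions (our own small script, not part of the paper), one can compute the Cartan matrix of G 2 directly from the simple roots listed in A.5; depending on whether one normalizes by the length of the j-th or of the k-th root one obtains the matrix below or its transpose, whose inverse is the one quoted in Appendix D. The entrywise positivity of the inverse is also the property invoked in Section 2 to conclude that the coefficients p j are all positive.

```python
import numpy as np

a1 = np.array([0.0, 1.0, -1.0])    # alpha_1 = e2 - e3
a2 = np.array([1.0, -2.0, 1.0])    # alpha_2 = e1 - 2 e2 + e3
roots = [a1, a2]

# C_{jk} = 2 (alpha_j | alpha_k) / (alpha_k | alpha_k); the other convention gives C^T.
C = np.array([[2 * np.dot(aj, ak) / np.dot(ak, ak) for ak in roots] for aj in roots])
print(C)                     # rows (2, -1) and (-3, 2)
print(np.linalg.inv(C.T))    # rows (2, 3) and (1, 2) -- all entries positive
```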
B A proposition
Proposition 6. Let κ = Σ_{j=1}^{r} (c j λ j + c * j λ̃ j ), and h ∈ H a matrix such that α j (h) = ε j a, where ε j is a sign, j = 1, . . . , r, and set x := e −hz κ e hz , z ∈ R. Then and Proof. Let us start with to get where we used that α j (h ′ ) 2 = (ε j a) 2 = a 2 , and that the only non-vanishing traces are Tr(λ m λ̃ n ) = −δ mn . This proves (B. There are different ways of constructing a convenient basis for the Lie algebra of G 2 . We will refer to [56]. In that notation a basis is C J , J = 1, . . . , 14. The only maximal regular subgroup is SO(4), generated by C L , L = 1, 2, 3, 8, 9, 10. The remaining matrices generate p. A convenient Cartan subspace is thus As a basis, we take h 1 = C 11 and h 2 = C 5 . One can easily diagonalize the action of ad(H). If we set To keep contact with our conventions we have to redefine the basis for H as J 1 and J 2 , defined by (notice that λ̃ j is simply the complex conjugate of λ j ) This gives us the geometry of the roots, so that We can represent these vectors in the canonical Euclidean R 2 as the vectors The corresponding Cartan matrix is the matrix with rows (2, −3) and (−1, 2), with inverse (C G ) −1 given by the rows (2, 3) and (1, 2). (D.14) It is also useful to determine a basis for all eigenspaces, in a convenient way, normalized so that Tr(λ α λ̃ α ) = −1 for any root. After setting α 3 = α 1 + α 2 , α 4 = 2α 1 + α 2 , α 5 = 3α 1 + α 2 , α 6 = 3α 1 + 2α 2 , (D.15) we can state the following proposition.
Proposition 7.
A suitable choice of the eigenmatrices associated to the roots α j , j = 3, 4, 5, 6, is given by Moreover, λ̃ j is the complex conjugate of λ j .
Proof. We know from the general theory that if λ a and λ b are eigenmatrices of the roots α a and α b respectively, and if α a + α b is also a root, then the eigenmatrices of α a + α b have the form µ[λ a , λ b ] for some constant µ. Since λ 1 and λ 2 are eigenmatrices for the fundamental roots α 1 and α 2 , we have that the matrices λ j specified above are surely eigenmatrices for the corresponding roots α j , j = 3, 4, 5, 6. We only have to explain the choices of the constant factors. These are chosen to be real and such that Tr(λ j λ̃ j ) = −1. To prove it, first notice that necessarily [λ̃ i , λ̃ j ] are eigenmatrices for −α i − α j , so we can identify λ̃ 3 = √2 [λ̃ 1 , λ̃ 2 ] and so on. The last part of the proposition then follows from the fact that it is true for λ 1 and λ 2 and that all the coefficients we chose for defining the remaining λ j are real. Then, using ad-invariance of the trace product, that is Tr( where we used again that [λ 1 , λ̃ 2 ] = 0, and, therefore, Tr([λ 1 , λ 3 ][λ̃ 1 , λ̃ 3 ]) = 2(α 2 (J 1 )) 2 Tr(λ 2 λ̃ 2 ) = −2(−1/2) 2 , (D.22) and putting everything together we get Tr(λ 4 λ̃ 4 ) = −1.
The remaining two cases are proved exactly in the same way.
D.3 Explicit matrix realizations
Here we present the explicit matrix representation of the three dimensional subalgebras in the irrep 7 of G 2 . These are 6 We could compute it explicitly, but it is not necessary for our purposes. | 16,147.6 | 2022-01-29T00:00:00.000 | [
"Physics"
] |
Offloading under cognitive load: Humans are willing to offload parts of an attentionally demanding task to an algorithm
In the near future, humans will increasingly be required to offload tasks to artificial systems to facilitate daily as well as professional activities. Yet, research has shown that humans are often averse to offloading tasks to algorithms (so-called “algorithmic aversion”). In the present study, we asked whether this aversion is also present when humans act under high cognitive load. Participants performed an attentionally demanding task (a multiple object tracking (MOT) task), which required them to track a subset of moving targets among distractors on a computer screen. Participants first performed the MOT task alone (Solo condition) and were then given the option to offload an unlimited number of targets to a computer partner (Joint condition). We found that participants significantly offloaded some (but not all) targets to the computer partner, thereby improving their individual tracking accuracy (Experiment 1). A similar tendency for offloading was observed when participants were informed beforehand that the computer partner’s tracking accuracy was flawless (Experiment 2). The present findings show that humans are willing to (partially) offload task demands to an algorithm to reduce their own cognitive load. We suggest that the cognitive load of a task is an important factor to consider when evaluating human tendencies for offloading cognition onto artificial systems.
Introduction
Cognitive offloading is defined as "the use of physical action to alter the information processing requirements of a task so as to reduce cognitive demand" [1, p. 676]. As such, offloading can help humans to overcome cognitive capacity limitations [2,3], enabling them to attain goals that could not have been attained (as quickly, easily, or efficiently) otherwise. In the present paper, we specifically focus on offloading cognition onto technological devices [1]. For example, consider using Google Maps (or a similar navigation app on your phone) while driving, to guide you to your destination-by doing so, you free up the cognitive capacity normally required for paying attention to road signs, which in turn allows you to focus entirely on the busy traffic or even to perform a secondary task such as talking to your friend in the passenger seat. Or consider using virtual assistants such as Siri or Alexa to retrieve information that you would otherwise spend time and cognitive effort looking up manually (e.g., by searching for it on the internet). In the near future, humans will likely (be required to) interact more frequently with increasingly more sophisticated artificial systems (e.g., ChatGPT) to facilitate daily as well as professional activities [4]. For this reason, it is both necessary and timely to further investigate the conditions under which humans are willing to offload certain tasks to artificial systems.
Recent research [5][6][7][8] on cognitive offloading has investigated a number of tasks and factors that influence the human likelihood to offload cognition to technological devices. For instance, one study [5] used a mental rotation task in which participants could reduce their cognitive demand by externalizing the task by manipulating a knob to physically rotate the stimulus. The (actual and believed) reliability of that knob was systematically varied. It was found that participants adapted their offloading behavior such that they used the knob to a smaller extent if its reliability was (believed to be) lower. This finding suggests that preexisting beliefs about the technological reliability, even if false, can influence offloading behavior. In a follow-up study [6], researchers instructed participants to either focus on the speed or the accuracy of their performance. If the focus was on accuracy, participants offloaded more frequently compared to if the focus was on speed, suggesting that offloading behavior is influenced by performance goals. In a different study [7], participants were required to solve an arithmetic or a social task with the aid of either another human, a (picture of a) robot, or an app. These potential "assistants" were described as having either task-specific or task-unspecific expertise. It was found that the different descriptions greatly influenced offloading behavior with regard to the app (i.e., more offloading for task-specific vs. task-unspecific expertise) but less so with regard to the human and robot. This suggests that the expertise of a technological system is another factor that influences offloading behavior in humans. It was also found that participants offloaded the arithmetic task to a "socially incapable" robot while offloading the social task to a "socially capable" robot even though both robots were equally capable of solving the arithmetic task [8]. This suggests that humans tend to implicitly assume that artificial systems are designed to be skilled in only one task domain (e.g., the social domain), but not in two different domains (e.g., social and arithmetic). In sum, the above-mentioned studies show that the human willingness to offload tasks to technological systems is influenced by the system's (actual and believed) reliability [5], its (ascribed) expertise [7,8], as well as participants' own performance goals [6].
Other studies compared the human willingness to offload tasks to another human vs. a technological system. In particular, these studies identified several factors that influence whether humans tend to show "algorithmic appreciation" or "algorithmic aversion", i.e., whether humans prefer to rely on an assessment made by an algorithm rather than by a human, or the other way around (for a recent review, see [4]). Algorithmic appreciation/aversion has often been tested in decision-making contexts. In these studies, an algorithm either aids participants in the decision-making process by taking an advisory role [9] or participants are asked whether a particular decision should be taken by another human or by an algorithm [10]. For instance, in the latter case, participants were asked in one study [10] whether it is more permissible that a medical decision (i.e., performing a risky surgery or not) is made by a human doctor or by an algorithm. Participants clearly preferred the human doctor, suggesting algorithmic aversion. However, if participants were told that the expertise of the algorithm was higher than that of the human doctor, participants' preference shifted to algorithmic appreciation. Note that medical decision-making often involves particularly high stakes, since making a wrong decision can have serious consequences. In contrast, many everyday tasks that humans might potentially offload to algorithms involve comparatively low stakes, such as asking Siri for advice or using Google Maps to plan a route. Even if the respective algorithm failed (e.g., Siri does not provide the desired information or Google Maps does not plan the most efficient route), the consequences for the human user would normally not be serious (e.g., spending some extra time to retrieve the desired information or to plan the route).
A related set of studies investigated situations where algorithms can perform tasks autonomously-so-called "performative algorithms" [4]. For instance, one study tested whether humans prefer to rely on another human's forecast or on the forecast of an algorithm [11]. These forecasts predicted students' academic success based on their admission data. Participants tended to rely on the human forecaster, even if the algorithm made more accurate predictions than the human. Another study [12] investigated how humans judge the authenticity of works (e.g., art or music) created by either humans or algorithms. Participants perceived the algorithmic work as less authentic and genuine than the human work. Yet, if participants were told that an algorithm had created the work while being guided by a human, they judged this work as more authentic (than if an algorithm had created it autonomously).
To summarize (see [4]), key factors that determine whether humans show algorithmic appreciation or aversion are whether the algorithm (1) performs tasks on its own [12] or has an advisory role [9], (2) makes mistakes or performs flawlessly [11], and (3) is perceived as capable of performing the assigned task or not [10]. With regard to the first point, people are more likely to feel aversion towards an algorithm that performs a task autonomously compared to an algorithm in an advisory role [4]. With regard to the second point, if people observe an algorithm making mistakes, their attitude can quickly change from appreciation to aversion [11]. And with regard to the third point, if the algorithm is presented as more capable than a human agent, appreciation is more likely than aversion [10]. As noted earlier, another key factor to consider is whether the task or decision at hand involves high or low stakes, as this might influence the basic human tendency for algorithmic appreciation or aversion. This difference between high and low stakes might be additionally modulated by whether humans are more or less risk averse. Further factors include the prior description of an algorithm (see recent review [13]) and various kinds of affective influences such as whether a robot has human facial features (like iCub) or not. Finally, it matters whether a person is asked to decide if she would rather perform a task herself or to (partially) delegate it to an algorithm, or whether she is asked to decide if a task should be performed by another human or an algorithm.
To date, however, research has not investigated to what extent humans show offloading in tasks with high attentional demands. Are humans willing to (partially) offload task demands to an algorithm in order to reduce their own attentional load? This question is highly relevant since humans tend to commit more errors under high attentional load [14] and could thus substantially profit from cognitive offloading. By sharing a task with an algorithm, humans could not only reduce cognitive effort and thereby increase individual accuracy but could also go beyond their own capacity limitations [14] and thus achieve an overall performance level that exceeds the level they would achieve if they performed the entire task alone.
Thus, in the present study, we investigated whether and to what extent humans offload parts of an attentionally demanding task to an algorithm-and whether prior information about this algorithm affects the human tendency for offloading.
We addressed this question by conducting two behavioral experiments in which participants performed a multiple object tracking (MOT) task [15]. We chose this task because it has been reliably used in earlier research to test the limitations of attentional processing [16] and to investigate human-human [17] as well as human-AI collaborations [18,19]. In the MOT task, participants are required to track a subset of moving target objects among distractor objects on a computer screen. Studies showed that humans are able to track a limited number of objects (four objects [20]; but also see: [21]) and that tracking is an effortful task that requires sustained attention over prolonged periods of time (e.g., [22,23]). Thus, the MOT task is highly suitable to tax the human attentional system and therefore provides an ideal "test case" for our purposes.
With regard to the outcome of the present study, two opposite hypotheses can be made based on prior research. On the one hand, earlier findings suggest that humans will not offload tasks to an algorithm that acts autonomously, as they prefer algorithms that have an advisory function only [4]. On the other hand, given that the present task is attentionally demanding, one could predict that humans will (partially) offload this task to an algorithm to reduce their own cognitive effort [1]-in line with the offloading tendencies humans show nowadays in certain daily activities (e.g., when using Google Maps or Siri). Given that the present study uses a scenario with what can be considered rather low stakes, the second hypothesis seems more likely, as human offloading tendencies may arguably be stronger when low stakes are involved.
Moreover, as previous research demonstrated that the human tendency for algorithmic appreciation vs. aversion may depend on the (ascribed) reliability [5] and expertise of an algorithm [7,8,10], we explored whether including the expertise factor would affect participants' behavior also in the present study. For this reason, we conducted one experiment in which participants did not receive any prior information about the expertise of the algorithm (Experiment 1; "computer capacity unknown") and a second experiment in which participants were informed beforehand that the algorithm was flawless (Experiment 2; "computer capacity known").
Participants
Fifty-two university students participated in the present study: 26 participants took part in Experiment 1 (M = 25.82 years, SD = 4.30 years, 11 female, 15 male) and 26 in Experiment 2 (M = 26.85 years, SD = 8.46 years, 18 female, 8 male). Sample size was determined using G*Power [24] (alpha = 0.05, power = 0.80) with the aim to detect medium-sized effects both for correlations (r = 0.5) and pairwise comparisons (Cohen's d = 0.58). Participants gave written informed consent prior to participation and received 10 EUR as compensation. All participants expressed their consent for publication. The study was conducted in line with the ethical principles of the Declaration of Helsinki and of the American Psychological Association (APA). The study was approved by the ethics committee of the Institute of Philosophy and Educational Research at Ruhr University Bochum (EPE-2023-003).
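For transparency, the order of magnitude of this sample size can be reproduced with a standard power calculation. The snippet below is only a rough sketch (the authors used G*Power, whose exact test family and tail settings we do not reproduce here), shown for the paired comparison with d = 0.58:

```python
# Approximate sample-size calculation for a paired/one-sample t-test (illustrative only).
from statsmodels.stats.power import TTestPower

n = TTestPower().solve_power(effect_size=0.58, alpha=0.05, power=0.80,
                             alternative="two-sided")
print(n)   # roughly 25-26 participants, in line with the reported sample of 26 per experiment
```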
Experimental setup and procedure
Participants sat at a desk in front of a 24" computer screen (resolution: 1920 x 1080 pixels, refresh rate: 60 Hz), at a distance of 90 cm. A keyboard and mouse were positioned within easy reach.
Experiment 1 consisted of two conditions, a Solo condition (always performed first) and a Joint condition (always performed second). We chose this fixed order of conditions because we wanted to see whether (and to what extent) participants would reduce their maximum individual tracking load in the Joint condition by offloading to the computer partner. For participants to decide how much to offload, they needed to be aware of their own individual capacity. This means that in case of the MOT task, participants needed to know how many targets they were able to track individually. For this reason, we had participants perform the Solo condition first (so they could learn about their individual capacity), followed by the Joint condition where participants could then decide whether they wanted to reduce their tracking load relative to the Solo condition. Note that we address potential concerns about order effects in the final paragraph of the Results section.
There were 25 trials in the Solo and 50 trials in the Joint condition; resulting in 75 trials in total. In the Solo condition, participants performed the MOT task alone. In each trial, 19 stationary objects were initially displayed on the screen. Out of these, 6 objects were randomly selected as "targets" and were highlighted in white; all other objects served as "distractors" and were colored in grey. Participants were instructed to select as many (i.e., between 0 and 6) targets as they would like to track in that trial. They indicated their selection via mouse click and then confirmed this selection by clicking on a dot in the center of the screen. Upon confirmation, the highlighted targets switched color such that all objects (targets and distractors) now looked identical, i.e., all were colored grey. All objects then started to move, moving across the screen in randomly selected directions for a duration of 11 seconds. While moving, objects repelled each other and the screen borders in a physically plausible way (i.e., angle of incidence equals angle of reflection). Once the objects stopped moving, participants were instructed to click on those targets they had previously selected. They confirmed their target selection by clicking on the dot in the center of the screen, thereby marking the end of the trial. For an exemplary trial sequence, see Fig 1 (top row).
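To make the trial mechanics concrete, the object motion described above can be sketched as follows. This is an illustrative reimplementation, not the authors' experiment code: screen size, frame rate, and trial duration are taken from the setup described here, the object speed is an assumed value, and object-object repulsion is omitted for brevity (it would reflect the velocity components along the line connecting two colliding objects in the same angle-of-incidence-equals-angle-of-reflection fashion).

```python
import math
import random

W, H = 1920, 1080          # screen resolution in pixels
FPS = 60                   # refresh rate
SPEED = 300.0              # object speed in pixels per second (assumed value)
DT = 1.0 / FPS

def make_object():
    angle = random.uniform(0.0, 2.0 * math.pi)
    return {"x": random.uniform(0, W), "y": random.uniform(0, H),
            "vx": SPEED * math.cos(angle), "vy": SPEED * math.sin(angle)}

def step(obj):
    obj["x"] += obj["vx"] * DT
    obj["y"] += obj["vy"] * DT
    if obj["x"] < 0 or obj["x"] > W:   # reflect off left/right screen borders
        obj["vx"] = -obj["vx"]
        obj["x"] = min(max(obj["x"], 0), W)
    if obj["y"] < 0 or obj["y"] > H:   # reflect off top/bottom screen borders
        obj["vy"] = -obj["vy"]
        obj["y"] = min(max(obj["y"], 0), H)

objects = [make_object() for _ in range(19)]   # 6 targets + 13 distractors
for _ in range(int(11 * FPS)):                 # 11 seconds of motion
    for obj in objects:
        step(obj)
```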
Participants were informed that they would earn one point for each correctly identified target, yet would lose one point for each incorrect selection. The goal was to identify as many targets as possible without making mistakes. Prior to performing the Solo condition, participants were instructed by the experimenter on the task procedure and performed two training trials to become familiar with the sequence of events.
In the Joint condition, participants performed the same MOT task as in the Solo condition but now together with a so-called "computer partner" (which was an algorithm). Participants were told that those targets that they did not select themselves would be selected and tracked by the computer. That is, after participants selected their targets and confirmed their selection, the remaining targets were highlighted in violet to indicate that these had been selected by the computer partner. For instance, if a participant selected three particular targets, the remaining three targets were automatically selected by the computer partner. Then, as in the Solo condition, all objects started moving across the screen (with targets and distractors looking identical) for 11 seconds. Once objects stopped moving, participants were asked to select their previously selected targets. After participants confirmed their selection, the remaining targets were selected by the computer partner; the selected targets were highlighted in yellow. Participants then confirmed seeing the computer partner's selection by pressing the space bar, thereby marking the end of the trial. For an exemplary trial sequence, see Fig 1 (bottom row).
As in the Solo condition, participants were informed that they would earn one point for each correctly identified target and lose one point for each incorrect selection, and that the same was true for the targets selected by the computer partner. Participants were told that the total score consisted of the sum of their own and the partner's points. The goal was to identify, jointly with the partner, as many targets as possible without making mistakes. The tracking accuracy of the computer partner was 100% (i.e., its selections were always correct), yet participants were not explicitly informed about this. Prior to performing the Joint condition, participants were instructed by the experimenter about the task procedure and performed two training trials.
After completing both conditions, participants were presented with several task-related questions. First, they were asked how many targets they had tracked (and why) and whether they had followed a certain strategy when selecting which targets to track. These questions were asked separately for each condition. In addition, for the Joint condition, participants were asked how they decided how many targets to track themselves and how many targets to "offload" to the computer partner (note that the word "offload" was not explicitly used). Lastly, participants were asked to rate how many targets they believed the computer partner was able to track (on a scale from "0 targets" to "more than 6 targets").
After answering the questions above, participants were presented with a set of questionnaire items that capture personality traits. This set was designed such that participants' replies might provide insight into the factors that influence a person's willingness to offload tasks to an algorithm. The set consisted of the Desirability of Control Scale (20 items [25]), the Trust in Automation Scale (3-item subset [26]), and the Affinity for Technological Systems Scale (9 items [27]). In addition, participants were asked to rate how reliable and competent they perceived the computer partner to be (7 items). This last set of items was also taken from the Trust in Automation Scale [26] but was slightly adapted in wording such that it referred specifically to the computer partner. The exact wording of all items (in German), as well as links to the English translation, is provided in S1 File.
In Experiment 2, the procedure was exactly the same as in Experiment 1, with only one difference: Prior to performing the Joint condition with the computer partner, participants were informed that the computer partner is software specifically designed for the MOT task such that it tracks targets with 100% accuracy regardless of how many targets it is assigned to track (see S1 File for the exact wording in German). In other words, they were told that the computer partner performs the task flawlessly, with an unlimited tracking capacity. Note that the tracking accuracy of the computer partner was 100% in both Experiments 1 and 2; the only difference was that in Experiment 2, participants were made aware of this fact whereas participants in Experiment 1 were uninformed.
The experiments were programmed in Python 3.0 using the pygame library. An experimental session (i.e., completing the experiment as well as the questionnaires) took about 60 minutes. All analyses were performed using custom R scripts.
Results
In Experiment 1, to address the question of whether humans offload (parts of) an attentionally demanding task to an algorithm, we compared how many targets participants chose to track in the Solo condition compared to the Joint condition (for a descriptive overview, see Fig 2A). We found that participants chose to track significantly fewer targets in the Joint (M = 2.68, SD = 1.08) compared to the Solo (M = 3.42, SD = 0.55) condition, paired t-test: t(25) = -3.67, p = .001, Cohen's d = 0.72 (medium-sized effect [28]). This indicates that, in the Joint condition, participants decided to offload a subset of targets to the computer partner. Note that in the Solo condition, participants tracked between 3 and 4 targets on average, indicating that they tracked (close to) the maximum number of four targets humans are typically capable of tracking [21]. We next explored how offloading affected participants' tracking accuracy (for a descriptive overview, see Fig 2B). Accuracy was computed by dividing participants' correct selections by all selections for each trial. We found that accuracy was significantly higher in the Joint (M = 0.97, SD = 0.03) compared to the Solo (M = 0.94, SD = 0.04) condition (paired t-test: t(25) = 4.59, p < .001, Cohen's d = 0.90, large-sized effect [28]), suggesting that offloading targets to the computer partner increased the accuracy of participants' individual performance.
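The comparisons reported above were run with custom R scripts; as a hedged illustration, the same computation can be sketched in a few lines of Python. The data below are simulated placeholders (not the study data), and the effect size is computed as the mean difference divided by the standard deviation of the differences.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 26  # participants

# Simulated participant-level means (illustrative values only)
solo_targets = rng.normal(3.4, 0.6, n)
joint_targets = rng.normal(2.7, 1.1, n)
solo_acc = np.clip(rng.normal(0.94, 0.04, n), 0, 1)    # accuracy = correct / all selections
joint_acc = np.clip(rng.normal(0.97, 0.03, n), 0, 1)

def paired_report(a, b):
    """Paired t-test plus Cohen's d for dependent samples (mean diff / SD of diffs)."""
    res = stats.ttest_rel(a, b)
    diff = a - b
    d = diff.mean() / diff.std(ddof=1)
    return res.statistic, res.pvalue, d

print(paired_report(joint_targets, solo_targets))  # fewer targets tracked in the Joint condition
print(paired_report(joint_acc, solo_acc))          # higher accuracy in the Joint condition
```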
For Experiment 2, we first performed a belief manipulation check, i.e., we verified whether participants had actually believed that the computer partner was flawless (i.e., tracked with 100% accuracy), in line with what they had been told by the experimenter. For this purpose, we examined participants' replies to the question "How many targets can the computer partner track accurately?" (see S1 File). Note that this question was asked after participants had performed the Joint condition. We found that the computer's capacity was rated as significantly higher in Experiment 2 compared to Experiment 1 (χ2(5) = 16.83, p = .005), see Fig 3. Twenty-five out of the twenty-six participants rated the computer's capacity to be "6 targets" (5 participants) or higher (20 participants). This confirms that most participants in Experiment 2 actually believed that the computer partner could flawlessly track at least 6 targets (i.e., the maximum number of targets in the present task).
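The between-experiment comparison of capacity ratings can be sketched as a chi-square test on cross-tabulated rating categories. The counts below are made up for illustration (they are not the study data); a 2 x 6 table yields the 5 degrees of freedom reported above.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: Experiment 1 and Experiment 2; columns: observed rating categories.
# Counts are illustrative only and sum to 26 participants per experiment.
table = np.array([
    [2, 3, 6, 7, 5, 3],   # Experiment 1
    [0, 0, 1, 5, 8, 12],  # Experiment 2
])
chi2, p, dof, expected = chi2_contingency(table)
print(chi2, p, dof)
```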
Consistent with the analyses performed for Experiment 1, we also analyzed the number of targets participants selected and participants' tracking accuracy for Experiment 2. Results showed that participants selected significantly fewer targets (paired t-test: t(25) = -6.60, p < .001, Cohen's d = 1.29, large-sized effect [28]) and performed with a significantly higher accuracy (paired t-test: t(24) = 4.08, p < .001, Cohen's d = 0.81, large-sized effect [28]) in the Joint (M targets = 2.19, SD targets = 1.06; M accuracy = 0.97, SD accuracy = 0.04) compared to the Solo (M targets = 3.35, SD targets = 0.56; M accuracy = 0.93, SD accuracy = 0.06) condition (for a descriptive overview, see Fig 4). This is in line with the findings from Experiment 1 and indicates that, in the Joint condition, participants offloaded a subset of targets to the computer partner. This offloading was accompanied by a boost in individual accuracy.
To assess whether the extent of offloading was larger in Experiment 2 than in Experiment 1 (i.e., whether the prior information about the computer's capacity influenced participants' willingness to offload), we compared the number of targets selected in Experiments 1 versus 2. There was no significant difference, independent t-test: t(50) = 1.59, p = .117, Cohen's d = 0.44 (small-sized effect [28]). When calculating a Bayes factor for this comparison, we found that the null hypothesis is 1.27 times more likely than the alternative hypothesis.
Turning to the questionnaire results, we first analyzed participants' ratings regarding the number of targets they had tracked in the Solo and Joint condition (for a descriptive overview, see Fig 5). Participants' ratings were consistent with their actual behavior, i.e., participants stated that they had tracked fewer targets in the Joint vs. Solo condition both in Experiment 1 (paired t-test: t(25) = 3.58, p = .001, Cohen's d = 0.70, medium-sized effect [28]) and Experiment 2 (paired t-test: t(25) = 4.97, p < .001, Cohen's d = 0.98, large-sized effect [28]). We next categorized the strategies that participants described when asked how they had selected which targets to track. In the Solo condition, by far the most prevalent strategy across experiments was to choose objects that were initially displayed in close proximity on the screen. The majority of participants (85% in Exp. 1; 77% in Exp. 2) reported this strategy; the rest did not indicate any clear strategy.
In the Joint condition, we had asked participants how they decided how many targets to track themselves and how many targets to "offload" to the computer partner. To evaluate participants' strategies, we created the following four categories based on participants' responses: (1) No change: Participants did not change their behavior relative to the Solo condition; (2) Fair: Participants preferred a fair split (selecting 3 targets for themselves and offloading 3 targets to the computer partner); (3) Minor offloading: Participants offloaded targets to a small extent (selected 0.5-1 target fewer in the Joint compared to the Solo condition); (4) Major offloading: Participants offloaded targets to a large extent (selected 1.5-3.5 targets fewer in the Joint compared to the Solo condition). Note that, if participants indicated a range (e.g., 1-2 targets), we computed the average of the bounds of the range (e.g., 1.5 targets). When comparing the frequency of the reported strategies in the Joint condition across experiments (for a descriptive overview, see Fig 6), we observed that around 40% of participants (i.e., 11 out of 26 participants) indicated no change in behavior relative to the Solo condition in Experiment 1, whereas in Experiment 2, the same number of participants (i.e., 11 out of 26 participants) reported minor offloading. However, this difference was not significant (χ2(3) = 3.17, p = .365). Focusing only on the frequency of participants indicating offloading (either minor or major), we observed that 11 participants (42%) in Experiment 1 and 17 participants (65%) in Experiment 2 reported offloading as a strategy in the Joint condition.
We next tested to what extent our questionnaire scales predicted offloading. For this purpose, we combined the data from our two experiments and first evaluated the reliability of all our questionnaire scales using Cronbach's alpha [29]. We found acceptable reliabilities for all our scales (Desirability of Control: 0.78; Affinity for Technological Systems: 0.91; Trust in Automation: 0.69; Reliability and Competence of the Computer Partner: 0.73) [30]. We entered all questionnaire scales as metric predictors and the between-subjects factor Experiment (1 vs. 2) into a multiple regression and used the offloading extent (i.e., the difference between the number of selected targets in the Solo vs. Joint condition) as the dependent variable (F(5,46) = 1.00, p = .426, R2 = 0.10). None of the predictors was significant (Desirability of Control: β = -0.06, t = -0.42, p = .679; Affinity for Technological Systems: β = 0.07, t = 0.45, p = .655; Trust in Automation: β = -0.12, t = -0.81, p = .422; Reliability and Competence of the Computer Partner: β = -0.18, t = -1.26, p = .213), suggesting that none of the questionnaire scales predicted how many targets participants decided to offload.
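A minimal sketch of this analysis step (Cronbach's alpha for scale reliability and a multiple regression predicting offloading extent) could look as follows. Variable names and the simulated data are placeholders, and the original analyses were run in R; this is only an illustration of the general approach.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
n = 52  # both experiments combined

# Illustrative item matrix for one scale (e.g., 20 Desirability of Control items)
items = pd.DataFrame(rng.normal(0, 1, (n, 20)))
print(cronbach_alpha(items))

# Illustrative data frame: scale scores, experiment factor, offloading extent
df = pd.DataFrame({
    "desirability_of_control": rng.normal(0, 1, n),
    "affinity_for_technology": rng.normal(0, 1, n),
    "trust_in_automation": rng.normal(0, 1, n),
    "partner_reliability": rng.normal(0, 1, n),
    "experiment": np.repeat([0, 1], n // 2),           # 0 = Exp 1, 1 = Exp 2
    "offloading_extent": rng.normal(0.9, 0.8, n),      # Solo minus Joint targets
})

X = sm.add_constant(df.drop(columns="offloading_extent"))
model = sm.OLS(df["offloading_extent"], X).fit()
print(model.summary())
```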
Finally, we addressed a potential confound inherent in our experimental design: Because participants completed the two conditions always in a fixed order, with the Joint after the Solo condition, it is possible that the difference we observed between the two conditions can be simply ascribed to effects of fatigue. In other words, maybe participants decided to track fewer targets in the Joint compared to the Solo condition simply because they felt tired-and not because they wanted to offload parts of the attentional demands to the computer partner. If this was true, the same drop in selected targets should occur if the first Solo condition was followed by a second Solo (rather than Joint) condition, because participants should be equally fatigued over time. We addressed this possibility by drawing on data from an earlier study conducted by the first author ( [17]; see "No Information" condition). In that study, 32 participants performed the same MOT task as in the present study for 100 trials; the present study consisted of 75 trials in total. Note that participants in that study took part in pairs of two yet performed the task individually, just as in the Solo condition of the present study. We split participants' data into 25-trial segments and tested whether, over time, the number of objects participants selected and their tracking accuracy decreased. For this purpose, we used a linear mixed model with random intercepts for each pair and included the within-subjects factor Trial Segments (1st Quarter, 2nd Quarter, 3rd Quarter, 4th Quarter), for which we defined the 1st Quarter as the reference group (i.e., all other trial segments are compared to this reference group). We found that the number of selected targets did not significantly differ relative to the reference group (1st Quarter vs. 2nd Quarter: t(122) = -0.71, p = .479; 1st Quarter vs. 3rd Quarter: t(122) = -0.60, p = .551; 1st Quarter vs. 4th Quarter: t(122) = -0.10, p = .918). The same was true for participants' tracking accuracy (1st Quarter vs. 2nd Quarter: t(123) = 0.52, p = .602; 1st Quarter vs. 3rd Quarter: t(122) = 0.51, p = .612; 1st Quarter vs. 4th Quarter: t(122) = 0.11, p = .915). In sum, these results suggest that, in a MOT task with up to 100 trials, neither the number of selected targets nor the tracking accuracy decreases over time as a result of fatigue. Accordingly, the observed difference between the Solo and Joint condition in the present study can hardly be explained by effects of fatigue.
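The fatigue check relies on a linear mixed model with random intercepts per pair and the trial segment as a fixed factor with the first quarter as reference. A hedged Python sketch of such a model is given below; the original analysis used R, and the data and column names here are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Illustrative long-format data: 16 pairs x 2 participants x 4 trial quarters
rows = []
for pair in range(16):
    pair_effect = rng.normal(0, 0.3)
    for participant in range(2):
        base = 3.4 + pair_effect + rng.normal(0, 0.2)
        for q in range(1, 5):
            rows.append({
                "pair": pair,
                "quarter": f"Q{q}",
                "n_selected": base + rng.normal(0, 0.2),
            })
df = pd.DataFrame(rows)

# Random intercept per pair; the 1st quarter (Q1) is the reference level.
model = smf.mixedlm(
    "n_selected ~ C(quarter, Treatment(reference='Q1'))",
    data=df,
    groups=df["pair"],
).fit()
print(model.summary())
```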
Discussion
In the present study, we investigated whether and to what extent humans offload parts of an attentionally demanding task to an algorithm-and whether prior information about this algorithm affects the human tendency for offloading. In two experiments, participants performed a multiple object tracking (MOT) task, either alone (Solo condition) or together with a so-called computer partner (Joint condition). Across experiments, participants showed a clear willingness to offload parts of the task (i.e., a subset of to-be-tracked targets) to the computer partner. In particular, participants decided to track 3.4 targets on average in the Solo compared to 2.4 targets in the Joint condition, i.e., they offloaded 1 target to the computer partner. Earlier research suggests that this difference in the number of tracked targets cannot be alternatively explained by a progressive performance decrease over time due to fatigue. This result is in line with earlier research predicting that humans will (partially) offload an attentionally demanding task to an algorithm to reduce their own cognitive effort [1]. Participants' behavior in the present study is also consistent with the offloading tendencies humans show nowadays in certain daily activities (e.g., when using Google Maps or Siri).
The decision to offload a subset of targets to the computer partner resulted in a boost in participants' tracking accuracy, indicating that participants made fewer mistakes thanks to a reduction in cognitive demand [1]. This result demonstrates that offloading benefited participants' individual performance.
As previous research demonstrated that the human tendency for algorithmic appreciation vs. aversion may depend on the (ascribed) reliability [5] and expertise of an algorithm [7,8,10], we also explored whether the expertise factor would affect participants' behavior in the present study. To this end, participants in Experiment 1 did not receive any prior information about the expertise of the algorithm whereas participants in Experiment 2 were informed beforehand that the algorithm was flawless. Results showed that this prior information did not significantly affect the number of targets participants decided to track.
Regarding our main result, i.e., the observation that participants decided to track fewer targets in the Joint compared to the Solo condition, one may wonder whether this difference could be due to the fact that in the Joint condition, participants might have tried to monitor the computer's performance in addition to performing their own tracking task. If this was true, then this might have impaired participants' individual tracking capacity, resulting in a smaller number of tracked targets in the Joint condition. Note that this worry concerns Experiment 1 only, as in Experiment 2, participants were told that the computer's accuracy was perfect such that there was no obvious need for monitoring. From participants' questionnaire responses, we know that indeed, in Experiment 1, several (i.e., 19%) participants tried to find out about the computer's accuracy through monitoring the targets selected by the computer. However, participants quickly realized that the computer's accuracy was high and stopped monitoring after only a few trials. No participant reported having monitored the computer throughout. These subjective self-reports are supported by participants' objective accuracy scores: if participants had monitored the computer throughout the entire experiment, then their own performance should have been severely impaired. Instead, participants performed with the same average accuracy of 97% in Experiments 1 and 2, while the number of tracked targets was similar. The fact that accuracy levels did not differ between experiments suggests that participants did not perform a secondary task in Experiment 1.
Moreover, our results show that participants were well aware of their tracking/offloading decisions. When asked after the experiment how many targets they had tracked in the Solo and Joint condition, the reported numbers were consistent with the numbers they had actually tracked. This suggests that participants had a high metacognitive awareness of their actions and that the offloading was likely an explicit choice participants made, which fits well with the proposition that metacognitive evaluations play a central role in offloading [1].
Finally, our questionnaire results do not provide any support for the hypothesis that participants' offloading tendencies might be related to certain personality dimensions such as desirability of control [25], trust in automation [26], and affinity for technological systems [27]. However, future studies with larger sample sizes are needed to rule out any influence of these personality dimensions more decisively. Alternatively, future studies could test these factors experimentally, e.g., by comparing two groups of pre-selected participants (e.g., one group with a high and one with a low affinity for technological systems).
We suggest that future studies could consider the degree of cognitive load also as a moderating variable. For instance, for medical decisions, people may be more inclined to rely on the advice of an algorithm rather than a human doctor if they know that the human doctor formulated her advice under high cognitive load (e.g., at a particularly busy time in the hospital). This is because people might realize that the performance of the algorithm, in contrast to that of the human, cannot be affected by any inherently human cognitive capacity limitations. Note that, as mentioned at the outset, future studies would also need to investigate the effect of cognitive load in high stakes scenarios. As the experimental task used in the present study involves comparatively low stakes, our results cannot be generalized to high stakes scenarios such as the aforementioned medical decision-making. Investigating the latter types of scenarios seems worthwhile because it is in these scenarios that errors committed by humans under high attentional load [14] can have serious consequences, and thus, humans could substantially profit from cognitive offloading. Hence, understanding the offloading tendencies of humans (e.g., doctors) in such high stakes scenarios is critical.
More generally, the conclusions drawn from the present study are limited in the sense that we have only investigated the willingness of human participants to offload a task to a computer partner, yet we did not test their willingness to offload the same task to a human partner. Thus, based on the present findings, we can report that humans are willing to (partially) offload an attentionally demanding task to an algorithm, but we do not know to what extent they would offload this task to another human, nor whether, if given the choice, they would choose the computer or the human as a partner. These questions need to be addressed by future studies to gain a more comprehensive picture of the human aversion/appreciation tendencies.
A final open question one must ask in light of the results of the present study is why, in Experiment 2, participants did not decide to offload the complete task to the computer partner-given that they knew that the computer would perform the task flawlessly. Two evident explanations for why participants did not offload the entire task are the following. (1) Participants did not want to feel bored while passively waiting for the computer to finish the task, and (2) participants experienced some form of experimental demand, i.e., they wanted to continue performing the experimental task that they had been asked (and paid) to perform. To address these points, we are currently conducting a follow-up study in which we inform participants that, if they were to offload the tracking task entirely to the computer partner, then they would be able to perform a secondary task while the computer performs the tracking task. This way, participants should feel free to offload the entire task-they would not feel bored, nor would they have the impression of failing to meet the experimenter's demands. Likely, the specifics of this secondary task will influence participants' behavior, e.g., it will matter whether the secondary task is easier or more difficult than the tracking task, whether it is incentivized (e.g., by an additional monetary gain), and whether it allows participants to still monitor the computer's performance.
To conclude, the present findings show that people are willing to (partially) offload task demands to an algorithm to reduce their own cognitive effort, thereby increasing individual accuracy. We suggest that the cognitive load of a task is an important factor to consider when evaluating human tendencies for offloading cognition onto artificial systems. | 8,895.6 | 2023-05-19T00:00:00.000 | [
"Psychology",
"Computer Science"
] |
Assessment of Background Illumination Influence on Accuracy of Measurements Performed on Optical Coordinate Measuring Machine Equipped with Video Probe
Currently, the Coordinate Measuring Technique is facing new challenges both in terms of the methodology used and the speed of measurement. More and more often, modern optical systems or multisensor systems replace classic solutions. Measurements performed using an optical system are more vulnerable to incorrect point acquisition due to such factors as inadequate focus or the parameters of the applied illumination. This article examines the effect of increasing illumination on the measurement result. A glass reference plate with marked circles and a hole plate standard were used for the measurements performed on a multi-sensor machine Zeiss O' Inspect 442. The experiment consisted of measurements of standard objects with different values of the backlight at the maximum magnification. Such an approach allows assessment of the influence of the controlled parameter on errors of diameter and form measurements, as well as the uncertainty of measurements, by determination of ellipses of point repeatability. The analysis of the obtained results shows that increasing backlight mainly affects the result of the diameter measurement.
Introduction
Coordinate Metrology is a field of metrology with a very wide range of applications. All objects in the space that surrounds us are three-dimensional. Most of them are manufactured according to a shape previously designed in accordance with the technical documentation. Such items are described in the documentation as a view or a section, along with any requirements for their execution, basic dimensions, tolerances, etc. Information about the form and individual dimensions of the measured object in the coordinate measuring technique is perceived as a set of coordinates of points. For the measurements, coordinate machines are used, which currently can be divided into two groups: contact machines and optical machines. The oldest and most numerous group is constituted by devices that use contact probe heads in order to obtain the coordinates of the point of contact with a measured surface. Such machines have been developed since the second half of the 20th century, and their properties have been thoroughly studied and described in the literature. However, over the last years, the second of the mentioned groups has been gaining more and more attention. Contactless coordinate machines offer some advantages which are virtually inaccessible to tactile systems, mainly an incomparable speed of point coordinate acquisition. At the same time, the accuracy of contactless systems is generally described as lower than that of contact coordinate measuring machines (CMMs). The solution that efficiently combines the advantages of classical coordinate machines and optical machines is the group of so-called multisensor machines [1,2]. As a relatively new solution, such systems require intensive research, especially on the accuracy of measurements [3,4].
The main contributors to the overall accuracy of machines of this kind include sources connected with machine kinematics and tactile probe characteristics, which can be tested and described in a similar manner as in the case of classic CMMs. Additionally, sources related to the utilization of different contactless probing systems can be pointed out, such as errors connected with the functioning of a camera or a white light sensor, algorithms used for edge detection, or properties of the applied lighting.
The topic of illumination influence on the accuracy of measurements performed using optical coordinate measuring machines (OCMMs) equipped with a video probe has rarely been investigated in the past. In [5], Kim and McKeown presented analytical and experimental approaches to exploring how the measuring uncertainty limit of video probes is determined by major design parameters, one of them being the illumination. Using an example of optimal design, they demonstrated that an ultraprecision measurement with 0.01 µm uncertainty can be practically achieved provided optimal lighting conditions. In [6], Tran and Claudet reported on the sensitivity of vision probing on an OCMM to different lighting conditions, both for unidirectional and bidirectional measurements. They found that the lighting is a major contributor to the measurement error budget, especially when a bidirectional measurement needs to be made.
There are also papers dealing more generally with the issue of measurement accuracy and uncertainty in measurements performed using optical CMMs, in which the illumination is considered as one of the uncertainty contributors. In [7], Carmignato and others used two artefacts commonly used for performance verification of optical CMMs, a linear glass scale and an optomechanical hole plate, for quantification of uncertainty contributors in coordinate measurements using video probes. The results showed a significant influence of illumination, objective magnification, measuring window size, use of autofocus and image filtering on measurement uncertainty. In [3], Weckenmann and Bernstein described a prototype of an optical multi-sensor measurement system combining a shadow system and a light-section system for the in-line inspection of concave extruded profiles. An experimental analysis of the measurement uncertainty was performed. To this aim, the effect of typical environmental influences like dust, object vibrations, the illumination's pitch error or extraneous light on the measurement accuracy was examined. After those analyses, the measurement system was evaluated under shop floor conditions. The authors found that the influences of extraneous light and reflections are the reasons for the higher uncertainty values. In [8], Carmignato presented an industrial comparison of CMMs equipped with optical sensors, performed in Europe over a 1.5-year period. Participants were chosen mainly from small and medium-sized industrial companies. On each CMM taking part in this comparison, a set of calibrated artefacts (a glass scale, an optomechanical hole plate and 3D injection-molded standards) with measurement tasks of different complexity was measured. In addition to the evaluation of the actual metrological performance of optical CMMs in industry, an uncertainty estimation was also performed. One of the error sources demonstrated by the results was the use of translucent materials (such as plastic) that can transmit or reflect the light depending on the material properties (e.g., color) and on the type of light used, causing an effect of shrinkage or distortion of the measured features. Other papers treating accuracy and uncertainty in optical measurements may be found in [9][10][11][12].
Fundamental information on optics used in dimensional metrology and error sources including lighting conditions is given in [13][14][15][16]. Schwenke et al. provided a technical overview of the optical methods available for dimensional metrology in [17]. Methods for the measurement of length, angle, surface form and spatial coordinates were described. In the paper, both the metrological characteristics and the technical limitations of the methods were presented, along with some new and promising approaches that may play an important role in dimensional metrology for production. Moreover, the influence of illumination-related factors like ambient light, refraction effects, out-of-focus blur and others is investigated. In [18], Larue analyzed all the factors that contribute to measurement precision, with the illumination as one of them, and showed how the use of optical technologies makes it possible to greatly reduce the primary causes of measurement imprecision. Illustrations of such usage were also given in the form of specific cases taken from real applications in the aeronautical, automotive, and naval industries.
Another group of papers investigating the lighting influence on optical coordinate measurements consists of publications on sensor performance in multisensor systems and on data fusion techniques used for the data obtained with their use. Examples of such research may be found in [19][20][21][22][23].
Works on the development of a virtual model of an optical CMM equipped with a video probe are now in the final stage in the Laboratory of Coordinate Metrology. This model will be based on a description of the measurement uncertainty using multiple simulations of measuring point reproduction expressed as repeatability ellipses [24]. In order to make the virtual model fully functional, it is necessary to know two things:
1. Values of task-specific error changes related to measurement of distances, positions and different form deviations under changing lighting conditions (needed for systematic error correction of a single measurement result).
2. Changes in uncertainty areas of measuring point reproduction for point measurements under changing lighting conditions (needed for simulation of the uncertainty associated with a single measurement).
Investigations presented in this paper aim to determine the abovementioned changes introduced by changes in the backlight illumination of the measured workpiece.
The rest of the paper is arranged in the following way: Section 2 describes the methodology used in experiments; obtained results are shown in Section 3, while Section 4 includes a discussion over the experiments results and presents the direction of future works.
Materials and Methods
Measurements with the use of optical measuring machines start with mounting the object on the measuring table, setting the appropriate parameters and then performing the measurement, which is carried out with the use of a digital camera that takes a digital image of the measured object at programmed locations. After the digital image is taken, image analysis begins, aimed at detection of the object edges, which can be defined as significant local changes of intensity within the processed image. Most often this process is done using an algorithm that searches for gradient transitions. Then the pixel and subpixel contours are detected, which leads to determination of the object outline and its application to the original image [25][26][27][28][29]. The process is presented schematically in Figure 1.
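To illustrate the general idea of gradient-based edge detection used by video-probe software, the sketch below locates an edge along a single image row from the intensity gradient and refines it to subpixel precision with a parabolic fit. This is only a hedged illustration of the principle, not the algorithm implemented in the measuring system's software, and the synthetic intensity profile is an assumption.

```python
import numpy as np

def edge_position_subpixel(row: np.ndarray) -> float:
    """Locate the strongest intensity transition along one image row.

    The pixel-level edge is taken at the maximum of the absolute gradient;
    a parabola fitted to the three gradient samples around that maximum
    gives a subpixel correction.
    """
    grad = np.gradient(row.astype(float))
    i = int(np.argmax(np.abs(grad)))
    if 0 < i < len(row) - 1:
        g_left, g_mid, g_right = np.abs(grad[i - 1:i + 2])
        denom = g_left - 2 * g_mid + g_right
        if denom != 0:
            return i + 0.5 * (g_left - g_right) / denom
    return float(i)

# Synthetic backlit profile: bright background, dark silhouette, smooth transition
x = np.linspace(0, 1, 200)
row = 255 / (1 + np.exp((x - 0.4) * 80))   # edge near pixel index 80
print(edge_position_subpixel(row))
```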
The changes in intensity within an image can be attributed to the geometrical properties of measured objects, but they are also connected with the conditions under which the digital image was taken, such as the applied illumination. Techniques used to ensure an appropriate lighting of the measured object include: Backlighting, Diffuse Lighting, Direct Incident Lighting or Dark Field Illumination. This paper focuses on the first of the mentioned techniques, which is well known, for example, from its application in microscopy. In this method, the measured part is placed between the sensor and a light source; thus, it is possible to obtain a high contrast between the dark silhouette representing the object shape and the bright background. As the measured part is put in the light beam, the used intensity of illumination can significantly influence measurement results. However, settings connected with the light intensity in optical CMMs are typically chosen manually by the user. Therefore, a question arises how changes of the lighting intensity within the limits specified by the measuring system manufacturer will affect the measurement errors obtained for certain measuring tasks and the measurement uncertainty.
The above-mentioned problem can be examined by conducting appropriate experiments. One of the methods suitable for this purpose is a methodology based on reference object measurements conducted under changing conditions, in this case with changing illumination intensity. The experiment involved measurements of a special hole plate standard for optical systems, shown in Figure 2a, which can be used in a comparison between tactile and optical measurements and was described in [30,31]. The second reference object used in the research is a reference glass plate with marked circular features of different sizes (depicted in Figure 2b,c). The part coordinate system for the hole plate standard was based on the measurement of three circles whose centers were used for determination of the axes of the local coordinate system. Then the zero point of the designated datum was moved to circle number 1. The size of the reference object is 80 mm × 80 mm from the center of the circle located in the bottom left of the plate to the circle placed in the top right corner. The circle diameter equals 5.5 mm, and the distance between circles determined along the axes of the local coordinate system equals 20 mm. The orientation of the axes of the local coordinate system for the glass plate was copied from the machine coordinate system; only the origin was set in the center of a chosen circle. The feature used during inspection of this reference object has a diameter of 0.254 mm.
These reference objects were chosen to assess the influence of changing light properties both on inner and outer dimension measurements. Independent of the type of used reference object, the experiments involved multiple measurements of chosen features with different lighting intensities. After the local coordinate system was determined, the chosen circles were measured using 12 points. The initial value of the backlight was determined empirically as the value L9, which means that the illumination has an intensity of 9% of the maximum value recommended by the measuring system manufacturer. This is the limit value at which the software is still able to find the gradient threshold and measure the point. Below this value, the software does not find points and reports an error. The backlight intensities were tested with higher density up to the value marked as L20, and then the intensities were changed with a step of 5% up to L100, which means the maximum (100%) available illumination setting on the machine.
In the case of the hole plate standard, three circles were measured (marked as 1, 13 and 25; see Figure 2), which are placed on the diagonal of the plate. Measurements were repeated 30 times for each applied illumination. After each measurement, the diameter of the circle was determined, together with the form deviation and the position of the circle center.

Additionally, the position of each measured point was controlled and recorded for each repetition of the measuring sequence. These points were used in order to determine ellipses of point repeatability. The ellipses can be treated as a quantitative representation of point measurement uncertainty. They were determined in the following way. Firstly, the center of the ellipse is calculated using the formulas:

\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i \quad (1)

\bar{y} = \frac{1}{N}\sum_{i=1}^{N} y_i \quad (2)

where x_i, y_i are the coordinates of the measured point obtained in subsequent measuring cycles and N is the number of cycles utilized during the experiment (N = 30). Next, the covariance matrix can be formulated:

C = \begin{bmatrix} \sigma_x^2 & \mathrm{cov}(x,y) \\ \mathrm{cov}(y,x) & \sigma_y^2 \end{bmatrix} \quad (3)

where \sigma_x^2 is the variance of the first variable x, \sigma_y^2 is the variance of the second variable y, and cov(x, y) is the covariance of the two variables x and y.

After these steps, the eigenvalues λ_1, λ_2 of the covariance matrix can be calculated, as well as the values of the eigenvectors v_1, v_2. Then, they are used in order to determine the lengths of the ellipse's semi-major (a) and semi-minor (b) axes with the formulas:

a = \sqrt{5.9\,\lambda_1} \quad (4)

b = \sqrt{5.9\,\lambda_2} \quad (5)

where λ_1, λ_2 are the eigenvalues of the covariance matrix. The value of 5.9 is used as a multiplier that is taken from the Chi-square distribution and guarantees a 95% confidence interval. Finally, the ellipse slope is given as:

\gamma = \arctan\!\left(\frac{v_1(y)}{v_1(x)}\right) \quad (6)

where γ is the angle between the semi-major axis of the ellipse and the x-axis of the coordinate system, and v_1(x), v_1(y) are the components of the largest eigenvector of the covariance matrix. The same experimental procedure was applied for the glass plate, but only for one chosen circle.

All measurements described in the paper were performed at the Laboratory of Coordinate Metrology on an optical multisensor machine Zeiss O' Inspect 442, presented in Figure 3. The machine is located in an air-conditioned room, and the temperature during the measurements was monitored and varied in the range (20.4; 20.7) °C. The temperature compensation system was turned on in order to minimize thermal influences on the measurement results. The accuracy of the machine is described by the equation of maximum permissible errors (7), where L is the measured value given in mm.
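The repeatability-ellipse computation described above lends itself to a compact implementation. The sketch below follows the described steps (mean center, covariance matrix, eigendecomposition, scaling by 5.9 for a 95% confidence region, slope from the largest eigenvector); it is an illustrative reimplementation under the stated assumptions, not the software used in the laboratory, and the sample data are made up.

```python
import numpy as np

def repeatability_ellipse(points: np.ndarray):
    """Compute the 95% repeatability ellipse for an (N, 2) array of point coordinates.

    Returns the ellipse center, semi-major axis a, semi-minor axis b and
    the slope angle gamma (radians) of the semi-major axis w.r.t. the x-axis.
    """
    center = points.mean(axis=0)                      # Eqs. (1)-(2)
    cov = np.cov(points, rowvar=False)                # Eq. (3)
    eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]                 # largest eigenvalue first
    lam1, lam2 = eigvals[order]
    v1 = eigvecs[:, order[0]]
    a = np.sqrt(5.9 * lam1)                           # Eq. (4)
    b = np.sqrt(5.9 * lam2)                           # Eq. (5)
    gamma = np.arctan2(v1[1], v1[0])                  # Eq. (6)
    return center, a, b, gamma

# Illustrative data: 30 repeated measurements of one point (values in mm)
rng = np.random.default_rng(3)
pts = rng.multivariate_normal(mean=[10.0, 20.0],
                              cov=[[4e-7, 1e-7], [1e-7, 1e-7]], size=30)
print(repeatability_ellipse(pts))
```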
The largest available magnification (6.3×) and the backlight were used for all measurements included in the experiments.
Results
Changes of the measured diameter depending on the utilized backlight are presented in Figure 4, which was prepared for one of the circles on the hole plate, and in Figure 5, which represents results obtained during measurements of the circle with a diameter of 0.254 mm marked on the glass plate. The different colors of dots and error bars in Figures 4-9 have no additional meaning; they are used only in order to improve the legibility of the presented results.
In the case of the inner dimension, the values on the left side of the graph corresponding to the lowest backlight values are rather unstable. This may be due to insufficient lighting of the tested element, which results in erroneous detection of the edge and, consequently, erroneous acquisition of the measured point. For backlight values from approximately 20 to 50, the characteristic is close to a straight line without major deviations. However, from the value of 50, a significant change in the value of the diameter is noticeable, which progresses with the increase of the backlight. Such a characteristic is interesting because, from the analysis of the literature and from the point of view of the physical properties of light, a linear characteristic should be expected in the whole range of tested lighting parameters.
A similar characteristic was obtained in the case of the glass plate standard, but the increase in lighting caused a decrease of the measured diameter instead of the increase observed for the hole plate. Additionally, measurements with lower values of the applied backlight show higher stability than in the previous case.
The next figures (Figures 6 and 7) show changes of the roundness error depending on the applied backlight value. Figure 6 shows results obtained during the hole plate measurements, while Figure 7 presents the outcome of the glass plate measurements.
Similarly to the analysis of diameter changes for the hole plate measurements, the initial values show lower stability, especially the value obtained for the L9 backlight. As in the previous case, the problem is probably the small light intensity and, consequently, the large variability given by the calculation algorithms. The next part of the graph does not show any dependence of the error value on the increasing backlight. Moreover, measurements of the glass plate showed that the changing backlight does not significantly affect roundness measurements. On the other hand, a difference of approximately 0.0015 mm in measured roundness can be observed between the presented figures. Considering that both reference objects have small form deviations, such an observation is rather surprising. It may be connected with the size of the measured features. In the case of the glass plate, the whole circle can be analyzed on the basis of one digital image. For bigger features, it is necessary to measure the circle partially, so the measurement accuracy is prone to additional error sources.
The results of the measurement of the center point position for circle 1 of the hole plate are shown in Figure 8. The graph presents how the center position differs depending on the applied backlight.
The center point position for circle 25 of the hole plate was also measured. It was observed that for both measured circles, the experiment provided similar results. In both cases, the dispersion of circle positions in the x axis is within 0.1 µm, and in the y axis within 0.1 µm for the circle that is closer to the hole plate coordinate system and within 0.2 µm for the one that is farther. This level of changes should rather be attributed to random influences, especially as no clear upward or downward trend is observed. More unambiguous results were obtained from controlling the position of individual measured points, where the point position is not influenced by averaging algorithms like the best fit that was used for circle determination. The next figure (Figure 9) shows the changes of position of one of the controlled points (the north pole of the circle) depending on the applied illumination intensity.
The same change was also investigated for a point at the north pole of another circle (circle 25). Similar results were obtained for both considered circles. The higher illumination intensity results in a shift of the measured point position in the positive direction of the x axis, which can be related to the blurring of the edges due to the greater intensity of the applied lighting.
The influence of changing backlight intensity on measurement uncertainty was checked in the last part of the experiment, which involved determination of ellipses of point repeatability (see Section 2 for further details). The next figures (Figures 10-12) show ellipses obtained for the first measured point in the inspection routine for different circles measured on the hole plate standard.
Analyzing the obtained results, it should be noted that the increase of lighting intensity affected the area of the determined ellipses, but not significantly (quantitative data related to this issue is given in the next section). On the other hand, for the highest presented illumination value, the shift of the ellipse center is clearly visible, which should be attributed to the illumination effect on the position of the measuring point. The values of the ellipses' center translation along the x axis correspond to the values of measuring point position changes presented in Figures 10 and 11. The lengths of the ellipse's semi-major (a) and semi-minor (b) axes for other ellipses of point repeatability determined within the presented research are summarized in Table 1.
Discussion
The presented research aimed to determine:
- Values of task-specific error changes related to the measurement of distances, positions and circularity deviation under changing lighting conditions.
- Changes in uncertainty areas of measuring point repeatability for point measurements under changing lighting conditions.
The general conclusion coming from the analysis of the performed experiments is that they show a quite significant effect of the backlight on the results of circle diameter and point position measurements (after exceeding a certain level of illumination, which is L50 for the considered OCMM); on the contrary, the effects of illumination on the results of circularity deviation measurements are negligible, and its effect on the uncertainty of point measurement is small.
It is also worth noting that the location of symmetrical features (like circular holes) is rather insensitive to effects that perturb edge detection, while size determinations that do not benefit from the symmetry of the hole are much more strongly affected. The effects which are unimportant when measuring the location of centers of symmetric features can become a serious problem for less symmetrical ones, because the effect of an illumination change on the determination of the feature's edges may not be the same for sections of the feature with different, unsymmetrical shapes.
Other general observations are that the detected single point positions may change by several micrometers over a range of reasonable intensity choices (this change was not bigger than 3 µm for presented research) and that the shift is non-linear. Presented results also show that this change can depend on the position relative to the machine coordinate system.
As can be seen in Figures 4 and 5, the magnitude of the illumination influence on external and internal diameter measurements is similar. The difference is that the measured internal diameter rises with increasing illumination, while the measured external diameter declines. This finding is consistent with results obtained by other researchers for measurements performed on optical systems other than OCMMs.
Below, a correction function is proposed. It gives the value of the systematic error of internal circle diameter measurement that may be attributed to applying an illumination level above L50 during the measurement. Firstly, it is explained how this function is built and how it was obtained, using the research performed within this paper as an example. Such functions are specific to the measuring machine that was used (the results presented in this paper are valid only for the OCMM used during the described experiments); in the next step, the general procedure for determining this kind of function on any OCMM is presented.
The influence of illumination changes above the L50 level on the results of internal circle diameter measurements was approximated using a polynomial function. Approximation of the experimental data was performed using linear, polynomial (with a maximum degree of 3), exponential and logarithmic functions; the best approximation was chosen by calculating the coefficient of determination (R²) and selecting the function for which this coefficient was highest (R² was bigger than 0.98 in all cases). In the presented case, the polynomial function was selected and may be given as (8):

∆d(I_ratio) = (−163.170·I_ratio³ + 128.205·I_ratio² + 501.398·I_ratio − 8.881)·10⁻⁵ mm (8)

where ∆d is the change of the measured circle diameter, which may be taken as an approximation of the systematic error attributed to the change of illumination level, and I_ratio is calculated using Equation (9), where I is the illumination level applied during the measurements (in the presented case it may change in the range (L50, L100]), I_smax is the last illumination level for which the results of diameter measurements are stable (L50 in the presented case), and I_max is the maximum illumination level that may be used during the measurements (L100 in the presented case). For the function presented in (8), the mean approximation error equaled 0 mm, and the maximum approximation error was not bigger than 0.00013 mm.
In order to easily calculate the value of the systematic error of circle diameter measurement that may be attributed to a certain illumination level directly from function (8), the mean diameter value determined from measurements performed for illumination in the range from L15 to L50 (where the measurement results are stable) was subtracted from the diameter values obtained for illumination levels above L50.
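As an illustration, the following sketch evaluates the fitted correction (8) for a given value of I_ratio. It assumes that I_ratio has already been obtained from the applied illumination level via Equation (9), and that the polynomial argument in (8) is I_ratio itself, as the notation suggests; the function name and the example diameter are purely illustrative.

```python
# Minimal sketch: evaluating the systematic-error correction of Equation (8).
# Assumes I_ratio in (0, 1] has already been obtained via Equation (9);
# coefficients are taken directly from the fitted polynomial reported above.

def diameter_correction_mm(i_ratio: float) -> float:
    """Approximate systematic error of internal-diameter measurement (in mm)."""
    return (-163.170 * i_ratio**3
            + 128.205 * i_ratio**2
            + 501.398 * i_ratio
            - 8.881) * 1e-5

# Example: correct a diameter measured above the stable L50 backlight level.
measured_d = 10.0023            # mm, hypothetical reading
corrected_d = measured_d - diameter_correction_mm(0.5)
print(f"corrected diameter: {corrected_d:.5f} mm")
```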
Similar influence functions may be determined for an external circle's diameter and, based on results of point position measurements, for systematic errors of point coordinates that determine the measuring point position.
The general procedure for determining this kind of function on any OCMM is as follows (a sketch of steps 2-6 is given after the list):
1. Perform experiments as described in Section 2 of this paper.
2. Determine the range in which the measurement results are stable. Determine I_smax as the last illumination level for which the results are stable. Calculate the mean value of the measurement results in the stability range (if changes of the considered characteristic values are visible for all illumination levels, i.e., there is no stability range, omit this step and the next one).
3. Subtract the mean value determined in step 2 from the considered characteristic values obtained for illumination levels above I_smax.
4. For all illumination levels above I_smax (if step 3 was omitted, take the minimum illumination applied during the measurements as I_smax), calculate I_ratio using (9).
5. Determine influence functions that give the relation between the I_ratio values and the values determined in step 3 (or the raw measurement results if step 3 was omitted) using different approximation methods (for example linear, polynomial, exponential and logarithmic approximation, or any other relevant approximation method).
6. Calculate the coefficient of determination (R²) and select the function (out of the functions determined in step 5) for which the value of this coefficient is highest.
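A minimal Python sketch of steps 2-6 is given below. It assumes numeric backlight levels with one measured characteristic value per level; because Equation (9) is not reproduced in this text, I_ratio is computed here simply as a normalized level above I_smax, which is an assumption, and only linear and cubic polynomial candidates are fitted. All names are illustrative and not taken from any metrology package.

```python
# Illustrative sketch of the influence-function procedure (steps 2-6).
# Assumptions: `levels` are numeric backlight levels, `values` the measured
# characteristic (e.g., a diameter) at each level; I_ratio is computed here as
# (I - I_smax) / (I_max - I_smax), a stand-in for Equation (9).
import numpy as np

def fit_influence_function(levels, values, i_smax):
    levels, values = np.asarray(levels, float), np.asarray(values, float)
    stable = levels <= i_smax
    baseline = values[stable].mean()                              # step 2
    above = levels > i_smax
    deltas = values[above] - baseline                             # step 3
    i_ratio = (levels[above] - i_smax) / (levels.max() - i_smax)  # step 4 (assumed form)

    candidates = {                                                # step 5
        "linear": 1,
        "poly3": 3,
    }
    best_name, best_r2, best_coeffs = None, -np.inf, None
    for name, degree in candidates.items():
        coeffs = np.polyfit(i_ratio, deltas, degree)
        pred = np.polyval(coeffs, i_ratio)
        ss_res = np.sum((deltas - pred) ** 2)
        ss_tot = np.sum((deltas - deltas.mean()) ** 2)
        r2 = 1.0 - ss_res / ss_tot                                # step 6
        if r2 > best_r2:
            best_name, best_r2, best_coeffs = name, r2, coeffs
    return best_name, best_r2, best_coeffs
```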
The next issue that was investigated was the changes in the uncertainty areas of measuring-point reproduction for point measurements performed under changing lighting conditions. The results of these investigations were presented in Figures 10-12. Visual analysis of these figures shows that an increase of lighting intensity does not significantly affect the area of the determined ellipses. In order to quantify the changes in the uncertainty areas, the ellipse areas (S) determined using Equation (10) are presented in Table 1. The results in Table 1 show that changes in the uncertainty areas of measuring-point reproduction caused by the different illumination levels applied during measurement reach 30% (estimated separately for each point). However, no clear rising or declining tendency is observed with the increase of illumination level.
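Equation (10) is not reproduced in this text; for an ellipse with semi-major axis a and semi-minor axis b (the quantities listed in Table 1), the area presumably takes the standard form

```latex
S = \pi \, a \, b
```

so the 30% variation quoted above corresponds directly to the relative change of the product a·b.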
Since many metrological programs used during optical measurements do not have the possibility of automatic backlight selection, the conducted experiment can be useful to increase users' attention to the selection of an appropriate backlight during measurements. Analysis of the results showed that there is a limit value (L50, for the particular software used) up to which the results behave stably; after exceeding it, the measurement error increases noticeably. It was also shown that for some measuring tasks (measurement of internal circles) applying too low an illumination may also cause instabilities in the obtained results, which may be attributed to worse performance of the algorithms responsible for detecting the edges of the measured objects.
The illumination influences on the results of measurements performed on an OCMM equipped with a video probe that were determined in this paper are valid only for the machine used during the presented experiments; however, the described methodology may be applied to other OCMMs, and similar influence equations may be established for them.
The influence functions determined within this research will be used in the virtual model of the OCMM that is now in the final stage of development for determination of single-measurement systematic error. Information regarding the possible level of illumination influence on the uncertainty of measuring-point reproduction will be used for scaling the uncertainty ellipses obtained from Monte Carlo simulations for the points being simulated. The fully functional virtual model of the OCMM will be described in future publications.
"Engineering",
"Physics"
] |
Nonlinear Processes in Geophysics
Lower-hybrid (LH) oscillitons evolved from ion-acoustic (IA)/ion-cyclotron (IC) solitary waves: effect of electron inertia
Lower-hybrid (LH) oscillitons reveal one aspect of geocomplexities. They have been observed by rockets and satellites in various regions in geospace. They are extraordinary solitary waves the envelope of which has a relatively longer period, while the amplitude is modulated violently by embedded oscillations of much shorter periods. We employ a two-fluid (electron-ion) slab model in a Cartesian geometry to expose the excitation of LH oscillitons. Relying on a set of self-similar equations, we first produce, as a reference, the well-known three shapes (sinusoidal, sawtooth, and spiky or bipolar) of parallel-propagating ion-acoustic (IA) solitary structures in the absence of electron inertia, along with their Fast Fourier Transform (FFT) power spectra. The study is then expanded to illustrate distorted structures of the IA modes by taking into account all three components of the variables. In this case, the ion-cyclotron (IC) mode comes into play. Furthermore, the electron inertia is incorporated in the equations. It is found that the inertia modulates the coupled IA/IC envelopes to produce LH oscillitons. The newly excited structures are characterized by a normal low-frequency IC solitary envelope embedded with high-frequency, small-amplitude LH oscillations which are superimposed upon by higher-frequency but smaller-amplitude IA ingredients. The oscillitons are shown to be sensitive to several input parameters (e.g., the Mach number, the electron-ion mass/temperature ratios, and the electron thermal speed). Interestingly, whenever a LH oscilliton is triggered, there occurs a density cavity the depth of which can reach up to 20% of the background density, along with density humps on both sides of the cavity. Unexpectedly, a mode at much lower frequencies is also found beyond the IC band. Future studies are finally highlighted. The appendices give a general dispersion relation and specific ones of linear modes relevant to all the nonlinear modes encountered in the text.
Introduction
Nonlinear waves have increasingly drawn much attention in the study of geocomplexities in recent decades. They are ubiquitous in space and laboratories under different plasma conditions. Since the first soliton (also called a solitary wave) was noticed by John Scott Russell in 1834, the first equation (i.e., the Korteweg-de Vries, KdV, equation) was derived to describe weakly dispersive, nonlinear water waves in 1895, and the first prediction was made about the existence of non-wave structures in plasmas (called the Bernstein-Green-Kruskal, BGK, mode) in 1957, many branches have been developed to meet the needs of solving different problems relevant to the construction, maintenance, propagation, and effects of solitary structures.
A focus is on large-scale, finite-amplitude, solitary envelops.The study can be traced far back to 1970s when Shukla and Yu (1978) offered exact stationary solutions for ion-acoustic (IA) solitons propagating obliquely in a twocomponent (electron and ion), low-β, non-isothermal plasma system, with an assumption that ions do not have a polarization drift in a constant magnetic field.In this case, the electron inertia was neglected due to the much smaller mass than that of ions.Later, the limit of the polarization drift was relaxed by Yu et al. (1980) and a generalized result was obtained.Meanwhile, Temerin et al. (1979) found three shapes of electrostatic solitary waves: sinusoidal, sawtooth, and spiky/bipolar.In a generalized study, Lee and Kan (1981) obtained nonlinear IA and ion-cyclotron (IC) waves (the authors also mentioned another type, so called "IA solitons"; however, by reproducing the results, we easily confirmed that it is not an independent type but the IA solitary waves with a longer period).Nakamura and Sugai (1996) studied nonlinear waves in a three-component (warm ion, cold and energetic electrons), unmagnetized plasma system.They pointed out that a pseudo-potential method (i.e., the "Sagdeev potential") is more suitable than the KdV approach to produce results applicable to experiments.In such a system, Chatterjee and Roychoudhury (1997) found that, when the ion temperature increases, the amplitude of the IA solitary waves increases when the corresponding self-similar position (denoted by ξ =x−V t, where x is the 1-D coordinate, V is the phase speed of solitons, and t is the time) shifts to the origin.For a system containing multi-ionic components but ions are cold, Das et al. (2000) claimed that a power expansion technique leads to higher order nonlinear IA wave equations; these equations yield various solitary waves (such as, spiky solitons, double layers, etc.) depending on plasma parameters.
More important, Jovanović and Shukla (2000) found a fully nonlinear coherent solution for solitary structures (called "electron-holes" in the text) in the magnetosphere.They took into account a low-frequency (LF) ion dynamics, and exposed a typical cylindrically-symmetric hole of a Larmor-radius size.The results were used to explain FAST/POLAR observations of antisymmetric bipolar pulses in the parallel direction to localized magnetic field lines and unipolar ones in the perpendicular plane.In addition, Mamun et al. (2002) employed the power expansion technique to study the properties of obliquely propagating electronacoustic (EA) solitary waves in a magnetized plasma system of three components: a cold magnetized electron fluid, hot electrons obeying a non-isothermal vortex-like distribution, and stationary ions.It was found that the external magnetic field and the obliqueness of the solitary waves could greatly change the amplitude and the width of solitons, and, positive potential solitons correspond to cold electron density holes/cavitons (note that these electron holes are "ion clumps" as defined by Dupree, 1972; and naturally, the potential is positive).Moreover, Reddy et al. (2002) considered a two-component (cold ions and warm electrons), homogeneous, magnetized system.The authors verified that parallel-propagating solitary waves have structures of sinusoidal, sawtooth, and highly spiky waveforms, and, the highly spiky waveforms have periods ranging from IC to IA frequencies.Likewise, Bharuthram et al. (2002) considered a homogeneous magnetized plasma system consisting of Boltzmann electrons and warm ions, aiming at the nonlinear solitary structures arising from a coupling between the IA and IC waves.The authors not only obtained the wellknown three waveforms, but also suggested that a finite ion temperature suppresses the IC nonlinearity and enhances the IA nonlinearity.Furthermore, in order to explain the fine structures in the auroral kilometric radiation, Pottelette et al. (2003) studied the excitation of IA solitons (called "electrostatic shocks" in the text) in a unmagnetized ion-electron system.They used the Sagdeev potential to derive the amplitude, speed, and width of the localized IA shocks in the fluid approximation.Such electrostatic nonlinear structures were considered to be necessary for auroral electron acceleration up to the observed energies of ∼10 keV.Recently, Ma and Hirose (2009a) performed a parameterized study on IA solitary waves by employing the Sagdeev potential in a magnetized plasma system consisting of warm ions, background electrons, and energetic electrons.
It deserves to mention some other topics in the study of electrostatic solitary waves, so as to show the robust growth of this subject in space physics.One of the subdivisions is for small-scale plasma systems.In these systems, influences brought about by either the finite radius of a flux tube (Gradov et al., 1984(Gradov et al., ,1985)), or the boundary in slab models (Vladimirov et al., 1993) cannot be neglected anymore.Luckily, as indicated by these authors, the effects are generally limited in the boundary layers and do not have an impact on bulk solitary wave propagations.Another branch is the study of EA solitary waves.These type of waves are known to contribute most of higher-frequency electrostatic noises than IC/IA waves.Different from the IC case where the electron inertia is neglected, the study assumes motionless ions.The first study was done by Dubouloz (1993), followed by e.g., Mamun et al. (2002), Berthomier et al. (2000Berthomier et al. ( , 2003)), Shukla et al. (2004), Singh and Lakhina (2004), Kakad et al. (2007), Lakhina et al. (2008).Specially, Pottelette and Berthomier (2009) set up a model which is useful to explain observations.The third fork lies in the study on the effects of the centrifugal and Coriolis forces on the propagation of solitary waves.The work was initiated by Stenflo (1990), and followed by Yu and Stenflo (1991), Stenflo andYu (1992, 1995), Shi et al. (2001Shi et al. ( , 2005Shi et al. ( , 2008) ) and Ma (2010).
The packets mentioned above constitute simple nonlinear waves; i.e., the Fast Fourier Transform (FFT) power spectra of the envelopes are featured by a single frequency only, as well as its harmonics. To our surprise, observations of high-sensitivity rockets and satellites in the last 20 years exposed a new type of more complicated solitary waves. They are associated with the lower-hybrid (LH) mode, accompanied by density depletions. The MARIE sounding rocket made the first definitive observations of the LH cavities (LaBelle et al., 1986), and the waveforms have been exhibited by many implemented space projects, such as the FREJA satellite (Dovner et al., 1994; Eriksson et al., 1994; Pécseli et al., 1996; Høymork et al., 2001), the Alaska-93 sounding rocket (Delory, 1996), the AMICIST sounding rocket (Pinçon et al., 1997), the Polar satellite (Cattell et al., 1998, 1999), the FAST satellite (McFadden et al., 1998; Pottelette et al., 1999), the GEODESIC sounding rocket (Knudsen et al., 2003; Burchill et al., 2004), and the Viking and Cluster satellites (Tjulin et al., 2003, 2004). Schuck et al. (2003) made a detailed review of related observations and simulations.
The new packets exhibit violent modulations in the amplitudes of relatively slowly-varying, classical solitary structures, performed by embedded quickly-varying, small-amplitude oscillations. Such a structure was called an "oscilliton" (Sauer et al., 2001), but there, and in subsequent papers by the same group of authors (e.g., Sauer et al., 2002, 2003; Dubinin et al., 2002, 2003a, b, c, 2007; McKenzie et al., 2004; Cattaert and Verheest, 2005; Sydora et al., 2007), it was used in the sense of nonlinear electromagnetic "whistler" structures, which can exist at parallel and supposedly also at oblique propagation with respect to the static, background magnetic field (F. Verheest, personal communication, 2010). The modulation was present in either a multi-species system (more than two types of charged particles) responsible for observed IC oscillitons (e.g., Sauer et al., 2001, 2003), or an electron-ion system to explain coherent whistler waves and oscillitons (e.g., Cattaert and Verheest, 2005; Sydora et al., 2007). Notice that in this article and following ones, we expand the intrinsic meaning of the terminology and use it to describe amplitude-modulated solitary packets, whether the waves are excited electrostatically or electromagnetically, and whether in a non-dusty plasma system or a dusty one. Figure 1 gives an example to show the observed features of "LH-oscilliton" structures, adapted from Fig. 3 of Cattell et al. (1998). The measurement was performed by the Polar satellite at the plasma sheet boundary. The large amplitudes of E_x and E_y can be up to 40 mV/m. Each of the LH packets after 02:05:25.7 lasts 0.1 s (about 3 ion gyroperiods), appearing in density cavities. The bursty nature exhibits the modulation of the LH mode (see details in Cattell et al., 1998).
That LH solitary structures are of great importance in the study of nonlinear processes is by virtue of the fact that they are the prime candidate for the acceleration of ions and generation of "ion conics" in the high-latitude ionosphere and magnetosphere (see comprehensive contributions in 1980-90s by, e.g., Chang and Coppi, 1981;Retterer et al., 1986;Kintner et al., 1992;Chang, 1993, and references therein).However, an effective modulational mechanism of "oscillitons" had not been proposed till Kourakis and Shukla (2005).The authors provided a complicated methodological formulation to suggest that the modulation may be due to parametric interactions between different modes or, simply, to the nonlinear (self-)interaction of some wave itself.
We checked observations after investigating the LH instabilities (Ma and Hirose, 2009b). We found that the modulation may be easily understood by noticing the role played by the electron inertia. For example, the FAST satellite showed the modulations of electrons in the excitation of solitary waves (McFadden et al., 1998): in an auroral density cavity containing ion beams, electrons inside an ion-beam region are modulated at the hydrogen cyclotron frequency of 208 Hz, while the ones outside it are at ∼120 Hz, along with an identified LH frequency at ∼450 Hz which was found to merge into IC waves continuously at the boundary of the ion-beam region. This indicates that the role played by the electron inertia becomes dominant outside the ion-beam region. Besides, the Cluster satellites exposed that the widths of the cavities lie between the ion gyroradius r_i = v_Ti/Ω_i (where v_Ti is the ion thermal speed and Ω_i is the ion gyrofrequency) and the electron inertial length r_e = c/ω_pe (where c and ω_pe are the speed of light and the electron plasma frequency, respectively) (Tjulin et al., 2004). This reveals that the electron inertia is an important parameter due to the fact that observed LH phenomena are connected, more or less, to inertial Alfvén waves (Shapiro, 1998), the nonlinear mode of which was found to modulate spatially the electron density and energy (Knudsen, 1996).
What is more, in cases where the electron inertia is neglected, in the linear regime, IA/IC modes are excited (e.g., Boyd and Sanderson, 2003); while in the nonlinear regime, it is the IA/IC solitary waves, rather than LH ones, that are initiated with either small or large amplitudes, where electrons are free to respond to ion kinetics, satisfying the charge-neutrality condition (e.g., Reddy et al., 2002). On the contrary, if the electron inertia is present, plasma particles are coupled with each other, which prevents electrons from becoming unbounded. In the linear regime, high-frequency (ω ≥ Ω_i) electrostatic LH instabilities are excited (e.g., Hirose and Alexeff, 1972). By contrast, in the nonlinear regime, on one hand, the inertia constrains small-amplitude IA solitary waves (Kuehl and Zhang, 1991); on the other hand, for large-amplitude ones, the inertia enhances the IC amplitudes (Sen et al., 2008).
As a matter of fact, the effects of electron inertia have already been discussed extensively in other fields, such as, linear tearing mode and nonlinear magnetic islands (Shivamoggi, 1997), magnetic reconnection (Al-Salti andShivamoggi, 2003), gyrokinetic turbulence (Jones and Parker, 2003), acoustic instabilities (Merlino and D'Angelo, 2005), and Alfvén compressional wave of gravitational instabilities (Uberoi, 2009).Manifestly, there should exists an intrinsic interplay via the electron inertia between IA/IC solitary structures and "LH oscillitons".Motivated by Kourakis and Shukla (2005)'s work which provides a generic methodology applicable to a variety of electrostatic modes, we aim at finding a formulation to explain the influence of the electron inertia on the transition from IA/IC solitary waves to LH oscillitons.
Enlightened by Sauer et al. (2003)'s work, we make use of a fluid description.In order to concentrate on the mechanism of the modulation played by the electron inertia, and gain important insights into more complicated situations (such as a multi-species system), while still being able to illustrate our approach clearly, we consider a slab model, as described in Sect.2, to formulate a collision-free, two-fluid (electronion) system in a Cartesian geometry.We start from introducing basic parallel-propagating IA solitary waves in Sect.3, where the electron inertia is not taken into consideration, and the FFT spectra are shown.Then, in Sect.4, we describe distorted IA/IC solitary waves.On the basis of these studies, we investigate in Sect. 5 the impact of the electron inertia on the possible excitation and propagation of nonlinear LH oscillitons by a parameterized study through a few input parameters.A simulation is performed.Finally, in Sect.6, we summarize the results and have some discussions.The Appendices give generalized and specific dispersion relations of linear modes, corresponding to all the nonlinear ones discussed in the text.
Nonlinear, two-fluid equations: a slab model
We consider that an external magnetic field permeates through regions visited by satellites (e.g., auroral acceleration regions by FAST, bow shock and magnetopause by Cluster). It has three components, B_ext = B_x0 ê_x + B_y0 ê_y + B_z0 ê_z, in a Cartesian geometry, where B_j0 (j = x, y, z) is constant in the direction of j, along which ê_j is the unit vector. In these regions, the plasma pressure is much smaller than the magnetic pressure, i.e., β ≪ 1. Plasma waves excited are thus electrostatic in general, rather than electromagnetic, with k_z ≪ k_⊥ due to the parallel magnetization of particles (where k_z and k_⊥ denote parallel and transverse wavenumbers, respectively). The Landau damping can be neglected due to the fact that the phase speed v_p of these waves satisfies v_Ti ≪ v_p ≪ v_Te, where v_Tα = √(k_B T_α/m_α) is the thermal speed of the electron or ion species, with α = {e, i}, in which k_B is the Boltzmann constant, and T_α and m_α are the temperature and mass, respectively. Thus, λ_FLR ≪ 1 (where λ_FLR is the finite Larmor radius, FLR). However, because the condition λ ≫ λ_D (where λ is the wavelength and λ_D is the Debye scale) is not always valid for the waves, charge separation effects may not be ignored, though the quasi-neutrality (n_e0 ≈ n_i0, where n_α0 is the density at equilibrium) can still be applied because the space-charge density n_sc, as a result of the charge separation, is still relatively small compared to n_α0. In a finite time, these space-charge density granulations retain their structural integrity and ballistically propagate along a specific direction to form "trains" of solitons, i.e., solitary waves (Pécseli, 1985; Chiueh and Diamond, 1986).
In order to provide the most basic picture for the emergence and propagation of nonlinear solitary waves, and thus to gain important insights into the effects of electron inertia on the features of solitary structures, while still being able to illustrate the process clearly, we focus on a system composed of two components: isothermal electrons and adiabatic ions. They are described by two-fluid equations under collision-free conditions in the Cartesian frame (x, y, z), including conservation equations of mass, momentum, and energy, plus the four Maxwell equations. In this generalized set of equations (referred to below as Eq. 3), the coupling of the fluid variables to Maxwell's equations occurs through the definitions that relate the particles' number densities (n_e, n_i) and flow velocities (u_e, u_i) to the current density [j = e(n_i u_i − n_e u_e)] and charge density [ρ_e = e(n_i − n_e)]. Notice that the conservation equations of energy are reduced to an equation of state for electrons and an adiabatic equation for ions, respectively. The basic unknowns in the model are n_e, n_i, u_e, u_i, E, and B, while T_e0 is the equilibrium temperature of electrons, and γ is the adiabatic index. Parameters ε_0 and μ_0 are the absolute permittivity and permeability of free space, respectively, and c = 1/√(ε_0 μ_0) is the vacuum speed of light. Hereafter, the subscript "0" attached to parameters indicates "equilibrium". Note that B is different from B_ext.
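The explicit form of Eq. (3) is not reproduced in this extraction; under the assumptions stated above (collision-free electron-ion fluids, isothermal electrons, adiabatic ions), a standard form of such a set, given here only as a sketch, reads

```latex
\begin{aligned}
&\frac{\partial n_\alpha}{\partial t} + \nabla\cdot(n_\alpha \mathbf{u}_\alpha) = 0, \\
&m_\alpha n_\alpha\left(\frac{\partial \mathbf{u}_\alpha}{\partial t}
  + \mathbf{u}_\alpha\cdot\nabla\mathbf{u}_\alpha\right)
  = q_\alpha n_\alpha\left(\mathbf{E} + \mathbf{u}_\alpha\times\mathbf{B}_{\mathrm{tot}}\right)
  - \nabla p_\alpha, \\
&p_e = n_e k_B T_{e0}, \qquad p_i\, n_i^{-\gamma} = \mathrm{const}, \\
&\nabla\cdot\mathbf{E} = \frac{\rho_e}{\epsilon_0}, \quad
 \nabla\times\mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \quad
 \nabla\times\mathbf{B} = \mu_0\mathbf{j} + \frac{1}{c^2}\frac{\partial\mathbf{E}}{\partial t}, \quad
 \nabla\cdot\mathbf{B} = 0,
\end{aligned}
```

with α = {e, i}, q_e = −e, q_i = e, and B_tot including the external field B_ext; the definitions j = e(n_i u_i − n_e u_e) and ρ_e = e(n_i − n_e) close the system.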
For convenience, we use dimensionless parameters: the electron and ion densities n_e and n_i are normalized by n_0; coordinates r = {x, y, z} by the electron Debye length λ_De = v_Te/ω_pe; velocities u_α = {u_αx, u_αy, u_αz} by the acoustic speed c_s = √(k_B T_e0/m_i); pressures p_α by p_α0 = n_0 k_B T_α0; time t by the ion plasma period τ_pi = ω_pi⁻¹ (notice that ω_pi λ_De = c_s); the magnetic field B by a pseudo-magnetic field B_0 = m_e ω_pi/e; and the electric field E by a pseudo-electric field E_0 = c_s B_0. Three coefficients are introduced: ξ_m = m_i/m_e, ξ_T = T_e0/T_i0, and ξ_v = v_Te/c. A dimensionless expression of Eq. (3) follows accordingly. To reduce the complexity of the mathematics in solving the problem and to pay close attention to the formation of solitary structures, we employ a slab model where all parameters depend only on the x-coordinate. In this case, γ = 3. By using a single independent variable X for a self-similar transformation, X = x − Mt (Lee and Kan, 1981), where M is the Mach number independent of X, we obtain a set of equations, Eq. (6), under the boundary conditions n_e|_{X=0} = n_0 and n_i|_{X=0} = n_0. Note that the u_αx-origin is shifted from "0" to "M", and the density equations and Gauss's law require S_ne = S_ni, written as S_n. Note also that the charge density ρ_e in Eq. (3) is at present expressed by a space-charge density n_sc of the solitary waves. Equation (6) describes localized, coherent solitary waves which may be excited in the two-fluid system by the balance of nonlinearity and the dispersive effect (Davidson, 1972; Drazin, 1984, and references therein). They transport energy from nonlinear solitary waves to ambient plasma particles owing to their retained shape and speed during propagation (Davydov, 1985; Hasegawa and Kodama, 1995), superimposing upon background propagating or non-propagating plasma oscillations, such as IA and/or IC modes. The classical dispersion relations of these linear modes have been widely studied experimentally and theoretically since the 1960s, e.g., Tanaca et al. (1966a, b, 1967; IA mode) and Hirose et al. (1970a, b, c; IC mode). By solving the perturbed equations of Eq. (3), we present the related dispersion relations in the Appendices for use below.
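For reference, the sketch below collects the normalization constants defined above; the plasma density and temperatures passed in are illustrative values only, not ones used in the paper.

```python
# Sketch of the normalization constants defined above, for an ion of mass
# m_i = (m_i/m_p) * m_p; n0 in m^-3, temperatures in kelvin (illustrative).
import numpy as np
from scipy import constants as const

def normalization(n0, Te0, Ti0, mi_over_mp=16.0):
    m_i = mi_over_mp * const.m_p
    v_te = np.sqrt(const.k * Te0 / const.m_e)              # electron thermal speed
    c_s = np.sqrt(const.k * Te0 / m_i)                     # ion-acoustic speed
    w_pe = np.sqrt(n0 * const.e**2 / (const.epsilon_0 * const.m_e))
    w_pi = np.sqrt(n0 * const.e**2 / (const.epsilon_0 * m_i))
    lam_de = v_te / w_pe                                   # electron Debye length
    B0 = const.m_e * w_pi / const.e                        # pseudo-magnetic field
    return {
        "lambda_De": lam_de, "c_s": c_s, "tau_pi": 1.0 / w_pi,
        "B0": B0, "E0": c_s * B0,
        "xi_m": m_i / const.m_e, "xi_T": Te0 / Ti0, "xi_v": v_te / const.c,
    }

print(normalization(n0=1e6, Te0=10 * 11604.5, Ti0=11604.5))  # example values
```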
We solve Eq. ( 6) in three cases by steps.We start from introducing a basic case of parallel-propagating, electrostatic IA solitary waves where all variables has only one component along X.Then, we expand the study to a more generalized case where they have normal three components.Based on these studies we focus on the evolving patterns of oscillitons to show the modulation of the electron inertia on lowfrequency solitary waves by introducing high-frequency oscillations into amplitudes.
In the absence of electron inertia: parallel-propagating IA solitary waves
When the electron inertia is neglected, the simplest nonlinear solitary wave is in the IA mode.This mode propagates in a direction parallel to local magnetic field lines, superimposing upon background linear electrostatic IA oscillations the dispersion relation of which is given in Appendix A.
The set of equations to describe nonlinear IA solitary waves propagating along B ext = B x0 êx can be derived from Eq. ( 6) as follows: Equation ( 8) produces a propagating IA mode satisfying (cf.Eq. 9 in Ma and Hirose, 2009a) from which, as well as Eq. ( 8), the space-charge density n sc and the wave-field E x of solitary waves can be solved numerically.Take ξ T =10.The solitary structures are calculated under different Mach numbers.Their evolution is exhibited by n sc and E x .Figure 2 illustrates an example of the features of the soliton trains under M=1.14, 1.16, 1.28, respectively.
The top two panels in the figure illustrate sinusoidal shapes of both n sc and E x , respectively.They both have small amplitudes at M=1.14: n sc is within 5×10 −5 and E x is no larger than 0.5.By contrast, when M increases to 1.16, each packet of the former exposes a tooth shape, while that of the latter display a sawtooth one, visible in the two bottom left panels.When the Mach number continues to increase to 1.28, the two bottom right panels disclose spiky shapes, unipolar for n sc and bipolar for E x , respectively.It deserves to mention again that these three shapes of IA E x structures (i.e., sinusoidal, sawtooth, and spiky/bipolar) are well-known in satellite observations.They were first reported by S3-3 satellite in late 1970s (Temerin et al., 1979) and have thereafter been detected in various regions in geospace by numerous satellites (see a detailed review in Ma and Hirose, 2009a).
In order to know the features in the frequency regime, we make use of a fast Fourier transform (FFT) algorithm to give the power densities of the three E_x waveforms, as shown in Fig. 3, where the vertical axis has an arbitrary unit. The top panel is the spectrum at M = 1.14, exhibiting a couple of peaks: one is at ω_1 = 0.031 ω_pi, the fundamental frequency of the imperfect sine wave; the other is at ω_2 = 0.062 ω_pi, a harmonic of ω_1. The former corresponds to the IA period T_X = ω_1⁻¹ = 32.3 τ_pi (note that ω_pi τ_pi = 1 as assumed in the normalizations) of the solitary structures along X, as provided in the top E_x panel of Fig. 2. As a double-check, we use the dimensionless form of Eq. (A4), in which the units of ω and k_x are ω_pi and λ_De⁻¹, respectively, to calculate the IA wavenumber; with ξ_T = 10, as used in our calculations, it yields exactly the IA wavelength shown in Fig. 2. This first study tells us, though it is very basic, that in the nonlinear system solitary waves will develop harmonics, due to the fact that the structures do not have pure sine waveforms. It is thus predictable that more harmonics will be developed if the shape of the solitons goes farther away from a sine wave. This case is very similar to that of a real musical instrument: in addition to the fundamental frequency f, a soliton train also has frequencies which are exact multiples of f: f, 2f, 3f, .... This prediction is confirmed by the other two panels of Fig. 3. At M = 1.16, for example, the IA frequency spectrum of the sawtooth-like E_x structures has four distinct harmonics, 0.018, 0.036, 0.054, and 0.072, accompanied by a broadband of noise up to 5 ω_pi. In the M = 1.28 panel, bipolar solitons provide more harmonics, and these harmonics merge into the background noise eventually. Notice that in this case it is not the fundamental frequency but the second harmonic that has the maximum power density. This warns us that, in data analysis to identify in-situ waves, a frequency with the strongest peak may not imply that the wave is excited at that frequency. This is important when determining signatures of, e.g., LH waves.
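A minimal sketch of such an FFT power-density estimate is given below. It assumes uniform sampling of E_x in the self-similar coordinate X (in Debye lengths) and that, as the quoted numbers suggest (a period of 32.3 corresponds to a frequency of 0.031 ω_pi), the frequency in units of ω_pi is simply the inverse of the period in X.

```python
# Minimal sketch: FFT power spectrum of a sampled waveform E_x(X).
# Assumes uniform sampling in X (in Debye lengths) and that a structure of
# period T_X corresponds to a frequency 1/T_X in units of omega_pi.
import numpy as np

def power_spectrum(e_x, dx):
    e_x = np.asarray(e_x, float)
    spectrum = np.fft.rfft(e_x - e_x.mean())
    freqs = np.fft.rfftfreq(e_x.size, d=dx)   # in units of omega_pi
    return freqs, np.abs(spectrum) ** 2        # power density (arbitrary units)

# Example with a synthetic sawtooth-like train of period ~32.3 Debye lengths
x = np.arange(0, 2000, 0.1)
freqs, power = power_spectrum((x % 32.3) / 32.3, dx=0.1)
print("dominant frequency:", freqs[1:][np.argmax(power[1:])])  # ~0.031
```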
In the absence of electron inertia:
generalized IA/IC solitary waves In reality, variables have three components.We generalize the case discussed above to see the features of the nonlinear waves propagating in space.In this case, the background electrostatic waves contain propagating IA waves, and nonpropagating IC waves which oscillate locally.Appendix B describes the dispersion relation.
Neglect the electron inertia again.The second and third equations of Eq. ( 6) produce which gives and Substituting above expressions for the terms in Eq. ( 6) leads to a new set of equations as follows: Note that in the present case, the electron inertia is still not included, just as the previous case.Electrons thus continue to provide background conditions for the modulation of ions driven by space-charge electric fields to excite linear and nonlinear IA/IC waves.In simulations, five input parameters are chosen as follows: M=1, m i /m p =16, ξ m =1836.2,ξ T =10, ξ v =0.1.Various initial conditions are given to produce corresponding solitary structures.Deformed IA/IC solitary structures (sinusoidal, sawtooth, and spiky/bipolar) are obtained.An example is exposed in Fig. 4.
Figure 4 shows a sinusoidal IA/IC soliton train driven under boundary conditions: U 0 = {0.3,0,0},E 0 = {0,0,1}, and B 0 = {0.2,0,1}.Note that all the variables are restricted to the nonlinear evolutions along one self-similar X-coordinate in the slab model.In the figure, top left panel gives solitons' electron or ion density (n e or n i ), and top right panel reveals the space-charge density (n sc ), inserted by an enlarged curve.Notice that the difference between the electron and ion densities is so small (n sc ∼ 10 −5 ), five orders smaller than either n e or n i , that n e and n i curves are superimposed upon each other on the top left panel.
The middle three panels show three electric wave-field components (E x , E y , E z ).The E x -panel demonstrates two types of oscillations: one is the IC mode in a larger period of X∼170 with a cyclotron frequency about 0.005-0.01ω pi , and the other is the IA mode in a much smaller period of X∼3 with frequencies around 0.3 ω pi .Figure 5 gives a FFT spectrum of the E x -structures.Obviously, the IC mode has three harmonics at f =0.0055 ω pi , 0.011 ω pi , and 0.017 ω pi , respectively, while the IA mode shows a narrow-band spectrum within 0.3±0.05ω pi .Above 0.6 ω pi there occurs a wide band of noises.
Interestingly enough, the IA mode exist in neither E y nor E z .Both of the components constitute a right (rather than left) circular polarization wave propagating along with the E x wave, as shown in Fig. 6.The tip of the electric field vector perpendicular to êx depicts a circle in the perpendicular plane, and describes a helix along the direction of wave propagation X.The magnitude of the electric field vector is constant as it rotates.This circular polarization is possible due to the fact that E y and E z are orthogonal with each other but have a same IC frequency.
Similar to E y and E z , B y and B z give a pure IC mode, as depicted in the two bottom left panels of Fig. 4.They produce a propagating right circular polarization wave, while B x keeps constant.The bottom right panel of the figure gives two total magnetic field strengths, the initial field |B 0 |, and the soliton field |B(X)|.No doubt that the magnetic field strength is enhanced.This tells us that solitons carry a stronger magnetic field on average and thus store more magnetic energy.A direct consequence may attribute to the magnetic holes (or decreases, bubbles) in surrounding regions as observed by numerous satellites/spacecrafts (see, e.g., Tsurutani et al., 2005).
The other two distorted sawtooth and bipolar structures are exhibited in Figs.7 and 8, respectively.The former is under boundary conditions: U 0 = {0.2,0,0},E 0 = {0.5,0.5,1}, and B 0 = {0.2,0.5,3}, while the latter is under boundary conditions: U 0 = {0.3,0,0},E 0 = {0,0.4,1},and B 0 = {0.2,0,1},however M is reduced from 1 to 0.85.There are a few similarities compared to the sinusoidal case.Firstly, E y and E z form a right circular polarization wave; secondly, B y and B z still hold a right polarization wave, however the sawtooth case offers an elliptical one, while the bipolar case is a circular one.Lastly, the total magnetic fields on average are all increased from their respective initial ones.
Nevertheless, the space-charge densities evolve very differently. Their amplitudes change much more abruptly, but periodically with X. The period is about 500 in the sawtooth case, while it is about 250 in the bipolar case. What is more, the maximum amplitude of the space-charge density does not remain constant in the latter case. There is a slight increase in X, while the pulsations have a higher-frequency ingredient: in the sawtooth case, the IA period in X is several Debye lengths; in the bipolar case, the period in X is only a few tenths of a Debye length. Accordingly, the FFT spectra of the two cases expose different signatures, as shown in Fig. 9. In the sawtooth case (upper panel), the IC mode contains a series of harmonics but with a narrower IA band than the bipolar case (lower panel); by contrast, the bipolar case owns fewer IC harmonics, but with a broader IA band up to several ω_pi. This is understandable by considering the small wavelength of the IA mode: a few tenths of a Debye length in X corresponds to several ω_pi in frequency. As a double-check, let us turn to the dimensionless expression of Eq. (B3). Obviously, when k is large, the oscillation frequency cannot be smaller than the ion plasma frequency, i.e., ω ≥ 1. In our case, with γ = 3 and ξ_T = 10, the critical wavenumber is k_cr = 2.3, or λ_cr = k_cr⁻¹ ≈ 0.4. This means that if the wavelength is shorter than half of the electron Debye length, it is possible for IA oscillations to surpass the ion plasma frequency. This is interesting because it is well known that a LH frequency is always smaller than the ion plasma frequency. It is thus reasonable to deduce that IA peaks can exist in a higher frequency band of a wave spectrum than LH ones, just as exhibited already by those observations in solar wind plasmas (see a comprehensive review by Briand, 2009).
In the presence of electron inertia: LH "oscillitons"
Above two sections discuss cases where the electron inertia is neglected.We have known that in those cases IA/IC solitary waves are able to be excited.In this section, we take into account the electron inertia, that is, electrons no longer response to the space-charge density instantly, but in a style constrained by both ion and electron kinetics.We must solve Eq. ( 6) directly, where both ion and electron masses play an equally important role.The shapes of solitary structures may be modulated to unexpected appearances which are different from the three conventional ones in either IA or IA/IC modes.
In the linear regime, a mega-amount of theoretical and experimental work were performed in 1960s and 1970s on plasma instabilities and excited waves in the presence of electron inertia (see, e.g., Artsimovich, 1964;Alexeff et al., 1970;Hirose and Alexeff, 1972;Mikhailovskii, 1974).The important outcome is, instead of linear IC and/or IC/IA modes as described in Appendices A and B, high-frequency (ω> i ) LH modes are triggered.In a two-fluid model in the presence of plasma nonuniformities, the LH dispersion relation was found to be heavily dependent of the gradients in, e.g., plasma density (Ma and Hirose, 2009b); if the plasma is uniform, as described by the generalized set of equations, Eq. ( 3), Appendix C provides a generalized dispersion, Eq. (C16), from which either electron and ion plasma oscillations, or IC, IA, ion and electron upper-hybrid, and lower-hybrid modes can be obtained under different directions relative to local magnetic field lines.
In the nonlinear regime, the structures of solitary waves can be simulated by employing directly the two-fluid set of equations, Eq. ( 6), as well as Eq. ( 7).We use a group of reference initial conditions and input parameters as follows: u xx0 ,u y0 ,u z0 = {0.3,0,0},E x0 ,E y0 ,E z0 = {0,0,1}, B x0 ,B y0 ,B z0 = {0.2,0,1}.We perform a parameterized study with changes in M, m i /m p , ξ m , ξ T , and ξ v , to see modulations of these input parameters on typical structures of soliton trains in the presence of the electron inertia.
Solitary structure of reference
The same as the IA/IC case, following five input parameters are used to perform a typical simulation: M=1, m i /m p =16, ξ m =1836.2,ξ T =10, ξ v =0.1.The features of the solitary structures are shown in Figs. 10 and 11.
In Fig. 10, the top left panel exposes density depletions in either n e or n i of the soliton train in propagation.The depth of the cavity structures is on average 1.33%, with a minimum 3.39%, of the background plasma density.There are also two shoulders for every density holes/dips, which is about 10% of the depth.The top right panel shows the space-charge density which is, on average, −2.8×10 −8 , with a maximum 7.1×10 −5 and a minimum −5.3×10 −5 .Thus, the density holes has a tiny excess of positive charges as an arithmetic mean.The X-dependent periodicity is also expressed by both electron and ion velocities.See the panels in the middle and lower rows.All the velocity components have a same period as that of the densities.Note that in the propagation direction, electrons and ions have a same speed (u ex = u ix ), while the perpendicular components of the ion velocity is much smaller than those of the electron velocity.This implies we can neglect the perpendicular motion of ions; equivalently, it is reasonable to assume that the velocity of ions have only one component which is in the propagation direction.
Figure 11 displays the three components of electric wavefield (E x , E y , E z ; top row); the magnetic wave-field components (B y and B z ) and total strengthes (|B 0 | and |B|) (middle row); and amplitudes of electric and magnetic fields in the plane perpendicular to the propagation direction (E ⊥ and B ⊥ , lower row), respectively.Similar to the IA/IC case, the two perpendicular components of both the electric and magnetic fields produce right circular polarization IC waves, and the total magnetic field strength of the solitary structures becomes stronger on average than the initial values.It deserves to mention that although every bipolar E x structure contains a series of pulses, the averaged amplitude is zero.This means that along the propagation direction, there is no net potential drop carried by the soliton train.On the contrary, in the transverse plane, there always exists a pulsative electric field E ⊥ (lower left panel).Thus, the soliton train drives a E ⊥ × B x drift which behaves as a prime mover of transverse ion heating in the presence of a crossed B x (see details in Ma et al., 2009).Note that the soliton's magnetic field also has three components, and B ⊥ = 0, as shown in the lower right panel of Fig. 11.Because E x is normal to B ⊥ , the E x × B ⊥ drift provides an additional source for transverse ion heating.The study of this issue is beyond the scope of this article and will be introduced in another paper.
Figures 10 and 11 demonstrate two kinds of oscillations: IC and LH modes.From discussions presented in above sections in IA/IC cases, we know that the IC mode differentiates one soliton from the other in the train after a solitary wave is excited.The frequency can be easily estimated from the humps or dips of parameter envelops.In the present case, the period is X = 154, corresponding to an frequency of 0.0065 ω pi .This frequency can be easily trace out from the perpendicular speeds u iy and u iz (relative to the propagation direction) of ions in Fig. 10, because, relative to electrons, their mass is so big that it is hard for them to respond immediately to the high-frequency drive.On the contrary, electrons are decoupled from ions in the perpendicular plane and their u ey and u ez carry IC envelops which are modulated by any possible faster oscillations.Look at the related panels in the figure: there are 6 peaks in every envelop, with a period of about X = 25.This is contributed by the LH mode the frequency of which can be easily identified from u ex or u ix waveforms which reveal so strong a coupling in the propagation direction between electrons and ions that both of them contribute together to a LH mode while experiencing IC os-cillations.Such packets, manifesting a low-frequency (IC) solitary structure but with high-frequency (LH) modulation in amplitudes, are called "LH oscillitons".
Unexpectedly, by enlarging the coordinates (e.g., changing the scale of X from a maximum value of 1000 to 100) we find two extra phenomena: (1) IA oscillations of much smaller amplitudes are superimposed upon the LH oscillitons, and they have a small period, about X ∼ 2-5; (2) both LH and IA waveforms are deformed sine styles in both amplitudes and periods. These factors should contribute IA peaks and/or noise on the high-frequency side of the power spectra. To verify this point, we plot Fig. 12 to give the FFT power spectrum (lower panel) of the E_x component in Fig. 11, along with those of the E_y structures in the IA/IC case (upper left) and the present case (upper right), to identify frequency signatures. From the discussion in the last section, we know that in the IA/IC mode ions have only IC oscillations in y. Thus, the spectrum of E_y displays only an IC peak, as shown in the upper-left panel. The peak is at 0.0065 ω_pi, corresponding to a period of X = 154. In the LH-oscilliton case, we know that the LH mode is excited by taking into account the effect of electron inertia. This will surely lead to peaks or noise in the LH band. See the upper-right E_y panel of the figure. At f = 0.032, 0.038, and 0.044, there are three peaks, corresponding to periods of X = 23, 26, and 31. This indicates that there are three LH oscillations in the ê_y-direction. Notice the dominant peak which represents a period of X = 26. This is in agreement with the data given in the last paragraph, "a period of about X = 25". However, due to the weak coupling between electrons and ions in the transverse plane, the LH signature is not as strong as that in the parallel direction, as revealed by the lower E_x-FFT panel.
In this panel, there are three bands. The IC band is on the low-frequency side of the spectrum, with three harmonic peaks: f = 0.0065, 0.01325, 0.01975. Note that the last peak is in the LH band, which contains two groups: one owns higher magnitude but lower frequencies, with peaks at f = 0.01825, 0.025, 0.03175, 0.03825, 0.04475, 0.05625, sharing the same interval between adjacent frequencies; the other has lower magnitude but higher frequencies, with peaks at f = 0.06325, 0.06975, 0.0765 of the same interval. On the high-frequency side of the spectrum, there exists a narrow IA band of ∼0.25-0.4. This corresponds to periods of X from ∼2.5 to 4, the same range as that checked by enlarging the coordinates. By reviewing the E_x panel in Fig. 11, the E_x-FFT panel discloses an important message: an oscilliton may carry a hidden higher-frequency mode which cannot be discerned from its two obvious features, the low-frequency solitary envelope and the high-frequency oscillation. In the LH-oscilliton case, for example, without the FFT analysis it would be impossible to perceive the IA mode.
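To make such band identification reproducible, one may locate spectral peaks numerically and sort them into IC/LH/IA windows. The sketch below assumes the freqs and power arrays from the earlier spectrum routine; the band limits are only illustrative values taken from the text.

```python
# Sketch: locating IC/LH/IA peaks in an FFT power spectrum.
# `freqs` and `power` are assumed to come from a routine like power_spectrum().
import numpy as np
from scipy.signal import find_peaks

def band_peaks(freqs, power, bands):
    idx, _ = find_peaks(power, prominence=0.05 * power.max())
    return {name: [float(freqs[i]) for i in idx if lo <= freqs[i] <= hi]
            for name, (lo, hi) in bands.items()}

# Band limits follow the values quoted in the text (illustrative only).
bands = {"IC": (0.003, 0.02), "LH": (0.02, 0.08), "IA": (0.25, 0.40)}
# peaks_by_band = band_peaks(freqs, power, bands)
```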
By reviewing the FFT spectra in last and present sections, it is worthwhile to stress that it is the electron inertia that modulates IA/IC solitary waves into LH oscillitons.They carry low-frequency IC envelops the amplitude of which is embedded with high-frequency LH oscillations superimposed upon by a higher-frequency IA mode.Without the electron inertia, solitary structures do not have the modulated shapes (i.e., oscillitons) brought about by the interaction between LH and IA/IC ingredients, but exhibit only the traditional three (sinusoidal, sawtooth, and bipolar) IA/IC shapes.
Parameterized simulation of oscilliton shapes
There are several input parameters in Eqs. ( 6) and ( 7): Mach number M, mass ratio between ion and electron ξ m , temperature ratio ξ T , and speed ratio ξ v .Their magnitudes influence the appearance of oscilliton waves.Under the same initial conditions as the typical case discussed in the last section, we change the values of these parameters to expose their effect on the evolution of oscilliton trains.
Figure 13 exhibits the impact of M on the LH oscillitons discussed in the last Section with M=1 (lower left 2 panels).We display four cases together for M=0.85, 0.9, 1, and 1.2 to see the evolution of space-charge density n sc and electric wavefield E x -component.With the increase of M, the amplitudes of both n sc and E x are enhanced.So does the number of solitons, indicating a reduced IC period.However, the frequency of the oscillations in every oscilliton is decreased.Above M=1.2, no oscilliton entities occur, meaning there is a limit for the Mach number only within which can oscillitons be driven.
Figure 14 exposes the impact of ξ m =m i /m e on the LHoscilliton wave discussed in the last section with m i /m p =16 (m p is the proton mass; lower right 2 panels).We show four cases together for m i /m p =3, 4, 8, and 16.When the ratio becomes larger, the amplitudes of both n sc and E x tend to smaller, while the solitary E x structures change more from bipolar shapes to oscillitons, along with higher-frequency oscillations within every envelop.It is thus predictable that small mass ratios seem to restrain the electron inertia from exciting the LH mode in solitary waves.Different from the previous case, ξ m does not affect the IC-period of the solitary waves.
Figure 15 demonstrates the influence of ξ_T = T_e/T_i on the LH-oscilliton wave discussed in the last section with T_e/T_i = 10 (upper right 2 panels). We show four cases together for T_e/T_i = 6.5, 10, 15, and 20. The larger the ratio, the higher the amplitudes of the two parameters, in roughly linear relations; for the wave field, E_x|max = 2.2 ξ_T − 0.04 ξ_T² − 10. We are most concerned about these relations because of the fact that the temperature ratio changes violently in space plasmas, both spatially and temporally. Luckily, the plots simulated here expose that, except for the amplitudes, the ratio changes neither the envelopes nor the IC and LH periods.
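For a quick check, the quoted fit can be evaluated at the simulated temperature ratios; the short loop below is only a numerical illustration of that relation, valid solely over the tested range of ξ_T.

```python
# Quick check of the quoted fit for the maximum wave-field amplitude versus
# the temperature ratio (valid only over the simulated range of xi_T).
for xi_T in (6.5, 10, 15, 20):
    print(xi_T, 2.2 * xi_T - 0.04 * xi_T**2 - 10)
```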
Figure 16 illustrates the control of the electron thermal speed V T e over oscillitons.When the speed is below 0.03 c, no oscillitons are driven but only LH waves carrying IA fluctuations.With the increase of the speed, features of solitary trains become evident gradually.However, after V T e =0.13,only bipolar waveforms manifest.During the development, the LH-IA signature is dominant at first, but suppressed increasingly with weaker oscillation and more obvious IC features, even out of sight at last.
Figures 13-16 exhibit the variation of the oscilliton structures with the four input parameters in X-space. Accordingly, this change can also be expressed in the frequency regime. Figure 17 gives two such examples for Figs. 13-16 by illustrating the features of the FFT power spectra. The upper two panels are the spectra of E_x in Fig. 13 under M = 0.85 and 1, respectively; the lower two are those in Fig. 16 under v_Te/c = 0.04 and 0.1, respectively. The different IC/LH/IA bands are identified by vertical dashed lines in all panels. Evidently, on one hand, the two upper panels tell us that, at a larger Mach number, the IC band shifts to the right, indicating a decrease in frequency, while the LH band shifts to the left, meaning an increase in frequency. This is in agreement with what we have seen in Fig. 13. Unexpectedly, we can see much weaker IA band ingredients appear at M = 1, which are hard to identify in Fig. 13. In the two lower panels, on the other hand, only the LH and IA bands exist at a smaller electron thermal speed; by contrast, if the speed becomes larger, the IC band is apparent, while the original LH/IA features are suppressed, with lower power densities and narrower bands.
A preliminary simulation to observations
Though the excitation of oscillitons is known to be contributed by the electron inertia, and the modulation of the solitary structures by several input parameters has been discussed, observations recorded much more complicated waveforms of oscilliton data. This attracts us to perform a preliminary data-fit simulation. Consider the typical modulated waves measured by the Polar satellite (Cattell et al., 1998), as shown in Fig. 1. After 02:05:25.7, the slowly-oscillating envelope has a frequency ω_s of about 1/3 of the ion gyrofrequency, while its amplitude is modulated violently by a quickly-oscillating ingredient with a frequency ω_q ∼ 8 ω_s. The authors showed that the envelope of the modulated amplitude in each packet increases, reaches a maximum, and finally decreases with the period 1/ω_s, and the carried wave frequency ω_q is close to the LH frequency.
However, we would like to point out that higher-frequency oscillations should be measured if the payload resolution is high enough due to their modulations to the amplitude of LH packets.
Under initial conditions U_0 = (0.2, 0, 0), E_0 = (0.5, 0.5, 1), and B_0 = (0.2, 0.5, 3), along with the following input parameters, M = 1, m_i/m_p = 16, T_e/T_i = 10, and v_Te/c = 0.05, we calculate the LH-oscilliton waveforms, as given in Fig. 18, where the electric wave-field component E_x (upper panel) and its spectrum (lower panel) are provided. The E_x panel conveys the following messages about the oscillitons. Firstly, the series of packets has a dominant IC period of about 500 λ_De, providing a frequency peak at ∼0.002 ω_pi. Secondly, the envelope is modulated by LH oscillations with a period of about 30 λ_De, offering a frequency of ∼0.03 ω_pi. Lastly, superimposed upon the LH mode, there are higher-frequency IA oscillations with periods of several λ_De, contributing to a frequency band of tenths of ω_pi. These results are in agreement with the FFT spectrum: in the IC band, there are a series of IC harmonics with a fundamental peak at 0.0019 ω_pi; in the LH band, there are LH peaks, the dominant one of which is at 0.0315 ω_pi; in the IA band, there is a narrow band up to 0.3 ω_pi, the low-frequency end of which extends into the LH band. Above the IA band, high-frequency noise is obvious in the HF region.
Though it is hard to identify, from Fig. 1, the periodic feature of the waves before 02:05:25.7, the upper panel in Fig. 18 reveals the seemingly periodic nature before X=30×10³. It is not hard to see two packets from X=0 to X=25×10³, each of which contains 5 small oscillitons with a period of X=3000. The packet period is about X=12.5×10³, the same as that of the three big oscillitons after X=30×10³. This very long period (X=12.5×10³), along with the period of X=3000, should provide low-frequency (LF) harmonic peaks at about 8×10⁻⁵ and 3.3×10⁻⁴, respectively, in the FFT panel. Fortunately, we can discern a peak corresponding to the second one in the LF region, though it gives only a tiny hump in power. Unfortunately, the first one cannot be traced out. This is understandable by realizing that there are 12 envelopes of period X=3000 from X=0 to X=30×10³, but only 3-5 envelopes of period X=12 500 from X=0 to X=60×10³. The two modulations at frequencies lower than that of the IC mode may be aroused by some other mechanism unrelated to the Landau damping effect (or, the kinetic resonance). (This is because, in the present model, this effect is negligible (λ_FLR ≪ 1) for any species, and the electromagnetic perturbation approach due to the resonance does not fit the present study; Kourakis and Shukla, 2004; Hirose, 2005, 2007.)
6 Summary and discussion
Kourakis and Shukla (2005) provided a generic methodological formulation for observed oscillitons, which are featured by quickly-varying oscillations superimposed upon slowly-varying solitary waves. Inspired by the study of Sauer et al. (2003) in a dusty plasma case, where the addition of a second ion population to a single-ion plasma leads to significant modifications of solitary waves, we specialize the Kourakis and Shukla (2005) model to explain the formation of abundantly observed LH-oscillitons in space plasmas.
Owing to the fact that the Landau damping is negligible (λ_FLR ≪ 1) for any species, we employed a collision-free, two-fluid model to perform parameterized simulations. We started by exhibiting the excitation of the IA/IC solitary waves, and investigated the modulation of the IC/IA envelopes by the electron inertia. The inertia triggers LH oscillitons characterized by a normal IC-period solitary envelope embedded with LH oscillations, which contain higher-frequency but smaller-amplitude IA constituents. We exhibited the impact of the electron inertia on oscilliton packets via several input parameters, such as the Mach number, the electron-ion mass ratio, the temperature ratio, etc. Unexpectedly, there exists a lower-frequency mode beyond the IC band. It is hard to explain with the present hydrodynamic model. We will pay attention to it in our future work.
Though it has already been illustrated that the electron inertia is a key trigger of LH oscillitons, which are influenced by the input parameters, we realize that initial conditions should also affect the existence of oscillitons. Recall the first study of solitary waves by John Scott Russell: the speed of a ship triggers water solitary waves (e.g., Craik, 2004). Naturally, we are going to report in another paper the roles played by the initial conditions. Besides, we are very interested in the new Cluster observations of coherent whistler emissions in the magnetosphere (e.g., Dubinin et al., 2007). The measurements provided clear evidence that the magnetic field components are so perturbed as to form a sequence of oscilliton packets, while the periodic structure was also revealed by the wavelet analysis (see Fig. 6 of that article). This encourages us to generalize the oblique-whistler model of Sauer et al. (2002) to warm-plasma cases in which particles are coupled with each other more closely via the pressure term.
Last but not least, we will try to find a link between the solitary waves and two important categories of observations: transverse ion heating and broadband noise excitation. This subject is important. On one hand, it will provide a mechanism for the transverse ion heating via the non-Maxwellian velocity distributions brought about by space charges in geospace. On the other hand, it will serve as a reference in studying the relationship among nonlinear Alfvén waves, discontinuities, proton perpendicular acceleration, and magnetic holes/decreases in interplanetary space and the planetary magnetospheres (see new theories and observations by, e.g., Tsurutani et al., 2002a, b, 2005). The study will be dominantly based on two facts: (1) soliton trains carry electric fields contributed by space charges, the density of which is periodic in both space and time for a single oscilliton train, as depicted in this paper; (2) the spatial extent of a single train perpendicular to the propagation direction is on scales of 2-20 ion Larmor radii (see, e.g., Ergun, 1999), and there appear to exist sets of trains propagating in background magnetic fields. A simple picture is as follows: ions (not those constituting the solitary waves, but surrounding ones in the vicinity, at least in the boundary layer, of the waves) are accelerated by a space-charge electric field that is stochastic in both amplitude and time, contributed by all solitary trains in the plane perpendicular to the propagation direction. If, at any moment, the total space charge offered by the solitary waves is instantaneously uniform, the field can be considered proportional to the radius in a cylindrical geometry, and the E×B drift can drive ambient Maxwellian ions to non-Maxwellian distributions transversely, with an observable upgrade of orders of magnitude in temperature (Ma et al., 2009).
Luckily, this picture appears to be valid in view of observations (e.g., Ergun et al., 1998; Ergun, 1999; Pickett et al., 2005): magnetic flux tubes are always teeming with a dense cluster of soliton trains. Though solitary packets in each of the trains can be considered identical, different trains carry different packets with respective amplitudes, lifetimes, and scales. These differences originate from the saturated growth of linearized perturbations constrained by the input parameters and boundary conditions at specific positions and times. Undoubtedly, at any time, the collective behavior of the cluster of these solitary waves differs from that of a single solitary train: both the amplitude and the lifetime of the space-charge density n_sc are random. In view of the measurements that solitary trains fill up all spaces in a magnetic flux tube, the whole tube cylinder has a space-charge density, ñ_c, which is instantaneously uniform in space, at least to the leading order for simplicity, but stochastic in time. Here, "uniform" denotes an average of the space-charge densities carried by packets on different trains with different n_sc, while "stochastic" expresses the random features of n_sc in both amplitude and lifetime. By employing such a physical model, a companion paper will introduce the study on the frequency sweeping of IC oscillations, an important subdivision of broadband noises.
It is worth verifying an essential condition adopted in discussing electrostatic waves in this paper: the neglect of the Landau damping effect. This effect is very important for ion acoustic waves (see, e.g., Gary, 1993). It can only be neglected in cases, as mentioned at the beginning of Sect. 2, where the phase speed (v_p) of the waves is very large compared to the ion thermal speed (v_Ti) and very small compared to the electron thermal speed (v_Te), as expressed by Eq. (2). We show that this condition is satisfied automatically in employing the two-fluid model (e.g., Chapter 4, Bellan, 2008): by recasting the generalized dispersion relation, Eq. (B3), we obtain a relation which reproduces Eq. (4.38) in Bellan (2008). Because the wavelength 1/k is tens of λ_De (see, e.g., Fig. 4), we have λ_De²k² ≪ 1. Thus, Eq. (18) provides the ordering required by Eq. (2), owing to the fact that m_i ≫ m_e and c_s ≠ 0. Notice that this requirement is applicable not only to regions where ξ_T = T_e0/T_i0 > 1 (or, equivalently, T_e0 > T_i0), but also to those where T_e0 < T_i0. Table 1 lists some plasma parameters in auroral regions where T_e0 > T_i0, calculated from measurements by the GEODESIC rocket (Burchill et al., 2004), the Freja satellite (Eriksson et al., 1994), and the FAST satellite (Ergun, 1999), respectively. By contrast, in most magnetospheric plasmas (especially in some extreme situations like, e.g., substorms), ξ_T lies between 1/12 and 1/3, as reported by, e.g., AMPTE/IRM and Cluster observations in the Earth's central plasma sheet, the sheet boundary layers, and the magnetosheath (Baumjohann, 1993; Phan et al., 1994; Lavraud, 2009). The excitation of oscillitons under the condition ξ_T < 1 is another interesting topic, beyond the case introduced in this paper with ξ_T > 1.
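The display equations behind this argument were lost from the extracted text. As a hedged reconstruction only, assuming the standard two-fluid ion-acoustic result that the cited Bellan Eq. (4.38) expresses (the paper's exact Eqs. 18 and B3 may differ in detail), the ordering follows from

\[
\frac{\omega^2}{k^2} \simeq \frac{c_s^2}{1+k^2\lambda_{De}^2},
\qquad
c_s^2 = \frac{k_B T_{e0} + \gamma\, k_B T_{i0}}{m_i},
\]

so that, with \(k^2\lambda_{De}^2 \ll 1\), the phase speed is \(v_p \simeq c_s\) and

\[
\frac{v_p^2}{v_{Ti}^2} \simeq \xi_T + \gamma > 1,
\qquad
\frac{v_p^2}{v_{Te}^2} \simeq \frac{m_e}{m_i}\Bigl(1+\frac{\gamma}{\xi_T}\Bigr) \ll 1,
\]

i.e. \(v_{Ti} \lesssim v_p \ll v_{Te}\), consistent with the condition of Eq. (2).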
To finalize this paper, we would like to point out that descriptions of nonlinear plasma physics often suffer from a lack of a coherent nomenclature for the phenomena: one author's "electrostatic shock" is another author's "soliton". This fact was highlighted in the introduction to Ma and Hirose (2009a), where the authors gave a thorough discussion of prior literature in this field and showed that there are often differences in the nomenclature used by different authors.
In addition to the propagating IA waves, there are local non-propagating IC oscillations in the plane perpendicular to the propagation direction of the IA modes. In the slab-model case, as introduced in Sect. 2, where parameters depend only on x, we obtain the linearized equations of Eq. (3) for u_i1⊥ = (u_i1y, u_i1z) of ions as follows: Clearly, this is an ion cyclotron oscillation in the plane perpendicular to ê_x. Note that the oscillation frequency is not invariant if the strength of the external magnetic field, as excited by nonlinear waves, changes in space and/or time.
Appendix C Dispersion relation of generalized LH waves
For convenience, we temporarily transform the coordinates ê_x, ê_y, ê_z to ê_x', ê_y', b̂, in which b̂ = B_ext/|B_ext|. By using ω and k to represent the wave frequency and the amplitude of the wave vector, k = |k|, respectively, we obtain a linearized set of equations from Eq. (3) as follows: where a homogeneous plasma is assumed with u_0 = 0. The two momentum equations give which leads to by employing the two density equations in Eq. (C1). Together with the electromagnetic equations, this equation produces in which n_e1, n_i1, and E_x, E_y can be obtained by solving the two momentum equations in Eq. (C1). The solution is: Similarly, Equations (C4), (C13), and (C14), together with the Poisson equation, produce in the electrostatic case where B_1 = 0, where cosθ = k_z/k, sinθ = k_⊥/k, ω_e = (Ω_e² + k²v_Te²)^{1/2}, and ω_i = (Ω_i² + γk²v_Ti²)^{1/2} are the pseudo-Bohm-Gross frequencies of electrons and ions, respectively. Notice that the electric displacement term is neglected because v_p² ≪ c², where v_p = ω/k is the phase speed.
C1 Modes along b
In this case, θ = 0°. Equation (C16) becomes which exhibits several propagating modes under different cases related to v_Tα within respective ranges of v_p.
C1.1 Both electron and ion inertia involved
In this case, v_Ti ≪ v_p ≪ v_Te. Equation (C17) gives
C1.2 Electron inertia neglected
In this case, v_p ∼ v_Ti. Equation (C17) gives
C1.3 Ion inertia neglected
In this case, v_p ∼ v_Te. Equation (C17) gives which exhibits more complicated propagating modes through ω_α under different conditions.
C2.1 Both electron and ion inertia involved
In this case, ω_i ≪ ω ≪ ω_e. Equation (C23) gives the lower-hybrid (LH) mode (ω_LH), which leads to, on one hand, one limit in a weak magnetic field (ω_e ≪ ω_pe in, e.g., ionospheric plasmas); on the other hand, another limit in a strong magnetic field (ω_e ≫ ω_pe in, e.g., pulsar or black-hole plasmas).
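The limiting expressions themselves are not reproduced in this excerpt. As a hedged reminder of the familiar cold-plasma forms (which the pseudo-Bohm-Gross frequencies above generalize, so the paper's exact expressions may differ):

\[
\frac{1}{\omega_{LH}^2} \simeq \frac{1}{\Omega_e\Omega_i} + \frac{1}{\omega_{pi}^2},
\qquad\text{so that}\qquad
\omega_{LH} \simeq \sqrt{\Omega_e\Omega_i}\ \ (\Omega_e \ll \omega_{pe}),
\qquad
\omega_{LH} \simeq \omega_{pi}\ \ (\Omega_e \gg \omega_{pe}).
\]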
C2.3 Ion inertia neglected
In this case, ω ∼ ω_e ≫ ω_i. Equation (C23) gives the electron upper-hybrid (eUH) mode (ω_eUH), which leads to the electron gyro-oscillation mode (Ω_e) if B_z0 is strong enough.
C2.4 Extreme cases
The pseudo-Bohm-Gross frequency ω_α(k) reveals that, if the scale of the wavelengths is large (i.e., k is small) compared to the thermal background gyroradius ρ_α = v_Tα/Ω_α = √(m_α k_B T_α)/(eB_z0) and the Debye length, the above propagating modes are only weakly modified by the temperatures of the particles. In the extreme case where k = 0, no propagation exists and particles only oscillate locally at the pure frequencies of the above respective modes. Notice that this extreme case is equivalent to cold-plasma conditions, where particle pressures have no effect. On the contrary, for extremely small-scale wavelengths (i.e., large k), the dispersion relations of the above modes become independent of the gyro-frequencies of the particles. Under cold-plasma conditions, it is easy to see that there exist two non-propagating modes, one of which oscillates at the electron plasma frequency ω_pe, and the other at the ion plasma frequency ω_pi.
Fig. 1 .
Fig. 1. Nonlinear "oscilliton" structures associated with the LH mode (adapted from Fig. 3 of Cattell et al., 1998). The measurement was performed by the Polar satellite at the plasma sheet boundary. The bursty nature after 02:05:25.7 exhibits the modulation of the LH mode; see, e.g., a detailed discussion by Cattell et al. (1998).
8) by using n_i = S_n/u_ix, n_e = S_n/u_ex, and E_x/ξ_m = −dφ/dX. Note that (1) all parameters are dimensionless, and φ is the normalized electrostatic potential, the unit of which is k_B T_e0/e; (2) S_y = S_z = 0 due to the fact that only the E_x component of E exists.
Fig. 3 .
Fig. 3. FFT spectra of E x in corresponding panels of Fig. 2.
Fig. 4 .
Fig. 4. Sinusoidal IA/IC solitary structures driven in a nonlinear system where all variables depend only on the x-coordinate (a slab model). Shown in the panels of the figure are the solitons' density (n_e or n_i; top left), the space-charge density with an enlarged curve inset (n_sc; top right), the electric wave-field components (E_x, E_y, E_z; middle row), the magnetic wave-field components B_y (lower left) and B_z (lower middle), and the total magnetic field strengths [initial |B_0| and soliton |B(X)|; lower right]. Input parameters are as follows: M=1, m_i/m_p=16, ξ_m=1836.2, ξ_T=10, ξ_v=0.1.
Fig. 5 .
Fig. 5. FFT spectrum of the deformed sinusoidal E x solitary structures.
Fig. 6 .
Fig. 6. Non-propagating, circularly polarized IC wave constituted by E_y and E_z oscillations accompanying the sinusoidal E_x solitary structures.
Fig. 9 .
Fig. 9. FFT spectra of the sawtooth (upper panel) and bipolar (lower panel) E x solitary structures.
Fig. 11 .
Fig. 11. Same as Fig. 10; however, shown in the panels are the three components of the electric wave-field (E_x, E_y, E_z; top row); the magnetic wave-field components (B_y and B_z) and the total strengths (|B_0| and |B|) (middle row); and the amplitudes of the electric and magnetic fields in the plane perpendicular to the propagation direction (E_⊥ and B_⊥; lower row), respectively.
Fig. 12 .
Fig. 12. FFT power spectra of oscilliton structures. Upper left: E_y in the IA/IC case (last subsection); upper right: E_y in the oscilliton case (this subsection); lower panel: E_x in the oscilliton case (this subsection).
Fig. 13 .
Fig. 13. Impact of the Mach number M on LH-oscilliton waves. Only the space-charge density n_c and the electric wavefield E_x-component are shown. Notice that the panels with M=1 show the typical case discussed in the last section.
Fig. 14 .
Fig. 14. Same as Fig. 13 but for the impact of ξ_m = m_i/m_e on LH-oscilliton waves. Notice that the panels with m_i/m_e = 16 show the typical case discussed in the last section.
Fig. 16 .
Fig. 16. Same as Fig. 13 but for the impact of ξ_v = V_Te/c on LH-oscilliton waves. Notice that the panels with V_Te/c = 0.1 show the typical case discussed in the last section.
Fig. 17 .
Fig. 17. FFT power spectra of E_x under two input parameters, M=0.85 and 1 in Fig. 13 (left panels) and V_Te/c = 0.04 and 0.1 in Fig. 16, respectively, as two examples to illustrate the variation of frequency with input parameters as shown in Figs. 13 to 16. The different bands (IC/LH/IA) are labeled in all panels.
| 15,379 | 2010-05-07T00:00:00.000 | [
"Physics"
] |
Lack of serological and molecular evidence of arbovirus infections in bats from Brazil
Viruses are important agents of emerging zoonoses and are a substantial public health issue. Among emerging viruses, an important group are arboviruses, which are characterized by being maintained in nature in cycles involving hematophagous arthropod vectors and a wide range of vertebrate hosts. Recently, bats have received increasing attention as an important source for the emergence of zoonoses and as possible viral reservoirs. Among the arboviruses, there are many representatives of the genera Flavivirus and Alphavirus, which are responsible for important epidemics such as Dengue virus, Zika virus and Chikungunya virus. Due to the importance of analyzing potential viral reservoirs for zoonosis control and expanding our knowledge of bat viruses, this study aimed to investigate the presence of viruses of the Alphavirus and Flavivirus genera in bats. We analyzed serum, liver, lungs and intestine from 103 bats sampled in northeast and southern Brazil via Nested-PCR and the hemagglutination inhibition test. All samples tested in this study were negative for arboviruses, suggesting that no active or past infection was present in the captured bats. These data indicate that the bats examined herein probably do not constitute a reservoir for these viruses in the studied areas. Further studies are needed to clarify the role of bats as reservoirs and sources of infection of these viral zoonoses.
Introduction
Among emerging viruses, an important group are arboviruses (arthropod-borne viruses), which are characterized by being maintained in nature in cycles involving hematophagous arthropod vectors and a wide range of hosts [1,2]. These hosts are often vertebrates, especially mammals and birds. Bats are mammals belonging to the order Chiroptera [3], and they are considered to be one of the most abundant, diverse and geographically distributed vertebrates in the world [4]. In Brazil, there is a great diversity of bats, with approximately 179 species (10 of which are endemic) and 68 subspecies belonging to 68 genera documented [5,6]. They present a broad geographic distribution, being able to fly long distances and often coming into direct or indirect contact with humans [3]. Recently, these vertebrates have received increasing attention as an important source for the emergence of zoonoses and possibly as viral reservoirs [7][8][9]. The infection of bats by arboviruses has long been reported by several authors [8,[10][11][12][13][14]. Although the possibility of them acting as reservoirs has been raised, it is not clear yet if they play this role in the ecological cycle [15,16].
Being a large highly populated tropical country with one third of its territory covered by forests, Brazil presents ideal conditions for the existence of many arboviruses [17]. More than 200 arboviruses have been isolated in the country, and approximately 40 of these viruses cause diseases in humans [17,18]. The country presents a constant risk of emergence and re-emergence of arboviruses due to the existence of densely populated cities infested by mosquitoes of the genera Culex and Aedes, which are important vectors for arboviruses [1,7,19,20].
Focusing on the importance of the analysis of potential sources of zoonoses and viral reservoirs for the control of emerging viruses, the aim of this study was to investigate arbovirus infections, of the Alphavirus and Flavivirus genera, in bats from southeast and northeast Brazil. In addition, we aim to provide relevant information that may contribute to the epidemiological surveillance of diseases of great public health impact.
Study areas
Sampling was performed between 2014 and 2017 at ten different sites in urban and peri-urban areas of two cities from two different states of Brazil: São José do Rio Preto, state of São Paulo (SP), and Barreiras, state of Bahia (BA). The complete geographical coordinates, addresses, and ecological characteristics of all sampling sites are available as supplementary information (S1 Table). Additionally, climatological characteristics (temperature and rainfall) are given in S1
Samples
Bats were collected using mist nets, and the sex and species of the animals were determined [22][23][24]. Following euthanasia, which was performed by subcutaneous anesthesia with 80 mg/ kg ketamine (Dopalen-Vertebrands, Paulínia, SP, Brazil) and 20 mg/kg xylazine (Rompun-Bayer S.A., São Paulo, SP, Brazil), the liver, intestines and lungs from each specimen were removed and stored at -150˚C. Blood was also collected, and serum was separated and stored at -150˚C. Finally, the animals were fixed in 10% formaldehyde for 24 hours and deposited in 70% alcohol in the Chiroptera Collection of the Department of Zoology and Botany-IBILCE / UNESP, where they are available for taxonomic studies.
RNA extraction and cDNA synthesis
The RNA of the lungs, intestines and liver was extracted in order to test for the presence of Alphavirus and Flavivirus RNA. Tissues were homogenized in a Turrax-MA102 (Marconi, Piracicaba, SP, Brazil), and pellets were used for RNA extractions in TRIzol (Thermo Fisher Scientific, Waltham, MA, USA), according to manufacturer's instructions. Finally, the RNA was resuspended in 100 μl of water treated with DEPC (Sigma Aldrich, St. Louis, Missouri, USA) and stored at -150˚C. Quantification of samples was performed on the NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). Nucleic acids extracted from the organs of individual bats were subjected to cDNA synthesis using the High-Capacity cDNA Reverse Transcription Kit (Applied Biosystems, Foster City, CA, USA).
Endogenous control amplification
After cDNA synthesis, in order to check the RNA quality, PCR was performed for the endogenous control, β-actin, as described previously [25].
Molecular detection of Alphavirus and Flavivirus
Detection of viral RNA in the tissue samples was carried out by Nested-PCR assay. We designed a set of primers targeting the nsp4 region of Alphavirus based on 23 viruses from this genus (S1 File). To test for Flavivirus, we used PCR primers external to Flav100F and Flav200R (published by Maher-Sturgess et al., 2008 [26]) for the Nested-PCR reaction, which targets the NS5 region. The sets of primers used for each reaction are described in Table 1. Additional information on primer design and molecular tests is available in S1 and S2 Files. The PCR reaction was carried out using Long PCR Enzyme Mix (Thermo Fisher Scientific, Waltham, MA, USA). Amplification conditions were 3 min at 94˚C; 35 cycles of 94˚C for 1 min, annealing at 50˚C (for Alphavirus) and 48˚C (for Flavivirus) for 45 seconds, and extension at 72˚C for 1 min; followed by a final extension at 72˚C for 10 minutes. The Nested-PCR reaction was performed as described above except for the annealing temperature, 45˚C (for Alphavirus) and 42˚C (for Flavivirus). Samples were resolved in 1% agarose gels. The expected sizes of the amplicons were 803 bp for Flavivirus and 913 bp for Alphavirus.
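For readability, the cycling program described above can be encoded as data. This is a minimal sketch of ours, not the authors' protocol file; the function name and the tuple layout are invented, while the temperatures and times are taken from the text.

```python
def cycling_program(genus, nested=False):
    """Thermal-cycling steps described in the text (step name, temp in C, seconds)."""
    anneal = {("Alphavirus", False): 50, ("Flavivirus", False): 48,
              ("Alphavirus", True): 45,  ("Flavivirus", True): 42}[(genus, nested)]
    return ([("initial denaturation", 94, 180)]
            + 35 * [("denature", 94, 60), ("anneal", anneal, 45), ("extend", 72, 60)]
            + [("final extension", 72, 600)])

for step in cycling_program("Flavivirus", nested=True)[:5]:
    print(step)
```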
Alphavirus and Flavivirus antibody detection
The presence of antibodies against Alphavirus and Flavivirus in the serum samples was tested by hemagglutination inhibition (HI), as described previously [27]. Additional information about the antigens and the positive controls used in the HI is given in S2 Table. The tests were performed by the Adolfo Lutz Institute-SP.
Results
A total of 39 animals were collected in Barreiras-BA and 64 in the region of São José do Rio Preto-SP, totaling 103 bats. These were distributed among four families: Molossidae, Phyllostomidae, Emballonuridae and Vespertilionidae (Fig 1 and Table 2). The diversity of species and the number of males/females per species are reported in Table 2. The livers and intestines were collected from 28 specimens from Barreiras-BA, and only the livers from the remaining 11 specimens. From the region of São José do Rio Preto-SP, the liver, lungs and intestines were collected from all bats. Detailed information on species, sex, organs tested, site of sampling and month of collection for each specimen is presented in S3 Table. The necropsies did not show any morphological changes indicating a pathological condition. RNA from all tissue samples was extracted and quantified. The quality of the RNA was confirmed by the amplification of the endogenous gene, β-actin, from all tested samples.
The presence of Alphavirus and Flavivirus RNA was investigated by Nested-PCR in all available tissue samples. The results revealed that none of the 103 samples were infected by Alphavirus or Flavivirus at the time they were collected.
We also tested for serological evidence of previous infections by viruses from these groups using HI. A total of 73 serum samples (46 from São José do Rio Preto-SP and 27 from Barreiras-BA) were tested for anti-DENV-1, anti-DENV-2, anti-DENV-3, anti-DENV-4, anti-
Discussion
The role of bats in the transmission and ecology of arboviruses, such as Flavivirus and Alphavirus, is not fully elucidated; however, several studies over the past 40 years have demonstrated that bats are susceptible to infection by viruses of these genera. For example, evidence of nucleic acids and antibodies from the four DENV serotypes has been reported in bats in Central and South America [28]. In addition, CHIKV was detected in bats from Asia [29]. In this study, we analyzed the presence of RNA from Alphavirus and Flavivirus and antibodies against viruses from these genera in animals collected in São José do Rio Preto (São Paulo State) and Barreiras (Bahia State). The first city is in the northwest region of São Paulo state and comprises an area with a high incidence of arbovirus infection in humans. Several studies have shown that this is a region of YFV transmission [30,31] that is hyper-endemic for DENV [32][33][34] and has confirmed cases of MAYV [35,36], Saint Louis encephalitis virus (SLEV) [34,37] and ZIKV [38][39][40]. Additionally, some cases of DENV intra-serotype co-infection [32] and coinfection with other Flaviviruses [34,41] have been reported. In the Barreiras region, epidemiological reports from 2014 to 2017 also show the occurrence of DENV, CHIKV, ZIKV and YFV [42][43][44]. In addition, all animals were collected in densely populated urban or peri-urban areas where the main vector of these arboviruses, Aedes aegypti, has extensive circulation. Our results suggest that no active or past infections by the arboviruses in this study were present in the captured bats. These facts indicate that, although the bats collected are in close contact with these viruses, they are not being infected and probably do not constitute a reservoir in the studied regions. Our study sampled bats from the Molossidae, Vespertilionidae, Phyllostomidae and Emballonuridae families. The few previous studies that sought to identify arboviruses in these bat families show that Molossidae and Phyllostomidae have the highest incidence of arbovirus infection. As an example of Molossidae infection, St. Louis encephalitis virus (SLEV) was isolated from Tadarida brasiliensis mexicana in Texas [45]. Additionally, in east Africa, antibodies reactive to ZIKV were detected at a high seroprevalence in Mops condylurus using the indirect hemagglutination test (HAI) [10]. In the Phyllostomidae family, there is a single work that detected the viral RNA of DENV-1 in Carollia perspicillata in French Guiana [46]. Additionally, specific antibodies against West Nile Virus (WNV), SLEV and DENV 1-4 were detected by the plaque reduction neutralization test (PRNT) in three bat species from Mexico: Glossophaga soricina, Artibeus jamaicensis and Artibeus lituratus, with a Flavivirus antibody prevalence of 33%, 24%, and 9%, respectively [47]. For the Emballonuridae and Vespertilionidae families, there are no records of viral RNA or antibodies against the Flavivirus and Alphavirus genera [46].
Although few studies have reported the absence of active and/or past arbovirus infection in bats, some studies with DENV corroborate the results obtained in this work. In a study carried out in Mexico, which investigated the presence of the 4 serotypes of DENV in serum, lung and liver of 240 bats, no active or past infection in these animals was found [48]. Additionally, other studies with DENV demonstrate that some serotypes of this virus did not replicate efficiently in cell lines derived from neotropical bat species and indicate that some species are incapable of sustaining Dengue virus replication and are unlikely to act as reservoirs for this virus [49][50][51]. Moreover, a recent work showed that bats sampled from households in Costa Rican urban environments do not sustain DENV amplification, since they do not support sufficient virus replication. These findings exclude them as potential hosts or reservoirs, with no role in the transmission cycle; more likely, they function as epidemiological dead-end hosts for this virus [52].
To date, few studies have aimed to identify arboviruses in bats, despite the recognized importance of these animals in the emergence of zoonotic viruses. Some studies have demonstrated that ecological, behavioral and phylogenetic characteristics can influence and diversify the immunological response to viral infections in different species of bats [53,54]. For example, some evidence suggests that large colonies and higher species richness were significantly positively associated with European Bat lyssavirus 1 (EBLV-1) seroprevalence [54]. Additionally, they observed that EBLV-1 seroprevalence differed between bats from the Vespertilionidae and Rhinolophidae families. This difference is likely due to differences between bat species in the immune response and the lifespan of immunity to a virus infection [53,55]. Therefore, seropositive bats are more likely to be found in species with long-lasting immunity [55]. However, no study has aimed to investigate differences in immunological responses between bat species to arbovirus infection.
Even though there is no serological or molecular evidence of arbovirus infections in Brazilian bats from the studied regions, we emphasize the importance of continuing studies in other locations in order to evaluate the importance of bats as arbovirus reservoirs and to determine whether these animals are an important part of the enzootic cycles of these viruses.
| 3,088.4 | 2018-11-07T00:00:00.000 | [
"Environmental Science",
"Biology",
"Medicine"
] |
Expression and characterization of a des-methionine mutant interleukin-2 receptor (Tac protein) with interleukin-2 binding affinity.
A gene coding for the Tac protein (interleukin-2 receptor alpha-subunit, IL-2R alpha) of the interleukin-2 receptor was constructed by chemoenzymatic gene synthesis. The gene designed for mutagenesis codes for a receptor protein where all 10 methionines are substituted by alanine, valine, leucine, and isoleucine. In addition, aspartate at position 6 is substituted by glutamate. This des-methionine IL-2R alpha and the wild-type IL-2R alpha genes were integrated into a eukaryotic expression vector and transferred into different cell lines. The recipient cell lines express both wild-type and mutant receptor proteins on their cell surfaces, which are recognized equally by different monoclonal antibodies. It was possible to establish cell lines with high-level IL-2R alpha chain expression by fluorescence-activated cell sorting. The wild-type IL-2R alpha expressed in LTK- cells is a glycoprotein with an apparent molecular size of about 60 kDa and a typical low interleukin-2 binding affinity of KD = 12 nM. Despite the fact that 11 amino acids are altered, no significant difference is observed in the mutant IL-2R alpha, which exhibits the same molecular size and a low interleukin-2 binding affinity of KD = 26 nM.
only transiently expressed on antigen- or mitogen-stimulated T-cells (5). Since neither IL-2 nor IL-2R is expressed constitutively, the induction of these two genes is of eminent importance for the immune response. The constitutive expression of IL-2R is induced by infection of T-cells with HTLV-I and is correlated with their tumorous growth by autocrine regulation (6-8).
With the aid of the monoclonal antibody anti-Tac (3, 9), which recognizes human IL-2R, the cDNA for an IL-2R subunit (Tac protein, IL-2Ra, or p55) has been cloned and verified by heterologous expression (10-12). IL-2Ra is a glycoprotein which migrates on SDS-polyacrylamide gel electrophoresis with a molecular size of about 60 kDa. The precursor protein of 28.5 kDa is co- and post-translationally modified by glycosylation, phosphorylation, and sulfation (4, 13, 14).
Radiolabel binding studies indicate a high (KD = 10 pM) and a low (KD = 10 nM) affinity for IL-2 binding to IL-2R (15, 16). Transfer and expression of IL-2Ra cDNA in nonlymphoid cells generates only low affinity binding sites (17)(18)(19). In contrast, expression of human IL-2Ra cDNA upon transfer into the mouse T-cell lines CTLL-2 or EL-4 results in the exhibition of high affinity receptors as well as the reconstitution of typical functionality (proliferation response to IL-2) in CTLL-2 cells (20,21). This discrepancy seems to be explained by the demonstration of a second IL-2-binding protein, the IL-2R beta-subunit (IL-2Rβ, p70, or converter), which is only found in T-lymphocytes and related cell lines (22-24). Different studies suggest that the high affinity IL-2R is a membrane complex composed of at least the converter protein p70 and the Tac protein p55 (25-27). Although IL-2Ra has been studied extensively, little is known about the IL-2-binding site(s) and IL-2Ra function. Site-directed mutagenesis of both IL-2Ra and the ligand IL-2 has not yet helped to answer these questions (28-30). Nothing is known about the three-dimensional structure of the Tac protein, which is essential for understanding IL-2 binding and the interaction with the converter protein.
As a prerequisite for solving these problems, we have devised a synthetic gene allowing simple modifications for mutagenesis, which codes for an IL-2Ra protein without methionines, coined des-Met IL-2Ra (31). The replacements of methionines were done to maintain the predicted secondary structure (32-34). The acid-labile sequence Asp-Asp-Asp-Pro at positions 4-7 was stabilized by substitution of aspartate at position 6 by glutamate. In this report, we show that this des-Met IL-2Ra is expressed as an IL-2-binding protein in eukaryotic cells. In parallel, we have expressed des-Met IL-2Ra in Escherichia coli as a fusion with β-galactosidase in order to isolate the receptor protein after chemical cleavage with cyanogen bromide (35).
[Fig. 1 (fragment): leader-peptide amino acid sequence Met Asp Ser Tyr Leu Leu Met Trp Gly Leu Leu Thr Phe Ile Met Val Pro Gly Cys Asn Ala (positions -20 to -1) with the corresponding double-stranded DNA of the IL-2RecL segment.]
FIG. 2. Plasmid constructions for expression of interleukin-2 receptors in eukaryotic cells. The synthetic gene segments IL-2RecI-III (finely stippled boxes)
were inserted into pUC8/9 for subcloning and verification of the DNA sequence (step 1). Gene segments with the correct DNA sequence were ligated to the total receptor gene and cloned with the leader segment (IL-2RecL or the PstI-SacI leader sequence from pGL2) into the expression vector pBEH (step 2). The cDNA for human IL-2Ra in pGL2 (stippled boxes) was also inserted into the expression vector pBEH. Details of the plasmid constructions are described under "Experimental Procedures." Functional parts of the vectors are as indicated.
Chemoenzymatic Synthesis of a Gene Coding for Des-Met
IL-2Ra - In order to obtain insight into the structure-function relationship of IL-2 binding to its membrane receptor, we have devised a synthetic gene coding for a des-Met IL-2Ra mutant protein. On the basis of the published cDNA and protein sequence (10-12), this gene was constructed (Fig. 1) with some particularly designed features. Following the secondary structure prediction of Chou and Fasman (32), we exchanged all 10 methionines of IL-2Ra with the amino acids alanine, valine, leucine, or isoleucine. The replacements were done to maintain the predicted secondary structure, e.g. Met-Ala (β-sheet) or Met-Leu (α-helix). Then, the gene was modified on the DNA level, exploiting the degeneracy of the genetic code, by introducing new restriction endonuclease recognition sites, e.g. KpnI, SalI, and ClaI, and deleting existing double or triple sites, e.g. NcoI and BglI. These modifications result in a module system facilitating future modifications such as site-specific mutagenesis by cutting out a gene fragment and synthesis of a modified new double strand followed by reinsertion. Besides these features, the sequence was modified in order to guarantee the exact finding of the individual complementary oligonucleotides and their joining with high efficiency in a one-step test tube reaction as introduced by Khorana (48). This was done with a computer program (51) checking whether the opposite strands a/b, c/d, and e/f do pair preferentially over, e.g., a/d or a/f (see Fig. 1). In particular, the free overlapping ends of 6-9 bases within the whole segments I-III were checked for their unambiguous matchings.
* "Experimental Procedures" are presented in miniprint at the end of this paper. Miniprint is easily read with the aid of a standard magnifying glass. Full size photocopies are included in the microfilm edition of the Journal that is available from Waverly Press.
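The overlap check can be pictured with a toy routine. This is our illustration, not the program cited as ref. 51; the sequences, function names, and scoring are invented for the example, and real assembly software would also consider partial and shifted alignments.

```python
def score(a, b):
    """Watson-Crick matches when overhang a (5'->3') anneals to overhang b (5'->3')."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return sum(comp[x] == y for x, y in zip(a, reversed(b)))

def is_unambiguous(overhangs, intended_pairs):
    """Every intended overhang pair must out-score all unintended pairings."""
    intended = set(intended_pairs) | {(b, a) for a, b in intended_pairs}
    names = list(overhangs)
    for a, b in intended_pairs:
        good = score(overhangs[a], overhangs[b])
        bad = max(score(overhangs[a], overhangs[c])
                  for c in names if c != a and (a, c) not in intended)
        if good <= bad:
            return False
    return True

# Toy 6-base overhangs (hypothetical sequences, for illustration only).
oh = {"a": "GATCCA", "b": "TGGATC", "c": "CCGTAA", "d": "TTACGG"}
print(is_unambiguous(oh, [("a", "b"), ("c", "d")]))   # True for this toy set
```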
The total gene consists of 834 base pairs and is divided into four DNA segments, I-III and L (for leader), defined by individual restriction endonuclease cutting sites (Fig. 2). Each segment is constructed from 10 to 16 individual oligonucleotides (two for the leader fragment), ranging in length from 27 to 73 bases. After a one-step hybridization and ligation of oligonucleotides La-Lb, Ia-Ip, IIa-IIl, and IIIa-IIIj with good success, we verified the DNA sequence according to Maxam and Gilbert (40). We found four out of five clones of segment IL-2RecIII with the correct sequence. Similar results were obtained with segments IL-2RecI, IL-2RecII and IL-2RecL.
Establishment of Cell Lines with Stable Expression of Wild-type and Des-Met IL-2Ra - The different gene fragments were ligated to the total gene and cloned in the vector pBEH as outlined under "Experimental Procedures" (see Fig. 2). This eukaryotic expression vector contains pBR322/328 and SV40 sequences for replication in E. coli and certain primate cells. A gene inserted into the polylinker from pUC9 is set under the control of the SV40 early promoter. Splicing and polyadenylation signals are from SV40. As a positive control in all experiments, the cDNA gene coding for wild-type IL-2Ra was cloned into the same vector (Fig. 2).
TABLE I. Eukaryotic cells with stable expression of wild-type and des-Met IL-2Ra
Cells were analyzed with a fluorescence-activated cell sorter after incubation with a monoclonal antibody against IL-2Ra. The percentage of Tac-positive cells was estimated as described under "Experimental Procedures." Cells were transfected with the indicated IL-2Ra. BHK-21, LTK-, and CHO-dhfr cells were transfected with pBEH des-Met IL-2Ra or pBEH IL-2Ra cDNA. After selection for G418 resistance, mixtures of about 500-1000 clones were tested for IL-2Ra expression by direct or indirect staining with monoclonal antibodies. We used different antibodies: anti-Tac (3), IL-2R1-FITC (Coulter clone), and 2A3 (46). They are all similar with regard to competing with IL-2 for binding to the receptor. The flow cytometric analysis of these cells clearly shows the expression of IL-2Ra and the des-Met mutant in all three examined cell lines (Fig. 3 and Table I). Stained cells had an intense membrane fluorescence, as seen in Fig. 4B. Monoclonal antibodies used for detection of IL-2Ra cannot distinguish between wild-type and mutant receptors, showing that des-Met IL-2Ra is stably expressed as an integral membrane protein exposing an epitope detected by these different antibodies.
Fluorescence-activated Cell Sorting Generates LTK-Cells with High Level IL-2Ra and Des-Met IL-2Ra Expression-
Both genes were stably expressed in BHK-21, CHO-dhfr, and LTK- cells. In mixtures of cell clones, the expression of the receptor varies over a wide range. We enriched a population by sorting out highly fluorescent cells to get a large number of receptor molecules per cell. For this purpose, we used LTK- cells, since they were more convenient for sorting. We found that more than 90% of the cells were still alive after sorting. The top 5% of stained viable cells at each sorting were collected. After 1 week of cultivation, enough cells were grown to repeat the sorting procedure. Five cycles of sorting were necessary to establish cells with high-level expression of the antigen, as shown in Fig. 4 (C and D). The cells expressing wild-type IL-2Ra were 81% positive; des-Met mutant cells were 97% positive (Table I). Most cells had a high relative fluorescence intensity, and the level of expression was stable during 2 months of cultivation. The population of LTK- cells expressing wild-type IL-2Ra still contained about 20% negative cells.
Immunoprecipitation of Des-Met IL-2Ra with an Apparent Molecular Size of 60 kDa - IL-2Ra on T-lymphocytes is a glycoprotein with a 28.5-kDa peptide backbone which is sequentially processed through at least two intermediate forms to a mature form of 50-60 kDa containing both N- and O-linked carbohydrate moieties (4,14). For this reason, we were interested in whether the post-translational modifications of the wild-type and mutant receptor proteins in transfected LTK- cells were distinguishable. It is known that the recognition of potential glycosylation sites is due to several intrinsic parameters of the protein, including protein conformation. We examined the molecular size of des-Met and wild-type IL-2Ra by immunoprecipitation of cells labeled in vivo with [35S]cysteine, using a rabbit polyclonal antibody directed against human IL-2Ra. The result is shown in Fig. 5. The wild-type and des-Met mutant IL-2Ra have identical molecular sizes of about 60 kDa on a discontinuous 12% SDS-polyacrylamide gel (Fig. 5, lanes C-E), suggesting that both molecules are equally glycosylated in mouse LTK- cells. The bands are indistinct, which is typical for heterogeneity in the carbohydrate moieties. This heterogeneity is found for IL-2Ra in lymphoid cells, too (4,9).
Scatchard Plot Analysis of Interleukin-2 Binding to Recombinant LTK-Cells-Since we did not observe any differences by comparing mutant with wild-type IL-2Ra, we attempted to test whether or not mutant IL-2Ra is capable of binding IL-2.
Competition experiments and estimations of the dissociation constant (KD) with 125I-labeled rIL-2 as tracer were undertaken according to Robb et al. (15,16) with five-times-sorted LTK- cells. Staining profiles of these cells are shown in Fig. 4 (C and D). Nontransfected LTK- cells served as a negative control. In each experiment, unspecific, nonsaturable binding was determined by addition of 0.1 g/liter anti-Tac mAb (9,16,49) or 10 μM rIL-2 into the assay medium and is subtracted from total binding in Fig. 6 or Table II. This unspecific binding was usually low and did not vary significantly. The result of the different radiolabel binding experiments is that des-Met IL-2Ra can bind IL-2. It was possible to displace 76-94% of 125I-labeled rIL-2 binding to LTK- cells expressing the des-Met mutant by 10 μM rIL-2 or 0.1 g/liter anti-Tac mAb. As a control, we employed mouse ascites fluid with antibody directed against β-galactosidase at a concentration of 0.1 g/liter, which had no effect on IL-2 binding (data not shown). This demonstrated the specific and saturable binding of IL-2 to the mutant receptor. The results of binding experiments to determine the dissociation constant (KD) with different IL-2 concentrations (typically from 0.235 to 60.0 nM) are shown in Fig. 6. As calculated from the slope of the straight line in the Scatchard plot, des-Met IL-2Ra binds IL-2 with KD = 25.8 ± 7.0 nM. Authentic IL-2Ra binds IL-2 with an affinity of KD = 11.9 ± 2.1 nM (Table II). If present at all, this difference in affinity is small and correlates well with published KD values for low affinity IL-2 binding (16-18) to the Tac protein. As expected, we could not detect any high affinity binding sites at low IL-2 concentrations on transfected LTK- cells. High affinity binding sites are found on the HTLV-I-infected human T-cell line HUT 102 (Table II).
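Extracting KD from a Scatchard plot is a linear fit; the sketch below is ours, with a made-up function name and hypothetical numbers rather than the paper's data. Bound/free plotted against bound gives a slope of -1/KD and an x-intercept equal to the total number of binding sites.

```python
import numpy as np

def scatchard_kd(bound, free):
    """Fit bound/free vs. bound; return K_D (units of `free`) and the x-intercept
    (total binding sites), per the classic Scatchard linearization."""
    ratio = np.asarray(bound, float) / np.asarray(free, float)
    slope, intercept = np.polyfit(bound, ratio, 1)
    kd = -1.0 / slope
    sites = -intercept / slope          # x-intercept of the fitted line
    return kd, sites

# Hypothetical equilibrium-binding data (bound vs. free ligand, nM).
bound = [0.8, 1.5, 2.6, 3.5, 4.1]
free = [2.0, 5.0, 12.0, 25.0, 45.0]
print(scatchard_kd(bound, free))
```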
Interleukin-2 receptor numbers and affinities on sorted LTK- and HUT 102 cells
The results represent the mean ± S.D. of multiple binding experiments (numbers in parentheses).
| 3,105.2 | 1988-06-15T00:00:00.000 | [
"Biology"
] |
Effect of TRPV4-p38 MAPK Pathway on Neuropathic Pain in Rats with Chronic Compression of the Dorsal Root Ganglion
The aim of this study was to investigate the relationships among TRPV4, p38, and neuropathic pain in a rat model of chronic compression of the dorsal root ganglion. Mechanical allodynia appeared after CCD surgery, enhanced via the intrathecal injection of 4α-phorbol 12,13-didecanoate (4α-PDD, an agonist of TRPV4) and anisomycin (an agonist of p38), but was suppressed by Ruthenium Red (RR, an inhibitor of TRPV4) and SB203580 (an inhibitor of p38). The protein expressions of p38 and P-p38 were upregulated by 4α-PDD and anisomycin injection but reduced by RR and SB203580. Moreover, TRPV4 was upregulated by 4α-PDD and SB203580 and downregulated by RR and anisomycin. In DRG tissues, the numbers of TRPV4- or p38-positive small neurons were significantly changed in CCD rats, increased by the agonists, and decreased by the inhibitors. The amplitudes of ectopic discharges were increased by 4α-PDD and anisomycin but decreased by RR and SB203580. Collectively, these results support the link between TRPV4 and p38 and their intermediary role for neuropathic pain in rats with chronic compression of the dorsal root ganglion.
Introduction
After tissue injury and inflammation, the sensory signals from the primary sensory neurons to the spinal dorsal horn change significantly, ultimately leading to the development of chronic pain [1]. Abnormal pain manifestations including allodynia, hyperalgesia, and spontaneous pain episodes are believed to partially result from plastic phenomena in the spinal sensory system [2,3]. These manifestations, known as neuropathic pain, are caused by the primary injury or functional disability of the nervous system. Radicular neuralgia is the most common form of neuropathic pain; it occurs when the radix spinalis or the dorsal root ganglia (DRGs) are stimulated by harmful factors (e.g., the protrusion of a lumbar intervertebral disc, lumbar spinal stenosis, spinal cord tumor compression, and certain inflammatory substances). They then become excited to create and transmit neuropathic pain signals [4]. Our studies have used chronic compression of the DRGs (CCD) as a typical model of neuropathic pain that demonstrates spontaneous pain, hyperalgesia, and allodynia and is accompanied by increased spontaneous discharges of neurons as well as decreased action potentials and electric current thresholds [5].
When the physiopathological mechanism of inflammatory pain was studied in patients with amputation neuroma, spinal cord injury, or other models of neuropathic pain, mitogen-activated protein kinases (MAPKs, e.g., ERK, JNK, and p38) played a critical role. The phosphorylated forms of these kinases maintain and enlarge the pain signal from the peripheral nociceptors or DRGs by modifying proteins posttranslationally and regulating the transcription of critical genes. The local injection of MAPK inhibitors significantly depresses thermal and mechanical hyperalgesia [6][7][8]. p38 is an important MAPK involved in inflammation-induced pain and is activated by a variety of acute stimuli, including the intraplantar injection of formalin and the intrathecal injection of substance P [9]. Furthermore, p38 reduces pain by inhibiting p38 phosphorylation through decreased TNF-α [10]. Pregabalin reduces the nociceptive reaction in a zymosan-induced inflammatory pain model by inhibiting the phosphorylation of MAPKs [11]. Considerable evidence indicates that p38 is an important mediator of the NF-κB pathway, which might be inhibited by SB203580 through the NF-κB pathway [12,13]. Thus, p38 and the NF-κB pathway are likely key contributors to inflammation [14][15][16].
Transient receptor potential (TRP) channels are Ca2+-permeable cation channels that play important roles in sensory function. These channels can be activated by a variety of stimuli, including mechanical and osmotic stress, thermal stimuli, and chemical signals [17,18]. TRP vanilloid receptor 4 (TRPV4) is a member of the vanilloid subfamily of TRP channels, and accumulating experimental evidence over the past decade has increasingly clarified its role in pain signal transduction [19][20][21][22][23][24][25][26][27]. Our preliminary studies showed that TRPV4 participates in the mechanisms associated with the high excitability of neurons, hyperalgesia, and allodynia after CCD surgery. Moreover, TRPV4 antisense oligodeoxynucleotides partially restore the decreased mechanical withdrawal threshold after CCD surgery without influencing the basal threshold [28]. These antisense oligodeoxynucleotides exert their effect through the TRPV4-NO pathway mediated by NF-κB [29]. TRPV4 ion channels can be induced via a hypotonic solution and 4α-PDD; furthermore, they open with increased inward current, leading to a peak of intracellular calcium. This increasing intracellular calcium activates p38 by promoting its phosphorylation into a biologically functional form, mediating mechanical allodynia and inflammation-induced hyperalgesia [30]. Capsazepine, the antagonist of another TRP family member, TRPV1, activates p38, JNK, and ERK1/2 dose-dependently and upregulates the death receptors (DRs) through activated JNK [31]. No direct evidence has been reported with regard to the relationship between TRPV4 and p38. Thus, we formed the following hypothesis: activated and upregulated TRPV4 in the DRG neurons of a CCD model mediates the formation and transmission of neuropathic pain through the alteration of p38.
Spontaneous pain is likely related to the abnormal electrical activity and activated TRPV4 ion channels that mediate calcium and sodium influx, creating ectopic discharges [32]. The nerve fibers involved in ectopic discharges include A and C fibers; A fibers are thin, have a myelin sheath, and mediate stabbing pain, whereas C fibers are thick, do not have a myelin sheath, and mediate causalgia. C fibers arise from small neurons in the DRGs and primarily conduct slow pain.
We investigated the expression of TRPV4 and p38 after CCD surgery and then determined whether agonizing or inhibiting TRPV4 and p38 changed the expression and location pattern of those proteins. Finally, we examined the effects of those agonists or inhibitors on the development of mechanical allodynia and ectopic discharges.
Experimental Animals.
Adult SPF male Wistar rats with weights ranging from 180 g to 200 g, provided by the Experimental Animal Center of Shandong University, were housed in a room with pathogen-free air, at 20 ± 2 ∘ C, two per cage, on a 12 h light/dark cycle. Water and food were available ad libitum. The animals were allowed 7 days to habituate to their housing prior to manipulation and 30 min to habituate to the experimentation environment before each behavioral study was performed. The Animal Care and Use Committee of Shandong University approved all experimental procedures.
Reagents.
Four days after CCD surgery, the TRPV4 inhibitor Ruthenium Red (RR, Sigma, Germany), the TRPV4 agonist 4α-PDD (CST, USA), the p38 inhibitor SB203580 (CST, USA), or the p38 agonist anisomycin (CST, USA) was given to the experimental groups at the recommended concentrations via intrathecal injection.
CCD Model.
After anesthetizing the rats with 10% chloral hydrate (300 mg/100 g body weight, i.p.), the animals were shaved and sterilized. Then, their skin was cut between the bilateral spina iliac, extending upward approximately 2 cm. Next, the deep fascia and muscle were longitudinally cut upward approximately 2 cm from the end of the right fascia triangle; the tail levator was pushed aside so that the mastoid of L4 and L5 was visible. After separating the covering muscles, the outer intervertebral foramen of L4 and L5 was revealed; then, sterile L-shaped steel bars (diameter = 0.63 mm) were inserted into the intervertebral foramen of L4 and L5 at a 30 ∘ oblique angle with the spinal column, keeping the other end of the steel bar out of the intervertebral foramen. After the operation, the incisions were washed with normal saline; then, the muscle, fascia, and skin were sutured in sequence, and penicillin was given intraperitoneally to prevent infection. The rats that developed autophagy, sensory deficiency, or disability were eliminated from analysis.
Behavioral Testing.
Walk gait pattern was assessed as an index of motor function. A score of 1 indicates a normal gait, without foot deformities; a score of 2 indicates a normal gait with obvious foot deformities; a score of 3 indicates a slight gait disturbance with foot-drop; and a score of 4 indicates a serious gait disturbance with myasthenia. Only rats scoring 1 were used for the following experimental procedures.
The behavioral testing was performed with regard to the ipsilateral hind paw of the animals prior to surgery as well as on postoperative days 2, 4, 6, 10, 14, and 28. The effects of the inhibitor or agonist on the CCD-induced allodynia were tested between 0.5 h and 8 h after injection. The paw withdrawal mechanical threshold (PWMT) was evaluated using a BME-404 Mechanical Analgesia Tester (Chinese Academy of Medical Sciences, CAMS, Beijing, China). A probe was pressed against the lateral plantar surface of the hind paws with sufficient force. A positive response was noted when the paw immediately withdrew. The procedure was repeated five times at least 5 min apart, and the average value was used as a variable.
Western Blot
Analysis. The L4 and L5 ganglia from the operated side were harvested quickly and carefully. Protein samples of the DRGs were prepared on ice. Then, the samples of total protein were separated by 5% and 10% SDS-PAGE. Proteins were transferred to polyvinylidene fluoride membranes. The membranes were incubated in 5% milk for 2 h at room temperature. Then, the membranes were incubated with the primary antibody at 4°C overnight, followed by horseradish peroxidase- (HRP-) conjugated secondary antibodies for 1 h. The signal was detected using the Immobilon Western Chemiluminescent HRP Substrate. The primary antibodies were rabbit anti-TRPV4 polyclonal antibody (1:800, Abcam, Cambridge, UK), rabbit anti-p38 polyclonal antibody (1:200, CST, USA), and rabbit anti-P-p38 polyclonal antibody (1:1,000, CST, USA), whereas the secondary antibody was a goat anti-rabbit antibody (1:8,000, Zhongshan Goldenbridge, Beijing, China). The protein bands were visualized using a FluoroChem 9900 Imaging System (USA), and the bands' intensity was quantified with the Quantity One software and normalized to β-tubulin (1:1,000, CST, USA).
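The normalization step amounts to a ratio of densitometry readings; the following minimal sketch is our illustration, with a made-up function name and hypothetical intensity values, not output from the Quantity One software.

```python
def relative_expression(band, tubulin, control_band, control_tubulin):
    """Band intensity normalized to beta-tubulin, expressed as fold change over control."""
    return (band / tubulin) / (control_band / control_tubulin)

# Hypothetical densitometry values (arbitrary units).
print(relative_expression(band=1520, tubulin=980, control_band=850, control_tubulin=1010))
```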
Immunohistochemistry.
After the behavioral and pain tests, the rats were deeply anesthetized with 5% isoflurane and perfused transcardially with cold normal saline followed by a fixative containing 4% paraformaldehyde and 0.2% picric acid in 0.1 M phosphate-buffered saline (PBS, pH 6.9). Ipsilateral lumbar L4-L5 DRGs were removed rapidly after perfusion, postfixed in the same fixative overnight at 4°C, and then dehydrated and paraffin-infused. A series of 4-μm paraffin sections was cut using a rotary microtome. The sections were heated at 65°C for at least 2 h and then deparaffinized. Antigen retrieval was accomplished with citrate buffer in a microwave oven at 92-98°C for 15-20 min. The sections were washed in PBS and then incubated separately in rabbit anti-TRPV4 polyclonal antibody (1:200, Abcam, Cambridge, UK), rabbit anti-p38 polyclonal antibody (1:50, CST, USA), and rabbit anti-P-p38 polyclonal antibody (1:100, CST, USA) at 4°C overnight. The sections were incubated with a specific secondary antibody for 2 h at room temperature. DAB substrate solution and hematoxylin were used to develop the color. Labeled sections were examined under a Leica Quantimet 550 DMRXA automated research microscope (GER) and analyzed using IPP 6.
Ectopic Spontaneous Discharge Recording.
Four days after CCD surgery, rats without autotomy, sensory deficiency, or disability were selected. The animals were anesthetized with 10% chloral hydrate (300 mg/100 g body weight, i.p.). The skin was incised, the muscles were retracted, and the vertebral plates of L2-L6 were removed carefully to avoid spinal cord or nerve injury. The steel bars were removed from the DRGs, the peripheral nerves were cut approximately 10 mm from the L4 and L5 DRGs to block signaling from the peripheral receptors, and the communicating branches of
Statistical Analyses.
All calculations and statistical analyses were performed using Prism 5.0 (GraphPad Software, San Diego, CA, USA). Data are expressed as means ± SEM, and P values < 0.05 were considered statistically significant. The analyses were performed using one-way ANOVA (with Tukey post hoc tests) and two-way ANOVA.
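The study itself used Prism; the following Python sketch shows an equivalent analysis for readers who want to reproduce the workflow programmatically. The data file name and the column names (pwmt, group, day) are illustrative assumptions, not the study's actual files.

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Load PWMT measurements; file and column names are assumed for illustration.
df = pd.read_csv("pwmt_data.csv")

# One-way ANOVA across treatment groups at a single postoperative day,
# followed by Tukey's post hoc comparisons (alpha = 0.05).
day4 = df[df["day"] == 4]
one_way = anova_lm(ols("pwmt ~ C(group)", data=day4).fit(), typ=2)
tukey = pairwise_tukeyhsd(day4["pwmt"], day4["group"], alpha=0.05)

# Two-way ANOVA (treatment group x postoperative day) over the full time course.
two_way = anova_lm(ols("pwmt ~ C(group) * C(day)", data=df).fit(), typ=2)

print(one_way)
print(tukey.summary())
print(two_way)
```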
Changes in PWMT in CCD Rats after the Injection of Agonists and Inhibitors.
Using behavioral testing, we first characterized the mechanical allodynia of the ipsilateral hind paw relative to controls. PWMT decreased significantly from the second day after CCD surgery and remained reduced for 14 days (P < 0.01, Figure 1); it then returned to normal levels. To further study the roles of TRPV4 and p38 in neuropathic pain, we determined the abilities of RR, 4α-PDD, SB203580, and anisomycin to enhance or block the nociceptive signs induced 4 days after CCD surgery. As Figures 2(a) and 2(c) show, RR- and SB203580-treated rats exhibited paw withdrawals at higher mechanical forces compared with the saline group. In addition, 4α-PDD and anisomycin injections markedly decreased the paw withdrawal threshold compared with the saline group (Figures 5(b) and 5(d)). The most prominent time point was approximately 2 h after injection; no significant dose dependence was found.
Effects of Agonists and Inhibitors of TRPV4 and p38 on Protein Expression in CCD Rats.
To investigate whether changes in TRPV4 and p38 expression affect each other, pharmacological agonists and inhibitors were given to CCD rats. The concentrations of these reagents were 1 nmol/L, 10 nmol/L, and 100 nmol/L for RR and 4α-PDD; 10 μmol/L, 20 μmol/L, and 40 μmol/L for SB203580; and 5 μg/mL, 25 μg/mL, and 40 μg/mL for anisomycin. The expressions of TRPV4, p38, and P-p38 were tested at 1, 2, 4, and 8 h after intrathecal injection of these drugs. The control group was given an equal volume of normal saline.
As shown in Figure 3(a), TRPV4, p38, and P-p38 were significantly inhibited by RR (2-4 h for TRPV4; 1-8 h for p38; and 1-4 h for P-p38), and both the TRPV4 and p38 changes were dose-dependent. When TRPV4 expression was increased by 4α-PDD (2 h, see Figure 3(b)), p38 and P-p38 were also upregulated (4 h for p38; 2 h for P-p38), and these changes depended on the concentration of the drug given. The administration of SB203580 (Figure 3(c)) significantly reduced the expression of p38 and P-p38 (4 h for p38; 2-4 h for P-p38) but significantly increased the expression of TRPV4 (1-8 h) regardless of the concentration. Finally, we gave anisomycin to the CCD rats (see Figure 3(d)); p38 protein expression was significantly increased (1-2 h), TRPV4 was inhibited (1-8 h), and P-p38 did not change significantly. A clear dose-dependent relationship was not found.
Protein Distribution Changes after Intrathecal Injections of TRPV4 and p38 Agonists and Inhibitors among CCD Rats.
To evaluate whether the cellular distributions of TRPV4 and p38 within DRG neurons were altered by CCD and by the intrathecal injections of agonists and inhibitors, we used immunohistochemical staining to determine the proportion of TRPV4- and p38-positive neurons in the DRG tissues of CCD rats and controls after injection (Figures 4 and 5). We found that TRPV4 and p38 labeling were both evident in small, medium, and large ganglion cell bodies (small < 30 μm, medium 30-40 μm, and large > 40 μm). The number of positive cells increased after CCD and was affected by the agonists and inhibitors. A quantitative analysis revealed that the number of TRPV4-positive neurons (Figure 4(g)) in the small and total ganglion neuron groups increased significantly (P < 0.01) compared with controls. Following the RR and SB203580 injections, the number of TRPV4-positive small neurons was reduced (P < 0.01). The total number of positive neurons increased after anisomycin injection (P < 0.01), which significantly differed from the CCD group. As Figure 5(g) shows, the number of p38-positive neurons of all sizes was significantly increased after CCD compared with controls (P < 0.05, large; P < 0.01, medium, small, and total). The number of p38-positive small neurons and the total number of p38-positive neurons were significantly reduced by SB203580 (P < 0.01) and increased by 4α-PDD (P < 0.01) and anisomycin (P < 0.01) compared with the CCD group.
The Effects of the Agonists and Inhibitors on Electrophysiological Properties.
To confirm the contributions of TRPV4 and p38 to spontaneous pain, we measured the ectopic discharges after CCD and after the intrathecal injection of agonists or inhibitors. As Figure 6(a) shows, ectopic discharges were rare in normal rats. The frequencies of ectopic discharges did not markedly differ between groups (Figure 6(h)). However, the amplitudes of the ectopic discharges (Figure 6) in the RR and SB203580 groups were significantly reduced (P < 0.01) but significantly increased in the 4α-PDD and anisomycin groups (P < 0.01).
Discussion
The current study clearly shows that the expressions of TRPV4, p38, and P-p38 were elevated shortly after CCD surgery, whereas the PWMT decreased between 2 and 14 days after the operation. We would like to evaluate rats at 4 days after CCD surgery in future experiments. When TRPV4 was activated by 4α-PDD, the expressions of TRPV4, p38, and P-p38 were upregulated, accompanied by an increase in TRPV4- and p38-positive small neurons in DRG tissue, enhanced mechanical allodynia, and larger amplitudes of ectopic discharges compared with the CCD rats given normal saline. The rats given RR injections exhibited the opposite pattern compared with those given 4α-PDD, although RR is not only a TRPV4 blocker but can also block other mechanically sensitive channels, as well as channels that are not mechanically sensitive. We will try to find more TRPV4-specific blockers in further studies. When p38 was activated in CCD rats by anisomycin, the expression of p38 and P-p38 increased; however, anisomycin reduced TRPV4 expression. Interestingly, the number of TRPV4- and p38-positive small neurons in DRG tissue increased simultaneously, and the amplitudes of ectopic discharges increased in CCD rats given anisomycin, similar to those given 4α-PDD. When the activation of p38 was inhibited by SB203580, the PWMT and ectopic discharges changed in the direction opposite to that seen in rats given anisomycin. After CCD, the allodynia of rats and the hyperexcitability of their neurons were due to the activity of ion channels (e.g., voltage-gated Na+ and K+ channels), hyperpolarization-activated cation channels, and TRP channels. TRPV4 is a highly Ca2+-permeable cation channel found in the cell membrane, cytoplasm, and nucleus that plays a marked role in hyperalgesia [17]. p38 is the third MAPK found in mammals. This kinase is activated by inflammatory cytokines (IL-1β and TNF-α) and cell stressors (UV, extracellular hypertonic solution, and heat shock) [30] and takes part in intracellular inflammation and stress reactions as a single intracellular cascade reaction [33]. After DRG injury, the phosphorylation of p38 clearly increases, and SNL-induced transitory phosphorylation occurs 5 days after surgery [34]. The total protein expressions of TRPV4, p38, and P-p38 were tested after CCD, and the results showed that chronic compression of the DRG induced the upregulation of TRPV4 and p38 and facilitated the phosphorylation of p38; however, these changes were not completely synchronous with the PWMT. These results reveal that TRPV4 and p38 are both involved in the mechanism of allodynia after CCD surgery, although other factors (e.g., the mechanism of central sensitization) should not be ignored.
When TRPV4 ion channels were activated or inhibited, the expression of p38 changed in parallel with TRPV4, and the phosphorylation of p38 followed a similar pattern. These changes in the quantity and activation of TRPV4 were accompanied by corresponding changes in the expression and phosphorylation of p38, which might be mediated by TRPV4-dependent calcium influx. When the agonist or inhibitor of p38 was given to CCD rats, TRPV4 expression changed in the direction opposite to p38 expression, showing that the expression and phosphorylation of p38 affect TRPV4 through an unknown mechanism. Regarding the PWMT, TRPV4 and p38 agonists intensified allodynia after CCD, whereas TRPV4 and p38 inhibitors alleviated post-CCD allodynia, strongly supporting a role for TRPV4 and p38 in allodynia. Intrathecal injection is an effective delivery route, and the success rate of injection was more than 95%. Remarkably, the drugs are distributed not only to the DRGs but also to the spinal cord. We would like to investigate the relationship between TRPV4 and p38 in the spinal dorsal horn and their effects on PWMT in future studies.
As core components of the nervous system, neurons are involved in processing and transmitting signals, including inflammatory responses [35,36]. These results show that the small neurons in the DRG might be the most important neurons for processing and transmitting pain signals. The L5-VRT model [37] demonstrated a method for investigating the role of uninjured neurons in neuropathic pain without sensory afferent injury. Eight to ten days after L5-VRT, the ipsilateral L4 and L5 DRGs of rats generated stronger spontaneous discharges at the same time as mechanical hyperalgesia appeared. Of the two types of nociceptors, Aδ fibers for fast pain and C fibers for slow pain, the excitability of C fibers changes with tissue injury and inflammatory stimuli, contributing to the formation of neuropathic pain. As demonstrated in the immunohistochemical experiments, the number of TRPV4- or p38-positive small neurons changed significantly after CCD and was altered by the agonists and inhibitors of TRPV4 and p38. Thus, the small neurons branching out into C fibers might be the most important neurons for neuropathic pain. DRGs contain the primary sensory afferent neurons and primarily nourish and support the nerves that branch from them. Under normal conditions, DRG neuron cell bodies neither transmit electrical signals directly nor create spontaneous discharges [38]. After DRG injuries, the physiological function of the neurons changes, and they become unduly excitable. Thus, slight chemical or physical stimuli can induce large amounts of ectopic discharges from neurons in vivo, making them a signal source and leading to neuropathic pain that presents as spontaneous pain, hyperalgesia, and allodynia. The results of the electrophysiological experiments showed that ectopic discharges appeared after CCD and that their amplitudes were affected by the activation or inhibition of TRPV4 and p38, similar to the changes in small neurons. The small neurons take part in the formation of ectopic discharges mediated by TRPV4 and p38, and the TRPV4-p38 pathway might be one mechanism of ectopic discharge.
Conclusion
In conclusion, the changes in the expression and activation of TRPV4 and p38 after CCD contribute significantly to the alterations in PWMT and in the amplitudes of ectopic discharges, with the two proteins acting upon each other, mostly in small neurons. The current studies provide evidence for a link between TRPV4 and p38 that plays an intermediary role in neuropathic pain. Furthermore, the link between TRPV4 and p38 is bidirectional.
| 5,129.6 | 2016-06-05T00:00:00.000 | [
"Biology"
] |
Impact of Adaptive Thermogenesis in Mice on the Treatment of Obesity
Obesity and associated metabolic diseases have become a priority area of study due to the exponential increase in their prevalence and the corresponding health and economic impact. In the last decade, brown adipose tissue has become an attractive target to treat obesity. However, environmental variables such as temperature and the dynamics of energy expenditure could influence brown adipose tissue activity. Currently, most metabolic studies are carried out at a room temperature of 21 °C, which is considered a thermoneutral zone for adult humans. However, in mice this chronic cold temperature triggers an increase in their adaptive thermogenesis. In this review, we aim to cover important aspects related to the adaptation of animals to room temperature, the influence of housing and temperature on the development of metabolic phenotypes in experimental mice and their translation to human physiology. Mice studies performed in chronic cold or thermoneutral conditions allow us to better understand underlying physiological mechanisms for successful, reproducible translation into humans in the fight against obesity and metabolic diseases.
Introduction
Endotherms, such as mammals, are organisms that use the heat released during cell metabolism to maintain a stable internal temperature [1]. This constant central temperature maintenance favors metabolic conditions so that enzymatic reactions can be carried out optimally, allowing endothermic organisms to be active and adapt to various environments through internal thermoregulation [2,3].
Since a stable core temperature is essential for the survival of endotherms, endothermic animals do everything possible to defend their core temperature in colder environments (Figure 1). However, when defense of the core temperature is not possible, such as during food shortages or seasonal cold periods, many endotherms, including mice, leave homeothermy and enter seasonal torpor or hibernation to conserve energy [4,5].
The experimental mouse, Mus musculus, is one of the most commonly used model organisms for studies of metabolism, immunity and cardiovascular physiology, and for modelling human diseases [6][7][8]. The reason is the conservation of genes between mice and humans, along with the growing repertoire of genetic tools that allow the manipulation of mouse genes to decipher mechanisms underlying physiological and pathophysiological processes. Therefore, we assume that research in mice will provide valuable information on human biology. Although this is true in most studies, environmental variables such as housing temperature can limit the direct translation of mouse findings to humans.
Figure 1. Classification of animals according to their body temperature and adaptation to room temperature. Animals are classified according to their way of acquiring body heat and their ability to adapt to room temperature. Endothermic animals can produce heat endogenously. Ectothermal animals cannot produce their own heat, so they rely on ambient temperature. In addition, animals can be classified into homeotherms that can keep their body temperature constant, or poikilotherms that have a variable body temperature. Humans and mice are mammals that are classified as homeothermal endotherms. However, when subjected to an ambient temperature well below their thermoneutrality, they experience physiological responses that trigger adaptive thermogenesis.
Like other small mammals, the mouse has a large surface area and a small body mass. This makes mice vulnerable to fluctuations in the ambient temperature (T a ), especially when it falls below their thermoneutral temperature (29-31 °C) [9][10][11][12][13]. Mammals try to maintain their core temperature through the adaptive capacity of thermoregulation. Thus, the mouse uses various adaptations to keep thermal homeostasis in colder environments. For instance, the function of brown adipose tissue (BAT) is to maintain body temperature through a process called thermogenesis or heat production. Currently, most metabolic studies involving rodents are carried out at 21 °C, which is within the thermoneutral zone of adult humans but below the thermoneutral zone of mice. As a consequence, research studies in mice that are housed at 21 °C may not directly apply to humans, who live mainly within their thermal comfort zone or thermoneutrality [6,7,10]. For this reason, it is necessary to understand how T a affects metabolic and cardiovascular phenotypes in mice, and the importance of this variable in the modelling of human diseases in rodents.
As animal models and measurement techniques become increasingly accurate and sophisticated, environmental variables become critical for research development. A stable, defined environment is essential to generate consistent experimental results that support both replication and valid interpretations of the data. Previous studies have shown how mouse adaptation to T a alters disease phenotypes [14][15][16]. Consequently, T a might be a variable to consider in metabolic studies to guarantee valid interpretation of experimental results, consistent conclusions and greater certainty in the translation of preclinical experiments to clinical studies.
The prevention and treatment of obesity has become a health priority. There is an alarming increase in the prevalence of obesity and associated metabolic diseases, including type 2 diabetes mellitus (T2D) and cardiovascular disease [17]. The health and economic impact of monitoring and managing obesity and associated complications is also remarkable. Lifestyle changes, such as dietary interventions and/or increased physical activity, have been widely recommended to prevent and treat obesity. However, it is essential to determine why, in general, many obese individuals are exceptionally resistant to treatment and voluntary weight loss is so difficult to achieve and sustain over time. Thus, a better understanding of energy homeostasis is essential.
In this review, we aim to cover important aspects related to the adaptation of animals to T a , the influence of T a on the development of metabolic phenotypes in experimental mice, and their translation into human physiology.
Classification of Animals According to Body Temperature and Their Adaptation to Ambient Temperature
All living beings are sensitive to a minimum, optimum and maximum temperature. Due to environmental adaptations, organisms are conditioned to their habitat in different climatic zones. Accordingly, they are classified into eurytherms (tolerant to a wide variation of external temperatures) and stenotherms (tolerant to a narrow range of ambient temperatures) [18] (Figure 1).
The temperature of an animal is the amount of heat per unit of tissue mass and is a balance between heat production and exchange, a key determinant in reproduction and development [19]. Body temperature (T b ) is defined as the reflection of the thermal energy that is retained in the body's molecules. Based on the stability of T b , animal species can be classified as either poikilotherms or homeotherms [19] (Figure 1). Poikilotherms are animals with a variable T b , i.e., their temperature changes in response to environmental conditions. In contrast, homeotherms are animals that maintain a relatively stable T b . Most homeotherms manage to maintain a constant T b through physiological processes that regulate production rates and heat loss. The difference between poikilotherms and homeotherms depends on the animal's physiology and the nature of the environment. An animal can maintain a constant T b if it inhabits an environment with a constant T a . Thermoregulation mechanisms are understood as the physiological strategy that an animal uses to control temperature within the desired range [20]. According to these thermoregulation mechanisms, animals are also described as ectotherms and endotherms ( Figure 1).
In addition, animals can control their T b through their behavior. Behavioral thermoregulation can be used to control the body temperature of a poikilotherm or to reduce the cost of thermoregulation in a homeotherm [20]. In ectotherms, the environment and behavioral thermoregulation determine the T b . In contrast, endotherms are vertebrates that generate internal heat to maintain a given T b .
Most mammals and birds (as illustrated in Figure 1) are classified as homeotherms because T b is stable, and endotherms since they thermoregulate T b through metabolic heat and the thermal insulation capacity of the animal.
Relationship Between Body Size and Physiological Temperature
Since the 1990s, genetic mouse models have been used to study obesity and energy balance. The cloning and characterization of mutant genes associated with obesity led to the discovery of proteins such as leptin [21], its receptor [22] and melanocyte-stimulating hormone [23], among others [24][25][26], that cause monogenic obesity in mice and humans [27]. Together, these studies validated the use of the mouse in the modelling of biological diseases related to energy homeostasis. Nonetheless, in energy homeostasis studies, the thermal physiology of the experimental model of choice must be considered [8]. Mice and humans are both endothermic mammals with the ability to thermoregulate to maintain a constant T b . Yet the process of thermoregulation has important physiological differences between the two species that we should bear in mind as researchers. For instance, the size of an animal influences its thermal biology through its surface/volume ratio: the larger the individual, the smaller the ratio. Body surface area is proportional to mass to the power of 2/3 and is an important determinant of heat loss [28,29] (see the sketch below). Adult humans are approximately 3000 times heavier than mice (75 kg vs. 25 g). As thermal biology depends on body size, it is important to consider this significant difference in inter-species dimensions. Homeothermic endotherms, such as mice and humans, must dissipate the excess heat produced by their metabolism across the body surface. Humans have a larger body size with a lower relative surface area, which leads to less heat loss. In contrast, mice have a smaller body size with a greater relative surface area, and thus greater heat loss.
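The scaling argument can be made concrete with a short Python calculation. It assumes only geometric similarity, so that surface area scales with mass to the power 2/3; the body masses are the round figures quoted above, and any proportionality constant cancels in the ratio.

```python
# Surface area per gram under the assumption area ∝ mass^(2/3).
human_mass_g = 75_000.0   # ~75 kg adult human
mouse_mass_g = 25.0       # ~25 g mouse

def relative_surface_per_gram(mass_g: float) -> float:
    # area ∝ mass^(2/3)  =>  area per unit mass ∝ mass^(-1/3)
    return mass_g ** (2.0 / 3.0) / mass_g

ratio = relative_surface_per_gram(mouse_mass_g) / relative_surface_per_gram(human_mass_g)
print(f"A mouse has roughly {ratio:.0f}x more surface area per gram than a human")
# (75000 / 25)^(1/3) ≈ 14x, which is why mice lose heat much faster per gram of tissue.
```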
Mice and humans have a similar internal T b , averaging 37.0 °C in humans and 36.6 °C in mice [30], which is within the characteristic range for mammals. Humans generate heat primarily as a by-product of metabolism, without as much need for additional heat-generation mechanisms. In fact, human physiology is mostly aimed at heat dissipation. In contrast, the small size of the mouse means that it can transfer heat quickly and undergo rapid changes in T b , so mice require more heat-generation capacity to maintain their T b . Figure 2 shows the components of energy expenditure depending on T a [28][29][30][31]. In the mouse, total energy expenditure is the sum of the basal metabolic rate (BMR), physical activity, food thermogenesis and cold-induced thermogenesis [32]. At a given T a , over a third of the total energy expenditure is cold-induced thermogenesis, which is necessary to maintain T b . This amount of cold-induced thermogenesis, also called facultative or adaptive thermogenesis, is reduced by the availability of nesting material or by keeping mice grouped in cages so they can huddle. In contrast, in humans, cold-induced thermogenesis contributes a very small fraction of total energy expenditure [33].
Figure 2. (A) The total energy expenditure of the mouse can be divided into four components: basal metabolic rate, physical activity (green), the thermal effect of food (red) and cold-induced thermogenesis (blue). At room temperature (20-22 °C) more than one third of the total energy expenditure is cold-induced thermogenesis, which is required to maintain body temperature. (B) The energy used for cold-induced thermogenesis is mainly produced by skeletal muscle shivering and BAT thermogenesis.
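The following toy budget illustrates the decomposition described above for a mouse housed at room temperature. The numbers are purely illustrative assumptions, chosen only so that cold-induced thermogenesis makes up slightly more than one third of the total; they are not values from the cited studies.

```python
# Illustrative daily energy budget for a mouse at ~21 °C (assumed values).
components_kcal_per_day = {
    "basal_metabolic_rate": 7.0,
    "physical_activity": 2.0,
    "food_thermogenesis": 1.5,
    "cold_induced_thermogenesis": 6.0,   # largely absent at thermoneutrality (~30 °C)
}

total = sum(components_kcal_per_day.values())
cold_fraction = components_kcal_per_day["cold_induced_thermogenesis"] / total
print(f"total = {total:.1f} kcal/day, cold-induced share = {cold_fraction:.0%}")
```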
Thermal Physiology and Thermoneutrality Zone
Mammals use heat conservation and generation mechanisms to maintain thermal homeostasis, which is reflected in their constant internal temperature [20]. On exposure to a cold environment, several physiological and behavioral mechanisms of heat conservation are activated, such as vasoconstriction, piloerection, hunched posture (to reduce the exposed surface area) and huddling. When these heat-conserving adaptations prove insufficient for defense against the cold, mammals increase their energy expenditure to generate heat by involuntary muscle contractions (shivering thermogenesis) and by uncoupled respiration in brown adipocytes (non-shivering thermogenesis, also known as adaptive or facultative thermogenesis). The opposite occurs when mammals face environmental heat. In this case, there are adaptations such as vasodilation and increased passive heat loss, as well as panting, licking and sweating (in humans) to increase active heat loss through evaporative cooling.
Halfway between these metabolic adaptations to a cold environment and heat is the thermoneutral zone, which is defined as the nadir in BMR [8][9][10][20]. When the T a is within the thermoneutral zone, BMR generates enough heat to maintain a constant core temperature at 37-38 °C. For young C57BL/6J mice (~3 months), the thermoneutral zone is between 29-31 °C [6,8,10,11], which is similar to the thermoneutral zone of a naked human (~28 °C) [34][35][36]. However, the thermoneutral or comfort zone in dressed humans is around 20-22 °C, which is often the temperature of the animal facilities where the mice are housed. This colder T a keeps mice in significant thermal stress or controlled hypothermia, resulting in the activation of facultative thermogenesis in BAT to maintain thermal homeostasis. As a consequence, the BMR and food intake of mice housed at a T a of 20 °C are ~100% higher than those of mice housed at 30 °C. Both parameters increase by another ~100% when mice are housed at a T a of 4-5 °C [37]. As discussed in detail below, the chronic housing of mice under thermal stress conditions (T a of 20-22 °C) has profound effects on many physiological phenotypes and their intrinsic ability to adapt to environmental challenges.
Although the thermoneutral zone is considered a standard range, it is a highly variable parameter that differs between species. Previous studies showed that the thermoneutral zone of a particular mammal reflected its adaptations to its natural habitat [9].
In addition to these differences between species, many parameters can affect the range of the thermoneutral zone and the cold tolerance within a given species. For example, age (newborn and young mice have higher thermoneutral zones), muscle mass (basal metabolism and heat production are proportional to muscle mass), locomotor activity (exercise increases heat production and thereby lowers the thermoneutral zone), pregnancy (fetal metabolism increases heat production), lactation (milk production generates heat), and insulation (greater insulation reduces the increase in metabolic rate at lower temperatures) can dynamically modulate the thermoneutral zone and the susceptibility of the organism to a cold environment [9][10][11]. This variation in the thermoneutral zone explains the differences observed in the cold tolerance of some mutant animals [11], such as those lacking hair, skin or subdermal fat [38][39][40][41][42]. This evidence suggests that experimental determination of thermoneutrality is necessary to understand how genetic mutations in mice affect physiology and disease susceptibility.
Thermal Variations in the Housing of Experimental Mice
In the animal facility, mice can consume unlimited food to meet the energy requirements of adaptive thermogenesis. However, it is known that "control" mice (wild-type experimental mice fed ad libitum and without physical activity) become sedentary, obese and glucose intolerant, and the implications for data misinterpretation in human studies are known [43]. The researchers stated that lack of exercise and unlimited access to food are the factors that most influence the misinterpretation of results. Other studies indicate that cold ambient temperatures are an underlying cause of excessive intake and metabolic disorders [6,7,44]. A clearly defined, stable environment is essential to generate consistent experimental results that support both replication and valid interpretations of the data. As animal models and measurement techniques become increasingly precise, environmental influences become critical in experimental development. Current technology can detect subtle effects that may previously have been part of the experimental background.
There are many varieties of rodent cages (for example, open lid, closed lid, ventilated and unventilated) that may vary in size, bedding, enrichment devices and other attributes. Even the position of the cage on a shelf can influence the result of behavioral tests [7]. Another attribute that is rarely considered is the color of the cage. A few years ago, it was shown that this fundamental characteristic of the environment significantly influences circadian metabolic measures in rats [45,46]. The cage tint (transparent, amber, blue or red) causes significant variation in the peak levels and durations of melatonin during the dark phase and significant changes in the circadian timing of insulin peaks [46].
A fundamental characteristic of the rodent cage is the bedding. The properties of different types of rodent bedding can differentially influence the cage environment, the physiology and behavior of rodents and even the health of the animals [47][48][49][50][51].
T a is another critical feature of the rodent cage environment that is probably influenced by the type of cage system used. Some studies show how T a interacts with the cage system and possibly with tumor growth [12,52]. For example, one study evaluated BAT thermogenesis in nude and SCID mice that were individually housed at a T a of 21 °C in ventilated cages with or without shelter or in a static (non-ventilated) cage. The results showed that, independently of the strain, mice individually housed in ventilated cages without shelter had significantly higher BAT thermogenesis and higher adrenal weights than mice housed in static cages or in ventilated cages with shelter. In addition, when tumor cells were implanted, mice housed in static cages had greater tumor growth than mice under the other two conditions. The authors concluded that mice housed in ventilated cages without shelter experienced cold stress, which in turn interfered with tumor growth [12]. Another study reported that BALB/c and C57BL/6 mice housed 5 per cage at a T a of 22 °C had higher tumor growth than those maintained at 30 °C but did not detect a temperature effect on tumor growth in immunodeficient nude and SCID mice. The study also determined that the antitumor immune response was attenuated in immunocompetent mice maintained at 21 °C compared to those housed at 30 °C [52].
Another variable to consider is the density of mouse housing. For some studies, individual housing is preferred or necessary, while in other cases, rodents can be accommodated in groups that vary in number and density depending on the type of cage, the duration of the study, the purpose of the study and other factors. An important fact to consider is that not all mice housed in the same cage are necessarily identical, even if they are highly inbred. The differences between cage mates can be visually obvious in cages with domination hierarchies, which can occur in association with fights and overt injuries in some cage mates but not in others. Mice housed in groups may show greater phenotypic variation in some characteristics than mice housed individually from the same inbred strain [53].
Rodent housing density can directly affect the environmental conditions within the cage and, therefore, potentially alter the physiology and behavior of animals [54] and their stress levels [55,56]. An interesting study evaluated the effect of the number of mice in a cage on the environment inside it. Mice were housed in stable cohorts of one or five per cage, or in a test cage that initially contained five mice. One mouse was removed per week from the test cage until only one was left. Regardless of the room temperature (22 °C, 26 °C or 30 °C), cages containing five mice were generally approximately 1.5 °C warmer than cages with individual mice, and a population of approximately three mice was associated with a decrease in the temperature and dew point inside the cage [57]. These findings are particularly relevant for situations in which individual mice are removed from a cage for some reason (for example, death, fighting and experimental use) because the remaining mice will experience different environmental conditions that could influence the experimental results [57].
Social housing can also affect the physiology and behavior of animals [58][59][60]. For example, a study that evaluated sleep, temperature and activity compared these measures in mice initially housed as part of a trio, then individually and finally individually with access to a shelter [60]. The data showed that the modifications in housing significantly influenced both the sleep and activity of mice. When housed individually, mice showed less rapid eye movement sleep and more locomotive activity during the dark phase than when they were housed as part of a trio. When given a shelter, the same mouse spent more time in slow wave sleep and was less active during the dark phase.
Thus, researchers should keep in mind that eliminating mice during an experiment could affect metabolism, as well as many of the other temperature-sensitive biological and physiological responses that have been analyzed so far. This probably contributes to experimental variability between experiments and laboratories.
Neuronal Control of Body Temperature
Homeostatic control of T b is essential for the survival of mammals. It is well-established that T b is regulated partly by specific neuronal populations located in the hypothalamus [61]. This part of the brain works as a thermostat to maintain T b within a narrow range [62]. The most important regions of the hypothalamus involved in T b regulation are the preoptic area (POA) and the posterior hypothalamic area (Figure 3) [63]. They contain temperature-sensitive neurons that initiate neuronal responses for heat generation or heat dissipation. This means that the brain itself is an input for regulating homeostatic responses. These conclusions are based on results obtained from electrophysiological recordings of the POA, which revealed that local or environmental heat activates a subset of neurons referred to as "warm-sensitive" [8,64,65]. In addition to sensing local brain temperature, POA neurons receive thermal information from the periphery. It has been reported that three tissues provide an important input: the skin, spinal cord, and abdominal viscera [66]. The thermosensitivity of these tissues is due to sensory neurons that measure the temperature. Most of these neurons have cell bodies located in dorsal root ganglia (DRG), and their axons extend out to the target tissues [67]. Considerable progress has been made in elucidating the molecular basis of peripheral cold and warmth sensing. These studies have led to the identification of a number of ion channels activated by a wide spectrum of physical and chemical stimuli. Those activated by temperature belong to a superfamily of ion channels called transient receptor potential (TRP) channels [68,69]. Four TRP subtypes are activated by an increase in temperature and two TRP channels are activated by decreases in temperature [68]. For example, TRPM8 is an ion channel that admits Ca2+ and Na+ in response to moderate cold (10-25 °C), while several transient receptor potential vanilloid (TRPV) channels have been proposed to sense warmth, including TRPV1, TRPV3, TRPV4 and TRPV2 [70]. The mechanism by which temperature modulates TRP channels remains to be elucidated. Temperature information is sensed by these thermoreceptors in DRG neurons and is then transmitted to the dorsal horn of the spinal cord, where it is further processed before being sent to the brain (Figure 3). Elegant experiments have been carried out to elucidate the role of these thermosensitive neurons. Transgenic mice lacking TRPV1 in temperature-sensitive DRG neurons have reduced spinal neuron responses to heat [71]. Similarly, ablation of TRPM8+ DRG neurons reduced the number of spinal neurons activated by mild cold, but not by lower temperatures [72]. These results support the idea that spinal neurons synthesize information from many types of DRG neurons.
Dorsal horn neurons send glutamatergic projections to the brain that synapse in the lateral parabrachial nucleus (LPB) and the thalamus. Thermal information received in the thalamus is relayed upward to the somatosensory cortex and other cortical areas, where it mediates the discrimination of temperature (spinothalamocortical pathway) [73]. The ablation of this thermosensory pathway does not affect the autonomic response of T b regulation. However, injuring or silencing the LPB abolishes the autonomic responses to skin cooling and warming and the temperature preference in behavioral assays [74]. This result suggests that the spinothalamocortical pathway does not play a role in the thermal afferent pathway that evokes involuntary thermoregulatory responses to environmental challenges.
Ascending temperature information terminates in two anatomically distinct areas of the LPB: the external lateral and dorsal LPB (LPBel and LPBd). It has been demonstrated that warmth and cold activate cFOS expression in the LPBd and LPBel, respectively [75]. LPB neurons send glutamatergic projections to the midline POA, where GABAergic and glutamatergic interneurons in the median preoptic (MnPO) subnucleus are activated [76]. LPBel neurons activate GABAergic MnPO interneurons that inhibit the distinct population of warm-sensitive neurons in the medial preoptic (MPO) subnucleus that control cutaneous vasoconstriction, BAT and shivering. Thus, inhibition of neurons in the MPO increases core body temperature, shivering, metabolism and heart rate. In contrast, glutamatergic interneurons in the MnPO, which may be excited by glutamatergic inputs from warm-activated neurons in the LPBd, excite warm-sensitive neurons in the MPO [61,77]. Altogether, this thermoregulatory network is a sophisticated reflex that is necessary to maintain T b during an environmental temperature challenge (Figure 3).
Adaptive Thermogenesis in Brown Adipose Tissue
Small mammals have a tissue dedicated to heat generation, the BAT [78]. For a long time, it was known that BAT was present in small mammals such as rodents and neonatal humans. However, in the last decade it was discovered that active BAT is also found in adult humans [79].
In rodents, BAT is located mainly in the interscapular zone. In adult humans, it is found in the supraclavicular region and in the cervical, axillary, paravertebral and perirenal areas [80]. BAT is called "classic" to distinguish it from inducible or beige adipose tissue, which has unique molecular and developmental characteristics [81]. Beige adipocytes have the appearance of white adipose tissue (WAT) until the animal needs to generate more heat. After exposure to cold or other stimuli, this beige adipose tissue or inducible BAT is enriched in cells with the appearance and functional characteristics of classic BAT, in a process called browning. Although beige adipose tissue and BAT have different developmental origins and gene expression profiles [82], both are thermogenic. Thermogenic adipocytes can increase energy expenditure and generate heat by uncoupling oxidative metabolism from ATP production. This function is carried out by uncoupling protein 1 (UCP1), a proton transporter located in the inner mitochondrial membrane that uncouples fuel oxidation from ATP production to produce heat. Thus, the electrochemical gradient generated through the electron transport chain (ETC) is dissipated [83][84][85]. In brown adipocytes, the high mitochondrial content and the rich vascular and nervous supply facilitate thermogenesis activated by the sympathetic nervous system. The nerve terminals act on adrenergic receptors to promote thermogenesis in BAT. It has been shown that cold enhances sympathetic signaling and that chronic exposure to cold triggers the expansion and activation of BAT [86], resulting in adaptive thermogenesis.
Adaptive thermogenesis is a mechanism of metabolic heat production that involves stimulation of the sympathetic nervous system to release norepinephrine (NE) and epinephrine, resulting in the increased metabolic activity necessary for heat generation in BAT [10,79]. Previous studies have shown that heat production by adaptive thermogenesis in mice can triple that of basal metabolism, and it is what increases the most in other animal models [87,88].
Obesity is an important risk factor for type 2 diabetes and cardiovascular disease. Importantly, BAT has been shown to promote HDL turnover and reverse cholesterol transport [89]. The high metabolic activity of thermogenic adipocytes confers atheroprotective properties through increased systemic cholesterol flow through the HDL compartment.
The thermogenic function of BAT requires an adaptive increase in proteasomal activity to ensure the quality control of cellular proteins. It has been shown that ER-localized transcription factor nuclear factor erythroid-2, like-1 (Nfe2l1 protein, also known as Nrf1) is an important mediator of brown adipocyte function, providing a greater proteometabolic quality control to adapt to cold or obesity [90]. It has been described that obesity might affect BAT's proteasomal activity [90]. A recent epigenomic study associated an altered methylation pattern of the human NFE2L1 locus with BMI [91]. However, the molecular mechanism implicated in how this epigenetic variant could affect Nrf1 and proteasome activity is still unknown.
Therapeutic Efficacy of Adaptive Thermogenesis in Obesity
In recent decades, BAT has been extensively investigated for its potential therapeutic role in obesity and T2D. Previous studies showed that excessive caloric intake could stimulate the expansion of BAT and an increase in thermogenesis as an adaptive measure to maintain body weight. This mechanism of diet-induced thermogenesis is mediated by BAT and UCP1 [92]. In fact, in the absence of UCP1, mice are prone to obesity. Initial studies, in which brown adipocytes were genetically ablated with a toxin driven by the UCP1 promoter [93], demonstrated for the first time the protective effect of BAT against obesity and T2D. Importantly, these protective effects were observed in mice raised at room temperature (thermal stress with a T a of 20-22 °C). Subsequent investigations under these T a conditions showed that Ucp1 −/− mice were very susceptible to hypothermia, despite recurrent shivering, but demonstrated neither the expected role of UCP1 in thermogenesis nor a propensity to develop obesity [94,95]. However, when these mice were kept at thermoneutrality, they showed greater metabolic efficiency, which resulted in increased adiposity and the development of obesity [96]. This is explained because UCP1 knockout mice are more susceptible to hypothermia, which directly affects most of the systemic effects of energy metabolism [93].
The preferred fuel source in BAT is lipids, but glucose is also used. Therefore, approaches that activate BAT and reduce glucose and lipid levels through adaptive thermogenesis could be potential therapies against obesity [83]. The best way to model human energy physiology in mice is under thermoneutral (30 °C) conditions. In this situation, cold-induced thermogenesis is minimal and does not influence total energy expenditure [97]. The remaining components would then account for total energy expenditure in proportions similar to those of a sedentary human: BMR (70%), food thermogenesis (10%) and energy expenditure from physical activity (20%) [7]. Because energy expenditure decreases by approximately 50% in mice at thermoneutrality [98], the metabolic phenotype in obesity, including adiposity, is highly dependent on the T a . An example of this implication is shown in a study of thyroid hormone metabolism [99]. In humans, hyperthyroidism is associated with a hypermetabolic state, characterized by heat intolerance and fat loss, while hypothyroidism decreases energy expenditure and promotes cold intolerance and obesity. Interestingly, the authors showed that, unlike humans with hypothyroidism, mice lacking type 2 deiodinase, a key enzyme in the conversion of thyroid hormone, did not develop metabolic dysfunction when housed at 22 °C. However, when these animals were maintained at thermoneutrality (30 °C), there was an increase in adiposity, hepatic steatosis and glucose intolerance [99]. Therefore, they concluded that housing mice at a T a of 22 °C resulted in increased adrenergic activity in BAT, which compensated for the loss of type 2 deiodinase activity and T3 production. These findings suggest that chronic housing of mice under conditions of thermal stress can mask genetic functions involved in energy balance and metabolic homeostasis, triggering a change in the metabolic phenotype.
Activating BAT to Treat Obesity
Obesity is a chronic metabolic disorder characterized by ectopic fat deposition and a state of chronic low-grade inflammation. It is associated with higher free fatty acids, glucose and insulin levels. Adipose tissues, both WAT and BAT, are highly affected during obesity. These alterations include adipocyte hyperplasia and hypertrophy [100], endoplasmic reticulum stress [101], oxidative stress [102], fibrosis [100] and mitochondrial dysfunction [103], among others. Obesity is associated with severe disorders such as cardiovascular disease, dyslipidemia, T2D or even some forms of cancer. BAT was initially recognized for its ability to protect animals from hypothermia [104]. However in the last decade, the discovery that BAT is active in adult humans and that it is reduced in several conditions such as obesity, T2D and aging has triggered leading research in the BAT field to improve lipid and glucose homeostasis in the fight against obesity [105][106][107][108][109].
Exercise is another inducer of BAT thermogenesis [118]. Interestingly, a positive correlation has been demonstrated between exercise and increased browning in subcutaneous WAT [119]. The lactate produced in muscles after exercise or after cold exposure leads to an increase in UCP1 levels in the adipose tissue [120]. Recently, Takahashi et al. demonstrated that subcutaneous WAT-derived TGF-β2 secreted after exercise, or its administration, improved glucose homeostasis and insulin sensitivity by increasing fatty acid oxidation [121]. Furthermore, some myokines related to BAT activation, such as irisin [122] or β-aminoisobutyric acid, have been shown to decrease weight gain and improve glucose tolerance in mice [123].
As mentioned above, BAT is innervated by the sympathetic nervous system and controlled by adrenergic inductors such as norepinephrine. Thus, huge efforts have been made to find novel β-adrenergic agonists that can potentiate BAT activity and enhance thermogenesis. Some examples are Cl-316,243 [124] or mirabegron [125]. The latter was initially clinically used for overactive bladder but was also found to activate BAT in rats and humans [126]. Many other studies have focused on increasing the BAT mass, i.e., the differentiation of brown adipocytes or WAT browning. One study centered on fibroblast growth factor-21 (FGF21) [127], which is mainly secreted by the liver and associated with BAT activity. In humans higher FGF21 levels have been found in serum after cold exposure [128]. Another important thermogenic coactivator is PGC1-α, which is involved in mitochondrial biogenesis and thermogenesis [129]. Its activity is related to an increase in other transcriptional factors involved in brown differentiation such as PRDM16 or the PPARs family [130]. Other factors have also been studied to enhance thermogenesis in obesity such as the bone morphogenetic proteins (BMPs) family, for example BMP7 [131] or BMP8b [132]. Interestingly, although BMP4 improves the obese phenotype, it has a tissue-dependent dual effect: it increases browning in subcutaneous WAT; and it increases the number of lipid droplets and decreases BAT UCP1 expression [133]. Another approach to activate thermogenesis has been based on PPAR-γ agonists. Some studies showed increased UCP1 levels after treatment with rosiglitazone [134,135]. Other secreted peptides or hormones have been reported to activate BAT: norepinephrine [136], natriuretic peptides [137], meteorin-like [138], bile acids, adenosine [139] or activin E [140,141].
In recent years, BAT-derived adipokines, commonly called batokines, have generated considerable interest among the scientific community for their anti-obesity potential [142]. In 2018, Deshmukh et al. found a batokine called EPDR1 involved in BAT activation [143]. Another recent and elegant study has discovered a new chemokine called CXCL14 that is secreted by BAT and induces browning of WAT via immune cell activation [144].
Finally, alternative therapies are being studied to increase BAT mass and thermogenesis. These include the direct transplantation of this tissue or differentiated beige cells from preadipocytes, mesenchymal stem cells (MSCs) or induced pluripotent stem cells (iPSC) [145]. In the future, more personalized therapies could focus on the intrinsic study of the genome to identify other BAT activators such as miRNAs to combat obesity [146].
Role of Thermoneutrality in Obesity and Metabolic Studies: Chronic Cold vs. Thermoneutrality
It is known that an increase in metabolic heat production has physiological effects. In fact, for every 1 °C that the T a drops, approximately 46.3 kcal/m²/24 h are required to maintain the core temperature of the mouse [13]. This increase in metabolic heat production has many physiological effects. For example, energy expenditure is approximately 50% lower in mice living at thermoneutrality than in mice living under chronic cold conditions [98]. Therefore, we can deduce that the metabolic phenotypes of obesity and adiposity are highly dependent on the T a at which mice are housed (Figure 4). A rough illustration of the magnitude of this extra heat requirement is given in the sketch below.
Figure 4. Mouse models used to study metabolic diseases are influenced by environmental temperature. Mice show differences in the metabolic phenotype when housed at a standard temperature (21 °C) vs. thermoneutrality (30 °C). Mice at standard temperatures are subjected to chronic cold. This triggers controlled hypothermia where energy expenditure is affected by changes in physiology (food intake, BAT and WAT physiology and an increase in adaptive thermogenesis) and in metabolism (basal metabolism, adaptive thermogenesis, diet efficiency, insulin secretion, adipose tissue physiology, inflammation at adipocyte and vascular levels, and the effect of drugs and therapies against obesity).
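The back-of-the-envelope calculation below applies the ~46.3 kcal/m²/24 h per 1 °C figure cited above to a mouse housed at 21 °C instead of 30 °C. The body mass and the Meeh-type surface-area constant are illustrative assumptions for the sketch, not values taken from the cited studies.

```python
# Extra daily heat requirement for a mouse housed below thermoneutrality.
KCAL_PER_M2_PER_DAY_PER_DEGC = 46.3      # figure cited in the text [13]

mouse_mass_g = 25.0                      # assumed body mass
meeh_constant = 10.0                     # assumed; A(cm^2) ≈ k * mass(g)^(2/3)
area_m2 = meeh_constant * mouse_mass_g ** (2 / 3) / 1e4   # ~0.0086 m^2

thermoneutral_c = 30.0
housing_c = 21.0
extra_kcal_per_day = (
    KCAL_PER_M2_PER_DAY_PER_DEGC * (thermoneutral_c - housing_c) * area_m2
)
print(f"≈{extra_kcal_per_day:.1f} kcal/day extra just to defend core temperature")
# ≈3.6 kcal/day, a substantial fraction of a mouse's total daily energy budget.
```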
Some studies have shown that the lack of efficient induction of the obese phenotype by high fat diet (HFD) is due to the fact that mice were housed in conditions below their thermoneutrality [57][58][59][60]. This effect of low temperatures on obesity has been observed in other animal models [14,78,81], demonstrating the correlation between low temperature and low effectiveness of the diet, as well as low levels of insulin and glucose and an altered response in glucose tolerance tests and energy homeostasis [10,44].
Other researchers found differences in metabolic inflammation [84].
Mice housed at thermoneutrality vs. chronic cold had greater metabolic inflammation. This was correlated with higher inflammation in the WAT and at vascular level, promoting atherosclerosis. The authors thus claim that thermal stress could limit our ability to faithfully model human diseases in mice [84].
The exposure to chronic cold vs. thermoneutrality has also been studied at the pharmacological level in obesity. The chemical protonophore 2,4-dinitrophenol (DNP) was once used for weight loss in humans; however, its use was discontinued due to toxicity [45,46]. DNP generates heat by uncoupling mitochondria. Thus, its anti-obesity mechanism is at least partly due to an increase in energy expenditure, without a direct effect on food intake. At 22 °C, chronic treatment of mice with a low dose of DNP had no effect on the phenotype, and the only change observed was that BAT became less active [47]. This indicates that adaptive cold-induced thermogenesis was reduced by roughly the amount of heat generated by DNP-mediated uncoupling [147]. When the same experiment was performed at thermoneutrality, the same dose of DNP increased energy expenditure, reduced body weight, reduced adiposity and improved glucose tolerance [47]. Although DNP has been discontinued for the treatment of obesity [46], it could serve as a model for the effects of systemic uncoupling agents, which show better efficacy at thermoneutrality than at colder temperatures.
Another comparison of pharmacological effects under chronic cold vs. thermoneutrality involves the β3-adrenergic agonist Cl-316,243. The main effects of β3 agonists are direct stimulation of BAT thermogenesis and WAT lipolysis [52]. When these experiments were performed at 22 °C, the pharmacological treatment showed no effect on body weight or adiposity; energy expenditure increased but was compensated by an increase in food intake [52]. At thermoneutrality, Cl-316,243 increased energy expenditure and also reduced body weight and adiposity.
Altogether, it is clear that the T a used in experiments with mice is critical and might directly influence the efficiency and clinical translatability of pharmacological, metabolic and energy homeostasis studies.
Conclusions and Future Perspectives
The mouse is the predominant model for studying human diseases. However, many studies fail to deliver mechanistic information about human physiology, a failure that becomes apparent when preclinical findings in mice are translated into human therapy. Although we are aware that mice and humans are two different species, we do not always consider all the external variables of the environment, which could influence the physiological adaptation of the study model and hinder the reproducibility of preclinical investigations.
The thermoregulatory network triggered by the hypothalamus is a necessary reflex to maintain T b during variations in T a . Therefore, given the influence that T a exerts on the physiological and pathophysiological responses of the mouse, this study variable should be considered for more correct, efficient translation into human therapy.
The epidemic of obesity and metabolic diseases is increasing exponentially, and current therapies remain inefficient. BAT has become an attractive potential target to treat obesity due to its thermogenic capacity. However, environmental variables such as temperature could directly influence both BAT activity and the dynamics of energy expenditure in the model under study.
For future metabolic studies, it would be important to consider all the variables that may influence the experimental outcome regarding obesity and insulin resistance. For example, an important point is the selective breeding of mouse models. Relevant strain-dependent differences in metabolic activity have been found [148][149][150]. However, the correlation between strain variables and thermoneutrality in studies of obesity and associated diseases is still unknown.
Currently, most metabolic studies are carried out at a room temperature of 21 °C, which is considered a thermoneutral zone for adult humans. However, mice subjected to the same temperature experience chronic cold. The cold triggers controlled hypothermia, in which energy expenditure is affected by the increase in adaptive thermogenesis, by the activation of BAT and by shivering due to involuntary muscle contractions (Figure 4). Further studies in mice at thermoneutrality will deepen our understanding of the underlying physiological mechanisms, which could increase the success of translation into human treatments for obesity and metabolic diseases.
| 9,396.8 | 2020-01-28T00:00:00.000 | [
"Biology"
] |
Downlink Channel Estimation in Cellular Systems with Antenna Arrays at Base Stations Using Channel Probing with Feedback
In mobile communication systems with multisensor antennas at base stations, downlink channel estimation plays a key role because accurate channel estimates are needed for transmit beamforming. One efficient approach to this problem is channel probing with feedback. In this method, the base station array transmits probing (training) signals. The channel is then estimated from feedback reports provided by the users. This paper studies the performance of the channel probing method with feedback using a multisensor base station antenna array and single-sensor users. The least squares (LS), linear minimum mean square error (LMMSE), and a new scaled LS (SLS) approaches to the channel estimation are studied. Optimal choice of probing signals is investigated for each of these techniques and their channel estimation performances are analyzed. In the case of multiple LS channel estimates, the best linear unbiased estimation (BLUE) scheme for their linear combining is developed and studied.
INTRODUCTION
In recent years, transmit beamforming has been a topic of growing interest [1,2,3,4,5]. The aim of transmit beamforming is to send desired information signals from the base station array to each user and, at the same time, to minimize undesired crosstalk, that is, to satisfy a certain quality of service constraint for each user. This task becomes very complicated if the transmitter does not have precise knowledge of the downlink channel information for each user. Therefore, the beamforming performance depends critically on the quality of the channel estimates, and accurate downlink channel estimation plays a key role in transmit beamforming [6,7,8,9]. One of the most popular approaches to downlink channel estimation is channel probing with user feedback [1,2]. This approach probes the downlink channel by transmitting training signals from the base station to each user and then estimates the channel from feedback reports provided by the users.
In this paper, we study the performance of channel probing with feedback in the case of a multisensor base station antenna array and single-sensor users [2]. We develop three channel estimators which offer different tradeoffs in terms of performance and a priori required knowledge of the channel statistical parameters. First of all, the traditional least squares (LS) method is considered which does not require any knowledge of the channel parameters. Then, a refined version of the LS estimator is proposed (which is referred to as the scaled LS (SLS) estimator). The SLS estimator offers a substantially improved performance relative to the LS method but requires that the trace of the channel covariance matrix and the receiver noise powers be known a priori. Finally, the linear minimum mean square error (LMMSE) channel estimator is developed and studied. The latter technique is able to outperform both the LS and SLS estimators, but it requires the full a priori knowledge of the channel covariance matrix and the receiver noise powers. For each of the aforementioned techniques, the optimal choices of probing signal matrices for downlink channel measurement are studied and channel estimation errors are analyzed. Moreover, in the case of multiple LS channel estimates, an optimal scheme for their linear combining is proposed using the so-called best linear unbiased estimation (BLUE) approach. The effect of such a combining on the performance of downlink channel estimation is investigated.
BACKGROUND
We assume a base station array of L sensors and arbitrary geometry and consider the case of flat block fading 1 [2]. In this case, the signal received by the ith mobile user can be expressed as follows: where s(k) is the transmitted signal, w is the L × 1 downlink weight vector, h i is the L × 1 vector which describes an unknown complex vector channel from the array to the ith user, n i (k) is the user zero-mean white noise, and (·) H stands for the Hermitian transpose.
In order to measure the vector channel for each user, the method of [2] suggests using the so-called probing mode to transmit N ≥ L training signals s(1), . . . , s(N) from the base station antenna array using the beamforming weight vectors w_1, . . . , w_N, respectively. The received signals at the ith mobile can be expressed as in (2), where n_i = [n_i(1), . . . , n_i(N)]^T, and (·)* and (·)^T stand for the complex conjugate and the transpose, respectively. Then, each receiver (mobile user) employs the information mode to feed the data received in the probing mode back to the base station, where these data are used to estimate the downlink vector channels. Alternatively (to decrease the amount of feedback bits), channel estimation can be done directly at each receiver. In the latter case, receivers feed the corresponding channel estimates back to the base station.
LS CHANNEL ESTIMATION
Knowing r_i, the downlink vector channel between the base station and the ith user can be estimated using the LS approach as ĥ_i = W^† r_i (4) [2], where W^† = (WW^H)^{-1}W is the pseudoinverse of W^H. Assume that the transmitted power in the probing mode is constrained as tr{WW^H} = P (5), where P is a given power constant. We seek the W which minimizes the channel estimation error for the ith user subject to the transmitted power constraint (5). This is equivalent to the optimization problem (6), where E{·} is the statistical expectation. (The flat fading assumption made above is valid for narrowband communication systems.) Using (2) and (4), we have that h_i − ĥ_i = W^† n_i and, hence, the objective function in (6) can be rewritten as in (7), where we use the fact that E{n_i n_i^H} = σ_i² I. Here, σ_i² is the noise power of the ith user, I is the identity matrix, and tr{·} denotes the trace of a matrix.
Using (7), the optimization problem (6) can be equivalently rewritten in the form (8). We obtain the solution to this problem using the Lagrange multiplier method, that is, via minimizing the function L(W, λ) of (9), where λ is the Lagrange multiplier.
To compute ∂L(W, λ)/∂W H , the following lemma will be useful. Lemma 1. If a square matrix F is a function of another square matrix G = A + BX + X H CX, then the following chain rule is valid: where A, B, and C are constant matrices and the dimensions of all the matrices in (10) are assumed to match.
Proof. See Appendix A.
Furthermore, the following expressions for the matrix derivatives of traces will be used [10]: Applying (11) and (12) to (13), we can transform the latter equation as Using (14) and applying (11) to compute ∂ tr{WW H }/∂W H in the second term of (9), we have that Setting (15) to zero, we obtain that any probing matrix is the optimal one if it satisfies the equation Since WW H is Hermitian and positive definite, we can write its eigendecomposition as where Γ is a diagonal matrix with positive eigenvalues on the main diagonal. Using the positiveness of the eigenvalues of WW H and taking into account that Q is a unitary matrix (Q H Q = QQ H = I), we have from (16) that and, therefore, Inserting (19) into (17) and using the identity QQ H = I, we obtain that W is an optimal probing matrix if Using the power constraint (5), we can rewrite (20) as Therefore, any probing matrix with orthogonal rows of the same norm √ P/L is an optimal one. Note that the similar fact has been earlier discovered from different points of view in [11,12]. With such optimal probing, the LS estimator reduces to the simple decorrelator-type estimator.
According to (21), there is an infinite number of choices of the optimal probing matrix. It is also worth noting that each optimal choice of W is user independent. Therefore, any probing matrix that satisfies (21) is optimal for all users.
It should be stressed that additional constraints on W may be dictated by particular implementation issues. For example, the peak transmitted power per antenna may be limited. In this case, we have to distribute the transmitted power uniformly over the antennas and, therefore, the additional constraint is that all the elements of the optimal probing matrix should have the same magnitude. To satisfy this constraint, a properly normalized submatrix of the DFT matrix can be used, as in (22), where W_N = e^{j2π/N}. Using (21) along with (7), we obtain that the minimum downlink channel mean-square estimation error becomes σ_i² L²/P (23). We stress that the error in (23) is proportional to the square of the number of transmit antennas, and this may lead to a certain restriction on the dimension of the transmit array. However, one can compensate for this effect by increasing the total transmitted power in the probing mode.
Another interesting observation is that the error in (23) is independent of the channel realization h i and the array geometry.
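To make the optimal-probing result concrete, the following sketch simulates LS estimation using a scaled DFT submatrix as probing matrix and checks the simulated error against the σ²L²/P value of (23). The probing model r_i = W^H h_i + n_i is assumed here, consistent with the pseudoinverse relations quoted above; the power, noise, and array sizes are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

L, N = 4, 4          # transmit antennas, probing slots (N >= L)
P = 10.0             # total probing power, tr{W W^H} = P
sigma2 = 0.1         # receiver noise power
runs = 20000

# Optimal probing: rows of a DFT matrix scaled so that W W^H = (P/L) I
F = np.exp(2j * np.pi * np.outer(np.arange(L), np.arange(N)) / N)
W = np.sqrt(P / (L * N)) * F                 # L x N, tr{W W^H} = P

Wdag = np.linalg.inv(W @ W.conj().T) @ W     # pseudoinverse of W^H

mnse = 0.0
for _ in range(runs):
    h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)   # unit-variance channel
    n = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    r = W.conj().T @ h + n                   # observations fed back by the user
    h_ls = Wdag @ r                          # LS channel estimate
    mnse += np.sum(np.abs(h - h_ls) ** 2)

print("simulated MNSE          :", mnse / runs)
print("predicted sigma^2 L^2 / P:", L**2 * sigma2 / P)
```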
SCALED LS CHANNEL ESTIMATION
Obviously, the LS estimate (4) does not necessarily minimize the channel estimation error because its objective is to minimize the signal estimation error rather than the channel estimation error. Therefore, it may be possible to use an additional scaling factor γ to further reduce this error. Using this idea, applying (2) and (4), and dropping the user index i for the sake of simplicity, we can write the channel estimation error in the form (24), where ĥ_LS is the LS channel estimate of (4), R_h = E{hh^H} is the channel correlation matrix, and J_LS is given by (7). Clearly, (24) is minimized with the optimal scaling factor of (25), and the minimum of (24) with respect to γ is given by (26). Note that the optimal γ in (25) is a function of the trace of the channel correlation matrix R_h and the noise variance σ². Therefore, these values have to be known (or preliminarily estimated) when using the SLS approach. In practice, the estimate of tr{R_h} in (27) can be used in (25) in lieu of tr{R_h}. Assuming that the values of tr{R_h} and σ² are given in advance, defining the SLS channel estimate as ĥ_SLS, and using (4) and (25), we obtain the estimator (28). The optimal probing matrix for channel estimation using the SLS method can be found by solving the optimization problem (30). Since tr{R_h} > 0, we see from (26) that J_SLS is a monotonically increasing function of J_LS. Note that tr{R_h} is not a function of W and, therefore, J_LS is the only term in (26) which depends on W. This means that the optimization problems (6) and (30) are equivalent. Therefore, the optimal choice of probing matrix for the SLS channel estimation technique is the same as for the LS approach.
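Since the explicit form of the optimal scaling is not reproduced in this extract, the sketch below assumes the value that follows from minimising (1 − γ)² tr{R_h} + γ² J_LS over γ, namely γ = tr{R_h}/(tr{R_h} + J_LS), and checks the resulting SLS error against tr{R_h} J_LS/(tr{R_h} + J_LS) for an i.i.d. unit-variance channel with optimal probing. Treat it as an illustration of the mechanism rather than the paper's exact equations (25)-(26).

```python
import numpy as np

rng = np.random.default_rng(1)

L, P, sigma2, runs = 4, 10.0, 1.0, 20000
J_ls = sigma2 * L**2 / P            # LS error with optimal probing (eq. (23))
tr_Rh = L                           # unit-variance i.i.d. channel => tr{R_h} = L

# candidate optimal scaling (sketch): gamma = tr{R_h} / (tr{R_h} + J_LS)
gamma = tr_Rh / (tr_Rh + J_ls)

err_ls, err_sls = 0.0, 0.0
for _ in range(runs):
    h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
    e = np.sqrt(J_ls / (2 * L)) * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
    h_ls = h + e                    # LS estimate = true channel + estimation error
    h_sls = gamma * h_ls            # scaled LS estimate
    err_ls += np.sum(np.abs(h - h_ls) ** 2)
    err_sls += np.sum(np.abs(h - h_sls) ** 2)

print("LS  MNSE:", err_ls / runs)                       # ~ J_LS
print("SLS MNSE:", err_sls / runs)                      # ~ tr{R_h} J_LS / (tr{R_h} + J_LS)
print("predicted SLS MNSE:", tr_Rh * J_ls / (tr_Rh + J_ls))
```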
LMMSE CHANNEL ESTIMATION
In this section, we consider the LMMSE estimator of h, which is given by (31) [13]. The performance of this estimator is characterized by the error e = h − ĥ_LMMSE, whose mean is zero and whose covariance matrix is given by (32) [13]. The LMMSE estimation error is given by (33). To minimize (33) subject to the transmitted power constraint tr{WW^H} = P, we can use the Lagrange multiplier method; the problem can be written as in (34). Using the chain rule (10), it can be readily shown that the optimal probing must satisfy (35). Using the constraint tr{WW^H} = P, (35) can be rewritten as (36). Interestingly, in the high signal-to-noise ratio (SNR) case (σ² → 0), (36) transforms to (21). Therefore, in the high SNR domain, the LS, SLS, and LMMSE approaches all have the same condition on optimal probing matrices. Using (36), we obtain (37) in the optimal probing case and, therefore, (38). If the channel coefficients are all i.i.d. random variables, we have R_h = ξ²I, where ξ² can be viewed as the channel attenuation parameter. In this case, (36) transforms to (21) and, therefore, the optimal probing matrix for the LS estimator is also optimal for the LMMSE estimator. Furthermore, in such a situation, the minimum of the channel estimation error is given by (39). Interestingly, if R_h = ξ²I, then (26) and (39) are identical, which means that the performances of the SLS and LMMSE estimators are similar in this case.
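As a numerical illustration, the sketch below implements the textbook LMMSE filter for the model r = W^H h + n, i.e. ĥ_LMMSE = R_h W (W^H R_h W + σ² I)^{-1} r; this standard form is assumed here since eq. (31) is not reproduced in this extract. The exponential correlation model used for R_h is also an assumption, chosen to mimic the correlated-channel setting of the numerical examples.

```python
import numpy as np

rng = np.random.default_rng(2)

L, N, P, sigma2, runs = 4, 4, 10.0, 1.0, 20000
r_corr = 0.25

# assumed exponentially correlated channel covariance: [R_h]_{n,m} = r^{|n-m|}
idx = np.arange(L)
Rh = (r_corr ** np.abs(idx[:, None] - idx[None, :])).astype(complex)

# LS-optimal (orthogonal) probing, reused here for simplicity
F = np.exp(2j * np.pi * np.outer(np.arange(L), np.arange(N)) / N)
W = np.sqrt(P / (L * N)) * F

A = Rh @ W @ np.linalg.inv(W.conj().T @ Rh @ W + sigma2 * np.eye(N))  # LMMSE filter
C_err = Rh - A @ W.conj().T @ Rh                                      # error covariance

Lchol = np.linalg.cholesky(Rh)
mnse = 0.0
for _ in range(runs):
    h = Lchol @ (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
    n = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    h_lmmse = A @ (W.conj().T @ h + n)
    mnse += np.sum(np.abs(h - h_lmmse) ** 2)

print("simulated LMMSE MNSE   :", mnse / runs)
print("trace of error covariance:", np.real(np.trace(C_err)))
```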
COMBINING OF MULTIPLE LS CHANNEL ESTIMATES
In Sections 3, 4, and 5, the specific case of a single channel estimate has been considered. In this section, we extend the optimal probing approach to the case of multiple LS channel estimates. If there are multiple probing periods available within the channel coherency time, it may be inefficient from the computational and buffering viewpoints to dynamically store and process the long records of data that are formed by accumulation of multiple received data blocks corresponding to different probing periods. A good alternative here is to obtain a particular channel estimate for each probing period, to store these estimates dynamically rather than storing the data itself, and to compute the final channel estimate based on a proper combination of such (previously obtained) particular estimates. Let us have K estimates ĥ_{i,1}, . . . , ĥ_{i,K} of the downlink channel corresponding to the ith user. Let each estimate be computed using (4) based on some probing matrices W_1, . . . , W_K, respectively. The channel is assumed to be quasistatic (fixed) over the interval of the K probings, and P_k = ||W_k||²_F is the transmitted power during the kth probing.
We aim to improve the performance of downlink channel estimation by combining the estimated valuesĥ i,k for k = 1, . . . , K in a linear way as follows: where α i,k are unknown weighting coefficients.
Let us obtain the optimal weighting coefficients by means of minimizing the error in (40). Then, these coefficients can be found by solving the following optimization problem: where the constraint in (41) guarantees the unbiasedness of the final channel estimate. This problem corresponds to the so-called BLUE estimator [13].
The solution to (41) is given by the following lemma.
Lemma 2. The optimal weights {α_{i,k}}_{k=1}^{K} for the ith user are given by (42). It is worth noting that the optimal weighting coefficients α_{i,k} are user independent (i.e., they are the same for each user).
Choosing optimal orthogonal probing matrices in each probing, we have (43), where P_tot is the total transmitted power during the K probings. Inserting (43) into (42), we obtain that, in the case of optimal orthogonal probing matrices, the expression for the optimal weighting coefficients can be simplified. In this case, the downlink channel estimation error is given by (46), where n_{i,k} is the zero-mean white noise vector of the ith user in the kth probing. When deriving (46), we have used the property E{n_{i,k} n_{i,l}^H} = σ_i² δ_{k,l} I, where δ_{k,l} is the Kronecker delta.
We observe that, similar to (23), the error in (46) is independent of the channel realization and the array geometry. Comparing (46) with (23), we see that the optimal linear combining of multiple estimates reduces the estimation error by a factor of P_tot/P. For example, if each probing has the same power (P_k = P, k = 1, . . . , K), then P_tot = KP and the estimation error is reduced by a factor of K.
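A small sketch of the combining step: each LS estimate obtained with optimal orthogonal probing is modelled as ĥ_k = h + e_k with E{||e_k||²} = σ²L²/P_k, and the weights α_k = P_k/P_tot are assumed as the simplified optimal choice implied above; the combined error should then approach σ²L²/P_tot. The per-probing powers are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(3)

L, sigma2, runs = 4, 1.0, 20000
P_list = np.array([5.0, 10.0, 20.0])     # per-probing powers P_k (example values)
P_tot = P_list.sum()

# assumed simplified BLUE weights for optimal orthogonal probing: alpha_k = P_k / P_tot
alpha = P_list / P_tot

mnse_blue, mnse_single = 0.0, 0.0
for _ in range(runs):
    h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
    # each LS estimate with optimal probing: h_k = h + e_k, E{||e_k||^2} = sigma^2 L^2 / P_k
    ests = [h + np.sqrt(sigma2 * L / (2 * Pk)) *
            (rng.standard_normal(L) + 1j * rng.standard_normal(L)) for Pk in P_list]
    h_blue = sum(a * e for a, e in zip(alpha, ests))
    mnse_blue += np.sum(np.abs(h - h_blue) ** 2)
    mnse_single += np.sum(np.abs(h - ests[-1]) ** 2)   # best single estimate, for comparison

print("BLUE MNSE :", mnse_blue / runs, " predicted:", sigma2 * L**2 / P_tot)
print("single-estimate MNSE (P_k = 20):", mnse_single / runs)
```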
NUMERICAL EXAMPLES
In our simulations, we compare the performance of the LS, SLS, and LMMSE channel estimators in the cases of optimal and nonoptimal choices of probing matrices. Throughout all our simulation examples, we assume that N = L. The channel coefficients and the receiver noise are assumed to be circular complex Gaussian random variables with unit variance.
We assume that the base station has a uniform linear array and that the downlink channel correlation matrix R_h has the structure [R_h]_{n,m} = r^{|n−m|}, where n and m are the indices of the array sensors. This model of the array covariance is frequently used in the literature; see [14,15,16] and references therein. The elements of the L × L probing matrices W in the case of nonoptimal probing have been drawn independently from a zero-mean complex Gaussian random generator in each simulation run. However, to avoid possible computational inaccuracy of the LS estimator, we have ignored all probing matrices that resulted in a condition number of WW^H greater than 10^4. Each simulated point is obtained by averaging 5000 independent simulation runs.
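The following helper sketch reproduces this simulation setup: channels drawn from the correlated Gaussian model and nonoptimal Gaussian probing matrices rejected when cond(WW^H) exceeds 10^4. The exponential form of R_h matches the structure stated above, while the power normalisation of the random probing matrices is an added assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

L = 4
r = 0.5

# [R_h]_{n,m} = r^{|n-m|}
idx = np.arange(L)
Rh = (r ** np.abs(idx[:, None] - idx[None, :])).astype(complex)
Lchol = np.linalg.cholesky(Rh)

def correlated_channel():
    """Draw one channel realization h ~ CN(0, R_h)."""
    g = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
    return Lchol @ g

def random_probing(P, max_cond=1e4):
    """Draw a nonoptimal L x L Gaussian probing matrix, rejecting
    ill-conditioned draws as done in the numerical examples."""
    while True:
        W = (rng.standard_normal((L, L)) + 1j * rng.standard_normal((L, L))) / np.sqrt(2)
        W *= np.sqrt(P / np.trace(W @ W.conj().T).real)   # enforce tr{W W^H} = P (assumed)
        if np.linalg.cond(W @ W.conj().T) < max_cond:
            return W

print(correlated_channel())
print(np.round(random_probing(P=10.0), 2))
```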
In Figure 1, we display the mean of the norm squared of the channel estimation error (MNSE) of the LS channel estimator in the optimal and nonoptimal probing matrix cases versus P/σ². Note that the performance of the LS estimator is independent of the parameter r. The parameter L is varied in this figure.
In Figure 2, the performance of the SLS estimator is tested under similar conditions. As with the LS method, the performance of the SLS estimator is independent of the parameter r. In Figures 3 and 4, which show the performance of the LMMSE estimator, the channel covariance matrix R_h is assumed to be known exactly; other conditions are similar to those of Figures 1 and 2.
From Figures 1, 2, 3, and 4, it can be seen that the optimal probing improves the quality of channel estimation substantially for all methods. Note that this improvement is especially pronounced for large values of P/σ 2 if the SLS or LMMSE method is used. Comparing Figures 3 and 4, we also see that these figures give nearly the same results. This means that moderate correlation of the channel coefficients does not affect the LMMSE approach.
As it has been mentioned before, the SLS channel estimator requires the knowledge of tr{R h }. However, note that the LS estimator can be applied to estimate this parameter using (27). In Figure 5, the MNSEs of the SLS estimator with optimal probing are plotted versus P/σ 2 in the cases when the exact and estimated values of tr{R h } are used. In the latter case, the LS method is applied to obtain the estimate of tr{R h } which is then inserted into the SLS estimator. All other conditions are similar to that of the previous figures.
In the LMMSE method, the full knowledge of the channel correlation matrix R_h is required either at the base station or at the mobile station to estimate the channel (depending on where the channel estimation is done). Also, the base station transmitter has to know this matrix in order to compute the optimal probing matrix. However, one may use the rank-one estimate of this matrix given in (48). In Figure 6, the performance of the LMMSE channel estimator is tested versus P/σ² in the cases when R_h is known exactly and when its estimate (48) is used. In the latter case, the optimal LS probing is used (note, however, that in the general case such probing is nonoptimal for the LMMSE approach). The value of L is varied in this figure and r = 0.25 is taken. From Figures 5 and 6, we see that there are only small performance losses caused by using the estimated values of tr{R_h} and R_h in the SLS and LMMSE estimators, respectively, in lieu of the exact values of tr{R_h} and R_h. Also, from Figure 6, we see that the optimal LS probing becomes nearly optimal for the LMMSE approach starting from moderate values of SNR. This observation supports the theoretical results of Section 5. Figure 7 compares the performances of the LS, SLS, and LMMSE estimators versus P/σ². In this figure, we assume that r = 0.25, and two variants of the LMMSE estimator are considered. Both these variants assume that the estimator knows R_h exactly, but the first variant uses the optimal probing signal that satisfies (36), while the second one employs the matrix which satisfies (21) and, therefore, is optimal only for the LS and SLS estimators and/or for the uncorrelated channel case (r = 0). From this figure, we observe that the difference in performance between the first and second variants of the LMMSE estimator is negligible at all the tested values of SNR. Therefore, the LS/SLS probing appears to be close to optimal for the LMMSE estimator in practice.
In the last example, the case of multiple LS channel estimates is considered. In Figure 8, the parameter L = 4 is chosen and the performance of the BLUE estimator is compared for K = 2 and K = 4. Three cases are considered in this figure: (i) both the probing matrices and the coefficients α_{i,k} are optimal; (ii) the probing matrices are nonoptimal but the coefficients α_{i,k} are optimal; (iii) both the probing matrices and the coefficients α_{i,k} are nonoptimal.
In the third case, the coefficients α i,k = 1/K are assumed for all i and k. Figure 8 demonstrates substantial improvements which can be achieved when the BLUE estimator is used in the case of multiple channel estimates. This figure also shows that the choice of optimal probing matrices and coefficients α i,k is critical for the estimator performance as nonoptimal choices of one or both of these parameters may cause a severe performance degradation.
CONCLUSIONS
We have studied the performance of the channel probing method with feedback using a multisensor base station antenna array and single-sensor users. Three channel estimators have been developed which offer different tradeoffs in terms of performance and a priori required knowledge of the channel statistical parameters. First of all, the traditional LS method has been considered. The LS estimator does not require any knowledge of the channel parameters. Then, a new (refined) version of the LS estimator has been proposed. This refined technique is referred to as the SLS estimator. It has been shown to offer a substantially improved channel estimation performance relative to the LS method but requires that the trace of the channel covariance matrix and the receiver noise powers be known a priori. Finally, the LMMSE channel estimator is developed and studied. The latter technique has been shown to potentially outperform both the LS and SLS estimators, but it requires the full a priori knowledge of the channel covariance matrix and the receiver noise powers.
For each of the above mentioned techniques, the optimal choices of probing signal matrices for downlink channel measurement have been studied and channel estimation errors have been analyzed. In the case of multiple LS channel estimates, the BLUE scheme for their linear combining has been developed.
Simulation examples have demonstrated substantial performance improvements that can be achieved using optimal channel probing.
A. PROOF OF LEMMA 1
First of all, we prove the chain rule for the particular case G = BX. Writing this equation elementwise, we have g_{i,l} = Σ_k b_{i,k} x_{k,l}; differentiating elementwise and collecting terms, the chain rule follows, and the proof for the particular case G = BX is completed.
To extend the proof to the general case G = A + BX + X^H CX, we notice that this equation can be rewritten as G = A + (B + X^H C)X and, therefore, the established result for the particular case G = BX can be applied, taking into account that the matrix A is constant and that ∂ tr{B + X^H C}/∂X = 0. In other words, replacing the matrix B by the matrix B + X^H C, we straightforwardly extend our proof to the general case.
| 5,524.4 | 2004-01-01T00:00:00.000 | [ "Computer Science", "Business" ] |
Establishment and Characterization of a High and Stable Porcine CD163-Expressing MARC-145 Cell Line
Isolation and identification of diverse porcine reproductive and respiratory syndrome viruses (PRRSVs) play a fundamental role in PRRSV research and disease management. However, PRRSV has a restricted cell tropism for infection. MARC-145 cells are routinely used for North American genotype PRRSV isolation and vaccine production, but they have some limitations, such as low virus yield. CD163 is a cellular receptor that mediates productive infection of PRRSV in various nonpermissive cell lines. In this study, we established a MARC-145 cell line stably expressing high levels of porcine CD163 (pCD163) to increase its susceptibility to PRRSV infection. Indirect immunofluorescence assay (IFA) and Western blotting showed that pCD163 was expressed at a higher level in the pCD163-MARC cell line than in MARC-145 cells. Furthermore, the ability of the pCD163-MARC cell line to propagate PRRSV was significantly increased compared with MARC-145 cells. Finally, we found that the pCD163-MARC cell line had a higher isolation rate for clinical PRRSV samples and propagated live attenuated PRRS vaccine strains more efficiently than MARC-145 cells. This pCD163-MARC cell line will be a valuable tool for propagation and research of PRRSV.
Introduction
Porcine reproductive and respiratory syndrome (PRRS) first appeared in the late 1980s independently but almost simultaneously in North America and Europe. PRRS spread quickly to most swine-producing countries worldwide and became one of the most economically important diseases in the swine industry [1]. The causative agent is PRRS virus (PRRSV), and two distinct genotypes are found: the European (EU) type (genotype 1) and the North American (NA) type (genotype 2) [2][3][4]. Sequence analysis shows they share approximately 60% nucleotide sequence identity at the genome level [5][6][7][8]. Since their emergence, the two genotypes have exhibited distinct genetic and antigenic variation and have been identified as dominant pathogens causing reproductive failure in sows and gilts, respiratory distress and high mortality rates in nursery pigs, and serious economic losses every year [9][10][11][12]. PRRSV was first isolated from fetuses suspected of PRRS in 1995 in China [13]. In 2006, a large-scale devastating disease, known locally as "high fever," broke out in China, causing high morbidity of 50-100% and a mortality rate of 20-100% [14].
Pig is the only natural host of PRRSV. PRRSV has a restricted cell tropism for infection. In vivo, it replicates preferentially in porcine alveolar macrophages (PAMs) [15,16].
In vitro, PRRSV replication is limited to isolated PAMs [17], differentiated monocytes [18], and African green monkey kidney-derived cells, such as MARC-145 [17,18]. Cell tropism of PRRSV is determined by the interaction between the viral surface protein(s) and the cellular receptor(s) on the surface of host cells. Many cellular receptors have been described for PRRSV, including heparan sulfate (HS) [19], sialoadhesin (Sn) [20], vimentin [21], and CD151 [22]. However, there is no evidence that cells expressing these proteins in trans support a productive infection by PRRSV, nor has evidence been presented that expression of their cDNAs confers susceptibility to nonpermissive cells. Since 2007, CD163, a cellular glycoprotein in the scavenger receptor cysteine-rich (SRCR) superfamily, has been described to function as a putative cellular receptor for PRRSV [23]. Furthermore, the expression of CD163 in nonpermissive cells such as PAM [24], CHO and PK15 cells [25], BHK-21 cells [26], and murine macrophage-derived cells [27] has been shown to render these cells permissive to PRRSV and to support the production of PRRSV. The expression level of CD163 determines the level of PRRSV production, indicating that CD163 is a critical factor for PRRSV infection [20]. Other reports further indicate that the expression level of CD163 appears to correlate with the efficiency of PRRSV infection and that an important residue of CD163 is involved in the functional interaction with PRRSV [28,29].
Isolation of PRRSV from clinical samples is important for PRRSV research and diagnostics. PAMs are susceptible to PRRSV, but these primary cells are difficult to isolate from pigs and to maintain in vitro. MARC-145 cells are routinely used for NA genotype PRRSV isolation and vaccine production. However, MARC-145 cells have some limitations, such as a relatively low virus yield and a low isolation rate of PRRSV from clinical specimens. In this study, a MARC-145 cell line stably expressing a high level of pCD163 was generated and its susceptibility to PRRSV infection was evaluated. Remarkably, the expression of pCD163 in pCD163-MARC cells was robust, which was sufficient to render these cells more susceptible and better suited to the isolation of clinical PRRSV. The engineered cell line also propagated live attenuated PRRSV vaccine strains more efficiently than the MARC-145 cell line. As such, the pCD163-MARC cell line reported here will be a valuable tool for advancing the analysis of the biological characteristics of PRRSV.
Materials and Methods
2.1. Animals. 4- to 6-week-old PRRSV-negative pigs were obtained from a local farm with no history of PRRSV, porcine circovirus 2, porcine parvovirus, pseudorabies virus, or Actinobacillus pleuropneumoniae. All pigs were tested and proven to be seronegative for PRRS by indirect enzyme-linked immunosorbent assay (iELISA) and PRRSV negative by RT-qPCR.
Cells.
PAMs were obtained from the lungs of the PRRSV-negative pigs mentioned above. In brief, the lungs were washed five to eight times with sterilized phosphate-buffered saline (PBS) and each aliquot of washing fluid was centrifuged for 10 min at 1500 rpm. The resulting cell pellets were mixed together, washed again in PBS, and resuspended in RPMI 1640 medium (Invitrogen, Carlsbad, CA, USA) supplemented with 10% FBS, 2 mM L-glutamine, 1 mM nonessential amino acids, 100 U penicillin/ml, and 100 µg streptomycin/ml. Cells were counted and seeded in 6-well tissue culture plates at a density of 2 × 10^6 cells/well. MARC-145 cells were grown in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% FBS, 2 mM L-glutamine, 100 U of penicillin/ml, and 100 µg of streptomycin/ml. These cells were cultured in a humidified incubator with 5% CO2 at 37 °C.
Virus.
NA genotype PRRSV classical strain S1 (GenBank accession number DQ459471) and highly pathogenic strain SY0608 (GenBank accession number EU144079) were kindly provided by Dr. Ping Jiang (Nanjing Agricultural University, China) [30,31]. Both strains were propagated and titrated on MARC-145 cells and stored at −20 °C for further use.
Construction of the pCI-pCD163
Plasmid. Based on the pCD163 gene sequence (GenBank accession number HM991330), primers used for amplification of the full-length pCD163 gene were designed. The total RNA was extracted from PAMs using TRIzol reagent according to the manufacturer's instructions (Invitrogen). RNA was further purified by using an RNeasy Minikit, including a DNase digestion step (Qiagen, Chatsworth, CA, USA). The purified RNA was reverse transcribed to cDNA using an oligo (dT) primer and SuperScript III Reverse Transcriptase (Invitrogen). The full-length pCD163 gene was then amplified from cDNA using the following primer pair: pCD163-Fwd (upstream primer): 5′-ATACTCGAGCCACCATGGACAAACTCAGAATGGTGCTAC-3′ (containing an XhoI site); pCD163-Rev (downstream primer): 5′-ATAGCGGCCGCAAGCTTATCATTGTACTTCAGAG-3′ (containing a NotI site). The pCD163 gene was then cloned into the pCI-neo mammalian expression vector (Promega, Madison, WI, USA) using the XhoI and NotI sites to obtain plasmid pCI-pCD163. The expression plasmid was sequenced to confirm the correct in-frame insertion of the pCD163 gene.
Generation of pCD163-MARC Cell
Line. MARC-145 cells were transfected with the plasmid pCI-pCD163 using the Lipofectamine 3000 reagent (Invitrogen) according to the manufacturer's protocol. pCD163-expressing cells were selected with 300 µg/ml of G418 (Invitrogen) diluted in DMEM containing 10% FBS. The medium containing G418 was replenished every 3-4 days during selection. After selection, individual cell clones were isolated by a limiting dilution method using 96-well cell culture plates, and fifteen resulting clones were screened for expression of pCD163 by Western blotting using mouse anti-pCD163 MAb (AbD Serotec, Raleigh, NC, USA). The cell clone expressing the highest level of pCD163 was chosen and designated as pCD163-MARC.
2.6. IFA. MARC-145 and pCD163-MARC cells were seeded directly onto coverslips. 48 h later, the expression of pCD163 was identified by IFA. IFA was also conducted to confirm the N protein expression of the 5th-passage PRRSV isolated from clinical specimens. Briefly, cells were washed twice in ice-cold phosphate-buffered saline (PBS) and fixed with 4% paraformaldehyde in PBS at 4 °C for 1 h. Cells were then washed three times with ice-cold PBS and permeabilized with 0.5% Triton X-100 for 15 min. The coverslips were then incubated with mouse anti-pCD163 MAb (1 : 500) or anti-N MAb SDOW17 (Rural Technologies, Brookings, SD, USA) for 1 h. After three washes with PBS, coverslips were incubated with Alexa Fluor 488-conjugated goat anti-mouse IgG(H+L) antibody (Invitrogen) at room temperature for 1 h. Then the coverslips were washed three times, mounted with mounting buffer (60% glycerol and 0.1% sodium azide in PBS), and observed under an Olympus BX51 inverted fluorescence microscope.
Western Blotting Assay.
After being grown in 6-well tissue culture plates for 48 h, MARC-145 and pCD163-MARC cells were lysed in ice-cold cell lysis buffer (Beyotime, Shanghai, China) containing 20 mM Tris (pH 7.5), 150 mM NaCl, and 1% Triton X-100 supplemented with phenylmethylsulfonyl fluoride (PMSF; Beyotime, Shanghai, China). The samples were analyzed by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and Western blotting. Briefly, the samples were resolved in a 12% polyacrylamide gel. Separated proteins were then transferred onto a nitrocellulose membrane and probed with mouse anti-pCD163 MAb or anti-β-actin antibody (Santa Cruz Biotechnology, Santa Cruz, CA, USA). Specific reaction products were detected with horseradish peroxidase- (HRP-) conjugated goat anti-mouse IgG (Boster, Wuhan, China). The membranes were developed using SuperSignal West Pico Chemiluminescent Substrate according to the manufacturer's suggestions (Pierce, Rockford, IL, USA). The digital chemiluminescent Western blotting signal was acquired with a Bio-Rad GelDoc XR & ChemiDoc XRS system and analysis was conducted with the Quantity One program, version 4.6 (Bio-Rad). Each experiment was repeated three times.
Cell Proliferation Assay.
Cell proliferation of the cell lines was evaluated as previously described [32] with the following modifications. Briefly, MARC-145 and pCD163-MARC cells were seeded in 24-well plates at a density of 1 × 10 4 cells/well. Cells were digested and cell numbers were counted for a total of eight consecutive days to evaluate the proliferation rates. All assays were repeated at least three times, with each experiment performed in triplicate.
Growth
Kinetics. The 60th passage of pCD163-MARC cells and MARC-145 cells was grown in 24-well plates to 80% confluency and infected with PRRSV classical strain S1 or highly pathogenic strain SY0608 at the same multiplicity of infection (MOI) of 1. Virus was allowed to adsorb for 1 h at 37 °C. The inoculum was removed, the cells were washed twice, and fresh medium was then added. At 24 h, 48 h, 72 h, 96 h, and 120 h postinfection (hpi), the supernatants were collected. One part of the culture supernatants was titrated by a microtitration infectivity assay on MARC-145 cells and the 50% tissue culture infective dose (TCID50) was calculated by the Reed-Muench method [33]. The remaining part of the culture supernatants was subsequently used for the SYBR Green real-time PCR assay. All assays were repeated at least three times, with each experiment performed in triplicate.
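For readers unfamiliar with the Reed-Muench endpoint calculation used to obtain TCID50 values, a minimal sketch is given below. The well counts are illustrative, not data from this study, and the function assumes ten-fold serial dilutions.

```python
import numpy as np

def reed_muench_tcid50(dilution_exponents, infected, total):
    """Sketch of the Reed-Muench 50% endpoint calculation.

    dilution_exponents: e.g. [1, 2, 3, 4, 5] for 10^-1 ... 10^-5
    infected, total:    wells showing CPE and wells inoculated per dilution
    Returns log10(TCID50) per inoculated volume.
    """
    infected = np.asarray(infected, float)
    uninfected = np.asarray(total, float) - infected
    # pooled counts: infected accumulated toward higher dilutions,
    # uninfected accumulated toward lower dilutions
    cum_inf = np.cumsum(infected[::-1])[::-1]
    cum_uninf = np.cumsum(uninfected)
    frac = cum_inf / (cum_inf + cum_uninf)
    # dilution pair straddling 50% infection
    above = np.where(frac >= 0.5)[0][-1]
    below = above + 1
    prop_dist = (frac[above] - 0.5) / (frac[above] - frac[below])
    return dilution_exponents[above] + prop_dist   # log10 TCID50 per volume

# example titration with 8 wells per dilution
print(reed_muench_tcid50([1, 2, 3, 4, 5, 6],
                         infected=[8, 8, 6, 3, 1, 0],
                         total=[8, 8, 8, 8, 8, 8]))
```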
Real-Time RT-PCR.
Real-time RT-PCR was carried out as described previously [34]. Total RNA of viral supernatants harvested as described above was extracted using TRIzol reagent. cDNA was synthesized using an oligo d(T) primer as mentioned above. SYBR Green real-time PCR was performed to evaluate the PRRSV RNA level, using the sense primer 5′-AATAACAACGGCAAGCAGCAG-3′ and the antisense primer 5′-CCTCTGGACTGGTTTTGTTGG-3′. The cDNA was used as the template. The reaction was performed at 95 °C for 2 min, followed by 40 cycles of 95 °C for 15 s and 61 °C for 1 min using the ABI 7300 detection system. For quantification, cDNA of PRRSV strain S1 was tenfold serially diluted and used to generate the standard curve. The real-time PCR method was very sensitive and could detect even 0.01 TCID50 of PRRSV in the viral supernatant. The PRRSV RNA quantity of samples was determined by linear extrapolation of the Ct value plotted against the standard curve. All assays were repeated at least three times, with each experiment performed in triplicate. Statistical differences were evaluated using the least significant difference (LSD) test. A P value < 0.05 was considered statistically significant [35].
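A minimal sketch of the standard-curve quantification described above: a linear fit of Ct against log10 titer for the serial dilutions of the S1 standard, inverted to quantify unknown samples. The Ct values are illustrative numbers, not the study's data.

```python
import numpy as np

# standard curve: Ct values measured on ten-fold dilutions of the S1 cDNA standard
# (illustrative numbers only)
log10_titer_std = np.array([5, 4, 3, 2, 1, 0, -1, -2])      # log10 TCID50-equivalents
ct_std = np.array([12.1, 15.4, 18.8, 22.3, 25.7, 29.2, 32.6, 36.1])

slope, intercept = np.polyfit(log10_titer_std, ct_std, 1)   # Ct = slope*log10(titer) + intercept
efficiency = 10 ** (-1.0 / slope) - 1                       # amplification efficiency

def quantify(ct_sample):
    """Interpolate a sample Ct on the standard curve -> log10 titer."""
    return (ct_sample - intercept) / slope

print(f"slope = {slope:.2f}, PCR efficiency = {efficiency:.1%}")
print("sample with Ct 20.5 -> log10 titer =", round(quantify(20.5), 2))
```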
Generation and Characterization of the pCD163-MARC
Cell Line. The eukaryotic expression plasmid pCI-pCD163 was constructed and the sequencing data confirmed 100% identity with the pCD163 sequence (GenBank accession number HM991330), demonstrating a successful and correct insertion of the pCD163 gene in the construct. The plasmid was then transfected into MARC-145 cells. After selection with G418, each cell clone was examined for pCD163 gene expression. The cell clone with the highest pCD163 expression was identified by IFA and Western blotting and designated pCD163-MARC. The fluorescence intensity of pCD163-MARC cells was significantly higher than that of MARC-145 cells, and pCD163 was present both on the cell surface and in the cytoplasm (Figure 1). The pCD163 expression level was further examined by Western blotting (Figure 2(a)). pCD163-MARC cells were found to express an 8.7-fold higher level of pCD163 than MARC-145 cells (Figure 2(b)). The growth speed and viability of pCD163-MARC cells were compared with those of MARC-145 cells. As shown in Figure 3, pCD163-MARC cells had cell viability and doubling times similar to the parental MARC-145 cells (P > 0.05).
pCD163-MARC Cells Are Highly Susceptible to Both PRRSV Classical and Highly Pathogenic Strains.
To evaluate the susceptibility of pCD163-MARC cells to PRRSV, 60th passage of pCD163-MARC and MARC-145 cells was individually infected with PRRSV classical strain S1 or highly pathogenic strain SY0608 at an MOI of 1. TCID 50 was measured to determine viral growth kinetics. As shown in Figure 4, pCD163-MARC cells propagated both PRRSV S1 and SY0608 more efficiently than MARC-145 cells (P < 0.05). Real-time RT-PCR was also conducted to amplify the PRRSV N gene from the culture media at different time points. As shown in Figure 5, MARC-145 cells clearly contained lower levels of viral RNA than the pCD163-MARC cells (P < 0.05). In sum, the data suggested that pCD163-MARC cells are more susceptible to infections by both PRRSV S1 and SY0608 strains, thereby emphasizing that the expression level of pCD163 correlates well with the overall level of PRRSV propagation. (Figure 6).
pCD163-MARC Cells Are More Suitable for PRRSV Vaccine Production.
Vaccination is considered the major protective measure to control PRRS. Next, we examined and compared pCD163-MARC and MARC-145 cells for their ability to propagate live attenuated PRRS vaccine strains. As summarized in Table 2, viral titers produced by pCD163-MARC cells for the PRRSV R98 and JXA1-R vaccine strains were higher than those produced by MARC-145 cells (P < 0.05). This result supports the notion that pCD163-MARC cells are more effective for propagation of PRRSV vaccine strains and can be used in vaccine production.
Discussion
To date, the in vitro propagation of PRRSV can be carried out in three types of cell system. The first comprises cells of the monocyte-macrophage lineage, including PAMs and peripheral blood monocytes. The second comprises the African green monkey kidney cell line MA-104 and its derivatives MARC-145 and CL-2621. The third consists of recombinant cell lines created from PRRSV-nonpermissive cells by recombinant DNA technology, which become permissive to PRRSV infection after expressing the corresponding receptor [36]. So far, HS, Sn, CD163, CD151, and vimentin have been reported as PRRSV receptors. As CD163 expression increases in cultured monocytes, PRRSV infection of these cells increases correspondingly [37]. Flow cytometry results show a direct correlation between the expression of CD163 in PAM cells and PRRSV infectivity [38]. Moreover, CD163 has been shown to be involved in the entry of PRRSV into MARC-145 cells, which do not belong to the monocyte-macrophage lineage [23]. Pretreatment of PAMs with tetradecanoyl phorbol acetate (TPA) or lipopolysaccharide (LPS) decreases CD163 expression and accordingly reduces PRRSV infection [39]. Transfection of PRRSV-nonpermissive cells with the CD163 gene derived from swine, canine, murine, human, and simian species can render these cells permissive to PRRSV infection, fully confirming the function of CD163 [23,39]. All of these data demonstrate that CD163 plays an important role in determining PRRSV susceptibility and productive infection. So far, numerous cell lines have been constructed to enhance PRRSV replication, such as continuous PAM [24], CHO and PK15 cells [25], BHK-21 cells [26], and murine macrophage-derived cells [27], fully revealing the importance of CD163. MARC-145 cells are routinely used for NA genotype PRRSV isolation and vaccine production. However, MARC-145 cells have some limitations, such as relatively low virus yield, and no cell lines based on MARC-145 had previously been constructed to enhance the propagation of PRRSV. The current study therefore attempted to generate a subline of MARC-145 cells stably expressing pCD163, such that these cells can be used to facilitate sensitive isolation of PRRSV for research and diagnostics and to produce vaccine virus efficiently.
CD163 is a 130 kDa glycoprotein containing a large extracellular region of 9 SRCR domains, a single transmembrane (TM) domain, and a short cytoplasmic tail [40,41]. Domains in pCD163 related to PRRSV infection have been identified using various deletion and chimeric constructs. Regarding the importance of the SRCR domains, the first four N-terminal repeats (SRCRs 1-4) are dispensable, whereas SRCR5 is vital for mediating PRRSV infection [23,39,42]. Further studies demonstrated that the TM domain is required for PRRSV susceptibility. The cytoplasmic tail is conserved among species but is not required for PRRSV infection [43]. In our study, the full-length pCD163 gene was amplified by RT-PCR from PAMs and cloned into the eukaryotic expression vector pCI-neo carrying a geneticin (G418) resistance gene for antibiotic selection. The plasmid pCI-pCD163 was introduced into MARC-145 cells, positive cells were selected based on resistance to G418, the cell clone expressing the highest level of pCD163 was selected, and pCD163-MARC was established.
PRRSV was first isolated from fetuses suspected of PRRS in 1995 [13]; this isolate belongs to the NA genotype [44]. Since then, PRRSV spread in China has been predominantly of the NA genotype. In 2006, highly pathogenic PRRS broke out in China and caused enormous economic losses [14]. In the present study, we used NA genotype PRRSV classical strain S1 and highly pathogenic strain SY0608 to represent PRRSV circulating in China and examined their propagation in the established pCD163-MARC cells. The growth kinetics of PRRSV and the determination of PRRSV RNA demonstrated that pCD163-MARC cells produced higher levels of both PRRSV strains (Figures 4 and 5). The data presented here show that pCD163 plays a pivotal role in PRRSV infection of MARC-145 cells, in accordance with previous reports that the expression level of pCD163 can determine the efficiency of PRRSV infection [23]. However, whether the cell line can be used for isolation of EU genotype PRRSV remains to be studied.
Isolation of PRRSV from clinical samples is important for PRRSV research and diagnostics. The PRRSV isolation rate in pCD163-MARC cells reaches 98.2-100%, which is significantly higher than the isolation rate in MARC-145 cells (Table 1). We show that pCD163-MARC cells are more suitable for isolation of field viruses. Using these cells, more PRRSV isolates may be obtained to study their biological characteristics. In addition, two vaccine strains currently used in China can be produced at higher titers in pCD163-MARC cells than in MARC-145 cells, demonstrating the great potential of this cell line for efficient production of commercial vaccines (Table 2).
Conclusions
We have successfully established a MARC-145 cell line stably expressing a high level of pCD163, with increased susceptibility to PRRSV infection. This pCD163-MARC cell line will be a valuable tool not only for facilitating PRRSV propagation for virus isolation and vaccine production, but also for studying the biological characteristics of PRRSV.
Ethical Approval
The animal experiments were approved by the Institutional Animal Experiment Commission (Institute of Animal Science and Veterinary Medicine, Shandong Academy of Agricultural Sciences) and conducted accordingly. Collecting clinical tissue samples was granted an exemption from requiring ethics approval by the Institutional Animal Experiment Commission (Institute of Animal Science and Veterinary Medicine, Shandong Academy of Agricultural Sciences), because tissue samples were collected for diagnostic purposes.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Authors' Contributions
Xiangju Wu and Jing Qi designed and performed the experiments and prepared figures; Xiaoyan Cong, Lei Chen, Yue Hu, Guisheng Wang, and Fulin Tian prepared the reagents and samples and helped with experimental design; Dongwan Yoo and Feng Li discussed the data and corrected the manuscript; Wenbo Sun, Zhi Chen, Lihui Guo, Jiaqiang Wu, and Jun Li were involved in data discussion; Jinbao Wang, Xiaomin Zhao, and Yijun Du initiated the study, designed the experiments, and wrote the manuscript. All authors revised
| 4,737.6 | 2018-02-21T00:00:00.000 | [ "Biology" ] |
Constraining the Higgs self couplings at $e^+e^-$ colliders
We study the sensitivity to the shape of the Higgs potential of single, double, and triple Higgs production at future $e^+e^-$ colliders. Physics beyond the Standard Model is parameterised through the inclusion of higher-dimensional operators $(\Phi^\dagger \Phi- v^2/2)^n/\Lambda^{(2n-4)}$ with $n=3,4$, which allows a consistent treatment of independent deviations of the cubic and quartic self couplings beyond the tree level. We calculate the effects induced by a modified potential up to one loop in single and double Higgs production and at the tree level in triple Higgs production, for both $Z$ boson associated and $W$ boson fusion production mechanisms. We consider two different scenarios. First, the dimension six operator provides the dominant contribution (as expected, for instance, in a linear effective field theory (EFT)); we find in this case that the corresponding Wilson coefficient can be determined at $\mathcal{O}(10\%)$ accuracy by just combining accurate measurements of single Higgs cross sections at $\sqrt{\hat s}=$240-250 GeV and double Higgs production in $W$ boson fusion at higher energies. Second, both operators of dimension six and eight can give effects of similar order, i.e., independent quartic self coupling deviations are present. In this case, constraints on the Wilson coefficients are best obtained by combining measurements from single, double and triple Higgs production. Given that the sensitivity of single Higgs production to the dimension eight operator is presently unknown, we consider double and triple Higgs production and show that combining their information at colliders running at higher energies will provide first coarse constraints on the corresponding Wilson coefficient.
Introduction
In the Standard Model (SM), the breaking of the electroweak symmetry, SU(2)_L × U(1)_Y → U(1)_QED, is induced by the potential V_SM(Φ) = −µ² Φ†Φ + λ (Φ†Φ)² (1.1), where Φ is the Higgs doublet and the parameters µ and λ depend on the vacuum expectation value of the Higgs field v (or, equivalently, the Fermi constant G_F) and the Higgs boson mass m_H, i.e., µ² = m_H²/2 and λ = m_H²/(2v²). The form of eq. (1.1) is dictated by the symmetries of the SM and the requirement of renormalisability. It is therefore a firm prediction of the SM that, once m_H is known, the Higgs boson (H) self-interactions are uniquely determined: λ_3^SM = λ_4^SM = λ, where λ_3^SM (λ_4^SM) is the factor in front of the vH³ (H⁴/4) interaction in the SM Lagrangian after ElectroWeak Symmetry Breaking (EWSB). Since its discovery in 2012 [1,2], a wealth of information has been accumulated on the scalar particle with a mass of 125 GeV. Its couplings to vector bosons and third-generation fermions [3] are so far all compatible with the SM expectations. However, no confirmation of the SM form of the Higgs potential is yet available from collider experiments. The reason for this lies in the intrinsic difficulty of accessing the relevant information experimentally.
Determining the form of the Higgs potential necessarily implies measuring the strength of the Higgs three- and four- (and possibly higher) point self-couplings. This is a challenging task at colliders for several reasons. As mentioned above, the self-couplings are proportional to λ, which in the SM is about 1/8 and therefore rather weak, i.e., of the same order as the Higgs couplings to the vector bosons and significantly smaller than the Yukawa coupling to the top quark. In addition, for direct sensitivity to λ_3 (λ_4), processes featuring at least three (four) Higgs bosons need to be considered, namely double (triple) Higgs production. As a result, effects associated with the self-couplings in the range of the SM values are in general very small. This simple fact has two immediate implications. First, one will need considerable statistics to be collected at the LHC Run II and III and at future electron-positron colliders before reaching sensitivity to values close to the SM predictions. Second, the precision of the experimental determinations of the self-couplings will critically depend on that of the other Higgs boson couplings entering the same process (or, more generally, the observable) under consideration. For example, at the LHC the largest production rate is due to gluon-fusion processes via a top-quark loop. While the leading contribution from the top-quark Yukawa coupling y_t scales as y_t⁴, the leading contribution from the Higgs self-coupling scales as λ_3² and is kinematically suppressed at large m(HH) invariant-mass values.
Many studies have been performed for the LHC at √ŝ = 13 TeV aiming to directly access λ_3 from double Higgs production measurements [4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20][21]. However, due to the complexity of the corresponding realistic experimental setups, it is still unclear what final precision could be achieved in the determination of λ_3. At the moment, the strongest experimental bounds on λ_3 (from non-resonant double-Higgs production) have been obtained in the CMS analysis for the bbγγ signature [22]. Exclusion limits on λ_3 have been found to depend strongly on the value of the top Yukawa coupling y_t. In particular, in the case of an SM y_t value, limits of order λ_3 < −9 λ_3^SM and λ_3 > 15 λ_3^SM are obtained. According to optimistic experimental projection studies [23], even with the high-luminosity (HL) option of 3000 fb⁻¹ it may be possible to exclude values only in the ranges λ_3 < −1.3 λ_3^SM and λ_3 > 8.7 λ_3^SM. Concerning the quadrilinear coupling λ_4, it is instead incontrovertibly clear that the possibility of constraining λ_4 via the measurement of triple Higgs production at the LHC is quite bleak [24][25][26][27]. Even at a future 100 TeV proton-proton collider, only loose bounds may be obtained with a considerable amount of integrated luminosity [28][29][30].
Additional and complementary strategies for the determination of λ_3 and λ_4 are therefore desirable, not only at the moment but also for the (near) future. To this purpose, a lot of new results have recently appeared aiming to access λ_3 via indirect loop-induced effects. This idea was pioneered by McCullough in the context of e+e− colliders [31], where loop-induced effects in single-Higgs production were investigated for ZH associated production [32][33][34]. A first evaluation of analogous loop effects at the LHC has been presented in ref. [35] for gg → H → γγ. At the same time, the complete set of (one- and two-)loop computations for all relevant single-Higgs observables at the LHC, together with the proposal of combining inclusive and differential observables, has been put forward in [36]. Since then several studies have appeared: the computation of the factorisable QCD corrections to single-Higgs EW production at the LHC [37], two-loop effects in precision EW observables [38,39] and, more recently, further investigations on the impact of the differential information and the relevance of SM electroweak corrections [40]. Furthermore, global analyses in the context of an SMEFT (SM-EFT) have also been presented for present and future measurements at the LHC [41] and even for the case of future e+e− colliders [42,43]. On the other hand, in these works, effects of λ_4 have been either ignored, being irrelevant for the calculation considered, or assumed to be determined in turn by the λ_3 value.
In the present work we investigate for the first time the (combined) sensitivity to both the λ_3 and λ_4 self-couplings in (multi-)Higgs production at future e+e− colliders. We consider H, HH, and HHH production both in association with a Z boson and via W-boson fusion (WBF) [44]. These processes are listed in Tab. 1, where we have also specified at which level in perturbation theory the λ_3 and λ_4 dependence appears (we do not calculate two-loop effects in this work).

Process | λ_3 | λ_4
ZH, νe ν̄e H (WBF) | one-loop | two-loop
ZHH, νe ν̄e HH (WBF) | tree | one-loop
ZHHH, νe ν̄e HHH (WBF) | tree | tree

Table 1: Processes considered in this work and the order at which the λ_3 and λ_4 dependence appears. We do not calculate two-loop effects, but we do calculate one-loop effects for both single and double Higgs production.
In particular, we perform the computation of one-loop effects in single and (for the first time) double Higgs production. The former pose no theoretical challenge and confirm the results of [31,43] (and, mutatis mutandis, of [36,37]); they are presented here for completeness and are also used in our analysis. On the other hand, one-loop effects in double Higgs production can be computed only within a complete and consistent EFT approach, where UV renormalisation can be performed. To this purpose, we work in a theoretical and computational framework where the cubic and quartic couplings can independently deviate from the SM predictions and loop computations can be consistently performed. Specifically, we add the two higher-dimensional operators c_2n (Φ†Φ − v²/2)^n/Λ^(2n−4) with n = 3, 4 to the SM Lagrangian, where the presence of the "−v²" term considerably simplifies the technical steps of the one-loop calculation in double Higgs production. On the other hand, Wilson coefficients in this basis or in the standard c_2n (Φ†Φ)^n/Λ^(2n−4) parameterisation can be easily related at any perturbative order and also after running to a different scale. While the c_2n coefficients are more suitable for matching to a UV-complete model, their normalised counterparts feature simple relations to the Higgs self-couplings and are more convenient for phenomenological predictions such as those performed here for (multi-)Higgs production. Independently of the choice of basis, it will be clear in the text that the Wilson coefficients c_6 and c_8 (or their normalised counterparts c̄_6 and c̄_8) are the relevant parameters to be considered, and not directly the λ_3 and λ_4 couplings. By comparing and combining the direct and indirect sensitivities on the Higgs self-couplings that could be obtained at future e+e− colliders (CEPC [45], FCC-ee [46], ILC [47] and CLIC [48,49]) running at different energies and luminosities, we explore the final reach of such colliders to constrain the Higgs potential. In general, we assume that BSM effects due to the Higgs interactions with the other SM particles are negligible w.r.t. those induced by self-interactions. In practice, we work under the same assumptions as the first calculations of one-loop λ_3 effects in single Higgs production at e+e− [31] or proton-proton collisions [35,36], which represented an unavoidable input for the analyses considering a more general class of BSM scenarios [41][42][43]. On the other hand, precisely one of these recent global analyses, ref. [43], has shown that in high-energy e+e− collisions, where ZHH and WBF HH production are kinematically available, working with our assumption or allowing for additional BSM effects does not affect the constraints that can be obtained on the trilinear Higgs self-coupling, thus justifying our working assumption. In this work we will investigate the precision that can be achieved on the trilinear Higgs self-coupling not only when it is close to its SM value; we will also explore regimes where its effects entering via loop corrections can be relevant in double-Higgs production. Moreover, these loop corrections, similarly to ZHHH and WBF HHH production at the Born level, involve effects due to the quadrilinear Higgs self-coupling. We will investigate the constraints that can be set on this coupling under two different assumptions.
In the first, we consider the case of a well-behaved EFT expansion, where dimension-eight operators induce effects smaller than dimension-six ones. In other words, the value of the trilinear coupling automatically also sets the value of the quadrilinear coupling. In the second, we lift this assumption and allow for independent trilinear and quadrilinear couplings, namely, we allow for similar effects from the (Φ†Φ)^3 and (Φ†Φ)^4 operators.
The paper is organised as follows. In section 2 we introduce the notation and we discuss the EFT framework used in our calculation. The details concerning the definition of the renormalisation scheme at one loop and all the necessary counterterms for the calculation performed here are given in Appendix A. In section 3 we provide the predictions for cross sections of single, double and triple Higgs production at different energies, discussing their dependence on the c_6 and c_8 parameters. The one-loop calculations in single and double Higgs production are performed via one-loop form factors, the explicit results for which are provided in Appendix B. In section 4 we determine the reach of several experimental setups at future e+e− colliders for constraining the cubic and quartic couplings; both individual and combined results from single, double and triple Higgs production are scrutinised. The maximum c_6 and c_8 values beyond which perturbative convergence cannot be trusted are derived in Appendix C. We draw our conclusions in section 5.
2 Notation and parametrisation of New Physics effects
In this work we are interested in the effects induced by the modification V_SM(Φ) → V_SM(Φ) + V_NP(Φ), where the New Physics (NP) modifications of the potential are all included in V_NP and the symbol Φ denotes the Higgs doublet. The term V_SM has already been defined in eq. (1.1). Following the convention of ref. [50], the most general form of V_NP that is invariant under SU(2) symmetry can be written as in eq. (2.2). It is important to specify from the beginning why for our calculation it is convenient to parametrise the NP contributions as done in eq. (2.2) and not via the standard EFT parameterisation of eq. (2.3). The advantages of the parametrisation in eq. (2.2) w.r.t. the one in eq. (2.3) are due to the fact that, after EWSB, any (Φ†Φ)^n term originates H^i terms with 1 ≤ i ≤ 2n, while any (Φ†Φ − v²/2)^n term originates H^i terms only with n ≤ i ≤ 2n. In other words, at tree level, the trilinear Higgs self-coupling receives modifications only from c6 and the quadrilinear only from c6 and c8. Needless to say, when they are summed to V_SM, equations (2.2) and (2.3) not only refer to the same quantity parametrised in a different way (V_SM + V_NP^std = V_SM + V_NP), but they are also fully equivalent for any truncation of the series at a given order n.
Writing V_SM(Φ) + V_NP(Φ) after EWSB as in eq. (2.4) allows one to define the self-couplings λ_n, which can be parametrised by the quantities of eqs. (2.5)-(2.6), where λ3^SM (λ4^SM) is the value of λ3 (λ4) in the SM. In other words, c̄6, c̄8 and c̄10 are c6, c8 and c10 normalised in such a way that they can easily be related to κ3, κ4 and κ5. In particular, eqs. (2.5) and (2.6) make two important points manifest. First, while the trilinear coupling only depends on c6, the quadrilinear depends on both c6 and c8. Thus, in a well-behaved EFT, where the effects of higher-dimensional operators are systematically suppressed by a large scale, one expects deviations in both κ3 and κ4, and such that (κ4 − 1) ≈ 6(κ3 − 1), see also eq. (2.10). Second, κ3 and κ4 do not depend on any c_2n coefficient with n > 4, without any assumption on the values of the c_2n with n > 4. In other words, for the study of the trilinear and quadrilinear Higgs self-couplings at tree level, only the c6 and c8 Wilson coefficients are relevant; at one loop, in principle, also c10 is needed. As already mentioned in the introduction, in this paper we calculate one-loop corrections to double Higgs production, taking into account both c6 and c8 effects. When loop corrections are calculated and the c_2n coefficients themselves are renormalised, the parametrisation of eq. (2.2) is convenient for further reasons. At variance with eq. (2.3), the values of the coefficients c_2n influence only the Higgs self-couplings; they do not alter the SM relations among m_H, v, µ and λ of eqs. (2.11) and (2.12). This is convenient because the physical quantities are m_H and v, not µ and λ, while using eq. (2.3) one first has to determine µ and λ and only then the self-couplings (see Appendix A of [36]). In particular, and this is the main motivation for this work, thanks to eqs. (2.11) and (2.12) the modification V_SM(Φ) → V_SM(Φ) + V_NP(Φ) allows one to keep the SM relations between the renormalisation constants and the definition of the renormalisation counterterms as done in ref. [51]. On the other hand, the explicit insertion of v² in eq. (2.2) deserves particular attention in the renormalisation procedure and leads to additional counterterms. All the necessary ingredients for the calculations performed here are provided in Appendix A.
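As an illustration of the relations just discussed, the sketch below evaluates the tree-level κ3 and κ4 for given c̄6 and c̄8, using κ3 = 1 + c̄6 and κ4 = 1 + 6c̄6 + c̄8; these explicit formulas are our reading of eqs. (2.5)-(2.6), with the normalisations c̄6 ≡ c6 v²/(λΛ²) and c̄8 ≡ 4c8 v⁴/(λΛ⁴) quoted in the conclusions, and should be checked against the original equations.

```python
# Tree-level self-coupling modifiers from the (assumed) relations
# kappa_3 = 1 + c6bar,  kappa_4 = 1 + 6*c6bar + c8bar  (cf. eqs. (2.5)-(2.6)).
def kappas(c6bar, c8bar=0.0):
    kappa3 = 1.0 + c6bar
    kappa4 = 1.0 + 6.0 * c6bar + c8bar
    return kappa3, kappa4

if __name__ == "__main__":
    for c6bar in (-1.0, 0.0, 1.0):
        k3, k4 = kappas(c6bar)
        # For c8bar = 0 one recovers (kappa4 - 1) = 6*(kappa3 - 1), as stated in the text.
        print(f"c6bar={c6bar:+.1f}: kappa3={k3:.2f}, kappa4={k4:.2f}")
```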
It is important to note that the coefficients c_2n in eq. (2.2) and those in eq. (2.3) are connected by very simple relations and can be converted into one another at any step of the calculation. Thus, our calculation is fully equivalent to one using the standard coefficients and renormalising them in the MS scheme (see Appendix A). Via the simple tree-level relations among the two sets of coefficients one can convert results at any order from one convention to the other, including the renormalisation-group equations. As already said, while the coefficients of the standard parameterisation are more suitable for the matching to a UV-complete model, those of eq. (2.2) are more convenient for one-loop calculations of hard-scattering matrix elements such as those performed here for double Higgs production. Independently of the basis choice, the NP effects should be parametrised via the c6 and c8 coefficients (in either basis) rather than directly through the λ3 and λ4 couplings. The reason is that the individual Wilson coefficients are the quantities entering the renormalisation procedure, and c6 affects both the λ3 and the λ4 value. In the following we will mainly parametrise NP via the c̄_2n coefficients, which are simply related to both the c_2n and the κ_n (see eqs. (2.5)-(2.10)).
3 Single, double and triple Higgs production: c̄6 and c̄8 dependence
In this section we describe the calculation of single, double and triple Higgs production via WBF, the e+e− → ν_e ν̄_e H(H(H)) processes, or in association with a Z boson, the e+e− → ZH(H(H)) processes. We will denote the latter also as ZH_n. Besides the ZH_n and WBF production modes, at e+e− colliders there are other possibly relevant processes such as ttH_n, Z-boson fusion (ZBF) or loop-induced H_n production via photon fusion [50] from initial-state radiation. However, these processes have considerably smaller cross sections than the WBF and ZH_n production modes, so we do not consider them in our analysis. Some of them have been considered in ref. [43] and their impact has indeed been found to be negligible.
While the triple Higgs production processes are calculated at the Born level, for both single and double Higgs production we also take into account one-loop corrections involving an additional c̄6 and c̄8 dependence. The sensitivity to c̄6 and c̄8, and in turn to κ3 and κ4, depends on the multiplicity of Higgs bosons in the final state and on the order of the loop corrections considered, as summarised in Tab. 1. We expect complementary information from ZH_n and WBF when the different collider energies of the possible future e+e− colliders are considered. While the cross section of ZH_n is maximal for energies slightly larger than its production threshold, the WBF cross section grows with the energy. Moreover, based on the results of refs. [31,36,37,40,43], in ZH_n production we expect a strong energy dependence of the Higgs self-coupling effects from loop corrections, with larger effects at lower energies. On the contrary, in WBF this energy dependence is expected to be much smaller.
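The dependence pattern summarised in Tab. 1 can be encoded compactly; the dictionary below is a minimal sketch of which couplings enter each production mode at tree level and at one loop, as described in the text (two-loop entries are not computed in this work, and the structure shown is our summary rather than a reproduction of the table).

```python
# Order at which c6bar / c8bar first enter each production mode (per the discussion
# around Tab. 1): H only via loops, HH at tree level (c6bar) and one loop (c8bar),
# HHH with both couplings already at tree level; c10bar would enter HHH at one loop.
SELF_COUPLING_DEPENDENCE = {
    "H":   {"tree": [],                   "one_loop": ["c6bar"]},   # c8bar only at two loops (not computed)
    "HH":  {"tree": ["c6bar"],            "one_loop": ["c6bar", "c8bar"]},
    "HHH": {"tree": ["c6bar", "c8bar"],   "one_loop": ["c6bar", "c8bar", "c10bar"]},
}
```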
It is important to note that the ZH_n and WBF cross sections, and in turn their sensitivity to c̄6 and c̄8, depend on the beam polarisations, which can be tuned at linear colliders. First of all, WBF contributes only via the LR polarisations, since the W boson couples only to left-chirality fermions. Conversely, the ZH_n processes can also originate from RL polarisations (right-handed e−, left-handed e+), also denoted as P(e−, e+) = (1.0, −1.0).
On the other hand, results for RL polarisations can easily be obtained from those with LR via a simple relation. In all our calculations we use the following input parameters [53]: G_µ = 1.1663787 × 10⁻⁵ GeV⁻², m_W = 80.385 GeV, m_Z = 91.1876 GeV. We assume c̄6 and c̄8 to be measured at the scale µ_r = 2m_H, which we therefore also use as the MS renormalisation scale for c̄6 in the double Higgs computation. The WBF production modes feature the same final states as ZH_n production with Z → ν_e ν̄_e decays; the latter are not considered as part of the WBF contribution in our calculation. Although NLO EW corrections in the SM would jeopardise the gauge invariance of this classification at the amplitude level [54], this is not the case for the one-loop corrections induced by the additional c_2n interactions, which are the kind of effects we consider in this work on top of the LO ones, as specified later in eqs. (3.5) and (3.11). Moreover, the interference of ZH_n-type diagrams with "genuine" WBF configurations is negligible.
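For reference, the electroweak inputs just quoted fix the remaining parameters; a minimal sketch, assuming the usual Gµ-scheme relations v² = 1/(√2 G_µ) and α = √2 G_µ m_W²(1 − m_W²/m_Z²)/π, neither of which is spelled out explicitly in the text:

```python
import math

# Electroweak inputs quoted in the text (G_mu scheme).
G_mu = 1.1663787e-5   # GeV^-2
m_W  = 80.385         # GeV
m_Z  = 91.1876        # GeV

# Derived quantities, assuming the standard G_mu-scheme relations
# (these relations are not written out in the text and are our assumption).
v       = 1.0 / math.sqrt(math.sqrt(2.0) * G_mu)                 # Higgs vev ~ 246.22 GeV
sin2_tw = 1.0 - m_W**2 / m_Z**2                                   # on-shell weak mixing angle
alpha   = math.sqrt(2.0) * G_mu * m_W**2 * sin2_tw / math.pi      # 1/alpha ~ 132.2

print(f"v = {v:.2f} GeV, sin^2(theta_W) = {sin2_tw:.4f}, 1/alpha = {1/alpha:.1f}")
```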
Single Higgs production
In this section we briefly (re-)describe the calculation of the loop-induced effects from c̄6 (κ3) in ZH and single-Higgs WBF production at e+e− colliders (representative diagrams are shown in Fig. 1). We introduce the notation that will be generalised to the case of double and triple Higgs production and we show how it is related to the previous calculations [31,36,40,43].
For both the WBF and ZH production channels no Higgs self-coupling contributes at the tree level. On the other hand, one-loop corrections depend on the trilinear Higgs self-coupling, but not on the quartic one. Thus, while the LO cross section σ_LO(H) is only of SM origin, the NLO prediction of eq. (3.5) also includes effects from c̄6. No term proportional to c̄8 or to any other c_2n coefficient appears in this expansion, meaning that eq. (3.5) is actually exact; no other terms can enter at all, even at higher orders in the (v/Λ) expansion. Furthermore, we recall that, at variance with the case of double Higgs production, in single Higgs production at one loop the anomalous-coupling approach (κ3) is fully equivalent to the calculation in the EFT (c6). For our phenomenological study we ignore the SM NLO EW corrections [54,55]. Our main focus is not the precise determination of c̄6, but the study of its impact via its leading contributions. As discussed in detail in ref. [40], SM NLO EW corrections have a tiny impact on the extraction of the value of c̄6 and do not affect the accuracy of its determination. Therefore, we consider c̄6 effects at one loop via the approximation of eq. (3.6). With this approximation, the sensitivity to the trilinear coupling can be expressed via the ratio of eq. (3.7), where we have expressed the σ_i/σ_LO ratios directly using the symbols C1 and C2 introduced in ref. [36]. C1 denotes the one-loop virtual contribution involving one triple-Higgs vertex, while C2 originates from the Higgs wave-function renormalisation constant (see eqs. (A.2), (A.14) and (A.17)), which is the only source of c̄6², and thus κ3², dependence at the one-loop level. Both C1 and C2 are independently UV-finite and, for simplicity, we choose not to resum the higher-order contributions to the wave function, at variance with ref. [36]. Indeed, given the results already presented in ref. [31], we expect to bound κ3 close to the SM value (κ3 = 1), and in this scenario such a resummation would not make a noticeable difference anyway. Moreover, even considering κ3 in the range |κ3| < 6 from ref. [56], the difference between the formula in eq. (3.7) and the one including the resummed higher-order contributions to Z_H is below 1% (see also ref. [40]). Concerning the expression of C2 in eq. (3.8), the difference w.r.t. the definition in ref. [36] is only due to this choice; in the limit c̄6 → 0 (κ3 → 1) the two definitions are equivalent, as can be seen from the explicit value of C2. Moreover, in the limit c̄6 → 0, a linear expansion of eq. (3.7) for ZH would lead to the result of ref. [31]. As explained in ref. [36] for hadronic processes, C1 parametrises contributions that are process- and kinematics-dependent. In Fig. 2 we show σ_LO (left plot) and C1 (right plot) for ZH (red) and WBF (green) production as a function of the collider energy √ŝ. As expected, while C1 strongly depends on √ŝ for ZH, it does so only very mildly for WBF H. In particular, for ZH, when increasing the energy, C1 first decreases, then changes sign around √ŝ = 550 GeV and remains small. On the other hand, the total cross section for ZH production peaks at around √ŝ = 240 GeV and decreases as √ŝ increases, while the cross section for WBF H production increases with √ŝ. Thus, while in the range 200-500 GeV ZH production is expected to be more sensitive than WBF to c̄6 (κ3), at higher energies the situation is reversed.
The information from collisions at different energies, or even at different colliders, increases the sensitivity to κ3, as discussed in ref. [43]; we will show analogous results in sec. 4. We have also looked at the differential distribution of the transverse momentum of the Higgs boson, but we have not seen any strong dependence of C1 on it. Hence, for single Higgs production at e+e− colliders differential distributions cannot increase the sensitivity to κ3, at variance with the case of hadron colliders [36,37,40,41] and of double Higgs production [57].
The range of validity of this calculation in κ3, and in turn in c̄6, is mainly dictated by the effects from δZ_H^NP, as discussed in ref. [36], from which the bound |κ3| < 20 can also be straightforwardly applied here. A more cautious and conservative condition can be derived by requiring perturbative unitarity of the HH → HH scattering amplitude and/or perturbativity of the loop corrections to the HHH vertex in any kinematic configuration. This bound has been derived in ref. [56] and leads to the requirement |κ3| < 6, independently of the value of κ4. However, the kinematic configurations leading to this bound are those involving two Higgs bosons on shell and the virtuality of the third Higgs close to 2m_H, which is not the relevant configuration for the trilinear interaction entering single Higgs production. We have independently re-investigated this bound on c̄6 (and the analogous one on c̄8) in Appendix C, where its derivation is discussed in detail.
Double Higgs production
We now consider double Higgs production. The cross sections for the production of two Higgs bosons in association with a Z boson (e+e− → ZHH) and via WBF (e+e− → ν_e ν̄_e HH) depend on the trilinear Higgs self-coupling already at the tree level (see the diagrams in Fig. 3). Moreover, for both processes, one-loop corrections depend on both the trilinear and the quartic Higgs self-coupling. At leading order the ZHH and WBF HH cross sections can be written as in eq. (3.10), where σ_0 is the SM result, σ_1 represents the leading contribution in the EFT expansion (order (v/Λ)²), while σ_2 is the squared EFT term of order (v/Λ)⁴; the σ_i terms entering eq. (3.10) are not the same quantities appearing in eq. (3.5). Note that within our choice of operators there is no contribution proportional to c̄8 in this expansion; actually, no c_2n coefficient with n > 3 enters at the tree level. The NLO corrections involve several different contributions. First we classify all of them and then we specify those relevant for our study. Using a notation analogous to eq. (3.10), the cross section at NLO accuracy can be parametrised as in eqs. (3.12)-(3.15), where the σ_ij quantities refer to the one-loop terms that factorise c̄6^i c̄8^j contributions and the σ_i0j to those proportional to c̄6^i c̄10^j. Some comments on the terms in (3.12), (3.13), (3.14) and (3.15) are in order.
The terms in (3.12) are the NLO EW corrections to the contributions that already appear at LO. The quantity σ_00, for instance, corresponds to the NLO EW corrections in the SM and has been calculated in ref. [58]. The terms σ_10 and σ_20 are O(α) corrections to σ_1 and σ_2, respectively, and are therefore always subdominant. They should be included for a precise determination of the c̄6 value; being subdominant, however, we neglect them together with σ_00 in this first analysis, similarly to the case of single Higgs production.
The terms in (3.13) collect contributions that appear for the first time at NLO. For small values |c̄6| ≲ 1, these terms are suppressed w.r.t. σ_1 and σ_2 in (3.10) and may be neglected. However, at variance with σ_10 and σ_20, for large values of c̄6 they are not subdominant. Thus, we keep them in order to study the c̄6, and in turn κ3, dependence beyond the linear approximation, which as explained is not sufficient for large values of c̄6. Moreover, this also allows us to better quantify the range of validity of our perturbative calculation (see Appendix C). These contributions originate from the left diagram of Fig. 4, which also shows the other possible one-loop corrections to the HHH vertex. This diagram induces a (c̄6)³ dependence in the amplitude and in turn a (c̄6)⁴ dependence in σ_1-loop(HH) via the interference with the Born diagrams. As discussed in ref. [56], its contribution can be large. Also, the presence of (c̄6)³ effects indicates that terms up to order (v/Λ)⁶ have to be taken into account in the one-loop amplitudes and thus in the renormalisation constants. Schematically, each order in the (v/Λ) expansion implies that the terms listed in eqs. (3.16)-(3.18) can in principle be present. Thus, the full dependence on λ3 and λ4 of the diagrams appearing in Fig. 4 is taken into account. On the other hand, the (v/Λ)⁶ terms include c10 contributions, which we reparametrise in terms of c̄10 ≡ c10 v⁶/(λΛ⁶); they lead to an independent value also for λ5, the factor in front of the H⁵/v term appearing in V_NP(Φ) after EWSB. The origin of the terms in (3.14) and (3.15) can now be understood on the basis of Fig. 4 and eqs. (3.16)-(3.18); they are commented on in the following. The terms in (3.14) are the contributions that depend on c̄8. Thus, σ_01, σ_11 and σ_21 are the most relevant quantities in our one-loop study of double Higgs production, as they provide the sensitivity to c̄8 and therefore to the deviation of the quadrilinear coupling from the value determined by c̄6 alone. Although σ_11 and σ_21 would be suppressed for small c̄6, we keep them in order to study the validity of our calculation in the (c̄6, c̄8), or equivalently (κ3, κ4), plane.
Finally, the last term, (3.15), is related to the c̄10-dependent contributions. These contributions arise from the diagram with the H⁵ interaction in Fig. 4 and from the corresponding term in the renormalisation constant δc̄6 (see eq. (A.16) for the explicit δc̄6 formula), and can be expressed as in eq. (3.19). At one loop in ZHH or WBF HH production their sum can be written as a kinematically independent shift of c̄6, eq. (3.20); in practice we can only constrain the linear combination of c̄6 and c̄10 that appears there. In the following we work under the assumption that c̄10 effects are negligible and set c̄10 = 0; however, for not too large values of c̄10, i.e., where the linear expansion in c̄10 is reliable, results on c̄6 can be translated into results on the linear combination of c̄6 and c̄10 via eq. (3.20). In order to be directly sensitive to c̄10 one would need to consider one-loop effects in triple Higgs production, or to evaluate quadruple Higgs production at the tree level.
In conclusion, in our phenomenological analysis we evaluate c̄6 and c̄8 effects at one loop via the approximation
σ_NLO^pheno(HH) = σ_LO(HH) + Δσ_c̄6(HH) + Δσ_c̄8(HH),   (3.21)
with Δσ_c̄6(HH) = c̄6³ σ_30 + c̄6⁴ σ_40 and Δσ_c̄8(HH) collecting the c̄8-dependent one-loop terms σ_01, σ_11 and σ_21. The analytical results for the form factors used for the calculation of Δσ_c̄6(HH) and Δσ_c̄8(HH) are given in Appendix B. We now show the impact of c̄6 and c̄8 on the σ_NLO^pheno predictions at different energies. First of all, in Fig. 5 we show the LO cross section σ_LO of ZHH (left) and WBF HH (right) production as a function of √ŝ for different values of c̄6. In ZHH production the LO cross section peaks around √ŝ = 500 GeV, which is the optimal energy for measuring this process, while the WBF HH cross section grows with the energy. As can be seen by comparing the left and right plots, the dependence on c̄6 is different in ZHH and WBF HH production; in particular, at variance with ZHH, WBF HH cross sections in general increase when c̄6 ≠ 0. This feature is even more evident in the top-left plot of Fig. 6, where we show the dependence of σ_LO on c̄6 for the different phenomenologically relevant configurations that will be analysed in sec. 4, namely ZHH at 500 GeV collisions and WBF HH at 1, 1.4 and 3 TeV collisions.
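A small numerical sketch of eq. (3.21): the function below assembles σ_NLO^pheno(HH) from tabulated coefficients. The grouping of Δσ_c̄8 into c̄8(σ_01 + c̄6 σ_11 + c̄6² σ_21) follows the classification of the σ_ij terms given above but is our assumption, and all numerical values are placeholders to be replaced by the actual σ_i and σ_ij (e.g. obtained from the form factors of Appendix B).

```python
# sigma_NLO^pheno(HH) = sigma_LO + Delta_sigma_c6 + Delta_sigma_c8, cf. eq. (3.21).
# All coefficient values below are placeholders, not results from the paper.
def sigma_nlo_pheno(c6bar, c8bar, coeffs):
    sigma_lo = (coeffs["s0"]
                + c6bar * coeffs["s1"]
                + c6bar**2 * coeffs["s2"])                        # eq. (3.10)
    d_sigma_c6 = c6bar**3 * coeffs["s30"] + c6bar**4 * coeffs["s40"]
    # c8bar-dependent one-loop terms sigma_01, sigma_11, sigma_21 (our assumed grouping)
    d_sigma_c8 = c8bar * (coeffs["s01"]
                          + c6bar * coeffs["s11"]
                          + c6bar**2 * coeffs["s21"])
    return sigma_lo + d_sigma_c6 + d_sigma_c8

# Usage with dummy coefficients (fb); replace with the actual sigma_i, sigma_ij values.
dummy = dict(s0=0.2, s1=-0.1, s2=0.03, s30=1e-3, s40=2e-4, s01=-2e-3, s11=1e-3, s21=-5e-4)
print(sigma_nlo_pheno(c6bar=1.0, c8bar=5.0, coeffs=dummy))
```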
Using a similar layout, in Fig. 6 we display three other plots, which show the dependence of σ_LO, Δσ_c̄6(HH) and Δσ_c̄8(HH)/c̄8 on c̄6 for different processes and energies. Specifically, in the upper-right plot we show the case of ZHH at 500 GeV, while in the lower plots we show WBF HH at 1 TeV (left) and 3 TeV (right). In these three plots we display σ_LO, which has also been shown in the top-left plot, as a black line, and Δσ_c̄6(HH) and Δσ_c̄8(HH)/c̄8 as blue and red lines, respectively. Thus the blue line directly shows the c̄8-independent part of σ_NLO^pheno, while the red one corresponds to the coefficient in front of the c̄8-dependent part Δσ_c̄8(HH), which in turn depends on c̄6. In both cases, a short-dashed line is used when Δσ_c̄6(HH) or Δσ_c̄8(HH)/c̄8 is negative. From Fig. 6 we can see that not only for the LO prediction (top-left plot) but also for the one-loop effects the c̄6 (as well as the c̄8) dependence is very different in ZHH (top-right plot) and WBF HH (lower plots) production. On the other hand, as can be seen in the lower plots, besides a global rescaling factor, WBF HH results are not strongly affected by the energy of the e+e− collisions. In the case of ZHH production at 500 GeV, the minimum of the LO cross section is at c̄6 ∼ −3, while for WBF HH it is at c̄6 ∼ 0.5. This minimum is due to cancellations induced by the interference of diagrams featuring or not featuring the HHH vertex. The pattern of cancellations is different in the Δσ_c̄6(HH) one-loop contribution, which in absolute value is instead minimal at c̄6 = 0 and very large at large values of c̄6. For this reason, e.g., for c̄6 < −3 the Δσ_c̄6(HH) one-loop contribution is larger than the LO cross section. This does not signal the breaking of perturbative convergence; rather, it is due to the large cancellations that are present in this region only in the LO cross section. As already said, the perturbative limits, which are derived in Appendix C, require |c̄6| < 5 and correspond to the range of the plot. In the case of WBF HH production, Δσ_c̄6(HH) is always smaller than σ_LO, being negative for c̄6 > 0 and positive for c̄6 < 0.
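Since σ_LO is quadratic in c̄6 (eq. (3.10)), the position of the minima just discussed follows directly from the coefficients; a one-line sketch, with σ_1 and σ_2 as placeholders:

```python
# The quadratic sigma_LO(c6bar) = s0 + s1*c6bar + s2*c6bar**2 is minimal at c6bar = -s1/(2*s2);
# e.g. the values c6bar ~ -3 (ZHH, 500 GeV) and ~ 0.5 (WBF HH) quoted in the text
# correspond to s1/s2 ratios of about 6 and -1, respectively.
def c6bar_at_minimum(s1, s2):
    return -s1 / (2.0 * s2)
```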
Regarding the Δσ_c̄8(HH) contribution, which we display as red lines normalised by 1/c̄8, the effect is very different in ZHH and WBF HH production. In the case of ZHH production Δσ_c̄8(HH) is always negative and its minimum in absolute value is very close to the minimum of the LO prediction. In the case of WBF HH production Δσ_c̄8(HH) changes sign at c̄6 ∼ −2 and c̄6 ∼ 0.5, being positive between these two values and negative outside them. In general, in absolute value, the ratio Δσ_c̄8(HH)/σ_LO is always below the c̄8 · 2% level. Still, given the allowed perturbative range |c̄8| < 31 (see Appendix C), effects from large values of c̄8 can in principle be probed.
Triple Higgs production
In triple Higgs production the cubic and quartic self-couplings are present already at the tree level and therefore the leading dependences on both c̄6 and c̄8 are already present at LO (see the diagrams in Fig. 7). Following the same notation used for double Higgs production, the cross section used for our phenomenological predictions can be written as in eq. (3.22), where the σ_00 term corresponds to the LO SM prediction. Similarly to the case of double Higgs production at one loop, terms up to the eighth power in the (v/Λ) expansion are present at the cross-section level, although in this case only the fourth power is present at the amplitude level. The upper bounds on c̄6 and c̄8 mentioned in the previous section and discussed in Appendix C have to be considered also in this case. It is important to note that, although for large values of c̄6 and c̄8 loop corrections may be sizeable, at variance with double Higgs production c̄6 and c̄8 both enter at LO. Thus, when limits on c̄6 and c̄8 are extracted, loop corrections may slightly affect them, but only for large c̄6 and c̄8 values. In Tab. 2 we give all the σ_ij/σ_00 ratios, so that the size of all the relative effects from the different NP contributions can be easily inferred (there are large cancellations among the different contributions; more digits than those shown there have to be taken into account in order to obtain a reliable result). In Fig. 8 we show σ_LO at different energies for representative values of c̄6 and c̄8, including the SM case (c̄6 = 0, c̄8 = 0) where σ_LO = σ_00. There, we also explicitly show the value of the σ_02 component, which factorises the (c̄8)² dependence. We can see that for ZHHH production (left) the sensitivity to c̄8 is rather weak. The σ_02 component is only around 1% of σ_00, which means that even for large values of c̄8 the total cross section would not be large enough to be measurable at the future colliders considered in this study (see the discussion in sec. 4). On the other hand, the total cross section of WBF HHH increases with the energy, as for single and double Higgs production. In particular, the σ_02 component is much larger; it is of the same order as the SM σ_00 component. As an example, assuming c̄8 = 1 (c̄8 = −1) and c̄6 = 0, σ_LO at 3 TeV is 1.7 (2.2) times larger than σ_00. For large c̄8 values, σ_LO ≈ c̄8² σ_02 ≈ c̄8² σ_00. As can be seen in Tab. 2, WBF is also very sensitive to c̄6; for large values of c̄6 indeed σ_LO ≈ c̄6⁴ σ_40, and in particular c̄6⁴ σ_40 ≈ c̄6⁴ σ_00 at 3 TeV. All these effects are even larger at lower energies.
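The LO triple-Higgs cross section of eq. (3.22) is a polynomial in c̄6 (up to the fourth power) and c̄8 (up to the second power); the helper below rebuilds σ_LO/σ_00 from the σ_ij/σ_00 ratios of Tab. 2, stored as a dictionary keyed by (i, j). The c̄8-related entries are chosen here only so as to reproduce the 3 TeV numbers quoted in the text (1.7 and 2.2 for c̄8 = ±1); all other values are placeholders, not the actual Tab. 2 entries.

```python
# sigma_LO(HHH)/sigma_00 = sum_{i,j} r_ij * c6bar**i * c8bar**j, with the (i, j) pairs
# allowed by the (v/Lambda)^4 amplitude-level expansion and r_00 = 1 (cf. eq. (3.22), Tab. 2).
def sigma_lo_hhh_ratio(c6bar, c8bar, ratios):
    return sum(r * c6bar**i * c8bar**j for (i, j), r in ratios.items())

ratios_placeholder = {
    (0, 0): 1.0,
    (1, 0): 0.5, (2, 0): 0.8, (3, 0): 0.3, (4, 0): 1.0,   # placeholders
    (0, 1): -0.25, (1, 1): 0.1, (2, 1): 0.05,             # (0,1) fixed by the quoted 1.7/2.2
    (0, 2): 0.95,                                          # (0,2) fixed by the quoted 1.7/2.2
}
# c6bar = 0, c8bar = +1 (-1) gives 1.7 (2.2), matching the WBF HHH numbers quoted at 3 TeV.
print(sigma_lo_hhh_ratio(0.0, 1.0, ratios_placeholder))
print(sigma_lo_hhh_ratio(0.0, -1.0, ratios_placeholder))
```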
4 Bounds on the Higgs self-couplings
In this section we study how the c̄6 and c̄8 parameters can be constrained at future lepton colliders via the analysis of single, double and triple Higgs production. We consider four future e+e− colliders, CEPC [45], FCC-ee [46], ILC [47] and CLIC [48,49], with different operation modes, summarised in Tab. 3 (at the ILC an operation mode at √ŝ ∼ 350 GeV is also expected, but studies have mainly focused on the scan of the tt̄ production threshold, ignoring Higgs physics; at CLIC a slightly different scenario at 380 GeV instead of 350 GeV may also be possible). In the following, we will refer to the different scenarios as "collider-√ŝ", e.g., CLIC-3000. Although higher integrated luminosities can be attained at the CEPC and FCC-ee, energies as high as at the ILC and CLIC cannot be reached, since they are circular colliders. As a result, only single Higgs production can be measured at the CEPC and FCC-ee, and therefore only indirect constraints via loop corrections can be set on c̄6. Instead, at the ILC and CLIC double Higgs production can be measured. With this process, both c̄6 and c̄8 can be constrained, the former via the direct dependence at the Born level and the latter via the indirect dependence through loop corrections. Moreover, even triple Higgs production is kinematically allowed at the ILC and CLIC, allowing direct constraints to be set on c̄8. In our analysis we consider the following two scenarios:
1. As expected from a well-behaved EFT expansion, the contribution from c̄8 is suppressed and we can safely set c̄8 = 0. We explore how well we can measure c̄6, not only assuming c̄6 ∼ 0, i.e., an SM-like configuration, but also allowing for large BSM effects via c̄6 ≠ 0.
2. The value of c̄8 can be different from zero and lead to non-negligible effects. We explore how well we can constrain c̄8 and how much c̄8 can affect the measurement of c̄6.
First, we study the sensitivity of the ZH_n and WBF processes at the various colliders considered. Then we show combined results for the ILC and CLIC. It is important to note, however, that single Higgs production depends on c̄8 only via two-loop effects, which we did not calculate in this work (see Tab. 1). Thus, we cannot directly combine single Higgs with double and triple Higgs production in the case of Scenario 2. Nevertheless, we discuss the limits that can be obtained from single Higgs production under the assumption that the c̄8-dependent two-loop effects are negligible.
Single Higgs production
In this section we discuss the constraints that can be obtained on c̄6 via the single Higgs production modes. As said, since the effects of c̄8 are unknown, we restrict our study to the case where they can be ignored, i.e., Scenario 1. We start by considering the case in which we assume that the Higgs potential is SM-like (c̄6 = 0) and then we consider the BSM case with c̄6 ≠ 0. (One may be tempted to also explore the regime c̄6 = 0 and c̄8 ≠ 0; however, this condition is neither motivated by an EFT expansion nor protected by any symmetry, as can be seen from eq. (A.16).)
Table 4: Expected precision for the measurements of the single Higgs production modes and the expected 1σ and 2σ constraints on c̄6, assuming an SM measurement. The expected precision for the CEPC has been taken, or obtained via a luminosity rescaling, from ref. [45], for the FCC-ee from ref. [46], for the ILC from refs. [47,59] and for CLIC from ref. [49].
In Tab. 4 we show the 1σ and 2σ constraints on c̄6 that can be obtained via ZH and WBF H at different energies and colliders, using eq. (3.7). We also show the value of C1 and the accuracy that can be achieved in each experimental setup, as provided in [45-47, 49, 59] or obtained from them via a luminosity rescaling. In general in this work, unless differently specified, we assume Gaussian distributions for the errors and no correlations among them, and the errors are rescaled according to the cross section in the BSM cases. In the results of Tab. 4 we did not take into account effects due to c̄6 in the Higgs decay since, at variance with the LHC case, they can in principle be neglected at e+e− colliders. Indeed, the total cross section of e+e− → ZH production can be measured via the recoil-mass method [47], without selecting a particular H decay channel. Using the same method, the branching ratio of any (visible) decay channel can be precisely measured and used as input in the WBF H analysis, so that also in this case effects due to c̄6 in the Higgs decay can be neglected. Nevertheless, we explicitly checked that, taking into account c̄6 effects in the decay for the H → bb̄ channel, which will be the most precisely measured one, the results in Tab. 4 are almost unchanged.
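The statistical treatment described above (Gaussian, uncorrelated errors, with absolute errors rescaled with the BSM cross section) can be sketched as a simple Δχ² scan over c̄6. The cross-section parametrisation and the per-channel precisions are inputs; the snippet below only illustrates how intervals such as those of Tab. 4 are extracted and is not meant to reproduce them, and the toy numbers in the usage line are hypothetical.

```python
def delta_chi2(c6bar, channels, sigma_of_c6bar, c6bar_true=0.0):
    """Gaussian, uncorrelated errors; the absolute error is (relative precision) x
    (measured cross section), i.e. it is rescaled with the BSM cross section as in the text."""
    chi2 = 0.0
    for label, rel_precision in channels:
        sigma = sigma_of_c6bar[label]
        measured, predicted = sigma(c6bar_true), sigma(c6bar)
        chi2 += ((predicted - measured) / (rel_precision * measured)) ** 2
    return chi2

def allowed_points(channels, sigma_of_c6bar, n_sigma=1, c6bar_true=0.0):
    """Scan c6bar in [-5, 5] and keep points with Delta chi^2 < n_sigma^2 (1 d.o.f.);
    the allowed set can consist of disjoint intervals (the degenerate solutions)."""
    grid = [x / 100.0 for x in range(-500, 501)]
    return [x for x in grid
            if delta_chi2(x, channels, sigma_of_c6bar, c6bar_true) < n_sigma ** 2]

# Illustrative usage with a dummy (hypothetical) cross-section parametrisation:
toy_sigma = {"ZH-250": lambda c6: 1.0 + 0.02 * c6 - 0.002 * c6 ** 2}
print(allowed_points([("ZH-250", 0.005)], toy_sigma, n_sigma=2))
```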
As can be seen in eq. (3.7), not only a linearly c̄6-dependent term is present, but also a c̄6² one. Since C2 is negative and C1 is positive for both ZH and WBF H, the SM value of the cross section is degenerate in c̄6: besides the SM case c̄6 = 0, a second c̄6 ≠ 0 solution gives the same value of the cross section. While for WBF H this second solution is close to c̄6 = 2, in ZH at 240-250 GeV it is around c̄6 = 9, depending on the energy. As a result, the two solutions being close to each other, in WBF H the 1σ and 2σ intervals are always broad, while in ZH at 240-250 GeV we find two narrow intervals: one around c̄6 = 0 and one around c̄6 = 9. Note that for CLIC-350 also ZH yields a broad interval as a constraint, since the expected precision is larger and C1 is smaller. (In the case of WBF H at the ILC, e.g., only H → bb̄ has been considered for obtaining the quoted precision; smaller uncertainties may therefore also be achieved.)
Figure 9: Constraints on c̄6 in single Higgs production in the Scenario 1 described in the text. Two representative cases are considered: ZH at CEPC-250 and WBF H at CLIC-1400.
Via the combined measurement of the ZH and WBF H processes, or by including LHC results in a global fit, the c̄6 region around c̄6 = 9 can be excluded. In conclusion, assuming no other BSM effects, the best constraints on c̄6 via single Higgs production can be obtained at low energy and high luminosity.
We now consider the situation in which c̄6 has a value different from zero, which we will denote as c̄6^true, and we explore the constraints that can be set on c̄6 by varying the value of c̄6^true. In Fig. 9 we consider ZH at CEPC-250 and WBF H at CLIC-1400 as examples (the case of CLIC-350 can be seen directly in Fig. 14, which is described later in the text; in that case, since the expected precision is larger, weaker constraints can be set w.r.t. CEPC-250). The bands in the plot show which constraints on c̄6 (y-axis) can be set, depending on the value of c̄6^true (x-axis). We considered only the range −5 < c̄6, c̄6^true < 5, so that results can be directly compared with the analogous analysis performed in the next section for double Higgs production, where this range cannot be extended without violating perturbativity (see Appendix C). The "X" shapes of the ZH and WBF H bands can be understood as follows. In the limit of zero uncertainties, two solutions can be obtained from the equation relating the measured and predicted cross sections. For ZH production, due to the large value of C1, only one branch is present in the −5 < c̄6, c̄6^true < 5 region (if we consider ZH at FCC-ee-240 we obtain P = (4.9, 4.9) for the crossing point of the two branches). Instead, for WBF H, since C1 is small, both branches appear: while for SM-like scenarios c̄6^true ∼ 0 ZH gives the better constraints, for c̄6^true ∼ 4 the WBF H constraints are stronger. We remind the reader that it is not obvious that the LHC, even after accumulating 3000 fb−1 of luminosity, will be able to exclude a value c̄6 ∼ 4. Still, with a single measurement, for c̄6^true ∼ 4 both the intervals around c̄6 ∼ 4 and around c̄6 ∼ −3 are allowed, but the latter may be probed also at the LHC. As shown in Tab. 4, also for ZH and c̄6^true ∼ 0 there is a second interval in the constraints, but it lies outside the range of the plot.
Double Higgs production
We now turn to the case of double Higgs production. The expected precisions for the measurements considered in our analysis are listed in Tab. 5. Although double Higgs production cannot be measured as precisely as single Higgs production, it depends on c̄6 at LO and therefore the sensitivity to this parameter is much higher.
We start our analysis by considering Scenario 1, where we set c̄8 = 0. As can be seen in sec. 3.2, the WBF HH dependence on c̄6 is similar at different energies. For this reason, for Scenario 1 we show WBF HH only for CLIC-1400, together with ZHH at ILC-500. Similarly to Fig. 9, which concerns the case of single Higgs production, in Fig. 10 we plot the constraints that can be set on c̄6 by varying the value of c̄6^true. Also in σ_LO(HH) both a linear and a quadratic dependence on c̄6 are present, leading to "X"-shaped bands. The "X" shape is slightly asymmetric due to the one-loop σ_30 and σ_40 contributions that are present in σ_NLO^pheno(HH), see eq. (3.21), which we always use in our study. The central points of the "X" bands are around (c̄6^true, c̄6) = (−2.5, −2.5) for ZHH at ILC-500 and around (c̄6^true, c̄6) = (0.5, 0.5) for WBF HH at CLIC-1400. For this reason, although the WBF HH band is narrower due to a larger c̄6 dependence, for values c̄6^true ∼ 0 ZHH at ILC-500 gives better constraints. On the other hand, for values c̄6^true ≠ 0, and especially c̄6^true ∼ −2.5, WBF HH at CLIC-1400 leads to better constraints. It is interesting to note that the central points of the "X" bands in WBF H and WBF HH are very close, while for ZH and ZHH they are different. (Note that the precisions listed in refs. [49,60] are for luminosities different from those considered in Tab. 3; since the statistical uncertainty is the dominant one, the values in Tab. 5 have been obtained by rescaling those of refs. [49,60] proportionally to the square root of the luminosity.)
Figure 10: Constraints on c̄6 in double Higgs production in the Scenario 1 described in the text. Two representative cases are considered: ZHH at ILC-500 and WBF HH at CLIC-1400.
This implies that the combination of the information from WBF single and double Higgs production would not exclude any of the branches of the "X" shape; the information from ZH or ZHH is thus necessary for this purpose. We will comment again on this point in sec. 4.4.
We now consider Scenario 2. Specifically, we assume that the true value of c̄6 is c̄6^true and that the measured cross section for double Higgs production is σ_measured = σ_NLO^pheno(c̄6 = c̄6^true, c̄8 = 0), and we show which values of (c̄6, c̄8) can be constrained via the prediction of σ_NLO^pheno(c̄6, c̄8). Starting with the SM case, we show results for σ_measured = σ_NLO^pheno(c̄6 = 0, c̄8 = 0) in Fig. 11. We consider the range |c̄6| < 5 and |c̄8| < 31, because, as explained in Appendix C, for larger values the perturbative calculation cannot be trusted. The plot on the left shows the constraints for ZHH and WBF HH at ILC-500, while the one on the right shows those for WBF HH at CLIC-1400 and CLIC-3000. First of all, we notice that the constraints on c̄6 are weaker than in Scenario 1. Also, no constraint on c̄8 can be set independently of c̄6. On the other hand, the largest part of the (c̄6, c̄8) plane can be excluded, and the shape of the allowed band depends on the process. It is important to note that these results depend on the choice of the renormalisation scale µ_r and therefore on the scale at which c̄6(µ_r) and c̄8(µ_r) are measured. Our results refer to µ_r = 2m_H, which corresponds to the production threshold of the HH pair and therefore to the phase-space region associated with the bulk of the cross section. While the region close to the SM (c̄6 ∼ 0, c̄8 ∼ 0) is only very mildly affected by this choice, we warn the reader that the borders of the plane, |c̄6| ∼ 5 and |c̄8| ∼ 31, can be strongly affected.
We then consider how the constraints in the (c̄6, c̄8) plane depend on the value of σ_measured. We consider BSM configurations σ_measured = σ_NLO^pheno(c̄6 = c̄6^true, c̄8 = 0) for several values of c̄6^true; the corresponding constraints are shown in Fig. 12, where only results for ZHH at ILC-500 and WBF HH at ILC-1000 are displayed. Similarly to the SM case, for a given value of c̄6^true the constraints on c̄6 independent of c̄8 are weaker than those in Scenario 1. However, also in these cases the largest part of the (c̄6, c̄8) plane can be excluded, and the shapes of the bands strongly depend both on the process and on the value of c̄6^true. In all cases, the ZHH and WBF HH sensitivities are complementary; as we will see in sec. 4.4, their combination improves the constraints in the (c̄6, c̄8) plane. This is a clear advantage for the ILC, where both ZHH and WBF HH can be precisely measured.
The shapes of the green and red bands can be qualitatively explained as follows. Without c̄8 effects the green and red bands would simply consist of either two separate (narrow) bands or a single large band, consistently with the results that could be obtained by vertically slicing the bands in Fig. 10. The c̄8 effects bend the bands, leading to the shapes that can be observed in Fig. 12. It is interesting to note that the improvement from CLIC-1400 to CLIC-3000 is rather mild. The main reason is that the increase of the WBF HH cross section is compensated by the decrease of its dependence on c̄6, which can be directly observed in the top-left plot of Fig. 6.
Triple Higgs production
We now consider the case of triple Higgs production. In the SM the ZHHH and WBF HHH production processes have too small a cross section to be observed. As an example, if we consider LR-polarised beams at 1 TeV and the dominant decay into a bb̄ pair for the three Higgs bosons and into jets for the Z boson, about 6 ab−1 of integrated luminosity would be necessary for one signal event in the SM. As can be seen in Fig. 8, for WBF HHH the cross section is even smaller in the SM; on the other hand, this process has a strong dependence on c̄8. Indeed, the number of expected events is close to zero and a Gaussian fit cannot be performed. Rather, we have to assume that zero events are observed and compare this with the expected number of events for a given (c̄6^true, c̄8^true), performing a Poissonian analysis. (Since the total cross section depends only mildly on c̄8 in this regime, we do not expect the constraints to depend on the assumed value c̄8^true = 0. We assume that the other SM backgrounds give zero events, and we estimate the signal efficiency ε_HHH by rescaling the one known for WBF HH production, ε_HH.) In practice, for both WBF HHH and ZHHH production we estimate the signal efficiency to be ε_HHH = ε_HH^(3/2) = 4.7%, where ε_HH has been taken from ref. [57]. In Fig. 13 we show the 2σ bounds in the (c̄6, c̄8) plane. The plot on the left shows the constraints for ZHHH and WBF HHH at ILC-1000, while the one on the right shows those for WBF HHH at CLIC-1400 and CLIC-3000. As can be seen, at ILC-1000 almost all of the (c̄6, c̄8) plane is compatible with a zero-event condition, both for ZHHH and for WBF HHH production. On the other hand, at CLIC-1400 and especially at CLIC-3000 a vast area of the plane can be excluded via the study of WBF HHH production. In particular, at CLIC-3000 the constraints on c̄8 are comparable to those obtainable at a future 100 TeV hadron collider [28,29]. The constraints on c̄6 are instead worse than in the double Higgs production case.
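The Poissonian zero-event argument used here can be made explicit: with zero observed events and negligible background, a point in the (c̄6, c̄8) plane is excluded at 95% CL (roughly the 2σ bounds of Fig. 13) when the expected signal yield exceeds about 3 events. A minimal sketch, with the cross section, luminosity, efficiency and branching fractions as inputs (the 4.7% default is the ε_HHH estimate quoted in the text):

```python
import math

def expected_events(sigma_fb, lumi_ab, eff=0.047, br=1.0):
    """Expected signal yield: cross section [fb] x integrated luminosity [ab^-1 -> fb^-1]
    x selection efficiency (eps_HHH = eps_HH**1.5 ~ 4.7% per the text) x branching fractions."""
    return sigma_fb * (lumi_ab * 1000.0) * eff * br

def excluded_at_95cl(n_expected):
    """With zero observed events, P(0 | mu) = exp(-mu) < 0.05 excludes mu > -ln(0.05) ~ 3.0."""
    return n_expected > -math.log(0.05)

# Example: a hypothetical 0.002 fb signal with 5 ab^-1 gives ~0.47 expected events -> not excluded.
print(expected_events(0.002, 5.0), excluded_at_95cl(expected_events(0.002, 5.0)))
```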
Combined bounds
We now investigate the constraints that can be obtained via the combination of the information from single, double and triple Higgs production. We consider both Scenario 1 and Scenario 2 and, as already mentioned, in the case of Scenario 2 we combine only the results from double and triple Higgs production. We show in parallel the limits on c̄6 from single Higgs production, assuming that the c̄8-dependent two-loop effects are small.
We start by discussing the Scenario-1 analysis, separately considering the ILC and CLIC. For both colliders we progressively include results at higher energies in three stages. In the case of the ILC, we start with ZH at ILC-250, in a second step we include ZHH and WBF H results from ILC-500, and finally ZHHH and WBF H(H(H)) from ILC-1000. Instead, in the case of CLIC, we start with ZH at CLIC-350, in a second step we include WBF H(H(H)) and ZHHH results from CLIC-1400, and finally WBF H(H(H)) results from CLIC-3000. In the case of triple Higgs production we assume that we observe as many events as predicted by σ_LO(HHH) in eq. (3.22), with c̄8 = 0.
Figure 14: Results for the ILC (left) and CLIC (right) in the Scenario 1 described in the text.
In Fig. 14 we show the combined results for the ILC (left) and CLIC (right) assuming Scenario 1. In the first stage, both the ILC-250 and CLIC-350 constraints are worse than those of CEPC-250 shown in Fig. 9. This is due to a lower precision of the measurements and, for CLIC-350, also to a smaller value of C1. However, in the second stage, including results at higher energies, the constraints become much stronger for both colliders, since double Higgs production becomes available. In particular, combining single and double Higgs production the "X" shape disappears and only the band around the line c̄6 = c̄6^true remains. In the case of CLIC, bumps are still present at c̄6 ∼ 1, which originate from the centre of the "X"-shaped band for WBF H(H) at CLIC-1400, see Fig. 9 and Fig. 10. For the same reason, also for the ILC the band is slightly larger around c̄6 ∼ 1. In the third stage, the constraints improve both for the ILC and for CLIC. Still, the weakest bounds are obtained for 0 ≲ c̄6^true ≲ 1, where the centre of the "X"-shaped band for WBF HH is located. In this region, the constraints are better at the ILC thanks to the ZHH contribution at 500 GeV, which helps to resolve this region.
We now consider Scenario 2. As done in the case of double Higgs production, we assume that the true value of c̄6 is c̄6^true and that σ_measured = σ_NLO^pheno(c̄6 = c̄6^true, c̄8^true = 0), while for triple Higgs production we assume that we observe as many events as predicted by σ_LO(HHH) in eq. (3.22), with c̄8 = 0. In the case of the ILC, we consider ZHH at ILC-500 and its combination with the ILC-1000 results from ZHHH and WBF HH(H) production. In the case of CLIC, we consider ZHHH and WBF HH(H) production at CLIC-1400 and its combination with WBF HH(H) at CLIC-3000. Thus, while ILC-500 is not a combined result, being simply obtained from ZHH production, all the others include information from both double and triple Higgs production. As already said, single Higgs production cannot be directly included in the combination, since its c̄8 dependence starts at the two-loop level.
In Fig. 15 we show the results for the SM case (c̄6^true = 0, c̄8^true = 0) as green bands. There we also show, as red bands, the limits on c̄6 extracted from single Higgs measurements at the ILC and CLIC, assuming that the two-loop c̄8 dependence is negligible (more specifically, for the ILC the single Higgs limits are the combined results from ZH at ILC-250, WBF H at ILC-500 and WBF H at ILC-1000, while for CLIC they are the combined results from ZH at CLIC-350, WBF H at CLIC-1400 and WBF H at CLIC-3000). Thanks to the higher available energies, the combined double and triple Higgs constraints at CLIC are better than at the ILC; indeed the WBF HH(H) production cross section increases with the energy. On the other hand, single Higgs production can be better measured at the ILC and therefore the corresponding constraints on c̄6 are better than at CLIC. We notice that the only case where single Higgs results may be relevant in a further combination with those from double and triple Higgs production is ILC-500, which actually comes from ZHH production only. Indeed, the combination of ZH at ILC-250 and WBF H at ILC-500 would help in removing the band around c̄6 = −4 and in shrinking the allowed region around the SM value. On the contrary, at higher energies WBF HH production is more relevant in constraining c̄6. Thus, with the exception of ILC-500, single Higgs production could be helpful in constraining the (c̄6, c̄8) plane only if the dependence on c̄8 at two loops is larger than what we assumed, or if low-energy runs at higher luminosity, such as those at circular colliders, are considered.
In Fig. 16 we show the constraints from the combination of double and triple Higgs production for the BSM cases c̄6^true = −4, −2, −1, 1, 2, 4. As already discussed for the SM case, constraints from single Higgs production are negligible for high-energy e+e− colliders in this scenario under our assumptions, and for this reason they are not shown. We display in each plot both the CLIC and the ILC bounds. As we can see, both in the SM and in all the BSM cases considered, the combination of results from double and triple Higgs production always strongly improves the bounds. Also, with higher energies stronger constraints can be set; the best results are obtained by combining results at CLIC-1400 with those at CLIC-3000, especially for c̄6^true ≠ 0, since a non-zero number of events can be observed. It is interesting to note that the CLIC bounds around (c̄6^true, c̄8^true) are less sensitive than the ILC ones to the value of c̄6^true, featuring vertically elongated contours in the (c̄6, c̄8) plane. The reason is that at CLIC the bounds mainly come from WBF HHH, while at the ILC they mainly come from double Higgs production, both ZHH and WBF HH.
In conclusion, we have observed that low- and high-energy runs are both useful for constraining the shape of the Higgs potential. Under the assumption of Scenario 1, we have shown the complementarity of ZH production at low energy with WBF HH information at higher energies. Under Scenario 2, we have shown that the combination of the information from double and triple Higgs production, which is possible only at high energy, improves the constraints in the (c̄6, c̄8) plane (cf. Fig. 12 with Fig. 16).
5 Conclusions
Determining whether the scalar potential of the Higgs boson is the minimal one predicted by the SM is among the main targets of current and future colliders. In this work we have investigated the possibility of setting constraints on the shape of the Higgs potential via the measurement of single, double and triple Higgs production at future e+e− colliders, considering the two dominant channels, i.e., Z-boson associated production (ZH_n) and W-boson fusion (WBF). In order to leave the trilinear and quadrilinear couplings free to vary independently, we have added to the SM potential the two EFT operators c6(Φ†Φ − v²/2)³/Λ² and c8(Φ†Φ − v²/2)⁴/Λ⁴ and calculated the tree-level and one-loop dependence on c6 and c8 of single and double Higgs production, as well as the tree-level results for triple Higgs production (see also Tab. 1 in sec. 1).
One-loop corrections to single Higgs production, which depend only on λ3 and thus on c6, have already been calculated and studied in the literature, and we have confirmed the previous results. On the other hand, the one-loop dependence of double Higgs production on λ4, and therefore on c6 and c8, has been calculated here for the first time. At variance with the case of single Higgs production, the EFT parametrisation is in this case compulsory and an anomalous-coupling approach cannot be consistently used; the c6 parameter is itself renormalised and receives corrections from both c6 and c8. We have provided all the necessary renormalisation constants and counterterms and expressed the finite one-loop results via analytical form factors that can be directly used in phenomenological applications. We have also motivated the inclusion of the "−v²/2" term in the EFT parametrisation, which simplifies the renormalisation procedure by preserving the relations among the SM counterterms. Nevertheless, results can always be easily translated to the standard (Φ†Φ)^n parameterisation. In our phenomenological analyses we have considered several experimental setups at future e+e− colliders (CEPC, FCC-ee, ILC and CLIC) and have analysed the constraints that can be set on c̄6 ≡ c6 v²/(λΛ²) and c̄8 ≡ 4c8 v⁴/(λΛ⁴). To this purpose we have considered two scenarios:
• Scenario 1: the effects of c̄8 are negligible and we analyse the constraints that can be set on c̄6, both for the SM potential and in the case c̄6^true ≠ 0.
• Scenario 2: the effects of c̄8 are not assumed to be negligible and we analyse the constraints that can be set in the (c̄6, c̄8) plane, both for the SM potential and in the case c̄6^true ≠ 0.
In Scenario 1 the value of λ4 directly depends on λ3, while in Scenario 2 they are independent. We verified that requiring perturbative convergence sets upper bounds on the absolute values of c̄6 and c̄8, i.e., |c̄6| < 5 and |c̄8| < 31. Thus, we have analysed the constraints that can be set in this region of the (c̄6, c̄8) plane. In Scenario 1, the best constraints on c̄6 can be obtained from the combination of ZH results from low-energy high-luminosity runs with results from high-energy runs for ZHH and WBF (HH, HHH) production. On the other hand, in the BSM cases c̄6^true ≠ 0, WBF H gives stronger constraints than ZH production and, similarly, WBF HH production can be more sensitive than ZHH production.
In Scenario 2, since the two-loop c̄8 effects in single Higgs production are not available, we combine only double and triple Higgs production and show the single Higgs bounds under the assumption that the two-loop effects are negligible. The combination of high-energy results from double and triple Higgs production gives the best constraints and, in both cases, the WBF channel is in general the most relevant one. Single Higgs production is only relevant for low-energy machines and is almost negligible once WBF HH is available. For this reason, the higher the energy, the stronger the constraints that can be obtained in the (c̄6, c̄8) plane, both for the SM case and for the BSM configurations with c̄6^true ≠ 0. In both Scenario 1 and Scenario 2, although the WBF HH constraints alone are stronger than those from ZHH, the two production processes are in fact complementary and lead to improved results when they are combined. At high-energy e+e− colliders triple Higgs production is not measurable in the SM, but its cross section strongly depends on the value of c̄8. In particular, at CLIC-3000, the constraints that can be obtained on c̄8 via WBF HHH production are comparable to those obtainable at a future 100 TeV hadron collider.
In conclusion, we have demonstrated that the analysis of single, double and triple Higgs production at e+e− colliders can be exploited to constrain the trilinear and quartic couplings via direct and loop-induced indirect effects. In this first sensitivity study we have assumed that BSM effects on the couplings of the Higgs boson with other particles can be neglected. This assumption has already been shown to be reasonable for the SM case in Scenario 1 in ref. [43], yet further studies will be necessary for the other configurations considered in this work. Also, as already mentioned, an additional sensitivity to the c̄8 parameter may be obtained from the high-precision measurements of single Higgs production at future e+e− colliders. To establish what kind of constraints could be reached on c̄8 in this case, a two-loop computation of e+e− → ZH will be needed.
4.4517.08. This work has received funding from the European Union's Horizon 2020 research and innovation programme as part of the Marie Sklodowska-Curie Innovative Training Network MCnetITN3 (grant agreement no. 722104) and by F.R.S.-FNRS under the "Excellence of Science - EOS" - be.h project n. 30820817. The work of D.P. is supported by the Alexander von Humboldt Foundation, in the framework of the Sofja Kovalevskaja Award Project "Event Simulation for the Large Hadron Collider at High Precision".
A Renormalisation scheme and counterterms
Here Z_H is the Higgs wave function and T is the tadpole contribution, which we cancel via the δt counterterm so that the physical value of v does not get shifted. All the other quantities do not receive additional one-loop contributions on top of the SM ones, including δv, which is completely of SM origin. Thus, for our calculation, the necessary ingredients for the renormalisation of the virtual corrections are listed below. All the quantities with the superscript "SM" are the SM contributions and can be found in [51]; those with "NP", which stands for New Physics, are the new contributions from c6, c8 and c10. Besides c6, which is renormalised in the MS scheme, all the other EW input parameters are assumed to be renormalised on shell, with the exception of the fine-structure constant α, which we renormalise in the Gµ-scheme. This is relevant for our calculation since in the SM the renormalisation of v is related to the charge renormalisation δZ_e. The appearance of the extra quantity −6 (c6/Λ²) v³ δv in eq. (A.3) is due to the presence of v in the parametrisation of eq. (2.2), which, as we said, has an impact on the renormalisation procedure. Before giving the explicit formulas for δZ_H^NP, δ(m_H²)^NP, δt^NP and the counterterms for the H³ vertex and the H propagator, we briefly discuss this technical aspect.
The explicit term v used in the parametrisation of eq. (2.2) is a subtle quantity. In a tree-level analysis it can be trivially identified with the location of the minimum of V(Φ), which defines the ground state |0⟩ of the Higgs field.
Figure 18: The structure of one-loop effects in the HHVV amplitude expressed via form factors.
Here δZ_H^(SM,λ) is the contribution of the trilinear Higgs self-coupling to δZ_H in the SM, A_0 and B_0 are the standard scalar loop integrals, and Δ is the UV divergence Δ ≡ 1/ε − γ_E + log(4π) in D = 4 − 2ε dimensions. As discussed in sec. 3.2, terms up to order (v/Λ)⁶ have in general to be considered. However, note that no terms beyond (v/Λ)² are present in δt, or beyond (v/Λ)⁴ in δZ_H and δm_H², while c6 appears at order (v/Λ)², so terms up to (v/Λ)⁶ are in fact present in δc6.
We want to stress that all these contributions have to be taken into account in order to obtain a gauge-invariant finite result for double Higgs production at one loop. We have kept the explicit dependence on the ξ parameter of a generic R_ξ gauge in order to verify that the renormalised amplitudes do not depend on ξ. With this calculational setup, the results are equivalent to those of a standard calculation based on the parameterisation of eq. (2.3), with the corresponding c_2n coefficients renormalised in the MS scheme.
B One-loop amplitudes via form factors
In this section we provide all the form factors that are necessary for the calculation of the one-loop amplitudes for ZHH and WBF HH production entering σ_NLO^pheno(HH) in eq. (3.21). These are the form factors entering the HHVV amplitude (cf. Fig. 18): the HVV vertex, P[HH] and the HHH vertex.
Figure 19: Feynman diagrams contributing to the V[HVV] form factor at one loop.
Here σ_1 and σ_2 are part of σ_LO(HH) in eq. (3.10). Note that σ_30 and σ_40 are written in such a form that they can easily be extended to the case in which the δZ_H^NP contribution from the external legs is resummed, as done in ref. [36]. However, considering |c̄6| < 5, the resummation is not necessary, given that c̄6² δZ_H^(SM,λ) < 4%.
HVV-vertex
The HVV form factor, which we will denote as V[HVV], enters both the single and the double Higgs production calculations. For our calculation its c̄6-independent part can be ignored, while in a generic gauge the c̄6-dependent part V_1[HVV] is given by the three diagrams in Fig. 19. Using the convention that the corresponding Feynman rule is i V^(µ1µ2)[HVV], as we will do also for the other form factors, we can write V_1^(µ1µ2)[HVV] in terms of the tensor structure
T^(µ1µ2)(p_1, p_2, m_V, m_H) = (−6 B_0 − 24 m_V² C_0 + 24 C_00) g^(µ1µ2) − 24 p_1^(µ2) p_2^(µ1) C_12,   (B.6)
where p_1, p_2 are the (incoming) momenta of the two vector bosons, µ_1, µ_2 are the corresponding Lorentz indices, m_V with V = W, Z is the mass of the vector boson, and B_0, C_0, C_00, C_12 are one-loop scalar/tensor integrals defined according to the notation used, e.g., in ref. [51]; their arguments, fixed by p_1, p_2, m_V and m_H, are understood. It is important to note that P(HH) does not depend on c̄8. Indeed, although the second diagram, the seagull, depends on c̄8 through the HHHH vertex, this dependence is exactly cancelled by the Higgs-mass counterterm. We remind the reader that the −δZ_H^NP component in the counterterm has been removed from P[HH].
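The tensor structure in eq. (B.6) can be checked symbolically; the snippet below builds T^(µ1µ2) with the scalar/tensor integrals B_0, C_0, C_00, C_12 kept as abstract symbols, their arguments suppressed exactly as in the text (the overall normalisation of V_1 relative to T is not included).

```python
import sympy as sp

# Abstract loop integrals (arguments suppressed) and kinematic symbols.
B0, C0, C00, C12 = sp.symbols("B0 C0 C00 C12")
mV = sp.symbols("m_V", positive=True)
g  = sp.MatrixSymbol("g", 4, 4)    # metric tensor g^{mu1 mu2}
p1 = sp.MatrixSymbol("p1", 4, 1)   # momentum of the first vector boson
p2 = sp.MatrixSymbol("p2", 4, 1)   # momentum of the second vector boson

# T^{mu1 mu2} = (-6 B0 - 24 m_V^2 C0 + 24 C00) g^{mu1 mu2} - 24 p1^{mu2} p2^{mu1} C12,
# cf. eq. (B.6); the entry (mu1, mu2) of the outer product p2 * p1^T is p2^{mu1} p1^{mu2}.
T = (-6*B0 - 24*mV**2*C0 + 24*C00) * g - 24*C12 * (p2 * p1.T)
print(T)
```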
HHH-vertex form factor
The form factor for the $HHH$ vertex, $V[HHH]$, receives contributions from the diagrams already shown in the main text in Fig. 4. We denote by $\bar c_6^{\max}$ ($\bar c_8^{\max}$) the value of $\bar c_6$ ($\bar c_8$) such that the one-loop amplitude is as large as the tree-level one, i.e. the value of $\bar c_6$ ($\bar c_8$) from which perturbative convergence cannot be trusted anymore. For the estimation of $\bar c_6^{\max}$ we take into account the leading contributions from $V_{30}$ and $P_{20}$, both yielding $\bar c_6^3$ terms. For $\bar c_8^{\max}$ we instead consider as a first step the contribution from $V_{11}$, which is the dominant term when $\bar c_6$ is large, and we compare it with the linearly $\bar c_6$-dependent part of the tree-level vertex. In this way the value of $\bar c_8^{\max}$ is independent of $\bar c_6$.
The value of $\bar c_6^{\max}$ ($\bar c_8^{\max}$) has a kinematic dependence. In the left plot of Fig. 23 we display the dependence of $\bar c_6^{\max}$ on $m(HH)$, ranging from 125 GeV to 3 TeV. The equivalent plot for $\bar c_8^{\max}$, taking the leading term in $\bar c_6$, is shown on the right. Thus, we explore $m(HH)$ values both below the production threshold and in the tail of the $m(HH)$ distribution. As can be seen, in both cases the most stringent constraints, $|\bar c_6| < 5$ and $|\bar c_8| < 31$, arise from the threshold condition $m(HH) = 2 m_H$, while for other values of $m(HH)$ the bound is weaker.
In the case of $\bar c_6^{\max}$ the constraint is independent of the value of the renormalisation scale $\mu_r$ and compatible with the result obtained in ref. [56], where the subdominant contribution of $P_{20}$ was not taken into account. Conversely, in the case of $\bar c_8^{\max}$ the constraint does depend on the value of the renormalisation scale $\mu_r$. However, we verified for $\mu_r = m_H,\, 4 m_H$ that the most stringent $\bar c_8^{\max}$ value anyway arises from the kinematic condition $m(HH) = 2 m_H$.
The constraints of eq. (C.1) have been derived via the analysis of the $H^* \to HH$ amplitude, but also the $HVV$ and $HHVV$ vertices contribute via loop corrections to the quantity $\sigma^{\rm pheno}_{\rm NLO}(HH)$, eq. (3.21), that is used for our phenomenological analysis. Thus, it is important to check whether they can affect the results of eq. (C.1). To this purpose we directly considered the relevant quantities for ZHH and WBF HH at different energies. The quantity $r_{\bar c_6}$ is the ratio between the term with the highest power in $\bar c_6$ from $\Delta\sigma_{\bar c_6}$ and the one with the highest power in $\bar c_6$ from $\sigma_{\rm LO}$, i.e., the ratio of the dominant contributions at tree and one-loop level for large $\bar c_6$ values. Similarly, the quantity $r_{\bar c_8}$ is the ratio between the term with the highest power in $\bar c_6$ from $\Delta\sigma_{\bar c_8}$ and $\sigma_{\rm LO}$. Thus, both of them can be considered as a generalisation of the first step; both the $HVV$ and $HHVV$ vertices are taken into account and phase-space integration is performed.
In the left plot of Fig. 24 we show $r_{\bar c_6}$ for the case of ZHH at 500 GeV and of WBF HH at 1, 1.4 and 3 TeV, which are the phenomenologically relevant scenarios analysed in sec. 4. Requiring $|r_{\bar c_6}| < 1$, we get $|\bar c_6| < 8$ for ZHH at 500 GeV, and $|\bar c_6| < 9, 10, 11$ for WBF HH at 1000, 1400 and 3000 GeV, respectively. Thus, as one would expect from Fig. 23 for the $H^* \to HH$ vertex, at higher energies, far from the production threshold, the limits are weaker. In the right plot we show $r_{\bar c_8}$ for the same energies and processes. Also in this case the obtained limits are weaker than in eq. (C.1), $|\bar c_8| \lesssim 35$-$40$.
| 19,055.4 | 2018-02-21T00:00:00.000 | ["Physics"] |
Impressions of the Biota Associated With Waterfalls and Cascades from a Holocene Tufa in the Zrmanja River Canyon, Croatia
The term “tufa” (“sedra” in Croatian) refers to freshwater carbonate precipitates from low-temperature springs, lakes, and waterfalls, which contain the remains of macro- and microphytes, invertebrates and bacteria. The term “travertine” refers to thermal and hydrothermal calcium carbonate deposits which lack in situ macrophyte and/or animal remains (FORD & PEDLEY, 1996). An increasing amount of work is being done on these rocks (IRION & MÜLLER, 1968; GOLUBIĆ, 1969; CHAFETZ & FOLK, 1984; LOVE, 1985; PEDLEY, 1990, 1992; PENTECOST, 1993; FREYTET & VERRECCHIA, 1998). Recent interest in tufas has been accelerated by suggestions that these carbonates are important archives of palaeoenvironmental and palaeoclimatic information (GOUDIE et al., 1993; PEDLEY et al., 1996; ANDREWS et al., 2000). In particular the global carbon cycle is of special interest in this regard (YUAN, 1997). Additionally, these rocks are potential data repositories for unravelling the question about the modes of calcite precipitation, especially those associated with micro-organisms.
INTRODUCTION
The term "tufa" ("sedra" in Croatian) refers to freshwater carbonate precipitates from low-temperature springs, lakes, and waterfalls, which contain the remains of macro-and microphytes, invertebrates and bacteria.The term "travertine" refers to thermal and hydrothermal calcium carbonate deposits which lack i n situ macrophyte and/or animal remains (FORD & PED-LEY, 1996).An increasing amount of work is being done on these rocks (IRION & MÜLLER, 1968;GOL-UBI∆, 1969;CHAFETZ & FOLK, 1984;LOVE, 1985;PEDLEY, 1990PEDLEY, , 1992;;PENTECOST, 1993;FREYTET & VERRECCHIA, 1998).Recent interest in tufas has been accelerated by suggestions that these carbonates are important archives of palaeoenvironmental and palaeoclimatic information (GOUDIE et al., 1993;PEDLEY et al., 1996;ANDREWS et al., 2000).In particular the global carbon cycle is of special interest in this regard (YUAN, 1997).Additionally, these rocks are potential data repositories for unravelling the question about the modes of calcite precipitation, especially Impressions of the Biota Associated With Waterfalls and Cascades from a Holocene Tufa in the Zrmanja River Canyon, Croatia Gordana PAVLOVI∆ , Joaeica ZUPANI», Esad PROHI∆ and Darko TIBLJA© those associated with micro-organisms (M E R Z, 1992; SARASHINA & ENDO, 1998;YATES & ROBBINS, 1998;CASTANIER et al., 1999).Tufa accumulations show widespread development throughout the Dinaric karst region of Croatia where they are associated with thick carbonate sections of Upper Triassic to Cretaceous age (HERAK et al., 1969;POL©AK, 1979).The spectacular series of tufa dams, lakes and waterfalls of the Plitvice and Krka National Parks are of interest not only to scientists (GOLUBI∆, 1957;PEVALEK, 1958;MATONI»KIN & PAVLE-TI∆, 1963;SRDO» et al., 1985SRDO» et al., , 1994;;BOAEI»EVI∆, 1990, 2000;HORVATIN»I∆ et al., 2000) but are also of growing interest to tourists.Although the tufa accumulations, which are developed in many parts of the Zrmanja river area, have been the subject of several studies, these were mainly focused on the identification of the present vegetation (MATONI» KIN & PAVLE-T I ∆, 1961, 1962).There is only one research article on Holocene waterfall tufas in the Zrmanja river canyon, which establishes a model of meteoric diagenesis of tufa deposits by describing and interpreting the relationship between petrographic features and geochemical processes (PAVLOVI∆ et al., 2002).Since relatively little research has been done on these deposits, petrographic analyses have been conducted in order to provide descriptions of sampling localities, original fabric of the tufa and the effects of freshwater on these rocks.
GEOLOGICAL SETTING
The study was undertaken in the Zrmanja river canyon, which is situated in the northernmost part of Dalmatia (Fig. 1). This area is largely composed of Cretaceous limestones (IVANOVIĆ et al., 1976) while the main portion of the clastic succession consists of Palaeogene clastic carbonates (Promina beds), Neogene clayey marls and Holocene river deposits (FRITZ et al., 1978). The morphology of the Zrmanja canyon evolved through deformation in the Eocene and Oligocene, which produced compressional structures striking NW-SE (FRITZ, 1972). The largest tributary of the Zrmanja is the Krupa river, and tufa formation readily occurs downstream from their confluence, which is situated within the settlement of Sastavci. Faster precipitation rates are favoured downstream, owing to increased saturation levels with respect to calcite as well as increased water flow and temperature of water (MATONIČKIN & PAVLETIĆ, 1961). Due to river incision into earlier deposited tufa, fossil tufa profiles, up to 10 m in height, are found along the river banks (FRITZ, 1972). The investigated deposits were formed from Holocene times to the present day, with the main precipitation period occurring during the Atlantic period, i.e. 5800-3200 yr B.P. (ŠEGOTA, 1968).
METHODS
Field-work was conducted around 3 waterfalls on the Zrmanja river, and at a small cascade on the Krupa river where tufa precipitation is presently active (Fig. 1).Recent precipitates were sampled from the river bed (in some instances from the crest of waterfalls/cascade), where a variety of macrotypes were collected.The fossil tufa samples were collected in profiles along the flanks of the Zrmanja.At each sample point full site descriptions were made, backed up with photographs.All samples were air dried and macroscopic descriptions and photographs of representative hand specimens taken.Due to their crumbly fabric it was necessary to impregnate a representative suite of samples under vacuum with epoxy resin in order to obtain standard thin sections for study using a petrographic microscope.
SAMPLING LOCALITIES
The investigated tufa deposits have the form of phytoherm build-ups which correspond closely with the descriptions of CHAFETZ & FOLK (1984) and PED-LEY (1990) for the waterfall or cascade environmental model of cool freshwater tufas.They are most commonly wedge-shaped with the thickest parts relating to locations of maximum turbulence such as at locations where waterfalls/cascades occur.Both inorganic (CO 2 degassing) and organically (the excretion of highly adhesive mucopolysaccharides) mediated precipitation is enhanced at these sites, though the relative abundance of each type is yet to be ascertained (EMEIS et al., 1987;LORAH & HERMAN, 1988;MERZ-PREISS & RIDING, 1999).
The streambed of the Jankovića buk waterfall (Fig. 2; approximately 3-4 m high), which is very shallow and densely overgrown by an algal mat, has an undulatory appearance due to convex surfaces separated by narrow troughs. It is a habitat that is submerged or at least continuously kept wet and therefore heavily colonized by cyanobacteria, which facilitate binding and/or precipitation of calcium carbonate onto their sheaths, thereby often forming hemispherical stromatolitic cushions. Certain species occur only under specific hydraulic conditions and their local relief can cause a rise of their own local water table level by upstream ponding, while disruption of flow enhances the degree of turbulence at the substrate/water interface. These more turbulent situations are favourable for the development of mosses, which represent the next stage of a vegetational succession characteristic of tufa waterfalls (MATONIČKIN & PAVLETIĆ, 1960).
The flanks of the Berberi buk (Fig. 3) and Ogari buk (Fig. 4) waterfalls (approximately 7-8 m high) are abundantly lined by perennial higher plants (trees) which provide an ideal habitat for the development of the so called "vegetation of shade" that is extensively described by MATONI» KIN & PAVLETI∆ (1961).This vegetation of shade, densely established on waterfall faces, is represented by ecologically distinctive species of overhanging mosses which form small dams and decorative "curtains" with cavernous areas inside and underneath the tufa accumulations.Both waterfalls, in contrast to the JankoviÊa buk waterfall, represent progressive stages of tufa development due to the con-siderable growth and diversity of supporting plant material.
At the face of the Krupa cascade (Fig. 5), up to 0.5 m high, the deposit thickens both in a down-and upstream direction with regard to the streambed.In a downstream direction, along the streambed lie numerous coated grains, the overall dimensions of which are up to 15 cm in diameter.
ORIGINAL FABRIC
On the basis of macroscopic characteristics, such as the type of encrusted substrate (i.e. a framework of encrusted plant remains and the kind of plants which participate in rock formation) and porosity type, the studied tufas are similar in structure and composition to deposits reported by IRION & MÜLLER (1968) and LOVE (1985). Accordingly, three major morphologies of tufas can be distinguished in this active system: 1) encrusted mossy deposits, 2) algally laminated crusts, and 3) algally coated grains. Nevertheless, an attempt at classifying samples has proven to be a problem. The investigated deposits tend to exhibit a great diversity of structures, textures, morphologies and constituents, differing very much from deposit to deposit.
Since aquatic mosses are ubiquitous at sites of water turbulence, they occupy a substantial part of the tufa waterfalls and constitute the bulk of the studied deposits.Due to the macroscopic dimensions of these plants, their moulds (Pl.1/1) are readily visible in the rock compared to algal morphologies.The overall framework of the deposits is provided by elongated, parallel stems whereby the 0.1-0.3mm diameter holes (stem moulds) are formed, the orientations of which range from vertical to horizontal.Occasionally, up to 2 cm long cavities are present.Most of the specimens are extremely friable, and brown to light brown in colour.In thin-section, encrustations consist of microcrystalline low-magnesian calcite (PAVLOVI∆ et al., 2002) composed of peloidal micrite that grow upon all exposed surfaces resulting in a highly porous (up to 75% porosity) fragile structure (Pl.1/2).Encrusting crystals very often surround the mosses in a concentric pattern that closely fit the morphology of their stems.These are responsible for the preservation of well-defined moulds after plant decay (Pl.1/3).The same arrangement is produced on the protonema (Pl.1/4) which represent the first stadium of the growth of mosses (PAVLETI∆, 1968).This kind of precipitation is attributed to the photosynthesis-respiration plant cycle (GOLUBI∆, 1973) but frequently is enhanced by prokaryote-microphyte biofilms which colonize the mosses (P E D L E Y, 1992).The female plants can produce a sporophyte plant composed of a foot, seta and capsule; the latter is a distinguishing feature between genera and species (PAVLETI∆, 1968).With progressive encrustation, the mosses begin to decay turning reddish-orange (Pl.1/5) and finally black (in plane light).
Algally laminated crusts frequently form stromatolites in the deposits. They consist of alternating laminations paralleling the substrate, with individual laminae ranging up to 4 mm in thickness (Pl. 1/6). Such bacterioherms almost totally lack mosses and higher plants. They are undulatory at both large and small scales, and alternate with horizons containing small lenticular openings caused by degradation processes or faunal (burrowing) activity. The crusts mainly display dense, micritic features, but porous, friable deposits are also likely to occur. On a microscopic scale, these rocks are characterized by thin internal micritic and microsparitic laminations (Pl. 2/1) as a result of continuing encrustation by algal filaments which are approximately up to 0.005 mm in diameter. Most of these filaments are intertwined or parallel. It is possible to interpret a dense lamina (Pl. 2/2), composed of micrite-encrusted filaments, as a consequence of rapid spring growth, and a more porous lamina as caused by decreased algal growth in summer/autumn (IRION & MÜLLER, 1968; LOVE, 1985; JANSSEN et al., 1999). These investigators have suggested that this laminated fabric should be explained by seasonal variation in the growth of the plants and especially cyanobacteria and algae. Some algal colonies form hemispherical to subspherical masses of radiating filaments coated with micrite (Pl. 2/3). Complex diagenetic recrystallizations can lead to subhedral or euhedral sparite crystals (Pl. 2/4) which form fanlike or radial palisadic structures (KENDALL & BROUGHTON, 1978; BRAITHWAITE, 1979; CHAFETZ et al., 1985; FREYTET & VERRECCHIA, 1998, 1999). This is explained by the fact that many small nuclei, as a result of high degrees of supersaturation in the vicinity of metabolising cells, can coalesce primarily along the c axis with little lateral growth and generate the large columnar crystals. However, some of these spar crystals are almost certainly produced by obligatory calcifying cyanobacteria, especially members of the Rivulariaceae (PEDLEY, 1992). In this case the mechanism of growth of calcite crystals is interpreted in the scope of biological mediation related to metabolism.
Since tufa-forming environments are characterized by the highly complex interplay between organic and inorganic processes, it is possible to find that the dominance of mosses and algae vary greatly and are intermixed in differing proportions.Some specimens (Pl.2/3), classified as mossy deposits according to their structure, have conspicuously low porosity due to the fact that calcite is associated with abundant algal and microbial communities attached to the moss stems and leaves; this feature is reported by many investigators (e.g.PENTECOST, 1987;WINSBOROUGH & GO-LUBI∆, 1987;PEDLEY, 1992).Also, a large proportion of cyanobacterial biomass can host scattered occurrences of mosses (Pl.2/1).It can be said that the intimate association of procaryotes, macrophytes, microphytes and fauna actively create their microenvironment in order to keep up with individual ecological requirements (DRYSDALE, 1999).For these reasons simultaneous build-up and decomposition of carbonates can produce varied and often complex patterns of encrustation (Pls.1/2, 2/1).
Algally coated grains are also common.These take the form of oncoids formed from various particles which may be coated in the streams (Pl.2/5).Many nuclei are clasts of Cretaceous limestone up to 15 cm in diameter, while micritic envelopes developed due to the process of biogenic carbonate precipitation as described by SCHNEIDER (1977).The carbonate substrate (limestone) had been attacked by endolithic microorganisms the activity of which has left recognizable boring patterns (Pl.2/6).The following microbial colonization of the surface of the nucleus produced concentric banding with the cortices similar in thickness to the laminations seen in the algal crusts.In most cases, the upper portion of a clast has a soft, bumpy surface while the bottom of the clast has a hard, smooth surface.In addition, the cortices are thinner along the base of clast due to the inability of the larger grains to roll.In thin section, the cortical laminations consist of two alternating layers (Pl.2/2): ( 1) elongate spar crystals, cross cut by Vshaped clumps of filamentous cyanobacterial "bushes", and ( 2) individual subparallel cyanobacterial filaments calcified by micrite.This feature is almost identical to that recorded by LOVE (1985).
EFFECTS OF FRESHWATER ON TUFA
A low degree of diagenetic alteration of primary tufa fabric is one of the main characteristics of investigated Holocene tufas.This is due to the presence of an insoluble residue, which greatly retards textural alteration, and due to the fact that the studied rocks are young tufas (PAVLOVI∆ et al., 2002).Consequently, there is very little recrystallization or carbonate cement development.
Occurrences of diagenetic cementation are present in samples either characterized by a combination of large cavities and/or a low insoluble residue content (Pl.3/1, 3/2) or by sampling sites related to places with vigorous water flow (Pl.3/3).Meniscus texture (Pl.3/2), which is a common feature in the vadose zone (CHAFETZ et al., 1985), is composed of anhedral and/or rhombohedral crystals, generally clear in appearance, ranging in size from 0.01 mm to approximately 0.05 mm.Isopachous and drusy mosaic cements are commonly associated with the phreatic zone (CHA-FETZ et al., 1985).Some pores are lined with equant rhombohedral or bladed crystals of isopachous cements (Pl.3/3) up to 0.1 mm in length and up to 0.05 mm in width.Others are filled with drusy fabric (Pl.3/1) displaying an increase in crystal size (up to 0.1 mm) away from the substrate or cavity wall.We ascribe large, well-ordered spar crystals (Pl.3/1) to slow, inorganic precipitation inside of the macrophyte boundary layer where water flow is reduced; an in-depth description of this occurrence is provided in PEDLEY (1992).This study has shown the complex cementation pattern that is characteristic for non-marine settings, indicating a range of microenvironments in which formation occurs.These are in agreement with the work of CHAFETZ et al. (1985).
After formation, some of the algally laminated crusts can undergo aggradational neomorphism (a type of recrystallization) that results in laminated, coarsely crystalline, neomorphic crusts. The specimen which is ascribed to this process is heavier, harder, less porous and more massive with regard to the porous, light tufa (Pl. 3/4). We believe that diagenesis commences inside the V-shaped algal bushes (Pl. 2/2) which are encased within a micritic or sparry calcite phase; the process is explained in detail by LOVE & CHAFETZ (1988). BRADLEY (1929, in LOVE & CHAFETZ, 1988) attributed the cause of the process to ammonia from decomposing algae. This sparitization occurs at the expense of the overlying micritic crystals (Pl. 3/5) and gives rise to the disappearance of the original micritic phase and organic filaments. With time, it results in continuous layers composed almost entirely of coarse, columnar crystals. In the literature, different hypotheses are put forward to explain such sparite crusts, i.e. the question as to whether these deposits represent diagenetic and recrystallized features or physicochemical precipitates remains unresolved (LOVE & CHAFETZ, 1988; PEDLEY, 1992; JANSSEN et al., 1999; FREYTET & VERRECCHIA, 1999). Also, this polycyclic isopachous fringe sequence is explained by biological mediation associated with a procaryote-microphyte biofilm producing peloidal micrite and inorganic precipitation resulting in palisade spar (PEDLEY, 1992; FORD & PEDLEY, 1996). PAVLOVIĆ et al. (2002) favour neomorphism, as explained by LOVE & CHAFETZ (1988), based on the overall geochemical evidence.
In close proximity to the large pores the coarsening of crystal sizes in micrite is common because of enhanced meteoric diagenesis (Pl.3/6).
CONCLUSION
Phytoherm deposits abound in the studied sites, and show preserved organic structures as well as well-defined impressions of their shape when in life position. The fairly rapid development of tufa build-ups at the Ogari buk and Berberi buk waterfall sites is greatly enhanced by the vigorous growth of macro- and microphytic vegetation. Based on macroscopic observation, the predominant types of encrustation are: (1) encrusted mossy deposits, (2) algally laminated crusts, and (3) algally coated grains. Encrustations on mosses are composed of peloidal micrite which often mimics their morphology. Algal encrustations are characterized by thin alternating micritic and microsparitic/sparitic laminae. With increasing age the deposits show postgenetic transformations which result in meniscus, isopachous and drusy mosaic cement growths that are encountered primarily in open-textured fabrics. Specimens belonging to algally laminated crusts have undergone aggradational neomorphism which resulted in the development of a dense crust composed of laminae containing coarse, columnar crystals.
Fig. 2
Fig. 2 Jankovića buk waterfall cascading over Cretaceous limestone which is densely overgrown by an algal mat, whereas mosses represent small, isolated patches at the sites of turbulent water flow.
Fig. 3
Fig. 3 Overview of a heavily vegetated tufa dam at Berberi buk waterfall.Vigorous water flow is associated with macrophyte hummocks (mosses).
Fig. 4
Fig. 4 A typical waterfall site (Ogari buk) showing the large cavity (centre) developed behind the overhangs. A series of moss-covered cascades develops on the upstream side of the waterfall.
Fig. 5
Fig. 5 Overview of the Krupa cascade (up to 0.5 m in height) that is largely devoid of macrophytes. Submerged algally coated grains are situated on the downstream side of the cascade.
| 4,271.4 | 2002-06-28T00:00:00.000 | ["Environmental Science", "Geology"] |
Iterative Operator-splitting Methods with Embedded Discretization Schemes
In this paper we describe the computation of iterative operator-splitting methods, which are known as competitive splitting methods, see [10] and [11]. We derive a closed form, based on commutators, for the iterative method. The time discretization schemes apply extrapolation schemes and Padé approximations to the exp-functions. The spatial discretization considers Lax-Wendroff methods, which are combined with the iterative schemes. The error analysis describes the approximation errors. Numerical examples of ordinary and partial differential equations support the fast-computation ideas.
Introduction
In this paper we concentrate on the approximation of the solution of the linear evolution equation
∂_t c(t) = L c(t) = (A + B) c(t), c(0) = c_0, t ∈ (0, T],
where L, A and B are unbounded operators and T ∈ R^+. We solve our equation with the numerical scheme (2)-(3) below, with c_0(t^n) = c^n, c_{-1} = 0.0 and c_{i+1}(t^n) = c^n, where c(t^n) is the approximation at t = t^n.
The numerical method (2)-(3) can be written algebraically as a 2-stage iterative splitting scheme:
c_i(t) = exp(At) c(0) + ∫_0^t exp(As) B c_{i-1}(s) ds, (4)
c_{i+1}(t) = exp(Bt) c(0) + ∫_0^t exp(Bs) A c_i(s) ds, (5)
where i = 1, 3, 5, . . . and the initial or start solution is given as c_0(t) = 0 or constant. Further we have the condition that c^n is the known split approximation at the time level t = t^n. The split approximation at the time level t = t^{n+1} is defined as c^{n+1} = c_{2m+1}(t^{n+1}). (Clearly, the function c_{i+1}(t) depends on the interval [t^n, t^{n+1}], too, but, for the sake of simplicity, we omit the dependence on n in our notation.) Our motivation is to design effective algorithms for large systems of equations. The problem arose in the field of optimizing the computation of the iteration steps of very large systems of differential equations, with a fixed time scale and one discretization method. Here novel ideas for computing exp-functions can be used, so we apply the multi-product expansion as an extrapolation method, see [8]. The iterative splitting method can be formulated as a generalization of a waveform relaxation approach, see [5], so that numerical schemes for sparse solvers can also be incorporated into such a method. For partial differential equations the spatial discretization methods are very important, and we concentrate on efficient embedded discretization schemes that overcome the problem of order restriction while using lower-order spatial discretization schemes, see [12].
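As an illustration of how the two-stage iteration of eqs. (4)-(5) can be evaluated numerically, the sketch below advances one time step of a linear system c' = (A + B)c by alternately solving the A- and B-sub-problems, with the previous iterate entering as a frozen source term. This is a minimal sketch rather than the scheme used later in the paper; the solver tolerances, the grid for storing iterates and the constant start iterate c_0(t) = c(0) are assumptions made for readability.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

def iterative_splitting_step(A, B, c0, tau, n_iter=4, n_grid=64):
    """One time step [0, tau] of the two-stage iterative splitting.

    A-stage:  c_i'     = A c_i     + B c_{i-1}
    B-stage:  c_{i+1}' = B c_{i+1} + A c_i
    Each sub-problem starts from the same initial value c0; the previous
    iterate enters as a known (interpolated) source term.
    """
    ts = np.linspace(0.0, tau, n_grid)
    # start iterate: c_0(t) = c0 (a constant function is a common choice)
    prev = np.tile(c0[:, None], (1, n_grid))

    for i in range(n_iter):
        a_stage = (i % 2 == 0)                 # even: iterate over A, odd: over B
        M, N = (A, B) if a_stage else (B, A)

        def source(t):
            # linear interpolation of the previous iterate at time t
            return np.array([np.interp(t, ts, prev[k]) for k in range(len(c0))])

        def rhs(t, c):
            return M @ c + N @ source(t)

        sol = solve_ivp(rhs, (0.0, tau), c0, t_eval=ts, rtol=1e-10, atol=1e-12)
        prev = sol.y
    return prev[:, -1]                         # approximation of c(tau)

if __name__ == "__main__":
    A = np.array([[-1.0, 0.2], [0.0, -0.5]])
    B = np.array([[-0.3, 0.0], [0.1, -2.0]])
    c0 = np.array([1.0, 1.0])
    tau = 0.1
    print("iterative splitting:", iterative_splitting_step(A, B, c0, tau))
    print("exact solution     :", expm((A + B) * tau) @ c0)
```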
Historically, effective computational methods can be derived by considering the local character of each equation part. So in the last years the idea of splitting into simpler equations has become established, see [16], [7] and [14]. We concentrate on choosing extrapolation to obtain higher-order schemes without losing efficiency in computing the operators.
The outline of the paper is as follows. The operator-splitting method is introduced and the error analysis of the operator-splitting method is presented in Section 2. A closed form is discussed in Section 4, where we describe an efficient computation of the iterative splitting method based on extrapolation methods. In Section 6 we present the numerical results for the methods. Finally, we discuss future work in the area of iterative methods.
Error analysis
The following algorithm is based on the iteration with fixed splitting discretization step-size τ; namely, on the time interval [t^n, t^{n+1}] we solve the following sub-problems consecutively for i = 0, 2, . . ., 2m (cf. [16]):
∂c_i(t)/∂t = A c_i(t) + B c_{i-1}(t), with c_i(t^n) = c^n,
∂c_{i+1}(t)/∂t = A c_i(t) + B c_{i+1}(t), with c_{i+1}(t^n) = c^n,
and c_0(t^n) = c^n, c_{-1} = 0.0, where c^n is the known split approximation at the time level t = t^n. The split approximation at the time level t = t^{n+1} is defined as c^{n+1} = c_{2m+1}(t^{n+1}). (Clearly, the function c_{i+1}(t) depends on the interval [t^n, t^{n+1}], too, but, for the sake of simplicity, we omit the dependence on n in our notation.)
Two-step iterative schemes for operators
In the following we discuss the consistency of the 2 stage iterative method, taken into account to iterate over both operators.
Theorem 1. Let us consider the abstract Cauchy problem in a Banach space
where A, B : X → X are given linear operators which are generators of the analytical semigroups and c 0 ∈ X is a given element.We assume dom(B) ⊂ dom(A), so we are restricted to balance the operators.Further, we assume the estimations of an unbounded operator, see [15]: where α ∈ (0, 1) and we assume B 1−α is the infinitesimal generator of an analytical semigroup for all α ∈ (0, 1), see [4].
Further we assume: where α, β, p, q ∈ (0, 1). The error of the first time step is of accuracy O(τ_n^{i_A α}), where τ_n = t^{n+1} − t^n and we have equidistant time steps, with n = 1, . . ., N. Further, i_A denotes the number of iterative steps with operator A.
For the first iterations we have: and for the second iteration we have: In general we have: For the odd iterations: i = 2m + 1 for m = 0, 1, 2, . . .
We have the following solutions for the iterative scheme: The solutions for the first two equations are given by the variation of constants: For the recursive even and odd iterations we have the solutions: For the odd iterations: i = 2m + 1 for m = 0, 1, 2, . . .
For e 2 we have: We obtain: For odd and even iterations, the recursive proof is given in the following.In the next steps, we shift t n → 0 and t n+1 → τ n for simpler calculations, see [15].The initial conditions are given with c(0) = c(t n ).
For the odd iterations means the iteration over operator A: i = 2m + 1, with m = 0, 1, 2, . .., we obtain for c i and c: By shifting 0 → t n and τ n → t n+1 , we obtain our result: where α = min i j=1 {α i } and 0 ≤ α i < 1 and i A is the number of odd iteration steps means over the operator A.
The same proof idea can be applied to the even iterative scheme.
Where for the even scheme we could not obtain a higher order, see: and we have t n exp(B(t n+1 − s 1 ))A. . .
So we could not improve the order in the weaker iterations, so we need at least some strong iterative steps.
Remark 1. An example can be assumed with
We apply two iterative steps with operator A and have the following local errors: and hence where 0 ≤ α < 1 and C, C are constants independent of τ n .
Convergence results
To derive convergent results, we apply stability and consistency done in the previous sections.
The convergence result is obtained under the assumption B = A^{1−α}, applying the iteration to operator A (one-sided iteration).
We obtain the following results: where m are number of the local steps, C is a constant independent of τ n .
For i A = 2: where m are number of the local steps, C is a constant independent of τ n .
The same recursive argument can be done for arbitrary i A ∈ N + and we obtain where m are number of the local steps, C is a constant independent of τ n and i A are the number of iterative steps.
Remark 2. Local we have an error O(t i
This means we need sufficiently many iterative steps i_A. At least for α ∈ (0, 1) and an assumed convergence (t^c) with order c > 0, we need a corresponding number of iterative steps. In numerical experiments we obtain much better results and achieve second- or third-order methods with two and three iterative steps.
In the next section we describe the computation of the integral formulation with exp-functions.
Computation of the iterative splitting schemes: Closed formulation
In recent years the computational effort of evaluating integrals involving exp-functions has increased; we present a closed form and re-substitute the integral with closed functions. Such benefits accelerate the computation and make parallelization possible, see [2] and [8].
Recursion. We study the stability of the linear system (6) and (7), based on different closed formulations. We consider a suitable vector norm ‖·‖ on R^M, together with its induced operator norm. The matrix exponential of Z ∈ R^{M×M} is denoted by exp(Z). We assume that ‖exp(τ_n A)‖ ≤ K_A and ‖exp(τ_n B)‖ ≤ K_B, where K_A, K_B ∈ R^+ are given as the growth estimates of the exponential functions, see [5].
It can be shown that the system (1) implies ‖exp(τ_n(A + B)) c^n‖ ≤ K ‖c^n‖ and is itself stable.
For more transparency of the splitting scheme ( 6) and ( 7), we consider a wellconditioned system of eigenvectors whereby we can consider the eigenvalues λ 1 of A and λ 2 of B instead of the operators A, B themselves.
We assume that all initial values c_i(t^n) = c_approx(t^n) with i = 0, 1, 2, . . . are as above, where m is the order, see [5].
Further we assume λ_1 ≠ λ_2; otherwise we do not consider the iterative splitting method, since the time scales are equal, see [7].
A(α)-stability. We define z_k = τ λ_k, k = 1, 2. We start with c_0(t) = u^n and obtain: where S_m is the stability function of the scheme with m iterations.
Let us consider the A(α)-stability given by the following eigenvalues in a wedge: For the A-stability we have: The stability of the splitting schemes is given in the following theorems with respect to A- and A(α)-stability.
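The wedge and the A-stability requirement mentioned above did not survive the extraction; in the standard convention (which we assume here, although the source may phrase it differently) they read:

```latex
\text{A}(\alpha)\text{-stability:}\quad |S_m(z_1, z_2)| \le 1
\quad \text{for all } z_k \in \Sigma_\alpha
      = \{\, \zeta \in \mathbb{C} : |\arg(-\zeta)| \le \alpha,\ \zeta \neq 0 \,\},
\qquad
\text{A-stability:}\quad \alpha = \tfrac{\pi}{2}
\;\; (\operatorname{Re} z_k \le 0).
```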
Computation of the iterative splitting methods: Closed formulation with integral computations
A further computation of the iterative schemes are given by the variation of constants, see for exponential splitting schemes [15].
To obtain analytical solutions of the differential equations: . . .
where c(t^n) is the initial condition and A, B are bounded operators; the initialization c_{1,iter}(t) = exp(Bt) exp(At) c(t^n) is a first-order splitting scheme.
The application of the variation of constants is given as: We apply numerical integration with the trapezoidal rule for the first integral and Simpson's rule for the second integral and obtain: where we compute c_{1,iter}(t^{n+1}), c_{2,iter}(t^{n+1}), and so on. The fourth-order method can also be computed with Bode's or Romberg's rules. Example:
where we have to compute the subinterval results: We have to compute t ∈ [0, T], with t^0, t^1, . . ., t^N and N the number of time steps, where the time steps are equidistant with τ = t^j − t^{j−1}, j = 1, . . ., N.
The generalization is given with Romberg's extrapolation scheme is given in the following algorithm.
(i) We start with n = 0 and the initial condition c(0), and starting solution We compute the time interval t n , t n+1 and the solution c(t n+1 ) is obtained by: (a) We start with i = 2 where and We compute the integrals of the functions f A,i−1 , f B,i by: where ) we increase i = i + 1, till i = I and we go to (iii) (iii) The result is given as c(t n+1 ) = c I (t n+1 ), we increase n = n + 1 and goto (ii), if n = N we are finished.
Remark 3. The same recurrent argument can be applied to the next iterative scheme. A higher-order numerical integration method is then necessary. Here we only have to apply matrix multiplications and can skip the time-consuming integral computations. Only two evaluations of the exponential function, for A and B, are necessary. The main disadvantage of computing the iterative scheme exactly is the computation of time-consuming matrix inverses. These can be avoided with numerical methods.
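To make the quadrature-based closed formulation concrete, the sketch below evaluates the variation-of-constants integral of a single A-stage with the trapezoidal rule; replacing np.trapz with Simpson's rule for the next iterate follows the same pattern. The kernel exp(A(τ−s)) B c_{i−1}(s), the node count and the constant start iterate are assumptions of this illustration, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import expm

def voc_integral_trapezoid(A, B, c_prev, t_nodes):
    """Trapezoidal approximation of  int_0^tau exp(A(tau - s)) B c_prev(s) ds.

    c_prev : array of shape (n, m) with the previous iterate evaluated at the
             quadrature nodes t_nodes (m nodes spanning [0, tau]).
    """
    tau = t_nodes[-1]
    vals = np.stack([expm(A * (tau - s)) @ (B @ c_prev[:, j])
                     for j, s in enumerate(t_nodes)], axis=1)
    return np.trapz(vals, t_nodes, axis=1)

def closed_form_iterate(A, B, c0, tau, n_nodes=5):
    """One A-stage iterate  c_i(tau) = exp(A tau) c0 + integral term,
    with the previous iterate frozen at the constant start value c0."""
    t_nodes = np.linspace(0.0, tau, n_nodes)
    c_prev = np.tile(c0[:, None], (1, n_nodes))      # c_0(t) = c0
    return expm(A * tau) @ c0 + voc_integral_trapezoid(A, B, c_prev, t_nodes)
```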
We have the following assumptions for the stability formulations: The stability of the methods is given in the following Theorem 2.
Theorem 2. We have the following stability for the integral-formulated iterative schemes: For the stability function S_{i,iter} of the iterative splitting schemes we have, with ω ∈ [0, 1] and the initial conditions c(t^n) = c^n, where i is the iteration index.
Proof. We prove the stability of S_{1,iter}.
For the extrapolation schemes we have the stability function: For both possibilities z_1 → −∞ and z_2 → −∞ we have S_{1,iter} → 0. For the higher iteration steps we take into account the assumptions (59)-(60).
Based on these assumptions, we write for i = 2: For both possibilities z_1, z_2 → −∞ we have S_{2,iter} → 0. The same proof idea is used for the higher iterative steps.
Spatial discretization schemes
To apply our method to partial differential equations, we have to consider higher order spatial discretization schemes.
In the following we discuss an efficient scheme based on Lax-Wendroff.
The Lax-Wendroff scheme in one dimension
The 2nd-order Lax-Wendroff scheme for the 1D advection-diffusion equation is: where ν = vΔt/Δx is the Courant-Friedrichs-Lewy number. Since the Lax-Wendroff scheme is made for hyperbolic PDEs only, we have to examine the stability closely and carry out a von Neumann stability analysis. We encounter the following stability condition for the Fourier coefficients of u_j(t^n): Obviously this statement is always true for right-hand sides ≥ 1, so it is sufficient to estimate the solutions of the remaining case. In Figure 5.1 we see the pairs (ν, z) for which the statement holds. Of course only the region {(ν, z), ν ≥ 0, z ≥ 1} is of practical interest. The figure suggests that for all ν ≥ 0 there are z ≥ 1 leading to a stable scheme. For the small D(z, v, Δt) for which this is true we may say that the problem is convection dominated.
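For orientation, the following sketch implements the classical second-order Lax-Wendroff update for the pure advection part on a periodic grid; the diffusion contribution (the z-dependent terms discussed above) is deliberately omitted, so this is a simplified illustration of the stencil rather than the full scheme of the paper.

```python
import numpy as np

def lax_wendroff_advection(u, nu):
    """One Lax-Wendroff step for u_t + v u_x = 0 on a periodic grid.

    nu = v*dt/dx is the CFL (Courant-Friedrichs-Lewy) number; the pure
    advection scheme is stable for |nu| <= 1.
    """
    up = np.roll(u, -1)   # u_{j+1}
    um = np.roll(u, +1)   # u_{j-1}
    return u - 0.5 * nu * (up - um) + 0.5 * nu**2 * (up - 2.0 * u + um)

# example: advect a Gaussian pulse once around a periodic domain
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.exp(-200.0 * (x - 0.5) ** 2)
nu = 0.8
for _ in range(int(round(len(x) / nu))):
    u = lax_wendroff_advection(u, nu)
```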
Generalization to two dimensions
The advection-diffusion equation in two dimensions: Application of the Lax-Wendroff scheme yields the second-order accurate formula: We use central differences to achieve a second-order scheme. This is similar to the 1D scheme (77) except for the new cross term incorporating ∂_{xy}u. Introducing the Courant-Friedrichs-Lewy numbers and the constants z_x, z_y, this finite difference scheme is now brought into conservation form: At this point we introduce the flux limiter φ(θ_i), which is used to handle steep gradients of u(x, y, t), where the Lax-Wendroff scheme adds spurious oscillations (see also [6]). θ_i is a measure for the slope of u and is estimated in the x and in the y directions respectively. For our purposes we chose the van Leer limiter φ(θ) = (θ + |θ|)/(1 + |θ|).
Dimensional splitting
During the numerical experiments we will compare the discussed Lax-Wendroff scheme to the dimensional splitting, which is also translated into a second-order finite difference scheme, filtering out the numerical viscosity. In this case we use an implicit BDF2 method to achieve second order in time, yielding: with the spatial discretization. We suppressed the dependencies u_{ij}(t^n) on the right-hand side.
Advection-diffusion splitting
Another way of splitting the advection-diffusion equation is the following: This splitting will also be used during the experiments.The implementation of the operators is very similar to the above finite differencing schemes.However numerical aspects demand the use of an explicit method namely the Adams-Bashforth method.For further discussions on the practicability of finite differencing schemes in conjunction with the iterative splitting see [10].
Numerical experiments
In the following we present numerical experiments with the closed computable iterative splitting methods and their benefits to standard schemes.
First experiment
We deal with the 2-dimensional advection-diffusion equation and periodic boundary conditions with the parameters: The given advection-diffusion problem has an analytical solution which we will use as a convenient initial function. We apply dimensional splitting to our problem, where we use a 1st-order upwind scheme for ∂/∂x and a 2nd-order central difference scheme for ∂²/∂x²; we achieve a 2nd-order finite difference scheme because the new diffusion constant eliminates the first-order error (i.e. the numerical viscosity) of the Taylor expansion of the upwind scheme. L_y u is derived in the same way.
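One possible reading of the construction just described (first-order upwind advection whose numerical viscosity vΔx/2 is subtracted from the physical diffusion coefficient, so that the combined operator stays second-order accurate) is sketched below as a sparse 1D operator. The sign convention v > 0, the periodic boundaries and the grid handling are assumptions of this sketch, not the authors' code.

```python
import numpy as np
import scipy.sparse as sp

def L_x(n, dx, v, D):
    """1D operator  L_x u = -v u_x + D u_xx  on a periodic grid, built from a
    first-order upwind advection stencil (v > 0 assumed) and a central second
    difference whose coefficient is reduced by the numerical viscosity
    v*dx/2 of the upwind part, so the combination stays second order."""
    # backward (upwind) difference for u_x, periodic
    Dx = sp.diags([1.0, -1.0], [0, -1], shape=(n, n), format="lil")
    Dx[0, n - 1] = -1.0
    Dx = Dx.tocsr() / dx
    # central second difference for u_xx, periodic
    Dxx = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="lil")
    Dxx[0, n - 1] = 1.0
    Dxx[n - 1, 0] = 1.0
    Dxx = Dxx.tocsr() / dx**2
    D_eff = D - 0.5 * v * dx          # remove the upwind numerical viscosity
    return -v * Dx + D_eff * Dxx
```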
We apply a BDF5 method to gain 5th-order accuracy in time: Our aim is to compare the iterative splitting method with AB-splitting. Since [A_x, A_y] = 0 there is no splitting error for the AB-splitting, and therefore we cannot expect to achieve better results with the iterative splitting in terms of general numerical accuracy. Instead we will show that the iterative splitting outcompetes AB-splitting regarding the computational effort and round-off errors. But first there are some remarks which have to be made concerning the special behavior of both methods when combined with high-order Runge-Kutta and BDF methods.
Splitting and schemes of high order in time.
Concerning AB-splitting. The principle of AB-splitting is well known and simple. The equation du/dt = Au + Bu is broken up into two sub-equations which are connected via u^{n+1}(t) = u^{n+1/2}(t + Δt). This is pointed out in figure (6.2). AB-splitting works very well for any given one-step method like the Crank-Nicolson scheme. Not taking into account the splitting error (which is an error in time), it is also compatible with high-order schemes such as explicit/implicit Runge-Kutta schemes.
Things look different if one tries to use a multi-step method like the implicit BDF or the explicit Adams method with AB-splitting; these cannot be properly applied, as is shown by the following example. Choose for instance a BDF2 method which, in case of du/dt = f(u), has the standard two-step scheme. So the first step of the AB-splitting looks like: Clearly u^{n+1/2}(t) = u^n(t), but what is u^{n+1/2}(t − Δt)? This is also shown in figure (6.2) and it is obvious that we won't have knowledge about u^{n+1/2}(t − Δt) unless we compute it separately, which means additional computational effort. This overhead even increases dramatically when we move to a multi-step method of higher order. The mentioned problems with the AB-splitting will not occur with a higher-order Runge-Kutta method since only knowledge of u^n(t) is needed.
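For reference, the constant-step BDF2 formula alluded to in the example (its explicit form did not survive extraction; the standard expression is assumed here) reads:

```latex
u^{n+1} - \tfrac{4}{3}\,u^{n} + \tfrac{1}{3}\,u^{n-1}
   \;=\; \tfrac{2}{3}\,\Delta t\, f\!\left(u^{n+1}\right),
```

so each step from t^n to t^{n+1} requires the two back values u^n and u^{n-1}, which is exactly the information that is missing for the intermediate stages of AB-splitting.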
Remarks about the iterative splitting.
The BDF methods apply very well to the iterative splitting.Let us recall at this point that this method, although being a real splitting scheme, always remains a combination of the operators A and B so no steps have to be done into one direction only ‡ .
In particular we do a subdivision of our given time discretization t^j = t^0 + jΔt into I parts. So we have subintervals t^{j,i} = t^j + iΔt/I, 0 ≤ i ≤ I, on which we solve the following equations iteratively (‡): u^{−1/I} is either 0 or a reasonable approximation (§), while u^0 = u(t^j) and u^1 = u(t^j + Δt). The crucial point here is that we only know our approximations at given times which don't happen to be the times at which a Runge-Kutta method needs to know them. Therefore, in case of an RK method, the values of the approximations have to be interpolated with at least the accuracy one wishes to attain with the splitting, and this means a lot of additional computational effort. We may summarize our results now in Table 6.1, which shows which methods are practicable for each kind of splitting scheme (¶).
Numerical results. After resolving the technical aspects of this issue we can now proceed to the actual computations. The question which arises is which of the splitting methods has the least computational effort, since we can expect them to solve the problem with more or less the same accuracy if we use practicable methods of equal order, because [A_x, B_x] = 0. We tested the dimensional splitting of the 2d advection-diffusion equation with the AB-splitting combined with a 5th-order RK method after Dormand and Prince, and with the iterative splitting in conjunction with a BDF5 scheme. We used 40 × 40- and 80 × 80-grids and completed n_t time steps, each of which was subdivided into 10 smaller steps, until we reached time t_end = 0.6, which is sufficient to see the main effects. The iterative splitting was done with 2 iterations, which was already enough to attain the desired order. In Tables 6.2 and 6.3 the errors at time t_end and the computation times are shown.
(‡) As we will see there is an exception to this.
(§) In fact the order of the approximation is not of much importance if we fulfill a sufficient number of iterations. In case of u^{−1/I} = 0 we have the exception that a step in the A-direction is done while B is left out. The error of this step vanishes after a few, but mostly only one, iteration.
(¶) In favor of the iterative splitting scheme, take also into account that AB-splitting may be used along with the mentioned high-order methods but cannot maintain the order if [A, B] ≠ 0, while the iterative splitting re-establishes the maximum order of the scheme when a sufficient number of iterations is done.
A future task will be to introduce non-commuting operators in order to show the superiority of the iterative splitting over the AB-splitting when the order in time is reduced due to the splitting error.
Second experiment
In the second experiment we consider the linear and nonlinear advectiondiffusion equation in two dimensions.
Two dimensional advection-diffusion equation. We deal with the two dimensional advection-diffusion equation with the parameters
The code for both methods is kept in the simplest possible form.
One and two step iterative schemes for advection-diffusion equations.
We use the splitting (96) and apply the one and two step iterative splitting respectively.
Results are shown in Tables 7.3 and 7.4. We see, in the case of the one-step method, that the choice of A and B makes a difference for 1 iteration. This is due to the dominance of the advection part. In general there is no improvement with higher iteration numbers. In this case the operators are defined as follows: B = −u_{i−1} ∂_y + D ∂_{yy} (105), where u_{i−1} denotes the (i − 1)-th iterate. This takes care of the coupling between the iteration steps; a sketch of this reassembly is given below. We use a finite differencing scheme of third order in space (central differences) and time (explicit Adams-Bashforth) and compare the results with a scheme of only second order. We also compare the iterative scheme with a standard Strang-Marchuk splitting of second order. Numerical results are shown in Table 7.6.
Remark 4. In the second experiment we can improve iterative splitting methods with embedded spatial discretization methods. Higher-order schemes such as Lax-Wendroff schemes are embedded as dimensional splitting schemes into our iterative splitting method. Also nonlinear equations can be embedded. To compute the iterative scheme, we only have to deal with spatial discretization schemes in each dimension. At least fewer iterative steps are needed to achieve higher-order results.
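The operator definition in eq. (105) implies that B is reassembled in every iteration from the previous iterate. A minimal sketch of such a reassembly, using sparse central differences on a periodic 1D grid in y, is given below; the grid, the boundary treatment and the central-difference choice are assumptions of this illustration and are not taken from the paper.

```python
import numpy as np
import scipy.sparse as sp

def build_B(u_prev, dy, D):
    """B = -u_{i-1} d/dy + D d^2/dy^2 on a periodic 1D grid in y,
    re-linearised around the previous iterate u_prev (cf. eq. (105))."""
    n = u_prev.size
    Dy = sp.diags([1.0, -1.0], [1, -1], shape=(n, n), format="lil")
    Dy[0, n - 1] = -1.0
    Dy[n - 1, 0] = 1.0
    Dy = Dy.tocsr() / (2.0 * dy)                     # central first difference
    Dyy = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="lil")
    Dyy[0, n - 1] = 1.0
    Dyy[n - 1, 0] = 1.0
    Dyy = Dyy.tocsr() / dy**2                        # central second difference
    return -sp.diags(u_prev) @ Dy + D * Dyy

# inside the splitting iteration one would update, for example:
#   B_op = build_B(u_prev_on_grid, dy, D)
#   ... advance the sub-problem with A fixed and B_op rebuilt each iteration ...
```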
Conclusions and Discussions
We have presented an iterative operator-splitting method computed with extrapolation schemes.We have analyzed the splitting error for the operators.
Under weak assumptions we could prove the higher-order error bounds. Closed formulations allow us to compute the delicate iterative scheme efficiently. We embedded spatial discretization schemes into the iterative splitting method and achieved higher-order results. Numerical examples confirm the application to differential equations and achieve the theoretical results. In the future we will focus on the development of improved operator-splitting methods with respect to their application to time-dependent and nonlinear differential equations.
Figure 5.1.
Figure 5.1. (ν, z) inside the blue area satisfy the stability condition. The region of interest is also shown in detail.
Table 6.2.
Errors and computation times of AB-splitting and iterative splitting for a 40 × 40-grid.
Number of steps | Error AB | Error It. spl. | AB computation time | It. spl. computation time
Table 6.3.
Errors and computation times of AB-splitting and iterative splitting for a 80 × 80-grid.
Table 7.3.
Errors for different diffusion constants on a 40×40-grid with the one step iterative splitting with A = −v∇ and B = D∆.
Table 7.4.
Errors for different diffusion constants on a 40×40-grid with the one step iterative splitting with A = D∆ and B = −v∇.
One and two step iterative schemes for 2D Burgers equation.
In our last experiment we apply the iterative splitting to the 2D Burgers equation:
Table 7.5.
Errors for different diffusion constants on a 40×40-grid with the two step iterative splitting.
Table 7.6.
Burgers equation on a 40×40-grid with the two-step iterative splitting and schemes of second and third order (t_0 = 0.25, T = 30).
| 6,003.2 | 2012-05-28T00:00:00.000 | ["Mathematics", "Computer Science"] |
Direct detection of atomic oxygen on the dayside and nightside of Venus
Atomic oxygen is a key species in the mesosphere and thermosphere of Venus. It peaks in the transition region between the two dominant atmospheric circulation patterns, the retrograde super-rotating zonal flow below 70 km and the subsolar to antisolar flow above 120 km altitude. However, past and current detection methods are indirect and based on measurements of other molecules in combination with photochemical models. Here, we show direct detection of atomic oxygen on the dayside as well as on the nightside of Venus by measuring its ground-state transition at 4.74 THz (63.2 µm). The atomic oxygen is concentrated at altitudes around 100 km with a maximum column density on the dayside where it is generated by photolysis of carbon dioxide and carbon monoxide. This method enables detailed investigations of the Venusian atmosphere in the region between the two atmospheric circulation patterns in support of future space missions to Venus.
L.62: please, mention the limitation of your method, that only provides column densities, while limb observations of the airglow can lead to atomic oxygen density profiles.These methods are thus complementary.
l.66: you mention the detection of the 63 µm emission in the Earth and Mars atmospheres.Is it expected to be seen elsewhere?Also, please, mention earlier studies for the Earth detections, such as Grossman and Vollmann (1997), Grossman et al. (2000) and Mlynczak et al. (2004).l.67: you mention the 18 O isotope on Earth.Is it expected to be observed on Venus?If yes, please, mention it and, if not, maybe it is not useful to write this sentence in this manuscript.l.81:It would be great if you could provide a local-time/latitude map of the locations of the observations on the planet.Or maybe a table with the latitude, longitude, local time and SZA of the 17 observations.It could also be inserted in the methods section.Figure 2a and 2b: for clarity, a vertical dashed line should be added at 18:00 LT to mark the terminator.Or maybe the background of the left part of the plots should be grey (for consistency with figure 3).According to the SS-AS circulation theory, and as explained in your conclusion, the atomic oxygen density is expected to be maximum at the subsolar point, then decrease near the terminator and increase again at the anti-solar point.Yet, this is not exactly what we can see in Figure 2b.The decrease from 15:00 to 18:00 LT can clearly be observed, which is expected.But why is the oxygen density increasing again between 18:00 and 19:00 LT, and then decreasing from 19:00 to 21:00?We should expect some symmetry of the dayside and the nightside points but we do not see that on this plot.Do you have an explanation for this behavior in the nightside?I also suggest that you try to plot these figures according to the distance to the subsolar or antisolar points, instead of latitudes.Soret et al. (2012) showed that, since the distribution is expected to be concentrically distributed around the antisolar point on the nightside, data should be represented as a function of the antisolar angle (figure 2 of their manuscript).Maybe a trend will more clearly appear in your dataset if using this variable.
It is also a shame that no data have been acquired at the SS and AS points. I hope future observations will be able to fill in the gaps. That would be a huge step in the understanding of the Venus circulation. l.157: please, mention that the 6×10^17 cm^-2 value is obtained at the antisolar point, where it reaches its maximum on the nightside, while your dataset does not go so deep in the nightside. l.160-164: Again, your data set does not include observations at the antisolar and subsolar points. Being mostly performed near the terminator, the small variations retrieved in this study can (and should) easily be explained.
L.160: why don't you mention the work of Gilli sooner?l.176:I agree with your explanation from a theoretical point of view.But, as explained in my comments of figure 2, I do not agree that you see this trend in your data (except maybe on the dayside, but you are missing the 12:00 to 15:00 LT area anyway).You should rephrase this paragraph.
L.184: This study can clearly show how Venus and Earth are different, but it will not be straightforward, using the oxygen density, to understand how they evolved so differently.I think you should rephrase.l.252 and 283: "enabling the first observation of Venus with SOFIA" Very good!It should appear it the main text.
Figure M1: It is not easy to distinguish between the data and the fits.Could you maybe lighten the fit curves or use different colors?l.385: "the atmospheric temperature does not change much".Please, add a reference.
Overall, this is a very interesting work, with promising results and I wonder whether if you have considered studying the correlation between your atomic oxygen column density and the O2 nightglow observed by Venus Express?Or, to compare your atomic oxygen temperature and the O2 nightglow?These are two observational datasets that do not require any modeling and it would definitely be a good way to connect them and bring this work a step further.
Minor comments
l.23: I suggest you add "However, past and current detection methods".l.27: "on board SOFIA".
L.26-27: Maybe the upGREAT and SOFIA acronym should be explained in the abstract.l.28: please, add "0.7 to 3.8 x 1017 cm-2 between 15:00 and 21:00 Local Time".l.32: "These data are the first dayside measurements of atomic oxygen."It should appear sooner in the abstract.Maybe, l.25 "We report on the first direct detection…".l.32: "This new method" l.39-40: please, add commas "On the nightside, atomic oxygen recombines in a three-body reaction to molecular oxygen.This is the main source of excited oxygen molecules, which, in turn, are the source of the Venus nightglow."l.43: what does "This" refer to?The global circulation?l.48: "limb" Do you mean terminator?l.51: "onboard" instead of "on" L.52: please, provide the transition of the 1.27 µm airglow.l.54: add comma "quenching coefficients, reaction rates and efficiencies, atomic oxygen densities have been" L.55: add reference 3 together with reference 2 l.57: change "longitude" to "latitude" l.59: "stronger": please, quantify.l.68: add a comma after "In the atmosphere of Venus" l.74-75: the acronym should probably be explained in the abstract.
l.75: "on the SOFIA airplane".And, also, add a reference to Methods/SOFIA upGREAT observations.l.80: add a comma after "In total".l.79-80: the sentence is a bit long.I suggest "high spectral resolving power of upGREAT allows distinguishing both.The telluric atomic oxygen line is used for frequency and radiometric calibration."l.81: replace the comma after "measured" by ":".l.107: please, change to 246 K, for consistency with the abstract and the rest of the manuscript.l.114-115: replace "which are on" by "in".l.115: a space should be added between "points" and "(circles)".l.123-124: avoid the repetition of the word "measurements" in the same sentence.l.125: "predicts" sounds like the result of a model, while Sagawa (2008) corresponds to an observational study.Maybe "observes" or "deduces" are more appropriate terms here.l.136: "This is determined from the fit of a radiative transfer model".l.138: change "ref 16" to the names of the authors, as done in the rest of the manuscript.
L.158: the reference "2" to Brecht et al is a little bit weird at its current location.It looks like a squared value.
L.312: please, add a reference to the Earth's study.
L.317: I think it should be equation (1) instead of (2).Same for the following equations.l.329: S=1.131 x 10-21 (x is missing) l.346: the sentence is a bit long.I suggest "… Earth atmosphere at the altitude where the line becomes optically thick.In this case, it corresponds to the lower thermosphere, at around 100 km altitude." Reviewer #2 (Remarks to the Author): Overall: The manuscript is discussing a unique dataset that was obtained by the upGREAT heterodyne spectrometer on board SOFIA.Atomic oxygen is an important chemical species in atmospheres but can also be difficult to directly observe.This manuscript is providing a decent and brief overview of the importance of atomic oxygen in the Venusian atmosphere.It also does a good job in comparing their results to past observations and model simulations.There are a few revisions being suggested that are easily corrected.The recommendation is this manuscript needs minor revisions before publication.
Major comments:
(1) The introduction/motivation section (~Lines 38 -44) needs to be re-organized and focused.Things are briefly touched on but the connection between the points are not clear or fully covered with appropriate references.It is unclear if the authors want to only discuss one nightglow (O2 IR) or briefly state a few.
Atomic oxygen is important for photochemistry, circulation, and energy budget. Due to a lack of direct atomic oxygen observations, scientists have had to rely on other atmospheric features to understand the atomic oxygen distribution, such as nightglow. Nightglows (e.g. O2 IR, O2 visible, NO UV) provide insight into the photochemistry and dynamics due to the intensity and location of the features on the nightside.

(2) Throughout the manuscript, please be clearer on the local time being discussed. There are places where it isn't clear whether the full range of local times, the dayside, or the nightside is being discussed.
(3) The radiative transfer (RT) code needs more support.It is unclear if the RT code has been published before and how the input values were determined.This section needs references and/or more detail.
Details:
Line 20: replace "is mostly located" with "peaks"
Line 41: replace "source of the Venus nightglow" with "source of several Venus nightglows". O2 IR nightglow and the O2 Visible nightglow can be produced from the 3-body reaction.
Line 39-44: This group of sentences needs rephrasing and re-organization. See major comment #1.
Line 44: "Pioneer" -> "Pioneer Venus Orbiter"
Line 50: State the wavelength or just "IR" after "O2"
Line 53-55: This sentence is missing a word or punctuation to provide more clarity in the statement "reaction rates and efficiencies atomic oxygen densities have been calculated."
Line 154: Please clarify "These values". Is this regarding the range or the average or all of the above?
Line 155: "O2 nightglow", do you mean "O2 IR nightglow"?
Line 156: Please clarify "these data". It is unclear if this refers to the new observations or to previous work. Also unclear which local time this refers to.
Line 157: A clearer statement might be: "The column densities derived from VIRTIS altitude profiles as reported in Brecht et al. 2012
Line 291: Please clarify or rephrase; "In order to avoid that most of the pixels…" What is being avoided?
Line 305: a comma is needed after "For further analysis"
Line 311: Is there a reference for the radiative transfer code? If the code is not public and/or published, please add supporting references for values etc.
Line 358: Please rephrase this sentence. What temperature is determined?
Line 396-399: Could you please provide more information, such as key words to put in the search fields to find the exact data that was used? As written, it is unclear if the data is available.
Berlin, July 22, 2023
Dear Reviewers: Thank you for your thoughtful comments.We have included all of them in the revised manuscript.In the redline version of our manuscript (Huebers-Venus-revision-red-line-2023-07-12) these changes are indicated in red.On the following pages you will find detailed comments on your remarks (in red).We hope that the revised manuscript meets your expectations.
Sincerely,
Heinz-Wilhelm Hübers (on behalf of the authors)
REVIEWER COMMENTS
Reviewer #1 (Remarks to the Author): The manuscript « Atomic oxygen on the dayside and nightside of Venus » by Hübers et al. presents a method applied to Venus for the first time to determine oxygen column densities and temperatures.The technique, based on the observation of the 63 µm oxygen emission wavelength using an airborne spectrometer is described.This method leads to the measurement of the Earth atmosphere temperature, as well as the Venus cloud top temperature, the atomic oxygen temperature and the atomic oxygen column density.This is the first time that such direct measurements are performed to estimate the oxygen column density on the Venus dayside.The method is also valid for the Venus nightside.Results are in good agreement with previous studies, based on airglow measurements and global circulation models.The SOFIA/upGREAT observations have been acquired between 15 and 21 LT.Future observations are very promising to map the entire Venus globe, especially near both the sub-solar and the anti-solar points, which are missing in this study.The paper is clear, well written and results are highly encouraging.I recommend this paper to be published by Nature Communications after addressing the following points.
Main comments L. 31: "The atomic oxygen is found at altitudes around 100 km": this study only provides column densities.It is thus not possible to retrieve altitude of the oxygen layer from those measurements.This statement is not wrong but based on results from previous studies.In the abstract, the statement is misleading, as the reader might think that this result comes from this study.
Our study provides the temperature of the atomic oxygen and the column density.With a-priori knowledge of the temperature profile of the Venusian atmosphere it is, therefore, possible to obtain coarse altitude information, but not altitude profiles as it is possible with limb-scanning instruments on Venus orbiters.We have rephrased the sentence.Now it reads: "The temperature of the atomic oxygen is ~156 K on the dayside and ~115 K on the nightside which corresponds to altitudes around 100 km." (lines 32-34).With this reformulation we hope to clarify that we determine the temperature of atomic oxygen which corresponds to an altitude.
L.62: please, mention the limitation of your method, that only provides column densities, while limb observations of the airglow can lead to atomic oxygen density profiles.These methods are thus complementary.
Our method cannot deliver altitude profiles, because the observations are made from an aircraft, which does not allow limb observations. If a THz heterodyne spectrometer were implemented on a Venus orbiter, it would also be possible to derive altitude profiles with our method. We have added the following two sentences (lines 89-92) to explain this: "In contrast to nightglow observations from a Venus orbiter, which provide altitude profiles due to a limb-scan observing geometry, our method provides column densities. However, with a THz heterodyne spectrometer on a Venus orbiter it would be possible to obtain altitude profiles of atomic oxygen."

l.66: you mention the detection of the 63 µm emission in the Earth and Mars atmospheres. Is it expected to be seen elsewhere? Also, please, mention earlier studies for the Earth detections, such as Grossman and Vollmann (1997), Grossman et al. (2000) and Mlynczak et al. (2004).
Atomic oxygen has been detected, for example, in the atmosphere of Europa (D. T. Hall et al., Nature, vol. 373, pp. 677-679, 1995).But these observations are in the UV.The only planets (incl.moons) where the 4.7-THz transition was detected are Earth, Mars and Venus.In principle, the 4.7-THz line might be detected in the atmosphere of Europa, but this is challenging, because the atomic oxygen column density is three orders of magnitude smaller than on Venus.Regarding Earth: We have added the suggested references.l.67: you mention the 18 O isotope on Earth.Is it expected to be observed on Venus?If yes, please, mention it and, if not, maybe it is not useful to write this sentence in this manuscript.
We have added the following sentence about the 18 O isotope in the Venusian atmosphere (lines 71 -73): "For Venus, Earth-bound absorption spectroscopy of near-infrared CO lines yielded an isotopic ratio that is not significantly different from that of the Earth with 16 O/ 18 O 500 (Iwagami et al.,Ref. 21)." l.81:It would be great if you could provide a local-time/latitude map of the locations of the observations on the planet.Or maybe a table with the latitude, longitude, local time and SZA of the 17 observations.It could also be inserted in the methods section.Fig. 3 is a local-time/latitude map.But numbers of the exact positions are not given.We have added local times and latitudes as well as a table with the requested information in the methods section (Tables 2-4).We have added the vertical lines for both frequencies and explained it in the figure caption.
Figure 2a and 2b: for clarity, a vertical dashed line should be added at 18:00 LT to mark the terminator.Or maybe the background of the left part of the plots should be grey (for consistency with figure 3).
We changed it according to the recommendation.The background of the nightside data is grey.The atomic oxygen line is not optically thick and not optically thin, but somewhere in between.If the line would be optically thick we would see the temperature of atomic oxygen, which is 170 K at 15:40 LT.This is explained in lines 115 -117.We have changed it.Now, the old Fig. 2b is Fig. 2c and circles represent the oxygen densities.
According to the SS-AS circulation theory, and as explained in your conclusion, the atomic oxygen density is expected to be maximum at the subsolar point, then decrease near the terminator and increase again at the anti-solar point.Yet, this is not exactly what we can see in Figure 2b.The decrease from 15:00 to 18:00 LT can clearly be observed, which is expected.But why is the oxygen density increasing again between 18:00 and 19:00 LT, and then decreasing from 19:00 to 21:00?We should expect some symmetry of the dayside and the nightside points but we do not see that on this plot.Do you have an explanation for this behavior in the nightside?
We speculate that the increase from 19:00 to 21:00 is a local maximum of atomic oxygen which might be induced by the local wind field structure (for example, caused by a vortex). It seems that this happens in a region covering the northern hemisphere (two data points with small uncertainty) and the southern hemisphere (one data point with small uncertainty) as well as the equator (although the one data point has a rather large uncertainty). It is worth noting that some substructure is also observed by VIRTIS for the atomic oxygen concentration around the antisolar point (Soret et al., 2012, Ref. 3). However, we have only a few data points and therefore it is not possible to make a final conclusion about that. We have added a discussion on this topic (lines 187-191) which reads: "Between 19:00 and 20:00 LT a small local peak of the column density occurs. We speculate that this might be caused by dynamical processes in the atmosphere which may lead to a local maximum. This is, however, much less pronounced than the maximum at the antisolar point. It is worth noting that Soret et al. have observed a substructure of the antisolar atomic oxygen maximum with local maxima 3."

I also suggest that you try to plot these figures according to the distance to the subsolar or antisolar points, instead of latitudes. Soret et al. (2012) showed that, since the distribution is expected to be concentrically distributed around the antisolar point on the nightside, data should be represented as a function of the antisolar angle (figure 2 of their manuscript). Maybe a trend will more clearly appear in your dataset if using this variable. It is also a shame that no data have been acquired at the SS and AS points. I hope future observations will be able to fill in the gaps. That would be a huge step in the understanding of the Venus circulation.
Thank you for the suggestion.We have included a plot (new Fig. 2b) with the column density as a function of the SZA.The nighttime values have been averaged (weighted average with the uncertainty as weight).The trend of decreasing column density with increasing SZA is visible.We have discussed this in lines 166 -170, which reads: "Fig.2b shows the column densities as a function of the solar zenith angle.The black data point at nighttime is the weighted average of all nighttime column densities ((1.67±0.09)x 1017 cm -2 ).With increasing solar zenith angle the column density decreases slightly, because the generation of atomic oxygen by photolysis of CO2 decreases with decreasing illumination from the sun." l.157: please, mention that the 6x10 17 cm -2 value is obtained at the antisolar point, where it reaches its maximum on the nightside, while your dataset does not go so deep in the nightside.
We have added the following sentence: "The latter value is obtained at the antisolar point, where it reaches its maximum on the nightside and which is not covered by our observations."(lines 175 -177).l.160-164: Again, your data set does not include observations at the antisolar and subsolar points.Being mostly performed near the terminator, the small variations retrieved in this study can (and should) easily be explained.
We have added a plot with the column densities as a function of solar zenith angle (new Fig. 2b) and we discussed Fig. 2 a-c in more detail in lines 163-193.
L.160: why don't you mention the work of Gilli sooner?
We have rephrased the introductory parts (as suggested be reviewer 2) and added the reference to the work by Gilli et al. in the introduction (line 45).l.176:I agree with your explanation from a theoretical point of view.But, as explained in my comments of figure 2, I do not agree that you see this trend in your data (except maybe on the dayside, but you are missing the 12:00 to 15:00 LT area anyway).You should rephrase this paragraph.
We have added Fig. 2b (column density vs. solar zenith angle) and discussed the Figure in more detail (lines 163 -193).
L.184: This study can clearly show how Venus and Earth are different, but it will not be straightforward, using the oxygen density, to understand how they evolved so differently.I think you should rephrase.
We have rephrased that sentence more tentatively. It now reads "… data may help to improve our understanding of how and why Venus and Earth atmospheres are so different."

l.252 and 283: "enabling the first observation of Venus with SOFIA" Very good! It should appear in the main text.
Figure M1: It is not easy to distinguish between the data and the fits. Could you maybe lighten the fit curves or use different colors?
As Reviewer 2 pointed out, there is another observation of Venus, namely the attempt to detect phosphine in the Venus atmosphere.These measurements were done in parallel (with the other frequency channel of upGREAT) to our measurements.Although phosphine was not detected it is also an observation of Venus.We explained that in lines 323 -325.Since it does not add new information about the Venus atmosphere, we prefer not to mention it in the main text.We have changed Fig. M1 (different colors for fit and measurement).l.385: "the atmospheric temperature does not change much".Please, add a reference.
Overall, this is a very interesting work, with promising results and I wonder whether if you have considered studying the correlation between your atomic oxygen column density and the O2 nightglow observed by Venus Express?Or, to compare your atomic oxygen temperature and the O2 nightglow?These are two observational datasets that do not require any modeling and it would definitely be a good way to connect them and bring this work a step further.
We have considered the correlation between atomic oxygen column density and O2 nightglow observations by Venus Express.In lines 53-63 we briefly summarize the correlation between both and how O2 nightglow data from Venus Express are used to derive atomic oxygen column densities (references are given, Refs: 1, 2, 3, 8).We would like to point out that in these references the nightglow data are used to derive the atomic oxygen density and the focus is on atomic oxygen.For example, Ref. 2 provide no map of the nightglow but an atomic oxygen map.In addition, for a detailed comparison one would need nightglow data from the same date (or close to that date) as our observation.These data are not available, because Venus Express stopped operation in 2014.The importance of using data from the same date is shown in Ref. 1 by Gérard et al.where the authors compare O2 nightglow data with atomic oxygen data from the same orbit of Venus Express (at 00:30 LT).Conclusions such as for example in Gilli et al. (Ref. 8) that atomic oxygen data are in good agreement with O2 nightglow data are, therefore, not possible with our observations simply because of the lack of nightglow data measured at the same time.Despite that, we compare our data with atomic oxygen data obtained from nightglow observations with Venus Express (lines 172 -192).Here, we explicitly refer to O2 nightglow measurements of Gérard et al (Ref. 29).We have only nighttime data until 21 LT.Therefore, the comparison is limited, since nightglow data are most pronounced around the antisolar point.
Minor comments l.23:I suggest you add "However, past and current detection methods".Done l.27: "on board SOFIA".Done L.26-27: Maybe the upGREAT and SOFIA acronym should be explained in the abstract.Done l.28: please, add "0.7 to 3.8 x 1017 cm-2 between 15:00 and 21:00 Local Time".Done l.32: "These data are the first dayside measurements of atomic oxygen."It should appear sooner in the abstract.Maybe, l.25 "We report on the first direct detection…".
We deleted the sentence "These data are the first …" and added in line 25 "first".l.32: "This new method" Done l.39-40: please, add commas "On the nightside, atomic oxygen recombines in a three-body reaction to molecular oxygen.This is the main source of excited oxygen molecules, which, in turn, are the source of the Venus nightglow."Done l.43: what does "This" refer to?The global circulation?
We rephrased the sentence.It now reads: "In addition, atomic oxygen can be used as tracer for the global circulation in the upper thermosphere (~130 -250 km) as demonstrated by measurements of the oxygen dayglow with the Pioneer Venus Orbiter."(lines 45 -47) l.48: "limb" Do you mean terminator?
We mean the edge of the disc of Venus.We changed "limb" to "edge".l.51: "onboard" instead of "on" Done L.52: please, provide the transition of the 1.27 µm airglow.Done l.54: add comma "quenching coefficients, reaction rates and efficiencies, atomic oxygen densities have been" Done L.55: add reference 3 together with reference 2 Done l.57: change "longitude" to "latitude" Done l.59: "stronger": please, quantify.Done l.68: add a comma after "In the atmosphere of Venus" Done l.74-75: the acronym should probably be explained in the abstract.Done l.75: "on the SOFIA airplane".And, also, add a reference to Methods/SOFIA upGREAT observations.Done l.80: add a comma after "In total".Done l.79-80: the sentence is a bit long.I suggest "high spectral resolving power of upGREAT allows distinguishing both.The telluric atomic oxygen line is used for frequency and radiometric calibration." We have implemented this suggestion.(lines 82 -84) l.81: replace the comma after "measured" by ":".Done l.107: please, change to 246 K, for consistency with the abstract and the rest of the manuscript.Done l.114-115: replace "which are on" by "in".Done l.115: a space should be added between "points" and "(circles)".Done l.123-124: avoid the repetition of the word "measurements" in the same sentence.
We replaced the second "measurements" by "observations".
l.125: "predicts" sounds like the result of a model, while Sagawa (2008) corresponds to an observational study.Maybe "observes" or "deduces" are more appropriate terms here.
l.136: "This is determined from the fit of a radiative transfer model".Done l.138: change "ref 16" to the names of the authors, as done in the rest of the manuscript.Done L.158: the reference "2" to Brecht et al is a little bit weird at its current location.It looks like a squared value.Figure 3: For clarity, it would be nice to add more clearly some local time values (12:00, 18:00 and 24:00 LT, at least).
L.317: I think it should be equation (1) instead of (2).Same for the following equations.
Yes, the numbering was wrong.We corrected it.
l.346: the sentence is a bit long.I suggest "… Earth atmosphere at the altitude where the line becomes optically thick.In this case, it corresponds to the lower thermosphere, at around 100 km altitude." We changed the sentence as suggested.
Reviewer #2 (Remarks to the Author):
Overall: The manuscript is discussing a unique dataset that was obtained by the upGREAT heterodyne spectrometer on board SOFIA.Atomic oxygen is an important chemical species in atmospheres but can also be difficult to directly observe.This manuscript is providing a decent and brief overview of the importance of atomic oxygen in the Venusian atmosphere.It also does a good job in comparing their results to past observations and model simulations.There are a few revisions being suggested that are easily corrected.The recommendation is this manuscript needs minor revisions before publication.
We have added more information about the radiative transfer code (lines 354 -361) and its verification and we added a reference (Ref. 11, Richter et al. 2021).
Line 358: Please rephrase this sentence.What temperature is determined?
It is the temperature of the Earth atmosphere where the atomic oxygen line becomes optically thick.Now it reads "the temperature of the Earth atmosphere at which the line saturates".
Line 396 -399: Could you please provide more information, such as key words to put in the search fields to find the exact data that was used?As written, it is unclear if the data is available.
Figure 1a: please, add a vertical dashed line at 4.744777 THz and another one at 4.744980 THz and explain in the legend.
Figure 2a: I am not sure I understand. Why don't we see the atomic oxygen temperature of 200 K at 15:40 LT shown in figure 1b?
Figure 2b: in figure 2a, you use squares for the brightness temperatures and circles for the atomic oxygen temperatures. Please, use different symbols for figure 2b, which represent another quantity: oxygen densities (or, at least, replace squares by circles).
Line 54: Is temperature a-priori information?
Line 79: it is unclear what is meant by "allows distinguishing both and using the telluric…"
Line 138: check on how to write a reference within a statement
Line 140: "The average nightside temperature is lower"… what is it lower than?
Line 145: remove parentheses from around the temperature values.
Line 151: Please clarify that the column density value stated on line 152 is for the dayside.
3. l.176: use a period instead of a colon.Done l.179: replace "towards the nightside" by "near the terminator".Done l.130: "Future observations, especially near the antisolar and subsolar points but also at all solar zenith angles, will provide…" Done l.227: ref 17 is not cited in the text.It was cited in line 107 of the original manuscript.In the revised version it is cited in line 112 (Ref.22).
are lower (~1 x 1017 cm-2)2."
Line 157-158: Brecht et al. 2012 report GCM simulation results too. Should state those values to compare with the VIRTIS results, your results, and with the next sentence about a different GCM simulation result.
Line 158: missing period at end of sentence
Line 159: rephrase; "Also, results from a different model yielded column densities around 2 x 1017 cm-2 which are comparable with our observations."
Line 168-170: (Fig. 3 Caption) Please state if this is the evening or morning terminator
Line 283-284: Please clarify this statement. There are other SOFIA observations of Venus. Most of them are conference proceedings, but Cordiner et al. 2022 (https://doi.org/10.1029/2022GL101055) is actually published.
"Environmental Science",
"Physics"
] |
Nuclear data adjustment using Bayesian inference, diagnostics for model fit and influence of model parameters
The mathematical models used for nuclear data evaluations contain a large number of theoretical parameters that are usually uncertain. These parameters can be calibrated (or improved) by the information collected from integral/differential experiments. The Bayesian inference technique is used to utilize measurements for data assimilation. The Bayesian approximation is based on least-square or Monte-Carlo approaches. In this process, the model parameters are optimized. In the adjustment process, it is essential to include the analysis related to the influence of model parameters on the adjusted data. In this work, some statistical indicators such as the concept of Cook's distance; Akaike, Bayesian and deviance information criteria; and effective degrees of freedom are developed within the CONRAD platform. Further, these indicators are applied to a test case of 155Gd to evaluate and compare the influence of resonance parameters.
Introduction
In the modelling of a nuclear application (for example, a reactor design), nuclear data libraries are used extensively. These nuclear libraries contain all the information related to nuclear reaction and physics to be used in the modelling. These data are evaluated using nuclear reaction models and differential data. The evaluated data can also be improved by using integral experiments [1,2]. This data improvement process is based on a mathematical framework called the Bayesian inference, where the inclusion of experiments optimizes the model parameters. The Bayesian problem can be solved using analytical (least square analysis) or sampling-based (Monte-Carlo approach) methods [1].
In terms of the physical interpretation of a particular adjustment (i.e., by types of reaction, isotopes, and energy ranges), the adjustment process is still not convincing. The application domain of the adjusted data is not clearly understood. For the case of a new reactor design, it is not clear if the data which was adjusted with a specific set of integral/differential experiments is representative or the data should be adjusted with another set of experiments. In this regard, it is essential to evaluate the influence of input data (or model parameters) on the adjustment. Also, identifying the best distribution while fitting a data set is one of the biggest challenges for mathematicians.
To diagnose the adjustment and the fit, several approaches (such as goodness of fit and generalizability) have been proposed in the literature. In terms of fit, the aim should not be the best-fitting model but the best-predicting model, i.e., the one that makes accurate predictions on future data. In the process of fitting a model to the given data, the preferred model should therefore not be evaluated based on its goodness of fit (GOF) but on its generalizability. The goodness of fit measures the root mean square of the deviance, GOF = sqrt( Σ_i (obs_i − predict_i)^2 / N ). The GOF improves with more parameters.
A model's generalizability is its ability to fit all future data samples from the same process, not only the current data. A model's complexity is its flexibility to fit a wide range of data [5]. It depends on the number of model parameters (the degrees of freedom) and the functional form of the model. We are always interested in a model selection method accounting for the effects of complexity. Over-fitting usually undermines the predictive accuracy. Therefore, among the candidate models, the model that best captures the underlying regularities should be the preferred one.
There are several methods proposed in the literature to select the best model. Some popular selection methods are Akaike information criterion (AIC), Bayesian information criterion (BIC), deviance information criterion (DIC), Cross-validation, Bootstrap, Minimum description length, Cross entropy, etc. Information criteria such as AIC and BIC measure a model's generalizability (i.e., GOF and complexity) [6,9]. These information criteria find a tradeoff between the model simplicity and the GOF. For studying the influence of model parameters on the fit, the Cook's distance (CD) [4,7,8] can be used.
The main objectives of this work are to develop statistical indicators within the existing platform CONRAD [3] (a platform for nuclear data evaluation from CEA) to diagnose the influence of model parameters and model fit in the process of nuclear data adjustment and to demonstrate these indicators using a test case. This work is a continuation of what was presented in [4], with the addition that CD, AIC, BIC, DIC, and the effective number of degrees of freedom are discussed and applied to a nuclear data fit. Also, a complete set of new data is analyzed. Results are compared and discussed.
Data adjustment using Bayesian inference
In probability theory, Bayes' theorem combines the probability distributions of the prior information and the measurements to provide the posterior distribution. Using Bayes' theorem, a relationship between the prior, the posterior, and the likelihood can be obtained; i.e., the prior is combined with the likelihood of the new set of experiments to obtain the posterior, p(x | y) ∝ L(y | x) · p(x), where x denotes the model parameters and y the experimental data.
The first two statistical moments (mean and covariance) of the posterior distribution can be obtained using a fitting procedure (e.g., generalized least square fit). In nuclear data assimilation and inverse problems, this is a very standard method for data adjustment. The mathematical details are presented in [1,4].
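As an illustration of this fitting step, the sketch below computes the posterior mean and covariance of the parameters for a linearized Gaussian model using the standard generalized least-squares update; the array names (prior covariance, measurement covariance, sensitivity matrix) are generic placeholders and do not reflect the actual CONRAD interface.

import numpy as np

def gls_posterior(x_prior, m_prior, y_meas, m_meas, sensitivity):
    # Gaussian (generalized least-squares) update for a linearized model
    # y ~ sensitivity @ x, with prior N(x_prior, m_prior) and measurement
    # covariance m_meas. Returns the posterior mean and covariance.
    b_inv = np.linalg.inv(m_prior)
    r_inv = np.linalg.inv(m_meas)
    m_post = np.linalg.inv(b_inv + sensitivity.T @ r_inv @ sensitivity)
    x_post = x_prior + m_post @ sensitivity.T @ r_inv @ (y_meas - sensitivity @ x_prior)
    return x_post, m_post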
Model selection, measure of influential data, model complexity and fit
As mentioned above, there are several methods to account for the quality of the model and the influence of individual model parameters. In this section, we explore the influence of model parameters by using Cook's distance (Section 3.1) and by using reduced models (Section 3.2.)
Cook's distance for the influential parameter (or data)
The influence of a particular parameter (or data set) on the overall regression fit can be understood in terms of the variation in the fit when that parameter or data set is not considered in the fitting process. A generalized formula for the Cook's distance is given in [7,8].
As previously discussed in [4], if σ_p is the posterior data adjusted with all model parameters and σ_p^i is the posterior data when the i-th parameter is not considered in the adjustment process, the Cook's distance can be written as CD_i = (σ_p − σ_p^i)^T M_σp^{-1} (σ_p − σ_p^i), where M_σp is the covariance of the posterior parameters.
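A minimal numerical sketch of this influence measure is given below; it assumes the quadratic form built from the quantities defined above (the difference between the two posteriors, weighted by the inverse posterior covariance), and the exact normalization used in CONRAD may differ.

import numpy as np

def cooks_distance(sigma_full, sigma_reduced, m_post):
    # Quadratic-form Cook's distance between the posterior obtained with all
    # parameters and the posterior obtained when one parameter is left out.
    d = np.asarray(sigma_full) - np.asarray(sigma_reduced)
    return float(d @ np.linalg.solve(m_post, d))

# Illustrative numbers only: three adjusted quantities, with and without one
# resonance parameter in the fit.
sigma_p = np.array([1.02, 0.98, 1.05])
sigma_p_i = np.array([1.00, 0.97, 1.10])
m_post = np.diag([1.0e-3, 1.0e-3, 1.0e-3])
print(cooks_distance(sigma_p, sigma_p_i, m_post))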
Model selection
The model selection process is an essential step in statistical analysis when the accuracy of the model used in the adjustment is of prime interest, especially for nuclear-related applications. Suppose we have candidate models M1, M2, ..., Mk, where p is the probability of the model parameter x and Xj is the parameter space associated with parameter xj of the model Mj. To predict the relative quality of a statistical model and to select the best model out of all possible models, information criteria are often used. Information criteria estimate the relative quality of statistical models; the model minimizing the chosen criterion is selected as the best model. In information theory, the Akaike information criterion (AIC) is defined as AIC = 2n − 2 ln(L), where n is the number of model parameters and ln(L) is the log of the maximum likelihood of model Mj. The likelihood is defined as the joint probability distribution of the evaluated values at the given observations. The maximum likelihood (L) corresponds to the likelihood computed with the optimized model parameters in the Bayesian approximation. BIC and DIC are similar to AIC, but their penalty is harsher; thus, BIC and DIC tend to choose a simpler model.
In information theory, the Bayesian information criterion and the deviance information criterion are defined as BIC = n ln(k) − 2 ln(L) and DIC = −2 ln(L) + 2 n_D, where n_D is the effective degrees of freedom and k is the number of experimental data points used in the Bayesian parameter adjustment. The model with the smallest values of these criteria is considered the best model. Using Kullback-Leibler residual information [9], the effective degrees of freedom n_D can be estimated from the prior and posterior parameter covariances. Information criteria are suitable for all types of model selection problems and are the most popular methods for model selection. More detailed information about model selection criteria, effective degrees of freedom and goodness-of-fit tests for the Bayesian posterior can be found in the article of Spiegelhalter, D. J. [9]. All the methods discussed above (Cook's distance, information criteria and effective degrees of freedom) are implemented in CONRAD and are demonstrated with a test case in the next section.
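Before turning to the test case, the indicators can be gathered in one helper, as sketched below. The AIC and BIC expressions are the standard ones; the DIC form and the trace-based estimate of the effective degrees of freedom (a "degrees of freedom for signal" formula) are assumptions chosen to be consistent with the quantities named above, not necessarily the exact expressions implemented in CONRAD.

import numpy as np

def model_indicators(ln_l, n_par, n_data, m_prior, m_post):
    # ln_l   : log of the maximum likelihood of the model
    # n_par  : number of model parameters
    # n_data : number of experimental data points
    # m_prior, m_post : prior and posterior parameter covariances
    aic = 2.0 * n_par - 2.0 * ln_l
    bic = n_par * np.log(n_data) - 2.0 * ln_l
    # Assumed effective degrees of freedom: how much the prior covariance is
    # contracted by the data (0 if the data add nothing, n_par if fully constrained).
    n_d = n_par - np.trace(m_post @ np.linalg.inv(m_prior))
    dic = -2.0 * ln_l + 2.0 * n_d
    return aic, bic, dic, n_d

# The candidate model (e.g., M1 to M4 in the test case below) with the smallest
# criterion values would be preferred.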
Test case and results
In this work, for the adjustment process, the time-offlight experiment for radiative capture of 155 Gd in the energy range of [1.4, 3.4] eV is taken as an experiment for the adjustment. In Figure 1, the experimental data (capture cross-section) at 2138 points in the energy range is shown. This experimental data is extracted from the CONRAD database. The uncertainty in the experimental data is 1% of the mean. Two peaks, first at 2.008 eV and second at 2.568 eV are observed. The corresponding resonance parameters (neutron width Γ n , radiation width Γ γ ) and associated uncertainties are shown in Table 1. These four parameters (Γ γ1 , Γ γ2 , Γ n1 , Γ n2 ) are considered here as prior parameters in the Bayesian inference for parameter adjustment. The case where all four parameters are used in the adjustment process can be considered as a model M. To compare information criteria with the Cook's distance, four sub-cases are created by removing one parameter in the adjustment. In this way, four different models M1 (with model parameters Γ γ2 , Γ n1 , Γ n2 ), M2 (with model parameters Γ γ1 , Γ γ2 , Γ n2 ), M3 (with model parameters Γ γ1 , Γ n1 , Γ n2 ) and M4 (with model parameters Γ γ1 , Γ γ2 , Γ n1 ) are used for parameters adjustment with Bayesian analysis. Further, Cook's distance, model selection criteria (AIC, BIC, and DIC), and effective degrees of freedom are computed. The results are tabulated in Table 2. From Table 2, it can be observed that for each case, the values of information criteria (AIC, BIC, and DIC) are approximately the same. The main reason for this is that for these sub-cases, the number of model parameters (equal to 3) is very small in comparison to the log of likelihood. The largest Cook's distance in M3 suggests that Γ γ2 is a very important parameter for the fit. AIC, BIC, and DIC also suggest that if Γ γ2 is removed from the fit, the model is rejected (large values of information criteria). It can be also noticed from the Cook's distance that Γ n2 is not an important parameter in the fit as the model M4 (in which the parameter Γ n2 is removed) has the smallest Cook's distance. Model M4 with parameters Γ γ1 , Γ n1 , Γ γ2 is the best model out of the given options M1, M2, M3 and M4. All approaches (AIC, BIC, DIC, and Cook's distance) give the same conclusion.
The statistical indicators discussed in this article are currently developed and validated with a simplified test case here. It is expected that these indicators will be used as essential decision-making tools in enhancing the accuracy of the nuclear data adjustment and in selecting the experimental domain for new reactor design.
Conclusions
In this article, for 155 Gd, the influence of resonance parameters is evaluated using Cook's distance and is compared with other model selection techniques (AIC, BIC and DIC criteria) derived from information theory. Effective degrees of freedom is also estimated. It is concluded that all the approaches (AIC, BIC, DIC, and Cook's distance) discussed and developed in CONRAD give the same conclusion. Future work will concentrate on defining the best way to use these indicators to enhance the nuclear data assimilation process by reducing/discriminating the number of parameters.
"Computer Science"
] |
Functional Model of Compound II of Cytochrome P450: Spectroscopic Characterization and Reactivity Studies of a FeIV–OH Complex
Herein, we show that the reaction of a mononuclear FeIII(OH) complex (1) with N-tosyliminobenzyliodinane (PhINTs) resulted in the formation of a FeIV(OH) species (3). The obtained complex 3 was characterized by an array of spectroscopic techniques and represents a rare example of a synthetic FeIV(OH) complex. The reaction of 1 with a one-electron oxidizing agent was previously reported to form a ligand-oxidized FeIII(OH) complex (2). 3 revealed a one-electron reduction potential of −0.22 V vs Fc+/Fc at −15 °C, which is anodically shifted by 150 mV relative to 2 (Ered = −0.37 V vs Fc+/Fc at −15 °C), indicating that 3 is more oxidizing than 2. 3 reacted spontaneously with (4-OMe-C6H4)3C• to form (4-OMe-C6H4)3C(OH) through rebound of the OH group and displayed significantly faster reactivity than 2. Further, activation of hydrocarbon C–H and phenolic O–H bonds by 2 and 3 was compared and showed that 3 is a stronger oxidant than 2. A detailed kinetic study established the occurrence of a concerted proton–electron transfer/hydrogen atom transfer reaction of 3. Studying the one-electron reduction of 2 and 3 using decamethylferrocene (Fc*) revealed a higher ket for 3 than for 2. The study established that the primary coordination sphere around Fe and the redox state of the metal center are crucial in controlling the reactivity of high-valent Fe–OH complexes. Further, a FeIII(OMe) complex (4) was synthesized and thoroughly characterized, including X-ray structure determination. The reaction of 4 with PhINTs resulted in the formation of a FeIV(OMe) species (5), which revealed the presence of two FeIV species with isomer shifts of −0.11 mm/s and 0.17 mm/s in the Mössbauer spectrum and showed a FeIV/FeIII potential of −0.36 V vs the Fc+/Fc couple in acetonitrile at −15 °C. The reactivity of 5 was investigated and compared with that of the FeIV(OH) complex (3).
■ INTRODUCTION
A large family of α-ketoglutarate (α-KG)-dependent oxygenases, containing a 2-His-1-carboxylate facial coordination motif around the FeII active site, generates FeIV═O species using O2 as the oxidant and α-KG as the sacrificial substrate,4,5 which then abstracts the hydrogen atom from the substrate to form FeIII(OH) and a carbon-centered substrate radical. The subsequent rebound of the OH group forms the hydroxylated product and Fe(II). An example of this family of enzymes is prolyl-4-hydroxylase, whose function is described in Scheme 1A. However, in the large family of cytochrome P450 (CYP) enzymes, an FeIV═O porphyrin π-cation radical species, commonly known as Compound I (Cpd-I), cleaves the hydrocarbon C−H bond in the rate-determining step, with ensuing fast rebound of the OH group from the formed FeIV(OH) (Compound II or Cpd-II) to the substrate radical to generate the C−OH bond and a Fe(III)−porphyrin complex (Scheme 1B).2 Since the rebound step is very fast, direct observation of the C−OH bond formation step is challenging. In addition to the archetypal OH rebound reaction of Cpd-II in CYP monooxygenases, the involvement of this intermediate in direct hydrogen atom abstraction (HAA) or proton-coupled oxidation reactions to form a H2O-coordinated [FeIII(porphyrin)(cysteine)] complex has been described.
Examples of such reactions are the C−C bond cleavage of fatty acids by OleT (a bacterial CYP),13−15 the third step of the oxidation of androgens to estrogens by the steroid aromatase CYP 19A1,16 desaturation of valproic acid to 2-propyl-4-pentenoic acid by liver CYP,17 etc. Thus, considerable importance has been given to elucidating the reaction mechanism of Cpd-II for HAA reactions.18,19 In a recent investigation by Green and Mittra, the bond dissociation free energy (BDFE) of the O−H bond of [FeIII(H2O)(porphyrin)(cysteine)] generated from Cpd-II was experimentally determined to be 90 kcal/mol.20 Thus, exploring the reactivity of biomimetic Fe(OH) species has attracted considerable attention. The focus has been on independently synthesizing Fen+(OH) complexes and studying their reactivities to gain insight into the OH rebound and HAA reaction mechanisms. Goldberg et al. reported the spectroscopic characterization and OH rebound studies of a FeIV(OH) complex of a corrole ligand.21,22 Further, it was shown that the species could participate in hydrogen atom transfer (HAT) reactions.23−27 Fout et al. reported the characterization and OH rebound studies of FeIII(OH) complexes.28 Further, the reactivity of a couple of synthetic FeIII(OH) complexes toward the activation of alkane C−H and phenolic O−H bonds has been studied. Borovik et al. described a thorough spectroscopic characterization of a protonated FeIV(O) species.29 In this study, they suggested that the protonation most likely occurred at the ligand backbone, which formed intramolecular hydrogen bonds to stabilize the Fe═O moiety. Further, inspired by the mechanistic cycle of CYP, Nam et al. reported the spectroscopic characterization and nitrogen group rebound studies of a Fe(IV)−amido complex (FeIV−NHR), which was synthesized from a Fe(V)−imido (FeV═NR) species via a HAT reaction.30 Recent studies by Green et al. showed that the basicity of the coordinated OH group of Cpd-II plays a key role in controlling the reactivity, and the pKa of the OH group is determined by the axial ligand present trans to the Fe−OH bond in Cpd-II.11 The coordination of the thiolate ligand at the axial position was suggested to decrease the reduction potential and increase the pKa of FeIV(OH). Very high pKa values (>10) of a couple of Cpd-II intermediates have been determined experimentally.11,31,32 However, examples describing the structure–function relationship are lacking for artificial analogues of Cpd-II. Further, limited information is available about the redox properties and reactivities of FeIV(OH) complexes toward OH rebound, proton-coupled electron transfer (PCET), and oxygen atom transfer (OAT) reactions. Compared to the large number of examples reported for biomimetic Fen+═O species, only one example is known describing detailed characterization and OH rebound reactivity studies of a synthetic FeIV(OH) complex.21
Thus, we sought to explore the coordination chemistry of synthetic FeIV(OH) species. In this study, we report a detailed characterization and reactivity study of a synthetic FeIV(OH) complex (3), which was prepared by reacting the FeIII(OH) complex (1) of a tetraanionic N2O2 donor ligand, HMPAB4− (H4HMPAB = 1,2-bis(2-hydroxy-2-methylpropanamido)benzene), with an excess of N-tosyliminobenzyliodinane (PhINTs) in acetonitrile at low temperatures. The reactivity of 3 was compared to that of the ligand radical-coordinated FeIII(OH) complex (2). Additionally, we describe the preparation of a FeIII(OMe) complex of HMPAB ([FeIII(HMPAB)(OMe)]2− (4)). The reaction of 4 with PhINTs was investigated, which resulted in the generation of a FeIV(OMe) species (5); this species was characterized and its reactivity was further investigated.
Synthesis and Characterization of Fe III Complexes
The synthesis and characterization of the FeIII(OH) complex (1) were reported by us recently.33 We further prepared the FeIII(OMe) complex (4) by reacting equimolar amounts of H4HMPAB and FeCl3 in methanol in the presence of Me4NOH as the base under anaerobic conditions (details are described in the Methods). The compound was crystallized by diffusing diethyl ether into an acetonitrile solution of 4. The X-ray structure of 4 is described in Figure 1. A distorted square-pyramidal geometry around Fe is noted in 4 (τ5 = 0.097).34 The Fe−Namide bond distance is 2.059(3) Å (see S1 and S2).
The UV−vis spectrum of 4 was measured in acetonitrile, which exhibited broad peaks at 360 and 485 nm (Figure S3).The X-band electron paramagnetic resonance (EPR) spectrum of 4 in frozen tetrahydrofuran/methanol (5:2) at 77 K revealed g values at 5.9 and 2.0 (Figure S4), suggesting the presence of high-spin Fe III (S = 5/2) in 4. Further, we determined the solution magnetic moment of 4 by Evans' method, which also revealed the existence of S = 5/2 Fe in 4 (μ eff = 4.9 μ B in CD 3 OD at 25 °C, Figure S5).The cyclic voltammogram of 4 was measured in acetonitrile, which revealed an oxidation event at −0.24 V vs Fc + /Fc couple (Figure S6), which is cathodically shifted compared to 1 (E ox = −0.134V vs Fc + /Fc couple).
Synthesis and Characterization of Fe IV Complexes
Next, we evaluated the reaction of the FeIII complexes with different oxidizing agents. We observed that the reaction of 1 with magic blue ((4-Br-C6H4)3NSbCl6) formed a ligand radical-coordinated FeIII(OH) complex (2), which was characterized by an array of spectroscopic techniques and whose reactivity was explored further.33 It has been shown before that the reaction of different Fe(II) or Fe(III) complexes with PhINTs resulted in the generation of FeIV═NTs or FeV═NTs complexes, respectively.35 Inspired by these studies, we envisioned that the reaction of 1 with PhINTs would also result in the formation of a ligand radical-coordinated FeIV═NTs compound, an expectation based on our earlier observation of a ligand-centered oxidation event in 1. Initially, to explore such chemistry, we set out to conduct the reaction of 1 with PhINTs.
We investigated the reaction of 1 with an excess amount of PhINTs (3 equiv with respect to the Fe complex) in acetonitrile at −25 °C and monitored the reaction by UV− vis spectroscopy.The formation of a new species 3 occurs upon adding PhINTs to 1 (Figures 2A and S7), which revealed absorbance maxima at 365 nm (3360 M −1 cm −1 ), 465 nm (3200 M −1 cm −1 ), and 680 nm (876 M −1 cm −1 ).Strikingly, the UV−vis features of 3 are very similar to species (2) obtained by adding magic blue to 1 (Figure 2A).To investigate the origin of the transitions in the oxidized Fe complexes, TD-DFT calculations were performed, as shown in Figure S7B.While 1 shows a featureless optical spectrum, 2 and 3 demonstrate 2 peaks in the visible region, which is consistent with experimental data.The calculated optical spectra of 2 and 3 are additionally very similar.A titration experiment was performed to understand the exact amounts of PhINTs needed to generate the intermediate 3 completely, which revealed no additional increase in absorbance maxima at 365 and 465 nm after the addition of more than ∼0.6 equiv of PhINTs (Figure S8) to 1.The experiment infers that the one-electron oxidation of the Fe III (OH) complex (1) requires 0.5 equiv of PhINTs.However, in the presence of an excess of PhINTs in the solution, we speculate the coordination of PhINTs to the Fe center (trans to the Fe−OH, Scheme 2), which we suggest based on spectroscopic measurements and a drastic reactivity difference (vide infra).Further, species 3 remained EPR silent when the X-band EPR spectrum was measured in frozen acetonitrile at 77 K (Figure S9).The 1 H NMR spectrum of 3 revealed paramagnetically shifted proton resonances (Figure S10).Measurement of solution magnetic moment of 3 by Evans' method showed a μ eff value of 4.73 μ B (Figure S10), which corresponds to the presence of an S = 2 ground state of Fe in 3 and suggests that 3 is a one-electron oxidized species of 1.
The electrochemical properties of 3 were additionally investigated in acetonitrile at ca. −15 °C in the presence of n BuNPF 6 as the supporting electrolyte.The one-electron reduction potential of 3 was observed at −0.22 V versus the Fc + /Fc couple, which is ca.150 mV anodically shifted compared to the one-electron reduction potential of 2 at −15 °C, which was observed at −0.37 V vs Fc + /Fc (Figures 2B and S11).The results imply the presence of different coordination environments around Fe in 2 and 3.The redox events observed in 2 and 3 can be assigned as ligand vs Fecentered, respectively (vide infra).
Complexes 1 and 3 were subsequently investigated by X-ray absorption near-edge structure (XANES) and extended X-ray absorption fine structure (EXAFS) spectroscopy (Figure 3). Complex 3, generated with PhINTs, displays a positive shift of the Fe K-edge energy of 0.95 eV, from 7125.13 to 7126.08 eV at a normalized absorption of 0.6, reflecting the higher ionization energy required for ejecting a core 1s electron from a more positively charged FeIV ion.36,37−40 This is further corroborated by the observed upshift (∼0.35 eV) of the pre-edge energy transition, at 7114.32 eV in 3 relative to 7113.97 eV in 1. A pre-edge area of 19.3 units was obtained for 1 at 7113.97 eV, which is close to and consistent with that of previously studied five-coordinated ferric centers, demonstrating pre-edge areas of ∼14.1 units at 7113 eV (Figure 3A inset, 3B, Table S3).33 By contrast, complex 3 demonstrates a pre-edge area of 23.5 units at 7114.32 eV (Figure 3A inset, 3B, Table S3), comparable to the reported high-spin Fe(IV) oxo complexes, where an area of ∼25 units has been observed.38,40,44 The less intense pre-edge feature of complex 3 vs iron(IV) oxo complexes of tetraamido macrocyclic ligands (TAMLs)36,45,46 is due to its higher coordination number and more centrosymmetric geometry in comparison to five-coordinated FeIV complexes. Indeed, centrosymmetric complexes have been shown to have a decreased intensity in their pre-edge features due to a decrease in the metal 4p mixing into the 3d orbitals, which reduces the electric-dipole-allowed 1s-to-4p character of this transition.41 This effect was further corroborated through time-dependent density functional theory (TD-DFT) calculations (Figure S1). TD-DFT-calculated five-coordinated FeIV═O and FeIV−OH complexes of the HMPAB ligand display higher pre-edge intensities in comparison to the FeIV hydroxo complex bound to the oxygen atom of the PhINTs ligand, in agreement with experimental pre-edge trends (Figures S1 and 3A inset), pointing toward a six-coordinated geometry in 3. The increased coordination of complex 3 in comparison to the five-coordinated FeIV(O) or FeIV(OH) complexes was further proved by its EXAFS spectral data illustrated in Figure 3C.
The EXAFS spectra of the Fe(III) complex (1) display 2 peaks corresponding to the distinctive Fe−N and Fe−O bond distances, whereas the oxidized Fe(IV) complex (3) shows 2 peaks (I, II) at comparatively lower apparent distances, corresponding to the shortened Fe−O/N bond distances, together with a weak shoulder (III) arising from the bond between the FeIV metal center and the oxygen of the PhINTs ligand (Scheme 2). EXAFS fits for the first coordination sphere and the entire spectrum for the Fe-based complexes are further shown in Table S4, Figure 3C inset, and Figure S12 in the Supporting Information. In our previous study, we reported the EXAFS spectrum of the Fe(III) complex (1), which clearly resolves 3 Fe−O distances at 1.88 Å and 2 Fe−N distances at 2.01 Å, in close agreement with the obtained XRD data (Table S4, Figure 3C, Supporting Information).33 By contrast, EXAFS fits of the Fe(IV) complex (3) show 3 shortened Fe−O bond distances at 1.82 Å (fit 9, Table S4), 2 Fe−N distances at 1.97 Å, and an elongated Fe−O bond distance at 2.13 Å. It is important to note here that the ligation of PhINTs to a Co(II) center through coordination of the O/N atom has been demonstrated before.47 The EXAFS data further reveal that the Fe−O(OH) distance in 3 is ∼1.82 Å, which is significantly longer than the reported Fe═O bond length of 1.64 Å in an FeIV(O) complex with a TAML.36 Further, the calculated FeIV(O) complex of HMPAB (Table S5, Supporting Information) as well as other examples showed Fe═O bond lengths < 1.7 Å.38,40,48−51 This excludes the possibility of the presence of a shortened Fe═O bond in 3. Furthermore, a Fe═N distance of 1.65 ± 0.04 Å was obtained in the [FeV(TAML)(NTs)]− complex,35 which is also much shorter than the core bond distances observed in 3. This comparison also discards the possibility of the formation of a Fe−imido backbone in 3. Nonetheless, the Fe−O distance of 3 is close to the Fe−O bond length observed in the [FeIV(ttpc)(OH)] complex (1.857(3) Å; H3ttpc = tris(2,4,6-triphenyl)phenyl corrole ligand)21 and the Fe−N distance reported in the [FeIV(TAML)(NHTs)]− species (1.89 Å).30
The EXAFS data further agreed well with the DFT-calculated structure of 3 (vide supra). We investigated, in this case, the structure of 3 with an S = 1 or S = 2 ground state and a six-coordinated geometry around Fe, where the sixth position is occupied by PhINTs through the O/N donor atoms (Appendix). While the optimized structure of FeIV coordinated by PhINTs through the N donor atom revealed a short Fe−N bond distance of 1.965 Å and a decreased calculated pre-edge intensity in comparison to 1 (Figure S1), the FeIV−PhINTs complex bound through an O donor atom (S = 2 ground state) showed pre-edge intensities comparable to 1 (Figure 3A inset), as previously discussed. Furthermore, the optimized structure of 3, composed of an FeIV(OH) complex with a coordinated O atom of PhINTs, revealed an Fe−O(OH) distance of 1.847 Å and an elongated Fe−O(PhINTs) bond length of 2.11 Å (Figure 4), in agreement with the experimentally obtained data (Table S5). It is important to remark that the DFT-optimized structure of an FeIV(O) complex of HMPAB4− revealed a Fe−O distance of 1.657 Å (Table S5), which is inconsistent with the EXAFS data of 3, further showing that 3 is a six-coordinated FeIV(OH) species with a bound PhINTs ligand. Thus, based on the experimental observations, we suggest here that the formation of intermediate 3 requires 1.5 equiv of PhINTs, 0.5 equiv of which is used to oxidize Fe(III) to Fe(IV), while another equivalent binds to the Fe center as an axial ligand.
Thus, the tetraanionic ligand scaffold (HMPAB4−) used in this study is not capable of forming Fe═NTs intermediates, which is in stark contrast to the [FeIII(TAML)]− complex, where the formation of a [FeV(TAML)(NTs)]− intermediate was observed.35 This sharp reactivity difference between the two ligand systems is noteworthy and reveals that the geometry of the ligand is crucial for the generation of Fe═X (X = NR or O) species. Nonetheless, stabilization of MnV(O) species has been achieved by the use of a HMPAB4− ligand scaffold.52,53 Next, we examined the reaction of the FeIII(OMe) complex (4) with PhINTs, which also resulted in spectral features similar to those of the reaction of 1 with PhINTs (Figure S13). By analogy with the reactivity of 1, we presume the formation of a PhINTs-coordinated FeIV(OMe) complex (5). Further, no peaks were observed in the X-band EPR spectrum of 5 at 77 K (Figure S14). The cyclic voltammogram of species 5 was then measured in acetonitrile at −15 °C using nBu4NPF6 as the supporting electrolyte, which revealed a reduction event at a half-wave potential of −0.36 V vs the Fc+/Fc couple (Figure S15), which is cathodically shifted compared to 3. Next, we determined the 57Fe Mössbauer spectrum of 5 at 77 K, which is shown in Figure S16 and revealed contributions of two FeIV complexes: an S = 1 species with an isomer shift (δ) of −0.11 mm/s (ΔEq = 0.77 mm/s) and an S = 2 species with an isomer shift (δ) of 0.17 mm/s (ΔEq = 1.43 mm/s). Although we are unable to report here the Mössbauer spectrum of 3, the data for the analogous compound 5 suggest the existence of FeIV in 3, in corroboration with the XANES and EXAFS data.
Considering the different electronic structures of 3, we set out to explore the reactivity studies of 3 and compare them with 2. Additionally, we performed the reactivity studies of 5.
The reaction was monitored by UV−vis spectroscopy, and a k2 value of (2.46 ± 0.02) × 10² M⁻¹ s⁻¹ was estimated from the slope of a plot of 1/[3] vs time (s). Analysis of the reaction products by 1H NMR spectroscopy revealed the formation of 67% of (4-OMe-C6H4)3COH as the product (Figure S17). Thus, the rebound of the OH group of 3 to the carbon radical occurs spontaneously, which is a functional mimic of Compound II of the large family of CYPs. As the one-electron oxidation potential of (4-OMe-C6H4)3C• is lower than the one-electron reduction potential of 3, initial ET from the radical to FeIV followed by attack of the hydroxide is another possibility for the formation of the C−OH bond, which we do not exclude.
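For completeness, the short snippet below shows how such a k2 value follows from the integrated equimolar second-order rate law, 1/[3]_t = 1/[3]_0 + k2·t; the time and concentration values are synthetic placeholders chosen only to illustrate the linear fit, not the measured data set.

import numpy as np

t = np.array([0.0, 5.0, 10.0, 20.0, 40.0])                      # time, s
conc = np.array([2.00e-4, 1.60e-4, 1.33e-4, 1.00e-4, 0.67e-4])  # [3], M (illustrative)

# Slope of 1/[3] versus time gives the second-order rate constant k2.
k2, intercept = np.polyfit(t, 1.0 / conc, 1)
print(f"k2 = {k2:.3g} M^-1 s^-1, 1/[3]_0 = {intercept:.3g} M^-1")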
Further, to compare the hydroxide rebound reaction of 3 with that of 2, we investigated the reaction of 2 with (4-OMe-C6H4)3C• in 1:4 acetonitrile/toluene (v/v) at −60 °C. Strikingly, no change in the UV−vis spectral features was noticed over a period of 200 s (Figure S18). However, 2 reacted spontaneously with (4-OMe-C6H4)3C• at a higher temperature. This reactivity difference between 3 and 2 suggests that the PhINTs coordinated trans to the OH group of FeIV in 3 enhances the reactivity of 3 relative to 2.
Next, we examined the reaction of 5 with (4-OMe-C 6 H 4 ) 3 C • in 1:4 acetonitrile/toluene (v/v) at −25 °C. However, the formation of (4-OMe-C 6 H 4 ) 3 C(OMe) was not observed in the reaction by GC−MS and 1 H NMR spectroscopic studies. The experiment suggests that Fe(IV)−OMe bond cleavage of 5 does not occur during the reaction of 5 with (4-OMe-C 6 H 4 ) 3 C • .
One-Electron Reduction Reactions
We subsequently examined the one-electron reduction reaction of 3 using decamethylferrocene (Fc*) as the reducing agent. The addition of 1 equiv of Fc* to an acetonitrile solution of 3 at −25 °C resulted in the decay of the intermediate with a k et value of 7.8 × 10 2 M −1 s −1 (Figure S19). The reaction resulted in the near-quantitative formation of the decamethylferrocenium cation (Fc* + ), as quantified by UV−vis spectroscopy. When the reduction reaction of 2 was conducted in the presence of Fc* at −20 °C, a k et value of 1.87 × 10 2 M −1 s −1 was obtained (Figure S20). The slower reactivity of 2 compared to 3 illustrates the different electronic structures of 2 and 3.
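The Fc* + yield quoted above is typically back-calculated from its visible absorption band using the Beer-Lambert law. The snippet below only illustrates that arithmetic; the wavelength (~780 nm), molar absorptivity, absorbance, and initial concentration are assumed placeholder values, not the ones used in this work.

```python
def beer_lambert_conc(absorbance, epsilon, path_cm=1.0):
    """Concentration (M) from the Beer-Lambert law, A = epsilon * l * c."""
    return absorbance / (epsilon * path_cm)

# Assumed values: Fc*+ band near 780 nm with epsilon ~ 5.0e2 M^-1 cm^-1;
# the absorbance and the initial [3] (0.2 mM) are placeholders.
conc_fc_plus = beer_lambert_conc(absorbance=0.095, epsilon=5.0e2)
yield_pct = 100.0 * conc_fc_plus / 2.0e-4
print(f"[Fc*+] ~ {conc_fc_plus:.2e} M (~{yield_pct:.0f}% of the Fe complex)")
```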
Reactivity Studies of Fe Complexes with para-Substituted 2,6-Di-tert-butylphenols
Next, we explored the reactivity of 3 with different 4-X-2,6-di-tert-butylphenol (4-X-DTBP; X = OMe, Me, Et, t Bu, H, Br, and OAc) substrates. The reaction of 3 with 4-X-DTBP substrates was performed in acetonitrile at −45 °C and was monitored by UV−vis spectroscopy following the decay of the intermediate at 465 nm. The addition of 1 equiv of 4-OMe-DTBP to 3 resulted in the immediate decomposition of the intermediate and the formation of the 4-methoxy-2,6-di-tert-butylphenoxy radical at 390 and 406 nm in the UV−vis spectrum (Figure 6A). An isosbestic point was observed at 412 nm. Analysis of the reaction solution by EPR spectroscopy exhibited the formation of 52% of the 4-methoxy-2,6-di-tert-butylphenoxy radical in the reaction solution (Figure S21). Additionally, analysis of the reaction solution by 1 H NMR spectroscopy revealed the formation of 24% of 2,6-di-tert-butyl-1,4-benzoquinone as the 4-OMe-DTBP-derived reaction product (Figure S22). A k 2 value of 30.1 ± 0.31 M −1 s −1 was obtained from the slope of a plot of 1/[3] vs time (s) for the reaction of 3 with 1 equiv of 4-OMe-DTBP (Figure S23).
To further understand the reaction of 3 toward the activation of the phenolic O−H bond, we performed the reaction of 3 with the other 4-X-DTBP substrates (X = Me, Et, t Bu, H, Br, and OAc) in acetonitrile at −45 °C in the presence of excess substrates (pseudo-first-order conditions). The determination of k 2 values and the product analysis data for these substrates are described in Figures S24−S47. Tables S6 and S7 list the observed k 2 values and the reaction products, respectively. Plots of log k 2 versus σ p + (Hammett plot) and versus the O−H bond dissociation energy (BDE) of the 4-X-DTBP phenols are shown in Figure 6. In each of the plots, a linear relation was observed from X = OMe to t Bu. The trend broke down at 4-H-DTBP, and no trend was observed with the electron-deficient phenol substrates. Interestingly, we observed that the reactivity of 2 toward 4-H-DTBP falls on the same line as that of the other electron-rich 4-X-DTBP substrates. 33 A linear relationship in a plot of log k 2 vs BDE is indicative of a rate-limiting O−H bond cleavage pathway, as reported for the phenol oxidation reactions of Mn−oxo or Ru−oxo species. 54,55 Further, the results described in Figure 6 demonstrate a change in the reaction mechanism upon going from electron-rich to electron-deficient substrates, and such a changeover happens at 4-H-DTBP. The observation of C−C bond formation products in the case of the electron-deficient phenol substrates implies the transfer of an electron and a proton to 3. From thermodynamic considerations, electron transfer from phenol to Fe IV is not feasible because of the lower E 1/2 value of 3. We propose that a rate-limiting proton transfer reaction followed by a fast electron transfer reaction occurs in the case of the electron-deficient phenol substrates. Such a reaction mechanism is also anticipated in the case of the oxidation of electron-deficient phenols by a Mn(V)−imido complex. 56 We further correlated the rate constants with the redox potentials of the 4-X-DTBP substrates. The observed k 2 values increased upon decreasing the redox potential of the substrates, and the trend was again disrupted at 4-H-DTBP. The results imply that the redox driving force controls the reaction in the electron-rich regime. A plot of (RT/F) ln k 2 vs E ox of the phenol derivatives showed a linear correlation, as displayed in Figure 6D, which revealed a negative slope of −0.15 with the electron-rich substrates. No pattern was followed for the other 4-X-DTBP derivatives (X = H, OAc, and Br), implying the occurrence of a different reaction mechanism. A slope of 0 in the Marcus plot corresponds to a pure hydrogen atom transfer (HAT) reaction. An example of this type of reaction is the reaction of the cumylperoxyl radical with 4-X-DTBP substrates, where a slope of −0.05 is reported. 57 However, for rate-limiting electron transfer and fast proton transfer reactions, a slope of −0.5 is expected. 58 Further, a slope of −1.0 should be obtained for rate-determining proton transfer and equilibrium electron transfer reactions. 58 If the proton and electron transfer reactions happen at comparable rates, then a slope value between −1.0 and −0.5 is expected, 59−61 as reported for the reaction of phenol substrates with (μ−η 2 :η 2 -peroxo)dicopper(II) 62 and Cu III (μ-O) 2 Ni III complexes. 63
The observed slope value (−0.15) in the present study is thus consistent with the occurrence of a hydrogen atom transfer (HAT)/concerted proton−electron transfer (CPET) mechanism. Slopes of −0.19 and −0.12 have been reported for the HAT reactions of Fe IV (OH)(ttppc) and Mn IV (OH)(ttppc) species with 4-X-DTBP derivatives, respectively. 23 We further compared the k 2 values of the 4-X-DTBP oxidation reactions of 2 and 3 at −25 °C, which are presented in Table 1 and Figures S48−S59. For all the 4-X-DTBP substrates, a much higher reactivity was observed with 3 than with 2 (Figure 7).
For example, a more than 120-fold higher k 2 was obtained for 3 with 4-Et-DTBP than for 2 under identical reaction conditions. Likewise, 167 and 114 times faster reactions of 3 than 2 were noted when 4-Me-DTBP and 4-t Bu-DTBP were used as the substrates, respectively. Additionally, we compared the k 2 values of 2 and 3 using 4-Et-DTBP at different temperatures (−25 to −45 °C), as displayed in Table S8. At all temperatures, a faster reactivity of 3 than 2 was noted; however, the ratio k 2 (complex 3)/k 2 (complex 2) increased upon decreasing the temperature. Further, a very slow reaction of 2 with 4-Br-DTBP was observed in acetonitrile, whereas a very fast reaction was noted when 3 was used as the oxidant. Thus, the comparison of the PCET reactivity clearly demonstrates that 3 is more oxidizing than 2. Further, a plot of (RT/F) ln k 2 vs E ox of the 4-X-DTBP substrates at −25 °C exhibited a slope of −0.076 (Figure S59), which indicates that the dependence of 3 on the substrate redox potential is weaker at higher temperatures and that a HAT/CPET mechanism is favored. Examination of the kinetic isotope effect (KIE) using 4-OMe-DTBP and 3 gave a k 2 H /k 2 D value of 1.7; such a small KIE value is nonetheless consistent with a phenol oxidation reaction following a HAT/CPET pathway. In addition, we also measured a k 2 value of 2.27 M −1 s −1 for the reaction of 3 with 2,6-di-tert-butylphenol-d (Figures S38 and S39), corresponding to a KIE value of 1.36.
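The Hammett, BDE, and Marcus-type correlations discussed above (Figure 6 and Figure S59) are all obtained from simple linear regressions of the measured k 2 values. A minimal sketch of that analysis is given below; the substrate descriptors and rate constants are illustrative placeholders rather than the tabulated values, and RT/F is evaluated at −45 °C.

```python
import numpy as np
from scipy.stats import linregress

# Placeholder data for the electron-rich substrates (X = OMe, Me, Et, tBu);
# sigma_p+, E_ox, and k2 below are illustrative, not the Table S6/S7 values.
sigma_p_plus = np.array([-0.78, -0.31, -0.30, -0.26])
E_ox = np.array([1.12, 1.26, 1.25, 1.29])        # V (placeholders)
k2 = np.array([30.1, 12.0, 10.5, 8.3])           # M^-1 s^-1 (placeholders)

hammett = linregress(sigma_p_plus, np.log10(k2))
print(f"Hammett slope (rho+) ~ {hammett.slope:.2f}")

RT_over_F = 8.314 * 228.15 / 96485.0             # ~0.0197 V at -45 C
marcus = linregress(E_ox, RT_over_F * np.log(k2))
print(f"Marcus-type slope ~ {marcus.slope:.2f}")  # compare with the reported -0.15
```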
We then performed preliminary reactivity studies of 5 with 4-Me-DTBP and 4-Et-DTBP in acetonitrile at −25 °C under pseudo-first-order reaction conditions. k 2 values of 11.7 and 10.3 M −1 s −1 were obtained for 4-Me-DTBP and 4-Et-DTBP, respectively (Figures S60−S64), which are lower than the k 2 values obtained for the reaction of 3 with these substrates. The slower reactivity of 5 compared to 3 can be correlated with the cathodically shifted E red value of 5 relative to 3.
Reactivity Studies of Fe Complexes with Hydrocarbon Substrates
We further explored the hydrocarbon C−H bond activation reactions of 3 in acetonitrile at −10 °C (Scheme 3). The addition of 1 equiv of BNAH (1-benzyl-1,4-dihydronicotinamide) to an acetonitrile solution of 3 resulted in the immediate decay of the intermediate, as monitored by UV−vis spectroscopy (Figure 8A). The decay of 3 at 465 nm followed a second-order rate equation, and a k 2 value of 221.6 ± 2.7 M −1 s −1 was estimated from the slope of a plot of 1/[3] vs time (s) (Figure 8B). We observed that the reaction slowed down in the presence of deuterated BNAH, resulting in a k 2 D value of 39.6 ± 0.3 M −1 s −1 and a primary KIE value of 5.6 (Figure 8B). Further, Eyring analysis was carried out to estimate the activation parameters, which established ΔH ‡ and ΔS ‡ values of 9.4 kcal mol −1 and −11.8 cal K −1 mol −1 , respectively (Figures S65 and S66, Table S9). The observed ΔS ‡ value suggests the occurrence of a HAT/CPET pathway instead of a hydride transfer reaction. A ΔS ‡ of ∼−20 cal K −1 mol −1 has been reported for the HAT reaction between M−oxo species and BNAH. 53,64 However, in the case of the involvement of a hydride transfer reaction, a more negative ΔS ‡ is expected, as reported for the oxidation of 2-propanol. 65 Additionally, 3 was found to react with substrates having relatively higher bond dissociation energies (BDEs), such as 9,10-dihydroanthracene (9,10-DHA) and 1,4-cyclohexadiene (1,4-CHD). However, ethylbenzene, having a BDE higher than 80 kcal/mol, was found to be unreactive toward 3 at −10 °C. The Brønsted−Evans−Polanyi (BEP) correlation plot revealed a coefficient (α) of −0.6 (Figure 8D, Table S10). Although the substrate scope is limited for the C−H abstraction reactions of 3, similar plots have been reported for other high-valent metal complexes in HAT/CPET reactions. 6,66−68 Thus, based on the BEP plot and the observed KIE for BNAH, we propose a CPET/HAT reaction mechanism for the C−H activation reactions by 3. Interestingly, we found that 9,10-DHA remained unreactive toward 2 in acetonitrile at −25 °C (Figure 8C), suggesting that 2 is a sluggish oxidant compared to 3. However, 2 reacted with BNAH under second-order reaction conditions, resulting in a k 2 value of 65.02 ± 0.55 M −1 s −1 (Figure S75) at −10 °C, considerably less than the k 2 value of 3. The reaction of 2 with BNAD (1-benzyl-1,4-dihydropyridine-4,4-d 2 -3-carboxamide) yielded a k 2 value of 31.04 ± 0.27 M −1 s −1 (Figure S75). A primary KIE of 2.09 was obtained for the reaction of 2 with BNAH. Thus, the comparison of the reactivity studies with BNAH and 9,10-DHA reveals the different electronic structures of 2 and 3, showing that 3 is a better oxidant than 2.
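The Eyring analysis mentioned above extracts ΔH ‡ and ΔS ‡ from the temperature dependence of k 2 via ln(k 2 /T) = −ΔH ‡ /RT + ln(k B /h) + ΔS ‡ /R. The sketch below illustrates that fit; the rate constants listed are illustrative values generated to be consistent with the reported parameters, not the Table S9 data.

```python
import numpy as np

R_KCAL = 1.987e-3        # kcal mol^-1 K^-1
KB_OVER_H = 2.083e10     # K^-1 s^-1

# Illustrative k2(T) values for 3 + BNAH (placeholders, not the Table S9 data).
T = np.array([253.15, 258.15, 263.15, 268.15])   # K
k2 = np.array([107.0, 156.0, 226.0, 321.0])      # M^-1 s^-1

# Eyring plot: ln(k2/T) vs 1/T; slope = -dH/R, intercept = ln(kB/h) + dS/R
slope, intercept = np.polyfit(1.0 / T, np.log(k2 / T), 1)
dH = -slope * R_KCAL                                    # kcal/mol
dS = (intercept - np.log(KB_OVER_H)) * R_KCAL * 1.0e3   # cal mol^-1 K^-1
print(f"dH(act) ~ {dH:.1f} kcal/mol, dS(act) ~ {dS:.1f} cal K^-1 mol^-1")
```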
Reactivity Studies of Fe Complexes with Triarylphosphine Derivatives
Reactivity studies of high-valent M−OH/M−OH 2 species toward OAT reactions are rarely reported in the literature. 69 Thus, we set out to investigate the reaction of 3 with tris(4-X-phenyl)phosphine derivatives (X = OMe, Me, H, and Cl). The addition of a 50-fold excess of Ph 3 P to an acetonitrile solution of 3 at −10 °C resulted in the decay of the features of the intermediate (Figure 9). Analysis of the reaction products by ESI-mass spectrometry and 31 P NMR spectroscopy revealed the formation of 25% of Ph 3 P=O as the product (Figure S76). Likewise, we observed the formation of 44% of (4-OMe-C 6 H 4 ) 3 P=O when (4-OMe-C 6 H 4 ) 3 P was used as the substrate (Figure S77). The product yields imply that 2 equiv of oxidant is required to convert 1 equiv of Ar 3 P to Ar 3 PO. Analysis of the reaction solution after the reaction of 3 with Ph 3 P by EPR spectroscopy revealed the formation of Fe(III) complexes. The observation of Fe(III) species in the reaction solution demonstrates that 2 equiv of 3 is required to oxidize 1 equiv of PPh 3 . Thus, the reaction of 3 with Ar 3 P substrates showed that an Fe IV (OH) complex is capable of participating in OAT reactions. An 18 O-labeling experiment was additionally conducted using 3 and (4-OMe-C 6 H 4 ) 3 P in the presence of added H 2 18 O, revealing the incorporation of ∼90% of 18 O in the formed (4-OMe-C 6 H 4 ) 3 PO and inferring that the OH group of 3 is exchangeable (Figure S78). Interestingly, no nitrogen-group-transfer product to the Ar 3 P substrates was noted in the 31 P NMR and ESI-mass data, implying that the PhINTs weakly coordinated to Fe IV (OH) is not capable of forming Ar 3 P=NTs-type products.
We subsequently explored the kinetics of the reaction of Ar 3 P substrates with 3 in acetonitrile at −10 °C (Figures S79− S89).The k obs values were found to increase linearly with increasing concentrations of substrates (Figures S80, S83, S86, and S89).However, a y-axis intercept was observed in all of the k obs vs [Ar 3 P] plots, suggesting that a side reaction pathway is operating in the presence of Ar 3 P substrates.The k 2 values for all substrates are listed in Table S11.A plot of log k rel vs σ p + yielded a linear correlation plot with a slope of −1.78 (Figure 9C), corroborating the electrophilic nature of the Fe IV (OH) intermediate species.Further, a plot of log k 2 vs E ox of the Ar 3 P substrates yielded a linear correlation and revealed a decrease in reactivity with an increase in the oxidation potential of Ar 3 P substrates (Figure 9D), corroborating that 3 is working as an electrophile in the oxidation of Ar 3 P substrates.
However, a very slow reaction was observed when the reaction of 2 was conducted with an excess of PPh 3 (50-fold excess) at −10 °C (Figure 9B), and we were unable to obtain rate constant values. The OAT studies further established the enhanced reactivity of 3 relative to 2 and also showed for the first time that an Fe IV (OH) species is capable of participating in OAT reactions.
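Under the pseudo-first-order conditions described above, k 2 is recovered from the slope of k obs versus [Ar 3 P], with any nonzero intercept reflecting a parallel, substrate-independent decay of 3. A minimal sketch is shown below with placeholder numbers (not the Table S11 data).

```python
import numpy as np
from scipy.stats import linregress

# Placeholder pseudo-first-order data for 3 + PPh3 (not the Table S11 values).
conc_P = np.array([0.005, 0.010, 0.020, 0.040])   # M, excess substrate
k_obs = np.array([0.011, 0.019, 0.035, 0.067])    # s^-1

fit = linregress(conc_P, k_obs)
print(f"k2 ~ {fit.slope:.2f} M^-1 s^-1 (slope); "
      f"background decay ~ {fit.intercept:.3f} s^-1 (intercept)")
```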
■ CONCLUSIONS
The present study demonstrates the reaction of an Fe III (OH) complex (1) with excess PhINTs, which resulted in the formation of an Fe IV (OH) complex (3) with PhINTs coordinated at the axial position. In contrast, the reaction of 1 with a one-electron oxidizing agent was shown to generate a ligand radical-coordinated Fe III (OH) species (2). 33 We suggest that the coordination of PhINTs to Fe causes metal-based electron removal rather than ligand oxidation in 1. The different coordination geometries around Fe in 2 and 3 are supported by cyclic voltammetry studies: the one-electron reduction potential of 3 is anodically shifted by 150 mV relative to that of 2, which is because of the presence of the additional ligand (PhINTs) around Fe in 3. Further, the X-ray absorption spectroscopic investigations of 3 support the +IV oxidation state and an octahedral geometry around Fe in 3. The Fe−OH distance in 3 was found to be ∼1.84 Å by the EXAFS technique, which is considerably elongated compared to the Fe−O distance expected for an Fe IV =O species. 70,71 It is important to remark here that the Fe−O bond lengths of Cpd-II in chloroperoxidase 10 and P450 (CYP158-II) 11 were found to be 1.82 and 1.84 Å, respectively. We also present the synthesis and characterization of an Fe III (OMe) complex (4). The reaction of this complex with PhINTs resulted in the generation of an Fe IV (OMe) complex (5), for which coordination of PhINTs to the Fe center has been suggested. Mössbauer data of the latter complex support the formation of the +IV Fe metal center in 5. Electrochemical measurements revealed that the one-electron reduction potential of 5 is cathodically shifted relative to that of 3.
The reactivity of the Fe IV (OH) species (3) was compared with that of the ligand radical-coordinated Fe III (OH) complex (2) toward PCET and OAT reactions. We suggest that the observed higher reactivity of 3 relative to 2 arises from the PhINTs coordinated at the sixth position in 3. In addition, reactivity studies of 5 revealed slower reactions compared to 3. Overall, the study describes detailed OH rebound, PCET, OAT, and ET reaction studies of high-valent Fe−OH complexes and highlights the importance of the electronic structure of Fe in controlling the reactivity.
■ METHODS
The chemicals and solvents used in this study were purchased from commercial sources and used as received unless otherwise mentioned. The iron(III) complexes used in this study were prepared inside a N 2 -filled glovebox. (4-OMe-C 6 H 4 ) 3 C • , 21 2,6-di-tert-butyl-4-methoxyphenol-d, 57 BNAH, 72 and BNAD 73 were prepared following literature procedures. We recently described the synthesis and characterization of the Fe III (OH) (1) and ligand-oxidized Fe III (OH) (2) complexes. 33 1 H NMR spectra of organic molecules and Fe complexes were recorded on a Bruker 500 MHz (DPX-500) or Bruker 400 MHz (DPX-400) NMR spectrometer. The ESI-mass data reported in this study were measured using a Waters Xevo-G2XQTOF instrument. IR spectra of the Fe complexes were measured in KBr pellets using a Nicolet Protégé 460 ESP instrument. CHN analysis of all Fe complexes was performed on a PerkinElmer 2400 series II CHNS/O instrument. UV−vis spectra of Fe complexes were measured using an Agilent diode array 8454 spectrophotometer connected to a Unisoku cryostat. Mössbauer data of intermediate 5 were recorded using a 57 Co source in a Rh matrix on an alternating constant-acceleration Wissel Mössbauer spectrometer operated in transmission mode using the Janis Research SuperVariTemp setup. Isomer shifts are reported relative to iron foil at ambient temperature. Simulation of the experimental data was done using Igor Pro 8 software.
Caution: Although no problems were encountered during the synthesis of the complex, perchlorate salts are potentially explosive and should be handled with care! 74
Synthesis of (NMe 4 ) 2 [Fe III (HMPAB)(OMe)] (4)
A methanolic solution (3 mL) of FeCl 3 (0.08 g, 0.5 mmol) was added dropwise to a stirring solution of H 4 HMPAB (0.15 g, 0.5 mmol) and Me 4 NOH (0.85 g, 25% solution in methanol; 2.25 mmol, 4.5 equiv) in methanol (2 mL) inside a nitrogen-filled glovebox. The resulting reaction solution was allowed to stir at around 25 °C for 2 h. Then, the solvent was removed to dryness, and acetonitrile (ca. 3 mL) was added to dissolve the residue. An excess of diethyl ether was slowly introduced into the reaction solution, which was allowed to stir at room temperature. Then, the reaction mixture was placed at −20 °C inside a refrigerator overnight. Precipitation of a yellowish-brown solid occurred. The solid compound was separated and dried under vacuum. Single crystals suitable for X-ray diffraction were obtained upon diffusing diethyl ether into an acetonitrile solution of the complex at room temperature. Yield: 0.11 g (42%).

The approximately 50% 57 Fe-enriched (NMe 4 ) 2 [Fe III (HMPAB)(OMe)] complex was prepared following a similar procedure as described above. The 57 Fe isotope-enriched FeCl 3 was prepared by mixing naturally abundant Fe and 57 Fe metal in a 1:1 ratio and refluxing for 24 h with aqueous HCl under air.

Product Analysis

10 mL of acetonitrile was added to complex 1 (3.7 mg, 0.0072 mmol) in a round-bottom (RB) flask inside a nitrogen-filled glovebox, and the RB flask was sealed with a septum. The reaction solution was then placed in an Eyela low-temperature reaction bath precooled to −10 °C. An acetonitrile solution (1 mL) of PhINTs (4 mg, 0.0108 mmol) was slowly introduced into the reaction solution, which was allowed to stir at −10 °C for 5−7 min for generation of the intermediate species (3). Then, the different substrates were introduced into the reaction solution slowly using a syringe, maintaining the N 2 atmosphere, and the solution was allowed to stir at −10 °C (described below for the different substrates).
Reaction of 3 with (4-OMe-C 6 H 4 ) 3 C •
(4-OMe-C 6 H 4 ) 3 C • (0.0072 mmol, prepared in situ) dissolved in 1 mL of 1:4 acetonitrile/toluene solution (v/v) was slowly introduced into an acetonitrile solution (10 mL) of 3 (0.0072 mmol) at −10 °C. The reaction mixture was allowed to stir for 25−30 min, maintaining the temperature at −10 °C. Then, the reaction mixture was warmed to room temperature, and the solvent was removed under high vacuum. The resulting residue was redissolved in CDCl 3 , and 1 H NMR data of the crude reaction mixture was recorded without further purification. (4-OMe-C 6 H 4 ) 3 C−OH was quantified using trimethoxybenzene as an internal standard.
A blank experiment was also performed in the absence of intermediate 3 under the same experimental conditions, which revealed a trace amount of (4-OMe-C 6 H 4 ) 3 C−OH.
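Quantification against an internal standard, as used here and in the other product analyses, follows the usual 1 H NMR relationship: moles of product = (I product /n H,product )/(I standard /n H,standard ) × moles of standard. The snippet below only illustrates that arithmetic; the integrals and proton counts are hypothetical, not the recorded values.

```python
def nmr_yield_percent(i_product, nH_product, i_standard, nH_standard,
                      mol_standard, mol_theoretical):
    """Percent yield from 1H NMR integrals versus an internal standard."""
    mol_product = (i_product / nH_product) / (i_standard / nH_standard) * mol_standard
    return 100.0 * mol_product / mol_theoretical

# Hypothetical integrals: trityl alcohol OMe singlet (9H) vs the
# trimethoxybenzene OMe singlet (9H), with 0.0072 mmol of standard.
print(nmr_yield_percent(i_product=0.67, nH_product=9,
                        i_standard=1.00, nH_standard=9,
                        mol_standard=7.2e-6, mol_theoretical=7.2e-6))  # ~67
```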
Reaction of 3 with BNAH
An acetonitrile solution of BNAH (0.0072 mmol) was slowly introduced into the reaction solution containing intermediate 3 (0.0072 mmol) and allowed to stir for 10 min at −10 °C.Then, the reaction mixture was quenched with a minimum amount of dilute HCl (in acetonitrile), and all solvent was removed under high vacuum.The resultant residue was dissolved in D 2 O, and the 1 H NMR and ESI-mass spectra of the crude reaction mixture were recorded without further purification.Quantification of BNA + was performed by 1 H NMR spectroscopy using 3,5-dinitrobenzoic acid as an internal standard.The formation of BNA + (∼96%) was noted.
A blank experiment was performed under similar experimental conditions in the absence of 3. The experiment revealed no formation of BNA + as the product.
Reaction of 3 with (4-X-C 6 H 4 ) 3 P Substrates (X = H, OMe, Me, and Cl)
A solution of (4-X-C 6 H 4 ) 3 P (0.0072 mmol) in 1 mL of acetonitrile was introduced into the stirring reaction solution containing 3 (0.0072 mmol) and allowed to stir at −10 °C for 5 h. Then, the solvent was removed under reduced pressure, and the resulting crude residue was redissolved in CDCl 3 and analyzed by 31 P NMR spectroscopy and ESI-mass spectrometry. The formed product was quantified by NMR spectroscopy by comparing the integration of (4-X-C 6 H 4 ) 3 P=O with that of unreacted (4-X-C 6 H 4 ) 3 P.
A blank experiment was also performed under the same reaction conditions in the absence of 3, which showed no formation of the (4-X-C 6 H 4 ) 3 P=O product. The yields of the different (4-X-C 6 H 4 ) 3 P-derived products are listed in Table S7.
Reaction of 3 with 4-X-DTBP
An acetonitrile solution (1 mL) of 4-X-DTBP (0.012 mmol) was slowly introduced into an acetonitrile solution of intermediate 3 (0.012 mmol) at −25 °C under a N 2 atmosphere. The resulting reaction solution was allowed to stir at −25 °C for 1 h, maintaining the N 2 atmosphere. Once the reaction was complete, the solution was quenched with a minimum amount of dilute HCl (in acetonitrile), and the solvent was removed under reduced pressure. The organic products were extracted with diethyl ether (3 × 20 mL), dried over anhydrous sodium sulfate, and evaporated to dryness. The crude product was analyzed by 1 H NMR spectroscopy, and the 4-X-DTBP-derived products were quantified by comparing their integration values with that of the unreacted substrate. The yields of the different 4-X-DTBP-derived products are listed in Table S7.
A blank experiment was also performed in the absence of 3, which revealed that no oxidized products were formed.
Reaction of 3 with 9,10-DHA
A solution of 9,10-dihydroanthracene (0.12 mmol) in 1 mL of acetonitrile was slowly introduced into an acetonitrile solution (10 mL) of 3 (0.012 mmol) at −10 °C under a N 2 atmosphere. The resulting reaction solution was allowed to stir at −10 °C for 5 h. Once the reaction was complete, the solution was quenched with a minimum amount of dilute HCl (in acetonitrile), and the solvent was removed under reduced pressure. As an internal standard, 1 equiv (0.12 mmol) of trimethoxybenzene was added to the reaction mixture. Then, the organic products were extracted with diethyl ether (3 × 20 mL), dried over anhydrous sodium sulfate, and evaporated to dryness. The organic products were analyzed and quantified by 1 H NMR without further purification. The formation of anthracene (∼50%) was noted.
Figure 1 .
Figure 1.X-ray structure of 4 with 50% ellipsoid probability.The hydrogen atoms of the ligand and countercations are removed for the sake of clarity.CCDC number: 2321489.
Scheme 2 .
Scheme 2. Reaction of 1 with Excess PhINTs and Magic Blue in Acetonitrile at −25 °C
Figure 3 .
Figure 3. (A) Normalized Fe K-edge XANES spectra recorded at 20 K of 1 (shown in black) together with 3 (shown in blue). Inset: zoom-in view of the pre-edge regions of 1 and 3. (B) Zoom-in view of the pre-edge regions together with the respective fits shown in dashed blue. The black and blue dashed lines correspond to the step and pseudo-Voigt functions used to fit the pre-edge peaks, respectively. (C) Fourier transforms of k 2 -weighted Fe EXAFS of 1 (in black) and 3 (blue). Inset: back Fourier transformed experimental (solid lines) and fitted (dashed lines) k 2 χ(k) for 1 and 3. Experimental spectra were calculated for k values of 2−14.107 Å −1 .
Figure 4 .
Figure 4. DFT-optimized structure of the Fe IV (OH) complex coordinated to PhINTs.During optimization, we excluded the methyl group of PhINTs.
Figure 7 .
Figure 7. Plot of k obs vs [4-X-DTBP] substrates (X = Me, Et, and t Bu) for the reaction of 2 or 3 with 4-X-DTBP in acetonitrile at −25 °C.The k obs values for the reaction of 2 with 4-X-DTBP were taken from ref 33.
Figure 8 .
Figure 8. (A) Change in the UV−vis spectrum of 3 (0.2 mM) upon addition of 1 equiv of BNAH in acetonitrile at −10 °C.(B) Plot of 1/ [3] vs time (s) for the reaction of 3 with BNAH or BNAD for the determination of k 2 values.(C) Change in absorbance at 465 nm of 2/3 in the presence of an excess amount of 9,10-DHA at −10 °C.(D) Plot of log k 2 ′ vs C−H bond dissociation energy of the substrates.
Figure 9 .
Figure 9. (A) Change in the UV−vis spectrum of 3 upon addition of 50 equiv of PPh 3 to 3 in acetonitrile at −10 °C.(B) Change in absorbance at 465 nm of 2/3 in the presence of 50 equiv of PPh 3 at −10 °C.Plots of log k rel vs σ p (C) and log k 2 vs E ox of (4-X-C 6 H 4 ) 3 P (D) for the reaction of 3 with (4-X-C 6 H 4 ) 3 P at −10 °C. | 11,032 | 2024-03-11T00:00:00.000 | [
"Chemistry"
] |
Tunneling Between Schwarzschild-de Sitter Vacua
We extend the study of the effect of static primordial black holes on vacuum decay. In particular, we compare the tunneling rates between vacua with different values of the cosmological constant and of the black hole mass, pointing out the dominant processes based on a numerical examination of the thin-wall instanton. Three distinct cases are considered, namely the nucleation of a true vacuum bubble into the false vacuum, the nucleation of a false vacuum bubble into the true vacuum, and the Farhi-Guth-Guven mechanism. As a proof of concept, we show that in order to increase the transition rate into an inflating region, not only is the inclusion of a black hole necessary, but the inclusion of a cosmological constant in the initial phase is also required. Among the cases studied, we show that the most likely scenario is the elimination of inhomogeneities in the final phase.
I. INTRODUCTION
Research on false vacuum decay in quantum field theory was prompted by the work of Sidney Coleman et al. [6,9,10]. The effect of gravitation on bubble nucleation has sparked intense study over the last forty years, leading to interesting investigations such as the effect of black holes on the nucleation rates of true vacuum bubbles. Early work on the topic can be found in [3,18,21], while more recent developments, motivated by the role of impurities in the decay rates of first-order phase transitions, are addressed in [5,16]. In the latter, it was shown that by relaxing the initial assumption of homogeneity of de Sitter spacetime, the inclusion of black holes, as seeds of inhomogeneity, leads to enhanced decay rates. As a result, this process could affect the lifetime of the Higgs vacuum [4,15], dramatically increasing the probability of vacuum decay.
In this paper, we consider the Euclidean instanton approach 1 . While this approach has been applied in the context of false vacuum decay (downward tunneling) [5,16], the present work extends the analysis to the nucleation of false vacuum bubbles within a low-energy true vacuum (upward tunneling). Our aim is to explore the parameter space and compare the tunneling rates between initial and final states with a cosmological constant and/or a black hole to the standard upward tunneling scenario. The latter has been explored in [20], while false vacuum bubbles with a de Sitter (dS) interior and a Schwarzschild-de Sitter (SdS) exterior were discussed in [1,2]. We present the most general expression for the tunneling rates of false and true vacuum bubbles with a SdS interior/exterior to determine which processes are favorable due to the inclusion of a cosmological constant in the initial phase.
A case of particular interest is that of the Farhi-Guth-Guven (FGG) mechanism for understanding the nucleation of inflating regions (false vacuum bubbles) from non-inflating ones [1,2,11]. A proper understanding of this mechanism could shed light on the beginning of inflation [17]. In this work, we apply the Euclidean approach to the FGG mechanism as well, and we present a relative comparison between the FGG mechanism and the tunneling upward in the potential with a nonzero cosmological constant in both vacua. Although we focus on vacua with a positive cosmological constant, we note that work has been done on bubble nucleation in Schwarzschild-Anti de Sitter (S-AdS) spacetimes [5], and possible implications for the information loss problem can be found in [7,8,23].

1 For an alternative, we refer the reader to the Lorentzian WKB approach [11-13].
The manuscript is organized as follows: In section II we provide the formalism of constructing the thin wall instanton. In section III, we perform the numerical examination of the most general instanton. Finally, in section IV we discuss the FGG mechanism in the context of conical singularities. After discussing our results in section V we provide the reader with details about conical angles in Appendix A.
II. CONSTRUCTING THE INSTANTON
We start our discussion by reviewing the formalism presented in [5,16]. We consider two Schwarzschild-de Sitter spacetimes with arbitrary cosmological constants separated by a thin wall of constant tension. By performing the Wick rotation t → −iτ , the metric on each side of the wall takes the standard static Schwarzschild-de Sitter form. Here, M + is the mass of the black hole outside the bubble, while M − is the mass of the remnant black hole inside the bubble. Moreover, Λ + and Λ − are the exterior and interior cosmological constants, respectively. The wall is parametrized by r = R(λ), and the Israel junction conditions [19] lead to Eq. (3). The radius of the bubble, R, is a function of the proper time λ, the dot represents the derivative with respect to λ, and σ is the tension of the wall. Using this relation 2 together with (2), we arrive at Eq. (4), which describes the evolution of the bubble wall, where l + (l − ) is the dS length outside (inside) the bubble. The term R 2 /l 2 − arises from the nonzero value of the cosmological constant in the true vacuum.
Combining (3) and (4), the evolution of the time coordinate is given by Eq. (6).

A. Tunneling to lower values of the cosmological constant

Generally, the bubble nucleation rate reads Γ ≈ A e −B , where B is the "bounce" and A is a prefactor. It describes the probability to penetrate and escape a potential barrier 3 . We begin by considering tunneling from a higher value of the cosmological constant to a lower one, while we assume the existence of a Schwarzschild black hole in both the initial and the final state. The Euclidean action for this case is given by [5,16], where R is the Ricci scalar, K is the trace of the extrinsic curvature, Λ is the cosmological constant, and σ is the surface tension of the bubble wall. The subscripts + and − denote the outside and inside regions of the wall, respectively. To proceed, we need to explicitly calculate the Euclidean action on each side of the wall.

2 This relation comes from the fact that the induced metric must be the same on both sides of the wall parametrized by λ.
3 For the rest of the paper, we ignore A, we set ℏ = 1, and we calculate the bounce.
Before doing so, it is useful to mention that the issue of conical singularities has been explored in [14,16]: for near-horizon geometries, both the integral over the Ricci scalar and the Gibbons-Hawking boundary term acquire contributions involving ∆, the deficit angle, and A, the area of the conical defect. Combining these terms together, we are left with an action that does not depend on the conical deficit [16]. Let us now write the expressions for the Euclidean action; from (8), we have three distinct contributions to consider, as derived in [5]:

• Outside the bubble (M + )

The Euclidean action for the exterior of the bubble is written in terms of A c + = 4πr 2 c + , the area of the cosmological horizon, and β, the periodicity of τ , which in general is different from the periodicity β c + associated with the cosmological horizon r c + (see Appendix A for a pedagogical discussion on conical angles). Using the identity in which the prime represents the derivative with respect to R, the term in the parenthesis becomes zero and the action is independent of the conical angle.
• Inside the bubble (M − )

The Euclidean action for the interior of the bubble is written in terms of A − , the area of the black hole horizon, and β, the periodicity of τ , which is different from the periodicity β h − associated with the black hole horizon r h − . Using the corresponding identity, we again see that the result does not depend on the conical angle.
• Bubble wall (W)

The action for the wall includes the Gibbons-Hawking boundary terms induced by the wall, and we made use of the Israel junction conditions. Combining (12), (14), and (16), we arrive at the total Euclidean action. The "bounce" (18) is obtained by subtracting the background Euclidean action from the Euclidean action for the bubble wall solution [5]; in (19), A + represents the area of the black hole horizon.
B. Limiting case of no black hole
For future reference, we give the expression for the bounce of the dS-dS transition [5], where B down M ± =0 corresponds to the Coleman-DeLuccia bounce (B CDL ) for downward tunneling. We solve the equation for the bubble wall (4) with R[−γπ/(2 √ (1 + ζ))] = 0 as the initial condition to obtain (21), where ζ ≡ γ 2 l 2 − , together with the equations (22) which describe the evolution of the time coordinate. Plugging (21) and (22) into (20) leads to (23), in agreement with [5,22].
C. Tunneling to higher values of the cosmological constant
Having established this formalism, we now proceed to the novel part of our work by deriving the general expression for the tunneling rate from a spacetime with a lower value of the cosmological constant to one with a higher value 4 , or, using (7), the corresponding bounce. Using (18) and (19), together with the relations above, we obtain (27). Even though in flat spacetime energy conservation forbids upward tunneling at zero temperature, if the true vacuum is a de Sitter spacetime and the temperature is nonzero, thermal fluctuations allow the creation of such a bubble [20]. Given that this is a possible process 5 , we examine its generalization to black holes. The zero-mass limit of (27) again corresponds to the CDL case, now for upward tunneling.
III. NUMERICAL ANALYSIS OF UPWARD/DOWNWARD TUNNELING
We compare the tunneling rates of the SdS-SdS upward/downward phase transitions with arbitrary cosmological constants by performing a full numerical analysis. The parameter δ represents the difference between the vacua, while ε measures the upward shift of the potential, as shown in Fig. 1. These parameters are related to the cosmological constants in each vacuum 6 . We explore the range ε = 10 −7 M 2 pl to ε = 5 × 10 −6 M 2 pl , which corresponds to l + ≈ 655 l pl to l + ≈ 1195 l pl and l − ≈ 775 l pl to l − ≈ 5477 l pl , while we define p and q as the fractions of the black hole masses with respect to the Nariai mass 7 . Given these values of the parameters, we explore how the shift of the potential, ε, the difference between the vacua, δ, as well as the tension, σ, affect the tunneling process.
First, we solve (4) for the radius of the bubble, R, and (6) for the time evolution τ . We then compute numerically the ratio of the bounce (18) to its zero-mass limit (23). Numerical analyses of this type have been performed in the past [5,16]. In [16], the case of ε = 0 for downward tunneling was considered for nonzero masses inside and outside the bubble, while in [5] the case of nonzero ε was studied for downward tunneling. In both papers, the parameter space of masses was searched numerically to find the preferred tunneling process compared to the CDL scenario, and it was found that initial inhomogeneities speed up the tunneling process and that, depending on the values of the masses, the dominant process could be either tunneling to a black hole or to no black hole. In this work, we also perform a numerical search, but now over the parameter space in ε and δ, which gives us the opportunity to better isolate the effects of the cosmological constant on the tunneling process. This is quite interesting not so much for tunneling downward in the potential as for tunneling upward, given that we are interested in whether primordial inhomogeneities can speed up the upward tunneling process. This, for example, could help us better understand the initial conditions for inflation.
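A schematic of this numerical procedure is sketched below. Since Eqs. (4), (6), and (18) are not reproduced above, the code assumes the standard Schwarzschild-de Sitter metric function f ± (r) = 1 − 2GM ± /r − r 2 /l ± 2 and the standard Euclidean thin-wall junction conditions with σ̄ ≡ 4πGσ (so that β ± = f ± dτ ± /dλ and β − − β + = σ̄R); the parameter values are placeholders, and the assembly of the full bounce from the horizon-area and wall terms is not shown.

```python
import numpy as np

G = 1.0  # Planck units, as in the text (masses and lengths in M_pl, l_pl)

def f(r, M, l):
    """Schwarzschild-de Sitter metric function f(r) = 1 - 2GM/r - r^2/l^2."""
    return 1.0 - 2.0 * G * M / r - (r / l) ** 2

def nariai_mass(l):
    """Nariai mass for a given dS length l: G M_N = l / (3*sqrt(3))."""
    return l / (3.0 * np.sqrt(3.0) * G)

def Rdot2(R, Mp, Mm, lp, lm, sigma_bar):
    """(dR/dlambda)^2 from the assumed Euclidean junction conditions."""
    fp, fm = f(R, Mp, lp), f(R, Mm, lm)
    beta_p = (fm - fp - (sigma_bar * R) ** 2) / (2.0 * sigma_bar * R)
    return fp - beta_p ** 2

# Placeholder parameters: exterior SdS with GM+ = 24, interior dS (M- = 0).
lp, lm, Mp, Mm, sigma_bar = 655.0, 775.0, 24.0, 0.0, 1.0e-3
print("p = M+/M_Nariai(+) ~", Mp / nariai_mass(lp))

# Locate the Euclidean turning points (Rdot2 = 0) by a simple scan; R(lambda)
# is then integrated between them, the wall and horizon-area contributions
# assembled into the bounce B of Eq. (18), and B divided by the CDL bounce (23).
R_grid = np.linspace(1.0, 0.99 * lp, 20000)
vals = np.array([Rdot2(R, Mp, Mm, lp, lm, sigma_bar) for R in R_grid])
turning_points = R_grid[np.where(np.diff(np.sign(vals)) != 0)[0]]
print("approximate Euclidean turning points:", turning_points)
```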
We consider two cases. The first one corresponds to initial and final states in which, after its formation, the bubble is initially contracting and then expanding.
A comparison of tunneling rates as a function of ε is shown in Fig. (2) 8 . Fig. (2a) corresponds to tunneling downward in the potential. On the other hand, tunneling upward in the potential, i.e., the nucleation of a false vacuum bubble inside the true vacuum, as a function of ε is shown in Fig. (2b). Here we notice that the fastest tunneling rate corresponds to tunneling from an SdS exterior with GM − ≈ 24 l pl to a dS interior with a higher value of the cosmological constant (cyan double-dashed line), while the two lowest tunneling rates are subdominant to the CDL case (dashed and dotted orange lines). Thus, we observe that even in false bubble nucleation, as the cosmological constant takes on higher values, the inhomogeneities (black holes) are more likely to vanish. Even though the upward tunneling rate is largely suppressed compared to the downward tunneling rate, we notice that the fastest rate receives up to a 10 percent enhancement compared to the CDL one over the range of parameters explored. In both cases, it is evident that as the value of ε grows, the tunneling to lower values of the mass is enhanced while the tunneling to larger masses is suppressed compared to the CDL case.
Next, we study how the difference between the vacua affects the tunneling rate (see Fig. (3)). For tunneling downward in the potential, as the value of δ increases, the largest tunneling rate corresponds to tunneling from an SdS exterior with GM − ≈ 24 l pl to a dS interior (Fig. (3a)). For tunneling upward in the potential (Fig. (3b)), the fastest tunneling rates (double-dashed and single-dashed lines) decrease as the difference between the vacua increases. To have the most enhanced rate, we need small values of δ and, again, no black hole in the end state.
To complete the picture, we briefly comment on the effect of the tension on the tunneling rates. In Fig. (3b), the orange curves decrease as a function of δ while the cyan ones increase, unlike in Fig. (2b), where the reverse happens.
The goal of this work is not to exhaustively search the parameter space of the masses when ε = 0, as done in [5,16]. Although our results coincide with theirs when taking the limit ε = 0 with the appropriate choice of parameters, the novelty of this paper is to check explicitly the effect of ε and δ on the tunneling between a given initial mass and final mass for a given set of parameters. For every initial mass that is chosen, three distinct cases are considered, namely a final mass smaller than, equal to, or larger than the initial mass. The question is which of these three cases gives the minimum bounce compared to the CDL case as we vary ε and δ. We noticed that as the value of ε increases, the preferred process compared to the CDL case is the tunneling to no black hole (zero final mass) for the tunneling downward case. The next step was to check whether, within this parameter space, as we increase ε, the tunneling to no black hole is still the preferred process even for tunneling upward in the potential. Although the latter is suppressed compared to the favored downward tunneling, there is a part of the parameter space (the one presented in this paper) where there is an enhancement of the transition rate compared to the upward CDL one, and black holes act as seeds of bubble nucleation in this case as well. This completes the proof-of-concept procedure.
IV. FGG MECHANISM
To understand how inflating regions may be spawned from noninflating ones, it is worthwhile to study processes such as the FGG mechanism. In this case, a false vacuum bubble tunnels through a wormhole to produce an inflating region [2,11]. As we take the zero-mass limit of this process, a totally disconnected phase that includes the new vacuum is nucleated while the initial spacetime is maintained. This is in contrast with the CDL scenario where the tunneling from Minkowski to a higher energy density vacuum is prohibited.
To calculate the rate of the FGG mechanism, we first write down the Euclidean action of the dS to S process. By using (25), we arrive at the corresponding bounce, and the zero-mass limit of this process leads to (35). The minus sign in (35) is related to a sign choice we are forced to make due to quantum cosmological boundary conditions, so as to keep the transition probability smaller than one.
Similar to the previous section, we perform a numerical analysis of the FGG mechanism by considering two distinct cases. In Fig. (5), among the S-dS processes (pink, grey, and blue dashed lines), the tunneling from S with GM + ≈ 72 l pl to dS dominates, while among the S-SdS processes we conclude that the dominant process is the one that tunnels to a smaller-mass black hole (single-dashed cyan line). Overall, the preferred final state is the one with no black hole. It is evident that the tunneling to no black hole is favored not only with the inclusion of a positive cosmological constant in the initial phase but in the FGG mechanism as well.
This gives us the opportunity to make a relative comparison of the FGG mechanism 9 with the up-tunneling process described in Section III. Since B FGG ≈ B FGG 0 , the inclusion of a black hole in the initial phase does not have a big effect on the FGG tunneling rate, while in Fig. (3b), especially for the tunneling to no black hole (cyan single-dashed and double-dashed lines), clearly B < B CDL . This shows that the inclusion of a black hole in the initial phase makes the nonzero-ε case more prone than the FGG mechanism to a transition to a dS phase.
V. CONCLUSION
In this work, we have explored the nucleation of true and false vacuum bubbles. Our discussion was restricted to a positive cosmological constant, including a black hole in both the initial and final states. By separating the difference between the vacua, δ, from the vertical shift of the potential, ε, we study the tunneling probability of the processes.
We find that as the potential shifts to higher values of the cosmological constant, the nucleation rate of true and false vacuum bubbles is enhanced compared to the CDL rate.
Overall, we explored different values of the black hole masses in both vacua and found, as a proof of concept, that the fastest tunneling rate (for all cases) corresponds to an end state with no black hole. Especially for the nucleation of false vacuum bubbles, this means that tunneling to higher values of the cosmological constant tends to remove inhomogeneities. This could have important consequences for the early universe; for example, it could be a process by which the initial inhomogeneities in a noninflating universe vanish. Furthermore, we explored the effect of the difference between the vacua on the tunneling rate. As in the case of ε, we find that the no-black-hole end state leads to the most enhanced rate.
The creation of inflating regions out of non inflating ones was analyzed in the context of the FGG mechanism. Two cases were considered, the first one being the nucleation of a dS bubble, completely disconnected from the initial Schwarzschild spacetime. We notice that within this tunneling process, the production of a false vacuum bubble becomes more likely. In the second case, we allow for a remnant black hole in the final state. Comparing all the tunneling events in Fig. (5), we conclude that the final state without the inclusion of a black hole is slightly favored.
This provides a new way to make a relative comparison between the FGG mechanism and the tunneling upward in the potential with a nonzero cosmological constant in both vacua. While for both processes the most likely scenario is the complete elimination of inhomogeneities, we observe that for the same range of δ, the nonzero value of ε is essential in speeding up the tunneling process. This indicates that not only is the inclusion of a black hole in the initial phase necessary to enhance the tunneling rate, but the inclusion of a cosmological constant is as well. While this comparison cannot exclude the FGG mechanism as a physical process, at least within the parameter range explored in this paper, it shows that the existence of a nonzero value of ε can enhance the elimination of inhomogeneities in the early universe, thus providing a sufficiently smooth patch for the onset of inflation.
Further exploration of the parameter space would be necessary to make these arguments more concrete.
In terms of understanding the initial conditions for inflation, as well as the mechanisms that lead to transitions between vacua, it is important to consider all the allowed tunneling scenarios and their likelihood, as this will help understand the preferred transitions among the vacua in the string theory landscape. In this work, we have used the Euclidean instanton approach to explore all the allowed transitions with a nonzero positive cosmological constant. We have extended the analysis to include tunneling downward in the potential as well as the FGG mechanism. It remains of interest to use our method to explore the formation of AdS bubbles, as this could be deeply linked to the information loss problem, or to study the stability of the Higgs vacuum, since these nucleation seeds could drastically alter the time it takes to decay to a different Standard Model vacuum. | 5,024.6 | 2019-06-27T00:00:00.000 | [
"Physics"
] |
Chinese Insertions Functioning in English: A Comparative Approach to Fiction and Non-Fiction Texts
The present article is devoted to a comparative analysis of Chinese insertions functioning in fiction and non-fiction written in English, based on the works of Amy Tan and Peter Hessler. As the research showed, the leading function of insertions in non-fiction was the documentary function, whereas in fiction it was the function of ethnic coloring and the function of characters' speech characterization. The functions in the analyzed texts overlap and complement one another. Structurally, the insertions in non-fiction feature words and phrases, while in fiction they vary from interjections to significant parts of texts.
Introduction
Research into such linguistic phenomena as foreign-language insertions has lately aroused great interest among linguists of different countries. Studying the particular characteristics of occasional speech borrowings functioning in a recipient text, both in a general linguistic sense and within a complex of linguastylistic problems, provides an opportunity to see the initial stage of borrowing in its dynamics. In spite of the scope of the problem, there is still no single established definition of foreign insertions. Researchers distinguish the following characteristic features of foreign insertions: non-assimilation, grammatical non-assimilation, belonging to a foreign system, absence from the dictionaries of the recipient language, and use by bilinguals.
Quite often, foreign insertions are considered together with such related notions as loan-words, exoticisms, and barbarisms. The term "foreign insertions" is the most general of them; according to many researchers, it subsumes the related terms and is mostly found in specialized literature. Foreign insertions are distinguished from the related notions by a greater degree of non-assimilation in the recipient language, predominantly foreign graphic representation, and a greater volume of language units.
In the present paper, by foreign insertions we understand foreign linguistic units, transliterations of foreign linguistic units, as well as units of the donor language of different volumes (from interjections to text extracts) that are used according to the laws of the recipient language, surrounded by elements of the borrowing language (English), non-assimilated or partly assimilated in the system of the borrowing language, but not codified in it [1. P. 270].
There is a great number of classifications of foreign insertions based on various features (correlation with the systems of the donor and recipient languages, the degree of connection with the ethno-cultural character of the text content, source of borrowing, position in the text, and volume). The analysis of the current classifications showed that researchers count as foreign insertions not only full insertions in their foreign graphics, but also parts of texts translated into the recipient language; not only words, but also foreign letters inside words; and not only linguistic phenomena without ethnic coloring, but also elements having it. As a result, foreign insertions are being confused with exoticisms.
Discussion
The main reason for using foreign insertions of any volume is language contact, including code switching. Literary creativity, both fiction and non-fiction, can be considered a variety of language contact. Notably, foreign insertions can be part of the linguistic arsenal of a monolingual writer, not only a bilingual one.
The problem of the functioning of foreign insertions has been thoroughly analyzed only on the material of fiction. Practically all researchers have identified a nominative function, a function of creating a comic effect, a function of ethnic coloring, and a function of speech individualization. Some investigators also distinguish a function of elaboration and semantic specification of a related word in the recipient text, a function of rendering a character's emotional state, a function of conveying ethnic peculiarity and mythology, and a function of narration metaphorization, as well as linguareflexive, repressive, intermediary, and excluding functions, along with a euphemistic one, etc.
The majority of researchers have pointed out that the dominant functions in fiction are the nominative function and the ethnic coloring function.
In the present paper, an attempt has been made to carry out a comparative analysis of Chinese insertions functioning in fiction and non-fiction texts written in English. "River Town: Two Years on the Yangtze" by Peter Hessler was chosen as the non-fiction source. This book is considered the most significant work of this author, and it contains a number of foreign insertions sufficient for this analysis. The fiction text is represented by Amy Tan's novel "The Hundred Secret Senses", which abounds in equivalent linguistic units.
Let us now proceed to the description of the Chinese insertions and the functions performed by them in the abovementioned works. We should immediately mention that both authors represent Chinese insertions in transliteration, since Chinese characters would be perceived as too exotic and absolutely incomprehensible to the readers.
According to the classification by structural features, the majority of insertions found in the non-fiction material are words.
"People everywhere talked about Gaige Kaifang, Reform and Opening, which included both increased contact with the outside world and the Capitalist-style economic reforms that Deng Xiaoping had initiated in 1978" [2].
Insertions-expressions and insertions-phrases can also be found. It turned out that insertions-interjections and insertions-sentences are not present in the non-fiction text. On the contrary, they can often be found in the fiction text, where they perform decorative and emotive functions. This fact signals that the set of functions delivered by foreign insertions in non-fiction is different from the set of functions in other genres. According to the subject classification, insertions-toponyms and insertions-anthroponyms are the most widespread in the work under analysis. In fiction, the scope of insertions varies from interjections to text extracts. Both in fiction and non-fiction, insertions are mostly represented by toponyms and anthroponyms, which is quite predictable, because they tie the narration to a certain place and time.
Whenever foreigners arrived in our province, everyone in the countryside -from Nanning to Guilin -talked about them [3. P. 35].
"My apartment was on the top floor of a building high on a hill above the Wu River. It was a pretty river, fast and clean, and it ran from the wild southern mountains of Guizhou province. Across the Wu River was the main city of Fuling, a tangle of blocky concrete buildings rising up the hillside" [2].
Let us now pass on to the functions performed by foreign insertions in non-fiction. The majority of insertions (toponyms, anthroponyms, some realia) carry out the documentary function. The insertions also perform nominative, euphemistic, culture-oriented, linguareflexive, compensatory, and separating functions. Notably, one insertion often performs several functions at the same time.
In fiction describing a foreign setting, the role of foreign insertions is not limited to conveying a message. They appear in the text not only for nominative reasons, but also to perform aesthetic functions.
The stylistic potential of Chinese insertions against the background of the English linguistic environment is contingent on their content, as well as on their unusual sound form and the exotic associations it creates. The semantic and stylistic functions performed by Chinese insertions overlap and complement each other; they comply with the imagery perspective of the fictional whole. For example, one insertion can be used in a nominative function and create ethnic coloring at the same time. That is why, to illustrate these phenomena, insertions in which the given function was the leading one were selected.
Having entered the linguistic environment of the recipient text written in English, Chinese insertions perform a number of semantic and stylistic functions: nominative, specification, expressive, creation of a comic effect, cacophemism transmission, ethnic coloring and characterization, leitmotif creation, depiction of characters' emotional states, transmission of ethnic worldview and mythological peculiarities, and speech individualization. This is their main focus both in microcontexts and in the creative system of the whole book.
Thus, the undertaken analysis of foreign insertions, performed on the material of fiction and non-fiction, has allowed us to recognize the peculiarities of the functioning of foreign insertions in these genres. In non-fiction, the majority of Chinese insertions perform a documentary function, due to which a realistic reproduction of events and factual authenticity are attained. In fiction, the leading functions are the function of ethnic coloring and the speech characterization function. In both genres, the functions performed by foreign insertions can overlap. During the research it was noted that some functions can be the same in fiction and non-fiction (for example, nominative, culture-oriented, euphemistic, separating), but some functions are specific to each particular genre (speech individualization, comic effect creation, cacophemism transmission, and leitmotif creation in fiction; the documentary function in non-fiction).
Conclusion
Thus, we came to the conclusion that Chinese insertions of different structures can be found both in fiction and non-fiction. However, insertions at the text level were not found in non-fiction. The majority of the functions performed by insertions in the two texts coincide with each other; however, there are some genre differences in this respect. The undertaken analysis may be of interest for linguistics in general, namely for linguacontactology and linguastylistics. We see the prospects of the research in further specification of the functions performed by foreign insertions in texts of different genres. | 2,088.8 | 2020-12-15T00:00:00.000 | [
"Art",
"Linguistics"
] |
The study of honokiol as a natural product-based antimicrobial agent and its potential interaction with FtsZ protein
Multidrug-resistant bacteria are currently a global health threat, and frontline clinical treatments for these infections are very limited. Developing potent antibacterial agents with new bactericidal mechanisms is thus urgently needed to address this critical antibiotic-resistance challenge. Natural products are a treasure trove of small molecules with high bioactivity and low toxicity. In the present study, we demonstrated that a natural compound, honokiol, showed potent antibacterial activity against a number of Gram-positive bacteria including MRSA and VRE. Moreover, honokiol in combination with clinically used β-lactam antibiotics exhibits strong synergistic antimicrobial effects against drug-resistant S. aureus strains. Biochemical studies further reveal that honokiol may disrupt GTPase activity, FtsZ polymerization and cell division. These biological impacts induced by honokiol may ultimately cause bacterial cell death. The in vivo antibacterial activity of honokiol against S. aureus infection was also verified with a biological model of G. mellonella larvae. The in vivo results support that honokiol has low toxicity toward the larvae and effectively increases the survival rate of larvae infected with S. aureus. These findings demonstrate the potential of honokiol for further structural advancement as a new class of antibacterial agents with high potency against multidrug-resistant bacteria.
Introduction
Over the past decades, antibiotics have been widely utilized to treat various bacterial infections and have protected global public health. However, due to the inappropriate administration of antibiotics, bacteria have developed strong defense mechanisms against conventional antibiotics, which results in drug resistance (Aminov, 2021; Smith et al., 2023). At present, antimicrobial resistance stands as a critical and worldwide predicament. Notable examples of antibiotic-resistant bacteria include vancomycin-resistant Enterococcus faecium (VREF) and methicillin-resistant Staphylococcus aureus (MRSA). Both emblematic bacteria are resistant to conventional antibiotics such as vancomycin and methicillin (Simonetti et al., 2022; Cimen et al., 2023). Given the rapid emergence of drug-resistant bacteria as well as the dwindling options for clinical treatments, there is an urgent need to develop alternative strategies to combat drug-resistant bacterial infections.
Throughout the history of antibiotic discovery, natural products have played a pivotal role, including in the production of widely utilized antibiotics such as penicillin and vancomycin. Today, there is ongoing anticipation that natural products may yet again provide solutions to the antibiotic crisis that we face (Wright, 2017). Honokiol (Figure 1) is an herbal chemical component originating from Asia, primarily found in the bark and flowers of Magnolia officinalis and Styrax obassia trees. These plants have a long history in traditional Chinese medicine and have been employed to treat certain diseases or symptoms such as asthma, abdominal discomfort and pain, indigestion, and cough associated with asthma (Rauf et al., 2021). In recent years, an increasing number of studies have shown that honokiol may possess a wide range of pharmacological effects, such as anticancer, antioxidant, anti-inflammatory, and antiviral properties (Rauf et al., 2021). Honokiol is considered safe for the human body (Sarrica et al., 2018), and its inhibitory effects on Nocardia seriolae have also been reported previously (Jiang et al., 2022). However, the antibacterial mechanism of honokiol is still not clear. In this study, we evaluated the antimicrobial activity of honokiol against a panel of bacteria and attempted to understand its potential interactions with the purified bacterial filamenting temperature-sensitive mutant Z (FtsZ) protein, which is known to play a vital role in bacterial cell division.
Regarding recent advancements in the discovery of new drugs, the bacterial divisome has emerged as a promising drug target for the development of the next generation of antibiotics. Within the divisome, particular attention has been given to FtsZ, recognized as a pivotal component (Casiraghi et al., 2020; Silber et al., 2020). In the process of bacterial cell division, FtsZ undergoes assembly to form a circular structure known as the Z-ring at the division site. The process of cell division involves the binding and hydrolysis of GTP. Subsequently, FtsZ becomes stabilized and affixed to the inner surface of the cytoplasmic membrane through interactions with FtsZ-binding proteins. Once the Z-ring is established, FtsZ orchestrates the recruitment of various other proteins responsible for initiating cytokinesis (Bisson-Filho et al., 2017; Yang et al., 2017; Monteiro et al., 2018). The substantial conservation and functional significance of FtsZ render it an attractive target for the development of innovative antibacterial agents (Haranahalli et al., 2016; Hurley et al., 2016). Recently, several natural products have been reported as FtsZ-targeting compounds, such as berberine, curcumin and cinnamaldehyde (Figure 1). Bacterial cell division was found to be disrupted by inhibiting FtsZ activity with these compounds (Domadia et al., 2007, 2008; Rai et al., 2008). In attempts to search for the potential targets of honokiol in bacteria, we observed that honokiol may interrupt Z-ring formation and cell division in B. subtilis cells. We therefore performed experiments to understand the interaction of honokiol with FtsZ, as such interactions could inhibit the proper function of FtsZ in bacterial cells.
In vitro antibacterial activity of honokiol
To evaluate the antibacterial activity of honokiol, a number of bacteria including drug-resistant strains were utilized in the present study. Methicillin and berberine were also investigated under the same conditions for comparison. The minimum inhibitory concentration (MIC) values obtained are summarized in Table 1. The results showed that honokiol could inhibit the growth of all Gram-positive strains examined, with MIC values ranging from 4 to 12 μg/mL. Honokiol exhibited comparable inhibitory effects against B. subtilis 168, S. aureus ATCC 29213, and S. epidermidis ATCC 12228, with MICs of 4 μg/mL, which was notably better than berberine. Furthermore, honokiol displayed remarkable antibacterial activity against methicillin-resistant S. aureus (MRSA), including strains ATCC BAA-41, 33591 and 33592, among others, with MICs ranging from 4 to 8 μg/mL. The antibacterial activity of honokiol was found to be 100-fold better than that of methicillin and berberine. Honokiol also exhibited a robust inhibitory effect on the growth of E. faecalis and E. faecium, with an MIC of 8 μg/mL. Notably, vancomycin exhibited high MICs (> 96 μg/mL) against vancomycin-resistant E. faecalis and E. faecium (VREs). The results indicate the superior antibacterial activity of honokiol against VREs as compared to vancomycin (Sun et al., 2014). However, honokiol, even at 256 μg/mL, exhibited no antibacterial activity against Gram-negative bacteria such as E. coli, P. aeruginosa, and K. pneumoniae. One possible reason may be the low penetration ability of honokiol across the outer membrane of Gram-negative strains.
Time-killing curve determinations of honokiol
The bactericidal versus bacteriostatic nature of honokiol against bacteria was investigated, and viable counts were conducted following established protocols (Wikler et al., 2009). Killing curves obtained from the action of honokiol against B. subtilis 168 and S. aureus ATCC 29213 are depicted in Figure 2. The control group exhibited rapid growth in CFU counts as compared to the initial inoculum. In Figure 2A, the results show that honokiol at 1 × MIC causes a reduction of 1 × 10^2 CFU/mL against S. aureus within 6 h. Moreover, the viable count of bacteria fell below the lowest detectable limit (10^3 CFU/mL) within 24 h. In the B. subtilis survival assays (Figure 2B), honokiol at 4 × MIC rapidly lowered the viable counts below the lowest detectable limit after 4 h of incubation, and at the MIC concentration it maintained the viable counts under the detectable limit for over 24 h. We also obtained the time-killing curves of honokiol against drug-resistant strains, including S. aureus ATCC 33591 and ATCC 43300, and E. faecium ATCC 700221. The observations (Supplementary Figure S1) were similar to those made for B. subtilis 168 and S. aureus ATCC 29213. These results may indicate that the antibacterial activity of honokiol aligns with a bactericidal mode of action.
Synergistic effects of honokiol with β-lactam antibiotics
To evaluate the potential of honokiol in restoring the antibacterial efficacy of β-lactam antibiotics against ampicillin-resistant S. aureus and MRSA (ATCC BAA-41), a broth microdilution checkerboard experiment was conducted. As shown in Table 2 and Figure 3, β-lactam antibiotics display modest or moderate antibacterial activity against drug-resistant S. aureus, with MIC values exceeding 24 μg/mL. In general, the combination of these β-lactam antibiotics with honokiol effectively enhances the antibacterial activity against the strains tested. Honokiol at 1 μg/mL enhanced the antibacterial effectiveness of ampicillin against ampicillin-resistant S. aureus, as indicated by the reduction of the MIC value of ampicillin from 24 μg/mL to 6 μg/mL. Furthermore, honokiol at 1 μg/mL significantly improved the bacteria-killing activity of methicillin against MRSA (ATCC BAA-41) and lowered the MIC value from 1,024 μg/mL to 64 μg/mL. The fractional inhibitory concentration index (FICI) observed was 0.3125.
The combination of honokiol with oxacillin or ampicillin also exhibited synergistic effects against the BAA-41 strain, with FICIs of 0.5 and 0.375, respectively. In these assays, honokiol at 1 μg/mL enhanced the bacteria-killing potency of ampicillin fourfold (the MIC was reduced from 48 μg/mL to 12 μg/mL) and that of oxacillin eightfold (the MIC was reduced from 256 μg/mL to 32 μg/mL) against the BAA-41 strain. A partial synergistic effect, with FICIs of 0.75, was also observed for the combination of honokiol with imipenem or ceftazidime.
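For readers unfamiliar with the checkerboard metric, the FICI of a two-drug combination is conventionally computed as the sum of each drug's MIC in combination divided by its MIC alone, with values ≤ 0.5 usually read as synergy. The sketch below is illustrative only; the honokiol MIC of 4 μg/mL assumed for the worked example is taken from the MIC range reported above, not from a separately stated strain-specific value.

```python
def fici(mic_a_combo, mic_a_alone, mic_b_combo, mic_b_alone):
    """Fractional inhibitory concentration index for a two-drug checkerboard well."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

# Honokiol + methicillin against MRSA ATCC BAA-41, assuming a honokiol MIC of 4 ug/mL:
# 1/4 + 64/1024 = 0.3125, consistent with the synergy (FICI <= 0.5) reported in the text.
print(fici(mic_a_combo=1, mic_a_alone=4, mic_b_combo=64, mic_b_alone=1024))  # 0.3125
```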
The antibacterial effect of honokiol in combination with PMBN against gram-negative strains
The cell wall of Gram-negative bacteria differs from that of Gram-positive bacteria, possessing special components such as lipopolysaccharides, an outer membrane, and lipoproteins in addition to the peptidoglycan layer. Some antimicrobial substances, such as penicillin and trypsin, have poor antibacterial effects on Gram-negative bacteria because of this outer membrane (Smith et al., 2023). PMBN (polymyxin B nonapeptide), which is capable of improving the permeability of the outer membrane of Gram-negative bacteria (Tsubery et al., 2000), was therefore utilized to improve the antibacterial effect of honokiol against E. coli and K. pneumoniae by enhancing the penetration of the compound. Table 3 displays the antimicrobial activity of honokiol in combination with PMBN. The results indicate that neither PMBN nor honokiol at 32 μg/mL exhibits observable antibacterial effects against the strains tested. Nevertheless, when honokiol was combined with PMBN at 20 μg/mL, it showed antibacterial activity against both K. pneumoniae and E. coli, with an MIC value of 32 μg/mL.
The effect of honokiol on bacterial cell morphology and membrane of Bacillus subtilis
FtsZ-targeting compounds are known to exert their antibacterial function by inhibiting FtsZ activity and inducing cell elongation (Hufford and Lasswell, 1978; Rai et al., 2008; Ruiz-Avila et al., 2013). To gain more insight into the possible antibacterial mechanism of honokiol, we investigated its effects on B. subtilis cell division. The morphology of B. subtilis cells exposed to conditions with and without honokiol was examined by microscopy. The results revealed a significant elongation of B. subtilis cells in the presence of honokiol as compared with the control. Under normal conditions, ~60% of B. subtilis cells are 3-5 μm in length, ~38% are 5-10 μm, and only ~1.5% of the cells are 10-15 μm; the average cell length is 4.6 ± 0.29 μm (Figures 4A,E). However, most of the cells treated with honokiol at the MIC (4 μg/mL) exhibited a length exceeding 20 μm, ~45% of the cells were longer than 30 μm, and the average cell length was 34.7 ± 2.20 μm (Figures 4B,E). This cell division inhibition phenotype is the same as that reported for FtsZ inhibitors such as TXA707 and pyrimidine and quinoline derivatives (Fang et al., 2019a; Li et al., 2020; Ferrer-Gonzalez et al., 2021). Since disturbances to the bacterial cell membrane could trigger lysis and death, we further investigated whether honokiol affected the integrity of the bacterial membrane. This was accomplished by incorporating FM4-64, a red fluorescent dye, to examine the membrane's response to honokiol. As shown in Figure 4D, although B. subtilis cells exhibited an elongated morphology, honokiol did not cause any measurable disturbance of the cell membrane, as indicated by the high similarity between the treated and untreated cells in Figure 4C. In addition, no septum was observed in the elongated cells (Figure 4D). These results suggest that honokiol could inhibit the growth of B. subtilis by promoting the inhibition of cell division without compromising the bacterial membrane, and they prompted us to conduct further investigations into the underlying antibacterial mechanism of honokiol.
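As a minimal sketch of how the cell-length distributions and summary statistics above could be derived from microscopy measurements (the length values below are hypothetical; only the bin edges follow the categories quoted in the text):

```python
import numpy as np

def length_summary(lengths_um, edges=(0, 3, 5, 10, 15, 20, 30, np.inf)):
    """Percentage of cells per length bin plus mean and SEM of the measured lengths."""
    lengths = np.asarray(lengths_um, dtype=float)
    counts, _ = np.histogram(lengths, bins=edges)
    percent = 100.0 * counts / lengths.size
    sem = lengths.std(ddof=1) / np.sqrt(lengths.size)
    return percent, lengths.mean(), sem

# Hypothetical measurements for 50 untreated cells (um):
rng = np.random.default_rng(1)
control = rng.normal(4.6, 1.5, 50).clip(2, 12)
percent, mean, sem = length_summary(control)
print(percent, f"{mean:.1f} ± {sem:.2f} um")
```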
The effect of honokiol on FtsZ activity in vitro
Recent studies have pointed out that the cell division inhibition induced by antibacterial compounds may be attributed to their ability to inhibit the GTPase activity of FtsZ (Margalit et al., 2004; Mathew et al., 2013; Sun et al., 2017b). In this study, we used FtsZ protein expressed from S. aureus to assess the effect of honokiol on FtsZ GTPase activity, applying the experimental conditions established in our previous work (Fang et al., 2019b). The results (Figure 5A) demonstrate that FtsZ GTPase activity is suppressed by honokiol in a dose-dependent manner. Honokiol at 1 μg/mL exhibited 15% inhibition, whereas higher concentrations of 2, 4 and 8 μg/mL resulted in 40, 60, and 75% inhibition, respectively (Figure 5A). These results suggest that inhibition of FtsZ GTPase activity in S. aureus may be one of the mechanisms by which honokiol impedes bacterial growth. To further explore the effect of honokiol on FtsZ activity, the dynamic polymerization of FtsZ treated with honokiol was examined using light scattering assays. Figure 5B illustrates the time-dependent polymerization profiles of FtsZ with and without honokiol at 1 to 4 μg/mL. The results indicate that honokiol enhances FtsZ polymerization in a concentration-dependent manner, in accord with the characteristics observed for reported FtsZ-targeting compounds (Andreu et al., 2010; Kaul et al., 2012; Kelley et al., 2012; Sun et al., 2017a). To validate the specificity of honokiol, 5 μg/mL methicillin, a non-FtsZ-targeting antibiotic, was used as a negative control and showed no observable effects on FtsZ polymerization under the same conditions.
Additionally, we visualized the effect of honokiol on FtsZ polymerization by transmission electron microscopy (TEM). Notably, upon applying 4 μg/mL honokiol, both the size of the FtsZ polymers and the bundling of FtsZ protofilaments exhibited a significant increase (Figures 5C,D). Collectively, these results support the view that the antibacterial effect of honokiol is most likely attributable to both GTPase activity inhibition and FtsZ polymerization enhancement. In addition to the GTPase and FtsZ polymerization assays, the effect of honokiol on the secondary structure of FtsZ was investigated by monitoring changes in the far-UV circular dichroism (CD) spectrum of FtsZ (Figure 5E). The data revealed that honokiol could significantly alter the secondary structure of FtsZ. Analysis of the CD spectra according to Yang's reference method indicates that the secondary structure of FtsZ consists of approximately 30.9% α-helices, 21.5% β-sheets, and 47.6% other structures. When treated with 4 μg/mL honokiol, the percentage of α-helices increased to approximately 38.8%, while the percentage of β-sheets decreased to 13.9%. These structural changes induced by honokiol may disrupt the proper function of FtsZ.
The effect of honokiol on Z-ring formation
To further examine whether honokiol interacts with the FtsZ protein in bacterial cells, the assembly of the Z-ring in B. subtilis cells was investigated. Bacteria were exposed to DMSO (as a solvent control) or honokiol, and their fluorescence was observed with a microscope. In the absence of honokiol, fluorescent spots indicating Z-rings were observed at the midpoint of the cells, and the percentages of Z-rings and ill-formed dots were 83.3 and 16.7%, respectively (Figures 5F,H). However, in honokiol-treated bacteria, most of the midcell spots disappeared, and Z-rings were observed in only 10.7% of cells (Figures 5G,H). Moreover, most of the FtsZ appeared as scattered, discrete spots along the elongated cells, suggesting mis-localization of the FtsZ protein (Figure 5G; Supplementary Figure S3). Since honokiol promotes FtsZ polymerization, it is possible that the scattered, punctate fluorescence spots in the treated bacteria represent multiple, non-functional FtsZ polymer structures. The presence of such disorganized FtsZ within elongated bacterial cells is characteristic of agents that interfere with FtsZ polymerization (Hurley et al., 2016).
Honokiol has little impact on the mammalian tubulin
As mammalian tubulin closely parallels bacterial FtsZ, we also assessed the potential influence of honokiol on mammalian tubulin. The tubulin inhibitor vinblastine (30 μM) caused a complete suppression of mammalian tubulin polymerization. Conversely, in the presence of paclitaxel (20 μM), a known enhancer of polymerization, a marked increase in fluorescence intensity was observed. Notably, when honokiol was applied at 20 μM, the results were comparable with those of controls in which mammalian tubulin was treated with 1% DMSO. These results suggest that honokiol neither stimulates nor inhibits tubulin polymerization (Figure 6A).
In vivo antibacterial activity of honokiol
Galleria mellonella larvae are widely utilized as a non-mammalian model for evaluating the in vivo toxicity and effectiveness of antibacterial drugs (Tsai et al., 2016; Allegra et al., 2018). Our results demonstrated that the survival rates of larvae injected with 0.6 mg/kg and 2.5 mg/kg honokiol after 120 h were 89.7 and 96.6%, respectively (Figure 6B). There were no statistically significant differences compared with the vehicle and PBS groups (P1 = 0.1577, P2 = 0.5480). These results indicate that honokiol has no or low toxicity toward the larvae and does not cause apparent harm. In contrast, the survival rate of S. aureus-infected larvae plummeted to 6.7% after 120 h, while honokiol-treated larvae exhibited a significantly increased survival rate at the same time point. The survival rates of the infected larvae after 120 h were 24.1% (p = 0.0103) and 69.0% (p < 0.0001) when treated with 0.6 mg/kg and 2.5 mg/kg honokiol, respectively (Figure 6B). This substantial difference is not only statistically significant but also indicates the effective rescue of larvae from S. aureus infection by honokiol. The results obtained from the larval model further support a non-toxic profile of honokiol.
Proposed binding mode of honokiol in FtsZ
Based on the results obtained from the aforementioned biological assays, honokiol may potentially interact with FtsZ. A molecular modeling study was therefore performed to predict its potential binding site on the FtsZ protein. The optimal docking pose suggests that honokiol likely binds to an interdomain cleft in the C-terminal region. This narrow cleft is formed by the T7-loop, the H7-helix and the four-stranded β-sheet, as illustrated in Figure 7A. Further insights into the molecular interactions are revealed in the 2D ligand interaction diagram (Figure 7B), which illustrates the predicted interactions between honokiol and FtsZ residues. Hydrophobic interactions play a significant role in the binding of honokiol to key residues of FtsZ, including Asp 199, Leu 200, Ile 228, Val 297 and Thr 309. Notably, Pi-lone pair and amide-Pi stacked interactions were found between Thr 309 and the aromatic ring of honokiol. Moreover, van der Waals forces contribute to the interaction between honokiol and amino acids such as Val 310 and Gln 192 situated around the binding pocket. This analysis provides possible molecular-interaction information for honokiol binding to the FtsZ protein, and the present study may give valuable insights into new drug development targeting FtsZ for antibacterial therapy.
Antimicrobial susceptibility assay
The B. subtilis strain examined in this assay was sourced from our internal collection, while additional strains were obtained from the American Type Culture Collection (ATCC, United States). Antimicrobial susceptibility tests were performed following the broth microdilution methods in 96-well microplates outlined in the Clinical and Laboratory Standards Institute (CLSI) guidelines (Wikler et al., 2009). For S. aureus strains, cation-adjusted Mueller Hinton broth (CAMHB) was utilized; for E. faecium and E. faecalis strains, Brain Heart Infusion broth (BHI) was employed; Mueller Hinton broth (MHB) was used for the other strains. Bacteria stored in glycerol stocks were first streaked onto Luria-Bertani (LB) agar plates and incubated overnight at 37°C. A single colony from the agar plate was then inoculated into MHB (5 mL), CAMHB (5 mL) or BHI (5 mL) and cultured overnight at 37°C with agitation at 250 rpm. The overnight cultures were diluted tenfold into fresh medium and incubated with shaking. After shaking for 2 h, the absorbance at 600 nm (OD600) of the cell culture was measured, and the concentration of the culture was adjusted to 5 × 10^6 CFU/mL. Subsequently, the cells (10 μL) were transferred to a 96-well microplate for measurement. The compounds to be tested were dissolved in DMSO and subjected to two-fold serial dilution; the concentration of DMSO in each well was fixed at 1%. Following incubation of the plates at 37°C for 18 h, the OD600 values were recorded for each well with a microplate reader (Bio-Rad). All experiments were performed in triplicate. The minimum inhibitory concentration (MIC) was defined as the lowest compound concentration at which bacterial growth was inhibited by ≥90%.
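A minimal sketch of how the ≥90% inhibition criterion above translates into a MIC read-out from the OD600 values of a two-fold dilution series; the concentrations and OD readings below are hypothetical and serve only to illustrate the calculation (it also assumes inhibition is monotonic with concentration):

```python
import numpy as np

def mic_from_od(concentrations, od_treated, od_growth_control, od_blank=0.0, cutoff=0.90):
    """Lowest concentration giving >= `cutoff` growth inhibition relative to the untreated control."""
    growth = (np.asarray(od_treated, float) - od_blank) / (od_growth_control - od_blank)
    inhibitory = [c for c, g in zip(concentrations, growth) if g <= 1.0 - cutoff]
    return min(inhibitory) if inhibitory else None  # None means the MIC lies above the tested range

# Hypothetical two-fold series (ug/mL) and OD600 readings after 18 h at 37 C:
concs = [32, 16, 8, 4, 2, 1]
ods = [0.05, 0.05, 0.05, 0.06, 0.40, 0.62]
print(mic_from_od(concs, ods, od_growth_control=0.65))  # -> 4
```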
Time-killing curve assay
S. aureus ATCC 29213 or B. subtilis 168 cultures were diluted to an approximate concentration of 10^5 CFU/mL in CAMHB or MHB, respectively, and various concentrations of honokiol were incorporated. The cultures were then placed in a 37°C incubator with continuous shaking. At predetermined intervals, a sample (100 μL) was taken for serial dilution in 900 μL of CAMHB or MHB. Following this, 100 μL aliquots from three dilutions were spread onto Mueller Hinton agar plates. After a 24-h incubation at 37°C, the plates were examined to determine viable cell counts (CFU/mL). All experiments were performed in triplicate.
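A short illustrative helper for the colony-count back-calculation implied by this protocol (plate counts are converted to CFU/mL from the dilution factor and the plated volume); the numbers in the usage line are hypothetical:

```python
def cfu_per_ml(colonies, dilution_factor, plated_volume_ml=0.1):
    """Viable count of the original sample from a spread plate.
    `dilution_factor` is the total fold-dilution of the plated sample (e.g. 1e3 for a 10^-3 dilution);
    the default plated volume corresponds to the 100 uL aliquots described above."""
    return colonies * dilution_factor / plated_volume_ml

# e.g. 42 colonies counted on the plate spread from the 10^-3 dilution:
print(cfu_per_ml(42, 1e3))  # 420000.0 CFU/mL
```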
Synergistic effect of honokiol with β-lactam antibiotics or PMBN
Synergistic effects of honokiol with β-lactam antibiotics or PMBN were examined with checkerboard assays (Tsubery et al., 2000). The honokiol solution was diluted in two-fold serial dilutions across the 96-well microplate, while β-lactam antibiotics or PMBN were similarly diluted down the plate. Two columns of the 96-well assay plate were reserved for untreated cells. Bacterial suspensions at 5 × 10^5 CFU/mL were then added to each well of the assay plate to a final volume of 100 μL. All experiments were performed in triplicate. The assay plates were incubated at 37°C, and after 18 h a microplate reader was used to measure the OD600 value of each well.
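As a rough sketch of how a checkerboard plate read in this way can be reduced to a combination FICI (scanning all growth-inhibited wells for the lowest index), shown here under the same ≥90% inhibition convention used for single-agent MICs; the names and values are illustrative, not the authors' analysis script:

```python
import numpy as np

def checkerboard_min_fici(od, row_concs, col_concs, mic_row_alone, mic_col_alone,
                          od_control=1.0, cutoff=0.90):
    """Minimum FICI over all inhibited wells of a checkerboard.
    od[i, j] is the OD600 for row_concs[i] of drug A combined with col_concs[j] of drug B."""
    growth = np.asarray(od, float) / od_control
    best = np.inf
    for i, ca in enumerate(row_concs):
        for j, cb in enumerate(col_concs):
            if growth[i, j] <= 1.0 - cutoff:  # well counted as inhibited
                best = min(best, ca / mic_row_alone + cb / mic_col_alone)
    return best

# Hypothetical 2x2 corner of a plate: rows = honokiol (1, 2 ug/mL), cols = methicillin (64, 128 ug/mL)
od = [[0.05, 0.04], [0.03, 0.03]]
print(checkerboard_min_fici(od, [1, 2], [64, 128], mic_row_alone=4, mic_col_alone=1024))  # 0.3125
```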
Visualization of bacterial morphology, bacterial membrane and Z-ring
The morphology of B. subtilis cells was examined with an Olympus FSX100 microscope. Log-phase B. subtilis cells were diluted to an OD600 of 0.1 and incubated, with or without honokiol, at 37°C for 4 h. Following incubation, the bacterial cells were harvested and re-suspended in PBS buffer containing 0.25% agarose. A volume of 10 μL of the sample mixture was then applied to a microscope slide pre-treated with 0.1% (w/v) poly-L-lysine. For membrane staining, B. subtilis cells were subsequently incubated with 1.6 μM FM 4-64 for an additional half hour at 37°C without shaking. After incubation, the cells were harvested and re-suspended in 100 μL of PBS buffer containing 0.25% agarose, and 10 μL of this cell suspension was deposited onto a microscope slide pre-coated with 0.1% (w/v) poly-L-lysine. The morphology of the bacterial cells was then examined under a light phase-contrast microscope (Olympus Bio Imaging Navigator FSX 100). For visualization of the Z-ring, a culture of B. subtilis harboring an IPTG-inducible plasmid for the overexpression of GFP-tagged FtsZ was cultivated in LB medium supplemented with 30 μg/mL chloramphenicol. Following overnight incubation, a portion of the culture was diluted to 1% in fresh LB medium supplemented with 4 μg/mL honokiol and 40 μM IPTG, and the cells were incubated for an additional 4 h at 37°C. Subsequently, the cells were fixed, collected, and re-suspended in PBS buffer containing 0.25% agarose. A volume of 10 μL of the cell suspension was added to a microscope slide pre-coated with 0.1% (w/v) poly-L-lysine and examined using a fluorescence microscope with a 60× oil immersion objective and a standard FITC filter cube. The images were obtained using an Olympus Bio Imaging Navigator FSX 100 microscope system.
GTPase activity test
The GTPase activity of S. aureus FtsZ was assessed following established protocols outlined in our prior research (Fang et al., 2019b). A phosphate assay kit was employed for the evaluation, conducted in 96-well microplates based on a previously described methodology. In this experiment, FtsZ protein (4 μM) was incubated with honokiol at various concentrations in Tris buffer (20 mM, pH 7.4) for 10 min at room temperature. To prevent compound aggregation, 0.01% Triton X-100 was included. Following this, 5 mM MgCl2 and 200 mM KCl were added to the reaction mixture. The reactions were initiated by introducing 500 mM GTP and allowed to incubate at 37°C. All experiments were performed in triplicate. After a 30-min incubation period, the reaction was terminated by the addition of Cytophos reagent (100 μL), and the sample was further incubated for 10 min. The quantification of inorganic phosphate was conducted with a microplate reader by recording the absorbance at 650 nm.
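The percentage inhibition values quoted in the Results can be derived from these absorbance readings roughly as below; the specific absorbance numbers are hypothetical, chosen only to reproduce an inhibition of about 60%:

```python
def percent_inhibition(a_treated, a_untreated, a_blank=0.0):
    """GTPase inhibition (%) from phosphate-assay absorbance readings at 650 nm."""
    return 100.0 * (1.0 - (a_treated - a_blank) / (a_untreated - a_blank))

# Hypothetical readings: blank (no enzyme), untreated FtsZ, and FtsZ with 4 ug/mL honokiol
print(round(percent_inhibition(a_treated=0.22, a_untreated=0.50, a_blank=0.05), 1))  # 62.2
```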
Effects of honokiol on the FtsZ polymerization
For the light scattering assays, a fluorescence spectrometer was employed to detect FtsZ polymerization under the conditions reported previously (Fang et al., 2019b). FtsZ at 7.5 μM in MOPS buffer (50 mM, pH 6.5, supplemented with 0.01% Triton X-100 to prevent compound aggregation) and honokiol at various concentrations were loaded into a fluorometer cuvette. The polymerization reaction was initiated by sequential additions of 50 mM KCl, 10 mM MgCl2 and 1 mM GTP and was then monitored for 2000 s. The experiments were performed in triplicate. For the TEM analysis, 12 μM S. aureus FtsZ was incubated in the absence or presence of honokiol (4 μg/mL) in MOPS buffer (50 mM, pH 6.5) at room temperature. After incubation for 10 min, the reaction mixtures were supplemented with 50 mM KCl, 5 mM MgCl2 and 1 mM GTP, followed by a subsequent incubation at 37°C for 15 min. Next, the resulting sample mixtures (10 μL) were deposited onto 400-mesh glow-discharged Formvar carbon-coated copper grids for 10 min. The grids were then negatively stained with 0.5% phosphotungstic acid (PTA, 10 μL) for 30 s, air-dried, and examined.
Effect of honokiol on the secondary structural changes of FtsZ
SaFtsZ (10 μM) was allowed to react for 30 min at 25°C with either no addition or with various concentrations of honokiol (1 and 4 μg/ mL) in a 20 mM Tris buffer (pH 7.4), which included 0.01% Triton X-100 to prevent the compound aggregation.The far-UV circular dichroism (CD) spectrum was recorded over the wavelength range of 200-250 nm using a JASCO J-810 spectropolarimeter that was fitted with a temperature controller and a 0.1 cm path length quartz cuvette.Each spectrum was an average of five scans.The CD spectra were analyzed for deconvolution and statistical purposes using Jasco analysis software and Origin 8.0 software, respectively.
Impact of honokiol on eukaryotic tubulin polymerization
Eukaryotic tubulin polymerization in the presence of honokiol was studied using a tubulin polymerization assay kit (BK011P, Cytoskeleton, Inc.) with fluorescence detection. The polymerization process was followed by tracking the increase in fluorescence intensity of a fluorescent reporter, 4′,6-diamidino-2-phenylindole (DAPI), as it was incorporated into the microtubules. The concentration of porcine brain tubulin used in the assay was 2 μg/mL. As control references, paclitaxel (20 μM) and vinblastine (30 μM), known to promote and inhibit tubulin polymerization, respectively, were also included. The test compound, honokiol, was administered at a concentration of 20 μM, which equated to 5.3 μg/mL, in a final solution of 1% DMSO. All experiments were performed in triplicate. The fluorescence readings were obtained using a PolarStar Optima microplate reader, with the excitation wavelength set at 360 nm and the emission wavelength at 450 nm. The collected data were managed using Microsoft Excel and further analyzed with Origin analysis software.
Impact of honokiol on Staphylococcus aureus infection in a Galleria mellonella larvae model
G. mellonella larvae weighing around 0.25 g were selected randomly. The honokiol solution (20 μL) was injected per larva with a microinjector (HAMILTON, Switzerland). Three independent experimental groups (10 larvae per group) were used. The experimental groups were tested under the following conditions: vehicle, PBS (0.5% DMSO); infected group (injected with S. aureus ATCC 29213 suspension at a final concentration of 0.5 McFarland); compound group (injected with honokiol at 0.6 mg/kg or 2.5 mg/kg); and treatment group (injected with honokiol at 0.6 mg/kg or 2.5 mg/kg 2 h after S. aureus injection). All experiments were performed in triplicate. Following injection, the larvae were kept in a dark room at 37°C. The number of viable larvae in each experimental group was counted at 24-h intervals for 120 h, and the survival rate was then computed and plotted (Peng et al., 2023). Statistical analysis and graphical representation of the experimental outcomes were conducted with GraphPad Prism 8.0 software (GraphPad Software, San Diego, California, United States). The experimental data are presented as mean ± standard deviation (x ± s). Significance was assessed using the log-rank test; statistical significance was denoted by p < 0.05, with *p < 0.05, **p < 0.01, and ***p < 0.001 indicating varying degrees of significance.
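A minimal sketch of how the survival percentages reported for each group can be tabulated from the 24-h viability counts (the per-larva status matrix below is hypothetical; a dedicated survival-analysis package would still be needed for the log-rank comparison itself):

```python
import numpy as np

def survival_curve(status_matrix):
    """Percent survival at each observation time.
    Rows are individual larvae; columns are the 24, 48, 72, 96 and 120 h checks (1 = alive, 0 = dead)."""
    status = np.asarray(status_matrix)
    return 100.0 * status.mean(axis=0)

# Hypothetical group of 10 larvae:
group = [[1, 1, 1, 1, 1]] * 7 + [[1, 1, 1, 0, 0]] * 2 + [[1, 0, 0, 0, 0]]
print(survival_curve(group))  # [100.  90.  90.  70.  70.]
```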
Molecular modeling analysis
For the molecular modeling investigation, we utilized the CDocker program within Discovery Studio 2016. An X-ray crystal structure of S. aureus FtsZ, sourced from the RCSB Protein Data Bank (PDB entry: 4DXD) (Tan et al., 2012), was employed as the receptor. Removal of water molecules and co-crystal ligands, as well as protein preparation for docking, was carried out using an automated protocol in Discovery Studio. The binding sites of FtsZ were defined from the PDB record. The molecular structure of honokiol was hand-drawn and converted into a three-dimensional format with the Discovery Studio molecule editor. The automated docking study was then executed with the DS-CDocker protocol. CDOCKER is a docking application that employs the CHARMM force field with a rigid receptor model. Random conformations of honokiol were generated through high-temperature molecular dynamics simulation, involving 1,000 dynamics steps conducted at 1,000 K. These diverse conformations of honokiol were subsequently docked into the binding site, with their orientations refined through simulated annealing molecular dynamics consisting of 2,000 steps heated to 700 K followed by 5,000 steps of cooling to 300 K. The energy values of the final docking poses were computed and ranked by their respective CDOCKER scores. The highest-scoring pose underwent thorough visual inspection for a detailed analysis of molecular interactions.
Discussion and conclusion
In recent years, increasing evidence suggests that honokiol possesses a wide range of pharmacological activities (Rauf et al., 2021). For instance, honokiol inhibits the growth and spread of tumor cells through various pathways and exerts anti-tumor effects through multiple mechanisms, including induction of apoptosis, inhibition of cell proliferation, and suppression of angiogenesis. Honokiol has shown promising results in preclinical studies against various cancers, including breast, lung, prostate, and pancreatic cancer (Cen et al., 2018; Ong et al., 2019; Zhang et al., 2019).
Honokiol also possesses strong antioxidant properties, scavenging free radicals and reducing oxidative stress (Rauf et al., 2021). Its antioxidant activity has been linked to protection against neurodegenerative diseases, cardiovascular disorders, and age-related cognitive decline (Wang et al., 2011; Talarek et al., 2017; Wang et al., 2022). Moreover, honokiol exhibits potent anti-inflammatory effects by suppressing the production of pro-inflammatory cytokines and inhibiting NF-κB signaling pathways. Studies have demonstrated its efficacy in ameliorating inflammatory conditions such as arthritis, colitis, and dermatitis (Tse et al., 2005; Debsharma et al., 2023).
Furthermore, in a 30-day clinical trial with 40 participants, no significant side effects were found from daily doses of 11.9 mg of honokiol (Campus et al., 2011).Given the safety and pharmacological activity of honokiol, it shows promise for further development as a natural compound-based drug (Sarrica et al., 2018).
Recently, the antibacterial activity of honokiol has been reported (Rauf et al., 2021). Honokiol at 10 μg/mL was found to inhibit biofilm formation by MRSA 41573 and, at 50 μg/mL, it disrupted the mature biofilm. The results of RT-PCR analysis suggest that its potential mechanism of action may involve the inhibition of sarA, cidA, and icaA, as well as of eDNA release and PIA expression (Li et al., 2016; Qiao et al., 2016). Although honokiol has been reported to inhibit the growth of Actinobacillus actinomycetemcomitans, Porphyromonas gingivalis, and Prevotella intermedia with an MIC value of 25 μg/mL (Chang et al., 1998; Ho et al., 2001), its inhibition mechanism is still unclear at present. Using computer-aided simulation, Liu et al. suggested that honokiol could bind to the PC190723 binding site of the FtsZ protein. Biological tests also found that honokiol could inhibit FtsZ polymerization at a high concentration (100 μg/mL) and that it exhibited antibacterial activity against three strains of S. aureus tested, with MIC values of 8-16 μg/mL (Liu et al., 2014). However, in that study, neither the effects of honokiol on FtsZ GTPase activity nor its inhibitory effects on bacterial cell division were investigated. Additionally, Triton X-100 was not added in the assessment of honokiol's inhibitory effect on FtsZ. Since some small molecules may inhibit FtsZ polymerization through compound aggregation, the use of Triton X-100 is required to exclude false-positive results when testing the effect of compounds on FtsZ polymerization (Anderson et al., 2012).
In the present study, we revealed that honokiol effectively enhanced FtsZ protein assembly and inhibited the GTPase activity of FtsZ, while showing no observable effects on tubulin polymerization. Moreover, honokiol was found to inhibit bacterial cell division. In antibacterial tests, honokiol displayed potent antibacterial effects against Gram-positive bacteria including VREF and MRSA. In addition, honokiol restored MRSA susceptibility to the β-lactam antibiotics tested. Furthermore, the growth of Gram-negative bacteria was inhibited when honokiol was combined with PMBN. The anti-infective potential of honokiol against bacteria was further validated using G. mellonella larvae as an in vivo model. These notable features position honokiol as a promising candidate for further structural modification, intended to bolster its pharmacological activities, refine its pharmacokinetic profile, and ultimately pave the way for developing potent antibacterial agents against drug-resistant bacterial infections.
FIGURE 2
FIGURE 2 Time-killing curve of honokiol against S. aureus ATCC 29213 (A) and B. subtilis 168 (B).Different concentrations of honokiol were represented by different colors.
FIGURE 4
FIGURE 4 The effect of honokiol on the morphology and membrane of B. subtilis. The B. subtilis cells were grown in the absence (A) and presence (B) of 4 μg/mL honokiol. Observations after membrane staining with the red fluorescent dye FM 4-64 are shown in the absence (C) or presence (D) of 4 μg/mL honokiol (scale bar: 10 μm). (E) Comparison of the cell length distributions of B. subtilis cells (50 cells per group) treated with 1% (v/v) DMSO or 4 μg/mL honokiol. ****Statistically significant difference.
FIGURE 5
FIGURE 5 Effects of honokiol on FtsZ. (A) Inhibition of the GTPase activity of FtsZ by honokiol. (B) Time-dependent polymerization profiles of S. aureus FtsZ in the absence and presence of honokiol at concentrations ranging from 1 to 4 μg/mL. (C,D) Transmission electron micrographs of FtsZ polymers in the absence (C) and presence (D) of 4 μg/mL honokiol (scale bar: 500 nm). (E) CD spectra of FtsZ in the absence and presence of 1 or 4 μg/mL honokiol. (F,G) The effect of honokiol on Z-ring formation in B. subtilis (scale bar: 10 μm); the bacterial cells were grown in the absence (F) or presence (G) of 4 μg/mL honokiol. (H) Comparison of the percentages of FtsZ rings and ill-formed dots in B. subtilis cells treated with 1% (v/v) DMSO or 4 μg/mL honokiol.
FIGURE 6 (A) Effect of honokiol, vinblastine and paclitaxel on the polymerization of mammalian tubulin. (B) Evaluation of the toxicity and antibacterial activity of honokiol in vivo using the G. mellonella larvae model (HNK: honokiol).
FIGURE 7
FIGURE 7 The proposed binding mode of honokiol in S. aureus FtsZ (PDB ID: 4DXD). (A) Honokiol in the interdomain cleft of FtsZ. (B) Predicted interactions between honokiol and amino acids of FtsZ.
TABLE 1
Antibacterial activity of honokiol against a panel of bacterial strains.
TABLE 3
Antimicrobial activities of honokiol and PMBN against E. coli and K. pneumoniae (The concentration of PMBN in the combination test is fixed at 20 μg/mL). | 8,240.4 | 2024-07-22T00:00:00.000 | [
"Medicine",
"Chemistry",
"Biology"
] |
Feasibility of early radial artery occlusion recanalization and reuse through transradial access for neuroendovascular procedures
Background Radial artery occlusion (RAO) remains a significant limitation of neuroendovascular procedures performed through transradial access (TRA) when the radial artery needs to be reused. Instances of early RAO recanalization to successfully complete neuroendovascular procedures have rarely been documented. Materials and methods Records and imaging data were extracted retrospectively for all patients who underwent TRA diagnostic angiography and neuroendovascular procedures in our center from June 2022 to February 2023. Patients with early RAO who required repeat TRA were included. Results A total of 46 patients underwent repeat TRA, and 13 consecutive patients who experienced early RAO after angiography, as confirmed by ultrasonography, were enrolled in this study. The occluded radial arteries were successfully recanalized, and the subsequent neuroendovascular procedures were carried out successfully. During an average follow-up of 7.1 months, no patient exhibited symptomatic RAO, dissection, hematoma or pseudoaneurysm. Conclusions Early RAO recanalization and reuse for neuroendovascular procedures through TRA are feasible. A visually guided and stable puncture process plays a crucial role in successfully recanalizing early RAO.
Background
The adoption of transradial access (TRA) for diagnostic cerebral angiography and neuroendovascular procedures is increasing worldwide. This trend may be attributable in part to the recognition that TRA offers substantial reductions in complications and costs when compared to transfemoral access (TFA) [1-4]. Numerous studies have demonstrated the feasibility of repeat TRA for neuroendovascular procedures [5, 6]. However, despite the many advantages of TRA, RAO remains a significant limitation of neuroendovascular procedures via TRA when the radial artery needs to be reused. Nevertheless, reaccessing the occluded radial artery is a feasible approach for performing repeated neuroendovascular procedures [7]. In this retrospective study, we present our experience and techniques in recanalizing early RAO through repeat TRA for neuroendovascular procedures at a single center.
Patients
The electronic medical records (inpatient and outpatient) of all patients who underwent diagnostic angiography and interventional neuroendovascular procedures utilizing TRA at our center from June 2022 to February 2023 were retrospectively reviewed. Inclusion criteria were: (1) patients who underwent diagnostic angiography or interventional neuroendovascular procedures via TRA; (2) patients diagnosed with early RAO by vascular ultrasound examination; (3) patients who underwent repeat TRA on the same side. Exclusion criteria were: (1) patients who initially underwent TRA followed by unscheduled TFA; (2) patients who underwent diagnosis and treatment of non-cerebrovascular diseases via TRA. The study was approved by the Ethics Committee of Zhongshan Hospital, Fudan University (Xiamen Branch). The need for informed consent was waived by the ethics committee of Zhongshan Hospital (Xiamen Branch) owing to the retrospective nature of the study. All investigations were performed in accordance with the Declaration of Helsinki.
Recanalization technique
Radial artery thrombosis and the absence of blood flow were confirmed by ultrasound within 48 h after the patient's angiography was completed. The repeat puncture site was located within a 3 cm radius of the initial puncture site. Ultrasound examination revealed the presence of thrombus in the radial artery, and compression with the ultrasound probe did not demonstrate any vascular deformation or blood flow. We utilized an Arrow Quick Flash radial artery catheterization set (RA-04220, Mexico) for the puncture. The set consisted of a puncture needle tube encompassed by an outer sheath and an internal movable guide wire (Fig. 1).
Once the echogenic tip of the puncture needle was visualized within the lumen of the radial artery under transverse ultrasound guidance, the movable guide wire was slowly advanced into the vessel and positioned within the true lumen under long-axis ultrasound (Fig. 2).
The outer sheath was then carefully advanced while observing for bleed-back; only a small amount of blood was found around the outer sheath. To confirm its position within the radial artery, radial arteriography was performed, excluding cases with tortuous arteries, associated calcified plaques, adjacent small dissections, or long-segment clots. Following confirmation, a cocktail of verapamil (2 mg), nitroglycerin (100 µg), and 5 mL of aspirated blood was injected. Under the guidance of the roadmap technique, the guide wire of either a 4 Fr or 6 Fr introducer was navigated into the radial artery and successfully placed. To prevent clot formation, heparin (5000 U) was added to physiological saline for instillation during cerebral angiography. A radial artery compression device (TR Band, Japan) was used at the end of the diagnostic procedure.
Statistical analysis
SPSS version 24.0 software (SPSS Inc., Chicago, IL) was used for the analysis. Continuous variables are presented as mean ± standard deviation. The time to early RAO recanalization is presented as median and interquartile range. Intergroup comparisons were performed using the t-test. Unless stated otherwise, a two-tailed P < 0.05 was considered statistically significant.
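As a minimal illustration of this analysis (structured like the diameter comparison and the recanalization-time summary reported in the Results), using synthetic data rather than the actual patient values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rao = rng.normal(2.1, 0.4, 13)      # hypothetical radial-artery diameters (mm), early-RAO group
no_rao = rng.normal(2.3, 0.4, 33)   # hypothetical diameters (mm), no-RAO group

t, p = stats.ttest_ind(rao, no_rao)  # two-tailed independent-samples t-test
print(f"{rao.mean():.1f} ± {rao.std(ddof=1):.1f} vs {no_rao.mean():.1f} ± {no_rao.std(ddof=1):.1f} mm, p = {p:.2f}")

days = np.array([1, 3, 3, 3, 4, 5, 6, 6, 6, 7, 7, 7, 8])  # hypothetical days to recanalization
q1, med, q3 = np.percentile(days, [25, 50, 75])
print(f"median {med:.0f} days (IQR {q1:.0f}-{q3:.0f})")
```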
Results
A total of 144 patients underwent TRA procedures for diagnostic angiography (127 cases) and neuroendovascular procedures (62 cases) from June 2022 to February 2023, resulting in a total of 189 TRA instances. Among these, diagnostic angiography was performed from the left TRA in 2 cases, while the remaining 142 cases were conducted on the right side. All repeat TRA procedures were performed on the right side. The initially planned TRA count was 193, but two diagnostic angiographies and two neuroendovascular procedures were excluded because of an intraoperative change from TRA to TFA. One of the excluded cases involved an elderly woman with stenosis of the ophthalmic segment of the right internal carotid artery, who was diagnosed with RAO after cerebral and coronary angiography performed at another hospital using a 4 Fr sheath via TRA. Four days after the initial operation, the patient was transferred to our center, where the RAO was recanalized and a 6 Fr sheath was successfully placed. Angiography revealed that the dominant artery of the patient's right hand was the ulnar artery, and the radial artery was unable to accommodate the 6 Fr sheath, ultimately requiring a switch to TFA.
Over the course of the 8-month study, a total of 46 patients underwent repeat TRA for diagnostic angiography, intervention, or postoperative re-examination. For diagnostic angiography, all patients received a 4 Fr or 5 Fr sheath, while interventional procedures involved the placement of a 6 Fr sheath. Among these patients, 44 had a successful second TRA, and 2 underwent a third successful TRA. During the initial diagnostic puncture procedures, ultrasound guidance was not routinely employed unless there were multiple unsuccessful puncture attempts. A total of 13 of the 46 patients requiring a second procedure were diagnosed with early RAO based on ultrasonography following their initial diagnostic angiography. The average diameter of the radial artery in the 13 patients with RAO was 2.1 ± 0.4 mm, while in the 33 cases without RAO the average diameter was 2.3 ± 0.4 mm.
The diameter of the radial artery was not correlated with the risk of occlusion (P > 0.05). All 13 patients underwent recanalization of the occluded radial artery within 6 days (IQR: 3-7 days), and subsequent intervention procedures were then performed successfully (Table 1).
Discussion
TRA has increasingly become a standard approach for diagnostic cerebral angiography and neuroendovascular procedures. Compared with TFA, TRA offers the advantage of reducing the risk of access-site complications [3]. At our center, the right TRA was predominantly preferred because of its convenience for the surgeons. Additionally, based on our single-center experience, the right TRA has demonstrated greater stability for cerebrovascular diseases, particularly for certain left carotid system diseases, except for cases involving the left vertebral artery.
The reported incidence of RAO varies from 0.8 to 33%, with early RAO occurring in approximately 5.5-7.7% of cases [8-10]. Several factors have been associated with RAO, including multiple puncture failures, female sex, age, vessel diameter, introducer sheath size, prolonged post-procedure compression time, patient hemostasis, and higher doses of anticoagulation [11, 12]. In our center, we have implemented the use of a radial artery compression device (TR Band, Japan; maximum air: 18 mL). However, it is worth noting that a prolonged compression time (6 h, deflated by 2 mL every 2 h) and high-pressure compression (14 mL) might contribute to the incidence of RAO in our center. Deflation by 2 mL every 15 min (rather than every 2 h) is much more practical and feasible, even in cases involving large sheaths and patients on antiplatelet therapy. Considering the dosage of heparin and the patient's antiplatelet status, the duration of wrist band retention after diagnostic angiography is typically 1-2 h, while it is extended to 3-4 h following treatment procedures [13].
Fig. 2 Ultrasound images showing transverse (A, ↑ black circle is the radial artery) and long-axis (B, ↑ black fusiform structure is the radial artery) views of the radial artery with RAO. The radial artery with the echogenic needle tip within the lumen in transverse view (C, ↑ white dot in the round black area is the needle tip). The movable guide wire located in the true lumen on ultrasound (D, ↑ white strip is the movable guide wire).
Furthermore, studies have demonstrated the safety and feasibility of recanalizing chronic radial artery occlusion through distal TRA [14, 15]. In 2021, Feng Li et al. successfully recanalized a RAO that had formed three days postoperatively using distal TRA guided by vascular ultrasonography [16]. Neil Majmundar et al. reported their successful experience with repeat TRA procedures and techniques for reaccessing occluded arteries in selected patients. Among their nine RAO patients, five failed recanalization and were subsequently converted to TFA, while four achieved successful recanalization [7]. Using our method, we achieved a 100% technical success rate in early RAO recanalization. Notably, in comparison with previous studies, the early RAO in this study formed primarily near the puncture point, resulting in relatively shorter occluded segments.
After the needle reaches the center of the blood vessel, we maintained stability in the right hand while using the left hand to operate the ultrasound and advance the movable guide wire of the puncture needle and outer sheath.This technique is crucial to avoid exchanging the left and right hands.In contrast to a micro puncture needle, the Arrow quick flash device avoids the need to exchange the right and left hands to place the guidewire, thus ensuring stability.
Ultrasound currently plays a unique role in diagnosing RAO and guiding radial artery puncture, offering the benefit of a visible puncture process and the ability to assess the compatibility between the radial artery and the outer diameter of the arterial sheath before the procedure [17]. Under ultrasound guidance, we can observe the tissue structure surrounding the radial artery, assess the length of thrombosis within the radial artery, and ensure that the vessel lumen remains patent when pressure is applied with the ultrasound probe. Additionally, successful puncture with the outer sheath typically results in minimal bleeding, as observed in most medical records. Through ultrasound imaging and manual injection of contrast agent, we confirm the stability of the outer sheath within the radial artery and assess blood flow conditions at the distal end of the sheath.
The early RAO recanalization through in situ puncture holds paramount significance in contemporary medical interventions.This approach encompasses a multifaceted spectrum of benefits and applications that underscore its pivotal role in clinical practice: (1) Facilitating subsequent interventional neuroendovascular or cardiovascular procedures after recanalization, allowing for comprehensive patient treatment; (2) Mitigating potential vascular injury from new approaches via repeated access routes, and reducing potential risks; (3) Offering a viable alternative for patients declining TFA, catering to individual preferences and enhancing patient satisfaction; (4) Addressing symptomatic RAO; (5) Providing arteriovenous fistula access for hemodialysis or serving as donors for coronary artery bypass grafting; (6) Achieving radial artery recanalization without the necessity for additional balloons or stents, and cutting down on costs.
There are several limitations to our study. It is a single-center retrospective design, which lacks randomization and presents challenges in controlling variables, and is thus susceptible to biases and confounding factors. The small sample size also constrains the ability to conduct additional and more in-depth statistical analyses. Additionally, our study may not encompass all the situations encountered in practice, such as patients with tortuous arteries, associated calcified plaques, adjacent small dissections, or long-segment clots. Therefore, further evaluation of this technique on a larger scale is necessary to validate its advantages and ensure its safety. Compared with the success rate of early RAO recanalization, the success rate in later-stage RAO appears relatively lower, likely attributable to the presence of mature thrombi and longer thrombotic segments. Investigating these underlying factors and improving the success rate of recanalization will be a focus of our forthcoming research. Recanalization of RAO is a significant area of focus; however, it is crucial to carefully weigh the risks and benefits associated with the procedure.
Conclusions
The recanalization of early RAO and subsequent neuroendovascular procedures through TRA are feasible.
A visible and stable puncture process significantly contributes to successful recanalization of early RAO. Some patients did not experience recurrent RAO following the recanalization and reuse procedure, and even in cases where RAO developed postoperatively, no access-related symptomatic issues were observed.
Fig. 1
Fig. 1 The process of puncture with arrow quick flash radial artery catheterization set
Table 1
Characteristics of patients with RAO recanalization and reuse through TRA RAO: radial artery occlusion; TRA: transradial access | 2,979.2 | 2024-01-31T00:00:00.000 | [
"Medicine",
"Engineering"
] |
Enhanced flow-motion complexity of skin microvascular perfusion in Sherpas and lowlanders during ascent to high altitude
An increased and more effective microvascular perfusion is postulated to play a key role in the physiological adaptation of Sherpa highlanders to the hypobaric hypoxia encountered at high altitude. To investigate this, we used Lempel-Ziv complexity (LZC) analysis to explore the spatiotemporal dynamics of the variability of the skin microvascular blood flux (BF) signals measured at the forearm and finger, in 32 lowlanders (LL) and 46 Sherpa highlanders (SH) during the Xtreme Everest 2 expedition. Measurements were made at baseline (BL) (LL: London 35 m; SH: Kathmandu 1300 m) and at Everest base camp (LL and SH: EBC 5,300 m). We found that BF signal content increased with ascent to EBC in both SH and LL. At both altitudes, LZC of the BF signals was significantly higher in SH, and was related to local slow-wave flow-motion activity over multiple spatial and temporal scales. In SH, BF LZC was also positively associated with LZC of the simultaneously measured tissue oxygenation signals. These data provide robust mechanistic information of microvascular network functionality and flexibility during hypoxic exposure on ascent to high altitude. They demonstrate the importance of a sustained heterogeneity of network perfusion, associated with local vaso-control mechanisms, to effective tissue oxygenation during hypobaric hypoxia.
Materials and Methods
Study participants. The signals analysed are from measurements made in 32 lowlanders (LL) all of whom were born and lived below 1000 m, and were not from a native high altitude population, and 46 Sherpa highlanders (SH). These 78 individuals represent a subset of the 144 participants of the Xtreme Everest 2 research expedition (XE2) 28 in whom we have previously reported the effects of hypobaric hypoxia on microvascular blood flow in the time and spectral domains 7 . Approval of the study design, risk management plan and protocol were obtained from both the University College London Research Ethics Committee (Ref: 3750/002) and the Nepal Health Research Council (NHRC) (1334). The study was performed to the standards set by the Declaration of Helsinki, except for registration in a database. All participants provided written informed consent for participation in the studies.
Tissue blood flux and oxygenation signal capture. Skin microvascular blood flux (BF) and tissue oxygenation (OXY) signals, and skin temperature were recorded simultaneously at the forearm using a combined laser Doppler fluximetry (LDF) and white light reflectance spectroscopy probe (CP1T-1000 LDF ™ /OXY ™ /temperature probe, Moor Instruments Ltd, Axminster, UK) placed approximately 10 cm proximal to the wrist. A second combined LDF ™ /temperature probe (VP1T, Moor, Axminster, UK) was placed on the pulp of the index finger. Probe position was recorded with a photograph and permanent marker pen to ensure use of the same anatomical site in subsequent measurements as previously described 7 .
Signals were collected at baseline (BL) in London for LL (35 m), and in Kathmandu (1300 m) for SH, and at Everest Base Camp (EBC) (5300 m) as described previously 8 . Measurements were made on the morning of day 2 after arrival at EBC after an 11 day trek from Lukla (2800 m). Participants lay in a supine position during signal capture with their non-dominant arm resting at heart level. Full details of the environmental and physiological measurements made at each altitude are given elsewhere 7 .
Signal analysis. All signals were captured at a 40 Hz sampling rate using the manufacturer's software (moorVMS-PC software, Moor Instruments Ltd, Axminster, UK) and a continuous 10 min sample of each signal extracted for analysis in the time and spectral domain. The signals analysed were resting skin blood flux (BF, in arbitrary perfusion units, PU) and oxygenated Hb (oxyHb, in arbitrary units, AU). We elected to focus on the oxyHb output as the prime tissue oxygenation signal for the complexity analysis as suggested by our previous studies 22 .
The complexity of the BF and oxyHb signals arising from rhythmical flow-motion activity was explored using non-linear LZ complexity (LZC) analysis 21 to determine how much the BF and oxyHb signals differed from a random sequence of finite symbolic sequences derived from the time series 20,30 . The signals were represented as a binary string using delta encoding whereby a zero is recorded if a value is less than the previous value in the time series and a one when it is greater than that previously 22,23 . The LZC is a measure of the information content, or effort to describe, a signal and is the length of the shortest instruction set required to reconstruct it without loss of information. A random signal would have high complexity as there are no rules that define it whereas a periodic signal would have low complexity as the same terms are repeated continually. LZC analysis does not require the signal to be strongly stationary, unlike chaos-based entropy analysis, and thus the signal can be normalised to the length of the sample window 31 . Exhaustive LZC, where the signal is decomposed into all the instructions required to reproduce it, was then calculated for each 40 second epoch to determine the complexity lower bound. A complexity index (LZC index) was calculated as the mean of the 15 × 40 second epochs for each sampled signal to provide an index of the dynamic activity modulating the BF and oxyHb signals.
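A minimal Python sketch of the epoch-wise LZC calculation described above (delta encoding of the flux signal followed by an exhaustive Lempel-Ziv parse). The 40 Hz sampling rate, 40-second epochs and 15-epoch averaging follow the text; the parsing routine is a generic LZ76-style phrase count and the n/log2(n) normalisation is an assumed convention, not necessarily the authors' exact implementation.

```python
import numpy as np

def delta_encode(signal):
    """Binary string: 1 if a sample exceeds the previous value, else 0."""
    diffs = np.diff(np.asarray(signal, dtype=float))
    return ''.join('1' if d > 0 else '0' for d in diffs)

def lz_complexity(s):
    """Exhaustive Lempel-Ziv complexity: number of distinct phrases in the parse."""
    i, n, phrases = 0, len(s), 0
    while i < n:
        l = 1
        # extend the current phrase while it still occurs in the preceding text
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        phrases += 1
        i += l
    return phrases

def lzc_index(signal, fs=40, epoch_s=40, n_epochs=15):
    """Mean normalised LZC over consecutive epochs (the LZC index in the text)."""
    epoch_len = fs * epoch_s
    vals = []
    for k in range(n_epochs):
        b = delta_encode(signal[k * epoch_len:(k + 1) * epoch_len])
        n = len(b)
        # normalise by the asymptotic bound n / log2(n) for a binary source
        vals.append(lz_complexity(b) * np.log2(n) / n)
    return float(np.mean(vals))

# toy check: a slow periodic signal has lower complexity than white noise
t = np.arange(40 * 40 * 15) / 40.0
print(lzc_index(np.sin(2 * np.pi * 0.1 * t)))   # low
print(lzc_index(np.random.randn(t.size)))       # close to 1, high
```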
The physiological processes that modulate flow-motion and determine the information content of the BF signals operate at frequencies ranging from 0.001 Hz to 2 Hz 14 . They also appear to vary with altitude 8,9 . To take account of these multiple, and potentially varying, process scales we measured LZC in multiple time-scales (MLZC) using a coarse-graining approach 24,32 . MLZC was explored between scales τ = 1-12, as the maximum frequency of interest is governed by an upper limit of heart rate of 1.6 Hz and a minimum (Nyquist) sampling rate of twice this (3.2 Hz), which corresponds to scale τ = 12. To determine the association between the complexity of the BF signal and the mechanisms underlying flow-motion, we calculated Spearman correlations between the relative power spectral density in the endothelial (0.0095-0.02 Hz), sympathetic (0.02-0.06 Hz), myogenic (0.06-0.15 Hz), respiratory (0.15-0.4 Hz), and cardiac (0.4-1.6 Hz) frequency bands 14 and LZC at each scale.

Statistical analysis. Data were tested for Gaussian distribution using the Kolmogorov-Smirnov test and visual inspection of histograms. As the BF signal data did not show a normal distribution, data are presented as median (95%CI). SH and LL cohorts at BL and EBC were compared using two-way ANOVA. We used the Mann-Whitney U test and Wilcoxon signed-rank test to compare single cohorts between BL and EBC. The relationships between BF, oxyHb and network perfusion complexity at each site were assessed individually using Spearman's rank correlation coefficient. A p-value of < 0.05 was considered statistically significant for all analyses. We used multivariable linear regression models to explore factors that were independently associated with microvascular network perfusion heterogeneity and oxygenation as the dependent (outcome) variables. Explanatory variables included in the models were skin temperature, age and gender, with site included as a binary indicator variable.
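Returning to the multiscale and spectral-band analysis described above, a short sketch of the coarse-graining and band-power association is given below. It reuses the lzc_index helper from the earlier sketch; the Welch spectral estimator, segment length, and the commented per-subject correlation loop (bf_signals is a hypothetical list of recordings) are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import spearmanr

BANDS = {                      # fixed flow-motion intervals quoted in the text (Hz)
    'endothelial': (0.0095, 0.02), 'sympathetic': (0.02, 0.06),
    'myogenic': (0.06, 0.15), 'respiratory': (0.15, 0.4), 'cardiac': (0.4, 1.6),
}

def coarse_grain(signal, tau):
    """Non-overlapping averages of length tau (coarse-graining for MLZC)."""
    x = np.asarray(signal, dtype=float)
    n = (len(x) // tau) * tau
    return x[:n].reshape(-1, tau).mean(axis=1)

def mlzc(signal, scales=range(1, 13), fs=40):
    """LZC at each scale; effective sample rate approximated as fs // tau."""
    return {tau: lzc_index(coarse_grain(signal, tau), fs=max(fs // tau, 1))
            for tau in scales}

def relative_band_power(signal, fs=40):
    f, pxx = welch(signal, fs=fs, nperseg=fs * 100)   # long segments for the slow bands
    total = np.trapz(pxx, f)
    return {name: np.trapz(pxx[(f >= lo) & (f < hi)], f[(f >= lo) & (f < hi)]) / total
            for name, (lo, hi) in BANDS.items()}

# association across subjects: Spearman rho between myogenic power and LZC at one scale
# rho, p = spearmanr([relative_band_power(s)['myogenic'] for s in bf_signals],
#                    [mlzc(s)[4] for s in bf_signals])
```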
Results

The time domain characteristics of microcirculatory BF signals from the current subset of XE2 individuals used for complexity analysis are summarised in Table 1. Resting skin BF (PU) at the forearm of the 46 SH decreased with ascent to EBC, while that of the 32 LL remained unchanged, as reported previously in the full cohort 8 . In both sub-groups, resting BF at the finger decreased on ascent to EBC compared to BL (both p < 0.001). Skin temperature varied between groups and with ascent to EBC but was notably higher at the finger in SH at both BL and EBC (Table 1). In both groups resting BF was positively associated with skin temperature at BL at the forearm (LL, r = 0.370 p = 0.037; SH, r = 0.577 p < 0.0001) and at the finger (LL, r = 0.436 p = 0.013; SH, r = 0.764 p < 0.0001). This association was lost at EBC in LL (p > 0.05 for forearm and finger), but not in SH (forearm r = 0.261 p = 0.021; finger r = 0.643 p < 0.0001).
LZ complexity of microcirculatory blood flux and oxygenation signals increases with hypobaric hypoxia.
The complexity of the BF and oxyHb signals arising from oscillatory flow-motion activity was determined using non-linear LZ complexity (LZC) analysis. Examination of the BF signals obtained from SH and LL at BL and on ascent to EBC, revealed that the information content of the BF signals remained relatively constant over the 15 epochs (600 s) analysed in both groups under normoxia and hypobaric hypoxia. As previously, it was thus deemed appropriate to additionally present the complexity of each signal as a mean LZC index 22,23 . Figure 2 shows that the complexity of resting BF signals measured at the forearm and finger in SH and LL remained relatively constant over the 15 epochs sampled, at both BL and EBC. LZC index (mean LZC of 15 epochs) of the BF signal in both the forearm and finger microvascular beds was significantly influenced by group (forearm: p = 0.0001, F = 15.6; finger: p < 0.0001, F = 59.9), and site (forearm: p = 0.0013, F = 10.7; finger p = 0.0025, F = 9.5). Forearm BF LZC index increased in SH with ascent to EBC (BL vs EBC p < 0.0031), and at EBC was significantly greater than that of LL (SH vs LL p < 0.0001) (Fig. 3). There was no association between forearm BF LZC index and resting BF or skin temperature at BL or EBC in SH or LL.
LZC index of the finger BF signal was higher in SH than LL at both BL and EBC (both p < 0.0001), but only increased with ascent to EBC in LL (p = 0.0413) (Fig. 3). In SH, finger BF LZC index was positively associated with resting BF (p < 0.001 at both BL and EBC) and skin temperature at both altitudes (p = 0.001 at both BL and EBC). No such association was seen in LL.
Neither age nor sex was independently associated with BF LZC index in SH. However, in LL age was negatively associated with forearm BF LZC index (r = −0.491 p = 0.004).
We found no significant difference in LZC index of the oxyHb signals measured at the forearm between LL and SH, at either BL or EBC (p > 0.05) (Fig. 4). However, in SH forearm BF LZC index was positively associated with the LZC index of the oxyHb signal at both BL (r = 0.414 p = 0.005) and EBC (r = 0.307 p = 0.004). No such association was found in LL at BL (p = 0.672), and it only approached significance at EBC (r = 0.353 p = 0.065).

Changes in multiscale LZC (MLZC) at altitude are associated with flow-motion activity.

The LZC of the BF signals at the forearm was computed over 12 scales for all LL and SH at BL and EBC (Fig. 5). As the scale length increased, so too did LZC, with better separation seen at certain scales. In both SH and LL, the largest differences between the LZC values at BL and EBC were for time scales > 10 in both forearm and finger microvascular beds (p < 0.05). Figure 6 shows the association of the power in the five spectral bands with forearm and finger BF LZC at each scale for the two groups at each altitude. There were clear differences in the associations between LZC and spectral power in the five bands both between SH and LL, and between BL and EBC, in the forearm and finger BF signals. In LL, cardiac activity was negatively associated with forearm BF LZC at all scales at both altitudes. Power in the respiratory band was positively associated with LZC over scales τ = 4-12 at BL, but did not reach significance at EBC. In the low frequency bands, myogenic activity was negatively associated with LZC at EBC over scales τ = 1-12 (40-3.34 Hz sample rate). No significant association was found between power in the neurogenic activity band and LZC at any scale at either BL or EBC in LL. In SH, there was no association between LZC and spectral power in any band at BL. At EBC, there was a significant negative association between myogenic activity and forearm BF LZC, and a positive association between neurogenic activity and BF LZC across scales τ = 2-11. No association between forearm BF LZC and the high frequency activity bands was seen at either altitude in SH.
Similar trends were seen in the association between spectral power in the five bands and LZC in the finger BF signals. However, the only associations between BF LZC and spectral power that reached significance in the finger were in LL at BL, where cardiac activity negatively associated with BF LZC, and myogenic activity negatively associated with BF LZC at all scales (Fig. 6).
Discussion
We have shown that the LZ-complexity of the skin microvascular blood flux signals, measured in participants of the XE2 research expedition 28 , increased with ascent to EBC (5300 m). We observed that SH exhibited a greater level of network complexity, and hence capacity for heterogeneous flow distribution, than LL at both BL and EBC. We also found clear differences between SH and LL in the influence of local flow-motion activity on the information content, and hence complexity, of the BF signal on ascent to altitude. Finally, we have shown that in SH the increased complexity of microvascular network perfusion is associated with an enhanced complexity of the simultaneously measured tissue oxygenation (oxyHb) signal.

We have previously shown using time and spectral domain analysis of BF signals that SH, when exposed to hypobaric hypoxia, demonstrated superior preservation of peripheral microcirculatory perfusion compared to LL, and that in SH differences in local myogenic (vasomotor) and neurogenic control may play a key role in their adaptation to high altitude by sustaining local perfusion and tissue oxygenation. However, a robust and consistent mechanistic description of flow dynamics within the microcirculation cannot be achieved using time domain analysis (in either the resting state or during haemodynamic perturbation) or frequency domain analysis methods alone 33 .
Non-linear complexity-based analysis applied to BF signals derived from the peripheral vasculature has been shown to yield additional, and much deeper, understanding of the loss of system flexibility, and to enhance risk assessment 24,26 . In the current study, LZC index was higher in SH than LL in both forearm and finger BF signals (Figs 2 and 3). It also increased to a greater extent in SH than LL on ascent to altitude. LZC of the BF signal has previously been shown to decline in individuals with or at risk of CVD 24 and in primates with the onset of diabetes 27 . Similarly, Frisbee et al. 32 , using chaotic network attractor analysis to explore the spatial and temporal shift in perfusion distribution at successive arteriolar bifurcations within skeletal muscle, have shown an imbalanced and temporally stable distribution of flow through the microvascular network in rodent models of increasing metabolic and CV disease risk. Our current findings suggest that the increase in variability in the BF signal seen in SH is indicative of a beneficially enhanced microcirculatory adaptive capacity through more effective autoregulation within the microvascular network 22,34,35 .
Distributive alterations and heterogeneity of flow within microvascular networks are critical to an adequate tissue oxygenation 10,11 and oscillatory fluctuations in microvascular network flow have been shown to ensure a more effective tissue oxygenation than would be obtained with a steady blood flow 16 . While we report no change in LZC of the oxyHb signal with ascent to EBC nor differences between SH and LL, we did observe a positive association between the LZC index of forearm BF and LZC index of oxyHb signals in SH, measured simultaneously at the forearm, at both BL and EBC. We additionally found that in SH the enhanced complexity of network perfusion measured at both altitudes was positively associated (r = 0.252 p = 0.019) with the previously reported microvascular oxygen unloading rate, measured in the same individuals 8 . Such a relationship was not seen in LL (p = 0.761). The distribution of O 2 in tissue depends on microvascular network structure and flow and haematocrit distributions, which are all markedly heterogeneous 10 . Disturbed capillary flow patterns have been shown to limit the efficacy of oxygen extraction even in the absence of changes in mean flow 36 . While we are unable to demonstrate a causal relationship between microvascular blood flow, complexity of the perfusion signals and tissue oxygenation, such a relationship would appear consistent with a beneficial adaptation in SH whereby enhanced variability in flow-motion activity gives rise to more effective O 2 unloading.
The increase in LZC index derived from laser Doppler BF signals from the skin with ascent to EBC, and the differences in variability of the signals between SH and LL, appear at first to be contrary to the decrease in flow heterogeneity index (HI) reported by Gilbert-Kawai et al. 7 derived from direct observation of the movement of red blood cells in the buccal mucosal microcirculatory bed using incident dark-field imaging. In this study of 64 SH and 69 LL from XE2, Gilbert-Kawai et al. report that in SH sublingual blood flow increases on ascent to high altitude in a ''uniform and homogenous manner" in vessels < 25 µm diameter. By contrast they report that in LL microvascular flow decreased, but observed that this decrease was "not in a uniform manner, such that it became heterogeneous in nature". HI is calculated using a semi-quantitative technique, as the highest site flow velocity minus the lowest site flow velocity divided by the mean of the flow velocities across all sublingual vessels imaged 37 . HI is thus indicative of variations in red blood cell flow velocities across a series of individual capillaries. By contrast, LZC index is a measure of the algorithmic complexity of the time series data of the laser Doppler blood flux signal and yields a measure of the variability or predictability of the oscillatory signal. As such, these two indices cannot be compared. It should not be neglected that capillary density has been reported to increase under conditions of hypobaric hypoxia 1 , and that while Gilbert-Kawai et al. 7 report that sublingual small vessel density was not different between the SH and LL at BL testing, capillary density was up to 30% greater in SH at EBC. Taken together, these data from the buccal mucosa suggest that SH maintain a significantly greater microcirculatory flow per unit time and flow per unit volume of tissue than LL at high altitude; an anticipated consequence of which might be greater information content of the BF signals and consequently a higher LZC.
From a signal perspective, variability in BF arises from the cumulative activity of all the processes modulating BF and their temporal variation. Fluctuations in microcirculatory flow occur at different frequencies related to local endothelial (0.0095-0.02 Hz) and sympathetic (0.02-0.06 Hz) activity, myogenic activity in the vessel wall (0.06-0.15 Hz), and modulation by respiratory (0.15-0.4 Hz) and heart (0.4-1.6 Hz) rates 14 . Examination of the information content of the BF signals from SH and LL revealed a clear and significant difference in LZC between the two groups on ascent to altitude that becomes more pronounced at certain timescales (or sampling rates). Furthermore, by examining the association of the individual spectral bands associated with flow-motion with the different time-scales in MLZC, our data provide strong evidence that the influence of the modulators of flow-motion activity differs between SH and LL and with ascent to altitude. The Spearman correlations between the power bands of the forearm BF signals and LZC across multiple time scales shown in Fig. 6 showed that in LL there was a significant contribution of cardiac power compared with SH, at both BL and EBC. Consequently, the BF signal had proportionally more periodic content in LL and the complexity of the signal was reduced as the information content fell. Previous studies in altitude-naive LL and high altitude residents have shown that autonomic function, heart rate variability and respiration rate are differentially affected at high altitude, with LL showing sympathetic activation to modulate the direct vasodilatory effects of hypobaric hypoxia 38,39 . Heart rate variability is also known to contribute to complexity of the BF signal 40,41 and cardiac rhythm to be modulated by respiratory oscillation 42 . This coupling of the two HF components offers a possible partial explanation as to why the MLZC increases with scale in both SH and LL (Fig. 5).
The lower frequencies associated with flow-motion (which generally contain most of the power in the signal) also contributed to signal variability. We report a marked uprating of the association between neurogenic power and LZC of BF measured at the forearm at high altitude, which is positive in SH. This appears consistent with the preservation of vasoconstrictor response and enhanced neurogenic activity previously reported in SH at altitude 8 . Such changes are indicative of an adaptive modulation of sympatho-vagal activity through which SH can better regulate flow, allowing them to stay in a hypobaric atmosphere at lower temperatures without excessive autonomic stress 43 . These adaptations may also contribute to the higher skin temperatures measured in SH. We saw no significant uprating of the association between neurogenic power and LZC of BF at any scale measured at the forearm of LL at high altitude.
Myogenic activity (vasomotion) is closely associated with effective oxygen delivery 17,44,45 and has been shown to increase in hypobaric hypoxia 8,9 . Consistent with this we observed an increased association of the myogenic power band with LZC of the blood flux signal over multiple time scales in both SH and LL, under conditions of hypobaric hypoxia at EBC. The association of myogenic power with BF LZC was negative, consistent with its periodic nature. While the effect of flow-motion on the transport of oxygen to tissue 'is highly complicated' 46 , an increase in the lower frequencies of flow-motion and particularly in vasomotion has been shown to give rise to transients in the partial pressure of O 2 (PO 2 ) to substantially increase the volume of oxygenated tissue and to oxygenate tissue domains which under steady-state conditions would remain anoxic 11 . Indeed, mathematical modelling suggests that vasomotion activity can change oxygen delivery to tissues by up to eightfold under certain conditions 47 . Clinically, it has been shown that vasomotion is increased in patients with mild peripheral arterial occlusive disease (PAOD) and that those patients with enhanced vasomotion had significantly higher tissue oxygen levels than those without, despite similar blood flow 48 . Our data are also consistent with the increase in vasomotion seen in reduced perfusion states such as sepsis and cardiopulmonary bypass and in multiple organ dysfunction and mortality [49][50][51] . Thus, the increased strength of the association of the myogenic power band activity with LZC, is consistent with vasomotion being an important modulator of gaseous exchange under conditions of hypobaric hypoxia, when the microcirculation may tend towards a critical state.
We found little association between forearm BF LZC and the power of the endothelial frequency band in either LL or SH. While the strength of the association in SH was greater at EBC than at BL, it only reached significance at the highest sampling frequency of 40 Hz (τ = 1). Thus, while endothelium-mediated flow-motion might be expected to enhance network perfusion and tissue oxygenation, we can as yet draw few conclusions on the role of the endothelium-attributed flow-motion activity within the skin microvasculature at altitude.
This study has several strengths, not least the large and sex balanced group size. While we saw no independent effect of sex or BMI on our outcome measures of complexity in either SH or LL, we did observe a negative effect of age on BF LZC index in LL but not in SH. LL (mean age 46(14)y) were significantly older than the SH (28(6)y) (p = 0.0001, SH vs LL) in this subset of participants from XE2, and it is probable that increasing age constitutes a risk factor for declining heterogeneity of microcirculatory network perfusion 52 . A similar age-related reduction in signal complexity has been reported using nonlinear measures applied to cardiac signals 53 and signals derived from large blood vessel (pulse wave velocity) 15 .
To the best of our knowledge, this study is the first to analyse the effect of hypobaric hypoxia at high altitude on the microcirculation using complexity-based measures applied to skin microvascular LD blood flux signals. We have previously demonstrated that complexity-based measures can differentiate both between haemodynamic states 22,23 and between groups of individuals at increasing risk of developing microvascular dysfunction 24 . The current study extends and strengthens the utility of these approaches to widely varying cohorts.
We analysed signals from two skin sites that allowed us to explore the differential impact of local flow-motion activity on network flow heterogeneity. Skin is a major thermoregulatory organ sensitive to environmental temperature. The temperature of the London laboratory where baseline measurements were performed in lowlanders was lower than that in Kathmandu where Sherpas were studied. It is therefore probable that this contributed to the lower forearm and finger fluxes seen in lowlanders at baseline. While both cohorts were exposed to the same laboratory conditions at EBC, finger BF and skin temperature remained significantly higher in SH than LL. This suggests that the physiological differences in SH and LL seen at EBC are independent of external temperature. However, the variations in skin temperature across the cohorts and altitudes may be expected to differentially influence relative flow-motion activity and signal complexity 23 as will differences in haematocrit that have been reported previously [5][6][7] and which may influence the laser Doppler signal 8 .
The paucity of significant associations between LZC and spectral power across the five frequency bands measured at the finger was unexpected, as LZC of the BF signals measured at both the forearm and the finger were higher in SH than LL and both influenced by ascent to altitude. It was also unexpected as skin BF at the finger is largely determined by arteriovenous anastomoses under vasoconstrictor sympathetic control 29 and we have previously shown that ascent to EBC results in differential changes in vasoconstrictor responses and in local flow-motion activity in the low frequency bands in the resting blood flux signals in SH and LL 8 .
In exploring the associations between LZC and spectral power of the BF signals, we used fixed non-overlapping intervals to define the spectral bands as described previously by Stefanovska and colleagues 14 . It is unlikely that the boundaries of these frequency intervals remain constant across a cohort of individuals, or for a given individual, under changing conditions of physiological stress. State-dependent fluctuations in frequency intervals may thus give rise to different patterns in complexity within, and across, the cohorts studied. It would be interesting to explore further the impact of time-varying spectral band boundaries on signal complexity, and how this may inform a mechanistic interpretation of the heterogeneity of microvascular network perfusion.
Conclusions
These data confirm that synchronicity of rhythms in the modulators of microcirculatory blood flux signals, assessed using non-linear complexity analysis, contributes to the heterogeneity of microvascular perfusion. They also go some way to describe the changes in the activity of local and systemic physiological mechanisms that modulate the potentially beneficial adaptation response seen in SH under conditions of hypobaric hypoxia. Together these data suggest that peripheral tissues play an important physiological role in the cardiovascular adaptation to hypoxia, and that this role is better developed in native altitude dwellers than in lowlanders. | 6,496 | 2019-10-07T00:00:00.000 | [
"Biology"
] |
CHARACTERIZATION OF NEW EXCIMER PUMPED UV LASER DYES 2
New excimer-pumped laser dyes based on p-quaterphenyl are described and the associated performance parameters are presented. A discussion of the variations in the performance parameters found is made in terms of dye chemical structures. Most of the dyes studied are significantly better than the chosen reference dye BBQ. The best dye in the series (dye 3) is stabilized both by ring-alkylation and by ring-bridging. The net effect is the production of a new dye with a lifetime ten times greater than that of the reference dye BBQ.
INTRODUCTION
The oligophenylenes have been recognized as good candidates for new laser dyes in the near ultraviolet region for some time. In the case of 308 nm excimer pumping, many p-quaterphenyl dyes are attractive since their first singlet-singlet absorptions are typically in this wavelength region, thus providing for efficient pumping. However, most of the p-quaterphenyl derivatives are not suitable for pumping by the third harmonic of Nd:YAG (355 nm) since their wavelengths of maximum absorption are bathochromically shifted relative to 355 nm. In this regard it is to be especially noted that the ring-bridged derivatives, such as those studied here, have absorption maxima which are red-shifted relative to the parent dye p-quaterphenyl. Thus, many of these ring-bridged dyes are able to be pumped with 355 nm Nd:YAG radiation, providing for efficient conversion in the blue spectral region for this important pumping source.
Selected p-quaterphenyl derivatives, pumped by excimer laser radiation, have been examined by Rinke, Gusten and Ache 3,4 . Kauffman and co-workers 5,6 have reported results for a series of p-quaterphenyls under flash-lamp pumping conditions. In the first paper in this series 7 , we presented our work on several new derivatives of p-terphenyl. A preliminary account of our work on excimer laser pumped p-quaterphenyls has been given 8 . In this paper we present a more detailed account of our results for a series of new excimer pumped p-quaterphenyl dyes.
EXPERIMENTAL DETAILS
The experimental apparatus and methods used in this study have been described in detail in the previous paper 7 . Although these dyes have been examined over a period of months, the dye lifetime and conversion efficiency data come from measurements done under identical operating conditions. For the sake of internal standardization on a day-to-day basis, the dye BBQ 9 , dye 14 in Table 1, in p-dioxane solvent was used as a benchmark. Before each study the dye laser was tuned up with BBQ to give the performance values given in Table 1. In this manner, results from day to day, especially for lifetimes, could be related quantitatively. In this study 100 ml volumes were used in the small Lambda Physik circulators with no cover gas circulation. The optimum concentrations listed in Table 1 are for the oscillator-preamplifier stage of the dye laser.
The dyes described in this study have been synthesized by methods previously described 5-8 . BBQ was used as purchased from Exciton, Inc. The solvent p-dioxane was purchased from Fisher Scientific Co. as an analytical reagent and used only when freshly opened.
RESULTS AND DISCUSSION
The dyes studied are identified and numbered in Table 1, where pertinent performance characteristics are also summarized. For the sake of discussion, we have separated the dyes into two classes: singly and doubly ring-bridged p-quaterphenyls. Using this difference in ring-bridging as a means of classification has the practical consequence of distinguishing the tuning curves of the dyes. Figures 1 and 2 demonstrate this point for several of the dyes studied. Within each bridging class, the dyes are listed in the order of increasing laser wavelength maximum. The performance characteristics of the internal standard dye BBQ and two related dyes are given last in the table.

Figure 1 The tuning curves (conversion efficiency versus wavelength) of selected singly ring-bridged quaterphenyls and, for comparison, p-quaterphenyl, in p-dioxane solvent. The dye concentrations are all at the optimum values listed in Table 1. The curves are: curve a, dye 13; curve b, dye 2; curve c, dye 3; curve d, dye 6.
General Properties of Dyes
Overall, the conversion efficiencies listed in Table 1 vary only from 12 to 16%, making any one dye acceptable from that standpoint. Compared with BBQ (dye 14), most of the lifetimes are longer and, thus, these new excimer pumped dyes represent a relative improvement in stability. Indeed, dye 3 represents more than an order of magnitude enhancement in stability relative to BBQ.

Figure 2 The tuning curves (conversion efficiency versus wavelength) of selected doubly ring-bridged quaterphenyls in p-dioxane solvent. The dye concentrations are all at the values listed in Table 1. The curves are: curve a, dye 9; curve b, dye 8; curve c, dye 10; curve d, dye 11.

Singly-bridged Quaterphenyls

The singly-bridged dyes all have similarly shaped, doubly featured, tuning curves as indicated in Figure 1. In turn, these tuning curves are closely related to that of p-quaterphenyl itself, which is also shown for the sake of comparison. In each tuning curve the shorter wavelength feature corresponds to the first vibronic component of the fluorescence; the second to the second component.
Inspection of the characteristics of dyes 1 and 2 in Table 1 in comparison with p-quaterphenyl, dye 12, shows that single ring-bridging has the effect of red-shifting the tuning curve by roughly 8 nm. The conversion efficiencies of dyes 1 and 2 are also slightly reduced compared to dye 12. These results are consistent with those of Rinke, Gusten and Ache 4 , who studied similar, but not identical, ring-bridged p-quaterphenyls. Substitution of methoxy groups on the ethyl side chains (dye 1 versus dye 2) gives a moderate improvement in lifetime, and only slightly affects the remaining dye parameters, as expected.
Using dyes 1 and 2 as comparison structures, it can be seen that the following conclusions hold for the remaining singly-bridged dyes: 1. The substitution of alkoxy groups at the terminal ring ends of the parent structure (dyes 4, 5, 6) greatly reduces the dye lifetime. In the case of the methoxy substituted dye (dye 4), the lifetime is reduced to the level of BBQ (dye 14). In this context it appears that rigidization of the aryloxy groups, as found in dyes 5 and 6, serves to partially arrest photochemical degradation. This effect is analogous to the observed increase in lifetime found in 7-amino-coumarins when the amino group is rigidized 1 , reducing non-radiative loss of excited state energy. The substitution of methoxy groups (dye 4) at the terminal ring positions also results in a redshift in the tuning curve of 11 nm. Rigidization (see dyes 5 and 6) further redshifts the tuning curve about 5 nm.
2. Substitution of t-butyl at the terminal ring positions redshifts the tuning curve (compare dyes 2 and 3) by 6 nm. This substitution also dramatically improves the dye lifetime by a factor of 2.5. We found a very similar result for the 4,4''-t-alkyl-substituted p-terphenyls 7 . Thus, it appears that ring substitution by tertiary alkyl groups produces the most long-lived oligophenylene laser dyes.
3. While each tuning curve represents two vibronic components of the fluorescence, the usefulness of a specific dye varies due to the intensity variation in the tuning curve between the two vibronic components. For example, the wavelength region between the two components in p-quaterphenyl (see Figure 1), around 357 nm, has a conversion efficiency of about 10% of the maximum value. This large "hole" in the tuning curve attenuates the usefulness of the dye in this particular region. With the exception of dye 3, none of the singly-bridged derivatives is as extreme in this respect as p-quaterphenyl. Nonetheless, the "hole" in the tuning curve remains an inconvenience as one tunes through the gain profile of the dye. Almost any substitution on the basic p-quaterphenyl we have studied leads to an enhancement of the usefulness of the dye (see curves in Figure 1), since the minimum gain value is generally increased with chemical substitution. In a separate paper 11 , we have also discussed how one can construct laser dye mixtures to overcome this inconvenience.
Doubly-bridged Quaterphenyls
Double ring-bridging (see dyes 7 and 9), in general, changes the shape of the tuning curves to one which has one strong broad feature, as indicated in Table 1 and Figure 2. It does not matter which chemical entity, methylene or isoelectronic oxygen (see dyes 7 and 9 in Table 1), is used in the double bridging, for a single broad tuning curve results. In this regard our results are consistent with the results of Rinke, Gusten and Ache 3 . With further chemical substitution on the terminal aromatic rings, it is found that aryl-oxy derivatization (dyes 8, 10, and 11) results in red-shifting the tuning curve the most. Adding a complicated hydrocarbon function (dye 11 versus 10) in place of methyl on the oxygen only slightly red-shifts the tuning curve. However, such a substitution has the desirable consequence of nearly doubling the lifetime.
Comparison of the parent doubly-bridged dyes (dyes 7 and 9) yields several interesting facts. First, the oxy-bridged dye is blue-shifted compared to the substituted methylene dye. Second, the stability (half-life) of the substituted methylene derivative, dye 9, is greater by a factor of 2 than the corresponding oxy-bridged compound, dye 7. This relative stability ratio does not at first appear to be consistent with the results of Rinke, Gusten and Ache 3 , who found the opposite ratio for the unsubstituted methylene-bridged dye. As has previously been discussed 5 , substituting all hydrogens with alkyl groups makes an important contribution to dye stability. A similar discussion may be applied to these p-quaterphenyl dyes. Thus, it is probable that the unsubstituted methylene bridge is the chemical site of weakness not observed when the bridge is protected by alkylation. Third, the lifetimes of the doubly-bridged compounds are all as good as or better than that of BBQ. As can be seen by comparing dyes 13 and 14 (BBQ), the addition of a rigidized ether group enhances the stability of the alkoxy derivative by roughly a factor of 2, as it did in the singly ring-bridged dyes 5 and 6 versus 4.
CONCLUSIONS
In conclusion, we have studied a series of excimer pumped p-quaterphenyl dyes which are significantly better than the chosen reference dye BBQ. As we have noted previously for the p-terphenyls 7 , the ring-alkylated dyes stand out as the more stable derivatives. In turn, ring-bridging serves to enhance the stability of the p-quaterphenyls further. The addition of oxygen to the aromatic rings in the p-quaterphenyls, as in the case of the p-terphenyls 7 , reduces the performance of the dyes significantly. However, the addition of oxygen to exocyclic alkyl groups can be beneficial to dye stability. The best dye in this series, dye 3, offers more than an order of magnitude improvement in stability at the expense of only a slight decrease in conversion efficiency compared to the reference dye BBQ.
It would appear, comparing dye 9 with 1, and dye 3 with 2, that an outstanding new material could be made by combining double ring-bridging with t-alkylation, as in the new dye below. 12
Figure
Figure 1 The tuning curves (conversion efficiency versus wavelength) of selected singly ring-bridged | 2,529.8 | 1991-01-01T00:00:00.000 | [
"Chemistry"
] |
A high order approach for nonlinear Volterra-Hammerstein integral equations
Abstract: Here a scheme for solving the nonlinear integral equation of Volterra-Hammerstein type is given. We combine the related theories of homotopy perturbation method (HPM) with the simplified reproducing kernel method (SRKM). The nonlinear system can be transformed into linear equations by utilizing HPM. Based on the SRKM, we can solve these linear equations. Furthermore, we discuss convergence and error analysis of the HPM-SRKM. Finally, the feasibility of this method is verified by numerical examples.
Introduction
As a classical model of nonlinear integral equation, the nonlinear Volterra-Hammerstein type equations [1][2][3] can be used in biological models, fluid mechanics, communication theory, etc. In this article, we primarily concentrate on numerical solutions of Volterra-Hammerstein equations. Generally, these systems can be characterized by an equation in which the nonlinear function F is known, λ is a constant, and G : C n [a, b] → C[a, b] is a bounded linear operator. The constraints H are chosen so that G(u(x)) = f has a unique solution. Several numerical methods for the nonlinear Volterra-Hammerstein type equations have been proffered, for instance, the continuous interpolation method [1], iteration method [2], Galerkin method [4], and other methods [5][6][7]. Mirzaee [8] approximated the nonlinear Hammerstein integral equation by utilizing the least squares method based on the Legendre-Bernstein basis. In [9], Ordokhani proposed a collocation method based on Walsh functions that converts the Hammerstein equations into algebraic equations. Normally, in applying these methods, we have to calculate a substantial number of integrals or employ iterative schemes, which is computationally complicated. To address these problems, in [4], Mandal proposed Galerkin methods and obtained superconvergence results in the uniform norm. Recently, reproducing kernel space theory has been commonly applied to solve nonlinear boundary value problems [10,11], the heat conduction equation [12], interfacial issues [13,14], the Allen-Cahn equation [15], the fractional-order Boussinesq equation [16], and other functional equation models [17][18][19][20][21]. Several methods [22][23][24] have also been proposed for different kinds of integral equations. It is worth mentioning that many scholars have improved the RKMs to study various kinds of equations, as in [25][26][27][28][29][30]. However, the traditional reproducing kernel method [31] requires orthogonalization in the solution process, and the calculation process is complex and time-consuming. In this work, we apply HPM to deal with the integral term conveniently. We utilize the SRKM to effectively avoid the Schmidt orthogonalization process and economize on calculation time.
The outline of the work is as follows: We introduce the reproducing kernel theory and the homotopy perturbation theory in Section 2. In Section 3, we present the HPM-SRKM. Then in Section 4, some numerical experiments are presented. Finally, a conclusion is given in the final section.
Reproducing kernel Hilbert space
The inner product and the norm of two reproducing kernel spaces related to this model are introduced.
Definition 1. ( [32]) Let H be a Hilbert space whose elements are complex-valued functions on X. If for every s ∈ X there is a unique function K s (t) ∈ H that satisfies ⟨u(t), K s (t)⟩ = u(s) for all u ∈ H, then H is defined as a reproducing kernel space, and K(s, t) = K s (t) is defined as a reproducing kernel function.
Introduction to Homotopy perturbation method (HPM)
The HPM ( [33]) is implemented by embedding a small perturbation parameter p (p ∈ [0, 1]), from which a homotopy path is constructed: (2.1) When p = 0, Eq (2.1) is equivalent to the subsequent initial value problem, Eq (2.2); when p = 1, Eq (2.1) is the original problem. Eqs (2.1) and (2.2) are also subject to the condition H. As p changes from 0 to 1, the solution u(x) of Eq (1.1) follows the homotopy path from the initial value problem Eq (2.2) to the original problem. By perturbation theory [34], the solution satisfying the homotopy path can be expanded as a Maclaurin series in p, Eq (2.3). Therefore, when p → 1, the approximate solution of the homotopy equation is the sum of the series terms. Substituting Eq (2.3) back into Eq (2.1) and taking the kth derivatives of the function F, Eq (2.1) is rewritten in a form in which each B k depends on u 0 (t), u 1 (t), · · · , u k (t). By comparing the coefficients of p k , the solution of Eq (2.1) is equivalent to the following system, Eq (2.6). Through the above calculations, we can get the approximate solution of Eq (1.1) by adding up the solutions of Eq (2.6).
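To make the coefficient-matching in p concrete, here is a small symbolic sketch for a toy Hammerstein problem u(x) = 1 + ∫_0^x u(t)^2 dt, whose exact solution is 1/(1 − x). The quadratic nonlinearity, the forcing term, and the sympy implementation are illustrative assumptions; this is not the SRKM discretisation used in the paper, only a demonstration of how the p^k coefficients generate the correction terms.

```python
import sympy as sp

x, t, p = sp.symbols('x t p')

def F(u):                      # toy Hammerstein nonlinearity F(u) = u**2 (assumed)
    return u**2

f = sp.Integer(1)              # forcing term; exact solution of u = 1 + int_0^x u^2 dt is 1/(1-x)
N = 5                          # number of HPM correction terms

v = [f]                        # v_0 = f
for k in range(N):
    series = sum(p**i * v[i] for i in range(len(v)))
    Bk = sp.expand(F(series)).coeff(p, k)          # coefficient of p**k in F(sum p^i v_i)
    v.append(sp.simplify(sp.integrate(Bk.subs(x, t), (t, 0, x))))

approx = sp.expand(sum(v))
print(approx)                  # 1 + x + x**2 + ... : partial sum of the exact solution 1/(1-x)
```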
HPM-SRKM for solving equation
3.1. Presenting the HPM-SRKM to solve Eq (2.6)

Since the system Eq (2.6) consists of two linear equations, we can equate them to the following problem . . ., then v(x) ≡ 0. This proof is completed.
Convergence and error estimation
In this subsection, we will argue the convergence of the HPM-SRKM.
From the continuity of the reproducing kernel function R x , we obtain where M is a nonnegative constant. This proof is completed.
Numerical examples
The theoretical part of the solution is proved in the previous sections. Some numerical examples are now given to illustrate its effectiveness. We run our programs in MATHEMATICA 7.0. In the figures, the red lines represent the approximate solutions and the blue dots represent the exact solutions. The absolute errors e i , the exact solutions and the approximate solutions are listed in the tables. We also use the following formula to calculate the convergence rate r: r = log 2 (e n / e 2n ).
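A one-line helper for the convergence-rate formula above; the error values in the demonstration are made up purely to show the calling convention.

```python
import math

def convergence_rate(e_n, e_2n):
    """Observed order r = log2(e_n / e_2n) from errors at n and 2n nodes."""
    return math.log2(e_n / e_2n)

print(convergence_rate(1.6e-4, 4.1e-5))   # ~1.96, i.e. roughly second-order convergence
```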
Example 1. For the following nonlinear HIE, where f(x) = (1/6)(−3 + 8cos x + cos 2x), the exact solution is u(x) = cos x. The numerical results are illustrated in Figure 1. The comparison of the numerical results and the absolute errors e i is listed in Table 1. We obtain an approximate solution with higher precision than the traditional reproducing kernel method [20] for n = 25.
Example 2. Consider the nonlinear HIE, where f(x) = −(15/56)x^8 + (13/14)x^7 − (11/10)x^6 + (9/20)x^5 + x^2 − x. The exact solution is u(x) = x^2 − x. The numerical results are illustrated in Figure 2. Table 2 lists the numerical results and the absolute errors e i . From the results of Table 2, we can see that our method approximates the exact solution more closely than the Legendre spectral Galerkin and multi-Galerkin methods [4] for n = 10.
Conclusions
In this article, the HPM-SRKM was successfully applied to solve the nonlinear HIE, yielding a uniformly accurate approximate solution. Moreover, compared with the traditional reproducing kernel method [31] and the Legendre spectral Galerkin and multi-Galerkin methods [4], the convergence speed and solution accuracy were better.
"Mathematics"
] |
Deep Sequencing Identification of Novel Glucocorticoid-Responsive miRNAs in Apoptotic Primary Lymphocytes
Apoptosis of lymphocytes governs the response of the immune system to environmental stress and toxic insult. Signaling through the ubiquitously expressed glucocorticoid receptor, stress-induced glucocorticoid hormones induce apoptosis via mechanisms requiring altered gene expression. Several reports have detailed the changes in gene expression mediating glucocorticoid-induced apoptosis of lymphocytes. However, few studies have examined the role of non-coding miRNAs in this essential physiological process. Previously, using hybridization-based gene expression analysis and deep sequencing of small RNAs, we described the prevalent post-transcriptional repression of annotated miRNAs during glucocorticoid-induced apoptosis of lymphocytes. Here, we describe the development of a customized bioinformatics pipeline that facilitates the deep sequencing-mediated discovery of novel glucocorticoid-responsive miRNAs in apoptotic primary lymphocytes. This analysis identifies the potential presence of over 200 novel glucocorticoid-responsive miRNAs. We have validated the expression of two novel glucocorticoid-responsive miRNAs using small RNA-specific qPCR. Furthermore, through the use of Ingenuity Pathways Analysis (IPA) we determined that the putative targets of these novel validated miRNAs are predicted to regulate cell death processes. These findings identify two and predict the presence of additional novel glucocorticoid-responsive miRNAs in the rat transcriptome, suggesting a potential role for both annotated and novel miRNAs in glucocorticoid-induced apoptosis of lymphocytes.
Introduction
Apoptosis of lymphocytes is critical for the homeostatic balance of the immune system. The escape of lymphocytes from apoptotic constraint results in dire consequences including the development of hematomalignancy and autoimmune disorders. Glucocorticoid hormones are potent inducers of lymphocyte apoptosis [1]. Endogenous glucocorticoids regulate immune development through the elimination of unwanted immature thymocytes during the T-cell selection process [2]. Furthermore, given their aggressive proapoptotic properties, synthetic glucocorticoids are a mainstay of hematomalignant chemotherapeutic regimens.
Glucocorticoids are a class of essential stress-induced steroid hormones regulating cardiovascular, metabolic, homeostatic and immunologic functions. Endogenous glucocorticoids are synthesized and secreted under the control of the hypothalamic-pituitary-adrenal axis in response to stressors, including environmental stress, nociception, and emotion [3]. The pleiotropic effects of glucocorticoids are mediated by the ubiquitously expressed glucocorticoid receptor (GR), which serves as a sensor of environmental stress, mediating the response of the immune system to environmental stress and toxic insult. Glucocorticoid-induced apoptosis of lymphocytes is a multifaceted process, requiring signaling through the GR and the altered expression of apoptotic effector genes [4][5][6]. Several laboratories have performed genome-wide microarray analysis to delineate the changes in gene expression that modulate glucocorticoidinduced apoptosis. Most notably, the expression of the pro-apoptotic BH3-only Bcl-2 family member Bim is induced by glucocorticoid-treatment in murine lymphoma cell lines, human leukemic cell lines, mouse primary thymocytes, as well as human primary chronic lymphoblastic leukemia and acute lymphoblastic leukemia samples [7][8][9]. While not the only mechanism involved in this complex process, the upregulation of Bim is likely an important mediator of glucocorticoid-induced apoptosis, as both in-vivo and in-vitro depletion of Bim expression in lymphocytes decreases sensitivity to glucocorticoid-induced apoptosis [10][11][12]. However, until recently, gene expression analysis of lymphocytes undergoing glucocorticoid-induced apoptosis has largely ignored the examination of non-coding RNAs, or miRNAs.
MiRNAs are non-coding, ~21mer, single-stranded posttranscriptional regulators of gene expression [13,14]. First discovered in C. elegans fifteen years ago, highly conserved miRNAs have now been identified and cloned in plants, D. melanogaster, rodents, humans and numerous other species [15][16][17][18]. The interaction of a miRNA with mRNA (via imperfect "seed sequence" binding) hinders target mRNA translation while increasing evidence demonstrates that miRNAs can also promote the deadenylation and subsequent degradation of their mRNA targets [19].
To date, miRNAs have been assigned regulatory roles in fundamental biological processes, including differentiation, proliferation, embryonic development, and cell death [20]. Accordingly, the dysregulation of miRNA expression and function is a common observation in numerous and diverse human diseases [21]. Currently, there are over 2000 annotated mature human miRNAs, each with the capacity to regulate hundreds of target mRNAs (or approximately 30% of coding genes), establishing miRNAs as a substantial class of gene regulatory elements [22]. Importantly, miRNAs also regulate lymphocyte function and survival through both the induction and antagonism of apoptosis [23].
Previously, using both microarray and deep sequencing analysis, we described the prevalent repression of annotated miRNA expression during glucocorticoid-induced apoptosis of primary lymphocytes [1]. Further functional studies demonstrated for the first time a regulatory role for specific miRNAs and miRNA processors in the execution of glucocorticoid-induced apoptosis. Interestingly, this analysis also indicated the potential presence of numerous novel glucocorticoid-responsive miRNAs.
Here, we have developed a customized bioinformatics pipeline that facilitates the deep sequencing-mediated discovery of novel miRNAs. Using this approach, we describe the identification of hundreds of potentially novel glucocorticoid-responsive miRNAs in the transcriptome of apoptotic primary lymphocytes. Furthermore, we validated the glucocorticoid-dependent repression of two candidate novel miRNAs, and Ingenuity Pathways Analysis (Ingenuity® Systems, www.ingenuity.com) predicted that these novel glucocorticoid-responsive miRNAs may contribute to glucocorticoid-induced apoptosis. In summary, these computational findings describe the discovery of novel glucocorticoid-responsive miRNAs and further suggest a potential role for both annotated and novel miRNAs in the glucocorticoid-induced apoptosis program.
Discovery of novel miRNAs from deep sequencing data: Generation of test and training sets
To identify glucocorticoid-responsive novel miRNAs from deep sequencing data we employed a customized bioinformatics pipeline. This pipeline is based on miRanalyzer, a previously published methodology (also available via webserver) [24]; however, we implemented several significant modifications to the original miRanalyzer approach (see methods). The basis of this computational analysis was to first align miR-analyzer-generated reads to the genome and use 'machine learning' to learn from the signal profile of known miRNAs and known non-miRNAs (training). Once the models are trained and able to accurately classify known miRNAs from non-miRNAs, we then use the models to predict novel miRNAs from signals at unannotated regions of the genome (testing) ( Figure 1A).
This analysis employed reads previously generated by miRNA-seq analysis of annotated miRNAs during glucocorticoid-induced apoptosis [1]. Reads were generated by next generation sequencing on the Illumina platform using total RNA extracted from dexamethasone (Dex) treated and untreated (Control) primary thymocytes (see [1] for detailed description of apoptosis analysis). We obtained approximately 12-13 million reads for each sample and performed quality control analysis using FastQC (http:// www.bioinformatics.bbsrc.ac.uk/projects/fastqc).
We then trimmed all reads at the 3' end to remove adapter sequences. Trimmed reads were subjected to a step-wise alignment protocol adopted from miRanalyzer [24] which first attempts to align reads to known miRNA sequences, and the remaining unaligned reads are then sequentially aligned to mature, mature-star*, unobserved mature-star*, hairpin, Refseq, and Rfam transcripts, sequentially ( Figure S1). As a final alignment step, the remaining reads are aligned to the whole rat genome (Rn4). As expected, a large number of the total ~12-13 million reads obtained from deep sequencing of each sample aligned to known miRNAs when compared to the aforementioned RNA subtypes ( Figure 1B). Reads that aligned to known miRNAs were used to generate the "Positive" training set while reads that aligned to other RNA subtypes were used to generate the "Negative" training set. Reads that did not align to any annotated RNA species but did align to the genome were used as "Test" data ( Figure 1A).
To generate sequences belonging to the "Training" and "Test" datasets, reads with overlapping genomic coordinates were grouped together to form 'clusters' (totaling a length of 20-27 nucleotides) and several 'precursor' sequences were generated from each cluster. Precursors encompassed a genomic window centered at the cluster and extending on both the 5' and 3' ends of the cluster. We obtained 284 Control and 236 Dex clusters in the true "Positive" and 5,499 Control and 3,179 Dex clusters in the true "Negative" training data (Table S1A). Generated precursor sequences were then subjected to secondary structure selection criteria ( Figure 1A).
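A minimal sketch of the clustering step just described: reads mapped to the same chromosome and strand are merged whenever their genomic coordinates overlap, and padded precursor windows are then taken around each cluster. The 20-27 nt cluster-length filter follows the text; the flank sizes, the data layout, and the example coordinates are illustrative assumptions.

```python
def build_clusters(reads, min_len=20, max_len=27):
    """reads: list of (chrom, strand, start, end) alignments, 0-based half-open."""
    clusters = []
    for chrom, strand, start, end in sorted(reads):
        last = clusters[-1] if clusters else None
        if last and last[0] == chrom and last[1] == strand and start <= last[3]:
            clusters[-1] = (chrom, strand, last[2], max(last[3], end))   # overlap: extend
        else:
            clusters.append((chrom, strand, start, end))
    return [c for c in clusters if min_len <= c[3] - c[2] <= max_len]

def precursor_windows(cluster, flank_5p=70, flank_3p=20):
    """Candidate precursor coordinates centred on the cluster (flank sizes assumed)."""
    chrom, strand, start, end = cluster
    return [(chrom, strand, max(0, start - flank_5p), end + flank_3p),
            (chrom, strand, max(0, start - flank_3p), end + flank_5p)]

reads = [("chr1", "+", 1000, 1022), ("chr1", "+", 1003, 1024), ("chr2", "-", 500, 521)]
for c in build_clusters(reads):
    print(c, precursor_windows(c))
```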
The secondary structure of each precursor sequence was generated using Vienna RNA [25] and the precursor sequence was discarded from further consideration if the secondary structure did not meet stringent criteria. Pre-miRNAs are characterized by a canonical stem loop structure, hence the selection criteria was designed to discard all precursors whose secondary structure did not exhibit the desired number of base pairing and a stable hairpin structure. The filtered precursor sequences that met these criteria were used to generate molecular features that describe the unique sequence and/or secondary structure attributes of the precursor candidate in question. We chose a set of ten molecular features that best characterize attributes distinguishing a miRNA from other RNA subtypes ( Figure 1A). These include features that characterize the degree of conservation of the miRNA sequence, the signal intensity at each putative miRNA location, and characteristics of the predicted secondary structure including the minimum free energy ( Figure S2).
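A sketch of the secondary-structure filter, assuming the ViennaRNA Python bindings (where RNA.fold is taken to return a dot-bracket structure together with its minimum free energy). The free-energy and base-pairing thresholds and the single-hairpin heuristic are illustrative placeholders, not the exact cut-offs used in the pipeline.

```python
import RNA   # ViennaRNA Python bindings (assumed available)

def passes_hairpin_filter(precursor_seq, max_mfe=-15.0, min_pairs=18):
    """Keep a candidate precursor only if it folds into a stable single stem-loop."""
    structure, mfe = RNA.fold(precursor_seq)   # dot-bracket string, MFE in kcal/mol
    n_pairs = structure.count('(')             # paired positions on the 5' arm
    stripped = structure.replace('.', '')
    single_loop = stripped.count('()') <= 1    # one terminal loop => single stem-loop
    return mfe <= max_mfe and n_pairs >= min_pairs and single_loop

# synthetic stem-loop for a quick smoke test (not a real precursor sequence)
seq = "GCGCGCGCGCGCGCGCGCGC" + "GAAA" + "GCGCGCGCGCGCGCGCGCGC"
print(passes_hairpin_filter(seq))
```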
Training of random forest models
We generated molecular features for all filtered precursor sequences within the "Positive" and "Negative" training sets and constructed random forest models for both the Control and Dex datasets ( Figure 1A). While our modeling technique used information from all molecular features for classification purposes, our analysis indicated that certain molecular features had more influence on the classification. The sequence conservation score was the most informative feature whereas the number of bases in the overhang of the secondary structure was among the least informative ( Figure S2). The training classification error for random forest models ranged from 93.3% to 99.8%, denoting a high degree of accuracy (Table S2A and S2B).

Figure 1. (A) This bioinformatics analysis workflow describes the novel miRNA discovery process adapted from miRanalyzer. The analysis pipeline uses next generation sequencing (miRNA-seq) data from untreated (control) or dexamethasone-treated rat primary thymocytes as input. This pipeline divides reads into three files: reads that align to an annotated mature miRNA ("Positive" training set), reads that align to other RNA subtypes ("Negative" training set), or reads that align at unannotated regions ("Test" set). Reads from each of these files are then aligned and alignment results are methodically processed to generate clusters, precursors and predicted secondary structures. Random forest machine learning is then employed to train the models for the prediction of novel miRNAs in the "Test" dataset. The output provides the genomic coordinates of predicted putative novel miRNAs. (B) Table describes total number of reads generated by miRNA-seq of control and dexamethasone treated primary thymocytes analyzed using the novel bioinformatics workflow described above. As expected, the majority of these reads align to known miRNAs when compared to other RNA subtypes. (C) Table summarizes the total number of known and predicted novel miRNAs identified by the bioinformatics workflow as induced or repressed in control and dexamethasone treated rat primary thymocytes. Both known and predicted novel miRNAs exhibit a trend of repressed expression during glucocorticoid-induced apoptosis. doi: 10.1371/journal.pone.0078316.g001
Prediction of novel miRNAs using trained models
The "Test" dataset was processed in a manner identical to the "Training" dataset in terms of preparation of clusters and precursor sequences; however, we eliminated clusters from further analysis if miRNA expression signal was below 11 raw read count to focus on only those miRNAs displaying moderate to high expression levels. We obtained 15,332 Control and 9,876 Dex clusters resulting in 52,354 Control and 33,646 Dex precursors in the "Test" dataset (Table S1B). These precursors were subjected to classification using our modeling technique.
The precursor sequences that were predicted as novel miRNAs were further filtered based on two criteria: (i) the minimum free energy of their predicted secondary structures, and (ii) signal intensity. This yielded 515 and 346 novel miRNAs predicted for Control and Dex samples, respectively, with 220 common between the two sample types. Previously, we reported that the majority of known miRNAs are repressed during glucocorticoid-induced apoptosis of lymphocytes [1]. Interestingly, this trend extends to our analysis of miRNA-seq-derived novel miRNAs. Here, approximately 80% of predicted novel miRNAs were repressed in response to dexamethasone treatment (Figure 1C).
Validation of novel glucocorticoid-responsive miRNAs
To verify the glucocorticoid-induced repression of miRNAs, a combination of both annotated and novel miRNA candidates were selected for qPCR validation. Two novel miRNA candidates, candidate 44 and candidate 166, were chosen for validation on the basis of their predicted secondary structure. Both candidates demonstrate a canonical stem-loop structure and a putative mature miRNA sequence (Figure 2A). Furthermore, the expression of each novel miRNA candidate (as visualized in the UCSC Genome Browser [26]) is repressed in response to dexamethasone treatment (Figure 2B). This observation parallels the trend of prevalent repression of annotated miRNAs during glucocorticoid-induced apoptosis of lymphocytes, suggesting that these novel candidates are biologically similar to annotated miRNAs. Interestingly, candidate 166 also exhibits detectable signal at the proximal mature miRNA rno-miR-6324, a recently annotated mature miRNA arising from the same precursor as candidate 166 [27]. While the basal expression of rno-miR-6324 is lower than that of candidate 166, it is also repressed in response to dexamethasone treatment (Figure 2B).
A total of five candidates, three annotated miRNAs (miR-1949, miR-3559-5p, and miR-362*) and the two predicted novel miRNAs (candidates 44 and 166), were subjected to small-RNA qPCR analysis. Each validation candidate exhibited sufficient basal signal for qPCR analysis and a degree of glucocorticoid-responsiveness as determined by the percent control value generated from computationally derived expression signals (Figure 2C and 2D). Custom TaqMan Small RNA assays were designed against the mature 5'-3' sequence of each candidate miRNA and used for the targeted quantitation of novel glucocorticoid-responsive miRNAs. These assays employ a sequence-specific stem-loop 3' reverse transcription primer, thereby assuring the definitive analysis of small RNAs [28]. This analysis confirmed the significant repression of both the annotated positive controls as well as the novel candidate miRNAs during glucocorticoid-induced apoptosis of primary lymphocytes (Figure 2E). Interestingly, the percent of control values generated by qPCR analysis closely mirror those derived from the miRNA-seq data (Figure 2D). These findings confirm the presence of two and predict the existence of numerous additional novel glucocorticoid-responsive miRNAs in the rat transcriptome (Figure 1C). To explore the potential functional roles of these novel glucocorticoid-responsive miRNAs, we performed further computational analysis to identify the predicted gene targets for each of the two qPCR-validated novel miRNAs.
Pathways analysis predicts novel miRNA targets may contribute to glucocorticoid-induced apoptosis
Using the mature sequence of novel miRNA candidates 44 and 166, gene target predictions were made against the 3' untranslated regions of RefSeq transcripts via the miRanda miRNA target prediction algorithm [29]. Numerous gene targets were predicted for both candidate novel miRNAs ( Figure 3A). To assess the potential role of these predicted targets in the glucocorticoid-induced apoptosis program, whole genome gene expression microarray was performed on both untreated and dexamethasone treated primary thymocytes (3 biological replicates each). Ingenuity Pathways Analysis (IPA) of genes deemed differentially expressed (p-value < 0.01 and absolute fold change > 1.2) suggests that they govern molecular and cellular functions involving cell proliferation, cell division, and cell death ( Figure 3B). Interestingly, IPA of the predicted novel miRNA targets suggests that these miRNAs may contribute to many of the same molecular and cellular functions identified by the whole genome microarray analysis. Specifically, cell death and cell survival is a top IPA-generated molecular and cellular function for the miRanda predicted targets of both candidates 44 and 166 ( Figure 3B).
Further Venn diagram analysis identified specific mRNA targets of candidates 44 and 166 differentially expressed during glucocorticoid-induced apoptosis ( Figure 3A). IPA of this combined gene list identified cell death and survival as a top predicted molecular and cellular function of these differentially expressed potential targets, as well as other functions critical to the induction and execution of glucocorticoid-induced apoptosis, including changes in cell morphology, cell cycle and cell signaling ( Figure 3C). These computational findings suggest that these novel glucocorticoid-responsive miRNAs may contribute to glucocorticoid-induced apoptosis.
Discussion
Previously, using both microarray and deep sequencing analysis, we described the prevalent repression of annotated miRNAs during glucocorticoid-induced apoptosis of primary rat thymocytes [1]. Additional studies have demonstrated the glucocorticoid-mediated regulation of specific miRNAs in lymphoid cells, and further delineated a functional role for these miRNAs in the execution of glucocorticoid-induced apoptosis [30][31][32][33]. For example, studies by both Harada et al. and Molitoris et al. report the glucocorticoid-mediated repression of the miR-17 family, resulting in increased Bim expression, and, consequently, increased sensitivity to glucocorticoid-induced apoptosis [31,32]. Alternatively, several studies have reported that specific miRNAs regulate glucocorticoid sensitivity and contribute to glucocorticoid-resistance in lymphoid malignancies [34][35][36][37].
In our present study, we propose the existence of novel, unannotated, glucocorticoid-responsive miRNAs with expression profiles similar to those we previously described for annotated miRNAs (dexamethasone-induced repression). Given that deep sequencing technology provides a powerful, unbiased platform to measure the expression of miRNAs, we sought to further explore and catalogue the presence of novel glucocorticoid-responsive miRNAs in the rat transcriptome. To this end, we developed a bioinformatics pipeline combining elements of miRanalyzer [24], a peer-reviewed publicly available miRNA discovery approach, and a customized machine learning technique to facilitate the identification of novel miRNAs from deep sequencing data.
Figure 2. (A) Predicted secondary structures of novel miRNA candidates 44 and 166 generated by ViennaRNA. The predicted 'mature' sequence is highlighted in red; the remaining hairpin contains the putative stem loop and mature-star* sequence, and the minimum free energy (MFE) of each structure is indicated. The VARNA visualization applet was used to draw the RNA secondary structure [66]. (B) The expression of the candidate novel miRNAs, candidates 44 and 166, visualized in the UCSC genome browser (Dex treated is top bar, Control is bottom bar). Both of the predicted novel miRNAs are repressed during glucocorticoid-induced apoptosis of primary lymphocytes. Visualization of novel miRNA candidate 166 also detects a glucocorticoid-responsive signal at the proximal newly annotated mature miRNA rno-miR-6324, which is antisense to candidate 166 (indicated in the red box). (C) Percent control values of the five miRNAs (3 known and 2 predicted novel candidates) selected for qPCR validation. Percent control was calculated as (Dex/Control) using computationally derived signal values for control and dexamethasone-treated rat primary thymocytes. Signal values were generated using stringent sequence alignment criteria of miRNA-seq data. (D) Graphic representation of percent control values for control and dexamethasone-treated samples generated using computationally derived expression signals from the miRNA-seq data. Raw read counts at each miRNA were normalized to the total number of aligned reads in the respective sample to generate normalized signal. (E) Rat primary thymocytes were untreated (control) or treated with 100nM dexamethasone for 6 hours (apoptosis was monitored as previously described [1]). The expression of annotated positive controls and individual mature candidates was evaluated via quantitative PCR using custom TaqMan Small RNA Assays. The expression of RNU43 small nuclear RNA served as an endogenous control. Results are reported as mean percent control values +/- SEM values for 3 biological replicates (**p<.01). doi: 10.1371/journal.pone.0078316.g002
The discovery of novel miRNAs from deep sequencing data is a rapidly expanding area of bioinformatics research. To date, numerous studies have reported the deep sequencing-mediated discovery of novel miRNAs in diverse systems including viruses [38,39], plants [40][41][42][43], insects [44], lower vertebrates [45,46], mammals [47,48], cell culture [49,50], and human patient samples [51][52][53][54][55]. Interestingly, several of these studies report the altered expression profile of these newly identified miRNAs during pathophysiological conditions including aging, Sjogren's Syndrome, psoriasis, B-cell malignancy, and lung cancer [48][51][52][53][54]. Our study extends these findings to non-transformed, mammalian primary lymphocytes and, to our knowledge, is the first to report the hormonal regulation of novel miRNA expression. Importantly, the recent, independent discovery of rno-miR-6324 (a mature miRNA in the anti-sense orientation to candidate 166) strengthens the evidence that candidate 166 is a novel, glucocorticoid-responsive miRNA and that our approach to the identification of novel miRNAs from deep-sequencing data is both accurate and reproducible [27].
We next employed IPA to characterize the potential cellular and molecular functions of the newly validated glucocorticoid-responsive miRNAs. This analysis indicated that the putative targets of these novel miRNAs are predicted to influence cell death. Pathways analysis of specific novel miRNA candidate targets differentially regulated during glucocorticoid-induced apoptosis identified cell death and survival as a top-regulated predicted cellular and molecular function, as well as other cellular processes essential for the glucocorticoid-induced cell death program, including changes in cellular morphology, cell cycle, and cellular signaling [56]. Presently, further functional analyses of these novel miRNAs in this model system are not possible, since rat primary thymocytes are not amenable to genetic manipulation in vitro. However, these preliminary IPA-derived functional predictions provide a promising basis for the future validation and functional analysis of both novel miRNAs in an alternative, adaptable model system.
In summary, these studies employ a customized bioinformatic pipeline that enables the discovery of novel miRNAs from deep sequencing data and further describes the repression of two novel miRNAs (candidates 44 and 166) during glucocorticoid-induced apoptosis of primary thymocytes. Computational analysis predicts that miRNA candidates 44 and 166 may contribute to the glucocorticoid-induced apoptosis program through the regulation of target mRNAs involved in cell death and survival functions. These findings are the first to identify the presence of novel, glucocorticoid-responsive miRNAs in the rat transcriptome.
Ethics Statement
All animal experiments were approved by the National Institute of Environmental Health Sciences Institutional Animal Care and Use Committee and complied with USDA Column C classification (minimal, transient, or no pain or distress). Experimental animals were routinely monitored by NIEHS veterinary staff and investigators for pain or distress.
Rat primary thymocyte isolation
Rat primary thymocytes were isolated from adrenalectomized (60-75 g) male Sprague-Dawley rats (Charles River Laboratories, Wilmington, MA) approximately 1-2 weeks after surgery. Following decapitation, the thymi of three animals were removed and pooled in RPMI 1640 medium containing 10% heat-inactivated fetal bovine serum, 4 mM glutamine, 75 units/ml streptomycin, and 100 units/ml penicillin. Thymi were gently sheared with surgical scissors at room temperature. Sheared cells were filtered through 200-micron nylon mesh twice and centrifuged at 3K for 5 minutes at room temperature. The cell pellet was then resuspended in fresh media and filtered into a sterile conical tube. Cells were cultured at a final concentration of 2 × 10⁶ cells/mL and incubated at 37°C in a 5% CO₂ atmosphere.
miRNA deep sequencing
Rat primary thymocytes were isolated and cultured in the presence or absence of 100nM dexamethasone for 6 hours. Following treatment, total RNA was isolated using the Ambion mirVana miRNA isolation kit (Austin, TX) from untreated control and dexamethasone-treated samples and subjected to miRNA Deep Sequencing. Small RNA cDNA libraries were prepared according to manufacturer's protocol (Small RNA Sample Prep Kit Oligo Only, protocol 71003, Illumina, Inc., San Diego, CA). Small RNA cDNA libraries were then sequenced according to manufacturer's instructions on the Illumina Genome Analyzer II (Illumina, Inc., San Diego, CA). The data discussed in this publication have been deposited in NCBI's Sequence Read Archive [57] and are accessible through SRA accession number SRP019941.
Bioinformatic analysis of miRNA deep sequencing data
Deep sequencing data for one lane each of the Dex and Control samples were received in fasta format. The read length of the Dex samples was 35 nucleotides, whereas for the Control it was 25 nucleotides. However, the approximate length of a mature miRNA is around 18-22 nucleotides; therefore, it is likely that the 3' end of the read sequence may contain adapter sequences. To remove possible adapter sequences we trimmed the reads at the 3' end such that the resulting reads were 20 nucleotides in length. Next, the sequence reads were collapsed into a fasta-formatted file where only unique sequences remain, and duplicated sequences were counted and recorded in the header information for each sequence. Out of 13,087,842 and 12,307,015 reads in Dex and Control respectively, the data were compressed to 440,473 and 657,066 unique reads respectively. The resulting files were used for further analysis, which included discovery of novel miRNAs and calculation of differential expression in Dex vs. Control for novel and existing miRNAs.
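The trimming and collapsing step can be sketched as follows; the file paths and the header format (read count appended after '_x') are assumptions, not the exact format used by the authors.

```python
from collections import Counter

def trim_and_collapse(fasta_in, fasta_out, keep=20):
    """Trim reads to 'keep' nucleotides at the 3' end and collapse duplicates."""
    counts = Counter()
    with open(fasta_in) as fh:
        for line in fh:
            if not line.startswith(">"):
                counts[line.strip()[:keep]] += 1
    with open(fasta_out, "w") as out:
        for i, (seq, n) in enumerate(counts.most_common(), start=1):
            out.write(f">seq{i}_x{n}\n{seq}\n")  # duplicate count kept in the header
```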
Computational prediction of novel miRNAs
To discover novel miRNAs from deep sequencing data, we designed a bioinformatics pipeline based on the miRanalyzer methodology. miRanalyzer is a web server that takes as input short sequence reads of lengths up to 25 nt and outputs predicted novel miRNAs [24]. It is also available in a stand-alone version [58]. Moreover, our experimentation with the software determined that the implementation of the random forest prediction approach within miRanalyzer was not robust enough to yield reproducible results. To overcome these limitations we implemented a number of new ideas within the novel miRNA discovery paradigm. To this end, we designed a data analysis workflow that uses the general framework and certain components from miRanalyzer and combines it with our novel machine learning approach.
First, we implemented a sequence alignment strategy as described in miRanalyzer. The fasta files from Dex and Control samples were used as input. Alignments were performed in a sequential manner. First, reads were aligned to known miRNAs, followed by alignment to mature-star*, mature-star* unobserved and hairpin precursors. Next, the remaining reads were aligned to known mRNAs and RNA families as defined by RFAM. The sequences that map to any of the above RNA subtypes are then removed. Remaining sequences are aligned to the Rn4 genome ( Figure S1).
Reads aligning to mature miRNAs were used to build the true "Positive" training dataset, and reads aligning to other RNA types such as those in RFAM were used to build the "Negative" training dataset. Reads that did not map to any known RNAs but mapped to unannotated locations in the genome were used to build the "Test" dataset. The alignments for the training/test datasets were generated using bowtie (0.12.7) [59] with the --best and --strata options. We allowed up to 2 mismatches in a seed length of 17, and up to 6 alignments were allowed per read. Only the longest alignments that maintained the number of observed mismatches within the seed were kept for further analysis.
Following the miRanalyzer approach, all overlapping aligned reads were grouped together and 'clusters' were formed. 'Precursor' sequences were then generated from each cluster [58]. We predicted the secondary structure of each precursor sequence using the ViennaRNA (version 2.0.6) [60] tool and removed a precursor if any of the following were true:
1. It does not have a single-stem hairpin structure.
2. It has fewer than 19 bindings in the candidate precursor sequence.
3. It has fewer than 11 bindings in the region occupied by the read cluster.
4. The candidate precursor genomic location does not overlap with a known miRNA (only in the case of the true positive dataset).
For the remaining precursor sequences, we calculated the molecular attributes that best describe the sequence and secondary structure characteristics of the precursor sequences. These characteristics are then used as input to the machine learning methods to train the models. The molecular features used include:
1. Total number of bindings within the read cluster
2. Total number of bindings in the whole candidate precursor secondary structure
3. The length of the read cluster
4. The expression of the mature-star* sequence
5. Total tag counts in the read cluster
6. The minimum free energy (MFE)
7. Normalized energy (MFE/candidate precursor length)
8. The difference in the number of nucleotides that do not bind between the arms
9. The expression of the overlapping conserved region
10. The number of unbinding nucleotides in the overhang region
Using these features calculated for each precursor sequence in the positive and negative training datasets, we built two random forest models, one each for the Control and Dex data, using the "randomForest" R package [61]. Each random forest model consisted of 1000 binary decision trees, each constructed from 66% of randomly selected training precursors and 3 randomly selected training features. For each training sample, aggregated classification votes were computed from all the trees in which the sample under consideration was excluded. Next, the out-of-bag training error/accuracy rates were computed using the above classification vote counts. The importance of each training feature is assessed using the change in out-of-bag training accuracy after permuting the values of the feature of interest (Figure S2 shows the ranking of features). Ranking is calculated by the mean decrease in accuracy associated with each feature.
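The sketch below re-implements this training step with scikit-learn instead of the authors' randomForest R package, so the sampling behaviour differs slightly (bootstrap samples only approximate the 66% subsampling described above). The feature matrices X_pos and X_neg, with one row per precursor and one column per molecular feature, are assumed to have been computed already.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

def train_random_forest(X_pos, X_neg, seed=0):
    X = np.vstack([X_pos, X_neg])
    y = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_neg))])
    # 1000 trees and 3 features tried per split, as described in the text
    model = RandomForestClassifier(n_estimators=1000, max_features=3,
                                   oob_score=True, random_state=seed)
    model.fit(X, y)
    # feature ranking by mean decrease in accuracy after permuting each feature
    perm = permutation_importance(model, X, y, n_repeats=10, random_state=seed)
    return model, model.oob_score_, perm.importances_mean
```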
Our training models displayed high classification accuracy (i.e. low class error) as described by the confusion matrix (Table S2). We employed 1000 trees for modeling, a significantly larger number than miRanalyzer uses, to ensure that training and testing results are consistent and reproducible.
We used reads from the "Test" dataset and generated clusters and precursor sequences as described earlier. Here, we discarded clusters with raw read counts (expression value) lower than 11 prior to precursor generation step to avoid regions with low expression. The resulting precursor sequences from the test data were used for feature generation and as input for classification using the two random forest models trained from the Control and Dex data as described earlier.
Precursor sequences that were predicted to be novel miRNAs were identified, and the parent 'cluster' sequence for each of those precursors was used as the novel 'mature' miRNA sequence. These novel miRNAs were further discarded if they met either of the following criteria: 1. If the predicted novel miRNA localizes to chrUn or any of the 'random' chromosomes.
2. If the MFE of predicted novel miRNA is greater than -25.
In cases where the chromosomal coordinates of the novel miRNAs overlapped each other, they were merged to form one novel miRNA.
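A sketch of this post-prediction filtering and merging step is given below; each prediction is assumed to be a dict with 'chrom', 'start', 'end' and 'mfe' keys, which is an illustrative data structure rather than the authors' format.

```python
def filter_and_merge(predictions, mfe_cutoff=-25.0):
    kept = [p for p in predictions
            if p["mfe"] <= mfe_cutoff              # discard hairpins with MFE > -25
            and "chrUn" not in p["chrom"]
            and "random" not in p["chrom"]]        # drop unplaced/'random' chromosomes
    kept.sort(key=lambda p: (p["chrom"], p["start"]))
    merged = []
    for p in kept:
        if merged and p["chrom"] == merged[-1]["chrom"] and p["start"] <= merged[-1]["end"]:
            merged[-1]["end"] = max(merged[-1]["end"], p["end"])  # overlap -> one miRNA
        else:
            merged.append(dict(p))
    return merged
```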
miRNA signal and differential expression calculation
To determine computationally derived expression values at each annotated and predicted novel miRNA, we counted the total number of reads aligned at a genomic locus normalized by the total aligned reads for a given sample (Reads Per Million). For this calculation, we only included those reads that met very stringent sequence alignment criteria (reads may have a maximum of 3 alignments and only one mismatched position within the 17nt seed length). To determine whether a given miRNA is induced or repressed in response to dexamethasone treatment, we calculated the ratio of signal in Dex divided by Control. If the ratio is above 1, we consider the miRNA induced by Dex, if the ratio is below 1, we consider the miRNA repressed by Dex.
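The calculation reduces to a reads-per-million normalization followed by a Dex/Control ratio, as sketched below; the function and argument names are placeholders.

```python
def rpm(read_count, total_aligned_reads):
    """Reads Per Million at a genomic locus."""
    return read_count * 1e6 / total_aligned_reads

def dex_response(dex_count, dex_total, ctrl_count, ctrl_total):
    ratio = rpm(dex_count, dex_total) / rpm(ctrl_count, ctrl_total)
    return ("induced" if ratio > 1 else "repressed"), ratio
```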
Novel miRNA qPCR
Total RNAs were isolated from control and dexamethasone-treated (100nM, 6 hours) thymocytes using the Ambion mirVana miRNA isolation kit (Austin, TX). For annotated and novel miRNA validations, total RNAs were reverse transcribed using the TaqMan miRNA Reverse Transcription kit (Applied Biosystems, CA, USA) and analyzed using custom-designed TaqMan Small RNA Assays (Applied Biosystems, CA, USA) per manufacturer instructions. Single-tube primer/probes for each candidate were designed using the Custom TaqMan Small RNA Assay Design Tool, using the predicted (or annotated) mature miRNA sequence as the design template. Prior to submission, template sequences were evaluated for specificity via the Basic Local Alignment Search Tool (BLAST) [62]. Primer template sequences for each candidate novel miRNA were:
Candidate 44: CGCGGATGATGACACCTGGGTAT
Candidate 166: GCTCTGCTGACTGCCTATGGGCT
Each customized small RNA assay was evaluated for signal in both the reverse transcriptase-minus and the cDNA-minus non-template controls, indicating the detection of small RNA-specific signal. Each primer/probe was normalized to the expression of the small-nucleolar RNA RNU43.
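For the qPCR data, percent-control values normalized to RNU43 would typically be obtained with the standard 2^-ΔΔCt method, sketched below; this formula is an assumption for illustration, since the paper does not spell out the calculation.

```python
def percent_control(ct_target_dex, ct_rnu43_dex, ct_target_ctrl, ct_rnu43_ctrl):
    # delta-Ct: normalize each sample to the endogenous control RNU43 (assumed method)
    d_dex = ct_target_dex - ct_rnu43_dex
    d_ctrl = ct_target_ctrl - ct_rnu43_ctrl
    dd_ct = d_dex - d_ctrl
    return 100.0 * 2 ** (-dd_ct)  # expression in Dex as a percent of control
```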
Whole genome microarray
Rat primary thymocytes were isolated and cultured in the presence or absence of 100nM dexamethasone for 6 hours. Following treatment, total RNA was isolated from three biological replicates using the Ambion mirVana miRNA isolation kit (Austin, TX) and subjected to whole genome microarray analysis. Gene expression analysis was conducted using Agilent Whole Rat Genome 4x44 multiplex format oligo arrays (014879) (Agilent Technologies) following the Agilent 1-color microarray-based gene expression analysis protocol. Starting with 500 ng of total RNA, Cy3-labeled cRNA was produced according to the manufacturer's protocol. For each sample, 1.65 µg of Cy3-labeled cRNA was fragmented and hybridized for 17 hours in a rotating hybridization oven. Slides were washed and then scanned with an Agilent Scanner. Data were obtained using the Agilent Feature Extraction software (v9.5), using the 1-color defaults for all parameters. The Agilent Feature Extraction software performed error modeling, adjusting for additive and multiplicative noise. The resulting data were processed using the Rosetta Resolver® system (version 7.2) (Rosetta Biosoftware, Kirkland, WA). The data discussed in this publication have been deposited in NCBI's Gene Expression Omnibus [63] and are accessible through GEO Series accession number GSE45560.
Analysis of whole genome microarray data
The feature-extractor-processed raw signal was log2-transformed, quantile normalized and summarized for each probe using the median polish algorithm. Next, we identified differentially expressed genes in Dex-treated compared to Control samples using a signal-to-noise statistic defined as the ratio of the average signal difference and the sum of the between-replicate standard deviations. The adjusted and unadjusted p-values for this signal-to-noise statistic were computed using the left/right tail of an empirical distribution generated by 10,000 sample/probe permutations (similar to [64]). We used a nominal p-value threshold of 0.01 (nominal p-value ≤ 0.01) and an absolute fold change threshold of 1.2 (absolute fold change ≥ 1.2) to identify differentially expressed probes. We used available probe annotation to map probe IDs to corresponding RefSeq genes. We identified 219 genes with statistically significant differential expression.
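The statistic and permutation test can be sketched as below; the arrays are assumed to be probes × replicates, and the two-sided empirical p-value shown here simplifies the left/right-tail calculation described above.

```python
import numpy as np

def snr(dex, ctrl):
    # average signal difference divided by the sum of between-replicate SDs
    return (dex.mean(axis=1) - ctrl.mean(axis=1)) / \
           (dex.std(axis=1, ddof=1) + ctrl.std(axis=1, ddof=1))

def permutation_pvalues(dex, ctrl, n_perm=10000, seed=0):
    rng = np.random.default_rng(seed)
    observed = snr(dex, ctrl)
    data = np.hstack([dex, ctrl])
    n_dex = dex.shape[1]
    exceed = np.zeros_like(observed)
    for _ in range(n_perm):
        cols = rng.permutation(data.shape[1])
        perm = snr(data[:, cols[:n_dex]], data[:, cols[n_dex:]])
        exceed += np.abs(perm) >= np.abs(observed)
    return (exceed + 1) / (n_perm + 1)  # empirical p-value per probe
```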
Prediction of novel miRNA targets
Prediction of gene targets for a given miRNA was conducted using the miRanda software [65]. We used the mature miRNA sequence as input to the program, and the software generated predicted gene targets by comparing complementarity in the seed region of the miRNA sequence to the 3' UTR sequences of all known mRNAs in the genome. The list of gene targets for each of the two candidate novel miRNAs was further analyzed for enrichment of biological pathways using IPA.
Pathway analysis using IPA
We employed Ingenuity Pathway Analysis software to identify enriched biological pathways and molecular functions within a given gene list. We performed IPA on a list of gene targets for each of the two candidate novel miRNAs (candidates 44 and 166), and also performed IPA on the list of differentially expressed genes identified by the microarray analysis. IPA was also performed on the subset of gene targets, as identified by the Venn diagram analysis, to be differentially expressed in the microarray analysis (Figure 3).
Supporting Information
Figure S1. Alignment workflow based on the original miRanalyzer. Work-flow diagram of sequence alignment as implemented in miRanalyzer. The figure was adapted from the miRanalyzer manuscript [58]. (TIFF)
Figure S2.
Accuracy of molecular features used in computational prediction of miRNAs.
Geographic style maps for two-dimensional lattices
Continuous invariant-based maps visualize for the first time all two-dimensional lattices extracted from hundreds of thousands of known crystal structures in the Cambridge Structural Database.
Practical motivations for solving the problem of how to continuously classify lattices
This paper for mathematical crystallographers presents applications of the work of Kurlin (2022b), written for mathematicians and computer scientists, with proofs of the invariance of map coordinates up to basis choice and their continuity under perturbations of a basis. A lattice can be considered as a periodic crystal whose atomic motif consists of a single point. In Euclidean space ℝⁿ, a lattice Λ ⊂ ℝⁿ consists of all integer linear combinations of basis vectors v₁, . . . , vₙ, which span a primitive unit cell U of Λ.
Crystallography traditionally splits crystals into only finitely many classes, for instance by their space-group types. These discrete symmetry-based classifications were suitable for distinguishing highly symmetric crystals manually or simply by eye. Nowadays crystals are simulated and synthesized on an industrial scale. The Cambridge Structural Database (CSD) contains nearly 1.2 million existing crystal structures (Groom et al., 2016). Crystal structure prediction (CSP) tools generate millions of crystal structures even for a fixed chemical composition (Pulido et al., 2017), mostly with P1 symmetry. Data sets of this size require finer classifications than by 230 crystallographic groups.
A more important reason for a continuous approach to classifying periodic structures is the inevitability of noise in data. Slight changes in initial simulated or actual crystallization conditions mean that the same crystal can have slightly different X-ray patterns, leading to close but distinct structures. Fig. 1 shows that a reduced cell cannot be used to continuously quantify a distance between general periodic sets. If we consider only lattices, a similar discontinuity of a reduced basis arises in Fig. 2.
The transitivity axiom is needed to split lattices into disjoint equivalence classes: the class [Λ] consists of all lattices equivalent to Λ, since if Λ is equivalent to Λ′, which is equivalent to Λ″, all three lattices are in the same class. Past equivalences in the work of Lima-de-Faria et al. (1990) use numerical thresholds to determine a lattice class but, as Fig. 3 illustrates, all lattices can be made equivalent through sufficiently many slight perturbations up to any positive threshold due to the transitivity axiom.
An alternative mathematical approach classifies lattices by space groups and finer algebraic structures (Nespolo, 2008). Since crystal structures are determined as rigid forms, the most practically important equivalence of crystal structures and their lattices is a rigid motion, which in ℝ² is any composition of translations and rotations. This is the strongest possible equivalence on crystals that are indistinguishable as rigid bodies.
Slightly weaker is the equivalence based on isometry or congruence, denoted by Λ ≅ Λ′, which is any composition of a rigid motion with mirror reflections. Even if we fix an equivalence such as isometry, Sacchi et al. (2020) highlight that the key question 'same or different' remains unanswered. What is needed is the notion of an invariant.
Definition 1.1 (invariants versus complete invariants). A descriptor I, such as a numerical vector, is called an isometry invariant of a lattice Λ ⊂ ℝ² if I takes the same value on all isometric lattices: if Λ ≅ Λ′ are isometric then I(Λ) = I(Λ′), so I has no false negatives. An isometry invariant I is called complete (or injective) if the converse also holds: if I(Λ) = I(Λ′) then Λ ≅ Λ′, so I distinguishes all non-isometric lattices. Hence a complete invariant I has neither false negatives nor false positives (see Fig. 4).
In a fixed coordinate system, the basis vectors are not isometry invariants as they change under rotation, but the primitive cell area is preserved by isometry. If an invariant I takes different values on lattices Λ, Λ′, these lattices are certainly not isometric, while non-invariants cannot help distinguish equivalent objects. For example, isometric lattices Λ ≅ Λ′ can have infinitely many primitive bases. Most isometry invariants allow false positives, which are non-isometric lattices Λ ≇ Λ′ with I(Λ) = I(Λ′). For instance, infinitely many non-isometric lattices have the same primitive cell area.
Complete invariants are the main goal of all classifications. Continuous invariants, which change only slightly under small perturbations of the underlying object, are even better. The dependence of pseudosymmetry on thresholds discussed by Zwart et al. (2008) can be resolved in a continuous way by finding, for any given lattice, its closest higher-symmetry neighbour through continuous invariants as in Problem 1.2. All lattices continuously deform into each other if we allow any small changes.
Figure 4
The root invariant RI(Λ) from Definition 3.1, used for mapping crystal structures from the CSD in this paper, is a continuous and complete isometry invariant of all two-dimensional lattices.
Figure 1
For almost any perturbation of atoms, the symmetry group and any reduced cell (even its volume) discontinuously change, which justifies a continuous classification.
Overview of key concepts and past work on classifications of lattices
Crystallography traditionally uses a conventional cell to uniquely represent any periodic crystal (see Hahn et al., 2016). In the simpler case of three-dimensional lattices, the cell used is Niggli's reduced cell (Niggli, 1928). Since the current paper studies lattices in ℝ², we give the two-dimensional version obtained from the three-dimensional definition, which is derived as a limit of the reduction conditions for a three-dimensional reduced basis with an orthogonal third vector v₃ whose length becomes infinite. For vectors v₁ = (a₁, a₂) and v₂ = (b₁, b₂) in ℝ², the determinant of the 2 × 2 matrix with the columns v₁, v₂ is defined as det(v₁, v₂) = a₁b₂ − a₂b₁.
The new conditions for rigid motion did not appear in the work of de Wolff (2016) because reduced bases were considered up to isometry including reflections. Any rectangular lattice has a unique (up to rigid motion) reduced cell a × b, but two 'potentially reduced' bases v₁ = (a, 0) and v₂ = (0, ±b), which are not related by rigid motion for 0 < a < b. Definition 2.1 chooses only one of these bases, namely v₁ = (a, 0) and v₂ = (0, b). So det(v₁, v₂) > 0 defines a right-handed basis in ℝ².
Since reduced bases are easy to compute (Křivý & Gruber, 1976), they can be used to define the discrete metric d(Λ, Λ′) taking the same non-zero value (say, 1) for any non-isometric lattices Λ ≇ Λ′. Discontinuity of a reduced basis up to perturbations was practically demonstrated in the seminal work of Andrews et al. (1980). The introduction of Edelsbrunner et al. (2021) said that 'There is no method for choosing a unique basis for a lattice in a continuous manner. Indeed, continuity contradicts uniqueness as we can continuously deform a basis to a different basis of the same lattice'; see Fig. 2 and a formal proof in Widdowson et al. (2022, theorem 15). Since a reduced basis is discontinuous under perturbations, so is any metric on these reduced bases.
Important advances were made (Andrews & Bernstein, 1988; McGill et al., 2014; Andrews et al., 2019a; Bernstein et al., 2022) by analysing complicated boundary cases where cell reductions can be discontinuous. Since these advances are specialized for ℝ³, we refer the reader to another paper (Bright et al., 2021) for a detailed review of reduced bases for three-dimensional lattices.
Another way to represent a lattice Λ ⊂ ℝⁿ is by its Wigner-Seitz cell (Wigner & Seitz, 1933) or Voronoi domain V(Λ), consisting of all points p ∈ ℝⁿ that are closer to the origin 0 ∈ Λ than to all other points of Λ (Fig. 5). Though V(Λ) uniquely determines Λ up to rotations, almost any tiny perturbation of a rectangular lattice Λ converts the rectangular domain V(Λ) into a hexagon. Hence all combinatorial invariants (numbers of vertices or edges) of V(Λ) are discontinuous, similarly in higher dimensions.
However, comparing Voronoi domains as geometric shapes by optimal rotation (Mosca & Kurlin, 2020) around a common centre led to two continuous metrics on lattices up to rigid motion and uniform scaling. The minimization over infinitely many rotations was resolved only by finite sampling, so the exact computation of these metrics is still open. Similar computational difficulties remain for stronger isometry invariants of general periodic sets (Anosova & Kurlin, 2021a,b, 2022a; Smith & Kurlin, 2022).
Another attempt to produce computable metrics was to consider distance-based invariants whose completeness was proved for generic crystals. These invariants helped establish the crystal isometry principle by experimentally checking that all periodic crystal structures from the CSD remain non-isometric after forgetting all chemical information. This principle implies that all periodic crystals can be studied in the common crystal isometry space (CRISP), whose version for two-dimensional lattices is the lattice isometry space LIS(ℝ²).
Though the paper by Conway & Sloane (1992) 30 years ago aimed for continuous invariants of three-dimensional lattices, no formal proofs were given even for the isometry invariance.
This past work for three-dimensional lattices has been corrected and extended by Kurlin (2022a). Kurlin (2022b, proposition 3.10) proves that a reduced basis from Definition 2.1 is unique (also in the case of rigid motion) and all reduced bases are in a 1-1 correspondence with obtuse superbases, which are easier to visualize, especially for n ≤ 3.
Definition 2.2 (superbase, conorms pᵢⱼ). For any basis v₁, . . . , vₙ in ℝⁿ, the superbase v₀, v₁, . . . , vₙ from Conway & Sloane (1992) includes the vector v₀ = −(v₁ + . . . + vₙ). The conorms pᵢⱼ = −vᵢ · vⱼ are the negative scalar products of the vectors. The superbase is called obtuse if all pᵢⱼ ≥ 0, so all angles between the vectors vᵢ, vⱼ are non-acute for distinct indices i, j ∈ {0, 1, . . . , n}. The obtuse superbase is strict if all pᵢⱼ > 0.
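Definition 2.2 translates directly into a short check, sketched below for vectors given as NumPy arrays; the function names are ours and not taken from any cited software.

```python
import numpy as np

def conorms(superbase):
    """superbase: list of vectors v_0, ..., v_n with sum(superbase) == 0."""
    n = len(superbase)
    return {(i, j): -float(np.dot(superbase[i], superbase[j]))
            for i in range(n) for j in range(i + 1, n)}

def is_obtuse(superbase, strict=False):
    values = conorms(superbase).values()
    return all(p > 0 for p in values) if strict else all(p >= 0 for p in values)
```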
Definition 2.2 uses the conorms pᵢⱼ from Conway & Sloane (1992), which were also known as negative Selling parameters (Selling, 1874) and Delaunay parameters (Delaunay et al., 1934). Lagrange (1773) proved that the isometry class of any lattice Λ ⊂ ℝ² with a basis v₁, v₂ is determined by the positive quadratic form Q(x, y) = q₁₁x² + 2q₁₂xy + q₂₂y², where q₁₁ = v₁², q₁₂ = v₁ · v₂ and q₂₂ = v₂²; this form is also called a metric tensor of (a basis of) Λ. Any Q(x, y) has a reduced (non-acute) form with 0 < q₁₁ ≤ q₂₂ and −q₁₁ ≤ 2q₁₂ ≤ 0, which is equivalent to reducing a basis up to isometry. The bases v₁ = (3, 0), v₂± = (−1, ±2) generate mirror images not related by rigid motion, but define the same form Q = 9x² − 6xy + 5y² satisfying the reduction conditions above. So quadratic forms do not distinguish mirror images (enantiomorphs). Hence the new conditions for the rigid motion were needed in Definition 2.1.
Motivated by the non-homogeneity of the metric tensor (two squared lengths and a scalar product), Delaunay (1937) proposed the homogeneous parameters called conorms by Conway & Sloane (1992) (see Definition 2.2). Then any permutation of superbase vectors satisfying v₀ + v₁ + v₂ = 0 changes p₁₂, p₀₁, p₀₂ by the same permutation of indices. For example, swapping v₁, v₂ is equivalent to swapping p₀₁, p₀₂.
Delaunay's reduction (Delaunay et al., 1973) proved the key existence result: any lattice in dimensions 2 and 3 has an obtuse superbase with all pᵢⱼ ≥ 0. Section 3 further develops the Delaunay parameters to show in Section 4 how millions of lattices from real crystal structures in the CSD are distributed in continuous spaces of lattices.
Homogeneous complete invariants of two-dimensional lattices up to four equivalences
This section provides a reminder of the lattice classifications in Theorem 3.4 based on the recent invariants introduced in Definitions 3.1 and 3.2 from Kurlin (2022b, sections 3-4).
Definition 3.1 [sign(Λ) and root invariants RI, RIᵒ]. Let v₀, v₁, v₂ be an obtuse superbase of a lattice Λ ⊂ ℝ². For a generic (oblique) lattice these vectors have different lengths and no right angles, and hence can be ordered so that |v₁| < |v₂| < |v₀|. Let sign(Λ) be the sign of the determinant det(v₁, v₂) of the matrix with the columns v₁, v₂. The root invariant RI(Λ) is the triple of the root products rᵢⱼ = √(−vᵢ · vⱼ), which have the original units of vector coordinates such as ångströms and are ordered by their size for distinct indices i, j ∈ {0, 1, 2}. The oriented root invariant RIᵒ(Λ) is RI(Λ) with sign(Λ) as a superscript, which we skip if sign(Λ) = 0.
Kurlin (2022b, lemma 3.8) proved that RI(Λ) is an isometry invariant of Λ, independent of an obtuse superbase B, because an obtuse superbase of Λ is unique up to isometry, and also up to rigid motion for non-rectangular lattices. This uniqueness was missed by Conway & Sloane (1992) and actually fails in ℝ³ (see Kurlin, 2022a).
Definition 3.2 (projected invariants PI, PIᵒ). The root invariants of all lattices Λ ⊂ ℝ² live in the triangular cone TC in Fig. 6. The triangular projection TP: TC → QT divides each coordinate by the size(Λ) = r₁₂ + r₀₁ + r₀₂ and projects RI(Λ) to (r̄₁₂, r̄₀₁, r̄₀₂) in the quotient triangle QT in Fig. 7. This triangle can be visualized as the isosceles right-angled triangle QT = {x, y ≥ 0; x + y ≤ 1} ⊂ ℝ², parameterized by x = r̄₀₂ − r̄₀₁ and y = 3r̄₁₂. The resulting pair PI(Λ) = (x, y) is the projected invariant. The oriented invariant PIᵒ(Λ) is obtained by adding the superscript sign(Λ).
All oriented projected invariants PIᵒ(Λ) with sign(Λ) live in a union of two quotient triangles QT⁺ ∪ QT⁻. These triangles should be glued along the common subspace of mirror-symmetric lattices (all non-oblique lattices Λ ⊂ ℝ²), whose PI(Λ) belong to the boundary of QT. The two copies are glued along the hypotenuses of QT±, and the remaining sides are identified accordingly. We get a punctured sphere due to the excluded vertex (1, 0).
Figure 7. Left: the triangular cone TC = {(r₁₂, r₀₁, r₀₂) ∈ ℝ³ | 0 ≤ r₁₂ ≤ r₀₁ ≤ r₀₂ ≠ 0} is the space of all root invariants; see Definition 3.1. Middle: TC projects to the quotient triangle QT representing all two-dimensional lattices up to isometry and uniform scaling. Right: QT is parameterized by x = r̄₀₂ − r̄₀₁ ∈ [0, 1) and y = 3r̄₁₂ ∈ [0, 1].
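Definitions 3.1 and 3.2 can be sketched in a few lines of code, assuming an obtuse superbase v₀ = −(v₁ + v₂), v₁, v₂ ordered as in Definition 3.1 is already available; the reduction step that produces such a superbase is omitted, and the naive sign below does not handle mirror-symmetric (sign = 0) lattices, so this is an illustration rather than the published implementation (the authors' Python code is linked in the conclusions).

```python
import numpy as np

def root_invariant(v1, v2):
    """Root products of the obtuse superbase v0 = -(v1 + v2), v1, v2."""
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    v0 = -(v1 + v2)
    vs = [v0, v1, v2]
    r = sorted(np.sqrt(max(0.0, -np.dot(vs[i], vs[j])))
               for i, j in [(1, 2), (0, 1), (0, 2)])       # ordered r12 <= r01 <= r02
    sign = int(np.sign(np.linalg.det(np.column_stack([v1, v2]))))  # naive sign only
    return r, sign

def projected_invariant(v1, v2):
    (r12, r01, r02), sign = root_invariant(v1, v2)
    size = r12 + r01 + r02
    x = (r02 - r01) / size
    y = 3 * r12 / size
    return (x, y), sign                                     # point (x, y) in the triangle QT
```

For example, the hexagonal lattice with v₁ = (1, 0) and v₂ = (−1/2, √3/2) has all three root products equal, so the sketch returns PI = (0, 1), the top-left corner of QT discussed in Section 4.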
Mapping millions of two-dimensional lattices extracted from crystal structures in the CSD
For any periodic crystal structure from the CSD, which has full geometric data of its lattice Λ ⊂ ℝ³, we extract three two-dimensional lattices generated by the three pairs {v₂, v₃}, {v₁, v₃}, {v₁, v₂} of given basis vectors of Λ. So the CSD provides a huge collection of 2.6 million two-dimensional lattices, which our reduction approach maps to the triangle QT in under 1 h on a standard laptop. Fig. 9 shows all resulting 2.6 million lattices in QT. Only about 55% of all lattices have Bravais classes oc, op, hp, tp. The remaining 45% of lattices are oblique, with Bravais class mp. These occupy almost the full quotient triangle QT, although we see a somewhat greater density close to subspaces representing higher-symmetry lattices, especially around hexagonal and rectangular centred lattices.
Figure: Left: all projected invariants PI(Λ) live in the quotient triangle QT parameterized by x = r̄₀₂ − r̄₀₁ ∈ [0, 1) and y = 3r̄₁₂ ∈ [0, 1]. Right: mirror images (enantiomorphs) of any oblique lattice are represented by a pair (x, y) ↔ (1 − y, 1 − x) in the quotient square QS = QT⁺ ∪ QT⁻ symmetric in the diagonal x + y = 1.
The gap of about two pixels near the horizontal edge in Fig. 9 corresponds to r̄₁₂ = 0.01. The relevant lattices have basis vectors v₁, v₂ whose angle is perturbed from 90° by less than 0.03°. The CSD has only 399 such lattices, and r̄₁₂ > 0.005 for all but one of them. After removing all non-oblique lattices represented by root invariants along the boundary of QT, the map in Fig. 10 shows more clearly that all oblique lattices extracted from the CSD occupy the triangle QT without any gaps.
The heat map of rectangular lattices in Fig. 11 (top) has two high-concentration (black) pixels at a ≈ 3.5 Å arising from 386 near-identical primitive monoclinic crystal structures of oxalic acid dihydrate. This molecule was used as a benchmark for the calculation of electron densities since its crystallographic properties were thoroughly documented by Stevens & Coppens (1980). Hundreds of publications have since generated and deposited further refinements of its structural determination.
In the heat map of centred rectangular lattices in Fig. 11 (bottom), the most prominent feature is the hottest area in the region where the shortest side length is between 2.5 and 5 Å. We also see a line of high-concentration pixels. This line represents two-dimensional lattices in body-centred cubic lattices, where the ratio of side lengths is √2. This ratio was reported among preferred values for lattice length ratios in dimension 3 by de Gelder & Janner (2005). Another high-concentration pixel represents 130 structures of a standard test molecule (hexamethylenetetramine), which was frequently used in the investigation of lattice vibrations (Becka & Cruickshank, 1963).
Figure 9: The heat map in QT of all two-dimensional lattices extracted from 870 000+ crystal structures in the CSD. The colour of each pixel indicates (on the logarithmic scale) the number of lattices whose projected invariant PI(Λ) = (x, y) = (r̄₀₂ − r̄₀₁, 3r̄₁₂) belongs to this pixel. The darkest pixels represent rectangular lattices on the bottom edge of QT.
Figure 10: The normal-scale heat map in QT of all two-dimensional oblique lattices from CSD crystals. After removing mirror-symmetric lattices on the boundary of QT, we can better see the tendency towards hexagonal lattices at the top-left corner (0, 1) ∈ QT.
Hexagonal and square lattices are characterized by the inter-point distance a. Fig. 12 shows distributions and preferred values of a (in Å ) among CSD lattices.
Other complete invariants and a spherical map of two-dimensional lattices
In comparison with other complete invariants, RI(Λ) has the advantage of homogeneity, so that any permutation σ of (indices of) superbase vectors v₀, v₁, v₂ permutes the three root products accordingly: rᵢⱼ ↦ rσ(i)σ(j). The metric tensor MT = (v₁², v₁ · v₂, v₂²), including the coefficients of the form Q_Λ(x, y) = q₁₁x² + 2q₁₂xy + q₂₂y² representing Λ, is not homogeneous in the above sense. Taking square roots gives the quadratic invariant QI(Λ) = (√q₁₁, √(−q₁₂), √q₂₂) in the units of basis coordinates. The quadratic invariant QI(Λ) is complete up to isometry by Theorem 3.4(a).
In the isosceles triangle QT, continuous metrics and chiral distances have simple formulae in the work of Kurlin (2022b, sections 5-6) for the coordinates x = r̄₀₂ − r̄₀₁, y = 3r̄₁₂, but can now be re-written for any coordinates on LIS(ℝ²) [see the earlier non-isosceles triangles of Engel et al. (2004, Fig. 1.2 on p. 82) and Zhilinskii (2016, Fig. 6.2)].
Since the quotient square QS = QT⁺ ∪ QT⁻ with identified sides is a punctured sphere, it is natural to visualize QS as the round surface of Earth with QT± as the north/south hemispheres separated by the equator along their common boundary of QT, represented by the projected invariants PI(Λ) of all mirror-symmetric lattices Λ.
We can choose any internal point of the quotient triangle QT as the north pole. The most natural choice is the incentre P⁺ (pole), the centre of the circle inscribed into QT⁺, because the rays from P⁺ to the vertices of QT⁺ equally bisect the angles 90°, 45°, 45°. The incentre of QT⁺ has the coordinates (1 − 1/√2, 1 − 1/√2) ≈ (0.293, 0.293). The lattice Λ₂⁺ with the projected invariant PI(Λ₂⁺) = (x, x) at this incentre has the basis v₁ ≈ (1.9, 0), v₂ ≈ (−0.18, 3.63), inversely designed by Kurlin [2022b, example 4.10 (Λ₂)].
(a) The spherical map SM sends the incentre P⁺ of QT to the north pole of the hemisphere HS⁺ and the boundary ∂QT to the equator of HS⁺ [see Fig. 13 (middle)]. Linearly map the line segment between P⁺ and any point (x, y) in the boundary ∂QT to the shortest arc connecting the north pole SM(P⁺) to SM(x, y) in the equator of HS⁺. Extend the spherical map to SM: QS → S² by sending any pair of invariants PIᵒ(Λ±) with sign(Λ±) = ±1 to the northern/southern hemispheres of the two-dimensional sphere S², respectively.
(b) For any lattice Λ ⊂ ℝ², the latitude φ(Λ) ∈ [−90°, +90°] is the angle from the equatorial plane EP of S² to the radius-vector to the point SM[PIᵒ(Λ)] ∈ S², measured in the upwards direction.
Let v(Λ) be the orthogonal projection of this radius-vector to EP. Define the Greenwich point as G = (0, √2 − 1) ∈ ∂QT in the line through P⁺ and (1, 0). This G represents all centred rectangular lattices with a conventional unit cell 2a × 2b whose ratio r = b/a can be found from Example 3.3. The Greenwich meridian is the great circle on the sphere S² passing through the point SM(G) in the equator E. The longitude λ(Λ) ∈ (−180°, 180°] is the anticlockwise angle from the Greenwich plane through the Greenwich meridian to the vector v(Λ) above.
For lattices with PI(Λ) in the straight-line segment between the excluded vertex (1, 0) and the incentre P⁺, we choose the longitude λ = +180° rather than −180°. Proposition 5.2 computes λ(Λ), φ(Λ) via PI(Λ) = (x, y) and is proved in Appendix A.
Figure 12: The histograms of minimum inter-point distances a in ångströms.
Figure 14
The heat map of two-dimensional lattices from crystal structures in the CSD on the northern hemisphere. The radial distance is the latitude φ ∈ [0°, 90°]; the curved 'pixels' away from the densest regions have a much lower concentration.
The high concentration near the point representing hexagonal lattices is visible in Figs. 14 and 15 as dark pixels near the longitude λ = −45°. Where non-oblique lattices are included, we see the high concentrations along the borders of QT, with primitive rectangular lattices appearing as a dark thick arc on the equator for λ ∈ [67.5°, 180°).
The heat maps show a hexagonal 'ridge' along the meridional arc at λ = −45° in Figs. 14 and 15, which appears as a round arc in Figs. 16 and 17. The concentration of exact square and rectangular lattices is even higher (dark pixels for the Bravais classes tp and op), but there are fewer lattices close to these classes, possibly because manual or automatic adjustments are easier for angles close to 90° than to 60°.
Main conclusions and motivations for a continuous crystallography
The heat maps in Figs. 9-10 and 14-17 visualize for the first time 2.6 million two-dimensional lattices in real crystal structures from the CSD. The preprint of Bright et al. (2021) extends this approach to three-dimensional lattices, but there is a growing database of real and theoretical two-dimensional lattice structures with potentially interesting properties (Mounet & Gibertini, 2020) for which two-dimensional lattice invariants may have direct utility. The maps indicate that lattices occur naturally in continuous distributions, and their geometry can be investigated by continuous invariant-based classification in addition to using discrete symmetry groups.
Figure: The heat map of two-dimensional lattices from crystal structures in the CSD on the western hemisphere. Angles on the circumference show the latitude φ ∈ [−90°, 90°]. Top: N = 1 100 580 lattices with λ ∈ (−180°, 0°]. The hexagonal lattice at λ = −45° and the centred rectangular lattice at λ = −112.5° are marked on the horizontal arc (western half-equator). Bottom: all N = 932 626 oblique lattices with λ ∈ (−180°, 0°] and φ ≠ 0.
Figure 15
The heat map of two-dimensional lattices from crystal structures in the CSD on the northern hemisphere. The radial distance is the latitude φ ∈ [0°, 90°].
The continuous approach has the added advantage of more easily spotting structures that are geometrically nearly identical, but where small variances in crystallization conditions have led to slight structure perturbations which disrupt higher lattice symmetries. The Python code for the new invariants is available at https://github.com/MattB-242/Lattice_Invariance.
Using a geographic analogue, the recent isometry invariants create complete and continuous maps for efficient navigation in the lattice isometry space LIS(ℝ²), which can be magnified as satellite images and explored at any desirable resolution. Since each invariant is a point in a space on which various metrics can be defined, this representation leads to a continuous 'distance' between two lattices based on their separation in LIS(ℝ²) and also a continuous measure of 'dissymmetry' as the closest distance to the subspace corresponding to lattices with higher symmetry (see Kurlin, 2022b).
The four non-generic Bravais classes of two-dimensional lattices are lower-dimensional subspaces in LIS(ℝ²) whose separate maps in Figs. 11 and 12 have no intermediate gaps and include sparse or empty regions only for small or very large values of cell parameters.
Using a biological analogue, crystallography previously took a similar approach to the classical taxonomy, dividing lattices into an increasingly complex sequence of discrete categories based on symmetries as they divided organisms according to physical characteristics; see a comprehensive review by Nespolo et al. (2018).
The new area of continuous crystallography uses the geometric properties of the lattice itself to continuously classify an individual lattice in as granular a manner as we like, akin to the modern use of genetic sequences and markers to classify organisms. Indeed, since the root invariant RI(Λ) of a lattice Λ is complete, this RI(Λ) could be said to represent the DNA of Λ. Even better than the real DNA, any two-dimensional lattice can be explicitly built up from RI(Λ) [see Kurlin (2022b), proposition 4.9].
The complete root invariant from Definition 3.1 extends to a three-dimensional lattice as follows. For any three-dimensional lattice, depending on its Voronoi domain, all obtuse superbases v₀, v₁, v₂, v₃ with v₀ + v₁ + v₂ + v₃ = 0 are described by Kurlin (2022a, lemmas 4.1-4.5). Any generic three-dimensional lattice has a unique (up to isometry) obtuse superbase whose root products rᵢⱼ = √(−vᵢ · vⱼ) can be considered as labels on the edges of a three-dimensional tetrahedron or written in the 2 × 3 matrix with rows (r₂₃, r₁₃, r₁₂) and (r₀₁, r₀₂, r₀₃). Permutations of the four superbase vectors induce 4! = 24 permutations of the above six root products. Other non-generic cases require other permutations, which were not previously considered by Andrews et al. (2019b), to guarantee a complete invariant of all three-dimensional lattices [see Kurlin (2022a)]. The entries DEBXIT01, . . . , DEBXIT06 represent two polymorphs: four (near-)duplicates of T2- and two (near-)duplicates of T2- reported in our past work (Pulido et al., 2017). Zhu et al. (2022) predicted and synthesized a new material based on PDD invariants.
Consider the vertical edge between hexagonal and square lattices, where λ(Λ) ∈ [−45°, 67.5°]. The latitude φ(Λ) is proportional to the ratio in which the point PI(Λ) = (x, y) splits the line segment L from P⁺ to the vertical edge. In the main body of this paper, we show heat maps of orientation-unaware projected invariants, which clearly demonstrate the way that lattices generated from the CSD distribute through the lattice invariant space without gaps. Fig. 18 shows plots of orientation-aware projected invariants PIᵒ(Λ) for the same data set.
In both plots, we see an additional apparent non-smooth jump across the diagonal representing higher-symmetry lattices, so that there is some apparent favouring of positive chirality among two-dimensional lattices. This is an artefact of the interaction between vectors in the initial CSD data, and our consistently ordered selection of pairs from those vectors, and should not be read as a real physical effect. We also note that there is a much lower relative concentration, apparent from the lightness of the colour of each pixel, in the standard plot of oblique lattices. In this case the oxalic acid structures mentioned in the main body of the paper all have consistent chirality and remain below the diagonal of the quotient square, while the lattices in any other pixel split between each half of the plot and therefore have much lower relative counts.
The National Digital Information Infrastructure and Preservation Program (NDIIPP) was initiated in December 2000 when the U.S. Congress authorized the Library of Congress to work with a broad range of institutions to develop a national strategy for the preservation of at-risk digital content. Guided by a strategy of collaboration and iteration, the Library of Congress began the formation of a national network of partners dedicated to collecting and preserving important born-digital information. Over the last six years, the Library and its partners have been engaged in learning through action that has resulted in an evolving understanding of the most appropriate roles and functions for a national network of diverse stakeholders. The emerging network is complex and inclusive of a variety of stakeholders; content producers, content stewards and service providers from the public and private sectors. Lessons learned indicate that interoperability is a challenge in all aspects of collaborative work.
Introduction
When spider webs unite, they can tie up a lion. (Ethiopian proverb)
In the winter of 2000, a national digital preservation network began to form in the United States. The National Digital Information Infrastructure and Preservation Program (NDIIPP) was initiated by Congressional legislation that authorized the Library of Congress to work with other institutions to form a national network of partners dedicated to collecting and preserving important born-digital information. Guided by a strategy of collaboration and iteration, the Library of Congress and its partners have been engaged in learning through action (referred to as "learn by doing") that has resulted in an evolving understanding of the most appropriate roles and functions for a national network of diverse stakeholders.
Preserving our cultural heritage is not a mission that can be accomplished by a single institution.The amount of historical and creative content has reached astronomical proportions with the advent of the Internet.Technology has allowed any individual to become a publisher.Libraries and archives face a daunting task in their efforts to continue the tradition of preservation in the digital age.Although it was recognized early on that no one institution can do this alone, NDIIPP work has also taught us that the national network will be a complex interaction between networks rather than individual parties.
2000 to the Present: Developing the NDIIPP Network
When the U.S. Congress authorized NDIIPP, the Library started with the development of a plan and a strategy for moving forward.The plan, called "Preserving Our Digital Heritage," (Library of Congress, 2002) was approved by Congress early in 2003.During the development of the NDIIPP master plan, the Library met with hundreds of interested parties convened around the topics of preservation, technical architecture, research agendas and content collection and production.The result was the culmination of the initial research and planning phase and represented the fruits of intensive consultations with a wide range of American and international innovators, creators and high-level managers of digital information in the private and public sectors.
Congressional approval of the plan signaled the initiation of the first phase of network formation.The Library was eager to get to work on this exciting -yet daunting -program to save the creative and intellectual heritage of the nation in digital form.In the plan, the Library identified needs, how to address intellectual property issues and where to make investments of funding.Work began in three areas of endeavor -preservation partnerships, technical architecture and basic research.
Phase 1: Seeding the Network (2002-2005)
The first phase of network formation can best be characterized as launching small networks of partners with the common goal of preservation but with individual challenges of content and technical viewpoints. It began in September 2004 when NDIIPP funded content collecting and preservation projects comprising 36 institutions working with eight consortia. Each project consortium focused on specific content types and developed relationships and processes around the content. Each project set its own technical agenda and devised its own methodology. In this phase there was an emphasis on "learn by doing." This first set of preservation partnership investments totaled nearly US$14 million in funding to eight projects comprising 36 institutions. These institutions are selecting, collecting and preserving important digital materials such as:
• Geospatial data
Other partners, added in later years to collect important content and make it available for future research, are Portico, which is developing an archiving service for electronic journals; SCOLA (Satellite Communications for Learning), which is saving high-interest foreign news broadcasts such as those from Al-Jazeera and from Pakistan, Russia and the Philippines; and LOCKSS, a multi-site distributed archive of content.
Although the projects were developed around diverse content types, their activities have come to focus on four cross-cutting areas:
• Selection and collection of digital content
In May 2005, the Library and the National Science Foundation awarded 10 university teams a total of US$3 million to undertake pioneering research to support the long-term management of digital information. These basic research awards were the outcome of a partnership between the two agencies to develop the first digital preservation research grants program.
A test completed in June 2005, called the Archive Ingest and Handling Test (AIHT), serves as an example of how NDIIPP is catalyzing joint problem-solving to achieve programmatic goals.AIHT tested the ingest of a digital archive into diverse systems.The digital archive was donated by George Mason University, and the Library conducted the test with Johns Hopkins, Harvard, Stanford and Old Dominion universities.The archive contained approximately 57,000 files totaling about 12 gigabytes.Although relatively small, it was complex in its mix of formats and metadata.
The archive test proved that different approaches to the same problem can coexist and work successfully and coincidently.We learned which aspects of digital preservation are institution-specific and which aspects are more general.In fact, the Library believes that taking several approaches to the same problem is preferable to homogeneity, which risks data corruption or irretrievable loss should the single system solution fail.
The test also taught us that a data-centric approach to the transfer of content is preferable to a tool-based strategy.Thus, this approach assumes that data will pass among institutions in its original context, to be interpreted by the ingest system of the receiving-preserving institution.Of course, heterogeneous approaches to the same problem can only be successfully guaranteed when networking and cooperation among various institutions exists to the degree necessary to ensure interoperability (Wilson, 2003).
In a related project, the Los Alamos National Laboratory Research Library worked on building mechanisms to address challenges related to collecting, storing and accessing complex digital objects.Tools are under development for assigning metadata, transferring content between repositories and storing content within repositories.Los Alamos is using MPEG-21 as the underpinning of this work (Bekaert & Van de Sompel, 2005).In addition to the three investment areas, the Library formed an independent working group designed to examine an important portion of the U.S. copyright law that deals with libraries' use of archival materials.We learned early on that we would not be able to move forward with the digital preservation program until we had resolved some of the intellectual property issues that hindered our work.
The Library is in a unique position because the U.S. Copyright Office is part of the institution. The newly formed working group, known as the Section 108 Study Group, was convened in April 2005 under the sponsorship of the Library and the U.S. Copyright Office. Its objective was to re-examine the exceptions and limitations applicable to libraries and archives under the Copyright Act, specifically in light of the changes produced by the widespread use of digital technologies since the last significant study in 1988. The group made recommendations in March 2008 for changes that would result in draft legislation for Congress, addressing exceptions for libraries and archives to collect, preserve and serve digital materials.
Lessons learned while seeding the network highlighted the individuality of institutional business processes and constraints.The characteristics of these initial collaborations were very much project-centric.The consortia and the Library were challenged by mechanisms for cooperative agreements and distributing funding across federal, state and private institutions.Bringing business relationships across such diverse organizations into agreement consumed time and resources.Early partnership building success was often marked by the completion and signing of a cooperative agreement.
Phase 2: Strengthening and Expanding the Network (2006-2008)
The current phase of the Digital Preservation Program is intent on strengthening and sustaining current partnerships while adding new types of partners and identifying tools and services for the network.The work of the first phase informed the second and current phase of network formation.In January 2005, the collecting and preserving partners identified some common tools and services needed to preserve digital content.Tools to work with metadata and tools to examine, characterize, and verify file formats were of highest priority.One of the most important services is storage for large volumes of files.
In May 2006, the Library began a pilot project with the San Diego Supercomputer Center (SDSC) to assess the ability of a trusted partner to store digital data from the Library. The two main objectives of this project were for SDSC to:
• reliably host the Library's digital content and guarantee data integrity and access
• enable the Library to remotely access, manage, process and analyze that content
Two new communities are being developed during this phase: one with state libraries and archives, and another with commercial content producers. The results of workshops conducted in 2005 with state librarians, archivists and records managers informed a plan to fund multi-state demonstration projects whose results will assist all states in making decisions on preserving records and other state data that are increasingly available only in digital form.
Another set of investments is addressing the long-term preservation of creative content in digital form.Eight Preserving Creative America projects target preservation issues across a broad range of creative works, including digital photographs, cartoons, motion pictures, sound recordings and even video games.The work is being conducted by a combination of industry trade associations, private sector companies and nonprofits, as well as cultural heritage institutions.Several of the projects involve developing standardized approaches to content formats and metadata, which are expected to increase greatly the chances that the digital content of today will survive to become America's cultural patrimony tomorrow.
Although many of the creative content industries have begun to look seriously at what will be needed to sustain digital content over time, the Preserving Creative America projects will provide added impetus for collaborations within and across industries, as well as with libraries and archives.The awards also allow the Library to respect Congress's wishes that we enlist the private sector to help address the longterm preservation of digital content.This phase can be characterized as one in which the partners identified common tasks and worked across projects.The partners began to form the larger network and during this phase identified functions of a preservation network that are applicable to a variety of content communities.In this phase the program began to see the emergence of defined partner roles within the network and the emergence of communities of practice.The partners as consortia identified their strengths and areas in which they could provide leadership and expertise to other partners.
Phase 3: Sustaining the Network (2008-2009)
The National Digital Information Infrastructure and Preservation Program is building a stewardship network of partners that operate in one or more of four functional roles, the fourth of which is Capacity Building; the roles correspond to the layers of the stewardship network described below.
Together, the interaction of partners playing various roles will strategically provide the necessary content, support and services to all the network's members.A layered model (Figure 1) illustrates the single or multiple roles any one organization would fulfill (Arms, 2006).
Layer 2: Information and Expertise, Development and Diffusion.
In this layer the focus is on the activities rather than on the actors, or partners themselves.It is where overlapping communities of practice will be constructed.It is the place to roll up your sleeves and make a contribution that assures that the content collected in Layer 1 is available to future generations.
Depending on the task, some activities may come and some may go.Organizations in this layer may see themselves more as "contributors" than as permanent "members."For example, standards-setting bodies will play a role, yet the work they do, while crucial to NDIIPP, may not be carried out specifically for the program.
This layer is for services that further the objective of long-term access to digital information.The players in this layer perform work that is useful to many entities and their work involves more than the mere sharing of expertise and information.Their services range from being totally centrally supported by the network to being partially subsidized, to being commercial and for-profit.Some examples of services in this layer are tool and format registries or Copyright registration and deposit tools.Open-source software developers such as DSpace and Fedora live here.One of our NDIIPP partners, LOCKSS, also inhabits this space.
Layer 4: Capacity Building.The players in this layer will seek or provide funding and other support for activities in all four layers.The funding will be expected to produce results that benefit entities across the network.Government agencies at the federal, state and local levels will have a part to play.They will support training and education and the development of curricula in digital preservation.
Outcomes for this phase of NDIIPP work in addition to network formation include addressing access, developing and formalizing roles in the network, adopting interoperability standards, continuing to develop and refine services, issuing a plan for collecting content and providing a content directory of what has been saved so far.It is expected that by this time the recommendations for legislative changes suggested by our Copyright Study Group will be reviewed.Without changes to U.S. Copyright law, it will be difficult for libraries and archives to serve these materials to their users without violating intellectual property rights.A long-term funding strategy will help ensure that partners are able to continue their important work and their contributions to our universal digital library.
Phase 4: Formalizing the Network (2010-2015)
Although the rapidly changing technology and political landscapes make it difficult to project this far ahead, we do know we will formalize the network even further through a broader and deeper range of partners.Organizational roles and responsibilities will be refined and adopted.By this time, we hope that public awareness of digital preservation and why it is needed will be clear to policymakers, scholars and students as well as the general public.The vision is that of a galaxy of networks of content creators, producers and stewards interwoven with networks of service providers collaborating on standards and practices that sustain a large valuable collection of digital content.
Lessons Learned Within the Network of Partners
Collaborative digital preservation endeavors most often begin with metadata or format standards and workflow practices in order to promote interoperability.As essential as these efforts are, interoperability has become the signal word for agreement.One of the early lessons of the NDIIPP work is that there are interoperability challenges in every phase of the life cycle of digital objects.This paper highlights the challenges and some agreement in the planning and management, data curation and stewardship functions of the life cycle.
Planning and Management of Partners
Lesson: Organizational operations and business practices do not interoperate.
When establishing collaborative relationships between public and private, academic and government, or commercial and academic organizations, business operations are not always interoperable. In the NDIIPP experience, accounting systems and business practices of academic institutions are not compatible with federal government grant and procurement requirements. Monthly reporting and invoicing are very difficult for academic organizations but very common for public and commercial organizations. Sorting out roles and responsibilities around the acquisition and use of digital content is also very challenging because previous methods for inventory, acquisition and preservation are predicated on physical copies that can be more tightly controlled. Even within the same organizational domain, there are barriers to collaboration. The five universities engaged in the NDIIPP MetaArchive Project tackled the business relationship challenge by establishing a U.S. 501(c)(3) nonprofit corporation called Educopia (McDonald & Walters, 2007).
Lesson: The NDIIPP network is an emergent network.NDIIPP stated the goal of assembling a distributed network of partners as a strategy for digital preservation.The network was not constructed but rather is emerging from the work of the partnerships (Milward & Provan, 2006).
The consortia comprising the first NDIIPP funding initiative were selected for their demonstrated abilities in three areas: content selection, technical capacity, and potential to organize a network of partners.The bi-annual NDIIPP partners' meetings brought together participants from all NDIIPP-sponsored projects including the joint NSF/LC DIGARCH partners.The lesson learned is that although these partners shared a common interest and often articulated common problems, their work with diverse data and data communities was not conducive to thinking and working as a larger network.They needed some working sessions to discover and leverage the beneficial relationships.
One such session led by Clay Shirky asked the partners to identify project strengths that would benefit other projects.This exercise influenced the layered view of the stewardship network (Figure 1).It brought to the attention of all that the program needed to invest in projects that produced tools and services useful across the partnerships and that the partners needed to work together on common issues such as intellectual property, sustainability and collection policy.
At year three, there have been working sessions to define requirements for tools for file identification, verification and validation, metadata transformation and capture of content from the Web, shared storage solutions to address the growing demand for large volumes of data especially in the geospatial and Web domains, and shared collection development within the Web archiving domain.Established tools such as LOCKSS are being applied to varied data types to meet the needs of special preservation systems.There is a practical collaboration on large-volume transfer and storage mechanisms between the NDIIPP National Geospatial Digital Archive and the NSF DIGARCH research project at the University of Tennessee.
Data Curation and Stewardship
Lesson: Working within a diverse partner network increases the complexities for data interoperability.
From the viewpoint of the original eight NDIIPP preservation consortia, the OAIS concept of designated communities has been borne out for content types: social science datasets, for example, require different workflows and standards and currently serve different research communities than geospatial data. Interoperability challenges become greater as the designated user communities broaden their interest in content from various producer communities. An example is the wide adoption of geospatial data for commercial content services. Among the NDIIPP partners, it is conceivable that a useful research corpus could include political and government Web sites, polling data from the social science community and geospatial data created by state and local governments. Each of these data communities has vastly different metadata standards and practices. Access points are different. The content itself comprises a variety of formats that, in all three cases, require different software for retrieval and display of the objects.
Lesson: Metadata in standardized formats very often represents an institutional context not easily transferable to a larger context.
The excellent work on metadata schemas over the last several years was useful to NDIIPP project teams but in the 2003 Archive Ingest and Handling Test (AIHT) it was revealed that each institution employed a different grammar for the same schemas.No two METS applications were alike even though there was some transferability across local archives.Clay Shirky, technical advisor for the project observed, "The goal should be to reduce, where possible, the number of grammars in use to describe digital data and to maximize overlap, or at least ease of translation, between commonly used fields.But it should not be to create a common superset of all possible metadata" (Shirky, 2005).
Lesson: At this time, the greatest common ground for preservation process, tools and standards lies at the bit level for digital content.
The Library is currently testing a process and protocols for transfer and verification of diverse data that can be applied at the bit level. All eight consortia are participating in an exercise to move content to an archive at the Library with a file manifest and a package manifest that provide information at a minimal level to retrieve and return the package to the source, as well as to plan for more than just archival storage should that scenario develop (Sugimoto, 2006). Plugin architectures allow for diverse use of common validation and format tools such as JHOVE. The AIHT project demonstrated that interoperability for long-term preservation is data-centric and not system-centric. Common tools for data analysis, verification and validation were very useful, but project participants cautioned against universal use of a single tool due to the possibility of inherent assumptions and logic that may not provide complete coverage and extraction of useful information.
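As an illustration only (this is not the Library's or the consortia's actual tooling), a bit-level file manifest of the kind described above can be produced with a few lines of code; the directory layout and manifest file name below are hypothetical.

```python
import hashlib
from pathlib import Path

def build_manifest(package_dir: str, manifest_name: str = "manifest-sha256.txt") -> Path:
    """Write a simple file manifest: one '<sha256>  <relative path>' line per file."""
    root = Path(package_dir)
    lines = []
    for path in sorted(p for p in root.rglob("*") if p.is_file() and p.name != manifest_name):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        lines.append(f"{digest}  {path.relative_to(root).as_posix()}")
    manifest = root / manifest_name
    manifest.write_text("\n".join(lines) + "\n", encoding="utf-8")
    return manifest

# A receiving institution can recompute the digests and compare them line by line to
# verify that every file arrived intact, independently of any particular ingest system.
```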
Conclusions
Through NDIIPP, the original strategy of "learn by doing" has revealed the emergence of a complex network of partners that is best described as a network of networks.Each content community has identified, and is well on the way to solving, specific challenges for each content type -geospatial, digital television, Web content, digital images, digital sound recordings and datasets.At the same time, the various partners brought together through the program projects have been able to recognize and define functions that are best addressed through collaborative and common work.Each of the networks brings expertise and skill of value to the whole network.This network is organic rather than constructed and becomes stronger through shared expertise and common goals.
Figure 1. Layers in a Stewardship Network. | 4,957.4 | 2008-12-02T00:00:00.000 | [
"Computer Science"
] |
The Insensitivity of TASK-3 K2P Channels to External Tetraethylammonium (TEA) Partially Depends on the Cap Structure
Two-pore domain K+ channels (K2P) display a characteristic extracellular cap structure formed by two M1-P1 linkers, the functional role of which is poorly understood. It has been proposed that the presence of the cap explains the insensitivity of K2P channels to several K+ channel blockers including tetraethylammonium (TEA). We have explored this hypothesis using mutagenesis and functional analysis, followed by molecular simulations. Our results show that the deletion of the cap structure of TASK-3 (TWIK-related acid-sensitive K+ channel) generates a TEA-sensitive channel with an IC50 of 11.8 ± 0.4 mM. The enhanced sensitivity to TEA displayed by the cap-less channel is also explained by the presence of an extra tyrosine residue at position 99. These results were corroborated by molecular simulation analysis, which shows an increased stability in the binding of TEA to the cap-less channel when a ring of four tyrosine is present at the external entrance of the permeation pathway. Consistently, Y99A or Y205A single-residue mutants generated in a cap-less channel backbone resulted in TASK-3 channels with low affinity to external TEA.
Introduction
Leak K + channel family, also known as K 2 P or two-pore domain K + channels, are widely expressed among different cell types, where they play a critical role in setting the resting membrane potential [1,2]. The K 2 P family consists of fifteen different members divided into six subfamilies based on structural and functional properties [3][4][5][6]. In humans, K 2 P channels are encoded by the KCNK gene family and mutations of its genes have been associated with several pathologies. For instance, TASK-1 malfunction is linked to pulmonary hypertension [7] and cardiac arrhythmias [8]. Additionally, mutations of TASK-3 are associated with Birk Barel syndrome [9], and TASK-3 overexpression was found in human breast cancer tumors, where it has been proposed to act as a proto-oncogene [10]. Further study showed that TASK-3 gene knock down in breast cancer cells is associated with an induction of cellular senescence and cell cycle arrest [11].
Regarding protein structure, each K 2 P channel subunit has four transmembrane domains (TM1-TM4) and two pore-forming domains (P1 and P2). Therefore, two subunits are required to form a functional channel [12,13].
Recently, X-ray crystallographic structures of TRAAK (TWIK-related arachidonic acid-stimulated K + channel), TREK1 (TWIK-Related K + Channel), TREK2 and TWIK-1 (Tandem pore domains in a weak inward rectifying K + channel) channels have been reported, giving important insights into the K 2 P channel function [14][15][16][17]. Structural studies revealed that K 2 P channels display an exclusive extracellular cap domain formed by the extracellular loop that connects the first transmembrane domain and the first pore-forming sequence (TM1-P1 loop). The cap domain forms two tunnel-like side portals, known as the extracellular ion pathway (EIP) [18]. Also, the cap structure has been proposed as a barrier that hinders the access of classical K + channel blockers to their binding sites. Thus, the cap domain has been proposed to be responsible for the poor sensitivity of K 2 P channels to classical K + channel blockers [15,16].
By using mutagenesis, electrophysiology and computational analysis, we herein explored the role of the cap structure and potential residues in the blockade of TASK-3 channel by tetraethylammonium (TEA).
Our results confirm that the cap structure limits the access of TEA to the binding site in the TASK-3 channel. The deletion of the cap domain (by replacing the Loop1-P1 with a second Loop2-P2), generates a TEA-sensitive TASK-3/2loop2 channel. This TEA sensitivity is explained by a four-tyrosine ring at the mouth of the pore (Y99 and Y205). When the Y99 and the Y205 residues were mutated to alanine in the background of the TASK-3/2loop2, the channels displayed a substantial insensitivity to TEA similar to that observed in wild-type TASK-3 channels.
TEA Is a Potent Blocker of Kv2.1 Channel but Not an Effective Blocker of TASK-3 Channel
We first examined the effect of external TEA on Kv2.1 (a member of the voltage-dependent potassium channels family) and TASK-3 channels (member of the K 2 P channel family) expressed in HEK-293 cells. We found that the application of 100 mM TEA led to a strong inhibition of Kv2.1 currents (~85%) ( Figure 1A), with an IC 50 value of 16.9 ± 1.7 mM at +80 mV ( Figure 1C). In contrast, the blockade of TASK-3 by 100 mM TEA was very low (IC 50 value of 12.5 ± 3.4 at 80 mV), reaching 30% inhibition at saturating TEA concentrations at +80 mV ( Figure 1B,C), consistent with previously reported findings [5,19].
The high affinity of TEA for Kv or Kir channels depends on the presence of aromatic residues (tyrosine or phenylalanine) at the mouth of the pore [20][21][22][23]. For instance, it has been reported that residues Y82 and Y380 are key residues involved in TEA-mediated blockade in KcsA and Kv2.1 channels, respectively (see Figure 2A) [20][21][22][23].
Figure 2. (A) ... Kv2.1 and TASK-3 pore domains. Gaps are indicated by dashes; letters with a gray background are the residues implicated in the TEA binding site (Y82, Y380, A99, A100 and Y205, respectively). The selectivity filter signatures are boxed and the residue numbers are indicated. PD1 and PD2 signify pore domains 1 and 2 of TASK-3 channels, respectively. (B) Dose-response curve of TEA on the A99Y (red triangle) and A100Y (red square) mutants. The block was analyzed at the end of the test pulse at +80 mV. Results are shown as means ± SEM. The black lines were taken from the fits in Figure 1C and correspond to the TEA inhibition curves for TASK-3 WT and Kv2.1, respectively.
Taking advantage of the availability of Kv2.1-containing expression vector, we mutated the residue Y380 for alanine (Y380A) in Kv2.1 channels and found an important increase in the IC50 value (~3-fold, IC50 55.5 ± 2.2 mM) in the Y380A mutant ( Figure S1). This finding is consistent with the Y380 residue playing a key role in the sensitivity of Kv2.1 to TEA, as previously reported [24,25].
We then examined the alignment of the pore domains of KcsA, Kv2.1 and TASK-3 (each P domain, separately) ( Figure 2A). The A100 residue in the first pore region of TASK-3 (P1-domain) is the equivalent amino acid to Y82 (KcsA) and Y380 (Kv2.1). In contrast, the second pore region of TASK-3 (P2-domain) displays a tyrosine residue in position 205 (Y205) (Figure 2A). Thus, the presence of only one tyrosine (Y205) placed in the P2-domain per TASK-3 subunit (two tyrosine for a functional dimeric channel), should explain the extracellular TEA insensitivity obtained in the TASK-3 channel. To test this possibility, the single mutation A100Y on the WT background was investigated. As shown in Figure 2B, this mutant was poorly TEA-sensitive and had an IC 50 value of 196.2 ± 19.4 mM (n = 4). To rule out a possible insensitivity to TEA due to a higher distance between the blocker and the selectivity filter compared to that existing in TEA-sensitive channels, we also evaluated the single mutation A99Y on the TASK-3 background that showed a sensitivity to TEA similar to that observed in the A100Y mutant (IC 50 value of 348.0 ± 17.0 mM; n = 3) ( Figure 2B).
Cap Structure Deletion in TASK-3 Generates Poorly Selective Channels
The low sensitivity of TASK-3 channels to TEA has been explained by the presence of the cap structure, which blocks the access of TEA to its binding sites [15,16].
To probe the hypothesis proposed for the role of the cap structure in the insensitivity of K 2 P channels to TEA, we constructed TASK-3 channels that lacked the cap structure. This goal was achieved by constructing a cDNA encoding for TASK-3 channels where the cap-forming loop1-P1 sequence was replaced with a loop2-P2 (TASK-3/2loop2) ( Figure 3). Therefore, the cDNA encoding for cap-less TASK-3 channels is the one that has two loop2-P2 (TASK-3/2loop2) as external linkers ( Figure 3).
In this representation, each subunit has two pore-forming domains (P loops) and four transmembrane domains (denoted M1-M4). To the right is shown the TASK-3/2Loop2 channel construct with the amino acid sequence of the selectivity filter illustrated in boxes.
Figure 4A-F shows a comparison of the currents generated by TASK-3 (WT) and TASK-3/2loop2, in physiological ( Figure 4A,D) and high external K + concentrations ( Figure 4B,E), respectively. TASK-3 WT channels show a characteristic leak potassium current with a normal time dependence and selectivity of K + over Na + ( Figure 4A,B), as seen in the current-voltage relations ( Figure 4C). Although the TASK-3/2loop2 construct could be readily over-expressed in HEK-293 cells, the magnitude of the currents was lower than those displayed by TASK-3 WT channels ( Figure 4D) and showed poor selectivity of K + over Na + when evaluated under physiological conditions (145 mM vs. 5 mM, intracellular vs. extracellular [K + ]) ( Figure 4F). However, robust currents were obtained under symmetrical potassium conditions (140 mM K + ) ( Figure 4E,F). The lack of selectivity displayed by the TASK-3/2loop2 channel might be a consequence of mutating the GYG (Glycine-Tyrosine-Glycine) triplet from the pore-forming region 1 to the GFG (Glycine-Phenylalanine-Glycine) triplet from the pore-forming region 2.
A Ring of Four Tyrosines, at the Mouth of the Pore, Confers TEA Sensitivity to TASK-3
We evaluated the effect of the TEA blocker on the cap-less TASK-3/2loop2 construct. This construct generates a channel with one tyrosine per P-domain (therefore, four tyrosines per dimeric channel). In this case, a strong sensitivity to extracellular TEA blockade is expected.
Indeed, Figure 5A-B shows that cap-less TASK-3 channel was TEA-sensitive and had a maximum inhibition of 90% and an IC 50 value of 11.8 ± 0.4 mM (n = 4) when assayed in symmetrical K + conditions ( Figure 5B).
Given that the activity of the cap-less channel was only detected when recorded under high external K + concentration, we were forced to add TEA without reducing the external K + concentration, thus creating a substantial change in external osmolality. To rule out any possible effect on TASK-3 and Kv2.1 due to a change in external osmolality, we tested the currents displayed by TASK-3 and Kv2.1 channels in response to different external solutions when the osmolality was increased by adding mannitol instead of TEA. As seen in Figure S2, channel activity of both TASK-3 and Kv2.1 was poorly decreased when switched from isosmotic to hyperosmotic solution (800 mOsm).
To test the possibility that residue Y99 confers, at least in part, the sensitivity of the TASK-3/2loop2 construct to TEA, we mutated residue Y99 for an alanine residue (Y99A) in the backbone of the cap-less TASK-3 channel (TASK-3/2loop2/Y99A).
As shown in Figure 5C,D, the TASK-3/2loop2/Y99A mutant displayed a partial TEA sensitivity with a maximal inhibition of 46% and an IC 50 value of 17.3 ± 1.8 mM. By analogy, we also tested the contribution of the Y205 residue of TASK-3 to the TEA sensitivity. Replacement of Y205 for an alanine residue (Y205A) on the background of the mutant TASK-3/2loop2 (TASK-3/2loop2/Y205A) showed a similar pattern to that obtained with the TASK-3/2loop2/Y99A mutant ( Figure 5E,F). The TASK-3/2loop2/Y205A mutant presented a maximal inhibition of 59% and had an IC 50 value of 63.9 ± 5.4 mM ( Figure 5F).
Figure 5. ... Curves are fits to a 4-parameter logistic function and were constructed by using the average of the fitted parameters of the individual experiments. The lines without points are taken from the fits shown in Figure 1C and correspond to the TEA inhibition curves for TASK-3 WT and Kv2.1, respectively.
We then generated a cap-less TASK-3 channel with no tyrosine residues near the pore region (the TASK-3/2loop2/Y99A/Y205A mutant) to test its sensitivity to TEA. As shown in Figure 5G,H, mutant channels were essentially insensitive to TEA blockade, with an insensitivity similar to that displayed by the TASK-3 WT channel. Taken together, our data clearly show that, in the absence of the cap structure, the TASK-3 channel requires a four-tyrosine ring at the mouth of the pore to be fully blocked by extracellular TEA ions. Therefore, our results are consistent with the cap structure playing only a partial role in restricting the access of the TEA blocker.
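The dose-response relations shown in Figure 5 are described as fits to a 4-parameter logistic function; a minimal sketch of such a fit is given below. The concentration-response values are invented for illustration and are not data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(conc, bottom, top, ic50, hill):
    """4-parameter logistic: fractional current as a function of blocker concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical fractional currents (I_TEA / I_control) at increasing TEA concentrations (mM).
conc = np.array([0.3, 1, 3, 10, 30, 100])
current = np.array([0.98, 0.95, 0.80, 0.52, 0.25, 0.12])

params, _ = curve_fit(logistic4, conc, current, p0=[0.1, 1.0, 10.0, 1.0])
bottom, top, ic50, hill = params
print(f"IC50 ~ {ic50:.1f} mM, Hill coefficient ~ {hill:.2f}")
```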
Computational Analysis of Extracellular TEA Binding in TASK-3 Channel
Given that a crystallographic structure has not been solved for any member of the K 2 P TASK subfamily, the best template for TASK-3 was the structure of the TREK-1 channel (Protein Data Bank (PDB) ID code 4TWK), which displays 31% sequence identity and an e-value of 1E−32. The TASK-3/2loop2 and TASK-3 WT models ( Figure S3A,B) were subjected to MDs (molecular dynamics) for 50 ns. The RMSD (root-mean-square deviation) values relative to the initial structure of 2loop2 were less than 2 Å (Figure S3C) and continued to decrease gradually as the simulation time increased. During the last 12 ns, the RMSD values remained moderately constant, at less than 1 Å. The TASK-3 WT model was about 0.2 Å lower than the 2loop2 model until after the first 26 ns, and subsequently the differences were significantly lower. Both models reached equilibrium in approximately the last 8 ns. The XP (extra precision) method of Glide docking was used to investigate the binding site of TEA in our models. In the 2loop2 model, only ten poses were found, and all of these poses were located at the center of the four relevant tyrosine residues shown in Figure 6A.
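For reference, RMSD values of the kind quoted above are obtained by superposing each frame onto a reference structure and averaging the squared atomic displacements. A self-contained NumPy sketch (Kabsch alignment followed by RMSD; coordinate arrays are assumed to be N x 3 matrices of heavy-atom positions) is:

```python
import numpy as np

def kabsch_rmsd(ref: np.ndarray, mob: np.ndarray) -> float:
    """RMSD (same units as the input, e.g. angstroms) between two N x 3 coordinate
    sets after removing translation and applying the optimal (Kabsch) rotation."""
    ref_c = ref - ref.mean(axis=0)
    mob_c = mob - mob.mean(axis=0)
    h = mob_c.T @ ref_c                      # 3x3 covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # correct for a possible reflection
    rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    diff = mob_c @ rot.T - ref_c
    return float(np.sqrt((diff ** 2).sum() / len(ref)))

# Typical use over a trajectory stored as a list of N x 3 frames:
# rmsd_trace = [kabsch_rmsd(frames[0], f) for f in frames]
```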
We also investigated the stability of ligand-receptor complexes (obtained by the docking methodology) using MDs. Accordingly, the 1st best pose ranked by ∆GBind was subjected to MDs of 100 ns. During the first 50 ns, energy restraints were applied to the ligands and the secondary structures of the channels, and during the last 50 ns, the energy restraints over the ligands were removed. To measure the residence time of TEA poses in the binding site, the distance between TEA and the tyrosines (99 and 205, in both monomers) was computed over the whole trajectory. For the 2loop2 channel, the poses remained stable most of the time ( Figure 6B), and the first TEA pose lost affinity in the last 4 ns. Given that the distance was calculated using the center of mass of the TEA poses and each tyrosine residue, it is likely that the distance ranges do not correspond to a specific type of interaction but rather only to coordination. For both the 2loop2/A99/Y205 and 2loop2/Y99/A205 mutant channels, all TEA poses lost affinity in the binding site before the first 55 ns, as depicted in Figure 6D,F. Because no poses were found in the 2loop2/A99/A205 channel, the best pose of 2loop2 was selected, the four tyrosine residues were mutated to alanine and an energy minimization was applied. Then, the same simulation protocol was applied. As with the other mutant channels (A99/Y205 and Y99/A205), in this case the TEA pose left the binding site within the first non-restrained nanoseconds ( Figure 6H). Taken together, the results shown in Figure 6 confirm that TASK-3 requires a four-tyrosine ring at the external mouth of the pore for optimal binding of external TEA ions.
Figure 6. (A,C,E,G) Clusters of TEA poses (in green) obtained by docking analysis for the 2loop2, 2loop2/A99, 2loop2/A205 and 2loop2/A99/A205 channels, respectively. In red are shown the residues forming the binding site of TEA near the selectivity filter (SF). K + ions are depicted in yellow, and water molecules placed in the SF are represented in red and white. The superscript letter indicates the monomer to which a residue belongs. (B) Distances between the 1st best pose and the tyrosine residues for the 2loop2 channel. Similarly, the distances between the 1st best poses and the tyrosine residues for the 2loop2/A99, 2loop2/A205 and 2loop2/A99/A205 mutant channels are shown in (D,F,H), respectively.
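The residence-time analysis described above reduces to tracking, frame by frame, the distance between the centre of mass of TEA and that of each tyrosine. A minimal NumPy sketch is shown below; the array shapes and the cutoff value are illustrative assumptions, not values taken from this study.

```python
import numpy as np

def com(coords: np.ndarray, masses: np.ndarray) -> np.ndarray:
    """Mass-weighted centre of mass of one selection; coords has shape (n_atoms, 3)."""
    return (coords * masses[:, None]).sum(axis=0) / masses.sum()

def com_distance_trace(traj_a, traj_b, masses_a, masses_b) -> np.ndarray:
    """Per-frame distance between the centres of mass of two selections.

    traj_a, traj_b: arrays of shape (n_frames, n_atoms, 3), e.g. TEA and one tyrosine."""
    return np.array([np.linalg.norm(com(a, masses_a) - com(b, masses_b))
                     for a, b in zip(traj_a, traj_b)])

# A pose can be counted as "coordinated" while the distance stays below a chosen cutoff:
# bound_frames = (distances < 8.0).sum()   # frames within 8 A (illustrative cutoff)
```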
Discussion
The molecular mechanism of blockade of Kv and Kir potassium channels by external TEA has been widely studied [20][21][22][23]26]. These studies have provided relevant insights into the gating and permeation processes of K + channels [20][21][22][23]26]. Regarding K 2 P channels, there is one study in the literature where a detailed study of the blockade of TREK-1 channels by internal TEA was described [28]. On the other hand, K 2 P channels are recognized as extracellular TEA non-sensitive channels [5].
The elucidation of the structure of K 2 P channels has provided several clues about the molecular determinants underlying gating processes in K 2 P channels [14][15][16][17]. K 2 P structures revealed that two M1-P1 loops form a cap domain, which has been proposed to form a physical barrier for the access of classical K + channel blockers such as TEA to their binding sites in K 2 P channels [15,16].
In the present article, we used a combination of mutagenesis, functional evaluation and dynamic simulations to challenge the hypothesis that the insensitivity of TASK-3 channels to external TEA is due to the presence of the cap structure. Our results suggest that the cap domain in TASK-3 channels effectively restricts the access of extracellular TEA to its binding sites, although removal of the cap structure by itself does not yield TASK-3-mediated K + currents that are fully blocked by TEA.
Amino acid sequence analysis of the TASK-3 channel suggested a partial binding site for the TEA blocker composed of a tyrosine at position 205 in the second P domain. This tyrosine residue resembles the binding site for TEA in Kir and Kv channels, where an aromatic residue (phenylalanine or tyrosine) in position 82 or 320 (KcsA or Kv1.2 channel, respectively; see Figure 2A) plays an essential role in TEA binding [20][21][22][23]. Given the tetrameric architecture of Kir and Kv channels, the presence of such a tyrosine generates a four-tyrosine ring that coordinates TEA via cation-π interactions [22,26].
If only four aromatic residues are responsible for TEA binding in other K + channels, we hypothesized that engineering a ring composed of four tyrosine residues might result in TASK-3 channels highly sensitive for TEA ions. As proof of concept, we introduced an extra tyrosine residue either in position 99 (A99Y) or 100 (A100Y) in TASK-3 channel and assessed the sensitivity of this channel to TEA. Our results showed that TASK-3 channels are partially blocked by TEA ions when four tyrosine residues were placed near the pore region. Strikingly, the A99Y mutant was fully sensitive to external TEA ions when the cap structure was removed from TASK-3 channels.
Functional analysis of the cap-less construct (TASK-3 2loop2) revealed a TEA sensitivity with an IC 50 close to that obtained for the Kv2.1 channel. Our results provide strong evidence that residues Y99 and Y205 form part of the binding site for TEA: Y99A and Y205A mutants resulted in cap-less TASK-3 channels with only partial sensitivity to external TEA ions ( Figure 5C-F). Additionally, the double mutant 2loop2/Y99A/Y205A showed a substantial reduction in the sensitivity to TEA. Taken together, our results support Y205 forming part of a TEA-binding site in TASK-3 channels. Moreover, the mutants 2loop2/Y99A/Y205A in TASK-3 (dose-response shown in Figure 5H) and Y380A in the Kv2.1 channel (dose-response shown in Figure S1) still showed some sensitivity to TEA ions, suggesting that other residues from both channels located in the K + permeation pathway might be important for TEA binding. Further experiments are required to evaluate the contribution of other residues to TEA binding.
The cap structure deletion generated in the construct TASK-3/2loop2 also evidenced the relevance of extracellular ion pathway (EIP) for TASK-3 channel function. Functional evaluation of TASK-3/2loop2 showed a loss of K + selectivity. This loss of selectivity displayed by cap-less TASK-3 channels might be due to a constitutive C-type inactivation caused by the absence of the cap structure [29][30][31][32]. In this case, the cap structure might be acting as a K + concentrative pathway near to the pore region and its removal could be associated with lower local K + concentrations near the pore that may result in a pore collapse. The robust activity of cap-less TASK-3 channels recorded under symmetrical high K + concentrations are in agreement with this hypothesis, although further experiments are required in order to confirm the mechanism underlying loss of selectivity in the cap-less channels.
According to our homology model of TASK-3, the EIP of the cap structure has a group of amino acids that generate an electronegative potential (Q68, E70, P71, G75, Q77 and H98), which could increase the concentration of potassium in the extracellular conduction pathway [18]. Our model generated for the cap-less TASK-3 is consistent with a decreased electronegative potential and with the consequent effect on the selectivity filter, which was confirmed when the electrostatic potential was evaluated for the WT and TASK-3/2loop2 models ( Figure S4). In contrast to other K 2 P channels, the cap deletion did not affect the expression or dimerization of TASK-3 channel, ruling out an essential role of the cap in the dimerization of TASK-3 channels.
In conclusion, our study revealed that the cap structure explains, at least in part, the poor sensitivity of K 2 P channels to TEA. Moreover, the cap structure is not essential for channel expression or assembly. Our data also support a key role of the cap structure in TASK-3 channel function by maintaining the architecture of the mouth of the pore.
Constructs
Cavia porcellus TASK-3 (GenBank accession No AF212827) was obtained from Dr. Jürgen Daut (Marburg University, Marburg, Germany). Rattus norvegicus Kv2.1 (GenBank under accession No NM_013186) cDNA was subcloned into pMAX (eukaryotic expression vector) vector and provided by Dr. Steve Goldstein (Loyola University Chicago, Chicago, IL, USA). Mutants and deletion constructs were generated by PCR (Taq DNA polymerase, Thermo Scientific, Waltham, MA, USA) using standard protocols. The sequences of amplified regions were confirmed by DNA sequencing.
Electrophysiological Recordings
HEK-293 cells were maintained in DMEM-F12 media (Invitrogen Life Technologies, Carlsbad, CA, USA) supplemented with 10% FBS and 1% penicillin/streptomycin. Plasmid transient transfections (1-2 µg plasmid) were done with a DNA ratio of 3:1 (plasmid encoding channel: plasmid encoding for GFP as marker) using Xfect polymer (Clontech, Mountain View, CA, USA). Whole cell recordings were performed at room temperature for 24 to 48 h. post-transfection using a PC-501A patch clamp amplifier (Warner Instruments, Hamden, CT, USA) and borosilicate pipettes as described elsewhere [29]. Cells were continuously perfused with bath solution containing (in mM): 135 NaCl, 5 KCl, 1 MgCl 2 , 1 CaCl 2 , 10 HEPES, 10 Sucrose, adjusted to pH 7.4 with NaOH. Intracellular pipette solution contained (in mM): 145 KCl, 5 EGTA, 2 MgCl 2 , 10 HEPES, adjusted to pH 7.4 with KOH. External high K + solution was obtained by equimolar substitution of Na + by K + . Tetraethylammonium chloride (Sigma-Aldrich, St. Louis, MO, USA) was directly dissolved in external bath solutions to obtain the desired final concentrations. Control experiments designed to rule out a possible contribution of external osmolality were performed using the bath solution described above but supplemented with D-Mannitol.
Homology Modeling
Five different models for TASK-3 and its variants were built by homology with the software MODELLER (University of California, San Francisco, CA, USA) [33], using the structure of the TREK-1 channel (PDB: 4TWK) as a template. Both monomers were optimized by molecular dynamics (MD) and evaluated using the Discrete Optimized Protein Energy (DOPE) score [34] and the PROCHECK program. The models were prepared in the Maestro suite and protonation states were assigned with the PROPKA software at pH 7.4. The structures were refined by means of energy minimization in vacuum with a conjugate gradient algorithm. Afterward, the models were embedded into a pre-equilibrated POPC (phosphatidylcholine) bilayer and solvated in a cubic box with SPC (simple point charge model) water molecules, under periodic boundary conditions and with 150 mM NaCl added. Subsequently, the system was relaxed by MDs for 50 ns with harmonic energy restraints of 0.25 kcal mol−1 Å−2 applied to the secondary structure (excepting loops), using the Desmond package with the OPLS force field (Desmond Molecular Dynamics System, New York, NY, USA) [35]. To replicate the thermodynamic conditions of the wet-lab experiments, an isothermal-isobaric (NPT) ensemble at 1.01325 bar and 300 K was used. Root-mean-square deviations (RMSD) were computed over all heavy atoms along the MD trajectory to evaluate equilibrium convergence.
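For clarity, the harmonic positional restraints mentioned above (0.25 kcal mol−1 Å−2 applied to the secondary structure) correspond to an energy penalty of the standard form shown below; whether a factor of 1/2 is folded into the force constant depends on the MD engine, so this is indicative only.

$$E_{\text{restraint}} = \sum_{i \in \text{restrained atoms}} k\,\lVert \mathbf{r}_i - \mathbf{r}_i^{0}\rVert^{2}, \qquad k = 0.25\ \text{kcal mol}^{-1}\,\text{\AA}^{-2},$$

where $\mathbf{r}_i^{0}$ is the reference position of atom $i$ taken from the starting model.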
Computational Mutagenesis (CM)
The construct TASK-3/2loop2, which displays no cap structure and two loop2-P2 per subunit, has four tyrosine residues: Y99 and Y205 from each monomer. These residues are positioned in the extracellular mouth of the channel just above the selectivity filter (SF), in direct contact with the aqueous medium. Tyrosines 99 and 205 were subjected to CM. The last structure from the MD trajectory of TASK-3/2loop2 was used as the starting point for CM, according to the scheme represented in Table 1. All mutations were performed with the Maestro suite, and then all residues within an 8 Å cutoff of the mutated residue were subjected to energy minimization in implicit solvent. In the construct named 2loop2, the cap structure of TASK-3 channels was removed and the 1st Pore Domain (PD1) was replaced by the 2nd Pore Domain (PD2). Hence, 2loop2 has two PD2 with two tyrosine residues (Y99 and Y205) per subunit, forming the putative binding site of TEA. In the mutant 2loop2/Y99A, Y99 was mutated to alanine in both subunits (Y99A A , Y99A B ). In 2loop2/Y205A, Y205 was mutated to alanine, and in 2loop2/Y99A/Y205A, both Y99 and Y205 were mutated to alanine in both subunits, generating a channel without a binding site for TEA.
Docking and Molecular Mechanics Energies Combined with Generalized Born and Surface Area Continuum Solvation (MM-GBSA) Studies
The TEA structure was downloaded from the PDB (ID: 1T36) in SDF (Structure-Data File) format and then prepared with the LigPrep tool using OPLS (Optimized Potentials for Liquid Simulations) 2005. All possible protonation states for TEA at physiological pH were generated using the Epik program [36]. To assess the binding site of TEA in our channel models, docking studies were carried out for all systems shown in Table 1. Before docking, the K+ ion located in the first site (S1) of the SF was removed to avoid TEA-ion electrostatic repulsion. The conformational search of TEA was carried out in a grid box placed in the extracellular portal of the channel, centered on the geometric coordinates of S1 and with dimensions of 26 × 26 × 26 Å. The Extra Precision (XP) algorithm of Glide, flexible ligand sampling and default docking parameters were used [37]. Docking assays were followed by the MM-GBSA method to obtain the relative binding affinities of the docking conformers. The MM-GBSA energies were computed over all docking outputs using OPLS 2005 and the Prime program. The protein was subjected to an energy minimization within an 8 Å radius of the ligand. Subsequently, all conformers in each system were ranked by their relative binding affinity (∆GBind) values.
Molecular Dynamics Simulations (MDs)
The conformers for each system shown in Table 1 were ranked by ∆GBind and subjected to MDs (100 ns). For the first 50 ns of simulation, an energy restraint of 0.5 kcal mol −1 Å −2 was applied to the ligands, which allows the channel to adapt to the ligand; the restraints over the ligands were removed after the first 50 ns. During the whole simulation time, energy restraints were applied to the secondary structure of the channel (0.25 kcal mol −1 Å −2 ). To evaluate the coordination and the residence time of TEA within the binding site in all systems, the distances between the TEA center of mass and the center of mass of each binding-site residue (residues 99 and 205, in both monomers) were measured over the 100 ns. The electrostatic potential surfaces were computed with APBS v1.4 [20,21] over the protein, averaged over the whole simulation time (supporting material S4).
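The residence-time analysis described above reduces to tracking center-of-mass distances along the trajectory. The following sketch shows one way to perform that measurement with the MDAnalysis library; the file names, the residue name "TEA" and the segment identifiers are assumptions for illustration, since the original analysis was carried out within the Desmond tool chain.

```python
# Sketch: distance between the TEA center of mass and residues 99/205 of both monomers
# along an MD trajectory. File names, residue name "TEA" and segids are placeholders.
import MDAnalysis as mda
import numpy as np

u = mda.Universe('system.pdb', 'trajectory.dcd')   # placeholder topology/trajectory

tea = u.select_atoms('resname TEA')
site_residues = {
    'Y99_A':  u.select_atoms('segid A and resid 99'),
    'Y205_A': u.select_atoms('segid A and resid 205'),
    'Y99_B':  u.select_atoms('segid B and resid 99'),
    'Y205_B': u.select_atoms('segid B and resid 205'),
}

distances = {name: [] for name in site_residues}
for ts in u.trajectory:                            # loop over all saved frames
    com_tea = tea.center_of_mass()                 # masses are taken from the topology
    for name, res in site_residues.items():
        distances[name].append(np.linalg.norm(com_tea - res.center_of_mass()))

for name, d in distances.items():
    d = np.asarray(d)
    print(f'{name}: mean {d.mean():.2f} A, fraction below 6 A: {(d < 6.0).mean():.2f}')
```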
Statistical Analysis
Data were compiled and analyzed with the SPSS software package, version 17.0 (SPSS Inc., Chicago, IL, USA). Individual experimental TEA blockade data were fitted to a four-parameter logistic function in which I/I max is the blocked fraction of K + -mediated currents, I min and I max represent the minimal and maximal currents, and h and IC 50 represent the Hill coefficient and the concentration of TEA producing half-maximal inhibition of TASK-3 currents, respectively. The significance of differences between means was assessed with unpaired Student's t test. All data shown are mean ± standard error of the mean (SEM). Author Contributions: G.C., R.Z. and D.B. performed the experiments and analyzed the data; M.A.C. and L.Z.: experiment design, results analysis and writing of the manuscript. All the authors listed above have made substantial, direct and intellectual contributions to the work, and approved it for publication. | 8,915.2 | 2018-08-01T00:00:00.000 | [
"Chemistry",
"Biology"
] |
Theoretical modeling and experimental verification of a broadband microvibrational energy harvesting system
To scavenge energy from imperceptible vibrations, this paper investigates the broadband response and output performance of a microvibrational piezoelectric energy harvesting system with a mechanical stopper. The energy harvesting system comprises a piezoelectric cantilever beam with a coil affixed at its free end, and a mechanical stopper. The coil is placed in a magnetic field to provide an ultra-low level excitation. The electromechanical model is derived using the force integration method (FIM) and Hertz's contact theory, and numerical simulations are undertaken to evaluate the influence of the excitation level and of the gap between the beam and the stopper on the performance. For the linear counterpart without a stopper, experimental results indicate that the system generates a peak power of 24.12 μW with matched resistance under excitation with a level of 0.003 N and a frequency of 200.3 Hz. When a polydimethylsiloxane (PDMS) stopper is introduced, the vibration of the piezoelectric beam exhibits an obvious nonlinearity with an amplitude on the micron scale. Increasing the excitation level and decreasing the gap can efficiently broaden the response bandwidth. Experimental results demonstrate that a copper stopper with a larger elastic modulus results in a wider response frequency range, and the half-power bandwidth can reach 37.1 Hz under excitation with a level of 0.003 N.
| INTRODUCTION
In recent decades, the extensive application of wireless sensors and portable electronics has garnered significant research interest, and the proliferation of these electronic devices has captured a significant portion of the market's expansion. 1 Although a tremendous advance has been made in lowering the power consumption of these electronics, the devices continue to rely on conventional batteries that necessitate frequent replacement or recharging. 2 Energy harvesting technology 3 is capable of transforming ambient energy into usable electrical power, 4 making it a potential alternative to batteries. This technology has gained considerable attention as a promising battery-free solution. 5 To date, the scavenging of mechanical and vibrational kinetic energy has been achieved through the implementation of three prevailing energy conversion mechanisms, specifically the piezoelectric, 6 electromagnetic, 7 and electrostatic 8 effects. Piezoelectric energy harvesters have gained considerable global attention as a result of their high energy density and ease of miniaturization in fabrication. 9 Piezoelectric harvesters utilizing the linear resonance mechanism have been extensively studied in the past. 10 However, linear harvesters only perform well near the resonance frequency, and a small change in the vibration frequency will greatly degrade the energy harvesting performance. 11 The issue at hand has prompted a great deal of interest in the study of nonlinear vibrational harvesters, which can function over a broad spectrum of frequencies. Both theoretical analysis and empirical validation of such devices have received considerable attention. 12 In particular, multistable oscillators 13 with different types of potential energy functions have been extensively investigated. As an example, Stanton et al. 14 introduced a monostable piezoelectric harvester with bidirectional hysteresis by incorporating a permanent magnet end mass, which interacted with oppositely poled stationary magnets. Neiss et al. 15 presented analytical formulations for determining the jump-up and jump-down points, the maximum power output under optimal resistive load, and the 3 dB-bandwidth of a monostable piezoelectric harvester. Due to the large-amplitude oscillation between two potential wells, bistable energy harvesters resulting from nonlinear and mechanical mechanisms have been widely investigated for broadband operation. 16 Erturk et al. 17,18 presented a bistable piezomagnetoelastic device that aims to enhance piezoelectric power generation substantially. This mechanism exhibits superior performance over its linear counterpart, as evidenced by large-amplitude periodic oscillations observed across a wide frequency range. By coupling two rotatable magnets with the end magnet of a beam, Zhou et al. 19,20 proposed a novel bistable piezoelectric harvester and indicated that the design could cover the broad low-frequency range of 4-22 Hz. Daqaq 21 investigated the response of bistable harvesters to white Gaussian excitations and noted that the superiority of bistable configurations over linear harvesters is dependent on the proper design of the potential function, which should be based on the known noise intensity. Exploring internal resonance, Fan delves into a U-shaped vibration-based energy harvester endowed with both broadband and bidirectional capabilities. 22 Additionally, a wideband two-element piezoelectric energy harvester exhibiting both bistability and parametric resonance characteristics is presented.
23 Furthermore, Fan introduced an internal resonance piezoelectric energy harvester featuring three-dimensionally coupled bending and torsional modes. 24 More recently, Pan et al. 25 proposed a new concept by integrating bi-stability and swinging balls to harvest wind energy, and experimental results indicated that the design achieved a significant improvement in electric outputs due to its ability to realize snap-through motions over a wide range of wind speeds. To enhance the efficiency of energy harvesting from vibrations characterized by low excitation levels and a broad frequency spectrum, a variety of nonlinear configurations exhibiting tristable, 26-28 quad-stable, 29,30 and penta-stable 31,32 characteristics have been proposed and investigated theoretically and experimentally.
In addition to the magnetic coupling method mentioned above, introducing mechanical stoppers to the energy harvesting system to achieve broadband response has attracted significant attention. 33By positioning a stopper to a low-frequency cantilever nanogenerator, Song et al. 34 found that the bandwidth could be broadened by at least 200%.In their research, Zhou et al. 35,36 conducted a comparison of the efficacy of a piezoelectric energy harvesting device utilizing four different types of stoppers.Their objective was to determine the ideal impact configurations that would result in optimal performance.Their findings demonstrated that there were specific regions within the parameter space where the energy harvester could achieve an optimal balance between the bandwidth of the harvested energy and the average power generated.Instead of placing stoppers on the base, Hu et al. 37 proposed an impact engaged two degrees of freedom (DOF) piezoelectric harvester by positioning the stoppers onto the body with a primary DOF and theoretical investigation showed that the bandwidth and the opencircuit voltage could be enhanced by tuning the stopper distance.By combining magnetic coupling mechanism and mechanical stopper, the performance of piezoelectric harvesters can be enhanced more obviously.For instance, the researchers Fan et al. 38,39 presented a design for a monostable piezoelectric energy harvester that incorporates symmetric magnetic attraction to a cantilever beam and a pair of stoppers to restrict the maximum deflection of the beam.They demonstrated that this new design can produce a broader operating bandwidth and higher output voltage compared to the linear energy harvesting approach.The researchers, Wang et al. 40 introduced a novel and compact piezoelectric energy harvesting device featuring a hybrid nonlinear mechanism.This device was designed for use in condition monitoring systems for freight trains, with a specific focus on ultralow-frequency and broadband applications.Theoretical and experimental investigations demonstrated that this harvester could effectively operate within the frequency range of 1-11 Hz, thereby enabling it to power typical commercial wireless Bluetooth sensors.
Notably, previous studies have been carried out under large excitations provided by a shaker to ensure that the amplitude of the response is relatively large and that the softening and hardening characteristics of the piezoelectric energy harvesters are more obvious. 41 However, there are still many vibration sources with extremely weak amplitudes in real life. 42 Under excitation by these imperceptible vibrations, little attention has been paid to the broadband response of piezoelectric energy harvesting systems, especially when the response amplitude is on the micron scale. Therefore, this paper investigates the broadband response and energy harvesting performance of a microvibrational piezoelectric energy harvesting system with a mechanical stopper under excitation by imperceptible vibrations. The proposed energy harvesting system is composed of a piezoelectric cantilever beam and a coil attached at the free end of the beam, which is placed in a magnetic field to provide an ultra-low level excitation. The electromechanical model of the energy harvesting system is derived, following which a numerical analysis is provided. Finally, experiments are undertaken, and the influence of the excitation level, the gap between the beam and the stopper, and the material of the mechanical stopper is considered.
The subsequent sections of this paper are structured as follows: Section 2 provides a comprehensive explanation of the design, operational concept, and modeling of the microvibrational piezoelectric energy harvesting system.In Section 3, numerical investigations are conducted, and Section 4 outlines the experimental validation process.Lastly, the conclusions drawn from this study are presented in Section 5.
| Description and operating principle
Figure 1A illustrates the schematic diagram of the microvibrational energy harvesting system with a mechanical stopper for performance enhancement.It is composed of a piezoelectric cantilever beam, a coil attached at the free end, a mechanical stopper, and a pair of external electromagnets.The end coil is placed between the external electromagnets, and the electromagnets are powered by a DC source to provide a relatively uniform external magnetic field.On the contrary, an AC signal is sent to the coil, and an alternating magnetic force provided by the electromagnet will be exerted on the piezoelectric beam to drive it to oscillate.During the oscillation, the beam will impact with the mechanical stopper, thus nonlinearity is introduced to the system, and the output performance of the piezoelectric cantilever beam will be enhanced.When the cantilever beam is at rest, the gap between the beam and the mechanical stopper is denoted as d, and it is set to be adjustable as needed.Furthermore, the AC signal sent to the coil can also be adjusted in practical conditions to control the magnetic force exerted on the piezoelectric cantilever beam.
As noted, the electromagnets are applied to provide magnetic field in this paper, and the coil with current is utilized to produce alternating magnetic force.In practical application, permanent magnets can be used instead of electromagnets.As an alternative, the end coil can also be replaced by a permanent magnet, and then placed in an alternating magnetic field.Of course, there may be other combinations of coils, magnets, and electromagnets, and their ultimate goals are to provide alternating magnetic force exerted on the piezoelectric cantilever beam.
| Mathematical modeling
The microvibrational energy harvesting system illustrated in Figure 1 can be simplified as a theoretical impact vibration model that contains a cantilever beam and a stopper at the end of the beam. The piezoelectric beam in Figure 1 has a layered structure, shown in detail in Figure 2A, which is composed of a substrate and two piezoelectric layers. To model the transverse vibro-impact of the beam, a finite element model of a three-layer beam structure is proposed, as shown in Figure 2B. In this model, the beam is discretized into N distinct elements. The model employs a plane beam element with two nodes, each characterized by 2 DOF, namely a translational DOF and a rotational DOF. The force integration method (FIM) within the finite element framework can be used in the dynamic modeling.
The displacement vector {δ}e of each element is {δ}e = {Y1, θ1, Y2, θ2}T, where Y and θ are the translational and rotational displacements of the two nodes, respectively. A concentrated (lumped) mass matrix [M]e of the beam element is then written as follows.
where ρ and A represent the density and the cross-section area, respectively. It should also be noted that L p is the length of the piezoceramic layer and L b is the length of the substrate layer. The function f 1 (l) accounts for the fact that, in practice, the actual vibrating beam length is l due to the clamping of the fixed end of the beam, and N is the shape function with ξ = x/l. The stiffness matrix is expressed as in Ref. 43, where B is the geometric function matrix of the element and D is the elastic matrix, which reduces to the Young's modulus E in the beam element problem considered in this paper. Here ŷ is the distance from a point in the cross-section plane to the neutral axis, and I zA1 , I zA2 , and I zA3 are the moments of inertia of A 1 , A 2 , and A 3 with respect to the neutral axis, respectively.
where E 1 , E 2 , and E 3 represent the Young's moduli of the three layers, respectively. For the linear system without the stopper, the dynamic model after finite element discretization is expressed as Equation (10), [M]{δ̈} + [C]{δ̇} + [K]{δ} = {P(t)}, where [M], [K], and [C] are the mass matrix, stiffness matrix, and Rayleigh damping matrix, {δ} is the displacement vector of the nodes, and {P(t)} is the excitation force vector. The assembling process of the mass and stiffness matrices is shown in Figure 3.
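As a concrete illustration of how the element matrices are assembled into the global system of Equation (10), a short NumPy sketch is given below. It uses the textbook two-node Euler-Bernoulli beam element with the consistent mass matrix rather than the exact lumped-mass and three-layer expressions of the paper, and all numerical values are placeholders.

```python
# Sketch: assembly of global mass and stiffness matrices for a cantilever beam
# discretized into N two-node Euler-Bernoulli elements (2 DOF per node: Y, theta).
# EI, rho*A, lengths and the tip mass are placeholder values, not the paper's data.
import numpy as np

def element_matrices(EI, rhoA, L):
    """Standard stiffness and consistent mass matrices of one beam element."""
    Ke = (EI / L**3) * np.array([[ 12,    6*L,  -12,    6*L],
                                 [ 6*L, 4*L**2, -6*L, 2*L**2],
                                 [-12,   -6*L,   12,   -6*L],
                                 [ 6*L, 2*L**2, -6*L, 4*L**2]])
    Me = (rhoA * L / 420) * np.array([[ 156,    22*L,    54,   -13*L],
                                      [ 22*L,  4*L**2,  13*L, -3*L**2],
                                      [  54,    13*L,   156,   -22*L],
                                      [-13*L, -3*L**2, -22*L,  4*L**2]])
    return Me, Ke

def assemble(N, EI, rhoA, L_total, tip_mass=0.0):
    """Assemble global [M], [K]; clamp node 0; add the end coil as a point mass."""
    Le = L_total / N
    ndof = 2 * (N + 1)
    M = np.zeros((ndof, ndof))
    K = np.zeros((ndof, ndof))
    for e in range(N):
        Me, Ke = element_matrices(EI, rhoA, Le)
        dofs = [2*e, 2*e + 1, 2*e + 2, 2*e + 3]      # DOFs of nodes e and e+1
        M[np.ix_(dofs, dofs)] += Me
        K[np.ix_(dofs, dofs)] += Ke
    M[-2, -2] += tip_mass                            # coil as a concentrated end mass
    free = np.arange(2, ndof)                        # remove the two clamped root DOFs
    return M[np.ix_(free, free)], K[np.ix_(free, free)]

M, K = assemble(N=10, EI=1e-3, rhoA=0.02, L_total=0.032, tip_mass=5e-4)
print(M.shape, K.shape)
```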
The coil at the end of the beam is treated as a concentrated mass added to the corresponding location of the mass matrix of the beam. The excitation force is exerted at the end of the beam in the y direction. The external excitation force vector can be calculated as follows.
The force is assumed to be exerted on the (2N − 3)th node. Furthermore, F e is the excitation force provided by the electromagnets. By applying the method of modal superposition, the first n modes are retained and the displacement vector {δ} of the beam is expressed in Equation (12).
FIGURE 3 Assembling process of the mass and stiffness matrices.
where η i are the generalized displacements and {φ i } are the mode shapes of the beam. By substituting Equation (12) into Equation (10), the equations of motion for the generalized displacements can be obtained using the orthogonality of the mode shapes.
where ξ i is the damping ratio and ω i is the ith natural frequency. In Equation (13), the damping term is given directly according to the prescribed damping ratio ξ i . By solving Equation (13), the generalized displacements can be calculated. Subsequently, the displacement response of any node of the linear beam can be obtained by substituting the generalized displacements into Equation (12). For the piezoelectric cantilever beam, the electromechanical coupling characteristics should be considered. Therefore, following Dai et al. 44 and Erturk and Inman, 45 the electromechanical model of the system with a mechanical stopper can be written as follows, where C P is the equivalent capacitance, θ i is the equivalent electromechanical coupling coefficient, and R is the load resistance. C P and θ i can be identified from experiments. Furthermore, Q(t) is the impact force generated when the beam strikes the stopper, expressed in terms of λ, the equivalent spring stiffness obtained from the Hertz contact model. It is assumed that the stopper is located at the Nth node of the beam, and δ (1) N represents the first degree of freedom of the Nth nodal displacement vector, namely the lateral displacement of the point on the beam corresponding to the position of the stopper. The system parameters are listed in Table 1.
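The impact force Q(t) enters the model only when the beam deflection at the stopper location exceeds the gap d. The snippet below sketches one common way of encoding such a piecewise Hertz-type contact force; the 3/2 power law and the stiffness value are illustrative assumptions, since the text only states that λ is obtained from the Hertz contact model.

```python
# Sketch of a piecewise Hertz-type contact force at the stopper node.
# lam (equivalent contact stiffness) and the 3/2 exponent are illustrative assumptions.
def impact_force(deflection, gap, lam=1.0e7):
    """Return the contact force for a given beam deflection at the stopper node."""
    penetration = deflection - gap
    if penetration <= 0.0:           # beam not touching the stopper: no impact force
        return 0.0
    return lam * penetration**1.5    # Hertzian point-contact law F = lam * delta^(3/2)

print(impact_force(30e-6, 40e-6))    # deflection below the gap -> 0
print(impact_force(45e-6, 40e-6))    # 5 micron penetration -> finite force
```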
| NUMERICAL INVESTIGATION
Based on the mathematical model, numerical simulations are carried out, and the influence of the excitation level and of the gap between the beam and the stopper on the energy harvesting performance is evaluated. The fourth-order Runge-Kutta algorithm is adopted to obtain the numerical results. During the simulation, the displacement of the beam node corresponding to the position of the stopper is monitored. A dichotomy (bisection) method is applied to reduce the simulation step when the piezoelectric beam impacts the stopper and when it disengages from contact, ensuring that the difference between the numerical result and the gap is less than a preset threshold.
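A minimal sketch of this time-stepping scheme, fixed-step fourth-order Runge-Kutta with a bisection refinement of the step whenever the monitored displacement crosses the gap, is given below for a generic first-order state-space system. It is illustrative only and omits the full electromechanical state vector of the model.

```python
# Sketch: RK4 integration with bisection of the time step around impact events,
# i.e. whenever the monitored displacement crosses the gap d. Generic and illustrative.
import numpy as np

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h,   y + h   * k3)
    return y + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)

def integrate_with_impact(f, y0, t_end, h, monitor, gap, tol=1e-9, max_bisect=40):
    """monitor(y) returns the displacement at the stopper node; the step size is
    bisected so the state lands within tol of the gap at each detected crossing."""
    t, y = 0.0, np.asarray(y0, dtype=float)
    ts, ys = [t], [y.copy()]
    while t < t_end:
        y_full = rk4_step(f, t, y, h)
        if (monitor(y) - gap) * (monitor(y_full) - gap) < 0.0:   # crossing inside step
            lo, hi = 0.0, h                                      # bracket on step size
            for _ in range(max_bisect):
                mid = 0.5 * (lo + hi)
                y_mid = rk4_step(f, t, y, mid)
                if abs(monitor(y_mid) - gap) < tol:
                    break
                if (monitor(y) - gap) * (monitor(y_mid) - gap) < 0.0:
                    hi = mid            # crossing lies in the first half of the step
                else:
                    lo = mid            # crossing lies in the second half of the step
            t, y = t + mid, y_mid       # land (nearly) on the impact instant
        else:
            t, y = t + h, y_full
        ts.append(t); ys.append(y.copy())
    return np.array(ts), np.array(ys)
```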
Under forward-sweep frequency excitation with levels of 0.0012, 0.0018, 0.0022, 0.0026, and 0.0030 N, the open-circuit voltage response of the linear piezoelectric beam without stopper is illustrated in Figure 4A.At any excitation level, the output voltage first increases and then decreases, and obtains a maximum value at the frequency of 198.5 Hz.The peak voltages are respectively 1.78, 2.67, 3.26, 3.86, and 4.45 V for excitation levels of 0.0012, 0.0018, 0.0022, 0.0026, and 0.0030 N. In general, the output performance of the linear piezoelectric beam without stopper could be enhanced by increasing the excitation level.Under reverse-sweep frequency excitation, the open-circuit voltage response of the linear piezoelectric beam at various excitation levels are depicted in Figure 4B and there is almost no changes in the peak voltages compared with the results under forward-sweep frequency excitation.However, the frequency, at which the peak voltages are obtained, is 197.5 Hz and it is a little smaller compared with that under forward-sweep frequency excitation.
When the mechanical stopper is introduced into the system, the voltage response under an excitation level of 0.0030 N is used to evaluate the influence of the gap between the beam and the stopper. For a gap of 0 μm, the response of the system under forward-sweep frequency excitation, shown in Figure 4C, indicates that the output voltage is relatively low and reaches a peak value at a frequency of 216.7 Hz. Compared with the results of the linear counterpart, there is an obvious shift in the frequency at which the peak voltage is obtained. In contrast, the voltage response of the system with a gap of 140 μm is basically consistent with the output of the linear counterpart. In this case, the displacement amplitude of the end of the cantilever beam is smaller than the gap between the beam and the stopper, and the response of the system is linear.
As the gap between the beam and stopper decreases to 120 μm, the voltage response under forward-sweep frequency excitation is slightly different from that of linear counterpart, and the system exhibits weak nonlinearity.Meanwhile, the displacement response of the end of the cantilever beam is close to the gap between the beam and stopper.Therefore, it can be reached that obvious nonlinearities will be introduced into the system by further decreasing the gap between the beam and stopper.For the gap equaling 100, 80, 60, 40, and 20 μm, the voltage response under forward-sweep frequency excitation, illustrated in Figure 4C, reveals that the decreasing of the gap could broaden the response frequency range, and the corresponding half-power bandwidth are 6.2, 7.9, 10.5, 15.From the numerical simulations, it is observed that the displacement response of the piezoelectric beam is in the micron scale, and the half-power bandwidth of the system can be broadened by decreasing the gap between the beam and stopper.Due to the decreasing of the gap, the response amplitude of the cantilever beam is limited, and the peak voltage exhibits a decreasing trend.To validate the numerical results calculated from the electromechanical model, experiments of the microvibrational piezoelectric energy harvesting system with mechanical stopper are carried out under different conditions, and the experiment setup is illustrated in Figure 5.In the experiment, a DC power supply is applied to provide current for the electromagnets, and an AC signal generated by a function generator (Agilent 33120 A) is sent to the coil to realize alternating magnetic force excitation.The amplitude of the magnetic force is determined by applying the Wheatstone bridge, and the output voltage of the piezoelectric beam is acquired by a data acquisition card (NI-USB-6259) and then processed by a computer.The piezoelectric beam used in the experiments is made of PZT with the dimension of 32 mm × 7.2 mm × 0.83 mm, and the coils applied has an external diameter of 8 mm, an inner diameter of 4 mm, and a thickness of 3 mm.Furthermore, the piezoelectric beam is fixed by a 3D printing fixture, and a precision sliding table with a positioning accuracy of 20 μm is fixed to the bottom of the 3D printing fixture to adjust the distance between the piezoelectric beam and the stopper.
| Calibration of magnetic force
Wheatstone bridge has a high sensitivity in resistance measurement, and the unbalanced bridge is commonly utilized to measure the variation in resistance.Wheatstone bridge can be categorized into single-bridge, halfbridge, and full-bridge, among which the full-bridge has the highest sensitivity.Therefore, the full-bridge circuit combined with resistive strain gauges is applied in this paper to calibrate the magnetic force exerted on the piezoelectric cantilever beam.To further improve the sensitivity of measurement, the piezoelectric cantilever beam is replaced by a plastic sheet with a small elastic modulus for calibration.The benefit of this is to make the cantilever beam deformable more easily under ultra-low electromagnetic excitation, thus causing the change in the resistance of the resistance strain gauge.
In the experiment, four resistive strain gauges (R 1 , R 2 , R 3 , and R 4 ) of the same size are used as bridge arms and attached to the upper and lower surfaces at the root of the cantilever beam, as shown in Figure 6A. A source meter (Keithley 2400) provides the operating voltage to the Wheatstone bridge circuit, and a nano-voltmeter (Keithley 2182A) measures the voltage between the parallel arms. Figure 6B shows the circuit diagram of the Wheatstone bridge, where R 1 , R 2 , R 3 , and R 4 are the resistances of the four resistive strain gauges, all with the same value R. The resistances of the strain gauges depend on the deformation of the cantilever beam, and the voltage between the parallel arms can be expressed as follows.
where U is the voltage provided by the source meter.When a mass with known weight is placed at the end of the cantilever beam, the beam will bend and cause the change in the strain gauges.Then, the voltage between the parallel arms is calculated as Equation (18).
where R Δ i represents the change in the resistance of each strain gauge.For the reason that the deformation of the cantilever beam is small, the resulting change in resistance is also very tiny.Therefore, Equation ( 18) can be simplified as the following.
In small elastic deformation, the relative change in the resistance of the strain gauge is positively correlated with strain, as in Equation (20).
where K is the sensitivity coefficient of the resistance strain gauge, and ε is the strain at the root of the cantilever beam.In this condition, Equation ( 20) can be further simplified as following.
Accordingly, the strain signal can be linearly converted into a voltage signal through the Wheatstone bridge. For the cantilever beam, the strain under the action of the mass block can be expressed as ε = 6mgL/(Ebh 2 ), where m is the mass of the block, g is the acceleration due to gravity, L is the distance between the strain gauge and the mass block, E is the elastic modulus of the cantilever beam, b is the width of the cantilever beam, and h is its thickness. According to Equations (21) and (22), the mass m is also linearly related to the voltage variation. Therefore, the relationship between the mass and the voltage can be obtained, as illustrated in Figure 7A.
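Since the bridge output is linear in the applied tip force, the calibration reduces to a single proportionality constant. The short sketch below spells out that conversion under the stated full-bridge and bending-strain relations; the numerical values of K, U and the beam geometry are placeholders, not the calibration values used in the experiment.

```python
# Sketch: converting the measured bridge voltage variation into the equivalent tip force.
# Assumes a full bridge with four equal active gauges, so dU = U * K * eps, and a
# bending strain eps = 6*F*L / (E*b*h**2) at the gauge location. Values are placeholders.
def force_from_bridge_voltage(dU, U=5.0, K=2.0, L=0.025, E=2.0e9, b=0.0072, h=0.5e-3):
    """dU: measured voltage variation (V); returns the equivalent tip force (N)."""
    eps = dU / (U * K)                    # strain from the full-bridge relation
    return eps * E * b * h**2 / (6.0 * L)

# Example: a 1 mV bridge output with the placeholder parameters above
print(force_from_bridge_voltage(1e-3))
```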
When the coil at the free end of the cantilever beam is placed in the magnetic field, the deformation is stimulated by the electromagnetic force.In the experiment, the magnetic field at the position of the coil is measured with a Gauss meter (F.W.Ell 5180, F.W.ELL, USA).By changing the output current of DC power supply, the magnetic force exerted on the beam through the electromagnet can be adjusted.Then, the relationship between the magnetic field at the position of the coil and the electromagnetic force can be obtained according to the relationship between the voltage variation and the mass, as depicted in Figure 7B.Therefore, the electromagnetic force exerted on the cantilever beam can be calibrated, and it can be adjusted in the following experiments.For excitation levels of 0.0012, 0.0018, 0.0022, 0.0026, and 0.003 N, the open-circuit voltage response of the linear piezoelectric cantilever beam under forward sweep frequency excitations are illustrated in Figure 8A.At a certain excitation level, the response voltage first increases and then decreases with an increase in the excitation frequency, and achieves the maximum value near the resonance frequency.And the obtained maximum voltage for excitation levels of 0.0012, 0.0018, 0.0022, 0.0026, and 0.003 N are respectively, 1.43, 2.10, 2.62, 3.22, and 3.97 V at the frequencies of 202.8, 202.6, 202.1, 201.7, and 201 Hz.Obviously, the increase in the excitation level could enhance the output performance of the linear piezoelectric beam.To be noted, the frequency, corresponding to the maximum voltage, decreases with an increase in the excitation level.The reason for this phenomenon may be that the direction of the electromagnetic excitation in the experiment was not completely perpendicular to the piezoelectric beam, and there was a small component in the axis of the beam.Thus, the axial preload will decrease the resonant frequency.Under reverse sweep frequency excitation, the voltage response of the linear piezoelectric beam under various excitation levels is shown in Figure 8B.When the excitation frequencies are respectively 201. 6, 201.3, 200.9, 200.6, and 200.4 Hz, the piezoelectric beam under excitation levels of 0.0012, 0.0018, 0.0022, 0.0026, and 0.003 N achieve the maximum voltages and they are 1.46, 2.12, 2.64, 3.29, and 4.01 V, respectively.It can be observed that the output performance of the piezoelectric beam under the reverse sweep frequency excitation is consistent with that under the forward sweep frequency excitation, and there is only a little difference in the increasing trend near the resonance frequency.
Under excitation with a level of 0.003 N and a frequency of 200.3 Hz, the voltage response of the linear piezoelectric beam and corresponding frequency spectrum are, respectively, shown in Figure 9A,B.The achieved maximum voltage is 3.75 V, and the energy in the frequency spectrum mainly distributes at the frequency of 200.3 Hz, which is consistent with the excitation frequency.Obviously, the response of the piezoelectric beam exhibits a perfect periodic vibrational characteristic under electromagnetic excitation with ultra-low levels.
Additionally, different load resistances are connected to the circuit to investigate its influence on the energy harvesting performance of the linear piezoelectric beam.In the experiment, a parametric sweep was conducted for different resistance values ranging from 100 Ω to 900 KΩ.The measurements were performed under excitation with a frequency of 200.3 Hz and an amplitude of 0.003 N. Figure 10A,B, respectively, illustrate the variation of peak output voltage and maximum output power with the change of load resistance.With the increase of load resistance, the voltage gradually increases to an asymptotic value.While, the output power increases to a maximum value and then starts to decrease to an asymptotic value with an increase in the load resistance.The phenomenon indicates that the optimum load resistance of the piezoelectric beam is 50 kΩ and the obtained maximum output power is 24.12 μW.
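The load sweep described above amounts to computing the average power delivered to each resistor from the measured peak voltage. A small sketch of that post-processing is shown below, assuming a sinusoidal output so that P = Vpeak^2/(2R); the sample arrays are placeholders, not the measured data.

```python
# Sketch: average power delivered to each load from the measured peak voltage,
# assuming a sinusoidal output so that P = Vpeak**2 / (2*R). Data are placeholders.
import numpy as np

loads = np.array([1e2, 1e3, 1e4, 5e4, 1e5, 5e5, 9e5])      # ohm (placeholder sweep)
v_peak = np.array([0.05, 0.4, 1.4, 1.55, 1.9, 2.4, 2.5])   # volt (placeholder data)

power = v_peak**2 / (2.0 * loads)
best = np.argmax(power)
print(f'optimum load ~ {loads[best]:.0f} ohm, peak power ~ {power[best]*1e6:.1f} uW')
```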
Furthermore, the influence of the end mass and its position on the output performance of the linear piezoelectric beam was investigated.In the experiment, the end mass was provided by mass blocks with the same volume and masses of 0.1, 0.3, and 0.7 g, respectively.When the mass blocks were pasted at a position 20 mm away from the fixed end of the piezoelectric beam, the open-circuit voltage response of the system under forward sweep frequency | 2545 excitation with a level of 0.003 N is illustrated in Figure 11A.For the end mass of 0.1 g, the maximum voltage is 3.36 V and obtained at the frequency of 198 Hz.As the end mass increases to 0.3 and 0.7 g, the maximum voltage are, respectively, 3.4 V at the frequency of 193 Hz, and 3.0 V at the frequency of 181.9 Hz.Under reverse sweep frequency excitation, the voltage response of the system with different end masses is shown in Figure 11B.When the end mass are 0.1, 0.3 and 0.7 g, the obtained peak voltages are, respectively, 3.38, 3.42, and 3.0 V at the frequencies of 196.9, 192.1, and 181.1 Hz.In summary, the increase of the end mass is an effective method to reduce the resonant frequency of the system, which can be applied to adapt to different vibration frequencies.
To investigate the influence of the position of end mass on the energy harvesting performance, the end mass of 0.7 g was applied, and it was pasted at the position 10, 15, and 20 mm away from the fixed end of the piezoelectric beam.Under forward-sweep frequency excitation with a level of 0.003 N, the voltage response of the linear piezoelectric beam is exhibited in Figure 11C.With the end mass at the position 10, 15, and 20 mm away from the fixed end, the system achieved the peak voltages of 3.45, 3.32, and 2.97 V, respectively, at the resonance frequency of 199.7, 193.7, and 181.9 Hz.Under reverse-sweep frequency excitation, the maximum voltages are respectively obtained at 198.5, 192.8, and 181.2 Hz with the values of 3.44, 3.3, and 2.99 V, as shown in Figure 11D.From the results, it can be observed that the closer the end mass is to the free end of the piezoelectric cantilever, the lower is the resonance frequency of the system.By reasonably adjusting the end mass and its position on the linear piezoelectric beam, the system can be utilized to generate considerable electrical energy under excitation with different frequency.
| Response of nonlinear system with a mechanical stopper
In the experiment, the hemispherical mechanical stopper is made from PDMS, and had the diameter of 5 mm and elastic modulus of 1 MPa.When the gap between the beam and the stopper is 40 μm, the response of the nonlinear system is investigated under various excitation levels, and Figure 12A illustrates the open-circuit voltage response under forward sweep frequency excitation.When the excitation level is 0.0012 N, the response amplitude of the beam is smaller than 40 μm, and the response exhibits the same characteristics as the linear system, with a maximum output voltage of 1.32 V generated.For excitation levels of 0.0018, 0.0022, 0.0026, and 0.003 N, the maximum output voltages of the system are, respectively, 1.64, 1.73, 1.78, and 1.79 V, which are smaller than that of the linear piezoelectric beam.The reason for this phenomenon may be that the introduction of the stopper limits the response amplitude of the beam.Although, the response frequency range of the system is broadened, and the half-power bandwidth at excitation levels of 0.0018, 0.0022, 0.0026, and 0.003 N are, respectively, 4.3, 5.9, 7.7, and 9.9 Hz.In comparison, the halfpower bandwidth of the linear counterpart under excitation with various levels is about 3.0 Hz.Therefore, it can be conclude that the introduction of the mechanical stopper can widen the response frequency range of the system, and this phenomenon is even more pronounced with the increase of excitation level.Under reverse-sweep frequency excitation with different levels, the voltage response of the system is exhibited in Figure 12B, and the variation trend is almost consistent with the response under forward-sweep frequency excitation.To be noted, the jump phenomenon from the high-energy orbit to low-energy orbit under forward-sweep frequency excitation is more pronounced that the jump phenomenon from the low-energy orbit to high-energy orbit under reverse-sweep frequency excitation.
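Since the half-power bandwidth is the figure of merit used throughout this section, a short sketch of how it can be extracted from a swept voltage response is given below: the power halves where the voltage falls to 1/sqrt(2) of its peak. The frequency and voltage arrays are synthetic placeholders, and the routine assumes a single contiguous half-power band.

```python
# Sketch: half-power (-3 dB) bandwidth from a swept voltage response.
# Power is proportional to V**2, so the half-power level is Vmax / sqrt(2).
import numpy as np

def half_power_bandwidth(freq, voltage):
    freq, voltage = np.asarray(freq), np.asarray(voltage)
    level = voltage.max() / np.sqrt(2.0)
    above = np.where(voltage >= level)[0]        # indices inside the half-power band
    if above.size < 2:
        return 0.0
    return freq[above[-1]] - freq[above[0]]

# Placeholder sweep data (not measured values)
f = np.linspace(180, 230, 501)
v = 1.5 / np.sqrt(1.0 + ((f - 205.0) / 5.0)**2)  # synthetic resonance-like curve
print(f'half-power bandwidth ~ {half_power_bandwidth(f, v):.1f} Hz')
```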
Under excitation with a level of 0.003 N, the influence of the gap between the beam and the stopper on the performance of the system is investigated. Under forward-sweep frequency excitation, the open-circuit voltage response of the system with different gaps between the beam and the stopper is illustrated in Figure 13A. When the gap between the beam and the stopper is 140 μm, the voltage response is basically consistent with the response of the linear system, and the voltage reaches its maximum value (3.84 V) at the frequency of 200.9 Hz. In this case, the gap between the beam and the stopper is larger than the response amplitude of the beam, and there is no impact between the PDMS stopper and the beam. As the gap between the beam and the stopper decreases to 120 μm, there is a weak impact between the beam and the stopper, and the maximum voltage is achieved at the frequency of 201.1 Hz. As the gap further decreases to 100 μm, the voltage response of the piezoelectric beam exhibits obvious nonlinear characteristics. Before the beam impacts the PDMS stopper, the voltage response exhibits linear characteristics and is similar to the results of the linear counterpart. The impact between the beam and the stopper occurs at the frequency of 199.3 Hz, and the peak voltage shows an increasing trend as the frequency continues to increase, with a maximum voltage of 3.14 V achieved at the frequency of 201.7 Hz. After the frequency of 203.5 Hz, the impact phenomenon no longer occurs, and the response again exhibits linear characteristics. Furthermore, the half-power bandwidth in this case is 5.8 Hz. As the gap decreases to 80, 60, 40, and 20 μm, the response amplitude of the beam and the voltage are limited, and the maximum voltages are, respectively, 2.70, 2.10, 1.51, and 1.17 V at the frequencies of 202, 202.4, 203.4, and 205.4 Hz. However, the half-power bandwidth increases with a decrease in the gap, and it is, respectively, 7.2, 10, 14.4, and 18.2 Hz for gaps of 80, 60, 40, and 20 μm. Under reverse-sweep frequency excitation, the voltage response of the system is illustrated in Figure 13B. It is seen that the response under reverse-sweep frequency excitation is similar to that under forward-sweep frequency excitation, while the jump phenomenon is not obvious. For gaps of 100, 80, 60, 40, and 20 μm, the maximum voltages are achieved at the frequencies of 201.2, 202, 202.3, 204.1, and 206.2 Hz, with values of 3.2, 2.65, 2.14, 1.44, and 0.91 V, respectively. It should be noted that the half-power bandwidths are, respectively, 6.3, 7.9, 10.3, 15.8, and 25.1 Hz. From the experimental results, it is concluded that the frequency response range can be broadened with a decrease of the gap between the beam and the stopper, with a smaller maximum voltage generated.
In addition, the material of the mechanical stopper on the response voltage and half-power bandwidth of the system is studied in the experiment.A new mechanical stopper made from copper (elasticity modulus: about 100 GPa) is applied and it has the same dimension with the PDMS stopper utilized before.Herein, the experiments are carried out under sweep frequency excitation with a level of 0.003 N, and the gap between the beam and the stopper with the value of 40, 60, 80, and 100 μm is emphasized.When the gap between the beam and the stopper is 40 μm, the open-circuit voltage response of the system under forward and reverse sweep frequency excitation is illustrated in Figure 14A.Before the impact between the beam and the stopper happens, the voltage response is consistent with the results of linear counterpart.For forward-sweep frequency excitation, the impact occurs at the frequency of 192.2 Hz and terminates at the frequency of 229.2 Hz.A maximum voltage of 1.56 V is generated, and the half-power bandwidth is about 37.1 Hz.While, the frequency range where the impact occurs is from 192.2 to 202.2 Hz under reverse sweep frequency excitation.Regarding the output performance, the obtained maximum voltage is 1.42 V and the half-power bandwidth is 12.4 Hz.Compared to the results utilizing the PDMS stopper, it is observed that the copper stopper results in a much wider half-power bandwidth under forward-sweep frequency excitation, and this phenomenon may be contributed to the larger elastic modulus of the copper stopper.
With an increase in the gap between the beam and the stopper, the impact between them begins to occur at higher frequencies and ends up at lower frequencies under forward sweep frequency excitation.This phenomenon contributes to that the half-power bandwidth becomes narrower and they are respectively 31.9, 24, and 11.5 Hz when the gap is 60, 80, and 100 μm, as shown in Figure 14B-D.To be noted, the increase in the gap leads to an increase in the response amplitude and the maximum voltages under forward sweep frequency excitation are, respectively, 2.05, 2.64, and 3.0 V for gaps of 60, 80, and 100 μm.Under reverse sweep frequency excitation, the half-power bandwidth also shows a narrowing trend with an increase in the gap between the beam and the stopper, with the values of 8.4, 6.5, and 5.2 Hz for gaps of 60, 80, and 100 μm.And the corresponding maximum voltage are, respectively, 2.03, 2.45, and 2.95 V.In general, the increase in the elastic modulus of the mechanical stopper could broaden the response frequency range of the system, and the overall performance depends significantly on the gaps between the beam and the stopper.| 2549
| Comparison of energy harvesting devices
To compare the performance of the impact-vibration energy harvesting device proposed in this paper with that of other energy harvesting devices, a comparison is given in Table 2. The device in this paper has advantages in frequency bandwidth and power density. Moreover, a higher output power density can be achieved by optimizing the volume and shape of the fixture.
| CONCLUSION
In this work, a microvibrational piezoelectric energy harvesting system with a mechanical stopper for performance enhancement has been designed, modeled, and investigated numerically and experimentally. The configuration is composed of a piezoelectric cantilever beam, a mechanical stopper, and a coil attached at the free end of the beam, which is placed in a magnetic field to provide an ultra-low level excitation. A theoretical model of the system has been established, and numerical simulations indicate that an increase in the excitation level has a positive effect on the output and that the nonlinearity resulting from the introduction of the mechanical stopper can broaden the half-power bandwidth.
In the experiments, the magnetic excitation force is calibrated through the Wheatstone bridge. Experimental results for the linear piezoelectric beam are consistent with the numerical simulation, and it is demonstrated that the vibration of the beam is on the micron scale. Under excitation with a level of 0.003 N and a frequency of 200.3 Hz, a peak output power of 24.12 μW is achieved with a matched load resistance of 50 kΩ. Furthermore, the experiments verify that the frequency range for large-amplitude oscillation can be controlled by adjusting the end mass and its position on the beam.
When a PDMS stopper with an elastic modulus of about 1 MPa is utilized, the vibration of the piezoelectric beam exhibits strong nonlinearity. Experimental results under different excitation conditions confirm that the response bandwidth of the system can be broadened by increasing the excitation level and decreasing the gap between the beam and the stopper. It should be noted that decreasing the gap limits the response amplitude of the beam and leads to a smaller peak voltage. To further investigate the influence of the stopper material on the energy harvesting performance, a copper stopper with a larger elastic modulus of about 100 GPa is used in place of the PDMS stopper. Experimental results under forward-sweep frequency excitation with a level of 0.003 N reveal that the larger elastic modulus results in a wider response frequency range, and the half-power bandwidth can reach 37.1 Hz when the gap between the beam and the stopper is 40 μm. In general, the results provided in this paper can serve as a reference for the design of precision sensors, elastic modulus identification, surface flatness testing, and so on.
FIGURE 1 Schematic diagram of the microvibrational energy harvesting system.
3, and 20.7 Hz, respectively.And the peak voltages are 3.0, 2.55, 2.04, 1.48, and 1.0 V, respectively, at the excitation frequencies of 200.2, 201.1, 202.4,205, and 208.2 Hz for the gap equaling 100, 80, 60, 40, and 20 μm.It can be observed that, due to the decreasing of the gap, the peak voltages are achieved at higher frequencies and show an obvious decreasing trend.Under reverse-sweep frequency excitation, the open-circuit voltage response of the system for various gaps are illustrated in Figure 4D, and the changing trend of voltage is almost same as that under forwardsweep frequency excitation.For the gap equaling 100, 80, 60, 40, and 20 μm, the half-power bandwidth are 6.4,8, 10.4,14.4, and 19.8 Hz, respectively.
FIGURE 4 Numerical results of the linear and nonlinear system under sweep frequency excitation: (A) linear system, forward-sweep; (B) linear system, reverse-sweep; (C) nonlinear system, forward-sweep; and (D) nonlinear system, reverse-sweep.
| EXPERIMENTAL VALIDATION
4.1 | Experimental setup
FIGURE 6 Magnetic force calibration: (A) schematic diagram of the measurement principle; (B) circuit diagram of the Wheatstone bridge.
FIGURE 7 Magnetic force calibration: (A) relationship between mass and voltage variation; (B) relationship between magnetic field and electromagnetic force.
FIGURE 8 Voltage response of the linear system under forward (A) and reverse (B) sweep frequency excitation with various levels.
FIGURE 9 Voltage response and frequency spectrum at the excitation of 200.3 Hz and 0.003 N.
FIGURE 10 Influence of external load resistance on the peak voltage and maximum power at the excitation of 200.3 Hz and 0.003 N.
FIGURE 11 Influence of end mass on the performance: (A) forward and (B) reverse sweep frequency excitation. Influence of the position of the end mass on the performance: (C) forward and (D) reverse sweep frequency excitation.
FIGURE 12 Influence of excitation level on the performance with a gap of 40 μm: (A) forward and (B) reverse sweep frequency excitation.
FIGURE 13 Influence of the gap between the beam and stopper on the performance at an excitation level of 0.003 N: (A) forward and (B) reverse sweep frequency excitation.
FIGURE 14 Influence of the material of the stopper: (A) 40 μm; (B) 60 μm; (C) 80 μm; (D) 100 μm.
TABLE 1 Physical properties of the structures.
TABLE 2 | 9,524.2 | 2024-05-13T00:00:00.000 | [
"Engineering",
"Physics"
] |
Explaining electron and muon $g-2$ anomalies in an Aligned 2-Higgs Doublet Model with Right-Handed Neutrinos
We explain anomalies currently present in various data samples used for the measurement of the anomalous magnetic moment of electron ($a_e$) and muon ($a_\mu$) in terms of an Aligned 2-Higgs Doublet Model with right-handed neutrinos. The explanation is driven by one and two-loop topologies wherein a very light CP-odd neutral Higgs state ($A$) contributes significantly to $a_\mu$ but negligibly to $a_e$, so as to revert the sign of the new physics corrections in the former case with respect to the latter, wherein the dominant contribution is due to a charged Higgs boson ($H^\pm$) and heavy neutrinos with mass at the electroweak scale. For the region of parameter space of our new physics model which explains the aforementioned anomalies we also predict an almost background-free smoking-gun signature of it, consisting of $H^\pm A$ production followed by Higgs boson decays yielding multi-$\tau$ final states, which can be pursued at the Large Hadron Collider.
I. INTRODUCTION
It is tempting to conclude that the time-honoured discrepancy between the Standard Model (SM) prediction for the muon anomalous magnetic moment and its experimental measurement is a firm indication of New Physics (NP) Beyond the SM (BSM). Moreover, after improving the determination of the fine structure constant, it recently turned out that there is also a significant difference between the experimental result of the electron anomalous magnetic moment and the corresponding SM prediction. According to the latest results, we have the following deviations in the anomalous magnetic moments of muon and electrons [1,2]: which indicate a 3.1σ and 2.4σ discrepancy between theory and experiment, respectively. Fermilab and J-PARC experiments [3,4] are going to explore these anomalies in the near future with much higher precision, but now it is worthwhile speculating what possible NP phenomena might lie behind these two measurements. In doing so, it should be noted that δa e and δa µ have opposite signs, which provides a challenge for any BSM explanation attempting to account for both of them simultaneously. This generated growing interest and several extensions of the SM have been analysed as possible origin of the results in (1). It is clear that any Electro-Weak (EW) scale NP effects that may explain the a µ result will lead to corrections to a e of order 10 −5 times smaller, due to the typical relative suppression generated by the mass ratio (m e /m µ ) 2 , and, crucially, with the same sign. Therefore, the anomalies of a µ and a e cannot be resolved simultaneously with the same NP contribution, unless it violates lepton flavour universality in a very peculiar way, so as to give a positive contribution to a µ and a negative one to a e . Some attempts along this line were in fact pursued by Ref. .
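The (m_e/m_µ)^2 suppression quoted above is easy to make quantitative; the two-line estimate below uses the PDG lepton masses and simply evaluates the naive scaling of a flavour-universal contribution from a_µ to a_e.

```python
# Naive scaling of a flavour-universal NP contribution from a_mu to a_e:
# the correction scales as (m_e / m_mu)**2.
m_e, m_mu = 0.51099895, 105.6583755          # lepton masses in MeV (PDG values)
print(f'(m_e/m_mu)^2 = {(m_e / m_mu)**2:.2e}')  # ~2.3e-5, i.e. the ~10^-5 quoted above
```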
In this paper, we analyse the anomalous magnetic moments of the muon and the electron in a 2HDM with RH neutrinos and aligned Yukawa couplings. We emphasise that, in this class of models, one can account for a e through one-loop effects generated by the exchange of RH neutrinos and charged Higgs bosons. At the same time, the measured value of a µ can be obtained accurately through two-loop effects generated by a light CP-odd neutral Higgs state in combination with charged leptons. This phenomenology requires the H ± and A states to be relatively light, so that their pair production process has a sizeable cross section at the Large Hadron Collider (LHC), thereby enabling one to fingerprint this A2HDM with RH neutrinos in the years to come.
The plan of this paper is as follows. In the next section we describe our NP scenario. In the following one we present the formulae for a e and a µ . After this, we present our results for the two anomalous magnetic moments and the aforementioned H ± A signature in two separate subsections. We then conclude.
II. A2HDM WITH RH NEUTRINOS
The most general Yukawa Lagrangian of the 2HDM can be written as where the quark Q L , u R , d R and lepton L R , R , ν R fields are defined in the weak interaction basis and we also included the couplings of the Left-Handed (LH) lepton doublets with the RH neutrinos. The Φ 1,2 fields are the two Higgs doublets in the Higgs basis and, as customary,Φ i = iσ 2 Φ * i . The Yukawa couplings Y 1j and Y 2j , with j = u, d, , are 3 × 3 complex matrices while Y 1ν and Y 2ν are 3 × n R matrices, with n R being the number of RH neutrinos. Besides implementing the standard Z 2 symmetry, potentially dangerous tree-level Flavour Changing Neutral Currents (FCNCs) can be tamed by requiring the alignment in flavour space of the two Yukawa matrices that couple to the same right-handed quark or lepton. This implies 1 Renormalisation group effects can introduce some misalignment in the Yukawa couplings. These provide negligible FCNC contributions in the quark sector suppressed by mass hierarchies m q m 2 q /v 3 [27,28]. The Yukawa Lagrangian in Eq. (2) generates a Dirac mass matrix for the standard neutrinos and can also be supplemented by a Majorana mass term M R for the RH ones where C is the charge-conjugation operator. In particular, by exploiting a bi-unitary transformation in the charged lepton sector and a unitary transformation on the RH neutrinos, L L = U L L L , R = U R R and ν R = U ν R ν R , it is always possible to diagonalise (with real eigenvalues) the charged lepton and Majorana mass matrices at the same time, In this basis the neutrino mass matrix can be written as such that M ν = U T MU provides the masses of the three light active neutrinos ν l and of the remaining n R heavy sterile neutrinos ν h . The Yukawa interactions of the physical (pseudo)scalars 2 with the mass eigenstate fermions are then described by (8) where the couplings of the neutral Higgs states to the fermions are given by where the matrix R diagonalises the scalar mass matrix. Because of the alignment of the Yukawa matrices all the couplings of the (pseudo)scalar fields to fermions are proportional to the corresponding mass matrices, hence the A2HDM acronym. Therefore, this 2HDM realisation is notably different from the standard four Types [29][30][31], wherein the Yukawa couplings are fixed to well defined functions of the ratio of the Vacuum Expectation Values (VEVs) of the two Higgs doublets, denoted by tan β, see Tab. I. Then, the charged Higgs boson currents in the lepton sector are given by: Finally, the neutral and charged gauge boson interactions of the neutrinos are We refer to [32] for further details on the model.
III. ANOMALOUS MAGNETIC MOMENTS
The one-loop contributions to the anomalous magnetic moment of either lepton are where the individual terms are with The index of the contributions corresponds to the different subfigures in Fig. 1 where, for simplicity, we show only the diagrams determined by the charged currents. The contribution g (a) alone would exactly correspond to the SM case if it were not for the rescaling induced by the neutrino mixing matrix. Nevertheless, the constant terms in g (a) and g (b) sums up to the SM result of 5/3 due to the unitarity of such a mixing matrix. Therefore, these can be neglected since they do not contribute to the NP part. The term g 2HDM contains all the neutral Higgs boson contributions which are typical of the 2HDM alone. These are typically suppressed by a factor of m 2 /m 2 φ , with φ being one of the neutral (pseudo)scalar states of the 2HDM.
We can then write the contribution to (g − 2) , = e, µ, due to charged currents as follows: The contribution to (g − 2) , = e, µ, from the neutral (pseudo)scalars is where For the sake of completeness, we also give the Barr-Zee two-loop diagram contributions, [33][34][35][36][37][38] where N f c is the number of colours and Q f the electric charge while The total contribution to the g − 2 is thus given by a = a ± + a 0 + a two-loop . Finally we present the Branching Ratio (BR) of the Lepton Flavour Violating (LFV) decays α → β γ (with α, β = e, µ, τ ), as follows: with where Γ α is the total decay width of the lepton α and the loop functions are given above. The structure of the loop corrections is obviously the same as the one appearing above in the charged current corrections to (g − 2) . The measured BR of these LFV decays will act as a constraint in our analysis.
IV. RESULTS
The solution of the a µ anomaly relies upon a light pseudoscalar state A contributing to the dominant two-loop Barr-Zee diagrams, as customary in 2HDMs. The explanation of the anomaly is particularly simple in the 'lepton-specific' 2HDM scenario, also dubbed Type-IV, in which the couplings of the A and H ± bosons to the leptons can be enhanced (for large tan β) while those to the quarks are suppressed (being proportional to tan −1 β). Indeed, while it is always possible to enhance the couplings to the leptons in any of the four standard realisations of the 2HDM, in Type-I and -III this is done at the cost of increasing the couplings to the up quarks (for small tan β). As a consequence, one faces a strong constraint from the perturbativity of the top-quark Yukawa coupling. In Type-II, instead, the couplings to the down quarks are enhanced (for large tan β) and severe bounds are imposed by flavour physics and direct searches for extra Higgs bosons. These issues can be much more easily addressed in the A2HDM, since the couplings to leptons and quarks are disentangled and ζ can be raised independently of ζ u and ζ d .
It is worth emphasising that a simultaneous explanation of both the a_e and a_µ anomalies can be achieved neither in the Z_2 symmetric scenarios of the 2HDM nor in the pure A2HDM, since the contributions to the anomalous moments have a fixed sign, as they both originate from the same ζ_ℓ. In [26], this constraint has been overcome by decoupling the electron and muon sectors, where all Yukawa matrices can be made diagonal in the fermion mass basis [39,40]. Here, instead, the degeneracy will be broken by exploiting the lepton non-universality that naturally arises in RH neutrino models: augmenting the A2HDM with RH neutrinos can allow for an independent solution to a_e. This is obtained with the one-loop diagrams shown in Fig. 1, provided that the charged Higgs boson is not too heavy to suppress the loop corrections.
The mass of the charged Higgs boson is bounded from below by direct searches at LEP II. In particular, searches for H^± pair production provide m_H± ≳ 93.5 GeV at 95% Confidence Level (CL) [41], assuming the charged Higgs only decays leptonically into τν. Since the mass of the pseudoscalar A state is required to be much lighter than the charged one, our scenario realises the mass hierarchy m_A ≪ m_H± ≃ m_H. The near degeneracy between the heavy neutral scalar and the charged Higgs state is induced by the constraints on the EW Precision Observables (EWPOs), i.e., S, T and U. Indeed, the most stringent one arises from custodial symmetry (the ∆T constraint, see footnote 3), which fixes the mass splitting to (m_H± − m_H) ∼ O(10 GeV). As quoted above, scenarios with light scalar states are strongly constrained by flavour physics, in particular by neutral meson mixings (∆M_q and ε_K), leptonic decays of neutral and charged mesons, as well as radiative B decays (b → sγ). These mostly depend on m_H± and ζ_u,d. Such measurements are reconciled in our setup simply by requiring a sufficiently small ζ_u,d, which we will set to zero for the sake of simplicity. This in turn implies that the Yukawa interactions in our BSM scenario are purely leptophilic. This configuration also naturally complies with the null results of searches for extra (pseudo)scalars at the LHC. In this respect, we have required that the Higgs sector of our model is compliant with the experimental constraints implemented in HiggsSignals [42] (capturing the LHC measurements of the discovered Higgs boson) and in HiggsBounds [43] (enforcing limits following the aforementioned searches for the H, A and H^± states at past and present colliders). (Footnote 3: The expression for ∆T assumes the mass hierarchy m_A ≪ m_Z ≪ m_H± ≃ m_H and sin(β − α) ≃ 1.)
Contributions mediated by the charged Higgs states also affect the leptonic decays ℓ_i → ℓ_j ν ν̄ at tree level, with the strongest constraint coming from τ → µνν̄ [44,45]. The corresponding bound projects onto the ratio z = ζ_ℓ^2 m_τ m_µ / m_H±^2 and gives |z| < 0.72 at 95% CL [46]. Finally, upper bounds on LFV processes (BR(µ → eγ) ≤ 4.2 × 10^-13, BR(τ → eγ) ≤ 3.3 × 10^-8, BR(τ → µγ) ≤ 4.4 × 10^-8, at 90% CL) constrain the RH neutrino interactions with the charged leptons. The charged Higgs boson also gives a large contribution to these decays. Since a RH neutrino is only employed in the explanation of the a_e anomaly, a non-negligible mixing with the electron family is strictly required. Therefore, the stringent constraint from µ → eγ and the milder one from τ → eγ can be satisfied by simply relying on the hierarchy |(U_Lh)_µν_h| ≪ |(U_Lh)_eν_h|.
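As a quick numerical illustration of how the τ → µνν̄ bound limits the leptonic alignment parameter, the sketch below inverts the ratio z = ζ_ℓ^2 m_τ m_µ / m_H±^2 for a few charged Higgs masses; the 0.72 limit and the 200 GeV reference mass are taken from the text, while the other mass values and the script itself are purely illustrative.

```python
# Hedged illustration: maximum |zeta_l| allowed by |z| = zeta_l^2 m_tau m_mu / m_H+^2 < 0.72
import math

m_tau, m_mu = 1.77686, 0.1056584   # charged lepton masses in GeV (PDG values)
z_max = 0.72                        # 95% CL bound quoted in the text

for m_charged in (100.0, 200.0, 500.0):       # charged Higgs masses in GeV (illustrative)
    zeta_max = math.sqrt(z_max * m_charged**2 / (m_tau * m_mu))
    print(f"m_H+ = {m_charged:5.0f} GeV  ->  |zeta_l| < {zeta_max:6.1f}")
```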
A. Predictions for δae and δaµ
The contribution to δa_e arising from the W^±, encoded in the G_W± function defined in Eq. (14), is negative but it can never be enhanced, being fixed by the gauge interactions. The impact of the charged Higgs boson in the loop functions is, however, much different. As an example, for large heavy neutrino masses it saturates to G_H± ≃ ζ_ℓ ζ_ν/2 − ζ_ν^2/6, or behaves as G_H± ≃ (ζ_ℓ ζ_ν/2 − ζ_ν^2/12)(m_ν_h^2/m_H±^2) for larger m_H±. In both cases, the solution of the a_e anomaly is facilitated by large ζ_ℓ and ζ_ν of opposite sign. The same effect would also push the predicted a_µ in the opposite direction with respect to the current measurement. This is not an issue, since the same hierarchy |(U_Lh)_µν_h| ≪ |(U_Lh)_eν_h| required to evade the LFV bounds also suppresses the contribution of the charged Higgs boson to the muon g − 2. As is well known in the literature, the latter can be explained in the 2HDM by the two-loop Barr-Zee diagrams of the neutral scalars, which provide a positive correction for a sufficiently light A. This contribution may compete in a_e against the one-loop effects discussed above, but it is found to be subdominant in most of the parameter space.
The results of our analysis are depicted in Figs. 2 and 3. The former shows the regions in which the predicted a_µ is within 1σ and 2σ of the measured central value. These are projected onto the most relevant parameter space, defined by m_A and ζ_ℓ. The mass of the charged Higgs boson has been fixed at a reference value of m_H± = 200 GeV; different choices of m_H± only slightly modify the contours shown in the plot. In Fig. 3 we show the prediction for a_e. The points are generated by scanning over the parameter space of the model and comply with the experimental and theoretical bounds quoted above, while reproducing a_µ within the 2σ range. The parameters are scanned as follows: m_ν_h ∈ (200, 2000) GeV, m_H±, m_H ∈ (100, 1000) GeV, m_A ∈ (10, 60) GeV, ζ_ℓ, ζ_ν ∈ (−150, 150) and |(U_Lh)_µν_h|^2 ∈ (10^-5, 10^-3). In Fig. 3(a) and (b), (g − 2)_e is plotted, respectively, against ζ_ν and the effective coupling ζ_ν Y_ν, which characterises this model and has been extensively discussed in [32]. The vertical dashed line shows the maximum value allowed by perturbativity. Finally, Fig. 3(c) shows the distribution of points along the ζ_ν and ζ_ℓ directions compliant with all the bounds discussed above as well as with the a_e and a_µ measurements within 2σ. As mentioned already, the two couplings must necessarily have opposite signs.
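For reference, the random scan over the quoted ranges can be set out schematically as below; the parameter ranges are those listed in the text, while the sampling measures (linear vs. logarithmic) and the acceptance test are assumptions of this sketch, not a description of the authors' code.

```python
# Hedged sketch of the parameter scan described above (only the ranges are taken from the paper).
import random

def draw_point():
    return {
        "m_nu_h":  random.uniform(200.0, 2000.0),   # heavy neutrino mass [GeV]
        "m_Hpm":   random.uniform(100.0, 1000.0),   # charged Higgs mass [GeV]
        "m_H":     random.uniform(100.0, 1000.0),   # heavy CP-even scalar mass [GeV]
        "m_A":     random.uniform(10.0, 60.0),      # light pseudoscalar mass [GeV]
        "zeta_l":  random.uniform(-150.0, 150.0),
        "zeta_nu": random.uniform(-150.0, 150.0),
        # |(U_Lh)_{mu nu_h}|^2 sampled log-uniformly in (1e-5, 1e-3) (assumed measure)
        "U_mu_nuh_sq": 10 ** random.uniform(-5, -3),
    }

def passes_all_bounds(point):
    # Placeholder: LEP II, EWPO, LFV, tau -> mu nu nu and (g-2)_mu checks would go here.
    return True

sample = [p for p in (draw_point() for _ in range(10000)) if passes_all_bounds(p)]
print(f"kept {len(sample)} points")
```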
B. LHC phenomenology of the extra (pseudo)scalar bosons
In the leptophilic scenario delineated above, the light pseudoscalar state A can decay at tree level via A → ττ with a BR close to 100%. For the charged Higgs boson, instead, the two main open decay modes are H^± → AW^±, where the interaction is completely fixed by the SU(2)_L gauge coupling, and H^± → τ^±ν, which is controlled by the ζ_ℓ coupling. Analogously, for the heavy neutral scalar state H the two leading decay modes are H → ττ and H → AZ. For large m_H±, m_H, the BRs of the H^± and H are solely controlled by the coupling g = ζ_ℓ m_τ/m_H± and can be approximated accordingly. Since the couplings to the quarks are suppressed, the main production modes proceed through the EW interactions, namely the associated production channels such as H^±A, HA and H^±H, with the corresponding cross sections being functions only of the masses of the particles involved. The cross sections at the LHC are computed with MadGraph [47] and are shown in Fig. 4. The largest contributions arise from H^±A and HA.
The main signatures resulting from these processes are characterised by final states with several τ leptons, stemming from H^±A production (with a subleading component from H^±H) and from HA production. A thorough analysis is beyond the scope of this paper. In order to get a feeling for the potential of these channels, here we only quote an estimate of the inclusive cross section of the corresponding SM background.
V. CONCLUSIONS
The measurements of the anomalous magnetic moments of the electron and muon are amongst the most precise ones in the whole of particle physics, probing not only the structure of the SM but also the possibility of BSM theories entering these experimental observables. Intriguingly, both of these are currently showing some anomalies with respect to the SM predictions. Crucially, the two results go in different directions, i.e., the measurement of a_µ exceeds the SM result while that of a_e lies below the corresponding SM yield. This circumstance makes it difficult to find BSM solutions, as multiple new particles are generally needed, each contributing its corrections in a different direction, i.e., with a different sign, unless significant violation of discrete quantum numbers is exploited.
In this paper, we adopted an A2HDM supplemented by RH neutrinos, respecting all the SM symmetries. In such a BSM framework, a possible explanation of the aforementioned anomalies can be attained through one- and two-loop topologies wherein the contribution from a very light CP-odd neutral Higgs state interacting with leptons is tensioned against the one due to a charged Higgs boson interacting with the new heavy neutrinos, the latter with mass at the EW scale. Crucially, such a spectrum is able to explain the two leptonic anomalous magnetic moment measurements while also predicting new hallmark signals in the form of qq → H^±A production yielding multi-τ final states, which are almost background free at the LHC and thus accessible already with current data samples. | 4,727.8 | 2020-12-12T00:00:00.000 | [
"Physics"
] |
The Construction of Computer-based Application of Working Memory Test for Early Age Children in Indonesia
Working memory (WM), a central component of executive function (EF) which facilitates the capability to store and modulate information, develops rapidly during early childhood and has been shown to contribute to children's academic achievement. WM generally has two types of measures, involving mainly its verbal and visuospatial aspects, respectively. However, research on standardized, well-developed assessments of WM aspects for early age children in Indonesia remains inadequate, especially assessments embedded with information technology. This study aimed to develop a WM measurement tool using a computer-based application test to support the integration between computer-based and behavioral measurements of WM aspects in early age children. Construct validity of the computerized WM test was determined by comparing conventional and computerized EF tests on 36 children (15 boys and 21 girls) aged 48-72 months. Two computerized WM tasks, targeting the verbal and visuospatial aspects of WM respectively, namely the Backward Animal and Shining Star tasks, were administered individually to each child by a trained tester. The Spearman correlation analysis identified Shining Star as the most suitable computer-based WM task for early age children. Both the conventional and computer-based measures of the visuospatial aspect of WM had similar task mechanisms and rules. They equivalently required visual and kinesthetic modalities, which emphasized the common nonverbal aspects of WM. This result provides a starting point for the evidence-based development of computer-based WM tests in Indonesia for early age children, which is critically important to help individuals with psychological and behavioral problems during Covid-19. Keywords— computer-based test; early age children; executive function; working memory.
I. INTRODUCTION
As the global Coronavirus pandemic persists, applying the new normal regulations leads to inevitable changes that demand widespread adjustment in many sectors. One of the human-service professions significantly affected by these restrictions is practitioner psychology, whose scope of work involves direct face-to-face interaction with clients during psychological assessment or intervention. Practitioners are currently forced to perform their work remotely, online, to prevent further Covid-19 transmission. Unfortunately, the innovation of information technology-based tests for psychological assessment and intervention in Indonesia is relatively less developed than that of conventional tests, hence the restricted practice in helping individuals and groups with mental and behavioral problems during the pandemic. This study aimed to develop a psychological test of working memory (WM), one of the core components of the higher-order neurocognitive skills called executive function (EF).
EF is a set of neurocognitive skills consisting of planning, monitoring, modulating, and emotion and behavior regulation skills, considered as determining factors that contribute to an individual's capacity to adapt to their surroundings [1]. Working memory (WM), which is one of EF's main components, enables the storage and modification of the information required to meet social demands [2]. Together with the other EF components, namely inhibitory control and shifting, WM enables an individual to maintain a mental representation of memories and rules relevant to the ongoing context [1]. Thus, the individual can manifest fully conscious, self-regulated behavior, which is required to accomplish his or her goals.
Theoretically, WM involves multimodal encoding across several underlying systems. One type is known as the slave systems, consisting of the phonological loop and the visuospatial sketchpad, whereas the others serve as the central executive and the episodic buffer [3], [4]. The phonological loop takes charge of provisional auditory-verbal information and stores it for a mere couple of seconds.
Thus, the act of rehearsing and articulating verbal information is necessary to activate the memories before they decline. For example, immediate recall of a sequential order after the objects are presented serves as memory maintenance as well as the recording of pieces of information of the named objects [4].
Meanwhile, the visuospatial sketchpad is responsible for processing nonverbal information, such as visual, spatial, and kinesthetic aspects. This aspect is central to maintaining close attention to integrate colors, locations, and shapes of an object required to activate and preserve the memories [3], [4]. Besides, although it is less associated with language, the visuospatial sketchpad is involved in maintaining mental representations of a page arrangement and proper eye movement while reading, as well as grammatical structure in general [4].
On the other hand, the central executive lacks a memory storage mechanism. Nonetheless, it plays a significant role in monitoring and coordinating the two underlying slave systems, i.e., the phonological loop and the visuospatial sketchpad. In addition, it also manages another system called the episodic buffer. The episodic buffer [3] is a temporary storage that combines information from the phonological loop, the visuospatial sketchpad, and long-term memory (LTM) (e.g., Fig. 1). This system enables individuals to activate memories directly associated with verbal and nonverbal stimuli and to link this information with memories formerly stored in the LTM [3], [4].
Typically, WM experiences rapid growth during the early childhood period [5]. According to Piaget's object permanence theory, WM's development is manifested by the time the child can hold an online memory of a previously hidden object, from as early as 7.5 months old [3], [6]. Furthermore, 12-month-old infants also showed the capability to memorize the new location of an object previously hidden in a different place through the A-not-B task. Physiologically, basic WM skill requires the activation of dispersed parts of the brain during the first year of life. However, the functional localization employed during the performance of a WM task is already formed in the frontal lobe at age 4, which indicates rapid brain development in preschoolers.
In due course, typical neural development facilitates the establishment of specialization and interconnection between WM brain areas. Different WM aspects activate different regions of the brain [3]. For instance, Brodmann areas 40 and 44 are activated by the encoding of verbal memory, which facilitates the phonological loop. On the other hand, areas of the brain's right hemisphere, such as areas 6, 9, 40, and 47, are activated by visuospatial sketchpad stimuli. Although the central executive and the episodic buffer usually involve several parts of the brain concurrently, both are considered to rely heavily on frontal lobe activation.
Concerning WM's role, previous studies reported a meaningful contribution of WM in facilitating language, arithmetic, and reading skills, which contribute to the child's academic achievement [7]. Specifically, a deficit in WM's verbal aspect is associated with subpar language skills, while deficits in the spatial aspect of WM lead to low literacy, comprehension, and arithmetic skills [8]. Children with neurodevelopmental disorders such as dyslexia and dyscalculia show deficits in the verbal-auditory modality related to WM's phonological loop and in the visuospatial sketchpad aspect of WM, respectively [9]. Meanwhile, Attention Deficit and Hyperactivity Disorder (ADHD) is also associated with deficits in WM related to EF [9], which significantly influence academic performance and social interaction. These findings highlight the importance of WM's contribution to a child's developmental milestones and future achievement.
Besides WM's crucial contribution to language and academic achievement, stress regulation is also known to directly impact WM performance. The letter fluency task, which measured verbal WM among healthy student participants, predicted acute stress responses due to reduced attentional control that caused a lack of mental representation and retrieval of the verbal memory [10]. Furthermore, a child's trait anxiety leads to deficits in central executive functioning, which negatively impacts WM performance [11]. Therefore, WM is predicted as one of the most important psychological assessment components at the time of the Covid-19 pandemic, which serves as an indicator of stress regulation ability. Additionally, an intervention based on enhanced WM performance may also be considered for further stress reduction treatment to overcome stressful situations due to the pandemic.
In general, WM tasks have been developed [12] and employed [13] using the auditory and visual modalities, respectively. These tasks are administered using the conventional method with manual scoring. Initially, WM tasks were developed by conventionally measuring performance through behavioral observations on tasks such as the Backward Digit Span task [12] and the Corsi Block task [14]. Subsequently, a conventional battery of EF tasks was developed [15], some of which particularly measure the WM aspects, and made compatible for children in Indonesia. Although these tasks were already adapted into Bahasa Indonesia, several limitations were found due to technical shortfalls while collecting the data, especially in terms of time precision and storage efficiency.
To date, several studies have developed WM tasks not only with a conventional procedure but also with computer-based testing programs for preschool children and other age ranges, with both typical [16]-[18] and atypical development [19]-[20]. This computer-based administration method is considered more efficient and accurate compared to the conventional one. The computer-based WM test improves the time efficiency of test administration. Besides, unlike the traditional WM test that only provides single scores of total correct and false responses, the computer-based WM test also records other measures, such as the reaction time and test duration. Overall, data collection and calculation automation using a computer-based test application cuts back on human-induced error due to any probability of mistakes in manual scoring. Moreover, given the current Covid-19 pandemic situation, the conventional administration of the WM test is not favorable because it requires face-to-face interaction between the child and the administrator or tester. Therefore, a standardized computer-based WM test likely serves as a solution to this issue.
However, research on WM measurement in Indonesia remains limited due to the lack of methodological grounding and interdisciplinary collaboration regarding technological tools that can be used to measure human mental capacity. Meanwhile, there is a surging call for the online assessment of neuropsychological measures to help psychologists substitute the conventional administration procedure with a well-suited remote option for clients' treatment and evaluation, especially during the Covid-19 pandemic. Therefore, this study aimed to develop WM tasks into a computer-based test in Bahasa Indonesia. This computer-based WM test is essential to improving data quality and efficiency and answering the challenge of the increasing need for online assessment throughout the pandemic. Alongside this WM task development, a parallel study of a computer-based measurement was also being developed for another EF component, Inhibitory Control (IC) [21]. Both computer-based tasks employed a similar software engine to support prototyping of the test stimuli and to attain data with high precision and accuracy.
The use of a systematic, computer-based data collection method was designed to enhance the quality, quantity, and time reliability of the data. The administration procedure of the computer-based test was highly standardized to avoid human error. This test application software was a manifestation of an artificial cognitive system [21], which refers to a set of software utilized to interact with the child's cognitive skills. Therefore, the systematic computer-based application improved the standardization of the WM test and its administration method.
In this study, two WM tasks were developed into computer-based measurement based on the conventional ones, which were designed and adjusted for Indonesian children [15]. These tasks were tested on typically developed children between age 4 to 6 years old in Jabodetabek. Correlation analysis between each conventional and computer-based task was conducted to examine whether both tasks measured the common construct.
II. MATERIALS AND METHOD
All of the study procedures were approved by the Research Ethics Committee at the Faculty of Psychology, University of Indonesia.
A. Participants
Thirty-six children aged 48-72 months (15 boys and 21 girls) participated in this study. All participants were subjected to a screening procedure in which parents completed information about the child's medical records and developmental trajectories. Children who had symptoms of any psychiatric, neurological, genetic, sensory, or chronic medical conditions, or any indications of developmental disorders, were excluded from the study. Thus, only children with typical development participated in this study.
B. Procedure
Recruitment for the data collection was managed by cooperating with kindergartens and preschools, distributing flyers, and posting the recruitment information on social media. Parents interested in this study were asked to complete the screening test. Children who passed the participant criteria were invited to make an appointment schedule for the primary data collection session at the Faculty of Psychology, University of Indonesia.
The procedure of the tests was explained to the parents of each participant before they filled in the informed consent form. Afterward, only those whose child passed the screening procedure and who consented to participate were invited to the Faculty of Psychology, University of Indonesia, to take the test. Each child had the WM test administered individually by a trained tester in an examination room. During the test, the child was given both the conventional and the computer-based WM tasks, with a duration ranging from 30 to 60 minutes. Rewards were presented to every child who completed the tests.
C. Measurements
This study employed WM tasks using both conventional (Backward Word Span and Backward Corsi Block Span) and computer-based tasks (Backward Animal and Shining Star). The computer-based tasks were developed using Unity Engine 5.3, an easy-to-use game-making software, which enabled the developers to build prototypes of the tasks.
The Backward Word Span used in this study originated from a previous study [12], which was later adjusted [15] to be appropriate for early age children in Indonesia. In this task, the child was required to memorize words mentioned by the tester in a specific order. Afterward, the child was asked to recall and mention the words in reversed order. The first level consisted of two words. The number of words increased with every level, with the highest one comprising five words. Failure to mention the words in the correct reversed order within the three trials given at each level meant the child could not proceed to the next level. Another conventional task was the Backward Corsi Block Span, which required a retrospective recall of spatial sequences [13]. This task was adjusted for the early age children population in Indonesia [15]. It used visual stimuli in the form of five green blocks arranged in a particular layout on a piece of paper (e.g., Fig. 2). The tester showed the child which blocks to point at in a specific order, before asking the child to point at the blocks one by one in the correct reversed order. The first level consisted of two blocks, and the number of blocks increased each time the child succeeded in performing the instruction within the three trials given at each level.
Both computer-based tasks used the same backward mechanism with different stimulus variants (e.g., Fig. 3). In the Backward Animal task, various species of animals appeared successively on the monitor. Each animal appeared with a simultaneous auditory stimulus pronouncing its name. Afterward, the monitor showed the previously presented animals in a horizontal row, and the child was asked to tap the animals successively in reversed order of appearance. On the first level, two animals were presented. The number of animals increased with every level, with the highest one consisting of eleven animals. Only three trials were given at each level. The Shining Star task used the same backward mechanism, number of stimuli per level, and rules as the Backward Animal task. The stimuli employed in this task were pictures of stars with a "sparkling" audio effect each time a star shone with brighter luminance and opened its eyes (e.g., Fig. 4). Failure to point at the stars in the required reversed order within the three trials at each level automatically terminated the task.
D. Technical Details
Unity Engine 5.3 was used to provide android support and prototype for test stimuli, such as the auditory and visual output. It allowed rapid game development and supported Android for portable device use [23]. The WM tasks were presented through the game application to arouse the child's interest in the task. Both the Backward Animal task and Shining Star tasks used a similar mechanism.
The program generated scores from the number of stimuli that the child correctly tapped in backward order, while the reaction time between the appearance of the stimulus (e.g., the shining effect) and the participant's tap was also recorded. The results were then exported into a .csv format for further analysis. 1) Backward Animal task: the workflow of this task is depicted in Fig. 5A. When the child was ready to start the task, a picture of a zoo appeared as a background on the screen. Afterward, animals were set to appear based on a sequence predetermined by the developer. The animals' names were pronounced as the pictures showed up one by one, two seconds apart. After the appearance sequence was completed, the previously presented animals showed up on the screen all at once. The application then waited for the participant's input to be stored. The scores were derived from the number of animals the child managed to put in the correct backward order. If the child tapped the animals one by one according to the correct backward order at the first attempt, the child was automatically directed to the second level. However, if the child failed at the first attempt, the same number of animals, in different variants, appeared for the child's second and third attempts. Reaction time for each trial was recorded together with the scores and automatically stored in a .csv format on the device. 2) Shining Star task: the workflow of this task is depicted in Fig. 5B. After the child started the task, five cartoon-faced stars were displayed on the screen. At the beginning of each task, all five stars closed their eyes. Each star took a turn to shine one by one, which was displayed by the increased brightness and waking expression of the corresponding star with a sparkling sound effect. The sequence was arranged beforehand in a predetermined manner. On each level, once the order was completed, a circle at the bottom left of the screen switched from red to green to signal to the participant to take their turn.
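The tasks themselves were implemented in Unity (C#); the sketch below only mirrors the trial logic described above (backward-order scoring, three attempts per level, reaction-time logging to a .csv file) in Python for clarity, and all names and file formats in it are hypothetical.

```python
# Hedged Python sketch of the described trial logic; the real tasks were built with Unity 5.3.
import csv
import time

def run_level(stimuli_order, get_child_tap):
    """One attempt: the child must reproduce `stimuli_order` in reverse; returns (score, taps_log)."""
    expected = list(reversed(stimuli_order))
    score, log = 0, []
    shown_at = time.monotonic()
    for expected_item in expected:
        tapped = get_child_tap()                           # blocks until the child taps an item
        log.append((tapped, time.monotonic() - shown_at))  # reaction time per tap
        if tapped != expected_item:
            break
        score += 1
    return score, log

def run_task(levels, get_child_tap, out_path="wm_task.csv"):
    """Run consecutive levels, at most three attempts each, and log results to CSV."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["level", "attempt", "score", "taps"])
        for level, stimuli in enumerate(levels, start=1):
            passed = False
            for attempt in range(1, 4):                    # at most three attempts per level
                score, log = run_level(stimuli, get_child_tap)
                writer.writerow([level, attempt, score, log])
                if score == len(stimuli):
                    passed = True
                    break
            if not passed:                                 # failing all attempts terminates the task
                break
```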
E. Statistical Analysis
All collected data were analyzed using SPSS. Descriptive analyses were conducted to generate detailed features of the dataset. Afterward, Spearman correlation analysis was employed to assess the concurrent validity between the conventional and computer-based tasks, given the small sample size and hence the non-normal distribution of the data.
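The analysis reported below was run in SPSS; an equivalent check can be scripted, for example with SciPy, as sketched here. The file and column names are placeholders, not part of the study materials.

```python
# Hedged sketch of the Spearman concurrent-validity check (the study itself used SPSS).
import pandas as pd
from scipy.stats import spearmanr

scores = pd.read_csv("wm_scores.csv")   # hypothetical file: one row per child, one column per task

pairs = [
    ("backward_corsi_block", "shining_star"),     # visuospatial: conventional vs computer-based
    ("backward_word_span",   "backward_animal"),  # verbal: conventional vs computer-based
]
for conventional, computerized in pairs:
    rho, p = spearmanr(scores[conventional], scores[computerized])
    print(f"{conventional} vs {computerized}: rho = {rho:.3f}, p = {p:.3f}")
```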
III. RESULT AND DISCUSSION
Before running the inferential analysis, descriptive statistics were conducted to gain an overview of the data.
A. Descriptive
The scores in the conventional and computer-based WM tasks ranged from 2 to 5 and from 2 to 11, respectively. Since the children were able to achieve the highest score on the conventional WM tasks, the score range in the computer-based tasks was increased to capture the extent to which a child could reach the highest score on the WM tasks [11]. Based on the results of the 36 participants in the WM test with conventional methods, five children scored 4 in the Backward Word Span task, and five others scored 5 in the Backward Corsi Block Span task. Meanwhile, in the computer-based test, the maximum scores obtained in the Backward Animal task and the Shining Star task were 4 and 5, respectively. Considering the difference in the upper and lower thresholds between the conventional and computer-based tests, all the scores were converted to Z-scores before comparing the means (e.g., Table 1), placing the four tasks on a common standardized scale. In the conventional test, the Backward Word Span task's mean was lower than that of the Backward Corsi Block Span task. Meanwhile, the computer-based Shining Star task's mean was higher than that of the Backward Animal task.
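Because the four tasks have different maximum scores, a standardization step precedes the comparison of means; a minimal version of that conversion is sketched below, again with placeholder file and column names.

```python
# Hedged sketch of the Z-score conversion used before comparing task means.
import pandas as pd

scores = pd.read_csv("wm_scores.csv")  # hypothetical raw scores, one column per task
task_columns = ["backward_word_span", "backward_corsi_block", "backward_animal", "shining_star"]

z_scores = (scores[task_columns] - scores[task_columns].mean()) / scores[task_columns].std(ddof=1)
print(z_scores.describe().round(3))    # each standardized task now has mean ~0 and SD ~1
```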
B. Spearman Correlation
Inferential statistical analysis was conducted to investigate the correlation between the computer-based application tasks and the conventional ones. According to the Spearman correlation analysis, the computer-based Backward Animal and Shining Star tasks correlated significantly (r = 0.396, p < 0.05). The conventional Backward Word Span and Backward Corsi Block Span tasks also yielded a significant correlation (r = 0.565, p < 0.01). Subsequently, based on the Spearman correlations between each conventional and computer-based task, only the Backward Corsi Block Span task correlated significantly with the Shining Star task (r = 0.539, p < 0.01). This result might be due to the similarity of the WM aspect measured by the task stimuli. Both tasks employed visuospatial stimuli, which required the child to activate the nonverbal aspect of WM. Both tasks also required the same kinesthetic response modality of tapping on the screen. Although the stimulus objects were different, both equally emphasized the visuospatial and kinesthetic modalities with similar task mechanisms and rules. Therefore, both the computer-based Shining Star task and the conventional Backward Corsi Block Span task corroborated the same measurement aspect of the WM tasks, namely nonverbal memory, which is facilitated by the visuospatial sketchpad aspect of WM.
On the contrary, the computer-based Backward Animal task did not correlate significantly with the conventional Backward Word Span task (r = -0.128, p > 0.05). This might be caused by the difference in the WM aspects measured by each task. In the conventional Backward Word Span, the child was instructed to memorize auditory stimuli before giving a verbal response. Thus, this task involved a purely auditory modality, which activated the phonological loop [4]. On the other hand, the computer-based Backward Animal task involved both visual and auditory modalities, which activated the phonological loop and the visuospatial sketchpad, as well as a kinesthetic response. These multiple modalities and rule mechanisms in a single task might lead to a decline in memory performance [20]. Thus, participants attained the lowest mean score in the Backward Animal task compared to the other three tasks.
Concerning the computer-based application test, the game elements in the test appealed to the child's interest and helped the child maintain his or her attention to the tasks. The computer-based procedure also supported the tester's role in promoting a child's understanding of the task's instructions and rules. However, according to an observational report, children were found to experience considerable difficulties in the Backward Animal task, which was demonstrated by a relatively higher number of trials needed before the child was ready to begin the actual main task. In contrast, children showed better performance in the Backward Word Span task, which emphasized mere auditory modality. On that account, the task's complexity contributed to the insignificant correlation between the Backward Word Span task and the Backward Animal task.
Consequently, we assumed a correlation between the task complexity in Backward Animal tasks and the participants' age. Previous studies reported that children's WM skills developed along with the complexity of the materials, which required more considerable task demand during the school period, especially in the first year of the elementary school [25]. Considering that our participants were still preschool children, we suggested that the Backward Animal Task was more suitable for older children or at least children who have passed the elementary school's first year.
This study aimed to enhance the measurement accuracy and standardization of WM tasks and provided the initiative to develop a computer-based application test for WM tasks in Indonesia. However, as pioneering research, this study has several limitations. First, the sample size was relatively small, such that the normal distribution assumption was not adequately met. Second, the demographic data indicated a proportional imbalance in the participants' socioeconomic status (SES), where the number of participants from high SES was notably smaller than that from middle and low SES. A previous study [15] found SES to be one of the contributing factors to a child's EF performance, in which participants from high SES most likely achieved better EF scores than those from middle and low SES. Therefore, the small sample size and imbalanced SES proportions in this study led to the data's non-normal distribution.
Regardless of the limitations, this study has developed the first computer-based application test that accurately measures WM aspects for early age children in Indonesia. Further studies on this topic should consider administering the tasks to older age groups to see which task and difficulty level fit best with older children or adolescents, especially the Backward Animal task, which covers a wider range of difficulty than the other three WM tasks. A bigger sample size with a proportional SES categorization is also necessary for future studies to obtain a normal data distribution representing the WM skills of early age children in Indonesia. Additionally, further development of the computer-based WM test is required to capture auditory responses from the participants. This feature is also expected to provide a system that automatically and precisely analyzes the phonological loop component in WM tasks for a more detailed measurement in the future.
IV. CONCLUSION
This preliminary study managed to develop a computer-based application test to measure WM aspects for early age children in Indonesia. Based on the findings, the Shining Star task proved to measure the same construct as the Backward Corsi Block Span task. Therefore, further studies can employ the Shining Star task as a measurement tool for developing standardized norms of WM in early age children in Indonesia, which is an essential step to obtain overall representative data of WM skills for early age children in Indonesia. In addition, this computer-based WM test is of practical use for optimizing the child's WM skills through evidence-based intervention. Furthermore, this finding facilitates integrating the computer-based and physiological measurement of WM aspects to gain more comprehensive results. | 5,910.8 | 2020-09-20T00:00:00.000 | [
"Computer Science",
"Education"
] |
Modelio Project Management Server Constellation
SOFTEAM is a French middle-sized company that provides the Modelio modelling tool. Modelio is an enterprise-level open source modelling solution delivering functionality for business, software and infrastructure architects.
Introduction
SOFTEAM is a French middle-sized company that provides the Modelio modelling tool. Modelio is an enterprise-level open source modelling solution delivering functionality for business, software and infrastructure architects. It is a comprehensive MDE workbench tool supporting the UML2.x standard. Modelio provides a central IDE which allows various languages (represented as UML profiles) to be combined in the same model. Modelio proposes various extension modules, enabling the customization of this MDE environment for different purposes and stakeholders.
The Team Work Manager is SOFTEAM's solution for team collaboration in Modelio. It allows Modelio users, after a minimal software and hardware investment, to efficiently share and work together on models stored in a central repository accessible on a local network or over the Internet. It automates version control and configuration management, making sure every developer has access to the latest version of the shared model and works on a uniform configuration. From the point of view of the developer, a repository is divided into Projects, which contain: model elements, the extension modules used by the user, and configuration information. A repository needs to be installed, configured and maintained by the users on their own machines. An SVN repository may store different projects, and different teams may work in the same repository. Developers use the Modelio desktop client to access a central repository following an SVN-like workflow: committing modifications to model elements, receiving updates from other users and using merges/locks to deal with concurrent work.
Through its participation in the MODAClouds project, SOFTEAM intended to move its modelling services to the Cloud in order to relieve its clients from the burden of supporting the necessary infrastructure. During the MODAClouds project, we developed a new version of this tool called Constellation [1,2]. This service is based on a Service-Oriented Architecture under which the Team Work Manager is provided as a service on the Cloud. By the beginning of the third year of the project we started providing commercial services based on Constellation.
We hope that the "potentially infinite" resources available on the Cloud will make tasks such as scaling the servers of a project up and out, and moving between different Cloud providers, very easy for our customers. Additionally, activities such as monitoring and adapting the installation will hopefully be possible without specialized knowledge of systems administration.
The features provided by MODAClouds play an important role in fulfilling these objectives. As we present in the following sections, the role of MODAClouds in Constellation is two-fold. At design time, MODAClouds should support design and implementation in a Cloud provider-independent way, reducing development costs and increasing flexibility. At run time, it should support the monitoring and adaptation of the application to sustain its desired QoS levels.
This chapter is organised as follows. Section 12.2 presents the proposed architecture of Constellation. Section 12.3 presents how we used MODAClouds components in building our case study. Finally, Sect. 12.4 presents our conclusions.
Proposed Architecture
In order to simplify this migration, the architecture of our Cloud solution relies on the implementation of a component called the Administration Server (Fig. 12.1). The Administration Server allows clients to create and manage user accounts, define roles, create modelling projects and associate users and roles to specific projects. The Administration Server is designed as a JEE application which provides a web-accessible user interface implemented with JavaServer Faces 2, with service behaviour supported by Enterprise Java Beans components. This application is linked with a relational database to ensure persistency of application data.
The Administration Server can provision computing resources in order to maintain the established level of quality of service. Cloud services managed by an Administration Server are delivered as Cloud-enabled applications. These applications are deployed on the provisioned Cloud resources. Once deployed on Cloud resources, services usually need to be configured and accessed by clients. The Administration Server needs to make sure that the necessary projects, users and permissions have been created and set up once a Cloud agent has been installed. Standard protocols are used for both activities: Web Services enable the deployed agents to be configured, while TCP/IP protocols allow Modelio desktop-based clients to connect to an agent, independently of the Cloud on which it has been deployed. External agents are independent applications that provide specific, resource-intensive services to Constellation. Agents can be deployed on demand on specific Cloud instances (IaaS or PaaS, depending on their implementation). The number of deployed agents may change in real time depending on the application workload. Each agent implements a variable number of services called Workers, which are executed when an agent receives a command from the Administration Server.
The only dependency of this design on the specific Cloud provider is the communication between the Administration Server and the Cloud provider in order to deploy, monitor and eventually migrate services. The actual code to interact with the Cloud provider is, however, encapsulated in a Web Service usually installed on the Administration Server. This Web Service translates requests from the user into specific requests to the MODAClouds runtime components.
Modelling with Creator 4Clouds
We used the MODAClouds Creator 4Clouds Functional Modelling tool to describe the architecture of Constellation's Administration Server along with its modelling services. We have also used this model as input to other design and runtime tools. During the first MODAClouds phase we considered two kinds of services: SVN and HTTP fragments. The first one provides a read-write model that is edited collaboratively, while the second one provides read-only models that are shared among different teams. Figure 12.2 depicts the functional architecture of Constellation specified with the MODAClouds IDE as a Cloud Computation Independent Model.
At the highest level, the CCIM shows the services that compose Constellation: the Administration Server and the Administration Database connected by an interface provided by the Administration Database and required by the Administration Server.
Still at the CCIM level, Fig. 12.3 shows the QoS constraints associated with the most important operations provided by the Constellation modelling services. For SVN fragments, 15 s is the target average time for reading model modifications, and 60 s is the target average time for writes. This considers that users make large commits (i.e., containing a great number of model changes, and therefore expect to obtain large change sets when they update). For HTTP models, 5 s is the average time for reading parts of the model, considering that users make infrequent accesses to subparts of shared read-only models. Constraints on the 85th percentile are used to define acceptable upper bounds for response times. These are set to 12 s for HTTP reads, and to 30 s and 5 min for SVN reads and writes, respectively.
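To make these constraints concrete, the sketch below shows how measured response times could be checked against the stated average and 85th-percentile targets; the thresholds are the ones quoted above, while the sample data and function names are purely illustrative.

```python
# Hedged sketch: checking response-time samples against the QoS targets quoted above.
import numpy as np

# (average target, 85th-percentile target), both in seconds
qos_targets = {
    "svn_read":  (15.0, 30.0),
    "svn_write": (60.0, 300.0),   # 5 minutes for the 85th percentile
    "http_read": (5.0, 12.0),
}

def check_operation(name, response_times_s):
    avg_target, p85_target = qos_targets[name]
    avg = float(np.mean(response_times_s))
    p85 = float(np.percentile(response_times_s, 85))
    ok = avg <= avg_target and p85 <= p85_target
    status = "OK" if ok else "VIOLATED"
    print(f"{name}: avg={avg:.1f}s (target {avg_target}s), p85={p85:.1f}s (target {p85_target}s) -> {status}")

# Illustrative synthetic measurements for one operation type
check_operation("http_read", np.random.lognormal(mean=1.0, sigma=0.5, size=1000))
```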
CPIM and CPSM models describe the deployment of the application at different levels of abstraction, first in a Cloud provider independent way, and then in a Cloud provider specific way. Figure 12.4 presents excerpts of the Constellation application model described in MODACloudML at the three levels of abstraction in order to illustrate the correspondence between the CCIM and the CPIM and CPSM models.
Multi-cloud Deployment with CloudML 4Clouds
The deployment model at CPIM level allows us to model the deployment of our application by identifying the various components of our application deployment.
Fig. 12.4 Three levels in IDE
In this experiment, our efforts focused on making better use of Cloud platforms through the integration of PaaS services and the migration to a multi-Cloud deployment solution. In a second step, we sought to take advantage of the support for multi-Cloud environments enabled by the MODAClouds project. We studied the best deployment configuration for our application and selected three Cloud providers: Amazon EC2, Flexiant and Amazon RDS. Figure 12.5 describes the deployment of Constellation in a multi-Cloud context. It shows an Administration Server and two agents, both deployed on IaaS Cloud nodes: one on Amazon and one on Flexiant. The database that stores administration data is hosted on a PaaS database service provided by Amazon RDS.
This development brings the following benefits: • Allows us to scale the compute and storage resources available to our database to meet Constellation needs. • Provides the best reliability to our application with automated backups, DB snapshots and automatic host replacement capabilities. • Provides predictable and consistent performance for I/O intensive transactional database workloads.
Cost and Performance Analysis with SPACE 4Clouds
As part of the MODACloudsML CCIM models, we provided models of how users interact with Constellation, and of the performance of Constellation services when actually deployed on virtual machines. We used SPACE 4Clouds to assess the costs and QoS the current architecture is able to provide on different Clouds and, in particular, the maximum number of clients we can serve with the modelled architecture. In addition, we devised a trial architecture for a new modelling service called the Conference Service, to be implemented during the last year of the project, and compared its QoS characteristics with those of the service implemented in the first two years of the project. Differently from an SVN service, the Conference Service decouples the reading and writing load on the system into different VMs that can be load balanced and Cloud-bursted independently. This is a typical example of the advanced deployment configurations Constellation needs to support. Our experiments showed that the Conference Service is more scalable than the current solution.
Figure 12.6 presents the usage model of our users, obtained through observation of typical users. It considers users that are connected to the modelling service throughout their full workday. Five percent of the time they interact with Constellation, they connect to an existing project, which translates into the sequence of calls shown at the top of the figure. Ten percent of the time they read updates from an SVN model, seventy-five percent of the time they get data from HTTP fragments, and ten percent of the time they perform SVN commits.
In addition to usage models, we provided models of the user workload throughout the day (see Fig. 12.7). We represented a typical business office workload, with most of it concentrated around commercial working hours (8-12 h and 13-17 h).
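A simple way to reproduce this usage mix and daily profile for testing purposes is sketched below; the operation probabilities and the office-hours shape follow the description above, while the request counts per hour are arbitrary placeholders.

```python
# Hedged sketch of a synthetic workload following the described user behaviour mix.
import random

# Per-interaction operation mix taken from the usage model described above.
operation_mix = [
    ("connect_project", 0.05),
    ("svn_update",      0.10),
    ("http_fragment",   0.75),
    ("svn_commit",      0.10),
]

def pick_operation():
    r, acc = random.random(), 0.0
    for op, p in operation_mix:
        acc += p
        if r <= acc:
            return op
    return operation_mix[-1][0]

def hourly_load(hour):
    # Office-hours profile: activity concentrated in 8-12 h and 13-17 h (request counts are arbitrary).
    return 100 if (8 <= hour < 12 or 13 <= hour < 17) else 5

workload = [(hour, pick_operation()) for hour in range(24) for _ in range(hourly_load(hour))]
print(f"generated {len(workload)} synthetic requests")
```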
SPACE 4Clouds allowed us to discover the peak number of users supported by this architecture. Figure 12.8 shows the result of this analysis. We can see that the SVN service supports around 250-300 users without breaking the QoS constraints, while the Conference Service scales to almost double that number of users without breaking them.
Multi-cloud Monitoring and Management with Energizer 4Clouds
Energizer 4Clouds provides valuable services for our case study: management of the execution, intended as the set of operations to instantiate, run and stop services on the Cloud; monitoring of the running application; and self-adaptation of the application, to ensure the fulfilment of the QoS goals. When defining the final design of the Constellation case study, we were interested in the best way to integrate the features provided by the platform into our application. In the context of the Constellation case study, we are interested in the integration of three aspects of Energizer 4Clouds: the monitoring platform, the self-adaptation platform and the execution platform. Figure 12.9 presents the deployment model of the Constellation case study, including the runtime platform components.
The Monitoring Platform allows us to monitor specific metrics collected from the business components of our case study deployed on different Cloud platforms. To achieve this goal, we integrated five components into our architecture: three components from the monitoring platform and two components developed using the API provided by the platform components. The role of the latter is to exploit monitoring data in our application.
To exploit the monitoring platform, we have integrated two components based on the API provided by the monitoring platform. These components ensure the intermediation between the monitoring platforms and business components of Constellation. They allowed us to implement a Cloud vendor independent agent monitoring user interface, and to integrate it to our commercial offering.
• Constellation Data Collector: To collect business metrics from Constellation agents, we integrated this extension of the monitoring platform into our architecture. Based on the MODAClouds Data Collector API, this programme will collect data about the CPU, RAM and disk access of each process managed by the agents.
• Constellation Data Analyzer: Based on the REST API of the MODAClouds Monitoring Manager, Constellation will incorporate a component to analyse, store and display monitoring data from a business point of view. This service will be integrated into the Administration Server.
Conclusion
Constellation can be presented as an advanced repository which stores the models defined using the Modelio CASE tool and which provides several highly time-consuming services on the Cloud. Among its services are the creation of collaborative projects, the hosting of model fragments allowing teamwork, the management of a Model Library catalogue, and monitoring services applied to all these elements.
In this chapter, we presented the final version of the Project Management Server, renamed, for commercial reasons, to Constellation. The development of Constellation started with the beginning of the MODAClouds project, and by its end we had a first version that started to be commercialized. The current commercial version of Constellation is restricted to deployment on customer premises. We are confident that, thanks to MODAClouds, its architecture is ready for the Cloud.
The Constellation case study integrated both design time and runtime components from MODAClouds in its design. At design time, MODAClouds supported the design of the architecture of the application, and its early QoS analysis, in order to identify bottlenecks. At runtime, MODAClouds supported the multi-Cloud deployment, management and monitoring of Constellation. | 3,206 | 2017-01-01T00:00:00.000 | [
"Computer Science",
"Engineering",
"Business"
] |
Droplet Impact on Suspended Metallic Meshes: Effects of Wettability, Reynolds and Weber Numbers
Liquid penetration analysis in porous media is of great importance in a wide range of applications such as ink jet printing technology, painting and textile design. This article presents an investigation of droplet impingement onto metallic meshes, aiming to provide insights by identifying and quantifying impact characteristics that are difficult to measure experimentally. For this purpose, an enhanced Volume-Of-Fluid (VOF) numerical simulation framework is utilised, previously developed in the general context of the OpenFOAM CFD Toolbox. Droplet impacts on metallic meshes are performed both experimentally and numerically with satisfactory degree of agreement. From the experimental investigation three main outcomes are observed—deposition, partial imbibition, and penetration. The penetration into suspended meshes leads to spectacular multiple jetting below the mesh. A higher amount of liquid penetration is linked to higher impact velocity, lower viscosity and larger pore size dimension. An estimation of the liquid penetration is given in order to evaluate the impregnation properties of the meshes. From the parametric analysis it is shown that liquid viscosity affects the adhesion characteristics of the drops significantly, whereas droplet break-up after the impact is mostly controlled by surface tension. Additionally, wettability characteristics are found to play an important role in both liquid penetration and droplet break-up below the mesh.
Introduction
Droplet impact onto solid, dry surfaces is a widely studied phenomenon, which has been extensively investigated in the past decades due to its involvement in many environmental and industrial applications [1,2]. In order to understand the behaviour of the droplets in such cases, the mechanisms involved in droplet impact dynamics must be thoroughly investigated. Droplet dynamics depend on a number of parameters that are related to the droplet itself, the impacted surface and the local gas layer near the wall. Studies have shown that the droplet diameter (D_0), impact velocity (U_0), viscosity (µ), surface tension (σ), wettability of the solid surface, as well as non-isothermal effects (e.g., solidification, evaporation), constitute important parameters in the impact dynamics.

Even though droplet impact on solid surfaces has been an area of interest for many researchers for a few decades now, droplet impact on porous surfaces, and especially on metallic meshes, is a relatively new field that has attracted attention both in academia and industry. Potential applications of metallic meshes include medical spray penetration into the human skin, irrigation mechanisms and ink jet printing. Droplets behave differently when impacting heterogeneous media, introducing the influence of surface properties such as surface roughness [16]. Absorption of the droplet by the porous surface depends on both the properties of the liquid and of the porous medium [17]. Spreading on porous media has received little attention compared to similar investigations on impermeable surfaces, despite its common presence in applications [18]. Alam et al. [19] concluded that important factors for the spreading of an impacting drop on a porous surface include the properties of the liquid (viscosity, surface tension), the impact conditions (impact velocity, droplet diameter), absorption, wettability and roughness. Ryu et al. [20] investigated experimentally the effect of wettability on a water drop impacting a hydrophobic and a superhydrophobic mesh. The experiments showed that water can penetrate superhydrophobic meshes more easily, compared to hydrophobic surfaces, and that penetration of a superhydrophobic mesh can occur either during the impact or during the recoil.
Both older and recently published numerical investigations of drop impact onto porous media can be found in the literature. Reis et al. [21] conducted a parametric analysis of drop impact on a porous medium with the VOF method. The results showed that for large Re, relatively large degrees of penetration into the substrate were observed due to the viscous drag forces. By comparison, for small Re values the change of the droplet momentum due to viscous effects outside the substrate starts to become more significant, resulting in a reduction of the lateral spreading. In addition, it was found that the potential penetration or absorption of the droplet into the porous surface also influences the impact dynamics. Simulations of a sessile droplet on porous substrates using the VOF method in ANSYS were performed by Fu et al. [22]. Their results showed that the interaction between a droplet and a porous substrate is generally determined by the competing spreading and penetration processes. Regarding the wettability effect, it was found that for a smaller static equilibrium contact angle, more pronounced spreading and permeation can be seen, and when the static equilibrium contact angle increases, the spreading and imbibition of the droplet become more difficult.
Liwei et al. [23] utilised a 3D Many-Body Dissipative Particle Dynamics (MDPD) model and tested it against experiments of droplet impact on mesh screens. They found good agreement between the numerical and experimental results, and concluded that the influence of the droplet speed and size of the mesh is of higher importance, compared to the wettability characteristics and the drop viscosity.
The present study uses the enhanced VOF method developed by Georgoulas et al. [24] and Vontas et al. [25]. The enhancements mitigate spurious velocities and enable an accurate treatment of the dynamic contact angle (DCA) by employing the Kistler DCA model. The implementation of the Kistler model was validated in Reference [25] against the experimental data of Patil et al. [13] and Yokoi et al. [14]. In both cases, low Weber number water droplet impacts on dry, flat surfaces under isothermal conditions were considered. Further validation for higher Weber numbers is presented in this study. The validated model is subsequently used to study water droplets impacting on suspended metallic meshes, along with detailed experimental investigations. The aim of the numerical modelling is to identify and quantify valuable information that cannot be accurately obtained from the measurements, such as the volume of liquid entrapped within the metallic mesh with respect to time. This information offers additional insight into the complex underpinning mechanisms of the considered droplet impact phenomena.
Governing Equations
With the utilised VOF method, a transport equation for the volume fraction α of the secondary phase (liquid) is solved simultaneously with a single set of continuity and Navier-Stokes equations for the whole flow field. For the primary phase (gas) the corresponding volume fraction is calculated as (1 − α). The two fluids (gas and liquid) are considered to be Newtonian, immiscible and incompressible, while the environmental conditions are considered isothermal. The governing equations have the following form:

∇ · U = 0,  (1)

∂α/∂t + ∇ · (αU) = 0,  (2)

∂(ρ_b U)/∂t + ∇ · (ρ_b U U) = −∇p + ∇ · [µ_b (∇U + ∇U^T)] + ρ_b g + F_s,  (3)

where ρ_b and µ_b are the bulk density and viscosity of the fluid, U is the fluid velocity, p is the pressure, g is the acceleration due to gravity, and F_s is the volumetric representation of the surface tension force. The bulk physical properties of the mixture are calculated as weighted averages of the corresponding properties of the liquid and gaseous phases:

ρ_b = α ρ_l + (1 − α) ρ_g,  (4)

µ_b = α µ_l + (1 − α) µ_g,  (5)

where ρ_l, ρ_g, µ_l and µ_g denote the density and viscosity of the liquid and the gas phases, respectively. In the VOF method, α is advected by the velocity field; in incompressible flows this is equivalent to conservation of the volume fraction, which makes the method mass conservative. Additionally, the surface tension force is modelled as a volumetric force using the Continuum-Surface-Force (CSF) method, first introduced by Brackbill et al. [26], applying the following equations:

F_s = σ κ ∇α,  (6)

κ = −∇ · (∇α / |∇α|),  (7)

where σ is the surface tension and κ is the curvature of the interface.
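As an illustration of Equations (6) and (7), the short Python sketch below evaluates the CSF curvature for a smoothed circular "droplet" on a uniform 2D grid and checks it against the analytical value 1/R. The grid resolution, droplet radius and surface tension values are arbitrary placeholders, and the finite differences merely stand in for the finite volume operators of the actual solver.

```python
import numpy as np

# Minimal sketch (uniform grid, placeholder values): evaluate the CSF curvature
# kappa = -div( grad(alpha)/|grad(alpha)| ) for a smoothed circular interface and
# compare it with the analytical value 1/R. sigma*kappa*grad(alpha) would then
# give the volumetric surface tension force F_s of Equation (6).
N, L, R, sigma = 400, 4.0e-3, 1.0e-3, 0.0728   # cells per side, domain [m], radius [m], N/m
h = L / N                                      # grid spacing
x = (np.arange(N) + 0.5) * h - L / 2
X, Y = np.meshgrid(x, x, indexing="ij")
r = np.sqrt(X**2 + Y**2)

alpha = 0.5 * (1.0 - np.tanh((r - R) / (2.0 * h)))   # ~1 inside the droplet, ~0 outside

gx, gy = np.gradient(alpha, h)                 # grad(alpha)
mag = np.sqrt(gx**2 + gy**2) + 1.0e-12
nx, ny = gx / mag, gy / mag                    # interface unit normal
kappa = -(np.gradient(nx, h)[0] + np.gradient(ny, h)[1])   # Equation (7)

Fs_x, Fs_y = sigma * kappa * gx, sigma * kappa * gy        # Equation (6)

near_interface = (alpha > 0.45) & (alpha < 0.55)
print("mean curvature near interface:", kappa[near_interface].mean(), " expected 1/R:", 1.0 / R)
```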
Sharpening the Interface
For the sharpening of the interface, Equation (2) is modified and takes the following form:

∂α/∂t + ∇ · (αU) + ∇ · [α(1 − α)U_r] = 0,  (8)

where U_r is an artificial compression velocity at the interface. In the finite volume discretisation used in "interFoam" this is given by:

U_r = n_f min[ C_γ |φ| / |S_f| , max(|φ| / |S_f|) ],  (9)

where n_f is the cell surface normal vector, φ is the mass flux, S_f is the surface area of the cell faces, and C_γ is a coefficient that controls the artificial compression of the interface; its value can be set between 0 and 4. Further details can be found in the work of Georgoulas et al. [24].
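To make the face-based compression term more concrete, the snippet below sketches how the compression speed of Equation (9) could be evaluated for a few faces. The flux and face-area values are invented placeholders, and this is a sketch under those assumptions, not the actual interFoam source code.

```python
import numpy as np

# Minimal sketch (placeholder values): face-based artificial compression speed,
# |U_r| = min( C_gamma*|phi|/|S_f| , max(|phi|/|S_f|) ); the solver multiplies this
# magnitude by the interface unit normal n_f (Equation (9)).
C_gamma = 1.0                                   # compression coefficient, 0..4
phi    = np.array([1.2e-7, -3.4e-7, 8.0e-8])    # hypothetical face fluxes
magSf  = np.array([2.5e-8,  2.5e-8, 2.5e-8])    # hypothetical face areas [m^2]

face_speed = np.abs(phi) / magSf
Ur_mag = np.minimum(C_gamma * face_speed, face_speed.max())
print(Ur_mag)
```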
VOF Smoothing
The enhanced VOF-based solver used in the present study overcomes numerical artefacts of the original model, known as "spurious currents", which usually develop at the interface between the liquid and gaseous phases. The proposed enhancement involves calculating the interface curvature κ from smoothed volume fraction values, which are obtained from the initially calculated α field by smoothing it over a finite region near the interface (Equation (10)). All other equations use the initially calculated (non-smoothed) volume fraction values of α [27]. Further details on the proposed implementation can be found in the article by Georgoulas et al. [24].
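The snippet below is a minimal stand-in for this smoothing step: it repeatedly averages each cell value with its face neighbours on a structured 2D grid, producing a smoothed field that would be used only for the curvature evaluation. It is an illustration under these assumptions, not the smoother of the actual solver.

```python
import numpy as np

# Minimal sketch (structured 2D grid assumed): smooth the volume fraction field by
# repeated neighbour averaging; only the smoothed field would feed the curvature
# calculation, while the transported alpha itself is left untouched.
def smooth(alpha, n_sweeps=2):
    a = alpha.copy()
    for _ in range(n_sweeps):
        p = np.pad(a, 1, mode="edge")
        a = 0.5 * a + 0.125 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])
    return a

alpha = np.zeros((50, 50)); alpha[20:30, 20:30] = 1.0   # sharp test field
alpha_smoothed = smooth(alpha)                           # used only for kappa
```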
Dynamic Contact Angle Treatment
The enhanced numerical simulation framework also contains an implementation of the DCA model originally suggested by Kistler [28]. This implementation was tested in Reference [25] against experiments available in the literature for droplet impact on flat surfaces with different wettability, and it was shown that it can accurately predict both the spreading and recoiling stages of the impacts. In more detail, with this DCA treatment the dynamic contact angle θ_d is given as a function of the contact line velocity (u_cline), through the capillary number Ca and the inverse of Hoffman's function. θ_d can be calculated from the following Equation (11):

θ_d = f_H(x),  (11)

where f_H^−1 is the inverse of Hoffman's empirical function f_H, the latter given by:

f_H(x) = arccos{ 1 − 2 tanh[ 5.16 ( x / (1 + 1.31 x^0.99) )^0.706 ] },  (12)

where x is equal to:

x = Ca + f_H^−1(θ_eq).  (13)

The capillary number is defined as

Ca = µ |u_cline| / σ,  (14)

where u_cline is the velocity of the contact line. The equilibrium angle θ_eq is replaced by either a limiting advancing or receding contact angle, θ_a or θ_r respectively, depending on the sign of the velocity vector at the contact line.
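A small Python sketch of Equations (11)-(14) is given below; Hoffman's function is inverted numerically with a root finder, and the fluid properties and contact line velocity are example values only, not the simulation inputs.

```python
import numpy as np
from scipy.optimize import brentq

# Minimal sketch of the Kistler DCA treatment (Equations (11)-(14)); the fluid
# properties and velocities below are example values, not the simulation inputs.
def f_hoffman(x):
    return np.arccos(1.0 - 2.0 * np.tanh(5.16 * (x / (1.0 + 1.31 * x**0.99))**0.706))

def f_hoffman_inv(theta):
    # numerical inverse of Hoffman's function (theta in radians)
    return brentq(lambda x: f_hoffman(x) - theta, 1.0e-12, 1.0e6)

def theta_dynamic(u_cline, theta_eq, mu=1.0e-3, sigma=0.0728):
    Ca = mu * abs(u_cline) / sigma                    # Equation (14)
    return f_hoffman(Ca + f_hoffman_inv(theta_eq))    # Equations (11) and (13)

# advancing contact line at 1 m/s on a surface with a 105 deg limiting angle
print(np.degrees(theta_dynamic(1.0, np.radians(105.0))))
```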
Low Weber Number Impacts
Numerous empirical models relating the DCA to the contact line velocity are available in the literature [29,30]. As mentioned in Section 2, the numerical investigation performed by Vontas et al. [25] showed that the predictions of Kistler's DCA model are closer to experimental measurements than both the Constant Contact Angle (CCA) model and the DCA model offered in the original distribution of OpenFOAM. The numerical predictions of the present numerical simulation framework with the Kistler DCA model can be seen in Figure 1, compared with the experimental results reported by Yokoi et al. [14] for a droplet impact with a comparatively low Weber number (We = 32) on a hydrophobic surface. The present results clearly show good agreement with the experiments and are consistent with previous validation results presented by Vontas et al. [25].
Figure 1. Comparison of the experimental snapshots [14] with the CFD predictions using the dynamic Kistler contact angle model.
High Weber Number Impacts
For the purposes of the present investigation, it was deemed appropriate to further check the validity of the proposed VOF-based numerical simulation framework for higher We number droplet impacts. Therefore, the droplet impact experiments conducted by Šikalo et al. [15] are also simulated here. In the experiment, a high resolution charge-coupled device (CCD) camera (Sensicam PCO, 1280 × 1024 pixels) equipped with a long-distance microscope was used for the detailed observation of the spreading droplet. The working liquid is water impacting a horizontal hydrophobic surface (dry wax), the droplet diameter is D_0 = 2.45 mm and the droplet impact velocity is U_0 = 1.64 m s−1, while the advancing θ_a and receding θ_r contact angles are 105° and 95°, respectively. The Re and We numbers are 4010 and 90, respectively.
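As a quick cross-check of the quoted non-dimensional numbers, the few lines below recompute Re and We from the reported impact conditions, assuming standard room-temperature water properties (the exact property values used in the simulations may differ slightly).

```python
# Quick check of the quoted Re and We for the Šikalo et al. case, assuming
# standard room-temperature water properties.
rho, mu, sigma = 998.0, 1.0e-3, 0.0728       # kg/m^3, Pa s, N/m (assumed)
D0, U0 = 2.45e-3, 1.64                        # m, m/s (reported)

Re = rho * U0 * D0 / mu
We = rho * U0**2 * D0 / sigma
print(f"Re = {Re:.0f}, We = {We:.0f}")        # ~4010 and ~90, matching the text
```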
A 2D axisymmetric simulation is performed first. Higher We number processes require a finer computational mesh; after conducting a mesh independence study, the solution was found to be mesh independent for a minimum grid size of 5 µm. The utilised computational domain is a 5° wedge, with a 5 mm radius and an 8 mm height. The computational domain, mesh details and boundary conditions are illustrated in Figure 2.
The total number of cells is 1.6 million (1000 × 1600 × 1 in the x, y and z directions, respectively). A structured mesh of hexahedral and prismatic elements is utilised, with grid clustering towards the bottom-left corner of the computational domain (i.e., the central point of the simulated droplet impact). A no-slip velocity boundary condition is imposed at the solid walls, together with a dynamic contact angle boundary condition for the volume fraction. At the outer boundary, a homogeneous Neumann condition is imposed on the pressure, allowing the flow to exit and enter freely.
As can be observed in Figures 3 and 4, the 2D axisymmetric simulation predicts very well the spatial and temporal evolution of the considered droplet impact. However, since 2D axisymmetric simulations cannot be used for the main aim of the present work, it was deemed appropriate to also perform a 3D simulation and check the validity of the results for the same experiment. In order to significantly reduce the computational time and the total number of cells, a 3D symmetric simulation was conducted, representing only one-fourth of the considered droplet impact case. Using a minimum cell size of 5 µm, as in the 2D case, was not possible due to limitations in the available computational resources. Therefore, the minimum cell size in the 3D case is 25 µm, and in total four successive levels of mesh refinement were applied, utilising the "topoSet" utility in OpenFOAM. The total number of cells is 23.6 M, consisting of hexahedral and polyhedral cells. In order to compare and demonstrate the quantitative difference between 2D and 3D simulations, an additional 2D simulation with a minimum cell size of 25 µm is also performed. The numerical results of the 3D simulation are compared to the experimental results in Figure 3. As can be observed qualitatively, the 3D numerical case shows good agreement with the experiment.
In Figure 4, a quantitative comparison of the four cases is presented, where the contact diameter D over time t is plotted. The results show that both the 3D and the two 2D simulations are in good agreement with the experimental measurements. Additionally, it can be seen that the 2D axisymmetric case with the 5 µm cell size produces results in closer agreement with the experimental values than the 3D case, which can be attributed to its five times smaller minimum cell size. However, the 3D solution with a 25 µm minimum cell size offers a good compromise between accuracy and computational cost, despite not being a mesh-independent solution. Therefore, this minimum cell size of 25 µm is adopted for the 3D parametric numerical simulations in the main part of the present work.
Figure 3. Qualitative comparison of the 2D (5 µm) and 3D numerical simulation results with experimental snapshots [15].
Figure 4. Comparison of the experimental measurements [15] with the 2D axisymmetric and 3D numerical simulation results over time.
Droplet Impact on Metallic Meshes
In this section, the experimental and numerical results obtained for droplets impacting onto metallic meshes are presented, analysed and compared. More detailed descriptions of these experiments are available in Reference [31]. The main aim here is to select the appropriate computational domain, mesh, boundary conditions and overall computational set-up characteristics for the parametric numerical analysis.
Experimental Investigation
The droplet impact experiments were recorded using a Photron Fastcam SA4 high-speed camera (with a resolution of 1024 × 800 pixels). The test area was illuminated using a custom-built high-speed LED light source, synchronised with the high-speed camera. A purpose-built image processing algorithm was developed in MATLAB to measure the initial droplet diameter and the maximum spreading area of the impact. The impact velocity was also determined by measuring the rate of displacement of the droplet's centre of mass from the video images. To confirm repeatability, droplet impacts were repeated at least 5 times for each set of impact conditions. A portion of the mesh was suspended using a ring with a 20 mm inner diameter. A small vertical movement of the suspended mesh was observed after the impact of the droplet, particularly for high impact velocities. Figure 5 shows a schematic illustration of the experimental set-up. The experimental conditions are summarised in Tables 1 and 2; different liquids and needle sizes were chosen in order to maintain the same range of non-dimensional parameters.
The impact of the droplet on the mesh led to three different possible outcomes: deposition, imbibition and penetration. In the case of deposition, there is no penetration of the droplet below the upper mesh surface after the impact. The deposition outcome is mainly associated with a high viscosity, a small pore size and a low impact velocity. Partial imbibition is mainly associated with a larger pore size and a higher impact velocity. Depending on the initial parameters, partial imbibition led to the formation of a liquid jet or a spray cone below the mesh, and subsequently to the separation of daughter droplets from the initial droplet. In the case of penetration, all the liquid penetrates below the mesh. A higher amount of liquid penetration is linked to a higher impact velocity, a lower viscosity and a larger pore size. The mesh pore size D_p and wire diameter D_w spanned 25-400 µm and 25-220 µm, respectively. Figure 6 shows an example of the different outcomes; the main characteristics of the metallic mesh are also shown at the bottom-left corner of the same figure.
Table 1. Liquid properties and the ranges of Re and We numbers used in the experiments.
To evaluate the impregnation properties of the meshes, an estimate of the liquid penetration is obtained by computing the volume of the daughter droplets ejected from the surface after the impact, or by subtracting the volume of the remaining cap above the mesh from the initial droplet volume, as highlighted in Figure 7.
The initial volume of the droplet, V_0, is calculated from the initial droplet radius r_i, assuming that the droplet is perfectly spherical before the impact:

V_0 = (4/3) π r_i^3.

Consequently, in the case shown in Figure 7a, the liquid penetration Ṽ is given by:

Ṽ = [ Σ_j (4/3) π r_j^3 ] / V_0,

which corresponds to the sum of the volumes of all the individual droplets ejected below the mesh, divided by the initial droplet volume. The assumption here is that all of these droplets have a spherical shape and that their centres lie in the same 2D vertical plane. In the case of Figure 7b, due to the complexity of the outcome, it is not possible to calculate the volume of the droplets ejected below the mesh; therefore, the liquid penetration Ṽ is simplified to

Ṽ = (V_0 − V_cap) / V_0,   with   V_cap = (π h / 6)(3 r_c^2 + h^2),

where V_0 and V_cap are the initial volume of the droplet and the volume of the cap left above the mesh respectively, r_c is the radius of the base of the cap and h its height. In this case of complex droplet penetration, part of the liquid will be trapped within the wire mesh. To take account of this non-negligible fraction, the following error estimate was used and reported in the graphs. Considering the base of the cap lying above the mesh to be perfectly circular, the area of this base is given by

A_base = π r_c^2.

The maximum volume of trapped liquid is given by this base area multiplied by the thickness of the mesh and the Ratio of Open Area (ROA) of each mesh. This ratio is given by:

ROA = D_p^2 / (D_p + D_w)^2.

Assuming that the thickness of the wire mesh is equal to twice the mesh wire diameter, the maximum volume of trapped liquid is

V_trapped,max = 2 D_w π r_c^2 ROA.

The general trend of liquid penetration for the three liquids, with the associated error bars, as a function of pore size, liquid properties and impact velocity is shown in Figure 8. For all the liquids, increasing the pore size increases the percentage of liquid penetration. At the same time, for the same pore size, increasing the impact velocity increases the percentage of liquid penetration.

To verify whether the vertical movement of the mesh due to the impact influences the percentage of penetration or the outcome, some of the experiments were repeated using a 21-gauge needle with water, acetone and W&G on surfaces with pore sizes of 25, 200 and 400 µm. To suspend a portion of the mesh, two different ring sizes were used, with diameters of 15 mm and 25 mm respectively, in order to verify whether the amplitude of the oscillation influences the outcome. No significant difference was observed in terms of outcome and percentage penetration for any of the considered liquids (Tables 3-5). It is therefore possible to conclude that the movement of the mesh has no relevant effect on the nature of the outcome.

Figure 9 shows the trends of liquid penetration for the three liquids at the same impact velocity. In Figure 9a, which corresponds to an impact velocity of 1.85 m s−1, it is shown that, for the same pore size and velocity, increasing the viscosity reduces the liquid penetration. At the same time, for a lower surface tension, as in the case of acetone, and for the same pore size and velocity, the percentage of penetration is higher. The effect of the liquid properties becomes less critical with increasing impact velocity: observing Figure 9b,c, which correspond to impact velocities of 2.70 m s−1 and 3.60 m s−1 respectively, the difference in the percentage of liquid penetration between water, acetone and water-glycerol is smaller. Figure 10a shows the trend of liquid penetration for a droplet of water at the same impact velocity but with a different initial diameter.
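The penetration and trapped-volume estimates described above lend themselves to a short script; the sketch below implements them directly, with all input radii and mesh dimensions as placeholder values rather than measured data.

```python
import numpy as np

# Minimal sketch of the penetration estimates described above; every input value
# is a placeholder, not a measurement.
def sphere_volume(r):
    return 4.0 / 3.0 * np.pi * r**3

def penetration_from_daughters(r_initial, daughter_radii):
    # case (a): sum of ejected daughter droplet volumes over the initial volume
    return sum(sphere_volume(r) for r in daughter_radii) / sphere_volume(r_initial)

def penetration_from_cap(r_initial, r_cap, h_cap):
    # case (b): (V0 - V_cap)/V0 with a spherical-cap volume for the liquid on top
    V0 = sphere_volume(r_initial)
    V_cap = np.pi * h_cap / 6.0 * (3.0 * r_cap**2 + h_cap**2)
    return (V0 - V_cap) / V0

def max_trapped_volume(r_cap, D_p, D_w):
    # base area x mesh thickness (2 D_w) x ratio of open area
    ROA = D_p**2 / (D_p + D_w)**2
    return np.pi * r_cap**2 * 2.0 * D_w * ROA

print(penetration_from_daughters(1.5e-3, [0.6e-3, 0.5e-3, 0.4e-3]))
print(penetration_from_cap(1.5e-3, 1.2e-3, 0.6e-3))
print(max_trapped_volume(1.2e-3, 400e-6, 220e-6))
```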
According to the study of Xu et al. [32], who analysed water droplet impact on meshes with different pore sizes, it is possible to define a number Ñ given by the ratio of the shadow area of the droplet to the area of a single pore. Increasing the value of Ñ, the impact velocity necessary to eject part of the droplet below the surface decreases. In our experiments it is shown that, for the same impact velocity, the percentage of liquid penetration of a droplet with a mean diameter of D_m = 3.0 mm is higher than that of a droplet with D_m = 2.1 mm, which means that for a smaller water droplet the impact velocity necessary to drive the same amount of liquid through the pores must be higher. However, when increasing the size of the pore, and specifically for pore diameters larger than 100 µm, the percentage of liquid penetration at lower velocities becomes higher for the water droplets with the smaller diameter.
When increasing the impact velocity, the effect of the initial droplet diameter on the percentage of liquid penetration becomes small (Figure 10b). Xu et al. [32] pointed out that, for a wide range of pore dimensions, it is not appropriate to refer to a constant coefficient to predict the impact velocity at which penetration will occur. This assumption is valid only when considering a single mesh geometry and without varying the liquid properties. In fact, the experiments show that, when the liquid properties are varied, for example between water and water-glycerol, penetration occurs at a different impact velocity for the same initial droplet diameter and mesh geometry. Consequently, the coefficient Ñ is not sufficient to estimate the velocity at which penetration will occur.
Numerical Investigation
Numerical simulations of one of the metallic meshes presented in the previous section are performed for the case D_p = 400 µm and D_w = 220 µm, and compared with the experiment. The corresponding experimental conditions are summarised in Table 6. The numerical results are presented and analysed in order to provide new insights into the resulting phenomenon, giving quantitative details of the droplet impact and penetration characteristics. For the generation of the geometry of the metallic mesh, the cloud-based CAD software Onshape® is used. In Figure 11, the actual metallic mesh geometry from the experiments and the corresponding CAD model are depicted. For the grid (mesh) generation, the snappyHexMesh (sHM) utility of OpenFOAM is utilised. In Figure 12, the computational domain, the boundary conditions and the metallic mesh position after using the snappyHexMesh utility are illustrated. To reduce the total number of cells, and hence the computational time and cost, one-fourth of the total 3D domain with symmetry planes is simulated.
As can be seen from Figure 12, in total four cell refinement levels were applied for numerical cases I and III, with a minimum cell size of 12.5 µm in the region where the metallic mesh is located and a maximum size of 200 µm close to the edges of the computational domain. In cases II and IV, three refinement levels with a minimum cell size of 25 µm and a maximum cell size of 200 µm, close to the computational domain edges, were used. Therefore, the computational domain for all cases consisted of a hybrid grid with a high number of hexahedral cells and a small number of polyhedral cells, the latter applied only close to the curved walls of the wires.
Two separate droplet placements are considered to study the sensitivity of the results to the initial position of the droplet relative to the mesh (Figure 12f,g). The droplet is centred above two wires for cases I and II, while for cases III and IV the droplet is centred above one wire. The overall differences between these four numerical simulation set-ups are summarised in Table 7 and are further described in the following paragraphs.
Numerical Simulation Results for Droplet Impact on Metallic Meshes
All simulations presented in this sub-section refer to the same experimental conditions (as shown in Table 6). The differences related to the domain dimensions, the total number of cells, the overall levels of refinement and the initial position of the droplet can be found in Table 7.
Table 6. Initial droplet parameters and characteristics for the considered experiment at t = 0.00 ms: initial droplet diameter D_0, impact velocity U_0, We and Re numbers, and advancing θ_a and receding θ_r contact angles of the droplet.
A macroscopic/qualitative comparison of the numerical predictions with the experimental measurements is shown in Figure 13. As can be observed, all four numerical simulations show good overall agreement with the experimental data. Specifically, during the advancing phase of the droplet, both experimentally and numerically, the liquid crosses the metallic mesh generating a number of fingers, which become thinner and longer as time passes. Both the high and the lower resolution numerical simulations show good agreement with the experimental data. During the receding phase, the fingers in the experimental data are thinner than in the numerical cases, where the fingers are thicker and fewer in number. Subsequently, during the receding phase, the disintegration of the first droplets occurs at t = 2.50 ms for numerical case I and t = 2.60 ms for case II, while in the experiment the first disintegration occurs at t = 2.40 ms. Accordingly, the first disintegration of droplets for cases III and IV occurs at t = 2.50 ms and t = 2.70 ms, respectively. This suggests that the time at which the first droplet disintegrates is not related to the droplet impact position relative to the mesh. Additionally, comparing numerical simulations I and II, the disintegration of the first droplet is delayed by only 0.10 ms in the 4M-cell simulation. Hence, it can be concluded that the lower number of cells does not significantly affect the results up to that time instant. At later times, in both the experimental and numerical cases, daughter droplets form from the break-up of the previously formed fingers. From the numerical results it can be observed that there is no difference between the two resolutions regarding the diameters of the disintegrated droplets. However, a small difference in the thickness of the fingers can be observed (Figure 13): the numerical cases with 4M cells cannot capture the thin fingers of the experimental results due to the coarser mesh, while the higher resolution cases capture them better. Since the main aim of the numerical investigation is to quantify the volume of liquid retained within the metallic mesh, the resolution of the 4M-cell domain (case II) in the vicinity of the mesh can be considered adequate for this purpose. However, if the focus is to predict the diameter of the daughter droplets after impact and droplet penetration through the metallic mesh, then denser cells should be added in these regions. This conclusion can also be observed quantitatively in Figure 14, by comparing the dimensionless volume v* = V_t/V_0 of liquid that passes below the metallic mesh (where V_t is the volume of liquid below the mesh at each time instant) over the dimensionless time t* (t* = tU_0/D_0). As can be seen, the first measurement point of the experiment differs significantly from all numerical cases.
This is because the experimental results are limited by the fact that it is not possible to visualise the droplets captured by the wire mesh, while in the numerical cases this liquid volume is accounted for. As a result, the percentage of liquid penetration is underestimated by the experiment: during the spreading a larger portion of the mesh is covered by the liquid phase, and consequently a higher percentage of the droplet is entrapped in the wires. It should be mentioned that an error bar of 10% has been added to each experimental value. Comparing the numerical cases only, good agreement can be observed between the 4M-cell cases (cases II and IV) and the 31.9M-cell cases (cases I and III); the maximum differences are of the order of 5%. This further justifies, quantitatively, the previous qualitative conclusion. Therefore, for the parametric numerical simulations of the present article, case II is selected as the most appropriate computational set-up, considering that it provides adequate accuracy in the prediction of the amount of liquid that passes below the metallic mesh while requiring a lower computational cost.
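For completeness, the non-dimensionalisation used in Figure 14 amounts to the simple conversion sketched below; the droplet diameter, impact velocity and volume series are illustrative placeholders, not the values of Table 6 or the simulation output.

```python
import numpy as np

# Minimal sketch of the non-dimensionalisation of Figure 14; all values below are
# illustrative placeholders.
D0, U0 = 3.0e-3, 1.8                              # assumed droplet diameter [m], velocity [m/s]
V0 = np.pi / 6.0 * D0**3                          # initial droplet volume

t = np.array([0.0, 1.0e-3, 2.0e-3, 3.0e-3])       # time [s]
V_below = np.array([0.0, 0.3, 0.7, 0.9]) * V0     # liquid volume below the mesh

t_star = t * U0 / D0                              # t* = t U0 / D0
v_star = V_below / V0                             # v* = V_t / V0
print(np.column_stack([t_star, v_star]))
```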
Parametric Numerical Simulations
Numerical simulations of "virtual liquid" droplets impacting on metallic meshes are presented in the present section. "Virtual liquid" in this context refers to liquids where only one property is changed with respect to water (e.g., viscosity) while all the other fluid properties remain unchanged. These simulations are performed to better understand the effect of the considered parameter, as this is something that cannot be performed with laboratory experiments. Particular attention is given to the effect of the varied parameter on droplet penetration characteristics, which is evaluated as the dimensionless volume of liquid that remains attached to the metallic mesh region with respect to time.
The parametric analysis uses the same numerical parameters as case II in Table 7, as well as the same conditions as in Table 6. The details of each numerical simulation performed in the proposed parametric analysis are summarised in Table 8. Additional results of case II can be seen in Figure 15, where useful information that can be easily extracted from the numerical model (e.g., velocity, relative pressure) is depicted. In particular, Figure 15a,b shows a 3D representation of the drop and the volume fraction. From these figures, the fast crossing of the droplet during the advancing phase is clearly visible, since approximately 90% of the liquid has crossed the mesh within 3.0 ms after the impact. Figure 15c shows the relative pressure, which is concentrated on the wires, where the contact with the liquid occurs, and decreases as the drop passes through the mesh. The corresponding velocity magnitude (U) is shown in Figure 15d: after the droplet decelerates to 0 m s−1 on the mesh, a velocity increase is seen in the liquid below the mesh.
Table 8. Numerical cases performed for investigating the effects of dynamic viscosity (µ), surface tension (σ) and wettability (θ_a and θ_r). The parameter that is changed in each case is indicated in bold. Case II corresponds to the base case.
Influence of Reynolds Number
First, the influence of the Reynolds number on the droplet impact on metallic meshes is investigated by performing numerical simulations of droplets with various viscosities (µ). Four different numerical simulations are conducted (numerical cases III a-d), with the viscosity value increasing by a factor of 2.5 in each case.
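A compact way to picture this viscosity series is sketched below: starting from a water-like base liquid, each virtual case multiplies only the viscosity by a further factor of 2.5. The base-property numbers are placeholders standing in for the actual values of Tables 6 and 8.

```python
# Minimal sketch of the "virtual liquid" viscosity series (cases III-a to III-d):
# only the viscosity is scaled, by successive factors of 2.5, while the remaining
# properties of the base liquid are kept fixed. The base values are placeholders.
base = {"rho": 998.0, "mu": 1.0e-3, "sigma": 0.0728}

cases = {}
for i, label in enumerate(["III-a", "III-b", "III-c", "III-d"], start=1):
    liquid = dict(base)
    liquid["mu"] = base["mu"] * 2.5**i     # only the viscosity changes
    cases[label] = liquid

for label, liq in cases.items():
    print(label, f"mu = {liq['mu']:.2e} Pa s")
```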
The effect of Re at three different time instants can be seen in Figure 16, where case II is compared with cases III-a to III-d. Qualitatively, during the advancing phase, and particularly at t* = 0.7, there is no significant difference between the cases, apart from the fact that the cases with higher viscosity show a larger maximum contact diameter than case II. Other features, such as the length and the thickness of the fingers formed below the mesh, are the same in all cases. Subsequently, the effects of the Re number, and the resulting differences between the cases, become more significant. In particular, at t* = 2.7 it is evident that the gradual decrease of Re causes a corresponding decrease in the amount of liquid that has penetrated below the metallic mesh structure. This is apparent from the higher quantity of liquid traced above the metallic mesh structure, as well as from the fact that the detached small droplets below the metallic mesh have travelled a progressively smaller distance along the vertical axis. The same is apparent at t* = 6.0. Furthermore, observing all cases, it is evident that the gradual decrease of Re causes the part of the droplet that stays above the metallic mesh structure to recoil faster. In particular, in the case of the lowest Re, no hanging droplet volume can be traced below the metallic mesh, which indicates an almost complete droplet recoil. Focusing on the number and size of the disintegrated daughter droplets after penetration, no significant effect of the gradual increase of Re can be observed. Therefore, it can be concluded that Re, and hence the liquid viscosity, mainly affects the adhesion characteristics of the impacting droplet to the metallic mesh structure and not the break-up characteristics of the liquid that penetrates through.
In order to conduct a more quantitative comparison of the effect of the Re number, the dimensionless volume of liquid that remains above the metallic mesh is plotted with respect to time for all five cases in Figure 17. It is shown that, as Re decreases, the final droplet percentage that remains above the metallic mesh increases progressively from almost 0% (case II) to approximately 20%, 40%, 45% and 60% for cases III-a, III-b, III-c and III-d, respectively. From the same figure it is also characteristic that, for the four examined cases, an increase of the liquid located above the metallic mesh is observed during the recoiling phase before the volume finally stabilises. This is because the liquid that hangs below the mesh structure, and which earlier disintegrated into smaller droplets, is sucked back into the recoiling mother droplet located above the mesh. Additionally, it can be seen that there is a threshold of Re above which total penetration of the liquid is observed; however, further numerical investigation with different pore sizes is required in order to gain a clearer insight into this. The results for the liquid volume above the metallic mesh can be related to the dimensionless Ohnesorge number (Oh = √We/Re). The Ohnesorge number in case II is Oh = 2.26 × 10−3, while the virtual fluid of case III-d has a value of Oh = 2.26 × 10−2. This value is very close to the critical value Oh = 3 × 10−2, at which, according to Blanchette & Bigioni [33], viscosity plays an important role in the behaviour of the droplet, for example in droplet merging [34].
The capillary number (Ca = µU_0/σ) represents the ratio of the viscous to the surface tension forces, and is linked to the Ohnesorge number by the relation Ca = Oh√We. In this case (Ca = 0.25), the capillary forces are negligible compared to the dominant viscous forces; a value of less than Ca = 10−5 would be needed for the flow to be dominated by capillary forces [35].
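The relations Oh = √We/Re and Ca = Oh√We can be checked directly against the quoted values; the small script below does so, and the resulting Ca ≈ 0.25 is consistent with the most viscous virtual liquid of the series (an inference, since the text does not state which case the quoted Ca refers to).

```python
import numpy as np

# Consistency check of the quoted Oh values using Ca = Oh*sqrt(We) with We = 126.
We = 126.0
for label, Oh in [("case II", 2.26e-3), ("case III-d", 2.26e-2)]:
    print(label, "Ca =", round(Oh * np.sqrt(We), 3))
# case III-d gives Ca ~ 0.25, matching the value discussed above
```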
Influence of Weber Number
In order to gain a better understanding of the effect of the Weber number on the resulting phenomenon, two numerical simulations were conducted with approximately one-third and two-thirds of the surface tension value of case II. The resulting We values for cases IV-a and IV-b are 416 and 194, respectively. Both cases are compared with the results of the base case II, where We = 126.
From the qualitative (Figure 18) and quantitative (Figure 19) comparisons in this series of parametric numerical experiments, it is evident that decreasing the We number has almost no effect on the amount of water that penetrates the metallic mesh, but it has a significant effect on the droplet break-up characteristics below the metallic mesh structure. In particular, as can be seen from Figure 18, the qualitative comparison of cases IV-a and IV-b with case II demonstrates that, for the early stages of the impact (up to t* = 2.7), higher values of We lead to the generation of more fingers that are longer and thinner. Furthermore, the lower the We number, the slower the disintegration of the fingers into smaller daughter droplets.
Quantitatively, the diagram illustrated in Figure 19 shows that increasing the We number results, at the early stages (t* < 3), in a faster crossing of the liquid through the metallic mesh. This agrees with the investigation conducted by Malla et al. [36], who concluded that a lower surface tension, or higher We values, promotes a faster droplet spreading velocity. However, even though faster crossing is observed for the cases with lower surface tension, the amount of liquid that finally remains above and entrapped within the metallic mesh is similar for all cases. It can also be observed that the fastest penetration of the liquid takes place during the advancing phase, and particularly from t* = 0.55 up to t* = 1.1, where, for the higher We cases, 4-6% less water has remained above the mesh compared to case II. At t* = 2.05, more than 96% of the liquid has crossed the mesh. The exact values of the liquid that remained above the mesh are presented in Table 9.
From all the above, it can be concluded that the We number has a negligible effect on the amount of liquid that penetrates the metallic mesh, but it has a significant effect on the finger formation and break-up characteristics below the metallic mesh.
Influence of Wettability
In order to investigate the effect of wettability, three additional numerical simulations are performed, where the only parameters that vary are the advancing θ a and receding θ r contact angles. For a clearer interpretation, these three simulations correspond to hydrophilic, hydrophobic and superhydrophobic surfaces.
The values of θ_a and θ_r for the hydrophilic surface (case V-a) are 60° and 43°, respectively (50° less than the base case), while for the hydrophobic surface (case V-b) θ_a and θ_r are 115° and 98° (5° higher than the base case). The reason for performing case V-b, in which the contact angles are very close to those of case II, was mainly to assess the sensitivity of the results to small changes in the contact angles.
Cases II, V-a and V-b have the same contact angle hysteresis (CAH), ∆θ = θ_a − θ_r = 17°. Lastly, case V-c corresponds to a superhydrophobic surface, where θ_a and θ_r are 162° and 154° respectively, and ∆θ = 8°. This is in accordance with the definition of a superhydrophobic surface [8], according to which θ_a and θ_r should be above 150° and ∆θ should be lower than 10°.
From the qualitative comparison of Figure 20, it is clear that during the advancing phase, and particularly at t* = 0.7, the small difference in the advancing and receding contact angle values between the two hydrophobic cases has not affected the shape of the droplet. However, there are significant differences between the hydrophilic and the superhydrophobic cases. For the hydrophilic case (θ_a = 60° and θ_r = 43°), which has the lowest contact angles, the main difference is that the fingers below the mesh are shorter. On the other hand, for the superhydrophobic case (θ_a = 162° and θ_r = 154°), the fingers are longer and thinner. At the non-dimensional time instant t* = 2.7, the hydrophilic surface is the only case in which no disintegration of the droplet has occurred. The case with θ_a = 115° and θ_r = 98° shows the closest results to the base case; however, the increase of the contact angles has resulted in a faster break-up of the fingers as well as larger diameters of the daughter droplets. Conversely, the daughter droplets are smaller for the superhydrophobic surface case. Furthermore, the high contact angles of the superhydrophobic surface have resulted in a total rebound of the droplet segments that have remained above the mesh. For the same case, at t* = 6.0, the rebound of the droplets continues, while below the mesh the droplets are smaller in diameter and some of them have even reached the downstream boundary of the computational domain.
From the qualitative comparison it seems that the superhydrophobic surface, during the advancing phase of the droplet, is the case with the fastest crossing of the liquid, due to the longer fingers. However, from the quantitative comparison shown in Figure 21, it can be seen that the hydrophilic surface has the highest rate of liquid crossing the mesh after the impact, during the advancing phase. Additionally, in the hydrophilic case the recoiling phase begins 0.15 dimensionless time units earlier than in the base case. During this phase, the hydrophilic surface is also the case with the least amount of liquid above the mesh: as can be seen in Table 9, more than 84% of the total amount of liquid is already located below the mesh within the first 1.1 ms (t* = 0.75), making it the case with the fastest crossing rate of liquid through the mesh during the advancing phase. However, from t* = 1.5 up to the last available time instant, a gradual increase of the dimensionless volume of liquid above the mesh is observed. The investigated contact angles are: θ_a = 60°, θ_r = 43° (case V-a, hydrophilic surface); θ_a = 110°, θ_r = 93° (case II, hydrophobic surface); θ_a = 115°, θ_r = 98° (case V-b, hydrophobic surface); and θ_a = 162°, θ_r = 154° (case V-c, superhydrophobic surface).
As mentioned previously, the two hydrophobic surfaces show the closest results, both from a quantitative and a qualitative point of view. This was expected, since the difference in the advancing and receding contact angles between the two cases is only 5°. However, even though this difference is small, an overall faster crossing of the liquid is observed for the case with the higher contact angles (θ_a = 115° and θ_r = 98°, case V-b). For the superhydrophobic surface case, on the other hand, a different droplet behaviour as well as a faster crossing of the liquid can be seen. In more detail, the end of the advancing phase of the droplet is at t* = 1.5, which is 0.6 later than that of case II. When the receding phase begins, the volumes of liquid above and below the mesh remain approximately constant until t* = 2.0, when there is no longer any connection between the drops above the mesh and the drops below it. Moreover, since a complete rebound of the droplet takes place above the mesh, the volume of liquid in contact with the metallic mesh is close to zero. At the last available non-dimensional time instant, t* = 6.0, significant differences between the cases can be seen regarding the volume of water above the mesh. Specifically, the hydrophobic case with the high contact angles (case V-b) has only 0.13% of the liquid above the metallic mesh, which makes it the case with the least liquid entrapped. The hydrophilic case has the second lowest percentage of liquid above the mesh, with only 1% of the liquid remaining above it. On the other hand, 43% of the liquid in the superhydrophobic surface case remains above the mesh; however, the superhydrophobic properties of the surface have caused a complete rebound of the droplet, resulting in an almost zero amount of liquid being in contact with the metallic mesh.
Conclusions
The development of porous materials has attracted the attention of the research community over the past decades. Porosity characteristics have specific impacts on material properties, and such materials are applied in several areas such as painting and ink-jet printing. In this study, a numerical investigation of droplet impact phenomena on suspended metallic meshes was conducted to identify and quantify the effects of fundamental controlling parameters on the penetration characteristics of the droplets as they pass through the metallic mesh structure, and to give further insights into the experimental measurements presented in this study. For this purpose, an enhanced VOF-based numerical simulation framework was utilised. Initially, additional validation studies of droplets impacting onto solid substrates were conducted. Then, a specific experiment of a droplet impacting on a suspended metallic mesh was reproduced numerically, with satisfactory agreement with the experimental data. It has been illustrated that the proposed numerical simulation allows detailed quantification of parameters that are difficult to obtain experimentally. Subsequently, three different series of parametric numerical investigations were conducted in order to isolate, identify and quantify the individual effects of the Reynolds (Re) and Weber (We) numbers, as well as the metallic mesh surface wettability characteristics. From the overall analysis of the numerical predictions the following conclusions can be drawn:
(1) This VOF-based numerical simulation framework can provide accurate predictions for high We number droplet impacts on flat surfaces.
(2) The VOF-based numerical framework can predict relatively well more complex droplet impact phenomena, such as droplets impacting on suspended metallic meshes.
(3) From the parametric numerical investigation it is evident that the Reynolds number has a quite significant effect on the adhesion characteristics of the impacting drops to the metallic mesh structure, but plays only a minor role in the break-up characteristics of the liquid volume that passes through.
(4) In comparison, the value of the We number has a negligible effect on the amount of liquid that penetrates the metallic mesh structure, but it has a quite significant effect on the droplet break-up characteristics below the metallic mesh structure.
(5) Finally, it has been shown that the wettability characteristics of the metallic surface in the mesh structure have a profound effect both on the liquid penetration characteristics and on the break-up characteristics, significantly affecting the diameter and the total number of the daughter droplets created below the mesh.
For further investigations, the VOF-based numerical framework needs to be improved by implementing a dynamic/adaptive mesh refinement technique, in order to significantly reduce the currently prohibitive computational cost of mesh-independent 3D numerical solutions. This will enable the proposed numerical simulation framework to be used effectively to complement the experimentally derived droplet impact regime maps for such complex droplet impact investigations, by providing data points that are difficult or even impossible to obtain experimentally, due to the limitations in operating conditions and laboratory measuring techniques.
"Physics"
] |
Analysis of the asymmetrically expressed Ablim1 locus reveals existence of a lateral plate Nodal-independent left sided signal and an early, left-right independent role for nodal flow
Background: Vertebrates show clear asymmetry in left-right (L-R) patterning of their organs and associated vasculature. During mammalian development a cilia-driven leftwards flow of liquid leads to the left-sided expression of Nodal, which in turn activates asymmetric expression of the transcription factor Pitx2. While Pitx2 asymmetry drives many aspects of asymmetric morphogenesis, it is clear from published data that additional asymmetrically expressed loci must exist.
Results: An L-R expression screen identified the cytoskeletally-associated gene, actin binding lim protein 1 (Ablim1), as asymmetrically expressed in both the node and the left lateral plate mesoderm (LPM). LPM expression closely mirrors that of Nodal. Significantly, Ablim1 LPM asymmetry was detected in the absence of detectable Nodal. In the node, Ablim1 was initially expressed symmetrically across the entire structure, resolving to give a peri-nodal ring at the headfold stage in a flow- and Pkd2-dependent manner. The peri-nodal ring of Ablim1 expression became asymmetric by the mid-headfold stage, showing stronger right- than left-sided expression. Node asymmetry became more apparent as development proceeded; expression retreated in an anticlockwise direction, disappearing first from the left anterior node. Indeed, at early somite stages Ablim1 shows a unique asymmetric expression pattern, in the left lateral plate and to the right side of the node.
Conclusion: Left LPM Ablim1 is expressed in the absence of detectable LPM Nodal, clearly revealing the existence of a Pitx2- and Nodal-independent left-sided signal in mammals. At the node, a previously unrecognised action of early nodal flow and Pkd2 activity, within the pit of the node, influences gene expression in a symmetric manner. Subsequent Ablim1 expression in the peri-nodal ring reveals a very early indication of L-R asymmetry. Ablim1 expression analysis at the node acts as an indicator of nodal flow. Together, these results make Ablim1 a candidate for controlling aspects of L-R identity and patterning.
Background
While vertebrates are externally mirror symmetrical between their left and right sides, internally the positioning and patterning of their organs and vasculature show marked left-right (L-R) asymmetry. The heart and its associated vasculature, the lungs and various elements of the gut show distinct L-R asymmetric patterning. The importance of correctly establishing L-R asymmetry is evident when the association between situs defects and disease is analysed. A strong association is evident with congenital heart disease [1] while links also exist with ciliary dyskinesia, cystic kidney disease and extrahepatic biliary atresia [2,3].
During mammalian development, the first morphological sign of L-R asymmetry is the looping of the primitive heart tube, initially to the right. Shortly after this the embryo begins to undergo embryonic turning, the process that results in the embryo taking up the classic foetal position. This occurs in an L-R asymmetric manner such that the caudal-most region of the embryo passes to the right side of the head. These morphological asymmetries are, however, prefigured by molecular asymmetries. Work over the past decade has resulted in a broadly accepted model explaining how L-R asymmetry is established in the mammalian embryo (reviewed [4]).
Initial asymmetry is believed to be established when posteriorly tilted cilia within the embryonic node rotate to drive a leftwards flow of liquid (nodal flow). The role of flow in establishing situs was demonstrated in elegant experiments applying artificial flow to embryos in culture; flow reproducibly directed normally left-sided gene expression downstream of the direction of flow [5]. The question of how the embryo perceives nodal flow remains unresolved, although various models exist. One model argues that a morphogen is carried leftwards by the flow [6]. A second, the two cilia model [7], argues that mechanosensory cilia directly sense nodal flow, resulting in a left-sided intracellular calcium signal. The third model argues that membrane bound vesicles, termed nodal vesicular parcels (NVPs), are carried leftwards by the flow, breaking on the left side of the node to release a cargo of morphogens [8]. At present no one model fully explains all the existing experimental data [9]. The resulting signal at the left side of the node is then communicated several cell diameters to the left lateral plate, possibly through intracellular calcium signalling [7].
In the left lateral plate, the gene encoding the signalling molecule Nodal is asymmetrically activated downstream of nodal flow. Little is known of the mechanism of this activation. Nodal is at the top of a left-sided genetic cascade, auto-activating its own expression as well as that of its antagonist Lefty2 and the downstream transcription factor Pitx2 (reviewed [4]). All are asymmetrically expressed prior to the appearance of morphological asymmetry [10]. While Nodal and Lefty2 are expressed for only 6-8 hours [10], asymmetric Pitx2 expression is maintained into organogenesis and has been argued to be the ultimate effector of left identity [11,12]. However, Pitx2 null embryos do not lose all aspects of left sidedness, making it clear that additional uncharacterised signals help distinguish the left and right sides of the early embryo [4].
We set out to identify additional asymmetric genes using a microarray-based approach. This resulted in the identification of asymmetric expression of actin binding lim protein 1 (Ablim1), a gene showing asymmetry of expression in both the left LPM and the node. The lateral plate expression broadly mirrors that of Nodal, yet, uniquely for a left LPM-expressed locus, we demonstrate that it can be asymmetrically expressed in the absence of detectable LPM Nodal. This makes it clear that there is an additional uncharacterised asymmetric LPM signal. Initial Ablim1 expression in the ventral node was downregulated and peri-nodal expression upregulated in a flow- and Pkd2-dependent manner, revealing an asymmetry-independent role for flow in regulating gene expression within the node. The first node asymmetry was seen at the late headfold stage, earlier than any previously characterised L-R asymmetry in the mouse. Subsequently, the peri-nodal ring of expression retreated asymmetrically, moving around the node in a clockwise direction. Together these results reveal a Nodal-independent asymmetric LPM signal and make Ablim1 a candidate for controlling aspects of L-R identity and patterning.
Ablim1: a novel mammalian L-R asymmetric locus
To identify genes asymmetrically expressed between the left and right sides of the developing embryo, we compared gene expression using the MRC Mouse Known Gene Oligo Array printed array slides (Mm_SGC_Av2), which identify 7455 known loci. Left and right lateral plate tissue was dissected from 3-6 somite embryos, and pools from 4 embryos were used to prepare RNA. Following SMART PCR amplification, hybridisation and analysis were performed as described in the Materials and Methods. The results from 4 replicates were analysed, and lists ranking the apparent degree of left- or right-sided asymmetry were generated (see Additional file 1). The presence of the known left-specific gene Pitx2 at the top of the left-sided list demonstrated the validity of the approach. Significantly, Nodal, though present on the array, was not identified; Lefty2 was not present on the array. We further analysed the expression of 7 loci from the left-sided and 6 from the right-sided list by RNA wholemount in situ hybridisation (WISH) on 8.5 dpc embryos. Of the 13 loci examined, only one, Ablim1, showed apparent L-R asymmetry of expression (Fig. 1 and data not shown). Expression in the left LPM was clearly stronger and more extensive than in the right, while a second asymmetric domain was visible at the node. Initial analysis of a few embryos showed expression predominantly on the right hand side of the node.
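For readers interested in how such a ranking can be produced, the sketch below orders loci by their mean left/right log-ratio across the four replicate hybridisations; the data array is random placeholder data, not the actual microarray measurements, and the real analysis pipeline may have differed.

```python
import numpy as np

# Minimal sketch of ranking loci by apparent L-R asymmetry: genes are ordered by
# their mean left/right log-ratio across four replicates. The data are random
# placeholders, not the actual hybridisation results.
rng = np.random.default_rng(0)
genes = [f"gene_{i}" for i in range(7455)]
log_ratio = rng.normal(0.0, 0.5, size=(4, len(genes)))   # 4 replicates x 7455 loci

mean_lr = log_ratio.mean(axis=0)
left_candidates  = [genes[i] for i in np.argsort(mean_lr)[::-1][:10]]   # left-enriched
right_candidates = [genes[i] for i in np.argsort(mean_lr)[:10]]         # right-enriched
print(left_candidates)
print(right_candidates)
```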
Performing WISH using the full cDNA, we examined Ablim1 expression in a developmental series of embryos, from 6.5 dpc to 9.5 dpc (Fig. 1b-e). Expression was detected in the yolk sac of 6.5 dpc embryos, initially as patches (Fig. 1b) that became a distinct ring of expression by 7.5 dpc (Fig. 1c), consistent with it marking the prospective blood islands. At 7.5 dpc, expression was also seen in the developing head folds (Fig. 1c), an expression pattern maintained through 9.5 dpc (Fig. 1d, e). Ablim1 expression in the node was seen from 7.5 dpc (Fig. 1c). By 8.5 dpc L-R symmetric expression was evident in the developing heart. Strikingly, asymmetric expression was also seen in the LPM; stronger and more extensive on the left than the right (Fig. 1d). By 9.5 dpc expression was evident in the head, the somites and portions of the heart, but no L-R asymmetry was seen at this stage (Fig. 1e, data not shown).
Two classes of Ablim1 transcript show asymmetric expression in the lateral plate and the node respectively
Analysis of the published data [13] and ESTs (http://www.ensembl.org) demonstrates the existence of multiple Ablim1 transcripts, exhibiting both alternative splicing and multiple alternative first exons. Northern blot and RT-PCR analysis confirmed the existence of Ablim1 transcripts in 7.5 and 8.5 dpc embryos (data not shown). While multiple protein isoforms of Ablim1 exist, they comprise two major classes: long forms containing lim domains and a villin head piece (VHP), and a short form lacking the lim domains [13] (Fig. 1a). We investigated the temporospatial distribution of the transcripts encoding these isoforms using WISH probes hybridising to different portions of the Ablim1 message: probes corresponding to the beginning (Ex 2-6), the common region (Ex 8-25) and the 3'UTR (3'UTRa and b; Fig. 1). The Ex 2-6 probe, encompassing the lim domains, revealed expression in the developing heart (Fig. 1f) as well as on the right side of the node when WISH colour development was extended for longer times (Fig. 1j), but not in the lateral plate. The Ex 8-25 probe revealed asymmetric expression on the right side of the node and in the left lateral plate; symmetric expression was evident in the head and heart (Fig. 1g, k). The 3'UTRa probe showed similar expression to Ex 8-25, although node expression was significantly weaker when compared to the other expression domains (Fig. 1h, l). The terminal 3'UTR probe, 3'UTRb, showed similar expression but failed to detect node expression, arguing for a shorter 3'UTR in the node transcripts (Fig. 1i, m). Together these results argue that a transcript encoding a long isoform containing both lim domains and VHP is present in the node, while a shorter isoform (with no lim domains) is present in the left lateral plate; it is not possible to say from our WISH or RT-PCR analysis whether the short isoform is also present in the node.
Asymmetric Ablim1 lateral plate expression mirrors Nodal
Nodal, often thought of as the master gene controlling left-sided gene expression, is expressed in the left but not right lateral plate from 3-6 somite stages [14,15]. If asymmetric Ablim1 expression is directly controlled by Nodal, it would be expected to exhibit similar temporospatial expression and not be expressed asymmetrically prior to asymmetric Nodal expression. We therefore examined the temporospatial expression of Ablim1 from the late headfold to the 10 somite stage by WISH (using the 3'UTRa probe). Up to and including the 2 somite stage, bilaterally symmetrical expression was seen in the anterior lateral plate, contiguous with expression in the heart (Fig. 2a). By 3 somites expression in the LPM became clearly asymmetric, with expression in the left lateral plate being both stronger and stretching noticeably further posteriorly than in the right (Fig. 2b). This asymmetric expression was highly evident at 5 somites, being particularly obvious when the WISH is developed for a short time (Fig. 2c). By 7 somites, after lateral plate Nodal expression has ceased, asymmetric left lateral plate expression of Ablim1 was strongly downregulated, although still evident at a low level in some embryos (Fig. 2d). By 8 somites, no sign of asymmetric lateral plate expression was evident (Fig. 2e). Analysis of sections revealed that lateral plate expression was present throughout the lateral plate mesoderm (Fig. 2f, g) similar to Nodal. These data are consistent with the hypothesis that Nodal activates Ablim1. As with our previous results, expression of Ablim1 in the node was apparent when the WISH colour development was left for longer periods of time ( Fig. 1h, l, 2b).
Left lateral plate Ablim1 expression occurs in the absence of detectable LPM Nodal
To further test the hypothesis that Nodal activates asymmetric Ablim1 expression, Ablim1 lateral plate expression was analysed in mutants with abnormal L-R patterning. Dnahc11 encodes a dynein heavy chain required for nodal cilia motility: the point mutant Dnahc11 iv (iv) results in immotile nodal cilia, absence of nodal flow, and randomisation of both situs and Nodal lateral plate expression [15-18]. When Ablim1 lateral plate expression was analysed in iv mutants, a mixture of expression patterns was seen (Table 1), similar to those previously reported for Nodal in the iv mutant [15], including left-sided (Fig. 3a), bilateral (Fig. 3b) and right-sided (Fig. 3c) expression. These data show that Ablim1 asymmetry is downstream of nodal flow, similar to Nodal asymmetry, and are consistent with Ablim1 asymmetry being downstream of Nodal. This hypothesis was further supported by analysis of Shh mutant embryos; Shh mutants do not express the Nodal antagonist Lefty1 in the midline, resulting in bilateral Nodal expression [19]. Consistent with a role for Nodal upstream of lateral plate Ablim1 expression, bilateral Ablim1 expression was seen in Shh−/− embryos (data not shown and Table 1).
We sought to analyse Ablim1 expression in embryos where Nodal LPM expression was strongly downregulated. Nodal contains an intronic enhancer (ASE) that is responsible for positive feedback regulation of its asymmetric expression in the left LPM. The Nodal D600 allele lacks this ASE and consequently Nodal D600/D600 embryos express extremely low levels of Nodal in the LPM [20,21]. This results in delayed and posteriorised activation of the Nodal target Pitx2, leading to L-R patterning defects. When Ablim1 expression was analysed in Nodal D600/D600 embryos, ~50% of embryos showed asymmetric expression in the left LPM (Fig. 3d; Table 1); the remainder were symmetrical, showing no LPM expression. However, the frequency at which no lateral plate Ablim1 expression was detected in Nodal D600/D600 embryos was far higher than in wild type (Table 1). It is possible that the change in Ablim1 expression is a consequence of the delayed activation of the transcription factor Pitx2 in the Nodal D600/D600 embryos. We therefore analysed Ablim1 expression in Pitx2 mutants; Pitx2c is the isoform expressed asymmetrically in the left lateral plate mesoderm and Pitx2c-/- embryos show defects in L-R patterning [22]. All 5 Pitx2c-/- embryos analysed showed wild type Ablim1 expression (Table 1; Fig. 3e), demonstrating that Pitx2c expression is not required for Ablim1 asymmetry.
It seemed possible that the low level of LPM Nodal expressed in Nodal D600/D600 embryos is borderline for activating asymmetric expression of Ablim1 in the left LPM and therefore leads to stochastic activation of the locus.

Figure 2. Ablim1 lateral plate asymmetry mirrors Nodal. Ablim1 is bilaterally symmetrical at 2 somites (a), becoming obviously asymmetric by 3 somites (b), when left lateral plate expression is stronger and more extensive than on the right. This WISH has been developed for longer, allowing the asymmetric node expression to be visualised (open arrowhead). LPM asymmetry remains obvious at 5 somites (c) and some left LPM asymmetry is visible at 7 somites (d). Symmetrical expression in the more posterior somites is also evident. By 8 somites, all L-R asymmetry has been lost (e). Histology shows that Ablim1 is expressed throughout the lateral plate mesoderm (f, g). The posterior-most extent of LPM expression is indicated by a closed arrowhead. In all panels the 3'UTRa probe has been used.

To determine whether the 50% of Nodal D600/D600 embryos that expressed asymmetric LPM Ablim1
resulted from low level Nodal expression, we analysed lateral plate Ablim1 expression in Nodal Δnode/− embryos. The Nodal Δnode allele lacks the enhancer that drives Nodal expression in the node [23]. Nodal Δnode/− embryos have no, or occasionally a minimal amount of, Nodal expression at the node and no detectable Nodal, Lefty2 or Pitx2 in the LPM [23]. When Ablim1 expression was analysed in Nodal Δnode/− mutant embryos, 6 of the 25 analysed (24%) showed clear asymmetric left LPM expression (Table 1, Fig. 3g). The remaining 19 (76%) showed no LPM Ablim1 expression on either the right or left side (Table 1, Fig. 3f). These data clearly demonstrate that Ablim1 can be asymmetrically expressed in the LPM in the absence of detectable LPM Nodal signalling and therefore suggest that this asymmetric expression is regulated by signalling cues that are independent of the Nodal cascade. The reduced frequency of asymmetric Ablim1 expression in Nodal Δnode/− embryos, however, suggests a role for Nodal in the robustness of Ablim1 lateral plate asymmetry. When we re-analysed these data with respect to the stage of development (Table 2), we saw no asymmetric LPM Ablim1 expression before 4 somites. Approximately 30% of 4-5 somite embryos showed asymmetric LPM Ablim1 expression. This increased to 50% when the 6-7 somite embryos were analysed. Together these data demonstrate that detectable LPM Nodal is required for early asymmetric left LPM Ablim1 expression, but that, in its absence, a second asymmetric system can activate Ablim1 asymmetry.
Ablim1 shows no functional ASE
In the lateral plate, Nodal's auto-activation of its own expression, as well as its activation of Lefty2 and Pitx2 expression, is mediated by binding of FoxH1 to asymmetric elements (ASEs). ASEs contain two or three FoxH1 binding sites (TGT G/T T/G ATT) within a 30-200 bp region. The random frequency of a pair of binding sites within such a region is once every 350 kb. When ASE-like sequences were sought around the mouse and human Ablim1 loci, 7 sequences were identified in mouse and 3 in human (see Additional file 2), although position and overall sequence were not conserved. To address whether Nodal might be interacting with Ablim1 through these elements, the sequences plus 100 bp on either side were PCR amplified and cloned into luciferase reporter vectors. These were transfected into HepG2 cells together with FoxH1 and a constitutively active Alk4 construct. While a control fragment from the mouse Pitx2 ASE activated luciferase, as previously reported [24], all the Ablim1-derived fragments failed to activate expression above background levels, arguing that Nodal does not activate Ablim1 through FoxH1 binding to an ASE (data not shown).

Highly dynamic Ablim1 node expression: a very early marker of asymmetry

When temporospatial Ablim1 expression was analysed in the node, utilising the Ex 2-6 probe, transcript was detected from when the node is first patent (Fig. 4a, b). Intriguingly, this first expression is of a salt and pepper pattern stretching across the pit of the node. This resolved to give a ring surrounding the node by the mid-headfold stage (Fig. 4c); the first indication of asymmetry was evident at this stage, with a higher level of expression on the right than the left hand side of the node (Fig. 4c).
During the next few hours of development, expression on the anterior left side of the node was lost, while expression was activated in the midline cells anterior to the node, resulting in a question mark-like expression pattern (Fig. 4d). By 4 somites, the remaining left-sided expression was lost, resulting in solely right-sided expression at the node (Fig. 4e). This abrogation of expression continued in a clockwise direction around the node (Fig. 4f), until by 7 somites peri-nodal expression was restricted to midline cells anterior to the node (Fig. 4g). By 8 somites midline expression had also been lost (Fig. 4h). Sections through these embryos reveal that expression in the node is restricted to the ventral layer (Fig. 4i).
Ablim1 node expression is controlled by nodal flow and Pkd2 activity
The early dynamic expression pattern of Ablim1 in the node correlates with the changes in nodal cilia motility and nodal flow described by Okada [18]. The loss of Ablim1 expression from the pit corresponds to the stage when local vortices form, while the loss of expression from the left side of the node correlates with the establishment of a strong leftwards flow. In conjunction with the very early asymmetry of Ablim1, this suggests that expression may be responding directly to nodal flow. To test this hypothesis we next examined Ablim1 node expression in iv mutants, where there is no nodal flow. Of 40 mutant embryos analysed, no asymmetry was detected at any stage of development. Indeed, the robust peri-nodal ring of expression was never detected. There was, however, a rise in the number of embryos where expression was not detected, from 12% in wild type to 42% in iv/iv mutants (Table 3). Intriguingly, in the 58% of embryos where expression was detected, the pattern of expression was the same patchy expression seen in the very earliest wild type nodes (Fig. 5a). This pattern was maintained well through the period during which leftwards laminar flow is normally detected and strong asymmetry of Ablim1 is normally seen at the node. These data demonstrate that nodal flow is required for the upregulation of Ablim1 expression in the peri-nodal region and is involved in downregulation of Ablim1 expression in the pit of the node. The two cilia hypothesis argues that nodal flow is directly detected by nodal cilia, through the activity of Pkd2 [7], resulting in a left-sided Ca2+ signal. To test whether Ablim1 may be responding to these asymmetric Ca2+ signals, we analysed expression in Pkd2 mutants. Surprisingly, very similar results were obtained to those for the iv mutant. Of 9 mutants analysed, 4 (44%) showed no detectable expression, while 5 showed the same patchy expression we detected in the embryos lacking nodal flow (Fig. 5b). Once again, no L-R asymmetry was detectable. Together, these data demonstrate a requirement for both nodal flow and Pkd2 within the pit of the node for the loss of Ablim1 expression, as well as for the establishment of asymmetric expression surrounding the node.
Discussion
In this paper we describe the novel L-R asymmetric expression pattern of Ablim1, a gene that can be expressed independently of detectable Nodal in the left lateral plate mesoderm. A separate asymmetric expression domain in the embryonic node reveals both that Ablim1 expression is the earliest known marker of mammalian L-R asymmetry, and that flow (most likely mediated through Pkd2-dependent Ca2+ signalling) is directly affecting gene expression in the node, prior to any effects on asymmetry.
A Nodal independent L-R asymmetric signal ("signal X")
Work from many research groups over the past decade has established a generally accepted pathway underlying the establishment of L-R patterning in mammals (reviewed in [4]). The activation of the Nodal signalling cascade in the left LPM results in asymmetric, left-sided, expression of Pitx2 that is maintained into organogenesis (Fig. 6). In light of misexpression experiments in chick and Xenopus, Pitx2 has been argued to specify left sidedness [11,12]. Yet, while Pitx2 mutant mouse embryos demonstrate right pulmonary and atrial isomerism [25-28], the initial direction of embryonic turning and heart looping are Pitx2-independent [22]. In contrast, analysis of mice lacking or unable to respond to Nodal signalling in the lateral plate showed a randomised direction of heart looping [23,29]; the direction of embryonic turning was also randomised in Cryptic (MGI: Cfc1) mutants, but was not reported for the Nodal mutants [23,29]. These data argue that heart looping and embryonic turning are controlled by Nodal expression; however, it is not possible to distinguish the role of Nodal at the node from its role in the lateral plate in these experiments. Our results clearly show Ablim1 asymmetry is independent of Pitx2 expression. More significantly, Ablim1 is capable of being asymmetrically expressed in the absence of detectable LPM Nodal signalling. Quite clearly, another L-R asymmetric signal is present in the embryo, which for the sake of discussion we shall refer to as "signal X".
The control of Ablim1 LPM asymmetry
While it is clear from our analysis that Ablim1 can be asymmetrically expressed in the absence of detectable Nodal and Pitx2, asymmetric Nodal obviously does play a role. The strong reduction of lateral plate Nodal expression in Nodal D600/D600 embryos results in ~50% of embryos with detectable asymmetric LPM Ablim1 expression, while the total removal of LPM Nodal expression in the Nodal Δnode/− embryos reduces this level to 24% (Table 1). The level of LPM Nodal expression therefore directly affects asymmetric Ablim1 expression. This relationship is underlined when the temporal activation of Ablim1 is analysed; strong reduction of Nodal signalling is known to result in delayed target gene activation [20]. At early somite stages no Nodal-independent Ablim1 expression is seen, compared to expression in 30% of embryos by 4-5 somites and 50% by 6-7 somites (Table 2). It is therefore clear that two signals, a Nodal-dependent signal and the Nodal-independent "signal X", influence asymmetric lateral plate Ablim1 expression (Fig. 6A). While we have drawn "signal X" and Nodal both directly acting on Ablim1, we cannot rule out the possibility that "signal X" is in part regulated by Nodal. It is also possible that there is a temporal offset between Nodal signalling and "signal X", with Nodal acting first. The data neither argue for nor against these scenarios and, in the absence of identifying and interfering with "signal X", it is impossible to distinguish between them.
A left-sided "signal X"
A signal acting on the left side of the embryo may reflect either a left-sided activator or a right-sided repressor of Ablim1 expression. Work from various groups has revealed additional lateral plate asymmetries in gene expression. BMP signalling, as assessed by Smad1 phosphorylation, is L-R asymmetric and this asymmetry is driven by asymmetric expression of the BMP antagonists chordin and noggin, which in turn are controlled by Nodal [30]. Therefore, it seems unlikely that BMP signalling lies upstream of Ablim1 asymmetry. Similarly, the Nkx3-2 (or BapX1) homeodomain locus shows right-sided asymmetric expression, but again is thought to act downstream of Nodal [31]. In contrast, little is known about control of the right-sided transcription factor Snail (Snai1), which temporally mirrors Nodal [32]. Conditional deletion leads to randomised embryonic turning and heart looping and bilateral activation of the Nodal signalling cascade, arguing that it normally inhibits right-sided Nodal expression. It is therefore possible that Snai1 inhibits Ablim1 expression on the right side of the embryo. Indeed the role of the snail family of genes as transcriptional repressors is well documented [33]. A formal possibility exists that "signal X" in fact represents a very low level Nodal signal, below that capable of activating the known LPM targets of Nodal. While no Nodal expression is detected in the LPM, low level ectopic Nodal expression was reported in a few cells in the pit of the node in a small proportion of Nodal Δnode/− embryos [23]. Such a localised signal would be expected to first activate expression close to the node, in a manner similar to that reported for initial Nodal activation (discussed in [4]). No such localised expression was evident for Ablim1 in Nodal Δnode/− embryos (Fig. 3g and data not shown). Indeed, such putative low level Nodal signalling falls outside of the known activity of Nodal and it too would arguably constitute a novel asymmetric signal.
Flow regulates early symmetrical Ablim1 expression at the node
Wild type Ablim1 expression in the node changes markedly at the mid-late headfold stage, from a low level broad pan-node expression to a robust peri-nodal ring (Fig. 4). This reflects two changes: upregulation in the crown cells surrounding the node and downregulation within the pit of the node. Developmentally this corresponds to when nodal cilia are driving vortical fluid motion [18], suggesting that cilia-driven fluid flow plays a role in these expression changes. This is supported by the failure of iv and Pkd2 mutant embryos to express the peri-nodal ring of Ablim1 expression (Table 3). Moreover, almost 60% of these embryos maintain the same low level pan-node expression seen in early wild type embryos. We detected no expression in the nodes of the remaining mutant embryos and argue that this reflects either failure to maintain low level pan-node expression over time and/or technical limitations in detecting low level gene expression by WISH. The role of nodal cilia motility in generating nodal flow and L-R asymmetry is so central to thinking that little consideration has been given to any earlier role for fluid flow. Our data, however, reveal an early and previously unrecognised function for nodal flow in modulating symmetrical gene expression within the node, separate from the role of nodal flow in L-R determination.
Ablim1 is the earliest marker of L-R asymmetry in mouse
The first L-R asymmetry in Ablim1 expression is evident in the node by the late headfold stage (Fig. 4c), several hours before the 1-2 somite stage at which asymmetry is evident for the other known asymmetric loci Nodal, Cerl2 (MGI: Dand5) and Lplunc1 [14,15,34-36]. This argues that Ablim1 is responding to very early asymmetric signals. Nodal flow is argued to be the initial asymmetric signal in the mouse (reviewed in [4]) and is clearly driving fluid flow leftwards by early somite stages [18]. However, the first Ablim1 asymmetry is evident at a stage when beads introduced into a node in vitro are carried leftwards only inefficiently and on average, hopping from vortex to vortex [18]. Whether such an inefficient flow could affect sensory cilia sufficiently to fulfil the requirements of the two cilia hypothesis is unclear. Presumably NVPs could be carried leftwards in a similar manner to the beads, although whether they would break efficiently in such a flow is uncertain. By the early somite stages a completely novel and very distinctive asymmetry of Ablim1 expression becomes evident at the node. The peri-nodal ring "retreats" around the node in a clockwise direction between 3 and 7 somites. This asymmetry of expression is very different from that seen for other asymmetrically expressed loci at the node; these show bilateral expression flanking the node, with stronger expression on one side than the other.
While the control of Ablim1 node asymmetry is not addressed by this study, the speed of the changes in Ablim1 expression suggests that this is an active process. In chick an anti-clockwise migration of cells around the node underlies gene asymmetry [37,38]. However, in mice, cre-loxP based lineage analyses of both node crown and pit cells did not reveal such cell migration [23,39]. The role of flow in controlling earlier changes in Ablim1 expression makes flow a mechanism we must contemplate. Yet for flow to control Ablim1 asymmetry, the following objections must be taken into account. (1) How could leftwards flow initially affect just the anterior node? At early somite stages the anterior node is shallower than the posterior [40,41] and this may influence the ability of flow to impact on the crown cells in the anterior versus the posterior. (2) How does flow subsequently affect the posterior node? As the embryo grows the node remodels, becoming more even in depth between anterior and posterior. At the same time leftwards flow becomes laminar. A combination of these two events may then allow flow to also affect posterior left-sided node crown cells. (3) How does flow subsequently affect the right side of the node? In vivo, fluid flow within the node recycles being drawn downwards on the right hand side [42]. A combination of growth, node remodelling and perhaps temporal accumulation of signalling may allow the right side of the node to respond to the recycled flow as it is pulled back into the right hand side of the node (Fig. 6B).
While there is strong evidence for nodal flow in many vertebrates [43-45], earlier, pre-flow events have been demonstrated to influence L-R patterning in non-mammalian species. Vg1 can influence situs determination and its putative co-receptor, Syndecan-2, becomes asymmetrically phosphorylated in pre-flow Xenopus embryos [46-49]. Pharmacological experiments have implicated H+/K+-ATPase, V-ATPase, serotonin and 14-3-3 family member E in Xenopus situs determination [50-52]. In the resulting serotonin model, an electric field drives serotonin through gap junctions in Xenopus, resulting in higher right than left sided localisation [52-54]. The H+/K+-ATPase mRNA similarly becomes asymmetrically localised in Xenopus and perturbed expression disrupts L-R patterning in both Xenopus and chick [52]. Therefore the question must be raised as to whether such early mechanisms also exist in mammals and may be controlling Ablim1 asymmetry, either at the node or in the LPM.
That there is asymmetry of Ablim1 expression at both the node and the LPM bears comparison to the expression pattern of Nodal. Nodal asymmetry at the node slightly predates that in the LPM and is required for LPM expression in the mouse [23]. It has even been argued, in light of the ability of Nodal to autoactivate, that Nodal at the node might be carried to left LPM to activate expression there [55]. In striking contrast, Ablim1 asymmetry is on opposite sides in the node and LPM. So while there is Ablim1 asymmetry at the node when LPM asymmetry is first detected, the expression domains are on opposite sides of the embryo (Fig. 1g). Moreover, while Nodal is a signalling molecule, Ablim1 is a structural, cell autonomously acting protein. It is difficult to envisage how a cytoskeletal protein would be acting to repress its own expression across many cell diameters. More likely, the two expression domains are independently regulated. Indeed, the long isoform of the protein seen at the node but not detected in the LPM originates at an alternate first exon, consistent with different promoter enhancer combinations controlling the two expression domains.
Ablim1 Function
Uniquely among mammalian asymmetric genes, Ablim1 encodes a structural protein. As its name suggests, Ablim1 protein binds to actin and, when first identified, the presence of lim domains led the authors to suggest that it might act as an adaptor protein, bringing other proteins to the actin cytoskeleton [56]. The homologue in C. elegans, unc-115, similarly binds actin [57], and when mutated leads to an uncoordinated phenotype and defects in axon guidance [58]. Expression of a dominant negative Ablim1 in chick embryos leads to similar axonal phenotypes [59]. Yang and Lundquist [60] further demonstrated that expression of unc-115 in mammalian fibroblasts led to the formation of peripheral actin conglomerations at the expense of stress fibres. One possible explanation that they suggest is that Ablim1 protein may have different roles at the cell membrane and in the cytoplasm, acting to abrogate stress fibre formation when cytoplasmic. This raises the possibility that asymmetric Ablim1 expression might prove permissive for asymmetric morphogenetic changes in the embryo. However, when an isoform-specific deletion of Ablim1 was made, for the isoform seen in the eye, no defects were reported [13]. This deletion, however, seems unlikely to affect the expression that we have described. Indeed many additional alternative first exons are now annotated that were not evident to Lu and colleagues.
Conclusion
We have identified Ablim1 as an L-R asymmetrically expressed LPM gene that also shows a highly novel asymmetric expression pattern in the node. Through study of Ablim1, we provide definitive evidence that, in addition to the recognised Nodal-Pitx2 asymmetric pathway in the left LPM, a second, Nodal-independent LPM pathway must exist.
In the node we reveal a previously unrealised role for flow and Pkd2 in the control of early symmetrical Ablim1 expression within the node. This provides a novel, expression based readout of nodal flow. It seems reasonable to speculate that other genes expressed within the node may also be modulated by fluid flow.
Ablim1 expression within the early node becomes asymmetric at the headfold stage, several hours prior to that of other asymmetrically expressed loci. This is the earliest marker of L-R asymmetry in the mouse. Subsequent asymmetric node expression proceeds in an entirely novel pattern, retreating around the node. While we do not understand how this is controlled, future study of this seems likely to shed light on the mechanisms of L-R patterning.
Ablim1 is a candidate for processes controlling L-R morphogenesis and identity, and future study of mutants will reveal its role in L-R patterning.
Methods

All animals were used in accordance with UK Home Office regulations.
Microarrays
Tissue for micro-array analysis was dissected in cooled PBS, then snap frozen in liquid nitrogen. RNA was produced by Qiagen RNeasy mini kit and quality assessed by Agilent 2100 Bioanalyzer. Reverse transcription and amplification were conducted according to the SMART mRNA Amplification Kit (Clontech). The resulting samples were labelled with Cy3 and Cy5 and used to hybridise MRC Mouse Known Gene Oligo Array printed array slides (Mm_SGC_Av2), identifying 7455 known genes, according to standard protocols. Results from 4 experimental repeats were analysed using GeneSpring (Agilent Technologies). Genes were ranked for differences in left versus right sided expression. The data from these experiments has been submitted to ArrayExpress, reference E-MEXP-2277.
IC3 represents a full-length Ablim1 clone. Clones containing portions of the Ablim1 sequence (exons 2-6, exons 8-25 and the 3'UTR sequences) were produced by restriction digest or PCR amplification and cloned into pBluescript2. Antisense RNA probes were produced and used for wholemount in situ hybridisation, according to standard protocols.
Putative ASE sequences were amplified by PCR from BALB/c mouse and commercial human DNA, TA cloned into pGL3-Promoter vector (Promega). The sequence cloned into pGL3 was confirmed by sequencing. Luciferase assays were carried out as previously described [65].
Additional file 1: L-R microarray data. The top asymmetrically expressed genes as indicated by the microarray analysis. Data from 4 experiments have been averaged and genes ranked for stronger left (top) or right (bottom) sided expression. Genes subsequently analysed by in situ are highlighted.

Additional file 2: ASE-containing sequences around Ablim1. Sequences containing two or more FoxH1 binding sites (TGT G/T T/G ATT) within a 30-200 bp region, in and surrounding the mouse and human Ablim1 loci.

| 8,727.6 | 2010-05-20T00:00:00.000 | [ "Biology" ] |
First Evidences of Ionospheric Plasma Depletions Observations Using GNSS-R Data from CYGNSS
At some frequencies, Earth's ionosphere may significantly impact satellite communications, Global Navigation Satellite Systems (GNSS) positioning, and Earth Observation measurements. Due to the temporal and spatial variations in the Total Electron Content (TEC) and the ionosphere dynamics (i.e., fluctuations in the electron content density), electromagnetic waves suffer from signal delay, polarization change (i.e., Faraday rotation), changes in the direction of arrival, and fluctuations in signal intensity and phase (i.e., scintillation). Although there are previous studies proposing GNSS Reflectometry (GNSS-R) to study ionospheric scintillation using, for example, TechDemoSat-1, the amount of data is limited. In this study, data from the NASA CYGNSS constellation have been used to explore a new source of data for ionospheric activity, and in particular for travelling equatorial plasma bubbles (EPBs). Using data from GNSS ground stations, previous studies detected and characterized their presence at equatorial latitudes. This work presents, for the first time to the authors' knowledge, evidence of ionospheric bubble detection in ocean regions using GNSS-R data, where there are no ground stations available. The results of the study show that bubbles can be detected and that, in addition to measuring their dimensions and duration, the increased intensity scintillation (S4) occurring in the bubbles can be estimated. The bubbles detected here reached S4 values of around 0.3-0.4, lasting from some seconds to a few minutes. Furthermore, a comparison with data from the ESA Swarm mission is presented, showing some correlation in regions where there are S4 peaks detected by CYGNSS and fluctuations in the plasma density as measured by Swarm.
Physics of the Ionosphere
The ionosphere is a layer of the atmosphere that plays a very important role in satellite communications and Earth Observation. This layer, which ranges from around 60 km to more than 500 km altitude, contains free electrons and ions, making a sort of "electric conductor" that interacts with the electromagnetic waves crossing it.
The shape of the electron density profile is highly affected by many factors, in a balance between production and destruction processes driven mainly by solar irradiation. Its dynamics are also influenced by the movements of the inner atmospheric layers and the outer magnetosphere. Because of all these factors, ionospheric fluctuations take place over a wide range of spatial and temporal scales, from several meters to kilometers and from a few seconds or minutes to days.
It has been observed that there is an increase in activity, related to solar irradiation, during the transitions (sunrise and sunset), with a rearrangement of the layers. A seasonal dependence is also noticed, with more activity during the periods around the Spring and Autumn Equinoxes. On top of that, there is a positive correlation with solar activity following the 11-year solar cycle, with a minimum occurring in 2020.
Electromagnetic waves crossing the ionosphere may suffer from those perturbations, causing the signal to change its direction of propagation or polarization, or creating fluctuations in intensity and phase, the so-called ionospheric scintillation. This phenomenon has been known since the first satellites were put in orbit and it has remained a subject of study until today. Some models have been developed to try to describe and predict its behavior. During the past years, several ionospheric scintillation models have appeared, for example:

• The Global Ionospheric Scintillation propagation Model (GISM) [1] provides time series of intensity and phase scintillation using a turbulent ionosphere and an electromagnetic wave propagator in turbulent media, based on the Multiple Phase Screen theory (MPS) [2]. GISM is the model accepted by the ITU-R (International Telecommunication Union - Radiocommunication Sector) for ionospheric communications [3].
• The WideBand MODel (WBMOD) [4] is also based on electron-density irregularities and uses the MPS theory to compute the scintillation effects in a statistical sense. In this model it is possible to set up a communication scenario (time, location, and other geophysical conditions).
• The Wernik-Alfonsi-Materassi Model (WAM) [5] also uses the MPS theory, but generates its statistics from in situ measurements of ionization fluctuations made by the Dynamics Explorer 2 satellite. It can also predict the ionospheric S4 index along a defined path given some environmental conditions.
More recently, in the context of a European Space Agency (ESA) project, the GISM functionalities were extended in the UPC/OE/RDA (Universitat Politecnica de Catalunya/Observatori de l'Ebre/Research and Development in Aerospace) SCIONAV model [6] to reproduce more realistically the behavior of ionospheric scintillation at both low and high latitudes, taking into account the different physical phenomena that create it, such as the bubbles and depletions in equatorial regions; it is based on lookup tables resulting from extensive analyses.
However, all these models require further improvements:

• In polar and auroral regions the models still need a better description in terms of velocity, distribution, duration and intensity of the ionospheric effects.

• A better (3D) modelling of the equatorial plasma bubbles (EPBs) can be incorporated in the UPC/OE/RDA SCIONAV model, relating their 3D properties to the altitude, and the amount of plasma depletion to the S4 produced [7].

• In GNSS-R applications, some anomalous fluctuations have been reported in equatorial regions over calm ocean, which are supposed to come from ionospheric scintillation [8].
Using GNSS-R to Study the Ionosphere
The goal of this study is to provide a way to improve the ionospheric models by means of GNSS-R. This idea was first proposed in 1996 by S. J. Katzberg and J. L. Garrison [9], but it was not really explored in the following years because the reflection over (rough) ocean surfaces is mostly incoherent and sensitive to the winds, as reported in an airborne experiment by the same authors [10]. It was not until 2016 that it was shown [8] that the large fluctuations in the peak of the measured Delay-Doppler Map of GNSS-R instruments around the geomagnetic equator in calm sea conditions (wind speeds < 3 m/s), where reflections are more coherent, could be due to ionospheric scintillation.
The main purpose of GNSS-R missions is to study geophysical parameters of the Earth by studying the reflection of GNSS signals on the Earth's surface. Applications include sea altimetry, soil moisture measurement or ice detection. The signals used to apply this technique cross the ionosphere in their down-welling and up-welling paths, that is, first from the GNSS satellite to the specular reflection point on the Earth's surface, and then from this point to the GNSS-R receiver. Because of the height of the satellites involved, the down-welling path always crosses all the ionospheric layers, but the up-welling one may only cross some of them, depending on the altitude of the GNSS-R instrument.
Furthermore, GNSS signals would only allow the study of ionospheric scintillation in the GNSS bands, usually L1/E1/B1, L2, and L5/E5. Other bands of interest nowadays in Earth Observation, such as P-band (to be used in the ESA BIOMASS mission), are also affected by these perturbations, but they cannot be detected and/or quantified by this technique, although it could be applied using other signals of opportunity such as those from MUOS (Mobile User Objective System) satellites [11].
Materials and Methods
The current study uses open data from the NASA CYGNSS (Cyclone Global Navigation Satellite System) mission [12,13], which has continuously delivered GNSS-R data since March 2017. NASA CYGNSS is a microsatellite constellation led by the University of Michigan and the Southwest Research Institute, launched in December 2016 and operated by NASA. The mission's main goal is to provide a better forecast of hurricanes by studying the interaction between the sea and the atmosphere.
CYGNSS Dataset Description
The open-access CYGNSS database is provided by the Physical Oceanography Distributed Active Archive Center (PO.DAAC) [14]. Data have been recorded continuously since 17 March 2017 for each of the 8 satellites in orbit. The data include the DDM (Delay-Doppler Map), that is, the cross-correlation of the reflected signal with a replica of the transmitted signal for different delay lags and Doppler frequencies [15]. CYGNSS provides DDMs from up to 4 channels tracking the signals received from 4 different GPS satellites, with a sample rate of 1 Hz. For each sample, auxiliary data such as the timestamp and the positions of both the CYGNSS satellite and the emitting GPS spacecraft are provided, together with the post-computed position of the specular point of the reflected signal on the Earth's ellipsoid.
In any case, reflection data are only available from around 40°N to 40°S, as the satellite orbit inclination is around 35°. The orbital period is ∼95 min but, as the constellation is formed by 8 satellites distributed along the orbit, the overall revisit time is much shorter. Figure 1 shows the average number of samples measured during one day by the whole constellation as a function of latitude for different cell sizes. CYGNSS has an almost circular Low Earth Orbit (LEO) at an altitude of ∼520 km, at an average speed of 7.6 km/s, much larger than the linear velocity of the GPS satellites (3.89 km/s), which complete an orbit every 12 h. Table 1 summarizes the mission parameters.
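As a quick sanity check of the orbital figures quoted above, the speed and period of a circular orbit at ~520 km altitude can be recovered from Kepler's relations. The following minimal Python sketch uses standard values for the Earth's radius and gravitational parameter; these constants and the script itself are illustrative only and are not taken from the CYGNSS documentation.

import math

MU_EARTH = 398600.4418   # km^3/s^2, Earth's standard gravitational parameter
R_EARTH = 6371.0         # km, mean Earth radius

h = 520.0                       # km, approximate CYGNSS altitude
r = R_EARTH + h                 # orbital radius
v = math.sqrt(MU_EARTH / r)     # circular orbital speed -> ~7.6 km/s
T = 2 * math.pi * r / v / 60.0  # orbital period in minutes -> ~95 min
print(f"speed = {v:.2f} km/s, period = {T:.1f} min")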
Data Processing
The data processing is performed to derive the CYGNSS observables of ionospheric activity. The source data are the DDMs. DDMs provided by CYGNSS have a size of 11 Doppler frequency bins × 17 delay bins, with a resolution of 200 Hz per Doppler bin and 0.2552 chips per delay bin. Since 1 C/A chip is equal to 1/1,023,000 s, it is equivalent to 293.3 m. Therefore, the resolution per delay bin is 74.8 m. The images below present two examples of CYGNSS's DDMs over two different surfaces.
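The delay-bin resolution quoted above follows directly from the C/A chip length; the short check below reproduces the numbers in the text (it assumes c ≈ 3 × 10^8 m/s, which the 293.3 m figure implies):

C = 3.0e8                         # m/s, speed of light (approximate value implied by the text)
chip_length = C / 1.023e6         # one C/A chip -> ~293.3 m
delay_bin = 0.2552 * chip_length  # CYGNSS delay-bin spacing -> ~74.8 m
print(chip_length, delay_bin)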
In order to assess the conditions of the sea surface during the acquisition of these measurements, the sea wave height is overlaid with the specular reflection points where the DDMs were taken, and it is shown for both cases in Figure 2, at points labeled as A and B. In the case of the DDM for a rough water surface (Figure 3a) the SNR (Signal-to-Noise Ratio) is 2.3 dB and the wave height is around 3 m, marked in the map as A. For the calm ocean in Figure 3b, the SNR is much higher, around 15.3 dB, and the waves are around 0.6 m, marked as B in the map. To perform this study, the main source of information is the SNR measurements available in the CYGNSS database. The SNR is post-computed on the ground from the DDM values as

SNR = 10 log10 ( DDM_max / N_avg ),

where DDM_max is the value of the DDM bin with maximum signal and N_avg is the average noise per bin, computed from the delay-Doppler bins before the DDM peak (i.e., delay < 0), as in Reference [8]. The value of the SNR is then given in decibels (dB). The maximum number of samples per day, considering the 8 satellites and their 4 channels, is around 2.7 × 10^6 measurements. An example of this data is plotted on a world map in Figure 4, where measurements over both land and oceans are displayed. The colorbar indicates the SNR of the reflected signal in dB.
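A minimal sketch of this SNR computation, assuming the DDM is available as a 2-D array of bin powers (delay × Doppler) and that the leading delay rows (before the peak) are used as the noise estimate; the function and variable names are illustrative and are not the CYGNSS product variable names.

import numpy as np

def ddm_snr_db(ddm, n_noise_rows=4):
    # ddm: 2-D array of DDM bin powers, first axis = delay, second axis = Doppler
    # n_noise_rows: number of leading delay rows (before the peak) used as noise
    noise_avg = ddm[:n_noise_rows, :].mean()     # average noise power per bin
    ddm_max = ddm.max()                          # peak bin power
    return 10.0 * np.log10(ddm_max / noise_avg)  # SNR in dB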
As the aim of the study is to observe the possible fluctuations in the intensity of the signal after crossing the ionosphere, the following computation is performed for every vector of data in a day. Having the 4 channels of SNR values during a whole day, each vector is chopped from a user-input "start_time" to an "end_time". Then, using the quality flags that the CYGNSS database provides, the vector is further reduced to only the points over the oceans farther than 25 km from the coastline, and within a rectangular area that the user can define.
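A sketch of this per-day sample selection, under the assumption that the time stamps, specular-point coordinates, distance to the coast and a boolean quality flag have already been read from the Level-1 files; the argument names are placeholders rather than the actual CYGNSS variable names.

import numpy as np

def select_samples(t, lat, lon, dist_to_coast_km, quality_ok,
                   t_start, t_end, lat_box, lon_box, min_coast_km=25.0):
    # Boolean mask keeping open-ocean samples inside a time window and a lat/lon box
    mask = (t >= t_start) & (t <= t_end)
    mask &= quality_ok                       # CYGNSS quality flags already decoded to a boolean
    mask &= dist_to_coast_km > min_coast_km  # farther than 25 km from any coastline
    mask &= (lat >= lat_box[0]) & (lat <= lat_box[1])
    mask &= (lon >= lon_box[0]) & (lon <= lon_box[1])
    return mask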
The output is a vector of data with internal discontinuities in time. In each per-channel vector, there are also discontinuities in the GPS satellite tracked. These discontinuities can be detected by the change of the PRN (Pseudo-Random Number) that identifies the tracked GNSS transmitter. Both types of jumps must be avoided in the computation of S4, because they may create false S4 peaks. So, the vector for each of the 4 channels is divided into the periods between jumps (PRN or time jumps), and then the S4 value is computed as

S4 = sqrt( (<I^2> - <I>^2) / <I>^2 ),

where I is the intensity in linear units, computed from the SNR value in dB, and the averages <·> are computed using a 12-sample moving window along the vector. In Figure 5, the S4 values obtained from the SNR data of Figure 4 are plotted. For the sake of clarity in the interpretation, the points where S4 is lower than 0.1 are not shown, and for the rest, their point size and color are proportional to S4. Additionally, the roughness of the water surface can make the reflection diffuse instead of specular. This happens when there are high waves in the area, as opposed to a flat ocean surface. In practice, this acts as a sort of filter in which possible scintillation happening over rough sea surfaces is hidden, keeping only high S4 values over regions with a calm ocean.
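A minimal sketch of this S4 computation, assuming the 1 Hz SNR series (in dB) has already been split into segments free of time and PRN jumps; the 12-sample moving window follows the description above.

import numpy as np

def s4_index(snr_db, window=12):
    # Amplitude scintillation index from a 1 Hz SNR time series given in dB
    I = 10.0 ** (np.asarray(snr_db) / 10.0)   # intensity in linear units
    s4 = np.full(len(I), np.nan)
    for k in range(len(I) - window + 1):
        w = I[k:k + window]                    # 12-sample moving window
        m1, m2 = w.mean(), (w ** 2).mean()
        s4[k] = np.sqrt(max(m2 - m1 ** 2, 0.0) / m1 ** 2)
    return s4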
To check the wind and wave height for the day and hour under study, meteorological online data from the ICON model [16,17] were used [18]. As an example, continuing with 21 November 2017, Figure 6 shows the colormap of wave height over the Atlantic ocean underlaid beneath the S4 values, only showing those larger than 0.05. It can be checked that most S4 peaks only appear over regions with a quite calm ocean.
Bubbles and Depletions
Using the data available from the year 2017 until present, after applying the processing method described above, several interesting results are shown. The research is mainly focused on the detection of Equatorial Plasma Bubbles (EPBs) or depletions. Given the characteristics of this experiment, these bubbles are expected to take the shape of transient peaks in the S4 curves as the reflected signal path crosses the bubble, while the GNSS receiver is moving along its orbit.
A first visual inspection of some days, like the one depicted before, shows that there are many peaks no matter which day is analyzed. Some of them are isolated, but others occur along with other peaks at slightly different times and in other satellites or channels, but within a relatively small region. One of these peaks in S4 is located in the mid-Atlantic during 21 November 2017, shown in Figure 7, and it may serve as an example to study its origin.
This isolated event in the Atlantic is the consequence of the rapid fluctuation of the SNR of the signal arriving at channel 1 of CYGNSS satellite 2, at around 4:26 UTC. The approximate length of the peak is 1400 km, and it lasts for about 5 minutes. As can be seen in the SNR plot (Figure 8a), channel 1 oscillates rapidly from 1 dB to 7 dB, from 2:13 h to almost 2:18 h Local Time (LT), according to the longitude of the region. During this event, the ocean is calm, with wave heights of at most 2 m, as seen in Figure 6. These plots also help to explain that, given the way S4 is computed, only the SNR fluctuations occurring on a timescale of 12 s can appear as high/moderate S4 values. In channel 3 of the same satellite, a slow drift of the SNR is observed at the same time as channel 1 is fluctuating quickly (Figure 8a). Despite this, the S4 value in channel 3 remains very small, less than 0.05 (Figure 8b). On the other hand, the singular parabolic shape that it exhibits is due to the change in the incidence angle of the reflected signal as the receiver satellite moves along its orbit capturing the signal of the much slower and higher GPS transmitter. In this case, the angle of incidence is almost vertical (∼4°) when the signal is maximum (∼7 dB), and increases up to around 25° at the end of the plot. In the case of channel 1, the angle of incidence varies by less than 1° around 70° during the entire fluctuation period.
The previous example was an isolated peak in just one of the 4 channels of one satellite, but on other days analyzed there are concentrations of these events in a small region, occurring during a finite time period. Figure 9 shows data from 24 August 2017. The points in red mark moderate scintillation measurements of around 0.2-0.3, and they appear in some concentrated regions for different channels and satellites crossing above those regions. During this day, almost all the North Atlantic was very calm, with wave heights less than 2 m, as seen in Figure 10. To have a better view of these events, two detail plots are shown. In the first one, in the Western region (Figure 11), located about 300 km North of Bermuda island, a large number of moderate S4 points are closely spaced in a zone that extends approximately 250 km × 320 km, across different CYGNSS satellites and channels. All the measurements occur during the period between 6 h and 10 h UTC, even lasting over different passes of CYGNSS satellites over this zone.
In this region, the perturbation is particularly interesting as the peaks occur roughly at the same positions for several passes or channels, with similar values of S 4 in each of them. The event starts around 7:00 h UTC and ends around 10:00 h UTC, so given that the timezone is UTC-4, the local time is from 3 h to 6 h in the morning, coinciding with the hours immediately before the sunrise, that, in this place, on 24 August 2017, took place at 5:36 h LT.
Focusing on this peak, the values of SNR and the derived S4 are plotted in Figure 12a,b versus LT. The sudden increase of SNR values is clearly visible in two of the channels of the satellite CYG06. In other satellites (not shown in these graphs), there are also similar events that constitute the peak in S4 shown in Figure 11. It is important to note that the angle of the incident signal over the ocean varies smoothly around 28° for channel 3, and from 14° to 10° in the case of channel 1, so the change in the reflection angle is not the cause of the rapid fluctuation of the signal.

Figure 12. SNR (a) and S4 index (b) values plotted against LT for the western peak in Figure 11 during 24 August 2017, for the four channels of one of the CYGNSS satellites. Note that S4 rises with the transitions of the SNR. For channel 3, two peaks of S4 appear, one before and one after the SNR peak, and the same happens for channel 1.

In the S4 map for the Eastern peak in the North Atlantic (Figure 13), a region of about 900 km × 600 km of moderate scintillation is depicted. As in the previous case, different passes of CYGNSS satellites in several channels show values of S4 larger than 0.2. This region is around 200 km to 600 km southwest of the Azores islands, and the water surface was also calm, as shown in Figure 10 for the whole day in almost all the North Atlantic ocean. The information on these and other S4 peaks is summarized in tables, recording for every event its date and time, duration, length, and the magnitude of the S4 value. In the data analysis these peaks are identified and characterized using the mentioned parameters. For an event to be recorded, the S4 peak must be larger than 0.2 during at least 5 s. Given that the integration window for computing S4 is 12 s wide, this means that a reported peak implies at least 17 seconds of fluctuations in the SNR of the received signal. Events shorter than this duration (or dimension) cannot be detected with the current coherent and incoherent integration times of the GNSS-R payload onboard. Table 2 shows the recorded peaks for 24 August 2017, from 7 h to 10 h (UTC), in the whole North Atlantic region. Table 2 gives the date and LT, the satellite and channel in which the peak was detected, the coordinates where the event started, its duration and length, the inclination of the incident signal on the sea (measured from the vertical), and the maximum value of S4 achieved during the peak. The length of the event is computed by measuring the distance between the first and the last geolocated points belonging to the peak.
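A sketch of the event extraction rule described above (S4 above 0.2 for at least 5 consecutive 1 Hz samples, with the event length taken as the great-circle distance between the first and last specular points of the run). The threshold and minimum duration are the values quoted in the text; the haversine helper is generic, and the code assumes NaN values have been removed from the series.

import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance in km between two points given in degrees
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlmb = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2.0 * 6371.0 * np.arcsin(np.sqrt(a))

def find_s4_events(s4, lat, lon, threshold=0.2, min_samples=5):
    # Returns (start, stop, length_km, max_s4) for runs of S4 > threshold
    # lasting at least min_samples consecutive 1 Hz samples
    events, start = [], None
    for i, v in enumerate(s4):
        above = v > threshold
        if above and start is None:
            start = i
        if start is not None and (not above or i == len(s4) - 1):
            stop = i + 1 if above else i
            if stop - start >= min_samples:
                length = haversine_km(lat[start], lon[start], lat[stop - 1], lon[stop - 1])
                events.append((start, stop, length, float(np.max(s4[start:stop]))))
            start = None
    return events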
Note that the highlighted row in Table 2 corresponds to the peak measured by satellite 6 in channel 3 at (37.27°N, 62.21°W) at 5:31 LT, which is the one plotted in yellow in Figure 12a,b. As can be seen, the peak stays above the 0.2 threshold for 7 seconds, reaching a maximum value of 0.31.
Another day with interesting results, in this case in the Indian Ocean during 20 May 2019, is shown in Figure 14. As seen in the plot, the Indian Ocean was calm during that day in the regions from the Equator to the North. The plot only shows the S4 values greater than 0.05, and they are distributed in relatively delimited regions between the Equator and latitude 15°N. Table 3 shows the S4 peaks above 0.2 lasting for at least 5 s during that day, indicating their properties. Better statistics can be extracted by obtaining these tables for a large number of days. In the following lines, the study of 25 days equally distributed every 15 days from March 2017 to March 2018 is presented. The methodology used is to parse the whole day in all the oceanic regions from 40°N to 40°S, recording the events with S4 above 0.2 lasting more than 5 s. The number of peaks recorded per day over the year is shown in Figure 15. Figure 16 shows the histogram of those peaks in terms of the LT in which they were recorded, according to the timezone. It shows a peak around 6 h in the morning, which matches the mean sunrise time. Figure 17 shows the same data regarding the maximum S4 value reached during the peak.
S4 Statistics
Another analysis conducted is the study of the probability of finding points with a certain S4 at a particular hour in LT, averaging the data over long periods of time. This way, it can be studied whether there is any correlation between the S4 obtained with this technique and the LT, as previous models and experimental evidence show. The set of histograms in Figure 18 shows all the data points from 0 h to 24 h (UTC) during selected days in the year 2017.
The horizontal axis of the histograms represents the local time in hourly bins, and the vertical scale is the S 4 value in bins of 0.1. The color represents the number of counts per each of the two-dimensional bins. The color scale is adjusted up to 500 counts to increase the contrast for S 4 values larger than 0.1. This means that all the bins in red on the bottom side of the histogram are saturated.
The data plotted in the histograms correspond to all the specular reflections over open oceans and seas (further away than 50 km from any coastline), within the latitudes of CYGNSS coverage (40°N, 40°S). In total, every day there are approximately 1.9 × 10^6 samples. The registered values include all reflections on the cited areas without any other filter (i.e., no sea surface roughness filter).
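A minimal sketch of how such a local-time versus S4 two-dimensional histogram can be built (hourly LT bins, S4 bins of 0.1, colour scale clipped at 500 counts as described above); the data arrays here are random placeholders standing in for the CYGNSS-derived local times and S4 values.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
local_time = rng.uniform(0.0, 24.0, 200000)         # placeholder for per-sample local time (h)
s4 = np.clip(rng.exponential(0.05, 200000), 0, 1)   # placeholder for per-sample S4 values

lt_bins = np.arange(0, 25, 1)       # hourly bins
s4_bins = np.arange(0, 1.01, 0.1)   # S4 bins of 0.1
H, xe, ye = np.histogram2d(local_time, s4, bins=[lt_bins, s4_bins])

plt.pcolormesh(xe, ye, H.T, vmax=500)   # clip the colour scale at 500 counts
plt.xlabel("Local time (h)")
plt.ylabel("S4")
plt.colorbar(label="counts")
plt.show()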
In these histograms, a weak dependence on LT can be observed, more visible in April and August, with a slightly higher probability of higher S4 during sunrise and sunset, around 6 h and 17 h during 21st August, and 9 h and 21 h during both days in April. A similar correlation can also be observed on 20th November.
Discussion
The results presented in this study are consistent with previous observations and studies. As a general starting point, several peaks in S4 are found on many days during the years of operation of the CYGNSS mission.
One of the main concerns during the study was the need to filter the GNSS reflections to keep only coherent reflections, as ionospheric scintillation may only be "visible" when a coherent wave crosses the ionosphere. In practice, this means that the signal has to be reflected over calm water. That is the reason why, in every particular case, the wave height and wind speed maps were checked before a more in-depth analysis of the data was performed. For example, two of the days shown, 24 August 2017 (Figure 10) and 21 November 2017 (Figure 6), exhibit a calm sea in the regions of interest where some scintillation was found.
Bubbles Study
The obtained results match previous studies and, in some cases, they can be clearly interpreted as the EPBs described, for example, in Reference [19]. In that work, a method for detecting and measuring equatorial plasma depletions is explained. These depletions in the electron content of the ionosphere strongly affect the radio wave signals crossing them, and they can rapidly appear, travel some distance, and then vanish. The way they are studied in Reference [19] is through data provided by ground-based GNSS receivers belonging to the International GNSS Service (IGS), available online since, at least, 2002, covering different solar cycles.
GNSS satellites orbit at 20,000 km altitude (GPS), with periods of 12 h, which means that from a ground station they are in view for long periods, making it possible to study the region of the ionosphere crossed by the line of sight from the satellite to the receiver. As this line of sight moves slowly compared to the estimated drift velocity of the plasma bubbles, the bubbles can cross the signal path during the tracking of a single GNSS satellite. This method is applied systematically to measure the duration and depth of these bubbles. The results show that these bubbles can have very different durations, lasting from 10 to almost 100 min during a solar maximum, or shorter periods during a solar minimum, as shown in Figure 19.

Figure 19. Probability to detect equatorial plasma bubbles (EPBs) with a particular effective time duration, for both years studied in Reference [19]: 2009 (left) during solar minimum and 2014 (right) during the last solar maximum (adapted from Reference [19]).
In order to make a comparison between both studies, the equivalent length of the bubbles has been computed. Regarding the measurement from ground stations and neglecting the movement of the GNSS satellites during the transition of the bubble, the length of the bubbles can be estimated using the mean drift velocity of the ionospheric plasma, ∼100 m/s. That transformation linearly translates the horizontal scale of Figure 19, from 10-100 min to 60-600 km, respectively.
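The conversion used above is simply length = drift velocity × duration, with the ~100 m/s mean plasma drift assumed in the text; the two lines below just reproduce the end points of the scale.

v_drift = 100.0  # m/s, mean ionospheric plasma drift velocity assumed in the text
for minutes in (10, 100):
    print(minutes, "min ->", v_drift * minutes * 60 / 1000.0, "km")
# prints: 10 min -> 60.0 km, 100 min -> 600.0 km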
From the recorded CYGNSS data, using equally spaced days every 15 days from March 2017 to March 2018 (25 days in total), all the peaks lasting more than 5 s with an S4 value above 0.2 have been collected to build the histogram shown in Figure 20. The extension of each bubble, in this case, is computed from the coordinates of the first and last specular reflection considered part of the peak, and it is shown in Figure 21. The total number of peaks analyzed in these histograms is around 4700, and they are distributed along the 25 days in 2017 using the data shown in the previous section and represented in Figure 15. Comparing the CYGNSS data with those studied in Reference [19], many similarities can be extracted. The shape of the distribution fits very well in terms of bubble extension, ranging from around 40 km to 200 km. In both plots, an exponential-like shape can be observed.
Correlation to Local Time
Another aspect to highlight is the correlation between local time and the occurrence of high S4 events, as shown in the set of histograms in Figure 18. Some of them show a higher probability of finding high S4 values during the hours around sunrise and sunset. In particular, the plots for 1 and 24 April, which are also very similar to each other, reveal peaks around 9 h in the morning and 8-9 h in the evening. It is also important to remark that sunrise and sunset do not occur at the same time in all the regions covered in these graphs, which are all the oceans and seas with latitudes from 40°N to 40°S. Considering that April is close to the equinox, the span between local sunrise and sunset times from 40°N to 40°S is not very large: around 30 min. It would be exactly zero on the equinoxes. This difference is much higher for 25 July, around 2 h 20 min.
In this way, since sunrise and sunset are the most active periods in terms of ionospheric activity, the peak in these histograms should be even higher at the equinoxes than at the solstices because, during the equinoxes, sunrise and sunset happen at the same time for all latitudes. This can be observed in the July histogram (Figure 18c), but it is not evident in the November one (Figure 18f). However, it is important to consider that the measurements used for these histograms have no sea roughness filter, and they use all the data over oceans within the CYGNSS available latitudes.
Comparison with Plasma Density Data from Swarm
The ESA Swarm mission is a constellation of 3 satellites launched in 2013 that globally provides data on the geomagnetic field and plasma density. The satellites named Swarm A and C fly together in a polar orbit at around 460 km altitude and 87.4° inclination, so they can sense the plasma density in the ionosphere. Swarm B orbits at 530 km, in another polar orbit plane, so it can sense a slightly higher layer of the ionosphere. Swarm instruments can measure the geomagnetic field intensity and direction, and also the local electron plasma density.
In order to compare with the previous results from CYGNSS, Swarm measurements during 24 August 2017 have been analyzed. In Figure 22, the plasma density during some hours of this day is plotted along with an index that the authors have defined to indicate the intensity of its fluctuations, the Normalized Ne Fluctuation Index (NNeFI). This index is obtained from the plasma density given by Swarm (Ne), once per second. Ne measures the number of electrons per cubic centimeter, with values in the order of 10^5 e−/cm³, the typical electron density in the ionosphere at this height. Using Ne, the NNeFI is computed, analogously to S4, as

NNeFI = sqrt( (<Ne^2> - <Ne>^2) / <Ne>^2 ),

where Ne is the electron density measured by Swarm and the averages are computed using a 12-sample moving window (a minimal computational sketch is given after the figure caption below). This index is plotted on a map in Figure 23, for the whole day and the three Swarm satellites, underlaid beneath the mean occurrence of scintillation events, using the WBMOD model predictions of the 90th percentile S4 index. Note that only values of NNeFI above 0.05 are shown. A correlation can be observed between the appearance of plasma density fluctuations and the predicted S4 map. In Figure 24 the same Swarm NNeFI data are plotted along with the peaks of S4 reported by CYGNSS during the same day studied in previous sections of this work. The S4 values plotted are filtered to be higher than 0.12. The Swarm-computed NNeFI exhibits a high occurrence at high latitudes, above 60°, both Northern and Southern, but these are out of the region covered by CYGNSS and will not be discussed here.

Figure 24. NNeFI for Swarm A, B and C above 0.05 during 24 August, along with the S4 peaks above 0.12 detected by CYGNSS during the same day. Note the regions marked with a yellow circle where a spatial correlation between both sources of data can be observed. Note also that the color scale is shared for S4 and NNeFI.
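A minimal sketch of the NNeFI computation referenced above, mirroring the moving-window S4 calculation; the input is assumed to be the 1 Hz Swarm electron-density series, and the exact expression follows the normalized-fluctuation form reconstructed above.

import numpy as np

def nnefi(ne, window=12):
    # Normalized Ne Fluctuation Index from a 1 Hz electron-density series
    ne = np.asarray(ne, dtype=float)
    out = np.full(len(ne), np.nan)
    for k in range(len(ne) - window + 1):
        w = ne[k:k + window]                # 12-sample moving window
        m1, m2 = w.mean(), (w ** 2).mean()
        out[k] = np.sqrt(max(m2 - m1 ** 2, 0.0) / m1 ** 2)
    return out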
Around meridian 0° there is a large region of fluctuations observed by Swarm A and C, over the Sahara and Ghana; in the part that enters the Atlantic Ocean there are also some S4 peaks detectable by CYGNSS. Another example is found in the line of S4 peaks that goes from The Guianas to Cabo Verde, which seems to be completed by some peaks in the NNeFI index from Swarm B. There are also some cases along the Pacific Ocean, marked with circles, where a cluster of S4 peaks appears at the same time as NNeFI peaks. There are also regions with mostly S4 peaks but few or no Swarm indicators, for example south of Mexico, in the region between Bermuda and Cuba, or in the middle South Atlantic. It is important to note that the availability of data is much larger for the CYGNSS constellation than for Swarm: for one day, CYGNSS has 8 satellites with 4 channels each in an orbit with lower inclination, whereas Swarm is only three satellites in polar orbit. This fact could hinder the correlation between both databases, but here there is some evidence that motivates further study.
Conclusions
This work has presented the first experimental evidence that GNSS-R can be used as a global ionospheric scintillation monitor and, in particular, as a monitor of bubbles and depletions. Future work searching for ionospheric activity at all latitudes and during different solar activity periods could help improve existing ionospheric models, providing them with data complementary to what ground stations currently provide and with a denser spatial coverage than the Swarm mission.
The study shows the results for scintillation events during the years 2017 and 2019, including tables with peaks that correspond to EPBs consistent with previous studies in terms of duration, LT, and latitude. The duration of the events shown here is on the order of seconds (from 5 s to 30 s), which corresponds to bubbles of around 25 km to 150 km in length. The depth of these bubbles in terms of maximum S4 is on the order of 0.2 to 0.5.
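The quoted sizes follow from the event durations and the speed at which the specular reflection point sweeps through the ionosphere; the short sketch below assumes the ~5 km/s effective velocity implied by the 5 s/25 km and 30 s/150 km pairs above, which is an assumption made for illustration rather than a value stated explicitly here.

```python
def bubble_length_km(duration_s, specular_velocity_km_s=5.0):
    """Approximate bubble size as duration times the effective velocity of the
    specular point through the ionosphere (~5 km/s assumed here)."""
    return duration_s * specular_velocity_km_s

# 5 s and 30 s events map to roughly 25 km and 150 km, as quoted in the text
print(bubble_length_km(5), bubble_length_km(30))
```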
This way of studying ionospheric scintillation could be very beneficial because it can cover large areas all over the Earth, including remote regions in the middle of the oceans where no ground stations can be placed. However, it also has some limitations, and they should be carefully identified and addressed. One of them is that only calm ocean surfaces ensure coherent specular reflections: if the water surface has ripples, the signal reflects from a larger glistening zone and arrives at the receiver with different phases, hiding the possible fluctuations in amplitude and phase. In further studies continuing the analysis of CYGNSS and other GNSS-R missions, an automated data-filtering algorithm must be implemented to eliminate data affected by sea roughness.
On the other hand, it is important to take into account that the results presented in this study come from the years 2017 to 2019, and the solar activity cycle should reach its minimum in 2020. This implies that less activity is expected and, consequently, fewer scintillation events. It would be very interesting to study GNSS-R data from future missions when approaching the solar maximum.
A correlation study between these results and others obtained from different sources, for example the Swarm geomagnetic and plasma density products, is being performed to find possible confirmations of this newly proposed technique and also to check what new possibilities the GNSS-R technique could bring to the ionospheric monitoring field.
The results presented in this study are a small sample of the total amount of information that can be obtained using this technique. CYGNSS generates around 10.4 GB per day, and further exhaustive analyses of it could bring new information or improve current models of the ionosphere, which is especially interesting regarding oceanic regions. | 8,507.8 | 2020-11-18T00:00:00.000 | [
"Physics"
] |
APPLICATION OF THE META-MODELLING METHOD IN BUSINESS INTELLIGENCE
: The article deals with the issue of conceptual modelling of the MOLAP database structures used in multidimensional data analysis for BI. Management information for decision-making purposes was collected in the MOLAP database and delivered to the user via the management cockpit. Building OLAP databases is discussed from the database designer's point of view. The meta-model of the OLAP database is described in conceptual and logical terms. The work includes a critical analysis of the meta-model assumptions for the design of systems for multidimensional data analysis. A postulate to revise the classic meta-model is presented and a new type of relationship is proposed. The assumptions of the meta-model for modelling and simulating business decisions are presented, as well as the application of meta-modelling to define business processes. Based on Gozinto graphs, the meta-model of structural and technological development is defined.
Introduction
The large-scale application of BI (Business Intelligence) applications for decision support has created a need for new information processing techniques, in particular new technologies for storing data in databases. The technology of classic SQL databases is being modified to meet the requirements of advanced business users. For the IT management of management information it is necessary to meet the challenge of creating modern tools to support the construction of manager dashboards. New types of databases supporting multidimensional data analysis have appeared, such as the so-called MOLAP database (Multidimensional On-line Analytical Processing). A new, modified meaning is attributed to the computer simulation technique. There are three main areas of data structure modelling in BI applications:
• ROLAP technology, where data are collected in two-dimensional tables; the database has a relational structure and is divided into dimension and fact tables,
• MOLAP technology, where data are collected in OLAP cubes in multidimensional databases,
• simulation technique, where data are collected in method and model databases; in addition to real data, simulation data are collected, and models and methods are saved in the databases.
As a result of the dynamic development of BI applications, classic SQL databases are being modernized. On the one hand, new technological solutions are emerging in the field of using SQL to support data warehouses (ROLAP); on the other hand, work is underway on a new generation of databases supporting multidimensional data analysis that store data in the form of OLAP cubes (MOLAP). Simulation in management is no longer an independent discipline, but rather is implemented in a comprehensive system called the decision support system.
Business Intelligence applications
BI (Business Intelligence) is information technology focused on preparing information for making decisions. This technology is designed to enable the user to transform data into decision information.
BI is based on modern information technologies supporting the analysis and design of information resources for decision support purposes. Obtaining data from the resources of transaction systems is of key importance; this acquisition requires tools from the fields of Data Mining and ETL (Extraction, Transformation, Loading). One of the central elements of a BI system is the data warehouse, where information processed in analytical mode (OLAP) is stored. The manager's cockpit is responsible for displaying information in a form adapted to decision needs.
Conceptual modelling of data structures in BI
The BI application development process can be divided into several phases. The starting point in the design process is to conduct a requirements analysis, i.e. to determine the users' information needs. Having a specific information scope, we can create a conceptual BI application model. Conceptual modelling uses a set of basic concepts such as classes, objects, relationships, processes and events. In this arrangement, the use of the meta-model is important for the construction of diagrams. The meta-model is a generalization of a given class of conceptual models; it contains semantic invariants for marking graphic symbols in diagrams intended for modelling data structures. Using these symbols, one can define a specific database model or business process.
The concept of a meta-model was introduced to the IT literature by Ferstl and Sinz (2001, p. 122), who defined the meta-model of E/R data, which significantly structured the discussion on semantic modelling of relational database structures. Using the meta-model category makes it easier to define specific data models that fall into a particular category and must meet the general rules for the given meta-model; the classic E/R notation also fulfils an important role here. Conceptual design is a set of actions aimed at transferring the specified requirements to the semantic scheme of the future database. At the stage of requirements analysis, the information specification of the future BI system is determined, and the semantic data model presents the data structure in which data are collected for preparing information reports.
Metamodelling of the OLAP database
The first phase of building OLAP systems was dominated by the approach focused on creating relational databases supporting data warehouses. From a technical point of view, this approach to data warehouse design was practically no different from the traditional approach to relational database design. The task of the database designer was, among others, to define a semantic data model containing definitions of data formats and relationships between individual data elements. Designers have distinguished, as is known, two basic types of semantic models to support data warehouses: snowflakes and multi-stars (Biniek, 2009, p. 287). All operations on the relational database were carried out using the SQL language, which was initially not adapted to processing large data sets. Only after some time was SQL modified to adapt it to data warehouse services; for example, in some language implementations software support for the Cube operator has been developed (Eickler & Kemper, 1997, p. 465). Another approach to creating OLAP systems is defining analytical databases that support spreadsheets. In this case, the OLAP database supports specialized multidimensional data analysis: the spreadsheet is used as before, but it is assisted by an OLAP database containing appropriately aggregated data, adapted primarily to the needs of analytical summaries. Such a database is fundamentally different from the classic ROLAP database, where E/R diagrams can be used to model data. In the OLAP database, one does not use two-dimensional relational tables containing key columns and data columns. Figure 4 shows the functional diagram of an OLAP database that supports spreadsheets. The MOLAP conceptual model does not use the traditional relational data model schemes, i.e. stars and snowflakes. In this case, already during the construction of the conceptual model, the study defined only those data elements that are necessary to define a multidimensional OLAP cube, assuming that the defined cubes will be designed to support various analytical reports. As can be seen in Figure 4, the designer defines the dimensions and their associated elements.
The MOLAP database consists of one or many multidimensional OLAP cubes; there can be many cubes in one database (Cube-1, …, Cube-n). The OLAP cube is structurally defined using predefined dimensions that describe the components of data analysis occurring in a given decision support system, e.g. time, area and products. A specific dimension can occur in one or more cubes at once, and there can be many dimensions in one database depending on the scope of data analysis. A dimension consists of data elements (Element-1, …, Element-n) that are always assigned to a specific dimension. An element is defined by giving it a unique name; the dimension and the cube are defined similarly. When defining elements, one first specifies the dimension and then assigns elements to it, which means that elements do not exist independently in the database. The OLAP cube must contain at least one dimension. The sheet technique is adapted to support the input and output interface: the worksheet is used by the database user when entering data and preparing information sheets from the database. A minimal sketch of these structural rules is given below.
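To make the structural rules above concrete (a cube needs at least one dimension, an element cannot exist outside a dimension, names are unique), the following Python sketch models them with simple classes. The class and attribute names are illustrative assumptions, not part of any specific MOLAP package.

```python
class Element:
    """A named data element; it exists only as part of a dimension."""
    def __init__(self, name, elem_type="numeric"):
        self.name = name
        self.elem_type = elem_type      # numeric or text, as in the meta-model
        self.attributes = {}            # optional additional features

class Dimension:
    """A named dimension holding uniquely named elements (E/D assignment)."""
    def __init__(self, name):
        self.name = name
        self.elements = {}

    def add_element(self, element):
        if element.name in self.elements:
            raise ValueError(f"element '{element.name}' already defined")
        self.elements[element.name] = element

class Cube:
    """An OLAP cube built from at least one dimension (D/C assignment)."""
    def __init__(self, name, dimensions):
        if not dimensions:
            raise ValueError("an OLAP cube must contain at least one dimension")
        self.name = name
        self.dimensions = {d.name: d for d in dimensions}

# Usage: dimensions are defined first, elements are then assigned to them
time = Dimension("Time")
time.add_element(Element("2023"))
products = Dimension("Products")
products.add_element(Element("Product-A", elem_type="text"))
sales = Cube("Sales", [time, products])
```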
Starting from the general premise of meta-modelling, the author undertook the task of defining the meta-model of the MOLAP database, as shown in Figure 5. This figure presents a graphic reflection of the relationships between the elements of the MOLAP database as used in the software packages applied so far, i.e. the so-called classic approach. Data objects represent business facts that occur in business operations. In the process of conceptual modelling, a fact is transferred to the database as an analytical data object. The data object in the modelling process can take the form of an element, a dimension or a cube, where a data element consists of a homogeneous value defined in specific units of measure. A dimension consists of elements, and an OLAP cube of dimensions. Assigning elements to a dimension and a dimension to a cube is a sovereign decision of the OLAP database designer. Thus, the final form of a given data object is determined by the designer, guided by the general modelling rules stored in the meta-model. Modelling organizational and functional dependencies uses system objects, namely: Cube, View, Dim, Subset and Function. These objects exist in every MOLAP database, regardless of specific material entries (element, dimension, cube). In addition, the database designer must ensure that the base elements are assigned: the general rule is that there can be no element in the database that is not assigned to a dimension. Assignment is understood here as the logical connection of elements and dimensions (E/D), of dimensions and cubes (D/C), and of data cubes and the system cube (C/SC). In the E/D assignment, some elements are consolidated, which necessitates defining consolidation rules. Element consolidation consists of the hierarchical linking of elements, i.e. determining their membership in a defined parent group of elements. Each element of the base can have attributes defined, expressing some additional features of this element.
The relationship between logical units of the OLAP database is created through a unique element name. The element, in addition to its unique name, is assigned a type (numeric, text). Special diagrams, the so-called diagrams of facts, are used to model relationships between dimensions and elements (Lechtenbörger & Vossen, 2003, p. 190). Figure 6 shows an example of a facts diagram. Facts represent unitary elements of information (data) in a multidimensional analytical base. A fact describes quantitative values in the form of a measure (units of measure) and qualitative features that are defined in the base through various dimensions, starting from the terminal dimension level. One can define several aggregation paths for the same dimension and the same base element. Elements of a given dimension are functionally dependent on the base element. Each individual dimension contains a defined group of instances or elements. Dimensions are hierarchically shaped and have a specific aggregation path, and each element of the base must belong to at least one specific dimension.
Using the facts diagram, it is possible to define the hierarchical structure of elements present in a specific MOLAP database. The central fact table (sales) was surrounded by various necessary hierarchical or linear dimensions.
The meta-model of the MOLAP database presented in Figure 5 is insufficient to fully represent objects and relationships in multidimensional data analysis. Classic MOLAP packages usually have the disadvantage that they only allow defining the hierarchy of elements and dimensions in cubes, and are unable to define functional relationships between elements occurring in independent dimensions. The basic requirement of the MOLAP base is that functional relationships occur across many dimensions. Generally, in conceptual modelling of the MOLAP database, three types of relationships should be distinguished: functionality, aggregation and membership relations. Aggregation and membership relationships dominate in systems already in use, while functional relations are not present in the currently used MOLAP databases. Assigning one or more dependencies on other elements requires changes in the MOLAP meta-model. It is necessary for OLAP-based multidimensional data analysis packages to support all distinguishable relationships between elements, including functional dependency (Vossen, 1999, p. 152). The many-to-many (n:m) mapping between elements of different dimensions within the same cube is marked in grey, e.g. an employee (an element of the employee dimension) supports one or several products (products dimension). To the three traditionally used associations (supporting hierarchical relationships and membership) there should be added an EW/EW-type relationship supporting functional relationships. Therefore, in the EW/EW table the author placed four entries for one functional relation. The relation indicates which element Ex of dimension Wx is functionally related to element Ey of dimension Wy. In addition to entries indicating a functional relationship, one can also place entries containing the status of the relationship (e.g. active or outdated), the date of entry, etc. In addition, related elements can be assigned a cardinality (multiplicity of the relationship), e.g. a 1:1 or n:m relation, and a set of connection elements is a finite set of four related elements. Each element of a given set of relations belongs to a specific dimension. An assignment of this type significantly facilitates the use of base elements when preparing multi-sectional reports, in particular reports for dashboards. A sketch of such a relationship record follows below.
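As an illustration of the proposed EW/EW entry, the sketch below records a functional relation between an element of one dimension and an element of another, together with the optional status, date and cardinality attributes mentioned above. The record layout is an assumption made for illustration; the paper does not prescribe a concrete schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FunctionalRelation:
    """One EW/EW entry: element Ex of dimension Wx relates to Ey of Wy."""
    dim_x: str            # Wx, e.g. "Employees"
    elem_x: str           # Ex, e.g. "Smith"
    dim_y: str            # Wy, e.g. "Products"
    elem_y: str           # Ey, e.g. "Product-A"
    cardinality: str = "n:m"          # 1:1 or n:m, as discussed in the text
    status: str = "active"            # e.g. active or outdated
    entered_on: date = field(default_factory=date.today)

# Example: an employee functionally supports a product
rel = FunctionalRelation("Employees", "Smith", "Products", "Product-A")
print(rel)
```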
During the requirements analysis, a conceptual model of a multidimensional OLAP database is created in a few steps:
• defining measures and the functional dependencies of the base, and determining the relationships between dimensions and measures,
• defining dimensions and their hierarchy,
• defining element consolidation.
So far, the study has addressed functional and structural modelling. In the created diagrams, the attention of modellers was focused on the definition of objects and the connections between these highlighted objects. Correctly defined relationships between dimensions within a given OLAP cube are important. When preparing multidimensional data analyses, the author used tabular functions that enable downloading data from OLAP cubes and placing the data in prepared reports. Functions are system objects called by name, intended primarily for manipulating data. There are functions designed for filtering data as well as functions operating on arrays. At the stage of creating the conceptual model, the database designer can predict which functions will be used in a particular implementation.
In the conceptual modelling of databases to support BI applications, the simulation approach should also be highlighted. Computer simulation contributes significantly to the improvement of business decision making, including by examining the potential effects of decisions through simulation. The simulation concerns strategic decisions; medium- and long-term operational decisions can also be subject to simulation. The concept of computer simulation is presented in Figure 8. The simulation model is inherently related to describing changes in variable values on the time axis. Time as a variable can be modelled in a continuous or discrete way. The model variables can be described algebraically or by functional dependencies. The simulation model has an internally built-in operationalization mechanism; there is a model operator for each model. In general, three types of simulation models are distinguished in computer simulation:
• what-if models (Golfarelli et al., 2006),
• goal-seeking models,
• steady-state models.
The above-mentioned types of models share a common plane of operationalization. The simulation takes the form of a system of equations (algebraic, differential or difference equations), and the control takes place either on the time axis or in the form of fixed steps, regardless of the time course.
Assumptions of the meta-simulation model occur in the conceptual modelling of computer simulation. Petri nets were adopted as the basis for creating simulation models. The idea of a meta-simulation model is derived from the general assumptions of object-oriented meta-modelling defined by Ferstl and Sinz (2001, p. 199). Currently, the dominant notations in the construction of simulation models are algebraic-equation and cybernetic models. For the semantic modelling of simulation model structures, among others, Petri nets were used (Biniek, 2009, p. 233).
Business facts diagram modelling typology
It should be noted that the use of the conceptual meta-modelling method for data structures provides multiple benefits in the design of databases supporting business processes. By creating a universal model for defining specific implementation models, the study brings semantic ordering of data structures, provides semantic completeness of specific notations and appropriate semantic accuracy, and allows the notation to be supplemented with new elements available in a given meta-model. Meta-modelling greatly facilitates the creation of new database implementations to support BI applications. Based on the class definitions in the meta-model, one can create appropriate model libraries from which elements are retrieved to create specific implementations of business processes.
Modelling business processes constitutes a separate issue in the design of BI applications. The BPMN (Business Process Modelling Notation) notation, developed and maintained by the BPMI (Business Process Management Initiative) organization, is used to design applications that support the e-business sphere (Business Process Modeling Notation, 2003). A BPD (Business Process Diagram) consists of a set of graphic elements used to define specific diagrams. These elements allow the creation of simple charts that are easily accepted by most business process analysts. The elements were designed to be distinguishable and their shapes easy to remember for most people involved in business modelling: for example, activities are rectangles and decisions are diamond-shaped. The author illustrates the issue with an example of a simple BPD diagram showing activities related to the operation of an online store.
One of the main assumptions when designing BPMN diagrams was to create a simple mechanism for building business process models, while still allowing complex business processes to be modelled. Combining these two conflicting requirements affects the allocation of graphic elements of the notation to specific categories. Finally, a small number of categories was created so that the BPD user could easily recognize the basic types and understand the charts. The most accessible way to present diagrams for modelling business processes is the meta-modelling method. Figure 11 presents the assumptions of the meta-model containing the elements used when defining BPMN diagrams.
The use of the meta-modelling method requires the definition of two basic elements that make up the modelling process: the metaphor (context) and the meta-model diagram. To illustrate the modelling principle, the metaphor of the swimming pool (a pool with swimming lanes/tracks) was adopted. Figure 11 shows the meta-model containing the system of concepts and elements used in business process diagrams. A specific business process diagram describes a process or subprocess. According to the metaphor, the diagram is placed in a lane (track) and the track in a swimming pool. At least one track is placed in the diagram, and there can be n tracks in the bases (1, n). Two types of objects are used to define diagrams (data, process). There are four basic categories of elements in diagrams:
• Flow Objects,
• Connecting Objects,
• Swimming Lanes (tracks),
• Artifacts.
Recently, streaming databases have become an important element of information technologies used in e-business. Unlike relational databases operating on transactions and records, data streams describe the continuous flow of data between the end device and the database server.
Metamodel STD (Structural and Technological Development)
Gozinto graphs are the most common structure and production modelling technique. Creating the Gozinto meta-model is oriented towards structures, not processes. Creating business models involves such concepts as: meta-model, m2m, model of models. Figure 12 shows the functional structure of the Gozinto graph. Product P has a specific functional structure consisting of selected sub-elements, such as: intermediates, elements of the parts list, structure (relationships), variants, products. The goal of creating a Gozinto graph is to specify a quantitative list of products, including the structure of the product and the number of occurrences with connections within the previously defined structure. Instances include the number of parent and child instances. The Gozinto graph model allows us to define the parts list, the alternatives list, and the parts usage list. A minimal computational sketch is given below.
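To illustrate how a Gozinto graph yields a quantitative parts list, the sketch below stores each arrow of the graph as a (parent, child, quantity) pair and accumulates the total amount of every raw material and intermediate needed for one unit of a final product. The product names and quantities are invented for illustration; the traversal itself is the standard requirements explosion over the graph structure.

```python
from collections import defaultdict

# Gozinto graph arrows: parent -> list of (child, quantity per parent unit).
# E for end product, Z for intermediates, R for raw materials (hypothetical data).
structure = {
    "E1": [("Z1", 2), ("Z2", 1)],
    "Z1": [("R1", 3), ("R2", 1)],
    "Z2": [("R2", 4)],
}

def total_requirements(product, amount=1, totals=None):
    """Requirements explosion: total quantity of every node below `product`."""
    if totals is None:
        totals = defaultdict(float)
    for child, qty in structure.get(product, []):
        totals[child] += amount * qty
        total_requirements(child, amount * qty, totals)
    return totals

print(dict(total_requirements("E1")))
# e.g. {'Z1': 2.0, 'R1': 6.0, 'R2': 6.0, 'Z2': 1.0}
```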
The structure of the P1 product (see Figure 12) includes links between the product, its intermediates and variants. The Gozinto graph contains a graphic model of the product structure, and also allows variants of technological processes. Building the semantic model is based on the theory of metamodelling of Gozinto graphs.
Conclusion
The paper presents the main assumptions for applying the meta-modelling method to manage the structure of product production. Models of IT systems for small and medium enterprises producing to individual customer orders enable differentiation of production orders, using B2B, BPMN, BI and Gozinto graph process models.
The Gozinto graph uses a hierarchical presentation, which means that it starts with raw materials (German: Rohstoffe = R), contains intermediate products inside (German: Zwischenprodukte = Z), and has the top products in the highest place (German: Endprodukte = E). Since it only shows the total quantities and does not focus on the production base and on individual products, it can be compared with the quantity specification.
Thanks to Gozinto graphs, one can present structural relationships in production. Their basis is a structure specification that shows many (or all) production stages. Gozinto graphs use a diagram to show what quantities of raw materials are included in the semi-finished and final products. The amounts contained in these presentations are shown in Gozinto graphs by arrows.
The Gozinto graph is, in this order:
• a help in presenting the product structure to determine the demand,
• an element supporting the planning of the implementation deadline and machine occupancy,
• a visualization form positioned between the structure and quantity specifications.
The use of Gozinto graphs to map manufacturing processes is important in this case. Mapping is carried out using production process diagrams and manufacturing processes. | 4,972.8 | 2020-01-01T00:00:00.000 | [
"Computer Science"
] |
On the Gaussian Random Matrix Ensembles with Additional Symmetry Conditions
The Gaussian unitary random matrix ensembles satisfying some additional symmetry conditions are considered. The effect of these conditions on the limiting normalized counting measures and correlation functions is studied.
Introduction and main results
Let us consider a standard 2n × 2n Gaussian Unitary Ensemble (GUE) of Hermitian random matrices W_n, where ξ_xy, η_xy, x, y = −n, . . . , −1, 1, . . . , n are i.i.d. Gaussian random variables with zero mean and variance 1/2. Consider also the normalized eigenvalue counting measure (NCM) N_n of the ensemble (1), defined for any Borel set ∆ ⊂ R by N_n(∆) = #{i : λ_i ∈ ∆}/(2n), where λ_i, i = 1, . . . , 2n are the eigenvalues of W_n. Suppose now that the ensemble (1) also has an additional symmetry in the negative (positive) indices x and y. We consider four different cases of symmetry, among them: 3. (W_n)_{xy} = (W_n)_{−xy}, 4. (W_n)_{xy} = (W_n)_{y−x}.
The Gaussian unitary ensemble and the Gaussian orthogonal ensemble (GOE) have been considered in numerous papers (see e.g. [4]). The Gaussian unitary ensemble with the additional symmetry of type (3) was proposed in the papers [1,3] as an approach to the weak disorder regime in the Anderson model. This ensemble was also considered in the papers [2,9]. In all these papers the ensemble (3) was called the flip matrix model and was studied by a supersymmetry approach and the moment method. In this paper an approach is proposed that is simpler and the same for all four cases (3)-(6). This approach is a version of the technique initially proposed in [7] and developed in the papers [5,4,6,8].
Using this technique we obtain the following results. The first two ensembles, (3) and (4), are GOE-like.
Proposition 1. The NCMs N^(1)_n and N^(2)_n of the ensembles (3) and (4) converge weakly with probability 1 to the semi-circle law N_sc, and the n^{-1}-asymptotics of the correlation functions of their Stieltjes transforms are given by the GOE-type formula (7), where f_sc(z) is the Stieltjes transform of the semi-circle law N_sc.
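For reference, under the standard normalization in which the limiting support is [−2, 2] (an assumed convention, stated here as background rather than taken from the paper's own display formulas), the semi-circle law and its Stieltjes transform are:

```latex
N_{\mathrm{sc}}(d\lambda) \;=\; \frac{1}{2\pi}\sqrt{4-\lambda^{2}}\,\mathbf{1}_{[-2,2]}(\lambda)\,d\lambda ,
\qquad
f_{\mathrm{sc}}(z) \;=\; \int \frac{N_{\mathrm{sc}}(d\lambda)}{\lambda - z}
 \;=\; \frac{-z+\sqrt{z^{2}-4}}{2},
\qquad
f_{\mathrm{sc}}^{2}(z) + z\,f_{\mathrm{sc}}(z) + 1 \;=\; 0 .
```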
The fourth ensemble (6) is GUE-like: its NCM N^(4)_n converges weakly with probability 1 to the semi-circle law N_sc, and the n^{-1}-asymptotic of the correlation function of its Stieltjes transform coincides with (7) divided by 2 (i.e. the GUE asymptotic).
As for the third ensemble, the additional symmetry produces a new limiting NCM and correlation function: the NCM N^(3)_n of the ensemble (5) converges weakly with probability 1 to the limiting non-random measure N given by (8), where λ± = 3 ± 2√2, and the n^{-1}-asymptotic of the correlation function of its Stieltjes transform is given by the formula (9), where f(z) is the Stieltjes transform of the limiting measure N.
This result is somewhat unexpected for a Hermitian Gaussian random matrix ensemble with a rather large number (of order n²) of independent random parameters. But it shows how much an additional symmetry may affect the asymptotic behavior of the eigenvalues.
The limiting NCMs
In this section we consider the limiting normalized counting measures of the ensembles (3)-(6).
In what follows we use the notation ⟨·⟩ to denote the average over the ensemble. We also use the resolvent identity and the Novikov-Furutsu formula for a complex Gaussian random variable ζ = ξ + iη with zero mean and variance 1 and a continuously differentiable function q(ζ, ζ̄), where ∂/∂ζ = (1/2)(∂/∂ξ + i ∂/∂η); both identities are written out for reference after this paragraph. We perform our calculations in parallel for all four ensembles. First, let us observe that properties (3)-(5) are valid not only for the matrices of the ensembles (3)-(5) but also for their powers and hence for their resolvents; this follows by induction on m and the symmetry of the summation index. Unfortunately, there is no such property for the fourth ensemble. Now, using the resolvent identity for the average ⟨G_pq(z)⟩, relation (10) and the formula for the derivative of the resolvent, we obtain the basic relations. We then put p = q in all four cases, and p = −q a second time in the second case, apply (2n)^{-1} Σ_{p=−n}^{n}, and use the additional symmetries of the resolvents of the ensembles (3)-(5). In the appendix we prove that the variances of the random variables g(z) in all the cases above are of order O(n^{-2}) uniformly in z on compact sets in C± (as is the variance of ĝ(z) in the second case). Besides, using the Schwarz inequality for the matrix scalar product (A, B) = Tr AB, we obtain that all terms in the square brackets in all four cases are at least of order O(n^{-1}). Hence, in the first and in the fourth cases we obtain the limiting equation (15), which is the equation for f_sc(z), the Stieltjes transform of the semi-circle law. Besides, for all z with, e.g., |Im z| ≥ 3 we have, uniformly in n, the bound (16) in all cases. Thus, in the second case ⟨ĝ(z)⟩ is of order O(n^{-2}), and hence the second case leads to the same limiting equation (15).
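The two standard identities invoked at the start of this section can be written, under the usual conventions and with the derivative operator as defined above (this explicit form is assumed here rather than quoted from the paper), as:

```latex
% Resolvent identity, G(z) = (W_n - z)^{-1}:
G(z_1) - G(z_2) \;=\; (z_1 - z_2)\, G(z_1)\, G(z_2).
% Gaussian integration by parts (Novikov--Furutsu) for a complex Gaussian
% \zeta = \xi + i\eta with zero mean and unit variance:
\big\langle \zeta\, q(\zeta,\bar\zeta) \big\rangle
 \;=\; \Big\langle \frac{\partial q}{\partial \zeta}(\zeta,\bar\zeta) \Big\rangle ,
\qquad
\frac{\partial}{\partial\zeta} \;:=\; \frac{1}{2}\Big(\frac{\partial}{\partial\xi} + i\,\frac{\partial}{\partial\eta}\Big).
```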
As for the third case, it leads to the equation (17), whose solution in the class of Nevanlinna functions is the Stieltjes transform of the measure (8).
The convergence with probability one in all four cases follows from the bounds for the variances in the section below and the Borel-Cantelli lemma.
The correlation functions
As in the previous section, we perform our calculations in parallel for all four ensembles.
Using the resolvent identity for the average ⟨g°(z_1)G_pq(z_2)⟩ and relations (10) and (12), we obtain a relation into which we substitute the value of W'_n in each of the four cases and use the symmetries of the resolvents. Then we put p = q in all four cases, and p = −q a second time in the second case, and apply (2n)^{-1} Σ_{p=−n}^{n}. In each case this yields a relation whose leading term contains (2n)^{-2} Tr G²(z_1)G(z_2) plus a remainder r_{j,n}, j = 1, . . . , 5. As we show in the appendix, all r_{j,n}, j = 1, . . . , 5 are of order o(n^{-2}). Thus, as one can easily show, all the correlation functions F(z_1, z_2) = ⟨g°(z_1)g(z_2)⟩ above are of order O(n^{-2}). Moreover, since ⟨ĝ(z_2)⟩ is of order O(n^{-2}) in the second case, it is easy to see that cases one and two lead to the same relation for F(z_1, z_2); case four leads to a corresponding relation of its own. Besides, due to the resolvent identity, (2n)^{-1} Tr G(z_1)G(z_2) can be expressed through g(z_1) and g(z_2), and, as we show in the appendix, in these cases (2n)^{-1} Tr G²(z) can be expressed through ⟨g(z)⟩ up to vanishing terms. Thus, substituting the expressions (23) and (24) into the relations (21) and (22) and using the equation (15) for the limit of ⟨g(z)⟩, we obtain in cases one and two the GOE correlator asymptotic (7), and in case four the twice smaller (GUE) asymptotic.
To treat the third case we use (11) and obtain an expression for (2n)^{-1} Σ_{p=−n}^{n} of the corresponding averages. This gives the relation (25) for F(z_1, z_2). We also show in the appendix that a further identity holds in this case; substituting it into (25) and then using the equation (17), we rewrite the resulting relation in the form (9).
Conclusion
The purpose of this paper was to answer the question: "Can additional symmetry properties influence the asymptotic behavior of the eigenvalue distribution of the GUE?" The negative answer for three of the cases of additional symmetry is not surprising, as these symmetries leave the number of independent random parameters of order n². The effect that in one case the additional symmetry essentially changes the limiting eigenvalue counting measure is very unexpected, especially the appearance of a gap in the support of the limiting NCM. Unfortunately, a physical application of this effect is unknown to the author, though one of the other considered ensembles (the flip matrix model) was used as an approach to the weak coupling regime of the Anderson model. Proof. First we prove that the variance is of order O(n^{-2}) in all four cases. Indeed, in the first case, using (18) with z_2 = z_1 = z, we obtain v(1 + 2z^{-1}⟨g(z)⟩) = −z^{-1} · · ·
Besides, using the Schwarz inequality we obtain from (19) a bound for the remaining term. Thus, due to the bound (16) and the bound on (2n)^{-1} Tr · · ·, we have for |Im z| ≥ 3 the inequality v ≤ 2/(9 (2n)²) + · · ·. For the other cases the proofs are analogous.
To prove that r_{1,n} = o(n^{-2}), we rewrite the second term in the parentheses. Since the value ⟨g°(z_1)g°(z_2)⟩ is analytic and uniformly bounded in n for |Im z_{1,2}| ≥ 3, and since, due to the Schwarz inequality, |⟨g°(z_1)g°(z_2)⟩| ≤ v = O(n^{-2}), its derivative with respect to z_2 is also of order O(n^{-2}), and hence the second term is of order O(n^{-3}). To prove that the first term is o(n^{-2}), let us consider the quantity R. Then, using the resolvent identity for the average ⟨R° G_pq(z_2)⟩ and relations (10) and (12), we obtain a relation into which we substitute the value of W'_n and use the symmetry of the resolvent. Then we put p = q in all four cases (and p = −q a second time in the second case) and apply (2n)^{-1} Σ_{p=−n}^{n}, obtaining a term with Tr G³(z_2).
The terms r_{j,n}, j = 2, . . . , 5 can be treated analogously, with the exception of the last term of r_{5,n}. The last term of (20), (2n)^{-2} (2n)^{-1} Σ_{r,p=−n}^{n} (G²(z_1))_{r−p} G_{rp}(z_2), can be treated as follows.
First, observe that in case four ⟨ĝ(z)⟩ = o(n^{-1}). Indeed, using (13) with q = −p, we obtain ⟨ĝ(z)⟩ = −z^{-1}⟨g(z)⟩⟨ĝ(z)⟩ − z^{-1}⟨g°(z)ĝ(z)⟩ − z^{-1}(2n)^{-1}(· · ·), where G^T is the transpose of G. Due to the Schwarz inequality for the trace, the last term on the r.h.s. of this relation is of order O(n^{-1}). Since the variance of g(z) is of order O(n^{-2}), the second term is at least of order O(n^{-1}) (in fact it is of order O(n^{-2}), since, as one can show, the variance of ĝ(z) is of the same order). Thus, ⟨ĝ(z)⟩ is of order O(n^{-1}). It is easy to show in the same way that ⟨ĥ(z)⟩ = (2n)^{-1} Σ_{j=−n}^{n} (G²(z))_{j−j} is also of order O(n^{-1}) and that its variance is of order O(n^{-2}). Now, using the resolvent identity for the average of Φ = (2n)^{-1} Σ_{p,q=−n}^{n} (G²(z_1))_{p−q} G_{pq}(z_2), | 2,728.6 | 2006-01-21T00:00:00.000 | [
"Mathematics",
"Physics"
] |
Implications of the steady-state assumption for the global vegetation carbon turnover
Vegetation carbon turnover time (τ) is a central ecosystem property for quantifying global vegetation carbon dynamics. However, our understanding of vegetation dynamics is hampered by the lack of long-term observations of changes in vegetation biomass. Here we challenge the steady-state assumption of τ by using annual changes in vegetation biomass derived from remote-sensing observations. We evaluate the changes in magnitude, spatial patterns, and uncertainties in vegetation carbon turnover times from 1992 to 2016. We found the steady-state assumption to be robust for forest ecosystems at large spatial scales, contrasting with larger local differences at the grid cell level between τ under steady-state and non-steady-state conditions. The observation that terrestrial ecosystems are not locally in a steady state is deemed crucial when studying vegetation dynamics and the potential response of biomass to disturbance and climatic changes.
Introduction
One of the largest uncertainties in Earth system models is in quantifying how the carbon uptake by terrestrial ecosystems will respond to changes in climate (Friedlingstein et al 2006, Friend et al 2014). As an emergent ecosystem property that partially determines carbon sequestration capacity, the vegetation biomass turnover times (τ) have been used as a diagnostic metric to quantify the feedback between the carbon cycle and climate (Carvalhais et al 2014, Thurner et al 2016). However, there is a large uncertainty in the simulations of vegetation carbon stock as well as τ across Earth system models, indicating different representations of the response of vegetation to future climate change (Friend et al 2014). Furthermore, our current understanding of τ and the vegetation dynamics is limited due to the lack of long-term observations of changes in vegetation.
As a result, the estimation of τ has so far relied on the assumption that the vegetation carbon in an ecosystem would eventually reach a steady state (steady-state assumption, hereafter SSA), at which the net change of vegetation biomass becomes zero (ΔC_veg = 0), or so small compared to the total biomass that it becomes negligible. The SSA has been shown to be a useful assumption at large spatial scales. However, at local scales, an ecosystem is unlikely to maintain a steady state due to the influence of external factors such as disturbances and climate variability (Ge et al 2019). It is still unknown whether the SSA holds in local spatial domains and how much difference it makes to the τ estimation if one neglects the temporal changes in vegetation carbon.
In this study, we used estimates of annual changes in vegetation carbon derived from a multi-decadal dataset (Besnard et al 2021, Santoro et al 2022) and global estimates of gross primary productivity (GPP) driven by meteorological observations (Tramontana et al 2016, Jung et al 2020) to estimate and compare τ derived from the SSA and from the non-steady-state assumption (hereafter NSSA), respectively, at local, biome and global scales. The validity of the SSA was evaluated in different spatial domains to better quantify the effect of spatial scale on the patterns of carbon turnover times.
Data and methods
In this section, we first introduce the datasets used to estimate τ, including above-ground vegetation biomass, below-ground biomass and GPP. We used a forest canopy cover dataset to examine the relationship between the changes in τ and tree canopy cover. The calculations of τ using three methods are then introduced with detailed explanations.
The multi-decadal estimates of AGB dataset
Annual above-ground biomass (AGB) estimates were derived from C-band satellite radar signals between 1992 and 2016 with a pixel size of 25 km (Santoro et al 2022). The very dense time series of observations by the European Remote Sensing wind scatterometer, the MetOp Advanced SCATterometer (ASCAT), and the Envisat advanced synthetic aperture radar (ASAR) were used to maximize the information content of forest structure in the signal, allowing for AGB estimates of higher accuracy compared to values obtained from a single observation (Santoro et al 2022). The annual estimation of AGB is obtained by synthesizing all daily observations of the radar backscatter at one location in a pixel (0.25° × 0.25°), enabling the inference of a continuous time series of AGB estimates. By adapting the AGB retrieval method in time and space and computing a weighted average of individual AGB estimates, the annual AGB estimates were less impacted by data noise, instantaneous moisture conditions, precipitation, and snow cover (Santoro et al 2011).
Estimation of total vegetation carbon stock
The total vegetation biomass consists of AGB and below-ground biomass (BGB). We estimated BGB from the AGB time series by scaling with the root-shoot ratio (Huang et al 2021), R_rs: BGB = AGB × R_rs. Therefore, the total vegetation biomass is: C_veg = AGB + BGB = AGB (1 + R_rs). (1)
Estimation of GPP
We used estimations of GPP from the FLUXCOM project, in which different machine learning approaches were applied to upscale global energy and carbon fluxes from eddy covariance flux measurements (Tramontana et al 2016, Jung et al 2020). In this study, annual GPP estimates driven by meteorological and remote sensing observations, at a spatial resolution of 0.5° and for the time period from 1992 to 2016, are used as the carbon influx into the vegetation carbon pool. The dataset was resampled (bilinear interpolation) to the spatial resolution of 0.25° to match the AGB dataset.
Forest tree canopy cover change
Tree canopy cover (vegetation that is greater than 5 meters in height) was derived from advanced very high resolution radiometer remote-sensing measurements (Song et al 2018). The version 4 long-term data record (LTDR) was used to generate the data on tree canopy coverage from 1982 to 2016. Daily LTDR surface reflectance data were used to compute the normalized difference vegetation index (NDVI) at each pixel (0.05° × 0.05°). Maximum NDVI composition was then used to obtain adjusted annual phenological metrics, which were used as input to supervised regression tree models to generate the annual product of tree canopy coverage.
Estimation of τ under steady state
The change in C_veg over time is determined by the carbon uptake and the turnover time: dC_veg/dt = GPP − C_veg/τ, (2) where C_veg is the vegetation carbon stock. Assuming that the vegetation carbon pool is in a steady state, i.e. the change in C_veg over time (dC_veg/dt) equals zero, the vegetation carbon turnover time can be calculated as the ratio between vegetation carbon stock and GPP: τ_SSA = C_veg / GPP. (3) Here τ_SSA is calculated pixel-wise by using the annual mean C_veg and GPP over the period 1992-2016.
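A minimal sketch of the steady-state calculation, combining equations (1) and (3): total vegetation carbon from AGB and the root-shoot ratio, then τ_SSA as the ratio of the period-mean carbon stock to the period-mean GPP. Array names, shapes and the placeholder data are illustrative assumptions.

```python
import numpy as np

def tau_ssa(agb, gpp, r_rs):
    """Steady-state turnover time per pixel (years).

    agb  : array (years, lat, lon) of above-ground biomass (kgC m-2)
    gpp  : array (years, lat, lon) of gross primary productivity (kgC m-2 yr-1)
    r_rs : root-shoot ratio (scalar or (lat, lon) array)

    C_veg = AGB * (1 + R_rs)  (eq. 1);  tau_SSA = mean(C_veg) / mean(GPP)  (eq. 3)
    """
    c_veg = agb * (1.0 + r_rs)
    return c_veg.mean(axis=0) / gpp.mean(axis=0)

# Example with random placeholder data on a tiny grid (1992-2016 -> 25 years)
agb = np.random.rand(25, 4, 4) * 10
gpp = np.random.rand(25, 4, 4) * 2 + 1
print(tau_ssa(agb, gpp, r_rs=0.25).shape)   # (4, 4)
```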
Estimation of τ under non-steady state
Compared with the estimation of τ under the steady-state assumption, the changes in C_veg over time are considered (dC_veg/dt ≠ 0) when estimating τ under a non-steady state (τ_NSSA). To derive a robust estimation of τ_NSSA at each grid cell, we calculated τ_NSSA using three different methods to assess the uncertainty built into the τ estimations.
Method 1
We estimate ΔC_veg by calculating the annual difference of C_veg between year t and year t−1. Then, a τ estimate can be derived for each year by applying GPP and ΔC_veg at year t. Finally, we derive the τ under a non-steady state for each year: τ_NSSA = C_veg / (GPP − ΔC_veg). (4)
Method 2
In the second method, we estimated the mean ΔC_veg using the trend of C_veg over a certain period to avoid the influence of outliers on the results. In this way, τ can be inferred as: τ_NSSA = C_veg / (GPP − ΔC_veg,trend). (5) Here ΔC_veg,trend is inferred by applying a simple linear regression model (least-squares robust fitting) between the response variable C_veg and time (C_veg ∼ T). The coefficient of T is therefore the average annual ΔC_veg over the whole period. Thus, the τ under a non-steady state can be estimated with the annual mean values of C_veg, GPP, and ΔC_veg.
Method 3
In the third method, we infer τ from equation (2) by applying a linear regression model (least-squares robust fitting) at each grid cell in which (GPP − ΔC_veg) is the target variable and C_veg is the predictor, enabling the annual turnover time to be inferred from the coefficient of C_veg (1/τ_NSSA): GPP − ΔC_veg = (1/τ_NSSA) · C_veg. (6) Here ΔC_veg is the difference of C_veg between year t and year t−1, GPP is the carbon input in year t, and C_veg is the total carbon density in year t−1.
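The three non-steady-state estimators can be sketched per pixel as below: method 1 uses yearly differences, method 2 uses the linear trend of C_veg, and method 3 regresses (GPP − ΔC_veg) on C_veg so that the slope is 1/τ. The least-squares fits here are ordinary rather than robust, which is a simplification of the fitting described in the text, and the example series are placeholders.

```python
import numpy as np

def tau_nssa(c_veg, gpp):
    """Three per-pixel NSSA turnover-time estimates from yearly series."""
    dc = np.diff(c_veg)                       # annual change, year t vs t-1
    # Method 1: yearly estimates (eq. 4), then their mean
    tau1 = np.mean(c_veg[1:] / (gpp[1:] - dc))
    # Method 2: mean annual change from the linear trend of C_veg (eq. 5)
    years = np.arange(len(c_veg))
    dc_trend = np.polyfit(years, c_veg, 1)[0]
    tau2 = c_veg.mean() / (gpp.mean() - dc_trend)
    # Method 3: slope of (GPP - dC) vs C_veg is 1/tau (eq. 6)
    slope = np.polyfit(c_veg[1:], gpp[1:] - dc, 1)[0]
    tau3 = 1.0 / slope
    return tau1, tau2, tau3

# Placeholder series for one pixel (25 years)
c_veg = np.linspace(5.0, 6.0, 25) + np.random.normal(0, 0.05, 25)
gpp = np.full(25, 1.5) + np.random.normal(0, 0.05, 25)
print(tau_nssa(c_veg, gpp))
```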
The impact of disturbance and climate variability on vegetation carbon turnover
We used the bootstrap aggregating (bagging) ensemble learning method to improve the generalizability and robustness of the predictions of τ_NSSA over a single estimator. The original training dataset is bootstrapped to form subsets, i.e. samples are drawn with replacement. Bootstrapping promotes diversity of the predictions by increasing the number of ensemble members. The issue of overfitting can be reduced by validating predictions on the out-of-bag subset. Finally, the predictions of the different ensemble members are aggregated to generate the combined result. In this study, we used multiple features to represent disturbance, temperature, atmospheric and soil moisture conditions (table 1). All variables are harmonized to the same spatial (0.25°) and temporal (annual) resolution as τ_NSSA. To estimate the relative importance of individual features, we employed the Shapley value (Lundberg et al 2017), which measures the average marginal contribution of a player in a cooperative game. The advantage of the Shapley value is that it provides the magnitude and direction (negative or positive) of the impact of a feature on the deviation from the average prediction. Therefore, Shapley values provide a more intuitive and informative way to compare the relative importance of different features in a tree ensemble model.
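A sketch of this ensemble setup using scikit-learn's bagged regression trees is given below; for feature attribution it uses permutation importance as a simple stand-in for the Shapley values used in the study (the SHAP library would be needed for the actual Shapley decomposition). The synthetic predictors and target stand in for the variables of table 1 and for τ_NSSA.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.inspection import permutation_importance

# Placeholder predictors (temperature, precipitation, moisture, radiation, forest-cover change)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = 2.0 * X[:, 4] + 0.5 * X[:, 0] + rng.normal(scale=0.1, size=500)  # synthetic tau_NSSA

# Bagged decision trees with out-of-bag validation, as described in the text
model = BaggingRegressor(n_estimators=100, oob_score=True, random_state=0).fit(X, y)
print("OOB R^2:", round(model.oob_score_, 2))

# Permutation importance as a rough proxy for per-feature contributions
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("feature importances:", np.round(imp.importances_mean, 2))
```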
Estimation of uncertainty
The uncertainty of the τ_NSSA estimation comes from the uncertainties in both AGB and GPP. The uncertainties of the annual estimates of AGB are derived from the weighted sum of the individual daily AGB estimates and a covariance component that accounts for the correlation between errors (Santoro et al 2022).
The uncertainty of GPP is derived from three machine learning methods and two partitioning methods (Jung et al 2020). Therefore, the propagated error (standard deviation) of the τ_NSSA estimate can be derived from equation (4) by standard error propagation.
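Assuming independent errors in the numerator and denominator of equation (4) (an assumption made here; the notation σ(·) is ours), the first-order propagated uncertainty takes the standard form:

```latex
\sigma_{\tau} \;=\; \tau\,
\sqrt{\left(\frac{\sigma_{C_{\mathrm{veg}}}}{C_{\mathrm{veg}}}\right)^{2}
      + \frac{\sigma_{\mathrm{GPP}}^{2} + \sigma_{\Delta C_{\mathrm{veg}}}^{2}}
             {\left(\mathrm{GPP}-\Delta C_{\mathrm{veg}}\right)^{2}}}\,,
\qquad
\tau \;=\; \frac{C_{\mathrm{veg}}}{\mathrm{GPP}-\Delta C_{\mathrm{veg}}}\,.
```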
Comparison of τ under NSSA and SSA at grid cell level and global scale
The τ values (figure 1) represent the turnover time of the entire forest living vegetation biomass, averaged over the whole period of the observations. The comparison between the estimates of τ_NSSA using three different methods and τ_SSA shows a consistent pattern: carbon turnover processes are far from a steady state at the grid cell level (figure 1), in spite of a high global correlation between the spatial patterns (R > 0.98, bottom off-diagonal plots in figure 1).
Although there are differences in the estimations of τ NSSA that derived from the three methods, a similar pattern emerges which shows the difference between τ NSSA and τ SSA is the largest in the temperate and boreal forest ecosystems whereas the differences are substantially smaller in the tropical forest ecosystems.
Our results show a high spatial variability of τ values ranging from 0 to 15 years. The longest turnover times are located in the northern boreal forest ecosystem, where part of the biome has τ values longer than ten years, whereas carbon in the temperate forest ecosystem turns over much faster, with τ values mostly under five years. The assumption that vegetation biomass is in a steady state results in a maximum overall bias of τ of 10% (90th percentile) compared to the τ estimates under a non-steady state at the grid cell level (figure 1). This finding indicates that the assumption of steady state imposes a maximum deviation in τ of 10% in global forest ecosystems, although the degree of deviation changes from one region to another. The discrepancies between τ_SSA and τ_NSSA are substantially higher in the boreal forest ecosystem (interquartile range of 8.62%) than in the tropical forest ecosystems (interquartile range of 1.86%), indicating that the forests in the tropics are closer to a steady state, whereas assuming SSA in the boreal forest may cause large biases (figure S1). Here we show that the forest biomass at the global scale is roughly in a steady state, whereas the SSA can be locally largely violated at the grid cell level, especially in the northern boreal forest ecosystems where the τ values can be substantially underestimated or overestimated if assuming SSA.
In line with a previous study in which the SSA-induced biases were assessed at the site level (Ge et al 2019), we show that SSA causes substantial underestimations of τ from 5% (median value) up to 40% (99th percentile) in China during the period 2011-2016 (figure S2). However, our results show a high heterogeneity where SSA can also cause overestimation of τ. Further analysis shows that the pattern also changes across different periods of time. For instance, there is a contrasting pattern between 2005-2010 and 2011-2016, in which the former is characterized by SSA-induced overestimation of τ in a large part of southwest China, whereas there is a widespread underestimation of τ in the latter.
The effect of large-scale disturbances on carbon turnover times
Disturbances from natural causes or anthropogenic activities can make an ecosystem deviate from a steady state. By estimating carbon turnover times over different periods, we quantified the degree of deviation when disturbances, e.g. deforestation, happened in a forest ecosystem. Figure 2 shows that the pervasive deforestation in the 1990s primarily affected the carbon turnover times in the southeast part of the Amazon, known as the 'arc of deforestation' (hereafter AOD, Durieux et al 2003). Our results clearly show that τ_NSSA is between 5% (25th percentile) and 24% (1st percentile) lower than τ_SSA in the AOD region from 1993 to 1998, indicating that anthropogenic activity (mostly deforestation) accelerated the carbon turnover rates to a large extent. Compared with the AOD, forests in the middle of the Amazon, where there are fewer people and disturbances, are closer to a steady state, as shown by the much smaller difference between τ_NSSA and τ_SSA. Further analysis shows that tree canopy cover (figure 2, row 2) and C_veg (figure 2, row 3) decreased mainly during the same period of 1993-1998, whereas the changes in GPP do not follow the trend in the arc of deforestation. These results indicate that the acceleration of turnover times during this period is directly caused by the large decrease in vegetation biomass, which is intimately associated with a decrease in forest cover in this region. On the other hand, our findings show that the forest ecosystems started to recover during the 1999-2004 period as the vegetation biomass increased by 10% to 20%, in line with the increased tree canopy cover in the AOD region. As a result, the carbon turnover times increased by 10% to 30% during the same period. From 2011 to 2016, the magnitude of changes in τ, C_veg and tree canopy cover significantly decreased, indicating that the forest ecosystems are closer to a steady state due to fewer disturbances. These findings show how turnover times and the steady state of a forest ecosystem can be largely affected by anthropogenic activities.
The impacts of climate and disturbance on carbon turnover times
To identify the potential factors that control the temporal changes of τ_NSSA and quantify the role of each factor, we investigated the link between τ_NSSA and multiple variables that represent different aspects of climate, fire and forest cover change (table 1). By constructing ensemble regression models with bagged decision trees, we take into account nonlinear interactions among different predictors and estimate the importance of individual features. Our results show that the ensemble model can explain a substantial amount of variance globally (median R² = 0.56, figure S3). The analysis of predictor importance shows that climate, including air temperature (mean, maximum and minimum), precipitation, atmospheric and soil moisture deficit and radiation, dominates vast areas in the northern boreal forest, part of the temperate forest in China and the tropical forests in the Amazon, the Congo basin and Indonesia (figure S4(a)). In contrast, we show a dominant role of forest cover change in explaining the inter-annual changes of τ_NSSA in forested regions that are subject to intensive anthropogenic land use/land cover change and deforestation, such as the AOD region (figure S4(b)). To better understand the role of forest cover change or disturbance on τ_NSSA, Shapley values are calculated on top of the ensemble regression model to estimate the contribution of individual features to model predictions (see Methods). We show that the change of forest cover has a large effect on the inter-annual changes of carbon turnover: an increase of forest cover leads to longer turnover times, whereas a decrease in forest cover leads to faster turnover of carbon (figure 3). This result is consistent with our previous analysis, which showed strongly co-varying spatial patterns between turnover times and forest cover change (figure 2). The comparison of the Shapley values among all features indicates that forest cover change, which is an indicator of disturbance, is the main contributor to the changes of τ_NSSA in the AOD region, as the magnitude of the Shapley value of forest cover is two to three times larger than that of the second-ranked feature. Globally, we find a similar effect of forest cover change in other forested regions (figure S5). These results show that, in addition to climate change, naturally caused or anthropogenically induced disturbance has a high impact on the carbon turnover of vegetation in forested ecosystems.
The effect of spatial scale on the steady-state assumption
We further investigate the effect of spatial scale on the difference between τ_NSSA and τ_SSA in different biomes by gradually changing the spatial scale from 0.25° (grid cell level) to 25° (continental scale) via spatial aggregation, as shown in figure 4. Here the difference between τ_NSSA and τ_SSA at each spatial scale is quantified by the 10th, 25th, 75th and 90th percentiles of the relative difference between τ_NSSA and τ_SSA (P10, P25, P75, P90, figure 4). We find that the difference between τ_NSSA and τ_SSA substantially decreases with increasing spatial scale. The P10 and P90 of tropical forests decrease by approximately 5%, whereas they decrease by approximately 10% in temperate and boreal biomes when the spatial scale increases from the grid cell to the ecosystem scale. Globally, the difference between τ_NSSA and τ_SSA is approximately 3% at the ecosystem scale, indicating that SSA will cause smaller errors in estimating carbon turnover times at larger spatial scales.
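The scale analysis can be reproduced schematically by block-averaging C_veg, ΔC_veg and GPP before computing the two estimators and then comparing their relative difference; the simple block-mean aggregation and the placeholder fields below are simplifications of the actual spatial aggregation used in the study.

```python
import numpy as np

def coarsen(field, factor):
    """Block-average a 2-D field by an integer factor (assumes divisible shape)."""
    ny, nx = field.shape
    return field.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

def relative_difference(c_veg, dc_veg, gpp, factor=1):
    """Relative difference (%) between tau_NSSA and tau_SSA at a given scale."""
    c, d, g = (coarsen(f, factor) for f in (c_veg, dc_veg, gpp))
    tau_ssa = c / g
    tau_nssa = c / (g - d)
    return 100.0 * (tau_nssa - tau_ssa) / tau_ssa

# Placeholder 0.25-degree fields on a 40 x 40 block; factor=8 mimics 2-degree cells
rng = np.random.default_rng(1)
c_veg = rng.uniform(2, 10, (40, 40))
dc_veg = rng.normal(0, 0.2, (40, 40))
gpp = rng.uniform(1, 3, (40, 40))
for factor in (1, 2, 8):
    diff = relative_difference(c_veg, dc_veg, gpp, factor)
    print(factor, round(np.percentile(np.abs(diff), 90), 1))
```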
Discussion
Our findings imply that the two different assumptions, i.e. SSA and NSSA, should be applied based on different ecological principles and spatial scales. The common approach of defining τ as the ratio between carbon stock and carbon influx based on SSA can be justified and properly applied when the changes in net carbon flux are negligible relative to the total carbon stock (Carvalhais et al 2014). Although disturbances from nature or human beings could cause non-steady-state behavior, neglecting the changes in some cases makes little difference to the quantification of the spatial pattern of τ, which does not hamper the understanding of the dynamics of the terrestrial ecosystem carbon cycle. However, at the grid cell level, neglecting the changes in vegetation carbon (assuming vegetation is in a steady state) may result in locally large biases. Using three methods, we provide robust estimations of τ under a non-steady state. The comparisons between τ_SSA and τ_NSSA show high heterogeneity in both space and time. A previous study (Ge et al 2019) showed large SSA-induced biases in τ estimation in varied ecosystems of China by using data at ten FLUXNET sites from 2005 to 2015, which is consistent with our results. However, we further show that the magnitude and the sign of the SSA-induced biases are characterized by high spatial heterogeneity and can change in time. This is mainly caused by the changes in vegetation biomass due to climate change or disturbances (figure 2).
By using ensemble regression models with multiple features that represent different perspectives of disturbance and climate variability, we identified the dominant role of local disturbance, represented by forest cover change, in the major forest ecosystems (figure S4(b)). We found similar patterns in these ecosystems, where the impact of disturbance on vegetation carbon turnover is two to three times higher in magnitude than that of the climate factors (figure 3). For instance, we quantified the role of forest cover change in the arc of deforestation in the Amazon forest, where well-documented anthropogenically induced disturbance caused large changes in the biomass of the rainforest. We show that the effect of a large decrease in forest cover can significantly accelerate carbon turnover (decrease in τ). Interestingly, the effect of an increase in forest cover, probably due to recovery of the forest from disturbance, has a high impact on turnover as well, but in the opposite direction, i.e. carbon turnover slows down during recovery.
We have shown substantial heterogeneity in the degree of validity of the steady-state assumption across space. The comparison between τ SSA and τ NSSA quantitatively shows that most global forest ecosystems are not far from steady state, while the largest differences occur in the temperate and boreal forests. However, disturbances can drive forest ecosystems away from steady state. Our results show that the arc of deforestation in the Amazon forest has large differences between τ SSA and τ NSSA, caused by drastic changes in vegetation biomass (figure 2). These results indicate that applying the SSA at the grid cell level can cause substantial bias, potentially leading to locally misleading conclusions based on poor estimation of carbon turnover times.
Furthermore, our study quantified the link between spatial scale and the validity of the SSA. Our results imply that the SSA is approximately valid at large spatial scales (>15° or 1500 km), at which the differences are much lower (∼5%) than at the grid cell level. The current understanding of the temporal dynamics of the terrestrial carbon cycle relies almost entirely on Earth system models, whose carbon pools and turnover rates show large discrepancies among different models (Todd-Brown et al 2013, Friend et al 2014). The estimation of τ under the NSSA with observational long-term biomass data provides insights into better understanding, and thus modeling, turnover rates and their spatial patterns.
Conclusions
Our findings suggest that the SSA is robust at a global scale yet becomes much less realistic locally at the grid cell level, as the difference between local τ SSA and τ NSSA can be as large as 20% in some regions. The usage of the SSA would result in a substantial bias of τ, especially in regions with a high degree of disturbance, either from anthropogenic activities or natural causes. We found that the impact of disturbance on vegetation carbon turnover can be two to three times larger than that of climate. However, at larger spatial scales, the differences between τ estimates under SSA and NSSA decrease significantly because the annual changes in vegetation biomass are small compared with the total amount of biomass. With the novel long-term observations of vegetation biomass, we revealed a detailed picture of the spatial distribution of carbon turnover times under different assumptions and its relationship with spatial scale, which can guide the proper application of the two assumptions under different conditions.
Figure 1 .
Figure 1. Comparison of τ under SSA and NSSA using different methods. The upper off-diagonal subplots show the relative difference between each pair of datasets (column/row). The bottom off-diagonal subplots show the density plots and major axis regression line between each pair of datasets (m: slope, b: intercept, r: correlation coefficient). The ranges of both color bars are between the 1st and the 99th percentiles of the data.
Figure 3 .
Figure 3. Estimated Shapley values in the AOD region show the contribution of individual features to the model output. The colorbar shows the scaled values of the explanatory variables. Red/blue represents a positive/negative effect of a feature. A higher/lower Shapley value of an individual feature indicates a higher/lower contribution to the changes in the model output.
Figure 4 .
Figure 4. Effects of spatial scale on the difference between τ SSA and τ NSSA (Method 1). The x-axis represents the increase of spatial scale from the grid cell level (0.25°) to the continental level (25°). The y-axis represents the absolute value of the 10th, 25th, 75th and 90th percentiles of the relative difference (%) between τ NSSA and τ SSA.
Table 1 .
Features that are used in the ensemble regression. | 5,633.6 | 2023-09-19T00:00:00.000 | [
"Environmental Science",
"Mathematics"
] |
Structure of the costume texture thickness investigation
The costume fabric is woven on the basis of tandoor sarja (twill) weaves. Such structures are body-surface fabrics. When a suit sewn from a body-surface fabric is washed and worn, only the body coverings are eroded and thinned, eventually resulting in rupture of the fabric. This article presents the results of research on improving the quality, in particular increasing the abrasion resistance, of cotton fabrics such as costume cloth. It is noted that the abrasion resistance of the fabric depends on the indicators of its structure, that is, on the degree of mutual bending of the warp and weft threads or, equivalently, on the supporting (base) surface of the fabric.
Introduction
The physical and mechanical properties of the fabrics intended for the suit are also subject to the requirements of their field of application [1]. The structure of the tissue is understood as the relative position of the body and back threads and their interdependence. The physical and mechanical properties of a fabric often depend on its weave, which affects the phase of placement of the threads in it during the weave formation, which determines the service life of the fabric, i.e. its abrasion resistance [2]. The abrasion resistance of the fabric, air permeability depends on its structural characteristics, i.e. the degree of mutual bending and density of the body and back yarns. This degree of mutual bending is determined by the area of a particular part of the body and the back yarn that can be approached by any surface, and this area is the base surface of the fabric [3]. The flattening of the base surface opens the porosity between the joints of the body, the backing threads and creates conditions for the passage of air. Therefore, in the production of fabrics for the suit, attention should be paid to the evaluation of its surface [4].
Factors related to tissue formation affect the initial parameters of tissue structure. For example, the thickness of the fabric depends on the linear densities of the body and back yarns that make it up [5]. These include factors such as the structure phase of the tissue, the shrinkage of the body and back yarns, the coefficients of filling, bonding and coating, and the base surface [6]. All of the above factors determine the structure of the tissue and the location of the threads in it.
Erosion resistance increased from 17,000 to 21,000 cycles when the phase order of the tissue structure changed from 3.5 to 6.2 and the amount of base surfaces changed from 16% to 1.8% [7].
Fabrics for a suit primarily depend on its service life and their resistance to abrasion and abrasion [8]. The abrasion of the fabric depends on the unevenness of the yarn, which is assessed by the base surface of the woven fabric, i.e. the points of exit of the yarn to the fabric surface. To increase the base surface, it is necessary to change the parameters set on the loom [9].
The base surface of the fabric depends on the mutual bending heights of the body and back yarns, and the points of the yarns protruding from the surface of the fabric that touch a particular surface are called the base surface [10]. The costume fabric is woven on the basis of tandoor sarja braids. Such textures are superficial in the body. When washing a suit sewn from the surface tissue of the body, in the process of using it only the body coverings are eroded and thinned, resulting in tissue rupture [11,12]. The suit remains unusable even if the back coverings remain intact. Therefore, on the surface of the body surface tissue is only the output of the body coating; its resistance to abrasion is low.
Materials and Methods
Costume textures are divided into three types according to their surface character: body-surface, back-surface and equal-surface textures. Using the method of S.A. Khamraeva [11], the base surface of the suit fabric was determined. In view of the above, the equations for determining the thickness were derived and the design of the suit fabric structure with respect to its thickness was improved (Fig. 1).
Fig. 1. Body and back surface tissue
If the tissue has a back surface, its thickness is determined by Equation (1); if the body is superficial, by Equation (2); and for an equal-surface fabric, by Equation (3). The base surface of the fabric is assessed by the wave heights of the body and back yarns located in it.
Results and Discussion
The bending wave heights of the body and back yarns are as follows (4, 5): where, L t is the length of the body thread extracted and flattened from the tissue sample, mm; L t is the size of the tissue sample, mm; R a is the density of the tissue along the back, yarn/mm; L a is the distance between the back threads in a single element fabric, mm; f t is the distance between the selected yarns in a single element fabric, mm; L a is the length of the backing thread pulled and flattened from the tissue sample, mm; and R t is the density of yarns in the fabric, yarn/mm.
The calculated diameters of the threads in the body (d t ) and back (d a ) before weaving are determined by Equation (6), where T t and T a are the linear densities of the tan and back yarns.
During the tissue formation process, the warp (body) and back yarns are compressed and the thickness of the tissue is reduced; this is accounted for through the approximate diameter (d I t ) of the compressed body yarn and the approximate diameter (d I a ) of the compressed back yarn. The compression ratio (K) can be determined from Equation (7). The compression coefficient of a yarn depends on its composition, diameter, the bending height of the yarns in the fabric, and the density of the fabric. Costume fabric is used in various kinds of service. If we consider the special suits of service personnel, for them the suit is the main garment, and high requirements are placed on the appearance and properties of the suit. The parameter that reflects this is the base surface of the tissue.
The abrasion resistance of the fabric depends on the exit points of the yarns to the fabric surface, i.e. the base surface. The base surface is measured by the thickness of the tissue. The design is shown in terms of the thickness of a single-layer fabric with sarja 3/1 braid ( Fig. 1 and Fig. 2). The surface smoothness of the produced tissue was increased (Fig. 2).
To increase the base surface of the fabric, first of all it is necessary to know which direction to change the direction of the body or back, and secondly to change the parameters of the weaving machine accordingly, which allows to improve the base surface of the fabric. By determining the thickness of the tissue, it is possible to achieve equal surface tissue production in a short time. The results of theoretical research can be used to control the thickness of the tissue and, through it, to change the base surface as well.
The design of the thickness of the fabric showed that the suit fabric produced by ARK ECO TEXTIL was found to be superficial. The shortening of the threads in the fabric plays a special role in the design of the fabric. Therefore, when designing a suit fabric, it is advisable to design on the shortening of the threads in it.
In the textile industry, as in any other industry, the modernization and improvement of technological processes allows to achieve high efficiency. Computer-aided design of a particular fabric on looms allows creating new assortments and meeting customer requirements.
Once we have implemented the program, it is possible to create similar images and print them out on a printer (see sample cropping picture in Fig. 3). Work is underway to improve this program. The application, created for intricate jacquard braided fabrics and in fabric production using an image developed by designers, has high efficiency. The production of a similar program will increase the quality of textile products and increase research in this area.
When designing fabrics for a suit, it is necessary to pay attention to the parameters that determine the quality of the fabric (depending on consumer demand): elasticity, strength, air permeability, abrasion resistance, and retention of shape after washing and cutting. For the design of suit fabrics for the autumn and winter seasons that can meet the requirements of consumers, it was recommended to weave a two-layer fabric on the basis of Sarja 4/1 (Fig. 3).
Conclusions
Fabrics for the suit are based on Sarja weaving, which is produced in many textile enterprises around the world. On the surface of a sarja tissue, the body and back coverings form a diagonal; on the right side of the tissue this diagonal is usually oriented from the bottom left upward to the right. It is shown that the physical and mechanical properties of the fabric depend on the smoothness of the base surface, its shear, density and the smoothness of the yarns.
From the cuts in Fig. 3, it can be concluded that the A and C variant cuts are the ones suited to designing the fabric for the suit. The bonds in the fabric designed in this way are strong and the dimensions of the yarns are preserved after washing, which prevents the yarns from slipping. The results of the design of the suit fabric with respect to its thickness, for the fabric produced by ARK ECO TEXTIL, confirmed that the fabric is body-surfaced.
"Materials Science"
] |
Special Issue on Ensemble Learning and Applications
: During the last decades, in the area of machine learning and data mining, the development of ensemble methods has gained significant attention from the scientific community. Machine learning ensemble methods combine multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone. Combining multiple learning models has been theoretically and experimentally shown to provide significantly better performance than their single base learners. In the literature, ensemble learning algorithms constitute a dominant and state-of-the-art approach for obtaining maximum performance; thus they have been applied in a variety of real-world problems, ranging from face and emotion recognition through text classification and medical diagnosis to financial forecasting.
Introduction
This article is the editorial of the "Ensemble Learning and Their Applications"(https://www.mdpi. com/journal/algorithms/special_issues/Ensemble_Algorithms) Special Issue of the Algorithms journal. The main aim of this Special Issue is to present the recent advances related to all kinds of ensemble learning algorithms, frameworks, methodologies and investigate the impact of their application in a diversity of real-world problems. The response of the scientific community has been significant, as many original research papers have been submitted for consideration. In total, eight (8) papers were accepted, after going through a careful peer-review process based on quality and novelty criteria. All accepted papers possess significant elements of novelty, cover a diversity of application domains and introduce interesting ensemble-based approaches, which provide readers with a glimpse of the state-of-the-art research in the domain.
During the last decades, the development of ensemble learning methodologies and techniques has gained significant attention from the scientific and industrial community [1][2][3]. The basic idea behind these methods is the combination of a set of diverse prediction models to obtain a composite global model which produces reliable and accurate estimates or predictions. Theoretical and experimental evidence has shown that ensemble models provide considerably better prediction performance than single models [4]. Along this line, a variety of ensemble learning algorithms and techniques have been proposed and have found application in various classification and regression real-world problems.
Ensemble Learning and Applications
The first paper is entitled "A Weighted Voting Ensemble Self-Labeled Algorithm for the Detection of Lung Abnormalities from X-Rays" and it is authored by Livieris et al. [5]. The authors presented a new ensemble-based semi-supervised learning algorithm for the classification of lung abnormalities from chest X-rays. The proposed algorithm exploits a new weighted voting scheme which assigns a vector of weights on each component learner of the ensemble based on its accuracy on each class. The proposed algorithm was extensively evaluated on three famous real-world benchmarks, namely the Pneumonia chest X-rays dataset from Guangzhou Women and Children's Medical Center, the Tuberculosis dataset from Shenzhen Hospital and the cancer CT-medical images dataset. The presented numerical experiments showed the efficiency of the proposed ensemble methodology against simple voting strategy and other traditional semi-supervised methods.
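To make the idea of a per-class weighted voting scheme concrete, a minimal sketch (our simplified reading of the general idea, not the implementation of [5]) is given below; each base learner receives one weight per class, set from its validation accuracy on that class, and the weights scale the votes at prediction time.

```python
import numpy as np

def per_class_weights(learners, X_val, y_val, n_classes):
    """Weight of learner i on class c = accuracy of learner i on validation samples of class c."""
    weights = np.zeros((len(learners), n_classes))
    for i, clf in enumerate(learners):
        pred = clf.predict(X_val)
        for c in range(n_classes):
            mask = (y_val == c)
            weights[i, c] = (pred[mask] == c).mean() if mask.any() else 0.0
    return weights

def weighted_vote(learners, weights, X, n_classes):
    """Each learner adds its class-specific weight to the class it predicts; argmax wins."""
    scores = np.zeros((len(X), n_classes))
    for i, clf in enumerate(learners):
        pred = clf.predict(X)
        for c in range(n_classes):
            scores[pred == c, c] += weights[i, c]
    return scores.argmax(axis=1)

# usage sketch (hypothetical data splits X_tr/y_tr, X_val/y_val, X_te):
# from sklearn.tree import DecisionTreeClassifier
# learners = [DecisionTreeClassifier(max_depth=d).fit(X_tr, y_tr) for d in (2, 4, 8)]
# W = per_class_weights(learners, X_val, y_val, n_classes=3)
# y_pred = weighted_vote(learners, W, X_te, n_classes=3)
```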
The second paper is authored by Papageorgiou et al. [6] entitled "Exploring an Ensemble of Methods that Combines Fuzzy Cognitive Maps and Neural Networks in Solving the Time Series Prediction Problem of Gas Consumption in Greece". This paper presents an innovative ensemble time-series forecasting model for the prediction of gas consumption demand in Greece. The model is based on an ensemble learning technique which exploits evolutionary Fuzzy Cognitive Maps (FCMs), Artificial Neural Networks (ANNs) and their hybrid structure, named FCM-ANN, for time-series prediction. The prediction performance of the proposed model was compared against that of the Long Short-Term Memory (LSTM) model on three time-series datasets concerning data from distribution points which compose the natural gas grid of a Greek region. The presented results illustrated empirical evidence that the proposed approach could be effectively utilized to forecast gas consumption demand.
The third paper "A Grey-Box Ensemble Model Exploiting Black-Box Accuracy and White-Box Intrinsic Interpretability" was written by Pintelas et al. [7]. In this interesting study, the authors proposed a new framework for the development of a Grey-Box machine learning model based on the semi-supervised philosophy. The advantages of the proposed model are that it is nearly as accurate as a Black-Box and it is also interpretable like a White-Box model. More specifically, in their proposed framework, a Black-Box model was utilized for enlarging a small initial labeled dataset, adding the model's most confident predictions of a large unlabeled dataset. In the sequel, the augmented dataset was utilized for training a White-Box model which greatly enhances the interpretability and explainability of the final model (ensemble). For evaluating the flexibility as well as the efficiency of the proposed Grey-Box model, the authors used six benchmarks from three real-world application domains, i.e., finance, education, and medicine. Based on their detailed experimental analysis the authors stated that the proposed model reported comparable and sometimes better prediction accuracy compared to that of a Black-Box while being at the same time interpretable as a White-Box model.
The fourth paper was authored by Karlos et al. [8] and is entitled "A Soft-Voting Ensemble Based Co-Training Scheme Using Static Selection for Binary Classification Problems". The authors presented an ensemble-based co-training scheme for binary classification problems. The proposed methodology is based on the imposition of an ensemble classifier as the base learner in the co-training framework. Its structure is determined by a static ensemble selection approach from a pool of candidate learners. Their experimental results on a variety of classical benchmarks, as well as the reported statistical analysis, showed the efficacy and efficiency of their approach.
An interesting research paper entitled "GeoAI: A Model-Agnostic Meta-Ensemble Zero-Shot Learning Method for Hyperspectral Image Analysis and Classification" was authored by Demertzis and Iliadis [9]. In this work, a new classification model was proposed, named MAME-ZsL (Model-Agnostic Meta-Ensemble Zero-shot Learning), which is based on the zero-shot philosophy for geographic object-based scene classification. The attractive advantages of the proposed model are its training stability, its low computational cost, and mostly its remarkable generalization performance through the reduction of potential overfitting. This is achieved by the selection of features which do not cause the gradients to explode or diminish. Additionally, it is worth noticing that the superiority of the MAME-ZsL model lies in the fact that the testing set contained instances whose classes were not contained in the training set. The effectiveness of the proposed architecture was demonstrated against state-of-the-art fully supervised deep learning models on two datasets containing images from a reflective optics system imaging spectrometer.
Zvarevashe and Olugbara [10] presented a research paper entitled "Ensemble Learning of Hybrid Acoustic Features for Speech Emotion Recognition". Signal processing and machine learning methods are widely utilized for recognizing human emotions based on extracted features from video files, facial images or speech signals. The authors studied the problem that many classification models were not able to efficiently recognize fear emotion with the same level of accuracy as other emotions. To address this problem, they proposed an elegant methodology for improving the precision of fear and other emotions recognition from speech signals, based on an interesting feature extraction technique. In more detail, their framework extracts highly discriminating speech emotion feature representations from multiple sources which are subsequently agglutinated to form a new set of hybrid acoustic features. The authors conducted a series of experiments on two public databases using a variety of state-of-the-art ensemble classifiers. The presented analysis which reported the efficiency of their approach, provided evidence that the utilization of the new features increased the generalization ability of all ensemble classifiers.
The seventh paper, entitled "Ensemble Deep Learning for Multilabel Binary Classification of User-Generated Content", is authored by Haralabopoulos et al. [11]. The authors presented a multilabel ensemble model for emotion classification which exploits a new weighted voting strategy based on differential evolution. Additionally, the proposed model used deep learning learners composed of convolutional and pooling layers as well as LSTM layers, which are well suited to such classification problems. To demonstrate the efficiency of their model, they conducted a performance evaluation, on two large and widely used datasets, against state-of-the-art single models and ensemble models comprised of the same base learners. The reported numerical experiments showed that the proposed model presented improved classification performance, outperforming the state-of-the-art compared models.
Finally, the eighth paper, "Ensemble Deep Learning Models for Forecasting Cryptocurrency Time-Series", was authored by Livieris et al. [12]. The main contribution of this research is the combination of three of the most widely employed ensemble strategies, ensemble-averaging, bagging and stacking, with advanced deep learning methodologies for forecasting the hourly prices of Bitcoin, Ethereum and Ripple. More analytically, the ensemble models utilized state-of-the-art deep learning models as component learners, composed of combinations of LSTM, Bi-directional LSTM and convolutional layers. The authors conducted an exhaustive experimentation in which the performance of all ensemble deep learning models was compared on both regression and classification problems. The models were evaluated on forecasting the cryptocurrency price for the next hour (regression) and on predicting the directional movement of the next price (classification) with respect to the current price. Furthermore, the reliability of all ensemble models as well as the efficiency of their predictions was studied by examining the autocorrelation of the errors. The detailed numerical analysis indicated that ensemble learning strategies and deep learning techniques can efficiently complement each other and yield accurate and reliable cryptocurrency forecasting models.
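For readers unfamiliar with the two of these strategies that combine already-trained learners, a minimal sketch follows (toy data and simple scikit-learn regressors as stand-ins for the paper's deep learning component learners; this is not the authors' code):

```python
import numpy as np
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 8))
y = 0.5 * X[:, 0] + np.sin(X[:, 1]) + 0.1 * rng.normal(size=500)

base = [("ridge", Ridge()),
        ("knn", KNeighborsRegressor()),
        ("tree", DecisionTreeRegressor(max_depth=5))]

# ensemble-averaging: simple mean of the individual model predictions
avg_pred = np.mean([m.fit(X, y).predict(X) for _, m in base], axis=0)

# stacking: a meta-learner trained on the base models' out-of-fold predictions
stack = StackingRegressor(estimators=base, final_estimator=Ridge(), cv=5).fit(X, y)

print("averaging in-sample R^2 ~", 1 - np.mean((y - avg_pred) ** 2) / np.var(y))
print("stacking in-sample R^2  ~", stack.score(X, y))
```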
We would like to thank the Editor-in-Chief and the editorial office of the Algorithms journal for their support and for trusting us with the privilege to edit a special issue in this high-quality journal.
Conclusions and Future Approaches
The motivation behind this Special Issue was to make a minor and timely contribution to the existing literature. It is hoped that the novel approaches presented in this Special Issue will be found interesting, constructive and appreciated by the international scientific community. It is also expected that they will inspire further research on innovative ensemble strategies and applications in various multidisciplinary domains. Future approaches may involve exploiting ensemble learning for improving prediction accuracy, machine learning explainability and enhancing model's reliability. | 2,319.8 | 2020-06-11T00:00:00.000 | [
"Computer Science"
] |
A case study of detecting the triplet of 3S1 using superconducting gravimeter records with an alternative data preprocessing technique
Due to their very low noise levels in the low-frequency band (<1 mHz), superconducting gravimeters (SGs) are particularly suitable for observing the long-period free oscillations of the Earth. This case study is dedicated to the detection of the triplet of the seismic normal mode 3S1 that was excited by the December 26, 2004, Sumatra-Andaman earthquake (Mw = 9.3). Using SG records, the Hilbert-Huang transformation is used as an alternative data-preprocessing technique, instead of the traditional detiding method. After removal of atmospheric pressure effects from the original SG records, we applied the Hilbert-Huang transformation to the SG residues to select the signals within the frequency band of interest and to construct a new data series. Then, by applying the multi-station experiment technique to five 273-h-long common new data series recorded at different SG stations, we clearly observed all three singlets of the mode 3S1, with the central singlet more evident than in previous studies. Observations of the low-frequency modes nSl (n = 0, 1, 2, ...; l = 1, 2, ...) provide constraints on the inner and outer core structure. This case study provides an alternative data-preprocessing approach for observing the splitting frequencies of the low-frequency modes of type nS1 (n = 0, 1, 2, ...).
Introduction
After a large earthquake, various seismic normal modes (free oscillations) of the Earth are excited for several days to months (e.g., Alterman et al. 1974, Widmer-Schnidrig 2003). The typical length of time that a seismic normal mode lasts depends on its quality factor Q (which is inversely proportional to the ratio of the energy loss per cycle to the peak strain energy), whereby the larger the Q, the longer the vibration of the normal mode (e.g., Aki and Richards 1980, Chao and Gilbert 1980). The frequencies of the modes are closely related to the structure of the Earth. For instance, the triplet of the seismic normal mode 3S1 (generally the normal modes below 3 mHz) is sensitive to the density, compressibility, and velocity of P-waves in the mantle and core [e.g., Ritzwoller and Lavely 1995, Resovsky and Ritzwoller 1998; see also http://phys-geophys.colorado.edu/geophysics/nm.dir/3s/3s1.kerplot.gif].
Theoretically, if the Earth is taken as an idealized spherically symmetric, non-rotating, purely elastic, isotropic body, modes nSlm with the same n and l have the same eigenfrequency, which is referred to as degeneracy [Dahlen and Tromp 1998]. Here n is the radial overtone number, l the angular degree, and m the azimuthal order. However, for the real Earth, its rotation, ellipticity and lateral heterogeneity remove the degeneracy, which results in the appearance of split peaks [e.g., Alterman et al. 1974, Dahlen and Sailor 1979, Rogister 2003]. The splitting of the modes below 1 mHz is highly sensitive to the three-dimensional density structure of the Earth's mantle and core [e.g., Okal 1978, Ritzwoller and Lavely 1995, Widmer-Schnidrig 2003], and therefore, observations of the splitting of modes below 1 mHz might help to constrain Earth models.
Due to the Global Geodynamics Project (GGP) [e.g., Crossley et al. 1999, Crossley and Hinderer 2008], superconducting gravimeters (SGs) are extremely sensitive to gravity variations that are related to various geophysical processes (e.g., tides, inner core wobble, tectonic activity, Earth free oscillations). They also have very low noise levels in the low-frequency band, and thus they are particularly suitable for the observation of long-period signals of interest.
Previous studies [e.g., Rosat et al. 2003a, 2004; Hu et al. 2006a, b; Xu and Sun 2009] have shown that for frequencies below 1 mHz, SGs can reach a better signal-to-noise ratio than most broadband seismometers. Hence, SGs have specific advantages for the study of the gravest normal modes.
The 3S1 multiplet was first observed by Chao and Gilbert [1980], using seven records from the International Deployment of Accelerometers spring gravimeter stations after the 1977 Indonesia earthquake (Mw = 7.8), and then by Roult et al. (2010) using 247 recordings at 157 Federation of Digital Seismograph Networks broadband seismometer stations after the 2004 Sumatra earthquake (Mw = 9.3). However, following a closer examination of the spectra shown in Figure 10 of Roult et al. (2010), we find that the middle spectrum line (m = 0) of the mode 3S1 is very weak and cannot be easily observed. Here, it will be shown that by combining only five SG records at three stations after the 2004 Sumatra-Andaman earthquake with an alternative analysis technique, instead of the traditional detiding process for data preprocessing, we can clearly resolve all three singlets of the seismic normal mode 3S1.
Methods
The preprocessing of the SG raw data generally involves tides removal and air correction (necessary for better observation of low-frequency modes below 1 mHz), in addition to correcting for instrumental errors, such as spikes, gaps, and abrupt offsets. The detiding process is usually done by subtracting synthetic tides computed from certain tide models, although removing the tides by this method is not very thorough, as tidal effects on the records can be different for individual places, and the tidal factors used to compute synthetic tides are just average values obtained over several years. Alternatively, the SG data can be bandpassed to a certain frequency range of interest, although it must be noted that the presence of the Gibbs phenomenon and marginal effects might result in some undesired effects in the original observations. While wavelet-based detection of weak signals [Hu et al. 2006a, b, Rosat et al. 2007] has great potential, the uniform frequency resolution will show inferior resolution in time and frequency compared to the Hilbert analysis representation [see Figure 1 of Huang et al. 1999]. Herein, the Hilbert-Huang transformation (HHT) analysis is used to locate our desired frequency band directly, which will generate a new data series that is naturally free from the influences of long-period signals, e.g., tidal effects and some other possible influences. To enhance each singlet peak, the multi-station experiment (MSE) technique [Cummins et al. 1991] will be applied. The following two paragraphs are short descriptions of these two techniques.
The HHT analysis technique was first proposed by Huang et al. [1998] to decompose a complicated data series into a finite, and often small, number of intrinsic mode functions (IMFs) based on empirical mode decomposition (EMD) (i.e., the sifting process) [see details in Huang et al. 1998]. The essence of the method is to empirically identify the intrinsic oscillatory modes by their characteristic time scales in the data, and then to decompose the data accordingly. When the first IMF is obtained, the sifting process is repeated to find the next IMF in the residue. If the last residue becomes a monotonic function from which no more IMFs can be extracted, the sifting process stops. By virtue of the definition of an IMF, the earlier sifted IMFs (signals) generally have higher frequencies. Figure 1 illustrates the method with synthetic data. Figure 1a shows the extracted IMFs based on EMD, and the corresponding instantaneous frequency of each IMF is shown in Figure 1b. Using these, by zooming in (horizontally or vertically), we can roughly determine the frequency range of each IMF. By this criterion, concentrating on one or a few frequency bands of interest, we can select certain IMFs for further study. Figure 1c shows the corresponding Fourier amplitude spectrum of each IMF in Figure 1a, which is a verification of our judgment from Figure 1b.
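A minimal sketch of this preprocessing idea (not the authors' code; it assumes the third-party PyEMD package for the empirical mode decomposition and uses a short toy record in place of real SG residuals) is:

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD   # assumed third-party package providing empirical mode decomposition

dt = 60.0                                  # 1-minute SG samples, in seconds
t = np.arange(0, 48 * 3600, dt)            # short toy record (48 h) for illustration only
# toy residual: a ~0.944 mHz decaying "mode" on top of a long-period component and noise
x = (np.sin(2 * np.pi * 0.944e-3 * t) * np.exp(-t / 2e5)
     + 5.0 * np.sin(2 * np.pi * 2.2e-5 * t)
     + 0.3 * np.random.randn(t.size))

imfs = EMD()(x)                            # sifting; earlier IMFs carry higher frequencies

def median_inst_freq(imf):
    """Median instantaneous frequency of one IMF via the Hilbert transform, in Hz."""
    phase = np.unwrap(np.angle(hilbert(imf)))
    return np.median(np.diff(phase)) / (2 * np.pi * dt)

for k, imf in enumerate(imfs, start=1):
    print(f"IMF{k}: median instantaneous frequency ~ {median_inst_freq(imf) * 1e3:.3f} mHz")

# keep only the IMFs whose instantaneous frequency falls in the band of interest (~0.9 mHz)
selected = [imf for imf in imfs if 0.5e-3 < median_inst_freq(imf) < 1.5e-3]
new_series = np.sum(selected, axis=0) if selected else np.zeros_like(x)
```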
The MSE technique was first proposed by Cummins et al. [1991] and used by Courtier et al. [2000] to detect the triplet of the Slichter mode 1S1 [Slichter 1961; see also, e.g., Smylie 1992]. It was also used by Rosat et al. [2003b] to detect the mode 2S1 and by Rosat et al. [2006] and Guo et al. [2006] to search for the 1S1 triplet. This method takes into account the temporal and spatial properties of degree-one spheroidal modes to generate three new time series, each of which contains only one of the prograde equatorial (m = -1), retrograde equatorial (m = 1) and axial (m = 0) signals. Spectral analysis is then applied to every time series. In the present study, the MSE technique is used to effectively isolate and enhance the three singlets that correspond to the mode nS1m (m = -1, 0, 1).
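The following is a plausible, heavily simplified sketch of this idea (our reading, not the authors' code or the exact formulation of Cummins et al. [1991]): the signal at station i is modelled as a combination of the three degree-one surface patterns evaluated at the station coordinates, and a per-sample least-squares fit separates the three contributions; station coordinates and the mapping of the solution rows to the prograde, axial and retrograde singlets are illustrative assumptions, and a full MSE additionally exploits the temporal phase.

```python
import numpy as np
from scipy.special import sph_harm

def design_matrix(colat_rad, lon_rad):
    """Real-valued degree-1 design matrix; columns are the three Y_1^m surface patterns."""
    y10 = sph_harm(0, 1, lon_rad, colat_rad).real
    y11 = sph_harm(1, 1, lon_rad, colat_rad)
    return np.column_stack([np.sqrt(2) * y11.imag, y10, np.sqrt(2) * y11.real])

# station (colatitude, longitude) in degrees -- illustrative values only
stations = np.deg2rad([[125.3, 149.0],    # Canberra
                       [39.8, 8.6],       # Bad Homburg
                       [122.4, 20.8]])    # Sutherland
records = np.random.randn(3, 1000)        # placeholder for the SG residual series

G = design_matrix(stations[:, 0], stations[:, 1])
# solve G @ c(t) = d(t) for every time sample; rows of C are the three new time series
C, *_ = np.linalg.lstsq(G, records, rcond=None)
series_m_minus1, series_m0, series_m_plus1 = C   # schematic assignment to the singlets
```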
Data and data analysis
The datasets are corrected minute data (gaps and disturbances filled with synthetic signals by the station operator after decimation to 1 min), downloaded from the GGP data centre (http://www.eas.slu.edu/GGP/ggphome.html), and corrected for the local atmospheric pressure effect using a nominal constant admittance of -3 nm s-2/hPa. The air pressure correction is necessary to better detect frequencies below 1 mHz [Zürn and Widmer 1995]. As a case study, five records from three SG stations, Canberra (C1), Bad Homburg (H1, H2) and Sutherland (S1, S2), are used in the present study. The length of the records is 273 h, about one Q-cycle, the length suggested by Dahlen [1982] for better frequency estimation of a certain mode.
After correction of the local atmospheric pressure effect, the gravity residues are decomposed into a series of IMFs, as shown in Figure 2a, based on the EMD [Huang et al. 1998]. After applying the Hilbert transformation to every IMF and examining the variation of the instantaneous frequencies of the IMFs with time (see Figure 2b), we chose the IMFs that included the signal of the mode 3S1. In the present study, we chose the first three IMFs of all of the records and accordingly constructed five common new data series which included the signal of the mode 3S1; these are further used to detect the 3S1 triplet based on the MSE technique. As the typical frequency of 3S1 is very close to that of 1S3, and as it also obeys certain selection rules [Alterman et al. 1974], the splitting frequencies of 3S1 might be seriously contaminated by those of 1S3, because the typical frequencies (m = 0) of 1S3 and 3S1 are 0.944139 mHz and 0.944364 mHz, respectively [Resovsky and Ritzwoller 1998]. However, as the quality factor Q of 3S1 (826.9) is almost three times that of 1S3 (282.7) [Dziewonski and Anderson 1981], we can consider the Earth as a filter and use records with a later starting point after the earthquake to weaken the influence of 1S3. As shown in Figure 3, we find that about 56 h after the Sumatra-Andaman event, the amplitude of 1S3 is largely reduced compared to that five hours after the event. Indeed, based on the decay law A(t) = A(t0) exp[-π f (t - t0)/Q] [Aki and Richards 1980], 56 h after the event the amplitudes of 1S3 and 3S1 decay to 14.7% and 51.8%, respectively, of those 5 h after the event.
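The quoted decay figures can be reproduced directly from the decay law and the mode parameters given above (a quick check of ours, not part of the original study):

```python
import numpy as np

dt = (56 - 5) * 3600.0                       # seconds between the two record start times
for name, f_mhz, Q in [("1S3", 0.944139, 282.7), ("3S1", 0.944364, 826.9)]:
    ratio = np.exp(-np.pi * f_mhz * 1e-3 * dt / Q)   # A(t)/A(t0) = exp(-pi*f*(t-t0)/Q)
    print(f"{name}: amplitude reduced to {100 * ratio:.1f}%")
# -> about 14.7% for 1S3 and 51.8% for 3S1, as stated in the text.
```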
Results
We applied the MSE technique to the five common new data series (i.e., the new data series constructed by the HHT technique, as indicated above), and we obtained three time series, each of which contains only one of the prograde, retrograde and axial modes; these have been successfully enhanced in the corresponding amplitude spectra, as shown in Figure 4a (m = -1), b (m = 1), and c (m = 0), respectively. Hence, all three singlets of 3S1 have been clearly extracted and are close to the frequencies computed from PREM. By fitting a synthetic Lorentzian resonance function [e.g., Dahlen and Tromp 1998] to each singlet of the spectrum, we obtained the three splitting frequencies, as listed in Table 1. The errors given in the present study are standard deviations of the estimated frequencies obtained by least-squares fitting with five to eight points. The exact number of points used for the estimation of a certain singlet might vary, depending upon how many points around the peak fit the Lorentzian resonance function best.
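A minimal sketch of such a fit (toy spectrum, and an assumed simple amplitude-Lorentzian shape rather than the exact function of Dahlen and Tromp [1998]) is:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, A, f0, hwhm, offset):
    # assumed resonance shape: amplitude Lorentzian with half-width hwhm plus a flat offset
    return A / (1.0 + ((f - f0) / hwhm) ** 2) + offset

# toy amplitude-spectrum segment around 0.9444 mHz (frequencies in mHz, 8 points around the peak)
f = np.linspace(0.94380, 0.94500, 8)
amp = lorentzian(f, 1.0, 0.94436, 0.00015, 0.05) + 0.01 * np.random.randn(f.size)

p0 = [amp.max(), f[np.argmax(amp)], 0.0002, 0.0]
popt, pcov = curve_fit(lorentzian, f, amp, p0=p0)
f_hat = popt[1]                      # singlet frequency estimate, mHz
f_err = np.sqrt(np.diag(pcov))[1]    # 1-sigma uncertainty from the fit, mHz
print(f"singlet frequency = {f_hat:.6f} mHz  +/- {f_err * 1e6:.2f} nHz")
```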
In Table 1, the theoretical predictions are based on the model PREM-re [Roult et al. 2010] and the model PREM-re+SAW12D [Li and Romanowicz 1996, He and Tromp 1996], respectively. Both models are based on PREM, but the former includes only the Earth's rotation and ellipticity, whereas the latter includes not only the rotation and ellipticity but also the three-dimensional elastic structure heterogeneity of the mantle. Chao and Gilbert [1980] first observed the triplet frequencies of 3S1 by applying the spherical harmonic stacking technique [Buland et al. 1978] to seven spring gravimeter records, and Roult et al. [2010] also observed the triplet frequencies of the mode 3S1 based on 247 seismic records by simply taking the average value of the frequencies corresponding to the mode of interest. We note that the triplet frequencies of the mode 3S1 were observed by only the two above-mentioned studies, to our present knowledge. For instance, although Resovsky and Ritzwoller [1998] observed the frequency of 3S1 (m = 0) using a dataset of more than 4,500 seismograms, they did not provide the frequencies of 3S1 (m = ±1). As a comment, we note that in the study of Roult et al. [2010], the middle spectrum line (m = 0) of the mode 3S1 is very weak. In our study, the middle spectrum line (m = 0) of 3S1 was clearly extracted from the records owing to the combination of the HHT and MSE techniques.
As can be noted from Table 2, the observed 3S1 triplet in the present study is very close to the model PREM-re+SAW12D predictions, except that the central singlet deviates from the theoretical value a little more, as do the results of Chao and Gilbert [1980] and Roult et al. [2010]. The causes for this are not clear. Besides, all of the observed frequencies [given by Chao and Gilbert 1980, Roult et al. 2010, and the present study] corresponding to m = ±1 are higher than the model PREM-re predictions and smaller than the model PREM-re + SAW12D predictions. This implies that the mode 3S1 is very sensitive to the physical parameters of the mantle and core, and the observed triplet of 3S1 might provide significant constraints on the three-dimensional structure of the Earth. We can preliminarily conclude, based on the observations of the 3S1 triplet (by previous studies and the present study, see Tables 1 and 2), that both the models PREM-re and PREM-re + SAW12D might need to be adjusted, as the predictions based on these models deviate from the "most likely real values" (the observations given by different authors) by slightly systematic shifts of about -3.0 nHz and about 3.0 nHz in the frequency-decreasing and frequency-increasing directions, respectively. In addition, from Table 1 we note that the model PREM-re + SAW12D predicts a splitting width of the mode 3S1 of 3.22 nHz, close to the estimate (3.21 nHz) given by He and Tromp (1996), and the observed splitting widths given by Chao and Gilbert [1980], Roult et al. [2010], and the present study are 2.90 nHz, 3.23 nHz, and 3.26 nHz, respectively.
Conclusions
As the mode 3S1 is quite sensitive to the mantle and outer core, the observation of its splitting frequencies can provide significant information on the deep interior of the Earth and consequently can improve the Earth model. The application of the HHT and MSE analysis techniques to five common SG records leads to a clear observation of the triplet frequencies of the mode 3S1. The observed 3S1 triplet frequencies from the SG records at the Canberra (CB), Sutherland (S1, S2) and Bad Homburg (H1, H2) stations are in close agreement with the theoretical predictions provided by the model PREM-re, and especially by the model PREM-re + SAW12D. The present study shows that the approach proposed here (i.e., a combination of the HHT and MSE analysis techniques) is effective for the detection of the splitting frequencies of the mode 3S1, as only five SG records were used; consequently, it can potentially be applied to the detection of the Slichter triplet 1S1m (m = -1, 0, 1), of which the claimed observations are still controversial.
Table 1. Comparison of the observed triplet frequencies of 3S1 from previous studies and the present study with the model predictions. (a As a reference, the observation of the frequency (m = 0) of 3S1 given by Resovsky and Ritzwoller [1998], who did not provide the 3S1 triplet.) Present study: -3.27 × 10^-4, -8.01 × 10^-4, -2.80 × 10^-4.
Figure 1 .
Figure 1. (c) Fourier amplitude spectra for the corresponding IMF1-8, demonstrating that each IMF in Figure 1a indeed contains certain frequencies, as can be seen from Figure 1b. Therefore certain IMFs can be selected to study the signal of interest.
Figure 2 .
Figure 2. The Hilbert-Huang transformation applied to the SG data following correction for pressure, as recorded at Bad Homburg (H2). (a) Extracted IMFs (1-7) of the SG data based on EMD. (b) Instantaneous frequencies corresponding to each of IMF1-7 of panel (a).
Figure 3 .
Figure 3. Comparison of the Fourier amplitude spectrum of the 273-h SG record from Bad Homburg (H2) starting 5 h (top) and 56 h (bottom) after the earthquake. The spectra were computed on the first two IMFs, after corrections for pressure. The vertical dotted lines denote degenerate frequencies for the model PREM.
Table 2 .
Differences between the observations and the model predictions for the 3 S 1 triplet. | 4,096.4 | 2012-06-05T00:00:00.000 | [
"Geology",
"Physics",
"Engineering"
] |
A Low-Phase-Noise 8 GHz Linear-Band Sub-Millimeter-Wave Phase-Locked Loop in 22 nm FD-SOI CMOS
Low-phase-noise and wideband phase-locked loops (PLLs) are crucial for high-data-rate communication and imaging systems. Sub-millimeter-wave (sub-mm-wave) PLLs typically exhibit poor performance in terms of noise and bandwidth due to higher device parasitic capacitances, among other reasons. In this regard, a low-phase-noise, wideband, integer-N, type-II phase-locked loop was implemented in the 22 nm FD-SOI CMOS process. The proposed wideband linear differential-tuning I/Q voltage-controlled oscillator (VCO) achieves an overall frequency range of 157.5–167.5 GHz with 8 GHz of linear tuning and a phase noise of −113 dBc/Hz @ 100 kHz. Moreover, the fabricated PLL produces a phase noise of less than −103 dBc/Hz @ 1 kHz and −128 dBc/Hz @ 100 kHz, corresponding to the lowest phase noise reported for a sub-millimeter-wave PLL to date. The measured RF output saturated power and DC power consumption of the PLL are 2 dBm and 120.75 mW, respectively, whereas the fabricated chip, comprising a power amplifier and an integrated antenna, occupies an area of 1.25 × 0.9 mm2.
Introduction
Many recent satellite communication, radar, and imaging systems use the W and G bands to transmit and receive signals at high data rates [1][2][3]. To increase the data rates and sensitivity of these systems, wideband, low-phase-noise and precise millimeter-wave (mm-wave) and sub-millimeter-wave (sub-mm-wave) signal sources are required [4]. Despite the use of III-V or SiGe technologies for applications in these upper mm-wave frequency bands, many recent system implementations have been carried out in CMOS technology to minimize their overall cost.
However, the main challenge posed in the design of communication systems based on CMOS technology at these frequencies is the poor performance of the essential building blocks of the phase-locked loops (PLLs), such as voltage-controlled oscillators (VCOs) with low phase noise and wide tuning ranges. All performance measures of the VCOs, including the amplitude of the output signal, tuning range, phase noise, and power consumption, unavoidably deteriorate as the operating frequency approaches the transit frequency (fT), which is limited by the parasitic capacitance of the transistors. In addition, the performance is further degraded by the high interconnection loss and low-quality factor (Q) of varactors. Many mm-wave and sub-mm-wave VCOs are based on frequency multiplications [4][5][6][7].
Although the frequency multiplication technique may slightly improve the phase noise [7], it exhibits a narrower locking range, as well as higher power and area consumption. Moreover, high-frequency PLLs use frequency synthesizers (FSs) with a low-frequency reference signal. Since voltage-controlled oscillators (VCOs) operate at high frequency, a phase comparison with a crystal reference oscillator carried out by a phase/frequency detector (PFD) should also be performed at high frequency. However, there is a speed limitation of the PFD, which is based on digital blocks at mm-wave and sub-mm-wave frequencies. In this regard, it is necessary to use frequency division to bring down the VCO output signal frequency to the desired frequency suitable for the PFD operation. Nevertheless, conventional frequency division is performed by a current-mode logic (CML), which is limited in operating frequency and presents high non-linearity [8]. For these reasons, several recent mm-wave FSs use injection-locked frequency dividers (ILFDs) or Miller frequency dividers [1,9]. Nevertheless, the implemented PLLs have narrow locking ranges, limiting their bandwidths and operating frequencies. Moreover, a CMOScompatible spintronic-based signal generator was proposed in [10]. Although the proposed device size is in the nanoscale with two degrees of freedom tuning ability, it requires high-gain amplification to bring the output power closer to 0 dBm, therefore increasing the overall power consumption and non-linearities. Furthermore, an optical solution was proposed in [11] to design a low-phase-noise wideband-modulated waveform generator centered at 40 GHz for synthetic-aperture radars (SARs) applications. Besides its CMOS incompatibility, the designed oscillator presents a large footprint unlike integrated RF PLLs.
In this work, a type II PLL with wide locking range operating in the sub-mm-wave frequency is implemented in the 22 nm FD SOI CMOS process. The frequency performance of the PLL is improved by the design of wideband quadrature VCO along with a wide locking range ILFD with PMOS input injection.
The rest of this paper is organized as follows: Section 2 discusses the system architecture and the circuit implementation of the essential building blocks; Section 3 shows the measurement results; and Section 4 summarizes the work.
System Architecture and Circuit Design
The proposed type-II sub-mm-wave PLL is given in Figure 1 and is composed of a balanced I/Q VCO, an injection-locked divider (÷4) and ÷64 CML frequency dividers, a phase/frequency detector (PFD), a charge pump (CP), and a loop filter (LF). An external signal generator is used to provide a reference signal between 4.984 GHz and 5.234 GHz. This signal is then divided by eight using a ÷2 CML divider chain to provide the PFD reference signal between 623.047 MHz and 654.297 MHz. The linear-tuning output frequency of the PLL ranges from 159.5 GHz to 167.5 GHz, corresponding to the VCO output frequency range. The output signal of the I/Q VCO is amplified by a three-stage CS power amplifier with a power gain of 10 dB, a maximum saturated power of 10 dBm, and an OP1dB of 7 dBm.
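The consistency of this frequency plan can be checked with a few lines of arithmetic (our check, not part of the original paper):

```python
# Arithmetic check of the PLL frequency plan described above.
f_vco = 163.5e9                  # mid-band VCO frequency, Hz
f_pfd = f_vco / (4 * 64)         # /4 ILFD followed by a /64 CML divider chain
f_ref = 8 * f_pfd                # external reference, divided by 8 before the PFD
print(f"PFD comparison frequency: {f_pfd / 1e6:.3f} MHz")   # ~638.672 MHz
print(f"external reference:       {f_ref / 1e9:.3f} GHz")   # ~5.109 GHz
# Band edges: 159.5 GHz / 256 = 623.047 MHz and 167.5 GHz / 256 = 654.297 MHz,
# matching the PFD reference range and the 4.984-5.234 GHz external reference quoted above.
```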
I/Q VCO
The circuit configuration of the wideband I/Q VCO is shown in Figure 2. The resonator, which determines the output frequency of the VCO, was designed using grounded coplanar waveguides TL1-TL2, TL7-TL8, and the parasitic capacitances of transistors M1-M8. Transistors M1-M4 form the core of the VCO, which consists of two cross-coupled oscillators. Quadrature signal generation is achieved by the connection of transistors M5-M8. Consequently, balanced in-phase (I) and quadrature (Q) signals are obtained at the output of the VCO. The VCO outputs are isolated from the subsequent stages using buffers built by transistors M9-M12 along with grounded coplanar waveguides (CPWGs) TL5-TL8. A DC current of I0 = 5 mA is used to bias the VCO through a current mirror configuration. The size of the biasing transistors is small enough to keep the resonance frequency of the resonator unchanged and maximize the output voltage swing. As a result, an output differential peak voltage of 550 mV is obtained for a supply voltage VDD of 1.5 VDC. Due to the significance of the parasitic capacitances of MOSFETs at mm-wave frequencies, current tuning is more practical than varactor tuning in mm-wave VCO design. The MOSFET gate-source capacitance is related to its drain-source voltage or drain current [12]. Therefore, a change in the bias current results in a change in the parasitic capacitances of M1-M12, which dominate the total capacitance of the resonator. Thus, a current-tuning network is constructed using the transistors M13-M16 and resistors R1-R2. A DC voltage is applied to the gates of the tuning transistors, producing drain currents that are added to the fixed bias current from the current mirror to feed the QVCO core transistors. The usage of the tuning transistors is crucial to isolate the VCO core from the tuning DC voltage source as well as to ensure the oscillation by reducing power loss through tuning. Moreover, a differential tuning voltage is applied across VCTRL+ and VCTRL− to obtain higher linearity and a wider tuning frequency range. To obtain the positive tuning, for instance, VCTRL− is maintained at a fixed low DC voltage point and VCTRL+ is tuned between VCTRL− and VDD. Negative tuning is obtained by swapping VCTRL+ and VCTRL− from the positive tuning scenario. As a result, a wide frequency range ∆ω = 10 GHz and a phase noise of less than −100 dBc/Hz @ 100 kHz were obtained at the output of the I/Q VCO.
PLL Divider Chain
The I/Q VCO output frequency, being in the order of 160 GHz, is too large for the PFD, which is made of digital blocks. Therefore, frequency division is necessary to bring the I/Q VCO output frequency down to the center frequency of 638.67 MHz, suitable for phase and frequency comparison. The divider chain is composed of two different divider types, namely the injection-locked frequency divider (ILFD) and the current-mode logic (CML) divider. First, a network of two ÷2 ILFDs (÷4 overall) was designed to convert the center frequency f0 from 163.5 GHz to 40.875 GHz, followed by the CML divider network. The ÷2 ILFD is depicted in Figure 3a and is based on injecting a current at f0 into the middle point of the cross-coupled pair of M1 and M2. CPWG lines TL3 and TL4 are used in the resonator circuit along with the parasitic capacitances of M1, M2, and M5, instead of inductors, for the first two dividers from the VCO output. This allows a wider locking range compared to an inductor-based divider.
The sinusoidal voltage signal from the I/Q VCO buffer is fed to the gate of the transistor M5, which produces a tail current i = Iinj cos(ω0t + ϕ) + I0, where Iinj < I0. The condition Iinj < I0 is necessary to ensure that a net positive current is injected. Denoting Q as the quality factor of the LC tank, and ω0 = 1/√(LC) and Iosc as its resonating frequency and current amplitude of oscillation, respectively, the synchronization range of the ILFD can be approximated as in [13]. As the current is injected through the PMOS transistor M5, Iinj = gm5·Vinj, where the injected voltage Vinj corresponds to the output voltage of the VCO and gm5 is the transconductance of M5. In consequence, the synchronization range is proportional to the injected voltage amplitude Vinj, which is the peak voltage of the preceding VCO stage. Meanwhile, the PMOS injection offers easier voltage transfer (matching) and current conversion from the VCO, as well as lower process capacitance, resulting in a wider locking range of the divider resonator. Additionally, a differential-to-single-ended buffer is employed to isolate the VCO from the first ÷2 divider.
Next, a six-stage ÷2 CML divider (÷64) succeeds the ILFD network to bring the center frequency from 40.875 GHz down to 638.67 MHz. The conventional CML divider utilizes two D-latches with resistive elements in a master-slave configuration [5]. Here, a simplified input-unbalanced D-latch with no resistive load is used (as seen in Figure 3b). The divider uses two cross-coupled latches to allow clock division. The first latch is composed of transistors M1-M4, whereas the second is formed by M5-M8. The latch transistors M3-M4 and M7-M8 also provide resistive loading. The divider input is given through the current source I0, which is implemented by an NMOS transistor with W/L twice that of M1-M2 or M5-M6. The designed improved CML divider exhibits high speed and full voltage swing with large bandwidth.
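As a purely illustrative numerical sketch of the proportionality between injected VCO swing and ILFD locking range stated above (the exact locking-range expression of [13] is not reproduced here; the first-order scaling used below and all component values are our assumptions):

```python
# Illustrative only: first-order lock-range scaling ~ (f0 / Q) * (Iinj / Iosc), assumed here.
f0 = 81.75e9          # tank frequency of the first /2 ILFD stage (half the 163.5 GHz mid-band)
Q = 8.0               # assumed tank quality factor
gm5 = 5e-3            # assumed transconductance of the injection device M5, in S
I_osc = 4e-3          # assumed oscillation current amplitude of the tank, in A

for v_inj in (0.1, 0.3, 0.55):                # 0.55 V is the VCO differential peak quoted earlier
    i_inj = gm5 * v_inj                       # Iinj = gm5 * Vinj, as in the text
    lock = (f0 / Q) * (i_inj / I_osc)         # assumed first-order scaling, not the exact Eq. of [13]
    print(f"V_inj = {v_inj:.2f} V -> I_inj = {i_inj * 1e3:.2f} mA, lock range ~ {lock / 1e9:.2f} GHz")
```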
Next, a six-stage ÷2 CML divider (÷64) succeeds the ILFD network to bring the center frequency from 40.875 GHz down to 638.67 MHz. The conventional CML divider utilizes two D-latches with resistive elements in a master-slave configuration [5]. Here, a simplified input-unbalanced D-latch with no resistive load is used (as seen in Figure 3b). The divider uses two cross-coupled latches to perform clock division. The first latch is composed of transistors M1-M4, whereas the second is formed by M5-M8. The latch transistors M3-M4 and M7-M8 also provide resistive loading. The divider input is given through the current source I0, which is implemented by an NMOS transistor with W/L twice that of M1-M2 or M5-M6. The designed improved CML divider exhibits high speed and full voltage swing with large bandwidth.
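A quick numeric check of the divider chain described above (two ÷2 ILFDs followed by six ÷2 CML stages, i.e., a total division by 256) reproduces the frequencies quoted in the text; this is an illustrative sketch, not part of the paper.

```python
# Sanity check of the divider chain described above (illustrative, not from the paper).
f_vco = 163.5e9                     # center frequency of the I/Q VCO, Hz

f_after_ilfd = f_vco / 4            # two cascaded divide-by-2 ILFDs
f_after_cml = f_after_ilfd / 2**6   # six divide-by-2 CML stages

print(f"After ILFD network: {f_after_ilfd/1e9:.3f} GHz")   # 40.875 GHz
print(f"After CML network:  {f_after_cml/1e6:.2f} MHz")    # 638.67 MHz
print(f"Total division factor: {4 * 2**6}")                # 256
```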
Other PLL Blocks
Other blocks include the phase/frequency detector (PFD), the charge pump (CP), and the loop filter (LF). The output of the frequency divider chain is applied at the DIV and DIVn (complementary) inputs and the 5 GHz reference at the REF and REFn inputs of the PFD. As represented in Figure 4a, the three-state PFD is composed of two D-latches and an AND gate. When the reference signal is "high" while the divider signal is "low", the outputs UP/UPn and DOWN/DOWNn of the D-flip-flops (DFF) are forced to be "high" and "low", respectively. This causes the VCO to increase its output frequency, matching the reference frequency after subsequent frequency divisions. Similarly, a "low" state of the reference signal will induce a decrease in the VCO/divider output in order to match the reference frequency. The "zero" state is created when the reference and divider signals are equal in phase and frequency, leading to a reset of the DFFs through the AND gate. Meanwhile, when the two input signals of the PFD converge, a fast reset path causes the DFFs to enter state "zero" before any charge is transmitted to the succeeding CP stage. This is called the "dead-zone" and creates spurs in the PLL output. Therefore, the minimum pulse width of the PFD is limited to the reset path delay [14]. The dead-zone may be reduced by using inverter-generated delay blocks following the AND gate [15] or by employing a NOR gate in place of the AND gate [16]. In this work, the AND gate is formed by a symmetric NAND gate followed by an inverter, allowing the reduction of the dead-zone and resulting in a wideband PFD. The PFD outputs are applied to a differential charge pump (shown in Figure 4b). The transistors M1-M4 are driven differentially by the UP/UPn and DOWN/DOWNn signals from the PFD. Therefore, the two I0 current sources from GND and VDD are steered into transistors M1-M4 and the differential loop filter (LF) based on R1, C1 and R2, C2. The resistors R2 connected to VCM set the common-mode voltage at the outputs of the CP to VCM. The differential architecture has several advantages, including an insensitivity to PMOS and NMOS switch mismatches and improved speed and voltage range [17].
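The tri-state behavior described above can be summarized with a minimal behavioral sketch. The function names and the edge-driven model below are illustrative assumptions, not taken from the paper; the sketch only captures the UP/DOWN set and AND-gate reset logic.

```python
# Minimal behavioral sketch of a three-state PFD (illustrative only; signal names
# and the edge-driven abstraction are assumptions, not taken from the paper).
def pfd_step(state, ref_edge, div_edge):
    """state = (up, down); a rising edge on REF sets UP, a rising edge on DIV sets DOWN.
    When both are high, the AND-gate reset path clears both (the 'zero' state)."""
    up, down = state
    if ref_edge:
        up = True
    if div_edge:
        down = True
    if up and down:        # reset through the AND gate
        up, down = False, False
    return up, down

# Example: the reference leads the divider output, so UP pulses dominate and the
# charge pump would raise the VCO control voltage.
edges = [(True, False), (False, False), (False, True), (True, False), (False, True)]
state = (False, False)
for ref_edge, div_edge in edges:
    state = pfd_step(state, ref_edge, div_edge)
    print(state)
```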
Moreover, a three-stage common-source power amplifier (PA) followed by an on-chip bowtie antenna was designed to amplify the PLL output in a transmitter-antenna configuration (as seen in Figure 1). The schematic of the PA is depicted in Figure 4c. The first and second stages are composed of the duplicate transistor M1 as well as the coplanar waveguides TL2 and TL3, respectively, providing voltage amplification. The last stage is formed by M2 and TL3 and performs power amplification. Interstage capacitors C3 and C4 are used to block the DC signal from one stage to the other. The size of M2 is twice that of M1 to handle the output current of 10 mA. A common bias current of 5 mA is used to bias the class A amplifier with a DC supply voltage VDD of 1.5 V through a feeding network formed by the transistor Mb and the resistor R1. As a result, a total DC power consumption of 30 mW is generated. The high-pass network consisting of capacitors C1 and C2 and the coplanar waveguide TL1 was designed to match the input of the PA to 50 Ω. The gain of the PA ranges from 6.5 dB to 10 dB within the 159.5-167.5 GHz band. Meanwhile, the wideband bowtie antenna has a bandwidth of 32 GHz, ranging from 140 GHz to 172 GHz, with a peak gain of 7 dBi and a reflection coefficient of −25 dB at 160 GHz. The designed PLL is applied in a millimeter-wave transmitter to facilitate the performance measurement procedure.
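The 30 mW DC budget quoted above is consistent with a 1.5 V supply and roughly 20 mA of total bias current. The per-stage current split in the sketch below is an assumption made for illustration; the paper does not state it explicitly.

```python
# Rough DC power-budget check for the PA (illustrative; the per-stage current split
# is an assumption consistent with the text, not an exact figure from the paper).
vdd = 1.5                       # V
i_stage1 = i_stage2 = 5e-3      # A, assumed bias of the two M1-based stages
i_stage3 = 10e-3                # A, output stage sized for 10 mA

p_dc = vdd * (i_stage1 + i_stage2 + i_stage3)
print(f"Total DC power: {p_dc*1e3:.0f} mW")   # 30 mW, matching the stated consumption
```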
Experimental Results
The chip (shown in Figure 5) was realized in the 22 nm CMOS FDSOI process and has an area of 1.25 mm × 0.9 mm. The integrated bowtie antenna has inter-layer metal fills to satisfy the metal-density DRC rules. This does not affect the 7 dBi gain of the antenna or its efficiency.
To measure the PLL, the measurement setup from Figure 6 is used. The signal from the integrated bowtie antenna is picked up by a horn antenna (Mi-Wave 261G-10/387) with 12 dB gain. Thereafter, the signal is applied to a WR5 waveguide. As the spectrum analyzer (FSW from Rohde & Schwarz) has a span of 70 GHz, the RF signal is down-converted with the Rohde & Schwarz FS-Z220 mixer. The LO signal is generated from a signal generator (MG3690C from Anritsu) using an InP multiplier (SMZ170 from Rohde & Schwarz). All the measurement devices are available at the System-on-Chip Center (SoCC) at Khalifa University. For phase noise measurements, the spectrum analyzer has the phase noise option enabled by software.
The measured spectrum of the PLL is presented in Figure 7. The resolution (and video) bandwidth is 47 kHz. The measured RF signal power is −19.53 dBm at 157.82 GHz. It can be observed that the worst-case in-band spur for the standalone PLL is about −35.7 dBc. Therefore, the power at the PLL output after the buffer is close to 2 dBm, taking into account the gain of the PA, the gain of the two antennas, and the power losses due to free-space propagation at 160 GHz.
To measure the tuning range of the VCO, the PLL loop is disabled by turning off the components of the loop. The linear tuning range of the VCO is measured to be from 159.5 GHz to 167.5 GHz (as illustrated in Figure 8). Nevertheless, the VCO is functional at frequencies as low as 157.5 GHz.
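The ~2 dBm estimate above follows from a simple link budget. The sketch below back-calculates the implied free-space path loss and antenna separation; the mid-band PA gain is assumed and waveguide/mixer losses are ignored, so the numbers are illustrative rather than the authors' exact calculation.

```python
# Illustrative back-calculation of the link budget behind the ~2 dBm estimate above.
# The PA gain (taken mid-band) and the neglect of waveguide/mixer losses are assumptions.
import math

p_rx_dbm  = -19.53   # measured power at the spectrum analyzer
p_pll_dbm = 2.0      # estimated PLL output power after the buffer
g_pa_db   = 8.0      # assumed PA gain (stated range: 6.5-10 dB)
g_tx_dbi  = 7.0      # on-chip bowtie antenna peak gain
g_rx_db   = 12.0     # horn antenna gain

implied_path_loss_db = p_pll_dbm + g_pa_db + g_tx_dbi + g_rx_db - p_rx_dbm
f, c = 160e9, 3e8
d = (c / (4 * math.pi * f)) * 10 ** (implied_path_loss_db / 20)
print(f"Implied free-space path loss: {implied_path_loss_db:.1f} dB")
print(f"Corresponding antenna separation: {d*100:.1f} cm")   # on the order of a few cm
```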
Additionally, it can be inferred from the figure that the measured VCO tuning range closely matches the simulated results. Figure 9 shows the measured phase noise of the PLL, VCO, and reference. The measured phase noise of the PLL is −103 dBc/Hz @ 1 kHz and −128 dBc/Hz @ 100 kHz. The free-running VCO has a phase noise of −111 dBc/Hz @ 100 kHz and −132 dBc/Hz @ 1 MHz. The reference at 5 GHz has a phase noise of −110 dBc/Hz @ 1 kHz and −149 dBc/Hz @ 100 kHz.
Furthermore, (1) predicts a linear dependence of the synchronization range on the injection signal level of the injection-locked divider. This is confirmed by the measurements in Figure 10. Due to the limitation of the tuning range of the VCO, the last measured point (in blue) at −20 dBm input power is extrapolated from the previous measurement point.
Figure 10. Measured synchronization range of the injection-locked divider @ 160 GHz.
Table 1 provides the benchmark of the proposed PLL and comparisons with other relevant works. For instance, [4] is a 198-274 GHz CMOS PLL. Even though its performance in terms of bandwidth, DC power consumption, and chip area is good, its phase noise lies above −80 dBc/Hz @ 100 kHz with an output power of only −11 dBm. Moreover, the PLLs proposed in [8] and [9] produce phase noises above −90 dBc/Hz @ 100 kHz with DC power consumptions in the range of 323-380 mW and 1150-1250 mW, respectively. Meanwhile, the PLL proposed in this work has a phase noise of −103 dBc/Hz @ 1 kHz and −128 dBc/Hz @ 100 kHz, which is among the lowest compared to state-of-the-art sub-mm-wave PLLs. In addition, the bandwidth obtained is among the highest, with relatively high saturated output power in comparison to the relevant works listed in Table 1. Although the performance of the designed PLL appears to be formidable, this was made possible by the use of the 22 nm FD-SOI CMOS process, which offers relatively higher device performance at higher frequencies. As a result, the main disadvantage of the produced PLL is the high technology cost.
Conclusions
An 8 GHz linear tuning sub-mm-wave PLL applied in a transmitter front-end with antenna was designed and measured in this work. The proposed wideband sub-mm-wave I/Q VCO is based on current tuning through differential voltage application, as it is more practical at mm-wave frequencies and beyond. The PMOS-injected divider is based on a CPWG resonator and exhibits a wider frequency locking range than the inductor-based NMOS frequency divider, covering the VCO bandwidth. The designed PLL was applied in a transmitter front-end along with an on-chip wideband bowtie antenna for performance testing purposes. The implemented PLL produces a phase noise of −128 dBc/Hz @ 100 kHz for the worst-case scenario, with outstanding performance in terms of power and area consumption compared to other recent relevant works (as represented by Table 1). To the knowledge of the authors, this is the lowest phase noise produced by a sub-mm-wave PLL to date.
Data Availability Statement: The authors confirm that the data used in this study is either experimentally extracted and provided throughout the article or referenced below.
Conflicts of Interest:
The authors declare no conflict of interest. | 7,343 | 2023-05-01T00:00:00.000 | [
"Physics"
] |
Effect of Nickel on the Structural Properties of Mn Zn Ferrite Nano Particles
Nano particles of Mn0.5-xNixZn0.5Fe2O4 (x = 0.0, 0.1, 0.2, 0.3) have been synthesized by the chemical co-precipitation method. X-ray diffraction analysis confirms the formation of ferrites in the nano phase. The lattice constant and particle size are found to decrease with increasing nickel concentration. The porosity, calculated using the X-ray density and the measured density, also shows a decreasing trend with increasing nickel concentration.
Introduction
Manganese zinc ferrites are technologically important materials because of their high magnetic permeability and low core loss. These ferrites have been extensively used in electronic applications such as transformers, choke coils, noise filters, and recording heads. Ferrites prepared by the conventional ceramic method involve high temperatures, which can result in the loss of their fine particle size. The bulk properties of ferrites change as one or more of their dimensions are reduced to the nano scale (Y. Yamamoto, 1994; J.M.D. Coey, 1972). The unusual properties exhibited by ferrite nano particles and their promising technological applications have attracted much interest in recent years. The size and shape of the ferrite particles depend on the synthesis process. Wet chemical methods such as co-precipitation, sol-gel and hydrothermal processing have been widely used to produce fine particle sizes. Bueno et al. (A.R. Bueno, 2007) have reported the influence of manganese substitution on the magnetic properties and microstructure of Ni0.5-xZn0.5-xMn2xFe2O4 synthesized by the nitrate-citrate precursor method. Verma et al. (A. Verma, 2006) have reported the development of a new ferrite with low power loss based on a manganese nickel zinc ferrite composition for switch-mode power supplies. However, few reports are available on the properties of nano Mn-Ni-Zn ferrite. In the present investigation, studies on nano particles of Mn0.5-xNixZn0.5Fe2O4 (x = 0.1, 0.2 and 0.3) synthesized by the chemical co-precipitation method are reported.
Experimental details
Nano particles of Mn0.5-xNixZn0.5Fe2O4 with x varying from 0.0 to 0.3 were prepared by the co-precipitation method. Aqueous solutions of MnCl2, ZnSO4, NiCl2 and FeCl3 in their respective stoichiometry (100 ml of solution containing (0.5-x) M MnCl2, (x) M NiCl2 and 0.5 M ZnSO4, and 100 ml of 1 M FeCl3) were mixed thoroughly at 80°C, and this mixture was added to a boiling solution of NaOH (0.55 M, dissolved in 1600 ml of distilled water) within 10 seconds under constant stirring; a pH of 11 was maintained throughout the reaction. Conversion of the metal salts into hydroxides and the subsequent transformation of the metal hydroxides into nano ferrites take place upon heating to 100°C, which was maintained for 60 minutes. The nano particles thus formed were isolated by centrifugation, washed several times with deionized water followed by acetone, and then dried at room temperature. The dried powder was ground thoroughly in a clean agate mortar. The ground powder was then pelletized using a hydraulic press and fired at 500°C for 2 h. The structure and crystallite size were determined from X-ray diffraction (XRD) measurements using a Philips (PM 9220) diffractometer with CuKα (λ = 1.5406 Å) radiation.
XRD analysis
The X-ray diffraction patterns for Mn0.5-xNixZn0.5Fe2O4 (with x = 0.0, 0.1, 0.2, 0.3) are shown in Figure 1. These diffraction lines provide clear evidence of the formation of the ferrite phase in all the samples. The broad XRD lines indicate that the ferrite particles are of nano size. The average particle size for each composition was calculated from the XRD line width of the (311) peak using the Scherrer formula (Cullity B.D, 1966). The values of the particle size and lattice constant deduced from the X-ray data are given in Table 1.
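For illustration, the Scherrer estimate used above can be evaluated as in the sketch below. The shape factor, peak position and line width are assumed example values, not the paper's raw data; with these numbers the estimate lands near the 13 nm reported for the x = 0 sample.

```python
# Illustrative Scherrer-formula estimate of crystallite size from the (311) peak width.
# The shape factor, peak position and FWHM below are assumed example values.
import math

K = 0.9                    # shape factor
lam = 1.5406               # CuKα wavelength, Å
two_theta_deg = 35.5       # assumed (311) peak position, degrees
fwhm_deg = 0.65            # assumed peak broadening (FWHM), degrees

theta = math.radians(two_theta_deg / 2)
beta = math.radians(fwhm_deg)
D = K * lam / (beta * math.cos(theta))       # crystallite size, Å
print(f"Crystallite size ≈ {D/10:.1f} nm")   # ≈ 13 nm with these example inputs
```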
The average particle size for Mn0.5Zn0.5Fe2O4 is found to be 13 nm, and the particle size gradually decreases as the manganese concentration is decreased. This can be explained on the basis of cation stoichiometry. In a complex system like ferrites, where many cations are involved, the nucleation and growth of the particles are expected to be influenced by the probability of a cation occupying the available, chemically inequivalent sites and by its affinity for those sites. For example, during the formation of MnFe2O4 nuclei, Mn2+ does not have a strong preference for occupying only the tetrahedral site, and a small fraction (<20%) can also occupy the octahedral site. In the nano scale range, Mn2+ gets uniformly distributed among the different sites, with two octahedral sites and one tetrahedral site available to it. Therefore Mn2+ has the highest probability of being adsorbed by a growing nucleus (Chandana Rath, 2002). As the manganese in the sample is replaced by nickel, the probability of Mn2+ being adsorbed by a growing nucleus is lower. This accounts for the decrease in particle size as the concentration of nickel is increased. The lattice constant decreases with increasing nickel concentration, as shown in Figure 2. This can be explained on the basis of the relative ionic radii. The ionic radius of Ni2+ (0.69 Å) is smaller than that of Mn2+ (0.82 Å). Replacement of the larger Mn2+ cations by the smaller Ni2+ cations in the manganese zinc ferrite causes a decrease in the lattice constant. However, the lattice constant of the samples with nickel concentration up to x = 0.3 was found to be less than that of the bulk. A significant fraction of Mn2+ and Zn2+ occupies the octahedral sites and forces Fe3+ to the tetrahedral sites against its chemical preference. Since Fe3+ ions have a smaller ionic radius (0.64 Å), their occupation of the tetrahedral sites in place of larger divalent ions leads to the observed contraction of the lattice parameter.
The measured density ρm was determined by treating the pellet as a cylinder, using the formula ρm = m/(πr²h), where m is the mass, r the radius and h the height of the sample. The X-ray densities were calculated using the relation ρx = 8M/(Na³),
where M is the molecular weight of the sample, N is Avogadro's number and a is the lattice constant (the factor of eight corresponds to the number of formula units per cubic unit cell of the spinel structure). The X-ray density ρx depends on the lattice constant and the molecular weight of the sample, whereas the measured density ρm is calculated from the geometry and mass of the sample. It is observed from Table 1 that the X-ray density increases with increasing Ni concentration. Since the X-ray density is inversely proportional to the cube of the lattice constant, it increases as the lattice constant decreases. The measured density of the ferrite was also found to increase with x. The porosity P of the ferrite nano particles was determined using the relation P = (ρx − ρm)/ρx.
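The density and porosity relations above can be evaluated as in the sketch below. The molecular weight, lattice constant and measured density are assumed example values for the x = 0 composition, not figures taken from Table 1 of the paper.

```python
# Illustrative evaluation of the density and porosity relations above.
# The molecular weight, lattice constant and pellet density are assumed example values.
M = 235.9            # g/mol, approximate molecular weight of Mn0.5Zn0.5Fe2O4
N_A = 6.022e23       # Avogadro's number
a = 8.4e-8           # cm, assumed lattice constant (8.4 Å)

rho_x = 8 * M / (N_A * a**3)     # X-ray density, g/cm^3
rho_m = 4.3                      # g/cm^3, assumed measured (pellet) density
P = (rho_x - rho_m) / rho_x      # porosity fraction

print(f"X-ray density: {rho_x:.2f} g/cm^3")
print(f"Porosity:      {P*100:.1f} %")
```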
The variation in porosity with nickel concentration for all the samples is shown in Figure 3. It is found that the porosity decreases with increasing nickel concentration.
Conclusion
Nano-structured Mn-Ni-Zn ferrite was prepared by the chemical co-precipitation method. Structural analysis by XRD confirms the formation of Mn-Ni-Zn ferrite. It is found that as the nickel concentration of the sample is increased, the lattice parameter and particle size decrease. The porosity, calculated using both densities, also shows a decreasing trend with increasing nickel concentration. | 1,656.2 | 2009-04-20T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Copolymerization of Ethylene with Functionalized 1,1-Disubstituted Olefins Using a Fluorenylamido-Ligated Titanium Catalyst
Considering the sustainability of material development, coordination polymerization catalysts effective for 1,1-disubstituted olefins are receiving a great deal of attention because they can introduce a variety of plant-derived comonomers, such as β-pinene and limonene, into polyolefins. However, due to their sterically encumbered property, incorporating these monomers is difficult. Herein, we succeeded in the copolymerization of ethylene with various hydroxy- or siloxy-substituted vinylidenes using a fluorenylamido-ligated titanium catalyst–MMAO system. This is the first example of ethylene/polar 1,1-disubstituted olefins’ copolymerization using an early transition metal catalyst system. The polymerization proceeded at room temperature without pressurizing ethylene, and high-molecular-weight, functionalized polyethylene was obtained. The obtained copolymer showed a reduced water contact angle compared with that of the ethylene/isobutene copolymer, demonstrating the increment in hydrophilicity by hydroxy groups.
Introduction
Olefin polymerization and copolymerization mediated by single-site transition metal catalysts have witnessed tremendous progress in the past 40 years, both in academia and industry, due to the discovery of methylaluminoxane (MAO), an excellent cocatalyst, by Kaminsky and Sinn [1]. Early transition metal complexes such as metallocenes have been the representative catalyst precursors, activated with MAO or borate cocatalysts to generate well-defined active species. Various copolymers with uniform comonomer distribution can be synthesized from the well-defined active species.
The incorporation of polar functional groups into polyolefins can considerably improve their surface characteristics, such as adhesion, dyeability, printability, and compatibility, and broaden their range of applications as commodity plastics. Therefore, in recent decades, the coordination copolymerization of ethylene and polar vinyl monomers has been vigorously investigated [2][3][4]. Late transition metal catalysts have played a leading role in this field because they are poisoned to a lesser degree by the electron-donating functional groups. Early transition metal catalysts can also be applied to comonomers with remote or protected functional groups [5]. Some group 4 metal catalysts are shown to be tolerant to polar functional groups like bulky amines without the aid of alkylaluminum [6]. By using these catalyst systems, high-molecular-weight crystalline polyolefins with a small number of functional sidechains containing carbonyl, amino, and hydroxy groups can be synthesized.
Recently, the use of 1,1-disubstituted olefins in coordination polymerization is drawing much attention in view of the sustainability of material development because many plant-derived 1,1-disubstituted olefins, such as pinene and limonene, are available. Incorporating these comonomers has been challenging because of their sterically encumbered property.
For example, ethylene (E)/isobutene (IB) copolymerization using bis(indenyl)ethylidene zirconium dichloride activated with MAO produces a low-molecular-weight copolymer with low IB content, even under an excess feed of IB relative to E [7]. Waymouth explored the cyclopolymerization of 2-methyl-1,5-hexadiene using a metallocene catalyst, and found that the vinyl groups inserted preferentially over the vinylidene groups, producing a regioregular polymer [8]. However, these examples show that the insertion of 1,1-disubstituted olefins into the group 4 metal-carbon bond is essentially possible.
Generally, constrained geometry catalysts (CGC) or modified half-titanocenes, which have open coordination sites for comonomers, are known to be effective for ethylene and non-polar 1,1-disubstituted monomers because they can accept various sterically encumbered monomers. The first example of an E/IB alternating copolymer was prepared by a CGC titanium catalyst with a cyclododecyl substituent on its nitrogen atom, activated by a borate activator [9]. Marks showed that a binuclear CGC-type catalyst (1, Figure 1) effectively copolymerized ethylene and various isoalkenes [10,11]. We have demonstrated that fluorenylamido-ligated titanium complexes 2a-2c can promote the copolymerization of ethylene with IB or limonene [12]. Recently, Nomura reported the copolymerization of ethylene and various plant-derived olefins such as limonene and β-pinene using half-titanocene complexes bearing a phenoxide ancillary donor, as represented by complex 3, activated by MAO to produce a comonomer content of up to 3.6 mol% under a pressurized condition [13]. These catalysts show a high activity for the copolymerization of ethylene and other multi-substituted olefins such as 2-methyl-1-pentene (2M1P) [14,15] and 4-methylcyclopentene [16]. However, no examples of copolymerizing polar 1,1-disubstituted monomers using early transition metal catalysts have been reported. Late transition metal catalysts have made remarkable progress in the copolymerization of ethylene and polar 1,1-disubstituted monomers like methacrylates, reflecting their lower oxophilicity [17][18][19]. However, all these systems require high temperature (>95 °C) and/or ethylene pressure.
Here, we have investigated the copolymerization of ethylene with 1,1-disubstituted olefins bearing hydroxy groups or their derivatives, using complexes 2a-2c activated with modified methylaluminoxane (MMAO). Although the hydroxy groups should be protected with silyl groups to achieve a high activity, the copolymerization proceeded at ambient pressure and temperature, producing high-molecular-weight (Mn > 10^5), functionalized polyethylene. The siloxy groups can be deprotected with the standard procedure using tetrabutylammonium fluoride, and the obtained hydroxy-substituted polyethylene showed superior surface wettability compared with polyethylene possessing no polar functional groups.
NMR spectra were recorded on a Varian System 500 or a JEOL Lambda500 spectrometer at room temperature or 130 °C. The obtained spectra of 1H NMR and 13C NMR were referenced to the signal of a residual trace of the partially protonated solvents (1H: δ = 5.91 ppm (C2HDCl4) and 7.26 ppm (CHCl3)) and the signal of the solvent (13C: δ = 74.7 ppm (C2D2Cl4) and 77.1 ppm (CDCl3)), respectively. The high-resolution mass spectrometry of the new compounds was performed on a JEOL JMS-T100GCV spectrometer. The absolute molecular weights of polymers were determined on a Malvern HT350GPC chromatograph (T = 130 °C; eluent, o-dichlorobenzene) equipped with RI/light scattering/viscometer triple detectors. Differential scanning calorimetry (DSC) measurements were performed on a SHIMADZU DSC-60 system with a temperature elevation rate of 10 °C min−1. The water contact angle was measured by a KYOWA DM-300 contact angle meter using a half-angle method.
Synthesis of Comonomer 5b
In a 100 mL two-necked flask, imidazole (75 mmol, 5.1 g) and isoprenol (5a, 30 mmol, 3.04 mL) were charged under nitrogen and diluted with dichloromethane (30 mL). The solution was cooled to 0 °C, and iPr3SiCl (36 mmol, 7.6 mL) was added dropwise. The mixture was warmed to room temperature and stirred for 16 h. The resulting solution was washed twice with 10 mL of aqueous saturated NaHCO3. The collected aqueous phase was extracted with dichloromethane (5 mL × 3). The combined organic phase was dried with MgSO4 and filtered, and the solvent was evaporated. The obtained crude product was purified with silica gel column chromatography (hexane, Rf = 0.30), producing 5b as a clear oil (7.20 g, >99%).
Synthesis of Comonomer 6b
In a 100 mL two-necked flask, (+)-trans-β-terpineol (6a, 2.0 mmol, 309 mg) and 2,6-lutidine (2.0 mmol, 0.23 mL) were charged under nitrogen and diluted with dichloromethane (3 mL). The solution was cooled to 0 °C, and iPr3SiOTf (2.0 mmol, 0.54 mL) was added dropwise. The mixture was warmed to room temperature and stirred for 6 h. The resulting solution was washed three times with 5 mL of aqueous saturated NaHCO3 solution. The collected aqueous phase was extracted with dichloromethane (3 mL × 3). The combined organic phase was dried with MgSO4 and filtered, and the solvent was evaporated. The obtained crude product was purified with silica gel column chromatography (hexane, Rf = 0.79), producing 6b as a clear oil (282 mg, 45%).
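As a quick consistency check, the reported isolated yields of 5b and 6b can be recomputed from the stated amounts. The molecular weights below are estimated from the expected formulas of the silyl ethers and are our own values, not figures quoted in the paper.

```python
# Quick consistency check of the reported isolated yields (illustrative; the molecular
# weights are estimated from the expected formulas, not quoted from the paper).
MW_5b = 242.5   # g/mol, triisopropylsilyl ether of isoprenol (C14H30OSi), estimated
MW_6b = 310.6   # g/mol, triisopropylsilyl ether of beta-terpineol (C19H38OSi), estimated

yield_5b = 7.20 / (0.030 * MW_5b)     # 7.20 g obtained from 30 mmol of isoprenol
yield_6b = 0.282 / (0.0020 * MW_6b)   # 282 mg obtained from 2.0 mmol of terpineol

print(f"5b: {yield_5b*100:.0f}% (reported >99%)")
print(f"6b: {yield_6b*100:.0f}% (reported 45%)")
```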
Copolymerization of ethylene and polar 1,1-disubstituted olefins with semi-batch process
A representative procedure for Table 1, Run 3, is described here. In a 100 mL two-necked flask, 4b (2.46 mmol, 597 mg) was weighed under nitrogen and dissolved in 6 mL of toluene. To this solution, toluene solutions of modified methylaluminoxane (MMAO) (2.0 M, 2.0 mL, 4.0 mmol) and 2,6-di-tert-butyl-4-methylphenol (BHT, 0.30 M, 1.0 mL, 0.30 mmol) were added. The nitrogen in the headspace of the flask was removed under vacuum, and ethylene was introduced back at ambient pressure until saturation. Polymerization was started by adding a solution of titanium complex 2a (20 µmol, 11.9 mg in 1.0 mL toluene), and the reaction mixture was magnetically stirred for 20 min under a flow of ethylene. The polymerization was terminated by adding 2 mL of MeOH. The resulting mixture was poured into 200 mL of MeOH containing 4 mL of concentrated HCl to separate the polymer and catalyst. The precipitated polymer was collected by filtration and dried for 4 h under vacuum at 60 °C to obtain a constant weight. A total quantity of 363 mg of colorless polymer was obtained.
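For orientation, the Run 3 result above can be normalized to a catalyst activity. The metric chosen here (grams of polymer per mmol of Ti per hour) is a common convention; the paper's own tables may report activity in a different unit.

```python
# Illustrative activity estimate for the Run 3 procedure above.
polymer_g = 0.363      # isolated polymer, g
ti_mmol = 0.020        # titanium complex 2a, mmol
time_h = 20 / 60       # polymerization time, h

activity = polymer_g / (ti_mmol * time_h)
print(f"Activity ≈ {activity:.0f} g/(mmol Ti·h)")   # ≈ 54 g/(mmol Ti·h)
```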
Copolymerization of ethylene and silyl-protected polar 1,1-disubstituted olefins with batch process
A representative procedure for Table 2, Run 9, is described here. To a 300 mL two-necked flask, comonomer 5b (2.46 mmol, 699 mg) was weighed, and MMAO (8.0 mmol in 3.7 mL toluene) was added. The nitrogen in the headspace of the flask was removed under vacuum, and 340 mL of ethylene (13.9 mmol at 25 °C) was introduced back. Polymerization was started by adding a solution of titanium complex 2b (20 µmol, 9.7 mg in 1 mL toluene), and the flask was sealed. The reaction mixture was magnetically stirred for 60 min while maintaining 25 °C. The polymerization was terminated by adding 2 mL of MeOH. The resulting mixture was poured into 100 mL of MeOH containing 10 mL of concentrated HCl to separate the polymer and catalyst. The precipitated polymer was collected by filtration and dried for 4 h under vacuum at 60 °C to obtain a constant weight. 118 mg of colorless polymer was obtained.
Deprotection of silyl groups in the copolymer
In a Schlenk tube, the ethylene/4b copolymer (40 mg, 4b content = 84 µmol) obtained in Table 1, Run 3, was dissolved in chlorobenzene (2.0 mL) at 100 °C. To this solution, 0.10 mL of tetrabutylammonium fluoride solution (TBAF, 1.0 M in THF, 0.20 mmol) was added, and the mixture was stirred at 100 °C for 15 h. The polymer was precipitated by adding excess methanol and collected. The obtained polymer was dried under vacuum at 60 °C for 6 h, producing 16 mg (quant.) of the deprotected copolymer.
Preparation of polymer films for water contact angle measurement
Before the sample preparation, the polymer was dissolved in chlorobenzene, filtered through a 0.045 mm stainless mesh, and reprecipitated in methanol to remove impurities. A thin film was prepared by pressing the polymer at 180 °C and 4.0 MPa for 5 min. Seven measurements at different positions of the film were averaged to determine the water contact angle.
Results and Discussion
Previously, we have succeeded in copolymerizing ω-hydroxyalkenes using a fluorenylamido-ligated titanium complex by pretreating the comonomer with triisobutylaluminium before polymerization to prevent catalyst poisoning by the hydroxyl functional group [22]. Converting the hydroxy group to a silyl ether is also a conventional way to achieve efficient homo- and copolymerization of ω-hydroxyalkenes, and bulky trialkylsilyl groups tend to give a high polymerization activity [23]. Following these previous studies, 1,1-disubstituted olefins with siloxy groups (4b-6b) were synthesized and applied in the copolymerization. The conversion of alcohols 4a-6a to silyl ethers 4b-6b was performed using iPr3SiCl/imidazole (for primary alcohols) or iPr3SiOTf/2,6-lutidine (for a tertiary alcohol) conditions (Scheme 1). A large-scale reaction was possible for the conversion of isoprenol (5a), producing 7.2 g of silyl ether 5b in an almost quantitative manner. The obtained comonomers were purified by silica gel column chromatography prior to their application in the copolymerization process.
Scheme 1. Synthesis of silyl-protected polar 1,1-disubstituted olefins 4b-6b.
First, the copolymerization of ethylene and the 6-methyl-hept-6-en-1-ol derivative (4b) was performed in a semi-batch process using the 2a-MMAO/BHT system (Scheme 2, Table 1). Here, BHT is added to modify the residual trialkylaluminums, which can competitively coordinate to the metal center of the catalyst with the monomers and reduce the polymerization activity [24]. Although the polymer yield was lower than that of the copolymerization with unfunctionalized 1,1-disubstituted olefins such as isobutene and 2-methyl-1-pentene (Runs 1, 2), a high-molecular-weight copolymer was obtained with a longer polymerization time at ambient temperature and pressure (Run 3). The direct copolymerization of unprotected 4a (Run 4) also proceeded when the amount of MMAO was increased, probably because an excess amount of alkylaluminum can mask the hydroxy group of 4a.
The analysis of an ethylene/4b copolymer using the 1H NMR spectrum showed a triplet signal at 3.68 ppm, which is assigned to the methylene protons adjacent to the oxygen atom, indicating the incorporation of 4b (Figure 2). Furthermore, a singlet 3H signal at 0.78 ppm and a 21H signal at 1.05 ppm are assigned to the terminal methyl and isopropylsilyl protons, respectively. The methylene signal appears at a different chemical shift (3.53 ppm) in the ethylene/4a copolymer (Figure S9), indicating that the silyl ether remained in the copolymer. Thus, the comonomer content was calculated from the integral ratio of the -CH2OH signals and the signals in the aliphatic region. A further deprotection of the silyl groups can be performed by heating the polymer solution in chlorobenzene with excess tetrabutylammonium fluoride (TBAF). The reaction proceeded quantitatively, and hydroxy-substituted polyethylene was recovered (Scheme 3). The deprotection of the silyl group was confirmed by a higher-field-shifted methylene signal in the 1H NMR spectrum (Figure S10). Moreover, the IR spectrum of the deprotected copolymer showed a broad signal around 3300 cm−1, which is not observed in the ethylene/4b copolymer, showing the presence of hydroxy groups generated via the elimination of the silyl groups (Figure S15).
Scheme 3. Deprotection of silyl groups in ethylene/4b copolymer.
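A sketch of the integral-ratio calculation mentioned above is given below. The integral values and the per-unit proton count in the aliphatic window are invented for illustration; they are not the paper's data.

```python
# Sketch of the integral-ratio calculation described above; the integral values and
# the per-unit proton count are illustrative assumptions, not the paper's data.
I_OCH2 = 2.0          # integral of the 3.68 ppm -CH2-O- triplet (2H per 4b unit)
I_aliphatic = 130.0   # integral of the aliphatic region (assumed example value)

n_4b = I_OCH2 / 2.0                 # relative number of 4b units
H_PER_4B_ALIPHATIC = 13.0           # aliphatic H per 4b unit (excluding OCH2 and silyl H)
n_E = (I_aliphatic - n_4b * H_PER_4B_ALIPHATIC) / 4.0   # 4H per ethylene unit

content_mol = 100 * n_4b / (n_4b + n_E)
print(f"4b incorporation ≈ {content_mol:.1f} mol%")
```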
The 13C NMR signals of the copolymer were assigned according to the previous assignments for copolymers of ethylene and various 1,1-disubstituted olefins [12-15,18,19]. In the 13C NMR spectrum of the ethylene/4b copolymer, signals at 40 and 65 ppm, which are assigned to the isolated branching of the polyethylene chain and to the carbon adjacent to the oxygen atom, are observed, suggesting an isolated structure of the 4b unit (Figure 3). Successive insertion of 4b would not occur, because no signals were observed between 60 and 55 ppm, where a carbon signal of polyisobutene appears. The observed sole melting temperature (Tm, 130 °C) is lower than that of homopolyethylene but high for the 6.0 mol% incorporation. The bimodal GPC trace of the copolymer indicated that a low-molecular-weight fraction with high 4b incorporation exists. A separation of these fractions by extraction or crystallization in xylene was not possible, probably because the average molecular weight of the lower-Mw fraction still exceeds 10^5 (Figure S14).
Comonomer 5b, which has a shorter distance between the C=C double bond and the protected hydroxy group than 4b, was also copolymerized with ethylene, although the polymer yield was much lower than in the ethylene/4b copolymerization (Run 5). Polymerization under heated conditions largely decreased the molecular weight of the polymer, but a higher comonomer incorporation than in the polymerization at room temperature was achieved, probably because the ethylene concentration in the reaction medium was lowered by heating (Run 6). The direct copolymerization of isoprenol (5a) was unsuccessful, typically yielding a small amount of homopolyethylene (Run 7). In the copolymerization or homopolymerization of hydroxy- or siloxy-substituted vinyl monomers using group 4 metal catalysts, monomers with a shorter distance between the C=C double bond and the functional group tend to show a lower activity and a lower incorporation ratio, probably because of the bulkiness of the substituent [25-27]. Furthermore, 5b would reduce the activity after insertion into the propagating chain end, because the oxygen atom of 5b can easily interact with the titanium center via a six-membered ring formation, which prevents the further coordination of monomers (Figure 4).
Our previous research shows several substituent effects of fluorenylamido-ligated titanium complexes on various olefin polymerizations [12]. Generally, introducing electron-donating alkyl groups at the fluorenyl 2,7-positions greatly improves the activity. The incorporation ratio of bulky comonomers is enhanced when the tertiary alkyl group on the nitrogen is replaced with a less bulky secondary alkyl group. Based on these previous results, the behavior of catalysts 2b and 2c in the copolymerization of ethylene and 5b was compared with that of catalyst 2a (Table 2). Here, the copolymerizations were conducted without the addition of BHT, and polymers with broader molecular weight distributions were obtained. The multimodal distribution of molecular weight may suggest the presence of multiple active species differentiated by the coordination of residual trialkylaluminums. Copolymerization using the non-substituted fluorenylamido-ligated titanium catalyst 2c was unsuccessful and produced only a small amount of polymer (Run 10), whereas copolymerization using catalyst 2b proceeded with a higher activity than 2a with a lower incorporation ratio of 5b (Run 8 vs. Run 9).
Run 9).These polymerization behaviors can be explained by the previously reported tendency of other polymerizations using 2a-2c.In the 1 H NMR spectrum of the copolymer obtained from Run 9, two signals are observed at 3.5-3.7 ppm, which is assigned to methylene groups (Figure S11).These two signals indicate that the silyl ether was removed during the polymerization, and the obtained E/5b copolymer possesses unprotected hydroxy groups.coordinate to the metal center of the catalyst with monomers and reduce the polymerization activity [24].Although the polymer yield was lower than that of the copolymerization with unfunctionalized 1,1-disubstituted olefins such as isobutene and 2-methyl-1-pentene (Run 1, 2), high-molecular-weight copolymer was obtained with longer polymerization time at ambient temperature and pressure (Run 3).The direct copolymerization of unprotected 4a (run 4) also proceeded when the amount of MMAO was increased, probably because an excess amount of alkylaluminum can mask the hydroxy group of 4a.The analysis of an ethylene/4b copolymer using 1 H NMR spectrum showed a triplet signal at 3.68 ppm, which is assigned to methylene protons adjacent to the oxygen atom, indicating the incorporation of 4b (Figure 2).Furthermore, a singlet 3H signal at 0.78 ppm and a 21H signal at 1.05 ppm are assigned to terminal methyl and isopropylsilyl protons, respectively.The methylene signal appears at the different chemical shift (3.53 ppm) in the ethylene/4a copolymer (Figure S9), indicating that the silyl ether remained in the copolymer.Thus, the comonomer content was calculated from the integral ratio of the - A plant-derived monomer 6b can also be incorporated into polyethylene in a batch polymerization process with a higher concentration of 6b (Scheme 4).The incorporation ratio of 6b was calculated using 13 C NMR because there is no methylene adjacent to the hydroxy group.By comparing the integral ratio of signals at 43 ppm (αδ) and 30 ppm (CH 2 in methylene unit, Figure 5), the comonomer content was calculated to be 1.5 mol%.Silyl groups on the hydroxy groups would not completely be deprotected because carbons assigned to isopropylsilyl groups were observed at 12-14 ppm.
by the previously reported tendency of other polymerizations using 2a-2c.In the 1 H NMR spectrum of the copolymer obtained from Run 9, two signals are observed at 3.5-3.7 ppm, which is assigned to methylene groups (Figure S11).These two signals indicate that the silyl ether was removed during the polymerization, and the obtained E/5b copolymer possesses unprotected hydroxy groups.A plant-derived monomer 6b can also be incorporated into polyethylene in a batch polymerization process with a higher concentration of 6b (Scheme 4).The incorporation ratio of 6b was calculated using 13 C NMR because there is no methylene adjacent to the hydroxy group.By comparing the integral ratio of signals at 43 ppm (αδ) and 30 ppm (CH2 in methylene unit, Figure 5), the comonomer content was calculated to be 1.5 mol%.Silyl groups on the hydroxy groups would not completely be deprotected because carbons assigned to isopropylsilyl groups were observed at 12-14 ppm.by the previously reported tendency of other polymerizations using 2a-2c.In the H NMR spectrum of the copolymer obtained from Run 9, two signals are observed at 3.5-3.7 ppm, which is assigned to methylene groups (Figure S11).These two signals indicate that the silyl ether was removed during the polymerization, and the obtained E/5b copolymer possesses unprotected hydroxy groups.A plant-derived monomer 6b can also be incorporated into polyethylene in a batch polymerization process with a higher concentration of 6b (Scheme 4).The incorporation ratio of 6b was calculated using 13 C NMR because there is no methylene adjacent to the hydroxy group.By comparing the integral ratio of signals at 43 ppm (αδ) and 30 ppm (CH2 in methylene unit, Figure 5), the comonomer content was calculated to be 1.5 mol%.Silyl groups on the hydroxy groups would not completely be deprotected because carbons assigned to isopropylsilyl groups were observed at 12-14 ppm.The obtained copolymer is possibly used for producing plastic materials with improved adhesiveness and paintability rather than polyolefins with no incorporated functional groups.To show the improvement of wettability, the water contact angle was compared between E/IB and E/4a copolymers to evaluate the effect of hydroxy groups incorporated into the copolymer (Figure 6).A self-standing film is not obtained for these copolymers because the polymer yield is not enough, but thin films can successfully be fabricated on polyimide film by pressing the polymer at the melting condition (180 • C).Incorporating 4a significantly decreased the water contact angle according to the incorporation ratio, which showed that hydroxy groups improved the hydrophilicity of the copolymer.
Figure 4 .
Figure 4. Structure of active species after the insertion of comonomer 5b.
Figure 4 .
Figure 4. Structure of active species after the insertion of comonomer 5b.
Figure 4 .
Figure 4. Structure of active species after the insertion of comonomer 5b.
CH2OH signals and signals in the aliphatic region.A further deprotection of silyl groups b
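To make the quantification step described above concrete, the short sketch below estimates a comonomer content from two NMR integrals in the way outlined (comparing a comonomer-specific signal with the backbone CH2 region). The function name, the per-unit nucleus counts, and the example integral values are illustrative assumptions, not data taken from this work.

def comonomer_mol_percent(i_comonomer, i_backbone, n_comonomer_nuclei, n_backbone_nuclei):
    """Estimate comonomer content (mol%) from two NMR integrals.

    i_comonomer: integral of a signal unique to the comonomer unit
                 (e.g., the alpha-delta carbons near 43 ppm in 13C NMR).
    i_backbone:  integral of the main-chain CH2 region (near 30 ppm).
    n_comonomer_nuclei / n_backbone_nuclei: assumed number of equivalent
                 nuclei per repeat unit contributing to each signal.
    """
    comonomer_units = i_comonomer / n_comonomer_nuclei
    ethylene_units = i_backbone / n_backbone_nuclei
    return 100.0 * comonomer_units / (comonomer_units + ethylene_units)

# Hypothetical integrals chosen only to illustrate the arithmetic:
print(comonomer_mol_percent(i_comonomer=3.0, i_backbone=100.0,
                            n_comonomer_nuclei=2, n_backbone_nuclei=2))
# ~2.9 mol% for these made-up numbers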
Table 2. Catalyst effect on the copolymerization of ethylene and 5b. a …, time = 60 min, and temp = 25 °C. b Absolute molecular weight measured using GPC equipped with RI-light scattering-viscometer triple detectors. c Determined using DSC. d Determined using 1H NMR. e Not determined because of very small yield.
Table 1. Copolymerization of ethylene and 1,1-disubstituted olefins using the 2a-MMAO/BHT system. a … b Absolute molecular weight measured by GPC equipped with RI-light scattering-viscometer triple detectors. c Determined by DSC. d Determined by 1H NMR. e [MMAO] = 8.0 mmol. f Temp. = 80 °C. g Not detected.
| 8,024.8 | 2024-01-01T00:00:00.000 | [
"Materials Science",
"Chemistry"
] |
Distribution of Acoustic Power Spectra for an Isolated Helicopter Fuselage
The broadband aerodynamic noise can be studied assuming isotropic flow turbulence and decay. Proudman’s approach allows practical calculations of noise based on CFD solutions of RANS or URANS equations at the stage of post-processing and analysis of the solution. Another aspect is the broadband acoustic spectrum and the distribution of acoustic power over a range of frequencies. The acoustic energy spectrum distribution in isotropic turbulence is non-monotonic and has a maximum at a certain value of the Strouhal number. In the present work the value of the acoustic power peak frequency is determined using a prescribed form of the acoustic energy spectrum distribution presented in papers by S. Sarkar and M. Y. Hussaini and by G. M. Lilley. CFD modelling of the flow around an isolated helicopter fuselage model was considered using the HMB CFD code and the RANS equations.
Introduction
For a conventional helicopter, there are two fundamental elements that contribute to the generation of near-field and far-field noise, the main rotor and the tail rotor [1]. Engine and fuselage noise are typically of secondary significance. A helicopter main rotor generates primarily tonal and broadband noise. Additional sources, including Blade Vortex Interaction (BVI) noise and High Speed Impulsive (HSI) noise, are dominant at specific flight regimes. Basic loading noise during hover is generally dominant in a conical region directed 30 to 40 degrees downward from the rotor plane, while broadband noise radiates mostly out of the plane of the rotor [1]. So for some operating conditions and for specific directions of sound propagation, the broadband noise generated by all parts of the helicopter (including the fuselage) can be a significant contributor to the overall helicopter noise.
According to Proudman [2], the broadband noise can be considered assuming isotropic flow turbulence and its decay. The total acoustic power radiated from embedded finite regions of turbulence contained within an infinite volume of compressible fluid is a function of the local time-averaged kinetic energy of the turbulence per unit volume, k, and the time-averaged rate of dissipation of the kinetic energy per unit volume, ε.
Another aspect is the broadband acoustic spectrum and the distribution of acoustic power over a range of frequencies. The acoustic energy spectrum distribution in isotropic turbulence is non-monotonic. The power spectrum function has a maximum at some value of the Strouhal number. In the present work the value of the acoustic power peak frequency is determined using a prescribed form of the acoustic energy spectrum distribution presented in papers by S. Sarkar and M. Y. Hussaini [3] and by G. M. Lilley [4].
CFD modelling of the flow around an isolated helicopter fuselage model is considered in this paper using the HMB CFD code and the RANS equations.
Broadband isotropic turbulence noise (Proudman's formula)
A simple approach to estimate the acoustic emission of a flying vehicle for turbulent flows assumes that the emitted noise does not have any distinct tones and that the sound energy is continuously distributed over a broad range of frequencies. In this case the broadband noise power can be estimated from the RANS equations using the mean flow kinetic energy k and the dissipation rate ε. Unlike the direct method of simulation and the integral methods, Proudman's [2] approach does not require unsteady CFD solutions and is based on Lighthill's acoustic analogy [5].
The intensity I(x, t) of the radiated sound in the far field is proportional to the square of the fluctuating pressure p and is defined by a relation in which ρ∞ and p∞ are the air density and pressure and c∞ is the ambient sound speed. The intensity of the radiated sound at any point x and at time t can be determined as follows [4,6]: here ρ∞ is the air density, c∞ is the ambient sound speed, y is a position, r is the distance vector between two internal points y and y′ inside the integration volume, and τ is the retarded time, x = |x|. The space-time correlation function Pxx(r, τ) can be written in a form involving a non-dimensional space correlation function f(r/l) and a non-dimensional retarded-time correlation function g(ωτ). Here u0, l, and ω are, respectively, reference values of the turbulent velocity, the turbulent length scale, and the turbulent frequency. The forms for the space and temporal correlation functions accepted in [4] are given by expression (2), where L is the integral length scale. Using the expression (2) for the space correlation function, Proudman obtained an estimate of the acoustic power per unit volume in the time domain, where Mt = (2k)1/2/c∞ is the turbulent Mach number, k is the turbulent kinetic energy per unit mass, and c∞ is the speed of sound. The turbulent dissipation rate ε can be determined [7] by (4). Sarkar and Hussaini [3] recommended the re-scaled constant αε = 0.1, based on DNS data. It should be noted that there are different expressions for the reference turbulent velocity u0 (see, for example, paper [2], where u0 = (2k/3)1/2). In this paper u0 = (2k)1/2, following reference [8].
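As a concrete illustration of how this estimate is applied at the post-processing stage, the sketch below evaluates a Proudman-type acoustic power per unit volume, PA = αε ρ ε Mt^5 with Mt = (2k)^1/2/c∞ and αε = 0.1, from RANS values of k and ε, and converts it to a level in dB. The reference power of 10^-12 W/m^3 and the example input values are assumptions chosen for illustration, not quantities taken from this paper.

import math

ALPHA_EPS = 0.1          # re-scaled constant recommended by Sarkar and Hussaini
P_REF = 1.0e-12          # assumed reference acoustic power per unit volume, W/m^3

def proudman_acoustic_power(k, eps, rho=1.225, c_inf=340.0):
    """Acoustic power per unit volume from RANS k (m^2/s^2) and eps (m^2/s^3)."""
    m_t = math.sqrt(2.0 * k) / c_inf          # turbulent Mach number with u0 = (2k)^0.5
    return ALPHA_EPS * rho * eps * m_t ** 5   # W/m^3

def acoustic_power_level(p_a):
    """Convert acoustic power per unit volume to a level in dB."""
    return 10.0 * math.log10(max(p_a, 1e-300) / P_REF)

# Illustrative values of k and eps, e.g. sampled from a RANS solution:
k, eps = 5.0, 200.0
p_a = proudman_acoustic_power(k, eps)
print(f"P_A = {p_a:.3e} W/m^3, level = {acoustic_power_level(p_a):.1f} dB")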
Total radiated acoustic power
An alternative approach to determine the acoustic power per unit volume is based on the spectral transformation of the intensity I(x, t) of the radiated sound. The spectral density I(x, ω) corresponding to the intensity I(x, t) of the radiated sound is given in [6]. The frequency of the sound ω is the same as in the turbulence, and k is the wavenumber vector of the sound. Taking into account (2), the space-time correlation function (1) may be written with the four-dimensional wave-number/frequency spectrum function [4] (5) and the acoustic spectral density (6). Here ST = ωL/u0 (7) is the turbulent Strouhal number. According to [4], the total radiated acoustic power per unit volume of turbulence in the frequency domain follows; substituting (5) and (6) gives the total radiated acoustic power per unit volume of turbulence.
Proudman's constant
The expression (3) for the total acoustic power radiated per unit volume of flow in the time domain can be written in a form in which α is Proudman's constant. Taking into account (4), the re-scaled constant αε and Proudman's constant α are connected by a simple expression. The proposal that the total acoustic power radiated per unit volume in the time domain (3) is equal to the acoustic power in the frequency domain (9) can be written as (11). After substitution of (3) and (8) into (11),
one obtains (16). Expression (3) can then be rewritten in the corresponding form. Formulas (16)-(18) allow estimation of the frequency peak and of the acoustic power per unit volume from the steady computation of the flow, using the turbulent Mach number Mt and the specific dissipation ω of the k-ω turbulence model [9].
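A rough numerical illustration of such an estimate is sketched below: it assumes that the acoustic spectrum peaks at some turbulent Strouhal number St_peak (a value fixed by the prescribed spectrum shape and treated here simply as an input parameter), takes u0 = (2k)^1/2, and uses the common k-ω relations ε = β* k ω and L = k^1/2/(β* ω) for the turbulence length scale. The parameter values are illustrative assumptions, not results of this paper.

import math

BETA_STAR = 0.09   # standard k-omega model constant (assumed)

def peak_frequency(k, omega, st_peak):
    """Estimate the acoustic power peak frequency (Hz) from RANS k and the
    specific dissipation omega, given an assumed peak Strouhal number."""
    u0 = math.sqrt(2.0 * k)                        # reference turbulent velocity
    length = math.sqrt(k) / (BETA_STAR * omega)    # turbulence length scale
    omega_peak = st_peak * u0 / length             # peak angular frequency, rad/s
    return omega_peak / (2.0 * math.pi)

# Illustrative inputs: k in m^2/s^2, omega in 1/s, St_peak assumed.
print(f"f_peak ~ {peak_frequency(k=5.0, omega=800.0, st_peak=1.0):.0f} Hz")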
Another estimation of the acoustic power and the acoustic intensity was given by Lighthill [5]; here Lref is a reference length. From the comparison of expressions (19) and (20) we obtain a relation between the two estimates. Eventually, we can write an expression for the local sound pressure in the vicinity of an emitting body surface.
Numerical simulation of flow around an isolated helicopter fuselage
For the simulations, an early version of the ANSAT helicopter produced by the JSC Kazan Helicopters is used. CFD computations and experiments were conducted for the fuselage model with and without skids and cross-beams. Multiblock grids for the CFD computations were constructed using the ANSYS ICEM-hexa software. The computational domain was resolved using hexahedral grids and the 3D steady incompressible Reynolds-Averaged Navier-Stokes (RANS) equations. Fully turbulent calculations were performed using the k-ω model [9]. The computational hexa-grid for this model (without skids and cross-beams) contained 964 blocks and about 13,500,000 cells. The computations were performed using the HMB solver of Liverpool University [10]. The results of the CFD modelling were compared to wind tunnel experimental data. The fuselage was tested in the low-speed T-1K wind tunnel of KNRTU-KAI (2.25 m diameter). The wind tunnel fuselage model was constructed using the same CAD model (figure 1) used for the CFD modelling. The mesh topology and the surface grid near the area of the exhausts are presented in figure 2. The aerodynamic performance of this model and the CFD code validation were considered in references [11][12][13][14], and were studied using the T-1K wind tunnel, which is equipped with a six-component Prandtl-type balance. The conditions of the wind tunnel experiment and the CFD modelling corresponded to a free-stream Mach number of 0.1 and a Reynolds number of 4.4×10^6. Results of the numerical simulation for a pitch angle of 0 degrees, including visualization of the Sound Pressure Level, the vector field, and the frequency peak in the acoustic power spectrum, are presented at sections of the flow according to the slices map in figure 3. Figure 6 presents the distribution of the frequency peak in the acoustic power spectrum (see equation (16)) for the same sections. Lowering of the vector field disturbances (figure 4) leads to lowering of the SPL intensity (figure 5) and to a decrease of the frequency spectrum area (figure 6).
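For reference, the Sound Pressure Level shown in these visualizations is conventionally defined from an rms acoustic pressure as in the short sketch below; this is the standard definition with the usual 20 µPa reference pressure, stated here as background rather than as the paper's own expression.

import math

P_REF = 2.0e-5   # standard reference pressure in air, Pa (assumed)

def sound_pressure_level(p_rms):
    """Sound Pressure Level in dB for a given rms acoustic pressure (Pa)."""
    return 20.0 * math.log10(p_rms / P_REF)

print(sound_pressure_level(0.2))   # ~80 dB for an rms pressure of 0.2 Pa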
Conclusions and future work
The acoustic properties (broadband noise) of the flow and the frequency peak in the acoustic power spectrum distribution were estimated. Using Lighthill's acoustic analogy and the RANS equations, the broadband noise Sound Pressure Level was estimated based on Proudman's formula. Proudman's approach was combined with Lilley's estimations to determine the frequency of the peak of the acoustic power spectrum distribution. The structure and the acoustic properties of the flow around an isolated helicopter fuselage were examined in terms of the mean flow field, the turbulent kinetic energy, and the dissipation rate.
In the future, the problem of the helicopter fuselage acoustic properties will be considered using CFD modelling based on the URANS equations and advanced turbulence models. This work is supported by the "Project part of state task in the field of the scientific activity" Grant (No 9.1694.2014/K).
Fig. 2. Topology of blocks, and surface grid at the area of the engine exhausts.
Fig. 6. The frequency peak in the acoustic power spectrum distribution for the sections a) I, b) II, c) III.
"Engineering",
"Physics"
] |
Heating up exotic RG flows with holography
We use holography to study finite-temperature deformations of RG flows that have exotic properties from an RG viewpoint. The holographic model consists of five-dimensional gravity coupled to a scalar field with a potential. Each negative extremum of the potential defines a dual conformal field theory. We find all the black brane solutions on the gravity side and use them to construct the thermal phase diagrams of the dual theories. We find an intricate phase structure that reflects and extends the exotic properties at zero temperature.
Introduction
Holography provides a valuable tool to study non-perturbative properties of strongly coupled Quantum Field Theories (QFT). In particular, it "geometrizes" the Renormalization Group (RG) in the sense that it maps the properties under changes of scale of a QFT to the properties of some dual geometry in one dimension higher than the QFT.
In this paper we will focus on thermal properties of QFTs that arise as deformations of some Conformal Field Theory (CFT) and whose RG flows exhibit exotic properties from the QFT viewpoint [1]. 1 Our model consists of five-dimensional gravity coupled to a scalar field whose potential possesses several negative extrema (but possibly also some positive ones). Each negative extremum gives rise to an AdS solution which is dual to a CFT.
The qualitative form of our potential is depicted on the right-hand side of figure 1 (the case with positive extrema will be discussed in section 4.3). This potential can be derived from a superpotential whose qualitative form is shown in figure 1 (left). Given a purely bosonic theory with a potential, the superpotential is not uniquely defined.
Figure 1. Qualitative forms of the superpotential (left) and potential (right) of our model. The definition of the points φ * , φ c and the different regions labelled I, II and III will be explained around eqs. (3.2) and (3.5).
However, we
want to mimic a situation in which our model is the bosonic truncation of a truly supersymmetric theory, in which case there would be a preferred superpotential. Therefore, as part of the definition of our toy model, we will imagine that the "true" superpotential is the one in figure 1 (left). Under this assumption, the extrema of the potential that are also extrema of the superpotential, labelled φ 1 and φ 4 in figure 1, would be dual to supersymmetric CFTs, whereas the extrema labelled φ 2 and φ 3 in figure 1 (right) would be dual to non-supersymmetric ones. In what follows we will continue to use this "supersymmetric" versus "non-supersymmetric" terminology with the understanding that it is meaningful only in reference to our choice of superpotential. Our goal will be to construct all the black brane solutions of the gravity model and to map each solution to a thermal state in one of the CFTs. As we will see, the fact that this map is non-trivial will result in interesting features of the phase diagrams of the dual CFTs. It is not surprising that some of these features resemble those found in the case of CFTs defined on a curved space [3], since both the temperature and the boundary curvature act as infrared (IR) cut-offs in the CFT.
We will see that the flows at non-zero temperature reflect but also extend some of the exotic properties of the zero-temperature flows. By smoothly deforming the potential on the gravity side we will show that, in some cases, the exotic thermal phase structure of the dual field theories can be continuously connected with more familiar non-exotic cases. We will also see that some of the exotic structures persist in cases in which the potential on the gravity side develops de Sitter-like maxima with positive energy density.
Note added. While this paper was being typewritten we became aware of ref. [4], which has significant overlap with our results.
The model
We study the Einstein-scalar model with action (2.1), with a potential V(φ) derived from a superpotential W(φ) through (2.2). By taking derivatives with respect to φ on both sides of (2.2) we see that an extremum of W(φ) will automatically be an extremum of V(φ), but the converse is not true in general. We consider a particular model where the potential has more extrema than the superpotential. Our superpotential is given in (2.3), where L is a length scale, and we choose φ Q = 10 and φ M = 0.5797. With these values for the parameters the superpotential has a maximum at φ 1 = 0 and a minimum at φ 4 ≈ 2.297, as shown in figure 2. These points also correspond to supersymmetric extrema of the potential, which is displayed in figure 3. In addition, the potential possesses a non-supersymmetric minimum at φ 2 ≈ 0.861 and a non-supersymmetric maximum at φ 3 ≈ 0.943, see figure 3 (right). At the extrema φ 1 , φ 2 , φ 3 , φ 4 of the potential the theory admits AdS solutions whose radii are fixed by the corresponding values of the potential. We will denote the dual conformal field theories by CFT 1 , CFT 2 , CFT 3 , CFT 4 . For convenience, we have chosen the specific values of the superpotential parameters φ Q , φ M so that the mass of the scalar field satisfies m1^2 L1^2 = m3^2 L3^2 = −3 both at φ 1 and φ 3 , which implies that the dual scalar operator has mass dimension three. For concreteness, in this paper we will restrict our attention to black brane solutions for which the value of the scalar field at the horizon, φ H , lies between φ 1 and φ 4 . Since the potential is invariant under φ → −φ, there is no loss of generality in considering only positive values of φ. Moreover, a preliminary exploration indicates that none of the physics that we will discuss is affected by the form of the potential beyond φ 4 , in particular by the presence of an extra non-supersymmetric maximum at φ 5 ≈ 4.130.
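As a quick cross-check of the quoted dimensions, the standard AdS5/CFT4 mass-dimension relation Δ(Δ − 4) = m^2 L^2 can be evaluated numerically; the short sketch below illustrates this textbook relation and is not code from the paper.

import math

def dimension_from_mass(m2L2):
    """Dimension of the operator dual to a scalar of mass m in AdS5,
    using the standard branch Delta = 2 + sqrt(4 + m^2 L^2)."""
    return 2.0 + math.sqrt(4.0 + m2L2)

print(dimension_from_mass(-3.0))   # 3.0, matching the dimension-three operator at phi_1 and phi_3
# At a minimum of the potential m^2 L^2 > 0, so Delta > 4 and the operator is irrelevant,
# consistent with the value Delta_2 ~ 4.5 quoted below for the CFT_2.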
As mentioned above, the form of our potential near φ 1 and φ 3 describes explicit deformations of the CFT 1 and the CFT 3 by a source Λ for a dimension-three scalar operator. Figure 3. Potential of our model. The definition of the points φ * , φ c and the different regions labelled I, II and III will be explained around eqs. (3.2) and (3.5). On the right we zoom into the region marked with a red circle on the left.
We will also consider deformations of any of the CFTs by expectation values ⟨O⟩ of the corresponding operator. Although these are not expectation values in the vacuum but in thermal states, in an abuse of language we will still refer to them as VEVs. Our goal will be to find all the black brane solutions in the model with φ 1 ≤ φ H ≤ φ 4 and to use them to construct the phase diagrams of the dual CFTs.
Study of the flows
We use the following ansatz for black brane solutions: The scalar field is also a function of r. The reparametrization freedom in the r direction means that the solution is completely specified by two functions instead of three, for example by giving A(φ) and h(φ). The solutions we seek are regular at and outside the horizon and asymptote to AdS 5 at large r. By studying these solutions we can reconstruct the thermodynamics of the dual field theories.
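For orientation, a frequently used form of such a black-brane ansatz (a generic convention, not necessarily the paper's exact expression, and with an assumed normalization of the entropy in terms of G5) is

ds^2 = e^{2A(r)} \left( -h(r)\, dt^2 + d\vec{x}^{\,2} \right) + \frac{dr^2}{h(r)}, \qquad \phi = \phi(r),

for which the Hawking temperature and the entropy density take the standard forms

T = \frac{e^{A(r_H)}\, |h'(r_H)|}{4\pi}, \qquad s = \frac{e^{3A(r_H)}}{4 G_5},

with r_H the horizon radius defined by h(r_H) = 0.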
CFT 1
In this section we will construct thermal states of the CFT 1 deformed by a source Λ for the dimension-three scalar operator O dual to φ. These states are dual to gravity solutions for which φ approaches φ 1 asymptotically.
Let us start by considering the vacuum of the theory. Since the potential is derived from a superpotential, we can solve the BPS equations to find a solution connecting φ 4 and φ 1 , according to the structure of the superpotential, see figure 2. This solution interpolates smoothly between two AdS 5 geometries of different radii. It is known as a "skipping" solution because it skips the non-supersymmetric extrema at φ 3 and φ 2 [1]. The state in the CFT 1 dual to this flow has zero temperature because the gravity solution has no
The CFT 1 theory has a second Lorentz-invariant state to which we will refer as a second vacuum. It corresponds to another horizonless gravity solution connecting φ 1 and φ 2 , which again smoothly interpolates between the corresponding AdS 5 geometries. However, in this case the minimum of the potential at φ 2 is not a minimum of the superpotential, so this solution is not supersymmetric. The dual field theory state has zero temperature but strictly positive energy density. In figure 4 these two vacua correspond to the two points where the curves touch the vertical axis. Note that the energy difference between them is independent of the renormalization scheme. Moreover, this difference is consistent with our choice of terminology regarding "supersymmetric" and "non-supersymmetric" solutions in the sense that the energy of the non-supersymmetric vacuum is higher than that of the supersymmetric one. In this language, the non-supersymmetric vacuum provides a simple example of metastable dynamical supersymmetry breaking [5].
We will now construct the thermal states of the CFT 1 . We parametrize the black brane solutions by φ H , defined as the value of φ at the horizon. Let us start by heating up the non-BPS vacuum. If we start with φ H slightly to the left of φ 2 , we obtain thermal states with low temperature, and as we move φ H towards φ 1 , the temperature increases, and goes to infinity when φ H → φ 1 . This corresponds to the thermal branch labeled I in figure 4. Near the AdS points the thermodynamic functions have the expected conformal behavior. The point labelled "a" in figure 4 indicates a thermal state on the I branch whose dual gravity solution is displayed in figure 5. Now let us heat up the BPS vacuum. We start with values of φ H slightly to the left of φ 4 , which correspond to low-temperature states on a second thermodynamic branch of the CFT 1 , labelled III in figure 4, that emanates from the BPS vacuum. As we decrease the value of φ H from φ 4 to a certain value φ c ≈ 1.3436 located between φ 3 and φ 4 (see figure 3), the temperature first increases, then reaches a maximum and then decreases, approaching zero as φ H → φ + c . The branch III thus exhibits a maximum temperature. Interestingly, as φ H → φ + c the energy of the solution approaches that of the non-BPS vacuum. Note, however, that the I and III branches meet on the vertical axis with different slopes, meaning that the specific heat on each branch is different all the way down to T = 0. This is illustrated in figure 6, which shows the ratio s/T 3 , with s the entropy density. We see that, although both s and T go to zero as φ H → φ + c , the ratio remains finite and is different on the I and III branches.
Figure 7. Scalar field as a function of the proper distance r p along the holographic direction, measured from the horizon, for different solutions with φ H approaching φ c from the right.
We therefore conclude that in the limit φ H → φ + c we obtain a solution which has zero temperature from the viewpoint of the CFT 1 . This may seem in contradiction with the fact that the gravity solution seems to end not at an extremum of the potential but at a regular horizon located at φ H = φ c . Moreover, from the viewpoint of the CFT 1 , this limiting flow has the same energy as the non-BPS vacuum described by the flow from φ 1 to φ 2 . All these features can be understood by noting that, as φ H approaches φ c from the right, the corresponding flow develops a larger and larger region in which φ is in the vicinity of φ 2 and the metric is approximately AdS. In other words, the flow exhibits "walking" or quasi-conformal dynamics near the fixed point described by the CFT 2 . This large AdS region corresponds to the plateau in figure 7, where we plot the value of the scalar field as a function of the proper distance along the holographic direction for several flows with φ H close to φ c . Note that the proper distance from the plateau to the horizon approaches a finite constant as φ H → φ + c . As the size of the plateau grows so does the relative redshift between the horizon and the UV region where φ → φ 1 . In the limit φ H → φ + c the flow splits into two independent flows, one from φ 1 to φ 2 and one from φ 2 to φ c . Since the redshift diverges in this limit, the temperature and the entropy density go to zero as seen from the CFT 1 . Nevertheless, we will see below that the flow from φ 2 to φ c corresponds to a state in the CFT 2 with non-zero temperature and entropy density.
Having introduced φ c , we can now define three regions according to the value of φ H (see figure 3): Region I with φ ∈ (φ 1 , φ 2 ), Region II with φ ∈ (φ 2 , φ c ), and Region III with φ ∈ (φ c , φ 4 ).
What we have seen so far is that solutions with φ H in regions I and III are dual to thermal states of the CFT 1 . As we will see shortly, solutions with φ H outside regions I and III are not states of the CFT 1 because in these cases φ does not asymptote to φ 1 at large r.
We are now ready to discuss the thermodynamics of the CFT 1 . From the free energy shown in figure 4 (left) we conclude that there is a first-order phase transition at T c /Λ ≈ 0.499. We have displayed in solid green the preferred thermodynamical states. From the energy density shown in figure 4 (right), we find a latent heat for the first-order phase transition of about 5.2816 Λ^4. The entropy density is shown in figure 6. In these and in subsequent plots we show in dashed blue the locally stable but globally unstable thermal states, and in dotted red the locally unstable thermal states. Notice that in the free energy plot, for branch III, when changing from blue to red the curvature changes accordingly from convex to concave. Correspondingly, in the energy plot we see a negative specific heat and therefore also a negative speed of sound squared. If slightly perturbed, these homogeneous black branes would display an exponentially growing mode which would lead the system to an inhomogeneous configuration, as shown in [6].
Note that the free energy in figure 4 (left) is different from the usual swallow-tail first-order phase transition that the reader may be familiar with in two related ways. First, the upper turning point lies at T = 0, leading to a metastable dynamical supersymmetry breaking vacuum. Second, the upper branch includes both locally stable and locally unstable regions, whereas in simpler cases these states are all locally unstable. We will see below that the CFT 1 can be continuously deformed so that its phase diagram becomes the usual swallow-tail first-order phase transition.
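The numbers quoted above are the kind of quantities one reads off numerically once the free energy of each branch is known as a function of temperature. A generic way to do this (a sketch assuming both branches have been tabulated, with made-up data below; it is not the paper's code) is to interpolate F(T) on each branch and locate their crossing, the latent heat then following from the entropy jump:

import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import brentq

def critical_temperature(T1, F1, T2, F2):
    """Locate Tc where two free-energy branches cross (first-order transition)."""
    f1, f2 = interp1d(T1, F1), interp1d(T2, F2)
    lo = max(T1.min(), T2.min())
    hi = min(T1.max(), T2.max())
    return brentq(lambda T: f1(T) - f2(T), lo, hi)

# Made-up monotonic branches that cross once, purely to illustrate the recipe:
T = np.linspace(0.2, 1.0, 50)
F_low = -T**4                 # one branch
F_high = 0.5 - 2.0 * T**4     # the other branch
Tc = critical_temperature(T, F_low, T, F_high)
# Latent heat per unit volume: Tc * (s_high - s_low), with s = -dF/dT on each branch.
s_low, s_high = 4 * Tc**3, 8 * Tc**3
print(Tc, Tc * (s_high - s_low))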
CFT 2
Let us now consider the CFT dual to the AdS solution for which φ = φ 2 . We are interested in flows that have this CFT as their UV fixed point. Since φ = φ 2 corresponds to a minimum of the potential, the dual scalar operator has dimension ∆ 2 greater than four, specifically ∆ 2 ≈ 4.5069. Turning on a source for such an irrelevant operator would destroy the UV of the theory. Therefore, we will seek flows that start at φ = φ 2 and are driven exclusively by a VEV.
The undeformed CFT 2 has one vacuum, given by the AdS solution with φ = φ 2 and h = 1. One way to turn on a non-zero temperature is to simply replace this solution by the corresponding Schwarzschild-AdS solution on the gravity side. In this solution the VEV of the scalar operator vanishes and therefore T is the only scale. As a consequence, any two values of T are physically equivalent to one another. Therefore one should really think of this entire branch as describing a single physically distinct thermal state. In addition to this state, the theory has a second thermal state described by the gravity solution connecting φ 2 and φ c that we found in the previous section. The functions in this solution are shown in figure 8. Examining the behavior near φ 2 we have verified that this flow is indeed triggered exclusively by a VEV of the scalar operator. Moreover, the solution possesses a regular horizon at φ = φ c , and therefore it describes a thermal state of the CFT 2 . From the behaviour near the horizon and from the fall-offs near φ = φ 2 we obtain the temperature and the entropy density for this state in units of the VEV; the resulting value is quoted below.
Figure 8. Numerical solution describing the thermal state with non-zero VEV in the CFT 2 . This flow may be obtained either as the limit φ H → φ + c or as the limit φ H → φ − c .
Note that we are referring to this solution as a thermal state and not as a branch of states. The reason is that any two states are physically distinguished only by the dimensionless ratio T/⟨O⟩^(1/∆ 2 ), which as we have seen is uniquely fixed by the solution.
We therefore conclude that, at any non-zero T , the CFT 2 has two physically distinct thermal states, one with T/⟨O⟩^(1/∆ 2 ) = 0 and one with T/⟨O⟩^(1/∆ 2 ) ≈ 0.5149. Comparing their free energies we have verified that the thermodynamically preferred state is the one with vanishing VEV. Interestingly, the state with vanishing VEV can be viewed as the IR limit of the thermal branch of the CFT 1 whose gravity flow ends at φ 2 , whereas the state with non-vanishing VEV can be viewed as the IR limit of the thermal branch of the CFT 1 whose gravity flow ends at φ c . This is illustrated, for example, by their entropy-to-temperature ratios. Indeed, we see in figure 6 that the zero-temperature limit of s/T 3 for the upper thermal branch of the CFT 1 agrees precisely with the value of this ratio in the zero-VEV thermal state of the CFT 2 , represented by the continuous horizontal line corresponding to φ = φ 2 . Similarly, the zero-temperature limit of the lower thermal branch of the CFT 1 that ends at the non-BPS state agrees precisely with the value of this ratio in the non-zero-VEV thermal state of the CFT 2 , represented by the dashed horizontal line.
CFT 3
Recall that we have chosen the parameters of our superpotential so that the dimension of the scalar operator in the CFT 3 is ∆ 3 = 3. The flows that we will describe in this section are therefore characterised by a source Λ for this operator. Since the potential is not symmetric around φ 3 , flows with a positive source are physically distinct from flows with a negative source, and we will therefore consider each case separately. In terms of gravity solutions, this means that the scalar field φ will asymptote to φ 3 from the left for a negative source and from the right for a positive source. Figure 9. Numerical solution describing the thermal state with non-zero VEV in the CFT 3 . This flow is obtained for φ H = φ * .
We start by studying the theory with vanishing source Λ = 0. The vacuum is given by the AdS solution with φ = φ 3 and h = 1. If we consider the zero-source theory at non-zero temperature, we find two physically distinct thermal states, in analogy with the case of the CFT 2 . The first thermal state is described by the AdS-Schwarzschild solution with φ = φ 3 . The second thermal state corresponds to a gravity solution in which the scalar field starts in the UV at φ = φ 3 and increases monotonically until the flow ends at a regular horizon located at a point given by φ * ≈ 1.0701 (see figure 3). This flow is shown in figure 9. From the viewpoint of the CFT 3 with zero source, these two states are distinguished by the fact that the first one has vanishing VEV whereas the second one has a non-vanishing VEV. Comparing their free energies we have verified that the thermodynamically preferred state is the one with vanishing VEV. We will now deform the CFT 3 by turning on a source for the scalar operator. Since this operator is relevant, the UV of the theory will still be described by the undeformed CFT 3 . We therefore expect that the two thermal branches of the sourceless theory will be reflected in the high-temperature physics of the deformed theory.
In order to classify the different gravity solutions with a φ(r) that asymptotes to φ 3 for large r it is useful to divide region II into three subregions: Region IIa with φ ∈ (φ 2 , φ 3 ), Region IIb with φ ∈ (φ 3 , φ * ), and Region IIc with φ ∈ (φ * , φ c ).
Solutions with φ H in IIb asymptote to φ 3 from the right, so they correspond to thermal states of the CFT 3 deformed by a positive source. Solutions with φ H in IIa and IIc asymptote to φ 3 from the left, so they correspond to thermal states of the CFT 3 deformed by a negative source. The difference between these two cases is that solutions with φ H in IIa are monotonic, whereas solutions with φ H in IIc exhibit a "bounce", namely, they are non-monotonic in φ: they first decrease and go below φ 3 , then they have a bounce, and finally asymptote to φ 3 from the left. The point φ * is defined as the point where the gravity solutions start to develop a bounce. From the field theory viewpoint, it is a limiting point where the source changes sign from positive to negative, so at exactly φ H = φ * we have a sourceless solution, as explained above.
Negative source
Thermal states of the CFT 3 deformed by a negative source correspond to gravitational solutions with φ H in regions IIa and IIc. The vacuum of this theory is given by a flow from φ 3 to φ 2 with h = 1, which corresponds to a smooth interpolation between two AdS solutions. Notice that this flow is not a solution to the BPS equations obtained from the superpotential (2.3). Now we can heat up this vacuum solution, and as we move φ H from φ 2 to φ 3 we reconstruct the thermodynamic branch labeled as IIa in figure 10 and figure 11 (left). Values of φ H near φ 2 correspond to low temperatures, and the thermodynamics corresponds to the thermodynamics of the AdS-Schwarzschild solution for the CFT 2 . Values of φ H near φ 3 correspond to high temperatures, and the thermodynamics corresponds to the thermodynamics of the AdS-Schwarzschild solution for the CFT 3 . Therefore, this IIa branch smoothly interpolates between the two zero-source, zero-VEV thermal branches of the CFT 2 and the CFT 3 , respectively. For an example of a numerical solution with φ H in IIa see figure 12, solution d.
Consider now the solutions with φ H in IIc. Recall that these solutions exhibit a bounce. If φ H is close to φ * from the right, the bounce is small, and this solution corresponds to large temperatures and the thermodynamics is given by the thermodynamics of the non-zero-VEV thermal branch of the CFT 3 . As we increase φ H , the bounce becomes larger, and when φ H gets close to φ c the turning point gets closer to φ 2 . In the limit φ H → φ − c , the flow decouples into two flows, one from φ 3 to φ 2 and one from φ 2 to φ c . This decoupling is analogous to the decoupling in the limit φ H → φ + c that we studied in the CFT 1 . Again, there is an infinite redshift and the dual field theory state has zero temperature from the viewpoint of the CFT 3 . In particular, this implies that there are two Lorentz-invariant states, to which we will refer as non-degenerate vacua. In summary, the states with φ H in IIc give rise to the thermodynamic branch labelled as IIc in figure 10 and figure 11 (left). This branch interpolates smoothly between the sourceless CFT 3 branch with non-vanishing VEV at large temperatures and the sourceless CFT 2 branch with non-vanishing VEV at low temperatures. For an example of a numerical solution with φ H in IIc see figure 12, solution e.
We thus conclude that the CFT 3 deformed with a negative source possesses two thermal branches, labelled IIa and IIc in figure 10 and figure 11 (left). One of them connects the AdS-Schwarzschild solutions at the endpoints, and the other connects the non-zero-VEV branches. Nevertheless, the thermodynamics of the CFT 3 does not exhibit any phase transitions. The thermodynamically preferred states are those of IIa, and the states of IIc are only metastable. There are no locally unstable thermal states in the CFT 3 . The high-temperature physics reflects the two thermal branches of the zero-source theory and the low-temperature physics reflects the two thermal branches of the zero-source CFT 2 .
Positive source
Let us now consider the CFT 3 deformed with a positive source. On the gravity side this corresponds to solutions with φ H in IIb that asymptote to φ 3 from the right. These solutions do not bounce, and the scalar field increases monotonically from φ 3 to φ H . For an example of one of these numerical solutions see figure 12, solution f. If we start with φ H slightly to the right of φ 3 we obtain thermal states with high temperature, and the thermodynamics is approximately that of the undeformed CFT 3 . If we now increase the value of φ H towards φ * , the temperature first decreases, then reaches a minimum value T min /Λ ≈ 19.12, and then increases, eventually diverging as φ H → φ − * . This behaviour is clearly illustrated in figure 13 and figure 11 (right). Near this value, the thermodynamics approaches the thermal branch of the CFT 3 with zero source and non-zero VEV. The existence of a minimal temperature means that our analysis is unable to identify a candidate ground state of the CFT 3 deformed with a positive source. A possible reason is that such a source destabilizes the zero-temperature theory, and that only in the presence of a sufficiently large temperature does the thermodynamic ensemble become well defined.
Note that the limit T /Λ → ∞ can be thought of as the zero-source limit. Indeed, in this limit we recover precisely the sourceless cases, since the thermodynamics asymptotes to the AdS-Schwarzschild thermal branch and to the branch with non-vanishing VEV. This is, in a sense, the inverse of the situation for the III branch of the CFT 1 : in that case there is a maximum temperature and the thermodynamics asymptotes to the two sourceless thermal branches for T /Λ → 0.
CFT 4
We close this section with some brief comments on this case. This theory has only one vacuum, the AdS solution with φ = φ 4 and h = 1. It has only one thermal branch, the
AdS-Schwarzschild solution with φ = φ 4 . These features are reflected in the IR physics of the CFT 1 . Since φ 4 is a minimum of the potential the dimension of the scalar operator is larger than 4. Turning on a source for such an irrelevant operator would therefore destroy the UV and we have not found any flows that start at φ 4 and are triggered by a VEV.
Related theories
So far we have studied the theory (2.1) with the superpotential (2.3) and φ M = 0.5797. In this section we study the theory for other values of φ M . These new examples will shed light on some aspects of the thermodynamics of exotic RG flows that we have encountered above.
Connection with the usual swallow-tail
In this subsection we establish a connection between the thermodynamics of the CFT 1 studied in section 3.1 and the thermodynamics of a theory with the usual swallow-tail first-order phase transition. We obtain one theory as a continuous deformation of the other.
Specifically, we smoothly modify the superpotential (2.3) by continuously increasing the value of the parameter φ M from φ M = 0.5797 to φ M = 0.9, while keeping φ Q = 10 constant. For the initial value φ M = 0.5797, the thermodynamics of the CFT 1 is displayed in figure 4. As we increase the value of φ M the two non-supersymmetric extrema approach each other, and for φ M = 0.5808 they merge into an inflection point (we study this particular case in the next section). For larger values of φ M there are no non-supersymmetric extrema of the potential in between the two extrema of the superpotential. This implies that the non-supersymmetric vacuum of the CFT 1 does not exist. As a consequence, the upper thermal branch of this theory does not touch the T = 0 axis any more. This is illustrated in figure 14 (top), which corresponds to φ M = 0.64. In this case the turning point of the energy density is no longer a kink, and there are locally unstable phases in the two turning regions, separated by a region of locally stable states. This is more clearly seen in figure 14 (center), which corresponds to φ M = 0.8. If we keep increasing φ M further we eventually recover the usual swallow-tail situation in which there is a single locally unstable region, as shown in figure 14 (bottom), which corresponds to φ M = 0.9.
For all these values of φ M the CFT 1 possesses a first-order phase transition at the temperature T c indicated by a vertical line in figure 14. If we increase φ M even further, then the latent heat of the first-order phase transition becomes smaller and smaller, until it vanishes for φ M ≈ 1.2. At this point the theory possesses a second-order phase transition. For larger values of φ M the transition becomes a smooth crossover.
Inflection point
We will now study the thermodynamics of the theory with φ M = 0.5808. For this value the two non-supersymmetric extrema merge with one another, giving rise to an inflection point at φ 2 = φ 3 ≈ 0.9015. Similarly, the two points φ * and φ c also merge with one another, so that regions IIa and IIc merge with region IIb. Regions I and III remain qualitatively similar. The non-supersymmetric vacuum for the CFT 1 still exists as the flow from φ 1 to φ 2 = φ 3 . Thus the thermodynamics of the CFT 1 remains qualitatively unchanged with respect to that of section 3.1 (figure 4). Notice that the skipping flows, those with φ H in region III, skip only one CFT in this case.
Consider now the CFT defined by the inflection point. At leading order, the scalar operator dual to the scalar field is a marginal operator because the first and second derivatives of the potential vanish. However, beyond leading order the operator becomes marginally irrelevant if the source is negative, and marginally relevant if the source is positive [1]. For vanishing source, this CFT has only one vacuum given by the AdS solution with φ = φ 2 = φ 3 , and it has two thermal branches, the AdS-Schwarzschild solution, and a flow
It is interesting to ask what happens if we now increase φ M slightly above 0.5808. In this case the inflection point disappears and so does the CFT at that point. Region IIb disappears and regions I and III merge. What is then the fate of solutions which used to have φ H in region IIb? The inflection point disappears, but there is a region of the potential which has near-zero derivative, so it gives rise to a near-AdS solution. Thus, the corresponding solutions "stay for a long time" in this quasi-conformal region before reaching φ 1 . The thermodynamic branches no longer touch the T = 0 axis but they get very close to it due to the large redshift associated to the near-AdS region. Thus the states that used to be thermal states of the CFT at φ 2 = φ 3 now become thermal states of the CFT 1 .
Potential with a positive maximum
It is interesting that by varying the parameter φ M one can continuously deform the potential in such a way that the non-supersymmetric maximum at φ 3 lies in the positive region. For illustration purposes, we choose φ M = 0.5, for which the potential is shown in figure 15 (left). The existence of a globally defined superpotential guarantees that the supersymmetric, skipping flow that connects φ 1 to φ 4 still exists. In fact, we have verified that the entire thermodynamics of the CFT 1 is qualitatively identical to that discussed above.
Constant-scalar solutions with φ = φ 3 still exist, but the spacetime metric is now dS instead of AdS. It would be interesting to find flows that connect an AdS extremum to this dS solution.
Interpolating between exotic RG flows at zero temperature
So far we have focussed on the study of thermal states described by exotic RG flows. In this section we consider the zero-temperature case. Exotic RG flows at T = 0 can be classified as:
• Bouncing flows
• Skipping flows
• Irrelevant VEV flows
• Relevant VEV flows
In this section we review these types of flows and present a potential that allows us to interpolate continuously between all of them.
For presentation purposes, we consider a concrete example of a potential with a free parameter g, where L is a length scale. By varying the parameter g, we find the different cases of exotic RG flows. We only consider zero-temperature solutions (h = 1). For the range of g that we study, g ∈ (0.84, 0.99), the qualitative features of the potential are always the same. In particular, it has two maxima that we denote by φ 1 and φ 3 , and two minima that we denote by φ 2 and φ 4 , as in the rest of the paper. Nevertheless, we will see that the structure of the flows does change qualitatively as g varies within the range above. To illustrate this we focus on flows that end on the minimum at φ 4 from the left, which corresponds to turning on a negative source for the irrelevant operator at φ 4 . The absolute value of the source is irrelevant since it is the only scale in the CFT dual to φ 4 . We then adopt an IR viewpoint in the sense that we start at φ 4 as an IR fixed point and ask what the possible UV origin of the flow is for different values of g. Pictorially, one can think of this as "shooting" from the IR and seeing where the flow stops in the UV. We start with the value of the parameter g ≈ 0.84. In this case no exotic RG flow is present. The flow starts in the IR at φ 4 and stops in the UV at φ 3 . This is just a usual RG flow that interpolates smoothly between two adjacent AdS solutions.
Next we consider g ≈ 0.97. The solution starting at φ 4 bounces back before reaching φ 3 , see figure 16 (top left). This exotic behavior is related to the global structure of the potential, and it is difficult to predict a priori based on a purely local analysis. It would be interesting to find some physical quantity that indicates when a given potential will have bouncing solutions before solving the equations. Nevertheless, after solving many different cases we have developed some intuition about when it does happen: the potential must be "steep enough" between the maximum and the minimum, and also the minimum must be "deep enough". From the viewpoint of the field theory, this bouncing flow corresponds to a β-function that has branch cuts [1]. If we now increase the value of g in a continuous way up to g ≈ 0.99, the minimum at φ 4 gets deeper, the potential between φ 3 and φ 4 gets steeper, and the bounce gets larger. Eventually, the bounce is large enough to reach the minimum at φ 2 . Then, instead of bouncing back, the flow continues and reaches φ 1 , see figure 16 (top right). Thus, we can think of the skipping flow as a bouncing flow in which the bounce is large enough to overpass the nearest minimum. From the viewpoint of the field theory, this flow corresponds to a β-function that overpasses two CFTs at φ 2 and φ 3 , before reaching φ 1 . Now, if we fine-tune the parameter g in such a way that we get the exact limiting case between the bouncing flow and the skipping flow, g ≈ 0.98589, then the flow stops precisely at φ 2 . This corresponds to the irrelevant VEV flow, see figure 16 (bottom left). At the minimum φ 2 the dual scalar operator is irrelevant, and this flow corresponds to turning on a VEV for that operator, while having vanishing source. VEV flows are non-generic in the sense that they require a fine-tuning of the potential.
Finally, we can also fine-tune the parameter g to get the limiting case when the bounce starts forming, g ≈ 0.84143. This is precisely the relevant VEV flow, see figure 16 (bottom right). At the maximum φ 3 the dual operator is relevant, and this flow corresponds to turning on a VEV while keeping the source equal to zero. We close this section by noting that the different types of flows above can occur combined with one another for certain potentials. For illustration we will provide two examples. The first one is a skipping flow that skips not only two extrema but four. This arises as a solution with the following potential: This potential has six extrema at φ 1 < φ 2 < φ 3 < φ 4 < φ 5 < φ 6 , with φ 1 a maximum, φ 2 a minimum, etc. We find solutions starting in the UV at φ 1 = 0 and ending in the IR at φ 2 , φ 4 and φ 6 , as shown in figure 17 (left). Thus, in this case the CFT 1 has three non-degenerate vacua. Presumably this can be extended to an arbitrary number of vacua by choosing the potential appropriately. The second example is a flow that bounces twice and skips several extrema, see figure 17 (right). We obtain this flow as a solution with the following potential: By fine-tuning the parameters of this potential, one can also construct other interesting cases, for example, an irrelevant VEV flow that also skips several extrema.
Discussion
We have studied exotic RG flows at non-zero temperature. In part, our results can be seen as a demonstration that the types of zero-temperature exotic RG flows listed in section 5 also exist at non-zero temperature. One relevant difference is the fact that the two cases that require fine-tuning at T = 0, the two VEV flows, do not require such fine-tuning
The gravity model that we have used is a bottom-up model. It would be interesting to find top-down models that exhibit similar features. However, in consistent truncations to five dimensions of ten-or eleven-dimensional supergravity that retain only one scalar it is uncommon that there is more than one extremum and, if so, the additional extrema are typically unstable because they violate the Breitenlohner-Freedman bound.
It would also be interesting to extend our analysis to the case of multiple scalar fields, which at zero temperature has been analysed in [7].
The CFT 1 in our model possesses two non-degenerate vacua. The one with higher energy is metastable. If one imagines that our model may be the bosonic truncation of a supersymmetric model, then this provides a simple example of dynamical metastable supersymmetry breaking [5]. It would be interesting to use this model to study time-dependent properties such as the evolution of bubbles of the metastable vacuum.
"Physics"
] |
Carbachol Induces a Rapid and Sustained Hydrolysis of Polyphosphoinositide in Bovine Tracheal Smooth Muscle: Measurements of the Mass of Polyphosphoinositides, 1,2-Diacylglycerol, and Phosphatidic Acid*
The effects of carbachol on polyphosphoinositide and 1,2-diacylglycerol metabolism were investigated in bovine tracheal smooth muscle by measuring both lipid mass and the turnover of [3H]inositol-labeled phosphoinositides. Carbachol induces a rapid reduction in the mass of phosphatidylinositol 4,5-bisphosphate and phosphatidylinositol 4-monophosphate and a rapid increase in the mass of 1,2-diacylglycerol and phosphatidic acid. These changes in lipid mass are sustained for at least 60 min. The level of phosphatidylinositol shows a delayed and progressive decrease during a 60-min period of carbachol stimulation. The addition of atropine reverses these responses completely. Carbachol stimulates a rapid loss in [3H]inositol radioactivity from phosphatidylinositol 4,5-bisphosphate and phosphatidylinositol 4-monophosphate associated with production of [3H]inositol trisphosphate. The carbachol-induced change in the mass of phosphoinositides and phosphatidic acid is not affected by removal of extracellular Ca2+ and does not appear to be secondary to an increase in intracellular Ca2+. These results indicate that carbachol causes phospholipase C-mediated polyphosphoinositide breakdown, resulting in the production of inositol trisphosphate and a sustained increase in the actual content of 1,2-diacylglycerol. These results strongly suggest that carbachol-induced contraction is mediated by the hydrolysis of polyphosphoinositides with the resulting generation of two messengers: inositol 1,4,5-trisphosphate and 1,2-diacylglycerol.
The intracellular events involved in the regulation of smooth muscle contraction remain a matter of controversy. It has been proposed that an increase in intracellular free Ca2+ concentration activates myosin light chain kinase, a Ca2+-calmodulin-dependent enzyme, leading to the phosphorylation of myosin light chain, and that this molecular event induces contraction through an increased interaction of myosin with actin (1-7). This model assumes that during the sustained phase of contraction, the intracellular free Ca2+ concentration and the amount of phosphorylated myosin light chain remain elevated. However, recent work by Morgan and Morgan (8, 9) shows that, when aequorin is employed as an intracellular calcium indicator, addition of either phenylephrine or angiotensin to vascular smooth muscle leads to a transient, rather than sustained, increase in intracellular free Ca2+, but a sustained contractile response. Likewise, Silver and Stull (10), and Aksoy et al. (11) have shown that the amount of phosphorylated myosin light chain rapidly rises after agonist addition to either tracheal or vascular smooth muscles and then gradually returns toward the base-line value during the sustained phase of contraction. These studies indicate that the mechanisms by which Ca2+ acts may be more complex than previously thought. They have led to the postulate that a second calcium-dependent mechanism operates during the sustained phase of smooth muscle contraction (10-12, 58).
In recent years it has been shown that the interaction of Ca2+-mobilizing hormones with their receptors activates a specific phospholipase C which catalyzes the hydrolysis of PtdIns-4,5-P2 in a variety of tissues or cells (13-17). This results in production of Ins-P3 and 1,2-diacylglycerol. It is currently believed that Ins-P3 causes the release of Ca2+ from an intracellular pool (presumably endoplasmic reticulum) and produces the initial intracellular Ca2+ transient (14, 18), thereby activating Ca2+-calmodulin-dependent enzymes. On the other hand, an increase in the 1,2-diacylglycerol content of the plasma membrane is thought to activate the Ca2+-activated, phospholipid-dependent protein kinase (C-kinase) (17, 19). Based on results obtained from the use of agents which bypass receptor-mediated events and directly activate Ca2+-calmodulin-dependent kinases and the C-kinase in smooth muscle, we have proposed that in smooth muscle contraction as well as secretory responses in many tissues, the calmodulin branch of the Ca2+ messenger system is transiently activated by a transient rise in cytosolic free Ca2+ concentration and is largely responsible for initiating the cellular response; the C-kinase branch, which is activated by both the sustained increase in plasma membrane Ca2+ influx rate and the increase in the 1,2-diacylglycerol content of the plasma membrane, is responsible for sustaining the response (20, 21, 23-27). In this model a sustained increase in intracellular free Ca2+ is not a prerequisite for the sustained phase of smooth muscle contraction.
The present study was undertaken to determine whether in bovine tracheal smooth muscle an agonist, carbachol, causes polyphosphoinositide breakdown and generates the two messengers, Ins-P3 and 1,2-diacylglycerol. Our results show that carbachol stimulation causes a rapid decrease in the mass of the phosphoinositides and a small but significant increase in the mass of 1,2-diacylglycerol and phosphatidic acid. Furthermore, these changes in the lipid mass are sustained during a 1-h period of hormone action.
RESULTS
Effect of Carbachol on the Turnover of [3H]Inositol-labeled Phosphoinositides-As shown in Fig. 1, the addition of carbachol to tracheal muscle strips prelabeled with [3H]inositol causes a rapid decrease in radioactivity from both the PtdIns-4,5-P2 and PtdIns-4-P pools. A loss of 28% of 3H radioactivity from PtdIns-4,5-P2 is detected at 30 s and is maximal at 2 min (a loss of 34%). The change in radioactivity of PtdIns-4-P follows a similar time course to that of PtdIns-4,5-P2. The radioactivity in both the PtdIns-4,5-P2 and PtdIns-4-P pools remains reduced for the initial 10 min. In contrast, the radioactivity of phosphatidylinositol shows no significant change during this period.
Effect of Carbachol on Inositol Phosphate Production-Carbachol stimulates the production of Ins-P, Ins-P2, and Ins-P3 in [3H]inositol-prelabeled muscle strips (Fig. 2). Both Ins-P2 and Ins-P3 increase rapidly following carbachol addition and reach peaks at 1 min (870% in Ins-P2 and 800% in Ins-P3 of each control value). Then the values slightly fall but still remain 6- to 7-fold higher than the control values during 10 min. Ins-P rises less rapidly and reaches a peak (190% of control value) at 5 min and stays at that level. The rapid decrease in radioactivity of PtdIns-4,5-P2 and the concomitant accumulation of Ins-P3 is consistent with carbachol-stimulated, phospholipase C-catalyzed hydrolysis of PtdIns-4,5-P2.
Effect of Carbachol on the Absolute Mass of Phosphoinositides and Phosphatidic Acid-The time course of the changes in the mass of phosphoinositides and phosphatidic acid is shown in Fig. 3. The resting content of PtdIns-4,5-P2 is about 0.4% of total phospholipid on a molar basis and also one-tenth of the mass of phosphatidylinositol (about 4%). Upon carbachol stimulation, the content of PtdIns-4,5-P2 rapidly decreases and reaches a nadir (50% of the resting value) at 1 min. Then the content of PtdIns-4,5-P2 slightly recovers, but still remains appreciably lower than the resting value for at least 60 min. During a similar period of time, muscle strips not treated with carbachol show no significant change in the PtdIns-4,5-P2 content. The content of PtdIns-4-P also declines rapidly from the resting level of 0.4% and remains reduced during the carbachol stimulation of 60 min. Again, muscles not stimulated with carbachol show no significant change in the content of this phosphoinositide. These changes in PtdIns-4,5-P2 and PtdIns-4-P during the first 5 min are apparently similar to the changes in radioactivity of polyphosphoinositide in [3H]inositol-labeled muscles (Fig. 1), but the change in the mass of PtdIns-4,5-P2 is quantitatively larger than that in the labeling experiments (50% versus 34% as a maximal decrease). The mass of phosphatidylinositol shows a delayed decrease. The content of phosphatidylinositol shows no significant change at 30 s and then declines slowly and progressively for 60 min. The content of phosphatidylinositol at 60 min is reduced to 44% of the resting content. In contrast, the mass of phosphatidic acid rises rapidly and progressively from a resting value of about 0.2% of total phospholipid mass to reach a plateau of about 0.9% of this mass at 30 min.
As shown in Fig. 4, the changes in the mass of PtdIns-4,5-P2 and phosphatidic acid are dose-dependent and become greater with increasing doses of carbachol. The ED50 for the carbachol-induced changes in PtdIns-4,5-P2 and phosphatidic acid is 5 and 3 μM, respectively. These values are 15- to 30-fold higher than the ED50 for carbachol-induced tension development (28). One possible interpretation of these data is the existence of large numbers of spare receptors. A second is that there is an amplification step between messenger generation and physiological responses.
The changes in the mass of polyphosphoinositides and phosphatidic acid are reversed by atropine, an antagonist of the muscarinic-type receptor (Fig. 5). The content of PtdIns-4,5-P2 rapidly rises to its base-line value within 2 min of atropine addition. However, the reversal of this change in PtdIns-4-P and phosphatidic acid mass is slower and takes 25 min before complete recovery is seen.
To see if these changes in the mass of phospholipids are a specific response to the agonist carbachol, the effect of 80 mM K+ in the extracellular fluid on polyphosphoinositide metabolism was examined. Eighty mM K+, a concentration which causes a maximal contraction, did not elicit any changes in the mass of polyphosphoinositides or phosphatidic acid in tracheal muscle (data not shown). These results also indicate that the phospholipase C activation resulting from carbachol stimulation is not a consequence of an increase in the intracellular Ca2+ concentration.
To determine the dependency of carbachol-mediated breakdown of polyphosphoinositides on extracellular calcium, the effect of carbachol on polyphosphoinositide metabolism was compared in the presence or absence of extracellular Ca2+. The carbachol-stimulated breakdown of polyphosphoinositides and the production of phosphatidic acid were similar in the presence or absence of extracellular Ca2+ (data not shown). Thus, carbachol-stimulated breakdown of polyphosphoinositides is not affected by removal of extracellular Ca2+.
Effect of Carbachol on 1,2-Diacylglycerol Production-The results shown in Figs. 1-3 clearly indicate that carbachol stimulates phospholipase C-mediated hydrolysis of polyphosphoinositides. Therefore, the change in the level of 1,2-diacylglycerol, the other product of phospholipase C-mediated hydrolysis of polyphosphoinositides, was examined. Initially, the quenching method of adding ice-cold chloroform/methanol (1:2, v/v) to muscle strips was employed. In experiments employing this method, no significant increase in the 1,2-diacylglycerol level in terms of changes in the absolute amount, or of radioactivity in [3H]glycerol- or [3H]arachidonic acid-labeled 1,2-diacylglycerol, was found (data not shown). In experiments in which [3H]arachidonic acid labeling was employed, a decrease in PtdIns-4,5-P2 and an increase in phosphatidic acid were found (Fig. 6). However, when another method for terminating the reaction, freeze-clamping the muscle, was employed, changes in 1,2-diacylglycerol content were found. As shown in Fig. 7, the mass of 1,2-diacylglycerol in freeze-clamped muscle rises rapidly and reaches a peak at 2 min after the addition of 2 μM carbachol. It then falls slightly to remain for 60 min at a value significantly above the control value. The plateau value is about 130% of the resting value. An experiment in which a higher concentration of carbachol (0.1 mM) was employed gives a similar result (data not shown). Thus, these data indicate that the choice of quenching method is critically important for the measurement of the 1,2-diacylglycerol content of this tissue.
DISCUSSION
The present results demonstrate that carbachol, a muscarinic agonist, causes a rapid reduction in the mass of PtdIns-4,5-P2 in bovine tracheal smooth muscle (Fig. 3). Because the change is associated with concomitant increases in [3H]Ins-P3 production (Fig. 2) and in the contents of 1,2-diacylglycerol (Fig. 7) and phosphatidic acid (Figs. 3 and 6), these data indicate that a carbachol-induced decrease in the mass of PtdIns-4,5-P2 is caused by a stimulation of the hydrolysis of PtdIns-4,5-P2 catalyzed by phospholipase C. The present work is the first to report the measurement of polyphosphoinositide breakdown based upon measurement of lipid mass as well as radioisotopic labeling in a smooth muscle experiment.
Several investigators have shown that various agonists cause polyphosphoinositide breakdown in smooth muscle tissues employing radioactive tracer methods (13, 42-47, 60). These previous studies demonstrated agonist-stimulated changes in radioactivity of phosphoinositides and/or production of [3H]inositol phosphates in smooth muscle prelabeled with 32P or [3H]inositol. When a freshly isolated tissue is employed for labeling experiments, it is unlikely that a true isotopic equilibrium is reached because of short periods of labeling. Therefore, it is not possible to accurately estimate a change in the lipid mass based upon experiments of this type. In addition, it is not easy to know from such labeling experiments whether or not polyphosphoinositide breakdown is sustained in response to agonists. The direct measurement of lipid mass gives one a means of overcoming these difficulties.
Recently a number of hormones and neurotransmitters have been shown to cause polyphosphoinositide breakdown in their target tissues. The resulting products, Ins-P3 and 1,2-diacylglycerol, have been shown to function as intracellular messengers in the action of the particular agonist (14-17). One of the unanswered questions concerning polyphosphoinositide breakdown is whether polyphosphoinositide breakdown is sustained during the sustained phase of the hormonal response. The present study shows that the mass of PtdIns-4,5-P2 and PtdIns-4-P remain lower than their resting values even after 60 min of continuous exposure to carbachol, and that the level of phosphatidylinositol continues to fall progressively during this period. In contrast, the mass of phosphatidic acid and 1,2-diacylglycerol remain higher than their resting values. These results clearly indicate that polyphosphoinositide breakdown is sustained and continues to generate messengers during the sustained response to carbachol. These results give support to the notion that polyphosphoinositide breakdown plays a messenger role during the tonic as well as the acute phase of carbachol-induced contraction in bovine tracheal smooth muscle. It is our postulate that the major intracellular pathway involved in the sustained phase of carbachol-induced contraction is the C-kinase pathway and that the C-kinase is maintained in its Ca2+-sensitive state during the sustained phase (20, 21, 23, 27, 28). Since 1,2-diacylglycerol in the plasma membrane is believed to be a physiological factor which activates the C-kinase in situ (17, 19), our data showing a sustained increase in the 1,2-diacylglycerol content of muscle strips strongly suggest that the C-kinase is actually maintained in its Ca2+-sensitive state during the sustained phase of carbachol-induced contraction.
To our knowledge, the only previous report in which long term effects of agonists on the mass change in phosphoinositides are reported is from the work of Farese et al. (48) in rat adrenal subcapsular cells. These workers showed that angiotensin II, a typical agonist causing phosphoinositide breakdown in this tissue (29, 61), induces increases in the mass of PtdIns-4,5-P2, PtdIns-4-P, phosphatidylinositol, and phosphatidic acid at 60 min. Since agonists causing phosphoinositide breakdown stimulate both breakdown and resynthesis of phosphoinositides, it is possible that if the resynthesis is greater than the breakdown of phosphoinositides, one might see net increases in the mass of these lipids. In fact, a rapid increase in the mass of PtdIns-4,5-P2 after the initial decrease has been reported to occur at an early time point (30 s) after thrombin addition in human platelets (49, 50). Thus, a balance between breakdown and resynthesis of phosphoinositides following agonist stimulation may determine the net change in the mass of phosphoinositides in a given tissue.
The decrease in the mass of PtdIns-4,5-P2, seen in carbachol-stimulated tracheal muscle, is associated with an equally rapid loss of mass in the PtdIns-4-P pool (Fig. 3). The time course of Ins-P2 production is similarly as rapid as that of Ins-P3 (Fig. 2). However, the reversal of the PtdIns-4-P level toward the resting value after atropine addition takes a longer time than that of PtdIns-4,5-P2 (Fig. 5). Therefore, a loss of the mass of PtdIns-4-P after carbachol addition may not be explained solely by phospholipase C-mediated breakdown of PtdIns-4-P. Accelerated conversion of PtdIns-4-P to PtdIns-4,5-P2 is likely to contribute to a reduction in the mass of PtdIns-4-P. Similarly, the time course of a delayed decrease in the mass of phosphatidylinositol and the relatively small increase in Ins-P compared to the much larger decrease in the mass of phosphatidylinositol (Figs. 2 and 3) suggest that the bulk of the decrease in the mass of phosphatidylinositol occurs via phosphorylation to PtdIns-4-P and PtdIns-4,5-P2.
In the present study, some differences in the data on phosphoinositide breakdown are noted between the [3H]inositol-labeling experiment (Fig. 1) and the measurement of the lipid mass (Fig. 3). In the labeling experiment the radioactivity in phosphatidylinositol shows no significant change for at least 10 min (Fig. 1), while the mass of phosphatidylinositol shows a decrease of 11% at 1 min and 27% at 5 min (Fig. 3). Moreover, the extent of decrease in PtdIns-4,5-P2 after carbachol addition is larger in absolute amount than that estimated by radioactivity measurements. Because muscles were labeled for 3 h with [3H]inositol and presumably true isotopic equilibrium was not reached in the present experiment (51, 52), stimulated incorporation of [3H]inositol into phosphoinositides associated with enhanced resynthesis of these lipids may account for the smaller relative changes observed in the labeling experiment. It is also possible that [3H]inositol is incorporated into only a particular pool of each phosphoinositide (53) and changes in radioactivity do not represent overall changes in the mass of these lipids.
The changes in the mass of phosphoinositides and phosphatidic acid are carbachol-specific events because the application of 80 mM K+ leads to no significant change in the mass of these lipids. These data also indicate that simply raising intracellular Ca2+ concentration with high extracellular K+ is not sufficient to activate the phospholipase C-mediated breakdown of the phosphoinositides. Furthermore, carbachol-mediated breakdown of phosphoinositides is not dependent on extracellular Ca2+. These data are similar to previous reports in a variety of cells or tissues (14, 15, 52).
In the present study, the quenching method of adding ice-cold chloroform/methanol (1:2, v/v) to the incubation medium was initially employed for the measurement of 1,2-diacylglycerol mass. However, the experiments using this quenching method were unsuccessful in detecting a significant increase in 1,2-diacylglycerol content. Likewise, a significant change in [3H]arachidonic acid- or [3H]glycerol-labeled 1,2-diacylglycerol was not observed using this approach. In contrast, in freeze-clamped muscle an increase in the mass of 1,2-diacylglycerol is seen (Fig. 7). The base-line value of the 1,2-diacylglycerol content in the freeze-clamped muscle is one-seventh of that determined using the quenching method. These results suggest that the use of such a method as freeze clamping, which allows biochemical reactions to be terminated immediately, is critically important for the determination of the 1,2-diacylglycerol content of solid tissues like tracheal smooth muscle. Otherwise a small change in 1,2-diacylglycerol mass might be masked by an artificial increase in the mass of this lipid derived from the degradation of phospholipids or triacylglycerol during the solvent-based quenching procedure.
The increase in the actual 1,2-diacylglycerol content appears to be small compared to the larger mass changes of phosphatidic acid and polyphosphoinositides (Fig. 3) (37). Using the present methods, we could show a substantial increase (2-fold) in the mass of 1,2-diacylglycerol in Swiss 3T3 fibroblasts upon bombesin stimulation. Hence, it is unlikely that the method used for the quantitation of the mass of 1,2-diacylglycerol is inadequate to detect an increase in 1,2-diacylglycerol. The difference between the results in fibroblasts and smooth muscle indicates that the steady state concentration of 1,2-diacylglycerol is determined both by its rate of formation and its rate of further metabolism.
In contrast to the small increase in the mass of 1,2-diacylglycerol, the content of phosphatidic acid, the product formed by phosphorylation of 1,2-diacylglycerol by diacylglycerol kinase, shows a striking increase after carbachol addition (Fig. 3). These data suggest that the 1,2-diacylglycerol formed as a result of PtdIns-4,5-P2 hydrolysis is rapidly converted to phosphatidic acid by relatively high activities of diacylglycerol kinase in carbachol-stimulated tracheal smooth muscle, resulting in a net accumulation of only a small amount of 1,2-diacylglycerol. However, it is possible that in some domains of the plasma membranes there is a larger increase in 1,2-diacylglycerol concentration than is indicated by its total mass change. Furthermore, the possibility that another mechanism is operating for activation of the C-kinase is not excluded. Recently, a lipoxygenase product of arachidonic acid has been proposed as a C-kinase activator (54). Because arachidonic acid is known to be mobilized from diacylglycerol and phospholipids in smooth muscle tissue during agonist stimulation (55-57), the possibility that such arachidonic acid metabolites are involved in the regulation of C-kinase activity is an especially interesting avenue to be explored. | 4,912.6 | 1986-11-05T00:00:00.000 | [
"Biology",
"Computer Science",
"Chemistry"
] |
Grayscale Image Authentication using Neural Hashing
Many different approaches to neural-network-based hash functions have been proposed, and their security must be supported by statistical analysis. This paper proposes a novel neural hashing approach for gray-scale image authentication. The suggested system is fast, robust, practical and secure. The proposed hash function generates hash values by exploiting the one-way property and non-linearity of neural networks. Security and performance analyses are carried out and satisfactory results are obtained. These features are the main reasons for preferring the proposed scheme over traditional ones.
Introduction
Cryptography uses mathematical techniques for information security. Information security is now a compulsory component of commercial applications, military communications and also social media implementations, so it can be said that cryptography is the most significant part of communication security (Arvandi et al. 2006). It maintains confidentiality, which is the core of information security. Any cryptosystem requires confidentiality, authentication, integrity and non-repudiation. Authentication relates to the identification of the two parties entering into communication, while integrity addresses the unauthorized modification of an element inserted into the system (Sağıroğlu & Özkaya 2007). To date, there have been a large number of studies intended to advance robust cryptosystems and use them in communications. Exploiting the non-linearity of neural networks to construct secure hash functions is a novel and growing technique. A hash function converts a message of arbitrary length into a short, fixed-length value: any message can be used as input, and a fixed-length value is produced as output. The most popular hash functions are MD5 (R. Rivest, 1992), SHA-2, which was designed by the NSA (National Security Agency) and published in 2002 by NIST as a U.S. FIPS (Federal Information Processing Standard), and SHA-3, which is based on an instance of the KECCAK algorithm that NIST selected as the winner of the Cryptographic Hash Algorithm Competition in 2013. Hash functions are used for data integrity and digital signatures. A digital signature signs data in order to prove the authenticity of the data and the identity of the sender. The hash value is a digest of the message which is attached to the original message, and any modification of the original message invalidates the hash. In other words, a hash function is a process that generates information from a message using mathematical techniques; the generated digital information is called the message digest. Inverting a hash function must be practically impossible, so a hash value must not reveal anything about the original message. It must also be infeasible to find different messages whose hash values are the same. The hash value of every message is different, so any modification of the original message makes the digital signature invalid. Cryptography needs such functions because they make secure communication possible (Soyalıç 2005). Hash functions are also used in network and Internet security. In a domain-controlled network, client passwords are saved on the file server as hash values, so the administrator cannot see the clients' original passwords, and any malicious access to the server database cannot recover them. Up to now, there have been many studies that advance robust machine-learning-based hash functions and use them in communications (Zou & Xiao 2009; Lian et al. 2007; Yayık & Kutlu n.d.; Yayık & Kutlu 2013; Huang 2011). Recently there has been considerable interest in using neural networks for cryptography (Lian et al. 2006). The statistical sensitivity of the SHA-2 secure hash algorithm and of neural-network-based hash functions is nearly the same (Sumangala et al. 2011), so it can be expected that neural networks will be used in cryptology in the near future. There are many different approaches to image hash function algorithms. A Radon-transform-based image fingerprinting (hashing) method has been proposed (Seo et al. 2004). Monga and Evans extracted salient image features using a wavelet-based feature detection algorithm in order to develop an image hashing system (Monga et al. 2006).
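As a quick illustration of the integrity property described above, the snippet below uses Python's standard hashlib (a conventional hash, not the proposed neural scheme) to show that flipping a single bit of a message completely changes its digest; the example message is purely illustrative.

```python
import hashlib

msg = bytearray(b"transfer 100 TL to account 42")
original = hashlib.sha256(bytes(msg)).hexdigest()

msg[0] ^= 0x01                      # flip one bit of the message
tampered = hashlib.sha256(bytes(msg)).hexdigest()

print(original)
print(tampered)                     # differs in roughly half of its bits
print(original == tampered)         # False: a signature check would now fail
```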
In 2006, an image hashing algorithm based on the rotation invariance of the Fourier-Mellin transform and on controlled randomization was introduced (Swaminathan et al. 2006). In the last two decades, neural-network-based hash functions have been studied by several researchers (Zou & Xiao 2009; Lian et al. 2007; Yayık & Kutlu n.d.; Yayık & Kutlu 2013; Sumangala et al. 2011). A common feature of these neural-network-based works is that they consider gray-scale images or texts. In this paper, a secure and robust neural hash function for gray-scale images, which exploits non-linearity, is proposed. Many experiments are then performed to validate its security and statistical requirements. The rest of this paper is organized as follows. Section 2 describes the proposed image hash function. Section 3 presents experimental results and performance analysis. Finally, the conclusion is given in Section 4.
Proposed Model
The proposed hash function uses the neural network shown in Figure 1, which has three layers and realizes the confusion, diffusion and compression properties of an ideal hash function; the network is parameterized by its weight matrices and bias b.
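Since the defining equation is not reproduced here, the following is only a minimal sketch of how such a three-layer block hash could look; the layer sizes (64 hidden units, 512 output bits), the logistic activation and the use of a seeded random weight matrix as the secret key are illustrative assumptions, not the authors' specification.

```python
import numpy as np

def block_hash(block, rng):
    """Hash one image block with a small 3-layer network.

    block : 1-D array of pixel values (the input layer).
    rng   : seeded generator; the random weights play the role of a key.
    """
    x = np.asarray(block, dtype=np.float64) / 255.0   # normalise the input
    w1 = rng.uniform(-1, 1, (64, x.size))             # input -> hidden (compression)
    w2 = rng.uniform(-1, 1, (512, 64))                # hidden -> output (diffusion)
    h = 1.0 / (1.0 + np.exp(-(w1 @ x)))               # non-linear layer (confusion)
    y = 1.0 / (1.0 + np.exp(-(w2 @ h)))
    return (y > 0.5).astype(np.uint8)                 # 512 output bits
```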
Block Hash
Neural network based hash function is depicted in Figure 2.
Each block of the image is passed through the block hash, producing a 32x512 binary matrix. The XOR of consecutive rows is then calculated in order to obtain a 1x512 binary value (1).
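A sketch of this folding step is given below; splitting the image into 32 row blocks and passing the per-block hash function in as an argument are assumptions made for illustration, since the exact block layout is not spelled out in the text.

```python
import numpy as np

def fold_rows(bits):
    """XOR the rows of a 32x512 binary matrix into one 512-bit row,
    so that every block influences every bit of the final digest."""
    assert bits.shape == (32, 512)
    return np.bitwise_xor.reduce(bits, axis=0)

def image_hash(img, block_fn, seed=0):
    """Hypothetical end-to-end hash: split the image into 32 blocks,
    hash each block to 512 bits with block_fn, then fold with XOR."""
    rng = np.random.default_rng(seed)
    blocks = np.array_split(np.asarray(img, dtype=np.uint8).flatten(), 32)
    bits = np.stack([block_fn(b, rng) for b in blocks])   # shape (32, 512)
    return fold_rows(bits)
```

With the block_hash sketch above, image_hash(img, block_hash) would produce the 512-bit digest analyzed in the following sections.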
Performance Analysis
In this section, whether suggested hash function validates statistical and security requirements or not is analyzed. So that, statistical distribution, diffusion and confusion, collision resistance and meet-in-the-middle analysis are performed.
Statistical Distribution of Hash Value
The security of a hash function is directly related to how uniformly its hash values are distributed. Figure 3 illustrates 2D plots showing that the pixel values of the original image are localized, whereas the generated hash values are uniformly distributed.
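A simple way to reproduce this kind of comparison numerically is sketched below; the choice of 16 histogram bins (matching the 16 hexadecimal symbols) is an illustrative assumption.

```python
import numpy as np

def hex_chars(digest_bits):
    """Group 512 digest bits into 4-bit nibbles -> 128 hex symbols (0-15)."""
    return digest_bits.reshape(-1, 4) @ np.array([8, 4, 2, 1])

def distribution_summary(img, digest_bits):
    """Compare the spread of image pixel values with that of the hash symbols."""
    px_hist, _ = np.histogram(img, bins=16, range=(0, 256))
    hx_hist = np.bincount(hex_chars(digest_bits), minlength=16)
    # A natural image concentrates its mass in a few pixel bins, while a
    # well-behaved hash spreads its 128 nibbles roughly uniformly over 0..15.
    return px_hist / px_hist.sum(), hx_hist / hx_hist.sum()
```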
Statistical Analyses of Diffusion and Confusion
The binary form of the hash value consists only of 0 and 1 bits, while the hexadecimal hash value consists of 16 different characters. Because of this, the change in the binary hash value must be nearly 50% (as shown in Table 3), whereas the change in the hexadecimal value must be nearly 100%, for each modification; otherwise the diffusion property is not satisfied. In order to measure the binary and hexadecimal hash value changes, the following steps are applied: 1. Calculate the binary and hexadecimal hash value of the original image. 2. Change the value of 10 randomly chosen pixels of the image. 3. Calculate the binary and hexadecimal hash value of the modified image. 4. Compare the binary and hexadecimal hash values of the original and modified images and find the differences. 5. Repeat steps 1-4 Q times. In Figure 4, the binary sensitivity of the hash value is presented. As mentioned, the binary sensitivity is nearly 50%, which satisfies the diffusion requirement, and the nearly 100% hexadecimal sensitivity means the algorithm is very robust. The statistical parameters for binary sensitivity (mean number of bits changed, and related measures) are listed in Table 1.
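The sensitivity procedure above can be expressed in a few lines; the number of trials Q, the hash function passed in, and the choice to draw replacement pixel values uniformly are assumptions for illustration.

```python
import numpy as np

def sensitivity_test(img, hash_fn, Q=2048, seed=1):
    """Diffusion test (steps 1-5): change 10 random pixels, re-hash, and
    record how many of the 512 output bits change in each trial."""
    rng = np.random.default_rng(seed)
    img = np.asarray(img, dtype=np.uint8)
    base = hash_fn(img)
    changed = np.empty(Q, dtype=int)
    for q in range(Q):
        mod = img.copy()
        idx = rng.choice(mod.size, size=10, replace=False)
        mod.flat[idx] = rng.integers(0, 256, size=10)     # new random pixel values
        changed[q] = int(np.sum(hash_fn(mod) != base))
    # Ideal diffusion: the mean should be close to 256 of 512 bits (about 50%).
    return changed.mean(), changed.min(), changed.max(), changed.std()
```

Run, for example, as sensitivity_test(img, lambda im: image_hash(im, block_hash)); the returned mean, minimum, maximum and standard deviation correspond to the quantities summarized in Table 1.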
One Way Property
The most important and intriguing property of neural networks, which makes them useful in applications, is their generalization capability, that is, their ability to produce reasonable outputs when fed inputs not previously encountered. When the size of the target is very different from the size of the input, it is easy to compute the target from the input but difficult to recover the input from the target. Due to this property, neural networks can be used in hash functions (Desai 2013). Parallel implementation is another significant property of neural networks: each layer is parallel, so the layers can implement certain functionality independently, which makes ANNs well suited for data processing. Neural networks are able to model relationships among non-linear and complicated values by means of a training function. Confusion is a special property caused by the nonlinear structure of neural networks. This property makes the output depend on the input in a nonlinear and complicated manner: a bit of the output depends on all the bits of the input in a complicated way. Thus, it is difficult to determine the exact input. The confusion property of neural networks makes them a potential choice for cipher design.
Analysis of Collision Resistance
After generating the hash value, the experimenter must make sure that every bit of the original image affects it; in other words, the hash value must depend fully on the original image. If a single-bit change in the image does not affect the hash value, this indicates a serious information-security vulnerability. Therefore, in this paper the collision resistance analysis is performed Q times as follows: 1. Generate the hash value of the original image (as described in Section 3.2) and store it in ASCII format. 2. Randomly change the least significant bits of the original image. 3. Generate the hash value of the modified image and store it in ASCII format. 4. Compare the hash values generated in steps 1 and 3, and find and count the ASCII values that are identical at the same location.
The distribution of the number of collision hits, defined in (3), is plotted in Figure 5. The maximum, minimum and mean values and the standard deviations are listed in Table 2.
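A sketch of this counting procedure is given below; storing the digest as 64 ASCII bytes and flipping a single least-significant bit per trial are assumptions, since the exact encoding is not specified in the text.

```python
import numpy as np

def ascii_digest(digest_bits):
    """Pack the 512-bit digest into 64 ASCII byte values (8 bits each)."""
    return np.packbits(np.asarray(digest_bits, dtype=np.uint8))

def collision_test(img, hash_fn, Q=2048, seed=2):
    """Collision-resistance test: for each trial, flip the least significant
    bit of one random pixel and count digest characters that stay identical
    at the same position."""
    rng = np.random.default_rng(seed)
    img = np.asarray(img, dtype=np.uint8)
    ref = ascii_digest(hash_fn(img))
    hits = np.empty(Q, dtype=int)
    for q in range(Q):
        mod = img.copy()
        i = rng.integers(mod.size)
        mod.flat[i] ^= 1                       # flip the least significant bit
        hits[q] = int(np.sum(ascii_digest(hash_fn(mod)) == ref))
    return hits                                # distribution as in Figure 5
```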
Meet-In-The-Middle Attack.
A meet-in-the-middle attack is a cryptanalysis technique against block ciphers introduced in 1977 (Diffie & Hellman 1977). It is a passive attack; it may allow the attacker to read messages without authorization, but against most cryptosystems it does not allow him to alter messages or send his own (Vanstone et al. 1996).
Results and Conclusions
A secure hash function based on machine learning techniques has been presented and analyzed here. The proposed algorithm achieves the required diffusion and confusion properties thanks to the neural network information-transfer process, which is inspired by real biological systems. The analyses and experiments described in this paper reveal that the hash function satisfies the sensitivity and minimum-collision-hit requirements and is strong against attacks such as the meet-in-the-middle attack. In Figure 3, the uniform distribution of the hexadecimal hash value, in contrast to the localized distribution of the original image, indicates the high randomness required for confusion. In Figure 4, the nearly 50% difference in the binary form of the hash value indicates the high sensitivity required for diffusion. Figure 4 is not sufficient by itself, however; to support it, statistical measures are shown in Table 1. When the sensitivity test is looped Q times, on average 254, at minimum 224, and at maximum 284 of the 512 bits differ, with a small standard deviation (8.61); equivalently, a minimum of 43.66%, a maximum of 55.46%, and an average of 49.78% of the 512 bits differ, with a small standard deviation (4.92). These results confirm the sensitivity of the neural-network-based hash function.
The counting of identical ASCII hash values at the same location, which measures collision resistance, is also performed Q times. Figure 5 illustrates the collision resistance when Q = 2048. When the collision resistance test is looped Q times, on average only 7 ASCII values are found to be identical at the same location, which is negligible. These results therefore confirm the collision resistance of the neural-network-based hash function. As a result, this system can be used in communication applications, especially military applications. | 2,408 | 2016-02-03T00:00:00.000 | [
"Computer Science"
] |
Rescaling the spatial lambda Fleming-Viot process and convergence to super-Brownian motion
We show that a space-time rescaling of the spatial Lambda-Fleming-Viot process of Barton and Etheridge converges to super-Brownian motion. This can be viewed as an extension of a result of Chetwynd-Diggle and Etheridge (2018). In that work the scaled impact factors (which govern the event-based dynamics) vanish in the limit; here we drop that requirement. The analysis is particularly interesting in the biologically relevant two-dimensional case.
Introduction
Our purpose in this paper is to extend a result in [5] which shows that certain suitably rescaled spatial Lambda-Fleming-Viot (SLFV) processes converge weakly to super-Brownian motion (SBM). Our extension is analogous to that of allowing nearest neighbour interactions in interacting particle models, as opposed to taking long range limits, and is particularly delicate in the critical two-dimensional case. SBM is a well known measure-valued diffusion, introduced in [26] and [9], for which there is an extensive research literature (e.g., for reviews see [10], [13] and [22]). SLFV processes were introduced more recently, in [12], to serve as models for the evolution of allele frequencies in populations distributed across spatial continua. An analytic construction was given in [2], along with a discussion of the biological significance of the model. A more probabilistic construction was given in [25], one which gives a very useful connection between SLFV processes and their duals. Following [5], we consider here a neutral two-type version of the general SLFV model, taking "space" to be R d . Informally, our process (constructed below) is a Markov process (µ t ) t≥0 where for each x ∈ R d , µ t (x) is a probability distribution on the type space {0, 1}, with the interpretation that ∫_B µ t (x)({i}) dx represents the proportion of the population of type i in a region B ⊂ R d at time t. We will consider an extension of the fixed radius case from [5] (Theorem 2.6 of that reference) and not the interesting variable radius case, also discussed there in Theorem 2.7, in which stable branching arises in the limit.
SBM arises as the limit under Brownian space-time rescaling of a range of critical spatially interacting models in mathematical physics and biology above the critical dimension including critical oriented percolation [17], critical lattice trees [18], the critical contact process [16], and the voter model [6]; it is believed to be the scaling of critical ordinary percolation in the same regime. The only scaling limit of the above which has been verified at the critical dimension is the voter model [6] where the critical dimension is two. In this case the simple nature of the dual process, a coalescing random walk, allows one to carry out the required explicit calculations. Now our challenge is to use the related but more complex dual of the Barton-Etheridge model to carry through the analysis. It is understood here that we are not taking "long-range" limits (e.g. as was done for the contact process in [11]) which will weaken the interaction and make the analysis considerably easier. In our setting this means not letting the impact factor (described below) approach zero in the rescaling.
We start a rigorous description of the model by recalling the definition of the fixed radius SLFV process given in [5]. Let r > 0 be the "interaction radius", let ρ ∈ [0, 1] be the "impact factor," and let Π be a Poisson point process on R d ⊗ (0, ∞) with intensity dx ⊗ dt.
We suppose the distribution of types in the population changes over time according to "reproduction events" determined by Π. Given µ t− , if (x, t) ∈ Π, choose an independent point z uniformly at random from the Euclidean ball B r (x) = {y : |y − x| ≤ r}, and (independently) a type α according to the distribution µ t− (z), and then set µ t (y) = (1 − ρ)µ t− (y) + ρδ α ∀y ∈ B r (x).
We keep µ t (y) = µ t− (y) for y / ∈ B r (x). Writing µ t (x) in the form w t (x)δ 1 + (1 − w t (x))δ 0 , we can reformulate the above dynamics more conveniently in terms of w t as follows. Starting from a Borel w 0 : R d → [0, 1] with compact support, for (x, t) ∈ Π, choose an independent parental location z uniformly at random from B r (x), independent of everything, and then: (i) with probability w t− (z) put w t (y) = (1 − ρ)w t− (y) + ρ for all y ∈ B r (x), (ii) with probability 1 − w t− (z) put w t (y) = (1 − ρ)w t− (y) for all y ∈ B r (x), (iii) for all y / ∈ B r (x) keep w t (y) = w t− (y). (1.1) As noted in Section 3 of [5], this description gives a well-defined w t : R d → [0, 1] which has compact support at all times. (See [25] for more details on the construction.) It will be useful to regard w t as the measure w t (x)dx, and for bounded Borel φ : R d → R we write w t (φ) = ∫ R d φ(x)w t (x) dx (1.2). Closely associated with the process w t is a dual process of coalescing "lineages". If we sample a finite number of spatial locations {x i } at time T , it is easy to see that the values w T (x i ) can be determined from w 0 by using Π to trace the lineages backward in time. Since Π run backwards is still a Poisson process, we may define a version of the lineages process starting at backwards time 0 from a finite number of locations {x i } as follows. If (x, t) ∈ Π, mark each lineage in B r (x) independently with probability ρ, and choose a point z uniformly at random from B r (x). If at least one of the lineages in B r (x) is marked, all marked lineages in B r (x) coalesce and the resulting lineage is moved to z. If no lineage is marked, no lineage moves. Lineages outside of B r (x) are not affected.
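For intuition, the reproduction-event dynamics (1.1) can be simulated directly. The sketch below restricts the Poisson process of events to a bounded box and represents w on a regular grid; the box size, grid resolution, parameter values and the neglect of boundary effects are all simplifications for illustration and are not part of the model.

```python
import numpy as np

def simulate_slfv(w0, r=1.0, rho=0.5, T=10.0, L=10.0, seed=0):
    """Fixed-radius SLFV dynamics (1.1) on the box [0, L]^2.

    w0 : (n, n) array of initial type-1 frequencies in [0, 1],
         read as values at the centres of an n x n grid on the box.
    Reproduction events fall as a Poisson process with intensity
    dx x dt restricted to the box, i.e. at rate L**2 per unit time.
    """
    rng = np.random.default_rng(seed)
    w = np.array(w0, dtype=float)
    n = w.shape[0]
    xs = (np.arange(n) + 0.5) * (L / n)                 # grid-cell centres
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    t = rng.exponential(1.0 / L**2)
    while t <= T:
        cx, cy = rng.uniform(0.0, L, size=2)            # event centre x
        ball = (X - cx) ** 2 + (Y - cy) ** 2 <= r**2    # grid cells in B_r(x)
        if ball.any():
            while True:                                  # parent z uniform in B_r(x)
                zx, zy = rng.uniform(-r, r, size=2)
                if zx**2 + zy**2 <= r**2:
                    break
            iz = min(max(int((cx + zx) / L * n), 0), n - 1)
            jz = min(max(int((cy + zy) / L * n), 0), n - 1)
            alpha = 1.0 if rng.random() < w[iz, jz] else 0.0
            w[ball] = (1.0 - rho) * w[ball] + rho * alpha
        t += rng.exponential(1.0 / L**2)
    return w
```

Starting, for instance, from w0 = np.zeros((200, 200)) with a small central patch set to 1 gives a crude picture of how a rare type spreads and clusters under these dynamics.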
In this paper it will suffice to consider only the one and two-lineage systems, so we will ignore the higher lineage systems which are more complex to analyze.
We now give a more precise description of these Markov jump processes, using the language of "particles" instead of lineages. Let |Γ| be the Lebesgue measure of Γ ⊂ R d . Let U, U 1 , U 2 be independent random variables uniformly distributed on B r = B r (0), and let Ū have the law of U 1 + U 2 , i.e., Ū has density P (Ū ∈ dz) = (|B r (0) ∩ B r (z)|/|B r (0)| 2 ) dz =: hŪ (z)dz. (1.3) We let σ̄ 2 1 d×d denote the covariance matrix of Ū , so that if x = (x 1 , . . . , x d ), then (1.4) We will use this notation throughout, along with η t for the single particle dual and ξ t = (ξ 1 t , ξ 2 t ) for the two particle dual. (a) The single-particle dual η t . If we start with a single particle at x, it is easy to see that η t is the random walk on R d starting at x which makes jumps at rate ρ|B r | with jump distribution given in (1.3). We write P {x} for the underlying law of η.
(b) The two-particle dual (ξ 1 t , ξ 2 t ). If we start with two particles, one at x 1 and the other at x 2 ≠ x 1 , (ξ 1 t , ξ 2 t ) is the Markov jump process starting at (x 1 , x 2 ), with law P {x1,x2} , which makes transitions from (y 1 , y 2 ) to (y + Ū , y + Ū) at rate ρ|B r | if y 1 = y 2 = y, to (y 1 + Ū , y 2 ) at rate ρ(|B r | − ρ|B r (y 1 ) ∩ B r (y 2 )|) if y 1 ≠ y 2 , to (y 1 , y 2 + Ū) at rate ρ(|B r | − ρ|B r (y 1 ) ∩ B r (y 2 )|) if y 1 ≠ y 2 , and to (U + U y1,y2 , U + U y1,y2 ) at rate ρ 2 |B r (y 1 ) ∩ B r (y 2 )| if y 1 ≠ y 2 , (1.5) where U y1,y2 is an independent random variable, uniformly distributed over B r (y 1 ) ∩ B r (y 2 ). For y 1 ≠ y 2 , the total jump rate at (y 1 , y 2 ) is 2ρ|B r | − ρ 2 |B r (y 1 ) ∩ B r (y 2 )|. To see the above rates consider, for example, the second transition, from (y 1 , y 2 ) to (y 1 + Ū , y 2 ) for y 1 ≠ y 2 , where (y 1 , y 2 ) is the current site of our two-particle dual. The next jump in the first coordinate can only occur at a point (x, t) ∈ Π with x ∈ B r (y 1 ), so let (x, t) be the next such point. At (x, t) such a jump (affecting the first coordinate but not the second) can occur in one of two ways: if x lands in B r (y 1 ) \ (B r (y 1 ) ∩ B r (y 2 )) and the particle ξ 1 at y 1 is marked, or if x lands in B r (y 1 ) ∩ B r (y 2 ) and the particle at y 1 is marked and the particle at y 2 is not. The total rate in t is obtained by integrating out x and so is ρ(|B r (y 1 )| − |B r (y 1 ) ∩ B r (y 2 )|) + ρ(1 − ρ)|B r (y 1 ) ∩ B r (y 2 )| = ρ|B r | − ρ 2 |B r (y 1 ) ∩ B r (y 2 )|. In either of the above scenarios the particle at y 1 will jump to z, a uniformly selected site in B r (x). Given y 1 , x will be uniformly distributed on B r (y 1 ) and so x − y 1 will be uniform on B r . Clearly given (y 1 , x), z − x is uniformly distributed over B r and so (x − y 1 , z − x) is a pair of independent uniforms on B r . Therefore the jump in ξ 1 at time t is z − y 1 = (z − x) + (x − y 1 ) and so has law Ū as claimed. The other transitions are similar to analyze.
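These transition rates are easy to simulate. The sketch below (for d = 2) estimates the non-coalescence probability P(τ > t), the quantity whose large-t behaviour governs the constants γ_e discussed later; the parameter values and the rejection sampler for uniform points in a disc are illustrative choices, not taken from the paper.

```python
import numpy as np

def sample_ball(r, rng):
    """One uniform point in the disc B_r(0) in R^2, by rejection sampling."""
    while True:
        p = rng.uniform(-r, r, size=2)
        if p @ p <= r * r:
            return p

def ball_overlap(dist, r):
    """Area of B_r(y1) ∩ B_r(y2) for two discs at distance dist (d = 2)."""
    if dist >= 2 * r:
        return 0.0
    return 2 * r**2 * np.arccos(dist / (2 * r)) - 0.5 * dist * np.sqrt(4 * r**2 - dist**2)

def non_coalescence_prob(x1, x2, t_max, r=1.0, rho=0.5, n_runs=2000, seed=0):
    """Monte Carlo estimate of P(tau > t_max) for the two-particle dual (1.5)."""
    rng = np.random.default_rng(seed)
    area = np.pi * r**2                                  # |B_r|
    survived = 0
    for _ in range(n_runs):
        y1, y2, t = np.array(x1, float), np.array(x2, float), 0.0
        while True:
            ov = ball_overlap(np.linalg.norm(y1 - y2), r)
            move = rho * (area - rho * ov)               # rate of each single-coordinate jump
            coal = rho**2 * ov                           # rate of a coalescing jump
            t += rng.exponential(1.0 / (2 * move + coal))
            if t > t_max:
                survived += 1
                break
            u = rng.uniform(0.0, 2 * move + coal)
            if u < coal:                                 # particles coalesce: tau <= t_max
                break
            jump = sample_ball(r, rng) + sample_ball(r, rng)   # a draw from U-bar
            if u < coal + move:
                y1 = y1 + jump
            else:
                y2 = y2 + jump
    return survived / n_runs
```

For instance, non_coalescence_prob((0.0, 0.0), (0.5, 0.0), t_max=100.0) gives a crude numerical handle on tail probabilities of τ of the kind entering (1.15) and (1.16).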
In either of the above scenarios the particle at y 1 will jump to z, a uniformly selected site in B r (x). Given y 1 , x will be uniformly distributed on B r (y 1 ) and so x − y 1 will be uniform on B r . Clearly given (y 1 , x), z − x is uniformly distributed over B r and so (x − y 1 , z − x) is a pair of independent uniforms on B r . Therefore the jump in ξ 1 at time t is z − y 1 = (z − x) + (x − y 1 ) and so has lawŪ as claimed. The other transitions are similar to analyze. The coalescence time for the two-particle dual starting at (x 1 , x 2 ) is τ = inf{t ≥ 0 : ξ 1 t = ξ 2 t }. (1.6) Although (ξ 1 t , ξ 2 t ) is Markov, the individual coordinates ξ 1 t , ξ 2 t are not (i.e., ξ 1 is not Markov with respect to the filtration σ(ξ 1 s , 0 ≤ s ≤ t) t≥0 ). However, when B r (ξ 1 t ) ∩ B r (ξ 2 t ) = ∅, both coordinates move independently according to the single particle dynamics, while for t > τ , the coalesced coordinates move together according to the single particle dynamics. It is also clear from (1.5) that the two-particle dual is translation invariant, that is, P {x1+x,x2+x} ((ξ 1 , ξ 2 ) ∈ ·) = P {x1,x2} ((x + ξ 1 , x + ξ 2 ) ∈ ·) ∀x, x 1 , x 2 ∈ R d . (1.7) The two special cases of the general duality equation in Proposition 2.5 of [5] that we need are the following. For all t ≥ 0, By standard approximation arguments, these equations then hold for all Borel ψ 1 , ψ 2 which are either nonnegative or integrable (on one side or the other). In particular, letting 1 denote the constant function 1 on R d , we have (1.10) Before stating the main fixed radius result of [5], Theorem 2.6, we introduce super-Brownian motion using the martingale problem formulation. If (X t ) t≥0 is a stochastic process, (F X t ) t≥0 will denote the right-continuous filtration generated by X. Let M F (R d ) denote the space of finite Borel measures on R d endowed with the topology of weak convergence, and for µ ∈ M F (R d ) let µ(φ) = R d φdµ. The space of bounded continuous functions on R d is denoted by C b (R d ), and C 3 0 (R d ) is the space of continuous functions on R d which vanish at infinity and have bounded continuous partials of order 3 and less.
Then (see, e.g., Theorem A.1 of [6] for uniqueness, and Theorem II.5.1 and Remark II.5.5 of [22] for existence) Super-Brownian motion with diffusion coefficient σ 2 and branching rate b, denoted SBM(X 0 , σ 2 , b), is the unique M F (R d )-valued Markov process (X t ) t≥0 with continuous paths and initial state X 0 , such that such that for every φ ∈ C 3 0 (R d ), is a local (F X t )-martingale with predictable quadratic variation process with the Skorokhod (J1) topology. Theorem 1.0 (Theorem 2.6 in [5]). Suppose that for a compact set D 0 ⊂ R d , supp(w N 0 ) ⊂ D 0 for all N , and as elements of In addition, suppose there are constants C 1 , C 2 ∈ (0, ∞) such that, as N → ∞, (The constant C(d) in Definition 4.1 in [5] should be C(d) = B1 (x 1 ) 2 dx.) As noted in [5], this result is similar in spirit to Theorem 1.1 in [6], which proves convergence to SBM for certain sparse "long range" kernel voter models. Due to conditions (1) and (4) above, J N → ∞ and hence the impact factors ρ/J N → 0. It is this fact and the mass scaling condition (3) which make these SLFV processes analogous to the long range voter models in [6]. As for the duals, conditions (1) and (2) ensure that the single particle dual motion converges to Brownian motion, while the condition J N → ∞ ensures that the interactions between dual particles are weak.
If the sequence J N were bounded, so that the impact factors ρ N = ρ/J N do not vanish in the limit, the resulting SLFV processes would correspond to the "fixed" kernel voter models in Theorem 1.2 in [6]. In biological terms this corresponds to keeping the "neighbourhood size" finite in the scaling limit, while letting J N → ∞ effectively allows this parameter to become infinite; see the discussion in Section 2 of [14] and especially Definition 2.2 there. In that work it was shown, in this fixed neighbourhood size setting (Theorem 2.7 of [14]), that, with an appropriate selection term, the dual particle process converges to a branching Brownian motion in the scaling limit. The purpose of this paper is to prove that in this setting, with no selection, there is also a forwards limit theorem giving convergence to SBM.
Throughout this work we will assume d ≥ 2, and N ≥ 3. (1.14) If we set J ≡ 1, and take C 1 = C 2 = 1 for simplicity, the conditions (1)-(3) in Theorem 1.0 suggest the choices for M and K above except for the logarithmic correction to K for d = 2. Without this correction, one can show that the limiting process in Theorem 1.2 would be nonrandom heat flow acting on X 0 , as is the case for the voter model [23]. We do not consider the case d = 1 in (1.14). For this case, the Wright-Fisher SPDE was obtained in [15] as an appropriate scaling limit of SLFV, but under the assumption that the scaled impact factors approach zero like N −1/3 (see [21] for the corresponding scaling limit for the voter model). If the impact factors were bounded away from zero, the strong recurrence of one-dimensional random walk would lead to heavy clustering, resulting in scaling limits with segregation of types; the corresponding scaling limit for the voter model is the Arratia flow [1], not super-Brownian motion.
In order to state our limit theorem for scaled SLFV processes assuming (1.14), we must first identify certain constants γ (d) e that appear in the limiting SBM branching rate. These constants are determined by the asymptotic tail behavior of the coalescence times τ for the unscaled two-particle dual process defined in (1.5). Introduce (1.16) Recall that when outside B 2r , ξ 1 t − ξ 2 t behaves like a rate 2ρ|B r | random walk with jump distribution given in (1.3). For d ≥ 3 the difference will escape to infinity with positive probability by transience, and so the limit in (1.15), which exists by monotonicity, will be non-zero. For d = 2 the situation is more delicate. One can predict the 1/ log t behaviour of γ e (t) from the corresponding non-return probabilities for irreducible symmetric random walk on Z 2 with diagonal covariance matrix (see, e.g., Lemma A.3(ii) of [6]), but the slowing of the rates when the difference ξ 1 − ξ 2 is in B 2r complicates things. The limit (1.16) can be derived from Lemma 4.10 in [14]. The analysis there is based on a construction using successive "inner" and "outer" excursions of ξ 1 − ξ 2 from certain balls before coalescence occurs. Our argument represents the difference process as a time change of a rate 2ρ|B r | random walk with step distribution hŪ , and makes use of a reflection coupling. We feel the proof is of independent interest and so have included it in an Appendix. One advantage of the excursion approach in [14] is that it should also allow inclusion of a random "interaction radius". However, as is discussed below, our time-change representation of the dual difference process in the fixed radius case will also play an important role in the analysis of the martingale square function, which is the key ingredient in the proof of our main convergence result, Theorem 1.2 below. With the choice of renormalization constants in (1.14) we now give a different description of the rescaled SLFV processes X N , which will clarify the comparison of Theorem 1.2 below with the fixed kernel voter model result in [6]. Assume X 0 ∈ M F (R d ) and that the compactly supported initial conditions w̃ N 0 satisfy (1.18). For each N , let w̃ N be the (original, unscaled) SLFV process defined in (1.1) with fixed interaction radius r, fixed impact factor ρ and initial condition w N 0 = w̃ N 0 , and define the rescaled SLFV process by This process has the same law as w N defined using Π N right before Theorem 1.0, with J and M given in (1.14).
. Thus the interaction radius for w N is r/ √ N .) Finally our approximating empirical measures are given by (1.20) so that (1.18) just asserts that X N 0 → X 0 . A simple change of variables shows that in terms of the unscaled SLFV processes,w N , we have for any bounded Borel φ on R d , Here is our main result for the scaled SLFV process. For a measure or function H, we let supp(H) denote its closed support. Recall the definition ofσ 2 from (1.4).
It is important to note that in our scaling regime with J = 1, the original w̃ N we are working with is an ordinary SLFV process with fixed interaction range r and impact factor ρ, but with an initial condition in which type 1's are scarce.
Equation (1.21) should be compared to the corresponding rescaled empirical measures in [6] associated with a sequence of voter models ξ In that reference it is shown thatX N converges weakly in D([0, ∞), M F (R d )) to an appropriate SBM, whose branching rate is determined by the asymptotics of the escape probability (from 0) for a continuous time random walk starting at a uniformly chosen neighbour of 0 in the integer lattice through a two-particle dual calculation. This suggests the same should hold (as it does) for the SLFV but now with the asymptotics of the non-coalescing probability of our two particle dual playing the role of the random walk escape probability.
The proof follows a familiar outline, based in part on methods in [6]. For appropriate test functions φ the semimartingale decomposition from [5], recalled in Section 2, states that is a local martingale, and D N (φ) is a drift term of bounded variation.
In Section 2 we provide some elementary simplifications for the explicit expressions for both D N (φ) and the predictable quadratic variation process M N (φ) t from [5]. In Section 3 we use the above and the one-and two-particle duals to calculate the first moments of X N t and give uniform L 2 bounds on the total mass X N t (1) (Corollary 3.2) which will be used throughout.
Assuming the key Proposition 4.1 which is proved in Section 7, tightness of {X N } is then established in Section 4, where Theorem 1.2 is also proved by showing that any weak limit satisfies the martingale problem for SBM(X 0 , σ 2 , b). The term D N (φ) is easy to handle (Lemma 2.3); it is the asymptotic behavior of the quadratic variation process M N (φ) which requires some work. The key result here is the aforementioned Proposition 4.1 which we present here for the discussion below.
After establishing preliminary random walk results in Section 5 and facts about twoparticle duals in Section 6, it is proved in Section 7. Its proof uses Proposition 1.1 but the issues go well beyond this result. The behavior of the quadratic variation process is the main difference in the proofs of Theorem 1.2 and its counterpart in [5], Theorem 1.0. Lemma 4.3 in [5] shows that a key term in the variation process is negligible in the limit N → ∞. This fact is a consequence of the assumption J → ∞. In our case, with J ≡ 1, this term is nonnegligible, and in fact determines the limiting SBM branching rate. Its analysis is the main objective of Section 7. The analysis for d ≥ 3 is straightforward; it is the 2-dimensional case (the most relevant from a biological perspective) that is the most interesting. In this setting the proof requires an extension of the arguments in [6] and [8] used to analyze the voter model and stochastic Lotka-Volterra models, respectively. The analogues of Proposition 4.1 in [6] ((I1) in that reference) and [8] (Proposition 4.7 in this work) involved L 2 and L p (p > 1 is used) norms, respectively, instead of the L 1 norm in Proposition 4.1, but also had no supremum over time in the expectation. When the L 2 norm is expanded in the voter model paper this leads to a four-particle dual calculation, while for the more general stochastic Lotka-Volterra models considered in [8], a trick using the Markov property reduced this to a three-particle dual calculation. Here, because of the non-Markovian property of individual coordinates in the dual, similar calculations seem out of reach and we are led to the L 1 convergence in Proposition 4.1 which must be established using only one-and two-particle duals. The first issue here is that squares are easier to handle than absolute values (the p > 1 in [8] is bounded eventually by a square using a stopping argument), and here the innocuous looking Lemma 7.9 below allows one to handle the square (even with a supremum over time) by using a martingale argument. This then enables us to take absolute values inside the time integral where two-particle duals (albeit more complicated ones than those in [8]) can handle the calculation. Here a second issue arises as even in handling a second moment calculation in Proposition 7.2 of [8], the use of stochastic calculus there leads to a three-particle calculation. We follow a more efficient path in its analogue, Lemma 7.8, in Section 7 which only involves the two-particle dual. A third issue is the fact that the weaker L 1 convergence in Proposition 4.1 will require some additional technical work to establish the local uniform integrability of the {M N (φ) 2 t : N }, and hence identify the limiting square function. This is what occupies most of the proof of Theorem 1.2 in Section 4. As a small bonus, the fact that Proposition 4.1 controls the square functions uniformly in time means it also allows one to establish tightness without any higher moments. The required properties of the two-particle dual are established in Section 6. Lemma 6.1 represents the difference of the coordinates of the dual as the time change of a continuous time random walk and this result is then used to obtain several probability estimates on the two-particle dual. These results (notably Lemmas 6.3 to 6.7) then play a central role in Section 7. The time-change is particularly useful when controlling the two-particle dual when the particles are close together and the dual motions slow down.
It would be interesting to see if it is possible to extend Theorem 1.2 to the variable but bounded radius case discussed above.
Constants.
In proofs, C will denote a positive constant whose value may change from line to line. We will use C T and C φ for constants depending on T > 0 or functions φ in a similar way. In some cases constants will be numbered and dependence on various quantities indicated explicitly. Finally, most constants will have an implicit dependence on the impact radius r, this dependence will be pointed out in some cases for clarity.
Semimartingale characterization of the SLFV
for all s ≥ 0.
Proof. By the dynamics (1.1), for (x, s) ∈ Π N (we may assume there is at most one such x), w N s (y) = w N s− (y) for all y / ∈ B N r (x), and for y ∈ B N r (x), Thus, |w N s (y) − w N s− (y)| ≤ ρ1 B N r (x) (y), and so, The martingale characterization below is provided by Lemma 3.1 of [5]. The filtration below is implicit in their argument. Although φ = 1 is not included in that result it is easy to handle it by a localization argument using the stopping times T n = inf{t ≥ 0 : has the semimartingale decomposition: (2.6) Implicit in the above is the fact that the local martingale M N t (φ) is locally square integrable, but this is already clear from the fact that it has bounded jumps. The latter follows from Lemma 2.1 and (2.4) which imply for all s ≥ 0. (2.7) For the drift term D N s (φ) we will need only the following facts.
Part (a) follows easily from (2.1), and (b) is the special case of Lemma 4.2 (and its proof) in [5] for our choices of J, M, K in (1.14). (We note that the constant C(d) in Definition 4.1 in [5] is B1 (x 1 ) 2 dx.) Turning next to the martingale square function, for (2.10) Proof. Define Then replacing X N s with Kw N s in (2.2), after expanding and rearranging, we find that On account of 0 ≤ w N s ≤ 1, I is nonnegative, hence (a) follows for m N from the above expression, and is immediate for m̄ N from (2.9) (integrate out z 3 in the first line on the right-hand side).
Consider the integrals over B N r (x) in (2.11). By a change of variables and order of integration, Plugging this into (2.11), and using the definition ofm N Using the fact that |I( This proves (c).
Total mass bounds
We start with the dual particle systems for the rescaled SLFV process w N t in (1.19).
If η and (ξ 1 , ξ 2 ) are as in (1.8), (1.9), introduce the rescaled duals, (3.1) Then (1.8) and (1.9) imply for Borel ψ 1 on R d , and Borel ψ 2 on (R d ) 2 , and t ≥ 0 (recall As before, either ψ i ≥ 0, or one side is integrable for the above to hold. A simple change of variables shows that (3.2) implies (for ψ 1 as above) (a) There exists C 3.5 > 0 such that for s ≥ 0, and so conclude that This completes the proof of (3.5).
(b) For d = 2, using the two particle duality equation If we plug this bound and (3.8) into (3.7), we get Plugging these bounds into (3.9) we obtain (3.6).
Proof. By Proposition 2.2 and Lemma 2.3(a), , and so is a non-negative martingale. By Doob's L 2 submartingale inequality, As we do not yet know square integrability, the first inequality holds by considering a sequence of localizing stopping times and applying monotone convergence. Combining the above bounds we obtain (3.10), and hence the next-to-last statement as well.
It is easy to repeat the above reasoning using Lemma 2.4(a) and see that This in turn shows that the local martingale M N (φ) is in fact a square integrable martingale.
Proof of main result
The proof of Theorem 1.2 proceeds by taking limits as N → ∞ in Proposition 2.2 to derive the martingale problem for the limiting super-Brownian motion. The main issue is the identification of the square function of the limiting martingale part and the key here is the following result: This will be proved in Section 7. In this section we will establish Theorem 1.2, assuming this result. If S is a metric space, recall that a sequence of laws on D(R + , S) is C-tight iff it is tight and all limit laws are continuous. C-tightness on D(R + , S)×C(R + , S) is then defined in the obvious manner. The first step is to prove: {A N : N ≥ 3} is tight, and hence relatively compact, in C(R + , R) by Prohorov's theorem. It then follows from Proposition 4.1 that the sequence of continuous (recall (2.6)) increasing processes { M N (φ) · : N ≥ 3} is relatively compact in C(R + , R), and so also tight by Prohorov's theorem again.
Therefore if 0 ≤ s < t ≤ T , then by the above and Corollary 3.2, Using (4.2) and the C- Proof. By the Kurtz-Jakubowski theorem (e.g. see Proposition 3.1 in [6]) it suffices to show: The last ( We are ready to turn to the main result. Proof of Theorem 1.2. By Proposition 4.3 it suffices to show that every weak subsequential limit is the super-Brownian motion described in the Theorem. Fix φ ∈ C 3 0 (R d ). By Lemma 4.2 and Skorokhod's theorem, and then taking a further subsequence, we may assume that we are on a probability space where Since the limit is continuous a.s. one has in fact a.s. uniform convergence on compact time intervals. It also follows from the above and Corollary 3.2 that → 0 a.s. and in L 1 as k → ∞ for all T > 0. This and Proposition 4.1 show that It follows from (4.6), (4.7), and Proposition 4.1 that Define an a.s. continuous process by Then the above, the convergence of the initial conditions in (1.18), and the semimartin- )-martingale by Corollary 3.2, it follows from the above that M (φ) is a continuous martingale and a standard argument (e.g. see the proof of Theorem 3.5 in [6]) shows it is in fact an (F X t )-martingale. Recalling (1.12), (4.8), and the value of b in Theorem 1.2, it remains to identify the square function of M (φ) as A φ by showing For d ≥ 3 this is fairly easy, but we give a stopping argument to include the more delicate The convergence in (4.11) readily shows that We claim that (4.14) The reason there is an issue here is that we do not know whether or not lim k T N k J = T J a.s. It follows from (4.13) that for t ≤ T J we have lim k T N k J ∧ t = t = T J ∧ t (the convergence is uniform for t ≤ T J ∧ T for any fixed T ) and therefore by (4.11) and (4.9), A simple calculation using (4.13) shows that (sup ∅ := 0) with probability one for any In view of the above and (4.15), to prove (4.14) it suffices to show that for T > 0 fixed, By (4.9) and (4.11) this would follow from lim sup For this we will use the following lemma, whose proof is deferred to the end of this section.
It follows from our jump bounds in (2.7) that Recalling that M N (φ) is a square integrable martingale (from Corollary 3.2), we have by optional stopping, Next use (4.13), and the convergence in (4.9) and (4.11), together with Fatou's lemma, to see that where the last is by (4.20). Let J → ∞ and then n → ∞ to prove the result for s < t fixed, as required.
hŪ (z) given in (1.3). We will need basic information about this random walk, as well as a way to compareξ t to it.
Throughout the paper, Y t = Y x t will denote a rate 2ρ|B r | random walk with jump distribution that of Ū, starting at x under P x . That is, Y x t will be the pure-jump Markov process on R d with generator defined for suitable f . We will often make use of the Poisson process construction . . . which have the same law as Ū, and S n = Ū 1 + · · · + Ū n , n ≥ 1. In particular, for all x ∈ R d , t > 0, and nonnegative Borel f , Proof. (a) According to Theorem 19.1 of [3], there is a uniform bound on the densities of S n / √ n, n = 1, 2, . . . , so that By a standard large deviations estimate, for 0 < α < 1, where we have used the large deviation bound with α = 1/2. This proves (a) for Y starting at x. The result for Y Ū t follows from the observation that Y Ū t has the same law as S N (t)+1 and a slight alteration in the above calculation.
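The displayed generator and the Poisson construction did not survive extraction here; the following is a hedged reconstruction from the surrounding description (a pure-jump walk of rate 2ρ|B r | with i.i.d. steps distributed as Ū), not the paper's verbatim formulas:

```latex
% Hedged reconstruction (assumption): generator of the rate-2\rho|B_r| pure-jump
% walk with step law \bar U, and its Poisson-process construction.
\mathcal{A}_Y f(x) = 2\rho|B_r|\,\mathbb{E}\bigl[f(x+\bar U)-f(x)\bigr], \qquad x\in\mathbb{R}^d,
\qquad
Y^x_t = x + S_{N(t)}, \qquad S_n = \bar U_1+\cdots+\bar U_n,
```

where N is a Poisson process of rate 2ρ|B r | independent of the i.i.d. steps Ū i.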
To see this we switch to component notation, and writeŪ j = (Ū n is a sum of bounded, mean zero independent random variables, so a martingale square function argument (e.g. see Theorem 21.1 of [4]) shows that for The first sum is bounded by (k − 1) k . The second sum is bounded by This proves (5.4) for t ≥ 1.
(c) This is immediate from (b) and Markov's inequality.
Proof. Let A > |x|. By radial symmetry and (5.1), f (x) = |x| 2−d is a harmonic function for Y . If we let σ = t a ∧ T A , then |Y s∧σ | 2−d is a bounded martingale (recall a > 2r), and so proving (5.8). Proof. By radial symmetry and (5.1), log |x| is a harmonic function for Y . If σ = t a ∧ T A as before then log |Y s∧σ | is a bounded martingale, and (5.13) Using |Y T A | ≥ A and |Y ta | > a − 2r in the above gives Rearranging gives (5.10). A similar argument yields (5.11). For (5.12), rearranging (5.13) gives (5.14) Proof. By (5.5) with k = 2, for all x, A as in the Lemma, To handle P x (T A ≥ A 2 log A) we must first estimate E x (T 2 A ). Let σ 2 = E(|Ū | 2 ), σ 4 = E(|Ū | 4 ) and λ = 2ρ|B r |, and define the functions It is a straightforward calculation to check that both u = u 2 and u = u 4 satisfy This and the fact that for p = 2, 4, |y| p , and A Y (|y| p ) are bounded on {|y| ≤ A+2r}, so that where we have used (5.16). Let t → ∞ on the left-hand side of the above to conclude On account of this bound and Markov's inequality, we have for |x| < A/2 and A > 2, Together with (5.15) this proves (5.14).
The following technical result will play a key role in the proof of Lemma 6.5 below.
Proof. We may suppose |w| > 3r/ √ N , because otherwise C 5.18 can be chosen large enough so that the right side of (5.18) is at least one. Now for any A > |w| To handle the first term, we apply Lemma 5.3 with x = w √ N and a = 3r, Using s ≤ t and |w| ≤ (log N ) −α , and taking N ≥ N 0 (t), we see that for some C(t) > 0, Plug the above bounds in (5.20) to see that for N ≥ N 0 (t) ∨ N 1 (α, β), For the second term in (5.19), take k ≥ 1/α and use (5.5) to get 1.13)).
The two particle dual
In this section we collect some properties of the two-particle dual which will be needed in our analysis of the martingale square functions. Our main focus will be on the difference of the two particles. Define and observe that ψ r (a) is decreasing in |a| and 0 ≤ ψ r (a)/ρ|B r | ≤ 1. Consider the two-particle dual ( , and the coalescence time τ defined in (1.6). By the dynamics defining the two-particle dual (recall (1.5)), the fact that |B r (a) ∩ B r (b)| = |B r (0) ∩ B r (a − b)| shows that for y ≠ 0, ξ makes transitions y → y + Ū at rate 2ρ|B r | − 2ψ r (y), and y → 0 at rate ψ r (y); its generator Ã is given by Recall from Section 5 that Y x t is the rate 2ρ|B r | random walk starting at x ∈ R d under P x , and with jump distribution that of Ū and generator A Y given in (5.1) for f ∈ B(R d ).
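The display for the generator announced above is missing from this extraction; assuming only the transition rates just stated, one plausible reconstruction (an assumption, not the paper's verbatim display) is:

```latex
% Sketch of the generator of the dual difference, using only the rates stated above:
% jumps y -> y+\bar U at rate 2\rho|B_r|-2\psi_r(y), absorption y -> 0 at rate \psi_r(y).
\tilde{\mathcal{A}} f(y) = \bigl(2\rho|B_r|-2\psi_r(y)\bigr)\,\mathbb{E}\bigl[f(y+\bar U)-f(y)\bigr]
 + \psi_r(y)\bigl(f(0)-f(y)\bigr), \qquad y \neq 0 .
```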
For a random variable V we let Y V denote the same random walk with initial law that of V , and will use this notation with other Markov processes below.
We will construct a version of ξ x t by absorbing a random time change of Y x at 0. Define β(y) = 1 − ψ r (y)/(ρ|B r |) and I(t) = ∫ 0 t β(Y x s ) −1 ds. (6.4) Note that for x ≠ 0, Y x s ≠ 0 for all s a.s., and thus inf s≤t β(Y x s ) > 0 a.s. This implies that I(t) is finite and strictly increasing a.s. for all t. Evidently I(t) = ∞ for all t > 0 if x = 0. We will allow x = 0 later, but until otherwise indicated we will take our initial point x ≠ 0. From the definition of I we see that for 0 < s < t, t − s ≤ I(t) − I(s).
(6.5) Therefore I −1 (t) exists for all t a.s., and , then it follows from (6.6) that for all but countably many t, and therefore that Clearly, I −1 (t) ≤ t. For x = 0 it is natural to define I −1 (t) = 0 for all t ≥ 0, which means that Ỹ 0 t := Y 0 I −1 (t) = 0 for all t ≥ 0. Thus (6.7) holds for all x and We may apply Theorems 1.1 and 1.3 of Sec. 6.1 of [19] to see that Ỹ x is the unique solution of the martingale problem for Here we note that the continuity of f is not needed for Theorem 1.3 of [19] in our jump process setting, as the proof there shows. Uniqueness of the martingale problem is classical for such bounded jump generators (e.g., see Theorem 4.1 in Chapter 4 of [19]), and so Ỹ x is the unique Feller process with generator AỸ , and in particular is strong Markov. Finally we send Ỹ x to its absorbing state 0 according to a continuous additive functional. For an independent mean one exponential random variable, e, define the absorbing time κ = κ x = inf{t ≥ 0 : C x t > e}, (6.9) and the absorbed process ξ. Then ξ x is a Feller jump process and an elementary calculation shows that it solves the From (6.1) we see that the two-particle dual difference, ξ, is the Feller jump process satisfying the same well-posed martingale problem, and so, as the notation suggests, ξ x has the same law as ξ x . We have proved: where κ = κ x is as in (6.9), then We often denote the starting point x of ξ in the underlying probability as P x . The tail behaviour of the coalescing time κ x will be important for us. Introduce , a ∈ R d . (6.11) Proof. By definition of κ, The following result shows that I −1 (t) is close to t, and so Y x t is a good approximation to Ỹ x t .
Lemma 6.3.
There is a constant C 6.3 > 0 such that for all 0 < α < 1 and t > 1, and Proof. Let Y 0 = x, |x| > 2r. By (6.4) and Y s ≠ 0 for all s, ds. (6.14) By an elementary argument, there is a constant C 6.15 = C 6.15 (d, r) > 0 such that We are assuming |x| > 2r, so using the density bound (5.2), we see that On account of (6.15), plugging this bound into (6.14) gives This proves (6.13), because by (6.5), P x (t − I −1 (t) ≥ t α ) ≤ P x (I(t) − t ≥ t α ), and we also have I −1 (t) ≤ t since I(t) ≥ t. The proof for Y Ū is essentially the same.
(6.21) By (6.13) with α = 1/2 and N ≥ N 0 (q) (recall |x| > 2r), Next, using the Markov property at time u N , we have for N ≥ N 0 (q), In the next to last line we have used the d = 2 bound; if d ≥ 3, We will also need a bound on the two-particle dual ξ t = (ξ 1 , ξ 2 t ) after the coalescing time κ for any d ≥ 2. In this setting assume W 1,x1 , W 2,x2 and W 3,0 are independent rate ρ|B r | random walks in R d with step (6.23) distributionŪ (now in R d ) and starting at points x 1 , x 2 , 0 ∈ R d , respectively.
Proof. The jump rate of W to the diagonal becomes unbounded as it approaches the diagonal (for ρ = 1), so we proceed more carefully than in the proof of Lemma 6.1, making use of optional stopping. Let Here U y is uniformly distributed on B r (y 1 ) ∩ B r (y 2 ) and is independent of the uniform (on B r ) r.v. U . It is easy to check that W n solves the martingale problem forĀ n on the LetT n = inf{t ≥ 0 :ξ t ∈ R n } ≤ ∞. Using the properties ofĪ −1 and (6.28), it is easy to check that It follows that If we defineξT n t =ξ(t ∧T n ), the above impliesξT n Here we recall again that the continuity of f assumed in Ch. 6 Theorem 1.3 of [19] is not needed in our jump process setting. A bit of arithmetic shows G n f (y) = (ρ|B r | − ψ r (y 1 − y 2 ))[E(f (y 1 +Ū , y 2 ) + f (y 1 , y 2 +Ū ) − 2f (y))]1(|y 1 − y 2 ≥ 1/n) If ξ = (ξ 1 , ξ 2 ) is the two-particle dual process, as described in (1.5), the above is the generator of the Feller pure jump process ξ(t ∧ T n ), where T n = inf{t ≥ 0 : ξ t ∈ R n } and so ξ Tn (t) = ξ(t ∧ T n ) also solves the martingale problem for G n (f ∈ B(R d × R d )).
By well-posedness of this martingale problem (Section 2 and Thm. 4.1 of Chapter 4 of [19]) we conclude that ξ Tn and ξT n are identical in law for all n ∈ N. Since R n ↓ ∅ and ξ(T n ), ξ(T n ) ∈ R n when these times are finite, it follows that T n , T n ↑ ∞ a.s. as n → ∞ (in fact for large n they will be infinite a.s.), and therefore ξ and ξ are identical in law.
The following result is now an easy consequence of (6.24), Lemma 6.6 and the bound I −1 (t) ≤ t for all t ≥ 0. Lemma 6.7. Assume ξ x is the two-particle dual in R d × R d , starting at x = (x 1 , x 2 ). Then we may assume there are random walks W i,xi (i = 0, 1, 2, x 0 = 0) as in (6.23) such that Proof. By a change of variables, Returning to the definition ofm N,1 by the martingale property of X N s (1) (Corollary 3.2). Next, we bound the difference |E(X N s (φ 2 )) − X N 0 (φ 2 )|. By the single particle duality equation (3.2) and a change of variables, Using the smoothness of φ and scaling, we see that Lemma 5.1(b) (it applies to the rate ρ|B r | walk η as well) implies Combining this bound with (7.5) and (7.6) gives (7.4).
To handlem N,2 s (φ) we apply the two-particle duality equation (3.3) and then split the resulting expression into two pieces, obtaining 1{τ N > s} dz 1 dz 2 dx. (7.9) Convergence of SLFV to SBM Lemma 7.2. There is a constant C 7.10 = C 7.10 (φ) > 0 such that for s ≥ 0, Proof. By translation invariance, changing of variables and order of integration, we see Changing variables again with x = x + ξ N,1 s and adding and subtracting φ 2 (x ), the right-side above equals For fixed z 1 , z 2 ∈ B r , letting z 3 = 0, Lemma 6.7 implies that using Lemma 5.1(b) for the last inequality. Plugging this bound into (7.11), we obtain which proves (7.10).
Using Lemmas 7.1 and 7.2 in (7.1) and (7.7), we arrive at the following: We turn now to the analysis of J N,2 (1) which is (7.13).
Proof. Let s ∈ [s N , 2s N ], let t N = N s/4, and define
τ > N s dy
To prove (7.15), by Lemma 7.4 it suffices to show that each E i is uniformly bounded in s ∈ [s N , 2s N ] by terms in the right side of (7.15).
where we have used (5.5). With this bound and X N 0 (·) ≤ K we obtain from the definition of E 1 that The above bound is then extended to all N ≥ 3 by increasing C. Using X N 0 (·) ≤ K again, we have It follows from Lemma 6.4, taking β = 1/3, that for N ≥ N 0 (q), As before, the above bound is then valid for all N ≥ 3 by increasing C. We split E 3 into two parts, letting G(a, b) = |YŪ I −1 (u) | > 2r for all u ∈ [a, b] . If we let G N = G(N s/2, N s) then we can write Using the above and Lemma 6.1, we see that on the above event and for u as above, where we recall that F Y t is the right-continuous filtration generated by the random walk Y . Then I −1 (s) is an (F Y t )-stopping time. By the strong Markov property of Y ,Ŷ 0 is a copy of Y starting at 0 and is independent ofF t N . SinceỸ t N is F t N -measurable, we may conclude from (7.17) and (7.16) that uniformly in x, Use this and the fact that P (τ (Ū ) > t N ) ≤ C K (from Proposition 1.1 if d = 2) in (7.18), to see that where the last line is very crude if d ≥ 3, and is an equality if d = 2. Combining the bounds for E 1 , E 2 , E 3 , E 3 , we establish (7.15). Corollary 7.3, Lemma 7.5, and q > 4 imply the following: Remark 7.7. To identify the square function of M N (φ) we will need to use the above and the Markov property to bound This means we will need to bound the expected value of the last term in (7.19) with X N s−s N replacing X N 0 . For d ≥ 3 we only need to bound the resulting double integral on the right-hand side of (7.19) by X N s−s N (1) 2 , but for d = 2 we require the following additional result.
Proof. By the duality equation (3.3) and then a change of variables, Here E 1 = (log N ) First, integrating y 2 out in E 1 yields By Lemma 6.1, switching toξ y √ N , and using I −1 (u) ≤ u, By Lemma 5.5, which is applicable because T ≥ s ≥ δ N and we consider only |y| ≤ √ δ N , the last probability above is bounded by C log 1/|y| log N (C = C(T )). It follows that By definition, We use the representation of ξ = (ξ 1 , ξ 2 ) in Lemma 6.6 and the fact that on G N ∩ {τ > N s}, ξ sN = W sN . Then, dropping the indicator of G N ∩ {τ > N s}, we see that With this bound, integrating out y 2 first and then y, it follows that Using X N 0 (·) ≤ K = log N and integrating out y 2 gives By the representation of ξ = (ξ 1 , ξ 2 ) given in Lemma 6.6, for any y, Using Lemma 5.5 again, for δ N ≤ s ≤ T , we have Combining the bounds on E 1 , E 2 , E 3 gives (7.20).
We are almost ready for the proof of Proposition 4.1. The proof is lengthy, so we separate out one of its key steps in the following lemma. For s ≥ s N define Now if j T = 2k − 1, then by (7.22), which also holds if j T is even. DefineX N T (1) = sup t≤T X N t (1). By Lemma 2.4(a), (3.5) and (3.10) we have where we have used Lemma 2.4(a), (3.5), and the martingale property of X N s (1) (Corollary 3.2). Thus from the above, (7.26), and Corollary 3.2, the left side of (7.21) is bounded above by and so to prove (4.1) it suffices to show sup and therefore So by our choice of δ, q (recall (7.14)), we need only integrate over [3δ N , T ] in (7.30).
Appendix: Proof of Proposition 1.1
From the discussion following the statement of Proposition 1.1 we may assume d = 2 throughout. Recall the definitions and time change construction from Section 6 (especially Lemmas 6.1 and 6.2), using the rate 2ρ|B r | random walk Y t , the difference process ξ x t , and the absorption time τ = κ. By Lemma 6.
As the exact form of the killing rate k(x) will not be important in our arguments, we will replace it with a general radial function φ : where ↓ in |x| means non-increasing in |x|, and similarly for ↑. We assume throughout that φ has these properties. Recall the stopping times t A and T A from (5.7).
To prove (8.3) we first establish a number of properties of It is elementary that 0 ≤ Φ(x, A) ≤ 1 and that by recurrence, Φ(x, A) → 0 as A → ∞. The next two results will show that Φ(x, A) is increasing in |x|. Lemma 8.2. Let N ∈ N. If 0 = s 0 < s 1 < · · · < s N and f : R N +1 → R is bounded and ↑ in each coordinate then Proof. Let U, U 1 , U 2 , . . . be iid rv's uniform on B r , and let S m = U 1 + · · · + U m . The first step is to prove that if N = 1 then for m = 1, 2 . . . , E[f (|y|, |y + S m |)] is increasing in |y|. (8.10) Let u ≥ 0, and define h u (y) = P (|y + U | ≤ u) = |B r ∩ B u (−y)| |B r | = |B r ∩ B u (y)| |B r | . (8.11) It is easy to see that h u (y) is decreasing in |y|, so that |y + S 1 | is stochastically increasing in |y|, which proves (8.10) for m = 1. Now suppose m = 2, and consider P (|y + S 2 | ≤ u) = P (|y Clearly h u (y) depends only on |y|, and having established it is decreasing in |y|, the m = 1 case of (8.10) implies that E[h u (y + S 2 )] is decreasing in |y|, which shows |y + S 2 | is stochastically increasing in |y|, proving (8.10) for m = 2. The general inductive step for (8.10) is similar.
Proof. By monotone convergence, we may assume A, t are finite and g is bounded. Let g(0) = lim s↓0 g(s). For N ∈ N let M N ∈ N and 0 = s N 0 < s N 1 < · · · < s N M N = t satisfy s N i+1 − s N i < 2 −N for 0 ≤ i < M N , and define τ N = min{s N i : |Y s N i | > A} ∧ t. By right-continuity of |Y s |, τ N ↓ T A a.s. as N → ∞. By continuity of g on [0, ∞) and dominated convergence, It is easy to check that G N i is increasing in each of its variables, and hence applying A consequence of the strong Markov property we will use repeatedly is (8.14) Lemma 8.4. There exists C 8.15 = C 8.15 (r) > 1 such that for all k ≥ 2 and 0 < |x| ≤ k < A, . (8.15) Proof. By the monotonicity in Lemma 8.3, it suffices to prove (8.15) for x = x k = (k, 0). Assume additionally k > 6r ∨ r −1 and A > r 2 . By (8.14), |Y t3r | ≤ 3r, and monotonicity, we where we have set α(r) = Φ(x 3r , 4r) < 1. Insert this into (8.16) and rearrange to conclude where the second inequality uses k > 1/r and A > r 2 . In view of (8.17), letting C = 8/(1 − α(r)), we now have for all k > 6r ∨ r −1 ∨ 2 and A > k ∨ r 2 . It is easy to see that C can be increased so that (8.19) will hold for all k ≥ 2 and A > k, completing the proof of (8.15).
We will construct a coupling of the random walks Y t started at x = x in order to obtain good bounds on the difference Φ(x , A) − Φ(x, A). We start in discrete time. Let {U i } be iid r.v.'s which are uniformly distributed over B r , and for x ∈ H r = {(x 1 , x 2 ) : Let π denote the reflection mapping π(x 1 , x 2 ) = (−x 1 , x 2 ) and set x = π(x ). We will use a reflection coupling to define (S x n : n ≥ 0). Let H = {(x 1 , x 2 ) : x 1 ≤ 0}, , and define N c = N x,x c = min{n ≥ 1 : S x n ∈ B r (π(S x n−1 ))}.
Lemma 8.5. N c ≤ N := min{n ≥ 0 : S x n ∈ H } a.s., and so S x n ∈ H r for all 0 ≤ n < N c a.s.
The result follows.
We now define (S x n ) n≥0 by Then S x 0 = x, and it follows from Lemma 8.5 that for n < N c , S x n is in H r and so S x n = S x n , which implies that N c = min{n ≥ 0 : S x n = S x n }.
That is, N c is the coupling time of (S x n ) and (S x n ). If we let F S x n = σ(S x m , m ≤ n), then N c is an (F S x n )-stopping time, and S x is (F S x n )-adapted. We next show that S x n is an (F S x n )-random walk starting at x with step distribution U 1 , as the notation suggests.
Lemma 8.6. For any Borel
Proof. This is obvious on {N c ≤ n} (in F S x n ) since then S x n and S x n+1 equal S x n and S x n+1 , respectively. Suppose now that N c > n, and defineB =B(ω) = B r (S x n ) ∩ B r (π(S x n )), so that π(B) =B, (8.22) andB ⊂ B r (S x n ). (8.23) This last inclusion holds because S x n = S x n or π(S x n ) for all n. For simplicity we will write F n for F S x n in the rest of this proof. By the definition of S x n , P (S x n+1 ∈ A|F n )1(N c > n) = P (S x n+1 ∈ B r (π(S x n )), N c > n, S x n + U n+1 ∈ A|F n ) + P (S x n+1 / ∈ B r (π(S x n )), N c > n, π(S x n+1 ) ∈ A|F n ) = P (S x n+1 ∈B ∩ A, N c > n|F n ) + P (S x n+1 / ∈B, N c > n, π(S x n+1 ) ∈ A|F n ) = P (π(S x n+1 ) ∈B ∩ π(A), N c > n|F n ) + P (π(S x n+1 ) ∈B c ∩ A, N c > n)|F n ) (by (8.22)) = [P (S x n + π(U n+1 ) ∈B ∩ π(A)|F n ) + P (S x n + π(U n+1 ) ∈B c ∩ A|F n )]1(N c > n).
Next introduce the dependence on ω in the above, and use the fact that, conditionally on F n , S x n (ω) + π(U n+1 ) is uniformly distributed over B r (S x n (ω)) to see that if |C| is the Lebesgue measure of C, then the above evaluated at ω is a.s. equal to The result follows.
n ) denote a copy of the random walk starting at x under P x . Lemma 8.7. There is a constant C 8.7 so that for all x in the positive x 1 -axis and all n ∈ N, Proof. Use Lemma 8.5 and then the reflection principle to see that The step distribution of (S n ) has density f (u) = 2 √ r 2 − u 2 /|B r | ≤ 1/r on [−r, r]. It follows from the d = 1 version of (5.6) applied to random variables with this distribution that for a constant C = C(r), so we are done.
We now use translation invariance to extend the above to points x, x ∈ {(x 1 , 0) : = N c = min{n ≥ 1 : S x n ∈ B r (π m (S x n−1 ))} ≤ N x = min{n ≥ 0 : S x n ∈ H m }, (8.24) where the inequality is by Lemma 8.5, and The above results imply that both S x and S x are (F S x n )-random walks with step distribution U 1 , N x,x c = min{n ≥ 0 : S x n = S x n } (8.26) is their coupling time, and Next define coupled copies of the discrete time random walk with step distribution , and also setF x n = F S x 2n . We will writeF n for F x n if there is no ambiguity. Then it follows from Lemma 8.6 that bothŶ x n andŶ x n are (F n )-random walks with step distribution U 1 + U 2 , that is, they are (F n )-adapted and P (Ŷ x n+1 ∈ A|F n )(ω) = P (Ŷ x n (ω) + U 1 + U 2 ∈ A) a.s., (8.28) and similarly forŶ x . It follows easily from (8.26 Proof. By (8.29) The result follows from (8.27).
We move now to the continuous time random walks. Let N (t) be an independent Poisson process with rate λ = 2ρ|B r | and jump time sequence (s n ) n∈Z+ , i.e., s n = inf{t ≥ 0 : N t = n}. For K > 0 put x = (K + 2r, 0), and let x ∈ [K, K + 2r) × {0}. Define coupled continuous time rate λ random walks with step distribution U 1 + U 2 , starting at x and x, respectively, by The coupling time of these random walks is , and so by setting n = N t in (8.30), we have Let F t be the right-continuous filtration generated by (Y x , Y x , N ), and let Y t (respec-tivelyŶ n ) denote a generic rate λ continuous time (respectively, discrete time) random walk with step distribution U 1 + U 2 , starting at 0 under P 0 . Lemma 8.10. (a) Both Y x and Y x are rate λ continuous time (F t )-random walks (and (F t )-strong Markov processes) with jump distribution U 1 + U 2 . That is for y = x or x , t > 0, and any a.s. finite (F t )-stopping time S, P (Y y S+t ∈ A|F S )(ω) = P 0 (Y y S (ω) + Y t ∈ A) a.s. for any Borel A ⊂ R 2 . For y = x or x and 2r ≤ δ < A we let t y δ = inf{t ≥ 0 : |Y y s | ≤ δ}, T y A = inf{t ≥ 0 : |Y y t | ≥ A}, and also set We definet y δ ,T y A ,t x,x δ , andT x,x A in a similar way, using the discrete time random walkŝ Y x ,Ŷ x , for example,t y δ = min{n ≥ 0 : |Ŷ y n | ≤ δ}. Lemma 8.11. Let K > 3r and x = (K + 2r, 0). (c) LetŶ (1) be the first coordinate ofŶ , and let x, x , m be as above, with K > δ ≥ 3r, so that δ < |x| ≤ |x | < 2K. Let n = K 2−2ε . Then, using Lemma 8.9 for the second inequality and symmetry for the second to last inequality, we have k | ≥ K/2 , (8.36) provided K is larger than some K 0 (δ) > 0. We recall Theorem 21.1 in [4], which in the present context implies If we take p = p 0 (ε) large enough so that K pε > K 1−ε , substituting into (8.36) we obtain for a constant C > 0 depending on ε, . Multiplication of C by a large enough constant depending on δ allows us to remove the restriction K > K 0 (δ). That is, for some C 8.34 (δ, ε) > 0, So (c) is now immediate from (8.38).
As a consequence, so that ∆ = ∆ 1 + ∆ 2 , and bound ∆ 1 , ∆ 2 separately. For ∆ 1 , using (8.40) and T x A = t x 3r , ). (8.41) It suffices to consider the first term, as the second follows in the same way. By the strong Markov property (T A < t 3r )]. Now taking a = 3r in Lemma 5.3, and noting that 3r a.s.
Choose K 0 large enough so that K > K 0 implies K 2 > 2K r + 2. If, in addition we have K > K 0 and A > r 2 , then By replacing 8 with a sufficiently large constant C we may drop the additional conditions K > K 0 and A > r 2 , and so obtain for all K > 5r ∧ 2 and A > 2K + 2r, Plug this bound into (8.42) and use the coupling bound Lemma 8.11(c) with δ = 3r, ε = 1/2 to obtain .
The above and (8.41) imply .
Now consider ∆ 2 . Recalling from (8.40) that t x 3r ≤ t x 3r , ∆ 2 is bounded by the sum of In ∆ 2a , the event in the indicator function belongs to F t x . (8.44) Finally, consider ∆ 2b . Dropping the exponential and applying the strong Markov property to Y x at time t x 3r , we have (T A < t 3r ) . 3r | ≤ 2K + 10r. Let K 0 be large enough so that K > K 0 implies K 2 > (2K/r) + 10, and assume additionally that K > K 0 and A > (2K + 10r) ∨ r 2 . By the hitting probability bound (5.11), with a = 3r we see that if |Y x t x 3r | > 3r, then The same bound holds if |Y x T x 3r | ≤ 3r because then the left-hand side is zero. Now the additional restrictions on K, A can be dropped by replacing 8 with a larger constant C, so we may conclude that for A, K as in the Lemma and on {t x 3r < S c }, (T A < t 3r ) ≤ C log K log(A 2 ) . | 16,003.4 | 2019-09-07T00:00:00.000 | [
"Mathematics",
"Physics"
] |
Study of the Technical Feasibility of the Usage of Waste from Electric Posts as Coarse Aggregate in the Mixture of Concrete for Structural Purposes
— This work aims to analyze the technical and economic feasibility of using concrete post waste as coarse aggregate for the manufacture of new electricity posts. To carry out the experimental study, waste processing, characterization and, finally, concrete dosage were performed. The adopted methodology consists of the partial substitution of the natural aggregate by recycled aggregate originating from damaged concrete electricity posts collected in the city of Palmas - TO. To obtain the results, specimens and new concrete posts were tested with the replacement of the aggregate, respecting the approval guidelines for concrete posts for electric power networks. The results obtained point to the use of waste from unusable posts as a viable alternative for removing this residue from the environment and for replacing natural aggregates in the manufacture of new posts that meet the mechanical resistance specifications.
I. INTRODUCTION
The construction industry currently represents one of the largest consumers of natural resources in the world [1]. In addition, it is the largest generator of waste, in mass and volume, in urban environments. The lack of policies and guidelines related to this waste culminates in its inadequate disposal in urban and natural environments, causing significant environmental impacts in both urban and rural areas [2].
According to data from the National Department of Environmental Sanitation, construction and demolition waste (CDW) represents 40 to 70% of all solid waste generated in the country. The cleaning of this material, improperly disposed of in urban environments, generates a high cost for the municipalities, since it cannot be disposed of in common sanitary landfills [3], a resource that could instead be used to benefit society and to improve urban infrastructure.
In this way, the recycling of construction waste is an effective alternative for reducing the extraction of natural resources for construction supplies, maintaining a healthy urban environment, decreasing municipal spending and increasing job creation [2].
Despite presenting itself as a viable alternative to all of the aforementioned problems, recycling is not a simple process, and the generated product needs to undergo strict quality control so that it can reach the market competitively enough to generate all the benefits it promises. In other words, it is not enough to adopt CDW recycling; it is necessary to meet market requirements [4].
Thus, in order to make the recycling of CDW viable, aspects such as the technical performance of the recycled product, the environmental impacts caused by the recycling process itself and by the disposal of the recycled waste at the end of the production chain, and market viability must be taken into account [5]. Several studies have been developed to make the use of these recycled aggregates feasible in the constitution of concrete, among them: studies evaluating the specific mass of aggregates from CDW and its effects on the properties of concrete [6], the use of steel fibers in concretes produced with recycled coarse aggregates [5], the use of recycled CDW aggregates in the base of paving structures [7], and studies of the effects of the use of coarse and fine aggregates from CDW on the properties of structural concrete [8] and [9].
All of these works provide an important framework for studies on the use of recycled CDW materials in the constitution of structural concretes. Many of them show positive results in this area, such as the study by [6] that deals with the use of CDW residues separated by density in the concrete composition. In that study the author verified the direct relationship between the aggregate density and the ultimate strength range of the developed concrete.
These are the works that motivate the study of the technical viability of using CDW obtained from concrete posts in the composition of a concrete for structural purposes, since the source of the aggregate guarantees homogeneity of its characteristics, both in relation to density and to the constituent materials and other features.
The viability of using this aggregate also contributes to the sustainability of the post production chain, which grows every year along with urbanization, since the steel obtained from the demolition of the posts is already recycled and used for other purposes.
According to data provided by the electricity distribution company of the state of Tocantins [10], 3,940 reinforced concrete posts were discarded in 2019. According to the company, the main factor in the demolition of electricity distribution poles is collision by cars, which completely damages the posts. Table 1 shows the number of posts lost in 2019 per month and the monthly average.
II. METHODOLOGY
In order to achieve the proposed objectives, the applied experimental methodology compared the results of resistance to compression and flexion of samples from posts dosed with conventional concrete and concrete posts with the substitution of natural coarse aggregate for recycled aggregate.
Sample preparation and crushing
In this phase, the preparation of the post residues was carried out, with the separation of the concrete from the steel bars. The removal of the bars was made manually in the company that manufactured the posts, with the aid of mallets and pneumatic hammers.
After the separation, the demolished concrete was sent for crushing in a jaw crusher, in order to obtain a similar granulometry to that of the natural aggregate, with a maximum characteristic length of 19 mm, the same maximum characteristic length of the natural aggregate with the removal of the fine material.
Characterization of the aggregate
Following the parameters of [11], the granulometric compositions of both the natural and the recycled coarse aggregate were determined, as well as the granulometry of the fine aggregate.
To determine the specific mass and unit mass of the fine aggregate, the guidelines of [12] were adopted; it should be noted that only natural fine aggregates were used. For the natural coarse aggregate (pebble) and the replacement aggregate, the same determinations were made based on [13].
Concrete dosing
In this stage, the concrete was produced with the reference mix design, dosed by the method of the Brazilian Portland Cement Association (ABCP), and with the replacement of the coarse aggregate by crushed concrete from the discarded posts.
This method consists of collecting laboratory data on the materials used in the production of the concrete, namely: fineness modulus (MF), maximum characteristic length (MCL), humidity (h%), and specific (γ) and unit (δ) mass. From the data obtained, tables and graphs were used to define the mix proportions. An fck (characteristic strength at 28 days) of 25 MPa was defined for aggressiveness class II [14], a moderate aggressiveness class with a low risk of deterioration of the structure. Another characteristic adopted was a concrete slump of 50 ± 10 mm.
Based on the mix proportions by mass, the consumption of inputs needed to make 6 specimens for each mixture was calculated, with replacement percentages of 0%, 25% and 50%.
Characterization of dosed concrete
In the fresh state, for the determination of the slump of the material, the criteria presented in [15] were used, which determines the consistency by slump of the cone trunk, known as slump test.
In the hardened state, tests were carried out to determine water absorption, void ratios and specific mass, resistance to axial compression and, finally, bending tests on posts manufactured with the new concrete matrix.
The determination of water absorption, void ratios and specific mass was made based on [16]; for this assay, two specimens were molded for each mixture, totaling 6 (six) specimens.
Three mixtures were prepared, with 6 specimens per mixture, totaling 18 specimens, which, according to [17], were molded at 10x20 centimeters each. After 28 days of normal curing, the samples were taken to the hydraulic press and broken according to [18], in order to obtain their compressive strength.
Bending Test on electric posts
The last and main test performed is the bending test on posts, which seeks to verify the resistance of the posts when subjected to bending efforts. 6 poles with a height of 7.5m were molded and tested according to the standard requirements of [19].
The procedure consists of the following stages: the setting of the base of the post on the test bench, with the footing length provided by the following equation: e = L/10 + 0.60, where "L" is the nominal pole length in meters and "e" is the footing length in meters. With the footing measure thus obtained, the post is fixed to the bench.
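As a purely illustrative sketch of the test geometry described above (not taken from the paper), the following computes the footing length as reconstructed above, the load application point and the minimum proof load; the nominal resistance value in the example is made up.

```python
def bending_test_setup(nominal_length_m: float, nominal_resistance_kgf: float):
    """Illustrative calculation of the post bending-test parameters.

    Assumptions (hedged): footing length e = L/10 + 0.60 m as reconstructed
    above; load applied 200 mm from the top; proof load 1.4 x Rn, held for
    at least 5 minutes, as stated in the text.
    """
    footing_m = nominal_length_m / 10.0 + 0.60            # embedded length
    free_length_m = nominal_length_m - footing_m          # length above the bench
    load_point_from_top_m = 0.200                         # effort applied 200 mm from top
    lever_arm_m = free_length_m - load_point_from_top_m   # approximate moment arm
    proof_load_kgf = 1.4 * nominal_resistance_kgf         # minimum rupture load
    return footing_m, lever_arm_m, proof_load_kgf

# Example with hypothetical values: a 7.5 m post with Rn = 150 kgf.
print(bending_test_setup(7.5, 150.0))
```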
The distance from which the effort should be applied to the top of the post must be 200mm. The application and withdrawal of effort should always be slow and gradual, avoiding sudden sweeping of the load during the tests.
With the post set, the effort Rn, corresponding to its nominal resistance, was applied at a distance of 200mm from its top, for at least 1 (one) minute, to allow the accommodation of the footing.
After the setting, an effort of 1.4 times Rn, corresponding to the minimum rupture load of the pole, is applied for a minimum of 5 (five) minutes. After this first application, the load is progressively increased until the rupture load of the part is obtained.
Granulometry
The results of the granulometric characterization test of the natural fine aggregate revealed a fineness modulus of 3.39 and a maximum characteristic length of 4.8 mm.
In the analysis of the results collected in the granulometric characterization of the recycled aggregate (Figure 1), a better distribution among the fractions retained on the sieves was found when compared to the natural aggregate. The result is a higher percentage of material with a diameter of less than 12.5 mm, as well as a greater amount of powdery material in the sample, which can affect the slump and strength of the dosed concrete, since this material can take up part of the water intended for hydration and mixing of the cement.
Water absorption, specific and apparent mass.
The specific mass, apparent mass and water absorption values of recycled and natural coarse aggregates can be observed in the table below.
The percentage of water absorption by the recycled aggregate is much higher than that of the natural aggregate, a common situation when dealing with demolition material. The presence of fine material and the porosity of the demolished concrete are the main factors responsible for the absorption of the mixing water, which ends up impairing the workability of the dosed concrete.
For the fine aggregate used, the specific and apparent mass values found are shown in the table below. Table 3. Values of specific and unit mass of the fine aggregate.
Concrete mix design in mass
The dosages, elaborated according to the ACI method to resist a compressive strength of 25 MPa (the adequate strength for the manufacture of new concrete posts), with a slump of 100 mm for reinforced parts and a standard deviation of 4.0 MPa for reasonable control, resulted in the mass proportions shown in Table 4.
Slump
The dosage made with natural aggregates showed good workability and consistency within satisfactory standards, observing the standards established by [20] and [21]. Table 4 shows the results obtained for both mix designs. What is observed is that the dosage with 25% recycled coarse aggregate did not show a substantial change in its workability when compared to the reference mix design. However, for the mix design with 50% substitution, there was a reduction in the concrete slump, although still within the tolerance limits of +/- 2.0 mm of the slump test.
Concrete in the hardened state
According to the adopted methodology, tests were carried out on cylindrical specimens and on molded posts. At 28 days of curing, axial load and water absorption tests were performed on the cylindrical specimens at the CEULP/ULBRA materials and structures laboratory, in Palmas - TO.
The results of resistance and absorption can be seen in Table 5. Both the compressive strength and the water absorption results remained within the adopted reference values. [22] fixes the average absorption of the specimens at up to 5.5% and the individual limit of water absorption by concrete, for electric posts, at up to 7%.
Bending resistance
The bending tests (Figure 2) were carried out at the company Concreto Artefatos de Cimento in Araguaína - Tocantins, with 6 posts of 5 m length, two specimens of each mix design, all with an approximate age of 28 days of curing, loaded up to the modulus of rupture. Table 6 shows the results obtained during the experiment. As noted in the table, the deflections for the nominal load of 150 kgf applied in the direction of lower inertia did not exceed the limit established by [19]. The deflections of all evaluated posts were within the range of 5% of their height, which corresponds to 37.5 cm.
The residual deflections measured after the removal of the load were below the established limit of 0.5% of the post nominal length. The posts with 25% replacement showed residual deflections identical to those of the posts with the reference mix design, while the posts molded with 50% replacement showed a reduction in their residual deflections, which may be an indication of an increase in their modulus of elasticity (less plastic deformation), still meeting the parameters of the standards for concrete posts.
The number of cracks when applying a force of 140% of the nominal load value was higher in the posts with 50% replacement; however, their openings remained below the limit of 0.30 mm. After the load ceased, the cracks closed, becoming capillary cracks, meeting the requirements of the standard, as shown in Figure 3.
IV. CONCLUSION
The results obtained in this study were confirmed by comparison with the concepts used in the research and through the experimental tests.
It can be said that the use of recycled concrete aggregates originating from the crushing of waste posts presents itself as a potential solution for the manufacture of new concrete electricity posts. Its quality allows satisfactory behavior from the point of view of mechanical resistance, the main aspect observed in the approval and acceptance of a concrete pole.
It is verified that the use of recycled concrete aggregates contributes to the removal of a significant volume of waste per discarded unit, thus reducing the amount of material that would otherwise be discarded. Not least, there would be a decrease in the extraction of natural aggregates obtained from mineral deposits. | 3,378.2 | 2020-09-06T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
A nonlinear conversion model from ITRFyy to CGCS2000
At present, the ITRS series of reference frames is widely used around the world, and GNSS results are mostly based on an ITRF frame. Transformation from ITRF to CGCS2000 is not straightforward, which restricts the promotion and use of CGCS2000. The conversion relationship between CGCS2000 and the ITRF frames therefore has immediate practical significance. This paper constructs a two-step model of epoch reduction and frame conversion, which estimates nonlinear station motion, to solve this problem. Tests show that the nonlinear model achieves an improvement in both precision and accuracy relative to the traditional model.
INTRODUCTION
CGCS2000, released in 2008, marks the transformation of China's national geodetic coordinate system from a reference-ellipsoid-centric coordinate system to a geocentric coordinate system, and is a landmark in the research and application of geodetic surveying in China (Chen, 2008; Yang, 2009; Wei, 2008). It also indicates that Chinese research on geodetic reference systems is in line with international practice. CGCS2000 has already made major contributions to national defence construction. However, how to accurately transform results at an instantaneous epoch to the CGCS2000 framework has always been a bottleneck restricting the promotion and application of CGCS2000. This paper uses a two-step method of epoch reduction and frame conversion to solve this problem.
THE NONLINEAR EPOCH CALCULATED MODEL
The coordinates at the transient epoch under the ITRFyy framework should first be reduced to the 2000.0 epoch. Conversion parameters based on a specific epoch are usually calculated using a model which only considers the linear motion of the site,

X(t) = X(t0) + V (t − t0), (1)

where t is the transient (observation) time, t0 is the origin (reference) time and V is the velocity of the station. However, a large number of international studies have shown that the motion of a site is nonlinear, including not only periodic motion with annual and semi-annual periodicity, but also jumps caused by tectonic events such as large earthquakes, and the post-seismic deformation. Therefore, this paper adopts a model that takes nonlinear motion into account. The conversion model for the three position components is as follows: where the terms include the co-seismic deformation and the post-seismic deformation. It can be seen that the above model is mainly composed of three parts: the influence of tectonic movement such as earthquakes, the long-term tectonic velocity term, and the annual (periodic) term.
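The displayed model (2) is lost in the extraction. As a hedged illustration of the kind of trajectory model described here (velocity, annual and semi-annual terms, co-seismic offsets and a post-seismic transient), the following sketch uses a common logarithmic post-seismic term; the paper's exact functional form and coefficients are not known.

```python
import numpy as np

def station_position(t, t0, x0, v, annual, semiannual, quakes):
    """Illustrative nonlinear trajectory model for one coordinate component.

    t, t0      : decimal years (observation and reference epochs)
    x0, v      : position at t0 and linear velocity
    annual     : (a, b) coefficients of the sin/cos annual terms
    semiannual : (c, d) coefficients of the sin/cos semi-annual terms
    quakes     : list of (t_eq, offset, amp, tau) -- co-seismic step plus an
                 assumed logarithmic post-seismic transient
    """
    dt = t - t0
    x = x0 + v * dt
    x += annual[0] * np.sin(2 * np.pi * t) + annual[1] * np.cos(2 * np.pi * t)
    x += semiannual[0] * np.sin(4 * np.pi * t) + semiannual[1] * np.cos(4 * np.pi * t)
    for t_eq, offset, amp, tau in quakes:
        if t >= t_eq:
            x += offset + amp * np.log(1.0 + (t - t_eq) / tau)
    return x

# Reducing an observed coordinate at epoch t back to 2000.0 then amounts to
# subtracting everything except x0: x0 = x_obs - (model(t) - model(t0)).
```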
In order to accurately determine these three components, observation data from 2011-2017 of 410 national GNSS stations located in the Chinese mainland were used. We consider the nonlinear and linear characteristics simultaneously; that is, we establish the above functional model from each GNSS station's time series and use parameter estimation to quantitatively estimate the co-seismic and post-seismic deformation, the velocity, and the annual and semi-annual periodic terms.
GNSS data processing method
1) Daily processing: In order to eliminate the influence of inconsistencies in models and processing strategies, the same model and method are used to uniformly process the above data with the GAMIT/GLOBK (Herring, 2002) software. The processing of the GPS carrier data is performed in 24-hour sessions, using a double-difference mode and a satellite orbit relaxation solution. To reduce calculation time, we divide the stations into five sub-areas according to their geographical location, and each partition is processed separately. The partitions are bound together in the next step by common parameters. The obtained single-day relaxation solutions include the station positions, the estimates of the satellite orbital parameters, and the variance-covariance matrix.
2) Multi-session adjustment: The above daily relaxation solutions are combined through common estimates to obtain a total solution. Further, the transformation parameters relative to ITRF2014 are estimated from the globally distributed base stations included in the solution, and finally the single-day free-network solutions are converted to ITRF2014 through the obtained 7 parameters. In this way we obtain the continuous station position time series in the ITRF2014 frame.
Periodic model of GNSS reference station
We use spectral analysis to study the time series, aiming to detect whether they contain other periodic signals; this also allows us to verify whether the geophysical models have already been removed during data processing. The results of the spectral analysis show that the most significant signal in the horizontal time series is the annual period (see Figure 2, Figure 3), followed by the semi-annual period.
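As an illustration of this spectral check (not the authors' actual processing), a daily coordinate residual series can be inspected for annual and semi-annual peaks with a simple periodogram:

```python
import numpy as np

def dominant_periods(residuals_mm, sampling_days=1.0, n_peaks=2):
    """Return the strongest periods (in days) in a daily residual series.

    A minimal FFT-based periodogram; real GNSS series contain gaps and
    would normally be analysed with a Lomb-Scargle periodogram instead.
    """
    x = np.asarray(residuals_mm, dtype=float)
    x = x - np.mean(x)
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=sampling_days)
    order = np.argsort(spec[1:])[::-1] + 1            # skip the zero frequency
    return [1.0 / freqs[i] for i in order[:n_peaks]]  # periods in days

# An annual signal shows up as a peak near 365 days, a semi-annual one near 183 days.
```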
Velocity model of GNSS reference station
Based on the above model, we obtain the velocity field of the Chinese mainland (see Fig. 4); the average error of the horizontal velocity is about ±0.3 mm/yr, and that of the vertical direction is about ±0.5 mm/yr.
METHOD FOR REFERENCE TRANSFORMATION
Tight constraint method and S-transformation method are the two widely used methods for frame transformation.
Tight constraint method:
A wide range of papers has been published on the concept of the free network and on optimal methods of computing a set of coordinates from a singular normal equation. In the tight constraint method, a set of constraints is added to the free-network normal equation (Mittermyer 1972; Perelmuter 1979; Blaha 1982; Dermanis 1994a; Xu 1997). Tight constraints are used in the absence of a datum: some stable stations are selected as core stations, and their coordinates and velocities in a given reference frame are taken as true values. This requires high accuracy of the core stations' coordinates. Constrained least squares estimates can be implemented with Lagrangian multipliers. Although this is very effective in solving the rank deficiency of the normal equation, it can deform the geodetic control network, generating biases in the coordinates of stations located far away from the core stations.
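As a sketch of the tight-constraint idea (constrained least squares via Lagrange multipliers), not of the authors' software, the bordered normal-equation system can be solved as follows, where N and b are the rank-deficient normal matrix and right-hand side and C x = c are the constraints fixing the core stations:

```python
import numpy as np

def constrained_solution(N, b, C, c):
    """Constrained least squares via Lagrange multipliers.

    Solves the bordered system [[N, C^T], [C, 0]] [x; k] = [b; c];
    the constraints remove the datum (rank) defect of the free network.
    """
    n, m = N.shape[0], C.shape[0]
    A = np.block([[N, C.T], [C, np.zeros((m, m))]])
    rhs = np.concatenate([b, c])
    sol = np.linalg.solve(A, rhs)
    return sol[:n]   # estimated parameters; sol[n:] are the multipliers
```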
S-transformation method:
The S-transformation (also called Helmert transformation) method was introduced by Baarda. Many authors have contributed to this subject since then, such as Mierlo (Mierlo, 1980), who discussed free-network analysis and the answer that S-transformations give to the rank deficiency of the normal equations. Several researchers (Teunissen, 1985; Koch, 1987; Crosilla et al., 1989; Xu, 1997) have delved more deeply into the problem, each adding their own contribution. The standard relation of transformation between two reference systems is a Euclidean similarity of seven (or fourteen) parameters: three translations, one scale factor, and three rotations (plus their rates in the fourteen-parameter case).
Bias caused by the two methods
In order to analyze the difference between the tight constraint method and the S-transformation method, we used the data of 1900 GNSS reference stations located in China during August 1st to 31st, 2014. After obtaining the daily loose solutions (in which stations' positions and velocities are loosely constrained to a priori values), the relaxed network was aligned to ITRF2008 by using the two methods described above. Then we compared the coordinate results. The differences in coordinates are shown in Figure 2-10. It can be seen that the tight constraint method and the S-transformation method differ considerably: for the X direction, 50% of the stations' biases are within 1 cm and 90% are within 5 cm, and similarly for the Y direction. The biases in the Z direction are much bigger, with some stations reaching up to 10 cm. Therefore, we use the S-transformation method for the frame transformation. Figure 5. Bias of the XYZ directions. Typically, the ITRF conversion uses 14 conversion parameters (i.e., seven conversion parameters plus their rates). The conversion between different ITRF frames is established by these 14 parameters. The conversion parameters are given by the IERS (https://www.iers.org/IERS/EN/DataProducts/ITRF/itrf.html), as shown in the following table. The transformation model is:
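The parameter table and the transformation formula did not survive extraction. As a hedged sketch of the standard 14-parameter (Helmert) model used between ITRF realizations — small-angle rotations, with the seven parameters propagated linearly in time — one might write the following; sign conventions vary between publications.

```python
import numpy as np

def helmert_14(x, t, params, rates, t0):
    """Apply a small-angle 7-parameter similarity transform whose parameters
    evolve linearly in time (the usual ITRF convention).

    x      : (3,) position in metres
    params : (T1, T2, T3, D, R1, R2, R3) at epoch t0
             (translations in m, scale D dimensionless, rotations in radians)
    rates  : time derivatives of the same seven parameters (per year)
    """
    x = np.asarray(x, dtype=float)
    p = np.asarray(params, dtype=float) + np.asarray(rates, dtype=float) * (t - t0)
    T, D = p[:3], p[3]
    R1, R2, R3 = p[4:]
    R = np.array([[0.0, -R3,  R2],
                  [R3,  0.0, -R1],
                  [-R2, R1,  0.0]])
    return x + T + D * x + R @ x
```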
PRECISION ANALYSIS OF THE CONVERSION MODEL
In order to verify the validity of the transformation model established in this paper, 21 benchmarks with true CGCS2000 coordinates were selected for comparison. Test 1: epochs are reduced without considering the impact of major earthquakes, annual signals, etc., and a linear model is used to describe site position changes. Test 2: the impact of major earthquakes and annual signals is estimated, and a nonlinear model is used to describe site position changes. The differences between the CGCS2000 coordinate results obtained by the two tests and the true values are shown in Fig. 6 and Fig. 7 (units: cm). They reveal that in Test 1 the horizontal differences are mostly within 2-3 cm, but the elevation differences are large: the largest, at GUAN and WUHN, reach decimetre magnitude, and HAIK and XIAA are close to a decimetre. In Test 2 the horizontal differences are mostly within 1 cm, about 50% of the elevation differences are within 5 cm, and the largest, at GUAN and DLHA, are about 8 cm. Test 2 clearly improves the accuracy compared with Test 1, and demonstrates that the transformation model constructed in this paper has very high accuracy.
CONCLUSIONS
We create a model which considers the nonlinear terms of the stations' time series to convert transient-epoch results under ITRFyy to CGCS2000. In the first step, the coordinates at the transient epoch are reduced to the 2000.0 epoch; in doing so we estimate not only the velocity term but also the nonlinear terms such as the co-seismic and post-seismic displacements, loading effects and so on. In the second step, we transform the coordinates from ITRFyy to CGCS2000 by using an S-transformation method. Then, we test the precision of the new model by comparing the coordinates generated by this model with the true values of 21 stations. The horizontal differences are mostly within 1 cm, about 50% of the elevation differences are within 5 cm, and the largest, at GUAN and DLHA, are about 8 cm. It is obvious that the new model is more accurate compared with the traditional model, meaning that the model constructed in this paper has very high accuracy and precision. | 2,234.4 | 2020-02-07T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
CULTO: AN ONTOLOGY-BASED ANNOTATION TOOL FOR DATA CURATION IN CULTURAL HERITAGE
: This paper proposes CulTO, a software tool relying on a computational ontology for Cultural Heritage domain modelling, with a specific focus on religious historical buildings, for supporting cultural heritage experts in their investigations. It is specifically designed to support annotation, automatic indexing, classification and curation of photographic data and text documents of historical buildings. CulTO also serves as a useful tool for Historical Building Information Modeling (H-BIM) by enabling semantic 3D data modeling and further enrichment with non-geometrical information of historical buildings through the inclusion of new concepts about historical documents, images, decay or deformation evidence as well as decorative elements into BIM platforms. CulTO is the result of a joint research effort between the Laboratory of Surveying and Architectural Photogrammetry "Luigi Andreozzi" and the PeRCeiVe Lab (Pattern Recognition and Computer Vision Lab) of the University of Catania.
INTRODUCTION
In the last decades, we have witnessed the explosion of digital cultural assets all over the world. We are aware that digital cultural resources have a great potential - often not fully exploited - for giving access to cultural heritage to citizens, researchers and cultural and creative industries. Nevertheless, there is still a lack of software tools and applications able to transform such resources into semantically enriched ecosystems that ease information accessibility. The impact of such tools and applications would open new perspectives in the field of humanities research, as well as increasing awareness by citizens and industries in terms of cultural identity and creativity. The need for specific actions has also been highlighted in three H2020 calls on European Cultural Heritage (Reflective 6 - 2015, Reflective 7 - 2015, SC6-CULT-COOP-2016-2017) stressing the importance of interconnecting digital cultural assets through thesauri, classification schemes, taxonomies and ontologies.
This paper proposes CulTO, Cultural heritage Tool based on Ontology, a software tool relying on a fine-grained computational ontology for Cultural Heritage domain modelling, with a specific focus on religious historical buildings, to support cultural heritage experts in their investigations. CulTO is specifically designed to support the curation of photographic data and text documents of historical buildings and their indexing, retrieval and classification. The developed computational ontology also aims at enriching Historical Building Information Modeling (H-BIM) with non-geometrical information on historical buildings through the inclusion of new concepts about historical documents, images, decay or deformation evidence as well as decorative elements (Quattrini et al., 2016).
The CulTO computational ontology has been designed through a multi-facet bottom-up analysis of constructive, functional and decorative elements of a religious building - the church of Santa Maria delle Grazie in Misterbianco (Catania), Italy. We have modelled building elements at a high abstraction level using standard ontologies and schemas, thus enabling the generalization to other historical religious buildings as well as the integration with existing Cultural Heritage ontologies (e.g., CIDOC-CRM) (Ronzino et al., 2016). On top of the ontology, we have developed a software tool guiding experts in the annotation process, which is known to be time-consuming and error-prone, in view of further automated content analysis methods.
Thus, the main contributions of CulTO are: 1) it allows users to provide concept-level annotations constrained by a specific formal ontology; 2) it enables the creation of clusters of collected information (both visual and non-visual) as well as the automatic identification of which part of a historical building a specific image belongs to, thus easing the categorization effort; and 3) it supports searching and retrieving information by performing text queries on the image content, semantically driven by our ontology.
The remainder of the paper is organized as follows: Section 2 discusses mainly the state of the art on ontologies and information retrieval methods for Cultural Heritage. Section 3 is the core of the paper and describes the ontology, the case study, the tool and a preliminary information retrieval model exploiting semantically enriched image annotations. Section 4 deals with H-BIM data enrichment. The results are discussed in Section 5, while concluding remarks and future activities are given in Section 6.
Ontologies for Cultural Heritage
In recent years, the availability of large-scale unstructured and distributed knowledge, together with the massive production of multimedia data, makes the cultural heritage domain particularly suited for semantic web modelling. Indeed, the semantic web (ontologies, schemas, etc.) has found fertile ground in cultural heritage because of the need to integrate, enrich, annotate and share the produced data. A well-known attempt to provide a mechanism able to perform integration, interchange, structuring, reasoning and discoverability across many cultural heritage sources is the CIDOC/CRM ontology presented in (Crofts et al., 2003), developed mainly to store cultural heritage information. The CIDOC/CRM has been used as a conceptual representation of the cultural heritage domain in (Stasinopoulou et al., 2007), where an ontology-based metadata integration methodology is proposed. In (Papatheodorou et al., 2007) the expressiveness of the CIDOC/CRM ontology has been enhanced to perform inferences for intelligent querying through a Knowledge Discovery Interface. In (Alexiev et al., 2013) the "Fundamental Relations" approach is presented as an effective "search index" over the CRM complex graph.
Cultural heritage ontologies are often employed to support the development of high-level software tools for digital content exploitation. In (Ghiselli et al., 2005), an ontology-based web virtual museum is proposed where visitors can perform queries and create shared information by adding textual annotations. This new generation of approaches has enabled the conversion of traditional cultural heritage websites into well-designed and more content-rich ones (Bing et al., 2014), integrating distributed and heterogeneous resources, thus overcoming the limitations of systems such as the MultimediaN E-Culture project (Schreiber et al., 2008), which, instead, manually performs data enrichment through semantic web techniques for harvesting and aligning existing vocabularies and metadata schemas. The MultimediaN E-Culture project also developed a new software tool, named "ClioPatra", which allows users to submit queries based on familiar and simple keywords.
An attempt to integrate Building Information Modelling with an ontology-based knowledge management system is proposed in (Simeone et al., 2014), with the objective of improving BIM abilities for inference and reasoning through an ontology able to interrelate all the domains needed for a comprehensive interpretation of historical artefacts. The underlying ontology has then been extended in (Cursi et al., 2015) to model artefacts, their historical contexts, the heritage processes and all the actors interacting with buildings during the conservation process. Recently, a new workflow to integrate H-BIM 3D data with semantic web technologies, including taxonomies, has been presented in (Quattrini et al., 2017). More specifically, data enrichment is performed by creating a set of shared parameters in Revit (one of the most used BIM platforms), contextually with 3D modelling, reflecting the properties defined during the ontology design. One of the biggest challenges that information retrieval in the Cultural Heritage domain has to face is the natural heterogeneity of data. One of the main attempts to provide unified access to digital collections is the CatchUp full-text retrieval system (Kamps et al., 2009). Ontologies have often been exploited in image retrieval systems to improve accuracy as they allow for bridging the "semantic gap", i.e., the gap between the low-level content-based features and the data interpretation given by users.
In the eCHASE project (Hare et al., 2006) several cultural heritage institution metadata schemas have been mapped onto the CIDOC CRM to expose them using the Search and Retrieve Web Service (SRW). Recently, the INCEPTION project (Llamas et al., 2016) has focused on innovation in the 3D modelling of cultural heritage assets, enriched by semantic information, and their integration in a new H-BIM. The peculiarity of the system is that users are able to query the database using keywords and visualize a list of H-BIM models, descriptions, historic information and the corresponding images, classified through the application of deep learning techniques.
CULTO
In this section, we present our system -CULTO -for supporting the modelling of cultural heritage buildings as well as the visual data annotation step, necessary to develop high-level applications for data curation, retrieval and classification.
Ontology description
The main core of CULTO is its ontology, which has been designed to characterize religious historical buildings.Before describing our ontology, some key aspects of churches are given.
The most peculiar elements of these buildings are defined as Functional Elements, which are rooms of the building that serve a specific function. Among those, the crypt, chorus, presbytery, chapel, transept, nave, apse and sacristy are some of the main examples. These structures possess the same Constructive Elements, such as stairs, horizontal structures, walls and openings, generally found in any other type of building.
Other characteristic structures of churches are Ancillary Elements, a class that encloses altar, baptismal font and pulpit. These structures could be sorted, for example, by their constitutive materials or by their date of realization. Every Ancillary Element may exhibit a Decorative System, i.e. either a simple Decorative Element, usually a finishing, a sculptural decoration, non-load-bearing ribs or classical order elements, or a Decorative Structure (Restuccia, 1997), frequently found in portals and altars. A Decorative Structure is commonly a Simple System composed of an abutment and an arch or an architrave. In particular, an entablature added on a Simple System leads to a Trabeated System, while a classical order (pedestal, column and entablature) leads to an Overlapped System. Finally, developing a class called Find has been crucial to describe unknown objects, their function or their finding location.
All these elements, designed in the ontology as subclasses of PhysicalObject (a base class which encloses e.g. Altar, BlockAltar, Column, Capital, etc.), are characterized by peculiar properties, encoded as subclasses of the generic class PhysicalProperty (e.g. Material). The developed ontology could be adapted to other building types (and therefore many other case studies could be classified) by creating different subclasses of PhysicalObject and PhysicalProperty in order to represent the objects belonging to the new application domain and their attributes. We exploited our visual ontology to support the image annotation phase. To accomplish this, we extended the previous ontology with the concepts describing the annotation process in a generic application domain. In particular, the link between user annotations and ontology entities is modeled through the Annotation class, a subclass of the Sample class employed to associate sample images to a PhysicalProperty. Since the Annotation class is a subclass of the Sample one, it inherits the property isInImage, used to specify the location of an annotated object in an image, via the image identifier. Thus, for each new annotation, an Annotation instance is automatically created and associated with the corresponding PhysicalObject subclass instance; this allows the tool to infer all relevant properties encoded into the ontology.
Our ontology has been developed using Protégé, a free, open-source ontology editor which supports the OWL 2 (Web Ontology Language) and RDF specifications.
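To give a rough, concrete idea of how this class hierarchy and the annotation link can be expressed as RDF triples, the following Python/rdflib sketch builds a tiny fragment of such a model. It is not the actual CulTO schema: the namespace URI and the instance names are placeholders, and only class and property names explicitly mentioned in this paper (PhysicalObject, PhysicalProperty, Material, Sample, Annotation, isInImage, hasMaterial, materialHasType, hasAnnotation) are reused.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

CULTO = Namespace("http://example.org/culto#")   # placeholder namespace URI
g = Graph()
g.bind("culto", CULTO)

# class hierarchy: physical objects, physical properties, annotation machinery
for cls in ("Altar", "Column", "Capital"):
    g.add((CULTO[cls], RDFS.subClassOf, CULTO.PhysicalObject))
g.add((CULTO.Material, RDFS.subClassOf, CULTO.PhysicalProperty))
g.add((CULTO.Annotation, RDFS.subClassOf, CULTO.Sample))

# one annotated altar: the annotation records the image the object appears in
g.add((CULTO.altar_3, RDF.type, CULTO.Altar))
g.add((CULTO.marble, RDF.type, CULTO.Material))
g.add((CULTO.marble, CULTO.materialHasType, Literal("marble")))
g.add((CULTO.altar_3, CULTO.hasMaterial, CULTO.marble))
g.add((CULTO.anno_1, RDF.type, CULTO.Annotation))
g.add((CULTO.anno_1, CULTO.isInImage, Literal("IMG_0042.jpg")))
g.add((CULTO.altar_3, CULTO.hasAnnotation, CULTO.anno_1))

print(g.serialize(format="turtle"))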
Case study
The case study used in this paper is the church of Santa Maria delle Grazie in the ancient Misterbianco (5 km from Catania, Italy). This church is one of the few memories that survived the catastrophic events that occurred at the end of the 17th century in eastern and south-eastern Sicily: the disruptive Mount Etna eruption (1669) that covered and erased 16 Etnean towns and the earthquake (1693) that destroyed almost all the towns of the Val di Noto. The church was covered by the eruption of 1669 and was brought to light recently thanks to the excavations carried out by the Superintendence for Cultural Heritage of Catania.
The choice of this case study was motivated by the availability of a large set of documents and images whose classification and analysis is of key importance for understanding the architectural artefact and formulating specific hypotheses about its construction and transformation phases (Calabrò, 2016). Furthermore, the exceptional conservation conditions of the church, which was buried under the lava flow and then excavated, enable reasoning on the classification and localization of archaeological finds (Figure 2).
The study of the church is in progress and we have also acquired 3D data by means of laser scanning and photogrammetric techniques (Figures 3, 4) in order to start an in-depth investigation of this valuable architectural heritage. The archival documents found so far span the period between the end of the 16th century and 1667 (two years before the church was buried under a 12 m blanket of lava), and hundreds of images have been collected. Nevertheless, manual categorization and curation of the bulk of the gathered information is largely impractical, as it is extremely expensive and error-prone. Furthermore, the excavation work was not carried out as an archaeological one and the exact location of many findings, such as fragments of architectural decoration, frescos, etc., is still unknown, thus making the categorization process even trickier. Santa Maria delle Grazie is a regular-plan church with a single nave and a large presbytery, and presents two chapels, a bell tower and a large room recognized as the sacristy (Figure 4). The Bell Tower and Crocifisso chapel entrances are in the nave and the sacristy is located between them. A little vaulted hallway connects the so-called Gothic Chapel, dedicated to Santa Maria delle Grazie, and the presbytery. Overall there are nine altars: six of them are in the nave, two are in the chapels and the last, the major one, is in the presbytery. Considering one of the altars of the nave (Figures 5-6), the hierarchies and relations between elements may be split into the decorative system and the block altar. The decorative structure frames the niche and is classified as an overlapped system, i.e. a simple system with a classical order on it. The Classical Order is divided into pedestal (composed of base, dado and cimasa), column (column base, shaft and capital) and entablature (architrave, frieze and cornice). The block altar, instead, is composed of mensa, altar frontal and predella.
The annotation and visualization tool
To support data curation and retrieval we developed, on top of the previously described ontology, an annotation tool which aims at guiding and constraining users in the labeling process within the concepts enforced by the ontology. It provides means to draw polygons and to assign classes (the type of the annotated part, e.g. altar, column, etc.) and labels (the actual altar or column the user is currently annotating); a label indeed corresponds to a particular ontology instance/individual, whose properties (e.g. the kind of material or its shape) are already defined in the ontology itself.
Similarly to other annotation tools (Kavasidis, 2014; Russell, 2008), the interface presents the user with an image to work on, together with several tools for browsing through images, zooming in and out, and adding, editing and removing annotations.
However, unlike those other tools, part of the assignment responsibility is moved from the user to the tool itself in two different ways: 1) once a class is chosen, the tool allows the user to select one of the instances belonging to that class as the label for the current annotation; users do not need to provide any other information since all the properties of the annotated part are automatically inferred from the ones of the selected label; 2) once a part is annotated (e.g. a column), the tool automatically prompts users to annotate all its subparts (e.g. its capital), guiding the annotation process and inferring the proper subpart labels. Moreover, the user may add a generic textual description of the current annotation (e.g. the presumed altar dedication) and select some other properties predefined in the ontology for any visual annotation (e.g. the object visibility), as shown in Figure 7.
Furthermore, the tool allows the user to tag the position where an object was found, enabling a successive post-processing stage. Finally, in order to allow the annotation of unknown objects, it is possible to insert additional instances by selecting the class the object belongs to, clicking the "New" button, typing the new label name and selecting the object it is part of (if available) and all its visible properties (see Figure 7b). Once the "Add Instance" button is pressed, the ontology is updated with the provided information so that the new instance can be reused. This is the mechanism provided by the tool to dynamically augment the knowledge about the current application domain.
As mentioned before, our ontology has been developed with Protégé, thus all the annotations are exposed through an RDF endpoint for further querying and retrieval. To enable these two tasks, we integrated an RDF search engine into our annotation and visualization tool. Thanks to the ontology-based structure and to an ontology reasoner integrated in our tool, all our annotations are at the content level, encoding information not only on the type of objects but also on their materials. Thus, our search engine allows users to perform queries (shown in SPARQL) such as:

1) Find all marble objects

SELECT ?anno WHERE {
  ?obj rdf:type culto:PhysicalObject .
  ?obj culto:hasMaterial ?material .
  ?material culto:materialHasType ?type .
  ?obj culto:hasAnnotation ?anno .
  FILTER (str(?type) = 'marble')
}

2) Find all marble altars

SELECT ?obj WHERE {
  ?obj rdf:type culto:Altar .
  ?obj culto:altarHasMaterial ?material .
  ?material culto:materialHasType ?type .
  FILTER (str(?type) = 'marble')
}
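For readers who wish to reproduce such queries outside the tool, the following sketch shows how the first query could be executed with Python/rdflib against an RDF export of the annotated ontology. The file name and the culto namespace URI in the PREFIX declaration are placeholders (the paper does not report them); the property names are taken from the queries above.

from rdflib import Graph

g = Graph()
g.parse("culto_annotations.rdf")   # hypothetical RDF export from Protege

MARBLE_OBJECTS = """
PREFIX rdf:   <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX culto: <http://example.org/culto#>
SELECT ?anno WHERE {
  ?obj rdf:type culto:PhysicalObject .
  ?obj culto:hasMaterial ?material .
  ?material culto:materialHasType ?type .
  ?obj culto:hasAnnotation ?anno .
  FILTER (str(?type) = 'marble')
}
"""

for row in g.query(MARBLE_OBJECTS):
    print(row.anno)   # one line per annotation attached to a marble object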
H-BIM data enrichment
The ontology presented here potentially allows for overcoming one of the major shortcomings of available commercial BIM platforms, that is, the lack of support for adding new concepts about Cultural Heritage (historical documents, images, decay or state of conservation).
As a matter of fact, BIM platforms are fully compliant with new building constructions from both the geometrical and informative points of view. When dealing with Cultural Heritage, and in particular with Architectural Heritage, the main difficulty is to create new libraries of building components to be used in the virtual environment (Santagati et al., 2016; Murphy et al., 2013; Fai et al., 2014; Apollonio et al., 2013).
The ontology could serve as a semantic layer to be added to the BIM. Several tests were carried out to link the formalized ontology to Revit, one of the most used BIM platforms worldwide.
The main problem is that Revit does not have its own programming language (e.g. AutoLISP for AutoCAD), so the only dialogue/exchange allowed is between databases: the BIM model has to be exported into a database by means of DB Link and the Protégé ontology exported as an RDF database; then they can be merged/compared by developing a specific database tool, as already tested in (Fioravanti et al., 2015). Furthermore, the shared parameters of the BIM model should be labeled according to the ontology definition.
Figure 7. A: the user is prompted with a dialog box where he is able to tag the finding position with a red dot. B: once the user has selected the class Altar and clicked the "New" button, the dialog box allows the user to add a new instance in the current ontology and specify all its attributes (e.g. the altar material).
CONCLUSION AND FUTURE WORKS
This work is projected towards novel and more intelligent ways to manage, enrich and implement data on Cultural Heritage for a broader knowledge process aimed at the preservation, valorization and conservation of cultural assets. Our ontology-driven visualization tool is a great leap forward towards this goal, as it greatly supports users in the storage, curation and access of cultural heritage digital data. The next step will be mapping excavation findings onto the church plan in order to recompose all the digging steps. Although accompanied by good photographic documentation, the excavation works were not carried out as archaeological ones and the exact location of many findings (fragments of architectural decoration, frescos, etc.) is still unknown. For example, the dates engraved on several findings could help identify the naming and dating of the altars.
To support this task we are currently working on deep learning approaches that, leveraging our semantic visual annotations, will hopefully identify matches automatically. This possibility of mapping images related to findings onto a plan will be very useful in all those archaeological excavations with imagery documentation but no planimetric localization.
In the future we will work on the full integration of the developed ontology into BIM platforms and on the possibility of using this ontology to semantically segment a point cloud.
These relationships are shown as arrows in Figure 1, containing a partial visual representation of the developed ontology, and specify which class should be considered part of another (e.g. Capital is part of Column), while blue circles embody classes such as Capital, Shaft, ColumnBase (subclasses of PhysicalObject) or Material (subclass of PhysicalProperty).
Figure 1 .
Figure 1. (A) The Visual OWL representation of a subsection of the developed ontology. In particular, Column, Capital, Shaft and ColumnBase are defined as subclasses of PhysicalObject and are linked to each other by relationships in the form XHasY; the column material is in turn defined as a subclass of PhysicalProperty. (B) Extension of our visual ontology to support the annotation phase.
Figure 2 .
Figure 2. The excavation works at the church of Santa Maria delle Grazie in ancient Misterbianco.
Figure 3 .
Figure 3. Longitudinal cross section of the church of Santa Maria delle Grazie (ancient Misterbianco) in 3D and orthographic view of the point cloud in grey scale.
Figure 4 .
Figure 4. Plan of the church of Santa Maria delle Grazie (ancient Misterbianco) | 4,804.8 | 2017-08-18T00:00:00.000 | [
"Computer Science"
] |
Probiotics as the live microscopic fighters against Helicobacter pylori gastric infections
Background Helicobacter pylori (H. pylori) is the causative agent of stomach diseases such as duodenal ulcer and gastric cancer; in this regard, incomplete eradication of this bacterium has become a serious concern. Probiotics are a group of beneficial bacteria which increase the cure rate of H. pylori infections through various mechanisms such as competitive inhibition, co-aggregation ability, enhanced mucus production, production of bacteriocins, and modulation of the immune response. Result In this study, according to the retrieved articles, the anti-H. pylori activities of probiotics were reviewed. Based on studies, administration of standard antibiotic therapy combined with probiotics plays an important role in the effective treatment of H. pylori infection. According to the literature, Lactobacillus casei, Lactobacillus reuteri, Lactobacillus rhamnosus GG, and Saccharomyces boulardii can effectively eradicate H. pylori infection. Our results showed that in addition to decreasing gastrointestinal symptoms, probiotics can reduce the side effects of antibiotics (especially diarrhea) by altering the intestinal microbiome. Conclusion Nevertheless, the antagonistic activities of probiotics are H. pylori strain-specific. In general, these bacteria can be used for therapeutic purposes such as adjuvant therapy, as a drug-delivery system, as well as for enhancing the immune system against H. pylori infection.
Background
Helicobacter pylori (H. pylori) is a gram-negative, motile, helical and microaerophilic microorganism that is considered one of the most successful pathogens due to its persistent infection of the human stomach [1]. The global prevalence of this bacterium is high, so that according to the latest statistics H. pylori has colonized the stomachs of 4.4 billion people worldwide [2]. There is ample evidence that H. pylori is the etiologic agent of both gastric (gastric malignancy, peptic ulcer, chronic gastritis) and extragastric diseases [3][4][5]. Depending on the geographical area, the rate of infection with this pathogen varies; the frequency of infection with this bacterium is associated with several factors such as virulence factors (e.g. CagA and VacA) and socioeconomic status; for example, the rate of infection in some parts of Africa is close to 100% [6]. According to the literature, post-treatment re-infection is common in low-income countries with poor public health policy [7]. Basically, all patients infected with this bacterium should be treated; complete eradication of H. pylori improves peptic ulcer and mucosa-associated lymphoid tissue (MALT) lymphoma, as well as reduces the risk of gastric cancer and autoimmune liver disease [8][9][10]. The most common problems facing gastroenterologists include: (1) the antibiotic-resistance phenomenon, (2) persistence of bacteria in a latent status, (3) degradation of antibiotics in acidic gastric conditions, (4) re-infection, especially in regions with high prevalence, (5) adverse side effects of antibiotics such as diarrhea, nausea, vomiting, and abdominal pain, (6) rapid metabolization of antibiotics due to the CYP2C19 enzyme, and (7) poor compliance with multiple antibiotics [11][12][13]. In recent years, antibiotic resistance (with high divergence) has led to increased therapeutic failure in eradicating H. pylori with current regimens [14,15]. In the early 1990s, the eradication rate of the standard triple therapy was more than 90%; however, in recent decades, the effectiveness of this regimen has dropped to less than 70% [16][17][18]. According to the World Health Organization (WHO) report, the rates of resistance to clarithromycin and metronidazole ranged from 14-34% and 20-38%, respectively [19]. Graham et al. suggested that therapeutic regimens with less than 80% efficacy should be considered as treatment failures [20]. Recently, adjuvant therapy with probiotics has received much attention as a new strategy to increase the success of anti-H. pylori therapy [15]. Probiotics are a group of bacteria that confer various health benefits to the host [21]. Intestinal colonization with these microorganisms maintains the integrity of the mucosal immune system and inhibits the side effects associated with antibiotic use [21,22]. Probiotics are used for purposes such as treating diarrhea and preventing allergic reactions [23]. In vitro studies have shown that some probiotics, particularly Lactobacillus spp., possess anti-H. pylori activities [24]. García et al. found that the co-existence of Lactobacillus and H. pylori in patients with severe gastrointestinal diseases was significantly lower than in control subjects (without clinical symptoms); colonization of Lactobacillus spp. in the stomach leads to several events such as reduced gastritis, promotion of mucin regeneration, as well as downregulation of gene expression in the cag pathogenicity island [25]. Therefore, probiotic supplementation is considered as one of the promising solutions for the treatment of H.
pylori infection in symptomatic patients [15]. Based on studies, the use of probiotics as a supplement in addition to standard antibiotic treatment significantly improves the eradication rate of H. pylori infection compared to the administration of antibiotics alone [26,27]. The main purpose of this study was to provide an overview of the benefits of using probiotics in the treatment of H. pylori infection.
First-line therapy
According to European Helicobacter and Microbiota Study Group (EHMSG) guidelines, triple therapy is still recommended as the first-line treatment for H. pylori infection in areas with a low clarithromycin resistance rate [28]. Increasing clarithromycin resistance reduces the eradication rate of clarithromycin-containing triple therapy; for example, in Argentina the cure rate is estimated at 75% [29]. The situation in South Korea is even worse, so that, depending on the duration of treatment, the cure rate with this regimen has been estimated at 64% and 66% for 7 and 14 days, respectively [30]. According to the literature, clarithromycin resistance rates are 10.6-25%, 16%, and 1.7-23.4% in North America, Japan, and Europe, respectively [30][31][32][33]. On the other hand, metronidazole resistance is also increasing, so that the resistance in European and African countries is 17-44% and 100%, respectively [34][35][36]. Recently, Yao et al. showed that the rate of infection eradication in type 2 diabetic patients is up to 74% [37]. Bismuth quadruple therapy, a complex regimen containing proton pump inhibitors (PPIs), bismuth salt, tetracycline, and metronidazole, is also recommended as second-line (or even first-line) therapy in areas with high clarithromycin resistance [38]. In accordance with multicenter randomized controlled trials (RCTs), the cure rate of bismuth quadruple therapy is significantly higher than that of standard triple therapy (90.4% vs. 83.7%) at the same duration (14 days) [39]. However, in a meta-analysis study, Luther et al. evaluated nine RCTs and found that the eradication rate of infection in patients receiving bismuth quadruple therapy was the same as in those who had received clarithromycin triple therapy (78.3% vs. 77%) [40]. But it should be noted that bismuth citrate is harmful to human health, so this drug (or even tetracycline) is contraindicated in some areas [41]. In a comprehensive meta-analysis of fourteen RCTs, it was shown that the eradication rate of infection with both bismuth and non-bismuth quadruple regimens was 6% higher than with sequential treatment [42].
Second-line therapy
Levofloxacin triple therapy and bismuth quadruple therapy are considered two well-known therapeutic strategies against H. pylori infection [43]. The levofloxacin-containing regimen contains a PPI plus levofloxacin and amoxicillin [44]. According to the literature, the eradication rate of infection with levofloxacin triple therapy and bismuth quadruple therapy is 74.5% and 78%, respectively [43,45]. Increased resistance to quinolones has now become a major concern in reducing the clinical efficacy of levofloxacin-containing therapy; resistance to quinolones in Europe, America, and Asia is 20%, 15%, and 10%, respectively [46]. Due to the adverse event rates of levofloxacin in patients, it is recommended that treatment with levofloxacin be prescribed only in cases of treatment failure [47].
Third-line therapy
In general, third-line therapy is prescribed following antibiotic susceptibility testing (AST) and is considered a rescue regimen in case of failure of the first and second lines of treatment [43]. Nevertheless, due to the impossibility of testing in all areas, therapeutic protocols such as bismuth-based levofloxacin quadruple therapy or rifabutin triple therapy (a PPI, rifabutin, and amoxicillin) are used as alternative empiric treatments [48]. All three treatment lines are summarized in Fig. 1.
Drawbacks of antibiotic therapy against H. pylori
Overall, there are some drawbacks to successful antibiotic therapy, including increasing antibiotic resistance (especially against clarithromycin and metronidazole), the unfavorable acidic conditions of the stomach (degradation of antibiotics), the lack of FDA approval for some antibiotics (e.g. nitazoxanide), the side effects of all antibiotics, as well as the toxicity and high price of some drugs [47,49,50]. Treatment failure may gradually lead to the progression of the primary infection to more severe complications such as peptic ulcer, MALT lymphoma, and gastric cancer [51]. In summary, probiotics help the human body against H. pylori through direct or indirect antagonistic interactions including secreting antibacterial substances (lactic acid, short-chain fatty acids, hydrogen peroxide, and bacteriocins), inhibiting bacterial colonization, enhancing mucosal barriers, and regulating the immune responses [52].
Comprehensive definition of probiotics
Probiotics are a group of living microorganisms that generally colonize the gastrointestinal tract and have undeniable effects on improving human health [53]. Today, the clinical benefits of probiotics are widely accepted; their therapeutic applications include disorders such as diarrhea, antibiotic-associated diarrhea, functional digestive disorders, inflammatory bowel disease, cardiovascular diseases, allergic reactions, and cancer [54]. Lactobacillus spp. are one of the most well-known probiotics whose anti-H. pylori properties have been proven [55]. According to the evidence, the colonization rate of Lactobacillus spp. in the normal human stomach is 0-10^3 CFU (resistant to the acidic conditions of the human stomach for 2 h); some Lactobacillus strains prevent the persistent colonization of H. pylori due to their specific adhesins [56]. According to the European Helicobacter Pylori Study Group (EHPSG), adjuvant therapy with probiotics can be helpful in increasing the cure rate of infections [57]. In addition to Lactobacillus spp., many other bacteria are considered probiotics against H. pylori; characteristics such as the names of the probiotics, their potential activity, in-vitro or in-vivo examinations, and country of study are listed in Table 1. However, some probiotics such as Lactobacillus spp. and Bifidobacterium spp. have been used more in clinical trials than other probiotics [58]. According to the literature, administration of dairy products containing these bacteria can be beneficial: Sheu et al. showed in their study that a yogurt containing these bacteria could improve the eradication rate of H. pylori infection, and also restore the depletion of Bifidobacterium in stool at the fifth week of treatment [60]. In addition, these bacteria can produce significant amounts of lactic acid in the stomach after successful colonization [61].
Substantial mechanism of probiotics against H. pylori infection
Probiotics have various mechanisms to eradicate or restrict H. pylori growth within the human stomach, including: (1) inhibition of the colonization of H. pylori by occupying gastric epithelial receptors or through the co-aggregation mechanism, (2) anti-H. pylori activity through the production of bacteriocins, organic acids, as well as biosurfactants, (3) a supportive role in intestinal tissues by promoting mucin synthesis, (4) modulation of the immune system response, (5) induction of antigen-specific antibodies, and (6) reduction of stomach inflammation (Fig. 2). The details of each of the proposed hypotheses are discussed below.
Competition for binding sites
Like other bacteria, attachment is an important step in the continued colonization of H. pylori [85]. According to in vitro studies, L. reuteri inhibits the attachment of H. pylori via competitive binding to asialo-GM1 and sulfatide receptors [86]. Sakarya et al. showed that S. boulardii blocks the attachment of H. pylori to gastric epithelial cells through binding to sialic acid receptors [87]. Moreover, other probiotics such as L. acidophilus LB, L. johnsonii, L. salivarius, and W. confusa prevent the colonization of this pathogen through specific adhesion molecules [88][89][90]. Based on studies in thirty C57BL/6 female mice, Johenson et al. found that pre-treatment with L. acidophilus R0052 and L. rhamnosus R0011 completely inhibited the colonization of this bacterium compared to the control group [71]. In addition, in a study on 13 patients infected with H. pylori, Myllyluoma et al. found that consuming a solution containing four probiotics for 56 days reduced the rate of infection by 27% [91].
Mucosal barrier
Mucous membranes are one of the first lines of defense protecting humans (or animals) against environmental pathogens; abundant secretion of mucins and large glycoproteins effectively covers the surface of the gastrointestinal tract and prevents the colonization of infectious agents, especially H. pylori [92]. Recent studies have shown that this bacterium inhibits the expression of several mucin genes such as MUC1 and MUC5 [93]. In vitro studies show that some probiotics, e.g. L. rhamnosus and L. plantarum, induce the expression of the MUC2 and MUC3 genes (the most important mucins in the gastrointestinal tract), leading to inhibition of H. pylori colonization [94]. Interestingly, Pantoflickova et al. showed in their study that consumption of L. johnsonii thickens the mucosal layer, which in turn prevents bacterial colonization [95].
Probiotics as antibiotics
Scientific studies have shown that probiotics can also act as antibiotic-producing bacteria, and are able to contain the growth of H. pylori by producing antimicrobial substances [96]. Streptomyces spp. are the largest antibiotic-producing probiotics; these bacteria produce a large number of antibiotics such as streptomycin, chloramphenicol, tetracycline, kanamycin, vancomycin, cycloserine, lincomycin, neomycin, cephalosporins, clavulanic acid [97][98][99]. Moreover, bacitracin as an effective antibiotic on peptidoglycan of Gram-positive bacteria is produced by B. licheniformis and some strains of B. subtilis [100].
Short-chain fatty acids produced by probiotics, such as acetic acid, propionic acid, and lactic acid, can lower the pH of the environment, leading to unfavorable gastric conditions for H. pylori [101]. Bacteriocins (antibacterial peptides) are another asset of probiotics that in turn have antagonistic activity against the survival of H. pylori [102]. Coconnier et al. first found that the supernatant fluid from Lactobacillus acidophilus LB could significantly reduce the viability of H. pylori [24]. In a clinical trial study, Michetti et al. showed that oral administration of culture supernatant fluid of L. acidophilus strain La1 had anti-H. pylori activity [63]. In later years, it was discovered that this property was due to the antimicrobial nisin A [75]. Bacteriocins are a heterogeneous group of antimicrobial proteins that are mostly produced by lactic acid bacteria [103,104]. Although studies on the effects of bacteriocin-like compounds against H. pylori are limited, bacteriocins with anti-H. pylori activity are produced by some probiotic genera such as Pediococcus, Lactococcus, Bacillus, Weissella, and Bifidobacterium [74,105]. Bacteriocins reduce or inhibit the growth of H. pylori by a variety of mechanisms including inducing pores in the membrane, activating autolytic enzymes, and downregulating the expression of the vacA, cagA, luxS, and flaA genes [52,[106][107][108]. In another study, Boyanova et al. introduced seven bacteriocins from L. bulgaricus that were able to kill both antibiotic-susceptible and -resistant bacteria [102]. However, although bacteriocins have been proposed as a new alternative against drug-resistant H. pylori strains, these antimicrobial peptides (AMPs) are strain-specific and are also sensitive to gastrointestinal enzymes [52,75].
Fig. 2 Defense mechanisms against H. pylori infection, subdivided into two main mechanisms: physiological barriers and the immune system. Upon entrance of H. pylori into the stomach, both innate and specific immunity enter the area of infection (lamina propria). Consumption of probiotics has several advantages in strengthening and stimulating the immune system against this pathogen. The direct and indirect antibacterial activities of probiotics are helpful for human health. Therapeutic effects of these bacteria in the gastric tract include immune modulation (via interaction with TLRs), anti-H. pylori activity, co-aggregation of invasive bacteria, pH decrease by secretion of short-chain fatty acids, support of epithelial barrier integrity, mucin production, as well as promoting immune cells to inhibit the gastric inflammatory response (particularly IL-8 production), and induction of immunoglobulin secretion
Co-aggregation and auto-aggregation
Co-aggregation status occurs between different species (or strains) of probiotics and pathogenic strains (heterogeneous bacteria), while in the auto-aggregation status, only species of one genus react with each other [109]. According to in vitro studies, some probiotics such as L. reuteri DSM17648, L. gasseri, and L. johnsonii La1 (NCC533) are able to co-aggregate with H. pylori strains [110,111].
Immunomodulatory mechanism
Probiotics also modulate the immune system responses; Blum et al. first showed the role of probiotics in modulating the immune system responses against H. pylori infection [111]. This bacterium increases the inflammatory response by promoting the secretion of TNF-α and IL-8, which in turn leads to the upregulation of gastrin-17, apoptosis, and finally peptic ulcer [91]. Yang et al. found that pre-treatment with L. salivarius in an animal model reduced chronic gastritis through the inactivation of the JAK1/STAT1 and NF-κB pathways [112]. In addition, probiotics, through processes such as upregulating the expression of MUC3, cyclooxygenase-1, and PGE2, facilitate the secretion of mucin and angiotensin, thus preventing the apoptosis of mucosal cells [113,114].
Probiotics as delivery system for the treatment of H. pylori infection
Although many people around the world are infected with this bacterium in the first years of life, the search for an effective vaccine began after the identification of H. pylori by Warren and Marshall; however, the effectiveness of a vaccine is doubtful, because this bacterium suppresses the immune responses [115]. Until recently, the vaccines that entered phase III clinical trials were stopped due to insufficient immunity against this pathogen [116]. At the moment, Lactobacillus spp. can be used as promising candidates for oral vaccination; the most important reasons are: (1) safety, (2) immunogenicity, (3) low cost, (4) accessibility, and (5) ease of administration [117]. Some recombinant probiotics carrying H. pylori antigens are Lactococcus lactis (UreB), L. lactis (NapA), L. lactis (CTB-UE), and B. subtilis (UreB); oral administration of each of them leads to an increase in serum levels of IgG and IgA [118][119][120][121].
Probiotics and animal models
According to animal studies, researchers have shown the benefits of probiotics including, (1) elimination of H. pylori infection, (2) reduction of gastritis, (3) inhibition of the progression of primary infection to gastric cancer and MALT lymphoma ( Table 2). According to animal experiments, probiotic supplementation can reduce the persistent colonization of H. pylori as well as gastric inflammation by modulating pro-inflammatory cytokines i.e. IL-8, IL-12, TNF-α, and H. pylori-specific IgG titer [69,[122][123][124]. Chronic infection can stimulate the immune system to create favorable conditions to support the growth of bacteria [125][126][127]. Bacterial virulence factors can disrupt the signaling pathways and cell junctions, leading to the formation of pre-cancerous lesions as hummingbird phenotype [128,129]. Curing H. pylori infection is considered as the main strategy for preventing gastric MALT lymphoma and can decrease the risk of secondary gastric cancer or relapse of gastric ulcers [130,131]. Probiotics can reduce the colonization of H. pylori by their protective compounds such as bacteriocins, organic acids, and biosurfactants [104]. According to the literature, H. pylori infection significantly affects the gastric microenvironment by several changes including DNA instability, disruption of NF-κB signaling pathway, as well as differentiation of autoreactive B cells and subsequent malignant transformation by genomic alternations [132,133]. In general, the use of probiotics effectively modulates immune responses, reduces gastritis by reducing pro-inflammatory cytokines, and ultimately prevents H. pylori-induced gastric malignancies [134][135][136].
Probiotics as adjuvant therapy Therapeutic effects of probiotics against H. pylori infection in children
There is ample evidence of the clinical effects of probiotics in treating and reducing bacterial load in children. Cruchet et al. conducted a randomized double-blind trial on children with asymptomatic H. pylori infection. In their study, the children were divided into five groups, so that four groups received probiotic Lactobacillus strains (live L. paracasei ST11 or L. johnsonii La1, and heat-killed L. paracasei ST11 or L. johnsonii La1), and one group received placebo. They found that the C13UBT value in children receiving live L. johnsonii La1 was significantly lower than other groups [151]. In a similar study, asymptomatic children were randomly treated with three regimens containing standard triple therapy [8 days), L. acidophilus LB (daily for 8 weeks) and, Saccharomyces boulardii plus inulin (daily for 8 weeks). Finally, results showed that the C13UBT value was significantly lower in children receiving triple therapy and Saccharomyces boulardii [152]. Based on several clinical trials, it has been concluded that the rate of eradication of H. pylori infection increases in children receiving probiotic diets (without antibiotics). Some of these studies that suggested clinical efficacy of probiotic supplementation in the eradication of H. pylori infection are listed in Table 3. Based on these studies, probiotics can significantly increase H. pylori eradication rate particularly in patients receiving Lactobacillus spp. and Bifidobacterium spp. supplementation. These probiotics have a high potential against H. pylori infection using various mechanisms [55,153]. In addition, probiotics can alter the gut microbiota to reduce gastrointestinal symptoms and drug side effects [154,155].
Recently, two meta-analyses have evaluated the clinical effects of probiotics in the treatment of H. pylori infection in children. Li et al. evaluated data from 508 sick children; the pooled ORs for the H. pylori eradication rate by intention-to-treat (ITT) and per-protocol (PP) analysis in children who had received probiotic supplementation versus the control group were 1.96 (95% CI: 1.28-3.02) and 2.25 (95% CI: 1.41-3.57), respectively [167]. In another study, Fang et al. analyzed the clinical efficacy of Lactobacillus-supplemented triple therapy in 484 children and found that the relative risk (RR) of cure in the Lactobacillus-treated group was significantly higher than in the control group (RR: 1.19; 95% CI: 1.07-1.33); diarrhea was also significantly reduced (RR: 0.3; 95% CI: 0.10-0.85) in this group [168].
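As a purely illustrative aside, effect sizes of this kind can be recomputed from a 2x2 eradication table with a few lines of Python; the counts used below are invented for illustration only and do not correspond to any of the cited trials.

import math

def relative_risk(events_trt, n_trt, events_ctl, n_ctl, z=1.96):
    # Relative risk of eradication with a 95% CI (log-normal approximation).
    rr = (events_trt / n_trt) / (events_ctl / n_ctl)
    se_log = math.sqrt(1/events_trt - 1/n_trt + 1/events_ctl - 1/n_ctl)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# hypothetical counts: 210/240 eradicated with probiotic-supplemented therapy
# versus 180/244 with the same therapy alone (illustration only)
print(relative_risk(210, 240, 180, 244))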
Therapeutic effects of probiotics against H. pylori infection in adults
In the present study we evaluated all studies conducted on the effect of probiotics against H. pylori infection in humans (Table 4). According to the literature, probiotic supplementation increases the rate of infection eradication during first- and second-line treatment (Table 4). However, according to some studies, probiotic supplementation was not significantly effective in improving the eradication rate of infection; in their network meta-analysis, Wang et al. found that probiotics in combination with triple therapy could not increase the eradication rate of infection [186]. In addition, most studies have shown that adverse events were significantly lower in the group receiving probiotics plus antibiotics than in the control group, but this was not the case in a number of other studies [178,181,185]. It is important to note that probiotics alone are not effective, but can only be prescribed as adjunctive therapy for clinical improvement [174]. Recently, using data from 467 patients with treatment failure, we showed that Lactobacillus-containing bismuth quadruple therapy for 10 days significantly increases the cure rate of H. pylori infection in patients with previous treatment failure (RR: 1.77; 95% CI: 1.11-2.83; p value: 0.01). Among all probiotics, the clinical effects of Lactobacillus spp. and S. boulardii have been studied most; S. boulardii and Lactobacillus species such as L. casei, L. reuteri, and L. rhamnosus GG are all safe and improve the quality of treatment [172,183,185]. It seems that multi-strain probiotic supplementation has a significant effect on the treatment of infection [173,181,182]. In accordance with this theory, Lu et al. showed that multi-strain probiotics (Bacillus, Saccharomyces, Streptococcus, Bifidobacterium, and Lactococcus) significantly increased the eradication rate of infection (RR: 1.12; 95% CI: 1.07-1.18; p value: 0.00001); however, heterogeneity was significant in their study [179]. In general, according to various studies, probiotic supplements are considered a reliable strategy to increase the quality of treatment in treatment-naïve individuals or those with previous treatment failure.
Use of probiotics in the prevention of H. pylori infection
Vaccine prophylaxis as a suitable strategy has become a big challenge for this bacterium, because in many people it is acquired in childhood, the rate of infection is high, and the immunology of the stomach is unclear [187]. According to the results of a cohort study on 308 H. pylori-negative children, the infection rate in the group receiving L. gasseri OLL2716 (LG21) was lower than in the control group (4.1% vs 8.1%, respectively); nevertheless, the result was not significant [161].
Diversity of gut microbiota during H. pylori treatment with probiotic supplementation
In total, about 100 trillion bacteria colonize the human body. The gastrointestinal microflora is one of the most complex microbial ecosystems and protects the host against colonization by pathogenic microorganisms [188,189]. Imbalance in this ecosystem due to the excessive use of antibiotics leads to several disorders such as inflammatory bowel disease (IBD), metabolic syndrome and even colon cancer [190][191][192]. According to the literature, H. pylori infection can cause dysbiosis in the intestinal microbiota, but short- and long-term changes in the human gut microbiome after H. pylori infection are controversial [193,194]. In their meta-analysis, Ye et al. showed that during long-term follow-up the frequency of Actinobacteria and Bacteroidetes was reduced; they also found that the frequency of Enterococcus and Enterobacteriaceae was increased, while Proteobacteria, after a short-term increase, returned to their normal amounts during long-term follow-up [194]. There is limited information about the effects of probiotics on the gut microbiota during H. pylori infection. In their study, Oh et al. evaluated functional changes in the intestinal microbiota using the Illumina MiSeq system after standard anti-H. pylori treatment and probiotic supplementation. They found that the expression of genes involved in the selenocompound metabolism pathway was significantly reduced in patients receiving probiotics; this phenomenon can lead to a reduction in side effects such as intestinal irritation as well as antibiotic resistance [195]. Wang et al. recently explored the effect of anti-H. pylori concomitant therapy vs. concomitant therapy plus probiotic supplementation (with S. boulardii) on the alteration of the gut and throat microbiota in human subjects. They showed that there were significant quantitative and qualitative alterations in microbiota composition in both the concomitant anti-H. pylori therapy and the concomitant therapy plus probiotic supplementation groups. Nevertheless, in the probiotic supplementation group most changes in the gut microbiota reverted after 71 days (except for Bacteroides spp. and yeast counts), whereas changes in the throat microbiota were persistent. In addition, the antibiotic resistance rate of bacteria such as Enterobacteriaceae, Enterococcus spp., and Bacteroides spp. was significantly higher in patients receiving concomitant therapy than in patients receiving concomitant therapy plus probiotic supplementation. Moreover, their study revealed that co-administration of probiotics in the treatment of H. pylori infection could be more effective than post-antibiotic supplementation [196]. In a recent study by Cárdenas et al. the clinical effects of S. boulardii CNCM I-745 on the gut microbiota of patients receiving standard anti-H. pylori therapy were evaluated. According to their results, supplementation with this probiotic significantly reduced gastrointestinal symptoms (p = 0.028); alterations in the gut microbiota were also seen, with a higher abundance of Enterobacteria and a lower abundance of Bacteroides and Clostridia upon treatment completion (p = 0.0156) [197]. In general, the antimicrobial activity of probiotics kills or inhibits the growth of resistant bacteria and ultimately reduces antibiotic resistance [195,196]. According to information at https://clinicaltrials.gov/, all clinical trial studies on the effects of probiotic supplements on the eradication of H. pylori up to August 2021 are listed in Table 5.
Disadvantages and limitations
Despite extensive research on the effectiveness of probiotics in eradicating H. pylori infection, there are many challenges in this field. Due to differences in study design, duration of treatment, and variety of probiotics between clinical trial studies, there is no reliable homogeneity between them, which in turn affects the interpretation of results. In addition, due to the small sample size of studies, more research needs to be done with larger populations. Unfortunately, in some studies there is no significant difference between the probiotic supplement group and the control group. Finally, although the exact role of probiotics in the prevention or treatment of H. pylori remains unknown, consumption of probiotics may be associated with side effects such as an increase in serum histamine and also digestive disorders [198].
Conclusions and future perspectives
H. pylori is one of the most successful pathogens in the gastrointestinal tract, which through its virulence factors creates a complex interaction with the human host. Chronic infection caused by this bacterium leads to severe clinical outcomes. The frequency of infection with this bacterium is high in developing countries and under poor socioeconomic conditions, so that people living in these conditions are generally at high risk of re-infection. Moreover, self-medication with antibiotics on the one hand, and the spread of resistant strains on the other hand, are all considered a serious threat to the successful eradication of this bacterium. Over the decades, the controversial results of the studies conducted on the treatment of H. pylori infection have led to failure in the eradication of this pathogen. Hence, probiotics have attracted the attention of many researchers around the world. In the present study, based on in vitro studies, animal studies, and human clinical trials, we demonstrated the beneficial effects of probiotics against H. pylori infection. However, probiotics alone are not effective in treating the bacterial infection. In addition, the anti-H. pylori activity of probiotics is strain-specific and remains a mysterious phenomenon. To date, the therapeutic effects of probiotics against resistant strains of the bacterium have not been evaluated, and whole genome sequencing may solve the existing puzzles. It seems that, to decrease the heterogeneity of results and make better decisions, future studies should focus on items such as genus/species, dosage, formulation, and treatment course.
"Medicine",
"Biology"
] |
Self-phoretic Brownian dynamics simulations
Abstract A realistic and effective model to simulate phoretic Brownian dynamics swimmers based on the general form of the thermophoretic force is here presented. The collective behavior of self-phoretic dimers is investigated with this model and compared with two simpler versions, allowing the understanding of the subtle interplay of steric interactions, propulsion, and phoretic effects. The phoretic Brownian dynamics method has control parameters which can be tuned to closely map the properties of experiments or simulations with explicit solvent, in particular those performed with multiparticle collision dynamics. The combination of the phoretic Brownian method and multiparticle collision dynamics is a powerful tool to precisely identify the importance of hydrodynamic interactions in systems of self-phoretic swimmers. Graphic Abstract
Introduction
Computer simulation of active matter systems is currently a topic of intense scientific debate [1][2][3][4][5]. Active matter considers systems with at least one component able to draw energy from their environment in order to self-propel. Activity is an inherent property of most biological systems and recently a topic of growing interest for the investigation of synthetic active systems, with practical applications in fields such as microfluidics or microsurgery [6,7]. In this line, phoresis is one of the main physical principles employed for the design of synthetic active matter. Phoresis refers to the drift that Brownian particles experience in the presence of a solvent with an intrinsic gradient, which becomes self-propulsion when the gradient is locally generated at the Brownian particle surface. Artificial microswimmers with a locomotion based on phoretic effects behave therefore as passive colloids unless activated via thermal [8][9][10][11][12], electric [13][14][15][16][17], chemical [18,19], or magnetic [20][21][22] gradients.
Chemically propelled Janus particles showed aggregation behavior [18,23,24], and light-powered micro-robots were observed to form living crystals [25][26][27]. The appearance of clustering and comet-like swarming structures was predicted for Brownian thermophilic active colloids [28,29]. The system dimensionality [30,31] and the presence and shape of hydrodynamic interactions have been shown to play a relevant role in the collective behavior of such thermophilic swimmers [32,33].
Janus-like phoretic particles have already been investigated by various simulation approaches, although not really compared with each other. Some of the approaches are purely Brownian, and self-phoretic propulsion is accounted for simply by a constant impulse [28,34,35], or even a constant acceleration in systems that are supposed to increase their temperature over time [29]. In the absence of an explicit solvent, phoretic interactions between particles have been considered with an additional term, which might, or might not, be coupled to the self-propulsion term. Thermal fluctuations are most frequently considered, and in a few cases also hydrodynamic interactions, which are non-specific and typically only a far-field approximation [36]. However, none of these methods completely accounts for the fact that self-phoretic Brownian swimmers propel with a well-defined Péclet number when isolated, while in the neighborhood of others their velocity and interparticle interactions need to adjust to the actual distribution of heat sources. Different types of approaches consider the presence of an explicit solvent, such that phoretic effects arise in the presence of temperature or concentration gradients. This is the case of simulations performed with molecular dynamics [37], or dissipative particle dynamics [38], or with the mesoscopic simulation approach known as multiparticle collision dynamics (MPC) [11,39]. With these approaches, the details of self-propulsion, intercolloidal phoretic interactions, and hydrodynamic interactions are not directly imposed or tuned, but are a consequence of the solvent-colloid interaction, the colloid shape, and the solvent intrinsic inhomogeneities. Therefore, in studies of the collective properties of phoretic active systems, the effects of steric, phoretic, and hydrodynamic interactions all occur simultaneously, such that the contribution of each of them is most frequently impossible to identify. In this way, the design of strategies to make them distinguishable is timely and highly desirable.
Here, we propose a modification of the standard Brownian simulation method for Janus dimers, in which the effect of a single phoretic force results in the self-propulsion of the dimer, and interparticle phoretic interactions are included, but without any hydrodynamic interactions. We refer to this method as phoretic Brownian dynamics (Ph-BD). Furthermore, the precise values of the self-propulsion velocity, the intensity of the interactions, and the Péclet number can eventually be closely mapped to those of MPC simulations to allow for a fair comparison of the results obtained with both methods. On the other hand, we discuss two other, simpler Brownian dynamics approaches for dimers: one with only self-propulsion, and another with a constant self-propulsion and phoretic interactions. An example study of dense systems of self-thermophilic dimers is performed here. The comparison of these three Brownian methods also provides interesting conclusions about the interplay of phoretic attraction/repulsion, alignment, and motility-induced instabilities.
Phoretic Brownian dynamics (Ph-BD)
Janus particles are characterized by having two different surface compositions. This is also the case for dumbbell-like structures in which each bead is made of a different material. For the sake of simplicity, we focus here mostly on the case of thermophoretic colloids, but the procedure is almost equivalent for other phoretic cases, in particular catalytic or diffusiophoretic ones. Thermophoretic dimers are made of one bead which is assumed to be at a higher temperature than the environment, mimicking a material with high heat conductivity which can be locally heated. The second bead is therefore exposed to a significant temperature gradient and responds to it depending on its intrinsic surface properties. In this way, dimers with a thermophilic (or chemotactic) behavior propel toward the hot bead (see Fig. 1a), while dimers with a thermophobic (or antichemotactic) behavior propel away from the hot bead (see Fig. 1b). The same effect also controls the interaction between swimmers. Two thermophilic dimers swimming close to each other feel the temperature gradient produced not only by their own hot bead, but also by the hot bead of the neighboring dimer. This means that thermophilic dimers are attracted to neighboring dimers, while thermophobic dimers are mutually repelled from neighboring dimers, as depicted in Fig. 1a, b, which also exerts certain torques on the dimers. Therefore, in order to model phoretic active systems in a realistic manner, the effect of propulsion and interparticle interactions has to be included in a unified manner. Phoretic effects are related to the temperature gradients, which vary locally.
Fig. 1 Sketches of the propulsion direction of self-phoretic asymmetric dimers and the phoretic interaction between dimer pairs, which is: a attractive for the case of colloids drifting up gradient, and b repulsive in the opposite case. c Sketch of the implemented forces in the Ph-BD model
Since the aim here is to describe the motion of colloids at low Reynolds numbers, we start by considering the overdamped Langevin equation [40,41], μ_i dr_i/dt = F_i(r) + ξ_i(t), where F_i(r) is the total sum of forces acting on each particle i, and ξ_i(t) is a random force with zero mean, ⟨ξ_i^l(t)⟩ = 0, and delta-correlated Gaussian statistics, ⟨ξ_i^l(t) ξ_j^m(t′)⟩ = 2 k_B T μ_i δ_ij δ_lm δ(t − t′), with l, m = x, y, z and i, j = 1, . . . , 2N_s labeling the particles under simulation, with N_s the number of simulated dimeric swimmers. The friction coefficient, μ_i, is considered to fulfill the Stokes-Einstein relation, μ_i = C_f πηs_i, with s_i the radius of particle i, and η the fluid viscosity.
The numerical factor C_f varies depending on the colloid boundary conditions, being typically C_f = 6 for stick and C_f = 4 for slip boundary conditions [42]. The algorithm used here to integrate the equations of motion is the stochastic Euler scheme. In general, the Euler algorithm has to be carefully considered due to its low precision and known problems in most isothermal simulations [43,44]. Nonetheless, the precision of this algorithm is sufficient for the study performed here, and other algorithms can easily be employed for more extensive investigations.
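As an illustration, the following minimal sketch (not the authors' code) shows how such a stochastic Euler update of the overdamped Langevin equation could look; the array shapes, function name, and default parameter values are our own choices.

```python
import numpy as np

def euler_step(pos, forces, radii, dt, kBT=1.0, eta=7.9, Cf=3.0,
               rng=np.random.default_rng()):
    """One stochastic Euler step of the overdamped Langevin equation.

    pos    : (2*Ns, d) bead positions
    forces : (2*Ns, d) total deterministic force on each bead
    radii  : (2*Ns,)   bead radii s_i
    The friction of bead i is mu_i = Cf*pi*eta*s_i, and the noise amplitude
    follows the fluctuation-dissipation relation 2*kBT*dt/mu_i (assumed form).
    """
    mu = Cf * np.pi * eta * radii[:, None]            # per-bead friction coefficient
    noise = rng.normal(size=pos.shape)                # unit Gaussian increments
    return pos + forces / mu * dt + np.sqrt(2.0 * kBT * dt / mu) * noise
```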
The details of the interactions are then provided by the forces, which distinguish the two types of particles, F_{p,i} = F_{H,i} + F_{EV,i} + F_{T,i} for the phoretic beads and F_{h,j} = F_{H,j} + F_{EV,j} for the hot beads, with i, j = 1, . . . , N_s labeling the two beads of each dimer. The non-heated or phoretic bead is the one on which the temperature gradient has a drift effect, which in this Ph-BD approach is considered in an effective manner by including a thermophoretic force F_T (see Fig. 1c). Meanwhile, the hot bead is considered to be at a higher but constant temperature, such that it does not feel any thermophoretic force.
Pairwise interactions are considered. First, the two beads forming each dimer are linked by a strong harmonic force F_H, obtained from the potential U_H(r_ij) = (κ_H/2)(r_ij − b)², with the interparticle distance r_ij = |r_i − r_j| and the harmonic constant κ_H = 10^4, used to strongly fix the beads' equilibrium distance b as the sum of the beads' radii, b = s_p + s_h, with s_p the radius of the phoretic bead and s_h the radius of the hot bead. The relative dimensions of the dimer beads are defined by the radii aspect ratio, γ = s_p/s_h. Steric effects are accounted for by the force F_EV, by which all non-linked beads interact with excluded volume interactions given by the potential U_EV(r) = 4ε[(σ/r)^{2n} − (σ/r)^n] + ε for r ≤ r_c, where n determines the potential softness and ε = k_B T relates to standard energy units, with k_B the Boltzmann constant and T the average temperature, here both fixed to unity, defining the system units. The extra term on the right-hand side of the equation determines the repulsive character of the potential together with the cutoff radius, r_c = 2^{1/n} σ, and we use here n = 24. The distance σ simply relates to the sum of the radii of the two interacting beads. The thermophoretic force exerted on the phoretic bead can be calculated as F_T = −α_T k_B ∇_{r_i}T, where α_T is the bead thermodiffusion coefficient and ∇_{r_i}T the temperature gradient at the bead location. Note that α_T is a material property, which can be arbitrarily modified or chosen to match a value determined by experiments or by simulations with explicit solvent, as will be shown later.
The corresponding Laplace equation needs to be solved to obtain a good estimation of the temperature gradient. We consider here three important and reasonable simplifications, such that the Laplace equation can be analytically solved: (i) each hot bead center acts as a point-like heat source with temperature T_h at the bead's surface; (ii) at a distance far enough, the fluid reaches the average fluid temperature T, taken as the reference unit, i.e., T(r → ∞) = T; (iii) the effect of neighboring sources is considered to be additive. For each point-like source, all angular terms vanish due to the symmetry of the system such that ∇²T(r) = 0, and the temperature at the center of each phoretic particle is then given by T(r_i) = T + Σ_j (T_h − T) s_h / r_ij, where r_ij = |r_i − r_j| and the r_j are the hot beads' center positions. The expression in Eq. (7) corresponds to the gradient at the bead center, ∇_{r_i}T = −Σ_j (T_h − T) s_h r̂_ij / r_ij². A more accurate estimation is to consider an effective value of the temperature gradient that accounts for the variation over the bead surface, for which Eq. (7) is integrated along the phoretic bead's diameter. For a phoretic bead of radius s_p, placed at r_i, and with a hot bead placed at distance r_ij, the integral limits are r_ij − s_p and r_ij + s_p, such that the magnitude of the temperature gradient can be approximated by |∇_{r_i}T| ≈ Σ_j (T_h − T) s_h / (r_ij² − s_p²). Note that for an isolated swimmer the gradient is determined just by the linked hot bead, such that r_ij = s_p + s_h is the only contributing term. For denser systems, the gradient takes into account all neighboring hot beads, such that in the center of highly compact configurations the gradients eventually vanish and therefore also the thermophoretic force. The dimer velocity v_s and the rotational diffusion D_r are therefore not direct inputs of the model, but are indirectly determined from other input values, mainly α_T, ∇T, s_p, and γ. The value of the modulus of v_s is given by v_s = |α_T| k_B ∇T / μ, where both the dimer friction μ = C_f πη(s_h + s_p) and the temperature gradient ∇T = (T_h − T)/(s_h + 2s_p) depend on the hot and phoretic bead sizes. The axis direction of the swimmer n aligns with the direction of the temperature gradient, considering also the sign of α_T, which determines the direction of v_s. The self-propulsion velocity can also be obtained from the simulations as v_s = v · n. The rotational diffusion D_r depends mostly on the particle size and aspect ratio γ and can be obtained by characterizing the long-time behavior of the mean-squared angular displacement, Δe² = (e(t) − e(t′))², in simulations with equilibrium conditions, this is, with T_h = T. The resulting Péclet number can then be defined as Pe = v_s/(D_r s_p).
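For concreteness, a minimal sketch of how these expressions could be evaluated numerically is given below; it assumes the additive point-source approximation and the F_T = −α_T k_B ∇T form reconstructed above, and all names and signatures are ours.

```python
import numpy as np

def thermophoretic_forces(phoretic_pos, hot_pos, s_p, s_h, T_h, T_avg, alpha_T, kB=1.0):
    """Effective thermophoretic force on each phoretic bead.

    Each hot bead is treated as a point-like source with temperature T_h at its
    surface and T_avg far away; contributions are additive, and the gradient is
    averaged over the phoretic bead diameter (Eq. (8)-like form, our reading).
    """
    forces = np.zeros_like(phoretic_pos)
    for i, ri in enumerate(phoretic_pos):
        grad = np.zeros(ri.shape)
        for rj in hot_pos:                       # includes the dimer's own hot bead
            rij = ri - rj                        # vector from the heat source to the bead
            d = np.linalg.norm(rij)
            # temperature decays as (T_h - T_avg)*s_h/r, so grad T points toward the source
            grad += -(T_h - T_avg) * s_h / (d**2 - s_p**2) * (rij / d)
        forces[i] = -alpha_T * kB * grad         # assumed F_T = -alpha_T * kB * grad T
    return forces
```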
Other active Brownian dimer models
The method proposed in this manuscript, Ph-BD, differs from other approaches employed in the literature in the way that phoretic self-propulsion and interparticle phoretic interactions are coupled to each other. In order to better understand the relevance of this coupling, we propose two alternative methods. Self-propelled spherical colloids have been extensively investigated with the so-called active Brownian particle (ABP) model [34,45], which simply assumes a constant propulsion velocity along the particle main axis. The physical origin of the propulsion is not specified, such that it could be phoretic but also any type of biological specificity. We adapt this idea to the dimeric case by considering N_s swimmers with two bonded monomers each, where the hot bead just follows Eq. (3), and the phoretic bead additionally experiences a constant propulsion velocity v_s along the dimer axis, where the friction is that of the dimeric structure, μ = C_f πη(s_h + s_p), and n is the orientation vector of the dimer. We here call this method the active Brownian multimer model (ABM). With this approach, there are no additional interparticle interactions, such that all apparent repulsions or attractions are a consequence of the propulsion and/or steric interactions. The second approach also includes the effect of the phoretic interaction, with a force as given in Eq. (6), but considering only the heat sources of neighboring hot beads, where the temperature gradient can be calculated with Eq. (7) or Eq. (8). We refer to this method as the active Brownian multimers with phoresis model (ABM+ph). With this approach, the phoretic interdimer attraction (or repulsion) is in principle decoupled from the dimer propulsion, since they are controlled by two different parameters, i.e., v_s and α_T. There are approaches in which these, or very strongly related, parameters are independently varied [34,35], which cannot really correspond to a phoretic model since both self-propulsion and interparticle phoresis have a common origin. Besides the fact that v_s and α_T should be related by Eq. (9) for thermophoresis, or by an equivalent relation for other phoretic phenomena, there is another relevant difference between Ph-BD and ABM+ph, which is that in ABM+ph the velocity of the particles is fixed, namely it does not depend on the position of the neighboring particles, while for Ph-BD both the velocity of the particle and the interparticle interactions are damped when various other swimmers are in the neighborhood, as accounted for in the temperature gradient calculation. In order to better understand this effect, we focus here on the case where v_s and α_T are linked by Eq. (9).
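A minimal sketch of how the ABM phoretic-bead update could be written, assuming the propulsion term is simply added to the passive Brownian displacement (the exact published form of the equation is not reproduced here), follows; the orientation convention (propulsion toward the hot bead, as for thermophilic dimers) and all names are ours.

```python
import numpy as np

def abm_step(pos_p, pos_h, forces_p, dt, v_s, mu_dimer, kBT=1.0,
             rng=np.random.default_rng()):
    """ABM update of the phoretic beads: constant propulsion v_s along the
    dimer axis n on top of the passive Brownian displacement (a sketch)."""
    n = pos_h - pos_p
    n /= np.linalg.norm(n, axis=-1, keepdims=True)     # dimer axis, phoretic -> hot bead
    drift = forces_p / mu_dimer * dt + v_s * n * dt    # passive forces + self-propulsion
    noise = np.sqrt(2.0 * kBT * dt / mu_dimer) * rng.normal(size=pos_p.shape)
    return pos_p + drift + noise
```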
Hydrodynamic self-phoretic model
The methods introduced until now consider steric, stochastic, and phoretic interactions, which means that hydrodynamic interactions (HI) have been disregarded.
Although in some cases this can be clearly justified, the effect of HI is frequently not known. In order to provide a tool that allows for a fair comparison, we consider now the method known as multiparticle collision dynamics (MPC) [11,39]. Multiparticle collision dynamics is used here to simulate the explicit solvent particles and their interactions [46,47], while molecular dynamics (MD) is employed for colloid-colloid and colloid-solvent interactions. This hybrid MPC-MD approach has already been extensively proven to include both hydrodynamic and phoretic effects [48][49][50][51].
MPC method for the solvent
The MPC method considers the solvent to be composed of N point particles of mass m performing alternating streaming and collision steps. During the streaming step, fluid particles translate ballistically for a certain time h, the collision time, this is, r_k(t + h) = r_k(t) + h v_k(t). In the collision step, the particles are binned into cubic cells of side a, with a grid shift applied to the binning in order to restore Galilean invariance [52]. Interparticle interactions are treated within each of these cells, in which particles interchange linear momentum with all other particles in the same cell. Here, we employ the stochastic rotation dynamics collision rule, in which the momentum interchange is performed by rotating the velocities relative to the cell center of mass by an angle α around a random axis of the cell [53][54][55][56]. The comparison with specific solvents can be done via dimensionless numbers, mainly the Schmidt number, Sc = ν/D = 13, and the Prandtl number, Pr = ν/κ_T = 5.3. While Sc is smaller than the value for water, Pr is quite close to it. These two values ensure that momentum transfer is faster than mass transfer, providing an efficient way to include hydrodynamic interactions, and that the stability of local temperature gradients is also ensured.
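As an illustration of the two MPC sub-steps just described, a bare-bones sketch is given below (no thermostat, no angular-momentum conservation, periodic cubic box); it is not the LAMMPS "srd" implementation used in this work, and all names and the crude cell indexing are our own.

```python
import numpy as np

def srd_step(r, v, h, a, alpha, L, rng=np.random.default_rng()):
    """One MPC-SRD step: ballistic streaming, then a stochastic-rotation collision.

    r, v  : (N, 3) solvent particle positions and velocities
    h     : collision time, a : cell size, alpha : rotation angle, L : box length
    """
    r = (r + h * v) % L                                  # streaming step
    shift = rng.uniform(0.0, a, size=3)                  # random grid shift (Galilean invariance)
    cells = np.floor((r + shift) / a).astype(int)
    keys = cells[:, 0] * 10**6 + cells[:, 1] * 10**3 + cells[:, 2]  # assumes < 1000 cells per side
    cos_a, sin_a = np.cos(alpha), np.sin(alpha)
    for key in np.unique(keys):
        idx = keys == key
        vcm = v[idx].mean(axis=0)                        # cell center-of-mass velocity
        axis = rng.normal(size=3)
        axis /= np.linalg.norm(axis)                     # random rotation axis
        dv = v[idx] - vcm
        # Rodrigues rotation of the relative velocities by alpha around 'axis'
        v[idx] = (vcm + cos_a * dv + sin_a * np.cross(axis, dv)
                  + (1.0 - cos_a) * np.outer(dv @ axis, axis))
    return r, v
```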
Molecular dynamics
Fluid-colloid interactions are considered using molecular dynamics, with the equations of motion being integrated using the velocity Verlet algorithm [43,44,57]. The thermophoretic nature of the colloids is determined by the choice of the fluid-colloid interactions, for which we use a displaced Mie-like potential, U(r) = 4ε[(σ/(r − Δ))^{2n} − (σ/(r − Δ))^n] + C for r < r_c. This potential is very similar to that in Eq. (5), with the introduction of the Δ and C parameters. The bead size is now determined by s ≡ σ + Δ, where Δ can be understood as the size of a core with hard-sphere interactions and σ as the size of an additional layer with repulsive potential interactions. In this work, we use σ = Δ and s_p = 6 for the size of the phoretic bead. The extra term on the right-hand side of Eq. (12) is C = ε for repulsive interactions, which have been proven to account for thermophilic colloidal behavior, and C = 0 for attractive interactions, which account for thermophobic behavior [58,59]. For these interactions, n = 3 is chosen to obtain a soft repulsive potential for the phoretic (philic) bead, whereas n = 24 is chosen for the heated particle and also for the attractive potential (phobic). The cutoff radius of the interactions is r_c = 1.26σ + Δ for the repulsive potential and r_c = 1.1σ + Δ for the attractive one. Harmonic and excluded volume interactions are considered similarly as for the Ph-BD case, with Eq. (4) and Eq. (5). In order to mimic the heating produced by laser illumination of partially gold-coated colloids [60], we rescale the temperature of the fluid within a small shell (of width 0.08 s_h) around the heated bead to T_h > T, while cooling the average temperature of the whole system to T = 1 by means of a simple velocity rescaling [55,60]. Unless otherwise specified, we use T_h = 1.5. All colloid-colloid interactions have been implemented via Eq. (5), this is, Eq. (12) with Δ = 0, σ = s and n = 24, with the interactions being cut at r_c = 2^{1/24} σ.
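The following short sketch encodes the displaced Mie-like potential as reconstructed above (Eq. (12) in our reading); the exact published expression may differ in the placement of the shift, and the function name and defaults are ours.

```python
def mie_displaced(r, sigma, delta, n, eps=1.0, C=None):
    """Displaced Mie-like potential:
    U(r) = 4*eps*[(sigma/(r-delta))**(2n) - (sigma/(r-delta))**n] + C,
    with C = eps for the purely repulsive variant and C = 0 for the attractive
    one; delta = 0 and n = 24 recover the Eq. (5)-type excluded-volume potential."""
    if C is None:
        C = eps                       # repulsive (thermophilic) choice by default
    x = sigma / (r - delta)
    return 4.0 * eps * (x ** (2 * n) - x ** n) + C
```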
This method has been implemented in LAMMPS [61], where we have modified the "srd" package routine [62] to include the colloid-solvent potential interactions. The MD time step has been chosen as Δt = 0.01h, similarly to the Brownian simulations, and the mass M of the colloidal beads is chosen to make the colloids neutrally buoyant.
Parameters for the comparison MPC vs. Ph-BD
In order to perform a fair comparison of the methods with and without HI, we are interested in having systems as similar as possible. Some values are input parameters in the Brownian dynamics simulations and therefore very easy to match, such as the average temperature k B T = 1, or the fluid viscosity η = νρ = 7.9.
The numerical factor C_f for the friction coefficient is fixed as C_f = 3 in order to match the employed MPC-SRD algorithm without angular momentum conservation and with slip boundary conditions [63,64]. Other parameters are not direct inputs and need to be considered more carefully. For a proper comparison, it is important that the parameters chosen for the two simulation models result in matching self-propulsion velocities and Péclet numbers for dilute systems of swimming dimers. For this, we need to characterize the simulated thermophoretic coefficient α_T and the rotational diffusion D_r.
The thermophoretic coefficient α_T of a spherical bead could in principle be determined in full hydrodynamic simulations with an external temperature gradient [58]. This would be, however, a too rough estimation, first because constant and position-dependent gradients are different, and second because it is known that the proximity of the hot bead screens part of the phoretic interactions between the surrounding solvent and the colloid surface. A more adequate estimation can be done by measuring the self-propelled velocity of a single swimmer and relating it to the thermophoretic coefficient α_T via Eq. (9). Figure 2 shows simulation results for four types of dimeric swimmers, corresponding to thermophobic and thermophilic character, and to the symmetric (γ = 1) and asymmetric (γ = 3) geometries. Velocities are calculated as an average over 20 independent simulations, and the error bars are of the order of the symbol size. Simulations are performed at various temperature gradients, which are achieved by changing the temperature of the hot bead T_h. The increase in the velocity is clearly linear for moderate gradients, which allows us to determine the value of the thermophoretic coefficient for all the investigated cases, as shown in Table 1. For the largest temperature gradients, the velocities deviate from the linear behavior. This deviation from the Fourier linear behavior can be expected, and here it can also be related to the limits of the method for these temperature gradients. Note that the negative sign of α_T is well established by convention and refers to the motion of the swimmer toward the heat source. The sign of this coefficient naturally induces the interdimer phoretic attraction for thermophilic dimers and the phoretic repulsion for thermophobic ones, as shown in Fig. 1a, b, such that no further assumption has to be made in this regard.
Fig. 2 Results for dimers with phoretic bead s_p = 6. Circles (in blue) correspond to thermophobic dimers; triangles (in red) correspond to thermophilic dimers. Full symbols correspond to asymmetric dimers (γ = 3); empty symbols to symmetric dimers (γ = 1). Lines are linear fits to Eq. (9) for small gradients.
Performing simulations with Ph-BD and MPC of dimers with the same bead sizes, the same solvent input parameters, and the same α_T then seems a good strategy for a fair comparison between methods; only the Péclet number is left to be discussed. The values of D_r for the specified parameters turn out to be close to 60% larger in the Ph-BD simulations than in those performed with MPC, for all four investigated cases (see Fig. 3a for the case s_p^bd/s_p^hi = 1). This is because the rotational diffusion is not a parameter fixed in either of the two methods but a consequence of all the other parameters, such as friction, particle size, thermophoretic coefficient, or fluid-particle interactions, which are different in both methods. In order to modify the rotational diffusion coefficient without affecting other characteristic values, it is possible to vary the overall bead sizes. Further equilibrium simulations with dimers of different s_p (and different s_h to preserve γ) are performed, and the measured values of D_r are shown in Fig. 3a. As expected, the results show a decay of D_r with growing particle size, which is produced just by the thermal noise in the position update of the two linked dimer monomers. Using a different monomer size then allows the tuning of the rotational diffusion but, for simulations out of equilibrium with a fixed value of T_h, it also modifies the temperature gradient at the phoretic monomer surface and therefore the resulting self-propelled velocity. The solution to keep the same v_s value is then to modify T_h so as to keep the gradient constant when changing the monomer size. As a check of this principle, we perform simulations modifying both s_p and T_h for a given gradient and then measure the self-propelled velocity. The results are shown in Fig. 3b in comparison with those of the self-propelled velocity of the hydrodynamic simulations used here as an input, and the agreement is very good within the error of the measurements in all cases. Note that, in order to keep the same v_s value, modifying T_h is equivalent to modifying α_T, since it is the product of both which determines F_T and v_s, as shown in Eq. (6) and Eq. (9), respectively. The resulting Péclet number, shown in Fig. 3c, displays a growing trend with particle size, which is related to the variation of the rotational diffusion. From Fig. 3, it is also clear that the optimal value is given by Brownian simulations with s_p = 8, such that all presented Brownian simulations are from now on carried out with this value.
Table 1 Thermophoretic coefficient α_T of single thermophilic and thermophobic self-propelled dimers, with different aspect ratios γ = s_h/s_p, as obtained from MPC simulations. Values are obtained as a fit to the data in Fig. 2.
Fig. 3 The dashed lines at unity indicate perfect agreement between Ph-BD and MPC simulations. The thick vertical gray line corresponds to the case with optimal agreement for both v_s and Pe, which occurs for s_p^bd = 8.
Comparative study for collective dynamics
In order to perform a comparative study of the Brownian methods, simulations of dimeric thermophilic swimmers are performed first with the three Brownian methods previously discussed. Ensembles of 200 dimers, both asymmetric (γ = 3) and symmetric (γ = 1), have been studied for a quasi-2d confinement case. In principle, this refers to 3d slabs of liquid in which the swimmers move on a plane, which for the Brownian dynamics simulations means that the motion occurs in two dimensions.
The configuration used to initialize the simulations has the dimers' centers of mass placed on a square lattice covering almost the whole simulation box, with a randomly chosen direction of the dimer axis. The initial order disappears very quickly in all cases. All simulations run for a time t ∼ 300τ_b, with τ_b the ballistic time of a swimmer, defined as τ_b = s_p/v_s, and representative snapshots of the latest configurations are shown in Fig. 4. Asymmetric dimers propel forming small clusters; some of these clusters dissolve due to collisions with isolated dimers or pairs of dimers, and others coalesce with other small clusters, forming larger and more stable clusters, as can be seen in Fig. 4a. Symmetric dimers show initially similar dynamics, although interestingly the large clusters do not become stable and also end up dissolving in this case, as can be seen from the small-sized clusters in Fig. 4b. The snapshots in Fig. 4 correspond to simulations performed with the Ph-BD method; qualitatively similar results are also obtained with the ABM and ABM+ph methods. In order to understand more precisely the involved mechanisms and the differences between the methods, the quantification of a dynamic quantity is employed.
We introduce here the calculation of the bounding time τ_c for both asymmetric and symmetric dimers at two densities, φ = 0.2 and φ = 0.3. The bounding times of asymmetric dimers in Fig. 5a show that stable clusters form at both densities in simulations with phoretic attraction, this is, with Ph-BD and ABM+ph simulations. Simulations without phoretic interparticle attraction are seen to saturate to a constant value, which curiously is the same for both simulated densities. This means that for these asymmetric dimers at these densities, self-propulsion alone is not enough to stabilize the clusters, and the consideration of the corresponding phoretic attraction stabilizes them. The bounding times are slightly smaller for ABM+ph with respect to Ph-BD. Although the difference is not large, it indicates that diminishing the propulsion velocity of the dimers inside the cluster slightly increases its cohesion. The bounding times of symmetric dimers in Fig. 5b show that all three Brownian methods form only unstable clusters; several interesting conclusions can still be drawn from these results. The first one relates to the effect of density, which is seen to increase the bounding time in all cases, although with different intensity. Density increases the probability of encounters, which has two opposite effects since it enhances both cluster formation and cluster dissolution. For the symmetric dimers without phoresis (ABM), the effect is small but clear at not too small times. This is in contrast to the asymmetric case, for which the difference is much smaller and even seems to disappear for large averaging times. We relate this difference to the particle-induced alignment when two dimers collide, an effect that is much larger for symmetric dimers. Another clear conclusion is that Ph-BD enhances stability with respect to ABM+ph and that this effect increases with density. This is again related to the fact that the smaller propulsion velocity of the dimers inside a cluster for the Ph-BD cases increases their stability. The effect is, though, not straightforward to predict, since it is not shown to be much larger for asymmetric dimer swimmers, where the formed clusters are larger, than in the case of symmetric dimers, where only small clusters would be affected. Curious is also the difference between ABM and ABM+ph for the symmetric dimers. The collective simulations presented here only analyze the thermophilic case, such that the inclusion of phoresis adds an interparticle attraction, expected to translate into a larger clustering affinity. This is indeed the case for the asymmetric dimers, but not for the symmetric ones. Phoretic attraction combined with the non-adjusting self-propulsion velocity seems to induce additional alignment of the symmetric dimers, such that they become more prone to swim away from the small nucleated clusters, producing this somewhat counterintuitive effect. In other words, the fact that ABM dimers do not significantly change their orientation when colliding with others means that in some cases they remain stuck in configurations longer than in the presence of attraction, providing such structures with additional stability. For the φ = 0.3 symmetric case, the ABM simulations have almost the same stability properties as those with Ph-BD, while for φ = 0.2, ABM simulations are even more stable than those with Ph-BD, where both the self-propelled velocity and the attraction diminish in the neighborhood of other dimers.
Simulations in the collective regime with the hydrodynamic phoretic model (MPC) are also performed in the previously discussed regimes. The most relevant conclusion is the occurrence of qualitative differences with respect to the systems presented here, which, due to the fair comparison that these methods provide, can be attributed just to hydrodynamics. A detailed understanding of such results requires a detailed discussion of the shape of the hydrodynamic fields, which will be presented elsewhere.
Summary and discussion
The Ph-BD method to perform Brownian dynamics simulations with a realistic inclusion of the phoretic self-propulsion and interactions has been presented here. The main idea is that simply considering the well-known dependence of the phoretic force on the applied gradient properly couples the self-propelled velocity and the interaction between two or more particles. The Ph-BD method is compared here with two other, simpler versions of BD simulations, which yields interesting conclusions illustrating the very subtle interplay of self-propulsion, phoresis-induced attraction, repulsion, and orientation. Depending on the particle geometry, properties, and overall densities, these effects can act together or against each other. We also show in detail how the Ph-BD method can be adjusted to map the properties of experimental systems or simulations with explicit solvent, as illustrated here for MPC simulations. The combination of MPC and Ph-BD simulations therefore offers the possibility to compare simulated systems which differ only in the inclusion of solvent-mediated interactions, which is a very powerful tool to understand the effect of hydrodynamic interactions. These methods are used here to investigate the properties of thermophoretic dimers, but they can be almost trivially extended to other phoretic effects such as diffusiophoresis, and also to other multimeric structures such as trimers or other oligomeric swimmers. A preliminary analysis indicates that they can also be extended to Janus spherical particles. The presented results for the collective dynamics of thermophoretic swimmers also indicate that these systems are a basis for synthetic active materials with various prospects for applications.
iBeacon Indoor Positioning Method Combined with Real-Time Anomaly Rate to Determine Weight Matrix
This paper proposes an indoor positioning method based on iBeacon technology that combines anomaly detection and a weighted Levenberg-Marquardt (LM) algorithm. The proposed solution uses the isolation forest algorithm for anomaly detection on the Received Signal Strength Indicator (RSSI) data collected from different iBeacon base stations, and calculates the anomaly rate of each signal source while eliminating abnormal signals. Then, a weight matrix is set using each anomaly rate and the RSSI value after eliminating the abnormal signals. Finally, the constructed weight matrix and the weighted LM algorithm are combined to solve the positioning coordinates. An Android smartphone was used to verify the positioning method proposed in this paper in an indoor scene. This experimental scenario revealed an average positioning error of 1.540 m and a root mean square error (RMSE) of 1.748 m. A large majority (85.71%) of the positioning point errors were less than 3 m. Furthermore, the RMSE of the method proposed in this paper was, respectively, 38.69%, 36.60%, and 29.52% lower than the RMSE of three other methods used for comparison. The experimental results show that the iBeacon-based indoor positioning method proposed in this paper can improve the precision of indoor positioning and has strong practicability.
Introduction
Nowadays, Global Navigation Satellite System (GNSS) technology, navigation, and positioning services have become an indispensable service in people's lives, especially for travel-related services. Such location services cannot be provided indoors, however, due to difficulties in receiving signals from GNSS satellites [1]. How to achieve fast, cheap, stable, and high-precision positioning and navigation in indoor contexts where satellite signals are missing has therefore become an urgent problem to be solved. At present, positioning and navigation services for indoor uses mainly rely on technologies such as Wireless Local Area Network (WLAN), Ultra-Wide Band (UWB), iBeacon, etc. [2][3][4]. Of these, iBeacon has been favored by many scholars due to its low cost, ease of operation, and signal stability [5]. In addition, indoor positioning and navigation services based on this technology have already been commercialized in many large shopping malls, parking lots, and other such venues [6].
Target positioning based on iBeacon technology mainly depends on the RSSI (Received Signal Strength Indication) value of the iBeacon base station broadcast signal [7]. Specifically, the current methods for solving positioning coordinates based on iBeacon technology are mainly divided into two types: fingerprinting technology and the trilateration model. A comparison of these two main techniques is shown in Table 1 [8][9][10][11]. Fingerprinting technology has become a hot topic in current research due to its high stability and precision. The basic principle of fingerprinting technology is to match the RSSI with the values in a database. It is not necessary to convert the RSSI value into a distance through a mathematical model for the positioning calculation. Therefore, this method does not require a distance estimation model and is more stable. This approach, however, requires a complete and detailed fingerprint database to be collected and constructed in the locations where it is used, alongside the installation of base stations. This in turn raises costs and increases the complexity of the positioning process, meaning that the technology cannot be easily applied in any location [12]. In contrast, the only preliminary work needed in the trilateration approach is the deployment of iBeacon base stations at fixed locations and the determination of the ranging model parameters, meaning that it is simple and economical to implement.
On the other hand, the trilateration method cannot deliver highly precise positioning, with an absolute positioning error of about five meters [3]. How to improve the positioning precision and stability of the trilateration method has therefore become a problem worthy of research. The relative imprecision of the trilateration method arises partly from its reliance on a nonlinear positioning model, which requires an optimal solution algorithm, and partly from the fact that environmental influences make the RSSI value of the iBeacon base station signal unstable. In addition, the particular trilateration model used also affects the positioning precision. At present, the most commonly used method is to perform differential linearization and then apply the least squares method to solve, or iteratively solve, the positioning coordinates [13,14].
In the above approach, as the RSSI is prone to fluctuations, it is often necessary to do some preprocessing of the original signal before positioning. At present, the main methods used for the preprocessing of abnormal RSSI signal values are filtering algorithms, such as Kalman filtering, particle filtering, mean filtering, etc. [15][16][17]. While these methods can often achieve good results when there is a large amount of RSSI signal data available, in practice it is often necessary to perform coordinate calculations within just a few seconds, which requires a signal anomaly detection method that can be applied effectively even when the amount of data is small. At the same time, most of the current preprocessing methods for RSSI signals only reduce noise and eliminate abnormal signals, and thus do not make full use of the abnormal characteristics of the signal source.
This paper therefore proposes the use of the isolation forest algorithm [18] to detect and eliminate signal anomalies even with a small sample size. It then calculates the anomaly rate of each signal source while removing abnormal signals, and sets the weight matrix using each anomaly rate and the RSSI after removing abnormal signals. Finally, based on the most common trilateration model solution method, i.e., the least squares method, a weighted Levenberg-Marquardt (LM) algorithm [19] is constructed to achieve high-precision indoor positioning based on iBeacon technology.
Overview
The goal of this paper is to verify an indoor high-precision fast positioning method based on iBeacon technology. The basic theory of this paper is trilateration, applying the isolation forest algorithm and the weighted LM algorithm. The basic idea of this paper is to first collect the RSSI values of all iBeacon base stations deployed in the indoor scene, and then use the isolation forest algorithm to perform anomaly detection on the RSSI values of each signal source in a short period of time. After detecting abnormal signals, the anomaly rate of the signal broadcast by each iBeacon base station is calculated, and the average value of each group of signals, after removing the abnormal RSSI signals, is adopted as the final RSSI at the current point. Finally, a weight matrix is generated by combining the signal anomaly rate of each iBeacon base station with the RSSI value. Under the constraints of this weight matrix, the LM algorithm is used to solve the coordinates. The method flowchart of this paper is shown in Figure 1.
BLE-Based and Trilateration
The trilateration method (see Figure 2) for indoor positioning based on Bluetooth low-energy devices is a fast and cheap positioning method. It determines the current point coordinates mainly by receiving the RSSI values broadcast by iBeacon base stations deployed at fixed positions in an indoor scene and applying a 2D plane coordinate solution model such as Equation (1) below [20], (x_i − x_n)² + (y_i − y_n)² = D_n², (1) where (x_i, y_i) is the unknown coordinate of the current point, (x_n, y_n) is the fixed coordinate of each iBeacon base station, and D_n is the distance between the current unknown point and each iBeacon base station. This distance can be calculated by an attenuation factor model relating the RSSI value and D, as in Equation (2) [21]: P = P(d_0) − 10n lg(D/d_0). (2) Equation (1) only discusses the 2D situation, because plane coordinates already satisfy most situations encountered in daily life; however, Equation (1) can also be extended to 3D coordinates. In Equation (2), P is the RSSI value of the fixed iBeacon base station received at the current unknown point, P(d_0) is the RSSI value from the iBeacon base station at a certain distance, generally d_0 = 1 m, and n is the signal attenuation factor in the current scene. Its value is not always the same in different environments, and thus the relationship between the RSSI value and D (the distance between the receiving point and the base station) has to be obtained, as in Equation (3): D = d_0 · 10^{(P(d_0) − P)/(10n)}. (3)
For the purposes of this paper, we set P(d_0) to A_n and rewrite P as the collected RSSI value to get Equation (4), D = 10^{(A_n − RSSI)/(10n)}. (4) This means that there are then only two unknown parameters in the formula, A_n and n. These can be obtained by performing fixed-distance experiments and fitting the experimental data.
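As a small illustration of Equation (4), the following sketch converts an RSSI reading into an estimated distance; the default A_n and n are the values fitted later in the experiment section, and the function name is ours.

```python
def rssi_to_distance(rssi, A_n=-71.58, n=2.647):
    """Invert Eq. (4), RSSI = A_n - 10*n*log10(D) (d0 = 1 m assumed),
    to obtain the estimated distance D in meters."""
    return 10.0 ** ((A_n - rssi) / (10.0 * n))
```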
As for the solution of Equation (1), it is currently most common to linearize it first and then solve it by least squares, giving the final solution formula as Equation (5), X = (B^T W B)^{-1} B^T W l, (5) where X is the current unknown point coordinate vector, B and l can be obtained by linearizing Equation (1), and the weight matrix W is usually set as diag(1/RSSI_i²). When only the data broadcast by three iBeacons are used to calculate the current unknown point coordinates, the schematic diagram is as shown in Figure 2.
Although it is relatively simple to use Equation (5) to calculate the coordinates, because the RSSI signal is affected by various environmental factors, there will be large signal fluctuations and thus low positioning reliability regardless of the distance from the iBeacon signal source. The weighting method in Equation (5) cannot reduce the error caused by this low credibility of the short-distance signal source. This paper therefore constructs a new weight matrix in Equation (5) according to the signal anomaly rates, and uses the nonlinear least squares LM algorithm to solve the positioning coordinates under the constraints of this weight matrix.
Anomaly Detection and Isolation Forest
The RSSI value of iBeacon's Bluetooth signal is susceptible to fluctuations due to environmental interference during the collection process. To obtain high-precision positioning results, it is necessary to discover and eliminate abnormal RSSI values in time. In other words, it is often necessary to eliminate outliers within a few seconds when performing indoor positioning and navigation. The relatively small amount of data available in these timescales makes conventional robust estimation, Gaussian filtering, and other methods less effective [22]. This paper uses the isolation forest algorithm proposed by Liu et al. (2008) [18] for abnormal signal detection. This algorithm can achieve better anomaly detection results even with a small amount of data.
The basic idea of the isolation forest algorithm is to use several random planes to cut the data space repeatedly until there is only one data point in each data space. The specific process of anomaly detection is as follows [18].
(I) Randomly select ψ samples from all the data as the training subsample, and put them into the root node of the tree.
(II) Specify a dimension and randomly generate a plane cutting point Q within the range of the current node data. The range of the cutting point is between the maximum and minimum values of the training subsample.
(III) A plane is generated at the cutting point Q, and the data space of the current node is divided into two subspaces: put the points less than Q in the currently selected dimension on the left branch of the current node, and put the points greater than or equal to Q in the branch to the right of the current node.
(IV) Repeat steps II and III on the left and right branch nodes of the node, and continue to construct new leaf nodes until there is only one datum on the leaf node, or the tree has grown to the set maximum height.
(V) After constructing T isolation trees with the training subsamples, for each RSSI value in the entire data set that needs anomaly detection, make it traverse the T generated isolation trees, and calculate its average depth over the T isolation trees. This average depth is used to calculate the outlier score of each RSSI value, where the score is defined as in Equation (6): S(RSSI) = 2^{−E(h(RSSI))/c(ψ)}, (6) where E(h(RSSI)) is the average depth of a given RSSI value in the T isolation trees, and c(ψ) is the average path length of an isolation tree generated from a training sample of size ψ.
(VI) Using Equation (6) and the isolation trees generated by training, calculate the S(RSSI) of the RSSI values broadcast by different iBeacons in turn. If the S(RSSI) is closer to 1, it is considered to be a signal outlier.
As shown in Figure 3, taking the RSSI values collected 0.2 m away from an iBeacon base station in the experimental scene as an example, RSSI_1 only needs one plane cut to become the only datum in its subspace, whereas RSSI_2 needs to be cut six times. RSSI_1 is therefore considered to be the abnormal signal value within that group of data. According to this principle, and combined with Equation (6), anomaly detection can be performed on all the collected RSSI signals. Of course, this is just an idealized illustration in which only one segmentation strategy is used to detect abnormal signals. In practice, a variety of segmentation strategies is used in the experimental part, and the score is finally calculated according to Equation (6), which is used to determine whether a signal value should be classified as an outlier.
Compared with other methods, using the isolation forest algorithm to perform anomaly detection on the received RSSI has the following advantages. (1) It can perform anomaly detection unsupervised, thereby detecting anomalies in continuous data without prior data. (2) The amount of calculation is small, and distributed training and calculation can be realized well. (3) It has good effect and stability for data sets with a small data volume and low data dimensionality. This algorithm is therefore suitable for rapidly (i.e., within a few seconds) detecting abnormalities within the RSSI signals received from each iBeacon base station in an indoor location. Abnormally fluctuating signals can be detected in a relatively short time, and the abnormal rate of each signal source in that time period can be calculated. The equation for calculating the abnormal rate is as in Equation (7): α = (abnormal signals)/(all signals). (7)
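The paper describes its own isolation forest implementation with a subsample of size ψ and T trees; as a hedged illustration, the sketch below uses scikit-learn's IsolationForest as a stand-in to score a short RSSI window per beacon, returning the anomaly rate of Equation (7) and the cleaned mean RSSI. The data layout and parameter choices are our own assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def beacon_anomaly_stats(rssi_window, n_trees=100, subsample=64, seed=0):
    """Per-beacon anomaly rate (Eq. (7)) and cleaned mean RSSI over a short window.

    rssi_window : dict mapping beacon MAC -> 1-D array of RSSI samples (dBm)
    Returns a dict MAC -> (anomaly_rate, mean_rssi_without_outliers).
    """
    stats = {}
    for mac, rssi in rssi_window.items():
        x = np.asarray(rssi, dtype=float).reshape(-1, 1)
        forest = IsolationForest(n_estimators=n_trees,
                                 max_samples=min(subsample, len(x)),
                                 random_state=seed).fit(x)
        labels = forest.predict(x)                 # -1 for outliers, +1 for inliers
        alpha = float(np.mean(labels == -1))       # Eq. (7): abnormal signals / all signals
        clean = x[labels == 1]
        mean_clean = float(clean.mean()) if len(clean) else float(x.mean())
        stats[mac] = (alpha, mean_clean)
    return stats
```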
LM Optimization with Weighted Anomaly Rate
When using Equation (5) to solve the coordinates, if the equation is directly linearized, a certain error will be introduced into the model. We therefore used the LM algorithm to solve the coordinates iteratively. From Equation (1), the plane coordinate solution model of Equations (8) and (9) can be obtained. Furthermore, Equation (9) can be written as the problem of solving Equation (10), with residual functions of the form f_i(Y, X) = ((Y − y_i)² + (X − x_i)²)^{1/2} − D_i. The LM algorithm is an improved Gauss-Newton method [23], and its iterative solution equation is as in Equation (11), where x_s is the plane coordinate vector [Y, X]^T to be solved, (x_i, y_i) are the plane coordinates of every iBeacon base station involved in the coordinate solution, and D_i is the spatial distance between the position to be solved and the iBeacon base station, which can be calculated by Equation (4). Furthermore, in order to find the optimal solution coordinates, the sum of squares Σ_i f_i(Y, X)² should be minimized, as shown in Equation (12).
The LM approach then performs a Taylor expansion of Equation (10) at (Y, X) = (Y_k, X_k), and omits terms of second order and above so as to get the iterative formula of Equation (13), x_{s+1} = x_s − (J_f^T J_f + µI)^{-1} J_f^T ε_f, where J_f is the Jacobian matrix of the function. For the i iBeacon base station signal sources, J_f can be expressed as in Equation (14), I is the identity matrix matching the coordinate vector to be solved, and ε_f is the residual vector of each iteration, which can be calculated as in Equation (15). The µ in Equation (13), meanwhile, is the damping coefficient. When µ is small, the iteration process approaches the Gauss-Newton method, which has second-order convergence. When µ is large, on the other hand, the iteration process follows the gradient descent method, which converges more slowly but is more robust far from the solution. The damping coefficient can be controlled through the gain ratio ρ of Equation (16), where h is the iteration step and L(0) − L(h) is as in Equation (17).
In the iterative process, the damping coefficient µ is adjusted through ρ, and the current iterative solution strategy is selected accordingly. When the iteration terminates at |x_{s+1} − x_s| < λ, the coordinate solution is completed, with λ a small threshold. The initial damping coefficient of the iteration is determined as µ_0 = max{(J_f^T J_f)_ii}. It can be seen that the iterative solution of Equation (13) is not weighted, that is, the weight of the signal data of each iBeacon base station participating in the coordinate calculation is the same. In reality, however, the signal quality of different iBeacon base stations differs, and thus the overall precision of the positioning can be improved by assigning different weights to the iBeacon base stations. Accordingly, we add a weight matrix to Equation (13), in combination with the anomaly detection results described in Section 2.3, to obtain Equation (18), x_{s+1} = x_s − (J_f^T W J_f + µI)^{-1} J_f^T W ε_f, where the weight matrix W is constructed by Equation (19), in which RSSI_i is the mean RSSI of the i-th iBeacon base station after removing the abnormal values within a period of time, and β_i can be expressed by Equation (20), β_i = (1 − α_i)/Σ_j(1 − α_j), where α_i can be obtained through Equation (7). According to Equation (20), the non-abnormal rate of each signal source is normalized. In other words, the iBeacon base stations at the location are sorted according to their signal fluctuations within a given period of time, and weights are assigned according to their abnormal rates. The optimal coordinates of the current point can then be solved iteratively through Equations (11), (16) and (19), using the RSSI values of each iBeacon base station collected over that same period of time.
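To make the weighted iteration concrete, a hedged sketch is given below. The diagonal weight W_i = β_i/RSSI_i² is our reading of Equation (19) (combining the normalized non-abnormal rate with the 1/RSSI² weighting mentioned for Equation (5)), and a simple halve/double damping update is used in place of the gain-ratio control of Equations (16)-(17); all names are ours.

```python
import numpy as np

def anomaly_weights(alphas, mean_rssi):
    """Weights from anomaly rates (Eq. (20)) and mean RSSI (assumed form of Eq. (19))."""
    beta = 1.0 - np.asarray(alphas, dtype=float)        # non-abnormal rate of each source
    beta /= beta.sum()                                   # normalization, Eq. (20)
    return beta / np.asarray(mean_rssi, dtype=float) ** 2

def weighted_lm_position(beacons, distances, weights, x0, max_iter=50, lam=1e-6):
    """Weighted Levenberg-Marquardt solution of the 2-D position (Y, X).

    beacons   : (m, 2) known iBeacon coordinates
    distances : (m,)   distances D_i from Eq. (4)
    weights   : (m,)   per-beacon weights, e.g. from anomaly_weights()
    """
    x = np.asarray(x0, dtype=float)
    W = np.diag(weights)
    cost = lambda p: float((weights * (np.linalg.norm(p - beacons, axis=1) - distances) ** 2).sum())
    mu = None
    for _ in range(max_iter):
        diff = x - beacons
        ranges = np.linalg.norm(diff, axis=1)
        res = ranges - distances                         # residual vector eps_f
        J = diff / ranges[:, None]                       # Jacobian of the range residuals
        H = J.T @ W @ J
        if mu is None:
            mu = np.max(np.diag(H))                      # initial damping, mu_0 = max{(.)_ii}
        step = np.linalg.solve(H + mu * np.eye(2), -J.T @ W @ res)
        x_new = x + step
        if cost(x_new) < cost(x):                        # accept the step, relax the damping
            if np.linalg.norm(x_new - x) < lam:
                return x_new
            x, mu = x_new, mu * 0.5
        else:                                            # reject the step, increase the damping
            mu *= 2.0
    return x
```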
Experiment
The location for the experiment was an indoor venue with a size of around 92 m × 23 m. The experiment was mainly carried out in a corridor area, which had a minimum width of 2.4 m. We used a Xiaomi Mi 6 smartphone and its sensors for positioning. During our experiments, we held the smartphone horizontally in both steady and swaying states. The parameters and settings for the iBeacon base stations are shown in Table 2. First, in order to obtain the specific values of A and n in the trilateration model, we set up an iBeacon base station at a fixed location in the test scene, and collected the RSSI values for a period of time at fixed distances from the base station. Data were collected between 0.3 m and 10.2 m away from the base station, with an interval of 0.3 m. After using the isolation forest algorithm to detect anomalies and eliminate the abnormal signals, the signal strength values collected at each fixed distance were averaged in order to obtain the RSSI corresponding to the current distance. Then, the distance measurement model was fitted under the optimization of the nonlinear least squares algorithm (Figure 4). After identifying and eliminating the abnormal RSSI data at each fixed point, model fitting was performed, and the parameters A_n and n in the ranging model were obtained by fitting as A_n = −71.58 dBm and n = 2.647. It is worth noting that the fitting method used in this article is the least squares method; more specifically, it is the same as the method used in the positioning solution, the LM algorithm. As shown in Figure 4, a good fitting effect is achieved based on Equation (4), and no overfitting of the data is apparent. In addition, the anomaly detection results at 0.3 m and 9 m from the iBeacon station are shown in Figure 5. It can be clearly seen from Figure 5 that the signal anomaly rate at 9 m was significantly (1.54 times) higher than that at 0.3 m (the abnormal rate at 0.3 m is 13.11%, and that at 9 m is 20.23%). It was therefore necessary to perform real-time anomaly detection for each signal source in the process of positioning and solving. After obtaining the specific A and n parameters required by the ranging model, we set up iBeacon base stations with the same parameters at 15 fixed locations on the experimental floor. At the same time, we planned the path of the experimental data collection (Figure 6). In the subsequent coordinate calculation process, we performed coordinate calculations at intervals of 0.6 m, and used these to calculate and compare positioning errors.
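As a sketch of the calibration step just described, the fragment below fits A_n and n of Equation (4) to the cleaned mean RSSI measured at known distances; scipy's curve_fit (which defaults to an LM solver for unconstrained problems) is used here as a stand-in, and the function and variable names are ours.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_ranging_model(distances_m, mean_rssi_dbm):
    """Fit A_n and n in Eq. (4), RSSI = A_n - 10*n*log10(D), to calibration data.

    distances_m   : known transmitter-receiver distances (e.g. 0.3 m ... 10.2 m)
    mean_rssi_dbm : cleaned mean RSSI measured at each distance
    The paper reports A_n = -71.58 dBm and n = 2.647 for its test scene.
    """
    model = lambda d, A_n, n: A_n - 10.0 * n * np.log10(d)
    (A_n, n), _ = curve_fit(model, distances_m, mean_rssi_dbm, p0=(-60.0, 2.0))
    return A_n, n
```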
After completing the iBeacon layout and data collection path planning, we carried out data collection experiments along the route shown in Figure 6. Then, the collected RSSI was matched with the MAC address, so that each reading corresponded to its signal source iBeacon base station. A custom coordinate system was used, the origin point for the coordinates is shown in Figure 6, and the MAC address and coordinates of each iBeacon base station are shown in Table 3.
Table 3. iBeacons' MAC and Coordinates.
Number | MAC | Y | X
After preprocessing the collected data, i.e., performing the MAC and coordinate matching, we carried out anomaly detection on each group of data from different signal sources at 0.6 m steps according to the isolation forest algorithm described in Section 2.3. The anomaly rate of each group of data was calculated after eliminating the abnormal RSSI. For example, Figure 7 shows the original RSSI anomaly detection results received at the starting point from the iBeacon station B1, in parallel with the data after the abnormal RSSI was eliminated.
Using the isolation forest algorithm for anomaly detection and calculating the anomaly ratio, the non-anomaly rate (trusted signal rate) was normalized and combined with the received RSSI to generate a weight matrix. Then, the weighted LM nonlinear least squares algorithm was used to calculate the coordinates of each point to be measured. The positioning results for an example point are shown in Figure 8. The valuation range in the figure refers to the coordinate area included in the 95% confidence interval. The coordinates at the center of that valuation range are chosen as the final positioning solution. As shown in Figure 8, the blue scale on the right represents the distance (m), with higher values indicating larger distances.
According to the above method, the solutions for the 119 sets of data that were collected are shown in Figure 9. The error statistics are shown in Table 4. It can be seen from the table that the maximum point position error is 3.527 m, the average value is 1.540 m, and the RMSE is 1.748 m. Here, X and Y are the errors of the positioning results in the two perpendicular directions of the plane, with the X and Y directions as shown in Figure 6. The positioning accuracy obtained by the method proposed in this article is consistent with the current state of research based on BLE technology (meter-level accuracy) [2], and is sufficient to meet the needs of indoor navigation and positioning. A further benefit is that the method proposed in this paper greatly improves the stability of the positioning accuracy, which will be discussed in more detail in Part 4.
Discussion
In order to better verify the effect of this method in reducing positioning errors, we also designed three comparison algorithms: (A) no anomaly detection, with a weightless LM algorithm used to generate the solution [24]; (B) anomaly detection of the signals to remove outliers, with a weightless LM algorithm used to generate the solution; and (C) no anomaly detection, with only the RSSI values used to set the weight matrix and a weighted LM algorithm used to generate the solution [25]. The respective errors of these three positioning methods are shown in Table 5, and their experimental positioning results are shown in Figure 10.
Table 5. Errors of comparative methods. Columns: Method in this paper (m), Error of A (m), Error of B (m), Error of C (m).
Comparing Figures 9 and 10, it is obvious that the calculated results for individual positioning points fluctuate strongly in all the comparator methods. This phenomenon, especially the excessive errors in the calculated coordinate values, would inevitably affect the user's experience during the navigation phase.
Except for Method C, the error in the X direction is worse than the error in the Y direction. This is because Method C uses only the RSSI values for weighting, which causes a certain change in the direction of the error distribution when solving. In fact, it is not difficult to see from Table 5 that the error difference between the two directions for the method proposed in this article is also smaller than that of Methods A and B. In any case, this difference does not affect the requirements for actual use.
From Figure 10, it can be seen that the result obtained with the positioning method proposed in this paper is the closest to the real planned path. By comparing Tables 4 and 5, it can be clearly seen that, compared to the three comparator positioning methods, the weighted LM algorithm combined with anomaly detection proposed in this paper reduces errors and improves positioning precision in the X direction, in the Y direction, and on the plane. Compared with Methods A, B, and C, the proposed method reduces the mean error by 36.42%, 33.88%, and 26.60%, respectively, and the RMSE by 38.69%, 36.60%, and 29.52%, respectively. In addition, it reduces the maximum error by 46.64%, 46.50%, and 38.71% with respect to Methods A, B, and C, respectively. This shows that using the isolation forest algorithm for anomaly detection can effectively and quickly detect abnormal fluctuations of the received signal. Eliminating these abnormal signals reduces the positioning error and improves the positioning precision. Moreover, weighting according to the anomaly ratio of each signal source further reduces the error in the coordinate calculation.
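The percentage reductions quoted here follow the usual definition, reduction = (comparator value − our value) / comparator value. A one-line sketch is given below; the comparator RMSE in the example is back-computed from the stated 38.69% and is therefore only illustrative, not a value reported in the paper.

```python
# Hedged sketch: relative reduction of an error metric versus a comparator method.
def reduction(ours: float, comparator: float) -> float:
    return 100.0 * (comparator - ours) / comparator

# Example with illustrative numbers: reduction(1.748, 2.85) is roughly 38.7 (%).
```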
To further illustrate the superiority of the positioning method proposed in this paper, the error distribution histograms (Figure 11) and the cumulative distribution curves (Figure 12) of the four positioning algorithms are shown.
From Figure 11, it can easily be seen that the errors of the method in this paper are relatively concentrated and their dispersion is small. Seventy percent of the errors lie between 0.5 m and 2 m, and for 85.71% of the points the plane error is less than 3 m, which is clearly better than any of the other three comparative algorithms. Figure 12 shows that the method proposed in this paper gives a very large improvement over Method A. Compared with the intermediate Methods B and C, eliminating the abnormal RSSI values also plays a role to a certain extent, although the effect is not pronounced; the greatest improvement comes from weighting based on the real-time status of each signal source. This shows that weighting by the RSSI value alone indeed tends to ignore real-time signal fluctuations of iBeacon stations at close range, a problem that the method proposed in this paper handles well. Meanwhile, Figure 12b shows that the cumulative distribution curve of this method lies essentially above the other curves, indicating that its error values are more tightly aggregated than with the other methods. With the method proposed in this paper, the probability of an error being within 3.6 m is 1, that is, all errors are less than 3.6 m, which is clearly smaller than with any of the other three methods.
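The cumulative figures quoted from Figure 12 (for example, that 85.71% of points have a plane error under 3 m and that all errors are under 3.6 m) correspond to evaluating the empirical CDF of the errors at those thresholds, as in the following sketch.

```python
# Hedged sketch: empirical CDF of plane errors evaluated at chosen thresholds.
import numpy as np

def cdf_at(errors, thresholds=(2.0, 3.0, 3.6)):
    e = np.asarray(errors, dtype=float)
    return {t: float(np.mean(e <= t)) for t in thresholds}
```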
Conclusions
In this paper, we propose an indoor positioning method that detects anomalies in the RSSI of iBeacon signals, eliminates the detected anomalous RSSI values, and then sets a weight matrix based on the detected anomaly rate and the RSSI. Under the constraints of this weight matrix, the LM algorithm is used for coordinate calculation. This method eliminates the need to establish a fingerprint database before use, while ensuring low error and high positioning precision. Experiments show that the average point error of the proposed method is 1.540 m and the RMSE is 1.748 m. Furthermore, across all solved coordinate points the error stays within 3.6 m, and in 85.71% of cases the point error is less than 3 m.
At the same time, in order to better compare the effects of the proposed method on error control and precision improvement, we calculated the coordinate errors of three other methods. Compared with using the LM algorithm to solve the coordinates without anomaly detection and without setting a weight matrix, the RMSE is reduced by 38.69%; compared with only using the isolation forest algorithm to detect and eliminate outliers, it is reduced by 36.60%; and compared with eliminating outliers but using only the RSSI to construct the weight matrix, the RMSE is reduced by 29.52%. The mean error of the proposed method is also the lowest. This proves that the algorithm in this paper has the following advantages.
(1) Abnormal signal fluctuations can be detected in a short time, and the positioning coordinates can be solved in time according to the state of different signal sources, with good positioning precision.
(2) While ensuring the overall positioning precision, the loss of positioning accuracy caused by signal fluctuations is greatly reduced.
Author Contributions: All authors conceived and designed the study. Y.G. is a Master's student who performed all development work. J.Z. is the thesis supervisor and organized all work on the computer science side. W.Z., G.X., and S.D. participated in scene setting, data acquisition, data processing, and analysis. All authors contributed to the result analysis and discussions. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy. | 9,415 | 2020-12-27T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Index Design and System Analysis of Engineering Education Accreditation Evaluation System
The essence of professional accreditation in engineering education is that an accreditation and evaluation organization conducts a comprehensive evaluation of the engineering disciplines offered by educational institutions. The evaluation is carried out jointly by specialized occupational bodies, industry associations, industry federations, and professional societies, together with education experts in the field and enterprise experts from related industries, to ensure that graduates of these engineering programs have sufficient professional qualities and abilities when they enter the real industrial and social environment. At present, the whole process of engineering education accreditation in China relies mainly on manual review of applications. In order to connect applicants and evaluators, an engineering education accreditation and evaluation system is needed. Through such a platform, universities can be matched with evaluation expert groups and the evaluation results can be visualized. This paper is part of the foundational work for the evaluation system. Its main contributions are: analyzing the general standard of engineering education accreditation using data analysis and the expert method; analyzing the relevant Chinese standards for engineering education accreditation with the same methods; screening out the specific indicators of the accreditation evaluation system using the expert method; designing the index system of the evaluation system according to the whole evaluation and accreditation process, combined with the characteristics of other evaluation systems; and completing the functional and non-functional analysis of the evaluation system. This work lays a solid foundation for the subsequent design and implementation of the engineering education accreditation evaluation system.
Introduction
At the beginning, the development of China's engineering education accreditation was relatively slow, and the overall layout and system were still in an exploratory stage [1]. Since then, the accreditation system has developed rapidly [2], and its relatively mature evaluation framework has essentially achieved equivalence with international accreditation [3]. China plans to achieve full coverage of all major categories by 2020 and to build a three-level professional accreditation system. After a long period of supplementation and improvement [4], China's professional accreditation standard has become relatively complete; it consists mainly of the general standard and the supplementary standards for engineering education accreditation [5]. The general standard is aligned with international standards and covers all the items in those standards [6]. The professional supplementary standards are not separate standards; rather, they add the special requirements of each specialty to the seven aspects of the general indicators [7].
At present, half of all engineering disciplines in China have joined evaluation and accreditation, and China is committed to having all majors join the ranks of accredited programs, which is an important step in improving China's engineering education accreditation system [8]. However, in the pursuit of "quantity", "quality" cannot be ignored [9]. To ensure that accredited majors can really cultivate professional talents with excellent comprehensive strength, the accreditation system must not only be in line with international standards [10] but also reflect the actual situation in China. Although international mutual recognition is very important in an era of international integration, serving one's own country remains the fundamental purpose [11]; therefore, Chinese characteristics cannot be ignored in the formulation of the system [12].
Based on the design of the general indexes, this paper adds indexes with Chinese characteristics and completes the requirement analysis of the system.
Analysis on the Concept of General Standard
The latest general standards of engineering education certification mainly include seven aspects: students, training objectives, graduation requirements, continuous improvement, curriculum system, teaching staff and supporting conditions [13]. The three concepts of engineering education accreditation are: student-centered, achievement oriented and continuous improvement [14]. Among them, student-centered is the purpose, achievement oriented is the requirement, and continuous improvement is the mechanism [15].
The connotation of the student-centered concept is a focus on the cultivation of students [16]. The teaching content is designed according to the expectations for students, the object of evaluation is student performance, and the whole student must be considered [17]. The standard for measuring the teaching staff and other supporting conditions is whether they help students achieve the expected goals; all conditions are intended to ensure that students can better complete the training objectives and meet the graduation requirements.
The achievement-oriented concept requires that a major seeking accreditation demonstrate that every qualified graduate achieves the training objectives and graduation requirements. These objectives and requirements must guide daily teaching activities and help everyone who undertakes a teaching task to clarify their responsibilities. The evaluation of graduation requirements and training objectives must be decomposed across the whole process of students' learning.
The connotation of the continuous improvement concept is that the whole training process must be equipped with an effective quality monitoring and feedback mechanism. Teachers not only need to participate in designing the teaching plan but must also take responsibility within the monitoring and feedback mechanism: they should follow up on feedback about students' performance and adjust the training plan in time according to the results. Monitoring alone, without improvement, is useless. Moreover, beyond the common international standards, characteristic indicators also need to be added to the continuous feedback and supervision mechanism.
Evaluation Index Design
The accreditation indicators of the system are designed according to the general standards for engineering education accreditation, the interpretation of those general standards, and the analysis of relevant data; they are classified into general indicators and characteristic indicators.
General Standard Index Design
The general indicators are shown in Tables 1 to 7. The specific connotation of each indicator below can be interpreted and applied in accordance with the general standards for engineering education accreditation.
Table 1. Students.
Index number  Index name
TY1001  Enrollment system and measures
TY1002  Learning guidance
TY1003  Follow-up assessment of academic performance
TY1004  Professional transfer regulations and credit recognition

Table 2. Training objectives.
Index number  Index name
TY2001  Training objectives and contents
TY2002  Evaluation and revision of training objectives

Table 3. Graduation requirements (only a fragment is recoverable here): Communication, project management; TY3012  Lifelong learning.

Table 4. Continuous improvement.
Index number  Index name
TY4001  Monitoring mechanism of teaching quality
TY4002  Follow-up feedback mechanism for graduates
TY4003  Practicability of evaluation results
Characteristic Index Design
According to the relevant documents, the characteristic indexes other than the supplementary standards are shown in Tables 8 to 12. Explanation of the index connotation for Table 8: classified cultivation of undergraduate talents should be emphasized, focusing not only on cultivating high-level leading talents of the engineering specialty but on engineering science and technology talents of all levels and types.

Table 9. Training objectives and characteristic indicators.
Index number  Index name
TS2001  Training objectives aim at output
TS2002  Moral cultivation

Explanation of index connotation:
TS2001: Design the training system with an advanced education concept; design and implement it according to the school's positioning, professional resources, social needs, stakeholder expectations, and similar conditions, and continuously improve it according to the results of evaluation.
TS2002: The training goal should focus not only on students' professional skills and working ability but also on their moral cultivation, so that graduates have both ability and integrity rather than becoming "giants" in the industry and "villains" in society.

Explanation of the graduation-requirement characteristic indicator (Table 10): this standard requires that the graduation requirements match the training objectives so as to ensure their realization, so that the training objectives are not mere gorgeous rhetoric but the result of the joint efforts of teachers and students.
Table 11. Characteristic index of continuous improvement.
Index number  Index name
TS4001  Output-oriented quality monitoring mechanism

Explanation of index connotation: this standard requires that the quality monitoring mechanism be output oriented, and that the evaluation mechanism for the achievement of course teaching objectives be concrete and clearly related to the graduation requirements. The evaluation results should be analyzed in depth, and improvement schemes should be put forward on the basis of the analysis results.

Table 12. Characteristic index of curriculum system.
Index number  Index name
TS5001  Supporting graduation requirements
TS5002  Curriculum system layout

Explanation of index connotation:
TS5001: The curriculum system must ensure the achievement of the graduation-requirement index points that it supports, decompose the tasks of achieving the graduation requirements into specific teaching activities, and ensure that the assessment (the evaluation of students' learning achievements) reflects a substantive contribution to the achievement of the corresponding index points.
TS5002: The first connotation of this standard is that the curriculum must form a coherent "system": the overall layout should be clear and reasonable, teachers should participate in its design and formulation, understand the design ideas behind it, and establish an overall view of talent training. The system must respect prerequisite relationships between courses, be adapted to ability-oriented education, and balance the distribution of curriculum components, with appropriate course objectives, syllabi, and so on.
Feasibility Analysis
Feasibility analysis is a multi-faceted analysis that should be carried out when starting a new project. It determines whether the problem can be solved in the shortest possible time and at minimum cost, and its results are of great significance to the development of the project. We also need to consider the necessary cost of developing the project, whether it can be implemented successfully with the available technology, and whether it is difficult for users to operate. Only by integrating all of these analyses can we finally decide whether the development of the system is feasible. Since the system does not involve profit, the feasibility analysis covers only technical feasibility and operational feasibility.
(1) Technical feasibility analysis considers the functions, performance, and other constraints of the system from the perspective of technical implementation, such as computer software, hardware, and the development environment. The main functions of this system include submitting PDF files, viewing files, entering evaluation results, running the scoring algorithm, feeding back results, and managing users and applications. These functions all rely on a database connection. MySQL is used as the database, and IDEA is used as the development environment. The technical risk is low, the required functions can be realized, and the performance, reliability, and maintainability meet the requirements.
(2) Operational feasibility requires that all users, including technicians, can successfully complete each function once the system is finished. The human-computer interaction pages provided by the system are easy to understand, and the overall process is clear. Users only need to register, log in, upload files, and click to view the evaluation results. Administrators need to delete, modify, and view user information and application information, which presents no difficulty. Evaluators only need to click to view an application and enter the evaluation results on the evaluation page, which offers five alternative evaluation levels for each indicator; they simply select a level for each indicator and submit. In summary, the system is operationally feasible.
Functional Requirements
This system is divided into four main modules; the functional requirements of each module are introduced below.
User interaction module: through this module, users can register, log in, submit applications, and view evaluation results. The specific process is to log in to the platform and upload all relevant information about the major applying for accreditation; this information is recorded in the database. Experts can view the information and give evaluation results with the help of the algorithm, and the results are fed back to the platform for users to view. This module uses basic registration, login, and file upload functions; the key point is to ensure that the data are uploaded successfully, because the whole evaluation is based on the application data submitted by users, and the data themselves are what the experts review.
Evaluation module: this module specifies reasonable indexes according to the general evaluation standard for engineering education majors. Experts can select an evaluation level for each index according to the relevant information provided by the major. The system performs a calculation based on these selections, then gives a suggested evaluation result and records it. The core of the module is the evaluation process itself, ensuring that a result has been selected for every standard item so that the data passed to the data processing module are complete.
Data processing module: this module converts the evaluation grades entered by experts into computable data and then performs a calculation with a defined algorithm according to the different values and weights. The final calculation results are fed back to the system, and the data in this process are recorded for later query and modification. The module focuses on the design of the algorithm, because the indicators are divided into two categories, general and characteristic, and the weighting between these two categories must be considered; only a logically sound algorithm can give the most accurate suggestions (a sketch of this conversion is given after the module descriptions).
Management module: the system administrator can log in to the management module to authorize, add, delete, and modify. The administrator cannot perform evaluation tasks, but has the right to view, delete, and modify applications, evaluation results, and user information.
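A minimal sketch of the data processing module's grade-to-score conversion and weighted aggregation is given below. The five-level grade mapping and the 70/30 split between general (TY) and characteristic (TS) indicators are hypothetical design choices for illustration, not values taken from the paper.

```python
# Hedged sketch: convert expert grade selections into a weighted overall score.
# GRADE_SCORES and general_weight are hypothetical; the real mapping and weights
# are design decisions for the evaluation system.
GRADE_SCORES = {"A": 1.0, "B": 0.8, "C": 0.6, "D": 0.4, "E": 0.2}

def aggregate(evaluations, general_weight=0.7):
    """evaluations maps an index number (e.g. 'TY1001', 'TS2001') to a grade."""
    general = [GRADE_SCORES[g] for k, g in evaluations.items() if k.startswith("TY")]
    special = [GRADE_SCORES[g] for k, g in evaluations.items() if k.startswith("TS")]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return general_weight * mean(general) + (1.0 - general_weight) * mean(special)
```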
Non Functional Requirements
Non-functional requirements include performance requirements, security requirements, maintainability and scalability requirements, and reliability requirements.
Performance requirements: the performance of the system mainly concerns response time and throughput. Since operating the system only involves simple upload, submit, and save actions, responses should not time out; however, if too many accreditation applications arrive in a short period, the system may slow down. Because an algorithm is used to calculate the evaluation results and give the final recommendations, the most efficient algorithm should be chosen and reasonably optimized, which greatly improves the analysis efficiency of the system.
Security: the system keeps the user's personal information, submitted application information, evaluation results, and other data strictly confidential to prevent leakage and attack. Account permissions are managed uniformly, so security can be guaranteed.
Maintainability and extensibility: as the whole evaluation system gradually improves, the evaluation indexes, especially the characteristic indexes, will continue to increase. The system maintains good extensibility, the indexes can be changed at any time, and account information is sorted and maintained regularly.
Reliability: the system must make user submissions available to experts promptly after receiving them, ensure that different applications can be processed accurately in parallel, and run reliably on hardware that can withstand high load, so that every application can be evaluated successfully during peak periods of concentrated applications.
Conclusion
Engineering education accreditation in China has developed steadily from its emergence to the present; after years of improvement, it has formed a reasonably complete system and has completed the accreditation process for hundreds of majors. Today engineering education is very important, and the importance of engineering education accreditation is increasingly prominent: the accreditation of a major, to a certain extent, guarantees the quality of talent training. This paper has covered the analysis of related theory and technology, the analysis and design of accreditation indicators, and the system analysis. It is worth noting that, in order to make the system more suitable for China's engineering education, characteristic indicators with Chinese characteristics are extracted in addition to the general indicators, and these indicators are classified separately. In the future, the system will give recommended ratings for the general and characteristic indicators according to a specific algorithm, which ensures that the system can be in line with international standards without losing its own characteristics. The accreditation process is not limited to application, evaluation, and feedback; it also involves on-the-spot inspection, which still requires further manual evaluation. Therefore, the system cannot directly give the final accreditation result; it only provides a platform between schools and experts during the application evaluation stage.
"Computer Science"
] |
Cyclic Nucleotide Regulation of PAI-1 mRNA Stability
Incubation of HTC rat hepatoma cells with the cyclic nucleotide analogue 8-bromo-cAMP results in a 3-fold increase in the rate of degradation of type-1 plasminogen activator-inhibitor (PAI-1) mRNA. Previous studies utilizing HTC cells stably transfected with β-globin:PAI-1 chimeric constructs demonstrated that at least two regions within the PAI-1 3′-untranslated region mediate the cyclic nucleotide-induced destabilization of PAI-1 mRNA; one of these regions is the 3′-most 134 nucleotides (nt) of the PAI-1 mRNA (Heaton, J. H., Tillmann-Bogush, M., Leff, N. S., and Gelehrter, T. D. (1998) J. Biol. Chem. 273, 14261–14268). In the present study, ultraviolet cross-linking analyses of this region demonstrate HTC cell cytosolic mRNA-binding proteins ranging from 38 to 76 kDa, with a major complex migrating at ∼50 kDa. RNA electrophoretic mobility shift analyses demonstrate high molecular weight multiprotein complexes that specifically interact with the 134-nt cyclic nucleotide-responsive sequence. The 50, 61, and 76 kDa and multiprotein complexes form with an A-rich sequence at the 3′ end of the cyclic nucleotide-responsive region; a 38-kDa complex forms with a U-rich region at the 5′ end of the 134 nt sequence. Mutation of the A-rich region prevents both the binding of the 50-, 61-, and 76-kDa proteins and formation of the multiprotein complexes, as well as cyclic nucleotide-regulated degradation of chimeric globin:PAI-1 transcripts in HTC cells. These data suggest that the proteins identified in this report play an important role in the cyclic nucleotide regulation of PAI-1 mRNA stability.
Plasminogen activators catalyze the conversion of the zymogen plasminogen to plasmin, a broad-spectrum endopeptidase that is responsible for intravascular fibrinolysis (1). This protein is also known to play a major role in biological processes involving localized proteolysis of extracellular matrix, such as tissue remodeling and tumor cell invasion and metastasis (2). Type-1 plasminogen activator-inhibitor (PAI-1), a 50-kDa glycoprotein, is a major regulator of plasminogen activation (3,4). PAI-1 is synthesized in a variety of cell types and its expression is regulated by growth factors and hormones, including agents that elevate intracellular cAMP levels (5)(6)(7)(8).
In HTC rat hepatoma cells, the cyclic nucleotide analogue 8-bromo-cAMP, together with the phosphodiesterase inhibitor isobutylmethylxanthine (designated cA), increases tissue-type PA activity more than 50-fold, primarily by decreasing PAI-1 mRNA by 90% and protein by 60-70%. The decrease in PAI-1 mRNA is due to a 60% decrease in the rate of PAI-1 gene transcription and, more importantly, a 3-fold increase in the rate of PAI-1 mRNA decay (8,9). Utilizing HTC cells stably transfected with chimeric constructs containing portions of the mouse β-globin gene and rat PAI-1 cDNA, the 1730-nucleotide (nt) PAI-1 3′-untranslated region (UTR) (nt 1331-3060) was shown to contain sequences that mediate the cA-induced destabilization (10). Furthermore, results obtained from deletion and insertion analyses using a series of β-globin coding region:PAI-1 3′-UTR chimeric constructs demonstrated that at least two regions within the PAI-1 3′-UTR mediate the cA effect. One of these regions is the 3′-most 134 nt, from position 2926 to 3060, of the PAI-1 mRNA. This 134-nt sequence includes a 75-nt U-rich region present at its 5′ end and a 24-nt A-rich region at its 3′ end (Fig. 1A) (10,11).
The decay rates of many mRNAs have been shown to be regulated by a variety of external stimuli including hormones, growth factors, and agents that elevate intracellular cAMP levels (see Refs. 12 and 13 for review). Studies aimed at elucidating the mechanism(s) involved in regulating mRNA stability have identified a number of potential cis-acting sequences and/or trans-acting factors (12,13); however, the molecular basis for cyclic nucleotide regulation of mRNA stability remains largely undefined.
The aim of the present study was to further elucidate the mechanism(s) involved in the cyclic nucleotide-induced destabilization of PAI-1 mRNA in rat HTC cells. To this end, studies were conducted to identify the cytosolic factors that bind to the 3′-most 134 nt of the PAI-1 mRNA, to characterize the specificity and binding sites for these factors, and to determine their role in cA-induced PAI-1 mRNA destabilization. Results from ultraviolet (UV) cross-linking analyses demonstrate that specific RNA-protein complexes of ~38, 50, 53, 61, 65, and 76 kDa form with the 134-nt sequence, while RNA electrophoretic mobility shift analyses (R-EMSAs) demonstrate the formation of high molecular mass multiprotein complexes. The 50-, 61-, and 76-kDa and multiprotein complexes form between the A-rich region and HTC cell cytosolic proteins that are found in both polysomal and post-ribosomal fractions, while the 38-kDa complex forms between the U-rich region and proteins found in the polysomal fraction. Mutations in the A-rich region abolished formation of the 50-, 61-, and 76-kDa and multiprotein complexes as well as the ability of cA to regulate the decay of transcripts from stably transfected globin:PAI-1 chimeric constructs, suggesting that these RNA-protein complexes play an important role in the cA-induced destabilization of PAI-1 mRNA.
Cytoplasmic Extract Preparation—Monolayer cultures of rat HTC hepatoma cells were grown and maintained as described previously (9). Cells were incubated in serum-free medium for 10 to 16 h, harvested by trypsinization, pelleted, and resuspended in 25 mM Tris (pH 7.9), 0.1 mM EDTA, 1 mM Pefabloc® SC (AEBSF), 1 μM leupeptin, 1 mM benzamidine, 1 μM E64. The cell pellets were subjected to three freeze/thaw cycles (10 min/cycle) followed by centrifugation at 4°C at 10,000 × g for 15 min. The supernatant fraction (S10) was removed, assayed for protein concentration using Coomassie® Plus Protein Assay reagent as directed by the manufacturer, and stored at −70°C. Isolation of polysomal and post-ribosomal (S130) extracts was performed essentially as described (14), with additional proteinase inhibitors added to buffer A (1 mM Pefabloc® SC (AEBSF), 1 μM leupeptin, 1 mM benzamidine, 1 μM E64). The S130 fraction was concentrated using Centricon-3® concentrators as instructed by the manufacturer. The polysomal and S130 fractions were assayed for protein concentration and stored at −70°C.
The DNA templates used to generate PAI-1 RNA probes 2926-3024, 3010-3060, and 3024-3060 were generated by polymerase chain reaction (PCR). Sequence for the T3 RNA polymerase promoter (AATTAACCCTCACTAAAGGG) was included at the 5′ end of the forward primer and used with the appropriate reverse primer in PCR reactions with pSVL G/P (globin coding region/PAI-1 3′-UTR; Ref. 10) as template DNA. The DNA templates for 2926-3060 containing the A-rich region mutations were also generated by PCR (see Fig. 5A for a description of the mutations) by incorporating the mutant sequences into the reverse PCR primer. The DNA templates used to generate the wild-type and mutant 3010-3040 (Fig. 5A) were prepared by annealing complementary oligonucleotides containing the sequence for the T3 RNA polymerase promoter at the 5′ end followed by the appropriate PAI-1 sequence.
Preparation of radiolabeled RNA probes was performed essentially as described (15).

R-EMSA and UV Cross-linking Analyses—For R-EMSA, extracts and 32P-radiolabeled RNA were incubated for 20 min at 25°C in buffer containing 8 units of RNase inhibitor, 10 μg of yeast tRNA, 10 mM Hepes (pH 7.6), 5 mM MgCl2, 40 mM KCl, 5% glycerol, and 1 mM dithiothreitol. RNase T1 (1 unit/μl) was added to each reaction and incubation continued at 25°C for 10 min. Heparin (5 mg/ml) was added and the reactions were incubated at 25°C for an additional 10 min. The reactions were subjected to electrophoresis through a 5% nondenaturing polyacrylamide gel (80:1 acrylamide:bisacrylamide, 40 mA, 4°C, 4 h) and visualized by autoradiography. For UV cross-linking analyses, extracts and 32P-radiolabeled RNA were incubated and treated with RNase T1 and heparin as described for R-EMSA. The reactions were then exposed to a UV light source (UV Stratalinker 1800, Stratagene) at a distance of 2.5 cm from the light source for 10 min (1.8 J/cm2) unless otherwise specified. RNase A (10 mg/ml) and RNase T2 (100 units/ml) were added and incubation continued at 25°C for 10 min; a combination of RNases was utilized in order to maximize digestion of the unbound radiolabeled RNA probe. SDS sample buffer was added and reactions were heated at 85°C for 2 min; RNA-protein complexes were analyzed by 0.1% SDS-10% PAGE (38:1 acrylamide:bisacrylamide, 40 mA, 25°C, 4 h) followed by autoradiography. For competition analyses, S10 proteins were preincubated with unlabeled competitor RNA for 10 min at 25°C prior to the addition of radiolabeled RNA. Each R-EMSA or UV cross-linking analysis figure presented is representative of at least three independent experiments.
Analysis of Chimeric Globin:PAI-1 mRNA Stability in HTC Cells—Cell culture and maintenance, stable transfections, construction of pSVL G/G+2925/3054, riboprobe preparation, and ribonuclease protection analyses were conducted as previously reported (10). To prepare the chimeric construct pSVL G/G+2925/3054 double mutant, the PAI-1 sequence from nt 2925-3054 was amplified by PCR, digested with BglII, and ligated into the BglII site of pSVL G/G (10). The forward primer contained BglII sequences at its 5′ end, while the reverse primer contained the two mutations in the A-rich region (nt 3023-3028, AAAAAA changed to cccccc; and nt 3030-3035, AAUAAA changed to ccgccc) and BglII sequences at its 5′ end.
Interaction of HTC Cell Cytosolic Proteins with the cA-responsive 3′-most 134 nt of the PAI-1 3′-UTR—To identify potential mRNA-binding proteins in HTC cell cytosolic extracts that interact with the 3′-most 134 nt of the PAI-1 3′-UTR (nt 2926-3060), UV cross-linking analyses were conducted. Using radiolabeled nt 2926-3060 and HTC cytoplasmic extracts, a major ribonucleoprotein complex with an Mr of 50,000 (doublet) and minor complexes of 38,000, 53,000, 61,000, 65,000, 76,000, and 86,000 were detected (Fig. 1B, lane 1). Formation of the RNA-protein complexes was dependent on protein concentration and on the time of exposure to UV light; no additional interactions were detected with greater than 30 μg of extract or after 5 min of UV exposure (data not shown). To ensure complete formation of RNA-protein interactions, all subsequent reactions were exposed to UV light for 10 min (1.8 J/cm2).
Ribonucleoprotein complex formation was abolished when the samples were treated with proteinase K or when S10 proteins were denatured by heating the extract prior to incubation with radiolabeled RNA (Fig. 1B, lanes 2 and 4, respectively). No complexes were detected when the RNA alone was subjected to UV cross-linking analysis. These results confirm that the complexes are comprised of RNA and protein. Heating the RNA for 15 min at 85°C prior to incubation with protein had no effect on complex formation (Fig. 1B, lane 3). Identical ribonucleoprotein complexes were seen when longer in vitro generated transcripts (nt 2714-3060 or nt 2502-3060) that included the 3′-most 134 nt were used as probes in UV cross-linking (data not shown).
R-EMSAs using the 3′-most 134 nt of the PAI-1 3′-UTR (nt 2926-3060) demonstrated the formation of an abundant high molecular mass RNA-protein complex of approximately 175 kDa (determined by comparison with protein molecular mass markers) and a minor complex of approximately 140 kDa (Fig. 1C, lane 1). Complex formation was protein concentration-dependent and was abolished when proteinase K was added to the reaction or when S10 proteins were denatured prior to incubation with radiolabeled probe (Fig. 1C, lanes 2 and 4, respectively). No complexes were detected in the absence of extract (data not shown). These results indicate that the complexes are comprised of RNA and protein. Heating the RNA prior to incubation with protein had no effect on complex formation (Fig. 1C, lane 3).

Subcellular Distribution of Cytoplasmic RNA-binding Proteins—To determine the subcellular distribution of the cytoplasmic proteins that interact with PAI-1 sequence 2926-3060, UV cross-linking analyses were performed using S10, polysomal, or S130 fractions as the source of proteins. The 38-kDa complex was formed with polysome-associated proteins, but not with S130 proteins; the remainder of the RNA-protein complexes formed using proteins found in both the polysomal and S130 fractions. Using equal amounts of protein from the polysomal and S130 fractions, the majority of the mRNA binding activity was in the polysomal fraction. Likewise, the multiprotein complexes detected by R-EMSA formed with proteins found in both the polysomal and S130 fractions (data not shown).
Specificity of RNA-Protein Complex Formation—The specificity of the interactions detected between HTC cell cytoplasmic proteins and the radiolabeled PAI-1 sequence 2926-3060 was demonstrated by competition UV cross-linking analyses and R-EMSA (Fig. 2). Extract was preincubated with unlabeled competitor RNA prior to the addition of radiolabeled PAI-1 probe. Unlabeled PAI-1 sequence 2926-3060 as competitor reduced formation of the RNA-protein complexes detected by both UV cross-linking (Fig. 2A, lanes 1-4) and R-EMSA (Fig. 2B, lanes 1-5) in a concentration-dependent manner, indicating that these interactions are specific. A portion of the PAI-1 3′-UTR (nt 2125-2296), which when inserted into the 3′-UTR of the β-globin gene fails to confer cA-responsiveness (10), was not able to compete with labeled 2926-3060 for complex formation (Fig. 2, A, lane 6; B, lane 7). Likewise, there was no competition for most of the complexes when CAT RNA was used as a nonspecific competitor (Fig. 2, A, lane 5; B, lane 6); however, the 86-kDa complex may be nonspecific, as CAT RNA did compete for its binding. Furthermore, no RNA-protein interactions were detected when HTC cell S10 extract was incubated with radiolabeled PAI-1 sequence 2125-2296 or CAT RNA (data not shown), supporting the conclusion that the observed interactions are specific for the 134-nt cA-responsive sequence.
Since PAI-1 sequence 2926 -3060 contains a 75-nt U-rich region and a 24-nt A-rich region (Fig. 1A), competition UV cross-linking analyses and R-EMSAs were performed using unlabeled homoribopolymers as competitor RNAs. In the UV cross-linking studies (Fig. 3A), formation of the 38-kDa complex was inhibited by the presence of poly(U), but not poly(A). In contrast, formation of the 50-, 61-, and 76-kDa complexes was inhibited by the presence of poly(A), but not poly(U); the 38-, 53-, and 65-kDa complexes remained after competition with poly(A). Poly(C) and poly(G) did not inhibit formation of any of the observed complexes. These data suggest that the 38-kDa complex forms with PAI-1 sequences contained within the U-rich region and that the 50-, 61-, and 76-kDa complexes form with the A-rich sequence between nt 3023 and 3046. In the R-EMSA studies (Fig. 3B), formation of the RNA-protein complexes was abolished by the presence of poly(A), but was unaffected by poly(U), poly(G), or poly(C). These results suggest that the multiprotein complexes and the major complexes observed by UV cross-linking require the same A-rich sequence.
(Figure 1 legend.) Highlighted are the U-rich (underlined) and A-rich (bold underlined) regions located between nt 2948-3022 and nt 3023-3046, respectively. B, HTC cell S10 extract (30 μg) was incubated with radiolabeled nt 2926-3060 (1 × 10^5 cpm; 0.4 ng) and UV cross-linking analysis was carried out as described under "Experimental Procedures" with the following modifications: lane 1, no modification; lane 2, proteinase K (0.5 mg/ml) was added to the reactions during the heparin incubation; lane 3, radiolabeled RNA was incubated at 85°C for 15 min followed by rapid cooling on ice; lane 4, S10 extract was incubated at 85°C for 15 min followed by rapid cooling on ice. The Mr (in thousands) of the RNA-protein complexes as based on molecular weight standards is indicated. C, HTC cell S10 extract (30 μg) was incubated with radiolabeled nt 2926-3060 (2 × 10^4 cpm; 0.08 ng) and analyzed by R-EMSA as described under "Experimental Procedures" with the following modifications: lane 1, no modification; lane 2, proteinase K (0.5 mg/ml) was added to the reactions during the heparin incubation; lane 3, radiolabeled RNA was incubated at 85°C for 15 min followed by rapid cooling on ice; lane 4, S10 extract was incubated at 85°C for 15 min followed by rapid cooling on ice. The approximate size of the RNA-protein complexes indicated (Mr in thousands) is based on the migration of protein standards.

Identification of the Sequences Involved in RNA-Protein Complex Formation—To further define the nucleotide sequences to which the cytoplasmic proteins bind, radiolabeled RNA probes corresponding to those diagrammed in Fig. 4A were generated and used in UV cross-linking analyses. In each
case, molar equivalents of each radiolabeled RNA were used (Fig. 4B). S10 proteins and nt 2966 -3060 formed complexes with the same migration as those observed using nt 2926 -3060. Cytoplasmic proteins and nt 3010 -3060 or 3024 -3060, containing the A-rich region, also formed complexes essentially the same as those observed using 2926 -3060, except that the 38-kDa complex was absent. When S10 proteins were incubated with 3010 -3040, the abundant complexes migrating at 50 kDa were present as well as faint complexes at 53, 61, 65, and 76 kDa; the 38-kDa complex was absent.
Incubation of cytoplasmic proteins with nt 2926-3024 (Fig. 4B, lane 6), which contains only the U-rich region, resulted in formation primarily of the 38-kDa complex; only faint bands appeared at ~50, 53, and 65 kDa. This pattern of RNA-protein interactions was also observed using another 3′-UTR U-rich sequence located upstream at nt 2790-2911 as probe (data not shown). S10 incubated with nt 2926-2966 resulted in no RNA-protein complex formation (Fig. 4B, lane 7).
As a correlate to these studies, radiolabeled 2926 -3024 or 3010 -3040 were used as probes in R-EMSAs to determine if the A-rich region was also important for formation of the multiprotein complexes detected by native PAGE. As shown in Fig. 4C, the complexes detected using nt 2926 -3060 also form with nt 3010 -3040, but not with nt 2926 -3024 (compare lanes 2 and 3 with lane 1). These data indicate that the multiprotein complexes form using the same A-rich region as those ribonucleoprotein complexes detected by UV cross-linking analyses.
Together with results from homoribopolymer competition assays, these studies suggest that (i) the majority of the RNA-protein complexes detected by both R-EMSA and UV cross-linking analysis are the result of an interaction of cytoplasmic proteins with a sequence containing the A-rich region (nt 3023-3046) and (ii) only formation of the 38-kDa RNA-protein complex detected by UV cross-linking analysis results from specific interactions between S10 proteins and a sequence containing the U-rich region (nt 2948-3022).
Mutational Analysis of the A-rich Region—Since the U-rich region of the 134-nt sequence alone does not confer cA-responsiveness onto the otherwise non-responsive globin mRNA (10) and the majority of the RNA-protein interactions occur within the A-rich region, mutational analyses were performed on the A-rich region. Mutations were generated in the context of the 134-nt PAI-1 sequence (nt 2926-3060) or a 30-nt sequence containing the A-rich region (nt 3010-3040), as illustrated in Fig. 5A. Mutation of the A-rich region (mutant I: nt 3023-3028, AAAAAA to cccccc; mutant II: nt 3030-3035, AAUAAA to ccgccc) results in loss of RNA binding activity when the mutations were present in either RNA context (Fig. 5B), suggesting that the A-rich region located between nt 3023-3035 is necessary for formation of the 50-, 61-, 65-, and 76-kDa complexes. In addition, formation of the multiprotein complexes detected by R-EMSA also requires this region, as indicated by the lack of complex formation using radiolabeled 2926-3060 containing either mutation (Fig. 5C). Mutations in the A-rich region made in the context of the 134-nt sequence also decreased formation of the 38-kDa complex (Fig. 5B, lanes 2 and 3).

FIG. 3. Effect of competitor homoribopolymers on RNA-protein complex formation between HTC cytoplasmic proteins and radiolabeled PAI-1 3′-UTR sequence 2926-3060. A, unlabeled competitor RNA at 0, 10, or 100-fold molar excess was preincubated with 30 μg of S10 protein prior to incubation with radiolabeled nt 2926-3060 (1 × 10^5 cpm). RNA-protein complexes were analyzed by UV cross-linking analysis as described under "Experimental Procedures." B, unlabeled competitor RNA at 0, 10, or 100-fold molar excess was preincubated with 30 μg of S10 proteins prior to incubation with radiolabeled nt 2926-3060 (2 × 10^4 cpm). RNA-protein complexes were analyzed by R-EMSA as described under "Experimental Procedures." The Mr (in thousands) of the RNA-protein complexes is indicated.
Role of the A-rich Region in cA-regulated PAI-1 mRNA Stability—HTC cells were stably transfected with a chimeric construct containing the wild-type cA-responsive PAI-1 fragment inserted into the 3′-UTR of the murine β-globin gene (pSVL G/G+2925/3054 (10)) or pSVL G/G+2925/3054 containing both A-rich region mutations (pSVL G/G+2925-3054 double mutant). HTC cells were incubated in the absence or presence of cA and the decay rates of the chimeric mRNAs were determined as described previously (10). The top panel of Fig. 6 shows the gel from an RNase protection assay of one such
experiment and the bottom panel shows graphically the pooled data from two experiments. The wild-type 130-nt fragment mediated cA-induced destabilization of the globin:PAI-1 chimeric mRNA; in contrast, the mutant 130-nt fragment failed to confer cA-responsiveness onto the β-globin gene. Consistent with previous reports (10), control constructs pSVL G/G (globin coding region/globin 3′-UTR) and pSVL G/P (globin coding region/PAI-1 3′-UTR) showed no cA responsiveness and a 2-fold increase in mRNA turnover in response to cA, respectively (data not shown). These results strongly suggest that the same A-rich region that interacts with HTC cell cytoplasmic proteins mediates cyclic nucleotide-induced destabilization of mRNA in HTC cells.

DISCUSSION

In HTC rat hepatoma cells, cA causes a 3-fold increase in the rate of degradation of PAI-1 mRNA; the 3′-most 134 nt of the PAI-1 mRNA is sufficient to mediate the major part of this effect (10). To better understand the mechanism by which cA induces destabilization of PAI-1 mRNA, studies were conducted to identify trans-acting factors that interact with the 134-nt cA-responsive sequence, to characterize the specificity and binding sites of these interactions, and to determine their role in the regulation of mRNA stability. UV cross-linking analysis demonstrated HTC cytoplasmic mRNA-binding proteins of approximately 38, 50, 53, 61, 65, and 76 kDa, and R-EMSA demonstrated multiprotein complexes of ~175 and 140 kDa that interact with the cA-responsive 134-nt sequence. The 38-kDa mRNA-binding protein appears to interact with the U-rich region. Its binding is competed by poly(U), and the 38-kDa complex forms with PAI-1 mRNA containing U-rich sequences between nt 2966 and 3024, but not with the A-rich region (nt 3024-3060). However, complex formation is markedly decreased by mutations in the A-rich region, suggesting that conformational changes or protein-protein interactions with other mRNA-binding proteins can influence formation of the 38-kDa complex. The binding site for the majority of the mRNA-binding proteins (50, 53, 61, 65, and 76 kDa) and the multiprotein complexes was limited to a 30-nt sequence containing the A-rich region. Finally, and most importantly, through mutational analyses the A-rich region was determined to be necessary for both RNA-protein interaction and for regulation of mRNA stability in HTC cells. Thus, these studies link the binding of cytoplasmic proteins to an A-rich region in the PAI-1 3′-UTR with the cyclic nucleotide regulation of PAI-1 mRNA stability in HTC cells.
Neither the mobility nor the abundance of the complexes formed between the 134-nt sequence and HTC cytosolic proteins, however, appears to be regulated by cA. The mRNA binding activity of cytosolic proteins isolated from HTC cells incubated with cA for 30, 60, 90, 120, 180, or 240 min was essentially the same as that of controls. In addition, the distribution of complexes between the polysome and S130 fractions did not appear to be altered after incubation with cA (data not shown). Several hypotheses may explain the observed lack of cA-regulated ribonucleoprotein complex formation. First, there may be subtle changes in complex formation that are not detectable within the limits of the assays performed in this study. Second, the protein components of the complexes may be modified in response to cA; for example, protein-protein interactions within the multiprotein complexes might be altered by cA without affecting their migration through a nondenaturing gel. Third, additional proteins that are induced or repressed by cA, but do not alter RNA-protein complex formation, may be necessary for the regulation of mRNA stability. Detection of these co-factors by R-EMSA may be difficult if their interaction with the multiprotein complexes is weak. Finally, the presence of cA may alter the function of a protein(s) by allosteric or active site modifications without affecting its ability to form ribonucleoprotein complexes. This modification would be reminiscent of the regulation of cAMP-responsive element (CRE)-binding protein (CREB) by protein kinase A (16). Phosphorylation of CREB at Ser-133 by protein kinase A enhances the transactivation potential of CREB; however, it has no effect on the binding of CREB to a consensus CRE suggesting separate regulation of CREB binding and transcriptional activity.
The stability of a variety of mRNAs is subject to regulation by intracellular cAMP levels. For example, lactate dehydrogenase A subunit (18), tyrosine aminotransferase (19), renin (20), α2-adrenergic receptor (21), osteocalcin (22), the glucose transporter GLUT1 (23), chorionic gonadotropin (24), phosphoenolpyruvate carboxykinase (25), and the RII subunit of protein kinase A (26) mRNAs are stabilized in the presence of cAMP, cAMP analogues, and/or cAMP-elevating agents. Conversely, the messages for PAI-1 (9), adrenodoxin reductase (27), and tyrosine hydroxylase (28) are destabilized in the presence of cAMP, cAMP analogues, and/or cAMP-elevating agents. A number of receptor mRNAs are also destabilized by elevated intracellular cAMP levels (29-37). Despite the importance of cAMP as a regulator of mRNA stability, the mechanism by which increases in intracellular cAMP levels induce changes in mRNA turnover rates remains undefined in most systems described to date.
Limited studies, however, have identified potential cis- and trans-acting mediators of cAMP-regulated mRNA stability. In hamster smooth muscle cells, the cAMP-elevating agent isoproterenol or the cAMP analogue CPT-cAMP destabilizes β2-adrenergic receptor (β2-AR) mRNA and induces the binding of the Mr 35,000 β2-AR binding protein (β-ARB) to a 20-nt region in the 3′-UTR (34). The 20-nt AU-rich sequence, which contains an AUUUUA motif flanked by U-rich regions, was shown to mediate the agonist- and cAMP-induced destabilization of β2-AR mRNA. A nonconsensus AU-rich nonamer (UAAUAUAUU) in the human β-AR 3′-UTR that binds the hamster β-ARB in vitro was shown to be a critical determinant for the isoproterenol-induced destabilization of β2-AR transcripts in transfected human embryonic kidney cells (35).
In contrast, a cytosine-rich region in the coding region of the luteinizing hormone/human chorionic gonadotropin receptor mRNA, which is destabilized during hCG-induced down-regulation, was found to form an Mr 50,000 ribonucleoprotein complex with rat ovary cytosolic proteins; complex formation was enhanced in the down-regulated state (37). Finally, phosphoenolpyruvate carboxykinase mRNA is stabilized by dibutyryl-cAMP, CPT-cAMP, or 8-bromo-cAMP in FTO-2B rat hepatoma cells (25, 38). CPT-cAMP also decreases the binding of a 100-kDa cytosolic protein to a region in the 3′-UTR that contains alternating purine:pyrimidine bases, numerous repeats, and palindromes; binding was shown to be sequence-independent, requiring RNA secondary structure for complex formation.
The cis-acting sequences described in this report are unlike those described for other systems in which mRNA stability is regulated by intracellular cAMP levels. First, the sequence that mediates the cA-induced destabilization of PAI-1 mRNA involves a predominantly A-rich region located at the extreme 3′ end of the PAI-1 3′-UTR. This is in contrast to the C-rich region implicated in the cA regulation of luteinizing hormone/human chorionic gonadotropin receptor mRNA (37) and the primarily U-rich region containing AUUUA or nonconsensus AU-rich motifs that has been implicated in the regulation of β2-AR mRNA stability (34-36) and a number of other regulated mRNAs (39-45). Second, RNA-protein complex formation between the cA-responsive PAI-1 sequence (nt 2926-3060) or a truncated sequence containing the A-rich region (nt 3010-3040, 3010-3060, or 3024-3060) is not abolished by heating the RNA at 85°C for 15 min, followed by rapid cooling (Fig. 1, B and C; and data not shown). This is in contrast to the cis-acting sequences involved in cAMP-regulated phosphoenolpyruvate carboxykinase mRNA stability (38); in this system, heating the RNA probe prior to UV cross-linking significantly reduced binding. However, because RNA can rapidly regain secondary structure after being heated, these experiments cannot rule out a role for RNA secondary structure in the binding we observe.
The identity of the mRNA-binding proteins described in this report is unknown. Because the target sequence of most of these proteins is A-rich, the possibility that poly(A)-binding protein is involved must be considered. Poly(A)-binding protein, which is highly conserved across species, has a molecular mass of about 70,000 (46); in contrast, the major protein binding to the PAI-1 3′-UTR has a mass of about 50,000.
Degradation of mature mRNAs is a regulated process that can have a significant, and rapid, impact on gene expression. The half-lives of mRNAs in eukaryotic cells can range from minutes for highly regulated gene products such as protooncogenes, growth factors, and cytokines to many hours for very stable species such as globin (see Refs. 12 and 13 for review). Regulation of mRNA stability, often but not necessarily in conjunction with changes in transcription rates, allows the level of a particular mRNA to be increased rapidly and/or transiently in response to various stimuli (13). In most cases, however, the mechanism(s) by which these stimuli exert their effects are not clear; it is not known whether these stimuli act directly or indirectly to cause an increase or decrease in mRNA decay rates. The studies presented here provide further insight into the mechanism by which cyclic nucleotides regulate PAI-1 mRNA stability in rat HTC hepatoma cells. The minimal sequence that interacts with HTC cytoplasmic proteins was limited to a 30-nt sequence that contains an A-rich region. This same A-rich region, by mutational analysis, was shown to be critical for cA-regulated PAI-1 mRNA destabilization in HTC cells. Studies are currently directed at (i) isolating, identifying, and characterizing the mRNA-binding proteins and the protein components of the multiprotein complexes and (ii) determining how the cytoplasmic proteins interact with the A-rich region to elicit the cA-induced destabilization of PAI-1 mRNA. The ability to link RNA-protein complex formation with the regulation of mRNA stability by cyclic nucleotides in HTC cells provides a valuable system in which to study cis- and trans-acting mediators of regulated mRNA stability. | 6,965 | 1999-01-08T00:00:00.000 | [
"Biology"
] |
The combined processing of geomagnetic intensity vector projections and absolute magnitude measurements
A method for the combined processing of measurements of the projections and the absolute magnitudes of geomagnetic field intensity vectors, based on the mathematical technology of local approximation models, is proposed. The approach realized in this paper, based on the proposed method, provides an increase in the quality of the measurements of the projections of the geomagnetic intensity vector. An algorithm for the two-stage combined processing of the measurements of projections and absolute magnitudes of geomagnetic field intensity vectors is developed. The operation of the combined processing algorithm was tested on model and observatory measurements. The estimates of the combined processing algorithm errors were obtained using statistical modelling. A reduction of the root-mean-square error values was achieved for the estimates of the projections of geomagnetic field intensity vectors.
Introduction
In this article, a method and an algorithm for combined processing of measurements of the components (projections and absolute magnitudes) of geomagnetic field intensity vectors are proposed. The approach realized in the paper, based on the proposed method, provides an increase in the quality of the measurements of the projections of the geomagnetic intensity vector. The considered measurements are carried out by INTERMAGNET observatories equipped with systems of vector and scalar magnetometers; the definitive type data are used, containing systematic errors equal to zero (Mandea and Korte, 2011; INTERMAGNET, 2018). The measurement errors of vector and scalar magnetometers are represented by random, normally distributed errors with zero expectation and a predetermined variance. As usual, the measurement errors of the projections of geomagnetic field intensity vectors are significantly larger than those of the absolute magnitude measurements performed by the mentioned devices. The problem addressed here is the reduction of the noise root-mean-square (RMS) error values in the geomagnetic field intensity projection measurements by means of combined processing of the values of all its components.
In the following research, the following steps are outlined:
1. A method for combined processing of the measurements of projections and absolute magnitudes of geomagnetic field intensity vectors is formulated, based on the formation of sequences of piecewise-constant local models followed by their weighted averaging;
2. A two-stage algorithm for combined processing of the measurements of projections and absolute magnitudes of geomagnetic field intensity vectors is developed;
3. Testing of the algorithm on model and observatory data is performed;
4. The estimates of the algorithm errors, calculated using statistical modelling, are presented; the reduction of the RMS noise errors for the estimates of the geomagnetic field vector projections is proved.
The material in this research paper is intended for specialists (magnetologists) engaged in the digital processing of geomagnetic field measurements. The need to reduce noise errors in estimates of the projections of the geomagnetic field intensity vectors measured by vector magnetometers arises in a number of technical and scientific applications. For instance, technogenic disturbances can affect the geomagnetic observatory hardware, affecting vector and scalar magnetometers differently: as a rule, the noise errors from vector magnetometers occurring due to such interference are greater than the noise errors from scalar magnetometers. The decrease in noise errors from vector magnetometers is necessary, e.g., for the calculation of the gradients of the projections of the geomagnetic field intensity vectors in navigation problems.
Nowadays, the reduction of errors for vector magnetometers (with certain assumptions) is achieved using optimization of the calibration from scalar magnetometers (Merrayo et al., 2000; Olsen et al., 2003) or refinement of calibration characteristics (Soborov et al., 2008) based on special mathematical processing. In the measurement systems considered in this paper, in fact, parallel measurements are performed; a possible algorithm providing the decrease in errors for such measurements can be formed based on Kalman filters (Shakhtarin, 2008). However, due to the peculiarities of this problem, the construction of the resulting non-linear filters is associated with certain problems due to the inaccuracies of linearization and the accepted hypothesis concerning the type of the initial intensity vector function. Other research (Soloviev et al., 2018) considered joint processing of vector and scalar magnetometer measurements aimed at improving the calibration accuracy of the so-called baseline, which only indirectly provides the considered reduction in errors. The combined processing of measurements of projections and absolute magnitudes of geomagnetic field intensity vectors proposed in this paper is largely free from the mentioned problems.
2 A method for combined processing of measurements of projections and absolute magnitudes of geomagnetic field intensity vectors

Let the initial functions for the projections and absolute magnitudes of the geomagnetic field intensity vectors be given; we assume that Y_1(Ti), Y_2(Ti), Y_3(Ti), and Y_0(Ti) are their values registered by vector and scalar magnetometers, i = 0, 1, ..., N_f − 1; the sampling interval T = 1 s; 1 s measurements from INTERMAGNET observatories are analysed in this study. For n = 0, ..., 3, we represent the noise errors of the measurement values W_n(Ti) in the form of uncorrelated, normally distributed random values with zero mathematical expectations and some variances. Such a representation of errors is, to a large extent, valid for cases of large technogenic noises that can occur when geomagnetic measurements are carried out. We consider, with some assumptions, that the spectrum of random components for the functions of the geomagnetic field is concentrated almost entirely in the low-frequency domain, and the spectrum of random measurement errors is concentrated in the high-frequency domain.
We assume that the measurements, the initial functions, and the errors are related by linear additive dependences. Using the specified observation values Y_n(Ti), we require the determination of the estimates Y•_n(Ti), where i = 0, 1, ..., N_f − 1, that would be close to the initial functions of the intensity vector projections. We perform the combined processing for the projections and magnitudes of the geomagnetic field intensity vectors in two stages.
At the first stage, on the main interval with the points i = 0, 1, ..., N_f − 1, we introduce the N-point sliding local intervals with limiting points N_1j, N_2j, the sliding step N_d, and the quantity of sliding intervals m_0 (Eq. 1). To simplify the considerations, we require the relations of multiplicity mN = N_f and a corresponding relation for the sliding-interval parameters. For a sliding interval with number j, we define the model functions of the form Y_M1(c_1j, Ti), Y_M2(c_2j, Ti), and Y_M3(c_3j, Ti); here, c_nj are the vectors of model parameters, n = 1, 2, 3. These model functions can be, in particular, polynomial, piecewise constant, piecewise linear, etc. The size of the local intervals determines the approximation errors: at small N there will be large fluctuation errors, and at large N there will be large systematic approximation errors.
Based on the above-defined measured values, models, and the maximum likelihood method (Kramer, 1975), we define the local functional S(c_j, Y_j), which determines the measure of closeness of local measurements and models, similar to Getmanov (2013), as the sum of four functionals (Eq. 2); here, c_j^T = (c_1j^T, c_2j^T, c_3j^T) is the combined vector of local model parameters. Taking into account the assumption on the measurement errors, we carry out the identification of the optimal estimates of the model parameters c•_j using the solutions of the sequence of optimization problems for the local functionals (Eq. 3). We perform the construction of the sliding local models in an obvious way (Eq. 4), assuming n = 1, 2, 3 and j = 1, ..., m_0. At the second stage, we introduce the unit functions E_j(Ti), equal to zero outside of the local sliding intervals, add them together to get E(Ti), and calculate the sequence of weighting coefficients R(Ti) (Eq. 5). The weighted averaging of the sum of the sliding local model sequence is then performed using Eq. (5) (Eq. 6) (Getmanov et al., 2015).
The method of combined two-step processing of the values of the projections and the absolute magnitudes of geomagnetic field intensity vectors consists of the sequential execution of the first and second stages in accordance with Eqs. (2)-(6).
3 An algorithm for two-stage processing of measurements of projections and absolute magnitudes of geomagnetic field intensity vectors for piecewise-constant models

Let us build the local intervals using Eq. (1) and define the local models on them as piecewise-constant functions Y_Mn(c_nj, Ti) = c_nj, j = 1, ..., m_0, and N_1j ≤ i ≤ N_2j. In this case it is obvious that the initial functions for the geomagnetic field vector projections must be approximately constant on local intervals with duration NT. The local interval can be expanded if piecewise-linear functions are treated as local models. Let us formulate the equation for the local functionals (Eq. 7). We differentiate Eq. (7) with respect to c_nj, equate the derivatives to zero, and obtain, as a result, the necessary conditions for an extremum in the form of a system of three non-linear algebraic equations. In this case, an exact analytical solution to this system is possible. Omitting the calculations, we obtain expressions for the local estimates, j = 1, ..., m_0 and n = 1, 2, 3 (Eq. 8), and form the piecewise-constant functions for the local estimates Y•_Ln(Ti), n = 1, 2, 3, using Eq. (8) according to Eq. (4); let us present them as the realization of the first stage.
Weighted averaging of the sequences of piecewise-constant local estimates and the calculation of the estimate functions Y•_n(Ti) for the second stage are performed using Eqs. (5) and (6).
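To make the two-stage scheme concrete, the following is a minimal Python sketch, not the authors' implementation: it assumes a weighted least-squares form of the local functional (projection residuals weighted by 1/σ², the magnitude residual by 1/σ0²), minimises it numerically on each sliding interval instead of using the exact analytical solution, and then averages the resulting piecewise-constant models, with the coverage counts playing the role of the summed unit functions E(Ti). All numerical values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def local_functional(c, y1, y2, y3, y0, sigma=2.0, sigma0=0.5):
    # Sum of four weighted least-squares terms for one local interval,
    # assuming a piecewise-constant model c = (c1, c2, c3).
    mag = np.sqrt(c[0]**2 + c[1]**2 + c[2]**2)
    proj = np.sum((y1 - c[0])**2) + np.sum((y2 - c[1])**2) + np.sum((y3 - c[2])**2)
    return proj / sigma**2 + np.sum((y0 - mag)**2) / sigma0**2

def combined_two_stage(Y, Y0, N=12, Nd=1):
    """Y: (3, Nf) projection measurements, Y0: (Nf,) absolute magnitudes."""
    Nf = Y.shape[1]
    acc = np.zeros((3, Nf))   # weighted sum of local piecewise-constant models
    cov = np.zeros(Nf)        # number of sliding intervals covering each sample
    for start in range(0, Nf - N + 1, Nd):
        sl = slice(start, start + N)
        c0 = Y[:, sl].mean(axis=1)                      # initial guess: channel means
        res = minimize(local_functional, c0,
                       args=(Y[0, sl], Y[1, sl], Y[2, sl], Y0[sl]))
        acc[:, sl] += res.x[:, None]                    # first stage: local constant models
        cov[sl] += 1.0
    return acc / cov                                    # second stage: weighted averaging

# Synthetic check: slowly varying projections, noisier vector than scalar channel.
rng = np.random.default_rng(0)
t = np.arange(96)
H = np.vstack([20000 + 0.02 * t, -3000 + 0.01 * t, 34000 - 0.015 * t])        # nT, illustrative
Y = H + rng.normal(0.0, 2.0, H.shape)                                          # vector noise
Y0 = np.linalg.norm(H, axis=0) + rng.normal(0.0, 0.5, t.size)                  # scalar noise
est = combined_two_stage(Y, Y0, N=12, Nd=1)
print("RMS error of raw projections:", np.sqrt(np.mean((Y - H)**2)))
print("RMS error after processing :", np.sqrt(np.mean((est - H)**2)))
```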
The case of local intervals without sliding was considered, with the number of points N, N_1j = N(j − 1), and N_2j = N_1j + N − 1, where j = 1, ..., m and mN = N_f. The local estimates H•_nj = c•_nj, where j = 1, ..., m, were calculated, and the sequences of piecewise-constant estimates H•_Ln(Ti), n = 1, 2, 3, corresponding to the first processing stage, were formed.
For the case of sliding local intervals with the number of points N, the sliding step N_d was selected, as well as the number of sliding local intervals m_0. The local estimates H•_nj = c•_nj, j = 1, ..., m_0, and the estimates of the functions H•_nj(Ti), where n = 1, 2, 3 and i = 0, 1, ..., N_f − 1, corresponding to the first processing stage, were calculated. Next, the second stage was performed, where the estimates H•_Sn(Ti) were found. For testing, the following values were selected, among them σ_0 = 0.5 nT. In Fig. 1, the calculation results are displayed for H_MG1 (Fig. 1a), H_MG2 (Fig. 1b), and H_MG3 (Fig. 1c); dashed lines with index 1 depict the initial functions for the projections of the geomagnetic field intensity vector H_MGn(Ti); lines with index 2 represent the noised measurements of the projections of the geomagnetic field intensity H_Mn(Ti); piecewise-constant lines with index 3 represent the results of the first stage H•_Ln(Ti); solid lines with index 4 show the estimates H•_Sn(Ti), the second-stage results with weighted averaging.
The performed testing of the processing algorithm on model data for a number of parameters led to the conclusion that the second stage of processing reduces the RMS of the errors of the first stage by 60 %-80 % on average.
Testing on real observatory geomagnetic data
The developed algorithm was tested using combined processing of 1 s sampled geomagnetic measurements from the INTERMAGNET observatory MBO (Mbour, Senegal). The measurements were recorded on 2 January 2014, beginning at 01:16:37 UT, and the length of the test fragment was 96 s (N_f = 96). For the processing algorithm, N = 12 and the sliding step N_d = 1 were assigned.
In Fig. 2 the test results are shown for H_1 (Fig. 2a), H_2 (Fig. 2b), and H_3 (Fig. 2c). Dashed lines with index 1 depict the observatory measurements of the geomagnetic vector projections H_n(Ti); piecewise-constant lines with index 2 are related to the first processing stage, the functions of the piecewise-constant estimates H•_Ln(Ti) without sliding; index 3 stands for the line corresponding to the result of the second processing stage, the estimate with weighted averaging H•_Sn(Ti) with sliding. The results of testing the algorithm for combined processing of measurements of projections and absolute magnitudes of geomagnetic field intensity, displayed in Figs. 1 and 2, prove its satisfactory performance.
Error estimation for the algorithm for combined processing of measurements
The estimates of errors of the proposed algorithm for combined processing were found using statistical modelling. The first stage of combined processing was analysed.
For all possible values of the indices k and m, realizations of sequences of model normally distributed random numbers with zero mathematical expectation, W_ns(k, m, Ti), were formed, where i = 0, 1, ..., N − 1, n = 0, 1, 2, 3, and s = 1, ..., S_0 is the number of realizations for statistical modelling. For n = 0, the variance determining the noise error level for a scalar magnetometer assumed the value σ_0²; for n = 1, 2, 3, the noise error variances for a vector magnetometer assumed the value σ². For H_n(k, m, H_0), W_ns(k, m, Ti), and S_0, random realizations were constructed. The results of the algorithm implementation for the first stage, the estimates H•_ns(k, m, H_0, N, σ, σ_0), n = 1, 2, 3, k = 0, 1, ..., K_0 − 1, m = 0, 1, ..., M_0 − 1, s = 1, ..., S_0, were calculated depending on the parameters H_0, N, σ, σ_0. The error of the processing algorithm ε_n²(k, m, H_0, N, σ, σ_0) was found using averaging over the number of realizations for fixed n, k, and m (Eq. 10). The error described by Eq. (10) was averaged over the geomagnetic vector projections n = 1, 2, 3 and then over different k and m; the final formula for estimating the error is given by Eq. (11). The results of the combined processing algorithm for the first stage were compared with the results of the operation of a possible linear filtering algorithm that was applied separately to the recordings of the vector magnetometer channels; the linear filtering algorithm in this case was represented by standard equations. The error estimate ε_1f²(H_0, N, σ) for the linear filtering algorithm was calculated similarly to Eqs. (10) and (11).
The efficiency of the proposed algorithm for combined processing of measurements was estimated by introducing a relative decrease factor for the RMS error values, ρ(H_0, N, σ, σ_0). Analysis of the graphs shows that, for a fixed value of σ, the introduced factor ρ decreases when σ_0 decreases, which is physically understandable. It is also seen that this factor tends to limiting values with increasing σ. For a fixed H_0, an increase in N leads to a decrease in the factor ρ. The performed statistical modelling for a wide range of parameters shows that the estimates for ρ are about 0.15-0.3, which indicates the efficiency of the proposed combined processing.
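The statistical-modelling estimate of the RMS decrease can be reproduced in spirit with a short Monte-Carlo sketch. The definition of ρ below (first-stage RMS error divided by the raw vector-channel noise RMS) and the simplified first-stage surrogate (channel means rescaled to the mean scalar magnitude instead of the exact solution of the local optimization problem) are assumptions of this illustration, not formulas taken from the paper.

```python
import numpy as np

def rho_estimate(H=(20000.0, -3000.0, 34000.0), N=12, sigma=2.0, sigma0=0.5, S0=2000, seed=1):
    """Monte-Carlo estimate of a relative RMS decrease factor on one local interval."""
    rng = np.random.default_rng(seed)
    H = np.asarray(H, dtype=float)
    H0 = np.linalg.norm(H)
    err2 = 0.0
    for _ in range(S0):
        Y = H[:, None] + rng.normal(0.0, sigma, (3, N))   # vector magnetometer samples
        Y0 = H0 + rng.normal(0.0, sigma0, N)              # scalar magnetometer samples
        c = Y.mean(axis=1)                                # channel means ...
        c *= Y0.mean() / np.linalg.norm(c)                # ... rescaled to the mean scalar magnitude
        err2 += np.mean((c - H)**2)
    rms_first_stage = np.sqrt(err2 / S0)
    return rms_first_stage / sigma                        # compared with the raw noise RMS

print(rho_estimate())   # roughly 0.24 for these illustrative parameters
```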
Conclusions
The proposed method for combined processing of the measurements of projections and absolute magnitudes of geomagnetic field intensity vectors and the corresponding two-stage algorithm developed appear to be satisfactorily workable. The testing of the developed combined processing algorithm on model and observatory measurement data proved its efficiency. The approach realized in this paper, based on the proposed method, provides an increase in the quality of the measurements of the projections of the geomagnetic intensity vector, allowing us to eliminate possible unwanted disturbances of artificial (anthropogenic) origin.
Statistical modelling for the developed algorithm for combined processing of measurements shows that at the first stage the relative decrease factor of RMS errors can reach values of 0.15-0.3, and at the second stage the decrease in the first-stage RMS errors can reach approximately 60 %-80 %.
Further reduction of the RMS noise errors can be implemented based on combined processing using local piecewise-linear models for the values of projections and absolute magnitudes of geomagnetic field intensity vectors. The proposed combined processing algorithm can be implemented for many practically important tasks, in particular, when optimizing the operation of three-component accelerometer systems, three-component angular velocity sensors, or other three-component data arranged in a similar way. The technique allows for processing of data with sampling intervals smaller than 1 s.
Code and data availability. The observatory geomagnetic data used for the tests in our study are available at the INTERMAGNET website (http://www.intermagnet.org; INTERMAGNET, 2018) as magnetograms or as digital data files.
Author contributions. VGG is the author and the main developer of the method presented in this research. Also, the basic work on pro-
Figure 1. Results of testing the processing algorithm on model measurement data: 1 - initial model functions, 2 - noised model functions, 3 - first-stage result (piecewise-constant model estimates without sliding), and 4 - second-stage result (estimates with weighted averaging).
Figure 2. Results of testing the processing algorithm on observatory measurement data: 1 - initial data, 2 - first-stage result (piecewise-constant model estimates without sliding), and 3 - second-stage result (estimates with weighted averaging with sliding). | 4,235 | 2018-11-06T00:00:00.000 | [
"Physics"
] |
Improved Method for Electron Powder Diffraction-Based Rietveld Analysis of Nanomaterials
Multiphase nanomaterials are of increasing importance in materials science. Providing reliable and statistically meaningful information on their average nanostructure is essential for synthesis control and applications. In this paper, we propose a novel procedure that simplifies the electron powder diffraction-based Rietveld analysis of nanomaterials and makes it more effective. Our single-step in-TEM method allows the instrumental broadening function of the TEM to be obtained directly from a single measurement, without the need for an additional X-ray diffraction measurement. Using a multilayer graphene calibration standard and applying properly controlled acquisition conditions on a spherical aberration-corrected microscope, we achieved an instrumental broadening of ±0.01 Å in terms of interplanar spacing. The shape of the diffraction peaks is modeled as a function of the scattering angle using the Caglioti relation, and the obtained parameters for instrumental broadening can be directly applied in the Rietveld analysis of electron diffraction data of the analyzed specimen. During peak shape analysis, the instrumental broadening parameters of the TEM are controlled separately from nanostructure-related peak broadening effects, which contributes to the higher reliability of the nanostructure information extracted from electron diffraction patterns. The potential of the proposed procedure is demonstrated through the Rietveld analysis of hematite nanopowder and two-component Cu-Ni nanocrystalline thin film specimens.
Introduction
Nanostructured materials are present in our everyday life, both of anthropogenic origin and in their natural form. Nanoparticles have a wide range of technological applications, e.g., as catalytic agents, optoelectronic devices, or active ingredients in pharmaceuticals [1]. Bulk nanomaterials comprise multicomponent melt-spun metallic alloys, inherently nanostructured thin films, or nanocomposites. As the industrial production and application of nanomaterials increase, an increasing amount is released into the environment, which, accumulating in natural water bodies or in soil, may trigger harmful effects on human health [2]. The performance of nanomaterials, including their desired application and unintentional or adverse effects, is controlled by their atomic structure, particle size, and phase composition.
Electron diffraction is an ideal tool for the characterization of nanomaterials. In comparison with other diffraction techniques like X-ray or synchrotron diffraction, the main advantages of electron diffraction are its higher efficiency when detecting light elements and its locality. Selected area electron diffraction (SAED) measurements can be performed on nanocrystalline thin film areas even below 100 nm in diameter or using extremely small quantities of nanopowder (electron powder diffraction, EPD) and provide sufficient signal and an excellent signal-to-noise ratio for quantitative analysis on the timescale of seconds. The reproducibility and accuracy of electron diffraction can be comparable with those of routine laboratory X-ray diffraction (XRD) measurements, provided that acquisition parameters are properly controlled and the resulting SAED patterns are calibrated [3]. Additionally, EPD or SAED is complementary to other local transmission electron microscopy (TEM)-based characterization techniques like high-resolution transmission electron microscopy (HRTEM) or nanobeam diffraction because it provides nanostructure information of statistical relevance.
The quantitative evaluation of powder diffraction patterns relies on the accurate measurement of diffraction peak intensities. In the case of complex structures or multicomponent materials, peaks frequently overlap, which makes the extraction of intensity values uncertain. Whole pattern fitting methods aim to overcome this problem by using all the diffraction information in the measured scattering angle range. Rietveld analysis [4] is an established method for the whole pattern fitting of neutron and X-ray powder diffraction patterns, which provides quantitative data on phase composition, crystal structure, crystallite size, shape, preferred orientation, etc. (a comprehensive list of software toolkits performing Rietveld analysis can be found on the IUCr website https://www.iucr.org/resources/other-directories/software, accessed on 26 February 2024). Microstructure information is included in the diffraction line profile, i.e., the shape and broadening of the diffraction peaks; however, peak broadening also contains an instrumental contribution, which is related to the applied experimental setup. Thus, to obtain microstructure information from Rietveld analysis, it is of fundamental importance to properly separate the scattering angle-dependent sample and instrumental contributions to peak broadening, f(Q) and g(Q), respectively.
The Rietveld method is used less often for the evaluation of electron diffraction patterns obtained from nanomaterials (SAED or EPD), mostly because of the potentially high contribution of multiply scattered electrons [5] and also because of the large influence of the setting of the electron optics on the variation in detected intensity. The effect of multiple scattering on diffraction peak intensities as a function of the average atomic number and crystal thickness has been analyzed in the published literature in detail [5]. In nanocrystalline materials, in general, it is much weaker than in single crystal samples [6]. In diffraction measurements performed on small crystallites at high enough electron energies, the kinematical scattering of electrons dominates [7]. Also, in the case of lower symmetry crystals, the probability of multiple scattering is reduced [8]. The predominance of kinematical scattering in the case of anatase TiO2 nanoparticles with an average size of 7 nm at 120 keV was proven by calculations [9]. For MnFe2O4 nanoparticles, experimental structure factor amplitudes obtained from the Rietveld refinement of SAED patterns were compared with calculated kinematical and dynamical amplitudes, and it was found that at about 7 nm crystallite thickness at 120 keV kinematical conditions were expected [10]. In the case of Au3Fe1−x alloy nanoparticles of 9.5 nm thickness, kinematical intensities at 200 keV were obtained by applying an approximately 2.5% correction [11] based on the Blackman two-beam approximation theory. These results indicate that the dominance of the kinematical scattering of 200 keV electrons is a reasonable presumption when the crystallite size of nanoparticles is close to 10 nm and no significant overlapping of particles occurs.
The other issue limiting the applicability of the whole pattern fitting of ring-like electron diffraction patterns is related to instrumental broadening. Early works on electron diffraction-based Rietveld analysis reported full width at half maximum (FWHM) values extracted from the analysis but did not demonstrate the difference between the instrumental and sample contributions to diffraction peak broadening [7,12]. In Ref. [7], the authors highlight that diffraction profiles obtained from unfiltered electron diffraction patterns exhibit 10-20% larger FWHM values than the corresponding energy-filtered patterns. They also point out that the Rietveld method provides accurate lattice parameters and yields correct refined atomic coordinates even from non-energy-filtered data. However, in their subsequent work using energy-filtered electron powder diffraction data [9], no attempt was made to assess the crystallite size from diffraction peak broadening. In general, publications focus on the determination of the crystal structure [10,13], lattice parameter [11,12], occupancy change as a function of temperature [11], or texture analysis [14]. These publications reported the average particle size obtained using alternative techniques like bright- or dark-field as well as HRTEM imaging, and in these works, no attention was paid to the nanostructural information hidden in peak broadening. However, Ref. [13] considers peak broadening and applies FWHM values to quantify the data quality difference between electron diffraction patterns recorded using different techniques like conventional, precession, and theta-scan precession electron diffraction.
Boullay et al. [15] extended the quantitative analysis of electron diffraction patterns to nanostructural properties for the first time. They studied rutile and hausmannite nanoparticles and extracted the average size and shape data of the crystallites from ring-like electron diffraction patterns. To perform this, they established a two-step calibration procedure for modeling the instrumental component of peak shape broadening in the TEM. This procedure later allowed the successful separation of the scattering contributions of ZnO and ZnS in a two-phase nanocrystalline powder using electron diffraction data [16]. The methodology of the two-step calibration procedure is detailed, and the effect of different TEM operation conditions is also discussed, in a recent publication [17].
The procedure proposed for the determination of the instrumental broadening of the TEM in [15] is the following. As an initial step, the instrumental broadening of an X-ray diffractometer as a function of the scattering vector Q (Q = 4π sin θ/λ), g_XR(Q), has to be determined using a sample with a high degree of crystallinity as the calibration standard. Then, a nanocrystalline calibration standard suitable for both XRD and electron diffraction is selected, preferably a stable oxide nanopowder of uniform and isometric size in the range of 10-20 nm. Using this nanopowder calibration standard, the sample contribution f_XR(Q) to the measured profile h_XR(Q) is obtained as a convolution of g_XR(Q) and f_XR(Q) according to the following relation:

h_XR(Q) = g_XR(Q) ⊗ f_XR(Q) + b_XR(Q), (1)

where b_XR(Q) is the background function. Then, the same sample is measured by EPD.
As the sample contribution f_XR(Q) of the nanopowder calibration sample is known, it is used as input data during the Rietveld analysis of the EPD pattern. This step allows g_TEM(Q), the instrumental broadening function of the TEM, to be determined according to the following relation:

h_TEM(Q) = g_TEM(Q) ⊗ f_XR(Q) + b_TEM(Q), (2)

where h_TEM(Q) is the intensity profile obtained from the ring-like electron diffraction pattern by summing up intensities at the same angular distance from the direct beam and b_TEM(Q) is the background function of the electron diffraction pattern. According to this procedure, the determination of the instrumental broadening of a TEM requires three independent measurements and two calibration samples. As an alternative approach, in this work, we present a single-step in-TEM procedure, which allows the instrumental broadening function of the TEM to be obtained using a multilayer graphene calibration sample in a single SAED measurement. This procedure is based on our previous work on SAED calibration [3], exploiting its reproducibility, the achievable ±0.1% absolute accuracy of SAED measurements, and the minimization of instrumental broadening. The potential of this procedure is demonstrated on nanopowder and nanocrystalline thin film specimens.
Materials
To determine instrumental broadening, electron diffraction measurements were carried out on Pelco ® graphene TEM support films (Ted Pella, Redding, CA, USA product # 21740) suspended on a lacey carbon-coated copper grid with a mesh size of 300.SAED patterns were recorded from several positions inside the same grid, and the pattern providing the most continuous and pure diffraction rings was selected for further analysis.
Nanocrystalline hematite was synthesized for plant nutrition experiments as detailed for sample S0 in Ref. [18]. The nanoparticle concentration of the resulting suspension was increased by a factor of four by evaporation in a vacuum. This gentle method prevents particle aggregation and preserves the stability of the colloid sample. For TEM analysis, a drop of the suspension was deposited onto a lacey carbon-supported ultrathin carbon-film-covered Cu grid (Ted Pella, Redding, CA, USA). Crystallite size distribution was determined by analyzing TEM images using ImageJ 1.53k software.
Cu-Ni thin film samples were deposited in a high vacuum system by direct current magnetron sputtering onto an ultrathin carbon-film-coated lacey carbon film supported by a 400 mesh copper grid (Ted Pella, Redding, CA, USA).The nominal thickness of the ultrathin amorphous carbon (a-C) film was 3 nm.The copper and nickel thin films were prepared by DC magnetron sputtering in an ultra-high vacuum (UHV) compatible vacuum chamber (with a base pressure of 6 × 10 −6 Pa) in 0.3 Pa Ar with a 3 Å/s deposition rate.The power and the deposition time were selected so that the thickness of the deposited films equaled ca. 15 nm each, resulting in an overall Cu-Ni film thickness of 30 nm.To avoid epitaxial growth and strain-related lattice parameter variation at the interface, Cu and Ni were deposited onto opposite sides of the a-C foil (Figure 1).This arrangement implies that due to the shadowing effect of the grid bars during Cu deposition, the film thickness and, thus, overall composition were not uniform inside the mesh.To overcome such local compositional changes, EDS analysis and corresponding SAED patterns were taken from the same area.
Electron Diffraction
Electron diffraction measurements were carried out using a Themis (Thermo Fisher Scientific, Waltham, MA, USA) TEM with Cs correction in the imaging system (spatial resolution in HRTEM mode 0.8 Å), operating at 200 kV, and equipped with a Schottky field emission gun (FEG) with an energy spread of ca. 0.7 eV and a four-segment Super-X EDS detector. SAED patterns were taken from an area of ca. 3 µm in diameter and recorded using a Ceta camera and Velox software (https://veloxusa.com/about-us, accessed on 19 February 2024) (Thermo Fisher Scientific, Waltham, MA, USA). SAED patterns were recorded using the highest (4 k × 4 k) camera resolution with special care taken to avoid saturation and to keep the intensity in the linear range of the detector. SAED patterns were taken as detailed in [3]. Measurements were carried out at a 650 mm nominal camera length, which provided scattered intensity over a large enough angular range for Rietveld refinement. The camera length was calibrated using a 30 nm thick DC-sputtered nanocrystalline Cu thin film. Care was taken to ensure standardized illumination conditions by controlling the Wehnelt bias (GunLens parameter in the Themis microscope (Thermo Fisher Scientific, Waltham, MA, USA)), spot size, and C2 (second condenser lens) current during diffraction measurements. In this way, beam convergence was controlled, which is a prerequisite for the reproducible determination of the instrumental component of peak broadening at a given combination of lens currents. To avoid unexpected hysteresis effects, switching between imaging and diffraction modes was always performed at the same (45kx) magnification. After setting up the desired illumination conditions, the focusing of the diffraction pattern was performed by adjusting the diffraction lens current. Overall control of standard experimental conditions during subsequent measurements is ensured by keeping the lens currents constant to better than 3 × 10−4.
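For completeness, the geometric relation behind the camera-length calibration is simple enough to state as a one-line helper; the numbers below are placeholders, not calibrated values.

```python
wavelength = 0.02508      # Angstrom, relativistic electron wavelength at 200 kV
camera_length_mm = 650.0  # nominal value; in practice the calibrated camera constant is used

def d_spacing(radius_mm):
    """Small-angle SAED geometry: r * d = lambda * L, so d = lambda * L / r (d in Angstrom)."""
    return wavelength * camera_length_mm / radius_mm

print(d_spacing(7.65))    # placeholder ring radius in mm
```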
Diffraction Data Pre-Processing and Rietveld Analysis
The intensity profiles of the SAED patterns were obtained by integrating intensities at the same angular distance from the direct beam using the Process Diffraction v12.0.8 software [19]. The center (x, y) and the elliptical distortion (orientation α and measure of ellipticity ε) were first adjusted manually and then refined using the automated algorithm implemented in Process Diffraction for faint diffuse diffraction peaks [20]. In this evaluation approach, microscope lens distortions were parametrized as elliptical distortion, and higher-order distortions were neglected (note that the treatment of higher-order distortions is not implemented in currently available electron diffraction processing software). The FWHM values of the diffraction peaks were determined by fitting the pseudo-Voigt function [5] using Origin 2023b (10.05) software. Rietveld analysis was carried out using MAUD 2.99 software [21,22] with atomic scattering factors for electrons [23]. Although two-dimensional diffraction data handling is implemented in MAUD [24], we chose to use integrated intensity profiles as input data. In this way, the effects of microscope alignment and data pre-processing, i.e., camera length, center, and ellipticity, were handled separately from sample features.
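As an illustration of the integration step (the actual evaluation was performed with Process Diffraction), a hedged numpy sketch of azimuthal integration around a known pattern centre is given below; the elliptical-distortion correction and the centre-refinement algorithms of Ref. [20] are not reproduced here.

```python
import numpy as np

def radial_profile(image, center, n_bins=1000, r_max=None):
    """image: 2D array of counts; center: (x, y) in pixels; returns (radius, mean intensity)."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - center[0], yy - center[1])          # radial distance of every pixel
    if r_max is None:
        r_max = r.max()
    bins = np.linspace(0.0, r_max, n_bins + 1)
    idx = np.digitize(r.ravel(), bins) - 1                 # bin index of every pixel
    valid = (idx >= 0) & (idx < n_bins)
    sums = np.bincount(idx[valid], weights=image.ravel()[valid], minlength=n_bins)
    counts = np.bincount(idx[valid], minlength=n_bins)
    profile = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
    centers = 0.5 * (bins[:-1] + bins[1:])
    return centers, profile
```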
Experimental Determination of Instrumental Broadening of TEM
The broadening of a diffraction peak results from the convolution of broadening caused by beam properties, lens aberrations, and sample properties. As the peak shape contains microstructural information, to obtain quantitative data on crystallite size, anisotropic shape, or strain, the separation of the instrumental and sample contributions to the peak shape is essential. In XRD, this procedure is performed using a reference sample (e.g., LaB6, Al2O3, or silicon) that has a well-known and defect-free crystal structure and a large enough crystallite size. Following the same concept as in XRD, for the single-step determination of the instrumental broadening of the TEM, g_TEM(Q), such a reference material is needed, which lacks sample-related broadening effects. Because of the strong interaction of electrons with matter, the thickness of the reference sample should not exceed the elastic mean free path of electrons, to avoid multiple scattering. Moreover, the ideal reference sample has a smooth surface and uniform thickness and covers the TEM grid evenly. In this case, the two-step procedure for obtaining g_TEM(Q) [15] can be simplified to the following:

g_TEM(Q) ⊗ f_TEM(Q) + b_TEM(Q) = h_TEM(Q), (3)

where f_TEM(Q) is the sample contribution to the measured peak profile h_TEM(Q) obtained by electron diffraction, and b_TEM(Q) is the background of electron scattering. In our approach, graphene is considered as an ideal, defect-free two-dimensional crystal with lateral dimensions on the micrometer scale. Indeed, during the SAED measurement, the crystallite size of graphene is limited by the applied SA aperture, which, in our measurements, is ca. 3 µm and allows the crystallite size-related peak broadening to be removed. Due to the especially thin nature of graphene, diffraction spots are strongly elongated parallel to the incident beam, which allows the diffracted intensity to be measured at high scattering angles as well. In the case of a few-layer-thick graphene sample, like Pelco® graphene 3-5 by Ted Pella (in the following denominated briefly P-graphene), individual graphene layers are rotated above each other by a random angle, resulting in an approximately even intensity distribution along the hk diffraction rings. As the thickness of five layers of graphene still does not exceed 2 nm and the mean free path of electrons at 200 keV in graphite is ca. 110 nm [25], no multiple scattering is expected. Thus, the size and strain contributions to the diffraction peak shape, as well as dynamical effects, can be neglected; so, P-graphene can be used as a calibration sample for instrumental broadening. The peak shape of the integrated intensity profile of the few-layer-thick graphene sample can therefore be considered as the instrumental broadening caused by the applied combination of instrumental parameters like TEM lens currents and electron gun settings (acceleration voltage, Wehnelt bias (GunLens parameter in Themis)).
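The logic of relations (1)-(3) can be illustrated with a small numpy sketch: the measured profile is the convolution of the instrumental function with the sample contribution plus a background, and for a reference with negligible sample broadening the background-subtracted profile directly returns the instrumental function. All widths and shapes below are invented for illustration.

```python
import numpy as np

Q = np.linspace(-0.5, 0.5, 2001)          # 1/Angstrom, centred on one reflection
dQ = Q[1] - Q[0]

def gaussian(q, w):
    return np.exp(-0.5 * (q / w)**2) / (w * np.sqrt(2 * np.pi))

g = gaussian(Q, 0.004)                                   # hypothetical instrumental broadening
f_nano = gaussian(Q, 0.02)                               # broad sample profile of a nanocrystal
f_ref = np.zeros_like(Q); f_ref[len(Q) // 2] = 1.0 / dQ  # near-delta reference (graphene-like)
b = 0.05 * np.ones_like(Q)                               # flat background

h_nano = np.convolve(g, f_nano, mode="same") * dQ + b    # what a nanopowder would give
h_ref = np.convolve(g, f_ref, mode="same") * dQ + b      # what the sharp reference gives

# For the reference, h - b reproduces g (up to floating-point error):
print(np.max(np.abs((h_ref - b) - g)))
```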
Figure 2a shows the SAED pattern taken from P-graphene, which was used as a reference sample for instrumental broadening determination. Three complete diffraction rings, namely the 10, 11, and 20 rings at 2.13, 1.23, and 1.06 Å, respectively, were recorded at the applied camera length. The FWHM values determined by the pseudo-Voigt fitting of the diffraction peaks were around 0.003°, which accounted for a broadening of ±0.01 Å with respect to the peak maximum in terms of interplanar spacing. A similar value was provided by Ref. [26] for the broadening of single crystal diffraction of a few layers of graphene.
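A minimal fitting sketch of the kind used to extract the FWHM values is shown below. The pseudo-Voigt is taken in its common form, a linear combination of a Gaussian and a Lorentzian sharing one FWHM; the synthetic peak parameters are invented, and the fitting in this work was actually performed in Origin.

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_voigt(x, amp, x0, fwhm, eta, bg):
    gamma = fwhm / 2.0                                    # Lorentzian half width
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))     # Gaussian standard deviation
    lorentz = gamma**2 / ((x - x0)**2 + gamma**2)
    gauss = np.exp(-0.5 * ((x - x0) / sigma)**2)
    return amp * (eta * lorentz + (1.0 - eta) * gauss) + bg

def fit_peak(q, intensity, q0_guess, fwhm_guess):
    p0 = [intensity.max() - intensity.min(), q0_guess, fwhm_guess, 0.5, intensity.min()]
    popt, _ = curve_fit(pseudo_voigt, q, intensity, p0=p0)
    return {"center": popt[1], "fwhm": abs(popt[2]), "eta": popt[3]}

# Synthetic demonstration with an invented peak:
q = np.linspace(2.8, 3.1, 400)
y = pseudo_voigt(q, 100.0, 2.95, 0.03, 0.4, 5.0) + np.random.default_rng(0).normal(0, 1.0, q.size)
print(fit_peak(q, y, q0_guess=2.95, fwhm_guess=0.05))
```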
To model the FWHM as a function of the scattering angle θ, we used the Caglioti relation [27], FWHM² = U tan²θ + V tanθ + W (4).
From the electron diffraction measurement of P-graphene (Figure 2a), three FWHM values were obtained (Table 1), which allowed the determination of the U, V, and W Caglioti parameters (Figure 2b). From the inset of Figure 2a, it can be seen that the diffraction peaks exhibited a slight asymmetric broadening towards higher scattering angles. Asymmetry can be quantified according to Equation (5), where Q is the scattering vector length at the peak maximum, and Q1 and Q2 are the corresponding Q values at half maximum on the lower and higher wings of the peak, respectively. Using Equation (5), asymmetry values of the order of 10−4 were determined.
The asymmetric broadening of diffraction peaks is known from the XRD of random layer materials like smectites [28] or carbon black [29]. Warren [29] explains asymmetric broadening with the diffraction pattern of a two-dimensional lattice taking all orientations in space with equal probability. Due to the very thin nature of the individual layers and their three-dimensional randomness, the hk peak position varies as a function of the tilt angle of the layer with respect to the incident beam. The maximum value of the hk peak is obtained if the layer is normal to the beam, and the overall displacement of the maximum, ∆(sinθ), according to [29], is expressed as a function of the L (in-layer) dimension of the diffracting particle as ∆(sinθ) = 0.16 λ/L (6). In contrast to carbon black, P-graphene is a truly two-dimensional material without three-dimensional randomness. Monolayer graphene is commonly slightly corrugated in the TEM, which can lead to some in-plane tilt between adjacent regions inside the 3 µm diffracted area. The lateral dimension of the ripples (L′) can vary from the Ångström scale for single-layer graphene [30] up to several nanometres for bi-layer graphene [26]. Using the estimate of L′ ≤ 25 nm by Meyer et al. [26], Equation (6) provides an upper limit of ∆(sinθ) ≈ 1.6 × 10−5 for the displacement of the peak maximum. As P-graphene is multilayer (3-5 layers), the amplitude of the ripples is expected to be decreased by interlayer forces. According to [31], the second derivative of the bending energy density, which characterizes the resistance/toughness of multilayer graphene against bending, is three orders of magnitude larger for 4-5 layers than that for monolayer graphene. As the experimental peaks broaden with an increasing tilt angle and the overall effect of undulations quickly diminishes with the number of layers [26], we considered that the effect of ripples on broadening and on the displacement of sinθ was, in our case, negligible.
In the Caglioti model, the FWHM values of the diffraction peaks are expressed in the double scattering angle (2θ). According to Equation (4), FWHM² was plotted against tanθ, and the U, V, and W parameters were obtained (Figure 2b). The error of FWHM² due to peak asymmetry is on the order of 10−8, so it can be neglected.
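A short sketch of the Caglioti fit: with three measured reflections, FWHM² (in 2θ) is fitted as U·tan²θ + V·tanθ + W by linear least squares. The FWHM values below are placeholders, not the Table 1 data; only the d-spacings of the graphene 10, 11, and 20 rings are taken from the text.

```python
import numpy as np

wavelength = 0.02508                                     # Angstrom, 200 kV electrons
d = np.array([2.13, 1.23, 1.06])                         # graphene 10, 11, 20 spacings (Angstrom)
theta = np.arcsin(wavelength / (2.0 * d))                # Bragg angles (rad)
fwhm_2theta = np.radians(np.array([0.0030, 0.0032, 0.0035]))   # hypothetical FWHM in 2-theta

# Least-squares fit in the variable x = tan(theta): FWHM^2 = U x^2 + V x + W
U, V, W = np.polyfit(np.tan(theta), fwhm_2theta**2, 2)
print(U, V, W)

def caglioti_fwhm(theta_rad):
    """FWHM (rad, in 2-theta) predicted by the fitted Caglioti parameters."""
    x = np.tan(theta_rad)
    return np.sqrt(U * x**2 + V * x + W)
```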
The goodness of the Caglioti parameters was checked by plotting the experimental integrated intensity profile of P-graphene together with the calculated intensity profile of graphite in Figure 2c, using the U, V, and W parameters from Figure 2b. To estimate the Gaussianity, an approximate value of G0 = 0.6 was used (Table 1). The good agreement between the experimental peak profile and the peak profile reproduced using the U, V, W, and G0 parameters (Figure 2c) proved that the obtained peak parameters allow the instrumental broadening caused by the applied measurement setup to be modeled satisfactorily.
Rietveld Analysis of Nanocrystalline Hematite-Refining G 0
We applied the obtained peak shape parameters to the Rietveld analysis of a monophase homogeneous hematite nanopowder, previously studied with Mössbauer spectroscopy and HRTEM [18]. The aim was to refine the Gaussianity of the instrumental peak profile. Bright-field (BF) and dark-field (DF) images of the hematite nanopowder are presented in Figure 3a,b. Crystallite size distribution (Figure 3f) was obtained from the dark-field image (Figure 3b,c), which resulted in an average crystallite size of 16.2 nm with a standard deviation of 8.4 nm. The SAED pattern (Figure 3d) was obtained from an area evenly covered by nanoparticles, as indicated in Figure 3e. The absence of texture was checked by ca. ±17° tilting of the sample holder about the goniometer axis. Camera length calibration and data pre-processing were performed as detailed previously, with the integrated intensity profile used as input data for the Rietveld analysis. As data pre-processing included center and ellipticity refinement, instrument parameters such as detector distance (corresponding to the camera length in electron diffraction), center displacement (corresponding to x, y), and tilting error (corresponding to the ellipticity parameters α and ε) were excluded from the Rietveld analysis by keeping their values at zero. The background was determined on the intensity profile by interpolation. We used the structure parameters of hematite [32] as initial structure parameters, and instrumental broadening was modeled using the U, V, W, and G0 parameters obtained from the SAED pattern of P-graphene.
In the initial stage of the analysis, an approximate value of 0.6 for G0 was used based on Table 1, and the angle dependence of the Gaussianity (G1) was kept at zero. The scale factor was adjusted, and the refinement of the basic phase parameters and microstructure parameters was performed while the peak shape parameters were kept fixed at the previously determined values. In this stage of refinement, the automated analysis option ("Refinement Wizard") of the software was used. The refinement cycle was stable and converged fast, resulting in an average crystallite size of 14.47 nm. In the last step of the automated analysis, the G0 and G1 parameters were also included in the refinement. The peak shape parameters used in the final refinement cycle are listed in Table 2. It is important to note that this step did not produce any significant change in lattice parameters, microstructure parameters, or R-values (Table S1). The experimental and calculated diffraction profiles and the difference curve are displayed in Figure 3g, and the detailed results of the analysis are summarized in Table S1.
Rietveld Analysis of Cu-Ni Thin Films-Crystallite Size and Phase Ratio from Overlapping Rings
Two-component nanocrystalline Cu-Ni thin films were deposited at RT and 150 °C and measured by SAED. Integrated intensity profiles were analyzed using the Rietveld method. Cu and Ni are isostructural (Fm-3m) with a0 lattice parameters of 3.61496 Å (AMCSD-0011145) and 3.52387 Å (AMCSD-0011153), respectively (a 2.5% difference). Our purpose was to extract the phase fraction and crystallite size of the two phases from SAED using the experimentally determined peak shape parameters. Control measurements of the phase fraction were performed by TEM-EDS exactly at the position of the SAED measurement. The crystallite size difference between the two phases was recognized by visual observation of the SAED patterns.
Copper tends to oxidize fast; thus, the formation of an amorphous surface oxide layer cannot be avoided, as indicated by the presence of the broad rings marked by arrows in Figures 4f and 5f. These rings are well separated from the metal components, so they could easily be excluded from the analysis. The diffraction rings of Cu and Ni strongly overlap as they are broadened due to the few-nanometer crystallite size (Figures 4 and 5). Careful observation of the SAED patterns (Figures 4a and 5a) reveals that the inner arc of each ring, corresponding to larger interplanar spacing values, is sharper and exhibits a slightly spotty character, while the outer arc is more diffuse, suggesting that the crystallite size of copper is larger than that of nickel. The separation of the Cu and Ni rings is more pronounced in the case of the 150 °C sample (Figure 5a) due to the formation of larger grain sizes at higher temperatures. Crystallite size distribution was determined by processing DF images (Figures 4c and 5c), and average crystallite sizes of 5.1 nm and 8.2 nm were obtained for the RT and 150 °C samples, respectively. These average crystallite sizes include both Cu and Ni crystallites. The increase in the average value is in agreement with expectations; however, the distinction between the size of Cu and Ni nanocrystals cannot be made (Figures 4e and 5e). In the case of the RT sample, the automated analysis did not converge; thus, the basic phase parameters were first refined with a fixed atomic ratio (50-50%), which was followed by the separate refinement of the microstructure parameters. The procedure was repeated several times to check the reproducibility of the results, and each time at least 15 iterations were needed to reach convergent cycles. The phase ratios and crystallite sizes for the two components resulted in 58.4 at% and 8.23 nm for Cu, and 41.6 at% and 5.44 nm for Ni, respectively. The composition measured with EDS is 56 at% Cu and 44 at% Ni (Figure 4d). In the case of the 150 °C film, the same automated strategy was used successfully as in the case of the hematite nanoparticles, and the basic refinement yielded phase ratios and crystallite sizes of 30.7 at% and 25.3 nm for Cu and 69.3 at% and 10.9 nm for Ni, respectively. The composition measured with EDS was 34 at% Cu and 66 at% Ni (Figure 5d). The obtained results are detailed in Table S2 in the "Basic refinement" column.
The careful observation of the intensity ratios of the 111 and 200 peaks revealed a smaller deviation between the fitted and measured intensities (Figure 6). In the case of the RT sample (Figure 6a), the intensity ratio of the 111 and 200 diffraction peaks indicated the development of a preferred <111> orientation of the nanocrystals. To include the texture parameter in the Rietveld analysis, we applied the March-Dollase approach [33], which allowed us to specify the crystallographic direction of the preferred orientation. The goodness of fit visibly improved without a notable change in the crystallite size and atomic ratio values. The obtained March parameters (0.71 and 0.67 for Cu and Ni, respectively) indicated a small degree of preferred orientation. In the case of the 150 °C sample, there was no clear indication of preferred orientation (Figure 6b). Thus, after basic refinement, the "arbitrary texture" option was activated, which allowed the variation in intensity ratios without crystallographic constraints [34,35]. This option aims to modify intensities freely to reach the best fit between the observed and modeled values. No noticeable improvement was observed visually (Figure 6b), and the phase ratios and crystallite sizes for the two components resulted in 35.8 at% and 17.1 nm for Cu and 64.2 at% and 11.4 nm for Ni, respectively (Table S2), which falls closer to the composition measured with EDS (Figure 5d).
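For reference, the March-Dollase correction mentioned above weights each reflection by a pole-density factor depending on the angle α between the reflection normal and the preferred-orientation direction. A commonly used form (the exact parametrization implemented by a given Rietveld package may differ slightly) is

$$P_{hkl}(\alpha) = \left( r^{2}\cos^{2}\alpha + \frac{\sin^{2}\alpha}{r} \right)^{-3/2},$$

where r is the refinable March parameter; r = 1 corresponds to a random orientation distribution, and the deviation of the values obtained here (0.71 and 0.67) from unity quantifies the degree of preferred orientation.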
Validation of the Peak Broadening Determination of TEM
The instrumental broadening of the TEM over the whole scattering angle range was modeled with the Caglioti relation [4], using the FWHM values of the SAED of a few-layers-thick turbostratic graphene (P-graphene) foil. The successful application of the automated refinement algorithm to SAED patterns of hematite nanopowder and nanocrystalline Cu-Ni thin films demonstrated the suitability of the instrumental broadening function. The analysis of the single-phase hematite nanopowder allowed the refinement of the Gaussianity parameter as well. Crystallite sizes obtained from the refinements agreed well with the image processing data.
During the analysis, our approach was to reduce the number of refined parameters as much as possible. By applying standard acquisition conditions, the variation in camera length was below 0.1% [3], so the correction for detector distance/camera length could be skipped. Data pre-processing before obtaining the integrated intensity profile allowed the diffraction pattern center to be found and the ellipticity to be corrected with high accuracy [20]; thus, no further refinement of the center displacement or tilt error was needed. The Caglioti parameters U, V, and W for peak shape modeling were taken from the calibration measurement and kept fixed during the whole procedure. The only instrument-related refined parameters were G0 and G1. No direct information on the scattering angle dependence of the Gaussianity (G1) was obtained during the calibration measurement. The measured G0 of the diffraction peaks of the P-graphene standard varied in the range of 0.56-0.72. When initiating the Rietveld analysis, an approximate starting value for G0 (0.6) and zero for G1 were applied. Then, as a final step of the Rietveld analysis of the hematite nanopowder, the refinement of the G parameters was performed, which resulted in values of 0.64 and 0.02 for G0 and G1, respectively. These values show minor changes with respect to the starting values.
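To make the calibration step concrete, the following minimal sketch fits the standard Caglioti form FWHM² = U·tan²θ + V·tanθ + W to peak widths obtained from pseudo-Voigt fits of a calibration standard. The angles and widths below are placeholder values (not those of Tables 1 and 2), and the script is an illustration rather than the authors' actual analysis code.

```python
import numpy as np

# Placeholder Bragg angles (degrees) and FWHM values (degrees) from pseudo-Voigt
# fits of a calibration standard's diffraction peaks; not the values of Table 1.
theta_deg = np.array([0.25, 0.35, 0.45, 0.60, 0.75])
fwhm_deg = np.array([0.020, 0.022, 0.025, 0.029, 0.034])

# Caglioti relation: FWHM^2 = U*tan^2(theta) + V*tan(theta) + W,
# i.e., a quadratic polynomial in tan(theta), fitted here by least squares.
t = np.tan(np.radians(theta_deg))
U, V, W = np.polyfit(t, fwhm_deg**2, deg=2)
print(f"U = {U:.4e} deg^2, V = {V:.4e} deg^2, W = {W:.4e} deg^2")

def instrumental_fwhm(angle_deg, U, V, W):
    """Instrumental FWHM (degrees) predicted by the Caglioti parameters."""
    ta = np.tan(np.radians(angle_deg))
    return np.sqrt(U * ta**2 + V * ta + W)
```

Once obtained, U, V, and W are kept fixed, and only the sample-related broadening is refined.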
It is remarkable that the variation in the G parameters with respect to the starting values did not notably improve the R-factors; nor were significant changes induced in the lattice parameter or crystallite size data (Table S1). This indicates that, after complete TEM alignment, Caglioti modeling of the peak broadening with an approximate G0 value obtained from the pseudo-Voigt fitting of diffraction peaks and a neglected G1 provides a satisfactory peak shape model under the applied experimental conditions. Thus, we were able to extract crystallite size information from the diffraction peak profile by a quick routine procedure. Moreover, it also demonstrates that the Rietveld analysis of a standardized electron diffraction measurement of an appropriate nanopowder provides G0 and G1 parameters for a specific lens current combination. These G values can be used later during the Rietveld analysis of more complicated samples.
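For clarity, the peak shape referred to here is the pseudo-Voigt profile, in which the Gaussianity η mixes Gaussian and Lorentzian components of common width,

$$pV(x) = \eta\, G(x; \mathrm{FWHM}) + (1 - \eta)\, L(x; \mathrm{FWHM}), \qquad 0 \le \eta \le 1 .$$

In this reading, G0 is the value of η and G1 describes its variation with scattering angle; the exact angular parametrization is software-dependent and is stated here as an assumption rather than taken from the text.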
In contrast to nanopowders, the determination of the average crystallite size of thin film samples is not straightforward using image processing techniques. If the film thickness exceeds the average crystallite size, overlapping grains occur, which hampers thresholding of dark-field images. Also, strain fields, bending contours, or the moiré effect may contribute to the contrast of DF images, which prevents the effective use of automated image analysis routines [36]. Moreover, on a DF image, only a small range of crystal orientations is present, which may distort crystallite size data, particularly in the case of some degree of preferred orientation of the nanocrystals or an anisotropic shape. These factors make crystallite size determination by image processing more complex and increase the uncertainty of the results. In the case of multiphase nanocrystalline thin films, the rings of the component phases can be so close to each other that their separate use for DF image formation is not possible. In this case, crystallite size analysis based on dark-field images cannot distinguish between the two components.
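The following sketch illustrates the kind of threshold-and-label routine typically used to turn a dark-field image into a crystallite size distribution. The file name, pixel size, and threshold choice are assumptions for illustration, not the authors' exact workflow.

```python
import numpy as np
from skimage import io, filters, measure, morphology

# Hypothetical dark-field image; pixel_size (nm per pixel) comes from the
# microscope calibration and is a placeholder value here.
image = io.imread("cu_ni_rt_dark_field.tif", as_gray=True)
pixel_size = 0.5  # nm/pixel (assumed)

# Threshold the bright diffracting grains (Otsu is one common automatic choice),
# remove small noise specks, then label connected regions as individual grains.
mask = image > filters.threshold_otsu(image)
mask = morphology.remove_small_objects(mask, min_size=10)
labels = measure.label(mask)

# Equivalent-circle diameter of each labeled grain, converted to nanometers.
diameters_nm = np.array(
    [region.equivalent_diameter for region in measure.regionprops(labels)]
) * pixel_size

print(f"{labels.max()} grains, mean size {diameters_nm.mean():.1f} nm")
```

As discussed above, such a histogram mixes the Cu and Ni populations, which is exactly the limitation the Rietveld analysis is meant to overcome.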
Rietveld analysis of SAED patterns of binary Cu-Ni thin films was applied to analyze the assumed crystallite size difference between the two components with broad overlapping reflections. In the case of both the RT and 150 °C samples, the results provided larger values for Cu than for Ni, which verified our presumption based on the visual observation of the SAED patterns. We have to note that the crystallite size distribution histogram of the Cu-Ni RT sample based on DF image processing (Figure 4e) exhibits a single maximum, which means that the average size difference between the two components cannot be resolved. In the case of the 150 °C sample, the histogram exhibits a minor peak around 17 nm (Figure 5e), which corresponds to a second population; however, the crystalline phase of the two populations cannot be recognized in such histograms. Rietveld analysis allowed the separate determination of the crystallite sizes of the two phases simultaneously in both Cu-Ni thin film samples. DF images can provide reasonable initial values for the Rietveld refinement and can also be used as control data for the refined values.
The development of the preferred orientation of (111) planes in the RT sample has been quantified by applying the March-Dollase approach. After the texture parameters were included in the analysis, the R-factors improved without variation in the crystallite size and atomic ratio values (Table S2). In the case of the 150 °C sample, the intensity difference between the experimental and fitted curves did not indicate a straightforward development of preferred orientation. Accordingly, the "Arbitrary texture" model resulted in a better fit in terms of R-values, and in notable changes in the crystallite size and atomic ratio values with respect to the basic refinement. We conclude from these results that the increase in the copper 111 peak intensity should not be related to preferred orientation. The good intensity match of the measured and fitted copper 200 peaks (Figure 6b) also supports the lack of preferred orientation.
Besides preferred orientation, several factors can cause the intensity ratios to deviate from those of a random orientation distribution. While in powder XRD several thousand crystallites are measured at the same time, in SAED, especially when smaller apertures are used, the selected area may contain a much lower number of diffracting crystals. In these cases, the orientation statistics will be poor, and the condition of randomness is not fulfilled. In thin films with inhomogeneous grain size, larger grains contribute with higher intensity to the SAED due to the larger size of the coherently scattering domains. With increasing thickness, the probability of double diffraction increases too. Distorted intensity ratios on the diffraction pattern lead to a poorer fit during Rietveld analysis. Moreover, if the texture is not defined correctly or multiple textures are present, the Rietveld algorithm does not converge. In these cases, the use of the "Arbitrary texture" model can be a good choice to overcome this difficulty. The "Arbitrary texture" model is a robust fitting algorithm that is not sensitive to the source of the variation in intensity and is able to improve the intensity fit without physical information on the crystallographic texture. For example, Wenk et al. [34] used it to compensate for the coarse nature of their standard. In the case of the Cu-Ni 150 °C sample, we think that the observed 111/200 intensity ratio of the copper phase was due to the strong Bragg reflections of some larger crystallites in the measured area rather than to preferred orientation. It has to be noted that the atomic ratio obtained using the "Arbitrary texture" model provided the best fit with the EDS data. This indicates that forcing a reasonably assumed but incorrect preferred orientation may improve the R-factors of the Rietveld analysis but does not provide real information on the nanostructure, which is an important limitation of electron diffraction-based Rietveld analysis.
Benefits of the Single Step in-TEM Determination of Instrumental Broadening
The determination of instrumental peak broadening is fundamental for diffraction pattern-based nanostructure analysis. In the case of well-resolved peaks, deconvolution can be performed using the measured peak profile of a single crystalline sample. This method was followed by [37] during the grain size analysis of a nanocrystalline FeAl alloy. Alternatively, peak shape modeling as a function of the diffraction angle, e.g., Caglioti modeling, can be performed, which allows the analysis of complex diffraction patterns with peak overlaps, as in the case of lower symmetry and/or multicomponent samples. The procedure proposed in [15] allows the instrumental peak broadening to be obtained indirectly with the help of complementary XRD measurements. Their method determines the instrumental broadening as the difference between the measured broadening and the grain size-related broadening. Our procedure makes the determination of the instrumental broadening of the applied electron optical setup easier, as no additional measurements are needed, and the peak broadening parameters can be obtained from a single measurement on an appropriate calibration sample. Moreover, our proposed approach offers a direct measurement of the instrumental broadening, which allows for higher accuracy compared to the indirect method in [15].
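As a reminder of the arithmetic behind separating these contributions (a textbook simplification; a full Rietveld treatment convolves complete profiles rather than subtracting widths), for approximately Gaussian peaks the sample broadening and the volume-weighted crystallite size are often estimated as

$$\beta_{\mathrm{sample}} \approx \sqrt{\beta_{\mathrm{measured}}^{2} - \beta_{\mathrm{instrumental}}^{2}}, \qquad D \approx \frac{K\lambda}{\beta_{\mathrm{sample}}\cos\theta},$$

where the β values are expressed in radians, λ is the electron wavelength, and K ≈ 0.9 is the Scherrer shape factor.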
Our calibration sample was a few-layers-thick graphene, which is available commercially; no additional synthesis procedures were needed, and this material can be stored at room conditions without alteration for a long time. Graphene covers the TEM grid uniformly, while nanoparticle calibration samples may form thicker aggregates, which leads to enhanced multiple scattering. Moreover, the graphene foil is self-supporting; thus, an amorphous background on the SAED pattern is avoided. These factors contribute to a reduced background in the diffraction profile. P-graphene can also be used as a supporting foil in the case of an electron diffraction measurement of a nanoparticle sample, serving at the same time as an internal standard both for the determination of the camera length and the peak broadening parameters.
The determination of the peak broadening parameters of the TEM using the SAED of an appropriate calibration sample allows a reliable separation of the instrumental and sample contributions to the peak width of the analyzed specimen. Then, during Rietveld analysis, the instrumental parameters can be kept at previously determined fixed values. In contrast to XRD, this is an essential issue in electron diffraction because of the wide variety of available lens current combinations, some of which produce qualitatively similar SAEDs with clearly different instrumental contributions [38]. A systematic study of the effect of the camera length and selected area aperture on peak width has been published recently [17]; however, the effect of factors like beam convergence and accelerating voltage also needs to be considered. Additionally, the Modulation Transfer Function of the recording medium [38][39][40] may also contribute to the measured peak broadening.
Figure 1 .
Figure 1.(a) Schematic cross-section of the TEM grid covered by a 3 nm thick ultrathin carbon foil (Ted Pella, Redding, CA, USA) showing the experimental setup during the deposition of Cu-Ni thin films.(b) One mesh cross-section was enlarged, indicating the possible non-uniform thickness of Cu film sputtered from the grid side due to the shadowing effect of the grid bar.
Figure 2 .
Figure 2. SAED pattern taken from Pelco ® graphene 3-5 support film (Ted Pella) with the corresponding integrated intensity profile in the insert (a).The solid red line in the intensity profile shows the pseudo-Voigt fit of the diffraction peaks.(b) Polynomial fit to obtain the U, V, W Caglioti parameters from the SAED pattern of P-graphene.The error of tan θ is practically equivalent to ∆(sin θ) and calculated using Equation (6).(c) Experimental intensity curve of P-graphene sample (black dots) and calculated intensity curve of graphite plotted using the obtained Caglioti (U, V, W) and Gaussianity peak shape parameters (red line).Intensity was scaled to graphene 11, i.e., the graphite 110 peak indicated by an arrow.
Figure 3 .
Figure 3. Bright-field (a) and corresponding dark-field image (b) of hematite nanoparticles.(c) Threshold dark-field image used to obtain particle size distribution.(d) SAED pattern of hematite nanoparticles and (e) the area used to obtain the SAED pattern.(f) Crystallite size distribution as determined from (c).(g) Experimental (black dots) and calculated (red curve) intensity profile.Below the intensity profile, the difference curve between the measurement and fit is plotted.
Figure 4 .
Figure 4. SAED pattern of Cu-Ni thin film deposited at room temperature; Cu and Ni 111 diffraction rings are marked (a).The yellow square in the inset indicates the zoomed region.Bright-field (b) and dark-field images (c), EDS spectrum (d), and crystallite size distribution determined based on DF image (e).The integrated intensity profile (black dots) and fitted profile after final refinement are plotted on (f).Below the diagram, the difference curve between measurement and fit is seen.Arrows indicate Cu-oxide rings.
Figure 5 .
Figure 5. SAED pattern of Cu-Ni thin film deposited at 150 °C (a); Cu and Ni 111 are marked.The yellow square indicates an enlarged region, bright-field (b) and dark-field image (c), EDS spectrum (d), and crystallite size distribution determined based on the DF image (e).Integrated intensity profile (black dots) and fitted profile after final refinement (f).Below the diagram, the difference curve between the measurement and fit is seen.Arrows indicate Cu-oxide rings.
Figure 6 .
Figure 6.Peaks 111 and 200 of Cu-Ni RT (a) and Cu-Ni 150 °C; (b) thin films after basic (top row) and final refinement (bottom row).Note the deviations between measured (black) and fitted (red) curves.In the case of the RT sample, intensity ratios indicate the 111 preferred orientation.
Table 1 .
FWHM and Gaussian parameters of P-graphene diffraction peaks obtained with pseudo-Voigt fitting.
Table 2 .
Caglioti parameters (in deg²) and Gaussian parameters used in the final refinement cycle of hematite nanopowder. The Caglioti parameters were obtained experimentally from P-graphene and the Gaussian parameters were refined. | 12,160.4 | 2024-02-28T00:00:00.000 | [
"Materials Science",
"Physics"
] |
MENTAL MODELS AND CREATIVE THINKING SKILLS IN STUDENTS’ PHYSICS LEARNING
The study of mental models and creative thinking skills in students' physics learning with the problem-based learning model has been scarce. This study aimed to analyze the relationship between mental models and creative thinking skills in high school students. Many previous research findings report a relationship between mental models and creative thinking skills among university-level students and workers. This mixed-methods study was conducted on high school students in Malang, East Java, Indonesia, aged between 14 and 15 years. The instruments used were mental models and creative thinking skills test questions. The findings show no relationship between mental models and creative thinking skills, because learning has not fully empowered either mental models or creative thinking skills. In addition, at the previous schooling level, students' knowledge is still fragmented and therefore incomplete. Hence, at the high school level, students need help to improve their mental models and creative thinking skills. This finding implies that teachers, when developing learning materials, tools, and instruments, must pay attention to the level of student knowledge so that learning can be more optimal.
Introduction
Students can use what is often called a model, or more specifically, a mental model, to understand invisible (abstract) physical phenomena that occur on a microscopic scale. Educational psychologists explain that the mental model is an internal thought that acts as a structural analogy of a situation or process (Stains & Sevian, 2015). It plays its role when one tries to understand, recount, and predict the final state of a phenomenon (Moutinho et al., 2014). Understanding the mental model enables the development of more effective communication and decision-making (Bancong & Song, 2020).
In principle, the mental model represents multiple domains that support understanding, reasoning, and prediction. Mental models represent a more complex form of conceptual knowledge with causal relationships (de Guzman Corpuz & Rebello, 2019). A characteristic of mental models is that they are structures related to human knowledge of the natural world. The knowledge-processing phase is a memory unit involving symbols that reflect knowledge itself, thus giving rise to a good learning process (Ahi, 2016).
The mental model is built by the individual's cognitive system. It represents a simplification, illustration, analogy, and simulation of natural objects. In an attempt to understand new knowledge or a particular phenomenon, the mental model is built with reference to prior knowledge, which enables the presented information to be interpreted (Gregorcic & Haglund, 2021). Therefore, a beginner building a mental model differs from an expert in the field in terms of content, structure, and semiotics, and modifying learning accordingly depends on the mental model, a process called conceptual change (Greefrath et al., 2021). According to cognitive psychologists, the mental model represents an internal scale model of external reality, or a person's mental representation of an idea or concept (Haglund et al., 2017). An individual who has difficulty building a mental model will have difficulty building thinking skills and will not perform the problem-solving process well (Canlas, 2021).
The mental modeling process can be used to investigate physics concepts. The teacher can access this information to assist in building students' conceptual understanding (Hurtado-Bermúdez & Romero-Abrio, 2023). Mental model research is mostly done on a large scale, such as in groups, so the data are reduced per group to describe the development of students' mental models (Brookes & Etkina, 2015). The learning process that would reveal the development of students' mental models at each meeting is not adequately considered, and the teacher cannot map the students' mental models as material for learning evaluation.
The mental model in physics learning indicates that there is good reason to construct sound knowledge when explicitly explaining conjectures about a phenomenon (López & Pintó, 2017). Students make a mental effort to understand a complex system and build the proper mental representation to model and explain the system. Students continually modify and reorganize their mental models with every new experience, especially after the learning process (Childers & Jones, 2015).
The new experience is oriented not only to the mental model but also to the skill of thinking, the so-called creative thinking skill. It is an important skill for solving problems in the era of openness (Ceylan, 2022). Such skills are interpreted as the ability to offer new perspectives, generate novel and meaningful ideas, raise new questions, and produce solutions (Tawarah, 2017).
These skills need to be utilized to help individuals find solutions to problems. Creative thinking skills are beneficial in dealing with various problems in the era of globalization (Ritter & Mostert, 2017). The mental model has a relationship with creative thinking skills (Pitts et al., 2018), and the mental model influences creative thinking (Leggett, 2017). Teachers can apply a mental model frame or conceptual framework to students, so that students will progress in learning (Schut et al., 2022). The concepts that have been formed in students can bring about creativity in the form of new, meaningful perspectives and ideas. The students also become self-regulated learners (Yildiz & Guler Yildiz, 2021).
The Programme for International Student Assessment (PISA) international survey (Organisation for Economic Co-Operation and Development, 2018) states that Indonesia is still at the bottom in mathematics and science and has not been able to reach the top ten. The PISA results show that, out of 70 countries, Indonesia is still ranked 62nd, so much effort is still needed to reach the best level. Similarly, the results of the Trends in International Mathematics and Science Study in mathematics and science place Indonesia near the bottom of the rankings, compared to Singapore in first place (Organisation for Economic Co-Operation and Development, Asian Development Bank, 2015).
The facts described above are not much different from physics learning in Malang. Students still show little independence in learning during lessons, and the presentation of material is still dominated by teachers. Meanwhile, students have not found a good way of learning and feel confused about developing their conceptual framework (Adbo & Taber, 2009). At the same time, students' creative thinking skills are essential for competence in the 21st century (Ritter & Mostert, 2017). Such a learning process leads to students' saturation and impacts students' cognitive learning outcomes.
A physics study at several high schools in Malang reported by Yogantari (2015) revealed that as many as 35% of students have difficulty with elasticity and Hooke's law, 30% with optics, and 15% with kinematics. The difficulty is caused by students experiencing far less hands-on activity in lessons than the maximum possible. As many as 76% of students stated that teachers, as a learning resource, still dominate learning in the classroom. As many as 14.6% of students found it difficult to understand physics presented in diagrams, 33% had difficulty understanding concepts, 38% had difficulty using mathematical representation, and the rest had difficulty drawing conclusions based on analysis.
Students' difficulties in studying physics lead to low learning outcomes. They lack exploration and empowerment of mental models and creativity in learning. Learning activities are more oriented towards achieving mathematical knowledge than towards mastery of physics concepts, and students settle for finding the most appropriate answer to the given problems based on existing information.
Material elasticity is an integral part of learning physics and of everyday life. It can be seen when people use elastic materials in most activities to protect parts of the body such as the head, torso, and feet. Mental models and creative thinking skills fit very well with elasticity material. The mental model explains the macroscopic and microscopic states of a material, so that students become accustomed to explaining the state of a particle or molecule when a force is applied. Regarding creative thinking skills, students are trained to work creatively with elastic material to make a quality product that protects the body using the available materials.
Physics emphasizes products, processes, applications, and attitudes. Physics learning is not only based on cognitive learning outcomes as the final result; the learning process should also be prioritized and its quality improved. Learning conditions that involve students' learning experiences need to be created to foster scientific thinking in students. Reasonable efforts have to be made by teachers and students to achieve student competence according to the critical demands of the curriculum, especially finding more effective patterns or models of student learning. The selected model can be used according to the situation and condition of the students. Therefore, good innovation is required by applying a constructivist-based learning model (Qarareh, 2016) in order to be able to empower the mental models and creative thinking skills of students (Barrett et al., 2013).
Problem-based learning (PBL) is a model that involves students' learning experiences through developing questions and thinking skills to solve physics problems. PBL has the potential to facilitate the development of a good-quality mental model of an object/material, stimulating a change in thinking structure. Denizhan (2020) affirmed that group learning activities to solve problems or cases encourage students to think with their knowledge, identify necessary information, locate more relevant information, and analyze and evaluate in order to construct problem-solving flows. These activities have an impact on changing the mental models of the students. In addition, the involvement of real experience in learning is expected to stimulate students' mental models and improve students' creative thinking skills in physics. The findings of Salari et al. (2018) prove that PBL has the potential to change the mental models of students. Creating such learning situations can further enhance students' actual experiences in learning through students' mental models.
The potential of PBL in improving students' creative thinking skills can be accessed during learning.Creativity can generate ideas, novelty, new questions, or new and valuable solutions through preparation, incubation, evaluation, and elaboration.Preparation is done during the brainstorming of opinions about the issues/problems found, followed by an incubation phase in which issues are identified and discussed.The evaluation phase is concerned with deciding whether the ideas are relevant, and elaboration is the final phase.The ideas are applied/manifested in actual activity and then reevaluated (Seibert, 2021).The potential of PBL in fostering students' creativity can be started from individual activities, which are continued with group activities.Such activity will result in innovation in applying and transferring knowledge (McCrum, 2017).
The study of mental models and creative thinking skills in physics, in combination with the PBL model, has remained limited to the present. Although the two variables may have been studied separately (Hofgaard Lycke et al., 2006; Lin, 2017), mental models and creative thinking skills using the PBL model on elasticity material have not been studied. Therefore, a comprehensive and in-depth study of the process of change or development of mental models and students' creative thinking skills by applying the PBL model in physics learning is needed.
Method
This study used a mixed-methods embedded experimental model to explore the research subject fully. This mixed-methods approach provides a better understanding of the research problem than quantitative or qualitative methods alone.
The steps in this study are described as follows. The first step was to carry out mental models and creative thinking skills pretests to determine the students' mental models and creative thinking skills before implementing physics learning with the PBL model. The second step was learning physics using the PBL model in the treatment group and learning with the lecture or conventional model in the control group. At this stage, qualitative data were also collected through interviews with several selected students to confirm their answers during learning related to mental models and creative thinking skills. During the learning process, the development of students' mental models and creative thinking skills was observed through worksheets on each topic. Observers conducted observation activities, and learning activities were documented through photos and videos as evaluation material. After all the topics were taught, the third step was to post-test mental models and creative thinking skills. The fourth step was collecting qualitative data by filling out response questionnaires and interviewing students to determine student responses to learning with the PBL model. After the four steps were completed, the quantitative and qualitative data were interpreted to draw conclusions following the formulation of the research problem.
The participants in this study were 78 students consisting of 39 students in the PBL class and 39 in the conventional class. The research was conducted at Public High School 8 Malang. The research subjects were 10th-grade science students with the category level five semesters (medium level). The average age of the students was 16 years. The experimental class consisted of 20 male students and 19 female students. There are 25 male students and 19 female students in the control class. Before conducting the PBL experiment, the mental models and creative thinking skills instruments were tested on 250 11th-grade students at several high schools in Malang, namely High School 1, High School 2, High School 3, High School 4, High School 5, High School 6, Frateran Catholic Senior High School, Christian High School Kalam Kudus, Petra Christian Academy, Catholic High School Santa Maria, and St. Albertus High School.
The mental models test instrument refers to the rubric developed by Ifenthaler (2006), which distinguishes three types of mental models: surface, matching, and deep. The creative thinking skills instrument covers fluency, flexibility, originality, and elaboration, and was compiled and developed following Torrance (1990). The levels of creativity are: very creative (high level) (68-100), moderately creative (moderate level) (34-67), and less creative (low level) (0-33). The creativity domains are scored as follows: 1) fluency is characterized by (0) students cannot provide ideas/answers, (2) students can come up with one to two ideas/answers, and (4) students can come up with three or more ideas/answers; 2) flexibility is characterized by (0) students are not able to provide ideas/methods, (2) students can come up with one to two ideas/methods, and (4) students can come up with three or more ideas/methods; 3) originality is characterized by (0) students do not answer or give only general/common ideas with no originality, (2) students come up with moderately unique ideas, and (4) students come up with unique ideas; 4) elaboration is characterized by (0) no addition of ideas from students, (2) a simple addition of ideas from students, and (4) extraordinary ideas from students. Ten mental models and creative thinking skills questions were developed. Before the instrument was applied, expert validation was carried out by two experts in theoretical physics and physics learning from the postgraduate physics education program of the State University of Malang, Malang.
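As an illustration of how the 0/2/4 rubric described above could be applied programmatically, the sketch below mirrors the stated thresholds; the function names and the example values are hypothetical and are not part of the study's instruments.

```python
# Hypothetical scorer for the 0/2/4 creativity rubric described above.

def score_count_based(n_ideas: int) -> int:
    """Fluency/flexibility: 0 = no ideas, 2 = one or two ideas, 4 = three or more."""
    if n_ideas == 0:
        return 0
    return 2 if n_ideas <= 2 else 4

def creativity_level(total_percent: float) -> str:
    """Map a 0-100 score to the creativity levels used in the study."""
    if total_percent >= 68:
        return "very creative (high level)"
    if total_percent >= 34:
        return "moderately creative (moderate level)"
    return "less creative (low level)"

# Example: a student who offered three ideas on one item and scored 70% overall.
print(score_count_based(3), "-", creativity_level(70.0))
```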
Qualitative data were analyzed descriptively, while quantitative data were analyzed by linear regression to determine the relationship between mental models and creative thinking skills in physics learning with the PBL and conventional learning models. Data analysis was assisted by SPSS version 23.00 for Microsoft Windows. Before the data analysis, normality and homogeneity tests were performed; the results of these prerequisite tests showed that the data for both classes were normal and homogeneous.
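For readers who want to reproduce this kind of analysis outside SPSS, the sketch below shows an equivalent simple linear regression with SciPy; the score arrays are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Placeholder scores (not the study's data): mental model scores (X) and
# creative thinking skill scores (Y) for one class.
mental_models = np.array([55, 62, 70, 48, 66, 73, 58, 64])
creative_skills = np.array([80, 92, 85, 88, 95, 83, 90, 87])

# Simple linear regression Y = a + bX, analogous to the SPSS analysis reported.
result = stats.linregress(mental_models, creative_skills)

print(f"a (intercept) = {result.intercept:.3f}")
print(f"b (slope)     = {result.slope:.3f}")
print(f"R             = {result.rvalue:.3f}")
print(f"R^2           = {result.rvalue ** 2:.3f}")
print(f"p-value       = {result.pvalue:.3f}")  # compared against alpha = 0.05
```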
Results and discussion
The analysis of students' answers on the mental models and creative thinking skills questions indicates a high level of thinking, as shown in Table 1.
Table 1.Students answers and categorize questions for mental models and creative thinking skills (source: created by authors)
Questions
Students' answers Category In designing a window, carpenters usually give a slit in order to enter the glass.
[1] Is glass an elastic object?
[2] Why do carpenters create slits in the windows to be able to insert the glass?Please explain.
[3] What is the condition of the glass particles in the morning and afternoon?
S25
[1] Yes, glass is an elastic object during the day.The glass will expand and return to its original shape in the afternoon with a high temperature.
[2] The goal is to give space for the glass when it expands during the day.
[3] In the morning, the substance particles vibrate weaker (releasing heat) so that they approach each other, and the object shrinks. Expansion occurs during the day when a substance or glass is exposed to sunlight (receiving heat). It makes the substance particles vibrate faster so that they move away from each other.
Mental model type deep
The pole vault is one of the athletic sports in jumping numbers.The pole vault is performed with the aid of a pole to achieve the highest possible jump.Alfred made a jump using a pole and managed to get over the bar with a buffer height of 4.5 m.Explain things that allow Alfred to cross the bar.
S45
1.The pole used by Alfred is made of elastic and strong material.2. The force given by Alfred is quite large due to the influence of Alfred's mass and instantaneous velocity/impulse 3. The position of Alfred's grip at the end of the pole to provide a high jump 4. Alfred's running speed is set in such a way that it can help Alfred to jump.
Students' mental models and creative thinking skills on problem-based learning
Based on the results of the analysis of variance (ANOVA), as shown in Table 2, it is known that the P-value of 0.568 is greater than alpha (α = 0.05), which means that there is no significant correlation between students' mental models and students' creative thinking skills in the PBL model. The contribution of mental models to creative thinking skills in PBL learning is shown in Table 3. The R-value of the correlation between mental models and creative thinking skills in the PBL model is 0.096, and the R² value is 0.009, or 0.9%. Thus, the mental models aspect contributed 0.9% to students' creative thinking skills, and other factors contributed 99.1%. From Table 4, the regression equation resulting from the relationship between mental models and creative thinking skills can be determined. The value of a = 93.221 and b = -0.190, so the regression equation is Y = 93.221 - 0.190X.
Students' mental models and creative thinking skills on the conventional model
Based on the results of the ANOVA, as shown in Table 5, it is known that the significance value of 0.881 is greater than alpha (α = 0.05). There is no significant correlation between students' mental models and students' creative thinking skills in conventional learning. The contribution of mental models to creative thinking skills in conventional learning is shown in Table 6. The R-value of the correlation between mental models and creative thinking skills in the conventional model is 0.025, and the R² value is 0.001, or 0.1%. Thus, the mental models aspect contributes 0.1% to students' creative thinking skills, and other factors contribute as much as 99.9%. From Table 1, the regression equation resulting from the relationship between mental models and creative thinking skills can be determined. The value of a = 59.257 and b = 0.24, so the regression equation is Y = 59.257 - 0.24X. The regression coefficient values between mental models and creative thinking skills are shown in Table 7.
The study results show no linear relationship between students' mental models and creative thinking skills in physics learning with either the PBL or the conventional model. The students' mental models do not determine their creative thinking skills. The absence of a relationship between mental models and creative thinking skills in students, especially high school students, in learning physics found in this study is an anomaly. High school students are at the formal operational stage and need the help of others. The environment inside and outside the home, including the school environment, significantly affects creative thinking skills. This finding is in stark contrast to the findings of Mumford et al. (2012) in their research on a large number of university students. Their findings confirm that there is a relationship between mental models and creativity, and that the knowledge or expertise possessed by a student contributes to creative problem-solving.
It means that the relationship between mental models and problem-solving could increase creativity.A student should be able to solve problems.In this case, creative thinking skills are needed.
According to Hester et al. (2012), before solving a problem, the mental models used by students to understand problems in a given domain can be assessed on two kinds of characteristics or attributes, namely subjective and objective. They found that the objective and subjective features of students' mental models were related to quality and originality. The assessment of the subjective mental model attributes was based on a presentation of a mental model concept map in front of the assessment team after the students had been involved in training. The assessment team was asked to rate the students' mental models based on evidence of the subjective and objective attributes. Objective attributes were assessed based on eleven criteria, while subjective attributes were assessed based on nine criteria. The assessment of objective attributes complements that of subjective attributes in concrete or tangible terms, for example, how many concepts are included in the mental models and how many links are provided between the concepts. The subjective and objective attributes of mental models can produce high-quality thinking that results in creativity. Encouraging creativity takes several presentations of essential concepts and can encourage students to formulate coherent ideas related to existing concepts. Toader and Kessler (2018) describe a creativity test conducted on several mental model teams. The teams were dissimilar, similar, or complementary. The results showed that the dissimilar mental model teams had higher creativity than the similar and complementary mental model teams because of the potential for knowledge recombination when each team member reached a balance between exploration and exploitation. Another study was conducted by Curşeu and ten Brink (2016) on two different ethnicities, namely the Dutch and the Chinese. The results showed that group members who received divergent thinking manipulation had a less negative evaluation of minority opinion conceptualization than group members who did not receive divergent thinking manipulation. Divergent thinking can trigger group members to conceptualize these minority differences of opinion only in individualistic groups and not in collective groups, which ultimately leads to less creative performance in groups operating collectively in a cultural context. This study also contributes to extrapolating cultural differences in creative performance from individual to group-level analysis. It shows that groups operating in a collective culture have lower creativity in divergent thinking tasks than groups operating in an individualistic culture. Another study, by Marques Santos et al. (2015), was conducted on 161 teams of 735 people and investigated the mediating mechanisms of intragroup conflict and creativity in the relationship between shared mental models and team effectiveness (team performance and satisfaction). The results show that highly shared mental models are associated with low levels of intra-group conflict, encourage creativity, and improve team performance and satisfaction. These findings contribute to the relationship between shared mental models and creativity and emphasize the importance of shared understanding for team creativity and effectiveness, rather than the individual.
Another study examining mental models and creative thinking skills was conducted by Lucas and Mai (2022), but on workers, not on students. There were two types of mental models, insight and production, which encourage workers to work more creatively so that their performance is the best. Insight mental models direct workers to focus more on preparation, such as information seeking, whereas production mental models help workers behave more productively at work, such as by creating ideas and validating ideas. A worker with an insight mental model is more likely to frame tasks and think about problems from a different perspective to produce a more creative approach to completing their tasks. In contrast, workers with a production mental model will prioritize production behavior rather than preparation; they will spend more time on activities such as examining creative ideas and using them creatively in their work.
Mental models can describe and represent thinking processes in solving problems, which can help predict how individuals will perform and behave in certain situations and how they obtain and process new information. A team's mental model relies heavily on input from team members and directs the team on how to proceed in terms of process and content. The main characteristic of mental models is that they can help the team coordinate and adapt as required by the tasks assigned to team members. Therefore, members interact with each other to exchange opinions and ideas that are creative and innovative. Mental models can be modified, adapted, and finally shared across the team. This construction also implies that team members can be independent in designing things. The mental model develops through several stages, among others a mental model based on tasks, on processes, and on teams. This follows the findings on architectural engineering students (Casakin & Badke-Schaub, 2015). Mental models generate creative ideas, knowledge creation, thinking concept formation, decision making, and evaluation (Toader & Kessler, 2018). Similar studies have also been conducted on online games and social media such as Facebook and Twitter; a person's mental model largely determines the outcome of the game because it requires good creative thinking, insight, concepts, techniques, and strategies (Wasserman & Koban, 2019).
Several research findings presented above indicate that there is a correlation between mental models and creative thinking skills. However, this study did not find a relationship between mental models and creative thinking skills, so this result needs to be explained in more detail. Unlike several previous studies, this research was conducted on 10th-grade high school students, not on university-level students. Emotionally, these students required much assistance, direction, and instruction in learning, since they were junior high school students who had just moved to the high school level and were at the stage of formal operational development. The new school environment dramatically affects the intelligence of students. Moreover, junior high school students who have just moved to high school still have a strong teenage personality. They are shy in adjusting to school and learning, especially high school physics, which requires a high level of reasoning and analysis. Children need the help of others during the development period, especially in the development of intelligence. As age and psychological maturity develop, the child's dependence on others decreases (Chopik et al., 2018). Junior high school students are highly dependent on others and on teachers when they are in the school environment (Wanders et al., 2020). Students may develop continuously and can also develop the potential within themselves. However, there must be something else felt by the child: the nature of humanity encourages children to need the help of others in their emotional development. Parents and teachers are the closest individuals who can encourage children's emotional development (Xiao et al., 2022).
The relationship between emotional intelligence significantly affects mental models and creative thinking skills.Students with good emotional intelligence can acquire good skills and vice versa.One factor that affects a person's intelligence is personality (Petrides et al., 2004).Students with good personalities must have good mental models and creative thinking skills.On the other hand, students with bad personalities must display poor intelligence (Chen & Guo, 2020).Bad personality can be seen through the following indicators: lack/no motivation, loss of self-confidence, low self-esteem, loss of self-control, and high anxiety (Crocker & Park, 2004).If a student shows these characteristics, this shows that his emotional intelligence is low, and it has a fatal impact on his learning skills.
Conclusions
A comprehensive study and analysis carried out on students' mental models and creative thinking skills in physics through the PBL model found no linear correlation between students' mental models and creative thinking skills in either the PBL or the conventional model. Therefore, it can be interpreted that the students' mental models factor does not determine creative thinking skills. The absence of a relationship between mental models and creative thinking skills in students, especially high school students, in physics learning is an anomaly. It is due to various factors: 1) high school students, especially those who have just moved from junior high school, still have incomplete knowledge, in the sense that they have not had much experience and have not mastered many concepts; elementary and junior high school science is still at the introduction and identification stage and not yet at a deeper level of analysis, so reasonable teacher assistance is necessary to make the interaction between students and the learning environment more optimal; 2) the learning pattern of high school teachers is faster, with a high enough level of material delivery, so that students still need suitable adjustment and training; 3) physics questions were given at a higher level, i.e., mental models and creative thinking skills, while students are still not familiar with higher-order thinking questions and fast learning patterns, which can cause trauma, fear, and anxiety in students. This finding can also explain why the previous findings reported a relationship between mental models and creative thinking skills: those studies were carried out at a higher level, namely with undergraduate and postgraduate students, and even in the world of work. Adults have better levels of knowledge, have good experience, and are more emotionally stable. From elementary school to high school, students still need reasonable assistance from the teacher, and the provision of material can be adjusted to the child's abilities. This finding implies that, in developing learning instruments, teachers must pay attention to students' knowledge so that students are not forced and required to master optimally whatever the teacher will teach. Some suggestions for further research are to measure students' mental models and creative thinking skills in physics and in science learning in elementary, junior high, and higher education, with studies across cultures, ethnicities, or island regions with unique characteristics. In addition, it is necessary to study the relationship between mental models and creative thinking skills and other learning variables that have the potential to improve students' physics learning.
Table 2 .
The summary of the ANOVA for the correlation between mental models and creative thinking skills in the problem-based learning model (source: created by authors)
Table 5 .
The summary of the analysis of variance (ANOVA) for the correlation between mental models and creative thinking skills in the conventional model (source: created by authors). a. Predictors: (constant) mental correlation conventional.
Table 7 .
The regression coefficient value between mental models and creative thinking skills in the conventional model (source: created by authors) a. Dependent variable: creative correlation conventional. | 6,714.4 | 2023-06-23T00:00:00.000 | [
"Physics",
"Education"
] |
A modular and generic monolithic integrated MEMS fabrication process
A modular and generic, monolithic integrated MEMS fabrication process is presented to integrate microelectronics (CMOS) with mechanical microstructures (MEMS). The proposed monolithic integrated fabrication process is designed using an intra-CMOS approach (to fabricate the mechanical microstructures in trenches without the need for planarization techniques) and a CMOS module (to fabricate the electronic devices) with a 3 μm minimum feature length. The microstructures module is made up of up to three polysilicon layers, with aluminum as the electrical interconnect material. From simulation results using the SILVACO suite (Athena and Atlas frameworks), no significant degradation of the CMOS device performance was observed after the MEMS manufacturing stage; however, the thermal budget of the modules plays a crucial role, because it sets the conditions for obtaining the complete set of fabricated devices near their optimal point. Finally, to evaluate and support the development of the proposed integrated MEMS process, a modular test chip that includes electrical test structures, mechanical test structures, interconnection reliability test structures, and functional micro-actuators was also designed.
Introduction
The MEMS (Micro-Electro-Mechanical Systems) acronym is commonly used to describe mechanical structures of micrometric dimensions performing an electronically controlled preset function [1]. Currently, the MEMS (mechanical microstructures) offered by manufacturers usually consist of sensors and/or actuators that are fabricated separately and then brought into interaction with electronic circuitry (hybrid integration); such hybrid systems show many functional drawbacks, mainly due to the external wiring [2]. Nevertheless, MEMS is an evolving technology and includes more than just mechanical structures; it also includes a wide variety of microcomponents (chemical, thermal, magnetic, mechanical, etc.) and electronic circuits. Given the nature of the microcomponents, the full potential of MEMS products has been realized only through their proper integration with a specific conditioning electronic circuit. In this sense, MEMS designers nowadays face different possibilities for integrating a monolithic system (combining sensors/actuators and electronic devices on a single substrate) based on the careful selection of both modules: micro-structures/components and CMOS circuits [2]. MEMS designers can choose from Pre-CMOS, Post-CMOS, and Intra-CMOS approaches. Each of these approaches includes a set of fabrication steps to fulfill the desired requirements of an integrated MEMS process. The development of a better integration approach is the way to achieve better system performance and new applications. Currently, some studies consider that half of all existing MEMS categories are fabricated using a monolithic integration approach; some examples are print-heads, accelerometers, and, recently, frequency control devices [3,4]. This kind of integration offers a less expensive alternative (mainly from the point of view of the interconnection between two different devices), reducing parasitic elements and at the same time increasing the signal detection sensitivity of the conditioning circuitry. A generic integrated fabrication process is one that can fabricate more than a single device, and the CMOS technology proposed in this work is a generic one. However, a specific microcomponent technology usually turns out to be quite limited to some specific applications. The goal of the LIMEMS-INAOE laboratory [5] is to develop a generic and modular process capable of integrating intelligent and varied microstructures by exploiting its fabrication capabilities. Hence, in this work, the design of a generic and modular integrated MEMS fabrication process is presented. The base manufacturing modules are fully detailed in Section 2. In Section 3, the proposed generic and modular fabrication process is described. The design of the test chip is presented in Section 4. Finally, the conclusions drawn from this work are summarized in Section 5.
MEMS Technology
The integration of a new MEMS technology is required for the design and development of systems that demand more, and more accurate, functionality at lower cost. The long-term goal is the development of a multipurpose MEMS technology that also considers other related microcomponents. The monolithic integration approach presented in this work is developed around a polysilicon surface-micromachining module and a CMOS module, whose integration is designed for 10-20 Ω·cm (~5 × 10¹⁴ cm⁻³), p-type, 6-inch diameter, (0 0 1) silicon wafers.
A. Microstructures Module: PolyMEMS-INAOE Technology
The PolyMEMS® technology [8,9] uses polysilicon films as the structural material and is based on a surface-micromachining sub-module developed for the fabrication of electrostatic and electrothermal sensors and actuators. This sub-module offers two structural levels, phospho-silicate glass (PSG) films as the sacrificial material, and aluminum films for interconnections [6]. The PolyMEMS® process uses 3 masks and 4 lithography steps. Figure 1 shows the PolyMEMS® process scheme. Each module is described next.
The PolyMEMS® fabrication process begins with the growth of a 2000 Å silicon oxide at 1000 °C, followed by the deposition of 0.5 μm of undoped polysilicon by LPCVD at 650 °C. A 2 μm-thick phosphosilicate glass (PSG) film is then deposited by APCVD (Figure 1a). Using one lithography mask, windows are opened in the PSG to define the anchors of the structures. The PSG is etched by RIE dry etching with CF4, with AZ-2070 negative photoresist as the masking layer (Figure 1b). The structural polysilicon is deposited by LPCVD at 650 °C and doped n+ with phosphorus at 1000 °C; thicknesses of 1, 2 and 3 μm were deposited on different samples (Figure 1c). Using mask 2, the test structures are defined by RIE dry etching with SF6 and O2 in a 1.5:1 ratio, using a 2000 Å SiO2 film as the masking layer (Figure 1d). A 1000 Å aluminum film is then deposited (Figure 1e) and patterned with mask 3; this film is used as the electrode material and to improve the mechanical stability of the anchors. The aluminum is etched in a wet process with Al-Etch (Figure 1f). The last step, releasing the structures, is the removal of the sacrificial material (PSG). Reusing mask 3, a 2 μm photoresist film protects the aluminum from attack during the sacrificial etch. The release is performed by wet etching in a 49% HF solution, followed by a series of alternating isopropanol, DI water and isopropanol rinses to remove HF residues. Finally, the samples are dried in a convection oven at 120 °C. Figure 2 shows a cross section of a microstructure fabricated with the PolyMEMS® process.
B. CMOS Technology Module
The CMOS module is under development with the purpose of fabricating electronic circuits with a 3 μm minimum feature length and a ±5 V supply voltage. To match the threshold voltages of the transistors, a twin-well diffusion and a latch-up-free structure were used [7]. In addition, the CMOS process uses a self-aligned titanium silicide/polysilicon gate, which also serves as a local interconnect, together with polysilicon and aluminum for interconnection. These interconnection levels are enough to obtain and ensure the required intercommunication without an increase in difficulty or cost. Figure 3 shows the designed CMOS process steps in schematic form [10]. The CMOS module consists of 9 masks and 12 lithography steps. The main blocks are briefly discussed in the following. Initially the N- and P-wells are formed as twin wells by ion implantation, and the drive-in thermal diffusion is performed at 1200 °C. The junction depth (3.5 μm) is designed to be deep enough to avoid vertical punch-through. The active areas are defined using poly-buffered local oxidation of silicon (PBLOCOS) for precise feature definition. A p-channel stopper is used to reduce the spacing between devices while providing better isolation. A 200 Å gate oxide is thermally grown at 900 °C in dry oxidation. 400 nm of polysilicon is deposited by low-pressure chemical vapor deposition (LPCVD) at 650 °C, after which phosphorus doping is performed at 1000 °C for 30 minutes. The process is designed with shallow source/drain junctions (0.7 μm) as well as low gate and drain/source sheet resistances to minimize delay and increase the current drive of the devices. The titanium silicide (TiSi2) electrodes are defined using a self-aligned process performed at 900 °C in a nitrogen ambient. A precise low dose of boron ions is implanted to adjust the threshold voltage of the CMOS transistors. The interconnecting aluminum film is deposited and patterned, and finally a 450 °C sintering is performed. The main process specifications are summarized in Table 1. Figure 4 shows a cross section of the CMOS architecture obtained from a process simulation in the Athena environment of the SILVACO® suite [11]. The process includes a first alignment-mask step (not illustrated) to correctly align the subsequent photolithography steps.
A Monolithically Integrated MEMS Technology
The monolithic integration is performed on a single wafer that includes both circuits and microstructures. The sequence of process steps, with its several materials and chemical ambients, must be matched to one type of MEMS integration: the Post-, Pre- or Intra-CMOS approach [12-14]. The PolyMEMS INAOE® technology has a thermal budget limited to 1000 °C, and according to the CMOS thermal budget shown in Figure 5, a Pre-CMOS approach is unsuitable because of the very long additional time that would be required to carry out the twin-well drive-in at 1000 °C instead of 1200 °C. A Post-CMOS approach using the current PolyMEMS INAOE module is not compatible either, because the LPCVD polysilicon films must be doped and thermally annealed at 1000 °C for longer than 30 minutes. Given these considerations, we focus on the Intra-CMOS integration as the alternative: the CMOS module is kept safe (the INAOE has its own device-fabrication facility), and the required process flow is adapted and optimized to minimize both electronic and mechanical degradation in accordance with the thermal budget.
A. Thermal Analysis on CMOS Process Design
In developing a practical intra-CMOS process, it must be considered that the CMOS module is far more sensitive to thermal treatments than the polysilicon microstructures. Because the goal is the integration of these different fabrication modules, a thermal study of the dopant profiles is required. For the analytical study, all the CMOS doping profiles were considered, so a wide range of overall annealing times had to be included. Simulation routines were performed with the Athena and Atlas environments from SILVACO®, and all the modules were tuned with data obtained from the characterization of the 3 μm CMOS technology developed.
For example, the final part of the CMOS module is designed with a silicon/silicide interface for electrical interconnection, which limits the post-processing thermal cycles to a maximum of about 900 °C in order to avoid structural damage at the interface due to titanium silicide (TiSi2) reactivity [15]. On the other hand, at the initial part of the flow the P/N well drive-in requires a 1200 °C annealing temperature. At this high temperature we identified a breaking step, where the CMOS sequence can be interrupted, as a safe practical way to avoid undesirable post-thermal side effects on the CMOS architecture. The whole mechanical module can therefore be fabricated after the P/N well annealing, and the remaining CMOS steps completed afterwards, without affecting the overall sequence. A thermal simulation was performed to demonstrate the invariance and robustness of the P/N wells after a long annealing time at 1000 °C, such as that required for the microstructure fabrication. Figure 6 shows the simulated post-annealing effect on the P/N wells after 200, 400, 800, 1000 and 2000 minutes of thermal annealing in a nitrogen ambient at 1000 °C. It can be observed in Figure 6 that the P-well and N-well show a 3.5 μm junction depth after the 1200 °C annealing, and that both impurity profiles remain almost unchanged with annealing time; the annealing times were deliberately chosen long to demonstrate that any type of microstructure, even one requiring a very long anneal at 1000 °C, could be considered. The most significant issue observed after the very long post-annealing treatments was a slight surface-concentration variation in both wells, which is due to boron and phosphorus segregation at the well/capping-oxide interfaces [16].
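As a rough, back-of-the-envelope check of this thermal-budget argument (not part of the original SILVACO flow), the short Python sketch below compares the characteristic diffusion length added by a long 1000 °C anneal with that of the 1200 °C well drive-in, using approximate textbook Arrhenius parameters for boron and phosphorus in silicon; the drive-in time and the D0/Ea values are assumptions, not calibrated process data.

```python
# Rough estimate of added dopant redistribution during a long 1000 degC anneal,
# compared with the 1200 degC twin-well drive-in. D0/Ea are approximate
# textbook values for intrinsic diffusion in silicon (assumptions, not
# calibrated to the 3 um CMOS process described in the text).
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

DOPANTS = {          # D0 (cm^2/s), Ea (eV) -- approximate literature values
    "boron":      (0.76, 3.46),
    "phosphorus": (3.85, 3.66),
}

def diffusivity(d0, ea, temp_c):
    """Arrhenius diffusion coefficient in cm^2/s at temperature temp_c (degC)."""
    return d0 * math.exp(-ea / (K_B * (temp_c + 273.15)))

def diffusion_length_um(d0, ea, temp_c, minutes):
    """Characteristic diffusion length sqrt(D*t) in micrometres."""
    d = diffusivity(d0, ea, temp_c)
    return math.sqrt(d * minutes * 60.0) * 1e4

for name, (d0, ea) in DOPANTS.items():
    drive_in = diffusion_length_um(d0, ea, 1200.0, 180.0)   # assumed drive-in time
    extra    = diffusion_length_um(d0, ea, 1000.0, 2000.0)  # longest MEMS anneal
    print(f"{name:10s}  drive-in ~{drive_in:.2f} um   extra at 1000 degC ~{extra:.2f} um")
```

Under these assumptions the extra diffusion length contributed by even 2000 minutes at 1000 °C is a few tenths of a micrometre, small against the 3.5 μm well depth, which is consistent with the near-invariance of the simulated profiles.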
B. Trench for Microstructures
The electronic-mechanical coupling is another key step in the design of the integrated process. Figure 7a shows the step height h of a poly-microstructure facing the surface CMOS diffusions; the high aspect ratio of the microstructures can be clearly observed. In the intra-CMOS approach the microstructures are fabricated before the CMOS steps are completed, which means the CMOS photolithography steps are affected by the height of the already defined microstructures.
It is well known that CMOS features are directly affected by process variations such as misalignment, film-thickness variations due to the deposition techniques, or step coverage. Our integration approach is designed to work without any planarization technique; hence the microstructures depicted in Figure 7(a) would cause a lateral size shift during the photolithography steps used to build the CMOS devices.
In our proposed approach we consider the case in which the microstructures are placed inside a shallow trench. Figure 7(b) shows a general graphical representation of this approach; the trenched microstructures are then not critical for the minimum feature definition, considering the subsequent CMOS photolithography steps and the final interconnection with the microstructures. The depth of the trench depends on the specific microstructure arrangement but is never deeper than 6 μm. Since we use p-type (0 0 1) silicon wafers, when the shallow trench is etched with aqueous tetramethylammonium hydroxide (TMAH) the sloped sidewalls resulting from the four-fold crystal symmetry allow the deposition of interconnecting stripes.
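For orientation only, the sketch below estimates the geometry of such a TMAH-etched trench, assuming the sidewalls are the slow-etching {111} planes of a (0 0 1) wafer, inclined at roughly 54.74° to the surface; the 6 μm depth is the upper bound quoted above, while the 100 μm bottom width is a hypothetical value, not a dimension from the proposed process.

```python
# Geometry of an anisotropically etched trench in (001) silicon, assuming the
# sidewalls are {111} planes inclined at ~54.74 deg to the wafer surface.
# Illustrative only; actual wall angles depend on TMAH concentration/temperature.
import math

SIDEWALL_ANGLE_DEG = math.degrees(math.atan(math.sqrt(2.0)))  # ~54.74 deg

def mask_opening_for_flat_bottom(depth_um, bottom_width_um):
    """Mask opening needed so a trench of given depth keeps the requested flat bottom."""
    setback = depth_um / math.tan(math.radians(SIDEWALL_ANGLE_DEG))
    return bottom_width_um + 2.0 * setback

depth = 6.0      # um, deepest trench considered in the text
bottom = 100.0   # um, assumed flat area needed for a microstructure anchor (hypothetical)
print(f"sidewall angle: {SIDEWALL_ANGLE_DEG:.2f} deg")
print(f"lateral set-back per side: {depth / math.tan(math.radians(SIDEWALL_ANGLE_DEG)):.2f} um")
print(f"mask opening for a {bottom:.0f} um flat bottom: "
      f"{mask_opening_for_flat_bottom(depth, bottom):.2f} um")
```

The gentle slope (about 4.2 μm of lateral set-back per side for a 6 μm deep trench) is what makes step coverage of the interconnecting metal stripes over the trench edge feasible.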
Considering the general aspects of a trench, some morphology requirements must be handled carefully. In our approach, without a planarization step, the presence of material stacking around the trench complicates the subsequent photolithography and also makes it more difficult to define geometries over the PSG at the bottom of the trench. To compensate for the lack of a planarization step, we propose the use of one extra fabrication mask to etch the materials around the trench. With this extra mask the step height h disappears and both the CMOS and the microstructure definition become easier.
C. The proposed Generic and Modular Monolithic Integrated MEMS Process.
Figure 8 shows the fabrication steps of the entire integrated MEMS process designed, obtained using the Athena environment. This monolithic integrated MEMS fabrication process is divided into four main sections; the entire fabrication sequence is carried out with 13 masks and 16 photolithography steps on a p-type (1 0 0) silicon wafer (Figure 8a).
1) CMOS Part I: The process starts with an initial thin oxide (~200 Å) to define the alignment marks, and then the trenches are defined using an aqueous TMAH solution.
2) Polysilicon Microstructures: An insulator film is deposited for electrical isolation between the microstructures and the wafer surface. Then a sacrificial material is deposited, followed by the deposition, doping and patterning of the structural material inside the trenches. A thermal treatment at 1000 °C is performed to minimize the residual stress. Process steps: 7. Thermal oxide, T = 1000 °C, Tox = 0.3 μm (Figure 8i). 8. LPCVD intrinsic polysilicon, T = 650 °C, Tpoly = 0.5 μm (Figure 8i). 9. Sacrificial oxide deposition, Tox = 3.0 μm (Figure 8j). 10. Structural material deposition, 2.0 μm (Figure 8l). 11. Thermal annealing and doping at 1000 °C, t = 120 minutes in N2 (Figure 8m). 12. Using an additional mask, the material outside the trench is etched (Figure 8o).
3) CMOS Part II:
After the complete definition of the microstructures, the CMOS process sequence is carried out on the top surface of the wafer. The standard PBLOCOS CMOS process is performed while the microstructures remain covered by the stacked materials from the local oxidation process. The field oxidation has a thermal cycle of 1000 °C for 2 hours and also serves as a stress-reduction thermal cycle for the microstructures. The corresponding fabrication sequence is listed below. Process step: 13. PBLOCOS: deposition of stacked silicon dioxide, polysilicon and nitride films. This step protects the microstructures (Figure 8p).
4) Interconnections and Releasing:
The passivation layer is deposited to protect both the CMOS devices and the microstructures. The interconnection between the CMOS and the microstructures is carried out in the metallization step by using a sputtered aluminum film, which ensures proper step coverage from the top of the wafer (CMOS area) down to the bottom of the trench (microstructure area). The final step in the fabrication process is the sacrificial etch that releases the microstructures (Figure 8z); this last etch depends on the specific microstructure geometries and can be done by a dry or wet etching technique. Process steps: 25. Passivation film deposition (Figure 8y). 26. Aluminum deposition and patterning (simultaneous interconnection) (Figure 8y). 27. Sintering at 450 °C in forming gas.
D. Simulation Results
Figure 9 shows a cross-section of the simulated MEMS fabrication process; the main devices of this technology lie inside the trench. The modeled and simulated transistors have a gate length of 3 μm (the minimum dimension), and the current in all simulations is normalized to a width of 1 μm. Figure 10 shows the simulated drain current (Id) versus gate voltage (Vg) curves for both transistor types; the behavior was obtained at Vd = 100 mV with an interface charge of 5 × 10¹⁰ cm⁻². The figure also shows the change in Id after annealing the devices at 1000 °C for different periods of time. The Id-Vg characteristics of the transistors remain almost unchanged, indicating their robustness to the intra-process fabrication of the micromechanical structures. To further demonstrate the robustness of the CMOS process to the thermal treatments needed to complete the MEMS fabrication, Figure 11 shows the impact of the annealing time on the threshold voltage of the devices: CMOS devices and microstructures can be fabricated on the same substrate with minimal variation in the electronic-mechanical performance even after 2000 minutes of annealing time.
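To make the link between a threshold-voltage shift and the Id-Vg curves concrete, the following first-order, long-channel square-law sketch (not the SILVACO simulation itself) shows how a hypothetical 50 mV Vt shift after annealing would change the linear-region drain current at Vd = 100 mV; the mobility and the assumed Vt values are illustrative placeholders rather than parameters extracted from the simulated 3 μm devices.

```python
# First-order long-channel MOSFET model (linear region), used only to illustrate
# how a threshold-voltage shift after annealing would show up in the Id-Vg
# curves. Mobility and the Vt values are assumptions, not the calibrated
# parameters of the simulated 3 um devices.
EPS_OX = 3.45e-13          # F/cm, permittivity of SiO2
T_OX_CM = 200e-8           # 200 angstrom gate oxide, in cm
C_OX = EPS_OX / T_OX_CM    # F/cm^2

def id_linear(vg, vt, vd=0.1, mu=600.0, w_um=1.0, l_um=3.0):
    """Drain current (A) in the linear region for Vg > Vt, otherwise ~0."""
    if vg <= vt:
        return 0.0
    return mu * C_OX * (w_um / l_um) * ((vg - vt) * vd - 0.5 * vd * vd)

vt_before, vt_after = 0.80, 0.85   # assumed NMOS Vt before/after a long anneal
for vg in (1.0, 2.0, 3.0):
    i0, i1 = id_linear(vg, vt_before), id_linear(vg, vt_after)
    print(f"Vg={vg:.1f} V: Id {i0*1e6:.2f} uA -> {i1*1e6:.2f} uA "
          f"({100*(i1-i0)/i0:+.1f}%)")
```

The relative current change shrinks with increasing gate overdrive, which is why a small post-anneal Vt drift leaves the Id-Vg curves at Vg = 1-3 V nearly superimposed.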
The Test Chip
A test chip was designed to evaluate and assist the development of the proposed generic and modular integrated MEMS process. This test chip will be useful for identifying possible electrical and/or mechanical variations in the material properties and device performance caused by the fabrication process. Figure 13 shows the layout of the designed test chip. The chip size is 4.3 × 4.2 mm, and all the CMOS devices surround the mechanical structures. All test structures use a 2 × 5 terminal-array module to facilitate testing at the wafer level.
Design considerations of the test chip
In the test chip, all the CMOS devices surround the mechanical structures since they are always affected by external and internal stress and their location on the die is critical to guarantee the optimal performance of the designed device.A brief analysis about the stress is described below.
Usually the mechanical structures are tested at a wafer level and the residual stress is produced during the fabrication process.However, the technique required to attach the die to the package (adhesive, wire bond, molding compound), may apply stress gradients to the entire wafer [17,18].As the mechanical structures are stress sensitive components, it is convenient to locate them near the center of the die where the induced stresses are more uniform [19], [20].It is clear from the above that the design must include all the mechanical structures at the center of the die.
Conclusions
A generic and modular design approach for a monolithic MEMS process has been proposed that requires no specialized planarization techniques. The integration combines CMOS devices and polysilicon microstructures. From the simulation results, no significant degradation of the CMOS device performance is observed after the MEMS fabrication. The intra-CMOS approach proved to be the best option for a modular design of a MEMS fabrication process at LIMEMS-INAOE. The thermal budget of the modules plays a crucial role, because it sets the conditions for obtaining the complete set of devices fabricated near their optimal point, that is, without degradation of current handling, without a threshold-voltage shift in the CMOS process, and with minimum residual stress in the micromechanical devices. A CMOS-MEMS test chip was designed for evaluation during both process development and fabrication. Its test structures provide optical and electrical information about the material properties, in order to study the feasibility of integrating CMOS devices and mechanical structures.
The use of the 2 × 5 probe-pad array will improve the electrical measurements and allow statistical analysis. Polysilicon and aluminum were the base materials in the mechanical design of the test structures; nevertheless, other materials with a Young's modulus close to that of these materials, such as silicon-germanium, amorphous silicon or even copper, could be selected to fabricate the mechanical structures, and the same design would suffice to characterize the residual stress in the films. The information obtained from the test chip will be useful for tuning simulation routines and analytical expressions, thereby supporting the development of a new integration technology. The fabrication of the designed test chip was still in progress during the preparation of this paper.
Figure 3. Process flow of the 3 μm CMOS process designed.
Figure 4. A cross-section view of the 3 μm CMOS process designed.
Figure 5. Thermal budget for the 3 μm CMOS process. The dashed line limits the allowed temperature for the fabrication of microstructures in an intra-CMOS approach.
Figure 7. (a) Schematic of an integrated microstructure beside surface CMOS diffusions, showing the resulting aspect-ratio differences; (b) cross section of the generic integrated MEMS process proposed here.
Figure 10. Simulated Id vs. Vg curves for the transistors after three additional thermal cycles: (a) NMOS transistor and (b) PMOS transistor. The gate length of the transistors was 3 μm.
Figure 11. Threshold-voltage variation of the transistors after annealing at 1000 °C: (a) NMOS transistor and (b) PMOS transistor.
Figure 12. Simulated Ids vs. Vds curves for the transistors after 2000 minutes of annealing: (a) NMOS, (b) PMOS. The Vg values are fixed at 1, 2 and 3 volts. The gate length of the transistors is 3 μm.
Figure 13. Layout of the CMOS-MEMS test chip designed.
A 2 × 5 probe-pad array [6],[21] is used in the design of the test structures to facilitate testing with standard probe cards. The use of a 2 × N probe-pad arrangement also allows the pads to be an integral part of the test structures, avoids common buses among the test structures (preventing interference between different structures), and provides the highest degree of modularity. The designed test structures are classified into four categories: CMOS test structures, mechanical test structures, interconnection reliability test structures and functional micro-actuators, and they are distributed in modules across the chip. The test chip includes the following modules (highlighted in Figure 13): A) fabrication test structures checked during processing; B) device test structures (MOSFETs, capacitors, diodes); C) process test structures (sheet resistance, contact resistance); D) interconnection reliability test structures (continuity, sheet resistance inside the trench); E) stress monitors (buckling structures, rotating structure); F) micro-actuators (chevron arrays); and G) wiring pads (to wire the micro-actuators). Details of some mechanical and electrical test structures are shown in Figure 14.
Table 1. CMOS Module specifications
| 5,236 | 2017-11-26T00:00:00.000 | ["Engineering"] |
Use of light-weight foaming polylactic acid as a lung-equivalent material in 3D printed phantoms
The 3D printing of lung-equivalent phantoms using conventional polylactic acid (PLA) filaments requires the use of low in-fill printing densities, which can produce substantial density heterogeneities from the air gaps within the resulting prints. Light-weight foaming PLA filaments produce microscopic air bubbles when heated to 3D printing temperatures. In this study, the expansion of foaming PLA filament was characterised for two 3D printers with different nozzle diameters, in order to optimise the printing flow rates required to achieve a low density print when printed at 100% in-fill printing density, without noticeable internal air gaps. Effective densities as low as 0.28 g cm⁻³ were shown to be achievable with only microscopic air gaps. Light-weight foaming PLA filaments are a cost-effective method for achieving homogeneous lung-equivalency in 3D printed phantoms for use in radiotherapy imaging and dosimetry, featuring smaller air gaps than required to achieve low densities with conventional PLA filaments.
Introduction
Additive manufacturing, or 3D printing, has allowed the fabrication of a wide variety of jigs and tissue-mimicking phantoms in medical physics [1].These bespoke phantoms and modular components can be used to assess the performance of other devices (such as imaging and radiation therapy treatment systems) at a low cost [2] and can extend the functionality of commercial solutions.The ability to make modular anthropomorphic and even individualised phantoms, containing lung and bone mimicking materials, is useful when commissioning new equipment and techniques.
Previously reported approaches to lung-equivalent printing include conventional PLA printed at low in-fill densities and Woodfill (colorFabb, Netherlands), a composite of PLA and wood fibres [3]. Polymer foams and commercially available phantom materials, such as Gammex LN-300 and LN-450 (Gammex Inc, Middleton, USA) and CIRS inhale and exhale lung (CIRS Inc., Norfolk, USA), achieve lung equivalence without large air gaps and more closely resemble lung tissue, which features microscopic air-filled alveoli, reducing this uncertainty.
Light-weight foaming PLA is a 3D-printable filament containing a chemical foaming agent that decomposes and releases gas when exposed to filament extrusion temperatures used in fused deposition modelling 3D printing [11].This decomposition results in the creation of microscopic bubbles (less than 0.1 mm in diameter) within the extruded PLA, increasing the printed filament diameter relative to the diameter of the 3D printing extrusion nozzle.The size and quantity of bubbles, and therefore density of the material, varies with printing temperature and extrusion flow rate [11].
The objective of this study was to characterise the use of 3D-printable light-weight foaming PLA as a lung-mimicking material for use in radiology and radiation therapy phantoms.
Methods
For this study, eSUN PLA-LW (eSUN Industrial Co., Shenzhen, China) was used as the light-weight foaming PLA filament.A conventional PLA filament, eSUN PLA+, was used for comparisons.The cost of the eSUN PLA-LW was 2.1 times the cost of the eSUN PLA + per kilogram.
The 3D printing slicing software Cura (v5.1.0, Ultimaker, Utrecht, Netherlands) was used to prepare prints for an Ender 5 material-extrusion 3D printer (Shenzhen Creality 3D Technology Co Ltd, Shenzhen, China) with a 0.4 mm nozzle, a consistent layer height of 0.2 mm, a print speed of 80 mm s⁻¹ and no top or bottom layers, unless otherwise specified. The slicing software ideaMaker (v4.2.3, Raise3D, Irvine, USA) was used to prepare prints for a Raise3D Pro 2 material-extrusion 3D printer with a 0.8 mm nozzle, a consistent layer height of 0.4 mm, a print speed of 80 mm s⁻¹, and no top or bottom layers, unless otherwise specified. There were no variations in flow rate or layer height over the duration of the print (for example, for the initial layer). The fan was set to a speed of 100% after 3 or 1 layers on the Ender 5 and Raise3D Pro 2 printers, respectively (corresponding to heights of 0.6 and 0.4 mm).
The expansion of the foaming PLA was characterised for a variety of temperatures, using a method similar to that described by the filament manufacturer colorFabb [12]. Small hollow test cubes (2 × 2 × 2 cm) were each printed using the eSUN PLA-LW material with a single shell (or outer wall, or perimeter), 1 bottom layer, and no in-fill, at temperatures ranging from 200 to 260 °C (in 10 °C increments, 260 °C being the maximum temperature safely supported by the Ender 5 printer). Test cubes were printed one per job in the centre of the print bed, in contrast to the method described by colorFabb [12], where all cubes were printed in one job; per-model temperature variations are not supported in all slicing software, and print pauses associated with temperature changes increase total print time. Unlike the colorFabb method [12], print speed was not reduced, as the anticipated use of the filament was for 3D printed phantoms, which are already slow to print due to large volumes and high in-fill percentages, and print times would already be extended by the flow-rate reductions.
The width of the shell was measured for each side of the cube using calipers. This measurement excluded any unintended imperfections within the print, including the Z seam and any minor expansion at the base of the print (0.1 mm "elephant feet" were observed in this study). An expansion ratio was calculated as the ratio of the measured shell thickness to the measured shell thickness of an equivalent test cube printed with non-foaming eSUN PLA+ at a temperature of 220 °C. The shell thickness of the non-foaming eSUN PLA+ cube was expected to approximately match the nominal nozzle diameter used.
Subsequently, for each temperature, a test cube was printed using a gyroid in-fill pattern, variable in-fill densities, and an extrusion (flow) rate equal to the reciprocal of the calculated expansion ratio. It was expected that, at this flow rate, the smaller volume of PLA-LW material extruded through the nozzle would expand to match the extrusion diameter of non-foaming PLA+. Test cubes were printed at in-fill densities of 20%, 40%, 60%, 80% and 100% to establish the relationship between in-fill density and radiological properties. The assumption that the reciprocal of the expansion ratio is an appropriate flow rate can be validated by inspecting 100% in-fill prints for under- or over-extrusion, or by printing single-shell cubes with variable flow rates, as suggested by colorFabb [12]; a minimal sketch of this arithmetic is given below.
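The following sketch reproduces that arithmetic; the PLA-LW wall-width measurements in the example are hypothetical, and only the 0.44 mm PLA+ baseline is taken from the results reported later in this study.

```python
# Helper reproducing the arithmetic described above: the expansion ratio is the
# measured single-wall width of a PLA-LW test cube divided by the wall width of
# a PLA+ reference cube, and the slicer flow rate is set to its reciprocal.
# The per-temperature wall widths below are illustrative, not reported values.
def expansion_ratio(lw_wall_mm, pla_ref_wall_mm):
    """Ratio of foamed wall width to the non-foaming PLA+ reference width."""
    return lw_wall_mm / pla_ref_wall_mm

def derived_flow_rate_percent(ratio):
    """Flow-rate setting (as a percentage) that compensates for foaming expansion."""
    return 100.0 / ratio

pla_ref = 0.44                      # mm, PLA+ wall width on the 0.4 mm nozzle
for temp_c, lw_wall in [(200, 0.55), (230, 0.90), (260, 1.20)]:  # hypothetical
    r = expansion_ratio(lw_wall, pla_ref)
    print(f"{temp_c} degC: expansion ratio {r:.2f} -> flow {derived_flow_rate_percent(r):.0f}%")
```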
The CT number, effective mass density, ρ eff, and effective relative electron density, RED eff, of the printed test cubes were characterised via CT imaging on a Siemens SOMATOM scanner operating at 120 kVp (Siemens Healthineers AG, Erlangen, Germany) with a slice thickness of 0.5 mm, and a TomoTherapy system operating at 3.5 MVp (Accuray, Sunnyvale, USA) with a slice thickness of 1 mm. The mean, minimum and maximum CT numbers were sampled from the acquired scans in a central 1 × 1 × 1 cm³ region of interest using ImageJ (v1.52p, National Institutes of Health, Bethesda, USA). The ρ eff and RED eff values were interpolated using the CT numbers sampled in the central 1 × 1 × 1 cm³ region of interest of the cubes and characterisation data obtained at commissioning of these imaging systems using a Gammex Model 467 tissue characterization phantom (Gammex Inc, Middleton, USA). To reduce uncertainties in the radiological characterisation due to beam-hardening effects, all CT imaging was performed with the test cubes surrounded by blocks of water-equivalent plastic. This method of characterisation is consistent with that used by Dancewicz et al. [3].
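As an illustration of the interpolation step described above, the sketch below converts a sampled mean CT number to an effective mass density by piecewise-linear interpolation over a calibration curve; the calibration pairs are placeholders standing in for the commissioning data from the Gammex tissue-characterisation phantom, which are not reproduced here.

```python
# Sketch of the CT-number-to-density lookup described above: the mean HU sampled
# in the cube's central ROI is converted to effective mass density by linear
# interpolation between calibration points from a tissue-characterisation
# phantom. The calibration pairs below are placeholders, not commissioning data.
import numpy as np

# (CT number [HU], mass density [g/cm^3]) -- hypothetical kV calibration curve
calibration = np.array([
    [-1000.0, 0.001],   # air
    [ -700.0, 0.30],    # lung-equivalent insert
    [    0.0, 1.00],    # water
    [  900.0, 1.53],    # bone-equivalent insert
])

def hu_to_density(mean_hu):
    """Piecewise-linear interpolation of effective mass density from CT number."""
    hu, rho = calibration[:, 0], calibration[:, 1]
    return float(np.interp(mean_hu, hu, rho))

for hu in (-720.0, -350.0, 20.0):
    print(f"mean CT number {hu:7.1f} HU -> rho_eff ~ {hu_to_density(hu):.2f} g/cm^3")
```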
To facilitate comparison of the new PLA-LW material against the conventional PLA, a set of comparison test cubes were printed using the eSUN PLA + filament at infill densities of 20%, 40%, 60%, 80% and 100% using a gyroid in-fill pattern.These calibration cubes were similarly imaged to derive linear relationships between in-fill density and ρ eff and RED eff at both kV and MV energies.The PLA-LW and PLA + cubes were also compared qualitatively, by visual inspection of the acquired images.Reproducibility was assessed by printing of PLA-LW test cubes on the Ender 5 printer at a later date with a new spool of filament, and comparison of kV-derived ρ eff and RED eff values.
Results
The thickness of the outer contour wall printed with PLA+ on the Ender 5 and Raise3D Pro 2 printers was 0.44 ± 0.03 mm and 0.84 ± 0.03 mm, respectively, slightly larger than the respective nominal nozzle diameters of 0.4 and 0.8 mm. These PLA+ measurements were used as the baseline for calculating the expansion ratios of the PLA-LW printed test cubes, which are shown in Table 1.
The radiological densities (ρ eff and RED eff for both kV and MV energies) of all test-cubes printed with an in-fill density of 100% are shown for the Ender 5 0.4 mm nozzle and Raise3D Pro 2 0.8 mm nozzle in Tables 2 and 3, respectively.
Effective mass densities as low as 0.28 g cm⁻³ (and correspondingly low effective relative electron densities) were achievable at 100% in-fill density, with only microscopic air gaps.
The relationship between in-fill density and kV and MV CT-derived RED across both printers for different temperatures is shown in Fig. 1.
The kV-derived radiological densities were reasonably reproducible across initial and repeated printing of PLA-LW test cubes, with differences exceeding 0.03 g cm⁻³ observed for only 5 (of 35) combinations of temperature, flow rate and in-fill density. These differences were observed at lower temperatures (≤ 230 °C) and may relate to environmental conditions during printing or storage of the filament, or to variations in the filament itself.
Only a minimal increase in effective density was observed for the PLA-LW material printed at 230 °C or higher as the in-fill density increased from 60% to 100% on the Ender 5 printer with the 0.4 mm nozzle; this was not observed for test cubes printed at lower temperatures or on the Raise 3D Pro 2 printer with the larger nozzle. Measurements of the mass of these test cubes indicated that the similarity in density was physical (i.e., not an imaging or sampling error). This may be due to a printing issue, such as a higher-than-expected flow rate resulting from the higher temperatures, or a limitation in the method used to optimise the flow-rate setting.
As an alternative to the approach described here, the PLA-LW in-fill density could be reduced, instead of the flow rate.This approach could allow elimination of air gaps between extrusion lines (with gaps being filled by the foaming PLA).However, this approach has limitations: the external dimensions of the printed objects would be larger than desired, requiring larger tolerances to be used where parts are intended to be fit together, and over-extrusion at layer transition seams resulting from higher temperatures may introduce print defects (see Fig. 3).
While the PLA-LW material used in this study was approximately 2.1 times the cost of the PLA+ filament per kilogram, it is the more cost-efficient material when printing lung-density objects, given the 2.8× reduction in flow rate seen at high temperatures (meaning 1 kg of PLA-LW yields roughly 2.8 times the extruded length of 1 kg of non-foaming PLA).
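The cost comparison follows directly from those two ratios, as the short calculation below makes explicit (figures taken from the statements above).

```python
# Quick arithmetic behind the cost comparison above: PLA-LW costs 2.1x as much
# per kilogram, but at the high-temperature flow-rate reduction one kilogram
# yields roughly 2.8x the extruded length of non-foaming PLA.
cost_ratio_per_kg = 2.1
length_ratio_per_kg = 2.8
relative_cost_per_length = cost_ratio_per_kg / length_ratio_per_kg
print(f"PLA-LW cost per unit extruded length: {relative_cost_per_length:.2f}x PLA+")
# -> ~0.75x, i.e. roughly 25% cheaper per unit of extruded (lung-density) material
```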
One limitation in the use of the PLA-LW filament is that it is only able to achieve densities similar to water at lower printing temperatures, and so the fabrication of inhomogeneous phantoms containing lung- and soft-tissue-mimicking media would either require printing of components separately and subsequent assembly, the use of a system supporting two or more filaments quasi-simultaneously [7], or the modulation of printing temperature, which would increase print time due to pauses for heating and cooling, and could result in oozing or additional stress on the hot-end.
The results demonstrate that PLA-LW supports the printing of lower-density tissue substitutes at greater in-fill densities, resulting in smaller air gaps. This is illustrated in Fig. 2.
Discussion
The effective density achievable at 100% in-fill was 0.28 g cm⁻³, approximately 25% of that of conventional PLA at 100% in-fill, and close to 50% of that of the low-density Woodfill composite, reported as 0.53 g cm⁻³ by Dancewicz et al. [3].
The radiological characteristics in the kV and MV imaging beams were consistent, with differences between effective densities similar in magnitude to the sampled standard deviations. This suggests that the material contains no significant quantity of high-Z ingredients that would have a radiopacifying effect at lower energies. This consistency allows the PLA-LW material to be used as a lung-equivalent material in dosimetry studies, for which the characteristics of low in-fill PLA have been described previously [4,13].
There was a substantial difference in expansion between the two printers, which may be due to differences in hot-end design, or to the increased thermal mass of the material extruded through the larger nozzle diameter. If the extrusion maintains a temperature at which the chemical foaming agent can decompose for a longer amount of time, this will result in increased volume expansion. The thermal mass effect may also explain why there were minimal gains (in terms of reduction of printed density) with increasing temperature beyond 240 °C on the Raise 3D Pro 2 printer.
Greater volume expansion can be achieved at lower temperatures using a larger nozzle, which also reduces printing times (due to the increased layer height).
Based on the variations observed between the two printers, and the potential for filament batch variations [5], it is recommended that calibration cubes to characterise the relationship between in-fill density, temperature and radiological properties should be printed on other systems and analysed, as described in this study, before use in the fabrication of phantoms. For some printers, depending on printer bed-height calibration or print surface, modification of initial layer flow rate, layer height or fan settings may be required for adhesion.
The use of an increased in-fill density for PLA-LW, compared to non-foaming PLA, would increase travel distance of the print head and result in longer prints.
The high printing temperatures seen in this study should be used with caution, due to the risk of degradation of any polytetrafluoroethylene (PTFE) within the hot end of the printer. For example, for the Raise 3D Pro 2 used in this study, there is minimal benefit to printing at a temperature above 240 °C, so this temperature should not be exceeded.
Conclusion
This study described a method for optimising printer parameters for a light-weight foaming PLA in order to achieve low mass-and electron-densities with smaller air gaps than seen for conventional PLA filaments.This material and technique can be used as a cost-effective method to achieve lung-equivalency in 3D printed phantoms for use in imaging and dosimetry.
Funding Open Access funding enabled and organized by CAUL and its Member Institutions
Declarations
Conflict of interest The authors have no relevant financial or non-financial interests to disclose.
Informed consent
The study did not use any data for which ethical review or informed consent was necessary.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material.If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.To view a copy of this licence, visit http://creativecommons. org/licenses/by/4.0/.
Fig. 1 Relationship between in-fill density and RED for test cubes printed on the Ender 5 printer with 0.4 mm nozzle and the Raise 3D Pro 2 printer with 0.8 mm nozzle.
The resources used in this work were provided by the Herston Biofabrication Institute Cancer Care Services research program, supported by funding from the Royal Brisbane & Women's Hospital Foundation and Metro North Hospital and Health Service.
Table 2 Measured effective density, ρ eff, and effective relative electron density, RED eff, for kV and MV imaging beams for test cubes printed on the Ender 5 printer with 0.4 mm nozzle
Table 1 Expansion ratio and derived flow rate for PLA-LW on the Ender 5 and Raise3D Pro 2 printers
Table 3 Measured effective density, ρ eff, and effective relative electron density, RED eff, for kV and MV imaging beams for test cubes printed on the Raise3D Pro 2 printer with 0.8 mm nozzle
| 3,787 | 2023-09-06T00:00:00.000 | ["Physics", "Materials Science"] |
Proteomic Analysis of Altered Extracellular Matrix Turnover in Bleomycin-induced Pulmonary Fibrosis
Fibrotic disease is characterized by the pathological accumulation of extracellular matrix (ECM) proteins. Surprisingly, very little is known about the synthesis and degradation rates of the many proteins and proteoglycans that constitute healthy or pathological extracellular matrix. A comprehensive understanding of altered ECM protein synthesis and degradation during the onset and progression of fibrotic disease would be immensely valuable. We have developed a dynamic proteomics platform that quantifies the fractional synthesis rates of large numbers of proteins via stable isotope labeling and LC/MS-based mass isotopomer analysis. Here, we present the first broad analysis of ECM protein kinetics during the onset of experimental pulmonary fibrosis. Mice were labeled with heavy water for up to 21 days following the induction of lung fibrosis with bleomycin. Lung tissue was subjected to sequential protein extraction to fractionate cellular, guanidine-soluble ECM proteins and residual insoluble ECM proteins. Fractional synthesis rates were calculated for 34 ECM proteins or protein subunits, including collagens, proteoglycans, and microfibrillar proteins. Overall, fractional synthesis rates of guanidine-soluble ECM proteins were faster than those of insoluble ECM proteins, suggesting that the insoluble fraction reflected older, more mature matrix components. This was confirmed through the quantitation of pyridinoline cross-links in each protein fraction. In fibrotic lung tissue, there was a significant increase in the fractional synthesis of unique sets of matrix proteins during early (pre-1 week) and late (post-1 week) fibrotic response. Furthermore, we isolated fast turnover subpopulations of several ECM proteins (e.g. type I collagen) based on guanidine solubility, allowing for accelerated detection of increased synthesis of typically slow-turnover protein populations. This establishes the presence of multiple kinetic pools of pulmonary collagen in vivo with altered turnover rates during evolving fibrosis. These data demonstrate the utility of dynamic proteomics in analyzing changes in ECM protein turnover associated with the onset and progression of fibrotic disease.
The extracellular matrix (ECM) 1 comprises an intricate network of cell-secreted collagens, proteoglycans, and glycoproteins providing structural and mechanical support to every tissue. The dynamic interplay between cells and ECM also directs cell proliferation, migration, differentiation, and apoptosis associated with normal tissue development, homeostasis, and repair (1,2). Tissue repair following acute injury is typically characterized by the recruitment of inflammatory cells, enzymatic degradation of ECM immediately adjacent to the damaged tissue site, and subsequent infiltration of fibroblasts depositing new ECM. However, in the case of chronic tissue injury and inflammation, abnormal signaling pathways can stimulate uncontrolled ECM protein deposition, ultimately resulting in fibrosis and organ failure (3)(4)(5)(6). In fact, fibrotic diseases including idiopathic pulmonary fibrosis, liver cirrhosis, systemic sclerosis, and cardiovascular disease have been estimated to account for over 45% of deaths in the developed world (1).
Despite the wide prevalence of fibrotic diseases, there is currently a paucity of anti-fibrotic drug treatments and diagnostic tests (7,8). Median survival rates for idiopathic pulmonary fibrosis, for example, range from only two to five years following diagnosis (9,10). Failure in the development of successful anti-fibrotic treatments can in part be attributed to a poor understanding of the active and dynamic role played by the ECM during various stages of fibrotic disease. ECM components influence myofibroblast differentiation not only through their modulation of fibrogenic growth factor activity (e.g. TGF-), but also through mechanotransductive pathways whereby cells interpret altered ECM mechanical properties (3,5,(11)(12)(13). The search for novel target pathways in the development of anti-fibrotic therapies would benefit from a better understanding of dynamic ECM synthesis and degradation associated with the various stages of fibrotic disease.
The combination of stable isotope labeling and proteomic analysis provides a new approach for interrogating dynamic changes in ECM protein synthesis associated with fibrotic disease. We have developed a platform termed "dynamic proteomics," whereby protein synthesis rates from tissue samples are measured following the administration of stable isotope tracers (e.g. 2 H, 15 N) (14). Label incorporation into newly synthesized proteins is assessed via LC/MS analysis of mass isotopomer distributions in peptides derived from parent proteins through enzymatic degradation, providing a means to quantify the fractional synthesis rate (FSR) of individual proteins over the labeling period. Unlike traditional static proteomic techniques, this strategy provides valuable information regarding which proteins are actively synthesized or degraded during any specific stage of the disease process. Moreover, as measurements of label incorporation do not fluctuate based on the amount or yield of protein isolated (14 -16), dynamic proteomic strategies also offer additional robustness relative to traditional quantitative proteomic techniques.
The detection of ECM components in highly cellular tissues such as liver and lung poses an additional stumbling block in the proteomic analysis of fibrotic ECM. The identification of less abundant matrix components is limited by the overwhelming number of cellular proteins present in standard homogenized tissue samples. Standard global protein fractionation techniques (e.g. gel electrophoresis) are inefficient at enriching targeted subsets of proteins. Tissue decellularization techniques commonly utilized in regenerative medicine offer a novel approach toward the enrichment of ECM proteins prior to proteomic analysis (17). Tissue samples are incubated under mechanical agitation in the presence of weak detergents that solubilize cell membranes, releasing cellular protein components into solution while keeping the surrounding structural ECM intact. This technique has recently been applied in the compositional proteomic analysis of cardiovascular, lung, and colon tissues, leading to the identification of ECM-related proteins previously not associated with those tissues (11, 18 -20).
We present here the first study to combine dynamic proteomics with tissue decellularization in order to analyze altered ECM protein synthesis associated with pulmonary fibrosis. Bleomycin and sham-dosed mice were labeled for up to three weeks with heavy water ( 2 H 2 O), and lung tissue was subsequently collected and fractionated into cellular and extracellular components. Further fractionation of ECM based on guanidine solubility resulted in the identification of protein fractions with kinetically distinct characteristics composed of a variety of collagens, basement membrane proteoglycans, and microfibrillar proteins. Label incorporation into ECM proteins in sham-dosed control lungs was generally faster in the guanidine-soluble fraction, suggesting that the insoluble pool reflected more stable, slower-turnover matrix components. In bleomycin-dosed lungs, however, there was a significant increase in the synthesis of both guanidine-soluble and insoluble ECM proteins. These labeling and fractionation methods should be easily adaptable to a variety of animal and human tissue types and could provide a new approach toward actively monitoring the dynamic changes in ECM synthesis and composition associated with fibrotic disease.
EXPERIMENTAL PROCEDURES
Animal Protocols—10-week-old C57Bl/6 mice (Jackson, Sacramento, CA) underwent ²H₂O labeling according to a protocol similar to that previously described (21). Briefly, animals received a bolus intraperitoneal injection of ²H₂O in 0.9% NaCl to bring total body water enrichment to ~5%, followed by 8% ²H₂O drinking water to maintain body water enrichment at 5% for the remainder of the study. Shortly following initial ²H₂O administration, mice were dosed intratracheally with 1.5 units/kg of bleomycin (Sigma, St. Louis, MO) or saline as sham treatment, similar to that previously described (22). Sham-dosed mice were euthanized at 6 and 21 days (n = 3), and bleomycin-dosed mice were euthanized at 5 (n = 3) and 17 or 21 days (n = 1, 2). Premature euthanization of some mice (day 5 or day 17) was performed because of excessive weight loss and morbidity relative to control animals associated with bleomycin exposure. Plasma was collected via cardiac puncture. Bronchial lavage was performed with 0.9% NaCl. Lung tissue was then perfused with 0.9% NaCl, collected, snap frozen in liquid nitrogen, and stored at −80 °C. Details regarding individual animal weights and labeling durations are provided in Table I. Approximate labeling times of 1 and 3 weeks are reported hereinafter to simplify interpretation of the data. All procedures were Institutional Animal Care and Use Committee approved.
Lung Tissue Preparation—Sequential extraction of lung tissue was performed to fractionate cellular and extracellular proteins, similar to previous work (23). 50 mg of lung tissue was minced with a razor blade and placed in 2-ml screw-cap vials. Tissues were rinsed four times with cold PBS for 5 min on a benchtop rotator to remove residual blood proteins. Tissues were then suspended in 0.5 M NaCl in 10 mM 8). Proteins were reduced with TCEP (5 mM) for 20 min at room temperature with vortexing and then incubated with iodoacetamide (10 mM) in the dark for 20 min to chemically modify reduced cysteines. Proteins were then digested with trypsin (Promega) at 37 °C overnight using a 1:25 trypsin:protein mass ratio. Guanidine-insoluble protein fractions were processed in an identical manner, using a volume of trypsin sufficient for 80 μg of protein.
The following day, formic acid was added to a total concentration of 5%, and samples were centrifuged at 14,000 × g for 30 min. The supernatant was transferred to a fresh tube, desalted with a C18 spec tip (Varian, Palo Alto, CA), dried via vacuum centrifugation, and resuspended in 0.1% formic acid/3% acetonitrile prior to LC/MS analysis.
Whole lung tissue homogenate was prepared using a Fast Prep-24 (MP Biomedical, Burlingame, CA) bead mill. 50 mg of lung tissue was suspended in H₂O at a 10:1 volume:mass ratio with protease inhibitors and two 2.3-mm chrome steel beads (BioSpec, Bartlesville, OK) in a 2-ml screw-cap tube. Samples were homogenized at high speed three times for 30 s with 5-min intervals on ice and stored at −80 °C. Proteins were precipitated with acetone at a 5:1 acetone:homogenate ratio by incubation at −20 °C for 20 min followed by centrifugation at 2000 × g for 5 min at 4 °C prior to hydrolysis and GC-MS analysis.
Plasma ²H₂O Measurement—²H₂O enrichment from 100 μl of mouse plasma was determined using a previously described method (24). Briefly, body water was evaporated from plasma via overnight incubation at 80 °C. Samples were then mixed with 10 M NaOH and acetone and underwent a second overnight incubation. This material was extracted in hexane and dried with Na₂SO₄ prior to GC-MS analysis alongside a standard curve of samples prepared at known ²H₂O concentrations.
LC-MS Peptide Analysis and Kinetic Calculations-Trypsin-digested peptides were analyzed on an Agilent 6520 quadrupole timeof-flight mass spectrometer with a 1260 Chip Cube nano-electrospray ionization source (Agilent Technologies, Santa Clara, CA). Peptides were separated chromatographically using a Polaris HR chip (Agilent #G4240 -62030) consisting of a 360-nl enrichment column and a 0.075 ϫ 150 mm analytical column, each packed with Polaris C18-A stationary phase with a 3-m particle size. Mobile phases were (A) 5% v/v acetonitrile and 0.1% formic acid in deionized water and (B) 95% acetonitrile and 0.1% formic acid in deionized water. Peptides were eluted at a flow rate of 350 nl/min during a 27-min nano-LC gradient (2% B at 0 min, 5% B at 1 min, 30% B at 18 min, 50% B at 22 min, 90% B at 22.1-33 min, 2% B at 33.1 min; stop time: 38 min). Each sample was analyzed twice, once for protein/peptide identification in data-dependent MS/MS mode and once for peptide isotope analysis in MS-only mode. Acquisition parameters were as follows: MS/MS acquisition rate ϭ 6 Hz MS and 4 Hz MS/MS with up to 12 precursors per cycle; MS acquisition rate ϭ 0.9 Hz; ionization mode ϭ positive electrospray; capillary voltage ϭ Ϫ1980 V; drying gas flow ϭ 4 l/min; drying gas temperature ϭ 290¦°C; fragmentor ϭ 170 V; skimmer ϭ 65 V; maximum precursor per cycle ϭ 20; scan range ϭ 100 -1700 m/z (MS), 50 -1700 m/z (MS/MS); isolation width (MS/ MS) ϭ medium (ϳ4 m/z); collision energy (V) ϭ Ϫ4.8 ϩ 3.6*(precursor m/z/100); active exclusion enabled (exclude after one spectrum, release after 0.12 min); charge state preference ϭ 2, 3, Ͼ3 only, sorted by abundance; total ion chromatogram target ϭ 25,000; reference mass ϭ 922.009798 m/z. Acquired MS/MS spectra were extracted and searched using Spectrum Mill Proteomics Workbench software (version B.04.00, Agilent Technologies) and a UniProtKB/Swiss-Prot mouse protein database (16,473 proteins, release 2012 02). Data files were extracted with the following parameters: fixed modification ϭ carbamidomethylation of cysteine; scans with the same precursor mass merged by spectral similarity within tolerances (retention time Ϯ 10 s, mass Ϯ 1.4 m/z); precursor charge maximum z ϭ 6; precursor minimum MS1 S/n ϭ 10; and 12 C precursor m/z assigned during extraction. Extracted files were searched with the following parameters: enzyme ϭ trypsin; Mus musculus; fixed modification ϭ carbamidomethylation of cysteine; variable modifications ϭ oxidized methionine ϩ pyroglutamic acid ϩ hydroxylation of proline; maximum number of missed cleavages ϭ 2; minimum matched peak intensity ϭ 30%; precursor mass tolerance ϭ 10 ppm; product mass tolerance ϭ 30 ppm; minimum number of detected peaks ϭ 4; maximum precursor charge ϭ 3. Search results were validated at the peptide and protein levels with a global false discovery rate of 1%. Details regarding specific proteins identified and unique peptide coverage are presented in the supplemental material.
Proteins with scores greater than 11.0 were reported, and a list of peptides with scores greater than 6 and scored peak intensities greater than 50% was exported from Spectrum Mill and condensed to a non-redundant peptide formula database using Excel. This database, containing peptide elemental composition, mass, and retention time, was used to extract MS spectra (M0 -M3) from corresponding MS-only acquisition files with the Find-by-Formula algorithm in Mass Hunter Qualitative Analysis software (version B.05.00, Agilent Technologies). MS spectra were extracted with the following parameters: extracted ion chromatogram integration by Agile integrator; peak height Ͼ 10,000 counts; include spectra with average scans Ͼ 12% of peak height; no MS peak spectrum background; unbiased isotope model; isotope peak spacing tolerance ϭ 0.0025 m/z plus 12.0 ppm; mass and retention time matches required; mass match tolerance ϭ Ϯ12 ppm; retention time match tolerance ϭ Ϯ0.8 min; charge states z ϭ ϩ2 to ϩ4; chromatogram extraction ϭ Ϯ12 ppm (symmetric); extracted ion chromatogram extraction limit around expected retention time ϭ Ϯ1.2 min.
Details of FSR calculations were described previously (14). Briefly, in-house software was developed to calculate the peptide elemental composition and curve fit parameters for predicting isotope enrichments of peptides in newly synthesized proteins based on precursor body water enrichment (p) and the number (n) of amino acid C-H positions per peptide actively incorporating H and 2 H from body water. Incorporation of 2 H into tryptic peptides decreases the relative proportion of M0 within the overall isotope envelope spanning M0 -M3. Fractional synthesis was calculated as the ratio of excess %M0 (EM 0 ) for each peptide to the maximal absolute EM 0 possible at the measured body water enrichment. Data handling was performed using Microsoft Excel templates, with input of precursor body water enrichment for each subject, to yield FSR data at the protein level.
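A minimal sketch of that fraction-new calculation is given below, assuming the baseline and fully-turned-over M0 abundances for a peptide are already known; in the actual pipeline the asymptotic value is computed from the peptide's elemental composition, its number of labelable C-H sites (n) and the precursor body-water enrichment (p), and all numbers shown here are hypothetical.

```python
# Minimal sketch of the fractional-synthesis calculation described above:
# fraction new = EM0 (measured depletion of the monoisotopic peak) divided by
# the maximal EM0 possible at the measured body-water enrichment. The asymptote
# is supplied directly here as a placeholder; in the real pipeline it is
# predicted from the peptide's composition, n and precursor enrichment p.
def fraction_newly_synthesized(m0_unlabeled, m0_measured, m0_fully_turned_over):
    """All three arguments are fractional M0 abundances of the same peptide."""
    em0 = m0_measured - m0_unlabeled                 # measured change in M0
    em0_max = m0_fully_turned_over - m0_unlabeled    # change if 100% new protein
    return em0 / em0_max

# Hypothetical numbers for one tryptic peptide after 3 weeks of 2H2O labeling:
f_new = fraction_newly_synthesized(
    m0_unlabeled=0.620,          # baseline isotope envelope (no label)
    m0_measured=0.560,           # observed in the labeled sample
    m0_fully_turned_over=0.500,  # predicted at ~5% body water, 100% new protein
)
print(f"fraction newly synthesized over the labeling period: {f_new:.0%}")  # ~50%
```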
Data from individual biological samples were filtered to exclude protein measurements with fewer than two peptide spectra measurements per protein. FSR data at individual time points (1 or 3 weeks) are reported as a cumulative value (percentage of protein newly synthesized over the entirety of the labeling period). The fold change in mean protein FSR between groups (bleomycin:control) was determined for both early (0 to 1 week) and late (1 to 3 weeks) fibrotic response by calculating the slope increase of FSR between collected data points. Protein FSR on day 0 was assumed to be 0%.
GC-MS OHPro Analysis-GC-MS analysis of OHPro FSR was carried out as previously described (21). Briefly, lung tissue protein fractions and whole homogenate proteins were hydrolyzed in 6N HCl at 110¦°C for 18 h. Extracted protein fractions were spiked with known amounts of 2 H 3 -labeled OHPro to provide an internal standard for quantitation. The amine group was deactivated with a solution of pentafluorabenzyl bromide, acetonitrile, water, and phosphate buffer. In order to silylate the hydroxyl moiety of OHPro, samples were incubated with a solution of acetonitrile, N-methyl-N-[tert-butyldimethyl-silyl]trifluoroacetamide, and methylimidizole. This material was extracted in petroleum ether and dried with Na 2 SO 4 . Derivatized OHPro was analyzed via GC-MS using selected ion monitoring of 424, 425, and 427 m/z ions in negative chemical ionization mode.
Incorporation of 2 H into OHPro was calculated as excess %M1 (EM 1 ). Fractional collagen synthesis was calculated as the ratio of EM 1 to the maximal EM 1 possible at the measured body water enrichment. The concentration of OHPro was determined using the 2 H 3 -OHPro internal standard and a standard curve analyzed with each batch of samples. Total lung collagen was determined using total lung tissue weights recorded at the time of collection.
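The concentration measurement is a standard isotope-dilution calculation; the sketch below illustrates it with hypothetical peak areas and spike amounts, and the response factor stands in for the slope of the batch standard curve mentioned above.

```python
# Sketch of the isotope-dilution quantitation described above: hydroxyproline
# (OHPro) concentration is obtained from the peak-area ratio of the analyte ion
# to the spiked 2H3-OHPro internal standard, scaled by the amount spiked and a
# response factor taken from the batch standard curve. All numbers are
# hypothetical, for illustration only.
def ohpro_amount_nmol(area_analyte, area_internal_std, spike_nmol,
                      response_factor=1.0):
    """Amount of OHPro in the hydrolysate, by isotope dilution."""
    return response_factor * (area_analyte / area_internal_std) * spike_nmol

amount = ohpro_amount_nmol(
    area_analyte=8.4e5,       # SIM area, m/z 424 (unlabeled OHPro)
    area_internal_std=4.2e5,  # SIM area, m/z 427 (2H3-OHPro spike)
    spike_nmol=50.0,          # known amount of internal standard added
    response_factor=0.98,     # slope from the standard curve run with the batch
)
print(f"OHPro in fraction: {amount:.1f} nmol")   # -> ~98 nmol
```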
Pyridinoline Cross-link Quantitation-Pyridinoline cross-links were quantitated by means of ELISA using the MicroVue Serum PYD Assay (Quidel, San Diego, CA) per the manufacturer's instructions. Lung tissue protein fractions were hydrolyzed as described previously for GC-MS analysis and diluted within the working concentration range of the assay similarly to what was previously described (25). Samples were adjusted to neutral pH with NaOH prior to analysis.
Statistical Analyses—Means and standard deviations (error bars) of fractional protein synthesis between groups (n = 3) were compared via Student's t test at each time point. A Holm-Sidak correction for multiple comparisons was performed for all ECM proteins detected within each protein fraction. Analysis of variance was used to assess statistically significant differences among three or more groups. Statistical significance was defined as a p value < 0.05.
RESULTS
Verification of Sequential Protein Extraction-Dynamic proteomic analysis of mouse lung ECM protein fractional synthesis was performed following fibrotic induction with bleomycin or sham treatment. Lung tissue proteins were fractionated to enrich for ECM and then trypsinized and analyzed via LC-MS to measure shifts in peptide mass isotopomer distributions (Fig. 1). Measurements of ECM protein enrichment from lung tissue were carried out through the sequential extraction of proteins into four fractions, NaCl-soluble, SDS-soluble, guanidine HCl-soluble, and insoluble, the latter two of which were enriched for ECM proteins (e.g. collagens, proteoglycans, etc.) as determined from LC-MS peptide spectral identification (Fig. 2). No appreciable enrichment for ECM proteins was detected in the NaCl-or SDS-soluble fractions.
Kinetics of Guanidine-soluble ECM Proteins-Guanidine-soluble protein extraction successfully enriched for a variety of pulmonary proteoglycans, as well as additional ECM proteins including fibronectin, collagen I, and collagen VI (Table II). Fractional synthesis of basement-membrane-associated proteoglycans in control lungs ranged from roughly 10% to 20% and 30% to 50% newly synthesized molecules after 1 and 3 weeks of label, respectively. Fractional synthesis of the same proteoglycans in bleomycin-dosed lungs was significantly higher in most cases, with the majority approaching 60% to 80% labeled at 3 weeks. Guanidine-soluble collagens and collagen-associated small leucine-rich proteoglycans also attained significantly greater label incorporation following bleomycin exposure. Fractional synthesis of guanidine-soluble collagens (types I and VI) increased from ~10% and 20% in control lungs to 20% and 50% in bleomycin-dosed lungs at 1 and 3 weeks, respectively. FSRs for biglycan and decorin, two small leucine-rich proteoglycans associated with collagen fibril assembly and growth factor signaling, were noted to be particularly rapid (>60% labeled in control lungs at 1 week). Label incorporation into fibronectin was also expeditious, reaching greater than 75% in both control and bleomycin-dosed lungs prior to 1 week. Protein-glutamine γ-glutamyltransferase 2 (a.k.a. tissue transglutaminase), an enzyme involved in protein cross-linking, also showed increased fractional synthesis at both time points observed after bleomycin administration.
FIG. 1. Flowchart of dynamic proteomic analysis. Following the administration of bleomycin or vehicle, mice are continuously labeled with 2H2O, which is incorporated into newly synthesized proteins over time. Proteins from harvested tissues are trypsinized into peptides and analyzed via LC-MS to measure isotopic shifting reflective of the fraction of each protein that was synthesized during the labeling period.
FIG. 2. ECM enrichment following sequential protein extraction. Spectrum Mill identification of MS/MS peptide spectra from NaCl/SDS-soluble, guanidine-soluble, and residual insoluble pulmonary protein fractions. Peptides are divided into collagens, proteoglycans, other ECM, and non-ECM.
Kinetics of Insoluble ECM Proteins-Insoluble pulmonary protein fractions were enriched for a variety of collagens and microfibrillar proteins (Table III). Fractional synthesis of fibrillar collagens (types I, III, and V), those most associated with fibrotic scar tissue, was not significantly increased in bleomycin-dosed lungs after 1 week of label. However, fibrillar collagen fractional synthesis was remarkably elevated by 3 weeks, reaching a 6-fold higher percentage of label relative to control lungs. Insoluble type VI collagen fractional synthesis was significantly higher in bleomycin-dosed lungs at both time points, whereas type IV collagen fractional synthesis was significantly increased only at 3 weeks. Fractional synthesis of elastin, EMILIN-1, fibrillin-1, and fibulin-5, proteins associated with elastic microfibril formation, was also significantly higher in bleomycin-dosed lungs, with elastin reaching a greater than 8-fold increase in FSR at 3 weeks. Basement membrane proteoglycans laminin and perlecan were also detected in the insoluble protein pool, but their fractional synthesis was only elevated in fibrotic lungs following 3 weeks of label. These results confirm a time-dependent increase in insoluble protein deposition in the bleomycin lung model, with the majority occurring more than 1 week post-bleomycin exposure.
Kinetics of Individual ECM Proteins Fractionated by Guanidine Solubility-We identified multiple ECM proteins present in both guanidine-soluble and insoluble protein fractions, including collagen I, collagen VI, perlecan, and laminin. For the majority of these proteins, including laminin subunit-2, perlecan, and collagen α-1(I), fractional synthesis in control lungs was significantly higher in the guanidine-soluble fraction than in the insoluble fraction (Figs. 3A-3C). Although bleomycin administration did not appear to affect this trend for the two proteoglycans, the ratio of labeled to unlabeled collagen I across the two protein fractions was altered. Interestingly, guanidine-insoluble collagen VI fractional synthesis was higher than that of the soluble form, a trend that was maintained following the onset of fibrosis (Fig. 3D). Solubility-related changes in fractional synthesis were most pronounced for extracellular proteins compared with other classes of proteins, as demonstrated by very little change in α-smooth muscle actin kinetics across protein fractions (Fig. 5E).
Early versus Late Fibrotic ECM Kinetics-Pulmonary administration of bleomycin has previously been shown to result in an early inflammatory phase (pre-1 week), followed by a later fibrotic phase (post-1 week) (26, 27). To better understand how ECM protein synthesis is altered during these distinctive stages of fibrotic disease, we calculated the fold change in ECM protein FSR between bleomycin-dosed and control lungs for these time periods (Fig. 4). Global ECM protein fractional synthesis appeared to be elevated in bleomycin-dosed lung tissue during both the early inflammatory and late fibrotic phases, and a small subset of proteins was particularly elevated during the late fibrotic phase. In the guanidine-soluble protein pool, labeling of collagens I and VI appeared to be most accelerated in the late fibrotic phase of disease, along with dermatopontin and MFAP-4 (Fig. 4A). These latter proteins play roles in TGF-β signaling pathways and cell-matrix interactions, respectively (28, 29). An analysis of the insoluble ECM protein pool identified fibrillar collagens (types I, III, and V) and microfibrillar proteins (elastin, fibulin-5, and fibrillin-1) as most elevated in fractional synthesis during the late fibrotic phase of disease (Fig. 4B). It is important to note that this method of analysis is less accurate for fast-turnover proteins, which are close to fully labeled at 1 week (e.g. biglycan, fibronectin, EMILIN-1), so that if any differences between groups were present at 3 weeks, they would not be apparent.
GC-MS Analysis of Pulmonary OHPro Fractional Synthesis-To further characterize sequentially extracted collagen subsets, we utilized methods similar to those previously published for determining total OHPro mass and FSR in tissues via GC-MS (21, 30). OHPro was present in each pulmonary tissue protein fraction in different quantities (Table IV). The mass of OHPro present in the NaCl- and SDS-soluble protein pools was minimal, comprising roughly 0.3% of total OHPro detected across all protein fractions. OHPro measured in the guanidine-soluble protein fraction accounted for roughly 2.5% to 5% of total collagen, and insoluble collagens made up the remaining 95% to 97.5%. Although the OHPro mass was elevated in the NaCl, SDS, and insoluble protein fractions following fibrotic induction with bleomycin, guanidine-soluble OHPro levels were unchanged. Quantification of pyridinoline cross-link density in the guanidine-soluble and insoluble protein pools revealed significantly elevated concentrations in the insoluble pool of control lungs, indicative of enhanced collagen stability and maturity (Fig. 5). Although the difference was no longer statistically significant, pyridinoline cross-link density did not appear to be altered after 3 weeks. Similar to the collagen data observed in our dynamic proteomic analyses, the fractional synthesis rate of OHPro was significantly increased following the induction of fibrosis (Fig. 6A). Rapid label incorporation occurred in the NaCl- and SDS-soluble OHPro pools, indicating that these fractions were largely populated by recently synthesized collagen proteins. Administration of bleomycin elevated label incorporation in these pools to nearly 100% at 1 week. OHPro fractional synthesis was also significantly higher in the guanidine-soluble and insoluble protein fractions. Importantly, label incorporation was similar to that observed in fibrillar collagens via LC-MS analysis. A comparison of total lung OHPro fractional synthesis (GC-MS) and insoluble collagen α-1(I) fractional synthesis (LC-MS) demonstrated close agreement between the two kinetic assays (Fig. 6B). The combination of OHPro mass and fractional synthesis data calculated from our GC-MS analysis also allowed for absolute quantitation of the newly synthesized OHPro present within each protein fraction (Fig. 6C). Note that these data are presented in log scale because of the dynamic range of collagen present in the various protein fractions. Newly synthesized guanidine-soluble and insoluble OHPro quantities were roughly 3-fold and 15-fold higher in bleomycin-dosed lung tissue than in control tissue at 3 weeks, respectively. Although NaCl- and SDS-soluble OHPro masses were elevated in bleomycin-dosed mice, 100% label incorporation (i.e. plateau labeling) prevented an accurate assessment of absolute synthesis rates in those fractions.
FIG. 4. Early- and late-stage ECM kinetics in response to bleomycin. Fold change (bleo:control) in guanidine-soluble (A) and insoluble (B) ECM protein fractional synthesis following induction of fibrosis with bleomycin. Data represent group means and are divided into early (pre-1 week) and late (post-1 week) fibrotic response sorted by magnitude of fold change in late-responding proteins. Results for late response (1 to 3 weeks) were calculated using group differences in fractional synthesis at 1 and 3 weeks (as described in the text).
DISCUSSION
A combination of dynamic proteomics and tissue decellularization was utilized to quantify changes in ECM fractional synthesis associated with the onset and progression of experimental fibrotic disease in vivo in the mouse. FSRs for dozens of ECM proteins were determined by monitoring stable isotope incorporation into newly synthesized proteins in a common model of pulmonary fibrosis. Conventional proteomic strategies targeting fibrosis-associated proteins are typically limited to semi-quantitative snapshots of ECM content, providing little to no insight into protein dynamics. Our analysis of healthy mouse lung tissue measured ECM protein FSRs ranging from less than 10% per week (e.g. type I collagen, elastin) to greater than 75% per week (e.g. fibronectin), demonstrating the complex dynamic state of pulmonary ECM. Following bleomycin exposure, ECM protein fractional synthesis was significantly altered, with some proteins affected more than others during the early and late disease response. As fibrotic disease is characterized by perturbations in normal ECM dynamics resulting in ECM accumulation, we posit that the measurement of protein fractional synthesis provides a unique perspective on ECM accumulation and turnover in the development of fibrotic disease. The overwhelming majority of ECM proteins were detected in the guanidine-soluble and insoluble pulmonary tissue protein fractions. Overall, guanidine-soluble ECM protein FSRs were higher than insoluble FSRs in sham control mice. The elevated pyridinoline cross-link density detected in the insoluble protein fraction provides one explanation for differential protein extractability. This supports FSR data indicating slower overall ECM protein turnover in the insoluble protein fraction, as cross-linking promotes collagen fibril stability. Interestingly, several individual proteins identified in both fractions had significantly different FSRs, allowing for a direct comparison of guanidine-soluble and insoluble protein pool kinetics. Label incorporation occurred faster in the guanidine-soluble forms of collagen I, perlecan, and laminin than it did for the same proteins in the insoluble form in control lungs. This indicates that guanidine extraction of acellular lung tissue favors the enrichment of a subpopulation of more recently synthesized, less mature ECM proteins. Collagen VI demonstrated the opposite phenomenon, with the insoluble pool turning over at a faster rate than its guanidine-soluble counterpart. This heterogeneity in differential FSRs across guanidine-soluble and insoluble protein fractions might result from the preferential interaction of newly synthesized protein populations with other, more mature protein populations, or vice versa, and deserves further exploration.
Measurement of increased collagen content is currently the gold standard for assessing the severity of fibrotic tissue disease. We therefore focused much of our analytic effort on the characterization of collagen fractional synthesis across different protein fractions. Dynamic proteomic analysis revealed a dramatic increase in fibrillar collagen turnover (types I, III, and V) following bleomycin administration, in both the guanidine-soluble and the insoluble protein pools. Whereas label incorporation occurred more slowly in insoluble collagens than in guanidine-soluble collagens in control mice, bleomycin administration made label incorporation virtually indistinguishable between the two pools after 3 weeks. This reflects a dramatic accumulation of typically stable, slowly turning over collagen, most of which appeared to occur between 1 and 3 weeks post-induction of pulmonary fibrosis. Although bleomycin also increased the FSR of basement membrane proteoglycans (laminin, perlecan) in both fractions, the proportion of newly synthesized protein in each fraction was similar.
GC-MS analysis of total OHPro quantity and turnover provided additional insight into collagen flux within the various protein fractions. The relatively small but fast turnover pool of OHPro isolated in the NaCl and SDS-soluble protein fractions is indicative of newly synthesized collagens. Increased OHPro quantity and FSR within these fractions following bleomycin administration likely reflects an increase in new collagen synthesis. Guanidine-soluble OHPro fractional synthesis closely matched that of type I collagen as determined via LC-MS analysis following bleomycin administration, but no change was detected in OHPro quantity in this fraction. A higher FSR with no change in pool size reflects the presence of a steady state in which increased guanidine-soluble collagen synthesis is balanced with degradation or the conversion of newly synthesized protein molecules to an insoluble form. Accumulation of insoluble collagen was confirmed by an increased FSR and a roughly 70% increase in insoluble OHPro content at 3 weeks post-bleomycin. Elevated concentrations of pyridinoline cross-links present in the insoluble collagen fraction provide one means for collagen transformation between guanidine-soluble and insoluble states. Additional forms of collagen cross-linking might also contribute, as we also detected increased fractional synthesis of tissue transglutaminase in fibrotic tissues (31).
Along with collagens, elastic microfibrils are highly prevalent in lung tissue, contributing to pulmonary viscoelastic properties (5). We observed significantly elevated fractional synthesis of microfibril-related proteins including elastin, fibrillin-1, EMILIN-1, and fibulin-5 following administration of bleomycin, particularly during the later phase of disease response (post 1 week). Previous studies showed an increase in elastic fiber content associated with fibrotic disease (5,32,33). It is therefore likely that increased labeling of microfibrillar proteins comes as a result of increased synthesis and accumulation rather than an increase in the degradation of existing unlabeled proteins. These data indicate that like fibrillar collagen FSRs, elastic microfibril-related protein FSRs also might serve as effective markers of fibrotic disease activity.
Basement membrane proteoglycan FSRs were also altered by bleomycin administration. Guanidine-soluble proteoglycans had higher FSRs than insoluble proteoglycans in bleomycin-dosed tissue during both early and later disease response. Insoluble proteoglycan turnover, in contrast, was altered only during the later fibrotic response (1 to 3 weeks). Interestingly, collagen IV, though detectable only in the insoluble protein fraction, appeared to more closely resemble the fractional synthesis profile of guanidine-soluble basement membrane proteoglycans, potentially reflective of an interaction between these protein populations.
Other proteins of interest included small leucine-rich proteoglycans, which were observed to have a wide range of turnover rates. Biglycan and decorin, two commonly studied small leucine-rich proteoglycans associated with collagen fibril formation and TGF-β superfamily growth factor activity (34, 35), were nearly fully labeled in control lungs at 1 week. Although this experimental design factor diminished the absolute difference that we were able to detect in labeling between experimental groups, statistical differences in biglycan fractional synthesis were still observed. These differences may result from a combination of increased protein pool size and the presence of a small pool with a very slow turnover rate. Similar results were observed for fibronectin, an abundant ECM glycoprotein previously shown to increase in quantity shortly following bleomycin administration (36). Future experiments utilizing shorter labeling periods would be useful for further study of fast-turnover ECM proteins, which might represent robust dynamic markers of fibrotic disease.
Dermatopontin, another proteoglycan associated with TGF-β activity through its interaction with decorin (37), fell well within the range of our labeling period. Dermatopontin turnover was higher in bleomycin-dosed lungs than in control tissues at both time points, indicative of a role in the fibrotic tissue response. Other ECM proteins including MFAP-2, MFAP-4, nephronectin, and periostin demonstrated very little change between bleomycin-dosed and control groups at 1 week but large changes at 3 weeks. Such differences in individual ECM protein FSRs over time might allow for the identification of specific dynamic protein markers of different stages of fibrotic disease.
The applications for ECM-focused dynamic proteomics in the diagnosis and treatment of fibrotic diseases are potentially important. From a basic research perspective, these techniques are useful in profiling ECM protein flux associated with the onset and developmental stages of fibrotic disease. Identification of dynamic biomarkers could provide novel therapeutic targets, as well as allow for more accurate diagnosis of disease progression or anti-fibrotic drug efficacy. Comparisons of global ECM protein dynamics in various animal models of fibrosis with those observed in human disease might also provide valuable information regarding the validity of those animal models (i.e. reverse translation). This might be particularly relevant in the study of pulmonary fibrosis, where there is currently debate over the relevance of the bleomycin model to human idiopathic pulmonary fibrosis (27, 38, 39). As stable isotopes including D2O are routinely used in human subjects, the methods described herein are safely translatable to biopsied human tissue. Dynamic biomarkers of pulmonary fibrosis might also be obtainable in biofluids such as bronchial lavage fluid or plasma, potentially acting as surrogate markers of disease. This strategy is supported by multiple studies quantifying ECM breakdown products in plasma that appear to correlate with fibrotic disease (40-43).
It is important to note that allowing for the hydroxylation of proline as a post-translational modification during LC-MS/MS peptide identification was a vital step in our analysis of collagen FSR, as >90% of extracellular collagen I peptides detected in this study included OHPro residues. We also considered the effect of proline hydroxylation on our calculation of collagen turnover, but we detected no change in collagen peptide FSR related to the presence of one or more OHPro residues (data not shown). Although proline hydroxylation eliminates one 2H-labeling site in the de novo proline synthesis pathway, the impact of this difference on peptide FSR is minimized by two factors: the relatively greater abundance of alternative sources of proline (e.g. diet or protein degradation products), and the limited proportion of OHPro relative to other amino acids present in any given collagen peptide (21).
One shortcoming of this study was our inability to perfectly match the labeling times of animal groups at early and late collection points. Because of weight loss and morbidity associated with bleomycin administration, early sacrifice of some animals was required. However, as we report here increased ECM protein synthesis rates as a result of pulmonary exposure to bleomycin, shorter labeling periods in animals exposed to bleomycin do not account for these findings. In addition, we chose not to represent FSR data as a daily rate by fitting to a one-phase exponential association because of the high, presumably plateaued FSRs of many ECM proteins at both time points.
Another technical challenge lay in the difficulty of interpreting ECM protein FSR data during the onset of fibrotic disease because of the large changes in total ECM protein quantity. For example, it has been reported that the total ECM quantity may increase as much as 6-fold following the onset of liver fibrosis (44). Such drastic changes in pool size can make it difficult to interpret corresponding changes in protein FSR, as the ratio of synthesis to degradation shifts away from a steady state. In the case of collagen, the quantitation of total OHPro provided one solution, allowing us to calculate absolute collagen synthesis over the labeling period. Additional quantitative proteomics-based and non-proteomics-based techniques would also assist in understanding quantitative changes in particular proteins of interest. Future studies administering isotope label only at the later stages of disease might also ameliorate this problem, by distinguishing fractional synthesis associated with disease onset from that associated with the chronic fibrotic state. Although we do not report turnover data associated with cellular proteins here, such data will also likely be valuable in understanding disease progression. For example, smooth muscle actin, a marker of myofibroblast activation that we found to be present across multiple protein fractions, showed an increased FSR in bleomycin-dosed tissues.
Fibrotic diseases, characterized by a chronic imbalance in ECM turnover favoring elevated matrix deposition, present a significant worldwide medical problem with little currently available in the way of effective diagnostic or therapeutic strategies. Here, we demonstrate a technique combining dynamic proteomics and tissue decellularization biochemical procedures to quantify the fractional synthesis of a broad array of ECM proteins associated with fibrotic disease development. Fractionation of matrix proteins based on solubility resulted in the identification of physically separable ECM protein subpopulations with distinctive kinetic behaviors in both healthy and fibrotic pulmonary tissues. Moreover, we observed striking increases in fibrillar collagen synthesis 1 to 3 weeks post-bleomycin exposure, consistent with a pathogenic accumulation of mature cross-linked ECM. These techniques have implications in the development of improved diagnostics and ultimately treatments for fibrotic disease via improved understanding of matrix dynamics during the various stages of tissue fibrogenesis. | 8,834 | 2014-04-16T00:00:00.000 | [
"Medicine",
"Biology"
] |
Analysis and Characterization of Performance Variability for OpenMP Runtime
In the high performance computing (HPC) domain, performance variability is a major scalability issue for parallel computing applications with heavy synchronization and communication. In this paper, we present an experimental performance analysis of OpenMP benchmarks regarding the variation of execution time, and determine the potential factors causing performance variability. Our work offers some understanding of performance distributions and directions for future work on how to mitigate variability for OpenMP-based applications. Two representative OpenMP benchmarks from the EPCC OpenMP micro-benchmark suite and BabelStream are run across two x86 multicore platforms featuring up to 256 threads. From the obtained results, we characterize and explain the execution time variability as a function of thread-pinning, simultaneous multithreading (SMT) and core frequency variation.
INTRODUCTION
Parallel applications executing on shared-memory systems in the HPC world usually follow the single-program multiple-data (SPMD) model, typically implemented with OpenMP. OpenMP, the de facto programming model for SPMD, spawns multiple threads when encountering a #pragma omp parallel clause. Each thread is then executed on one core/hardware thread of the system to execute the parallel region, and commonly all threads synchronize at the end of the execution of the parallel region to compute the final result. Some system-specific activities, such as operating system (OS) daemons and interrupt processing, can cause preemption or interrupt handling on one or more of the threads, delaying the execution of the parallel work; the execution time is then dominantly determined by the slowest thread, while the others wait for synchronization, leading to a waste of resources such as time and energy. Also, due to the randomness of the delay, this in turn generates performance variability in the runtime of the parallel application. Performance variability has become an important limiter to scalability in parallel computing [13]. With the complexity of modern hardware architecture features increasing, variability has become an increasingly challenging issue for improving the efficiency of parallel computing [19].
Performance variability, or run-to-run variation of applications owing to multiple components in the system, can become an obstacle for the development of parallel applications in several ways, such as performance debugging or quantifying the effects of system software and compiler changes [2]. There have been various efforts to identify the potential causes of variability and in turn find solutions to reduce the possibility of variability occurring, aiming at obtaining performance stability of parallel application executions. Most studies on variability have focused on MPI [9,11,16,23], as the explicit and synchronizing nature of message passing communication, and the large scale of applications using this programming model, make MPI applications more sensitive to noise, which in turn leads to load imbalance and ultimately performance degradation. Evaluating the impact of noise and the resulting performance variability in shared memory models such as OpenMP has received comparatively less attention. However, as the core count of modern CPUs increases, shared memory parallel applications using OpenMP are also likely to be impacted by OS noise.
Several strategies to optimize the performance of parallel programs with OpenMP have been proposed, studied, and have influenced the state-of-practice in execution. A common strategy is thread pinning [30], which can improve application performance by keeping threads bound to a specific core and avoiding expensive memory accesses. The authors in [21] have studied several thread-pinning strategies to improve the performance of OpenMP programs, while in a later work [22], they have proposed dynamic thread-pinning for phase-based OpenMP programs with multiple parallel regions. Their study is limited to a small scale of core/thread counts. However, they identified thread pinning as a critical factor to performance variability [20]. The effective usage of simultaneous multithreading, the architectural mechanism that supports several hardware threads per physical core, has also been shown to improve the performance of MPI and MPI+OpenMP applications [16]. Finally, tuning the core frequencies for performance is also important, and well-studied in the literature relevant to developing dynamic energy-efficiency techniques [4,17,18], as even in steady-state, frequency variation can cause high variability of the performance [25].
This work focuses on characterizing the performance variability of OpenMP on modern CPUs. Motivated by the scale of recent, modern multi-core systems, we conduct an extensive study of the impact of common performance-optimizing strategies on the performance of OpenMP applications, in an effort to further understand and pinpoint sources of performance variability in OpenMP. We use two micro-benchmarks from the EPCC benchmark suite [14] which focus on the performance of common OpenMP constructs, such as parallel for and synchronization, and the BabelStream benchmark [7], which assesses the memory bandwidth, and execute them on two different systems, using different variability-reducing strategies. In particular, we analyze thread-pinning, which can help in revealing the performance degradation related to unbound threads in parallel applications. We additionally explore simultaneous multithreading to show how it can affect the performance of OpenMP benchmark executions. Finally, during benchmark execution, we record the frequencies of all cores, to examine whether frequency variation exists, and how it can affect the variability of execution time.
Our study is conducted on two production clusters hosted by two different academic institutions. We do not have privileged access to these systems and therefore cannot control the node setup or operating system knobs, and we cannot trace kernel-level events. We, therefore, rely on a statistical analysis of the observed execution times, repeatedly running every benchmark with multiple iterations of the kernels of interest. By studying the possible sources and characterizing the performance variability, we can categorize the sources of performance variability and find efficient solutions to mitigate a particular class of variability in future work.
The rest of the paper is organized as follows. We introduce related work in Section 2. Section 3 provides an overview of the proposed methodology to characterize the performance variability. We present our experimental setup in Section 4 and experimental results in Section 5. A conclusion of this work follows in Section 6.
RELATED WORK
Performance variability of parallel applications has been well reported on modern systems in multiple works. At the extra-application level, there can be multiple reasons for unpredictable performance, with operating system noise (also referred to as OS jitter) being one of the most common reasons. Several works [6,9,23,28] study the impact of operating system activities on performance, looking primarily at large-scale parallel applications with MPI. A recent work [27] demonstrates that OS noise on non-uniform memory access (NUMA) architectures can cause high run-to-run performance variability. As the number of cores/processors on modern systems grows, OS noise can become a more significant factor of performance variability, as a small amount of perturbation can be greatly amplified in parallel computing. It is therefore important to study the impact of OS noise on performance variability and find solutions to mitigate it.
Aside from the operating system, performance variability can arise from contention and interference on shared resources. Bhatele et al. [2] show that sharing network resources on HPC systems is a primary source of performance variability. Xu et al. [31] show that interference on the I/O subsystem affects the performance of parallel applications. On systems with simultaneous multithreading, performance degradation can occur from oversubscription of the physical cores [16]. Another source of variability is manufacturing variability [11], which leads to performance heterogeneity. The power variation from manufacturing variability can affect the performance stability of HPC applications, as it translates to CPU frequency variation [26].
As there is increasing evidence for performance variability of parallel applications, several techniques and tools have been proposed to measure and characterize performance variability in recent works. In particular, for OS noise, Pradipta et al. [5] develop a tool to monitor and evaluate the impact of OS noise on Linux-based systems through fine-grained kernel instrumentation. Gioiosa et al. [10] extend Oprofile, a Linux kernel-level tool, to characterize the sources of OS noise. Morari et al. [23] extend the Linux tool LTTng to build LTTng-Noise, a tracing tool. De Oliveira et al. [6] develop the osnoise tracer, which analyzes noise activities via kernel instrumentation. A more generic technique to measure performance variability and statistically characterize performance distributions has been proposed by Kocoloski et al. [13], to assist in system parameter design such as power-capping.
A limited number of works have focused on analyzing the performance variability of OpenMP programs. Camacho et al. [1] show that thread binding can reduce execution time variation in OpenMP applications, and Mazouz et al. [20] study the effects of thread binding, OS jitter, and hardware-related sources (memory-access related sources, concurrent jobs, asymmetry between cores, dynamic voltage scaling and device temperature) on execution time variation in OpenMP. In our work, we also focus on analyzing and characterizing performance variability in OpenMP. We exclude interference from other applications and run our benchmarks in isolation. Additionally, as we do not have privileged access to the platforms in study, we exclude operating system knobs from our techniques and only observe the impact of operating system noise on OpenMP, with a statistical analysis of results.
METHODOLOGY
In this section, we describe the methodology followed to characterize performance variability in OpenMP. We note that we always execute benchmarks in isolation on a single node, eliminating the case of variability from application interference. We perform our experiments on production, site-managed clusters, therefore we do not have privileged access that would allow us to tune the execution environment. Instead of detailed trace analysis, we rely on multiple experiments and statistical analysis of the results. The following paragraphs describe the strategies we apply to detect the sources and impact of performance variability.
Thread pinning: By default, we let the operating system decide the thread placement on cores, as the default setup for OMP_PROC_BIND is set to false. In this case (before thread-pinning), the threads may migrate between cores during the execution of parallel programs to improve work balance. In modern multi-core architectures, exploitation of locality is essential to efficiently run parallel programs [12]. OpenMP supports users with fine-grained thread affinity control through thread pinning. A group of places, corresponding to a group of hardware threads, can be defined, and OpenMP threads can be bound to specific places (therefore hardware threads) by setting a pinning policy. To achieve this, some OpenMP-related environment variables are used to specify the OpenMP settings in this paper, i.e., OMP_NUM_THREADS is used to define the number of threads, and OMP_PLACES and OMP_PROC_BIND work together to pin each thread to a specific core. The thread affinity policy is set as close, which implies that worker threads are close to the main thread in contiguous partitions [24,30].
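The pinning setup just described can be sketched as a small launcher; the benchmark binary name below is a placeholder, while the environment variables are those named in the text.

```python
# Launch a benchmark with the pinned configuration (OMP_PLACES/OMP_PROC_BIND),
# versus the default unpinned run where OMP_PROC_BIND is false.
import os
import subprocess

def run_pinned(num_threads: int, binary: str = "./syncbench") -> None:
    env = dict(os.environ)
    env["OMP_NUM_THREADS"] = str(num_threads)  # number of OpenMP threads
    env["OMP_PLACES"] = "cores"                # one place per physical core
    env["OMP_PROC_BIND"] = "close"             # workers contiguous to the main thread
    subprocess.run([binary], env=env, check=True)

def run_unpinned(num_threads: int, binary: str = "./syncbench") -> None:
    env = dict(os.environ)
    env["OMP_NUM_THREADS"] = str(num_threads)
    env["OMP_PROC_BIND"] = "false"             # OS is free to migrate threads
    subprocess.run([binary], env=env, check=True)
```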
Using Simultaneous Multithreading: The simultaneous multithreading mechanism is implemented on one of the two platforms included in our experimental setup, Dardel (see Section 4 for details), where each core has two hardware threads (also referred to as logical cores). We evaluate two configurations in our experiments. The first one is single-threaded, denoted as ST, in which at most one hardware thread per physical core is used to run the benchmark. In this case, the additional hardware thread of the core is reserved for operating system activities, to absorb noise and isolate the benchmark from system interference. In the second configuration, both hardware threads of the core are utilized to run our benchmarks. We refer to this configuration as MT. We collect the results under the above two configurations and compare the performance variability of the execution of the benchmarks.
Frequency logging on a separate core: During the execution of benchmarks, a background Python script is run on a separate core to collect the frequencies of all cores. By doing this, we avoid interference from the frequency logger and the benchmark running on the same core, and guarantee that the execution of benchmarks is influenced as little as possible by other background activities. In the next section, we showcase the performance variability that can be related to the frequency variations.
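A minimal sketch of such a logger is shown below; the sysfs path is the standard Linux CPUFreq interface mentioned later in the paper, while the sampling interval and CSV output format are assumptions.

```python
# Background frequency logger: periodically read the current frequency of every
# core from the Linux CPUFreq sysfs interface and append it to a CSV file.
import glob
import time

def log_frequencies(duration_s: float, interval_s: float = 0.5,
                    outfile: str = "freq_log.csv") -> None:
    paths = sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq"))
    end = time.time() + duration_s
    with open(outfile, "w") as out:
        while time.time() < end:
            freqs = []
            for p in paths:
                with open(p) as f:
                    freqs.append(f.read().strip())   # per-core frequency in kHz
            out.write(f"{time.time()}," + ",".join(freqs) + "\n")
            time.sleep(interval_s)

# Pin this script to a spare core (e.g. via taskset) so that it does not
# perturb the cores that run the benchmark.
```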
EXPERIMENTAL SETUP
Hardware platforms
We use two different hardware platforms for our experiments. The first platform, Dardel, is an HPE Cray EX supercomputer located at the PDC Center for High-Performance Computing in Sweden. Each node of Dardel integrates two AMD EPYC Zen2 2.25GHz 64-core processors, accommodating two hardware threads per core. From the operating system's view, there are a total of 128 cores and 256 hardware threads/logical cores. The cores are organized in 8 NUMA domains of 16 cores each, with each socket behaving as a quad-NUMA domain. The maximum frequency of each core is 3.4GHz. The system runs the SUSE Linux Enterprise Server 15 SP3 OS, with Linux kernel version 5.3.18-150300.59.76_11.0.53-cray_shasta_c. We use gcc v7.5.0 as the compiler.
The second platform, Vera, is a cluster located at the C3SE Center for Scientific and Technical Computing at Chalmers University of Technology in Sweden. Each node of Vera integrates two Intel Xeon Gold 6130 2.1GHz 16-core processors, with a total of 32 cores. Each socket corresponds to a NUMA domain, with a total of 2 NUMA domains on the node. The maximum frequency of each core is 3.7GHz. The system runs the Rocky Linux OS.
OpenMP benchmarks
We use three different OpenMP benchmarks for our evaluation of performance variability in OpenMP. We draw two benchmarks from the EPCC OpenMP micro-benchmark suite [15,29], one of the most comprehensive suites for OpenMP constructs, which provides measurements of the overhead incurred by an OpenMP construct by comparing the execution time of parallel code against that of serial code. We use schedbench, the benchmark focusing on the parallel for construct with different schedules, and syncbench, the benchmark which evaluates all the different available synchronization methods in OpenMP. The benchmarks can be run with different parameters. We present the parameters used for the two benchmarks in our evaluation in Table 1. The third benchmark is BabelStream [8], a common benchmark to measure memory bandwidth by executing simple vector kernels, including copy, add, multiplication, triad, and dot product. It has been used in previous work [11] to evaluate performance variability in a power-limited environment. We use the default parameters and an array size of 2^25 for BabelStream in our evaluation.
We have executed a large set of experiments with the three benchmarks on the two hardware platforms described above. For every runtime configuration, we run each experiment 10 times, to collect run-to-run performance variability, in addition to any variability reported by the EPCC benchmarks themselves, which also execute 100 repetitions of each micro-benchmark. Due to page limitations, we only highlight those experimental results that show statistically significant performance variability and can shed light on the potential sources of this variability. In particular, we execute schedbench with three different schedules, namely static, dynamic and guided, and various different chunk sizes [3], and present the results for specific schedules with the chunk size equal to 1, e.g., the static or dynamic schedule with chunk size 1, labelled as static_1 and dynamic_1 respectively. From syncbench, we select the reduction clause as the most representative of synchronization methods in OpenMP. For BabelStream, in every single run, we collect the minimum, average, and maximum execution time for each kernel and then normalize the minimum and maximum execution time to the average execution time. The run-to-run variations of execution time are depicted by comparing the normalized minimum and maximum execution times among 10 runs for every vector operation kernel respectively.
EXPERIMENTAL RESULTS
OpenMP scalability
We begin our evaluation by examining the scalability of the three OpenMP benchmarks, to understand the trend of the average (Avg.) execution time in Table 2 and Figures 1 and 2, and to examine whether higher thread counts have higher performance variability in Figure 3. In Figure 3, the minimum and maximum execution times are normalized to the average execution time for each run, and each benchmark is run 10 times. We employ thread pinning for all the experiments, and make use of SMT, where available. Regarding the scalability of execution time, we observe that the execution time increases as we spawn additional OpenMP threads, for schedbench in Table 2 and for syncbench in Figure 1, showing the average execution time for all 10 runs. For syncbench, we additionally observe a sharp increase in the execution time when we start utilizing the second socket on both systems (30 threads with 2 NUMA domains for Vera, and 128 threads with 2 quad-NUMA domains for Dardel), as well as when we utilize the logical cores, in addition to the physical cores, on Dardel (254 threads). Also, we additionally highlight that the micro-benchmark corresponding to the reduction clause is the most time-consuming among the synchronization micro-benchmarks. We finally showcase the scalability of BabelStream in Figure 2, observing that the execution time of BabelStream reduces when launching more parallel threads, as expected, on both Dardel and Vera.
Regarding the scalability of performance variability, higher thread counts add to performance variability for syncbench and BabelStream in Figure 3, especially when the thread count is high (≥ 128 HW threads/OpenMP threads on Dardel and ≥ 30 on Vera), while it is not as pronounced for schedbench, as seen in the first column of Figure 3. It is worth pointing out that when all cores/HW threads were used for this scalability experiment, we observed a significantly worse performance behavior. To avoid this, on both systems, we spare 2 cores/HW threads, using 30 out of the 32 cores on Vera and 254 out of the 256 hardware threads on Dardel. We highlight that we observe both higher run-to-run variability, and also high variability between the 100 repetitions of the micro-benchmark for schedbench and syncbench. We argue that this is due to operating system activities, which interfere with the benchmark execution when no spare cores/resources are left for them, causing more noise from the view of the user's application execution. This observation is in accordance with recent works [6,9], which argue for resource isolation required for system activities, in order to reduce the OS noise.
The effect of thread pinning
In this part, we evaluate the effect of thread pinning on reducing performance variability. We showcase our results from Dardel in Figure 4, which is in accordance with our evaluation on Vera, although due to the larger scale of Dardel nodes, performance variability is more pronounced on this platform. The first column of Figure 4 shows average (Avg.) execution times of schedbench for 10 runs. The run-to-run variations of the average execution time do not completely disappear but can be reduced after the threads are pinned in Figure 4d, where only run #9 has a higher execution time, compared to Figure 4a, where runs #(2,8,9,10) take longer to finish. We believe that thread-pinning plays an important role in removing run-to-run variability and improving performance stability. The benefits of thread pinning are much more pronounced in the case of syncbench, as synchronization primitives are more susceptible to noise and even slight increases in the execution time of one thread can propagate throughout the operation. Figure 4b shows a high run-to-run variability for the reduction micro-benchmark of syncbench, on 128 physical cores of Dardel, resulting in more than 3 orders of magnitude of differences in the execution time of the micro-benchmark. Contrarily, after pinning, we achieve a much higher performance stability for the micro-benchmark, as shown in Figure 4e. We note that the y-axes of the two subfigures have different scales. The run-to-run variability is almost eliminated after pinning, while the execution time variations between the 100 repetitions of the benchmark are also largely reduced for certain runs, e.g. runs #(2, 3). Our observations for BabelStream are similar. Figure 4c shows high run-to-run variability for all the five kernels of the benchmark before thread pinning, as there is a difference of up to 6× between the minimum and maximum execution times between 10 different runs. After pinning, in Figure 4f, we observe less run-to-run variability, especially for the copy and mul kernels.
As load balancing at the OS level can be affected by thread-pinning, it is promising to jointly consider the pinning policy and application characteristics. In a nutshell, thread pinning is particularly beneficial for reducing run-to-run variability and improving performance stability of OpenMP applications, especially for memory-bound applications, as evidenced by BabelStream, and synchronization-sensitive applications, as evidenced by syncbench. In the remainder of our evaluation, we use thread pinning for all our experiments.
The effect of SMT
We examine the effect of simultaneous multithreading on Dardel, as Vera does not support SMT. We compare the performance variability of our benchmarks in Figure 5, for the ST case, where we use only physical cores of Dardel, e.g., 32/64/128 cores and OpenMP threads, and the MT case, where we use both hardware threads of 16/32/64 physical cores of Dardel, i.e., 32/64/128 HW threads and OpenMP threads. We note that the use of SMT is usually decided by the developer/user, based on application properties, e.g., the compute-boundedness or memory-boundedness of the application. However, in our evaluation, we regard SMT only as a potential source of performance variability, examining cases where we use the same number of threads.
For schedbench in Figure 5a and Figure 5d, even though some run-to-run variability exists under the ST configuration, we observe a very high variability among the 100 outer repetitions of the benchmark, for each single run under the MT configuration. Regarding syncbench, we compare the run-to-run variations of execution times and adopt the coefficient of variation (CV), i.e., the ratio of the standard deviation to the average (lower is better), for every run, as the metric to measure the performance variability of execution time in Figure 5b and Figure 5e. The performance stability is significantly affected in a negative way when leveraging SMT, especially for some synchronization directives such as for, single, ordered, and reduction, as the CV values of all 10 runs show high variances in Figure 5e. For most synchronization cases, the ST configuration exhibits better performance stability in Figure 5b, by leaving the second hardware thread free, potentially available for OS activities, whereas higher performance variability, including run-to-run variations and the variations among the 100 outer repetitions for each single run, can be seen with the MT configuration. Similarly, we compare the performance variability of the normalized minimum and maximum times for all 10 runs for BabelStream in Figure 5c and Figure 5f under the ST and MT configurations respectively. BabelStream also does not benefit from using hardware threads.
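The two variability metrics used here, the per-run coefficient of variation and the min/max times normalized to the per-run average, can be sketched as below; the sample timing values are invented for illustration.

```python
# Per-run variability metrics: coefficient of variation over the repetitions of
# a micro-benchmark, and min/max execution times normalized to the run average.
import statistics

def coefficient_of_variation(times):
    return statistics.stdev(times) / statistics.mean(times)  # lower is better

def normalized_extremes(times):
    avg = statistics.mean(times)
    return min(times) / avg, max(times) / avg

run_times = [10.2, 10.4, 15.9, 10.3]  # e.g. repetitions of one run, in microseconds
print(coefficient_of_variation(run_times))
print(normalized_extremes(run_times))
```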
The above observations reveal that leaving the second thread in the SMT implementation for system activities results in better performance stability, while the MT configuration makes the executing benchmark experience more SMT interference. The impact of additional hardware thread resources reserved for the operating system, i.e., the ST configuration, varies with benchmark characteristics and scale. For example, ST does not outperform MT much for BabelStream when only a few threads are used. Overall, leaving the additional thread resources implemented by the SMT mechanism for OS activities can be a promising way to achieve performance stability for the OpenMP runtime.
The effect of frequency variation
We finally examine the effect of frequency variation on the performance variability of OpenMP, by continuously logging the frequency levels of all cores (read through the sysfs interface of the Linux CPUFreq), through a Python script executing on a separate core. Although the default governor on Vera is set to performance, boosting all the core frequencies to the maximum, for some of our experiments we have observed performance variability, especially across NUMA nodes, which can be justified by frequency variations. For some of the experiments done on Vera, using the same number of cores but either from the same NUMA node or across NUMA nodes, we observed different behaviours of performance variability that can potentially be explained by the variation of frequency. Figure 6 depicts the relation between the variability of execution time for schedbench and frequency variation. Figure 6c, where we use cores across NUMA nodes, shows higher performance variability, both between different runs and among the 100 outer repetitions, compared to Figure 6a, where we use the same number of cores on a single NUMA node. Figure 6b and Figure 6d depict the behavior of frequency for these two groups of experiments respectively. The brown region in Figure 6d during all 10 runs indicates more frequent frequency variation, compared to Figure 6b. As higher frequency levels used to run the benchmark, dictated by the performance governor, positively influence the execution time, this explains the above observation that the execution times vary more, depending on the frequency variation. We made a similar observation for syncbench in Figure 7, where Figure 7c exhibits more variations for both run-to-run executions and outer repetitions for a single run compared to Figure 7a. The same effect of frequency variation can be seen in the grey region in Figure 7d. We note that on Dardel, we have not observed an obvious trend between performance variability and frequency variations, as Dardel exhibits less frequency variation compared to Vera.
CONCLUSION
This paper aims to characterize the performance variability of OpenMP benchmarks and analyze the potential sources and impact of the performance variability based on an experimental study. We have tested two OpenMP benchmarks from the EPCC OpenMP micro-benchmark suite and BabelStream, on two platforms, assessing the impact of thread pinning, SMT, and core frequency variation on performance variability. Our experimental results have illustrated that performance variability exists in OpenMP, both within a benchmark and between different runs; it can be reduced considerably by applying thread-pinning and by leaving the additional hardware threads implemented by SMT for OS activities, but it can be negatively affected by frequency variation during execution, which is beyond the control of the user.
For future work, we aim to extend our characterization to other benchmarks, such as FP-intensive or cache-intensive benchmarks, and larger OpenMP applications on other platforms. We also wish to pinpoint the exact sources of OS noise and their impact on OpenMP applications, in order to design strategies to mitigate or eliminate performance variability.
Figure 1: Execution time (µs) when increasing the number of HW threads in syncbench on Dardel and Vera.
Figure 2: Execution time (ms) for BabelStream when increasing the number of HW threads on Dardel and Vera.
Figure 3: Scalability of performance variability of normalized execution time in schedbench, syncbench, and BabelStream when increasing the number of used HW threads on Dardel and Vera.
Figure 5: Higher variability of execution time (µs) in schedbench (first column, executed with 128 threads), syncbench (second column, executed with 32 threads), and BabelStream (third column, executed with 128 threads) due to the SMT implementation on Dardel.
Figure 6: Higher variability of execution time (µs) in schedbench due to frequency variation on Vera. (a) 16 cores from one NUMA node; (b) frequency; (c) 16 cores from two NUMA nodes; (d) frequency.
Figure 7: (a) 16 cores from one NUMA node; (b) frequency; (c) 16 cores from two NUMA nodes; (d) frequency.
Table 1: Parameters of the EPCC OpenMP micro-benchmarks | 6,056.4 | 2023-11-09T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Remote Control of a Mobile Robot for Indoor Patrol
Abstract: This study applies a smartphone, Bluetooth, and a Wi-Fi wireless network to control a wheeled mobile robot (WMR) remotely. The first part of this study demonstrates that the WMR can be controlled manually by a smartphone. The smartphone can remotely control the WMR for forward, backward, left-turn, and right-turn operations. The second part of this article presents object tracking. The WMR can follow a moving object through the use of image processing for object tracking and distance detection. In the third part, infrared sensors and fuzzy system algorithms are integrated into the control scheme. Through wall-following and obstacle-avoidance control, the WMR can successfully perform indoor patrol.
Introduction
Over the past few years, different types of wheeled mobile robots (WMR) have been proposed. The WMR's advantages include high mobility, high load, and easy control. Previous WMR studies have chiefly focused on developing effective performance and helping people work in various environments. Many robots have been used in different applications such as indoor services, space exploration, military undertakings, entertainment, healthcare services, etc. In recent years, intelligent control has been applied to WMR for a variety of tasks [1][2][3][4][5][6][7][8][9][10][11][12][13]. One of the most common intelligent controllers is the fuzzy controller [9][10][11][12][13] because the design is relatively simple and flexible. This paper presents three control schemes. The first one is smartphone remote control, which uses Zigbee wireless transmission to the computer-controlled robot. Forward, backward, left-turn, and right-turn movements can be performed via the smartphone. The second control scheme utilizes the EYECAM camera and image processing to detect moving objects and calculate the center distance, and then applies a fuzzy controller to track the moving object. The moving target can be clearly seen on the screen of the smartphone. In the third scheme, the WMR integrates infrared sensors into the fuzzy control system and distance detection. A camera is put on the WMR for surveillance. The surroundings of the robot can be seen on the phone. Home patrol and monitoring can therefore be accomplished.
With advanced technology and industrial development, path following, navigation, target tracking, multi-vehicle coordination, path planning, and other applications of WMR have been widely discussed. Many of the applications use intelligent systems in controller design, such as fuzzy systems and neural networks. Fuzzy systems reason in a way similar to human thinking. In real-world settings, fuzzy logic offers greater fault tolerance, generalizes better, and is better suited to nonlinear systems. When the WMR is meant to work in a variety of environments, the choice of the controller becomes a very important issue. In the paper of Juang et al. [10], the vehicle used a visual sensor to track a moving object. The authors applied a simplified type-2 fuzzy system to a WMR for target-following control. Zhan et al. [12] used rapid path planning and fuzzy logic control to make the WMR trace expected paths. Seder et al. [14] proposed a method that is based on the integration of a focused D*
Image Processing
In most image processing methods, the light source strongly affects the final result. In this study, the image in the red/green/blue (RGB) color space is transformed to the hue/saturation/value (HSV) color space, which significantly reduces the impact of lightness. The HSV model defines a color space in terms of three constituent components: hue (H), saturation (S), and value (V). H is an angle from 0 to 360 degrees. Saturation indicates the range of grey in the color space and runs from 0 to 100% (sometimes expressed as 0 to 1). Value is the brightness of the color and varies with color saturation; it also runs from 0 to 100%. The hue and saturation ranges can be set to appropriate values so that the image processing acts as a filter, discarding useless information and preserving useful information. The center of an object can then be detected and the distance calculation method of the camera applied, so that the robot can track a moving object. The RGB color space values are transferred to HSV color space values by the following equations [17][18][19]. After the HSV transformation, the image is converted to a binary image so that the foreground can be easily separated from the background. The binary image maps each pixel to 0 or 255, where 0 represents black and 255 represents white; 255 is the foreground representing the captured dynamic obstacle, whereas 0 is the background representing the image content to be filtered out. In the binary image process, an appropriate threshold first needs to be set: if a pixel's value in the captured image is larger than the threshold, it is set to 255; if it is smaller, it is set to 0. The threshold affects the captured image in object tracking control. Figure 1 shows an example of the original RGB image transformed into the HSV and binary images.
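As a concrete illustration of this pipeline, the sketch below converts a camera frame to HSV, keeps only the pixels inside an assumed hue/saturation/value window, and extracts the foreground center from the resulting binary mask. The paper's implementation is in C# with Emgu CV; this Python/OpenCV version, and the window limits in it, are placeholders for illustration only.

```python
import cv2
import numpy as np

def segment_target(frame_bgr, hsv_lo=(0, 120, 70), hsv_hi=(10, 255, 255)):
    """Convert a BGR frame to HSV and return a 0/255 binary mask.

    hsv_lo / hsv_hi are illustrative bounds for a reddish target; the actual
    ranges and threshold used in the paper are not reproduced here.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)            # RGB/BGR -> HSV
    mask = cv2.inRange(hsv,
                       np.array(hsv_lo, dtype=np.uint8),
                       np.array(hsv_hi, dtype=np.uint8))        # 255 inside window, 0 outside
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)       # remove isolated noise pixels
    return mask                                                 # 255 = foreground, 0 = background

def target_center(mask):
    """Return the (u, v) pixel center of the foreground, or None if empty."""
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```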
E denotes the height from the camera to the floor, b represents the distance from the projection point of the camera on the floor to the closest viewpoint, and Ly is the largest range that the camera can see, as shown in Figure 3. Lx is the largest width that the camera can see; it is the length from the left to the right of the view, as shown in Figure 2. Coordinates of the target can be obtained as follows:

x = tan[β(2u/H) − 1]·y (8)

x is the distance of the target away from the center line of the camera, as shown in Figure 2, and y is the distance of the target away from the WMR, as shown in Figure 3. In Equations (7) and (8), u and v are the horizontal pixel and the vertical pixel of the center coordinates of the captured target, respectively. H and W represent the pixel lengths of an image provided by the camera. In this paper, E is 15 cm, Lx is 160 cm, Ly is 150 cm, b is 10 cm, H is 320, and W is 480. We can obtain the center coordinates of the target after the image processing. The distance estimation is then used to calculate the distance between the WMR and the target. The dynamic tracking task can be performed via this information and the position of the target on an image frame. Figure 4 shows the relative coordinate of the target with respect to the WMR.
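The sketch below turns a detected pixel center (u, v) into approximate ground-plane coordinates (x, y) relative to the robot, using the constants E, b, Lx, Ly, H, and W quoted above. Because Equation (7) did not survive extraction, the y mapping here is a simple linear interpolation between the nearest visible point b and b + Ly, and the x mapping scales the horizontal offset by Lx; both are illustrative stand-ins, not the paper's exact Equations (7)-(8).

```python
def pixel_to_ground(u, v, E=15.0, b=10.0, Lx=160.0, Ly=150.0, H=320, W=480):
    """Map a pixel center (u, v) to ground coordinates (x, y) in cm.

    Assumption: v = H maps to the nearest visible point (y = b) and v = 0 to
    the farthest (y = b + Ly); u is scaled linearly across the width Lx.
    This is an illustrative approximation, not a reconstruction of Eqs. (7)-(8).
    """
    y = b + Ly * (1.0 - v / H)          # distance of the target from the WMR
    x = Lx * (u / W - 0.5)              # signed offset from the camera center line
    return x, y

# Example: a target centered in the image lies on the camera axis,
# roughly halfway into the visible range.
print(pixel_to_ground(u=240, v=160))    # -> (0.0, 85.0)
```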
Fuzzy Control
In the first part of this study, we explored how the mobile robot is designed to track a moving object in complex environments. Figure 5 shows the dynamic tracking control scheme with the fuzzy controller, and the procedure of dynamic tracking is shown in Figure 6. The proposed fuzzy steering controller has two inputs: the horizontal position of the target in a frame of the captured image and the distance between the moving target and the WMR. The output of the system is the turning angular velocity of the wheeled mobile robot. Figure 7 shows the membership functions of X, the horizontal position of the target in a frame of the captured image; its fuzzy sets are L, MI and R, which represent left, intermediate, and right, respectively. Figure 8 shows the membership functions of Y, the vertical position of the target in a frame of the captured image; its fuzzy sets are N, M and F, which represent near, medium and far, respectively. The fuzzy sets of the left wheel speed (the output variable D) are LS, LMS, LMM, LMF, LM, LFS, LFM, LFF, and LF, which represent a turn that is very slow, greatly slow, medium slow, minimally slow, medium, minimally fast, medium fast, fast and very fast, respectively. The membership functions of the left wheel speed are shown in Figure 9, where the value of the wheel speed is the digital input code of the wheel's motor. The definition of the right wheel speed is similar. Table 1 illustrates the fuzzy rules of the left wheel speed for the fuzzy steering controller; the fuzzy rules of the right wheel speed can be obtained by a similar design process.
In the second part of the study, a fuzzy control scheme is designed to drive the WMR in an indoor patrol situation. The WMR is controlled by a sensor signal which provides distance information to the robot, so that the robot can be driven and moved along a wall. In wall following, a safety distance between the robot and the wall is predefined; the robot moves along the wall and keeps a constant distance from it. Installed on the robot are two infrared (IR) sensors, located at the right and left sides of the robot. Each IR sensor points 45 degrees away from the heading angle; the two sensors are marked as SR_L and SR_R, as shown in Figure 10. The distance measured by SR_R is dR and the distance measured by SR_L is dL. The control sequence of indoor patrol is shown in Figure 11. In the fuzzy steering controller, the inputs are the two IR detection signals and the output of the system is the turning angular velocity of the robot. The fuzzy sets of the left IR sensor are LF, LM and LC, which represent far, medium, and close, respectively. The fuzzy sets of the left wheel speed are LS, LMS, LMM, LMF, LM, LFS, LFM, LFF, and LF, which represent turning very slow, greatly slow, medium slow, minimally slow, medium, minimally fast, medium fast, fast and very fast, respectively. The fuzzy rules for the indoor patrol controller of the left wheel speed are shown in Table 2.
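To make the rule-evaluation step concrete, here is a minimal sketch of a fuzzy controller of the kind described above: triangular membership functions fuzzify the right-hand IR distance, a small rule base maps the fuzzified distance to a left wheel speed code, and the output is obtained by a weighted average of rule consequents. The membership breakpoints, the three rules, and the speed codes are invented for illustration; they are not the values in the paper's Table 2.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def wall_following_left_speed(dR):
    """Fuzzy estimate of the left wheel speed code from the right IR distance (cm).

    Breakpoints, rules, and output singletons are illustrative placeholders.
    """
    close  = tri(dR, 0, 10, 20)     # LC: too close to the wall
    medium = tri(dR, 10, 20, 30)    # LM: near the safety distance
    far    = tri(dR, 20, 30, 40)    # LF: too far from the wall

    # Right-wall following: too close -> slow the left wheel (turn away),
    # at the safety distance -> keep heading, too far -> speed up (turn toward the wall).
    rules = [
        (close,  40.0),
        (medium, 70.0),
        (far,    100.0),
    ]
    num = sum(w * s for w, s in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 70.0   # default to "medium" if no rule fires

print(wall_following_left_speed(dR=12.0))   # -> 46.0
```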
Experimental Settings
The WMR used in this study is the Boe-Bot Idrobot (Parallax Inc., Rocklin, CA, USA), shown in Figure 12. In addition to Bluetooth control, the robot can also be controlled by a smartphone (HTC Corporation, Taoyuan, Taiwan) via a Zigbee wireless module (Texas Instruments Inc., Dallas, TX, USA). Zigbee is used to transmit command signals between the webcam (Kinyo Inc., Hsinchu, Taiwan), the robot, and the PC (ASUSTeK Computer Inc., Taipei, Taiwan). Wi-Fi communication connects the smartphone and the PC during indoor patrol. Due to its limited memory, the Boe-Bot Idrobot can only process and store 2K bytes. The pin assignments occupied by the IR sensors (Parallax Inc., Rocklin, CA, USA) and the Zigbee module are another constraint requiring further study. The transmitting frequency of the robot's Zigbee module and camera (Kinyo Inc., Hsinchu, Taiwan) is 2.4 GHz. They are assigned to COM port 5 (COM5) and COM3 on the PC. The image is transmitted at 50 ms per frame, and Zigbee receives one unit of data every 20 ms; the data are then processed by the PC. Dynamic objects are handled through Emgu CV, a .NET wrapper for the open-source computer vision library, supplied as dynamic link library (DLL) files. Figure 13 shows the operation buttons: stop, up, down, left, right, track, and patrol. This operating interface is used to receive robot information, send control commands, and process the feedback values. Image processing and fuzzy control are coded in the C# language.
The communication architecture is illustrated in Figure 15. Figure 15a shows the use of radio frequency (RF) wireless transmission from the camera to the PC network. Figure 15b shows the communication from the WMR to the PC: the IR values are first obtained, transcoded, and transferred to the PC over the Zigbee wireless network; COM5 receives this information and the data are then decoded. Figure 15c shows the control flow chart of the command that is sent to the WMR: the information is processed by the fuzzy controller through COM5, and the corresponding decoded speed is then sent to the WMR by the Zigbee wireless network. Figure 16 shows the block diagram of the remote control.
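A minimal sketch of the PC-side serial handling just described, using the pyserial package: one port (COM5, as assigned in the text) carries incoming IR readings and outgoing speed commands over the Zigbee link. The byte-level message format is an assumption made up for illustration; the paper's actual implementation is written in C#.

```python
import serial  # pyserial

# COM5: Zigbee link to the WMR (IR readings in, speed commands out).
zigbee = serial.Serial("COM5", baudrate=9600, timeout=0.02)   # ~20 ms per data unit

def read_ir_distances():
    """Read one decoded (dL, dR) pair; the 'L,R\\n' ASCII format is assumed."""
    line = zigbee.readline().decode(errors="ignore").strip()
    if not line:
        return None
    left, right = line.split(",")
    return float(left), float(right)

def send_wheel_speeds(left_code, right_code):
    """Send the wheel speed codes back to the robot (assumed ASCII protocol)."""
    zigbee.write(f"{int(left_code)},{int(right_code)}\n".encode())

if __name__ == "__main__":
    reading = read_ir_distances()
    if reading is not None:
        dL, dR = reading
        send_wheel_speeds(70, 70)   # placeholder command: drive straight
```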
Experimental Results
The first experiment is conducted in the terrain shown in Figure 17. Figure 18 shows the PC-based control operation of the wheeled robot for a lower right-turn control application. The robot is driven by a simple proportional control scheme.
Figure 19 shows the testing environment of the second test, based on smartphone control operation. Figure 20 is the right-turn operation. Manual operation control sequences of the smartphone and the WMR are shown in Figures 21 and 22. After pressing the command arrow, as shown in Figure 20, the command signal will be transmitted to the WMR, which will then perform the required action.
The first and second experiments described above are manually operated. The following three experiments use autonomous control. The flowchart of the automatic dynamic tracking control sequence is shown in Figure 6, whereas the flowchart of the automatic indoor patrol control sequence is shown in Figure 11. In dynamic tracking, Figure 23 shows the tracking operation and Figure 24 shows the images on the PC screen that are captured by the robot's camera. Indoor patrol and remote surveillance are also tested. The WMR is controlled by a two-input two-output fuzzy controller. Figure 25 is the testing field, Figure 26 shows different positions of the wall-following patrol, and Figure 27 shows the images captured on the PC screen. Remote surveillance is shown in Figures 28-31: Figure 28 is the testing field of the remote surveillance, Figure 29 shows different remote control operations, Figure 30 is the remote site, and Figure 31 shows the captured images on the smartphone.
Conclusions
This study presents the integration of image processing techniques, fuzzy theory, wireless communications, and a smartphone into a wheeled mobile robot (WMR) for real-time moving-object recognition, tracking, and remote surveillance. The WMR uses a webcam to capture its surroundings and calculates the relative position of the target object through image processing and distance computation algorithms. A fuzzy system is applied to robot control. Three cases of experiments are given. In the first case, a PC and a smartphone are utilized to directly control the WMR's forward, backward, left-turn, and right-turn movements. The second case focuses on target tracking control: the WMR can track a specific target using the HSV algorithm and a fuzzy controller, and the target can be clearly seen on the smartphone via the webcam on the WMR. In the third case, the WMR is applied to surveillance. The WMR can be controlled remotely by a smartphone via wireless communications and uses infrared sensors and a fuzzy controller for obstacle avoidance and wall-following control. The WMR can perform indoor patrol and monitor its surroundings, and the home site conditions can be clearly seen on the smartphone. Experiments show that the proposed control design and system integration of the wheeled mobile robot work well for indoor patrol. | 6,351.8 | 2016-03-15T00:00:00.000 | [
"Computer Science"
] |
On the Linear Stability of the Lamb-Chaplygin Dipole
The Lamb-Chaplygin dipole (Lamb 1895, 1906; Chaplygin 1903) is one of the few closed-form relative equilibrium solutions of the 2D Euler equation characterized by a continuous vorticity distribution. We consider the problem of its linear stability with respect to 2D circulation-preserving perturbations. It is demonstrated that this flow is linearly unstable, although the nature of this instability is subtle and cannot be fully understood without accounting for infinite-dimensional aspects of the problem. To elucidate this, we first derive a convenient form of the linearized Euler equation defined within the vortex core which accounts for the potential flow outside the core while making it possible to track deformations of the vortical region. The linear stability of the flow is then determined by the spectrum of the corresponding operator. Asymptotic analysis of the associated eigenvalue problem shows the existence of approximate eigenfunctions in the form of short-wavelength oscillations localized near the boundary of the vortex, and these findings are confirmed by the numerical solution of the eigenvalue problem. However, the time-integration of the 2D Euler system reveals the existence of only one linearly unstable eigenmode and, since the corresponding eigenvalue is embedded in the essential spectrum of the operator, this unstable eigenmode is also shown to be a distribution characterized by short-wavelength oscillations rather than a smooth function. These findings are consistent with the general results known about the stability of equilibria in 2D Euler flows and have been verified by performing computations with different numerical resolutions and arithmetic precisions.
Introduction
The Lamb-Chaplygin dipole is a relative equilibrium solution of the two-dimensional (2D) Euler equations in an unbounded domain R2 that was independently obtained by Lamb (1895, 1906) and Chaplygin (1903); the history of this problem was surveyed by Meleshko & van Heijst (1994). The importance of the Lamb-Chaplygin dipole stems from the fact that this is a simple exact solution with a continuous vorticity distribution which represents a steadily translating vortex pair (Leweke et al., 2016). Such objects are commonly used as models in geophysical fluid dynamics where they are referred to as "modons" (Flierl, 1987). Interestingly, despite the popularity of this model, the stability properties of the Lamb-Chaplygin dipole are still not well understood and the goal of the present investigation is to shed some new light on this question.
We consider an unbounded flow domain Ω := R2 (":=" means "equal to by definition"). Flows of incompressible inviscid fluids are described by the 2D Euler equation, which can be written in the vorticity form as

∂ω/∂t + u · ∇ω = 0, (1)

where t ∈ (0, T] is the time with T > 0 denoting the length of the interval considered, ω : (0, T] × Ω → R is the vorticity component perpendicular to the plane of motion and u = [u1, u2]T : (0, T] × Ω → R2 is a divergence-free velocity field (i.e., ∇ · u = 0).
The space coordinate will be denoted x = [x1, x2]T. Introducing the streamfunction ψ : (0, T] × Ω → R, the relation between the velocity and vorticity can be expressed as

u = ∇⊥ψ = [∂ψ/∂x2, −∂ψ/∂x1]T and ∆ψ = −ω. (2)

System (1)-(2) needs to be complemented with suitable initial and boundary conditions, which will be specified below.
In the frame of reference translating with the velocity −U e1, where U > 0 and ei, i = 1, 2, is the unit vector associated with the ith axis of the Cartesian coordinate system, equilibrium solutions of system (1)-(2) satisfy the boundary-value problem (3) (Wu et al., 2006), where the "vorticity function" F : R → R need not be continuous. Clearly, the form of the equilibrium solution is determined by the properties of the function F(ψ). Assuming without loss of generality that it has unit radius (a = 1), the Lamb-Chaplygin dipole is obtained by taking F as in (4), where b ≈ 3.8317059702075123156 is the first root of the Bessel function of the first kind of order one, J1(b) = 0, and η ∈ (−∞, ∞) is a parameter characterizing the asymmetry of the dipole (in the symmetric case η = 0). The solution of (3)-(4) then has the form of a circular vortex core of unit radius embedded in a potential flow. The vorticity and streamfunction are given by expressions (5)-(6) stated in the cylindrical coordinate system (r, θ), valid inside the vortex core (0 < r ≤ 1, 0 < θ ≤ 2π) and outside the vortex core (r > 1, 0 < θ ≤ 2π), respectively (hereafter we adopt the convention that the subscript "0" refers to an equilibrium solution). The vortical core region will be denoted A0 := {x ∈ R2 : ∥x∥ ≤ 1} and ∂A0 will denote its boundary. The streamline pattern inside A0 in the symmetric (η = 0) and asymmetric (η > 0) case is shown in figures 1a and 1b, respectively. Various properties of the Lamb-Chaplygin dipole are discussed by Meleshko & van Heijst (1994). In particular, it is shown that regardless of the value of η the total circulation of the dipole vanishes, i.e., Γ0 := ∫_{A0} ω0 dA = 0. We note that in the limit η → ±∞ the dipole approaches a state consisting of a monopolar vortex with a vortex sheet of opposite sign coinciding with the part of the boundary ∂A0 above or below the flow centerline, respectively, for positive and negative η. Generalizations of the Lamb-Chaplygin dipole corresponding to differentiable vorticity functions F(ψ) were obtained numerically by Albrecht et al. (2011), whereas multipolar generalizations were considered by Viúdez (2019b,a).
Most investigations of the stability of the Lamb-Chaplygin dipole were carried out in the context of viscous flows governed by the Navier-Stokes system, beginning with the computations of dipole evolution performed by Nielsen & Rasmussen (1997) and van Geffen & van Heijst (1998). While relations (5)-(6) do not represent an exact steady-state solution of the Navier-Stokes system, this approximate approach was justified by the assumption that viscous effects occur on time scales much longer than the time scales characterizing the growth of perturbations. A first such study of the stability of the dipole was conducted by Billant et al. (1999) who considered perturbations with dependence on the axial wavenumber and found several unstable eigenmodes together with their growth rates by directly integrating the three-dimensional (3D) linearized Navier-Stokes equations in time. Additional unstable eigenmodes were found in the 2D limit corresponding to small axial wavenumbers by Brion et al. (2014). The transient growth due to the non-normality of the linearized Navier-Stokes operator was investigated in the related case of a vortex pair consisting of two Lamb-Oseen vortices by Donnadieu et al. (2009) and Jugier et al. (2020), whereas Sipp & Jacquin (2003) studied Widnall-type instabilities of such vortex pairs. The effect of stratification on the evolution of a perturbed Lamb-Chaplygin dipole in 3D was considered by Waite & Smolarkiewicz (2008) and Bovard & Waite (2016). The history of the studies concerning the stability of vortices in ideal fluids was recently surveyed by Gallay (2019).
The only stability analysis of the Lamb-Chaplygin dipole in the inviscid setting we are aware of is due to Luzzatto-Fegiz & Williamson (2012) and Luzzatto-Fegiz (2014), who employed methods based on imperfect velocity-impulse diagrams applied to an approximation of the dipole in terms of a piecewise-constant vorticity distribution and concluded that this configuration is stable. Finally, there is a recent mathematically rigorous result by Abe & Choi (2022) who established orbital stability of the Lamb-Chaplygin dipole (orbital stability implies that flows corresponding to "small" perturbations of the dipole remain "close" in a certain norm to the translating dipole; hence, this is a rather weak notion of stability).
As noted by several authors (Meleshko & van Heijst, 1994; Waite & Smolarkiewicz, 2008; Luzzatto-Fegiz & Williamson, 2012; Abe & Choi, 2022), the stability properties of the Lamb-Chaplygin dipole are still to be fully understood despite the fact that it was introduced more than a century ago. To the best of our knowledge, the present study is the first comprehensive investigation of the linear stability of the Lamb-Chaplygin dipole in the inviscid case, which is the only setting where it represents a true equilibrium solution of the governing equations. As a result, we find behavior that was not observed in any of the earlier studies. It is demonstrated that the Lamb-Chaplygin dipole is in fact linearly unstable, but the nature of this instability is quite subtle and cannot be understood without referring to the infinite-dimensional nature of the linearized governing equations. More specifically, both the asymptotic and numerical solution of an eigenvalue problem for the 2D linearized Euler operator suitably localized to the vortex core A0 confirm the existence of an essential spectrum with the corresponding approximate eigenfunctions in the form of short-wavelength oscillations localized near the vortex boundary ∂A0. However, the time-integration of the 2D Euler system reveals the presence of a single exponentially growing eigenmode and, since the corresponding eigenvalue is embedded in the essential spectrum of the operator, this unstable eigenmode is also found not to be a smooth function and exhibits short-wavelength oscillations. These findings are consistent with the general mathematical results known about the stability of equilibria in 2D Euler flows (Shvydkoy & Latushkin, 2003; Shvydkoy & Friedlander, 2005) and have been verified by performing computations with different numerical resolutions and, in the case of the eigenvalue problem, with different arithmetic precisions.
The structure of the paper is as follows: in the next section we review some basic facts about the spectra of the 2D linearized Euler equation and transform this system to a form in which its spectrum can be conveniently studied with an asymptotic method and numerically; a number of interesting properties of the resulting eigenvalue problem are also discussed. An approximate asymptotic solution of this eigenvalue problem is constructed in § 3, the numerical approaches used to solve the eigenvalue problem and the initial-value problem (1)-(2) are introduced in § 4, whereas the obtained computational results are presented in § 5 and § 6, respectively; discussion and final conclusions are deferred to § 7; some more technical material is collected in three appendices.
2D Linearized Euler Equations
The Euler system (1)-(2) formulated in the moving frame of reference and linearized around an equilibrium solution {ψ0, ω0} has the form (7), where ψ′, ω′ : (0, T] × Ω → R are the perturbation variables (also defined in the moving frame of reference), ∆−1 is the inverse Laplacian corresponding to the far-field boundary condition (7c), and w′ is an appropriate initial condition assumed to have zero circulation, i.e., ∫_Ω w′ dA = 0. Unlike for problems in finite dimensions where, by virtue of the Hartman-Grobman theorem, instability of the linearized system implies the instability of the original nonlinear system, for infinite-dimensional problems this need not, in general, be the case. However, for 2D Euler flows it was proved by Vishik & Friedlander (2003) and Lin (2004) that the presence of an unstable eigenvalue in the spectrum of the linearized operator does indeed imply the instability of the original nonlinear problem. Arnold's theory (Wu et al., 2006) predicts that equilibria satisfying system (3) are nonlinearly stable if F′(ψ) ≥ 0, which however is not the case for the Lamb-Chaplygin dipole, since using (4) we have F′(ψ0) = −b² < 0 for ψ0 ≥ η. Thus, Arnold's criterion is inapplicable in this case.
Spectra of Linear Operators
When studying spectra of linear operators, there is a fundamental difference between the finite- and infinite-dimensional cases. To elucidate this difference and its consequences, we briefly consider an abstract evolution problem du/dt = Au on a Banach space X (in general, infinite-dimensional) with the state u(t) ∈ X and a linear operator A : X → X. The solution of this problem can be formally written as u(t) = e^{At} u0, where u0 ∈ X is the initial condition and e^{At} the semigroup generated by A (Curtain & Zwart, 2013). While in finite dimensions linear operators can be represented as matrices which can only have point spectrum Π0(A), in infinite dimensions the situation is more nuanced since the spectrum Λ(A) of the linear operator A may in general consist of two parts, namely, the approximate point spectrum Π(A) (the set of numbers λ ∈ C such that (A − λ) is not bounded from below) and the compression spectrum Ξ(A) (the set of numbers λ ∈ C such that the closure of the range of (A − λ) does not coincide with X). We thus have Λ(A) = Π(A) ∪ Ξ(A), and the two types of spectra may overlap, i.e., Π(A) ∩ Ξ(A) ≠ ∅ (Halmos, 1982). A number λ ∈ C belongs to the approximate point spectrum Π(A) if and only if there exists a sequence of unit vectors {fn}, referred to as approximate eigenvectors, such that ∥(A − λ)fn∥_X → 0 as n → ∞. If for some λ ∈ Π(A) there exists a unit element f such that Af = λf, then λ and f are an eigenvalue and an eigenvector of A. The set of all eigenvalues λ forms the point spectrum Π0(A), which is contained in the approximate point spectrum, Π0(A) ⊂ Π(A). If λ ∈ Π(A) does not belong to the point spectrum, then the sequence {fn} is weakly null convergent and consists of functions characterized by increasingly rapid oscillations as n becomes large. The set of such numbers λ ∈ C is referred to as the essential spectrum Πess(A) := Π(A)\Π0(A), a term reflecting the fact that this part of the spectrum is normally independent of boundary conditions in eigenvalue problems involving differential equations. It is, however, possible for "true" eigenvalues to be embedded in the essential spectrum.
When studying the semigroup e^{At} one is usually interested in understanding the relation between its growth abscissa γ(A) := lim_{t→∞} t^{−1} ln ∥e^{At}∥_X and the spectrum Λ(A) of A. While in finite dimensions γ(A) is determined by the eigenvalues of A with the largest real part, in infinite dimensions the situation is more nuanced since there are examples in which sup_{z∈Λ(A)} ℜ(z) < γ(A), e.g., Zabczyk's problem (Zabczyk, 1975), also discussed by Trefethen (1997); some problems in hydrodynamic stability where such behavior was identified are analyzed by Renardy (1994).
In regard to the 2D linearized Euler operator L, cf. (7a), it was shown by Shvydkoy & Latushkin (2003) that its essential spectrum is a vertical band in the complex plane symmetric with respect to the imaginary axis. Its width is proportional to the largest Lyapunov exponent λmax in the flow field and to the index m ∈ Z of the Sobolev space H^m(Ω) in which the evolution problem is formulated (i.e., X = H^m(Ω) above). The norm in the Sobolev space H^m(Ω) is defined as ∥u∥²_{H^m(Ω)} = Σ_{|α|≤m} ∥∂^α u∥²_{L²(Ω)}, where α = (α1, α2), α1, α2 ∈ Z, with |α| := α1 + α2 (Adams & Fournier, 2005). More specifically, we have (Shvydkoy & Friedlander, 2005)

Πess(L) = {z ∈ C : |ℜ(z)| ≤ |m| λmax}. (8)

In 2D flows Lyapunov exponents are determined by the properties of the velocity gradient ∇u(x) at hyperbolic stagnation points x0. More precisely, λmax is given by the largest eigenvalue of ∇u(x) computed over all stagnation points. As regards the Lamb-Chaplygin dipole, it is evident from figures 1a and 1b that in both the symmetric and asymmetric case it has two stagnation points xa and xb located at the fore and aft extremities of the vortex core. Inspection of the velocity field ∇⊥ψ0 defined in (5a) shows that the largest eigenvalues of ∇u(x) evaluated at these stagnation points, and hence the Lyapunov exponents, are λmax = 2 regardless of the value of η.
While the characterization of the essential spectrum of the 2D linearized Euler operator L is rather complete, the existence of a point spectrum remains in general an open problem. Results concerning the point spectrum are available in a few cases only, usually for shear flows where the problem can be reduced to one dimension (Drazin & Reid, 1981; Chandrasekhar, 1961) or the cellular cat's eyes flows (Friedlander et al., 2000). In these examples unstable eigenvalues are outside the essential spectrum (if one exists) and the corresponding eigenfunctions are well behaved. On the other hand, it was shown by Lin (2004) that when an unstable eigenvalue is embedded in the essential spectrum, then the corresponding eigenfunctions need not be smooth. One of the goals of the present study is to consider this issue for the Lamb-Chaplygin dipole.
Linearization Around the Lamb-Chaplygin Dipole
The linear system (7) is defined on the entire plane R2; however, in the Lamb-Chaplygin dipole the vorticity ω0 is supported within the vortex core A0 only, cf. (6b). This will allow us to simplify system (7) so that it will involve relations defined only within A0, which will facilitate both the asymptotic analysis and the numerical solution of the corresponding eigenvalue problem, cf. § 3 and § 5. If the initial data w′ in (7d) is also supported in A0, then the initial-value problem (7) can be regarded as a free-boundary problem describing the evolution of the boundary ∂A(t) of the vortex core (we have A(0) = A0 and ∂A(0) = ∂A0). However, as explained below, the evolution of this boundary can be deduced from the evolution of the perturbation streamfunction ψ′(t, x), hence it need not be tracked independently. Thus, the present problem is different from, e.g., the vortex-patch problem where the vorticity distribution is fixed (piecewise constant in space) and in the stability analysis the boundary is explicitly perturbed (Elcrat & Protas, 2013). Denoting by ψ′1 and ψ′2 the perturbation streamfunction in the vortex core and in its complement, respectively, system (7) can be recast as (9), where n is the unit vector normal to the boundary ∂A0 pointing outwards and conditions (9d)-(9e) represent the continuity of the normal and tangential perturbation velocity components across the boundary ∂A0, with f′ : ∂A0 → R denoting the unknown value of the perturbation streamfunction at that boundary.
The velocity normal to the vortex boundary ∂A(t) is un := u·n = ∂ψ′1/∂s = ∂ψ′2/∂s, where s is the arc-length coordinate along ∂A(t), cf. (9d). While this quantity identically vanishes in the equilibrium state (5)-(6), cf. (17), in general it will be nonzero, resulting in a deformation of the boundary ∂A(t). This deformation can be deduced from the solution of system (9) as follows. Given a point z ∈ ∂A(t), the deformation of the boundary is described by dz/dt = n un|_{∂A(t)}. Integrating this expression with respect to time yields (10), where z(0) ∈ ∂A0 and 0 < τ ≪ 1 is the time over which the deformation is considered. Thus, the normal deformation of the boundary can be defined as ρ(τ). We also note that at the leading order the area of the vortex core A(t) is preserved by the considered perturbations. We notice that in the exterior domain R2\A0 the problem is governed by Laplace's equation (9c) subject to boundary conditions (9d)-(9f). Therefore, this subproblem can be eliminated by introducing the corresponding Dirichlet-to-Neumann (D2N) map, which is constructed in an explicit form in Appendix A. Thus, equation (9c) with boundary conditions (9d)-(9f) can be replaced with a single relation holding on ∂A0 such that the resulting system is defined in the vortex core A0 and on its boundary only. It should be emphasized that this reduction is exact as the construction of the D2N map does not involve any approximations. We therefore conclude that while the vortex boundary ∂A(t) may deform in the course of the linear evolution, this deformation can be described based solely on quantities defined within A0 and on ∂A0 using relation (10). In particular, the transport of vorticity out of the vortex core A0 into the potential flow is described by the last term on the right-hand side (RHS) in (9a) evaluated on the boundary ∂A0.
Noting that the base state satisfies equations (3)-(4), and using the identity u0 = ∇⊥ψ0, the vorticity equation (9a) can be transformed to a simpler form, where we also used (9b) to eliminate ω′ in favor of ψ′1. Supposing the existence of an eigenvalue λ ∈ C and an eigenfunction ψ̂ : A0 → C, we make the ansatz ψ′1(t, x) = ψ̂(x) e^{λt} for the perturbation streamfunction, which leads to the eigenvalue problem (13) with the additional condition

∂(∆ψ̂)/∂r = 0 at r = 0, (13b)

where ∆−1_M is the inverse Laplacian subject to the boundary condition ∂ψ̂/∂n − M ψ̂ = 0 imposed on ∂A0 and the boundary condition (13b) ensures the perturbation vorticity is differentiable at the origin (such a condition is necessary since the differential operator on the RHS in (13a) is of order three). Depending on whether or not the different differential operators appearing in it are inverted, eigenvalue problem (13) can be rewritten in a number of different, yet mathematically equivalent, forms. However, all these alternative formulations have the form of generalized eigenvalue problems and are therefore more difficult to handle in numerical computations. Thus, formulation (13) is preferred and we will focus on it hereafter.
We note that the proposed formulation ensures that the eigenfunctions ψ̂ have zero circulation, as required, where we used the divergence theorem, equations (9b)-(9c) and the boundary conditions (9e)-(9f). Since it will be needed for the numerical discretization described in § 5, we now rewrite the eigenvalue problem (13) explicitly in the polar coordinate system as (15), with the base-flow velocity components u_{r,0} and u_{θ,0} given by (16). They have the following behavior on the boundary ∂A0, where "∼" means the norms on the left and on the right are equivalent (in the precise sense of norm equivalence); consequently, the essential spectrum (8) of the operator H will have m = −2, so that Πess(H) is a vertical band in the complex plane with |ℜ(z)| ≤ 4. Operator H, cf. (15a), has a non-trivial null space Ker(H). To see this, we consider the "outer" subproblem (18). Then, the eigenfunctions ψ̂_C spanning the null space of the operator H are obtained as solutions of the family of "inner" subproblems (19), where C = 2, 3, . . .. Some of these eigenfunctions are shown in figures 2a-d, where distinct patterns are evident for even and odd values of C.
Asymptotic Solution of Eigenvalue Problem (15)
A number of interesting insights about certain properties of solutions of eigenvalue problem (15) can be deduced by performing a simple asymptotic analysis of this problem in the short-wavelength limit. We focus here on the case of the symmetric dipole (η = 0) and begin by introducing the ansatz (20), where fm, gm : [0, 1] → C, m = 1, 2, . . ., are functions to be determined. Substituting this ansatz in (15a), with the Laplacian moved back to the left-hand side (LHS), and applying well-known trigonometric identities leads after some algebra to the system (21) of coupled third-order ordinary differential equations (ODEs) for the functions fm, where Bm denotes the Bessel operator acting in r and the coefficient functions have the form given in (16). The functions gm(r), m = 1, 2, . . ., satisfy a system identical to (21), which shows that the eigenfunctions ψ̂(r, θ) are either even or odd functions of θ (i.e., they are either symmetric or antisymmetric with respect to the flow centerline). Moreover, the fact that system (21) couples Fourier components corresponding to different m implies that the eigenvectors ψ̂(r, θ) are not separable as functions of r and θ. Motivated by our discussion in § 2.1 about the properties of approximate eigenfunctions of the 2D linearized Euler operator, we will construct approximate solutions of system (21) in the short-wavelength limit m → ∞. In this analysis we will assume that λ ∈ Πess(L) is given and will focus on the asymptotic structure of the corresponding approximate eigenfunctions. We thus consider the asymptotic expansions (23), where λ0, λ1 ∈ C are treated as parameters and f^0_m, f^1_m : [0, 1] → C are unknown functions. Plugging these expansions into system (21) and collecting terms proportional to the highest powers of m, we obtain (24). It follows immediately from (24a) that f^0_{m−1} = f^0_{m+1}. Since this analysis does not distinguish between even and odd values of m, we also deduce (25), which is an inhomogeneous first-order equation defining the leading-order term f^0_m(r) in (23) in terms of f^1_m(r). Without loss of generality the boundary condition (21b) can be replaced with f^0_m(0) = 1. The solution of (25) is then a sum of two parts: the solution of the homogeneous equation obtained by setting the RHS to zero and a particular integral corresponding to the actual RHS. Since at this level the expression f^1_m is undefined, we cannot find the particular integral. On the other hand, the solution of the homogeneous equation can be found directly, noting that this equation is separable and integrating, which gives (26) with the integrals Ir(r) and Ii(r) defined in (27). The limiting (as r → 1) behavior of the functions (27a)-(27b) exhibits an interesting dependence on λ0. In particular, the limiting value of Ir(r) as r → 1 changes when ℜ(λ0) = 4, which defines the right boundary of the essential spectrum in the present problem, cf. (8). Both Ir(r) and Ii(r) diverge as O(1/(1 − r)) when r → 1, which means that the integrals under the exponentials in (26), and hence the entire formula, are not defined at r = 1. While the factor involving Ii(r) is responsible for the oscillation of the function f^0_m(r), the factor depending on Ir(r) determines its growth as r → 1: we see that |f^0_m(r)| becomes unbounded in this limit when ℜ(λ0) < 4 and approaches zero otherwise. The real and imaginary parts of f^0_m(r) obtained for different eigenvalues λ0 are shown in figures 3a,b, where it is evident that both the unbounded growth and the oscillations of f^0_m(r) are localized in the neighbourhood of the endpoint r = 1. Given the singular nature of the solutions obtained at the leading order, the correction term f^1_m(r) is rather difficult to compute and we do not attempt this here. If f^1_{m−1} ≠ f^1_{m+1}, the solution of equation (25) will also include some extra terms in addition to (26)-(27), which would correspond to another possible family of approximate eigenfunctions. However, as will be evident from the discussion below, the solutions given in (26)-(27) capture the relevant behavior. Finally, in view of ansatz (20), the leading-order approximations to the eigenfunctions are obtained by multiplying the function f^0_m(r) by cos(mθ) or sin(mθ) with m → ∞, which introduces rapid oscillations in the azimuthal direction.
We thus conclude that when ℜ(λ0) < 4, the solutions of eigenvalue problem (15) constructed in the form (20) include functions dominated by short-wavelength oscillations whose asymptotic (as m → ∞) structure involves oscillations in both the radial and azimuthal directions, localized near the boundary ∂A0. Since as a result their pointwise values on ∂A0 are not well defined, these solutions should be regarded as "distributions". We remark that the asymptotic solutions constructed above do not satisfy the boundary conditions (21c)-(21d), which is consistent with the fact that they represent approximate eigenfunctions associated with the essential spectrum Πess(H) of the 2D linearized Euler operator. In order to find solutions of eigenvalue problem (15) which do satisfy all the boundary conditions, we have to solve this problem numerically, which is done next.
Numerical Approaches
In this section we first describe the numerical approximation of eigenvalue problem (15)-(16) and then the time integration of the 2D Euler system (1)-(2) with the initial condition in the form of the Lamb-Chaplygin dipole perturbed with some approximate eigenfunctions obtained by solving eigenvalue problem (15)-(16). These computations will offer insights about the instability of the dipole complementary to the results of the asymptotic analysis presented in § 3.
Discretization of Eigenvalue Problem (15)-(16)
Eigenvalue problem (15)-(16) is solved using the spectral collocation method proposed by Fornberg (1996), see also the discussion in Trefethen (2000), which is based on a tensor grid in (r, θ). The discretization in θ involves trigonometric (Fourier) interpolation, whereas that in r is based on Chebyshev interpolation, where we take r ∈ [−1, 1], which allows us to avoid collocating (15a) at the origin when the number of grid points is even. Since the mapping between (r, θ) and (x1, x2) is then 2-to-1, the solution must be constrained to satisfy the condition ψ̂(−r, θ) = ψ̂(r, θ + π), which is fairly straightforward to implement (Trefethen, 2000).
In contrast to (15a), the boundary condition (15b) does need to be evaluated at the origin, which necessitates a modification of the differentiation matrix (since our Chebyshev grid does not include a grid point at the origin). The numbers of grid points discretizing the coordinates r ∈ [−1, 1] and θ ∈ [0, 2π] are linked and both given by N, which is an even integer. The resulting algebraic eigenvalue problem then has the form

H ψ = λ ψ, (30)

where ψ ∈ C^{N²} is the vector of approximate nodal values of the eigenfunction and H ∈ R^{N²×N²} is the matrix discretizing the operator H, cf. (15a), obtained as described above. Problem (30) is implemented in MATLAB and solved using the function eig.
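For readers who want to reproduce the basic workflow, the sketch below assembles a dense matrix on a tensor-product Chebyshev-Fourier grid and extracts its spectrum with a dense eigensolver, mirroring the role played by MATLAB's eig here. The operator being discretized is a placeholder (a Laplacian-like combination without boundary conditions), not the actual operator H of problem (15), and the Chebyshev differentiation matrix follows the standard construction of Trefethen (2000).

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and nodes x on [-1, 1] (Trefethen, 2000)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

# Placeholder operator: Chebyshev d^2/dr^2 combined with d^2/dtheta^2 acting on
# Fourier coefficients in theta (NOT the operator H of problem (15)).
Nr, Nt = 24, 24
Dr, r = cheb(Nr)
k = np.fft.fftfreq(Nt, d=1.0 / Nt)          # integer Fourier wavenumbers in theta
Dtt = np.diag(-(k ** 2))                    # d^2/dtheta^2 in Fourier space
# State vector: Fourier coefficients in theta stacked at each Chebyshev node in r.
A = np.kron(Dr @ Dr, np.eye(Nt)) + np.kron(np.eye(Nr + 1), Dtt)

eigvals, eigvecs = np.linalg.eig(A)         # dense, nonsymmetric eigensolver
print(np.sort(eigvals.real)[-5:])           # a few rightmost eigenvalues
```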
The discretization of all operators in H, cf. (15), was carefully verified by applying them to analytic expressions and comparing the results against exact expressions. The expected rates of convergence were observed as the resolution N was increased.
Since the operator H, and hence also the matrix H, is nonnormal and singular, the numerical conditioning of problem (30) may be poor, especially when the resolution N is refined. In an attempt to mitigate this potential difficulty, we eliminated a part of the null space of H by performing projections on a certain number NC of eigenfunctions associated with the eigenvalue λ = 0 (they are obtained by solving problem (19) with different source terms ϕC, C = 2, 3, . . ., NC + 1, cf. (47)). However, solutions of problem (30) obtained in this way were essentially unchanged as compared to the original version. Moreover, in addition to examining the behavior of the results when the grid is refined (by increasing the resolution N as discussed in § 5), we have also checked the effect of arithmetic precision using the toolbox Advanpix (2017). Increasing the arithmetic precision up to O(10²) significant digits was also not found to have a noticeable effect on the results obtained with small and medium resolutions N ≤ 100 (at higher resolutions the cost of such computations becomes prohibitive). These observations allow us to conclude that the results presented in § 5 are not affected by round-off errors.
In the light of the discussion in § 2.1-§ 2.2, we know the spectrum of the operator H includes essential spectrum in the form of a vertical band in the complex plane, |ℜ(z)| ≤ 4, z ∈ C. The available literature on the numerical approximation of infinite-dimensional non-self-adjoint eigenvalue problems, especially ones featuring essential spectrum, is very scarce. However, since the discretized problem (30) is finite-dimensional and therefore can only have point spectrum, it is expected that at least some of the eigenvalues of the discrete problem will be approximations of the approximate eigenvalues in the essential spectrum Πess(H), whereas the corresponding eigenvectors will approximate the approximate eigenfunctions (we note that the term "approximate" is used here with two distinct meanings: its first appearance refers to the numerical approximation and the second to the fact that these functions are defined as only "close" to being true eigenfunctions, cf. § 2.1). As suggested by the asymptotic analysis presented in § 3, these approximate eigenfunctions are expected to be dominated by short-wavelength oscillations which cannot be properly resolved using any finite resolution N. Thus, since these eigenfunctions are not smooth, we do not expect our numerical approach to yield an exponential convergence of the approximation error. To better understand the properties of these eigenfunctions, we also solve a regularized version of problem (15) in which ψ̂ is replaced with ψ̂δ := R_δ^{-1} ψ̂, where R_δ := (Id − δ²∆), δ > 0 is a regularization parameter and the inverse of R_δ is defined with homogeneous Neumann boundary conditions. The regularized version of the discrete problem (30) then takes the form (31), where the subscript δ denotes regularized quantities and R_δ is the discretization of the regularizing operator R_δ. Since the operator R_δ^{-1} can be interpreted as a low-pass filter with the cut-off length given by δ, the effect of this regularization is to smoothen the eigenvectors by filtering out components with wavelengths less than δ. Clearly, in the limit δ → 0 the original problem (30) is recovered. An analogous strategy was successfully employed by Protas & Elcrat (2016) in their study of the stability of Hill's vortex, where the eigenfunctions also turned out to be singular distributions.
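The smoothing action of R_δ^{-1} = (Id − δ²∆)^{-1} is easy to see in one dimension: in Fourier space it multiplies the wavenumber-k component by 1/(1 + δ²k²), so wavelengths shorter than roughly δ are strongly damped. A minimal sketch, assuming a periodic 1D signal purely for illustration (the paper's operator uses homogeneous Neumann conditions instead):

```python
import numpy as np

def regularize(signal, delta, length=2 * np.pi):
    """Apply (Id - delta^2 d^2/dx^2)^{-1} to a periodic 1D signal via the FFT."""
    n = signal.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=length / n)      # angular wavenumbers
    return np.real(np.fft.ifft(np.fft.fft(signal) / (1.0 + delta ** 2 * k ** 2)))

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
rough = np.sin(x) + 0.5 * np.sin(40 * x)                 # smooth part + short waves
# Attenuation of the k = 40 component: 1 / (1 + (0.2 * 40)^2) ~ 0.015.
print(np.abs(np.fft.fft(regularize(rough, delta=0.2))[40]) /
      np.abs(np.fft.fft(rough)[40]))
```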
Solution of the Time-Dependent Problem (1)-(2)
The 2D Euler system (1)-(2) is transformed to the frame of reference moving with velocity −U e1 and rewritten in terms of the "perturbation" vorticity ω̃(t, x) := ω(t, x) − ω0(x) and the corresponding perturbation streamfunction ψ̃(t, x), such that it takes the form (32), cf. (7). To facilitate the solution of this system with a Fourier pseudospectral method (Canuto et al., 1988), we approximate the unbounded domain with a 2D periodic box Ω ≈ T² := [−L/2, L/2]², where L > 1 is its size. While this is an approximation only, it is known to become more accurate as the size L of the domain increases relative to the radius of the dipole, which remains fixed at one (Boyd, 2001). We note that this is a standard approach and has been successfully used in earlier studies of related problems (Nielsen & Rasmussen, 1997; van Geffen & van Heijst, 1998; Billant et al., 1999; Donnadieu et al., 2009; Brion et al., 2014; Jugier et al., 2020). Since the instability has the form of short-wavelength oscillations localized on the dipole boundary ∂A0, the interaction of the perturbed dipole with its periodic images does not have a significant effect. The perturbation vorticity is then approximated in terms of a truncated Fourier series (33), where M is the number of grid points in each direction in T². Substitution of expansion (33) into (32a) yields a system of coupled ordinary differential equations describing the evolution of the expansion coefficients ωk(t), k ∈ V_M, which is integrated in time using the RK4 method. Product terms in the discretized equations are evaluated in the physical space, with the exponential filter proposed by Hou & Li (2007) used in lieu of dealiasing. We use a massively parallel implementation based on MPI with Fourier transforms computed using the FFTW library (Frigo & Johnson, 2003). Convergence of the results with refinement of the resolution M and of the time step ∆t, as well as with the increase of the size L of the computational domain, was carefully checked. In the results reported in § 6 we use L = 2π.
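A stripped-down sketch of this discretization strategy is given below: the vorticity on a 2π-periodic grid is advanced with a Fourier pseudospectral evaluation of the nonlinear term and RK4 time stepping. The moving frame, the base-flow decomposition, and the Hou-Li exponential filter are omitted (a crude 2/3-style mask stands in for the filter), and a smooth blob replaces the perturbed dipole, so this illustrates the numerical method rather than reproducing the production MPI/FFTW code.

```python
import numpy as np

M, L, dt = 128, 2 * np.pi, 1e-3
k = 2 * np.pi * np.fft.fftfreq(M, d=L / M)            # integer wavenumbers on [0, 2pi)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx ** 2 + ky ** 2
k2[0, 0] = 1.0                                        # avoid dividing by zero for the mean mode
dealias = (np.abs(kx) < M / 3) & (np.abs(ky) < M / 3) # crude mask in place of the Hou-Li filter

def rhs(w_hat):
    """Spectral right-hand side of dw/dt = -u . grad(w) for 2D Euler."""
    psi_hat = w_hat / k2                              # Delta(psi) = -w  =>  psi_hat = w_hat / |k|^2
    u = np.real(np.fft.ifft2(1j * ky * psi_hat))      # u =  d(psi)/dy
    v = np.real(np.fft.ifft2(-1j * kx * psi_hat))     # v = -d(psi)/dx
    wx = np.real(np.fft.ifft2(1j * kx * w_hat))
    wy = np.real(np.fft.ifft2(1j * ky * w_hat))
    return -dealias * np.fft.fft2(u * wx + v * wy)

def rk4_step(w_hat):
    s1 = rhs(w_hat)
    s2 = rhs(w_hat + 0.5 * dt * s1)
    s3 = rhs(w_hat + 0.5 * dt * s2)
    s4 = rhs(w_hat + dt * s3)
    return w_hat + dt / 6.0 * (s1 + 2 * s2 + 2 * s3 + s4)

# Smooth compact vorticity blob as a stand-in initial condition.
x = np.linspace(0, L, M, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
w_hat = np.fft.fft2(np.exp(-((X - np.pi) ** 2 + (Y - np.pi) ** 2) / 0.1))
for _ in range(100):
    w_hat = rk4_step(w_hat)
```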
Solution of the Eigenvalue Problem
In this section we describe solutions of the discrete eigenvalue problem (30) and its regularized version (31). We mainly focus on the symmetric dipole with η = 0, cf. figure 1a. In order to study the dependence of the solutions on the numerical resolution, problems (30)–(31) were solved with N ranging from 20 to 260, where the largest resolution was limited by the amount of RAM available on a single node of the computer cluster we had access to. The discrete spectra of problem (30) obtained with N = 40, 80, 160, 260 are shown in figures 4a-d. We see that for all resolutions N the spectrum consists of purely imaginary eigenvalues densely packed on the vertical axis and a "cloud" of complex eigenvalues clustered around the origin (for each N there is also a pair of purely real spurious eigenvalues increasing as |λ| = O(N) when the resolution is refined; they are not shown in figures 4a-d). We see that as N increases the cloud formed by the complex eigenvalues remains restricted to the band −2 ⪅ ℜ(λ) ⪅ 2, but expands in the vertical (imaginary) direction. The spectrum is symmetric with respect to the imaginary axis, as is expected for a Hamiltonian system. The eigenvalues fill the inner part of the band ever more densely as N increases and, in order to quantify this effect, in figures 5a-d we show the eigenvalue density µ(z), defined as the number of eigenvalues contained in a small rectangular cell of the complex plane centered at z, where Δλ_r, Δλ_i ∈ R are the half-sizes of the cell used to count the eigenvalues, with Δλ_i ≈ 500Δλ_r reflecting the fact that the plots are stretched in the vertical direction. We see that as the resolution N is refined the eigenvalue density µ(z) increases near the origin.
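For reference, the eigenvalue-density diagnostic described above amounts to simple cell counting; the sketch below is a minimal version of such a computation, with the cell half-sizes Δλ_r, Δλ_i left as user-supplied arguments (the 1:500 aspect ratio mentioned in the text would be imposed through these two values).

```python
import numpy as np

def eigenvalue_density(eigs, dlam_r, dlam_i):
    """Count eigenvalues in rectangular cells of half-sizes (dlam_r, dlam_i)
    covering the portion of the complex plane occupied by `eigs`."""
    re, im = np.real(eigs), np.imag(eigs)
    r_edges = np.arange(re.min(), re.max() + 2 * dlam_r, 2 * dlam_r)
    i_edges = np.arange(im.min(), im.max() + 2 * dlam_i, 2 * dlam_i)
    mu, _, _ = np.histogram2d(re, im, bins=[r_edges, i_edges])
    return mu, r_edges, i_edges   # mu[p, q] = number of eigenvalues in cell (p, q)
```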
As discussed in § 2.1, a key question concerning the linear stability of 2D Euler flows is the existence of point spectrum Π_0(L) of the linear operator L, cf. (7). However, the usual approach based on discretizing the continuous eigenvalue problem (15)–(16), cf. § 4.1, is unable to directly distinguish numerical approximations of the true eigenvalues from those of the approximate eigenvalues. This can be done indirectly by solving the discrete problem (30) with different resolutions N, since approximations to true eigenvalues will then converge to well-defined limits as the resolution is refined; in contrast, approximations to approximate eigenvalues will simply fill up the essential spectrum Π_ess(L) ever more densely in this limit. In this way we have found a single eigenvalue, denoted λ_0, which, together with its negative −λ_0 and the complex-conjugate pair ±λ_0^*, satisfies the above condition, see Table 1. As is evident from this table, the differences between the real parts of λ_0 computed with different resolutions N are small, just over 1%, although the variation of the imaginary part is larger. Moreover, as will be discussed in § 6, λ_0 is in fact the only eigenvalue associated with a linearly growing mode. We now take a closer look at the purely imaginary eigenvalues, which are plotted for different resolutions N in figure 6. It is known that these approximate eigenvalues are related to the periods of Lagrangian orbits associated with closed streamlines in the base flow (Cox, 2014). In particular, if the maximum period is bounded, τ_max < ∞, this implies the presence of horizontal gaps in the essential spectrum. However, as shown in Appendix C, the Lamb-Chaplygin dipole does involve Lagrangian orbits with arbitrarily long periods, such that the essential spectrum Π_ess(L) includes the entire imaginary axis iR. The results shown in figure 6 are consistent with this property, since the gap evident in the spectra shrinks, albeit very slowly, as the numerical resolution N is refined. The reason why these gaps are present is that the orbits sampled with the discretization described in § 4.1 have only finite maximum periods, which however become longer as the discretization is refined.
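The convergence test used above to separate approximations of true eigenvalues from those merely filling the essential spectrum can be mimicked by a simple matching step: an eigenvalue of the coarser problem is retained only if it has a close counterpart at the finer resolution. This is a schematic sketch; the tolerance is an illustrative choice, not a value taken from the paper.

```python
import numpy as np

def converged_eigenvalues(eigs_coarse, eigs_fine, tol=1e-2):
    """Return eigenvalues computed at a coarse resolution that persist
    (within `tol`) in the spectrum computed at a finer resolution."""
    eigs_fine = np.asarray(eigs_fine)
    return np.array([lam for lam in eigs_coarse
                     if np.min(np.abs(eigs_fine - lam)) < tol])
```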
Finally, we analyze eigenvectors of problem ( 30) and choose to present them in terms of vorticity, i.e., we show ω i = −∆ ψ i , where the subscript i = 0, 1, 2 enumerates the corresponding eigenvalues.First, in figures 7a,b,c we illustrate the convergence pattern of the eigenvector ω 0 corresponding to the eigenvalue λ 0 , cf.Table 1, and representing an exponentially growing mode as the resolution is refined.We see that as N increases the approximations of the eigenvector converge to a constant value within the domain A 0 and diverge near its boundary, in agreement with the distributional nature of these eigenvectors established by our asymptotic analysis in § 3.More specifically, we see that the magnitude | ω 1 (r, θ)| of the eigenvector grows rapidly near the boundary, i.e., as r → 1, which is consistent with the behavior of the function f 0 m (r) describing the asymptotic solution, cf.expressions ( 26)-( 28) and figures 3(a,b).In addition, a rapid variation of | ω 1 (r, θ)| in the azimuthal coordinate θ is also evident for r ≲ 1.However, something that could not be discerned by the asymptotic analysis is that these oscillations are mostly concentrated near the azimuthal angles θ = ±π/4, ±3π/4.As expected, both the growth in the radial direction and the oscillations in the azimuthal direction become more rapid as the resolution N is refined.Given the distributional nature of the solutions of the eigenvalue problem (15), the classical notion of "convergence" of a numerical scheme is not entirely applicable here and instead one would need to refer to more refined concepts such as "weak convergence", but since they are quite technical, we do not pursue this avenue here.However, as will be shown in § 6, even if they are not fully resolved, the eigenvectors computed here still contain useful information.
Next, in figures 8a,c,e we compare the real parts of the eigenvectors associated with different eigenvalues: the complex eigenvalue λ 0 corresponding to the exponentially growing mode (already shown in figure 7), a purely real eigenvalue λ 1 and a purely imaginary eigenvalue λ 2 .We see that while these eigenvectors are qualitatively similar and share the features described above, the eigenvector ω 0 is symmetric with respect to the flow centerline, whereas the eigenvectors ω 1 and ω 2 are antisymmetric.Another difference is that in the eigenvector ω 0 associated with the eigenvalue λ 0 the oscillations are mostly concentrated near the azimuthal angles θ = ±π/4, ±3π/4, cf.figure 8a; on the other hand, in the eigenvectors ω 1 and ω 2 the oscillations are mostly concentrated near the stagnation points x a and x b , cf. figure 8(c,e).
The numerical approximations of the eigenvectors are characterized by short-wavelength oscillations.Here, "short-wavelength" means that a significant variation of the magnitude | ω(r, θ)| of the eigenvector with respect to both r and θ occurs on the length scale given by the grid size which shrinks as the resolution is refined.This feature is also borne out in figure 10 showing the enstrophy spectrum of the initial condition involving the eigenvector ω 0 .It is evident from this figure that significant contributions to the enstrophy come from a broad range of length scales, including the smallest length scales resolved on the numerical grid.The eigenvectors associated with all other eigenvalues (not shown here for brevity) are also dominated by short-wavelength oscillations localized near different parts of the boundary ∂A 0 .Since due to their highly oscillatory nature the eigenvectors shown in figures 8a,c,e are not fully resolved, in figures 8b,d,f we show the corresponding eigenvectors of the regularized eigenvalue problem (31) where we set δ = 0.05.We see that in the regularized eigenvectors oscillations are shifted to the interior of the domain A 0 and their typical wavelengths are much larger.The eigenvalues obtained by solving the regularized problem (31) are distributed following a similar pattern as revealed by the eigenvalues of the original problem (30), cf.figures 4b and 5b.In particular, for the main eigenvalue of interest λ 0 , cf.Table 1, the difference with respect to the corresponding eigenvalue of the regularized problem λ δ,0 is rather small and we have |ℜ(λ 0 − λ δ,0 )|/ℜ(λ 0 ) ≈ 0.024 (both are computed here with the resolution N = 80).The remaining eigenvalues of the regularized problem also form a "cloud" filling the essential spectrum Π ess (L).
Solution of the discrete eigenvalue problem (30) for asymmetric dipoles with η > 0 leads to eigenvalue spectra and eigenvectors qualitatively very similar to those shown in figures 4a-d and 8a,c,e, hence for brevity they are not shown here.The only noticeable difference is that the eigenvectors are no longer symmetric or antisymmetric with respect to the flow centerline.
Figure 7: Real parts of the eigenvector ω 0 corresponding to the eigenvalue λ 0 , cf.Table 1, and representing an exponentially growing mode obtained by solving the discrete eigenvalue problem (30) with different resolutions N .The grids covering the surface plots represent the discretizations of the domain A 0 used for different N .
Solution of the Evolution Problem
As in § 5, we focus on the symmetric case with η = 0. The 2D Euler system (1)–(2) is solved numerically as described in § 4.2, with the initial condition for the perturbation vorticity ω(t, x) given in terms of the eigenvectors shown in figures 8a-f, cf. (35). Unless indicated otherwise, the numerical resolution is M = 512 grid points in each direction. By taking ε = 10⁻⁴ we ensure that the evolution of the perturbation vorticity is effectively linear up to t ≲ 70 and, to characterize its growth, we define the perturbation enstrophy as in (36). The evolution of this quantity is shown in figure 9a for the six considered initial conditions and for times before nonlinear effects become evident. In all cases we see that after a transient period the perturbation enstrophy starts to grow exponentially; the growth rate found via a least-squares fit is ≈ 0.127 and is essentially equal to the real part of the eigenvalue λ_0, cf. Table 1. The duration of the transient, which involves an initial decrease of the perturbation enstrophy, is different in different cases and is shortest when the eigenfunctions ω_0 and ω_δ,0 are used as the initial conditions in (35) (in fact, in the latter case the transient is barely present). The reason for this behavior is that ω_0 is the sole true eigenvector of the operator L, whereas ω_1 and ω_2 are only approximate eigenvectors associated with the (approximate) eigenvalues λ_1 and λ_2 belonging to the essential spectrum Π_ess(L) rather than to the point spectrum Π_0(L).
As a result, ω 0 represents the only linearly growing mode, such that when ω 1 , ω 2 or any other approximate eigenvector is used as the initial condition in (35), a transient behavior ensues where the solution ω(t) of system (32) approaches the trajectory involving the growing mode ℜ e λ 0 t ω 0 .Hereafter we will focus on the flow obtained with the initial condition (35) given in terms of the eigenfunction ω 0 , cf. figure 8a.The effect of the numerical resolution N used in the discrete eigenvalue problem (30) is analyzed in figure 9b, where we show the perturbation enstrophy (36) in the flows with the eigenvector ω 0 used in the initial conditions (35) computed with different N .We see that refined resolution leads to a longer transient period while the rate of the exponential growth λ is unchanged.This demonstrates that this growth rate is in fact a robust property unaffected by the underresolution of the unstable mode.
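The least-squares estimate of the growth rate quoted above can be obtained, for example, by fitting a straight line to the logarithm of the perturbation enstrophy over the post-transient window; the sketch below illustrates this, with the fitting window [t0, t1] an assumption to be adapted to the data at hand.

```python
import numpy as np

def growth_rate(t, enstrophy, t0=20.0, t1=60.0):
    """Fit E(t) ~ C * exp(rate * t) on the window [t0, t1] by linear
    least squares applied to log E(t); returns the fitted rate."""
    t = np.asarray(t)
    enstrophy = np.asarray(enstrophy)
    mask = (t >= t0) & (t <= t1)
    rate, _ = np.polyfit(t[mask], np.log(enstrophy[mask]), 1)
    return rate
```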
The enstrophy spectrum of the initial condition (35) and of the perturbation vorticity ω(t, x) at different times t ∈ (0, 60] is shown in figure 10 as a function of the wavenumber k := |k|. It is defined as an integral over the circle S_k of radius k in the wavenumber space, where σ is the azimuthal angle in this space (with some abuse of notation justified by simplicity, the wavevector k is treated here as a continuous rather than discrete variable). The fact that its enstrophy spectrum is essentially independent of the wavenumber k confirms that the eigenvector ω_0 appearing in the initial condition (35) is a distribution rather than a smooth function.
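A standard way to obtain such a spectrum from the discrete Fourier coefficients is to bin the modal enstrophy contributions by the magnitude of the wavevector; the following sketch does this up to an overall normalization constant, which is an assumption here since the precise definition in the text involves an integral over the circle S_k.

```python
import numpy as np

def enstrophy_spectrum(w_hat, kx, ky, dk=1.0):
    """Radially binned enstrophy spectrum built from the Fourier coefficients
    `w_hat` of the vorticity (up to a normalization factor)."""
    kmag = np.sqrt(kx**2 + ky**2).ravel()
    z = (np.abs(w_hat)**2).ravel() / w_hat.size**2     # modal enstrophy contributions
    edges = np.arange(0.0, kmag.max() + dk, dk)
    spec, _ = np.histogram(kmag, bins=edges, weights=z)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, spec
```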
The enstrophy spectra of the perturbation vorticity ω(t, x) during the flow evolution show a rapid decay at high wavenumbers which is the effect of the applied filter, cf.4.2.However, after the transient, i.e., for 20 ⪅ t ≤ 60, the enstrophy spectra have very similar forms, except for a vertical shift which increases with time t.This confirms that the time evolution is dominated by linear effects as there is little energy transfer to higher (unresolved) modes.This is also attested to by the fact that for all the cases considered in figure 9a the relative change of the total energy Ω |u(t, x)| 2 dx and of the total enstrophy Ω ω(t, x) 2 dx, which are conserved quantities in the Euler system (1)-( 2), is at most of order O(10 −4 ) (this small variation of the conserved quantities is due to the action of the filter and the fact that the time-integration scheme is not strictly conservative, cf.§ 4.2).Since in the numerical solution the total circulation is given by the Fourier coefficient [ ω(t)] 0 = [ ω 0 (t)] 0 + [ ω i (t)] 0 , it remains zero by construction throughout the entire flow evolution.We now go on to discuss the time evolution of the perturbation vorticity in the physical space and in figures 11a and 11b we show ω(t, x) at the times t = 4 and t = 21, respectively, which correspond to the transient regime and to the subsequent period of an exponential growth.During that period, i.e., for 20 ⪅ t ≤ 60, the structure of the perturbation vorticity field does not change much.We see that as the perturbation evolves a number of thin vorticity filaments is ejected from the vortex core A 0 into the potential flow with the principal ones emerging at the azimuthal angles θ ≈ ±π/4, ±3π/4, i.e., in the regions of the vortex boundary where most of the short-wavelength oscillations evident in the eigenvector ω 0 are localized, cf.figure 8a.With thickness on the order of a few grid points, these filaments are among the finest structures that can be resolved in computations with the resolution M we use.The perturbation remains symmetric with respect to the flow centerline for all times and since the vorticity ω 0 of the base flow is antisymmetric, the resulting total flow ω(t, x) does not possess any symmetries.The perturbation vorticity ω(t, x) realizing the exponential growth in the flows corresponding to the initial condition involving the eigenvectors ω 1 and ω 2 (and their regularized versions ω δ,1 and ω δ,2 ) is essentially identical to the perturbation vorticity shown in figure 11b, although its form during the transient regime can be quite different.In particular, the perturbation eventually becomes symmetric with respect to the flow centerline even if the initial condition ( 35) is antisymmetric.The same is true for flows obtained with initial condition corresponding to all approximate eigenvalues other than λ 1 and λ 2 (not shown here for brevity).We did not attempt to study the time evolution of asymmetric dipoles with η > 0 in (5a), since their vorticity distributions are discontinuous making computation of such flows using the pseudospectral method described in § 4.2 problematic.
Discussion and Final Conclusions
In this study we have considered an open problem concerning the linear stability of the Lamb-Chaplygin dipole which is a classical equilibrium solution of the 2D Euler equation in an unbounded domain.We have considered its stability with respect to 2D circulationpreserving perturbations and while our main focus was on the symmetric configuration with η = 0, cf.figure 1a, we also investigated some aspects of asymmetric configurations with η > 0. Since the stability of the problem posed on a unbounded domain is difficult to study both with asymptotic methods and numerically, we have introduced an equivalent formulation with all relations defined entirely within the compact vortex core A 0 , which was accomplished with the help of a suitable D2N map accounting for the potential flow outside the core, cf.Appendix A. The initial-value problem for the 2D Euler equation with a compactly supported initial condition is of a free-boundary type since the time evolution of the vortex boundary ∂A(t) is a priori unknown and must be determined as a part of the solution of the problem.This important aspect is accounted for in our formulation of the linearized problem, cf.relation (10).The operator representing the 2D Euler equation linearized around the Lamb-Chaplygin dipole has been shown to have an infinite-dimensional null space Ker(L) and the eigenfunctions ψ C , C = 2, 3, . . ., spanning this null space, cf.figures 2a-d, can potentially be used to search for nearby equilibrium solutions.
We have studied the linear stability of the Lamb-Chaplygin dipole using a combination of asymptotic analysis ( § 3) and numerical computation ( § 5) employed to construct approximate solutions of the eigenvalue problem (15) together with the numerical time-integration of the 2D Euler system (32) in § 6.These three approaches offer complementary insights reinforcing the main conclusion, namely, that the Lamb-Chaplygin dipole is linearly unstable with the instability realized by a single eigenmode ω 0 , cf. figure 8a, featuring high-frequency oscillations localized near the vortex boundary ∂A 0 and the corresponding eigenvalue λ 0 embedded in the essential spectrum Π ess (L) of the linearized operator L. In other words, there is no "smallest" length scale characterizing the unstable mode, which is why it cannot be accurately resolved using any finite numerical resolution.This is one of the reasons why this form of instability specific to the inviscid evolution is so fundamentally different from the mechanisms underlying the growth of perturbations during the viscous evolution of the dipole that were observed in all earlier studies (Nielsen & Rasmussen, 1997;van Geffen & van Heijst, 1998;Billant et al., 1999;Donnadieu et al., 2009;Brion et al., 2014;Jugier et al., 2020).
An approximate solution of eigenvalue problem (15) obtained in § 3 using an asymptotic technique reveals the existence of approximate eigenfunctions in the form of shortwavelength oscillations localized near the vortex boundary ∂A 0 .Remarkably, eigenfunctions with such properties exist when ℜ(λ 0 ) < 4, i.e., when λ 0 is in the essential spectrum Π ess (H) of the 2D linearized Euler operator and it is interesting that the asymptotic solution has been able to capture this value exactly.We remark that with exponential terms involving divergent expressions as arguments, cf. ( 26), this approach has the flavor of the WKB analysis.We note that while providing valuable insights about the structure of the approximate eigenvectors the asymptotic analysis developed in § 3 does not allow us to determine the eigenvalues of problem (15), i.e., λ 0 serves as a parameter in this analysis.Moreover, since the obtained approximate solution represents only the asymptotic (in the short-wavelength limit m → ∞) structure of the eigenfunctions, it does not satisfy the boundary conditions (21c)-(21d).To account for these limitations, complementary insights have been obtained by solving eigenvalue problem (15) numerically as described in § 4.1.
Our numerical solution of eigenvalue problem (15) obtained in § 5 using different resolutions N yields results consistent with the general mathematical facts known about the spectra of the 2D linearized Euler operator, cf.§ 2.1.In particular, these results feature eigenvalues of the discrete problem (30) filling ever more densely a region around the origin which is bounded in the horizontal (real) direction and expands in the vertical (imaginary) direction as the resolution N is increased, which is consistent with the existence of an essential spectrum Π ess (H) in the form of a vertical band with the width determined by the largest Lyapunov exponent of the flow, cf. ( 8).The corresponding eigenvectors are dominated by short-wavelength oscillations localized near the vortex boundary ∂A 0 , a feature that was predicted by the asymptotic solution constructed in § 3.However, solutions of the evolution problem for the perturbation vorticity with the initial condition (35) corresponding to different eigenvectors obtained from the discrete problems ( 30)-( 31) reveal that λ 0 (and its complex conjugate λ * 0 ) are the only eigenvalues associated with an exponentially growing mode with a growth rate equal to the real part of the eigenvalue, i.e., for which λ ≈ ℜ(λ 0 ).When eigenvectors associated with eigenvalues other than λ 0 or λ * 0 are used in the initial condition (35), the perturbation enstrophy (36) reveals transients of various duration followed by exponential growth with the growth rate again given by ℜ(λ 0 ).This demonstrates that ±λ 0 and ±λ * 0 are the only "true" eigenvalues and form the point spectrum Π 0 (H) of the operator associated with the 2D Euler equation linearized around the Lamb-Chaplygin dipole.On the other hand, all other eigenvalues of the discrete problems ( 30)-( 31) can be interpreted as numerical approximations to approximate eigenvalues belonging to the essential spectrum Π ess (H).More precisely, for each resolution N the eigenvalues of the discrete problems other than ±λ 0 and ±λ * 0 approximate a different subset of approximate eigenvalues in the essential spectrum Π ess (H) and the corresponding eigenvectors are approximations to the associated approximate eigenvectors.This interpretation is confirmed by the eigenvalue density plots shown in figures 5a-d and is consistent with what is known in general about the spectra of the 2D linearized Euler operator, cf.§ 2.1.
In figure 9a we noted that when the initial condition ( 35) is given in terms of the eigenvector ω 0 , the perturbation enstrophy E(t) also exhibits a short transient before attaining exponential growth with the rate λ ≈ ℜ(λ 0 ).The reason for this transient is that, being non-smooth, the eigenvector ω 0 is not fully resolved, which is borne out in figure 10 (in fact, due to the distributional nature of this and other eigenvectors, they cannot be accurately resolved with any finite resolution).Thus, this transient period is needed for some underresolved features of the perturbation vorticity to emerge, cf.figure 11a vs. figure 11b.However, we note that in the flow evolution originating from the eigenvector ω 0 the transient is actually much shorter than when other eigenvectors are used as the initial condition (35), and is nearly absent in the case of the regularized eigenvector ω δ,0 .We emphasize that non-smoothness of eigenvectors associated with eigenvalues embedded in the essential spectrum is consistent with the known mathematical results predicting this property (Lin, 2004).Interestingly, the eigenfunctions ψ C , C = 2, 3, . . ., associated with the zero eigenvalue λ = 0 are smooth, cf.figures 2a-d.
We also add that there are analogies between our findings and the results of the linear stability analysis of Hill's vortex with respect to axisymmetric perturbations where the presence of both the continuous and point spectrum was revealed, the latter also associated with non-smooth eigenvectors (Protas & Elcrat, 2016).
In the course of the linear evolution of the instability the vortex region A(t) changes shape as a result of the ejection of thin vorticity filaments from the vortex core A 0 , cf. figures 11a,b.However, both the area |A(t)| of the vortex and its total circulation Γ are conserved at the leading order, cf. ( 11) and ( 14).We reiterate that the perturbation vorticity fields shown in figures 11a,b were obtained with underresolved computations and increasing the resolution M would result in the appearance of even finer filaments such that in the continuous limit (M → ∞) some of the filaments would be infinitely thin.
In this study we have considered the linear stability of the Lamb-Chaplygin dipole with respect to 2D perturbations.It is an interesting open question how the picture presented here would be affected by inclusion of 3D effects.We are also exploring related questions in the context of the stability of other equilibria in 2D Euler flows, including various cellular flows.
where α_k, β_k ∈ R, k = 1, 2, . . ., are expansion coefficients to be determined and the constant term is omitted since we adopt the normalization ∫_{∂A_0} f′(s) ds = 0. The boundary value f′ of the perturbation streamfunction on ∂A_0 serves as the argument of the D2N operator, cf. (9c). Expanding it in a Fourier series with known coefficients f_k^c, f_k^s ∈ R, k = 1, 2, . . ., and using the boundary condition ψ′_2(1, θ) = f′(θ), θ ∈ [0, 2π], cf. (9c), the corresponding Neumann data can be computed as in (40), in which the k-th Fourier mode of f′ is scaled in proportion to the wavenumber k; this expresses the action of the D2N operator M on f′. In order to make this expression explicitly dependent on f′, rather than on its Fourier coefficients as in (40), we use the formulas for these coefficients together with their approximations based on the trapezoidal quadrature (which is spectrally accurate when applied to smooth periodic functions (Trefethen, 2000)), i.e., sums of the form Σ_{l=1}^N f′(θ_l) cos(kθ_l) and Σ_{l=1}^N f′(θ_l) sin(kθ_l), cf. (41a), where {θ_l}_{l=1}^N are grid points uniformly discretizing the interval [0, 2π]. Using these relations, the D2N map (40) truncated at N/2 Fourier modes and evaluated at the grid point θ_j can be written as a matrix-vector product, where the quantities k [cos(kθ_j) cos(kθ_l) + sin(kθ_j) sin(kθ_l)], summed over the retained wavenumbers k, cf. (43), are the entries of a symmetric matrix M ∈ R^{N×N} approximating the D2N operator.
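As an illustration of this construction, the sketch below assembles a symmetric matrix with entries built from the kernel appearing in (43). The overall prefactor 2/N (coming from the trapezoidal-rule approximation of the Fourier coefficients) and the sign convention are assumptions of this sketch and should be checked against the conventions used in the paper.

```python
import numpy as np

def d2n_matrix(N):
    """Symmetric N x N approximation of the Dirichlet-to-Neumann map on the
    unit circle: mode k of the boundary data is scaled by the wavenumber k.
    The 2/N prefactor follows from the trapezoidal-rule quadrature (assumed);
    the sign depends on the orientation of the normal and is not fixed here."""
    theta = 2.0 * np.pi * np.arange(N) / N
    M = np.zeros((N, N))
    for k in range(1, N // 2 + 1):
        ck, sk = np.cos(k * theta), np.sin(k * theta)
        M += k * (np.outer(ck, ck) + np.outer(sk, sk))
    return (2.0 / N) * M
```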
Figure 1 :
Figure 1: Streamline pattern inside the vortex core A 0 of (a) a symmetric (η = 0) and (b) asymmetric (η = 1/4) Lamb-Chaplygin dipole.Outside the vortex core the flow is potential.The thick blue line represents the vortex boundary ∂A 0 whereas the red symbols mark the hyperbolic stagnation points x a and x b .
Figure 3 :
Figure 3: Radial dependence (a) of the eigenvectors f 0 m (r) associated with real eigenvalues λ 0 = 2 (red solid line) and λ 0 = 6 (blue dashed line), and (b) of the real part (red solid line) and the imaginary part (blue dashed line) of the eigenvector f 0 m (r) associated with complex eigenvalue λ 0 = 3 + 10i.Panel (b) shows the neighbourhood of the endpoint r = 1.
Figure 4 :
Figure 4: Eigenvalues obtained by solving the discrete eigenvalue problem (30) with different indicated resolutions N .The eigenvalues ±λ 0 and ±λ *0 which converge to well-defined limits as the resolution N is refined, cf.Table1, are marked in red.The eigenvalue λ 0 is associated with the only linearly unstable mode, cf.§ 6.
Figure 8 :
Figure 8: Real parts of the eigenvectors corresponding to the indicated eigenvalues obtained by solving (a,c,e) eigenvalue problem (30) and (b,d,f) the regularized problem (31) using the resolution N = 80.The grid shown on the surface represents the discretization of the domain A 0 used in the numerical solution of problems (30) and (31).
Figure 11 :
Figure 11: Perturbation vorticity ω(t, x) in the flow corresponding the initial condition (35) involving the eigenvector ω 0 during (a) the transient regime and (b) the period of exponential growth.
Table 1 :
Eigenvalue λ 0 associated with the linearly growing mode, cf. § 6, obtained by solving the discrete eigenvalue problem (30) with different resolutions N.
Character Based Pattern Mining for Neology Detection
Detecting neologisms is essential in real-time natural language processing applications. Not only does it make it possible to follow the lexical evolution of languages, but it is also essential for updating linguistic resources and parsers. In this paper, neology detection is considered as a classification task where a system has to assess whether a given lexical item is an actual neologism or not. We propose a combination of an unsupervised data mining technique and a supervised machine learning approach. It is inspired by current research in stylometry and on token-level and character-level patterns. We train and evaluate our system on a manually designed reference dataset in French and Russian. We show that this approach is able to largely outperform state-of-the-art neology detection systems. Furthermore, character-level patterns exhibit good properties for multilingual extensions of the system.
Introduction
This paper deals with automatic detection of formal neologisms in French and Russian, with a language-agnostic objective. Formal neologisms are composed of a new form linked to a new meaning, in opposition to semantic neologisms, composed of a new meaning with an existing form. Whereas formal neologisms represent a tiny part of lexical items in corpora, and thus are not yet attracting a lot of research, they are part of the living lexicon of a given language and notably the gate to understand the evolution of languages.
The remainder of the paper is organized as follows. Section 2 details related works on computational approaches to neology. Section 3 describes key aspects of our method and experiments for neology detection. Section 4 presents evaluation results for French and Russian. Finally, Section 5 summarizes the experiments and evokes future developments.
Previous work
The study of neology has not been a high priority within computational linguistics, for two reasons. First, large diachronic electronic corpora were scarcely available for different languages until recently. Second, novel lexical units represent less than 5 percent of lexical units in corpora, according to several studies (e.g., Renouf, 1993). But, from a bird's-eye view, linguistic change is the complementary aspect of the synchronic structure, and every unit in every language is time-related and has a life-cycle.
As shown by Lardilleux et al. (2011), new words and hapaxes are continuously appearing in textual data. Every lexical unit is subjected to time: form and meaning can change, due to socio-linguistic (diastraty) and geographical (diatopy) variations. The increasing availability of electronic (long or short-term) diachronic corpora, together with advances in word-formation theory and in machine learning techniques, motivated the recent emergence of neology tracking systems (Cabré and De Yzaguirre, 1995; Kerremans et al., 2012; Gérard et al., 2014; Cartier, 2016). These tools have a two-fold objective: gaining a better overview of language lifecycle(s), and allowing lexicographers and computational linguists to update lexicographic resources and language processing tools.
From a NLP point of view, the main questions are : how can we automatically track neologisms, categorize them and follow their evolution, from their first appearance to their integration or disappearance? is it possible to induce neology-formation procedures from expert-curated examples and therefore predict new words formation?
The standard, and essentially the only, approach to formal neology tracking consists in extracting novel forms from monitor corpora using lexicographic resources as a reference dictionary to induce unknown words. This is often called the "exclusion dictionary architecture" (EDA). The first system designed for English is due to Renouf (1993): a monitor corpus and a reference dictionary from which unknown words can be derived. Further filters are then applied to eliminate spelling errors and proper nouns.
Four main difficulties arise from this approach.
First, the design of a reference exclusion dictionary requires large machine-readable dictionaries: this entails specific procedures to apply this architecture to under-resourced languages, and an up-to-date dictionary for other languages. Second, the EDA architecture is not sufficient by itself: most of the unknown words are proper nouns, spelling errors or other cases derived from boilerplate removal, which entails a post-processing phase. Third, these systems do not take into account the sociological and diatopic aspects of neologisms, as they limit their corpora to specific domains: an ideal system should be able to extend its monitoring to new corpora and maintain diastratic meta-data to characterize novel forms. Fourth, post-filtering has to be performed carefully. For instance, excluding all proper nouns makes it impossible to detect antonomasia (i.e. the fact that a proper noun is used as a common noun, for example "Is he a new kind of Kennedy?").
In many cases, the EDA technique is complemented by a human validation phase, in which experts have to assign each detected "neologism candidate" (NC) a label, either "excluded" or "neologism". This phase makes it possible to complement the exclusion dictionary and to filter candidates so as to achieve 100% precision for subsequent analysis. Usually, the guidelines for assessing the class of NCs are as follows: a formal neologism is defined as a word not yet pertaining to usage in the given language at assessment time 1 . A non-neologism is a word pertaining to one of the following categories: a spelling mistake, a boilerplate outcome, a word already in usage, etc. With this procedure, Cartier (2016) estimated on a one-year subset that 59.87% of French NCs were actual neologisms. In Russian, however, only 30% of NCs were evaluated as actual neologisms, mainly due to the fact that the EDA technique was in its early phases and that the POS-tagger and spell-checker were not accurate enough. Thus, this approach is not suitable for real-time detection or multilingual extension.
In this paper, we advocate a new method to overcome the drawbacks of this method. It combines an unsupervised text mining component to retrieve salient features of positive and negative examples, and a supervised method using these features to automatically detect new neologisms from on-going texts.
Dataset and Methods
To the best of our knowledge, there are no existing NLP techniques that take advantage of text mining for detecting neologisms. Intuitively and practically, formal neologisms, as new form-meaning pairs, appear in specific contexts, such as quotation marks (c'est une véritable "trumperie" 2 ) or metalinguistic markers (ce que nous pouvons appeler des catholibans 3 ). The word-formation rules at stake (Schmid, 2015) involve affixation, composition and borrowings, each implying specific character-based features. From this intuition and analysis, we propose a novel method combining an unsupervised technique to retrieve the salient features of neologisms (internal structure and context), and a supervised machine learning approach to detect formal neologisms in on-going texts. In the following, we first present our corpora and reference data and then detail the algorithms used.
Corpora and Reference Data
As reference data, we use the evaluation data proposed by (Cartier, 2016). It contains a list of N Cs and a label : excluded or neologism.
In order to see the candidates in context we queried their website 4 to retrieve texts containing one or more N C occurrences. The dataset used here is then limited to N Cs having at least one context available. Table 1 exhibits the statistics about this dataset 5 . One can see that the lack of experts for Russian has led to a much smaller dataset. Furthermore, the ratio of positive candidates is smaller in Russian due to a lower quality of the components.
Contextual character-level features for classification
The data mining component presented here aims to model the context of the candidates in order to classify them. It is an important tool to detect salient contextual and internal features of formal neologisms. Many data mining techniques have been used to deal with textual data (Borgelt, 2012); among them, we chose an algorithm suitable for the particular type of patterns we wanted to compute (character-level patterns). Character-level analysis has received growing attention from the scientific community in recent years. This approach has proved its efficiency in various tasks (in particular in multilingual settings), among which Authorship Attribution (Brixtel, 2015), Information Extraction (Lejeune et al., 2015), Hashtags Prediction (Dhingra et al., 2016) or Terminology Extraction (Korenchuk, 2017). In this experiment, we mine closed frequent token and character sequences from the candidates' contexts using the maximal repeated strings algorithm from Ukkonen (2009). These character-level patterns (CLP) are computed in linear time thanks to augmented suffix arrays (Kärkkäinen et al., 2006). The CLP computed in this paper have two properties (illustrated by the sketch following this list):
• they have a minimal frequency of 2 (in other words, they are repeated);
• they are closed: a CLP cannot be expanded to the left nor to the right without lowering its frequency.
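The sketch below illustrates these two properties with a deliberately naive, quadratic-time implementation: it enumerates repeated character substrings and keeps only the closed ones. The linear-time computation used in the paper relies on Ukkonen's maximal-repeats algorithm and augmented suffix arrays; the length bounds and the document separator used here are illustrative assumptions.

```python
from collections import Counter

def closed_character_patterns(contexts, min_len=2, max_len=7, min_freq=2):
    """Brute-force extraction of closed repeated character patterns (CLP):
    patterns occurring at least `min_freq` times that cannot be extended to
    the left or to the right without lowering their frequency."""
    corpus = "\x00".join(contexts)              # separator prevents cross-document patterns
    counts = Counter()
    for length in range(min_len, max_len + 2):  # one extra length for the closedness test
        for i in range(len(corpus) - length + 1):
            s = corpus[i:i + length]
            if "\x00" not in s:
                counts[s] += 1
    patterns = []
    for s, freq in counts.items():
        if freq < min_freq or len(s) > max_len:
            continue
        extensions = [t for t in counts if len(t) == len(s) + 1 and s in t]
        if all(counts[t] < freq for t in extensions):
            patterns.append((s, freq))
    return patterns
```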
Patterns are extracted by comparing the contexts of each occurrence of the candidates belonging to the training set. Two kinds of patterns are computed. First, we computed token-level patterns (TLP), which are words and punctuation marks. To some extent, the TLP method can be viewed as a variant of the Lesk algorithm (Lesk, 1986) where, in addition to word unigrams, there are n-grams mixing graphical words and punctuation. Second, we computed character-level patterns (CLP), which are sequences of characters without any filtering. With CLP, the objective is to represent different levels of linguistic description at the same time: morphology (prefixes, suffixes), lexicon (words or groups of words) and style (punctuation and combinations between words and punctuation).
Patterns and contexts
For each attested neologism found in our corpus, the start and end offsets of its occurrences in the corpus are computed. We model the context as a vector of CLP and TLP frequencies; this allows us to compare the contexts of neologisms with those of non-neologisms. Four types of contexts have been identified (illustrated in the sketch below):
• Internal (resp. bilateral): n characters before the start offset of the NC and n characters after the end offset of the NC, including (resp. excluding) the NC itself;
• Left (resp. right): n characters before (resp. after) the start offset (resp. end offset) of the NC, plus the NC itself.
Various context sizes have been tested, from 10 to 400 characters, in order to assess the influence of the window size on the classification results. The context size is always computed in characters in order to have the same data for computing CLP and TLP.
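A minimal sketch of how the four context types can be extracted from the character offsets of a candidate occurrence is given below; the window size n and the handling of document boundaries are illustrative choices.

```python
def extract_contexts(text, start, end, n=50):
    """Character windows around a candidate occupying text[start:end],
    mirroring the four context types defined above (sizes in characters)."""
    return {
        "left":      text[max(0, start - n):end],                        # n chars before + candidate
        "right":     text[start:end + n],                                # candidate + n chars after
        "internal":  text[max(0, start - n):end + n],                    # both sides, candidate included
        "bilateral": text[max(0, start - n):start] + text[end:end + n],  # both sides, candidate excluded
    }
```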
Learning Framework and Evaluation Metrics
Once the CLP are computed on the whole training set, they are used as features to train classifiers. For each candidate, the value of each feature is the frequency of the CLP in the given context (bilateral, internal, left or right). The training of the classifiers has been performed with Scikit-learn (Pedregosa et al., 2011). Various classifiers were tested (decision trees, support vector machines, Bayesian networks). 10-fold cross-validation has been performed, so that the figures presented hereafter are the mean of the results for each fold. In order to avoid learning biases, all the occurrences of a given candidate are grouped in only one set per fold: the training set or the test set. Therefore, with TLP, internal and bilateral contexts yield the same results: the NC itself cannot be used by this method. Table 1 shows the results obtained with TLP for the French dataset with an SVM classifier (linear kernel) and the C-parameter set to 1. We only focus on SVM since this classifier outperformed decision trees, random forests and Bayesian networks. The results for the internal and bilateral contexts are the same because of the design of the train and test sets (see Section 3.2.2). Two results have to be highlighted here. First, the left context gave by far the best results, suggesting that there are clues announcing neologisms. Second, if we set the left contexts aside, the results can be improved by expanding the window size to 50 characters 6 . Our hypothesis is that expanding the context only improves the bad results and that expanding the left context mostly yields noise. With 72% F-measure in the best case, the TLP method was promising, but it was quickly outperformed by the CLP method. On a first approach, we tuned the minimal (minlen) and maximal (maxlen) length of the CLP in order to reduce the search space, because even in small windows there is a huge number of CLP.
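The evaluation protocol described above (linear-kernel SVM with C = 1, 10-fold cross-validation, all occurrences of a candidate kept in the same fold) can be reproduced along the following lines with scikit-learn; the construction of the feature matrix and the choice of scoring metric are assumptions of this sketch.

```python
import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.svm import SVC

def evaluate(X, y, candidate_ids):
    """X: pattern-frequency features per occurrence, y: 1 = neologism / 0 = excluded,
    candidate_ids: identifier of the candidate for each occurrence, so that all
    occurrences of a given candidate fall into a single fold."""
    clf = SVC(kernel="linear", C=1.0)
    cv = GroupKFold(n_splits=10)
    scores = cross_val_score(clf, X, y, groups=candidate_ids, cv=cv, scoring="f1")
    return float(np.mean(scores)), float(np.std(scores))
```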
Results
We first observed that the optimal F1-measure scores were obtained with minlen = 3 and maxlen = 7. This result seemed to be consistent with what has been observed with a comparable methodology used for the Authorship Attribution task (see for instance (Brixtel, 2015)). However, subsequent experiments with the same cross-validation method showed that removing these length constraints led to similar results. Filtering patterns according to their support (relative frequency) has been tested as well, but it gave unstable results.
Finally, taking all the CLP appeared to be the best configuration. These results are shown in Table 2. The CLP method takes advantage of internal properties of the candidates (prefixes and suffixes) and it allows us to get more clues in the immediate context of the NC. With an 84.9% F-measure, this method performs better than the 75% (corresponding to 60% precision and probably a recall close to 100%) reported in (Cartier, 2016). The bilateral context is the least efficient configuration. It shows that CLP including the candidate itself are very good features. Furthermore, it reduces the differences between the left and right contexts. The best results are still found in the immediate contexts, but we do not find with CLP the same shift in the results when the context size is modified. In Russian we observed the same phenomenon with TLP. Therefore, we only present here the results for the CLP method (Table 3). Here, the results are even better than the results for the French dataset, with more than 90% F-measure with a left context of length 50. The main difference is that there is more instability when the size and the type of contexts change. This instability may come from the size of the dataset and the subsequent lower number of features.
There may be room for improvement for the bilateral and internal configurations by taking into account the relative position of the pattern (e.g. whether the pattern has been found on the left side, right side or both sides of the candidate) and not only its number of occurrences. Finally, among the classifiers we tested, SVM with linear kernels offers the best results. This is a result we expected, since it is consistent with state-of-the-art results in stylometry (Sun et al., 2012). Decision trees perform a bit worse and, interestingly, random forests offer very little added value. We plan to experiment with Conditional Random Fields in order to take advantage of the sequential aspect of our input data.
According to the data we collected, the EDA approach shows a precision of around 44% (61% F-measure) for French and 30% (46% F-measure) for Russian. Even if it is difficult to assess recall precisely, the method presented here shows a clear improvement in terms of precision: 82% for French (84.9% F-measure) and 87% for Russian (90.1% F-measure).
Discussion and Perspectives
The preliminary study we have conducted demonstrates that a combination of unsupervised data mining and supervised machine learning techniques can largely outperform the EDA approach used to detect formal neologisms. Moreover, this technique does not need any NLP pre-processing (tokenization, lemmatization, POS tagging, etc.) of the textual data, which is a great advantage for under-resourced languages. It reduces the marginal cost of processing new languages.
We plan additional experiments to back the legitimacy of the approach:
• experiment on other languages: we are currently collecting data for Chinese, Czech and Portuguese;
• compare with other machine learning techniques, especially CRF, which have shown good accuracy in sequence labelling.
Additionally, we want to extend the model to detect not only neologisms as a unique category, but also categories of neologisms, as affixation, composition and borrowing are likely to retain specific and discriminative features that could be exploited in the detection process.
"Computer Science",
"Linguistics"
] |
Experimental Study of Leakage Monitoring of Diaphragm Walls Based on Distributed Optical Fiber Temperature Measurement Technology
In geotechnical engineering, seepage through diaphragm walls is an important issue which may cause engineering disasters. It is therefore of great significance to develop reliable monitoring technology to monitor such leakage. The purpose of this study is to explore the application of a distributed optical fiber temperature measurement system in leakage monitoring of underground diaphragm walls using 1 g model tests. The principles of seepage monitoring based on distributed optical fiber temperature measurement technology are introduced. Optical fiber combined with a heating cable was laid along the wall, and seepage was imposed at different flow velocities. The temperature rise of the fiber during seepage was recorded under different heating power conditions. In particular, the effect of single variables (seepage velocity and heating power) on the temperature rise of the optical fibers was discussed. Test results indicated that the temperature difference between the seepage and non-seepage parts of a diaphragm wall can be monitored well using optical fiber with an external heating cable. Higher heating power can also improve the resolution of fiber-optic seepage detection. The seepage velocity had a linear relationship with the final stable temperature after heating, and the linear correlation coefficient increases with heating power. The stable temperature decreased with increasing flow velocity. The findings provide a basis for quantitative measurement of seepage velocity and precise localization of leaks in diaphragm walls.
Introduction
Urban construction projects, such as high-rise buildings, have promoted the development and utilization of underground space. As a result, foundation pits have become larger and more important [1]. Underground diaphragm walls are a common retaining structure used in foundation pits located in soft soil areas. Because of their complex structure and difficult construction, leakage problems often occur. These problems often pose serious hidden dangers to the safety of geotechnical engineering projects [2]. Therefore, it is of great significance to monitor the seepage of diaphragm walls during construction. Conventional monitoring methods, such as construction borehole inspection, light dynamic penetration, surrounding well water pressure and pumping tests, can only reflect the local conditions of a diaphragm wall, cannot carry out continuous monitoring, and have many limitations during the actual construction process. Many techniques for monitoring seepage in geotechnical engineering have been proposed. Typical methods include electromagnetic methods [3,4], thermal impedance methods [5] and resistivity methods [6][7][8]. These monitoring methods each have their own advantages and scope of application. However, they cannot meet the requirements of diaphragm wall monitoring, namely locating seepage points and measuring seepage velocity, and are therefore not suitable for this purpose.
In recent years, distributed optical fiber sensing technology has been widely used in engineering monitoring because of its flame-proof, explosion-proof, corrosion-resistant, electromagnetic interferenceresistant, high voltage-resistant, long-distance measurement, real-time measurement and positioning features [9]. Some studies applying distributed optical fiber sensing technology to monitor the leakage in geotechnical engineering projects were also conducted. Zhu et al. [10] proposed a distributed temperature sensing (DTS) technology based on Brillouin scattering light to monitor dam leakage. Tylerd et al. [11] put forward the singular value decomposition method in the process of DTS temperature measurement. Khan et al. [12] designed an automatic monitoring system for abnormal seepage points in a dam based on DTS technology. Because the temperature can be transmitted through the medium and the change of the medium is continuous, the seepage field can be understood by the changes of the medium temperature [13][14][15]. Minardo et al. [16] reported distributed temperature measurements in a perfluorinated graded-index polymer optical fiber (POF) with 50-µm core diameter for the first time. They showed that Brillouin optical frequency-domain analysis was able to resolve spatially the temperature-dependent Brillouin frequency shift profile along a 20-m POF fiber sample, at a nominal spatial resolution of 4 m. Saxena et al. [17] presented a Raman optical fiber distributed temperature sensor, using a wavelet transform-based signal processing technique for backscattered anti-Stokes and Stokes signals. The proposed technique enables automatic measurement of distributed temperature profile that has better temperature accuracy and very small spatial errors in detecting the location of hot zones. However, distributed optical fibers are insensitive to small temperature variations and are susceptible to environmental temperature in the case of small seepage. This defect limits their application in leakage monitoring of geotechnical engineering projects. At present, distributed optical fiber temperature measurement technology can only be applied to qualitative monitoring of seepage fields with large temperature differences and flow velocity, and cannot meet all the requirements of seepage field monitoring.
In this paper, a new method based on distributed optical fiber temperature measurement technology was proposed to monitor the leakage of diaphragm walls. Model tests under different seepage flow velocity and heating power conditions were carried out. The effects of heating power and seepage velocity were discussed. On the basis of test results, a qualitative relationship between seepage velocity and temperature rise is deduced. The test results can provide a theoretical basis and method for distributed optical fiber monitoring of underground diaphragm wall seepage.
Distributed Optical Fiber Sensing Technology
Distributed optical fiber temperature sensors (DOTSs) can measure the temperature based on the change of optical fiber length in the form of a continuous function of distance [18]. The DOTS principle is based on the Raman scattering of light inside the optical fiber. Using optical time domain reflection technology, a high-power narrow-band optical pulse is sent into the optical fiber. The changes of the backscattered light intensity with time can then be detected. Rayleigh scattering is the main factor causing the attenuation of optical fiber transmission. Although the backscattering effect is strong, its intensity does not change significantly with temperature in conventional optical fibers. Raman scattering and Brillouin scattering are much weaker than Rayleigh scattering in intensity, but they are directly related to temperature [19][20][21]. The different spectral distributions are shown in Figure 1. In Raman scattering, the temperature sensitivity of the anti-Stokes scattering signal is higher than that of the Stokes scattering signal. In practical applications, the ratio of the anti-Stokes signal to the Stokes signal is often used as temperature information to reduce the influence of light source intensity, light injection conditions, and the geometrical size and structure of the optical fibers. The ratio of anti-Stokes to Stokes intensity R(T) can be calculated by Equation (1):

R(T) = (γ_as / γ_s)^4 exp(−hcΔγ / (kT)) (1)

where γ_s is the Stokes frequency; γ_as is the anti-Stokes frequency; c is the speed of light in vacuum; Δγ is the Raman frequency shift; h is the Planck constant; k is the Boltzmann constant, and T is the absolute temperature of the environment. It can be seen from Equation (1) that the ratio of anti-Stokes to Stokes intensity R(T) in Raman scattering is only a function of temperature. This is the fundamental theoretical basis of the distributed optical fiber temperature sensor.
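In practice, the temperature of a fiber section is usually recovered from Equation (1) by calibrating the measured ratio against a reference section held at a known temperature, which eliminates the frequency-dependent prefactor. The sketch below implements this standard relation; the Raman shift value is a typical figure for silica fiber and is an assumption of the sketch.

```python
import numpy as np

H = 6.626e-34    # Planck constant [J s]
KB = 1.381e-23   # Boltzmann constant [J/K]
C = 2.998e8      # speed of light in vacuum [m/s]

def temperature_from_ratio(R, R_ref, T_ref, raman_shift=4.4e4):
    """Temperature [K] of a fiber section from its anti-Stokes/Stokes ratio R,
    calibrated against a reference section at known temperature T_ref with
    measured ratio R_ref. Uses 1/T = 1/T_ref - (k/(h*c*dg)) * ln(R/R_ref),
    which follows from Equation (1); dg ~ 440 cm^-1 = 4.4e4 m^-1 is an
    assumed typical Raman shift for silica fiber."""
    inv_T = 1.0 / T_ref - (KB / (H * C * raman_shift)) * np.log(R / R_ref)
    return 1.0 / inv_T
```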
Seepage Monitoring Based on DTS
Diaphragm walls are the retaining structures of deep foundation pits, which bear large lateral water and soil pressures. During the grouting process, it is difficult to achieve complete filling due to the influence of embedded parts in the steel cage and the grouting materials [22,23]. Leakage can easily occur at positions where the filling is not dense or two walls connect. In the case of low seepage velocity, the temperature difference between the seepage location and surrounding medium is small. It is difficult to measure this kind of temperature difference using a distributed temperature measurement system. Hence, as shown in Figure 2, in this study the temperature field was superimposed on the temperature measurement optical fiber with a heating cable to increase the signal-to-noise ratio of the DTS system. The parameters of the heating cables used are listed in Table 1.
As the leakage site has a large heat capacity, the temperature change rate is slower for the same heating time. This leads to a large difference between the fiber temperature around the leakage site and that of a non-leakage site. The temperature change is also closely related to the seepage velocity: at the same heating power, the faster the seepage velocity is, the lower the temperature rise is. According to the measured distance of the DTS, the time from signal transmission to signal reception is determined. Then the light speed in the fiber is determined and the distance is calculated. The distance X from any point on the optical fiber can be calculated by Equation (2):

X = cT′/(2n) (2)

where c is the light speed in vacuum; T′ is the time from signal transmission to signal reception, and n is the refractive index of the measured light. The leakage of diaphragm walls can thus be effectively monitored and located by the temperature change determined by the DTS.
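Equation (2) amounts to a simple time-of-flight conversion. As a worked example (assuming a typical refractive index of 1.468 for silica fiber), a backscatter return received 1 µs after the pulse is launched originates roughly 102 m down the fiber:

```python
def fiber_distance(round_trip_time, n=1.468, c=2.998e8):
    """Distance X [m] along the fiber to a scattering point, X = c*T'/(2n),
    cf. Equation (2); n = 1.468 is an assumed typical refractive index."""
    return c * round_trip_time / (2.0 * n)

print(fiber_distance(1.0e-6))   # -> about 102 m
```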
Model Test Scheme
In this study, a model box was used for the test. As shown in Figure 3, the model box is 1.0 m × 0.6 m × 0.6 m in volume. It can be seen in Figure 4 that a concrete wall made of C30 concrete was used as the diaphragm wall in the model test. Each cubic meter of C30 concrete is composed of 175 kg of water, 461 kg of cement, 512 kg of sand and 1252 kg of stone; the mix ratio of these materials was therefore 0.38:1:1.11:2.72. The thickness of the concrete wall was 5 cm. The diameter of the fiber core was 3 mm and it was combined with a heating cable. The heating cable was uniformly arranged on the concrete wall in an "S" shape, and each section of fiber was numbered 1-6. It can be seen in Figure 5 that four leakage holes with a diameter of 5 cm were prefabricated on the concrete wall. The leakage holes were located on the left and right sides of the No. 3 optical fiber; detailed information on the leakage holes is shown in Figure 4. In this model test, saturated silty clay from a soft foundation pit in the Jinan area was used as the filling material to simulate actual geological conditions. It can be seen in Figure 6 that the water tank was connected to the leakage holes of the diaphragm wall through a PVC pipe. The angle between the PVC pipe and the concrete wall was 30°. The PVC pipe was 50 cm long and 10 cm in diameter. The heating system was composed of a heating cable, an alternating current power supply with a voltage regulator used to regulate the voltage, and a multi-functional voltmeter used to display the voltage and thereby control the heating power. During the test, the heating cables were tightly bundled with the temperature measurement optical fiber. After electrification, the temperature of the optical fiber was higher than that of the surrounding environment, which makes leakage monitoring possible. A TDGC2-5KVA single-phase voltage regulator manufactured by Zhejiang Fujian Electrical Appliances Co., Ltd. (Wenzhou, China) was used in this test. Its basic parameters are shown in Table 2.
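The stated mix ratio can be checked directly from the per-cubic-meter masses; the short sketch below simply normalizes the component masses by the cement mass.

```python
# Per-cubic-meter masses of the C30 concrete used for the model wall (from the text), in kg.
mix = {"water": 175, "cement": 461, "sand": 512, "stone": 1252}

# Normalizing by the cement mass reproduces the quoted ratio of 0.38 : 1 : 1.11 : 2.72.
ratio = {name: round(mass / mix["cement"], 2) for name, mass in mix.items()}
print(ratio)  # {'water': 0.38, 'cement': 1.0, 'sand': 1.11, 'stone': 2.72}
```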
A box-type distributed optical fiber temperature measurement system produced by Suzhou Nanzhi Sensing Technology Co., Ltd. (Suzhou, China) was used as the data acquisition and transmission system. It was selected for its high measurement accuracy, short measurement time and long measurement distance. The basic parameters of the DTS system used are shown in Table 3.
During the test, the model box was filled with silty clay to simulate the real stratum. The water valve was opened to adjust the seepage rate and the heating power of used cable was adjusted many times, and the temperature of optical fiber was monitored. Specific test schemes are shown in Figures 3-6.
Test Process
In the preliminary preparation stage, the larger particles in the model box were removed, the silty clay surface was smoothed, the concrete wall was placed, and the same silty clay as in the model box was added to the PVC pipe. According to the optical fiber burial method described above, the optical fiber was laid in an "S" shape. While laying the fiber, it is necessary to ensure that the temperature measuring optical fibers are tightly bundled with the heating cables. The optical fibers were then tested, connected to the DTS, and their coordinates were set, after which the fibers were fixed in place. Once the temperature measuring optical fibers, heating cables and leakage holes were basically in the same horizontal plane, soil was slowly added to the model box up to the designated level. During the process of soil addition, attention should be paid to protecting the optical fibers.
The heating power of the cable was set to 4, 6, 8, 10 and 12 W/m. According to Ohm's law, the heating power P delivered by a cable of resistance R at voltage U satisfies P = U²/R, so the required heating voltage corresponding to each set power was calculated as shown in Table 4. The voltage regulator was adjusted until the voltmeter showed the required voltage. Then the flow rate control valve was opened to drain water at the specified seepage rate. When the temperature rise of the optical fiber was less than 0.5 °C in 10 min, the heating power was turned off and the temperature of the optical fibers gradually returned to its initial state. In this process, the DTS was used to continuously record the temperature of the optical fibers in different periods. Thereafter, the heating power was adjusted to different values and the above steps were repeated. Different seepage velocities were controlled by adjusting the flow rate control valve and calculating the velocity from the seepage flow. Flow velocities from v = 50 mm/h to v = 250 mm/h were used, at an interval of 50 mm/h. Under each specified heating power, the DTS was used to continuously record the temperature of the optical fibers in different time periods. Then different heating powers were set and the above steps were repeated. Finally, the data were analyzed and processed.
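Under the assumption that the heating voltage was obtained from P = U²/R as described above, the sketch below computes the required voltage for each power setting. The cable resistance and cable length are placeholders, since the actual values from Tables 1 and 4 are not reproduced here.

```python
import math

# Placeholder cable properties (the real values come from Tables 1 and 4, not shown here).
RESISTANCE_PER_METER = 2.0   # ohm/m, assumed
CABLE_LENGTH = 6.0           # m, assumed

powers_w_per_m = [4, 6, 8, 10, 12]

for p in powers_w_per_m:
    total_power = p * CABLE_LENGTH                          # W over the whole cable
    total_resistance = RESISTANCE_PER_METER * CABLE_LENGTH  # ohm of the whole cable
    voltage = math.sqrt(total_power * total_resistance)     # from P = U^2 / R
    print(f"{p} W/m -> {voltage:.1f} V")
```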
Temperature Change During Monitoring
The overall temperature rise of the optical fiber temperature measurement system under different seepage velocities is shown in Figures 7-12. As mentioned above, the leakage point in this test is located at the top of the No. 3 optical fiber, about 1.7-2 m away from the left end of the optical fiber. Figure 7 shows the temperature change of the optical fiber temperature measurement system when the seepage velocity is 0. It suggests that when there is no leakage in the concrete wall, the temperature of each section of the optical fiber is basically the same and there is no obvious temperature anomaly. This indicates that the heating cable used in this test can uniformly heat all parts of the optical fiber. Figure 8 shows the situation when the seepage velocity is 50 mm/h. When the heating power is less than 8 W, the seepage point cannot be clearly distinguished at this seepage velocity; when the heating power is 10 W and 12 W, the temperature rise at the leakage site is 0.5-1 °C lower than that of the adjacent section. Figure 9 shows the situation when the seepage velocity is 100 mm/h. When the heating power is greater than 6 W, the temperature rise at the leakage site is 1-1.5 °C lower than that of the adjacent section and the leakage point is detected well; however, when the heating power is 4 W, the location of the leakage point cannot be clearly distinguished. Figure 10 shows the situation when the seepage velocity is 150 mm/h: when the heating power is greater than 6 W, the abnormal temperature rise at the leakage point is more obvious. Figures 11 and 12 show that the abnormal temperature rise becomes more obvious with increasing heating power. However, when the heating power is 4 W, the abnormal temperature rise does not become more obvious as the seepage velocity increases. It can be seen that if there is no leakage point and the water content in the saturated clay medium is stable, the temperature rise curve of the optical fibers remains in a relatively stable state, fluctuating in a small range around a certain temperature. After leakage, water participates in the heat transfer between the temperature measuring optical fibers and the porous medium. Because of the large specific heat capacity of water, the temperature of the fiber at the leakage site changes more slowly for the same heating time, so its temperature rise is lower than that at the non-leakage sites.

Interestingly, the most significant abnormal temperature rise point in Figures 9-11 was observed not at the position of the No. 3 optical fiber, but closer to the left end of the optical fiber, with an offset distance of about 0.4 m. This result is somewhat counterintuitive. After inspection of the test device, it was found that the bottom of the model box was not truly horizontal due to the processing technology used, and the right side was about 0.5 cm higher than the left side. This resulted in a slow flow of water towards the left end of the optical fiber at the leakage hole. Therefore, in the following analysis, the point of lowest temperature change within 0.4 m of the No. 3 optical fiber was selected as the actual leakage point.
The temperature rises under different seepage flow velocities are shown in Figures 13-15. Figure 13 shows the temperature rise curves at different seepage flow velocities when the heating power is 4 W. It suggests that the optical fiber has poor resolution of the temperature change at the leakage point under a low heating power, so the leakage point detection effect is poor. Figures 14 and 15 show that when the heating power is gradually increased to 12 W, the resolution of the optical fibers to different seepage velocities is obviously improved; the detection effect becomes better as the heating power becomes higher. The conclusion is that a slight change in flow velocity will not cause an obvious change in the temperature rise curve, whereas when the seepage flow velocity changes significantly, the corresponding seepage points can be found through the temperature rise curve. Figures 13-15 also suggest that when the heating power is 4 W, the temperature rise of the optical fibers in the area without leakage is uneven. This reflects that the spontaneous Raman scattering of the optical fiber is less sensitive to smaller temperature changes. Therefore, in engineering application monitoring, a high heating power should be selected as far as possible within a reasonable range.
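The localization rule described above, namely taking the point of lowest temperature rise within 0.4 m of the expected position as the leakage point, can be expressed in a few lines of code. The temperature rise profile below is synthetic and for illustration only.

```python
import numpy as np

def locate_leakage(positions_m, temp_rise_c, expected_pos_m, window_m=0.4):
    """Return the position of the minimum temperature rise within +/- window_m of expected_pos_m."""
    mask = np.abs(positions_m - expected_pos_m) <= window_m
    idx_in_window = np.where(mask)[0]
    best = idx_in_window[np.argmin(temp_rise_c[idx_in_window])]
    return positions_m[best], temp_rise_c[best]

# Synthetic profile: roughly uniform 6 C rise with a dip near 1.45 m (not measured data).
positions = np.arange(0.0, 3.0, 0.05)
rise = 6.0 + 0.1 * np.random.randn(positions.size)
rise[np.abs(positions - 1.45) < 0.1] -= 1.2

print(locate_leakage(positions, rise, expected_pos_m=1.7))  # finds the dip near 1.45 m
```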
Relationship Between Temperature and Flow Velocity
The water flow at the leakage point continuously takes away the heat around the optical fiber, a process that is closely related to the seepage velocity. When the heat generated by the heating cable equals the heat taken away by the seepage and the surrounding soil, the optical fiber reaches a final stable temperature. The seepage velocities under 12 W heating power are compared to analyze the relationship between the temperature change of the optical fiber and the seepage velocity; the relationship is shown in Figure 16. It can be seen from Figure 16 that when the seepage velocity is small, the temperature of the optical fiber at the seepage position rises faster and the final stable temperature is higher, but it takes more time to reach the final stable temperature, and when the heating stops, the time required for the temperature to decrease is also longer. When the seepage velocity is high, the temperature of the fiber at the seepage position rises more slowly and the final stable temperature is lower, but it takes less time to reach the final stable temperature. According to the laws of thermodynamics, when the system reaches its final stable temperature, the following heat balance should be satisfied:

Q1 = Q2 + Q3

where Q1 is the heat generated by the heating cable; Q2 is the heat exchanged between the optical fiber system and the seepage; Q3 is the heat exchanged between the optical fiber system and the silty clay; Sw and Ss are respectively the heat exchange areas of water and soil with the optical fiber system, Sw = nS0 and Ss = (1 − n)S0, where n is the porosity of the silty clay and S0 is the surface area of the optical fiber system; αw and αs are respectively the heat exchange coefficients of water and soil with the optical fiber system; Tend is the final stable temperature; T0 is the initial temperature; Δl is the length of water flowing through the optical fiber system in unit time.
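The area bookkeeping in the heat balance (Sw = nS0, Ss = (1 − n)S0) can be illustrated directly; the porosity, surface area and heating power below are assumed example values, not measurements from this test.

```python
def contact_areas(total_surface_area_m2: float, porosity: float):
    """Split the fiber system surface into water and soil contact areas: Sw = n*S0, Ss = (1-n)*S0."""
    s_w = porosity * total_surface_area_m2
    s_s = (1.0 - porosity) * total_surface_area_m2
    return s_w, s_s

# Assumed example values -- not measurements from this test.
S0 = 0.06            # surface area of the optical fiber system, m^2
n = 0.45             # porosity of the silty clay
heating_power = 12   # W, heat generated by the cable per second (Q1 per unit time)

s_w, s_s = contact_areas(S0, n)
print(f"Sw = {s_w:.4f} m^2, Ss = {s_s:.4f} m^2, Q1 per second = {heating_power} J")
```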
According to the research of Xiao et al. [24], the formula for calculating the heat transfer coefficient between the fluid and the heating system is shown in Equation (5), where D is the characteristic number of the process, which is related to the thermal conductivity of the fluid, the thermal conductivity of the optical fiber, the Reynolds number of the seepage and the diameter of the optical fiber, and u is the velocity of the seepage. By rearranging the above formulas, Equations (6) and (7) can be obtained. According to Equations (6) and (7), the seepage velocity can be quantitatively measured once a series of parameters such as the heating voltage, the temperature rise during heating and the heating resistance are measured. The time to reach the final stable temperature under different seepage velocities was fitted, as shown in Equation (8):

y = 0.204x + 1.511    (8)

According to Equation (8), the seepage velocity can be estimated based on the time at which the optical fiber reaches the final stable temperature. In the leakage monitoring of diaphragm walls in practical engineering, the optical fiber system can be calibrated beforehand and then prefabricated on the diaphragm wall; the seepage velocity can then be calculated by measuring the final stable temperature and the time to reach it. In order to further study the relationship between the temperature rise of the optical fibers and the seepage velocity, the relationship between the two under different heating powers was plotted. As shown in Figure 17, the temperature rise of the optical fibers near the seepage point is taken as the ordinate and the flow velocity as the abscissa.
According to the research of Yan et al. [25], the seepage velocity is inversely proportional to the temperature change at a point in the geotechnical seepage field: the temperature rise increases as the seepage velocity decreases. The curves of seepage velocity versus temperature rise were fitted under heating powers of 12, 10 and 8 W. When the heating power is 12 W, the correlation coefficient is 0.9536 and the slope of the curve is larger, indicating that the response of the optical fiber system to the seepage velocity is more sensitive at a higher heating power. When the heating power is 10 W, the coefficient is 0.8658; when the heating power is 8 W, the coefficient is 0.5353. The correlation coefficient of the fitted curve increases with the heating power. Therefore, if the seepage velocity is to be calculated from the fitted curve, the larger heating power should be selected as far as possible.
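As a sketch of how such a calibration could be used in practice, the snippet below fits a straight line to temperature-rise versus seepage-velocity pairs and then inverts it to estimate the velocity from a new temperature-rise reading. The calibration pairs are made-up placeholders, not the data behind Figure 17.

```python
import numpy as np

# Hypothetical calibration pairs at a fixed heating power (velocity in mm/h, temperature rise in C).
velocity = np.array([50.0, 100.0, 150.0, 200.0, 250.0])
temp_rise = np.array([5.8, 5.2, 4.7, 4.1, 3.6])

# Linear fit: temp_rise = slope * velocity + intercept (temperature rise falls as velocity grows).
slope, intercept = np.polyfit(velocity, temp_rise, 1)

def estimate_velocity(measured_rise_c: float) -> float:
    """Invert the fitted line to estimate seepage velocity from a measured temperature rise."""
    return (measured_rise_c - intercept) / slope

print(f"slope={slope:.4f} C per mm/h, intercept={intercept:.2f} C")
print(f"estimated velocity for a 4.4 C rise: {estimate_velocity(4.4):.0f} mm/h")
```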
Conclusions
(1) Our physical model tests prove that the distributed optical fiber temperature measurement technology with external cable can accurately locate the leakage point and distinguish between different seepage velocities in underground diaphragm wall seepage monitoring. This technology has a good monitoring effect.
(2) During the test, the temperature of the optical fibers rises continuously due to the heating cables, while the temperature at a leaking part changes more slowly. This results in a large difference between the temperature of the leaking part and the non-leaking parts, with a lower temperature rise at the leaking part. When the seepage velocity is low, the temperature rise of each section of the optical fiber is basically the same and the seepage position essentially cannot be distinguished. When the seepage velocity reaches 150 mm/h, the temperature rise of the optical fiber at the seepage point is obviously lower than that at non-seepage points.
(3) When the heating power of the external cable is small, the resolution of the monitoring system to the seepage velocity is poor, so as high a heating power as possible should be chosen within a reasonable range to improve the monitoring effect.
(4) After heating, the temperature of the optical fiber increases continuously and eventually reaches a stable temperature. In this state, the heat generated by the optical fiber system is equal to the sum of the heat absorbed by the leakage and the heat absorbed by the surrounding soil. When the seepage velocity is high, the stable temperature is lower and it takes less time to reach it; when the seepage velocity is low, the stable temperature is higher and the time to reach it is longer. A relationship between the seepage velocity and the parameters related to the final stable temperature, the optical fiber and the surrounding soil is established, which makes quantitative measurement of the seepage velocity feasible. There is a linear relationship between the seepage velocity and the final stable temperature after heating, and the correlation coefficient increases with the heating power.
(5) It should be noted that the seepage velocity in this experiment is the average velocity over a period of time, and the temperature rise of the optical fiber is the average over the whole section of fiber, so the relationship between the temperature rise of the optical fiber and the seepage velocity can only be analyzed qualitatively. For further analysis, an instantaneous velocity meter and a more precise temperature measurement system are needed to establish the mathematical relationship between the temperature rise of the optical fiber and the seepage velocity.
Conflicts of Interest:
The authors declare no conflict of interest.
Occupant Expectations on the Main IEQ Factors at Workspace: The Studies of Private Preschool Buildings
This study explores the relationship between the perceived performance of specific IEQ factors and occupants’ overall satisfaction with their workspace. The Indoor Environmental Quality (IEQ) in buildings is one of the most important factors affecting the physical development of children. Early education is compulsory, and the growing number of 4-6 year old children has boosted the number of private preschools. These frequently operate in premises that have been fully refurbished, whether in housing schemes, commercial buildings or institutional buildings. This raises questions about the capability of such buildings to provide a good environment for children during learning activities. Malaysia still lacks studies on improving the quality of the internal environment of private kindergartens. Post-Occupancy Evaluation (POE) is one method that has been used successfully to study IEQ and its effects in kindergartens. This research focuses on identifying occupant satisfaction with IEQ in selected refurbished pre-school or kindergarten buildings. The objectives of the study are to identify and determine the IEQ through feedback from building users. The study collected data on overall satisfaction and overall design importance through building inspections and questionnaires, and the data obtained were analyzed to provide a benchmark for the studies. 240 kindergartens across the country were selected for the study of IEQ.
Introduction
Several researchers have demonstrated that if people work in good environmental conditions, their productivity and health improve (Gao et al., 2014, Rosen, 2009, Ambu, 2008, Uline and Moran, 2008, Stankovic and Stojic, 2007, Wargocki et al., 2005, Mendell, 2004, Schneider, 2002). An early aim is therefore to achieve a good comfort level in educational buildings, given that children spend around 30% of their life span in schools (Lee and Chang, 1999).
Comfort and health are pressing issues while occupants are in buildings. Given the impact of the school environment on health, numerous researchers (Salleh et al.) have examined IEQ features including indoor air quality, temperature, odours or olfactory effects, visuals, acoustics, daylight and artificial lighting, ergonomics and space. Where several of these features are inadequate, it is difficult to identify the actual direct causes of occupant discomfort and health symptoms (Sulaiman et al., 2013; Bluysen, 2009; Chiang and Lai, 2002). However, some studies have already delved into indoor environmental conditions in educational buildings (Salleh et al.), including work based on assessment schemes such as the Hong Kong Building Environmental Assessment Method (HK-BEAM) and the TOBUS decision-making tool for upgrading office buildings; these systems are reliable in assessing and enhancing the IEQ of buildings. The indoor air quality, thermal comfort and acoustic performance of recently built schools were evaluated by means of field measurements (Mumovic, 2009). Another study (Mors et al., 2011) focused on the actual thermal sensitivity and clothing insulation of children in non-air-conditioned classrooms by means of both physical measurements and questionnaires.
In Malaysia, most of the studies performed in public schools were organised by the government (Kamaruzzaman and Rasitah, 2011, Junaidah et al., 2010, Hussin et al., 2011). The buildings and facilities are well prepared by the authorities and adhere to the standards provided. In contrast, in private preschools in Malaysia the facilities and school buildings are provided by individuals or non-government organisations (NGOs). They generally operate in re-adapted buildings such as housing premises (KL City Hall, 2010). The bedrooms are converted into classrooms; it is therefore a challenge to provide an adequate environment for the children.
The broad aim of this paper is to determine the correlation between the performance of refurbished private preschools and staff satisfaction and expectation levels, using the POE approach and guidelines. The objectives of this study are accordingly: i) to determine the satisfaction level of the buildings' occupants in terms of the indoor environment, and ii) to identify any correlation between individuals' expectations and their perception of the environment. The outcomes of the study are intended to be used as a benchmark for the improvement of private pre-schools in Malaysia.
Methods and Materials
In order to obtain a higher response rate, the questionnaire was designed to be only four pages long. It used a simple presentation that would not take long for the respondents to answer, and space was provided for the respondents to give additional comments. Before the questionnaire was sent out, it was piloted with 10 potential respondents from two pre-schools, whose comments and suggestions were taken into consideration in the final version.
Covering letters and questionnaires were distributed by hand to school administrative staff, to gain permission. The questionnaires were then collected within two weeks, during the building observation. Of the 1,020 questionnaires distributed around Peninsular Malaysia (to 5-7 staff in each of 240 pre-schools; 711 classrooms), 521 were returned, of which 57 remained unanswered and 60 were incomplete. That is, 404 questionnaires were found to be useful. These figures are summarized in Table 1. The valid response rate recorded was almost 40%, which is sufficient for a social science study in Malaysia (Sarantakos, 2012). Krejcie and Morgan's (1970) table for determining sample size shows that, for a population of 1,100, the minimum acceptable size is 285. In this study, the response of 408 can justifiably represent the total population. The questionnaire has two sections: Section A asked for general information about age, gender, work experience, etc., and Section B explored attitudes to the 21 factors relating to the internal environment of the building as listed below (Q1 to Q21). The occupants were asked to rate their answers on a seven-point Likert scale for "User satisfaction" and "Degree of importance". The results were used to elicit an occupant satisfaction score and a benchmark for building satisfaction. The answers to the questionnaire indicate the occupants' ratings of their satisfaction with the internal environment, and how important they find these environmental conditions. The first part of the analysis determines the satisfaction score of a building (an overall rating for a building's indoor environment) using Equation 1. The second part of the analysis provides a graphical representation of the totals for each answer. This is called a "user satisfaction fingerprint" and normalizes each question to a score from +100% to -100% (Levermore, 1994, 1994a, 1999, 2000), using Equation 2.
FLS = 100
The third part of the analysis is similar to the second. However, using Equation 3, a normalized individual score for each person can be calculated.
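As an illustration of the kind of normalization described above, the sketch below maps seven-point Likert responses for one question onto a −100% to +100% score; this is a plausible normalization for demonstration purposes only, not necessarily the exact formula used by Levermore or in this study.

```python
def fingerprint_score(responses, neutral=4, scale_max=7):
    """Map 7-point Likert responses for one question to a score between -100% and +100%.

    Responses above the neutral point push the score towards +100%,
    responses below it push towards -100%.
    """
    if not responses:
        return 0.0
    span = scale_max - neutral  # distance from neutral to the end of the scale
    deviations = [(r - neutral) / span for r in responses]
    return 100.0 * sum(deviations) / len(responses)

# Example: mostly satisfied occupants for one question.
print(round(fingerprint_score([5, 6, 7, 4, 6, 3, 5]), 1))  # positive score
```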
A building walk-through form was used by the research team to compile an inventory of the building materials and contents of the classrooms. Data were collected to characterize the materials and condition of the ceiling, floor, interior walls, exterior walls, Heating, Ventilation and Air Conditioning (HVAC) equipment, and classroom contents. These were the items most frequently identified as sources of IEQ problems during the building inspections. The distance of the building from the main road, surrounding activities, the number of pupils in a classroom, mold, damp, the volume of the classrooms, and the ventilation systems were also recorded.
Building Observation
The building survey was correlated with the perceptions of the occupants towards the building. Of the 240 participating private pre-schools, 92.9% operate in refurbished residential premises, mostly rented two-storey terraced houses (67.1%) with 100-199 m² of floor area. Two-thirds of the buildings are located on medium-local streets with moderate traffic flow less than 50 m from the premises. Surprisingly, 96.6% of the classrooms identified (N=711) did not comply with the Uniform Building Bylaw (UBBL, 2012) requirement for space per person in classrooms. The descriptive analysis of the building observations is detailed in Table 2. The most frequent occurrence of each characteristic is shown in bold.
Staff's Expectation
The staff offered their opinions on the degree of each IEQ factor experienced in their building. Figure 1 shows the satisfaction scores for all the buildings, with question 1 (noise) having the worst score and the only negative one, and question 18 (colleagues) the best. The highest-scoring satisfaction subject areas are shown in Table 3. As most of the pre-schools have only 4 to 7 staff members, it is perhaps not surprising that the highest scores relate to immediate colleagues and management; there are strong bonds between the staff and ready interaction with management. Site observation indicated that walls painted in cheerful colours and decorated with various murals were attractive to pupils, and this feature was next highest on the list.
Even though all but one of the scores from the questionnaire are on the positive side, some are considered less favourably than others, as shown in Table 4. The external noise level had the lowest score (-13.77), which is not surprising given that all the buildings are in urban areas with a high volume of traffic and, in their previous roles, were connected to main roads. There was also noise from within their own premises during teaching periods.
The buildings' occupants also gave less favourable scores for smell (+10.22), humidity (+16.50) and freshness (+17.95), the three elements categorized as air quality factors. Overcrowding in classrooms which had previously been bedrooms (9 m²-23 m²) contributed to breathing difficulties and a smelly environment.
Air movement within the building was given a higher score (+24.00), followed by ventilation (+22.36) and temperature (+19.73), whether or not resulting from air conditioning.
Scores for Factors from the Questionnaires
Analysis of variance (ANOVA) is applicable if the research has two groups or more to compare. Based on the five factor scores generated, a further analysis on the differences between each factor in the questionnaire was carried out. Using SPSS, One-way ANOVA and t-tests were performed to determine the statistical significance of differences among the means of the groups in six selected variables.
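For readers unfamiliar with the procedure, the sketch below runs a one-way ANOVA on a factor score grouped by building type; the group scores are invented placeholders, not the survey data.

```python
from scipy import stats

# Hypothetical "appearance" factor scores grouped by building type (placeholder values).
terraced = [0.55, 0.62, 0.48, 0.70, 0.51]
bungalow = [0.40, 0.35, 0.52, 0.44]
shoplot = [0.60, 0.66, 0.58, 0.72, 0.63]

f_stat, p_value = stats.f_oneway(terraced, bungalow, shoplot)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # p < 0.05 would indicate a significant difference
```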
a) Analysis one: Type of building
The questionnaire had five sub-groups based on building types, as detailed in Table 6. An analysis of variance revealed significant differences at the 5% level in the appearance [F(4,403)=2.73, p=0.03] and intrusion [F(4,403)=2.67, p=0.03] factor scores. On the other hand, there are non-significant differences at the 5% level in the air quality, general and workplace scores. Post hoc analyses using the Tukey HSD test indicated that the mean scores for the general and intrusion factors were significantly lower in the bungalow condition than in the other four building types.
b) Analysis two: Type of ventilation
There were two major groups of ventilation: natural (n=183) and air conditioning (n=221). The few hybrid/mixed ventilation systems were treated as air-conditioning, because most of the pre-schools turn on the air conditioning as early as 10 a.m., or whenever they feel the temperature increase in the building.
The t-test for independent samples revealed non-significant difference at the 5% level for other factors. This means that there is little difference between the two types of the ventilation.
c) Analysis three: Work category
The four sub-groups in the work category (see Table 2) were clerk (11), teacher (287), manager (79) and others (27; including teacher assistants, housekeepers, etc.). An analysis of variance revealed significant differences at the 5% level in intrusion [F(3,403)=4.9, p=0.02]. However, there were non-significant differences at the 5% level in the other factors. Post hoc analyses using the Tukey HSD test indicated that clerks, on average, rated intrusion higher than the "others" category.
d) Analysis four: Age
The six sub-groups under age are detailed in Table 6. An analysis of variance revealed non-significant differences at the 5% level in all the factors, indicating that all age groups have the same opinion about the environmental factors.
e) Analysis five: Number of people in the building
See Table 6 for the four sub-groups based on the number of people in the building. An analysis of variance revealed significant differences at the 5% level in the air quality factor. Post hoc analyses using the Tukey HSD for significance indicated that the mean score for the group with more people in the building was lower compared to the group of 2 people for the perception of air quality and the workplace in general. It is clear that crowded buildings result in an unpleasant environment, linking the freshness, smell, ventilation and temperature of the building, and affecting the opinion of the workplace in general. Moreover, Figure 2 shows the negative relation between the dissatisfaction/dislike levels of perception and the total hours staff spend in the building. Surprisingly, results for the satisfaction level had a similar range, where time spent in the building also had a negative relation to the satisfaction level. In other words, the more time spent in the pre-school building, the greater the satisfaction of the occupants; nevertheless, for some occupants more time spent in the building generated unpleasant feelings towards it. A similar result was found between the perception of the workplace and the class floor area. Table 5 shows that the analysis of covariance for the perception of staff towards the workplace with the classroom floor area, time spent in the building and the duration of staff work in the same building has a non-significant relationship.
This explains why these variables did not affect the satisfaction level while working in the pre-school buildings. The questionnaire had four sub-groups for the number of hours spent in the building each day. An analysis of variance revealed significant differences at the 5% level in the general factor [F(4,403)=5.82, p= 0.00]. Post hoc analyses using the Tukey HSD for significance indicated that the mean score of the group spending 4-6 hours in the building was lower than that for those spending more than 10 hours there. This is due to the long time being exposed to the same environment, which might also affect the liking variable.
Conclusions
Most refurbished preschool buildings in Malaysia were found not to comply with the minimum space regulations concerning children, teachers and staff, leading to uncomfortable conditions and the spread of infectious illness. It is suggested that the authorities take these issues into consideration and ensure sufficient space for the occupants, and also take into account the location of the classrooms and schools when premises are converted into education buildings. From this research, it can be concluded that Post Occupancy Evaluation emphasizes the significance of improving Indoor Environmental Quality in refurbished preschool buildings. There is still a lack of awareness of the significance of IEQ, and half of the respondents (the majority of whom were the buildings' owners) admitted they never bothered about IEQ in their buildings. Some also admitted that they had refurbished the building without referring to the guidelines from the local authority, which may well have introduced contamination and organic effluents such as polluted gases, mold and fungi into the building.
The current scenario of preschool buildings in Kuala Lumpur reflects minimal consideration of IEQ, with little study of the buildings' characteristics and the indoor environment for early education. Nevertheless, the study found that half of the building owners followed the guidelines from the local authority, and some buildings took the initiative to implement the guidelines. This is a relevant approach, but proper guidelines and standards still need to be referred to with respect to IEQ and the renovation of the buildings.
It is recommended that POE should be undertaken in order to highlight the importance of IEQ and ensure its consideration in the refurbishment of pre-school buildings, so as to improve early education for future generations. It is also recommended that there should be further analytical research into IEQ in refurbished pre-school buildings and its correlation with occupants (the end-users of buildings used for early education) in Malaysia.
An SVM Based Weight Scheme for Improving Kinematic GNSS Positioning Accuracy with Low-Cost GNSS Receiver in Urban Environments
High-precision positioning with low-cost global navigation satellite systems (GNSS) in urban environments remains a significant challenge due to the significant multipath effects, non-line-of-sight (NLOS) errors, as well as poor satellite visibility and geometry. A GNSS system is typically implemented with a least-square (LS) or a Kalman-filter (KF) estimator, and a proper weight scheme is vital for achieving reliable navigation solutions. The traditional weight schemes are based on the signal-in-space ranging errors (SISRE), elevation and C/N0 values, which would be less effective in urban environments since the observation quality cannot be fully manifested by those values. In this paper, we propose a new multi-feature support vector machine (SVM) signal classifier-based weight scheme for GNSS measurements to improve the kinematic GNSS positioning accuracy in urban environments. The proposed new weight scheme is based on the identification of important features in GNSS data in urban environments and intelligent classification of line-of-sight (LOS) and NLOS signals. To validate the performance of the newly proposed weight scheme, we have implemented it into a real-time single-frequency precise point positioning (SFPPP) system. The dynamic vehicle-based tests with a low-cost single-frequency u-blox M8T GNSS receiver demonstrate that the positioning accuracy using the new weight scheme outperforms the traditional C/N0 based weight model by 65.4% and 85.0% in the horizontal and up direction, and most position error spikes at overcrossing and short tunnels can be eliminated by the new weight scheme compared to the traditional method. It also surpasses the built-in satellite-based augmentation systems (SBAS) solutions of the u-blox M8T and is even better than the built-in real-time-kinematic (RTK) solutions of multi-frequency receivers like the u-blox F9P and Trimble BD982.
Introduction
High-precision positioning with low-cost global navigation satellite systems (GNSS) in urban environments remains a significant challenge. The industry demand, however, is high for many emerging applications, such as autonomous vehicles and intelligent transportation systems. Although accurate and reliable solutions have been demonstrated in open sky environments with low-cost GNSS receivers, the positioning accuracy will be greatly degraded in urban environments due to significant multipath effects, non-line-of-sight (NLOS) errors, as well as poor satellite visibility and geometry caused by severe signal blockages [1]. In urban environments, the NLOS signal errors for instance, could be unbounded to become as large as hundreds of meters in some severe circumstances. The effective detection of NLOS signals and subsequent elimination and compensation of NLOS signal effects can significantly improve positioning accuracy in urban environments.
Many methods have been proposed for that purpose [2]. The existing methods could be divided into environmental feature aided approach and GNSS self-maintained approach which differ in the type of information used for NLOS signal detection. For environmental features aided approach, the 3D map aided (3DMA) method is popular, including the ranging based 3DMA method and the shadow matching method [3]. The ranging based 3DMA method utilizes the 3D map to perform ray tracing to simulate the signal transmitting path among buildings and trees, and is thus able to determine the visibility of the signals and even correct the multipath effects and NLOS errors [4][5][6][7][8]. The shadow matching method has been designed for dense urban GNSS positioning which utilizes the visibility matching to determine the possible candidate location of the receiver, and so far it is the only method that utilizes signals that are not even being tracked [9,10]. The omnidirectional camera aided method is also widely used which captures the surrounding infrastructures with a wide-angle of view lens (180 degrees) so that the segmented sky area in the image could be used to predict visible satellites after projecting the satellite positions onto the image [2,11].
For GNSS self-maintained methods, they rely only on data from GNSS receivers and therefore reduce the complexity and the cost of the navigation system when compared to the environment feature aided approach. A dual-polarized antenna, for instance, can be applied to detect reflected signals since the right-hand circular polarized (RHCP) signal will be transferred to the left-hand circular polarized (LHCP) signal when reflected [12]. Consistency checking using receiver autonomous integrity monitoring (RAIM) algorithms is another widely used method for NLOS signal detection [13,14], but it is effective only when the majority of the received signals are line-of-sight (LOS) signals, which cannot be guaranteed in urban environments where there are many reflectors and obstructions around. Machine learning methods were recently applied to explore the diverse features of GNSS data. Yozevitch et al. (2016), for instance, analyzed the relationship among signal visibility, C/N0, elevation and the 2nd order derivative of pseudorange, and used a decision tree to classify LOS/NLOS signals. They demonstrated a 77.6% accuracy for the LOS signals and 87.2% for the NLOS signals [15]. Another study used a support vector machine (SVM) to classify LOS/NLOS signals based on features of C/N0, delta C/N0, pseudorange, delta pseudorange, and positioning residual; the obtainable accuracy is 75.4% [16]. An SVM classifier was also applied using the correlation and tracking information inside a software-defined GNSS receiver as the input features, demonstrating an overall classification accuracy of 82.8% in urban environments [3]. The visibility labels used in those works are generated by the 3D model of nearby buildings, which however is not very precise because the 3D models are unable to represent the full shape of the surrounding infrastructure. Deep-learning methods were also proposed to improve the classification accuracy [17], but they come with a much higher computational load. To date, all works consider signal classification and positioning tests only in a static mode, and the classification accuracy and positioning accuracy in kinematic scenes have not been validated.
In this paper, we propose a new multi-feature SVM signal classifier-based weight scheme for GNSS measurements to improve kinematic GNSS positioning accuracy in urban environments. A GNSS system is typically implemented with a least-squares (LS) or a Kalman-filter (KF) estimator, and a proper weight scheme is vital for achieving reliable navigation solutions. Many weight schemes have been proposed. The signal-in-space ranging error (SISRE) scheme, which considers the signal noise and the satellite orbit errors, is one of them [18,19]. This method, however, works only in open-sky environments since it takes no account of the transmission path in GNSS denied environments. The two most popular factors to consider when determining the observation weight are the elevation angle [20,21] and C/N0 [22-24]. The elevation angle based weight model assumes that the multipath error, atmospheric errors and other unmodeled site-specific errors increase at lower elevation angles [25]; it too works well only in open-sky environments. C/N0 represents the ratio of the carrier power to the noise power of the received signal, which is a good indicator of observation quality in different environments, and a combination of C/N0 and elevation information is often used to improve the weight model [26,27]. However, as C/N0 is very likely to be affected by the multipath effect [28,29], a multipath-affected observation is not necessarily indicated by a large gross error. This demonstrates that the observation quality cannot be fully manifested by the C/N0 and elevation values. The new weight scheme is based on the identification of important features in GNSS data in urban environments and the intelligent classification of LOS/NLOS signals using the support vector machine (SVM) algorithm. With the advantage of better interpreting the quality of the GNSS observations through the features identified by the SVM classifier, the proposed weight scheme is superior to the traditional weight schemes as it can better model the GNSS measurement NLOS error in urban environments. To validate the performance of the newly proposed weight scheme, we have tested its computational load and successfully implemented it in a real-time single-frequency precise point positioning (SFPPP) system. The dynamic vehicle-based tests with a low-cost single-frequency u-blox M8T GNSS receiver demonstrate that the positioning accuracy using the new weight scheme outperforms the traditional C/N0 based weight model by 65.4% and 85.0% in the horizontal and up directions, and most position error spikes at overcrossings and short tunnels can be eliminated by the new weight scheme compared to the traditional method. It also surpasses the built-in SBAS solutions of the u-blox M8T and is even better than the built-in real-time-kinematic (RTK) solutions of multi-frequency receivers like the u-blox F9P and Trimble BD982.
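For context on the traditional schemes mentioned above, the sketch below implements two commonly used variance models, an elevation-dependent one and a C/N0-dependent one; the functional forms and constants are textbook-style examples, not the specific models evaluated in this paper.

```python
import math

def elevation_variance(elevation_rad: float, a: float = 0.3) -> float:
    """Commonly used elevation-dependent pseudorange variance: sigma^2 = a^2 / sin^2(ele)."""
    return (a / math.sin(elevation_rad)) ** 2

def cn0_variance(cn0_dbhz: float, c0: float = 1.0e-3) -> float:
    """Commonly used C/N0-dependent variance: sigma^2 = c0 * 10^(-C/N0 / 10)."""
    return c0 * 10 ** (-cn0_dbhz / 10.0)

print(elevation_variance(math.radians(15.0)))  # low elevation -> larger variance
print(cn0_variance(45.0), cn0_variance(30.0))  # weaker signal -> larger variance
```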
The remainder of the paper is organized as follows. Firstly, the related existing works are briefly reviewed. Secondly, the methodology for the development of the new weight scheme-based positioning system, including an improved SVM classifier and a real-time single-frequency precise point positioning (SFPPP) system aided by the new weight scheme, is presented. Thirdly, the experiment setup, LOS/NLOS signal classification, and positioning results along with analysis are provided. Finally, conclusions and recommendations for future work are given.
Methodology
The methodology of the SVM based weight scheme consists of two major components: SVM-based GNSS signal classifier and SVM-based weight scheme. They will be described in this section.
Feature Selection
The support vector machine (SVM) was first used for GNSS signal classification by Xu et al. [3], and a test that utilizes more features was later conducted by Lyu and Gao [30]. Since the complex information of signal traveling routes in urban environments cannot be fully manifested in C/N0 and elevation, other features must be investigated to extract more information from GNSS observations. In [30], we analyzed six features of GNSS signals for their correlation with visibility status, which include differenced C/N0, time single-difference ambiguity, time double-difference phase, time double-difference pseudorange, phase consistency, and pseudorange consistency. In this section, the same six features with the same configuration are used for signal classification. Elevation is a widely used feature for signal classification since satellites at lower elevations are more likely to be obstructed by buildings [31]. However, considering that the correlation between visibility and elevation is not essential, elevation is not used in this research, to avoid introducing misleading information into the classifiers.
Support Vector Machine Based Signal Classification
A signal classifier maps the feature data into the probabilities of individual classes. There are many classifiers proposed in the literature, among which the support vector machine (SVM) is a popular one that has been adopted in many fields. When equipped with the radial basis function (RBF) kernel, the SVM can apply a non-linear margin for high-dimensional feature classification. The structure of the RBF SVM classifier used in this paper is given in Figure 1. The input of the classifier is the six features calculated from the GNSS observations, as presented earlier, where P and L represent pseudorange and phase observations respectively; the P consistency and L consistency are calculated from the discrepancy between the observed observations and the Doppler-predicted observations. The output of the classifier is the probabilities that the input observation belongs to LOS (P_LOS) or NLOS (P_NLOS). The open-source software libsvm [32] is used for training and testing in this work.
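A minimal sketch of such a classifier using scikit-learn (whose SVC is built on libsvm) is shown below; the feature matrix and labels are placeholders, and the feature ordering is an assumption rather than the exact configuration used in the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder training data: rows are observations, columns are the six features
# (differenced C/N0, time SD ambiguity, time DD phase, time DD pseudorange,
#  phase consistency, pseudorange consistency); labels: 0 = LOS, 1 = NLOS.
X_train = np.random.randn(200, 6)
y_train = np.random.randint(0, 2, size=200)

# RBF-kernel SVM with probability outputs, so P_LOS and P_NLOS are available per observation.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale", probability=True))
clf.fit(X_train, y_train)

p_los, p_nlos = clf.predict_proba(np.random.randn(1, 6))[0]
print(f"P_LOS={p_los:.2f}, P_NLOS={p_nlos:.2f}")
```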
The SVM Based Weight Scheme
A new weight scheme based on the SVM classifier is developed in this section. The new weight scheme utilizes the intelligent identification of important features in GNSS data in urban environments and the intelligent classification of LOS/NLOS signals from the SVM based classifier. The probabilities of a signal being NLOS or LOS are a good indicator of the observation quality. In order to integrate this indicator into the GNSS estimator, a weight scheme is required to map it into observation error covariance. It has been indicated that the real-world unmodeled GNSS observation error can be modeled as the combination of LOS error and NLOS error [26]. Thus, the weight schemes are constructed as the sum of a LOS part and an NLOS part for Doppler, phase, and pseudorange observations.
Doppler Observation Weight Scheme
For the LOS Doppler observation error, the equal weight model is effective, as the differences in the observation errors at different elevation angles are minor [33]. Thus, the equal weight model is adopted for the LOS part of the Doppler observation error, and the covariance for satellite j can be written in terms of σ_{d,l}, the LOS Doppler observation error standard deviation, which is taken as 0.06 m/s in this paper. When the Doppler observation is contaminated by the NLOS effect, the model used for the NLOS error part follows a form similar to that proposed by Suzuki et al. [34], in which b is an empirical constant value to be tuned, e is Euler's number (≈ 2.718), and P_NLOS indicates the probability of the observation being NLOS, which is the output of the signal classifier. When P_NLOS is larger than 0.95, the covariance is fixed to its value at P_NLOS = 0.95 to avoid introducing astronomically large values that would corrupt the 64-bit floating-point estimator. The final covariance of the Doppler observation error for satellite j is obtained by adding the LOS part and the NLOS part.
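A hedged sketch of this Doppler weight scheme follows. The LOS standard deviation and the capping at P_NLOS = 0.95 are taken from the text; the exact exponential form of the NLOS part is not reproduced in this excerpt, so the exp(b·P_NLOS) expression and the value of b below are assumptions.

```python
import math

SIGMA_D_LOS = 0.06   # m/s, LOS Doppler error standard deviation (from the text)
B_DOPPLER = 5.0      # empirical constant "b"; placeholder value, to be tuned
P_NLOS_CAP = 0.95    # probabilities above this are clamped (from the text)

def doppler_covariance(p_nlos: float) -> float:
    """Covariance of the Doppler observation error for one satellite."""
    c_los = SIGMA_D_LOS ** 2                    # equal-weight LOS part
    p = min(p_nlos, P_NLOS_CAP)                 # avoid astronomically large values
    # Assumed exponential NLOS growth term (form not given in this excerpt).
    c_nlos = (math.exp(B_DOPPLER * p) - 1.0) * SIGMA_D_LOS ** 2
    return c_los + c_nlos                       # final covariance = LOS part + NLOS part
```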
Phase and Pseudorange Observation Weight Scheme
For the LOS part of the phase and pseudorange observations, the elevation model is accurate enough and has been widely applied. The simplified covariances for the LOS phase/pseudorange observations follow [35], where C^j_{P,l} and C^j_{L,l} represent the covariance of the pseudorange and phase observation LOS error for satellite j; a is an empirical constant value to be tuned; ele indicates the elevation angle in rad; and d is the scale factor between phase and pseudorange observation error, taken as 0.01 in this work. The C/N0 information is not considered in the LOS part, as it has already been interpreted by the signal classifier and is instead used for the NLOS part. In the proposed NLOS part for the pseudorange and phase error, C^j_{P,n} and C^j_{L,n} represent the covariance of the NLOS pseudorange and phase observation error for satellite j, and c is an empirical constant to be tuned, with different values for pseudorange and phase observations. The threshold for considering NLOS error is P_NLOS = 0.4 instead of 0.5, due to some unavoidable misclassification by the signal classifier. Thus, when P_NLOS is smaller than 0.4, the observation is treated as a LOS signal. In this model, the covariance increases exponentially as P_NLOS rises above 0.4, to handle the significant increase of the NLOS error. As in the Doppler weight scheme, when P_NLOS is larger than 0.95, the covariance is fixed to its value at P_NLOS = 0.95 to prevent astronomically large values from corrupting the estimator.
The final covariance for the pseudorange/phase observation error is obtained by adding the LOS part and the NLOS part, where C^j_P and C^j_L represent the covariance of the pseudorange and phase observation error for satellite j, respectively.
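The sketch below mirrors the phase/pseudorange scheme described above. The constant d = 0.01 and the 0.4/0.95 thresholds come from the text; the elevation-dependent LOS expression, the exponential NLOS expression and the constant values are assumptions, since the equations themselves are not reproduced in this excerpt.

```python
import math

A_LOS = 0.3        # empirical constant "a"; placeholder value, to be tuned
D_SCALE = 0.01     # phase-to-pseudorange error scale factor (from the text)
C_PSEUDO = 6.0     # empirical constant "c" for pseudorange; placeholder value
C_PHASE = 6.0      # empirical constant "c" for phase; placeholder value
P_ON = 0.4         # below this the observation is treated as LOS (from the text)
P_CAP = 0.95       # clamp to avoid corrupting the estimator (from the text)

def code_phase_covariance(elevation_rad: float, p_nlos: float):
    """Return (C_P, C_L): pseudorange and phase error covariances for one satellite."""
    # Assumed elevation-dependent LOS model: larger error at low elevation.
    c_p_los = (A_LOS / math.sin(elevation_rad)) ** 2
    c_l_los = (D_SCALE * A_LOS / math.sin(elevation_rad)) ** 2

    p = min(p_nlos, P_CAP)
    if p <= P_ON:                       # treated as a LOS signal
        c_p_nlos = c_l_nlos = 0.0
    else:                               # assumed exponential growth above the threshold
        c_p_nlos = c_p_los * (math.exp(C_PSEUDO * (p - P_ON)) - 1.0)
        c_l_nlos = c_l_los * (math.exp(C_PHASE * (p - P_ON)) - 1.0)

    return c_p_los + c_p_nlos, c_l_los + c_l_nlos   # final covariance = LOS + NLOS
```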
Methodology Evaluation
To validate the performance of the newly proposed weight scheme, we have implemented it into a real-time Doppler-aided single-frequency precise point positioning (SFPPP) system developed at the University of Calgary [36]. A feature of the SFPPP system is that it employs two separate Kalman filters: one for position determination using pseudorange and phase observations, and the other for velocity determination using Doppler observations. The position solutions with this approach are more robust than processing all observations in a single filter in the presence of undetected NLOS errors in urban environments. A consistency check based on a chi-square test is also implemented in the SFPPP system to ensure the integrity of the position solutions; if the test fails, the observation with the largest residual is eliminated for another iteration of estimation. Figure 2 shows the architecture of the SVM based weight scheme aided SFPPP system. The raw GNSS observations are first input into the SVM based signal classifier. Then the calculated probability of the signal being NLOS is passed into the proposed weight scheme to calculate the covariances for the Doppler, phase and pseudorange observations. After that, the weighted observations are used by the SFPPP system for position determination.
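The consistency check described above can be illustrated with the generic sketch below: a chi-square test on the weighted residuals, with the worst observation removed if the test fails. This is an assumption-based illustration of the idea, not the authors' SFPPP code; the function name, significance level and re-test loop (in place of full re-estimation) are illustrative choices.

```python
import numpy as np
from scipy.stats import chi2

def consistency_check(residuals, covariances, dof, alpha=0.01):
    """Return indices of observations to keep after the chi-square consistency test.

    residuals, covariances: per-observation residuals and error variances.
    dof: number of estimated states (degrees of freedom used by the filter).
    """
    keep = list(range(len(residuals)))
    while len(keep) > dof:
        r = np.asarray([residuals[i] for i in keep])
        c = np.asarray([covariances[i] for i in keep])
        stat = float(np.sum(r**2 / c))                       # weighted sum of squared residuals
        if stat <= chi2.ppf(1.0 - alpha, df=len(keep) - dof):
            break                                            # solution is consistent
        worst = keep[int(np.argmax(np.abs(r) / np.sqrt(c)))]
        keep.remove(worst)                                   # drop largest residual, re-test
    return keep
```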
Field Test Description
Two kinematic vehicle-based field tests were performed with a low-cost GNSS receiver (u-blox M8T) in downtown Calgary, on 4 August (field test 1) and 12 October (field test 2) 2020, respectively. Figures 3 and 4 show the routes of the two field tests in Google Earth view, in which the yellow and green triangles represent the short tunnel and the pedestrian overcrossings connecting buildings. Figure 5 gives a demonstration of an overcrossing and a short tunnel. As shown, the two testing routes include the most challenging scenes in urban environments, with overcrossings, a short tunnel and an urban canyon populated by high-density buildings. In both field tests, the vehicle was moving at a maximum speed of 50 km/h (the speed limit in urban Calgary). The field test on 4 August lasted about 10 min within a small square loop route and is used to train and assess the SVM signal classifier. The field test on 12 October lasted 30 min and is used for positioning accuracy evaluation. The route of field test 2 covered a wider Calgary downtown area, including wide streets of six lanes and narrow streets of only two lanes, and the short tunnels and overcrossings along the testing route made the environment further challenging for GNSS positioning.
For GNSS data acquisition, an elevation cutoff angle of 5 degrees was adopted with a data sampling rate of 10 Hz, and GNSS data from three constellations (GPS, GLONASS and Galileo) were logged. For the real-time SFPPP system, the orbit, satellite clock bias and real-time ionosphere products from CNES are used for direct corrections [37]. The tropospheric error is corrected using the Saastamoinen model with the Global Mapping Function (GMF) [38] for both the zenith wet part and the hydrostatic part. The receiver and satellite phase biases b^r_{L1} and b^s_{L1} are absorbed by the ambiguity term. The code bias b^s_{P1} is corrected using the 30-day differential code bias (DCB) product from the Center for Orbit Determination in Europe (CODE). The empirical parameters for the weight scheme are tuned using the dataset from field test 1, and the used values are given in Table 1. These parameters are set the same for all satellites.
Three GNSS receivers were used: a single-frequency receiver (u-blox M8T) and two multi-frequency receivers (u-blox F9P, Trimble BD982), all connected to the same antenna installed on the vehicle roof using a signal splitter. Only raw observations from the single-frequency u-blox M8T receiver are used in this work; the built-in RTK solutions from the two multi-frequency receivers are used only for positioning accuracy comparisons. An upward fisheye camera was set up on the vehicle roof to capture images of the surrounding environment and thereby provide satellite visibility ground truth for training and testing the SVM model; more details are given later in this section. The SPAN system from Novatel, which includes a high-end Novatel ProPak6 GNSS receiver and a tactical IMU with a built-in fiber-optic gyroscope, was used to provide the reference values (accurate at the decimeter level in this urban environment) for the positioning accuracy analysis. Further, the position output from the two multi-frequency GNSS receivers (Trimble BD982 and u-blox F9P) was logged for external evaluation of the positioning accuracy of the SFPPP system using the new weight scheme. A Trimble Net R9 GNSS receiver was set on the roof of the ENF building at the University of Calgary to serve as the base station for the Novatel SPAN system, u-blox F9P, and Trimble BD982 receivers; the maximum baseline length during the whole field test was about 6.2 km.
SVM Classifier Training and Performance Evaluation
This section focuses on the training and performance evaluation of the SVM-based classifier. The performance evaluation considers both the testing accuracy and the computational load in the testing phase, to validate the practicality of the SVM based classifier in a real-time GNSS application.
To get the satellite visibility ground truth for training and testing the SVM based classifier, the upward fisheye camera based satellite visibility labeling method is applied [30]. First, the captured image is segmented into sky areas and obstruction areas manually. Second, the satellite location is projected to the image plane via Mei's fisheye model [39]. After that, the satellite visibility ground truth is obtained by comparing the projected satellite location to the segmented image. Figure 6 is an example of this labeling process: an upward fisheye camera image is shown in the left part, and the right part is the segmented image with the projected satellites' locations, in which the segmented sky areas are rendered as blue and the obstruction areas are rendered as black. Satellites located in the blue areas are marked as LOS satellites and vice versa. In the experiment of field test 1, 6078 images were captured using the upward fisheye camera installed on the roof of the vehicle, and they were segmented manually using a labeling tool developed at the University of Calgary to determine the satellite visibility for all GNSS observations. 70% of the data in field test 1 is used for training and the remaining 30% is assigned to testing.
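The final labeling step described above can be sketched as follows: a satellite is marked LOS if its projected pixel falls in the segmented sky area. The fisheye projection itself (Mei's model) is omitted, and the function and variable names are illustrative assumptions.

```python
import numpy as np

def label_visibility(sky_mask: np.ndarray, sat_pixels: dict):
    """sky_mask: HxW boolean array (True = sky); sat_pixels: {sat_id: (row, col)} projected locations."""
    labels = {}
    for sat_id, (row, col) in sat_pixels.items():
        in_image = 0 <= row < sky_mask.shape[0] and 0 <= col < sky_mask.shape[1]
        labels[sat_id] = bool(in_image and sky_mask[int(row), int(col)])  # True = LOS, False = NLOS
    return labels
```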
No data from field test 2 are used for training or testing, due to the heavy workload of manual labeling. Table 2 shows the LOS, NLOS and overall accuracy of the trained classifier on the testing dataset. The overall testing accuracy reaches 86.05%, which, however, is much worse than the accuracy from the static test that we presented in [30]. This indicates that the complex information of signal traveling routes in urban environments is harder to interpret from the six features in the kinematic scene than in the static scene, as the signal traveling route changes much faster. Table 3 gives the configuration and results of the test of the SVM based classifier prediction time. A total of 1800 epochs of GNSS observations are input sequentially into the trained SVM based classifier to get the satellite visibility predictions. The result reveals that the prediction process for the GNSS observations from a single epoch takes only 11.7 ms. The SVM based classifier thus consumes a low computational load and can be applied to high-rate real-time GNSS applications, especially given that only one of the four cores of the low-power Intel Core i5-8250U CPU was used in this test.
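A minimal sketch of such a prediction-time test is given below: epochs of GNSS observations are fed sequentially to the trained classifier and the average per-epoch prediction time is measured. Names, data shapes and the epoch count are illustrative, not the authors' benchmark code.

```python
import time
import numpy as np

def average_prediction_time(clf, epochs):
    """epochs: list of (n_satellites, 6) feature arrays, one per GNSS epoch."""
    start = time.perf_counter()
    for features in epochs:
        clf.predict_proba(features)          # per-satellite P_LOS / P_NLOS prediction
    elapsed = time.perf_counter() - start
    return elapsed / len(epochs)             # average seconds per epoch

# Example with 1800 epochs of placeholder features, as in the test described above:
# epochs = [np.random.rand(20, 6) for _ in range(1800)]
# print(average_prediction_time(clf, epochs))
```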
Positioning Accuracy Analysis
The evaluation of the proposed new weight scheme will be conducted in two ways. First, the positioning accuracy will be compared between the positioning solutions using the new weight scheme (SVM method) and the widely used C/N0 based weight model (C/N0 method) [22,24]. Then the positioning accuracy will be compared between the positioning solutions using the new weight model and independent position output from commercial receivers including high-end multi-frequency RTK solutions. The trained SVM classifier using the dataset in field test 1 will be directly applied to the dataset in field test 2 to validate the positioning performance. Since no data in field test 2 is involved in the training phase, the positioning performance shown below is expected to be reproducible. Figure 7 compares the positioning error in time series and the positioning error cumulative distribution function (CDF) of the SFPPP solutions using the SVM method and the C/N0 method. There is a significant gain in terms of position solution robustness for the SVM based weight scheme over the C/N0 based weight scheme. When the SVM method is applied, the number of epochs with positioning error over 10 m in either horizontal or upward directions is greatly decreased compared to the C/N0 method, and the number of position error spikes is also significantly reduced. Further, the overall accuracy of the position solutions is greatly improved by the proposed SVM based weight scheme. The 95% CDF of the positioning errors using the C/N0 method reaches 15 m, 30 m and more than 40 m in the east, north and up directions, while the SVM based weight scheme brings down the 95% CDF of the positioning errors to 5 m, 10 m, and 12 m in the three directions.
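The error statistics quoted above can be reproduced as sketched below: the "95% CDF" value of a positioning-error series is taken here as the 95th percentile of the absolute errors per axis, alongside RMS and STD. Array names and this interpretation of the 95% CDF are illustrative assumptions.

```python
import numpy as np

def error_stats(errors: np.ndarray):
    """errors: N x 3 array of east/north/up position errors in metres."""
    abs_err = np.abs(errors)
    return {
        "cdf95": np.percentile(abs_err, 95, axis=0),   # 95% CDF value per axis
        "rms": np.sqrt(np.mean(errors**2, axis=0)),    # RMS per axis
        "std": np.std(errors, axis=0),                 # STD per axis
    }
```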
It is worth mentioning that the position errors for the SVM method have a mean value close to zero in all three directions, while a noticeable positive bias can be observed in the up direction for the C/N0 method. Such bias is caused by undetected NLOS errors, as the NLOS effect always introduces positive signal delays to the observations, and the NLOS error typically grows larger at lower elevations. This further demonstrates the benefit of the newly proposed weight scheme over the C/N0 method in the detection and proper handling of the NLOS error in urban environments. Table 4 shows the comparison of the statistics for the standard deviation (STD) and the root mean square (RMS) of the positioning accuracy for the two methods. The RMS position errors of the SVM method are 7.8 m horizontally and 5.8 m vertically, which are 65.4% and 85.0% improvements in the horizontal and up directions respectively when compared to the traditional C/N0 method.
Figure 8 shows the comparison of the Google Earth view of the track of the SFPPP using the C/N0 weight scheme and the SVM weight scheme. It can be seen that there were position error spikes with the C/N0 method at almost every overcrossing and short tunnel, and the errors could be up to several blocks in some severe situations. In comparison, the position solutions with the SVM method stay close to the ground truth most of the time, even at an overcrossing or a short tunnel.
In Figure 7, we notice that there is a significant position error spike at epoch 550 s in the east direction and two minor position error spikes at epochs 180 s and 1120 s in the east and north directions for the SVM method. They can also be observed in Figure 8 at three locations, and it takes some epochs before the SFPPP position solution converges back close to the ground truth after a position error spike occurs. This happens due to misclassification by the SVM based signal classifier. The issue could be addressed through integration with a low-cost inertial measurement unit (IMU) to provide more information for NLOS detection and position aiding; this will be considered in future work.
To further validate the benefit of the SVM based weight scheme, the SFPPP positioning accuracy with u-blox M8T using the SVM based weight scheme is also compared to the built-in solutions from some commercial receivers, namely the built-in SBAS solutions of the single-frequency u-blox M8T, the built-in RTK solutions of a dual-frequency u-blox F9P, and the built-in RTK solutions of multi-frequency Trimble BD982. The result is given in Table 5. First, it can be seen that, in urban environments, the receiver output RTK solutions from multi-frequency GNSS receivers like the u-blox F9P and the high-end Trimble BD982 have fix rates of only 16.8% and 27.7%, respectively. Second, the positioning accuracy of the proposed method using real-time SFPPP with the u-blox M8T is better than the built-in SBAS solutions of the u-blox M8T, with an RMS improvement of 49.5% and 83.7% in the horizontal and up directions, and further it outperforms the built-in RTK solution from multi-frequency GNSS receivers like the u-blox F9P and the Trimble BD982. The comparison to independent receiver position output further confirms the effectiveness of the proposed new weight scheme for precise positioning in urban environments. It is expected that the new weight scheme can further improve the positioning accuracy in urban environments using multi-frequency receivers and multi-frequency precise point positioning (PPP) and RTK methods.
Conclusions and Recommendations
In this paper, a new SVM signal classifier-based weight scheme for GNSS measurements has been proposed to improve kinematic GNSS positioning accuracy in urban environments. Traditionally, C/N0 and elevation angle are widely used for weighting GNSS measurements; however, they cannot fully reflect the observation quality in urban environments. The new weight scheme is based on the identification of important features in GNSS data in urban environments and intelligent classification of LOS/NLOS signals. With the advantage of better interpreting the quality of GNSS observations through the features identified by the SVM classifier, the proposed weight scheme is superior to the traditional weight scheme as it can better model the GNSS measurement NLOS error in urban environments.
The new weight scheme has been tested for its computational load and successfully implemented into a real-time single-frequency precise point positioning system to validate its performance. The kinematic vehicle-based tests with a low-cost single-frequency u-blox M8T GNSS receiver demonstrate that the positioning accuracy using the new weight scheme outperforms the traditional C/N0 based weight model by 65.4% and 85.0% in the horizontal and up directions, and most position error spikes at overcrossings and short tunnels are eliminated by the new weight scheme compared to the traditional method. It also surpasses the built-in SBAS solutions of the u-blox M8T and is even better than the built-in RTK solutions of multi-frequency receivers such as the u-blox F9P and Trimble BD982.
For future work, a low-cost IMU will be integrated with the GNSS solutions to refine the few major positioning error spikes shown in the test. Also, more feature data with other combinations from multi-frequency receivers will be used to increase the classification accuracy and thereafter further improve the accuracy and robustness of the positioning system in urban environments.
Author Contributions: Z.L. conceived of, designed the algorithm and performed the related experiments. Z.L. and Y.G. analyzed the data and wrote the paper. All authors have read and agreed to the published version of the manuscript. | 9,630.6 | 2020-12-01T00:00:00.000 | [
"Computer Science"
] |
The front-end hybrid for the ATLAS HL-LHC silicon strip tracker
For the HL-LHC, ATLAS [1] will install a new all-silicon tracking system. The strip part will consist of five barrel layers and seven end cap disks on each side. The detectors will be connected to highly integrated, low-mass front-end electronic hybrids with custom-made ASICs in 130 nm CMOS technology. The hybrids will be flexible four-layer copper-polyimide constructions. They will be designed and populated at the universities involved, while the flexible PCBs will be produced in industry. This paper describes the evolution of hybrid designs for the barrel and end cap, discusses their electrical performance, and presents results from prototype modules made with the hybrids.
Introduction
The next step for the Large Hadron Collider will be a High Luminosity Upgrade, referred to as the HL-LHC project [3]. The upgrade to a more powerful accelerator complex will be accomplished in two phases. In phase 2 the ATLAS experiment will replace the current inner detector with an all-silicon tracking system in order to cope with the increased demands. This implies the replacement of the Transition Radiation Tracker (TRT) with a silicon strip tracking system. A sketch of the cross-sectional view of the proposed overall inner tracker layout, composed of silicon pixel and strip detectors, is depicted in figure 1.
By the start of phase 2, after an integrated luminosity of about 700 fb⁻¹, a completely new inner detector will be needed regardless of the LHC upgrade plans, due to the significant radiation damage and additionally due to accumulated on-detector faults which impact the physics performance of the detector. An improved, radiation-resistant and more granular detector structure will be required for the 10× higher instantaneous luminosity, and the much higher track density, after the upgrade. The inner tracker system has to withstand increased particle fluences. In order not to have a negative impact on the physics reach of the detector, low-mass and low-radiation-length materials are required. To keep the costs at a reasonable level, the new n-in-p technology has been adopted for the inner tracker silicon strip sensors.
The silicon strip part of the inner tracker is electro-mechanically partitioned into central and forward regions. As shown in figure 1, the central region will be comprised of five full-length barrel layers (two short-strip, three long-strip layers) plus one stub layer (to compensate for the loss in the transition region), whereas the forward region will be comprised of seven end cap disks on each side (shorter strips in the vicinity of the beam line and longer strips at higher radii). For details, consult [4,5].
Baseline designs
In this section, a general overview is given of the baseline concepts of the central and forward regions of the inner strip tracker. The front-end electronics to read out the strip detectors will be 130 nm technology, custom-designed, mixed-signal ASICs (the ABC130) mounted on low-mass, polyimide-based multilayer flexible PCBs. These highly integrated hybrids are mounted directly on the silicon sensors. Mechanically, the hybrids are glued directly to the surface of the sensors. All electrical connections to the sensors and to the ASICs are through bonding wires. These electrical structures, referred to as single-sided modules, are the basic electrical building blocks of the system. Hybrid assembly, ASIC mounting, wire bonding and module building are all performed in-house, either with automated machines or with custom designed and fabricated mechanical tools.
The available wafer size (currently 6") used to fabricate the silicon strip sensors along with the geometry of each inner tracker region, dictates the adopted segmentation of the sensors. The occupancy is then kept at a desirable level by subdividing the sensors in several strip rows [4]. Strips are in general shorter, closer to the beam pipe and longer, farther away.
Barrel
The baseline concept in the barrel is the Stave [6]. It is a highly integrated, modular structure, consisting of single-sided modules mounted on two sides of a mechanical core structure, with a total length of 1.4 m. The barrel modules are in turn composed of silicon strip sensors with a number of hybrid flex circuits mounted on them. Figure 2 depicts sketches of a short strip stave, illustrating also its cross section and a single-sided module. A total of 13 single-sided modules are mounted on each side of a stave. They are glued onto flexible copper-polyimide based bus tapes laminated on either side of a low-mass, carbon-fiber based core structure with embedded titanium cooling pipes. Barrel modules are envisioned to be identical, with the stereo angle between the two sides made by rotating the modules of one side by 40 mrad. Staves are tilted at 10° in phi relative to the barrel support structure to minimize Lorentz angle effects.
The axial strips of the barrel sensors on the two sides of the stave thus form small-angle (40 mrad) stereo pairs. All modules are otherwise identical. The staves are mounted tilted by 10° on the barrel mechanical support structures.
Each short strip barrel sensor is approximately 10 × 10 cm²; as shown in figure 2, the sensors consist of 1280 short strips, each with an approximate strip length of 2.5 cm. The new front-end readout ASICs (130 nm technology chips) are designed to read out two adjacent rows of 128 strips each. Consequently, a total of 20 ABC130 chips, arranged in two rows of 10 ASICs each, are required to read out a short strip barrel sensor. The ABC130 chips are integrated on polyimide-based multi-layer flexible PCBs (hybrids). Each barrel hybrid serves 10 ABC130 chips along with other passive components and an additional ASIC (the Hybrid Control Chip, HCC). Two hybrids will then read out each short strip barrel sensor. The powering of the hybrids and high-voltage biasing of the sensors through off-hybrid circuitry are currently the subject of intensive R&D. The first thermo-mechanical prototype hybrids have already been designed and fabricated, which allow for the investigation of the thermal characteristics of the module as well as mechanical assembly and wire bonding trials. Figure 3 shows a manufactured ABC130 hybrid. The expected power dissipation of the new generation barrel hybrids is about 2 W, with the sensors adding about 1 W at the end of operations. This results in an approximate overall power dissipation of a single-sided short strip barrel module of about 5 W. The layer build-up is similar to the electrical versions, which are currently being designed. Here several module powering and readout issues, along with the new features implemented in the new generation of ABC130 front-end ASICs, will be addressed and evaluated. The ABC130 hybrids have the same width as the sensors and are mounted with no overhang on the sensors. The new hybrids are also narrower and thinner than the hybrids using ABCN250 ASICs. This is done to reduce material as much as possible. Although the same industry-standard design rules have been adopted, the routing is more demanding than in previous versions, as higher clock and data rates are planned for the readout and several new features and algorithms have been added to the ABC130 and HCC. The hybrids are highly integrated entities, with a large impact on the overall performance of the system. Several barrel hybrids, based on the predecessor 250 nm CMOS technology front-end readout ABCN-25 ASICs [2], have been prototyped so far. These have been tested quite thoroughly, themselves being the test platform for evaluating other features of the front-end readout ASICs, the noise performance of the sensors and the powering of the system. An example of a fully populated, early generation ABCN-25 barrel hybrid and its flex circuit layer build-up is shown in figure 4. Various hybrids, with and without a shield layer, have been fabricated and tested. A solid ground layer is on the bottom, followed by power, signal traces and finally components on the top. The adopted design rules are the industry standard of 100 µm track and gap widths, with ≥ 350 µm/150 µm pad/drill vias. These hybrids are 108 mm long by 24 mm wide. The electrical and functional tests of the hybrids have been quite successful, with a typical average noise figure of the ABCN-25 ASICs of about 400 e⁻ or below, comparable to single chips. Figure 5 shows a built single-sided short strip 250 nm barrel module and a module in a test frame. The position of the ASICs is centered on the sensor strips, with all connections being via wire bonds.
These electrical modules are the building blocks of reduced-size prototype substructures called Stavelets [7]. Stavelets were constructed to investigate various powering and signaling scenarios. Modules are placed on a jig with integrated cooling pipes. The hybrids are cooled through the sensors, which are in turn cooled by the jig. Many intensive studies have been conducted successfully, with noise figures below 650 e⁻.
End cap
The end cap disks are mechanically subdivided into wedge-shaped structures called Petals, as illustrated in figure 6. The Petal concept is quite similar to that of the stave; mechanically the Petal core is constructed quite similarly to the stave core. Each end cap disk has six radial partitions, referred to as rings. The three outer rings are additionally partitioned in azimuth. In order to minimize occupancy, the inner rings' sensors have short strips, whereas the outer rings have long strips. A total of nine sensors are mounted on each side of a Petal. The sensors are segmented into short/long strips (inner/outer rings). Each of the nine end cap sensors is different, with differing strip pitch both on each segment of a given sensor and on different sensors. As a consequence, there will be up to 13 different end cap hybrids with different numbers of front-end readout ASICs. The sensor strips have a small stereo angle (20 mrad) built in. Various options are being prototyped and tested to aid decision making in the near future. For details on these and related topics, refer to [10].
The geometrical diversity of the end cap sensors, especially in some critical regions, makes the hybrid design a complex task. The innermost ring has both fine pitch and very short strips. The upper rings have additionally split sensors in azimuth (φ ). For these reasons a mini Petal prototype (called Petalet) is being built to evaluate various options and solutions for those two regions. Mini sensors with proper geometries (as shown in figure 7) have been fabricated by CNM [9] for the Petalet program. These sensors have been used to build Petalet modules. Modules are then tested individually and mounted on the bus tape glued to the surface of the Petalet core structure. An important goal of the Petalet program is to investigate different powering and readout options along with other specific issues.
The Petalet project studies two basic architectures of power and data/control signal connections to the hybrids. One option has all connections on a common side (called common power), whereas the other splits these and puts power and signals on opposite sides (called split power). The complexity of the hybrid and system designs of either option, and their integration aspects, will then be extrapolated from these studies to the Petal proper. From these studies one option will be selected for implementation. Each option has implications on the bus tape design, on the readout and on the powering.
Petalet hybrids have already been designed, manufactured and populated in-house. The layer stack-up along with pictures of the bare and the fully populated hybrids of the two options are shown in figures 8 and 9, respectively. Although the layer stack-ups differ, the design rules are, as in the barrel case, industry standard. Different approaches to differential signaling have been adopted, with microstrip used as opposed to stripline. Electrical and functional tests have been very promising, with noise figures similar to the barrel hybrids, as shown in figure 10. In this figure, a test frame for the common power hybrids is also shown. Petalet modules (shown in figure 7 for the split power case) have also been tested extensively, with noise figures of about 650 ENC, as expected.
Conclusions
The design, fabrication, assembly and testing of various prototype hybrids for the ATLAS Upgrade silicon strip tracker have progressed greatly during the last years. The barrel Stave/Stavelet programs are quite mature, whereas the end cap effort, due mainly to the complexity of the sensor layout, has only recently started with the Petalet project. The hybrids of the Stave/Stavelet and Petalet programs are highly integrated, Cu-polyimide based, multi-layer flexible PCBs housing multiple 130 nm (or 250 nm) CMOS technology ABCN front-end readout ASICs. The fabrication of the hybrids has been done in three different European companies with comparable quality and test results. There are still challenges to face with the new 130 nm CMOS process readout chip-set (ABC130 + HCC), which has higher clock/data rates, extra features and complexity. Power and signal integrity will be major topics of investigation, along with system integration and mechanical challenges. | 3,052 | 2014-02-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Persistent dark states in anisotropic central spin models
Long-lived dark states, in which an experimentally accessible qubit is not in thermal equilibrium with a surrounding spin bath, are pervasive in solid-state systems. We explain the ubiquity of dark states in a large class of inhomogeneous central spin models using the proximity to integrable lines with exact dark eigenstates. At numerically accessible sizes, dark states persist as eigenstates at large deviations from integrability, and the qubit retains memory of its initial polarization at long times. Although the eigenstates of the system are chaotic, exhibiting exponential sensitivity to small perturbations, they do not satisfy the eigenstate thermalization hypothesis. Rather, we predict long relaxation times that increase exponentially with system size. We propose that this intermediate chaotic but non-ergodic regime characterizes mesoscopic quantum dot and diamond defect systems, as we see no numerical tendency towards conventional thermalization with a finite relaxation time.
where α tunes the qubit-bath interaction anisotropy and g_i describes the qubit-bath interaction strengths, g_i = g_0(1 + γ δ_i). The total magnetization ∑_{j=0}^{L-1} S^z_j commutes with H, giving rise to polarization sectors with definite total magnetization.
Supplementary Figure S1. Spectrum and central spin entanglement on and off resonance. The left panel plots the energy E as a function of the effective central field ω̃_0 for all eigenstates of H. The right panel shows the central spin entanglement for all eigenvalues on resonance (ω̃_0 = 0) and off resonance (ω̃_0 = 4). Vertical lines in the left panel denote the field values at resonance (gray dash-dotted line) and off resonance (blue dashed) used in the right panel. On resonance, dark and bright states can be easily distinguished by E and S^0_E, while off resonance these observables become comparable. Parameters: L = 11, N_s = 1 (typical sample), α = 0.5, γ = 0.5, ∑_j S^z_j = −0.5.

This model has a natural resonance point at which the effective z-fields on the central spin and the surrounding bath spins are equal. At this point, the exchange interactions between the central spin and the bath are strongly enhanced. At α = 0, resonance occurs when ω_0 = ω. At finite α > 0 and in a fixed polarization sector, the last term in H shifts the resonance point, since it contributes only a constant for central spin 1/2. Collecting the terms in the Hamiltonian coupled to the central spin S^z_0 yields the shifted resonance condition. Without loss of generality, we set ω = 0 throughout this work, such that the resonance condition is given by ω̃_0 = 0 ⟹ ω_0 = −α g_0 ∑_{j=0}^{L−1} S^z_j. The results shown in the main text focus on the physics of the system near resonance, where the difference between bright and dark states is most pronounced. This distinction is most clearly seen in the XX limit (α = 0), where dark states are product states |↓⟩_0 ⊗ |D_−⟩ or |↑⟩_0 ⊗ |D_+⟩, whereas bright states have the form c_1(ω_0)|↓⟩_0 ⊗ |B_↓⟩ + c_2(ω_0)|↑⟩_0 ⊗ |B_↑⟩, with nonzero c_1 and c_2 dependent on ω_0. A thorough discussion of the spectrum in the XX limit is given in Ref. 1. Dark states are insensitive to changes in ω_0. In contrast, bright states can be tuned to equal superpositions of the central spin up and down at resonance (ω_0 = 0), or to configurations where the central spin is mostly polarized along either direction (as ω_0 → ∞, c_1 → 0, c_2 → 1; as ω_0 → −∞, c_1 → 1, c_2 → 0). Thus the central spin can be essentially decoupled from the bath in bright states with strong off-resonant fields. Figure S1 shows the energy spectrum of H (left panel) across a range of shifted central field values ω̃_0, and the central spin entanglement entropy (right panel) for ω̃_0 = 0 (squares) and ω̃_0 = 4 (circles); see the vertical dash-dotted and dashed lines in the left panel, respectively. We have fixed the total magnetization to ∑_{j=0}^{L−1} S^z_j = −0.5 < 0, such that dark states have ⟨S^z_0⟩ ≈ −0.5. In the spectrum, bright states come in pairs exhibiting level repulsion at resonance (see bands of red curves). Dark states show up as linear bands of near-degenerate states (see black lines). Far from resonance, the central spin is nearly polarized in bright eigenstates and has low entanglement entropy. The distinction between dark and bright states (as measured by observables such as E, ⟨S^z_0⟩, and S^0_E) thus becomes progressively less sharp away from resonance, and must be characterized by alternative means (e.g., by their sensitivity to ω_0).
Central spin projection: breakdown of perturbation theory.
In the main text, we established how perturbation theory captures the behavior of observables such as the central spin expectation value ⟨D(α, γ)|S^z_0|D(α, γ)⟩ for a broad range of anisotropies α and small to moderate disorder γ. When γ ≳ 1.0, perturbation theory breaks down more rapidly as we tune α away from the α = 0 integrable line. This is shown in Fig. S2.

Figure S2. Perturbation theory breaks down rapidly at large γ. The left plot shows the eigenstate expectation value of the central spin z-projection ⟨S^z_0⟩ for a typical disorder sample with strength γ = 10.0. We see deviations from perturbation theory due to mixing between dark and bright states. The color coding used to separate dark and bright states is only nominal at sufficiently large α, as the states can no longer be precisely separated into two distinct clusters. The right plot shows the expectation value [⟨S^z_0⟩ + 0.5] averaged over the N_D eigenstates with the smallest central spin projection, and over N_s = 500 disorder samples. The numerical data (markers) with γ = 1.0 and γ = 10.0 showcase the breakdown of perturbation theory for α ≳ 10⁻⁴ (solid lines). Parameters: L = 12, N_s = 500 (right), ∑_j S^z_j = −1, ω = α.

Locality of the adiabatic gauge potential. The adiabatic gauge potential (AGP) A_α presented in the main text was used to develop a perturbation expansion (Section II) as well as to establish chaos (Section IV). The robustness of perturbation theory in our present context can be traced back to the locality of the AGP; that is, A_α is dominated by few-body terms at mesoscopic system sizes. In the main text, we presented the decomposition:
where σ^{λ_j}_{p_i} with λ_j ∈ {x, y, z} denote the Pauli basis operators on site p_i, and 0 ≤ p_1 < p_2 < · · · < p_k ≤ L − 1 for every k = 1, . . . , L. In principle A_α has contributions from operators with all possible supports. However, in Fig. S3, we show that A_α for small α ≪ 1 has non-zero weight only for k-body operators with k = 3, 5, 7, . . . , and is dominated by 3-body terms.
Supplementary Figure S3. Locality of A_α. The vertical axis of the figure shows the sum of all squared coefficients for operators with k-body terms (normalized by the trace norm squared of A_α). The horizontal axis gives the support k. The AGP A_α has contributions only from operators with odd support. It is dominated by 3-body terms and exhibits a power-law decay ∼ k^{−c}. The exponent c ≈ 3 was found by linear regression on a log-log plot. Parameters:

Inverse participation ratio for persistent dark states. In the main text, we characterized persistent dark states based on properties of the central spin in eigenstates. Persistent dark states can also be identified by their inverse participation ratio (IPR) relative to the energy eigenbasis as α → 0, where {|n(0)⟩} is the set of eigenstates (bright and dark) at α → 0, and |D(α)⟩ is any persistent dark state at α > 0. As α → 0, the persistent dark state coincides with a single unperturbed dark state; then IPR = 1 and the dark state can be thought of as being "localized" in the reference (α = 0) energy basis. We expect that the IPR decreases on increasing α from zero, as the perturbed dark state has significant weight on an increasing number of unperturbed eigenstates. The IPR is bounded from below by the value 1/D, where D is the Hilbert space dimension of the appropriate polarization sector. A perturbed dark state can saturate this bound if it becomes an equal superposition of all reference states; then the dark state can be thought of as being fully "delocalized" in the reference basis. Figure S4 shows the behavior of the IPR for all persistent dark states over N_s = 50 disorder realizations in a single magnetization sector ∑_j S^z_j = −1. The figure plots the quantity (1 − IPR) against α over two orders of magnitude. The dark unfilled circles show the distribution of IPR values for the different persistent dark states around their respective average values, shown as red filled circles. Dark state IPRs collectively decrease with increasing α, with some persistent dark states approaching the bound 1 − IPR = 1 − 1/D (gold dashed line) at the highest α values. Note, however, that many persistent dark states are robust, showing little mixing and remaining highly localized in the reference basis even at the largest α. The average participation ratio is found to satisfy the perturbation-theory scaling (see the dotted grey lines with O(α²) scaling for reference).
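The IPR used above can be computed as sketched below, assuming the standard definition IPR = ∑_n |⟨n(0)|D(α)⟩|⁴ over the reference (α = 0) eigenbasis; this assumed definition reproduces the limits quoted in the text (IPR = 1 for a state localized on one reference state, 1/D for an equal superposition over all D reference states).

```python
import numpy as np

def inverse_participation_ratio(dark_state, reference_basis):
    """dark_state: (D,) state vector; reference_basis: (D, D) matrix whose columns are the alpha=0 eigenvectors."""
    overlaps = reference_basis.conj().T @ dark_state   # <n(0)|D(alpha)> for all n
    probs = np.abs(overlaps) ** 2
    return float(np.sum(probs ** 2))                   # IPR = sum of squared probabilities
```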
Supplementary Figure S4. Dependence of the dark state inverse participation ratio (IPR) on the anisotropy parameter.

Adiabatic gauge potential norm for variations in disorder strength. The adiabatic gauge potential which generates translations in γ-space is denoted by A_γ. The behavior of A_γ is analogous to that of A_α, and can be used to study integrability-breaking perturbations, as well as the onset of chaos, by tuning γ. Figure S5 shows the exponential divergence of the Frobenius norm of A_γ as a function of system size L.
Supplementary Figure S5. Exponential divergence of the disorder-averaged norm ∥A_γ∥ with system size. Close to the integrable point (γ = 0.01), the norm scales sub-exponentially. The curves for larger γ break off from the γ = 0.01 line at a critical size L* and subsequently grow exponentially with slope 2 ln(2), reflecting slow relaxation. Parameters: N_s = 200, α = 0.5, ω = α, ∑_j S^z_j = −1, c ≈ 1.
Persistent dark states and adiabatic gauge potential norm scaling in different magnetization sectors.
In the main text, our numerical results focused on the magnetization sector ∑_j S^z_j = −1 with zero magnetization density, lim_{L→∞} ∑_j S^z_j / L → 0. This sector is the largest one containing both dark and bright states (note that the ∑_j S^z_j = 0 sector contains only bright states). In this section, we present analogous results (specifically analogs of Figures 3 and 8 in the main text) in a magnetization sector with non-zero density ∑_j S^z_j / L = −1/4. As the Hilbert space dimension of this sector is smaller than that of the |∑_j S^z_j| = 1 sector, our numerical simulations probe larger system sizes L ≈ 20. Figure S7 extends the results of Figure 8 of the main text to the ∑_j S^z_j = −L/4 sector. The figure shows the adiabatic gauge potential norm as a function of system size L for several values of the anisotropy parameter α. As in the main text, the gauge potential norm is polynomial in L, in accordance with an integrable perturbation of an integrable system, up to a critical system size L*. For L > L*, the AGP norm grows exponentially with L:
The inset shows that the critical size L* is linearly dependent on log₂(α), just as in the inset of Figure 8 in the main text. The slope ν ≈ 2.5 is, however, twice as large as the value in the ∑_j S^z_j = −1 sector. These results are consistent with an exponentially diverging relaxation time τ_r ∼ C|α|^{2ν} 2^L. Therefore, the system size scaling of the gauge norm and the associated relaxation time continue to hold in magnetization sectors with non-zero density.
Resonant energy gap and dark-bright hybridization.
In the main text, we discussed various contributions to the AGP norm in the chaotic non-ergodic regime ( Figure 10). In particular, we found that the contribution from dark-bright mixing (DB) increased exponentially with L only at large α (α = 0.5). Here, we correlate this rise with a closing of a finite-size energy gap between the dark and bright manifolds in the spectrum. Figure S8 shows the distribution of dark and bright energies as a function of system size L, for α = 0.1 and 0.5. At α = 0.1 (left panel), the dark and bright manifolds are separated by an energy gap at the accessible system sizes, so that the dark and bright states only weakly hybridize. This explains the lack of exponential growth with L of the DB component of the AGP norm in the left panel of Figure 10 of the main text. On the other hand, at α = 0.5 (right panel), the dark and bright manifolds overlap in energy for L ≥ 14 and can strongly hybridize. This strong hybridization results in the exponential rise of the DB component of the AGP norm for L ≈ 12 in the right panel of Figure 10 in the main text. | 3,313.8 | 2020-05-27T00:00:00.000 | [
"Physics"
] |
On Scene Injury Severity Prediction (OSISP) machine learning algorithms for motor vehicle crash occupants in US
A significant proportion of motor vehicle crash fatalities are potentially preventable with improved acute care. By increasing the accuracy of triage, more victims could be transported directly to the best-suited care facility and be provided optimal care. We hypothesize that On Scene Injury Severity Prediction (OSISP) algorithms, developed utilizing machine learning methods, have potential to improve triage by complementing the field triage protocol. In this study, the accuracy of OSISP algorithms based on the "National Automotive Sampling System Crashworthiness Data System" (NASS-CDS) of crashes involving adult occupants for calendar years 2010–2015 was evaluated. Severe injury was the dependent variable, defined as Injury Severity Score (ISS) > 15. The dataset contained 37873 subjects, of which 21589 included injury data and were further analyzed. Selection of model predictors was based on potential for injury severity prediction and perceived feasibility of assessment by first responders. We excluded vehicle telemetry data due to the limited availability of these systems in the contemporary vehicle fleet, and because this data is not yet being utilized in prehospital care. The machine learning algorithms Logistic Regression, Ridge Regression, Bernoulli Naïve Bayes, Stochastic Gradient Descent and Artificial Neural Networks were evaluated. The best performance, by a small margin, was achieved with Logistic Regression, with an area under the receiver operating characteristic curve (AUC) of 0.86 (95% confidence interval 0.82–0.90), as estimated by 10-fold stratified cross-validation. Ejection, Entrapment, Belt use, Airbag deployment and Crash type were good predictors. Using only a subset of the 5–7 best predictors approached the prediction accuracy achieved when using the full set (14 predictors). A simplified benefit analysis indicated that nationwide implementation of OSISP in the US could bring improved care for 3100 severely injured patients, and reduce unnecessary use of trauma center resources for 94000 non-severely injured patients, every year. * Corresponding author. Electrical Engineering, Chalmers University of Technology, 412 96, Gothenburg, Sweden. E-mail address<EMAIL_ADDRESS>(S. Candefjord).
Introduction
Motor vehicle crashes (MVC) in the US produce around 30000 fatalities and 4 million injured people every year, adding up to a total societal economic burden of $240 billion, or $800 per citizen (Blincoe et al., 2015). A significant proportion of the fatalities are potentially preventable (Ray et al., 2016; Berwick et al., 2016). Military trauma care has achieved remarkably high survival rates, 98%, for patients reaching a treatment facility (Berwick et al., 2016). If similar outcomes can be achieved in civilian care, up to 20% of trauma deaths could be prevented (Berwick et al., 2016). For a local sample in Miami-Dade (n = 98) it was concluded that over a third of MVC deaths were potentially preventable (Ray et al., 2016). Patients with severe injury have a higher probability of surviving if they are transported directly to a trauma center, which provides specialized care with minimal delay (Hu et al., 2017; Haas et al., 2010; MacKenzie et al., 2006; Candefjord et al., 2020). A key to providing adequate care to a larger proportion of patients is to attain a high triage accuracy (Ray et al., 2016; Sasser et al., 2012), so that the rate of appropriate decisions on where to transport the patient can be increased.
In this study, we evaluate whether methods employing machine learning and variables that can be assessed on the scene of the accident have the potential to improve field triage. First, we provide a literature review including a description of the field triage process and the challenges of attaining a high triage accuracy. It also identifies previous studies that form the foundation for the present study. At the end of the introduction, we provide the aim of the current study.
Literature review and study motivation
The triage protocol is the most important decision support for identifying patients with severe injury, while using health care resources efficiently by recognizing patients not likely to be in need of specialized care. The current US guidelines for field triage of injured patients are based on four steps: 1) Vital signs, i.e. Glasgow Coma Scale, systolic blood pressure, and respiratory rate; 2) Anatomy of injury, e.g. penetrating injuries and flail chest; 3) Mechanisms of injury, e.g. falls from >20 feet and high-risk auto crash; and 4) Special considerations, e.g. older adults (aged > 55 years) (Sasser et al., 2012). The steps are assessed in sequential order: if any of the step 1 criteria indicate severe injury, the decision scheme is completed with the recommendation to transport the patient to a trauma center; steps 2-4 follow if no previous step signals severe injury. If any criterion in steps 1-3 is fulfilled, the protocol recommends transport to a trauma center, and if step 4 is fulfilled, transport to a trauma center should be considered. By employing four consecutive steps based on different risk factors, the rate of undertriage can be decreased.
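To make the sequential logic concrete, here is a minimal, illustrative Python sketch of the four-step decision flow just described. The thresholds and field names are simplified placeholders paraphrasing the steps listed above, not the complete CDC/ACS-COT guideline.

```python
# Illustrative sketch of the sequential field triage logic described above.
# The criteria are simplified placeholders, not the full guideline.

def field_triage(patient):
    """Return a transport recommendation for a simplified 4-step protocol."""
    # Step 1: vital signs
    if patient["gcs"] <= 13 or patient["sbp"] < 90 or not (10 <= patient["rr"] <= 29):
        return "transport to trauma center"
    # Step 2: anatomy of injury
    if patient["penetrating_injury"] or patient["flail_chest"]:
        return "transport to trauma center"
    # Step 3: mechanism of injury
    if patient["fall_over_20_ft"] or patient["high_risk_auto_crash"]:
        return "transport to trauma center"
    # Step 4: special considerations
    if patient["age"] > 55:
        return "consider transport to trauma center"
    return "transport per local protocol"

example = {"gcs": 15, "sbp": 120, "rr": 16, "penetrating_injury": False,
           "flail_chest": False, "fall_over_20_ft": False,
           "high_risk_auto_crash": True, "age": 34}
print(field_triage(example))  # -> "transport to trauma center"
```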
Even though sophisticated guidelines for field triage have been developed, the rate of undertriage of US trauma patients in general, and MVC occupants in particular, is high. The American College of Surgeon's Committee on Trauma (ACS-COT) (American College of Surgeons Committee on Trauma (ACS-COT), 2014) states that "an acceptable undertriage rate could be as high as 5%", when undertriaged patients are defined as having an Injury Severity Score (ISS) of 16 or more who were taken to a non-trauma center. Furthermore, ACS-COT states that "most agree that an acceptable percentage of overtriage is in the range of 25% to 35%". Xiang and colleagues (Xiang et al., 2014) showed that more than one third (34%) of patients with major trauma in the US emergency departments were undertriaged in 2010. Stitzel et al. (2016) showed, in a US population weighted sample of MVC based on the National Automotive Sampling System - Crashworthiness Data System (NASS-CDS) for years 2000-2011 (n = 9,763,984), that the rate of undertriage was 20% with an overtriage of 54%. Note that using machine learning terminology, low undertriage corresponds to high sensitivity/true positive rate (low number of false negatives), and low overtriage corresponds to high specificity (low false positive rate).
The criteria for high-risk auto crash in the field triage guidelines (Sasser et al., 2012) are: i) compartment intrusion measures; ii) passenger ejected from vehicle (partial or complete); iii) death in same passenger compartment; iv) vehicle telemetry data consistent with high risk of injury. Criterion iv is based on the development of Advanced Automatic Crash Notification (AACN) algorithms, which use vehicle sensor data such as ΔV (total change of velocity during the crash), principal direction of impact force, belt status and airbag deployment to predict the probability of any MVC occupant being severely injured. AACN shows promise to improve triage for MVC (Augenstein et al., 2003; Champion et al., 2005; Kononen et al., 2011; Stitzel et al., 2016). The use of AACN in field triage protocols was acknowledged by the "Expert Panel of the National Center for Injury Prevention and Control, Centers for Disease Control and Prevention" in 2008 (National Center for Injury Prevention and Control, 2008). The panel recommended that for an estimated risk ≥20% of having a severe injury (defined as ISS > 15), the AACN provider should inform the Public Safety Answering Point (PSAP) that the occupant is at risk for a severe injury.
Candefjord, Buendia and colleagues (Buendia et al., 2015) showed that so called On Scene Injury Severity Prediction (OSISP) algorithms, which are based on only crash characteristics that are feasible to assess on the scene of crash by first responders, achieved high accuracy for prediction of severe injury for MVC occupants in passenger cars and trucks. The OSISP concept has many similarities with AACN. The fundamental difference is that OSISP is designed to be implemented in a handheld device such as a tablet, and information about the crash should be interpreted and input to the device by first responders (Olaetxea Azkarate-Askatsua, 2017), whereas AACN is integrated into the vehicle and retrieves data about the crash from the so called Event Data Recorder (Kononen et al., 2011). The AACN prediction can be executed directly following a crash, whereas the OSISP prediction is available after on scene assessment. AACN can utilize precise measurements of ΔV and principal direction of force (Kononen et al., 2011). OSISP, on the other hand, can employ some variables that can be assessed on scene but are typically not detected by vehicles, such as the sex and estimated age of the patient and whether the patient was ejected from or entrapped in the vehicle. The AACN and OSISP concepts are complementary: AACN can perform injury severity prediction at dispatch to aid planning the rescue and care operation and call adequate personnel and resources to the scene, while OSISP can be used on scene and incorporate data from on scene assessments, e.g. observations of the vehicle and patient. An advantage with OSISP compared to AACN is that applicability is not limited by the penetration rate of AACN systems in the contemporary vehicle fleet. This rate is limited by at least three important factors. First, the National Highway Traffic Safety Administration (NHTSA) predicted that only approximately 20% of the model year 2016 vehicle fleet is equipped with AACN (Lee et al., 2017). Second, functional AACN systems commonly require an active subscription-based service, which may further reduce the proportion of vehicles for which AACN can be used (there exists no public data on the rate of active subscriptions) (Lee et al., 2017). Third, to our knowledge there is no established standard for how AACN results are derived or presented to the PSAP, and few, if any, PSAPs/prehospital care systems have implemented routines for utilizing AACN data. OSISP can be used independently of vehicle telemetry data, and has potential to be used for most MVC. For the most effective rescue and care and the widest coverage of MVC patients, AACN and OSISP could therefore be used in combination.
Aim
Current OSISP algorithms have been developed from the Swedish Traffic Accident Data Acquisition (STRADA) database using the method Logistic Regression (Buendia et al., 2015). The aim of this study is to develop an OSISP algorithm for MVC occupants in the US based on NASS-CDS data, using several machine learning algorithms and comparing their performance. A high-performing OSISP algorithm may be used to refine US field triage protocols, to improve the care for severely injured patients while decreasing unnecessary use of trauma center resources.
Data selection
The scope of this study was adult MVC occupants registered in the NASS-CDS database for calendar years 2010-2015 (Radja, 2016). NASS-CDS includes investigations of around 5000 crashes per year involving passenger cars, light trucks, vans, and utility vehicles. The rationale for selecting calendar years 2010-2015 was that in 2010 injury scores were updated according to the AIS 2005 standard, and 2015 was the last full year available at the time the study commenced.
From 2010-2015, 45075 MVC occupants sustaining injury were identified in NASS-CDS (four data sets were linked, i.e. Accident, Event, GV and OA, described in Radja (2016)). Out of these, 43712 were car or light truck occupants, whereof 37873 occupants were ≥18 years old. Furthermore, 16284 occupants had no ISS data, leaving 21589 cases that were further analyzed.
Model variables
To classify occupants as severely injured or not, the ISS was used (Baker and O'Neill, 1976). ISS builds on classification of the severity of each injury according to the Abbreviated Injury Scale (AIS) (Association for the Advancement of Automotive Medicine (AAAM), 2005). The threshold used to define severe injury was ISS > 15, which is commonly recommended (Sasser et al., 2012).
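As a concrete illustration of the outcome definition, the following sketch computes ISS from per-region AIS scores and applies the ISS > 15 threshold. The body regions and AIS values below are hypothetical examples, not taken from the dataset.

```python
# Minimal sketch of the ISS calculation used to define the outcome
# (severe injury if ISS > 15).

def injury_severity_score(ais_by_region):
    """ISS = sum of squares of the three highest AIS scores from different body
    regions. An AIS of 6 in any region conventionally sets ISS to 75."""
    if any(a == 6 for a in ais_by_region.values()):
        return 75
    top3 = sorted(ais_by_region.values(), reverse=True)[:3]
    return sum(a ** 2 for a in top3)

# Example: worst injury per region for one hypothetical occupant
ais = {"head": 4, "chest": 3, "abdomen": 2, "extremities": 2, "external": 1}
iss = injury_severity_score(ais)
print(iss, iss > 15)  # 29, True -> classified as severe injury
```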
The dependent variable was whether the occupant sustained severe injury or not. The predictor variables were chosen based on experience about their potential for injury severity prediction, gained from several literature sources (Candefjord et al., 2015; Stitzel et al., 2016; Kononen et al., 2011; Augenstein et al., 2003). Furthermore, a requirement for all selected predictors was that they were deemed feasible to assess at the scene of crash by first responders. We excluded vehicle telemetry data due to its limited availability in the contemporary vehicle fleet. All variables included in the model are detailed in Table 1.
Data representation and handling missing data
All of the variables in this study were categorical (Table 1). In order to achieve the best performance for a machine learning task, the right choice of data representation technique is vital, especially for categorical data. Some machine learning algorithms, such as Support Vector Machines and Multi-layer Perceptrons (Deep Learning), explicitly require all the input variables to be numerical (Hastie et al., 2009). There are plenty of techniques to transform categorical values to numerical data, enabling one to use any algorithm for all types of data - numeric or categorical. Two widely used techniques are label encoding and one hot encoding, described in Brink et al. (2017, pp. 36-43). We found that one hot encoding consistently yielded improved performance over label encoding, and decided to use it for this study. Except for the variables Vehicle and Location, all other variables exhibited missing values to a varying extent (Table 1). We applied four different methods to handle missing data (Brink et al., 2017, pp. 36-43), creating one separate instance of the dataset per method. The methods were: remove cases with missing values (discarding 5394 cases that contained missing data), turn missing values into a new category level, impute missing values using the mode (most frequent value), and impute missing values using conditional probability. While the first three methods are self-explanatory, it is important to describe some details of the imputation using conditional probability. Assuming independence of the variables, the probabilistic relationships between the predictor and target variables can be determined from the available complete data set (without missing values). Subsequently, these conditional probabilities can be used to impute the data for variables with missing values. Imputation via conditional probability ensures that the missing value for a variable is filled with the value having the highest probability with respect to the target case, i.e., a missing value is filled in with the most likely value determined by similar cases that are not missing that particular variable. Unlike imputation using the mode, imputation with conditional probabilities has the advantage of respecting the distribution of the variable and therefore should not introduce unexpected patterns in our data.
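A minimal sketch of the preprocessing described above, using pandas: one hot encoding of the categorical predictors and two of the four missing-data strategies (a "missing" category and mode imputation). The column names and values are hypothetical, not the actual NASS-CDS fields.

```python
# Sketch of one hot encoding plus two simple missing-data strategies.
import pandas as pd

df = pd.DataFrame({
    "BeltUse":  ["belted", None, "unbelted", "belted"],
    "Ejection": ["none", "partial", None, "none"],
    "severe":   [0, 1, 1, 0],
})

# Strategy "new category": turn missing values into an explicit level
df_newcat = df.fillna({"BeltUse": "missing", "Ejection": "missing"})

# Strategy "mode imputation": fill with the most frequent value per column
df_mode = df.copy()
for col in ["BeltUse", "Ejection"]:
    df_mode[col] = df_mode[col].fillna(df_mode[col].mode().iloc[0])

# One hot encoding of the categorical predictors
X = pd.get_dummies(df_mode[["BeltUse", "Ejection"]])
y = df_mode["severe"]
print(X.columns.tolist())
```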
Machine learning algorithms
In this study we view machine learning as a broad concept, including traditional mathematical/statistical models that can be used for binary classification. Logistic Regression is a method that is commonly used in studies of MVC, see e.g. (Harrell, 2001; Schiff et al., 2008; Augenstein et al., 2003; Kononen et al., 2011; Candefjord et al., 2015; Buendia et al., 2015). We performed a literature study of similar problem domains and identified some natural competitors to Logistic Regression. We also diversified our search and tried several linear and non-linear machine learning methods. In an initial round of tests, we included Decision Trees, Random Forest, Linear Discriminant Analysis and Support Vector Machines (Hastie et al., 2009). The initial tests demonstrated subpar performance of these algorithms, and they were therefore excluded from this study. We finally settled on a set of four algorithms that were deemed to have high potential: Ridge Regression, Stochastic Gradient Descent, Bernoulli Naïve Bayes, and Artificial Neural Networks. Every model was compared to the established Logistic Regression algorithm with respect to the performance metrics defined in Section 2.5. Python (version 3.5) was used as the base for all data analysis, with the data science libraries Pandas (version 0.22.0) and Scikit-learn (version 0.19.0) (Pedregosa et al., 2011), using the default settings for classifiers. For Artificial Neural Networks, TensorFlow (version 1.0.0) (Abadi et al., 2016) was used. Output results were adapted so that all classifiers could be easily compared.
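The following sketch shows one way to line up the evaluated classifiers with scikit-learn defaults, as described above. The MLPClassifier is a stand-in for the TensorFlow network actually used in the study, and the exact settings are assumptions rather than the study's configuration.

```python
# Illustrative classifier line-up; versions and settings may differ from the study.
from sklearn.linear_model import LogisticRegression, RidgeClassifier, SGDClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.neural_network import MLPClassifier  # stand-in for the TensorFlow ANN

classifiers = {
    "Logistic Regression": LogisticRegression(),
    "Ridge Regression": RidgeClassifier(),
    # "log_loss" is called "log" in older scikit-learn releases such as 0.19
    "Stochastic Gradient Descent": SGDClassifier(loss="log_loss", penalty="elasticnet"),
    "Bernoulli Naive Bayes": BernoulliNB(),
    "ANN (one hidden layer, 20 neurons)": MLPClassifier(hidden_layer_sizes=(20,)),
}
```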
Logistic regression
Studies of MVC have commonly focused on risk factors and the potential of individual predictor variables for injury severity prediction (Schiff et al., 2008; Kononen et al., 2011; Candefjord et al., 2015; Buendia et al., 2015). This objective is well met by Logistic Regression, which describes the probabilistic relationships between the dependent variable and the individual predictor variables. Let P denote the probability of severe injury. Y is the dependent variable. We choose Y = 0 for non-severe injury and Y = 1 for severe injury. The model can be defined using the form in Equation (1), logit(P(Y = 1)) = ln[P(Y = 1)/(1 − P(Y = 1))] = β_0 + Σ_k β_k X_k, where the left hand side of Equation (1) is referred to as the logit transformation. The fraction P(Y = 1)/(1 − P(Y = 1)) is called the odds ratio (OR) for the event Y = 1. The logit transformation enables estimating the logged odds of the event Y = 1. The OR for the kth predictor variable X_k is given by the coefficient e^{β_k} and represents the constant effect of the predictor X_k on the likelihood that Y = 1 will occur. This is exactly what we aim to measure in MVC analysis, i.e. quantifying the isolated effect of each X on Y with a single metric. Another advantage is that Logistic Regression does not require a linear relationship between the dependent variable and the predictors, since it applies a nonlinear log transformation to the predicted OR (it does not mean that the model is non-linear, the transformation is applied on the output).
Results of Logistic Regression modeling were expressed as adjusted OR, with corresponding 95% confidence intervals (CI) and levels of statistical significance (p-values). The overall low proportion of severe injury in this study (<6%) suggested that OR is a reasonable approximation of the relative risk. To report the required results, we implemented a wrapper class that encapsulated the default Logistic Regression from Scikit-learn and added the functionality to compute the aforementioned quantities.
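A sketch of how adjusted OR, 95% CI and p-values can be obtained. The study wrapped scikit-learn's LogisticRegression; here statsmodels is used instead because it reports standard errors directly, so this is an alternative route rather than the authors' implementation. X and y are assumed to be the one hot encoded predictors and the outcome from the earlier sketches.

```python
# Sketch: odds ratios, 95% CIs and p-values for a logistic model via statsmodels.
import numpy as np
import statsmodels.api as sm

X_design = sm.add_constant(X.astype(float))    # add an intercept column
fit = sm.Logit(y, X_design).fit(disp=0)

odds_ratios = np.exp(fit.params)               # OR = exp(beta_k)
conf_int = np.exp(fit.conf_int())              # 95% CI for each OR
summary = odds_ratios.to_frame("OR")
summary[["CI low", "CI high"]] = conf_int.values
summary["p-value"] = fit.pvalues
print(summary)
```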
Ridge regression
The performance of Logistic Regression depends on two key assumptions. First, there are no outliers (misclassified instances) in the dataset. Second, there are no high correlations among the predictor variables (multicollinearity). A closer analysis of the dataset in the present study revealed that there exist cases with the same set of predictor variable values but different dependent variable outcomes. This is expected in an MVC dataset, because similar conditions do not necessarily end with similar injury outcomes. However, these outliers can unduly influence the results of the analysis and lead to incorrect inferences.
Ridge Regression uses a regularization approach that constrains/regularizes or shrinks the coefficients of the ordinary least squares regression model. It discourages learning an overly complex model by allowing misclassification of extreme outliers and thereby improves generalizability. Ridge Regression also mitigates the multicollinearity problem through a shrinkage parameter controlling the penalty on the model coefficients. The learner drives coefficients towards zero and does not aim to fit every training data point. The penalty used by this method is the squared L2 norm of the coefficients. For a detailed mathematical formulation and practical implementation of Ridge Regression please refer to Hastie et al. (2009, pp. 59-65), Müller and Guido (2016, pp. 51-57) and Tattar (2017, pp. 312-318).
Stochastic gradient descent
Maximum likelihood estimation is used by several machine learning methods, including Logistic Regression, to estimate the model coefficients (β k ) (Equation (1)). A minimization algorithm such as Gradient Descent (GD) optimization is usually employed for this purpose. An alternative approach is to replace GD with its counterpart called Stochastic Gradient Descent (SGD). In GD optimization the cost gradient is computed from the complete training set, which can become time consuming for large datasets. In SGD, the gradient is computed for one or a batch of training data points at a time until it converges. The term "stochastic" comes from the fact that the gradient based on randomly selected training samples is a "stochastic approximation" of the "true" cost gradient.
Besides being faster than GD, SGD is superior for datasets with redundant samples (all variable levels equal), which applies to the dataset in the present study. This observation formed the rationale to include a variant of Logistic Regression with SGD based training. We used a regularization approach called "Elasticnet", which is a convex combination of the L1 and L2 norms, available in Scikit-learn and well explained in its documentation.
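A minimal sketch of the SGD-trained logistic model with elastic net regularization described above. The l1_ratio and iteration settings are illustrative assumptions, and X_train, y_train, X_test are assumed train/test splits of the preprocessed data.

```python
# Sketch: logistic loss trained with SGD and an elastic net penalty.
from sklearn.linear_model import SGDClassifier

sgd = SGDClassifier(
    loss="log_loss",        # logistic loss ("log" in older scikit-learn releases)
    penalty="elasticnet",   # convex combination of L1 and L2 penalties
    l1_ratio=0.15,          # mixing between L1 (=1.0) and L2 (=0.0), illustrative
    max_iter=1000,
)
sgd.fit(X_train, y_train)
severe_probability = sgd.predict_proba(X_test)[:, 1]  # probability of severe injury
```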
Bernoulli Naïve Bayes
Usually, Naïve Bayes based algorithms are designed for text classification (McCallum and Nigam, 1998). However, Bernoulli Naïve Bayes is a variant where each feature is assumed to be a binary variable (Manning et al., 2008), and therefore appears as a strong candidate classifier to be considered for evaluation on the MVC dataset. The default implementation of Bernoulli Naïve Bayes provided in Scikit-learn was used.
Artificial Neural Networks
Artificial Neural Networks (ANN) is a mathematical formulation of a biological brain. Like the brain, an ANN model consists of several neurons interconnected over different layers (Haykin, 2009). This interconnected structure models and stores the information about the complex relationships between the predictor and target variables. The experience learned from the training data is stored as weights and biases for each individual neuron. The ANN model emulates a non-linear function between the input predictor variables and the output target variable as y = F(U; w), where F is the non-linear function estimation, U is the set of input predictor variables and y is the target variable. The weights w for the ANN model are decided based on an optimization algorithm, which minimizes the error between the estimated and the actual target variable during the training process.
Consider a training set {(U(i), d(i))}, i = 1, ..., N, with N sample points. F(U(i); w) is emulated by the ANN model, d(i) is the desired value of the output corresponding to the inputs U(i), and w is a weight matrix. The network training is achieved by minimizing the loss function ℰ, defined as the sum of squared errors ℰ(w) = Σ_i [d(i) − F(U(i); w)]^2. In this study the ANN models used had one hidden layer with 20 neurons, and the training was achieved by the Levenberg-Marquardt training algorithm (Marquardt, 1963; Levenberg, 1944).
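Scikit-learn does not provide Levenberg-Marquardt training, so the following stand-in uses an MLPClassifier with one hidden layer of 20 neurons and the default solver; it only illustrates the network size discussed above, not the study's TensorFlow implementation. X_train, y_train and X_test are assumed data splits.

```python
# Stand-in sketch for the single-hidden-layer network (20 neurons).
from sklearn.neural_network import MLPClassifier

ann = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
ann.fit(X_train, y_train)
severe_probability = ann.predict_proba(X_test)[:, 1]
```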
Performance estimates
The area under the receiver operator characteristic (ROC) curve (AUC) was used to measure the performance of the model. The ROC curve shows the power in terms of sensitivity and specificity for prediction of severe injury for different cutoff values of P(Y = 1) (the probability of sustaining severe injury). The cutoff value determines the trade-off between sensitivity and specificity; increasing the sensitivity (identifying more MVC occupants with severe injury) comes at the cost of decreasing the specificity (more false positives). Use of the OSISP algorithm in the field will require finding a suitable value for this cutoff. This is best determined by the management of trauma care systems and is outside the scope of this study.
A 10-fold stratified cross validation (SCV) procedure was performed to estimate the performance of the OSISP algorithm for unseen data, in terms of ROC and AUC. The dataset was divided into ten randomized folds with approximately equal numbers of MVC occupants and similar distributions of severe/non-severe injury. One fold at a time was left out, and a classification model was derived on data from the remaining nine folds. The model was then validated by classifying the observations in the left out fold. This procedure was repeated for all folds, i.e. performed ten times. The mean and 95% CI (±2 standard deviations from the mean, assuming a normal distribution) for the classification performance were calculated from the ten scores.
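A sketch of the cross-validation and confidence-interval computation described above, assuming a probability-producing classifier clf (e.g. the logistic model) and the preprocessed X, y from the earlier sketches.

```python
# Sketch: 10-fold stratified cross-validation with AUC mean and +/- 2 SD interval.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

aucs = []
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(X, y):
    clf.fit(X.iloc[train_idx], y.iloc[train_idx])
    scores = clf.predict_proba(X.iloc[test_idx])[:, 1]
    aucs.append(roc_auc_score(y.iloc[test_idx], scores))

mean_auc = np.mean(aucs)
ci_low, ci_high = mean_auc - 2 * np.std(aucs), mean_auc + 2 * np.std(aucs)
print(f"AUC {mean_auc:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```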
Feature ranking and variable subset performance
An advantage of machine learning is that it makes it possible to determine which features are important and which can be considered redundant or unneeded. The outcome is a subset of relevant features. This process is often referred to as feature selection. An important goal in feature selection is feature ranking. When developing an OSISP model, it is valuable to know the ranking of the predictor variables (features) with respect to their importance in determining injury severity.
There are numerous approaches available for feature ranking. However, selection of a suitable method requires deep knowledge of the problem domain. In this study, our goal was to acquire a ranking of the features for each machine learning algorithm. This requires the algorithm to inherently assign scores to the features. The coefficients in Logistic Regression (and its two variants) can serve this purpose.
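One coefficient-based procedure of this kind is recursive feature elimination, described in the next paragraph. A minimal sketch, assuming scikit-learn's RFE class with a logistic regression estimator (the text does not name the exact implementation used), is shown here.

```python
# Sketch: recursive feature elimination ranking with a logistic regression estimator.
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

rfe = RFE(estimator=LogisticRegression(), n_features_to_select=1, step=1)
rfe.fit(X, y)   # X, y assumed from the earlier preprocessing sketches

# ranking_[i] == 1 marks the most important feature; larger values are weaker
ranked = sorted(zip(rfe.ranking_, X.columns))
for rank, name in ranked[:5]:
    print(rank, name)
```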
We followed the widely used recursive feature elimination (Guyon et al., 2002) approach to carry out the feature ranking. It starts with the full set of features and recursively removes the least significant feature(s), building a new model at each step. The process continues recursively on smaller subsets of features until a desired number is reached. In our case, a regression model was trained on the full set of predictor variables, and their importance was determined through the regression coefficients. The most redundant feature was then eliminated recursively, and the recursion stopped when only one feature was left (i.e. all were ranked). The algorithm's classification performance in terms of AUC was evaluated as a function of an increasing number of features, added in the order of their ranking from best to worst. This demonstrated which, and how many, features were needed to approach the performance of the full feature set.

Table 2 shows the classification performance in terms of AUC evaluated by 10-fold SCV for the top performing classifiers and imputation methods. The highest accuracy obtained was AUC = 0.86 (95% CI 0.82-0.90) by Logistic Regression. Consistently under all four approaches of handling missing data, both Logistic Regression and Ridge Regression achieved high accuracy. Logistic Regression, with marginally better 95% CI values, was the top performing classifier. SGD performed almost on par with the best classifiers, whereas Bernoulli Naïve Bayes and ANN showed slightly lower accuracy. The four methods of handling missing data produced relatively similar results, with Conditional Probabilities and New Category yielding the best performance. In Section 3.2, we show more detailed results for the top performing method, Logistic Regression.
Detailed results for proposed OSISP algorithm
The ROC curve for the top performing classifier Logistic Regression (Table 2), implemented with Conditional Probabilities imputation and evaluated by 10-fold SCV, is shown in Fig. 1. Examples of undertriage and overtriage rates, based on the recommendations by ACS-COT and adding the overtriage at 1% undertriage, are shown in Table 4. The Logistic Regression model is detailed in Table 3, presenting the levels of statistical significance (p-values), OR and 95% CI for each variable.
The classification performance as a function of the number of predictor variables, derived using the feature ranking procedure, is shown in Fig. 2. In order of importance, Ejection, Entrapment, Belt use, Airbag deployment and Crash type were the five strongest predictors, and together yielded an AUC approaching that of the full feature set (Fig. 2). The classification accuracy improvement leveled off after adding around five to seven of the variables with the highest prediction power; using more variables generated relatively small improvements.
Significance of findings
The main finding is that an OSISP algorithm is capable of predicting severe injury in a US population of MVC occupants with an AUC of 0.86 (95% CI 0.82-0.90, Table 2), based only on variables deemed to be feasible to assess on the scene of crash by first responders. Only a subset of the 5-7 strongest predictors is needed to attain good performance (Fig. 2). The study shows that similar classification performance is achieved by several machine learning methods, and that the traditional method Logistic Regression appears to be a good choice for developing injury severity prediction algorithms (Table 2).
To put these findings into perspective, the proposed OSISP algorithm outperforms the triage accuracy reported for field triage protocols, and performs on par with the best AACN algorithms. Rehn et al. (2012) reported an undertriage of 19.1% at an overtriage of 71.6% in their study on 1812 patients admitted to a primary trauma center, whereof 768 had major trauma (New Injury Severity Score over 15), after introducing a new two-tiered trauma team activation protocol. As previously mentioned, Stitzel et al. (2016) showed that undertriage was 20% with an overtriage of 54% for NASS-CDS data for years 2000-2011, and Xiang et al. (2014) demonstrated undertriage of 34% in US emergency departments. In contrast, the OSISP algorithm achieves undertriage of 5% at an overtriage of 59% (Fig. 1, Table 4), indicating that the method could be used to improve the performance of field triage for US MVC occupants.
We chose to highlight the results for the Logistic Regression classifier. It showed the highest performance; however, the 95% confidence intervals overlapped with those of the other classifiers (Table 2), so there is no evidence that Logistic Regression is likely to have the best performance on a prospective dataset. However, Logistic Regression has some important advantages for an OSISP model compared to most machine learning models. The model is explainable and the result is easier to interpret than models such as ANN that do not provide model coefficients or OR for the predictor variables. This could be an advantage when introducing the model in a prehospital setting, since the prehospital staff can relate their experience to how the mathematical model works. Logistic Regression is also less complex than the advanced mathematical models underlying many other classifiers, and can potentially be less prone to overfitting than more complex models, especially when regularization techniques are employed. Due to these advantages and the results from the current study, we therefore suggest using Logistic Regression as the basis for OSISP. However, it should still be benchmarked against other machine learning methods in future studies, since it may still be outperformed by other classifiers, especially on larger and more intricate datasets where more complex models may enhance performance. Furthermore, new methods may be needed to handle the problem of unobserved heterogeneity in the MVC data, which will limit the performance of injury severity prediction.
Compared to previous studies on OSISP using data from Sweden (Candefjord et al., 2015), the algorithms developed in the current study perform better. A plausible explanation is that the two strongest predictors in the current model, Eject and Entrap (Fig. 2), were not used in the earlier studies because this data is not available in the national Swedish MVC dataset. The model developed in the present study includes both occupants in light trucks and cars, which should simplify field use compared to the earlier models that were separate for trucks and passenger cars. The trends for the odds ratios for the different variables included in the study (Table 3) largely follow the trends reported in the literature (Buendia et al., 2015; Sasser et al., 2012). A surprising finding in the present study was that crashes in a rural environment were less dangerous than urban crashes (odds ratio 0.74, p < 0.05). The finding that Logistic Regression slightly outperformed other machine learning methods is in agreement with the study by Kusano and Gabler (2014). They evaluated different injury risk classifiers for developing AACN, including Logistic Regression, Random Forest, AdaBoost, Naïve Bayes, Support Vector Machine, and k-nearest neighbors classification. They used a NASS-CDS dataset aggregating years 2002-2011 to include 16398 vehicles involved in non-rollover collisions. The best models used Logistic Regression and yielded AUC of 0.86-0.89, where the highest accuracy was obtained for models including age and sex as predictors, but this only contributed a small improvement (past AACN models have been criticized for relying on age and sex) (Kusano and Gabler, 2014).
Compared to AACN algorithms, Stitzel et al. (2016) demonstrated that their Occupant Transportation Decision Algorithm (OTDA) achieves <50% overtriage and <5% undertriage in side impacts and 6-16% undertriage in other types of crashes. They showed that this is an improvement in terms of lowered undertriage compared to the algorithms URGENCY and OnStar (Stitzel et al., 2016), developed and evaluated in previous works (Augenstein et al., 2002, 2003; Bahouth et al., 2004; Rauscher et al., 2009; Kononen et al., 2011). The OSISP algorithm performs on par with the OTDA algorithm, showing an undertriage of approximately 7% at 50% overtriage (Fig. 1; OSISP is a single model for all crash types). OSISP attains high accuracy without utilizing vehicle telemetry data. In the future, an effective way of ensuring the most effective rescue and improving field triage could be to use AACN in conjunction with OSISP. For supporting dispatch planning of rescue missions, an algorithm based on vehicle telemetry data alone could be used, such as the OTDA (Stitzel et al., 2016). For supporting transport destination decisions, OSISP could be used at the scene of crash, as additional important information can then be recognized, such as the occupant being entrapped or ejected. Currently, vehicle telemetry data is not available for most crashes. OSISP can be used for all crashes and is straightforward to implement in a handheld device to be used at the scene of crash (Olaetxea Azkarate-Askatsua, 2017), and thus has high potential for improving field triage in the near future. In the long term, models like OTDA and OSISP could also be combined into a set of algorithms utilizing the most predictive data momentarily available in a continuum from awareness of the accident until the MVC patient is delivered to the appropriate hospital, to provide a dynamic injury severity prediction supporting decisions from dispatch to field triage.
We can perform a simplified benefit analysis following the work by Stitzel et al. (2016, p. 1217 and Table 5) on a population-weighted sample of NASS-CDS data. If we choose a threshold for the OSISP algorithm such that undertriage equals 5%, as deemed acceptable by ACS-COT (American College of Surgeons Committee on Trauma (ACS-COT), 2014), the cost is an overtriage of 59% (Table 4). Assuming that the classification accuracy would be similar for the whole US population, we expect an improvement of undertriage from 20% to 5%, with some increase in overtriage (59% versus 54%). This would translate to approximately 4600 more patients with severe injury being correctly triaged and receiving more appropriate care every year, if OSISP were implemented nationwide. If lowering overtriage with the help of OSISP were prioritized by the emergency medical services, we could set a threshold producing an undertriage of e.g. 10%, i.e. halving the rate of undertriage compared to current outcomes, which would yield an overtriage of 42%. This translates to more appropriate care for approximately 3100 severely injured patients, while reducing unnecessary use of trauma center resources for 94000 patients, every year.
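The arithmetic behind these estimates can be sketched as follows. The annual totals used below are back-calculated assumptions (not stated in the text), chosen only so that the quoted figures of roughly 4600, 3100 and 94000 patients are approximately reproduced.

```python
# Hedged sketch of the simplified benefit arithmetic; totals are assumed values.
severe_per_year = 31_000        # assumed annual severely injured MVC occupants (US)
nonsevere_per_year = 780_000    # assumed annual non-severely injured occupants

# Scenario 1: undertriage 20% -> 5% (overtriage rises from 54% to 59%)
extra_correctly_triaged = (0.20 - 0.05) * severe_per_year
print(round(extra_correctly_triaged))            # ~4650, close to the quoted 4600

# Scenario 2: undertriage 20% -> 10%, overtriage 54% -> 42%
extra_correct = (0.20 - 0.10) * severe_per_year
fewer_overtriaged = (0.54 - 0.42) * nonsevere_per_year
print(round(extra_correct), round(fewer_overtriaged))   # ~3100 and ~93600
```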
Influence of variable encoding and data imputation
This study followed a rigorous protocol to assess the effects of different variable encoding and data imputation methods. Adopting some predetermined variable encoding and imputation method would have been simpler; however, such an approach is only viable if it is supported by the literature from similar studies. To our knowledge, there are no previous studies examining the effect of different data representation and imputation methods on the accuracy of machine learning for an MVC dataset. In this study, we applied two methods for encoding data and four ways to handle missing data. Independent of the imputation method and the type of learning algorithm, one hot (dummy variable) encoding consistently produced better results than label encoding. For handling missing data, the Conditional Probabilities and New Category imputation methods produced the highest AUC scores (Table 2). However, the influence of the different imputation methods was relatively small, which indicates that the classification performance is stable.
Limitations of the study
The size of the dataset in this study (n = 21589 occupants) is of the same order of magnitude as in several similar studies (Kusano and Gabler, 2014; Kononen et al., 2011). However, it is smaller than the study by Stitzel et al. (2016) (n = 115159 occupants). Unfortunately, ISS data was not available for a large proportion of the compiled NASS-CDS cases (approximately 43%), which had to be excluded. OSISP models can potentially achieve higher accuracy for larger datasets in future studies.
This study did not employ the weighting factors provided by NHTSA to account for the NASS-CDS sampling system (Radja, 2016). NASS-CDS samples events that are harmful (property damage and/or injury), and at least one vehicle needs to be towed away. The system samples a small proportion of all crashes based on a design where the country is divided into 1195 geographic areas, which are further divided into police jurisdictions. Within each jurisdiction, every week crashes are selected for investigation using a strategy that increases the probability that high severity crashes are included. The weighting factors can be used to derive estimates representative of the entire country, and assign less weight to more severe crashes (Radja, 2016). Since OSISP is mainly aimed to be used by prehospital personnel to complement the triage protocol, we reasoned that the crashes those teams will experience are likely more severe than most crashes in NASS-CDS, and that adjusting the NASS-CDS sample may make the dataset less representative for an OSISP model. Furthermore, crashes with no ISS data were excluded, and the lack of that information may not occur completely at random. There is potential bias in our models because the data sample may not accurately represent the patient population seen by prehospital personnel prospectively.
It is clear from Table 2 that all the algorithms selected for this study consistently showed good performance, except ANN. When performing the 10-fold SCV using ANN, we observed that the individual AUC scores of some of the folds were as low as 0.5, i.e. no better than random. Thus, the average of the ten folds dropped to 0.78 in two out of the four results reported for ANN in Table 2. There is never a guarantee that ANN learning will not get stuck in a local optimum, which is the most plausible explanation of the behavior of ANN in this study. Another possible reason is that we used only one hidden layer; a more complex network architecture may be needed to overcome this problem. On the other hand, one important aspect is that ANN often requires huge datasets to outperform traditional machine learning methods.
Future work
Both of the variants of Logistic Regression considered in this study, namely SGD and Ridge Regression, make use of regularization to avoid overfitting. Logistic Regression showed marginally better results than its regularized counterparts. This affirms that the performance scores for Logistic Regression are realistic and not an outcome of overfitting. Thus, selecting Logistic Regression as the OSISP algorithm is well justified in our scenario. However, the importance of Ridge Regression and SGD cannot be completely overlooked, since the differences in performance (AUC scores) are not statistically significant. An OSISP algorithm based on Ridge Regression or SGD is expected to deliver results comparable to those of Logistic Regression, or even better ones if the data available for model construction is noisy. Therefore, when designing future models using new data consisting of more variables for the problem addressed in this study, all three versions of regression presented here should be considered for developing the OSISP model. ANN may have a larger potential to improve its performance using larger and more diverse datasets, as compared to other machine learning methods. In the future, an OSISP algorithm could potentially also incorporate vehicle telemetry data and additional information from the scene of crash. Such rich information, with potentially complex patterns distinguishing severe injury, could lend itself well to the power of ANN. Therefore, we encourage the continued use of ANN for injury severity prediction.
To prove the potential benefits of OSISP for the US population, we recommend performing a prospective clinical study in collaboration with emergency medical services. Ideally, such a study would benchmark the accuracy of the field triage protocol in use against the performance of the OSISP algorithm, with the aim of validating the findings in the present retrospective study. The OSISP algorithm can be implemented in a smartphone or tablet for quick and easy recording of the included variables. A first version of a suggested design for a smartphone app has been presented in Olaetxea Azkarate-Askatsua (2017). The implementation should preferably be designed in close collaboration with medical responders.
OSISP has so far only been developed for MVC involving cars and trucks. Future studies could address models for other road users, such as motorcyclists, cyclists and pedestrians.
Conclusion
An OSISP algorithm for use in the US by first responders to predict the probability of MVC occupants being severely injured was developed, based on evaluations of several machine learning algorithms. The selected algorithm used Logistic Regression and showed high classification accuracy for differentiating severe and non-severe injury, and needed only a subset of the 5-7 strongest predictors to achieve good performance. Regression models appear to be well suited for MVC injury severity prediction. This study indicates that a simple to use OSISP tool for first responders could be utilized to improve field triage accuracy in the US, to improve care for thousands of severely injured patients every year, while reducing unnecessary use of trauma center resources for non-severely injured patients.
Financial disclosure
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Declaration of competing interest
None. | 9,469.6 | 2021-09-01T00:00:00.000 | [
"Computer Science"
] |
Theory of a Kapitza-Dirac Interferometer with Cold Trapped Atoms
We theoretically analyse a multi-mode atomic interferometer consisting of a sequence of Kapitza-Dirac (KD) pulses applied to cold atoms trapped in a harmonic trap. The pulses spatially split the atomic wave functions while the harmonic trap coherently recombines all modes by acting as a coherent spatial mirror. The phase shifts accumulated between different KD pulses are estimated by measuring the number of atoms in each output mode or by fitting the density profile. The sensitivity is rigorously calculated via the Fisher information and the Cramér-Rao lower bound. We predict, with typical experimental parameters, a temperature-independent sensitivity which, in the case of the measurement of the gravitational constant g, can significantly exceed the sensitivity of current atomic interferometers.
Introduction
The goal of interferometry is to estimate the unknown value of a phase shift. The phase shift can arise because of a difference in length between two interferometric arms, as in the first optical Michelson-Morley interferometer probing the existence of the aether, or in the LIGO and VIRGO gravitational wave detectors [1]. Phase shifts can also be the consequence of a supersonic airflow perturbing one optical path, as in the first Mach-Zehnder interferometer [2], or of inertial forces, as in the Sagnac interferometer [3]. Interferometers are among the most exquisite measurement devices and, since their first realisations, have played a central role in pushing the frontier of science.
Over the last decade, matter wave interferometers have progressively become very competitive for measuring electromagnetic or inertial forces. In particular, atom interferometers [4] [5] have been exploited to obtain the most accurate estimates of the gravitational constant [6] [7] [8] [9]. The beam splitter and mirror operations of an atom interferometer can typically be implemented in free space with a sequence of Bragg scatterings applied to a beam of cold atoms [5] [10]. Alternatively, the phase shifts can be estimated by measuring the Bloch frequency of cold atoms oscillating in vertically oriented optical lattices, which have been able to evaluate the gravitational constant g with accuracy up to Δg/g ~ 10^-7 [11] [12] [13] [14]. The sensitivity of light-pulse atom interferometry scales linearly with the space-time area enclosed by the interfering atoms. Large-momentum-transfer (LMT) beam splitters have been suggested [15] and experimentally investigated [16] [17] [18], demonstrating up to 88ℏk splitting (where ℏk is the photon momentum) [16] [18]. Relative to the 2-photon processes used in the current most sensitive light-pulse atom interferometers, LMT beam splitters in atomic fountains can provide a 44-fold increased phase shift sensitivity [16]. Further increases of the momentum differences between the interferometer paths are limited by the cloud's transverse momentum width, since high-efficiency beam splitting and mirror processes require a narrow distribution [19].
As an alternative to the atomic fountains, where the atoms follow ballistic trajectories, the interferometric operations can be implemented with trapped clouds [20] [21] [22]. We have recently proposed [23] a multi-mode interferometer with harmonically confined atoms where multiple beam-splitter and mirror operations are realized with Kapitza-Dirac (KD) pulses, namely, the impulsive application of an off-resonant standing optical wave. With KD pulses applied to atoms in a harmonic trap, it is possible to reach large spatial separations between the interferometric modes while avoiding the atom losses and defocusing occurring in Bragg processes (mostly due to the constraint of narrow momentum widths). In [23], the role of mirrors is played by the harmonic trap, which coherently drives and recombines a tunable number of spatially addressable atomic beams created by the KD pulses. The phase estimation sensitivity linearly increases with the number of beams and their spatial distance. The number of beams is proportional to the strength of the applied KD pulse while their distance is proportional to the ratio between the harmonic trap length and the wavelength of the optical wave. In this manuscript we discuss in detail the theory of the multi-mode KD interferometer which was introduced in [23].
Multi-Mode Kapitza-Dirac Interferometer
The initial configuration of the interferometer is provided by a cloud of cold atoms trapped in a harmonic potential V_h(x). The interferometric sequence is realised in four steps, see Figure 1: i) Beam splitter: A KD pulse is applied to the atomic cloud at the time t_0. The KD pulse creates a number of spatially addressable atomic wave packets that evolve along different paths under the harmonic confinement and are recombined by the trap after a time τ = π/ω.
ii) Phase shift: Each spatial mode gains a phase shift θ with respect to its neighbouring modes due to the action of an external potential.
iii) Beam splitter: the harmonic trap coherently recombines the wave packets and a second KD pulse is applied to again mix and separate the modes along different paths. iv) Measurement: The phase shift is estimated by fitting the atomic density profile or by counting the number of atoms in each spatial mode at the final time t_f. The measurement can be done after ballistic expansion, optimising the spatial separation of the modes and the atom-counting signal-to-noise ratio.
The sequences i)-iii) can be iterated an arbitrary number of times n before the final measurement iv).
The plan of the paper is as follows. In Section 2, we present a detailed description of the multi-mode KD interferometer. As an application, we calculate the Fisher information and the Cramér-Rao lower bound sensitivity [24] of the interferometric measurement of the gravitational constant g in Section 3. We predict sensitivities up to in configurations realisable within the current state of the art, and in Section 4 we compare the performance of different atomic interferometers. In Section 5 we discuss two possible sources of noise, and we finally summarise the results in Section 6.
Dynamics
Let us consider first a single atom described by a wave packet ψ_0(x). The time evolution of the state in the harmonic trap is given by ψ(x, t) = ∫ K(x, t; y, t_0) ψ_0(y) dy, where K(x, t; y, t_0) is the quantum propagator of the harmonic oscillator [25], with σ_h = √(ℏ/mω) the harmonic oscillator length. The KD beam splitter is realized with an impulsive application of a periodic potential of strength V_0 (in units of the atomic recoil energy E_r) and wave vector k = 4π/λ. In the Raman-Nath limit [10] [26] [27], the duration of the pulse is short enough not to affect the atomic density but only to change the phase of the initial wave function, where we have used the Bessel generating function e^{iz cos θ} = Σ_l i^l J_l(z) e^{ilθ} [28]. The Raman-Nath limit has been experimentally demonstrated in [21] [29]. Equation (3) shows that the KD beam splitter creates a set of spatially addressable momentum modes. Furthermore, in the presence of an external field, each spatial mode created by the KD beam splitter gains during the time τ a phase shift θ with respect to its neighbouring modes. Right before the application of a second KD pulse, at time τ^-, each mode l therefore carries an accumulated phase factor e^{ilθ}.
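A small numerical sketch of the Raman-Nath phase imprint just described: applying exp(iV_0 cos(kx)) populates the momentum modes lℏk with amplitudes i^l J_l(V_0), so the mode populations are |J_l(V_0)|^2. The pulse strength below is illustrative, not a value from the paper.

```python
# Sketch: Kapitza-Dirac mode populations from the Bessel (Jacobi-Anger) expansion.
import numpy as np
from scipy.special import jv

V0 = 2.0                                   # pulse strength (illustrative)
modes = np.arange(-6, 7)
populations = jv(modes, V0) ** 2           # population of momentum mode l*hbar*k
print(dict(zip(modes.tolist(), populations.round(3))))
print("total population:", populations.sum().round(4))   # -> close to 1
```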
After iterating the sequence of KD pulses and phase-shift accumulations a number of times n, the wave function takes an analogous form, where the integer part of n/2 enters the mode labels and the case of odd n is treated separately. The wave packets gain their maximum spatial separation after a further τ/2 evolution in the harmonic trap.
Eventually, the wave function at the final time t_f is given by Equation (8). In the limit of zero overlap between the various wave packets in Equation (8), the density function at the measurement time reduces to the form of Equation (12).
Equation (12) shows that the number of populated momentum modes grows with the number n of applications of the KD pulses. This can of course be helpful if only weak KD pulses can be experimentally implemented.
In the limit of a large number p ≫ 1 of independent interferometric measurements, the phase estimation sensitivity saturates the Cramér-Rao lower bound [30], Δθ ≥ 1/√(pNF), where N is the number of uncorrelated atoms and F denotes the Fisher information calculated from the particle density at the measurement time, F = ∫ dx [∂_θ P(x|θ)]^2 / P(x|θ) (Equation (14)). With Equation (12), Equation (14) can be evaluated explicitly (see Appendix), leading to Equations (16)-(17). Notice that even in the case of an odd value of n, with n ≫ 1, S(θ, n) → 1. Therefore, for an even n, or an odd n ≫ 1, the phase estimation uncertainty of our interferometer is given by Equation (18), which can also be rewritten in terms of the total number of modes. As expected on general grounds from the theory of multimode interferometry [31], the sensitivity scales linearly with the number of momentum modes which have been significantly populated by the KD beam splitters. The populations of higher diffraction orders vanish exponentially [10].
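To illustrate the Cramér-Rao machinery quoted above, the sketch below evaluates the Fisher information F = ∫ dx (∂_θ P)^2 / P numerically for a toy two-mode density with θ-dependent weights, and the resulting bound Δθ ≥ 1/√(pNF). The toy density and parameter values are illustrative, not the interferometer output of Equation (12).

```python
# Sketch: numerical Fisher information and Cramer-Rao bound for a toy density.
import numpy as np

def density(x, theta, d=5.0, sigma=0.5):
    # toy density: two Gaussian modes at +/- d with theta-dependent weights
    w = np.cos(theta / 2) ** 2
    g = lambda mu: np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)
    return w * g(-d) + (1.0 - w) * g(+d)

x = np.linspace(-15, 15, 20001)
dx = x[1] - x[0]
theta, eps = 1.0, 1e-5
P = density(x, theta)
dP = (density(x, theta + eps) - density(x, theta - eps)) / (2 * eps)
F = np.sum(dP ** 2 / P) * dx           # Fisher information per atom

p, N = 100, 1e5                         # independent repetitions and atom number
delta_theta = 1.0 / np.sqrt(p * N * F)  # Cramer-Rao lower bound on the phase
print(F, delta_theta)
```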
We remark here on the important condition of non-overlap of the wave packets corresponding to the different momentum modes at the time of measurement, Equation (11). A further interesting point is that Equation (18) is independent of the temperature of the atoms as long as their de Broglie wavelength remains larger than the internal spatial separation of the periodic potential creating the Kapitza-Dirac pulse. We will show this in the following Sections by considering, as a specific application, the interferometric estimation of the gravitational constant.
Estimation of the Gravitational Acceleration Constant g
We now apply the KD interferometer theory to the estimation of the gravitational constant g.
The evolution of the initial state ψ_0(x) is influenced by the combined action of the harmonic confinement V_h(x), the gravitational field mgx and the KD beam splitters.
The goal is to estimate the value of the acceleration constant g. As explained in the previous Section, the phase shift θ arises from the external gravitational field acting during the phase accumulation period (until nτ). We may engineer our Hamiltonian to switch the gravity on/off after the first beam splitter by modifying the frequency of the harmonic trap, where ω and ω' are the trap frequencies before and after the KD pulse, respectively. We finally generalize our results by considering an atomic gas in thermal equilibrium at a finite temperature T > 0.
To take into account the effect of the gravitational force on the dynamical evolution of the trapped atom states, we need to include in the free propagator K(x, t; y, t_0) of Equation (2) the linear gravitational field [25]. After the application of the first KD pulse, the states are coherently driven by the harmonic trap and the external gravitational field. At the time t = τ, the spatial modes created by the KD pulse are recombined, since the quantum propagator in the presence of the gravity field reduces to a simple closed form. As expected, each spatial mode gains a phase shift 2d with respect to its neighbouring modes at time τ due to the action of the external gravity field after the first KD pulse. A straightforward (slightly tedious) calculation provides the wave function after n iterations, Equation (22), with separate expressions for even and odd n. The last KD pulse is applied to the wave function of Equation (22) at time nτ, to mix and therefore spatially separate the modes for the final density profile measurement.
Firstly, we consider the case without the gravity field; the wave function at the measurement time is given by Equation (24), with separate expressions for even and odd n in terms of the coefficients defined by Equation (9). In the presence of gravity, the corresponding wave function, Equation (27), can be expressed in terms of the quantities defined by Equation (24). Apart from the phase difference between Equation (24) and Equation (27), a constant shift d appears in the centre position of each sub-wave packet, induced by the gravity field. Under the "no-overlap" condition (Equation (11)), which is satisfied when the width of the initial wave packet is much larger than the interwell distance of the KD optical lattice (σ_0 ≫ λ), the final density functions obtained from Equation (24), or from Equations (27), (12) and (28), show that the information on the estimated values of θ and g is mainly (or entirely) contained in the weights of the modes, depending on the final evolution during the measurement period. A small part of the information is contained in the centres of the sub-wave packets for half-gravity evolution (Equation (29)).
We now consider an atomic gas at finite temperature T. To get some simple insight into the physics of the problem, we consider the system as made of a swarm of minimum-uncertainty Gaussian wave packets. Each wave packet evolves driven by the propagators calculated in the previous Section, acquiring a phase ϕ which is non-zero for odd n and ϕ = 0 for even n. Substituting into Equation (31), we find that the density distribution at the output of the interferometer is given by Equation (37), up to a normalization constant.
Notice that the value of the gravitational constant g is only contained in the weights of the modes.
The requirement is that the sub-wave packets in Equation (37) are spatially separated. Considering Equation (35), this means that the initial wave packet width (the thermal de Broglie wavelength) should be much larger than the internal distance of the KD potential, consistent with Equation (11). The important result is that, as long as this condition is satisfied, the sensitivity does not depend on the temperature. Substituting the density function of Equation (37) at the measurement time t_f into the Fisher information, Equation (14), we obtain Equation (42). The Fisher information for our system depends on the temperature, the initial density profile, the interferometer transformation, and the choice of the observable that, here, is the spatial position of the atoms. In this case, the estimator can simply be a fit of the final density profile. However, the same results would be obtained by choosing as observable the number of particles in each Gaussian spatial mode. Since the initial state is made of uncorrelated atoms, there is no need to measure correlations between the modes in order to saturate the Cramér-Rao lower bound, Equation (13), at the optimal value of the phase shift.
Before proceeding to discuss the finite temperature case, we calculate the highest sensitivity of the unbiased estimation of the parameter g, which is guaranteed by the no-overlap condition λ_dB/λ ≫ 1.
In the limit T = 0, the Fisher information can be calculated analytically, Equation (43), which can be rewritten in a more compact form. If the gravity field is switched on during the last KD pulse, the density profile at the final time is described by Equation (38). In this case, there is a further contribution to the Fisher information of Equation (42) from the shift of the centres of the sub-wave packets.
Sensitivity
We now estimate the expected sensitivity under realistic experimental conditions. We consider 10^5 88-Sr atoms trapped in a harmonic trap. Under these conditions, the maximum length spanned by the 88-Sr atoms is also increased, from L to n × L, see the black lines in Figure 3. In practice the sensitivity is limited by the effective length of the harmonic confinement. With current technologies using magnetic traps, the largest spatial separation L could be pushed up to a few millimeters. Since the thermal de Broglie wavelength decreases with increasing temperature, the no-overlap condition of Equation (11) breaks down at a crossover temperature T_0. In Figure 2, we plot the normalised sensitivity as a function of the temperature. A temperature-independent sensitivity is found for various numbers of KD pulses. Once the temperature is increased up to the crossover value T_0, the sensitivity is drastically reduced, see Figure 2.
When T < T_0, the wave packets are spatially addressable (see the dark and blue lines in Figure 3). When T > T_0, the distinguishability of the wave packets decreases (red lines in Figure 3) and the uncertainty in the phase estimation increases. As a comparison with current atom interferometers, we calculate the sensitivity obtained from a simple interference pattern observed after a free expansion of an initial atomic cloud, relevant, for instance, when measuring the gravitational constant g using Bloch oscillations [12] [13] [14]. As shown in [32], the momentum distribution is expressed by Equation (46), where λ is the wavelength of the laser, A is a normalization factor, j denotes the lattice site and θ is the phase difference between lattice sites. Owing to the finite size of the initial cold atomic cloud, only a finite number of terms in Equation (46) contribute to the sum. We therefore obtain Equation (47), where 2M′ + 1 is the maximum number of lattice sites occupied by the initial atomic gas; in Equation (46), each point has a Gaussian momentum distribution. Considering the experimental situations in [12] [13] [14], the interaction time t_int of neighbouring cold atoms under the gravity-like force can be approximated by the tunnelling time, t_int ~ 1-3 s in [13]. With the Cramér-Rao lower bound Equation (13), and estimating the maximum number of occupied lattice sites from the cloud size, we obtain the corresponding sensitivity. Compared with the sensitivity for a single Kapitza-Dirac pulse, Equation (51), we can reach a sensitivity more than 3 orders of magnitude better than the sensitivity obtained from an interference pattern. The reason is that the KD pulses create several wave packets spanning a distance that can be considerably larger than the typical distances between the wave packets created in far-field expansion measurements. In this case, the theoretical gain provided by Equation (43) can be ~10^3 with a single KD pulse and typical values of the experimental parameters. A further advantage is that such high-sensitivity interferometry can be realised with a compact experimental setup.
Noise and Decoherence
We now consider the effects of noise and imperfections on the sensitivity of the interferometer. We mainly consider two kinds of perturbations that may arise in an experimental realization of the interferometry. The first is the effect of anharmonicity, described by a position-dependent random perturbation, and the second is the effect of a shift in position between different sequences of the KD pulses.
The effect of anharmonicity is investigated by numerically simulating the interferometric sequences with the harmonic potential supplemented by a random perturbation V_R(x). We take as length unit the harmonic trap length σ_h and as time unit the inverse of the trap frequency 1/ω. The strength of the external gravity-like potential is described by a dimensionless parameter α, so that V_e(x) = αx. To simplify the simulation, in the following we consider only a single KD pulse.
Starting with the ground state of the harmonic trap V_h(x), the time-dependent wave functions can be found by the operator splitting method [33]. Due to the perturbation potential V_R(x), the sub-wave packets are driven back to their initial position with an incoherent phase at t = τ, and the total density profile can be dramatically degraded. It is interesting to note that the KD pulses still do a quite good job and that completely spatially separated wave packets with momentum lk can be found at the measurement time t_f, see Figure 4. When increasing N_1, the visibility of the wave packets decreases compared with the ideal case (black line, without V_R(x)). This has an impact on the sensitivity, which can be evaluated using the properties of the Bessel generating function [28].
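A minimal sketch of the operator-splitting (split-step Fourier) propagation mentioned above, in dimensionless harmonic-oscillator units, is given below. The grid size, time step and linear tilt strength α are illustrative assumptions, and the random perturbation V_R(x) used in the paper is omitted for brevity (it could simply be added to V).

```python
import numpy as np

# Split-operator propagation of psi(x, t) under H = p^2/2 + V(x)
# in dimensionless units (length sigma_h, time 1/omega, hbar = m = omega = 1).
N, L = 2048, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)        # angular wavenumbers

alpha = 0.05                                   # gravity-like tilt (illustrative value)
V = 0.5 * x**2 + alpha * x                     # harmonic trap plus linear perturbation

psi = np.exp(-x**2 / 2) / np.pi**0.25          # ground state of the bare trap
dt, steps = 1e-3, 5000                         # total evolution time = 5 (units of 1/omega)

expV = np.exp(-0.5j * dt * V)                  # half-step in position space
expK = np.exp(-0.5j * dt * k**2)               # full kinetic step in momentum space
for _ in range(steps):
    psi = expV * psi
    psi = np.fft.ifft(expK * np.fft.fft(psi))
    psi = expV * psi

density = np.abs(psi)**2
print("norm =", np.trapz(density, x))          # should remain ~1
```

The Strang splitting used here is second-order accurate in the time step, which is usually sufficient for smooth trapping potentials of this kind.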
Equation (55) shows that the effect of an off-center shift produces only a phase shift for each sub-wave packet. Therefore, the non-overlap condition Equation (11) is not modified even after considering the off-center shift. In this case, the final density profile is given by Equation (56), which shows that the center shifts could induce a fluctuation in the estimated phase.
Conclusion
During the last few decades, matter-wave interferometry has been successfully extended The second step uses the "no-overlap" condition by changing x to x′ in Equation (11).
Figure 1. (color online) Multimode Kapitza-Dirac interferometer. The first Kapitza-Dirac pulse at t = 0 creates several modes consisting of atomic wave packets evolving under the harmonic confinement and an external perturbing field; the n-th Kapitza-Dirac pulse at t = nτ mixes the modes, with different behaviour for odd and even n (Equation (10)). The initial wave packet width σ0 equals the thermal de Broglie wavelength, while the initial average coordinate x0 and momentum p0 are distributed according to the Maxwell-Boltzmann distribution (Equation (34)); the density profile at the final time reduces to a sum of weighted Gaussians. A single pulse creates ~9 modes, providing the quoted sensitivity with a single measurement shot and a phase accumulation time of 0.1 s, increasing to 0.1 n s after n pulses.
Figure 2. (color online) Normalized phase estimation sensitivity as a function of the temperature for even and odd n.
Figure 3. (color online) Density profiles of the output wave function of Figure 2. The dark, blue and red lines show temperatures below, equal to and above the crossover temperature T_0.
Using N_1 groups of random numbers, we generate N_1 densities at the measurement time. Then, using Equation (53) to obtain the derivative, the resulting sensitivities are presented in Figure 5. Generally speaking, a strong perturbation of the harmonic potential decreases the sensitivity dramatically, see Figure 5(b), while for a perturbation strength V_R below 0.1 of the trap potential it is still possible to obtain a sensitivity comparable with the ideal case. A shift of the optical lattice with respect to the harmonic trap V_h(x) is a further possible reason for a decreased sensitivity, assuming an off-center shift between two consecutive KD pulses.
scheme to the estimation of the gravitational constant and estimate, with realistic experimental parameters, a sensitivity of 10^−9, significantly exceeding the sensitivity of current interferometric protocols. | 5,402.6 | 2016-11-04T00:00:00.000 | [
"Physics"
] |
First-Principles Study on the Stabilities, Electronic and Optical Properties of GexSn1-xSe Alloys
We systematically study, by using first-principles calculations, the stabilities, electronic properties, and optical properties of the GexSn1-xSe alloy made of SnSe and GeSe monolayers with different Ge concentrations x = 0.0, 0.25, 0.5, 0.75, and 1.0. Our results show that the critical solubility temperature of the alloy is around 580 K. With increasing Ge concentration, the band gap of the alloy increases nonlinearly and ranges from 0.92 to 1.13 eV at the PBE level and 1.39 to 1.59 eV at the HSE06 level. When the Ge concentration x is more than 0.5, the alloy changes into a direct-bandgap semiconductor; the band gap ranges from 1.06 to 1.13 eV at the PBE level and 1.50 to 1.59 eV at the HSE06 level, which falls within the range of the optimum band gap for solar cells. Further optical calculations verify that, through alloying, the optical properties can be improved by subtly controlling the compositions. Since GexSn1-xSe alloys with different compositions have been successfully fabricated in experiments, we hope these insights will contribute to future applications in optoelectronics.
Introduction
Since the emergence of graphene [1], two-dimensional (2D) materials have attracted intense attention in the scientific community due to the richness of their physical properties [2]. It is known that although graphene has high conductivities, its zero band gap greatly restricts its application in the semiconductor industry [3]. As important supplements to graphene [4,5], 2D layered materials such as transition-metal dichalcogenides (TMDs) [6], black phosphorene (BP) [7], hexagonal boron nitride (h-BN) [8], metal carbides and carbonitrides (MXenes) [9], and monoelemental arsenene, antimonene [10], bismuthene [11], silicene [12], germanene [13], tellurene [14], etc., have been experimentally manufactured or theoretically predicted, and exhibit unique electronic and optical properties for broad applications at the nanoscale. Versatile and complementary properties of these 2D materials can meet a large variety of requirements for potential applications. A typical example is MoS2 [15][16][17], which has attracted tremendous attention among the TMD materials. While it has a large intrinsic band gap of 1.8 eV [18], the reported mobilities are only in the range of 0.5-3 cm2 V−1 s−1, which is too low for practical application in electronic devices [4]. Recently, the booming 2D material BP seems to bridge the gap between graphene and MoS2 because it has a tunable direct band gap from 0.3 eV in the bulk to 1.5 eV in the monolayer [19]. The resulting high theoretical mobility [7,19], excellent near-infrared properties [20] as well as high photoelectric conversion capacity [21] endow it with wide potential applications in the field of electronic and optical devices. However, the stability of phosphorene in air and water needs to be enhanced. Thus, it seems that single-component materials always have some disadvantages that greatly affect their widespread application.
Alloyed 2D semiconductors can display compositionally tunable properties, which are distinct from both their bulk alloys and their binary end-members. Through forming heterojunctions, the limitations of single-component 2D materials are expected to be overcome. Abundant research on different kinds of heterostructures, such as van der Waals (vdW) heterostructures, lateral (in-plane) heterostructures and solid-solution heterostructures, has confirmed that the properties of single materials can be tailored by mixing with second components, which provides a feasible way to improve the electronic, optoelectronic, as well as the catalytic properties of nanostructures [19,[22][23][24][25]. For example, the band gap of graphene is considerably opened when a vdW heterostructure of graphene/g-C3N4 bilayer is formed [26]. Likewise, through forming a MoS2-WS2 heterostructure, the carrier mobility is greatly enhanced to 65 cm2 V−1 s−1 [27]. Moreover, a facile and general method to passivate thin BP flakes with large-area high-quality monolayer h-BN sheets grown by the chemical vapor deposition (CVD) method was developed to preserve atomic layered BP flakes from degradation [28].
Recently, our group [29] has studied the electronic and optical properties of the SnSe2xS2(1-x) anion alloy through first-principles calculations. It was found that the band gap is not confined to a certain value, but varies in the range between the band gap of SnSe2 and that of SnS2, depending on the ratio of SnSe2 and SnS2. The absorption strength is enhanced in the visible spectral region after alloying. Moreover, the alloys are predicted to be stable and should be favorably fabricated, according to the calculated phase diagram. Shortly after that, Wang et al. [30] experimentally prepared 2D SnSe2(1-x)S2x alloys with five different S compositions (x = 0, 0.25, 0.5, 0.75, and 1) by the chemical vapor transport (CVT) method. Different from the independent SnSe2 or SnS2 monolayer, the carrier mobility of the SnSeS field-effect transistor can be notably increased by light illumination with a 532 nm laser, indicative of the potential application as a phototransistor. Besides, different cation alloys, such as Mo1-xWxS2, Mo1-xWxSe2, etc., have also been successfully synthesized and exhibit different functions with respect to their end-members [31][32][33]. Therefore, novel properties can be obtained by alloying different 2D materials.
Due to their high stability, earth abundance and environmental sustainability, 2D group IVA monochalcogenides (MXs), i.e., GeS, SnS, GeSe and SnSe [34][35][36][37], which are isostructural to the black phase of phosphorene, have attracted intense attention recently. Experimentally, solid solutions can be formed with complete solid solubility among these MXs because of their structural similarity. Jannise et al. [38] have synthesized ternary SnxGe1−xSe nanocrystals with adjustable composition over the entire alloy range (0 ≤ x ≤ 1). Compositional tuning of the lattice parameters, band gaps, and morphologies has been demonstrated and an alloy formation mechanism was thereby proposed. Moreover, Fu et al. [39] have successfully prepared Ge-doped SnSe polycrystals by zone-melting combined with hot-pressing methods, and found that Ge is not an ideal dopant for optimizing the thermoelectric properties of SnSe. Although experimental investigations on the composite system made of GeSe and SnSe have been successfully carried out, as far as we know, theoretical investigations related to this system have not been reported yet.
In this contribution, stabilities, electronic structures and optical properties of single-layer GexSn1-xSe alloys with different Ge concentrations are systematically examined on the basis of density functional calculations. The soluble temperature of the alloy, the variations of the electronic and optical properties, as well as the underlying reasons are given. The potential application in optoelectronics is proposed.
Computational Details
First-principles calculations were carried out on the basis of density functional theory (DFT), as implemented in the Vienna Ab-initio Simulation Package (VASP) program [40,41]. The electron-ion interactions are described by the projector augmented-wave (PAW) method [42]. The Perdew−Burke−Ernzerhof (PBE) functional [43,44] in the generalized gradient approximation (GGA) was used to treat the electron exchange-correlation interactions unless stated otherwise, and a 450 eV energy cutoff for the plane-wave basis set was used. The vacuum height is set to 20 Å along the x-direction to avoid interactions between periodic repeating units. The convergence thresholds for energy and force were set to 10−5 eV and 0.02 eV/Å, respectively. The Brillouin zone was represented by a Monkhorst−Pack [45] special k-point mesh of 1 × 3 × 3 for geometry optimizations, whereas a larger grid of 1 × 5 × 5 was used for band structure computations.
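For readers who want to reproduce settings of this kind, a minimal sketch using the ASE interface to VASP is shown below. The keyword names follow ASE's Vasp calculator, the structure file name is hypothetical, and this is an illustrative set-up rather than the authors' actual input.

```python
from ase.io import read
from ase.calculators.vasp import Vasp

# Illustrative set-up mirroring the settings stated in the text:
# PBE, 450 eV cutoff, 1e-5 eV and 0.02 eV/Angstrom convergence criteria,
# 1 x 3 x 3 Monkhorst-Pack grid for relaxation. File name is hypothetical.
slab = read("GeSnSe_SQS_x0.5.vasp")

calc = Vasp(
    xc="pbe",          # Perdew-Burke-Ernzerhof functional
    encut=450,         # plane-wave cutoff in eV
    ediff=1e-5,        # electronic convergence, eV
    ediffg=-0.02,      # ionic convergence, eV/Angstrom (negative = force criterion)
    ibrion=2,          # conjugate-gradient relaxation
    isif=2,            # relax ionic positions; in-plane lattice vectors optimized separately
    kpts=(1, 3, 3),    # Monkhorst-Pack mesh; use (1, 5, 5) for band structures
    gamma=False,
)
slab.calc = calc
# slab.get_potential_energy()   # would launch the VASP relaxation
```

The band-structure step would reuse the same calculator with the denser 1 × 5 × 5 grid and the charge density from the relaxed geometry.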
Structures and Stabilities of GexSn1-xSe Alloys
To best approximate the random solid solution, special quasi-random structures (SQSs) with five different Ge compositions (x = 0, 0.25, 0.5, 0.75, and 1) of the GexSn1-xSe alloys [46] were generated using the tool in the alloy theoretic automated toolkit (ATAT) [47]. Lattice parameters of each structure were then optimized, and a linear dependence on the composition is found, which is known as Vegard's law [48]. In Figure 1, three typical alloys with the Ge concentrations of 0.25, 0.50 and 0.75 are illustrated. Note that phase stability of the alloy is critical to the possibility of alloy formation. To verify the structural stabilities of these alloys, a thermodynamic calculation was performed as previously proposed [29].
First, the mixing enthalpy ∆Hm(x) is calculated by subtracting the energy sum of pure GeSe (A) and SnSe (B) from the total energy of the GexSn1-xSe monolayer, that is, ∆Hm(x) = E(GexSn1-xSe) − xE(GeSe) − (1 − x)E(SnSe). In Figure 2a, the calculated mixing enthalpies as a function of Ge concentration are shown. It is found that the binary alloys always have positive mixing enthalpies, showing a phase separation tendency at low temperatures. Considering the contribution of the mixing entropy, the solubility of the alloy would be enhanced by increasing the temperature.
Here, the mixing entropy is calculated from the classical textbook formula [49], ∆Sm(x) = −R[x ln x + (1 − x) ln(1 − x)], and the free energy of mixing is ∆Fm(x, T) = ∆Hm(x) − T∆Sm(x). To obtain the critical temperature of alloy mutual solubility, a second-order polynomial based on the quasi-chemical model is used to describe the relationship between the mixing enthalpy and the Ge concentration, ∆Hm(x) = Ωx(1 − x), where Ω is the interaction parameter dependent on the material. Accordingly, the mixing Helmholtz free energy can be rewritten as ∆Fm(x, T) = Ωx(1 − x) + RT[x ln x + (1 − x) ln(1 − x)], in which, at a given temperature, the value of ∆Fm relies only on the Ge concentration x. Therefore, the binodal solubility curve and spinodal decomposition curve can be simply obtained from the conditions ∂∆Fm/∂x = 0 and ∂²∆Fm/∂x² = 0, respectively.
The obtained binary phase diagram is shown in Figure 2b, where the binodal and spinodal curves meet at x = 0.5 at a critical temperature Tc = Ω/2R. The symmetrical feature of the curve is consistent with the systems of Mo1-xTxS2 (T = W, Cr, and V) [17] and SnSe2(1−x)S2x [29]. The calculated critical temperature is 580 K, which is an easily realizable temperature in the laboratory, indicating that this kind of alloy can be favorably prepared. Note that if we count in the entropy contributed by lattice vibrations, the solubility may be underestimated. In fact, SnxGe1-xSe alloys have been successfully fabricated by Buckley et al., who heated precursor solutions to 500 K and held them for 4.75 h with stirring [38].
As indicated in Figure 2b, when the temperature exceeds the critical temperature of 580 K, the alloys become thermodynamically stable over the whole composition range. In contrast, when the temperature is lower than 580 K, for example 500 K, the alloys can be formed only when the Ge concentration is in the ranges 0 < x < x1 and x2 < x < 1. In other words, when the Ge concentration is within x1 < x < x2, the alloy becomes unstable and decomposes into the x1 and x2 phases.
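The regular-solution analysis above can be condensed into a short script: fit Ω from ∆Hm(x) = Ωx(1 − x), build ∆Fm(x, T), and read off Tc = Ω/2R together with the spinodal. The mixing-enthalpy values below are placeholders chosen so that the fit returns roughly the reported 580 K; they are not the paper's DFT numbers.

```python
import numpy as np

R = 8.314  # J / (mol K)

# Hypothetical mixing enthalpies (J/mol) at x = 0, 0.25, 0.5, 0.75, 1,
# standing in for the DFT values behind Figure 2a.
x_data = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
dHm    = np.array([0.0, 1800.0, 2400.0, 1800.0, 0.0])

# Fit the interaction parameter in dHm(x) = Omega * x * (1 - x)
basis = x_data * (1 - x_data)
Omega = np.sum(basis * dHm) / np.sum(basis**2)   # one-parameter least squares
Tc = Omega / (2 * R)                             # critical temperature of the miscibility gap
print(f"Omega = {Omega:.0f} J/mol,  Tc = {Tc:.0f} K")

def dFm(x, T):
    """Mixing Helmholtz free energy of the regular-solution model."""
    S = -R * (x * np.log(x) + (1 - x) * np.log(1 - x))
    return Omega * x * (1 - x) - T * S

print("dFm(x=0.5, T=300 K) =", round(dFm(0.5, 300.0), 1), "J/mol")

# Spinodal: d2F/dx2 = 0  ->  T_spinodal(x) = 2 * Omega * x * (1 - x) / R
xs = np.linspace(0.02, 0.98, 97)
T_spinodal = 2 * Omega * xs * (1 - xs) / R
print("top of spinodal curve:", round(T_spinodal.max(), 1), "K  (equals Tc at x = 0.5)")
```

Because the quasi-chemical ∆Hm is symmetric about x = 0.5, the binodal and spinodal necessarily touch there, which is why the phase diagram in Figure 2b has its apex at the equimolar composition.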
On the other hand, the growth of GexSn1-xSe alloys is believed to proceed by a cation exchange mechanism [38], which means that nucleation begins as SnSe (or a tin-rich selenide) and Ge is gradually incorporated over the growth period. The thermodynamic driving force for the exchange can be evaluated from the substitution energy, defined as Es = E(doped) − E(pure) − µGe + µSn, where E(doped) and E(pure) are the total energies of the Ge-doped and pure SnSe supercells, and µGe and µSn represent the chemical potentials of the Ge and Sn atoms, respectively. From this definition, the more negative Es is, the easier the alloy formation. The calculated substitution energies are listed in Table 1. These values are all considerably negative, suggesting that the Ge-Sn exchange mechanism accounts for the formation of the alloys.
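A small helper illustrating the substitution-energy bookkeeping described above is sketched below; the energies and chemical potentials are placeholder values, not results from the paper.

```python
def substitution_energy(E_doped, E_pure, mu_Ge, mu_Sn):
    """E_s = E(doped) - E(pure) - mu_Ge + mu_Sn: energy cost of swapping one Sn for one Ge.
    Negative values favour Ge incorporation via cation exchange."""
    return E_doped - E_pure - mu_Ge + mu_Sn

# Placeholder numbers (eV), purely illustrative:
print(substitution_energy(E_doped=-120.90, E_pure=-119.80, mu_Ge=-4.50, mu_Sn=-3.85))
```

In practice the chemical potentials would be taken from the elemental reference phases (bulk Ge and Sn) or from growth-condition-dependent reservoirs, which shifts Es but not the sign of the trend discussed here.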
Electronic Properties of GexSn1-xSe Alloys
Next, electronic properties of the alloys are investigated as the Ge concentration x is varied. As shown in Figure 3a, the band gaps of SnSe and GeSe are calculated to be 0.92 (indirect) and 1.13 eV (direct), respectively, which are well consistent with results calculated at the same level, e.g., 0.96 and 1.18 eV [50]. It is observed that as the composition of the GexSn1-xSe alloy becomes more Ge rich, the band gap gradually increases. Interestingly, this increase is approximately linear from x = 0 to x = 0.75, but displays a notable bending at x = 1. To analyze the reason for this observation, the band edge positions referenced to the vacuum energy level of the alloys are given in Figure 3b. It can be seen that the position of the conduction band minimum (CBM) is almost invariant, varying only slightly and linearly, while that of the valence band maximum (VBM) decreases less linearly, especially at x = 1. Examining the partial charge densities of the alloys with selected Ge concentrations shown in Figure 4, one can see that the densities at the CBM are always delocalized and evenly distributed. However, the densities at the VBM are dominantly distributed around the Se atoms, with a small distribution on the Sn/Ge atoms, which becomes thinner as the Ge concentration increases.
Moreover, from Figure 3b, the increase of the band gap with the Ge concentration is due to the decline of the VBM edge position. To get further insight into this variation, the atom-projected band structures (atom contributions are indicated by different colors) and the density of states (DOS) are illustrated in Figure 5. As is seen, the atom contribution to the CBM changes from Sn to Ge, while the VBM is contributed by the hybridization of both Sn/Ge and Se atoms. This situation is different from the transition-metal alloy systems, in which the VBM states are uniformly distributed among W and Mo d-orbitals [31]. Scrutinizing the DOS, for the CBM, the main contribution in SnSe is from the Sn p-orbital; when the Ge concentration reaches 0.75, the main contribution changes to the Ge p-orbital. While the CBM states experience a large change of atom contribution, their energy distribution is nearly unchanged, because the formed anti-bonding π* orbital suffers little energy disturbance as the orbital composition changes. In contrast, the VBM contribution appears to come from the Se p-orbital all the time, along with a considerable contribution from the hybridization of the s- and p-orbitals of Sn/Ge. As the electronegativity difference between Ge (2.01) and Se (2.55) is smaller than that between Sn (1.96) and Se, the resulting weaker bonding at the VBM with increasing Ge concentration gives rise to an uplift of the VBM energy.
Another interesting finding is that an indirect-to-direct bandgap crossover occurs at the Ge concentration of 0.5. As is seen, SnSe is found to have an indirect bandgap, but when the Ge concentration increases to 0.5, the alloy turns into a direct-bandgap semiconductor like the GeSe monolayer. While experimental investigations focusing on mono- and few-layer SnSe and GeSe are scarce, the indirect/direct bandgap feature of SnSe/GeSe monolayers has been validated by previously reported theoretical results [50][51][52]. It is known that 2D materials with a direct bandgap are urgently needed, as electrons can be directly excited from the VBM to the CBM by incident light with an appropriate frequency without the aid of phonons, which leads to potential applications in optical devices. Moreover, our Heyd-Scuseria-Ernzerhof (HSE06) [53] calculations give band gaps of 1.39 and 1.59 eV for the SnSe and GeSe monolayers, respectively, which are larger than those calculated at the PBE level, whereas the band structure topologies are almost the same, with the main difference being an upshift of the CBM states. Note that the HSE06 functional is used here only to obtain more accurate band gaps, as the predicted band gaps of GeSe and SnSe bulks are very close to the experimental ones [50,52,54]. For example, the band gaps measured by Kim and Choi are 0.88 eV for the SnSe single crystal and 1.10 eV for GeSe [54].
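One common way to quantify the nonlinear composition dependence described above is a single-parameter bowing fit, Eg(x) = xEg(GeSe) + (1 − x)Eg(SnSe) − b x(1 − x). The sketch below fits b by least squares; only the end-member gaps are taken from the text, and the intermediate values are placeholders within the reported PBE range, not the paper's computed gaps.

```python
import numpy as np

# End-member PBE gaps from the text; intermediate values are placeholders
# chosen inside the reported 0.92-1.13 eV range (illustrative only).
x_data  = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
Eg_data = np.array([0.92, 0.99, 1.06, 1.11, 1.13])

Eg_SnSe, Eg_GeSe = Eg_data[0], Eg_data[-1]

# Fit the single bowing parameter b in
#   Eg(x) = x*Eg_GeSe + (1 - x)*Eg_SnSe - b*x*(1 - x)
linear = x_data * Eg_GeSe + (1 - x_data) * Eg_SnSe
basis  = x_data * (1 - x_data)
b = np.sum(basis * (linear - Eg_data)) / np.sum(basis**2)
print(f"bowing parameter b = {b:.3f} eV")

Eg_fit = linear - b * basis
print("residuals (eV):", np.round(Eg_data - Eg_fit, 3))
```

A negative fitted b corresponds to a gap that lies above the Vegard interpolation at intermediate compositions, which is the kind of upward bending the text describes near x = 1.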
Theoretically, Shi and Kioupakis obtained band gaps of 0.89 eV and 1.10 eV for SnSe and GeSe bulks at the HSE06 level, respectively [52]. Therefore, through forming alloys with x ≥ 0.5, on the one hand, the alloy becomes a direct-bandgap semiconductor, which improves the efficiency of light absorption; on the other hand, the alloy has a band gap ranging from 1.50 to 1.59 eV, which is close to the optimal value for materials applied in solar cells.
Optical Properties of GexSn1-xSe Alloys
As described above, GexSn1-xSe alloys have band gaps ranging from 0.92 to 1.13 eV at the PBE level, or from 1.39 to 1.59 eV at the HSE06 level. Note that these ranges cover the main solar irradiation in the visible range. To further validate the potential applications in photoelectronics and photovoltaics, optical absorption calculations are performed according to the Kramers-Kronig relationship [55]. The calculated optical absorption coefficients with the incident light polarization along the y and z directions are presented in Figure 6. First, an anisotropic feature is observed, as the absorption coefficients along the y direction (armchair) are significantly larger than those along the z direction (zigzag). Second, with increasing Ge concentration, the optical absorption edges along the y and z directions are blue-shifted, which is consistent with the order of the calculated band gaps. Third, the absorption coefficients reach around 10^5 cm−1 when the photon energy is larger than 2.0 eV, and the absorption strength is significantly enhanced through alloying. These observations notably verify that, through alloying GeSe and SnSe, the optical properties can be improved by subtly controlling the compositions.
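The absorption coefficients of Figure 6 are conventionally obtained from the frequency-dependent dielectric function ε(ω) = ε1(ω) + iε2(ω) through the standard relation α(ω) = √2 (ω/c) [ (ε1² + ε2²)^(1/2) − ε1 ]^(1/2). The sketch below shows this post-processing step on made-up dielectric data; it is an illustration of the conversion, not the authors' actual pipeline.

```python
import numpy as np

c    = 2.99792458e8        # speed of light, m/s
hbar = 6.582119569e-16     # reduced Planck constant, eV s

def absorption_coefficient(E_eV, eps1, eps2):
    """alpha(E) = sqrt(2) * (omega/c) * sqrt( sqrt(eps1^2 + eps2^2) - eps1 ),
    the standard conversion from the dielectric function to the optical
    absorption coefficient (result in 1/m)."""
    omega = E_eV / hbar                      # angular frequency, rad/s
    return np.sqrt(2.0) * (omega / c) * np.sqrt(np.sqrt(eps1**2 + eps2**2) - eps1)

# Made-up dielectric data as a stand-in for the DFT output (illustrative only).
E    = np.linspace(0.5, 4.0, 8)              # photon energy, eV
eps1 = 12.0 - 1.5 * E
eps2 = 6.0 * np.exp(-((E - 2.5) / 0.8) ** 2)

alpha_cm = absorption_coefficient(E, eps1, eps2) * 1e-2   # convert 1/m -> 1/cm
for e, a in zip(E, alpha_cm):
    print(f"E = {e:4.2f} eV   alpha ~ {a:9.3e} cm^-1")
```

With dielectric values of this magnitude the coefficient comes out around 10^5 cm−1 above 2 eV, consistent with the scale quoted in the text.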
Conclusions
In summary, we have systematically studied the stabilities, electronic, and optical properties of GexSn1-xSe alloys with varying Ge content through first-principles calculations. The structures of the alloys were first generated by the ATAT tool. Thermodynamic analysis shows that the alloys are considerably stable, and the mutual-solubility temperature of GeSe and SnSe is estimated to be 580 K. The negative substitution energies of Ge replacing Sn in the SnSe monolayer verify the proposed cation exchange mechanism for the alloy growth. Electronic property calculations show that with increasing Ge concentration, the band gap of the alloys ranges from 1.39 to 1.59 eV at the HSE06 level. The nonlinear increase of the band gap, especially at x = 1, is due to the uneven distribution of the charge densities of the VBM states. Interestingly, when the Ge concentration exceeds 0.5, on the one hand, the alloys have a direct bandgap, which improves the efficiency of light absorption; on the other hand, the alloys have band gaps in the range of 1.50-1.59 eV, which falls within the optimal range for application in solar cells. Optical calculations further verify that, through alloying, the optical properties can be improved by subtly controlling the compositions. We believe that our work may shed light on the future applications of GexSn1-xSe alloys in optoelectronics.
"Materials Science"
] |
Justifiability and Animal Research in Health: Can Democratisation Help Resolve Difficulties?
Simple Summary Scientists justify animal use in medical research because the benefits to human health outweigh the costs or harms to animals. However, whether it is justifiable is controversial for many people. Even public interests are divided because an increasing proportion of people do not support animal research, while demand for healthcare that is based on animal research is also rising. The wider public should be given more influence in these difficult decisions. This could be through requiring explicit disclosure about the role of animals in drug labelling to inform the public out of respect for people with strong objections. It could also be done through periodic public consultations that use public opinion and expert advice to decide which diseases justify the use of animals in medical research. More public input will help ensure that animal research projects meet public expectations and may help to promote changes to facilitate medical advances that need fewer animals. Abstract Current animal research ethics frameworks emphasise consequentialist ethics through cost-benefit or harm-benefit analysis. However, these ethical frameworks along with institutional animal ethics approval processes cannot satisfactorily decide when a given potential benefit is outweighed by costs to animals. The consequentialist calculus should, theoretically, provide for situations where research into a disease or disorder is no longer ethical, but this is difficult to determine objectively. Public support for animal research is also falling as demand for healthcare is rising. Democratisation of animal research could help resolve these tensions through facilitating ethical health consumerism or giving the public greater input into deciding the diseases and disorders where animal research is justified. Labelling drugs to disclose animal use and providing a plain-language summary of the role of animals may help promote public understanding and would respect the ethical beliefs of objectors to animal research. National animal ethics committees could weigh the competing ethical, scientific, and public interests to provide a transparent mandate for animal research to occur when it is justifiable and acceptable. Democratic processes can impose ethical limits and provide mandates for acceptable research while facilitating a regulatory and scientific transition towards medical advances that require fewer animals.
Introduction
Animal research is frequently considered justifiable based on a consequentialist calculus that invokes cost-benefit or harm-benefit analysis [1]. These ethical frameworks are formalised throughout the developed world with explicit statements in regulations and guidelines requiring researchers to justify their use of animals based on benefits to humans, animals, or the environment. These frameworks rely on researchers presenting evidence that their research may lead to benefits, such as addressing an unmet medical need, but many members of the public disagree. Opinion polls in the US and the UK show that the proportion of adults who believe medical research involving animals is morally acceptable has been falling since 2002 [2]. However, this trend in public opinion is at odds with the rising demand for healthcare [3], including drugs that are tested on animals as part of regulatory requirements [4]. Despite the trend in public opinion, current ethical and regulatory frameworks lack the capacity to reduce the scope and volume of animal research because the consequentialist calculus is too rough and imprecise. Options for democratising animal research ethics should be considered, including drug labelling to educate the public and public consultation by national animal ethics committees to engage the public in deciding when animal research is justified.
Ethical Flexibility
The research community has largely adopted the consequentialist ethics that were used to shift attitudes against animal research. In the 1970s, Peter Singer famously argued that the suffering of animals should not be given less weight than the suffering of humans, a view that he believed should end the vast majority of animal research [5,6]. However, others have argued that animal research is necessary to maximise goods and avoid harms [7]. Indeed, the benefits of animal research are the primary argument advanced by scientists who use animals [8], even though moral philosophers as varied as deontologists, ecofeminists, and virtue ethicists may find it unconvincing [9][10][11][12]. The consequentialist framework has now broadly been adopted by the research community, with animals given ethical standing and being included in cost-benefit rubrics. Animal ethics processes require justification for research projects (benefit) and address costs by embedding the 3Rs of replacement, reduction, and refinement into regulations and guidelines [13]. However, animal research continues, leaving its opponents dissatisfied.
One issue is that the calculation of ethical costs to animals and the benefits of animal research are so rough and imprecise (if they are even possible) [6,14] that they can lead to almost any conclusion. An optimist could argue that since a particular improvement in healthcare could benefit humans in perpetuity, but takes only a finite number of animals to achieve, almost any project could be justifiable. Moreover, the practical benefits of a research project, like the results of individual experiments, are hard to predict and may be more useful than anticipated. For example, in 2016, there were eight first-in-class drugs approved that perhaps may not have been possible without animal use (see Table 1) [15]. Opponents of animal research place less weight on the benefits and greater weight on the costs [11], or criticise researchers for having the opposite bias [14]. The flexibility and imprecision of the consequentialist calculus makes it very difficult to reject projects on ethical grounds, provided they have a properly articulated rationale.
In theory, there should be a point where the benefit side of the consequentialist calculus is outweighed by the costs to the animals. If it were possible to quantify a benefit, at least in health, it should probably be based on the World Health Organization's measure of disease burden, disability-adjusted life-years (DALYs). Moral philosophers, however, have argued that certain conditions should not be targeted based on the quality of the disorder or disease being researched, rather than the quantity of the benefit. For example, the ethics of better therapies for insomnia or experiments in psychology have been questioned [11,14], despite the fact that psychological disorders account for about a quarter of Europe's DALYs [16]. In fact, in 2015, depression was the 10th biggest contributor to disease burden, with 12 million DALYs lost [17]. Even if it is assumed that all of the 50-100 million vertebrates used in experiments per year [18,19] are used for the development of 30 new drugs [15], the 169 million DALYs lost to mental and substance use disorders [17] eclipses the 2-3 million animals each new drug requires on average. Although the consequentialist calculus is supposedly a quantitative exercise [14], no regulatory guidelines or ethics committees have been reported to utilise a cut-off based on DALYs or any other objective measure for approving projects.
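For completeness, the following short script merely restates the figures quoted above; the division into animals per drug is the only computation performed, and no further assumptions are added.

```python
# Back-of-the-envelope restatement of the numbers quoted in the text (illustrative only).
animals_per_year_low, animals_per_year_high = 50e6, 100e6   # vertebrates used annually [18,19]
new_drugs_per_year = 30                                      # assumed number of new drugs [15]

low = animals_per_year_low / new_drugs_per_year
high = animals_per_year_high / new_drugs_per_year
print(f"animals per new drug: {low / 1e6:.1f}-{high / 1e6:.1f} million")

dalys_mental = 169e6   # DALYs lost to mental and substance use disorders (2015) [17]
print(f"DALYs from mental and substance use disorders: {dalys_mental / 1e6:.0f} million")
```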
Table 1. Examples of first-in-class drugs approved in 2016 and the role of animals in their development.
Defitelio (defibrotide sodium), for hepatic veno-occlusive disease after haematopoietic stem cell transplantation *: Derived from the intestinal mucosa of pigs, defibrotide has been tested in several cell lines and in animals, such as mice [20][21][22].
Exondys 51 (eteplirsen), for Duchenne muscular dystrophy *: Eteplirsen works by causing exon skipping to correct a genetic mutation. Animal studies on the approach used mice and dogs [23]. However, its approval using a surrogate marker of efficacy in humans is controversial [24].
Spinraza (nusinersen), for spinal muscular atrophy *: Nusinersen (like eteplirsen) works by modulating gene splicing to increase levels of a protein affected by an inherited genetic mutation. It was advanced to clinical trials based on work in mice and non-human primates [28][29][30].
Xiidra (lifitegrast), for dry eye disease: Dry eye disease can affect animals like dogs, cats, and horses, and animal models have provided evidence of an inflammatory role. Lifitegrast reduces inflammation by preventing LFA-1/ICAM-1 interactions and has been tested in dogs and mice [34][35][36].
Zinbryta (daclizumab), for multiple sclerosis: Daclizumab was originally developed as an immune suppressant for transplant patients, based on studies in mice showing mechanisms to suppress autoimmune responses. Human clinical studies facilitated its translation for multiple sclerosis [37,38].
Zinplava (bezlotoxumab), for Clostridium difficile infection: Bezlotoxumab neutralises C. difficile toxin B. Early work involved characterising the effect of toxin B in animals like hamsters and rabbits and using rabbits to generate antibodies against toxin B [39][40][41].
* "Orphan" or rare diseases affecting fewer than 200,000 Americans often have limited treatment options or no drug treatment available.
Alternative Ethical Frameworks
There are alternative ethical frameworks to consequentialism, but these have not been formally adopted and leave the question of justifiability contested. Deontological or rights-based frameworks and virtue ethics frameworks may also still consider consequences. In the extreme, a rights-based framework calls for the total abolition of all animal research, as Tom Regan argued based on the idea that animals have inherent value as living creatures [42]. However, a minimalist view of animal rights would be that animals should have the right to freedom from "useless pain or misery" [43]. The usefulness of the animal's suffering here invokes consequentialism-it is the benefit for science and health that permits the research. Rights-based frameworks have also struggled to gain formal acceptance as courts have struggled with the idea of granting rights to animals [44], so though the framework may have philosophical value, few animal ethics systems formally adopt the principle. Similarly, virtue ethicists may argue that animal research is cruel and therefore immoral, using reasoning that is similar to consequentialist approaches [10]. However, it can also be argued that animal research is compassionate when conducted for the purposes of improving healthcare, perhaps based on discounting animal interests because of partiality to other human ties [9,45], but this potentially leaves virtue ethics in a similar impasse to consequentialism as individuals must weigh their compassion for patients against compassion for animals. There is as yet no ethical framework that can clearly and uncontroversially delineate justifiable and unjustifiable research in a way that is significantly more satisfactory from a philosophical and practical perspective.
Consequentialism and Public Interest
The conclusions of different consequentialist philosophers, scientists and perceptions of the public are often in disagreement. Scientists often argue that a study is justifiable because there is unmet medical need, but it is unclear whether members of the public (or consequentialist philosophers) would agree that every unmet medical need warrants the use of animals in medical research. In general, a majority of people believe animal research is ethically acceptable [2,46], but this includes no fine-grained information about particular conditions or approaches. A purely consequentialist ethical approach would probably target research funding in accordance with burden of each disease or disorder, but that is not the case in reality. Instead, funding for research into different disorders is skewed by societal attitudes, with certain medical conditions attracting an excess of funding while others are underfunded relative to their disease burden [47][48][49]. Societal attitudes are also less supportive of certain techniques, like genetic modification, in animal research [50].
The public might consider other factors when deciding whether animal research is acceptable for a particular condition. For example, it may be seen as more justifiable to use animals to improve treatments for conditions that affect earlier stages of life rather than later stages because there is potentially a greater benefit. Disorders that are perceived to have an element of choice, like addiction [51], could also conceivably be seen as less ethical choices for animal use in medical research. However, people with addiction still seek treatment, including medication, and this demand for new, more effective treatments has previously been argued as a justification for further research [52].
The public's demand for healthcare continues to rise suggesting a strong public interest in maintaining and improving current standards. Healthcare spending is rising at a rate above inflation, with large increases in pharmaceutical spending [3,53]. Interviews with stakeholders about pharmaceuticals policy show that they are also concerned about equitable access, for instance patients with rare disorders [54]. Rare (or "orphan") disorders pose a difficulty for consequentialist ethics because benefits are accrued by a smaller number of people, thus shifting the consequentialist calculus against lines of research. These rare diseases may have no effective therapies available and thus attract significant sympathy and political advocacy, as was the case for the controversial approval of eteplirsen [55]. Therefore, it appears that the public places at least some weight on the more abstract benefit of equitable access to healthcare that may impact or override an otherwise unfavourable consequentialist calculus.
Evaluations of animal research ethics must therefore weigh a multitude of competing viewpoints. Surveys have shown that public opinion is turning against animal research, with support dropping from 75% to 66% in the UK and from 63% to 51% in the US between 2002 and 2016 [2,46]. One Canadian survey also found minority (44%) public support for animal research that benefitted humans but harmed animals. Respondents were then presented with simplified typical arguments against animal research and virtually all undecided individuals were convinced animal research should not be supported [56]. On the other hand, the behaviour of the public in seeking healthcare and demanding new therapies, sometimes before they have been clinically tested [57], suggests that (insofar as animal research is currently necessary for drug development) there is an indirect public interest in continuing to do animal research. Although some philosophers argue that all medical research could be done morally in humans or using alternatives, scientists have consistently argued that animals are currently necessary to make medical advances [8,58]. If it continues, animal activists will continue to campaign against their work, potentially using aggressive protest techniques like targeting individual scientists [46,59]. Democratisation can help society navigate the uncertainty of the consequentialist calculus and balance it with other public interest considerations through individual action and enhanced opportunities for deliberative participation [60].
Ethical Health Consumerism
One form of democratisation is through ethical consumerism. Moral philosophers have previously argued for changes in individual behaviour to improve the treatment of animals. For example, consequentialists and virtue ethicists have made arguments to support vegetarianism because withdrawing support for the meat industry can either help reduce animal suffering or is otherwise compassionate and generous [10,61,62]. There is also historical precedent for similar actions in healthcare. For example, in early 19th century Britain, doctors who performed vivisection were subject to boycotts out of fear that they lacked sensitivity and compassion [63].
In the late 20th century, it was suggested (perhaps sarcastically) that patients concerned with animal research should give their doctor an advanced directive refusing treatments derived from animal research, i.e., all of them [64]. More recently, it was proposed to label medicines in the UK as tested on animals to inform the public of the role of preclinical animal work [65]. However, animal activists opposed the labelling initiative, citing concerns that patients may not comply with medication because it had been tested on animals and arguing that animal research made no meaningful contribution to drug development anyway [66]. In the same vein, animal activists have also argued (as a reductio ad absurdum) that supporters of animal research should volunteer to be experimented upon should they lose mental capacity [67].
However, the application of ethical consumerism to healthcare has the capacity to inform the public and to democratise the consideration of the consequentialist calculus. Non-compliance with medication may be a concern for clinicians, but refusing treatment is well within a patient's rights [68]. There is currently no external labelling indicating that a given drug was tested on animals and no further information in the product information sheet. Moreover, current labels do not even provide adequate information for the 5-10% of patients who are vegetarian or vegan about the suitability of a drug's ingredients [69]. Given the state of public opinion, opening healthcare to ethical consumerism through labelling and disclosure can instead be seen as a means of respecting the ethical views of a significant minority of the population. Currently, adult patients with decision-making capacity can refuse treatments that are inconsistent with their religious, ethical, or other personal beliefs [70,71]. Non-disclosure regarding the role of animals in a treatment's development implicitly denies the validity of patients' ethical beliefs and their right to give informed consent or refusal.
Objections to Ethical Health Consumerism
Animal activists have argued that a simple external label may not be sufficient on its own [66], but the proposal could be refined by requiring pharmaceutical companies to provide a plain-language summary of the role animals have played in a drug's development. Surveys have also been cited suggesting that pharmaceutical companies only do animal research to satisfy regulators [72]. On the other hand, industry studies suggest that results from animals are imprecise, but still useful predictors of the likelihood of adverse outcomes in clinical trials [73,74]. Since the reliability and usefulness of animal results differs based on circumstances (e.g., species and organ) [73], the pharmaceutical company that developed the drug is best-placed to summarise the role of animal research for an individual drug. However, care must be taken by regulators when reviewing these statements because there have been cases of researchers selectively reporting data from animal research that results in ineffective or unsafe compounds proceeding to clinical trials [75,76], just as regulators must take care to ensure clinical trials are well designed [77]. Requiring labelling and a plain-language summary of the role of animals in a drug's development could help the public understand the contribution that animals have made and then to evaluate whether the benefits to them have outweighed the costs to the animals.
Research on consumer behaviour suggests clinician and animal activist concerns about medical non-compliance or refusal are not likely to be a major issue. Consumer behaviour is affected by many factors other than ethical labels [78,79]. The expression of ethical concerns does not necessarily translate into changes in behaviour and consumers who do change their behaviour represent a minority [78]. For example, one study found an 'organic' label had no significant effect on choice of chocolate [80]. However, this has not prevented several ethical consumer movements targeting animal welfare and environmental issues, such as egg-laying hens, from achieving some success [81][82][83].
Moral philosophers have also questioned the possibility of refraining from being a party to animal research. It has been argued that animal research is too deeply entrenched in corporate and academic research centres for individuals to withdraw their support with any hope of affecting change [10]. However, a small minority of patients who demand an alternative type of care can help drive change, such as with the development of bloodless medicine for Jehovah's Witnesses [84]. Moderates concerned with animal use could approach the issue by accepting life-saving treatments, but refusing non-critical treatments and ensuring they implement an advance care directive to limit the amount of medical care they receive if they lose decision-making capacity. Even a small number of animal research objectors could be enough to encourage the regulatory and scientific developments needed for animal-free medical advances.
Participatory Decision-Making: Are National Animal Ethics Committees Needed?
Another process for democratising the consequentialist calculus of animal research is to give the public direct input into deliberative processes [60]. Researchers have variously argued for enhanced approaches to project assessment [18] and for political processes as necessary for the determination of what research is acceptable [85]. Electoral and parliamentary processes can initiate change, but legislation banning animal research is too blunt and inflexible to incorporate society's diverse views on animal research and the varying circumstances for different conditions. Fine-grained information about what kinds of research are acceptable is necessary to ensure that animal research meets public expectations.
Current institutional ethics review processes have been criticised because they do not adequately ensure that animal research is valid or ethical. For example, they may not adequately assess the scientific validity of a project before giving approval [86] and many animal welfare officers at Australian and Dutch universities feel that 3Rs opportunities remain unused [87,88]. There are also anecdotal reports that rejection of projects at the ethics committee stage never or almost never happens [89]. This suggests that the animal research approval processes used at the institutional level for individual projects are incapable of determining when animal costs outweigh potential benefits.
A national animal ethics committee, organised by funders and/or regulators, could democratically engage the public and weigh scientific evidence to determine what kinds of animal research are acceptable. Historically, these kinds of 'boundary organisations' have sat at the intersection of science and politics and help to resolve competing or opposing interests while simultaneously pursuing both [90,91]. The challenge is in designing a deliberative space or process that can adequately communicate science and give the public an appropriate degree of influence on decision-making [92]. Although there is already some democratisation of animal ethics procedures with independent or unaffiliated members being included in institutional animal ethics committees in several jurisdictions, the effectiveness of these roles has been criticised for lacking in independence or representativeness [89]. There are also challenges in terms of education and training and the inherent difficulty of representing the views of a public that is divided on animal research [93]. A national animal ethics committee could overcome some of these difficulties through standard public consultation processes (e.g., surveys, hearings, written submissions) and allow institutional animal ethics committees to focus on issues with individual projects rather than resolving macro-level ethical and public interest issues. Public engagement with these processes could be built through programs like the Concordat on Openness on Animal Research in the UK, which encourages signatories to better communicate their use of animals (including its limitations) through annual reporting and making online statements about their animal use and policies [94]. Public participants in surveys, focus groups, or hearings, may then be more engaged and educated about the issue.
The ongoing deadlock between animal activists and researchers suggests that animal research is a good candidate for participatory decision-making [95]. If public opinion follows current trends [2], then the deadlock and hostility towards scientists will only worsen over time. Participation may be costly because every citizen is potentially a stakeholder, with an interest in either healthcare or animal welfare, but online participation can reduce costs. Conducting surveys of representative samples can also help to reduce costs while providing accurate information on public opinion and reducing vulnerability to political campaigns. Animal research ethics may also require participants to understand highly technical concepts, but the issue could also attract many passionate patient advocates and animal activists, who could help ameliorate these difficulties through public education [95]. Moreover, both sides stand to benefit. Scientists would have an explicit democratic mandate to conduct particular lines of research. Objectors to animal research would, subject to current trends in public opinion, be able to gradually decrease the use of animals in research and lobby for changes like repealing requirements to use animals in drug development or increasing funding for 3Rs developments.
Accelerating Ethical Progress
The democratisation of decisions about animal research is a logical consequence of the general democratisation of science. As science becomes more open (e.g., through open access and open data practices) and participatory (e.g., citizen science), it is reasonable that information and decision-making processes about basic research should also be more available. Democratisation can improve animal research ethics because it will provide a process for clearly delineating justifiable and unjustifiable uses of animals and is likely to accelerate 3Rs implementation. Similarly, drug labelling with plain language summaries of the usefulness (and limitations) of preclinical animal research can help to educate the public about the role of animal research in their lives. This enables individuals to make decisions consistent with their ethical beliefs about their healthcare and refusals on ethical grounds can help accelerate improvements in animal welfare principles such as the 3Rs. If data on the rate at which patients refuse treatments on ethical grounds is collected, it can discourage researchers and biomedical companies from developing treatments that the public finds ethically unacceptable. It can also raise awareness in patients, who can use their powerful lobbying and advocacy groups to push for political changes such as repealing the regulatory requirement to test drugs on animals, as they have done with right-to-try laws [57].
Direct consultation and participatory decision-making through national animal ethics committees would likely be framed (at least initially) in consequentialist cost-benefit or harm-benefit terms and would require transparent weightings for different interests or viewpoints (e.g., biomedical scientists, patients, public opinion). Over time, the number of lines of animal research considered acceptable would be expected to decrease. This would occur as long as the public is given genuine influence over decisions, because current trends in public opinion show increasing opposition to animal research [2,46]. Scientists would also need explicit protections from sudden changes in sentiment, such as grandfathering of ethical approvals for already funded or reviewed projects and limits to the rate of change that could suddenly impact the biomedical workforce. However, the effect of democratisation under current trends would be to benefit animals, who would be used in fewer experiments; patients, who want to receive treatments consistent with their ethical beliefs; and scientists, who will have some certainty that, if they are working on animal projects, they have a democratic mandate to do so.
Political engagement with animal research is already promoting improved animal welfare. For example, political engagement and regulatory developments in Europe are already pushing scientists to develop and improve animal welfare standards [96]. Political and regulatory pressure must also be accompanied by the necessary resources to study and improve animal welfare, which may be very scarce in a regulatory and funding system focused primarily on human health needs. While practical scientific impediments to eliminating animal research completely may continue for some time, such as the impossibility of understanding behavioural processes without animals [8], increasing the level of information available to the public and their ability to participate in decision-making about the animal research that their governments support is likely to accelerate implementation of 3Rs principles.
Conclusions
The current consequentialist ethical framework and institutional approval processes for animal research cannot satisfactorily resolve the question of when animal costs outweigh potential human benefits. The consequentialist calculus is too rough and imprecise to produce clear, reproducible conclusions and the result is that there is a significant divide within and between public opinion on animal research, public demand for healthcare, and scientific opinion. Methods of democratisation can help stakeholders with diverse and conflicting viewpoints give input into what kinds of benefits may justify the use of animals in research. Facilitating ethical health consumerism through labelling disclosure of animal use in the development of drugs or convening a national animal ethics committee to determine which purposes are acceptable can provide the fine-grained data required to guide animal researchers to the most ethical projects. Democratising deliberations on the justifiability of animal research can help ensure that the interests of animal researchers and animal activists are balanced in accordance with public expectations and can potentially facilitate changes that would enable medical advances that use fewer animals.
| 6,238.4 | 2018-02-01T00:00:00.000 | [
"Political Science",
"Medicine",
"Philosophy"
] |
In Vitro Inhibition of Colorectal Cancer Gene Targets by Withania somnifera L. Methanolic Extracts: A Focus on Specific Genome Regulation
The assessment of natural biomass sources is a promising approach for accelerating the development of innovative anticancer drugs. Our study sought to assess the effect of W. somnifera L. (WS) methanolic root and stem extracts on the expression of five targeted genes (cyclooxygenase-2, caspase-9, 5-Lipoxygenase, B-cell lymphoma-extra-large, and B-cell lymphoma 2) in colon cancer cell lines (Caco-2 cell lines). Plant extracts were prepared for bioassay by dissolving them in dimethyl sulfoxide. Caco-2 cell lines were exposed to various concentrations of plant extracts, followed by RNA extraction for analysis. By explicitly relating phytoconstituents of WS to the dose-dependent overexpression of caspase-9 genes and the inhibition of cyclooxygenase-2, 5-Lipoxygenase, B-cell lymphoma-extra-large, and B-cell lymphoma 2 genes, our novel findings characterize WS as a promising natural inhibitor of colorectal cancer (CRC) growth. Nonetheless, we recommend additional in vitro research to verify the current findings. With significant clinical benefits hypothesized, we offer WS methanolic root and stem extracts as potential organic antagonists of colorectal carcinogenesis and suggest further in vivo and clinical investigations, following successful in vitro trials. We recommend more investigation into the specific phytoconstituents in WS that contribute to the regulatory mechanisms that inhibit the growth of colon cancer cells.
1. Introduction
1.1. The Epidemiological Burden of Colorectal Cancer and Related Risk Indicators
Colon cancer is the fourth most common cancer globally, with rectum cancer ranking eighth in terms of incidence, according to GLOBOCAN 2018 data. When taken together, CRCs account for 11% of all cancer diagnoses worldwide, making them the third most common type of cancer diagnosed [1]. In developed economies, CRC currently ranks third in terms of cancer-related mortality [2]. In developing nations, cancer claims the lives of almost 70% of individuals [3], and this phenomenon is attributable to poorly managed, weakened, and constrained healthcare systems. Out of 191 countries worldwide, 10 have CRC as the most common cancer diagnosed in men; no country has CRC as the most common cancer diagnosed in women. Age-standardized (world) incidence rates of CRC per 100,000 are 19.7 for both sexes combined, 23.6 for men, and 16.3 for women. In high-HDI (human development index) countries, the age-standardized incidence rate for men is 30.1/100,000, while in low-HDI countries it is 8.4. For women, the corresponding statistics are 20.9 and 5.9 [1,4]. The increased incidence of CRC is associated with high-fat diets, a low intake of whole grains, fruits, and vegetables, gender, age, race, and family history. It has also been determined that exposure to heavy metals like lead and infectious organisms poses risks for CRC [5]. Enhanced comprehension of the onset and progression pattern of colorectal cancer (CRC), genetic and environmental risk factors, and the evolutionary process of the disease can enable researchers and healthcare providers to mitigate the effects of this life-threatening cancer [1,6].
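For readers unfamiliar with the age-standardized rates quoted above, the sketch below illustrates how direct age standardization works in principle; the age bands, case counts, and standard-population weights are illustrative placeholders, not GLOBOCAN data.

```python
# Minimal sketch of direct age standardization, the calculation behind an
# "age-standardized (world) incidence rate". All numbers below are
# illustrative placeholders, not GLOBOCAN data.

age_specific = [
    # (cases, person-years) per age band in the study population
    (12, 200_000),   # 0-39 years
    (85, 150_000),   # 40-59 years
    (160, 90_000),   # 60+ years
]
std_weights = [0.70, 0.20, 0.10]   # hypothetical standard-population shares

asr_per_100k = sum(
    weight * cases / person_years * 100_000
    for (cases, person_years), weight in zip(age_specific, std_weights)
)
print(f"Age-standardized rate: {asr_per_100k:.1f} per 100,000")
```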
The burden of colorectal cancer (CRC) is largely attributed to industrialization and westernization. There is a notable shift in this global burden towards countries of lower economies as they continually become Westernized due to trade, tourism, professionalism, and other exchange programs across different continents. Overall, there are up to eight-fold differences in CRC incidence between countries depending on the geographical region. Incidence rates typically increase steadily with rising HDI in nations that are experiencing significant developmental transitions, indicating that there is a correlation [4].
An Overview of Targeted Genes in the Present Study
This study evaluated the expression of five genes implicated as significant targets for cancer management, namely, COX-2, CASP9, Bcl2, Bcl-xL, and 5LOX. A significant protagonist in the control of apoptosis is CASP9, which signals the beginning of the mitochondrial caspase cascade. The caspase proteins are part of the chain reaction that is initiated by commands that support apoptosis and leads to the dissociation of many peptides and the fragmentation of cells. Comprehending caspase programming is essential for precisely regulating apoptosis in therapeutic settings [7,8]. Apoptosis is a crucial physiological process that involves the purposeful death of cells in a range of biological mechanisms [9]. There is a claim that impeding natural apoptosis increases the risk of malignancy [10,11]. In contrast, there is evidence that a higher frequency of colorectal adenoma is strongly associated with a lower rate of apoptosis [12].
Arachidonic acid is converted into inflammatory prostaglandins by the rate-limiting enzymes cyclooxygenase 1 and 2 (COX-1 and COX-2). Cancer risk is increased by persistent inflammation [13]. COX-2 is substantially stimulated in inflammatory disorders. COX-2 selective inhibitors are believed to have negligible or no gastrointestinal adverse effects while providing anti-inflammatory, antipyretic, and analgesic benefits comparable to those of broad-spectrum NSAIDs [13]. These enzymes are essential to the organism's metabolic processes throughout the body, since they are engaged in the generation of lipid prostaglandins [14].
It is necessary to comprehend the propensity and concentration of each member of the Bcl2 family in order to understand the primary reactions that take place between them. The dominant interactions that dictate whether or not mitochondrial outer membrane permeabilization (MOMP) occurs are determined by these characteristics [15]. Bcl-xL is present in the cytoplasm, extra-nuclear membranes like the mitochondrion, and the nuclear envelope, whereas Bcl2 is found in the mitochondrion, endoplasmic reticulum (ER), and the nuclear envelope [16]. The precise modes of action of Bcl2 and Bcl-xL are complex, and numerous interactions with other proteins have been postulated. It is unknown how important a given interaction is for the final phenotype at the cellular level [17]. It has been demonstrated that prexasertib management and siRNA-mediated Bcl-xL downregulation greatly boost apoptosis. Moreover, it has been demonstrated that prexasertib plus navitoclax exhibits a potent antitumor impact and suppresses Bcl-xL to cause apoptosis in malignant cells [9].
There is a wide range of lipoxygenases (LOXs) in fungi, bacteria, plants, and animals. These are iron-containing, non-heme enzymes. The Ca2+- and ATP-dependent enzyme 5LOX catalyzes the first two stages in the synthesis of the peptide-LTs and the chemoattractant factor LTB4. Granulocytes, mast cells, monocytes/macrophages, and B lymphocytes are myeloid cells that express the 5LOX protein [18,19].
Phytotherapeutic Approaches to Treatment
The development of cancer drug resistance and its related side effects are closely linked to synthetic treatments for CRC. This has prompted research into natural alternatives as potential strategic options. But even with this powerful natural phytotherapeutic approach, research into and exploration of the useful medicinal plants now in use are lacking [6,20,21]. The progress in pharmacotherapeutics is hindered by a lack of understanding of the current plant metabolites, their biological roles, and the processes involved in their extraction. Plant-derived phytoconstituents have been shown to exhibit potent therapeutic effects in the treatment of a wide range of infectious diseases, making them less likely than synthetic drugs to cause a variety of side effects [22]. Emerging anti-colorectal cancer therapies have largely come from organic substances derived from plants; they directly or indirectly contribute to almost half of all anticancer medicines currently in use. We have previously investigated and reported on the safety and cytotoxicity characteristics of WS, which informed the need for further exploration [20]. WS is a significant but little-studied plant species, natively occurring in Kenya, Africa.
1.4. The Botanical Description and Global Distribution of W. somnifera (L.)
According to the biological classification system, Withania somnifera (L.) is a species of plant that is a member of the kingdom Plantae (plants), the sub-kingdom Tracheophytes (vascular plants), division Angiospermae, class Eudicots, clade Asterids, order Solanales, family Solanaceae, sub-family Solanoideae, tribe Physaleae, genus Withania, and species somnifera [23][24][25]. W. somnifera L. is a small shrub that grows abundantly in the subtropical regions and is frequently referred to as "Ashwagandha" in Hindi and Sanskrit. It grows in the dry tropical regions of Afghanistan, Pakistan, South and East Africa, Spain, Sri Lanka, Sudan, China, Congo, India, Egypt, Israel, Jordan, Madagascar, Morocco, Nepal, and the Canary Islands [26]. The leaves and roots of this plant are used in Ayurveda, which is a traditional Indian medicine [27]. Ayurveda is a well-respected traditional medical system with a long history. This approach uses a variety of natural chemical substances in different ways to meet its therapeutic objectives [28]. Thousands of herbs, including Withania somnifera L., are beneficial in avoiding illnesses and preserving health according to the Ayurvedic system [20].
Our study sought to assess the effect of W. somnifera L. (WS) methanolic root and stem extracts on the expression of five targeted genes (cyclooxygenase-2, caspase-9, 5-Lipoxygenase, B-cell lymphoma-extra-large, and B-cell lymphoma 2) in colon cancer cell lines (Caco-2 cell lines).
Procurement of Caco-2 Cell Lines
The Department of Biochemistry and Medical Chemistry at the University of Pecs supplied Caco-2 cell lines directly to our laboratory (the Department of Public Health) following procurement from the ATCC (American Type Culture Collection). Caco-2 cells have potential use in toxicity and cancer studies and are excellent hosts for transfection. The cancer cell lines were preserved in compliance with the manufacturer's instructions [29].
Plant Organ Acquisition
Organs of WS were obtained from the Perkerra irrigation scheme, Baringo South Sub-County in Baringo County (0.4786 latitude, 36.0274 longitude). The reclamation scheme lies near Marigat Township, around a hundred kilometers north of Nakuru City. It derives its name from River Perkerra, which is the only perennial natural river in the area and a source of water for irrigation [30]. The organ samples were then transported to Egerton University, Kenya, for further processing. A taxonomist identified the plant species, and a voucher specimen was collected and deposited in Egerton University's herbarium.
Extraction of WS Extracts Using Methanol and Acquisition of the Plant Organs
After being shade-dried, the chosen stem and root organs were finely powdered. The solvent used for serial exhaustive extraction (SEE) was methanol. To extract phytoconstituents, 1000 g of each powdered WS organ was immersed in a glass container and extracted for three days with continuous shaking using ethyl acetate. After filtration through Whatman filter paper (grades 4 to 1), the crude solvent extracts were dechlorophyllated. To ensure that all soluble components were maximally extracted, this process was repeated three times [31]. For maximum cell penetration, a recommended minimum volume of 70% methanol (MeOH) was utilized. Essentially, 70-80% MeOH is the most widely utilized solvent because it has strong cell content penetration and is therefore suitable for extracting all primary and secondary metabolites [32][33][34][35][36]. The solvent was evaporated in order to concentrate the extract. A rotary evaporator (Marshall Scientific LLC., Hampton, NH, USA) operating at reduced pressure and temperatures between 40 and 50 °C was used to remove the solvent. A freeze dryer (Azbil Telstar, SLU, Barcelona, Spain) was used to lyophilize the aqueous extract. Tightly stoppered vials were used to keep the dry, solvent-free metabolites before use [37]. The desiccator was kept at 4 °C in a refrigerator.
Reconstitution of Plant Extracts
Dimethyl sulfoxide (DMSO) was used to dissolve the plant extracts for bioassays [38]. It served as both a suspending medium and an inert diluent for crude plant extracts that were insoluble in water. Using double-distilled phosphate-buffered saline (ddPBS) as the diluent solvent and 0.5% DMSO as the dissolving solvent, a 30 mg/mL stock solution was created. The final concentrations of 2 mg/mL, 1 mg/mL, and 0.5 mg/mL for the treatment of Caco-2 cell lines were then made from the stock solution.
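As an illustration of the dilution arithmetic behind these working solutions, the following sketch applies the standard C1·V1 = C2·V2 relation; the 5 mL final volume per dilution is a hypothetical value, not a figure from the study protocol.

```python
# Minimal sketch of the dilution arithmetic behind the working solutions.
# Uses the standard C1*V1 = C2*V2 relation; the 5 mL final volume is
# illustrative and not taken from the study protocol.

STOCK_MG_PER_ML = 30.0                 # stock prepared in 0.5% DMSO / ddPBS
TARGETS_MG_PER_ML = [2.0, 1.0, 0.5]    # treatment concentrations
FINAL_VOLUME_ML = 5.0                  # hypothetical volume per dilution

def dilution(stock, target, final_volume):
    """Return (volume of stock, volume of diluent) for one dilution."""
    v_stock = target * final_volume / stock          # C1*V1 = C2*V2
    return v_stock, final_volume - v_stock

for target in TARGETS_MG_PER_ML:
    v_stock, v_diluent = dilution(STOCK_MG_PER_ML, target, FINAL_VOLUME_ML)
    print(f"{target} mg/mL: {v_stock*1000:.0f} uL stock + {v_diluent*1000:.0f} uL ddPBS")
```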
Passaging Cancer Cell Lines (Caco-2)
The Caco-2 cell-containing culture T-flask (75 cm² or 175 cm²) was carefully placed inside a laminar flow hood and kept sterile. The used medium was drawn out once the culture T-flask was opened. PBS was used to wash it twice. After covering, PBS-EDTA was applied and left for a short while. It was then gently and carefully pipetted off. Two milliliters of trypsin was used to dissociate and detach Caco-2 cells from the surface and clumps, respectively. The surface was treated with trypsin by gently gliding the flask over it from side to side. The flask was placed inside the incubator and left for five minutes. Five minutes later, the flask was removed, and Caco-2 medium was judiciously poured in once there was sufficient apparent detachment. The adherent Caco-2 cell type has a tendency to stick to surfaces. After being pipetted into a tube, the entire contents of the flask were centrifuged for five minutes at 125× g. The Caco-2 cells were then left at the tube's bottom, and the supernatant was pipetted out. The cells were gently resuspended by pipetting up and down after adding fresh medium to the tube. Fresh culture T-flasks were filled with medium, the suspension was distributed among them, and the flasks were placed in the growth incubator. Growth conditions were maintained at 95% air, 5% CO2, and 37 °C [39]. Cells were observed until 70-80% confluence to allow for treatment.
Treatment of Cancer Cell Lines with WS Extracts
An amount of 200 µL of extract solution at varying concentration levels (2 mg/mL, 1 mg/mL, and 0.5 mg/mL) was applied to passaged Caco-2 cell lines in fresh media. Following treatment, the cells were cultured for 36 h at 37 °C. A light microscope was used to examine the condition of the cells after incubation. The typical doubling time of cancer cell lines, which is between 36 and 48 h, was used to establish the exposure length of the interventions (36 h). The cells received doses at different concentration levels, starting with 0.5 mg/mL, to evaluate the cells' response to different doses and the time taken for significant reactions to occur. This allowed for the precise detection and evaluation of the regulatory properties that increased with dose concentration. Growth and possible physiological responses were noted at 12 h intervals.
RNA Isolation
Following the removal of the medium from the cell cultures, the cells underwent two PBS washes and a trypsin-EDTA treatment. Following centrifugation, the cell suspension was pipetted into a 4 cm³ centrifuge tube. After adding 1 cm³ of ExtraZol Tri-reagent solution, it was left to incubate at room temperature for 5 min. Chloroform (0.2 cm³) was added. The sample was centrifuged at 12,000× g for 10 min at 2-8 °C after being incubated for 2-3 min. A sanitized tube was used to hold the aqueous phase. Isopropyl alcohol (0.2 cm³) was added. The material was centrifuged once more at 12,000× g for 10 min at 2-8 °C following a 10 min incubation period. Then, 1 cm³ of 75% alcohol was used to wash the RNA pellet after the supernatant was removed. It was vortexed, then centrifuged for 5 min at 2-8 °C at 7500× g. Following the removal of the supernatant, the pellet was dried. Next, 50-100 µL of RNase-free DEPC-treated water was used to dissolve it. Subsequent to vortexing, the sample was incubated at 55 °C for 10 min. Until it was used, the extracted RNA was preserved at −80 °C.
RNA Concentration and Purity Assessment Using UV Spectroscopy
Ultraviolet (UV) spectroscopy was employed to evaluate both the concentration and purity of RNA. To maximize the effectiveness of this procedure, RNA samples were first treated with RNase-free DNase to eliminate contaminating DNA. Throughout the procedure, extra impurities, including leftover proteins and phenol, that can affect absorbance measurements were carefully eliminated. The absorbance of a diluted RNA sample was measured at 260 and 280 nm. The Beer-Lambert equation, which states that absorbance changes linearly with concentration, was used to quantify the concentration of nucleic acid. RNA purity was assessed using the A260/A280 ratio, with a ratio of 1.8-2.1 indicating highly pure RNA. Using qRT-PCR high-throughput detection and quantification of target sequences, the relative gene expression of the CASP9, Bcl-xL, 5-LOX, Bcl2, and COX-2 target genes was determined. The housekeeping gene used as internal control in our experimental study was HPRT1. PCR results were expressed as Cp values, which mark the crossing point between the threshold and the amplification curve. The relative changes in expression of the target genes with respect to the reference sample were calculated from the ΔCp values using the 2^(−ΔΔCp) (Livak) method [40].
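The following sketch illustrates the two calculations described above: RNA quantification from A260 using the common conversion factor of roughly 40 µg/mL per absorbance unit for RNA, and relative expression by the Livak 2^(−ΔΔCp) method normalized to HPRT1. The absorbance and Cp values are illustrative placeholders, not data from this study.

```python
# Sketch of the quantification steps described above. The absorbance and Cp
# values are illustrative placeholders, not data from this study.

def rna_concentration_ug_per_ml(a260, dilution_factor=1.0):
    """Beer-Lambert-based estimate: an A260 of 1.0 ~ 40 ug/mL for RNA."""
    return a260 * 40.0 * dilution_factor

def rna_purity(a260, a280):
    """A260/A280 ratio; roughly 1.8-2.1 indicates highly pure RNA."""
    return a260 / a280

def livak_fold_change(cp_target_treated, cp_ref_treated,
                      cp_target_control, cp_ref_control):
    """Relative expression by the 2^(-ddCp) (Livak) method,
    normalized to the housekeeping gene (here HPRT1)."""
    d_cp_treated = cp_target_treated - cp_ref_treated
    d_cp_control = cp_target_control - cp_ref_control
    dd_cp = d_cp_treated - d_cp_control
    return 2 ** (-dd_cp)

# Example with hypothetical values
print(rna_concentration_ug_per_ml(a260=0.45, dilution_factor=50))  # ~900 ug/mL
print(rna_purity(a260=0.45, a280=0.23))                            # ~1.96
print(livak_fold_change(26.1, 20.3, 24.8, 20.2))                   # ~0.44, i.e., downregulated
```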
Data Analysis
IBM SPSS Version 26.0.3 (IBM Corp., 2019, Armonk, NY, USA) and MS Excel 2013 (Microsoft Corp., 2013, Redmond, WA, USA) were used for the quantitative analysis. Following a normality examination of the data using the Kolmogorov-Smirnov test, the mean values of the pertinent variables were compared by analysis of variance (ANOVA). An outcome was regarded as significant at p ≤ 0.05 (95% confidence level).
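As a sketch of the analysis pipeline described above (normality check followed by one-way ANOVA, with significance at p ≤ 0.05), the example below uses scipy rather than SPSS; the expression values and group sizes are illustrative placeholders.

```python
# Sketch of the analysis described above (normality check, then one-way ANOVA,
# significance at p <= 0.05), using scipy instead of SPSS. The expression
# values below are illustrative placeholders.

import numpy as np
from scipy import stats

# Hypothetical relative expression of one gene at the four dose levels (mg/mL)
groups = {
    0.0: np.array([1.00, 0.97, 1.04]),
    0.5: np.array([0.81, 0.78, 0.85]),
    1.0: np.array([0.60, 0.64, 0.57]),
    2.0: np.array([0.41, 0.38, 0.44]),
}

# Kolmogorov-Smirnov test of each (standardized) group against a normal distribution
for dose, values in groups.items():
    z = (values - values.mean()) / values.std(ddof=1)
    stat, p = stats.kstest(z, "norm")
    print(f"dose {dose} mg/mL: KS p = {p:.3f}")

# One-way ANOVA across dose levels
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}",
      "significant" if p_value <= 0.05 else "not significant")
```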
Responses of COX-2 following Administration of Methanolic Stem and Root Extracts at Progressive Dose Concentrations
Methanolic extracts of WS were administered to Caco-2 cell lines at progressively higher doses of 0.00 mg/mL, 0.50 mg/mL, 1.00 mg/mL, and 2.00 mg/mL. In both extracts, COX-2 transcripts were gradually inhibited in a dose-responsive way (Figure 1). Additionally, a statistically significant distinction was identified in the inhibitory responses from stem extracts (p = 0.010) and root extracts (p = 0.001), as shown in Table S1 (Supplementary Materials).
CASP9 Reactions following Exposure to Intervention Therapy
Following exposure of Caco-2 to intervention therapy (root extracts and stem extracts) of WS, there was increased (upregulated) expression of CASP9 genes with both extracts, in a fashion dependent on dose concentration (Figure 2). The two extracts differed noticeably from one another: p = 0.002 for root extracts and p = 0.011 for stem extracts (Table S1). The highest upregulatory activity was observed at higher concentrations. CASP9, which initiates the mitochondrial caspase mechanism, is an indispensable intermediary in the control of apoptosis.
Responses of Bcl-xL following Administration of Methanolic Stem and Root Extracts at Progressive Dose Concentrations
Following the exposure of Caco-2 cell lines to the intervention, the expression of Bcl-xL genes was downregulated in a dosage-dependent way in both extracts (Figure 3). There was a significant difference in their downregulatory potential in both extracts (p = 0.001 for roots and p = 0.001 for stems). Restricted Bcl-xL activation corresponds to enhanced apoptotic implications, which are essential for preventing the growth of malignant cells.
Bcl2 Expressions following Exposure to Intervention
There was a dose-related downregulation of Bcl2 gene activity with both extracts (Figure 4). Their respective downregulatory effects differed significantly in both extracts (p = 0.007 for roots and p = 0.004 for stems). A variety of apoptosis-stimulating active metabolites found in WS are responsible for the distinctive downregulatory properties observed for Bcl2, as reported in our study.
5-LOX Responses following Administration of Methanolic Stem and Root Extracts at Progressive Dose Concentrations
The activity of 5-LOX genes was suppressed by both extracts when they were administered to Caco-2 cell lines, in a mechanism that was dependent on the administered dose amount (Figure 5). In both extracts, there was also a noteworthy variation in their inhibitory characteristics (p = 0.001).
Phytotherapeutic Effects of Both (Root and Stem) Extracts on Cyclooxygenase-2 Modulation
While COX-1 maintains homeostatic balance, COX-2 is heavily involved in inflammatory reactions [41]. Although COX-2 expression in the colon is modest, it can be affected by lipopolysaccharides, growth hormones, necrosis-related factors, and cytokines in adverse circumstances. Higher concentrations of COX-2 are linked to the onset and progression of colorectal cancer [42]. The results from our experimental study demonstrated that, upon exposure of Caco-2 cell lines to the extracts, COX-2 genes were dose-dependently inhibited (Figure 1). The copious distribution of the biochemically functional metabolites already present in both roots and stems accounts for the considerable advantage in the inhibition of COX-2 demonstrated in our investigation. These metabolites include, among others, 27-O-glucopyranosylviscosalactone B, 4,16-dihydroxy-5h,6h-epoxyphysagulin D, diacetylwithaferin A, physagulin D (1-6)-h-D-glucopyranosyl-(1-4)-h-D-glucopyranoside, viscosalactone B, withaferin A, withanolide sulfoxide, and withanoside IV, each of which has been linked to inhibiting COX-2 activity [43][44][45]. Notably, then, the roots and stems of WS may provide a viable phytotherapeutic substitute for conventional COX-2 antagonists, which are known to have adverse reactions [5].
Phytotherapeutic Effects of Both Extracts on CASP9 Regulation
Our findings showed that exposure to root and stem extracts produced varying degrees of CASP9 overexpression in Caco-2 cells, suggesting that WS extracts are appealing inducers of cell death responses. CASP9 transcripts progressively increased in a dose-responsive way after Caco-2 cell lines were exposed to the interventions (Figure 2). The capacity to cause cells in the gastrointestinal tract to undergo apoptosis is one possible preventive chemotherapy tactic [11]. It has been proposed that suppressing spontaneous cell death increases the risk of cancer [10,11]. Comparably, a decreased rate of apoptosis is strongly linked with a higher frequency of colorectal adenoma [12]. Therefore, studying the cell death cascade is a good way to treat CRC. The activity level of the apoptosis-associated markers (CASP9 and CASP10) may help assess the prognosis for patients with stage II colorectal cancer. The downregulation of CASP9 and CASP10 increases the lifetime of abnormal mucosa cells [46,47]. Because of this, these cells may experience additional gene alterations, which could ultimately result in the development of malignant cells. The regulation of nuclear factor-kappa B (NF-kB) activity has been proposed as a mechanism by which withanolides exert their action, because NF-kB controls the activity of many genes involved in cell division, cancer metastasis, and inflammatory reactions. This could potentially explain why withanolides can promote apoptosis while limiting invasion, as it implies that they inhibit NF-kB activity and NF-kB-regulated gene expression [48]. Withanolide D has been shown to decrease the levels of anti-apoptotic genes (TERT, Bcl-2, and Puma) in a study conducted in a murine leukemia model [49]. Under conditions of high levels of reactive oxygen species (ROS), a novel fraction of proteins isolated from WS roots, with an IC50 of 92 µg/mL, induced mitochondria-mediated apoptosis in triple-negative breast cancer cells (MDA-MB-231) through dysregulated Bax/Bcl2 expression and simultaneous disruption of mitochondrial membrane potential (∆Ψm).
CASP3 activation, G2/M cell cycle arrest, and nuclear lamina protein disintegration were also described [50]. Additionally, pro-apoptotic and tumor-promoting proteins in the signaling cascade were altered by the crude water extract (0.5%) of WS, which contributed to the inhibition of tumor growth [49]. Exposure to withaferin A has been reported to increase the number of late apoptotic cells and the accumulation of cells arrested in the sub-G1 phase of the cell cycle. It has been connected to the cleavage of CASP-3 and PARP, which induces apoptosis [44,[51][52][53]. In light of the aforementioned findings, investigating the dose-related apoptotic mechanisms of related genes, as demonstrated in our research and conclusions, is a viable pharmacotherapeutic approach for the management of colorectal cancer. However, additional in vitro research findings are necessary to verify the current findings prior to transitioning to in vivo experimental trials.
Phytotherapeutic Effects of Both Extracts on Bcl-xL and Bcl2 Regulation
Pro- and anti-apoptotic components of the Bcl-2 protein group are among the best-defined groups of proteins associated with the regulation of cell death. Components of this family with anti-apoptotic properties, such as Bcl2 and Bcl-xL, block apoptosis either by binding proforms of the death-inducing cysteine proteases, or by stopping the release of mitochondrial apoptogenic substances, like cytochrome c and AIF (apoptosis-inducing factor), from these organelles. Cytochrome c and AIF effectively activate caspases as they enter the cytoplasm. Caspases subsequently degrade several cellular proteins, inducing apoptosis [9]. Our research showed that, in a dose-related way, stem preparations from WS dramatically reduced the level of expression of Bcl-xL and Bcl2. Comparable modulatory responses were distinctively expressed by both genes; the reason for their comparable activity is that they belong to a common anti-apoptotic genetic group [15]. Reduced Bcl-xL and Bcl2 expression is associated with enhanced apoptotic effects, which are essential for inhibiting the growth of malignant cells. Our findings suggest that inhibition of Bcl-xL and Bcl2 promotes programmed cell death [54] and may be an appropriate candidate strategy for CRC intervention. The treatment extracts presented in our investigation exhibited a distinctive suppression of Bcl-xL and Bcl2, which we attribute to the presence of a variety of apoptosis-stimulating functional metabolites previously noted as being present in WS [44]. It has been reported that withaferin A increases apoptosis through PARP and CASP3 degradation and decreases levels of anti-apoptotic proteins such as Bcl-xL and Bcl2 [44,[51][52][53].
According to our empirical studies, Bcl-2 and Bcl-xL inhibition reduces their activity, which in turn leads to a reduction in the growth of Caco-2 cells. Pharmacologic modulation of proteins from the Bcl-2 family will be limited until a more precise knowledge of how these molecules regulate cell apoptosis is obtained. Despite our effective in vitro utilization, our investigation highlights the need for more studies on Bcl-2 family interactions and their pharmacological modification using commercially available WS plant-based constituents in vivo. For the first time, we report that the potency of WS methanolic root and stem extracts is essential in regulating Bcl2 and Bcl-xL and that the corresponding intracellular signaling pathways inhibit CRC growth. Based on our findings, the utilization of methanol for effective extraction of Bcl-xL- and Bcl2-modulatory phytoconstituents is highly recommended.
Phytotherapeutic Effects of Both Extracts on 5-LOX Regulation
Some LOXs create intermediates (metabolites) in the arachidonic acid pathway that appear to promote carcinogenesis, so disrupting these pathways may be useful in delaying the advancement of CRC and other malignancies [55][56][57]. 5-LOX and its products leukotriene (LT)-B4 and 5(S)-hydroxy-6E,8Z,11Z,14Z-eicosatetraenoic acid (5-S-HETE) are two of these LOXs and intermediates [56]. Thus, 5-LOX is a dioxygenase that converts arachidonic acid to 5-S-HETE, which is further converted to LTB4 by LTA4 hydrolase [58]. Our findings demonstrated that the level of activity of 5-LOX was dramatically inhibited in a dose-related way by the utilized methanolic extracts (roots and stems). Therefore, it was concluded that methanol is a remarkable extraction solvent for obtaining the highest concentration of beneficial metabolites that can inhibit the expressive characteristics of 5-LOX.
It is unclear what steps lead to the activity of the 5-LOX gene during neoplastic transformation; however, 5-LOX activity does seem to be periodically enhanced [56]. LOX inhibitors reduce the development of cancerous cells both in vivo and in vitro and cause death through mitochondrial pathways [57,58]. WS preparations with an IC50 value of 0.92 mg/mL and a 65% suppression capacity for 5-LOX have also been reported by other researchers [18]. The results of this study and additional studies suggest that modulation of the 5-LOX signaling cascade, whether by suppression or by altered regulation, may be a potential target for both the prevention and the therapy of CRC. Thus, this study validates the use of WS natural 5-LOX inhibitory compounds for the therapeutic management of human colorectal cancer.
Conclusions
In this study, multiple dosages of W. somnifera L. methanolic extract were applied in vitro to Caco-2 cell lines. The stem and root extracts were empirically demonstrated to be exceptionally effective in modifying the functional characteristics of cyclooxygenase-2, caspase-9, 5-Lipoxygenase, B-cell lymphoma-extra-large, and B-cell lymphoma 2. In addition, our results demonstrated that methanol is a viable extraction solvent for powerful metabolites that significantly modulate the expression of CRC-associated genes. Therefore, given the abundance of phytoconstituent biomolecules present in the plant, it is plausible to utilize WS as a potential colon cancer inhibitor with strong phytotherapeutic effects.
The regulatory effects observed in this study solidify WS's status as a valuable plant of choice, with its stem and root extracts demonstrating substantial suppressive characteristics against CRC. This study is among the first to establish a direct correlation between the useful properties of WS preparations and specifically selected genes, and possible medicinal applications for CRC management. However, further in vitro work is required to corroborate the current conclusions. In this regard, we strongly suggest the subsequent evaluation of WS in animal models (in vivo) and, later on, in clinical applications, following additional successful in vitro experimental trials. Further, we recommend the application of methanol as an excellent solvent for the extraction of bioactive metabolites from WS, to optimize regulatory benefits in the activities of cyclooxygenase-2, caspase-9, 5-Lipoxygenase, B-cell lymphoma-extra-large, and B-cell lymphoma 2. Lastly, we endorse further investigation into the particular phytochemicals linked to the regulatory mechanisms that prevent the development and advancement of CRC cells.
2.9. Equipment and Procedure for Quantitative Real-Time PCR (SYBR Green Protocol)
Accurate, repeatable, and sensitive nucleic acid quantification is made possible by qRT-PCR. Observing the guidelines provided by the manufacturer, a Roche LightCycler 480 qPCR platform (Thermo Fisher Scientific, Rockford, IL, USA) with 96-well plates was utilized for one-step PCR, comprising reverse transcription and amplification. The One-Step Detect SyGreen Lo-ROX one-step RT-PCR kit (Nucleotest Bio Ltd., Budapest, Hungary) was used for the procedure. The thermal program was set up as follows: incubation at 42 °C for 5 min, followed by 95 °C for 3 min; a fluorescence output was acquired at the end of each of the forty-five cycles (95 °C for 5 s, 56 °C for 15 s, and 72 °C for 5 s). Melting curve analysis (95 °C for 5 s, 65 °C for 60 s, then continuous acquisition up to 97 °C) was performed after each run to verify the specificity of the amplification. The reaction mix was as follows: 10 µL of Master Mix, 0.4 µL of RT Mix, 0.4 µL of dUTP, and 0.4 µL of primers were added to 5 µL of mRNA template fortified with sterilized double-distilled water, for an aggregate volume of 20 µL. The primers were developed by Integrated DNA Technologies (Bio-Sciences Ltd., Budapest, Hungary), and the sequences were constructed with the Primer Express™ Software v3.0.1 package, as shown in Table 1.
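The per-well volumes above lend themselves to a simple master-mix calculation when scaling to a full plate. The sketch below treats the stated 0.4 µL of primers as the total primer volume per reaction and adds a hypothetical 10% pipetting surplus; both assumptions are illustrative and not taken from the study protocol.

```python
# Sketch of scaling the per-well reaction described above to n reactions.
# Treats "0.4 uL of primers" as the total primer volume and computes the water
# needed to reach 20 uL; the 10% surplus is an illustrative convention, not a
# figure from the study.

PER_WELL_UL = {
    "master_mix": 10.0,
    "rt_mix": 0.4,
    "dutp": 0.4,
    "primers": 0.4,
    "template": 5.0,
}
TOTAL_UL = 20.0

def plate_mix(n_reactions, surplus=0.10):
    water = TOTAL_UL - sum(PER_WELL_UL.values())      # 3.8 uL per well
    scale = n_reactions * (1 + surplus)
    volumes = {name: vol * scale for name, vol in PER_WELL_UL.items()}
    volumes["water"] = water * scale
    return volumes

for reagent, vol in plate_mix(n_reactions=45).items():
    print(f"{reagent:>10}: {vol:7.1f} uL")
```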
Figure 1. COX-2 responses after exposure to stem and root extracts.
Figure 2. CASP9 responses following exposure to intervention therapy (stem and root extracts).
Figure 5. 5-LOX responses following exposure to stem and root extract intervention.
Table 1. Forward and reverse primer sequences adopted and applied in our experimental study.
| 7,529.2 | 2024-04-01T00:00:00.000 | [
"Medicine",
"Biology",
"Environmental Science"
] |
From Ivacaftor to Triple Combination: A Systematic Review of Efficacy and Safety of CFTR Modulators in People with Cystic Fibrosis
Over the last years CFTR (cystic fibrosis transmembrane conductance regulator) modulators have shown the ability to improve relevant clinical outcomes in patients with cystic fibrosis (CF). This review aims at a systematic research of the current evidence on efficacy and tolerability of CFTR modulators for different genetic subsets of patients with CF. Two investigators independently performed the search on PubMed and included phase 2 and 3 clinical trials published in the study period 1 January 2005–31 January 2020. A final pool of 23 papers was included in the systematic review for a total of 4219 patients. For each paper data of interest were extracted and reported in table. The patients who had the most beneficial effects from CFTR modulation were those with one gating mutation receiving IVA (ivacaftor) and those with the p.Phe508del mutation, both homozygous and heterozygous, receiving ELX/TEZ/IVA (elexacaftor/tezacaftor/ivacaftor), who showed the most relevant improvements in terms of lung function, pulmonary exacerbation decrease, and symptom relief. CFTR modulators showed an overall favorable safety profile. Next steps should aim to systematize our comprehension of the efficacy and safety data coming from real-life observational studies.
Cystic Fibrosis and the CFTR Protein
Cystic fibrosis (CF), the most common autosomal recessive disease in Caucasian populations, is a life-limiting condition, with respiratory failure secondary to end-stage lung disease being the main cause of mortality. Early diagnosis through newborn screening, multi-professional care in dedicated centers, advancements in strategies of treatment, and availability of therapies have gradually improved median predicted survival up to 50 years [1].
CF is caused by mutations in the Cystic Fibrosis Transmembrane Conductance Regulator (CFTR) gene, first described together with its protein product in 1989 [2,3]. CFTR encodes a low conductance cAMP-dependent chloride channel located at the apical membrane of epithelial cells in several tissues including airways, the gastro-intestinal tract, sweat glands, and the male reproductive tract [4]. The protein includes two transmembrane domains (TMD1 and TMD2), two nucleotide-binding domains (NBD1 and NBD2), and a regulatory (R) domain.
Drug Development
CFTR function may be partially rescued by molecules known as modulators, which include potentiators (ivacaftor), that increase conductance of the CFTR channel, and correctors (lumacaftor, tezacaftor, elexacaftor), that improve CFTR trafficking to the cell surface.
Ivacaftor (IVA) was the first of these molecules that proved effective in a phase III clinical trial, with significant pulmonary and nutritional improvements, and paved the way to a new generation of precision medicine drugs in CF [8]. In 2015 the North American and European regulatory agencies licensed a combination of ivacaftor and lumacaftor (LUM), the latter being a corrector of the p.Phe508del folding and trafficking defect. In spite of a modest improvement of lung function and other clinical outcomes, this association had the merit of targeting for the first time the most common CFTR mutation and of widening the population treatable with modulators [9].
Following the increased knowledge on the functioning of CFTR modulators and the development of alternative experimental models, new compounds have been experimented. For patients with p.Phe508del and a residual function mutation in trans, treatment with tezacaftor/ivacaftor (TEZ/IVA) was effective in terms of lung function and resulted in a significantly lower rate of pulmonary exacerbations than placebo [10].
However, neither of these double combinations was found to be satisfactorily effective in patients carrying a single p.Phe508del and a second CFTR mutation that does not respond to previous CFTR modulator therapy (so-called 'minimal function' mutations). VX-659 and VX-445 (elexacaftor, ELX) are next-generation correctors with different mechanisms of action than the previous generation correctors LUM and TEZ.
Co-treatment with two complementary correctors proved to be the most effective strategy to improve the expression of corrected p.Phe508del CFTR protein at the cell surface on the basis of both in vitro activity and clinical results [11,12]. The combination of ELX and TEZ with IVA has recently extended the treated population also to those with a CFTR minimal function mutation [13]. In addition, both TEZ/IVA and ELX/TEZ/IVA showed better results than LUM/IVA in p.Phe508del homozygous patients [14,15].
This review aims at a systematic research of the current evidence on efficacy and tolerability of CFTR modulators nowadays available for different genetic subsets of patients with CF.
Search Methodology
This systematic revision was conducted according to the PRISMA statement [16]. Two investigators (AG and MC) independently performed the search on PubMed and screened the literature in order to identify phase 2 and 3 studies published in the study period 1 January 2005-31 January 2020. Key phrases included 'cystic fibrosis' or 'CFTR' and 'clinical trial'; 'cystic fibrosis AND modulators'. The search was extended also to other databases (EMBASE, Cochrane Central Register for Controlled Trials, Cochrane Database of Systematic Reviews). In order to increase the search sensitivity, the reference lists of the selected papers were also assessed manually. Non peer-reviewed papers were not selected due to poor methodological reliability.
Study Selection
After literature search, two independent investigators reviewed titles and abstracts in order to select those that fulfilled the study criteria; in case of disparity, a final decision was taken by a third reviewer (SA). The authors included only phase 2 and 3 clinical trials published in the mentioned study period. In accordance with the inclusion criteria mentioned above, literature on new modulators dealing with in vitro and preclinical data as well as phase 1 clinical trials, although of interest, was excluded. Phase 2 trials that have been completed with results but not yet undergone the peer-review process and publication were excluded from the systematic analysis. Articles were also excluded if (1) written in languages other than English; (2) they were abstracts presented in national and international congresses; (3) they were commentaries, correspondences, editorials, case-series. The full text was obtained for selected papers.
Data Extraction
Data of interest were extracted from each included paper. Qualitative and quantitative data were extracted by the same reviewers (AG and MC) who performed the study selection, with the help of a third reviewer (SA) if needed. Data of interest included name of the first author, year of publication, phase of study, study population and sample size, type of intervention, duration, primary and secondary endpoints, report of adverse events (AEs), and number of patients who interrupted the study drug due to the occurrence of any AE. Corresponding authors were contacted if data were unclear or not reported in the full text. In consideration of the heterogeneity of the papers, a meta-analysis was not performed. Figure 1 shows the selection process and the search results. Qualitative reviews, retrospective analysis, and in vitro experiments were rejected. A pool of 23 studies was included in the systematic review with a total of 4219 (age 6-11: 436; age 12+: 3783) patients. The papers selected were published between the years 2005 and 2020. We included 11 phase 2 and 12 phase 3 clinical trials. Characteristics of selected clinical trials are reported in Table 1.
Primary and Secondary Outcomes
Analysis of outcome measures is reported in Table 1. Most studies assessed as primary outcome the improvement in lung function, measured by the absolute change in ppFEV1 from baseline through the study period (n = 13, 68%). A second group of studies, mainly phase 2 clinical trials, evaluated safety and tolerability as their primary outcome (n = 7, 37%). Sweat chloride concentration and number of pulmonary exacerbations (PEX) were the most prevalent secondary outcomes in 15 (79%) and six (31%) studies, respectively. Patient-reported outcomes (PROs) were considered as primary or secondary outcomes in 19 studies (83%). In all cases PROs were evaluated by the use of CFQ-R Respiratory domain score.
Lung Function
Patients who had the most beneficial effects from CFTR modulators were those with one gating mutation receiving IVA and those with p.Phe508del mutation, both homozygous and heterozygous, receiving ELX/TEZ/IVA [8,13,15].
Treatments with IVA alone and LUM alone were not effective on lung function in p.Phe508del homozygous patients in two different phase 2 clinical trials, while the LUM/IVA association increased ppFEV1 in both phase 2 and phase 3 trials in patients homozygous for p.Phe508del (+6.1 and +4.8 points, respectively) [9,18,19,22]. TEZ/IVA significantly improved FEV1 in patients ≥12 years both p.Phe508del homozygous and p.Phe508del along with a residual function mutation in comparison to placebo (ppFEV1 +4.0 and +6.8 points, respectively) [10,14]. In this last study group, TEZ/IVA was associated with ppFEV1 improvement of 2.1 points compared to IVA alone [10]. No change in FEV1 was described for TEZ alone in both populations [27].
Two triple combinations, of either VX-659 or VX-445 (ELX) plus TEZ/IVA, resulted in an overall improvement in ppFEV1 compared to placebo in two phase 2 clinical trials recruiting patients ≥12 years who were either p.Phe508del homozygous (9.7 and 11.0 points, respectively) or carried p.Phe508del along with a minimal function mutation (13.3 and 13.8 points, respectively) [11,12]. Treatment with ELX/TEZ/IVA confirmed significant improvements in lung function in the same patient populations, with ppFEV1 increases over baseline of 10.4 and 13.8 points, respectively [13,15].
Referring to other genetic subsets, ataluren did not lead to any significant change in ppFEV1 in a population of patients aged 6 years or more carrying at least one non-sense mutation [21].
Change in Lung Clearance Index (LCI) was also considered as primary outcome by one single clinical trial [26]. Treatment with LUM/IVA demonstrated a slight but significant improvement in lung ventilation measured as change of LCI 2.5 from baseline (−1.09 units) compared to placebo in patients homozygous for p.Phe508del and aged 6-11 years.
On the contrary, ataluren did not result in significant reduction of PEX episodes compared to placebo for patients with at least one non-sense mutation in a phase 3 clinical trial [21].
Sweat Chloride
Sweat chloride concentration was a relatively common secondary outcome in trials evaluating CFTR modulators. The greatest improvement in sweat chloride was described in patients carrying one gating mutation receiving IVA (48.1 mmol/L in p.Gly551Asp and 49.2 mmol/L in other gating mutations) and those with p.Phe508del mutation, both homozygous and heterozygous, receiving ELX/TEZ/IVA (43.4 mmol/L in homozygotes and 41.2 mmol/L in heterozygotes) [8,13,15,23].
Patient-Reported Outcomes
Patients homozygous for p.Phe508del and patients with p.Phe508del and a minimal function mutation in trans receiving ELX/TEZ/IVA had the most significant improvements on quality of life in terms of CFQ-R Respiratory domain improvement (20.7 and 25.7 points compared to placebo, respectively) [13,15].
Compared to placebo, IVA improved quality of life with similar magnitude in three different groups: patients with p.Gly551Asp, patients with gating mutations other than p.Gly551Asp, and patients with the p.Arg117His mutation (8.6, 9.6, and 8.4 points in CFQ-R, respectively) [8,23,24]. Treatment with TEZ/IVA was associated with a CFQ-R increase of 11.1 points in patients with a residual function mutation and 5.1 points in p.Phe508del homozygotes [10,14]. Significant but smaller improvements in CFQ-R were reported for patients receiving LUM/IVA (2.2 points) [9]. There was no improvement in the CFQ-R Respiratory domain score in comparison to placebo for p.Phe508del homozygous patients on IVA and for patients aged 6-11 years on LUM/IVA [18,26]. Furthermore, ABBV-2222 on top of IVA in patients with one gating mutation and GLPG2737 on top of LUM/IVA in patients homozygous for p.Phe508del did not lead to an increase in CFQ-R compared to placebo [29,31].
Safety
The highest rate of study drug discontinuation was reported during treatment with ataluren in patients carrying a non-sense mutation (3.4%) [21]. Patients homozygous for p.Phe508del and treated with LUM/IVA discontinued the drug in 2.8% of the study cohort, the highest discontinuation rate among CFTR modulators approved for clinical use [9]. The most frequent adverse events were respiratory events including PEX, increase in cough or expectoration, upper respiratory tract infections, or hemoptysis.
Regarding extra-respiratory events, headache and diarrhea were the most reported. Table 2 summarizes the 15 most common adverse events across studies involving the four CFTR modulators currently available in clinical practice.
Discussion
To the best of our knowledge, this is the most inclusive systematic review on efficacy and safety of CFTR modulators in people with CF, including also data from recent clinical trials on the triple combination therapy [32].
Expansion of the Target Population
Clinical studies on CFTR modulators span over a period of about 10 years, with a gradual enlargement of the genotypes reached by the trials. While in the 2010-2013 period several studies focused on IVA in patients carrying p.Gly551Asp, later the most tested compounds were LUM/IVA and TEZ/IVA in individuals with p.Phe508del/p.Phe508del or p.Phe508del/any genotypes, that together account for up to 85% of the affected alleles in North America and Europe [33].
A further expansion of the therapeutic indications of these compounds took place along three main research paths. The first was testing the highly effective IVA in patients carrying mutations sharing the same class 3 functional classification as p.Gly551Asp or preserving some residual function: IVA was demonstrated to be effective in patients carrying non-Gly551Asp gating mutations or the p.Arg117His mutation [23,24]. In 2017, the FDA approved the extension of the use of IVA to treat additional CFTR mutations based on results from an in vitro cell-based model system [34].
Second, CFTR modulators previously tested for safety and efficacy in adults were examined in subsets of the pediatric population, as in the case of Davies and Ratjen, who tested IVA and LUM/IVA in children aged 6-11 years [20,26]. Only eight studies out of a total of 19 exclusively enrolled adults (age 18+), while the majority involved a mixed population of adolescents, younger adults, and adults with CF (age 12+). Overall, a total of 11 studies, a slight majority, involved groups of patients younger than 18 years.
Finally, randomized controlled trials (RCTs) have been carried out to examine new molecules and combinations [10][11][12][14]. Following the controversial clinical effects of lumacaftor (LUM), additional research through high-throughput screening has been performed to identify next-generation correctors [35]. To date, TEZ and ELX have demonstrated the best pharmacological properties and clinical efficacy in rescuing p.Phe508del CFTR. ELX in particular made it possible to target new CF subpopulations, including those carrying p.Phe508del together with either residual or minimal function mutations.
Efficacy on Lung Function
Before the triple combination became available, among all CF genotypes it was patients with p.Gly551Asp receiving IVA who achieved the best primary endpoint results in a clinical trial [8]. Long-term treatment with IVA in patients with gating mutations has also been associated with a significant decrease in mortality and in the need for lung transplantation [36].
The degree of lung function and PEX improvement in p.Gly551Asp patients receiving IVA was for several years the unmatched benchmark, and IVA was considered the only highly effective CFTR modulator [8]. Recently, the triple combination in p.Phe508del homozygotes and in p.Phe508del heterozygotes whose second mutation has minimal function showed clinical benefits of a magnitude similar to those of p.Gly551Asp patients treated with IVA. In fact, the latter group of patients in the phase 3 clinical trial achieved a 10.6-point increase in ppFEV1 compared to placebo, and patients with two or one copy of p.Phe508del treated with ELX/TEZ/IVA experienced similar gains in ppFEV1, of 10.0 and 14.3 points, respectively [8,13,15]. These results acquire a particularly valuable clinical meaning considering that, before the triple combination, patients homozygous for p.Phe508del and treated with LUM/IVA (6+ years old) or TEZ/IVA (12+ years old) had much smaller increases in lung function (2.8 and 4 percentage points, respectively) in comparison to placebo [9,14].
In relation to residual function genotypes, the phase 3 study EXPAND showed an increase in ppFEV1 of 6.8 percentage points for patients with p.Phe508del in trans with a residual function mutation and treated with TEZ/IVA [10]. The ongoing study of ELX in association with TEZ and IVA in subjects with p.Phe508del and a gating or residual function mutation will confirm whether or not triple combination could be extended even to patients already treated with an effective CFTR modulator therapy (https://clinicaltrials.gov/, NCT04058353).
Although ivacaftor has demonstrated efficacy in achieving relevant clinical endpoints, it should also be noted that an effort to develop better CFTR potentiators is underway. These potential new treatments have shown promising results in clinical trials [29][30][31].
Efficacy on PEXs
The efficacy of CFTR modulators in PEX reduction was most relevant for p.Gly551Asp patients treated with IVA (−55% PEX frequency) and p.Phe508del/minimal function patients treated with triple combination (−63% PEX frequency) [8,15]. Although PEXs were not an efficacy outcome in the 4-week trial of triple combination in p.Phe508del homozygotes, there was a decrease in respiratory-related events in ELX/TEZ/IVA group compared with TEZ/IVA group [13,14]. Open-label extension studies will explore the reproducibility of this outcome in patients with p.Phe508del homozygosity and over a longer period of time.
Efficacy on Patient Reported Outcomes (PROs)
The magnitude of improvement in quality of life, and especially in the respiratory symptoms domain, was in line with the increase in ppFEV1 across studies, thus confirming from a patient perspective the clinical results.
Safety
Respiratory-related events were reported most frequently for LUM/IVA, which was also the only compound associated with chest tightness. In 2017, Popowicz demonstrated that ppFEV1 acutely dropped by a mean of 19 percentage points (range −21% to −11%, p = 0.001) at 2 h after LUM/IVA initiation in patients with p.Phe508del homozygosity and moderate to severe lung function impairment [37]. These events are consistent with the relatively high number of patients who discontinued LUM/IVA during RCTs or open-label extension studies [9,38]. Some authors interpreted chest tightness and bronchospasm as a marker of therapeutic efficacy, as mucus fluidification and swelling following LUM/IVA administration might cause airway obstruction until phlegm is eventually expelled. However, a direct bronchospastic effect might be a more realistic explanation, given that TEZ/IVA and the new triple combination therapy are better tolerated than LUM/IVA both in RCTs and in real life.
Abnormal liver function tests have been reported as common adverse events in patients treated with CFTR modulators, both in children and adult populations; most of the events were low-to-moderate in severity and did not require drug discontinuation.
ELX/TEZ/IVA combination therapy was also associated with skin rash, both in the p.Phe508del homozygous and heterozygous study [13,15]. In the latter trial, the rash occurred in 22 patients (10.9%) in the triple combination group and 13 patients (6.5%) in the placebo group. All events were defined as low grade adverse events and resolved during the trial. Notably, in both trial groups rash was more common in women receiving a concomitant hormonal oral contraceptive.
Unanswered Questions
The introduction of CFTR modulators in clinical practice has produced a significant impact on short-term clinical outcomes in people with cystic fibrosis. However, high internal validity and strict inclusion criteria in RCTs inevitably lead to low representativeness of everyday clinical practice.
An open question is if there are substantial effects in the severe patient subgroup that did not meet lung function inclusion criteria and could not participate in the trials. Encouraging data come from a recent cumulative analysis of outcomes from clinical trials, taking into consideration patients whose ppFEV1 declined below 40 between screening and the randomization visit, and open-label extension studies [39]. However, severity was merely defined as ppFEV1 <40 and did not fully represent the wide range of real-life circumstances, like patients awaiting lung transplantation, continuous IV antibiotic therapy, recurrent life-threatening hemoptysis, or NTM pulmonary disease.
On the other side, no data are available about the effects of CFTR modulators on people with very mild or no pulmonary involvement. More evidence on this topic would be useful to clinicians in order to guide the choice on the best timing to initiate CFTR modulators in the absence of respiratory disease.
Since ELX/TEZ/IVA proved to be a new highly effective treatment for people with the p.Phe508del mutation, many issues have been raised, including its possible extension to rare mutations that currently have no indication for CFTR modulators. Both translational experiments and further understanding of the molecular basis of CFTR misfolding are now encouraged to answer this question.
A further question that lies unresolved is for how long clinical benefits secondary to CFTR modulation can be maintained. A global evaluation of open-label extended studies and real life reports from national registries might assess the long-term sustainability of the clinical benefits demonstrated in RCTs and the overall impact not only on lung health but also extra-respiratory involvement, co-morbidities, and mortality. Next steps should aim to systematize our comprehension of scientific data of efficacy and safety coming from real life observational studies.
New Strategies beyond CFTR Potentiators and Correctors
The molecules included in this systematic review all belong to the two categories of potentiators and correctors of the CFTR protein, along with their combinations. However, novel approaches to post-translational modifications of the CFTR protein are being explored in order to both address orphan mutated alleles and optimize correction of the p.Phe508del mutant protein. A complementary therapeutic strategy is based upon the fact that the effectiveness of correctors and potentiators depends not only on their potential to rescue CFTR function, but also on the quantity of protein to operate on. The assumption is that augmenting the pool of CFTR available for modulation might translate into greater function improvements. A recent high-throughput screening identified a new class of modulators acting neither as potentiators nor correctors, termed amplifiers. These compounds have been shown to increase the expression of CFTR mRNA and immature CFTR protein across different CFTR classes and mutations, including p.Phe508del [40]. Nesolicaftor (PTI-428, Proteostasis) is the first amplifier tested both in vitro and in clinical trials. A study on HBE cell lines and patient-derived nasal cultures by Molinski et al. demonstrated a significantly higher increase in CFTR mRNA levels when the combination of PTI-428 and LUM was used in comparison to LUM treatment alone [41]. A phase 2 trial of nesolicaftor in combination with TEZ/IVA therapy in individuals with CF homozygous for the p.Phe508del mutation has recently been completed. Although PTI-428 did not reach the expected lung function endpoint, the authors demonstrated an increase of approximately 50% in CFTR production (NCT03591094) [42]. Nesolicaftor has also been studied in triple combination with two other CFTR correctors, PTI-801 and PTI-808 (Proteostasis Therapeutics). The phase 2 study in patients with two copies of the p.Phe508del mutation showed a significant 5-point increase in ppFEV1 (NCT03500263). Proteostasis announced it will start two phase 3 clinical trials in p.Phe508del homozygotes to follow up on the promising results of this triple combination.
Another therapeutic approach aims at stabilizing the mutant CFTR protein which, after proper trafficking to the cell surface, is already included in the membrane. The targets are variants with reduced stability, like Class VI mutations or LUM-rescued p.Phe508del protein, where the misfolding of the mutant protein prevents recycling and promotes lysosomal targeting and accelerated endocytosis [43]. A novel class of CFTR modulators, named stabilizers, exerts its effect by targeting conformation-dependent ubiquitination and other molecules involved in the peripheral quality control, thus increasing the half-life of the CFTR channel and preventing premature degradation [44]. Several different molecules have been demonstrated to improve CFTR stabilization, including the hepatocyte growth factor (HGF), the vasoactive intestinal peptide (VIP), and the inhibitors of S-nitrosoglutathione reductase (GSNOR). Among them, cavosonstat (N91115, Nivalis Therapeutics) showed strong inhibitory activity against GSNOR, thus preserving levels of S-nitrosoglutathione and positively affecting the stability of CFTR [45]. The same compound together with CFTR correctors showed in vitro a significant increase in the expression and activity of p.Phe508del [46]. In a recent phase 1 trial, cavosonstat was well tolerated and demonstrated a significant reduction in sweat chloride when administered at the highest tested dose [47]. However, in a series of two phase 2 trials, cavosonstat combined with IVA and LUM/IVA did not demonstrate significant effects on lung function (NCT02724527 and NCT02589236, respectively). To the best of our knowledge, no further clinical development for cavosonstat is planned at this time.
"Medicine",
"Biology"
] |
The Mediterranean Mussel (Mytilus galloprovincialis) as Intermediate Host for the Anisakid Sulcascaris sulcata (Nematoda), a Pathogen Parasite of the Mediterranean Loggerhead Turtle (Caretta caretta)
Sulcascaris sulcata (Anisakidae), a pathogenic nematode of sea turtles, may cause ulcerous gastritis with different degrees of severity. Previous studies demonstrated a high prevalence of infection in the Mediterranean loggerhead turtle (Caretta caretta), although no data on the potential intermediate hosts of this nematode have been published thus far from the Mediterranean basin. Here, using molecular analyses, we demonstrate that the cross sections of nematode larvae observed histologically in Mediterranean mussels (Mytilus galloprovincialis) collected from a farm along the Tyrrhenian coast of southern Italy belong to S. sulcata. BLAST analysis of the sequences obtained here at the ITS2 region of rDNA and the mtDNA cox2 gene locus from two Mediterranean mussels containing nematode larvae showed 100% homology with sequences at the same gene loci from adults of S. sulcata collected from the Mediterranean Sea and deposited in GenBank. To our knowledge, this study is the first to present data on a potential intermediate host of S. sulcata in the Mediterranean basin and to report a nematode parasite from the Mediterranean mussel.
Introduction
The anisakid nematode Sulcascaris sulcata is a pathogenic parasite of the esophagus and stomach of sea turtles, able to cause ulcerous gastritis with different degrees of severity, depending predominantly on the intensity of infection. S. sulcata infects the loggerhead turtle (Caretta caretta), green turtle (Chelonia mydas), and Kemp's ridley turtle (Lepidochelys kempii) in the Mediterranean and Caribbean Seas, and the South Atlantic, Western Atlantic, and Western Pacific Oceans [1].
Berry and Cannon [2] demonstrated experimentally that hatchling loggerhead turtles become infected by ingesting scallops infected with fourth-stage larvae of S. sulcata. Larvae attach at the base of the esophagus, where four molts occur about three weeks after infection, and mature to adults in at least 5 months. Adult parasites live in the stomach of sea turtles and eggs are shed into the marine environment.
Table 1. Marine molluscan hosts for larval forms of Sulcascaris sulcata, updated from Lichtenfels et al. [3] (partial entry: Polinices sordidus, experimental, Australia [2]).
While studying the occurrence of protozoan parasites in the Mediterranean mussel (Mytilus galloprovincialis) along the coast of the Campania region of southern Italy, larval forms of nematodes were observed histologically in the tissues of Mediterranean mussels. Herein, using molecular analysis, we report for the first time the occurrence of larvae of S. sulcata in Mediterranean mussels from the Tyrrhenian Sea.
Results
A total of five (1.4%) individual Mediterranean mussels, collected in February (one mussel), May (one mussel), July (two mussels), and August (one mussel), were histologically positive, with one (n = 3), two (n = 1), and three (n = 1) cross sections of nematode larvae, respectively. Cross sections of larvae (n = 4) measured on average 290.5 µm (range: 178 to 430.9) × 202.5 µm (range: 149.4 to 292.8). Larvae were encysted within the foot of the Mediterranean mussels, extending to the digestive gland, and elicited a host inflammatory reaction in all cases. In most cases, larvae appeared to be viable and were surrounded by well-defined hemocytic capsules.
BLAST analysis of the ITS2 rDNA and mtDNA cox2 sequences obtained here from the two Mediterranean mussels containing nematode larvae showed 100% homology with those of adult stages of S. sulcata from the Mediterranean Sea previously deposited in GenBank (Figures 1 and 2). The sequences obtained in the present study were deposited in GenBank under accession numbers MN736715.1 and MN736716.1 for ITS2, and MN991208 and MN991209 for cox2.
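For readers who want to reproduce this kind of identity check, the sketch below runs a remote BLASTN query with Biopython and reports percent identity for the top hits. It is a generic illustration rather than the authors' pipeline, and the FASTA file name is a hypothetical placeholder.

```python
# Minimal sketch (not the authors' pipeline) of a nucleotide BLAST search against
# GenBank's nt database for an ITS2 sequence, using Biopython's NCBIWWW interface.
# The file name "its2_larva.fasta" is a hypothetical placeholder.
from Bio import SeqIO
from Bio.Blast import NCBIWWW, NCBIXML

query = SeqIO.read("its2_larva.fasta", "fasta")      # larval ITS2 sequence
handle = NCBIWWW.qblast("blastn", "nt", query.seq)   # remote BLASTN search
record = NCBIXML.read(handle)

for aln in record.alignments[:5]:                     # report the top hits
    best = aln.hsps[0]
    identity = 100.0 * best.identities / best.align_length
    print(f"{aln.title[:60]}  identity={identity:.1f}%  e={best.expect:.1e}")
```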
(Figures 1 and 2, panels A and C, include the GenBank sequence MN699444_Adult1 from an adult S. sulcata.)
Discussion
The Mediterranean mussel has been intensively studied for pathogens throughout the Mediterranean as well as along the Campanian coastal areas [19][20][21], but to date, our study describes the first finding of a parasitic nematode in this bivalve species. According to McElwain et al. [22], parasitic nematodes are uncommon in marine bivalves. In mussels, the only record is by Lauckner [23], who reported an infection by an anisakid larva thought to be Phocanema (Pseudoterranova) decipiens in a North Atlantic Mytilus edulis.
In the Mediterranean Sea, the occurrence of S. sulcata in loggerhead turtles seems to be limited to its eastern basin and the Tyrrhenian Sea [1,24]. Recently, we observed that all Sulcascaris-positive loggerhead turtles from the Tyrrhenian Sea came from the coastal sites located between Castel Volturno and the Gulf of Naples (including Monte di Procida), where all the Mediterranean mussel farms and wild bivalve beds registered along the Campania coast are concentrated [1]. According to Berry and Cannon [2], it is plausible that the farmed Mediterranean mussels were infected by filtering seawater contaminated with S. sulcata eggs and/or larvae shed with the feces of an infected loggerhead turtle feeding on the mussel farm ropes.
"Biology"
] |
High-Order Harmonics Generation in MoS2 Transition Metal Dichalcogenides: Effect of Nickel and Carbon Nanotube Dopants
The transition metal dichalcogenides have attracted considerable interest as harmonic generators due to their exceptional nonlinear optical properties. Here, molybdenum disulfide (MoS2) molecular structures with dopants, brought into a plasma state, are used to demonstrate the generation of intense high-order harmonics. The MoS2 nanoflakes and nickel-doped MoS2 nanoflakes produced stronger harmonics with higher cut-offs compared with Mo bulk and MoS2 bulk. Conversely, the MoS2 with nickel nanoparticles and carbon nanotubes (MoS2-NiCNT) produced weaker coherent XUV emission than the other materials, which is attributed to the influence of phase mismatch. The influence of heating and driving pulse intensities on the harmonic yield and cut-off energies is investigated in MoS2 molecular structures. The enhanced coherent extreme ultraviolet emission at ~32 nm (38 eV) due to the 4p-4d resonant transitions is obtained from all the aforementioned molecular structures, except for MoS2-NiCNT.
Introduction
Transition metal dichalcogenides (TMDs), such as molybdenum disulfide (MoS2), are hexagonal semiconductors in which a layer of transition metal atoms is sandwiched between two layers of chalcogen atoms, with adjacent layers held together by weak van der Waals forces. 2D TMDs are a developing class of materials with properties that make them particularly appealing for fundamental research and the search for novel physical phenomena [1][2][3][4][5]. These materials are very compact, consisting of a few layers on a 2D atomic scale, and have a direct bandgap. As a result, they can provide a solid interface with the incident photon and have advantageous properties like broadband absorption, transparency, and high carrier mobility. The interest in 2D TMDs, particularly MoS2 monolayers, has resulted in a variety of applications [6][7][8][9][10][11]. The bandgap of TMDs depends on the number of layers and the type of dopants. Studying the modified morphological, linear, and nonlinear optical properties of 2D TMDs is a significant research topic, as they can be utilized in evolving next-generation high-performance optical devices.
Additionally, TMDs like molybdenum disulfide (MoS2) have received greater attention because of their large direct bandgap in few-layered material. We selected the molecular species (MoS2 semiconducting TMDs) to reveal the effect of any hidden resonance process on the yield of harmonic orders. The morphology, optical, and electrochemical properties of the MoS2 nanoflakes (NFs), MoS2-Ni, and MoS2-NiCNT molecular targets were described in [35]. Those results demonstrated that the bandgap of MoS2 NFs is increased and decreased by the addition of nickel NPs and carbon nanotubes, respectively. The bandgap variation results from the formation of smaller-sized flakes, perturbation of the electronic energy levels, and band edge shifts in the flakes. Moreover, X-ray diffraction measurements showed the absence of diffraction peaks from the added constituents in the MoS2-Ni and MoS2-NiCNT samples [35]. Thus, those results demonstrated that MoS2 NFs are the major constituent of the MoS2-Ni and MoS2-NiCNT samples, whereas the nickel NPs and carbon nanotubes influence the electronic, optical, and catalytic properties of the MoS2 NFs. We used nickel NPs and CNT composites as dopants of the MoS2 material because of their ability to enhance the nonlinear optical properties and the recombination cross-section of the delocalized π-electron cloud, which results in enhanced harmonic generation [28,30,31,36].
In HHG experiments, the fundamental pulse intensity must exceed the barrier-suppression intensity (BSI) of the targets to efficiently generate high-order harmonics. The BSI is defined as the threshold laser intensity required to ionize the electron from the nucleus and can be estimated as BSI ≈ 3.8 × 10^9 × Ip^4/Z^2 W/cm^2, where Ip is the ionization potential in eV and Z is the charge of the resulting ion. At the focus area, the minimum employed intensities of the picosecond HP and femtosecond DP were 2.1 × 10^9 W/cm^2 and 1.3 × 10^14 W/cm^2, respectively. The targets were kept away from the focus of the HPs to reduce the probability of crater formation (see Materials and Methods section). The first (and second) ionization potentials of Mo and MoS2 are 7.09 eV (16.1 eV) and 5.28 eV, respectively. The calculated BSIs for Mo, Mo+, and MoS2 are ~9.6 × 10^12 W/cm^2, ~6.4 × 10^13 W/cm^2, and ~3 × 10^12 W/cm^2, which are lower than the minimum DP intensity. Thus, the employed intensities allow for generating high-order harmonics from the laser-induced plasmas containing ions and neutrals of the molybdenum-containing molecular targets.
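The BSI estimates quoted above can be reproduced directly from the formula; the short sketch below does this for the ionization potentials given in the text (a cross-check, not part of the original analysis).

```python
# Quick check (a sketch, not from the paper) of the barrier-suppression intensity
# estimate BSI ≈ 3.8e9 * Ip^4 / Z^2 W/cm^2, with Ip in eV and Z the charge of the
# ion produced; the ionization potentials are the values quoted in the text.
def bsi_w_per_cm2(ip_ev: float, z: int = 1) -> float:
    return 3.8e9 * ip_ev**4 / z**2

targets = {
    "Mo   -> Mo+   ": (7.09, 1),   # first IP of Mo
    "Mo+  -> Mo2+  ": (16.1, 2),   # second IP of Mo
    "MoS2 -> MoS2+ ": (5.28, 1),   # first IP of MoS2
}
min_dp_intensity = 1.3e14  # W/cm^2, minimum driving-pulse intensity quoted above

for name, (ip, z) in targets.items():
    bsi = bsi_w_per_cm2(ip, z)
    print(f"{name}: BSI ≈ {bsi:.2e} W/cm^2, below DP? {bsi < min_dp_intensity}")
```

Running this reproduces the ~9.6 × 10^12, ~6.4 × 10^13, and ~3 × 10^12 W/cm^2 values stated above.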
The obtained HHG spectra from the laser-induced plasmas of the studied molecular materials at 7.3 × 10^9 W/cm^2 (HPs) and 4.7 × 10^14 W/cm^2 (DPs) are illustrated in Figure 1a. For better understanding, the harmonic intensities of the low-order harmonics from all ablated targets are depicted as bar diagrams in Figure 2a. The harmonic cut-offs from the Mo bulk, MoS2 bulk, MoS2 NFs, MoS2-Ni, and MoS2-NiCNT plasma plumes extend up to 31H (48.05 eV), 31H (48.05 eV), 33H (51.1 eV), 37H (57.34 eV), and 29H (44.9 eV), respectively. The obtained harmonic plateau regions of Mo bulk, MoS2 bulk, MoS2 NFs, MoS2-Ni, and MoS2-NiCNT are 13H-17H, 11H-17H, 11H-15H, 9H-13H, and 9H-13H, respectively. The observed harmonic cut-off from the atomic Mo bulk laser-induced plasma is consistent with previously reported results [18,19]. The observed cut-off energy from the plasma produced on the MoS2 molecular bulk target is similar to that from the Mo atomic bulk target, whereas the total harmonic yield in the former case was ~10% smaller than in the latter case.
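Since the cut-offs are reported both as harmonic orders and photon energies, a quick conversion assuming 800 nm driving pulses (the wavelength stated later in the text) shows how the two are related.

```python
# A small arithmetic sketch converting harmonic orders of an assumed 800 nm driving
# pulse to photon energies and wavelengths, to cross-check quoted values such as
# 31H ≈ 48 eV and 37H ≈ 57.3 eV.
H_EV_NM = 1239.84          # hc in eV*nm
DP_WAVELENGTH_NM = 800.0
photon_ev = H_EV_NM / DP_WAVELENGTH_NM   # ≈ 1.55 eV per driving photon

for order in (21, 25, 29, 31, 33, 37):
    energy_ev = order * photon_ev
    wavelength_nm = DP_WAVELENGTH_NM / order
    print(f"{order}H: {energy_ev:5.2f} eV, {wavelength_nm:5.1f} nm")
```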
The total harmonic yield obtained from the laser-induced plasmas of the MoS2 NFs and MoS2-Ni targets is 1.4 and 2 times larger than that from the plasma produced on the MoS2 bulk target. The total harmonic yield here means the summation of the peak values of all harmonics obtained from a target. The MoS2-Ni plasma allowed the generation of a total harmonic yield 1.5 times larger compared to the MoS2 NFs case. The main differences between the nanoflake and bulk targets are the material dimension and the large surface-to-volume ratio. The MoS2 nanoflakes' surface texture has the potential to absorb a larger amount of heating radiation compared to the bulk material. Hence, the illumination of NFs by HPs results in a higher rate of particle ejection into the plasma or, in other words, a denser plasma compared with the ablation of the bulk target. The harmonic yield is proportional to the density of plasma particles. Correspondingly, the harmonic intensity induced by MoS2 NFs should be larger than that by bulk MoS2. We did not compare the harmonic intensity by weight of the different MoS2-containing samples. We observed that the harmonic intensity from MoS2 NFs was higher compared to the ablated MoS2 bulk at the same driving and heating pulse intensities. We attribute this result to the nanoflakes' surface texture, which has the potential to absorb a larger amount of heating radiation compared with the bulk material.
The inclusion of particles as dopants in MoS2 molecular structures enhances their performance in electronic and optoelectronic devices by modifying their low-order nonlinear optical properties [13,[37][38][39][40]. The engineering of selenium doping in MoS2 demonstrated an enhanced second harmonic generation due to an increase in the second-order nonlinear susceptibility of this structure. Titanium, vanadium, and nickel doping enhanced the photosensitivity and electrical conductivity in 2D-MoS2 molecular materials. A third-order nonlinear absorption coefficient and strong nonlinear photoluminescence were exhibited by silver NPs embedded in MoS2 TMDs. Moreover, Ni-doped CdS thin films and ZnS NPs produced enhanced third harmonic generation compared to the undoped species due to enhanced third-order nonlinear optical properties. Thus, knowledge of the low-order optical nonlinearities of such species becomes crucial for determining the potential application of those properties in different areas of optoelectronics, as well as for determining the relation between their low- and high-order optical nonlinearities. Below, we present the studies of the low-order nonlinear optical properties of MoS2 NFs and MoS2-Ni suspensions by applying the femtosecond Z-scan technique at 800 nm wavelength using a 1.63 × 10^11 W/cm^2 intensity of DPs, to observe any correlation with the high-order nonlinear optical properties of those doped molecular species determined during the HHG experiments.
The Z-scan measurements were performed at an excitation pulse intensity of 269 GW/cm^2 in colloidal suspensions obtained by ultrasonic dispersion of the samples in distilled water. The measured normalized transmittances of the laser pulses propagated through the samples using the open-aperture and closed-aperture Z-scan arrangements are shown in Figure 1b,c, respectively. The solid curves correspond to the theoretical fits of the experimental data. The equations used for the theoretical fits are taken from [41].
Reverse saturable absorption and a combination of two-photon absorption and nonlinear refraction were observed in the studied samples (MoS2 NFs and MoS2-Ni suspensions) in the cases of the open- and closed-aperture Z-scan arrangements, respectively. The measured two-photon absorption coefficients and nonlinear refractive indices of MoS2 NFs (and MoS2-Ni) were 1.3 × 10^-11 (1.5 × 10^-11) cm/W and 5.2 × 10^-16 (6.9 × 10^-16) cm^2/W, respectively. The Z-scans of the samples showed that the nonlinear refractive indices and nonlinear absorption coefficients increased in the case of the nickel dopants, which might be due to localized surface plasmon-induced charge transfer. Interestingly, the enhanced harmonic intensity produced from MoS2-Ni correlates with the larger nonlinear absorption coefficient and refractive index compared to MoS2 NFs. The enhanced nonlinear absorption of MoS2-Ni may increase the absorption of the HP, which allows the formation of a denser plasma. Correspondingly, stronger harmonics will be produced from the denser plasmas. Moreover, the characterization of these samples revealed that the presence of nickel NPs leads to the formation of smaller-sized NFs [35]. Earlier HHG experiments using small-sized NPs demonstrated a higher yield of harmonics compared with larger-sized NPs due to the enhanced surface-to-volume ratio leading to increased absorption [42]. In the case of larger-sized NPs, the inner atoms do not contribute to the harmonics due to the absorption of the XUV emission and the screening of the accelerated electrons. Overall, the presence of nickel NPs affected the plasma density and the MoS2 NFs' dimensions, which caused the generation of stronger high-order harmonics.
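As a rough illustration of how such Z-scan traces are modeled, the sketch below evaluates the standard open-aperture transmittance expression for two-photon absorption (Sheik-Bahae formalism) using the β reported for MoS2 NFs; the sample thickness and Rayleigh range are assumed values, and this is not the fitting procedure of [41].

```python
# A sketch of the standard open-aperture Z-scan transmittance model for two-photon
# absorption; not the fitting code used in the paper. Sample thickness L and
# Rayleigh range z0 below are illustrative assumptions.
import numpy as np

beta = 1.3e-11        # cm/W, two-photon absorption coefficient reported for MoS2 NFs
I0 = 2.69e11          # W/cm^2, on-axis peak intensity quoted for the Z-scan
L = 0.1               # cm, assumed sample (cuvette) thickness
alpha0 = 0.0          # 1/cm, linear absorption neglected in this sketch
z0 = 0.5              # cm, assumed Rayleigh range of the focused beam

L_eff = L if alpha0 == 0 else (1 - np.exp(-alpha0 * L)) / alpha0

def open_aperture_T(z: float) -> float:
    q0 = beta * I0 * L_eff / (1 + (z / z0) ** 2)
    # Series expansion of the normalized transmittance, valid for q0 < 1
    m = np.arange(0, 10)
    return float(np.sum((-q0) ** m / (m + 1) ** 1.5))

for z in (-2.0, -1.0, 0.0, 1.0, 2.0):   # sample position along the beam axis, cm
    print(f"z = {z:+.1f} cm -> T ≈ {open_aperture_T(z):.3f}")
```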
The total harmonic yields from MoS 2 NFs and MoS 2 -Ni plasmas were 2.5 and 1.7 times larger than those produced from MoS 2 -NiCNT plasma. One can conclude that the addition of multi-walled CNT decreases the harmonic yield. Macroscopically, the harmonic yield is influenced by the interference of the harmonics emitted from every molecular, atomic, and ionic source of the target. Whether it is constructive or destructive interference depends on the phases of generated harmonics from different sources. Typically, single-walled CNTs have a wall diameter of 2.5 nm. The multi-walled CNTs (MWCNTs) can be regarded as a group of single-walled CNTs having broader diameter distribution. As mentioned above, the diameter of our MWCNTs was in the range of 250-500 nm. Hence, the total harmonic signal from MWCNTs can be considered as a combination of the harmonics obtained from different walls of the tubes. In our experiments, the MWCNTs were not aligned in the preferred direction or orientation, which may influence the phases of the harmonics emitted from them. The differently aligned MWCNTs emitted harmonics having different phases, which modified the total harmonic yield emitted from MoS 2 -NiCNT. Notice that the aligned MWCNTs produced stronger third-harmonic emission than nonaligned MWCNTs [43]. Hence, the reduction in harmonic yield from MoS 2 -NiCnt plasmas compared with MoS 2 NFs and MoS 2 -Ni plasmas can be attributed to the destructive interference of harmonics in the former case.
Resonance-Enhanced Harmonic Generation
The resonant and suppressed harmonics were observed at around ~32 nm and ~38 nm from all Mo-containing materials except for MoS2-NiCNT (Figure 1a). The bar diagrams in Figure 2b represent the resonant harmonic intensity (25H) and the neighboring longer-wavelength harmonics (21H and 23H). The four-step model explains the resonance-induced enhancement of a single harmonic [44]. The resonant harmonic generation from Mo-containing materials is due to the influence of 4p-4d resonant transitions of singly and doubly ionized Mo [17]. The suppression of harmonics at 33-34 eV (~21H of the 800 nm DP) in the laser-induced plasmas of Mo materials (except MoS2-NiCNT) is attributed to the destructive interference of the 4p-4d transitions with the 4d orbital recombination. The behavior of resonant transitions with recombined orbitals is important in HHG because, macroscopically, the yield depends on the interference of the high-order harmonics generated from every ionic and atomic source.
The resonant harmonic (25H) generated in Mo plasma was 11 and 1.7 times more intense than 21H and 27H, respectively. The resonant harmonic intensity in the case of ablated Mo is~2 times stronger in comparison with MoS 2 laser-induced plasmas. Moreover, the yields of resonant neighboring harmonics (23H, 27H, 29H) generated from MoS 2 are also smaller compared with Mo plasma. Hence, our results show that the plasmas from the targets containing sulfide ions produce a less intense resonant harmonic, which indicates that those ions affect the favorable conditions of resonance-induced enhancement of a single harmonic.
The resonant harmonic transitions are tailored by the driving beam characteristics (wavelength, intensity, etc.) and components present in HHG media. HHG in molecular plasmas containing oxides, phosphides, and selenides produces reduced or no resonant harmonics [45]. One can suggest that the detuning/shifting of the resonant transition reduces the oscillator strength (gf ) of this transition. Hence, the decreased resonant harmonic yield in our case might be due to the modification of the 4p-4d resonant transitions influenced by sulfide ion, which may diminish the gf of the transition thus leading to the decay of the enhancement of single harmonic. At the same time, MoS 2 plasma allows the generation of 21H, which is comparable with the case of Mo plasma when a similar suppressed harmonic was produced. This result shows that the influence of sulfide ions on the destructive bonding between 4p-4d resonant transitions and recombination to the 4d orbitals is negligible. For a detailed explanation of this effect, simulations need to be performed using the time-dependent density functional theory, which is out of the scope of the present article. Overall, the presence of sulfide ions in MoS 2 plasma produced less intense resonant harmonic photons and lesser total harmonic yield once compared with the Mo plasma.
The resonant harmonic (25H) generated in the MoS2-Ni plasma was 2.5 and 1.4 times stronger compared with the 25H produced in the MoS2 NFs and bulk MoS2 plasmas, respectively. Meanwhile, the resonant harmonic generated in the MoS2 bulk, MoS2 NFs, and MoS2-Ni plasmas was 4, 7, and 8 times stronger than the 21H emitted from the same plasmas. The suppressed harmonic intensity (21H) produced from the ablated targets corresponds to the following dependence: The enhanced 21H intensity from the MoS2-Ni and MoS2 NFs plasmas compared with the plasmas produced on the Mo and MoS2 bulks might be due to a higher plasma density leading to a larger concentration of electrons recombining with the atoms and molecules while emitting 38 nm radiation, despite the destructive interference of the contributions from the 4p-4d transitions with the 4d orbitals. On the other hand, we did not observe the enhanced resonant harmonic and the suppressed 21H relative to the remaining higher-order spectrum in the case of the MoS2-NiCNT plasmas. Moreover, the harmonic cut-off from MoS2-NiCNT was restricted to 27.5 nm (29H), which is the lowest cut-off compared to the other Mo-containing materials used in our HHG experiments.
The presence of misaligned MWCNTs in the target may either detune or shift the resonant harmonic transition. This modification will lead to either reduction or increase of the gf of this transition, which might lead to a generation of featureless harmonic distribution (i.e., without the enhancement of 25H) or demonstration of stronger enhancement of this single harmonic or other harmonics. The detuning or shifting of transition may occur due to the alternation of the refractive index of plasma at the resonant wavelength, which can lead to the modification in the phases of the interacting waves (i.e., 25H from different components of MoS 2 -NiCNT plasma). The characteristic study of MoS 2 -NiCNT [35] revealed that the addition of MWCNTs into MoS 2 -Ni leads to a change in electrical and optical properties of MoS 2 NFs by (i) the creation of new energy levels and (ii) band edge shift in NFs due to interaction between the carbon and sulfur atoms in MoS 2 NFs. Hence, the addition of CNTs influenced the resonant optical transition, as well as modified the selection rules between the 4p-4d orbitals presented in MoS 2 NFs, which might lead to the generation of high-order harmonic spectra without the resonance characteristics for XUV photon at~32 nm.
Meanwhile, the intensity of 38 nm (21H, 33 eV) emission from laser-induced MoS 2 -NiCNT plasma was 2, 1.7, and 1.3 times stronger compared with the same from MoS 2 bulk, MoS 2 NFs, and Mo bulk plasmas, respectively. The enhancement of 21H from MoS 2 -NiCNT plasma compared with other materials can be considered as a result of the suppression of destructive interference of the contributions from the 4p-4d resonant transitions and its recombination with the 4d orbitals by modification in optical/electrical properties of MoS 2 NFs due to presence of differently aligned CNTs in targets.
Overall, the MoS 2 -Ni is a good source for emitting the intense coherent XUV photons at 32 nm and other harmonic wavelengths as compared to other targets used in our HHG experiments.
Effect of Driving and Heating Pulse Intensities on HHG in Ablated Molecular Structures
The resonant amplification of a single harmonic due to the presence of strong ionic transitions in the HHG media, as well as the total harmonic yield, can be influenced by the electric field of the DPs. Hence, we studied the effect of the DPs on the resonant harmonic and the total harmonic yield. The dependences of the high-order harmonic spectra from the plasmas comprising different Mo-containing molecular materials at various intensities of the DP and HP are shown in Figure 3. For better understanding, the total harmonic yields and cut-off photon energies at various intensities of the DP and HP are summarized in Figure 4a,b, respectively. The harmonic cut-offs and intensities increased with the growth of the DP and HP intensities. As per the harmonic cut-off law, the maximum harmonic photon energy is defined by the relation E_C = I_P + 3.17 U_P, where I_P is the ionization potential and U_P is the ponderomotive potential (U_P ≈ 9.33 × 10^-14 Iλ^2 [W/cm^2 × µm^2]; here I is the DP intensity measured in W/cm^2 and λ is the wavelength of the DP measured in micrometers). One can see that the cut-off energy depends on the DP wavelength and intensity. The increase in HP intensity creates a higher particle density in the plasmas. Thus, the enhancement in the harmonic yield and cut-off energies from the Mo-containing plasmas with an increase of the HP and DP intensities is attributed to the higher plasma density and ponderomotive potential, respectively. At 2.1 × 10^9 W/cm^2 (HP) and 1.3 × 10^14 W/cm^2 (DP) intensities, the Mo bulk plasma was unable to produce 32 nm (38 eV) resonant harmonic photons. Conversely, at these intensities, MoS2 NFs and MoS2-Ni produced resonant harmonic photons together with their neighboring harmonics. The observed result is due to the insignificant amount of Mo ions present in the plasma from the Mo bulk, because of its higher ablation threshold compared to the MoS2 NFs and MoS2-Ni targets. Moreover, the latter targets have higher nonlinear absorption. Therefore, at lower HP intensities, the MoS2 NFs and MoS2-Ni targets might generate larger plasma concentrations compared to the Mo bulk.
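The cut-off law quoted above can be evaluated for the driving-pulse intensities used here; the sketch below gives the single-emitter estimate, which is generally higher than the cut-offs measured from the plasma plumes.

```python
# A sketch evaluating the cut-off law quoted above, E_C = I_P + 3.17*U_P with
# U_P ≈ 9.33e-14 * I * lambda^2 (I in W/cm^2, lambda in µm). This is the
# single-emitter estimate only; the experimentally quoted cut-offs are lower.
def ponderomotive_ev(intensity_w_cm2: float, wavelength_um: float) -> float:
    return 9.33e-14 * intensity_w_cm2 * wavelength_um**2

def cutoff_ev(ip_ev: float, intensity_w_cm2: float, wavelength_um: float = 0.8) -> float:
    return ip_ev + 3.17 * ponderomotive_ev(intensity_w_cm2, wavelength_um)

for intensity in (1.3e14, 4.7e14):        # DP intensities quoted in the text
    up = ponderomotive_ev(intensity, 0.8)
    ec = cutoff_ev(7.09, intensity)       # using the first IP of Mo
    print(f"I = {intensity:.1e} W/cm^2: U_P ≈ {up:.1f} eV, E_C ≈ {ec:.1f} eV")
```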
Notice that, at higher DP and HP intensities, the plasma produced on the bulk Mo allowed a generation of the enhanced resonant harmonic and the neighboring high-order harmonics compared to the MoS 2 bulk (Figure 4a). As stated earlier, it is probably due to the detuning or shifting of resonant harmonic transition or alteration of selection rules by the presence of sulfide ion in MoS 2 plasmas, which influenced its total harmonic yield.
Discussion
MoS2 NFs and nickel-doped MoS2 NFs allow the generation of the highest harmonic yield compared with Mo bulk, MoS2 bulk, and MoS2-NiCNT at the different employed intensities of HP and DP (see Figure 4a), which is complementary to the result discussed in Section 2.1. Overall, MoS2-Ni generated intense high-order harmonics at different HP and DP intensities.
As stated before, the featureless and gradually decreasing pattern of harmonics from the MoS2-NiCNT laser plasmas is due to the presence of randomly aligned CNTs, which influenced the 4p-4d resonant transition and its interference with the 4d orbitals.
Our studies showed that the MoS2-NiCNT plasma produced the lowest harmonic yield and cut-off, while the MoS2-Ni plasma emitted the highest yield and cut-off energy. Hence, the ablation of MoS2 NFs doped with nickel NPs can be employed as an effective plasma medium for HHG and for the demonstration of resonance enhancement at λ = 32 nm. Meanwhile, the stability of the harmonics and the number of shots that can be obtained from a target before the harmonic intensity starts to decrease are important parameters for efficient HHG. Considering those factors, a bulk target is more beneficial than powder samples. Taking these factors into consideration, cylindrical rotating targets are used for the generation of stable high-order harmonics [46].
Hence, making thin films of MoS 2 -Ni or keeping powder samples on a motorized rotating mount can produce stable harmonics, which can find potential applications in coherent diffractive imaging and attosecond spectroscopy.
Ni is easily oxidized, which means that the Ni NPs can be coated with NiO. The question arises: how would this oxide impact the resonant harmonic? One can assume the oxidization of the Ni NPs decorating the MoS2 nanoflakes. The harmonic intensity depends on the plasma density, which in turn depends on the characteristics of the plasma components (ions, neutrals, composites of Ni, MoS2, S, etc.). However, there will be a smaller availability of NiO (ionization potential 10.7 eV) in the plasma because the employed heating pulse intensity is higher than the barrier suppression intensity of NiO. Additionally, the nickel NPs attached to the surface of the MoS2 nanoflakes are exposed to oxidization. Hence, the high-order harmonic generation from the MoS2-Ni composite is mainly from the ions and neutrals of MoS2, Ni, and, to some extent, Mo and S. Correspondingly, the resonance enhancement from the Mo-containing species in the plasma will not be affected by the presence of the oxidized Ni.
A similar conclusion has been reported in the case of In and In 2 O 3 plasmas [45]. Strong enhancement of single harmonic from the indium plasma in the 62 nm range was, to some extent, suppressed once the indium oxide plasma was applied for HHG. However, the conditions of the latter plasma were changed once stronger ablation was used to the In 2 O 3 target, leading to the enhancement of a single (13th) harmonic of the 800 nm pump propagating through such plasma. An analogous conclusion was reported in [47].
The MoS 2 -Ni is a semiconductor-metal (more precisely, semiconductor-oxide-metal, if Ni is oxidized) heterostructure. It has been reported that this kind of heterostructure can cause charge transfer between two (three, if Ni is oxidized) materials [48]. In laser-induced plasmas, the quantum tunneling effect enhances the resonant electron transfer cross-section in strongly coupled plasmas [49]. In our case, resonant harmonic generation is due to the influence of 4p-4d resonant transitions from the single-and double-ionized Mo component in plasma [17] rather than from MoO y S x (before and after Ni depositions), MoS x , and NiS x (after Ni deposition appeared in XPS spectra [48]). Wang et al. [48] claimed that the intensity of oxidization is reduced after Ni deposition. The reduction in oxidization may increase the higher harmonic intensity.
Ni doping in MoS2 introduces an additional energy level [50]. The enhanced PL spectrum can be observed by tuning the incident wavelength. A decrease of PL with growing MoS2-CNT concentration and a blue shift of the PL spectrum were reported in [51]. Moreover, the PL spectrum of MoS2-BN nanotubes becomes diminished and blue-shifted by adding single-walled CNTs [52]. Therefore, the Ni dopants enhance the PL spectrum in MoS2 nanoflakes, whereas the CNTs quench it. Moreover, the CNTs shift the photoluminescence peak to the blue side.
The earlier report [35] showed that the absorption mechanism is significantly varied by doping MoS2 with NiO NPs and MWCNTs, which might lead to variations in the PL intensity. It has been shown that the PL yield is enhanced in the molybdenum disulfide/perylene-3,4,9,10-tetracarboxylic dianhydride (MoS2/PTCDA) heterostructure due to a reduced band gap and reduced screening of electrons. Moreover, a shift of the PL peak was observed in this heterostructure compared to the individual species due to electron hybridization in the MoS2/PTCDA heterostructure. It was determined that the PL intensity increases and the peak is blue-shifted with the increase in thickness of the PTCDA organic molecule layer. One can expect the PL spectrum of MoS2-Ni to become blue-shifted due to an increase in the bandgap compared to the pristine few-layered MoS2 nanosheets. Recently, TMD/organic molecule heterostructures have shown enhancement of PL intensity and charge carrier mobility, thus allowing them to be considered potential candidates for the next generation of optoelectronic devices [6].
As mentioned, HHG in molecular plasmas containing oxides, phosphides, and selenides produced a reduced harmonic intensity. To know the exact effect of charge transfer on the harmonics, high-order harmonic experiments need to be performed using films as the HHG medium rather than laser-induced plasmas. Hence, our experiments do not provide information about the charge transfer effect on the resonant harmonic generation from MoS2-Ni. In summary, HHG from laser-induced plasmas depends on the plasma components and their density. The plasma density is higher in MoS2-Ni compared to the other materials, since enhanced absorption leads to the production of denser plasma and stronger harmonics.
HHG spectroscopy is proven to be a useful tool for the investigation of molecular, atomic, and electronic structures of materials, as well as the analysis of the dynamics of their properties in the ultrashort time scale [53][54][55][56][57]. For spectroscopic applications, a generated bunch of high-order harmonics should be separated, which requires sophisticated XUV filters and gratings. The resonant harmonic generation diminishes the requirement for harmonic separation [58][59][60][61][62]. The demonstration of resonance-induced enhancement of harmonic in present studies can provide further insight into the amendments of harmonic yield and the developments of high-order nonlinear spectroscopy.
The present work comprises the ablation of atomic, molecular, and complex targets to form the optimal conditions for plasma formation and the efficient generation of high-order harmonics. The first consideration of the comparative properties of such plasmas was reported in [63], where high-order harmonics were analyzed from plasma plumes prepared on the surfaces of complex targets. The studies of In-Ag targets showed that the characteristics of the high-order harmonics from the double-target plume were the same as those from the single-target plasmas. For the chromium-tellurium plasma, enhancements of the 29th and 27th harmonics were obtained, thus indicating the appearance of the enhancement properties from both components of the double-target plasma. Those comparative studies also showed a higher enhancement of a single harmonic in the case of the atomic plasma (Sb) compared to the molecular one (InSb). The additional component can only decrease the enhancement factor of the medium, due to the change in the oscillator strength and spectral distribution of the transitions involved in the resonance enhancement of the specific harmonic order. Our present studies allow further consideration of complex plasmas, underlining the distinct mechanisms that suppress or enhance the HHG efficiency in comparison with atomic plasmas of similar elemental content.
Materials and Methods
Earlier, the synthesis procedure and characterization of studied samples were reported in [35]. The synthesis of MoS 2 NFs with dopants is shown in Figure 5.
Synthesis of Nickel Oxide Nanoparticles
All of the chemicals used (N,N-dimethylformamide (DMF), nickel acetate (Ni(CH 3 CO 2 ) 2 ·2H 2 O), sodium hydroxide (NaOH), and sulfuric acid (H 2 SO 4 )) were purchased from Sigma-Aldrich. All of these reactants and solvents were used as received, without any additional processing. We prepared the aqueous solutions using deionized water as well as ultrapure, double-distilled water.
Nickel oxide NPs were produced by chemically reducing nickel acetate with polyethylene glycol as a stabilizing agent. In this synthesis procedure, as a first step, we mixed 1 M aqueous solution of nickel acetate with polyethylene glycol (PEG) for 60 min while continuously stirring. A 1 M NaOH aqueous solution was filled into a burette tube in the vertical column and dispensed drop by drop into the nickel acetate/PEG mixture while continuously stirring. The resulting solution was then centrifuged at 5000 rpm for 10 min with 200 mL of deionized water before being stored in a glass vial for later use. The average sizes of the NiO NPs embedded in MoS 2 were in the range of 20-50 nm depending on the conditions of synthesis. These measurements were carried out during the SEM and TEM analysis of similar samples reported in Ref. [35].
Synthesis of Multiwalled Carbon Nanotubes
The multi-walled CNTs were synthesized through the chemical vapor deposition (CVD) technique on the Al-Cu-Fe surface. Ethylene (C 2 H 4 ) was used as a carbon source in the CVD approach used to create the MWCNT on the Al-Cu-Fe substrate. The CVD chamber was evacuated and heated in an atmosphere of argon and hydrogen (Ar:H 2 = 10:1) at a pressure of about 250 mbar. The MWCNTs were developed at 1072 K for around 20 min. Then the furnace was turned off and allowed to cool down to room temperature under the argon atmosphere. The black deposition inside the quartz tube was taken out and thoroughly washed with HCl:HNO 3 (1:3 ratio) for 10 min followed by washing with distilled water for 60 min. Notice that an approximately similar procedure was reported in Ref. [64].
Synthesis of Few-Layer MoS 2 and NiO Nanoparticles-Decorated MoS 2
The few-layer MoS 2 nanosheet sample was created by mechanically exfoliating bulk MoS 2 powder (99.999% purity, Sigma-Aldrich) in DMF solvent using a pressurized ultrasonic reactor. In brief, 50 mg of MoS 2 powder was suspended in 500 mL DMF and exfoliated for 10 h using intense ultrasonication. Then, 2 mL of the NiO NP supernatant was added to the MoS 2 /DMF solution and exfoliated for another 10 h. Finally, the prepared solutions were filtered through 0.22 µm porous filter membranes and washed several times with deionized water. The final sample was vacuum dried for 12 h at 80 °C before being stored in cleaned airtight glass containers for further characterization and application.
Earlier, Lai et al. analyzed the optical, structural, vibrational, compositional, and morphological properties of the few-layered MoS 2 , as well as of the NiO NPs and MWCNTs embedded on the few-layered pristine MoS 2 [35]. In particular, these samples show peaks at 410.3 and 384.4 cm −1 in their Raman spectra, which represent the A 1g , E 1 2g , and E 1g vibrational modes of the hexagonal structure of MoS 2 . The Raman spectra of the NiO NPs- and MWCNT-decorated MoS 2 nanoflakes show strong peaks at 1349.5 cm −1 and 1604.0 cm −1 , corresponding to the D-band related to defects in the CNTs and the G-band related to the C-C bonds, respectively. Scanning electron microscopy and transmission electron microscopy confirmed the presence of Ni NPs and MWCNTs on the MoS 2 NF surfaces [35].
As mentioned, the average sizes of the NiO NPs were in the range of 20-50 nm. The diameters of the MWCNTs were in the range of 250-500 nm. These species were attached to the surfaces of the 1-4 µm MoS 2 NFs. We did not measure the PL spectra of NiO, CNTs, and MoS 2 flakes.
HHG and Z-Scan Methods
The experimental setup for the generation of high-order harmonics from the above-described molecular materials is presented in Figure 6a. The picosecond (800 nm, 200 ps, 1 kHz; Spitfire Ace, Spectra-Physics) pulses were used as HP for ablating the samples in the experiment. The femtosecond (800 nm, 35 fs, 1 kHz; Spitfire Ace, Spectra-Physics) laser pulses were used as DP for the generation of harmonics from the laser-induced plasmas produced on the surfaces of the targets. The picosecond and femtosecond pulses were obtained from the same laser, by reflecting a small fraction of the uncompressed beam (~200 ps) before the compressor stage. The HP was focused using a spherical 300-mm focal length lens (L 1 ) on the target (T) placed inside the vacuum chamber to create a laser-induced plasma. The DP was delayed by 83 ns with respect to the picosecond HP and focused into the laser-induced plasma using a spherical 500-mm focal length lens (L 2 ) from the orthogonal direction to produce high-order harmonics. The DP propagated 0.2 mm above the target surface. The target and DP focusing positions were varied with respect to the optical axis of the DP and the laser-induced plasma, respectively, to determine the conditions for the highest yield of harmonics.
The generated harmonics and residual fundamental radiation propagated through the differential pump chamber (DPC) and slits before entering an XUV spectrometer. The spectrometer consisted of a gold-coated cylindrical mirror (CM), a 1200 lines/mm flat-field grating (FFG) with variable line spacing (Hitachi), and a microchannel plate (MCP, Hamamatsu). The images present on the phosphor screen of the MCP were collected by a CCD camera (Thorlabs) coupled with a laptop to acquire information about the intensities and spectra of the generated harmonics in the XUV region. The target chamber and XUV spectrometer were maintained at 10 −5 mbar. Figure 6c illustrates typical two-dimensional color plots of harmonics generated from different molecular plasmas. The second diffraction patterns from the grating are observed on either side of the odd harmonics (i.e., between 9H-13H) in the HHG spectra. It is well known that even harmonics are absent in HHG spectra from laser-induced plasmas when using single-color DPs, due to the isotropy of the HHG medium, which leads to the absence of the even-order nonlinear susceptibilities. Further analysis of these harmonic spectra is presented in the following section.
Molybdenum, molybdenum disulfide, molybdenum disulfide nanoflakes, molybdenum disulfide nanoflakes doped with nickel NPs, and molybdenum disulfide nanoflakes decorated with nickel NPs and multi-walled CNTs were used as targets of variable molecular composition to produce laser-induced plasmas and to study the influence of the nickel and CNTs on the nonlinear optical properties of, and the harmonics generated from, the Mo-containing molecular materials.
The Mo metallic sheet of dimensions 4 mm × 4 mm and thickness 1 mm was used for ablation and atomic plasma formation. The MoS 2 bulk molecular target was prepared in the form of a pellet with dimensions 10 mm × 10 mm and a thickness of 1 mm using a hydraulic press. The MoS 2 NFs, MoS 2 -Ni, and MoS 2 -NiCNT targets were prepared by pressing the powdered samples onto double-sided tape (i.e., onto the glue of the tape) attached to a glass substrate. The sizes of these targets were 4 mm × 4 mm. Note that the ablated glass substrate did not produce harmonics.
We also studied the low-order nonlinear optical properties of these materials using the Z-scan technique, by illuminating the suspended molecular samples with femtosecond laser pulses. The schematic of the closed- and open-aperture configurations of the Z-scan scheme is illustrated in Figure 6b. The 800 nm, 35 fs pulses were focused into a 2 mm thick quartz cell containing the molecular suspensions using a convex lens with a focal length of 400 mm. The quartz cell was moved along the z-axis, the direction of beam propagation, by controlling the movement of the translation stage through a motor controller. The reflected and transmitted parts of the laser light propagated through the glass slide were used to measure the sample's low-order nonlinear optical properties in the open- and closed-aperture schemes. In the open-aperture scheme, the reflected beam was collected directly by silicon photodiode PD 2 (PDA110A-EC, Thorlabs), whereas, in the closed-aperture configuration, only 10% of the transmitted light was collected through an iris placed in front of photodiode PD 1 .
The accuracy of Z-scan measurements depends on the characteristics of the input beam profile. Therefore, we measured the laser profile using a CCD camera; the measured profile was close to a Gaussian shape. The full width at half maximum of the laser beam diameter at the focus was 38 µm. The open- and closed-aperture methods were used to measure the nonlinear absorption coefficients and nonlinear refractive indices of the molecular samples, respectively, using 800 nm laser radiation.
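For reference, the sketch below implements the standard thin-sample Sheik-Bahae expressions that are commonly used to extract a nonlinear absorption coefficient and a nonlinear refractive index from open- and closed-aperture traces. This is a generic illustration rather than the exact fitting procedure used for the data in this work, and all numerical values in the usage example are placeholders.

```python
import numpy as np

# Standard thin-sample Z-scan model functions (Sheik-Bahae formalism); a
# sketch of how open/closed-aperture traces are commonly fitted.  The symbols
# (q0, dphi0, z0) are generic Z-scan quantities, not values from this work.

def open_aperture(z, z0, q0):
    """Normalized open-aperture transmittance for small |q0|.
    q0 = beta * I0 * L_eff encodes the nonlinear absorption coefficient."""
    x = z / z0
    return 1.0 - q0 / (2.0 * np.sqrt(2.0) * (1.0 + x**2))

def closed_aperture(z, z0, dphi0):
    """Normalized closed-aperture transmittance (small nonlinear phase).
    dphi0 = k * n2 * I0 * L_eff encodes the nonlinear refractive index."""
    x = z / z0
    return 1.0 + 4.0 * dphi0 * x / ((x**2 + 9.0) * (x**2 + 1.0))

# Example usage with arbitrary placeholder parameters:
z = np.linspace(-20e-3, 20e-3, 201)          # sample position along z, m
t_oa = open_aperture(z, z0=2e-3, q0=0.1)
t_ca = closed_aperture(z, z0=2e-3, dphi0=0.3)
```

In practice such model curves would be fitted to the measured normalized transmittance versus sample position, with the fitted q0 and dphi0 converted to the absorption coefficient and refractive index using the measured peak intensity and effective sample length.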
Conclusions
In conclusion, we demonstrated the generation of high-order harmonics and of a resonance-enhanced harmonic (at ~32 nm) from the laser-induced plasmas of MoS 2 NFs doped with nickel NPs, while analyzing different Mo-containing laser-induced plasmas. The characteristics of the high-order harmonics were investigated by varying the DP and HP intensities. MoS 2 NFs doped with nickel generated stronger coherent XUV emission than the other targets. This harmonic enhancement was attributed to the increase in plasma density of the MoS 2 NFs doped with nickel. Harmonic generation in the MoS 2 -NiCNT plasma was weaker compared with the other employed targets. The randomly aligned CNTs influenced the harmonic phase, leading to destructive interference, which caused a reduction in harmonic yield. Further, the resonant harmonic at ~32 nm was produced from all targets except MoS 2 -NiCNT. The most intense resonant harmonic was obtained in the case of the MoS 2 -Ni plasma. The absence of the resonant harmonic from MoS 2 -NiCNT might be due to detuning of the resonant ionic transition caused by modification of the optical properties of the MoS 2 NFs by the carbon nanotubes. The 38 nm harmonic intensity from the MoS 2 -NiCNT plasma was stronger compared to that generated from the other targets. This feature was attributed to the suppression of destructive interference of the 4p-4d transition contributions and recombination with 4d orbitals due to the presence of carbon nanotubes in the target. The present studies elucidate an approach for using various transition metal dichalcogenides with dopants to produce intense coherent XUV radiation.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Data underlying the results presented in this paper is not publicly available at this time but may be obtained from the author upon reasonable request. | 11,145.4 | 2023-03-31T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Yang-Baxter deformations of the GL(2, R) WZW model and non-Abelian T-duality
By calculating inequivalent classical r-matrices for the gl(2, R) Lie algebra as solutions of the (modified) classical Yang-Baxter equation ((m)CYBE), we classify the YB deformations of the Wess-Zumino-Witten (WZW) model on the GL(2, R) Lie group into twelve inequivalent families. Most importantly, it is shown that each of these models can be obtained from a Poisson-Lie T-dual σ-model in the presence of spectator fields when the dual Lie group is considered to be Abelian, i.e. all deformed models have Poisson-Lie symmetry just as the undeformed WZW model on GL(2, R). In this way, all deformed models are specified via spectator-dependent background matrices. For one case, the dual background is explicitly found.
Introduction
The deformation of integrable two-dimensional σ-models has attracted considerable attention over the past two decades, in particular given its applications in string theory and AdS/CFT [1][2][3] (for a comprehensive review, see [4]). Integrable deformations of the SU(2) principal chiral model were first presented in [5][6][7]. The generalization of [6,7] as the YB (or η) deformation of the chiral model was introduced by Klimcik in [1][2][3]. The YB deformations are based on R-operators satisfying the (m)CYBE or the CYBE (homogeneous YB deformations) [8]. The application of these integrable deformations to the AdS 5 × S 5 superstring action has been presented in [9][10][11]. Note that the initial input for the construction of a YB deformed background is a classical r-matrix. The r-matrices may be divided into Abelian and non-Abelian ones. It has been proved that the YB deformed chiral models related to Abelian r-matrices correspond to T-duality-shift-T-duality (TsT) transformations [12]. In the case of non-Abelian r-matrices, it has been shown that the YB deformed chiral model corresponds to deformed T-dual models (with an invertible two-cocycle ω such that ω −1 = R) [13]. Some of the YB deformations of the WZW models with compact or non-compact Lie groups have also been performed in [14][15][16][17][18][19]. The generalization of this type of deformation to Lie supergroups has recently been explored in [20].
The main purpose of this paper is to construct the YB deformations of the WZW model based on the GL(2, R) Lie group. We first classify the inequivalent classical r-matrices as solutions of the (m)CYBE by using the automorphism transformation associated with the gl(2, R) Lie algebra. Then we obtain the YB deformed backgrounds of the GL(2, R) WZW model. As previously shown in [21], the WZW model on GL(2, R) has Poisson-Lie symmetry with spectators. Here we will show that all YB deformed backgrounds have Poisson-Lie symmetry, in such a way that the resulting deformed backgrounds can be represented as the original models of Poisson-Lie T-dual σ-models in the presence of spectator fields when the dual Lie group is considered to be Abelian; in fact, all deformed models have Poisson-Lie symmetry just as the undeformed WZW model on GL(2, R) [21].
The paper is organized as follows. In Sec. 2, we start by recalling the YB deformation of the WZW model. In Sec. 3, we first review the construction of the WZW model on the GL(2, R) Lie group [21], and then we solve the (m)CYBE in order to obtain the inequivalent classical r-matrices for the gl(2, R) Lie algebra. The backgrounds of the YB deformed WZW models on the GL(2, R) Lie group are also constructed in this section; the results are summarized in Table 1. The conformal invariance conditions of the YB deformed backgrounds up to one-loop order are discussed at the end of Sec. 3. In Sec. 4, we show that the YB deformed models can be considered as the original ones of non-Abelian T-dual σ-models. For all deformed backgrounds, the spectator-dependent background matrices are presented in Table 2. At the end of Sec. 4, we also obtain the non-Abelian target space dual for one of the deformed models. Some concluding remarks are given in the last section.
A review of the YB deformations of the WZW model
The YB deformation of the WZW model on a Lie group G is given by [14,15], where Σ is the worldsheet with coordinates (τ, σ). In the second integral, B is a three-manifold bounded by the worldsheet, with coordinates α = (ξ, τ, σ), and κ is a constant parameter. Here Ω ab is a non-degenerate ad-invariant symmetric bilinear form on the Lie algebra G of G, defined by Ω ab = <T a , T b >; moreover, f c ab stand for the structure constants of G, and L ± are the components of the left-invariant one-forms, in which g : Σ → G is an element of the Lie group G and T a (a = 1, ..., dim G) are the basis of the Lie algebra G. Here the linear operator R : G → G satisfies the (m)CYBE, in which ω is a constant parameter. When ω = 0, equation (2.3) is called the CYBE, while for ω = ±1 this equation becomes the mCYBE. The deformed currents J ± are defined by means of relation (2.4) [14], where η and Ã are real parameters measuring the deformation of the WZW model. (In the rest of the paper, we use the standard lightcone variables, with the alternating tensor fixed by our conventions.) If we set η = Ã = 0 and κ = 0, one recovers the action of the principal chiral model. Also, for η = Ã = 0 and κ = 1, the action reduces to the standard WZW model [14]. Notice that the parameter ω can sometimes be normalized by rescaling R; accordingly, one can consider ω = 0, ±1. Indeed, the model (2.1) is integrable, as shown in [14]. In the following we shall consider the model (2.1) for the GL(2, R) Lie group.
3 YB deformations of the WZW model on the GL(2, R) Lie group

Similar to the calculations performed to obtain the classical r-matrices of the h 4 Lie algebra [19], here we use the automorphism transformation of the gl(2, R) Lie algebra to classify all corresponding inequivalent classical r-matrices as solutions of the (m)CYBE. In order to obtain the YB deformations of the GL(2, R) WZW model, one needs to calculate all linear R-operators corresponding to the obtained classical r-matrices. Then, by calculating the deformed currents J ± , we obtain all YB deformations of the GL(2, R) WZW model. Before proceeding, let us first consider the undeformed WZW model on GL(2, R).
3.1
The WZW model based on the GL(2, R) Lie group

We start by writing down the WZW model on the GL(2, R) group. The Lie algebra gl(2, R) = sl(2, R) ⊕ u(1) is generated by the set (T 1 , T 2 , T 3 , T 4 ) with the commutation rules (3.1), where T 4 is a central generator. As mentioned above, one can obtain the (undeformed) standard WZW model from (2.1) by considering κ = 1 and setting η = Ã = 0 in formula (2.4). To construct the WZW model one needs a bilinear form Ω ab satisfying the condition (3.2) [22]. A bilinear form on the gl(2, R) Lie algebra defined by the commutation relations (3.1) can then be obtained from (3.2), giving the result (3.3) [21], where λ and ρ are some real constants.
In order to calculate the left-invariant one-forms, we parameterize the GL(2, R) group manifold with the coordinates x µ = (x, y, u, v); the elements of GL(2, R) can then be written accordingly. Using these, the WZW model on GL(2, R) is worked out [21]. By comparing the resulting action with the original σ-model form, one concludes that the background metric G µν and the antisymmetric B-field take the following forms, with B = e −2x du ∧ dy.
Here we have assumed that λ = 1. In the following, we will also use this assumption.
3.2
Classical r-matrices for the gl(2, R) Lie algebra

Classical r-matrices for the gl(2, R) Lie algebra were first found in [23]. There, the Lie bialgebra structures and the corresponding classical r-matrices were classified in two multi-parametric inequivalent classes. In this work, by using the automorphism group of the gl(2, R) Lie algebra, we will classify all inequivalent classical r-matrices as solutions of the (m)CYBE. We show that the classical r-matrices split into twelve inequivalent classes. Given a Lie algebra G with basis {T a }, we define an element r ∈ G ⊗ G of the form (3.11), where r ab is an antisymmetric matrix with real entries. The linear operator R associated to a classical r-matrix plays an important role in the deformation process of the WZW model. Accordingly, one may define it as in [19]. Considering this relation and then comparing (3.11) and (3.12), one obtains (3.13). Using (3.12) and (3.13) together with (3.2), one can rewrite the (m)CYBE (2.3) in the following component form [19]:

f^{de}{}_{c} r_{da} r_{eb} + f^{de}{}_{a} r_{db} r_{ec} + f^{de}{}_{b} r_{dc} r_{ea} - ω f^{de}{}_{c} Ω_{da} Ω_{eb} = 0.  (3.15)
Here, the superscript "t" denotes matrix transposition. In order to calculate the r-matrices for a given Lie algebra G, we need to solve equation (3.15); to determine the inequivalent r-matrices, we should use the automorphism group of the Lie algebra G. The automorphism transformation A acts on the basis {T a } of G such that the transformed basis {T ′ a } obeys the same commutation relations as {T a }. As proved in [19], for a Lie algebra G with a transformation A ∈ Aut(G), two r-matrices r and r ′ that solve the (m)CYBE are said to be equivalent if the corresponding transformation holds. 5 This helps us to classify the inequivalent r-matrices for gl(2, R). Before proceeding, let us find the general automorphism of gl(2, R) which preserves the commutation rules (3.1); it is given by [24, 25] a matrix depending on arbitrary real numbers a, b, c, d.
For solving the (m)CYBE (3.15) for the gl(2, R) Lie algebra, we consider r ab in the general antisymmetric form (3.18), where m 1 , ..., m 6 are real constants. Now, putting (3.18) into relation (3.15) and then using (3.1) and (3.3), the general solution of (3.15) splits into six r-matrices. The solutions contain the constants ω and m 1 , ..., m 6 and are given in (3.19). In the following, using the automorphism transformation (3.17) and also (3.16), we determine representatives of all inequivalent r-matrices of (3.19). Indeed, they split into twelve inequivalent classes, as follows.

Theorem 3.1. Any r-matrix of the gl(2, R) Lie algebra as a solution of the (m)CYBE belongs to exactly one of the following twelve inequivalent classes. 6

Now, using Eqs. (3.3), (3.12) and (3.13), one can find all linear R-operators related to the inequivalent r-matrices, and then calculate the deformed currents J ± and the YB deformed WZW models.
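A candidate R-operator can also be checked against the (m)CYBE numerically before attempting a classification by hand. The sketch below (Python/NumPy) tests the operator form [RX, RY] − R([RX, Y] + [X, RY]) − ω[X, Y] = 0 on all pairs of basis elements. The sign and normalization of the ω term, as well as the choice of basis, are assumptions that may differ from the conventions of equations (2.3) and (3.15); the usage example employs a trivial Abelian algebra rather than the gl(2, R) structure constants of (3.1).

```python
import numpy as np

# Generic sanity check of the operator form of the (m)CYBE,
#   [RX, RY] - R([RX, Y] + [X, RY]) - omega * [X, Y] = 0  for all basis X, Y,
# with structure constants f[a, b, c] meaning [T_a, T_b] = f[a, b, c] T_c.
# NOTE: sign/normalization conventions for omega may differ from the paper.

def bracket(x, y, f):
    """Lie bracket of coordinate vectors x, y in the T_a basis."""
    return np.einsum('a,b,abc->c', x, y, f)

def check_mcybe(R, f, omega=0.0, tol=1e-10):
    dim = f.shape[0]
    basis = np.eye(dim)
    for a in range(dim):
        for b in range(dim):
            X, Y = basis[a], basis[b]
            lhs = (bracket(R @ X, R @ Y, f)
                   - R @ (bracket(R @ X, Y, f) + bracket(X, R @ Y, f))
                   - omega * bracket(X, Y, f))
            if np.max(np.abs(lhs)) > tol:
                return False
    return True

# Trivial usage example with an Abelian 4-dimensional algebra (f = 0), for
# which any R solves the CYBE; a real check would use the gl(2, R) structure
# constants of Eq. (3.1) and the R-operators built from Theorem 3.1.
f = np.zeros((4, 4, 4))
R = np.random.randn(4, 4)
assert check_mcybe(R, f, omega=0.0)
```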
Backgrounds for YB deformations of the GL(2, R) WZW model
In this subsection we find all linear R-operators corresponding to the inequivalent r-matrices of Theorem 3.1. Then we obtain the deformed currents J ± from Eq. (2.4). Afterwards, using (2.1), we obtain all YB deformed backgrounds of the GL(2, R) WZW model. Recall that the symbol of each background, e.g. GL(2, R) (η, Ã,κ) i , indicates the YB deformed background derived from the r-matrix r i ; the roman numerals i, ii, etc. distinguish between the possible deformed backgrounds of the WZW model, and the parameters (η, Ã, κ) indicate the deformation parameters of each background. Notice that all deformed backgrounds include three parameters (ω, Ã, η) except for GL(2, R) (η,κ) ii . The deformed backgrounds, including metric and B-field, are summarized in Table 1.

6 In Ref. [26], in order to classify the YB deformations of the AdS 3 × S 3 string, the CYBE has been solved for the Lie algebra sl(2, R) L ⊕ su(2) L ⊕ sl(2, R) R ⊕ su(2) R . There, the authors have considered the basis {S 0 , S + , S − } for sl(2, R) with the commutation relations [S 0 , S ± ] = ±S ± , [S + , S − ] = 2S 0 , and T a (a = 1, 2, 3) for su(2) with [T a , T b ] = −ǫ abc T c . In both cases of sl(2, R) and su(2), they have used a bar to distinguish the right copy of the algebra from the left copy. In their calculations, they have focused on the subalgebra generated by {S 0 , S + , S̄ 0 , S̄ − , T 1 , T 2 }, and have ignored the transformations generated by {S − , S̄ + }. For the algebra sl(2) L ⊕ sl(2) R ⊕ su(2) L ⊕ su(2) R with the generators {S 0 , S + , S̄ 0 , S̄ − , T 1 , T 2 }, ten non-Abelian R-matrices were obtained as solutions of the CYBE with ω = 0. We know that gl(2, R) = sl(2, R) ⊕ u(1) is embedded inside sl(2, R) ⊕ su(2); therefore, with a dimensional reduction from six to four, we expect that one can obtain the r-matrices of Theorem 3.1 (only the cases with ω = 0) from those of [26]. By checking this we found that the R-matrices of [26] agree only with our r-matrices r i , r ii , r iii , r iv , r vi and r vii . Thus, our r-matrix r v cannot be obtained by reducing the R-matrices of [26]; in fact, we have one more solution than those of [26].
Here, contrary to the H 4 case [19], none of the backgrounds of the YB deformed GL(2, R) WZW models can be related to the GL(2, R) WZW model (equations (3.8) and (3.9)), because the Killing symmetries of the deformed metrics differ from those of equation (3.8). Before closing this section, let us discuss the conformal invariance conditions of the deformed models. In Ref. [27] it has been shown that a necessary and sufficient condition for the η-model to have a standard supergravity background as target space is an algebraic condition on the r-matrix. There, it has been referred to as the unimodularity condition (3.20). In the procedure of YB deformation, the initial input for the construction of the deformed backgrounds is the r-matrix. When an r-matrix satisfies the unimodularity condition (3.20), the YB deformed background is a solution of standard supergravity. If not, the background becomes a solution of the generalized supergravity equations. Let us now examine the unimodularity condition on the solutions of the (m)CYBE for gl(2, R). Using the condition (3.20) together with (3.1), we find that only the r-matrices r iii , r v and r vii of Theorem 3.1 are unimodular, while the rest are non-unimodular r-matrices. Hence, we expect the backgrounds deformed by the aforementioned unimodular r-matrices to satisfy the standard supergravity equations (the one-loop beta function equations [28]). By examining the conformal invariance conditions, we find that the backgrounds generated by the r-matrices r iii , r v and r vii satisfy the one-loop beta function equations only if the deformation parameters η, Ã vanish and κ = 1. The same holds for the rest of the backgrounds, generated by the non-unimodular r-matrices.
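In practice the unimodularity test is a single contraction of the r-matrix with the structure constants. The sketch below assumes the commonly quoted form r^{ab} f_{ab}^{c} = 0 of the condition; the index conventions of equation (3.20) in the text may differ.

```python
import numpy as np

def is_unimodular(r, f, tol=1e-12):
    """Test r^{ab} f_{ab}^{c} = 0 for an antisymmetric r-matrix r[a, b] and
    structure constants f[a, b, c] with [T_a, T_b] = f[a, b, c] T_c.
    This is the usual form of the unimodularity condition; the conventions
    of Eq. (3.20) in the text may differ."""
    return np.max(np.abs(np.einsum('ab,abc->c', r, f))) < tol
```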
YB deformed models as original ones of non-Abelian T-dual σ-models
In this section we shall show that all YB deformed WZW models of Table 1 can be obtained from a Poisson-Lie T-dual σ-model constructed on a 2 + 2-dimensional manifold M, with a two-dimensional non-Abelian Lie group acting freely on M. As we will see, the dual Lie group is considered to be Abelian. Before we proceed to investigate this case further, let us briefly review the construction of Poisson-Lie T-dual σ-models in the presence of spectator fields [29][30][31]. Since Poisson-Lie duality is based on the concept of the Drinfeld double, it is necessary to define the Drinfeld double group D. A Drinfeld double [32] is simply a Lie group D whose Lie algebra D admits a decomposition D = G ⊕ G̃ into a pair of sub-algebras that are maximally isotropic with respect to a symmetric ad-invariant non-degenerate bilinear form < . , . >. The dimensions of the sub-algebras have to be equal. We furthermore consider G and G̃ as a pair of maximally isotropic subgroups corresponding to the sub-algebras G and G̃, and choose a basis in each of the sub-algebras as T a ∈ G and T̃ a ∈ G̃. The bases of the two sub-algebras satisfy the commutation relations (4.2), where f c ab and f̃ ab c are the structure constants of G and G̃, respectively. Note that the Lie algebra structure defined by relation (4.2) is called the Drinfeld double D.
Consider now a non-linear σ-model for the d field variables X M = (x µ , y α ), where the x µ , µ = 1, ..., dim G, represent the coordinates of the Lie group G acting freely on the manifold M ≈ O × G, and the remaining coordinates y α parameterize the orbit O. A remarkable point is that the coordinates y α do not participate in the Poisson-Lie T-duality transformations and are therefore called spectator fields [31]. The corresponding σ-model action has the form (4.3), where R a ± are the components of the right-invariant Maurer-Cartan one-forms constructed by means of an element g of the Lie group G. As shown, the couplings E ab , φ aα , φ αb and φ αβ may depend on all variables x µ and y α . Similarly, we introduce another σ-model for the d field variables X̃ M = (x̃ µ , y α ), where the x̃ µ parameterize an element g̃ ∈ G̃, whose dimension is equal to that of G, and the rest of the variables are the same y α 's used in (4.3). We consider the components of the right-invariant Maurer-Cartan forms on G̃ as (∂ ± g̃ g̃ −1 ) a = R̃ ± a = ∂ ± x̃ µ R̃ µa . In this case, the corresponding action takes the form (4.5). The σ-models (4.3) and (4.5) will be dual to each other in the sense of Poisson-Lie T-duality [29,30] if the associated Lie algebras G and G̃ form a Drinfeld double D. It remains to relate the couplings E ab , φ aα , φ αb and φ αβ in (4.3) to the dual couplings Ẽ ab , φ̃ α and φ̃ αβ in (4.5). It has been shown [29][30][31] that the various couplings in the σ-model action (4.3) are restricted to the forms (4.6), where the new couplings E 0 , F (1) , F (2) and F may be at most functions of the variables y α only. Here Π(g) is the Poisson structure on G, and the matrices a(g) and b(g) are defined in (4.7). Eventually, the relationship between the couplings of the dual action and the original one is given by (4.8) [29][30][31]. Analogously, one can define matrices ã(g̃), b̃(g̃) and Π̃(g̃) by just replacing the untilded symbols with tilded ones. As we will see, the Poisson-Lie T-duality approach in the presence of spectators helps us to construct the non-Abelian T-dual spaces of the YB deformations of the GL(2, R) WZW models of Table 1. It is worth mentioning that in Ref. [21], the GL(2, R) WZW model was derived from a dual pair of σ-models related by Poisson-Lie symmetry, in such a way that the WZW model as the original model was constructed on a 2 + 2-dimensional manifold M ≈ O × G, where G = A 2 , a two-dimensional real non-Abelian Lie group, acts freely on M. Below, as an example, the non-Abelian T-dualization of the YB deformed background GL(2, R) (η, Ã,κ) i is discussed in detail using the formulation mentioned above.
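As a schematic illustration of how the group-dependent part of the original background is assembled in this formulation, the sketch below uses the standard relations Π(g) = b(g) a(g)^{-1} and E(g) = (E0^{-1} + Π(g))^{-1} for the group block only. The spectator couplings F^(1), F^(2) and F entering (4.6)-(4.8) are omitted, and the ordering conventions for a(g) and b(g) are assumptions, so this is not a reproduction of the full expressions used in the text.

```python
import numpy as np

# Schematic assembly of the group-dependent coupling matrix in Poisson-Lie
# T-duality, assuming the standard relations
#     Pi(g) = b(g) a(g)^{-1},     E(g) = (E0^{-1} + Pi(g))^{-1}.
# The spectator couplings F^(1), F^(2), F of Eqs. (4.6)-(4.8) are omitted,
# and the ordering conventions for a(g), b(g) may differ from the paper.

def pi_matrix(a_g, b_g):
    return b_g @ np.linalg.inv(a_g)

def group_block(E0, a_g, b_g):
    return np.linalg.inv(np.linalg.inv(E0) + pi_matrix(a_g, b_g))

# For an Abelian dual group (the case used in this paper), b(g) = 0, hence
# Pi(g) = 0 and the group block reduces to E(g) = E0:
E0 = np.array([[1.0, 0.5], [-0.5, 2.0]])   # placeholder constant couplings
a_g = np.eye(2)                            # placeholder adjoint-type matrix
b_g = np.zeros((2, 2))
assert np.allclose(group_block(E0, a_g, b_g), E0)
```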
4.1 Non-Abelian T-dual space of the YB deformed background GL(2, R) (η, Ã,κ) i

Here {T 1 , T 2 } and {T̃ 1 , T̃ 2 } are the basis of A 2 and 2A 1 , respectively. In order to calculate the components of the right-invariant one-forms R a ± on A 2 , we parameterize an element of A 2 as g = e −xT 1 e yT 2 . (4.10) The R a ± are then derived accordingly. To achieve a σ-model with the background GL(2, R) (η, Ã,κ) i , one has to choose the spectator-dependent matrices in the form (4.12). Since the dual Lie group, 2A 1 , is assumed to be Abelian, it follows from the second relation of (4.7) that b ab (g) = 0; consequently, Π ab (g) = 0. Using these and employing (4.6), one can construct the action (4.3) on the manifold M ≈ O × G. The corresponding background, including the metric and antisymmetric two-form field, is nothing but the YB deformed background GL(2, R) (η, Ã,κ) i as represented in Table 1. Thus, we have shown that the background GL(2, R) (η, Ã,κ) i can be considered as the original model of a dual pair of σ-models related by Poisson-Lie symmetry. In this manner one can obtain the spectator-dependent matrices for all backgrounds of Table 1. The results for the spectator-dependent matrices are summarized in Table 2.
The dual model
The dual model is constructed on the 2 + 2-dimensional manifold M̃ ≈ O × G̃ with the two-dimensional Abelian Lie group G̃ = 2A 1 acting freely on it. In the same way, to construct the dual σ-model we parameterize the corresponding Lie group (the Abelian Lie group 2A 1 ) with coordinates x̃ µ = {x̃, ỹ}. In order to calculate the components of the right-invariant one-forms on the dual Lie group, we parameterize an element of the group in the usual way and then obtain R̃ ±1 = ∂ ± x̃ and R̃ ±2 = ∂ ± ỹ. Now, inserting (4.12) and (4.16) into equations (4.8), one can obtain the dual couplings, with ∆ = η 2 (ρ + 2) − 2ỹ 2 . Finally, inserting the above results into the action (4.5), the corresponding metric and antisymmetric tensor field are worked out.
Summary and concluding remarks
We have obtained the inequivalent classical r-matrices for the gl(2, R) Lie algebra as solutions of the (m)CYBE by using its corresponding automorphism transformation. Using these, we have constructed the YB deformations of the GL(2, R) WZW model. Our results, comprising twelve models, are summarized in Table 1. We have shown that each of these models can be obtained from a Poisson-Lie T-dual σ-model in the presence of spectator fields when the dual Lie group is considered to be Abelian. This means that all deformed models have Poisson-Lie symmetry, just as the undeformed WZW model on GL(2, R); in fact, Poisson-Lie symmetry is preserved under the YB deformation. Since all information related to the deformation of the models is collected in the spectator-dependent background matrices E 0 , F (1) , F (2) and F, it seems possible that, for other choices of these matrices, further integrable backgrounds can be constructed. This is a question that we will address in the future, and we hope to find such backgrounds.
(3.14) For simplicity, one may use the representations (X a ) c b = −f ab c and (Y c ) ab = −f ab c to obtain a matrix form of the above formula, giving [19]
4.1.1 The original model

The original model is constructed on the 2 + 2-dimensional manifold M ≈ O × G, in which G is considered to be the Lie group A 2 , whose Lie algebra is denoted by A 2 , while O is the orbit of G in M. We use the coordinates {x, y} for A 2 and employ y α = {u, v} for the orbit O. In what follows we shall show that the background of the original model is equivalent to the YB deformed background GL(2, R) (η, Ã,κ) i . As mentioned earlier, having Drinfeld doubles one can construct the Poisson-Lie T-dual σ-models on them. The Lie algebra of the semi-Abelian double (A 2 , 2A 1 ) is defined by the following non-zero Lie brackets
Table 1. The YB deformed backgrounds of the GL(2, R) WZW model (background symbol; backgrounds including metric and B-field; comments). | 5,565 | 2023-05-20T00:00:00.000 | [
"Mathematics"
] |
Quantitatively rating galaxy simulations against real observations with anomaly detection
Cosmological galaxy formation simulations are powerful tools to understand the complex processes that govern the formation and evolution of galaxies. However, evaluating the realism of these simulations remains a challenge. The two common approaches for evaluating galaxy simulations are either through scaling relations based on a few key physical galaxy properties, or through a set of pre-defined morphological parameters based on galaxy images. This paper proposes a novel image-based method for evaluating the quality of galaxy simulations using unsupervised deep learning anomaly detection techniques. By comparing full galaxy images, our approach can identify and quantify discrepancies between simulated and observed galaxies. As a demonstration, we apply this method to SDSS imaging and NIHAO simulations with different physics models, parameters, and resolution. We further compare the metric of our method to scaling relations as well as morphological parameters. We show that anomaly detection is able to capture similarities and differences between real and simulated objects that scaling relations and morphological parameters are unable to cover, thus providing a new point of view to validate and calibrate cosmological simulations against observed data.
INTRODUCTION
Cosmological simulations are one of the most powerful tools to test our understanding of the Universe (Vogelsberger et al. 2020). Thanks to the blooming development in computational power, high resolution hydrodynamic simulations are able to model galaxy formation and evolution, tracking both dark matter and baryons across cosmic time. Recent simulations are quite successful in reproducing a wide range of galaxy properties such as stellar mass, rotation curves, chemical abundances, colors, and scaling relations (e.g. Vogelsberger et al. 2014; Stinson et al. 2012; Dubois et al. 2016; Schaye et al. 2015; Wang et al. 2015; Tremmel et al. 2017; Dutton et al. 2017; Pillepich et al. 2018; Nelson et al. 2017; Buck 2019).
Examining the agreement between real observations and simulations is the best way to assess the quality of simulations, and is crucial to test and optimize the physics modelled in simulations. Generally, the comparison between the distributions of galaxy properties retrieved from simulations and observations serves as a diagnostic tool.
In a multi-dimensional parameter space, the diagnostic is performed through galaxy scaling relations, for example the Tully-Fisher relation (Tully & Fisher 1977), the size-mass relation (Courteau et al. 2007), the stellar mass-halo mass relation (Moster et al. 2010; Moster et al. 2018), and the mass-metallicity relation (Tremonti et al. 2004; Gallazzi et al. 2005; Kirby et al. 2015), to name a few. The agreement between these scaling relations in real galaxies and in simulated galaxies is often used as a metric to tell whether a simulation is 'good'.
Although scaling relations are one of the most commonly used ways to assess the validity of cosmological simulations, they 'zip' billions of data points generated by a simulation down to a set of a few numbers, to be compared against the same few numbers also distilled from the huge amount of data coming from observations. Inevitably, much of the information contained in both simulations and observations is lost during this process. As a result, a simulation can sometimes fit one set of observed scaling relations while departing from another set of scaling relations at the same time. Even more challenging is the difficulty in quantitatively determining which set of scaling relations holds more significance when contradictions between them arise. It is then natural to look for alternative ways of data-model comparison in order to take full advantage of the large (spatial) resolution recently attained by both.
Indeed, great effort has been devoted to the analysis of galaxy images, with various metrics and statistics, both parametric ones such as the Sérsic parameters (Sérsic 1963), and non-parametric ones such as the Gini coefficient, M 20 , and bulge statistics (Gini-M 20 system, Lotz et al. 2004), concentration, asymmetry, and smoothness (CAS statistics, Conselice 2003), and multimode, intensity, and deviation (MID statistics, Freeman et al. 2013). The comparison between the distribution of these image-based parameters in mock images and that in real images has now started to serve as one of the crucial tools in the calibration of modern simulations (Snyder et al. 2015; Bottrell et al. 2017a,b; Rodriguez-Gomez et al. 2019; Bignone et al. 2020; de Graaff et al. 2022). Over the years, although more and more morphological parameters have been proposed, improving our understanding of galaxy images, it remains challenging to fully exploit the parameter space that characterizes a galaxy image in a supervised, human-driven way.
Image recognition and generation have been among the biggest highlights in the field of deep learning, from the early attempts to distinguish cats and dogs, to artificial intelligence (AI) face generators, and very recently to ChatGPT's sibling, an incredible drawing AI called DALL-E. Astrophysicists have also applied machine learning techniques to their science problems, especially those related to galaxy images (e.g. Dieleman et al. 2015; Storey-Fisher et al. 2021; Buck & Wolf 2021; Obreja et al. 2018; Buder et al. 2021; Cheng et al. 2021; Smith et al. 2022). A very recent work by Tohill et al. (2023) further shows that unsupervised machine learning trained on galaxy images can encode images into features, some of which are correlated with known morphological parameters such as the Sérsic index, asymmetry and concentration (see Tohill et al. 2023, Table 3).
In particular, deep learning anomaly detection algorithms have the potential to be very powerful when comparing simulated and real galaxy images. In such studies, real observed galaxy images are treated as 'normal' images and a neural network assigns 'anomaly scores' to simulated galaxy images, which quantify how realistic these simulated images are. Margalef-Bentabol et al. (2020) used a Wasserstein generative adversarial network (WGAN) (Arjovsky et al. 2017) to find outliers in the Horizon-AGN simulation (Dubois et al. 2014), with H-band CANDELS (Grogin et al. 2011; Koekemoer et al. 2011) images as the 'norm', and the WGAN loss as the anomaly score. Zanisi et al. (2021) further improved the performance of the anomaly detection algorithm by combining the output of two separate PixelCNN (Oord et al. 2016) networks to generate pixel-wise anomaly scores without sky background contamination. In their work, the Illustris simulation (Vogelsberger et al. 2014) and IllustrisTNG (Nelson et al. 2017) were compared to r-band Sloan Digital Sky Survey (SDSS) images, and disagreement in small-scale morphological details was spotted. In this work, we will utilize GANomaly (Akcay et al. 2019), featuring an encoder-decoder-encoder generator structure and a better-defined anomaly score, to compare NIHAO (Numerical Investigation of Hundred Astrophysical Objects) simulations (Wang et al. 2015) to tri-color SDSS RGB (g-r-i band) galaxy images. GANomaly is a straightforward, concise and effective way to derive anomaly scores that are only related to galaxy features and not to background noise, while at the same time locating the anomaly 1 .
This paper is organized as follows: In Section 2 we introduce the different sets of NIHAO simulations that will later be rated by GANomaly. In Section 3 the training set used in this work, the SDSS galaxy images, is reviewed. Section 4 outlines how GANomaly and the anomaly score work. Sections 5, 6, 7 and 8 present the main results of this paper, the comparison to scaling relations and morphological parameters, as well as additional discussion and interpretation of the anomaly scores. We conclude in Section 9 and explore the feature space behind GANomaly in Appendix A by trying to interpret the latent space with principal component analysis (PCA) and attach physical meaning to it.
SIMULATION
We make use of the "vanilla" version of the NIHAO simulation (hereafter 'NIHAO NoAGN') and its variations NIHAO AGN, NIHAO n80, and NIHAO UHD to make a comparison across different galaxy formation physics models, parameters and resolutions. These simulated galaxies are further mock observed into SDSS-style RGB images. Finally, the mock observed images are rated by GANomaly for their anomaly scores. Details on the NIHAO simulations and the mock observation scheme are presented below.
NIHAO NoAGN
The NIHAO (Numerical Investigation of Hundred Astrophysical Objects) simulation (Wang et al. 2015; Blank et al. 2019) is a suite of hydrodynamical cosmological zoom-in simulations powered by the GASOLINE2 code (Wadsley et al. 2017). NIHAO adopts a flat ΛCDM cosmology and parameters from the Planck satellite results (Planck Collaboration et al. 2014). NIHAO includes Compton cooling, photoionization from the ultraviolet background following Haardt & Madau (2012), star formation and feedback from supernovae (Stinson et al. 2006) and massive stars (Stinson et al. 2012), metal cooling, and chemical enrichment. A series of prior works has shown that NIHAO simulated galaxies reproduce galaxy scaling relations very well, including the stellar mass-halo mass relation (Wang et al. 2015), the disc gas mass and disc size relation (Macciò et al. 2016), the Tully-Fisher relation (Dutton et al. 2017), the diversity of galaxy rotation curves (Santos-Santos et al. 2017), and the mass-metallicity relation (Buck et al. 2021).
We refer to this basic version of the NIHAO simulations (detailed in Wang et al. 2015) as 'NIHAO NoAGN'. NIHAO NoAGN is the basis of the other variations of NIHAO described in the subsections below.
NIHAO AGN
As the name suggests, 'NIHAO NoAGN' does not contain active galactic nuclei (AGN) physics. Since it is widely accepted that black hole feedback is crucial in the quenching of elliptical galaxies (e.g. Croton et al. 2006; Dutton et al. 2015), Blank et al. (2019) introduced black hole formation, accretion and feedback to the NIHAO project. In NIHAO AGN, a black hole is seeded when the central halo exceeds a certain mass threshold, and then follows the accretion (Bondi 1952) and feedback model introduced by Springel et al. (2005), one of the most widely used and thus well-tested models. More details on the AGN implementation in NIHAO can be found in Blank et al. (2019), as well as in Waterval et al. (2022) for a nice summary.
Practically, NIHAO AGN is a re-run of NIHAO NoAGN, with the exact same initial conditions, parameters, and physics, except for the additional AGN implementation, thus providing an AGN counterpart for every vanilla NIHAO galaxy. This makes it ideal for testing the effect of the implemented AGN model by comparing AGN and NoAGN counterparts with scaling relations (Frosst et al. 2022; Waterval et al. 2022), or now even better, with our anomaly scores.
NIHAO n80
Galaxy formation involves a huge dynamical range, from molecular clouds to the large scale environment, making it impossible to fully resolve some of the key processes. Effective models, often with parameters and thresholds, are usually adopted in cosmological numerical simulations to handle this sub-resolution problem (Springel & Hernquist 2003). For example, star formation is usually modeled by a density threshold n, in particles per cm 3 : gas particles start to turn into star particles, i.e. form stars, only when this threshold is reached. Although in the real Universe the 'expected' threshold is above 10 5 cm −3 (McKee & Ostriker 2007), such a density is out of reach even for the highest resolution simulations of spiral galaxies, as Vogelsberger et al. (2020) reviewed. In fact, current leading cosmological simulations tend to use thresholds around 0.1 − 1 cm −3 at the lower end, such as EAGLE (Schaye et al. 2014), Illustris (Vogelsberger et al. 2014), and IllustrisTNG (Nelson et al. 2017), and around 10 − 100 cm −3 at the higher end, such as Governato et al. (2010), FIRE (Oñorbe et al. 2015), Brook & Di Cintio (2015), VINTERGATAN (Agertz et al. 2021) and NIHAO (Wang et al. 2015). The exact value of the threshold is usually tuned via galaxy scaling relations.
All other NIHAO simulations described in this work, NIHAO NoAGN, NIHAO AGN, and NIHAO UHD, use n = 10 cm −3 . To explore other values of the threshold, Macciò et al. (2022) produced a few re-runs of NIHAO NoAGN galaxies with n = 80 cm −3 . To clarify, these NIHAO n80 galaxies do not include AGN physics and have the same (mass) resolution as NIHAO NoAGN and NIHAO AGN, but have the threshold set to 80 cm −3 instead of 10 cm −3 . For an extensive study of the impact of the star formation threshold on the properties of NIHAO galaxies see Dutton et al. (2019); Buck et al. (2019b); Dutton et al. (2020).
NIHAO UHD
NIHAO NoAGN already has quite good resolution: dark matter particle masses range from m dm = 3.4 × 10 3 M ⊙ for dwarf galaxies to m dm = 1.4 × 10 7 M ⊙ for the most massive galaxies. The ratio between dark and gas particle masses is initially the same as the cosmological dark/baryon mass ratio, Ω dm /Ω b ≈ 5.48. The gas and star particle force softenings are set to be approximately 2.34 times smaller than those of the dark matter particles (Wang et al. 2015). Additionally, a few Milky-Way-like galaxies were selected to be re-simulated at even higher resolution (m dm ≈ 10 5 M ⊙ ). Buck et al. (2020) introduces the NIHAO UHD (Ultra High Definition) suite, which contains higher resolution counterparts of six NIHAO NoAGN galaxies, with the same initial conditions, parameters (n = 10 cm −3 ), and physics (no AGN). Those galaxies demonstrate the excellent convergence of the NIHAO simulations and show good agreement with the satellite mass function of the MW and M31 (Buck et al. 2019a) and MW bulge properties (Buck et al. 2018, 2019c).
Mock observation images
In this work, we use 77 NIHAO NoAGN galaxies, 77 NIHAO AGN galaxies, 12 NIHAO n80 galaxies, and 6 NIHAO UHD galaxies, with each galaxy projected along 20 randomly oriented axes. These simulated galaxies are further mock observed in the SDSS g, r, i bands as 64 × 64 × 3 galaxy images, first through SKIRT (Camps & Baes 2020) radiative transfer following the same methodology as in Faucher et al. (2023), and then post-processed based on RealSim (Bottrell et al. 2017a,b, 2019) to arrive at realistic mock images.
SKIRT is one of the most widely used radiative transfer codes to produce idealized synthetic galaxy images from simulations. For each star particle, assuming a Chabrier (Chabrier 2003) initial mass function (IMF), we assign a spectral energy distribution (SED) from FSPS (Conroy et al. 2009, 2010; Foreman-Mackey et al. 2015) using the MIST isochrones (Paxton et al. 2011, 2013, 2015; Choi et al. 2016; Dotter 2016) and the MILES spectral library (?) according to its age, metallicity and mass. Photons are sampled from the SED, launched isotropically in the rest frame of the particle, and subsequently Doppler shifted. To reduce the stochasticity of the star-formation histories caused by modeling populations of stars as single particles, we implement a subgrid recipe that effectively smooths out the simulated star formation histories such that the typical difference in age between two neighboring (in the temporal sense) young star particles is less than ∼ 1 Myr, the timescale on which stellar population spectra show significant variation. For star particles younger than 10 Myr, we also need to account for the absorption and emission by dust within photodissociation regions (PDRs) that result from the remaining birth clouds of newly formed stars. Because these regions are below the spatial resolution of the simulations, we adopt the commonly used method (Groves et al. 2008; Jonsson et al. 2010; Hayward & Smith 2015; Trayford et al. 2017; ?; Kapoor et al. 2021; Faucher et al. 2023) of assigning SEDs from MAPPINGS-III that already include the effects of photoionization and obscuration within these dense molecular clouds. This model is characterized by a single free parameter which describes the clearing time of molecular clouds and is taken to be 2.5 Myr following Faucher et al. (2023). Because the NIHAO simulations do not directly model the dust population, we assume that each gas particle contains a dust mass given by 10% of its mass in metals. We also assume that no dust is present in gas above a maximum temperature of 16,000 K. To perform the radiative transfer calculations, we discretize space using an octree spatial grid that we subdivide until each grid cell contains at most one gas particle. The physical apparent size of each galaxy image is determined by the distribution of dark matter particles belonging to the galaxy's primary halo, as determined by the AHF halo finder (Knollmann & Knebe 2009). The image pixel scale of the NIHAO mock images ranges from 0.58 to 3.74 kpc/pixel. More details and example outputs of our radiative transfer procedure can be found in Faucher et al. (2023).
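To make the per-particle SED assignment concrete, a minimal sketch using the python-fsps bindings is shown below. The specific settings (Chabrier IMF flag, metallicity interpolation) and the example particle values are illustrative assumptions; the isochrone and spectral-library choices and the sub-grid smoothing of young star particles used for the actual mock images follow Faucher et al. (2023) and are not reproduced here.

```python
import fsps  # python-fsps bindings to FSPS (Conroy et al.)

# Illustrative SED assignment for a single star particle.  The settings below
# (Chabrier IMF, interpolation of spectra in log metallicity) are placeholder
# choices standing in for the full configuration used for the mock images.
sp = fsps.StellarPopulation(zcontinuous=1,  # interpolate spectra in log Z
                            imf_type=1)     # Chabrier (2003) IMF

def particle_sed(age_gyr, log_z_sol, mass_msun):
    """Return wavelength [AA] and luminosity density scaled by particle mass.
    FSPS spectra are per solar mass formed, so we multiply by the mass."""
    sp.params['logzsol'] = log_z_sol
    wave, spec = sp.get_spectrum(tage=age_gyr, peraa=True)  # L_sun / AA / M_sun
    return wave, mass_msun * spec

# Hypothetical particle: 2 Gyr old, slightly sub-solar metallicity, 1e5 Msun.
wave, lum = particle_sed(age_gyr=2.0, log_z_sol=-0.3, mass_msun=1e5)
```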
We further add observational realism including point spread function (PSF) convolution, shot noise, Gaussian sky noise, and an arcsinh stretch, based on the RealSim code by Bottrell et al. (2017a,b, 2019). To convolve with the SDSS PSF, we adopt a Gaussian PSF with full width at half-maximum (FWHM) at the average seeing of all SDSS Legacy galaxies (1.286 ′′ , 1.356 ′′ , and 1.496 ′′ for the three SDSS bands). The physical width of the simulated galaxies is converted to angular size by hypothetically placing them at a distance corresponding to redshift 0.109, the mean redshift of our SDSS training sample. The shot noise is a Poisson noise determined by zeropoints, airmass, extinction, and CCD gain in the survey fields. The Gaussian sky noise is obtained from the average sky noise over all Legacy galaxies. Lastly, an arcsinh stretch proposed by Lupton et al. (2004) is implemented to follow the SDSS standard imaging scheme.
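A simplified stand-in for these realism steps is sketched below: Gaussian PSF convolution, Poisson shot noise, Gaussian sky noise, and the Lupton et al. (2004) arcsinh stretch via astropy. The seeing, noise level, pixel scale and input images in the sketch are placeholders rather than the survey-field quantities actually used by the RealSim-based pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from astropy.visualization import make_lupton_rgb

def add_realism(img_counts, fwhm_arcsec, pixscale_arcsec, sky_sigma, rng):
    """Gaussian PSF convolution plus shot and sky noise (simplified)."""
    sigma_pix = fwhm_arcsec / 2.355 / pixscale_arcsec     # FWHM -> Gaussian sigma
    img = gaussian_filter(img_counts, sigma_pix)
    img = rng.poisson(np.clip(img, 0, None)).astype(float)  # shot noise
    img += rng.normal(0.0, sky_sigma, img.shape)             # Gaussian sky noise
    return img

rng = np.random.default_rng(0)
# Placeholder idealized band images (64 x 64) in counts, e.g. SKIRT output:
bands = {b: 1000.0 * np.ones((64, 64)) for b in ("g", "r", "i")}
# Placeholder seeing/noise; the per-band average SDSS values quoted in the
# text and the survey-field shot-noise model were used in practice.
realistic = {b: add_realism(img, fwhm_arcsec=1.4, pixscale_arcsec=0.396,
                            sky_sigma=5.0, rng=rng) for b, img in bands.items()}
# Lupton et al. (2004) arcsinh stretch to build the RGB image (i->R, r->G, g->B):
rgb = make_lupton_rgb(realistic["i"], realistic["r"], realistic["g"])
```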
SDSS galaxy cutouts
SDSS (Blanton et al. 2017) is one of the largest ongoing surveys to map our Universe. The SDSS cutout tool enables one to obtain RGB image slices at a desired position and width. The red color in SDSS images comes from the SDSS near-infrared (i) filter (7625 Å), the green color from the SDSS red (r) filter (6231 Å), and the blue color from the SDSS green (g) filter (4770 Å). To get the galaxy images, we use the galaxy catalog by Meert et al. (2014), which provides the coordinates and stellar masses for 670,722 galaxies. Through the SDSS cutout tool 2 , we slice 64 × 64 pixels around each galaxy's coordinates at the pixel scale of the SDSS camera (0.396 ′′ /pix). Examples of SDSS images can be found in Fig. 3.
We further place a cut on stellar mass at 10 9 M ⊙ to avoid contamination from stars. These SDSS samples have redshifts from 0.005 to 0.395, with a mean redshift of 0.109, to be compared with NIHAO snapshots at redshift 0. Finally, these images are split into a training set of 579,197 images and a test set of 64,356 images. During the training phase, the neural network only sees training set images. After GANomaly is fully trained, the test set is used to validate the training performance and in the analysis presented in this paper.
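The selection and split described above amount to a simple filtering step. A schematic version with a hypothetical catalog array is given below; the real column names of the Meert et al. (2014) catalog and the exact split procedure used in this work may differ.

```python
import numpy as np

# Schematic stellar-mass cut and train/test split of the SDSS cutout sample.
# 'catalog' is a hypothetical stand-in; real column names may differ.
rng = np.random.default_rng(42)
n_gal = 643_553                                    # 579,197 + 64,356 (see text)
catalog = {"logMstar": rng.uniform(8.5, 12.0, n_gal)}

keep = catalog["logMstar"] > 9.0                   # M* > 1e9 Msun cut
idx = np.flatnonzero(keep)
rng.shuffle(idx)
n_test = int(0.1 * idx.size)                       # roughly the 90/10 split in the text
test_idx, train_idx = idx[:n_test], idx[n_test:]
```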
NIHAO vs. SDSS sample statistics
All NIHAO galaxies presented in this work are selected to have stellar mass M* > 10⁹ M⊙, in compliance with the stellar mass cut made on the SDSS galaxies (77 out of 127 pairs of NIHAO NoAGN/AGN galaxies, 12 out of 20 NIHAO n80 galaxies, and 6 out of 6 NIHAO UHD galaxies survive the stellar mass cut). Fig. 1 shows the stellar mass distribution of NIHAO and SDSS galaxies. All three samples share a similar range in stellar mass, but the exact distribution over the mass range differs. The mass distributions of NIHAO NoAGN and NIHAO AGN are more or less the same, especially at the lower end, since AGN do not play a key role in lower mass galaxies compared to high-mass galaxies. Both NIHAO samples by construction have a relatively even distribution over the mass range, with a slightly higher number of higher mass galaxies, whereas SDSS has a peak in the middle mass bin and deficits in the low and high mass bins. Since GANomaly learned purely from SDSS images, it is reasonable that the neural network recognizes medium-mass galaxies slightly better than lower mass and high-mass galaxies. We are aware of this selection effect, and we will compare galaxies that are in the same mass bins to overcome this issue. More discussion on how stellar mass affects the anomaly score can be found in Section 5. NIHAO UHD and NIHAO n80 galaxies will be handled individually later in this paper, since their populations are limited. Their stellar masses are very similar to their NIHAO NoAGN counterparts. The script to download the full SDSS dataset is provided at https://github.com/ZehaoJin/Rate-galaxy-simulation-with-Anomaly-Detection/blob/main/SDSS_cutouts/download_cutouts.py.
Figure 1.The normalized stellar mass distribution for SDSS (green shade), NIHAO NoAGN (blue line), NIHAO AGN (orange line).The same color scheme will be used throughout this paper.The four red dashed vertical lines further enclose each sample into three (low, middle, high) mass bins.Galaxies in the same mass bin will be compared to each other in the later analysis.It is clear that SDSS's mass distribution peaks around 10 11 M ⊙ , while NIHAO's mass distribution is by construction more flat over all masses, with a slight excess on the higher end.
Anomaly detection by reconstruction
GANomaly (Akcay et al. 2019) is a Generative Adversarial Network (GAN) (Goodfellow et al. 2014) based anomaly detection model inspired by AnoGAN (Schlegl et al. 2017), BiGAN (Donahue et al. 2016), and EGBAD (Zenati et al. 2018).GANomaly detects anomaly by image reconstruction.GANomaly is trained to reconstruct normal (non-anomalous) images by learning the commonly shared features in the set of normal images.After training is finalized, GANomaly should only be able to reconstruct normal images, but fail to reconstruct any anomaly.Hence, comparison between original and reconstructed will reveal the anomaly.
Network architecture
Specifically, Fig. 2 visualizes the architecture of GANomaly. As a variation of a GAN, GANomaly is made of a generator network (encoder G_E and decoder G_D) and a discriminator network D, with an additional encoder E. An input image x (64 × 64 × 3 in this work) first goes through the encoder part of the generator, G_E, and is encoded, or summarized, as z (1 × 128 in this work), i.e. the feature space representation of x. z then becomes the input of the decoder G_D, which outputs x̂ (64 × 64 × 3), the reconstructed version of x. Finally, the reconstructed x̂ is sent to another encoder E and encoded into ẑ (1 × 128), the feature representation of the reconstructed x̂. Meanwhile, the discriminator D takes both the original input image x and the reconstructed image x̂, without being told which is which, and tries to tell which one is the real input image and which one is the fake image generated by the generator. The generator and the discriminator are thus in rivalry and grow together: the generator tries to fool the discriminator by generating more and more realistic images, while the discriminator learns to stay sharp and distinguish generated images from the real ones.
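For concreteness, a minimal PyTorch sketch of this encoder–decoder–encoder layout is given below; the layer sizes and normalization choices are illustrative assumptions rather than the exact network of Akcay et al. (2019), and the discriminator (a standard convolutional classifier) is omitted:

```python
import torch
import torch.nn as nn

# Schematic GANomaly-style building blocks: 64x64x3 images, 128-d latent.

def conv_encoder(latent_dim=128):
    # Spatial size: 64 -> 32 -> 16 -> 8 -> 4 -> 1
    return nn.Sequential(
        nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(256, 512, 4, 2, 1), nn.BatchNorm2d(512), nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(512, latent_dim, 4, 1, 0),   # -> (latent_dim, 1, 1)
    )

def conv_decoder(latent_dim=128):
    return nn.Sequential(
        nn.ConvTranspose2d(latent_dim, 512, 4, 1, 0), nn.BatchNorm2d(512), nn.ReLU(True),
        nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.ReLU(True),
        nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
        nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
        nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
    )

class Generator(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.G_E = conv_encoder(latent_dim)   # x -> z
        self.G_D = conv_decoder(latent_dim)   # z -> x_hat

    def forward(self, x):
        z = self.G_E(x)
        x_hat = self.G_D(z)
        return x_hat, z

# Second encoder E (x_hat -> z_hat) reuses the same encoder layout.
E = conv_encoder(128)

x = torch.randn(8, 3, 64, 64)
G = Generator()
x_hat, z = G(x)
z_hat = E(x_hat)
print(x_hat.shape, z.flatten(1).shape, z_hat.flatten(1).shape)
```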
Loss and anomaly score
To reach the goal of successful image reconstruction, three loss functions are defined and minimized during training.
Adversarial Loss, L_adv: The adversarial loss is a feature-matching loss, L_adv = ||f(x) − f(x̂)||₂, (1) where f is an intermediate layer inside the discriminator D. This loss function computes the L2 distance between the feature representations of the original and the generated images. (A simple binary adversarial loss would work in GANomaly too.) Note that although both are feature representations of x, f(x) is different from z: f(x) comes from a layer in the discriminator, and these features will be trained to best help distinguish real and generated images; z, on the other hand, comes from the encoder, and its features will best serve the reconstruction of x.
Contextual Loss, L_con: The contextual loss directly compares the input image and the generated image by an L1 distance, L_con = ||x − x̂||₁. (2) Minimizing L_con simply pushes the input x and the reconstruction x̂ to be as identical as possible, so that the contextual information of normal images is learned.
Encoder Loss, L_enc: The encoder loss is the L2 distance between the encoded feature representations of the original x and the reconstructed x̂, L_enc = ||z − ẑ||₂. (3) Minimizing L_enc lets the generator learn how to grasp the features of a non-anomalous image.
Overall, the training goal of GANomaly is to minimize the weighted sum of the three losses, L = w_adv L_adv + w_con L_con + w_enc L_enc, (4) where the weights w_adv, w_con, and w_enc enable the adjustment of the relative importance of the three losses.
The anomaly score of an input x, A(x), however, does not use the collective loss function, but only the difference in feature space, L_enc: A(x) = ||z − ẑ||₂. (5) The contextual loss L_con, although not directly linked to the anomaly score, can be used to infer the location of the anomaly.
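The sketch below spells out Equations (1)–(5) in code, assuming tensors x, x_hat, z, z_hat and discriminator features from the architecture sketch above; the weights default to the values quoted in the training section:

```python
import torch
import torch.nn.functional as F

def ganomaly_losses(x, x_hat, z, z_hat, feat_real, feat_fake,
                    w_adv=1.0, w_con=50.0, w_enc=1.0):
    """Weighted GANomaly generator loss, Eqs. (1)-(4).

    feat_real / feat_fake are intermediate discriminator features f(x), f(x_hat).
    """
    l_adv = F.mse_loss(feat_fake, feat_real)          # Eq. (1), L2 feature matching
    l_con = F.l1_loss(x_hat, x)                       # Eq. (2), pixel-wise L1
    l_enc = F.mse_loss(z_hat, z)                      # Eq. (3), latent-space L2
    total = w_adv * l_adv + w_con * l_con + w_enc * l_enc   # Eq. (4)
    return total, (l_adv, l_con, l_enc)

def anomaly_score(z, z_hat):
    """Per-image anomaly score, Eq. (5): latent-space distance only."""
    return torch.linalg.vector_norm(z.flatten(1) - z_hat.flatten(1), dim=1)

def normalize_scores(scores, ref_scores):
    """Min-max normalize scores by a reference (e.g. SDSS) sample."""
    lo, hi = ref_scores.min(), ref_scores.max()
    return (scores - lo) / (hi - lo)
```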
Training
GANomaly is trained only on the training set of 579,197 real SDSS images. The input/output (x, x̂) dimension is set to 64 × 64 × 3, and the feature space (z, ẑ) dimension is chosen to be 1 × 128. Note that the dimension of the feature space is one of the hyper-parameters that are somewhat arbitrary and could be tuned. The 'features' are not fully independent and orthogonal to each other, and thus the feature space is hard to interpret. The encoder could summarize an image in 128 numbers, but could also fully summarize the same image in 64 or 256 numbers. The neural network is trained over 50 iterations over the training set, with loss weights w_adv = 1, w_con = 50, and w_enc = 1, batch size 64, learning rate 0.0002, and an Adam optimizer with β₁ = 0.5. The whole training took around 150 hours on dual NVIDIA Quadro P1000 GPUs with approximately 4 GB of memory.
Overview
After training on the SDSS training set, we apply GANomaly to the SDSS test set and to our NIHAO images. Note that the SDSS test set images have never been seen by GANomaly during the training phase, and hereafter all SDSS images mentioned are from the SDSS test set. A gallery of typical GANomaly output is shown in Fig. 3 in [original, reconstructed, residual] format, with the anomaly score A on top. Note that the anomaly score is normalized by the lowest and highest scores in the SDSS sample, thus ranging between 0 and 1 for SDSS (the highest anomaly score case, 1, is the SDSS 'black view' case). A non-SDSS image can get scores higher than 1 if it is even more abnormal than the 'black view' case, e.g. the apple in Fig. 3. A lower anomaly score means the image has less anomaly in feature space, or, naively, a galaxy image with a lower anomaly score is more realistic. As shown in the gallery, SDSS galaxy images are reconstructed nicely, with tiny residuals and very low anomaly scores. However, GANomaly fails to reconstruct any anomaly or non-SDSS galaxy, such as an apple, a black view, or a cosmic ray instance (bottom row in Fig. 3). NIHAO simulated galaxies are reconstructed fairly nicely according to the residual maps and anomaly scores. The anomaly score distribution of all SDSS test set images versus that of all NIHAO NoAGN and NIHAO AGN images is shown in Fig. 4. Both the NIHAO NoAGN and NIHAO AGN distributions overlap the SDSS distribution by around 70 per cent, but neither of them is able to reach the very low scores where the SDSS distribution peaks, and both have a larger tail than the SDSS distribution. That indicates NIHAO simulations are generally realistic galaxy simulations, but still have room to improve. Below, a more careful interpretation of anomaly scores across different sets of NIHAO simulations is presented.
NIHAO NoAGN vs. NIHAO AGN in mass bins
Since the stellar mass is not evenly distributed in the SDSS training set (Fig. 1), and different stellar masses can lead to distinct galaxy morphologies, GANomaly will favor galaxies with certain stellar masses, as shown in Fig. 5. Lower mass galaxies, due to their weaker gravitational potential, are intrinsically more irregular or abnormal in terms of morphology compared to more massive galaxies in the middle and high mass bins. The intrinsic anomaly and the unbalanced stellar mass population in the training set together make SDSS galaxies in the low mass bin have higher anomaly scores than in the mid and high mass bins. For a fair comparison, we put both SDSS and NIHAO galaxies into low, middle, and high mass bins and compare the distributions in the same mass range. AGN feedback is believed to be essential in the quenching of massive galaxies, thus one would expect little difference in anomaly score between NIHAO NoAGN and NIHAO AGN in the low mass bin, while NIHAO AGN's anomaly score should outperform that of NIHAO NoAGN as we go to higher stellar mass, assuming the AGN implementation is realistic. Fig. 6 shows the anomaly score distribution for NIHAO NoAGN, NIHAO AGN, and SDSS in each of the three mass bins. The overlapping area (in per cent of the SDSS area) between NIHAO and SDSS for AGN vs. NoAGN in the low, middle, and high mass bins is 62 per cent vs. 68 per cent, 66 per cent vs. 59 per cent, and 74 per cent vs. 63 per cent. The plot shows that neither NoAGN nor AGN significantly outperforms the other in any of the mass bins. The overlapping area implies that in the low mass bin there is little difference between NIHAO NoAGN and NIHAO AGN since black holes do not play a key role there, as expected. In the middle mass bin, and especially in the high mass bin, AGN starts to show a slight advantage over NoAGN. Visually, the advantage of AGN in the high mass bin comes from AGN reproducing the second peak around anomaly scores of 0.01–0.03 a little better than NoAGN does.
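The 'overlapping area' quoted above can be computed from normalized anomaly-score histograms; a minimal sketch (the number of bins and the common binning are our own assumptions) is:

```python
import numpy as np

def overlap_percent(scores_sim, scores_sdss, bins=50):
    """Histogram overlap between two score samples, in per cent of the SDSS area."""
    edges = np.histogram_bin_edges(np.concatenate([scores_sim, scores_sdss]), bins=bins)
    h_sim, _ = np.histogram(scores_sim, bins=edges, density=True)
    h_sdss, _ = np.histogram(scores_sdss, bins=edges, density=True)
    widths = np.diff(edges)
    overlap = np.sum(np.minimum(h_sim, h_sdss) * widths)   # area under the minimum
    sdss_area = np.sum(h_sdss * widths)                    # = 1 for a density histogram
    return 100.0 * overlap / sdss_area

# Toy example with two made-up score samples.
rng = np.random.default_rng(0)
print(overlap_percent(rng.normal(0.02, 0.01, 2000), rng.normal(0.015, 0.01, 2000)))
```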
To summarize, GANomaly hints at a small improvement in the modeling of higher mass galaxies with the AGN implementation, but generally draws a tie between the runs with and without AGN in their overall performance. Similarly, using PixelCNN and its log-likelihood ratio (LLR) distribution (similar to the anomaly score distribution here) to compare SDSS images, Zanisi et al. (2021) found only a marginal improvement for quiescent galaxies from Illustris to IllustrisTNG despite their distinct AGN feedback implementations. GANomaly takes in mock observed galaxy images, i.e. normalized SDSS g, r, i light distribution maps; therefore, the current AGN implementation in NIHAO seems to have 'no net effect' on normalized galaxy light distribution maps according to GANomaly. In Section 7 we will present more on how anomaly scores seem to be 'insensitive' to the current AGN models in NIHAO.
Effect of star formation threshold
Among the 81 NIHAO NoAGN galaxies used in this work, we have 12 galaxies that have NIHAO n80 counterparts. These NIHAO n80 galaxies have no AGN implementation and use a star formation density threshold of 80 cm⁻³ instead of the 10 cm⁻³ used in NIHAO NoAGN (see Section 2.3). In this section, we will refer to the comparison between these 12 pairs of galaxies as NIHAO n80 vs. NIHAO n10 for clarity. To investigate the effect of this threshold, we compare the anomaly score performance of the 80 cm⁻³ versus the 10 cm⁻³ runs galaxy by galaxy, as shown in Fig. 7. For these 12 pairs of galaxies, the ones with lower mass (M* < 10¹⁰ M⊙) show no clear difference between n80 and n10 in anomaly score, while the ones with higher mass (M* > 10¹⁰ M⊙) show better (lower) anomaly scores in NIHAO n10. This implies that 10 cm⁻³ seems to be a better choice than 80 cm⁻³ in a NIHAO NoAGN-like setup for high mass galaxies. This is an interesting result that differs from Macciò et al. (2022), which demonstrated that 80 cm⁻³ is a better choice based on an investigation of the gas maps instead of the stellar light maps looked at in this work. The varied outcomes implied from diverse perspectives indicate that one single metric is not enough to tune effective parameters or thresholds in cosmological simulations, and the optimal value of the threshold is worth more study in future work.
Effect of resolution
In a similar fashion as in the n80 case, we compare 6 pairs of NIHAO NoAGN (referred to as 'NIHAO HD' in this section) and NIHAO UHD galaxies. As a reminder, the only difference between them is that NIHAO UHD has a higher resolution than NIHAO HD. Fig. 8 shows that in all 6 galaxies the anomaly scores are almost identical across the different resolutions. This is because both HD and UHD effectively have mock images of the same resolution after convolving with the PSF. The same level of anomaly score performance shows that NIHAO HD is able to produce SDSS-style galaxy images as nice as those of NIHAO UHD at a significantly lower computational cost.
Figure 3. A gallery of GANomaly performance. On each panel of three images, left is the input original image, middle is the GANomaly reconstructed image, right is the residual between input and reconstruction, and on top is the anomaly score. The anomaly score comes from the normalized encoder loss L_enc, while the residual in the rightmost panel is the pixel-wise contextual loss. A smaller anomaly score indicates the galaxy image is more realistic compared to SDSS galaxy images. The residual in the right hand panels hints at the location of the anomaly, but note that there is no one-to-one relation between residual and anomaly score (see Section 8). All images are randomly selected. 1st–3rd rows: SDSS test set galaxy images from the low, middle and high mass bins; 4th row: NIHAO NoAGN galaxies in each mass bin; 5th row: NIHAO AGN galaxies in each mass bin; 6th row: NIHAO n80 and NIHAO UHD galaxies; 7th row: sanity check with an apple and two abnormal SDSS test set images, one where no galaxy is observed and the whole field of view is black, and one where a cosmic ray strikes through the camera.
We also present the face-on galaxy images and GANomaly reconstructions for two NIHAO galaxies that have counterparts in all of the NoAGN, AGN, n80 and UHD samples in Fig. 9. The mean anomaly score for every NIHAO galaxy and every SDSS test galaxy is visualized in Fig. 10. Again in terms of anomaly scores, NIHAO NoAGN vs. NIHAO AGN draws a tie with a marginal advantage for NIHAO AGN at higher masses, NIHAO n10 (i.e. NIHAO NoAGN) wins over NIHAO n80 at higher masses, and NIHAO UHD vs. NIHAO HD (i.e. NIHAO NoAGN) draws a tie.
SCALING RELATIONS
Compliance with scaling relations is a commonly used criterion to rate simulated galaxies, and it is natural to ask whether the scaling relation criteria agree with GANomaly anomaly scores. Arora et al. (2023) compares simulated NIHAO galaxies with ∼ 2600 late-type galaxies from the Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey (Bundy et al. 2015). The comparisons are performed using multi-dimensional structural scaling relations with quantities such as size (R), stellar mass (M*), rotational velocity (V), and stellar surface density within 1 kpc (Σ₁). R, M*, and Σ₁ are estimated using optical grz photometry from the DESI survey (Arora et al. 2021), while the velocity measurements are tanh model fits to the MaNGA velocity maps (see Arora et al. 2023, for more details). For the comparisons all quantities, with the exception of Σ₁, are measured at a radius corresponding to a stellar mass surface density of 10 M⊙ pc⁻². The choice of this physically-motivated size metric allows for a uniform comparison between the simulations and observations of galaxies (after considering observed errors).
In Fig. 11, we use the 30 NIHAO AGN galaxies that are common between the analysis presented here and that in Arora et al. (2023). NIHAO galaxies generally agree well with the MaNGA observations in terms of scaling relations, and none of the galaxies have an exceptionally high anomaly score. However, it turns out that the anomaly score of a galaxy is not correlated with how well that galaxy follows any of the scaling relations: a simulated galaxy that lies right on a scaling relation can have a higher anomaly score than a galaxy that seems to be off the scaling relation. Naively, one would assume that a galaxy that fits closer to a scaling relation is more realistic than a galaxy that is further away, but the anomaly score indicates that this assumption is not always correct. In other words, GANomaly and the anomaly score are not parallel, but complementary, to scaling relations. GANomaly examines full galaxy images, or light distribution maps, and as such the joint distribution of stellar positions, ages, metallicities and masses, which cannot be tested through traditional scaling relations. This also suggests that although modern galaxy simulations can reproduce many observed scaling relations, such success in a few statistical quantities does not guarantee a realistic galaxy image. To rate the quality of a galaxy simulation, getting the light map correct could be as crucial as getting traditional scaling relations right. As we pursue increasingly precise simulations and a more complete picture of galaxy formation models, it is necessary to make use of both traditional scaling relations and deep learning driven image anomaly detection techniques to get a better understanding of simulations and thus to learn a more complete picture of our Universe.
MORPHOLOGICAL PARAMETERS
Scaling relations are complementary to anomaly scores, but morphological parameters are expected to be more in alignment with anomaly scores, as they both come from galaxy images. Here we investigate what the machine learning model may learn in addition to the traditional methods applied in studies of mock images so far. We are mainly looking at the Gini–M20 statistics (Gini coefficient, M20, and bulge statistic, Lotz et al. 2004), the CAS statistics (concentration, asymmetry, and smoothness, Conselice 2003), and the MID statistics (multimode, intensity, and deviation, Freeman et al. 2013). A nice review of the definitions of these parameters can be found in Rodriguez-Gomez et al. (2019). We calculate these morphological parameters for both the NIHAO and SDSS images.
Figure 9. Two NIHAO galaxies, named g7.08e11 and g8.26e11, presented in their NoAGN, AGN, n80 and UHD versions. In each set of images, left is the original galaxy image, middle is the reconstruction by GANomaly, right is the residual between original and reconstruction, and on top are the name and anomaly score. Note that the projection axes are different across the versions, thus a direct comparison of anomaly scores between these images is not fair. Besides, the residual in the right hand panels is not necessarily linked to the anomaly score, and thus a large reconstruction residual does not explain a large anomaly, as explained in Section 4.3.
In Figure 12 we compare anomaly scores to morphological parameters across NIHAO and SDSS. Many of the morphological parameters agree with the anomaly score, such as the MID statistics. In such cases NIHAO galaxies have a better anomaly score when the distribution of morphological parameters matches that of the SDSS galaxies better. Furthermore, the general performance of NIHAO NoAGN and NIHAO AGN is quite similar in terms of most morphological parameters; this is in agreement with what is found in Section 5.2 based on anomaly scores. However, anomaly scores seem to disagree with some other morphological parameters. For example, in the case of asymmetry and smoothness, the lowest anomaly scores for NIHAO AGN are found at the highest masses, where the distributions of asymmetry and smoothness of NIHAO AGN and SDSS diverge. Besides, in the case of M20, although the distribution of M20 for NIHAO and SDSS matches fairly well across all mass bins, the anomaly score can still vary. The agreement between morphological parameters and anomaly scores shows that there are some overlaps between the two approaches, while the disagreement between morphological parameters and anomaly scores suggests that the two approaches are not equivalent.
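As a rough illustration of these image statistics, the sketch below computes the Gini coefficient of a set of pixel fluxes following the Lotz et al. (2004) definition; in practice the statistic is computed over the galaxy's segmentation map, which is skipped here:

```python
import numpy as np

def gini(pixel_fluxes):
    """Gini coefficient of a set of galaxy pixel fluxes (Lotz et al. 2004)."""
    f = np.sort(np.abs(np.ravel(pixel_fluxes)))
    n = f.size
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * f) / (f.mean() * n * (n - 1))

# Sanity checks: perfectly uniform light -> Gini ~ 0,
# all light concentrated in a single pixel -> Gini ~ 1.
flat = np.ones(100)
concentrated = np.zeros(100)
concentrated[0] = 100.0
print(gini(flat), gini(concentrated))
```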
Unlike scaling relations, which are based on physical properties, morphological parameters and anomaly scores are both based on galaxy images. It is then not surprising that scaling relations and anomaly scores are complementary, while morphological parameters and anomaly scores align more closely, sharing many overlaps. However, morphological parameters are supervised metrics to characterize galaxy images, while GANomaly is instead an unsupervised approach that summarizes a galaxy image into 128 features. The supervised approach by construction guarantees more interpretability, while the unsupervised approach aims to exploit the full information of galaxy images as much as possible. GANomaly, which is optimized to extract features that can fully reconstruct an SDSS-style galaxy image, can potentially exploit more information from a galaxy image than a set of human-defined morphological parameters. The exact connection between the anomaly score and morphological parameters can be explored by correlating the GANomaly feature space with morphological parameters. However, the current GANomaly feature space size of 128 is too large to be easily interpreted. In Appendix A we attempt to bring the dimension of the feature space down by a principal component analysis (PCA). In an upcoming work, we are introducing sparsity into the GANomaly feature space to encourage the feature space to automatically shrink to an optimal size. The sparse-GANomaly feature space will be more interpretable, and its connection to morphological parameters will be studied thoroughly, allowing the full exploitation of mock images while maximizing interpretability.
We note that the high asymmetry and smoothness values at the low-mass end of the galaxy population could potentially be caused by the choice of a single Gaussian PSF instead of the non-Gaussian PSF of real SDSS images (Stoughton et al. 2002; Xin et al. 2018), since the clumpy star-forming regions are often compact in size, as shown in Bignone et al. (2020); de Graaff et al. (2022). It is possible that the increased asymmetry values also affect the anomaly score, which is a collective metric that consists of 128 features. We will explore the weights between the different features in further work.
BACKGROUND
Staring at any 'galaxy image' referred to in this work (e.g. Figs 3, 9), one might notice that a large amount of the image area is occupied by background (black space and satellites), not by the galaxy of interest. Zanisi et al. (2021) pointed out that the background has a non-negligible impact on the anomaly judgement of a single PixelCNN. PixelCNN works analogously to GANomaly's contextual loss L_con, in which the original and reconstructed images are compared pixel by pixel. Any difference in any pixel will be reflected in the anomaly score, regardless of whether the pixel belongs to the galaxy or the background. Zanisi et al. (2021) had to resolve this issue by training two separate PixelCNNs. However, in GANomaly the anomaly score is defined only by the encoder loss L_enc, the difference between the original and reconstructed features. Any noise in the background that is not a common feature among the training set should not significantly affect the anomaly score. For example, in Fig. 13, a randomly chosen SDSS galaxy has its background satellite manually removed. Such a change in the background does result in an obvious difference in the pixel-wise residual, as predicted by Zanisi et al. (2021), but the anomaly score stays rather stable.
CONCLUSIONS
In this paper, we introduced an anomaly detection algorithm, GANomaly, that is trained only on real galaxies to quantitatively rate galaxy simulations. Building on the idea of anomaly detection by reconstruction, GANomaly is a combination of a GAN and an autoencoder network that first summarizes an input image into its feature/latent space representation, and then reconstructs the same image back. GANomaly is purely trained on a normal set of data; therefore, once trained, any outlier images or anomalies in an image will not be reconstructed. The anomaly is quantified by defining the anomaly score as the difference in feature space before and after reconstruction. Furthermore, we are only interested in relative anomaly scores between different sets of simulations. Note that this strategy further ensures that any inconsistency in the process of mock image generation will not enter the comparison.
To rate galaxy simulations against real observations, we treat SDSS g, r, i band images as the normal set of data to train GANomaly, and then apply the trained model to rate mock observations of NIHAO galaxy simulations. Comparing NIHAO simulations with and without an AGN implementation, we find that the current AGN model in NIHAO neither improves nor undermines the overall quality of the galaxy images. Besides, we find that extra resolution in the simulation does not affect the quality of a mock image after PSF convolution.
More importantly, we also see that compliance with galaxy scaling relations does not directly correlate with anomaly scores. This suggests that success in reproducing certain sets of galaxy properties and galaxy scaling relations does not signal the ultimate victory in simulating our Universe: a simulated galaxy that fits scaling relations well can still show anomalies when looking at the full galaxy image. Similarly, looking at the images alone might miss some important physical insights that cannot be fully described by the image itself. To achieve the ultimate goal of modeling our Universe, simulations need to reproduce both observed scaling relations and realistic galaxy images.
On the other hand, both morphological parameters and GANomaly are devoted to probing galaxy images. Morphological parameters examine images in a supervised way, with clearly defined parameters, and are thus more interpretable, while GANomaly is an unsupervised deep learning method that builds up a feature space from the data, such that the feature space can fully represent a realistic galaxy image. Both approaches clearly have their own advantages, and combining them will allow us to harness the power of deep learning without losing interpretability, and thus fully exploit the rich information carried by galaxy images.
The GANomaly model described in this paper examines galaxy images observed, or mock observed, in the SDSS g, r, i bands. The model is not restricted to any particular suite of simulations such as NIHAO, but can be directly applied to any other galaxy simulation once SDSS g, r, i band mock observations are made. The algorithm itself is also not limited to a particular telescope or survey (SDSS), or to a certain set of maps (g, r, i band intensity maps). GANomaly can be re-trained on images from other telescopes, at different wavelengths, or even on different maps, such as velocity maps and density maps, to detect anomalies either in simulated data or outliers in the observations themselves.
As we are heading towards an era of next-generation simulations with more sophisticated models, higher resolution, larger volumes and more physics included, and, at the same time, a bursting age of big observational missions and large surveys, analysis techniques that can make the best use of this huge data feed (beyond pure summary statistics) from both simulations and observations are highly needed. Anomaly detection techniques like GANomaly, along with traditional methods like scaling relations and morphological parameters, together will shed some fresh light on our understanding of the Universe.
APPENDIX A: PRINCIPAL COMPONENT ANALYSIS OF THE FEATURE SPACE
Although the general behavior of SDSS and the various NIHAO samples is similar, there is a difference in the fact that the least important components explain more variance in SDSS with respect to NIHAO. We speculate that this may be due to actual observational noise from instrumental and other effects being hard to compress.
Apart from this, we find that the NIHAO samples generally fall within the realistic range of latent space spanned by the SDSS galaxies, as shown in Fig. A2, where we plot them in the plane of the first two principal components. Some of the NIHAO galaxies have slightly lower PC1 values than the SDSS galaxies, and some of the NIHAO galaxies have higher or lower PC2 values than the SDSS galaxies do.
In Fig. A3 we show a sample of SDSS galaxy images ordered by the value of the first PC (top panel) and a sample ordered by the value of the second PC (bottom panel). The first principal component appears to correlate with the apparent size of the galaxy in the image, and the second with color. In the light of this interpretation, Fig. A2 shows that some of the NIHAO galaxies have apparent sizes that are too small, which can be addressed in future work. Some other NIHAO galaxies have colors that are too extreme, which can be due to several reasons, e.g. a difference in SFR or limitations of the dust model, but the exact physical meaning of this needs to be explored in more detail and we leave it for future work.
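A minimal sketch of the analysis in this appendix, with placeholder arrays standing in for the 128-dimensional GANomaly latent vectors: fit the PCA on the SDSS vectors, inspect the explained variance (Fig. A1), and project the NIHAO vectors onto the SDSS principal components (Fig. A2):

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder latent vectors: one row per image, 128 GANomaly features each.
rng = np.random.default_rng(1)
z_sdss = rng.normal(size=(10_000, 128))
z_nihao = rng.normal(size=(2_000, 128))

# Fit the PCA on the SDSS feature space only.
pca = PCA(n_components=128)
pc_sdss = pca.fit_transform(z_sdss)

# Fraction of explained variance per component (cf. Fig. A1).
print(pca.explained_variance_ratio_[:5])

# Project NIHAO latent vectors onto the SDSS principal components (cf. Fig. A2).
pc_nihao = pca.transform(z_nihao)
print(pc_nihao[:, 0].mean(), pc_nihao[:, 1].mean())   # PC1, PC2 of NIHAO
```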
Figure 2 .
Figure 2. GANomaly architecture, adopted from Akcay et al. (2019) and Di Mattia et al. (2019). GANomaly mainly consists of a generator network (encoder G_E and decoder G_D), an encoder E, and a discriminator network D. An input image x first gets encoded by G_E into z, the latent/feature space representation of x, and is then reconstructed back to x̂ through G_D. The reconstructed x̂ is further sent to E and encoded into ẑ, the feature representation of the reconstructed x̂. At the same time, the discriminator D takes both the input image x and the reconstructed image x̂ and tries to distinguish between the two. The training of GANomaly aims to minimize the adversarial loss L_adv, the contextual loss L_con, and the encoder loss L_enc.
Figure 4 .
Figure 4. Distribution of anomaly scores for SDSS observations and NIHAO simulations. Each NIHAO galaxy is projected in 20 random orientations, and each of these 20 orientations is treated as an independent image when calculating the anomaly score distribution. The distributions do have a large overlapping area, but NIHAO does not reproduce the SDSS peak and has a larger tail.
Figure 5 .
Figure 5. Normalized distribution of anomaly scores for different stellar mass bins in the SDSS test set.Colored lines denote three mass bins and the green shaded histogram shows the combined dataset.The anomaly score favors middle and high-mass SDSS galaxies over lower mass ones.
Figure 6.
Figure 6. NIHAO NoAGN (blue) vs. NIHAO AGN (orange) vs. SDSS (green) in the low, middle and high (top to bottom) mass bins. The 20 projections of a NIHAO galaxy are treated as independent images when calculating the anomaly score distribution. We do not see a significant preference towards NIHAO AGN or NIHAO NoAGN in any of the mass bins.
Figure 7 .
Figure 7. Anomaly score for the 12 NIHAO n10 and NIHAO n80 counterparts. Each n10 and n80 galaxy is projected in 20 different rotations, giving 20 galaxy images (see Section 2.5). On this plot each dot shows the mean anomaly score over the 20 galaxy images on the y-axis, and the y-error bar denotes the standard deviation of the 20 anomaly scores. The x-axis is the stellar mass of the NIHAO n10 galaxy, used to group each pair of counterparts. The anomaly score favors 10 cm⁻³ over 80 cm⁻³ in higher mass galaxies.
Figure 8 .
Figure 8. Anomaly score for the 6 NIHAO HD and NIHAO UHD counterparts. As in Fig. 7, each dot shows the mean anomaly score over the 20 galaxy projections on the y-axis, and the y-error bar denotes the standard deviation of the 20 anomaly scores. The x-axis shows the stellar mass of the NIHAO HD galaxies to group each pair of counterparts. The anomaly scores for HD and UHD are almost the same, due to the smoothing of the PSF.
Figure 10 .
Figure 10.The mean anomaly score for each NIHAO galaxy over its 20 projections, along with the anomaly score distribution for all SDSS test galaxies, plotted as a function of their stellar mass.Green -SDSS; Blue -NIHAO NoAGN; Orange -NIHAO AGN; Grey -NIHAO n80; Red -NIHAO UHD.
Figure 11 .
Figure 11. Galaxy scaling relations plotted for NIHAO AGN galaxies and real galaxies observed in MaNGA. NIHAO AGN galaxies are color-coded by their mean anomaly score over 20 different orientations. All quantities are measured at a radius corresponding to a stellar mass surface density of 10 M⊙ pc⁻², except Σ₁, which is the stellar surface density within 1 kpc. No clear relation between the anomaly score and compliance with scaling relations is found in any of the scaling relations presented.
Figure 12 .
Figure12.Various morphological parameters compared to anomaly scores for SDSS (circle), NIHAO AGN (square), and NIHAO NoAGN (triangle).The y-value of the points shows the median trend of morphological parameters, and the shaded region indicates the 16 th to 84 th percentile range.NIHAO galaxies are color-coded by their mean anomaly score over 20 different orientations.In general, NIHAO galaxies have a lower (darker color) anomaly score when the morphological parameter matches that of SDSS better, and this is most obvious in the right column of MID statistics.However, anomaly score and morphological parameters do not always agree, such as in asymmetry, smoothness, and 20 .
Figure 13 .
Figure 13. The top row shows an SDSS galaxy image that has some satellites in its background; the bottom row shows the same SDSS galaxy with the satellites manually removed. The different backgrounds result in different residuals (rightmost panel, contextual loss), but almost the same anomaly score (labelled on top, encoder loss).
Figure A1 .
Figure A1.Fraction of explained variance as a function of principal component rank.SDSS is shown in brown and the NIHAO samples in different shades of blue (NIHAO No AGN in light blue, NIHAO AGN in purplish blue, NIHAO UHD in dark blue, NIHAO n80 in greenish blue).The top panel shows only the first twelve components with the y-axis in linear scale and the bottom panel shows all 128 components with the y-axis in log scale.
Figure A2 .
Figure A2.Plot of the first against the second principal component for SDSS galaxies (green).The NIHAO galaxy samples have been projected on the first two SDSS principal components and are shown in various shades of blue.
Figure A3 .
Figure A3. Images of a sample of SDSS galaxies ordered by the first principal component (top panels) and by the second principal component (bottom panels).
How COLOSS Monitoring and Research on Lost Honey Bee Colonies Can Support Colony Survival
Formation of This Group
Since the mid-2000s beekeepers began to report cases of widespread, elevated mortalities of honey bee colonies (Figure 1) in different parts of the world. Today, international scientific monitoring of honey bee colony losses is organised as one of three 'Core Projects' of the non-profit honey bee research association COLOSS (prevention of honey bee COlony LOSSes). The topic of this Core Project, colony losses, is reflected in the acronym COLOSS, underlining its importance to the association! Since the very beginning of COLOSS as an EU COST-funded action in 2008, a working group has been dedicated to collecting standardised data on honey bee colony losses. This group was termed "monitoring & diagnosis" and was first led and largely shaped by Romée van der Zee from the Netherlands. It is important also to note the involvement of other members who have been very active from the early days until today. These include Flemming Vejsnæs from the Danish Beekeepers Association, Victoria Soroker from Israel, Franco Mutinelli from Italy, and recently retired Preben Kristiansen from Sweden. No other international and long-lasting effort on honey bee colony health and mortality was established in Europe prior to this effort.
It was the aim of the group to coordinate the efforts of several countries to find common ground in surveying beekeepers about losses of their honey bee colonies, and to use the data for risk analysis. In the European-dominated group, winter was defined as the most important period of honey bee colony losses, as most losses occur then, and the first coordinated surveys were soon conducted in a number of European countries (van der Zee et al., 2012). Lively discussions often prevailed during the group meetings, including, for example, on how to define the extent of winter, what actually is a dead honey bee colony, and much more. Scientists are often confronted with such discussions, but different climatic backgrounds, languages, and philosophies of beekeepers as addressees were new to the researchers. It is important to note that communicating with beekeepers as peers, and their valued involvement, were always important issues in this group, as was the premise to keep the study as simple as possible. Important definitions, study designs and methods for analysis were finally established in an article published in the COLOSS BEEBOOK (van der Zee et al., 2013). This project is only the first level of investigating honey bee colony losses. For precise identification of causes, prevalence of diseases, etc., expensive field studies are needed. Nevertheless, van der Zee et al. (2012) provides a first glimpse of honey bee colony mortality in Europe. Furthermore, it described apicultural best management practices, and demonstrated that achieving very high sample sizes (participation rates) in a highly cost-effective way is possible. The strength of this ongoing study undoubtedly is the large dataset it generated and continues to generate.
The Current Position
Representing only a few countries initially, the Core Project has grown to nearly 40 coordinators for monitoring in as many countries today. The backgrounds and working environments of these coordinators are extremely varied. Our colleagues include university researchers, and employees of veterinary agencies or beekeeping associations. They conduct the monitoring research as part of their profession, with national funding, or even without any funding.
From 2014 onwards, the authors of the current article have shared the administrative responsibilities of this Core Project. We host workshops each February to plan the forthcoming monitoring in the participating countries. The average in-person attendance is between 15 and 20 coordinators, though we also had to adjust to limited travel resulting from the COVID-19 pandemic and began to host online meetings. The 2021 online meeting attracted more than 50 participants, some of whom would not normally be able to attend because of travel costs.
The COLOSS monitoring year starts with the workshop in February, followed by the data collection period ( Figure 2). The survey is comprised of questions on the apiary location and number of colonies before winter, and the number of colonies lost due to natural disaster, queen failure or dead colonies or empty hives. Additionally, questions on migratory beekeeping, Varroa destructor control, forage sources, etc. inform us about hive management practices. The COLOSS monitoring Core Project provides a backbone of mandatory and optional questions, but coordinators are welcome to add questions of local importance for use at national level. National coordinators translate the survey into local languages, and distribute and promote it via various channels, including beekeeping magazines and in-person meetings. Over the past few years, internet-based surveys have gained considerable importance and have been gradually implemented within more countries. The opening and duration of the survey is at the discretion of the national coordinators, but April and May are the most important months for data collection. We aim for as many responses as possible from each country. Due to data protection issues, beekeepers can participate anonymously, but even if they leave contact details, these will never be shared with third parties. After data collection, aggregated national results can be quickly communicated to beekeepers, e.g., via press releases.
By 1 st of July each year, all collected raw data should be returned in an anonymised form to the Core Project chairs for analysis. The most obvious outcome is the annual winter mortality rate. Without doubt, this study is the world's largest investigation using standardised data on honey bee colony mortality, collecting at present information on more than 700,000 colonies from ∼28,000 beekeepers in 35 countries (Gray et al., 2020).
Beekeeper Involvement
The monitoring Core Project is probably the best-known activity of COLOSS amongst beekeepers, given the regular calls for their participation. Beekeepers may perceive this study as a survey, but we prefer to understand their contribution in the context of citizen science as those of experts in the field. They understand the terminology and report the empirical over-winter outcome of their colonies for the purposes of science. However, in our opinion, there should be even more levels of involvement of beekeepers in this research. First, beekeepers are participating as volunteers to help science. Second, in some countries, a close involvement of beekeepers enables their input in pre-trials of the survey, co-creation workshops for study design, comments on which questions should be included or open discussions on how data/results should be presented. In some countries, beekeeper contacts may be very important to recruit other beekeepers for participation in the survey. The response rate as a proportion of the known beekeeper population is highly variable among the participating countries, ranging from less than 1% in some countries to more than 10% in Denmark, Germany, Ireland, Netherlands, Malta, Scotland, Sweden and Norway (Brodschneider et al., 2016;Gray et al., 2020).
Dissemination of results is crucial for the success of this Core Project, and, like the promotion of the survey, greatly depends on the coordinators. When beekeepers kindly support us with valuable data on the well-being of their colonies, there should also be feedback to them to encourage future participation. This ranges from openly communicated loss rates to compare with their own over-wintering success, to the depiction of most common management practices, and science-based recommendations. Such feedback is given via articles in beekeeping magazines, websites dedicated to dissemination of the results, talks to beekeeping groups, etc.
The Outcome for Both Science and Beekeeping
The monitoring Core Project has published a series of open access articles in the Journal of Apicultural Research, reporting comparable honey bee colony loss rates for many countries. A typical winter loss rate is around 16%, with large variations among countries. Losses at national level can be as low as <5%, but can also be more than 30% (Gray et al., 2019;2020). A minority (usually 4-5 percentage points) of these losses are due to colonies with unsolvable queen problems, while the majority (around 10 percentage points) are dead colonies or empty hives. A very small proportion of the losses (typically 1 − 2 percentage points) are colonies lost due to natural disasters of various kinds such as flooding, fire, or storm.
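As a concrete illustration of how these loss categories combine into the reported rates (the numbers below are made up and the exact estimator used in the COLOSS publications may differ):

```python
# Illustrative winter loss-rate calculation for one beekeeper's survey answers.
# The category split mirrors the survey: natural disaster, unsolvable queen
# problems, and dead colonies / empty hives.
colonies_wintered = 20
lost_natural_disaster = 0
lost_queen_problems = 1
lost_dead_or_empty = 2

lost_total = lost_natural_disaster + lost_queen_problems + lost_dead_or_empty
loss_rate = 100.0 * lost_total / colonies_wintered
print(f"overall winter loss rate: {loss_rate:.1f}%")   # 15.0%
print(f"  queen problems: {100.0 * lost_queen_problems / colonies_wintered:.1f} percentage points")
print(f"  dead/empty:     {100.0 * lost_dead_or_empty / colonies_wintered:.1f} percentage points")
```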
Besides calculating colony loss rates, the data are used for statistical identification of risk factors and best beekeeping practice methods. It is important to point out the very diverse beekeeping management styles across the surveyed areas, from Scandinavia to the Mediterranean, as generalisations about best practice methods may not always be feasible. So far, the following risk factors have been identified as related to colony mortality: operation size (large beekeeping operations often have lower losses, Brodschneider et al., 2016, 2018), migratory beekeeping (the nature of the effect varies from year to year and between countries; see Brodschneider et al., 2018; Gray et al., 2019, 2020), forage sources (effects differ among countries, Gray et al., 2019) and percentage of young queens (Gray et al., 2020). To elaborate the effect of young queens: this finding is based on the question "How many of the wintered colonies had a new queen in the year before wintering?". The percentage of colonies going into winter with a new queen was estimated as 55.0%, which is probably an underestimation, as supersedure is often not recognised by beekeepers. Higher percentages of young queens correspond to lower losses from unresolvable queen problems, and lower losses from winter mortality (and naturally the sum of losses of those two classes). Having young queens for good development of strong colony populations is therefore one clear recommendation that can be drawn from the investigation (Gray et al., 2020).
Where We Can Improve
We also want to use this article to evaluate the work of this Core Project critically.
Operation Level versus Apiary Level Study
A sometimes-raised point is that we investigate colony losses at operation level, not apiary level, as differences among apiaries may occur. Most respondents have a single apiary, and those who do not may well manage all their apiaries in the same way. This is one of the compromises we need to make; the alternative would be that larger beekeeping operations must fill out the survey for each apiary separately. This would increase spatial resolution, but also complicate identifying the effect of the same beekeeper managing these colonies!
Aspects of Sampling
Important aspects of sampling are coverage of the hive and beekeeper population by the survey (reach) and achieving a high response rate. These issues are discussed in detail in van der Zee et al. (2013). Low coverage may mean lack of geographic coverage, for example, or omitting part of the beekeeper population, such as older beekeepers, so that a representative sample is not obtained. A high response rate is also needed. For example, if few older beekeepers respond to the survey, there may be non-response bias in the results. Random sampling of the whole population, together with working to achieve a high response rate from those selected to participate, is traditionally recommended for representative results. Implementing random sampling does require more information about the beekeeper population than is often available. Most countries in the monitoring group now aim to reach as large a proportion of the beekeeper population as possible, through making the survey available as widely as possible. It is clear, however, that coverage and response rates vary widely between countries. This includes small sample sizes, i.e., a low proportion of the estimated beekeeper population responding and/or responses being limited to a few regions of the country.
Figure 2. The subsequent steps of the COLOSS monitoring year.
Data Sharing and Repository
Data collected are analysed within the group and results of analysis, including calculated loss rates at country level, are made publicly available through group publications. Coordinators may analyse their own national data as they feel is appropriate and may choose to share their own national aggregated level data in suitable data repositories, subject to any undertakings they may have given to their participating beekeepers and appropriate data protection regulation. Sharing the complete dataset for analysis by researchers from outside the group is currently not available as we want to protect intellectual property, but this is a frequently addressed issue in the monitoring group.
Multifactorial Modelling
Beekeepers and policy makers want the results of our research to include a ranking of the most significant causes of colony mortality. Researchers broadly agree that the causes of colony losses are multifactorial, and a remote diagnosis based on reports from beekeepers is not very reliable, as they cannot detect, for example, virus infections or pesticide residues. It is possible to carry out multifactorial modelling of the risk of colony loss, as in van der Zee et al. (2014), and the statistical significance of effects and their importance in the model can be examined; however, these factors are limited to what the beekeeper can reliably answer questions about, such as aspects of hive management, and cannot consider all relevant risk factors. We, therefore, refrain from asking about some factors such as virus or pesticide load, or varroa infestation levels. These factors definitively need thorough scientific investigation, but cannot reliably be reported by beekeepers.
Seasonal and Annual Losses
As more countries from hotter climates have been participating in the survey, it has become apparent that the term "winter" cannot be applied universally, and that summer losses or annual losses are relevant outside of the temperate regions. The recently growing participation of North African and Middle East countries could develop into a subgroup focussing more on how to best survey these. Inclusion of countries where Apis cerana is kept would probably make these steps even more necessary.
Conclusion
The Core Project hopes to extend its outreach to gradually include more countries from other continents and increase response rates from beekeepers in the countries already participating. Our research relies critically on cooperation with beekeepers. We hope that the findings of our work, disseminated to the beekeepers, help to reduce colony losses. We also believe that the current monitoring since 2008 has educated society and beekeepers by reflecting on the welfare of honey bee colonies, and has provided a foundation for further investigations with participation of beekeepers as citizen scientists. Finally, the outcomes of this large-scale investigation deliver empirical international honey bee colony loss rates as demanded by many stakeholders and decision makers.
Lower Bounds for Data Structures with Space Close to Maximum Imply Circuit Lower Bounds
: Let f : {0,1}^n → {0,1}^m be a function computable by a circuit with unbounded fan-in, arbitrary gates, w wires and depth d. With a very simple argument we show that the m-query problem corresponding to f has data structures with space s = n + r and time (w/r)^d, for any r. As a consequence, in the setting where s is close to m a slight improvement on the state of existing data-structure lower bounds would solve long-standing problems in circuit complexity. We also use this connection to obtain a data structure for error-correcting codes which nearly matches the 2007 lower bound by Gál and Miltersen. This data structure can also be made dynamic. Finally we give a problem that requires at least 3 bit probes for m = n^{O(1)} and even s = m/2 − 1. Independent work by Dvir, Golovnev, and Weinstein (2018) and by Corrigan-Gibbs and Kogan (2018) gives incomparable connections between data-structure and other types of lower bounds.
Introduction
Proving data-structure lower bounds is a fundamental research agenda to which much work has been devoted, see for example [21] and the 29 references there. A static data structure for a function f : {0,1}^n → {0,1}^m is specified by an arbitrary map mapping an input x ∈ {0,1}^n into s memory bits, and m query algorithms running in time t. Here the i-th query algorithm answers query i, which is the i-th output bit of f. Time is measured only by the number of queries that are made to the data structure, disregarding the processing time of such queries. The state of time-t lower bounds for a given space s can be summarized with the following expression:
t ≥ log(m/n)/log(s/n). (1.1)
Specifically, there is no explicit function for which a bound better than (1.1) is known, for any setting of parameters. This is true even if time is measured in terms of bit probes (that is, the word size is 1), and the probes are non-adaptive. The latter means that the locations of the bit probes of the i-th query algorithm depend only on i.
Note that in such a data structure a query is simply answered by reading t bits from the s memory bits at fixed locations that depend only on the query but not on the data.All the data structures in this note will be of this simple form, making our results stronger.
On the other hand, for several settings of parameters we can prove lower bounds that either match or are close to (1.1) for explicit functions.Specifically, for succinct data structures using space s = n + r when r = o(n), the expression (1.1) becomes (n/r) log(m/n).Gál and Miltersen [12], Theorem 5, have proved lower bounds of the form Ω(n/r).
When s = n(1 + Ω(1)), (1.1) is at best logarithmic. Such logarithmic lower bounds were obtained for m = n^{1+Ω(1)} by Siegel [23], Theorem 3.1, for computing hash functions. For several settings of parameters [23] also shows that the bound is tight. The lower bound was rediscovered in [15]. Their bounds are stated for non-binary, adaptive queries. For a streamlined exposition of this lower bound and a matching upper bound, in the case of non-adaptive, binary queries, see Lecture 18 in [26]. Remarkably, if s = n(1 + Ω(1)) and m = O(s), or if s = n^{1+Ω(1)}, no lower bound is known. Counting arguments (Theorem 8 in [17]) show the existence of functions requiring polynomial time even for space s = m − 1.
Miltersen, Nisan, Safra, and Wigderson [18] showed that data-structure lower bounds imply lower bounds for branching programs which read each variable few times.However stronger branching-program lower bounds were later proved by Ajtai [1] (see also [3]).
In this note we show that when m is close to s, say m = O(s) or m = s · polylog s, even a slight improvement over the state of data structure lower bounds implies new circuit lower bounds. When m = s^{1+ε} for a small enough ε our connection still applies, but one would need to prove polynomial lower bounds to obtain new circuit lower bounds. When say m = s^2 we do not obtain anything. It is an interesting open question to link that setting to circuit lower bounds.
Circuits with arbitrary gates
A circuit C : {0,1}^n → {0,1}^m with arbitrary gates is a circuit made of gates with unbounded fan-in, computing arbitrary functions. The complexity measures of interest here are the number w of wires and the depth d. These circuits have been extensively studied, see for example Chapter 13 in the book [14], titled "Circuits with arbitrary gates." The best available lower bounds are proved in [22, 8, 7]. In the case of depth 2 they are polynomial, but for higher depths they are barely super-linear. With this notation in hand we can express the best available lower bounds from [22, 8, 7]. They show explicit functions f : {0,1}^n → {0,1}^m for which any depth-d circuit with w wires satisfies
w ≥ Ω(m · λ_{d-1}(n)), (2.1)
where the λ_i are very slowly growing functions. Those papers only consider the setting m = n, but we note that no lower bound better than (2.1) is available for m > n, because such a lower bound would immediately imply a bound better than (2.1) in the case m = n. (Write f as the concatenation of m/n functions f_i : {0,1}^n → {0,1}^n, one per block of n output bits. If every f_i is computable in depth d with w wires then f can be computed in depth d with (m/n)w wires. To make the reduction explicit we can append the index i to the input.)
Problem 2.2. Exhibit an explicit function f : {0,1}^n → {0,1}^m that cannot be computed by circuits of depth d with O(m · λ_{d-1}(n)) wires, with unbounded fan-in arbitrary gates, for some d.
We show how to simulate circuits with data structures.While it is natural to view a data structure as a depth-2 circuit, a key feature of our result is that it handles depth-d circuits for any d.
Theorem 2.3. Suppose the function f : {0, 1}^n → {0, 1}^m has a circuit of depth d with w wires, consisting of unbounded fan-in, arbitrary gates. Then f has a data structure with space s = n + r and time (w/r)^d, for any r.
Proof. Let R be a set of r gates with largest fan-in in the circuit. Note that R may include some of the output gates, but it does not include any of the input gates. Note also that any gate outside R has fan-in ≤ w/r, for otherwise there would be r gates with fan-in > w/r, for a total of > w wires. The data structure simply stores in memory the input and the values of the gates in R. This takes space s = n + r. It remains to show how to answer queries efficiently.
Group the gates of the circuit in d + 1 levels, where level 0 contains the n input gates and level d contains the outputs. We prove by induction on i that for every i = 0, 1, . . . , d, a gate at level i can be computed by reading (w/r)^i bits of the data structure. For i = d this gives the desired bound.
The base case i = 0 holds because every input gate is stored in the data structure and thus can be computed by reading (w/r)^0 = 1 bit.
Fix i > 0 and a gate g. If g ∈ R, then again it can be computed by reading 1 bit. Otherwise g ∉ R, and so g has fan-in ≤ w/r. We can compute g if we know the values of its at most w/r children. By induction each child can be computed by reading (w/r)^{i−1} bits. Hence we can compute g by reading (w/r)^i bits.
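To make the counting in the proof concrete, here is a minimal Python sketch of the simulation in Theorem 2.3, assuming the circuit is given explicitly as a topologically ordered DAG; the class names and the representation are illustrative, not taken from the paper. Preprocessing stores the n input bits plus the values of the r largest-fan-in gates, and a query recomputes a gate recursively, matching the (w/r)^d probe bound.

```python
# Illustrative sketch of Theorem 2.3 (assumed representation: gates listed in
# topological order, with the first n entries being the input gates).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Gate:
    inputs: List[int]                     # indices of child gates
    func: Callable[[List[int]], int]      # arbitrary Boolean function of the children

@dataclass
class Circuit:
    n: int                                # number of input gates (gates 0..n-1)
    gates: List[Gate]                     # topologically ordered; first n are inputs

def preprocess(circuit: Circuit, x: List[int], r: int):
    """Evaluate the circuit once and store x plus the values of the r largest-fan-in gates."""
    values = list(x)
    for g in circuit.gates[circuit.n:]:
        values.append(g.func([values[j] for j in g.inputs]))
    by_fanin = sorted(range(circuit.n, len(circuit.gates)),
                      key=lambda i: len(circuit.gates[i].inputs), reverse=True)
    stored = {i: values[i] for i in by_fanin[:r]}        # the set R
    stored.update({i: x[i] for i in range(circuit.n)})   # the n input bits
    return stored                                        # s = n + r stored bits

def query(circuit: Circuit, stored, gate_index: int) -> int:
    """Answer a query (the value of one gate) by reading only stored bits."""
    if gate_index in stored:              # input gate or gate in R: one probe
        return stored[gate_index]
    g = circuit.gates[gate_index]         # fan-in <= w/r by the choice of R
    children = [query(circuit, stored, j) for j in g.inputs]
    return g.func(children)
```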
This theorem shows that if for an explicit function f : {0, 1}^n → {0, 1}^m we have a data-structure lower bound showing that for space s = n + r the time t must be at least the quantity in (2.2), for some d, then we have new circuit lower bounds and Problem 2.2 is solved. We illustrate this via several settings of parameters.
Setting s = 1.01n and m = 100n. As mentioned earlier, in this setting no data-structure lower bound is available: (1.1) gives nothing. We obtain that even proving, say, a t ≥ log(n) lower bound would solve Problem 2.2. Indeed, pick d = 100 and r = 0.01n. By Inequality (2.2), to solve Problem 2.2 it suffices to prove a lower bound of t ≥ ω(λ_{99}(n))^{100}, which is implied by t ≥ log(n).
The succinct setting s = n + r with r = o(n), m = O(n). In this setting the best available lower bound is t ≥ Ω(n/r); see [12], Theorem 5. For, say, d = 5 and r ≤ n/log n, the right-hand side of (2.2) is within a polynomial of n/r. Hence for any setting of r the lower bound in [12] is within a polynomial of the best possible that one can obtain without solving Problem 2.2. In particular, for redundancy r = n/log^c(n) we have lower bounds Ω(log^c n), and proving log^{5c}(n) would solve Problem 2.2. We note moreover that the data structure given by Theorem 2.3 is systematic: the input is copied into n of the n + r memory bits. Thus the connection to Problem 2.2 holds even for lower bounds against systematic data structures.
The setting m = n^{1+ε} and s = n(1 + Θ(1)). As mentioned earlier, the best known lower bound is Ω(log m). We obtain that proving a data-structure lower bound of the form t ≥ n^{3ε} log^4 n would solve Problem 2.2. (Pick r = n and d = 3.) The setting s = n^{1+Ω(1)} and m = s^{1+ε}. Here no data-structure lower bounds are known. We get that a lower bound of t ≥ s^{3ε} log^4 n would solve Problem 2.2.
Bounded fan-in circuits
We also get a connection with bounded fan-in circuits (over the usual basis And, Or, Not). Recall that it is not known whether every explicit function f : {0, 1}^n → {0, 1}^m has circuits of size O(m) and depth O(log m). Using Valiant's well-known connection [24] (see [25, Chapter 3] for an exposition) we obtain Theorem 3.1, stated below.
Proof. Suppose the circuit has w = cm wires and depth d = c log m. It is known [24] (see Lemma 28 in [25]) that we can halve the depth by removing cm/log d wires. Repeating this process, say, log log log n times, the depth becomes O(log n / log log n) = o(log n), and we have removed o(m) wires. Let R be the set of wires we removed. The data structure consists of the input and the values of R. Thus the redundancy is |R| = o(n).
It remains to see how to answer queries fast. Because the depth is o(log n), the value at every output gate depends on at most n^{o(1)} wires that were removed and on at most n^{o(1)} input bits. Thus, reading the corresponding bits, we know the value of the output gate.
In the other uses of Valiant's result, for depth-3 circuits and matrix rigidity, it is not important that each gate depends on few bits of R; however, it is essential for us. For this reason it is not clear whether the corresponding depth reduction for Valiant's series-parallel circuits [6] yields data structures.
For completeness we briefly discuss lower bounds for dynamic data structures. These data structures are not populated by an arbitrary map as in the static case above. Instead they start blank and are populated via additional insertion (or update) algorithms whose time is also taken into account. Here the best known lower bounds are Ω(log^{1.5} n) [16]. We note that for an important class of problems, known as decomposable problems, it has been known since [4] how to turn a static data structure with query time t into a data structure that supports queries in time O(t log n) as well as insertions in time O(log n); see Theorem 7.3.2.5 in the book [20]. Hence a strong lower bound for such "half-dynamic" data structures would imply a lower bound for static data structures, and one can apply the above theorems to get a consequence for circuits.
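The static-to-dynamic transformation for decomposable problems mentioned above is usually realized via the logarithmic method of Bentley and Saxe; the following sketch is our hedged illustration of that idea (with amortized rather than worst-case bounds, and with function names of our own choosing), not the exact construction of [4] or [20].

```python
# Logarithmic method sketch: maintain static structures over blocks of geometrically
# growing size; decomposability means a query answer over a union of blocks can be
# combined from per-block answers.
class DynamicFromStatic:
    def __init__(self, build, query, combine):
        self.build, self.query_static, self.combine = build, query, combine
        self.blocks = []          # blocks[i] is None or a (static structure, items) pair of size 2^i

    def insert(self, item):
        carry = [item]
        i = 0
        while i < len(self.blocks) and self.blocks[i] is not None:
            carry += self.blocks[i][1]     # merge with the existing block of size 2^i
            self.blocks[i] = None
            i += 1
        if i == len(self.blocks):
            self.blocks.append(None)
        self.blocks[i] = (self.build(carry), carry)

    def query(self, q):
        answers = [self.query_static(s, q) for s, _ in filter(None, self.blocks)]
        return self.combine(answers)
```

For instance, taking build = sorted-list construction, query_static = a binary-search membership test and combine = any gives a dynamic membership structure whose queries probe O(log n) blocks.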
Data structures for error-correcting codes
We can use Theorem 2.3 to obtain new data structures for any problem which has efficient circuits. Let f : {0, 1}^n → {0, 1}^m be the encoding map of a binary error-correcting code which is asymptotically good, that is, m = O(n) and the minimum distance is Ω(m). Gál and Miltersen ([12], Theorem 5) show that any data structure for this problem requires time ≥ Ω(n/r) if the space is s = n + r. Combining a slight extension of Theorem 2.3 with a circuit construction in [11], we obtain data structures with time O((n/r) log^3 n).
We first state the result from [11] that we need.
Theorem 4.1. [11] There exists a map f : {0, 1}^n → {0, 1}^m which is the encoding map of an asymptotically good, binary error-correcting code and which is computable by a circuit with the following properties: (1) the circuit has depth two and is made of XOR gates, (2) it has O(n log^2 n) wires, (3) the fan-in of the output gates is O(log n), and (4) the fan-out of the input gates is O(log^2 n).
Remark 1. Actually the bounds in [11] are slightly stronger (they prove that the minimum number of wires is Θ(n (log n / log log n)^2)); we focus on the above parameters for simplicity. Properties (1)–(3) are explicit in [11], Section 6. Property (4) is only needed here for the dynamic data structure in Theorem 4.3. It holds because the gates in the middle layer are grouped in O(log m) range detectors, and each range detector is constructed with a unique-neighbor expander where the degree of the input nodes is O(log m).
Next is the new data structure.
Theorem 4.2. There exists an asymptotically good, binary code whose encoding map f : {0, 1}^n → {0, 1}^m has data structures with space n + r and time O((n/r) log^3 n), for every r.
Proof. First note that if in Theorem 2.3 we start with a circuit where the output gates have fan-in k, then we obtain a data structure with time k(w/r)^{d−1}. The proof is the same as before, using the bound of k instead of w/r for the output gates. Using Theorem 4.1 the result follows.
We also obtain a dynamic data structure for the encoding map (Theorem 4.3, stated below).
Proof. Suppose f has a depth-2 circuit where the output gates have fan-in k and the input gates have fan-out ℓ. Then this gives a data structure where the memory consists of the middle layer of gates. To compute one bit of the codeword we simply read the k corresponding bits in the data structure, and to update one message bit we simply update the corresponding ℓ bits. Essentially this observation already appears in [5], except that they do not parameterize it by the fan-in and fan-out, and so do not get a worst-case data structure. Using Theorem 4.1 the result follows.
In both data structures, the query algorithms are explicit, but the preprocessing and updates are not (because the corresponding layer in the circuits in [11] is not explicit).
A lower bound for large s
As remarked earlier, we have no lower bounds when s is much larger than n. Next we prove a lower bound of t ≥ 3 for any m = n^{O(1)}, even for s = m/2 − 1. This also appeared in [26], Lecture 18. The lower bound is established for a small-bias generator [19]. A function f : {0, 1}^n → {0, 1}^m is an ε-biased generator if the XOR of any nonempty subset of the output bits equals one with probability p such that |p − 1/2| ≤ ε, over a uniform input. There are explicit constructions with n = O(log(m/ε)) [19, 2].
Theorem 5.1. Let f : {0, 1}^n → {0, 1}^m be an o(1)-biased generator. Suppose f has a data structure with time t = 2. Then s ≥ m/2.
Proof. Suppose for a contradiction that s < m/2. By inspection, any function g on 2 bits is either affine, or else is biased, that is, either g^{−1}(1) or g^{−1}(0) contains at most one input.
Suppose that ≥ m/2 of the queries are answered with affine functions. Then, because s < m/2, some linear combination of these affine queries is fixed. This is a contradiction.
Otherwise, ≥ m/2 of the queries are answered with biased functions. We claim that there exists one biased query whose (set of two) probes are covered by the probes of two other biased queries. To show this we can keep collecting biased queries whose probes are not covered; we must stop eventually, since s < m/2. Hence assume that the probes of f_3 are covered by those of f_1 and f_2.
Because f_1 and f_2 are biased, for each of these two functions there exists an output value that determines the values of both of its input bits. If these values occur for both f_1 and f_2, then the value of f_3 is fixed as well. Hence, there exists an output combination of (f_1, f_2, f_3) ∈ {0, 1}^3 which never occurs. This means that the distribution of (f_1, f_2, f_3) over a uniform input has statistical distance Ω(1) from uniform. But this contradicts the small-bias property, because by the so-called Vazirani XOR lemma (see for example [13]) the distribution of (f_1, f_2, f_3) is √8 · o(1) = o(1)-close to uniform in statistical distance.
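The case analysis at the start of the proof (every 2-bit function is affine or biased) is small enough to verify exhaustively; the snippet below is our illustration and not part of the paper.

```python
# Exhaustive check that every Boolean function on 2 bits is either affine (a XOR of a
# subset of the inputs, possibly negated) or biased (one output value attained by at
# most one of the four inputs).
from itertools import product

inputs = list(product([0, 1], repeat=2))
affine_truth_tables = set()
for a, b, c in product([0, 1], repeat=3):          # g(x, y) = a*x XOR b*y XOR c
    affine_truth_tables.add(tuple((a * x) ^ (b * y) ^ c for x, y in inputs))

for table in product([0, 1], repeat=4):            # all 16 functions on 2 bits
    is_affine = table in affine_truth_tables
    is_biased = min(table.count(0), table.count(1)) <= 1
    assert is_affine or is_biased, f"counterexample: {table}"
print("all 16 two-bit functions are affine or biased")
```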
Comparison with independent work
Comparison with [10]. Independent work by Dvir, Golovnev, and Weinstein [10] connects data-structure lower bounds and matrix rigidity. Both [10] and this paper aim to link data-structure lower bounds to other challenges in computational complexity; however, the works are incomparable. [10] is concerned with linear data structures, and links them to rigidity or linear circuits. This paper is concerned with arbitrary data structures, and links them to circuits with arbitrary gates. [10] shows that a polylogarithmic data-structure lower bound, even against space s = 1.001n and m = n^{100}, would give new rigid matrices; this work gives nothing in that regime. The main results in [10] and this work are proved via different techniques, but each paper also has a result whose proof uses Valiant's depth reduction [24].
Comparison with [9]. Independent work by Corrigan-Gibbs and Kogan [9] shows that better data-structure lower bounds for the function-inversion problem (and related problems) yield new circuit lower bounds for depth-2 circuits with advice bits.
Definition 2.1. The function λ_d : N → N is defined as λ_1(n) := √n, λ_2(n) := log_2 n and, for d ≥ 3, λ_d(n) := λ*_{d−2}(n), where f*(n) is the least number of times we need to iterate the function f on input n to reach a value ≤ 1. Note that λ_3(n) = O(log log n) and λ_4(n) = O(log* n).
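A direct transcription of Definition 2.1 in Python (rounding to integers is our assumption, since the definition leaves it implicit) makes the slow growth of the λ_d functions easy to observe.

```python
import math

def lam(d: int, n: int) -> int:
    """lambda_d from Definition 2.1, with integer rounding (our assumption)."""
    if d == 1:
        return math.isqrt(n)                 # floor of sqrt(n)
    if d == 2:
        return max(n, 1).bit_length() - 1    # floor of log_2(n)
    # For d >= 3: lambda_d(n) = (lambda_{d-2})^*(n), the number of iterations of
    # lambda_{d-2} needed to bring n down to a value <= 1.
    count, value = 0, n
    while value > 1:
        value = lam(d - 2, value)
        count += 1
    return count

# For example, lam(3, n) grows like log log n and lam(4, n) like log* n.
print([lam(4, 10 ** k) for k in (2, 6, 18)])
```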
Theorem 3.1. Let f : {0, 1}^n → {0, 1}^m be a function computable by bounded fan-in circuits with O(m) wires and depth O(log m). Suppose m = O(n). Then f has a data structure with space n + o(n) and time n^{o(1)}.
Theorem 4.3. There exists an asymptotically good, binary code whose encoding map f : {0, 1}^n → {0, 1}^m has a dynamic data structure with space O(n log n), supporting updating an input bit in time O(log^2 n) and computing one bit of the codeword in time O(log n). | 4,676 | 2019-12-18T00:00:00.000 | [
"Computer Science",
"Mathematics"
] |
Numerical Approximation for Nonlinear Noisy Leaky Integrate-and-Fire Neuronal Model
Dipty Sharma 1,*, Paramjeet Singh 1, Ravi P. Agarwal 2 and Mehmet Emir Koksal 3 1 School of Mathematics, Thapar Institute of Engineering & Technology, Patiala 147004, India<EMAIL_ADDRESS>2 Department of Mathematics, Texas A&M University-Kingsville, Kingsville, TX 78363, USA<EMAIL_ADDRESS>3 Department of Mathematics, Ondokuz Mayis University, Atakum, Samsun 55139, Turkey<EMAIL_ADDRESS>* Correspondence<EMAIL_ADDRESS>or<EMAIL_ADDRESS>
Introduction
Large-scale neural network models have become commonplace in computational neuroscience. The classical description of these (excitatory-inhibitory) neural network models is based on a deterministic or stochastic system. One of the most common models is the noisy leaky integrate-and-fire (NLIF) neuron model, in which the behaviour of the whole population of neurons is encoded in a stochastic differential equation (SDE) for the time evolution of the membrane potential of a single neuron representative of the network. The dynamics of a single neuron is given by ([1–6]), where V(t) represents the membrane potential of a single neuron and τ_m is the relaxation time of the membrane potential in the absence of any communication. The communication of a single neuron with the network is modelled by the synaptic input current I(t). The form of I(t) is a stochastic process, given in [7]. Here, each spike is treated as a delta function, and if a spike occurs at time t = t_0, it is denoted by δ(t − t_0). The terms t^n_{E,m} and t^n_{I,m} in the above equation represent the time of the m-th spike received from the n-th presynaptic neuron for excitatory and inhibitory neurons, respectively. Moreover, N_E and N_I are the total numbers of presynaptic neurons, and J_E and J_I are the strengths of the synapses for excitatory and inhibitory neurons, respectively. Since this form of the synaptic input current is a discrete Poisson process, it is very difficult to handle in further investigations. Therefore, researchers have used the diffusion approximation, in which the synaptic input current I(t) is approximated by a continuous-in-time Ornstein-Uhlenbeck-type stochastic process. Initially, it is assumed that every neuron generates spikes according to a stationary Poisson process with constant probability r of generating a spike per unit time, and it is also assumed that all these processes are independent between neurons. Because of these assumptions, the mean value of the current, denoted µ_c, is given by µ_c = br = (N_E J_E − N_I J_I) r, and its variance is σ_c^2 = (N_E J_E^2 + N_I J_I^2) r. Here r, the probability of firing per unit time of the Poissonian spike train, is thus recognized as the firing rate, which is computed as r = I_ext + N(t), where N(t) is the mean firing rate of the network. In the above equation, B_t denotes standard Brownian motion.
The next important factor in the modelling is that a neuron generates a spike only when its membrane potential V(t) reaches a certain voltage, known as the threshold V_F; the potential is then instantly reset to a resetting potential V_R < V_F, and a signal is sent over the network.
Incorporating the continuous form of I(t) in the SDE model (1), we obtain the diffusion-approximation SDE, where we set τ_m = 1. We suppose that the voltage of a neuron reaches the threshold level at time t_o^−, i.e., V(t_o^−) = V_F, and that immediately afterwards the voltage is at the resting potential, i.e., V(t_o^+) = V_R. Furthermore, one can write the associated FPE with source term, using Itô's rule [8], for the evolution of the probability density function p(v, t) ≥ 0 of finding neurons at a voltage v ∈ (−∞, V_F] at time t ≥ 0, where N(t) is the mean firing rate of the network, computed as the flux of neurons at the firing voltage. The source term of Equation (2) comes from the fact that when the neurons generate spikes and send signals over the network, their voltage is immediately reset to the reset potential V_R. At the relaxation time, no neuron has the firing voltage; for this reason, the initial and boundary conditions are given accordingly. We can easily verify the conservation of the total number of neurons from the above equation. For this purpose, we need to describe the mean firing rate of the network. Since the mean firing rate is the flux of neurons at V_F, the value of N(t) is given by N(t) = −a(N(t)) ∂p(V_F, t)/∂v. Integrating (3) across the voltage domain and using the above boundary conditions, we obtain the required condition.
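For intuition about the model, the following is a minimal Euler-Maruyama sketch of the single-neuron dynamics with threshold and reset, assuming the drift h(v, N) = −v + bN and diffusion a(N) = a_0 + a_1 N that appear in Lemma 1 below; treating the network firing rate N as a fixed external parameter (instead of solving the mean-field coupling self-consistently), the sqrt(2a) noise amplitude, and the specific parameter values are our simplifications.

```python
# Euler-Maruyama simulation of many independent NLIF neurons with reset at V_F -> V_R.
# The factor sqrt(2a) matches the convention in which a multiplies the second
# derivative of p in the Fokker-Planck equation (an assumption on our part).
import numpy as np

def simulate_neurons(n_neurons=2_000, T=2.0, dt=1e-3, b=0.5, N=1.0,
                     a0=1.0, a1=0.0, V_F=2.0, V_R=1.0, seed=0):
    rng = np.random.default_rng(seed)
    v = np.zeros(n_neurons)                       # membrane potentials, all starting at rest
    sigma = np.sqrt(2.0 * (a0 + a1 * N))
    spikes = 0
    for _ in range(int(T / dt)):
        drift = -v + b * N                        # h(v, N) = -v + bN
        v = v + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_neurons)
        fired = v >= V_F
        spikes += int(fired.sum())
        v[fired] = V_R                            # instantaneous reset to V_R
    return spikes / (n_neurons * T)               # empirical firing rate per neuron

# print(simulate_neurons())   # estimate of the firing rate for these illustrative parameters
```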
Equation (2) represents the evolution of the probability density function, and the problem may be simplified by translating to a new voltage variable. In [8], the authors provide theoretical and numerical analysis using a finite difference approximation. For references on the mathematical aspects of nonlinear NLIF models, we refer to [9–13]. We are concerned with finding the value of the unknown p(v, t) using an alternative approach. The problem (2)–(4) cannot be solved analytically, because of the complexity arising from the nonlinearity and the presence of a source term [14–16]. Therefore, numerical methods are generally used; for example, the finite difference method (FDM) has been used to find the approximate solution of the governing equation [8]. However, the FDM has some disadvantages: for instance, the singularity in the delta source term of the above-mentioned equation can make the solution diverge. In order to use the FDM appropriately, the governing equation must be modified, and consequently the procedure becomes complicated.
Hence, in the present work, we propose a formulation based on finite element approximation to find the solution of the governing equation. The FEM is one of the most powerful numerical methods for solving problems that describe real-life situations. Moreover, the characteristics of the FEM handle the singularity problem effectively [17–27]. The applicability of the FEM to this model problem is demonstrated in the final section.
The description of the present paper is as follows. We consider the NLIF model described by Equations (2) and (3). In Section 2, we develop the numerical approximation based on the finite element approach. The stability analysis is provided in Section 3. We report some numerical examples from [8] and discuss the solution behaviour graphically in Section 4. In the last section, we conclude the work done in this research article.
Preliminaries
Here, we state some basic definitions and auxiliary results which will be used throughout the manuscript. As we are studying a nonlinear version of the FPE, we start with the notion of weak solution. Definition 1. We say that a pair of non-negative functions (p, N), with p ∈ L^∞(R^+; ·), is a weak solution of (2) and (3) if the corresponding weak formulation holds for any test function. Here, the space L^p(Ω), 1 ≤ p < ∞, refers to the space of functions f such that |f|^p is integrable in Ω, while L^∞ corresponds to the space of bounded functions in Ω. The set of infinitely differentiable functions in Ω, denoted by C^∞(Ω), is used as the set of test functions in the notion of weak solution. The blow-up of solutions and the a priori estimates are given in [8]; here we just state the results.
Theorem 1 (Blow-up). Assume that the drift and diffusion coefficients satisfy, for all −∞ < v ≤ V_F and all N ≥ 0, the conditions stated in [8], and let us consider the average-excitatory network, where b > 0. If the corresponding moment of the initial data is close enough to e^{µV_F}, then there are no global-in-time weak solutions to (2)–(4).
Lemma 1 (A priori estimates). Assume h(v, N) = −v + bN and a(N) = a_0 + a_1 N for the drift and diffusion coefficients, and that (p, N) is a global-in-time solution of (2)–(4) in the sense of Definition 1, fast decaying at −∞; then the corresponding a priori estimates hold for all T > 0. In a recent work [8], it was demonstrated that the problem (2)–(4) can produce a finite-time blow-up solution for excitatory networks (b > 0) when the initial data is concentrated sufficiently close to the threshold voltage. This result was obtained without information about the behaviour at the blow-up time. Following the recent work [12], we state the theorem that gives a characterization of this blow-up time when it occurs for b > 0.
There exists a classical solution of (2)–(4) on the time interval [0, T*) with T* > 0, and the maximal existence time T* > 0 can be characterized as in [12]. Moreover, when b ≤ 0 we have that T* = ∞, while for b > 0 there exist classical solutions which blow up at a finite time T* and consequently have a diverging mean firing rate as t ↑ T*.
We now state the main result on steady states from [8].
1. Under either of the conditions on b > 0 and the diffusion coefficient stated in [8], there exists at least one steady state solution to (2)–(4).
2. If both conditions hold simultaneously, then there are at least two steady state solutions to (2)–(4).
3. There is no steady state of (2)–(4) under the high-connectivity condition.
Finite Element Approximation
We construct the numerical approximation of the problem given in (2) and (3) in two steps: first we use the FEM for the space discretization, which provides a system of ordinary differential equations; this system is then solved by Euler's backward difference in time. The spatial discretization involves the construction of a weak formulation of the problem over a given domain Ω = [v_0, v_n] with specified boundary conditions at v = v_0 and v = v_n. The weak formulation of the problem (2) and (3) is obtained by multiplying the equation by some test function w(v) and integrating over Ω. In the present study, the mean firing rate N(t) is approximated using a backward FDM. Performing integration by parts in the above equation, we obtain the weak form (8). The resulting integral formulation (8) is called the weak formulation because it allows approximating functions with less continuity (or differentiability) than the strong form given in Equation (2). Once we have obtained the weak formulation, the next step is to discretize the weak form for easy representation and to capture the local effects more precisely. The discretization of the weak form consists of dividing the entire domain into a set of elements and then developing the finite element model by seeking the approximation of the solution over a typical element. This discretization is carried out by taking n non-overlapping elements D_i = [v_i, v_{i+1}], i = 1, 2, . . ., n, with step size h. The unknown function p(v, t) must be approximated in such a manner that the continuity and differentiability demanded by the weak formulation can be met. Since the weak formulation contains the first-order derivative of p, any function with non-zero first derivative is a candidate for the approximation. Thus, the semi-discretization consists of finding an expansion in which the p̃_j are the nodal values and the ψ_j are the basis functions. There are many possible choices of the weight function w(v); the particular choice of w(v) in the Galerkin approach is the same as the choice of basis function ψ_j(v). Thus, substituting the weight function w = ψ_l(v), l = i, i + 1, and the approximation of the solution defined by Equation (10) into the weak formulation obtained in Equation (9) leads to an element-level equation; on simplification, we obtain Equation (12). Solving Equation (12), we get the system of ordinary differential equations for the element nodal vector p̃ = (p̃_0, p̃_1)^T, which can be expressed in matrix notation. By assembling the contributions from all elements, we get the global system (14) for the nodal vector p = [p_0, p_1, . . ., p_n]^T on the entire domain, where f is the column vector whose entries are all zero except at the reset potential V_R. The system of ordinary differential equations (14) requires an implicit and stable time-stepping method to avoid an extremely small time-step.
First, we discretize the time domain [0, T] into m subintervals with time step ∆t. We use Euler's backward difference in time and obtain from Equation (14) the fully discrete system (15); this algebraic system can then be solved for the nodal values p_j at each time level.
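Since the element matrices and the fully discrete system (15) are not reproduced here, the following Python sketch shows one way the semi-discretization and backward Euler stepping described above could be assembled, assuming linear hat basis functions, a lumped delta source at V_R, an absorbing boundary at V_F, and the drift and diffusion h(v, N) = −v + bN, a(N); it is our reconstruction under these assumptions, not the authors' code.

```python
import numpy as np

def assemble(v, a, bN):
    """P1 Galerkin mass M, diffusion K and advection B matrices for
    d/dt p + d/dv[(-v + bN) p] - a d^2p/dv^2 on the grid v (hat functions)."""
    n = len(v)
    M = np.zeros((n, n)); K = np.zeros((n, n)); B = np.zeros((n, n))
    gauss = np.array([-1.0, 1.0]) / np.sqrt(3.0)         # 2-point Gauss rule, exact here
    for e in range(n - 1):
        vl, vr = v[e], v[e + 1]; h = vr - vl
        for xi in gauss:
            x = 0.5 * (vl + vr) + 0.5 * h * xi
            w = 0.5 * h
            phi = np.array([(vr - x) / h, (x - vl) / h])  # hat functions on the element
            dphi = np.array([-1.0 / h, 1.0 / h])
            g = -x + bN                                   # drift h(v, N) = -v + bN
            for i in (0, 1):
                for j in (0, 1):
                    M[e + i, e + j] += w * phi[i] * phi[j]
                    K[e + i, e + j] += w * a * dphi[i] * dphi[j]
                    B[e + i, e + j] += -w * dphi[i] * g * phi[j]   # advection, after integration by parts
    return M, K, B

def backward_euler(p0, v, dt, nsteps, b, a_of_N, V_R):
    p, N = p0.copy(), 0.0
    jR = int(np.argmin(np.abs(v - V_R)))                  # node carrying the delta source
    for _ in range(nsteps):
        a = a_of_N(N)
        M, K, B = assemble(v, a, b * N)
        A = M + dt * (B + K)                               # implicit operator of one backward-Euler step
        rhs = M @ p
        rhs[jR] += dt * N                                  # re-injection of fired neurons at V_R
        A[-1, :] = 0.0; A[-1, -1] = 1.0; rhs[-1] = 0.0     # absorbing condition p(V_F, t) = 0
        p = np.maximum(np.linalg.solve(A, rhs), 0.0)
        N = max(0.0, a * (p[-2] - p[-1]) / (v[-1] - v[-2]))   # firing rate = -a dp/dv at V_F
    return p, N
```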
Stability Analysis of the Scheme
The Fourier method developed by von Neumann is a very flexible tool for analyzing stability. In this method, the initial data is represented in terms of a finite Fourier series and we examine the growth of the individual Fourier components. After assembling the reduced system (13) and using b_1 = b N(t_{k−1}) and b_2 = a(N(t_{k−1})), we get the finite element difference-differential equation (16) at the i-th node. Let e_j^{k−1} denote the error at the (k−1)-th stage for nodal value j, which satisfies Equation (16) as well and yields the error equation (17). At each time level, the error can be expanded as e_j^k = A ξ^k e^{iβjh}. Substituting this expansion into Equation (17) and simplifying, we obtain an expression for the amplification factor ξ. Performing some algebraic manipulation, we find that whenever Q ≥ 0 we get |ξ| ≤ 1; hence, the scheme is conditionally stable.
Numerical Experiments
In this section, we present some numerical examples to demonstrate the behaviour of the solutions of the nonlinear NLIF model. The performance of the developed scheme is tested by comparing our results with the existing scheme in the literature. Consider the system (2) and (3) with the initial data given in (20), where the changes in mean v_0 and variance σ_0^2 describe different solution scenarios. We note that the behaviour of the solution depends upon the value of the excitatory (b > 0) or inhibitory (b < 0) average network connectivity. First we take a constant diffusion coefficient a(N) = a_0 and study the effect on the solution of changing the value of b.
In Figure 1a, we find that after some time the solution p(v, t) reaches a steady state for the excitatory case with b = 0.5 > 0 small enough, using the initial data (20) with mean v_0 = 0 and variance σ_0^2 = 0.25 and activity-dependent noise a(N(t)) = a_0 = 1. Figure 1a depicts the numerical solution p(v, t) obtained with the FEM at different time levels using the initial data (20); Figure 1b shows the comparison between the existing scheme and the FEM for the numerical solution p(v, t) at t = 1.5.
The approximate solution p(v, t) for v ∈ [−4, 2] at different time levels t > 0 is plotted in Figure 1 with a reset potential V_R = 1. From Figure 1a, we see that the height of the impulse decreases as time increases, and after some time it reaches a steady state. In Figure 1b, we perform the numerical approximation based on the FEM and compare it with the results obtained in [8] at the final time t = 1.5.
The evolution of the firing rate N(t) with time t > 0 is plotted in Figure 2. We find that the firing rate spans a different range as b changes, both in the excitatory case b > 0 and in the inhibitory case b < 0. In Figure 2a, we consider the case when the initial data is centred at v_0 = 0 with b = 0.5; we observe that the solution reaches a steady state. We also consider the cases when the initial data is centred at v_0 = −1 with different values b = 3, 1.5, −1.5, to explore the different phenomena based on these values in Figure 2.
The errors for the approximate solutions shown in Figures 1, 3 and 4 are plotted in Figures 5–7, respectively. Numerical values of these errors and the CPU time (MATLAB and Statistics Toolbox Release 2013a, The MathWorks, Inc., Natick, MA, United States) of the two methods are shown in Tables 1 and 2.
Here p* is the numerical solution on the finest grid N*. For the numerical experiments, we compute the errors with the finest grid being N* = 320. Our test outcomes show that the error defined above is indeed a monotonically decreasing function of N, for N = 20, 40, 80, 160. In Figure 3a, we perform the numerical approximation based on the FEM and compare it with the results obtained in [8] at time t = 0.0408. The evolution of the firing rate N(t) with time t > 0 is plotted in Figure 3b, which describes the blow-up situation when the initial data is concentrated around v_0 = 1.5 with b = 1.5 > 0. Tables 3 and 4 give the different error values at the final time t = 0.0408 for the same data that is represented graphically in Figure 6. From Figure 4a, it is clear that when the initial data is concentrated close enough to the threshold point V_F, the solution blows up at a finite time t = 0.0025, which is earlier than the phenomenon described in Figure 3b. This happens because the initial data is concentrated close enough to the threshold point, i.e., v_0 = 1.83, with b = 0.5 > 0 small enough. For the different error values and CPU times for the data represented graphically in Figure 4, see Tables 5 and 6. In Figure 8, we treat the cases of activity-dependent noise of the form a(N) = a_0 + a_1 N(t), a_0, a_1 > 0. Figure 8a shows that by taking b = 0.5 and a(N(t)) = 0.5 + N(t)/8 the solution goes to a steady state; this further indicates that the solution reaches a steady state earlier than the solution behaviour shown in Figure 8b, obtained with a(N(t)) = 0.4 + N(t)/100. Figure 8c shows the blow-up of the solution obtained by taking b > 1 and a(N(t)) = 0.5 + N(t)/8. From Figure 8d, we find that by reducing the noise factor to a(N(t)) = 0.4 + N(t)/100 the solution goes to a steady state.
The behaviour of the solution with noise factor a(N(t)) = 1 + N(t)/100, for both the excitatory and the inhibitory case, is shown in Figure 9. From Figure 10a, we find that the solution blows up in finite time, while the right panel shows a steady state obtained by reducing the noise factor. The initial data is given in Equation (20) with mean v_0 = 1.5 and variance σ_0^2 = 0.005, with activity-dependent noise a(N(t)) = 1 + N(t)/100. Left: the excitatory case, i.e., b = 0.5 > 0; right: the inhibitory case, i.e., b = −0.5 < 0.
Conclusions
In this article, we proposed a finite element method to find the approximate solution of the nonlinear NLIF model. The performance of the proposed method is validated by comparison with an existing scheme in the literature. The approximate solutions determined by the Galerkin finite element method have the same accuracy as achieved by the high-order finite difference scheme (WENO-FDM). The proposed scheme takes less computational time than the WENO-FDM; the reason is that the existing scheme contains many computational factors such as smoothness indicator functions and non-negative weights. Moreover, we also included the role of both excitatory and inhibitory impulses in the model equation. The stability analysis of the proposed scheme is discussed and shows that the scheme is conditionally stable. The behaviour of the solution is illustrated with several test examples. The results reveal that the continuous Galerkin FEM is better than the WENO-FDM for simulating the dynamics of large-scale neuronal networks in the brain.
Figure 1. The approximate solution p(v, t) for initial data (20).
Figure 3. Left: the approximate solution p(v, t); right: the firing rate N(t) for initial data given in Equation (20) with mean v_0 = 1.5 and variance σ_0^2 = 0.005. The system is considered for the excitatory case, i.e., b = 1.5 > 0, with activity-dependent noise a(N(t)) = a_0 = 1. (a): comparison of the existing scheme and the FEM for the numerical solution p(v, t); (b): time evolution of the firing rate N(t).
Figure 4. Left: the approximate solution p(v, t); right: the firing rate N(t) for initial data given in Equation (20) with mean v_0 = 1.83 and variance σ_0^2 = 0.003. The system is considered for the excitatory case, i.e., b = 0.5 > 0, with activity-dependent noise a(N(t)) = a_0 = 1. (a): comparison of the existing scheme and the FEM for the numerical solution p(v, t); (b): time evolution of the firing rate N(t).
Figure 5. Error for the approximate solution p(v, t) plotted in Figure 1.
Figure 6. Error for the approximate solution p(v, t) plotted in Figure 3a.
Figure 7. Error for the approximate solution p(v, t) plotted in Figure 4a.
Figure 9. The approximate solution p(v, t) for initial data given in Equation (20).
Table 1. Error table using the FEM for the approximate solution p(v, t) graphically represented in Figure 1, at the final time t = 1.5 with the finest grid being N* = 320.
Table 2. Error table using the finite difference WENO scheme for the approximate solution p(v, t) graphically represented in Figure 1, at the final time t = 1.5 with the finest grid being N* = 320. Errors of the numerical solution p(v, t) are calculated in the ‖·‖_1, ‖·‖_2 and ‖·‖_∞ norms, which are defined as follows.
Table 3. Error table using the FEM for the approximate solution p(v, t) graphically represented in Figure 3, at the final time t = 0.0408 with the finest grid being N* = 320.
Table 4. Error table using the WENO-FDM for the approximate solution p(v, t) graphically represented in Figure 3, at the final time t = 0.0408 with the finest grid being N* = 320.
Table 5. Error table using the FEM for the approximate solution p(v, t) graphically represented in Figure 4, at the final time t = 0.00255 with the finest grid being N* = 320.
Table 6. Error table using the WENO-FDM for the approximate solution p(v, t) graphically represented in Figure 4, at the final time t = 0.00255 with the finest grid being N* = 320. | 5,206.6 | 2019-04-21T00:00:00.000 | [
"Mathematics"
] |
PixelBeing - An Eco-sustainable Approach to Robotics and AI
In this paper, we describe and define the range of possible applications and the technical contours of a robotic biotechnological system to be worn on the body for playful interactions. Moving from earlier works on wearable and modular robotics, we describe how, by using modular robotics to create wearables, it is possible to obtain a self-sustainable and flexible wearable system, consisting of freely interchangeable input/output modules that, through the use of solar, mechanical, and other sources of renewable energy, can be adapted to specific tasks. Here, we draw attention to early prototypes to show the potential of such an approach, and focus on depicting possible future applications in the electronics domain. Indeed, our artistic experiment is a clear example of how to scale down electronics to an eco-sustainable level that can still create playful and useful interactions for many application domains.
Introduction
The inevitable destiny of all future technologies is to align with the process of lessening their environmental impact in terms of production, materials, and consumption. Therefore, electronics, computing, and robotics should also try to reduce their environmental effect from the design phase onwards. To make design decisions that improve sustainability, designers need clear information on how to keep the whole process manageable, sustainable and doable. As a consequence, such product design consists both of choosing among an enormous array of options and, at the same time, working within a very limited subset of possibilities.
Therefore, the main intention of the PixelBeing project is to design an artifact that provides product designers with actionable insights into the main triggers of environmental impact, so they can change their design conception to be more environmentally friendly. We do so both to support eco-design strategies and to create necessarily low-cost and eco-friendly products, while keeping in mind that any product still needs robustness, flexibility and the usual support provided by any developer. Our decision was to realize a prototype that, in part, is a tool for product designers with which they can quickly see, compare and assess design variations without needing to go through the whole experimental process, building on the playware research methodology [1]. Indeed, eco-friendly design factors, such as mass, energy use and transport volume, are very numerous and difficult to bring together in the same production, and our experimentation shapes some knowledge that might provide easy entry points for design improvements and allow designers to develop product-specific guidelines.
On one side, our research piece aspires to be part of the larger investigation facing a concrete challenge, one that might lead towards an epochal change; on the other hand, the PixelBeing project, being a handmade art-oriented research piece (building on previous robot art investigations [2]), carries with it a relatively low-demanding output in terms of tangible industrial production for now and, conversely, an extra request to fulfil serious demands in terms of innovative aesthetic results.
The PixelBeing thinking has meant trying to revisit the high-tech production of AI and robotics artifacts in a more ethical light, looking for the best ways to analyze hotspots and opportunities in the production life cycles from an economical point of view, without sacrificing efficient, accurate and transparent results. A first example regards materials: in PixelBeing there has been a search for the most available and accessible, easy and effortless production methods that avoid the use of plastic or other toxic and high-impact materials as much as possible. Another clear example regards methods: in PixelBeing we tried to minimize the size, the cost and the implantation difficulty, in terms of speed and effort, of any given functional component, and we focused on technologies where the lifespan of the whole structure and the replacement of broken parts are essential targets. A final example regards energy: we focused our research on the most obvious energy impact of the technology in use. In PixelBeing, indeed, we mostly make use of solar-panel circuits, and we are currently in the process of implementing mechanical-energy circuits in the form of kinetic energy, in which objects can do work when they move, and potential energy, in which objects can do work due to their position.
In short, this electronic art project aims at an initial renovation of the idea of robotics so that, in the near future, it can reach the design, production and consumption of objects that are ethically more adequate in the sense of eco-sustainability.
The PixelBeing Project
PixelBeing is a project for a theatrical character that consists of a robotic system to be worn on the body (derived from previous work [3,4]). It is meant to produce aesthetic and playful interactions, and it is built using concepts derived from modular robotics and modular playware [5,6]. It foresees a full body suit made of a mask, gloves, shoes and a flexible wearable processing system, where freely interchangeable input/output modules can be positioned on the body suit in accordance with the aesthetic demands and the tasks at hand. The idea is to implement both a virtual (sensor-based) interaction and a more physical one, to reach a wider range of possible outcomes, behaviourally and aesthetically. Therefore, the basic challenge is to design a general interface that focuses on the users' bodily interaction with the real world, and possibly with a social environment. At the current state we have tested the general principles and have developed the mask, as well as started assembling the gloves and the body costume. This project mainly focuses on eco-friendly technologies and processes and tries to exploit the sustainability of any electronic (art) tool. It makes use of recycled components, biomaterials and supports, solar panels and all possible sources of renewable energy, and it also pays attention to the principle that procedures and resources should be as natural and easy to access as possible.
The basic idea is a suit that ends up being the playfield on which to arrange different mechanical, electronic and electromechanical modules according to any need. The modules we use on it can be either isolated or interconnected and should be made so that they can be easily and quickly relocated; therefore, the way the resulting suit configuration performs a task depends both on the modules' specific functionalities and on their physical (or geographical) placement on the suit itself. Modularity is essential because, besides offering a larger variety of possible configurations and activities, it is a rather eco-friendly approach, since it allows the least expensive procedure when a subpart of the artifact stops working. Of course, such a What&Where System can be applied to a wide and complex number of situations, tested on many potential uses, and used to create body interactions in several application domains.
Therefore, it becomes crucial to experiment with the possible definition and implementation of the idea of a module and of a module's functionality. We framed a number of general module characteristics: A) each module-circuit is fully autonomous energetically and electronically, although a circuit can be thought of as eventually connected, physically or virtually, to other modules, to a source of energy, or to any other computer interface; B) although there can be exceptions, each single circuit is conceived independently from its final location; C) a module should be applicable to any location and should be thought of as general-purpose, not limited to any single and specific application. As said above, modules can be thought of as either isolated or interconnected, and in the latter case we consider a communication paradigm where modules can communicate: A) locally, with neighbour-to-neighbour communication (wired or wireless); B) globally, from one module to a module far away (wireless); C) one-to-one; D) one-to-many; E) many-to-many. Our research, at the moment, can be considered preliminary. We aim to evolve it in many different ways, and a grant and/or a team experienced in electronics, robotics, informatics and AI would be ideal for such a goal. Indeed, we wish to implement modules that have a highly significant input set (a large number of sensors, including biofeedback and neurofeedback ones), clever input pre-processing, and a large set of articulated outputs (including motor actions).
There are a few more levels of research embodied in PixelBeing.
The first one explores the use of renewable energies. We have already furnished PixelBeing with accumulator modules and placed several solar-panel modules on its head.
The two interconnected modules are able to sustain small electronic circuits and, if perfected, should soon be able to supply enough power to larger circuits.
We are now in the process of exploring the use of mechanical-energy modules of two different kinds: the first makes use of the kinetic energy produced by the body of the actor wearing the suit, and the second simply exploits the gravitational potential energy due to its position in three-dimensional space. The second level of investigation digs into the potential of biomaterials for building and assembling modules. We are making a real effort to integrate electronic circuits within the most natural materials. Of course, there are many elementary problems, especially related to moisture, which we are slowly overcoming.
Meanwhile, we basically aim at reaching a level of integration where electronics co-exists with living materials and, probably a dream, circuits are somehow partially fed by growth processes such as the chlorophyll photosynthesis produced by any plant. A third level of investigation focuses on targeting the minimal size of the modules since, often, the smaller the modules are, the less they consume, the more adaptable the whole structure is to implement, and the cheaper it is to repair or replace broken units.
The fourth level of investigation regards the energy flow. Since the PixelBeing design fully relies on renewable energies, it has to rethink the flow of energy in terms of appropriate module locations. The "electronics space" must be conceived in profitable geometrical terms, so that acquiring and distributing energy is done in the most convenient way.
Discussion and Conclusion
In this paper we described and defined a range of possible implications and the technical contours of a modern robotic biotechnological system. Moving from a general overview of eco-friendly design, we described how, by using modular robotics to create wearables, it is possible to approach a self-sustainable and flexible wearable system, consisting of freely interchangeable input/output modules; through the use of renewable energy we were able to reach some important goals and targets, for example introducing natural materials and renewable energy into such a delicate field as wearable robotics design. We described the progress of our first prototypes to show the potential of such an approach, and focused on depicting possible future applications across the electronics domain. Indeed, our artistic experiment is a clear example of how to scale down electronics to an eco-sustainable level, which can still create functional, playful and useful interactions for many application fields.
Fig. 3. Detail of the PixelBeing mask. A solar-panel-based circuit module mounted on PixelBeing's helmet.
Fig. 4. A detail of a sense-and-actuate module to be mounted on PixelBeing's suit (later prototype): a biomaterial-based circuit made out of natural wood and moss. | 2,643.2 | 2020-01-13T00:00:00.000 | [
"Environmental Science",
"Engineering",
"Computer Science"
] |
The study of multi-peaked type-I X-ray bursts in the neutron-star low mass X-ray binary 4U 1636$-$536 with RXTE
We have found and analysed 16 multi-peaked type-I bursts from the neutron-star low mass X-ray binary 4U 1636$-$53 with the Rossi X-ray Timing Explorer (RXTE). One of the bursts is a rare quadruple-peaked burst which was not previously reported. All 16 bursts show a multi-peaked structure not only in the X-ray light curves but also in the bolometric light curves. Most of the multi-peaked bursts appear in observations during the transition from the hard to the soft state in the colour-colour diagram. We find an anti-correlation between the second peak flux and the separation time between two peaks. We also find that in the double-peaked bursts the peak-flux ratio and the temperature of the thermal component in the pre-burst spectra are correlated. This indicates that the double-peaked structure in the light curve of the bursts may be affected by enhanced accretion rate in the disc, or increased temperature of the neutron star.
INTRODUCTION
Thermonuclear (type I) X-ray bursts show a sudden increase in X-ray intensity, becoming ∼10−100 times brighter than the persistent level, triggered by unstable ignition of accreted fuel on the surface of an accreting neutron star (NS) in low-mass X-ray binaries (LMXBs) (Galloway et al. 2008; Galloway & Keek 2017). Type I X-ray bursts were first detected in 1975 from the binary 4U 1820−30 in the globular cluster NGC 6624 (Grindlay 1976); subsequently a growing population of bursters has been observed by different X-ray satellites (Galloway & Keek 2017). In a typical X-ray burst, the light curve shows a single-peaked profile with a fast rise (∼1−5 s) and an exponential decay within 10−100 s (Lewin et al. 1993; Strohmayer & Bildsten 2006; Galloway et al. 2008).
Besides the single-peaked normal bursts, multi-peaked bursts have also been reported in previous studies. Double-peaked bursts have been reported in several NS-LMXBs, e.g., 4U 1608−52 (Penninx et al. 1989), GX 17+2 (Kuulkers et al. 2002), 4U 1709−267 (Jonker et al. 2004) and MXB 1730−335 (Bagnoli et al. 2014). With the Rossi X-ray Timing Explorer (RXTE), Watts & Maurer (2007) analysed 4 double-peaked bursts in 4U 1636−53. Still more rarely, bursts with a triple-peaked structure have also been observed in 4U 1636−53 (van Paradijs et al. 1986; Zhang et al. 2009). While investigating the cooling phase of X-ray bursts in 4U 1636−53, Zhang et al. (2011) reported 12 double-peaked bursts and found that most of them appeared at the vertex of the colour-colour diagram. Recently, there were two new observations of double-peaked bursts, one in the soft spectral state in 4U 1608−52 (Jaisawal et al. 2019), and another one in SAX J1808.4−3658 (Bult et al. 2019), both using the Neutron Star Interior Composition Explorer (NICER). The double-peaked structures in the burst light curve can be separated into two groups. The first one consists of bursts with a double-peaked profile in X-rays but a single-peaked profile in the bolometric light curve, generally accompanied by photospheric radius expansion (PRE), where the flux of the burst reaches the Eddington luminosity. In this case, the temperature of the photosphere temporarily shifts out of the instrument passband, causing an apparent dip in the observed X-ray light curve (Paczynski 1983). The other group consists of bursts that have a double-peaked profile both in X-rays and in the bolometric light curve. Most of these bursts have low peak flux, although PRE bursts with double-peaked profiles both in X-rays and in bolometric luminosity have recently been observed with NICER (Jaisawal et al. 2019; Bult et al. 2019).
Several theoretical models have been proposed to explain the double-peaked bursts. Fujimoto et al. (1988) proposed a model of stepped thermonuclear energy generation due to shear instabilities in the fuel on the NS surface. Melia & Zylstra (1992) suggested that the double-peaked bursts are due to the scattering of the X-ray emission by material evaporated from the disk during the burst. These models, however, can not reproduce the observed doublepeaked profiles in the light curve, black-body temperature and radius (Bhattacharyya & Strohmayer 2006a). Fisker et al. (2004) suggested that a waiting point impedes the nuclear reaction flow and causes a stepped release of thermonuclear energy, but this idea has difficulties in explaining the large dips observed between the two peaks (Bhattacharyya & Strohmayer 2006a). The thermonuclear flame spreading model provided by Bhattacharyya & Strohmayer (2006a) suggested that the double-peaked structure is caused by high latitude ignition and stalling approaching the equator. This model qualitatively explains the essential features of the light curve and reproduces the spectral evolution of two double-peaked bursts in 4U 1636−53. However, one of the problems of the flame spreading model is that it can not explain the triple-peaked bursts (Zhang et al. 2009). Lampe et al. (2016) in their simulations found that low accretion rate and high metallicity could affect the burst morphology and produce twin-peaked structure when a large amount of hydrogen has been depleted. Recently, Bult et al. (2019) suggested that the bright double-peaked bursts are due to the local Eddington limits associated with the hydrogen and helium layers of the NS envelope. Understanding these mechanisms is important, because current models for multi-peaked X-ray bursts have met with only partial success in explaining their light curves and temperature profiles.
Burst properties in an individual system depend mainly on the accretion rate (Fujimoto et al. 1981; Bildsten 2000; Zhang et al. 2011; Galloway & Keek 2017). For a specific source, given a certain global accretion rate, the local accretion rate varies with latitude, being higher at the equator and lower at high latitude (Cooper & Narayan 2007). The ignition latitude depends on the column depth, set by the accretion rate, required to trigger a burst. As the global accretion rate increases, the lower-latitude regions are the first to reach the critical local accretion rate at which the fuel burns stably, so ignition should occur at progressively higher latitudes, eventually at the poles. Combining the work of Cooper & Narayan (2007) with the thermonuclear flame-spreading model of Bhattacharyya & Strohmayer (2006a), Watts & Maurer (2007) expected to find more double-peaked bursts at higher global accretion rates than single-peaked bursts. However, Watts & Maurer (2007) presented an analysis of the accretion rates of 4 double-peaked bursts which posed a challenge to the above expectation. In this paper, we collect a large sample to provide a more complete description of the observational features of multi-peaked bursts and further discuss the relation between accretion rate and multi-peaked structure.
In the time-resolved spectral analysis, the standard approach is to fit the X-ray burst spectra assuming a constant persistent emission (non-burst component) during the burst (Galloway et al. 2008). However, recent studies provide evidence of enhanced accretion during type-I X-ray bursts (Worpel et al. 2013, 2015). Allowing the persistent emission to vary gives improvements in the quality of the spectral fit compared to the standard approach, and the analysis is sensitive to changes in the persistent spectrum in the 2.5−20 keV range (Worpel et al. 2015). In this paper, we adopt both the standard approach and this variable-persistent-emission method to analyse our sample of multi-peaked bursts.
The LMXB 4U 1636−53 is one of the best-studied X-ray burst sources. The NS is in a binary system with a 3.8 hr orbit (van Paradijs et al. 1990) and an 18th-magnitude blue star companion (Galloway et al. 2008), and the spin frequency of the NS is 581 Hz (Strohmayer et al. 1998a,b). 4U 1636−53 is an atoll source, and as the source moves in the colour-colour diagram (hereafter CCD) from the top right to the bottom right, the accretion rate gradually increases, with a transition from the island to the banana state (Hasinger & van der Klis 1989). The single-peaked bursts show a uniform distribution in the CCD (Zhang et al. 2011). About a dozen double-peaked and two triple-peaked X-ray bursts have been discovered from this source using different satellites (Sztajno et al. 1985; van Paradijs et al. 1986; Lewin et al. 1987; Bhattacharyya & Strohmayer 2006a,b; Watts & Maurer 2007; Galloway et al. 2008; Zhang et al. 2009, 2011). The large sample of multi-peaked bursts makes 4U 1636−53 an ideal source to study the properties and evolution of this kind of burst.
The structure of this paper is organised as follows. In Section 2 we describe the data analysis of our sample. In Section 3 we show the results of light curves and spectra. In Section 4 we discuss our findings in the context of previous theoretical work.
DATA ANALYSIS
The Rossi X-ray Timing Explorer (RXTE) was launched in 1995 and operated until 2012 in a circular orbit at an altitude of 580 km, corresponding to an orbital period of about 96 min (Bradt et al. 1993). We analysed all archived data from the Proportional Counter Array (PCA), which is the main instrument onboard RXTE. The PCA consists of five collimated proportional counter units (PCUs), which are sensitive in the 2−60 keV energy range with an energy resolution of ∼1 keV at 6 keV (Jahoda et al. 2006). For each observation we used the Standard2 data (16-s time resolution and 129 energy channels) to calculate X-ray colours. We used the Standard1 mode (only PCU2) to produce the burst light curves. For the time-resolved spectral analysis of the bursts, we extracted spectra in 64 channels from the Event data of all available PCUs.
We studied 336 type I X-ray bursts (as in Zhang et al. 2013) in the LMXB 4U 1636−53 with RXTE and discovered 16 multi-peaked bursts. The 0.125-s binned light curves of these 16 bursts in the 2−60 keV band are shown in Figure 1. Following the procedure in Zhang et al. (2011) and Zhang et al. (2013), we searched the 1-s Standard1 light curve for bursts visually, and took the start time of a burst to be the time at which the flux exceeds 3 times the 1σ error of the average persistent flux. We find 14 double-peaked bursts (4 of which were investigated by Watts & Maurer 2007), one triple-peaked burst (Zhang et al. 2009) and one quadruple-peaked burst which was not reported in previous work. We divided the 16 bursts into two classes according to the number of peaks: all the double-peaked bursts are classified as Class 1 bursts, and the bursts with more than two peaks are classified as Class 2 bursts.
To study these bursts in detail, we introduce several parameters to characterise the burst light curves (see Table 1 and Table 2). In Table 1, the first column gives the burst number, sorted by the value of the peak-flux ratio (see details later). Because most of the data around each peak show a symmetric distribution, we used a Gaussian function to fit the data around each single peak in each burst light curve to obtain the peak times. Table 1 lists the peak time of the first peak, the separation time between the first and second peaks, and the peak time of the second peak. In Table 2, we list the characteristics of the light curves of the Class 2 bursts. The fitting procedure for the triple-peaked burst is similar to that for the double-peaked bursts. For the quadruple-peaked burst, we give a more detailed discussion in Section 3.5. Table 2 lists the peak times of the third and fourth peaks and the separation times between the second and third peaks and between the third and fourth peaks. For all of these parameters we give the 1σ error.
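As an illustration of the peak characterization just described, the following sketch fits a Gaussian to the samples around each local maximum of the 0.125-s light curve and reads off the peak times and their 1σ errors; the window width and the initial guesses are placeholders, not values from the paper.

```python
# Gaussian fit around a single peak of a burst light curve (time in s, rate in counts/s).
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amplitude, t_peak, width):
    return amplitude * np.exp(-0.5 * ((t - t_peak) / width) ** 2)

def fit_peak(time, rate, t_guess, half_window=2.0):
    """Fit a Gaussian around t_guess; return the peak time and its 1-sigma uncertainty."""
    sel = np.abs(time - t_guess) < half_window
    p0 = [rate[sel].max(), t_guess, 1.0]
    popt, pcov = curve_fit(gaussian, time[sel], rate[sel], p0=p0)
    return popt[1], np.sqrt(pcov[1, 1])

# Example for a double-peaked burst, with rough peak locations chosen by eye:
# t_peak1, err1 = fit_peak(time, rate, t_guess=2.0)
# t_peak2, err2 = fit_peak(time, rate, t_guess=8.0)
# separation = t_peak2 - t_peak1
```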
To trace the spectral state of the source when these multi-peaked bursts appear, we constructed a CCD, as shown in Figure 2. We defined the soft colour as the ratio of the count rate in the 3.5−6.0 keV band to the count rate in the 2.0−3.5 keV band, and the hard colour as the ratio of the count rate in the 9.7−16.0 keV band to the count rate in the 6.0−9.7 keV band (Zhang et al. 2009). The colours of the source are normalised by those of the Crab. For the colours of the source before each burst, we used the 64-s pre-burst spectrum. In Figure 2, the grey points represent all available observations. The black crosses represent all the bursts in this source. The red filled circles stand for Class 1 bursts, and the blue filled triangles indicate the Class 2 bursts. We find that most of the multi-peaked bursts are located close to the vertex of the CCD.
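The colour definitions above can be written compactly as in the following sketch, which assumes that band-limited count rates have already been extracted from the Standard2 data and that the corresponding Crab colours are available for normalisation.

```python
def xray_colours(rate_2_35, rate_35_6, rate_6_97, rate_97_16,
                 crab_soft=1.0, crab_hard=1.0):
    """Soft and hard colours as defined in the text, normalised by the
    Crab colours (illustrative sketch)."""
    soft = (rate_35_6 / rate_2_35) / crab_soft    # (3.5-6.0)/(2.0-3.5) keV
    hard = (rate_97_16 / rate_6_97) / crab_hard   # (9.7-16.0)/(6.0-9.7) keV
    return soft, hard
```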
We extracted the 64-s spectrum prior to each burst as the persistent emission. We generated the instrument response matrix using the tool pcarsp and the instrumental background using the tool pcabackest in HEAsoft for each spectrum. In this work, we fitted the spectra in the 3.0−20 keV band using XSPEC version 12.10.1 (Arnaud 1996). We added a 0.5% systematic error to the pre-burst spectra because of calibration uncertainties. During the fitting process, we included the effect of interstellar absorption using the cross-sections of Balucinska-Church & McCammon (1992) and solar abundances from Anders & Grevesse (1989), with a fixed hydrogen column density, N_H, of 0.36 × 10^22 cm^-2 (Pandel et al. 2008). Table 1. Properties of the light curves of Class 1 bursts in 4U 1636−53. The first column gives the burst number, sorted by the value of the peak-flux ratio (R_{1,2}), which is the ratio of the first peak flux to the second peak flux. We used a Gaussian function to fit each peak of every burst. The quantity t_{p,1} gives the peak time of the first peak, Δt_{12} is the separation time between the first and the second peak, and t_{p,2} is the peak time of the second peak. Table 2. Properties of the light curves of Class 2 bursts in 4U 1636−53. The quantities t_{p,3} and t_{p,4} are the peak times of the third and fourth peaks, respectively, and Δt_{23} and Δt_{34} are the separation times between the second and the third peak, as well as between the third and the fourth peak, respectively. For the triple-peaked burst, we used the same method as for the double-peaked bursts. For the quadruple-peaked burst, we used two Gaussian functions plus two BURS models to fit all the data of the light curve (see the detailed discussion in Section 3.5).
Tables 1 and 2 also list the start time (UTC) and ObsID of each burst. In Figure 3 the grey dashed line shows the ratio R_{1,2} = 1; all these data are given in Table 1.
We used all available PCUs during the X-ray bursts to produce the time-resolved spectra. We corrected every spectrum for dead time according to the methods supplied by the RXTE team. Since the decay of the burst light curves is quite smooth, to compensate for the lower count rates we extracted spectra over longer intervals in the tails of the bursts. For each multi-peaked burst, we generated one instrument response matrix using pcarsp and the instrumental background using pcabackest in HEAsoft.
For the burst spectral analysis, we initially used a single-temperature blackbody model, TBabs*bbodyrad, to fit the net burst spectra, which is well established as a standard procedure in X-ray burst analysis (Kuulkers et al. 2002; Galloway et al. 2008). The model provides the blackbody colour temperature, kT_bb, and the normalisation, N_bb, proportional to the square of the blackbody radius of the burst emission surface, and it allows us to estimate the bolometric luminosity as a function of time assuming a distance of 5.95 kpc (Fiocchi et al. 2006). The burst bolometric flux is calculated from the best-fitting blackbody parameters, with N_bb = R_km^2 / D_10^2, where R_km is the effective radius of the emitting surface in km and D_10 is the distance to the source in units of 10 kpc. We defined R_{1,2} as the ratio of the first peak flux to the second peak flux. We note that, to reduce instrumental effects, we used the bolometric flux from the standard approach (not the net burst count-rate ratio in Figure 1) to calculate the ratio R_{1,2}.
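For reference, a blackbody with colour temperature kT_bb (in keV) and normalisation N_bb = R_km^2/D_10^2 has a bolometric flux given, in the convention commonly used in burst spectroscopy (e.g. Galloway et al. 2008), by
\[
F_{\rm bol} \simeq 1.08 \times 10^{-11} \left(\frac{kT_{\rm bb}}{1\,\mathrm{keV}}\right)^{4} N_{\rm bb}\ \ \mathrm{erg\ cm^{-2}\ s^{-1}},
\]
which is simply F = σT^4 (R/d)^2 expressed in these units; we quote it here only as the standard relation, not necessarily with the exact numerical prefactor used in the original analysis.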
After subtracting the instrumental background for the time-resolved spectra, we used another model that allows us to vary the pre-burst spectrum by a free scaling factor, f_a (Worpel et al. 2013). We used two models to describe the persistent emission: TBabs*(bbodyrad + powerlaw) and TBabs*(diskbb + powerlaw) in XSPEC. The fit results for the above two models are shown in Tables 3 and 4, respectively. In Table 3, kT_bb is the blackbody temperature, N_bb is the normalisation of the blackbody, Γ is the power-law index and N_pl is the normalisation of the power law. The spectral parameters of our best-fitting model are given with 1σ errors. In Table 4, kT_dbb is the disc blackbody temperature and N_dbb is the normalisation of the disc blackbody. Comparing the fitting results of the two pre-burst spectral models (see Table 3 and Table 4), we selected TBabs*(bbodyrad + powerlaw) as the best pre-burst model, so we used TBabs*(f_a*(bbodyrad + powerlaw) + bbodyrad) in our analysis (the so-called f_a method) to re-fit the net burst spectra. The parameter f_a was allowed to vary between −100 and 100 during the fits.
In Tables 3 and 4, F_p represents the persistent unabsorbed flux in the 2.5−25 keV band and L_p/L_Edd is the luminosity of the source before the burst in Eddington units. We used the unabsorbed 2.5−25 keV flux, F_p, a bolometric correction factor c_bol = 1.2 (Galloway et al. 2008), and the source distance of 5.95 kpc to estimate the X-ray persistent luminosity, L_p. In the calculation of the Eddington luminosity we assume that the ratio of the colour temperature to the effective temperature, T_c/T_eff, is 1.4 (Madej et al. 2004), that the NS mass is 1.4 M_sun and that the hydrogen mass fraction is X = 0.7. We note that the temperature of the thermal component corresponds to kT_bb in the model TBabs*(bbodyrad+powerlaw) and to kT_dbb in the model TBabs*(diskbb+powerlaw). Figure 1 shows the 14 bursts with double-peaked profiles. The bursts display large or gentle dips in their light curves. The first peak can be either weaker/shorter or stronger/longer than the second peak.
Multi-peaked burst light curves
In Figure 3 we show the rise time, t_{p,1}, of the first peak and the separation time, Δt_{12}, of the two peaks as a function of the peak-flux ratio, R_{1,2}. The grey dashed line represents R_{1,2} = 1. The rise time of the first peak is in the range ∼1−6 s, and the separation in time between the two peaks is in the range ∼3−7 s. We find that: (1) in the case R_{1,2} < 1, as the peak-flux ratio increases, both the rise time of the first peak and the separation in time increase; (2) in the case R_{1,2} > 1, these parameters do not depend upon the peak-flux ratio. Besides the triple-peaked burst (burst #15) reported by Zhang et al. (2009), we also discovered a ∼40-s-long quadruple-peaked burst (burst #16; date of observation: 2006 August 17), which was not reported in previous studies. Figure 2 shows the position of all multi-peaked bursts in the CCD of 4U 1636−53. The normal, single-peaked, bursts observed with RXTE are distributed more or less uniformly across the CCD. However, except for bursts #1 and #8, most of the multi-peaked bursts are located close to the vertex of the CCD. Table 3. Best-fitting parameters of the persistent emission before the 16 multi-peaked bursts in 4U 1636−53 with the model TBabs*(bbodyrad+powerlaw). The quantity kT_bb is the blackbody temperature, N_bb is the normalisation of the blackbody, Γ is the power-law index, N_pl is the normalisation of the power law, F_p represents the persistent unabsorbed flux in the 2.5−25 keV band and L_p/L_Edd is the luminosity of the source before the burst in Eddington units. The spectral parameters of our best-fitting model are given with 1σ errors. Table 4. Best-fitting parameters of the persistent emission before the 16 multi-peaked bursts in 4U 1636−53 with the model TBabs*(diskbb+powerlaw). The quantity kT_dbb is the disc blackbody temperature, N_dbb is the normalisation of the disc blackbody, Γ is the power-law index, N_pl is the normalisation of the power law, F_p represents the persistent unabsorbed flux in the 2.5−25 keV band and L_p/L_Edd is the luminosity of the source before the burst in Eddington units. The spectral parameters of our best-fitting model are given with 1σ errors. With the lowest peak-flux ratio (R_{1,2} ∼ 0.3) and the highest peak flux (∼5.2 × 10^-8 erg cm^-2 s^-1 using the standard approach), burst #1 is located at the bottom right, when the source was in the so-called upper banana branch of the CCD. In the light curve of burst #1, the first peak is relatively weak (R_{1,2} ∼ 0.3) and the whole burst is dominated by the second peak. Burst #1 also has the second-shortest rise time (t_{p,1} ∼ 1.6 s) of the first peak and the shortest separation time (Δt_{12} ∼ 4.0 s).
Multi-peaked bursts in the CCD
Burst #8 is located between the Banana and the Island states in the CCD. Different from burst #1, burst #8 has the longest rise time (t_{p,1} ∼ 5.0 s) of the first peak and the longest separation time (Δt_{12} ∼ 6.6 s), and a high R_{1,2} (∼0.8).
The relation between pre-burst spectrum and burst profile
To investigate the relation between the pre-burst spectra and the multi-peaked burst light-curve profiles, we compared the spectral parameters of the persistent emission of Class 1 bursts with two properties of the light curve: the peak-flux ratio, R_{1,2}, and the separation time of the two peaks, Δt_{12}. Figure 4 shows the spectral parameters of the persistent emission (in the two models) against the peak-flux ratio. The red dots represent the Class 1 bursts, and the blue triangles represent Class 2 bursts. The left panels (a) and (b) in Figure 4 display, respectively, the blackbody temperature, kT_bb, and the power-law index, Γ, against the peak-flux ratio, R_{1,2}, for the model TBabs*(bbodyrad+powerlaw). The blackbody temperature ranges from 1.57 keV to 1.87 keV. There appears to be a positive correlation between the blackbody temperature and the peak-flux ratio. In order to check this, we first fitted these data with a constant model and obtained χ² = 24.6 for 13 d.o.f. We then fitted these data with a linear function, and obtained a slope of 0.07 ± 0.02 with χ² = 13.8 for 12 d.o.f. The F-test probability for these two fits is 0.009, indicating that a linear function is slightly better than a constant. The right panels (c) and (d) in Figure 4 show, respectively, the disc-blackbody temperature, kT_dbb, and the power-law index, Γ, against the peak-flux ratio, R_{1,2}, for the model TBabs*(diskbb+powerlaw). For panel (c) of Figure 4, we performed the same analysis as in panel (a), obtaining χ² = 31.7 for 13 d.o.f. for a constant model and a slope of 0.09 ± 0.03 with a reduced chi-square of 1.64 (χ²/d.o.f. = 19.7/12) for the linear function. The F-test probability for these two fits is 0.019. The above analysis indicates that the peak-flux ratio, R_{1,2}, is marginally correlated with the temperature of the thermal component in the pre-burst spectra.
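The constant-versus-linear comparison described above can be reproduced with a standard F-test for one additional model parameter, as in the sketch below; the chi-square values are those quoted in the text.

```python
from scipy import stats

def ftest_probability(chi2_simple, dof_simple, chi2_complex, dof_complex):
    """Classical F-test comparing a simpler fit (e.g. a constant) with a
    more complex one (e.g. a straight line)."""
    f_stat = ((chi2_simple - chi2_complex) / (dof_simple - dof_complex)) \
             / (chi2_complex / dof_complex)
    # chance probability of such an improvement in chi-square
    return stats.f.sf(f_stat, dof_simple - dof_complex, dof_complex)

# kT_bb versus R_{1,2}: constant (24.6/13 d.o.f.) versus line (13.8/12 d.o.f.)
print(round(ftest_probability(24.6, 13, 13.8, 12), 3))   # ~0.01
```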
Time-resolved burst spectra
We adopted both the standard approach and the f_a method to fit the time-resolved burst spectra. An example (burst #5) of the best-fitting parameters using the standard approach is shown in the left panel of Figure 5. From the top to the third panel, we display the blackbody temperature, blackbody radius and bolometric flux as a function of time. We used a 0.5-s bin light curve in the 2−60 keV range in the fourth panel. The double-peaked structures occur simultaneously in the X-ray and bolometric light curves, which indicates that the double-peaked profiles during the X-ray bursts are not due to a passband effect of the instrument. The time-resolved temperature shows two local maxima, as is also the case in the bolometric flux curve; however, the peaking time in temperature is different from the peaking time in the bolometric flux curve. After initially growing, the radius continues increasing following a dip, and then remains more or less constant. The double-peaked structures are also apparent in the bolometric flux of the rest of the Class 1 bursts. We then measured the local peak temperatures of the double-peaked bursts in the standard approach. The peak-temperature ratio, R_kT, represents the ratio of the first peak temperature to the second peak temperature. Figure 6 shows the peak-temperature ratios against the peak-flux ratios in the 14 double-peaked bursts. The vertical and horizontal dashed lines represent R_{1,2} = 1.0 and R_kT = 1.2, respectively. When R_{1,2} > 1, the peak-temperature ratios are always larger than 1.2. In the case R_{1,2} < 1, the first peak temperature can be higher or lower than the second peak temperature and R_kT is always less than 1.2. There appears to be a bimodal distribution in the temperature ratios. Figure 7 shows the first and second peak fluxes in the standard approach against the separation time between the two peaks. The typical PRE peak flux is in the range (6−8) × 10^-8 erg cm^-2 s^-1 in 4U 1636−53 (Lyu et al. 2015). In our present sample, there are no PRE events. There is no significant trend between the first peak flux and the separation time; however, an anti-correlation is present between the second peak flux and the separation time between the two peaks.
We show the fit results for burst #5 with the f_a method in the right-hand panel of Figure 5. As in the standard approach, both the bolometric flux and the X-ray light curve show a double-peaked structure. The evolution of the blackbody temperature and radius in the f_a method is similar to that in the standard approach. The increased blackbody radius in the cooling phase may be explained in two ways. One would be the influence of the persistent emission (accretion geometry) in the soft state (Kajava et al. 2014); the other would be a different chemical composition of the NS atmosphere (Suleimanov et al. 2011; Zhang et al. 2011). The f_a values plotted in the bottom panel are larger than one during the whole burst. Due to the new parameter f_a, the bolometric flux is lower than in the standard approach and has larger error bars. The horizontal solid line in the bottom panels stands for f_a = 1. In general, the reduced χ² distribution in the f_a method is lower than that in the standard approach.
Finally, we investigated the time evolution of f_a for these 16 multi-peaked bursts in Figure 8, using the same order as in Figure 1. The horizontal black dashed lines correspond to f_a = 1. In all multi-peaked bursts, the f_a values vary in the range ∼1−7 during the bursting period, which is similar to the range observed by Worpel et al. (2013) in other bursts (f_a ∼ 2−10). In general, high values of f_a coincide with large flux values during the burst. We studied the correlation between f_a and the bolometric flux (as well as the blackbody temperature and radius) for the 16 multi-peaked bursts using the method of cross-correlation lags (Peterson et al. 1998; Sun et al. 2018). We do not find any obvious correlation between f_a and the burst spectral parameters, and we also do not find any evidence of a relation between the centroid of the cross-correlation function (the time delay between f_a and the bolometric flux) and the double-peaked structure (e.g. the peak-flux ratio).
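A minimal sketch of the cross-correlation analysis mentioned above is given below; the interpolation onto a common time grid, the trial-lag grid and the simplified centroid definition are assumptions made for this illustration (Peterson et al. 1998 define the centroid over points near the CCF peak).

```python
import numpy as np

def ccf_and_centroid(t, fa, flux, lags):
    """Cross-correlate the f_a time series with the bolometric flux for a
    grid of trial lags and return the CCF and a simple centroid (sketch)."""
    fa = (fa - fa.mean()) / fa.std()
    flux = (flux - flux.mean()) / flux.std()
    ccf = []
    for lag in lags:
        # evaluate the flux series shifted by 'lag' on the f_a time grid
        flux_shifted = np.interp(t, t + lag, flux)
        ccf.append(np.mean(fa * flux_shifted))
    ccf = np.array(ccf)
    pos = ccf.clip(min=0.0)
    centroid = np.sum(lags * pos) / np.sum(pos)   # simplified centroid
    return ccf, centroid
```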
The quadruple-peaked X-ray burst
In Figure 9, we show the light curve of the quadruple-peaked X-ray burst. We used two Gaussian functions plus two BURS models to fit the quadruple-peaked burst light-curve data, where BURS represents a model in QDP 1 that describes a burst-like component.
In this model, t is the time and the model value is the count rate in photons per second; each BURS component is specified by a start time and a peak time, together with a normalisation and a decay factor.
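The analytic form of the BURS component is not spelled out here; a commonly used burst-like shape consistent with the parameters just listed (start time, peak time, normalisation and decay factor) is the linear-rise, exponential-decay profile below, which we quote only as an assumed illustration rather than the exact QDP definition:
\[
B(t) =
\begin{cases}
0, & t < T_{\rm s},\\
A\,\dfrac{t - T_{\rm s}}{T_{\rm p} - T_{\rm s}}, & T_{\rm s} \le t \le T_{\rm p},\\
A\,\exp\!\left(-\dfrac{t - T_{\rm p}}{\tau}\right), & t > T_{\rm p},
\end{cases}
\]
where T_s and T_p are the start and peak times, A is the normalisation and τ is the decay constant.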
In the top panel of Figure 9, we show the fitting results, obtaining χ² = 120.64 for 62 d.o.f. In the bottom panel, we show the residuals of the fit in units of the error; the black horizontal line represents (data−fit)/err = 0. To verify the existence of the third peak, we also used three components (one Gaussian function plus two BURS models) to re-fit the light curve, obtaining χ² = 386.79 for 65 d.o.f. The F-test probability for these two fits is 1.1 × 10^-15, which indicates that the third peak is significant. This is the first time that a quadruple-peaked burst is reported in 4U 1636−53.
There are three relatively strong peaks (at ∼3 s, ∼7 s and ∼22 s for the first, second and fourth peaks) in the light curve and one weak peak (at ∼19 s for the third peak). There is a very long separation in time (∼12 s) between the second and the third peak. The separation time between the first and last peak in this quadruple-peaked burst is ∼18 s, which is similar to the separation (∼17 s) in the triple-peaked burst reported by van Paradijs et al. (1986), but about two times longer than that in the triple-peaked burst (∼8 s) in Zhang et al. (2009).
Figure 4. Parameters of the persistent spectrum before the multi-peaked bursts in 4U 1636−53. Left panels: blackbody temperature (kT_bb) and power-law index (Γ) for the model TBabs*(bbodyrad+powerlaw) against the peak-flux ratio (R_{1,2}); right panels: disc blackbody temperature (kT_dbb) and power-law index (Γ) for the model TBabs*(diskbb+powerlaw) against the peak-flux ratio (R_{1,2}). Red dots and blue triangles represent Class 1 and Class 2 bursts, respectively. We use a linear (red line) and a constant (black line) function to fit only the Class 1 burst data.
We investigated the time-resolved spectra of the quadruple-peaked burst with the two models. In the standard approach (the left-hand panel of Figure 10), both the bolometric light curve and the X-ray light curve show four peaks, three pronounced and one small. The total burst fluence is (26.9 ± 3.5) × 10^-8 erg cm^-2 and the bolometric first-peak flux is (2.0 ± 0.6) × 10^-8 erg cm^-2 s^-1. There is a ∼5-s-long waiting period from 13 s to 18 s in the light curve, during which the bolometric flux remains above the persistent emission. The effective temperature profile, after an initial rise, decreases with time and shows three noticeable local maxima. The blackbody radius increases steadily during the whole burst, which is similar to what was observed in the triple-peaked burst in Zhang et al. (2009). In the f_a method (the right-hand panel of Figure 10), both the bolometric flux and the X-ray light curve display four peaks, as in the standard approach. Similar to the double-peaked bursts, the bolometric flux in the f_a method is lower than that of the standard approach because f_a is larger than one during the first two peaks. The evolution of the blackbody radius and the blackbody temperature in the f_a method is similar to that in the standard approach.
DISCUSSION
We analysed all available RXTE data of the LMXB 4U 1636−53, and we found 14 double-peaked bursts with complex profiles, one triple-peaked burst, and one quadruple-peaked burst that had not been reported before. The bolometric-flux light curve of the multi-peaked bursts exhibits a profile similar to that of the X-ray light curve, which indicates that the multi-peaked structures in 4U 1636−53 are not due to a passband instrumental effect (Jaisawal et al. 2019). Between the peaks, the flux of these multi-peaked bursts never drops near or below the pre-burst level, which is different from the triple bursts in Boirin et al. (2007) and Beri et al. (2019).
We find a marginal positive correlation between the peak-separation time and the peak-flux ratio in Class 1 bursts (bottom panel of Figure 3). This suggests that double-peaked bursts with a high peak-flux ratio have a longer peak-separation time. This phenomenon may be explained if the burst consumes less fuel during the first peak, such that it is easier to subsequently release the remaining fuel to trigger the second peak. Therefore, the bursts with a higher peak-flux ratio need more time to accumulate fuel to release the second-peak energy. In the bottom panel of Figure 7, we also find an anti-correlation between the second peak flux and the separation time between two consecutive peaks. This indicates that the double-peaked bursts with a longer peak-separation time have a weaker second peak. If we assume that mass accretion is negligible during the burst, this anti-correlation indicates that when the separation time is long, there is less fuel for the second peak. These results indicate that a single peak in these double-peaked bursts is not isolated, and that the double-peaked profile is shaped by both the strength and the separation time of the two single peaks.
Most of the multi-peaked bursts in our sample appear during the transition from the hard to the soft state in the CCD, where PRE bursts are also present (Watts & Maurer 2007; Zhang et al. 2009). That the PRE and multi-peaked bursts appear in the same state of the source may provide an important clue to understanding the origin of the multi-peaked bursts, indicating that the appearance of the multi-peaked bursts could be affected by the transition of the source from the hard to the soft state. If the mass accretion rate increases from the upper right to the lower right in the CCD, there is no apparent relation between the mass accretion rate and the parameter R_{1,2}. We have checked the correlation between the persistent flux and the double-peaked burst parameters (the peak ratio R_{1,2} and the peak separation Δt_{12}), but do not find a clear trend between them either. To further investigate whether mass accretion affects the burst light-curve structure, we studied the pre-burst spectra. We find that there is a positive correlation between the peak-flux ratio and the temperature of the thermal component in the pre-burst spectra for the double-peaked bursts, where we use a blackbody model to fit the burst time-resolved spectra. The soft thermal component of the X-ray spectra of accreting NS-LMXBs is generally explained by the emission from the NS surface and the accretion disc. Sanna et al. (2013) analysed the X-ray spectra of six observations of 4U 1636−53 in different spectral states taken with XMM-Newton and RXTE simultaneously. They find that the temperature at the inner disc radius is ∼0.1−0.8 keV, and the temperature at the NS surface is ∼1.4−2.0 keV. We used a blackbody or a disc blackbody to describe the enhancement of the thermal component. The emission from the NS surface and from the inner region of the accretion disc are degenerate in our persistent spectrum. The high blackbody (or disc blackbody) temperature could be due to an enhanced accretion rate in the disc or an increased temperature of the NS surface.
Figure 6. The peak-temperature ratio (R_kT) against the peak-flux ratio (R_{1,2}) for the double-peaked bursts in 4U 1636−53, obtained from fits using the standard procedure to fit time-resolved spectra of X-ray bursts. The vertical and horizontal dashed lines correspond to R_{1,2} = 1.0 and R_kT = 1.2, respectively. We note that the local maxima of temperature and flux do not necessarily occur at the same time.
Since the above discussion indicates that the double-peaked properties are not affected by the accretion rate, we suggest that the double-peaked bursts with a high peak-flux ratio might appear when the NS surface temperature is high.
To date, there are several models to explain the unusual double-peaked structure in X-ray bursts. One of them is the thermonuclear flame-spreading model (Bhattacharyya & Strohmayer 2006a,b), which succeeded in explaining the large dip in the X-ray light curve and in reproducing the spectral evolution of double-peaked bursts. Cooper & Narayan (2007) suggested that the ignition latitude has a positive correlation with the accretion rate. Combining these ideas with the results in Figure 4, and assuming that the temperature of the NS is correlated with the accretion rate, the high peak-flux ratio of double-peaked bursts would correspond to high-latitude ignition (Bhattacharyya & Strohmayer 2006a,b).
Alternatively, Lampe et al. (2016) investigated the possibility of a nuclear origin of the double-peaked bursts, based on the model in which nuclear waiting points in the rp-process explain the double-peaked profile (Fisker et al. 2004). They find that, for certain metallicities and low accretion rates, there is an anti-correlation between the peak-flux ratio (R_{1,2}) and the accretion rate. We do not find that the peak-flux ratio of the double-peaked bursts decreases as the source moves in the CCD (see Figure 2) from the top right to the bottom right, as the accretion rate gradually increases. If the temperature before the burst is proportional to the mass accretion rate, the relation between the peak-flux ratio and the accretion rate (see Figure 4) does not agree with the simulations of Lampe et al. (2016). In their simulations, as the accretion rate increases, the double-peaked structure shows two distinct stages, different from the large dip in the bolometric flux profiles of our sample (see Figure 1).
The newly discovered quadruple-peaked burst in 4U 1636−53 poses a problem for the thermonuclear flame-spreading model of Bhattacharyya & Strohmayer (2006a,b). If this model is applied to explain the four peaks, the burning front not only needs to stall three times, but it also needs a longer "waiting" period between the second and third peaks than the separation time in the double-peaked bursts. Similar to normal single-peaked bursts, we find that the f_a value is large when the flux is high during the multi-peaked bursts. We investigated the relation between f_a and the burst spectral parameters, but do not find any obvious correlation between them. The enhanced f_a can be interpreted as a change in the accretion rate due to Poynting-Robertson drag (Walker 1992). However, it is difficult to explain the dip between the peaks by a reduction of the accretion rate.
The double-peaked structure could be due to the influence of the variation of the accretion geometry, but we should observe more double-peaked bursts to test this hypothesis. We note also that it is hard to explain the existence of triple or quadruple-peaked profiles with the variation of the geometry.
SUMMARY
We found 16 bursts with multi-peaked structure by investigating 336 type I X-ray bursts in the LMXB 4U 1636−53 with RXTE. Our sample contains 14 double-peaked, one triple-peaked and one quadruple-peaked burst; the latter had never been reported before.
(i) Most of the multi-peaked bursts in our sample appear during the transition from the hard to the soft state in the CCD.
(ii) We find that double-peaked bursts with high peak-flux ratio appear when the pre-burst temperature is high; the high NS temperature may be due to enhanced accretion rate on to the NS surface.
(iii) We find an anti-correlation between the second peak flux and the peak-separation time for double-peaked bursts.
(iv) The quadruple-peaked burst shows a long separation time between the second and the third peak which is difficult to explain with current models.
(v) We use the f_a method to re-analyse these 16 bursts and we find no evidence that the multi-peaked structure is due to enhanced accretion during the bursts.
APPENDIX A: DOUBLE-PEAKED EVENTS IN OUR SAMPLE
Some bursts appear to be flat after the onset, followed by a clear peak, such as bursts #6 and #8. In order to verify the significance of the double-peaked structure of these bursts, we fitted the data around the peaking times with either one or two Gaussian functions.
For burst #6, in the top-left panel of Figure A1 we show the fitting results with two Gaussian functions, yielding χ² = 113.3 for 17 d.o.f., and in the top-right panel of Figure A1 we show the fitting results with one Gaussian function, yielding χ² = 304.8 for 20 d.o.f. The F-test probability for these two fits is 0.0006, which indicates that the double-peaked structure is significant.
For burst #8, in the bottom-left panel of Figure A1 we show the fitting results with two Gaussian functions, yielding χ² = 59.8 for 21 d.o.f., and in the bottom-right panel of Figure A1 we show the fitting results with one Gaussian function, yielding χ² = 241.9 for 24 d.o.f. The F-test probability for these two fits is 1.4 × 10^-6, which also indicates that the double-peaked structure is significant. This paper has been typeset from a TeX/LaTeX file prepared by the author. Figure A1. The light curves of bursts #6 and #8 of 4U 1636−53. The data around the peaking times of these two bursts can be well described by two Gaussian functions. | 9,524.2 | 2020-11-05T00:00:00.000 | [
"Physics"
] |
Recent advances in exact solutions of pairing models
Following a brief overview of the subject of Exactly Solvable Pairing Models, we describe two recent developments in the field that could have a future impact on nuclear structure theory. One concerns a recent extension to include the continuum, and the second concerns the development of the hyperbolic pairing model as an exactly solvable approximation to Gogny pairing.
Introduction
In recent years, there has been great interest in the exact solution of the BCS pairing hamiltonian. Originally discovered by Richardson in the early 60s, this subject was rediscovered some 40 years later in the context of ultrasmall superconducting grains [1], to appropriately describe the crossover from superconductivity to a normal metal as a function of the grain size. Since then, there has been a flurry of activity in the field, from the extension of the Richardson exact solution to several families of exactly-solvable models, now called the Richardson-Gaudin (RG) models [2], to the application of these models in many different areas of quantum many-body physics. The current status of the subject, with particular emphasis on its role in nuclear structure physics, was recently reviewed [3] as a chapter of the book "Fifty Years of Nuclear BCS". Here we focus on two recent developments in the field, developments that could perhaps prove useful in the ongoing program to develop a unified microscopic theory of finite nuclei.
The outline of the presentation is as follows. In Section 2, we briefly review Richardson's solution of the BCS pairing hamiltonian and then in Section 3 discuss its extension by Id Betan [4,5] to properly include the continuum. In Section 4, we briefly describe the two families of Richardson-Gaudin models associated with the SU(2) algebra and then in Section 5 focus on a recent application [6] of the hyperbolic model as an exactly solvable approximation to Gogny pairing. Finally, Section 6 provides a brief summary of the key points of the presentation.
Richardson's solution of the BCS pairing hamiltonian
We focus on a pairing hamiltonian with constant strength G acting in a space of doubly-degenerate time-reversed states (k, k̄), where ε_k are the single-particle energies of the doubly-degenerate orbits k, k̄. Cooper [7] considered the addition of a pair of fermions with an attractive pairing interaction on top of an inert Fermi sea (FS) under the influence of this hamiltonian, showing that the eigenstate is a single collective pair with E the energy eigenvalue. Richardson [8,9] proposed an ansatz for the exact solution of this hamiltonian based closely on Cooper's original idea. For a system with 2M + ν particles, with ν of them unpaired, his ansatz involves a product of M collective pair operators B†_α acting on the unpaired state, with each pair operator having the form found by Cooper for the one-pair problem. Here L is the number of doubly-degenerate single-particle levels and |ν⟩ is a state of ν unpaired fermions (ν = Σ_k ν_k, with ν_k = 1 or 0), defined by c_k c_k̄ |ν⟩ = 0 and n_k |ν⟩ = ν_k |ν⟩.
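For reference, the constant-G pairing hamiltonian and the collective pair operators referred to above are commonly written, in a standard notation that we quote here as a sketch rather than as the authors' exact equations, as
\[
H_P = \sum_{k} \varepsilon_k \left(c^{\dagger}_{k} c_{k} + c^{\dagger}_{\bar k} c_{\bar k}\right)
      - G \sum_{k k'} c^{\dagger}_{k} c^{\dagger}_{\bar k}\, c_{\bar k'} c_{k'},
\qquad
B^{\dagger}_{\alpha} = \sum_{k=1}^{L} \frac{c^{\dagger}_{k} c^{\dagger}_{\bar k}}{2\varepsilon_k - E_{\alpha}},
\]
with the Richardson ansatz taking the form |Ψ⟩ = Π_{α=1..M} B†_α |ν⟩.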
In the one-pair problem, the quantity E_1 entering the pair operator is the eigenvalue of the pairing hamiltonian, as shown by Cooper. In the M-pair problem, Richardson proposed to use the M quantities E_α (called the pair energies) as parameters chosen to fulfil (if possible) the eigenvalue equation H_P |Ψ⟩ = E |Ψ⟩. He showed that the ansatz is indeed an exact eigenstate of the pairing hamiltonian if these pair energies satisfy a set of M non-linear coupled equations, which are now called the Richardson equations. In these equations the second term represents the interaction between the particles in a given pair, whereas the third term represents the interaction between pairs. The eigenvalues of H associated with a given set of pair energies are given by the sum of the pair energies plus a contribution from the unpaired particles. Each independent solution of the set of Richardson equations defines a set of M pair energies that completely defines a particular eigenstate. All eigenstates of the pairing hamiltonian can be obtained in this way, both for systems with an odd and with an even number of particles. The ground state is the energetically lowest solution in the ν = 0 or ν = 1 sector, depending on whether the system has an even or an odd number of particles, respectively.
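In a common convention (quoted here as a standard reference form, not necessarily with the authors' exact normalisation), the Richardson equations and the corresponding eigenvalues read
\[
1 \;-\; G \sum_{k=1}^{L} \frac{1 - \nu_k}{2\varepsilon_k - E_{\alpha}}
  \;+\; 2G \sum_{\beta \neq \alpha} \frac{1}{E_{\alpha} - E_{\beta}} \;=\; 0,
\qquad \alpha = 1, \dots, M,
\]
\[
E \;=\; \sum_{\alpha=1}^{M} E_{\alpha} \;+\; \sum_{k} \nu_k\, \varepsilon_k ,
\]
so that the three terms in the first equation correspond, respectively, to the constant, pair-internal and pair-pair contributions mentioned in the text.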
There are a few points worth noting here. First, if one of the pair energies E_α is complex, then its complex conjugate E*_α is also a solution, as required for |Ψ⟩ to preserve the time-reversal invariance of the hamiltonian. Second, from the structure of the Richardson pair operator, we see that a pair energy that is close to the energy of a particular unperturbed pair, 2ε_k, is dominated by this configuration, so that the associated pair is uncorrelated. In contrast, if a pair energy lies sufficiently far from any 2ε_k in the complex plane, the resulting pair will be highly correlated. Lastly, the set of coupled non-linear Richardson equations can be solved numerically, and efficient algorithms for doing so have been developed (see e.g. [10]).
Extension to the continuum
Richardson's solution focused on pairing over a set of discrete states. In nuclear physics, a realistic single-particle spectrum includes both discrete states and continuum states. The latter are especially important when they are sufficiently close to the Fermi energy that pair scattering into them matters, or when the Fermi level is itself in the continuum. This is the case both for weakly-bound systems and for unbound systems. A description of such systems requires an explicit treatment of pair scattering into the continuum.
The first effort to include pair scattering to the continuum within a Richardson approach was reported by Hasegawa and Kaneko [11]. In that work, however, only the effect of resonances was considered. As a consequence, their calculations produced complex energies even for bound states of the system.
As is well known, a proper treatment of the continuum should treat not only resonances but also the background states obtained from contours in the complex plane that enclose all the resonances included. The first work that treated the full continuum was reported recently by Id Betan in two papers [4,5]. Here we briefly review the key points of this work and summarize the key results.
In the presence of the continuum, Richardson's equations are modified so that the discrete sum over levels is supplemented by an integral over the continuum, where d_k is the occupation of level k, including blocking effects, and g(ε) is the Continuum Single-Particle Level Density (CSPLD).
In a proper treatment of the continuum, the CSPLD should include contributions both from resonances (res) and from background (bkgrd) states. In contrast to the earlier work of ref. [11], Id Betan included both contributions, viz. g(ε) = g_res(ε) + g_bkgrd(ε).
The resonant contributions can be modeled in the usual Breit-Wigner form. The non-resonant (background) contribution can be treated by rotating the integration contour of the resonant part to the imaginary axis and then using the Cauchy theorem. More details can be found in ref. [5].
The most recent paper [5] used this formalism to treat a nuclear chain that includes both bound and unbound systems, namely the even-A carbon isotopes up to 28C. When the system is bound, the pair energies that contribute to the ground state emerged in complex-conjugate pairs, thus preserving the real nature of the ground-state energy. This can be seen clearly from Figure 3 of ref. [5], which gives the pair energies for 22C, the last of the bound even-C isotopes.
Once the system becomes unbound, however, this ceases to be the case. Now the pair energies that contribute to the ground state do not occur in complex-conjugate pairs. More specifically, as can be seen from Figure 11 of ref. [5], where the pair energies for 28C are displayed, there is a series of pair energies for which the real part is negative, and these occur in nearly complex-conjugate pairs. These are indeed the pair energies that are analogous to those for bound 22C. The remaining pair energies all have positive real parts and are typically far from being in complex-conjugate pairs. From these results, we see how complex energies, and thus widths, arise for the states of an unbound system within a Richardson approach that includes the true continuum.
Generalization to the Richardson-Gaudin class of integrable models
Here we discuss how to generalize the exactly-solvable BCS pairing model to a wider variety of exactly-solvable models, the so-called Richardson-Gaudin models [12], all of which are based on the SU(2) algebra. We begin by introducing the generators of SU(2) for each level j. Here a†_jm creates a fermion in the single-particle state jm, the state j m̄ is the time reverse of jm, and Ω_j = j + 1/2 is the pair degeneracy of orbit j. These operators fulfil the usual SU(2) commutation relations. We next consider the most general set of L Hermitian and number-conserving operators R_i that can be built up from the generators of SU(2), with linear and quadratic terms only. It turns out that there are essentially two sets of conditions on the matrices X and Y under which the set of R operators commute and are complete. The two families of solutions associated with these conditions are referred to as the rational and hyperbolic families, respectively.
i. The rational family; ii. The hyperbolic family. In both families, the parameters η_i are a set of L free real numbers. A third family, called the trigonometric family, was shown in [12] to be equivalent to the hyperbolic family, since one can pass from the hyperbolic to the trigonometric family by simply replacing real η's by imaginary ones. The traditional pairing model is an example of the rational family, as it can be obtained as a linear combination of the integrals of motion; but it is not the only one. Any hamiltonian that can be expressed as a linear combination of the R_i operators, whether of the rational family or of the hyperbolic family, can be solved exactly using the same methods as were used by Richardson for the pure pairing model. In all cases, one is led to a system of coupled non-linear equations, whose solutions can be used to generate all eigensolutions of that integrable hamiltonian.
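Schematically, and in one common parametrization (reproduced here for orientation only; conventions and normalisations differ between papers, so the precise factors should be checked against ref. [12]), the integrals of motion and the two families read
\[
R_i = S^{z}_{i} + 2g \sum_{j \neq i} \left[ \frac{X_{ij}}{2}\left(S^{+}_{i} S^{-}_{j} + S^{-}_{i} S^{+}_{j}\right) + Y_{ij}\, S^{z}_{i} S^{z}_{j} \right],
\]
with
\[
\text{rational:}\quad X_{ij} = Y_{ij} = \frac{1}{\eta_i - \eta_j},
\qquad
\text{hyperbolic:}\quad X_{ij} = \frac{1}{\sinh(\eta_i - \eta_j)},\;\; Y_{ij} = \coth(\eta_i - \eta_j),
\]
the trigonometric family being obtained from the hyperbolic one by the replacement η → iη.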
Application of the hyperbolic model in nuclear physics
We now discuss the one application in nuclear physics reported for the hyperbolic class of SU(2) RG models [6]. What was shown in this work is that the hyperbolic model gives rise to a separable pairing hamiltonian with two free parameters which, after an appropriate choice, gives a good approximation to Gogny pairing but which can be solved exactly.
The hamiltonian for the hyperbolic model we will discuss can be obtained as a simple linear combination of the hyperbolic integrals of motion, H = λ Σ_i η_i R_i, where G = 2λγ is a free parameter. If we define η_i = 2(ε_i − α), where ε_i is the single-particle energy of level i and α plays the role of an interaction cutoff, and if we make use of the pair representation of the SU(2) generators, we are led to a separable pairing hamiltonian. By introducing the interaction cutoff α, we are thus led to a separable and exactly-solvable pairing hamiltonian with state-dependent pairing strengths that do not grow unphysically as the energy increases.
This separable pairing hamiltonian can be solved exactly using the RG approach. The resulting energies are expressed in terms of the pair energies E_β, which are solutions of a set of non-linear coupled Richardson-like equations in which Q = 1/(2G) − L/2 + M − 1. Each solution of this set of equations defines a unique eigenstate of the hyperbolic hamiltonian.
To see whether this model hamiltonian is useful, the authors of Ref. [6] applied it to two well-deformed nuclei, 238U and 154Sm. Here we discuss their results and conclusions for 238U.
To carry out the analysis, they first took the single-particle energies ε_i from a Gogny HFB calculation for this nucleus and then, using a BCS treatment of the hyperbolic hamiltonian (straightforward because of the separable character of the interaction), fitted the two parameters α and G to the gaps and pairing tensors of the Gogny HFB calculation. An optimal fit was obtained with the choice of parameters α = 25.25 MeV and G = 2 × 10^-3 MeV. A comparison of the results of the Gogny HFB calculation and the BCS results for the hyperbolic hamiltonian with this choice of parameters is shown in Fig. 1. Note that the hyperbolic model reproduces Gogny's fall-off in Δ_i with increasing energy, in contrast to constant-G pairing, for which Δ_i would remain constant. Overall, the hyperbolic model, when treated in mean field, gives a very good description of the corresponding mean-field physics of Gogny.
What is perhaps more important, however, is that this hamiltonian is exactly solvable, so that it can model in an approximate way the physics of Gogny beyond mean field. This too was studied in Ref. [6]. The calculations were carried out in an energy window that extended 30 MeV above and below the Fermi surface. For such a window of energies, the full space involves 148 single-particle levels for the 46 active proton pairs, for which the exact shell-model space contains 4.8 × 10^38 states. Diagonalization in such a space is clearly prohibitive. In contrast, the exact solution of the hyperbolic hamiltonian requires the quite tractable solution of 46 coupled non-linear equations. When these calculations were carried out, it was found that the exact solution of the hyperbolic hamiltonian gives roughly 2 MeV more correlation energy than the BCS solution, suggesting the importance of going beyond mean field. As noted earlier, calculations were also carried out for 154Sm and can in principle be carried out systematically to treat pairing in heavy nuclei beyond mean field.
Summary and concluding remarks
Following a brief review of Richardson's exact solution of the pairing model and its generalization to a wider class of exactly-solvable RG models, we discussed two recent advances that have the potential of impacting current efforts to develop a unified microscopic theory of atomic nuclei. One was the recent extension by Id Betan of Richardson's solution to the continuum, where we showed that when the continuum is treated appropriately the proper behavior is achieved on transitioning from bound to unbound systems. This could prove useful in the description of very weakly bound and unbound nuclear systems. The other was a first application of the hyperbolic model to nuclear systems, where it was shown that with appropriate choice of the parameters of the model it provides a very good and exactly-solvable approximation to Gogny pairing. This could prove useful in systematic HF + exact pairing calculations of heavy nuclei. | 3,500 | 2015-01-22T00:00:00.000 | [
"Mathematics"
] |
Quantification of electron accumulation at grain boundaries in perovskite polycrystalline films by correlative infrared-spectroscopic nanoimaging and Kelvin probe force microscopy
Organic-inorganic halide perovskites are emerging materials for photovoltaic applications with certified power conversion efficiencies (PCEs) over 25%. Generally, the microstructures of the perovskite materials are critical to the PCE. However, whether the nanometer-sized grain boundaries (GBs) that universally exist in polycrystalline perovskite films are benign or detrimental to solar cell performance still remains controversial. Thus, nanometer-resolved quantification of the charge-carrier distribution to elucidate the role of GBs is highly desirable. Here, we employ correlative infrared-spectroscopic nanoimaging by scattering-type scanning near-field optical microscopy with 20 nm spatial resolution and Kelvin probe force microscopy to quantify the density of electrons accumulated at the GBs in perovskite polycrystalline thin films. It is found that the electron accumulation is enhanced at the GBs and that the electron density increases from 6 × 10^19 cm^-3 in the dark to 8 × 10^19 cm^-3 under 10 min of illumination with 532 nm light. Our results reveal that the electron accumulation is enhanced at the GBs, especially under light illumination, featuring downward band bending toward the GBs, which would assist in electron-hole separation and thus be benign to the solar cell performance. Correlative infrared-spectroscopic nanoimaging by scattering-type scanning near-field optical microscopy and Kelvin probe force microscopy quantitatively reveals the accumulated electrons at GBs in perovskite polycrystalline thin films.
Introduction
Organic-inorganic halide perovskites (e.g., CH3NH3PbX3, X = Cl, Br, I), featuring a large absorption coefficient, high carrier mobility, and long diffusion length [1-5], have become emerging materials for solar cells, with the power conversion efficiency (PCE) rapidly boosted from 3.8% in 2009 [6] to a recently certified value of over 25% [7]. Though different synthetic methods, such as spin coating [8-10], thermal co-evaporation [11], and vapor-assisted deposition [12], have been developed to pursue good crystalline morphology with high uniformity, the inherent polycrystalline nature of perovskite active layers leads to the unavoidable existence of a large number of grain boundaries (GBs), similar to inorganic solar cell materials [13-15]. Understanding the role of the GBs of polycrystalline perovskite thin films in solar cell performance is crucial for the rational design of perovskite active-layer structures.
However, the issue of whether the GBs in a perovskite polycrystalline thin-film solar cell are electrically benign or detrimental to solar cell performance has triggered many theoretical and experimental studies. First-principles calculations suggest that, while the intrinsic GBs in inorganic solar-cell thin films such as GaAs and Cu(In,Ga)Se2 generate deep-level states in the band gap and are therefore harmful for the device performance [16], the GBs in CH3NH3PbI3 films with shallow point defects are electrically benign and are beneficial to the PCE of perovskite solar cells [17,18]. However, this result is contrary to the conclusions obtained by nonadiabatic molecular dynamics studies combined with time-domain density functional theory calculations, namely that GBs have a negative influence owing to accelerated electron-hole recombination in CH3NH3PbI3 [19,20]. First-principles calculations also show that the enhanced structural relaxation of the defects at GBs results in the accumulation of deep traps (faster recombination) [21]. Experimental observations are also contradictory. For example, it has been demonstrated by nanoscale imaging techniques such as Kelvin probe force microscopy (KPFM) [22-25] that GBs bear a higher surface potential with a smaller work function. A classical model [26] of interfacial states suggests that, owing to the downward band bending toward the GBs, a barrier is formed with the built-in potential. As a result, the GBs would repel holes and attract electrons, which is expected to increase the minority-carrier (electron) collection at the GBs and can be beneficial to the photovoltaic performance. Moreover, spatially resolved imaging of photocarrier generation by scanning tunneling microscopy also indicates that efficient charge separation occurs at the heterointerfaces of grains [27]. However, this conclusion contradicts the observation that large grain sizes with reduced GBs give better solar cell performance, as reported in the literature [28-30], which implies that GBs accelerate electron-hole recombination and thus degrade the optoelectronic properties of the film. In addition, correlative microscopy measurements with scanning electron analytical techniques further suggest that the presence of surface trap states at grain junctions would limit the device performance [31]. In view of the debates in the current reports, it is necessary to quantitatively measure the local free-carrier density to deeply understand the role of GBs [32].
KPFM and conductive atomic force microscopy (c-AFM) are widely used to provide information about the surface potential and the integral current, respectively, at perovskite GBs at the nanoscale, but they fail to directly quantify the distribution of the free-carrier density. On the other hand, based on the Drude-like free-carrier absorption in the mid-infrared (IR), which occurs when the carrier density of a sample is ~10^19–10^20 cm^-3 [33-37], scattering-type scanning near-field optical microscopy (s-SNOM) has been applied to quantify the carrier-density distributions of highly p-doped poly-Si [33,34,36,38], InP nanowires [35], zinc oxide nanowires [37], and doped SrTiO3 ceramics [39]. In s-SNOM, a metal-coated AFM tip, under the illumination of a focused IR laser beam, is used to concentrate the IR light into a nanoscale region underneath the tip apex, strengthening the near-field interaction between the nanoscale IR light and the sample. Near-field amplitude and phase signals are then acquired by demodulating the backscattered light from the metallized AFM tip working in tapping mode [36]. Analyzing the amplitude and phase signals based on the Drude model allows the carrier-density distribution to be quantified.
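The Drude description underlying this free-carrier contrast can be summarised in the standard textbook form below; the high-frequency permittivity, effective mass and damping rate appropriate to CH3NH3PbI3 would be assumptions in any quantitative use of these expressions:
\[
\varepsilon(\omega) = \varepsilon_{\infty} - \frac{\omega_{p}^{2}}{\omega\,(\omega + i\gamma)},
\qquad
\omega_{p}^{2} = \frac{n e^{2}}{\varepsilon_{0}\, m^{*}},
\]
so that a carrier density n of order 10^19–10^20 cm^-3 places the plasma frequency, and hence the strongest free-carrier response, in the mid-IR range probed here.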
Here, we employ the s-SNOM imaging method to quantify the spatial distribution of carrier densities at GBs and intragrains (IGs) in a polycrystalline perovskite thin film. Taking CH3NH3PbI3 as an example, larger near-field amplitudes were measured at GBs than at IGs, revealing a higher carrier density at the GBs. Quantitative analysis of the enhanced near-field amplitude under 532 nm laser illumination further shows that the density of carriers accumulated at the GBs in the perovskite layer increases from 6 × 10^19 cm^-3 to 8 × 10^19 cm^-3. Correlative nanoimaging by s-SNOM and KPFM further shows that larger near-field amplitudes and higher surface potentials are localized at the GBs, suggesting the accumulation of electrons due to the downward band bending at the GBs in perovskite polycrystalline films. The electron-accumulation behavior of GBs in perovskite active layers can assist in electron-hole separation, which is benign to the solar cell performance.
Broadband s-SNOM image
First, near-field imaging was performed to qualitatively analyze the distribution of carriers in the perovskite film by s-SNOM with a broadband laser source that covers the wavenumber range from 650 to 1400 cm^-1. The AFM topography image (Fig. 1a) of a polycrystalline film surface of CH3NH3PbI3 on FTO/glass (CH3NH3PbI3/FTO/glass) shows that the sizes of the grains range from 100 to 200 nm with height variations of ~20 nm (see the line profiles shown in Fig. 1c). The simultaneously acquired infrared near-field amplitude image in Fig. 1b, featuring 20 nm spatial resolution as analyzed in Figure S1, exhibits a strong contrast between GBs and IGs: GBs appear brighter with larger amplitudes, whereas IGs are darker with smaller amplitudes. Such a contrast can be further analyzed by correlating the surface topography (Fig. 1a) with the corresponding one-dimensional near-field amplitude (Fig. 1b) along the black dashed line. As shown in Fig. 1c, the near-field amplitudes show strong contrast between the GBs and IGs. However, they also have additional correlations, though weak, with the surface topography, which can be seen from the anticorrelation between lg(IR intensity) and lg(GB gap size) shown in Figure S2 [40,41]. Thus, it is necessary to clarify whether the larger infrared signals at GBs originate from the enhanced carrier distribution or from a topography-induced infrared image difference.
To evaluate the possibility of a topography-induced enhancement of the infrared signals at GBs, we compared the variation of the near-field amplitude in the depression region (marked with a red circle in Fig. 1a) with those in other regions. As shown in Fig. 1c, although the depression region is ~30 nm lower than the nearby regions, the near-field amplitudes remain comparable in these regions. The ratios of the variations of height and near-field amplitude at various GBs are not fixed (Figure S3b), which means that there is no one-to-one relationship between the height and the near-field signal [36]. A further analysis of another line profile of the correlative one-dimensional near-field amplitude and topography is shown in Figure S4. Therefore, the larger infrared signals at GBs do not derive solely from a topographic effect. In fact, the infrared near-field amplitude in s-SNOM is related to the near-field interaction between the localized infrared field and the free carriers (plasmons) in the sample [33,36,42-44]. As a result, the carrier accumulation at the GBs would result in the large near-field amplitude at the GBs of the perovskite films [23,25,27]. Moreover, the near-field amplitude signals at the grain boundaries increase when the visible light is turned on (see Fig. 2h below), which is associated with an increase in carrier density. Thus, the illumination experiment further confirms that carrier accumulation contributes to the infrared image contrast.
It has been reported that the chemical composition signatures of perovskites could also contribute to the near-field spectra [45]. However, we did not observe the specific absorption peak of MA+, which might be owing to the weaker infrared response of the N-H bending in MA+ [46] compared with the C-N stretching in FA+ [45]. The interference of chemical composition signatures is therefore probably minimized in this study. In addition, ions could accumulate at the interfaces under an externally applied bias [47]. However, no external bias was applied in this study. Thus, neither chemical-signature moieties nor ionic accumulation in the MAPbI3 films should contribute to the near-field signals.
Broadband s-SNOM image under 532 nm laser illumination
The carrier distribution at the surface of the perovskite film can be tuned by visible illumination [24,25,27,43,48]. Infrared near-field imaging of the perovskite film under light illumination was therefore conducted to probe the density and spatial distribution of photocarriers at GBs and IGs in the film. As shown in Fig. 2b and Figure S5, a visible laser centered at 532 nm was used to excite more free carriers in the perovskite film. The AFM topography in Fig. 2d (under 10 min of illumination) remains almost unchanged compared with the one recorded in the dark in Fig. 2c. However, the infrared near-field amplitudes at the GBs are clearly enhanced under illumination (Fig. 2f) compared with the dark condition (Figure S6). In contrast, the amplitude signals at the IGs recorded at positions I, II, and III are barely influenced. The significantly enhanced near-field amplitude at the GBs indicates that more carriers are accumulated and trapped at the GBs under the 532 nm laser illumination. It is noted that the enhanced carrier density under illumination is temporary (Figure S7), and time-dependent measurements show that the s-SNOM signal intensity of the illuminated sample increases with prolonged illumination time (Figure S8).
Near-field spectra of perovskite polycrystalline film at GBs and IGs
To quantify the carrier accumulation at the GBs under the 532 nm laser illumination, broadband near-field spectra were recorded, as shown in Fig. 3. The near-field spectra were taken at positions IG (A) and GB (B). The amplitude spectra at the GB (blue open circles) decrease gradually as the wavenumber increases from 650 to 1400 cm^-1, and the spectra at the IG (gray open squares) show weaker intensities than those at the GB. The near-field spectra can be assigned to the near-field interaction between the tip and the free carriers (plasmons) in the perovskite film [36]. The free-carrier absorption below 1500 cm^-1 has been directly demonstrated by time-resolved infrared spectroscopy in CH3NH3PbI3 [49]. Moreover, a finite-dipole model of s-SNOM [50] was employed to interpret the near-field spectra, in which the near-field interaction between the probing tip and the perovskite film is described by their dielectric functions, ε_Au and ε_CH3NH3PbI3, respectively (see details of the model calculation in Supplementary Note 2). The calculated near-field spectra of the GB (cyan solid lines) and the IG (purple solid lines) in the dark are in agreement with the experimental data, with fitted carrier densities of n = 6 × 10^19 cm^-3 and n = 1 × 10^16 cm^-3, respectively.
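As a simplified illustration of how a carrier density can be extracted from such spectra, the sketch below combines a Drude dielectric function with the point-dipole approximation for the near-field contrast; the actual analysis uses the finite-dipole model of ref. 50, and the tip radius, tapping amplitude, effective mass, damping rate and background permittivity used here are assumptions, not fitted values from this work.

```python
import numpy as np

EPS0, E_CHARGE, M_E = 8.854e-12, 1.602e-19, 9.109e-31   # SI units

def eps_drude(nu_cm, n_cm3, eps_inf=5.0, m_eff=0.2, gamma_cm=500.0):
    """Drude permittivity versus wavenumber (cm^-1) for carrier density
    n (cm^-3); eps_inf, m_eff and gamma_cm are illustrative assumptions."""
    omega = 2 * np.pi * 3e10 * nu_cm
    gamma = 2 * np.pi * 3e10 * gamma_cm
    wp2 = n_cm3 * 1e6 * E_CHARGE**2 / (EPS0 * m_eff * M_E)
    return eps_inf - wp2 / (omega * (omega + 1j * gamma))

def near_field_amplitude(nu_cm, n_cm3, tip_radius=20e-9,
                         tapping_amp=50e-9, harmonic=2):
    """Point-dipole estimate of the demodulated scattering amplitude
    (much cruder than the finite-dipole model used in the paper)."""
    beta = (eps_drude(nu_cm, n_cm3) - 1) / (eps_drude(nu_cm, n_cm3) + 1)
    a = tip_radius
    alpha = 4 * np.pi * a**3                 # polarizability of a small sphere
    phase = np.linspace(0.0, 2 * np.pi, 400)
    z = 0.5 * tapping_amp * (1 + np.cos(phase))      # tip height in one cycle
    alpha_eff = alpha / (1 - alpha * beta / (16 * np.pi * (z + a)**3))
    # demodulate at the chosen harmonic of the tapping motion
    return np.abs(np.trapz(alpha_eff * np.exp(-1j * harmonic * phase), phase))
```

A carrier density could then be estimated by scaling such model spectra to a reference material and adjusting n until the calculated GB/IG contrast matches the measured one.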
The near-field amplitude spectrum of the IG under external illumination with the 532 nm laser is similar to that in the dark (Fig. 3). The invariance of the amplitude at the IGs upon laser illumination can also be seen from the line profiles shown in Fig. 2h. The near-field amplitude spectrum of the GB (red closed circles) increases by ~20% compared with that in the dark. The calculated near-field amplitude spectrum (orange solid line) of the GB under illumination is in good agreement with the experimental data, with a fitted carrier density of n = 8 × 10^19 cm^-3. The spectral calculations show that the carrier density at the GBs increases from 6 × 10^19 cm^-3 in the dark to 8 × 10^19 cm^-3 under 532 nm laser illumination, which indicates the accumulation of carriers at the GBs under external light illumination. In summary, the broadband near-field spectral investigation has provided a quantification of the spatial distribution of carrier densities at the GBs.
Correlative nanoimaging of s-SNOM and KPFM
To gain insights into the relationships between the electrical properties and the spectral information at GBs, integrated KPFM and s-SNOM measurements were performed to acquire the surface potential and the infrared near-field image simultaneously through a single-pass scan 52,53 . Figure 4a displays the topography of the perovskite polycrystalline film with a grain size of 200-500 nm. A clear contrast of the surface potential is observed around the GBs in the contact potential difference (CPD) image (Fig. 4b), which implies a difference in electrical properties. As shown in Fig. 4d, the higher CPD at GBs indicates that a local built-in potential is formed due to the downward band bending at GBs, which is consistent with previous reports indicating electron accumulation at GBs 22,23,25 . This conclusion can also be deduced from the correlative broadband s-SNOM image (Fig. 4c) and line profile (Fig. 4d) of the near-field amplitude, which appears brighter with larger amplitude intensity at GBs. Furthermore, the line profiles (Fig. 4d) also show that the full width at half maximum of the amplitude at the GBs is much narrower than that of the CPD in the KPFM measurement, indicating a much higher spatial resolution of s-SNOM (27 nm) with respect to that of KPFM (57 nm), as analyzed in Figure S9. The correlative broadband s-SNOM and KPFM measurements, which were performed simultaneously, are meaningful on a qualitative level despite the mismatch between the spatial resolutions of the two techniques.
The correlative KPFM and s-SNOM measurements help us unambiguously understand the physical picture at the GBs of the CH 3 NH 3 PbI 3 polycrystalline films. As indicated in Fig. 5a, the local built-in potential attracts electrons to the GBs and repels holes to the IGs, assisting electron-hole separation and thus suppressing recombination. Moreover, it has been reported that the local built-in potential also leads to a polarity inversion in the space-charge region around the GBs, from the inherently p-type grain bulk 30,54 to n-type, with an electron density of 6 × 10 19 cm −3 at the GBs. Our results also support the inverted polarity around the GBs reported for inorganic Cu(In, Ga)Se 2 solar cells 55,56 . Under external visible illumination, photoinduced electrons populating the conduction band further accumulate at the GBs owing to the built-in potential, increasing the electron density from 6 × 10 19 cm −3 in the dark to 8 × 10 19 cm −3 under illumination, as illustrated in Fig. 5b. The physical picture presented in Fig. 5 supports the models proposed in previous reports [22][23][24][25]57 .
Discussion
In conclusion, we have quantified the electron accumulation behavior at the GBs in polycrystalline CH 3 NH 3 PbI 3 perovskite films by employing an infrared near-field imaging technique (s-SNOM) with a spatial resolution of 20 nm. Broadband s-SNOM images show a large near-field amplitude at the GBs, implying a higher carrier density at the GBs in the polycrystalline perovskite layer. Moreover, the broadband s-SNOM images further reveal that the carrier density increases from 6 × 10 19 cm −3 in the dark to 8 × 10 19 cm −3 under 532 nm laser illumination. Correlative nanoimaging of s-SNOM and KPFM further indicates the electron accumulation behavior at the GBs in perovskite active layers, elucidating the relationship between the spectral information and the electrical properties. Our observations by infrared nanoimaging with correlative KPFM are in accordance with previously reported KPFM and c-AFM results, and further confirm that the downward band bending at the GBs assists in electron-hole separation and thus suppresses recombination, which would be beneficial to solar cell performance.
The correlative broadband s-SNOM and KPFM measurements can be extended to other perovskite structures with vibrational fingerprint information in the infrared spectra, such as FAPbI 3 (FA = formamidinium; a main molecular absorption emerges at ~1700 cm −1 , attributed to the antisymmetric C-N stretching of the FA molecule) 45 . In this case, the chemical information of individual grains and their local composition can be measured by broadband s-SNOM spectroscopy. Revealing the relationship between the electrical
Materials and methods
Room-temperature SSE deposition of perovskite polycrystalline films
PbI 2 and CH 3 NH 3 I were purchased from Xi'an Polymer Light Technology Corp. (PLT). All of the reagent-grade chemicals were used as received. CH 3 NH 3 PbI 3 perovskite polycrystalline films were prepared according to the reported solvent-solvent extraction (SSE) method described in the literature 58 . In brief, a 42 wt% solution of PbI 2 and CH 3 NH 3 I (molar ratio 1:1) in 1-methyl-2-pyrrolidinone (NMP) was prepared. A 30 μL aliquot of the 42 wt% CH 3 NH 3 PbI 3 solution was spin-coated onto fluorine-doped tin oxide (FTO) coated glass substrates (TEC 15), and the solution was spun at 4500 rpm for 30 s. The solution-coated substrate was then immediately dipped vertically into a ~50 ml anhydrous diethyl ether (DEE) bath. The substrate was kept immersed until a brown film formed in ~2 min. The substrate was then taken out of the bath and transferred to a hotplate at 150 °C, covered by a petri dish, for 15 min of air annealing. The entire perovskite film fabrication process was performed under ambient conditions with ~35% humidity.
Infrared s-SNOM measurements
A commercial neaSNOM system (neaspec GmbH), based on s-SNOM, was utilized to perform s-SNOM IR imaging 33,[35][36][37] . Standard Pt/Ir probes (Arrow-NCPt, Nanoworld) with a resonance frequency of Ω ≈ 250 kHz were used in the experiments, and the tapping amplitude of the cantilever was ~60 nm. A broadband MIR laser, covering 650 to 1400 cm -1 , is generated by a difference frequency generator (TOPTICA Photonics AG). The MIR laser is focused by a parabolic mirror to concentrate the IR light into a nanoscale region underneath the tip apex. For the broadband s-SNOM images, the reference mirror in the s-SNOM system was kept fixed at position d ≈ 0 (d denotes the optical path difference between the light backscattered from the tip and the light reflected from the reference mirror in the asymmetric Michelson interferometer), which corresponds to the white-light position. At this position, the beam path lengths of the signal and reference arms are equal, which gives the strongest signal because the constructive interference of all spectral components simultaneously maximizes the detector signal intensity. Second-order demodulation was employed for efficient background suppression. Nano-FTIR spectra were obtained by continuously moving the mirror in the reference arm of the Michelson interferometer, recording the resulting interferograms, and computing their complex Fourier transforms.
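The last step described above, converting the recorded interferograms into nano-FTIR spectra via a complex Fourier transform, can be sketched as follows. This is a generic outline assuming uniformly spaced mirror positions and a single demodulation harmonic; the variable names and the synthetic input are illustrative, and instrument-specific processing such as apodization or phase referencing to a gold surface is omitted.

import numpy as np

def nano_ftir_spectrum(interferogram, step_cm):
    """Complex spectrum from a demodulated interferogram sampled at uniform steps
    of optical path difference (step_cm, in cm); returns positive wavenumbers only."""
    sig = interferogram - np.mean(interferogram)      # remove the DC offset
    spectrum = np.fft.fft(sig)
    wavenumber = np.fft.fftfreq(sig.size, d=step_cm)  # cycles per cm = cm^-1
    keep = wavenumber > 0
    return wavenumber[keep], spectrum[keep]

# Synthetic interferogram with two spectral components inside the 650-1400 cm^-1 window
opd = np.arange(2048) * 1e-5                          # optical path difference, cm
demo = np.cos(2 * np.pi * 900 * opd) + 0.5 * np.cos(2 * np.pi * 1250 * opd)
nu, spec = nano_ftir_spectrum(demo, 1e-5)
print(nu[np.argmax(np.abs(spec))])                    # strongest peak lies near 900 cm^-1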
For the illumination experiments, the sample was illuminated at a wavelength of 532 nm (corresponding to a photon energy of 2.33 eV, larger than the bandgap of the perovskite) by a 10 mW solid-state laser. The 532 nm laser spot, focused with the parabolic mirror below the tip position, was elliptical with major and minor axes of (60 ± 5) μm and (40 ± 5) μm, as determined with a microscope. The irradiation power intensity at the sample surface, controlled by a variable neutral-density filter wheel, was estimated to be 200 mW/cm 2 .
KPFM
Correlative s-SNOM and KPFM were also measured by a commercialized neaSNOM system (neaspec, GmbH). We chose PPP-EFM probes (Nanosensors) whose fundamental resonance frequency was 78.5 kHz. The KPFM measurements were performed with a single-pass scan utilizing a frequency-modulation mode. The surface potential responses were extracted from the second eigenresonance of the probe. The feedback d.c. bias voltage was applied to the tip. Thus, e × V CPD = Φ tip − Φ sample , where Φ tip and Φ sample are the work functions of the tip and sample, respectively. | 5,031.6 | 2021-04-15T00:00:00.000 | [
"Materials Science",
"Physics",
"Engineering"
] |
A New Approach to Improve the Voltage Conversion Ratio in Topological Switched-Capacitor DC–DC Converters Using Negator Stage
This brief provides a simple yet novel structure to achieve a voltage conversion ratio (VCR) that exceeds the theoretically attainable VCR of topological switched-capacitor DC-DC converters (SCC). Typically, the SCC terminals are connected to the input voltage, the output voltage, or the ground reference. The proposed structure utilizes the ground-connected terminals and instead connects them to a negative input voltage provided by a negator stage. Here, two types of SCC are used as case studies: series-parallel (SPSC) and Fibonacci (FSC). The model for the proposed structure is constructed and verified experimentally using 3-stage SPSC and FSC converters. The experimental results are in good agreement with the proposed model, with an error of less than 5% in all cases. The VCR obtained from the proposed structure exceeds the theoretical limits of conventional topological structures.
I. INTRODUCTION
THE SWITCHED-CAPACITOR DC-DC converter (SCC) has been very attractive due to its magnetic-less feature, which lends itself to IC integration and high power density [1], [2]. As shown in Fig. 1, the SCC comprises a switch network that can be implemented using transistors or diodes, flying capacitors to pump charge from one stage to another, and a control circuit that drives the switch network. In general, the SCC can be used to buck/boost the input voltage [1], [3]. Considering the way it is synthesized, the SCC can be categorized into two main groups: topological and non-topological converters. The topological SCC is synthesized in a systematic way based on well-established structures such as the Dickson charge pump, series-parallel (SPSC), Fibonacci (FSC), exponential charge pump, and binary SCC [4], [5], [6], [7]. On the other hand, the switch network and the flying capacitors in the non-topological SCC are structured in an ad hoc way [8]. This brief discusses only the topological SCC structures. The theory of SCC has been extensively investigated [9], [10], [11]. As a milestone, the fundamental limit of the SCC is proposed in [10] and further generalized in [11]. The fundamental limit relates the maximum attainable voltage conversion ratio (VCR), VCR = v out /v in , to a certain number of flying capacitors and switches. It shows that the FSC achieves the highest VCR with the minimum component count [10]. The fundamental limit is useful in synthesizing SCC to achieve certain conversions [12], [13]. Nevertheless, for applications that require a high VCR [14], resonant converters that utilize both inductors and capacitors are commonly used. Alternatively, SCC can be used by increasing the number of converter stages. However, cascading SCC stages requires more components that contribute additional power losses [15].
Recently, [16] proposed to utilize a self-generated negative voltage (V NEG ) to improve the transient response of buck converters. While the concept of using a negative voltage improves the buck converter performance, generating the negative voltage requires an inverting buck-boost converter with an inductor and a dedicated controller circuit to set the duty cycle and drive the switches. For example, the loading switches (SE3 and SE4) need to be turned on when V NEG lies between −1 V and −5 V. In addition, the generated V NEG improves the input voltage of the buck converter by only 0.6x as indicated in [16], i.e., V NEG = −1.5 V.
This brief proposes a new yet simple technique to increase the VCR of a topological SCC beyond the theoretical gain limits with a lower component count by utilizing the concept of inserting a negative voltage. In contrast to [16], the negative voltage is generated by a negator stage consisting of one flying capacitor and four switches. The switches are driven by the same converter controller, hence no additional circuitry is required. The negator stage is inserted before the SCC and feeds a negative voltage to some of the SCC terminals, as illustrated in Fig. 2.
This brief is structured as follows: Section II discusses the design of the proposed structure of SCC with high VCR focusing on two topologies: SPSC and FSC. The experimental results of the proposed designs are presented in Section III. Finally, conclusions are drawn in Section IV.
II. PROPOSED DESIGN
The equivalent circuit model of the SCC is shown in Fig. 1. The ideal transformer models the ideal VCR (n = VCR) of the converter, and R eq quantifies the SCC losses, including conduction and switching losses [15]. Using this model, the output voltage can easily be written as V out = nV in − I out R eq . Note that n is the main steady-state parameter that determines the VCR of the converter. Hence, a lossless SCC, i.e., R eq = 0, is assumed first to thoroughly study the effect of the proposed structure on the ideal VCR.
A. Steady-State Analysis Assuming Lossless SCC
In general, conventional SCC topologies can be designed to achieve a high VCR. Nevertheless, the conventional way of synthesizing SCC requires a higher component count. For example, an ideal VCR of five, i.e., v out = 5v in , can be achieved using four stages of SPSC or three stages of FSC, as given by (1) and (2), where k is the number of flying capacitors and F k is the k-th Fibonacci number. Note that (1) and (2) neglect the converter losses and consider only the ideal VCR.
In [12], a way to conceptualize and synthesize SCC using terminal weights is presented. Basically, for a topological SCC such as SPSC or FSC with j terminals, the j th terminal weight (w j ) can be assigned according to the attainable VCR. Note that the terminal weight (w j ) can only be a positive or negative integer, and it is assigned mathematically to satisfy condition (3), where k is the number of flying capacitors used in the converter and k + 2 represents the number of all terminals. Usually, the first terminal of the SCC is assigned a weight equal to the maximum attainable VCR, i.e., w 1 = VCR max . For example, a 4-stage SPSC has a maximum VCR of five and hence w 1 = 5. Furthermore, the weight of the last terminal is always −1, which implies that w k+2 = −1. The other terminals are assigned weights based on the SCC topology [12], [13]. Therefore, (3) can be modified to (4). Considering the multiple-input, multiple-output (MIMO) SCC, the concept of terminal weights can be used to relate the terminal voltages in the converter as in (5), where p is the number of inputs, q is the number of outputs, v g is the input voltage, v o is the output voltage and w is the terminal weight. This brief studies the single-input single-output (SISO) SCC boost operation. Therefore, the input voltage is connected to the first terminal and the output voltage is connected to the last terminal. By combining (4) and (5), the output voltage can be written as (6), where w 1 = VCR max , w j < 0 and v j is the voltage at the j th terminal, which is usually set to zero, i.e., the terminal is grounded, in the conventional SCC topologies. The proposed structure, depicted in Fig. 2, uses a negator to provide a negative input voltage to one or more of the SCC terminals to achieve a higher VCR. If the negative input voltage is connected to all remaining SCC terminals, i.e., v j = −v in , (6) suggests that the output voltage will increase and can be calculated as in (7). Note that the model in (7) assumes that the output voltage of the negator stage equals −v in . Nevertheless, this voltage can be slightly lower due to the conduction and switching losses of the negator stage. Therefore, without loss of generality, the output voltage of the negator, v negator , can be used to compute the output voltage of the converter as in (8). In this brief, SPSC and FSC are chosen, which represent the linear and non-linear SCCs, respectively. Using (7), the output voltages of SPSC and FSC can be written as in (9) and (10). As shown in Fig. 3, the proposed SPSC and FSC outperform their conventional counterparts by achieving a higher VCR. For a low number of stages, the VCRs of the conventional and proposed structures are very close to each other. However, the VCR grows drastically as the number of stages increases for the FSC case. On the other hand, the VCR increases linearly for the SPSC but with a steeper slope compared to the conventional one. In general, the proposed structures increase the VCR by almost (k − 2) for SPSC and by the sum of |F j | over j = 2 to k + 1 for FSC compared to the conventional design counterparts. Note that one can consider the negator as an additional stage; nevertheless, the proposed high-VCR structure still outperforms the conventional SCC. For example, a 4-stage SPSC achieves VCR = 5, while the VCR can be increased to 7 using the proposed structure with the 3-stage SPSC and the negator stage.
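Since the full expressions of (7)-(10) are not reproduced above, the short sketch below assumes a weighted-sum reading of them: the output equals the conventional maximum VCR times v in plus the absolute weights of the formerly grounded terminals times the magnitude of the negator output. The intermediate weights used for the 3-stage SPSC and FSC, (1, 1, 1) and (1, 1, 2), are assumptions; the printed values reproduce the 7V in and 9V in figures quoted in the text, but the exact form of (7) should be taken from the original brief.

def vout_proposed(vin, vcr_max, intermediate_weights, v_negator=None):
    # Assumed reading of Eq. (7)/(8): formerly grounded terminals are driven by the
    # negator output (ideally -vin) and contribute through their absolute weights.
    if v_negator is None:
        v_negator = -vin                       # ideal, lossless negator
    return vcr_max * vin + sum(abs(w) for w in intermediate_weights) * abs(v_negator)

VIN = 1.0
print("3-stage SPSC + negator:", vout_proposed(VIN, 4, (1, 1, 1)))  # 7.0 * Vin
print("3-stage FSC  + negator:", vout_proposed(VIN, 5, (1, 1, 2)))  # 9.0 * Vin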
The proposed high-VCR configuration is verified using the 3-stage SPSC and 3-stage FSC depicted in Fig. 4. The switches are driven using two complementary non-overlapping clocks, S a and S b . To better explain the operation of the proposed converters, consider the steady-state charge in the flying capacitors during the two main phases, charging and pumping, throughout a complete switching cycle.
During the charging phase, which occurs when S a is high and S b is low, all flying capacitors are fully charged. In the pumping phase, when S a is low and S b is high, the charges are pumped to the output terminal. Moreover, the negator is assumed to be in steady state, which implies that the voltage at nodes x and y equals −V in . The equivalent circuits for both phases are shown in Fig. 4(c) & (d) for SPSC and Fig. 4
(e) & (f) for FSC.
Due to capacitor charge balance, the voltage across each capacitor remains effectively the same over one complete switching cycle. Therefore, for the SPSC charging phase shown in Fig. 4(c), V C1 = V C2 = V C3 = 2V in , and during the pumping phase, Fig. 4(d), the output voltage is found to be 7V in . Similarly, for the FSC case, Fig. 4(e) shows that V C1 = 2V in and V C2 = V C3 − 2V in . On the other hand, V C2 = 4V in during the pumping phase, Fig. 4(f), which implies that V C3 = 6V in . Hence, the output voltage of the FSC is calculated as 9V in . Finally, it is worth mentioning that the voltage stress of the capacitors and switches in the proposed designs is increased due to the availability of −V in . For example, the voltage stress of the SPSC flying capacitors, C 1 , C 2 and C 3 , is now 2V in compared to only V in in the conventional SPSC. For high-voltage applications, this increase in voltage stress needs to be studied carefully to avoid additional overhead costs.
B. Efficiency Analysis Assuming Lossy SCC
In the previous subsection, the analysis was carried out assuming a lossless SCC, viz. R eq = 0. Here, an efficiency analysis is carried out to compare the conventional SCC design to the proposed one. In general, the converter efficiency can be quantified as in (11), where P out = I 2 out R L and P loss is the power loss due to conduction and switching, P loss = I 2 out R eq . Therefore, (11) can be simplified to (12). The switching limits and vector analysis can be used to quantify R eq [15]. On the other hand, thorough details of SCC design and analysis using off-the-shelf components can be found in [18]. However, such analysis is beyond the scope of this brief. Instead, the steady-state model shown in Fig. 1 can be utilized to find R eq , as in (13). Therefore, by using R L = V out /I out and substituting (13) in (12), the converter efficiency can be simplified to (14). The conventional and proposed converters are simulated using a SPICE simulator with f s = 100 kHz, analog switches with R on = 10 Ω, and 0.1 μF for all flying and output capacitors. The converter efficiency is calculated based on (14). Fig. 5 shows the efficiency for different loads. It can be seen that the proposed converter outperforms the 4-stage converter in both cases, SPSC and FSC.
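Since (11)-(14) are referenced above without their full expressions, a plausible reconstruction from the quantities defined in this subsection (P out = I out^2 R L , P loss = I out^2 R eq , R L = V out /I out , and R eq taken from the steady-state model V out = nV in − I out R eq ) is, in LaTeX form:

\eta = \frac{P_{out}}{P_{out} + P_{loss}} = \frac{I_{out}^{2}R_{L}}{I_{out}^{2}R_{L} + I_{out}^{2}R_{eq}} = \frac{R_{L}}{R_{L} + R_{eq}}, \qquad R_{eq} = \frac{nV_{in} - V_{out}}{I_{out}} \;\Rightarrow\; \eta = \frac{V_{out}}{nV_{in}}

This form is an inference consistent with the definitions given in the text rather than a verbatim copy of (11)-(14).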
The effective series resistance (ESR) of the capacitors also has some effect on the converter efficiency, especially for high-power applications. Using the SPICE simulator, in Table I the ESR of the 0.1 μF capacitors is swept from 0 Ω, i.e., the ideal case, to 100 Ω with a fixed load resistance of 100 kΩ. Other parameters are kept the same, e.g., f s = 100 kHz and analog switches with R on = 10 Ω. It can be seen that the ESR reduces the converter efficiency only slightly for low ESR values. For the extreme case, the converter efficiency of the conventional SPSC is degraded by 1.73% due to the ESR, viz. from 99.47% in the ideal scenario to 97.74% for an ESR of 100 Ω, while the degradation is about 1.31% for the proposed structure. On the other hand, the difference in converter efficiency for the 4-stage and proposed FSC is about 5.53% and 2.4%, respectively. Therefore, the effect of ESR is smaller in the proposed structure compared to the conventional designs. In general, the ESR can be optimized for high converter efficiency.
III. RESULTS AND DISCUSSION
To test the model proposed in (7), a 3-stage SPSC and a 3-stage FSC are built as shown in Fig. 4 and tested using the experimental setup shown in Fig. 6. The converters are implemented using MC14066B analog switches, 10 μF flying and output capacitors, and a 1 MΩ load. The switches are driven by two non-overlapping complementary clocks, S a and S b , generated from a microcontroller and switching at 10 kHz with 124 ns deadtime, which is verified using a TDS2021C oscilloscope [17]. An EL302T power supply is used to provide the input voltage to the converters. The voltage measured at the output of the negator stage is approximately 35-50 mV lower than the input voltage. The quiescent current of the converter is measured to be around 8.5 μA. Fig. 6 shows one of the experiments for the FSC: the input voltage, negator output voltage, and FSC output voltage are measured using digital multimeters (DMM) and are found to be 0.3 V, −0.26 V, and 2.57 V, respectively. Using (8), the output voltage is expected to be 2.54 V, which corresponds to a 1.17% error. With the same procedure, the experiment is conducted by varying the input voltage between 0.1 and 0.5 V and measuring the output voltage of the proposed SPSC and FSC. Moreover, the error relative to the output voltage expected by the model in (8) is calculated. The results are illustrated in Fig. 7, showing the output voltage versus the input voltage. It can be seen that the experimental results agree well with the model, with an error of less than 5% in all cases.
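As a quick numeric check of the 1.17% figure, the measured input (0.3 V) and negator output (−0.26 V) can be plugged into the same weighted-sum reading of (8) sketched earlier; the intermediate weights (1, 1, 2) for the 3-stage FSC remain an assumption, but the result matches the expected 2.54 V.

# Check of the reported FSC experiment using the assumed reading of Eq. (8).
vin, v_neg, v_meas = 0.3, -0.26, 2.57            # volts, measured values from Fig. 6
v_model = 5 * vin + (1 + 1 + 2) * abs(v_neg)     # expected output: 2.54 V
error_pct = abs(v_meas - v_model) / v_meas * 100
print(f"model: {v_model:.2f} V, error: {error_pct:.2f} %")   # ~1.17 %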
IV. CONCLUSION
In this brief, a new configuration was proposed to achieve a high VCR for topological SCC by adding a negator stage to the converter. A model that predicts the ideal VCR of the proposed structure was presented and verified experimentally using two different topologies: SPSC and FSC. The VCR increased in both cases by adopting the proposed configuration. The proposed structure was studied using topological SCC; however, it can be further investigated for non-topological SCC, which may lead to highly efficient converters. | 3,682 | 2023-04-01T00:00:00.000 | [
"Engineering"
] |
A Novel Greedy Forwarding Mechanism Based on Density, Speed and Direction Parameters for Vanets
In recent years, the study and development of networks that do not depend on any pre-existing infrastructure have become very popular. Vehicular Ad Hoc Networks (VANETs) belong to this class of networks, in which each vehicle participates in routing by transmitting data for other nodes (vehicles). Due to the characteristics of VANETs (e.g., highly dynamic topology, varying communication environments, frequent link breakage), the routing process remains one of the most challenging aspects. Hence, many routing protocols have been suggested to overcome these challenges. Routing protocols based on the position of vehicles are the most popular and preferred class, thanks to their many advantages such as low control overhead and scalability. However, this class suffers from problems such as frequent link breakages caused by the high mobility of vehicles, which lead to low PDR and throughput. In this investigation, we introduce a novel greedy forwarding strategy used to create a new position-based routing protocol that reduces link breakages and finds a stable route, improving the PDR and throughput. The proposed Density and Velocity (Speed, Direction) Aware Greedy Perimeter Stateless Routing protocol (DVA-GPSR) is based on the suggested greedy forwarding technique that utilizes the density, speed, and direction of vehicles to select the most suitable relaying node candidate. The simulation results prove that the DVA-GPSR protocol outperforms classical GPSR in all studied metrics, such as PDR, throughput, and routing overhead ratio, when varying the number of vehicles in urban and highway scenarios. Keywords—VANETs, Routing protocol, GPSR, DVA-GPSR, direction, speed, density.
Introduction
Vehicular ad hoc networks, or VANETs for short, are a kind of self-organized network formed directly by a set of intelligent vehicles. Each vehicle is equipped with a wireless transceiver and acts as a router. VANET features such as frequent link breakage, highly dynamic topology, and the high speed of vehicles make the task of routing data packets in these networks a major challenge for researchers. Therefore, many researchers focus on designing routing protocols that are suitable for all vehicular scenarios and can deal with these characteristics.
Routing protocols in VANETs can be categorized into four classes [1], but those based on the position of vehicles are the leading class thanks to their scalability and low control overhead [2]. In this paper, a novel routing protocol based on the location of vehicles is proposed. It relies on four parameters: the density, the speed, the direction, and the distance between the destination and the relaying candidate node. These parameters are combined to improve the classical greedy forwarding strategy of the GPSR routing protocol; this combination yields a new routing protocol called Density-Velocity-Aware GPSR (DVA-GPSR) that enhances the performance of VANETs in urban and highway scenarios. As mentioned above, the DVA-GPSR protocol selects the best relaying node by considering three parameters in addition to the classical one of GPSR. The first parameter, the direction angle, is used to calculate the angle between the direction of the relaying candidate and the direction of the target vehicle. In order to increase the link lifetime between two vehicles, the second parameter, the speed variation between the target node and the relaying candidate vehicle, is used to look for the smallest variation. The density, i.e., the number of neighbors of the relaying candidate vehicle, is the third parameter; it helps determine the connectivity mode along each path (sparse, medium, or dense). These parameters are used to improve the PDR, the throughput, and the routing overhead of the classical GPSR in the proposed scenarios.
The rest of this paper is organized into six sections. The related works are presented in Section II. The original GPSR routing protocol, its benefits, and its drawbacks are described in detail in Section III. In Section IV, we present and explain the strategy of the proposed DVA-GPSR. Section V presents the performance evaluation of the proposed DVA-GPSR based on simulation tools, and the results are analyzed and compared with the original GPSR. In Section VI, we conclude this paper and present some of our future works.
Related Works
In this section, we are going to give an overview of some enhancements applied to the classical GPSR for VANETs. We will present mainly the most recent and cited papers.
In [3], Bouras et al. proposed a modified GPSR that uses three parameters (direction information, vehicle speed, and link quality) in addition to the location information to select the next hop. Using these parameters, the future positions of the source and destination vehicles can be predicted. The benefit of GPSR-Modif is that it achieves a higher PDR than traditional GPSR while keeping the end-to-end delay (E2ED) at the same level as GPSR. In [4], Silva et al. propose an adaptive GPSR (AGPSR) to enhance both the GF strategy and the PM technique of classical GPSR. The GF technique is improved by using a special parameter called trust status (TS). Moreover, AGPSR improves the PM technique by replacing it with a continuous greedy strategy. The proposed protocol shows high performance, but only for static nodes. In [5], Tu et al. provided a modified GPSR based on a moving vector (GPSR-MV) to enhance both the GF and PM techniques, taking the fast movement of vehicles and forwarding efficiency into consideration and combining them with a simplified perimeter forwarding to avoid the loop problem. The results show that GPSR-MV offers a significant enhancement compared to classical GPSR. In [6], Zaimi et al. developed the GPSR-2P protocol to resolve congestion and saturation problems. The authors replace the GF technique with a multipath strategy, applied only if the same node transmits two successive packets to the same destination; otherwise, simple GF is applied. This enhancement of GPSR yields significant results in terms of PDR and E2ED, but GPSR-2P is not efficient when more than two packets are involved. Further enhancements have also been proposed by Yang et al. However, none of the above papers clearly adapts its GPSR enhancement for implementation in a highway environment or in a real-map scenario. Moreover, the proposed enhancements use very complex techniques and weighted functions to select the next-hop node. This paper further enhances the GPSR approach by adopting both highway and urban environments using a real-map scenario. Moreover, the proposed technique is based on a simple and novel mechanism to select the next-hop node in VANETs.
3 The Strategy of the Traditional GPSR Protocol
GPSR [7] is the most popular position-based routing protocol relying on geographic location information. In GPSR, two methods are utilized to transfer packets: greedy forwarding (GF), in which the source selects the neighbor closest to the target node as the next hop to relay the packet, and perimeter forwarding (PF), which replaces GF in case of failure, as shown in Figure 1. The strong point of GPSR is that each vehicle can have exact neighbor information such as geographic location, speed, and movement direction. However, in classical GPSR only the location information is used in the next-hop selection process, which can be inaccurate. Furthermore, the greedy forwarding technique reduces the number of hops from source to destination, but the transmission quality of the connection link is totally ignored. This strategy causes a significant amount of packet drops, which decreases the PDR and throughput. Moreover, for each link failure a new route has to be re-established, so the forwarded data is suspended until a new relay node is found. As a result, the routing overhead increases dramatically.
4 The Strategy of the Proposed DVA-GPSR
Our proposed scheme is built on top of the traditional GPSR protocol. It assumes that all vehicles in the VANET have a GPS able to provide accurate vehicle information and are equipped with an On-Board Unit (OBU) wireless transceiver/receiver to connect to each other. Hence, our main contribution is a novel greedy forwarding mechanism. In fact, a simple weighted function is used to select the most suitable relaying vehicle; the function consists of the direction angle, the speed variation, and the density of the relaying candidate node, in addition to the classical GPSR parameter, which is the distance between the relaying candidate vehicle and the target vehicle. An improved GPSR protocol called DVA-GPSR is then built based on the proposed strategy.
4.1 The novel greedy forwarding mechanism
As mentioned previously, the source vehicle first gathers the mobility parameters, namely the velocity and position of all its neighbors. These parameters are used in the proposed function to calculate the link weight of each neighbor.
• First, we calculate the direction angle (Figure 3) between each next-hop candidate and the destination node according to formula (1), where iVelocity is the velocity of the next-hop candidate and dVelocity is the velocity of the destination. The rationale behind the direction angle is to maintain the connection between vehicles as long as possible by choosing the smallest of all calculated angles.
• Secondly, the distance between the next-hop candidate and the destination node is calculated according to formula (2), where (x i , y i ) denotes the location of the neighbor node i and (x d , y d ) denotes the location of the destination.
• The third parameter is the speed variation between the target node and the next-hop candidate node, calculated according to formula (3), where S i is the speed of the neighbor node i and S d denotes the speed of the destination node.
The previously mentioned equations are used to formulate the weighted function (4). The link weight is calculated for every neighbor of the source node. If a neighbor vehicle has almost the same speed and direction as the destination, the calculated distance is decreasing, and the density of the neighbor is high or medium, then the link connection is more stable. Hence, we select the vehicle with the lowest weight value as the next-hop relay node. Formula (4) gives the weighted function of the next-hop candidate node i.
Here, density i is the number of neighbors of the next-hop candidate i, used to determine the connectivity mode along each path (sparse, medium, or dense) and thus to reduce the sparse-connectivity problem, and α + β + γ + θ = 1; several simulations were conducted to choose the most accurate values of these factors. The void-area problem that often arises with classical GPSR, which leads to the local maximum issue, is mitigated by taking the density parameter into account in the novel greedy forwarding strategy when selecting the most suitable next hop. Indeed, the vehicle that has the highest density (largest number of neighbors) will be chosen as the relaying node; hence, the local maximum problem is reduced. Moreover, Figure 3 clearly illustrates the strategy: the source vehicle will choose A as the relaying vehicle since it has three neighbors while B has none. In this algorithm, S represents the source node and i represents a neighbor node. The source node gathers all necessary information, then calculates the proposed link weight between itself and each of its neighbors and stores it in W i . Based on the previous explanations, the neighbor with the smallest value of W i will be chosen as the next hop; otherwise, the classical recovery process is applied.
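Since formulas (1)-(4) are referenced above without their full expressions, the following sketch is an illustrative rendering of the next-hop selection rather than the exact DVA-GPSR weight: the direction angle is taken between the candidate's and the destination's velocity vectors, the distance is Euclidean, the speed variation is an absolute difference, and density enters inversely so that better-connected candidates receive a lower weight. The normalization constants R_TX and V_MAX and the equal coefficient values are placeholder assumptions; only the constraint α + β + γ + θ = 1 comes from the text.

import math

R_TX = 300.0    # assumed transmission range (m), used only to normalise the distance
V_MAX = 40.0    # assumed maximum speed (m/s), used only to normalise the speed variation

def direction_angle(v_i, v_d):
    """Angle (radians) between the candidate's and the destination's velocity vectors."""
    dot = v_i[0] * v_d[0] + v_i[1] * v_d[1]
    norm = math.hypot(*v_i) * math.hypot(*v_d)
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def link_weight(pos_i, vel_i, density_i, pos_d, vel_d,
                alpha=0.25, beta=0.25, gamma=0.25, theta=0.25):
    """Illustrative link weight: smaller is better (formula (4) itself is not reproduced)."""
    angle = direction_angle(vel_i, vel_d) / math.pi                    # formula (1), scaled
    dist = min(math.dist(pos_i, pos_d) / R_TX, 1.0)                    # formula (2), scaled
    speed_var = abs(math.hypot(*vel_i) - math.hypot(*vel_d)) / V_MAX   # formula (3), scaled
    return alpha * angle + beta * dist + gamma * speed_var + theta / (1 + density_i)

# Candidate A moves with the destination and has neighbours; B moves against it and has none.
neighbours = {"A": ((120.0, 40.0), (12.0, 1.0), 3),
              "B": ((150.0, 60.0), (-10.0, 0.0), 0)}
destination = ((400.0, 50.0), (13.0, 0.5))
best = min(neighbours, key=lambda n: link_weight(*neighbours[n], *destination))
print("selected next hop:", best)       # A has the smaller weight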
Simulation and Comparison
In this section, we evaluate the performance of the proposed DVA-GPSR in terms of routing overhead, packet delivery ratio (PDR), and average throughput with different vehicle densities and numbers of destinations. The simulations are performed with NS-3 and SUMO as the network simulator and traffic simulator, respectively. For the urban simulations, we extracted a map of part of Oujda city (1.7 km × 1.5 km) from OpenStreetMap. For the highway simulations, we use a highway scenario of 300 m × 1.5 km with four lanes in two opposite directions. The other simulation settings are presented in Table 1. For DVA-GPSR, several simulations with different values were conducted to find the most efficient values of α, β, γ, and θ for the proposed function. The results are generated and drawn using the Gnuplot software.
Impact of the number of vehicles in the network
Packet Delivery Ratio (PDR): Figure 5-a shows the results in the highway scenario in terms of PDR when varying the number of vehicles. The PDR of the DVA-GPSR protocol increases with the number of vehicles, reaching 68%, while the PDR of GPSR decreases to 59%. Figure 5-b presents the same comparison for the urban scenario. We note that for the DVA-GPSR protocol, the PDR stays stable between 30% and 31% as the number of vehicles increases, while the PDR of GPSR decreases to 24%.
Fig. 5. Effect of vehicle density on PDR in the urban and highway scenarios
Average throughput: Figure 6-a shows the results for both protocols in the highway scenario in terms of average throughput when varying the number of vehicles. The average throughput of both protocols increases with the number of vehicles. However, for the DVA-GPSR protocol the throughput increases up to 14 kbps, while for GPSR it does not exceed 13 kbps. Figure 6-b presents the same comparison for the urban scenario. We note that for the DVA-GPSR protocol the throughput stays stable between 6 kbps and 6.5 kbps as the number of vehicles increases, while for GPSR the throughput decreases to 4.8 kbps. Figure 7 shows that DVA-GPSR performs better than GPSR during the simulation in both scenarios. Figure 7-a presents the results for the highway scenario when varying the number of vehicles with 10 randomly selected destinations. We note that for both protocols the overhead decreases as the number of vehicles increases, but DVA-GPSR has the lower overhead, going down to 27.6%, compared to GPSR. Figure 7-b presents the same comparison for the urban scenario in terms of routing overhead. The overhead of DVA-GPSR is lower than that of classical GPSR and does not exceed 28.5%.
Conclusion
In this paper, we have proposed a novel greedy forwarding mechanism based on density, speed, and direction parameters for VANETs, and then applied the proposed strategy to the classical GPSR routing protocol to make it more suitable for VANET scenarios. To prove the high performance of DVA-GPSR, we used a real urban environment, namely a part of Oujda (Al-Quds street). Simulation results demonstrate that the proposed DVA-GPSR outperforms the classical GPSR routing protocol in terms of control packet overhead, PDR, and average throughput. For future work, we aim to take into account more impactful parameters in the routing protocol to support urban environment structures, and other QoS-related performance metrics can be simulated and tested with different traffic scenarios. | 3,557.4 | 2020-05-20T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Enhancing credit scoring model performance by a hybrid scoring matrix
Competition in the consumer credit market in Taiwan has become severe recently. Therefore, most financial institutions actively develop credit scoring models for assessing the credit approval of new customers and managing the credit risk of existing customers. This study uses a genetic algorithm for feature selection and decision trees for customer segmentation. Moreover, it utilizes logistic regression to build the application and credit bureau scoring models, and the two scoring models are combined to construct the scoring matrix. The scoring matrix provides more accurate risk judgment and segmentation to further identify the parts of a personal loan portfolio that require enhanced management or control. The analytical results demonstrate that the predictive ability of the scoring matrix outperforms both the application and credit bureau scoring models. Regarding the K-S value, the scoring matrix increases the prediction accuracy over the application and credit bureau scoring models by 18.40 and 5.70%, respectively. Regarding the AUC value, the scoring matrix increases the prediction accuracy over the application and credit bureau scoring models by 10.90 and 6.40%, respectively. Furthermore, this study applies the scoring matrix to credit approval decisions for the corresponding risk groups to strengthen the bank's risk management practices.
INTRODUCTION
With the rapid growth in the credit industry and the management of large loan portfolios, application and behavioral scoring models have been extensively used for the credit risk evaluation decisions by the finance industry.
Application scoring models help banks determine whether credit should be granted to new applicants based on customer characteristics such as income, education, age, and so on (Akhavein, 2005).Behavioral scoring models help banks predict the probability that existing customer will default or become delinquent based on consumer's repayment and usage behavior (Boyer and Hult, 2005).
In this paper, we utilize a hybrid mining approach in the design of credit scoring models to support credit approval decisions, based on four main steps: (1) using a genetic algorithm (GA) to select input features, (2) using decision trees for customer segmentation, (3) using logistic regression (LR) to build the application and credit bureau scoring models based on the important input variables of the bank's internal application data and the credit bureau data, and (4) combining the application and credit bureau scoring models to construct the scoring matrix.
Previous studies have focused on creating more accurate classifiers with various hybrid architectures. However, there is scant research on the practical application of combined classifiers, because it is difficult to implement functional composition or to explain the underlying principle behind the decision to reject a credit application when applying a hybrid approach to banks' risk management practices.
Therefore, this study develops a GA-based feature selection and a hybrid model that combines two credit risk modeling approaches, the application and credit bureau scoring models, to construct the scoring matrix for the credit risk management of personal loan customers. The scoring matrix provides more accurate risk judgment and segmentation to further identify the parts requiring enhanced management or control within a personal loan portfolio. Furthermore, this study applies the scoring matrix to credit approval decisions for the corresponding risk groups to strengthen the bank's risk management practices.
SCORING MATRIX DEVELOPMENT
Figure 1 illustrates the development framework used for scoring matrix in this study, where the detailed development process is shown.
Data preprocessing
Data preprocessing is an important step but often neglected in the data mining process.The phrase "Garbage In, Garbage Out" is particularly applicable to the typical data mining projects.Thus, the representation and quality of data is first and foremost before running an analysis (Kotsiantis et al., 2006).
Data preprocessing tasks in this study includes data cleaning, data integration, data transformation, and data reduction.Data cleaning is the process of smoothing noisy data, identifying or removing outliers, and resolving inconsistencies.
Data integration is a procedure to integrate multiple databases or files. In data transformation, the data are converted into forms appropriate for the mining process. Data reduction is the process of removing irrelevant attributes and reducing the number of attribute values by grouping them into intervals (binning). After preprocessing, the data are passed to the data mining process. Holland (1975) proposed GA as a heuristic combinatorial optimization search technique. Compared to traditional statistical approaches, GA has the advantage of not being bounded by the form of the functions.
Feature selection
This study exploits the GA fitness function to analyze the input variables influencing personal loan payment status and converts the hidden feature information of the various variables into importance values.
The relative importance values range from 0 to 1 and are normalized so that all inputs add up to approximately 1. A variable with a greater value is more capable of predicting the outcome. The use of GA as a technique for ranking the importance of variables enables systematic identification of the usefulness of variables and objective ranking of their importance, which is very helpful in model input selection, namely for eliminating ineffective inputs while retaining useful ones (Chi and Tang, 2007).
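As an illustration of this step, the toy GA below searches over binary feature masks, uses the cross-validated AUC of a logistic regression as the fitness, and reports a normalized selection frequency as a stand-in for the importance value; the selection rule importance > 0.05 applied later in the paper is shown at the end. The population size, operators, fitness proxy, and synthetic data are assumptions made for illustration and are not the authors' implementation.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def ga_feature_importance(X, y, pop_size=20, generations=10, cx_rate=0.8, mut_rate=0.05):
    """Toy GA: chromosomes are binary feature masks, fitness is 3-fold CV AUC of a
    logistic regression; 'importance' is the normalised selection frequency of each
    feature in the final population (an illustrative proxy, not the paper's measure)."""
    n_feat = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n_feat))

    def fitness(mask):
        if mask.sum() == 0:
            return 0.0
        model = LogisticRegression(max_iter=1000)
        return cross_val_score(model, X[:, mask.astype(bool)], y,
                               cv=3, scoring="roc_auc").mean()

    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # binary tournament selection
        idx = [max(rng.integers(0, pop_size, 2), key=lambda i: scores[i])
               for _ in range(pop_size)]
        parents = pop[idx]
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):           # one-point crossover
            if rng.random() < cx_rate:
                cut = int(rng.integers(1, n_feat))
                children[i, cut:] = parents[i + 1, cut:]
                children[i + 1, cut:] = parents[i, cut:]
        flip = rng.random(children.shape) < mut_rate   # bit-flip mutation
        children[flip] = 1 - children[flip]
        pop = children

    importance = pop.mean(axis=0)
    return importance / importance.sum()

X, y = make_classification(n_samples=400, n_features=12, n_informative=4, random_state=0)
imp = ga_feature_importance(X, y)
print(np.round(imp, 3))
print("retained variables:", np.where(imp > 0.05)[0])   # the importance > 0.05 rule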
Customer segmentation
Customer segmentation involves partitioning customers into homogeneous segments based on their behavior, characteristics, and the nature of loan products.Customers belonging to the same segment possess similar risk characteristics or primary risk drivers.
Building individual credit scoring models for each subpopulation will enable us to separate the good customers from the bad customers more accurately than if one credit scoring model is built to handle the whole population.Selected segments should be sufficiently large to enable meaningful sampling for separate credit scoring model development.
Generally, the accuracy of the resulting score increases with the numbers of good and bad available (Mays, 2001).This study uses decision trees for segmentation.Decision trees isolate segments based on performance criteria (that is, differentiate between Goods and Bad), and are simple to understand and interpret.Besides identifying characteristics for use in segmentation, decision trees also identify optimal breakpoints for each characteristic (Ouyang et al., 2011).
Building credit scoring models
The literature has outlined the theoretical background for using LR for classification in credit scoring, and also shows that LR usually performs well in discriminating good and bad loans in tasks similar to the one examined here (Charitou et al., 2004; Kočenda and Vojtek, 2009). This study uses LR to build the application and credit bureau scoring models based on the important input variables of the bank's internal application data and the credit bureau data, respectively. LR uses a set of predictor variables to predict the probability of a binary outcome. The logit transformation of the probability of an event is logit(p) = ln(p/(1 − p)) = β 0 + β 1 x 1 + … + β k x k , where p is the posterior probability of Goods, x 1 , …, x k are the input variables, β 0 is the intercept of the regression line, and β 1 , …, β k are the parameters.
The logit transformation is the log of the G/B odds; it linearizes the posterior probability and limits the estimated probabilities to between 0 and 1. Maximum likelihood is used to estimate the parameters β 1 to β k . These parameter estimates measure the rate of change of the logit for a one-unit change in the input variable, that is, they are the slopes of the regression line between the target and the respective input variables. To facilitate the use and interpretation of credit scoring models, credit scores are commonly scaled linearly. This study scales the points such that a total credit score of 300 points corresponds to G/B odds of 1 to 1, and an increase of 20 points in the credit score corresponds to a doubling of the G/B odds. Equations 1 and 2 show the derivation of the scaling rule that transforms the credit scores of each attribute.
Here, woe is the weight of evidence of each grouped attribute, β is the regression coefficient of each variable, a is the intercept term from LR, n is the number of variables, and k is the number of attributes of each variable.
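The odds-to-score part of this scaling (300 points at G/B odds of 1:1, plus 20 points per doubling of the odds) pins down the usual factor/offset form, which the short sketch below illustrates. Since the exact Equations 1 and 2 and the per-attribute allocation involving the WOE values and regression coefficients are not reproduced above, only the overall scaling is shown; the 300-point base and 20 points-to-double-odds come from the text, while the factor/offset form itself is a standard inference.

import math

PDO, BASE_SCORE, BASE_ODDS = 20, 300, 1.0     # from the text: +20 points doubles the G/B odds

factor = PDO / math.log(2)                    # ~28.85 points per unit of ln(odds)
offset = BASE_SCORE - factor * math.log(BASE_ODDS)   # 300, since ln(1) = 0

def score_from_odds(gb_odds):
    """Scaled credit score for a given Good/Bad odds ratio."""
    return offset + factor * math.log(gb_odds)

def score_from_probability(p_good):
    return score_from_odds(p_good / (1.0 - p_good))

print(round(score_from_odds(1)))     # 300 points at odds 1:1
print(round(score_from_odds(2)))     # 320 points: odds doubled, +20 points
print(round(score_from_odds(30.6)))  # the portfolio-level odds quoted later in the paper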
Owing to the score-to-odds relationship having different meanings in different segments, this investigation applies calibration to standardize the relationship between the score and G/B odds.Credit scores in different segments thus can be compared directly.
To facilitate the use of credit strategies, credit scores are generally divided into different risk ranks according to the degree of risk score.This investigation classifies customers into five risk ranks, ranging from APS1 to APS5 and CBS1 to CBS5, based on application and credit bureau scores, respectively.
Scoring matrix
In this process, the five risk ranks of the personal loan application scoring model and the credit bureau scoring model are combined to construct a 5x5 scoring matrix.
The purpose is to present the possibility of customer payment using objective and specific data.This scoring matrix can better differentiate customer risk and further design corresponding credit strategies.
Credit strategy applications
Banks can design and implement related credit strategies based on the application scoring model they apply in the personal loan application process. However, incorporating the credit bureau scoring model alongside the application scoring model enables banks to undertake more refined risk segmentation. This study applies the scoring matrix to the credit approval decision.
METHODOLOGY
Data collection
The internal application data contains various socio-demographic characteristics and other information collected by a major bank in Taipei, Taiwan.
The sample comprises 16,040 individual customers who were granted loans from 2009/11/1 to 2010/10/1. The internal application data are incomplete because of a lack of data on interactions between the bank's customers and other financial institutions, namely proprietary information withheld due to business competition.
To improve the credit scoring model's performance, this study also collects credit-related information on personal loan customers from the public credit registers, as well as collecting bank's internal data.
In this study, the internal application data are based on borrower characteristics, in addition to the credit bureau data. The credit bureau data comprise five major dimensions: payment history, recent searches for credit, length of credit history, types of credit used, and credit utilization.
Figure 2. The K-S statistic.
Sample
Customers are classified as either good or bad based on their payment performance on the loan, with those who are two or more installments in arrears classified as bad. Of the 16,040 personal loan customers, 15,532 are good and 508 are bad. Therefore, the G/B odds ratio is 15,532/508 = 30.6. To avoid overfitting the model, this study uses a G/B odds ratio of 3 in the development sample (Chuang and Chen, 2006). That is, 1,492 goods are randomly selected and combined with the 508 bads to form the development sample. To validate the stability and accuracy of the application scoring model, the data set of 2,000 customers is split into training and testing sets using a ratio of 8:2 (Lee et al., 2006).
Evaluation of model performance
When a statistical model is used as a predictive tool, doubts can exist regarding the generalization of the model over time and new observations.Several methods exist for measuring the performance of statistical prediction models.
Two of the most widely applied methods are the Kolmogorov-Smirnov (K-S) statistic and receiver operating characteristic (ROC) curve analysis. The K-S statistic measures how far apart the cumulative distribution functions of the scores of Goods and Bads are. The credit scoring model generating the greatest separability between the two distributions is considered the better model. The statistic is KS = max s |F Good (s) − F Bad (s)|, where F Good and F Bad are the cumulative distribution functions of Good and Bad and s is the corresponding score of the individual loan. Figure 2 shows that Bads accumulate rapidly at low scores while Goods accumulate more rapidly at high scores. Additionally, the cumulative distribution function curve of the Goods lies to the right of that of the Bads.
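Computed on scored samples, the K-S statistic above reduces to the largest gap between the two empirical cumulative distributions. A minimal sketch on synthetic scores follows; the group sizes mirror the development sample, but the score distributions themselves are invented for illustration.

import numpy as np

def ks_statistic(scores_good, scores_bad):
    """Maximum distance between the empirical CDFs of Good and Bad scores, in percent."""
    thresholds = np.sort(np.concatenate([scores_good, scores_bad]))
    cdf_good = np.searchsorted(np.sort(scores_good), thresholds, side="right") / scores_good.size
    cdf_bad = np.searchsorted(np.sort(scores_bad), thresholds, side="right") / scores_bad.size
    return 100 * np.max(np.abs(cdf_good - cdf_bad))

rng = np.random.default_rng(1)
good = rng.normal(420, 40, 1492)   # synthetic scores for the 1,492 Goods
bad = rng.normal(380, 40, 508)     # synthetic scores for the 508 Bads
print(f"K-S = {ks_statistic(good, bad):.1f}%")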
ROC curve analysis is commonly used for assessing the performance of various classification tools, including biological markers, diagnostic tests, and binary outcome models (Medema et al., 2009; Yu, 2009). The ROC curve, as depicted in Figure 3, displays the full trade-off between the percentage of hits (i.e., sensitivity) of a credit scoring model on the y-axis and the percentage of false alarms (i.e., 1 − specificity) on the x-axis for all possible classification thresholds.
If high scores are defined to represent a low default probability, then the x-values represent the rate at which Goods are classified as Bads by the credit scoring model (i.e., Type II error) and the y-values represent one minus the rate at which Bads are classified as Goods (i.e., Type I error). The ROC curve thus also fully represents the Type I and Type II errors.
The area under the ROC curve (AUC) is widely used for assessing the discriminatory ability of a credit scoring model; it can be interpreted as the probability that a classifier is able to distinguish a randomly chosen good customer from a randomly chosen bad customer. The AUC is directly related to the Gini coefficient (Gini = 2 AUC − 1) (Thomas et al., 2002) and is equivalent to the normalized Wilcoxon-Mann-Whitney test statistic (Hanley and McNeil, 1982). The AUC value ranges from 0.5 to 1, where a larger AUC value indicates a more accurate credit scoring model. In most cases where good data are being used, an AUC value exceeding 0.7 represents good discrimination capacity (Cholongitas et al., 2006).
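Because the AUC equals the normalized Wilcoxon-Mann-Whitney statistic, it can be computed directly from the ranks of the scores, as in this brief sketch; the synthetic scores are the same illustrative ones used for the K-S example.

import numpy as np
from scipy.stats import rankdata

def auc_wmw(scores_good, scores_bad):
    """AUC as the normalised Wilcoxon-Mann-Whitney statistic: P(score_good > score_bad)."""
    ranks = rankdata(np.concatenate([scores_good, scores_bad]))   # average ranks handle ties
    n_g, n_b = scores_good.size, scores_bad.size
    u = ranks[:n_g].sum() - n_g * (n_g + 1) / 2                   # Mann-Whitney U for the Goods
    return u / (n_g * n_b)

rng = np.random.default_rng(1)
good = rng.normal(420, 40, 1492)
bad = rng.normal(380, 40, 508)
print(f"AUC = {auc_wmw(good, bad):.3f}")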
Input feature selection
GA is employed to eliminate ineffective variables according to the importance of each input to modeling performance, i.e., the loss that occurs if a variable is no longer available to the model. The input variables are selected by the rule of importance value > 0.05 (Wang et al., 2010), and the retained variables are then used to construct the credit scoring models.
The importance of each input variable for the application and credit bureau scoring models are listed in Tables 1 and 2 respectively.
APPLICATION SCORING MODEL RESULTS
This study employs LR to build the application scoring model based on the important input variables from the bank's internal data. Table 3 shows the nine significant variables of the application scoring model, as well as the attributes, G/B odds ratio, and attribute points of each variable.
These variables include age, gender, material status, education, occupation, years of work experience, home ownership, term of loan and loan amount.
The empirical results listed in Table 4 show that the K-S and AUC values of the application scoring model are 29.90 and 70.50%, respectively.To facilitate credit strategy applications, customers are classified into five risk ranks ranging from APS1 to APS5 in accordance with the degree of risk score where APS1 indicates highest risk and APS5 indicates lowest risk.
Credit bureau scoring model results
Because of differences in characteristic behaviors between revolving and transaction customers, this investigation first uses decision trees to partition the customers into two segments according to their payment behaviors.
This study then applies LR to build the revolving and transactor credit bureau scoring models for the two segments based on the important input variables of the credit bureau data. Table 5 indicates the eight significant variables of the revolving credit bureau scoring model, as well as the attributes, G/B odds ratio, and attribute points of each variable. The eight variables are: outstanding amount of cash cards, maximum consecutive months of cash advance greater than 0 in the last 12 months, number of credit cards more than 30 days past due in the last 6 months, worst days past due among unsecured products in the last 6 months, number of inquiring banks in the last 3 months, bureau abnormal credit record, average revolving ratio in the last 3 months, and average utilization ratio of credit cards in the last 6 months. Table 6 indicates the seven significant variables of the transactor credit bureau scoring model, as well as the attributes, G/B odds ratio, and attribute points of each variable. Customers are then classified into five risk ranks in accordance with the degree of risk score, where CBS1 has the highest risk and CBS5 has the lowest risk.
Scoring matrix results
After building the application and credit bureau scoring models, this study constructs a 5x5 scoring matrix based on the five risk ranks of the two models. Table 4 shows that the K-S and AUC values of the scoring matrix are 48.30 and 81.40%, respectively.
Comparative model performance
Regarding the K-S value, the scoring matrix increases the prediction accuracy compared to both the application and credit bureau scoring models by 18.40 and 5.70%, respectively.Regarding the AUC value, the scoring matrix increases the prediction accuracy compared to both the application and credit bureau scoring models by 10.90 and 6.40%, respectively.
The empirical results demonstrate that the scoring matrix outperforms both the application and credit bureau scoring models in predictive ability. The scoring matrix thus enables more refined risk segmentation.
To further examine the prediction accuracy of the three models built on different data sets, Figures 4 to 7 show that the scoring matrix has higher K-S and AUC values in both the training and testing sets than the application and credit bureau scoring models. Additionally, the empirical results listed in Table 4 show that the testing set has only slightly lower accuracy than the training set, indicating that the scoring matrix is stable.
CREDIT STRATEGY APPLICATION
The analysis results indicate that the K-S and AUC values of the scoring matrix are significantly higher than those of the application and credit bureau scoring models. By applying the scoring matrix to personal loan portfolio management, this investigation classifies 16,040 personal loan customers into the 25 cells, thus allowing more accurate segmentation of customer risk. Furthermore, the cells can be grouped into three risk groups based on the G/B odds ratio of each cell. Table 7 lists the details: cells with a G/B odds ratio below 30 are categorized as the high-risk group, cells with a G/B odds ratio between 30 and 50 as the medium-risk group, and cells with a G/B odds ratio exceeding 50 as the low-risk group. The purple zone indicates the high-risk group with an average G/B odds ratio of 11.6, the yellow zone indicates the mid-risk group with an average G/B odds ratio of 38.0, and the green zone indicates the low-risk group with an average G/B odds ratio of 125.4. These three risk groups reveal significant risk segmentation. The bank can then adopt the credit approval decisions for the different risk groups listed in Table 8.
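The grouping logic can be illustrated with a small sketch that crosses the two rank assignments, computes a G/B odds ratio per cell, and applies the 30/50 cut-offs described above; the customer records are synthetic and the helper names are not from the paper.

```python
import numpy as np

def risk_group(odds):
    """Map a cell's G/B odds ratio to the three groups used in Table 7."""
    if odds < 30:
        return "high"
    if odds <= 50:
        return "medium"
    return "low"

# synthetic customers: application rank (APS1-5), bureau rank (CBS1-5), good/bad flag
rng = np.random.default_rng(1)
aps = rng.integers(1, 6, 16040)
cbs = rng.integers(1, 6, 16040)
p_good = 0.70 + 0.025 * (aps + cbs)          # riskier cells -> fewer good customers
is_good = rng.random(16040) < p_good

for a in range(1, 6):
    for c in range(1, 6):
        cell = (aps == a) & (cbs == c)
        goods, bads = is_good[cell].sum(), (~is_good[cell]).sum()
        odds = goods / max(bads, 1)           # guard against cells with no bad customers
        print(f"APS{a}/CBS{c}: G/B odds = {odds:5.1f} -> {risk_group(odds)}-risk")
```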
Consumers in the low-risk group are viewed by banks as a very low credit risk, and banks can offer their best rates and terms to borrowers in this group. If customers belong to the medium-risk group, banks can still extend credit but require much higher interest payments to compensate for the increased risk associated with this group. Customers in the high-risk group will find it difficult to obtain financing from banks.
Conclusions
This study develops a GA-based feature selection procedure and a hybrid model that combines two credit risk modeling approaches, the application and credit bureau scoring models, to construct a scoring matrix for the credit risk management of personal loan customers.
Additionally, this study classifies the personal loan portfolio into three risk groups based on the degree of customer risk. Focusing attention on different risk groups makes it possible to design corresponding credit strategies. For model validation, this study applies the K-S statistic and the ROC curve to measure the predictability of the credit scoring models.
Regarding the K-S value, the scoring matrix increases the prediction accuracy by 18.40 and 5.70%, respectively, compared to the application and credit bureau scoring models. Regarding the AUC value, the scoring matrix increases the prediction accuracy by 10.90 and 6.40%, respectively, compared to the application and credit bureau scoring models. Overall, using the scoring matrix can more precisely and efficiently strengthen risk identification, assessment and management, making it an indispensable risk management tool for financial institutions.
Figure 1 .
Figure 1. The development process of the scoring matrix, where the cumulative distribution functions of good and bad customers are used and s is the corresponding score for an individual loan.
Figure 7 .
Figure 6. K-S for model results (training set).
Table 1 .
Results of the GA for input feature selection of application scoring model.
Table 2 .
Results of the GA for input feature selection of credit bureau scoring model.
Table 3 .
Results of application scoring model.
Table 4 .
Credit scoring results of three construction models.
Table 5 .
Results of revolving credit bureau scoring model.
Table 6 .
Results of transactor credit bureau scoring model.
Table 4 .
Additionally, this study classifies customers into five risk ranks, ranging from CBS1 to CBS5, based on the degree of risk score, where CBS1 indicates the highest risk and CBS5 the lowest risk. | 4,657.8 | 2013-05-14T00:00:00.000 | [
"Business",
"Computer Science",
"Economics"
] |
A Robust Semi-Blind Receiver for Joint Symbol and Channel Parameter Estimation in Multiple-Antenna Systems
: For multiple-antenna systems, the technologies of joint symbol and channel parameter estimation have been developed in recent works. However, existing technologies have a number of problems, such as performance degradation and the large cost of prior information. In this paper, a tensor space-time coding scheme in multiple-antenna systems was considered. This scheme allowed spreading, multiplexing, and allocating information symbols associated with multiple transmitted data streams. We showed that the received signal was formulated as a third-order tensor satisfying a Tucker-2 model, and then a robust semi-blind receiver was developed based on the optimized Levenberg–Marquardt (LM) algorithm. Under the assumption that the instantaneous channel state information (CSI) is unknown at the receiving end, the proposed semi-blind receiver jointly estimates the information symbol and channel parameters efficiently. The proposed receiver had a better estimation performance compared with existing semi-blind receivers, and still performed well when the channel became strongly correlated. Moreover, the proposed semi-blind receiver could be extended to the multi-user massive multiple-input multiple-output (MIMO) system for joint symbol and channel estimation. Computer simulation results were shown to demonstrate the effectiveness of the proposed receiver.
Introduction
Multiple-antenna techniques are well known to provide spatial diversity and multiplexing gains [1][2][3].Over the last few decades, the benefits of multiple-antenna communications have been verified in both theory and practice.On the other hand, tensor-based signaling approaches that utilize several signal dimensions such as time, space, and code, are seen as good technologies for improving the information transmission rate and enhancing communication reliability [4][5][6].Against this background, the problem of joint symbol and channel estimation is resolved by using tensor-based signaling approaches, and a number of semi-blind or blind receivers have been proposed for multiple-input multiple-output (MIMO) systems.
A parallel factor (PARAFAC) [7] based receiver is proposed in [8] by using the Khatri-Rao space-time (KRST) coding scheme, which can achieve a flexible tradeoff between error performance and transmission efficiency.In [9], the authors extend the KRST coding scheme by using the linear constellation precoding, and then developing several semi-blind receivers.These semi-blind receivers allow a joint symbol and channel estimation without requiring pilot sequences for the instantaneous channel state information (CSI) acquisition.In [10], the authors develop a new tensor-based receiver in MIMO relay systems for channel estimation by using PARAFAC analysis.A low complexity PARAFAC-based channel estimation scheme for non-regenerative MIMO relay systems is developed in [11].In [12], a novel semi-blind receiver is derived using a multiple KRST coding scheme for joint symbol and channel estimation.More recently, a nested PARAFAC-based receiver for cooperative MIMO communications is proposed in [13], and three-step and double two-step alternating least squares (ALS) algorithms are proposed to fit the nested PARAFAC model for estimating system parameters.For millimeter wave (mmWave) massive MIMO systems, a PARAFAC decomposition-based algorithm is developed in [14] to jointly estimate channel parameters of multiple users.In [15], the algorithm in [14] is extended to mmWave MIMO orthogonal frequency division multiplexing (MIMO-OFDM) systems for channel estimation, and Cramér-Rao bound (CRB) results for channel parameters are also derived.Considering the channel estimation issue in the presence of pilot contamination for multi-cell massive MIMO systems, a new PARAFAC-based approach is proposed in [16] to jointly estimate directions of arrival, fading coefficients, and delays.Although these works [8][9][10][11][12][13][14][15][16] consider different design approaches, their common feature is using the PARAFAC model, which needs to know the first column or row of one loading matrix to eliminate scaling ambiguity.Furthermore, the ALS algorithm used in these receivers exhibits a convergence problem when ill-conditioned factor matrices exist [17].
In contrast to the ALS algorithm, the Levenberg-Marquardt (LM) algorithm updates all the parameters to be estimated at the same time.The LM algorithm is successfully used to fit some tensor models, adapt to collinearity problems, and provide quadratic convergence [18][19][20].A LM algorithm is first proposed for fitting PARAFAC model in [18].In [19], the authors present a LM algorithm to the decomposition of the Block Component Model (BCM) in the uplink of a wideband direct-sequence code-division multiple access (DS-CDMA) systems.Recently, a LM algorithm was developed in [20] to jointly estimate information symbol and channel matrices for a generalized PARATUCK2 model.As an iterative algorithm, the LM algorithm is also sensitive to initialization.Thus, the optimization of the initial value is important to improve the performance of the LM algorithm.
In [21], a tensor-based space-coding scheme using PARATUCK2 model is developed.For the PARATUCK2 model, the number of channel uses can be different from one transmitted data stream to another.In [22], a generalized PARATUCK2 model is proposed by exploiting a tensor space-time (TST) coding.Recently, a Kronecker product least squares (KPLS) receiver is proposed in [23] to estimate the symbol and channel matrices.More recently, it is shown in [24] that a KPLS receiver can be extended to all the tensor-based systems.Although the KPLS receiver is a non-iterative and low-complexity solution, it needs the related core tensor unfolding to be right-invertible, which is a relatively harsh condition in signal design.
Inspired by [21] and [22], we considered a simple tensor space-time coding scheme for multiple-antenna systems, along with an efficient receiver.The allocation factor and the space-time code factor in the TST coding scheme in [22] are independent, while the allocation factor in our coding scheme is also a three-dimensional space-time code factor.Thanks to the special structure of the proposed coding scheme, the received signal can be constructed as a Tucker-2 model [25,26], which has uniqueness property under some suitable conditions.Then, a robust semi-blind receiver based on optimized LM algorithm is presented for joint channel and symbol estimation.Uniqueness and identifiability issues for the constructed Tucker-2 model are also discussed in this paper.Compared with existing receivers, the proposed receiver has a better estimation performance.Moreover, the proposed semi-blind receiver can be extended to the multi-user massive MIMO system.For the low-rank channel,the proposed receiver still has good performance for joint symbol and channel estimation even in the shorter length of code and information symbol, and larger number of data streams.
The organization of this paper is as follows.Section 2 presents a brief overview of the Tucker model.In Section 3, the system model is presented and the associated tensor signal model is formulated.Section 4 briefly reviews the receiver with the ALS algorithm and describes the proposed semi-blind receiver based on the optimized LM algorithm.Section 5 extends the proposed semi-blind receiver to multi-user massive MIMO systems for joint symbol and channel estimation.In Section 6, some simulation results are shown to demonstrate the performance of our semi-blind receiver.Conclusions are drawn in Section 7.
Notation: Scalars, vectors, matrices, and tensors are denoted by lower-case letters (a, b, ...), boldface lower-case letters (a, b, ...), boldface capitals (A, B, ...), and underlined boldface capitals (A, B, ...), respectively. A^T, A^H, A^{-1}, and A^{†} represent the transpose, conjugate transpose, inverse, and Moore-Penrose pseudo-inverse of the matrix A, respectively. ||A||_F denotes the Frobenius norm of A. I_M denotes the M × M identity matrix. The operator vec(·) stacks the columns of its matrix argument into a vector, while unvec(·) represents the inverse vectorization operation. The Kronecker matrix product is denoted by ⊗. The term D_i(A) denotes the diagonal matrix built from the i-th row of A.
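For readers less familiar with this notation, the small sketch below exercises the vec/unvec, Kronecker-product, Frobenius-norm and D_i(A) operators in numpy; it is purely illustrative and mirrors the column-stacking convention stated above.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]], dtype=complex)

vec_A = A.reshape(-1, 1, order="F")        # vec(A): stack the columns of A
unvec_A = vec_A.reshape(2, 3, order="F")   # unvec(.) recovers the matrix
assert np.allclose(unvec_A, A)

B = np.eye(2)
kron_AB = np.kron(A, B)                    # Kronecker product A ⊗ B

D_2 = np.diag(A[1, :])                     # D_i(A): diagonal matrix from the i-th row (i = 2 here)
fro = np.linalg.norm(A, "fro")             # Frobenius norm ||A||_F
print(kron_AB.shape, D_2.shape, fro)
```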
Tucker Model
This section first presents a brief overview of the Tucker model, and then focuses on the Tucker-2 model used in this work. For an Nth-order tensor T ∈ C^{I_1×···×I_N}, a Tucker-N model, or simply Tucker model, is defined in the following scalar form [26]:

t_{i_1,...,i_N} = Σ_{r_1=1}^{R_1} ··· Σ_{r_N=1}^{R_N} g_{r_1,...,r_N} a^{(1)}_{i_1,r_1} ··· a^{(N)}_{i_N,r_N}  (1)

where i_n = 1, ..., I_n for n = 1, ..., N, and a^{(n)}_{i_n,r_n} and g_{r_1,...,r_N} stand for typical elements of the matrix factor A^{(n)} ∈ C^{I_n×R_n} and the core tensor G ∈ C^{R_1×···×R_N}, respectively. Using the mode-n product representation, the model (1) can be written as:

T = G ×_1 A^{(1)} ×_2 A^{(2)} ··· ×_N A^{(N)}  (2)

where G ×_n A^{(n)} denotes the mode-n product of G and A^{(n)} along the n-th mode, giving a tensor W with typical element

w_{r_1,...,r_{n-1},i_n,r_{n+1},...,r_N} = Σ_{r_n=1}^{R_n} g_{r_1,...,r_N} a^{(n)}_{i_n,r_n}  (3)

It is known that the Tucker model is not essentially unique [26], which restricts its application: its matrix factors can only be determined up to nonsingular transformations characterized by nonsingular matrices. However, some low-order Tucker models with special structures are unique up to permutation and/or scaling ambiguities.
Assuming N = 3 and A^{(3)} = I_{I_3} for the third-order tensor T ∈ C^{I_1×I_2×I_3}, we have:

t_{i_1,i_2,i_3} = Σ_{r_1=1}^{R_1} Σ_{r_2=1}^{R_2} g_{r_1,r_2,i_3} a^{(1)}_{i_1,r_1} a^{(2)}_{i_2,r_2}  (4)

This model is called the Tucker-2 model, or Tucker-(2,3) model, and is widely applied in data analysis and parameter estimation [4]. A^{(1)} and A^{(2)} are the two loading matrices, and G is the core tensor. In the same way, such a model can be written in terms of the mode-n product as:

T = G ×_1 A^{(1)} ×_2 A^{(2)}  (5)
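A numerical check of the Tucker-2 form is sketched below: the tensor is built element-wise from the scalar definition and compared with the mode-1/mode-2 products computed by einsum. The dimensions are arbitrary toy values and the code is not tied to any specific system.

```python
import numpy as np

rng = np.random.default_rng(0)
I1, I2, I3, R1, R2 = 4, 5, 6, 2, 3

G = rng.standard_normal((R1, R2, I3))          # core tensor (third mode not compressed)
A1 = rng.standard_normal((I1, R1))             # loading matrix A^(1)
A2 = rng.standard_normal((I2, R2))             # loading matrix A^(2)

# Tucker-2: T = G x_1 A1 x_2 A2, i.e. t_{i1,i2,i3} = sum_{r1,r2} g_{r1,r2,i3} a1_{i1,r1} a2_{i2,r2}
T_modeproduct = np.einsum("abk,ia,jb->ijk", G, A1, A2)

# the same tensor from the scalar definition, written out with explicit loops
T_scalar = np.zeros((I1, I2, I3))
for i1 in range(I1):
    for i2 in range(I2):
        for i3 in range(I3):
            T_scalar[i1, i2, i3] = sum(
                G[r1, r2, i3] * A1[i1, r1] * A2[i2, r2]
                for r1 in range(R1) for r2 in range(R2)
            )

assert np.allclose(T_modeproduct, T_scalar)
```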
System Model
Consider a multiple-antenna system with M_S transmit antennas and M_D receive antennas, as shown in Figure 1. h_{m_D,m_S} represents the channel coefficient between the m_S-th transmit antenna and the m_D-th receive antenna (m_S = 1, ..., M_S, m_D = 1, ..., M_D). s_{n,r} represents the n-th symbol of the r-th data stream (n = 1, ..., N, r = 1, ..., R), with each data stream being formed of N information symbols. Each symbol s_{n,r} is coded by a three-dimensional space-time code b_{m_S,r,p} (p = 1, ..., P), whose dimensions are the numbers of transmit antennas, data streams, and chips, respectively. We then define the antenna-to-slot allocation factor q_{p,m_S}, which is 0 or 1. Both the transmitter and the receiver know the factors b_{m_S,r,p} and q_{p,m_S}. The signal transmitted from the m_S-th transmit antenna, during the n-th symbol period of the p-th chip, is given by:

x_{m_S,n,p} = q_{p,m_S} Σ_{r=1}^{R} b_{m_S,r,p} s_{n,r}  (6)

where s_{n,r} and q_{p,m_S} are the (n, r)-th and (p, m_S)-th elements of the signal matrix S ∈ C^{N×R} and the antenna-to-slot allocation matrix Q ∈ C^{P×M_S}, respectively, and x_{m_S,n,p} and b_{m_S,r,p} are typical elements of the transmitted signal tensor X ∈ C^{M_S×N×P} and the coding tensor B ∈ C^{M_S×R×P}, respectively. The elements of B are chosen as e^{2π√(−1)ς}, where ς is taken from uniformly distributed pseudorandom numbers. In our tensor coding scheme, the number of transmitted data streams is not restricted to be equal to the number of transmit antennas, and the data streams can be allocated to an arbitrary set of transmit antennas. Without considering the stream-to-slot allocation, the coding scheme in [21] can be regarded as a special case of our tensor coding scheme with a fixed two-dimensional space-time code.
Assuming Rayleigh flat fading channels, the discrete-time baseband signal at the m_D-th receive antenna can be written as:

y_{m_D,n,p} = Σ_{m_S=1}^{M_S} h_{m_D,m_S} x_{m_S,n,p} + v_{m_D,n,p} = Σ_{m_S=1}^{M_S} h_{m_D,m_S} q_{p,m_S} Σ_{r=1}^{R} b_{m_S,r,p} s_{n,r} + v_{m_D,n,p}  (7)

where h_{m_D,m_S} is the (m_D, m_S)-th element of the channel matrix H ∈ C^{M_D×M_S}, and y_{m_D,n,p} and v_{m_D,n,p} are typical elements of the received signal tensor Y ∈ C^{M_D×N×P} and the noise tensor V ∈ C^{M_D×N×P}, respectively.
Constructed Tucker-2 Model
Let us define c_{m_S,r,p} = q_{p,m_S} b_{m_S,r,p}, where c_{m_S,r,p} is the typical element of the compound tensor C ∈ C^{M_S×R×P}. Equation (7) can then be written as:

y_{m_D,n,p} = Σ_{m_S=1}^{M_S} Σ_{r=1}^{R} h_{m_D,m_S} c_{m_S,r,p} s_{n,r} + v_{m_D,n,p}  (8)

By comparing Equation (4) with Equation (8), the received signal tensor Y ∈ C^{M_D×N×P} of the noiseless signal satisfies a Tucker-2 model with the correspondences (i_1, i_2, i_3, r_1, r_2) ↔ (m_D, n, p, m_S, r) and (G, A^{(1)}, A^{(2)}) ↔ (C, H, S). Using the mode-n product representation, the model (8) can be written as Y = C ×_1 H ×_2 S + V, where S and H represent the two loading matrices and C is the core tensor. From this model, four compact matrix (unfolding) forms of the Tucker-2 model can be obtained, involving known matrices F_1, F_2 and F_3 constructed from the coding tensor and the allocation matrix, which are used in the remainder of the paper. In this paper, the two following assumptions are satisfied.
(a) The antenna-to-slot allocation matrix Q does not have an all-zero column.This means that at least one transmit antenna is used during each time slot; (b) Both the transmitter and receiver know the allocation matrix Q and the coding tensor B.
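Under assumptions (a) and (b), the construction of the transmitted and received tensors described above can be sketched numerically as follows; the dimensions are small toy values, the noise level is arbitrary, and the einsum calls implement the scalar definitions of X, C and Y directly.

```python
import numpy as np

rng = np.random.default_rng(1)
MS, MD, N, R, P = 3, 4, 6, 2, 5

S = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=(N, R)) / np.sqrt(2)  # QPSK-like symbols
B = np.exp(2j * np.pi * rng.random((MS, R, P)))                                # coding tensor, unit-modulus entries
Q = np.ones((P, MS))                                                           # all-ones antenna-to-slot allocation
H = (rng.standard_normal((MD, MS)) + 1j * rng.standard_normal((MD, MS))) / np.sqrt(2)

# compound core tensor c_{mS,r,p} = q_{p,mS} * b_{mS,r,p}
C = np.einsum("pm,mrp->mrp", Q, B)

# transmitted signal x_{mS,n,p} = sum_r c_{mS,r,p} s_{n,r}
X = np.einsum("mrp,nr->mnp", C, S)

# received tensor y_{mD,n,p} = sum_{mS} h_{mD,mS} x_{mS,n,p} + noise  (Tucker-2: Y = C x_1 H x_2 S + V)
Y = np.einsum("dm,mnp->dnp", H, X)
Y += 0.01 * (rng.standard_normal(Y.shape) + 1j * rng.standard_normal(Y.shape))
print(Y.shape)   # (MD, N, P)
```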
Uniqueness Issue
Due to the loading matrices factors being unique up to nonsingular matrices, the generalized Tucker-2 model is not essentially unique.This consequence can be verified by using the property of the mode-n product: where the noise tensor V has been omitted for convenience of notation, Θ S ∈ C R×R and Θ H ∈ C M S ×M S are nonsingular matrices.
It is shown, by applying the uniqueness theorem of the Tucker model in [25], that if the core tensor C is known, then S and H are unique up to a scaling ambiguity, i.e., the alternative solutions for S and H are S̄ = SΘ_S and H̄ = HΘ_H with Θ_S = βI_R and Θ_H = (1/β)I_{M_S}. Consequently, a priori knowledge of only one symbol is enough to resolve this scaling ambiguity factor β. Compared to the PARAFAC model used in existing receivers, the constructed Tucker-2 model only needs a priori knowledge of one symbol to eliminate the scaling ambiguity. Therefore, our scheme has higher spectral efficiency.
Identifiability Conditions
Identifiability of the constructed Tucker-2 model concerns the recovery of the parameters to be estimated. In this paper, it is directly linked to the estimation of the signal matrix S and the channel matrix H from the received signal tensor Y. Conditions for parameter identifiability are given in the following theorem.
Theorem 1. (Sufficient Conditions):
Assume that H has independent and identically distributed (i.i.d.) entries, that S has full column rank, and let P_1 denote the number of nonzero elements in Q. Then sufficient conditions for the identifiability of the signal matrix S and the channel matrix H are M_D ≥ M_S and R ≥ M_S.
Proof of Theorem 1. From Equations (12) and (13), necessary and sufficient conditions for the identifiability of S and H require that (I_P ⊗ H) F_1 and (I_P ⊗ S) F_2 have full column rank, i.e.,

Rank((I_P ⊗ H) F_1) = R  (21)
Rank((I_P ⊗ S) F_2) = M_S  (22)

Under the assumption in Theorem 1 that H has i.i.d. entries, M_D ≥ M_S ensures that H has full column rank. Since I_P and H have full column rank, I_P ⊗ H also has full column rank, i.e., Rank(I_P ⊗ H) = PM_S. Therefore, Equation (21) is satisfied if F_1 has full column rank, which follows from Equation (15) and the structure of the coding tensor. Since S has full column rank, we can deduce that Rank(I_P ⊗ S) = PR. Thus, condition (22) is satisfied if F_2 has full column rank. Rewriting F_2 from Equation (16) as the product of a block-diagonal matrix B_2 built from the slices of the coding tensor and the matrix Q_P = [D_1(Q), ..., D_P(Q)]^T, and recalling that Q does not have an all-zero column, Q_P has full column rank, so F_2 has full column rank if B_2 has full row rank. Since B_2 has a block-diagonal structure and the slices B_{••p} have different generators, R ≥ M_S ensures that B_2 has full row rank. Therefore, R ≥ M_S ensures that condition (22) is satisfied. This ends the proof.
Remark 1. The conditions in Theorem 1 are sufficient but not necessary for parameter identifiability. Sufficient conditions (21) and (22) also apply to the ALS algorithm. In fact, identifiability of the signal and channel parameters is still observed in our simulation results when M_S > R. Necessary conditions for parameter identifiability are based on the dimensions of (I_P ⊗ H) F_1 and (I_P ⊗ S) F_2. If the channel matrix H does not have full column or row rank, i.e., L < min(M_S, M_D), where L is the rank of H, then the identifiability conditions of Theorem 1 are no longer applicable because of the low-rank property of H. However, we can still deduce identifiability conditions based on Equations (21) and (22): necessary and sufficient conditions for the identifiability of S and H require that (I_P ⊗ H) F_1 and (I_P ⊗ S) F_2 have full column rank. We analyze this case further in Section 5.
Semi-Blind Receiver
The ALS algorithm is a classical solution for fitting tensor models. However, it is well known that the ALS algorithm exhibits convergence problems when collinearity is present in one or more modes [27,28]. The LM algorithm has been successfully used to fit the PARAFAC and PARATUCK2 models; it adapts to collinearity problems and provides quadratic convergence [19,20]. As an iterative algorithm, the LM algorithm is also sensitive to initialization. Thus, optimizing the initial value is important for improving the performance of the LM algorithm.
In this section, a novel semi-blind receiver based on the optimized LM algorithm is developed for joint symbol and channel estimation. The basic principle of the optimized LM algorithm is to first solve an LSK approximation problem [29,30], based on the singular value decomposition (SVD) of a rank-1 matrix, to initialize the symbol and channel matrices, and then to update these two matrices simultaneously in each iteration. Finally, the modified singular value projection (SVP) based algorithm [31,32] is used to further improve the performance of channel estimation.
The proposed optimal initialization method is based on the Kronecker least squares algorithm, which exploits SVD-based rank-one approximations to get an initial estimation of S and H from their Kronecker matrix product.
By post-multiplying Equation (14) with the pseudo-inverse of its known factor (cf. Z = Y_3 (F_3)^† in Algorithm 1), an LS estimate Z of the Kronecker product of the symbol and channel matrices is obtained, from which the initial estimates Ŝ(0) and Ĥ(0) of S and H are extracted. According to Theorem 2.1 in [29], the entries of Z can be rearranged into a vector ∆ ∈ C^{N M_D R M_S × 1} such that Ξ = unvec(∆) ∈ C^{M_D M_S × NR} is a rank-one matrix. In this way, the Kronecker product matrix Z has been rearranged into the rank-one matrix Ξ.
Applying SVD to the rank-one matrix Ξ, the vectors vec Ŝ(0) and vec Ĥ(0) can be estimated by using a rank-one approximation method, i.e., by computing its largest singular value and the corresponding left and right singular vectors.Ŝ(0) and Ĥ(0) are determined up to a scaling factor, which can be removed by setting s 1,1 = 1 as in [27,30].The detailed process is shown below.
By applying the SVD to the rank-one matrix Ξ, we have Ξ = UΣV^H, where Σ ∈ C^{M_D M_S × NR} is a diagonal matrix containing the singular values of Ξ, and U ∈ C^{M_D M_S × M_D M_S} and V ∈ C^{NR×NR} are unitary matrices. Using the rank-one approximation Ξ ≈ σ_{•1} U_{•1} V_{•1}^H, where σ_{•1} is the largest singular value and U_{•1} and V_{•1} are the corresponding left and right singular vectors, the vectors vec(Ŝ(0)) and vec(Ĥ(0)) are estimated from these dominant singular vectors up to a scalar factor α. In practical communication systems, this scalar factor α can be removed by setting s_{1,1} = 1; thus the value of α in this paper equals 1 when s_{1,1} = 1. Note that we could also choose Equation (14) to implement the above optimal initialization procedure. Define a parameter vector stacking all the unknowns as u = [u_S^T, u_H^T]^T ∈ C^{Q×1}, where u_S = vec(S), u_H = vec(H), and Q = NR + M_D M_S. The cost function to be minimized is

φ(u) = Σ_{m_D,n,p} |y_{m_D,n,p} − ỹ_{m_D,n,p}(u)|^2

where ỹ_{m_D,n,p}(u) is the typical element of the tensor Ỹ(u) ∈ C^{M_D×N×P}, which denotes the output tensor in the absence of noise.
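Before turning to the iterative updates, the rank-one initialization described above can be summarized in the following minimal numpy sketch. It assumes the LS estimate has the block structure Z ≈ S ⊗ H (swap the roles of the two factors if the unfolding used gives the opposite order); the rearrangement, the rank-one SVD step and the s_{1,1} = 1 normalization follow the description above, while the function name and loop ordering are mine.

```python
import numpy as np

def kron_rank1_init(Z, N, R, MD, MS):
    """Initial estimates (S0, H0) from Z ~= kron(S, H) via a rank-one SVD."""
    Xi = np.empty((MD * MS, N * R), dtype=complex)
    k = 0
    for j in range(R):                               # column-major sweep over the entries of S
        for i in range(N):
            blk = Z[i*MD:(i+1)*MD, j*MS:(j+1)*MS]    # equals S[i, j] * H for an exact Kronecker product
            Xi[:, k] = blk.reshape(-1, order="F")     # vec(S[i, j] * H) = S[i, j] * vec(H)
            k += 1
    U, sig, Vh = np.linalg.svd(Xi, full_matrices=False)
    h_vec = np.sqrt(sig[0]) * U[:, 0]                # proportional to vec(H)
    s_vec = np.sqrt(sig[0]) * Vh[0, :]               # proportional to vec(S)
    H0 = h_vec.reshape(MD, MS, order="F")
    S0 = s_vec.reshape(N, R, order="F")
    alpha = S0[0, 0]                                  # known pilot symbol s_{1,1} = 1 fixes the scale
    return S0 / alpha, H0 * alpha

# quick self-check with random factors whose (1, 1) symbol equals 1
rng = np.random.default_rng(2)
S = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2)); S[0, 0] = 1.0
H = rng.standard_normal((3, 5)) + 1j * rng.standard_normal((3, 5))
S0, H0 = kron_rank1_init(np.kron(S, H), 4, 2, 3, 5)
assert np.allclose(S0, S) and np.allclose(H0, H)
```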
Here z(u) ∈ C^{L×1} denotes the vector of residuals, with L = NPM_D. Let J ∈ C^{L×Q} be the Jacobian matrix of z(u) with respect to u, and let g denote the gradient of φ(u) with respect to u, given by J^H z(u) up to a constant factor. The optimized LM algorithm consists of optimizing the initialization u^{(0)} and estimating u^{(i+1)} at the (i + 1)-th iteration from u^{(i)} at the i-th iteration via u^{(i+1)} = u^{(i)} + ∆u^{(i)}. The step ∆u^{(i)} ∈ C^{Q×1} is obtained by solving the modified normal equations

(J^H J + λ^{(i)} I_Q) ∆u^{(i)} = −g

where λ^{(i)} is the damping parameter that ensures ∆u^{(i)} is a descent direction. The whole procedure of the optimized LM algorithm used in our semi-blind receiver is listed in Algorithm 1.
Otherwise, u^{(i+1)} is invalid; set λ^{(i+1)} = τλ^{(i)} and τ ← 2τ. Step 6: i ← i + 1; end. Finally, acquire S^{(∞)} and H^{(∞)} and remove the scaling ambiguity. Owing to the partitioned structure of u, the matrix J^H J can be built from the blocks J_{u_S}^H J_{u_S}, J_{u_H}^H J_{u_H} and J_{u_S}^H J_{u_H}. Similarly, the gradient g can be written as the concatenation of two gradients, g_{u_S} ∈ C^{NR×1} and g_{u_H} ∈ C^{M_D M_S×1}. In Algorithm 1, the estimated matrix H^{(∞)} is projected onto a low-rank estimate H_{new}^{(∞)} by the SVP-based algorithm when L < min(M_S, M_D). Here

H_{new}^{(∞)} = Σ_{l=1}^{L} β_l U_{•l}^{(C)} (V_{•l}^{(C)})^H

where β_l denotes the l-th largest singular value of H^{(∞)}, and U_{•l}^{(C)} and V_{•l}^{(C)} are the corresponding left and right singular vectors. The overall complexity of the optimized LM algorithm mainly depends on the per-iteration complexity and the number of iterations. The per-iteration complexity of this algorithm can be estimated as O((NR + M_D M_S)^3). Since the antenna-to-slot allocation matrix and the coding tensor are fixed and known at the receiver, convergence of the optimized LM algorithm is usually achieved in only a few iterations. The average number of iterations for the optimized LM algorithm is further analyzed in Section 6.
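The core update can be sketched generically as below. This is a standard damped Gauss-Newton step with the accept/reject rule paraphrased from Algorithm 1, not a line-by-line reproduction of it; the residual and Jacobian functions passed in are placeholders supplied by the user, and the damping schedule (halving on success, doubling on failure) is one common choice rather than the paper's exact rule.

```python
import numpy as np

def lm_step(residual_fn, jacobian_fn, u, lam):
    """One Levenberg-Marquardt trial step on a complex parameter vector u."""
    z = residual_fn(u)                      # residual vector z(u), length L
    J = jacobian_fn(u)                      # Jacobian of z w.r.t. u, shape (L, Q)
    g = J.conj().T @ z                      # gradient of the squared-residual cost
    A = J.conj().T @ J + lam * np.eye(u.size)
    du = np.linalg.solve(A, -g)             # modified normal equations
    u_new = u + du
    if np.linalg.norm(residual_fn(u_new)) < np.linalg.norm(z):
        return u_new, lam / 2.0             # accept the step and relax the damping
    return u, lam * 2.0                     # reject: keep u and increase the damping
```

In the receiver, u stacks vec(S) and vec(H); a typical stopping rule (not spelled out above) is to iterate until the relative decrease of the cost falls below a tolerance or a maximum iteration count is reached.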
Extension to Multi-User Massive Mimo Systems
In this section, we show that the developed algorithm can be applied to multi-user massive MIMO systems with a hybrid precoding architecture for joint symbol and channel estimation. We consider a fully-connected hybrid precoding architecture, which is the typical model of massive MIMO systems. The base station communicates with M users simultaneously, and each mobile station is equipped with M_D antennas. The base station is equipped with M_S antennas and M_RF independent radio frequency chains to transmit R streams to the M_D receive antennas of each mobile station. In the considered downlink system, each symbol s_{n,r} is coded by a three-dimensional baseband code b_{m_RF,r,p} followed by a radio frequency code e_{m_S,m_RF} at the base station. At the m-th (m = 1, ..., M) mobile station, the discrete-time baseband signal at the m_D-th receive antenna is written as:

y^{(m)}_{m_D,n,p} = Σ_{m_S=1}^{M_S} Σ_{m_RF=1}^{M_RF} Σ_{r=1}^{R} h^{(m)}_{m_D,m_S} q_{p,m_S} e_{m_S,m_RF} b_{m_RF,r,p} s_{n,r} + v^{(m)}_{m_D,n,p}  (45)

Following [33,34], we also adopt a geometric channel model with L_m scatterers between the base station and the m-th mobile station, l = 1, ..., L_m. Under this model, the channel matrix H^{(m)} is expressed as

H^{(m)} = Σ_{l=1}^{L_m} α^{(m)}_l a_MS(θ^{(m)}_l) a_BS^H(φ^{(m)}_l)  (48)

where α^{(m)}_l denotes the complex gain of the l-th path, θ^{(m)}_l and φ^{(m)}_l are the l-th path's azimuth angles of arrival and departure, and a_MS(θ^{(m)}_l) and a_BS(φ^{(m)}_l) are the steering vectors at the mobile station and base station, respectively. For uniform linear arrays, the entries of the steering vectors are powers of e^{2π√(−1) d sin(·)/λ}, where λ denotes the signal wavelength and d is the distance between two neighboring antenna elements. Similar to the analysis of Section 3.1, the received signal tensor Y^{(m)} of the noiseless signal also satisfies the Tucker-2 model, and the proposed algorithm in Section 4 remains suitable for joint symbol and channel estimation at each mobile station. However, two points are important to note here. First, the identifiability conditions of Theorem 1 are no longer applicable because of the low-rank property of H^{(m)}. However, we can deduce new identifiability conditions based on Equations (21) and (22), i.e., necessary and sufficient conditions for the identifiability of S and H^{(m)} require that (I_P ⊗ H^{(m)}) F_1 and (I_P ⊗ S) F_2 have full column rank. For convenience of analysis, we assume that the antenna-to-slot allocation matrix is an all-ones matrix. Then, we have the following theorem.
Theorem 2. Assume that the path gains of the low-rank channel H^{(m)} are Rayleigh distributed and that N and R are large enough. Then a sufficient condition for the identifiability of H^{(m)} and S is P ≥ max(R/L_m, M_S/N, M_S/R).  (51)
Proof of Theorem 2. The channel model H^{(m)} is expressed as Equation (48). The rank of H^{(m)} is L_m, and the path gains of H^{(m)} are Rayleigh distributed. F_1 is a full-rank matrix, since it contains different generators. Consequently, min(PM_D, PL_m, PM_S) ≥ R ensures that (I_P ⊗ H^{(m)}) F_1 has full column rank. Since H^{(m)} is low-rank, i.e., L_m < min(M_S, M_D), P ≥ R/L_m can ensure that (I_P ⊗ H^{(m)}) F_1 has full column rank. Since N and R are large enough and S is random, the rank of S equals N or R. Moreover, F_2 is also a full-rank matrix because of its special structure. We deduce that (I_P ⊗ S) F_2 has full column rank if min(PN, PR) ≥ M_S, i.e., P ≥ max(M_S/N, M_S/R). Therefore, condition (51) ensures the identifiability of H^{(m)} and S. This ends the proof of Theorem 2.
Second, the low-rank property of the mmWave massive MIMO channel should be exploited.Due to very limited scattering of the mmWave channel and larger quantities of transmitting and receiving antennas, L m is usually less than M S and M D .Different from the conventional MIMO channel matrix that usually has full column or row rank, the rank of the mmWave massive MIMO channel matrix is much smaller than its dimension.This is called 'low-rank property' of the mmWave massive MIMO channel matrix.Therefore, the final part of the proposed Algorithm 1 takes advantage of this low-rank constraint rank H (m) L m to further improve the estimation accuracy of the channel.
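A toy generator for such a low-rank geometric channel is sketched below, assuming half-wavelength uniform linear arrays at both ends and unit-variance complex Gaussian path gains; any normalization factor in front of the sum, as well as the function and parameter names, are my own choices rather than the paper's.

```python
import numpy as np

def ula_steering(n_antennas, angle, d_over_lambda=0.5):
    """Steering vector of a uniform linear array for a given azimuth angle."""
    n = np.arange(n_antennas)
    return np.exp(2j * np.pi * d_over_lambda * n * np.sin(angle))

def geometric_channel(MD, MS, L, rng):
    """H = sum_l alpha_l * a_MS(theta_l) * a_BS(phi_l)^H, rank at most L."""
    H = np.zeros((MD, MS), dtype=complex)
    for _ in range(L):
        alpha = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
        theta = rng.uniform(0, 2 * np.pi)       # AoA at the mobile station
        phi = rng.uniform(0, 2 * np.pi)         # AoD at the base station
        H += alpha * np.outer(ula_steering(MD, theta), ula_steering(MS, phi).conj())
    return H

rng = np.random.default_rng(3)
H = geometric_channel(MD=6, MS=48, L=2, rng=rng)
print(H.shape, np.linalg.matrix_rank(H))   # (6, 48) with rank <= 2
```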
Simulation Results and Discussion
We studied the performance of the proposed semi-blind receiver through numerical simulations. The channel matrix H has independent and identically distributed (i.i.d.) complex Gaussian entries with zero mean and unit variance. The default values of the system parameters are set to M_S = M_D = 4, and the antenna-to-slot allocation matrix is an all-ones matrix. Throughout the simulations, the coding tensor C is known at the receiver. Quadrature phase-shift keying (QPSK) constellations are used to modulate the transmitted symbols. All results are averaged over 10,000 independent Monte Carlo simulations. As in [8,9], the signal-to-noise ratio (SNR) at the receiver is defined as SNR = 10 log_10(||Ỹ||_F^2 / ||V||_F^2), where Ỹ denotes the noise-free signal tensor (the tensor-of-interest) containing both symbol and channel parameters and V is the noise tensor. For each channel realization, the normalized mean square error (NMSE) for the different receivers is computed as ||Ĥ − H||_F^2 / ||H||_F^2, where Ĥ is the estimate of H at convergence. In the first example, we evaluate the convergence performance of the optimized LM algorithm used in our semi-blind receiver. We assume the system design parameters N = P = 5 and R = 3. In Figure 2, the average value of the cost function is plotted versus the number of iterations for three SNR values. We observe from Figure 2 that, for each SNR value, the cost function decreases as the number of iterations increases until the algorithm converges. We can also see that, for the same number of iterations, the cost function decreases as the SNR increases. The proposed algorithm needs only a few iterations to converge; for instance, the optimized LM algorithm achieves convergence in about 10 iterations at an SNR of 20 dB. In the second example, we evaluated the estimation performance of the proposed semi-blind receiver in terms of bit error rate (BER) and the NMSE of channel estimation. In particular, we compared it with the PARAFAC-based receiver with the KRST (P-KRST) coding scheme in [8] and the training-based receiver with the space-time (TB-ST) coding scheme. For the TB-ST scheme, the symbol matrix is composed of two parts as in [9], i.e., the training symbol matrix and the unknown data symbol matrix. N_tr denotes the length of the channel training sequence in the TB-ST receiver.
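For completeness, the two evaluation quantities can be computed as in the short sketch below; it assumes the SNR is the ratio of noise-free signal energy to noise energy in dB and that the NMSE is normalized by the true channel energy, consistent with the definitions above.

```python
import numpy as np

def snr_db(Y_clean, V):
    """SNR = 10 log10(||Y_clean||_F^2 / ||V||_F^2), both given as tensors."""
    return 10 * np.log10(np.sum(np.abs(Y_clean) ** 2) / np.sum(np.abs(V) ** 2))

def nmse(H_true, H_hat):
    """Normalized mean square error of a channel estimate."""
    return np.sum(np.abs(H_hat - H_true) ** 2) / np.sum(np.abs(H_true) ** 2)
```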
The transmission rates for the proposed coding scheme and the KRST coding scheme are RN/(PN) = R/P and M_S N/(PN) = M_S/P (data symbols per symbol period), respectively. However, the KRST coding scheme needs to know the first column of the signal matrix S to eliminate the scaling ambiguity, while the proposed coding scheme only needs to know s_{1,1}. Thus, the efficient transmission rates for the proposed coding scheme and the KRST coding scheme are (RN − 1)/(PN) and (M_S N − M_S)/(PN), respectively. To ensure a fair comparison, the proposed coding scheme and the KRST coding scheme should keep the same efficient transmission rate, i.e., N = (M_S − 1)/(M_S − R). Thus, the system design parameters in this example are set to M_S = 4, P = 7, and R = N = 3. For the TB-ST coding scheme, we divide P = P_tr + P_d, where the blocks P_tr = 2 and P_d = 5 are used for channel training and data transmission, respectively. Therefore, the length of the channel training sequence in the TB-ST receiver is N_tr = P_tr N = 6.
The BER performance of different receivers versus SNR is shown in Figure 3.It can be seen that the proposed semi-blind receiver outperforms the P-KRST and TB-ST receiver.The NMSE performance of the different receivers is demonstrated in Figure 4.It can be seen from Figure 4 that the P-KRST receiver has the best performance of channel estimation, and the proposed semi-blind receiver yields a smaller NMSE compared with the TB-ST receiver.From [8], the per-iteration complexity in the PARAFAC based receiver is O (M S M D PN).The complexity of the TB-ST scheme can be estimated as O (N r M S (M D + N r ) + RPM D (N + R)).The per-iteration complexity of the proposed O-LM algorithm is given at the end of Section 4. The TB-ST scheme has the least computational complexity due to the use of the channel training sequence.Due to the adoption of the simple KRST coding scheme, the PARAFAC based receiver has lower complexity than that of the proposed receiver.However, the TB-ST receiver requires a long channel training sequence, the PARAFAC-based receiver needs to know the first column or row of the signal matrix to eliminate the scaling ambiguity, but the proposed receiver only needs to know one symbol of the signal matrix.
In the third example, we evaluated and compared the performance of the traditional ALS (T-ALS) and optimized LM (O-LM) algorithms.We assume the system design parameters N = P = L and R = 5.Correlated MIMO channel is considered in this example, and the channel matrix H is modeled as in [35], where ρ denotes the normalized correlation coefficient with magnitude |ρ| ≤ 1.We consider ρ = 0 (non-correlation) and ρ = 0.8 (strong correlation), respectively.For each Monte Carlo run, the T-ALS algorithm is initialized with ten different random matrices as in [20,36].The estimation performance is evaluated after selecting the best initialization, which is the one that results in the minimum value of δ (j) .We observe from Figure 5 that the T-ALS and O-LM algorithms give a similar BER and NMSE performance, which means that these two algorithms converge to the same point.For the right subfigure of Figure 5, the NMSE of the T-ALS and O-LM algorithms is also shown in Table 1 for the sake of comparison.We can also observe from Figure 5 that for these two algorithms, BER and NMSE performance degrade when the channel becomes strongly correlated.The overall complexities of the O-LM algorithm and the ALS algorithm depend on the per-iteration complexity and the number of iterations.The per-iteration complexity of the O-LM algorithm is higher than that of the T-ALS algorithm.However, because of the robustness of the O-LM algorithm, the O-LM algorithm needs fewer iterations compared with the T-ALS algorithm.Therefore, the proposed algorithm has lower complexity compared with the existing T-ALS algorithm.The mean processing times required in the T-ALS and O-LM algorithms are shown in Figure 6.We observe that the mean processing time required in the O-LM algorithm is shorter than that of the T-ALS algorithm, especially when the channel becomes strongly correlated.From Figure 6 we can also observe that the advantage of the O-LM algorithm is obvious as L decreases from 8 to 7 compared with the T-ALS algorithm.In the fourth example, the influence of design parameters (P, R) for the proposed receiver is studied.In the left subfigure of Figure 7, it can be seen that the BER decreases when P increases, which expounds the performance gain brought by the time diversity.It can also be seen from this subfigure that the BER increases as the number of data streams R increases.The impact of design parameters (P, R) on the NMSE performance is shown in the right subfigure of Figure 7.As expected, we can observe that the NMSE decreases linearly as a function of P, and increases as R increases.Hence, appropriate values for the design parameters P and R can be selected according to requirements of the system performance and transmission rate.
In the fifth example, we assume M S = 3, R = 4, and N = P = 8 for our semi-blind receiver.The influence of the receive antenna was analyzed.We also compared the performance of our chosen coding tensor (OCCT) B with the random coding tensor (RCT) whose entries are circularly-symmetric Gaussian random variables.In Figure 8, it can be seen that both the BER and NMSE decrease when M D increases, which expounds the performance gain brought by the receive diversity.We also observed from Figure 8 that OCCT has a better performance than RCT.Although OCCT is suboptimal, this choice has good symbol and channel identifiability properties, which is advantageous from a receiver design viewpoint.In the sixth example, we studied the estimation performance of two different transmission schemes for our semi-blind receiver.The default values of the system parameters were set to M D = 5 and N = 6.In scheme 1, we assume M S = 2, R = 5, and P = 6.Three different antenna-to-slot allocation matrices are given as follows: In scheme 2, we assume M S = 3, R = 3, and P = 7.Three different antenna-to-slot allocation matrixes are given as follows: The BER and NMSE performance of the proposed receiver for different schemes is shown in Figure 9.For scheme 1, the proposed receiver with Q 2 has a better BER and NMSE performance than that of the proposed receiver with Q 1 .The reason is that the allocation matrix Q 2 provides a higher transmit spatial diversity gain than the allocation matrix Q 1 .For the same reason, the allocation matrix Q 3 outperforms Q 2 , and the allocation matrix Q 5 outperforms Q 4 .We also observe in Figure 9 that scheme 2 has a better BER and NMSE performance than scheme 1.The reason is that scheme 2 can provide a higher coding diversity than scheme 1.It is worth noting that scheme 1 has higher spectral efficiency compared with scheme 2. The transmission rates for scheme 1 and scheme 2 are about 5/6 and 3/7 (data symbols per symbol period), respectively.In summary, a desired tradeoff between estimation performance and transmission rate can be obtained by designing a suitable scheme.In final example, the multi-user massive MIMO system with a fully-connected hybrid precoding architecture was considered, where M S = 48, M × M D = 6 × 6, and L m = 2 for all m = 1, . . ., M. The carrier frequency of this system is set as 28 GHz [37], and d = λ/2.We assume that AoAs/AoDs are uniformly distributed in [0, 2π].For the considering multi-user massive MIMO system, we also evaluate the estimation performance of the proposed receiver in terms of BER and NMSE of channel estimation.It can be seen from Figures 10 and 11 that the BER and NMSE of the proposed semi-blind receiver decrease as P and N increase, and increase as R increases.The increase of P will reduce the transmission rate, but the increase or decrease of N has no effect on the transmission rate.That means that we can improve the estimation performance of the proposed semi-blind receiver by increasing N if the channel is constant over a long time interval before changing to another realization.We also observed from Figures 10 and 11 that the proposed semi-blind receiver still has a good performance for joint symbol and channel estimation even in a shorter length of code and information symbol, and a larger number of data streams, i.e., P = 24, N = 6, and R = 12.
Conclusions
We have developed a robust semi-blind receiver combined with the Tucker-2 model for multiple-antenna systems. The proposed receiver jointly estimates the information symbols and channel parameters. Compared with existing semi-blind receivers, the proposed one gives better estimation performance and has higher spectral efficiency. Moreover, the proposed semi-blind receiver is also applicable to multi-user massive MIMO systems. Perspectives of this work include an extension to relay-assisted massive MIMO systems by applying the antenna allocation matrix at the relays. Since both the source-relay and relay-destination channel matrices have the low-rank property, new identifiability conditions and efficient fitting algorithms will be deduced and developed, respectively. Another perspective is extending the proposed robust semi-blind receiver to mmWave MIMO systems for joint channel parameter estimation, including AoAs, fading coefficients and time delays [38,39].
Figure 1 .
Figure 1.Block-diagram of the system model.
Algorithm 1
Here, the n-th and m_D-th column vectors of the identity matrices I_N and I_{M_D} are used, respectively. The optimized LM algorithm, first stage: compute the LS estimate of Z as Z = Y_3 (F_3)^†. In Equation (45), e_{m_S,m_RF} and h^{(m)}_{m_D,m_S} are the (m_S, m_RF)-th and (m_D, m_S)-th elements of the radio frequency precoder matrix E ∈ C^{M_S×M_RF} and the massive MIMO channel matrix H^{(m)} ∈ C^{M_D×M_S}, respectively, and y^{(m)}_{m_D,n,p} is the typical element of the received signal tensor Y^{(m)} ∈ C^{M_D×N×P}. Then Equation (45) can be rewritten in tensor form. θ^{(m)}_l and φ^{(m)}_l are the l-th path's azimuth angles of arrival and departure (AoAs/AoDs) at the mobile station and base station, respectively. Finally, a_BS(φ^{(m)}_l) and a_MS(θ^{(m)}_l) are the steering vectors at the base station and mobile station, respectively, assumed here to correspond to uniform linear arrays.
Figure 2 .
Figure 2. Cost function versus the number of iterations.
Figure 3 .
Figure 3. Bit error rate (BER) performance of different receivers versus signal-to-noise ratio (SNR).
Figure 5 .
Figure 5. BER and NMSE performance of traditional alternating least squares (T-ALS) and O-LM algorithms for different L and ρ.
Figure 6 .
Figure 6.The mean processing times required in T-ALS and O-LM algorithms versus SNR.
Figure 7 .
Figure 7. BER and NMSE performance of the proposed receiver for different P and R.
Figure 8 .
Figure 8. Influence of the receive antenna and the coding tensor.
Figure 9 .
Figure 9. Performances of the proposed receiver for different schemes.
Figure 10 .
Figure 10.BER performance of the proposed receiver for the multi-user massive multiple-input multiple-output (MIMO) system.
Figure 11 .
Figure 11.NMSE performance of the proposed receiver for the multi-user massive MIMO system.
Table 1 .
NMSE of the T-ALS and O-LM algorithms. | 9,363.2 | 2019-05-16T00:00:00.000 | [
"Computer Science",
"Business"
] |