Natural Hazards and Earth System Sciences

Development of a management tool for reservoirs in Mediterranean environments based on uncertainty analysis

In compliance with the Water Framework Directive, there is a need for integrated management of water resources, which involves the elaboration of reservoir management models. These models should include the operational and technical aspects that allow an optimal management to be forecast in the short term, besides the factors that may affect the volume of water stored in the medium and long term. The climate fluctuations of the water cycle that affect the reservoir watershed should be considered, as well as the social and economic aspects of the area. This paper shows the development of a management model for Rules reservoir (southern Spain), through which the water supply is regulated based on set criteria, in a way that is sustainable with the existing commitments downstream, with the supply capacity being well established depending on demand, together with the probability of failure when the operating requirements are not fulfilled. The results obtained allowed us to find out the reservoir response at different time scales, to introduce an uncertainty analysis and to demonstrate the potential of the methodology proposed here as a tool for decision making.

Introduction

The setting up of simulation models capable of studying the response of a system like a reservoir has become a fundamental tool for its management. These models attain even greater relevance for systems that regulate fluvial networks subjected to extreme hydrological variability at annual and seasonal scales. This is the case of the Mediterranean climate, where flow events are few and intense, and in which it is necessary to attend to supply demands from a wide network for irrigation, urban consumption and energy production. Simulation must be carried out under diverse scenarios, namely different environmental, social, economic, territorial and energy conditions, which involves fluctuations in the inflows to the system and in the demands due to variations in the population, changes in land uses, in hydro-electric production policy or in the discharges needed for fluvial channel maintenance downstream. In the light of the Water Framework Directive, diverse global models for reservoir management have been proposed in terms of water demands and their planning. Among others, Wang et al. (2009) suggested a simulation model based on neural networks. These networks are defined through a "self-training" process over the available measured dataset. All the reservoir demands can be taken into account in the resulting model, which can usefully assist the definition of the operating criteria for the reservoir. Pulido-Velazquez et al. (2008) modelled the surface-groundwater flows in a watershed using a hydro-economic approach, from which these authors proposed management in terms of opportunity costs. Pallottino et al. (2005) proposed a system for decision making which took into account climate uncertainty and the hydrological processes by means of a scenario simulation analysis, as an alternative when probabilistic rules or deterministic models cannot be adopted to represent that uncertainty. Jørgensen and Bendoricchio (2001) set the bases for the development of a model for the analysis of pollution risks applicable to reservoirs. Cabecinha et al.
(2009) used a stochastic-dynamic methodology to forecast how changes in land uses affected water quality in reservoirs, by linking them to the ecological state of the water.

With the aim of obtaining a global model for reservoir management, Gómez-Beas (2008) developed a first approximation for the numerical integration of the non-linear continuity equation, applied to Rules reservoir (southern Spain) under a set of simplifying hypotheses. This author obtained annual evolution curves of the stored volume for diverse input/output scenarios, starting from the stochastic simulation of the inflow series to the reservoir based on available records.

The objective of this work was to develop a management model for reservoirs in Mediterranean watersheds which takes into account the uncertainty associated with the water inflows due to the time variability of the climate, and which is able to assess, from this, the optimal operating criteria for a given set of objectives. The resulting model supplies information on the evolution of the reservoir state in terms of the volume of water stored, as a result of the balance between the inflows to the reservoir and the demands imposed on the supply system. To set up the model, a grey-box type system was opted for, i.e. a system whose fundamental governing equations are known, even when certain processes may not be contemplated in those equations, and which permits direct control over the regulation elements of the outflows. Thus, in a single scheme, the control of the system and the verification of its service and exploitation conditions are integrated, with the aim of preventing the operational stoppage of the supply network. The model was applied to Rules reservoir in Granada (southern Spain), where agricultural and urban demands coexist in a special environment, in which the presence of snow gives rise to two filling cycles in this reservoir: one due to rainfall events during autumn-winter, and a second one due to snowmelt during spring. Different scenarios related to operation criteria were run to show the usefulness of the model as a decision-making tool, whose application to another reservoir can be achieved directly just by changing the project parameters and generating local stochastic samples from the measured/simulated inflow data. This method can be especially useful when little information is available about the inflow regime.

Study area and water demands

The system studied in this paper is Rules reservoir, which is the dynamic system for the regulation and control of the water supply for urban consumption and irrigation in the coastal area of the Guadalfeo river watershed in the south of Granada (southern Spain) (Fig. 1). The basin climate is the result of the interaction between semiarid Mediterranean and Alpine climate conditions (Aguilar, 2008), with an average annual rainfall of 656 mm and 107.5 hm³ yr⁻¹ of inflows to the reservoir.

Firstly, in order to set up the management model, a study was made of the demand data for each existing water use, as well as their distribution throughout the year. The estimates used in this study were made from data available in the Southern Basin Hydrological Plan (PHCS, 1999), and they must only be considered as approximate values of the current situation, which may have changed due to variations in land uses or in the priority criteria for demand guarantee and/or distribution.
The urban supply to the population of the Granada coast is the priority demand met from the reservoir. In PHCS (1999) a volume of 13 hm³ yr⁻¹ was established for this use. On the other hand, the supply pipeline was designed for a maximum flow of 1 m³ s⁻¹, which corresponds to a volume of 30 hm³ yr⁻¹. Along with these data, it is necessary to take into account the seasonal nature of the coastal population, where summer tourism causes a considerable increase in the water demand, this period being estimated to last from 30 June to 15 September every year. From this information, the demanded flow for urban supply use was set at 0.95 m³ s⁻¹, which corresponds to that required by the peak seasonal population, and 0.268 m³ s⁻¹ during the rest of the year.

As for the water intended for agriculture, starting from the irrigation provisions established by the PHCS (1999) for the irrigable area of Motril-Salobreña with a horizon of 20 yr, a provision of 113.62 hm³ yr⁻¹ was obtained. This volume is distributed throughout the year according to crop needs. As a first approximation in this model, a typical crop of the area, subtropical fruit trees, was considered.

Together with these principal demands satisfied from Rules reservoir, the demand intended for the production of hydroelectric energy, considered with a maximum flow of 2.5 m³ s⁻¹, and that for the preservation of the river ecosystem downstream, for which an ecological flow of 1 m³ s⁻¹ was provided, should also be taken into account.

Model conceptual schemes

The interpretation of Rules reservoir as a dynamic system for the regulation and control of supplies to the coastal area of the Guadalfeo river watershed makes it necessary to use a control and verification model against failures, within which the fluvial regime is represented with its principal features in terms of its annual and hyperannual contributions. In the setting up of this model, its architecture is defined as a structure of calculation blocks which together operate as a dynamic system with a single input and a single output; i.e. a system in which the impulse is a temporal series of measured or simulated flows, and the response is the sequence of the states of the reservoir, interpreted as its storage capacity, as a consequence of certain work and demand conditions. The model describes the behavior of the reservoir in the form of an advanced-control grey-box tool for system identification, i.e. a system resulting from the combination of black-box and physical models (Tulleken, 1993). Black-box models are based on observed data and obtained by means of experimentation, whereas physical models are founded on specific knowledge of the response of the processes. With the aim of obtaining the advantages of both, their combination was opted for, so that the black-box model evolves towards a grey-box model physically more consistent with the reality of the system being studied.

In order to carry out the control and verification, the failure mode associated with the operational stoppage limit state was considered (Losada, 2001); i.e.
that in which the service is reduced or temporarily suspended due to causes outside the system. In the case of the system supplied from the reservoir, the failure is defined as the drop of the reservoir level below the established operational thresholds, which translates into a halt of the supply system due to insufficient water resources and implies an unacceptable social repercussion. In order to avoid that situation, the system's outflows are regulated by acting dynamically on the supply intakes, calculating their opening from the reservoir level and the instantaneous demand data by integrating the continuity equation at every time step, and taking into account the situation of the reservoir at previous moments. If the reservoir level drops below the risk threshold, the outflow is reduced, or even stopped, during the time necessary to ensure the system's recovery so that the service can be guaranteed during a period of time. According to the priority established by the PHCS (1999), the model distinguishes between the urban supply and the remaining uses.

The verification was carried out by integrating the continuity equation under the necessary conditions, assimilating the reservoir to a control volume variable in time with a single input, the inflow series, and several outputs, namely the outflows from the intakes, the loss of volume from evaporation at the water surface and, eventually, the outflow over the reservoir spillway. In order to integrate the continuity equation applied to the reservoir, two simplifying hypotheses were defined. The first one refers to the density of the water stored; from the different work accomplished in Rules reservoir (Cruz, 2005; Quevedo, 2005), summer thermal stratification takes place every year, which is not always stable, giving rise to turbulent mixing processes due to the action of the wind or to the interaction with the perimeter of the reservoir. During the rest of the year, full mixing is achieved and the water density is uniform throughout the water column. Thus, in the model, the water density was assumed to be constant over time and uniform in the whole control volume. The second simplifying hypothesis admits a prismatic reservoir volume for each differential depth, so that the differential volume for each time step can be calculated from the differential depth and the plane reservoir surface, the latter obtained by means of a polynomial fit to measured data.

In accordance with the above, the modes of failure of the system were verified through the continuity equation, Eq. (1),

S_e dH = (Q_in − Q_out) dt,    (1)

where S_e is the reservoir water surface [L²], dH is the depth differential [L], Q_in is the inflow to the system [L³ T⁻¹], and Q_out comprises the reservoir outflows [L³ T⁻¹] according to Eq. (2),

Q_out = Q_outlet + Q_spillway + Q_evap.    (2)

Q_outlet is the sum of the outlet flows of each supply intake needed to satisfy demand [L³ T⁻¹], calculated according to Eq. (3), where C is a coefficient associated with the head losses through the diverse elements, comprised between 0 and 1, S_out is the total drainage section of the corresponding intake [L²], and H − Cota represents the head over the pipe axis [L]. Q_spillway is the spillway outflow [L³ T⁻¹], should it occur, calculated from the fit of Moñino (2004) to experimental data in the expression of Eq. (4),
where H − P is the effective head [L], calculated as the difference between the total depth and the crest height of the spillway, provided that H − P is greater than zero, and A and B are parameters obtained experimentally for this particular spillway. Q_evap is the daily evaporated flow from the reservoir [L³ T⁻¹], obtained through the formula of Neitsch et al. (2002) in Eq. (5),

Q_evap = η ET₀ S_e,    (5)

where η is the evaporation coefficient, estimated as 0.6, ET₀ is the potential evapotranspiration obtained with the Penman-Monteith formula [L T⁻¹] and S_e is the reservoir free surface [L²].

The model was programmed in Simulink, the Matlab module within which the main processes can be formulated analytically, thus turning it into a grey-box type model. Fig. 2 depicts the model flow diagram. First, the model reads the input data, namely the inflow, the demand, the reservoir level at the previous moment and ET₀, among others. Next, it integrates the continuity equation, obtaining a new reservoir level for the present moment. The following step is the calculation of the expected outflow which could be released through each of the intakes depending only on that level, without applying the pressure loss or the valve opening criteria. After reading the demand data, the opening degree of each valve is calculated in terms of the reservoir level at that moment and at the previous time step, of the demand and of the thresholds established. Finally, the pressure loss coefficient is applied to the outflow, and this value is used for the flow balance in the next time step.

Uncertainty analysis

With the aim of planning the short-, medium- and long-term management of a reservoir by means of uncertainty analysis, a model with a structure appropriate for a large number of simulations in parallel is required, which must be capable of retaining accurate detailed information. The technique that best fits this structure is simulation by Monte Carlo integration, which was applied to a synthetic sample of N replications of a T-yr daily inflow series following the procedure depicted in Fig. 3 (Baquerizo and Losada, 2008; Polo and Losada, 2010). The initial daily inflow series was obtained by the hydrological simulation of the contributing area with the distributed and physically-based model WiMMed (Polo et al., 2009), widely calibrated and validated at the study site (Aguilar, 2008; Egüen et al., 2009) for the available inflow dataset during the 1991-2005 period. A complete description of this model can be found in Herrero et al.
(2010, 2011); its specific focus on the spatiotemporal distribution of the meteorological variables should be pointed out, together with its capability to reproduce the high variability of Mediterranean climate regions. This simulated series maintained the original measured data but filled in the no-data periods and corrected the existing non-quality data. Taking this 10-yr daily inflow series as a reference, the replicated input series to the model were constructed by means of Monte Carlo simulation. In order to perform this simulation, a study was made of the distribution functions of the random variables which define the nature of the inflows to the reservoir with respect to their magnitude and annual distribution. Once the parameters defining those functions were obtained, series of inflows fitting those parameters were constructed. The resulting series allow a large number of simulations to be run in parallel, and from them it is possible to extract information on the averages and confidence intervals of the objective random variables which define the reservoir response, interpreted as an instantaneous storage capacity resulting from the demands and regulation criteria. By studying the distribution functions of each of these variables separately, or their joint distribution function, it is possible to carry out a scenario analysis on the variation of the operation system parameters. In this study, 500 inflow series, each of them consisting of ten years of daily inflow, were simulated from the empirical distribution functions obtained from the available 15-yr daily inflow dataset for the period 1991-2005.

Water flow in the reservoir

First, the hydrological simulation of a 10-yr series representative of the watershed was carried out. For this, a series of inflows to the reservoir was elaborated from data measured in the period from 1991 to 2005, reconstructed with an alternation of dry, medium and wet years. The objective of this series was to highlight the effect of the hydrological variability of the watershed on the management of the reservoir's resources, and the effect of the regulation carried out by the application of the model on the final result, represented as the volume of water stored at the end of the 10-yr simulation. It was thus intended to emphasize the flexibility of the model in simulating any sequence of inflows to the reservoir. Figure 4 shows the evolution of the reservoir state in terms of the water level throughout the simulation. The inflow series and the outflow curves for each supply intake have also been plotted.
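As a complement to Eqs. (1)-(5) and the flow diagram of Fig. 2, the daily balance and the valve regulation can be condensed into a single integration step. The following is only a minimal Python sketch, not the authors' Simulink implementation: the orifice-type relation assumed for Eq. (3), the power-law form assumed for Eq. (4), and all parameter values and function names are illustrative assumptions.

```python
import numpy as np

def daily_balance_step(H, q_in, demand, et0, surface_poly,
                       C=0.9, S_out=3.0, cota=50.0,       # assumed intake geometry and loss coefficient
                       A=25.0, B=1.5, P=120.0,             # assumed spillway fit parameters and crest level
                       eta=0.6, H_min=55.0, dt=86400.0):
    """One daily step of the reservoir continuity equation (Eqs. 1-5), sketched
    with hypothetical geometry. H is the level [m], q_in the inflow [m3/s],
    demand the requested outflow [m3/s], et0 the daily ET0 [m/day], and
    surface_poly an np.poly1d giving the water surface area S_e(H) [m2]."""
    S_e = surface_poly(H)                                   # plane reservoir surface at this level

    # Eq. (3) (assumed orifice-type relation): outlet limited by demand, head and losses
    head = max(H - cota, 0.0)
    q_capacity = C * S_out * np.sqrt(2.0 * 9.81 * head)
    q_outlet = min(demand, q_capacity)
    if H < H_min:                                           # operational threshold: stop supply to let the system recover
        q_outlet = 0.0

    # Eq. (4) (assumed power-law fit): spillway outflow on the effective head H - P
    q_spill = A * max(H - P, 0.0) ** B

    # Eq. (5): evaporated flow, eta * ET0 * S_e, converted to m3/s
    q_evap = eta * (et0 / 86400.0) * S_e

    # Eq. (2) and Eq. (1): total outflow and level update S_e dH = (Q_in - Q_out) dt
    q_out = q_outlet + q_spill + q_evap
    return H + (q_in - q_out) * dt / S_e, q_outlet
```

In the actual model the opening degree of each valve additionally depends on the level at the previous time step and on the volume required to guarantee supply over the chosen horizon, which is not reproduced in this sketch.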
From Fig. 4, the annual and seasonal variation of the inflow, characteristic of the Mediterranean climate, can be noted, with very long dry periods in summer and heavy rainfall events in autumn-winter; this is reflected in the volume of water stored in the reservoir and therefore, due to the regulation criteria established, in the outflows. The two annual filling cycles of the reservoir can also be seen. These are more notable in the hydrological year between day 1100 and day 1500 of the series, this being a wet year. The first cycle, with a larger volume, coincides with the time of the heaviest rainfall events, and the second corresponds to the snowmelt in spring. On the other hand, because of the seasonal variability of the population, higher supply flows are observed in the summer, a time when rainfall is scant, so that the outflows of the rest of the intakes are affected, or even interrupted, due to the priority established for the urban supply use, which is likewise reflected in the smaller number of days of interrupted urban supply.

Scenario analysis

The next step was to carry out a scenario analysis based on the variation of diverse system parameters, specifically the demands and the period of time over which it was desired to guarantee supplies. For this purpose, 500 series of daily inflow with a 10-yr duration were obtained by Monte Carlo simulation. These synthetic series were the input to the reservoir management model, each of them being simulated for four scenarios. That is, the functional thresholds were fixed so that the opening and closing of the valves was carried out in such a way that the volume of water stored was sufficient to ensure supplies during a period of 15 days, 1 month, 2 months or 3 months. Thus, 500 series of results were obtained for each of the scenarios simulated.

With this set of synthetic data, a frequency analysis was performed in order to estimate the probability density function of two variables: the reservoir state at the end of each ten-year series, represented by the stored volume, and the number of days on which the urban supply intake remained closed in that period as a consequence of the management carried out by means of the functional thresholds. For this work it was decided to assess the behavior of these two variables, although this could be done for any other variable resulting from the model. Starting from this frequency analysis, it will be possible to quantify the uncertainty associated with the natural processes produced in the watershed flowing into the reservoir, which also constitutes the basis for decision making. In order to facilitate the interpretation of these functions, a fit of each data series was carried out through Gaussian and Fourier functions. These fits are depicted in Fig. 5a-b, and Tables 1 and 2 show the goodness of each fit by means of the R-square parameter.

The highly non-linear behavior of the selected objective variables, the final state of the reservoir and the number of days of interrupted water supply, can be observed in Fig.
5a-b from the different results obtained by changing the duration of the guarantee period in the operation criteria. The increase in this parameter does not produce an orderly trend of change in the maximum observed in the corresponding probability functions in any of the cases. The non-linear nature of the hydrological response of the watershed, together with the specific features of the reservoir and the particular time occurrence of inflows in relation to the time guarantee, is responsible for this.

From Fig. 5a it can be observed how, for the current demands, the highest frequencies occur for a ratio H_end/H_initial of 0.9 to 1, independently of the scenario considered, that is, for a final state equal to, or even somewhat lower than, the initial state, which underlines the functioning of the management exercised in the model. Thus, it can be concluded that, with the regulation policy carried out by means of the model developed in this paper, there is a high probability that the reservoir will be found in the same situation at the end of a 10-yr series, or even with a smaller stored volume than the starting one, the likelihood of its state improving at the end of that period being practically negligible, i.e. under 0.03; in other words, the resources received by the reservoir will be totally consumed. If we complete this analysis with the results in Fig. 5b, it is observed that the maximum frequency of closing the supply intakes is around 0.1; that maximum probability corresponds to 200 days of closure, with no great differences as a function of the guarantee criterion considered. That is to say, on 200 days of each 10-yr series the flow demanded is not being supplied, in this case the urban consumption, which is the priority use furnished from the reservoir. Therefore, not only are all the reservoir's resources consumed but, in addition, there is a scarcity of water in some periods.

In view of these results, one can ask which parameters or variables of the system can be acted upon in order to increase the probability of improving the reservoir's state after a period of functioning of T years, in this case ten. For this purpose, the model was run on the same randomly simulated series and under the four supply guarantee scenarios mentioned, but with demanded flows equal to half of those estimated for this study, as can be observed in the figure itself. For demands equivalent to 50 % of those estimated, the probability that the reservoir would be in a better situation than the initial one increases considerably, as was to be expected, with a likelihood of around 0.05 of improving its state by 1.5 times, for the four criteria considered. From these results it can be deduced that acting on the flows demanded will improve the future situation of the reservoir. This conclusion is reinforced by the results in Fig.
5b, in which an important reduction in the number of days of closure of the supply valve, from 200 to 25, is observed, with differences noted according to the management criteria, so that the water scarcity situations that make it difficult to supply the population are significantly diminished, the closure days reaching 0 if a guarantee of 3 months' supply is adopted. At this point, the difference between the guarantee criteria of 2 and 3 months should be noted, since the curve fit for 2 months presents an extreme, isolated value. It has been verified that this peak in the graph is due to the fit made. However, that fit was opted for because of its goodness, represented by its R-square value (see Table 2), and because the rest of the curve has been observed to represent the trend well. Therefore, this point has been disregarded when interpreting the results shown in the figure, since a very similar behavior was observed in the data before the fit between the series of results of the scenarios of 2 and 3 months of supply guarantee, so that it can be concluded that the number of days of closure of the urban supply intake tends to vanish as the guarantee criterion becomes more restrictive.

Conclusions

Using the reservoir management model developed in this paper, annual evolution curves of the volume of water stored are obtained, which allows the water supply to be regulated based on established criteria in a way that is sustainable with the commitments existing downstream. The parameters defined in this model characterize the reservoir to which it was applied; nevertheless, it is possible to generalize the model just by varying those parameters for each particular reservoir to which it is to be applied. The resulting model allowed us to obtain results at different time scales, to introduce an uncertainty analysis, and to show the potential of the methodology. By means of the probability functions of the selected variables of interest, the model becomes a tool for decision making, in such a way that it is possible to establish a criterion for the opening and closing of valves in terms of the desired reservoir situation at the end of a simulated period, or of any other selected variable, taking into account the consequences of this decision for the supply system. These consequences are represented in this case by the number of days on which the minimum flow necessary for supplying the urban consumption is not met.

The model is designed in such a way as to facilitate its inclusion in hydrological watershed models. At present its incorporation into the distributed and physically-based hydrological model WiMMed is being developed. This will allow the simulation of changes in land uses and their influence on the associated water flows, as well as the random generation of inflow series with the aim of making predictions under different scenarios.

Fig. 1. Situation, Digital Elevation Model and land uses of the Guadalfeo river watershed. Urban areas and irrigable areas furnished from Rules reservoir are also shown.
Fig. 3. Scheme of the procedure followed to derive probability functions for the variables selected.
Fig. 5a. Probability function of the final state of the reservoir compared to its initial state in a ten-year simulation.
Fig. 5b. Probability function of the number of days of closed urban supply intake in a ten-year simulation.
Table 1. Types of fit and goodness of fit for each probability density function of the final state of the reservoir compared to its initial state.
Table 2. Types of fit and goodness of fit for each probability density function of the number of days of closed urban supply intake.
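As a companion to Tables 1 and 2, the sketch below shows one way the empirical density of a result variable (e.g. H_end/H_initial over the 500 Monte Carlo runs) and a Gaussian fit with its R-square could be computed. It is a minimal illustration in Python: the function names, bin count and initial guesses are assumptions, and the Fourier-type fits also used in the paper are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    """Simple Gaussian shape used to fit the empirical density."""
    return a * np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))

def fit_density(end_states, bins=30):
    """Histogram the 500 simulated end states, fit a Gaussian to the
    empirical density and report the R-square of the fit."""
    freq, edges = np.histogram(end_states, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p0 = [freq.max(), centers[np.argmax(freq)], np.std(end_states)]
    popt, _ = curve_fit(gaussian, centers, freq, p0=p0)
    residuals = freq - gaussian(centers, *popt)
    r_square = 1.0 - np.sum(residuals ** 2) / np.sum((freq - freq.mean()) ** 2)
    return popt, r_square
```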
6,484.8
2012-05-29T00:00:00.000
[ "Environmental Science", "Engineering" ]
Nano Zinc Oxide Green-Synthesized from Plumbago auriculata Lam. Alcoholic Extract

Zinc oxide nanoparticles (ZnO NPs) were synthesized using an alcoholic extract of the flowering aerial parts of Plumbago auriculata Lam. Dynamic Light Scattering (DLS) revealed that the average size of the synthesized ZnO NPs was 10.58 ± 3.350 nm and the zeta potential was −19.6 mV. Transmission electron microscopy (TEM) revealed that the particle size was in the range from 5.08 to 6.56 nm. X-ray diffraction (XRD) analysis verified the existence of pure hexagonal-shaped crystals of ZnO nanoparticles with an average size of 35.34 nm in the sample, which is similar to the particle size obtained by scanning electron microscopy (SEM) (38.29 ± 6.88 nm). HPLC analysis of the phenolic ingredients present in the plant extract showed that gallic acid, chlorogenic acid, and catechin were the major compounds, at concentrations of 1720.26, 1600.42, and 840.20 µg/g, respectively. Furthermore, the inhibitory effects of ZnO NPs and the plant extract against avian metapneumovirus (aMPV) subtype B were also investigated. This assessment revealed that the uncalcinated form of Nano-ZnO mediated by P. auriculata Lam. extract possessed significant antiviral activity, with a 50% cytotoxic concentration (CC50) and 50% inhibition concentration (IC50) of 52.48 ± 1.57 and 42.67 ± 4.08 µg/mL, respectively, while the inhibition percentage (IP) was 99% and the selectivity index (SI) was 1.23.

Introduction

Utilization of inorganic nanoparticles (NPs) for various applications, together with consideration of environmental aspects, has stimulated demand for synthesizing them via green chemistry approaches. Plants have become the preferred candidates for nanoparticle biosynthesis at large scales due to their rapid synthesis rates and the variations of the shape and size of the produced nanoparticles. Furthermore, many bioactive constituents of plants such as alkaloids, terpenoids, flavonoids, enzymes, amino acids, proteins, vitamins, and glycosides could be responsible for the bioreduction, creation and preservation of metal nanoparticles [1]. Plumbago auriculata Lam. is a shrub native to South Africa and successfully acclimatized in tropical and subtropical regions of the world. This plant is referred to as Cape Leadwort or Cape Plumbago because it is abundant in the Cape districts of South Africa. The plant has the ability to adapt to various stressful conditions such as high humidity, high temperature, and diseases. P. auriculata has been recognized as a medicinal plant as it possesses numerous potent bioactive constituents such as α-amyrin, α-amyrin acetate, capensisone, isoshinanolene, β-sitosterol, plumbagin, isoshinanolone, diomuscinone, and β-sitosterol-3-β-glucoside [2]. Several metal NPs have been successfully biosynthesized from different Plumbago species. Silver NPs were previously green-synthesized using various Plumbago species such as P. indica, P. auriculata, and P. zeylanica. The produced silver NPs possessed antioxidant activity and antitumor activity against Dalton's Lymphoma, as well as antibacterial activity against Gram-positive and Gram-negative bacteria such as Bacillus subtilis, Staphylococcus aureus, Escherichia coli, and Klebsiella pneumoniae. They also showed larvicidal activity against Aedes aegypti and Culex quinquefasciatus and antitubercular activity against Mycobacterium tuberculosis [3][4][5][6]. Selenium NPs were previously synthesized by green methods using P.
zeylanica extract and found to have an antioxidant and antibacterial activities [7]. Copper NPs were successfully synthesized using P. zeylanica and were proven to be an antidiabetic nanomedicine [8]. Silver, gold, and bimetallic (silver and gold) NPs were produced from the medicinal plant P. zeylanica and showed antimicrobial and antibiofilm activities against Escherichia coli, Acinetobacter baumannii, Staphylococcus aureus, and a mixed culture of Acinetobacter baumannii and Staphylococcus aureus [9]. Highly stable and spherical zinc oxide NPs were also produced using P. zeylanica leaf extract and were found to have antibacterial activity against Staphylococcus aureus and Salmonella typhimurium [10]. Nanotechnology has extensively expanded in virology fields for various applications such as prophylactic, therapeutic, and diagnostic approaches. Nanoparticles have been applied for imaging purposes and as an adjuvant carrier for drugs to augment virucidal properties. Antiviral NPs have mainly been applied against hepatitis virus types A, B, C, and E, human immunodeficiency virus (HIV), and herpes simplex virus (HSV-1 and HSV-2) [11]. Nano-ZnO which allows zinc to be absorbed easily by the body, was categorized as a "GRAS" (generally recognized as safe) substance by the FDA (US Food and Drug Administration). Compared with other metal oxide NPs, ZnO NPs are considered to be an inexpensive and less toxic material and to exert a noticeable effect in several biomedical applications such as anticancer, drug delivery, antimicrobial, antidiabetic, anti-inflammatory, wound healing, and bio-imaging. [12] Avian metapneumovirus (aMPV) is a single-stranded RNA (negative-sense) enveloped virus belonging to the Metapneumovirus genus of the Paramyxoviridae family. This family comprised viruses responsible for producing respiratory ailments in humans and animals. In chickens, the aMPV is responsible for the incidence of a multi-factorial disease identified as the swollen head syndrome (SHS) [13]. Good management practices and rigorous biosecurity play key roles in avoidance of infection as well as minimizing the effects of an aMPV infection. There is no treatment available for aMPV infections; however, aMPV infections can be prevented by vaccination. Attenuated vaccines have been applied to control infections in growing turkeys and broiler chickens and facilitate the injection of inactivated vaccines for future layers and breeders before the onset of lay [14]. Most studies have been dedicated to the inhibition activity of ZnO NPs on bacterial infections, however there are few works focused on evaluating the interaction between P. auriculata or ZnO NPs and viruses. The current research was designed to scrutinize the antiviral activities of the P. auriculata ethanolic extract and its ZnO NPs on aMPV, which is considered to be one of the most challenging viruses causing significant economic losses in poultry cultivation. HPLC Investigation of Phenolic and Flavonoid Compounds HPLC investigation revealed the occurrence of sixteen compounds in the alcoholic extract of flowering aerial parts of P. auriculata Lam. Gallic acid, chlorogenic acid and catechin were found to be major components at concentrations of 1720.26, 1600.42, and 840.20 µg/g respectively ( Table 1). The detection of taxifolin was previously reported by Skaar et al. [15] who isolated taxifolin glycoside from the plant flowers. The occurrence of the rest of the identified compounds is presented here for the first time. 
However, previous studies reported the isolation of the phenolic compounds apigenin, capensinidin, europinidin, pulchellidin, azalein, and capensinidin-3-rhamnoside from the flowers [16,17], together with apigenin, luteolin, and their glycosides from the leaves [17]. Based on a review of the current literature, this is the first identification of phenolic compounds in P. auriculata by HPLC (Figures 1 and 2).

UV Analysis

Nano-ZnO produced through the P. auriculata alcoholic extract showed a maximum absorption peak at 343.32 nm, indicating the synthesis of Nano-ZnO (Figure 3) [18], since ZnO on the nanoscale absorbs at wavelengths shorter than those of conventional ZnO. This is in agreement with previous findings which showed shorter wavelengths for the nanoparticles of material oxides [19].

FT-IR Analysis of Nano-ZnO and P. auriculata Lam.

The FTIR spectra were examined in the range from 400 to 4000 cm−1 to identify the different reactant groups involved in ZnO NPs synthesis and in the P. auriculata extract. According to the ZnO NPs spectrum (Figure 4), an intense absorption band at 3372 cm−1 (OH) was observed and attributed to water adsorption on the ZnO NPs surface. The bands at 2919 and 2850 cm−1 are due to CH stretching. Absorption bands at 1724, 1617, and 1449 cm−1 indicate C=O stretching of the carboxylic group. The existence of a stretch band at 1032 cm−1 is due to CO, revealing the presence of the active groups of alcohols, ethers, and carboxylic acid esters. The emergence of a new peak at 744 cm−1 confirms that the ZnO NPs underwent CH bending, and the characteristic band of the ZnO stretching mode was assigned to 561 cm−1. FT-IR results for the P. auriculata extract (Figure 5) presented several bands at 3311, 3302, and 3245 cm−1, confirming the presence of an OH group in phenolic and alcoholic compounds. Medium peaks found at 3158, 3128, 3109, 2916, 2844, and 2804 cm−1 characterized the existence of a stretching alkane group (CH). The bands at 1724, 1619, 1449, and 1371 cm−1 in the spectrum of the plant extract were due to the presence of CH bending vibrations and C-C (in-cycle) or C=O stretching in the phenolic constituents. These bands decreased in intensity in the case of the ZnO NPs due to the attachment of phenolic and amino groups. The peak acquired at 1036 cm−1 illustrated the existence of a CH alkane group. Medium bands obtained at 799, 777, 748, and 709 cm−1 signified =CH bending frequencies of an alkene group [20,21].

DLS and Zeta Potential

A size-distribution image (DLS) of the green-synthesized form of Nano-ZnO is presented in Figure 6. The detected ZnO NPs size distribution varied from 2 to 68 nm. The average particle size of the NPs was 10.58 ± 3.350 nm with a PDI value of 0.343. The ZnO NPs' zeta potential showed a peak at −19.6 mV (Figure 7), signifying that the biosynthesized Nano-ZnO particles were negatively charged and moderately distributed in the medium. The negative values detected by zeta potential were responsible for the stabilization of the nanoparticles.

Transmission Electron Microscopy (TEM) and Scanning Electron Microscopy (SEM) Analyses

TEM examination using low- and high-resolution power revealed the production of hexagonal Nano-ZnO with particle sizes between 5.08 and 6.56 nm and an average size of 5.82 ± 0.74 nm, as shown in Figure 8. The surface morphology of the Nano-ZnO was inspected by SEM.
It showed that most of the Nano-ZnO particles were spherical, with some agglomerated in shape, and that the particle diameter ranged from 32.42 to 45.86 nm with an average size of 38.29 ± 6.88 nm (Figure 9). SEM analysis illustrated the morphology and size of the Nano-ZnO, which was moderately dispersed in the medium.

XRD Analysis

The existence of Nano-ZnO and the exploration of its structural characteristics were confirmed by X-ray diffraction (XRD). The Nano-ZnO biosynthesized by P. auriculata showed the characteristic zinc oxide diffraction peaks (Figure 10). Additionally, the Nano-ZnO was considered to be pure due to the absence of characteristic XRD peaks other than the observed zinc oxide peaks [22,23]. These peaks corresponded to a hexagonal crystal system with space group P63mc and reference code 01-089-0510. The average crystal size of the Nano-ZnO was computed to be 35.34 nm. This value is very similar to the SEM measurement.

The Cell Viability and Antiviral Activities of P. auriculata Lam. Extract and ZnO NPs

In vitro antiviral assessment investigates the virus's capability for infection and multiplication in defined cell lines in culture systems. The cell culture system provides a rapid and accurate method to grow viruses at high titers, produce vaccine strain cultures, study reverse genetics and inspect antiviral compounds. Plants used in traditional medicine have been consumed for the treatment of various health conditions, including infectious ailments. Nearly 60% of the anti-tumor and anti-infective remedies that are commercially available are derived from natural products. Therefore, traditional medicinal plants may serve as potent sources for acquiring new antiviral agents in the coming years. The discovery of new antiviral drugs is a problematic mission due to poor selective toxicity and the appearance of resistant viral variants that naturally arise. The occurrence of viral resistance to antiviral drugs is increasing and, consequently, viral diseases remain challenging to treat. The assessment of plants as promising sources of antiviral compounds has led to the detection of potent inhibitors of viral multiplication, increasing the possibility of detecting new bioactive plant compounds [24,25]. The genus Plumbago has previously been used for the treatment of various viral infections. The methanolic extract of P. zeylanica root showed antiviral activity against coxsackievirus B3 Nancy (CVB3), influenza A virus Hong Kong/1/68 (H3N2), and herpes simplex virus type 1 Kupka (HSV-1) at a concentration range of 0.8-200 µg/mL [26]. P. indica L. root extracts showed remarkable inhibition of influenza A (H1N1) virus by inhibiting viral nucleoprotein synthesis and polymerase activity [27]. Applications of metal and metal oxide NPs in virus-targeting formulations have shown remarkable diagnostic or therapeutic ability, augmenting targeted drug delivery functions [11]. ZnO NPs, owing to their physicochemical characteristics, are considered a promising approach for developing antiviral agents and nano-vaccines against RNA viruses such as HIV and the COVID-19 virus, as well as the DNA viruses HSV-1 and HSV-2. The most probable antiviral mechanistic pathways of ZnO NPs are blocking the virus's entry into cells and deactivating the virus through virostatic potential [28,29]. The pneumoviruses comprise various human and animal pathogens such as human respiratory syncytial virus (hRSV), human metapneumovirus (hMPV), bovine respiratory syncytial virus (RSV), and avian metapneumovirus (aMPV).
Among these viruses, aMPV is one of the leading causes of high economic losses in infected poultry. aMPV mainly causes infection of the upper respiratory tract in both chickens and turkeys, although turkeys seem to be more susceptible [30]. P. auriculata extract and biosynthesized ZnO nanoparticles were assessed for antiviral capabilities against aMPV subtype B. The results revealed that there are significant activities of P. auriculata extract and Nano-ZnO with SI more than 1.2. In antiviral analysis, it is necessary to define the maximum concentrations of the extract and ZnO NPs that is not toxic to the cells (MNTC). P. auriculata extract showed cytotoxic effect lower than Nano-ZnO with MNTC values equal to 235.4 and 173.5 µg mL −1 , respectively. The CC 50 for the extract and ZnO NPs were well tolerated by the CER cells. The CC 50 for both the investigated agents was higher than 50 µg mL −1 and none of them produced any visible changes in cellular morphology or density. The CC 50 and IC 50 concentrations of both investigated agents found to be active against aMPV subtype B were calculated and recorded in Table 2. The screening of the antiviral performance of P. auriculata extract and ZnO NPs revealed that medium percentages of cell destruction were detected on CER cell line as they attained moderate antiviral inhibitory effects with CC 50 of 87.00 ± 3.76 and 52.48 ± 1.57 µg/mL, respectively. Remarkable antiviral activity was observed for P. auriculata Lam. extract (67.97 ± 4.63 µg/mL) and Nano-ZnO (42.67 ± 4.08 µg /mL) for MOI of 0.001 ID 50 /cells. The antiviral activity of zinc oxide could be due to interference with the zinc binding activity of a viral protein, M2-1, that is found in all known pneumoviruses and is essential for virus replication and pathogenesis in vivo [30]. The antiviral activity of P. auriculata could be due to the chelating activity of phenolic compounds present in the extract for metal ions, especially zinc, that is essential for viral replication [30,31]. These results revealed that ZnO NPs have superior activity to the plant extract. Plant Material Flowering aerial parts of P. auriculata Lam. were acquired from EL-MAZHAR botanical garden, Giza, Egypt. The plant was kindly authenticated by Dr. Mohamed El Gebaly, Botany Taxonomist at the National Research Centre Herbarium, Dokki, Giza, and Engineered by Therease Labib, Consultant for plant identification at El-Orman botanical garden. Voucher specimen (numbered 16062020) was reserved at the herbarium of the Department of Pharmacognosy, Faculty of Pharmacy, Cairo University. P. auriculata was air-dried, coarsely powdered and kept in tightly closed, amber-colored glass containers at room temperature. Extraction of Plant Active Ingredients The air-dried powder of the flowering aerial parts of P. auricolata (100 g) was extracted using 90% ethanol (250 mL × 3). The extract was evaporated under reduced pressure using Buchi rotary evaporators until completely dry (2.5 g). HPLC Analysis HPLC analysis was performed by means of an Agilent 1260 series. The separation was performed by Eclipse C18 column (4.6 mm × 250 mm i.d., 5 µm). The mobile phase comprised (A) water and (B) 0.05% trifluoroacetic acid in acetonitrile using a flow rate 1 mL/min. The mobile phase was automatically flowed in a linear gradient as follows: 0 min (82% A and 18% B); 0-5 min (80% A and 20% B); 5-12 min (60% A and 40% B); and 12-16 min (82% A and 18% B). The detector used was multi-wavelength, adjusted to 280 nm. 
The injection volume was 10 µL for the standard and extract solutions. The column temperature was set at 35 °C.

Green Synthesis of ZnO NPs

ZnO NPs were synthesized using the alcoholic extract of the flowering aerial parts of P. auriculata by the method described by Attia et al. [32] with slight modification, in which P. auriculata Lam. dried extract (1 g) dissolved in ethanol (100 mL) was reacted with zinc acetate (10 g) dissolved in doubly distilled water (1000 mL) and heated in a boiling water bath for 20 min. Ammonium hydroxide (a few drops) was then added to the reaction to raise the pH to 12, at which point a Nano-ZnO precipitate formed. The mixture was left for half an hour for complete conversion of zinc acetate to Nano-ZnO. The ZnO formed was centrifuged at 4000 rpm, washed with bi-distilled water (two times) and with ethanol (two washes), and freeze-dried to yield white pellets of Nano-ZnO.

Characterization of ZnO NPs

The Nano-ZnO was initially analyzed by means of a Shimadzu UV-1601 (Shimadzu Corporation, Japan) UV-vis spectroscopy instrument over the range 200 to 600 nm, and then by FTIR analysis in attenuated total reflectance (ATR) mode with a Jasco FTIR 4100 spectrophotometer (Japan) to categorize the functional groups and phytochemical compounds responsible for the formation and stabilization of the nanoparticles. Dynamic light scattering (DLS) analysis was performed using a Zetasizer (HT Laser, ZEN3600, Malvern Instruments, Malvern, UK) to investigate the particle size and zeta potential of the prepared nanoparticles. The morphology and particle size of the Nano-ZnO mediated by P. auriculata Lam. extract were determined by TEM (JEOL JEM-1011, Japan) and FE-SEM (Mira3, Tescan). A few drops of the suspension of ZnO nanoparticles were applied to a carbon-coated copper grid and the solvent was evaporated at room temperature before recording the images. The powdered sample was subjected to CuKα1 X-ray diffractometer radiation (λ = 1.5406 Å) operating at 40 kV and 30 mA with 2θ ranging from 20° to 90° to verify the occurrence of ZnO crystals and define their structure and size.

Antiviral Assessment

The antiviral activities of P. auriculata extract and ZnO NPs were investigated as described earlier by Khon et al. [33], using chicken embryo related cells (CER cells) and aMPV strain UK/8/94 belonging to aMPV subtype B (GenBank Accession number Y14294). The maximum nontoxic concentrations (MNTC) of both investigated agents were determined and measured via the sulforhodamine B (SRB) assay [34]. Virus titers were determined from the virus-produced cytopathic effect in cell culture and were expressed as the 50% tissue culture infective concentration (TCID50) per mL. The TCID50 was computed according to Reed and Muench [35]. The inhibition percentage was calculated using the formula IP = (1 − T/C) × 100, where T is the antilog of the treated viral titers and C is the antilog of the control (without extract or Nano-ZnO) viral titers. IP was considered positive if greater than or equal to 98%. The 50% cytotoxic (CC50) and 50% inhibition (IC50) concentrations of the plant extract and Nano-ZnO were computed from their concentration-response curves obtained with different concentrations of both investigated agents in the presence of 100 TCID50/mL, using the MTT assay [36,37]. The selectivity index (SI) was computed as the ratio CC50/IC50. The results were acquired from triplicate assays with a minimum of five concentrations for P.
auriculata extract and nanoparticles. The cytotoxicity percentage was obtained from [(A1 − A2)/A1] × 100, where A1 and A2 are the OD540 nm values of untreated and treated cells, respectively. The protection percentages were computed from [(X1 − X2) × 100/(X3 − X2)], where X1, X2, and X3 stand for the absorbance values of the extracts or ZnO NPs, the virus control, and the cell control, respectively.

Conclusions

The current work aimed to biosynthesize ZnO NPs from P. auriculata alcoholic extract and to investigate the antiviral effects of the plant extract and the ZnO nanoparticles against aMPV subtype B. From this study, it can be speculated that combining P. auriculata extract and/or ZnO nanoparticles with vaccination would be beneficial for the control of virus spreading in infected birds. In societies where this disease is endemic, this would reduce environmental contamination and consequently the risk of virus spreading. Our in vitro results suggest that supplementing the poultry diet with P. auriculata extract and ZnO nanoparticles could be useful for controlling enteric infections of aMPV; however, further in vivo trials are required to confirm the antiviral activity observed in vitro and to find the optimum concentrations of P. auriculata and Nano-ZnO required in bird diets for antiviral effectiveness, given the complexity of the poultry digestive tract.
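For reference, the screening arithmetic quoted in the antiviral assessment (IP, cytotoxicity, protection and SI) can be written down directly. The sketch below is an illustrative Python rendering of those formulas only; the function names are hypothetical and no experimental values beyond those reported in the text are assumed.

```python
def inhibition_percentage(treated_titer_log, control_titer_log):
    """IP = (1 - T/C) * 100, with T and C the antilogs of the treated and
    control virus titers (log10 TCID50/mL)."""
    return (1.0 - 10 ** treated_titer_log / 10 ** control_titer_log) * 100.0

def cytotoxicity_percentage(a1_untreated, a2_treated):
    """[(A1 - A2) / A1] * 100 from OD540 readings of untreated and treated cells."""
    return (a1_untreated - a2_treated) / a1_untreated * 100.0

def protection_percentage(x1_treated, x2_virus, x3_cell):
    """[(X1 - X2) * 100 / (X3 - X2)] with X1 = agent-treated, X2 = virus control,
    X3 = cell control absorbances."""
    return (x1_treated - x2_virus) * 100.0 / (x3_cell - x2_virus)

def selectivity_index(cc50, ic50):
    """SI = CC50 / IC50, e.g. 52.48 / 42.67 ~= 1.23 for the uncalcinated Nano-ZnO."""
    return cc50 / ic50
```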
4,834.2
2021-11-01T00:00:00.000
[ "Materials Science" ]
Unleash GPT-2 Power for Event Detection

Event Detection (ED) aims to recognize mentions of events (i.e., event triggers) and their types in text. Recently, several ED datasets in various domains have been proposed. However, the major limitation of these resources is the lack of enough training data for individual event types, which hinders the efficient training of data-hungry deep learning models. To overcome this issue, we propose to exploit the powerful pre-trained language model GPT-2 to generate training samples for ED. To prevent the noise inevitable in automatically generated data from hampering the training process, we propose to exploit a teacher-student architecture in which the teacher is supposed to learn anchor knowledge from the original data. The student is then trained on the combination of the original and GPT-generated data while being guided by the anchor knowledge from the teacher. Optimal transport is introduced to facilitate the anchor knowledge-based guidance between the two networks. We evaluate the proposed model on multiple ED benchmark datasets, gaining consistent improvement and establishing state-of-the-art results for ED.

Introduction

An important task in Information Extraction (IE) is Event Detection (ED), whose goal is to recognize and classify words/phrases that evoke events in text (i.e., event triggers). For instance, in the sentence "The organization donated 2 million dollars to humanitarian helps.", ED systems should recognize "donated" as an event trigger of type Pay. We differentiate two subtasks in ED, i.e., Event Identification (EI): a binary classification problem to predict if a word in text is an event trigger or not, and Event Classification (EC): a multi-class classification problem to classify event triggers according to predefined event types. Several methods have been introduced for ED, extending from feature-based models (Ahn, 2006; Liao and Grishman, 2010a) to advanced deep learning methods (Nguyen et al., 2016c; Zhang et al., 2020b; Nguyen et al., 2021). Although deep learning models have achieved substantial improvement, their requirement for large training datasets together with the small sizes of existing ED datasets constitutes a major hurdle to building high-performing ED models. Recently, there have been some efforts to enlarge training data for ED models by exploiting unsupervised or distantly-supervised techniques (Araki and Mitamura, 2018). The common strategy in these methods is to exploit unlabeled text data that are rich in event mentions to aid the expansion of training data for ED. In this work, we explore a novel approach for training data expansion in ED by leveraging the existing pre-trained language model GPT-2 (Radford et al., 2019) to automatically generate training data for models. Motivated by the promising performance of GPT models for text generation, we expect our approach to produce effective data for ED in different domains. Specifically, we aim to fine-tune GPT-2 on existing training datasets so it can generate new sentences annotated with event triggers and/or event types, serving as additional training data for ED models. One direction to achieve this idea is to explicitly mark event triggers along with their event types in the sentences of an existing ED dataset, which can then be used to fine-tune the GPT model for new data generation.
However, one issue with this direction is that, in existing ED datasets, the numbers of examples for some rare event types might be small, potentially leading to poor tuning performance of GPT and impairing the quality of generated examples for such rare events. In addition, the large numbers of event types in some ED datasets might make it more challenging for the fine-tuning of GPT to differentiate event types and produce high-quality data. To this end, instead of directly generating data for ED, we propose to use GPT-2 to generate samples only for the event identification task, to simplify the generation and obtain data with better annotated labels (i.e., output sentences are only marked with the positions of event triggers). To effectively leverage the generated EI data to improve ED performance, we propose a multitask learning framework to train the ED models on the combination of the generated EI data and the original ED data. In particular, for every event trigger candidate in a sentence, our framework seeks to perform two tasks, i.e., EI to predict a binary label for being an event trigger or not, and ED to predict the event type (if any) evoked by the word via a multi-class classification problem. An input encoder is shared for both tasks, which allows training signals from both the generated EI data and the original ED data to contribute to the representation learning in the encoder (i.e., transferring knowledge in the generated EI data to the ED models). Despite the simplification to EI for better annotated labels, the generated sentences might still involve noise due to the inherent nature of language generation, e.g., grammatically wrong sentences, inconsistent information, or incorrect event trigger annotations. As such, it is crucial to introduce mechanisms to filter the noise in the generated data to enable effective transfer learning from the generated EI data. To this end, prior work on GPT-based data generation for other tasks has attempted to directly remove noisy generated examples before actual usage for model training via some heuristic rules (Anaby-Tavor et al., 2020). However, heuristic rules are brittle and restricted in their coverage, so they might overly filter the generated data or incorrectly retain some noisy generated samples. To address this issue, we propose to preserve all generated data for training and devise methods to explicitly limit the impact of noisy generated sentences on the models. In particular, we expect the inclusion of generated EI data into the training process for ED models to help shift the representations of the models toward better regions for ED. As such, we argue that this representation transition should only occur at a reasonable rate, as drastic divergence of representations due to the generated data might be associated with noise in the data. Motivated by this intuition, we propose a novel teacher-student framework for our multi-task learning problem where the teacher is trained on the original clean ED datasets to induce anchor representation knowledge for the data. The student, on the other hand, is trained on both the generated EI data and the original ED data to accomplish transfer learning. Here, the anchor knowledge from the teacher is leveraged to guide the student and prevent drastic divergence of representation vectors, thereby penalizing noisy information.
Consequently, we propose a novel form of anchor knowledge to implement this idea, seeking to maintain the same level of difference between the generated and original data (in terms of representation vectors) for both the teacher and the student (i.e., the generated-vs-original data difference serves as the anchor). At the core of this technique is the computation of the distance/difference between samples in the generated and original data. In this work, we envision two types of information that models should consider when computing such distances for our problem: (1) representation vectors of the models for the examples, and (2) event trigger likelihood scores of the examples based on the models (i.e., two examples in the generated and original data are more similar if they both correspond to event triggers). As such, we propose to cast this distance computation problem between generated and original data as an Optimal Transport (OT) problem. OT is an established method to compute the optimal transportation between two data distributions based on the probability masses of data points and their pair-wise distances, thus facilitating the integration of the two criteria of event trigger likelihoods and representation vectors into the distance computation between data point sets. Extensive experiments and analysis reveal the effectiveness of the proposed approach for ED in different domains, establishing new state-of-the-art performance on the ACE 2005, CySecED and RAMS datasets. Model We formulate the task of Event Detection as a word-level classification problem as in prior work (Ngo et al., 2020). Formally, given the sentence S = [w_1, w_2, ..., w_n] and the candidate trigger word w_t, the goal is to predict the event type l from a pre-defined set of event types L. Note that if the word w_t is not a trigger word, the gold event type is None. Our proposed approach for this task consists of two stages: (1) Data Augmentation: to employ natural language generation to augment existing training datasets for ED, and (2) Task Modeling: to propose a deep learning model for ED, exploiting the available training data. Data Augmentation As presented in the introduction, our motivation in this work is to explore a novel approach for training data augmentation for ED based on the powerful pre-trained language model for text generation, GPT-2. Our overall strategy involves using an existing training dataset O for ED (i.e., original data) to fine-tune GPT-2. The fine-tuned model is then employed to generate a new labeled training set G (i.e., synthetic data) that will be combined with the original data O to train models for ED. To simplify the training data generation task and enhance the quality of the synthetic data, we seek to generate data only for the subtask EI of ED, where synthesized sentences are annotated with the positions of their event triggers (i.e., event types for the triggers are not required for the generation, to avoid the complication with rare event types during fine-tuning). To this end, we first enrich each sentence S ∈ O with the positions of the event triggers that it contains to facilitate the GPT fine-tuning process. Formally, assume that S = [w_1, w_2, ..., w_n] is a sentence of n words with only one event trigger word, located at w_t; the enriched sentence S' for S then has the form:
S' = [BOS, w_1, ..., TRG_s, w_t, TRG_e, ..., w_n, EOS], where TRG_s and TRG_e are special tokens to mark the position of the event trigger, and BOS and EOS are special tokens to identify the beginning and the end of the sentence. Next, the GPT-2 model will be fine-tuned on the enriched sentences S' of O in an auto-regressive fashion (i.e., predicting the next token in S' given the prior ones). Finally, using the fine-tuned GPT-2, we generate a new dataset G of |O| sentences (|G| = |O|) to achieve a balanced size. Here, we ensure that only generated sentences that contain the special tokens TRG_s and TRG_e (i.e., involving event trigger words) are added into G, allowing us to identify the candidate trigger word in our word-level classification formulation for ED. As such, the combination A of the synthetic data G and the original data O (A = O ∪ G) will be leveraged to train our ED model in the next step. To assess the quality of the synthetic data, we randomly select 200 sentences from G (generated by the fine-tuned GPT-2 model over the popular ACE 2005 training set for ED) and evaluate them regarding grammatical soundness, meaningfulness, and the inclusion and correctness of annotated event triggers (i.e., whether the words between the tokens TRG_s and TRG_e evoke events or not). Among the sampled set, we find that 17% of the sentences contain at least one such error.
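To make this generation stage concrete, the following is a minimal, illustrative sketch of trigger-enriched fine-tuning and sampling. It assumes the HuggingFace transformers library; the token strings, the enrich helper, the toy data and all hyper-parameters are our own illustrative assumptions, not the authors' released code.

```python
# Illustrative sketch (not the authors' code): fine-tune GPT-2 on trigger-enriched
# sentences and sample new EI training data. Token names follow the paper's TRG_s/TRG_e
# markers; everything else is a hypothetical choice.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
tok.add_special_tokens({"bos_token": "<BOS>", "eos_token": "<EOS>", "pad_token": "<PAD>",
                        "additional_special_tokens": ["<TRG_s>", "<TRG_e>"]})
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.resize_token_embeddings(len(tok))  # account for the newly added special tokens

def enrich(words, trigger_idx):
    """Wrap the trigger word of a sentence with <TRG_s> ... <TRG_e> markers."""
    marked = words[:trigger_idx] + ["<TRG_s>", words[trigger_idx], "<TRG_e>"] + words[trigger_idx + 1:]
    return "<BOS> " + " ".join(marked) + " <EOS>"

# Toy original data O: (words, trigger position); real data would come from ACE/CySecED/RAMS.
original = [(["He", "sued", "his", "children", "."], 1)]
optim = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for epoch in range(3):                               # auto-regressive fine-tuning
    for words, t in original:
        ids = tok(enrich(words, t), return_tensors="pt").input_ids
        loss = model(ids, labels=ids).loss           # next-token prediction loss
        loss.backward(); optim.step(); optim.zero_grad()

# Sample synthetic sentences; keep only those containing both trigger markers.
model.eval()
bos = tok("<BOS>", return_tensors="pt").input_ids
out = model.generate(bos, do_sample=True, top_p=0.9, max_length=40,
                     num_return_sequences=5, pad_token_id=tok.pad_token_id)
generated = [tok.decode(o, skip_special_tokens=False) for o in out]
G = [s for s in generated if "<TRG_s>" in s and "<TRG_e>" in s]
```

In practice, generation would continue until |G| = |O|, mirroring the balanced-size constraint described above.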
Task Modeling This section describes our model for ED, designed to overcome the noise in the generated data G during model training. As discussed in the introduction, we employ a teacher-student framework with multi-task learning to achieve this goal. In the proposed framework, the teacher and the student employ a base deep learning model with the same architecture but different parameters. Base Model: Following prior work (Wang et al., 2019), our base model uses the BERT_base model to represent each word w_i in the input sentence S with a vector e_i. Formally, the input sentence [[CLS], w_1, w_2, ..., w_n, [SEP]] is fed into the BERT_base model and the hidden states of the last layer of BERT are taken as the contextualized embeddings of the input words, i.e., E = [e_1, e_2, ..., e_n]. Note that if w_i contains more than one word-piece, the average of its word-piece embeddings is used for e_i. In our experiments, we find that fixing the BERT_base parameters achieves higher performance. As such, to fine-tune the contextualized embeddings E for ED, we employ a Bi-directional Long Short-Term Memory (BiLSTM) network that consumes E; its hidden states, i.e., H = [h_1, h_2, ..., h_n], are then employed as the final representations for the words in S. Finally, to create the final vector V for ED prediction, the max-pooled representation of the sentence, i.e., h = MAX_POOL(h_1, h_2, ..., h_n), is concatenated with the representation of the trigger candidate, i.e., h_t. V is consumed by a feed-forward network, whose last layer has |L| neurons, followed by a softmax layer to predict the distribution P(·|S, t) over the possible event types in L. To train the model, we use the negative log-likelihood as the loss function: L_pred = −log P(l|S, t), where l is the gold label. As the synthetic sentences in G only involve information about the positions of event triggers (i.e., no event types included), we cannot directly combine G with O to train ED models with the loss L_pred. To facilitate the integration of G into the training process, we introduce an auxiliary task of EI for the multi-task learning in the training process, seeking to predict the binary label l_aux for the trigger candidate w_t in S, i.e., l_aux = 1 if w_t is an event trigger. To perform this auxiliary task, we employ another feed-forward network, i.e., FF_aux, which also consumes the overall vector V as input. This feed-forward network has one neuron with the sigmoid activation function in the last layer to estimate the event trigger likelihood score: P(l_aux = 1|S, t) = FF_aux(V). Finally, to train the base model with the auxiliary task, we exploit the binary cross-entropy loss: L_aux = −l_aux log P(l_aux = 1|S, t) − (1 − l_aux) log(1 − P(l_aux = 1|S, t)). Note that the main ED task and the auxiliary EI task are done jointly in a single training process, where the loss L_pred for ED is computed only for the original data O. The loss L_aux, in contrast, will be obtained for both the original and synthetic data in A. Knowledge Consistency: The generated data G is not noise-free. As such, training the ED model on A could lead to inferior performance. To address this issue, as discussed in the introduction, we propose to first learn the anchor knowledge from the original data O, and then use it to guide the model training on A and prevent drastic divergence from the anchor knowledge (i.e., knowledge consistency promotion), thus constraining the noise. Hence, we propose a teacher-student network, in which the teacher is first trained on O to learn the anchor knowledge. The student network will be trained on A afterward, leveraging the consistency guidance with the anchor knowledge induced from the teacher. We also use the student network as the final model for our ED problem in this work. In our framework, both the teacher and the student networks are trained in the multi-task setting with the ED and EI tasks. In particular, the training losses for both ED and EI will be computed based on O for the teacher (the loss to train the teacher is L_teacher = L_pred + α L_aux). In contrast, the combined data A will be used to compute the EI loss for the student, while the ED loss for the student can only be computed on the original data O. As such, we propose to enforce the knowledge consistency between the two networks for both the main task ED and the auxiliary task EI during the training of the student model. First, to achieve the knowledge consistency for ED, we seek to minimize the KL divergence between the teacher-predicted and student-predicted label-probability distributions. Formally, for a sentence S ∈ O, the label-probability distributions of the teacher and the student, i.e., P^t(·|S, t) and P^s(·|S, t) respectively, are employed to compute the KL-divergence loss L_KL = Σ_{l∈L} P^t(l|S, t) log(P^t(l|S, t) / P^s(l|S, t)). By decreasing the KL divergence during the student's training, the model is encouraged to make predictions similar to the teacher's for the same original sentence, thereby preventing noise from misleading the student. Note that, different from traditional teacher-student networks that employ KL divergence to achieve knowledge distillation on unlabeled data (Hinton et al., 2015), the KL divergence in our model is leveraged to enforce knowledge consistency and mitigate noise in labeled data automatically generated by GPT-2.
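As an illustration of how these losses could fit together, here is a minimal PyTorch sketch of the student's per-batch objective (ED cross-entropy on original data, EI binary cross-entropy on the combined data, and the KL-based consistency term). The function signature, tensor shapes and weight values are hypothetical assumptions; the OT-based term described next is omitted here and sketched later.

```python
# Illustrative sketch of the student's per-batch loss (not the authors' implementation).
# ed_logits_* are event-type logits over L, ei_logits_* are scalar trigger logits,
# alpha/beta mirror the trade-off weights used in the paper's combined loss.
import torch
import torch.nn.functional as F

def student_loss(ed_logits_orig, ed_labels,            # ED head outputs / gold types (original data O)
                 ei_logits_all, ei_labels_all,          # EI head outputs / binary labels (A = O U G)
                 teacher_ed_logits_orig,                # frozen teacher ED outputs on the same O batch
                 alpha=0.1, beta=0.05):
    # Main ED task: negative log-likelihood, computed on the original data only.
    l_pred = F.cross_entropy(ed_logits_orig, ed_labels)
    # Auxiliary EI task: binary cross-entropy on both original and generated data.
    l_aux = F.binary_cross_entropy_with_logits(ei_logits_all, ei_labels_all.float())
    # Knowledge consistency for ED: KL(P_teacher || P_student) on the original data.
    p_t = F.softmax(teacher_ed_logits_orig.detach(), dim=-1)
    log_p_s = F.log_softmax(ed_logits_orig, dim=-1)
    l_kl = F.kl_div(log_p_s, p_t, reduction="batchmean")
    return l_pred + alpha * l_aux + beta * l_kl        # the OT-based L_dist term would be added here

# Usage with random tensors (5 original sentences, 33 event types, 10 combined samples):
loss = student_loss(torch.randn(5, 33, requires_grad=True), torch.randint(0, 33, (5,)),
                    torch.randn(10, requires_grad=True), torch.randint(0, 2, (10,)),
                    torch.randn(5, 33))
loss.backward()
```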
Second, for the auxiliary task EI, instead of enforcing the student-teacher knowledge consistency via prediction similarity, we argue that it is more beneficial to leverage the difference between the original data O and the generated data G as anchor knowledge to promote consistency. In particular, we expect that the student, which is trained on A, should discern the same difference between G and O as the teacher, which is trained only on the original data O. Formally, during student training, for each mini-batch, the distances between the original data and the generated data detected by the teacher and the student are denoted by d^T_{O,G} and d^S_{O,G}, respectively. To enforce the O-G distance consistency between the two networks, the following loss is added to the overall loss function: L_dist = |d^T_{O,G} − d^S_{O,G}|, where the distances are computed over each mini-batch of size |B|. The advantage of this novel knowledge consistency enforcement compared to the KL divergence is that it explicitly exploits the different natures of the original and generated data to facilitate the mitigation of noise in the generated data. A remaining question for our proposed knowledge consistency concerns how to assess the difference between the original and the generated data from the perspective of the teacher, i.e., d^T_{O,G}, and of the student, i.e., d^S_{O,G}. In this section, we describe our method from the perspective of the student (the same method is employed for the teacher network). In particular, we define the difference between the original and the generated data as the cost of transforming O into G such that, for the transformed data, the model makes the same predictions as for G. How can we compute the cost of such a transformation? To answer this question, we propose to employ Optimal Transport (OT), which is an established method to find the most efficient transportation (i.e., the transformation with the lowest cost) of one probability distribution into another. Formally, given the probability distributions p(x) and q(y) over the domains X and Y, and the cost function C(x, y): X × Y → R+ for mapping X to Y, OT finds the optimal joint distribution π*(x, y) (over X × Y) with marginals p(x) and q(y), i.e., the cheapest transportation from p(x) to q(y), by solving the following problem: π*(x, y) = argmin_{π ∈ Π(x,y)} ∫_Y ∫_X π(x, y) C(x, y) dx dy (Equation 1), where Π(x, y) is the set of all joint distributions with marginals p(x) and q(y). Note that if the distributions p(x) and q(y) are discrete, the integrals in Equation 1 are replaced with sums, and the joint distribution π*(x, y) is represented by a matrix whose entry (x, y) represents the probability of transforming the data point x ∈ X into y ∈ Y in order to convert the distribution p(x) into q(y). By solving the problem in Equation 1 (in practice, this problem is intractable, so we solve its entropy-based approximation using the Sinkhorn algorithm (Peyre and Cuturi, 2019)), the cost of transforming the discrete distribution p(x) into q(y) (i.e., the Wasserstein distance Dist_W) is obtained as: Dist_W = Σ_{x∈X} Σ_{y∈Y} π*(x, y) C(x, y). In order to utilize OT to compute the transformation cost between O and G, i.e., d^S_{O,G}, we propose to define the domains X and Y as the representation spaces of the sentences in O and G, respectively, obtained from the student network. In particular, a data point x ∈ X represents a sentence X_o ∈ O. Similarly, a data point y ∈ Y stands for a sentence Y_g ∈ G.
To define the cost function C(x, y) for OT, we compute the Euclidean distance between the representation vectors of the sentences X_o and Y_g (obtained by max-pooling over the representations of their words): C(x, y) = ||h^{X_o} − h^{Y_g}||_2, where h^{X_o} = MAX_POOL(h^{X_o}_1, h^{X_o}_2, ...) and h^{Y_g} = MAX_POOL(h^{Y_g}_1, h^{Y_g}_2, ...), and h^{X_o}_i and h^{Y_g}_i are the representation vectors of the i-th words of X_o and Y_g, respectively, obtained from the student's BiLSTM. Also, to define the discrete distribution p(x) for OT over X, we employ the event trigger likelihood score Score_{X_o} for the trigger candidate of each sentence X_o in X, which is returned by the feed-forward network FF^S_aux for the auxiliary task EI in the student model, i.e., Score_{X_o} = FF^S_aux(X_o). Afterward, we apply the softmax function over the scores of the original sentences in the current mini-batch to obtain p(x), i.e., p(x) = Softmax(Score_{X_o}). Similarly, the discrete distribution q(y) is defined as q(y) = Softmax(Score_{Y_g}). To this end, by solving the OT problem in Equation 1 and obtaining the optimal transport plan π*(x, y) under this setup, we can obtain the distance d^S_{O,G}. In the same way, the distance d^T_{O,G} can be computed using the representations and event trigger likelihoods from the teacher network. Note that, in this way, we can integrate both the representation vectors of sentences and the event trigger likelihoods into the distance computation between the data, as motivated in the introduction. Finally, to train the student model, the following combined loss function is used in our framework: L = L_pred + αL_aux + βL_KL + γL_dist, where α, β, and γ are trade-off parameters.
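To ground this distance computation, the following is a small, self-contained numpy sketch (our illustration under stated assumptions, not the authors' implementation) of computing d_{O,G} with entropy-regularized OT via the Sinkhorn algorithm, using softmaxed trigger scores as the marginals and pairwise Euclidean distances between sentence representations as the cost matrix. The regularization strength, iteration count, and toy data are arbitrary choices; during actual training, a differentiable version of this computation would be required.

```python
# Illustrative sketch: the O-G distance via entropy-regularized OT (Sinkhorn).
# Marginals come from softmaxed trigger scores; the cost matrix from Euclidean
# distances between max-pooled sentence representations.
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def sinkhorn_distance(cost, p, q, reg=0.1, n_iters=200):
    """Entropy-regularized OT cost: sum_ij pi*_ij * cost_ij."""
    K = np.exp(-cost / reg)                  # Gibbs kernel
    u = np.ones_like(p)
    for _ in range(n_iters):                 # alternating marginal projections
        v = q / (K.T @ u)
        u = p / (K @ v)
    plan = np.diag(u) @ K @ np.diag(v)       # approximate optimal transport plan pi*
    return (plan * cost).sum()

def og_distance(reps_O, reps_G, scores_O, scores_G):
    # Cost: pairwise Euclidean distance between sentence representation vectors.
    cost = np.linalg.norm(reps_O[:, None, :] - reps_G[None, :, :], axis=-1)
    # Marginals: softmax of event-trigger likelihood scores within the mini-batch.
    return sinkhorn_distance(cost, softmax(scores_O), softmax(scores_G))

# Toy mini-batch: 4 original and 4 generated sentences with 8-dim representations.
rng = np.random.default_rng(0)
d_student = og_distance(rng.normal(size=(4, 8)), rng.normal(size=(4, 8)),
                        rng.normal(size=4), rng.normal(size=4))
d_teacher = og_distance(rng.normal(size=(4, 8)), rng.normal(size=(4, 8)),
                        rng.normal(size=4), rng.normal(size=4))
l_dist = abs(d_teacher - d_student)          # the L_dist consistency term from above
```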
Datasets, Baselines & Hyper-Parameters To evaluate the effectiveness of the proposed model, called the GPT-based data augmentation model for ED with OT (GPTEDOT), we conduct experiments on the following ED datasets: ACE 2005 (Walker et al., 2006): This dataset annotates 599 documents for 33 event types that cover different text domains (e.g., news, weblog or conversation documents). We use the same pre-processing script and data split as prior works (Lai et al., 2020c; Tong et al., 2020b) to achieve fair comparisons. In particular, the data split involves 529/30/40 articles for the train/dev/test sets, respectively. For this dataset, we compare our model with prior state-of-the-art models reported in the recent works (Lai et al., 2020c; Tong et al., 2020b), including BERT-based models such as DMBERT, AD-DMBERT (Wang et al., 2019), DRMM, EKD (Tong et al., 2020b), and GatedGCN (Lai et al., 2020c). CySecED (Man Duc Trong et al., 2020): This dataset provides 8,014 event triggers for 30 event types from 300 articles of the cybersecurity domain (i.e., cybersecurity events). We follow the same pre-processing and data split as the original work (Man Duc Trong et al., 2020), with 240/30/30 documents for the train/dev/test sets. To be consistent with the other experiments and to facilitate the data generation based on GPT-2, the experiments on CySecED are conducted at the sentence level, where the inputs for the models are sentences. As such, we employ the state-of-the-art sentence-level models reported in (Man Duc Trong et al., 2020), i.e., DMBERT (Wang et al., 2019) and BERT-ED, as the baselines for CySecED. RAMS (Ebner et al., 2020): This dataset annotates 9,124 event triggers for 38 event types. We use the official data split with 3,194, 399, and 400 documents for training, development, and testing, respectively. We also perform ED at the sentence level on this dataset. For the baselines, we utilize recent state-of-the-art BERT-based models for ED, i.e., DMBERT (Wang et al., 2019) and GatedGCN (Lai et al., 2020c). For a fair comparison, the performance of these baseline models is obtained via their official implementations from the original papers, fine-tuned for RAMS. For each dataset, we use its training and development data to fine-tune the GPT-2 model. We tune the hyper-parameters for the proposed teacher-student architecture using a random search. All the hyper-parameters are selected based on the F1 scores on the development set of the ACE 2005 dataset. The same hyper-parameters from this tuning are then applied to the other datasets for consistency. In our model, we use the small version of GPT-2 to generate data. In the base model, we use BERT_base, 300 dimensions in the hidden states of the BiLSTM, and 2 layers of feed-forward neural networks with 200 hidden dimensions to predict events. The trade-off parameters τ, α, β and γ are set to 0.1, 0.1, 0.05, and 0.08, respectively. The learning rate is set to 0.3 for the Adam optimizer, and a batch size of 50 is employed during training. Finally, note that we do not update the BERT model for word embeddings in this work due to its better performance on the development data of ACE 2005. Results of experiments on the CySecED test set are presented in Table 2. This table reveals that the teacher-student architecture GPTEDOT significantly improves the performance over previous state-of-the-art models for ED in the cybersecurity domain. This is important as it shows that the proposed model is effective in different domains. In addition, our results also suggest that GPT-2 can be employed to generate effective data for ED in domains where data annotation for ED requires extensive domain expertise and is expensive to obtain, such as cybersecurity events. Moreover, the higher margin of improvement for GPTEDOT on CySecED compared to that on the ACE 2005 dataset suggests the necessity of using more training data for ED in technical domains. Finally, results of experiments on the RAMS test set are reported in Table 3. Consistent with our experiments on ACE 2005 and CySecED, our proposed model achieves significantly higher performance than existing state-of-the-art models (p < 0.01), thus further confirming the advantages of GPTEDOT for ED. Ablation Study This ablation study evaluates the effectiveness of the different components in GPTEDOT for ED. First, for the importance of the generated data G from GPT-2 and of the teacher-student architecture to mitigate noise, we examine the following baselines: (1) Base_O: This baseline is the base model trained only on the original data O, thus being equivalent to the teacher model and not using the student model; and (2) Base_A: This baseline trains the base model on the combination of the original and generated data, i.e., A, using the multi-task learning setting (i.e., the teacher model is excluded). Second, for the multi-task learning design in the teacher network, we explore the following ablated models: (3) Teacher−A: This baseline removes the auxiliary task EI from the teacher in GPTEDOT. As such, the OT-based knowledge consistency for EI is also eliminated; (4) Teacher−M: In this model, the main task ED is not utilized to train the teacher, so the corresponding KL-based knowledge consistency for ED is also removed.
Third, for the design of the knowledge consistency losses in the student network, we evaluate the following baselines: (5) Student−OT: This ablated model eliminates the OT-based knowledge consistency loss for the auxiliary task EI in the student's training of GPTEDOT (the auxiliary task is still employed for the teacher and the student); (6) Student−KL: For this model, the KL-based knowledge consistency for the main task ED is ignored in the student's training; (7) Student+OT: In this baseline, we use OT for the knowledge consistency on both the main and the auxiliary tasks. Here, for the main task ED, the cost function C(x, y) for OT is still obtained via the Euclidean distances between representation vectors, while the distributions p(x) and q(y) are based on the maximum probabilities of the label-probability distributions P^s(·|X_o, t_o) and P^s(·|Y_g, t_g) for the ED task; and (8) Student+KL: This baseline employs the KL divergence between the models' predicted distributions to enforce the teacher-student consistency for both the main task and the auxiliary task. To this end, for the auxiliary task EI, we convert the final activation of FF_aux into a distribution with two data points (i.e., [FF_aux(X), 1 − FF_aux(X)]) to compute the KL divergence between the teacher and the student. Finally, for the importance of the Euclidean distances and event trigger likelihoods in the OT-based distance between O and G for knowledge consistency in EI, we investigate two baselines: (9) OT−Rep: Here, to compute OT, we use a constant cost between every pair of sentences, i.e., C(x, y) = 1 (i.e., ignoring representation-based distances); and (10) OT−Score: This model uses uniform distributions for p(x) and q(y) to compute the OT (i.e., ignoring event trigger likelihoods). We report the performance of the models (on the ACE 2005 development set) for the ablation study in Table 4. There are several observations from this table. First, the generated data G and the teacher-student architecture are necessary for GPTEDOT to achieve the highest performance. In particular, compared with Base_O, the better performance of GPTEDOT indicates the benefits of the GPT-generated data. Moreover, the better performance of Base_O over Base_A reveals that the simple combination of the synthetic and original data, without any effective method to mitigate noise, might be harmful. Second, the lower performance of Teacher−A and Teacher−M shows that both the auxiliary and the main tasks (i.e., multi-task learning) in the teacher are integral to producing the best performance. Third, the choice of methods to promote knowledge consistency is important, and the proposed combination of KL and OT for the ED and EI tasks (respectively) is necessary. In particular, removing or replacing either of them with the other one (i.e., Student+OT and Student+KL) would decrease the performance significantly. Table 5 (sample sentences generated by the fine-tuned GPT-2 model for each dataset): ACE 2005: "I was totally shocked by the court's decision to agree with Sam Sloan after he TRG_s sued TRG_e his children."; CySecED: "According to the last update by the company, the following techniques are used to protect against such TRG_s malware TRG_e."; RAMS: "The Russian officials TRG_s vowed TRG_e to bomb the ISIS bases after the last week's TRG_s attack TRG_e." Table 6 (error types in the generated data, with examples and proportions): (unnamed error type) "Do you think the TRG_s attack TRG_e will happen to you or do you think the TRG_s attack TRG_e will happen to you?" 15%; Inconsistency: "this morning we were watching the news and heard the news about the tragic TRG_s death TRG_e of a young boy and her mother in Iraq." 12%; Missing Labels: "Aaron Tramailer's story is the story of a woman who was forced into suicide." 29%; Incorrect Labels: "The SEC is a very good place to TRG_s hide TRG_e money." 26%.
Finally, in the proposed consistency method based on OT for EI, it is beneficial to employ both the representation-level distances (i.e., OT−Rep) and the models' predictions for event trigger likelihoods (i.e., OT−Score), as removing either of them hurts the performance. Analysis To provide more insights into the quality of the synthetic data G, we provide samples of sentences generated by the fine-tuned GPT-2 model on each dataset in Table 5. This table illustrates that the generated sentences belong to the domains of the original data (e.g., the cybersecurity domain for CySecED). As such, combining synthetic data with original data is promising for improving ED performance, as demonstrated in our experiments. As discussed earlier, the generated data G is not free of noise. In order to better understand the types of errors existing in the generated sentences, we manually assess 200 sentences randomly selected from the set G generated by the fine-tuned GPT-2 model on the ACE 2005 dataset. We categorize the errors into five types and provide their proportions, along with an example for each error type, in Table 6. This table shows that the majority of errors are due to missing labels (i.e., no special tokens TRG_s and TRG_e are generated) or incorrect labels (i.e., marked words are not event triggers of the types of interest) produced by the language model. Finally, to study the importance of the size of the generated data used to augment the training set for ED, we conduct an experiment in which different numbers of generated samples in G (for the ACE 2005 dataset) are combined with the original data O. The results are shown in Table 7. According to this table, the highest performance of the proposed model is achieved when the numbers of generated and original samples are equal. More specifically, decreasing the number of generated samples limits the benefits of data augmentation. On the other hand, increasing the size of the generated data might introduce extensive noise and become harmful to the ED models. Related Work In this work, we propose a novel method to augment training data for ED by exploiting the powerful language model GPT-2 to automatically generate new samples. Leveraging GPT-2 to augment training data has also been studied for other NLP tasks recently (e.g., relation extraction, commonsense reasoning) (Papanikolaou and Pierleoni, 2020; Zhang et al., 2020a; Madaan et al., 2020; Bosselut et al., 2019; Kumar et al., 2020; Anaby-Tavor et al., 2020; Peng et al., 2020). However, none of those works has explored GPT-2 for ED. In addition, existing methods only resort to heuristics to filter out noisy samples generated by GPT-2. In contrast, we propose a novel differentiable method capable of preventing noise from diverging the representation vectors of the models for ED. Conclusion We propose a novel method for augmenting training data for ED using samples generated by the language model GPT-2. To avoid noise in the generated data, we propose a novel teacher-student architecture in a multi-task learning framework. We introduce a mechanism for knowledge consistency enforcement, based on optimal transport, to mitigate noise from the generated data. Experiments on various ED benchmark datasets demonstrate the effectiveness of the proposed method.
Cotinine Enhances Fear Extinction and Astrocyte Survival by Mechanisms Involving the Nicotinic Acetylcholine Receptors Signaling Fear memory extinction (FE) is an important therapeutic goal for posttraumatic stress disorder (PTSD). Cotinine facilitates FE in rodents, in part due to its inhibitory effect on the amygdala via the glutamatergic projections from the medial prefrontal cortex (mPFC). The cellular and behavioral effects of infusing cotinine into the mPFC on FE, astroglial survival, and the expression of bone morphogenetic proteins (BMP) 2 and 8 were assessed in C57BL/6 conditioned male mice. The role of the α4β2 and α7 nicotinic acetylcholine receptors (nAChRs) in cotinine's actions was also investigated. Cotinine infused into the mPFC enhanced contextual FE and decreased BMP8 expression by a mechanism dependent on the α7nAChRs. In addition, cotinine increased BMP2 expression and prevented the loss of GFAP+ astrocytes in a form independent of the α7nAChRs but dependent on the α4β2 nAChRs. This evidence suggests that cotinine exerts its effect on FE by modulating nAChR signaling in the brain. INTRODUCTION Posttraumatic stress disorder (PTSD) represents a consequential set of reactions to a traumatic event(s) perceived as life-threatening and resulting in the person experiencing feelings of intense helplessness and horror. Management strategies for PTSD include therapies to facilitate fear memory extinction (FE), which has been shown to inhibit fear responses (Rutt et al., 2018), but there is a paucity of evidence to support long-term pharmacological and psychotherapeutic treatment options (Wessa and Flor, 2007). Furthermore, even if FE is achieved with treatment, between 60 and 72% of patients with PTSD re-experience a return of their symptoms, or suffer from them for many years, raising questions about the mechanisms underlying effective short- and long-term FE (Steenkamp et al., 2015). Several mechanistic studies suggest that the medial prefrontal cortex (mPFC) plays a role in the consolidation of FE learning (Quirk et al., 2000). Previous studies revealed that cotinine, a positive allosteric modulator (PAM) type II of the α7 nicotinic acetylcholine receptors (nAChRs), enhanced contextual FE by mechanisms involving the activation of the extracellular signal-regulated kinases (ERKs) (Zeitlin et al., 2012) and calcineurin A (CaA) (Alvarez-Ricartes et al., 2018b) in the hippocampus (HIP) of mice (Mendoza et al., 2018). In addition, orally administered cotinine prevented the loss of synaptic density in the PFC and HIP of chronically stressed male mice (Grizzell et al., 2014). PAMs type II of the nAChRs bind to allosteric sites, inducing a positive effect on receptor activity by stabilizing its active conformation and preventing its desensitization. This concept describes well the effect of cotinine, which is a weak agonist of the α7nAChR at the orthosteric site but positively modulates the activity of the human α7 receptor, increasing the amplitude of the currents induced by nicotine and acetylcholine in Xenopus oocytes expressing the human receptor (Terry et al., 2015). Altogether, we propose that cotinine is a positive allosteric modulator of the receptor, although we cannot rule out that cotinine may affect other effectors that indirectly act on the nAChRs to modulate their activity.
To our knowledge, the Xenopus oocyte model used by Terry and others is one of the best models to characterize the modulation of the nicotinic receptors by new drugs, because it minimizes the risk of effects on other human effectors that are not present in the oocyte. However, structure-based docking studies investigating the binding of cotinine to the α7 nAChRs, combined with cell-based binding and activity studies testing the docking hits using mutagenesis, could be used to determine the predicted direct interaction of cotinine with the nicotinic receptor subunits as well as the nature of the allosteric modulation involved. We have found that cotinine alters the mRNA expression of BMP2 and BMP8 in the brain of restrained mice (unpublished observation). Because BMPs control the expression of stress factors, we hypothesized that their expression could also be affected by cotinine during fear extinction by mechanisms dependent on the nAChRs. The neurobiological mechanisms underlying the psychological and physical consequences of trauma affecting FE appear to also involve changes in HPA axis components affecting astrogenesis (Perez-Urrutia et al., 2017). In this respect, the bone morphogenetic proteins (BMPs), members of the transforming growth factor β (TGFβ) family that are broadly expressed in the brain, play an important function in the CNS, regulating astrogenesis (Imura et al., 2008) and neurogenesis during development (Shaked et al., 2008; Morikawa et al., 2016) and adulthood (Colak et al., 2008), and their function may be affected in the brain by PTSD. The primary purpose of this study was to determine the cellular and behavioral effects of infusing cotinine into the PFC on FE and on the expression of bone morphogenetic proteins (BMP) 2 and 8 in C57BL/6 male mice. A secondary purpose was to identify the role of the α4β2 and α7 nAChRs in cotinine's behavioral, cellular and molecular actions. This investigation revealed that the infusion of cotinine into the mPFC enhanced contextual FE, glial fibrillary acidic protein (GFAP)+ astrocyte survival, and BMP2/8 expression in the brain of conditioned mice by a mechanism dependent on the nAChRs. Animals Male C57BL/6 mice, 3-4 months of age and 30-35 g of weight, were obtained from the University of Chile (Santiago, Chile) and maintained on a 12 h/12 h light-dark cycle with ad libitum access to food and water. Mice were maintained in groups (2-4 mice per cage) in a controlled environment with average temperatures between 23 and 25 °C and 50-70% humidity. The protocols were designed to minimize animal suffering and reduce the number of animals used. Animal studies were performed in compliance with the ARRIVE guidelines (Kilkenny et al., 2010) and after approval by the Institutional Animal Care and Use Committee of the University of San Sebastián, Chile, according to the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health (NIH publication 80-23/96). The authors declare that every effort was made to reduce the number of mice used and their level of discomfort and suffering. Experimental Design This study investigated the effect of cotinine infused directly into the mPFC on depressive-like behavior, fear consolidation and extinction, as well as the expression of BMPs in the PFC and HIP of mice subjected or not to fear conditioning (FC) and FE (Figure 1).
Infusion of Drugs Into the Prefrontal Cortex The infusion of drugs into the mPFC was performed using osmotic pumps as previously described (Besson et al., 2007). First, mice were subjected to behavioral analysis and then to stereotaxic surgery under anesthesia, with a mix including ketamine, xylazine, and acepromazine (80, 8 and 1 mg/kg), to implant a cannula in the medial prefrontal cortex (AP: +2.0 mm; L: 1.5 mm; V: 3.5 mm; angle 30°) attached to an Alzet® osmotic pump (Alzet, Cupertino, CA, United States) filled with 100 µl of PBS alone (vehicle) or a drug solution in PBS. The delivery rate of the device was 0.25 µl/h and delivery was maintained for 14 days, until the end of the experiments. One week after surgery, mice underwent FC and FE. Behavioral Analysis One week following pump implantation, mice were conditioned and, 24 h later, tested in the fear retention test. On the next day, mice were subjected to daily FE trials for 5 days. Following the FE trials, mice were tested for locomotor function using the open field test. Fear Conditioning Fear conditioning was performed as reported (Zeitlin et al., 2012). The conditioning chamber (33 cm × 20 cm × 22 cm) is surrounded by a sound-attenuating box that generates background white noise (72 dB) and contains a camera at the top of the chamber connected to a computer. The conditioning chamber contains a speaker on one side and a 24 V light on the other, with a 36-bar insulated shock grid floor. Each mouse was placed in the chamber for 2 min before the onset of a tone lasting 30 s (2,800 Hz and 85 dB). In the last 2 s of the tone, mice received a foot shock of 1 mA, were kept in the conditioning chamber for 2 min, and were then returned to their home cages. The chamber was sanitized with 70% ethanol and dried after each trial. Freezing behavior was defined as the absence of all movement (except for movement secondary to breathing) and was measured using the Any-maze® software (Stoelting, CO, United States). Fear Retention and Extinction Tests Fear retention and FE experiments were performed as previously described, using the same cohorts of mice. To assess fear retention, mice underwent re-exposure to the conditioning chamber, in the absence of the unconditioned stimuli (foot shock and auditory cues), for 3 min in daily extinction trials. The extinction trials continued until the decrease in freezing behavior reached an asymptote. Open Field Test The open field (OF) test is a reliable test for evaluating locomotor activity and exploratory behavior in mice (Walsh and Cummins, 1976). A period of pre-test habituation was carried out, in which the mice were placed individually in the apparatus for 10 min on two consecutive days. The OF device consisted of a square, open-top arena (40 cm × 40 cm × 35 cm) that was virtually divided into two areas: peripheral and central zones. The mice were placed in the corner of the arena and allowed to roam freely for 25 min. The time (min), distance (m), and speed (m/s) of the mice in the arena were recorded. The behavior of the mice was documented using a camera located above the arena and analyzed with the Any-Maze software (Stoelting). Brain Tissue Preparation For the protein studies, after euthanasia by cervical dislocation, the brains were removed, and the left hemispheres were dissected and quickly frozen for Western blot analyses. The right hemisphere was placed in 4% paraformaldehyde in PBS (pH 7.4) at 4 °C for 24 h and then embedded in 2% agarose molds.
Brain regions of interest were located using the Paxinos atlas (Paxinos and Franklin, 2001), and serial sections of 20 µm (n ≥ 3/mouse) were collected using a Leica VT1000S vibratome and mounted on positively charged slides (Biocare Medical, Concord, CA, United States) for the immunohistochemistry (IHC) analysis. Western Blot Analysis The protein analysis was performed using Western blot according to our previously published protocol (Alvarez-Ricartes et al., 2018b). The brain tissues were homogenized by sonication in lysis buffer with phosphatase and protease inhibitors (Cell Signaling Technology, Danvers, MA, United States). The homogenates were centrifuged, and aliquots of the supernatants were used for the determination of protein concentration using the Bio-Rad kit (Bio-Rad, Hercules, CA, United States). Brain extracts in SDS-denaturing loading buffer with equal amounts of protein were separated by electrophoresis in an SDS-PAGE (4-20%) gel and transferred to nitrocellulose membranes. After the transfer, membranes were blocked with TBS containing 0.1% Tween 20 (TBST) and 10% (w/v) skim milk for 45 min and incubated with primary and secondary antibodies. The primary antibodies used included the rabbit polyclonal anti-BMP8 (United States Biological Life Sciences, Marblehead, MA, United States, #221074), rabbit polyclonal anti-BMP2 (Invitrogen, PH131215), anti-calcineurin A (Cell Signaling Technology) and mouse monoclonal anti-GFAP (Bio SB, BSB5567, Santa Barbara, CA, United States). For protein loading and transfer control, a rabbit polyclonal antibody against β-actin (#662102, Biolegend, San Diego, CA, United States) and a rabbit monoclonal antibody against β-tubulin (Cell Signaling Technology) were used. After incubation with antibodies, membranes were washed and analyzed for immunoreactive (IR) bands using the ECL imaging system and the NIH ImageJ software (National Institutes of Health, Bethesda, MD, United States). GFAP+ Cells and BMP8 Immunohistochemical Analysis The analysis of GFAP+ cells and BMP8 IR was performed as previously described. Sagittal sections of the mPFC (AP: +1.8 mm, ML: 0.0 mm, DV: −2.5 mm) and the HIP (approx. bregma −4.08 mm, interaural 4.92 mm) were collected in PBS and processed for GFAP and BMP8 IR. Brain slices were immersed in xylene and then in ethanol baths of decreasing concentration for hydration. Slices were then subjected to antigen retrieval in citrate buffer (pH 6) in a pressure cooker (Biocare Medical, Walnut Creek, CA, United States) for 30 min, incubated in a 3% hydrogen peroxide solution for 5 min, washed with PBS, and blocked with a horse serum solution (Vectastain Elite ABC, Vector Laboratories, Burlingame, CA, United States) for 10 min at room temperature (RT). After blocking, sections were washed in PBS and incubated with an antibody against GFAP 1:100 (Sigma) or BMP8 1:200 (Cell Signaling Technology) for 1 h at RT. After washing with PBS, sections were incubated with a biotinylated secondary antibody for 10 min. Then, sections were washed with PBS and incubated with the amplifier solution from the Vectastain Elite kit for 10 min at RT. The reaction was visualized using ImmunoDetector DAB (SB Bio Inc., Santa Barbara, CA, United States). For counterstaining, sections were stained with hematoxylin for 30 s, dehydrated in baths containing ascending concentrations of alcohol and xylene, and mounted with synthetic resin.
For the IR analysis, for each mouse, three digital images were randomly selected at 40X magnification in the areas of interest (hippocampus and frontal cortex) (n = 5 mice/condition). The images were taken using a digital camera attached to a light microscope (Micrometrics, MilesCo Scientific, Princeton, MN, United States) operated by commercial software (Micrometrics SE Premium). The area of immunolabeling was determined by delimiting the IR areas using the ImageJ software (National Institutes of Health). For these analyses, GFAP+ astrocytes were selected randomly from the frontal cortex and the CA1, CA2, and dentate gyrus regions of the hippocampus and quantified. Using a digital camera on an inverted microscope, black and white images of GFAP+ astrocytes were obtained and processed with the ImageJ software. Using a 20X objective, cells were chosen randomly in the same area selected for immunostaining, and the binary overlay of a cell was created by thresholding. For all images, the threshold value was established at the level at which the binary overlay entirely enclosed the cell body and projections. All pixels above the threshold value were considered as belonging to the cell image. Finally, the binary silhouette of the whole cell was reduced to its one-pixel outline for the estimation of fractal dimensions with the FracLac 2.5 ImageJ plugin (Karperien and Jelinek, 2015). Statistical Analysis All values are expressed as mean ± standard error of the mean. The behavioral differences between treatment groups were assessed by one-way or two-way analysis of variance (ANOVA) with post hoc Tukey analysis. P < 0.05 was considered statistically significant. All statistical analyses were performed with the software GraphPad Prism 8 (GraphPad Software Inc., San Diego, CA, United States).
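The authors report using GraphPad Prism 8 for these tests; purely as an illustration of the same kind of analysis, a one-way ANOVA followed by Tukey post hoc comparisons could be run in Python as sketched below. The freezing values, group labels, and group sizes are invented placeholders, not data from this study.

```python
# Illustrative sketch only: one-way ANOVA with Tukey post hoc test, analogous to the
# Prism workflow described above. All numbers below are made-up placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical percent-freezing scores for three treatment groups.
pbs      = np.array([62.0, 58.5, 70.1, 65.3, 61.2])
cotinine = np.array([41.0, 38.2, 45.6, 39.9, 43.3])
cot_mla  = np.array([66.5, 71.0, 63.2, 68.8, 64.9])

# One-way ANOVA across groups.
f_stat, p_value = stats.f_oneway(pbs, cotinine, cot_mla)
print(f"F = {f_stat:.2f}, P = {p_value:.4f}")

# Tukey's HSD post hoc test for pairwise group comparisons (alpha = 0.05).
scores = np.concatenate([pbs, cotinine, cot_mla])
groups = ["PBS"] * len(pbs) + ["Cot"] * len(cotinine) + ["Cot+MLA"] * len(cot_mla)
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))
```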
Cotinine Infused in the mPFC Enhanced Fear Extinction in the Conditioned Mice To determine whether the infusion of cotinine into the mPFC could alter FE, cotinine was infused into this region 1 week prior to FC and FE. In the extinction trials, all PBS-treated mice showed a decrease in freezing behavior with time that reached a steady state by day 5. A repeated-measures ANOVA throughout the 5 days of extinction revealed a significant effect of treatment [F(3,23) = 7.231, P = 0.03] and day [F(4,92) = 15.22, P < 0.0001] on the freezing behavior. Post hoc Tukey's multiple comparison test revealed that the infusion of cotinine alone did not change the acquisition or recall of the fear memory. In the retention test, cotinine-treated conditioned mice showed a freezing reaction similar to that of PBS-treated conditioned mice (P = 0.47). In the extinction trials, mice treated with cotinine plus DHβE showed a trend toward reduced retention of fear memory when compared to mice treated with cotinine alone, but the difference did not reach statistical significance. Treatment with DHβE alone induced a significant decrease in FE when compared to DHβE plus cotinine (Figure 2A, P < 0.05). Conversely, mice treated with cotinine plus MLA showed a significant impairment in fear extinction when compared with PBS-treated and cotinine-treated fear-conditioned mice (P < 0.05) (Figure 2B). The enhancement of extinction in cotinine-treated mice was lost in mice co-treated with cotinine plus MLA. The reduction in FE was even more pronounced in the mice treated with MLA alone than in the mice treated with cotinine plus MLA (Figure 2B). Effect of Intracortical Cotinine on Locomotor Activity in the Open Field Test of Conditioned Mice To determine the changes in locomotor activity induced by the infusion of drugs using stereotaxic techniques, mice were tested using the OF test for 25 min. The results did not show significant differences between treatment groups in distance travelled [F(4,26) = 0.981, P > 0.05; Figure 3A] or speed [F(4,26) = 0.9919, P > 0.05] (Figures 2C,D). The infusion of cotinine in the mPFC of the conditioned mice prevented the loss of astrocytes, with significantly higher GFAP IR in the CA3 (P < 0.05), DG (P < 0.005), and PFC (P < 0.0001) than that present in the PBS-treated conditioned mice (Figure 3B). The simultaneous treatment of the conditioned mice with cotinine plus MLA in the PFC did not prevent the effect of cotinine in enhancing GFAP IR; however, the infusion of DHβE prevented the increase in GFAP IR induced by cotinine in the conditioned mice, which showed GFAP IR not significantly different from that of the PBS-treated conditioned mice (P < 0.05) and significantly different from that of the inhibitor alone (P < 0.001). Cotinine Downregulated BMP8 Expression in the Prefrontal Cortex of the Conditioned Mice The analysis of the protein expression of BMP8 in the PFC revealed a significant change in the expression of BMP8 between treatment groups [one-way ANOVA, F(4,56) = 12.33, P < 0.0001]. Post hoc analysis showed that after FE, the total BMP8 IR was not different between control non-conditioned mice and the conditioned mice (ns, P = 0.651). However, after FE the cotinine-treated mice showed significantly lower levels of BMP8 IR than PBS-treated mice in the PFC (75% decrease, **P < 0.005) (Figures 4A,B). Conditioned mice treated with cotinine plus MLA showed BMP8 IR levels in the mPFC that were not significantly different from those of conditioned mice infused with cotinine alone, or non-stressed mice (P > 0.05) (Figures 4A,B). The antagonism of the α4β2 receptors with DHβE in the mPFC did not prevent the downregulation of BMP8 induced by cotinine. Effect of Cotinine on BMP2 Levels in the Prefrontal Cortex of the Conditioned Mice The results demonstrate a significant difference in the expression of the mature form of BMP2 (mBMP2) between treatment groups [one-way ANOVA, F(2,15) = 12.69, P < 0.001]. Post hoc analysis showed that cotinine-treated conditioned mice had significantly higher levels of mBMP2 IR than the vehicle-treated conditioned and non-conditioned mice in the PFC (P < 0.001) (Figure 5A). The evaluation of the effect of the antagonism of the nAChRs on cotinine's actions revealed that neither MLA nor DHβE prevented the increase induced by cotinine in BMP2 levels in the mPFC (P > 0.05) (Figure 5A and Supplementary Figures 1a-c). Effect of Cotinine on Calcineurin A Levels in the Prefrontal Cortex of Conditioned Mice It has been shown that calcineurin A (CaA) is involved in FE and that cotinine, like other antidepressants, increases its expression in the HIP (Alvarez-Ricartes et al., 2018b). The results showed a significant difference in CaA expression between treatment groups [one-way ANOVA, F(4,32) = 10.44, P < 0.0001]. FIGURE 2 | Effect of cotinine infused into the prefrontal cortex on fear conditioning and extinction.
Male mice, treated as described in the Materials and Methods section, were subjected to fear conditioning (FC) (A) and fear extinction (B) for five consecutive days in the absence or presence of cotinine (Cot) treatment alone or plus MLA (α7 nAChR antagonist) or DHβE (α4β2 nAChR antagonist). Locomotor activity was also tested by measuring distance travelled (C) and speed (D) in the open field test. Statistical significance was evaluated using one-way ANOVA with Tukey post-test. ns, not significant; **highly significant, P < 0.01. The post hoc analysis demonstrated that, after FE, the cotinine-treated conditioned mice showed a higher level of CaA than the conditioned mice treated with PBS in the PFC (77% increase, P < 0.05). The evaluation of the effect of the antagonism of the nAChRs on cotinine's actions revealed that MLA prevented the increase induced by cotinine in CaA levels in the mPFC (P < 0.001) (Figure 5B and Supplementary Figures 2a-c). DISCUSSION The understanding of the molecular mechanisms involved in cotinine's action is fundamental to discovering new therapeutic targets to promote FE in affected individuals. Here, it was found that cotinine infusion in the mPFC enhanced contextual FE and prevented astrocyte loss by mechanisms involving the α7 and α4β2 nAChRs, respectively. In addition, it was found that cotinine increased BMP2 and decreased BMP8 expression in the mPFC, as evaluated after FE. The evidence also suggests that cotinine acting on the nAChRs is required to enhance FE by modulating the activity of the mPFC and its connectivity with other regions of the fear loop such as the AMY and the HIP. The inhibition of the α4β2 nAChR with DHβE alone did not have the same effect as cotinine plus DHβE, indicating that the changes in behavior induced by the concomitant inhibition of the α4β2 receptors during cotinine treatment were not the result of the effect of the inhibitor itself, but of the inhibition of cotinine acting on this nicotinic receptor. We propose that DHβE alone freezes the receptor in its desensitized state, but that cotinine inhibits this effect by altering the receptor protein conformation. Previous studies showed that nicotine ameliorated fear conditioning that had been blocked by glutamate receptor inhibitors such as MK-801, and that infusion of DHβE into the HIP negatively affected the positive effects of nicotine on contextual FC (Andre et al., 2008). It has been proposed that different processes in the mPFC are involved in the acquisition and recall of fear learning. Because several neurons containing nAChRs project from the mPFC to the HIP and AMY (Pitkanen et al., 2000), nAChR antagonists may interrupt the connectivity between the mPFC and these regions, which are also involved in FE. The mPFC projects to several cortical and subcortical structures, including the AMY, insular cortex, lateral hypothalamus, periaqueductal gray area and rostral ventrolateral medulla (Vertes, 2004, 2006; Hoover and Vertes, 2007). Previous studies revealed that lesions of the infralimbic cortex (IL) in the mPFC resulted in a significant spontaneous recovery of conditioned fear responses after extinction in rats (Quirk et al., 2000). FIGURE 3 | Cotinine prevented astrocyte loss in the prefrontal cortex and hippocampal formation of conditioned mice.
After fear extinction (FE), mice (n = 5-8 mice/group) infused with cotinine (Cot) or Cot plus MLA, but not with Cot plus DHβE, showed higher GFAP immunoreactivity (IR) in the hippocampal regions [CA1, CA3 and the dentate gyrus (DG)] and the medial prefrontal cortex (mPFC), indicating an α4β2 nAChR-dependent mechanism. In the dentate gyrus (DG), a significant effect of the α7nAChR antagonist was also observed. (A) The pictures depict representative immunohistochemical analyses of the brain regions analyzed using a mouse anti-GFAP antibody; (B) plots representing the IR area in the brain tissues shown in panel A. Statistical significance was evaluated using one-way ANOVA with Tukey post-test. ns, not significant; ***highly significant, P < 0.001. FIGURE 4 | Cotinine decreased BMP8 expression in the prefrontal cortex of conditioned mice. Cotinine (Cot) almost suppressed the expression of BMP8 in the prefrontal cortex of mice, as detected after fear extinction (FE). Cot- or Cot plus DHβE-infused mice (n = 5-8/group) showed significantly lower levels of BMP8 than mice infused with PBS or Cot plus MLA. (A) The plot represents BMP8 immunoreactivity (IR) levels after treatments. (B) The pictures depict BMP8 IR in the medial prefrontal cortex (mPFC) as measured after treatments. Cotinine inhibited BMP8 expression in the mPFC of mice by an α7nAChR-dependent mechanism. Statistical significance was evaluated using one-way ANOVA with Tukey post-test. ns, not significant; **highly significant, P < 0.01. ES, exposed to stress (FC). Changes in IL synaptic activity occur during the acquisition (Muigg et al., 2008) and expression of extinction (Orsini et al., 2011, 2013; Orsini and Maren, 2012). Recent findings indicate that the IL targets the basolateral AMY, which in turn recruits the intercalated cells to suppress fear (Strobel et al., 2015). In addition to the mPFC, the HIP is important for the retrieval of FE memory associated with context. The modulation of fear renewal, with the expression of context-dependent fear memories after extinction, involves enhanced activity of hippocampal neurons projecting to the AMY and mPFC (Knapska and Maren, 2009; Knapska et al., 2012). In agreement with this evidence, it has been shown that cotinine, by acting mainly in the PFC, can positively influence astrocyte survival in the HIP. Previous studies found that infusion of MLA or DHβE into the mPFC enhanced contextual FC. The data suggest that most likely the α4β2 nAChRs mediate the enhancement of contextual fear conditioning by nicotine (Raybuck et al., 2008; Gould, 2009, 2010). Here, it was found that retention of fear memory was not affected by infusion of cotinine into the mPFC; however, cotinine plus the selective inhibitor of the α7nAChR induced higher retention of the fear memory. Conversely, the infusion of cotinine plus DHβE inhibited the fear response to the context in the retention test. Cotinine could prevent the enhancement of FC induced by DHβE by acting on the α4β2 nAChRs in the mPFC and preventing the conformational changes induced by DHβE that result in closed or resting states. Future studies using siRNA technology and additional antagonists would be required to rule out unspecific effects of these antagonists on other cellular factors.
Such unspecific effects are unlikely, considering that the pharmacological profile of these nicotinic antagonists is well known and the doses used are in the range of doses previously investigated in the same strain of mice at the same ages. More interestingly, a previous study has shown that the positive modulation of the α7nAChR in astrocytes inhibits the inflammatory response induced by LPS (Patel et al., 2017). Similarly, we found effects of cotinine on astrocyte structure and numbers. Astrocytes represent more than 60% of the cells in the brain and have many functions in the CNS. These functions include modulating the blood-brain barrier, supporting neuronal plasticity and survival, providing neurons with energy molecules and trophic factors, and removing glutamate from the extracellular space (Ashton et al., 2012; Honsek et al., 2012; Navarrete et al., 2012). Astrocyte function is affected in PTSD, with stress significantly decreasing the survival of astrocytes (Imbe et al., 2012; Perez-Urrutia et al., 2017). A substantial reduction in the number of astrocytes in the HIP and PFC of fear-conditioned and restrained rodents has been found. However, intranasal (IN) cotinine (Alvarez-Ricartes et al., 2018b) and oral cotinine (Perez-Urrutia et al., 2017) prevented astrocyte loss induced by stress in the HIP and PFC of mice. The protective effect of cotinine on astrocytes in these regions during stress was prevented by the antagonism of the α4β2 nAChRs with DHβE, but not by MLA, with the exception of the DG, which also showed a minor effect of MLA on cotinine's action. This evidence suggests that the effect of cotinine on astrocyte survival is mainly mediated by the α4β2 nAChRs. FIGURE 5 | Cotinine increased BMP2 and calcineurin A expression in the prefrontal cortex of conditioned mice. Calcineurin A (CaA) immunoreactivity (IR) was higher in the prefrontal cortex (PFC) of cotinine-treated conditioned mice than in PBS-treated conditioned mice or control non-shocked mice after fear extinction (n = 8-10/group). Cotinine-treated mice showed significantly higher levels of BMP2 than the vehicle-treated and non-shocked mice. BMP2 (A) and CaA (B) IRs are represented as the percentage of control non-stressed values, normalized to total actin levels. (C) The diagram summarizes the hypothetical signaling pathways involved in the positive effects of cotinine on PTSD pathology. ES, exposed to stress (FC); Veh, vehicle; ns, not significant. Statistical significance was evaluated using one-way ANOVA with Tukey post-test. *significant, P < 0.05; **highly significant, P < 0.01; ***highly significant, P < 0.001. The involvement of other receptors, such as the α7β2 receptors, in mediating the effects of cotinine cannot be discarded. GABAergic interneurons in the CA1 subregion of the hippocampus express functional α7β2 nAChRs, which show pharmacological sensitivity to DHβE (Liu et al., 2012). In addition, the facilitation of FE by cotinine was accompanied by a decrease in BMP8 and an increase in mBMP2 expression in the FC. Like other members of the TGFβ superfamily, BMPs are synthesized as pre-proproteins at the rough endoplasmic reticulum (ER). In the secretory pathway, a BMP proprotein is cleaved on the C-terminal side by convertases to release the C-terminal peptide. The C-terminal peptide homodimerizes by disulfide bonding to form the mature secreted BMP (Sopory et al., 2006). Active BMP dimers result in the activation of the SMAD proteins.
It has been shown that proBMP-2 binds to BMP receptor IA with a similar affinity to its mature form (Hauburger et al., 2009); however, it induces a more delayed effect on gene expression when compared with the mature form of the peptide (von Einem et al., 2011). More importantly, like its effect on FE, the cotinine-induced decrease in Bmp8 was dependent on the activity of the α7nAChRs but was not affected by the antagonism of the α4β2 nAChRs. The present evidence is, to our knowledge, the first to connect the effect of cotinine on astrocytes and behavior with changes in BMP expression. Previous evidence showing that BMPIIR deletion promoted neurogenesis and reduced anxiety behavior in rodents supports the view that BMPs are involved in the regulation of mood and cognitive abilities in mice during stress (McBrayer et al., 2015). The promoter region of the gene that encodes BMP8 contains four glucocorticoid (GC) response elements (GREs). In fact, dexamethasone upregulates BMP8 expression, suggesting a role in mediating GC actions. In the PFC of non-stressed mice, the IR for BMP8 was detected surrounding membranes, but the pattern of distribution looks different in the PFC of conditioned mice. Previous evidence suggests that inhibition of BMP8 mediates the positive effects of exercise on synaptic plasticity and learning (Gobeske et al., 2009). BMP2 is one of the most potent osteo-inductive BMPs and one that has been approved by the American Food and Drug Administration for clinical use in spinal fusions and bone healing in humans (Govender et al., 2002). However, undesirable side effects limit its use (Benglis et al., 2008). BMP signaling is active in adult neural stem cells and is crucial for adult neurogenesis in the subependymal zone. Conditional deletion of Smad4, or infusion of Noggin, an extracellular antagonist of BMP, in adult neural stem cells suppressed neurogenesis and led to increased oligodendrocyte differentiation (Colak et al., 2008). In addition, a nuclear variant of Bmp2 (nBmp2) has been described that is translated from an alternative start codon and has a nuclear localization signal overlapping the site of proteolytic cleavage (Felin et al., 2010). It is intriguing that BMP8 immunoreactivity was detected around the plasma membrane in the non-conditioned mice; however, the pattern of distribution changed with stress. New investigations are required to assess the effect of stress on BMP8 translocation to other cell compartments in the brain. The decrease in BMP8 and the increase of BMP2 involved the α7 and α4β2 nAChRs, respectively. These changes may be involved in the enhancement of FE and the preservation of astrocyte function. It is known that BMP2 participates in astrogenesis and may also play a role in the prevention by cotinine of the astrocyte loss induced by stress. In addition, the α7 receptor, which is positively modulated by cotinine, is present in astrocytes and microglia, and its activation reduces neuroinflammation in several neurological conditions (Shytle et al., 2004; Rehani et al., 2008; Foucault-Fruchard and Antier, 2017; Hoover, 2017). Thus, an effect of cotinine on microglia is very likely. Consistent with this idea, it has been shown that cotinine inhibits the activation of monocytes.
Pre-treatment of monocytes with cotinine changes the inflammatory response to Gram-negative bacteria, inhibiting the production of cytokines upregulated by NF-κB signaling, such as TNF-α, interleukin (IL)-1β, IL-6 and IL-12/IL-23 p40, while stimulating the expression of the anti-inflammatory IL-10. This anti-inflammatory effect was attained by acting on the monocytic α7 nAChR through a mechanism involving the PI3K/GSK-3β pathway, but not NF-κB. Cotinine may prevent the activation of microglia by acting on these receptors, reducing neuroinflammation and the consequent synaptic deficit and brain disconnection induced by stress. The modulation of the stress response by BMP proteins under conditions of stress may also be controlled by mechanisms involving the nicotinic receptors and their downstream signaling factors. Thus, the investigation of the BMPs as new therapeutic targets for PTSD is warranted.
CONCLUSION Based on this evidence, we conclude that cotinine favors astrocyte survival and reduces the effect of stress on fear memory by acting on the α7 and α4β2 nAChRs. Also, it is possible to hypothesize that the BMP2 and BMP8 proteins are involved in fear retention and extinction in subjects that suffered chronic or traumatic inescapable stress. Cotinine-induced inhibition of BMP8 and stimulation of BMP2 expression may facilitate extinction (Figure 5C). On the other hand, the positive modulation of the α7nAChR may protect astrocytes, reduce inflammatory responses and support cognition and mood stability. Cotinine may induce BMP2 expression by activating transcription factors downstream of ERK, such as Sp1 (Kawamata and Shimohama, 2011), and this increase may promote synaptic plasticity and astrogenesis in the mPFC.
AUTHOR'S NOTE This material is the result of work supported with resources and the use of facilities at the Bay Pines VA Healthcare System, FL, United States and the Universidad San Sebastián, Chile. The contents do not represent the views of the Department of Veterans Affairs or the United States Government.
DATA AVAILABILITY STATEMENT The datasets generated for this study are available on request to the corresponding author.
ETHICS STATEMENT The animal study was reviewed and approved by the Ethical Committee of the Universidad San Sebastián.
AUTHOR CONTRIBUTIONS All authors participated in all aspects of the manuscript and revised the submitted version.
FUNDING This work was supported by the Fondo de Ciencia y Tecnología (Fondecyt #1190264).
7,975.8
2020-04-02T00:00:00.000
[ "Medicine", "Biology", "Psychology" ]
ISLAMIC FINANCING TOWARDS ECONOMIC GROWTH: A STUDY ON 4 OIC COUNTRIES Islamic financing is a relatively new phenomenon and one of the fastest growing components of the financial system. The aim of this study is to examine the relationship between Islamic financing and the economic growth of 4 OIC countries using panel data from 1990 to 2012. The study employs a panel unit root test and two-stage least squares (2SLS) as its methods. The empirical results show that government expenditure and net exports have an impact on economic growth. Islamic financing is positively and significantly correlated with economic growth in the 4 OIC countries, which reinforces the idea that a well-functioning banking system, through financial development, promotes economic growth. Governments can use this study when considering whether to offer more Islamic financing, because it is profitable, improves income and, in turn, increases economic growth.
INTRODUCTION Financial development plays a fundamental part in the global financial system. It acts as a dynamic participant in channelling assets from surplus agents to deficit agents. However, past literature offers conflicting views concerning the impact of financial development on economic growth (Schumpeter, 1911; Robinson, 1952; Goldsmith, 1969; McKinnon & Shaw, 1973; King & Levine, 1993; De Gregorio & Guidotti, 1995). Schumpeter (1911) explained that the banking sector serves as a tool for financing productive investment, plays the role of mobilizing savings, encourages innovation and becomes a driver of economic growth. Meanwhile, Robinson (1952) noted that economic growth leads financial development. This view is complemented by Patrick (1966), who suggested that, under the supply-leading hypothesis, an increase in financial development in the early phases of development induces capital formation. In addition, some previous studies found a positive and statistically significant connection between financial development and economic growth (Jayaratne & Strathan, 1996; McCaig & Stengos, 2005). Meanwhile, other studies found a negative but significant relationship between financial development and economic growth (De Gregorio & Guidotti, 1995; Akinboade, 2000; Hao, 2006).
On the other side, early studies of financial development and economic growth concentrated on cross-country analysis (Goldsmith, 1969; King & Levine, 1993; Levine & Zervos, 1998). The results suggested that finance can predict economic growth; however, cross-country analysis does not address causality issues or exploit the time-series properties of the data. Furthermore, the conclusions of cross-country analyses are sensitive to the chosen countries, the estimation methods, the frequency of the data, the functional form of the relationship, and the measurement of the study variables (Hassan & Bashir, 2003; Khan & Senhadji, 2003; Chuah & Thai, 2004; Al-Awad & Harb, 2005). Despite the extensive research carried out in this field, there are not sufficient works conducted within the Islamic financial framework. Therefore, the objective of this paper is to examine the relationship between the development of the Islamic financing system and economic growth, particularly in the context of balanced panel data for 4 OIC countries, using a panel unit root test based on Levine et al. (2002) and a two-stage least squares approach. The paper is organized into four parts: a discussion of previous studies on the issue, the model specification, a discussion of the findings, and concluding remarks.
LITERATURE REVIEW The relationship between financial development and economic growth has been a debated issue in development economics. From previous studies carried out in this field, there are at least three types of causal relationship between financial development and economic growth:
1. Supply-leading
2. Demand-leading
3. Bi-directional causal relationships
Under the supply-leading view, financial development leads to economic growth, whereas under the demand-leading view, economic growth leads to financial development. Odhiambo (2008) investigated the causal relationship between finance and economic growth in Kenya for the period 1969-2005, employing a dynamic trivariate Granger causality test and an error correction model. He found only one-way causality running from economic growth to finance, indicating that finance plays a minor role in contributing to economic growth. In a later study in 2011, using the same method, he found the same result, namely that economic growth causes financial development in South Africa over the 1960-2006 period, and therefore concluded that the hypothesis that finance causes economic growth does not hold in South Africa. Gries, Kraft, and Meierrieks (2009) applied the Hsiao Granger method, the Vector Auto-Regression (VAR) and the Vector Error Correction Model (VECM) to examine the causal relationship between financial deepening, trade openness and economic growth in 16 Sub-Saharan African countries. The results support the hypothesis that finance leads to economic growth; however, the authors suggest that Sub-Saharan countries may need a more balanced policy approach rather than relying on the financial system alone. Bangake and Eggoh (2011) used panel co-integration tests, dynamic OLS and a panel VECM approach for 71 countries, including 18 developing countries, from 1960 to 2004. They also support the view that bi-directional causality exists between financial development and economic growth in developing countries.
In addition, Kar, Nazliogu, and Agir (2011) focused on the Middle East and North Africa (MENA) region for the period 1980 to 2007, using a simple linear model. They employed six new indicators of financial development: the ratio of narrow money to income, the ratio of broad money to income, the ratio of quasi money to income, the ratio of deposit money bank liabilities to income, the ratio of domestic credit to income, and the ratio of private sector credit to income. Meanwhile, real income was used as a proxy for economic growth. They found two-way causality between specific financial development indicators and economic growth.
Furthermore, Hassan, Sanchez and Yu (2011) focused on 168 countries, including low- and middle-income countries, from 1980 to 2007. The countries were classified by geographic region and a VAR model was used as the method. The study yields two important results: there is a strong relationship between financial development and economic growth in the long run, and bidirectional causality exists between financial development and economic growth among the Sub-Saharan African countries, the East Asian countries, and the Pacific countries. Ghamati and Mehrara (2014) investigated the effect of financial development indicators on economic growth using panel data for 10 countries during 1997-2007. The results showed that the financial development indicator has an important effect on the level of GDP and that the other explanatory variables are also statistically significant for economic growth.
With regard to the relationship between Islamic financing development and economic growth, Furqani and Mulyany (2009) conducted the relationship between Islamic banking and economic growth in Malaysia from 1997 to 2005.Total Islamic financing, real GDP per capita, fixed investment and trade activities are the data that used and represent as real economic sector.This study also employed co-integration test and VECM model as method.The findings show that Islamic bank financing has positive sign and statistically significant to economic growth and capital accumulation in the long run.Their result seems to support demand hypothesis which is Islamic bank in Malaysia depends on the growth of the GDP.In contrast, the findings from Majid and Kassim (2008) are in favour of the supply-leading view.Goaied and Sassi (2011) investigated by using panel GMM procedure in 16 MENA countries over the period 1962 to 2006 to see the impact of Islamic financing on economic growth.They found that Islamic finance has a weak relation on economic growth but the connection indicates to be positive and supported by theory. Abduh and Omar (2012) investigated by using ARDL framework for quarterly data in Indonesia from 2003 until 2010 to see the relationship of Islamic finance and economic growth.The results show that there are bidirectional relationship between Islamic finance and economic growth for both in the short run and the long run.Tajgardoon et al. (2013) found that there is bidirectional relationship between Islamic banking and economic growth and also bidirectional between economic growth and export in Asia countries over period 1980-2009.Yusof and Bahlous (2013) found that Islamic banking contribute to economic growth both in the long run and the short run for both GCC countries and the selected East Asia (EA) countries.In the short run however, Islamic banking contributes more to economic growth in Malaysia and Indonesia compared to the GCC countries.Tabash and Dhankar (2014) found there is a strong positive association between Islamic banks financing and economic growth in UAE from 1990 to 2010.Besides that, Islamic banks financing has contributed to increase of investment and in attracting foreign direct investment in the long term.Therefore, the implementation of Islamic finance with efficient method can enhance growth. METHODOLOGY AND DATA COLLECTION To employ the relationship between Islamic financing and economic growth, this study used balanced panel data cover from 1990-2012 in 4 OIC countries.The 4 OIC countries include Jordan, Kuwait, Malaysia and Saudi Arabia.The economic growth as dependent variable measured by the real gross domestic product (GDP) per capita (see Levine, 1997).Financial development can be defined as "improvement in quantity, quality and efficiency of financial intermediary service (Calderon and Liu, 2003)".The indicator of financial development is domestic credit to private sector to GDP, which has positive and linked to economic growth directly (King & Levine, 1993;Kemal, Qayum & Hanif, 2007;Hasan, Sanchez & Yu, 2011).Liquid liabilities measured by liquid liabilities on financial system to GDP (Favara, 2003;Ahlin & Pang, 2008).The impact of liquid liabilities has positive impact to economic growth. 
Quasi money is measured by the ratio of broad money supply to GDP (Gilman & Harris, 2004; Noor & Abbas, 2014) and is expected to have a positive relationship with economic growth. Islamic financing is measured by the ratio of total Islamic bank financing to GDP, which is expected to have a positive effect on economic growth (Abduh & Omar, 2012; Tabash & Dhankar, 2014). Net exports are measured as exports minus imports (Kim & Lin, 2009; Ristanovic, 2010); an increase in net exports contributes to economic growth. Investment is measured by the ratio of real gross domestic investment (public plus private) to GDP (Barro, 2003; Caporale et al., 2009); increased investment enhances economic growth. Government expenditure is measured by the ratio of total government consumption expenditure to GDP (Al-Malkawi et al., 2012; Alkhuzaim, 2014); its effect on economic growth could be either positive or negative. Data covering 2000-2012 are taken from the World Bank, the International Monetary Fund (IMF) and Islamic Banks and Financial Institutional Information (IBIS).
The stationarity of the panel data is tested with the Levine et al. (2002) unit root test. To analyze the hypothesis that economic growth is affected by Islamic financing through financial development, two-stage least squares (2SLS) is then applied to the stationary data. The first-stage and second-stage regression equations, in which financial development is instrumented by Islamic financing, liquid liabilities and quasi money and economic growth is regressed on the instrumented financial development and the control variables, are utilized for the data analysis through 2SLS.
RESULTS AND DISCUSSION To test the order of integration of the variables, this study follows the methodology of Pedroni (2004) and begins with a stationarity test of the panel data. As mentioned earlier, the stationarity test of the panel data is needed in order to avoid the problem of spurious regression. Here, we apply the Levine et al. (2002) test to determine the existence of unit roots in our panel data series; the results are reported in Table 4.1.
CONCLUSION This study examines the relationship of Islamic financing to economic growth through financial development in 4 OIC countries from 1990-2012. The results show that all the variables are stationary after first differencing, based on the Levine et al. (2002) panel unit root test, and therefore two-stage least squares is utilized. The main findings are that government expenditure and net exports have the effect of increasing economic growth, meaning that these two variables are sources of economic growth when allocated to the more productive sectors of the economy. Besides that, the results show that Islamic financing has a positive impact on economic growth indirectly through financial development. Therefore, Islamic financing can open an additional channel of funding from investors to the financial system and improve the real sector of the economy in OIC countries.
Table 4.1 Notes: a, stationary based on all other tests; b, stationary based on individual intercept; c, stationary based on individual intercept and trend; d, stationary based on none. Table 4.1 shows that all variables are integrated of order one, I(1), and hence the null of a unit root is rejected after first differencing. In conclusion, we can say that our panel data series are stationary at first difference and we can proceed with two-stage least squares (see Table 4.2).
Table 4.2 shows the results of the two-stage least squares estimation. It can be seen that financial development, instrumented by Islamic financing, liquid liabilities and quasi money, has a positive sign and is statistically significant at the 1% level. This means that an increase in Islamic financing contributes to economic growth through financial development (e.g. Abduh & Omar, 2012; Tabash & Dhankar, 2014). Government expenditure and net exports also have positive signs and are statistically significant at the 1% level: a 1% increase in government expenditure and in net exports increases economic growth by 0.59 and 0.22 respectively (e.g. Kim & Lin, 2009; Ristanovic, 2010; Alkhuzaim, 2014).
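As a rough illustration of the estimation strategy described above, the following is a minimal sketch of a first-differenced panel 2SLS in Python, assuming the specification implied by the text: financial development instrumented by Islamic financing, liquid liabilities and quasi money, and GDP per capita regressed on the instrumented financial development plus government expenditure, net exports and investment. The data file, column names and the use of the third-party linearmodels package are hypothetical choices for the sketch, not the authors' own code.

```python
# Minimal 2SLS sketch (not the authors' code). Column names are hypothetical.
import pandas as pd
from linearmodels.iv import IV2SLS

# Hypothetical balanced panel: one row per country-year.
panel = pd.read_csv("oic_panel.csv")  # placeholder data file

# First-difference each series within a country to work with stationary data,
# mirroring the "stationary at first difference" result reported in the paper.
cols = ["gdp_pc", "fd", "islamic_fin", "liq_liab", "quasi_money",
        "gov_exp", "net_export", "invest"]
diffed = panel.groupby("country")[cols].diff().dropna().assign(const=1.0)

# 2SLS: gdp_pc ~ const + gov_exp + net_export + invest
#                + [fd ~ islamic_fin + liq_liab + quasi_money]
model = IV2SLS(dependent=diffed["gdp_pc"],
               exog=diffed[["const", "gov_exp", "net_export", "invest"]],
               endog=diffed["fd"],
               instruments=diffed[["islamic_fin", "liq_liab", "quasi_money"]])
print(model.fit().summary)
```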
2,878.4
2016-06-30T00:00:00.000
[ "Economics", "Business" ]
IoT: A Mobile Application and Multi-Hop Communication in Wireless Sensor Network for Water Monitoring In this research, we discuss the implementation of multi-hop communication in sending data on the temperature, humidity and water quality of the Maninjau Lake area by utilizing wireless sensor network (WSN) technology. Maninjau Lake has seen a growth in fish farming in recent years, which has caused water pollution and high rates of fish mortality. The performance of the sensor node, or end device node, in sending data to the coordinator needs to be maximized so that the information is well received by the user. The resulting performance is related not only to the equipment used, but also to the network topology and the communication model. In this study, we use a multi-hop data communication technique using the Zigbee S2 wireless module (IEEE 802.15.4 standard) and a router node on a mesh topology network. Data are received in real time by the coordinator and can be monitored via a mobile device connected to the internet using an IoT (ESP 8266) module. The result of the simulation shows that each end device sends the temperature, humidity and pH data of Maninjau Lake to the router nodes, which forward them to the coordinator. The information can be accessed by the user through a mobile phone application. There are many potential applications for this research in monitoring water quality, preventing pollution and reducing aquaculture mortality. Keywords—Mobile application, Multi-hop communication, wireless sensor network (WSN), Mesh topology, and IoT (ESP 8266) module.
Introduction The Internet of Things (IoT) has recently attracted significant research attention from academia and researchers in fields such as monitoring system applications. IoT technology can save time, improve efficiency and support monitoring systems by using sensors. For example, Fan, et. al [1] developed a wearable body area network (WBAN) for human safety in the workplace, and Syed, et. al [2] introduced an emergency monitoring application for saving human lives in warning situations. On the other hand, the technology of wireless sensor networks (WSNs) is being increasingly implemented over wide areas. Zigbee is one of the wireless communication modules used in WSN and it supports multi-hop communication. Zigbee is small in size and belongs to the low-cost wireless protocols of the IEEE 802.15.4 standard. Zigbee technology is usually implemented for monitoring systems, such as for natural disasters, climate change, smart industry and others. In order to work well, a WSN needs to manage data communication between the nodes in a communication network. In addition, one of the challenges in implementing WSN technology in wide areas is the process of multi-hop communication in sending data from several sensor nodes to the IoT services. Each end device node is placed at a certain distance and transmits the data it obtains to the coordinator. In this paper, we concentrate on maximizing the transmission of data from multiple sensor nodes (end devices) to the server using a mesh topology for a water monitoring application in Maninjau Lake, Indonesia. The communication model used is multi-hop communication.
Literature Review Several researchers have discussed multi-hop communication in WSNs. Nandini, et. al [3] use a query process to improve the energy efficiency of the nodes, demonstrating that energy consumption was much lower than with direct data transmission. In contrast, Min, et.
al [4] discuss data dissemination during communication density in wireless sensor networks. The simulations used two approaches: (1) cooperative mesh structure (CMS) construction, and (2) CMS-based data dissemination. The simulation results show that the concept of distributed multi-hop cooperative communication (DMC) works well in improving Quality of Service (QoS) and this can be implemented on a wide and dynamic network system. Xiang, et. al [5] introduce a distributed algorithm scheduling to reduce time latency and energy cost. Rasin, et. al [6] discuss water quality monitoring using sensor networks. These re-searchers focused on the condition of pH, the turbidity and the temperature of the water. The purpose of this study was to improve the efficiency, the cost and to find out a more economical maintenance of water monitored wirelessly using Zigbee devices. Kim, et. al [7] introduced a monitoring and control system using Zigbee in real time. These researchers focused on the electrical energy management used at home. Applications designed are website and mobile-based and can control the use of electricity to reduce living costs. Janos, et. al [8] implemented a wireless network using ZigBee for a monitoring system of plant moisture on a green house. They used a mobile robot that functions as a station. In the simulation tool, information is sent through a broadcast. furthermore Dahoud, et. al [9] explored using Wireless Sensor Network (WSN) in sending data to the gateway to control air quality. The researchers tested the conditions of CO, NO2 and LPG. Sensor data can be accessed by the user in real time through the internet network. Garcia et. al [10] also analyzed the monitoring system for fruit freshness during its delivery. The researchers utilize wireless network technology to determine the freshness of the fruits by using Xbow and Xbee technology. Elankavi et. al [11] discuss the data collection techniques of several studies and emphasize the advantages and disadvantages of each study. Material and Method In this study, the sensor nodes were placed at predetermined points in the area of Maninjau Lake to detect the quality of lake water. Maninjau lake has experienced repeated incidents of pollution and mass deaths of fish, which have been caused by poor water quality. The network topology used is mesh topology, and information delivery method used is multi-hop communication. Data were sent from each end devices to router nodes to be forwarded to the coordinator. The researchers also utilize the Internet of Things technology to maximize the performance of the equipment's in generating information. This information is accessed by the user through android applications that have been provided. Coordinator node The coordinator node functions to receive information from the end device nodes that are placed at some points that has been specified in the Maninjau Lake area. Coordinator node consists of the following components: arduino mega microcontroller, Xbee S2, real time clock, memory, ESP 8266, LCD and battery. Data are received in real time and are stored in memory that has been provided. Figure 1 shows the schematic of the coordinator node that was created. Router node: The function of the router node is to maximize the range of wireless communications sent by the end device to the coordinator. Components of router node used in this study are arduino uno microcontroller, Xbee S2, Xbee shield and battery. Figure 2 shows the schematic of the router node that was created. 
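On the software side, the coordinator logic described above runs on an Arduino Mega. If a PC- or Raspberry Pi-class gateway were used instead, the same Zigbee frames could be received with the third-party digi-xbee Python package. The sketch below is such a hypothetical alternative, not the firmware used in this study; the serial port, baud rate and payload format are assumptions.

```python
# Hypothetical alternative to the Arduino coordinator: receiving Zigbee frames
# on a PC/Raspberry Pi gateway with the third-party digi-xbee package.
from digi.xbee.devices import XBeeDevice

PORT = "/dev/ttyUSB0"   # assumed serial port of the coordinator XBee S2
BAUD = 9600             # assumed baud rate

def main():
    device = XBeeDevice(PORT, BAUD)
    device.open()

    def on_data(message):
        # message.data holds the raw payload sent by an end device or router,
        # e.g. an assumed comma-separated "temperature,humidity,ph" string.
        sender = message.remote_device.get_64bit_addr()
        print(f"from {sender}: {message.data.decode(errors='replace')}")

    device.add_data_received_callback(on_data)
    try:
        input("Listening for sensor data; press Enter to stop.\n")
    finally:
        device.close()

if __name__ == "__main__":
    main()
```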
End device node: End device node functions to collect information on the water temperature, humidity and pH of Maninjau Lake. The components of the end device node in this study consist of arduino uno microcontroller, Xbee S2, Xbee shield, DHT 22 sensor, pH sensor and battery. Figure 3 shows the schematic of the created end device node. Internet of Things (IoT): Inside the IoT, an electronic device can communicate with other electronic devices independently. IoT can also be utilized for smart homes application. This research, we use IoT ESP 8266 module. The details on the research equipments specifications are shown in Table 1. Mesh Topology and Mobile Application Design: Multi-hop communication methods for wide areas can use mesh topology. The discussion done in this section is to test the performance of each sensor node to transmit data to the coordinator. Figure 4 below is the mesh topology in Maninjau Lake and figure 5 shown the mobile application design. The DHT22 sensor test is to see the humidity and temperature conditions of Maninjau Lake. Data are taken in real time with humidity interval 0-80 and temperature interval 0-35. From the simulation has been done in obtaining data stored in memory as much as 44 times. The average sensor response in the humidity field is between 55.53 to 60.98 and the temperature between 26.18 to 28.88. Figures 6 and 7 below are DHT22 response to humidity and temperature. The PH sensor test is to see the water conditions of Maninjau Lake. Data are taken in real time with PH interval 0-6. The simulation show data stored in memory is 44 times and obtained the sensor response for the water PH condition between 3.58 to 4.69. Figures 8 below are PH response in end devices. Fig. 8. PH in end devices If seen from the graph of simulation result above, packet loss still happened during the simulation. This can be caused by the communication path density that occurs from the route of end device to the coordinator. Conclusion In this paper, we have discussed multi hop communication using wireless sensor network for cooperative data transmission in Maninjau Lake. The responses from each end device based on the data sent to the coordinator shows that the average result is 55.53 to 60.98 for humidity and 26.18 to 28.88 for the temperature. pH sensor test shows that the average pH of Maninjau Lake water is between 3.58 to 4.69. This research has important ecological and economic applications. Monitoring water quality remotely provides a cost-effective way of preventing pollution incidents and reducing the rate of fish mortality in the aquaculture industry. In order to reduce packet loss, further research is required by implementing distributed algorithms for near optimal channel allocation and localization techniques for each node. Additional work is extending the coverage of monitoring system by using LoRa technology and calculate the network communication performance in wide area simulation.
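To make the multi-hop forwarding model described above concrete, the following is a minimal software sketch, not the Arduino/XBee firmware used in the study: simulated end devices generate temperature, humidity and pH readings, a router relays them, and the coordinator collects the packets. Node names and reading ranges are illustrative placeholders loosely based on the value ranges reported above.

```python
# Minimal simulation of end device -> router -> coordinator forwarding.
import random

class Node:
    def __init__(self, name, next_hop=None):
        self.name = name          # node identifier
        self.next_hop = next_hop  # where this node forwards packets
        self.received = []        # packets collected (coordinator only)

    def send(self, packet):
        packet["path"].append(self.name)
        if self.next_hop is None:          # coordinator: final destination
            self.received.append(packet)
        else:                              # end device or router: forward on
            self.next_hop.send(packet)

coordinator = Node("coordinator")
router = Node("router-1", next_hop=coordinator)
end_devices = [Node(f"end-device-{i}", next_hop=router) for i in (1, 2, 3)]

for device in end_devices:
    reading = {
        "source": device.name,
        "temperature_c": round(random.uniform(26.0, 29.0), 2),
        "humidity_pct": round(random.uniform(55.0, 61.0), 2),
        "ph": round(random.uniform(3.5, 4.7), 2),
        "path": [],
    }
    device.send(reading)

for packet in coordinator.received:
    print(packet["source"], "->", " -> ".join(packet["path"][1:]),
          "pH =", packet["ph"])
```

In the real deployment the forwarding decision is made by the Zigbee mesh stack itself; the sketch only mirrors the logical path of a reading from sensor to coordinator.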
2,224.6
2020-07-10T00:00:00.000
[ "Computer Science" ]
Agile logistics challenges of petroleum product distribution in Nigeria Agile distribution of petroleum product is a strategy towards uninterrupted supply of quality products at affordable prices. Industry and economic productivity as well as transportation in Nigeria is largely driven by petroleum energy due to low investments in other sources of energy, yet the petroleum energy supply and distribution is marred by lingering history of scarcity, high prices, and distribution bottlenecks to end users all over the nation. The study through a descriptive survey following literature view, primary and secondary data collection sought to determine at granular level the limiting factors of agile petroleum product distribution in Nigeria. Petroleum supply, consumption and pricing are significant aspects of the study survey. Study findings shows limiting factors of distribution agility cut across pricing, supply, technology, infrastructure, accessibility and lead time, storage, management, and Human Resource issues. Significantly lack of local refining capacity and poor distribution network are notorious for supply limitations and rising price of products. Addressing the challenges would require strong political will, driven by genuine commitment and focused attention towards economic growth and improvement in the welfare of the citizens. Introduction The petroleum industry is involved in a global supply-chain that includes domestic and international transportation, ordering and inventory visibility and control, materials handling, and information technology.Logistics is a supply chain function that plays an essential role to ensure a safe, timely and cost-effective delivery.Oil and gas logistics includes typically transportation of crude oil from the production sites to refineries as well as transportation and distribution of oil products to markets and customers.The demand for oil and gas has increased in tandem with the economy of the world as it continued to prosper.From the reference point of 1960, Organization of the Petroleum Exporting Countries (OPEC) projects that oil demand would increase up to five times by 2035 (OPEC, 2014). The demand for petroleum product is affected by myriad of challenges.Alongside the need to replace depleted wells, old plant, and equipment, and to accommodate ever higher standards and tighter regulations is the need to adopt logistics strategies that ensure consumers continue to receive products in a timely and orderly manner (OPEC, 2014).An effective and efficient logistics process is an industry key for reducing lead times and costs, increase the company's profits and in managing supply chains; demand management, efficient distribution of petroleum products among customers, better transportation scheduling, warehouse management, and quality and timeliness of information (Lisitsa, Levina and Lepekhin, 2019). 
In Nigeria, the consumption of petroleum products Onwioduokit and Adenuga (2000) recount has continually been on the increase with most of the demand for products fulfilled over time by importation due to the poor condition of the refineries.Starting from the 1980s the demand for petroleum products they acknowledge have increased tremendously reflecting rapid growth in the number of automobiles, industries, households, intensified rural-urban migration, economic and political developments.The bulk of products consumption has been the Premium Motor Spirit (PMS) or petrol, Automotive Gas Oil (AGO) or diesel, dual purpose kerosene and bitumen/asphalt.Together, they account for more than 60.0 per cent of the total consumption of petroleum products.The petroleum product distribution in Nigeria is challenged by short deliveries, high lead time, increased cost of distribution, scarcity of products and high pump prices.Nigeria is an emerging and fast developing market and the demand for petroleum product continues to icrease.As such, the need for sufficient and uninterrupted supplies is a major task for national development and economic growth.A sustainable economic development cannot be successfully achieved without an appreciable vibrant supply chain which involves the even distribution of petroleum products to keep alive economic activities such as manufacturing, production distribution and general logistics involving the movement of materials and human resources from one place to the other.In this regard, the study seeks to:  Examine the limiting factors to agile distribution of petroleum products in Nigeria. Evaluate petroleum supply and consumption. And analyse product pricing. Literature review Energy is key to economic productivity and industrial growth and is central to the operation of any modern economy.Energy is one of the most important inputs for economic development and from a physical viewpoint, the use of energy drives economic productivity, consumption, and growth.Many production and consumption activities involve energy as a basic input (Zahid, 2008).In Nigeria the petroleum products and the derivatives constitute the main source of energy for production, manufacturing, distribution and general logistics of goods, passengers, and services.A logistics system which meets the need for timely availability, quality, and affordable products is necessary to drive economic productivity and growth.The components of petroleum and its by-products traded include the following: Premium Motor Spirit (PMS), Automotive Gas Oil (AGO), Household Kerosene (HHK), Dual Purpose Kerosene (DPK), Aviation Turbine Kerosene (ATK), Low Pour Fuel Oil (LPFO), Liquefied Petroleum Gas (LPG), Bitumen, Lubricants and Base Oil.Fuels are extracted from crude petroleum and pass through various processing units by thermal cracking in the refineries. 
Distribution Enterprise Task A significant aspect of the enterprise logistics function is distribution.Specifically referring to the delivery of the finished products to the customer.Schewe and Smith (1980) define distribution as the physical movement of products to the ultimate consumers.It consists of order processing, warehousing, and transportation.Distribution Straka (2017) identify is responsible for storage and transportation of goods to consumers as well as related information, management, and control activities.It ensures the most appropriate way, selection, and analysis of transport, which is most suitable for transfer of products manufactured by enterprises to achieve failure-free performance of the market.The distribution network comprise of specific nodes which include; factories where products are manufactured or assembled; a depot or deposit relating to standard type of warehouse for storing of merchandise (high level of inventory); distribution centres for order processing and fulfillment (lower level of inventory) and also for receiving returning items from clients, transit points built for cross docking activities, which consist in reassembling cargo units based on deliveries scheduled (only moving merchandise). Agility of Petroleum Product Distribution The petroleum industry is frequently classified into three main sections: upstream, midstream, and downstream operation.Upstream activities refer to the exploration and production of crude oil.This includes searching for potential underground or underwater oil and gas fields, drilling of exploratory wells, and subsequently operating the wells that recover and bring the petroleum crude oil and/or raw natural gas to the surface.Midstream activities refer to the refining, transportation (by pipelines, rail, tankers), storage and wholesale marketing of crude or refined petroleum products.Downstream activities refer to the transportation and marketing of end-user products, e.g.petrol, diesel, and liquefied petroleum gas (LPG). The agility of petroleum product distribution is a strategy to building an agile supply chain of the same.Agility Tiefenbacher (2020) denotes is a strategy to thrive, improve performance and be competitive in VUCA environments, that is environments that are volatile, uncertain, chaotic, and ambiguous characterized by being unpredictable and thereby necessitating adaptive, flexible approaches and interventions to muster them.Logistics and Supply chains are one of the same ultimately concerned with delivering the right thing to the right place at the right time (LaBotz, 2022). The agile supply chain is a more recent concept which focuses on leveraging responsiveness to customer demand.Here, logistics plays a crucial role in aligning material flow across the supply network with customer demand, and in ensuring execution in a flexible and customized manner (Harrison & Hoek, 2008).The origins of the agility paradigm in logistics and supply chain can be traced back to the theory of lean enterprises, which comprises concepts and methods which aim to minimize waste and consequently maximize efficiency and cost-effectiveness as the company uses fewer resources (capital, financial, human, organisational) and less time to achieve the same goal (Sajdak, 2015). 
Agile logistics strategy of petroleum product distribution represents a distribution process which ensures the availability right products to the right customer at the right place, in the right condition and right quantity at the right time, and right (lowest possible) costs.Logistics is the task of managing two key supply chain flows; material flow of the physical goods from suppliers through the distribution centers, stores and flow of Information; demand data from the end-customer back to purchasing and to suppliers, and supply data from suppliers to the retailer, so that material flow can be accurately planned and controlled (Harrison & Hoek, 2008).The agile strategy ensures the adequacy of place (moving refined products to places where there is a demand for them) and time (maintaining the right stocks levels) and proper distribution of the products through procedural alignment, end-to-end accountability, social proximity, and deep integration of suppliers. Petroleum Distribution Chain in Nigeria The actors in the Nigerian industry consist of both private and public organizations.The public actors are the government agents and functionaries such as the Nigerian National Petroleum Corporation (NNPC) and its subsidiaries, the Department of Petroleum Resources (DPR), the petroleum products pricing regulatory authority (PPPRA), the Pipeline and Product Marketing Company (PPMC) among others.The private segment consists of both indigenous and foreign actors.The PPMC is charged with the task of large quantities supply, distribution, and marketing of petroleum products in Nigeria.There exists network of four thousand (4,000) kilometers of pipelines which are interconnected with twenty-one ( 21) highly dispersed depots. The products may be obtained from the four local refineries or in the event of a supply short-fall from off-shore refineries by way of imports.Imported refined products are received at PPMC depots at Atlas Cove.From here, products are pumped to nearby depots at Mosimi in Shagamu, from which products are pumped into various other depots through the pipelines; Booster pump stations are provided along the route and between two adjoining depots.This arrangement is necessary to boost the flow of products in the pipelines along the routes.Movement of products from the depots is the responsibility of major oil marketers. 
Lack Local Refinery Capacity Nigeria is blessed with the abundance of petroleum resource and natural gas, but the country is seriously challenged by local refining capacity which meet the soaring energy need.World Energy 2023 report according to Akintayo (2023) reveal local refineries output of Nigeria has dropped to near zero output in 10 years.According to data the statistical review, output from the nation's four refineries fell from 92,000 barrels per day in 2012 to just 6,000 barrels per day in 2022 representing a 92 per cent drop in production capacity.Similarly, the Organisation of the Petroleum Exporting Countries' Annual Statistical Bulletin (2023), also revealed the country's crude oil refining capacity fell from 33,000 bpd in 2018 to 6,000 bpd in 2022, representing about 81 per cent drop in production output.The drop in refining capacity has made the country to rely solely on the importation of refined petroleum products to meet the local energy consumption needs.NBS report spanning from 1997 up to 2014 presented in table 1 below shows a continued decline in capacity utilization of the refineries and year on year growth in crude oil refining percentage respectively.Adeosun and Oluleye (2017) acknowledge Nigeria despite having a nameplate refining capacity, 445,000 bpd which exceeds demand has over the last four decades consistently struggled to keep its refineries functioning optimally and is ranked as the 3rd highest importer of petroleum products in Africa, importing over 80% of products consumed.The scarcity of petroleum products and the rising cost of the petroleum products is an indication that refined product importation is not sustainable for agile distribution of petroleum products.Okereke, Emodi and Diemuodeke, (2022) lament the shortage of refining capacity at existing oil refineries is the main driver of Nigeria's fuel crisis, which hampers the socio-economic development of the country, placing a high subsidy burden on the government and has long made Nigeria dependent on imported petroleum products.Similarly, Ehinomen and Adeleke (2012) decry the low-capacity utilization of government owned refineries and petrochemical plants in Kaduna, Warri and Port Harcourt and inadequate crude oil allocation to the refineries for domestic consumption as well as long turnaround time for the maintenance of the refineries poses a lot of challenge to NNPC and result into petroleum product shortage and eventual importation of the products a situation that puts a drain on the scarce foreign exchange of the nation. An attempt by the government at the removal of subsidy without addressing the issues of local refining capacity has compounded the issue of product scarcity and high prices with PMS and AGO pricing reaching 600 NGN and 1300 NGN respectively, the highest prices ever recorded in the history refined product distribution in Nigeria.Because industrial production activity and business operation in Nigeria largely depend on petroleum energy following the epileptic electric power supply and lack of technology to harness the solar power, the increase in the price of the petroleum products have resulted to hyperinflation which has affected the cost of production, local cost of transportation and prices of consumer commodities.Sales and business struggle due to high cost of doing business, low demand, and patronage while many shut down and workers lose their jobs. 
Source: Author's Analysis Figure 2 above presents a graphical analysis of the capacity utilization and year on year percentage growth in crude refining with reference table 1.The line graph shows a staggered progression in the refinery capacity utilization and year on year growth in oil refining for the years referenced depicting a very hopeless situation for the industry in Nigeria.There has not been a serious approach in addressing the issues of local refinery capacity in the country since time and the challenges have compounded till date.With growth in population and increasing demand for development which put pressure on energy supply to drive industrial and economic productivity, the reliance on importation of refined products has become highly unsustainable.A significant aspect of agile product distribution is delivering the products available to the end users at affordable cost.This cannot be realized by dependance on importation to feed the local market.No doubt pump price of petroleum products in Nigeria is due to importation is determined by world crude prices, and inflation in the import countries.Interruption in production due political instability, social unrest and natural disaster in the import countries would translate to scarcity of product in Nigeria. Insecurity, Terrorism and Acts of Sabotage Agile supply and distribution of petroleum products will remain a mirage in the presence of insecurity and safety concerns which disrupt operation and investment in oil production, refining and distribution.Industrial sabotage, crude oil theft, illegal refining operations, pipeline vandalism and piracy present significant challenges which adversely impact onshore oil and gas production and fraught delivery of products to the market.The disruptive activities of the various militant group and instability in the Niger Delta region is acknowledged a major threat to the security of the oil & gas industry.Oturugbum (2021) acknowledge pipeline vandalism and militancy represents a means which the inhabitants of the oil producing areas adopt in pressing their demands to the government and oil companies.Unfortunately, this is due to the prevalence of corruption, poverty, and governments unfulfilled promises, among others.The Petroleum Production Act and other legislation which criminalize the act of tampering and vandalism of crude oil and natural gas pipelines with the death penalty and life imprisonment have not deterred the commission of relevant offences as the challenge continues to escalate.Adeosun and Oluleye (2017) lament Nigeria lost over NGN 50 billion to pipeline vandalism between January and April 2016.Okumagba (2019) reports within a space of 10 years between 2006 -2016, Nigeria recorded over over 16,083 cases of "pipeline tampering" and vandalism amounting to a loss of 174.57 billion (or approximately $484 million) and between 2016 and 2017, a total of 992 cases of pipeline vandalism were recorded across pipelines and depot lines, amounting to a loss of 167 billion (approximately $464 million).Nigeria Extractive Industries Transparency Initiative (NEITI) records between 2017 and 2021 (in 5 years) Nigeria recorded a total of 7,143 cases of pipeline vandalism.The country recorded in 11 years (2009-2020) a loss of 619.7 million barrels of oil valued at $46.16 billion to theft, pipeline tampering and vandalism.This is higher than the size of the country's foreign reserves.In addition, Nigeria lost 4.2 billion litres of petroleum products from refineries, valued at $1.84 billion at 
the rate of 1440,000 barrels per day, from 2009 to 2018 (Aduloju, 2023). Corruption and Mismanagement Unscrupulous and corrupt practices of government agencies, oil companies and marketers as well as mismanagement of petroleum revenue largely fraught the agility of petroleum product distribution in Nigeria.Subsidy of petroleum products leads to a situation where the pump price of petroleum product is higher in neighbouring countries, Benin, Cameroon, and Chad.The aim of the petroleum subsidy was undermined following large-scale smuggling of petroleum meant for Nigerian consumption and subsidized by the government to the neighbouring countries where they are sold at higher prices making superfluous profits short-changing the government and causing scarcity in Nigeria (Terkimbi, 2015 & Ayanruoh, 2023).Oshunkeye, (2012) alleges the oil subsidy is bedeviled by criminal tendencies of oil importers and sharp practices in the distribution of import allocation, approval of subsidy payments and actual release of subsidy cheques.While the significance of petroleum resources as a substantial source of economic advantages became increasingly obvious, susceptibility of the resource regions to military attacks, regional pressure, and oil-related problems, such as oil spills, crude oil theft, illegal refining of crude oil, accidents involving oil trucks, pipeline vandalism, and explosions will continued to exacerbate and requiring strategic management innovations by government to bring under control (Francis et al., 2022).Mismanagement, lack of proper guidelines and standards for the design, construction, and operation of major oil pipelines result in corrosion and aging pipelines, ruptures, operational error, and mechanical failure which lead to leaks and explosion that disrupt product flow (Akinpelu 2021, Achebe, Nneke, & Anisiji, 2012,). Dependency on Road Haulage The supply of refined petroleum products to end users is faced with many challenges.Efficient transportation of petroleum product requires a modal integration of shipping via vessels, pipeline, road trucking and rail tankers.While the pipeline is acknowledged the most efficient single mode due to the benefit of large volume delivery at high speed and constant flow resulting to cheaper cost relative to other modes, zero accidents and limited environmental concerns the case is different in Nigeria where on the contrary road haulage is the dominant mode of petroleum distribution.This is because the pipeline mode is underdeveloped and fraught by the combined force of vandalism by oil bunkering, industrial sabotage, militancy, poor maintenance, and neglect.According to Bataiya, (2018) 98% petroleum products that include fuel components like premium motor spirit, domestic pure kerosene, and automotive gas oil's lifting from supply sources to all parts of the country is done by road which of itself is in gross disrepair and affecting the distribution of the products.70% of the federal road which constitute a 70% of the vehicular freight movement is overstretched with 40% and 27% in poor and fair condition respectively requiring rehabilitation and maintenance work and a 3% unpaved.78% and 87% of state and local government roads respectively are in poor condition.The road transport research acknowledge is not efficient for petroleum distribution over long distances due to lower speed, interrupted flow, limited capacity leading to high cost of transportation per unit volume.Nwoloziri et al. 
(2021) acknowledge the inefficiency of railways and pipeline transportation heavily contribute to total dependency on road transportation in the distribution of petroleum products to various depots and stations located all over the country.Ehinomen and Adeleke (2012) recounts the despair, neglect and repeated vandalization of the state-run petroleum products pipelines and oil movement infrastructure nationwide, coupled with frequent accidents of haulage trucks on the nations heavily used highways pose complex problems managers, and operators in the oil industry.Road haulage of petroleum products is marred by road traffic accidents, theft, and robbery incidence, bad and poor road networks as well as various hindrances such as delays at police, customs check points and many other bottlenecks Study Framework and Data The agile logistics of petroleum product distribution is a concept representing a distribution strategy which ensures uninterrupted supply of right products to the right customer at the right place, in the right condition and right quantity at the right time, and right (lowest possible) costs.The study follows a descriptive survey design based on quantitative and qualitative approach relying on primary and secondary data.While Primary data is gotten by the distribution structures questionnaire, secondary data is ensued from e-library of National Bureau of Statistics (NBS) and literatures reviews. Limiting factors of Agile Distribution At a very granular level, the study sought to survey the factors which limit the agile distribution of refined petroleum products to end users in Nigeria.Distribution agility based on the review of literature hinges on continuous product availability at affordable prices without quality compromise.Research observation and findings shows the distribution of refined petroleum products in Nigeria is fraught with the challenges of scarcity, product adulteration, and high prices.And efforts by the government to address this have not yielded the desired results.Perceived factors which limit distribution agility were presented to selected staff of six major independent petroleum marketers and NNPC retail as presented in Table 2 below.30 questionnaires were distributed to the executive staff of each of the study organizations drawn from sales, marketing, admin, HR, imports, finance, business analyst, logistics and transport units via one-on-one interview making a total of 210 questionnaires.187 questionnaires, which represent 89% of the total questionnaires distributed, were returned valid for the research.The questionnaire instrument follows a five-point likert scale ranged 1 to 5 where 1 = strongly disagree, 2 = agree, 3 = undecided, 4 = agree, 5 = strongly agree.The qualitative data is coded with the capability of the SPSS and analysed for data results and discussions.The perceived limiting factors of distribution agility cut across pricing, supply, technology, infrastructure, accessibility and lead time, storage, management, and Human Resource (HR) issues.3 above presents the analysis of the response of the respondents as per their level of agreement or disagreement to the perceived limiting factors of petroleum distribution agility in Nigeria. 
The sampled factors on a five-point Likert scale show mean responses ranging from 4 to 4.5, higher than the Likert mean value of 3, and standard deviation values below the Likert scale mean value, indicating that the respondents largely agree that the sampled variables are limiting factors to petroleum product distribution in Nigeria. The sampled factor of lack of local refinery capacity has the highest mean response, above 4.5, indicating that the respondents are very highly concerned about the limitations of product importation in meeting local consumption demand. The data are further subjected to a K-S test to obtain more information about the sample and to test whether the sample means differ significantly from the Likert mean value rather than by random chance (a minimal code sketch of this kind of check is given below). The asymptotic significance values of the test are 0.000 for all nine factors (a. test distribution is normal; b. calculated from the data).
The K-S test is presented in Table 4. The variables F1-F9 represent the perceived limiting factors of agile product distribution presented in Table 2. The K-S test mean values and standard deviations, as in the descriptive statistics of Table 3, are respectively greater than and less than the Likert mean value. The p-value < 0.05, together with the negative and positive lower and upper bounds of the mean confidence intervals, indicates that the mean of each data parameter is significantly different from the Likert mean value of 3. This validates the descriptive result of Table 3 and leads us to accept that the sampled variables are limiting factors to the agile distribution of petroleum products. This explains the reason for the low levels of agility in product supply and distribution. There is a limit to which a country can effectively meet her local consumption requirements while depending solely on importation.
Supply Consumption Analysis Dependence on imported refined products is notorious for the high cost of products in the hands of consumers, following the rising cost of the dollar, production costs in the exporting countries, and port clearance and landing costs. It also means Nigeria will not be in control of its supply. Any production challenge, supply shortage, political unrest or economic concern in the exporting countries will translate to supply shortages in Nigeria. A supplying country would prioritize local consumption and cut back exports in the event of production constraints. This would translate into severe shortages in Nigeria, product scarcity and price hikes, as the few distributors holding products would hoard them to sell at exploitative prices.
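The sketch below illustrates, with hypothetical response data rather than the study's dataset, the kind of check described in the questionnaire analysis above: descriptive statistics per factor, a one-sample test of each factor's mean against the Likert midpoint of 3, and a one-sample Kolmogorov-Smirnov test of the response distribution (fitted against a normal distribution, as a stand-in for the SPSS one-sample K-S output).

```python
# Minimal sketch (hypothetical data, not the study's dataset): descriptive
# statistics and significance checks for Likert-scale factors F1-F9.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
# 187 valid respondents, 9 perceived limiting factors, scores 1-5 (illustrative)
responses = pd.DataFrame(
    rng.integers(3, 6, size=(187, 9)), columns=[f"F{i}" for i in range(1, 10)]
)

for factor in responses.columns:
    x = responses[factor].to_numpy(dtype=float)
    mean, std = x.mean(), x.std(ddof=1)
    # One-sample t-test: is the mean significantly different from the midpoint 3?
    t_stat, t_p = stats.ttest_1samp(x, popmean=3.0)
    # One-sample K-S test against a normal distribution fitted to the sample
    ks_stat, ks_p = stats.kstest(x, "norm", args=(mean, std))
    print(f"{factor}: mean={mean:.2f} sd={std:.2f} "
          f"t-test p={t_p:.4f} K-S p={ks_p:.4f}")
```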
Product Pricing
The study surveys the pump prices of AGO, PMS and HHK for the period 2017 to 2023 to examine how product prices have fared over the years. A significant aspect of product agility is the continual availability of the best quality at affordable prices. Unfortunately, the prices of refined petroleum products have continued to increase over the years, becoming unaffordable and making life difficult for the people amid the attendant hyperinflation, bearing in mind that industrial and economic productivity in the country is driven largely by petroleum energy. Table 6 (source: NBS e-library) presents the yearly average prices of AGO, PMS and HHK for the period 2017 to 2023 alongside the year-on-year (YoY) percentage increase. Product prices have risen over the years, with the YoY increase nearing 200% for PMS in 2023, occasioned by the sudden removal of the subsidy on the product without plans for local supply. AGO and HHK reached their highest price increases of 167% and 96% respectively in 2022, also following the removal of subsidies on these products.

Study Findings
The agility of petroleum product distribution is a strategy for the uninterrupted supply of refined petroleum products to consumers at affordable prices and the best quality. Research observations and findings show that petroleum supply and distribution in Nigeria have had a history of challenges ranging from product scarcity and inconsistent pricing to quality compromise. At the root of these challenges are festering issues of corruption, mismanagement of petroleum resources and infrastructure, lack of local refinery capacity, dependence on importation, poor distribution networks and technology constraints. The lack of local refinery capacity is fuelled by corruption, sabotage and a lack of political will. There is no identified radical approach by the government to resuscitate the hitherto moribund refineries or to build new ones, and the local refineries currently operate at near-zero output levels. Corrupt gains from the oil subsidy and importation have engineered the sabotage of any government effort at building local refinery capacity. Insecurity, pipeline vandalism, theft and terrorism, which disrupt supply, are acts of sabotage and militancy that the government must address. Product importation is a reason for the increasing cost of petroleum products, following rising foreign exchange rates, devaluation of the Naira, rising global oil prices and inflation in the countries of import. Product importation itself places a foreign-exchange burden on the Naira, leading to further devaluation of the currency, and any supply disruption in the exporting countries translates into scarcity in the importing country. Storage capacity constraints, access limitations at retail delivery locations, and poor transport infrastructure and networks are transport and distribution challenges which lead to long delivery cycles, high transit times and high delivery costs per unit volume. Furthermore, expertise limitations and the lack of integrated management systems and technology-driven processes have led to a poorly coordinated distribution system.
Conclusion
More than 70% of the energy required in Nigeria for transportation, domestic consumption, and economic and industrial productivity depends on petroleum and its derivatives, chiefly because of the underdevelopment of electric power and other sources of energy. Over time, the demand for petroleum products has soared following the rising population, the increasing demand for economic productivity, general logistics and distribution, and the quest for an improved standard of living. While consumption demand for petroleum products has continued to increase, the issues of scarcity, high prices and corrupt practices which plague the distribution of petroleum in Nigeria have compounded, and successive governments have not shown the political will to address them. Significantly, the lack of local refinery capacity and the poor distribution network identified by the study are notorious causes of supply limitations and rising product prices. Nigeria is blessed with an abundance of natural petroleum resources yet depends on the importation of refined petroleum products to satisfy its local consumption demand. The reliance on road haulage for distribution is inefficient and poses a significant bottleneck and risk to product supply. To ensure the availability of products which meet local demand at affordable prices, with even distribution to all regions, the government is tasked to show genuine commitment in addressing the issues identified. Savings from the removal of the petroleum subsidy, as implemented by the current government, should be invested in local refinery capacity, revitalisation of the pipelines for efficient distribution, and development of other sources of energy, especially electricity supply, to reduce over-dependence on petroleum energy.

Disclosure of conflict of interest
The author declares that there is no conflict of interest in this manuscript.

Figure 1 Channel of Petroleum Product Distribution in Nigeria
Figure 3 Product Supply and Consumption Analysis
Figure 3 shows that local consumption for the years under review increasingly depends on foreign supply of products. Local supply capacity for all product groups has declined over the years, with PMS reaching negative levels in 2018 and 2019. This explains the low levels of agility in product supply and distribution. There is a limit to which a country can effectively meet its local consumption requirements while depending solely on importation.
Table 3 Descriptive Statistics of limiting factors to agile distribution (source: author's SPSS data computation output)
Table 4 One-Sample Kolmogorov-Smirnov Test
Table 5 Product Supply and Consumption
Two sources of product supply are identified: imports and local refineries. Data are sourced from the NBS e-library, and the products considered are Premium Motor Spirit (PMS), Automotive Gas Oil (AGO) and Household Kerosene (HHK). The data parameters examined for each product category are import volume, local consumption and local refinery output, in litres, for the period 2017-2019 as available in the NBS e-library database. With regard to Table 5, the data analysis shows that PMS imports increased steadily from 17.3 billion litres in 2017 to 20.8 billion litres in 2019, representing a 17.6% increase. The consumption volume also experienced a steady increase, from 18.3 billion litres in 2017 to 20.5 billion litres in 2019, representing an 11% increase. Local refinery output decreased from 1 billion litres in 2017 to 307 million litres in 2019, a 70% decline. The local refinery contribution to PMS consumption nosedived from 5.58% in 2017 to 0% in 2018 and 2019. There is a gap of 679 and 307 million litres of PMS in 2018 and 2019 respectively, which suggests smuggled volumes of PMS; this is also corroborated by the recorded supply excess over local consumption in the same years. Imports and local consumption of AGO each increased from 4 billion litres in 2017 to 5 billion litres in 2019, an increase of 25%. Local refining output decreased from 470 million litres in 2017 to 10 million litres in 2019, a 97% decline, and the local refinery contribution to total supply decreased from 9.9% in 2017 to 0% in 2019. HHK imports decreased from 340 million litres in 2017 to 128 million litres in 2019, a 62% decline. HHK consumption decreased from 944 million litres in 2017 to 270 million litres in 2019, a 73.4% decline, while the contribution of local refineries to consumption decreased from 63% to 52%.
Table 6 Petroleum Pump Price for the period of 2017 to 2023
7,247.8
2024-03-30T00:00:00.000
[ "Business", "Engineering", "Environmental Science" ]
OTSUCNV: an adaptive segmentation and OTSU-based anomaly classification method for CNV detection using NGS data
Copy-number variations (CNVs), which refer to deletions and duplications of chromosomal segments, represent a significant source of variation among individuals, contributing to human evolution and being implicated in various diseases ranging from mental illness and developmental disorders to cancer. Despite the development of several methods for detecting copy number variations based on next-generation sequencing (NGS) data, achieving robust detection performance for CNVs with arbitrary coverage and amplitude remains challenging due to the inherent complexity of sequencing samples. In this paper, we propose an alternative method called OTSUCNV for CNV detection on whole genome sequencing (WGS) data. This method utilizes a newly designed adaptive sequence segmentation algorithm and an OTSU-based CNV prediction algorithm, which does not rely on any distribution assumptions or involve complex outlier factor calculations. As a result, the effective detection of CNVs is achieved with lower computational complexity. The experimental results indicate that the proposed method demonstrates outstanding performance, and hence it may be used as an effective tool for CNV detection.

Background
Copy number variation is a type of structural variation in which a duplication or deletion event affects a large number of base pairs. According to the evidence, copy number variations in specific genes may affect the levels of gene expression in one or more cancer types, which in turn may affect how those cancers develop and progress [1]. Deletions or amplifications of relatively large DNA fragments, from 50 base pairs up to several megabases, are referred to as copy number variations [2]. Because of the intimate relationship between CNV and gene expression, particularly in tumor cells [3], where the influence of CNV on oncogenes and suppressor genes is especially significant, and because of the strong association of specific copy number variants with intellectual disability, autism [4], and schizophrenia [5], detecting CNV has become an important challenge for researchers and clinical laboratory practice.
More and more CNV detection techniques are being developed as a result of the advancement of next-generation sequencing technologies and the growing volume of data produced [6]. These techniques generally fall into one of four categories of data utilization: paired-end mapping (PEM), read depth (RD), split read (SR), and de novo genome assembly (AS). Each of these approaches has unique traits and a range of potential applications. The PEM-based method, which identifies large deletions particularly well, uses the relation between the spacing of paired (double-ended) reads and the insert size to assess whether the gene sequence is altered. The SR-based method detects variant breakpoints using abnormal alignment information and offers good detection results across deletion lengths. The AS-based strategy, which reassembles short sequences before variant identification and should theoretically give the best results, is seldom used in real investigations because of the enormous amount of high-quality data required and the high cost of assembly. The primary method for identifying genomic copy number variations is the RD-based approach, which relies on the correlation between read coverage depth and actual copy number [7][8][9]. Theoretically it can identify any type of variation, but owing to the statistical properties of coverage depth, its ability to detect copy number variations of small size and amplitude still needs to be enhanced. Based on NGS data and the aforementioned methodologies, several different methods have been created, the majority of which are RD-based. FREEC [10,11] calls genomic alterations by constructing and normalizing read depth profiles; it can also estimate the purity of tumor cells and can detect germline variant events when control samples are provided. CNVnator [12] detects copy number variation events by employing a mean-shift algorithm on read depth profiles under a predefined strategy. ACE [13] fits a model to the read depth data, calculates the tumor purity and cell ploidy with the least error, and then forecasts the absolute copy number. iCopyDAV [14] detects copy number variation events utilizing Total Variation Minimization (TVM) and Circular Binary Segmentation (CBS). CNV-LOF [15] finds CNVs from the standpoint of local data density, which significantly improves the efficiency of local CNV identification. CNV_IFTV [16] builds isolation forests to calculate anomaly scores for read depth profiles, then applies a total variation model to smooth the scores and forecast CNVs. By combining different sequencing signals, LUMPY [17] proposes a signal mapping framework to predict CNVs; it can also find several other forms of structural variants. PEcnv [18] fills the gap in the recognition of small CNVs by detecting CNVs of varying sizes using a base-coverage-corrected model and a dynamic sliding window. IhybCNV [19] improves detection performance by integrating results from different detectors. LDCNV [20] blends global and local information and presents an improved KNN-based anomaly score computation that more accurately captures the degree of abnormality. Restricted by the intrinsic complexity of NGS data, how to efficiently retrieve valuable information from such massive data, how to further evaluate the data features, and how to set thresholds with more confidence so as to forecast CNVs more consistently through simple and interpretable computational algorithms all require further research.
In light of the aforementioned factors, we here present a novel method for detecting CNV in NGS data, named OTSUCNV (based on OTSU). The idea is to use a straightforward and efficient sliding window strategy to locate breakpoints in the RD data, and then apply the OTSU method to the processed, condensed data to automatically isolate the anomalous portion. The two important contributions that we make are as follows:
1. A simple dynamic sliding window model is used to process the RD data so that adjacent positions with similar RD values are merged and breakpoints are identified.
2. The combination of the T-test and an adapted OTSU algorithm for categorizing copy-number abnormal and normal events allows even low-amplitude variant events to be identified correctly and with high confidence.

Overview of the OTSUCNV
Figure 1 depicts the method's workflow. It accepts a FASTA-formatted reference sequence file and a BAM-formatted read alignment file, preprocesses the input data, and then executes two primary phases to declare the CNVs:
1. The entire DNA sequence is divided into contiguous and non-overlapping segments using the boxplot threshold and the adaptive mean window calculator proposed in this paper.
2. Using the independent-samples T-test combined with the OTSU algorithm, CNVs are inferred from the deviation between each segmental profile and the normal profile.
In addition, the method is implemented in Python and is available for free at https://github.com/hotsnow-sean/OTSUCNV.

Preprocessing
Based on the input BAM alignment files, we obtain the read count (RC) profile by tallying the number of reads aligning to each position of the reference sequence, representing the coverage of each base position. Subsequently, we bin the reference genome into non-overlapping windows and compute the average read count for each bin, which is referred to as the read depth (RD) profile. After the initial RD profile has been obtained, some preprocessing is applied to it, including removing invalid bases of the reference genome and correcting GC bias, the latter using a technique developed in earlier work [7,12]. The RD profile needed for further processing is obtained after this preprocessing and can be written as R = {r_1, r_2, ..., r_N}, where N stands for the number of bins and r_i represents the RD value of the i-th bin.

Segmentation
The pre-processed data are now ready for the segmentation procedure, whose goal is to identify contiguous regions with the same copy number (similar read depth values). In this paper, we propose an adaptive sliding window algorithm to accomplish the segmentation task: it determines possible breakpoint locations based on the robust mean difference between the local left and right sides, and then merges adjacent bins with similar RD values into larger segments. The algorithm is briefly described as follows.
First, for the sequence R to be processed, and inspired by other research [21,22], we define the local one-sided robust mean at a position i as a weighted average of the RD values in a one-sided window around i, where ω_{m,i} represents the weight of position m relative to the computation point i. In theory, if point m belongs to the same segment as point i, then the weight ω_{m,i} is large; otherwise, it is small. In addition, k represents the maximum one-sided size of the sliding window, which limits the amount of calculation when many consecutive points belong to the same segment and can be specified manually; k is much smaller than the size N of the RD profile. To make the weight assignments reasonable, a negative power function is used so that the weights decrease as the distance from the calculation point increases: the consecutive difference r_i − r_{i−1} lets the weights keep decreasing smoothly and slowly while no breakpoint is crossed, and once a breakpoint is crossed, the significant difference in RD values at the breakpoint causes a sharp decrease in the weights of all subsequent positions. According to this formula, because points from different segments receive low weights, the resulting mean better represents the average RD value of the segment to which the calculated point belongs. In addition, if the weight ω falls below a certain threshold during the computation, the computation is terminated directly to improve efficiency and the robustness of the mean value; this threshold can also be specified manually.
Fig. 1 Flowchart of the OTSUCNV method
Along with the local mean, we define the local robust mean difference Diff_i as the difference between the right-sided and left-sided local means at position i. We calculate Diff_i at each position to obtain a sequence of mean differences, denoted Diff. In the sequence Diff, the values at breakpoints form extremes with respect to the values on both sides of them. To mitigate the impact of small local extrema on the algorithm, we use a boxplot procedure to filter out the regions with relatively large (or small) values in Diff. The upper and lower bounds are computed from Q1, Q3, and IQR, which are statistical parameters of Diff: Q1 is the first quartile, Q3 is the third quartile, and IQR is the difference between Q3 and Q1. Using the upper and lower bounds provided by the boxplot, we can efficiently filter out the relatively large and small values within the dataset. Among the filtered regions of larger and smaller values, we further keep only the local maxima or minima, whose locations are taken as the breakpoints. The hyperparameters mentioned above can be specified manually, but a better parameter selection strategy has also been derived in this study through extensive experiments, so the user can simply ignore the parameter specification and use the default implementation in the provided program. The pseudo-code of the algorithm is shown below (Algorithm 1).
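The pseudo-code of Algorithm 1 and the exact weight formula are not reproduced in this text, so the following Python sketch should be read only as a rough illustration of the described procedure: an assumed inverse-power weight based on the accumulated squared consecutive RD differences, one-sided robust means on each side of a position, their difference, boxplot fences (assumed here to be the standard 1.5×IQR), and local extrema taken as breakpoints.

# Illustrative sketch of the adaptive segmentation step; not the authors' implementation.
import numpy as np

def one_sided_mean(rd, i, k, direction, w_min=1e-3):
    """Weighted mean of up to k bins to the left (-1) or right (+1) of bin i."""
    num, den, sq_sum = 0.0, 0.0, 0.0
    prev = rd[i]
    for step in range(1, k + 1):
        m = i + direction * step
        if m < 0 or m >= len(rd):
            break
        sq_sum += (rd[m] - prev) ** 2      # accumulate squared consecutive differences
        w = 1.0 / (1.0 + sq_sum)           # assumed negative-power weight; drops sharply once a breakpoint is crossed
        if w < w_min:                      # early termination when remaining bins contribute little
            break
        num += w * rd[m]
        den += w
        prev = rd[m]
    return num / den if den > 0 else rd[i]

def find_breakpoints(rd, k=20):
    """Return the mean-difference sequence and candidate breakpoint indices."""
    n = len(rd)
    diff = np.zeros(n)
    for i in range(n):
        diff[i] = one_sided_mean(rd, i, k, +1) - one_sided_mean(rd, i, k, -1)
    q1, q3 = np.percentile(diff, [25, 75])
    iqr = q3 - q1
    upper, lower = q3 + 1.5 * iqr, q1 - 1.5 * iqr   # assumed boxplot fences
    breakpoints = []
    for i in range(1, n - 1):
        if diff[i] > upper and diff[i] >= diff[i - 1] and diff[i] >= diff[i + 1]:
            breakpoints.append(i)          # local maximum among unusually large differences
        elif diff[i] < lower and diff[i] <= diff[i - 1] and diff[i] <= diff[i + 1]:
            breakpoints.append(i)          # local minimum among unusually small differences
    return diff, breakpoints

# toy profile: a flat background with one elevated segment
rd_profile = np.concatenate([np.random.normal(1.0, 0.05, 100),
                             np.random.normal(1.6, 0.05, 40),
                             np.random.normal(1.0, 0.05, 100)])
_, bps = find_breakpoints(rd_profile)
print("candidate breakpoints near bins:", bps)

On this toy profile the detected breakpoints cluster around the boundaries of the elevated segment; the released implementation cited above should be preferred for real analyses.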
From Algorithm 1, it can be seen that the computational workload of the algorithm is concentrated in the calculation of the robust mean at each position. The calculation formula (2) for the robust mean requires computing the weights of neighboring points, and in the weight calculation (formula 3) the sum of squared distances can be accumulated during the loop, so the cost of processing each neighboring point is O(1). As the number of points examined around each position is significantly smaller than the scale of the RD profile, the overall computational complexity can be regarded as O(N). Therefore, the proposed algorithm accomplishes the segmentation task with a relatively low and stable time complexity. After the segmentation, we partition R into consecutive non-overlapping segments of different sizes according to the segmentation result, expressed as S = {s_1, s_2, ..., s_n}, where s_i represents the set of all RD values for the i-th segment. To give a clearer understanding of the algorithm steps, we provide a simple diagram in Fig. 2. The x-axis in the figure represents the position index, the red points represent the RD values, and the blue line represents the calculated mean difference (formula 4). The two horizontal dashed lines represent the upper and lower limits, and the two green points represent the breakpoints obtained in the end. From Fig. 2, it can be observed that the robust mean difference calculated using the proposed formula effectively reflects the probability of a point being a breakpoint. After filtering with the boxplot threshold, reasonable inferences can be made regarding the potential locations of breakpoints.

Inferring CNVs based on the OTSU
Generating probability
Based on the set S obtained by the segmentation procedure, the RD values first need to be classified according to their numerical magnitude, and the probability of occurrence of each category of RD values must be evaluated before applying the OTSU algorithm. Therefore, the independent-samples T-test is used to test all segments s_i pairwise, and the sets that are not significantly different are aggregated into the same class. This results in several categories of RD values; assuming a total of m categories, each category C_i can be expressed as the set of the segments that are not significantly different from each other, where |C_i| and n_i denote the number of elements in C_i and n denotes the number of elements in the set S. We take the average of the RD values contained in each class as its representative value, and the ratio of the number of its elements to the number of all segments as the probability of occurrence of this RD value; this gives, for each representative RD value, a corresponding occurrence probability.
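The grouping step just described (pairwise independent-samples T-tests that merge statistically indistinguishable segments, followed by occurrence probabilities derived from segment counts) can be sketched as follows. This is a simplified, greedy variant written only for illustration; the 0.05 significance level and the Welch-type test are assumptions, not necessarily the authors' exact choices.

# Illustrative grouping of segments into RD categories with occurrence probabilities.
import numpy as np
from scipy import stats

def group_segments(segments, alpha=0.05):
    """segments: list of 1-D arrays of RD values. Returns (category means, probabilities)."""
    categories = []                                   # each category is a list of segments
    for seg in segments:
        placed = False
        for cat in categories:
            pooled = np.concatenate(cat)
            _, p = stats.ttest_ind(seg, pooled, equal_var=False)
            if p > alpha:                             # not significantly different -> same copy-number level
                cat.append(seg)
                placed = True
                break
        if not placed:
            categories.append([seg])
    means = np.array([np.mean(np.concatenate(cat)) for cat in categories])
    probs = np.array([len(cat) / len(segments) for cat in categories])
    return means, probs

# toy usage: three segments at two copy-number levels
segs = [np.random.normal(1.0, 0.05, 50), np.random.normal(1.0, 0.05, 80), np.random.normal(1.6, 0.05, 40)]
means, probs = group_segments(segs)
print(means.round(2), probs)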
Predicting CNVs by OTSU
After the previous processing, the average RD value of each category and its corresponding probability of occurrence are available. Next, to distinguish the abnormal segments from the normal segments, we use the OTSU [23] method, an automatic threshold-selection method from the field of image segmentation with simple computation and good self-adaptability, which can find a threshold with high confidence according to the distribution of the data itself. For copy number variation detection, we can regard the abnormal events as the foreground and the normal events as the background of an image. Since copy number abnormalities can be simply divided into two kinds of numerical behaviour, gain and loss, we unify the processing by computing the distance of every RD value from the RD value corresponding to the normal copy number, obtaining a distance array D in which the distances are listed in ascending order. Here u_normal denotes the RD value corresponding to the normal copy number, which can be calculated by any reasonable method; the most common choice is to take the mode. In this paper we use a more robust method to calculate it, for which the reader is referred to [24]. For convenience, the previously obtained probabilities are indexed so that the index of each probability corresponds to the index of the matching distance value. According to the characteristics of the RD-based method, the larger the deviation from the normal RD value, the more abnormal the segments corresponding to that RD value are. Now suppose that the data are divided into two categories C_0 and C_1 by a certain threshold k, where C_0 represents the category less than or equal to the threshold and C_1 the category greater than the threshold; the probability of occurrence of each category and the respective class means can then be computed, and the between-class variance is defined from them. By searching for the optimal threshold k that maximizes the between-class variance, the normal and abnormal data can be separated.

Algorithm 2 OTSU classifier
According to Algorithm 2, the OTSU classifier only has a single loop of m iterations, and therefore the time complexity of this algorithm is linear, O(N). In addition, owing to the peculiarity of CNVs, the magnitude of a gain fragment can be much larger than that of a loss fragment. This can cause an imbalance in the distribution, similar to uneven lighting in image segmentation, which can have a significant impact on the performance of the OTSU algorithm. Here we use a simple strategy, called extreme value suppression, to reduce the negative impact of this situation on the prediction results: before applying the OTSU algorithm, we reduce the values in the distance array D that are too large, relative to D_mean, the average of all distance values. In theory, if such extreme values exist, this step affects only the data closest to the extremes, thus limiting their negative effect; if no such extreme values exist, the distance values corresponding to all anomalous RDs are reduced equally and the prediction outcomes are hardly affected. In the subsequent sections, we illustrate the effectiveness of this procedure with experimental results.
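A compact sketch of the OTSU classification and the extreme value suppression described above is given below. The between-class variance search follows the standard OTSU formulation over the probability-weighted distances; the suppression formula itself is not shown in this text, so clipping distances at a multiple of the mean distance is used purely as a hypothetical stand-in and should not be read as the authors' formula.

# Illustrative OTSU threshold search over (distance, probability) pairs.
import numpy as np

def suppress_extremes(distances, factor=3.0):
    """Assumed stand-in: cap distances far above the average distance."""
    d_mean = distances.mean()
    return np.minimum(distances, factor * d_mean)

def otsu_threshold(distances, probs):
    """Return the distance threshold maximizing the between-class variance."""
    order = np.argsort(distances)
    d, p = distances[order], probs[order]
    best_var, best_thr = -1.0, d[0]
    for k in range(len(d) - 1):                 # classes: indices <= k vs. > k
        w0, w1 = p[:k + 1].sum(), p[k + 1:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:k + 1] * d[:k + 1]).sum() / w0
        mu1 = (p[k + 1:] * d[k + 1:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_thr = var_between, d[k]
    return best_thr

# toy usage: distances of category means from the normal RD value, with probabilities
dist = np.array([0.02, 0.05, 0.08, 0.55, 0.60])
prob = np.array([0.35, 0.30, 0.20, 0.10, 0.05])
thr = otsu_threshold(suppress_extremes(dist), prob)
print("categories with distance above", thr, "are declared abnormal (CNV)")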
To give a clearer understanding of the algorithm steps, we present an example in Fig. 3. The x-axis in the figure represents the distance of all RD values relative to the normal RD value (refer to formula 9), the y-axis represents the probability density of the values, and the dashed vertical line indicates the position of the optimal threshold calculated by the OTSU algorithm. From the probability density curve of the data distribution in Fig. 3, it can be observed that the data are mainly concentrated in two peaks, with the peak near 0 (corresponding to normal RD values) being higher. The optimal threshold calculated by the OTSU algorithm is located precisely near the valley where the two peaks meet. This clearly demonstrates the reliability of differentiating between normal and abnormal data using the OTSU algorithm.
Fig. 3 Example of finding the optimal threshold using OTSU

Results
To assess the effectiveness of OTSUCNV, we performed experiments on both simulated and real datasets. For each dataset type, we compared our proposed method with four peer methods designed for the same purpose. The efficacy of these methods was measured using the precision, sensitivity, and F1-score metrics. Precision was defined as TP/PP, sensitivity as TP/P, and the F1-score as the harmonic mean of precision and sensitivity. In this context, TP refers to the number of genomic positions that appear both in the declared CNVs and in the confirmed CNVs, PP corresponds to the total number of genomic positions included in the declared CNVs, and P represents the total count of positions in the confirmed CNVs.

Simulation studies
For the simulated experiments, we utilized hg38 as our reference genome, which is available for download from the Ensembl database, http://asia.ensembl.org/. We then simulated the test gene sequences using SInC [25] and ART [26] together with the reference genome. In this study, SInC was responsible for simulating copy number variations in the normal reference sequence, while ART was used to simulate sequencing of the generated test sequences and ultimately produce FastQ files [27]. Subsequently, BWA [28,29] and Samtools [30] were used with default parameters to obtain the aligned BAM files for CNV detection. We used SInC to generate three different sets of gene sequences with CNV region lengths ranging from 3000 to 50000 bp, and for each sequence ART was employed to generate sequencing data with coverage depths of 2X, 6X, and 10X. To ensure the reliability of our experiments, each coverage depth was repeated 30 times to minimize experimental variability, and the average performance over the 90 samples was taken as the final metric for each of the three sequencing coverages. Using the simulation data generated above, we compared OTSUCNV with four different peer methods: FREEC [11], CNV-LOF [15], KNNCNV [31], and LDCNV [20]. Figure 4 shows the experimental results of these methods on the simulation data, where the results for each coverage are averaged over a total of 90 samples covering 3 different variant configurations and 30 sequencing repetitions. According to the figure, FREEC shows an F1 score close to 0.8 for samples of different coverage. LDCNV performs poorly in terms of precision, ranking fifth in F1 score. The F1 scores of CNV-LOF and KNNCNV improve with increasing sample coverage, ranging between 0.6 and 0.8. Our method, in contrast, outperforms the other four peer methods in terms of precision, sensitivity, and F1 score; even for samples with 2x coverage, its F1 score remains around 0.9. Overall, OTSUCNV performs better than the other four peer methods on the simulated dataset.
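For reference, the precision, sensitivity and F1 metrics used in the comparisons above can be computed from declared and confirmed CNV regions at the level of genomic positions, as in the small sketch below; the interval coordinates are invented for illustration only.

# Position-level precision/sensitivity/F1 from declared vs. confirmed CNV intervals.
def positions(intervals):
    """Expand a list of (start, end) intervals (inclusive) into a set of positions."""
    return {p for s, e in intervals for p in range(s, e + 1)}

declared = positions([(1000, 1999), (5000, 5499)])      # CNVs called by a method (hypothetical)
truth = positions([(1200, 2199)])                        # confirmed CNVs (hypothetical)

tp = len(declared & truth)
precision = tp / len(declared)                           # TP / PP
sensitivity = tp / len(truth)                            # TP / P
f1 = 2 * precision * sensitivity / (precision + sensitivity) if tp else 0.0
print(f"precision={precision:.2f}, sensitivity={sensitivity:.2f}, F1={f1:.2f}")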
To further examine the importance of the extreme value suppression step in the proposed method, we conducted an ablation experiment on that step using the same experimental data and procedure. The experimental results are shown in Fig. 5, and it can be seen that the application of this step led to a significant increase in the sensitivity of the CNV prediction, thus greatly improving the F1 score of the results.

Application to real datasets
The real sequencing samples were obtained from the 1000 Genomes Project [32]. For our study, we selected six samples commonly used in this field of research (NA12878, NA12891, NA12892, NA19238, NA19239, NA19240), all of which were aligned to the hg18 version of the reference sequence. In this algorithm study, these six samples were used only for tool performance validation. The DGV Gold Standard Variants for these samples were downloaded from the Database of Genomic Variants (DGV, http://dgv.tcag.ca/dgv/app/home) [33]. As shown in Fig. 6, we conducted comparison experiments with the four peer methods on the six real datasets. Based on the experimental results, our proposed method achieves a relatively high F1 score. Specifically, CNV-LOF, FREEC, and LDCNV have lower overall rankings due to their lower precision. In comparison to KNNCNV, OTSUCNV demonstrates higher F1 scores on four samples and higher precision on every sample. Overall, OTSUCNV also demonstrates advantages in experiments with real samples.

Comparison of running time
To evaluate the execution efficiency of the algorithm, the proposed method was tested on 30 simulated samples along with the four peer methods. The tests were conducted on a PC with a 2.9 GHz CPU and 16.0 GB of memory. The average execution time over the 30 samples is shown in Table 1. In terms of execution time, our method is the fastest except for FREEC. However, FREEC requires additional preprocessing to calculate the GC content percentage of a given FASTA-format sequence file, and its reported time does not include this GC calculation, which took approximately 8 seconds under the same experimental conditions. Overall, OTSUCNV is an efficient CNV detection approach.

Discussion and conclusion
We developed a novel method for CNV detection in whole genome sequencing, called OTSUCNV, which has been demonstrated to perform well on samples of different coverage depths and on both real and simulated datasets, and which can be applied to the analysis of germline and tumor data. OTSUCNV first segments DNA sequences using an adaptive sliding window technique, and then clusters the segmented RDs using independent-sample T-tests to obtain the probability of occurrence of each class of RDs. Finally, based on the OTSU algorithm, all RD representative values are classified as normal or abnormal, and the gene segments they correspond to are accordingly declared as CNVs. Our method has several advantages: (1) the proposed sequence segmentation approach exhibits good adaptivity; (2) the use of the modified OTSU method for CNV prediction eliminates the difficulty of manually selecting thresholds and demonstrates good performance both theoretically and practically; (3) compared to the four peer methods, our algorithm has low computational time complexity, with both the segmentation and prediction stages having only linear time complexity. Overall, our method offers high cost-effectiveness in CNV detection.
We conducted studies with four peer approaches on both simulated and real datasets to illustrate the effectiveness of the OTSUCNV method. The experimental results show that our method outperforms the other four methods in terms of F1 score, surpassing them comprehensively on the simulated datasets and performing similarly to KNNCNV on the real datasets. Moreover, the comparison of running times shows that OTSUCNV is also more efficient. Therefore, OTSUCNV may become a promising tool for detecting CNVs. For future work, we plan to improve our method in two areas: (1) in RD-based CNV detection, the size of the RD calculation window is a crucial factor, but its selection is currently based on empirical knowledge, so we intend to design an algorithm that avoids manual selection; (2) during the CNV prediction stage, we treat both gain and loss cases as the same anomalous event, and although the impact of this strategy is currently reduced by the extreme value suppression method, a more robust treatment should be possible, so we plan to further optimize the algorithm to address this problem.
Fig. 4 Performance comparison of OTSUCNV with the four peer methods in terms of precision, sensitivity, and F1-score. The F1-score is shown as black dashed lines ranging from 0.1 to 0.9 with an increment of 0.1. (a-c) The performance of the aforementioned approaches for three distinct coverage levels: 2x, 6x, and 10x.
5,828.2
2024-01-30T00:00:00.000
[ "Computer Science", "Medicine" ]
Innovation-Sustainability Nexus in Agriculture Transition: Case of Agroecology Abstract Different governments and international organizations have shown interest in agroecology as a promising pathway for transition to sustainable agriculture. However, the kinds of innovation needed for agro-ecological transition are subject to intense debate. The scale of this debate is itself an indicator of the complicated relation between innovation and sustainability in the agro-food arena and beyond. This review paper analyses the potential of agro-ecology in agricultural sustainability transitions. It also explores whether agro-ecological transition is a sustainable innovation (cf. ecological, green, open, social, responsible). Furthermore, the paper investigates the potential contribution of agro-ecological transition to sustainability, using the 3-D (Direction, Distribution and Diversity) model of the STEPS centre. Agroecology is one of the few approaches that can harmoniously combine innovation and sustainability in agriculture while promoting genuine transition to agro-food sustainability since it embraces all dimensions of sustainability (environmental, economic, social/cultural/ethical). Nevertheless, it can be taken for granted neither that all traditional practices can be classified as ‘agro-ecological’ nor that all farmer-led innovations can be included in the agro-ecological repertoire. Moreover, the relationship between the three aspirations of agroecology (science, movement and practice) needs further elaboration in order to maximise potential for agriculture transition. knowledge between the worlds of science, policy and practice is needed to foster a genuine transformation of food systems, which is necessary in making the transition towards sustainability. Transition will most likely not depend on one or even a small number of technological innovations, but is likely to arise from a constellation of mutually interacting systems of innovations (Twomey and Gaziulusoy 2014). This is particularly true in the case of food systems where social innovations also seem important. According to Hinrichs (2014), social and organizational innovations are as central to sustainability transitions in food systems as any particular innovative technology. In fact, transition to sustainable agro-food systems requires complex and holistic changes in which social innovation plays as big a role as technological innovation (IPES-Food 2015). Innovation has become a key issue in the discussion about the relation between agriculture and sustainability (e.g. EIP-AGRI 2013;FAO 2013FAO 2012Global Harvest Initiative 2016;IPES-Food 2015). It is widely admitted that the transition to agricultural sustainability requires 'sustainable innovation ' (e.g. El Bilali 2018a). Different models of sustainability-oriented innovation have been promoted. The Network for Business Sustainability (2012) identified multiple definitions relating to Sustainability-Oriented Innovation (SOI): eco-innovation, ecological innovation, environmental innovation, frugal innovation, green innovation, inclusive innovation and social innovation. In fact, there is a growing emphasis on 'responsible' (European Commission 2013), 'sustainable' (Charter and Clark 2007;Chonkova 2015), 'social' (Caulier-Grice et al. 2012;European Commission 2017;Mulgan et al. 2007;Nicholls and Murdock 2012;Osburg 2013), 'open' (Chesbrough 2003;Christensen et al. 2005) and 'ecological' (Carrillo-Hermosilla et al. 
2010;Charter and Clark 2007;Fussler and James, 1996;Kemp and Foxon 2007;Kemp and Pearson 2008;Reid and Miedzinski 2008) innovation as a way of combining harmoniously innovation and sustainability. Sustainable innovation means paying attention to ecological integrity along with the diversity of social values; encouraging plural innovation pathways; promoting fairer und wider distribution of the benefits of innovation; and fostering inclusive and participatory governance of innovation processes (STEPS Centre 2010). There are different pathways leading towards transition to sustainable agriculture and food systems. In fact, there are several contending paradigms and narratives about sustainable agriculture and ways to achieve it (e.g. Elzen et al. 2017;Levidow 2011;Van der Ploeg 2009). Nevertheless, agroecology is considered one of the most prominent and promising pathways (e.g. Duru et al. 2014;Gazzano and Gómez Perazzoli 2017;Huang and McCullough 2013;Isgren and Ness 2017;Meek 2016;Miles et al. 2017;Ollivier et al. 2018;Wezel et al. 2016). In fact, agroecology is gaining ground, both in developed and developing countries, within the debate on how to address the systemic problems faced by agriculture (Gazzano and Gómez Perazzoli 2017;Isgren and Ness 2017). The transformative potential of agroecology is nowadays widely recognised not only by many agroecology scholars and organic agriculture movements [e.g. International Federation of Organic Agriculture Movements (IFOAM) -Organics International] (Hilbeck and Oehen 2015), but also by a number of international organisations [e.g. United Nations Conference on Trade and Development (UNCTAD), World Bank, FAO (FAO 2015;IAASTD 2009;FAO 2018d)], and expert panels such as IPES-Food (IPES-Food 2016). This review paper aims to cast light on innovationsustainability nexus in agro-ecological transition. Section 2 analyses the potential of agro-ecological transformation in agriculture sustainability transitions. Section 3 explores whether agro-ecological transition is a sustainable innovation (cf. ecological, open, social, responsible, green); while section 4 investigates the contribution of agro-ecological transition to sustainable development against the 3-D dimensions (Direction, Distribution and Diversity) suggested by the STEPS centre. Agro-ecological transition Agroecology (Altieri 1980;Dalgaard et al. 2003;Gliessman 1998;Gliessman 2015;Wezel and Soldat 2009), is an approach that dates back to the beginning of the 20 th century (e.g. Friederichs 1930;Hanson 1939;Harper 1974). It links together science, practice and movements focused on social change (Wezel et al. 2011) through the integration of participatory, transdisciplinary and change-oriented research and action (Ernesto Méndez et al. 2013;Gliessman 2016). Dalgaard et al. (2003) consider agroecology as the study of interactions between living organisms (plants, animals), humans and the environment within agricultural ecosystems. Meynard (2017) points out that agro-ecology is far from a simple merger between agronomy and ecology; it is an innovative and multidisciplinary (natural, economic and social sciences) project that connects politics and action. Simply put, agroecology is an approach that utilizes ecological principles to design and manage productive, resilient and sustainable farming and food systems (Gliessman 2015;IPES-Food 2016). Recently, small farms and food sovereignty gained momentum in the overall discourse on agroecology (Altieri 2009). 
More and more peasants' movements and civil society organisations (e.g. La Via Campesina) propose agroecology as an alternative system to resist the growth-oriented innovation system in agriculture with its inevitable consequences for rural areas (Rosset and Martinez-Torres 2013), thus placing focus on the bio-politics of not only nourishing humanity, but also access to resources and distributive justice (Anonymous 2014). According to Tittonell (2015), agroecology describes relations between humans, ecosystems, traditional farming, innovation and technology. The integration of the three practical forms of agroecology (scientific discipline, agricultural practice, social movement) and linkage with other food movements (e.g. food sovereignty) have provided a collective-action to contest the dominant agro-food regime and create agro-food alternatives (Levidow et al. 2014). The agro-ecological philosophy and message have also profoundly influenced and shaped other alternative agro-food movements and communities such as permaculture (e.g. Ferguson and Lovell 2014;Maye 2018), conservation agriculture (e.g. Vankeerberghen and Stassart 2016) and organic agriculture (e.g. Lampkin et al. 2017). Agro-ecological practices embrace soil fertility management, pest control, biodiversity conservation, agroecosystem integrity etc. Agro-ecological practices pay particular attention to ecosystem services and biological processes such as biological pest control, symbiotic nitrogen fixation, agrobiodiversity and habitat diversity as well as integration of crop and livestock production (Lampkin et al. 2017;Wezel et al. 2014). However, while agro-ecological transition referred mainly to the transformation of crop systems towards more ecological practices and techniques; an increasing number of articles deal with livestock agro-ecological transition (Dumont et al. 2013) in different countries such as France (Beudou et al. 2017;Ryschawy et al. 2017) and Australia (Cross and Ampt 2017). In fact, agro-ecological transition reduces greenhouse gas (GHG) emissions and mitigates the impact of livestock on climate change (Dumont et al. 2013;Martin and Willaume 2016). Similarly, while the agro-ecological approach initially focused on agroecosystems, it now deals more and more with the broader agro-food system. The agroecology narrative diagnoses the problem with existing agro-food systems as profit-driven agroindustrial monoculture systems that undermine farmers' knowledge and make them dependent on external inputs while increasing the distance between consumers and agri-producers (Levidow 2015a). By promoting the development of 'coupled innovations', agroecology reconnects the dynamics of innovation in agriculture and food, with a view to improving the whole agri-food system . Francis et al. (2003) and Gliessman (2006) expanded agroecology understanding and scope by putting a stronger emphasis on the notion of sustainable food systems with agroecology being "the science of applying ecological concepts and principles to the design and management of sustainable food systems" (Gliessman 2006). Agroecology promotes transition towards a sustainable agro-food system that restores ecosystem services, enhances human welfare and promotes community-based economic development (Miles et al. 2017). Therefore, agroecology is presented as a way of transforming and redesigning food systems, from the farm to the fork, with the goal of achieving environmental, economic and social sustainability (Gliessman 2015;Gliessman 2016). 
In fact, current agro-ecological thinking focuses on a critique of the whole agro-food regime rather than just the 'green revolution' (Holt-Giménez and Altieri 2013). That is to say, agroecology aims to stimulate the development of alternative thinking about the future of agriculture and to strengthen ecological processes in agricultural systems while addressing the problems of concentration, alienation and access to land, along with other issues such as food sovereignty and family agriculture (Gazzano and Gómez Perazzoli 2017). In fact, agroecology makes a significant contribution to the persistence of family agriculture (Babin 2015; McCune et al. 2017; Santamaria-Guerra and González 2017). Gliessman (2015) proposed a five-level framework for classifying food system change through agroecology approaches and practices (Box 1). This clearly shows that agro-ecology is also presented as a food system transformation pathway. According to the High Level Panel of Experts on Food Security and Nutrition (HLPE 2017), any process of transformation or transition to sustainability in agriculture and food systems should take agroecology into account in order to contribute to achieving sustainable food security and nutrition. To counteract the negative effects of intensification and globalisation, many scholars (e.g. Altieri 2002, 2009; Gliessman 2006) proposed orienting agricultural research more towards the needs of peasants and smallholders who are put at risk by technocratic farming systems. Agroecological innovation is key to the transition towards sustainability in the current agro-food system (Levidow 2015). Thanks to many social and grassroots movements (e.g. La Via Campesina), the Latin American agroecology agenda has inspired transformational strategies in other world regions (Table 1), such as in Europe (e.g. Féret and Moore 2015; IPES-Food 2018). Indeed, according to the EU's Standing Committee on Agricultural Research (SCAR-FEG 2009), agro-ecological principles should be given priority in agriculture research agendas in the European Union. The European organic sector promotes agro-ecological research with the concept of 'eco-functional intensification', linking farmers' knowledge and innovation with scientific research (Niggli et al. 2008). This new understanding of 'agro-ecological innovation' is promoted by a European alliance involving civil society organisations and farmers (ARC2020 and Friends of the Earth Europe 2015). Agroecology represents a good example of transition towards sustainable farming and food systems: it regenerates agroecosystems and advocates sustainable use of natural resources (Cross and Ampt 2017). Agro-ecological practices and innovations are diverse and multifaceted. Pant (2016) analysed agro-ecological innovations in soil and water conservation, crop improvement, crop intensification and market differentiation. It emerges that agro-ecological transition goes beyond a 'regreening of agriculture' and represents a clear example of 'strong ecological modernisation' or 'strong ecologisation' of agriculture based on ecosystem services provided by biodiversity (Duru et al. 2014). Despite its well-documented positive impacts, agro-ecological transition faces context-specific technical, political, social, cultural and economic obstacles (Beudou et al. 2017). Moreover, agro-ecological approaches may have, in the short term, trade-offs against productivity and potentially negative impacts on profitability (Lampkin et al. 2017), and these may affect widespread uptake.
However, one of the main obstacles to niche agro-ecological innovations is opposition from established players in the agricultural regime, such as the agriculture research system (Prasad 2016). Other obstacles to agroecology are linked to all barriers to diversity and diversification in agro-food systems that arise from a range of policies and regulations tailored to the needs of the industrial food system (e.g. food safety rules, seed legislation, intellectual property protection legislation) (IPES-Food 2016). Concepts of participation, 'conscientization' and autonomy, are central in agroecological movements and they represent the backbone of the political ecology of the agro-ecological transition (Moore 2017). However, policy advocacy is often hampered by the apolitical history of agroecology movements (Isgren and Ness 2017), historically strong only at the local level; so much so that Gonzalez de Molina (2013) highlights the necessity for a 'political agroecology' to endow agroecology with the necessary political instruments and approaches to upscale it to regional and national levels. Therefore, the agroecology movement should engage more actively in politics in order to foster largescale agroecological transition in the food system. Agro-ecological initiatives are also constrained by cultural politics (i.e. conflicting values about appropriate types of agriculture) and associated environmental (e.g. historical land use), cognitive (e.g. conception of space) and relational (e.g. agricultural extension) mechanisms (Meek 2016). Sustainability of agro-ecological transition as a system innovation Many alternative agro-food movements have a critical attitude towards innovations especially those of a technical/ technological nature. Agroecology is one such movement widely cited by scientists with the intention of opening up scientific preoccupation and contesting the technocratic governance of agricultural innovation, oriented towards commercial benefits, agricultural intensification and Box 1: Levels of agro-food system changes thanks to agro-ecological approaches. • Level 1: Increase the efficiency of industrial and conventional practices in order to reduce the use and consumption of costly, scarce, or environmentally damaging inputs. • Level 2: Substitute alternative practices for industrial/conventional inputs and practices. • Level 3: Redesign the agroecosystem so that it functions based on a new set of ecological processes. • Level 4: Re-establish a more direct connection between those who grow food and those who consume it. • Level 5: On the foundation created by the sustainable farm-scale agroecosystems achieved at Level 3, and the new relationships of sustainability of Level 4, build a new global food system, based on equity, participation, democracy, and justice, that is not only sustainable but also helps restore and protect earth's life support systems upon which we all depend. Source: Gliessman (2015). expansion of global trade ). Agroecology is not against innovation in general, only to certain types. In fact, the Institute for Agriculture and Trade Policy (2013) points out that agroecology "is by definition an innovative, creative process of interactions among smallscale producers and their natural environments". However, agroecology faces the task of challenging the dominant models of innovation in agriculture. Beside technologicalscientific innovation, it also embraces know-how, social and organisational innovation forms (IFOAM EU Group et al. 2012). 
Agroecology promotes social and organisational innovation as an alternative strategy across the whole agro-food chain, with the aim of strengthening the connection between agro-ecological farmers and consumers to support their innovations. These agro-ecological initiatives are variously known as short food supply chains (SFSCs) or alternative agro-food networks, and they are clear examples of social innovation (Galli and Brunori 2013). Such new agro-ecologically inspired local networks and citizen-community alliances can counterbalance the dominant agri-food system (Fernandez et al. 2013). Innovation has always occurred in agriculture (cf. farmers' innovations, e.g. Richards 1985). However, many scholars dealing with the agro-food system do not feel comfortable with the current narrow definition of innovation, meaning technological and commercialised innovation. This innovation model ignores existing farmers' knowledge and undervalues their capacity to innovate (Levidow 2015). It has privileged laboratory-based and scientific knowledge in research agendas at the expense of farmers' agro-ecological knowledge (Vanloqueren and Baret 2009). This process was seen as causing profound social or cultural changes (Godin 2008, 2015) that are not always seen as positive for farming and rural communities. The term 'agro-ecological innovation' is nowadays widely used in the literature (e.g. Blazy et al. 2010, 2011; Hubert et al. 2017; Prasad 2016; Salliou and Barnaud 2017), and this clearly shows that agro-ecological practices and techniques, mostly based on local and traditional knowledge, are considered innovative in many local contexts. In fact, agroecology represents a new relationship to knowledge and innovation (Meynard 2017). According to Holt-Giménez and Altieri (2013), agroecology is "knowledge intensive (rather than capital intensive), tends toward small, highly diversified farms, and emphasizes the ability of local communities to generate and scale-up innovations through farmer-to-farmer research and extension approaches". Agroecologists use different innovative grazing and cropping strategies. Agro-ecological groups and communities of practice champion locally appropriate technologies and participatory methods in research and extension (Isgren and Ness 2017). Demeulenaere and Goldringer (2017) consider agro-ecological transition, especially practices related to the selection and exchange of seeds, as radical and breakthrough innovation. Agro-ecological groups and movements are driven and fashioned by innovators who collaborate via mutual engagement, joint enterprise and a shared repertoire (Cross and Ampt 2017). They often exist on the margins of conventional agri-innovation systems (Cross and Ampt 2017; Isgren and Ness 2017; Miles et al. 2017; Prasad 2016) and challenge existing research and extension paradigms regarding innovation (Cross and Ampt 2017; Isgren and Ness 2017). Agroecology represents a promising alternative pathway for innovation. It can be considered not only ecological but also 'socially responsible innovation' (cf. Tilman et al. 2002), as it contributes to addressing grand challenges of our time, such as degradation of natural resources, malnutrition/food insecurity, poverty and climate change, as well as associated socio-ethical issues (De Schutter 2011; Pereira et al. 2015). Agroecology also has many characteristics similar to 'open innovation' (e.g. Chesbrough 2003;Christensen et al.
2005); so much so that agroecology calls for a more open approach towards knowledge management and sharing to ensure a wider access to knowledge and innovation or what McCune et al. (2017) refer to as the 'counterhegemonic process of internalization and socialization of agroecological knowledges'. This is particularly the case with participatory breeding that is promoted by agroecology (e.g. Malandrin and Dvortsin 2013). In fact, the management of collective rights and intellectual property rights (IPR) is particularly problematic and challenging in the agricultural sector (HLPE 2017). In this regard, biotechnologies raise many ethical concerns (EGE 2008) as they could pave the way for market predominance by a few companies, which might impact innovation and the local economies in developing countries. Therefore, agroecology stresses innovation and knowledge as public goods (cf. Stiglitz 2007) and agro-ecological movements struggle against the 'patenting' of biological resources and privatisation of germplasm e.g. hybrid corn (Lewontin and Berlan 1990). In this context can be also seen the defence of seed sovereignty and resistance against -both direct (e.g. application of IPR to living materials) as well as indirect (e.g. establishing seed certification requirements and quality standards) -outlawing of informal systems of seed exchanges (e.g. Wattnem 2016). However, open source networks and platforms not only deal nowadays with the exchange of seeds and living materials but also machines/technological innovations. One example of such a platform is the Open Source Ecology; a network of farmers, engineers, architects and supporters, whose main goal is the eventual manufacturing of 50 of the most important machines (e.g. tractors) necessary for modern agriculture (Open Source Ecology 2018). Agro-ecological innovations are by nature ecological and green. In fact, the adoption of agro-ecological practices, techniques and processes helps to decrease environmental impact and reduce pollution and other negative impacts of resource use. Agro-ecological practices are also 'green' as they are based on natural resource use coupled with little or none environmental impact. In fact, agro-ecological innovations represent a good example of 'strong ecological modernisation' in agriculture (Duru et al. 2014). Evidence shows that agro-ecological approach is associated with positive environmental impacts such as increased biodiversity and agrobiodiversity (e.g. Blesh and Wolf 2014;Lampkin et al. 2017;Lanka et al. 2017;Salliou and Barnaud 2017), improved resource use and reduced emissions (e.g. Lampkin et al. 2017). According to Lampkin et al. (2017) agroecology brings about an increased focus on 'ecological innovation' alongside the more traditional emphasis on technological innovation in the agro-food sector. It is widely admitted nowadays that to meet sustainability challenges, more attention should be paid to social innovations, grassroots innovators and processes (Leach et al. 2012;Loconto et al. 2017;Moulaert 2013;Smith and Seyfang 2013). Social innovations are considered good not only for the economy but also for the society (Caulier-Grice et al. 2012) as they engage with social problems in a way that is more efficient, fair, and as effective or sustainable as existing solutions (Phills et al. 2008). Nevertheless, social innovations are not value-neutral but rather are socially and politically constructed, and context-dependent (Caulier-Grice et al. 2012). 
Agro-ecological innovations have many features of social innovation. In fact, they meet the social needs of farmers (including smallholders in developing countries); lead to new or improved relationships; or develop new collaborations between multiple stakeholders (e.g. agroecology movements, groups and communities of practice). Furthermore, all the eight common features of social innovation identified by Caulier-Grice et al. (2012) apply to agroecology i.e. cross-sectoral, open and collaborative, grassroots and bottom-up, pro-sumption (cf. production-consumption) and co-production, mutualism, creating new roles and relationships, better use of assets and resources, and developing assets and capabilities. Agro-ecological innovation can be considered as a 'transformative social innovation' (Prasad 2016) that emphasizes the roles of social movements and the reengagement of vulnerable communities in societal transformation. In fact, agroecology has always had an important social component. For instance, small-scale and family-based agro-ecological agriculture is based on the social activism of self-mobilised organizations and people aiming to stop neoliberal politics undermining the sustainability of rural ways of life (Santamaria-Guerra and González 2017). Moreover, agro-ecological practices contribute to social well-being and are accessible to farmers in emerging and developing countries. In general, agro-ecological movements and groups are grassroots and bottom up and they have the essential characteristics of communities of practice (COPs). They promote a wide range of farmer-driven innovations or peasant innovations and favour transfer of local and traditional knowledge among farmers (Cross and Ampt 2017;das Chagas Oliveira et al. 2012). These peasant networks (e.g. campesino a campesino) allow farmers to be empowered as agricultural innovators (Holt-Giménez 2010). One merit of agroecology is that of valuing local knowledge of farmers while mixing it with scientific knowledge (Meynard 2017) thus opening up the innovation arena to the contribution and input of farmers and local communities. For instance, Isgren and Ness (2017) show that in western Uganda, agroecology movement takes the form of a civil society network that links farmer groups and non-governmental organizations. Agroecology is also an integral component of 'social farming', which represents a social innovation in many rural areas (González et al. 2014). Agroecology, as a social innovation, helps creating a new dynamism in rural areas (among others, through creation of multi-stakeholder networks) and contributes to sustainable rural development (e.g. Snapp and Pound 2011). Agroecologyinspired social networks also increase cooperation between rural and urban social actors (Rover et al. 2017). According to Levain et al. (2015), operationalising ecological intensification in the context of socio-technical transition towards agroecology represents a 'system innovation'. Agroecology is an approach that integrates environmental, social and economic sustainability and aims to promote a sustainable design and management of agroecosystems. In fact, agro-ecological practices and processes produce environmental and social benefits along with economic value. Evidence shows that agroecological innovations make possible sustainable land use, assure an increase in income, and maintain family employment (das Chagas Oliveira et al. 2012). 
Moreover, agroecology plays an important role in supporting the livelihoods of smallholder farmers (Lanka et al. 2017). Therefore, agroecology seems appropriate to ensure a sustainable management of the resources involved in agricultural production, while promoting food security/sovereignty and protecting the rural landscape (Bocchi et al. 2012). Having said that, neither the sustainability of all peasant innovations nor the 'agro-ecological' character of all traditional practices can be taken for granted. Perhaps for this reason IPES-Food (2016) calls for a 'paradigm shift' not only from industrial agriculture but also from subsistence farming to diversified agroecological systems. However, Pant et al. (2014) note that current agro-ecological approaches have provided a limited understanding of transformations to sustainability in subsistence agrarian economies. Moreover, as Ely et al. (2016) show, the so-called 'indigenous innovation pathway' (which may also imply the use of transgenic crops) is not synonymous with the agro-ecological pathway. There is also the risk of convergence of agro-ecological niches with the dominant discourse around commercialization in agriculture (Isgren and Ness 2017), thus inducing a conventionalisation of the agro-ecological approach or simply its incorporation into the 'corporate-environmental food regime' (Levidow 2015). Therefore, a fundamental question remains whether agroecology will conform to the dominant agro-food regime or help to transform it (Levidow et al. 2014). This concern was also expressed by Giraldo and Rosset (2018) as follows: "there is an enormous risk that agroecology will be co-opted, institutionalized, colonized and stripped of its political content". Likewise, Pant (2016) used the term 'paradox of mainstreaming agroecology' to refer to an apparent contradiction between upscaling agro-ecological niche innovations and the concern about losing the core principles and values of agroecology in the mainstreaming process. As a result, it is fundamental to scrutinise the sustainability of the agro-ecological transition.

Agro-ecological transition: direction, distribution and diversity questions

The contribution of the agro-ecological transition to sustainable development is assessed against the 3 D's (Direction, Distribution and Diversity) of the STEPS Centre (Box 2). The direction of change advocated and promoted by agroecology is clearly towards a 'strong ecologisation' of agriculture (Duru et al. 2014), that is to say, a model of agriculture based on the ecosystem services provided by biodiversity (e.g. Peeters et al. 2013). According to IPES-Food (2016), a fundamental shift in the direction of agroecology is likely to be the only way to set agriculture and food systems on a sustainable footing. The increasing recognition that hunger is fundamentally a distributional question tied, among other things, to poverty and social exclusion (e.g. Sen 1981) has led to a growing understanding that increases in food production have to occur predominantly within developing countries if they are to have an impact on food security (e.g. Pretty et al. 2011). This is central to the discourse in agroecology and underpins collaboration with food sovereignty and right to food initiatives. Therefore, the issue of distribution (cf. distributive justice) is a central tenet of the agroecology movement (Anonymous 2014), as it aims to address social inequity and injustice. 
Agroecology is orientated towards food sovereignty, equitable resource distribution, and rights-based approaches (Giménez and Shattuck 2011). However, the question "[…] could food systems based around diversified agroecological farming succeed where current systems are failing, namely in reconciling concerns such as food security, environmental protection, nutritional adequacy and social equity" (IPES-Food 2016:6) is complex and does not admit simple answers.

Box 2: Questions regarding innovation for sustainable development. In a Manifesto on innovation, sustainability and development, the STEPS Centre (2010) called for a radical shift in how we think about and perform innovation in order to move towards innovation for sustainability and sustainable development. This means nothing less than a radical change in the whole innovation process (agenda setting, monitoring, evaluation, funding). For that, three arrays of questions related to direction, distribution and diversity, the 3 D's, should be addressed:
• Technical, social and political Directions for change: What is innovation for? Which kinds of innovation, along which pathways? And towards what goals?
• Distribution: Who is innovation for? Whose innovation counts? Who gains and who loses?
• Diversity: What kinds of innovation, and how many, do we need to address any particular challenge?

Moreover, Gómez et al. (2013) point out that even agroecology research and publications follow a 'colonial pattern', whereby industrialized countries lead publishing and conduct studies both in industrialized and non-industrialized countries. Another question linked with equity is that of legitimacy. In this regard too, agroecology still has a long and hard way to go to establish its legitimacy as an alternative and more sustainable agro-food system (e.g. Montenegro de Wit and Iles 2016). Experiences regarding agricultural transitions in many countries (e.g. the Netherlands) show the importance of nurturing and dealing with diversity as a part of successful transition governance (Grin 2012). Research on the agro-ecological transition likewise emphasises not only the diversity of innovations to be promoted but also the diversity of local actors to be coordinated, and hence the need for a holistic and transdisciplinary approach to the agro-ecological transition (Duru et al. 2014). It seems that agroecology can accommodate the diversity of farms (e.g. Blesh and Wolf 2014), and agro-ecological practices can be adapted and adopted by farmers in different biophysical and socio-cultural contexts, both in developed and developing countries. In fact, there are different connotations of agroecology even within the same country, e.g. the USA (Huang and McCullough 2013). Moreover, agroecology tries to address the root causes of standardization and specialization, which have decreased the diversity of scale, form and organization across the agro-food system (Hendrickson 2015), and strengthens linkages between biological and cultural diversity in landscapes (Plieninger et al. 2018). Sustainable intensification illustrates the variety of agendas and visions regarding sustainable agriculture. In general, sustainable intensification agendas promote a 'toolkit' of various options and techniques for reconciling higher productivity with environmental sustainability (Constance et al. 2016). Meanwhile, counter-hegemonic global food movements embrace agroecology. They promote a concept of 'eco-functional intensification' (Niggli et al. 2008). 
However, there are also some attempts to reconcile these two opposed agendas. For instance, Buckwell et al. (2014) consider agroecology one of the pathways to achieve sustainable intensification in Europe, together with biodynamic, organic, integrated, precision and conservation agriculture. Likewise, the European Innovation Partnership for Agricultural Productivity and Sustainability (EIP-AGRI) encompasses different approaches such as sustainable intensification, organic farming and low-external-input systems (EIP-AGRI 2013). Agroecology is also considered an agricultural intensification pathway in Sub-Saharan Africa. In fact, agroecology is considered a pathway to achieve 'ecological intensification' in agriculture (e.g. Bonny 2011; Doré et al. 2011; Levain et al. 2015). However, according to Levain et al. (2015), the concept of 'ecological intensification' relies upon 'semantic ambivalences and epistemic tensions'. The diversity of soil, climatic, economic, social and political conditions results in a large spectrum of pathways to sustainable intensification. The PROIntensAfrica project (a Horizon 2020 coordination and support action) identified four different pathways to sustainable intensification of the agri-food system in Africa: the conventional agriculture pathway, the eco-technical pathway, the agroecology pathway, and the organic agriculture pathway. According to PROIntensAfrica (2017), "The agroecology pathway is based on convergence of agronomy and ecology. Maximization of productivity or production are not the main goals of this pathway, rather the optimization of outputs while the farm systems are retained in a healthy state. Intensification in this sense is subordinated to social and economic development and autonomy of the production systems and of the farm". Agro-ecological methods, but not necessarily agro-ecological principles, were also adopted by some conventional agriculture actors, such as agrochemical companies and some governments, which have incorporated agro-ecological methods into 'sustainable intensification' agendas. For instance, in Europe, the nascent 'sustainable intensification' (neoproductivism) agenda selectively incorporates agro-ecological practices within a broader toolkit including biotech (Levidow 2015a). Such a move and process was criticized by many farmers' organisations, NGOs and social movements (ARC2020 and Friends of the Earth Europe 2015; Levidow 2015a; Levidow et al. 2014). The example of agroecology also shows the interrelations between the direction, distribution and diversity dimensions. Adoption of the agro-ecological transition as a direction of change in the agriculture sector also implies changes in the distribution of benefits, risks and costs. Appraisal of the direction of context-specific innovation in agroecology also takes into account the equal distribution of benefits, one of the objectives of the agroecology movement being to address social inequity and injustice issues by changing power structures and improving the whole governance of the agro-food system. Furthermore, the agroecology movement takes a serious view of direction and distribution questions and therefore deliberately pursues diverse innovation pathways to accommodate different needs and aspirations, including those of marginalised and poor groups such as small-scale farmers in the Global South. This, in turn, implies that the agro-ecological approach pays attention not only to technical/practice-related innovations but also to social and organisational ones. 
Nevertheless, there are also some tensions between agroecology and innovation, such as those found by Foran et al. (2014), as well as synergistic interactions between agroecology and agricultural innovation systems. Although the agroecology movement has succeeded to a large extent in finding a synthesis among, and combining, the dimensions of direction, distribution and diversity of innovation, some tension remains with respect to these issues. Agroecology is considered at the same time a science, a movement and a practice (Wezel et al. 2011), but there may be some tension among these three aspirations of agroecology. In fact, agroecology as a science is promoting innovations in agroecosystem management, whereas the agro-ecological movement is mainly promoting traditional agricultural practices, considered by some as farmer-led innovations. Tension persists not only among the three aspirations and dimensions of agroecology but also within the same dimension (e.g. science/research). According to Levidow et al. (2014), the tension between 'conform versus transform' roles (conforming to the dominant agro-food regime or transforming it) is evident in three areas of European agro-ecological research: participatory plant breeding, farm-level agro-ecosystem development, and short food-supply chains. This also concerns which source of agro-ecological innovation should be prioritized, since agroecology tries to avoid establishing any hierarchy or value system between innovation by farmers and innovation from scientific research, and it even promotes stronger collaboration between scientists and farmers (cf. participatory research and extension). Nevertheless, this tension in agro-ecological research is emblematic of the difficulty of maintaining harmony between agroecology as a science, on the one hand, and agroecology as a social movement and practice, on the other. Similarly, such a tension also exists between agroecology as a social movement and 'institutionality' (Giraldo and Rosset 2018). While 'institutionalisation' can be considered an indicator of agroecology's success and its scaling up, it might also strip the agro-ecological movement of its freedom of manoeuvre and action, as well as of its label as an 'alternative' movement.

Conclusions

Agroecology is a promising pathway of transition to sustainable agriculture. In fact, agro-ecological transformation holds the potential to contribute to a genuine agricultural sustainability transition. Moreover, the agro-ecological transition can be considered a sustainable innovation because it is ecological (agro-ecological practices are harmonious with ecosystems), green (based on natural resource use with positive or neutral environmental impact), open (agro-ecological practices can be used by anybody, they are not patented, and they are accessible to farmers in developing countries), and social (it contributes to social well-being in rural areas, and agro-ecological movements are bottom-up/grassroots and inclusive). Therefore, agroecology embraces all dimensions of sustainability (environmental, economic, social/cultural/ethical) and pays attention to the ecological integrity of agro-ecosystems alongside the diversity of social values, encouraging plural transformation pathways, promoting a fairer and wider distribution of benefits, and fostering inclusive and participatory processes. 
Nevertheless, neither the sustainability of all peasant and farmer-driven innovations nor the 'agro-ecological' character of all traditional and indigenous practices can be taken for granted. At the same time, the contribution of the agro-ecological transition to sustainable development is clear: the direction of change promoted by agroecology is clearly towards more ecology and, consequently, more sustainability in agriculture and food systems. However, one should be aware that agroecology should not be considered the only pathway of transition towards sustainability, and the diversity of options should be defended. In other words, agroecology should not be transformed into a new 'regime'. Moreover, while the benefits of agroecology are widely and, to a large extent, equally distributed to small-scale farmers in the Global South, this does not mean that there are no losers in the agro-ecological transition, as in any change process. Last but not least, the relationship among the three aspirations of agroecology (science, movement and practice) needs further elaboration in order to preserve the transformative potential of agroecology, but also its 'image' as a 'sustainable innovation', i.e. one of the few approaches that harmoniously combines innovation and sustainability in agriculture and food systems and promotes agro-food sustainability transitions. Conflict of interest: The author declares no conflict of interest.
8,251.6
2019-01-01T00:00:00.000
[ "Agricultural And Food Sciences", "Economics" ]
Spraying dynamics in continuous wave laser printing of conductive inks

Laser-induced forward transfer (LIFT), though usually associated with pulsed lasers, has been recently shown to be feasible for printing liquid inks with continuous wave (CW) lasers. This is remarkable not only because of the advantages that the new approach presents in terms of cost, but also because of the surprising transfer dynamics associated with it. In this work we carry out a study of CW-LIFT aimed at understanding the new transfer dynamics and its correlation with the printing outcomes. The CW-LIFT of lines of Ag ink at different laser powers and scan speeds revealed a range of conditions that allowed printing conductive lines with good electrical properties. A fast-imaging study showed that liquid ejection corresponds to a spraying behavior completely different from the jetting characteristic of pulsed LIFT. We attribute the spray to pool-boiling in the donor film, in which bursting bubbles are responsible for liquid ejection in the form of projected droplets. The droplet motion is then modeled as the free fall of rigid spheres in a viscous medium, in good agreement with experimental observations. Finally, thermo-capillary flow in the donor film allows understanding the evolution of the morphology of the printed lines with laser power and scan speed.

Results and Discussion

Two main studies were carried out. The first one consisted of the analysis of the influence of the main process parameters on the morphology of the printed features, which was performed through the CW-LIFT printing of straight lines of Ag-NP ink at different laser powers and scan speeds and their further characterization. In the second one we analyzed the dynamics of liquid ejection during CW-LIFT through a fast-photography imaging study of the process at the same printing conditions used in the former experiment. This allowed us to unveil a transfer dynamics completely different from that of traditional pulsed LIFT, and to correlate it with the printing results. Line printing. Straight lines of Ag-NP ink were printed following the procedure sketched in Fig. 1b at laser powers ranging between 0.5 and 3.6 W and scan speeds from 150 to 600 mm/s; optical microscopy images of the obtained results are presented in Fig. 2a. In most of the analyzed conditions, continuous lines as long as 2.5 cm are observed, with no bulging in any case. At 0.5 W (the minimum laser power, not shown in Fig. 2a), however, no continuous lines were obtained at any scan speed; only very small droplets arranged along the laser scan direction could be appreciated, similar to those observed at 3.6 W and 150 mm/s. The absence of bulging is relevant, since bulging is a major source of undesired short circuits between adjacent lines in printed devices. This phenomenon is a common problem in direct writing techniques like inkjet printing. It appears as a result of the printing mechanism, which consists of forming the lines through the overlap of successive droplets, and can be very detrimental in printed electronics applications 41,42 . CW-LIFT is free from bulging since printing occurs through the continuous ejection of material instead of through the accumulation of sequentially transferred droplets. Nevertheless, the definition of the printed lines is not perfect; some satellite droplets (around 20 µm in diameter) appear along the edges. 
However, as proved in our previous study 40 through the fabrication of a sensor containing a set of interdigitated electrodes, the satellites are not necessarily detrimental for the functionality of the resulting device: irrespective of the substrate used in that case (glass, polyimide, paper) no short circuit between adjacent electrodes was ever observed. Many commercial printed devices (sensors, RFID tags, etc.) have dimensions that do not require high levels of line definition 1,2,6,7 and are not essentially different from the proof-of-concept provided in ref. 40 . Three different types of morphology can be recognized depending on the printing conditions: bright and uniform lines, lines with a darkened center, and almost invisible traces. This last morphology (as in Fig. 2a at 3.6 W and 150 mm/s) corresponds to the practical absence of transferred material, though some scattered droplets are found in the receiving substrate under close inspection. At a scan speed of 150 mm/s in Fig. 2a, the three morphologies are present, with a clear evolution from bright/uniform to invisible traces as the laser power increases. The same evolution seems apparent at the other scan speeds, with the transition from one morphology to another occurring at higher powers as the scan speed increases. The line width ranges between 120 μm and 240 μm along the entire set of analyzed conditions; such feature sizes are similar to those typical of screen printed devices (RFID tags, sensors) 11,43 . Nevertheless, we can expect that the spatial resolution in CW-LIFT could be increased by tighter focusing of the laser beam on the donor film. SEM images of the central region of each line were obtained, and a representative selection is shown in Fig. 2b. At low laser powers (1.0 and 2.0 W), Ag-NPs homogeneously cover the whole line and only some small aggregates of few NPs are observed. As the laser power increases, the aggregates tend to become larger and more abundant. This phenomenon has been previously observed in laser sintering experiments 44,45 , and attributed to the melting and coalescence of Ag-NPs during irradiation. Also, the NPs concentration in the line decreases with laser power, leading to completely scattered NPs and even some voids at the highest powers (2.6 and 3.6 W). Using confocal microscopy, 3D profiles of the lines were obtained, and a selection corresponding to the same conditions as those presented in the former paragraph is presented in Fig. 2c. At low laser powers (1.0 and 2.0 W) the transverse line profile is rather uniform, with a nearly rectangular cross section, but as the power increases material tends to accumulate in the line edges. At the highest powers (2.6 and 3.6 W), the center of the lines appears almost completely depleted (especially at speeds of 150 and 300 mm/s). The average line thicknesses were obtained from the 3D profiles, calculated as the ratio between the volume and the surface area provided by the microscope software (SensoMap 5.1). The corresponding results are plotted in Fig. 3a versus laser power for all the analyzed scan speeds. It is observed that line thickness tends to decrease with laser power. A maximum average thickness of 90 nm is obtained at 600 mm/s and 1.0 W, a value similar to that of lines deposited through inkjet-printing 14 . Also, in general terms line thickness seems to increase with scan speed at a fixed laser power. 
These data, together with the SEM images, can be related with the line uniformity and brightness distribution observed in the optical microscopy images. For instance, in the case of the laser scan speed of 600 mm/s, at low powers (1.0 and 2.0 W) the relatively uniform profile of the lines (Fig. 2c), as well as the rather homogeneous distribution of NPs (Fig. 2b), are consistent with the bright appearance of the lines in Fig. 2a. At the highest power (3.6 W), however, the lower concentration of Ag-NPs (plus scattered groups of aggregates, Fig. 2c) results in a less compact line, especially in the center, which accounts for its darker appearance. Similar agreements are found for the other laser scan speeds (Fig. 2a). Electrical resistance measurements were carried out after curing in order to determine the optimum printing conditions for the aimed electronic applications. Sheet resistance versus laser power is plotted in Fig. 3b for all the analyzed scan speeds. It is observed that sheet resistance increases with laser power and decreases with scan speed. Remarkably, low sheet resistances, on the order of 1 Ω/□, are obtained at 600 mm/s, similar to those characteristic of inkjet printing 14 . At the scan speeds of 450 and 300 mm/s, sheet resistance increases dramatically above the laser power corresponding to the transition between bright/uniform lines and lines with dark center. At the lowest scan speed (150 mm/s) the printed lines are practically non-conductive. The corresponding resistivity was estimated from sheet resistance and average thickness measurements (assuming a uniform rectangular cross-section for all the lines). A minimum value of 3.5 μΩ·cm was obtained, 25% larger than the nominal value given by the provider, and approximately twice the value of bulk silver. The evolution of resistivity with laser power is similar to that of sheet resistance (Fig. 3c), but its relative increase at the laser power corresponding to the transition mentioned above is lower. In consequence, the increase observed in sheet resistance is not only due to the decrease in line thickness with laser power, a merely geometric effect, but it is also contributed by the increase in resistivity. This, in turn, can be attributed to the NP concentration in the line, which decreases with laser power, especially in the central part (Fig. 2b), the one that starts appearing darker at the transition. The influence of NP concentration on the obtained electrical resistivity can be due to the poor electrical contact among NPs at decreased concentrations, as well as to an overestimation of the filled volume fraction in the presence of voids, which translates into a higher apparent resistivity. A similar behavior has been reported for printed Ag lines showing a 'coffee-ring effect' 46 . From all these results it can be concluded that the optimal parameters for printing circuits correspond to lines deposited at high scan speeds and low laser powers, making 600 mm/s and 1.0 W the best scenario among the analyzed conditions, which is in good agreement with our previous study 40 . In these conditions, rather uniform and continuous lines with low sheet resistance are obtained. These results are especially attractive from an industrial point of view, since not only do they prove the feasibility of CW lasers, but also allow operating with low powers and high speeds. At higher speeds, and with tighter focusing of the laser beam, even better results could be expected. 
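As a reminder of how the resistivity values above follow from the measured quantities, this small sketch (ours, with purely illustrative numbers rather than the measured data set) applies the relation used in the text, resistivity = sheet resistance × thickness, under the stated assumption of a uniform rectangular cross-section:

```python
# Resistivity of a printed line from its sheet resistance and average thickness,
# rho = R_sheet * t (uniform rectangular cross-section assumed, as in the text).
# The numbers below are illustrative, not the measured data set.
def resistivity_uohm_cm(r_sheet_ohm_sq, thickness_m):
    return r_sheet_ohm_sq * thickness_m * 1e8   # Ohm*m -> uOhm*cm

for r_s, t in [(1.0, 90e-9), (0.5, 70e-9)]:
    print(f"R_s = {r_s} Ohm/sq, t = {t*1e9:.0f} nm -> "
          f"rho = {resistivity_uohm_cm(r_s, t):.1f} uOhm*cm")
# For comparison, bulk silver is about 1.6 uOhm*cm.
```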
Although CW-LIFT had already been proved feasible from solid donor films 47 , the extension to liquids is remarkable, since it considerably broadens the range of materials printable with the technique, including conductive inks like the ones analyzed in this work, which are ubiquitous in printed electronics circuits. Transfer dynamics. Movies of the transfer process at different laser scan speeds and output powers were acquired using a fast-imaging camera, and a representative snapshot of an ejection event corresponding to each analyzed condition is presented in Fig. 4 (see the Supporting Information for the full videos). The donor film is always located at the top of the images, and the laser beam is scanned from right to left. The images were acquired without the receiver substrate to clearly visualize the ejection behavior. It can be observed how, as the laser beam scans the donor film, ink is ejected as a myriad of tiny droplets that propagate in air like a spray. Three different types of ejection modes can be identified. The first one corresponds to a curtain of atomized ink in air (found at the lowest laser power for any scan speed, and at all laser powers for 300 mm/s; it is also barely visible at 150 mm/s and 2.0 W). The second one resembles a horsetail (at scan speeds of 450 and 600 mm/s for laser powers from 2.0 W onwards). Finally, the third one corresponds to the absence of visible ejection of material (scan speed of 150 mm/s at 2.6 and 3.6 W laser power).

Table 1. Estimated values of the liquid ejection speed, assuming it to be constant and using the triangle method. The front position was not well enough defined in some cases to provide a reasonably good estimate. In general, the ejection speed increases with laser scan speed, but no clear trend with laser power is observed.

Remarkably, the observed behaviors are completely different from those of pulsed LIFT, in which transfer was achieved through the deposit of single droplets by a jet of ink. Instead, in CW-LIFT ink is transferred in a continuous manner as the laser beam scans the donor film. The observed spraying dynamics can be attributed to pool-boiling in the donor film when it is irradiated by the laser beam 40 . In contrast to the typical picture of conventional pulsed LIFT, where a single cavitation bubble is generated at the donor substrate-film interface, in pool-boiling a myriad of bubbles would be produced in the irradiated area of the donor film, in the same way as in water boiling in a kettle. The longer irradiation times in CW-LIFT compared to those of pulsed LIFT are consistent with this hypothesis. Indeed, the time t_i that any portion of the donor film is irradiated by the laser beam can be estimated as t_i = D/v, where D is the diameter of the laser beam on the sample and v the laser scan speed. For the scan speeds analyzed in this work (150-600 mm/s) these times range between 700 and 170 µs, much longer than the few-ns irradiation times characteristic of pulsed LIFT. Furthermore, the corresponding thermal penetration depths are estimated to range between 25 and 50 µm 40 , which indicates that practically the entire donor ink thickness is heated up by the laser beam. The hypothesis of pool-boiling is also supported by the relationship between these irradiation times (t_i) and estimates of the characteristic times for boiling inception in the donor film. 
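As a quick numerical check of the dwell-time estimate t_i = D/v just given, the following minimal Python sketch (ours, not code from the original study) evaluates it for the roughly 100 µm beam diameter quoted in the Methods and the scan speeds used here:

```python
# Dwell time t_i = D / v of the CW laser spot over any point of the donor film.
# Assumes the ~100 um beam diameter quoted in the Methods section.
beam_diameter = 100e-6                      # m
scan_speeds = [0.150, 0.300, 0.450, 0.600]  # m/s (150-600 mm/s)

for v in scan_speeds:
    t_i = beam_diameter / v                 # s
    print(f"v = {v*1e3:.0f} mm/s  ->  t_i = {t_i*1e6:.0f} us")
# Gives roughly 667, 333, 222 and 167 us, i.e. the 700-170 us range quoted in the text,
# several orders of magnitude longer than the ~ns pulse durations of conventional LIFT.
```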
Assuming that the film is uniformly heated and ignoring thermal losses, the time required to trigger boiling (t_B) can be estimated from an energy balance in terms of ρ_m, the ink density; l, the donor film thickness; c_p, the specific heat of the ink; T_B, the boiling temperature; T_0, room temperature; and P, the laser power. For the ink and irradiation conditions of this work, t_B ranges between 2 and 8 μs 40 . Furthermore, the time required for a bubble to nucleate in these conditions (t_N) can be estimated to be around 1 μs 40 . Both of these times (t_B and t_N) are much shorter than the irradiation times t_i, so that there is enough time to reach the boiling temperature and allow a large number of bubbles to nucleate during the scanning of the donor film. The burst of the bubbles generated during pool-boiling would result in the ejection of material, which would in turn account for the spraying behavior. When the bubbles reach the liquid free surface, ink can be ejected through two successive mechanisms 48,49 . In the first place, the bubble burst proceeds through the breakup of the bubble dome wall into multiple fragments, which leads to the release of extremely small droplets. Next, the void left in the liquid free surface by each burst bubble can also lead to a second ejection of droplets. These droplets would arise from the tiny jet that results from the streams converging at the bottom of the void as it is replenished. The perturbation of the liquid free surface caused by multiple consecutive bursts would result in the ejection of droplets in multiple directions, which would ultimately account for the observed spray. From the images in Fig. 4 we can roughly estimate the liquid ejection speed in the following way: assuming a constant speed for the ejected droplets (at least during the early stages of emission), if we draw a right triangle whose hypotenuse is tangent to the spray front (Fig. 4, sketch displayed at 2.6 W, 450 mm/s), the speed can be inferred from the x and z measurements through the laser scan speed. The values obtained by this method are presented in Table 1 (in some cases the front is not clear enough to carry out a proper measurement). It can be observed that the ejection speed ranges between 0.1 and 1.2 m/s, with a trend to increase with scan speed for each laser power, and with no clear trend versus laser power for each scan speed. The series of movies at a laser scan speed of 300 mm/s allows a more detailed analysis of the ejection speed than that performed in the former paragraph. It can be observed that liquid emission proceeds through a series of bursts that appear in the images as rather well-defined lobes, especially visible at low powers (1.0 and 2.0 W); lobes can also be appreciated at the other scan speeds for a laser power of 1.0 W, though they are not so apparent. As the power increases, the lobes evolve into a continuous and shorter curtain with a more uniform front. The evolution of the lobes can be easily tracked from the series of snapshots that constitute the movie (Supporting Information), since their shape does not change substantially as they propagate. Therefore, their vertical front position z with respect to the donor film surface can be easily measured at different times t, and from these measurements the z(t) dependence can be obtained. A couple of lobes were selected for each laser power (Fig. 5a) and their vertical front position was plotted versus time in Fig. 5b. 
In the case of 3.6 W it was not possible to accurately identify and track different lobes, so there is no plot for that power. In all the cases it can be observed how the droplets in the spray were initially ejected at a speed on the order of 1 m/s and then very quickly slowed down to a terminal velocity on the order of a few mm/s. The initial speeds obtained with this second method are between 3 and 4 times higher than the estimates presented in Table 1, measured with the less accurate method described in the preceding paragraph (which at best provides an average value for the early stages of ejection). Unfortunately, for the other laser scan speeds it is not possible to track the evolution of a well-defined expansion front allowing a measurement as accurate as that inferred from the plots in Fig. 5b. Nevertheless, the estimates of Table 1 at least provide an acceptable order of magnitude for the ejection speed at early times. We can model the dynamic behavior of an ejected droplet in the spray front as that of a rigid sphere moving in air (and therefore subject to both the force of gravity and a viscous frictional force) and not interacting with the other droplets around it. An average speed of around 0.3 m/s was assumed for the flying droplet (Table 1), and the droplet diameter used in the calculation of the Reynolds number was chosen to be consistent with the size of the satellites at the border of the lines (around 20 µm); for a hydrophilic ink like the one used in the experiments, the diameter of a sessile droplet is between 2 and 4 times that of a spherical droplet with the same volume. At the obtained Reynolds numbers, Stokes' law reasonably applies, and thus the viscous frictional force can be assumed to be proportional to the droplet speed, F = k v with k = 3πηd, where η is the dynamic viscosity of air, d is the diameter of the droplet, and v its instantaneous speed. By solving the equation of motion in the z direction, the vertical position of the droplets in the front would be given by z(t) = v_z∞ (t − t_0) + τ (v_z0 − v_z∞) [1 − exp(−(t − t_0)/τ)], where v_z0 is the z component of the ejection speed, v_z∞ is the z component of the terminal velocity and τ is the damping time, defined as τ = m/k (where m is the droplet mass) and related to v_z∞ by v_z∞ = gτ, where g is the acceleration of gravity. For fitting purposes, an initial time t_0 was included in the equation, since the exact ejection time could not be determined with enough precision from the images. The parameters obtained from the fits are given in Table 2. As observed in Fig. 5b, the function given by equation 4 fits the experimental data points fairly well. The main discrepancy is found for a laser power of 2.6 W at early times, which results in an overestimation of the initial speed. This is clear in the comparison between images and plots in Fig. 5; for example, the values of v_z0 and v_z∞ for lobes 2 and 6 are similar, but the same z is attained at very different times. The slight departure of the fitted lines from the experimental plots can be attributed to the facts that 1) a flying droplet is not perfectly spherical and 2) in the very early times of ejection the droplet speed is high enough for the Reynolds number to become close to 1, the limit of validity of Stokes' law. In any case, the proposed model seems to describe the observed spraying dynamics quite well. 
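To make the droplet-motion model concrete, here is a minimal Python sketch (our own illustration, not code from the original study) that evaluates this damped free-fall trajectory for parameter values of the order of those reported (ejection speed around 1 m/s, terminal velocity of a few mm/s); the specific numbers are illustrative, not the fitted values of Table 2:

```python
import numpy as np

def z_front(t, v_z0, v_zinf, tau, t0=0.0):
    """Vertical front position of a droplet ejected at t0 with speed v_z0 and
    relaxing, with damping time tau (Stokes drag), to the terminal velocity v_zinf."""
    dt = t - t0
    return v_zinf * dt + tau * (v_z0 - v_zinf) * (1.0 - np.exp(-dt / tau))

# Illustrative values of the order of those reported in the text:
v_z0 = 1.0               # m/s, initial ejection speed
v_zinf = 5e-3            # m/s, terminal velocity (a few mm/s)
tau = v_zinf / 9.81      # s, damping time, from v_zinf = g * tau

t = np.linspace(0.0, 10e-3, 6)                 # first 10 ms after ejection
print(z_front(t, v_z0, v_zinf, tau) * 1e3)     # front position in mm
# The same function can be passed to scipy.optimize.curve_fit to extract
# v_z0, v_zinf and tau from measured z(t) data, as done for the lobes in Fig. 5b.
```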
Assuming that the droplet motion is indeed governed by Stokes' law, the differences observed between lobes could be attributed to differences in droplet size, which would affect the k and m values. In fact, from the previous calculation we can determine the average diameter of the ejected droplets: balancing gravity against the Stokes drag relates the diameter to the terminal velocity v_z∞ as d = [18 η v_z∞ /(ρ g)]^{1/2}, with ρ the ink density. The diameters obtained in this way (Table 2) are in good agreement with the observed diameters of the sessile droplets at the borders of the lines, which provides additional support for the proposed model as a good description of the ejection dynamics of the sprayed droplets during CW-LIFT.

Table 2. Ejection speed (v_z0), terminal velocity (v_z∞), damping time (τ) and estimated droplet diameter (d) obtained from the fits of the plots in Fig. 5b, listed per lobe number and laser power (W), with v_z0 in m/s, v_z∞ in mm/s, τ in ms and d in μm.

The origin of the two different ejection modes, curtain of atomized ink and horsetail, seems difficult to determine. The experiment addressing this aim should encompass a close-up inspection of the donor film during irradiation, ideally one that allowed visualizing the onset of boiling in the bulk of the ink (hardly feasible in an opaque liquid like the silver ink used here). Nevertheless, and even if only as a tentative hypothesis, the two ejection modes might be correlated with two different regimes characteristic of pool-boiling. The continuous spray corresponding to the horsetail mode seems compatible with the nucleate boiling regime, in which boiling proceeds through the formation of a large number of individual bubbles continuously bursting at the liquid surface. On the other hand, the presence of lobes in the curtain mode suggests that this mode could be associated with the transition boiling regime; in this regime individual bubbles coalesce into larger blobs whose intermittent bursts could result in the observed lobes. However, aside from the mere qualitative correspondence outlined here, it seems difficult to correlate the proposed boiling regimes with the observed evolution of the ejection behavior with both laser power and scan speed. In addition to boiling, other effects (like the thermo-capillary flow described in the next section) contribute to the outcome of each ejection event, thus hindering the sought correlation. Correlation between printed lines and transfer dynamics. The spraying dynamics seems at first quite incompatible with the printing of continuous lines like the ones obtained in the experiments. In the spray, the ejected droplets are scattered in all directions, so that one would expect to find a wide and diffuse cloud of droplets on the receiver substrate instead of a rather well-defined line. However, the gap between donor and receiver is small enough (Fig. 4, sketch displayed at 3.6 W, 150 mm/s) to allow collecting most of the ejected material before there is substantial spreading. Nevertheless, some residual spread cannot be avoided, as is evident from the presence of the tiny satellite droplets at the line borders. Also due to the small gap, it can be understood that completely different ejection modes (curtain, horsetail) can lead to similar printed lines; see, for example, the lines and associated ejection dynamics corresponding to 1.0 W, 300 mm/s (curtain) and 2.0 W, 450 mm/s (horsetail) in Fig. 4. 
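As a quick sanity check on the droplet-size estimate above, the short sketch below (ours; the air viscosity is a typical textbook value and the ink density is the nominal one from the Methods) inverts the Stokes terminal velocity for the droplet diameter:

```python
import math

eta_air = 1.8e-5    # Pa*s, dynamic viscosity of air (typical value, assumed)
rho_ink = 1300.0    # kg/m^3, nominal ink density from the Methods section
g = 9.81            # m/s^2

def droplet_diameter(v_terminal):
    """Diameter of a rigid sphere whose Stokes terminal velocity in air is v_terminal:
    rho*(pi d^3/6)*g = 3*pi*eta*d*v_terminal  =>  d = sqrt(18*eta*v_terminal/(rho*g))."""
    return math.sqrt(18.0 * eta_air * v_terminal / (rho_ink * g))

for v_inf in (2e-3, 5e-3, 10e-3):    # terminal velocities of a few mm/s, as observed
    print(f"v_inf = {v_inf*1e3:.0f} mm/s -> d = {droplet_diameter(v_inf)*1e6:.1f} um")
# Gives diameters of roughly 7-16 um, whose sessile footprint (2-4 times wider than the
# flying sphere) is of the order of the ~20 um satellites seen at the line edges.
```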
Thus, there seems to be no correlation between these two types of ejection modes and the two line morphologies obtained when there is a significant amount of transferred material (uniform lines and lines with a darkened center); naturally, the almost invisible traces correspond to the absence of visible ejection in the fast-photography movies. The evolution of the printed line morphology and of the ejection dynamics with laser power and scan speed is puzzling. We observe that the amount of transferred ink decreases when the laser power increases and the scan speed decreases (Fig. 2a). In both situations the energy per unit area accumulated during the scan increases, so that one would expect an increase in the amount of deposited material, contrary to what actually occurs. Furthermore, in this evolution ink depletion starts from the center of the lines towards the edges (lines with darkened center), until no material is found on the receiver substrate (3.6 W, 150 mm/s). A similar trend is observed in the fast-photography images (Fig. 4): less material seems to be ejected as the accumulated energy per unit area increases, though for 450 and 600 mm/s the evolution is not so clear. The evolution of the printed line morphology with both laser power and scan speed could be easily explained by assuming that the deposited ink is vaporized again by the scanning laser beam. This would be consistent with the long confocal parameter (2.2 mm) of our laser system, and with the Gaussian distribution of our laser beam, which would account for the darkened center of the lines. However, this hypothesis is not compatible with the evolution observed during material ejection: at high laser powers and low scan speeds there is practically no material transfer. Therefore, another explanation is required. To provide such an explanation, we recorded a high-angle movie of the evolution of the donor film surface as the laser beam was scanned along it. A selection of frames obtained at conditions leading to the three types of line morphology is presented in Fig. 6. The selection displays the transfer of two parallel lines at different times during the scanning process. It is observed that the laser scan depletes ink in the lines, and that ink depletion is more prominent as the accumulated energy per unit area increases; furthermore, it takes longer for the line to replenish. The depletion, however, cannot be attributed completely to the ejection of ink, since in the conditions where it is most prominent (3.6 W, 150 mm/s) no material ejection is observed in the fast-photography images (Fig. 4). It can, instead, be attributed to thermo-capillary convection, a phenomenon commonly observed in the sintering of inks using CW laser irradiation 44,50,51 . As the Gaussian laser beam scans the donor film it locally heats up the ink, creating a temperature gradient between the irradiated region and the rest of the film. This gradient results in an outward flow of ink away from the beam spot. The thermo-capillary convection hypothesis can also explain the observed evolution from one type of line morphology to another. At 3.6 W and 150 mm/s thermo-capillary convection is strongest, which results in ink depletion over the whole irradiated area, so that no ink is ejected and, accordingly, no material is deposited (Fig. 4). 
When the scan speed is increased to 450 mm/s at the same laser power (lower accumulated energy per unit area), ink depletion is mitigated and occurs mainly in the center of the line (peak of the Gaussian distribution). This can account for the line with a darkened center obtained in these conditions: ink is mostly ejected from the borders of the line. This effect could even be enhanced by similar convection in the deposited ink induced by the laser beam, whose long confocal parameter would allow it to reach the receiver substrate 44 . Finally, at the mildest conditions (1.0 W, 150 mm/s) we observe very little depletion (Fig. 6), if any, which indicates a weak contribution of thermo-capillary convection. In the practical absence of ink depletion, all the irradiated area contributes to pool-boiling, which results in the obtained uniform line. A simple calculation can help to test the consistency of the thermo-capillary flow hypothesis. The driving force in any Marangoni flow is the surface tension gradient which, assuming it to be constant, can be approximated by Δσ/(D/2) = 2Δσ/D, where Δσ corresponds to the surface tension difference between the center of the laser-scanned line and its border (D is the diameter of the laser beam). This surface tension gradient must balance the tangential stress τ, which in a flow parallel to the liquid free surface with a velocity profile u(z) (with z the Cartesian coordinate perpendicular to the liquid free surface) is given by τ = μ du/dz ≈ μU/l (equation 6), where μ is the dynamic viscosity of the liquid and U the flow velocity at the liquid surface (l is the thickness of the donor film). The approximation in equation 6 assumes that the ink is at rest at the donor film-substrate interface. The combination of equation 6 and the surface tension gradient estimate allows obtaining the capillary flow velocity U ≈ 2Δσ l/(μD) (equation 7). In thermo-capillary convection the surface tension gradient is generated by a temperature gradient, in our case the temperature difference between the center of the laser-scanned line and its edge. For our water-based ink, we can take Δσ to be around 13 mN/m 52 , assuming that the center of the line is at the boiling temperature of water and that the edge remains at room temperature, 300 K. Using the parameters provided in the Methods section, U turns out to be on the order of 1 m/s. We can then obtain a characteristic time t_c = D/(2U) for this process of around 50 μs, smaller than the irradiation times t_i found in the previous section, which makes it plausible that thermo-capillary flow is responsible for the ink depletion observed in the laser-scanned lines (Fig. 6). It might be tempting to compare the characteristic time t_c with the boiling trigger time t_B obtained in the preceding section in order to better test the thermo-capillary flow hypothesis; in fact, the presence or absence (total or partial) of material in the printed lines will depend on the competition between boiling and capillary convection. However, such a comparison would not be completely correct. First, it is important to stress that both times, t_c and t_B, are only rough estimates, too rough to withstand an accurate comparative analysis. Second, t_B is the time required to start bubble nucleation in the donor film, but the onset of massive boiling in the liquid bulk (needed for droplet ejection) does not occur until after a substantially longer delay, which is hard to estimate. Third, t_c is the time required for the convection front to reach the border of the line, but partial ink depletion can start earlier than that. 
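The following short sketch (ours; the parameter values are those quoted in the text and Methods) reproduces this order-of-magnitude estimate of the Marangoni flow velocity and of the characteristic depletion time:

```python
# Order-of-magnitude check of the thermo-capillary (Marangoni) flow velocity,
# U ~ 2*d_sigma*l/(mu*D), and of the characteristic depletion time t_c = D/(2U).
d_sigma = 13e-3      # N/m, surface tension difference (boiling vs ~300 K water)
l = 30e-6            # m, donor film thickness
mu = 8e-3            # Pa*s, ink dynamic viscosity
D = 100e-6           # m, laser beam diameter on the donor film

U = 2.0 * d_sigma * l / (mu * D)   # m/s
t_c = D / (2.0 * U)                # s

print(f"U   ~ {U:.2f} m/s")        # ~1 m/s, as quoted in the text
print(f"t_c ~ {t_c*1e6:.0f} us")   # ~50 us, shorter than the 170-700 us dwell times
```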
In fact, according to the result of equation 7, a stripe around 20 µm wide in the center of the line could be depleted of ink due to thermo-capillary flow in only 10 µs. A final remark is still in order. The estimate of equation 7 is independent of both the laser power and the laser scan speed, but the experimental results are clearly not. This is a consequence of using a linear surface tension profile in the calculations: any other profile (Gaussian, for example) has a non-constant gradient, and would therefore be dependent on the irradiation conditions.

Conclusions

The study of the influence of laser power and scan speed on the morphology of the CW-LIFT printed outcomes has revealed a range of conditions where the technique is feasible for printing continuous lines of conductive inks free from bulging. The study has also brought to light the surprising evolution of the amount of transferred material versus printing conditions: above a certain threshold, it decreases with increasing laser power and decreasing scan speed. The optimum printing conditions correspond to relatively low laser powers and high scan speeds, the same conditions at which the lowest sheet resistances are obtained. The fast-photography investigation of the liquid ejection process has shown that transfer proceeds through a spraying behavior that clearly contrasts with the jetting dynamics typical of pulsed LIFT. The images have revealed two different modes of spray depending on the irradiation conditions: a curtain of atomized ink at low scan speeds, and a horsetail-shaped spray predominantly at high scan speeds. The estimated time scales involved in the process are consistent with the hypothesis of pool-boiling in the donor film as the mechanism responsible for the observed spray. The motion of the sprayed droplets can be well described with the simple model of a rigid sphere moving in air subject to the action of gravity and viscous drag. The assessment of the droplet terminal velocity allows their size to be estimated, which is in good agreement with that observed in the deposits. In spite of the broad angle of ejection characteristic of the spraying dynamics, it is possible to attain rather good definition in the printed lines by setting a small gap between donor and receiver substrates. Nevertheless, some tiny satellites are always present at the edges of the lines. The combined analysis of the transfer dynamics with the morphology of the printed lines has provided a plausible explanation for the surprising evolution of their thickness with laser power and scan speed: thermo-capillary flow in the donor film results in increased ink depletion in the scanned line at irradiation conditions of high accumulated energy per unit area.

Methods

Laser direct-writing system. All the experiments were carried out using an Nd:YAG laser (Baasel Lasertech, LBI-6000) working at the fundamental wavelength (1064 nm) in CW mode. The beam had a Gaussian intensity profile with a maximum output power of 5 W. The laser was equipped with a set of two galvanometric mirrors which allowed the laser beam to be scanned along the sample at speeds ranging from 1 to 600 mm/s. After the galvo head, an f-theta lens (100 mm) focused the laser beam on the sample plane. The resulting beam diameter on the donor film was around 100 μm. Sample preparation and printing. 
The transferred ink was a commercial silver nanoparticle water-based ink (Metalon ® JS-B25HV) commonly used in inkjet printing applications, with a viscosity of 8 mPa·s, a density of 1.3 g/cm³ and a solid content of around 25% by weight (particle size smaller than 60 nm). The ink was spread over the donor substrate, a glass microscope slide, using a blade coater, ensuring a thickness of about 30 μm (estimated through weight measurements); the receiver substrate was also a conventional microscope glass slide. A gap of 150 μm between donor and receiver substrates was set through spacers in order to ensure proper transfer. The sample was placed on a hot plate kept at 75 °C, which helped to pin the contact line of the liquid once the ink was transferred. After transfer, the Ag-NP lines were left to dry on the hot plate and subsequently cured in an oven for 1 hour at 200 °C. Sample characterization. Line characterization was carried out using optical microscopy (Carl Zeiss, model AX10 Imager.A1), confocal microscopy (Sensofar PLμ 2300) and scanning electron microscopy (JEOL J-7100). Electrical measurements of sheet resistance were carried out using a two-point probe. Fast-imaging system. The imaging setup consisted of a fast camera (AOS Technologies AG, model S-PRI F1), with an acquisition speed of 1000 frames per second, coupled to a 10× microscope objective (numerical aperture of 0.28), and a 150 W halogen light source (ThorLabs Inc, model OSL1-EC) facing the camera in shadowgraphy configuration 22,23,53 . The sample was placed between the objective and the lamp at grazing incidence with respect to the optical axis, so that the donor substrate appeared at the top of the recorded image with the ink film facing downwards. The receiver substrate was removed during image acquisition.
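For completeness, here is a tiny sketch (ours, with illustrative numbers rather than the measured ones) of the weight-based thickness estimate mentioned above, t = Δm/(ρ·A):

```python
# Donor film thickness from weight measurements, t = delta_m / (rho * A),
# the approach mentioned above for the ~30 um blade-coated ink layer.
rho_ink = 1300.0               # kg/m^3, nominal ink density
coated_area = 25e-3 * 75e-3    # m^2, a standard microscope slide (assumed fully coated)
delta_mass = 73e-6             # kg, weight gain after coating (illustrative value)

thickness = delta_mass / (rho_ink * coated_area)
print(f"t = {thickness*1e6:.0f} um")   # ~30 um for these numbers
```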
8,394.4
2018-05-22T00:00:00.000
[ "Materials Science", "Physics", "Engineering" ]
Spherical collapse of small masses in the ghost-free gravity

We discuss some properties of recently proposed models of a ghost-free gravity. For this purpose we study solutions of linearized gravitational equations in the framework of such a theory. We mainly focus on the version of the ghost-free theory with the exponential modification $\exp(\Box/\mu^2)\Box^{-1}$ of the free propagator. The following three problems are discussed: (i) the gravitational field of a point mass; (ii) the Penrose limit of a point source boosted to the speed of light; and (iii) the spherical gravitational collapse of null fluid. For the first problem we demonstrate that it can be solved by using the method of heat kernels, and we obtain a solution in a spacetime with an arbitrary number of dimensions. For the second problem we also find the corresponding gyraton-type solutions of the ghost-free gravitational equations for any number of dimensions. For the third problem we obtain solutions for the gravitational field for the collapse of both "thin" and "thick" spherical null shells. We demonstrate how the ghost-free modification of the gravitational equations regularizes the solutions of the linearized Einstein equations and smooths out their singularities.

Introduction

It is widely believed that quantum gravity will "cure" a disease of classical general relativity, namely its singularities. In particular, in the domain of a spacetime near singularities, where the curvature becomes large, the Einstein equations should be modified. Such a modification, which, for example, is dictated by string theory, should include additional terms in the gravitational effective action that are higher both in the curvature and in its derivatives. Many different modifications of the Einstein theory of general relativity have been proposed that, in particular, change its infrared and ultraviolet behavior (see, e.g., the review [1]). In this paper we discuss a special class of the modified gravity theories with higher derivatives. It is instructive to check first the effects of such a modification in the linearized version of the corresponding theory. It is well known that the addition of quadratic-in-curvature corrections modifies the standard Laplace equation for the gravitational potential ϕ in the Newtonian approximation, which takes, for example, the following form: (l²△ + 1)△ϕ = 4πρ . (1.1) It is easy to show that such a modified equation for a point mass has a solution that decreases at infinity and remains finite at r = 0. The parameter l, which is determined by the coupling constant of the quadratic-in-curvature term in the effective action, provides a UV cut-off in the regularized solution (see e.g. [2,3]). A general analysis of the Newtonian singularities in higher derivative gravity models was recently performed in [4]. A related problem is the possibility of mini black hole production in the gravitational collapse of a small mass. To study such a problem one may consider it first in the linearized version of the theory. If the corresponding solution is regular and its curvature for a small mass M is uniformly small in the whole spacetime, one might conclude that in this regime the higher-in-curvature corrections are small and can be neglected. In such a case one can expect that for a small enough mass M a black hole is not formed. In other words, in such a theory there exists a mass gap for black hole formation. 
Long ago, in the paper [3], it was demonstrated that the gravitational theory with quadratic in the curvature terms in the action possesses this property. However, in a general case, a propagator in a theory with higher derivatives contains extra poles, reflecting the existence of degrees of freedom additional to the gravitons. As a result, the corresponding theory with higher derivatives contains either ghosts or tachyons [2,5,6]. In order to avoid this problem a new type of modification of the Einstein theory, called ghost-free gravity, was proposed [7,8,9]. A similar model was proposed earlier in [10] (see also discussions in [11]). The main idea of this approach is to consider a theory which is non-local in derivatives. Suppose that in the linearized version of such a theory the box operator □ is modified and takes the form a(□)□, where a(z) is an entire function of the complex variable z. Then the propagator for such a theory does not have additional poles, different from the original pole describing propagation of the gravitons. There exists a variety of entire functions. It is sufficient to choose the function a(z) to be an exponential of a polynomial in z. The simplest choice is when this polynomial is a linear function, so that the modified box operator is exp(−□/µ²)□, where µ, which has the dimensionality of mass, is the UV cut-off parameter. One may hope that such a cut-off regularizes singularities inside black holes and in the big-bang cosmology [12,13,14]. It should be emphasized that non-local in derivatives operators of a similar form and their properties were considered long ago in the papers [15,16,17,18,19]. In the present paper we study solutions of the ghost-free gravitational equations in the linearized approximation. In Section 2 we demonstrate that for any number of spacetime dimensions D a solution for the Newtonian potential for a point mass can be obtained by using the heat kernel method. We describe these solutions and compare them with the solutions of the linearized Einstein equations. We demonstrate that for D ≥ 4 the gravitational potential is regular at the position of the source, and the parameter µ is the scale of the UV regularization. We also show that for D = 4 the obtained result coincides with the one obtained earlier by the Fourier method and presented in the paper [7]. In Section 3 we boost the solution for a static point mass. By taking the Penrose limit with v → c of the boosted metric, we obtain a solution of the ghost-free equations for the gravitational field of a "photon" moving in D-dimensional spacetime. We show that these solutions are similar to the field of non-rotating gyratons [20,21,22]. The main difference is that a function of the (D − 2) transverse variables, which enters the metric of the gyraton and which is a solution of the (D − 2)-dimensional flat Laplace equation, in the ghost-free gravity becomes a solution of the corresponding equation with the (D − 2)-dimensional operator a(−△)△. As a result, the singularity of the gyraton metric at its origin is smooth. In the following Sections 4–6 we study the problem of the spherical collapse of null fluid in the ghost-free gravity. In Section 4 we demonstrate how a solution for the spherical null shell collapse can be obtained as a superposition of a spherical distribution of gyratons that pass through a fixed point and are the generators of the null shell. The method can be used in a spacetime with an arbitrary number of dimensions. In the present paper we illustrate it for the special case of four-dimensional spacetime. 
We use this approach in section 5 to obtain a solution for the gravitational field of a thin null shell in the ghost-free theory. We show that a solution, obtained in the developed gyraton-based approach, correctly reproduces the known solution for the collapsing null shell in the linearized Einstein gravity. We obtain also a solution for the ghost-free gravity and compare it with the solution for the Einstein gravity. In particular, we calculate the Kretschmann curvature invariant for the solution and demonstrate that its singularity is smoothened. However, it remains divergent at the origin r = 0. Finally, we study the collapse of a spherical thick null shell and obtain its gravitational field by taking a specially chosen superposition of thin null shells. We demonstrate that the ghost-free gravitational field is everywhere regular in this model, and for small mass M the apparent horizon (and hence a black hole) does not form. This means that in the ghost-free gravity there is a mass gap for the mini black hole formation in the gravitational collapse, and, in this sense, its properties are somehow similar to the properties of the theory with quadratic in curvature corrections, discussed in [3]. Section 7 contains summary and discussions of the obtained results. In this paper we use the sign convention adopted in the book [23]. Non-local Newtonian gravity 2.1 Newtonian potential in the ghost-free gravity In this paper we consider solutions of the linearized ghost-free gravity equations. We start by analyzing static solutions for a point mass in this approximation. Such a solution gives a modified Newtonian potential. The gravitational field of a point source in four dimensions was obtained earlier in [7]. The authors of this paper used the Fourier method for this purpose. We demonstrate that this result can be obtained much easier by using the method of the heat kernels. This method, practically without any changes, allows one to solve a similar problem in the higher dimensional case. We consider a flat spacetime and denote by D the number of its dimensions. It is often convenient to use a number n, connected with D as follows D = n + 3. The static metric in the weak-field approximation in the standard D dimensional gravity can be written in the form (see e.g. [22]) where the Newtonian potential ϕ satisfies the equation The operator is a standard Laplace operator, which in the flat (Cartesian) coordinates is of the form In the four-dimensional spacetime n = 1, so that G = G 4 and (2.2) takes the standard form of the Poisson equation for the Newtonian gravitational potential. It should be emphasized that there exists an ambiguity in the choice of the form of the higher dimensional coupling constant. We denote this constant by G D and fixed it by the requirement that the form of the Einstein equation is the same in any number of dimensions We introduce another constant G, related with G D in such a way, that the form of the equation for the Newtonian potential ϕ is the same for any D. For a point mass m ρ = mδ n+2 (x) , (2.5) and the potential ϕ is (see e.g. [22]) In the four-dimensional case this expression takes the standard form ϕ = −Gm/r. Following [7] we write the modified ghost-free equation for the Newtonian gravitational potentialφ, created by a point source of the mass m, in the form (2.7) We shall use the following form of the non-local operator a( ), proposed in [7] a( ) = e − /µ 2 . 
(2.8) The parameter µ specifies the characteristic energy scale, where the adopted theory modification becomes important. In what follows we shall also use another parameter, s, related with µ as follows s = µ −2 . (2.9) This is convenient, since some of the relations will contain derivatives and integrals over s, which do not look "elegant" in terms of µ. On the other hand, µ has more direct "physical meaning" as the mass (energy) of the UV cut-off of the modified gravity. After required calculations are performed one can simply substitute the parameter s in terms of µ by using the relation (2.9). Heat kernel approach Let us discuss now equation (2.7) and its solution. We shall use the following standard notions. For any operatorB we denote by B(x, x ) its value in the x-representation whereÎ is a unit operator. Using these notations one can write the equation (2.7) as follows We denote by an inverse toF operator The other operator,K(s), is also well known. In x-representation it is just a standard heat kernel K n (x, x |s) of the Laplace operator. It has the following form K n (x, x |s) ≡ 1 (4πs) 1+n/2 e −λ(x,x )/(4s) . (2.17) This formula requires some explanations. We remind that we denoted by D = n + 3 the total number of spacetime dimensions. Since the gravitational field of a static source is static, the gravitational potential depends only on D − 1 = n + 2 spatial coordinates. Correspondingly, the heat kernel (2.17) is a function of 2 spatial points, x and x and λ(x, x ) is the square of the spatial distance between these points The degree 1 + n/2 of expression in the denominator reflects the fact that we are working in a space with D − 1 = n + 2 dimensions. It is easy to check that the operator can be written in terms ofK(s) as followŝ Before presenting explicit form of the solutions for the Newtonian potentialφ in the ghostfree gravity, let us make the following remark. The equations (2.7)-(2.8) can be identically rewritten in the form φ = 4πGρ ,ρ =K n (s)ρ . (2.21) In other words, the Newtonian potentialφ in the ghost-free gravity is a solution of the standard Poisson equation for the sourceρ, which is obtained by smearing the original distribution ρ(x). For the point mass (2.5) the smeared source is ρ(x) = mK n (x, 0|s) . (2.22) In this approach one may say that the ghost-free gravity regularizes the potentialφ by smearing a source, which generates it. Of course, the results obtained by the source smearing and by applying operator to "un-smeared" source ρ are the same. Let us present the results. In flat three-dimensional space (D = 4, n = 1) in Cartesian coordinates Here erf(x) is the Gauss error function, which is defined as follows In flat four-dimensional space (D = 5, n = 2) in Cartesian coordinates There exist simple relations between K n (x, x |s) and A n (x, x ) in spacetimes with different number of dimensions Using these relations one can obtain a general expression for A n (x, x ) This relation contains a so called lower incomplete gamma function γ(a, x), which is is defined as . Thus in the ghost-free gravity the Newtonian field of a point mass m is ϕ(r) = −Gm γ n/2, r 2 /(4s) π (n−1)/2 r n . (2.33) At small x one has γ(a, x) ≈ x a /a. Hencẽ This relation implies that the Newtonian potential of a point mass in the ghost-free gravity in any number of dimensions D ≥ 4 is finite at the origin. In other words, this potential is properly regularized. 
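A quick numerical check of this regularity can be done with standard special functions. The sketch below specializes to the four-dimensional case (D = 4, n = 1) and assumes the standard error-function form of the regularized potential referred to in the text, φ̃(r) = −(Gm/r) erf(r/(2√s)), which reduces to the Einstein result −Gm/r at large r and stays finite as r → 0. The units G = m = 1 and the value of s are illustrative choices, not taken from the paper.

```python
# Sketch: D = 4 ghost-free Newtonian potential of a point mass vs. the Einstein one.
# Assumes phi(r) = -(G m / r) * erf(r / (2 sqrt(s))) with s = 1/mu^2; G = m = s = 1 here.
import numpy as np
from scipy.special import erf

def phi_ghost_free(r, G=1.0, m=1.0, s=1.0):
    return -G * m * erf(r / (2.0 * np.sqrt(s))) / r

r = np.array([1e-4, 1e-2, 1.0, 10.0, 100.0])
print(phi_ghost_free(r))   # finite as r -> 0: tends to -G m / sqrt(pi s)
print(-1.0 / r)            # Einstein potential -G m / r, divergent at r = 0
```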
Non-spinning gyratons in the ghost-free gravity We obtain now the gravitational field of an ultra-relativistic particle in the framework of the ghost-free gravity. Instead of solving the modified gravitational equations for a source moving with the speed of light, we use the following procedure, which is well known in the standard General Relativity. We first make the Lorentz transformation of the static solution for a point mass m and obtain the metric for the object moving with velocity β. After this we take the so-called Penrose limit of the metric of the moving body. Namely, we take the limit β → 1, while keeping the energy of the object γm fixed. As a result one obtains an object in D-dimensional spacetime which was called a gyraton [20,21,22]. In a general case, when a static source has an angular momentum, the corresponding gyraton is spinning. We restrict ourselves to the case of non-spinning gyratons. A new element in our derivation is performing the described procedure not within General Relativity, but in the ghost-free theory of gravity. To perform the calculations it is convenient to use the following notations for the standard Cartesian coordinates x µ = (t̃, y, ζ ⊥ ). Here t̃ is just the time in the frame where the source is at rest, y is the coordinate in the direction of motion of our source, and ζ ⊥ = (ζ 2 , . . . , ζ D−2 ) are the coordinates in the (D − 2)-dimensional plane orthogonal to the direction of the motion. We shall call the latter the transverse coordinates. The Newtonian potential for a point source of mass m in the ghost-free gravity, obtained in the previous section, takes the following form in these coordinates where φ̃ is defined by (2.33). To obtain a metric for a source moving with the velocity β (along the y-axis in the positive direction) we make the following Lorentz transformation Here (t, ξ, ζ ⊥ ) are Cartesian coordinates in the new inertial frame, where the source is moving. We also denoted The flat metric ds 2 0 in the new coordinates The form of this metric remains the same in the limit β → 1, while in this limit Here . . . denote sub-leading in γ terms. As a result, the metric perturbation in this limit takes the form When taking this limit, we assume, as usual, that the energy of the object, γm, is fixed (the Penrose limit), and we denote M = Gγm . (3.10) We also use the following relation Combining these results, we finally obtain the following expression for the function Φ Let us notice that A n−1 (ζ ⊥ ) is nothing but a solution of the ghost-free gravity equations for a point source, reduced to the (D − 2)-dimensional transverse plane with coordinates ζ ⊥ . For large µ it reduces to a solution of the (D − 2)-dimensional Poisson equation. This means that in this limit the obtained solution of the ghost-free gravity equations reduces to the standard metric of non-rotating gyratons [20,21,22]. In what follows we restrict ourselves to considering a four-dimensional spacetime. For this case the integral which enters the definition of A 0 has an infrared divergence at large s. Let us discuss this case in more detail. Let us notice that (3.14) The parameter η, which has the dimensionality of length, is an infra-red cut-off parameter. Here Ei(a, z) is the exponential integral defined as The function Ei(1, z) for small z has the following expansion Here γ ≈ 0.5772156649 is the Euler constant, and . . . denote the terms vanishing in the z = 0 limit. 
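The small-z expansion just quoted is what makes the ghost-free gyraton profile regular on its axis: the combination Ei(1, z) + γ + ln z, which appears in the ghost-free profile function used below (with the argument measured in units of 4s), tends to 0 as z → 0, whereas the Einstein-theory profile ln(z/η²) diverges there. A minimal numerical check (scipy's exp1 is the exponential integral E1 = Ei(1, ·); the sample values of z are arbitrary):

```python
# Check that Ei(1, z) + gamma + ln(z) stays finite as z -> 0, while ln(z) diverges.
import numpy as np
from scipy.special import exp1   # exp1(z) = E1(z) = Ei(1, z) in the paper's notation

for z in [1e-8, 1e-4, 1e-2, 1.0]:
    F_ghost_free = exp1(z) + np.euler_gamma + np.log(z)
    print(f"z = {z:8.1e}:  ghost-free profile = {F_ghost_free: .6e},  ln(z) = {np.log(z): .3f}")
# The ghost-free combination vanishes linearly in z as z -> 0, confirming that the
# transverse singularity of the gyraton metric is smoothed out.
```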
Assuming that η 2 is large and using (3.16) one can write The corresponding expression for the metric is (3.20) In the limit µ → ∞, when the ghost-free gravity reduces to the Einstein theory, This result reproduces the well known Aichelburg-Sexl solution [24] (see also [25]). Let us notice that the solution (3.20) contains an arbitrary parameter, the infrared cutoff parameter η. However, the change of this parameter can be easily absorbed into the redefinition of the advanced time v. Hence, this ambiguity just reflects freedom in the gauge choice. In order to demonstrate this, let us consider a metric of the form Then one gets This confirms our above conclusion, that the ambiguity in the choice of the cut-off parameter η can always been absorbed in the change of the coordinates. In what follows we use this option and simply put η 2 = 4s. For this choice For this choice the expansion of the function F (z) for small z takes a very simple form Null shell as a superposition of null gyratons We use now the above described gyraton metric in order to study the gravitational collapse in the ghost-free gravity. Namely, we consider a collapse of a spherical null shell. As earlier we use linearized gravitational equations. For simplicity, we restrict ourselves by 4D case. Instead of solving the corresponding equations we shall use the following trick. Let us notice that a sum of solutions of the linearized theory is again a solution. In the linear approximation the solution has the form ds 2 = ds 2 0 + dh 2 . (4.1) Here ds 2 0 is the flat metric and dh 2 is a perturbation. Let us consider a set of gyratons passing through a chosen point O of the spacetime. We use the Cartesian coordinates (t, X, Y, Z) in the the Minkowski spacetime, and identify a point O with the origin of the coordinate system O = (0, 0, 0, 0). We denote by (e X , e Y , e Z ) unit vectors in the directions of the axes X, Y , and Z, respectively, and by n a unit vector in 3D space in the direction of the motion of a fixed gyraton. One has n = sin α (cos β e X + sin β e Y ) + cos α e Z . (4.2) Here (α, β) are standard coordinates on a unit 2D sphere. Let a be an index enumerating the gyratons and the metric perturbation created by such a gyraton is dh 2 a . Then the perturbation dh 2 created by a set of the gyratons has the form We assume that all gyratons have the same energy. We also take a continuous limit of the discrete distribution of the gyratons and assume that such a distribution is spherically symmetric. Thus, we write An extra factor (4π) −1 reflects that we use averaging over a unit sphere. As a result of the averaging, the source of the metric dh 2 is a thin null shell located at the null cones Γ ± t = ± X 2 + Y 2 + Z 2 . (4.5) For the sign minus, Γ − is a null cone with apex at O, which describes a collapsing spherical null shell. For the sign plus, the null cone Γ + describes an expanding null shell. Our starting point is gyraton metric (3.18), which we rewrite in the form Here F = ln(ζ 2 ⊥ /η 2 ) , for the Einstein theory ; Ei(1, ζ 2 ⊥ /4s) + γ + ln(ζ 2 ⊥ /4s) , for the ghost-free gravity . (4.7) Let us notice this is a metric of a gyraton without spin. Namely these gyratons will be considered in this paper. We also found convenient to use the following terminology. We call a worldline of a gyraton a null string. Its projection to the 3D space, the gyraton trajectory, is an oriented straight line. A unit orientation vector n along this line determines a direction of the motion. 
Quantity ξ is the coordinate along the trajectory in the direction of motion of the "photon", and ζ ⊥ = (ζ 1 , ζ 2 ) are Cartesian coordinates in the 2D plane orthogonal to this direction. We study a spherically symmetric distribution of the gyratons, which has the property that the null strings, representing them, intersect at a single spacetime point O. We also choose the parameter ξ along each of the strings to vanish at this point O. Consider a point P = (X, Y, Z) of the 3D space. There exist exactly two gyraton trajectories, passing through this point. Let (α + , β + ) be angles of a unit vector n + (see (4.2)) along the line, connecting the origin with the point P . The parameter ξ + at P for such a trajectory is positive. The direction vector for the second trajectory is n − = −n + and its angles are (α − = π − α + , β − = π + β + ), while the corresponding coordinate ξ − = −ξ + is negative. It is convenient to perform the calculations of dh 2 in two steps. First we introduce the following objects is the stress-energy tensor of a gyraton moving in the direction n (see (4.2)). Similarly, for where T µν is the stress-energy tensor of a spherical null shell, constructed from gyraton null strings. Next, we use T (y ⊥ ) to find the metric perturbation dh 2 for the thin null shell of mass M dh 2 = M dy ⊥ F (y 2 ⊥ ) T (y ⊥ ) . It should be emphasized that the quantities dh 2 (α,β) and T (α,β) (y ⊥ ), which enter (4.4) and (4.9), must be first written in the coordinate system, that does not depend on the particular value of the parameters (α, β). We use the Cartesian coordinates (X, Y, Z) for this purpose. Thus we need first to establish relations between the gyraton associated coordinates (ξ, ζ ⊥ ) and the Cartesian coordinates (X, Y, Z). Useful geometric relations In order to find this coordinate transformation let us introduce a couple of new useful unit vectors. The first one is a vector k which is orthogonal to n and is directed from the gyraton trajectory to the Z-axis (see Figure 1). It can be written as a linear combination k = k Z e Z + k n n . The second unit vector p is orthogonal to both n and k. It can be written as Consider a point P , which has the Cartesian coordinates (X, Y, Z), so that the vector P connecting the origin point O with P is (4.17) The same vector can be written in the gyraton's frame as follows (ζ ⊥ = (ζ k , ζ p )) P = ξn + ζ k k + ζ p p . Z = ξ cos α + ζ k sin α . Evaluating integrals We assume that ζ p takes the value y p and use relation (4.22) to find a corresponding value of the angle β. By solving this equation one gets These relations show that for each value of y p there exist two different values of the angle β, which we denoted by β ± . Using (4.24) it is easy to check that The expression in the right-hand side is a sum of the contributions for two values of β ± corresponding to the same y p . The quantity ζ p in each of these terms should be substituted by y p . We use a similar trick as earlier to calculate an integral over the angle variable α. First, we use the relation (4.21) with ζ k = y k to find the angle α. One gets One also has ξ ± = ± r 2 − y 2 ⊥ . (4.36) In order to obtain this results one needs to solve quadratic equations, which have two solutions. We single out a solution as follows. A line y k = y p = 0 coincides with the gyraton's trajectory and α and β are spherical angles of its direction. 
For y k = y p = 0 relations (4.33), (4.34) and (4.36) give Namely these conditions (4.37) fix an ambiguity in the sign choice in the relations (4.33) and (4.34). Let us notice also that (4.24) in the limit y k = y p = 0 takes the form (4.38) By using (4.37) and (4.38) it is easy to see that two solutions (α + , β + ) and (α − , β − ) describe two opposite points of a unit sphere. As we explained earlier these two solutions describe two gyraton trajectories passing through the same space point (X, Y, Z). Stress-energy tensor The relations (4.41) can be used to find the stress-energy tensor of the null shell constructed from null stings representing a set of gyratons. Using (4.9) and (4.41) one gets Taking the limit y ⊥ → 0 in this relation and using relations (4.11) and (4.37) one obtains where u = t − r, v = t + r. This relation correctly reproduces the expected expression for the stress-energy tensor of the null shell. It is a superposition of the stress-energy tensors of a contracting and expanding spherical null shells of mass M . One can easily solve the linearized Einstein equations for such a null-shell problem. The solution is well known and simple. Inside both the collapsing and expanding shells the spacetime is flat, while outside them the metric is a linearized version of the Schwarzschild metric Our next goal is to reproduce this result by using the representation (4.12). This will provide us with a useful test of the validity of our approach. Averaged metric We have all the required expressions to perform the calculations. However, one can greatly simplify the problem using the following observation. Since the distribution of the null strings representing gyratons is spherically symmetric, the corresponding averaged metric dh 2 must also have this property. Hence, it can be written in the form Here the metric coefficients h tt , h tr , h rr and H are functions of t and r, and dω 2 is the metric on a unit sphere. The spherical coordinates (r, θ, φ) are related with the Cartesian coordinates (X, Y, Z) as follows X = r sin θ cos φ , Y = r sin θ sin φ , Z = r cos θ . (5.5) In order to find the metric (5.4) it is sufficient to obtain its value near a single spatial point. We choose such a point as follows: P X = (X = r, 0, 0), where X > 0. For this choice the metric (5.4) reduces to We perform now calculations of dh 2 at P X and by comparing it with (5.6) we find the expression for the metric perturbation (5.4). One has near P X In order to obtain the metric perturbation dh 2 one must to calculate the integral over (y p , y k ) in (4.12). We introduce polar coordinates in the (y p , y k )-plane: y p = ρ cos ψ , y k = ρ sin ψ . (5.11) We also write T (y ⊥ ) = T + + T − , (5.12) Because of the presence of δ-function the term T + is non-zero only for t ≥ 0, while the other term T − is non-zero for t ≤ 0. We write expression (4.12) in the form 14) 18) and dots indicate terms linear in y p and y k , which do not contribute to the integral over ψ (5.15). Thus one has Integrals K 1 and K 2 can be easily taken with the following result δ-function in (5.19) can be written as This relation also implies that t = ± X 2 − ρ 2 . (5.23) After integration in (5.14) the terms T ± give similar contributions, so that finally one obtains the following result Comparing (5.24) with (5.6) we finally get This expression is valid everywhere in the domain r ≥ |t| for both positive and negative time t. 
Using the GRTensor program one can check that the Ricci tensor for this perturbation of the flat metric in the linear in M approximation vanishes everywhere outside the null shell. This provides one with a good test of the correctness of the performed calculations. A case of linearized Einstein equations For the linearized Einstein equations The corresponding expression After this gauge transformation one gets Let us notice that the infrared cut-off η 2 in (5.26) does not enter the final result and the change of this parameter is simply absorbed into the redefinition of the gauge field V µ . Let us mention also that the Kretschmann invariant in its lowest non-vanishing order is as it must be for the Schwarzschild metric. Let us emphasize that the same result can be obtained directly from the perturbation of the metric in the form (5.25). Ghost-free case One can easily repeat similar calculations for an arbitrary function F (z), where z = r² − t². We used the GRTensor program for this purpose. The calculations are straightforward. However, the intermediate formulas are quite long. That is why we do not reproduce them here. Let us only present the expression for the Kretschmann invariant in its lowest order for such a metric in the ghost-free gravity For the ghost-free theory with a(□) = exp(−□/µ²) one has F (z) = Ei(1, z/(4s)) + γ + ln(z/(4s)) , (5.35) (5.36) Q(z) = [(8s² + 4sz + z²)e^(−z/2s) − 4s(4s + z)e^(−z/4s) + 8s²]/(8s²z²) . Here β² = 1 − t²/r² is a dimensionless parameter which outside the shell, where |t| < r, is less than or equal to 1. This means that the curvature for the linearized ghost-free theory in the case of the collapse of the spherical null shell is weaker than the singularity for the linearized Einstein equations. However, the ghost-free gravity solution still remains singular, at least for the chosen scheme of the regularization. "Thick" null shell model In the previous section we discussed the gravitational field of a thin spherical null shell. This shell represents a spherical δ-type distribution of the energy. The mass of the shell is M. It collapses with the speed of light and shrinks to zero radius at the moment of time t = 0. In the linearized theory the corresponding perturbation of the background flat metric is dh 2 (t, r). Certainly, such a model is an idealization. To study a more realistic model of the collapse we assume that the spherical collapsing null fluid is represented by a pulse which is not infinitely sharp in time, but has a finite time duration. We characterize its profile by a function q(t). The meaning of this function is the mass density per unit time, dM/dt, arriving at the center r = 0 at time t. The total mass of such a "thick" shell is M = ∫ dt q(t) . (6.1) Using the linearity of the equations one can write the corresponding solution dh 2 (t, r) as the superposition of dh 2 (t, r) perturbations as follows To simplify the calculations we further specify the model. Namely, we choose q(t) to be a step function, which has a constant value M/b during the interval t ∈ (−b/2, b/2), and which vanishes outside this interval. For such a step function the solution has a different form in different spacetime domains (see Figure 2). Let us describe first these domains. It is convenient to use the advanced time, v = t + r, and the retarded time, u = t − r, coordinates. In the domains T − , where v < −b/2, and T + , where u > b/2, the spacetime is empty, and the metric there is flat. 
In the domain N − , where v ∈ (−b/2, b/2) and u < −b/2, one has only the in-falling null fluid flux, while in N + , where u ∈ (−b/2, b/2) and v > b/2, one has only the out-going null fluid flux. In the domain I, where v ∈ (−b/2, b/2) and u ∈ (−b/2, b/2), one has a superposition of the in-coming and out-going null fluid fluxes. And finally, in the domain R, where v > b/2 and u < −b/2, the spacetime is empty. Gravitational field in the I-domain Let us consider first the I-domain. Denote by (t, r) the coordinates of the point of "observation" in this domain. A thin null shell, crossing the center r = 0 at time t′, contributes to the integral (6.2) only if t′ ∈ (t − r, t + r). Let us denote x = t − t′; then the formula (6.2) takes the form (6.3) Thus one has The linear in x term in (6.37) disappears as a result of the integration. Ghost-free gravity case One has F (z) = Ei(1, z/4s) + γ + ln(z/4s) . (6.11) The Taylor expansion of this function for small z is Using this expansion one gets for small r in the I-domain J 0 = r³(420s² − 21sr² + r⁴)/(1260s³) + . . . , (6.13) Calculations of the Kretschmann invariant R 2 for small r in the I-domain give Here, as earlier, Ṁ = M/b. This relation shows that for the thick shell model the curvature remains finite at r = 0. (∇r)² invariant There exists another useful invariant for the spherically symmetric geometry. This invariant is A line in the (t, r) plane where this invariant vanishes is an apparent horizon. Using GRTensor one finds that in the leading order in M this invariant in the I-domain is Since in this domain r < b, one has (∇r)² > 1 − 2Mb/s . (6.18) This relation means that for given s, which is the inverse square of the UV cut-off parameter µ of the ghost-free theory, s = µ⁻², and fixed duration b of the pulse, the mini-black hole is not formed if the mass M is small enough. Gravitational field in the R domain The perturbation of the gravitational field in the R domain is described by the following expression where now Here F (z) = F 0 (z) + ∆F (z) , F 0 (z) = γ + ln(z/4s) , (6.21) ∆F (z) = Ei(1, z/4s) . (6.22) We denote by J 0 n and ∆J n the contributions to J n of the terms F 0 and ∆F , respectively, so that one has J n = J 0 n + ∆J n . (6.23) We consider the perturbed metric at large r in the R domain. We also assume that r ≫ |t ± b/2| and use the following asymptotic form of the function ∆F (z) for large z: ∆F (z) = (4s/z) e^(−z/4s) (1 + O(1/z)) . (6.24) Thus one can use the following approximation Using this approximation and (6.20) one obtains Calculation of J 0 n is straightforward and J 0 n can be easily obtained by using the Maple program. One also obtains the following expressions for A n (t) Here and erf is the Error function, which is defined as Let us emphasize that in spite of the presence of the imaginary unit i in the above formulas, the expressions for A n (t) are real, as they should be. After straightforward calculations by using the GRTensor program, we obtain the following expression for the Kretschmann invariant in the leading M² order Keeping the first term in W , which contains the highest power of r, we obtain sbr 4 e −r 2 /(4s) (erf(iτ + ) + erf(iτ − )) . (6.34) Once again, in spite of the presence of i the answer for ∆R² is real. Expressions (6.32) and (6.34) imply that in the R domain, for a fixed value of time t and r → ∞, the curvature exponentially fast approaches its asymptotic value, which is equal to the Schwarzschild curvature of a spherical object of mass M . 
Gravitational field in the N ± -domains Let us briefly discuss now the gravitational field in the N ± domains. We focus on the incoming N − domain. The case of N + is similar. Let (t, r) be the coordinates of the "observation" point. We denote v = t + r and u = t − r. The condition that the "observation" point is in the N − domain is v ∈ (−b/2, b/2) and u < −b/2. The position t′ of the apex of the null shells that give a non-zero contribution at the point of "observation" obeys the condition t′ ∈ (−b/2, t + r). As earlier, we denote x = t − t′. Then Thus one has In the linearized Einstein gravity one uses the following expression for F : F (r² − t²) = ln The required integrals (6.20) can be easily calculated. We do not reproduce these results here since the obtained expressions are quite long. We present here only the final expression for the Kretschmann invariant, which we obtained by using the GRTensor program In the case of the ghost-free gravity the curvature is modified in the narrow "skin" domains close to v = ±b/2 (for the N − domain) and close to u = ±b/2 (for the N + domain), while outside of them and at large r the ghost-free theory corrections only slightly modify the above described Vaidya metric. Summary and discussion Let us summarize and discuss the obtained results. We recall that we study the modification of the solutions of the Einstein theory in the framework of the ghost-free theory proposed in [7,8,9]. We focus mainly on a special type of such a theory, in which a free field operator is transformed into the non-local operator of the form exp(−□/µ²)□. We study solutions of such a theory in the linearized approximation. We first study the gravitational field of a point mass in this theory. Such a problem was solved in the four-dimensional spacetime in [7] by using the Fourier method. We demonstrate that the propagator of the linearized ghost-free theory in any number of spacetime dimensions is directly related to the heat kernel in this space. Using the heat kernel approach we obtain solutions for the gravitational field of a point mass in D-dimensional spacetime. We demonstrate that these solutions are always regular at the position of the source. In 4D spacetime the obtained solution coincides with the result presented in [7]. In the second part of the paper we study solutions of the ghost-free gravity for ultrarelativistic sources. We first obtain ghost-free analogues of the gyraton solutions [20,21,22] of the higher-dimensional gravity. For this purpose we boost the obtained ghost-free solution for a point mass, and by taking the Penrose limit, we find a ghost-free solution for a D-dimensional spinless "photon". We demonstrate that the corresponding metric is similar to the metric of a spinless gyraton. The main difference is that the solution of the (D − 2)-dimensional flat Laplace equation for a point charge, which enters the gyraton metric, is modified. Namely, the function of the transverse variables becomes a solution of the (D − 2)-dimensional ghost-free modification of the corresponding Laplace equation. Using this result we obtain spinless gyraton solutions of the ghost-free gravity in D-dimensional spacetime and demonstrate that their transverse singularities are regularized. Finally, we study a spherical gravitational collapse of null fluid in the framework of the ghost-free gravity. As earlier, we restrict ourselves to working in the linearized approximation. Since a solution of the ghost-free equations for the dynamical problem seems to be complicated, we used the following trick based on the results obtained earlier in this paper. 
We used the fact that in the linearized theory a superposition of any solutions is again a solution. To obtain the gravitational field for a collapsing spherical thin null shell, we "construct" it as a superposition of the gyraton metrics for a spherically symmetric distribution of the gyraton sources moving along a null cone representing the shell. Such gyratons intersect at a single point of the spacetime, which is the apex of the null cone. In the adopted linearized approximation the gyratons forming the thin shell cross this point without interaction, so that after passing the apex point they form an expanding spherical null shell. The gravitational field for such null shells is obtained by averaging the single gyraton metric over a homogeneous spherical distribution of the gyratons. We demonstrated that by using this approach one correctly reproduces the stress-energy tensor for both collapsing and expanding thin null shells. We checked that the obtained solution in the linearized Einstein gravity correctly reproduces the expected result. Namely, the gravitational field inside both the collapsing and expanding shells vanishes, while outside of them it is constant. The latter field is nothing but a linearized version of the Schwarzschild metric with mass M. After this, we obtained a similar solution for the ghost-free gravity. To study its properties we calculated the Kretschmann scalar for it and demonstrated that its singularity at r = 0 is smoothed out, but is still present. Finally, we studied the case of the thick null shell collapse. In the linearized theory a corresponding solution is obtained by a superposition of the thin-shell solutions, that is, by averaging these solutions with different positions of their apexes at r = 0. As a result, we constructed a solution for a thick null shell with an arbitrary distribution of the collapsing mass M = M (v) at the spatial infinity. We considered in detail a special model, where the mass M (v) is a linear function of the advanced time v = t + r inside the interval (−b/2, b/2), vanishes for v < −b/2, and remains constant, equal to M, for v > b/2. We demonstrated that in the linearized Einstein gravity the solution is a superposition of two linearized Vaidya metrics, for the incoming and outgoing null fluid fluxes. The corresponding Kretschmann scalar is R² = 252Ṁ²/r⁴, where Ṁ = M/b. It is singular at r = 0. In the linearized ghost-free gravity the corresponding solution is modified. The main new feature is that the metric is regular at r = 0. As a result, its Kretschmann scalar R² = 32Ṁ²/(3s²) is finite at r = 0. We also calculated the invariant (∇r)² and demonstrated that for the collapse of a thick null shell of small mass M the apparent horizon is not formed. This result can be interpreted as follows: in the ghost-free gravity there exists a mass gap for the black hole formation in the gravitational collapse. This result is similar to the result obtained in [3] for the null shell collapse in the theory of gravity with quadratic in curvature corrections. In both cases, the theory has an additional scale parameter, the UV cut-off length. This property distinguishes such theories from the classical Einstein gravity, where the mass gap is absent and black holes of arbitrarily small mass can be formed. A well-known consequence of this is the Choptuik-type [26] universal scaling properties of near-critical solutions. Let us make a few general remarks concerning solutions of the ghost-free gravity equations. 
It seems that the method of the heat kernels, developed in this paper for the case of static sources, may not work properly when a source moves arbitrarily. The heat kernels are well defined in a space with the Euclidean metric, but they do not have this property for metrics with the Lorentzian signature. The reason is that the Laplace operator is non-positive definite, while this is not true for the box operator. In this paper we used the method based on the gyraton solutions to overcome this problem for a special case of null shells. It would be interesting to investigate solutions of the linearized ghost-free gravity for generic moving sources, say, for the emission of gravitational waves in such a theory, and the back-reaction of this radiation on the accelerated objects. Another very interesting, but much more complicated, problem is to study how the ghost-free gravity modifies the metric inside black holes near their singularity.
10,883.6
2015-04-01T00:00:00.000
[ "Physics", "Geology" ]
A Spectral Element Reduced Basis Method for Navier-Stokes Equations with Geometric Variations We consider the Navier-Stokes equations in a channel with a narrowing of varying height. The model is discretized with high-order spectral element ansatz functions, resulting in 6372 degrees of freedom. The steady-state snapshot solutions define a reduced order space through a standard POD procedure. The reduced order space allows us to accurately and efficiently evaluate the steady-state solutions for different geometries. In particular, we detail different aspects of implementing the reduced order model in combination with a spectral element discretization. It is shown that an expansion in element-wise local degrees of freedom can be combined with a reduced order modelling approach to enhance computational times in parametric many-query scenarios. We consider the flow through a channel with a narrowing of variable height. A reduced order model (ROM) is computed from a few high-order SEM solves, which accurately approximates the high-order solutions for the parameter range of interest, i.e., the different narrowing heights under consideration. Since the parametric variations are affine, a mapping to a reference domain is applied without further interpolation techniques. The focus of this work is to show how to use simulations arising from the SEM solver Nektar++ [7] in a ROM context. In particular, the multilevel static condensation of the high-order solver is not applied; instead, the ROM projection works with the system matrices in local coordinates. See [1] for further details. This is in contrast to our previous work [21], since numerical experiments have shown that the multilevel static condensation is inefficient in a ROM context. Additionally, we consider affine geometry variations. With SEM as the discretization method, we use global approximation functions for the high-order as well as the reduced-order methods. The ROM techniques described in this paper are implemented in the open-source project ITHACA-SEM. The outline of the paper is as follows. In Sec. 2, the model problem is defined and the geometric variations are introduced. Sec. 3 provides details on the spectral element discretization, while Sec. 4 describes the model reduction approach and shows the affine mapping to the reference domain. Numerical results are given in Sec. 5, while Sec. 6 summarizes the work and points out future perspectives. Problem Formulation Let Ω ⊂ R² be the computational domain. Incompressible, viscous fluid motion in the spatial domain Ω over a time interval (0, T) is governed by the incompressible Navier-Stokes equations with vector-valued velocity u, scalar-valued pressure p, kinematic viscosity ν and a body forcing f: Boundary and initial conditions are prescribed as with d, g and u 0 given and The Reynolds number Re, which characterizes the flow [8], depends on ν, a characteristic velocity U, and a characteristic length L: Re = UL/ν . (6) We are interested in computing the steady states, i.e., solutions where ∂u/∂t vanishes. The high-order simulations are obtained through time-advancement, while the ROM solutions are obtained with a fixed-point iteration. Oseen-Iteration The Oseen-iteration is a secant modulus fixed-point iteration, which in general exhibits a linear rate of convergence [11]. 
Given a current iterate (or initial condition) u k , the next iterate u k+1 is found by solving the linear system: Iterations are typically stopped when the relative difference between iterates falls below a predefined tolerance in a suitable norm, like the L 2 (Ω) or H 1 0 (Ω) norm. Model Description We consider the reference computational domain shown in Fig. 1, which is decomposed into 36 triangular spectral elements. The spectral element expansion uses modal Legendre polynomials of order p = 11 for the velocity. The pressure ansatz space is chosen of order p − 2 to fulfill the inf-sup stability condition [12,13]. A parabolic inflow profile is prescribed at the inlet (i.e., x = 0) with horizontal velocity component u x (0, y) = y(3 − y) for y ∈ [0, 3]. At the outlet (i.e., x = 8) we impose a stress-free boundary condition; everywhere else we prescribe a no-slip condition. The height of the narrowing in the reference configuration is µ = 1, from y = 1 to y = 2. See Fig. 1. The parameter µ is considered variable in the interval µ ∈ [0.1, 2.9]. The narrowing is shrunken or expanded so as to maintain the geometry symmetric about the line y = 1.5. Fig. 2, Fig. 3, and Fig. 4 show the velocity components close to the steady state for µ = 1, 0.1, 2.9, respectively. The viscosity is kept constant at ν = 1. For these simulations, the Reynolds number (6) is between 5 and 10, with the maximum velocity in the narrowing as the characteristic velocity U and the height of the narrowing as the characteristic length L. For larger Reynolds numbers (about 30), a supercritical pitchfork bifurcation occurs, giving rise to the so-called Coanda effect [15,22,21], which is not the subject of the current study. Our model is similar to the model considered in [9], i.e., an expansion channel with an inflow profile of varying height. However, in [9] the computational domain itself does not change. Spectral Element Full Order Discretization The Navier-Stokes problem is discretized with the spectral element method. The spectral/hp element software framework used is Nektar++ in version 4.4.0. The discretized system of size N δ to solve at each step of the Oseen-iteration for fixed µ can be written as where v bnd and v int denote velocity degrees of freedom on the boundary and in the interior of the domain, respectively, while p denotes the pressure degrees of freedom. The forcing terms on the boundary and interior are denoted by f bnd and f int , respectively. The matrix A assembles the boundary-boundary coupling, B the boundary-interior coupling, B̃ the interior-boundary coupling, and C assembles the interior-interior coupling of elemental velocity ansatz functions. In the case of a Stokes system, it holds that B = B̃ T , but this is not the case for the Oseen equation because of the linearized convective term. The matrices D bnd and D int assemble the pressure-velocity boundary and pressure-velocity interior contributions, respectively. The linear system (7) is assembled in local degrees of freedom, resulting in block matrices A, B, B̃, C, D bnd and D int , each block corresponding to a spectral element. This allows for an efficient matrix assembly since each spectral element is independent of the others, but makes the system singular. In order to solve the system, the local degrees of freedom need to be gathered into the global degrees of freedom [1]. The high-order element solver Nektar++ uses multilevel static condensation for the solution of linear systems like (7). 
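At the algebraic level, the Oseen loop described above amounts to repeatedly linearizing the convective term about the previous iterate, assembling and solving a saddle-point system like (7), and stopping when the relative update is small. The sketch below is a generic, library-agnostic Python version with a hypothetical assembly helper (assemble_oseen_system); it is not the actual Nektar++/ITHACA-SEM interface.

```python
import numpy as np

def oseen_iteration(assemble_oseen_system, u0, tol=1e-8, max_iter=100):
    """Generic secant-modulus (Oseen) fixed-point loop for the steady problem.

    assemble_oseen_system(u_k) is a hypothetical helper returning the matrix and
    right-hand side of a system like (7) with the convective term linearized
    about the current iterate u_k; it stands in for the actual SEM assembly.
    """
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        A, b = assemble_oseen_system(u)      # linearize convection about u_k
        u_new = np.linalg.solve(A, b)        # in practice: a sparse or reduced solve
        rel_diff = np.linalg.norm(u_new - u) / max(np.linalg.norm(u_new), 1e-30)
        u = u_new
        if rel_diff < tol:                   # stop on the relative difference of iterates
            break
    return u
```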
Since static condensation introduces intermediate parameter-dependent matrix inversions (such as C −1 in this case), several intermediate projection spaces need to be introduced to use model order reduction [21]. This can be avoided by instead projecting the expanded system (7) directly. Next, we will take the boundary-boundary coupling across element interfaces into account. Let M denote the rectangular matrix which gathers the local boundary degrees of freedom into global boundary degrees of freedom. Multiplication of the first row of (7) by M T M will then set the boundary-boundary coupling in local degrees of freedom: The action of the matrix in (8) on the degrees of freedom on the Dirichlet boundary is computed and added to the right-hand side. Such degrees of freedom are then removed from (8). The resulting system can then be used in a projection-based ROM context [18]; it is of high-order dimension N δ × N δ and depends on the parameter µ: Reduced Order Model The reduced order model (ROM) computes accurate approximations to the high-order solutions in the parameter range of interest, while greatly reducing the overall computational time. This is achieved by two ingredients. First, a few high-order solutions are computed and the most significant proper orthogonal decomposition (POD) modes are obtained [18]. These POD modes define the reduced order ansatz space of dimension N, in which the system is solved. Second, to reduce the computational time, an offline-online computational procedure is used. See Sec. 4.1. The POD computes a singular value decomposition of the snapshot solutions to 99.99% of the most dominant modes [10], which define the projection matrix U ∈ R N δ ×N used to project system (9): The low order solution x N (µ) then approximates the high order solution as x(µ) ≈ Ux N (µ). Offline-Online Decomposition The offline-online decomposition [10] enables the computational speed-up of the ROM approach in many-query scenarios. It relies on an affine parameter dependency, such that all computations depending on the high-order model size can be moved into a parameter-independent offline phase, while having a fast input-output evaluation online. In the example under consideration here, the parameter dependency is already affine and a mapping to the reference domain can be established without using an approximation technique such as the empirical interpolation method. Thus, there exists an affine expansion of the system matrix A(µ) in the parameter µ as The coefficients Θ i (µ) are computed from the mapping x = T k (µ)x̂ + g k , T k ∈ R 2×2 , g k ∈ R 2 , which maps the deformed subdomain Ω̂ k to the reference subdomain Ω k . See also [16,17]. Fig. 5 shows the reference subdomain Ω k for the problem under consideration. For each subdomain Ω̂ k the elemental basis function evaluations are transformed to the reference domain. For each velocity basis function u = (u 1 , u 2 ), v = (v 1 , v 2 ), w = (w 1 , w 2 ) and each (scalar) pressure basis function ψ, we can write the transformation with summation convention as: with The subdomain Ω 5 (see Fig. 5) is kept constant, so that no interpolation of the inflow profile is necessary. To achieve fast reduced order solves, the offline-online decomposition expands the system matrix as in (11) and computes the parameter-independent projections offline, which are stored as small matrices of size N × N. 
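The two ingredients can be summarized compactly: the POD basis is obtained from a thin SVD of the snapshot matrix, truncated at 99.99% of the (squared singular value) energy, and each affine term of the expansion in (11) is projected once offline so that the online system is only N × N. The sketch below uses plain NumPy with generic placeholder names (snapshots, A_blocks, theta); it is a simplified illustration, not the ITHACA-SEM implementation.

```python
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    """POD via thin SVD of the snapshot matrix (columns = high-order solutions)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    N = int(np.searchsorted(cumulative, energy)) + 1   # number of retained modes
    return U[:, :N]

def offline_projection(U, A_blocks):
    """Project each affine matrix A_i of the expansion once, offline: N x N matrices."""
    return [U.T @ A_i @ U for A_i in A_blocks]

def online_solve(A_reduced, theta, f_reduced):
    """Assemble A_N(mu) = sum_i Theta_i(mu) * (U^T A_i U) and solve the small system."""
    A_N = sum(t * A_i for t, A_i in zip(theta, A_reduced))
    return np.linalg.solve(A_N, f_reduced)
```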
Since in an Oseen-iteration each matrix depends on the previous iterate, the submatrices corresponding to each basis function are assembled and then formed online using the reduced basis coordinate representation of the current iterate. This is the same procedure used for the assembly of the nonlinear term in the Navier-Stokes case [18]. Numerical Results The accuracy of the ROM is assessed using 40 snapshots, sampled uniformly over the parameter domain [0.1, 2.9], for the POD and 40 randomly chosen parameter locations for verification. Fig. 6 (left) shows the decay of the energy of the POD modes. To reach the typical threshold of 99.99% on the POD energy, 9 POD modes are needed as RB ansatz functions. Fig. 6 (right) shows the relative L 2 (Ω) approximation error of the reduced order model with respect to the full order model up to 6 digits of accuracy, evaluated at the 40 randomly chosen verification parameter locations. With 9 POD modes the maximum approximation error is less than 0.7% and the mean approximation error is less than 0.5%. While the full-order solves were computed with Nektar++, the reduced-order computations were done in ITHACA-SEM with a separate Python code. To assess the computational gain, the time for a fixed-point iteration step using the full-order system is compared to the time for a fixed-point iteration step of the ROM with dimension 20, both done in Python. The ROM online phase reduces the computational time by a factor of over 100. The offline time is dominated by computing the snapshots and the trilinear forms used to project the advection terms. See [18] for detailed explanations. Conclusion and Outlook We showed that the POD reduced basis technique generates accurate reduced order models for SEM-discretized models under parametric variation of the geometry. The potential of a high-order spectral element method with a reduced basis ROM is the subject of current investigations. See also [6]. Since each spectral element comprises a block in the system matrix in local coordinates, a variant of the reduced basis element method (RBEM) [19,20] can be successfully applied in the future.
2,815.4
2018-12-28T00:00:00.000
[ "Engineering", "Mathematics", "Physics" ]
Real-Valued Systemic Risk Measures : We describe the axiomatic approach to real-valued Systemic Risk Measures, which is a natural counterpart to the nowadays classical univariate theory initiated by Artzner et al. in the seminal paper “Coherent measures of risk”, Math. Finance, (1999). In particular, we direct our attention towards Systemic Risk Measures of shortfall type with random allocations, which consider as eligible, for securing the system, those positions whose aggregated expected utility is above a given threshold. We present duality results, which allow us to motivate why this particular risk measurement regime is fair for both the single agents and the whole system at the same time. We relate Systemic Risk Measures of shortfall type to an equilibrium concept, namely a Systemic Optimal Risk Transfer Equilibrium, which conjugates Bühlmann’s Risk Exchange Equilibrium with a capital allocation problem at an initial time. We conclude by presenting extensions to the conditional, dynamic framework. The latter is the suitable setup when additional information is available at an initial time. Introduction Both the financial crisis started in 2007 and the dramatic economic shocks related to the COVID-19 pandemic have brutally proved the partial inadequacy of past approaches to the management of risk for complex systems of interacting entities. It has become more and more evident how deep interconnectedness played a major role in propagation of risk at all levels, from smaller businesses to large scale multinational entities. The literature on systemic risk has widely focused on modeling the structure of financial networks, aiming at portraying the spread of shocks (both exogenous and endogenous) in the systems. These models assume a more or less detailed knowledge of balance sheets of institutions (interbank network and exposures, recovery rate at default, liquidation policy, bankruptcy costs, cross-holdings, leverage structures, fire sales, and liquidity freezes). As pointed out in [1], however, once such models have been implemented, "one still has to understand how to compare the possible final outcomes in a reasonable way or, in other words, how to measure the risk carried by the global financial system."We will adopt here an axiomatic approach, which addresses specifically this problem. Essentially, this axiomatic approach identifies suitable maps associated with a risky position for the system (i.e., a vector of financial positions) a single amount (namely, a real number). Such an amount, intuitively speaking, summarizes the overall risk of the system. The literature on systemic risk is nowadays very vast and growing. For empirical studies on banking networks, one might look at [2][3][4]. The works [5][6][7][8][9] study interbank lending with a mean field approach and using interacting diffusions. Concerning systemic risk modeling, we mention among the many contributions [10] for a classical contagion model, ref. [11] for a default model, ref. [12][13][14] for illiquidity cascade models, ref. [15,16] for an asset fire sale cascade model, and [17] for a model which includes cross-holdings. Further works on network modeling are [18][19][20][21][22][23][24]. We refer the reader to [25,26] for a detailed overview on the literature on systemic risk. In this work, we will mostly focus on real-valued Systemic Risk Measures, which have been studied in recent works inspired by the axiomatic approach to univariate monetary Risk Measures of [27]. 
An alternative, set-valued approach has also gained significant attention in the literature, as testified, for example, by the recent "Special Issue on Vector- and Set-Valued Methods in Stochastic Finance and Related Areas" (Finance Stoch., Volume 25, Issue 1, January 2021). By its very nature, the classical framework of [27] allows for explicitly modeling the mechanism of injecting capital in order to make risky positions acceptable and thus seems a natural environment for treating the systemic case. The classical setup is static, with deterministic amounts exchanged at the initial time and random events that take place at a terminal time T. To simplify the presentation, we will assume a zero interest rate, i.e., the time-T amounts are already in discounted terms. The presence of additional information can be modeled by replacing the initial trivial sigma algebra of the static case with a general one. Thus, we will depict the extension of the theory of Systemic Risk Measures to a dynamic, multiperiod framework. Before discussing this topic, we will elaborate on some details of the static case, which help develop the intuition for the main concepts with fewer technicalities. Univariate Monetary Risk Measures We denote by L 0 (Ω, F , P) the set of (equivalence classes of) P-a.s. finite random variables on the probability space (Ω, F , P). The framework of [27], which is now textbook material and is excellently exposed in [28], consists of three essential ingredients. First, the financial positions whose risk one wants to quantify are represented by random variables X ∈ L 0 (Ω, F , P), where the amount X(ω) is interpreted as a gain if positive and a loss if negative. Second, it is assumed that there exists a subset A ⊆ L 0 (Ω, F , P) of random variables that are considered acceptable by the agent or the financial regulator. The set A is assumed to be monotone, that is, X ≥ Y, Y ∈ A ⇒ X ∈ A. Finally, the univariate monetary Risk Measure η, which measures the risk of the positions X ∈ L 0 (Ω, F , P), is defined as the minimal amount m ∈ R that must be added to X in order to make the resulting (discounted) payoff acceptable, that is, X → η(X) := inf{m ∈ R | X + m ∈ A}. (1) A key feature of such a map is the cash additivity property: η(X + m) = η(X) − m, for all m ∈ R. As this property points out, it is necessary (and meaningful) to express X, m and η(X) in the same monetary unit, and this allows for the monetary interpretation of η(X) as a value which can be effectively added to X. Assuming that the set A is convex (respectively, a convex cone), the map in (1) is convex (respectively, convex and positively homogeneous) and is called a convex (respectively, coherent) Risk Measure; see [29,30]. A rather conservative example of a coherent Risk Measure is the worst-case Risk Measure ρ W : L 0 (Ω, F , P) → R ∪ {±∞} associated with the cone A W := L 0 + (Ω, F , P) of non-negative random variables: ρ W (X) := inf{m ∈ R | X + m ∈ A W } = −ess inf(X). Convexity is meant to express mathematically the principle of diversification, which is summarized in the famous saying "don't put all your eggs in one basket". Diversification might also be expressed with the weaker condition of quasiconvexity, which accurately expresses the principle that diversification cannot increase the risk. As a result, in [31,32], a quasi-convex Risk Measure is only assumed to satisfy monotonicity and quasi-convexity. Any quasi-convex Risk Measure can be written in the form η(X) = inf{m ∈ R | X ∈ A m }, where each set A m ⊆ L 0 (Ω, F , P) is monotone and convex. 
The set A m models the class of payoffs carrying the same risk level m. In the quasi-convex case, several degrees of acceptability, expressed via the risk level m (see also [33]), are admitted. This marks a difference with the convex and cash additive case, in which each financial position is either acceptable or not acceptable. Systemic Risk Measures Let us now consider a financial system of N agents or institutions, portrayed by the exposures X = [X 1 , . . . , X N ] ∈ (L 0 (Ω, F , P)) N . Given the theory of univariate Risk Measures, a natural first step to design the measurement of the risk of the system consists in applying (possibly different) Risk Measures η n to X n , n = 1, . . . , N, and then aggregate the resulting amounts as ρ(X) := ∑ N n=1 η n (X n ). This approach is clearly quite naïf, since it almost completely ignores the multivariate nature of the exposures X (if e.g., η n is law invariant for each n, the amount ρ(X) only depends on the marginals of the vector X) and, more generally, it does not take into account the fact that securing each component of X is in principle not enough to secure the system as a whole. It is worth noticing that, in the above mentioned classical theory of univariate Risk Measures, the valuation of the risk is obtained by aggregating the scenario-dependent exposures into a deterministic amount, namely η(X), securing the financial positions. This rather trivial remark, however, stresses the fact that, for the systemic case, one needs to identify a second aggregation mechanism over the components of the system. This aggregation over agents in the system can be obtained in several different ways (both conceptually and mathematically), as we now explain. First Aggregate, Then Allocate In the axiomatic approach to Systemic Risk Measures, the objective is thus the identification of a functional ρ : (L 0 (Ω, F , P)) N → R that evaluates the overall risk ρ(X) of the whole system X = [X 1 , . . . , X N ] ∈ (L 0 (Ω, F , P)) N . A significant part of recent literature has focused on Systemic Risk Measures in the form where η : L 0 (Ω, F , P) → R is a univariate monetary Risk Measure and Λ : R N → R expresses an aggregation rule which provides a univariate risk factor Λ(X). We observe here that Λ(X) stands for the random variable ω → Λ • X(ω). One of the most common and natural choices is , the systemic Expected Shortfall introduced in [34], or the Contagion Value at Risk (CoVaR) introduced in [35]. Among the drawbacks of this approach, we mention that this aggregation rule seems not entirely appropriate when modeling systems in which cross-subsidization between institutions might be rather unrealistic. Moreover, in this case, a more traditional approach consisting of applying a univariate (coherent) Risk Measure η to the single risky positions would be prudent enough. The latter claim is a consequence of the fact that, by sub-linearity of coherent Risk Measures, it holds that η(∑ N n=1 X n ) ≤ ∑ N n=1 η(X n ). A possible aggregation function that takes into account the lack of cross-subsidization between financial institutions is the summation of losses below a given threshold: . Such an approach is adopted for example in [36,37], and generalized in [38] where also the effect of gains (positive parts) is considered by using Λ(x) = − ∑ N n=1 α n (x n ) − + ∑ N n=1 β n (x n − d n ) + . Contagion effects in the style of the model in [10] can be also taken into account as in [39]. 
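As a concrete illustration of such contagion effects, and assuming that the clearing model referred to as [10] is the classical one of Eisenberg and Noe (we cannot verify the reference here, so this identification is an assumption of the sketch), the following code computes a clearing payment vector from a relative liability matrix Π of the type used in the aggregation rule discussed next. All figures and function names are illustrative and not taken from the referenced papers.

```python
import numpy as np

# Nominal interbank liabilities: L[i, j] is the amount firm i owes firm j (toy numbers).
L = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])
e = np.array([1.0, 0.5, 2.0])            # external assets of each firm (toy numbers)

p_bar = L.sum(axis=1)                     # total nominal liabilities of each firm
# Relative liability matrix: Pi[i, j] = share of firm i's payments going to firm j.
Pi = np.divide(L, p_bar[:, None], out=np.zeros_like(L), where=p_bar[:, None] > 0)

def clearing_vector(Pi, p_bar, e, n_iter=200):
    """Picard iteration p <- min(p_bar, max(e + Pi^T p, 0)) started from p_bar;
    the iterates decrease monotonically to the greatest clearing vector."""
    p = p_bar.astype(float).copy()
    for _ in range(n_iter):
        p = np.minimum(p_bar, np.maximum(e + Pi.T @ p, 0.0))
    return p

p_star = clearing_vector(Pi, p_bar, e)
equity = e + Pi.T @ p_star - p_star       # wealth after shocks, contagion and clearing
print(p_star, equity)
```

In the aggregation rules of the text, the components of X play the role of the pre-contagion profits and losses, and a clearing step of this kind is what the contagion-sensitive aggregation function applies to them.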
The aggregation rule is in this case given in the form The matrix Π = (Π ij ) i,j=1,··· ,N represents the relative liability matrix, i.e., firm i has to pay the proportion Π ij of its total liabilities to firm j. It is of great importance to specify that, in this last case, the vector X is interpreted as future profits and losses before propagation of shocks and contagion take place. An alternative approach, which is adopted when working with aggregations of the form Λ(x) = ∑ N n=1 x n and Λ(x) = ∑ N n=1 −(x n − d n ) − assumes that the random vector X already incorporates the potential exposures due to contagion effects. Systemic Risk Measures of the form X → η(Λ(X)) are characterized axiomatically in [39] on a finite state space, in [40] on a general probability space, and in [41,42] in a conditional setting. These references also present further examples of possible aggregation functions. Whenever η in (4) is a monetary Risk Measure, the characterization (1) allows for writing Thus, systemic risk can again be interpreted as the minimal cash amount that secures the system when it is added to the total aggregated system loss Λ(X). In the cases when Λ(X) can not be directly interpreted as cash, then ρ(X) has to be understood in a mild sense as some risk level of the system, rather than the more explicit and practical meaning as a capital requirement. As explained in [1], a considerable portion of existing Risk Measures in the literature are in the form (5): the DIP of [36,43], the CoVar ( [35]), the MES ( [34]), and many other examples that can be found in [44]. First Allocate, Then Aggregate An alternative approach, introduced by [1], consists of measuring systemic risk as the minimal cash that secures the aggregated system by adding the capital into the single institutions before aggregating their individual risks. This is particularly meaningful when aiming at securing the whole system by intervention at the level of single agents. If again Λ : R N → R denotes an aggregation function, this second approach can be expressed as Here, the amount m n is added to the financial position X n of institution n ∈ {1, ..., N} before the corresponding aggregation Λ(X + m) takes place. This procedure is conceptually relevant once we realize that injecting cash first might prevent cascade effects and contagion to happen at all. The systemic risk is then measured as the minimal (total) amount ∑ N n=1 m n injected into the institutions to secure the system. A related concept, also based on acceptance sets, is developed by [45] in the context of set-valued Systemic Risk Measures. The authors also admit capital injection before aggregating individual risks (this is called aggregation mechanism sensitive to capital levels in [45]). However, the latter approach and the one presented in (6) are hardly comparable due to the intrinsic conceptual differences in considering total amounts of capital to secure the system (as in (6)) versus set-valued approaches. A risk measurement regime in the form (6) has a significant advantage with respect to the one described in Section 3.1. Indeed, assume for the sake of the discussion that (5) and (6) admit optima m (5) ∈ R, m (6) ∈ R N , respectively. 
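To make the comparison between (5) and (6) tangible before discussing it, here is a toy computation with the worst-case acceptance set and the aggregation function Λ(x) = −Σ n (x n − d n ) − . The scenarios, thresholds and variable names are our own choices; the point is only to exhibit the two optimal values m (5) and m (6) on a small example.

```python
import numpy as np

# Toy system: two banks, four equiprobable scenarios (rows: banks, columns: scenarios).
X = np.array([[ 1.0, -2.0,  0.5, -1.0],
              [-1.0,  1.0, -0.5, -1.5]])
d = np.zeros(2)                                   # loss thresholds d_n (assumed)

def Lam(Z):
    """Lambda(x) = -sum_n (x_n - d_n)^-  (sum of losses below the thresholds)."""
    return np.minimum(Z - d[:, None], 0.0).sum(axis=0)

# (5) "first aggregate, then allocate" with the worst-case acceptance set:
#     rho5 = inf{ m in R : Lam(X) + m >= 0 in every scenario }.
rho5 = -np.min(Lam(X))

# (6) "first allocate, then aggregate":
#     rho6 = inf{ sum_n m_n : Lam(X + m) >= 0 in every scenario }.
# Since Lam(X + m) >= 0 iff X_n + m_n >= d_n for all n and all scenarios,
# the cheapest componentwise injection is m_n = d_n - min over scenarios of X_n.
m6 = d - X.min(axis=1)
rho6 = m6.sum()

print(rho5, rho6, m6)    # 2.5  3.5  [2.  1.5]: (6) is costlier but yields allocations
```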
In contrast to (5), we then see that (6) addresses not only the problem of determining a total amount ρ(X) to secure the system as a whole, but also determines a way to decompose the total risk ρ(X) into partial risk allocations m n (6) ∈ R to be attributed to each institution n: since ρ(X) = ∑ N n=1 m n (6) , it is self evident that this procedure is consistent with the computation of the overall risk ρ(X). On the contrary, the amount m (5) secures the system in a less explicative way, especially from the point of view of a regulator imposing capital requirements to each institution. Once the amount m (5) is identified, it is yet to be specified if and how each component of the system participates in the benefits of its allocation. Dual representation results have been obtained in [1] in the "first allocate, then aggregate" case and by [46] in both cases for Systemic Risk Measures based on acceptance sets. Duality results have also been studied for set-valued Systemic Risk Measures of the "first aggregate, then allocate type" in [47]. Multivariate Shortfall Risk Allocation A particular case of Systemic Risk Measures of the type "first allocate, then aggregate"has been specifically studied in [48]. The authors consider a loss function : R N → (−∞, +∞], which is nondecreasing in the componentwise order, convex, lower semicontinuous and such that inf < 0 and (x) ≥ ∑ N n=1 x n − c for some constant c. Such a functional extends to the multivariate case the loss functions considered in [28]. The multivariate shortfall risk of the position X is then defined as A clever way to tackle this optimization problem and to get its dual representation is to apply the powerful theory of Orlicz spaces (for an overview of Orlicz space theory applied to utility maximization problem, we refer to [49]). Consider the multivariate Orlicz heart induced by the loss function , namely M θ := {X ∈ (L 0 (Ω, F , P)) N | E[θ(λ|X|)] < +∞ ∀λ > 0}, where θ(x) := (|x|) and |·| is taken componentwise. By suitably restricting the domain of R to M θ , the map R is found to be real-valued, convex, monotone, and translation invariant (R(X + m) = R(X) − ∑ N n=1 m n ). As the topological dual of M θ is the Orlicz space L θ * associated with the convex conjugate θ * of θ, a dual representation of R based on the duality (M θ , L θ * ) holds true (see [48] Theorem 2.10). Assuming permutation invariance for , i.e., (x) = (π(x)) for every componentwise permutation π, optimal allocations are proved to exist and are characterized in terms of Lagrange multipliers ([48] Theorem 3.4). If one selects loss functions of the form (x) = ∑ N n=1 n (x n ) for univariate loss functions 1 , . . . , N , the amount R(X) only depends on the (one dimensional) marginals of the law P X on R N . Thus, in order to capture the truly multivariate nature of the risk associated with the position X of the system, one needs to consider more sophisticated functions . The choice of a particular the loss function allows for studying systemic sensitivity of shortfall risk and its allocation, impact of exogenous shocks, and computational aspects of the risk allocations, namely of the optimal allocations m X satisfying ∑ N n=1 m n X = R(X). As we shall see in the following section, allowing for scenario-dependent allocations in place of the deterministic ones in [48] can prevent in general the aforementioned drawback of marginal dependencies, even in the case of aggregation of single institutions' loss functions ∑ N n=1 n (x n ). 
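Since the displayed definition of the multivariate shortfall risk is easiest to grasp numerically, we add a minimal sketch. We stress its assumptions: the formulation used below is R(X) = inf{ Σ n m n : E[ℓ(−(X + m))] ≤ 0 }, which matches the sign convention of this survey (components of X are gains); the particular loss function ℓ(x) = Σ n (exp(x n ) − 1), the simulated scenarios and the function names are ours and serve only to illustrate the optimization studied in [48], not to reproduce it.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, S = 2, 500
# Correlated gain scenarios for two institutions (equiprobable toy data).
X = rng.multivariate_normal(mean=[0.0, 0.0],
                            cov=[[1.0, 0.6], [0.6, 1.0]], size=S)

def loss(x):
    """l(x) = sum_n (exp(x_n) - 1): nondecreasing, convex, inf l = -N < 0
    and l(x) >= sum_n x_n, as required in the text (with c = 0)."""
    return np.sum(np.exp(x) - 1.0, axis=-1)

def acceptability(m):
    # E[ l(-(X + m)) ] <= 0, written as a scipy ">= 0" inequality constraint.
    return -np.mean(loss(-(X + m)))

res = minimize(fun=lambda m: np.sum(m),
               x0=np.zeros(N),
               constraints=[{"type": "ineq", "fun": acceptability}],
               method="SLSQP")

m_star = res.x
print("R(X) =", np.sum(m_star), " risk allocation m* =", m_star)
```

Because this ℓ is a sum of univariate losses, the computed R(X) depends only on the marginals of X, which is exactly the limitation recalled above; a coupling term, for instance a multiple of (x 1 ) + (x 2 ) + , would restore sensitivity to the joint law.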
Scenario-Dependent Allocation The approach described in Section 3.2 can be generalized replacing the deterministic amounts m ∈ R N with random vectors Y ∈ C ⊆ (L 0 (Ω, F , P)) N . Here, C stands for a set of admissible assets with possibly random, scenario-dependent payoffs. This approach, introduced by [1], raises the question on the initial time evaluation of those assets Y in C: since now the vector m will be substituted with a random vector Y, the simple componentwise addition ∑ N n=1 m n implemented in (6) has no natural counterpart. Following [50], however, one can assign a measurement π(Y) of the risk (or the cost) associated with Y ∈ C, by postulating the existence of a evaluation map π : C → R, which is assumed to be monotone increasing. Under these premises, (6) can be extended as For the sake of interpretation, C could be a set of (vectors of) admissible financial assets that can be used to secure the system. Adding Y to X componentwise yields the terminal time value X + Y. Aggregation via Λ allows for determining whether the addition of Y produces an acceptable position, and π(Y) is the valuation of Y. An example for π and C which will play a major role is: and π(Y) = ∑ N n=1 Y n . Here, the notation ∑ N n=1 Y n ∈ R means that ∑ N n=1 Y n is equal to some deterministic constant in R, even though each single Y n , n = 1, · · · , N, is a random variable. Then, as in (6), the Systemic Risk Measure can again be interpreted as the minimal total cash amount ∑ N n=1 Y n ∈ R needed at an initial time to secure the system by distributing the cash at the future time T among the components of the risk vector X. However, as opposed to (6), in general, the allocation Y i (ω) to institution i does not need to be known at an initial time, but depends instead on the scenario ω ∈ Ω that has been realized at time T. As mentioned in [51], "this corresponds to the situation of a lender of last resort who is equipped with a certain amount of cash today and who will allocate it according to where it serves the most depending on the scenario that has been realized at T". Additional restrictions and constraints on the possible allocations of cash are given by the set C. The Systemic Risk Measures with deterministic allocations presented in Section 3.1 can be absorbed in this setup by the extreme choice C = R N . Multidimensional Acceptance Set The Systemic Risk Measure (7) is a particular instance of the wider class of Systemic Risk Measures defined through a general monotone multidimensional acceptance set Indeed, for given A⊆L 0 (Ω, F , P) and Λ and setting we have X + Y ∈ A iff Λ(X + Y) ∈ A. Obviously, not all multidimensional set A may be written in the form (10), as, for example, if A := A 1 ×...×A N and A n ⊆L 0 (Ω, F , P) for each n. Observe that, applying the definition (9), this latter acceptance set A generates the Risk Measure ρ(X) = ∑ N n=1 η A n (X n ), where each univariate Risk Measure η A n is associated with the set A n . In analogy to (3), another possible generalization of (7) is achieved, see [1], by allowing the acceptance set A Y ⊆(L 0 (Ω, F , P)) N to depend on the vector Y ∈ C and by defining the quasi-convex map as: In the remainder of this paper, however, we will focus only on real-valued convex Systemic Risk Measures defined via an aggregator functional and one dimensional acceptance set. 
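The constraint defining C R , namely that each Y n may be random while Σ n Y n must be a deterministic constant, is easy to misread. The following small check (all numbers and the function name in_C_R are ours) makes it explicit and also shows that the deterministic allocations of C = R N form a particular subset of C R .

```python
import numpy as np

def in_C_R(Y, tol=1e-12):
    """Y has shape (N, S): N institutions, S scenarios. Membership in C_R
    requires sum_n Y[n, s] to be the same constant in every scenario s."""
    totals = Y.sum(axis=0)
    return bool(np.all(np.abs(totals - totals[0]) < tol))

# Scenario-dependent allocation: the lender of last resort gives 1 to bank 1
# in the first scenario and to bank 2 in the second; the total cash is fixed at 1.
Y_random = np.array([[1.0, 0.0],
                     [0.0, 1.0]])
# Deterministic allocation (an element of C = R^N, hence also of C_R).
Y_det = np.array([[0.5, 0.5],
                  [0.5, 0.5]])

print(in_C_R(Y_random), in_C_R(Y_det))              # True True
print(in_C_R(np.array([[1.0, 0.0], [0.0, 2.0]])))   # False: the total differs across scenarios
```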
Axiomatic Definition of Systemic Risk Measures As anticipated before, we present now the Systemic Risk Measures theory developed by [1], which parallels the classical axiomatic definition of univariate Risk Measures via acceptance sets. Again, (L 0 (Ω, F , P)) N is the set of (equivalence classes of) vectors of P−a.s. finite random variables. The space (L 0 (Ω, F , P)) N equipped with the classical componentwise P-a.s. order relation is a vector lattice. All inequalities between vectors of random variables are meant to hold the P-.a.s.. We recall that a map f : A vector X = [X 1 , . . . , X N ] ∈ (L 0 (Ω, F , P)) N denotes a configuration of risky factors at a future time T associated with a system of N entities. Let be the set of admissible allocations at terminal time and acceptable positions, and consider a map π : C → R so that π(Y) stands for the risk (or cost) associated with Y. Finally let Θ : (L 0 (Ω, F , P)) N × C → L 0 (Ω, F , P) denote some aggregation function, jointly in X and Y. Definition 1. The Systemic Risk Measure associated with C, A, Θ and π is the map ρ : The map ρ is a Convex Systemic Risk Measure if it is monotone decreasing and convex on {ρ(X) < +∞}. As is customary, we convene that inf{∅} = +∞. This definition translates mathematically the idea that the systemic risk of a random vector X is measured by the minimal risk (cost) of those random vectors Y that make X acceptable if appropriately aggregated. Among the many advantages of the general formulation (12) is the possibility to design Systemic Risk Measures that include both the case "first allocate, then aggregate" as in (6) and (7) by putting Θ(X, Y) =Λ(X + Y), and the case "first aggregate, then allocate" as in (5) by putting Θ(X, Y) :=Λ 1 (X)+Λ 2 (Y), where Λ 1 : (L 0 (Ω, F , P)) N → L 0 (Ω, F , P) is an aggregation function and Λ 2 : C → L 0 (Ω, F , P) could be, for example, the total discounted cost of Y. Moreover, it is readily verified that the framework of Section 3.2.1 can be embedded in the one in (12). As shown in Proposition 4.1 [1], there are simple conditions ensuring that ρ in (12) is a convex Systemic Risk Measure. Proposition 1. Suppose that A⊆L 0 (Ω, F , P) and C ⊆ (L 0 (Ω, F , P)) N are convex, A is monotone, Θ : (L 0 (Ω, F , P)) N × C → L 0 (Ω, F , P) is concave and Θ(·, Y) is increasing for all Y ∈ C, then ρ defined (12) is a convex Systemic Risk Measure. Remark 1. For the sake of generality, the Definition 1 is given with no integrability assumptions on either X or Y and admitting the values +∞ and −∞ for the functional ρ. Nevertheless, if a more detailed analysis is to be carried over, a restriction to suitable spaces is in place. The motivations are analogous to those of the univariate case, where the most common restrictions are those to L p (Ω, F , P) p ∈ [1, ∞] or to more general Orlicz spaces and Orlicz hearts. It is clear, and it was shown in detail [52], that finiteness of ρ can not be guaranteed if working on unrestricted vector spaces, as for example L 0 (Ω, F , P). Practice shows that selecting an appropriate environment for the variables X in order to have ρ(X) > −∞ is best done on a case-by-case manner: in [48,53], finiteness is, for example, achieved by working on Orlicz hearts. Guaranteeing ρ(X) < +∞, instead, lends itself to structural assumptions on C and A and Θ. 
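Before turning to finiteness and to concrete examples, Definition 1 can be illustrated by a brute-force computation on a finite scenario space. In the sketch below, a coarse grid of deterministic candidate allocations stands in for the set C, the worst-case acceptance set plays the role of A, and the two choices of Θ reproduce the "first allocate, then aggregate" and "first aggregate, then allocate" constructions; every numerical choice and every name is our own.

```python
import numpy as np
from itertools import product

# Two banks, three equiprobable scenarios (toy data).
X = np.array([[ 1.0, -1.0, -2.0],
              [-0.5,  0.5, -1.0]])

def rho(X, C, Theta, pi, acceptable):
    """Definition 1: rho(X) = inf{ pi(Y) : Y in C, Theta(X, Y) in A }."""
    values = [pi(Y) for Y in C if acceptable(Theta(X, Y))]
    return min(values) if values else np.inf

# A coarse finite grid of deterministic allocations, standing in for C = R^N.
grid = np.arange(-1.0, 4.01, 0.25)
C = [np.array(m) for m in product(grid, repeat=2)]

Lam = lambda Z: Z.sum(axis=0)                     # aggregation: total system position
acc = lambda Z: np.min(Z) >= 0.0                  # worst-case acceptance set A_W
pi  = lambda Y: float(np.sum(Y))                  # cost: total injected cash

# "First allocate, then aggregate": Theta(X, Y) = Lambda(X + Y).
rho_allocate  = rho(X, C, lambda V, Y: Lam(V + Y[:, None]), pi, acc)
# "First aggregate, then allocate": Theta(X, Y) = Lambda(X) + sum_n Y_n.
rho_aggregate = rho(X, C, lambda V, Y: Lam(V) + np.sum(Y), pi, acc)

print(rho_allocate, rho_aggregate)   # 3.0 3.0
```

With the plain sum as aggregation the two values coincide, since Λ(X + Y) = Λ(X) + Σ n Y n for deterministic Y; a nonlinear Λ, such as the truncated sum of losses used earlier, separates them.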
For example, under the same assumptions of Proposition 1 , if additionally We mention a few basic examples, taken from [1], to construct Risk Measures by using the acceptance set A W associated with the worst case Risk Measure (see (2)) and the aggregation function Possible candidates for the set of terminal time allocations C are, on one hand, the deterministic allocations C = R N and, on the other hand, the family of so-called "constrained scenario-dependent cash allocations". Recall the definition of C R in (8) and consider the sets Then, all the following can be expressed in the form described in Definition 1: Risk Measures Associated with Utility Functions and Fairness Concepts Given the general framework for measuring systemic risk presented in the previous sections, a particular choice π, C, A clearly allows for a more detailed analysis. We here present some of the main findings in [53]. The starting point is fixing the aggregation function Λ(x) = ∑ N n=1 u n (x n ) for utility functions u n , n = 1, . . . , N, representing preferences of the single agents in the system. We consider the acceptance set where the set C R was introduced in (8), and the cost functional π : C →R defined by π(Y) = ∑ N n=1 Y n . The space L ⊆ (L 0 (Ω, F , P)) N serves as an environment for the risky positions of the system X and describes possible integrability conditions. The Systemic Risk Measure introduced in Definition 1 then takes the form Thus, ρ B (X) represents the minimal total cash amount ∑ N n=1 Y n ∈ R needed at an initial time to secure the system by distributing the cash at the future time T among the components of the risk vector X. By considering scenario-dependent allocations, possible dependencies among the banks are taken into account, as the budget constraints in (13) will not depend only on the marginal distribution of X. As already mentioned before, this would be the case for deterministic Y n . The choice of a space L allows for a more detailed study, e.g., for the problem of finiteness of the Systemic Risk Measures. In [53] the subspace L is determined by the Orlicz hearts associated with the utility functions u 1 , . . . , u N . This specific choice of the underlying space has been popularized in several recent works in the non systemic case. Indeed, univariate convex Risk Measures on Orlicz spaces have been introduced by [49] and [54] and deeply analyzed in several works. Just to mention a few, we address the reader to [55][56][57][58][59][60]. If u : R → R is concave, increasing and satisfies lim x→−∞ u(x) x = +∞ (say, for example, a utility function satisfying the Inada conditions), then the function φ(x) := −u(−|x|) + u(0) is a strict Young function that is: φ : R → [0, +∞) is even and convex on R, φ(0) = 0 and lim x→+∞ φ(x) x = +∞. The Orlicz space L φ and Orlicz heart M φ are then defined respectively as and they are Banach spaces when endowed with the Luxemburg norm. Once one realizes that requesting the integrability condition E[φ(X)] < +∞ automatically yields E[u(X)] > −∞, it is easy to reach the conclusion that the Orlicz framework is a very natural setup when working with utility maximization problems. A detailed discussion on this can be found in [49]. Just to mention a few key properties that will help with understanding our discussion in the following, the topological dual of M φ is the Orlicz space L φ * , where the convex conjugate φ * of φ, defined by φ * (y) := sup x∈R {xy − φ(x)}, y ∈ R, is also a strict Young function. 
It is also well known that L ∞ (Ω, F , P) ⊆ M φ ⊆ L φ ⊆ L 1 (Ω, F , P). In addition, for a probability measure Q P such that dQ dP ∈ L φ * , the inclusion L φ ⊆ L 1 (Ω, F , Q) also holds true. Given the utility functions u 1 , . . . , u N : R → R with associated Young functions φ 1 , . . . , φ N , the underlying space L is given by Moreover, the space L Φ * := L φ * 1 × · · · × L φ * N , induced by the conjugates of the functions φ 1 , . . . , φ N , is the topological dual space of M Φ and the dual system (M Φ , L Φ * ) will be useful to obtain dual representation results. The following set of assumptions allows for several interesting findings on properties of the functional ρ B . They are tacitly assumed in Section 5. Assumption 1. 1. C For all n = 1, . . . , N, u n : R → R is increasing, strictly concave, differentiable, and satisfies the Inada conditions For all n = 1, . . . , N, it holds that, for any probability measure Q P where v n (y) := sup x∈R {u n (x) − xy} denotes the convex conjugate of u n . Under these assumptions, the functional ρ B turns out to be finite-valued, monotone decreasing, convex, continuous, and subdifferentiable on L ([53] Proposition 2.4). The following results highlight a natural counterpart for the multivariate case of well-known ones for (univariate) Risk Measures. The proof is based on an automatic continuity result, namely on the extended Namioka-Klee Theorem 1 [61]. (i) Suppose that, for some i, j ∈ {1, . . . , N}, i = j, we have ±(e i 1 A − e j 1 A ) ∈ C for all A ∈ F ,where e 1 , . . . , e N are the elements of the canonical basis of R N . Then, (ii) Suppose that ±(e i 1 A − e j 1 A ) ∈ C for all i, j and all A ∈ F . Then, A first relevant consequence of the dual representation result above is that it allows for establishing existence of allocations for the minimization problem expressed by ρ B . It is remarkable that a solution is not known to exist in the space L, thus an enlargement of the domain of optimization is needed (as customary, for example, in the classical utility maximization problems). An additional assumption is needed to establish existence. It is easy to check that C R satisfies this property, and other examples are provided in Section 5.1. For the existence of the optimal allocations in ρ B (in an extended sense), we will need the set Theorem 1 ( [53], Theorem 4.19). Let C = C 0 ∩ M Φ and suppose that C 0 ⊆ C R is closed for the convergence in probability and closed under truncation. For any X ∈ M Φ , there exists Y X ∈ C 0 ∩ L 1 (P; Q X ) such that so thatỸ X is the solution to the extended problem ρ B (X). The dual representation in Proposition 2 also allows for linking the Systemic Risk Measure based on random allocations (13) to one based on allocation of deterministic amounts. To this end, suppose that a probability vector Q = [Q 1 , . . . , Q N ] is given. Treating the vector of probability measures Q as a vector of pricing measures, it is possible to introduce a Systemic Risk Measure that is quite naturally associated with Q by The amount ρ Q B (X) represents the minimal systemic cost ∑ N n=1 E Q n [Y n ] among all Y ∈ L which fulfill the acceptability constraint E[∑ N n=1 u n (X n + Y n )] ≥ B. 
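The measures ρ B and ρ Q B lend themselves to direct numerical experimentation on a finite scenario space. The sketch below (exponential utilities, toy scenarios, an arbitrary level B, and our own function names throughout) solves ρ B once over the scenario-dependent allocations of C R and once over the deterministic allocations R N ; since R N ⊂ C R , the second value can never be smaller, and the gap quantifies the benefit of scenario-dependent, lender-of-last-resort type allocations.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
N, S = 2, 6                                  # two banks, six equiprobable scenarios
X = rng.normal(size=(N, S))                  # toy gain scenarios
alpha = np.array([1.0, 2.0])                 # exponential risk aversions (assumed)
B = -1.0                                     # acceptance level, B < sum_n u_n(+inf) = 0

def u(x, a):
    return -np.exp(-a * x) / a               # u_n(x) = -exp(-alpha_n x)/alpha_n

def expected_systemic_utility(Y):
    return np.mean(np.sum(u(X + Y, alpha[:, None]), axis=0))

def rho_B(scenario_dependent):
    if scenario_dependent:                   # Y in C_R: sum_n Y_n(w) constant over scenarios
        x0 = 2.0 * np.ones(N * S)
        cons = [{"type": "ineq",
                 "fun": lambda Yf: expected_systemic_utility(Yf.reshape(N, S)) - B}]
        cons += [{"type": "eq",
                  "fun": lambda Yf, s=s: Yf.reshape(N, S)[:, s].sum()
                                         - Yf.reshape(N, S)[:, 0].sum()}
                 for s in range(1, S)]
        obj = lambda Yf: Yf.reshape(N, S)[:, 0].sum()
    else:                                    # Y in R^N: deterministic allocations
        x0 = 2.0 * np.ones(N)
        cons = [{"type": "ineq",
                 "fun": lambda m: expected_systemic_utility(m[:, None] * np.ones((N, S))) - B}]
        obj = lambda m: m.sum()
    res = minimize(obj, x0, constraints=cons, method="SLSQP",
                   options={"maxiter": 500, "ftol": 1e-10})
    return res.fun

print("rho_B with random allocations       :", rho_B(True))
print("rho_B with deterministic allocations:", rho_B(False))
```

Replacing the constraint Y ∈ C R by a valuation of the allocations under a fixed vector of probability measures Q, as in the definition of ρ Q B above, only requires changing the objective to Σ n E Q n [Y n ] and dropping the equality constraints.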
Similarly, one may introduce the systemic utility maximization problem in the case when the valuation of the allocations is assigned by the expectation under Q that is: One can also consider its counterpart Notice that, both in (16) and (17), the allocation Y belongs to a vector space L of random variables, without the requirement Y ∈ C R . The problem π Q A (X) is a maximization of the expected systemic utility among all Y ∈ L satisfying the budget constraint ∑ (13) and (16) are a priori different objects: even though they both subsume the same systemic budget constraint (expected systemic utility above a certain threshold B), ρ B is defined only through the cash amount ∑ N n=1 Y n ∈ R, while ρ Q B relies on the computation of the value (or the cost) of the random allocations ∑ N n=1 E Q n [Y n ]. A similar comparison applies to π A and π Q A . In [53], it is shown that: (ii) if A := ρ B (X), the four problems above share the same (unique) solution Y X ; (iii) the dual optimizer Q X also satisfies which can be interpreted saying that it determines a systemic risk allocation (15). Drawing a parallel between this property and related findings in utility maximization theory, one might say that the domain D plays the same role here of the set of martingale measures for the underlying stock in the classical theory. Several conclusions can be drawn from the points above. Firstly, ρ Q X B is a valid alternative to ρ B (sharing same value and solution) and can then be used to compute the systemic risk. Additionally, (18) explains how the pricing operators given by E Q X [·] provide a valuation for the risk component Y n X of the optimal allocation which is consistent with ρ B . Items (ii-iv) above also suggest how Q X can be considered as pricing/valuation measure for allocating the amount ρ B (X) at initial time. More precisely: Definition 3. A vector (ρ n (X)) n=1,...,N ∈ R N is a systemic risk allocation of ρ(X) if it fulfills ∑ N n=1 ρ n (X) = ρ(X). The requirement ∑ N n=1 ρ n (X) = ρ(X) is known as the "Full Allocation" property; see, for example, [38]. In the case of deterministic allocations Y ∈ R N , i.e., C = R N , the optimal deterministic Y X is a somehow canonical risk allocation ρ n (X) := Y n X ∈ R. For general (random) allocations Y ∈ C ⊂ C R , at first glance, there is no direct counterpart of such a natural systemic risk allocation. However, as mentioned above, the properties listed above regarding ρ B (X) and ρ Q X B (X) provide evidence of the fact that a natural choice is Interpretation and Implementation of ρ(X) One main economic justification for the use of ρ B as in (13) for systemic risk valuation is that the optimal allocation Y X of ρ B (X) maximizes the expected systemic utility among all random allocations of cost less than or equal to ρ B (X) (problem π A , using Item (ii) on page 12). The class C determines the level of risk sharing between the banks, ranging from no risk sharing in the case of deterministic allocations C = R N , to the case of full risk sharing C = C R . In between, several intermediate cases can be considered: fix h ∈ {1, . . . , N} clusters of institutions, set n := [n 1 , . . . , n h ] ∈ N h , be the corresponding partition of {1, · · · , N} and let I m , m = 1, . . . , h denote the set of institutions belonging to the m-group. Then, the sets C (n) = C model a whole family of possible restrictions for allocations. 
Indeed, the conditions ∑ i∈I m Y i = d m model a constrained sharing and allocation procedure among the agents in the sole cluster I m . We conclude reporting some relevant remarks made in conclusion of the analysis in [53], explaining once again the relevance and potentiality of the family of Systemic Risk Measures in the form (13). (a) "the mechanism can be described as a default fund as in the case of a CCP". Indeed, the properties of ρ B inspire the following procedure: at time 0, according to some systemic risk allocation ρ n (X) n = 1, . . . , N, satisfying ∑ N n=1 ρ n (X) = ρ B (X), the amount ρ B (X) is collected. ρ n could be determined consistently using (19). At terminal time, the amount ρ B (X) is distributed among the institutions according to Y n X , the optimal scenario-dependent allocations satisfying ∑ N n=1 Y n X = ρ B (X), so that "the fund acts as a clearing house " . (b) Alternatively, in terms of capital requirements and risk sharing mechanism, at time 0, ρ n (X) (a capital requirement) is associated with each institution n = 1, . . . , N in the system. At terminal time, each bank provides (if negative) or collects (if positive) the amount Y n X − ρ n (X). This means that, at terminal time, a risk sharing mechanism takes place. Observe that this sharing mechanism is possible given that Remarkably, there is an incentive for a single bank to enter in such a mechanism, based on the principle of choosing a fair risk allocation, as explained below. A fundamental issue for each financial institution is to decide whether its allocated share of the total systemic risk determined by the risk allocation ] is fair. With the choice Q = Q X , it is possible to show ( [53], Corollary 4.3, Lemma 4.5 and (4.11)) that π A (X) = π Q X A (X) = max ∑ N n=1 a n =A, When the amount A is chosen to be equal to the risk measurement for the system that is (21) can be rewritten as Consequently, by exploiting Q X for valuation, the problem of systemic utility maximization (17) reduces to individual utility maximization problems for the single banks: The optimum Y n X and its value via the optimal measure Q X , namely E Q n X [Y n X ], are then fair for the n th bank in the system, as Y n X maximizes the individual indirect utility as in Equation (22). Notice, however, that this fairness principle for individual banks is conjugated with a systemic-regulatory mechanism that is expressed in (21) through the outer maximization over the allocations a ∈ R N . We will elaborate more on this feature when dealing with the equilibrium concept in Section 6. The Exponential Case: Explicit Formulas Explicit formulas can be found for the value of ρ(X), Y X and Q X in the case of exponential utility functions and for C = C (n) (see (20)). To be more specific, these formulas are available for the choices u n (x) = −e −α n x /α n , α n > 0, n = 1, . . . , N, and B < ∑ N n=1 u n (+∞) = 0. The vector Q X of probability measures with densities dQ m is the solution of the dual problem (14), i.e., and E Q m X [Y n X ], m = 1, . . . , h, n ∈ I m , is a systemic risk allocation, as in Definition 3. It is also possible to conduct a sensitivity analysis based on the explicit formulas above, and a study of monotonicity with respect to grouping (i.e., variation of the partition defining C (n) ). We refer the reader to [53] Sections 6.1 and 6.2 for the details. Remark 2. 
Unlike in the general case, for exponential utility functions, the optimum Y X in (23) actually belongs to the set L = M Φ , thus being an optimum for ρ(X) and ρ Q X (X) in the strict sense. On Systemic Optimal Risk Transfer Equilibrium In order to introduce an equilibrium concept, we reformulate some of the results on the Systemic Risk Measure in (13), for the case C = C R ∩ M Φ . Proposition 3. Suppose a proper optimum Y X ∈ M Φ exists for ρ B (X) and ρ Q X B (X). Then, Proof of Proposition 3. Observe that Observe now that setting a n = E Q n X Y n X and Z n = Y n X − E Q n X Y n X , we obtain that E Q n X [Z n ] = 0 and a n + Z n − E Q n X [Z n ] = Y n X . Additionally, ∑ N n=1 Z n = 0 by items (i)-(iv) on page 12. Hence, a defined in this way satisfies the constraints in RHS of (24). We then get ρ Q X (X) = ∑ N n=1 E Q X n Y n X ≥RHS of (24). Conversely, for any a, Z satisfying the constraints of RHS of (24), setting Y n = a n + Z n − E Q n X [Z n ], one gets a vector fulfilling the constraints of (25) and such that E Q n X [Y n ] = a n , implying that also ρ Q X (X) ≤RHS of (24), which completes the proof. The formulation in (24) has an interesting consequence for the interpretation of ρ B (X) = ρ Q X B (X). Consider indeed the following two-step procedure: each bank n pays or receives the amount a n (according to whether this is positive or negative) at the initial time. At terminal time, a second, scenario-dependent allocation takes place: in exchange for the price E Q n X [Z n ], bank n pays or receives the additional amount Z n . The terminal time allocation is a reinsurance exchange among the banks in the system, since the clearing condition ∑ N n=1 Z n = 0 imposes a "conservation of capital"for the system as a whole. Now, this procedure yields a terminal time value equal to a n + Z n − E Q n X [Z n ] which can be added to the initial position of X, yielding the final position X n + a n + Z n − E Q n X [Z n ] for bank n. ρ(X) is then the infimum total amount ∑ N n=1 a n which can be allocated at an initial time, in such a way that, for some reinsurance exchange, the overall satisfaction of the system (E ∑ N n=1 u n (X n + a n + Z n − E Q n X [Z n ]) ) is above the threshold B. A similar allocation and reinsurance procedure have been considered in [51], where the concept of Systemic Optimal Risk Transfer Equilibrium (SORTE) has been introduced as an equilibrium for a combination of capital allocation and reinsurance problems. Equilibria related to exchange and allocation procedures, as well as risk sharing problems aimed at reducing the overall risk of a system via reallocation, have been widely studied in the literature. For a review on Arrow-Debreu Equilibrium, we refer to Section 3.6 of [28]. Following in the spirit the Arrow-Debreu Equilibrium theory and working in the framework of a pure exchange economy, the authors in [62,63] provide the existence of so-called risk exchange equilibria. The study of such equilibria, even though in different forms, had been initiated by the seminal papers of Borch (see [64]). In [65], inf-convolutions of convex Risk Measures have been successfully applied in the study of risk sharing, and these have been further analyzed in [66][67][68][69][70][71]. This area of research has been very active and many other relevant contributions on risk sharing also appeared recently; in particular, we mention [72][73][74][75][76][77][78][79]. 
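The decomposition used in the proof of Proposition 3 can be visualized with a small numerical example. For simplicity we replace each component Q n X of the dual optimizer by the physical measure P (an assumption made purely for this sketch); all data are arbitrary. The example splits an allocation Y ∈ C R into deterministic initial-time payments a n = E[Y n ] and a zero-mean terminal-time exchange Z n = Y n − a n satisfying the clearing condition Σ n Z n = 0, which is exactly the allocation-plus-reinsurance reading developed next.

```python
import numpy as np

# A scenario-dependent allocation Y in C_R: 3 banks, 4 equiprobable scenarios,
# chosen (arbitrarily) so that each column sums to the same constant d = 3.
Y = np.array([[ 2.0, 0.0, 1.0,  3.0],
              [ 1.0, 1.0, 2.0, -1.0],
              [ 0.0, 2.0, 0.0,  1.0]])
d = Y.sum(axis=0)                       # [3, 3, 3, 3]: the deterministic total

# Decomposition of Proposition 3, here with every Q^n_X replaced by P:
a = Y.mean(axis=1)                      # deterministic initial-time payments a_n = E[Y_n]
Z = Y - a[:, None]                      # terminal-time exchange with E[Z_n] = 0

# Terminal value of the two-step procedure: a_n + Z_n - E[Z_n] = Y_n componentwise.
print(np.allclose(a[:, None] + Z, Y))   # True
print(a.sum())                          # 3.0: the total allocated at the initial time
print(Z.sum(axis=0))                    # clearing condition: sum_n Z_n = 0 in every scenario
```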
In addition, in this context, we consider a class of N agents, each having individual risky position or random endowments given by the components of the risk vector X := [X 1 , ..., X N ]. Instead of designing a measurement for quantifying at the initial time the risk for the system, the procedure we are to describe shows how to conjugate a capital allocation problem regarding an amount A ∈ R exogenously assigned to the system (say, a risk measurement obtained using a Systemic Risk Measure as described in previous sections) with a reinsurance mechanism. For additional possible interpretations of the amount A, we refer the reader to the related discussion in Section 5.2 of [53]. The SORTE combines a systemic optimal (deterministic) allocation with Bühlmann's Risk Exchange Equilibrium. In a one period framework {0, T}, each of the N agents is characterized by a strictly concave, strictly monotone utility function u n : R → R. Systemic Optimal (Deterministic) Allocation To describe a systemic optimal allocation, suppose that the amount A is allocated at an initial time among the agents in order to optimize the satisfaction/performance of the system as a whole. Denoting by a n ∈ R the cash received (provided) if positive (resp., if negative) by agent n, the terminal time endowment at disposal of agent n will be given by (X n + a n ). The optimal allocation a X ∈R N can then be determined according to an optimization problem for the system's utility in the form Assuming that the agents will retain a cooperative attitude also at a terminal time and that they will still believe in the overall reliability of the others, one can combine this initial time allocation with a reinsurance procedure inspired by [62,63]. The aim is twofold: on the one hand, this further increases the optimal total expected systemic utility. On the other hand, this guarantees that each participant will be maximizing his/her expected utility under budget constraints determined in an aggregated way by the system as a whole, thus adding a rationality component that takes into consideration also single agents' satisfaction. Risk Transfer Equilibrium In Bühlmann's Risk Exchange Equilibrium, each agent is entitled to risk exchange with the other participants. Agent n receives or provides the amount Y n (ω) at terminal time (with the same convention on signs as in the case of systemic optimal allocation above), in exchange for a price Q here is some pricing probability measure. Observe that Y n , being scenariodependent, is a terminal time measurable random variable. The exchange variables Y n have to satisfy the clearing condition: The clearing condition expresses the fact that no capital is produced or lost in this reallocation via Y, so that the Risk Exchange Equilibrium also describes a reinsurance mechanism. The pair ( Y X , Q X ) is a Risk Exchange Equilibrium if (a) for each n, Y n X maximizes among all variables Y n and (b) ∑ N n=1 Y n X = 0 P−a.s.. Systemic Optimal Risk Transfer Equilibrium The two procedures can now be combined as follows. Given some amount a n assigned to agent n, this agent may buy Y n at the price p n ( Y n ) in order to optimize E u n (a n + X n + Y n − p n ( Y n )) . As in Bühlmann's definition, with initial endowments X n + a n for each agent n = 1, . . . 
, N, the pricing functionals p n , n = 1, ..., N have to be selected in such a way that the optimal solution verifies the clearing condition However, a n is not exogenously assigned to each agent, but only the total amount A is at the disposal of the whole system. Thus, the optimal way to allocate A among the agents is given by the solution ( Y n X , p n X , a n X ), n = 1, . . . , N of the following problem: Among the main findings in [51], we mention existence, uniqueness, and Pareto optimality for SORTE. Additionally, explicit formulas are provided in the exponential case, i.e., when considering u n (x) := 1 − exp(−α n x), n = 1, . . . , N for α 1 , . . . , α N > 0 . The SORTE concept can be extended, see [80], by replacing the utilitarian aggregation of single agents' utility functions, namely ∑ N j=1 u j (x j ), to a multivariate utility function U : R N → R satisfying some regularity conditions including a multivariate formulation of the Inada conditions. As a particular example of such U, one may take U(x) := ∑ N j=1 u j (x j ) + Γ(x) for a concave increasing upper bounded function Γ : R N → R. As Γ is not required to be strictly increasing nor strictly concave, the special selection Γ = 0 leads to the original concept of SORTE. The additional term Γ could be imposed to the financial institutions in the system by some regulatory authority. One can show existence and uniqueness also for this multivariate extension of SORTE (called Multivariate Systemic Optimal Risk Transfer Equilibrium). In the multivariate extension, the preferences of each agent depend on the action of the other agents in the system. Hence, it is natural to formulate a Nash Equilibrium in this context. Remarkably, the optimal allocation Y n X of the multivariate SORTE is indeed a Nash Equilibrium (see [80] for details). Conditional Systemic Risk Measures The setting in the approaches described in the previous sections is static, that is: the Systemic Risk Measures as described above do not leave room for incorporating dynamic elements (additional information or presence of intermediate payoffs and valuation, just to mention a few examples). It is essentially a one period setup where the initial sigma algebra is assumed to be the trivial one {Ω, ∅}. In order to be able to consider the more realistic multiperiod setting, a further step consisting of selecting a general sub sigma algebra G ⊆ F and quantifying the risk of a position X given the information modeled by G is required. The resulting conditional Risk Measure will then be a map ρ G having a range in L 0 (Ω, G, P). This well established technique in the literature has mostly been applied in the framework of univariate dynamic Risk Measures. Among the first contributions on conditional convex Risk Measures, we mention [81]. Since then, this approach has gained more and more attention in the literature. We refer the reader to [82] for an overview on univariate dynamic Risk Measures. Several results have been obtained for the case of quasi-convex conditional maps and Risk Measures, see [32] and [83,84]. A vast amount of literature has focused on conditional counterparts to classical static results regarding dual representation and separation properties, using L 0 -modules. Among the many contributions in this stream of research, we mention [85][86][87][88][89] and references therein. Overall, the fact that the natural conditional counterparts hold for static results is not so surprising. The two are intrinsically related by a Boolean Logic principle. 
As seen in [90], traditional Theorems carry over to the conditional setup assuming that suitable concatenation properties hold. A Conditional Systemic Risk Measure is a map that associates to a N-dimensional risk factor X∈L F ⊆ (L 0 (Ω, F , P)) N a G-measurable random variable. This means that a conditional Systemic Risk Measure quantifies the risk of a given system taking into account the fact that more and more information accumulates over time. Conditional Systemic (multivariate) Risk Measures were first studied by [41,42]. In the latter, it is emphasized how additional information might also be of a spatial nature, namely concerning systemic relevant structures. The sigma algebra G might then incorporate information on the state of a subsystem, see [91] or [92]. The papers [41,42,93] consider only the conditional extension of (static) Systemic Risk Measures of the "first aggregate, then allocate " form. Multivariate and set-valued conditional Risk Measures, and related time consistency aspects, have also been analyzed in [94][95][96][97][98]. As can be easily guessed, a natural counterpart of Systemic Risk Measures of the form "first allocate, then aggregate" can also be defined in the conditional setting. This is the object of the analysis in [99], on which this section is based. Most properties of Shortfall Systemic Risk Measures, i.e., those in forms similar to (13), are seen to carry over to the conditional setting. Furthermore, the usual recursive property in the classical theory of time consistency for univariate Risk Measures can be replaced in the systemic conditional case by a new consistency property, this time of vector type, with respect to sub sigma algebras H ⊆ G ⊆ F . Further details on this will be provided in the following: For conditional Systemic Risk Measures ρ G in rather general form (conditionally convex and monetary, monotone maps), the following conditional dual representation holds: where Q G is a set of vectors of probability measures and α(Q) ∈ L 0 (Ω, G, P) is a penalty function. A specific interesting case of such maps ρ G is the class of Conditional Shortfall Systemic Risk Measure, associated with a multivariate utility functions U of the form U(x) = ∑ N j=1 u j (x j ) + Γ(x), already introduced in the previous section. These are inspired by the static ones (13), and defined by Here, B is a bounded, G-measurable random variable and the set of G-admissible allocations is C G ⊆ Y ∈ (L 1 (Ω, F , P)) N such that N ∑ n=1 Y n ∈ L ∞ (Ω, G, P) . Observe that each Y n is an F -measurable random variable, but ∑ N n=1 Y n is required to be G-measurable. These definitions clearly mimic those in (13) and (8). In the case of C G and C R , it is particularly evident how deterministic amounts have been substituted by G-measurable ones, which are "known once the information in G is known" . For the trivial selection G = {∅, Ω}, the authors in [99] extend the results in [53] to the more general aggregator U. Theorem 5.4 of [99] summarizes the fundamental properties of the Conditional Shortfall Systemic Risk Measure ρ G and, in particular, shows that (i) the functional ρ G takes values in L ∞ (Ω, G, P) and is both continuous from below and from above; (ii) the essential infimum in (27) is actually a minimum, attained at the vector of allocations Y(G, X) = [Y 1 (G, X), ..., Y N (G, X)] ∈ C G ; (iii) ρ G admits a dual representation which specializes the one in (26) with a more explicit formulation of the penalty function α and of the set Q G . 
(iv) the supremum in the dual representation (26) of ρ G is actually a maximum, with optimizer Q(G, X) = [Q 1 (G, X), ..., Q N (G, X)] which is a vector of probability measures also satisfying: Similarly to the static case described in Section 5.2, in the exponential case, it is possible to explicitly determine formulas for ρ G (X), the value of the Conditional Shortfall Systemic Risk Measure , for the primal minimizer Y(G, X) of ρ G (X) and for the dual optimum vector of probability measures Q(G, X). Such formulas closely resemble those in Theorem 2, with expectations replaced by conditional expectations w.r.t. G. As anticipated before, for given sub sigma algebras H ⊆ G ⊆ F , a new type of consistency property holds. Such a property seems to have no direct counterpart in the wellknown univariate case. Indeed, the consistency ρ H (−ρ G (X)) = ρ H (X), which is the usual time consistency property in the univariate case, is not even well defined in the systemic setting, as a Systemic Risk Measure maps a random vector into a single random variable. Nevertheless, time (or information) consistency properties turn out to be well defined for: (i) the vectors of primal minimizers Y(G, X) (for ρ G (X)) and Y(H, −Y(G, X)) (for ρ H (−Y(G, X))); (ii) the initial time allocation vectors (a(G, X)) k := (E Q k (G,X) [Y k (G, X)|G]) k (for ρ G (X)) and a(H, −a(G, X)) (for ρ H (−a(G, X))). Such consistency properties take the following form: It is also shown in [99] how the primal and dual optimal allocations for Shortfall Systemic Risk Measures, in both the static and dynamic cases, admit an interpretation in the sense of (a dynamic extension of) Multivariate Systemic Optimal Risk Transfer Equilibrium. Conclusions In this work, we presented an axiomatic theory for Systemic Risk Measures which extends the classical univariate framework. Such Systemic Risk Measures map the original risk carried by a system of financial institutions to real numbers that quantify the risk level for the system as a whole. The problem of determining such an amount is intimately related to allocation procedures, since it is implicit in these systemic risk measurement regimes that the total amount should serve to secure the system via allocation to each participant. We presented how different allocation procedures can be considered, before or after aggregation of individual risky positions on the one hand, at an initial or at terminal time on the other hand. In the case of terminal time random allocations and for Systemic Risk Measures of shortfall type, we presented how allocation procedures can be viewed as fair and satisfactory for both the system as a whole and the single agents in it. This is intimately related to the fact that optimal allocations in this case are suitably defined equilibria for the system (SORTE). We also showed how possible additional information at an initial time can be incorporated in the axiomatic theory for the systemic case using a conditional approach inspired by the univariate case. As briefly sketched in the previous sections, the theory of Systemic Risk Measures is rich and still has unexplored or partially unexplored ramifications. In the univariate setting, the study of Risk Measures in a continuous time framework generated several theoretical developments in different areas. Just to mention a few examples, connections have been found with BSDEs ( [82,[100][101][102]) and Nonlinear Expectation ( [103]). 
We believe that the continuous time theory may open up new and interesting research topics also in the systemic, multivariate case. Risk Measures have frequently been studied in the presence of a financial market. Indeed, given a stochastic market of securities, let the subset G ⊆ L 0 (Ω, F , P) describe the terminal time values of (self-financing) trading strategies in the available securities. The market adjusted Risk Measure is then the functional ρ : L ∞ (Ω, F , P) → [−∞, +∞] defined by ρ(X) := inf{m ∈ R | ∃g ∈ G : m + X + g ∈ A}. (28) In such a risk measurement regime, it is possible to trade in the underlying financial market in order to achieve an acceptable terminal position. A detailed study of these market adjusted Risk Measures and their relationship with the notion of no arbitrage can be found in [28] Section 4.8. Very recently, a similar approach has been developed for Systemic Risk Measures in [104], also in connection with a systemic notion of arbitrage. A deeper study of these connections, including equilibrium aspects, is still to be developed. A further topic for future investigation concerns the robust version of the theory of Systemic Risk Measures. In the last decade, many papers have aimed at establishing the fundamental Theorems of asset pricing and the key pricing-hedging duality in a probability-free setup, or in a non-dominated setting. In the latter case, a class of a priori non-dominated probability measures replaces the usual single reference probability P, leading to the theory of Quasi-Sure Stochastic Analysis ([105][106][107][108][109], just to mention a few). In the former case, instead, a pathwise and more radical approach is preferred (see, for example, [110][111][112]). The above-mentioned papers only treat the (robust) univariate case. A first attempt to examine Systemic Risk Measures in a probability-free environment is elaborated by [104], and we foresee that a more detailed investigation of this subject could be very fruitful. As is well known, Martingale Optimal Transport theory is an extremely powerful tool when aiming at pathwise pricing-hedging duality results ([113][114][115][116][117][118]). We observe that, also in the recent theory of Entropy Martingale Optimal Transport (see [119]), the connections in the robust setup between Optimal Transport and Utility Functionals/Risk Measures arise quite naturally from the main duality relation. Conflicts of Interest: The authors declare no conflict of interest.
Enzymes Responsible for Synthesis of Corneal Keratan Sulfate Glycosaminoglycans* Keratan sulfate glycosaminoglycans are among the most abundant carbohydrate components of the cornea and are suggested to play an important role in maintaining corneal extracellular matrix structure. Keratan sulfate carbohydrate chains consist of repeating N-acetyllactosamine disaccharides with sulfation on the 6-O positions of N-acetylglucosamine and galactose. Despite its importance for corneal function, the biosynthetic pathway of the carbohydrate chain and particularly the elongation steps are poorly understood. Here we analyzed enzymatic activity of two glycosyltransferases, β1,3-N-acetylglucosaminyltansferase-7 (β3GnT7) and β1,4-galactosyltransferase-4 (β4GalT4), in the production of keratan sulfate carbohydrate in vitro. These glycosyltransferases produced only short, elongated carbohydrates when they were reacted with substrate in the absence of a carbohydrate sulfotransferase; however, they produced extended GlcNAc-sulfated poly-N-acetyllactosamine structures with more than four repeats of the GlcNAc-sulfated N-acetyllactosamine unit in the presence of corneal N-acetylglucosamine 6-O sulfotransferase (CGn6ST). Moreover, we detected production of highly sulfated keratan sulfate by a two-step reaction in vitro with a mixture of β3GnT7/β4GalT4/CGn6ST followed by keratan sulfate galactose 6-O sulfotransferase treatment. We also observed that production of highly sulfated keratan sulfate in cultured human corneal epithelial cells was dramatically reduced when expression of β3GnT7 or β4GalT4 was suppressed by small interfering RNAs, indicating that these glycosyltransferases are responsible for elongation of the keratan sulfate carbohydrate backbone. In higher eukaryotes, high numbers of cells interact with each other and form complicated but well organized tissue structures. During the tissue formation in the developmental stage, cells recognize the surrounding environment and interact with neighboring cells and the extracellular matrix. Proteo-glycans (PGs) 3 are glycoproteins that carry linearly elongated polysaccharides called glycosaminoglycans (GAGs) on core proteins and that are largely found in the extracellular matrix and play important roles for development, maintenance, and function of the tissues. Production of the GAG chain takes place mostly in the Golgi apparatus. Several glycosyltransferases and carbohydrate-modifying enzymes such as sulfotransferases act on GAG elongation and modification. So far almost all of the enzymes involved in GAG production are cloned and analyzed for their enzymatic activities; however, the process of GAG production is still unknown in some GAGs because of the presence of multiple enzymes with redundant activities. Keratan sulfate (KS) PGs are major components of the cornea and also found in cartilage and brain. Because of the high concentration of these molecules in the cornea, their biological function has been extensively studied and found to include maintenance of corneal extracellular matrix structure (1)(2)(3)(4)(5)(6)(7)(8)(9). Corneal KS PGs consist of PG core proteins, such as lumican, keratocan, and mimecan, carrying KS GAGs in an N-linked manner (10 -12). KS GAG is a linear carbohydrate chain made of sulfated disaccharide repeats of -3Gal␤1-4GlcNAc␤1-with sulfate on the 6-O position of GlcNAc and galactose (10 -12) and exhibiting modifications such as fucose and sialic acid (13,14). 
Production of the KS GAG chain on PGs is processed by glycosyltransferases and sulfotransferases localized in the Golgi apparatus, and matured KS PGs are secreted into the extracellular matrix. Elongation of the carbohydrate backbone of the KS GAG chain is catalyzed by enzymes of two glycosyltransferase families, ␤1,3-N-acetylglucosaminyltransferase (␤3GnT) and ␤1,4-galactosyltransferase (␤4GalT), and sulfation of the chain is catalyzed by two carbohydrate sulfotransferases. Recent studies of carbohydrate sulfotransferases have identified enzymes responsible for sulfation of the corneal KS GAG chain as KS galactose 6-O sulfotransferase (KSG6ST) and corneal N-acetylglucosamine 6-O sulfotransferase (CGn6ST, also known as GlcNAc6ST-5 and GST4␤) (15,16). Because CGn6ST only transfers sulfate on the nonreducing terminal GlcNAc but not onto internal GlcNAc, sulfation of GlcNAc * This work is supported by National Institutes of Health Grant EY014620 (to T. O. A.). The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact. □ S The on-line version of this article (available at http://www.jbc.org) contains supplemental Tables S1-S5 and Fig. S1. 1 residues of KS GAG is coupled with KS GAG elongation (16). On the other hand, KSG6ST transfers sulfate on galactose located both internally and on the nonreducing terminal of the carbohydrate chain (15,17). KSG6ST also prefers a sulfated carbohydrate as substrate (15,17), suggesting that galactose sulfation occurs after production of the GlcNAc-sulfated poly-N-acetyllactosamine chain and that GlcNAc sulfation is necessary for sulfation of galactose residues by KSG6ST. Patients with macular corneal dystrophy type I, which is caused by deficiency of functional CGn6ST, have no detectable highly sulfated KS in the cornea (18), serum, and cartilage (19,20), suggesting that GlcNAc sulfation is required to produce highly sulfated KS GAG. Observations that macular corneal dystrophy patients with CGn6ST mutations develop corneal opacities (21) and that mice lacking the orthologous sulfotransferase display corneal thinning with abnormalities in corneal extracellular matrix structure (9) indicate important roles for KS GAG sulfation in the function and maintenance of the cornea. However, unlike KS sulfation, mechanisms underlying KS GAG chain elongation are poorly understood. In humans, ␤3GnT and ␤4GalT enzymes are encoded by eight and seven genes, respectively. Among these, ␤3GnT7 and ␤4GalT4 are thought to be responsible for elongation of the KS carbohydrate chain because these enzymes have higher activity for sulfated than nonsulfated substrates (22,23); however, direct evidence in support of this hypothesis has not been reported. Here, using soluble forms of recombinant enzymes and glycosidase-assisted column chromatography, we demonstrate that ␤3GnT7 and ␤4GalT4 can produce KS GAG carbohydrate in vitro. We also observed that suppressing expression of either ␤3GnT7 or ␤4GalT4 reduced highly sulfated KS GAG production in cultured human corneal cells, indicating that these glycosyltransferases are responsible for KS GAG elongation. Preparation of Soluble Enzymes-Expression vectors or an empty pcDNA3.1-HSH were transfected individually into Lec20 cells (26), kindly provided by Dr. Pamela Stanley, using Lipofectamine Plus reagent (Invitrogen) according to the man-ufacturer's instructions. 
The cells were then cultured for 24 h with α-minimum essential medium (Irvine Scientific, Santa Ana, CA) containing 10% fetal bovine serum, after which the medium was replaced with Opti-MEM medium (Invitrogen) supplemented with 40 µg/ml of L-proline and the cells were cultured for a further 24 h at 37 °C. We then concentrated the culture medium up to 200-fold using a Microcon YM-30 (Millipore Corp., Bedford, MA) and mixed that medium (1:1) with glycerol for storage at −20 °C. Protein production was confirmed by Western analysis using alkaline phosphatase-conjugated anti-T7 tag antibody (Novagen, Madison, WI) and the LumiPhos WB chemiluminescence solution (Pierce).

Column Chromatography-To determine the size of reaction products, we performed gel filtration chromatography with a column (1.0-cm diameter × 120-cm length) of Bio-Gel P-4 (Bio-Rad) equilibrated with 100 mM ammonium acetate buffer, pH 6.8. The reaction mixture samples were applied to the column, and the eluate was collected in 1-ml fractions. The elution pattern of carbohydrate products was monitored by counting 35S radioactivity using a liquid scintillation counter. For further analyses, we pooled radioactive fractions, desalted them by Sephadex G-25 gel filtration (Sigma) equilibrated with 7% (v/v) 1-propanol/water, and lyophilized the samples again. To identify the degree of sulfation of the products, we separated them by anion exchange HPLC using a Whatman Partisil SAX-10 column (4.6-mm diameter × 25-cm length). The column was first equilibrated with 60% acetonitrile/water, and then, after loading a sample, the products were eluted under the following conditions: 60% acetonitrile/water for 5 min, followed by a linear gradient from 60% acetonitrile/water to either 45 mM (for up to tetrasulfated materials) or 70 mM (for up to heptasulfated materials) KH2PO4 containing 60% acetonitrile/water over 75 min. The flow rate was 1 ml/min, 1-ml fractions were collected, and 35S radioactivity was monitored as described. To evaluate sulfation status, we prepared multi-sulfated carbohydrate standards by enzymatic reactions as described previously (16).

Glycosidase-assisted Carbohydrate Structure Determination-To determine the nonreducing terminal structure of materials in P-4 gel filtration fractions, the purified materials were treated with glycosidases and analyzed as described above. For β-galactosidase and hexosaminidase A treatments, lyophilized samples were dissolved in water and incubated with either 25 milliunits of jack bean β-galactosidase (Seikagaku Co., Tokyo, Japan) or 5 milliunits of human placental hexosaminidase A (Sigma) in a 30-µl mixture containing 50 mM sodium citrate buffer, pH 3.5, at 37 °C for 5 min (β-galactosidase) or for 16 h (hexosaminidase A). The samples were then boiled for 5 min, and the digests were separated by column chromatography. For keratanase treatment, we incubated purified samples with 75 milliunits of keratanase from Pseudomonas sp. (Seikagaku Co.) in a 20-µl mixture containing 100 mM Tris-HCl, pH 7.5, at 37 °C for 1 h, stopped the enzymatic reaction by boiling, and subjected the samples to column chromatography.

KS Synthesis in Cultured Human Corneal Epithelial Cells-SV40-immortalized human corneal epithelial cells were maintained in Dulbecco's modified Eagle's medium/Ham's F-12 50/50 mix medium (Mediatech, Inc., Herndon, VA) supplemented with 15% fetal bovine serum, 4.2 µg/ml of bovine insulin, 0.8 µg/ml of cholera toxin (Invitrogen), 8.3 µg/ml of mouse epidermal growth factor (Invitrogen), and 33 µg/ml of gentamycin (Sigma).
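For orientation, the anion-exchange gradient described earlier in this section can be expressed numerically; the short Python sketch below merely restates the stated program (5-min isocratic hold, then a 75-min linear ramp to either 45 or 70 mM KH2PO4 in 60% acetonitrile at 1 ml/min), and the function and variable names are our own, not from the original protocol:

def kh2po4_mM(t_min, final_mM=45.0, hold_min=5.0, ramp_min=75.0):
    """KH2PO4 concentration (mM) in the eluent at time t_min for the gradient described above."""
    if t_min <= hold_min:
        return 0.0                                  # isocratic 60% acetonitrile/water hold
    return final_mM * min(t_min - hold_min, ramp_min) / ramp_min   # linear ramp over 75 min

# At 1 ml/min with 1-ml fractions, fraction n elutes at roughly t = n minutes.
for fraction in (5, 20, 40, 80):
    print(fraction, round(kh2po4_mM(fraction, final_mM=70.0), 1), "mM")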
To produce sulfated KS, we co-transfected CGn6ST and KSG6ST expression vectors into 1 × 10^6 cells using a Nucleofector electroporator (Amaxa Inc., Gaithersburg, MD) according to the manufacturer's instructions and cultured the cells in the above medium at 37 °C for 48 h. To analyze the effects of glycosyltransferase expression on KS production, we co-transfected CGn6ST and KSG6ST expression vectors together with specific glycosyltransferase siRNAs (sequence information is listed in supplemental Table S1), which were obtained from Ambion (Austin, TX). As a negative control, we used a commercially available negative control siRNA (NC#2; Ambion).

Western Blot Analysis of Highly Sulfated KS-Transfected cells were washed with cold PBS five times, and lysates were prepared using 120 µl of lysis buffer containing 50 mM Tris-HCl, pH 7.4, 1% Nonidet P-40, 150 mM NaCl, a 1× proteinase inhibitor mixture (Sigma), and 1 mM phenylmethylsulfonyl fluoride. After SDS-PAGE, the proteins were electroblotted to an Immobilon-P transfer membrane (Millipore). The membranes were treated with blocking buffer (10% nonfat milk in PBS) at room temperature for 1 h and then probed with 5D4 anti-KS antibody (1:4000) (Seikagaku Co.) in blocking buffer for 1 h. The membranes were washed in 0.05% Tween 20 in PBS three times and reacted with horseradish peroxidase-conjugated goat anti-mouse IgG antibody (1:4000) in blocking buffer for 1 h. After washing the membranes in 0.05% Tween 20 in PBS three times, signals were detected using the ECL Plus reagent (GE Healthcare, Piscataway, NJ). Signal intensity was calculated using NIH Image 1.62 software.

Quantitative Reverse Transcription-PCR Analysis-Total RNA was isolated from transfected cells with the RNeasy Mini kit (Qiagen) according to the manufacturer's instructions. After DNase I treatment, we subjected samples to reverse transcription using Superscript II reverse transcriptase (Invitrogen). Quantitative PCR analysis was carried out using Power SYBR Green PCR master mix (Applied Biosystems, Foster City, CA) with a specific primer set for each glycosyltransferase gene (see supplemental Table S2). Amplification of DNA products was monitored by the Mx 3000 QPCR system (Stratagene, La Jolla, CA) using the following reaction conditions: initial denaturation at 95 °C for 10 min followed by 40 cycles of denaturation at 95 °C for 30 s, annealing at 55 °C for 1 min, and extension at 72 °C for 30 s. Cyclophilin A mRNA served as the internal control.

RESULTS

GlcNAc-sulfated Poly-N-acetyllactosamine Production by β3GnT7, β4GalT4, and CGn6ST in Vitro-To analyze the enzymatic activity of β3GnT7 and β4GalT4 for KS carbohydrate synthesis in vitro, we constructed expression vectors encoding soluble glycosyltransferases and the sulfotransferases, KSG6ST and CGn6ST, and transfected Lec20 cells, a cell line derived from Chinese hamster ovary cells, with the vectors individually to secrete the enzymes into the medium (Fig. 1). Because Lec20 cells do not produce endogenous β4GalT1 (26), most glycosyltransferase activity in the medium originates from the transfected expression vector (Fig. 1B). [Figure 1 legend (fragment): The monosulfated trisaccharide substrate, 35SO3−-6GlcNAcβ1-6Manα1-6Manα-octyl, was treated with concentrated culture medium of mock-transfected Chinese hamster ovary (CHO) cells (open circles), mock-transfected Lec20 cells (closed circles), or Lec20 cells transfected with an expression vector encoding soluble β4GalT4 (closed triangles), and analyzed by P-4 gel filtration. The retention position of the monosulfated trisaccharide substrate is marked with S. Note that addition of galactose onto the substrate molecule was observed in the reaction with culture medium of mock-transfected Chinese hamster ovary cells but not in that of mock-transfected Lec20 cells.] Enzymes prepared from the culture medium were mixed in several combinations and incubated with the monosulfated trisaccharide substrate, 35SO3−-6GlcNAcβ1-6Manα1-6Manα-octyl, and the carbohydrate donor substrates UDP-GlcNAc, UDP-Gal, and PAPS.
After 16 h of incubation at 37 °C, the reaction products were analyzed by P-4 gel filtration column chromatography (Fig. 2). When the substrate was reacted with a mixture of the β3GnT7 and β4GalT4 glycosyltransferases, two major products were detected as molecules larger than the substrate (Fig. 2A), indicating that the two enzymes can add carbohydrate onto the carbohydrate substrate. When we incubated the substrate with a mixture of β3GnT7, β4GalT4, and CGn6ST, we detected several products of much larger size (Fig. 2B) than the products from a mixture lacking CGn6ST (Fig. 2A). To identify the carbohydrate structure of the products, we collected the products separately and analyzed them by glycosidase treatment followed by column chromatography. Fraction I, containing the most abundant product, was slightly larger than the starting substrate by P-4 gel filtration column chromatography, and its size was not altered by hexosaminidase A treatment (Fig. 2C). However, a component of fraction I was converted to a digested fraction with the same retention position as the starting substrate by β-galactosidase treatment (Fig. 2C). This result indicates that the product in fraction I was a tetrasaccharide carbohydrate resulting from addition of one galactose onto the nonreducing terminal of the starting trisaccharide substrate by β4GalT4. This tetrasaccharide product was next analyzed by SAX-10 anion exchange chromatography, and the product was found to be a monosulfated carbohydrate (Fig. 3A). From these results, we conclude that the product was the monosulfated tetrasaccharide Galβ1-4(SO3−-6)GlcNAcβ1-6Manα1-6Manα-octyl. We next analyzed the carbohydrate structure found in fraction II (Fig. 2B) using the same strategy. A chromatogram of fraction II on a P-4 column showed a broader peak (Fig. 2D), but the pattern was converted to a sharper peak by β-galactosidase treatment (fraction II′ in Fig. 2D). Meanwhile, hexosaminidase A treatment, which hydrolyzes both nonsulfated GlcNAc and 6-O-sulfated GlcNAc located on the nonreducing terminal of a carbohydrate substrate (27), converted constituents of fraction II into two components (II-1 and II-2 in Fig. 2D). Because the retention positions of fractions II′ and II-1 were very close but not identical, and because both peaks overlapped with the original broad peak of fraction II (Fig. 2D), we conclude that fraction II contains a mixture of fractions II′ and II-1 and that the II-1 component is hydrolyzed by β-galactosidase and converted to component II′ (Fig. 2D). We also performed sequential digestion of fraction II components with β-galactosidase followed by hexosaminidase A and found that all products were converted to a single fraction having the same elution position as fraction II-2 and also fraction I (Fig. 2D).
These results indicate that the carbohydrate backbone structure of fraction II is a mixture of a pentasaccharide, GlcNAcβ1-3Galβ1-4GlcNAcβ1-6Manα1-6Manα1-octyl (fraction II′), and a hexasaccharide, Galβ1-4GlcNAcβ1-3Galβ1-4GlcNAcβ1-6Manα1-6Manα1-octyl (fraction II-1), and that hexosaminidase A treatment converted the pentasaccharide product (fraction II′) into a tetrasaccharide carbohydrate, Galβ1-4GlcNAcβ1-6Manα1-6Manα1-octyl (fraction II-2). The sulfation status of the components of fractions II, II-1, and II-2 was analyzed by a SAX-10 column (Fig. 3, B-D). Fraction II was separated into two components, monosulfated material and disulfated material (Fig. 3B). Fraction II-1 was also separated into two components, both of which eluted at the same positions as fraction II, but the proportion of mono- to disulfated materials in fraction II-1 was nearly 1:1 (Fig. 3C). Because the increased sulfation was apparently due to the presence of CGn6ST, the disulfated components found in fractions II and II-1 are attributed to the transfer of an additional sulfate by CGn6ST. Similar to fraction II, components of fractions III and IV were separated into two fractions by hexosaminidase A treatment (Fig. 2, E and F), indicating that each fraction contains carbohydrates of two different lengths, with or without exposed GlcNAc at the nonreducing terminal. Similar to what was seen in fraction II, sequential treatment of the carbohydrate components in fraction III with β-galactosidase and hexosaminidase A converted the two carbohydrates into a single component exhibiting a sharp peak with a retention position identical to fraction II-1 on the P-4 chromatogram (Fig. 2E). This result indicates that the carbohydrate components of fraction III are hepta- and octasaccharides, both of which are elongated products of the fraction II-1 carbohydrate, in which either a Galβ1-4GlcNAcβ1- or a GlcNAcβ1- structure has been added onto the fraction II-1 carbohydrate backbone. These results suggest that all observed carbohydrate products were made of an incomplete poly-N-acetyllactosamine backbone consisting of repeating N-acetyllactosamine units, with or without galactose on the nonreducing terminal, built on the starting substrate. We therefore estimated that fraction IV consisted of nona- and decasaccharide carbohydrates made of poly-N-acetyllactosamine repeats attached to the starting substrate. The sulfation status of carbohydrates in fractions III and IV was again analyzed by SAX-10 HPLC before and after hexosaminidase A treatment (Fig. 3, E-J). By this analysis, we observed multi-sulfated carbohydrate products in the fractions up to tetrasulfated status (Fig. 3, H and I). Because we conclude that fraction III contains hepta- and octasaccharides, both of which carry three GlcNAc residues in their carbohydrate backbones, and because we included CGn6ST in the reaction mixture, the products should be trisulfated carbohydrates if the molecules were fully sulfated. By SAX-10 HPLC analysis, however, we found that more than 50% of the products in fraction III were mono- and disulfated (Fig. 3E), indicating that the products were not fully sulfated on their GlcNAc residues. To understand the sulfation pattern of the undersulfated carbohydrate products, we treated components of fraction III with keratanase, which recognizes the GlcNAc-sulfated disaccharide SO3−-6GlcNAcβ1-3Gal linked to GlcNAc (with or without sulfation) via a β1-4 linkage and hydrolyzes that linkage (Fig. 4A).
Thus, when a nonradioactive sulfate is attached to the GlcNAc located close to the 35SO3−-GlcNAc, the molecule will be hydrolyzed and converted to a radiolabeled trisaccharide. If the nonradioactive sulfate is attached to the GlcNAc located close to the nonreducing terminal, the molecule will be converted to a pentasaccharide (Fig. 4A). Following keratanase treatment, almost all fraction III components were converted to a trisaccharide component (Fig. 4B). The sulfation status of the components of fraction III, which was originally identified as a mixture of mono-, di-, and trisulfated carbohydrates, was also identified as monosulfated following keratanase treatment, indicating that carbohydrates in fraction III were sulfated on the internal GlcNAc located closest to the 35SO3−-GlcNAc. This result suggests that GlcNAc sulfation of carbohydrate products by a mixture of β3GnT7/β4GalT4/CGn6ST occurred consecutively rather than randomly during elongation of the carbohydrate backbone. Thus, the carbohydrate structures of the products synthesized by the three enzymes β3GnT7, β4GalT4, and CGn6ST are summarized in Table 1. [Table 1 legend: Determined carbohydrate structures and their molar ratio in the GlcNAc-sulfated poly-N-acetyllactosamine product synthesized by a mixture of β3GnT7/β4GalT4/CGn6ST in vitro. F, galactose; ■, GlcNAc; E, mannose. *, actual components found in these fractions are the illustrated structures without the nonreducing terminal GlcNAc, because of hexosaminidase A treatment. **, these structures are estimated from the results of keratanase-treated disulfated hepta/octasaccharide structures.] Based on these results, we illustrated the synthetic flow of the carbohydrate products generated by the three enzymes and calculated the efficiencies of each enzymatic reaction step (supplemental Fig. S1). We also calculated the average reaction efficiency for each enzymatic reaction that takes place repeatedly by the same enzyme to produce the same nonreducing terminal structure during GlcNAc-sulfated poly-N-acetyllactosamine chain production (Fig. 5).

Highly Sulfated KS Production by β3GnT7, β4GalT4, CGn6ST, and KSG6ST in Vitro-Corneal KS GAG is highly sulfated on both GlcNAc and galactose residues (10-12). Sulfation of galactose residues is catalyzed by KSG6ST (15). From its substrate specificity, sulfation of galactose by KSG6ST likely occurs after production of GlcNAc-sulfated poly-N-acetyllactosamine GAG by CGn6ST and the glycosyltransferases (11, 16, 23). Using soluble enzymes including KSG6ST, we tested production of highly sulfated KS GAG in vitro. The reaction products of a mixture of β3GnT7, β4GalT4, and CGn6ST included elongated carbohydrates of several lengths (Fig. 6A), and one fraction containing a mixture of nona/decasaccharides was identified to have mostly tri- and tetrasulfated structures (Fig. 6E). When the β3GnT7/β4GalT4/CGn6ST products were further treated with KSG6ST, the degree of sulfation of the nona/decasaccharide fraction increased up to heptasulfated products (Fig. 6F), indicating that GlcNAc-sulfated poly-N-acetyllactosamine carbohydrates were utilized and converted to highly sulfated KS carbohydrate following KSG6ST treatment. When a trisaccharide carbohydrate substrate was treated with all four enzymes (β3GnT7, β4GalT4, CGn6ST, and KSG6ST), elongation efficiency was equivalent to that found in the reaction with three enzymes (Fig. 6, A and C). The products were also sulfated up to hexasulfated components (Fig.
6G), which was a higher degree of sulfation than that seen in the β3GnT7/β4GalT4/CGn6ST mixture (Fig. 6E) but lower than that seen in a two-step reaction with β3GnT7/β4GalT4/CGn6ST followed by KSG6ST (Fig. 6F). Interestingly, treatment of a trisaccharide substrate with a mixture of KSG6ST and the β3GnT7 and β4GalT4 glycosyltransferases elongated the products up to nona/decasaccharides, and these products were also sulfated up to tetrasulfated components (Fig. 6H). Because treatment of the substrate with the two glycosyltransferases without any sulfotransferase only produced up to penta/hexasaccharide carbohydrates (Fig. 2A), this result indicates that KSG6ST enhanced elongation of the poly-N-acetyllactosamine backbone by the glycosyltransferases. From these results, we conclude that the four enzymes β3GnT7, β4GalT4, CGn6ST, and KSG6ST are sufficient to synthesize highly sulfated KS carbohydrate in vitro.

β3GnT7, β4GalT4, CGn6ST, and KSG6ST Are Required for Sulfated KS Production in Cultured Corneal Cells-We next analyzed production of KS GAG in cultured corneal cells. Under normal culture conditions, SV40-transformed human corneal epithelial cells do not produce detectable levels of highly sulfated KS GAG, which is recognized by the 5D4 monoclonal antibody (28, 29). However, the corneal cells began producing 5D4-positive, highly sulfated KS carbohydrate when CGn6ST and KSG6ST were overexpressed (Fig. 7, A and D), consistent with our previous observation of KS production in HeLa cells (25). These findings indicate that both sulfotransferases are required to produce highly sulfated KS carbohydrate. The human genes B3GNT7 and B4GALT4 encode β3GnT7 and β4GalT4, respectively, and corneal epithelial cells endogenously express both genes (Fig. 7, G and H). When we suppressed B3GNT7 expression in the cultured corneal epithelial cells by transfection of specific siRNAs, 5D4-positive KS production was dramatically reduced to less than 10% of the levels seen in cells transfected with negative control siRNA, even though both CGn6ST and KSG6ST were overexpressed (Fig. 7, D and G). Similarly, we observed a 50% reduction of 5D4-positive product in the cells using B4GALT4 siRNAs (Fig. 7, E and H). Because siRNAs for other glycosyltransferases, such as β1,3-N-acetylglucosaminyltransferase-2 (β3GnT2) and β1,4-galactosyltransferase-1 (β4GalT1), both of which are likely major contributors to poly-N-acetyllactosamine elongation, did not have a significant effect on production of highly sulfated KS in these cells (Fig. 7, F and I), we conclude that both β3GnT7 and β4GalT4 are required for elongation of the KS carbohydrate backbone in human corneal cells.

DISCUSSION

In this study, we demonstrate that β3GnT7 and β4GalT4 are required for KS GAG production in vitro and in cultured human corneal cells. β3GnT7 was originally identified as a gene involved in tumor invasiveness and later as a member of the β3GnT family (30). β4GalT4 was identified as a member of the β4GalT family by expressed sequence tag database searches (31) and suggested to function in neolacto-series glycosphingolipid biosynthesis (32). Seko et al. (22) reported β4GalT activity that acts preferentially on sulfated substrates in human colorectal mucosa and later found that the enzyme responsible for that activity was β4GalT4.
Using in vitro analysis, they suggested that, among the seven enzymes of the β4GalT family, β4GalT4 functions in biosynthesis of the sulfo sialyl Lewis X structure as well as in production of KS GAG (22). They also analyzed the enzymatic activity of a COS-7 cell membrane fraction containing overexpressed β3GnTs using several carbohydrate substrates and found that β3GnT7 has higher enzymatic activity for sulfated N-acetyllactosamine substrates than for nonsulfated substrates, suggesting that β3GnT7 functions in KS biosynthesis (23). Their studies suggested that the four enzymes β3GnT7, β4GalT4, CGn6ST, and KSG6ST could produce sulfated KS GAG in vitro (23), but such an observation had not been reported until this study. KS GAG biosynthesis apparently occurs in two stages: production of the GlcNAc-sulfated poly-N-acetyllactosamine chain by β3GnT7, β4GalT4, and CGn6ST, and production of highly sulfated KS by sulfation of galactose residues by KSG6ST (11, 16, 22, 23). CGn6ST transfers sulfate only onto the nonreducing terminal GlcNAc (16); therefore, sulfation of GlcNAc residues must be coupled to elongation of the carbohydrate backbone processed by the glycosyltransferases. On the other hand, KSG6ST has much higher sulfation activity on substrates containing an N-acetyllactosamine disaccharide with sulfate on GlcNAc (15, 17), suggesting that sulfation of galactose occurs after production of the GlcNAc-sulfated poly-N-acetyllactosamine chain. Furthermore, because β3GnT7 has very weak glycosyltransferase activity on a sulfated substrate when the nonreducing terminal galactose is sulfated (23), sulfation of the nonreducing terminal galactose by KSG6ST may inhibit elongation of the KS GAG backbone. However, when we reacted a sulfated carbohydrate substrate with all four enzymes including KSG6ST, elongation of the carbohydrate products was not significantly inhibited (Fig. 6C), although the degree of product sulfation was not as high as that seen with β3GnT7/β4GalT4/CGn6ST treatment followed by the KSG6ST reaction (Fig. 6, F and G). This result indicates that production of a GlcNAc-sulfated poly-N-acetyllactosamine chain is not inhibited by the presence of KSG6ST. Surprisingly, when we reacted the substrate with three enzymes (β3GnT7, β4GalT4, and KSG6ST) without CGn6ST, we observed elongated carbohydrates, which were not seen in a reaction of the two glycosyltransferases with no sulfotransferase (Figs. 2A and 6D). This unanticipated result can be explained as follows. First, elongation of the carbohydrate chain by β3GnT7 and β4GalT4 is not coupled to sulfation of galactose by KSG6ST because the nonreducing terminal galactose is a less favored substrate for KSG6ST (17). Second, nonsulfated carbohydrate products made of repeating Galβ1-4GlcNAc disaccharides are less hydrophilic and therefore, because of poor solubility, less efficient substrates for continuous chain elongation by the glycosyltransferases. Indeed, production of nonsulfated or less sulfated carbohydrate may be a primary cause of corneal deposit formation in macular corneal dystrophy patients (33). Thus, when KSG6ST is present during carbohydrate synthesis by β3GnT7 and β4GalT4, elongated carbohydrates are occasionally, rather than cooperatively, sulfated on galactose residues by the sulfotransferase, such that the sulfated products become more hydrophilic and are utilized for further elongation by the glycosyltransferases.
Taken together, we conclude that highly sulfated KS is mainly produced by β3GnT7, β4GalT4, CGn6ST, and KSG6ST and that biosynthesis of the carbohydrate occurs in two steps: production of the GlcNAc-sulfated poly-N-acetyllactosamine structure followed by sulfation of galactose residues. Because natural corneal KS GAG chains are efficiently elongated to more than 10 sulfated N-acetyllactosamine repeats (11, 12, 34), we hypothesize that in KS GAG-producing cells, β3GnT7, β4GalT4, and CGn6ST are tightly associated or closely co-localized with each other to achieve efficient production of GlcNAc-sulfated poly-N-acetyllactosamine chains in the Golgi, whereas KSG6ST may be localized elsewhere or not tightly associated with the other enzymes. Further analysis of cellular localization, or biochemical examination such as immunoprecipitation, is required to test this hypothesis.

A mixture of β3GnT7, β4GalT4, and CGn6ST produced GlcNAc-sulfated poly-N-acetyllactosamine carbohydrate in vitro. We confirmed production of up to a tetrasulfated decasaccharide (Table 1); however, the products clearly contained longer and more sulfated carbohydrates, because P-4 column chromatography detected peaks that likely contain larger molecules (Fig. 2B). Although long carbohydrate products were made by a simple reaction with the three-enzyme mixture, the length of the products was much less than that of corneal KS GAG (11, 12), indicating that the reaction efficiency under the conditions of this study was not as high as that achieved in vivo. Further optimization, such as altering the ratio of the three enzymes or the concentration of donor molecules, may improve the efficiency of KS carbohydrate production in vitro. Under our reaction conditions, although galactosylation of the nonreducing terminal GlcNAc by β4GalT4 was more than 90% efficient, the efficiencies of the other two reactions were about 50% (Fig. 5), indicating that the conditions for these steps must be improved to generate longer carbohydrate products. However, the most inefficient step of KS GAG production in vitro was the first β3GnT7 reaction, which transfers GlcNAc to the nonreducing terminal galactose of the monosulfated tetrasaccharide carbohydrate. Because 82.2% of the products remained as Galβ1-4(SO3−-6)GlcNAcβ1-6Manα1-6Manα1-octyl (Table 1), the efficiency of GlcNAc transfer at this step was only 17.8%, which is one-third the average efficiency of the β3GnT7 enzyme observed throughout sulfation-coupled poly-N-acetyllactosamine elongation (Fig. 5), suggesting that the tetrasaccharide, Galβ1-4(SO3−-6)GlcNAcβ1-6Manα1-6Manα1-octyl, may not be a suitable β3GnT7 substrate. A commonly recognized carbohydrate structure of N-linked corneal KS consists of an extended linear KS GAG chain attached to a complex-type N-glycan core structure via one or two repeats of nonsulfated N-acetyllactosamine disaccharides (11, 12, 34). β3GnT7 may have less activity on galactose located close to the N-glycan core structure, and formation of the N-acetyllactosamine repeats immediately adjacent to the N-glycan core may be catalyzed by other glycosyltransferases. Indeed, among β3GnT family proteins, β3GnT7 shows the lowest activity on a nonsulfated N-acetyllactosamine connected to a complex-type N-glycan core (23), supporting this idea.
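As a check on the figures quoted above (a simple worked calculation using only the 82.2% value from Table 1 and the roughly 50% average β3GnT7 efficiency from Fig. 5): the efficiency of this first GlcNAc transfer is 1 − 0.822 = 0.178, i.e. 17.8%, and 17.8% / ~50% ≈ 1/3, consistent with the statement that this step proceeds at about one-third of the enzyme's average efficiency.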
During continuous sulfation-coupled carbohydrate elongation, we observed that both β3GnT7 and β4GalT4 showed at least 4-fold higher activity toward an N-acetyllactosamine structure having sulfated GlcNAc at the position closest to the nonreducing terminal than toward a terminal structure without sulfate (Fig. 5). This finding is consistent with the substrate specificity of these enzymes observed in vitro (22, 23) and strongly suggests that β3GnT7 and β4GalT4 are responsible for KS GAG production. We also observed that there was no carbohydrate product larger than a tetrasaccharide that lacked sulfation at the nonreducing terminal, either during the glycosyltransferase reaction with CGn6ST (Table 1) or without a sulfotransferase (Fig. 2A), suggesting that a carbohydrate molecule with a nonsulfated tetrasaccharide at the nonreducing terminal is not utilized as a glycosyltransferase substrate. Presently, however, it is unclear whether this is due to lower hydrophilicity of the carbohydrate molecule or to a less preferable substrate structure.

[Figure 6 legend (fragment): Enzymatic synthesis of highly sulfated KS chains in the presence of KSG6ST in vitro. The starting substrate was treated with a mixture of three enzymes, β3GnT7, β4GalT4, and CGn6ST (A), followed by treatment with KSG6ST (B), and analyzed by P-4 gel filtration chromatography. The substrate was also treated with either a mixture of four enzymes, β3GnT7, β4GalT4, CGn6ST, and KSG6ST (C), or a mixture of three enzymes, β3GnT7, β4GalT4, and KSG6ST (D).]

[Figure 7 legend (fragment): ... β3GnT7 (A, D, and G), β4GalT4 (B, E, and H), or β3GnT2 or β4GalT1 (C, F, and I). Forty-eight hours later, the cell lysates were analyzed by SDS-PAGE followed by Coomassie Brilliant Blue staining (A-C) or by Western blot analysis using the 5D4 antibody for highly sulfated KS production (D-F). Transfected cells were also analyzed by quantitative reverse transcription-PCR to confirm knockdown effects of each siRNA (G-I). NC, negative control siRNA. Calculated values in G-I represent percentages of target gene expression in specific siRNA-transfected cells over that of negative control siRNA-transfected cells.]

Decreasing β3GnT7 mRNA levels dramatically reduced sulfated KS production to less than one-tenth of controls in cultured human corneal epithelial cells (Fig. 7D). Reduced β4GalT4 mRNA also inhibited sulfated KS production, but less effectively than suppression of β3GnT7 mRNA, although in both cases the siRNAs suppressed mRNA expression by more than 70% (Fig. 7, G and H). siRNAs specific for β3GnT2 and for β4GalT1 did not significantly suppress sulfated KS production, indicating that the reduction of sulfated KS production in cultured corneal cells was due to specific suppression of β3GnT7 or β4GalT4 by the corresponding siRNAs. The weaker effect of the β4GalT4 siRNAs may be because β4GalT4 has much higher enzymatic activity for elongation of KS GAG chains than β3GnT7; thus, the remaining β4GalT4 mRNA may provide enough β4GalT4 activity to produce sulfated KS. Further investigation is required to eliminate the possibility that other β4GalT enzymes compensate for β4GalT4 in sulfated KS production. In summary, we have detected production of sulfated KS by four enzymes, β3GnT7, β4GalT4, CGn6ST, and KSG6ST, in vitro and in cultured cells.
Because specific suppression of either β3GnT7 or β4GalT4 reduced sulfated KS production in cultured corneal cells, we conclude that β3GnT7 and β4GalT4 play major roles in elongating sulfated KS in the cornea. Further characterization of the glycosyltransferases and the sulfotransferases in producing sulfated KS may establish roles for the carbohydrate chain in corneal extracellular matrix organization.
7,607.6
2007-10-12T00:00:00.000
[ "Biology", "Chemistry" ]
A numerical model of the ionospheric signatures of time-varying magnetic reconnection: III. Quasi-instantaneous convection responses in the Cowley-Lockwood paradigm.

Using a numerical implementation of the Cowley and Lockwood (1992) model of flow excitation in the magnetosphere–ionosphere (MI) system, we show that both an expanding (on a ∼12-min timescale) and a quasi-instantaneous response in ionospheric convection to the onset of magnetopause reconnection can be accommodated by the Cowley–Lockwood conceptual framework. This model has a key feature of time dependence, necessarily considering the history of the coupled MI system. We show that a residual flow, driven by prior magnetopause reconnection, can produce a quasi-instantaneous global ionospheric convection response; perturbations from an equilibrium state may also be present from tail reconnection, which will superpose constructively to give a similar effect. On the other hand, when the MI system is relatively free of pre-existing flow, we can most clearly see the expanding nature of the response. As the open-closed field line boundary will frequently be in motion from such prior reconnection (both at the dayside magnetopause and in the cross-tail current sheet), it is expected that there will usually be some level of combined response to dayside reconnection.

Observations of expanding and quasi-instantaneous convection responses

There are seemingly conflicting views on how the ionospheric convection pattern responds to a change in the applied reconnection. The first observation of an evolving ionospheric response to magnetopause reconnection was made by Lockwood et al. (1986), who correlated IMF data from the AMPTE-UKS (Active Magnetospheric Particle Tracer Explorer - U.K. Satellite) with ion temperature measurements from EISCAT. They found that the observed ion temperature enhancement propagated away from noon with a mean velocity of 2.6 ± 0.3 km s−1. Other studies, for example the case study by Saunders et al. (1992) and the statistical surveys by Todd et al. (1988), Etemadi et al. (1988) and Khan and Cowley (1999), all reported this expansion, which was combined with ideas on boundary motions and flows discussed previously by Siscoe and Huang (1985) and Freeman and Southwood (1988) in the model of the ionospheric flow response proposed by Cowley and Lockwood (1992). The Cowley-Lockwood model invokes an equilibrium state that the near-Earth system will approach with the cessation of reconnection. As the system approaches equilibrium the flows will die away, even if open flux is still present. Subsequent reconnection at the magnetopause or in the tail will perturb the system away from equilibrium and excite flow which carries the system towards a new equilibrium for the changed amount of open flux. Under an applied change in magnetopause reconnection, convection will first respond locally on the dayside, before evolving around the flanks and into the nightside (Cowley and Lockwood, 1997). The view of an expanding reconfiguration of the convection flow pattern, interpreted in the framework of the Cowley-Lockwood model, was challenged by Ridley et al. (1998). Their study used a global assimilative mapping technique (AMIE, Richmond and Kamide, 1988) to generate patterns of electrostatic potential from magnetometer data.
A method of producing residual potential patterns, subtracting an initial potential map from subsequent potential maps, was developed to show the deviation from the initial potential pattern. They interpreted their results as showing a globally instantaneous response, within the temporal resolution of their data, and thus concluded that this was inconsistent with the Cowley-Lockwood model. Ruohoniemi and Baker (1998) used another large-scale technique to observe the convection response to variations in the IMF. Using the Doppler shift of observed coherent radar echoes from the SuperDARN radar network, these authors then used the line-of-sight velocities to constrain a mathematical fitting of an electrostatic potential pattern. Where insufficient data are gathered for a spherical harmonic fit of a given order, data from a statistical model (the APL model, see Ruohoniemi and Baker (1998) and references therein) are used to augment the measurements. The same technique was used by Ruohoniemi and Greenwald (1998) for a more detailed examination of one of the periods used as a case study in Ruohoniemi and Baker (1998). These studies also concluded that the ionospheric convection pattern responded globally and simultaneously to a change in the applied reconnection voltage. In a comment on the Ridley et al. (1998) paper, Lockwood and Cowley (1999) argued that the data presented by Ridley et al. (1998) in fact showed an expansion commensurate with the timescales for reconfiguration inferred by the prior, directly-measured studies. They also raised an important point regarding the distinction between the convection change and the shape of the convection pattern. A recent review by Ruohoniemi et al. (2002) highlighted some further shortcomings of using global mapping procedures (e.g. AMIE). These authors reached the conclusion that global assimilative techniques have a tendency to "globalize" local behaviour. They noted that the integration of statistical model data into the solution can also tend to globalize the effects of a local disturbance. They considered the case of a localized velocity vortex and reasoned that, in the absence of sufficient data coverage in the surrounding area, the solution for the global potential pattern would adjust itself to accommodate a greater convective flow.

Numerical modelling of time-dependent convection

Further to this discussion, it should be noted that using statistical averages for a quasi-steady state goes against the paradigm of a time-varying response of convection. Consider the usage of a global assimilative method in the case of data coverage in only one quadrant of the hemisphere. The statistical model used (e.g. the APL model, Ruohoniemi and Baker, 1998) will supply averaged data corresponding to a fixed IMF orientation. These data correspond to a quasi-steady (or "equilibrium") condition, and do not consider the history of the magnetosphere-ionosphere system. By presupposing and adding in the equilibrium (final) state to a reconfiguring (intermediate) state, the global mapping procedure may artificially show a more instantaneous response. Thus caution must be exercised when interpreting data that have been "filled in" with statistical data. Freeman (2003) described both the expanding and instantaneous responses within a single mathematical framework of the expanding-contracting polar cap (ECPC) model.
His results showed that the expanding response fitted the observations better, and that examining the convection patterns, the technique used by Ridley et al., made it hard to differentiate between the competing models. The expanding solution found by Freeman had the peaks of the convection pattern expanding around the polar cap boundary, but the entire equilibrium boundary was allowed to respond instantaneously. This form of expanding solution is consistent with the initial conceptual picture put forward in Cowley and Lockwood (1992), as shown in their Fig. 5. However, in later papers (Cowley and Lockwood, 1997; Lockwood and Cowley, 1999) these authors noted that the equilibrium boundary perturbation would expand tailward as newly-opened flux was added to the tail (see Figs. 2 and 3 of Lockwood and Morley, 2004). The numerical model of Lockwood and Morley (2004), hereafter Paper 1, goes beyond the Freeman (2003) model by taking these later ideas fully into account. While the high-latitude ionosphere can have a change in potential communicated across it on a timescale of tens of seconds by means of a fast Alfvén wave (Freeman et al., 1991; Ruohoniemi and Greenwald, 1998), the change in potential (associated with the development of new region 1 and region 2 field-aligned current (FAC) systems) may take time to develop.

Two-stage ionospheric convection response

Using magnetometer data, Murr and Hughes (2001) found that the onset of change occurred globally on timescales of about 2 min, a similar timescale to that reported by Ridley et al. (1998). However, their study also showed that the subsequent reconfiguration took place as a function of local time. The timescale for this reconfiguration was, on average, 10 min for complete reconfiguration of the ionospheric convection pattern, a result similar to those reported by Lockwood and co-workers. There have been a number of other observations of a two-stage response in ionospheric convection reported in the literature. Jayachandran and McDougall (2000) used two digital ionosondes to study the polar cap convection change associated with southward turnings of the IMF and reported both a quasi-instantaneous and an expanding response. Combined observations of the polar cap using SuperDARN radars and ground magnetometers were presented by Nishitani et al. (2002) and Lu et al. (2002). These authors found that the ionospheric flow vortices responded initially within the temporal resolution of their instruments, whereas the polar cap boundary and peak magnetic perturbation responded with a delay of up to 25 min. Lu et al. (2002) inferred that the two-stage response was a consequence of several interacting processes, including a fast rarefaction wave in the magnetosphere and a fast magnetosonic wave in the ionosphere. They also argued that the most important process was the feedback between the ionosphere and magnetosphere. The propagation of a fast magnetosonic wave as the cause of the fast onset of change in response to a change in reconnection was also cited by Nishitani et al. (2002). Lopez et al. (1999) showed in their three-dimensional MHD simulation that the convection pattern across the entire polar cap begins to change a few minutes after the arrival of the southward IMF, whereas the onset of the equatorward motion of the open-closed field line boundary depends on the local time, with equatorward motion of the midnight boundary delayed by ∼20 min relative to the onset of the boundary motion at noon. The MHD modelling study of Slinker et al.
(2001) also found a fast onset of change in the ionospheric convection with a slower reconfiguration. The fast response was attributed to the propagation of a fast magnetosonic wave through the magnetosphere. Using a numerical implementation of the Cowley-Lockwood paradigm, Lockwood and Morley (2004) showed that a quasi-instantaneous response could occur alongside the expected expansion, to an extent that depended on the level of pre-existing flow before the change. This paper will show how the Cowley and Lockwood (1992, hereafter CL) conceptual framework can accommodate a quasi-instantaneous response, demonstrated with the numerical model of Lockwood and Morley (2004). That the CL paradigm could accommodate this kind of response was noted in Paper 1, but was not studied in detail. In this paper we examine this effect using the Lockwood-Morley numerical model with an input reconnection rate variation having two identical pulses separated in time, making use of the techniques developed by Morley and Lockwood (2005, hereafter Paper 2). Lockwood and Morley (2004) presented a novel numerical model of the ionospheric convection response to variations in reconnection rate, incorporating all the concepts of the CL paradigm. The numerical model uses non-circular polar cap and equilibrium boundaries, and includes the propagation of a perturbation in the open-closed field line boundary (OCB). A full description of the model, its assumptions and limitations can be found in Paper 1. Paper 2 extended the model to calculate the ionospheric ion temperature and used the model to analyse the accuracy of the various methods used to derive expansion effects from experimental data. In this section we will outline the basic features of the model which are relevant to the expansion of the simulated flow patterns. For further details the reader is referred to Papers 1 and 2.

The Lockwood-Morley numerical model

The model calculates the temporal evolution of the OCB and equilibrium boundary latitudes, Λ_OCB and Λ_E, at all magnetic local times. At any point in this temporal development of the boundaries, various outputs can be derived. A flowchart summary of the operation of the model can be seen in Fig. 1 of Paper 2. The basic operation of the numerical model relies on three types of input: the input reconnection rate (ε) variation, the initial conditions of the model high-latitude ionosphere, and the constants assumed for the return of the OCB to equilibrium. We first define the convection velocity across the boundary (V), in its own rest frame, for a given time. The latitudinal convection velocity at the boundary (V_cn) is then calculated and the latitudinal velocity of the OCB (V_b) is determined. Thus the latitude of the OCB (Λ_OCB) is defined at all MLT for any given simulation time, t_s. The equilibrium boundary latitude (Λ_E) is then updated as a function of MLT to accommodate the new amount of open flux contained within the polar cap. The speed at which the perturbation to the equilibrium boundary propagates antisunward is the limit to the expansion of the response of the OCB, and hence to the expansion of the convection pattern. The simulation time is then advanced and the input reconnection rate for the current model timestep updated. These steps then repeat for each simulation time. The latitudinal convection velocity is used to specify the distribution of electric potential around the polar cap boundary. Using the ionospheric convection solution given in Freeman et al.
(1991) and Freeman (2003) for a circular OCB, the instantaneous convection velocity for all points in the modelled high-latitude ionosphere is specified for any given timestep. Perturbation analysis is used to allow for departures of the OCB from a circular form. As in Paper 2, the ion temperature at a series of simulated stations at a constant latitude is calculated for each timestep. As there is a symmetry about the noon-midnight meridian, we employ a latitudinal ring of 72 simulated stations in the dawn hemisphere (at 67°, i.e. outside the polar cap), with a spacing in MLT of 10 min. Note that the polar cap is centred on the pole (90° latitude) in the frame used and is not offset towards the nightside as in a conventional geographic or geomagnetic frame. Therefore, using a latitudinal ring in this frame ensures that the (equilibrium) OCB is equidistant from the stations at all MLTs. This allows us to isolate the azimuthal expansion of the convection pattern and avoid mixing it with any latitudinal expansion.

Modelling the response to pulsed reconnection

Paper 2 presented an examination of how using cross-correlation methods and threshold techniques can influence the observed convection response to IMF changes in time-series data. It was concluded that the most reliable method of recovering the expansion of the onset of the convection response is to use time-series data with a threshold that is a fixed fraction of the peak response. The first pulse of reconnection in the simulation presented in this paper is exactly identical to that used for the example of model results discussed in Paper 2. A second, identical, pulse of reconnection is added here, commencing at t_s = 540 s, 8 min after the commencement of the first pulse. This separation is comparable to the typical repeat period of magnetopause flux transfer events (Rijnbeek et al., 1984; Lockwood and Wild, 1993). The simulation commences with a total open flux of F_PC = 8×10^8 Wb. Each reconnection pulse starts at the X-line centre and expands towards both dusk and dawn (to the points a and b, respectively) at a rate (d|φ_X|/dt_s) which we set to 0.167° s−1, corresponding to 1 h of MLT per 1.5 min. Similarly, the end of each reconnection pulse is first seen at 12:00 MLT, 1 min after the onset, and propagates towards both a and b at the same rate. Thus each pulse causes the reconnection to be active at any MLT between a and b for 1 min. Two such pulses are included in the present simulation, with onsets at t_s = 1 min and t_s = 9 min. The background reconnection rate outside the two pulses is taken to be zero. A constant reconnection electric field of 0.2 V m−1 is applied at each MLT of the ionospheric projection of the X-line for 1 min, expanding from noon MLT to cover a maximum X-line extent of 4 h of MLT. This azimuthal movement of the active reconnection region is as proposed and deduced from observations by, e.g., Lockwood et al. (1993a), Milan et al. (2000) and McWilliams et al. (2001). The model response is identical to that presented in Paper 2 until the commencement of the second pulse. The effects of an existing flow pattern (that is, a disturbed OCB) caused by the first pulse on the model response to the second pulse are discussed below. Prior to the onset of the second pulse of reconnection, the flows generated by the initial pulse of reconnection have not yet subsided, and the convection pattern at simulation time t_s = 540 s is shown in Fig. 1. A full sequence of flow pattern plots for a similar reconnection scenario to that employed here is given in Paper 1.
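The pulsed reconnection input described above is fully specified by a handful of numbers, so it can be written down compactly; the following Python sketch is our own illustration of that input (the function and variable names are hypothetical, and the piecewise form is an assumption consistent with the quoted parameters), not code from the Lockwood-Morley model:

import numpy as np

# Two-pulse reconnection input described in the text: 0.2 V/m for 1 min at each
# MLT, onsets at t_s = 60 s and 540 s at noon, expanding at 1 h of MLT per
# 1.5 min, out to a maximum X-line footprint of 4 h of MLT (2 h each side of noon).
E_RECON = 0.2            # reconnection electric field while active (V/m)
EXPANSION = 1.0 / 90.0   # azimuthal expansion rate (hours of MLT per second)
HALF_EXTENT = 2.0        # maximum extent each side of noon (hours of MLT)
DURATION = 60.0          # time reconnection stays active at a given MLT (s)
ONSETS = (60.0, 540.0)   # onset times of the two identical pulses (s)

def epsilon(mlt_offset_hours, t_s):
    """Reconnection electric field (V/m) at a given MLT offset from noon and time t_s (s)."""
    d = abs(mlt_offset_hours)
    if d > HALF_EXTENT:
        return 0.0                       # outside the maximum merging-gap extent
    for t0 in ONSETS:
        t_on = t0 + d / EXPANSION        # time at which the expanding onset reaches this MLT
        if t_on <= t_s < t_on + DURATION:
            return E_RECON
    return 0.0

# Example: sample the input 1 h of MLT from noon, every 30 s
print([epsilon(1.0, t) for t in np.arange(0.0, 1200.0, 30.0)])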
At this time the second pulse of reconnection is about to commence. The convection pattern is diminishing in strength while the extent of the pattern continues to grow over the polar region. The upper panel of Fig. 1 shows the variation with simulation time, t_s, of the input reconnection voltage, Φ_XL (in blue), divided by five. It can be seen that the reconnection pulses are spaced 8 min apart and that they are identical. The polar cap voltage, Φ_PC (in red), has already peaked at about 18 kV by this time and has subsequently decayed slightly; this is reflected in the presence of only three plotted equipotentials, which are separated by 3 kV and centred about zero. The boundary locations for this time are shown in Fig. 2 (top panel), along with the convection velocity in the boundary rest frame (middle panel) and the electrostatic potential along the OCB (lower panel). The equilibrium boundary perturbation (the point out to which the OCB can have responded to the added open flux) has propagated beyond 03:00 and 21:00 MLT (points s and r) on the dawn and dusk sides, respectively. The poleward flow between 10:00 and 14:00 MLT (the maximum extent of the merging gap) is balanced by the equatorward flow outside of this region. Figure 3 shows the convection pattern at t_s = 600 s, 1 min after the onset of the second reconnection pulse. The ionospheric merging gap voltage is at its peak of Φ_XL = 158 kV, and the transpolar voltage, Φ_PC, has risen from 16 kV to 20 kV (but does not reach its peak value of near 30 kV until 150 s later). Figure 4 shows the boundary characteristics at t_s = 600 s in the same format as Fig. 2. [Fig. 4 caption: Model output boundary characteristics corresponding to Fig. 3. The format is the same as Fig. 2. r1 and s1 mark the extent of the perturbation to the equilibrium boundary (dash-dot line) caused by the first pulse of reconnection. Similarly, r2 and s2 mark the extent of the perturbation to the equilibrium boundary due to the second reconnection pulse. The extent of the active OCB (the active "merging gap", mapping to the segment of the magnetopause where reconnection is ongoing; solid line) is between the points a and b. The middle and lower panels show the poleward convection velocity at the OCB and the electrostatic potential around the OCB, respectively.] The latitudes of the OCB (solid line) and equilibrium boundary (dash-dot line), as a function of MLT, are shown in the upper panel. A boundary erosion identical to that caused by the first pulse is superimposed on the already distorted boundary. This new bulge is bounded by points a and b. The extent of the equilibrium boundary perturbation due to the second pulse is marked by points r2 and s2, while the corresponding limits for the perturbation due to the first pulse, r1 and s1, have expanded to nearer midnight compared with Fig. 2. The middle panel shows that V_cn has increased by a substantial amount between a and b. This increase is not balanced (as it was at the same stage for the first pulse) by equatorward flow between a and r2 and between b and s2. Comparison of the lower panel with Fig. 2 shows that the potential around the OCB is enhanced everywhere between r1 and s1. [Fig. 5 caption (fragment): ... as in Morley and Lockwood (2005), but with a second burst of reconnection, identical to the first, 8 min after the commencement of the first. Note the change in the ion temperature scale compared with that used by Morley and Lockwood (2005), which is required because much higher temperatures are driven in response to the second pulse.]
Figure 5 shows the formedogram (from the Greek formedon, meaning "in layers crosswise") for the ion temperature enhancement resulting from this two-pulse reconnection variation. This format shows the contours of the ion temperature enhancement, resulting from the enhanced ion flow, as a function of simulation time, t_s, and magnetic local time, where 12:00 MLT corresponds to station 0. The simulated stations are at a latitude of 67° and are positioned around the dawn hemisphere with a spacing in MLT of 10 min. We have examined the effect of the initial pulse of reconnection in isolation in Paper 2. This first pulse of reconnection produces convection in the model, as the OCB responds to displacement from its equilibrium configuration. When the second pulse of reconnection is applied, the convection resulting from the first pulse is still present. Although the response in ion heating to the first pulse is identical to that presented in Paper 2, comparison of Fig. 5 with Fig. 7 of Paper 2 highlights the difference in the level of the response to the second pulse. This is also apparent from Fig. 5 alone when one compares the initial response to reconnection pulses 1 and 2, the maximum simulated ion temperature reaching over 1120 K. This is significantly larger than the peak of about 1040 K in response to the first reconnection pulse. The temperature anomaly in the data series for simulated station 3 (∼11:30 MLT) is not an erroneous point: it arises from the proximity of the active reconnection X-line footprint and was discussed in Paper 2. This is the only station to be engulfed by the temperature enhancement that is localized around the end of the merging gap. [Fig. 6 caption: Same as Fig. 5, but normalized to the maximum response at each simulated station. Contours of (T_i − T_i0)/(T_ip − T_i0) are plotted as a function of MLT and simulation time, t_s, where T_i0 is the value of T_i at t_s = 0 and T_ip is the peak temperature seen at that station during the simulation.] Figure 6 shows the ion heating response (as a function of both simulation time, t_s, and MLT) by presenting the ion temperature change for each MLT, normalized to the local peak change at that MLT. For the first pulse, the response is weaker at all MLT and shows a clear anti-sunward propagation, with the enhancement being later at larger virtual station number (i.e. at MLT increasingly removed from noon). However, the (larger) response to the second reconnection pulse shows both a quasi-instantaneous response and an expanding response. The character of the response varies with MLT. The apparently instantaneous response present near midnight is due to numerical noise (of order 0.1 K) being amplified by the normalization, as described in Paper 2. Visual comparison with Fig. 5 shows that the amplitude of the temperature change there is negligible. The two reconnection pulses are identical, and the difference in response to the two, highlighted by Figs. 5 and 6, reveals that the effect of a pre-existing background flow is substantial. Equivalently, localized addition of open flux to a non-equilibrium polar cap can cause a large-scale response. This will be explained in the following section.

Discussion

We have presented results from the Lockwood and Morley (2004) numerical model predicting the effect of superposing a reconnection pulse on both an undisturbed and an already-disturbed ionosphere.
The inputs to the model are identical to those presented in Paper 2, with the exception of the input reconnection rate, which now includes a second, identical, pulse of reconnection commencing at t_s = 540 s, 8 min after the commencement of the first reconnection pulse. The convection patterns presented in Figs. 1 and 3 (for t_s = 540 s and 600 s, respectively) appear to show a global increase in convection strength in response to the second of the two applied reconnection pulses. For the first pulse (perturbing an undisturbed MI system) the effect is initially localised around the reconnection site, but then expands. The second pulse (perturbing an MI system that has already been perturbed by the first pulse) has a larger response which has both instantaneous and expanding characteristics (as revealed by Fig. 6). The principles of this effect can be understood by considering the boundary perturbations produced by the first pulse of reconnection. The perturbation due to the first pulse is close to encircling the polar cap, allowing the entire OCB to respond, and thus in the absence of the second pulse the convection pattern would have decayed exponentially in a quasi shape-preserving manner (see Paper 2). However, the perturbation to the equilibrium boundary caused by the addition of new open flux to the polar cap is initially confined to the reconnection site, with a subsequent expansion at a velocity dφ_r2/dt_s, with point r2 propagating eastward away from a and s2 propagating westward away from b. This is exactly the same perturbation motion experienced by the equilibrium boundary in response to the first pulse of reconnection. The residual effect of the first pulse can be seen in the OCB latitude by the fact that it has migrated equatorward at MLTs outside the merging gap (except near midnight, where neither Λ_E nor Λ_OCB has yet been influenced). However, the OCB is still poleward of its equilibrium position everywhere outside the ionospheric footprint of the X-line (the "merging gap"). The effect of the second pulse can be seen as a large equatorward migration within the active merging gap ab. The effects of the two pulses can also be distinguished in the poleward convection speeds, with equatorward motion (V_cn < 0) at MLTs outside the merging gap (except near midnight) and strong poleward flow (V_cn > 0) inside the merging gap. An enhancement of the equatorward flow is seen just outside of the merging gap (between a and r2 and between b and s2), but this is much weaker than the poleward flow between a and b. This should be contrasted with the situation for the equivalent time following the first pulse (Fig. 4 in Paper 1), where the total equatorward flow (between a and s1 and between b and r1) matches the total poleward flow between a and b: this is the case for all times when a zero-flow equilibrium system is perturbed by a single pulse of reconnection, and can also be seen in Fig. 2. Indeed, in the bottom panels of Figs. 2 and 4 it can be seen that the additional equatorward flow between a and s2 and between b and r2 causes only a very small perturbation to the potential distribution around the OCB. In the CL model the displacement from equilibrium is what determines the flow normal to the OCB at a given location. Where the OCB is close to the equilibrium boundary, only weak flow will be excited. In this example the OCB and equilibrium boundary are close together at s2 and r2, hence |Λ_OCB − Λ_E| is small and V_cn is near zero.
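The statement that displacement from equilibrium drives the boundary-normal flow can be illustrated with a toy relation; the Python sketch below conveys the idea only, and the linear form and the 10-min relaxation constant are our illustrative assumptions, not the exact prescription of Paper 1:

import math

R_E = 6.371e6    # Earth radius (m)
TAU = 600.0      # assumed relaxation time constant, of order 10 min (s)

def v_cn(lat_ocb_deg, lat_eq_deg, tau=TAU):
    """Boundary-normal convection speed (m/s), positive poleward, taken
    proportional to the latitudinal displacement of the OCB from equilibrium."""
    displacement = math.radians(lat_eq_deg - lat_ocb_deg)   # positive if the OCB sits equatorward of equilibrium
    return R_E * displacement / tau

# OCB eroded 2 deg equatorward of equilibrium: strong poleward flow
print(round(v_cn(lat_ocb_deg=73.0, lat_eq_deg=75.0)))       # ~370 m/s
# OCB and equilibrium boundary nearly coincident (as at r2 and s2): weak flow
print(round(v_cn(lat_ocb_deg=74.9, lat_eq_deg=75.0)))       # ~19 m/s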
This situation arises because the equilibrium boundary at these MLTs is perturbed equatorward by the second pulse of reconnection. However, the OCB has been eroded equatorward by a similar amount in response to the first pulse. In the absence of the perturbation due to the first pulse, the streamlines would return across the OCB sunward of s_2 and r_2. Near these points at this simulation time, t_s = 600 s, the flow that would have been caused by the first pulse in isolation is poleward and comparable in magnitude to the equatorward flow that would result from the second pulse in isolation. They can thus be considered to cancel each other out here, and the equatorward flow required because the ionosphere is incompressible must take place elsewhere (between r_1 and r_2 and between s_1 and s_2, i.e. closer to midnight). The flow increases at all MLTs between s_1 and r_1. It is important to note that the flows do not superpose; rather, it is the boundary perturbations that superpose, and their subsequent return towards their equilibrium position governs how the flow responds. This "global" strengthening of the convection pattern may provide an explanation for the quasi-simultaneous response reported by several authors (e.g. Ridley et al., 1997; Ruohoniemi and Baker, 1998). In this context, it is useful to note that Ridley et al. studied perturbations to the flow in response to southward turnings of the IMF using "residual potential patterns". These were obtained by subtracting from each observed potential a pre-existing potential from before the IMF change. Thus in their examples there was pre-existing flow, as in the simulation presented here. Ion temperature response As described in Paper 2, the convection response is also clearly seen in the ion heating: the dependence of ion heating on the convection velocity approximately follows a squared relationship (we here neglect the effects of thermospheric neutral winds and their (slow) response to the convection change). The formedogram of simulated ion temperature (as a function of simulation time and MLT) shown in Fig. 5 reveals the response to the first pulse of reconnection which, as in Paper 2, expands away from noon at about 11 km s^-1. The second pulse of reconnection commences at t_s = 540 s. The erosion of the OCB is almost identical to that caused by the first applied pulse of reconnection. However, the response in the ion heating reveals a very different convection response. The response in frictional ion heating, hence in the convection, appears in Fig. 5 to be nearly simultaneous across the entire dayside. However, as discussed earlier, observing at a fixed threshold level can lead to a significantly reduced apparent expansion velocity and to a reduced apparent extent of the expansion. This formedogram also shows a bifurcation in the response to the second pulse, with an apparent expanding component in addition to the more instantaneous response. Figure 6 shows the ion heating response (as a function of simulation time and MLT), where the data from each simulated station have been normalized to the local maximum, i.e. the contours represent the fraction of the peak change in ion temperature at a given MLT. Here the effect of the reconnection onset just after t_s = 540 s is clearly seen globally, with a rise to peak convection response within 3 min of the commencement of addition of open flux by the second pulse of reconnection. Only on the nightside is there any evidence for an anti-sunward expansion of this quasi-instantaneous response.
This interaction between the flow present prior to the second pulse and the new flow, giving the global response, then falls away and a second, expanding, response can be seen moving anti-sunward with a similar phase velocity to that for the first pulse, i.e. with a velocity of about 11 km s^-1. The peak ion heating response at 00:00 MLT is seen to occur at t_s = 1250 s, about 12 min after the onset of the second reconnection pulse, corresponding to the timescale for reconfiguration of the entire convection pattern (a rough consistency check of this timescale is given below). The temporal separation between the two responses becomes sufficiently large that we can separate them as we approach the dawn-dusk meridian. It is interesting to note that in this case the expanding response is of lesser magnitude than the quasi-instantaneous response, as measured by the ion heating, falling as low as 70% of the peak response at some MLTs. The effect of pulse separation To illustrate the effect of the separation between reconnection pulses on the quasi-instantaneous and expanding responses we here use the input reconnection scenario described earlier in this paper, but increase the separation between the applied pulses of reconnection to 10 min; i.e. the second pulse of reconnection now commences at t_s = 660 s. The formedograms of simulated ion temperature for this case are shown in Figs. 7 and 8 (corresponding to Figs. 5 and 6, respectively). Comparison of Fig. 7 with Fig. 5 shows that the form of the response in ion temperature is qualitatively similar. Since the reconnection pulses have a greater separation in time, the flow excited by the initial pulse of reconnection has decayed further at the onset of the second pulse. The response to the second pulse is therefore slightly lower in magnitude than in the previous example. Figure 8 shows the effect of the increased pulse separation on the heating, as a fraction of the maximum response at each station. As before, the quasi-instantaneous response is present across much of the high-latitude ionosphere. Here the onset of the ion heating response occurs over a greater MLT extent than before, and no longer shows the anti-sunward expansion of the quasi-instantaneous response present in Fig. 6. This global strengthening of the ionospheric convection again shows the greatest perturbation in the temperature profiles for each simulated station. This is explained by the fact that the expanding response, which is smaller in this example because of the greater pulse separation, weakens as it spreads around the auroral oval towards the nightside. It therefore has less effect on the magnitude of the change in convection, and the resultant ion heating. This can be seen in the different ion temperatures just after the onset of the second reconnection pulse. For the case of an 8-min separation the peak temperature measured here is 1125 K, whereas for a pulse separation of 10 min the peak temperature is around 10 K lower. However, the heating associated with the quasi-instantaneous response covers a larger MLT extent. The decay of the instantaneous response is with the same time constant as that for the decay of the residual flow associated with the first pulse (of order 10 min). The superposed and expanding responses are still difficult to distinguish on the dayside, where the simulated ion temperature increase is greatest.
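As a rough consistency check (added here, not taken from the paper), the ~12 min quoted above for the expanding response to reach 00:00 MLT follows from the ~11 km/s phase speed and the 67° latitude of the virtual stations, assuming an F-region radius of roughly R_E + 300 km (the altitude is an assumption made here):

```latex
d \simeq \pi \,(R_E + h)\cos 67^\circ
   \approx \pi \times 6.67\times 10^{3}\,\mathrm{km} \times 0.39
   \approx 8.2\times 10^{3}\,\mathrm{km},
\qquad
t \simeq \frac{d}{v_{\mathrm{ph}}}
   \approx \frac{8.2\times 10^{3}\,\mathrm{km}}{11\,\mathrm{km\,s^{-1}}}
   \approx 7.4\times 10^{2}\,\mathrm{s} \approx 12\ \mathrm{min},
```

which is consistent with the peak ion heating at midnight occurring about 12 min after the onset of the second pulse.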
However, on the dayside the response to an applied pulse of reconnection is observed in the ion heating as a temperature enhancement expanding away from the reconnection footprint. In these examples the quasi-instantaneous strengthening of the convection pattern, as seen in the associated ion heating, can only be fully separated from the classic expanding response on the nightside. For small pulse separations, the instantaneous response will not be seen as the two reconnection pulses effectively merge into a single, longer-lived pulse. Comparison of Figs. 5 and 7 illustrates how the magnitude of the instantaneous response in ion heating decays with increasing pulse separations above 8 min, eventually disappearing to leave two identical expanding responses of the type presented in Paper 2. Conclusions We have shown that both expanding (on a ∼12-min timescale) and quasi-instantaneous responses of the ionospheric convection to magnetopause reconnection can be explained with the Cowley and Lockwood (1992, 1997) model of flow excitation in the coupled magnetosphere-ionosphere (MI) system. This model has a key feature of time-dependence, necessarily considering the prior history of the MI region. The work presented in this paper shows that a residual flow from magnetopause reconnection can produce a quasi-instantaneous global ionospheric convection response; residual flow may also be present from tail reconnection, which will superpose constructively to give a similar effect. The flow generated by the second pulse is not a simple superposition of the flow patterns associated with the two pulses in isolation. Rather, it is the boundary perturbations (the OCB and equilibrium boundary) which are superposed, and the combined effect of the two pulses gives a quite different response from either the isolated patterns or the superposition of the two as the OCB migrates back towards its equilibrium position. As the OCB will frequently be in motion from such prior reconnection (both at the dayside magnetopause and at a magnetotail neutral line), it is expected that there will usually be some level of combined response to dayside reconnection. IMF B_y and its variations will also have an effect, though modelling of the effect of a strong B_y component cannot be achieved with the current model, since symmetry is assumed in the analytic solution of Laplace's equation. The example presented offers a plausible explanation for reported global responses within the Cowley-Lockwood paradigm. The model predicts a quasi-instantaneous strengthening of the convection pattern along with the expected expanding reconfiguration response, in accordance with observations by, for example, Murr and Hughes (2001) and Lu et al. (2002). Note that these responses in our modelling are not due to the presence of different magnetosonic wave speeds. Other mechanisms have been proposed to explain certain cases, including an overdraped lobe causing reconnection onset simultaneously across a large MLT extent (e.g. Shepherd et al., 1999). However, our results show that these are also not necessary to explain the observations. In the sample results presented, the "global" and "expanding" responses in the bulk flow occurred too close together to be separated across much of the dayside.
The reconfiguration of the ionospheric flows in response to the applied pulse of reconnection is seen to occur as an expanding reconfiguration of the type discussed by Cowley and Lockwood (1992). Further, the separation between dayside reconnection events is seen to be an important factor in the degree of separation between the global and expanding responses, and in the level of the quasi-instantaneous response caused by the superposition of the OCB and equilibrium boundary perturbations. In the present paper we have concentrated on the effects of pulses in the magnetopause reconnection rate; however, similar effects will also be produced by pulses of nightside reconnection in the cross-tail current sheet.
8,890
2006-05-19T00:00:00.000
[ "Physics", "Environmental Science" ]
Low temperature magnetic properties of frustrated pyrochlore ferromagnet Ho2Ir2O7 We have performed low temperature magnetic susceptibility, specific heat, electrical resistivity and neutron diffraction measurements to study the magnetic properties of the frustrated pyrochlore ferromagnet Ho2Ir2O7. The magnetization and dc resistivity measurements indicate a metal-insulator transition near 140 K. The entropy accumulated from the heat capacity data is R ln 2, indicating a Ho3+ doublet ground state. The Ir sublattice order is antiferromagnetic, which is evident from the magnetization measurements. The best fit of the neutron diffraction spectra at 1.8 K shows a magnetic structure with two types of components. One of the components is of the all-in-all-out type, which is due to the strong coupling between Ho3+ and Ir4+ ions and develops below 18 K. It suggests that the magnetic moments of the Ho3+ ions align in the field of the Ir4+ order, along local <111> directions. There is also a Ho3+ magnetic component perpendicular to <111> below 100 K. This perpendicular component may originate from defects in the Ir sub-lattice. Defects such as Ir deficiency or an Ir moment pointing in the opposite direction will cause the emergence of the perpendicular component. Introduction The study of frustrated pyrochlore oxides having the composition A2B2O7 is crucial for understanding the ground state that results from the competing interactions of the spins on the corners of the tetrahedral lattice. In the case of a pyrochlore ferromagnet, the local Ising anisotropy makes the spins point either into or out of the center of the tetrahedron, and consequently the ground-state spin configurations of one tetrahedron are six-fold degenerate, having two-in-two-out structures. The pyrochlore lattice consists of a three-dimensional network of corner-sharing tetrahedra. If we extend this two-in-two-out structure to the entire pyrochlore lattice, we obtain a macroscopically degenerate ground state, which eventually leads to a residual entropy. The macroscopically degenerate ground state, a feature possessed by a number of frustrated materials, has been a fascinating area of research. Frustrated magnetism on the A site leads to the remarkable phenomenon termed "spin ice" behaviour, where residual entropy has been observed in pyrochlore materials, e.g. Ho2Ti2O7 [1], Dy2Ti2O7 [2] and Ho2Sn2O7 [3]. For the pyrochlore lattice, it is a well-established fact that antiferromagnetic interactions lead to a high degree of geometrical frustration [4]. Interestingly, it has also been observed that, even in the case of ferromagnetic interactions on the pyrochlore lattice, frustration may arise due to strong single-ion anisotropy [5]. The iridium-based pyrochlore compounds R2Ir2O7 (R = Pr, Nd, Sm and Eu) have been investigated, and a metal-nonmetal changeover has been observed in these compounds [6]. The Ir4+ ions, containing five 5d electrons, differ from the Mo4+ and Ru4+ ions as follows: Ir4+ has a low-spin S = 1/2 configuration, while Mo4+ and Ru4+ have an S = 1 configuration, and consequently Ir4+ brings about different properties. In this paper, we present the low temperature magnetic properties of the frustrated pyrochlore Ho2Ir2O7. The magnetization measurements indicate a metal-insulator transition near 140 K. Neutron diffraction experiments reveal two types of Ho3+ components in the magnetic structure of pyrochlore Ho2Ir2O7.
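As a quick, self-contained check of the six-fold degeneracy quoted above (this snippet is added here for illustration and is not part of the original paper), the Ising configurations of a single tetrahedron can be enumerated in Python:

```python
from itertools import product

# Each of the 4 corner spins points either "in" (+1) or "out" (-1) of the
# tetrahedron centre under the local Ising anisotropy; the spin-ice ground
# states are the two-in-two-out configurations.
configs = list(product((+1, -1), repeat=4))
ground_states = [c for c in configs if sum(c) == 0]   # two spins in, two out
print(len(configs))        # 16 configurations in total
print(len(ground_states))  # 6 two-in-two-out ground states
```

Extending this local six-fold degeneracy across the corner-sharing lattice is what produces the macroscopically degenerate ground state and the residual (Pauling) entropy referred to in the heat-capacity discussion below.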
Experimental details Polycrystalline samples of Ho2Ir2O7 have been prepared using a conventional solid-state synthesis technique. The starting materials were Ho2O3 (99.99%) powder and IrO2 (99.99%) powder in the ratio 1:2, respectively. The materials were ground well with a mortar and pestle and we pre-sintered the samples at 800 °C for 12 hours. After pre-sintering, we pelletized the samples and heated them at 1100 °C for 20 days. The phase purity of the samples is confirmed with synchrotron X-ray diffraction measurements. The dc magnetization is measured with a superconducting quantum interference device (SQUID) magnetometer (Quantum Design, USA) at various temperatures down to 2 K and magnetic fields up to 5 T. The specific heat measurement is performed using a relaxation method in a physical property measurement system (PPMS) (Quantum Design, USA) down to 1.8 K. The AC susceptibility measurements have been performed with the PPMS with Hac = 10 Oe superimposed on various dc magnetic fields up to 2 T, in the temperature range of 2 K - 30 K. Neutron powder diffraction measurements were carried out on the time-of-flight powder diffractometer POWGEN at the Spallation Neutron Source, Oak Ridge National Laboratory. Diffraction data were collected with the center wavelength 1.066 Å. Rietveld refinements for structures of Ho2Ir2O7 were performed using the FullProf program. DC magnetization measurements The temperature dependence of the dc susceptibility of Ho2Ir2O7 at 50 Oe and 1 kOe in the temperature range 50 K - 300 K is shown in figure 1. The ZFC and FC curves at 1 kOe are observed to separate around 141 K. This ZFC-FC behaviour is consistent with the earlier reported studies on this family of compounds, indicating that it shows a metal-insulator transition (MIT) [7]. The shape of the FC curve below TMI depends upon the preparation of the sample. In our case, below TMI, FC lies above the ZFC curve for both field values. (Note that FC and ZFC for H = 1 kOe have been divided by a factor of 1.65 for scaling purposes.) One interesting feature of the FC curve at 1 kOe is that it starts to separate from the ZFC curve around 225 K, 84 K higher than TMI. Previous studies have reported similar effects for the metallic pyrochlore compound Pr2Ir2O7, where resistivity starts to increase below TMI [8]. This effect is attributed to the enhanced magnetic scattering of the charge carriers while approaching the magnetic ordering, due to spin-ice behaviour in a frustrated pyrochlore. The temperature dependence of the inverse susceptibility (not shown here) and the Curie-Weiss law fitting of the H = 50 Oe data give a magnetic moment of Ho3+ of 8.362(1) μB and θw = 1.9 K, indicating ferromagnetic coupling of the spins in Ho2Ir2O7 (a minimal sketch of this kind of fit is given below). Figures 2(a) and 2(b) show the temperature dependence of the real (χ') and imaginary (χ'') parts of the ac susceptibility of Ho2Ir2O7 for Hdc = 0.7 T and Hac = 10 Oe and under different ac field frequencies ranging between 10 Hz and 10 kHz. As shown in figure 2(a), χ' shows a peak near 5 K, which is consistent with the earlier published results for spin-ice materials, and this peak is associated with dynamic spin freezing at low temperature [3,9,10]. From figure 2(b) we observed that χ'' shows a broad peak around 6 K, which is also associated with freezing of the spins at low temperature. The maxima in χ' and χ'' decrease with increasing ac frequency, which is expected for spin freezing.
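A minimal sketch of the kind of Curie-Weiss analysis mentioned above is given here (added for illustration; the susceptibility array and the numbers used to build it are placeholders, not the measured data):

```python
import numpy as np

# Curie-Weiss analysis: chi(T) = C / (T - theta_W); fit 1/chi vs T as a straight line.
T = np.linspace(150.0, 300.0, 60)          # K, assumed paramagnetic fitting window
C_true, theta_true = 10.0, 2.0             # placeholder values for the synthetic curve only
chi = C_true / (T - theta_true)            # stand-in for the measured molar susceptibility (emu/mol)

slope, intercept = np.polyfit(T, 1.0 / chi, 1)   # 1/chi = T/C - theta_W/C
C_fit = 1.0 / slope
theta_W = -intercept / slope

# Effective moment from the Curie constant (CGS molar units): mu_eff = 2.828 * sqrt(C) mu_B
mu_eff = 2.828 * np.sqrt(C_fit)
print(f"C = {C_fit:.2f} emu K/mol, theta_W = {theta_W:.2f} K, mu_eff = {mu_eff:.2f} mu_B")
```

A positive θw obtained this way indicates ferromagnetic coupling, as stated above.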
The monotonic decrease in χ' and χ'' with increasing temperature is in good agreement with the published results for spin-ice materials [3]. We also observed that the peak in χ'' shifts to lower temperature with increasing ac frequency, which indicates relaxation of the magnetization in Ho2Ir2O7 at low temperatures. Heat Capacity measurements The magnetic contribution to the specific heat (Cm) is obtained by subtracting the lattice contribution (the specific heat of the pyrochlore Lu2Sn2O7) and the nuclear hyperfine contribution, which has been estimated from the Schottky specific heat, from the total specific heat (Cp) of Ho2Ir2O7 in zero field. The magnetic entropy is calculated by integrating Cm/T over temperature using the formula ΔSm = ∫ from T' to T'' (Cm/T) dT, where T' and T'' are the initial and final temperatures of the integration interval. As we can observe from the temperature dependence of the entropy (shown in figure 3), the zero-point entropy of (R/2) ln(3/2) does not appear in Ho2Ir2O7; ΔSm saturates near 30 K and attains the value 5.92 J/mol-K, which is slightly more than the spin-freezing value R ln 2 = 5.76 J/mol-K. Similar results are reported for Dy2Ir2O7 [6]. In the case of Dy2Ir2O7 the spin-freezing value is reached near 20 K. Neutron diffraction measurements Neutron powder diffraction experiments have been carried out on Ho2Ir2O7 at various temperatures (the detailed results are to be published elsewhere). Figure 4 shows the Rietveld refinement data on Ho2Ir2O7 at T = 29 K. The red dots represent the experimental data. The solid black line is the theoretically calculated intensity. The (420) and (222) peaks might possibly appear because of lattice defects and not from the magnetic structure. The blue curve represents the difference of the two. The best fit of the neutron diffraction spectra at 1.8 K shows a magnetic structure with two types of components. One of the components is of the all-in-all-out type, which is due to the strong coupling between Ho3+ and Ir4+ ions and develops below 18 K. It suggests that the magnetic moments of the Ho3+ ions align in the field of the ordered Ir4+ sublattice, along local <111> directions [11,12]. Another magnetic component of Ho3+, perpendicular to <111>, is observed below 100 K. This perpendicular component may originate from defects in the Ir sublattice. Defects such as Ir deficiency or an Ir moment pointing in the opposite direction will cause the emergence of the perpendicular component [11]. Resistivity measurements We have performed resistivity measurements with the standard four-probe method. The temperature variation of the resistivity is shown in figure 5(a) in the temperature range 1.85 K - 7 K and under various magnetic field values ranging up to 5 T. The sample shows insulating behavior at low temperature (1.85 K) and H = 0 Oe. We have observed that with increasing field, the electrical behavior of the Ho2Ir2O7 sample changes from insulating to metallic. Figure 5(b) shows the temperature variation of the resistivity at high temperature and indicates the metal-insulator transition near 140 K. Further, we have analyzed the thermistor behavior of Ho2Ir2O7, as shown in figure 5(c), and find that the 1/T variation of ln(ρ) does not follow the common semiconductor thermistor formula 1/T = A + B ln(ρ) + C [ln(ρ)]^3, where A, B and C are constants [13].
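To make the entropy estimate above concrete, the following sketch (added here for illustration, not from the paper) integrates a toy two-level (Schottky) magnetic specific heat, whose full entropy tends to R ln 2 for a doublet; the Cm curve and the level splitting are arbitrary stand-ins for the measured magnetic specific heat:

```python
import numpy as np

R = 8.314                                   # J / (mol K)
T = np.linspace(2.0, 30.0, 500)             # K, illustrative integration window (T' to T'')
Delta = 10.0                                # K, assumed splitting of the toy two-level system
x = Delta / T
C_m = R * x**2 * np.exp(x) / (1.0 + np.exp(x))**2   # toy Schottky anomaly standing in for Cm(T)

dS_m = np.trapz(C_m / T, T)                 # Delta S_m = integral from T' to T'' of (Cm / T) dT
print(f"Delta S_m ~ {dS_m:.2f} J/(mol K); R ln 2 = {R*np.log(2):.2f}; (R/2) ln(3/2) = {0.5*R*np.log(1.5):.2f}")
```

With the measured Cm in place of the toy curve, the same integral gives the ~5.9 J/mol-K quoted above; recovering essentially the full R ln 2 is why it is concluded that the (R/2) ln(3/2) zero-point entropy does not appear here.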
Conclusion In conclusion, we have performed low temperature ac and dc susceptibility, specific heat, electrical resistivity and neutron diffraction measurements to study the magnetic properties of the frustrated pyrochlore ferromagnet Ho2Ir2O7. The dc magnetization and resistivity measurements show a metal-insulator transition near 140 K. The total entropy accumulated from the heat capacity data is R ln 2, which indicates a Ho3+ doublet ground state. The Ir sublattice order is antiferromagnetic, which is evidenced by the magnetization measurements. A magnetic structure with two types of components is present in the best fit of the neutron diffraction spectra at 1.8 K. One of the magnetic structure components is an all-in-all-out configuration of Ising spins along local <111> axes, which develops below 18 K. There is also a component of the magnetic structure that is perpendicular to the <111> axes. The perpendicular component of the magnetic structure may possibly originate from defects in the Ir sublattice.
2,518.6
2017-04-20T00:00:00.000
[ "Physics" ]
Being about It: Engaging Liberatory Educational Praxis : Engaging in liberatory praxis at times means operating outside of set or limiting boundaries. As such, this article is written in the first person to highlight both a praxis of freedom (outside of the status quo) and to show impact. It pays homage to elder educators in the process of (re)membering those who have paved the way for liberatory praxis and influence my personal educational philosophy presented here for guidance. Student voices are amplified in the process of highlighting ways that (re)searching, (re)visioning, (re)cognizing, (re)presenting, and (re)claiming are actualized to engender educational freedom across curricular content in a higher educational setting. Introduction "Teaching and the shaping of character is one of our great strengths. In our worldview, our children are seen as divine gifts of our creator. Our children, their families, and the social and physical environment must be nurtured together." -Asa G. Hilliard, III "What is a teacher? I'll tell you: it isn't someone who teaches something, but someone who inspires the student to give of her best in order to discover what she already knows." -Paulo Coelho "A gifted teacher is not only prepared to meet the needs of today's child, but is also prepared to foresee the hopes and dreams in every child's future." -Robert John Meehan Liberatory Education has been defined as an approach to learning that provides students with agency, acknowledges student realities and experiences, and also acknowledges oppressive policies and societal structures that may hinder their progression [1,2]. Engaging in liberatory educational praxis at times means operating outside of set or possibly limiting boundaries of what is considered to be academically astute. This leads to questions of who is creating the definitions and boundaries and whether they are equitable for all participants. It also involves the inclusion of student voices as central and valid in the process of discovery and learning. When engaged appropriately and critically with particular care [3], the inclusion of all voices does not mean the eclipsing of some voices belonging to those who might see things differently. The building of community that is included in the process makes room for all perspectives and the ability to sift through tension to grow from perceived conflict. As such, this article diverges from conventional expectations of academic writing that require formal, abstract, rigid, conventional, and personally detached third-person language establishing an authoritative stance [4]. It is purposefully written in the first person to highlight both a praxis of freedom (stepping outside of boundaries as discussed above) and to exemplify both the process and the impact of fully engaging and modeling liberatory praxis. Doing what we speak of; being about it. This is my attempt to fully capture the spirit and feeling of what I am clear is MY work [5]. It will directly Through readings, formalized lessons, informal conversations, and consistent observation (at times when they did not realize I was watching) I learned. I learned about the world around me, about my history, how to think critically and critically analyze content/information, how to actively listen, how to see myself, how to take perspective, how to engage with others, how to conduct myself, how to respond (or not to), how to leave the world better than I found it. 
Some of these names you will find on publications and in professional settings, some you will not; and all are valid and important. It was through these guides that I learned the acts of (re)searching and engaging African history through multiple sources [5,8], (re)membering who we are, all we have done, and all we are capable of [8,9], (re)visioning educational praxis to create spaces of liberation and freedom [1,8,10,11], (re)cognizing where we have been, where we are now, and where we must be [6,9,12], (re)presenting African and women's (often hidden) historical perspectives [5], and (re)claiming identity and excellence [5,6,10,12]. I learned how to apply all of this in both scholarship and praxis, and they held me accountable to do so. I felt their enthusiasm as they engaged in scholarship and bathed in the light they cast as they urged me to never sit in anyone else's shadow. They taught me so many things, but it was never really about what they taught (the topic or subject), but how they approached the process of teaching. Their depth of knowledge, their passion, their support, their willingness to learn from (young, inexperienced, still learning) me, their dispositions as they taught, and the way they lived their teachings. They modeled by being who they said they were and who they were asking me to be (the essence of being about it). They modeled through authenticity and consistency. They modeled by allowing me to be me, while gently nudging, teaching, and reminding me to become the best version of myself. I grew to meet their expectations first, then to meet my own as they taught me that what really mattered was the bar I set for myself, what I believed about what I could accomplish, and the language I used in the process. I found my own bar and I continue to raise it as I grow in the understanding of myself and learn from my students. This was my first introduction and foray into liberatory educational praxis. I still sit at their feet to learn and also sit next to them to work. Now I model what I have absorbed for others as they did for me [8]. You will also see much of what I have said about them as well as the tenets of liberatory praxis reflected in my personal philosophy statement. I must also reiterate that I have had many teachers that have unfortunately taught me the exact opposite of these lessons. I have encountered people who did not care that I was in the room, who did not seek to know nor connect with me, who even said harmful things in the process of "teaching" me. They said things that felt like personal and cultural attacks communicating a different kind of lesson separate from the curriculum. They did not like me personally or people who are phenotypically similar to me. I was still expected and required to show them respect when it was beyond clear that they had none for me. They mishandled their responsibility to shape young lives. Whether these things were on purpose or through general ignorance, I could not tell the difference. Intentionality does not matter. Their actions still stung and left an enduring mark. At times it can be easier to remember the painful lessons than it is to remember the positive and the optimal because of the scars left behind [11]. Light has a softer touch and leaves a different mark. This is why I do this work and engage in this praxis. I will not speak their names here though for two reasons: 1. I do not seek to exalt or reflect their actions, behaviors, or legacy, and 2. I offer them the protection they did not extend to me. 
Even in this act I am being who my forever teachers have taught me to be. I am retaining my integrity and extending grace, and being solution focused. This article is about radiating that light. Liberatory Educational Praxis This work begins with the understanding that I am a teacher always and in all ways. It was not always my plan; however, teaching opportunities kept finding me when I was not looking for them. I originally planned to be a Hollywood star, drama major, and all of that, but that is a story for another time. My master's and doctorate degree journeys took me down different paths that kept returning me to the classroom teaching at different institutions (PWI and HBCU) where I engaged liberatory praxis wherever I was (of course, not to the more in-depth and integrated level I am now). As Dillard [5] discusses, I was in search of more lucrative professional options when I was called to the classroom again by someone who was familiar with my work. This is how I found myself teaching at the number one HBCU (for sixteen consecutive years): Spelman College. Though it was not my initial choice to go down this path, I have, however, always loved teaching and understood my responsibility from the beginning of my teaching career. Because of my models (forever teachers) and because of my personal experiences sitting in the place and space of a learner in low socioeconomic schools in so-called inner city spaces, I knew that how I showed up mattered. I also knew that students' experiences with their teachers remain with them [11]. With this understanding, I have actively chosen to leave my students with light. I came to realize that the responsibility also does not begin and end in the classroom within the specified class time. Whether I am engaged in a formal classroom exchange, mentoring, mothering, on social media, or just "doing me", my actions, words, and demeanor are always teaching a lesson to whomever may be paying attention; and students are always paying attention. This may seem like common sense, but I learned this sitting at the feet of Baba Asa G. Hilliard, III. Through his direct words and his informal actions, he taught me that a master teacher [the African teacher] is "a parent, friend, guide, coach, healer, counselor, model, storyteller, entertainer, artist, architect, builder, minister, and advocate to and for students" [10]. A teacher is one who works to ensure that education is not subjugating for students. Subjugating education focuses on one perspective and ignores the histories, perspectives, and realities of all others [11]. S(he) is one who works to ensure that students have fun while learning, that they learn the truth, are considered, seen, appropriately challenged, protected, allowed to experiment, and allowed to make mistakes, and that they experience growth. By doing so, s(he) creates a space for liberatory educational praxis where students can actively participate in their own process. S(he) is about the work and (re)members to place students at the center. According to Freire [1], liberatory education facilitates an understanding of social systems of oppression for students and equips them with an understanding of how to challenge and change those systems. Students are guided through the process of (re)cognizing and understanding how American education has been historically structured for one demographic and employed as a tool of oppression.
One way this is achieved is through problem-posing education that employs dialogue to (re)vision education and builds critical consciousness, stimulates critical thinking, connects to reality, engenders creativity, and nurtures reflection and action [5]. It transforms and allows students to (re)claim the educational experience. Once this is understood, whether education is subjugating or freeing is up to the teacher, who must embody the lessons taught about curricular freedom. Teachers literally shape the future both by what they teach and by how they teach. The content and the quality are both salient ingredients to the equation. A big part of this is how we (re)present and interact with students [11] because how we choose to show up and engage is a choice. At times we forget that college students are not yet full-fledged adults, but what I like to call advanced adolescents who still need guidance and nurturing as they are still learning how to independently engage with the world without the direct supervision of parents or guardians. Freire [13] reminds us that teachers should not solely be seen as nurturers nor just as strict purveyors of information, but we need to strike a balance between the two because our students need love in the process of learning. Relationships are vital to the learning interaction, and I would argue that they are the very foundation of liberatory educational praxis. We create spaces of freedom when we get to know our students, create relationships, and allow them to grow in the process of learning. Teaching is a dynamic process that should develop the mind, heart, character, and behavior of students. It should simultaneously tap their genius and touch their spirit as teachers (re)cognize the divinity and full humanity of children/students who are (or should be) regarded as blessings and revered as our future [10,11,14]. A teacher must (re)search and learn to speak the individualized language of each student. In other words, learn their cultural background, learning history, strengths, preferences, challenges, needs, etc. Once we know students then we can begin to build trust and engage on a deeper level. This, in my view, is educational freedom. This is the apex of learners' understanding of how to create and transform rather than simply regurgitate and mimic a template as they trust to engage deeper. When we remember how we liberate, we seek to engage in intentional educational practice. We (re)search, (re)vision, (re)cognize, (re)present, (re)claim, (re)member [5]. We are clear about what we are teaching and why we are teaching, and work to teach it in a manner that touches the spirit and both reflects the professor's passion and ignites passion in students [11]. From the teachings of Ptahhotep (2350 B.C.E.) as shared through Hilliard, Williams, and Damali [15]: The Seba (wise person/master teacher) feeds the Ka (soul) with what endures . . . the wise is known by his or her good actions. The heart of the wise matches his or her tongue and his or her lips are straight when he or she speaks. The wise have eyes that are made to see and ears that are made to hear what will profit the offspring. In other words, they approach education authentically and are accountable in the process. Freire [1] adds that the teacher asks themselves, and is clear, about whom and on whose behalf they are working.
In this view and in the words of Dillard [5], I am clear about who I am working for, and I am fulfilling my covenant to the ancestors by tapping into their knowledge and sharing it with the next generation. I am working from a place of passion. As an Educational Psychologist, I have a deep level understanding of what happens within the brain as we are presented with new information. I understand how the past (positive and negative messages and experiences) and present (functioning and environment) interact to set the stage for how we encode information. I apply that understanding to module development, course creation, student interactions, and everyday teaching as I engage in trauma-informed practice. Trauma-informed practice focuses on considering and being responsive to students who have experienced trauma [16]. Teachers who engage this form of praxis adapt an interlaced socioemotional approach that focuses on establishing relationships with students, avoiding triggers, employing tailored de-escalation when students are triggered, creating physical, psychological, and emotional safe spaces, and providing students with choice, voice, and empowerment. The goal is to enhance the educational process for students, at times providing them with positive experiences that they have not previously had. I am intentional in creating course experiences for students towards a praxis of freedom. As I one day aspire to be counted among the lineage of master teachers, I share my educational philosophy in the next section with these words regarding authenticity in mind. I then share the feedback of students across years and content in the following section to see if my philosophy matches with their reality (not just mine). I sincerely hope that, in the spirit of the elders named and the teaching of Ptahhotep shared above [15], that what I say (or think) I do connects to the sentiments and actual experiences of my students. My Teaching Philosophy My overall goal as an instructor is to have a lasting impact on my students; one that surpasses the semester exams; one that they not only faintly remember but hold onto and employ. I pray that, one day, I have the same impact on my students that my teachers (those names that I have called) have had on me. I seek to enhance the overall educational environment for both colleagues and students. I believe in developing relationships with students and creating an overall course experience by creating community amongst them, teaching content, fostering growth, making information meaningful, and instructing them to think both critically and globally beyond course information through the presentation of multiple perspectives. I encourage students to actively participate in their learning by conducting further research rather than blindly taking in the information I am giving to them at face value. This is a vital component to liberatory educational praxis [1]. It is important that students take information to the next level on their own. I feel it is important to be culturally responsive to meet the diverse needs of all students (kindergarten through college, traditional and nontraditional students) from different backgrounds, as all course environments are diverse, even those that appear to be phenotypically similar. The picture of the virtual classroom I created during the COVID-19 quarantine is an example of how I show that all students are welcome ( Figure 1). 
Teachers who are culturally responsive get to know their students, work to understand their cultural backgrounds, and find ways to incorporate their cultural and lived perspectives and experiences with course content [17]. They focus on student learning (as opposed to grade-focused achievement), cultural integration, social justice, and indigenous language revitalization. This is salient because research shows that how students encode, think, and process information is connected to cultural nuance, transmission, and preference [11,17,18]; and these needs do not end at the culmination of high school. Regardless of the age or level of the student I am working with, I assist them in (re)cognizing, (re)visioning, and considering cultural differences in themselves and others, and guide them through the process of challenging their own biases and deeply held/limiting beliefs about their own abilities and about those of others [5].
My unique educational background affords me the ability to guide and debrief this process by creating a non-judgmental, safe, and brave space where students push past their comfort zones and engage in difficult dialogues without defensiveness. This assists in students considering themselves responsible global citizens who understand the importance of reflecting on their own practice so that they can present differently when they join the professional workforce. This is also where the possibilities for transformation and freedom, or (re)claiming, begin to take shape. This begins on day one when students enter the classroom. In my personal practice, I take attendance by calling names aloud so that I connect them with faces and so I can hear their voices. I ask students to correct me if I mispronounce their name and ask them to tell me the meaning/stories behind their names. I ask them if there is something else they would like the class to call them. I ask that anyone who finds speaking out in class challenging let me know in private. I explain that we will all get to know each other, that they are not just bodies in seats, but important to the educational journey we are taking during the semester. During COVID, while students were wearing masks, I took the process a step further by asking them to send me selfies introducing themselves so that I could see and know their faces. I explain all expectations so they are clear about what to expect from me and the course. I explain that importance is not placed on the letter grade but what they learn and carry with them and how they engage the process. I explain that everyone in theory begins with an A; they keep it by attending class, reading, and coming to class prepared, by participating in discussions and experiential activities (which cannot be made up), by reading all instructions thoroughly and following rubrics to ensure all expectations are met, but also by being creative in how they complete assignments. I explain that although everything is laid out in the syllabus (which they should always consult and refer back to), I will also go over assignment expectations a month in advance so that they can ask questions and fully understand how to be successful and have ample time to complete assignments. I commit to giving them my all and ask that they meet me at the same level. I see learning as a dynamic and constructive process entailing content and all the prior knowledge and experience students bring with them. In terms of constructivism (the intersection of human ideas, or intelligence, with real-world experiences), I have the research-based content; however, the process unfolds (the specific examples used, and the connections made) as we go based on who is in the classroom and their particular needs [19]. I invite students to develop a growth mindset, or the belief that intelligence can develop or be enhanced [20]. I also invite them to share their personal experiences to add to the lesson, teaching them again to be active participants in their learning process (owning the experience). Through this process, students learn, or (re)cognize, that they bring their own knowledge and value to the classroom. They are not passive repositories for information [1]. They also learn to critically engage with the material on their own through active reading and metacognitive (critically thinking about their thought and learning process) practice. 
They come to understand that once they adjust their thinking or engage in the process of (re)visioning [5], there are no limits to what they can come to understand and accomplish. Anything is literally possible [11]. I view accessibility as equitable education ensuring that ALL students are afforded the opportunity to receive everything they should from the course. This is another process of (re)presenting. This means offering content in multiple formats with closed captioning for videos, making sure materials are visually stimulating (colorful and balanced) and readable (rather than informationally overwhelming), and engaging with students in ways that make them feel connected to the course, the professor, and to each other. This means designing the course with engagement and representation, and diversifying ways students can creatively express their understanding of content. I consider myself a teacher, a scholar, a growth facilitator, and a change agent. Vital to this identification is continued professional development and remaining current on educational research for innovation, engaging in best practices, adapting to the needs of various learners, and keeping course content updated and challenging [5,11]. It is salient to continue doing self-assessment and self-work and to remain solution-focused by consistently merging research and practice; doing the work rather than just talking about it. As Baba Asa Hilliard [10,21] explained, teachers should see teaching as "a calling, a constant journey towards mastery, a scientific 'becoming a library', a matter of care and custody for our culture and traditions, a matter of a critical viewing of the wider world, and a response to the imperative of MAAT". The tenets of MAAT are truth, justice, righteousness, harmony, order, balance, and reciprocity. This work is absolutely my calling. I am fully invested in my role and impact as a teacher. I like to find creative and experiential ways to help students form connections. I enjoy the process. I instill this in my students and teach them self-care in the process of balancing responsibilities and doing good work. John Henrik Clarke said, always "do your best work". Thus, I understand there is not a point at which you arrive at the end, so I attempt to continue to level up as a teacher. Teachers are learners and learners are teachers in a dynamic process. It is important to note that the process is also not always sunshine and rainbows. I also express love through holding students accountable to their responsibilities and performance. They receive the grade they earn regardless of, or more accurately, because of, our relationship. They are held accountable to being their best selves and doing their best work. When they fall short of this expectation they deal with the consequences of those decisions. It is my understanding of this, my love of teaching, my commitment to research-based practice, and my continued development to grow and address areas of weakness, as well as my desire to inspire change and have a positive impact on students, that makes me an effective instructor and mentor and allows me to create liberatory educational spaces for all learners. A Note on Amplifying Student Voices I can speak about myself all day, but as far as the reader is concerned it may or may not be true. It could simply be word play and lip service to gain another publishing credit.
I could also be writing about what I think I do, or what I aspire to be as a teacher without it actually translating to students in the ways I think it is or without it being effective. This conversation about my praxis is not truly authentic unless it comes directly from those who are impacted. I would not dare presume to speak for them without their input as they are the experts on their own experience [2,7]. This would be the antithesis of trauma-informed practice, of which powerlessness and thus voicelessness are signifying features [22]. Due to the way kindergarten through twelfth grade education in America has been traditionally structured, the classroom has historically been a space where students can feel like helpless participants in a predetermined process: a process where they have no input, and thus no voice; where outcomes do not matter. Trauma-informed practices seek to shift this by seeking student input and seeking to understand their perspectives. One of my goals as a professor is to be a positive force in the lives of my students. One way to do this through liberatory praxis is by amplifying their voices. I seek to support their growth and teach them that they all have important things to say that the world needs to hear. I work with students to come to trust their own voices while still remaining accountable to growth. In an effort to be about the work and test whether my philosophy translates into reality, I decided to trust their voices too, and to employ them as a personal guide for growth. In anticipation of this publication, I put out a call to my students to request their feedback on my teaching style and their personal experiences taking a course with me. The call was answered by students who have taken my various courses (African Diaspora and the World [ADW], Child Psychology, Junior Research, Multicultural Education, Psychology of the Inner City Child) over the years, both past and present. I asked everyone the same exact open-ended question, specifically worded in order to allow them to answer in whatever way was authentic and true to their personal experience. What was your experience with me as a teacher? Each student who submitted a response also consented to having both their response and their name published in this article. In fact, they were informed that sending a response to me was confirmation that they were giving consent to have both their words and their names printed. Choice and voice are again the central point. They were asked to write whatever was veritable to their experience and I explained that I would print their exact response without alteration.
Student Responses I present to you their (n = 10; female = 9, male = 1) unfiltered responses (unedited) beginning with a certificate a student presented to me upon her graduation in 2015 (Figure 2). Student respondents ranged from first-year students to 8-year graduates reflecting back to their time as enrolled students and speaking on continued mentorship relationships. Of the four graduates, two are currently teaching, one is in curriculum development and educational policy, and one is studying to become a lawyer as she currently engages in work focused on maternal health and reproductive justice. It is also important to note that each student respondent took at least two classes with me (7 = 2 classes; 3 = 3 classes), adding a bit more weight to their evaluations. The African Diaspora and the World (ADW) Program is centered on the experiences of African-descended people. Through this program, students learn about themselves while discovering the missing pieces of American and World history that were not taught in high school. As a graduate of a predominately white high school, I felt a sense of urgency to enroll in this course despite it being a requirement. My first semester of this course was taught by another professor on campus. Due to conflicts, this class was mainly offered virtually until the very end of the fall semester. This made it much more difficult to remain engaged and motivated in the course. Once spring semester registration opened, I knew that I had to make a change. During the summer prior to attending college, my older sister, an alum, urged me to enroll in Dr. Chateé Richardson's section of the ADW course. Since I was unable to enroll in her section in the fall due to scheduling conflicts with other courses, I made the "executive" decision to ensure that I created a schedule that would accommodate her class. From the first day, though virtual due to COVID, I knew my decision was the right one. Dr. Richardson's passion for not only African Diaspora and the World but for her students' growth radiated every minute of each class. Her vulnerability allowed us to be comfortable in our vulnerability and truly gain a sense of self. As incoming freshmen, this space was a necessity as we entered a whole new world. Her effective teaching method allowed us to understand that we come from a lineage of kings, queens, warriors, trailblazers, and ancestors who possessed pure audacity. Dr. Richardson's selflessness and open-mindedness allowed for amazing group discussions that would have us leave class with new perspectives, new joys, new motivation, new wisdom, new knowledge, and new bonds with our Spelman sisters. The realization that her class would come to an end was a dreadful one. Dr.
Richardson's approach to life and lesson created for each of us a true home away from home. It also gave meaning to the idea that the college is a family. After attending her class, I gained a mentor, second "mama" away from home, and a new sense of pride as a descendant of power. Nosharai Harris (ADW 111 and 112; Class of 2025): When entering Dr. Richardson's classroom, I felt a sense of a warm welcome and safe space. Her strong passion for the course's subject matter and, most definitely, for her students was evident to me. She treated us all as her own. Every student has a safe class they are more than comfortable in, and Dr. Richardson's was my safe class. During a regular class, we discuss the first 5 to 10 min to debrief and discuss our day or week. Then we discuss the assigned texts and/or videos before arriving at the class, leading to a class discussion. Dr. Richardson's class consists primarily of discussions favorable to my peers and me. Speaking about how our understanding of the texts created various questions and opinions. Through her excitement and accurate knowledge of the course material, I enjoyed learning the course material; her effective teaching method allowed my peers and me to truly understand the generations who came before us and helped us be where we are today. Dr. Richardson's class is one that I keep near and dear to my heart. Dr. Richardson and I remain in touch to this day, seeking assistance in my current classes and advice. In my capacity as a student, I am incredibly thankful for the opportunity to gain a mentor and a fantastic professor. Jada Holland (Child Psychology and Multicultural Education; Class of 2023): Dr. Richardson is an amazing and unique professor. I have never experienced a class as exciting, immersive, and challenging as hers. Her teaching style is very special. She builds great bonds and relationships with her students and makes sure that we know that she cares about us. She brings in a great energy to class daily, and always has a positive attitude and very thought-provoking prompts to bring to the class. My experience in multicultural education was very great. It was definitely one of my favorite classes of my senior year and I am so happy I got the chance to experience it. Dr. Richardson's class is truly an immersive experience. She has lots of hands on and field activities included, which are very beneficial to education majors, as well as any other major, willing to question their own beliefs and biases, and become a better person. I am definitely a better educator and woman because of her, and I am very glad I took her course. PowerPoints with images, videos, and key points that tied into our assigned reading. Dr. Richardson's consistent use of multimedia content enhanced my learning retention and gave me a deeper understanding of the interconnectedness of education practices, law and policy and psychology. What made her classes so unique was her ability to be vulnerable and draw from her own personal experiences as a mother, educator and Black woman. This allowed students to feel connected to their instructor while also reflecting on their own experiences. I personally really appreciated when Dr. Richardson was willing to be a speaker for an educational symposium I put together my junior year at Spelman for students in our college community. 
She facilitated a lecture around the Psychology of the Inner City Child class and although the session was an hour long and not a semester, students reviewed that they appreciated the thoughtfulness, relatability and the layers that Dr. Richardson presents. As a student it meant a lot to me that she took my personal academic projects seriously and invested her time and energy to volunteer as a speaker. Zoe Shepard (ADW 111 and 112; Class of 2026): Dr. Richardson is a dynamic teacher. Not only does she create unique assignments to teach the concepts students are meant to learn, but she does so in a way that makes them active participants in their learning. More than this, she masterfully incorporates different mediums into her lessons which not only helps to convey ideas and information, but it helps us to have a more expansive worldview. Classes are often a combination of looking through archival videos, watching films centered on the African experience, or examining different pieces of literature to learn about black people throughout the diaspora, their contributions, and the impact that their work has on our lives today. Through thorough class discussions and lectures, we as students learn to critically examine what we've known with what we're coming to know, and we get to do so in a genuinely safe space. Aside from academic feats, Dr. Richardson cares for the wellbeing of each and every one of her students in ways that inspire us to bring our whole selves and our whole perspectives with us when we enter the classroom. She doesn't just invite you to learn, she makes room for you. Delaiah Sneed (ADW 111 and 112; Class of 2026): There is a sincere embedded level of honesty in Dr. Richardson's teaching. Her curiosity surpasses the desire to know the past as it affects the present but how we as students think and can connect the dots. Her teaching is conversational, mutual, it is a two-way street that is her job just as much as it is ours. Maya Thirkill (ADW 111 and 112; Class of 2020): Dr. Richardson is an excellent professor. Her most profound lessons were in the discussions of literature and life rather than tests of memory. Her seminar teaching style encouraged students to discuss the origins of societies and cultures, which often challenged deep-held generational beliefs, without judgment. Students were asked to consider the social framing of every text when assigning its value and consider the aspects of each person's intersectional identity before meeting them with opposition. She cultivated a safe space that fostered authentic kinship between students, so that we viewed each other as partners in scholarship rather than competitors, being rewarded for impeccable regurgitation. This kinship extended beyond the classroom and continues into adulthood. Each student's worldview was expanded because they were able to exchange perspectives on sensitive topics with sonder, and this equips generations of Dr. Richardson's students with the confidence and cultural humility to continue expanding their worlds. In signing up for it, I was extremely nervous. Despite this, I am glad that I did. I can recall entering Dr. Richardson's classroom and being taught facts about black history that are seldom talked about in school. I learned that black history does not start with slavery, and although I was taught this prior to arriving to Spelman, I was taught it once again, from a different lens. We talked about race, gender, politics, geography, and so much more. 
In doing so, she granted us the platform to share personal experiences on discriminatory factors that we might have faced, to share things that we read or learned previously, and to ask questions. Her willingness to listen to us and respond to us are two traits that are seldom seen by professors. For her unique ability to do this, I thank her. Indeed, Professor Richardson is a great educator. She teaches her students so that they will comprehend the material and apply it to their own lives rather than temporarily remembering it for an exam. Her contribution has been so significant that my parents noticed a change in my appreciation for black culture, and it is because of her that my appreciation for it increased. Because of her stellar teaching habits and ability to connect with the students, I gladly recommend her. Spending my Wednesday evenings exploring the sociocultural theory, cognitive development, and psychosocial development would eventually become the highlight of my week. I vividly remember trekking to our Child Psychology course. I met the powerhouse I would come to know as Professor Richardson. She had a masterful way of building relationships that made us love coming to class. She also engaged us with real-world examples that made concepts seem real and tangible. She gave us assignments that forced us to think critically about the world around us. Instead of providing strict guidelines, she challenged us to decide what was appropriate, whether we'd included enough information, and if we "understood the assignment". Darius Williams (Child Not only have I benefitted from the wisdom imparted by Dr. Richardson, but so have the countless children taught by my fellow cohort members and me. One of the most valuable takeaways I gained from our time together was my framework for advocacy, which developed in our Multicultural Education course as we explored poverty, gender socialization, culturally responsive pedagogy, and diverse populations of students and their families. The chance to dive into these topics has greatly informed my practice as an educator. If "keeping it real" were a professor, it would be Dr. Chateé Richardson. Her connection to the work of an educator gives her a refreshing relevance and legitimacy that can only come from doing the work. Whether firmly telling me to "get my [stuff] together" or providing space and empathy, Dr. Richardson always knew what I needed and showed she cared. I am blessed to say she still does that today. Common Threads I was nervous to receive and read students' unfiltered feedback. It initially felt like a good idea, but once it was time to read what students wrote I paused because I did not know exactly what content I would encounter. I was greatly optimistic but unsure. As I read through these responses, I am overwhelmed with a myriad of feelings that I struggle to find the words to describe. I am grateful, humbled, honored, awestruck, and in some instances, speechless. I am emotional and I feel like I am doing what I have been called to do [21]. As I analyze student replies, I see common threads that are woven through different responses from students who have not spoken to one another; some who have never met as their time as my students did not overlap. The student responses represent a cross-section of the courses I teach so the sentiments expressed are not tied to feelings elicited by specific course content. 
This is important to note because the content and curriculum for African Diaspora and the World (where we resituate African people and women in history that has been taught monoculturally beginning in antiquity), Child Psychology (focused on optimal development from conception through adolescence), Multicultural Education (an experiential course where students challenge biases against different facets of identity), and Psychology of the Inner City Child (focused on the trauma children experience in these spaces and how to mitigate the effects) are vastly different. I and my style of student engagement (across an eight-year span) are the common denominators in the equation, and this actually was captured in the themes that arose from response data. My teaching philosophy and student responses were analyzed pragmatically [19] employing thematic analysis [23]. The process of pragmatism not only searches for occurrences, but how they are actually employed in real-world contexts. I was sifting through the information presented for overlapping threads or patterns between my philosophy and student experiences as well as across their responses. The goal was to allow the data to tell me the student reception or outcomes of my educational praxis. Information is presented in six lists: (1) codes that appeared only in my teaching philosophy (only three broad codes); (2) codes that only arose singularly in one student response (twenty-six experience-specific codes); (3) codes that overlapped in the teaching philosophy and one student response; (4) codes generated between two students; (5) codes appearing in multiple student responses; and (6) codes shared in the teaching philosophy and multiple student responses (between two and up to all ten responses). Thematic analysis is a stepwise process that begins with reading through qualitative information multiple times to become familiar with it and to notice initial themes and patterns. The next step is to read through it again piece by piece to generate initial codes, creating labels to collapse data. The information is read again to check codes, and they are then combined into themes in an attempt to accurately reflect what is surfacing in the data. Themes are then analyzed to search for shared meaning across the data. The analysis process ends with a meaningful understanding of what is going on within and between the data. Thematic analysis allows you to pull meaning directly from the data. As Braun and Clarke explain, it serves to both accurately "reflect reality" as it is or unfolds, and "to unpick or unravel the surface of 'reality'" to establish meaning [23] (p. 83). It allows for multilayered analysis. Table 1 represents themes that either only showed up in my teaching philosophy or only showed up once in student responses. There were only three codes that were specific to the philosophy. The codes represented here were broad (enhancing the whole environment) or very specific (continued professional development and engaging in research-based practice). The latter two codes are also very specific to my personal growth process and are not necessarily on display for students to observe. It is good that this is the shortest list of codes. This means that a majority of the codes generated from the philosophy overlapped with at least one student response or also came up for multiple students. This is the first step in understanding that my philosophy is translating into action. The overlapping threads of responses are represented in Table 2. 
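For readers who want a concrete picture of how the six lists fit together, the sorting can be thought of as simple set bookkeeping over where each code appears. The sketch below is only an illustration with invented code labels and approximate list boundaries, not the actual codes or counts from this analysis.

from collections import Counter

# Invented code labels; stand-ins for the real codes generated during thematic analysis.
philosophy_codes = {"student voice", "safe space", "relationship building", "whole environment"}
response_codes = {
    "response 1": {"student voice", "safe space", "mentorship"},
    "response 2": {"relationship building", "safe space", "real-world examples"},
    "response 3": {"mentorship", "safe space"},
}

# Count how many responses mention each code.
counts = Counter(code for codes in response_codes.values() for code in codes)

def bucket(code):
    """Assign a code to one of the six lists (boundaries approximated for illustration)."""
    in_philosophy = code in philosophy_codes
    n = counts.get(code, 0)
    if in_philosophy and n == 0:
        return "1: philosophy only"
    if not in_philosophy and n == 1:
        return "2: single student response only"
    if in_philosophy and n == 1:
        return "3: philosophy and one response"
    if not in_philosophy and n == 2:
        return "4: two student responses"
    if not in_philosophy and n > 2:
        return "5: multiple student responses"
    return "6: philosophy and multiple responses"

for code in sorted(philosophy_codes | set(counts)):
    print(bucket(code), "-", code)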
Both tables are presented on page 14 for reference. The symbols used to code themes to separate lists are also shown in the title cell of the table. I was excited to see all of the overlapping threads across lists three through six. I was particularly surprised and humbled to discover that there were many overlapping threads of multiple student responses to my personal teaching philosophy (represented in list 6). I strive to embody the best practices that I learn and research. This schematic is confirmation that I am authentically engaging in liberatory praxis. The analysis inspires contemplation on how I have presented in the past and inspiration to continue to grow and develop so that the singular and dual codes (lists 2-4) become less person-specific and more consistent so that they may be shared or experienced by all. Reading through the responses and codes and sitting in contemplation, I am actively and metacognitively thinking about my role in "(re)membering how we liberate". I am actively thinking about the spirit and responsibility of my work [5]. I am thinking about ways to create professor trainings to ensure more students have transformative and liberatory experiences regardless of environment (PWI vs. HBCU), course subject, or the cultural background of the professor. I discussed earlier that my varied (unique) educational background (drama made me comfortable in front of the classroom and creative in how I approach teaching and learning; counseling psychology taught me how to read people, get to know them, have compassion and empathy, help students to nondefensively see themselves and consider beliefs, get people to see multiple perspectives, and debrief difficult conversations; educational psychology allowed me to understand the physical, emotional, and mental intricacies of how people learn and what happens in the process) has specifically prepared me to be able to engage in this teaching style, and this is true; however, anyone can commit to this work because it all starts with building relationship. Employing a growth mindset, this is something everyone can learn to do. I am thinking about what things are possible when we simply take the time to get to know who our students are, do the work to understand what they need, and care about not just the outcome, but their learning experience. I am thinking about how we bottle this for others to learn and apply in their personal praxis but keep it dynamic and student-centered. I am thinking about how this translates and what this looks like. I am thinking about awareness and application. I am thinking about my personal calling and dynamic consistency. I am thinking about my next level. Because I have started with connecting with students as a firm foundation, I have never had a negative experience with this model. Some students become more comfortable than others. There have absolutely been moments of discomfort where I had to work to moderate conversation to a place of balance in terms of students feeling fully heard and understood. However, these students were able to engage in a deeper level of communication and agree to see each other's perspectives as valid although they were very different because they understand the purpose is not to change one another but to challenge one another and to grow in the process. There have absolutely been moments of student defensiveness and frustration that needed particular care to wade through and massage. Darius Williams mentions how I had to tell and help him to get himself together. 
None of these things are negative. They are a part of the process that takes understanding, grace, patience, flexibility, and compassion. I do, however, have to continue to check in with myself and continuously engage in my personal self-care practices as I engage this work at this level of emotional connection. Compassion fatigue, which can lead to exhaustion and burnout, is a real and serious issue of consideration [24]. The trauma-informed, socioemotional, culturally responsive, and compassionate teacher can become a magnet for heavy disclosures. Trust has been built, connections have been formed, and thus a potential pathway to real emotion has been established. There is heavy responsibility here within this space. There are times when I have to recalibrate because my reserves have been depleted. Sometimes I have to be quiet and breathe or even meditate to recenter. I may have to go outside and sit in the sunshine immediately following a conversation. I may need to take my shoes off and stand or walk in the grass with my bare feet. I need to re-ground myself. I listen to music or engage in leisure reading (I always have a leisure book with me) to return to my happy place. I run or walk consistently (engage physical activity as an outlet). For me it is all worth it and it is necessary work. My students all deserve this level of consideration, concern, and care, and according to the disclosures presented here, they personally feel they are receiving it. Conclusions This was definitely not an easy task to engage in. It was particularly nerve-racking. First, writing about yourself or your personal praxis can be a difficult task. It takes a lot of introspection and sometimes we just do not want to look that deeply. It is much easier not to. I was taught a long time ago by Cheryl Tawede Grills, one of my most prolific teachers, that once we know we cannot unknow. We must face, analyze, process, engage, and move through the new information. Knowing changes us. We have to sit in discomfort, and we have to shift. Second, when I asked students to write whatever they wanted about their experiences with me as a teacher, I opened myself up to true critique and unfiltered feedback. I asked them to be honest in their assessment of how I show up as a teacher; no holds barred. This was uncomfortable. A few of my elders have taught me that when we are completely comfortable, we are not growing. This does not mean that we cannot have moments of comfort. Sometimes the learning is comfortable when we are working to remain in a space of accountability. But this is not possible without consistent feedback. Without any prompting, I see many of the words found in my personal philosophy repeated in student responses. This tells me that I am indeed living up to the legacy of my elders and being who I say I am as an educator. I am not simply doing the work, but I am doing my best work. I learned this from my students in this moment. I thought I was doing something, but now (and I say this with humility) I know and understand that I am. I am clear that I have to continue to engage in introspection and continue this work. (Re)membering is responsibility [5]. I am truly humbled and amazed by my students' words. My heart is full and I am inspired. It is difficult to put all of my feelings into words. I think this is partly because I, like many, continue to battle impostor syndrome (the inability to believe that one's success is deserved or has been legitimately achieved; not seeing one's own skill or expertise). 
It is not about deserving though; it is about what I have worked for, and it is about my responsibility. As this article comes full circle, it is about the spirit of this work [5]. It is about the path initiated by my elders, taken up by me, and the legacy passed on to my students. I hope I continue to live up to my elders' expectations, my students' expectations, and my expectations. I will continue to engage in the processes of (re)searching, (re)visioning, (re)cognizing, (re)presenting, and (re)claiming in the process of educating to ensure that I am actually educating and that I accurately see and live my role. I understand from my teachers (the elders who came before me and my students) that honesty, consistency, and continued growth are key to maintaining this liberatory praxis. As a beginning step to my next level of work as a teacher, I ask the reader to please take a moment and refer back to the end of the introduction where I discussed my teachers who mishandled their responsibility. Think back to the memories and enduring lessons students are left with from these experiences. Think about your own personal experiences sitting in the seat of a learner. I implore you to ask yourself the following question: How will you be remembered?
13,095.6
2023-06-20T00:00:00.000
[ "Education", "Philosophy" ]
Charm and beauty isolation from heavy flavor decay electrons in Au+Au collisions at $\sqrt{s_{\rm NN}}$ = 200 GeV at RHIC We present a study of charm and beauty isolation based on a data-driven method with recent measurements of heavy flavor hadrons and their decay electrons in Au+Au collisions at $\sqrt{s_{\rm NN}}$ = 200 GeV at RHIC. The individual electron $p_{\rm T}$ spectra, $R_{\rm AA}$ and $v_2$ distributions from charmed and beauty hadron decays are obtained. We find that the electron $R_{\rm AA}$ from beauty hadron decays ($R_{\rm AA}^{\rm b\rightarrow e}$) is suppressed in minimum bias Au+Au collisions but less suppressed compared with that from charmed hadron decays at $p_{\rm T}$ $>$ 3.5 GeV/$c$, which indicates that the beauty quark interacts with the hot-dense medium and deposits energy in it, consistent with the mass-dependent energy loss scenario. For the first time, a non-zero electron $v_2$ from beauty hadron decays ($v_2^{\rm b\rightarrow e}$) at $p_{\rm T}$ $>$ 3.0 GeV/$c$ is observed, and it shows smaller elliptic flow compared with that from charmed hadron decays at $p_{\rm T}$ $<$ 4.0 GeV/$c$. At 2.5 GeV/$c$ $<$ $p_{\rm T}$ $<$ 4.5 GeV/$c$, $v_2^{\rm b\rightarrow e}$ is smaller than a number-of-constituent-quark (NCQ) scaling hypothesis. This suggests that the beauty quark is unlikely to be thermalized and is too heavy to be carried along in the partonic collectivity in heavy-ion collisions at the RHIC energy. The pursuit of the Quark-Gluon Plasma (QGP) is one of the most interesting topics in strong interaction physics [1][2][3][4]. Recent experimental results from the Relativistic Heavy-Ion Collider (RHIC) and the Large Hadron Collider (LHC) support the conclusion that a strongly coupled QGP (sQGP) has been created in ultra-relativistic heavy-ion collisions [5,6]. Studying the properties of the QGP matter and understanding its evolution in the early stage of the collisions are particularly helpful for broadening our knowledge of the early universe. Heavy quark (charm and beauty) masses, unlike those of light quarks, come mostly from the coupling to the Higgs field and are hardly affected by the strong interaction [7]. Thus heavy quarks are believed to be produced predominantly via hard scatterings in the early stage of the collisions and to be sensitive to the initial gluon density. Their total production yields can be calculated by perturbative QCD (pQCD) and scale with the number of binary collisions (N coll ) [8]. Theoretical calculations predict that heavy quarks lose less energy than light quarks due to the suppression of the gluon radiation angle by the quark mass. The beauty quark mass is a factor of three larger than the charm quark mass, thus one would expect less energy loss for beauty than for charm quarks when they traverse the hot-dense medium created in heavy-ion collisions [9][10][11]. Experimentally, the nuclear modification factor (R AA ), which is defined as the ratio of the production yield in A+A collisions divided by the yield in p+p collisions scaled by the number of binary collisions, is used to extract information on medium effects, such as parton energy loss [12]. Recent measurements of the R AA of open charm hadrons and leptons from heavy flavor (HF) decays show strong suppression at high transverse momentum (p T ), with a magnitude similar to that of light flavor hadrons, which indicates strong interactions between the charm quark and the medium [13][14][15][16][17].
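For reference, the nuclear modification factor described in words above is conventionally written as the ratio below; this is the standard definition implied by the text, not an equation copied from the paper.

R_{\rm AA}(p_{\rm T}) = \frac{\mathrm{d}N_{\rm AA}/\mathrm{d}p_{\rm T}}{\langle N_{\rm coll}\rangle\,\mathrm{d}N_{pp}/\mathrm{d}p_{\rm T}}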
However, due to technical challenges, most of the electron measurements are the sum of the products from HF decays, without the charm and beauty contributions isolated. Recently, with the help of vertex detectors, some of the experiments extracted the charm and beauty contributions from the heavy flavor electron (HFE) measurements, but with large uncertainties [18,19]. Naively, heavy quarks are too heavy to be pushed into moving together with the collective flow during the expansion of the partonic matter unless the interactions between heavy quarks and the surrounding dense light quarks are strong and frequent enough. After sufficient energy exchange, the system could reach thermal equilibrium. Therefore, heavy quark collectivity could be evidence of light quark thermalization. Its elliptic flow, defined as the second harmonic Fourier coefficient (v 2 ) of the momentum space anisotropy [20], is proposed as an ideal probe of the properties of the partonic matter, such as the thermalization, intrinsic transport parameters, drag constant and entropy [8,[21][22][23][24][25][26]. Measuring charm and beauty v 2 separately is therefore crucial to constrain the diffusion parameters extracted from quenched lattice QCD [27,28]. In particular, since the beauty mass is about three times larger than the charm mass, its final-state behavior could be different from that of charm. Unfortunately, we know very little about this. To date, there are many measurements of heavy flavor hadron spectra and v 2 , but most of them are for charmed hadrons or electrons from heavy quark decays. Some attempts at separating the charm and beauty contributions in heavy flavor decay electrons address only their momentum distributions. There is no measurement of beauty v 2 , either in hadronic decays or in indirect electron channels, at RHIC. We developed a data-driven method to isolate charm and beauty contributions from the inclusive HFE measurements based on the most recent open charm hadron measurements in minimum bias (Min Bias) Au+Au collisions at √ s NN = 200 GeV at RHIC. Taking advantage of the Heavy Flavor Tracker (HFT), the STAR experiment has achieved precision measurements of the p T spectra of D 0 mesons at 0 < p T < 10 GeV/c [13][14][15], as well as other charmed hadrons (D ± , D s and Λ c ) at 2 GeV/c < p T < 8 GeV/c [29,30] and J/ψ at 0 < p T < 10 GeV/c [31,32]. The parameterized uncertainties of the D 0 spectrum and of the extrapolation to the unmeasured region include three parts: a) the 1-σ band of the D 0 spectrum from a fit with a Levy [33] function with uncorrelated uncertainties; b) half of the difference between the Levy and power-law [34] fits; c) for correlated systematic uncertainties, the spectrum scaled to its upper and lower limits. The total uncertainty is then quadratically summed from the above three components. The D s and J/ψ uncertainties are obtained in the same way. The uncertainties of the D 0 (D s and J/ψ) p T spectra range from 10.7% (19.2% and 8.4%) at low p T up to 88.7% (95.9% and 96.5%) at high p T , where the functional extrapolation uncertainty dominates. The D ± spectrum is obtained by scaling the D 0 spectrum with a constant fitted from the measured D ± /D 0 ratio (0.429 ± 0.038), since there is no clear p T dependence observed. The Λ c spectrum is fitted and extrapolated down to zero p T with the measured D 0 spectrum multiplied by different model calculations of Λ c /D 0 [35][36][37].
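Restating the combination rule just described (a sketch of the stated procedure rather than a formula quoted from the paper), the three uncertainty components a), b) and c) are added in quadrature:

\sigma_{\rm tot} = \sqrt{\sigma_{a}^{2} + \sigma_{b}^{2} + \sigma_{c}^{2}}

where the labels a (fit band), b (Levy versus power-law difference) and c (correlated scaling) are shorthand introduced here for the three components listed in the text.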
The uncertainty of Λ c is mainly from the average of the three models and contributes less than 15.2% to the total electron spectrum from charm decays. The parameterized p T spectra of the above charmed hadrons are used as inputs and forced to decay to electrons via semileptonic decay channels; the J/ψ spectrum is input into a Pythia decayer [38] and decays to e + e − . Figure 1 shows the electron spectra from D 0 (blue dashed curve), D ± (brown dot-dot-dashed curve, scaled by 1/10), D s (green dot-dashed curve), Λ c (cyan long-dot-dashed curve) and J/ψ (magenta long-dashed curve) decays and the sum of all charm contributions (c → e, black solid curve) in Au+Au collisions at √ s NN = 200 GeV. The electron spectra from charmed hadron decays are normalized by the measured parent particle cross sections and decay branching ratios. The uncertainties of the charmed hadron p T inputs analyzed from the above process are propagated into the decay electron spectra. The uncertainties of the branching ratios are also taken into account. In particular, the uncertainty of the D ± /D 0 ratio is propagated into the spectrum of D ± → e. Electron spectra from D s , Λ c and J/ψ are scaled by N coll to 0-80% centrality from 10-40%, 10-80% and 0-60%, respectively, since there is no clear centrality dependence observed from current precision, and the normalization uncertainties are taken into account. The total uncertainties of the electrons from individual charmed hadron decays are shown as shaded bands in Fig. 1. Components of the relative uncertainties of the total c → e spectrum are shown in Table I. The black solid squares denote the inclusive HFE spectrum measured by STAR [39]. The electron spectrum from beauty decays (b → e), shown as the solid circles, is then calculated by subtracting the c → e contribution from the inclusive HFE spectrum from p T = 1.5 GeV/c to 10 GeV/c. The uncertainties of the last two points at p T > 7 GeV/c are quoted safely with 2-σ uncertainties from c → e due to the higher p T extrapolation. TABLE I. Uncertainty components of the total c → e spectrum (0-10 GeV/c) from electron spectra of charmed hadron decays. The beauty contribution fraction in the total inclusive HFE in Au+Au collisions (f b→e AA ) can be obtained by taking the ratio of b → e and the inclusive HFE, shown as solid circles in Fig. 2. Here we compare the results with previous measurements in p+p collisions by STAR via an electron-hadron correlation approach (red open squares) [39] and by PHENIX with a recently installed vertex detector (green crosses) [40]. The fixed-order next-to-leading log (FONLL) calculation [8] is presented as gray dashed curves. The STAR p+p data are parameterized with the FONLL function (cyan dashed curve with band). The averaged f b→e pp (blue solid squares) are the average of the STAR p+p data and the PHENIX p+p data, with half of their differences taken into account in the uncertainty bars. The beauty contributions in the inclusive HFE in Au+Au collisions are clearly modified compared with those in p+p collisions. At p T ∼ 4 GeV/c, beauty and charm contributions are comparable, and at p T > 7 GeV/c the beauty contribution is up to 90%, which is significantly higher than in p+p collisions.
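As an illustration of the subtraction just described, the sketch below applies it to invented, p T -binned yields; the real analysis uses the measured spectra and propagates correlated uncertainties, which this toy example does not.

import numpy as np

# Hypothetical p_T-binned yields (arbitrary units); not the measured spectra.
pt = np.array([1.5, 2.5, 3.5, 4.5, 6.0, 8.0])            # GeV/c bin centers
hfe = np.array([5.0, 1.8, 0.70, 0.30, 0.080, 0.015])     # inclusive heavy-flavor electrons
hfe_err = 0.10 * hfe
c_to_e = np.array([4.2, 1.4, 0.50, 0.19, 0.040, 0.004])  # sum of charmed-hadron decay electrons
c_to_e_err = 0.12 * c_to_e

# b -> e spectrum from subtraction, with uncorrelated errors added in quadrature.
b_to_e = hfe - c_to_e
b_to_e_err = np.sqrt(hfe_err**2 + c_to_e_err**2)

# Beauty fraction in the inclusive HFE sample.
f_b = b_to_e / hfe
for x, f in zip(pt, f_b):
    print(f"pT = {x:.1f} GeV/c  ->  f_b ~ {f:.2f}")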
Since charm is strongly suppressed, the enhanced beauty fraction is consistent with less beauty suppression compared to charm in Au+Au collisions at √ s NN = 200 GeV. The R AA of the individual charm and beauty contributions (R c→e AA and R b→e AA ) can be extracted via Eqs. (1) and (2), where f b→e AA is the beauty fraction in Au+Au collisions and f b→e pp is the averaged beauty fraction of the STAR and PHENIX data in p+p collisions from Fig. 2. R ince AA is the nuclear modification factor of inclusive electrons from heavy flavor decays measured by STAR [39]. Figure 3 shows R c→e AA and R b→e AA as a function of p T extracted from Eqs. (1) and (2) as blue solid squares and red solid circles, respectively. The results are consistent with an impact parameter template analysis with the STAR HFT (open green symbols) [19] and show improved precision. The DUKE model predictions [41] are consistent with the R b→e AA result, but overestimate the R c→e AA data at p T > 4 GeV/c. Two dashed curves of b(c) → e/FONLL are obtained directly from the definition of R AA , with the parameterized b(c) → e spectra from Fig. 1 divided by the FONLL calculations as a cross-check, and they show good agreement with the data. Clear suppression at p T ≳ 3.5 GeV/c is observed for both R c→e AA and R b→e AA , which indicates that charm and beauty quarks strongly interact with the hot-dense medium and lose energy. However, R b→e AA shows less suppression compared with R c→e AA at p T > 3.5 GeV/c, which is consistent with mass-dependent energy loss: beauty loses less energy due to suppressed gluon radiation and smaller collisional energy exchange with the medium because of its three-times larger mass compared to charm [9][10][11]. From Eq. (2), the uncertainty of R b→e AA can be calculated by standard error propagation, where δ denotes the relative uncertainty and f cb = f c→e AA /f b→e AA = (1 − f b→e AA )/f b→e AA , and the uncertainty of R c→e AA is obtained in a similar way. Table II shows the components of the relative uncertainties of R c→e AA and R b→e AA . The v 2 of D 0 (0-80%) measured by STAR [42] is parameterized with an NCQ-inspired function (Eq. (4)), modified from [43] by adding a linear term forced to pass through the origin according to the natural properties of v 2 , where n is the number of constituent quarks and p i (i = 0, 1, 2, 3) are free parameters. Assuming Λ c (n quarks = 3) also follows NCQ scaling like the D-mesons (n quarks = 2), the v 2 of charmed hadrons in each p T bin can be sampled from Eq. (4). The azimuthal angle (φ) distributions of charmed hadrons can be obtained following the function [43] dN/dφ = 1 + 2v 2 cos(2φ). The electron v 2 from beauty decays is then obtained by decomposing the inclusive HFE v 2 into its charm and beauty components, where v ince 2 denotes the v 2 of inclusive HFE from the parameterized average of the measurements of STAR [39] and PHENIX [44] by Eq. (4). The uncertainties from the parameterization of the D 0 v 2 , taken as the quadratic sum of statistical and systematic uncertainties including the high p T extrapolation, are propagated into the uncertainties of v D→e 2 ; the φ-meson spectrum [45] and v 2 (0-80%) [46] are used as inputs in a Pythia decayer. The non-zero electron v 2 from beauty decays at p T > 3.0 GeV/c is observed and is consistent with electrons from decays of hadrons containing charm or strangeness within uncertainties at p T > 4.5 GeV/c. This flavor-independent v 2 at high p T could be due to the initial geometry anisotropy. A smaller v b→e 2 compared with v c→e 2 is observed at p T < 4.0 GeV/c, which may be driven by the larger mass of beauty than that of charm.
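The extraction relations themselves do not survive in this text; the forms below are the standard decomposition consistent with the definitions given here (inclusive quantities split by the beauty fractions), offered as a reconstruction of what Eqs. (1), (2) and the v 2 relation presumably look like, not as quotations from the paper.

R_{\rm AA}^{b\to e} = \frac{f^{b\to e}_{\rm AA}}{f^{b\to e}_{pp}}\,R_{\rm AA}^{\rm ince}, \qquad
R_{\rm AA}^{c\to e} = \frac{1-f^{b\to e}_{\rm AA}}{1-f^{b\to e}_{pp}}\,R_{\rm AA}^{\rm ince}, \qquad
v_2^{b\to e} = \frac{v_2^{\rm ince}-\left(1-f^{b\to e}_{\rm AA}\right)v_2^{c\to e}}{f^{b\to e}_{\rm AA}}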
The black dashed curve represents the v b→e 2 assuming that the B-meson v 2 follows NCQ scaling. The v b→e 2 deviates from this curve at 2.5 < p T < 4.5 GeV/c with a confidence level of 98.2% (χ 2 /ndf = 11.92/4), which indicates that beauty is unlikely to be thermalized and is too heavy to be moved along with the collective flow of lighter partons. In summary, this paper reports the individual electron transverse momentum spectra, R AA and v 2 distributions from charm and beauty decays in minimum bias Au+Au collisions at √ s NN = 200 GeV at RHIC. We found that the electron R AA from beauty decays is suppressed at p T > 3.5 GeV/c but less suppressed compared with that from charm decays, which indicates that beauty interacts with the hot-dense medium and loses energy, consistent with the mass-dependent energy loss scenario. For the first time, a non-zero electron v 2 from beauty decays at p T > 3.0 GeV/c is observed; it is consistent with that of electrons from decays of hadrons containing charm or strangeness at p T > 4.5 GeV/c, which could be mainly due to the initial geometry anisotropy, and it is smaller than the flow from charm decays at p T < 4.0 GeV/c. At 2.5 < p T < 4.5 GeV/c, v b→e 2 is smaller than a number-of-constituent-quark (NCQ) scaling hypothesis, which indicates that the extremely heavy mass of the beauty quark prevents it from participating in the partonic collectivity, and that the first non-thermalized particle (the beauty quark) is observed in heavy-ion collisions at the RHIC energy.
3,714.2
2019-06-21T00:00:00.000
[ "Physics" ]
EGG LAYING AND ALBUMEN GLAND COMPOSITION OF ARCHACHATINA MARGINATA DURING GROWTH PHASES The egg laying pattern and albumen gland composition in three growth phases of Archachatina marginata (snailet, juvenile and adult) were investigated in this study. The juvenile stage laid the highest number of eggs (9.3), followed by the adult stage (4.7), while no egg was laid by the snailet phase. However, the eggs laid by the adult phase were significantly (p<0.05) heavier than those laid by the juvenile stage. The adult stage had the biggest albumen gland (7.30±0.1 cm) and the highest lipase, proteinase, α-glucosidase, amylase and cellulase activities, followed by the juvenile stage, while the snailet had the least values. However, the organic composition of the albumen gland shows that the juvenile phase has statistically higher protein, glucose and lipid contents than the other two growth phases. Correlation analysis showed a strong positive relationship between egg number and albumen gland glucose, protein and lipid composition. It can thus be inferred that the juvenile stage of A. marginata has the best reproductive capability and potential. INTRODUCTION The search for cheap animal protein by citizens of developing countries like Nigeria is enormous because of the fluctuating economy. In recent times, attention has shifted to a new branch of study, that is, micro livestock: the rearing of invertebrates and lower animals like grasscutters, rabbits and snails. Snails are unique because of their simple nature and ability to feed on a wide variety of feeds, including household wastes. Generally, snails pass through three growth stages, namely the infantile (snailet), juvenile and mature (adult) phases (South, 1992). Ademolu et al. (2009) had earlier observed that the juvenile phase of A. marginata recorded a significantly higher concentration of glucose and lipids in the haemolymph than other growth phases. Studies by different scientists have shown that the size and total number of eggs laid by snails differ from one species to another, and various reasons have been adduced for this observation, namely tract size (Idokogi and Osinowo, 1998), genetic make-up and haemolymph characteristics (Ademolu et al., 2009). Nevertheless, knowledge of the properties of the albumen gland (the site of egg coating and fertilization) can provide an insight into this age variation in egg laying pattern and ultimately assist in better production of A. marginata in captivity. The thrust of this work is to examine the egg laying pattern and albumen gland composition of A. marginata during three growth phases. MATERIALS AND METHODS The study was carried out in the Animal House of the Department of Biological Sciences, Federal University of Agriculture, Abeokuta, Nigeria (7°10′ N and 3°2′ E). Nine plastic cages (30 x 40 x 24 cm) were filled with top loamy soil up to 5 cm before introducing the snails. A total of forty-five (45) Giant African land snails, A. marginata, were purchased from the snail pen of the Department of Forestry and Wildlife Management. The snails were in three growth phases, namely snailet (±100.5 g), juvenile (±133.2 g) and adult (±211.7 g), as earlier described by Idokogi and Osinowo (1998). The cages housing the snails were cleaned every day before feeding and the top loamy soil was replaced every 2 weeks. The snails were fed ad libitum for 12 weeks with fresh pawpaw leaves and water. The eggs laid during the experiment were recorded and incubated in cans containing loamy soil.
Snails were dissected following the method of Segun (1975).The albumen gland was carefully removed from the hermaphroditic duct with a pair of sterile scissor into a clean petri dish.The colour and texture of the gland was observed by feeling with the fingers.The weight and width of albumen gland from the three groups were taken by sensitive weighing balance (PM-Mettler -K) and tape rule respectively. Nine (9) snails from each group were used for the chemical analysis of the albumen gland.The activities of amylase, αglucosidase, cellulase, lipase and proteinase were determined following the method of AOAC (1990).Similarly, the organic contents of the albumen gland were determined as follows -protein was analysed by Biuret method (Henry et al., 1977), while glucose was determined colorimetrically (Baumniger, 1974).Lipid content was assayed by method of Grant (1987). All data collected from this study were subjected to one-way analysis of variance (ANOVA) and wherever there were significant values, means separation was done by SNK-test.Correlation analysis was also done to determine the relationship between the egg number and the organic content of the albumen gland. RESULTS The juvenile stage laid the highest number of eggs followed by the adult stage, while the snailet phase did not lay egg at all.However, the adult snails laid significantly bigger eggs (table I).Also, the adult stage had the heaviest and biggest albumen gland followed by the juvenile stage the smallest was observed in the snailet (table I).The color of the albumen gland of the experimental snails varies from cream color in snailet phase to golden yellow in the adult phase (table I).The shape and texture of the gland are bean-like and soft respectively for the 3 phases of life. The albumen gland of adult A. marginata recorded higher enzymatic activities for all the enzymes than the other two phases of development (table II).Statistical analysis showed that the albumen gland of the juvenile stage had a significantly (p<0.05)higher concentrations of lipid and glucose than both adult and snailet phases.Additional analysis (r 2 ) of the number of eggs laid and the organic concentration of the albumen gland revealed a strong positive relationship between them.The r 2 values for glucose, protein and lipid were 0.99, 0.61 and 0.90 respectively. DISCUSSION The egg laying pattern of different phases of development of A. marginata reveals that the juvenile stage laid the highest number of eggs followed by the adult stage.However, the smaller number of eggs laid at the adult stage might possibly be due to degenerating of some reproductive organs due to age.The juvenile stage is a growing and active phase of snails and thus has the ability to produce higher number of eggs.Idokogi and Osinowo (1998) reported that snailets have rudimentary reproductive structures, suggesting that they were not sexually matured and this might explain their inability to lay any egg.The higher enzymatic activities of the adult phase albumen gland suggests the ability of the gland cells to digest the substrates of those enzymes better and this might likely explains the bigger sizes of the eggs laid by them as there are more nutrients available for the growth of the eggs cells.In a related study, Maha et al. 
(2009) reported that steroid hormones were abundantly present in the tissues of Biomphalaria alexandrina snails while low levels were recorded in juvenile snails. The low level in the juvenile stage probably indicates the commencement of reproductive activities, unlike the adult stage where the process is well established. The coloration of the albumen gland becomes more intense as the snail ages, suggesting its level of growth. The color of tissues like muscles suggests their age and growth level (Odiete, 1999). The juvenile stage had a significantly higher concentration of glucose and lipid in its albumen gland relative to other stages of development. Similarly, a significantly higher concentration of glucose and lipids was reported in the haemolymph of juvenile A. marginata (Ademolu et al., 2009). Lipids and glucose are rich energy metabolites that supply the cells with much needed energy at this phase of development. The higher number of eggs laid during this phase might not have been unconnected to the high concentration of these metabolites. The albumen gland organic composition might suggest the number of eggs laid by a particular snail, as there existed a strong positive relationship between albumen gland composition and number of eggs laid. Table I. Egg laying pattern and albumen gland in A. marginata. abc Mean values on the same row having the same superscript are not significantly different (p>0.05). Table II. Enzymatic activities and composition of the albumen gland of A. marginata.
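As a footnote to the statistical workflow described in the Methods (one-way ANOVA across growth phases with SNK separation, followed by correlation of egg number with gland composition), the sketch below illustrates the same steps on invented gland measurements; only the egg counts (0, 9.3, 4.7) are taken from the text, and the glucose values are placeholders.

import numpy as np
from scipy import stats

# Invented albumen-gland glucose values for the three growth phases; not the study data.
snailet  = np.array([1.1, 1.3, 1.2, 1.0])
juvenile = np.array([2.4, 2.6, 2.5, 2.7])
adult    = np.array([1.8, 1.9, 2.0, 1.7])

f_stat, p_value = stats.f_oneway(snailet, juvenile, adult)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Correlation between mean egg number and gland glucose per phase (illustrative only).
eggs    = np.array([0.0, 9.3, 4.7])
glucose = np.array([snailet.mean(), juvenile.mean(), adult.mean()])
r, p = stats.pearsonr(eggs, glucose)
print(f"egg number vs glucose: r^2 = {r**2:.2f} (p = {p:.3f})")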
1,827
2013-12-01T00:00:00.000
[ "Biology" ]
Research on Query Disambiguation and Expansion for Cross-language Information Retrieval Query disambiguation is considered one of the most important methods for improving the effectiveness of information retrieval. By combining query expansion with dictionary-based translation and statistics-based disambiguation in order to overcome query term ambiguity, information retrieval should become much more efficient. In the present paper, we focus on query term disambiguation via a combined statistical method applied both before and after translation, in order to avoid source language ambiguity as well as incorrect selection of target translations. Query expansion techniques through relevance feedback were performed prior to either the first or the second disambiguation process. We tested the effectiveness of the proposed combined method by an application to French-English information retrieval. Experiments involving the TREC data collection revealed the proposed disambiguation and expansion methods to be highly effective. Introduction In recent years, the number of studies concerning Cross-Language Information Retrieval (CLIR) has grown rapidly, due to the increased availability of linguistic resources for research. Cross-Language Information Retrieval consists of providing a query in one language and searching document collections in one or more other languages. Therefore, some form of translation is required. In the present paper, we focus on query translation, disambiguation and expansion in order to improve the effectiveness of information retrieval through various combinations of these methods. First, we are interested in finding retrieval methods that are capable of performing across languages and which do not rely on scarce resources such as parallel corpora. Bilingual Machine-Readable Dictionaries (MRDs), more prevalent than parallel texts, appear to be a good alternative. However, simple translations tend to be ambiguous and yield poor results. A combination that includes a statistical approach to disambiguation can significantly reduce the errors associated with polysemy in dictionary translation. In addition, automatic query expansion, which has been known to be among the most important methods for overcoming the word mismatch problem in information retrieval, is also considered. To reduce the effect of the ambiguity and errors that a dictionary-based method would cause, a combined statistical disambiguation method is performed both prior to and after translation. Although the proposed information retrieval system is general across languages, we conducted experiments and evaluations concerning French-English information retrieval. The remainder of the present paper is organized as follows. Section 2 provides a brief overview of related work. Both the dictionary-based and the proposed disambiguation methods are described in Section 3. A combination involving query expansion is described in Section 4. Evaluation and discussion of the experiments of the present study are presented in Section 5. Section 6 involves Word Sense Disambiguation and Section 7 describes the conclusion of the present paper.
Related Research in CLIR The potential of knowledge-based technology has led to increasing interest in CLIR.The query translation of an automatic MRD, on its own, has been found to lead to a drop in effectiveness of 40-60 % compared to monolingual retrieval (Hull and Grefenstette, 1996;Ballesteros and Croft, 1997).Previous studies have used MRDs successfully, for query translation and information retrieval (Yamabana et al., 1996;Ballesteros and Croft, 1997;Hull and Grefenstette, 1996).However, two factors limit the performance of this approach.The first is that many words do not have a unique translation and sometimes the alternate translations have very different meanings (homonymy and polysemy).The fact that a single word may have more than one sense is called ambiguity.Translation ambiguity significantly exacerbates the problem in CLIR (Oard, 1997).Most of the previously proposed disambiguation strategies rely on statistical approaches, but without considering ranking or selection of source query terms, which affect directly the selection of target translations.The second challenge is that dictionary may lack some terms that are essential for a correct interpretation of the query.In the present study, we propose the concept of the combined statistical disambiguation technique, applied prior to and after dictionary translation to solve lexical semantic ambiguity.In addition, a monolingual thesaurus is introduced to overcome bilingual dictionary limitation.Automatic query expansion through relevance feedback, which has been used extensively to improve the effectiveness of an information retrieval (Ballesteros and Croft, 1997;Loupy et al., 1998), is considered.Selection of expansion terms was performed through various means.In the present study, we use a ranking factor to select the best expansion terms-those related to all source query terms, rather than to just one query term. 
Translation Disambiguation in CLIR There are two types of lexical semantic ambiguity with which a machine translation system must contend: there is ambiguity in the source language where the meaning of a word is not immediately apparent but also ambiguity in the target language when a word is not ambiguous in the source language but it has two or more possible translations (Hutchins and Sommers, 1992).In the present research, query translation/disambiguation phases are performed after a simple stemming process of query terms, replacing each term with its inflectional root and each verb with its infinitive form, as well removing most plural word forms, stop words and stop phrases.Three primary tasks are completed using the translation/disambiguation module.First, an organization of source query terms, which is considered key to the success of the disambiguation process, will select best pairs of source query terms.Next a term-by-term translation using the dictionary-based method (Sadat et al., 2001), where each term or phrase in the query is replaced by a list of its possible translations, is completed.Missing words in the dictionary, which are essential for the correct interpretation of the query is resolved through a monolingual thesaurus.This may occur either because the query deals with a technical topic, which is outside the scope of the dictionary or because the user has entered some form of abbreviations or slang, which is not included in the dictionary (Oard, 1997).To solve this problem, an automatic compensation is introduced, via synonym dictionary or existing thesaurus in the concerned language.This case requires an extra step to look up the query term in the thesaurus or synonym dictionary, find equivalent terms or synonyms of the targeted source term, thus performing a query translation.In addition, short queries of one term are concerned by this phase.The third task, disambiguation of target translations, selects best translations related to each source query term.Finally, documents are retrieved in target language.Query expansion will be applied prior to and/or after the translation/disambiguation process.Among the proposed expansion strategies are, relevance feedback and thesaurusbased expansion, which could be interactive or automatic. Organization of Source Query Terms All possible combinations of source query terms are constructed and ranked depending on their mutual co-occurrence in a training corpus.A type of statistical process called co-occurrence tendency (Maeda et al., 2000;Sadat et al., 2001) can be used to accomplish this task.Methods such as Mutual Information MI (Church and Hanks, 1990), the Log-Likelihood Ratio LLR (Dunning, 1993), the Modified Dice Coefficient or Gale's method (Gale and Church, 1991) are all candidates to the co-occurrence tendency. 
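As a concrete illustration of the organization and translation steps described above, the sketch below looks up candidate translations term by term, falls back to a synonym list when the dictionary has no entry, and scores pairs of source terms by a simple mutual-information co-occurrence tendency. The dictionary, thesaurus and corpus counts are invented stand-ins for the MRD, EuroWordNet and the training-corpus statistics, not the resources actually used.

import math
from itertools import combinations

# Hypothetical stand-ins for the bilingual MRD, the monolingual thesaurus and corpus statistics.
bilingual_dict = {
    "banque": ["bank"], "fleuve": ["river"], "machine": ["machine", "engine"],
}
thesaurus = {"ordinateur": ["machine"]}  # source-language synonyms used as a fallback
pair_counts = {("banque", "fleuve"): 40, ("fleuve", "ordinateur"): 2, ("banque", "ordinateur"): 5}
term_counts = {"banque": 900, "fleuve": 700, "ordinateur": 400}
N = 100_000  # number of counting windows (paragraphs) in the training corpus

def translate(term):
    """Term-by-term dictionary lookup with thesaurus compensation for missing entries."""
    if term in bilingual_dict:
        return bilingual_dict[term]
    return [t for syn in thesaurus.get(term, []) for t in bilingual_dict.get(syn, [])]

def co_occurrence_tendency(t1, t2):
    """Mutual-information style score from per-window counts."""
    pair = pair_counts.get((t1, t2)) or pair_counts.get((t2, t1)) or 0
    if pair == 0:
        return float("-inf")
    return math.log2((pair / N) / ((term_counts[t1] / N) * (term_counts[t2] / N)))

query = ["banque", "fleuve", "ordinateur"]
translations = {t: translate(t) for t in query}
ranked_pairs = sorted(combinations(query, 2), key=lambda p: co_occurrence_tendency(*p), reverse=True)
print(translations)
print("most informative source pair:", ranked_pairs[0])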
Co-occurrence Tendency If two elements often co-occur in the corpus, then these elements have a high probability of being the best translations among the candidates for the query terms. The selection of pairs of source query terms to translate, as well as the disambiguation of translation candidates in order to select target ones, is performed by applying one of the following statistical methods based on co-occurrence tendency: • Mutual Information (MI) This estimation uses mutual information as a metric for the significance of word co-occurrence tendency (Church and Hanks, 1990), as follows: MI(wi, wj) = log2 [Prob(wi, wj) / (Prob(wi) × Prob(wj))]. Here, Prob(wi) is the frequency of occurrence of word wi divided by the size of the corpus N, and Prob(wi, wj) is the frequency of occurrence of both wi and wj together in a fixed window size in a training corpus, divided by the size of the corpus N. • Log-Likelihood Ratio (LLR) The Log-Likelihood Ratio (Dunning, 1993) has been used in many studies. LLR is computed from the observed and expected co-occurrence counts of wi and wj, following Dunning (1993). Disambiguation of Target Translations A word is polysemous if it has senses that are different but closely related. As a noun, for example, right can mean something that is morally acceptable, something that is factually correct, or one's entitlement. A two-term disambiguation of translation candidates (Maeda et al., 2000; Sadat et al., 2001) is required, following a dictionary-based method. All source query terms are generated, weighted, ranked and translated for a disambiguation through co-occurrence tendency. The classical procedure for a two-term disambiguation is described as follows: Construct all possible combinations of pairs of terms from the translation candidates. Request the disambiguation module to obtain the co-occurrence tendencies. The window size is set to one paragraph of a text document rather than a fixed number of words. Choose the translation which shows the highest co-occurrence tendency as the most appropriate. The disambiguation procedure is used for two-term queries due to the computational cost (Maeda et al., 2000). In addition, the primary problem concerning long queries involves the selection of pairs of terms, as well as the order for disambiguation. We propose and compare two methods for n-term disambiguation, for queries of two or more terms. The first method is based on a ranking of pairs of source query terms before the translation and disambiguation of target translations. The key concept in this step is to maintain the ranking order from the organization phase and perform translation and disambiguation starting from the most informative pair of source terms, i.e. a pair of source query terms having the highest co-occurrence tendency. Co-occurrence tendency is involved in both phases, organization for the source language and disambiguation for the target language. The second method is based on a ranking of target translation candidates. These methods are described as follows: First Method: (Ranking of source query terms, then translation and disambiguation of target translations) Suppose Q represents a source query with n terms {s1, s2, …, sn}. 1. Construct all possible combinations of pairs of source query terms. 2. Rank all combinations according to their co-occurrence tendencies, from highest to lowest. 3. Select the combination (si, sj) having the highest co-occurrence tendency, where at least one translation of the source terms has not yet been fixed. 4. Retrieve all related translations to this combination from the bilingual dictionary. 5. Apply a two-term disambiguation process to all possible translation candidates. 6. Fix the best target translations for this combination and discard the other translation candidates. 7.
Second Method: (Ranking and disambiguation of target translations)
1. Retrieve all possible translation candidates for each source query term si from the bilingual dictionary.
2. Construct sets of translations T1, T2, …, Tn related to each source query term s1, s2, …, sn, each containing all possible translations for the concerned source term. For example, Ti = {ti1, …, tin} is the translation set for term si.
5. Fix these target translations for the related source terms and discard the other translation candidates.
6. Go to the next highest co-occurrence tendency and repeat steps 4 through 6, until every source query term's translation is fixed.

Query Expansion in CLIR
Following the research reported by Ballesteros and Croft (1997) on the use of local feedback, the addition of terms that emphasize query concepts in the pre- and post-translation phases improves both precision and recall. In the present study, we have proposed combined automatic query expansion before and after translation through relevance feedback. Original queries were modified using judgments of the relevance of a few highly ranked documents obtained by an initial retrieval, based on the presumption that those documents are relevant.

However, query expansion must be handled very carefully. Simply selecting any expansion term from relevant retrieved documents could be risky. Therefore, our selection is based on the co-occurrence tendency in conjunction with all terms in the original query, rather than with just one query term. Assume that we have a query Q with n terms, {term1, …, termn}; then a ranking factor based on the co-occurrence frequency between each term in the query and an expansion term candidate, already extracted from the top retrieved relevant documents, is evaluated as:

rank(expterm) = Σ_{i=1..n} co-occur(termi, expterm)

where co-occur(termi, expterm) represents the co-occurrence tendency between a query term termi and the targeted expansion candidate expterm. Co-occur(termi, expterm) can be evaluated by any estimation technique, such as mutual information or the log-likelihood ratio. All co-occurrence values were computed and then summed for all query terms (i = 1 to n). The expansion candidate having the highest rank was then selected as an expansion term for the query Q. Note that the highest-ranked candidate should co-occur with most, if not all, of the query terms. Such expansion may involve several expansion candidates or just a subset of the expansion candidates.
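As a rough illustration of this expansion-term ranking, the sketch below scores each candidate by summing its co-occurrence tendency with every term of the original query and keeps the best-ranked candidates; the function and variable names are illustrative assumptions, and the co-occurrence estimate (MI or LLR) is passed in as a callable.

def rank_expansion_candidates(query_terms, candidates, co_occur, top_k=1):
    """Score expansion candidates against *all* original query terms.

    co_occur(term, candidate) should return the co-occurrence tendency
    (e.g. mutual information or log-likelihood ratio) estimated on the
    target-language corpus; candidates are assumed to come from the
    top-ranked documents of an initial retrieval.
    """
    scores = {exp: sum(co_occur(term, exp) for term in query_terms)
              for exp in candidates}
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_k]

# Toy example: 'loan' co-occurs with both query terms and is selected.
toy = {("bank", "loan"): 3.2, ("interest", "loan"): 2.7,
       ("bank", "river"): 0.4, ("interest", "river"): 0.1}
print(rank_expansion_candidates(["bank", "interest"], ["loan", "river"],
                                lambda t, e: toy.get((t, e), 0.0)))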
Experiments and Evaluation
Experiments to evaluate the effectiveness of the two proposed disambiguation strategies, as well as the query expansion, were performed using an application of French-English information retrieval, i.e. French queries to retrieve English documents.

Linguistic Resources
Test Data: In the present study, we used test collection 1 from the TREC data collection. Topics 63-150 were considered as English queries and were composed of several fields. Tags <num>, <dom>, <title>, <desc>, <smry>, <narr> and <con> denote the topic number, domain, title, description, summary, narrative and concepts fields, respectively. Key terms contained in the title field <title> and description field <desc>, an average of 5.7 terms per query, were used to generate English queries. Original French queries were constructed by a native speaker, using manual translation.

Monolingual Corpora: The Canadian Hansard corpus (parliamentary debates) is a bilingual French-English parallel corpus, which contains more than 100 million words of English text as well as the corresponding French translations. In the present study, we used Hansard as a monolingual corpus for both the French and English languages.

Bilingual Dictionary: The COLLINS French-English dictionary was used for the translation of source queries.

Monolingual Thesaurus: EuroWordNet (Vossen, 1998), a lexical database, was used to compensate for possible limitations in the bilingual dictionary.

Stemmer and Stop Words: Stemming was performed using the English Porter Stemmer (http://bogart.sip.ucm.es/cgi-bin/webstem/stem). A special French stemmer was developed and used in these experiments.

Retrieval System: The SMART Information Retrieval System (ftp://ftp.cs.cornell.edu/pub/smart) was used to retrieve both English and French documents. SMART is a vector space model, which has been used in several studies concerning Cross-Language Information Retrieval.

Experiments and Results
Retrieval using the original English and French queries is represented by the Mono_Eng and Mono_Fr methods, respectively. We conducted two types of experiments: those related to query translation/disambiguation and those related to query expansion before and/or after translation. Document retrieval was performed using the original and constructed queries by the following methods. All_Tr is the result of using all possible translations for each source query term, obtained from the bilingual dictionary. No_DIS is the result of no disambiguation, which means selecting the first translation as the target translation for each source query term. We tested and evaluated two methods fulfilling the disambiguation of translated queries (after translation) and the organization of source queries (before translation), using the co-occurrence tendency and the following estimations: the Log-Likelihood Ratio (LLR) and Mutual Information (MI). LLR was used for Bi_DIS, the disambiguation of consecutive pairs of source terms without any ranking or selection (Sadat, 2001); for LLR_DIS.bef, the result of the first proposed disambiguation method (ranking source query terms, then translation and disambiguation of target translations); and for LLR_DIS.aft, the result of the second proposed disambiguation method (ranking and selecting target translations). In addition, MI estimation was applied in MI_DIS.bef and MI_DIS.aft, for the first and second proposed disambiguation methods. Query expansion was completed by the following methods: Feed.bef_LLR, which represents the result of adding a number of terms to the original queries and then performing translation and disambiguation via LLR_DIS.bef; Feed.aft_LLR, the result of query translation and disambiguation via the LLR_DIS.bef method and then expansion; and finally Feed.bef_aft_LLR, the result of combined query expansion both before and after the translation and disambiguation via LLR_DIS.bef. In addition, we tested query expansion before and after the disambiguation method MI_DIS.bef, with the following methods: Feed.bef_MI, Feed.aft_MI and Feed.bef_aft_MI. Results and performance of these methods are described in Table 1.

Discussion
The first column of Table 1 indicates the method. The second column indicates the number of retrieved relevant documents, and the third column indicates the precision averaged at point 0.10 on the Recall/Precision curve.
The fourth column is the average precision, which is used as the basis for the evaluation. The fifth column is the R-precision, and the sixth column represents the difference in average precision with respect to the monolingual counterpart.

Compared to the retrieval using original queries (English or French), All_Tr and No_DIS showed no improvement in terms of precision, recall or average precision, whereas the simple two-term disambiguation Bi_DIS (disambiguation of consecutive pairs of source query terms) increased the recall, precision and average precision by +1.71% compared to the simple dictionary translation without any disambiguation. On the other hand, the second proposed disambiguation method (ranking and selecting target translations), LLR_DIS.aft, showed a potential precision enhancement, 0.5012 at 0.10 and 90.82% of the monolingual counterpart in average precision; however, recall was not improved (413 relevant documents retrieved). The best performance for the disambiguation process was achieved by the first proposed disambiguation method (ranking source query terms and then selecting target translations), LLR_DIS.bef, in average precision, precision and recall. The average precision was 101.51% of the monolingual counterpart, precision was 0.5144 at 0.10 and 436 relevant documents were retrieved. This suggests that ranking and selecting pairs of source query terms is very helpful in the disambiguation process to select the best target translations, especially for long queries of at least three terms. Results based on mutual information were less efficient than those using the log-likelihood ratio. However, ranking source query terms before the translation and disambiguation resulted in an improvement in average precision, 100.91% of the monolingual counterpart.

Although query expansion before translation via Feed.bef_LLR/Feed.bef_MI gave an improvement in average precision compared to the non-disambiguation method No_DIS, a slight drop in precision (0.4507/0.4394) and recall (413/405 relevant retrieved documents) was observed compared to LLR_DIS.bef or MI_DIS.bef. However, Feed.aft_LLR/Feed.aft_MI showed an improvement in average precision, 101.33%/101.25% of the monolingual counterpart, and improved the precision (0.5153/0.5133 at 0.10) and the recall (433/430 retrieved relevant documents). Combined feedback both before and after translation yielded the best result, with an improvement in precision (0.5242 at 0.10), recall (434 retrieved relevant documents) and average precision, 102.89% of the monolingual counterpart, when using the LLR estimation. A disambiguation using MI for co-occurrence tendency yielded a good result as well, 103.53% of the monolingual counterpart for average precision. These results suggest that combined query expansion both before and after the proposed translation/disambiguation method improves the effectiveness of information retrieval, when using a co-occurrence tendency based on MI or LLR.

Thus, the techniques of primary importance to this successful method can be summarized as follows:
• A statistical disambiguation method based on the co-occurrence tendency was applied first, prior to translation, in order to eliminate misleading pairs of terms for translation and disambiguation. Then, after translation, the statistical disambiguation method was applied in order to avoid incorrect sense disambiguation and to select the best target translations.
• Ranking and careful selection are fundamental to the success of the query translation, when using statistical disambiguation methods.
• A combined statistical disambiguation method before and after translation provides a valuable resource for query translation and thus for information retrieval.
• The Log-Likelihood Ratio was found to be more efficient for query disambiguation than Mutual Information.
• The co-occurrence frequency used to select an expansion term was evaluated against all terms of the original query, rather than against just one query term.

These results showed that CLIR could outperform the monolingual retrieval. The intuition of combining different methods for query disambiguation and expansion, before and after translation, has confirmed that monolingual performance is not necessarily the upper bound for CLIR performance (Gao et al., 2001). One reason is that those methods complemented each other and that the proposed query disambiguation had a positive effect during the translation and thus the retrieval. The combination with query expansion had an effect on the translation as well, because related words could be added.

The proposed combined disambiguation method prior to and after translation was based on the selection of a single target translation in order to retrieve documents. Setting a threshold in order to select more than one target translation is also possible, using a weighting scheme for the selected target translations in order to eliminate misleading terms and construct an optimal query to retrieve documents.

Conclusion
The dictionary-based method is attractive for several reasons: it is cost-effective and easy to perform, resources are readily available, and its performance is similar to that of other Cross-Language Information Retrieval methods. Ambiguity arising from failure to translate queries is largely responsible for large drops in effectiveness below monolingual performance (Ballesteros and Croft, 1997). The proposed disambiguation approach of using statistical information from language corpora to overcome the limitations of simple word-by-word dictionary-based translation has proved its effectiveness in the context of information retrieval. A co-occurrence tendency based on the log-likelihood ratio was shown to be more efficient than one based on mutual information. The combination of query expansion techniques, both before and after translation through relevance feedback, improves the effectiveness of simple word-by-word dictionary translation. We believe that the proposed disambiguation and expansion methods will be useful for simple and efficient retrieval of information across languages.

Ongoing research includes a search for additional methods that may be used to improve the effectiveness of information retrieval. Such methods may include the combination of different resources and techniques for optimal query expansion across languages. In addition, thesauri and relevance feedback will be studied in greater depth. A good word sense disambiguation model will incorporate several types of data sources that complement each other, such as integrating a part-of-speech tagger into the statistical models. Finally, an approach to learning from document categorization and classification in order to extract relevant expansion terms will be examined in the future.

Table 1: Evaluations of the Translation, Disambiguation and Expansion Methods (different combinations with LLR and MI co-occurrence frequencies)
Grammatical Deviations in Philippine Phishing Emails | ABSTRACT Just as language is proven to be useful in making day-to-day life convenient, some situations have also demonstrated the possibility of language being used to deceive people. One instance is the proliferating use of phishing emails. Given this occurrence, a riveting endeavor in the concept of actual language utilization is the study of grammatical deviations in phishing emails. Employing error analysis, this study sought to determine how a grammar study can be helpful in the examination of the imposters' language, specifically in the case of phishing emails. The purpose of this research was to document the dominant errors that appeared in Philippine phishing emails and to explain how grammatical deviations can give away deception. The result of this research showed that of the fifteen collected phishing emails, all or 100% contained usage errors. This means that of the fifteen phishing emails in this study, each one has errors. The most frequent of these errors are errors in capitalization, punctuation, and word forms. This result implies that although imposters pretend to be legitimate, they cannot imitate and copy the language of authentic emails. This may be because when legitimate institutions and organizations like banks, schools, or establishments release official emails, they do so after thorough proofreading and editing. Phishers, however, may not have the same mechanisms to ensure grammar correctness and accuracy. Based on these findings, the researcher infers that with good grammar skills, one has a greater chance of distinguishing phishing from genuine emails. Contrariwise, those who do not have profound knowledge of grammar conventions may have a larger possibility of falling into the phishing trap.

Introduction
Language plays an integral role in people's daily lives. With language, one can produce and receive messages and ultimately participate in communication situations. For official transactions, language also plays a crucial part in financial undertakings such as sales marketing, shopping, and banking. There is no doubt that language is a powerful tool for any type of human activity (Racoma, 2013). Just as language is undoubtedly proven to be powerful and useful in making life convenient, some instances, however, have also demonstrated the possible means by which language is used by imposters to deceive, steal, or commit a crime, and one widespread occurrence is that of phishing emails (Nlebedum, 2017).
Phishing emails are electronic messages sent to a large number of people from websites that pretend to be legitimate in an attempt to steal people's money, identity, or private and sensitive information such as credit card numbers, bank information, or passwords.It is among the goals of phishing attacks to look as if it is from an authentic organization in the hopes that someone will click on the link and provide their personal information or download malware (Giandomenico, 2020).Taking these definitions of phishing emails, it can be deduced that phishers are imposters who pretend to be someone they are not.They claim to be legitimate banks, organizations, or institutions.Although these types of deceit may be harmless for people who immediately ignore these phishing emails, these dishonesties may harm other vulnerable victims because of money loss and identity theft (Baykara & Gürel, 2018).Given the idea that language can be used by imposters in deceitful activities, one riveting field of study may be the examination of grammatical deviations in deceitful messages like phishing emails.Grammatical deviation refers to the breaking of the rules in the formation of words and sentences (Budiharto, 2016).This definition implies that grammatical deviations do not demonstrate language correctness when measured against the prescriptive rules of English.In the context of prescriptive grammar, language correctness refers to the notion that certain words, word forms, and syntactic structures meet the standards and conventions or the rules prescribed by traditional grammarians.When a text (either written or oral) displays an instance of faulty, unconventional, or controversial usage, like a misplaced modifier or an inappropriate verb tense, then such a case can be described as a usage error or a grammatical error (Nordquist, 2019;2020).In this study, grammatical deviations in phishing emails were to be treated as errors since business emails are supposed to reflect a certain degree of grammatical accuracy.In light of this goal, the following were the objectives of this paper: a) Document the grammatical deviations in the phishing emails in the Philippines b) Explain how grammatical errors can give away deception in the context of phishing emails 2. Literature Review Frost (2012) has argued that many people misspell common words, confuse similarly spelled words like "it's" and "its," or sometimes simply use the incorrect forms of words such as "there, their, and they're."More so, she enumerated the common types of grammatical errors such as subject-verb disagreement, mixing up the past and present tenses, apostrophe errors, failure to put a proper ending on a past tense verb, misuse of commas, and misplaced modifiers.In a similar vein, Amiri and Puteh (2017) also presented the common errors in academic writing.These errors are classified under sentence structure, articles, punctuation, capitalization, word choice, prepositions, verb form, singular/plural noun ending, redundancy, word form, subject-verb agreement, word order, possessive, and verb tense. 
Although grammar correctness is not treated as the primary indicator of excellence, people require grammar to communicate efficiently since language cannot function without it (Norquist, 2020).In the academe, grammar skills can be one way to show learning and progress, which is why some studies have focused on identifying the grammatical errors present in students' writing (Limengka & Kuntjara, 2013;Royani & Sadiah, 2019;Alghazo & Alshraideh, 2020).Their studies have shown that students' writing contained various grammatical errors such as verb agreement, capitalization, usage, sentence patterns, pronoun, spelling, addition, omission, misformation, misordering, blends, verb error, article errors, wrong word, noun ending errors, and sentence structures.Based on these results, it can be deduced that grammar remains a difficult thing to master even in the academe.This paper, however, attempts to bring a new insight into which grammar can be viewed.Since some authors have acknowledged that spelling or grammatical irregularities are among the cues that essentially distinguish between phishing and genuine emails (Parsons, Butavicius, Pattinson, Calic, Mccormac, & Jerram, 2016;Diaz, Sherman & Joshi, 2020;Irwin, 2022), the researcher of this present work contends that the study of grammatical errors is not only boxed in the academe but may also be useful in the examination of the legitimacy or falsity of a particular message.Hence, the goal of this present work is to bring insight into how grammar studies can be helpful in the examination of the language of imposters, specifically in the case of phishing emails. The detection of phishing emails has received considerable research efforts over the last few years.In a more technical spectrum of phishing email studies, some authors have developed certain software to aid in the detection of fraud messages.In one study, Baykara and Gürel (2018) developed a software called the "Anti Phishing Simulator."With their software, phishing, and spam emails are detected by examining mail contents.It must be noted, however, that in the earlier work of Park, Stuart, Taylor, and Raskin (2014), the need for human and computer competencies to complement each other in the detection of phishing emails has long been indicated.Hence, although some computers and gadgets have been installed with spam email detectors as a product of modern-day technology and innovations, the need for human beings to develop the skill to manually detect deception and lies cannot be underestimated.Brooks (2018) has previously recognized the importance of being able to manually recognize when an email is genuine or not.Hence, she explored the language of persuasion in phishing emails through the lens of speech acts.Eight years before this work of Brooks (2018), Chiluwa (2010) had already conducted the same study, which revealed that the commissive act is used as a persuasive strategy in hoax emails through unrealistic and suspicious promises, while the directive act is used to impart urgency in the receiver to act promptly.Both studies by Brooks (2018) and Chiluwa (2010) have accentuated the fact that phishing emails can be examined by looking at the discursive function of the language of imposters.The researcher of this current work, however, contends that other than looking at the pragmatics in the language of the phishers, the lies of the imposters behind the phishing emails can also be brought into the open through the use of error analysis. 
In a qualitative study by Blythe, Petrie & Clark (2011), they posited that spelling and grammar could not be a reliable cue to give away phishing since 38% of the texts were spelled appropriately, and 68% looked convincing as they contained logos indistinguishable from the authentic online article.This is interesting because this opposed the contentions of Parsons, Butavicius, Pattinson, Calic, Mccormac, & Jerram (2016), Sherman & Joshi (2020), and Irwin (2022), who acknowledged that spelling or grammatical irregularities are among the cues that essentially distinguish between phishing and genuine emails.These differing views imply that the cues that give away a phishing attack may vary from one context to another.In the work of Blythe, Petrie & Clark (2011), the texts used were phishing emails in Canada, where English is the first language.Hence, the phishers may have proficiency in the language since it was their first language to begin with. The review of the literature discloses that the examination of grammar issues has proven to be useful not only in the field of education but also in other fields.Since language is also used in acts of deception by imposters through phishing attacks, a study on language correctness and grammatical deviations may also be carried out to reveal the deception behind phishing emails.The review further reveals that there was a plethora of investigations on phishing emails through the lens of software development and textual and discourse analysis, but not many phishing email studies were done through the lens of error analysis.Moreover, the review also shows that authors have varying notions on whether or not grammatical deviations can be used to reveal a phishing attack, underscoring the need for a contextualized study to be conducted.Hence, this paper aims to bring the study of grammar errors outside the halls of the academe and instead identify the grammatical errors in phishing emails in the Philippines.More so, this paper aims to explain how these errors may give away deception, which can be a significant contribution to the efforts to detect phishing attacks. Methodology This study employed qualitative research, particularly error analysis.Ellis and Barkhuizen (2005) defined error analysis as a set of procedures used to identify, describe, and explain errors in a language.Simply put, error analysis deals not only with the identification and the detection of errors but also with explaining the reason for such error occurrences.Ellis and Barkhuizen (2005) further outlined the process of error analysis in four ways.These are the (a) collection of a sample, (b) identification of errors, (c) description of errors, and (d) explanation of errors.These steps are similar to the steps in error analysis laid by Corder (1974), which includes the (a) examination of the text word for word and sentence by sentence and (b) counting and converting the errors into a percentage to examine the occurrence. To gather and analyze the data, a combination of the steps of error analysis specified by Corder (1974) and Ellis and Barkhuizen (2005) was followed in this study.First, the phishing emails were collected, and each email was examined word for word and sentence by sentence.Second, the errors were then identified, and the frequency of errors was counted and converted into a percentage to examine the occurrence.Lastly, the researcher then described and explained the possible reasons for such occurrences. 
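As a concrete illustration of the counting and percentage-conversion step of the error analysis just described, the short Python sketch below tallies annotated error categories per email; the data structure and category labels are assumptions made for illustration, not the instrument actually used in the study.

from collections import Counter

def error_distribution(annotated_emails):
    """annotated_emails: one list of error-category labels per phishing email."""
    counts = Counter()
    for errors in annotated_emails:
        counts.update(errors)
    total = sum(counts.values())
    # Frequency of each error type and its share of all errors found
    return {cat: (n, round(100.0 * n / total, 2)) for cat, n in counts.most_common()}

# Example: error_distribution([["capitalization", "punctuation"], ["word form"]])
# -> {'capitalization': (1, 33.33), 'punctuation': (1, 33.33), 'word form': (1, 33.33)}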
These steps of error analysis have been useful in this study because the texts used were phishing emails.Since the phishers were pretending to be legitimate banks in the Philippines, such as Landbank of the Philippines (LBP), Bank of the Philippine Islands (BPI), and Banco de Oro (BDO), the language in their emails needed to be free from errors to appear legitimate and convincing, otherwise such errors may give away their lies and deception.The phishing emails pretending to be from LBP, BPI, and BDO were chosen since these were the ones that appeared most frequently on social media platforms when looking up the terms "email scams," suggesting that these were the banks that phishers frequently use in their phishing emails.To ensure that these emails were indeed phishing or scams, the researcher gathered the emails that the victims, targets, or recipients of these scams published on their official social media platforms to warn the public about such deception.To abide by the ethical considerations in this study, the names of these individuals who posted the phishing emails on their public social media platforms were kept confidential and were then not disclosed in this work. Results and Discussion Fifteen (15) phishing emails were gathered in this study.Examination of these emails revealed that all or 100% of these emails contained usage errors.This means that of the fifteen phishing emails gathered, none of those emails were error-free.Table 1 below shows the types of deviations or errors found across the fifteen phishing emails, as well as the frequency distribution of these types of deviations and the corresponding percentage. Based on the data the table presents, 60 errors were found across the fifteen phishing emails pretending to be Landbank, BPI, and BDO.These errors were then distributed to nine (9) types, and they were related to capitalization, punctuation, word form, preposition, verb form, spelling, articles or determiners, sentence fragments, and run-on sentences.Of the nine types of errors identified, the three dominant ones are errors in capitalization at 26.67%, punctuation at 25%, and errors in word form at 11.67%.These three categories of errors were discussed one after the other, being the top three dominant errors found in phishing emails. Table 1: Frequency distribution of the grammatical deviations in phishing emails The first dominant type of error found in phishing emails is capitalization.It was the most dominant in the sense that more than one-fourth of the errors across the fifteen phishing emails were related to capitalization issues.Frame 1 shows the samples of capitalization errors found in phishing emails. 
"Greetings, our valued Customer," "Dear Valued Client," "You can verify your Account at…" "This is to inform you that you have (1) Pending transaction…" Frame 1 Nordquist (2018) explicitly discussed the rules of capitalization in the English language.According to him, to capitalize means to use an upper case for the first letter of a word.There is a need to capitalize the first word in a sentence, the pronoun "I," and the names and nicknames of particular persons and characters.In the shown samples from the phishing emails, however, some words that do not need to be capitalized were capitalized, such as the "c" in the "Greetings our valued Customer," the "v" and "c" in the "Dear Valued Client," the "a" in the "You can verify your Account at…" and the "p" in the "This is to inform you that you have (1) Pending transaction…".The words customer, valued, client, account, and pending did not appear at the beginning of the sentence, and neither are they proper nouns.Hence, capitalizing them in the sentence, as shown in frame 1, makes them deviate from the standard norm in English.One possible reason for this error could be that the phishers were not aware of these rules and conventions concerning proper capitalization. The second dominant type of error found in phishing emails has something to do with the proper punctuation in English.It was the top two most dominant errors in the sense that one-fourth of the errors across the fifteen phishing emails are related to punctuation issues.Frame 2 shows the samples of punctuation errors found in phishing emails. "Greetings from BDO Unibank" "To verify your account, please login in our app.""Your account must be verified/" Frame 2 According to Nordquist (2016), punctuation is the set of marks such as ampersands, apostrophes, asterisks, brackets, bullets, colons, commas, dashes, diacritic marks, ellipsis, exclamation points, hyphens, paragraph breaks, parentheses, periods, question marks, quotation marks, semicolons, slashes, spacing, and strike-throughs which used to regulate texts and clarify their meanings, mainly by separating or linking the words, phrases, and clauses.One basic rule involving punctuation is to end a sentence with a period (.), a question mark (?), or an exclamation point (!).In the shown samples from the phishing emails, however, the sentences "Greetings from BDO Unibank," "To verify your account, please login in our app," and "Your account must be verified/" did not end with any of these marks.The first sentence, "Greetings from BDO Unibank," should have ended with an exclamation point (!), while the next two sentences should have ended with a full stop or a period.Thus, the lack of proper punctuation in the sentences, as shown in frame 1, makes them erroneous.One possible reason for this error could be that the phishers were not aware of these rules and conventions concerning proper punctuation or that they simply did not pay enough attention to the importance of punctuation in the sentences.The third dominant type of error found in phishing emails is categorized under word from.Based on the numerical data presented, 11.67% of the total errors across the fifteen phishing emails are related to issues with the word form.Frame 3 shows the samples of word form errors found in phishing emails. "We urge you to update your record to keep your record update.""…to shield your information and to Security your account." 
Frame 3 Word form errors happen when the correct word is selected, but an incorrect form of the word is utilized in the sentence.An example of this is the sentence, "Young people can be independence in the United States."The correct one should have been, "Young people can be independent in the United States."In the incorrect sentence, the word "independence" is a noun, and it cannot be used to describe young people.To solve this, the adjective form "independent" was used instead.The same is true for the sentences in frame 3.In the first sentence, "We urge you to update your record to keep your record update", the word update is in the form of a verb.However, the word "record" is the one being described.Hence, the adjective "updated" should have been used to modify the noun "record" instead.In the second phrase, "…to shield your information and to Security your account", the word "security" is a noun, but since it follows the infinitive "to", the verb form "secure" must have been used instead.One possible reason for this error could be that the phishers were not aware of these rules and conventions concerning proper word form or that they simply did not know that there were errors in the word form they used. Being a country that treats English as a second language, it is common for people in the Philippines to commit errors in their actual language use, especially when speaking.Emails, however, are a different story, especially those that are supposed to come from legitimate institutions and organizations like banks.Unlike oral speeches, written texts like emails can undergo editing, proofreading, and finalizing before they are released to the intended targets.Some institutions like banks may have already prepared generic email templates that can be easily filled with personalized details to suit the intended recipient.This means that the emails of authentic and legitimate banks have a few errors if not completely none. Phishing emails have to keep up with the quality of authentic emails.Since phishers are pretending to be legitimate, the quality of their emails has to reflect professionalism and authority; otherwise, they will appear as a copycat.Considering that there are sixty errors found across the fifteen phishing emails gathered, the result of this study implies that the errors in the phishing emails are not mere honest mistakes that the authors have overlooked while writing; instead, these errors were a reflection of the phishers' linguistic capacity.This goes to imply that although imposters pretend to be legitimate, they cannot imitate and copy the language of authentic emails.Unlike authentic emails, phishing emails contain an apparent amount of grammar deviations or errors.These errors then give away signs of deception.These findings concurred with the previous arguments of Parsons, Butavicius, Pattinson, Calic, Mccormac, and Jerram (2016), Sherman and Joshi (2020), and Irwin (2022), who acknowledged that spelling or grammatical irregularities are among the cues that essentially distinguish between phishing and genuine emails. 
Conclusion Generally, this study revealed that, unlike genuine emails, phishing emails contain grammatical deviations or errors, and the most frequent ones are related to capitalization, punctuation, and word forms.This result implies that although imposters pretend to be legitimate, they cannot imitate and copy the language of authentic emails.This may be because when legitimate institutions and organizations like banks or institutions send out emails, they do so after thorough proofreading and editing to make sure their emails are flawless or at least close to perfection.Phishers, however, may not have the same mechanisms to ensure grammar correctness and accuracy.Hence, the grammatical errors in their phishing emails may become cues that give away their intent to deceive. Based on these findings, the researcher of this study infers that the Philippine phishers lack grammar proficiency, considering that the capitalization and punctuation errors in their phishing emails are basic concepts taught spirally across Philippine education systems.Additionally, the researcher also infers that with good grammar skills, one can have a larger chance of distinguishing phishing from authentic emails, subsequently lowering the chance of getting deceived.On the contrary, those who do not have profound knowledge of grammar conventions may have a larger possibility of falling into the phishing trap.However, there is a need to point out that this study is limited to analyzing the grammar errors found in phishing emails.The texts used were the ones posted on the social media platforms by the recipients of these emails, and these email recipients were not interviewed to verify the role of grammar errors in their detection of these phishing emails.Considering this limitation, the study recommends a
Jet flavour classification using DeepJet Jet flavour classification is of paramount importance for a broad range of applications in modern-day high-energy-physics experiments, particularly at the LHC. In this paper we propose a novel architecture for this task that exploits modern deep learning techniques. This new model, called DeepJet, overcomes the limitations in input size that affected previous approaches. As a result, the heavy flavour classification performance improves, and the model is extended to also perform quark-gluon tagging. Introduction The Standard Model of particle physics (SM) [1,2] is a remarkably effective theory, able to describe the experimental observations made thus far in high energy physics with unprecedented precision and completeness.Despite its success however, this model fails to explain several observations like the baryon asymmetry and the presence of dark matter, which inspires searches for extensions to the SM.The study of the recently discovered [3][4][5] Higgs boson [6][7][8][9][10][11], and the search for extensions of the electroweak sector are two of the most active research sectors in the field. In both cases the investigated phenomenon sports a clear flavour asymmetry, making the ability to classify jets originating from heavy-flavour (bottom and charm) quarks extremely beneficial.Heavy-flavour (HF) jets contain an open-bottom or open-charm hadron as a result of the fragmentation process.This hadron carries a large fraction of the initial parton momentum.HF hadrons also have a sizeable lifetime, with a cτ of ∼ 0.5 mm and ∼ 0.3 mm for bottom and charm, respectively.Most of the HF hadrons produced in the fragmentation process undergo a decay process far enough from the primary interaction vertex (PV) to result in displaced tracks and its decay products can be clustered in resolved secondary vertices (SV). Traditionally, jet flavour classification has leveraged both fragmentation and lifetime properties of HF hadrons.The discrimination power provided by single features is not enough for a both efficient and pure classification, but the collective behaviour of the tracks in the jet, as well as the presence of a reconstructed secondary vertex, allows for a good performance of a combined algorithm.For this reason, machine learning has long been used in the field of jet flavour classification, starting from simple naive bayes classifiers [12,13], to more complex and modern models like shallow neural networks [14,15], or tree ensembles [14]. More recently, simple dense deep neural networks [16] have been successfully employed for this classification task alongside recurrent neural networks (LSTM) [17], outperforming previous classifiers.The ATLAS RNN algorithm [18], which is based on a recurrent neural network, processes as a sequence a small number of selected high purity charged particle tracks associated with the jet, sorted by their impact parameter.It exceeds the performance of the IP3D algorithm [19], which considers the same inputs, but does not fully take into account the correlations between the tracks. Within CMS, the DNN based b-tagger DeepCSV [20] has been used as the baseline reference for b-tagging performance for quite a while.DeepCSV is a fully connected neural network consisting of 5 layers with 100 nodes each using high level features of selected tracks and vertices as input.In contrast with the ATLAS RNN tagger, DeepCSV combines properties from selected tracks and secondary vertices directly, but does not employ sequence processing. 
The common feature between these two approaches, however, is that both models use only a small subset of the charged jet constituents that pass stringent quality criteria in order to provide the cleanest and simplest environment possible for the classifier to perform its task. Yet, a stringent selection comes with information loss and therefore potential performance degradation.It has recently been shown that architectures compressing and converting individual track features to a latent space can overcome these limitations.The ATLAS Deep Sets b-tagging algorithm [21] relying on the idea of Deep Sets [22], where the features of the input tracks are transformed into a latent space representation before being combined by a simple per-feature sum, has been shown to have similar performance as the RNN approach with reduced training and inference time given the same inputs.Studies on relaxing the input selection criteria on the tracks show a significant gain. In this paper, we describe DeepJet, a new network architecture that does not rely on a selection of the jet constituents and thus overcomes the previous limitations on the purity and number of inputs.In addition, DeepJet considers the full information of all jet constituents, charged and neutral particles, secondary vertices, and global event variables simultaneously. Training Samples The DeepJet model is trained and tested using a sample of simulated anti k T [23] jets (with R = 0.4) drawn from a mix of QCD multijet events and fully hadronic top-quark pair (t t) events, generated with PYTHIA8 [24] and POWHEGv2 [25][26][27][28], respectively.In both samples the hadronization and showering are performed using PYTHIA8.In order to apply the approach to a realistic reconstruction problem, GEANT4 [29] is used to simulate the response of the CMS Phase 1 detector [30,31] to the generated particles.The jet constituents are reconstructed using the Particle Flow (PF) algorithm [32] within the CMS Phase 1 detector reconstruction algorithms [33].The entire training sample consists of roughly 130 million jets in total.This is then split into a training, a validation and a testing sample in the ratio 0.765:0.135:0.1. Labeling The jets are labelled by ghost association [20].In this approach the last instance of each generated b and c-hadron before the decay have their momentum scaled to a very small value, in order to carry only directional information.These "ghosts" are then included among the particles to be clustered by the jet algorithm.If a jet contains at least one b hadron, it is labeled a b jet.If the jet contains at least one c hadron, and no b hadrons, it is labelled a c jet. Everything else is then labelled as a light jet.The jets labelled as b-jets are further sub-labelled into three categories: bb for jets containing two b-hadrons, b lept for jets containing b-hadrons decaying leptonically, and b for jets with b-hadrons decaying hadronically.The light jets are also sub-labelled into quark and gluon jets using the CMS definition [34]. 
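A minimal sketch of the ghost-association labelling priority described above (b over c over light, with the b sub-labels) might look as follows; the function signature and the leptonic-decay flag are simplifications introduced for illustration, not the CMS software implementation.

def jet_flavour_label(n_b_hadrons, n_c_hadrons, b_decays_leptonically=False):
    """Assign a flavour label from the ghost-associated hadron content of a jet."""
    if n_b_hadrons >= 2:
        return "bb"                       # two or more ghost-associated b hadrons
    if n_b_hadrons == 1:
        return "b_lept" if b_decays_leptonically else "b"
    if n_c_hadrons >= 1:
        return "c"                        # c hadron(s), no b hadron
    return "light"                        # later split into quark and gluon jets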
Input features DeepJet uses approximately 650 input variables, divided into four categories: global variables, charged PF candidate features, neutral PF candidates features, and SV features associated with the jet.The global variables include jet-level information such as the jet kinematics, the number of tracks in the jet, as well as the number of secondary vertices in the jets.Additionally, the global variables also include the number of reconstructed primary vertices in the event, in order to help the network learn the effect of multiple interactions per bunch crossing (pileup).The information related to the charged PF candidates is summarised in 16 features of the first 25 candidates with respect to their displacement significance.These features contain information on the track kinematics, track fit quality, displacement, and displacement uncertainty with respect to the PV.Similarly, 6 features of the leading 25 neutral candidates are provided to the network.Finally, up to 4 secondary vertices are considered, each with 12 variables, such as the flight distance significance.A full list of the variables used can be found in the appendix. Preprocessing of features Since the b-tagger is meant to generalise beyond the specific kinematic domain used for the training, it is important to remove any flavour dependence on transverse momentum (p T ), or pseudorapidity (η) to avoid the classifier leveraging those differences.During the data pre-processing, we downsample each of the flavour classes in p T and η such that they have the same shape as the b-jet class.Several variables are also bounded within a physically well defined range.In case features of an object are either not available or infinite, they are replaced with an appropriately chosen value for the specific feature.The particle objects are ordered based on their assumed importance.Charged particle candidates are ordered by impact parameter significance, while neutral candidates are ordered by the shortest distance to a secondary vertex.If there is no secondary vertex in the jet, p T ordering is used instead.Secondary vertices are sorted by flight distance significance. Architecture The concept of DeepJet is to use low-level features from as many jet constituents as possible as opposed to selecting a few that are well identified and reconstructed and exploiting higher level features.The exact number of features and constituents can vary with experiment and use case, but the all-inclusive concept remains the same.To process this large input space, a suitable architecture is needed.An illustration of the DeepJet architecture can be seen in Fig. 
1. In the first step, an automatic feature engineering is performed for each constituent using convolutional branches comprising 1x1 convolutional layers [35]. The filter size of 1x1 is chosen as inherently all candidates should undergo the same feature transformation without taking into account the other constituents in the jet, and a priori a specific class of physics objects should be treated in a consistent manner independent of input ordering (this concept is closely related to the idea of Deep Sets [22]). Separate convolutional branches are used for vertices, charged PF candidates and neutral PF candidates. The convolutional branches use a sequence of layers with a decreasing number of filters, as this projects and compresses the features of each constituent into a lower-dimensional space. Each of the outputs is then fed into a dedicated recurrent layer of the LSTM type. This treats the constituents as a sequence, following the order described in section 2.4. In particular, the LSTM nodes are well suited for dealing with the variable-length sequences of inputs which occur in jet physics. The charged candidate LSTM uses 150 nodes, whereas the neutral and secondary vertex LSTMs use 50 nodes each. The three LSTM outputs are concatenated with the global features of the jet and then fed into a fully connected layer with 200 nodes, followed by 7 fully connected layers with 100 nodes each. Finally, 6 output nodes are integrated into the multi-classifier, allowing it to perform b-tagging, c-tagging and quark/gluon tagging. Throughout the network, ReLU activation functions are used, except for the output layer, where softmax activation is applied. At the beginning of the network, and in between each layer, batch normalization [36] is performed, and dropout [37] with a rate of 0.1 is applied for regularisation.

Training procedure
The neural network is trained using the Adam optimizer [38] with a learning rate of 3 × 10^−4 for 65 epochs and a categorical cross-entropy loss. The learning rate is also halved if the validation sample loss stagnates for more than 10 epochs. After the first training epoch, the input batch normalization layer is frozen. Possible over-training is monitored through the validation loss, which is mostly dominated by medium-pT jets, and additionally through ROC curves in different pT ranges. The training is performed on an Nvidia GeForce GTX 1080 Ti GPU. With a training sample size of roughly 130 million examples, the training convergence time is around 3 days.

Comparison with other b-tagging algorithms
The overall performance of DeepJet is compared to the CMS b-tagger DeepCSV, which is a multi-classifier embedded in the CMS reconstruction framework, and which uses similar, but strongly preselected, inputs along with additional high-level variables. The comparison is made on a fully hadronic t t sample, as shown in Fig. 2. DeepJet performs significantly better than DeepCSV, with an efficiency (true positive rate) increase of almost 20% at a misidentification probability (false positive rate) of 10^−3 for jets with pT > 90 GeV. The performance of the classifiers can also be compared on a set of multi-jet QCD events, which allow accessing higher jet momenta. The inclusive results are shown in Fig. 3. A comparison for different jet pT can be found in Fig. 4 for fixed light-jet misidentification probabilities. In both cases DeepJet shows an increasing performance gain, in particular for higher jet pT, presumably exploiting the information contained in all constituents.
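Returning to the architecture described at the start of this section, a minimal Keras-style sketch is given below. It assumes the input shapes quoted in the text (25 charged candidates × 16 features, 25 neutral candidates × 6 features, 4 secondary vertices × 12 features, plus the global variables) and the quoted LSTM and dense-layer sizes; the number of global variables and the filter counts of the convolutional branches are not fully specified in the paper, so the values used here are illustrative, not the published configuration.

from tensorflow.keras import layers, models, optimizers

def conv_branch(x, filters):
    # 1x1 convolutions: the same per-constituent feature transformation is shared
    # across the sequence, independent of the other constituents in the jet
    for f in filters:
        x = layers.Conv1D(f, kernel_size=1, activation="relu")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Dropout(0.1)(x)
    return x

def build_deepjet_like(n_global=15):
    glob = layers.Input(shape=(n_global,), name="globals")
    cpf = layers.Input(shape=(25, 16), name="charged_pf")
    npf = layers.Input(shape=(25, 6), name="neutral_pf")
    vtx = layers.Input(shape=(4, 12), name="secondary_vertices")

    # Decreasing filter counts compress each constituent into a lower-dimensional space
    c = conv_branch(cpf, [64, 32, 32, 8])
    n = conv_branch(npf, [32, 16, 4])
    v = conv_branch(vtx, [64, 32, 32, 8])

    # Sequence processing with the LSTM sizes quoted in the text
    c = layers.LSTM(150)(c)
    n = layers.LSTM(50)(n)
    v = layers.LSTM(50)(v)

    x = layers.Concatenate()([glob, c, n, v])
    x = layers.Dense(200, activation="relu")(x)
    for _ in range(7):
        x = layers.Dense(100, activation="relu")(x)
        x = layers.Dropout(0.1)(x)
    out = layers.Dense(6, activation="softmax", name="flavour")(x)  # b, bb, b_lept, c, uds, g

    model = models.Model(inputs=[glob, cpf, npf, vtx], outputs=out)
    model.compile(optimizer=optimizers.Adam(learning_rate=3e-4),
                  loss="categorical_crossentropy")
    return model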
The multi-classifier nature of DeepJet and DeepCSV allows the models to also efficiently perform c-jet identification. Figure 5 shows that DeepJet outperforms DeepCSV also in this task. In this case, both DeepCSV and DeepJet use a binary discriminator defined as P(c) / (P(c) + P(udsg)). Finally, DeepJet can be used for quark/gluon discrimination. DeepJet is compared to the "quark/gluon likelihood" [34], a binary quark/gluon classifier included in the CMS reconstruction framework. For this purpose, the DeepJet output is also projected onto a binary classifier, P(uds) / (P(uds) + P(g)). Also here, a performance increase can be observed when employing the DeepJet model on a sample consisting purely of light quark and gluon jets, as shown in Fig. 6. Moreover, given that a standard quark/gluon classifier uses a binary approach, we expect a significantly larger performance gain in scenarios with more realistic heavy-flavour contamination.

Qualitative assessment of the performance gain source
Given that the architecture and the network inputs are changed simultaneously between DeepCSV and DeepJet, a new network configuration is trained to assess the relative impact of these changes on the performance. This model contains the same inputs as DeepCSV, but its architecture is changed to mimic that of DeepJet, with a series of convolutional layers feeding into an LSTM processing the list of inputs belonging to different tracks. Additionally, DeepCSV uses the six most displaced tracks in the jet passing the pre-selection criteria. In this implementation the limit is raised to 25, without removing the selection requirements. An architecture including all the inputs used by DeepJet fed into a simple dense deep neural network has been found practically impossible to train to convergence. The performance of the DeepCSV RNN model is shown in Fig. 7 for inclusive t t events. The gain in performance of DeepJet comes largely from the removal of the track selection, complemented by the network structure that allows efficient processing of the larger and less pure track set. Similar improvements coming with similar preprocessing of unselected jet constituent inputs could be observed in the Deep Set based tagger [21]. This qualitative statement also explains the additional gain observed in DeepJet at high jet pT, as the track selection was historically optimised for mild jet boosts. The default DeepJet architecture is also compared to an adapted architecture employing the Deep Set idea. For this purpose, the DeepJet architecture is adapted as follows: the 1x1 convolutional branches are extended to contain 100 filters each, increasing the number of free parameters. The final layer of each branch is enlarged further, with the charged branch being set to 256 filters, whereas the vertex and the neutral branch are set to 128 filters. At the same time, the LSTM layers are replaced by a simple sum that is evaluated independently for each convolutional branch, after which these sums are concatenated and fed to a series of fully connected layers.

As shown in Fig. 8, the performance of the Deep Set based architecture shows a slight gain for b to c jet identification, but shows no significant difference with respect to the b to light jet identification or the quark/gluon discrimination performance compared to the default DeepJet architecture.
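For completeness, the binary projections of the six-class output quoted in this section can be computed directly from the softmax probabilities, as in the short sketch below; the assumed column ordering follows the architecture sketch above and is an illustrative convention, not the CMS software definition.

import numpy as np

# Assumed column order of the softmax output: [b, bb, b_lept, c, uds, g]
B, BB, BLEPT, C, UDS, G = range(6)

def c_vs_light(probs):
    """CvsL discriminator: P(c) / (P(c) + P(udsg)), with P(udsg) = P(uds) + P(g)."""
    p_udsg = probs[:, UDS] + probs[:, G]
    return probs[:, C] / (probs[:, C] + p_udsg)

def quark_vs_gluon(probs):
    """Quark/gluon discriminator: P(uds) / (P(uds) + P(g))."""
    return probs[:, UDS] / (probs[:, UDS] + probs[:, G])

# probs = model.predict([globals_, charged, neutral, vertices])  # shape (n_jets, 6)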
Conclusion
A multiclass flavour tagging algorithm, DeepJet, has been developed to exploit the full information in a jet. The model was tested using CMS simulation. The model relies on low-level variables and a loose selection of the inputs, and it uses an architecture that can process these inputs in an efficient manner. When compared with fully connected models using a smaller set of engineered features, a gain in performance is observed in all topologies of flavour tagging, in some cases exceeding a two-fold efficiency gain for the same misidentification rate. The model goes beyond heavy-flavour tagging and performs quark-gluon discrimination as well. The applicability of the DeepJet approach has been successfully extended in different directions, such as the construction of tagging algorithms for exotic long-lived particles [39]. DeepJet has been trained using the homonymous github package [40,41], with a streamlined training process and multi-threaded data access. The training set has been scraped from the simulation data using [42], which contains an algorithmic description of all the variables used in the training.

Figure 1. An illustration of the DeepJet architecture. Three separate branches are used to process charged candidates, neutral candidates and secondary vertices. The algorithm makes use of 1x1 convolutional layers to perform automatic feature engineering for each class of jet constituents. The three RNN (LSTM) layers combine the information for each sequence of constituents. Finally, the full jet information is combined using fully connected layers.
Figure 3. The performance of DeepJet and DeepCSV in three different jet pT ranges in QCD multijet events. The performance is shown for both b vs c (dashed lines) and b vs light (solid lines).
Figure 4. Performance of DeepJet and DeepCSV as a function of jet pT. The efficiency for b-jets is shown for three different fixed light-jet misidentification probabilities (mistag rates) using QCD multijet events.
Figure 6. Quark gluon discrimination performance of DeepJet compared to the CMS "quark-gluon likelihood" method in a sample of pure light quark (uds) and gluon jets.
Figure 8. Comparison of the DeepJet architecture with an adapted DeepJet architecture based on the Deep Sets approach. Left: b jet identification performance. Right: quark/gluon jet separation performance.
Sensitivity of multi-PMT Optical Modules in Antarctic Ice to Supernova Neutrinos of MeV energy New optical sensors with a segmented photosensitive area are being developed for the next generation of neutrino telescopes at the South Pole. In addition to increasing sensitivity to high-energy astrophysical neutrinos, we show that this will also lead to a significant improvement in sensitivity to MeV neutrinos, such as those produced in core-collapse supernovae (CCSN). These low-energy neutrinos can provide a detailed picture of the events after stellar core collapse, testing our understanding of these violent explosions. We present studies on the event-based detection of MeV neutrinos with a segmented sensor and, for the first time, the potential of a corresponding detector in the deep ice at the South Pole for the detection of extra-galactic CCSN. We find that exploiting temporal coincidences between signals in different photocathode segments, a $27\ \mathrm{M}_{\odot}$ progenitor mass CCSN can be detected up to a distance of 269 kpc with a false detection rate of $0.01$ year$^{-1}$ with a detector consisting of 10000 sensors. Increasing the number of sensors to 20000 and reducing the optical background by a factor of 140 expands the range such that a CCSN detection rate of $0.08$ per year is achieved, while keeping the false detection rate at $0.01$ year$^{-1}$. Introduction A core-collapse supernova (CCSN) explosion is the final stage in the evolution of stars featuring masses greater than ∼ 8 solar masses (M⊙) [1]. Neutrinos play a crucial role during these explosions, carrying away most of the energy that is radiated during the collapse in a short burst of ∼ 10 s [2,3]. In 1987, the first and so far only neutrinos from a CCSN were detected. This supernova, named SN 1987A, exploded in the Large Magellanic Cloud, a satellite galaxy of the Milky Way at a distance of 51.4 kpc from Earth, emitting a burst of neutrinos. During this episode, 25 neutrino events with estimated energies of ∼15 MeV were detected in temporal coincidence by three different neutrino observatories: twelve by the Kamiokande neutrino detector [4], eight by the IMB detector [5] and five by the Baksan scintillation telescope [6]. Even though this detection confirmed the general picture of CCSN explosions, more than 30 years later a detailed picture of the physics of the core collapse is still missing. Neutrino telescopes that use abundances of natural water or ice as a detection medium, such as Antares [7] in the Mediterranean Sea, Baikal [8] in Lake Baikal or IceCube [9] at the South Pole, have studied neutrinos in recent decades. The deep ice at the South Pole, with its high transparency and low radioactive contamination, has proven to be an excellent site for the detection of high-energy atmospheric and astrophysical neutrinos (see e.g. [10,11]). The IceCube Neutrino Observatory instruments one cubic kilometer of South Pole ice in depths between 1450 m and 2450 m with 5160 Digital Optical Modules (DOMs). DOMs are spherical glass pressure vessels equipped with one 10-inch photomultiplier tube (PMT) facing downwards and associated read-out electronics [12]. The modules are mounted on strings with a horizontal spacing of 125 m and a vertical distance of 17 m between modules. Neutrinos are detected indirectly via secondary charged particles which are produced after a neutrino interacts in the ice or bedrock below the detector.
During their passage, these particles induce Cherenkov light emission in the ice, which is detected by the photomultipliers. The current string layout of IceCube is optimized to reconstruct the energy and direction of neutrinos with energies above ∼100 GeV. The detection of light from MeV neutrinos by more than one module, however, the prerequisite for event-based reconstruction, is very unlikely, since the produced low-energy secondary particles travel only a few centimeters in the ice. Therefore, IceCube does not have the capability to detect MeV supernova neutrinos individually. Nevertheless, a nearby CCSN can be detected by the observation of an increased counting rate in all DOMs in a time window of ∼ 10 seconds, corresponding to the supernova neutrino burst [13]. Based on the success of IceCube, new extensions to the original detector are being planned that will further improve its performance. The IceCube upgrade [14] is scheduled for installation in the Australian summer of 2022/23 and consists of about 700 additional optical modules distributed across seven strings. This extension will significantly expand IceCube's sensitivity to neutrino oscillation parameters using atmospheric neutrinos. The construction of a large detector for high-energy astrophysical neutrinos, IceCube-Gen2 [15], is planned to begin in the second half of the decade and is designed to encompass a large volume with about 10 000 optical sensors on 120 strings. Beyond a mere increase in the number of sensors, novel module types with segmented photosensitive areas are being developed for IceCube-Gen2. It is expected that such modules will provide increased sensitivity not only for high-energy neutrinos, but also for MeV neutrinos. The segmentation will enable a new approach to study low-energy events using temporal coincidences between photons within the same module (local coincidences), improving the sensitivity to CCSN detection. While closely spaced (unsegmented) sensors can also be used to detect CCSN neutrinos [16], this scenario is not applicable to very sparsely instrumented neutrino telescopes like IceCube-Gen2, which are optimized for high-energy astrophysical neutrinos. The KM3NeT Collaboration [17] is developing a local-coincidence method for the segmented sensors of their sparse detectors in the Mediterranean Sea [18]. In this work, we investigate CCSN detection via local coincidences for a detector in the South Pole ice equipped with 10 000 multi-PMT digital Optical Modules (mDOMs) [19] which has been developed for the IceCube Upgrade [20]. The paper is organized as follows: in Section 2 an introduction to the mDOM concept and its features is provided. In Section 3 we describe the simulation of su- pernova neutrinos used for the sensitivity studies. The final Section 4 describes a set of trigger conditions for MeV neutrino identification and background suppression, which is subsequently used to estimate the sensitivity of a detector equipped with mDOMs for the detection of extra-galactic CCSNe. A multi-PMT optical module for the South Pole ice The mDOM [19], depicted in Fig. 1, features 24 PMTs with 80 mm photocathode diameter housed inside a pressure vessel. PMTs and support structure are coupled to the pressure vessel with optical gel. In contrast to the single large PMT in the current IceCube DOM, the PMTs in the mDOM cover the whole solid angle with high homogeneity. They are surrounded by reflector cones to further increase the PMT's effective area. 
This design results in several advantages: -Increased sensitive area per module by more than a factor two; -A near uniform 4π coverage; -PMT orientations provide intrinsic information about the photon direction, thereby improving event reconstruction; -Simultaneous photon detection in several PMTs of the same module (local coincidence) which can be utilized to identify neutrino interactions over background. In particular, the last point has the potential to distinguish photons from interacting MeV neutrinos from detector background. In this context, each module can be treated as an individual detector which performs an event-by-event detection. Thus, the sensitivity scales with the number of sensors and the results of our study are independent of the inter-module spacing. Simulation A GEANT4 [21] Monte Carlo simulation of the mDOM [22] has been adapted and is used to study the sensitivity of the module to MeV supernova neutrinos. The simulation takes into account the interactions of MeV neutrinos in the ice and the mDOM response, as well as the optical background produced by radioactive decays in the glass of the pressure vessel. Detector simulation The mDOM simulation includes all relevant mechanical components of the module: pressure vessel, PMTs, reflector cones, support structure and optical gel. The PMT is modeled as a solid glass body with the photocathode on the inside. Each component has its own material properties. The support structure of the PMTs is simulated as a total absorbing massive object. When a photon reaches the surface of the photocathode it is removed and saved. Its detection probability is given by the quantum efficiency (QE) of the PMT. A further simulation of the PMT or front-end electronics is not included. The medium around the module was modeled after the properties of the South Pole ice [23]. These vary with depth and have a layered structure reflecting climatological changes or volcanic activities in the past [24]. In the simulation, absorption and scattering are parameterized as a function of the mDOM depth, however, on the scale of the simulated volume around the module, the ice properties are assumed to be constant. The simulations are performed using a single mDOM and afterwards are scaled to a detector with 10 000 modules. The varying ice properties in depths between 1400 to 2490 m 1 are taken into account by using the effective per-module volume (see section 3.3). With a vertical module spacing of approximately 13 m in the IceCube-Gen2 detector, shadowing effects can be neglected. 1 IceCube-Gen2 strings will probably be longer and extended into greater depths, increasing the number of modules in cleaner ice and thus the detector performance. However, in this work, the module distribution is limited to depths with measured ice properties by IceCube's LED calibration system [23]. SN neutrino burst The gravitational collapse of type II supernovae ends in a neutron star or, if the mass of the progenitor is larger than ∼ 10 M , a stellar-mass black hole might be the outcome [25]. During this collapse, electron capture by protons produces a temporally sharp electron neutrino burst lasting ∼ 10 ms, which is called neutronization peak. After that, the main reaction that produces neutrinos and antineutrinos of all flavors is pair production. 
The neutrino energy spectra are parameterized as [26,27]:

f(E_ν, t) ∝ (E_ν / ⟨E_ν⟩)^α(t) exp[−(α(t) + 1) E_ν / ⟨E_ν⟩],   (1)

where E_ν and ⟨E_ν⟩ are the neutrino energy and its mean, respectively, t is the time after the collapse, and α(t) can be calculated from the moments of the neutrino energy spectrum,

α(t) = (2⟨E_ν⟩² − ⟨E_ν²⟩) / (⟨E_ν²⟩ − ⟨E_ν⟩²).   (2)

In this work, we employ fluxes from two different CCSN models [28], which are based on the LS220 (Lattimer-Swesty) Equation of State (EoS) [29]: a high-mass progenitor of 27 M⊙ and a low-mass progenitor of 9.6 M⊙. The luminosity, mean energy, and mean squared energy as a function of time from the models are shown in Fig. 2. Upon reaching Earth, some neutrinos will interact within the instrumented volume. The most important reactions for a detector in the ice are elastic neutrino-electron scattering (ENES) for electron neutrinos originating from the neutronization peak, and inverse beta decay (IBD), which has the largest cross section of all MeV neutrino interactions in ice [13,30]. The generated e± usually exceed the Cherenkov threshold and therefore produce visible light that propagates through the ice and can be detected by the optical modules. Note that we do not take into account interactions with oxygen nuclei or the scattering of other neutrino flavors, which reduces the event rate by about 5% (see Table 1). Higher-energy neutrinos produce higher-energy secondary particles in the ice, which in turn produce more photons, increasing the probability of being detected in one or more PMTs. Thus, although the mean energy of the CCSN neutrinos ranges mainly from about 5 to 15 MeV (see Fig. 2), the larger effective cross section of the higher-energy neutrinos and their higher Cherenkov photon yield lead to a mean energy of about 25 MeV for the detected neutrinos. Results shown in this work do not include any oscillation scenario, which is a conservative choice. In both normal and inverted mass ordering scenarios a larger number of detected events is expected than in the no-oscillation case, with the inverted scenario providing the largest signal because energetic ν̄_x would oscillate into ν̄_e [13]. Effective volume The effective detection volume of a mDOM for MeV neutrinos is calculated for different depths with the simulation described in Section 3.1. Electrons are injected homogeneously into a sufficiently large spherical volume of South Pole ice surrounding the module. As they travel through the ice, they induce Cherenkov radiation that can be detected by the module's PMTs. The effective volume is given by

V_eff = (N_det / N_gen) · V_gen,   (3)

where N_det and N_gen are the number of detected and generated electrons, respectively, and V_gen is the generation volume. Electrons producing at least one photon registered by a PMT are considered detected. A sufficiently large generation volume means that an increase in the generation volume does not increase the effective volume. To reduce computational effort, the mDOM is assumed to be spherically symmetric, requiring particles to be simulated only from one direction. This approximation is sufficient for the re-scaling of the effective volume used later in the analysis. The same light production is assumed for electrons and positrons. Simulations at different depths have been performed for an electron energy of 25 MeV, where each depth has a corresponding absorption and scattering length taken from [24]. The effective volume as a function of the module depth is shown in Fig. 4. Table 1 presents a comparison to the effective volumes of the IceCube detector, which were calculated in [13]; a minimal sketch of this effective-volume estimate is given below. 
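As a concrete illustration of the relation V_eff = (N_det / N_gen) · V_gen described above, the following minimal Python sketch turns Monte Carlo counts into an effective volume with a simple binomial uncertainty. The numerical inputs are purely illustrative and are not taken from the simulations of this work.

import math

def effective_volume(n_detected, n_generated, v_generation_m3):
    """V_eff = (N_det / N_gen) * V_gen with a simple binomial uncertainty."""
    frac = n_detected / n_generated
    v_eff = frac * v_generation_m3
    sigma = v_generation_m3 * math.sqrt(frac * (1.0 - frac) / n_generated)
    return v_eff, sigma

# illustrative numbers only (not taken from the paper)
print(effective_volume(n_detected=12_500, n_generated=1_000_000,
                       v_generation_m3=1.4e5))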
For this result, it is assumed that the 10 000 mDOMs are evenly distributed over the same depths as the IceCube modules.

Table 1. Effective volume and effective mass for IceCube and a detector equipped with 10 000 mDOMs. Quantities have been obtained for the detection of e− with an energy of 25 MeV. The results for the mDOM detector include statistical uncertainties only, while the IceCube results also include systematic uncertainties. IceCube data taken from [13].

Detector | V_eff per module (m³) | Total V_eff (m³) | Effective mass (Mton)
IceCube with DOMs | 725 ± 95 | (3.8 ± 0.5) × 10^6 | 3.5 ± 0.5
Detector with 10 000 mDOMs | 1776 ± 6 | (1.705 ± 0.005) × 10^7 | 15.71 ± 0.05

An increase of about a factor 2.4 for the effective volume per module is obtained for the mDOM (in agreement with the larger effective photocathode area), resulting in a factor ∼ 4.5 larger effective volume of a detector equipped with 10 000 mDOMs compared to the current IceCube detector. CCSN simulation In order to perform the simulation efficiently, positrons from IBD and electrons from ENES are generated in the ice at random positions inside the simulated volume following the model described in Section 3.2. It is assumed that the neutrino flux comes from the zenith. To simulate the events, a time t of the flux is sampled according to the flux distribution from Fig. 2. Using the associated mean energy and mean squared energy, the energy spectrum at each point in time is calculated using Eq. 1. Afterwards, a single energy value from the spectrum is sampled according to its probability. Next, the corresponding angular cross section for this energy and the interaction that is being simulated is calculated, and a direction for the generated particle is sampled. Detected events are weighted according to their interaction probability and the total flux of (anti)neutrinos through the simulated volume. The weight W is given by the product

W = W_int · W_flux · W_eff.   (4)

The first element, W_int(E), represents the interaction probability of the neutrino traveling through the simulated volume. The generation volume surrounding the module is simulated as a cylinder of South Pole ice with a radius of 20 m and a length of 40 m with the mDOM in its center. The cylinder base is facing the SN, which simplifies the weight calculations. Thus W_int is given by

W_int(E) = σ(E) · n_target · l,   (5)

where σ(E) is the total cross section of the simulated interaction, n_target is the number of targets per volume in the ice for such an interaction, and l = 40 m is the length of the simulated volume. Note that W_int(E) is different for each simulated particle since it depends on the energy. The flux weight W_flux relates the number of generated particles to the total number of neutrinos crossing the simulated volume; in its calculation, N_gen is the number of particles that have been generated, r = 20 m is the radius of the cylindrical cap of ice facing the SN, and d is the distance at which the SN is being simulated. L(t) and E(t) are the fluxes and mean energies from the CCSN models. Finally, W_eff accounts for the fact that the simulation is performed for a single mDOM at a particular depth z_sim. Since the ice properties vary with depth, the results are scaled using the mean effective volume of the whole detector; in this scaling, N_mDOM is the number of mDOMs that are being simulated and V_eff(m, z_sim) is the effective volume for multiplicity m at the depth where the simulation is performed. We define multiplicity as the number of PMTs per module that have detected photons from the same event. The effective volume does not scale in the same way for events detected in a single PMT as it does for events detected in multiple PMTs. 
This is because coincident events are generally closer to the module than events detected in a single PMT, and thus are less affected by a variation in absorption length. Therefore, events are assigned a different effective volume weight V eff (m) depending on the multiplicity m, with each one calculated using the method of Section 3.3 with the appropriate multiplicity condition. The limited size of the simulated volume, chosen to speed up the simulations, does not cover the whole sensitive volume for a detection threshold of a single photon registered in a PMT. Therefore, the following calculations slightly underestimate the sensitivities. However, this does not impact the conclusions in this work, since our main results are based on events detected in coincidence in several PMTs. The distribution of event multiplicities for a CCSN simulation at 10 kpc distance are shown in Fig. 5. For the high-mass CCSN we obtain more than 2.4×10 6 neutrinos with at least a single detected photon and ∼ 1.4 × 10 6 for the low-mass progenitor. In both cases, more than 12% of the detected events are in local coincidence, i.e. at least two photons are registered in different PMTs, while the value is slightly lower for the low-mass progenitor. Figure 6 shows the distribution of time differences between the first and last registered photon for events detected in coincidence. It demonstrates that ∼ 85% of the local coincidences occur in less than 1 ns between the first and last registered photon, and ∼ 99% within 10 ns (note that the simulation does not include PMT pulse generation and readout). Hence, photons from the same event arrive almost simultaneously at different PMTs. This property can be exploited to suppress background and improve the identification of individual MeV neutrinos. Identification of MeV neutrinos from CCSNe over background In this section, a study of local and global coincidence conditions is presented with the aim to optimize the signal-to-background ratio. To this end, we define trigger conditions for identifying single neutrino interactions based on fast local coincidences, as well as a minimum number of such interactions for identifying a supernova outburst. The trigger conditions are defined as follows: a) Once a PMT detects a photon, a time window of length ∆t coin is opened. If within ∆t coin at least m−1 different PMTs in the same module detect one or more additional photons the trigger condition is fulfilled and a neutrino event is detected; b) After condition a) is satisfied, a second time window opens with ∆T SN = 10 s. If N ν − 1 further neutrino events according to a) from any mDOM in the detector fall into this time window, a supernova detection is claimed. The trigger conditions are designed to be strict enough to effectively suppress backgrounds and keep the false detection rate at an acceptable level. In the following sections, the main background sources are discussed and an estimate of the sensitivity of the method to extra-galactic CCSNe is presented. Background sources Due to the low radioactivity of the deep ice at the South Pole, background events originate almost exclusively from the intrinsic noise of the optical module and from the interaction of neutrinos of similar energy in the ice from other sources. These backgrounds are described in the following. PMT dark rate PMTs produce a measurable signal (dark rate) even if operated in complete darkness [31]. 
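For concreteness, the following Python sketch shows one way the two trigger conditions could be evaluated offline on a list of PMT hits. The input format (time, PMT id) and the simple sliding-window logic are our own assumptions for illustration, not the actual mDOM read-out or IceCube software.

def local_coincidences(hits, m=7, dt_coin=20e-9):
    """Trigger condition a): return seed times of events where at least m
    different PMTs of one module fire within dt_coin seconds.
    `hits` is a list of (time_s, pmt_id) tuples for a single mDOM."""
    hits = sorted(hits)
    events = []
    i = 0
    while i < len(hits):
        t0 = hits[i][0]
        window_pmts = {pid for (t, pid) in hits[i:] if t - t0 <= dt_coin}
        if len(window_pmts) >= m:
            events.append(t0)
            # skip the rest of this window so one physical event is counted once
            while i < len(hits) and hits[i][0] - t0 <= dt_coin:
                i += 1
        else:
            i += 1
    return events

def supernova_trigger(event_times, n_nu=7, dt_sn=10.0):
    """Trigger condition b): True if n_nu neutrino events from any module
    fall inside a sliding window of dt_sn seconds."""
    ts = sorted(event_times)
    return any(ts[i + n_nu - 1] - ts[i] <= dt_sn
               for i in range(len(ts) - n_nu + 1))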
The dark rate of the PMT model in the mDOM at −30°C, a typical operating temperature in the deep ice, was measured with ∼ 30 s −1 for a threshold of 0.3 pe [32]. The exact value varies from PMT to PMT and ranges up to ∼ 50 s −1 . We assume a conservative dark rate of f d = 50 s −1 for the purpose of this study. Following a similar calculation performed in [16], the probability that the PMT produces a dark rate signal within a time window ∆t coin is given by the complementary probability of not registering any photon: Once a photon is detected, at least m − 1 other PMTs in the module are required to register dark counts in the time window ∆t coin to satisfy trigger condition a). The probability for this can be calculated again using the complementary probability where N PMT = 24 is the number of PMTs in a mDOM and B cum (m|n, p) = m k=0 n k p k (1 − p) n−k is the cumulative binomial distribution for m successes out of n tries when the probability for success is p. The rate at which trigger condition a) is satisfied due to the PMT dark rate in a module anywhere in the detector is given by with N tot being the total number of mDOMs in the detector. Radioactive decays in the pressure vessel glass Scintillation and Cherenkov radiation from radioactive decays in the glass of the pressure vessel are the main background source in IceCube DOMs [9] and the mDOM. The radiation levels of the borosilicate glass used in the mDOM have been measured 2 and are listed in Table 2 assuming secular equilibrium between isotopes within the natural decay chains. Radioactive decays in other parts of the module, e.g. the gel, are found to be negligible. A GEANT4 simulation is performed to estimate the noise contribution of these decays in the mDOM. Within a simulation run, the number of isotopes decaying inside a time window of 20 min is drawn randomly from a Poisson distribution. All decay chains are simulated independently and the photons reaching a PMT are saved after taking its QE into account. Subsequently, the photons are mixed within the time window. Temporal correlations between mother-daughter isotopes are preserved if the decay time is within the time window. Otherwise, the decay time is changed to a random time within the window so that the number of decays remains constant (following the secular equilibrium argument). Finally, the time-ordered hits are checked for coincidences. Solar neutrinos Solar neutrinos with an energy above the Cherenkov threshold in ice originate mainly from β + -decays of 8 similar to that of supernova neutrinos. Their spectrum is simulated in GEANT4 with a total flux of Φ νe = 1.7 × 10 6 cm −2 s −1 and Φ νµ,τ = 3.3 × 10 6 cm −2 s −1 using the data from [34]. We consider only the elastic scattering process with electrons in the ice, since it dominates the interaction rate by two orders of magnitude [35]. Further background sources The following background sources are not included in this study: -Cosmic-ray induced atmospheric muons: these highenergy muons can penetrate into the instrumented volume, generate photons, and produce a detection pattern in a mDOM similar to that of MeV neutrinos. It is expected that due to their extended signature in the detector, the majority of these muons can be identified and rejected, resulting in a short dead time (not accounted for in this work). However, some of them might escape the identification algorithm and thus become part of the background for this analysis. 
In fact, for KM3NeT, atmospheric muons cause the majority of background events at multiplicities above six [18]. On the other hand, the scattering length in ice is much shorter than in water. Hence, photons produced simultaneously by a muon arrive spread out in time at a module. Therefore, for our case, we expect a smaller contribution of these muons to the background. Corresponding studies require a full detector simulation, though, which goes beyond the scope of this work. -Muons or electromagnetic showers from low-energy atmospheric neutrinos: these neutrinos have energies in the range of 10 MeV - 1 GeV, but their interaction rate is only ∼ 1 s−1 in a Gton volume of ice [36]. The expected background event rates for different multiplicities in a time window of ∆t_coin = 20 ns in a detector with 10 000 mDOMs are depicted in Fig. 7. The largest contribution to the background comes from radioactive decays in the pressure vessel glass. Scintillation is the main background source for the overall detection rate, producing ∼ 98% of the detected light after a radioactive decay. Nevertheless, since this light is emitted over a long period of time, the probability for high-multiplicity coincidences is low compared to that from Cherenkov emission. For m ≥ 6, this allows the use of simulations without scintillation, which significantly reduces the computational cost and, as a consequence, increases statistics by a factor of ten. A total of 4500 days of radioactive background in a single mDOM is simulated for the no-scintillation case. Note that the cutoff at high multiplicities for radioactive and solar background in Fig. 7 is due to limitation in statistics, since these events are very rare. Due to the lack of noise events at higher multiplicities, only trigger conditions up to a threshold of m = 8 are considered. In particular, the 232Th chain produces, by far, the most important contribution to the background due to 208Tl, which decays into an excited state of 208Pb that produces a γ of ∼ 2.6 MeV. This γ can either interact with the vessel glass or the surrounding ice. In both cases, some of its energy is transferred to an electron, which then generates Cherenkov radiation that produces a detection pattern in the module similar to MeV neutrinos. Note that ∆t_coin = 20 ns is still conservative with respect to the time resolution of the module, which we expect to be around 10 ns. As the coincidence time of most photons from supernova neutrino interactions is lower than 1 ns (Section 4.1.4), reducing the time window to a few nanoseconds would further suppress the background.

Table 3. Number of signal and background events and effective volume for different multiplicities (see trigger conditions in Section 4; ∆t_coin = 20 ns, ∆T_SN = 10 s), for a detector with 10 000 mDOMs.

Table 3 shows the number of detected events for signal and background for ∆t_coin = 20 ns and ∆T_SN = 10 s as well as effective detector volumes for a detector with 10 000 mDOMs. These conditions will also be used for the sensitivity estimation in the next chapter. CCSN sensitivity The total rate of background events satisfying trigger condition a) is obtained by summing the contributions of the individual background sources (Eq. 13). Here we neglect triggers from the combination of background photons from different sources as their contribution is small. With this background, the expected number of false supernova detections N_fSN within an observation time δt is given by Eq. 14, where P_cdf is the cumulative Poisson distribution. 
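To make the background estimate tangible, the sketch below combines the two ingredients discussed above: the rate at which PMT dark counts alone fake an m-fold local coincidence, and the expected number of false supernova detections when N_ν such triggers are required within ∆T_SN. This is a rough Poisson/binomial approximation of our own (it neglects double counting and does not reproduce the exact form of Eqs. 13-14); it is meant only to show the structure of the calculation.

import math

def dark_coincidence_rate(f_dark=50.0, dt_coin=20e-9, m=7,
                          n_pmt=24, n_modules=10_000):
    """Rate (per second) at which dark counts alone satisfy trigger condition a)."""
    p = 1.0 - math.exp(-f_dark * dt_coin)      # one PMT fires within dt_coin
    n = n_pmt - 1
    # probability that at least m-1 of the other PMTs also fire in the window
    p_coin = sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
                 for k in range(m - 1, n + 1))
    # seed-hit rate in the whole detector times the coincidence probability
    return n_modules * n_pmt * f_dark * p_coin

def false_sn_expectation(r_bg, n_nu=7, dt_sn=10.0, obs_time=3.156e7):
    """Expected number of false SN detections within obs_time (default: one year),
    given a total background trigger rate r_bg for condition a)."""
    mu = r_bg * dt_sn
    # probability that a trigger is followed by >= n_nu - 1 further triggers
    p_follow = 1.0 - sum(math.exp(-mu) * mu**k / math.factorial(k)
                         for k in range(n_nu - 1))
    return r_bg * obs_time * p_follow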
On the other hand, the probability for a supernova at distance d to produce N ν neutrinos that meet trigger conditions a) is (note that these neutrinos are registered within ∆T SN by construction) where µ 0 is the expected number of triggered neutrino events from the simulation of a CCSN at 10 kpc for the chosen multiplicity m condition. Table 4 presents trigger conditions for which the false CCSN detection rate per year (δt = 1 year) is ≤ 1 and ≤ 0.01, respectively. The distance at which a CCSN is still detectable with a probability of 50% for such conditions is also listed. Due to the high rate of low multiplicities from background sources, the best results are obtained for high multiplicities, with the largest distance reached at m ≥ 7. The trigger condition (m ≥ 7 , N ν ≥ 7) can be used to send supernova alerts with very high confidence (about one false detection per century), and identify CCSN at a distance of 269 kpc with 50% probability. With a relaxed set of conditions of (m ≥ 7 , N ν ≥ 6), SNe up to 291 kpc can be detected with less than one false CCSN detection per year. A detailed discussion of acceptable false detection rates goes beyond the scope of this paper but the assumed rates are rather conservative according to [37]. Figure 8 shows the probability from Eq. 15 as a function of distance that a 27 M CCSNe is detected for the aforementioned cases. These results clearly demonstrate the potential of the method to increase the sensitivity to distant CCSNe compared to the current Ice-Cube detector, which can detect CCSNe up to 50 kpc with a false detection rate of 0.1 year −1 [13]. In the case that other telescopes detect a CCSN, we can also analyse archival data in search for a signal. Let us assume that the time at which the neutrinos should have arrived at the detector is known with an accuracy of one hour [38]. Then Eq. 14 gives the expected number of false supernova detections N fSN generated by the background within δt = 1 h. The probability Q that the background did not produce a false supernova-like signal within this interval is Q = P (0, N fSN (δt = 1 h)). (16) Figure 9 shows Q in terms of one-sided Gaussian standard deviations for the detection of a 27 M CCSN as a function of the minimum number of neutrino events N ν together with the distances at which such a CCSN can be detected. Only the case m ≥ 7 is shown since it provides better results than other multiplicity conditions. Q drastically increases with the increase of N ν while the range for identifying distant supernovae decreases only moderately. For example, for a number of Table 4. detected events N ν = 5 a background origin can be excluded at 3.2 σ, while at least a corresponding number of events will be detected in 50% of cases from a 27 M CCSNe at a distance of 322 kpc. If N ν = 7 events with m ≥ 7 are detected we obtain a 4.9 σ confidence that such signal was not produced by background with a 50% detection probability at 269 kpc distance. The KM3NeT collaboration estimates a 5 σ discovery range of 30 kpc for a CCSN with a 27 M progenitor star [18] if the arrival time of the burst is exactly known 3 . We can make the same assumption and, following their approach, obtain the sensitivity for such a detection from Z = 2((s + b)ln(1 + s/b − s). Here, s is the expected number of signal and b the expected number of background events. The 5σ discovery horizon in this scenario reaches 315 kpc for a 27 M CCSN using m ≥ 7, and 234 kpc for the 9.6 M model. 
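The distance dependence entering Eq. 15 can be reproduced in a few lines: the expected number of triggered neutrino events scales as 1/d² from the simulated value at 10 kpc, and the detection probability follows from Poisson statistics. The sketch below illustrates that scaling; the expectation value at 10 kpc is an input that would come from simulation, not a number quoted here.

import math

def detection_probability(distance_kpc, mu_10kpc, n_nu=7):
    """Probability that a CCSN at the given distance yields at least n_nu
    triggered neutrino events, assuming the expectation scales as 1/d^2
    from the simulated value mu_10kpc at 10 kpc."""
    mu = mu_10kpc * (10.0 / distance_kpc) ** 2
    p_less = sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n_nu))
    return 1.0 - p_less

def horizon_50pct(mu_10kpc, n_nu=7, d_lo=1.0, d_hi=5000.0):
    """Distance (kpc) at which the detection probability drops to 50% (bisection)."""
    d_mid = d_lo
    for _ in range(60):
        d_mid = 0.5 * (d_lo + d_hi)
        if detection_probability(d_mid, mu_10kpc, n_nu) > 0.5:
            d_lo = d_mid
        else:
            d_hi = d_mid
    return d_mid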
It should be noted that the KM3NeT result is based on a full detector simulation with atmospheric muon background rejection while this study does not. As stated before, we expect the contribution of the muon background in ice to be less significant than in water at high coincidence multiplicities. The trigger conditions applied in this work can also be further optimized, e.g., by reducing ∆t coin or by shortening the ∆T SN window. The Q (σ) Fig. 9 Detection prospects for a CCSN whose time is known to within 1 h (m ≥ 7 , ∆t coin = 20 ns , ∆T SN = 10 s). Left axis: probability in σ that the signal is not produced by background fluctuations. Right axis: distance at which a 27 M progenitor mass CCSN is detected with 10% (upper boundary), 50% (middle mark) and 90% probability (lower boundary), when at least Nν detected events are required. latter is based on the fact that the flux is expected to be most intense in the first few seconds of the burst. In this work, we have used the detection horizon as a performance metric. The ranges achieved reach up to several hundred kpc. However, most CCSNe candidates within this range are located within ∼ 50 kpc in the Milky Way or the Small and Large Magellanic Clouds. Beyond, CCSN progenitors are expected again in the nearest galaxy, M31, which is ∼ 800 kpc away, and in more distant galaxies. Using the estimated CCSNe population based on recent observations and scaled to the star formation rate [16], we obtain a CCSNe detection rate of 0.046 year −1 , i.e. one SN about every 20 years (m ≥ 7 , N ν ≥ 7). One way to increase the detection horizon and thus the SN detection rate is to use a pressure vessel with lower radioactive contamination. In fact, this is not an unrealistic scenario, as pressure vessels for IceCube-Gen2 are currently under investigation with radioactive contamination levels about a factor of 100 lower than those assumed in this work. Figure 10 shows the expected SN detection rate for hypothetical detectors with 10 000 and 20 000 mDOMs as a function of the false SN rate and the noise reduction factor. In order to double the CCSN detection rate, the noise level must be significantly reduced and the number of modules increased. Under the challenging scenario that the radioactive noise of the vessel glass could be reduced by a factor of ∼ 140, a doubling of the number of installed modules would allow the false SN detection rate to be kept below 0.01 year −1 while doubling the expected CCSN detection to one every ∼12 years. The same would be achieved with a reduction factor of ∼100 and a relaxation of the trigger conditions allowing a false detection rate of 0.1 year −1 . Conclusions In this paper, we have shown that exploiting temporal coincidences between detected photons within a segmented photosensor significantly increases the sensitivity of sparsely instrumented neutrino telescopes to MeV neutrinos and thus distant CCSNe. Due to its negligible optical background, the deep ice at the South Pole is particularly well suited for this purpose. For a detector equipped with 10 000 sensors consisting of 24 3-inch photomultipliers, we find that CCSNe up to a distance of 269 kpc can be identified with 50% probability with 0.01 false SN detection per year. Increasing the number of installed modules to 20 000 and using pressure vessels with significantly reduced optical background could extend the range such that one CCSN every ∼12 years can be observed. 
If the arrival time of CCSN neutrinos is known from an independent observation with δt = 1 h, a 27 M⊙ CCSN at [322, 269] kpc can be detected in 50% of cases and with a [3.2, 4.9] σ certainty that the signal was not produced by background. We note that our studies, which are based on the simulation of a single sensor, do not account for background from atmospheric muons. However, we expect that this background can be effectively suppressed with tailored selection and reconstruction algorithms. Since each sensor represents a self-contained detector for MeV neutrinos, the sensitivities depend only on the number of photosensors but not on the inter-module spacing. Hence, future sparsely instrumented neutrino telescopes like IceCube-Gen2, optimized for TeV-PeV neutrinos, will not only increase the sensitivity to the high-energy universe but, if equipped with segmented photosensors, also significantly increase our reach for the detection of CCSNe with MeV neutrinos beyond the galactic range.
8,349.6
2021-06-27T00:00:00.000
[ "Physics", "Environmental Science" ]
Rice Husk Ash as Raw Material for the Synthesis of Silicon and Potassium Slow-Release Fertilizer Rice husk ash (RHA) is a waste material produced in large quantities in many regions worldwide, and its disposal can be problematic. This work describes a method for using RHA to synthesize a silicon and potassium slow-release fertilizer. The extraction of silica from RHA was accomplished by alkaline leaching with KOH. Different KOH concentrations and reaction times were evaluated and the best production of K2SiO3 solution was achieved using 6 mol L-1 and 6 h, respectively. The fertilizer was synthesized by the reaction of K2SiO3 with KAlO2 in aqueous medium, followed by calcination at 500 °C. X-ray fluorescence (XRF) and X-ray diffraction (XRD) analyses indicated that the fertilizer composition was similar to the mineral kalsilite. Solubility assays indicated a lower K and Si release percentage in neutral medium. The kinetics of the release tests are well described by the pseudo-second-order model. The proposed synthesis seems to be a viable process offering economic and environmental benefits. Introduction Nowadays, one of the most important goals of agronomic management is the development of environmentally friendly fertilizers, which promote sustainable nutrient management to ensure crop growth and yield. 1 One method of reducing fertilizer nutrient losses involves the use of slow-release fertilizers, which have been designed to gradually release nutrients to plants at a rate that coincides with the requirement of crops. The advantages of using slow-release fertilizers instead of the conventional type are various, such as the increased efficiency of the fertilizer, the continuous supply of nutrients for a prolonged period and a decrease of nutrient losses by volatilization and leaching out to surface and ground water. 2,3 Potassium (K) is known as one of the most required nutrients during plant growth. K plays an important role in the energy state of the plant, translocation and storage of assimilates and maintenance of water in plant tissues. 4 For many crops, silicon (Si) is also an important nutrient and its availability to plants is often associated with increased crop yield. 5 Plants under intensive cultivation that require high absorption of Si, such as rice, sugar cane, and grasses in general, can quickly deplete the soluble Si content of soil. This element, therefore, needs to be replaced by fertilization. The application of Si fertilizer can influence plants in two ways: (i) by improvement of the fertility and chemical properties of the soil, and (ii) by direct effects on plant growth and development. 6 Positive effects that have been reported include improved plant structure, such as upright leaves and stems, 7 and increased resistance to fungi and insects due to deposition of Si under the plant cuticle. 8 The high Si content of rice husk ash (RHA) has led to interest in its use as a source of Si for plants and for the production of numerous Si-based materials. 9 RHA is generated after burning the rice husk, which is a waste from the rice processing industry, mainly used as a fuel to produce heat and steam required for the drying and parboiling of the rice grains. 10,11 The rice processing industry is a very important sector of global agribusiness and generates around 200 tons of waste biomass for every 1000 tons of harvested grains. 
12herefore, rice husk and RHA can be easily obtained in large quantities and are cheap raw materials, so their use is an attractive way of avoiding environmental problems due to improper disposal, while at the same time increasing profit margins.Furthermore, the agricultural use of a product containing RHA enables the recovery of valuable elements that were used previously for crop fertilization. The aim of this study was therefore to develop a method to prepare a slow-release fertilizer containing silicon and potassium, using RHA as the raw material. Chemical reagents All chemical reagents used in this study were analytical grade.Solutions were prepared using bidistilled water. Characterization of raw material and products Chemical characterization of RHA and the produced fertilizers was performed using X-ray fluorescence (XRF) analysis of compressed tablets of the samples, employing a PANalytical Axios Max spectrophotometer. The pulverized samples were analyzed by X-ray powder diffraction (XRD), employing a Shimadzu XRD-6000 diffractometer, with Cu Kα radiation (λ 1.5418 Å), dwell time of 2 degree min -1 , voltage of 40 kV and current of 30 mA, at the interval 2θ from 10 to 80° and 0.02 degree per step.The samples were placed and oriented by gently hand pressing on neutral glass sample holders.The qualitative phase analysis was performed with X'Pert HighScore software package (Philips, Netherlands) and the minerals were identified by comparison with the data bank of the International Centre for Diffraction Data (ICDD).The Rietveld refinement was applied to the RHA diffraction pattern in order to estimate the silica crystallinity index. 13The refinement was carried out using the GSAS/Expgui software. 14,15lica extraction from RHA The silica was obtained by adding KOH (Sigma-Aldrich, Brazil) solution to RHA in a polypropylene beaker (150 mL), using a ratio of 4.0 mL KOH per g of RHA, under heating (70 ºC) and vigorous stirring, as described by Kalapathy et al. 16 The mass of RHA used in each mixture was 5.0 g.In order to identify the best conditions for the reaction, an evaluation was made of different KOH concentrations (1, 2, 4, 6, 8 and 10 mol L -1 ) and reaction times (1, 2, 4 and 6 h).After allowing the solution to cool at room temperature (25 °C), it was filtered using a quantitative filter paper (grade 54, Whatman) and the liquid potassium silicate was stored in a sealed polypropylene bottle (50 mL). Quantification of silica content Content of silica in the potassium silicate solution was quantified after its precipitation induced by the dropwise addition of 5 mol L -1 HCl solution (Vetec, Brazil).The precipitation of silica began when the pH of the solution decreased to below pH 10.After reaching pH ca.7.0, the mixture was centrifuged at 3500 rpm (relative centrifugal force of 1068.56 × g) for 10 min.The supernatant was discarded, water was added to the centrifuge tube, and the material was centrifuged again.This procedure was repeated three times.The solid gel separated by centrifugation was oven-dried at 110 °C for 24 h.Precipitated silica was characterized by XRF in order to avoid weighing errors resulting from possible contamination with the by-product potassium sulfate salt, which can remain even after repeated washings.The XRF measurements were performed in a Philips model PW2400.The results were interpreted with the semi-Q Philips software and were normalized to 100%.All assays were performed in triplicate. 
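As a quick consistency check of the leaching recipe described above (4.0 mL of KOH solution per gram of RHA, 5.0 g of RHA per batch), the short Python sketch below tallies the solution volume and the amount of KOH used per extraction. It is only bookkeeping of the quantities already stated, evaluated for the tested concentration series.

def leaching_batch(koh_molarity, m_rha_g=5.0, ml_per_g=4.0):
    """Volume of KOH solution and amount of KOH used for one extraction batch."""
    v_ml = m_rha_g * ml_per_g
    mol_koh = koh_molarity * v_ml / 1000.0
    g_koh = mol_koh * 56.11          # molar mass of KOH in g mol-1
    return v_ml, mol_koh, g_koh

for c in (1, 2, 4, 6, 8, 10):        # mol L-1, as tested in the study
    print(c, leaching_batch(c))      # e.g. 6 mol L-1 -> 20 mL, 0.12 mol, ~6.7 g KOH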
Experimental design and statistical analysis Precipitated silica yielded from RHA (η (%) ) was determined using the following equation 1: (1) The silica mass in RHA was determined by using XRF, as described above. Data were treated using analysis of variance (ANOVA) and the averages of the replicates were compared using Tukey's test at 5% probability.The objective was to choose the best conditions for the silica extraction, considering the concentration of KOH solution and the reaction time.The statistical calculations were performed using R Statistical software (R Foundation).Vol. 28, No. 11, 2017 Synthesis of fertilizers Potassium silicate solution was produced using the best extraction conditions previously selected.The fertilizers FERT1 and FERT2 were synthesized by reaction between solutions of potassium silicate as obtained above and potassium aluminate (KAlO 2 ), as described by Fernandes et al. 17 (modified).KAlO 2 solution was prepared by reacting 50.0 g of aluminum scrap and 1.0 L of KOH solution (1.0 mol L -1 ), performed in polypropylene beaker (2000 mL) under an extraction hood due to the release of hydrogen gas.The solution was stored in a sealed polypropylene bottle (1500 mL). FERT1 synthesis was performed using 120.0 mL of potassium silicate solution and 100.0 mL of potassium aluminate solution, under continuous stirring at 70 °C.After formation of a gel, manual agitation with a polypropylene stick was continued for a further 5 min.The produced gel was filtered in a qualitative filter paper (grade 5, Whatman), washed with distilled water and subsequently dried in an oven at 110 °C for 24 h.A portion of FERT1 was calcined at 500 °C for 4 h in a muffle furnace, with from room temperature (25 °C) to 500 °C at a rate of 10 °C min -1 to produce FERT2. Study of fertilizer release A preliminary evaluation was made in order to quantify the solubility of K from FERT1 and FERT2.The assay was performed at 25 °C in a temperature-controlled incubator from Tecnal model TCM44.In a beaker (250 mL), 1.00 g of fertilizer was added to 200 mL of distilled water.The mixtures were kept in the incubator for 6 h, without stirring, followed by filtration using quantitative filter paper (ashless, grade 40, Whatman).The retained solid material was discarded, and the filtered solution was analyzed using a Digimed flame photometer model NK-2000 to determine the potassium ion content, being the essay performed in triplicate.The calibration of the flame photometer was done by using a potassium standard solution in different concentrations. An additional solubility test was performed with the fertilizer that showed the best result, i.e., the one that presented lower rate of potassium release.Fertilizer solubility curves were built using six different times: 3, 6, 12, 24, 36 and 48 h, and three different solvents: bidistilled water, 0.1 mol L -1 citric acid solution and 0.5 mol L -1 hydrochloric acid solution, as described by Mangrich et al. 18 Citric acid solution was used in order to simulate the organic acids present in soil solutions.Portions (1.0 g) of the dry fertilizer samples were added to 150 mL of HCl solution, 150 mL of citric acid solution or 200 mL of bidistilled water.The mixtures were prepared in three different beakers (250 mL) and kept in an incubator during the different test period. 19The mixtures were subsequently filtered using a quantitative filter paper (ashless, grade 40, Whatman), the retained solid material was discarded, and the filtered solution was stored in a refrigerator. 
Measurements of the K and Si contents of the filtered solutions were performed using an inductively coupled plasma optical emission spectrometry (ICP OES equipment from Varian model 720-ES).The following ICP OES parameters were used: argon, RF generator power of 1.2 kW, plasma gas flow rate of 15 L min -1 , auxiliary gas flow rate 1.5 L min -1 , nebulizer pressure of 200 kPa and solution uptake rate of 2 mL min -1 .Calibration solutions were prepared from serial dilutions of ICP-standard of Si and K (SpecSol).The concentration of the calibration solution ranged from 2.5 to 25.0 mg L -1 .Each calibration curve was obtained by using six calibration solutions with different concentrations.The emission lines for the analysis by ICP OES were chosen according to previous interference studies that were 766.491 nm for K and 251.611 nm for Si.Calibration was checked initially and then after the analysis of every 20 samples, and in case of deviation of more than 10%, the equipment was recalibrated. In order to describe the potassium and silicon release behavior from the fertilizer in bidistilled water, citric acid solution and hydrochloric acid solution, kinetic parameters were calculated using the results of the solubility tests performed as described above.The study of the sorption kinetics can provide valuable insights into the reaction pathways and into the mechanism of sorption reactions. 20hree different models were tested in order to assess which one was the most suitable for describing the kinetic mechanism.The models used in the investigation were the pseudo-first order, 21 pseudo-second order 22 and intra-particle diffusion. 23The equations corresponding to each model are presented in Table 1. Raw material characterization The concentrations of chemical elements (expressed in their oxide forms) present in RHA were determined using XRF analysis.As expected, the main component of the RHA was silica (SiO 2 ) with 92.3% content.The root mean square (RMS) of the XRF results was 0.035.The sum before normalization was 86.5%. The X-ray diffraction pattern of RHA (Figure 1) stands together with the results of the calculation using the Rietveld method.As shown in Figure 1, RHA essentially consisted of amorphous silica and cristobalite.The occurrence of crystallization in some parts of RHA was due to the high temperatures reached during the firing, above 800 °C, at which crystallization is likely to start in RHA silica. 24e numerical indicators that measure the quality of the fits between the intensities are the Bragg (Rwp) and expected (Rexp) factors.The Rwp measures the quality of the fit between the observed and calculated intensities, while the Rexp measures the quality of the collected intensities.In this study, the indicator values were 11.13 and 9.84% for Rwp and Rexp, respectively.Since the Rwp and Rexp values were close (Rwp / Rexp ratio of 1.1311), the refinement is considered satisfactory.The Rietveld analysis indicated that the percentage of amorphous silica and cristobalite was of 47 and 53%, respectively, with lattice parameters a = b = 4.989 ± 0.006 Å and c = 6.953 ± 0.008 Å. Quantification of silica obtained from RHA The amounts of silica precipitated from RHA under the different reaction conditions are shown in Figure 2. 
The average quantity of precipitated silica varied according to the treatment used and increased with reaction time for all KOH concentrations up to 6 mol L -1 .For concentration greater than 6 mol L -1 , the amount of precipitated silica became almost constant.The maximum precipitation yield was 98% of the RHA silica. Statistical analysis The results obtained in the silica precipitation tests were compared by ANOVA analysis, which revealed significant differences between the average yields.Tukey's test for comparison of averages was applied and the results are shown in Figure 2. The treatments with higher yield of silica, which did not differ significantly (identified by the letters aA in Figure 2) were concentration × time: 6 mol L -1 × 6 h; 8 mol L -1 × 6 h and 10 mol L -1 × 6 h.Hence, the treatment using 6 mol L -1 of KOH solution and a reaction time of 6 h was chosen as the first step of the fertilizer synthesis. Preliminary study of potassium release The preliminary study of fertilizer solubility was performed in the flame photometer.The average concentrations of potassium found in the solubility tests were 746.3 ± 25.1 and 503.33 ± 5.7 mg L -1 for FERT1 and FERT2, respectively.For a period of 6 h, FERT2 showed a lower percentage of potassium release (29.6%)compared to FERT1 (43.9%), indicating that the heat treatment used for FERT2 synthesis was able to decrease K release in water for 6 h by about 67.5%. In recent studies, lower rates of potassium release were obtained by using higher calcination temperature.Mangrich et al. 25 prepared a kalsilite-type (α-K 2 MgSi 3 O 8 ) mineral structure from mixtures of potassium carbonate (commercial salt), lime shale and oil-shale fines, using a calcination process at 900 °C.The solubilities of the product, expressed as a percentage of K 2 O, were 30.3% in 0.5 mol L -1 HCl, 23.2% in 0.1 mol L -1 citric acid and 6.9% in H 2 O. Ma et al. 26 synthesized a hexagonal kalsilite with an irregular blocky morphology through a sintering process, by heating at 900 °C for 2 h.The solubilities of K 2 O from K 2 MgSi 3 O 8 in 0.50 mol L -1 HCl and 0.10 mol L -1 citric acid solutions were 38.94 and 23.58%, respectively.For this reason, the calcination process proved to be an important step during the development of the fertilizer with slow-release behavior. As the aim of this work was to develop a fertilizer with slow-release properties, therefore only the chemical characterization of FERT2 is described in this paper. Fertilizer characterization The results of XRF analysis showed that the silica content of the FERT2 fertilizer was 38.0%.The potassium and aluminum contents, expressed in terms of oxides, were 33.9 and 24.8%, respectively.The concentrations of other elements determined were 3.1, 0.1 and 0.1% of CO 2 , P 2 O 5 and Na 2 O, respectively. The X-ray diffraction pattern of FERT2 shown in Figure 3 presents the main diffraction peaks and intensities coincident with the structure of hydrated potassium aluminum silicate (KAlSiO 4 .5H 2 O). 27This mineral, known as kalsilite, is classified as a feldspathoid, which is a group of minerals chemically similar to feldspars, but with lower proportions of silica.They are characterized as being unsaturated in silica and rich in alkalis. 28The other peaks with lower intensity were not identified by using X'Pert HighScore software package. 
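The release percentages quoted in the preliminary solubility test above follow from the measured concentration, the extract volume and the potassium content of the fertilizer sample. The sketch below shows that arithmetic; the potassium mass fraction used (0.34 g per g of FERT1) is a value back-calculated from the quoted 43.9%, not a number reported directly in the text.

def release_percent(conc_mg_L, volume_L, sample_mass_g, k_fraction):
    """Percentage of the potassium contained in the fertilizer found in solution.
    k_fraction is the K mass fraction of the sample (assumed input)."""
    released_mg = conc_mg_L * volume_L
    total_mg = sample_mass_g * k_fraction * 1000.0
    return 100.0 * released_mg / total_mg

# FERT1: 746.3 mg L-1 measured in 200 mL of water from a 1.00 g sample
print(release_percent(746.3, 0.200, 1.00, k_fraction=0.34))  # ~43.9%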
Si and K release studies The results of the ICP OES analysis of solubilized Si and K in the liquid extracts are shown in Figure 4 and Figure 5. The Si release curve indicated very low solubility in a neutral medium, close to 0% (Figure 4). The release of Si was higher in both acid solutions, reaching around 50% after 48 h. Approximately 32% of the potassium was released into water as K+ ions over a 48 h period (Figure 5). In acidic solution, the release of K+ was higher, with almost total release in citric acid solution. The slower dissolution of the fertilizer in the citric acid solution appears to be a characteristic of the initial acid attack. However, thermodynamically, citric acid is more effective than HCl due to the formation of a more stable complex (potassium citrate, C6H5K3O7). 26 The kinetic parameters calculated and measured in order to describe the behavior of Si and K release from FERT2 are shown in Table 2. The pseudo-second-order model provided the best correlation to describe the release mechanism of Si and K into the three different liquid extracts. In addition, this model also fits the experimental data well, i.e., the q_e (ion desorbed at equilibrium time) values calculated from the linear form are in accordance with the experimental q_e values for both elements studied, especially in bidistilled water. The kinetic parameters for Si release in bidistilled water indicated slower behavior than in the acid solutions, as can be seen in Figure 4. This behavior can be explained because the acid solutions have a higher concentration of hydrogen ions that can react with silicate, producing silicic acid, which is the most soluble form of Si. Citric acid solution, even if diluted, has relatively high acidity (pH ca. 2.3). The desorption rate constants (k) for the K release assays in bidistilled water and HCl have close values on the same scale of magnitude, whereas the test in citric acid presented the smallest rate of all assays (Table 2 and Figure 5). In pure water, the experimental and calculated K+ release were smaller, probably due to the possible similarity between the structures of FERT2 and the kalsilite (KAlSiO4) mineral, as shown in Figure 3. This mineral is formed by Si-O-Al bonds between silicon-oxygen (SiO4) and aluminum-oxygen (AlO4−) tetrahedrons. The potassium ions form chemical bonds with oxide ions (O2−) from the tetrahedrons, hence becoming strongly bonded to the structure and difficult to release. 
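Since the pseudo-second-order model is applied in its standard linearized form, t/q_t = 1/(k·q_e²) + t/q_e (cf. Table 1), the fit reduces to a straight line of t/q_t versus t. The Python sketch below shows such a fit; the release data used here are purely illustrative and are not the values of Table 2.

def fit_pseudo_second_order(times_h, q_t_mg_g):
    """Fit the linearized pseudo-second-order model
        t/q_t = 1/(k*qe**2) + t/qe
    by ordinary least squares; returns (qe, k)."""
    x = times_h
    y = [t / q for t, q in zip(times_h, q_t_mg_g)]
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    qe = 1.0 / slope
    k = slope * slope / intercept        # because intercept = 1/(k*qe**2)
    return qe, k

# illustrative release data (time in h, desorbed amount in mg g-1)
times = [3, 6, 12, 24, 36, 48]
q_t = [430, 517, 576, 611, 633, 645]
print(fit_pseudo_second_order(times, q_t))   # close to qe ~ 650, k ~ 0.001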
Conclusion The findings of this study demonstrate the potential of RHA as a raw material to synthesize a Si and K slow-release fertilizer. The process used to extract the silica from RHA was able to precipitate more than 95% of the silica. The synthesized fertilizer has a chemical composition and X-ray diffraction pattern similar to the hydrated potassium aluminum silicate (KAlSiO4.5H2O), known as kalsilite. The assays of Si and K release from the fertilizer in bidistilled water, citric acid and hydrochloric acid solutions showed a low solubility of the fertilizer in a neutral environment. In acid solutions, the releases of Si and K were higher, with more than 50% of Si release and almost total K release in 48 h. The pseudo-second-order model was suitable for describing the kinetic mechanism, with coefficients of determination greater than 0.9993 for all assays. Since RHA is a waste generated in large quantities and can cause environmental issues due to improper disposal, the developed process seems to be a viable way of using RHA waste and metal filings, offering considerable environmental benefits. The product prepared here took into consideration knowledge of chemical structure and environmental care. In our view, the use of a waste to produce fertilizer is a rational approach for the crucial task of efficient food and renewable energy production.

Figure 1. XRD diffraction pattern of RHA and refinement curves by the Rietveld method. The curves correspond to: (a) the experimental XRD pattern, (b) the XRD pattern calculated using the Rietveld intensity function, (c) the contribution of the cristobalite phase, and (d) the contribution of the amorphous phase to the overall profile. Line (e) represents the difference between the observed and calculated intensity values (y_obs − y_calc).

Figure 2. Quantification of precipitated silica in terms of mass and percentage extraction yield for different reaction periods (1, 2, 4 and 6 h) and KOH solution concentrations. The small letters represent Tukey's test applied to the average of the precipitated silica in the tests performed in triplicate. Averages followed by the same letters (lower case letters for KOH solution concentrations and upper case letters for reaction period) are not significantly different at the 95% level.

Figure 3. X-ray diffraction pattern of FERT2. The lower bars indicate the diffraction peaks of the potassium aluminium silicate (code: 38-0216 in X'Pert HighScore), described by Kosorukov and Nadel 27 (ICDD reference pattern).

Table 1. Linearized form of the equations of the used kinetic models.

Table 2. Kinetic pseudo-second-order model for Si and K nutrient release (t: time; q_e: ion desorbed at equilibrium time t; q_t: ion desorbed at time t; k: desorption rate constant; R²: coefficient of determination). Experimental q_e / (mg g−1): 623.0, 1788.9, 1914.1.
5,040
2017-01-01T00:00:00.000
[ "Chemistry" ]
Two-colour high-speed asynchronous optical sampling based on offset-stabilized Yb : KYW and Ti : sapphire oscillators We present a high-speed asynchronous optical sampling system, based on two different Kerr-lens mode-locked lasers with a GHz repetition rate: An Yb:KYW oscillator and a Ti:sapphire oscillator are synchronized in a master-slave configuration at a repetition rate offset of a few kHz. This system enables two-colour pump-probe measurements with resulting noise floors below 10−6 at a data aquisition time of 5 seconds. The measured temporal resolution within the 1 ns time window is below 350 fs, including a timing jitter of less than 50 fs. The system is applied to investigate zone-folded coherent acoustic phonons in two different semiconductor superlattices in transmission geometry at a probe wavelength far below the bandgap of the superlattice constituents. The lifetime of the phonon modes with a zero wave vector and frequencies in the range from 100 GHz to 500 GHz are measured at room temperature and compared with previous work. References and links 1. J. Shah, Ultrafast Spectroscopy of Semiconductors and Semiconductor Nanostructures (Springer Science & Business Media, 1999) 2. R. P. Prasankumar and A. J. Taylor, Optical Techniques for Solid-State Materials Characterization (CRC Press, 2011). 3. S. Savikhin, “Shot-noise-limited detection of absorbance changes induced by subpicojoule laser pulses in optical pump-probe experiments,” Rev. Sci. Instrum. 66, 4470–4474 (1995). 4. R. Gebs, G. Klatt, C. Janke, T. Dekorsy, and A. Bartels, “High-speed asynchronous optical sampling with sub-50 fs time resolution,” Opt. Express 18, 5974–5983 (2010). 5. V. A. Stoica, Y.-M. Sheu, D. A. Reis, and R. Clarke, “Wideband detection of transient solid-state dynamics using ultrafast fiber lasers and asynchronous optical sampling,” Opt. Express 16, 2322–2335 (2008). 6. E. Péronne, E. Charron, S. Vincent, S. Sauvage, A. Lemaı̂tre, B. Perrin, and B. Jusserand, “Two-color femtosecond strobe lighting of coherent acoustic phonons emitted by quantum dots,” Appl. Phys. Lett. 102, 043107 (2013). 7. A. Abbas, Y. Guillet, J.-M. Rampnoux, P. Rigail, E. Mottay, B. Audoin, and S. Dilhaire, “Picosecond time resolved opto-acoustic imaging with 48 MHz frequency resolution,” Opt. Express 22, 7831–7843 (2014). Introduction Ultrafast laser spectroscopy systems have drawn great attention during the last decades due to their versatile applications in solid-state physics, biology and medicine.Modern spectroscopy techniques rely on ultrashort laser pulses and offer femtosecond temporal resolution.This enables studying the dynamics of fundamental ultrafast excitations in semiconductors like charge carriers, phonons, plasmons, excitons and many more [1,2].Understanding these processes is essential to identify the basic limitations in todays semiconductor-based high-speed communication systems. 
Ultrafast phenomena are typically studied in a conventional pump-probe configuration composed of a single laser to excite and probe the reflectivity (or the transmission) of the sample for different time delays. The measured sample response is a unique mixture of different excitation and detection mechanisms. In order to resolve weak signal contributions (the relative change in the reflectivity ΔR/R due to acoustic phonons is on the order of 10⁻⁴–10⁻⁷), different beam modulation techniques for lock-in amplification need to be applied to reduce noise contributions from the implemented light sources [3]. Still, these techniques suffer from long data acquisition times. In addition, beam size changes and beam pointing drifts caused by mechanical delay stages influence the measurement. These limitations can be overcome by implementing high-speed asynchronous optical sampling (ASOPS), where the repetition rates of two gigahertz lasers are synchronized at an offset of several kHz [4]. This way, the time delay between successive pulse pairs from both lasers is linearly ramped between zero and the inverse repetition rate within a fraction of a millisecond. The high scanning speed allows for shot-noise-limited noise floors below ΔR/R ≈ 10⁻⁶ within seconds of trace averaging. A feedback loop is applied to stabilize the repetition rate offset, thus enabling measurements with sub-50 fs time resolution [4]. Asynchronous optical sampling can also be applied to oscillators with lower repetition rates; however, the corresponding scanning rate needs to be decreased in these measurements to maintain the temporal resolution. In recent experiments, asynchronous optical sampling was applied to study ultrafast dynamics in a variety of structured materials [5][6][7][8][9]. In addition, several ASOPS variants have been demonstrated [10][11][12][13]. Up to now, Ti:sapphire lasers with a central wavelength around 800 nm (corresponding to a photon energy of 1.55 eV) have been the workhorse for ultrafast pump-probe experiments due to the broad emission bandwidth and the commercial availability of these lasers. During the last years, considerable progress has been made in the development of femtosecond solid-state lasers (SSLs) with GHz repetition rates based on the ytterbium-doped double tungstates Yb:KYW and Yb:KGW [14][15][16][17][18]. These lasers are cost-efficient and compact, as they are pumped with high-power semiconductor laser diodes. Also, several alternative host crystals for ytterbium are currently being investigated. Only recently, pulses with an energy of 1.6 nJ and a duration of about 60 fs at a 1.8 GHz repetition rate have been demonstrated, utilizing an Yb:CALGO crystal [19]. The emission wavelength of these Yb-based SSLs is around 1.05 μm (corresponding to a photon energy of 1.18 eV), which offers the possibility to probe III-V semiconductors like GaAs and AlAs below the bandgap, while other compound semiconductors like InxGa1−xAs or GaNyAs1−y can be studied around the bandgap, given the appropriate composition.
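To make the ASOPS timing explicit: with the master laser at repetition rate f0 and the slave offset by Δf, the delay between successive pulse pairs grows by Δf/(f0·(f0 + Δf)) per pulse pair, one full scan of the 1/f0 window takes 1/Δf of laboratory time, and the recorded laboratory time axis maps onto pump-probe delay through the factor Δf/f0. The short sketch below is only an illustrative calculation with the round numbers used in this paper (f0 ≈ 1 GHz, Δf ≈ 5 kHz), not code from the authors.

```python
# Illustrative ASOPS timing arithmetic (assumed round numbers: f0 = 1 GHz, df = 5 kHz)
f0 = 1.0e9        # repetition rate of the master laser (Hz)
df = 5.0e3        # repetition-rate offset of the slave laser (Hz)

time_window = 1.0 / f0                 # accessible pump-probe window: 1 ns
scan_time = 1.0 / df                   # laboratory time for one full scan: 0.2 ms
delay_step = df / (f0 * (f0 + df))     # delay increment per pulse pair: ~5 fs
scale = df / f0                        # laboratory time -> pump-probe delay scaling factor

print(f"time window   : {time_window*1e9:.2f} ns")
print(f"scan duration : {scan_time*1e3:.2f} ms")
print(f"delay step    : {delay_step*1e15:.2f} fs per pulse pair")
print(f"lab-time axis is compressed by a factor f0/df = {1/scale:.0f}")
```

Note that the resulting ~5 fs sampling step is far finer than the ~350 fs resolution set by pulse durations and timing jitter, so the step size itself does not limit the measurement.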
Here, we present an ASOPS implementation that combines the titanium and the ytterbium laser technology: an Yb:KYW laser is synchronized to a Ti:sapphire laser at a repetition-frequency offset of a few kHz and a fundamental frequency of 1 GHz. This way, dual-wavelength measurements at high scanning speed and temporal resolution with largely separated pump and probe wavelengths are made possible. In particular, ultrafast phenomena in technologically important semiconductors with a bandgap in the energy range from 1.18 eV to 1.55 eV can be pumped above the bandgap and probed in transmission below the bandgap. In combination with the high signal resolution associated with high-speed ASOPS, this system is ideally suited to such studies. The paper is organized as follows: First, the Kerr-lens mode-locked (KLM) lasers are described. Second, the synchronization scheme is presented and the two-colour ASOPS system is characterized in terms of temporal resolution and detection sensitivity. Third, the system is applied to measure acoustic phonons in a GaAs/AlAs and a GaAs/AlGaAs superlattice. The resulting lifetimes of the folded zone-center modes are discussed. Kerr-lens mode-locked Yb:KYW and Ti:sapphire oscillator The schematic layout of the Yb:KYW laser is shown in Fig. 1(a). A similar oscillator was demonstrated previously in Ref. [20]. We demonstrate a three-fold improvement in terms of output power by use of an increased pump power and optimized resonator elements. As pump source, a fiber-Bragg-grating-stabilized single-mode fiber-coupled pump laser with a maximum delivered output power of about 750 mW at a center wavelength of about 980 nm (Model 3CN01511GL, 3SPGroup) is used. The M²-value of the collimated pump beam is measured with a beam profiler (ModeMaster, Coherent Inc.) to be about 1.06. An f = 30 mm lens is used to focus the pump beam into the Yb:KYW crystal. The crystal (EKSMA OPTICS) is cut at Brewster angle with a doping concentration of 10 at.% and a thickness of 1 mm. The pump beam polarization is parallel to the Nm axis of the crystal. The crystal is mounted on a copper holder without any active cooling. The resonator consists of four mirrors (LayerTec), positioned in a symmetric ring configuration. The radius of curvature (ROC) of the two curved mirrors is 30 mm. Given the thickness of the crystal and the ROC of the mirrors, the angle Θ is adjusted for astigmatism compensation to about 13.5° [21]. Dispersion management is done by inserting two GTI mirrors that produce a total group delay dispersion (GDD) of −2400 fs², leading to a net dispersion of about −2200 fs². This is much larger in magnitude than in Ti:sapphire lasers (net GDD of less than −100 fs² in our case), which is due to the higher nonlinear index of refraction of the Yb:KYW laser crystal. An amount of 0.5% of the laser power is coupled out of the oscillator. The laser is mounted in an aluminum housing to prevent any disturbance of the lasing due to air turbulence. Kerr-lens mode-locking can easily be initiated by knocking on a mirror or the housing. The measurement results of the Yb:KYW laser are summarized in Fig. 2.
The radio frequency (RF) spectrum is measured within a 13 GHz bandwidth at a 30 kHz resolution and shows a fundamental repetition rate f0 at around 1 GHz and corresponding higher harmonics with only little amplitude modulation. This modulation could be a sign of multipulsing. In section 3, however, we will give a direct proof of the CW mode-locked operation of the laser with no multipulsing. The dependence of the output power on the pump power is measured by decreasing the pump power. The result is shown in Fig. 2(b). The most striking feature is the sudden reduction of the output power to half its value at a pump power of 550 mW, where the laser jumps from the KLM regime into the CW regime. At the highest incident pump power of about 710 mW, the output power is about 340 mW. The optical-to-optical efficiency in this case is more than 45%. The high conversion efficiency can be attributed to the Gaussian-like pump beam and the high absorption cross-section of the Yb:KYW crystal. The slope efficiency in the KLM regime is 45%. The optical spectrum and an exemplary interferometric auto-correlation at maximum power are displayed in Fig. 2(c) and Fig. 2(d), respectively. The central wavelength is (1052 ± 1) nm with a bandwidth of (7.2 ± 1.0) nm. The error is given by the optical resolution of the implemented spectrometer. The resulting bandwidth corresponds to a bandwidth-limited pulse duration of roughly (161 ± 23) fs, if a squared hyperbolic secant (sech²) shape is assumed. Evaluating the measured auto-correlation yields a pulse duration of about 180 fs. If the amount of dispersive optics between the laser and the nonlinear crystal in the auto-correlator (about 2000 fs²) is taken into account, a pulse duration of (172 ± 18) fs can be estimated. Thus, the measured pulse duration corresponds to the expected value within the given measurement accuracy. The laser ran for several hours without any further adjustments being necessary. The relative intensity noise (measured within a 1 Hz to 100 kHz bandwidth) is below 0.1%. The measured M²-value of the output beam is 1.04 and 1.01 in the tangential and sagittal plane, respectively, and thus close to a perfect Gaussian beam. These measurements demonstrate the feasibility of the given Yb:KYW laser for ultrafast spectroscopy experiments. By implementing the latest pump laser modules with output powers approaching 1 W, the output power of the Yb:KYW laser should be scalable to the 500 mW level. The schematic layout of the Ti:sapphire oscillator is displayed in Fig. 1(b). The oscillator (GigaJet, Laser Quantum Ltd.) is pumped by a frequency-doubled Nd:YVO4 laser (Finesse Pure, Laser Quantum Ltd.) with a pump power of about 5 W. In comparison to the Yb:KYW resonator, an additional folding of the cavity is applied for dispersion management (broadband chirped reflectors with about −40 fs² for each mirror). The radius of curvature of the curved mirrors as well as the focal length of the focusing lens is 30 mm. The crystal is actively cooled; its thickness is about 2 mm in order to obtain good absorption of the pump beam. The laser delivers an output power of 900 mW at a bandwidth of around 30 nm, corresponding to a bandwidth-limited pulse duration of about 23 fs assuming a sech² pulse shape. Further details about high-repetition-rate Ti:sapphire lasers can also be found in Ref. [22].
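The bandwidth-limited pulse durations quoted above follow from the time-bandwidth product of a sech² pulse (ΔνΔt ≈ 0.315). The following snippet is a minimal illustration of that conversion using the measured centre wavelength and bandwidth; it is not part of the original data analysis.

```python
# Bandwidth-limited sech^2 pulse duration from centre wavelength and spectral bandwidth
c = 2.998e8          # speed of light (m/s)
lambda0 = 1052e-9    # centre wavelength (m), from the optical spectrum
dlambda = 7.2e-9     # FWHM spectral bandwidth (m)
TBP = 0.315          # time-bandwidth product of a transform-limited sech^2 pulse

dnu = c * dlambda / lambda0**2    # spectral bandwidth in frequency (Hz), ~1.95 THz
dt = TBP / dnu                    # transform-limited pulse duration (s)
print(f"bandwidth: {dnu/1e12:.2f} THz -> transform-limited duration: {dt*1e15:.0f} fs")
# -> about 161 fs, in line with the value quoted in the text
```

Repeating the same arithmetic with 800 nm and 30 nm for the Ti:sapphire oscillator reproduces the ~23 fs figure given above.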
Two-colour high-speed ASOPS characterization The setup for two-colour ASOPS experiments is schematically drawn in Fig. 3(a). About 10 mW of each laser beam is used for measuring the tenth harmonic of the fundamental repetition rate with the photodiodes P1 and P2. These signals are fed into a feedback loop (TL-1000-ASOPS, Laser Quantum Ltd.) to offset-lock the Ti:sapphire laser's repetition rate to the free-running Yb:KYW laser at a frequency difference of a few kHz. The working principle of the phase-locked loop is described in detail in Ref. [4]. For the repetition frequency regulation, two of the Ti:sapphire laser mirrors are mounted on top of piezo crystals. Anti-reflection coated polarizing beam splitters as well as low- and high-pass dielectric filters were implemented for superimposing and separating the beams from both lasers. The two-colour ASOPS data acquisition is triggered by a two-photon absorption (TPA) cross-correlation signal that is generated by focusing about 200 mW of both beams onto a 1 mm thick ZnTe crystal. This represents a power loss of about 60% in the Yb:KYW beam. A more efficient optical trigger could be realized, for example, by applying sum-frequency generation in a nonlinear crystal instead of two-photon absorption, in combination with an optimized detection scheme. The remaining power of the two beams was sufficient for the targeted experiments. First, the time resolution of the two-colour ASOPS system is studied via a TPA cross-correlation measurement, similar to the generation of the trigger signal. Both beams are collinearly overlapped with a polarizing beam splitter and focused into a 0.5 mm thick GaP crystal that is placed at the sample position. The resulting change in transmission versus time delay is displayed in Fig. 3(b). A time window of more than the pulse-to-pulse distance is chosen to resolve the subsequent pulses. The frequency offset for this measurement is 5 kHz. The measured signal shows distinct peaks at a distance of the inverse repetition rate. The width of the first TPA signal is about 320 fs. The shape of a pure TPA signal represents a convolution of pump and probe pulses. The slowly oscillating tail of the TPA signal is a filtering effect from the frequency response of the photodiode P4 that is used for the experiment and has no further influence on the following measurement results. The duration of the pump pulses at the GaP crystal is estimated to be about 210 fs by taking into account a total GDD of 2400 fs² from dispersive optics in the beam path of the Ti:sapphire oscillator. The estimated duration of the probe pulses is (172 ± 18) fs, taken from the discussion above. The width of the calculated convolution function assuming sech² pulses is then roughly (296 ± 14) fs, which is close to the measured width of 320 fs. The remaining difference between the estimated and the measured width could be explained by jitter due to different optical path lengths in the trigger channel and the measurement channel. The widths of the second and third TPA signals are about 350 fs and 370 fs, respectively. From these measurement results we conclude that the temporal resolution within the 1 ns time window is limited to below 350 fs, mainly by the duration of the implemented laser pulses and an accumulated timing jitter that amounts to less than 50 fs. It should be possible to improve the temporal resolution to below 250 fs by avoiding dispersive effects or by implementing dispersion-compensating optics.
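The quoted convolution width can be reproduced numerically. The sketch below convolves two sech²-shaped intensity profiles with the pump and probe durations estimated above and reads off the FWHM of the result; it is an independent cross-check, not the authors' code, and the grid parameters are arbitrary choices.

```python
# Numerical cross-correlation (convolution) of two sech^2 intensity envelopes
import numpy as np

def sech2(t, fwhm):
    tau = fwhm / (2 * np.arccosh(np.sqrt(2)))   # convert FWHM to the sech^2 width parameter
    return 1.0 / np.cosh(t / tau) ** 2

dt = 1.0                                   # time step in fs (fine enough for fs pulses)
t = np.arange(-5000, 5000, dt)             # fs grid, centred on zero
pump = sech2(t, 210.0)                     # estimated pump duration at the GaP crystal (fs)
probe = sech2(t, 172.0)                    # estimated probe duration (fs)

xc = np.convolve(pump, probe, mode="same") # TPA cross-correlation shape (up to a scale factor)
xc /= xc.max()

above = t[xc >= 0.5]                       # FWHM read off the half-maximum crossings
print(f"convolution FWHM = {above[-1] - above[0]:.0f} fs")
# -> close to the ~300 fs value estimated in the text for the first TPA peak
```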
The given experiment clearly demonstrates the CW mode-locking (CWML) of the Yb:KYW laser. When less GDD or a smaller outcoupling rate is chosen for the mirrors in the Yb:KYW oscillator, a multipeak structure is visible in the same cross-correlation measurement. Thus, CWML or multipulse operation of the Yb:KYW laser can be directly observed and distinguished by the given ASOPS cross-correlation measurement. Usually, the optical spectrum or the RF spectrum is used to verify clean mode-locking of the laser. These methods, however, do not directly reveal the CWML operation of a laser and are limited by the resolution of the spectrometer. Auto-correlation measurements are a direct proof of the pulsed operation of the laser, but the time window of auto-correlators is limited by the maximum mirror displacement. In a second experiment, we characterize the detection sensitivity of the two-colour ASOPS system. Previous studies showed that the detection sensitivity in ASOPS measurements is at the shot-noise level if low-noise Ti:sapphire oscillators are implemented. In any ASOPS experiment, however, high-frequency amplitude noise in the probe beam will be overlaid on the pump-probe data as soon as the laser noise surpasses the noise of the photoreceiver. Due to the kHz scanning speed associated with high-speed ASOPS, AC-coupled photoreceivers with a typical gain of 40 V/mA and noise equivalent powers of a few pW/√Hz can be used. The bandwidth of the implemented photoreceivers covers the range from 25 kHz to 130 MHz. The upper frequency limit in this study is restricted to 100 MHz by the analog-to-digital converter. So, the relevant frequency range for the targeted ASOPS measurements reaches from 25 kHz to 100 MHz. In order to estimate any noise contributions from the oscillators to the ASOPS pump-probe data, the noise power spectral density (PSD) of both lasers was measured. To resolve noise contributions with frequencies below 25 kHz, a DC-coupled photodetector with a 150 MHz bandwidth (Thorlabs, PDA10A) was used only for this particular experiment. The results are shown in Fig. 4(a). At low frequencies the noise PSD of the Yb:KYW laser is larger than that of the Ti:sapphire laser, whereas at high frequencies both lasers are comparable. At a frequency of about 1 MHz, the Ti:sapphire laser has a larger noise PSD than the Yb:KYW laser.
The broad feature at around 150 MHz stems from the limited bandwidth of the photodetector used for the noise detection. Since no special care was taken to reduce the laser noise in the Yb:KYW oscillator, the comparatively low noise is quite surprising at first glance. It can, however, be explained by the lifetime of the upper laser level of the Yb:KYW crystal (0.3 ms), which acts as a low-pass filter with a cut-off frequency of 3 kHz. By chance this is below the detection bandwidth, thus making the Yb:KYW laser well suited for low-noise high-speed ASOPS experiments. Correspondingly, the relative intensity noise (RIN) of the Ti:sapphire and the Yb:KYW laser within the relevant detection bandwidth is of the same order of magnitude (0.042% and 0.053%, respectively). Comparing the calculated shot-noise limit of the detector to the noise PSD, the laser noise is above the shot noise and should thus be visible when conducting ASOPS. Figure 4(b) compares ASOPS measurements with zero pump power, using the Yb:KYW oscillator or the Ti:sapphire laser as probe laser, at data acquisition times of about 1 s and 1000 s and an offset frequency of 5 kHz. The time to measure a single trace is 0.2 ms. For an actual pump-probe measurement, the time axis is divided by a factor of f0/Δf to obtain the time delay between the pulses. From the data in Fig. 4(b) it can be clearly seen that averaging reduces the amplitude of the noise background. The power spectral density of the time-domain data is calculated via fast Fourier transform (FFT) and displayed in Fig. 4(c). For both lasers, a direct mapping of the above-shot-noise laser noise is observed, in good agreement with the measurements in Fig. 4(a). The averaging process reduces most of the spectral content equally, except for some components at 10 kHz and a high-frequency structure at 10 MHz in the measurement with the Ti:sapphire laser. We attribute this to an electronic artefact from the control of the piezos in the Ti:sapphire laser. The square root of the integrated data shown in Fig. 4(c) yields the noise floor, which is equal to the standard deviation of the time-domain traces. Figure 4(d) compares the noise floor in the ASOPS measurements with the shot-noise limit of the photoreceiver for different data acquisition times. The points indicate a clear increase in detection sensitivity with increasing acquisition time. A saturation occurs in the noise floor of the Ti:sapphire oscillator for large data acquisition times because of the electronic artefact visible in Fig. 4(c), which cannot be further averaged. Although the noise floor in our system exceeds the shot noise by a factor of about 2, a noise floor below 10⁻⁶ is achieved within seconds of trace averaging. Reduction of the noise components above the shot noise in the given two-colour ASOPS system could be realized with balanced detection schemes.
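The scaling behind Fig. 4(d) can be illustrated with a toy calculation: averaging N statistically independent traces reduces a white noise floor by √N, until a non-averaging component (such as the electronic artefact mentioned above) sets a saturation level. The sketch below is a hypothetical simulation with made-up noise amplitudes chosen only to mimic that behaviour; it does not use the measured data.

```python
# Toy model of ASOPS noise-floor averaging: white noise averages down as 1/sqrt(N),
# a coherent (non-averaging) artefact does not. All amplitudes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 20000                 # points per trace
scan_time = 0.2e-3                # s per trace (1/df at df = 5 kHz)
white_rms = 1e-4                  # per-trace white-noise RMS (arbitrary)
artefact = 5e-7 * np.sin(2 * np.pi * np.arange(n_samples) / 500)  # fixed-pattern artefact

for n_traces in [1, 10, 100, 1000, 10000, 100000]:
    # Averaged trace: the white part shrinks as 1/sqrt(N), the artefact stays unchanged.
    avg = artefact + rng.normal(0.0, white_rms / np.sqrt(n_traces), n_samples)
    print(f"N = {n_traces:>6d} ({n_traces*scan_time:8.1f} s): noise floor = {avg.std():.2e}")
```

In such a model the floor initially falls as 1/√N and then saturates at the artefact level, qualitatively reproducing the saturation seen for the Ti:sapphire probe in Fig. 4(d).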
Two-colour high-speed ASOPS experiments To demonstrate the capabilities of the two-colour pump-probe system, we investigate coherent acoustic phonons in two different semiconductor superlattices (SLs). The acoustic phonon dispersion of an SL is back-folded due to the periodicity of the structure. Thus, high-frequency acoustic phonons close to the center of the Brillouin zone can be investigated by means of CW Raman and time-resolved pump-probe experiments. In the latter, the excitation and detection of these modes was shown to be based on impulsive stimulated Raman scattering and the acoustic deformation potential, respectively [23]. Over the years, SLs based on GaAs/AlAs have evolved into a useful material system for investigating high-frequency ultrasonics [24][25][26][27] and potential applications like phonon cavities [28]. This is partly due to the perfect lattice matching of both materials and the consequent growth control in molecular beam epitaxy techniques. Moreover, Ti:sapphire lasers can be used to pump and probe around the first interband transition of the superlattice, leading to enhanced acoustic signal contributions. Finally, the short penetration depth at 800 nm (α⁻¹ ≈ 1 μm) enables a spatial decoupling of the generation and detection of the acoustic waves, which can be used, for example, in remote detection [29][30][31]. On the other hand, the absorption limits the above-mentioned studies to reflection geometries or to the use of thin samples. In this study, we give results on resonant pumping at around 800 nm, while probing with the Yb:KYW laser far below the bandgap of the SL at 1.05 μm. We use the low absorption of the probe beam to measure the lifetime of the first-branch zone-folded acoustic mode in transmission geometry. The first investigated sample consists of 40 GaAs/AlAs superlattice periods, grown on a (001)-oriented GaAs substrate. The substrate is wet-etched to allow measuring the structure at 800 nm in reflection and transmission, which was done in a previous study [23]. The individual layer thickness is 19 monolayers. The experimental setup is described above and displayed in Fig. 3(a). The Ti:sapphire laser is used as the pump beam with a central wavelength of about 810 nm, whereas the Yb:KYW laser serves as the probe beam. Pump and probe powers are adjusted to 100 mW and 5 mW, respectively. Both beams are focused onto the sample by the same lens to a spot size of about 30 μm. The frequency offset is set to Δf = 5 kHz and about 5 million measurement scans are averaged, resulting in a measurement time of about 17 minutes. The measurement is done at room temperature. The resulting time-dependent change in the relative transmission of the Yb:KYW beam is depicted in Fig. 5(a). The background from carrier dynamics is already removed for clarity. The signal shows a decaying oscillation at a frequency of around 460 GHz. The fast Fourier transform spectrum of the oscillation is given in Fig. 5(b), together with the calculated Rytov dispersion of the superlattice; the values for the sound velocity and the density of the materials are taken from Ref. [32].
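A quick plausibility check of the observed frequency can be made without the full Rytov calculation: to lowest order, the zone-center folded doublets of a superlattice of period d_SL lie near f_m ≈ m·v_eff/d_SL, where v_eff is the travel-time-averaged sound velocity of one period. The snippet below applies this estimate; the bulk LA sound velocities are literature-style values inserted here purely as assumptions, and the result only approximates the measured mode because the small zone-center gaps are neglected.

```python
# Rough zone-folding estimate for the GaAs/AlAs superlattice (assumed bulk LA velocities)
d_gaas = 5.28e-9      # m, GaAs layer thickness used in the calculation in the text
d_alas = 5.28e-9      # m, AlAs layer thickness (equal layers assumed)
v_gaas = 4730.0       # m/s, bulk LA sound velocity along [001] (assumed literature value)
v_alas = 5650.0       # m/s, bulk LA sound velocity along [001] (assumed literature value)

d_sl = d_gaas + d_alas                                  # superlattice period
v_eff = d_sl / (d_gaas / v_gaas + d_alas / v_alas)      # travel-time-averaged velocity

for m in (1, 2):
    print(f"m = {m}: f ~ {m * v_eff / d_sl / 1e9:.0f} GHz")
# m = 1 gives roughly 490 GHz, in the vicinity of the 460 GHz mode observed below the gap
```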
The individual layer thickness in the calculation was adjusted to 5.28 nm to match the experimental data. For comparison, a spectrum obtained in reflection geometry at a probe wavelength of 800 nm is displayed as well. By comparing the measurement results with the calculations, the strong acoustic feature at 460 GHz can be attributed to the lower first-branch zone-center phonon mode. The upper first-branch zone-center mode cannot be detected for symmetry reasons [23]. The phonons at twice the laser wave vector, q = 2k_laser, are attributed to the back-scattered Raman process and are thus not visible in transmission. For the same reason, no low-frequency Brillouin oscillation is visible in the transmission data. The missing back-scattered Raman contributions explain why the transmission time-domain data has a simpler envelope than in the reflection experiments, where the two back-scattered folded modes lead to a strong beating of the acoustic oscillations. The data in Fig. 5 also allow for the extraction of phonon lifetimes. Phonon lifetimes and the corresponding scattering mechanisms are of high importance, since they directly influence the speed of many nanoscale electronic devices. Figure 5(c) shows an exponentially damped sinusoidal fit to the data. An additional constant phase was introduced in the argument of the sinusoid to match the data. The first 20 ps after excitation were not taken into account, because the background subtraction leads to an artefact there. The fit yields a decay time of (76 ± 1) ps. Recent pump-probe studies on a different GaAs/AlAs superlattice revealed room-temperature decay times of 1020 ps and 300 ps at frequencies of 320 GHz and 640 GHz, respectively [34]. This indicates a strong difference in the extrinsic scattering rates, which are the driving mechanisms for phonon decay at room temperature. In an additional experiment, acoustic phonons in a GaAs/Al0.3Ga0.7As superlattice are studied. The structure was grown on a 1 mm thick GaAs substrate with individual layer thicknesses of 15 nm and 30 nm, respectively. The pump power for this sample was adjusted to 200 mW, while all other measurement parameters are analogous to the previous measurement. The data are taken in transmission through the GaAs substrate. In reflection geometry this would lead to a strong contribution from Brillouin oscillations from the substrate, which are completely absent in transmission geometry. The results are displayed in Fig. 6. A clear acoustic feature is visible in the time-domain trace. The corresponding frequency spectrum reveals a dominant frequency of 118 GHz with a smaller contribution at a frequency of 233 GHz. Comparing with the Rytov dispersion (the parameters of Al0.3Ga0.7As were assumed to be the bulk values), these features can be assigned to the first and second zone-folded center modes. Zone-folded modes of third and higher orders are not visible within the accuracy of our measurement. Based on the phonon spectrum, the time-domain data is fitted by a sum of two exponentially decaying sinusoidal functions; the result is shown in Fig. 6(c).
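Before quoting the two-mode results, it is worth sketching how such decay times are extracted: the lifetime extraction amounts to a least-squares fit of one (or a sum of two) exponentially damped sinusoids to the background-corrected transient. The snippet below shows one hedged way to do this for a single mode with scipy; the synthetic data and the 0.46 THz / 76 ps numbers are merely initial guesses taken from the text, and the actual analysis may differ in detail (windowing, weighting, background treatment).

```python
# Minimal damped-sinusoid fit for extracting a phonon frequency and decay time
import numpy as np
from scipy.optimize import curve_fit

def damped_sine(t, amp, f, tau, phi):
    # exponentially damped oscillation; t in ps, f in THz, tau in ps
    return amp * np.exp(-t / tau) * np.sin(2 * np.pi * f * t + phi)

# Synthetic stand-in for the background-corrected transmission transient
t = np.arange(20.0, 600.0, 0.2)                          # ps, first 20 ps excluded as in the text
rng = np.random.default_rng(1)
data = damped_sine(t, 1e-5, 0.46, 76.0, 0.3) + rng.normal(0, 1e-7, t.size)

p0 = [1e-5, 0.46, 80.0, 0.0]                             # initial guesses: amplitude, THz, ps, rad
popt, pcov = curve_fit(damped_sine, t, data, p0=p0)
perr = np.sqrt(np.diag(pcov))
print(f"f = {popt[1]*1e3:.0f} GHz, tau = {popt[2]:.0f} +/- {perr[2]:.0f} ps")
```

For the two-mode sample discussed next, the model would simply be the sum of two such terms with independent parameters.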
The fit yields decay times of (295 ± 2) ps and (78 ± 4) ps for the first and second folded modes, respectively. The decay times are in agreement with the three-phonon scattering model for high-frequency acoustic phonons insofar as the second folded phonon mode decays faster than the first folded phonon mode. Additional measurements are necessary to further investigate the frequency dependence of the acoustic phonon lifetimes in the demonstrated superlattice. Conclusion In summary, we introduced a high-speed pump-probe system based on an Yb:KYW and a Ti:sapphire oscillator with a gigahertz repetition rate. An offset of a few kHz between both lasers, stabilized by a feedback loop, enables two-colour asynchronous optical sampling experiments at several kHz scanning speed. Noise floors of the pump-probe data below 10⁻⁶ are achieved within seconds of trace averaging, in close proximity to the shot-noise level. A temporal resolution of 350 fs, including a timing jitter of less than 50 fs, was measured, in accordance with the duration of the implemented laser pulses. As a showcase experiment, zone-folded coherent acoustic phonons in semiconductor superlattices were probed in transmission. The demonstrated system opens new possibilities for high-speed pump-probe experiments at wavelength configurations of 800 nm and 1050 nm for pump or probe pulses, respectively. Fig. 2. Characterization results of the Yb:KYW laser. (a) RF spectrum, inset: zoom into the fundamental repetition rate f0 = 0.996 GHz; (b) laser output power versus pump power; (c) optical spectrum and sech² fit; (d) interferometric auto-correlation, yielding a pulse duration of about 180 fs. Fig. 3. (a) Schematic layout of the two-colour ASOPS setup. Solid lines represent laser beams, dashed lines represent electric connections. P1-P2: photodiodes (bandwidth > 10 GHz), P3-P4: photodiodes (bandwidth from 25 kHz to 130 MHz), ZnTe: zinc telluride crystal, PBS: polarizing beam splitter, DF1-DF2: dielectric filters, ADC: 100 MHz analog-to-digital converter. (b) Two-photon absorption cross-correlation measurement results in GaP. The left and right insets display different zooms into the first signal feature to demonstrate the width and the slowly oscillating tail of the TPA signal, respectively. Fig. 4. Noise characterization using the Ti:sapphire (black) and the Yb:KYW oscillator (red). Blue straight lines correspond to the calculated shot-noise limit of the photoreceivers used for the ASOPS measurements with a bandwidth from 25 kHz to 130 MHz (AC-Det). (a) Noise power spectral density, measured with a DC-coupled photodiode (DC-Det); the blue dashed line corresponds to the noise level of the DC-coupled photodiode. (b) Averaged ASOPS time traces at zero pump power at data acquisition times of about 1 s and 1000 s. (c) Corresponding power spectral density of the data in (b). (d) Noise floor for different data acquisition times.
Fig. 5. (a) Measured acoustic phonons in the GaAs/AlAs superlattice after subtraction of the background. (b) Upper part: calculated dispersion relation; straight horizontal lines represent the double wavevector of the probe laser, and straight vertical lines indicate the expected back-scattered acoustic phonon frequencies. Lower part: FFT spectrum of the data in (a) and result of a measurement in reflection at a probe wavelength of 800 nm. (c) Zoom into the first 200 ps of the data in (a) and exponentially damped sinusoidal fit. Fig. 6. (a) Measured acoustic phonons in the GaAs/Al0.3Ga0.7As superlattice after subtraction of the background. (b) Calculated Rytov dispersion of the superlattice and phonon spectrum. (c) Data from (a) and fit curve, based on a superposition of two exponentially decaying sinusoids.
6,689.2
2015-07-13T00:00:00.000
[ "Engineering", "Physics" ]
Inhibition of DNA cross-linking by mitomycin C by peroxidase-mediated oxidation of mitomycin C hydroquinone. Mitomycin C requires reductive activation to cross-link DNA and express anticancer activity. Reduction of mitomycin C (40 μM) by sodium borohydride (200 μM) in 20 mM Tris-HCl, 1 mM EDTA at 37 °C, pH 7.4, gives a 50-60% yield of the reactive intermediate mitomycin C hydroquinone. The hydroquinone decays with first-order or pseudo-first-order kinetics with a t1/2 of approximately 15 s under these conditions. The cross-linking of T7 DNA in this system followed matching kinetics, with the conversion of mitomycin C hydroquinone to leuco-aziridinomitosene appearing to be the rate-determining step. Several peroxidases were found to oxidize mitomycin C hydroquinone to mitomycin C and to block DNA cross-linking to various degrees. Concentrations of the various peroxidases that largely blocked DNA cross-linking regenerated 10-70% of the mitomycin C from the reduced material. Thus, significant quantities of products other than mitomycin C were produced by the peroxidase-mediated oxidation of mitomycin C hydroquinone or of products derived therefrom. Variations in the sensitivity of cells to mitomycin C have been attributed to differing levels of activating enzymes, export pumps, and DNA repair. Mitomycin C hydroquinone-oxidizing enzymes give rise to a new mechanism by which oxic/hypoxic toxicity differentials and resistance can occur. The mitomycin antibiotics, produced by various Streptomyces, give rise to electrophiles capable of cross-linking DNA following either one-electron or two-electron reduction by enzymatic or chemical systems (1). The one-electron reduction product, the mitomycin C semiquinone radical anion (MC•−),1 reacts with molecular oxygen (itself a stable diradical) at close to the diffusion-controlled rate (10⁹ to 10¹⁰ M⁻¹ s⁻¹) to give the parent mitomycin C (MC) quinone and superoxide (O2•−) (2). Since very few cross-links are required to give rise to a lethal event (3), the regeneration of MC and the production of a superoxide anion, a species of low toxicity relative to a DNA cross-link, represents a detoxification step. Under physiological oxygen concentrations, the half-life of MC•− would be expected to be less than 0.1 ms. Therefore, in the presence of physiological concentrations of oxygen, only an extremely small proportion of the MC•− produced could be involved in the direct alkylation of biomolecules, and the yields of DNA cross-links via this pathway would be negligible. The two-electron reduced species, mitomycin C hydroquinone (MCH2), does not react rapidly with oxygen and, therefore, cross-links DNA in a manner largely independent of the concentration of oxygen. Thus, the degree of initial DNA damage by MC under aerobic conditions almost exclusively depends upon a two-electron reduction to MCH2. Under very low oxygen concentrations, greater cellular damage would be expected, reflecting the rate of production of both MCH2 and MC•−. There is also evidence that most of the damage resulting from the production of MC•− under hypoxic conditions is due to disproportionation (4) or further reduction of MC•− to MCH2 and subsequent formation of an alkylating species therefrom, and not due to the direct interaction of MC•− with DNA. Such processes may also play a role in aerobic alkylations following one-electron reduction, but competition from the reaction of MC•− with O2 is likely to greatly reduce the interaction of MC•−
with further reducing molecules. In biological systems MC can be reduced by a variety of enzymes, some favoring one-and some two-electron activation (5). Solid tumors show resistance to therapeutic modalities such as radiation due to areas of hypoxia arising from poor tumor vascularization (6). Mitomycin antibiotics have been used as an adjunct to x-irradiation to selectively target the radiation-resistant hypoxic fraction of human tumors (7,8). Variations in the toxicity of MC to cells have been attributed to differing levels of oxygenation, activating enzymes (and their localization), export pumps, and DNA repair (9 -12). A number of genes coding for resistance proteins in Streptomyces lavendulae have been cloned and expressed. One of these genes, mcrA, codes for a 54-kDa protein (MCRA) that appears to be a mitomycin C hydroquinone oxidase (13,14). Expression of the cDNA for the bacterial resistance protein MCRA in CHO-K1/dhfr Ϫ cells resulted in profound resistance to MC in these cells under aerobic conditions, with little change in sensitivity to MC under hypoxia (14). The marked resistance to MC under aerobic conditions observed in MCRA-expressing CHO-K1/dhfr Ϫ cells resembles that produced in cell lines selected for resistance to MC under aerobic conditions (14). This finding suggests that a mechanism of resistance based upon the oxidation of MCH 2 by a functional homologue of the MCRA protein may be operative. Since many peroxidases have high * This work was supported in part by United States Public Health Service Grant CA-80845 from the NCI, National Institutes of Health. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact. § Current address: Vion Pharmaceuticals Inc., New Haven, CT 06511. ** To whom correspondence should be addressed. Tel.: 203-785-4533; Fax: 203-737-2045. 1 The abbreviations used are: MC . , mitomycin C semiquinone radical anion; H33258, Hoechst 33258; MC, mitomycin C; MCH 2 , mitomycin C hydroquinone; HRP, horseradish peroxidase; LP, lactoperoxidase; MP, myeloperoxidase; MCRA, mitomycin resistance protein A; QH 2 , hydroquinone; Rz, Reinheitszahl ratio, the ratio of the absorbance at the Soret band max to that of the max of the protein band; HPLC, high performance liquid chromatography. affinities for hydroxylated aromatic compounds and hydroquinones as oxidizable substrates, we compared the ability of various peroxidases to that of MCRA to oxidize MCH 2 and to inhibit DNA cross-linking. In this report we demonstrate that horseradish peroxidase (HRP), myeloperoxidase (MP), and lactoperoxidase (LP) oxidize MCH 2 in the presence of a source of hydrogen peroxide and prevents to various degrees the crosslinking of T7 DNA by MCH 2 . EXPERIMENTAL PROCEDURES Chemicals-T7 DNA and other chemicals and enzymes were purchased from Sigma, except where specified. Peroxidase purity was accessed by determining the Rz value (the Rz value equals the ratio of the absorbance at the heme prosthetic group Soret band max to that of the protein band max ) for the various enzyme preparations; for HRP (affinity-purified) the Rz (A 403 nm /A 275 nm ) value was equal to that reported for the pure protein (15), for LP the Rz value (A 412 nm /A 280 nm ) was 89% of that reported for the pure protein (16), and for MP the Rz value (A 430 nm /A 275 nm ) was 60% of that reported for the pure protein (17). 
Hoechst 33258 was obtained from Molecular Probes, Inc. (Eugene, OR). Purified MCRA was prepared by David H. Sherman, and MC was supplied by Bristol Myers-Squibb, Inc. (Wallingford, CT). Stock solutions of MC (2.8 mM) and sodium borohydride (NaBH 4 ) (20 mM) were made in isopropanol, which was used because it is an excellent hydroxide radical scavenger. Theoretically, the indirect generation of hydroxide radicals as a consequence of the redox cycling of MC, the reduction of oxygen by NaBH 4 , or the action of potentially protective peroxidases could interfere with the DNA cross-linking assay by introducing strand breaks. Therefore, since the objective was to examine DNA-DNA cross-linking by MC/NaBH 4 and enzymatic inhibition of cross-linking, the prevention of radical nicking was essential to these meas-urements. The final concentration of isopropanol in the reaction mixture was 2.4%, which was more than sufficient to prevent radical nicking in this system (18). DNA Cross-linking and Nicking-DNA cross-linking kinetics and the extent of DNA cross-linking were determined using a DNA renaturation assay (14,19). The assay is based upon the observation that, under conditions of neutral pH, upon snap cooling thermally denatured covalently cross-linked T7 DNA rapidly renatures since the strands are held in register, yielding a highly fluorescent complex with H33258, whereas T7 DNA containing no cross-links does not. T7 DNA at a concentration of 100 g/ml in 20 mM Tris-HCl buffer containing 1 mM EDTA and 40 M MC at pH 7.4 was reacted with NaBH 4 . The crosslinking reaction was initiated by the addition of 1% by volume of freshly prepared 20 mM NaBH 4 solution in isopropanol, which was prepared by a 25-fold dilution of a 0.5 M solution of NaBH 4 in 2-methoxyethyl ether with dry isopropanol. The final concentration of NaBH 4 in the reaction mixture was 200 M. Small (15 l) aliquots were removed at various time intervals, diluted 100-fold with 5 mM Tris-HCl, 0.5 mM EDTA buffer, pH 8.0, containing 0.1 g/ml H33258, and assayed for DNA cross-links. The 100-fold dilution of DNA and alkylating agent effectively prevents significant further reaction. The concentrations of reactants were chosen to give a maximum of ϳ30 -50% of the DNA being cross-linked (i.e. at least one cross-link in 30 -50% of the molecules). The potential nicking by radicals or other species under the reaction conditions employed was ascertained by measuring the decrease in the fluorescence of the complex of H33258 and the stably pre-cross-linked DNA following a heat/chill cycle (19,20). The pre-cross-linked T7 DNA was prepared by treating 0.5 mg/ml DNA with 0.2 mM 1,2-bis(methylsulfonyl)-1-(2-chloroethyl)hydrazine in 10 mM Tris-HCl, 1 mM EDTA buffer, pH 8.0, for 24 h at 37°C. Fluorescence measurements were performed using a Hoefer Scientific Instruments TKO 100 fluorometer. Determination of MC by HPLC-HPLC measurements of the loss and regeneration of MC were performed in 20 mM Tris-HCl buffer containing 1 mM EDTA, 40 M MC, and 200 M NaBH 4 at pH 7.4 using a modification of the method of Kumar et al. (4). Spectroscopic Measurement of the Formation of Mitosene-All spectroscopic determinations were performed using a Beckman model 25 spectrophotometer at 575 nm. In these studies, MC and NaBH 4 were used at increased final concentrations of 200 and 500 M, respectively, because of the low absorbances of MC and the MC-derived species being measured at 575 nm. 
Stock solutions of 10 mM MC and 50 mM NaBH 4 in isopropanol were employed for these studies. All cuvettes were freshly acid-washed (HCl/HNO 3 ) and stored submerged in distilled H 2 O prior to use. This procedure minimized problems due to bubble formation on and adhesion to the faces of the cuvettes. Decomposition Kinetics of NaBH 4 -The decomposition kinetics of NaBH 4 in 20 mM Tris-HCl buffer containing 1 mM EDTA and 20 g/ml phenol red at pH 7.4 were determined by following the change in absorbance of phenol red at 560 nm, which varies linearly with very small molar changes in the consumption or generation of hydrogen ions (21). The measurements were based upon the alkalinization of the medium from the consumption of 1 mol of protons/mol of NaBH 4 during its decomposition at pH values close to neutrality, as represented in the following reaction. The alkalinization occurs because at pH values close to neutrality boric acid (pK a 10.2) is essentially undissociated. The reaction was FIG. 1. Comparison of the decay kinetics of NaBH 4 and the post-reduction spectral changes produced in MC solutions under identical conditions (pH 7.4, 37°C). A, decomposition kinetics of NaBH 4 as measured by medium alkalinization at 560 nm. B, the initial dashed portion of this trace largely represents the kinetics of the reduction of MC to MCH 2 by NaBH 4 (⌬ absorbance at 575 nm), whereas the latter continuous portion largely represents the kinetics of the progression of MCH 2 to high absorbing mitosene species (⌬ absorbance at 575 nm). Determination of Hydrogen Peroxide-The quantity of H 2 O 2 generated during the decomposition of NaBH 4 in air saturated buffer was estimated using a modification of the method of Pick and Keisari (22), which is based upon the oxidation of phenol red by H 2 O 2 /HRP to a compound that absorbs strongly at 610 nm at high pH values. The quantity of H 2 O 2 generated was determined by comparing the absorption values to a standard curve. Spectroscopic Measurement of Hydroquinone Oxidation-The oxidation of hydroquinone (QH 2 ) by HRP, LP, or MP was evaluated by following the decrease in absorption of QH 2 at 288 nm in a 20 mM Tris-HCl, 1 mM EDTA buffer, pH 7.4, at 37°C. Kinetics of NaBH 4 Decomposition, MC Reduction, and MCH 2 Decomposition-The decomposition kinetics of NaBH 4 in 20 mM Tris-HCl, 1 mM EDTA buffer, pH 7.4, using a protometric assay at 37°C appeared to follow first order or pseudo first order kinetics with a short t1 ⁄2 value of 4.7 Ϯ 0.9 s (Fig. 1). If the rate of reduction of MC by NaBH 4 is assumed to be proportional to the concentration of NaBH 4 , the rate of reduction of MC would be expected to follow the disappearance of NaBH 4 and occur rapidly over a short temporal window, resulting in the bolus production of MCH 2 . Reduction of MC to form MCH 2 should result in a disruption of the extended conjugation of the double bonds, resulting in a shift in the absorption maxima to lower wavelengths and in a bleaching of absorbance at higher wavelengths. However, upon the subsequent formation of high absorbing mitosene products, the absorbance at these wavelengths should then increase beyond the initial level of MC. Visible spectral scans of MC before and at various times after the addition of NaBH 4 were performed, and a rapid bleaching followed by a rise in absorbance beyond the original level was observed, centered around 575 nm. At 37°C, the bleaching at 575 nm was maximal within 3-5 s (Fig. 1). 
This bleaching was followed by a progressive increase in absorption at this wavelength to approximately twice the absorption of the initial MC solution and appeared to follow first order or pseudo first order kinetics with a t1 ⁄2 value of 14.8 Ϯ 2.8 s at 37°C and pH 7.4. This finding is consistent with the "pulse-like" generation of MCH 2 . HPLC studies indicated that only about one half of the initial amount of MC was reduced under these conditions despite the 5-fold molar excess of NaBH 4 . The short t1 ⁄2 of NaBH 4 due to its rapid reaction with the vast molar excess of H 2 O provides a possible explanation for the limited reduction of the available MC. Since only ϳ50% of the MC was reduced, the initial reduction product(s) has little absorption at 575 nm compared with MC and the mitosene products subsequently generated have ϳ2-3-fold greater absorption at 575 nm than MC. The rate of production of mitosenes was highly temperaturesensitive with the t1 ⁄2 at 30°C being ϳ30 s compared to ϳ15 s at 37°C (Fig. 2A). Addition of MCRA prior to the reduction of MC by NaBH 4 blocked these spectral changes under aerobic conditions (Fig. 2B). MC is believed to react following reduction as shown in Fig. 3; the methoxy group at position 9a is lost subsequent to hydroquinone formation to give leuco-aziridinomitosene. Oxidation after the formation of leuco-aziridinomitosene cannot result in the regeneration of MC, which can only occur if enzymatic oxidation occurs at the level of MCH 2 . Ad- by various oxidizing components as determined by HPLC The data have been normalized such that the % yield of MC from MCH 2 has been calculated relative to the assumed level of MCH 2 formation. The addition of NaBH 4 to aerobic solutions of MC under the conditions described resulted in a 50 -60% net loss of MC in all experiments. It is assumed that this net loss of MC represents the quantity of MCH 2 initially formed and that, in the absence of added oxidizing components, which increase the level of recovery of MC, essential all the MCH 2 spontaneously gives rise to products other than MC. The values presented are the average of two experiments each conducted in duplicate. dition of MCRA at the point of maximum bleaching (3-5 s after the NaBH 4 ) gives results almost equivalent to those obtained by the addition of MCRA prior to NaBH 4 . The spectroscopic absorption is restored to close to the original MC level without the production of high absorbing mitosenes; the bleaching dip is reduced slightly in magnitude when MCRA is added prior to the NaBH 4 , probably due to some MCRA mediated re-oxidation occurring during the reduction phase. The late addition of MCRA, i.e. at the maximum MCH 2 concentration, served to combat the argument that the MCRA merely prevented the initial reduction of MC. HPLC analysis indicated that, when MCRA was initially present in the reaction mixture, essentially all of the MCH 2 was back-oxidized to MC and there was little or no net MC loss or mitosene formed (Table I). Addition of MCRA or HRP/H 2 O 2 at a time point equivalent to the half-reaction point (ϳ30 s at 30°C and pH 7.4), as judged from the spectroscopic studies ( Fig. 2A), reduced the quantity of MC being restored by one half, indicating that one half of the total MCH 2 generated must still be present at this time. This finding implies that the t1 ⁄2 values determined spectroscopically and by DNA cross-linking reflect those for the loss of MCH 2 itself and not a subsequent low absorbing species, i.e. 
the conversion of MCH 2 to leucoaziridinomitosene, which appears to be the rate-determining step. The kinetics do not represent a rate-determining reduction of MC by NaBH 4 since the t1 ⁄2 of NaBH 4 is much shorter than that observed for the 575 nm spectral changes, nor can it represent the reaction of a low absorbing component formed subsequent to the loss of the methoxy group since MC regeneration would not be possible after this point (Fig. 3). Peroxidase Oxidation of MCH 2 -Spectroscopic experiments similar to those described above were performed in which peroxidases were substituted for MCRA. Addition of HRP at equivalent time points gave visible spectral traces that were surprisingly similar to those obtained with MCRA (Fig. 4A). Surprisingly, the addition of H 2 O 2 was not required for HRP to block the production of the high absorbing species (Fig. 4A), to prevent the net loss of MC as determined by HPLC, or to block the cross-linking of T7 DNA by MC (Tables I and II). Thus, like MCRA, HRP catalyzed the regeneration of MC from the bleached material, although the yield was somewhat less. When these experiments were repeated using LP and MP, both of these peroxidases were able to decrease the generation of mitosenes and affect the regeneration of MC from the bleached material to various degrees, but were much less effective than HRP and MCRA ( Fig. 5 and Table I). The order of effectiveness was MCRA Ͼ HRP Ͼ LP Ͼ MP. The absence of a requirement for H 2 O 2 to manifest the inhibitory effects of HRP coupled with the observation that catalase antagonized the effects of HRP (Fig. 4B and Tables I and II) If a self-propagating chain reaction occurred to any significant extent, as shown in Scheme 1, a greater ratio of QH 2 oxidation to H 2 O 2 addition would be expected. Since this did not occur, we examined H 2 O 2 production via mechanism (c), the partial-reduction of O 2 by NaBH 4 . Solutions in which NaBH 4 was allowed to decompose were assayed under aerobic and anaerobic (purged N 2 ) conditions for H 2 O 2 . H 2 O 2 was detected under aerobic conditions only when the peroxidase/phenol red system was present during the decomposition of the NaBH 4 . The quantity of H 2 O 2 trapped by HRP increased with increasing quantities of HRP within the range examined (10 -50 units/ml) (Table III). This effect was not observed when standard H 2 O 2 calibration curves were generated, where the absorbance at 610 nm was dependent upon the concentration of H 2 O 2 only and independent of the concentration of HRP in the range of the enzyme concentration employed. This was due to the fact that the 10 units/ml HRP used was vastly more than sufficient to complete the oxidation of the phenol red indicator by the available H 2 O 2 present during the assay period. The dependence upon the concentration of HRP suggested that some other component was competing with the HRP for the generated H 2 O 2 . Since NaBH 4 is a powerful reductant and H 2 O 2 a powerful oxidant, one would expect the NaBH 4 to readily reduce the H 2 O 2 . To test this possibility, the addition of HRP was delayed until after the NaBH 4 had decomposed; this procedure resulted in no H 2 O 2 being trapped, even when the mixture was spiked with as much as 40 M H 2 O 2 before the addition of NaBH 4 . When spiked with 80 M H 2 O 2 , only ϳ5% of the H 2 O 2 was found to remain after the decomposition of the NaBH 4 (Table III). 
Thus, H 2 O 2 has a dynamic existence in this system, with H 2 O 2 being generated from the reduction of O 2 by NaBH 4 and then consumed by further reduction. These results explain the lack of a need for an external source of H 2 O 2 for the peroxidases to oxidize the MCH 2 while still being sensitive to inhibition by catalase. Effects of Peroxidases on DNA Cross-linking-Enzymes with the ability to rapidly oxidize MCH 2 back to MC before a significant proportion of the MCH 2 had undergone subsequent reactions would be expected to protect DNA against cross-linking in model systems. For this reason we compared the ability of HRP, LP, and MP to that of MCRA to protect T7 DNA from cross-linking by NaBH 4 reduced MC. All of the peroxidases were effective in blocking the cross-linking of T7 DNA by re-duced MC, and relatively high levels of protection were produced by relatively low concentrations of some of these enzymes (Table II). Smaller differences were observed in the ability of peroxidases to block T7 DNA cross-linking by reduced MC than in their ability to inhibit the production of high absorbing mitosenes and the yields of MC as a result of the back oxidation of the reduced species. Matching kinetics were observed for DNA cross-linking by reduced MC and for the postreduction spectral changes in MC. Half-maximal effects were found to occur for both parameters in ϳ15 s at 37°C and pH values of 7.4 (Fig. 7) and in ϳ4 min at room temperature (ϳ23°C) and pH 8.0. The primary reason isopropanol was used as the vehicle for both MC and NaBH 4 was to provide protection for the T7 DNA from potential oxygen-derived radical-mediated nicking. Previously, it had been shown that the 2.4% final concentration of isopropanol used in these experiments was more than sufficient to block radical nicking of DNA (18). However, to verify that the T7 DNA was not being nicked significantly relative to the level of cross-linking, we measured the net level of crosslinking of T7 DNA stably pre-cross-linked to ϳ30% (30% of the population of DNA molecules contained one or more crosslinks) with another agent, 1,2-bis(methylsulfonyl)-1-(2-chloroethyl)hydrazine, after which the DNA was subjected to further cross-linking in the MC/NaBH 4 /peroxidase systems. If an intact T7 DNA molecule contained just one cross-link, the strands would be held in register and the entire molecule after thermal denaturation would renature upon snap cooling. If just one strand break was introduced into the DNA molecule, a minimum of two cross-links, depending upon their position relative to the nick, would be required to fully renature the DNA after snap cooling. Therefore, the presence of significant nicking in relation to the number of cross-links would greatly attenuate the measured cross-linking signal. If a pre-crosslinked population with an X mole fraction of the molecules containing one or more cross-links was subjected to further cross-linking by a second agent to the extent that this would give a Y mole fraction of cross-links when used as a single agent, the net mole fraction cross-linked in the resultant population if no nicking occurred would be X ϩ (1 Ϫ X)Y. This formula approximates the sum of X and Y only if X and Y are small. The true value is somewhat less than the sum because additional cross-links added to previously cross-linked molecules would not give rise to an increase in signal. 
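The bookkeeping in the dual cross-linking control can be made concrete with a two-line calculation. The snippet below simply evaluates the combined fraction X + (1 − X)·Y quoted above, using the pre-cross-linked fraction from this work and a hypothetical value for the second treatment; it is an illustration of the formula, not a reanalysis of the published data.

```python
# Expected fraction of cross-linked molecules after two independent treatments,
# assuming no nicking: combined = X + (1 - X) * Y
def combined_crosslink_fraction(x, y):
    return x + (1.0 - x) * y

x = 0.316   # fraction pre-cross-linked with the hydrazine agent (from the text)
y = 0.40    # hypothetical fraction that the MC/NaBH4 treatment alone would cross-link

print(f"expected combined fraction: {combined_crosslink_fraction(x, y):.3f}")
print(f"naive sum X + Y:            {x + y:.3f}")
# The combined value is smaller than the naive sum because cross-links added to
# already cross-linked molecules do not increase the renaturation signal.
```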
If nicking was the primary reason, for example in the Ͼ90% decrease in the cross-linking signal observed when HRP/H 2 O 2 was added to the MC/NaBH 4 system, a comparable reduction would be expected in the pre-existing cross-linking signal and the overall level of cross-linking would be greatly reduced. In our dual cross-linking experiments, the measured net cross-linking closely matched the calculated value for the combined crosslinking signal in the absence of significant nicking (Table IV). Therefore, the protection seen by the peroxidases was largely due to a decrease in the number of cross-links and not to an artifactually reduced signal due to the introduction of a large number of nicks. DISCUSSION The instability of MCH 2 under normal physiological conditions complicates the study of enzymes that interact with this species. Ideally, to study the oxidation of MCH 2 by enzymatic systems, it would be optimum to be able to add MCH 2 directly to reaction systems or to be able to "pulse" generate the MCH 2 in solution. NaBH 4 appears to reduce MC to MCH 2 as the major product over a short temporal window under the chosen conditions ( Fig. 1) relative to the rate of subsequent MCH 2 reactions and, therefore, appears to be a suitable reductant for studies of this kind. Since the kinetics and by-products of the reactions of NaBH 4 with MC, water, and oxygen are relevant to studies on the possible role of peroxidases in the mechanism of resistance to MC, we have examined these reactions. Of particular relevance is the transient generation of H 2 O 2 during the aerobic decomposition of NaBH 4 as a result of the reduction of molecular oxygen and the subsequent further reduction of the generated H 2 O 2 , obviating the addition of exogenous H 2 O 2 for the peroxidases to be operative, thereby explaining the sensitivity to inhibition of peroxidase protection by catalase without the addition of exogenous H 2 O 2 . HPLC analysis confirmed the regeneration of MC from the initial reduced species (MCH 2 ) by MCRA and the various peroxidases. Addition of MCRA at the half-reaction point, based upon the time course of the spectroscopic changes (ϳ30 s at 30°C and pH 7.4), resulted in the regeneration of MC being decreased by approximately one half, indicating that one half of the total MCH 2 generated must still be present at this time. This finding implies that the t1 ⁄2 values determined spectroscopically and by DNA cross-linking reflect those for the loss of MCH 2 itself and not a subsequent low absorbing species, i.e. the conversion of MCH 2 to leuco-aziridinomitosene, which appears to be the rate-determining step. The kinetics do not represent a rate-determining reduction of MC by NaBH 4 since the t1 ⁄2 of NaBH 4 is much shorter than that observed for the spectral changes, nor can it represent the reaction of a low absorbing component formed subsequent to the loss of the methoxy group since MC regeneration would not be possible after this point. The cross-linking of T7 DNA by NaBH 4 reduced MC appeared to follow first order kinetics and was extremely rapid under these conditions, with half maximal cross-linking occurring in ϳ15 s at 37°C and pH 7.4, matching the kinetics seen for the production of high absorbing mitosenes in the spectroscopic studies (Fig. 7). This finding implies that the same rate-determining step of MCH 2 to leuco-aziridinomitosene was limiting in both of these processes. 
If the enzymes were oxidizing MCH 2 , their inclusion should block T7 DNA cross-linking in the MC/NaBH 4 system; this indeed occurred, with MCRA, HRP, LP, and MP strongly blocking the measured cross-linking signal (Table II). Single-strand breaks could arise from radicals that theoretically could be generated by the interaction of components in the reaction mixture and appear to reduce the level of crosslinking of DNA that was measured. However, experiments involving the additional cross-linking of stably pre-cross-linked DNA in the presence and absence of protective systems were consistent with the absence of significant nicking and the inhibition of cross-link formation. Unlike MCRA, concentrations of HRP, MP, and LP that largely blocked the cross-linking of T7 DNA were not as effective in preventing the net loss of MC or the spectral changes in the cases of MP and LP. Therefore, significant quantities of products other than MC must be produced by the HRP-, LP-, and MP-mediated oxidation of MCH 2 , or a large fraction of the protection from the cross-linking of DNA via these peroxidases arose from the preferred oxidation of species subsequent to MCH 2 formation but prior to cross-link formation. Preferential oxidation of the species formed after the loss of the methoxy group could restrict the molecule to monofunctional alkylation that would not be detected in our system. Oxidation to species that may not be able to be reductively reactivated to a cross-linking or alkylating species may result in superior resistance to that resulting from the regeneration of MC. Thus, it seems feasible that plant peroxidases such as HRP may offer plant roots some protection from the toxic products of Streptomyces and other soil bacteria. Variations in the sensitivity of neoplastic cells to MC have been attributed to differing activities of reductive activating enzymes, export pumps, and DNA repair enzymes (9 -12). The existence of enzymes capable of oxidizing MCH 2 back to the MC prodrug or other inactive forms gives rise to another possible mechanism by which resistance to the mitomycin antibiotics and toxicity differentials between oxygenated and hypoxic tumor cells could arise. Recently, work from our laboratory has shown that MCRA expressed in CHO-K1/dhfr Ϫ cells conferred profound resistance to MC under aerobic conditions only, resulting in a phenotype with an extreme oxic/hypoxic differential (14). High levels of resistance only under aerobic conditions resembles that produced in cell lines selected aerobically for MC resistance (see Ref. 14 for the appropriate references). These findings suggest that a mechanism of resistance based upon the oxidation of MCH 2 by proteins that are functional homologues of MCRA could be a mechanism contributing to resistance to MC. It is of interest to note that one of a group of proteins cloned because of their ability to block the hypersensitivity of Fanconi anemia lymphoblastoid cells to MC was tentatively identified as a peroxidase based upon sequence analysis (23); moreover, this protein showed the appropriate induction after H 2 O 2 exposure (24). It should be noted that the selective loss of twoelectron reducing pathways responsible for the toxicity under aerobic conditions could result in a similar resistance profile if the total two-electron flux was small compared with the total one-electron flux, resulting in an insignificant contribution of two-electron reducing mechanisms to the total toxicity under hypoxia. 
If some form of peroxidase activity were involved in the oxidative detoxification of MCH2, a source of H2O2 would also be required. MC could supply this source of H2O2 under aerobic conditions as a consequence of one-electron reduction and redox cycling. If such a system were operative, the ratio of one- to two-electron reducing systems, and possibly their relative locations in the cell, could influence toxicity. Increasing the level of one-electron reduction, thereby fueling an MCH2 peroxidase with H2O2, could be expected to alleviate toxicity under aerobic conditions. Consistent with this hypothesis is the observation that increased resistance to MC under aerobic conditions was found when NADH:cytochrome b5 reductase was overexpressed in the mitochondria of Chinese hamster ovary cells (11). TABLE IV. Comparison of calculated net and measured net cross-linking of DNA stably pre-cross-linked with 1,2-bis(methylsulfonyl)-1-(2-chloroethyl)hydrazine and subjected to further cross-linking by NaBH4-reduced MC. T7 DNA and T7 DNA pre-cross-linked by treatment with 1,2-bis(methylsulfonyl)-1-(2-chloroethyl)hydrazine to a level of 31.6 ± 1.1% were each subjected to cross-linking by various reaction mixtures containing protective systems. If the decrease in the observed cross-linking of DNA as a result of the inclusion of a protective system was solely due to the prevention of cross-link formation, then the measured net % cross-linking should match the calculated value. The % cross-linking values are the arithmetic means of three determinations ± the standard deviation. Heme peroxidases occur in a variety of mammalian tissues and secretions, with very high levels being found in some of the lymphocyte cell lineages. Neutrophils contain the highest levels of any normal mammalian tissue (25), containing 2% (dry weight) myeloperoxidase, sufficient to impart a faint green color to these cells. Chloromas, an extramedullary tumor of granulocytic lineage, contain extremely high levels of myeloperoxidase (6% dry weight), giving the tumor a characteristic green coloration (26). This level of myeloperoxidase equates to several hundred times the concentration of MP used in these model studies.
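A minimal sketch of how a "calculated net" value of the kind compared in Table IV can be obtained. The combination rule below, which treats the pre-existing and the newly formed cross-links as independent events on each DNA molecule, is an assumption made for illustration only; it does not reproduce the paper's exact calculation, and the 12% figure used for the additional cross-linking is a hypothetical placeholder.

```python
def calculated_net_crosslinking(pre_existing_frac: float, additional_frac: float) -> float:
    """Expected net fraction of molecules carrying at least one cross-link, assuming the
    pre-existing cross-links (e.g. 31.6% from the hydrazine treatment) and the additional
    cross-links formed by NaBH4-reduced MC occur independently on each molecule."""
    return 1.0 - (1.0 - pre_existing_frac) * (1.0 - additional_frac)

if __name__ == "__main__":
    pre = 0.316    # reported pre-cross-linked fraction
    extra = 0.12   # hypothetical additional cross-linking in the presence of a protective system
    print(f"calculated net cross-linking: {100 * calculated_net_crosslinking(pre, extra):.1f}%")
```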
7,817.6
2001-09-14T00:00:00.000
[ "Chemistry", "Medicine" ]
Controlling the Emission Spectrum of Binary Emitting Polymer Hybrids by a Systematic Doping Strategy via Förster Resonance Energy Transfer for White Emission Tuning the emission spectrum of both binary hybrids of poly (9,9′-di-n-octylfluorenyl-2,7-diyl) (PFO) with each poly[2-methoxy-5-(2-ethylhexyloxy)-1,4-phenylenevinylene] (MEH-PPV) and poly[2-methoxy-5-(3,7-dimethyl-octyloxy)-1,4-phenylenevinylene] end-capped with Dimethyl phenyl (MDMO-PPV–DMP) by a systematic doping strategy was achieved. Both binary hybrid thin films of PFO/MEH-PPV and PFO/MDMO-PPV–DMP with various weight ratios were prepared via solution blending method prior to spin coating onto the glass substrates. The conjugation length of the PFO was tuned upon addition of acceptors (MEH-PPV or MDMO-PPV–DMP), as proved from shifting the emission and absorption peaks of the binary hybrids toward the acceptor in addition to enhancing the acceptor emission and reducing the absorbance of the PFO. Förster resonance energy transfer (FRET) is more efficient in the binary hybrid of PFO/MDMO-PPV–DMP than in the PFO/MEH-PPV. The efficient FRET in both hybrid thin films played the major role for controlling their emission and producing white emission from optimum ratio of both binary hybrids. Moreover, the tuning of the emission color can be attributed to the cascade of energy transfer from PFO to MEH-PPV, and then to MDMO-PPV–DMP. Introduction Organic-based materials for electronic applications, such as organic light-emitting diodes (OLEDs), have contributed significant advances in the field of polymer emitting devices. OLEDs gather several striking features, such as being low-cost; being easy to assemble by using wet methods; having a low power intake, owing to low conductivity of the organic materials; and having the ability to perform well in flat panel displays [1,2]. As white light is generally acquired by combining three principal colors (blue, green and red), two or more emitting constituents are piled in multilayer configuration [3][4][5][6] or assorted within a distinct layer by mixing or doping [7][8][9][10]. The latter methodology is favored, as the white OLEDs can be designed with a simpler configuration and a convenient approach escaping the vacuum coating (e.g., solution dispensation) for cost-effective white electroluminescent components. The emission efficacy of OLEDs can be adjusted by matching the electron and hole inoculation through the combination of two dissimilar polymers with different electrical properties [11]. Further, by shifting the wavelength of the fluorescence emission from the absorption peak of the leading polymer, the optical properties of a polymer composite can be enhanced in relation to the reduced optical loss. The wavelength shift is attained by a non-radiative Förster-type energy transfer from the excited state of the donor polymer to that of the acceptor polymer [11]. The principle of a Förster-type energy transfer is a spectral overlay between donor emission and acceptor absorption, distance between donor and acceptor, positioning of the dipoles of acceptor and donor molecules and the prevailing medium [12,13]. The Förster-type radiative energy transfer (FRET) arises due to dipole-dipole interaction, in which the separation between donor and acceptor must be <100 Å [14,15]. To estimate the Förster radius, three different techniques have been considered: a direct measurement of the energy transfer rate [16], photoluminescence (PL) quantum efficiency [17] and spectral overlap [18]. 
In all these techniques, it is essential to approximate the polymer molecules as hard spheres so that the concentration of a particular composite can be related to the intermolecular distance between the donor and the acceptor. In steady-state photoluminescence measurements, the energy transfer in most composites was observed to be a two-step process comprising exciton migration in the donor, followed by a Förster-type energy transfer from the donor to the acceptor molecule [19,20]. The FRET mechanism was exploited in the present work to produce WOLEDs with improved performance. FRET excitation in the blend demands good mixing of the two materials and good spectral overlap between acceptor absorption and donor emission [21]. Blending donors and acceptors for energy transfer can significantly reduce the concentration quenching of the excitons formed, enhancing the device performance [22][23][24]. In the present work, we used the ternary blend of poly[9,9′-di-n-octylfluorenyl-2,7-diyl] (PFO), poly[2-methoxy-5-(2-ethylhexyloxy)-1,4-phenylenevinylene] (MEH-PPV) and poly[2-methoxy-5-(3,7-dimethyl-octyloxy)-1,4-phenylenevinylene] end-capped with dimethylphenyl (MDMO-PPV-DMP) to achieve cascaded energy transfer for tuning emission colors and improving device performance. The three different emission components of PFO/MEH-PPV/MDMO-PPV-DMP were miscible with each other. Furthermore, the donor emission spectrum (PFO) and the acceptor absorption spectrum (MDMO-PPV-DMP or MEH-PPV) overlapped significantly. The main component, PFO, acted as a diluent, matrix and excitation energy donor for the ternary blend, producing light with high efficiency. Therefore, when the ternary blend was excited near the absorption peak wavelength of PFO, light emission from MDMO-PPV-DMP and MEH-PPV was expected, suggesting the cascade energy transfer used for WOLEDs. As research on the photophysical mechanism of ternary hybrid systems is rare, the current work focuses in detail on controlling the emission spectrum of binary emitting polymer hybrids by a systematic doping strategy via FRET for white light emission. Materials and Methods PFO and MEH-PPV, with average molecular masses of 58,200 and 40,000 g/mol, respectively, were purchased from Sigma Aldrich (Saint Louis, MO, USA), whereas MDMO-PPV-DMP was purchased from American Dye Source, Inc. (Morgan Boulevard, QB, Canada) and used as received, without any purification. The structure of each polymer and its energy levels are shown in Figure 1. The two acceptors (MEH-PPV and MDMO-PPV-DMP) and the donor (PFO) were dissolved separately in toluene, and then various amounts of each acceptor solution, namely 0.1, 0.3, 0.5, 1.0, 2.0, 3.0, 5.0 and 10 wt.% (0.015, 0.045, 0.075, 0.15, 0.31, 0.46, 0.79 and 1.67 mg/mL), were mixed with a fixed concentration (15 mg/mL) of the donor. The binary hybrids with different weight ratios were spin-coated from the toluene solutions onto clean glass substrates for absorption and emission measurements. The optimum ratio of each binary hybrid was used to produce the ternary hybrid. The film deposition was kept constant by taking 50 µL of each sample and spin-coating at a fixed 2000 rpm, with a deposition period of 20 s. All thin films were annealed at 120 °C for 10 min in a vacuum oven to remove the toluene. A surface profilometer (Dektak 150, Bruker, Billerica, MA, USA) was employed to measure the thickness of all thin films. The thickness of pristine PFO, MEH-PPV and MDMO-PPV-DMP was around 110, 122 and 125 nm, respectively. For the binary- and ternary-blend thin films, the thickness varied slightly in the range of 113-120 nm.
The absorption and emission spectra were collected using a UV-Vis spectrometer (JASCO V-670, Cremella, Italy) and a spectrofluorometer (JASCO FP-8200, Cremella, Italy), respectively. The emission data of each sample were used to obtain the CIE coordinates, using the OriginLab program version 2019b (Northampton, MA, USA). All measurements were performed at ambient conditions. Optical Properties The normalized optical absorption and emission spectra of the pristine PFO, MEH-PPV and MDMO-PPV-DMP thin films are shown in Figure 2a. The extent to which a polymer absorbs light of a given wavelength can be represented by its absorption coefficient; the absorption coefficients of all pristine polymer thin films are shown in Figure 2b. The lowest unoccupied molecular orbital (LUMO) and highest occupied molecular orbital (HOMO) levels of the polymers, as well as the strong overlap of the PFO emission with the absorption of both MEH-PPV and MDMO-PPV-DMP, meet the necessary conditions for Förster-type energy transfer. It can be expected that the energy transfer from PFO to MEH-PPV and from PFO to MDMO-PPV-DMP will vary with the composition of each binary hybrid thin film.
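A short sketch related to the absorption coefficients mentioned above. It assumes, as an illustration, that the coefficients in Figure 2b were obtained from the measured absorbance and film thickness via the Beer-Lambert relation for a thin film; the function name and the example numbers are hypothetical.

```python
import numpy as np

def absorption_coefficient(absorbance: np.ndarray, thickness_nm: float) -> np.ndarray:
    """Absorption coefficient alpha (cm^-1) from measured absorbance A and film
    thickness d, using the Beer-Lambert relation alpha = 2.303 * A / d."""
    thickness_cm = thickness_nm * 1e-7
    return 2.303 * absorbance / thickness_cm

if __name__ == "__main__":
    wavelengths = np.array([350.0, 380.0, 450.0, 500.0])   # nm, illustrative points
    absorbance = np.array([0.60, 0.75, 0.10, 0.05])        # hypothetical absorbance of a PFO film
    alpha = absorption_coefficient(absorbance, thickness_nm=110.0)  # ~110 nm pristine PFO film
    for wl, a in zip(wavelengths, alpha):
        print(f"{wl:5.0f} nm : alpha = {a:.2e} cm^-1")
```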
When the hybrid thin films containing a small fraction of the acceptor (MEH-PPV or MDMO-PPV-DMP) were excited at the absorption peak wavelength (355 nm) of the absorbing donor (PFO), the emission peak wavelength of the hybrid shifted towards that of the acceptor and its intensity was dramatically enhanced with increasing acceptor content, as shown in Figure 3. Moreover, Figure 4 shows that the absorption peak wavelength of the donor (~380 nm) was slightly red-shifted and its absorbance dramatically decreased upon addition of various contents of the acceptor (MEH-PPV or MDMO-PPV-DMP) in the binary hybrids, which may tune the conjugation length of the donor [24,25]. Additionally, new absorption peaks centered at 500 and 510 nm in the PFO/MDMO-PPV-DMP and PFO/MEH-PPV binary hybrids, respectively, were enhanced with increasing acceptor content, implying the possibility of dimer formation in the binary hybrids [14,26]. The absorption and fluorescence spectra of pristine MEH-PPV and MDMO-PPV-DMP (at concentrations equivalent to the weight ratios used in the binary hybrids) are presented in Figure 5.
The very weak emission intensities at these acceptor concentrations, compared with the corresponding emission intensities in the binary hybrids, give evidence of the efficient Förster-type energy transfer from PFO to both MEH-PPV and MDMO-PPV-DMP. It is very interesting to note that the hybrid containing only 2.0 wt.% of acceptor in the binary hybrids of PFO/MDMO-PPV-DMP or PFO/MEH-PPV gives an emission spectrum that is the average of the individual spectra of PFO and MEH-PPV or MDMO-PPV-DMP when excited at the absorption peak wavelength of the donor. The hybrids with more than 5 wt.% in each binary hybrid demonstrated an emission spectrum similar to that of the pure acceptor, although with a reduced full width at half maximum (FWHM). The peak PL intensity at 550 nm of the binary PFO/MDMO-PPV-DMP blend (inset of Figure 3a) was improved ~8 and 25 times compared with that of pure PFO (Figure 3a) and pure MDMO-PPV-DMP (Figure 5d), respectively, whereas the peak PL intensity at 565 nm of the binary PFO/MEH-PPV hybrid (inset of Figure 3b) was improved 12 and 42 times compared with that of pure PFO (Figure 3a) and pure MEH-PPV (Figure 5c), respectively. Therefore, an efficient energy transfer from PFO to each acceptor can be confirmed. At a high content of acceptors (≥5 wt.% for MEH-PPV and ≥3 wt.% for MDMO-PPV-DMP), significant quenching is observed instead of an enhancement of the corresponding acceptor emission peak intensity. This quenching can be attributed to the formation of heterodimers (exciplexes) at high acceptor content [27]. On the other hand, the quenching can also be attributed to active self-quenching in the hybrids, resulting from the formation of homodimers (excimers) between monomers of the acceptor [28], whose excitation energy is then converted to heat. An interesting phenomenon was detected at 10 wt.% of each acceptor (MEH-PPV or MDMO-PPV-DMP), where the PFO (donor) emission was quenched simultaneously with the acceptor emission. This complete quenching of the donor emission means that the energy transfer was complete at this acceptor ratio. Moreover, the reduction of the acceptor emission means that the energy transferred from the donor to a significant number of acceptor monomers was converted into heat rather than being emitted as fluorescence, with some of the acceptor monomers acting as dark quenchers without any fluorescence [27]. To avoid complete emission quenching, in particular from the donor, and to achieve white emission from the binary hybrids, it is of primary importance to precisely balance the ratio between the donor and the acceptors [29]. This ratio was obtained for both binary blends at 2.0 wt.% of acceptor, as shown above in Figure 3. So, with this ratio of each acceptor, a PFO/MDMO-PPV-DMP/MEH-PPV ternary hybrid with white emission can be expected. To confirm this expectation, the PL spectra of the PFO/2 wt.% MDMO-PPV-DMP and PFO/2 wt.% MEH-PPV binary hybrids and their ternary hybrid were collected and are presented in Figure 6.
The PL spectrum of the ternary hybrid with these desired ratios of the acceptors confirmed the production of white emission. Moreover, when the ternary polymers are molecularly intermixed within the Förster radius, the cascade energy transfer from PFO to MEH-PPV, and then to MDMO-PPV-DMP, can be expected. The cascade energy transfer can be confirmed by measuring the photoluminescence excitation (PLE) spectra of the ternary blend (Figure 7). When the emission at 550 nm was monitored while changing the excitation wavelength, the PLE spectra were very similar to that of the pure PFO thin film, which indicates that the emission from MEH-PPV arises mainly from the excitation of PFO. Furthermore, to confirm the energy transfer from PFO to MDMO-PPV-DMP in the ternary hybrid, the PLE spectrum at an emission wavelength of 435 nm was measured. It is also very similar to the PLE spectrum of PFO. Therefore, in the order PFO > MDMO-PPV-DMP > MEH-PPV, the cascade energy transfer is a highly convenient process. Meanwhile, part of the PFO excitation energy is transferred directly to MEH-PPV, because there is partial spectral overlap between the MEH-PPV absorption and the PFO emission. Energy Transfer Parameters Numerous parameters of the energy transfer mechanism are evaluated in this section.
The quantum yield (φDA) and lifetime (τDA) values of the donor in both binary hybrid thin films were estimated by using the emission intensity of the donor in the absence (ID) and presence (IDA) of each acceptor, where, for homogeneous dynamic quenching, φDA/φD = τDA/τD = IDA/ID [15,30]. The reduction in the values of both φDA and τDA with the addition of each of MEH-PPV and MDMO-PPV-DMP, as shown in Figure 9, suggested the occurrence of Förster-type energy transfer. Furthermore, the shorter φDA and τDA values compared with those of the pristine PFO thin film (φD = 0.72 and τD = 346 ps [31]) provided evidence of the efficient energy transfer from PFO to each of MEH-PPV and MDMO-PPV-DMP. The linear Stern-Volmer plot (Figure 10a) confirms the homogeneity of the dynamic quenching of PFO by MEH-PPV, whereas the non-linear plot (Figure 10b) confirms that both types of quenching, namely static and dynamic, of PFO by MDMO-PPV-DMP can occur [15,32]. By fitting the data of Figure 10 using the OriginLab program and comparing them with the Stern-Volmer equation, ID/IDA = 1 + kapp[A], the Stern-Volmer constant (kapp) can be estimated, where kapp equals the slope of the linear fitting in Figure 10a and is equal to (kS + kD) + kSkD[A] in the non-linear fitting of Figure 10b [15]. kS and kD are the static and dynamic quenching constants, respectively, and [A] is the acceptor concentration. According to Figure 10a, kapp = 1.173 µM−1, whereas kS and kD for the PFO/MDMO-PPV-DMP hybrid follow from the fitting data of Figure 10b.
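A minimal numerical sketch of the Stern-Volmer fits described above. It is an illustration only: the intensity ratios and concentrations below are hypothetical placeholders (not the paper's data), and SciPy is used in place of the OriginLab fitting mentioned in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

# I_D / I_DA ratios versus acceptor concentration [A]; the numbers are hypothetical
conc_uM = np.array([0.1, 0.3, 0.5, 1.0, 2.0])          # acceptor concentration (uM)
ratio   = np.array([1.12, 1.36, 1.60, 2.18, 3.35])     # I_D / I_DA (illustrative)

def sv_dynamic(a, k_app):
    """Linear Stern-Volmer: I_D/I_DA = 1 + k_app * [A] (purely dynamic quenching)."""
    return 1.0 + k_app * a

def sv_static_dynamic(a, k_s, k_d):
    """Combined static + dynamic quenching: I_D/I_DA = (1 + k_s[A]) * (1 + k_d[A])."""
    return (1.0 + k_s * a) * (1.0 + k_d * a)

k_lin, _ = curve_fit(sv_dynamic, conc_uM, ratio, p0=[1.0])
(k_s, k_d), _ = curve_fit(sv_static_dynamic, conc_uM, ratio, p0=[0.5, 0.5])

print(f"linear fit:     k_app = {k_lin[0]:.3f} uM^-1")
print(f"non-linear fit: k_S = {k_s:.3f} uM^-1, k_D = {k_d:.3f} uM^-1")
print(f"apparent constant at [A] = 1 uM: {(k_s + k_d) + k_s * k_d * 1.0:.3f} uM^-1")
```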
In order to determine the type of energy transfer between the monomers of PFO and each of MEH-PPV and MDMO-PPV-DMP, the critical transfer distance (R0) in Å was calculated from the spectral overlap integral J(λ) = ∫ FD(λ) εA(λ) λ^4 dλ through R0^6 = 8.79 × 10^−5 β^2 n^−4 φD J(λ), where J(λ) is in units of M−1 cm−1 nm4, n is the solvent refractive index, β^2 = 2/3 is the orientation factor for isotropic media, εA(λ) is the molar decadic extinction coefficient of the acceptor as a function of wavelength (λ) and FD(λ) is the normalized emission of the donor. The values of J(λ) and R0, as tabulated in Table 1, confirmed the dominant Förster type of energy transfer between the PFO monomers and each monomer of MEH-PPV and MDMO-PPV-DMP, this type being typically effective in the range of 10-100 Å [33,34]. Based on the Förster radius and the emission intensities of the donor with (IDA) and without (ID) the acceptor, the distance (RDA) between PFO and each monomer of MEH-PPV and MDMO-PPV-DMP can be estimated, as shown in Figure 11. As the content of each of MEH-PPV and MDMO-PPV-DMP increased from 0.3 to 10 wt.%, RDA decreased from 74.1 to 36.9 Å and from 67.4 to 36.6 Å, respectively. The energy-transfer rate (kET) between a single donor/acceptor pair in each binary hybrid, separated by RDA, can be expressed in terms of R0 as kET = (1/τD)(R0/RDA)^6 [12]. As shown in Figure 12, the kET values increased significantly with increasing acceptor concentration, and they were enhanced more strongly in the binary hybrid of PFO/MDMO-PPV-DMP than in PFO/MEH-PPV. This enhancement confirms the efficient energy transfer in both binary hybrids [15,30,35], which was more efficient in the PFO/MDMO-PPV-DMP hybrid than in the PFO/MEH-PPV hybrid.
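A compact sketch of the Förster analysis outlined above, assuming the conventional point-dipole formulation: J(λ) computed from an area-normalized donor emission and the acceptor extinction, R0 from R0^6 = 8.79 × 10^−5 β^2 n^−4 φD J (R0 in Å, J in M−1 cm−1 nm4), RDA recovered from the measured efficiency E = 1 − IDA/ID, and kET = (1/τD)(R0/RDA)^6. The spectra, the refractive index and the intensity ratio used here are hypothetical placeholders, not the paper's data; only φD = 0.72 and τD = 346 ps are taken from the text.

```python
import numpy as np

def overlap_integral(wl_nm, f_donor, eps_acceptor):
    """J = integral of F_D(l) * eps_A(l) * l^4 dl, F_D normalized to unit area (M^-1 cm^-1 nm^4)."""
    f_norm = f_donor / np.trapz(f_donor, wl_nm)
    return np.trapz(f_norm * eps_acceptor * wl_nm**4, wl_nm)

def forster_radius_A(J, phi_d, n=1.5, beta2=2.0 / 3.0):
    """R0 (angstrom) from R0^6 = 8.79e-5 * beta^2 * n^-4 * phi_D * J."""
    return (8.79e-5 * beta2 * phi_d * J / n**4) ** (1.0 / 6.0)

def donor_acceptor_distance_A(r0_A, efficiency):
    """R_DA from E = R0^6 / (R0^6 + R_DA^6)  =>  R_DA = R0 * ((1-E)/E)^(1/6)."""
    return r0_A * ((1.0 - efficiency) / efficiency) ** (1.0 / 6.0)

def transfer_rate(tau_d_s, r0_A, r_da_A):
    """k_ET = (1/tau_D) * (R0 / R_DA)^6."""
    return (1.0 / tau_d_s) * (r0_A / r_da_A) ** 6

if __name__ == "__main__":
    wl = np.linspace(400.0, 600.0, 201)                          # nm
    f_d = np.exp(-0.5 * ((wl - 440.0) / 25.0) ** 2)              # hypothetical donor emission band
    eps_a = 2.0e4 * np.exp(-0.5 * ((wl - 500.0) / 40.0) ** 2)    # hypothetical acceptor extinction (M^-1 cm^-1)
    J = overlap_integral(wl, f_d, eps_a)
    r0 = forster_radius_A(J, phi_d=0.72)                         # PFO quantum yield from the text
    eff = 1.0 - 0.2                                              # e.g. I_DA / I_D = 0.2 (illustrative)
    r_da = donor_acceptor_distance_A(r0, eff)
    k_et = transfer_rate(346e-12, r0, r_da)                      # tau_D = 346 ps from the text
    print(f"J = {J:.3e} M^-1 cm^-1 nm^4, R0 = {r0:.1f} A, R_DA = {r_da:.1f} A, k_ET = {k_et:.2e} s^-1")
```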
The relationship of the acceptor content with the probability (PDA) and efficiency (η) of the donor/acceptor energy transfer is illustrated in Figure 13. A gradual increase was observed for PDA with the addition of the acceptor in both binary hybrids. The greater increase in the PDA values in the PFO/MDMO-PPV-DMP hybrid can be attributed to the systematic decrease in IDA, which was larger than that of the PFO/MEH-PPV hybrid. On the other hand, we can also observe a systematic increase in η until the acceptor content reached 2 wt.%, after which it remained fixed at maximum values of 0.99 and 0.97 for the binary hybrids of PFO/MDMO-PPV-DMP and PFO/MEH-PPV, respectively. To terminate intermolecular transfer in the PFO, the MDMO-PPV-DMP and MEH-PPV concentrations should be much less than the critical concentration (A0), which is the acceptor concentration at which 76% of the energy is transferred [15]. Based on the average values of R0 [15,32], the A0 values of MDMO-PPV-DMP and MEH-PPV were estimated as ~1.73 and ~5.39 mM, respectively. On the other hand, based on the radiative (kr) and radiationless (knr) rate constants [14,15], the conjugation length (Aπ) values in the excited singlet state for both binary hybrids were estimated. No significant difference was detected in the kr value (~2.08 ns−1) with the addition of MEH-PPV or MDMO-PPV-DMP, whereas the knr values were dramatically increased (Table 1). Consequently, the Aπ of the PFO was decreased by increasing the acceptor content, more strongly in the PFO/MDMO-PPV-DMP hybrid than in the PFO/MEH-PPV hybrid.
The exponential relationship between Aπ and ΦDA (Figure 14) confirmed that the addition of MDMO-PPV-DMP or MEH-PPV produces highly fluorescent binary hybrids. Moreover, an approximately linear portion can be observed when ΦDA exceeds 0.2, with Aπ reaching zero at ΦDA ≈ 0.5, which is consistent with the theoretical results [15,26]. Tuning the Emission Colors via FRET Figure 15 shows the CIE coordinates on the color chart for all pristine conjugated polymers, binary hybrids and the ternary hybrid thin films. The CIE coordinates of pristine PFO (15 mg/mL), MEH-PPV (0.5 mg/mL) and MDMO-PPV-DMP (0.5 mg/mL) are (0.15, 0.07), (0.43, 0.37) and (0.41, 0.36), respectively, as presented in Figure 15a. Moreover, Figure 15b,c illustrates how the content of MEH-PPV and MDMO-PPV-DMP, respectively, tunes the color emission of each binary hybrid via the FRET mechanism. As the content of MEH-PPV and MDMO-PPV-DMP in their binary hybrids increased, the CIE coordinates gradually shifted towards the white region before ending in the yellow region of the color chart. This shift of the CIE coordinates confirms the efficiency of FRET in the binary hybrids, as demonstrated in the section above. To obtain white emission, 2 wt.% of each acceptor, MEH-PPV and MDMO-PPV-DMP, was incorporated into the donor PFO. As a result, white emission (CIE coordinates: x = 0.26, y = 0.33) from the ternary hybrid was observed, as demonstrated in Figure 15d. This color tuning can be attributed to the cascade of energy transfer from PFO to MEH-PPV and then to MDMO-PPV-DMP, as anticipated above.
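A small sketch of how CIE 1931 (x, y) coordinates such as those in Figure 15 can be computed from a measured emission spectrum. It assumes the CIE 1931 2° color-matching functions are available as a tabulated array (here loaded from a hypothetical text file "cie1931_cmf.txt" with columns λ, x̄, ȳ, z̄); the spectrum below is an illustrative placeholder, and the OriginLab routine used in the paper performs the equivalent integration internally.

```python
import numpy as np

def cie_xy(wl_nm, spectrum, cmf):
    """CIE 1931 chromaticity (x, y) of an emission spectrum.
    cmf: array with columns [lambda, xbar, ybar, zbar]."""
    xbar = np.interp(wl_nm, cmf[:, 0], cmf[:, 1])
    ybar = np.interp(wl_nm, cmf[:, 0], cmf[:, 2])
    zbar = np.interp(wl_nm, cmf[:, 0], cmf[:, 3])
    X = np.trapz(spectrum * xbar, wl_nm)
    Y = np.trapz(spectrum * ybar, wl_nm)
    Z = np.trapz(spectrum * zbar, wl_nm)
    total = X + Y + Z
    return X / total, Y / total

if __name__ == "__main__":
    cmf = np.loadtxt("cie1931_cmf.txt")    # hypothetical table of the 1931 color-matching functions
    wl = np.linspace(380.0, 780.0, 401)
    # illustrative two-band spectrum (blue donor band + yellow acceptor band)
    spec = np.exp(-0.5 * ((wl - 435.0) / 20.0) ** 2) + 0.9 * np.exp(-0.5 * ((wl - 555.0) / 35.0) ** 2)
    x, y = cie_xy(wl, spec, cmf)
    print(f"CIE coordinates: x = {x:.2f}, y = {y:.2f}")
```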
Conclusions Tuning the color emission of the hybrid thin films and producing white emission were achieved by a systematic doping strategy via FRET. A systematic increase in the FRET efficiency was observed with increasing acceptor content until it reached 2 wt.%, after which the efficiency remained fixed at maximum values of 0.99 and 0.97 for the binary hybrids of PFO/MDMO-PPV-DMP and PFO/MEH-PPV, respectively. The quenching of emission in the PFO/MEH-PPV binary hybrid was dynamic, while it was both static and dynamic in PFO/MDMO-PPV-DMP. To terminate the intermolecular transfer in the PFO, the concentrations of MDMO-PPV-DMP and MEH-PPV should be much less than ~1.73 and ~5.39 mM, respectively. White emission (CIE coordinates: x = 0.26, y = 0.33) from the ternary hybrid was produced at the optimal ratio of PFO/2 wt.% MEH-PPV/2 wt.% MDMO-PPV-DMP.
7,781.2
2021-11-01T00:00:00.000
[ "Materials Science" ]
Marked Seasonal Variation in Structure and Function of Gut Microbiota in Forest and Alpine Musk Deer Musk deer (Moschus spp.) is a globally endangered species due to excessive hunting and habitat fragmentation. Captive breeding of musk deer can efficiently relieve the hunting pressure and contribute to the conservation of the wild population and musk supply. However, its effect on the gut microbiota of musk deer is unclear. Recent studies have indicated that gut microbiota is associated with host health and its environmental adaption, influenced by many factors. Herein, high-throughput sequencing of the 16S rRNA gene was used based on 262 fecal samples from forest musk deer (M. berezovskii) (FMD) and 90 samples from alpine musk deer (M. chrysogaster) (AMD). We sought to determine whether seasonal variation can affect the structure and function of gut microbiota in musk deer. The results demonstrated that FMD and AMD had higher α-diversity of gut microbiota in the cold season than in the warm season, suggesting that season change can affect gut microbiota diversity in musk deer. Principal coordinate analysis (PCoA) also revealed significant seasonal differences in the structure and function of gut microbiota in AMD and FMD. Particularly, phyla Firmicutes and Bacteroidetes significantly dominated the 352 fecal samples from captive FMD and AMD. The relative abundance of Firmicutes and the ratio of Firmicutes to Bacteroidetes were significantly decreased in summer than in spring and substantially increased in winter than in summer. In contrast, the relative abundance of Bacteroidetes showed opposite results. Furthermore, dominant bacterial genera and main metabolic functions of gut microbiota in musk deer showed significant seasonal differences. Overall, the abundance of main gut microbiota metabolic functions in FMD was significantly higher in the cold season. WGCNA analysis indicated that OTU6606, OTU5027, OTU7522, and OTU3787 were at the core of the network and significantly related with the seasonal variation. These results indicated that the structure and function in the gut microbiota of captive musk deer vary with seasons, which is beneficial to the environmental adaptation and the digestion and metabolism of food. This study provides valuable insights into the healthy captive breeding of musk deer and future reintroduction programs to recover wild populations. INTRODUCTION Musk deer (Moschus spp.), the only extant genus of the family Moschidae, consists of seven species and are widely found in forests and mountains in Asia (Yang et al., 2003;Jiang et al., 2020). China has the largest musk deer population and musk source globally . Six species of genus Moschus are found in China, among which the forest musk deer (FMD, M. berezovskii) and the alpine musk deer (AMD, M. chrysogaster) are the most widely distributed and have the highest wild and captive population (Fan et al., 2018). They inhabit high-altitude coniferous forests or broad-leaved forests (Wang and Harris, 2015), alpine shrub meadow, and mountain forest grassland zone (Harris, 2016) in central and southwestern China, with some overlapping areas. However, wild musk deer populations plummeted to the brink of extinction in the 20th century due to illegal hunting, habitat fragmentation, and other human activities (Wu and Wang, 2006;Cai et al., 2020). 
The International Union for Conservation of Nature (IUCN) Red List (Wang and Harris, 2015;Harris, 2016) and red list of China's vertebrates (Jiang et al., 2016) have listed both the species as endangered (EN) and critically endangered (CR), respectively. The captivity breeding of the FMD and AMD began in China in the 1960s to curb the rapid decline of the musk deer population by reducing the pressure on hunting wild musk deer to some extent (Fan et al., 2019). Captive individuals can also serve as a rewilding resource for reintroduction, which is beneficial for effective conservation and population recovery of wild musk deer. However, gastrointestinal diseases occur in the captive FMD and AMD with a fatality rate of about 30%, especially in winter and autumn . Diseases are the most significant constraints on population growth, breeding scale, and musk secretion (Zhao et al., 2011;Zhang et al., 2019). Gut microbiota has a close mutualistic symbiotic relationship with their hosts during long-term coevolution and is essential in organisms (Nicholson et al., 2012). The complex and variable micro-ecosystem of gut microorganisms were involved in metabolism, immune regulation, intestinal development promotion, pathogen defense, and other physiological activities Yoo et al., 2020). Many factors, such as host genetics, diet, season, age, and lifestyle, influence the composition and function of gut microbiota (Zhang et al., 2010;Claesson et al., 2012;O'Toole and Jeffery, 2015). For instance, changes in dietary composition rapidly alter the composition and abundance of gut microbiota (Zmora et al., 2019). Seasonal changes in food composition and availability change the structure and function of gut microbiota in many animals (Amato et al., 2015;Xue et al., 2015). Moreover, captive breeding and ex situ conservation are effective for the conservation of endangered species. Long-term captive breeding has significantly changed the dietary composition of musk deer compared with wild populations (Guo et al., 2019), resulting in changes in the composition and function of gut microbiota (Sun et al., 2020). Therefore, studying the diversity of gut microbiota of endangered species in captivity is essential for assessing the current captive conditions, understanding the appropriate capacity of changes in gut microbiota for their future, and evaluating whether they can be released into the wild. However, the relationship between the diversity, structure, and function of gut microbiota of captive musk deer of different ages and seasonal changes is unclear due to the inadequate relevant data. In this study, combined with weighted gene co-expression network analysis (WGCNA), the 16S rRNA gene amplicon technology was used for high-throughput sequencing on the Illumina MiSeq sequencing platform to analyze the fecal microbial composition, diversity, and function in captive FMD and AMD in different seasons. This study aimed to (i) explore the composition and differences in the gut microbiota of both musk deer in different seasons; (ii) analyze the core microbiota and its metabolic functions, and their seasonal difference; and (iii) construct a weighed co-occurrence network of gut microbiota for identifying modules of co-occurring taxa and hub OTUs significantly related with the seasonal variation. Therefore, this study can provide a scientific basis for the effective management of captive musk deer. 
Sample Collection A non-invasive sampling method was used to collect 262 fresh fecal samples of captive FMD in Aru Township, Qilian County, Qinghai Province, China (100°21′ E, 38°7′ N), during the early spring (mid-March, here referred to as T1, N = 49), late spring (late May, here referred to as T2, N = 57), summer (mid-July, here referred to as T3, N = 56), autumn (mid-November, here referred to as T4, N = 50), and winter (late December, here referred to as T5, N = 50). The FMD breeding center was located northeast of the Qinghai-Tibet Plateau, with an altitude, annual average temperature, and precipitation of about 3,002 m, −0.1 °C, and 403 mm, respectively, and with a large daily temperature range and high radiation intensity. The maximum and minimum temperatures occurred in July and January, respectively. Furthermore, 90 fecal samples of captive AMD were collected during the late spring (late May, here referred to as T2, N = 55) and winter (late December, here referred to as T5, N = 35) in Xinglong Mountain, Gansu Province (104°4′ E, 35°49′ N). The AMD breeding center was located northeast of the Qinghai-Tibet Plateau, with an altitude, annual average temperature, and precipitation of 2,171 m, 5.4 °C, and 406 mm, respectively. The FMD and AMD breeding centers were cleaned the night before sampling, and each individual was kept in a separate enclosure so that the fresh feces of each individual could be collected the following morning. Disposable PE gloves were used to collect fresh feces and put them into sterile self-sealed bags to prevent contamination of the feces surface. All samples were temporarily stored in a −20 °C vehicle-mounted refrigerator and later stored in a −80 °C ultra-low temperature refrigerator in the laboratory for DNA extraction. DNA Extraction and 16S rRNA Gene Sequencing The E.Z.N.A.® soil DNA Kit (Omega Biotek, Norcross, GA, United States) was used to extract total bacterial DNA from FMD and AMD feces. The extraction quality of the DNA was assessed using 1% agarose gel electrophoresis. A NanoDrop 2000 (Thermo Fisher Scientific, Waltham, MA, United States) was used to determine the concentration and purity of the DNA. The V4-V5 variable region of the 16S rRNA gene was amplified via PCR using primers 515F (5′-GTGCCAGCMGCCGCGG-3′) and 907R (5′-CCGTCAATTCMTTTRAGTTT-3′). Each sample had three PCR replicates. The PCR was run in a reaction volume of 20 µl, containing 4 µl TransStart FastPfu buffer (5×), 2 µl dNTPs (2.5 mM), 0.8 µl each of forward and reverse primers (5 µM), 0.4 µl TransStart FastPfu DNA Polymerase, and 10 ng sample DNA, topped up with ddH2O. Throughout the PCR amplification, ultrapure water was used instead of a sample solution as a negative control to eliminate the possibility of false-positive PCR results. The PCR amplification procedure was as follows: 95 °C for 3 min (initial denaturation), followed by 27 cycles at 95 °C for 30 s (denaturing), 55 °C for 30 s (annealing), and 72 °C for 45 s (extension), and a final single extension at 72 °C for 10 min. The PCR products of the same sample were mixed, and a 2% agarose gel was then used to recover the PCR products. The AxyPrep DNA Gel Extraction Kit (Axygen Biosciences, Union City, CA, United States) was used to purify the recovered products. The recovered products were checked using 2% agarose gel electrophoresis and quantified using a Quantus™ Fluorometer (Promega, United States). The NEXTFLEX Rapid DNA-Seq Kit (Bioo Scientific, Austin, TX, United States) was used to build the library according to the manufacturer's protocols.
Then, the purified PCR-amplified fragments were pooled in equal concentrations and sequenced on the Illumina MiSeq PE300 platform (San Diego, CA, United States). Determination of OTU and Taxonomy Assignments The raw data were pre-processed to remove the known adaptor, specific primers and low-quality ends with Trimmomatic (version 0.39, PE-phred33 ILLUMINACLIP:2:30:10 TRAILING:20 MINLEN:50 SLIDINGWINDOW:50:20) (Bolger et al., 2014). We set a 50-bp sliding window and trimmed off sequences with an average base quality <20. These low-quality reads included reads with >10 nucleotides aligned to the adapter sequences, those with unidentified nucleotide (N) sequences, and those with read lengths below 50 bp. The forward and reverse reads were then merged using FLASH with a minimum overlap of 10 bp and a maximum mismatch of 0.2 (version 1.2.7, -m 10 -x 0.2) (Magoč and Salzberg, 2011). Chimera reads were removed, and operational taxonomic units (OTUs) were clustered at 97% nucleotide sequence similarity using UPARSE (version 7.1) 1 (Costello et al., 2009). The highest-frequency sequence in each OTU was selected as the representative sequence for further annotation. The reference sequence annotation and curation pipeline (RESCRIPt) was used to prepare a compatible Silva 138/16S bacteria database, based on the curated SILVA SSURef NR99 (version 138) database 2 , following the protocol suggested by the authors 3 . The Ribosomal Database Project (RDP) classifier (version 2.11) 4 was used to classify the representative OTU sequences against the Silva 138/16S bacteria database at a confidence threshold of 0.8. Taxonomy-based filtering was applied to remove all features assigned to mitochondria, chloroplasts, or archaea, and the results were aligned to generate the OTU table. Bioinformatic Analysis Operational taxonomic units represented by fewer than five sequences in any two samples, as well as those with a total abundance (summed across all samples) of less than 10, were also filtered out. The OTU table was then rarefied to the lowest number of reads across samples (49,615 for FMD and 50,680 for AMD) before downstream analysis. The abundance of each annotated OTU in each sample was counted, and OTUs were defined based on the minimum number of sequences per sample. Community bar charts were used to plot the relative abundance of bacteria in each fecal sample of musk deer at the phylum and genus levels with R software (version 3.3.1, package "stats"). Venn charts were used to analyze the core and unique bacterial phyla and genera in different seasons with the R software. A cluster heat map was used to compare the composition in different seasons with the R software (package "pheatmap") (Perry, 2016). Alpha diversity was used to reflect the diversity of the intestinal microbial composition. The Sobs index (the observed richness) and the Shannon index were used to analyze the diversity of the gut microbial composition at the OTU level using Qiime software 5 (Caporaso et al., 2010). The Wilcoxon rank-sum test was used to analyze the seasonal significance of the two alpha diversity indexes with the R software (package "stats"). The bacterial diversity among different microbial communities was compared and analyzed to explore the community composition among different seasonal groups. Principal coordinate analysis (PCoA) was used to analyze the beta diversity among different groups with the R software (package "vegan").
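The α-diversity step described above can be sketched in a few lines; the distance-based (β-diversity) analysis continues below. This is an illustration only, not the QIIME/R pipeline used in the study: it assumes a rarefied OTU count table with samples as rows (here a random placeholder), computes the observed richness (Sobs) and the Shannon index per sample, and compares two seasonal groups with a Wilcoxon rank-sum (Mann-Whitney U) test via SciPy.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def sobs(counts):
    """Observed richness: number of OTUs with non-zero counts in a sample."""
    return int(np.count_nonzero(counts))

def shannon(counts):
    """Shannon index H' = -sum(p_i * ln p_i) over OTUs present in the sample."""
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # placeholder rarefied OTU table: 20 samples x 300 OTUs (first 10 "summer", last 10 "winter")
    otu_table = rng.poisson(lam=3.0, size=(20, 300))
    shannon_vals = np.array([shannon(row) for row in otu_table])
    summer, winter = shannon_vals[:10], shannon_vals[10:]
    stat, p = mannwhitneyu(summer, winter, alternative="two-sided")
    print(f"median Shannon (summer) = {np.median(summer):.2f}, (winter) = {np.median(winter):.2f}")
    print(f"Wilcoxon rank-sum (Mann-Whitney U): U = {stat:.1f}, p = {p:.3f}")
```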
Bray-Curtis was used to calculate the distance between samples at the OTU level. Analysis of similarities (ANOSIM), a non-parametric statistical test, was used for the intergroup difference test with the R software (packages "vegan, " anosim function) (Oksanen et al., 2019). The metabolic functions of bacterial communities were predicted using PICRUSt (Phylogenetic Investigation of Communities by Reconstruction of Unobserved States) software based on the KEGG (Kyoto Encyclopedia of Genes and Genomes) and EggNOG (Evolutionary Genealogy of Genes: Non-supervised Orthologous Groups) databases with the OTU species annotation and abundance information (Langille et al., 2013). Moreover, the Wilcoxon rank-sum test was used to analyze the seasonal differences of metabolism-related functions and dominant bacteria in all groups. Weighted gene co-expression network analysis is a system biology method originally conceived for describing correlation patterns among genes across microarray data, which is widely used to identify critical hub genes of biological processes by constructing gene co-expression networks (Saris et al., 2009). Currently, it has been also applied in gastric microbiome networks and microbial modules (Park et al., 2019). We used WGCNA analysis to identify modules of co-occurred taxa and relate these modules to traits of musk deer (season, age, and gender). R package WGCNA (version 1.70-3) 6 was used to perform unsigned WGCNA analysis (Langfelder and Horvath, 2008). The weighted correlation network analysis was performed to cluster OTUs (van Dam et al., 2018). The soft thresholding power of 8 was chosen based on the criterion of approximate scale-free topology as well as the mean connectivity lower than 100 (Morandin et al., 2016). To divide highly co-occurred OTUs into several module members, hierarchical clustering using the dynamic tree cut method with DeepSplit of 3 was performed to create a hierarchical clustering tree of OTUs as a dendrogram (Do et al., 2017). Module eigengenes were calculated using the moduleEigengenes() function in the WGCNA R package to demonstrate the correlation of the module eigenvalue and OTU abundance profile . Then, we identified potential modules and OTUs associated with season, age, and gender. The network was generated to visualize the correlations among OTUs in the module associated with season . The connection weight values (range from 0 to 1) were calculated using the intramodularConnectivity() function in the WGCNA R package to indicate the strength of co-regulation between taxa. Additionally, Cytoscape software (version 3.8.2) 7 was used for network visualization, with Cytohubba plugin analyzing and extracting the hub OTUs (Kavarthapu et al., 2020). Assessment of Sequence Data A total of 36,862,468 (140,696 reads/sample) and 12,912,306 (143,470 reads/sample) high-quality clean reads were obtained in FMD and the AMD samples, respectively. The rarefaction curves of sobs and Shannon indexes at the OTU levels became gradually placid as the sequencing depth increased (Supplementary Figure 1). The results demonstrated that each fecal sample had sufficient OTUs to reflect the maximum level of bacterial diversity, which indicated a sufficient sequencing depth. A total of 3,548 and 2,259 OTUs were identified in FMD and AMD, respectively. 
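A compact sketch of the β-diversity step described above: Bray-Curtis dissimilarities between samples followed by classical principal coordinate analysis (PCoA) via double-centering and eigendecomposition. It uses only NumPy/SciPy on a placeholder OTU table and mirrors, in simplified form, what the R vegan package does; it is an illustration, not the study's code.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def pcoa(dist_matrix, n_axes=2):
    """Classical PCoA: double-center the squared distance matrix and take the
    leading eigenvectors scaled by the square roots of their eigenvalues."""
    d2 = dist_matrix ** 2
    n = d2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ d2 @ J
    evals, evecs = np.linalg.eigh(B)
    order = np.argsort(evals)[::-1]            # largest eigenvalues first
    evals, evecs = evals[order], evecs[:, order]
    pos = np.clip(evals, 0.0, None)
    coords = evecs[:, :n_axes] * np.sqrt(pos[:n_axes])
    explained = pos / pos.sum()
    return coords, explained[:n_axes]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    otu_table = rng.poisson(lam=2.0, size=(12, 150)).astype(float)   # placeholder: 12 samples x 150 OTUs
    bc = squareform(pdist(otu_table, metric="braycurtis"))           # Bray-Curtis distances
    coords, explained = pcoa(bc, n_axes=2)
    print("PC1/PC2 variance explained:", np.round(explained, 3))
    print("sample coordinates (first 3 samples):\n", np.round(coords[:3], 3))
```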
At a 97% sequence identity threshold, the OTUs of the FMD were classified into 20 phyla, 35 classes, 93 orders, 171 families, and 404 genera, while those of the AMD were classified into 17 phyla, 27 classes, 66 orders, 125 families, and 300 genera. Seasonal Variation in α Diversity The Good's coverage index of captive FMD and AMD in different seasons was higher than 99%. Both the Sobs and Shannon indexes were used to reflect the diversity of the gut microbiota of musk deer in different seasons. For captive juvenile FMD, the α diversity of the gut microbiota was significantly lower in early spring than in other periods (p < 0.05), and it was higher in winter than in other periods (Figures 1G,H). For adult FMD, the α diversity was significantly higher in winter than in other periods (p < 0.05) and lower in summer than in other periods. Overall, the α diversity of the gut microbiota in captive FMD was higher in winter and autumn than in spring and summer. For captive adult AMD, the α diversity of the gut microbiota was significantly higher in winter than in late spring (p < 0.05) (Figures 1G,H). Besides, the α diversity of the gut microbiota was higher in winter, although not significantly, for both juvenile and older AMD. FIGURE 1 | Gut microbial composition of the FMD and AMD: relative abundance of dominant phyla in the FMD (A) and the AMD (B); heat map analysis based on the top 30 identifiable bacterial genera for the FMD (C) and the AMD (D), where red, blue, light green, black, and dark green letters represent the phyla Firmicutes, Bacteroidetes, Planctomycetes, Proteobacteria, and Spirochaetes, respectively; analysis of core and unique bacteria of the FMD (E) and AMD (F) at the phylum and genus levels via Venn plots; seasonal variation of α diversity in the gut microbiota of the musk deer based on the Sobs index (G) and Shannon index (H). *p < 0.05 (Wilcoxon rank-sum test), **p < 0.01, ***p < 0.001; ns, not significant. Seasonal Variation in β Diversity The Bray-Curtis distance algorithm was used to calculate the distance between fecal samples of captive musk deer in different seasons. ANOSIM analysis was used to test whether the intergroup differences were significantly greater than the intragroup differences in different seasons. The PCoA analysis showed that the R values were all greater than 0 (p = 0.001). Besides, there were significant seasonal differences in the gut microbial composition of the FMD, and the intergroup differences were significantly greater than the intragroup differences (Figures 2A-C). The differences between the summer and winter groups (R = 0.3078, p = 0.0010) and the autumn and winter groups (R = 0.3254, p = 0.001) were relatively high. The gut microbial composition of captive adult AMD was significantly different between the winter and late spring groups (R = 0.1332, p = 0.016). Moreover, the intergroup difference was significantly greater than the intragroup difference (Figure 2E). However, there was no significant seasonal difference in the composition of the gut microbiota in juvenile (R = 0.0392, P = 0.272) or older (R = −0.0067, P = 0.512) AMD (Figures 2D,F). FIGURE 2 | PCoA analysis between late spring and winter groups in juvenile (D), adult (E), and older (F) AMD. Analysis of Seasonal Difference of Dominant Bacteria The Wilcoxon rank-sum test was used to analyze the significant seasonal differences (Figures 2, 3). For both juvenile and adult FMD, the
PCoA analysis between late spring and winter groups in juveniles (D), adults (E), and older (F) AMD. relative abundance of Firmicutes was significantly higher in winter than in other periods (p < 0.05) (Figure 3A), while that of the Bacteroidetes showed the opposite result (p < 0.05) ( Figure 3B). The relative abundance of Firmicutes in summer was lower than that in other periods, while that of Bacteroidetes showed the opposite result. Overall, the relative abundance of Firmicutes in the FMD was higher in autumn, winter, and early spring than in late spring and summer. In contrast, that of Bacteroidetes showed the opposite result (Figures 3A,B). Moreover, for captive AMD of different ages, the relative abundance of Firmicutes was higher in winter than in late spring, while that of Bacteroidetes showed the opposite result. Particularly, there were significant seasonal differences among adult AMD (p < 0.05). The analysis of the ratio of Firmicutes to Bacteroidetes (F/B ratio) showed that the ratio was significantly higher in winter than in other periods for both juvenile and adult FMD (p < 0.05) (Figures 3C,D). Overall, the F/B ratio significantly decreased in summer than in spring and increased in winter than in summer. Similarly, for captive AMD of different ages, the F/B ratio was higher in winter than in late spring, and there were significant seasonal differences among adult AMD (p < 0.05) (Figure 3E). The seasonal difference analysis indicated that nine dominant identifiable bacteria genera in juvenile and adult FMD had significant seasonal differences. The relative abundances of the genera Christensenellaceae R7 group and Ruminococcus were significantly higher in winter and early spring than in other periods. The relative abundances of the genera UCG-005 and Monoglobus were significantly higher in summer than in other periods. The relative abundance of genus Alistipes was significantly higher in late spring and summer than in other periods (Figures 3F,G). Moreover, among the nine dominant bacteria genera in AMD, genera Bacteroides, UCG-005, Alistipes, and Anaerostipes showed significant seasonal differences. The relative abundances of genera UCG-005 and Alistipes were significantly higher in winter than in late spring, while those of Bacteroides and Anaerostipes showed opposite results ( Figure 3H). Functional Prediction Analysis The functional prediction analysis in the KEGG database showed that gut microbiota in captive FMD and AMD were mainly involved in carbohydrate metabolism, amino acid metabolism, energy metabolism, metabolism of cofactors and vitamins, metabolism of cofactors and vitamins, translation, and replication and repair. The EggNOG database showed that 17 functions had high relative abundance. The KEGG database showed that the four major metabolismrelated functions of captive FMD had seasonal differences. Overall, those functions were significantly lower in summer than in other periods (p < 0.05) and were relatively higher in spring than in other periods ( Figure 4A). The main metabolism-related function of captive AMD was higher in late spring than in winter. Also, the energy metabolic function had significant seasonal differences (p < 0.05) (Figure 4C). The EggNOG database showed that six major metabolismrelated functions also had seasonal differences. Overall, the functions were relatively higher in spring than in other periods. 
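The F/B-ratio and ordination comparisons summarized above can be sketched in a few lines of R; the phylum-level table, OTU table, and season labels below are synthetic stand-ins for the study's data, and all object names are hypothetical.

```r
library(vegan)
set.seed(42)

# Synthetic stand-ins: per-sample relative abundances of the two dominant phyla,
# an OTU count table, and season labels (all hypothetical values)
season <- factor(rep(c("winter", "summer"), each = 10))
phy <- data.frame(Firmicutes    = c(runif(10, 0.55, 0.75), runif(10, 0.40, 0.55)),
                  Bacteroidetes = c(runif(10, 0.15, 0.30), runif(10, 0.30, 0.45)))
otu <- matrix(rpois(20 * 50, lambda = 20), nrow = 20)   # 20 samples x 50 OTUs

# Firmicutes/Bacteroidetes (F/B) ratio and its seasonal comparison (Wilcoxon rank-sum test)
fb_ratio <- phy$Firmicutes / phy$Bacteroidetes
wilcox.test(fb_ratio[season == "winter"], fb_ratio[season == "summer"])

# PCoA (classical MDS) on Bray-Curtis distances, as used for the beta-diversity ordination
bc   <- vegdist(otu, method = "bray")
pcoa <- cmdscale(bc, k = 2, eig = TRUE)
plot(pcoa$points, col = season, xlab = "PCoA1", ylab = "PCoA2")
```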
All the major metabolic functions of the gut microflora in captive FMD were significantly lower in summer than in other periods except for the lipid transport and metabolism function (p < 0.05) (Figure 4B). Similarly, all the major metabolic functions in captive AMD were higher in late spring than in winter except for the amino acid transport and metabolism function (Figure 4D). FIGURE 4 | Gut microbial function prediction and seasonal difference analysis based on the KEGG and EggNOG databases. Seasonal difference analysis of the main functions of the gut microbiota of the FMD based on the KEGG database (A) and EggNOG database (B). Seasonal difference analysis of the main functions of the gut microbiota of the AMD based on the KEGG database (C) and EggNOG database (D). *p < 0.05 (Wilcoxon rank-sum test), **p < 0.01, and ***p < 0.001. ns, not significant. The Hub OTUs Fluctuated With the Season Using WGCNA Analysis Baseline characteristics of the samples included in the weighted correlation network analysis were as follows. A total of 262 FMD fecal samples and 3,493 OTUs were included in the study, with a mean age of about 2.45 and a proportion of males of about 75.19%. The individuals sampled in the early spring (T1), late spring (T2), summer (T3), autumn (T4), and winter (T5) periods accounted for 18.70, 21.76, 21.37, 19.08, and 19.08%, respectively. The sample dendrogram and trait heat map are shown in Supplementary Figure 4. The network was constructed using one-step network construction. The networkType was set to signed and the soft-threshold power to 8 to define the adjacency matrix, based on the criterion of approximate scale-free topology (Supplementary Figure 5), with a minimum module size of 30, the module detection sensitivity DeepSplit set to 3, and a cut height for merging of modules of 0.25, which means that modules whose eigengenes are correlated above 0.75 will be merged (Supplementary Figure 6). A total of 3,493 OTUs were parsed into 13 different color modules. Among these 13 modules, the gray module indicated unassigned bacterial taxa. In the dendrogram, each leaf, represented as a short vertical line, corresponded to a bacterial taxon. Densely interconnected branches of the dendrogram represented highly co-occurring bacterial taxa (Supplementary Figure 7). The correlation between module eigenvalue and trait was calculated. The module-trait relationship heat map demonstrated the correlation coefficients between module eigenvalues and traits (−1 to 1). The black module was significantly correlated with the five different periods simultaneously (p < 0.01) and was not correlated with the other traits (age and gender) (Figure 5A). Therefore, the black module was selected for subsequent analysis. Furthermore, the OTUs in the top 10% of global importance [i.e., with Ktotal (global network connectivity) values in the top 10%] in the WGCNA were extracted, and the network diagram was constructed in Cytoscape (715 nodes and 9,447 edges). The results showed that the black module was relatively independent and had low correlation with other modules (Figure 5B). The black module had the highest correlation with seasonal change, and 14 of the global hub OTUs were located in the black module, which was significantly related to seasonal changes (Figure 5C). OTUs with a connectivity weight value greater than 0.02 in the black module were extracted to construct the network diagram of the black module (Figure 5D). The thickness of the gray lines represented the co-occurrence strength between OTUs.
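The network-construction and module-trait steps described above map onto a handful of WGCNA function calls. The sketch below is illustrative only: the abundance matrix and trait table are synthetic stand-ins, and the parameter values simply mirror those quoted in the text (soft-threshold power 8, minimum module size 30, DeepSplit 3, merge cut height 0.25).

```r
library(WGCNA)
set.seed(3)

# Synthetic stand-ins: an abundance matrix (samples x OTUs) and a numeric trait table
# (period, age, gender); all values and dimensions here are hypothetical.
otu_mat <- matrix(rnorm(60 * 200), nrow = 60)
traits  <- data.frame(period = rep(1:5, each = 12), age = sample(1:8, 60, TRUE),
                      gender = rbinom(60, 1, 0.75))

# Soft-threshold selection (the text quotes a power of 8 chosen for scale-free topology)
sft <- pickSoftThreshold(otu_mat, powerVector = 1:20, networkType = "signed")

# One-step module construction with the parameter values quoted in the text
net <- blockwiseModules(otu_mat, power = 8, networkType = "signed",
                        minModuleSize = 30, deepSplit = 3,
                        mergeCutHeight = 0.25,   # merge modules with eigengene correlation > 0.75
                        numericLabels = FALSE)

# Module eigengenes and module-trait correlations (cf. the heat map in Figure 5A)
MEs <- moduleEigengenes(otu_mat, colors = net$colors)$eigengenes
moduleTraitCor <- cor(MEs, traits, use = "p")

# Intramodular connectivity (Ktotal/Kwithin) used to rank candidate hub OTUs
adj  <- adjacency(otu_mat, power = 8, type = "signed")
conn <- intramodularConnectivity(adj, colors = net$colors)
head(conn[order(-conn$kWithin), ])
```

In this sketch, the OTUs with the largest kWithin inside the season-associated module play the role of the hub candidates of the kind the study extracted with CytoHubba in Cytoscape.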
The size of an OTU node represented the Kwithin (intramodular connectivity) value, that is, the connectivity between the node and the rest of the black module. The CytoHubba plugin was further used to extract and identify the top four hub OTUs, OTU6606, OTU5027, OTU7522, and OTU3787 (Figure 5E), which belong to the order Oscillospirales in the phylum Firmicutes; the first three hub OTUs belong to the genus UCG-005 of the family Oscillospiraceae. DISCUSSION The normal steady-state gut microbiota is closely related to the health of the host and coevolves in a mutualistic relationship with the host through complex interactions. Its unique community structure and metabolites are essential for regulating host metabolism, growth, and development, resistance to pathogens, immune regulation, adaptation, evolution, etc. (Bäckhed et al., 2005; Engel et al., 2012; Ezenwa et al., 2012; Zhang et al., 2016). This study systematically and comprehensively analyzed the seasonal differences of gut microbiota in captive FMD and AMD at different ages. The results indicated that the FMD and AMD shared seven dominant bacterial genera: Bacteroides, Christensenellaceae R7 group, UCG-005, Rikenellaceae RC9 gut group, Alistipes, Ruminococcus, and Prevotellaceae UCG 004. Current studies have shown that the genus Christensenellaceae R7 group is widely found in the intestinal tract and mucosa of the host, is essential for good health, and is involved in amino acid and lipid metabolism (Waters and Ley, 2019). The genus Ruminococcus is involved in the degradation of cellulose and hemicellulose in the rumen of ruminants (Matulova et al., 2008; La Reau et al., 2016). It produces several cellulases and hemicellulases that convert dietary fiber into various nutrients (La Reau and Suen, 2018). It is also involved in food digestion and carbohydrate metabolism in ruminants. The genus Prevotellaceae UCG 004 is involved in the degradation of polysaccharides and the production of short-chain fatty acids (SCFAs) (Heinritz et al., 2016). The families Christensenellaceae (to which the genus Christensenellaceae R7 group belongs) and Ruminococcaceae (to which Ruminococcus belongs) and the genus Alistipes are associated with immune regulation and healthy homeostasis and are regarded as potentially beneficial bacteria (Kong et al., 2016; Shang et al., 2016; Wang et al., 2018). Among them, the genera Christensenellaceae R7 group and Alistipes are classified as potential biomarkers for intestinal diseases (Crohn's disease, colorectal cancer, Clostridium difficile infection, etc.) (Mancabelli et al., 2017). Moreover, the genera Bacteroides, Alistipes, and Rikenellaceae RC9 gut group belong to the phylum Bacteroidetes. The genus Bacteroides can improve the metabolism of the host, mainly by participating in the metabolism of bile acid, protein, and fat, and by regulating carbohydrate metabolism. The genus Alistipes is involved in the metabolism of SCFAs. Bacteroides and Alistipes are bile-tolerant microorganisms (David et al., 2014). High-fat animal feed can increase the relative abundance of the genera Bacteroides and Alistipes (Wan et al., 2019), thus improving lipid metabolism by regulating acetic acid production (Yin et al., 2018). The genus Rikenellaceae RC9 gut group also promotes lipid metabolism (Zhou et al., 2018).
Furthermore, the phyla Firmicutes and Bacteroidetes dominated the 352 fecal samples from captive musk deer in different seasons (relative abundance of more than 90%), consistent with other studies (Zhao et al., 2019; Fountain-Jones et al., 2020). Firmicutes and Bacteroidetes are the dominant core bacteria in the rumen of ruminants, with the highest relative abundance. They are involved in essential processes, such as food digestion, nutrient regulation and absorption, energy metabolism, and defense against invasion by foreign pathogens in the gastrointestinal tract of hosts (Jewell et al., 2015; Guan et al., 2017; Wang et al., 2017). Firmicutes promote fiber degradation in food and convert cellulose into volatile fatty acids, enhancing food digestion as well as growth and development. Herein, among the top 30 bacterial genera of the gut microbiota, 23 and 19 were Firmicutes in the FMD and AMD, respectively. Besides, most were associated with carbohydrate metabolism and cellulose digestion and absorption. Bacteroidetes mainly promote the digestion and absorption of proteins and carbohydrates in the food, thus promoting the development of the gastrointestinal immune system. Among the top 30 identifiable bacterial genera, five and eight were Bacteroidetes in the FMD and AMD, respectively, of which most were involved in lipid metabolism. The nutrient level of animal feed directly affects the abundance of Firmicutes and Bacteroidetes. Herein, Firmicutes and Bacteroidetes were the core dominant microflora in the FMD and AMD across seasons and ages. In addition, the relative abundances of Firmicutes and Bacteroidetes and their ratio had significant seasonal differences. The relative abundance of Firmicutes and the F/B ratio were higher in the cold season than in the warm season. This is beneficial for captive FMD and AMD in adapting to the cold season, as it promotes the decomposition of cellulose and hemicellulose in feed, carbohydrate metabolism, and nutrient digestion and absorption. The FMD and AMD are typical ruminants. Wild individuals feed on various high-fiber leaves, while captive individuals feed on concentrated animal feed and roughage feed. The former mainly includes carrot (Daucus carota), potato (Solanum tuberosum), maize (Zea mays), soybean (Glycine max), etc., and the latter includes fresh leaves (warm season) or dry leaves (cold season). Core and dominant bacterial genera are essential in the food digestion, nutrient absorption, and energy metabolism of captive FMD and AMD. Herein, the relative abundances of the genera Christensenellaceae R7 group, Ruminococcus, and Prevotellaceae UCG 004 (Firmicutes) were higher in the cold season than in the warm season. In contrast, the relative abundance of the genus Bacteroides (Bacteroidetes) showed the opposite result, indicating that the seasonal differences of dominant bacterial genera are closely associated with the composition of the animal feed in different periods. Principal coordinate analysis showed that both adult and juvenile musk deer had significant seasonal differences, indicating that the gut microbial structure and composition of FMD and AMD differ significantly across seasons, consistent with other studies (Amato et al., 2015; Maurice et al., 2015; Trosvik et al., 2018). The seasonal difference is closely related to food resources, dietary structure, nutrient utilization, and feeding pattern (Xue et al., 2015; Wu et al., 2017; Kartzinel et al., 2019).
The Wilcoxon rank-sum test showed that the α diversity of the gut microbiota had significant seasonal differences in captive FMD and AMD. Overall, the α diversity of the gut microbiota in musk deer was higher in winter and autumn than in spring and summer. Previous studies have shown that higher α diversity leads to a more complex and stable intestinal microbiota composition, enhancing resistance to external interference and adaptability, which is beneficial to host health (Stoffel et al., 2020). The decrease or loss of α diversity is associated with various diseases (Rogers et al., 2016). Therefore, the higher α diversity of the gut microbiota reported here in captive FMD and AMD in autumn and winter can improve resistance to adverse environmental factors, reduce their influence, and promote the intake of fiber-rich food and nutrient absorption and utilization in the cold season. The function prediction analysis based on the KEGG and EggNOG databases showed that the gut microorganisms in captive FMD and AMD are mainly involved in carbohydrate metabolism, amino acid metabolism, energy metabolism, and cofactor and vitamin metabolism. These metabolic functions also had seasonal differences and were significantly correlated with the core bacterial phyla and genera. The seasonal changes of the core bacteria were also closely associated with the seasonal changes in function. The effect of seasonal variation on the gut microbiota of musk deer was greater than that of age and gender, based on the correlations between module eigenvalues and the traits of musk deer. WGCNA analysis indicated that the black module had low correlation with other modules and had the highest correlation with season. OTU6606, OTU5027, OTU7522, and OTU3787 fluctuated significantly with the season, and all belong to the phylum Firmicutes. The four hub OTUs were at the core of the black module and can greatly affect the structure of the bacterial co-occurrence network in the black module, and they can be used as an important indicator to evaluate the intestinal health of captive forest musk deer. CONCLUSION The study systematically and comprehensively analyzed the seasonal differences in gut microbiota structure and function through 16S rRNA gene sequencing of 352 fecal samples from captive FMD and AMD at different ages. Comparison analysis identified significant seasonal variations in α diversity, core microbiota, and metabolic functions. The α diversity, the relative abundance of the phylum Firmicutes, and the F/B ratio in the gut microbiota of both musk deer species were higher in the cold season than in the warm season. The major metabolism-related functions were also significantly higher in the cold season than in the warm season in FMD. The four identified hub OTUs can be used as an important indicator to evaluate the intestinal health of captive forest musk deer. Therefore, this study provides insights into the captive breeding environment and future reintroduction programs. Besides, functional annotation analysis of gut microbiota and the seasonal changes of metabolic pathways, combined with metagenomic and metabolomic analyses, can help to further explore the role of gut microbiota in the health, environmental adaptation, and metabolism of the FMD and AMD in the future. DATA AVAILABILITY STATEMENT The data presented in the study are deposited in the NCBI GenBank, accession number PRJNA725631 (https://dataview.ncbi.nlm.nih.gov/object/PRJNA725631?reviewer=e2rcdic27nir1a7qhp8unlk9s6).
ETHICS STATEMENT A total of 262 fresh feces of captive forest musk deer were collected with the permission of the authorities of the forest musk deer breeding center in Qilian county, China. Also, a total of 90 fecal samples of captive alpine musk deer were collected with the permission of the authorities of the alpine musk deer breeding center in Yuzhong county, China. All procedures were approved by the Ethical Committee for Experimental Animal Welfare of the Northwest Institute of Plateau Biology.
8,225
2021-09-06T00:00:00.000
[ "Environmental Science", "Biology" ]
OH cleavage from tyrosine: debunking a myth A systematic macromolecular crystallography investigation into the observed electron density loss around the –OH group of tyrosines, as a function of dose at 100 K, is reported. It is concluded that a probable explanation is aromatic ring disordering as opposed to –OH cleavage; occurrence of the latter mechanism is a misconception perpetuated in radiation damage literature, and is unsupported by any observations in radiation chemistry. RSZO scores calculated by EDSTATS (Tickle, 2012) for the 1dwa structure retrieved directly from the PDB and retrieved from PDB_REDO (Joosten et al., 2014) after 10 cycles of rigid body refinement. RSZO is a metric for model precision and is calculated as RSZO = mean(ρobs) / σ(Δρ), in which σ(Δρ) is the standard uncertainty of the 1dwa Fobs − Fcalc map. The percentage of residues within 1dwa with lower RSZO scores than specific thresholds (between 0.5 and 3) is provided. The RSZO score is unbounded from above, with larger positive values indicating a greater residue-to-density fit (Tickle, 2012). Figure S1.1. 2mFobs − DFcalc electron density maps and mFobs − DFcalc maps for the 1dwa myrosinase model at different stages in the PDB_REDO re-refinement, centred at residue Tyr-215. All 2mFobs − DFcalc and mFobs − DFcalc maps were generated over the asymmetric unit with FFT, and are contoured at 2 σ (in grey) and ±4 σ (green/red), respectively. (a) The original 1dwa model extracted directly from the PDB (with ordered water in red), with the corresponding original electron density maps (maps were extracted from the PDB in .mmcif format, and converted to .mtz format with the CCP4 program CIF2MTZ). In (b), the coordinate model after 10 cycles of rigid body refinement with PDB_REDO has been superimposed on the coordinate model and electron density maps in (a). (c) The re-refined 1dwa model (with ordered water in yellow) and electron density maps extracted from PDB_REDO after 10 cycles of rigid body refinement. In (d), the coordinate model after full model rebuilding with PDB_REDO has been superimposed on the coordinate model and electron density maps in (c). (e) The fully re-refined 1dwa model (with ordered water in pink) and electron density maps extracted from PDB_REDO after full model rebuilding. Full model re-building with PDB_REDO resulted in correct placement of both the Tyr-215 aromatic group and the ordered solvent molecules. All subfigures were rendered in PyMOL (www.pymol.org). Figure S1.2. Specific radiation damage to Tyr-215 in the myrosinase damage series (Burmeister, 2000). (a) Fobs(5) − Fobs(1) difference map, overlaid on the original initial coordinate model in white (PDB accession code: 1dwa). Structure factor amplitudes have been retrieved directly from the PDB for the Fobs(5) − Fobs(1) map calculation, with phases derived from the original 1dwa coordinate model. Negative difference density is observable adjacent to Tyr-215; however, it does not align with the -OH group. (b) Fobs(5) − Fobs(1) difference map, overlaid on the re-refined initial coordinate model in white (PDB accession code: 1dwa) retrieved from PDB_REDO. Structure factor amplitudes have been retrieved from PDB_REDO for the Fobs(5) − Fobs(1) map calculation, with phases derived from the re-refined 1dwa coordinate model. Clear negative difference density is observable centred at the position of the Tyr-215 -OH group.
In both cases, the Fobs(5) − Fobs(1) maps have been generated by FFT within the RIDL pipeline, and are overlaid on the coordinate models at ±5 σ contouring levels in green/red. Figure S1.3. Dloss damage signature histogram plots over all atoms of selected residue types for (a,b) myrosinase (Burmeister, 2000), (c,d) TRAP (Bury et al., 2016), and (e,f) malate dehydrogenase (Fioravanti et al., 2007). Gaussian kernel density estimates are overlaid on the histogram plots. For each plot, Kolmogorov-Smirnov (KS) test statistics have been calculated to measure the similarity between the two residue types included. Other doses within these damage series exhibit qualitatively similar behavior (data not shown). Figure S1.4. Panels include (f) insulin (unpublished data), (g) the C-protein DNA complex, (h) phosphoserine aminotransferase, and (i) acetylcholinesterase. Tyr-OH residues exhibiting hydrogen bond interactions to Glu or Asp carboxyl groups are coloured blue. See the main text for the list of original publications for each protein structure. For clarity, only proteins for which damage series consisted of > 2 higher dose datasets have been included. X-axis doses are those reported originally (see original publications for corresponding dose calculations). The exception is myrosinase, for which doses were originally quoted in units of photons mm^-2; diffraction weighted doses (DWD) (Zeldin, Brockhauser, et al., 2013) have been calculated in RADDOSE-3D (Zeldin, Gerstel, et al., 2013) for this damage series using the crystal composition (heavy atom content, crystal size) and beam characteristics (energy, flux, exposure time, and collimation) as supplied in Burmeister (2000). Figure S1.5. Relationship between Dloss(atom) and the change in Bdamage (Gerstel et al., 2015) relative to the 1dwa structure, for each Tyr-OH atom in the myrosinase structure at each higher dose in the series (a-d). For each atom a and higher dose structure k = 1dwf, 1dwg, 1dwh, 1dwi, the relative change is calculated with respect to the corresponding 1dwa value. To account for non-unity occupancies in the originally deposited data, all atomic occupancies in each coordinate model were set to 1 and a further round of isotropic B-factor refinement was performed in phenix.refine (Adams et al., 2010) prior to calculating Bdamage. The high correlation between Dloss and the change in Bdamage is striking, since both are independently calculated radiation damage metrics. Whereas Bdamage is dependent on the refined coordinate model atomic B-factor values, Dloss is derived directly from Fobs(n) − Fobs(1) electron difference density values. Supplementary material 2: GH7 protein crystallization and data collection Crystallization Glycoside hydrolase family 7 cellulase from Daphnia pulex (DpCel7B) was supplied by the National Renewable Energy Laboratory (NREL), having been overexpressed in a Trichoderma reesei system. A crystal was grown using a hanging-drop crystallization protocol with 2.75 mg/ml protein mixed with 0.1 M monosodium citrate and 0.9 M ammonium sulphate, pH 4. The approximately cuboid-shaped crystal had dimensions of 75 × 75 × 10 µm. Prior to cryocooling, the crystal was soaked for 2 minutes in buffer solution with 30% (v/v) glycerol added as a cryoprotectant, immediately before storage in liquid nitrogen. X-ray data collection Data were collected at 100 K on beamline I02 at the Diamond Light Source, using an incident wavelength of 0.980 Å (12.7 keV) and a Pilatus 6M detector, positioned at 244.9 mm from the crystal throughout data collection.
The beam passed through a 200 µm aperture and was slitted to 60 × 120 µm (vertical × horizontal), with an approximate Gaussian profile (vertical × horizontal FWHM: 17.7 × 104.8 µm). The flux at the sample position was determined using a pre-calibrated 500 µm thick silicon PIN photodiode to be ~1.77 × 10^12 ph/s at 100% beam transmission. The crystal was initially orientated with the small crystal dimension (z = 10 µm) parallel to the beam direction. A 2000° continuous sweep of data was collected, consisting of 9999 frames (Δφ = 0.2°) of 0.04 s exposure time each. The beam transmission was held at 25% throughout. Data processing The single 2000° sweep of diffraction data was divided into 11 disjoint 180° wedges (sufficient for completeness due to the C121 space group), each consisting of 900 images. Each wedge of data was integrated using DIALS (Fuentes-Montero et al., 2014), and then scaled and merged in AIMLESS. To obtain an initial set of phases for the first dataset, molecular replacement was performed in PHASER (McCoy et al., 2007), using an identical GH7 family cellobiohydrolase from Daphnia pulex already deposited in the PDB (accession code: 4xnn; resolution 1.9 Å; McGeehan, in preparation) as a search model. The resulting initial coordinate model was then refined using REFMAC5, first with 10 cycles of rigid body refinement, followed by repeated rounds of 10 cycles of restrained and TLS refinement, coupled with manual inspection in COOT. Final protein geometry was assessed with Molprobity4 (Chen et al., 2010). For the model from each later dataset, 10 cycles of rigid body refinement were performed in REFMAC, using the refined coordinate model derived from the initial dataset coupled with the merged structure factors from the later dataset. This is a standard protocol for refining multi-dataset protein damage series (Southworth-Davies et al., 2007). Since the crystal unit cell dimensions generally increase as a function of dose, rigid body refinement was employed to compensate for slight re-definition of the unit cell parameters with increasing dose. For the current analysis, no restrained refinement was performed for the models from each of the higher dose datasets. The Dloss metric is derived from Fobs(n) − Fobs(1) Fourier difference map coefficients, using the phases obtained for the initial dataset refined coordinate model, and as such restrained refinement of models for higher dose datasets is not required. Data reduction and refinement statistics for the full damage series are reported in Tables S2.1 and S2.2. Coordinates and structure factor amplitudes have been deposited in the PDB with accession codes: 5mcc, 5mcd, 5mce, 5mcf, 5mch, 5mci, 5mcj, 5mck, 5mcl, 5mcm and 5mcn. Dose calculations The diffraction weighted dose (DWD) was calculated using RADDOSE-3D for each dataset in the damage series. The crystal was modelled as a cuboid with dimensions 75 × 75 × 10 µm. The crystal composition was determined using the 4xnn coordinate model, which has an identical sequence. The contribution of heavy elements within the crystallization buffer (0.1 M Na and 0.9 M S) was included in the calculated crystal absorption coefficients. A Gaussian-shaped beam was modelled with parameters as given above. As a result of the high symmetry in the data collection procedure, for dataset n = 1, 2, …, the resulting DWD value was calculated to be 1.11 + 2.16 × (n − 1) MGy. By the end of the final dataset (n = 11) the crystal had absorbed an accumulated dose of DWD = 22.7 MGy.
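The dose bookkeeping quoted above is simple arithmetic; a short illustration in R (the numbers are taken directly from the paragraph above, and the code itself is purely illustrative):

```r
# DWD assigned to dataset n, as quoted above: 1.11 + 2.16*(n - 1) MGy
n   <- 1:11
dwd <- 1.11 + 2.16 * (n - 1)
data.frame(dataset = n, DWD_MGy = dwd)
dwd[11]   # 22.71 MGy, matching the quoted accumulated dose of ~22.7 MGy for the final dataset
```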
Table S2.2: Assessment of final macromolecular geometry for the refined coordinate model corresponding to the first dataset, as reported by Molprobity4 (Chen et al., 2010). Since only rigid-body refinement was performed for the coordinate models corresponding to the higher dose datasets (using the refined initial dataset coordinate model coupled with the merged structure factor amplitudes of the higher dose dataset), the reported geometry statistics were conserved for the higher dose datasets. Supplementary material 3: Tyr-OH versus -Cζ correlation analysis To verify whether published reports of negative Fobs(n) − Fobs(1) difference density at a small subset of Tyr residues (Tyr-330 in myrosinase (Burmeister, 2000), Tyr-63 in TRAP (Bury et al., 2016)) were compatible with a model of radiation-induced ring displacement, Dloss values for Tyr-OH have been compared directly with those for covalently bound Tyr-Cζ atoms. Here, linear regression fitting has been restricted to structures deemed to contain statistically valid sample sizes of atoms (> 60 kDa). For myrosinase, at the highest dose analysed (Fig. S3.1a) a high positive correlation exists between Tyr-OH and -Cζ Dloss (linear R²: 0.72-0.82 over the dose range), which is of the order of both the Asp -Cγ & -Oδ1 (Fig. S3.1c) and Glu -Cδ & -Oε1 (Fig. S3.1e) correlations (linear R²: 0.81-0.91 and 0.83-0.91, respectively, over the dose range). The observed high correlations for Asp and Glu correspond here to full oxidative cleavage of the carboxylate. The comparably high correlation between Dloss for Tyr-Cζ and Tyr-OH supports a hypothesis of radiation-induced disordering of the overall tyrosyl aromatic ring, fixed as a covalently bound unit. For explicit cleavage of the phenolic C-O bond, a lower correlation would be anticipated, similar to those reported between the Asp -Cγ & -Cβ (Fig. S3.1b) and Glu -Cδ & -Cγ (Fig. S3.1d) atoms (linear R²: 0.11-0.23 and 0.42-0.59, respectively, across the dose range). These reduced correlations are expected since the Asp-Cβ and Glu-Cγ atoms are not predicted to be cleaved during side-chain decarboxylation. Similarly, for the TRAP damage series, the Asp -Cγ & -Oδ1 and Glu -Cδ & -Oε1 Dloss behaviour was more correlated than that of Asp -Cγ & -Cβ and Glu -Cδ & -Cγ (Fig. S3.2 a-d). For TRAP, the Tyr-OH and -Cζ behaviour was only weakly correlated across the large dose range studied (1.3-25.0 MGy; Bury et al., 2016) (Fig. S3.2e). We suggest here that the low correlation observed for Tyr is a consequence of the 11-fold symmetry around each TRAP ring. Each of the two TRAP rings within the asymmetric unit contains 11 symmetry-related copies of a single Tyr residue (Tyr-63), with all of these predicted to exhibit similar Dloss values due to the conserved local protein environment (binding interactions, solvent accessibility) for each Tyr-63 residue around a TRAP ring. Consequently, the Tyr scatter plot may not contain an adequate sampling of Tyr residues for linear regression analysis in TRAP. In contrast, other proteins (malate dehydrogenase (Fioravanti et al., 2007), acetylcholinesterase (Weik et al., 2000), and phosphoserine aminotransferase (Dubnovitsky et al., 2005)) exhibited negligible Dloss correlation between Tyr-OH and -Cζ atoms (Figs. S3.3 & S3.4). However, no Tyr-OH groups were flagged as radiation-sensitive within these structures (Fig. 3; overall Tyr-OH ranks: 42, 36 and 54 for these proteins respectively, with the highest individual Tyr-OH positions being above 290, 138 and 235 at all tested doses). Consequently, a correlation would not be expected between -OH and -Cζ, with noise dominating at low Dloss values. In summary, whereas the correlation between Tyr-OH and -Cζ atom electron density loss is inconsistent between the investigated proteins, in a subset of cases for which negative Fobs(n) − Fobs(1) map peaks have been detected near Tyr-OH groups (myrosinase and TRAP), a positive correlation is present. Figure S3.1. (a-e) Scatter plots comparing Dloss(atom) behaviour for atoms within the same residue side-chains, for the highest dose myrosinase crystal dataset (Burmeister, 2000). The linear coefficient of determination is computed for each scatter plot. Figure S3.2. (a-e) Scatter plots comparing Dloss(atom) behaviour for atoms within the same residue side-chains, for dataset 5 (11.6 MGy) of the TRAP crystal damage series (Bury et al., 2016). The linear coefficient of determination is computed for each scatter plot. Figure S3.3. (a) R²: 0.21; (b) R²: 0.17. Figure S3.4. The linear coefficient of determination (R-squared) quantifying the linear correlation between Dloss for selected atom pairs for large protein structures (> 60 kDa). Box plots illustrate the variation in R-squared value between each dataset for each separate damage series.
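The per-dose linear R² values quoted in Supplementary material 3 come from straightforward least-squares fits between the Dloss values of paired atoms. A minimal, self-contained illustration in R follows; the values below are synthetic stand-ins, not data from any of the damage series.

```r
# Synthetic per-residue Dloss values standing in for one dose point of a damage series;
# in the real analysis these come from Fobs(n) - Fobs(1) difference map metrics per atom.
set.seed(1)
dloss_Cz <- runif(30, min = 0, max = 1)            # Tyr-Czeta Dloss (arbitrary units)
dloss_OH <- 0.9 * dloss_Cz + rnorm(30, sd = 0.1)   # correlated Tyr-OH Dloss

fit <- lm(dloss_OH ~ dloss_Cz)
summary(fit)$r.squared   # the linear R^2 reported per atom pair and per dose
plot(dloss_Cz, dloss_OH); abline(fit)
```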
3,445.4
2017-01-01T00:00:00.000
[ "Chemistry", "Physics" ]
Economic evaluation of patient navigation programs in colorectal cancer care, a systematic review Patient navigation has expanded as a promising approach to improve cancer care coordination and patient adherence. This paper addresses the need to identify the evidence on the economic impact of patient navigation in colorectal cancer, following the Health Economic Evaluation Publication Guidelines. Articles indexed in Medline, Cochrane, CINAHL, and Web of Science between January 2000 and March 2017 were analyzed. We conducted a systematic review of the literature using Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The quality assessment of the included studies was based on the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) checklist. Inclusion criteria indicated that the paper’s subject had to explicitly address patient navigation in colorectal cancer and the study had to be an economic evaluation. The search yielded 243 papers, 9 of which were finally included within this review. Seven out of the nine studies included met standards for high-quality based on CHEERS criteria. Eight concluded that patient navigation programs were unequivocally cost-effective for the health outcomes of interest. Six studies were cost-effectiveness analyses. All studies computed the direct costs of the program, which were defined a minima as the program costs. Eight of the reviewed studies adopted the healthcare system perspective. Direct medical costs were usually divided into outpatient and inpatient visits, tests, and diagnostics. Effectiveness outcomes were mainly assessed through screening adherence, quality of life and time to diagnostic resolution. Given these outcomes, more economic research is needed for patient navigation during cancer treatment and survivorship as well as for patient navigation for other cancer types so that decision makers better understand costs and benefits for heterogeneous patient navigation programs. Electronic supplementary material The online version of this article (10.1186/s13561-018-0196-4) contains supplementary material, which is available to authorized users. Introduction The impacts of cancer on individuals, caregivers, society and health care systems are profound. The National Cancer Institute estimates that in 2016, 1.6 million people in the United States will be diagnosed with cancer and nearly 600,000 will die from the disease [1]. Globally, over eight million lives lost and almost 200 million disability-adjusted life years were attributed to cancer in 2013 [2]. Close to $125 billion was spent on cancer care in the U.S. in 2010 [1], a figure anticipated to reach $173 billion by 2020 [3]. The growing cost of cancer care reflects successes in the field, such as increases in both the percentage of people who survive cancer and the number of years survived, with resultant costs of specialized care needs [4]. It also reflects failures: for example, inadequate coordination of care through an "increasingly specialized and fragmented health care system" [5], which can lead to service duplications, lower treatment adherence, poorer care quality, worse health outcomes, and increased costs for patients and payers [6][7][8]. Cancer cost must be considered in the context of health and health care disparities. 
Racial/ethnic minorities, low-income populations, and others from historically marginalized backgrounds tend to be diagnosed at later stages of disease progression, receive lower quality care and bear a disproportionate burden of disease. Racial/ethnic disparities in cancer cost an estimated annual $193 billion in premature death and $471.5 million in lost productivity in the United States alone [9]. As others have observed, there are both economic and moral arguments for bending the cost curve of cancer care [10]. Patient navigation (PN) has rapidly expanded as a promising approach to address cancer disparities, reduce the overall cost of cancer, and improve care coordination and patient adherence across the care continuum, particularly among minority and/or economically disadvantaged patient populations [11,12]. PN programs have been effective in improving clinical outcomes and patient experience, including reducing patient distress and anxiety, shortening acute hospital stays, increasing patient satisfaction and empowerment, and reducing disparities in timely movement through the cancer care trajectory [13]. Yet PN's effects on the cost of cancer care are not as well documented. Few studies on PN programs provide an exhaustive economic evaluation of their outcomes, and even fewer base their evaluation on validated methodological guidelines like the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) and on well-defined coordination problems [14]. More rigorous economic analyses of PN are needed for a variety of reasons, not least to inform policy decisions about if and how to pay for PN services, which in the U.S. are currently not reimbursed by third party payers. Strengthening understanding of the economic impacts of PN is particularly valuable for cancer types in which population-level early detection is cost-effective and PN improves adherence to initial phases of care. Colorectal cancer (CRC) is the third most commonly diagnosed cancer in the United States and worldwide. CRC will be diagnosed in an estimated 96,000 people in the United States in 2017 and will take the lives of over 50,000 people, disproportionately affecting racial/ethnic minorities and economically disadvantaged people due to later-stage diagnosis and low screening adherence [15]. Globally, almost 694,000 lives were lost to CRC in 2012 [2]; it is estimated that worldwide CRC diagnoses will nearly double in the next two decades to reach 2.4 million cases in 2035. The United States will spend approximately $17.41 billion on CRC care in 2020, with over half of cost spent on continuing care and in the last year of life [3]. Yet, the majority of spending on treatment, as well as the estimated $4.2 billion in productivity lost to CRC deaths and inestimable individual and family suffering [16], is largely considered avoidable due to the success of screening and removal of pre-cancerous polyps. PN has demonstrated improvements in timely movement through the CRC care trajectory, particularly among racial/ethnic minority, low-income, and other disadvantaged populations [17]. Accordingly, it makes an excellent case study to examine the economic impacts of PN on care. Our study aimed to analyze the literature and assess the level of evidence on the economic evaluation of PN programs in CRC. 
Review process A systematic search of the scientific literature was conducted in four major databases (MEDLINE using PubMed, Web of Science, Cochrane and CINAHL) to identify relevant English-language publications relating to economic evaluations of PN programs in CRC. The Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) guidelines were used to ensure systematic selection of studies [18] (see Additional file 1: Table A). The three preconditions for inclusion were that the study: (1) evaluated PN services: we confirmed that each article explicitly addressed PN (including navigators with and without a clinical license, such as nurses and social workers performing navigator functions) rather than other health care provider roles that may perform similar tasks, (2) conducted an economic evaluation, and (3) focused on CRC. Keywords were defined according to population, intervention/comparator, outcomes, and study design elements (see Additional file 1: Table B). Keywords were searched in the title or abstract of full-length publications that were published between January 2000 and March 2017. Articles were excluded if they did not correspond to the above criteria. Systematic literature reviews were also excluded. Study selection Our initial search resulted in 243 articles that met the above-mentioned criteria. The retrieved studies were reviewed by four researchers in close consultation with a senior author (MPC) and, in case of disagreement, issues were resolved by consensus. Duplicates were removed, resulting in a total of 121 articles for review. The 121 citations were screened on the basis of titles and abstracts, and 16 papers were then selected. The full-text articles for the 16 abstracts selected for inclusion were retrieved and read. The final number of original empirical studies was 9 after assessment of eligibility for inclusion. Data were extracted independently by four researchers. Extracted information included: bibliographic details, information on participants, PN interventions, outcomes, study design, and results. Disagreements were resolved by discussion. Figure 1 provides a PRISMA diagram illustrating details of the search strategy. Quality assessment The studies identified for inclusion were assessed against the 24 key criteria contained in the CHEERS checklist [19]. The checklist has been jointly endorsed by ten journals. All items were presented in the tables for this review, consisting of five broad categories: Title and abstract (2 items); Introduction (1 item); Methods (14 items); Results (4 items); and Discussion (3 items) (see Additional file 1: Table C). In certain studies, we considered that some CHEERS items were not applicable: when the time horizon was less than one year, discounting (item 9) was considered not applicable; when the economic evaluation was a cost analysis, effectiveness measurement (item 11) was considered not applicable; and when measured outcomes were not preference-based, preference measurement (item 12) was considered not applicable. We used the results of the quality assessment for descriptive purposes and to investigate potential sources of heterogeneity. Cost classification used The costs considered were as follows. Direct costs encompass all the health care expenditures generated by the program. They include the resources used for program implementation (program cost) and both the medical and non-medical resources generated as a consequence of the program (e.g., physician consultations, treatment cost, professional home care).
These resources are priced on the health care market (consultation cost, treatment cost, etc.). Indirect costs correspond to resources without a market price, such as opportunity costs for both the patient (e.g., travel time, waiting time, and productivity loss on the labor market) and his/her relatives (since informal care time means the caregiver cannot pursue other activities). While necessarily estimated, these resources are given a monetary value to be integrated within the costs of the economic evaluation. Results We present in Additional file 1: Table C the quality assessment of the included studies based on the CHEERS checklist. It shows that seven out of the nine studies reviewed can be considered high-quality studies, following an existing approach to determining quality in cancer scholarship [20], with an average proportion of 84.8% of checklist criteria fulfilled. Table 1 shows the main characteristics of the studies included [21][22][23][24][25][26][27][28][29]. Most articles (n = 6) exclusively addressed navigating those due for recommended CRC screening to receive those services. The few articles examining PN to diagnostic resolution (n = 2) addressed multiple cancer types. Two studies compared PN to screening colonoscopy versus other screening modalities (fecal occult blood testing (FOBT) or fecal immunochemical testing (FIT)). All studies but one occurred in the United States and took place in various clinical settings, primarily in the health care safety net setting. At least two-thirds of studies focused on racial/ethnic minority, low-income, or otherwise underserved populations. The only study to address PN from confirmed diagnosis through treatment or end of life occurred in New Zealand. Navigator profiles and roles described in the articles were diverse. Three studies used nurse navigators; four used non-clinically licensed navigators with various titles such as "lay" health educator or outreach worker. One article included a licensed clinical social worker and at least two employed bilingual staff. For the seven studies that described navigator actions, navigators provided assistance through a wide range of tasks. These included identification and removal of barriers to care, coordination of appointments and referrals, appointment reminders, support and encouragement, information and education, and tracking and follow-up. Among the reviewed studies, four were based on randomized controlled trials (RCTs). All the studies reviewed indicated the time horizon for evaluation. Navigator tasks listed for one study in Table 1 included "identifying and addressing patient barriers to accessing care (transport/financial/social)," "coordinating arrangements for preoperative assessments and hospital admission," "optimising post-operative care," "tracking investigations and appointments," "ensuring the patient is discussed at a multidisciplinary team meeting," and "acting on any administrative delays." Table 2 shows that most of the studies (n = 8) adopted the health care system perspective, which refers to a variety of entities including the hospital (n = 3) or public or private payers (n = 5). Six studies were presented as cost-effectiveness analyses (among which, one presented both a cost-effectiveness analysis and a cost-benefit analysis), one was presented as a cost-utility analysis, one was a cost-consequence analysis, and one was a cost analysis.
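For reference when reading the cost-effectiveness results that follow, the incremental cost-effectiveness ratio (ICER) compares a PN program with its comparator strategy. The formulation below is the standard textbook definition, not a formula quoted from any of the included studies:

\[ \mathrm{ICER} = \frac{C_{\mathrm{PN}} - C_{\mathrm{comparator}}}{E_{\mathrm{PN}} - E_{\mathrm{comparator}}} \]

where C is the total cost of each strategy and E the effectiveness outcome (e.g., additional screenings completed, diagnostic resolutions achieved, or QALYs gained); a program is conventionally judged cost-effective when its ICER falls below the relevant willingness-to-pay threshold.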
If we assume that using Quality Adjusted Life Years (QALYs) implies conducting a cost-utility analysis [30], two of the cost-effectiveness analyses reviewed were also cost-utility analyses. All studies computed the direct costs of the program, which were defined a minima as the program costs, including training, personnel, and supply costs. Eight studies considered direct medical costs, which were usually divided into outpatient and inpatient visits, tests and diagnostics. Estimated treatment cost was only considered in four papers, and no study included direct non-medical costs, such as home care expenses. Only one study included indirect costs in the total costs associated with the PN program, including patient productivity loss and travel cost. No study included indirect costs associated with informal care. The clinical outcomes studied were mainly measures of time from abnormal finding to diagnostic resolution (n = 2), receipt of colonoscopy (n = 4), Quality Adjusted Life Years (n = 3) or life years (n = 1). One third of the studies interpreted their results in relation to different stakeholders' willingness to pay (WTP) for improvements in care. All but one study concluded that PN programs were unequivocally cost-effective for the health outcomes of interest. For instance, Incremental Cost Effectiveness Ratios (ICERs) ranged from $65 to $1,958 per additional screening, meaning that adopting the PN program instead of the alternative care strategy considered (for instance usual care, fecal occult blood testing, or automated electronic health record-linked mailings) leads to an additional cost of $65 to $1,958 per additional patient screened. ICERs ranged from $1,192 to $9,708 per diagnostic resolution and from $3,765 to $15,600 per QALY gained. There was a high probability for PN to be cost-effective for CRC if stakeholders' WTP ranged between $1,200 and $1,697 per additional screening and from $16,500 to $21,000 per QALY gained. In comparison, the National Institute for Health and Clinical Excellence (NICE) has been using a cost-effectiveness threshold ranging between £20,000 and £30,000 ($27,000-$40,000), usually per QALY gained, for over 14 years [31,32]. The remaining study concluded that PN programs were only likely to be cost-effective (at $43,520 per life-year saved) under the most favorable assumptions, in which patients lost to follow-up have more advanced cancer, and navigators account for a 6-month earlier time to diagnostic resolution and a 15% higher probability of follow-up resolution completion [24]. Conclusions Most PN programs for CRC presented in our review had a high probability of being cost-effective compared to usual care, given a conservative cost-effectiveness threshold of $50,000 per QALY gained [33]; one study found one-time PN to be cost-saving. Cost-effectiveness evidence is most robust for PN programs designed to increase adherence with CRC screening using colonoscopy. Given the U.S. Preventive Services Task Force's A-grade recommendation of CRC screening and the demonstrated success of PN in increasing screening adherence among racial/ethnic minorities, low-income populations, and other disadvantaged patients, the volume and strength of the evidence in favor of the economic value of PN for colorectal screening adherence is unsurprising. There are fewer articles for other phases of the continuum or using other screening methods [34]. The scant evidence seems to be tentatively favorable for the phase from abnormal screening to diagnostic resolution. Donaldson et al.
(2012) concluded that PN programs increased achievement of timely diagnostic resolution for CRC (as well as breast cancers) among largely uninsured patients, and would be cost-saving if they were able to avert three to four cancer deaths per year [21]. Bensink et al. (2014) found limited economic benefit for PN during this phase (across four cancer types), indicating the greatest cost-effectiveness for those with the greatest needs such as the longest lapses in follow-up after screening, the most severe screening results, or the greatest potential to make gains in timeliness [24]. Lairson et al. advised payers to consider covering the costs of patient navigation for colonoscopy, which, compared to FOBT, has more chance to be considered cost-effective and even cost-saving when adopting larger time horizons [25]. The only study examining PN during the treatment phase addressed stage III colon cancer patients. Blakely et al. found PN to have high probability of cost-effectiveness, even considering a conservative WTP threshold [26]. These findings provide initial promising evidence for decision makers in support of PN for patients with more advanced cancers, and also for PN roles in providing treatment coordination and support. Evidence of PN's cost-effectiveness is bolstered by the methodological soundness of the studies included in this review. Seven studies, all published after 2012, meet standards for high-quality based on CHEERS criteria. One of the studies was a cost analysis, making an incremental interpretation of the results impossible according to CHEERS guidelines [23]. Although there have been calls for establishing common PN cost measures [35], establishing such measures are challenging since costs to one stakeholder is revenue for another. In the studies we reviewed, there were variations in considerations and definitions of direct costs, indirect costs and health outcomes. The lion's share of the total cost of PN programs was most often attributable to direct medical costs rather than direct non-medical costs or indirect costs not covered by health insurance [36]. In other words, most of the reviewed studies adopted the health care system perspective rather than society's perspective. Only a third of the articles addressed stakeholders' WTP. WTP is an important consideration to help payers optimize resource allocation, in particular with PN programs that are more costly but also more effective [28]. The perspective adopted is also crucial to a discussion of WTP thresholds, especially since PN programs considered to be cost-effective for society may exceed a hospital administrator's budget constraint, or their WTP, corresponding to their preferences for an improvement in patient's health outcome thanks to PN. It is noteworthy that patient preferences and patient reported outcomes (PROs) associated with PN are not addressed in the studies reviewed. While this review advances understanding of the cost-effectiveness of CRC PN, findings should be interpreted with caution given limitations to current extant research. The heterogeneity of PN programs impedes the generalizability and comparability of individual and aggregate findings. The diversity of navigator roles, modes of communication and intensity of the interventions not only have the potential to produce heterogeneity in PN outcomes; it also produces variation in direct costs related to personnel and program costs. 
In settings in which PN occurs within a multi-faceted approach isolating PN-specific outcomes from aggregate outcomes may be especially challenging [23,35]. For instance, the intervention described by Wilson and Villarreal to increase colonoscopy adherence includes free colonoscopies, extended clinic hours, and taxi services [29]. Another problem affecting the generalizability of the results is the definition of usual care which was appreciably different across the studies reviewed. Further research could consist in comparing PN programs by navigator profile in addition to (or even instead of) being limited to a specific pathology. This kind of comparison would require detailed characteristics about navigator profile, such as their academic background, professional training, level of remuneration, length of work experience, etc. that are missing in most of the studies reviewed. While PN program implementation is characterized by significant variability, the screening method and phases of the cancer continuum studied were limited among the studies examined. Therefore, these cost-effectiveness evaluation results may not apply for PN interventions with screening methods and at cancer continuum phases not included in this review. Extrapolation of findings for PN cost-effectiveness for other types of cancers should be done with extreme caution given that colonoscopy screening doubles as a preventive procedure, extending savings of early detection and removal of polyps over a lifetime. Colonoscopy is thus unique among cancer screening modalities. Finally, our review faced several of the challenges often found in economic reviews. Economic modeling is complex. Multiple different models were used across the studies included in review, and results could have been affected by each model's type, structure, data sources and assumptions [37]. Lack of cost-benefit analyses prevented us from assessing whether PN could be profitable for providers, health care systems and societies (and at what cost for payers and possibly patients), but such analyses could move scholarship beyond cost-effectiveness.
4,666.2
2018-06-14T00:00:00.000
[ "Medicine", "Economics" ]
Dilepton Production in p$+$p, Au$+$Au collisions at $\sqrt{s_{NN}}$ = 200 GeV and U$+$U collisions at $\sqrt{s_{NN}}$ = 193 GeV In this contribution we report $e^{+}e^{-}$ spectra with various invariant mass and p$_{T}$ differentials in Au$+$Au collisions at $\sqrt{s_{NN}}$=200 GeV and U$+$U collisions at $\sqrt{s_{NN}}$=193 GeV. The structure of the t ($t\approx p^2_{\mathrm{T}}$) distributions of these mass regions will be shown and compared with the same distributions in ultra-peripheral collisions. Additionally, this contribution discusses first measurements of $\mu^{+}\mu^{-}$ invariant mass spectra from STAR's recently installed Muon Telescope Detector (MTD) in p$+$p and Au$+$Au collisions at $\sqrt{s_{NN}}$ = 200 GeV. Introduction Heavy-ion collisions like those produced at the Relativistic Heavy Ion Collider (RHIC) produce a hot and dense strongly interacting medium of quarks and gluons [1,2,3,4]. Dileptons (l + l − ) are produced throughout all stages of the medium's evolution and escape with minimum interaction with the strongly interacting medium. For this reason, l + l − pair measurements play an essential role in the study of the hot and dense nuclear matter created in heavy-ion collisions. Dileptons in the low invariant mass region (up to M ll ∼1 GeV/c 2 ) retain information about the in-medium modification of vector mesons while dileptons in the intermediate mass region (extending out to M ll ∼3 GeV/c 2 ) predominantly originate from charm decays and thermal radiation of the medium. At higher invariant masses ( above M ll ∼3 GeV/c 2 ) dileptons originate from the Drell-Yan process and from the decay of heavy quarkonia such as J/ψ and Υ. Recent studies of the J/ψ yields in peripheral Pb+Pb collisions at √ s NN = 2.7 TeV by the ALICE collaboration showed significant excess at very low momentum transfers (p T < 0.3 GeV/c) [5]. STAR has also measured significant excess J/ψ yields in peripheral Au+Au collisions at √ s NN = 200 GeV and U+U collisions at √ s NN = 193 GeV for p T < 0.15 GeV/c [6]. These observations cannot be explained by hadronic production mechanisms and may be evidence of dilepton production from photon-photon or photon-nucleus interactions. These types of interactions are studied in great detail in ultra-peripheral collisions (UPC) where the impact parameter, b, is larger than two times the nuclear radius and no hadronic interaction takes place. This motivated STAR to measure the e + e − pair production in a wider invariant mass region (M ee < 4 GeV/c 2 ) at very low p T to further investigate the possible production mechanisms responsible for the observed J/ψ excess at low p T in peripheral A+A collisions. Fig. 2: (color online) The low p T e + e − excess yield with respect to the hadronic cocktail for Au+Au and U+U collisions with 60-80% centrality. The additional contributions expected from in-medium modified ρ production, QGP thermal radiation, and the sum of the two are shown. STAR's Time Projection Chamber (TPC) and Time-of-Flight Detector (TOF) are used in the measurement of the e + e − invariant mass spectra reported in this contribution [7,8,9]. The TPC provides charged particle tracking, momentum measurement, and particle identification through ionization energy loss measurement. The TOF detector provides precise particle velocity (β) measurements. The particle identification power of these two detectors is combined to provide clean electron identification out to p T ∼3 GeV/c. 
Figure 1 shows the e + e − invariant mass distribution from Au+Au collisions at √ s NN =200 GeV and U+U collisions at √ s NN =193 GeV for p T < 0.15 GeV/c. The e + e − invariant mass distribution is shown for 60-80%, 40-60%, and 10-40% collision centralities. For each case the expected contribution from hadronic sources (excluding the ρ meson) is shown as a solid black line. In 60-80% central Au+Au and U+U collisions there is a significant excess visible with respect to the corresponding hadronic cocktail. The excess is less significant in 40-60% central collisions while the data from 10-40% central collisions is consistent with the expectation from the hadronic cocktail. The excess yield with respect to the hadronic cocktail is shown in Fig. 2 for 60-80% central Au+Au and U+U collisions along with the additional contributions expected from an in-medium modified ρ and thermal radiation from the Quark Gluon Plasma (QGP). The sum of these additional contributions is insufficient to explain the observed excess. In this contribution we will investigate the possibility that the additional dilepton yield observed at low p T results from UPC-like photon-photon or photon-nucleus interactions. The characteristics of the excess dilepton production can be further studied by investigating the p T distribution in various invariant mass regions. In Fig. 3 the p T distribution is shown for the ρ-like mass region (0.4 < M ee < 0.76 GeV/c 2 ) and the continuum region (1.2 < M ee < 2.6 GeV/c 2 ) in 60-80% central Au+Au and U+U collisions. The hadronic cocktail adequately describes the data for p T > 0.15 GeV/c in both collision species and both mass regions. The entire observed excess with respect to the hadronic cocktail is found below p T ≈ 0.15 GeV/c. The structure of the p T distribution in Fig. 3 appears similar to the p T distribution for coherent photoproduction of ρ 0 in UPCs seen in Fig. 4. This similarity is motivation to investigate whether or not coherent photoproduction could explain the observed excess production of low-p T dielectron pairs in peripheral A+A collisions. Photon-nucleus interactions are characterized by strong coupling (Zα em ∼0.6) and a photon wavelength λ = h/p > R A , where R A is the radius of the colliding nuclei [10]. This relationship between the photon wavelength and R A limits the photon's transverse momentum such that the photon's p T is less than h/R A (p T <∼30 MeV for heavy ions). The strong coupling strength and low p T limit of the photoproduction mechanism leads to a large production cross-section at low p T which falls off rapidly at higher momentum transfers. Additionally, the coherent photon-nucleus interaction is the result of two indistinguishable processes in which the photon can be emitted from A 1 or A 2 , i.e. from either nucleus. For exclusive vector meson production the amplitude of these two processes have opposite signs which results in strong destructive interference patterns in the p T distribution [11]. One experimental signature of this is a strong downturn in the t ≈ p 2 T distribution at very low values of |t| followed by an exponential decrease in the yield as |t| increases. Then, at higher values of |t| the total yield flattens out as the other production mechanisms dominate over the coherent photoproduction mechanism. Figure 5 shows the t distribution for peripheral Au+Au and U+U collisions for the ρ-like and continuum mass regions. These t distributions are fit to Ae −Bt in the low t region excluding the data point with lowest t value. 
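The exponential fit described above can be sketched as follows; the arrays are hypothetical placeholders standing in for a measured t spectrum, and the lowest-|t| point is excluded from the fit, as in the text.

```python
# A minimal sketch of the fit described above: A*exp(-B*t) fitted to the
# low-|t| region, excluding the lowest-t point (where destructive interference
# suppresses the yield). The data arrays are hypothetical, not STAR data.
import numpy as np
from scipy.optimize import curve_fit

def expo(t, A, B):
    return A * np.exp(-B * t)

t    = np.array([0.0005, 0.0015, 0.0025, 0.0040, 0.0060, 0.0090])  # (GeV/c)^2
y    = np.array([120.0, 300.0, 210.0, 150.0, 95.0, 60.0])          # counts
yerr = np.sqrt(y)

mask = slice(1, None)  # drop the lowest-t point from the fit
popt, pcov = curve_fit(expo, t[mask], y[mask], sigma=yerr[mask],
                       absolute_sigma=True, p0=(300.0, 100.0))
A_fit, B_fit = popt
print(f"slope parameter B = {B_fit:.0f} +/- {np.sqrt(pcov[1, 1]):.0f} (GeV/c)^-2")
```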
The large slope parameters from the t distribution fits are similar to the slope parameters found in coherent photoproduction of ρ 0 in UPCs [12]. However, more data points with smaller uncertainties are needed to make a definitive conclusion as to whether or not coherent photoproduction can explain the observed excess dilepton yield at low p T in peripheral A+A collisions. µ + µ − Measurements with the MTD at STAR Installation of the Muon Telescope Detector (MTD) at STAR was completed in 2014. The MTD, located outside the STAR magnet steel, provides ∼45% azimuthal coverage in −0.5<η<0.5 [13]. The precise timing (σ∼100 ps) and spatial resolution (σ∼1 cm) of the MTD aids in the identification of muons [14]. With the addition of the MTD, STAR has collected data with a dedicated dimuon trigger in p+p and Au+Au collisions at √ s NN =200 GeV. These datasets allow the µ + µ − invariant mass spectra to be measured over a large mass range for the first time at STAR. Measurements of the µ + µ − invariant mass spectra will allow STAR to conduct new studies of the in-medium modification of the ρ meson in the LMR where the µ + µ − channel suffers from fewer background sources compared to the e + e − channel. Measurement of the IMR in the µ + µ − channel will also allow for new studies of the QGP thermal radiation. The new muon identification capabilities at STAR also allow for e − µ correlation studies which may help reduce the uncertainty on the significant contribution from semi-leptonic charm decays in the IMR [15]. Dedicated dimuon-triggered datasets were collected from Au+Au collisions at √ s NN =200 GeV in 2014 and 2016. Preliminary analysis of these datasets show S/B of ∼1/10 in 60-80% central collisions. This S/B ratio is significantly higher than the S/B ratio of ∼1/100 observed in e + e − . The p+p data collected in 2015 will provide a high quality baseline with which to compare future STAR measurements of the µ + µ − invariant mass spectra in central Au+Au collisions. Figure 6 shows the raw µ + µ − distribution in data from 2015 p+p collisions at √ s=200 GeV. Also shown is the like-sign ((µ + µ + +µ − µ − )/2) distribution and the difference between the unlike-sign and like-sign distributions (signal). The µ + µ − signal shows clear peaks corresponding to the ω, φ, J/ψ, and ψ(2S ). The full analysis of the 2015 p+p and 2014 Au+Au datasets is ongoing. Summary In this contribution we have reported on recent STAR measurements of the e + e − invariant mass spectra at low p T in peripheral A+A collisions. If the observed excess dilepton production is the result of coherent photoproduction of vector mesons in peripheral A+A collisions then this may provide a novel probe for studying the QGP medium. For instance, if a vector meson is produced through photoproduction with very low p T followed by a hadronic interaction then the vector meson may remain in the vicinity of the strongly interacting medium until it has decayed into a dilepton pair. In this case the dileptons from the photo-produced vector meson could be used as a probe of the medium characteristics. However, more data with smaller uncertainties will be needed to support or reject the hypothesis that coherent photoproduction is responsible for the observed dilepton excess in peripheral A+A collisions at very low p T . STAR plans to collect data from 96 44 Ru+ 96 44 Ru and 96 40 Zr+ 96 40 Zr collisions in 2018. 
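The like-sign combinatorial-background subtraction quoted above for the dimuon spectrum can be illustrated with a minimal sketch. The histograms are hypothetical, and the arithmetic-mean estimator shown here follows the text; refinements such as a geometric-mean or acceptance-corrected background, which full analyses may use, are omitted.

```python
# Minimal sketch of like-sign background subtraction: (N++ + N--)/2 subtracted
# bin by bin from the unlike-sign invariant-mass histogram. Placeholder data.
import numpy as np

unlike_sign = np.array([520., 480., 610., 900., 460., 430.])  # mu+mu- counts
plus_plus   = np.array([250., 240., 260., 270., 230., 220.])  # mu+mu+ counts
minus_minus = np.array([240., 230., 250., 260., 220., 210.])  # mu-mu- counts

like_sign  = 0.5 * (plus_plus + minus_minus)
signal     = unlike_sign - like_sign
signal_err = np.sqrt(unlike_sign + 0.25 * (plus_plus + minus_minus))  # Poisson
print(signal, signal_err)
```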
Data from these A = 96 isobar collisions will help determine the exact production mechanism responsible for the excess, since the rates of photon-photon and photon-nucleus interactions scale as Z⁴ and Z², respectively. Finally, the full installation of the MTD at STAR in 2014 has allowed STAR to collect dedicated dimuon-triggered datasets in p+p and Au+Au collisions at √s_NN = 200 GeV. These datasets will allow new studies of the µ⁺µ⁻ invariant mass spectra over a large mass range for the first time at STAR. The 2015 p+p dataset provides a high quality baseline for STAR measurements in central Au+Au collisions. The Au+Au datasets collected in 2014 and 2016 will provide new opportunities to study the in-medium broadening of the ρ meson in the LMR and to investigate the contributions from QGP thermal radiation and semi-leptonic charm decays in the IMR.
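A back-of-the-envelope check of the isobar argument above, assuming the excess yield scales as Z⁴ for photon-photon and Z² for photon-nucleus production:

```python
# Expected Ru/Zr ratio of the excess yield under the two Z-scaling hypotheses.
Z_Ru, Z_Zr = 44, 40
print((Z_Ru / Z_Zr) ** 4)  # ~1.46 if photon-photon production dominates
print((Z_Ru / Z_Zr) ** 2)  # ~1.21 if photon-nucleus production dominates
```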
2,679.6
2017-04-23T00:00:00.000
[ "Physics" ]
Electron acceleration in direct laser-solid interactions far beyond the ponderomotive limit In laser-solid interactions, electrons may be generated and subsequently accelerated to energies of the order of magnitude of the ponderomotive limit, with the underlying process dominated by direct laser acceleration. Breaking this limit, realized here by a radially-polarized laser pulse incident upon a wire target, can be associated with several novel effects. Three-dimensional Particle-In-Cell simulations show a relativistically intense laser pulse can extract electrons from the wire and inject them into the accelerating field. Anti-dephasing, resulting from collective plasma effects, is shown here to enhance the accelerated electron energy by two orders of magnitude compared to the ponderomotive limit. It is demonstrated that ultra-short radially polarized pulses produce super-ponderomotive electrons more efficiently than pulses of the linear and circular polarization varieties. Introduction Generation of energetic electrons by laser interaction with matter has witnessed considerable development over the past four decades. In the interaction of a laser pulse with an under-dense plasma (ambient electron density $n_e \lesssim 0.03\,n_c$, where $n_c$ is the critical density) (1), GeV-energy electron beams can be obtained via wakefield acceleration (2). Larger numbers of electrons can be accelerated in denser plasmas ($n_e \gtrsim 0.1\,n_c$), where direct laser acceleration is dominant, to energies below the ponderomotive limit (3)(4)(5)(6). The term ponderomotive limit refers to the maximum energy gain, $\Delta\varepsilon = \tfrac{1}{2}a_0^2 mc^2$, by a free electron during interaction with a plane-wave laser pulse (7,8). Here, $a_0 = eE_0/(m\omega c)$ is a dimensionless intensity parameter, in which $E_0$ is the electric field amplitude and $\omega$ its frequency, while $m$ and $-e$ are the mass and charge, respectively, of the electron, and $c$ is the speed of light in vacuum. The ponderomotive scaling (3)(4)(5)(6)(7)(8) is widely quoted within the context of discussions of direct laser acceleration (9,10) in the interaction of intense laser fields with dense matter ($n_e > n_c$), including solid-density plasmas. Generation of electrons which gain over 7 times the ponderomotive energy in near-critical plasmas ($n_e \sim n_c$) has recently been reported (11)(12)(13)(14)(15)(16)(17)(18). This process is associated with important effects, such as the enhancement of ion acceleration in near-critical plasmas (19). Electron energy beyond the ponderomotive limit is achieved by anti-dephasing acceleration (ADA). Electrons reach such energies in near-critical plasmas by self-injection into the acceleration phase of the laser field, aided by stochastic motion (11,12), transverse electric fields (13), and longitudinal electric (14,15) or magnetic fields (16,17). In the absence of anti-dephasing, energies of the electrons generated in laser-solid interactions hover, generally, around the ponderomotive limit (3)(4)(5)(6). Only MeV electrons are generated when linearly-polarized (LP) and circularly-polarized (CP) terawatt laser pulses are used. However, experiments in which a micro-wire is used as an advanced solid target to generate and transport hot electrons over millimeters (20)(21)(22) have recently demonstrated energies of several times the ponderomotive energy when LP laser pulses are used (23,24).
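For orientation, the quantities defined above can be evaluated numerically. The sketch below uses the commonly quoted intensity-to-a₀ conversion for linear polarization (prefactors vary slightly with convention) together with the ponderomotive limit (1/2)a₀²mc²; the value a₀ = 2 mirrors the pulse amplitude considered later in the paper.

```python
# Sketch: normalized amplitude a0 = e*E0/(m*omega*c) and ponderomotive limit.
import math

M_E_C2_MEV = 0.511  # electron rest energy in MeV

def a0_from_intensity(intensity_w_cm2, wavelength_um):
    """Approximate a0 ~ 0.85 * lambda[um] * sqrt(I / 1e18 W/cm^2), linear pol."""
    return 0.85 * wavelength_um * math.sqrt(intensity_w_cm2 / 1e18)

def ponderomotive_limit_mev(a0):
    """Maximum gain (1/2) a0^2 m c^2 for a free electron in a plane wave."""
    return 0.5 * a0**2 * M_E_C2_MEV

a0 = 2.0
print(ponderomotive_limit_mev(a0))          # ~1 MeV: 'only MeV electrons'
print(ponderomotive_limit_mev(a0) * 100.0)  # two orders of magnitude more: ~100 MeV
```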
Near-critical plasmas can be generated in the vicinity of solid targets for some applications (13,15). Here, ADA is put forward as a new mechanism for electron acceleration directly from a solid target. It promises to deliver significantly higher electron energies, and to simplify experimental implementation. Previous work on applications associated with electron acceleration to ponderomotive energies, from laser-solid interactions, include ion acceleration (25), fast ignition (26), laser hole-boring (27), high-order harmonic generation (28), half-cycle XUV pulse generation (29), Bremsstrahlung x-ray generation (30), and generation of terahertz radiation (31). Progress in these applications, stands to be advanced by availability of more energetic, shorter and denser electron bunches. In this article, the novel mechanism of ADA, in the interaction of radially-polarized (RP) laser pulses of ultrashort duration with solid wires, is conceived, intuitively explained and backed up by particle-in-cell (PIC) simulations. Our investigations demonstrate unprecedented attosecond picocoulomb electron bunches with energies around one hundred MeV, i.e., two orders of magnitude higher than the ponderomotive limit. In particular, employing RP pulses, electrons are extracted from a wire target, and accelerated by the laser fields, see Fig. 1 for a schematic. Self-injection with a small dephasing rate is caused by the collective motion of the plasma electrons and the complex laser field variations. Results Mechanism of anti-dephasing acceleration (ADA) Direct laser acceleration by LP pulses leads to the generation of periodic electron bunches (32) and CP pulses generate spiral currents (22). In these cases, the azimuthal electron motion (17) makes it difficult to define an acceleration phase. By contrast, the acceleration phase in an RP laser field is determined directly by the regions of negative axial electric fields, with their troughs at phases   where N is an integer (33). The radial and axial electric field components ( r E and z E ) of the unperturbed laser pulse are shown schematically in Fig. 1(A). When the RP laser pulse propagates along the wire of radius smaller than its own waist radius at focus, electrons get knocked out and subsequently accelerated by the laser field. In each laser cycle, distortion to the electron distribution, and the process of electron injection, can typically be described as follows, with the help of Fig. 1. The force associated with r E pulls electrons into the vacuum around phase1, and pushes them backwards relative to the target over phase 3. Meanwhile electrons get accelerated by z E around phase 2 of a half cycle, and decelerated during interaction with the subsequent half cycle. Due to the acceleration around the trough of z E , dephasing around 3 is weaker than that about phase 1. As shown schematically in Fig. 1 Collective effects of the perturbed electrons We consider an ultrashort RP laser pulse with 2 0  a irradiating a wire target. The laser peak , the radial electric field r E is negative, so it works to knock electrons out of the target wire and the negative z E acts to accelerate them forward, while the opposite happens over the interval is the initial phase of the laser pulse. Figure 2(A) shows the positive r E around the wire is strengthened, due to perturbation of the electrons, while negative r E is weakened. 
Figure 2(C) shows many electrons are collected beyond the cylindrical boundary of the wire, around the phase 0  , at which the unperturbed trough of is expected (34). The negative electric field gradient, induced by electrons in the annular bunch, shifts the trough of z E to 2 0    as shown in Fig. 2(B), where the trough of r E also sits. The near-wire and off-axis z E are shown by solid-red and dashed-red curves in Fig. 2(D). While the off-axis z E is unperturbed, the near-wire z E is determined by the time-averaged field, as a contribution from the space-charge of the perturbed electrons. Pre-acceleration of the electrons in the combined fields around the wire is enhanced by the large surface-to-volume ratio of the wire target, as illustrated in Fig. 3, where snapshots of the accelerating fields are shown, respectively, at times Prior to entering their respective acceleration phases, the electrons undergo radial and axial oscillations inside the target, forced by the laser field. Subsequently, i.e., when the field structure with forward-shifted negative z E and weakened negative r E are acting and the right phase is reached, they get pulled out of the target slowly and accelerated forward with velocities approaching c . Figures 3(E) and (F) show evolution in time of their phases and scaled energies, with the gray shades representing the unperturbed acceleration phase. It is shown, for example, that particles following these typical trajectories can stay in the region with negative z E , as they slowly drift in weakened negative r E , and can be injected into the accelerating phase with an injection (pre-acceleration) energy of 1 0   . Energy gain beyond the ponderomotive limit ADA occurs after the pre-accelerated electrons are injected into the laser fields. The dephasing rate is given by . Using the relativistic equations of motion, one gets with z p the axial electron momentum component, and The message of Eq. (2) is quite simple: for an electron initially at rest 1  R , but subjecting the electron subsequently to the action of a symmetrically oscillating z E alone will cause R to exhibit symmetric oscillations. However, the laser z E , combined with the z E generated by the plasma electrons [especially when comparable to 0 0 a E (14)] will reduce the dephasing rate significantly. is clearly greater than c for tightly focused lasers. Electrons launched into phase II achieve higher energy gains, because the target becomes hotter with time, allowing more electrons to be knocked out and injected. In this regime, stronger space-charge and pinch effects (22) enhance acceleration by the on-axis axial component z E and boost the pre-acceleration to smaller R (13). The dephasing rates shown in Fig. 3(H) hover around 01 . 0 R and can be as small as 0.005. The energy gain receives a boost from the reduction in R via Eq. (1) and approaches . Unfortunately, the optimal anti-dephasing conditions, which lead to maximum acceleration, cannot be sustained by all electrons. For example, the dephasing rates of electrons accelerated in phase I do not decrease monotonically, as shown in Fig. 3(H) due to the fact that they move radially off-axis, into regions for which beyond position of the peak of r E . By contrast, electrons in phase II sustain small R values and acquire higher energy, as shown in Fig. 3(G). Discussion The self-injection regime of electrons for ADA has been discussed through examples involving RP pulses in interaction with wire targets. 
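As a reference point for the super-ponderomotive gains discussed above, the following toy calculation (not the paper's PIC model) uses the textbook plane-wave invariants for an electron born at rest to show why, without anti-dephasing, the kinetic energy saturates at (1/2)a₀²mc².

```python
# Toy model: for an electron initially at rest in a plane wave that depends
# only on phase = omega*t - k*z, the invariants p_perp/(m c) = a(phase) and
# gamma - p_z/(m c) = 1 give gamma = 1 + a(phase)^2/2, so the kinetic energy
# peaks at the ponderomotive value (1/2) a0^2 m c^2 -- the limit that
# anti-dephasing acceleration is reported to exceed.
import numpy as np

M_E_C2_MEV = 0.511

def plane_wave_gamma(a_profile):
    """gamma(phase) for an electron born at rest in a plane wave a(phase)."""
    return 1.0 + 0.5 * a_profile**2

phase = np.linspace(-20, 20, 2001)
a0 = 2.0
a_profile = a0 * np.exp(-(phase / 8.0) ** 2) * np.cos(phase)  # short pulse

gamma = plane_wave_gamma(a_profile)
print("peak kinetic energy [MeV]:", (gamma.max() - 1.0) * M_E_C2_MEV)  # ~1 MeV
```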
The RP laser pulse with a discrete accelerating phase, as illustrated schematically in Fig. 1, generates an electron bunch with FWHM of about 481 attosecond, as shown in Fig. 4(A). More discrete attosecond electron bunches with super-ponderomotive energy can be generated with longer driving pulses. Energy spectra, of electrons driven by laser pulses of amplitudes 2 0  a and higher, are shown in Fig. 4(B), which shows evolution of the beam energy and charge with the driving laser intensity. Further PIC simulations show that the cutoff energies depend on 0 r only weakly, and that the results converge as the resolution is increased. Although the accelerating phase is hard to define in LP and CP pulses, electrons are also gathered at azimuthally-dependent phases. Therefore, the ADA regime works also with LP and CP pulses, resulting in the electrons getting accelerated by a continuous stable phase. 3D-PIC simulations, similar to the above, have been carried out, in which the RP pulses are replaced by LP and CP pulses of the same amplitudes 0 a , focal radii 0 . An RP pulse is evidently superior to the CP and LP pulses for generating high-energy electrons. In the RP case, electrons can be launched into discrete phases, as opposed to being injected into continuous phases using the other polarization modes. Thus, the beam charge in the former is smaller than in the latter. Total charges of the electron beams driven by ultrashort pulses, shown in Fig. 4, are of the order of tens of pC, and increase with laser duration (22,23). The ratio of the beam charge to the laser duration is similar to that in a laser-wire experiment of approximately the same intensity (36). Fig. 4(D), the energy gain does not get affected drastically, at least for CP pulses. Methods In our calculations, the wire is assumed to be a cylinder, of length L and radius 0 r , lying with its axis along z  , with . Fields of the incident RP pulse are derived from the normalized vector potential, , where a is given by (37)
2,795.8
2019-03-25T00:00:00.000
[ "Physics" ]
A Novel Isolate of Spherical Multicellular Magnetotactic Prokaryotes Has Two Magnetosome Gene Clusters and Synthesizes Both Magnetite and Greigite Crystals Multicellular magnetotactic prokaryotes (MMPs) are a unique group of magnetotactic bacteria that are composed of 10–100 individual cells and show coordinated swimming along magnetic field lines. MMPs produce nanometer-sized magnetite (Fe3O4) and/or greigite (Fe3S4) crystals—termed magnetosomes. Two types of magnetosome gene cluster (MGC) that regulate biomineralization of magnetite and greigite have been found. Here, we describe a dominant spherical MMP (sMMP) species collected from the intertidal sediments of Jinsha Bay, in the South China Sea. The sMMPs were 4.78 ± 0.67 μm in diameter, comprised 14–40 cells helical symmetrically, and contained bullet-shaped magnetite and irregularly shaped greigite magnetosomes. Two sets of MGCs, one putatively related to magnetite biomineralization and the other to greigite biomineralization, were identified in the genome of the sMMP, and two sets of paralogous proteins (Mam and Mad) that may function separately and independently in magnetosome biomineralization were found. Phylogenetic analysis indicated that the sMMPs were affiliated with Deltaproteobacteria. This is the first direct report of two types of magnetosomes and two sets of MGCs being detected in the same sMMP. The study provides new insights into the mechanism of biomineralization of magnetosomes in MMPs, and the evolutionary origin of MGCs. In addition, it has been reported that the formation of magnetosomes is controlled by a magnetosome gene cluster (MGC) [9,[36][37][38]. Two sets of MGCs thought to be involved in magnetite and/or greigite formation have been identified in several Deltaproteobacteria MTB, including two uncultured sMMPs (the greigite-producing Candidatus (Ca.) Magnetoglobus multicellularis and the magnetite-and greigite-producing Ca. Magnetomorum HK-1) [31,35,39], one uncultured eMMP (the magnetite-producing Ca. Magnetananas rongchenensis RPA) [40], and two cultured unicellular MTB (the magnetite-producing Desulfovibrio magneticus RS-1 and the magnetite-and greigite-producing Desulfamplus magnetomortis BW-1) [41,42]. The corresponding magnetosome composition and morphology were also detected in the spherical Ca. Magnetoglobus multicellularis (irregularly shaped greigite magnetosomes were biomineralized), unicellular Desulfovibrio magneticus RS-1 (bullet-shaped magnetite magnetosomes were biomineralized), and Desulfamplus magnetomortis BW-1 (irregularly shaped greigite and bullet-shaped magnetite magnetosomes were both biomineralized), respectively [16,42,43], except for the ellipsoidal Ca. Magnetananas rongchenensis RPA and the spherical Ca. Magnetomorum HK-1. While the RPA could biomineralize both bullet-shaped magnetite and rectangular greigite crystals, the magnetosomes in the HK-1 were not described, although it showed 99.3% identity to the spherical Ca. Magnetomorum rongchengroseum, which was collected in another region and is reported to produce both magnetite and greigite particles [21]. In brief, there is no direct evidence whether two sets of MGCs regulate the synthesis of the two types of magnetosomes (magnetite and greigite) in the same MMP. 
In this study, undertaken in the intertidal zone of Jinsha Bay (South China Sea), we found a novel sMMP that contained two paralogous magnetosome gene clusters and could concurrently biomineralize both bullet-shaped magnetite and irregularly shaped greigite magnetosomes. We sequenced the genome of this sMMP, and showed that it was more integrated than previously reported MMPs. It simultaneously contained sets of both mam ('magnetosome membrane') and mad ('magnetosome associated Deltaproteobacteria') gene clusters, which implied the involvement of independent processes for synthesizing the two distinct types of magnetosome. Sampling and Enrichment of MMPs Sediment samples were collected from sites in the low-tide region of the intertidal zone in Jinsha Bay (Zhanjiang City, China; 21 • 16.267 N, 110 • 24.067 E) on 30 August 2020 and 7 September 2021. The salinity at these sites was measured (WTW Cond 3210 SET 1; Xylem, Germany), and ranged from 19.5 to 24.2‰ during the sampling periods. Samples of the subsurface sediment and in situ seawater (approximately 1:1) were transferred to the laboratory in sterile plastic bottles, and incubated in dim light at an ambient temperature for subsequent analyses. To enrich MMPs from the sediment, each plastic bottle was shaken to mix the sediment and water, and two magnets were attached externally, one each to opposite sides of the bottle adjacent to the seawater-sediment interface [44]. MMPs attracted to the magnets were removed, and purified magnetically for a subsequent study using the modified racetrack method [45]. Optical and Electron Microscopy The morphology and motility of the purified MMPs were observed using the hanging drop method using differential interference contrast (DIC) microscopy (Olympus BX51 equipped with a DP80 camera system; Olympus, Tokyo, Japan) [46]. For scanning electron microscopy (SEM) observations, each sample was fixed in 2.5% glutaraldehyde for >3 h at 4 • C, filtered onto a polycarbonate filter (high-density pores, 1 µm diameter; Whatman), dehydrated through a gradient of ethanol concentrations, dried, and gold-coated. The goldcoated samples were observed using a KYKY-2800B SEM (KYKY Technology Development Ltd., Beijing, China) operating at 25 kV. For transmission electron microscopy (TEM) observations of the MMP and magnetosomes morphologies, 10 µL samples of MMPs, which were purified using the racetrack method and concentrated by slight centrifugation, were deposited on formvar carbon-coated copper grids, washed three times with distilled water, and examined using a Hitachi HT7700 TEM (Hitachi Ltd., Tokyo, Japan) operating at 100 kV, and a JEM-2100 TEM (JEOL Ltd., Tokyo, Japan) operating at 200 kV. The composition of magnetosomes was analyzed using high-resolution transmission electron microscopy (HRTEM; JEM-2100 TEM) equipped for energy-dispersive X-ray spectroscopy (EDXS). Genomic DNA Extraction, Whole Genome Amplification, and Phylogenetic Analysis of 16S rRNA Genes The sMMPs were sorted using a TransferMan ONM-2D micromanipulator and a CellTram Oil manual hydraulic pressure-control system (IM-9B) installed on a microscope (Olympus IX51; equipped with a 40 × LD objective, Tokyo, Japan) [6,21,26,47]. The microsorted MMPs stored in PBS were repeatedly freeze-thawed, after which whole genome amplification (WGA) of MMPs was performed using multiple displacement amplification (MDA) for 8 h using the REPLI-g Single Cell kit (cat. #150343; Qiagen, Hilden, Germany), according to the manufacturer's instructions [26]. 
The WGA products were stored at −80 • C for genome sequencing and 16S rRNA gene analysis. The WGA products were diluted and used for amplification of the 16S rRNA gene. Universal bacterial primers 27F (5 -AGAGTTTGATCCTGGCTCAG-3 ) and 1492R (5 -GGTTACCTTGTTACGACTT-3 ) were used for the polymerase chain reaction (PCR) in a Mastercycler (Eppendorf, Hamburg, Germany). The purified and retrieved PCR products were cloned into the pMD18-T vector (Takara, Shiga, Japan), and transferred into competent Escherichia coli (strain DH5α) (Takara, Japan). Clones were selected randomly and sequenced using the vector primers M-13 and RV-M (Ruibio BioTech Co. Ltd., Qingdao, China). The 16S rDNA sequences obtained for the sMMPs were aligned against the nr/nt database using the BLAST search program (http://www.ncbi.nlm.nih.gov/BLAST/ accessed on 18 November 2021). The 16S rDNA sequences of reference MMPs and unicellular MTB were downloaded from the GenBank database. All sequences were aligned using Clustal W software version 2.1 [48], and a phylogenetic tree was constructed using the maximum likelihood (ML) method using the IQ-TREE software version 2.0.3 under the best-fit GTR+F+R4 model [49]. Bootstrap values were calculated using 1000 replicates. The tree was visualized and adjusted using iTOL webtool version 6.4.3 (https://itol.embl.de/ accessed on 26 November 2021), and was rooted with unicellular Deltaproteobacteria MTB. Fluorescence In Situ Hybridization (FISH) A specific oligonucleotide probe JSMW7 (5 -GCCACCTTTCATCTAATCTATC-3 ) was designed for the 16S rDNA sequence corresponding to position 185-206 of the target sMMP, and its specificity was evaluated using the online probe-match tool (http://rdp. cme.msu.edu/probematch/search.jsp, accessed on 19 November 2021) [50]. The specific probe was labeled with hydrophilic sulfoindocyanine Cy3 as the fluorescent dye at the 5 end. The universal probe EUB338 (5 -GCTGCCTCCCGTAGGAGT-3 ) was used as the positive control in hybridization, and was labelled with fluorescein phosphoramidite FAM at the 5 end. Appropriate amounts of E. coli cells were added to the sample and mixed with the target sMMP cells as negative controls. The specimen was treated and prepared as described previously [51,52], and FISH was carried out according to protocols reported early [51,53,54]. The hybridization results were observed using an Olympus BX51 epifluorescence microscope equipped with a DP80 camera system (Olympus, Tokyo, Japan). Sequencing, Assembly, and Genome Annotation, and Comparative Analysis of Magnetosome Genes and Proteins Paired-end 100 bp (PE100) libraries were constructed from the produced DNA of the WGA method using the MGI Easy FS DNA Library Prep Set kit (MGI, Shenzhen, China), according to the manufacturer's instructions. The genome was sequenced using the DNBSEQ-T1 platform (BGI-Qingdao, Qingdao, China), using the paired-end 100 bp library. After quality trimming and filtering using SOAPnuke version 2.1.6 [55], the reads were assembled using MEGAHIT version 1.2.8, using k-mer sizes from 27 to 255 by step 20 [56]. Then, the metaWRAP version 1.2.1 pipeline was used for metagenome binning, refinement, and reassembly with default parameters to select the pure genome [57]. The quality of MMP genomes was assessed using QUAST version 5.0.2 [58], and genomic completeness and contamination were estimated using CheckM version 1.1.3 [59]. The genome was annotated using Prokka version 1.14.5 [60]. 
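The assembly, quality-control, and annotation steps described above could be scripted along the following lines. The tool names follow the text (MEGAHIT, QUAST, CheckM, Prokka), but the specific flags, file names, and versions are illustrative assumptions and may differ from the authors' actual settings.

```python
# Hedged sketch of the assembly/QC/annotation workflow described in the text,
# driven from Python. Flags and file names are assumptions for illustration.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Assembly with the k-mer range quoted in the text (27..255, step 20).
run(["megahit", "-1", "reads_R1.fastq.gz", "-2", "reads_R2.fastq.gz",
     "--k-min", "27", "--k-max", "255", "--k-step", "20", "-o", "megahit_out"])

# Assembly quality, completeness/contamination, and annotation.
run(["quast.py", "megahit_out/final.contigs.fa", "-o", "quast_out"])
run(["checkm", "lineage_wf", "-x", "fa", "bins_dir", "checkm_out"])
run(["prokka", "--outdir", "prokka_out", "--prefix", "MMP_XL1",
     "refined_bin.fa"])
```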
Several available genomes of Deltaproteobacteria MTB were obtained from the GenBank database, including Desulfovibrio magneticus RS-1 [41], Desulfamplus magnetomortis BW-1 [42], Ca. Magnetoglobus multicellularis [35,39], Ca. Magnetomorum HK-1 [31], and Ca. Magnetananas rongchenensis RPA [40]. A comparative analysis of MGCs was performed using MagCluster version 0.2.0 [61] and clinker [62], followed by manual inspection. The putative magnetosome proteins were confirmed using NCBI PSI-BLAST [63], and the annotations were corrected manually. Comparative analyses of putative magnetosome proteins were performed using BLASTP (http://www.ncbi.nlm.nih.gov/BLAST/, accessed on 13 December 2021). The phylogenetic trees based on the Mam and Mad protein sequences were both constructed using the maximum likelihood (ML) method, using IQ-TREE software version 2.0.3 under the same best-fit LG+F+I+G4 model [49]. Bootstrap values were calculated using 1000 replicates. The trees were visualized and adjusted using the iTOL webtool version 6.4.3 (https://itol.embl.de/, accessed on 15 January 2022). Occurrence, Structure, and Motility of the sMMPs Unicellular MTB and highly abundant sMMPs were observed in the intertidal sediment from one site in Jinsha Bay (Figure 1a). Yellow and gray layers of sand were present in the sediment, with the yellow layer approximately 1 cm above the gray layer. The sMMPs were present at a maximum abundance of approximately 304 inds./cm³ in the gray layer. The sMMPs were autofluorescent when illuminated with green, blue, violet, or UV light. The cellular interfaces were evident using autofluorescence excitation at 400-410 nm and 330-385 nm wavelengths (Figure 1f,g), but the cellular contours of MMPs were not distinct when the cells were exposed to illumination at 510-550 nm and 450-480 nm (Figure 1d,e). The average diameter of the sMMPs was 4.78 ± 0.67 µm (n = 237), and the average size of each individual unit was 1.03 ± 0.15 µm (n = 64) in the largest dimension. An analysis of DIC images (Figure 1b) and SEM micrographs (Figure 1c) showed that each sMMP contained approximately 14-40 constituent cells arranged with helical symmetry, which also appeared as radial symmetry in section images (n = 24). Use of the hanging drop method showed that >80% of magnetically enriched sMMPs exhibited north-seeking polarity (Figure 1a), and swam along the magnetic field lines with an average speed of approximately 78.0 ± 41.4 µm/s (v, n = 38; maximum velocity, 177.4 µm/s) along a straight or helical trajectory (Figure 2a). Typical "ping-pong" motility was also observed at droplet edges: north-seeking sMMPs that accumulated at the edge showed an excursion swim against the magnetic field lines away from the droplet edge (Figure 2b), and then performed a return swim along the magnetic field lines back to the droplet edge (Figure 2c).
Additionally, the average speeds of excursion and return were 223.9 ± 54.5 µm/s (v1, n = 24; maximum velocity, 330.5 µm/s) and 102.2 ± 19.0 µm/s (v2, n = 24; maximum velocity, 138.4 µm/s), respectively (Figure 2b,c). Characterization of Magnetosome Biomineralization in the sMMPs TEM observations showed the simultaneous presence of both bullet-shaped and irregularly shaped magnetosomes arranged in chains or clusters within constituent cells of the sMMPs from the South China Sea (Figure 3a,b). The average number of magnetosomes per individual cell was 30 ± 11 (n = 31), with irregularly shaped crystals (Figure 3b, yellow circle) accounting for 56.7% on average and the remainder being bullet-shaped crystals (Figure 3a, red circle). HRTEM and EDXS analyses indicated that the bullet-shaped magnetosomes were composed of magnetite (Fe3O4) (Figure 3c-e). The magnetite particles were 87.0 ± 20.3 × 35.2 ± 3.5 nm in size and had a width/length ratio of 0.42 ± 0.08 (Figure 3f-h). These analyses (HRTEM and EDXS) also showed that the irregularly shaped crystals were composed of greigite (Fe3S4) (Figure 3i-k). The greigite particles were approximately 55.2 ± 7.3 nm in width and had a width/length ratio of 0.77 ± 0.11 (n = 215) (Figure 3). Phylogenetic Analysis of the sMMPs We isolated two sMMPs using the micromanipulation sorting method, and their genomic DNA was extracted and amplified by WGA using MDA. The 16S rRNA genes were amplified, cloned, and sequenced from the WGA product. Sequences (53) related to the MMPs were obtained from 55 randomly chosen clones. All the MMP sequences shared an identity of at least 99.1%, indicating that the sMMPs represented a single species. FISH was used to corroborate the authenticity of the 16S rRNA gene sequences. Fluorescence microscopy observations showed that all bacterial cells were hybridized with the general probe EUB338 (Figure S1a, green), while only the spherical MMP cells were hybridized with the specific probe JSMW7, designed from the 16S rRNA sequence of the sMMP (Figure S1b, red). These results demonstrated the specificity of the FISH analysis. The 16S rRNA gene sequence of the sMMP was most closely related (93.3% shared sequence identity) to that of the uncultured delta proteobacterium clone SY_48 (MW356768), which was collected from a mangrove area in Sanya [23]. It also showed 6.8-7.0% sequence divergence from the uncultured delta proteobacteria clones mmp2_9 (DQ630712), mmp45 (DQ630684), and mmp12 (DQ630669), from the Little Sippewissett salt marshes in Falmouth [17]. The phylogenetic analysis of the 16S rRNA gene sequence revealed that these five sMMP clones formed another group of spherical-type MMPs, which belong to the Deltaproteobacteria (Figure 4). We tentatively designate this novel isolated sMMP from Jinsha Bay at Zhanjiang City as MMP XL-1 (Figure 4).
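The within-population identity figures quoted above (e.g., the ≥99.1% shared identity among the MMP clones) rest on pairwise percent-identity calculations over aligned sequences, which can be sketched as follows. The two short fragments are hypothetical, not real 16S data; the first 20 bases simply reuse the 27F primer sequence given earlier.

```python
# Minimal sketch of percent identity between two pre-aligned sequences
# (same length, gaps as '-'); gap columns are skipped.
def percent_identity(aligned_a, aligned_b):
    assert len(aligned_a) == len(aligned_b), "sequences must be aligned"
    compared = matches = 0
    for x, y in zip(aligned_a, aligned_b):
        if x == "-" or y == "-":
            continue
        compared += 1
        matches += (x == y)
    return 100.0 * matches / compared

seq1 = "AGAGTTTGATCCTGGCTCAG-GACGAACGCTGGCGGC"   # hypothetical fragment
seq2 = "AGAGTTTGATCCTGGCTCAGAGACGAACGTTGGCGGC"   # hypothetical fragment
print(round(percent_identity(seq1, seq2), 1))
```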
(In Figure 4, the sMMPs are labeled by purple circles and the eMMPs by orange ellipses.) General Genomic Features of the Proposed MMP XL-1 and Comparative Genomic Analysis of Magnetosome Gene Clusters Approximately 8.88 µg of genomic DNA was obtained from the two micro-sorted sMMPs following WGA. Following sequencing, assembly, and binning, a draft genome of approximately 8.49 Mb in size and having a GC content of 34.6% was obtained. The genome contained 279 contigs and had an N50 value of 60,219 bp. A total of 5329 coding sequences (CDS) were predicted and annotated, including 46 tRNAs, one tmRNA, and one complete rRNA gene operon. The estimated completeness and contamination of the genome were 97.6% and 1.6%, respectively. The 11 homologous Mam proteins in clusters 1 and 2 had identities ranging from 46.3% to 83.8% (Table 1, in red), and the six homologous Mad proteins detected in clusters 3 and 4 showed identities between 52.1% and 72.2% (Table 1, in black). There may therefore be two sets of homologous magnetosome gene clusters in the genome of the sMMP from Jinsha Bay, one responsible for magnetite magnetosome biomineralization (clusters 1 and 3) and another associated with greigite magnetosome biomineralization (clusters 2 and 4) (Table 1 and Figure 5). A strong correlation in the phylogenies involved in magnetosome biomineralization was found, based on the Mam and Mad protein amino acid sequences (Figure 6). In the phylogenetic tree of Mam proteins, the magnetite-related Mam proteins (including those from XL-1, HK-1, RPA, RS-1, and BW-1) were gathered in a single branch, while the greigite-related Mam proteins from HK-1, Ca. Magnetoglobus multicellularis, XL-1, and BW-1 were gathered in a different branch (Figure 6a). Similar clades were present in the phylogenetic tree of Mad proteins. The Mad proteins from XL-1, HK-1, and RPA involved in magnetite biomineralization were clustered in one branch, while the Mad proteins involved in greigite biomineralization were clustered in another branch, including those detected in HK-1, Ca. Magnetoglobus multicellularis, and XL-1 (Figure 6b).
Discussion In this study, we isolated a dominant type of sMMP from the intertidal sediments of Jinsha Bay, in the South China Sea. Phylogenetically, the sMMPs could be classified as a novel species of Deltaproteobacteria based on 16S rRNA gene analysis, since they shared less than 98.65% 16S rRNA gene sequence identity with their closest relative [65]. Each aggregate contained approximately 14-40 constituent cells, and the individual cells of these sMMPs were arranged loosely in helical symmetry, which differs slightly from the tight arrangement of other previously reported sMMPs [16,18,20,21]. Their diameter was approximately 4.78 µm, smaller than most reported sMMPs, except for the sMMPs (4.6 µm average diameter) collected from the mangrove habitat in the Sanya River (Table 2) [16,18,20,21,23,66]. The velocity of the magnetotaxis motility and the ping-pong motion of the MMP XL-1 was more similar to that of the ellipsoidal Ca. Magnetananas rongchenensis than to other sMMPs (Table 2) [26]. Both bullet-shaped magnetite and irregularly shaped greigite crystals were simultaneously biomineralized by this sMMP (Figure 3), which is a phenomenon previously observed in sMMPs from Itaipu Lagoon (Brazil), Lake Yuehu (China), and the Sanya mangroves (China) (Table 2) [21,23,67]. It has been reported that MTB show a clear vertical distribution [33] in the oxic-anoxic interface zone and/or anoxic regions [2]. Magnetite-producing MTB usually inhabit the top of the oxic-anoxic interface, whereas greigite-producing MTB prefer the reducing environment at the base and slightly below the interface [5,68]; this implies that those sMMPs that can biomineralize both magnetite and greigite may have broader vertical niches. (In Table 2, "L" and "W" indicate the length and width of magnetosome crystals, respectively, and "-" means no data were acquired.) The characteristics described above show that MMP XL-1 represents a unique type of sMMP; consequently, it was further investigated using genomic studies. To date, only two draft genomes of sMMPs have been obtained, those of Ca. Magnetoglobus multicellularis and Ca. Magnetomorum HK-1 [31,35]. The genome of Ca. Magnetoglobus multicellularis was the first MMP genome to be analyzed; this showed that the genes involved in magnetite biomineralization were homologous with the genes in greigite-producing MTB, suggesting that the magnetotactic trait is monophyletic [35,39]. Two sets of magnetosome genes, one each responsible for magnetite and greigite biosynthesis, were identified in HK-1, providing the first evidence that two divergent magnetosome gene clusters can co-occur in a single MMP genome [31]. Subsequently, comparative genomic analysis was carried out to clarify which genes within XL-1 govern the biomineralization of magnetite and of greigite, respectively. Two sets of relatively complete magnetosome gene clusters, located on two contigs, were recognized in the genome (Table S1). One set showed synteny with homologous regions for the magnetite MGC of Ca. Magnetomorum HK-1 and the magnetite-producing Ca.
Magnetananas rongchenensis RPA, while another set had a higher similarity with the MGC cluster from the greigite-producing Ca. Magnetoglobus multicellularis and the greigite MGC of Ca. Magnetomorum HK-1 ( Figure 5) [31,39,40]. In addition, two sets of mam genes involved in magnetite and greigite formation were detected in the unicellular Desulfamplus magnetomortis BW-1, and these were homologous with those of MMP XL-1 [42,64]. Synteny of the MGCs between this sMMP (MMP XL-1) and other Deltaproteobacteria bacteria is conserved, providing strong evidence that magnetite and greigite magnetosomes can be biomineralized by XL-1, controlled by two sets of MGCs ( Figure 5). This is the first direct report that an MMP can contain two types of magnetosomes and corresponding MGCs ( Table 2). Only greigite crystals and greigite-related genes were identified in Ca. Magnetoglobus multicellularis [16,35,39]. Although two types of MGCs were detected in the spherical Ca. Magnetomorum HK-1, the description of the magnetosome particles was not included [31]. Among eMMPs, only for one species (Ca. Magnetananas rongchenensis RPA) has MGC information been reported to date, and its involvement was limited to magnetite formation [26,40]. Intriguingly, we found that two sets of genes coding for proteins related to magnetite and greigite formation within XL-1 were paralogous. They shared variable degrees of amino acid similarity 46.3-83.8% (Table 1). The similarity between its own magnetite and greigite proteins was smaller than that between its magnetite proteins and corresponding magnetite proteins of other Deltaproteobacteria MTB. Additionally, a similar phenomenon was shown when using the greigite proteins for comparison (Table S2), which was consistent with the similarity difference of MGCs of the Ca. Magnetomorum HK-1. The greigite genes of HK-1 were more congruent with the greigite genes of other MTB than with its own magnetite genes [31]. This finding may be consistent with the hypothesis that the occurrence of two sets of magnetosome genes in Deltaproteobacteria MTB may have originated from ancient gene duplication and/or mutation in a common ancestor, then distributed to various MTB by HGT [47], and subsequent vertical inheritance by descent [31,70,71]. It is noteworthy that the arrangement and identities of magnetosome genes on clusters 1 and 3 of XL-1 were similar to the corresponding magnetite genes of RPA. The set of magnetosome genes on the clusters 2 and 4 of XL-1 had a similar gene order and high similarity to the greigite gene clusters of Ca. Magnetoglobus multicellularis, which implied that the clusters 1 and 3 were involved in the magnetite biomineralization, while clusters 2 and 4 were responsible for the greigite biomineralization within XL-1, respectively ( Figure 5). Additionally, the degree of the paralogous Mam and Mad proteins similarities mentioned above was almost the same (46.3-83.8% of the Mam proteins and 52.1-72.2% of the Mad proteins) (Table 1). Furthermore, the clades in the phylogenetic trees based on the Mam and Mad protein sequences were closely consistent as well (Figure 6), suggesting these two sets of mad genes (mad23, mad24, mad25, mad26, and mad27) were likely involved in the magnetite and greigite separately, which were the same as the mam genes ( Figure 5, clusters 3 and 4) [31,42,64]. 
The short gene cluster that included mad23, mad24, mad25, and mad26 has also been identified in all Nitrospirae MTB that synthesize bullet-shaped magnetosomes [72,73], which implies that they are also the core genes for magnetosome biomineralization. In addition, two branches were readily distinguishable on each of the Mam and Mad phylogenetic trees; one of these was involved in magnetite biomineralization and the other with greigite biomineralization (Figure 6). This provides further evidence for the presence of two sets of MGCs within MMP XL-1. Conclusions In this study, we firstly isolated sMMPs from the intertidal sediments of Jinsha Bay, in the South China Sea. Using TEM and EDXS analyses, we showed that both bulletshaped magnetites and irregularly shaped greigite crystals were biomineralized within one spherical multicellular aggregation. SEM observations indicated that individual units within the aggregation were arranged with radial symmetry. Phylogenetic analysis showed that the sMMPs formed a new clade with four other MMP clones (MW356768, DQ630712, DQ630684, and DQ630669), and it was affiliated to the Deltaproteobacteria. Two sets of MGCs involved in magnetite and greigite biomineralization were identified in the genome, and these were found to be more integrated than what has previously been reported in MMPs. The two mamAB-like operons coupled with the downstream mad gene cluster have the potential to control magnetosome biomineralization (magnetite or greigite). It has displayed direct evidence that both magnetosome morphologies and the MGCs related to the two types of magnetosome co-occur in the same species, which provides new information relevant to the study of biomineralization mechanisms associated with the magnetosomes of MMPs, and the evolutionary origin of MGCs.
6,118
2022-04-28T00:00:00.000
[ "Biology" ]
Iterative addition of parallel non-local effects to full wave ICRF finite element models in axisymmetric tokamak plasmas The current response of a hot magnetized plasma to a radio-frequency wave is non-local, turning the electromagnetic wave equation into an integro-differential equation. Non-local physics gives rise to wave physics and absorption processes not observed in local media. Furthermore, non-local physics alters wave propagation and absorption properties of the plasma. In this work, an iterative method that accounts for parallel non-local effects in 2D axisymmetric tokamak plasmas is developed, implemented, and verified. The iterative method is based on the finite element method and Fourier decomposition, with the advantage that this numerical scheme can describe non-local effects while using a high-fidelity antenna and wall representation, as well as limiting memory usage. The proposed method is implemented in the existing full wave solver FEMIC and applied to a minority heating scenario in ITER to quantify how parallel non-local physics affect wave propagation and dissipation in the ion cyclotron range of frequencies (ICRF). The effects are then compared to a reduced local plane wave model, both verifying the physics implemented in the model, as well as estimating how well a local plane wave approximation performs in scenarios with high single pass damping. Finally, the new version of FEMIC is benchmarked against the ICRF code TORIC. Introduction Ion cyclotron resonance heating (ICRH) is a robust and versatile method for heating fusion plasmas that is commonly used in present fusion experiment and planned to be installed in both ITER and SPARC [1,2].In large scale devices it is the only heating scheme available that can provide dominant ion heating, making it an attractive system for heating the plasma to fusion relevant conditions in a reactor.In addition, ICRH can be used to reduce turbulent transport, control impurities, and to provide non-inductive current drive [3][4][5][6][7]. The most common use of ICRH is to launch a fast magnetosonic wave that is absorbed by ions at fundamental or harmonic cyclotron resonances, or by the electrons via a combination of transit time magnetic pumping (TTMP) and electron Landau damping (ELD) [8].In addition, the wave may be mode converted into ion Bernstein waves (IBW) and ion cyclotron waves (ICW) [9][10][11].Although, if the resonance and ion-ion hybrid layers overlap, which they do for standard minority heating scenarios in large and most medium-sized tokamaks, mode conversion is negligible [12].In this work we will therefore focus on numerical modelling of the fast magnetosonic wave and not consider mode conversion. At fusion relevant temperatures and densities, the mean free path is longer than the gradient scale lengths.The dielectrically induced current is therefore non-local, meaning that the velocity of a particle in a given point is dependent on the cumulative acceleration in all previous positions.This turns the electromagnetic wave equation into an integrodifferential equation, which is not natural to solve using standard numerical methods, such as the finite element method (FEM).Furthermore, the wavelengths of the fast magnetosonic wave are non-negligible compared to the size of the machine, indicating that the wave equation is most appropriately solved using a full wave solver. 
The non-local nature of the plasma exhibits different properties perpendicular and parallel to the magnetic field lines.In the perpendicular direction, the non-locality arises due to finite Larmor radius (FLR) effects.The particle makes excursions from the magnetic field line and will be subject to different electric field strengths along its gyro-orbit.In fast wave heating scenarios, the ion Larmor radius is usually small compared to the wavelength and is therefore often used as a smallness parameter for expansion of the perpendicular plasma response tensor.Using such an expansion, the perpendicular non-locality can be modelled using for example a FEM description [13][14][15][16].Perpendicular non-locality is responsible for effects such as higher harmonic cyclotron damping, TTMP, and mode conversion to the IBW [17][18][19]. In the parallel direction, the non-locality connects the velocity of a particle in a given point with the acceleration it has received at previous locations on the guiding center orbit, which to the lowest order in Larmor radius is parallel to the magnetic field.Unlike for the perpendicular response, the is no natural smallness parameter that can be used to expand the parallel response in the ion cyclotron frequency range, and the integral operator must be treated in full.In quasi-homogeneous plasmas, the integral operator is a convolution between the plasma response tensor and the electric wavefield.Therefore, the parallel response is often treated in Fourier space, as the convolution operator transforms into a multiplicative relation [15,16,20].In Fourier space, the plasma response tensor is a function of the wavevector, which means that each plane wave component travels with a different velocity, a phenomenon known as spatial dispersion [21].Although Fourier methods are well suited for treating the integral operator, they produce dense system matrices that are computationally expensive to invert [22], and struggle with realistic wall and antenna geometries.Furthermore, they are inefficient when describing localized small-scale phenomena such as the IBW and ICW as the discretization cannot be locally refined.To combine Fourier methods with detailed scrape-off layer simulations, it has been demonstrated that it is possible to combine two independent solutions inside and outside the plasma boundary by using mode matching techniques [23,24]. Many wave solvers neglect parallel non-local effects, with the benefit of geometrical fidelity and low computational cost [13,14,25].In practice, this can be done by using a cold plasma model (which is independent of the parallel wavenumber k ∥ ), by neglecting the poloidal field, or by guessing a value for k ∥ .In axisymmetric models, the dominating toroidal contribution to k ∥ is known exactly, whereas the poloidal contribution is not known a priori and often neglected.When the poloidal contribution is correctly accounted for, it is said to give a corresponding upshift or downshift in the parallel wavenumber. 
To retain the advantages of the FEM, while including a more detailed plasma description, it is possible to add nonlocal terms to the electromagnetic wave equation iteratively.It was demonstrated in [26] that a small non-local anti-Hermitian contribution can be added iteratively to a single response tensor component in a 2D axisymmetric tokamak plasma to account for ELD in the lower hybrid range of frequencies.It has also been demonstrated that finite temperature effects, in particular Langmuir waves and ELD, can be added iteratively to a 1D finite difference model, where the initial guess is a cold plasma model [27] and the non-local contribution was computed using a kinetic description.In [27], it was also shown that without using acceleration techniques, convergence was slow and sometimes unstable.Hence, the authors applied minimum polynomial extrapolation [28] to make convergence more robust.Furthermore, it was demonstrated in [29,30] that both parallel and perpendicular non-local effects can be added iteratively to 1D axisymmetric JET-like plasmas in the ion cyclotron range of frequencies.In [30], perpendicular nonlocal effects were added to all tensor components.As such, it was possible to model mode conversion from the fast magnetosonic wave to the electrostatic IBW, as well as the propagation and damping of the IBW.Here, the non-local contribution was evaluated using wavelet transforms.To enhance convergence, the authors applied Anderson acceleration [31]. In this work, we show that it is possible to generalize this iterative method to account for non-local effects in all response tensor elements in a 2D axisymmetric tokamak plasma for ICRH applications.The non-local effects are evaluated spectrally by Fourier decomposing the electric wavefield along the magnetic flux surfaces, and the initial guess is evaluated using a local approximation of the hot plasma susceptibility tensor.The proposed iterative scheme is implemented in the full wave solver FEMIC [14] (resulting in FEMIC version 1.6) and is verified for a fast wave heating scenario in ITER.Furthermore, the iterative method will be compared to a reduced model using a local plane wave approximation (PWA).The purpose of this is twofold.First, a PWA model can be used to qualitatively verify the proposed iterative method, and second, given that the iterative method provides reliable results, it is interesting to see how well the local plane wave model performs quantitatively.The iterative model and the PWA model are compared both in terms of local power deposition and integrated power partition. The remainder of this paper is organized as follows.In section 2, we describe the iterative method used in this work, including the evaluation of the non-local contribution as well as the local hot plasma susceptibility description.In section 3, we make a detailed study of how non-local effects change the local power deposition in a fast wave heating scenario in ITER.The iterative method is then benchmarked against the ICRF code TORIC [15] in section 4.This is followed by a discussion of the results in section 5 and conclusions in section 6. 
Iterative method to account for a non-local dielectric tensor

In this section, we develop an iterative method for solving the wave equation in a 2D axisymmetric tokamak plasma with non-local response. First, in section 2.1, we describe the principles of an iterative method based on the FEM and Anderson acceleration. The plasma response is separated into a local part and a non-local dispersive contribution. Then, in section 2.2, we show how we can find a local response tensor in practice, and how to evaluate the non-local contribution using Fourier spectral decomposition. In section 2.3 we present a local plane wave model that can be used to estimate the local up- and downshift in the parallel wavenumber, and in section 2.4 we describe the local power deposition model used in this work.

Wave equation and operator splitting

The electromagnetic wave equation for a non-local plasma response is given by

∇ × ∇ × E − (ω²/c²) (I + Σ_j χ̃_j) · E = iωμ₀ J_ext,   (1)

where J_ext is the antenna current, χ̃_j is the susceptibility tensor for species j, and I is the identity tensor. Here, the tilde denotes that in real space, χ̃_j is an integral operator acting on the electric field E [32]. In an axisymmetric configuration, the toroidal Fourier modes decouple such that

E(R, ϕ, Z) = Σ_{n_ϕ} E_{n_ϕ}(R, Z) exp(i n_ϕ ϕ),   (2)

where n_ϕ is the toroidal mode number and (R, ϕ, Z) are cylindrical coordinates. The toroidal mode number enters the wave equation as a parameter and the wave equation can be solved for one toroidal mode at a time.

To separate local from non-local physics, the susceptibility tensor can be split into two terms,

χ̃_j = χ_j + δχ̃_j,   (3)

where χ_j is a local approximation of χ̃_j and δχ̃_j is the difference between χ̃_j and χ_j. Moving δχ̃_j to the right hand side of the wave equation (1), we can introduce a non-local source current density to the wave equation, namely

δJ_NL = −iωε₀ Σ_j δχ̃_j E.   (4)

In accordance with [30], we will call this induced current the correction current density, because it is correcting for the errors we introduce by using a local susceptibility tensor.

As the correction current density is evaluated using the non-local operator δχ̃_j, it is not natural to represent it using the FEM. This can be solved by iterating between the wave equation (1) and the correction current (4). The electric wavefield in iteration step k would then be obtained by solving

∇ × ∇ × E^(k) − (ω²/c²) (I + Σ_j χ_j) · E^(k) = iωμ₀ (J_ext + δJ_NL^(k)),   (5)

δJ_NL^(k) = −iωε₀ Σ_j δχ̃_j E^(k−1),   (6)

with the initial correction current density δJ_NL^(1) = 0. However, it was shown in [27] that a conventional fixed-point iteration scheme tends to diverge even for simple plasma models. To achieve convergence, the authors employed Anderson acceleration. Similar techniques are employed in [29,30] as well as in the present work. Adding this step to the iterative process, the electric wavefield in iteration step k is obtained by solving

∇ × ∇ × E^(k) − (ω²/c²) (I + Σ_j χ_j) · E^(k) = iωμ₀ (J_ext − iωε₀ Σ_j δχ̃_j A(E^(k−1), E^(k−2), …, E^(1); β_k)),   (7)

where the operator A indicates Anderson acceleration and β_k is a damping parameter. Note that the authors of [27] chose to iterate on the correction current density, whereas we found that iterating over the electric field provided more reliable convergence in the scenarios described in this work. Convergence can be optimized by, for example, tuning the damping parameter (which can be a function of k) and the number of stored solutions. For further details on Anderson acceleration and how it relates to other acceleration techniques, we refer to [31].
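To make the structure of the iteration concrete, the sketch below shows a generic Anderson-accelerated fixed-point loop of the kind described above. It is only an illustration of the scheme, not the FEMIC implementation: solve_wave_equation and correction_current are hypothetical placeholders for the FEM solve and the non-local evaluation, and the window length m, damping β, and tolerance are arbitrary example values.

```python
import numpy as np

def anderson_mix(X, G, beta=0.5):
    """One Anderson mixing step given histories X = [x_1, ..., x_m] and
    G = [g(x_1), ..., g(x_m)] of a fixed-point map g. Returns the next iterate."""
    F = [g - x for x, g in zip(X, G)]                 # fixed-point residuals
    if len(X) == 1:                                   # plain damped iteration
        return X[-1] + beta * F[-1]
    dX = np.column_stack([X[i + 1] - X[i] for i in range(len(X) - 1)])
    dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
    gamma, *_ = np.linalg.lstsq(dF, F[-1], rcond=None)  # least-squares coefficients
    # Walker-Ni form of the Anderson update with damping parameter beta
    return X[-1] - dX @ gamma + beta * (F[-1] - dF @ gamma)

def iterate_wavefield(solve_wave_equation, correction_current, E0,
                      m=4, beta=0.5, tol=1e-3, max_iter=50):
    """Fixed-point iteration E -> solve(J_ext + dJ_NL(E)) with Anderson acceleration.

    solve_wave_equation(dJ) and correction_current(E) stand in for the FEM solve
    and the spectral non-local evaluation; E0 is the local (first) solution.
    """
    X = [E0]
    G = [solve_wave_equation(correction_current(E0))]
    for _ in range(max_iter):
        E_new = anderson_mix(X[-m:], G[-m:], beta)
        residual = np.linalg.norm(E_new - X[-1]) / np.linalg.norm(E_new)
        X.append(E_new)
        G.append(solve_wave_equation(correction_current(E_new)))
        if residual < tol:
            break
    return X[-1]
```

As noted above, the damping parameter may vary with the iteration number and the number of stored solutions is itself a tuning parameter; both are kept fixed in this sketch purely for simplicity.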
Evaluation of correction current density

The correction current density defined in (4) can, for example, be evaluated using poloidal Fourier decomposition, as the Fourier components are inherently non-local and are well suited to couple the current in a point r with the particles' previous acceleration in points r′. Using such a description, the correction current density is given by

δJ_NL(r) = −iωε₀ Σ_j Σ_{m_θ} [χ̂_j(r; k) − χ_j(r)] · E_{m_θ}(r) exp(i m_θ θ),   (8)

where E_{m_θ} is the poloidal Fourier component of the electric wavefield on the local flux surface, χ̂_j is the Fourier representation of the susceptibility tensor χ̃_j [17][18][19], and k_⊥ and k_∥ are the components of the wavevector k that are perpendicular and parallel to the external magnetic field B, respectively. The exact implementation of χ̂_j used in this work is given below. Here, the caret emphasizes dependence on k. Density and temperature profiles as well as the external magnetic field are assumed to vary slowly in space, i.e. the plasma is assumed to be quasi-homogeneous. The susceptibility tensor is evaluated in a Cartesian coordinate system (e_x, e_y, e_z), where e_z = B/B, e_x = k_⊥/k_⊥, and e_y = e_z × e_x. The details of the coordinate transformation between this coordinate system and the cylindrical system are outlined in [33]. In this coordinate system, the susceptibility tensor for a Maxwellian plasma with no net parallel flow can be represented in the standard form given in [17][18][19]. Here n refers to the harmonic number, ω_p,j is the plasma frequency, Ω_c,j the cyclotron frequency, λ_j = k_⊥² ρ_L,j²/2 the FLR parameter, ρ_L,j = v_⊥,j/Ω_c,j the Larmor radius, v_⊥,j the perpendicular thermal velocity, v_∥,j the parallel thermal velocity, I_n = I_n(λ_j) the modified Bessel function of order n, Z(ζ_n,j) the plasma dispersion function evaluated at ζ_n,j = (ω + nΩ_c,j)/(k_∥ v_∥,j), and ϵ_j the charge sign.

For a given toroidal mode number n_ϕ, the parallel wavenumber is given by

k_∥ = (B_ϕ/(R B)) (n_ϕ + ι m_θ),   (18)

where B_ϕ is the toroidal magnetic field and m_θ is the poloidal mode number. In this work, we use an equal-arc-length poloidal angle θ ≡ 2π ∫ dl / L, where L is the circumference of the poloidal cross section of the local flux surface. Consequently, the rotational transform ι, used to describe the local helicity of the field, can be defined as

ι = 2π R B_θ / (L B_ϕ),   (19)

where B_θ is the poloidal magnetic field. It should be noted here that due to the choice of poloidal angle, ι is not a flux function. Unlike the toroidal modes, the poloidal modes are coupled. In general, the wave exhibits a broad poloidal spectrum consisting of both positive and negative poloidal mode numbers.

The perpendicular wavenumber is approximated by the dispersion relation for the fast magnetosonic wave. To find the dispersion relation, we expand the susceptibility tensor to second order in k_⊥ and assume the parallel electric field component to be negligibly small. The dispersion equation can then be solved for k_⊥ analytically. Denoting the solution corresponding to the fast magnetosonic wave k_FW = k_FW(r, m_θ), we let

k_⊥ = k_FW(r, m_θ).   (20)

Because the toroidal mode number enters as a parameter, the n_ϕ-dependence is suppressed for brevity. The local approximation of the perpendicular wavenumber retains certain FLR effects, such as higher harmonic damping and TTMP [13,14], but does not describe mode conversion to the IBW. It is important to account for both second harmonic damping and TTMP, as they each constitute a significant portion of the total absorbed power.
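A schematic of the spectral evaluation on a single flux surface is sketched below: the field sampled along the equal-arc-length poloidal angle is decomposed into poloidal modes, each mode is assigned a parallel wavenumber from the expressions quoted above, the susceptibility difference is applied mode by mode, and the result is transformed back. The interface delta_chi_hat(m, k_par) is a hypothetical placeholder for χ̂_j − χ_j (it is not the Maxwellian expansion written out), and the geometric quantities are treated as surface-local constants for brevity even though they vary along the surface.

```python
import numpy as np

def correction_current_on_surface(E_theta, n_phi, R, B_phi, B_theta, B, L_surf,
                                  delta_chi_hat, omega, eps0=8.8541878128e-12):
    """Non-local correction current on one flux surface (schematic).

    E_theta : (N, 3) complex wavefield sampled on an equal-arc-length poloidal
              grid theta_i = 2*pi*i/N around the surface.
    R, B_phi, B_theta, B, L_surf : surface-local values, treated as constants
              here although in reality they vary along the surface.
    delta_chi_hat(m, k_par) : placeholder returning the 3x3 tensor
              chi_hat - chi_local for poloidal mode m (assumed interface).
    """
    N = E_theta.shape[0]
    E_m = np.fft.fft(E_theta, axis=0)                    # poloidal Fourier components
    m_modes = np.fft.fftfreq(N, d=1.0 / N)               # integer poloidal mode numbers
    iota = 2.0 * np.pi * R * B_theta / (L_surf * B_phi)  # local rotational transform
    dJ_m = np.zeros_like(E_m)
    for i, m in enumerate(m_modes):
        k_par = (B_phi / (R * B)) * (n_phi + iota * m)   # parallel wavenumber per mode
        dJ_m[i] = -1j * omega * eps0 * (delta_chi_hat(int(m), k_par) @ E_m[i])
    return np.fft.ifft(dJ_m, axis=0)                     # back to the poloidal grid
```

The sign conventions of the transform are illustrative; what matters for the method is that each poloidal mode carries its own k_∥, and therefore its own susceptibility, before the modes are summed back into the correction current.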
To obtain a local approximation of the susceptibility tensor, we neglect the m_θ contribution in (18) (and thus also the m_θ dependence in k_FW). This is accurate where non-local effects are not important, or where the wave propagates perpendicular to the flux surfaces in the poloidal plane. Elsewhere, this approximation is either an under- or an overestimation of the local parallel wavenumber. This description of the susceptibility tensor is algebraic, in the sense that χ_j can be evaluated algebraically in each point in space. Explicitly, the local approximation is given by

χ_j(r) = χ̂_j(r; k_⊥ = k_FW(r, 0), k_∥ = n_ϕ B_ϕ/(R B)).   (21)

This plasma description has previously been implemented in a number of wave solvers, such as LION [13] and FEMIC.

It is worth noting that if the spectrum in m_θ is sufficiently wide, k_∥ can be close to zero, as can be seen from (18). If this occurs on the ion cyclotron resonance (i.e. where ω + nΩ_c,j = 0), the susceptibility becomes singular, preventing the solver from converging to a physical solution. To resolve this singularity, we can expand the magnetic field along the particle orbits [34,35], using the techniques in [36] to retain the positive semi-definiteness of the imaginary part of the plasma dispersion function and thus the anti-Hermitian part of the dielectric tensor. This method introduces a toroidal broadening and is described in more detail in the appendix.

In scenarios with dominant electron damping, further issues may arise related to small values of k_∥, this time related to the Landau response. To converge to a physical solution, it was shown in [37] that it is possible to expand the parallel wavenumber in (18) along the particle orbits. This has not been implemented in the present work, as we are only considering scenarios with dominant fast wave ion absorption.

Local plane wave model

In (21) a local approximation of the dielectric tensor was defined by setting m_θ = 0. However, when the wavefield locally resembles a single plane wave, the model for the parallel wavenumber can be improved to account for the up- and downshift. It was proposed in [38] to first perform a simulation with m_θ = 0 and then use derivatives of the resulting electric wavefield to estimate k. Here we propose an extension to this model that we will refer to as the plane wave approximation (PWA) model. We use derivatives of the right hand polarized wave component, E_{n_ϕ,−}, to estimate the poloidal wavenumber, as E_{n_ϕ,−} is not strongly affected by either the cyclotron resonance or the ion-ion hybrid layer. The estimate can be obtained from a logarithmic derivative. However, in regions with destructive interference, the wave amplitude can become small, causing the logarithmic derivative to diverge. Therefore, we here propose to estimate the wavevector in the RZ-plane as

k_RZ = Im⟨E*_{n_ϕ,−} ∇_RZ E_{n_ϕ,−}⟩ / ⟨|E_{n_ϕ,−}|²⟩,   (22)

where ⟨•⟩ denotes an average over a neighborhood around r with a radius of half a wavelength, and k_RZ is the projection of k onto the poloidal plane. This estimate is here used to describe the direction of k, whereas the magnitude of k is taken from the fast wave dispersion relation. Assuming that the poloidal field is small compared to the toroidal one, the wavevector in the poloidal plane is dominated by the perpendicular wavenumber, k_FW. Furthermore, a weight function, w = w(r), is introduced to avoid including up- or downshift when the wave resembles an eigenmode. The parallel wavevector of the PWA model is therefore given by (23), where we propose an ad hoc model for the weight w (24). As an example, consider the sum of a left and a right moving wave. Let the amplitude of
the right moving wave be smaller than the left moving one by a factor f. It can then be shown numerically that w ≈ 1 − f for f ≲ 0.7. Thus, this model provides an average parallel wavenumber, also in the presence of wave interference.

In the context of this work, we want to use the PWA model as an alternative to the proposed iterative method. Therefore, (22) is applied to the right-hand polarized component of the electric field from the first iteration, E^(1)_{n_ϕ,−}. After reevaluating the dielectric tensors and electric fields using k_∥ = k_∥^PWA(r), the power deposition is compared to the final iteration in the iterative method.

Power absorption

In this work, the absorbed power density by a particle species j will be evaluated as

Q_j = (ε₀ω/2) Im(E* · χ̃_j E),   (25)

where we have used the susceptibility tensor as the power absorption kernel. It is interesting to see how the non-local contribution of the susceptibility tensor (i.e. δχ̃_j) affects the power absorption by particle species j, so we define the absorbed power density due to the non-local absorption kernel for species j as

δQ_j^kernel = (ε₀ω/2) Im(E* · δχ̃_j E).   (26)

It should be noted here that δQ_j^kernel is not symmetric, so we cannot expect the local power deposition to always be positive semi-definite. However, the integrated absorption always is [15].

The non-local plasma response will also impact the power absorption due to changes to the electric field when iterating. Therefore, we will also define the absorbed power density due to these changes in the electric field for species j as

δQ_j^fields = (ε₀ω/2) [Im(E* · χ_j E) − Im(E^(1)* · χ_j E^(1))],   (27)

where E^(1) is the solution from the first iteration, i.e. the solution to the wave equation with δJ_NL = 0. It is evident from (25)-(27) that the power deposition without any non-local contributions is given by Q_j − δQ_j^kernel − δQ_j^fields.

Effect of parallel non-locality on fast wave heating

The proposed iterative method described in section 2 has been implemented in the full wave solver FEMIC, which is based on the finite element method. The wave equation in (7) is discretized into triangular finite elements and solved using the RF module in COMSOL Multiphysics® (as shown in [14]), whereas the correction current density in (8) is evaluated in MATLAB® using the method outlined in section 2.2. The correction current density is then imported to COMSOL Multiphysics® to be used as a source term in the wave equation in the following iteration step. The impact of parallel non-local effects on fast wave heating is then studied in an ITER-like plasma. To analyze the effect on the different absorption modes, fundamental minority heating, ELD, TTMP, and the coupling between ELD and TTMP are all studied separately. Second harmonic tritium damping is not shown due to its qualitative similarities to the minority absorption. Although second harmonic damping is sensitive to the perpendicular wavenumber, the dispersion relation (20) does not change much for the most important mode numbers m_θ. Finally, we will show the non-local effects on ELD for n_ϕ = 27. For each absorption mode, we study δQ_kernel and δQ_fields obtained both from the proposed iterative method and from the PWA model described in section 2.3, to see how well the PWA model performs compared to a more sophisticated approach. The two models are compared on a straight line starting near the bottom antenna strap. For n_ϕ = 27, we compare the two models in a region without strong reflections, as this would be outside the range of validity for the PWA model.
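Before turning to the ITER results, the power-density diagnostics of section 2.4 can be illustrated with a short sketch based on the quadratic forms quoted above. The arrays below are hypothetical stand-ins for solver output, and the non-local kernel contribution is represented here as an effective pointwise tensor for simplicity, whereas in the solver it is evaluated spectrally.

```python
import numpy as np

EPS0 = 8.8541878128e-12

def absorbed_power_density(E, chi, omega):
    """Q = (eps0*omega/2) * Im(E^* . chi . E) evaluated point by point.

    E:   (N, 3) complex field at N mesh points
    chi: (N, 3, 3) complex susceptibility (or kernel contribution) at the same points
    """
    quad = np.einsum("ni,nij,nj->n", np.conj(E), chi, E)
    return 0.5 * EPS0 * omega * np.imag(quad)

def power_density_split(E_final, E_first, chi_local, delta_chi, omega):
    """Split the absorbed power into total, kernel, and field contributions.

    Returns (Q, dQ_kernel, dQ_fields) such that Q - dQ_kernel - dQ_fields is the
    purely local estimate evaluated with the first-iteration field.
    """
    Q = absorbed_power_density(E_final, chi_local + delta_chi, omega)
    dQ_kernel = absorbed_power_density(E_final, delta_chi, omega)
    dQ_fields = (absorbed_power_density(E_final, chi_local, omega)
                 - absorbed_power_density(E_first, chi_local, omega))
    return Q, dQ_kernel, dQ_fields
```

With this decomposition, the difference Q − δQ_kernel − δQ_fields reduces to the local estimate obtained from the first iteration, matching the bookkeeping described above.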
Scenario The effect of non-local parallel physics is studied in a fast wave heating scenario in an ITER-like plasma with a 3% 3 He minority in a DT plasma with n D = n T .The temperature and density profiles are shown in figure 1.The remainder of the model parameters are given in table 1.For n ϕ = 61 and ITER relevant temperatures, resonance layers are broad, the ion-ion hybrid layer is less pronounced compared to lower mode numbers, and there is minimal mode conversion to the ion cyclotron wave.Under these conditions, fast wave heating is straightforward to simulate using an ICRH code such as FEMIC and non-local effects can be analyzed without complications.The chosen value for n ϕ corresponds to one of the dominating toroidal mode numbers in a dipole phasing configuration (0 π 0 π).Unless stated otherwise, the results in the remainder of this section have been obtained for n ϕ = 61.The wall and antenna geometries used in this work are described in [14], and can be seen in figure 2. Although the two poloidal antenna triplets will have a ±90 • phase shift in ITER [39], they are here simulated in phase.This simplifies the analysis and the validation of the numerical algorithm by removing the destructive interference between the top and bottom antenna triplets [14].The scenario was chosen to clearly demonstrate the qualitative effects of the up-and downshift in the parallel wavenumber.For symmetry reasons, parallel non-local effects are not expected to have a large impact on metrics such as the total coupled power, the power partition and the power deposition profiles in this scenario [40]. The chosen scenario manifests strong single pass damping and central ion absorption, as shown in figure 2. Most of the power (65%) is absorbed by the ions, of which 90% is absorbed by the 3 He-ions and 10% is absorbed through second harmonic tritium damping.The remainder of the power is absorbed by the electrons between the antenna and the ionion hybrid layer through ELD and TTMP. The toroidal and poloidal fields are oriented in the clockwise (negative ϕ) and counterclockwise (positive θ) directions, respectively.Consequently, for a positive n ϕ , we can expect a local upshift in the lower half of the poloidal plane, and a local downshift in the upper half. Minority absorption for n ϕ = 61 The width of the resonance layer is approximately proportional to the Doppler shift k ∥ v ∥,j , where again v ∥,j is the thermal velocity for species j.Therefore, we expect to see a broadening of the resonance layer where the wave is upshifted (in this case, below the equatorial plane).Conversely, we expect a narrower resonance layer above the equatorial plane.This can indeed be observed in figure 3(a).In terms of local values, the contribution to the power deposition is significant, but due to the antisymmetry about the equatorial plane, the effect on integrated power deposition is small. 
The parallel non-locality does not only affect where a given wavefield deposits its power in the plasma, but also the polarization of the wave. The change in polarization of the wave then impacts the power deposition in a different way than the non-locality of the power absorption kernel. A downshifted k_∥ should give a more pronounced ion-ion hybrid layer, with a larger relative E+ component at the hybrid layer and a smaller relative E+ at the minority resonance. For an upshifted wave, we expect the opposite. This can indeed be observed when we study δQ^3He_fields in figure 3(b).

In figure 3(c) we observe a close agreement between the FEMIC simulation and the local plane wave model. We note that the PWA model captures both the broadening of the resonance layer seen in δQ^3He_kernel as well as the change in polarization represented in δQ^3He_fields, not only qualitatively but also quantitatively.

Electron absorption for n_ϕ = 61

An upshift in the parallel wavenumber can both enhance and reduce electron absorption, depending on the value of k_∥ v_∥,e/ω. The electron damping is more complex than the ion damping due to the different physical effects at play. The fast magnetosonic wave is damped through both ELD and TTMP. The processes are coherent, and for fast wave heating approximately twice as much power is absorbed through TTMP as through ELD. ELD and TTMP are connected through the susceptibility elements χ_yz,e and χ_zy,e, and the contribution from the cross terms is negative semi-definite, approximately of the same magnitude as the TTMP term for ICRF waves in tokamaks [8]. All three contributions to the electron absorption have different dependencies on k_∥ v_∥,e/ω [17][18][19]. Therefore, we will study each absorption mode separately.

For the electron Landau damping, we see in figure 4(a) that an upshifted parallel wavenumber leads to a negative contribution from the non-local absorption kernel, δQ^ELD_kernel. Conversely, a downshifted parallel wavenumber yields a positive contribution. As for the effect of the iterated electric fields, δQ^ELD_fields, we see the opposite behavior (see figure 4(d)), although the contribution is not large enough to fully cancel the effect of the non-local absorption kernel. The non-local contribution from the cross terms is structurally very similar to the contribution from the ELD, although with opposite signs (see figures 4(c) and (f)). This is explained by the similar (but not identical) functional dependence on k_∥ v_∥,e/ω.

Regarding the power deposition due to TTMP, we see in figure 4(b) that δQ^TTMP_kernel vanishes at R = 7.5 m. This is due to passing a maximum in the zeroth harmonic term of the susceptibility tensor component χ_yy,e. Again, we note an antisymmetry about the equatorial plane. We also observe in figure 4(e) that the non-local effect from the iterated electric fields changes sign at the ion-ion hybrid layer. This is due to a polarization change similar to that observed for the ion absorption in the previous section, but now for the E_y component.

The PWA model produces a fairly close agreement with the iterative model. This is shown for the different terms in figures 4(g)-(i). The largest differences are observed between R = 6.75 m and R = 7.25 m due to the reflection of the fast magnetosonic wave off the ion-ion hybrid layer, resulting in an interference pattern.
Electron Landau damping for n_ϕ = 27

Given the agreement between the plane wave model and the implemented iterative method, we can use (23) to evaluate the relative shift in k_∥ for n_ϕ = 27 and n_ϕ = 61, respectively. The relative upshift varies over the poloidal plane depending on how well the wave fronts are aligned with the flux surfaces. We find that for n_ϕ = 27, the relative up- and downshift is approximately twice as large as for n_ϕ = 61. This can be confirmed using (18) for a given poloidal m_θ spectrum. The larger shift can also be seen in the magnitudes of δQ^ELD_kernel and δQ^ELD_fields in figures 5(a) and (b). The non-local contributions are more than twice as large compared to the corresponding terms for n_ϕ = 61 (figures 4(a) and (d)).

In figure 5(b) we also see that the reflection of the fast wave off the ion-ion hybrid layer produces a wave pattern in δQ^ELD_fields near the equatorial plane. The incoming wave from the antenna is upshifted, whereas the reflected wave is downshifted. This alters the corresponding interference pattern in the electric fields and electron absorption. Interference is not well modelled by a plane wave. We therefore make the comparison with the PWA model below the region of strong interference. Despite the large upshift, we see in figure 5(c) that the PWA model still produces good agreement with the iterative method, although it seems that the PWA model predicts some interference where the iterative method predicts none. Similar comparisons can be made for the other absorption modes as well.

It is noteworthy that whereas the up- and downshift is more perturbative in nature for n_ϕ = 61, the effect for n_ϕ = 27 is of zeroth order locally, completely changing the local power deposition. Ion absorption for n_ϕ = 27 is not shown due to its qualitative similarities to n_ϕ = 61. The main difference is a smaller Doppler broadening of the resonance for n_ϕ = 27 due to the smaller k_∥.

Convergence

The relative residual norm for iteration step k is defined from the L²-norm of the change in the electric field between successive iterations, where each field component has been rescaled by its L²-norm to ensure that the left-handed and in particular the parallel components are not neglected when the residual is evaluated. The iteration procedure is deemed to have converged when the relative residual norm is smaller than a set value, in this case 10⁻³. The proposed numerical algorithm shows good convergence properties, as shown in figure 6. The rate of convergence is dependent on the toroidal mode number n_ϕ, as a lower toroidal mode number such as n_ϕ = 27 leads to a larger relative up- or downshift in the parallel wavenumber, requiring more iterations. It is also possible that increased mode conversion to the ICW for lower mode numbers slows down convergence. However, the most likely source of the slow convergence is small amounts of unresolved mode conversion near the magnetic axis, which can be seen in figures 5(a) and (b). This introduces numerical noise to the iteration process, making it more difficult to converge.
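A minimal sketch of the convergence check is given below. The specific residual formula, the relative change between successive iterates in the L²-norm with each field component rescaled by its own norm, is our reading of the description above and should be taken as an assumption rather than the exact FEMIC definition.

```python
import numpy as np

def relative_residual(E_new, E_old, tol=1e-3):
    """Relative residual between successive iterates, with per-component rescaling.

    E_new, E_old: (N, 3) complex fields; the three columns represent the field
    components (e.g. the two transverse polarizations and the parallel component).
    Each column is rescaled by its own L2-norm so that the weak parallel component
    is not drowned out by the dominant transverse ones.
    Returns (residual, converged_flag).
    """
    scale = np.linalg.norm(E_new, axis=0)
    scale[scale == 0.0] = 1.0            # guard against an identically zero component
    diff = (E_new - E_old) / scale
    ref = E_new / scale
    r = np.linalg.norm(diff) / np.linalg.norm(ref)
    return r, r < tol
```

In practice this check would be applied after each Anderson-accelerated solve, and the iteration stops once the residual drops below the chosen tolerance (10⁻³ in this work).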
To reduce numerical noise, it is possible to filter the correction current density after each iteration. In this work, we have used total variation denoising (TVD) to filter out numerical noise from the finite element solution. It is in some cases possible to reduce the number of iterations significantly, as shown in figure 6, with little impact on the resulting wavefield and power deposition. For n_ϕ = 27 we observe an improvement of almost a factor of two, whereas for n_ϕ = 61 the difference is marginal. There is further room for optimization of the convergence, and possible parameters to explore are the TVD filtering parameter, the damping parameter, and the number of stored solutions in the Anderson acceleration. Such optimization is however outside the scope of this paper and may be addressed in a future publication.

Comparison between FEMIC and TORIC

To quantitatively validate the iterative method proposed in this work, FEMIC is benchmarked against the full wave ICRF code TORIC. TORIC uses a finite element discretization in the radial direction and Fourier methods in the poloidal and toroidal directions. The spectral treatment in the poloidal direction includes non-local effects natively, such as the up- and downshift in the parallel wavenumber. TORIC uses a second order FLR expansion in the perpendicular direction to account for higher harmonic damping and mode conversion to the IBW [41]. Unlike in FEMIC, perpendicular non-local effects (FLR effects) are treated with differential operators. FEMIC's local treatment describes higher harmonic damping well but does not model mode conversion to the IBW.

In this exercise, we compare three different models for the parallel response in FEMIC to TORIC. First, we study FEMIC without any iterative additions to estimate how important non-local effects are in this particular scenario. We then apply the proposed iterative method, and finally, we also consider the PWA introduced in section 2.3. The models are benchmarked by comparing their predicted power partition. Here we use the same scenario as in section 3, but sweep over minority concentrations from 0% to 4% in steps of 0.5%. To avoid any of the antisymmetries observed in section 3, and thus enhance the non-local effects on the power partition, the whole antenna is located below the equatorial plane. This antenna model is similar to using only the bottom triplet of the ITER antenna. To represent the same geometry in the two codes and thus facilitate the comparison between them, a simplified wall and antenna geometry has been implemented in FEMIC. Furthermore, we set the toroidal mode number to n_ϕ = −27 to get a downshift in the parallel wavenumber. This comparison is similar to the exercises done in [14,42].

Power partition

In the pure second harmonic heating scenario, TORIC predicts approximately 5% of the power to be mode converted into the IBW. The IBW is mostly absorbed by the electrons on the high field side. When a small minority concentration is introduced, the power converted to the IBW rapidly shrinks to less than 1% of the total deposited power. As FEMIC does not support IBW physics, the comparison of the power partition predicted by the two codes is facilitated by neglecting the IBW electron absorption in TORIC. To account for the removed IBW absorption, the power partition in TORIC is renormalized. The result is shown in figure 7.
When non-local effects are not accounted for, FEMIC systematically overpredicts the electron absorption compared to TORIC for all minority concentrations, ranging from 15% in the pure second harmonic tritium heating scenario to 5% at 4% 3He concentration. This is not surprising considering the large downshift in the parallel wavenumber below the equatorial plane. Consequently, FEMIC also underpredicts the fraction of power absorbed by the ions.

Including parallel non-local effects iteratively, we note a close agreement between FEMIC and TORIC for all particle species, in particular for lower minority concentrations. The difference is the largest for 4% 3He concentration but is even then only a few percent. It is not surprising that we observe slightly poorer agreement for higher concentrations. FLR effects tend to be more important here, and TORIC has a more advanced description of the perpendicular non-local physics.

The local plane wave approximation of k_∥, shown in blue, reproduces the good agreement of the iterative method for all minority concentrations, including concentrations below 1%, where single pass damping is weak. Here, the wave has no clear beam shape and is poorly approximated by a plane wave, and we can expect the width of the poloidal spectrum to be important for wave polarization and absorption. Still, the difference in electron absorption between FEMIC and TORIC is small.

All four models predict maximum absorption by the 3He ions near 2.5% 3He concentration. As the minority concentration increases, the ion-ion hybrid resonance moves away from the cyclotron resonance. Consequently, the overlap between the cyclotron resonance layer and the ion-ion hybrid layer decreases, and the favorable polarization is not exploited.

Discussion

The main advantage of the implemented iterative method is the possibility to model complex wall and antenna geometries in high fidelity thanks to the memory efficient finite element method, while still being able to account for non-local effects, which are added iteratively. Non-local physics is only accounted for in the plasma core and is not expected to be significant in the SOL due to the low plasma temperatures there, saving computational resources. Another advantage of using a FEM solver is the possibility for mesh optimization, such as local mesh refinement for short wavelength wave modes. Using the FEM in the whole poloidal plane, combined with Fourier decomposition to account for parallel non-local effects, the proposed iterative method is less memory intensive than many other comparable methods, at the expense of longer computational time. In principle, this is a desirable trade-off, making it possible to run the code on a workstation rather than a cluster. This applies also when modelling large machines, such as ITER and the EU-DEMO. Furthermore, fixed point iteration methods (and sometimes Anderson acceleration in particular) are being employed in other codes to couple the wave solver to a Fokker-Planck solver [43], or to apply rectified sheath boundary conditions [44,45]. FEM solvers work well in this type of framework. Each wave solution typically requires a few minutes of computational time, and in principle, non-local effects, sheath physics and non-linear Fokker-Planck physics can all be included in each iterative step, reducing the total computation time compared to a more expensive wave solver. Unless the separate effects are strongly coupled, the convergence and total computation time should be mostly determined by the physical effect that converges the slowest.
The reduced local plane wave model, which was used to validate the proposed iterative method, produces surprisingly good agreement not only in terms of power partition, as shown in the benchmark in section 4, but also in local power density, as shown in section 3. This good agreement is not expected to be maintained in scenarios with poor single pass damping, especially in smaller machines. When single pass damping is low, the reflected and refracted waves develop a complex poloidal mode spectrum, and a local plane wave model is then not sufficient to represent the up- and downshift in the parallel wavenumber.

An attempt was made to use the PWA model to improve the convergence rate by providing an improved starting guess, but the error norm was only reduced in the first few iterations. The local plane wave model prescribes a k_RZ mainly in the negative R-direction. Therefore, the reflected portion of the wave gets the opposite shift in the parallel wavenumber compared to what it should have. The iterative method then has to compensate for such errors, negating any gains from the improved initial guess. In fact, in some scenarios, the rate of convergence was reduced when using the PWA model as an initial guess. Despite this, the PWA model remains a useful first order correction for parallel non-local effects, especially in large machines. The reason for this is twofold. First, the model is more accurate in large machines with strong single-pass damping, such as ITER and the EU-DEMO. Second, methods that take non-local effects into account tend to be computationally expensive, increasing the potential gain from using memory efficient methods such as the FEM. In particular, 3D FEM models can benefit from such reduced models [25].

In terms of the relative importance of the various contributions to the total deposited power, it is interesting to note that the quantities δQ_j^kernel and δQ_j^fields are of similar magnitude for a given species j. This means that accounting for non-local effects in the absorption kernel and in the wave propagation is equally important. Furthermore, comparing δQ_j^kernel and δQ_j^fields (figures 3 and 4) to Q_j (figure 2), we note that the relative change in absorption due to non-local effects is similar for all absorption modes. This implies that non-locality in all tensor components is equally important, perhaps with the exception of the xz- and zx-components due to their small magnitude. All other components significantly contribute to the power deposition.

A final point should be made on the importance of the up- and downshift of the parallel wavenumber. For dipole phasing, the non-local effects are perturbative in nature, slightly changing the resonance width for ions and the absorbed power density for the electrons. For toroidal phasings that have a larger portion of the spectrum in the lower range of toroidal mode numbers, such as [0 π π 0], [0 0 π π] or current drive phasing, we can expect a larger effect. The non-local contribution to the local power density may then be of zeroth order, significantly changing the total absorbed power density.
The comparison with TORIC produced good agreement in general, but in particular for lower minority concentrations where FLR effects are less important.At 3%-4% 3 He concentration it may be important to include FLR physics.This may be difficult to implement in FEMIC.Many of the terms of interest are divergence-like terms, which are difficult to evaluate in the RF module in COMSOL Multiphysics® as the curl-conforming vector elements used to discretize the electric field are not well suited for this type of operations.Here other basis functions or other formulations may need to be considered.It is also possible that the discrepancy is due to different representations of the parallel physics. Conclusions We have implemented an iterative method to include parallel non-local physics in the FEM based ICRF code FEMIC.The non-local contribution was evaluated using Fourier decomposition in the poloidal angle coordinate.The new version of FEMIC was applied to a fast wave minority heating scenario in ITER and has been benchmarked against the ICRF code TORIC with good agreement.To further verify and explain the change in wave polarization and local power absorption due to the inclusion of parallel non-local effects, a reduced local plane model was developed and compared to the simulation results from FEMIC.We showed that we can accurately represent the broadening of the resonance layer, the change in polarization due to the non-local effects on the ion-ion hybrid layer, as well as capture the impact of non-local effects on the different electron absorption modes.We also show that using a non-local absorption kernel is equally important as using a non-local dielectric tensor, for both ion and electron absorption. One of the main advantages of the proposed numerical scheme is that the finite element method in 2D axisymmetric configurations (or 3D) is well suited to be applied to complex geometries.Wave coupling and propagation in the scrape off layer and at the plasma edge are well represented by the finite element method without including non-local effects, as the temperature is low enough that non-local effects become negligible.Non-local contributions only need to be added in the plasma core, where the temperature is high.Furthermore there is no need to invert a large dense system matrix, as the non-local effects are added iteratively.In principle we can add more physical effects in the iteration procedure, such as nonlinear boundary conditions, or coupling the wave solver to a Fokker-Planck solver. The current implementation lacks certain finite Larmor radius effects, such as mode conversion to the ion Bernstein wave (IBW).This could be included by expanding the response tensor in k ⊥ and giving higher order terms a finite element representation so that they can be evaluated in a FEM solver such as COMSOL Multiphysics®, although care needs to be taken to use appropriate basis functions.The mode conversion to the IBW is sensitive to the up-and downshift in the parallel wavenumber [15], effects that in principle can be included using the method in the present work. Figure 1 . Figure 1.Profiles of electron density (black), electron temperature (dashed orange) and ion temperature (dash-dotted blue) as a function of major radius in the equatorial plane. Figure 2 . Figure 2. 
Real part of the left hand circularly polarized component of electric field, (a) E+, and absorbed power density by (b) 3 He-ions and (c) electrons.Minority and ion-ion hybrid resonances are indicated by dashed and dotted vertical lines, respectively. Figure 3 . Figure 3. Contribution to the absorbed power density by 3 He-ions due to non-local physics in (a) the absorption kernel (δQ 3 He kernel ), and (b) the susceptibility tensor (δQ 3 He fields ) as predicted by FEMIC in units of mWm −3 /W.Minority and ion-ion hybrid resonances are indicated by dashed and dotted vertical lines, respectively.In subfigure (c), the iterative method (blue and black) is compared to the local plane wave model (red and green) along the oblique line (black, figures (a) and (b)).Here n ϕ was set to 61. Figure 4 . Figure 4. Contribution to the absorbed power density through ELD (first column), TTMP (second column) and cross terms (third column) due to non-local physics in the absorption kernel (δQ kernel ) (a)-(c), and susceptibility tensor (δQ fields ) (d)-(f ) as predicted by FEMIC in units of mWm −3 /W.Minority and ion-ion hybrid resonances are indicated by dashed and dotted vertical lines, respectively.In subfigures (g)-(i), the iterative method (blue and black) is compared to the local plane wave model (red and green) along the oblique line (black, figures (a)-(f )).Here n ϕ was set to 61. Figure 5 . Figure 5. Contribution to the absorbed power density through ELD due to non-local physics in (a) the absorption kernel (δQ ELD kernel ), and (b) the susceptibility tensor (δQ ELD fields ) as predicted by FEMIC in units of mWm −3 W −1 .Minority and ion-ion hybrid resonances are indicated by dashed and dotted vertical lines, respectively.In subfigure (c), the iterative method (blue and black) is compared to the local plane wave model (red and green) along the oblique line (black, figures (a) and (b)).Here n ϕ was set to 27. Figure 6 . Figure 6.Relative residual norm as a function of iteration number.Convergence is much slower for lower values of n ϕ , but can be sped up by filtering the correction current density using TVD filtering (dashed curves). Table 1 . Summary of model parameters.
10,808
2024-04-09T00:00:00.000
[ "Physics", "Engineering" ]
Copy Number Variation Identification on 3,800 Alzheimer's Disease Whole Genome Sequencing Data from the Alzheimer's Disease Sequencing Project

Alzheimer's Disease (AD) is a progressive neurologic disease and the most common form of dementia. While the causes of AD are not completely understood, genetics plays a key role in the etiology of AD, and thus finding genetic factors holds the potential to uncover novel AD mechanisms. For this study, we focus on copy number variation (CNV) detection and burden analysis. Leveraging whole-genome sequence (WGS) data released by the Alzheimer's Disease Sequencing Project (ADSP), we developed a scalable bioinformatics pipeline to identify CNVs. This pipeline was applied to 1,737 AD cases and 2,063 cognitively normal controls. As a result, we observed 237,306 and 42,767 deletions and duplications, respectively, with an average of 2,255 deletions and 1,820 duplications per subject. The burden tests show that Non-Hispanic-White cases on average have 16 more duplications than controls do (p-value 2e-6), and Hispanic cases have larger deletions than controls do (p-value 6.8e-5).

INTRODUCTION

Alzheimer's disease (AD) is a devastating neurodegenerative disease and is the most common cause of dementia. Approximately 6.2 million Americans are living with AD in 2021, and the number is projected to reach 12.7 million in 2050, which makes AD one of the most pressing public health issues (Alzheimer's Association, 2020). Presently, there is no known effective prevention or disease-modifying therapy, and the landscape of AD drug trials is gloomy. One possible reason is that AD is a heterogeneous disorder, but trials are designed treating it as a monolithic disease. Although lifestyle and environmental risk factors clearly affect AD, the primacy of genetic influences suggests that categorization by genetic basis should be prioritized in developing effective interventions.

Most of the previous CNV GWAS of AD were performed using genotyping array data. Although these arrays can quickly and cost-efficiently genotype large numbers of samples, there are serious technological limitations in that only large CNVs spanning multiple pre-determined probes can be reliably detected. However, WGS data allow an unbiased investigation of CNVs of all types (i.e., small and large; common and rare; within coding and non-coding regions) and provide a unique opportunity to comprehensively study CNVs in diseases.

To accelerate AD genetic discovery, the Alzheimer's Disease Sequencing Project (ADSP) (Beecham et al., 2017), a strategic program funded by the National Institute on Aging (NIA), is committed to sequencing AD cases and cognitively normal elderly controls from multi-ethnic populations, providing a valuable resource for genome-wide identification of CNVs. This study utilizes the ADSP Umbrella R1 dataset (ng00067) released through the National Institute on Aging Genetics of Alzheimer's Disease Data Storage Site (NIAGADS) Data Sharing Service (Kuzma et al., 2016). After quality and relatedness checks, we had 1,737 AD cases and 2,063 cognitively normal elderly controls for this study. We employed three CNV calling algorithms, CNVnator (Abyzov et al., 2011), JAX-CNV, and Smoove (GitHub-brentp/smoove, 2021; Layer et al., 2014), which on average detected 2,378, 25, and 4,584 CNVs, respectively, for each sample.
GraphTyper2 (Eggertsson et al., 2019) was then applied for joint genotyping to generate a single VCF for all 3,800 samples in the study, which increased the number of CNVs to 280,073 in total; however, most of those CNVs either overlap or are adjacent to each other. After merging CNVs of the same type (deletions or duplications) and removing conflicting regions with different types of CNVs, there are on average 4,075 CNVs per sample. The CNVs we identified tended to be more abundant and longer in AD cases compared to cognitively normal, elderly controls, though in most cases this trend was not statistically significant.

MATERIALS AND METHODS

The analysis flow consists of two major steps: identification of CNVs from WGS data of 3,800 subjects (CNV Identification on WGS Data), and CNV burden analysis (CNV Burden Analysis Using PLINK). Figure 1 shows an overview of the flow of CNV identification on WGS data. The flow starts with aligned CRAM files and ends at the single-sample CNV list generation. The process began with a quality check (WGS Across-Chromosome Coverage Check) followed by sample-level CNV calling and project-level CNV joint genotyping (Sample-Level CNV Calling and Project-Level CNV Joint Genotyping). Finally, to meet the data format requirements of CNV burden analysis, the genotyped VCF was further split into per-sample lists in BED format for region consolidation (for overlapping same-type CNVs) and removal (for overlapping different-type CNVs). Then, all BED files were merged and converted to PLINK format as the input for burden analysis (CNV List Assembling for PLINK Burden Analysis). The detailed scripts are given in the supplementary material.

WGS Across-Chromosome Coverage Check

The quality of CNV calling on WGS data is sensitive to alignment coverages across all chromosomes of a sample. Uneven coverage across chromosomes may cause false positive CNVs. Thus, before calling CNVs, it is necessary to perform a quality check of alignment coverages. Samples with uneven coverage were removed from the analysis. We developed a method (implemented as part of JAX-CNV) to first estimate the coverage of each chromosome of a sample. The method seeks 20 repetitive-free regions in each chromosome, and then calculates an average coverage of these regions to represent the coverage of the chromosome. A repetitive-free region is defined as a 20 kbp long region in which each 25-mer (k-mer) has a unique position in the entire reference genome. Once the coverage of each chromosome was obtained, we were able to identify outlier chromosomes with unexpectedly high or low coverage. For example, outliers could indicate trisomy, monosomy, and other gross chromosome number anomalies. An overall average coverage of a sample was then computed using the coverages of all chromosomes excluding outliers. The standard deviation of chromosome coverages was employed as the metric to identify problematic samples, which were removed from downstream analyses. This method is fast and takes approximately 5 minutes for a 30X sequenced sample.

Sample-Level CNV Calling

We employed CNVnator, JAX-CNV, and Smoove for CNV detection. CNVnator and JAX-CNV are Read-Depth-based (RD-based) algorithms, while Smoove utilizes multiple signals of RD, Paired-End (PE), and Split-Read (SR). CNVnator is sensitive to CNV sizes ranging from 1 to 50 kb; however, it may break larger CNVs into smaller pieces, which introduces difficulties for downstream analyses.
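As a rough illustration of the across-chromosome coverage check described above, the sketch below takes pre-computed mean depths over the repetitive-free regions of each chromosome, flags outlier chromosomes, and rejects a sample whose chromosome-to-chromosome variability is too large. The 15% cutoff mirrors the threshold reported for this dataset later in the paper; the outlier rule and the input format are illustrative assumptions rather than the exact JAX-CNV implementation.

```python
import numpy as np

def sample_coverage_qc(region_depths, max_rel_std=0.15, outlier_factor=0.5):
    """Across-chromosome coverage QC for one sample.

    region_depths: dict mapping chromosome -> list of mean depths over the
                   ~20 repetitive-free regions selected on that chromosome.
    Returns (mean_coverage, outlier_chromosomes, passes_qc).
    """
    chrom_cov = {c: float(np.mean(d)) for c, d in region_depths.items()}
    overall = np.median(list(chrom_cov.values()))
    # Chromosomes far from the sample median (e.g. trisomy/monosomy) are outliers.
    outliers = [c for c, cov in chrom_cov.items()
                if abs(cov - overall) > outlier_factor * overall]
    kept = np.array([cov for c, cov in chrom_cov.items() if c not in outliers])
    mean_cov = kept.mean()
    rel_std = kept.std() / mean_cov
    return mean_cov, outliers, rel_std <= max_rel_std
```

A sample failing this check is excluded before any CNV calling is attempted, since uneven coverage would otherwise be misread as deletions or duplications.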
We included JAX-CNV in the analysis flow because it was developed to detect large (>50 kb) CNVs and resolves the issue of fragmented calls from CNVnator. Smoove was included to strengthen small CNV (<1 kbp) identification. These three CNV calling algorithms are not only fast but also generate high-quality CNV calls. Moreover, the combination of the three allows us to cover the complete size spectrum of CNVs. For each sample, we applied the three algorithms separately. Each algorithm generates a BED (JAX-CNV) or VCF (CNVnator and Smoove) file that stores a set of deletions/duplications with genomic coordinates and genotypes (homozygous or heterozygous, and copy numbers) for a sample. If a BED file was generated, we converted it to VCF format to facilitate the step of utilizing svimmer (GitHub-DecodeGenetics/svimmer, 2021) for callset merging. Of the variant types (deletions, duplications, inversions, and breakends) detected by Smoove, we only kept deletions and duplications. For each sample, we then applied svimmer to merge the three VCFs obtained from the three algorithms.

Project-Level CNV Joint Genotyping

Joint analysis is recommended for a dataset with multiple samples. Once variants of a sample were detected, a joint analysis step provides the ability to leverage population-wide information from multiple samples, which allows us to refine low-quality genotypes and detect additional variants of a sample. For example, a joint genotyping step is suggested in the GATK best practices for SNV and INDEL detection. Compared to SNV/INDEL joint genotyping, CNV joint genotyping is challenging since breakpoints of CNVs from short-read sequence data may be imprecise. By incorporating detected variants into the linear reference genome, the emerging graph-genome methodology provides a good model for joint genotyping of CNVs across multiple samples in a single step. We evaluated GraphTyper2 (Eggertsson et al., 2019), Paragraph (Chen et al., 2019), and VG (Hickey et al., 2020), and selected GraphTyper2 for the analysis flow due to its balance of required computational resources and quality of results. As recommended for GraphTyper2, we employed svimmer (GitHub-DecodeGenetics/svimmer, 2021) to merge all sample-level VCFs and generate a single VCF that does not contain genotypes. GraphTyper2 was then applied to this merged VCF, together with all CRAM files, for each 500 kb region, excluding the centromeres. GraphTyper2 generated a VCF of CNVs with genotypes for all samples. There are three models used for joint genotyping in GraphTyper2 (Aggregated, Coverage, and Breakpoint), and we kept results from the Aggregated model, as GraphTyper2 suggests. We also applied the PASS flag filter to the GraphTyper2 VCFs. The per-chunk (500 kb) VCFs were then concatenated using BCFtools (Danecek et al., 2021).

CNV List Assembling for PLINK Burden Analysis

There remains a challenge in using GraphTyper2 VCF files for downstream burden analysis. Since multiple calling algorithms were applied for CNV identification, CNV lengths and breakpoints may vary. Although GraphTyper2 was applied to mitigate this situation, we can still find CNV segments overlapping each other, which is not acceptable for downstream association analysis tools such as PLINK (Chang et al., 2015). To resolve overlapping segments, we first split the CNVs (with PASS genotype tags) into a BED file for each sample. The BED is in the format of chromosome, begin position, end position, and copy number status for each CNV. The copy number status is recorded as 0, 1, 3, or 4 copies.
Of note, the copy number status 4 includes copy numbers equal to or larger than 4. Then, we used BEDTools (Quinlan and Hall, 2010) to merge overlapping or adjacent segments. Segments were merged only if they were of the same CNV type (deletions or duplications). Regions having different CNV types were filtered out, since the downstream association analysis would not take those regions into consideration. Once the CNV consolidation and removal were done for all samples, we concatenated all BED files and sorted the merged BED file by CNV positions. PLINK format, which is commonly accepted by other downstream association tools, is a tabular file format with CNV coordinates, family IDs, and sample IDs. Since there are no related samples in the dataset, we replicated sample IDs as family IDs. We then converted the BED file into a six-column format with family ID, sample ID, chromosome, start position, end position, and copy number status (i.e., 0, 1, 3, or 4 copies).

Rare CNV Identification

Rare CNVs were obtained using PLINK to impose a 0.01 frequency threshold (i.e., --cnv-freq-exclude-above 38 and --cnv-overlap 0.5), which removed CNVs with >50% of their length spanning a region covered by more than 1% × 3,800 = 38 CNVs in the dataset. The same approach was applied to African American (AA) (--cnv-freq-exclude-above 9), Hispanic (--cnv-freq-exclude-above 12), and Non-Hispanic White (NHW) (--cnv-freq-exclude-above 15) samples. Then, we applied the pilot mask released by the 1,000 Genomes Project (The 1000 Genomes Project, 2010) to the rare CNV lists. The pilot mask was generated by looking at the amount of sequence data that aligned to any given location in the reference genome. Regions are defined as inaccessible if their depth of coverage (summed across all samples in the 1,000 Genomes Project) was markedly higher or lower than the average depth. The mask results in 5.3% of bases marked "N" (the base is an "N"), 1.4% marked "L" (coverage is low), 0.6% marked "H" (coverage is high) and 3.7% marked "Z" (many reads mapped here have zero quality). The remaining 89.0% of bases are marked "P" (regions are good and passed). All rare CNVs need to reside in "P" regions.

CNV Burden Analysis

We examined the burdens of all and rare CNVs in AD cases and controls using PLINK. PLINK burden analysis uses permutation tests to compute p-values. For our analysis, we applied 500,000 permutations. For each sample, we considered four CNV burden features: 1) number of CNV events; 2) proportion of samples with ≥1 CNV events; 3) total event length in kb; and 4) average event length in kb. The CNV events included deletions and duplications together (DelDup), deletions only (Del), and duplications only (Dup). We reported the CNV burdens for AA, Hispanic, and NHW samples separately as well as for all samples combined (ALL). The Bonferroni threshold for multiple testing is p-value < 0.05/96 ≈ 0.000521, where the 96 analyses included the combinations of 2 sets of CNV analyses (all CNVs vs. rare CNVs), 4 burden features, 3 CNV event types (DelDup, Del, and Dup) and 4 sample groups (ALL, AA, Hispanic, and NHW).

Dataset: 3,800 WGS Samples from the NIAGADS R1 Release of ADSP 5k

We used the ADSP WGS data released by NIAGADS in 2018. NIAGADS not only collected and released genetic data, but also harmonized minimal phenotypes (sex, race/ethnicity, diagnosis, APOE genotype) from each contributing cohort. For data harmonization, NIAGADS followed the ADSP coding scheme based on the National Alzheimer's Coordinating Center (NACC) Uniform Data Set (UDS) (Beekly et al., 2007) definitions.
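The per-sample consolidation and PLINK conversion described above can be sketched as follows: calls of the same type that overlap or are adjacent are merged, regions where deletions and duplications conflict are dropped, and the remaining segments are written as six-column rows with the sample ID doubling as family ID. This is a simplified stand-in for the BEDTools-based steps in the actual pipeline; in particular, the copy-number state assigned to a merged segment is resolved naively here.

```python
from collections import defaultdict

def merge_same_type(segments):
    """Merge overlapping/adjacent segments of one CNV type on one chromosome.
    segments: sorted list of (start, end, copy_number)."""
    merged = []
    for start, end, cn in segments:
        if merged and start <= merged[-1][1]:          # overlap or adjacency
            merged[-1][1] = max(merged[-1][1], end)
            # Naive resolution of the copy-number state of the merged segment
            merged[-1][2] = min(merged[-1][2], cn) if cn < 2 else max(merged[-1][2], cn)
        else:
            merged.append([start, end, cn])
    return merged

def consolidate_sample(calls, sample_id):
    """calls: list of (chrom, start, end, copy_number) with PASS genotypes.
    Returns PLINK-style rows: (FID, IID, chrom, start, end, copy_number)."""
    by_chrom_type = defaultdict(list)
    for chrom, start, end, cn in calls:
        cnv_type = "DEL" if cn < 2 else "DUP"
        by_chrom_type[(chrom, cnv_type)].append((start, end, cn))

    rows = []
    for chrom in sorted({c for c, _ in by_chrom_type}):
        dels = merge_same_type(sorted(by_chrom_type.get((chrom, "DEL"), [])))
        dups = merge_same_type(sorted(by_chrom_type.get((chrom, "DUP"), [])))

        # Drop segments where a deletion and a duplication overlap (conflicting types).
        def overlaps(seg, others):
            return any(seg[0] < o[1] and o[0] < seg[1] for o in others)

        keep = [s for s in dels if not overlaps(s, dups)] + \
               [s for s in dups if not overlaps(s, dels)]
        for start, end, cn in sorted(keep):
            # No related samples in the dataset, so the sample ID doubles as family ID.
            rows.append((sample_id, sample_id, chrom, start, end, cn))
    return rows
```

The resulting rows can then be written to a PLINK .cnv-style text file and fed to the rare-CNV filtering and permutation-based burden tests described above.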
We used the NIAGADS harmonized phenotypes and did not redefine diagnoses or ethnicities in this study. There are 4,749 subjects and 4,788 sequenced samples (three subjects were sequenced nine times and another three were sequenced six times), sequenced on Illumina HiSeq 2000/2500 or HiSeq X Ten at an average coverage of 37X (range 10.68X to 74.16X). For the six subjects with multiple sequence sets, we picked one sequence set per subject and removed the other 39 sequences. Of the 4,749 subjects, 2,192 were AD cases, 2,073 were controls, and 484 had unknown diagnoses. For this study, we focused on AD cases and controls, and excluded samples with inconclusive clinical statuses. For the remaining 4,265 samples, we performed the across-chromosome alignment coverage check (WGS Across-Chromosome Coverage Check), since uneven coverage may affect the quality of CNV detection. Fifteen samples were removed since the standard deviation of their chromosome coverages was greater than 15% of the average coverage, as shown in Figure 2, where each line represents a sample and each dot represents the alignment coverage of that sample for the chromosome on the x-axis. Next, we removed 450 samples due to relatedness according to pedigree information provided by NIAGADS. Finally, we had 1,737 AD cases and 2,063 controls. The ethnicities/races are AA (n = 978), Hispanic (n = 1,247), NHW (n = 1,566), and others (n = 9), as shown in Table 1.

CNV Callset

For each sample, we employed svimmer to merge the callsets from the three callers into a single VCF. Next, svimmer was applied to the VCFs of all 3,800 samples to generate a combined VCF, which, along with all CRAM files, serves as input to GraphTyper2. As described in Project-Level CNV Joint Genotyping, we kept variants annotated by the Aggregated model and also applied the PASS flag filter to this aggregated callset. The result was a total of 237,306 deletions and 42,767 duplications in a project-level VCF. The length distribution and allele frequency of the project-level VCF are given in Figures 3B,C. Deletion lengths are presented as negative values on the left panel of Figure 3B, while duplication lengths are shown on the right panel of Figure 3B.

CNV Concordance Check with Other Projects

We compared our project-level callset with the 1,000 Genomes Project Phase 3 (1KG_P3) (Sudmant et al., 2015), gnomAD (Collins et al., 2020), and Decipher (Firth et al., 2009) callsets, which were obtained from dbVar (https://www.ncbi.nlm.nih.gov/dbvar/content/human_hub/). The 1KG_P3 and gnomAD lists contain other types of variants (insertions, inversions, mobile element deletions, and mobile element insertions) that were not used in the comparison; only autosomal copy number variations were used. All lists were converted into BED format for performing cross-project concordance checks using BEDTools. We examined the overlap between our data and the other callsets using either a minimum 1 bp or a 50% overlap criterion. We performed each pair of comparisons twice, treating each callset as the primary in one of the comparisons. As demonstrated in Table 2, each pair of comparisons is asymmetric, with different concordance percentages depending upon which callset was the primary (the primary callset is the one in the column). 79.9 and 76.3% of our called CNVs were found in gnomAD and Decipher, respectively, when using the at-least-1-bp overlap criterion. However, only 39.8% were recalled in the 1KG_P3 callset. gnomAD likewise has a low concordance rate, with only 41% of its CNVs overlapping with the 1KG_P3 callset.
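The cross-project comparison reduces to asking, for each CNV in a primary callset, whether some CNV in the reference callset overlaps it by at least 1 bp or by at least 50% of its length. A simplified in-memory version of that query is sketched below; the actual comparison was performed with BEDTools, and this stand-in ignores CNV type and genotype.

```python
from collections import defaultdict

def concordance(primary, reference, min_fraction=0.0):
    """Fraction of primary CNVs overlapped by the reference callset.

    primary, reference: iterables of (chrom, start, end).
    min_fraction = 0.0 corresponds to the 'at least 1 bp' criterion;
    min_fraction = 0.5 corresponds to the 50%-of-length criterion.
    """
    by_chrom = defaultdict(list)
    for chrom, start, end in reference:
        by_chrom[chrom].append((start, end))
    for chrom in by_chrom:
        by_chrom[chrom].sort()

    hits, total = 0, 0
    for chrom, start, end in primary:
        total += 1
        length = end - start
        for r_start, r_end in by_chrom.get(chrom, []):
            if r_start >= end:
                break                       # reference is sorted; no later overlaps
            overlap = min(end, r_end) - max(start, r_start)
            if overlap > 0 and overlap >= min_fraction * length:
                hits += 1
                break
    return hits / total if total else 0.0
```

Running the function twice with the roles of the two callsets swapped reproduces the asymmetric comparison reported in Table 2, since which callset is treated as primary changes the denominator.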
Our callset and the gnomAD callset have higher similarity and more novel CNVs compared to the 1KG_P3 and Decipher callsets.

CNV List for PLINK Burden Analysis

Since PLINK does not allow overlapping CNVs within a sample, we 1) split the project-level VCF and generated a per-sample list of CNVs in BED format, and 2) consolidated CNVs or removed conflicting CNVs by the method described in CNV List Assembling for PLINK Burden Analysis. After splitting the project-level VCF for each sample, we found increased numbers of CNVs per sample (32,402 deletions and 9,131 duplications), since GraphTyper2 uses a combination of the three CNV calling algorithms and leverages variant knowledge from other samples. However, most of those CNVs overlap or are adjacent to each other. Next, we consolidated overlapping/adjacent CNVs if they were of the same type, or removed overlapping CNVs if they were of different types. This CNV consolidation step significantly reduces the number of CNVs per sample (to 2,966 deletions and 1,863 duplications), as shown in Figure 3A. For rare CNV analysis, we first applied the pilot mask from the 1,000 Genomes Project, which filtered out a further ~8.4% of CNVs, leaving on average 2,255 deletions and 1,820 duplications per sample. CNVs with an allele frequency <1% were retained for analysis. The number of rare CNVs per sample ranged from 0 to 1,546, with an average of 57 per sample (46 deletions and 11 duplications; the median is 44 and the standard deviation is 76.6). Among the 3,800 samples, three have zero rare CNVs while four have >1,000 rare CNVs. Those four samples are all Non-Hispanic Whites (two cases and two controls), and three of the four have higher detected numbers of CNVs (5,809; 5,945; and 5,992) compared to the average (4,075). These three were sequenced in an earlier stage of the project on Illumina HiSeq 2000/2500 with PCR-amplified libraries.

Table 3 shows the PLINK burden tests. Four burden features were considered: 1) total event numbers; 2) proportion of samples with ≥1 events; 3) total event length in kb; and 4) average event length in kb. Tests were done for all and for rare CNVs, considering deletions and duplications together (DelDup), deletions only (Del), and duplications only (Dup). The results suggested two significant all-CNV burden differences between cases and controls: 1) in NHW, cases on average have 16 more duplication events than controls (p-value 2e-6); and 2) in Hispanics, the total deletion length in cases is larger than in controls on average (p-value 6.8e-5). There are no significant differences for rare CNV burden in any of the aspects examined. Of note, the p-values from the PLINK burden analysis did not account for covariates and merely examined whether the observed burden measures of cases and controls were significantly different in a marginal fashion. Figure 4 shows the total event numbers per sample and the total event length in kb per sample.

DISCUSSION

We have composed a scalable bioinformatics pipeline to identify CNVs using WGS data and applied it to 1,737 AD cases and 2,063 cognitively normal controls from the ADSP. We observed 237,306 and 42,767 deletions and duplications, respectively, with an average of 2,255 deletions and 1,820 duplications per subject. Although there were more and longer CNVs in AD case samples than in controls, burden tests performed using all CNVs or rare CNVs (i.e., <1% in frequency) do not indicate a significant association with AD status.
The false discovery rate of the detected CNVs remains uncertain, despite the fact that the CNVs were generated circumspectly and have been cross-checked with other projects, including 1KG, gnomAD, and Decipher. The callset of 1KG is smaller than ours and gnomAD's, so it is expected that 1KG recalls only ~40% of our and gnomAD's callsets, while our and gnomAD's callsets capture 82.8% and 86.1% of 1KG's CNVs, respectively. We also note that 1KG processed their data several years earlier than we and gnomAD did. Since the publication of the 1KG Phase 3 callset, CNV-calling tools have moved towards integrating multiple alignment signals (such as read-depth, paired-end, and split-read signals) for calling. This concept was well accepted before the publication of the gnomAD callset and could make 1KG's callset less similar to ours and gnomAD's. While extensive experimental validation of each CNV is not currently feasible, validation of significant deletions and duplications may be necessary. Alternatively, our findings could be replicated in other Alzheimer's disease whole genome sequence datasets. Joint genotyping provides the ability to leverage information from multiple samples, so that low-quality genotypes can be refined and additional variants detected for a sample. However, it also brings challenges when breakpoints of CNVs from different samples do not align well, and the situation is worse when multiple calling algorithms are used. For this study, we employed GraphTyper2 for joint genotyping, a graph-genome-based method that has shown an advantage for genotyping larger variants such as CNVs. However, GraphTyper2 does not provide a complete solution; overlapping CNVs can still be found after joint genotyping. To address the issue, we split the aggregated results to generate a CNV list for each sample and resolved overlapping CNV regions. A graph reference genome represents a variant, a CNV in our application, as a branch in the graph. In the overlapping-CNV situation, the graph genome creates several similar branches in a region. The issue could be resolved in a more fundamental way by pruning unnecessary branches of the graph genome. A slimmer graph genome would also improve running time and memory usage.

DATA AVAILABILITY STATEMENT
The data analyzed in this study are subject to the following licenses/restrictions: data are accessible from NIAGADS DSS via qualified access. Formal requests to access these datasets can be submitted to NIAGADS DSS: https://dss.niagads.org/.

ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the North Carolina State University Institutional Review Board for the use of human subjects in research. The patients/participants provided their written informed consent to participate in this study.
5,164.6
2021-11-04T00:00:00.000
[ "Medicine", "Computer Science" ]
Tidal Modulation of Ice Streams: Effect of Periodic Sliding Velocity on Ice Friction and Healing
Basal slip along glaciers and ice streams can be significantly modified by external time-dependent forcing, although it is not clear why some systems are more sensitive to tidal stresses. We have conducted a series of laboratory experiments to explore the effect of time-varying load point velocity on ice-on-rock friction. Varying the load point velocity induces shear stress forcing, making this an analogous simulation of aspects of ice stream tidal modulation. Ambient-pressure, double-direct shear experiments were conducted in a cryogenic servo-controlled biaxial deformation apparatus at temperatures between −2°C and −16°C. In addition to a background, median velocity (1 and 10 μm/s), a sinusoidal velocity was applied to the central sliding sample over a range of periods and amplitudes. Normal stress was held constant over each run (0.1, 0.5 or 1 MPa) and the shear stress was measured. Over the range of parameters studied, the full spectrum of slip behavior from creeping to slow-slip to stick-slip was observed, similar to the diversity of sliding styles observed in Antarctic and Greenland ice streams. Under conditions in which the amplitude of oscillation is equal to the median velocity, significant healing occurs as velocity approaches zero, causing a high-amplitude change in friction. The amplitude of the event increases with increasing period (i.e., hold time). At high normal stress, velocity oscillations force an otherwise stable system to behave unstably, with consistently timed events during every cycle. Rate-state friction parameters determined from velocity steps show that the ice-rock interface is velocity strengthening. A companion paper describes a method of analyzing the oscillatory data directly. Forward modeling of a sinusoidally driven slider block, using a rate- and state-dependent friction formulation and experimentally derived parameters, successfully predicts the experimental output in all but a few cases.

INTRODUCTION
Ice streams represent a significant portion of the Antarctic ice mass balance (Bamber et al., 2000). Much of the dynamics that control ice stream flow rates is not well constrained, particularly sliding at the base. Modeling efforts often consider local variation and evolution of basal resistance as a control on flow rates (Clarke, 2005). However, our knowledge of the base is limited to a few locations. Meanwhile, a growing body of literature documents the sensitivity of ice stream flow to tidal forcing (Riedel et al., 1999; Doake et al., 2002; Bindschadler et al., 2002, 2003; Gudmundsson, 2007; Wuite et al., 2009; Brunt et al., 2010; Rosier et al., 2014), with modulation measured up to 100 km upstream from the grounding line (Anandakrishnan and Alley, 1997). Ice stream tidal modulation displays great variation in the type and periodicity of modulation. For instance, Mercer and Bindschadler Ice Streams display smooth, diurnal modulation (Brunt and MacAyeal, 2014; Anandakrishnan et al., 2003), whereas Rutford Ice Stream's smooth modulation is semi-diurnal and more closely tuned to the spring-neap cycle (Murray et al., 2007; Minchew et al., 2017). Long-period modulation produces changes in horizontal flow velocities in the range of 5-20% of the mean velocity, whereas short-period modulations have given rise to ~300% of the mean velocity, in some cases causing periodic reversal of the flow direction (Makinson et al., 2012).
Whillans Ice Stream is a noteworthy example in which the tidal modulation takes the form of stick-slip, captured by both GPS and seismic records (Wiens et al., 2008; Winberry et al., 2009; Winberry et al., 2011; Winberry et al., 2014; Pratt et al., 2014; Barcheck et al., 2021), in which bursts of motion are followed by long periods of near stagnation. The twice-daily 30 min events occur just before low tide and just after high tide (Wiens et al., 2008). The relationship between displacement during an event and recurrence interval suggests a time-dependent strengthening, or healing, at the basal interface between events. Variations in modulation style and tuning from location to location raise the possibility of using glacier response to relatively well-known tidal signals to infer basal properties. The effects of periodic perturbations on frictional stability have been studied in laboratory experiments on both solid and granular materials, with oscillations in shear forcing (Lockner and Beeler, 1995; Savage and Marone, 2007), oscillations in normal stress forcing (Richardson and Marone, 1999; Boettcher and Marone, 2004; Hong and Marone, 2005), and oscillations in principal stresses (Beeler and Lockner, 2003). The effects of periodic perturbations have additionally been reviewed in theoretical treatments (Tworzydlo and Hamzeh, 1997; Voisin, 2001, 2002; Perfettini and Schmittbuhl, 2001; Perfettini et al., 2003). These studies have provided insight into frictional modulation and dynamic triggering of stick-slip, including the frequency- and amplitude-dependence of rock and gouge systems. The theory of basal sliding has historically been based on viscous deformation and regelation processes (Weertman, 1957; Schoof, 2005; Gagliardini et al., 2007). However, stick-slip, rate-weakening, and healing are processes observed in natural glaciers that do not fit into Weertman-style sliding laws because they represent a departure from purely viscous behavior. Traditional models that neglect elastic behavior and brittle failure may not capture the full dynamic range of processes occurring at ice streams (Tsai et al., 2008; Gimbert et al., 2021). Theoretical treatments are beginning to include stick-slip events in quantitative descriptions of glacier sliding in the form of rate- and state-dependent friction models (Sergienko et al., 2009; Zoet et al., 2013; Goldberg et al., 2014; Lipovsky and Dunham, 2016; Lipovsky et al., 2019; Minchew and Meyer, 2020). Zoet et al. (2020) showed that rate- and state-dependent friction from laboratory experiments is directly applicable to glacier seismology by generating stick-slip in the lab with a de-stiffened rig. Seismology plays a major part in the understanding of stick-slip motion because, like earthquakes on tectonically loaded faults, stick-slip motion of glaciers emits seismic energy (Winberry et al., 2009; Zoet et al., 2012; Winberry et al., 2014; Helmstetter et al., 2015; Podolskiy and Walter, 2016). Additionally, many ice streams are thought to be at the pressure melting temperature at the bed. Water pressures can vary considerably in both time and space (Kamb et al., 1985). Although oscillating water pressure has been suggested as the control on glacial velocities, borehole observations of maximum sliding velocity while pressure is still rising contradict this; rather, a stick-slip mechanism is needed to reconcile those observations (Bahr and Rundel, 1996).
In an attempt to identify physical mechanisms responsible for differing sliding behaviors and to aid in prediction of future flow rates, we ran a series of cyclically forced friction experiments to explore the onset of stick-slip and slow slip in a simple ice-on-rock system. We explore the effects of temperature, period, amplitude, normal stress, and velocity on frictional strength and stability. We apply the mathematical framework of rate-and-state friction, with an analysis of velocity steps, to describe the laboratory results, and we forward model the behavior with a simple one-degree-of-freedom slider block model to determine whether the observed behavior is consistent with existing theory.

MATERIALS AND METHODS
Rectangular samples of polycrystalline ice were fabricated using a modified version of the "standard ice" protocol (Cole, 1979; McCarthy et al., 2017), which used seed ice that was shaved, sieved (between 250 and 106 μm), and packed into a rectangular mold. The mold was placed in an ice bath (T = 0°C) for ~30 min, under vacuum. Upon temperature equilibration, the mold was flooded with degassed deionized water and then placed on a cold copper plate (T = −5°C) within a chest freezer, insulated on all sides, to allow directional solidification from the bottom up. The resulting samples were nearly pore-free (<1%) with a uniform grain size of approximately 1 mm (due to subsequent grain growth at the relatively warm −5°C). Samples were made intentionally oversized; after removal from the mold, they were cut down to precise dimensions (50 × 50 × 100 mm) with a microtome located in a cold room (T = −17°C). The ice sliding surfaces were roughened with 100 grit sandpaper just before loading into the apparatus. Experiments were conducted in a double-direct shear configuration (Figure 1 inset) using an ambient-pressure, servo-controlled biaxial apparatus (McCarthy et al., 2016). Experiments were conducted over a range of temperatures (−16 < T (°C) < −2) that were controlled via a circulating-fluid cryostat. In the experiments, the central polycrystalline ice block (50 × 50 × 100 mm) was slid between two stationary blocks of Barre granite (50 × 50 × 30 mm) such that the nominal area of contact (50 × 50 mm on each of the two sliding faces) was constant during sliding. The Barre granite stationary blocks had a grain size of 1-5 mm, and their surfaces were polished with a surface grinder to a surface roughness (Ra) of 2.8 ± 2 μm, as determined by a profilometer. The roughened central ice block surfaces had Ra = 7 ± 1 μm. A horizontal piston pushing against a screw stop applied a constant normal stress that was maintained using force-feedback servo control. Shear stress was induced by a vertical ram that was servo-controlled in displacement feedback using a computer-controlled driving program. The force applied and the displacement travelled in each direction were measured by load cells and displacement transducers, respectively, mounted outside the cryostat. The stiffness of the apparatus and sample assembly, K, was determined in situ by two methods. The first was the conventional method of fitting a line to the plot of shear stress vs. displacement during the initial moments of the experiment. This run-in or ramp stiffness was measured for eleven of the twelve experiments and is provided in Table 1 as K_ramp, with an average value of 0.47 ± 0.23 kPa/μm. Stiffness was also determined later in the experiment using a velocity step and solving for K (rather than assuming a constant).
Such values are called K* and are provided in Table 2. The two types of measurements are compared in Figure 7H. We combined a constant load point velocity with sinusoidal oscillations (Figure 1A). In every experiment, a constant velocity was first applied for approximately 3 mm of displacement to precondition the sample such that steady-state friction was reached. After that, multiple amplitudes and periods were applied in succession (from short to long) to study the frictional response (Table 1). The load point velocity V(t), described as a function of forcing period P and amplitude A, is:

V(t) = V_m + A sin(2πt/P),    (1)

where V_m is the median driving velocity. Specifically, two different driving protocols were used. Experiments C28 to C33 used a single median driving velocity (V_m = 10 μm/s) and cycled through three periods (1, 10, and 100 s) with three amplitudes each (10, 5, and 2 μm/s, which are 100, 50, and 20% of V_m), such that at the highest amplitude velocities slowed to zero once per cycle, but the sample never moved backward. Experiments C31 and C33 further explored 5 and 50 s periods. Experiments C34 to C44 tested only two periods (10 and 100 s) and two amplitudes (100 and 50% of V_m) but employed two median driving velocities (1 and 10 μm/s), such that a full set of oscillations was conducted at 1 μm/s, the velocity was increased to 10 μm/s, and another full set of oscillations was repeated (Figure 1). The transition between the two rates represents a velocity up-step (Figure 1A inset), which was used to measure the rate-state parameters described in Eqs 2-4. During most experiments, a constant normal stress of ~0.1 MPa was applied, with the exception of C41 and C44, during which we first ran the two-velocity cycling program under 0.5 MPa normal stress and then repeated the program under 1 MPa normal stress. In total, 10 samples were tested, with runs covering 107 different combinations of V_m, T, P, and A. Some conditions were additionally repeated to confirm reproducibility, as discussed in the Reproducibility section and in the Supplementary Material.

TABLE 1 | List of experiments, intended driving conditions*, steady-state friction measured at the end of the loading ramp (prior to oscillations), and run-in stiffness, K_ramp (determined from the slope of the ramp). The a and b runs for samples C41 and C44 represent runs at two different normal stresses on the sample; all other conditions were the same. †K_ramp at C41b could not be measured because the system was not fully unloaded before the b run was initiated. *Fitted velocities were 1.1 and 11.1 μm/s.

Temperature during an experiment was held constant with a circulating-fluid cryostat described in McCarthy et al. (2016). However, several improvements have been incorporated into the temperature control and monitoring system since those previous works. A resistance temperature detector (RTD) embedded in the rock monitored temperature at the sliding interface between ice and rock. The RTD has an error of ±0.1°C, a significant improvement over the previously used thermocouples. Additionally, here the bottom plate of the cryostat was made of an insulating material (polycarbonate) so that external heating did not transfer to the stationary steel blocks holding the rock samples. Finally, we prechilled all rock and metal in the cryostat overnight at the desired testing temperature before each experiment. These three changes created a system that achieved and held a desired temperature more efficiently than our previous cryostat.
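As a minimal illustration of the driving protocol in Eq. 1, one period/amplitude combination of the load point velocity history can be generated as follows (Python; the specific values are examples drawn from the protocol described above, not a reproduction of any particular run).

import numpy as np

def load_point_velocity(t, v_m, amp, period):
    # Eq. 1: sinusoidal oscillation about the median driving velocity
    return v_m + amp * np.sin(2.0 * np.pi * t / period)

t = np.linspace(0.0, 300.0, 3001)  # 300 s sampled every 0.1 s
v = load_point_velocity(t, v_m=10.0, amp=10.0, period=100.0)
# with amp equal to 100% of V_m, the velocity just reaches zero once per cycle,
# so the sample slows to a momentary hold but never moves backward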
An example of the temperature control throughout the experiment is provided in the Supplementary Material (Supplementary Figure S2B).

INVERSION FOR RATE AND STATE FRICTION PARAMETERS AND FORWARD MODELING
The empirically derived rate-and-state friction (RSF) formulation has successfully been used to characterize frictional slip on faults and has been the common methodology for describing earthquake physics for the last 40 years (Dieterich, 1979, 1981; Ruina, 1983). For more recent treatments of RSF, one can turn to works by Marone (1998) and Scholz (2019) and the numerous references therein. RSF has previously been used to describe experimental studies of ice friction (Fortt and Schulson, 2009; Zoet et al., 2013; McCarthy et al., 2017; Zoet and Iverson, 2018). The observation of basal seismicity caused by stick-slip, which requires rate-weakening behavior, provides the license for application of the RSF formulation in the field of glaciology as well, and it has recently been employed to describe sliding behavior in Antarctic ice streams (e.g., Zoet et al., 2013; Lipovsky and Dunham, 2017; Lipovsky et al., 2019). Therefore, variations between smooth (stable) sliding and stick-slip (unstable) sliding in the cyclically forced ice friction experiments described here are analyzed using rate-and-state friction, which describes frictional strength as a function of both sliding velocity V (the relative slip rate across a contact interface) and state θ (which at steady state is the lifetime of asperity contact). In the formulation, the evolution of friction is controlled by two coupled equations, the first of which is:

μ = τ/σ_n = μ_0 + a ln(V/V_0) + b ln(V_0 θ/D_C),    (2)

in which τ and σ_n are shear and normal stresses, respectively, a is the "direct effect" accounting for variations in frictional strength due to changes in slip rate from a reference velocity, b is the "evolution effect" that determines the change in friction due to evolution of state from a reference steady state, and D_C is the critical slip distance that is needed for the system to evolve from one steady state to another (Dieterich, 1979; Marone, 1998). Parameters a, b, and D_C are determined from experimental velocity step data (Figure 1 inset), and μ_0 and V_0 are reference values such that μ_0 is the reference friction at reference velocity V_0. The two forms of the time evolution of state are:

dθ/dt = 1 − Vθ/D_C,    (3)

the Aging Law (Dieterich, 1978), and

dθ/dt = −(Vθ/D_C) ln(Vθ/D_C),    (4)

the Slip Law (Ruina, 1983). The Aging Law and Slip Law are similar in that at steady-state sliding both give Vθ/D_C = 1. They differ in that the Aging Law describes state evolution during stationary contact, while the Slip Law describes state changing only during slip. Hooke's law applied to a single-degree-of-freedom spring slider is used to approximate the elastic response of the apparatus (and sample), which, combined with the rate-and-state friction equations, allows experimental data to be analyzed (e.g., Marone, 1998). In terms of velocity and the friction coefficient, the relation is:

dμ/dt = k (V_L − V),    (5)

where V_L is the load point velocity, inertia is negligible, and the frictional surface is assumed to be rigid, so that all elastic deformation is included in k, the spring constant of the loading environment, expressed as the apparatus stiffness (herein K_ramp or K*) normalized by normal stress σ_n and thus having units of 1/length. Previous experimental studies introduced a critical forcing period of the oscillating velocity, which determines the stability transition of the system.
There are two timescales that have been proposed to govern the response of sliding to a given forcing frequency. One is the natural period of the spring-slider system. There have not been enough experiments run at different conditions (e.g., stable sliding vs. stick-slip) to solve for all of the constants in this relationship, but several papers which we reference have shown that the critical period must be proportional to the critical slip distance D_C and the load point velocity V as (Rice and Ruina, 1983; Boettcher and Marone, 2004; Savage and Marone, 2007; van der Elst and Savage, 2015):

P_c ∝ D_C/V.    (6)

When examining the dynamic response of a system to periodic or transient triggering, the nucleation timescale has been identified as (Dieterich, 1994; Beeler and Lockner, 2003; van der Elst and Savage, 2015):

P_n = a σ_n/τ̇,    (7)

where the stressing rate τ̇ is the load point velocity multiplied by the system or apparatus stiffness K, which includes the stiffness of the apparatus and the sample. In a series of experiments with faults subjected to load oscillations, Beeler and Lockner (2003) found that when P_n was shorter than the forcing period, the maximum seismicity rate correlated with the maximum stressing rate, but when P_n was longer than the forcing period, maximum seismicity correlated with peak stress amplitude. An additional timescale of consideration in a periodically forced system is the forcing period relative to the natural oscillation period, which determines whether the system may oscillate harmonically or be damped. The natural oscillation period is controlled in part by the critical stiffness K_C, which is defined as:

K_C = (b − a) σ_n/D_C,    (8)

such that steady, stable sliding occurs when the system stiffness exceeds the critical stiffness, K > K_C (Rice and Ruina, 1983; Gu et al., 1984; Scholz, 1998; Gomberg et al., 2001). This critical stiffness relationship was more recently explored and confirmed in a laboratory study of debris-laden ice sliding over rock (Zoet et al., 2020). The relative values of a and b from Eq. 2 determine the stability of a system. Determined from a step up in velocity (Figure 1A inset), a characterizes the response immediately after a velocity change, from the steady-state friction value to the peak. The term b represents the change in friction from the peak to the new steady state at the new velocity. When a − b ≥ 0, the friction is termed rate-strengthening and corresponds to stable, smooth sliding. However, when a velocity up-step results in a lower steady-state friction (a − b < 0), the system is rate-weakening and conditionally unstable. This is the necessary condition for stick-slip behavior (e.g., Scholz, 1998). Stress decreases during a slip event, and thus healing is also needed to facilitate strength recovery for repeated slip events (e.g., Carpenter et al., 2011). Velocity up-steps from the raw data were analyzed with a nonlinear least-squares fitting routine for a spring slider with spring constant k. In the fitting GUI (RSFit3000), slider velocity, the friction coefficient (Eq. 2), both the Aging (Eq. 3) and Slip (Eq. 4) descriptions of state, and Eq. 5 are cast as coupled ordinary differential equations (ODEs) (Skarbek and Savage, 2019). Although recent studies have shown that the Slip Law does a better job of describing experimental data (Bhattacharya et al., 2015), we provide fits for both for comparison. A single state variable was used, and the fitting program minimizes the misfit to the data given user-supplied initial guesses for steady-state friction, a, b, and D_C.
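As a simple cross-check on such inversions, the combined parameter a − b can be estimated directly from the change in steady-state friction across a velocity step, since at steady state Eq. 2 gives Δμ_ss = (a − b) ln(V_2/V_1). The values in the sketch below are illustrative placeholders, not measurements from Table 2.

import math

def a_minus_b(mu_ss_before, mu_ss_after, v_before, v_after):
    # steady-state form of Eq. 2: delta(mu_ss) = (a - b) * ln(V2 / V1)
    return (mu_ss_after - mu_ss_before) / math.log(v_after / v_before)

# hypothetical steady-state friction levels across a 1 -> 10 um/s step
print(a_minus_b(0.55, 0.57, 1.0, 10.0))   # ~0.0087, i.e. velocity strengthening (a - b > 0)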
The standard deviation of the combined parameter (a − b) in the GUI is computed using the covariance between a and b (Skarbek and Savage, 2019; their Eq. 7). Since recent studies demonstrate evolution of RSF parameters with displacement and slip velocity (Ikari and Saffer, 2011; Ikari et al., 2013), we additionally solved for k (provided in Table 2 as K*, such that K* = kσ_n). For more discussion of this and for additional details about the RSFit3000 fitting GUI, see Skarbek and Savage (2019). Finally, a weighting function was included such that the weight vector takes on values greater than unity at the velocity step and decays exponentially with load point displacement (Reinen and Weeks, 1993; Blanpied et al., 1998). A forward model was created that uses a single-degree-of-freedom elastic slider block (Eq. 5) and Eqs. 2, 8, with the sinusoidal load point velocity of Eq. 1 and the individual fitted parameters provided in Table 2, to predict the frictional response of the system at desired periods and amplitudes. Inertia is ignored in the forward model.

RESULTS
The results from these periodic forcing experiments, the first conducted in our apparatus, demonstrate first and foremost that, by simply changing the frequency and amplitude of forcing, the full range of frictional behavior can be observed within a single experiment (Figure 2).

[Figure 2 caption, panels B-F: (B) at long period at 50% of V_m the response is slight, with a small uptick in friction occurring between the low velocity point (t1) and peak acceleration (t3) (C39); (C) at shorter period (10 s) both 100% of V_m and 50% of V_m are modulated, but not with a simple sine wave (C39); (D) a high-amplitude event or slow-slip event, which resembles a relaxation hold superimposed on a sine wave (C39); (E) a double event (one fast, one slow) within a single cycle (C44b); (F) pronounced stick-slip events, with almost total stress drop, occurring every other cycle (period doubling) 1 s before maximum velocity (C41b).]

At small forcing amplitudes (20% of V_m) at the longest and shortest periods, we see no discernible modulation of the friction coefficient (Figure 2A) and erratic oscillation that cannot keep up with the forcing (Figure 3A), respectively. For many conditions, particularly at long period, the response to a sinusoidal driving velocity is an in-phase sinusoid in friction (left side of Figure 2D). At the longest period (100 s), a high-amplitude event during each cycle is observed in the frictional response (Figure 2D). For this event and the transitional stage in Figure 2C, the maxima are not exactly in phase with the forcing. Rather, the minima occur just after the low velocity and the maxima just before the high velocity, so the response always falls within the increasing-velocity leg of the cycle. A similar observation of peak friction prior to peak velocity was made in a modeling study that used a field-derived diurnal velocity signal (Zoet et al., 2021). Starting from the median point in velocity and continuing with decreasing velocity (Figure 2C), friction begins to relax. Just after velocity increases above zero (t1), friction responds rapidly to a peak value (at t2) and then rapidly evolves back to the background oscillation value. This slow-slip event thus resembles a slide-hold-slide superimposed on a sinusoid (discussed further in the Amplitude and Period Effects section and in the Discussion).
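For reference, a minimal quasi-static implementation of the forward model introduced above (friction law Eq. 2, Slip-law state evolution Eq. 4, spring-slider coupling Eq. 5, and sinusoidal load point velocity Eq. 1; no inertia) is sketched below. The parameter values are illustrative placeholders, not the fitted values of Table 2.

import numpy as np
from scipy.integrate import solve_ivp

# illustrative rate-state parameters (placeholders, not Table 2 fits)
a, b, Dc = 0.02, 0.015, 5e-6        # -, -, m
mu0, V0 = 0.55, 1e-5                # reference friction at reference velocity (m/s)
k = 5e4                             # stiffness normalized by normal stress (1/m), i.e. K*/sigma_n
Vm, A, P = 1e-5, 1e-5, 100.0        # median velocity, amplitude (m/s), forcing period (s)

def slider(t, y):
    mu, theta = y
    # invert Eq. 2 for the slider velocity
    V = V0 * np.exp((mu - mu0 - b * np.log(V0 * theta / Dc)) / a)
    VL = Vm + A * np.sin(2 * np.pi * t / P)               # Eq. 1
    dmu = k * (VL - V)                                     # Eq. 5
    dtheta = -(V * theta / Dc) * np.log(V * theta / Dc)    # Slip Law (Eq. 4)
    return [dmu, dtheta]

y0 = [mu0, Dc / Vm]                                        # start at steady state for V = Vm
sol = solve_ivp(slider, (0.0, 3 * P), y0, max_step=0.05, rtol=1e-8)
# sol.y[0] is the modeled friction coefficient time series

Because inertia is omitted, a sketch of this kind can reproduce smooth modulation and slow-slip-like events but, as noted above, not the large dynamic stress drops of true stick-slip.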
At some conditions slipping events occur twice per cycle (Figure 2E), and at yet other conditions (high amplitude, high normal stress) audible stick-slips occur with significant stress drops, sometimes skipping cycles (period doubling; Figure 2F). Herein we quantify specific effects related to systematically changing temperature, amplitude, normal stress, and median forcing velocity.

Temperature Effects
Under the range of conditions explored here, in which homologous temperatures T/T_m were quite high, the primary effect of temperature was a general decrease in the friction value. The average friction values of these data, which are determined from the average of values prior to and after the oscillations, are consistent with the steady-state friction reported in a previous study (McCarthy et al., 2017), in which samples were prepared identically but were tested under constant load point velocities instead of oscillations. In the McCarthy et al. (2017) study, a linear fit to the data described an inverse dependence of friction on temperature of −0.0182/°C. Based on other studies of ice-on-ice friction, we do not anticipate this linearity to continue indefinitely (Beeman et al., 1988; Schulson and Fortt, 2012). At approximately 0.75 T/T_m (−70°C) the friction of ice flattens to between μ = 0.6-0.9, without strong temperature dependence at homologous temperatures lower than that. The one exception to consistency here is the lowest temperature measured in this study, which deviates from the previous linear temperature dependence. It is not clear whether this discrepancy in Figure 3D at T = −16.4°C is a reduction of friction due to oscillations or is simply due to uncertainty in measuring temperature in our previous study (which used thermocouples). Apart from this control on average friction, temperature effects on the style or amplitude of the frictional response are negligible at low normal stress over this temperature range (Figure 3). There is no temperature dependence of the amplitude of the shear stress oscillations (Figures 3B,C) and, as shown in the Amplitude and Period Effects and Velocity Effects sections, no discernable temperature dependence of healing or of the rate dependence (a − b), respectively. There is also no discernable effect of temperature on the individual rate-state parameters (Table 2) within this admittedly narrow range of temperatures.

Amplitude and Period Effects
Due to the similarity in form to slide-hold-slides, we measured the mid-to-peak amplitude of those oscillatory friction data that approach and reach zero velocity (Figure 4 inset). Although the peak level of friction upon reloading is more rounded than in typical slide-hold-slides, we use the maximum value during the cycle. The midpoint is determined by drawing a straight line between the steady-state friction values before and after the oscillations. As shown in Figure 4, the mid-to-peak amplitude of the oscillatory response is clearly a function of the period of forcing. The response resembles, in both magnitude and slope, the frictional strengthening determined from slide-hold-slide experiments in a previous study (McCarthy et al., 2017). A plot of amplitude vs. temperature provided in the Supplementary Material demonstrates no clear temperature dependence of the amplitude of the frictional response.
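Because the mid-to-peak amplitudes behave like slide-hold-slide healing, their period dependence can be summarized with a log-linear fit analogous to the healing rate β = Δμ/log(time) used later in the Discussion. The sketch below uses invented example values, not the Figure 4 data.

import numpy as np

# hypothetical mid-to-peak friction amplitudes at three forcing periods (s)
periods = np.array([1.0, 10.0, 100.0])
delta_mu = np.array([0.02, 0.05, 0.08])

# slope of delta_mu vs log10(period) plays the role of a healing rate beta
beta, intercept = np.polyfit(np.log10(periods), delta_mu, 1)
print(f"beta ~ {beta:.3f} per decade of period")   # ~0.030 for these made-up values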
Normal Stress Effects
In Figures 5, 6 we document sliding behavior over a range of normal stresses (0.1-1 MPa) at T = −2°C (Figure 5) and −5°C (Figure 6). Shear stress increases linearly with normal stress at both of these temperatures (insets), consistent with Coulomb behavior. At the lowest normal stress (0.1 MPa), for both temperatures, smooth sliding is observed under most forcings, similar to the data in Figure 3. However, at elevated normal stresses of 0.5 and 1 MPa the data show sudden stress drops that were accompanied by audible pops during the experiment. In Figure 6B, the response at σ_n = 1 MPa demonstrates an event every other cycle (i.e., period doubling), with almost complete stress release during each event. These are stick-slips that occur ~1 s before peak velocity (Figure 2F). The events cause the piston and sample to jolt forward at almost 40 μm/s (double the programmed rate). Both the 0.5 and 1 MPa normal stress datasets also show a smaller event at the low-velocity point. Stick-slip instabilities in the lab that are accompanied by audible high-velocity events are analogous to crustal earthquakes (Brace and Byerlee, 1966; Tullis, 1988; Wong and Zhao, 1990) or stick-slip events in glaciers (Helmstetter et al., 2015; Podolskiy and Walter, 2016).

Velocity Effects
During experiments C34-C44, we employed a positive step in load point velocity from 1 μm/s to 10 μm/s at approximately the halfway point of the experiment (Figure 1 inset). Using a least-squares inversion (Skarbek and Savage, 2019), we fit Eq. 2 and both the Aging (Eq. 3) and Slip (Eq. 4) forms of state evolution to the velocity steps (Figure 7). Since filtering has the effect of smoothing out spikes and lowering peak amplitudes, we here use raw data (black). Curve fits are shown in green (the Aging Law) and red (the Slip Law). The parameters so determined are provided in Table 2. As shown in Figure 7, both laws do a good job of fitting the steady-state friction (the y-intercept), the stiffness (the upslope of the peak), and a and b. The only apparent difference is that the Slip Law consistently provides a larger D_C value than the Aging Law (Figure 7H). There is no significant dependence of D_C on temperature or normal stress. Values for stiffness K (kPa/μm), as measured from the slope of the initial ramps, K_ramp (gray circles; Table 1), and as fit from the velocity step program, K* (black squares; Table 2), are shown in Figure 7I. The fit values (measured at the up-step during the middle of the experiment) are consistently stiffer than the ramp-in values. The difference is likely due to a difference in the maturity of the sliding interface between the beginning of the experiment and that reached during steady state (Skarbek et al., 2022). No apparent temperature dependence was observed in either measurement of K. A plot of (a − b) vs. temperature is shown in Figure 8; at all temperatures and normal stresses in the study, the velocity step analysis shows (a − b) values that are velocity strengthening.

Reproducibility
Although not every set of conditions was reproduced, select temperatures and runs were performed in duplicate to test the reproducibility of the response. An example and discussion of this are provided in the Supplementary Material.

Forward Model Results
In Figure 9, we show forward modeling results of a slider block with measured RSF parameters (Table 2) and a sinusoidal load point velocity (Eq. 1)
at applicable periods and amplitudes, compared to the measured experimental response from three runs, which were selected for their variations in temperature and normal stress. The values used in the forward model were those determined from the Slip-law fits to the velocity steps (Table 2). In all three pairs of figures, we see that the model predicts, with only small differences, the variations in frictional response. One significant deviation occurs at the far left of each figure. Just prior to the high-frequency oscillations was a down-step in median velocity from the 10 μm/s ramp to the 1 μm/s branch of the run (Figure 1). The overall experimental response to the down velocity step is a direct drop and gradual evolution to a new background state, with the oscillations superimposed. These down-step conditions were not incorporated into the forward model. The predictions are striking in their ability to capture both form and amplitude. Since the model does not include inertia, it does not predict the large high-frequency stick-slips with stress drops (Figure 9C). In order to predict such behavior, a model with inertia must be used (Im et al., 2007).

DISCUSSION
In these experiments, we see that oscillatory modulation of frictional sliding depends on a combination of the amplitude and frequency of the load velocity, the normal stress, and the stiffness of the system. Temperature and median velocity play only minor roles. We categorize the responses into four groups: no modulation, smooth modulation, slow-slip events (which are ostensibly relaxation holds superimposed on a sine wave), and stick-slips. No measurable change in frictional sliding is found for conditions of low normal stress (100 kPa) with low amplitude (less than 50% of V_m) and long period (100 s). We interpret this to mean that the stressing rate is slow enough throughout the modulation not to perturb the average friction. At higher normal stresses and smaller amplitudes, we see smooth modulation. At longer periods and high amplitudes, healing is activated and slow-slip events occur. As indicated by the dashed lines in Figure 1B, the amplitudes of the frictional response at V_m = 1 μm/s and V_m = 10 μm/s are nearly the same (Δμ ~ 0.125) when the amplitude of the forcing velocity is 100% of V_m. This demonstrates that the amplitude of the frictional response is not proportional to the oscillation amplitude of the driving velocity, but rather to the ratio of the oscillation amplitude to the median driving velocity, and in particular is sensitive to whether the velocity approached or equaled zero, as is the case during a hold. We emphasize that a full spectrum of slip responses, from steady sliding to stick-slip, can be generated by small changes in our driving conditions. However, fitting to the velocity steps in our experiments shows that the frictional rate parameters are velocity strengthening for all conditions tested here. Although we did not test a wide range of conditions, previous work has shown that ice friction analyzed in this way is generally velocity strengthening over a wide range of velocities and temperatures (Zoet et al., 2013). In a companion paper, however, fitting to the oscillations directly produced dominantly velocity-weakening behavior (Skarbek et al., 2022). Although the a − b values are slightly positive and within a range commonly seen in friction experiments, the absolute values of a and b are large compared to most studies of rock at lower homologous temperature. These high values may be in part due to high healing.
In our previous study (McCarthy et al., 2017), we demonstrated through slide-hold-slide experiments that ice exhibits strong healing during holds. Healing is typically described in the rock mechanics literature as β = Δμ/log(time). The measured healing (β) of ice on rock from our previous study was at least an order of magnitude greater than that of rocks, despite comparable steady-state friction values. Similarly high healing was observed in other ice-on-rock and ice-on-till friction studies (Zoet et al., 2013; Zoet and Iverson, 2018). Due to the high homologous temperature of ice at these conditions (which are consistent with terrestrial glaciers and ice streams), healing has been attributed to changes in the real area of contact accomplished by high-temperature viscous deformation (Kennedy et al., 2000; Schulson and Fortt, 2012; McCarthy et al., 2017) and to pressure-enhanced melting at asperities (Zoet and Iverson, 2018). Although laboratory healing rates for ice are much higher than for rock, fault sliding behavior may be similarly dictated by healing rates. Faults have been shown to be sensitive to oscillating stresses at a variety of frequencies, such as seismic waves, tidal stresses, and seasonal loading due to snowpack, groundwater, and surface water fluctuations (Hill et al., 1993; Gomberg et al., 2001; Heki, 2003; Saar and Manga, 2003; Cochran et al., 2004; Brodsky and Prejean, 2005; Johnson et al., 2017). On faults like the San Andreas, low-frequency earthquakes and tremor in the creeping section have been shown to be sensitive to tides (Thomas et al., 2009; van der Elst et al., 2016). These events are mostly located below 20 km and might be enhanced by higher healing rates at relatively higher homologous temperatures than in more shallow portions of the fault. We suggest that more experiments at high homologous temperatures on common fault rocks might further demonstrate the influence that enhanced fault healing at high temperature could have on fault slip style towards the brittle/ductile transition and below. We postulate that the frictional response of ice can be thought of as a competition between stressing rate and healing rate. When the stressing rate is modest compared to the healing rate, little healing occurs, and the sliding follows the forcing oscillation. When the stressing rate is low, for instance at longer periods during the approach to zero velocity, frictional healing has more time to be effective, and healing-induced slow-slip events occur. Only at the highest stressing rates do we see true stick-slip events. As shown in Figure 2, these occur when the normal stress is high and when velocity is increasing at a higher rate (i.e., where the peak acceleration is high, not the peak velocity, which is the same for almost all experiments: ~20 μm/s), which is why stick-slip events are seen in the 10 s periods but rarely in the 100 s periods. Additionally, if one compares the frictional amplitudes in Figure 1B, the response to a velocity amplitude of 1 μm/s is the same if not larger than that at 10 μm/s. So we posit that it is the peak acceleration relative to the background velocity that is also important. This represents a slight modification of the definition of stressing rate provided in Eq. 7: instead of K*V, here we employ a loading rate equal to K*(dV/dt)/V_m, i.e., the stiffness multiplied by the load point acceleration and normalized by the median velocity. Figure 10 is a response "phase diagram" which qualitatively maps out the style of sliding (for the four types defined at the beginning of this section) in terms of stress vs. frictional forcing.
More specifically, we plot the change in friction Δμ normalized by the steady-state friction μ_SS (to remove temperature dependence) and multiplied by normal stress (hence "normalized shear stress amplitude") versus the peak frictional forcing, σ_n/(D_C K*(dV/dt)/V_m). The clustering of open squares on the figure shows that the large stick-slip events can occur over a variety of conditions and periods when the stressing rate is high relative to the average velocity. At almost all conditions, forward modeling with a simple 1D slider, a periodic forcing velocity, and rate-state parameters determined from velocity steps can predict the frictional response. However, the high stressing rate experiments, which cause stick-slips and large stress drops, cannot be predicted by the simple model. In a companion paper (Skarbek et al., 2022), we determine the rate-state parameters directly from the oscillatory data, instead of velocity steps. In that analysis, velocity-weakening parameters were measured from these same experiments. Previous studies have demonstrated that rate dependence can depend on velocity (Ikari and Saffer, 2011), so it should come as no surprise that a system with ever-changing velocity may have different frictional properties than those measured at steady state. The new method of determining parameters from oscillatory data not only allows significantly more information to be extracted from an experiment such as ours, it also provides a means of determining frictional properties under continuously varying sliding velocities.

[FIGURE 8 | Rate dependence of friction, a − b, vs. temperature, from experiments C34-C44, each measured from a velocity step from 1 μm/s to 10 μm/s using the GUI described in Skarbek and Savage (2019). Where error bars cannot be seen, they are smaller than the symbol size. Analysis of velocity steps indicates only velocity-strengthening behavior over this temperature/velocity range.]

Implications for Ice Sheets and Glaciers
Sliding under ice sheets and glaciers should be controlled by a similar competition between healing and strain rate. Depending on overburden conditions and on whether the temperature is near the pressure melting point, glaciers can exhibit a variety of normal stresses at the base. It is often assumed that melt present under ice streams results in a low effective normal stress on the order of 100 kPa (the stress at which the majority of experiments here were conducted). We posit that variations in frictional sliding behavior emerge from differences in velocity induced by the tides. Taking the assumptions provided by Winberry et al. (2009), the water height can be described as a resistive force acting against a constant upstream force, such that the net force is sinusoidal. When the tide is high, the resistance is high and velocity slows; when the tide is low, resistance is low and velocity increases. GPS field observations have confirmed along-flow variations in displacement relative to the mean that correspond to the tides, with decreasing amplitude and increasing phase lag observed farther upstream from the grounding line (Minchew et al., 2017). The amplitudes of downstream motion thus depend on local conditions. In places with a low median sliding rate compared to the tidally forced amplitude, our experimental results suggest that significant healing would occur during each cycle. In places additionally having localized high normal stress, we would expect unsteady sliding, as seen at Whillans Ice Stream.
Although there are length- and time-scaling differences between our lab experiments and nature, we achieve similar normal stress, temperature, and median sliding rate conditions. Estimates of stress accumulation over the recurrence time for Whillans yield a healing rate of 0.029/log(s) (Winberry et al., 2009). At temperatures greater than −5°C, this is in line with our lab estimates of healing at 100 kPa normal stress. The similarities in healing rate are somewhat surprising, as our sample configuration of ice on rock is quite simplistic compared to the ice-till-bedrock interface beneath glaciers. That the timing of events in our experiments (just before peak velocity and just after low velocity) corresponds to events timed just before low tide and just after high tide is additionally intriguing. One important difference between our laboratory conditions and natural conditions is that the period of our forcing signal (1-100 s) is orders of magnitude shorter than the period of the diurnal tide experienced by many ice sheets (~10^4 s). Other differences include the roughness of the sliding interfaces and D_C, which here are on the order of microns but in the field are possibly much greater, although there is much uncertainty (Lipovsky and Dunham, 2017, and references therein). The base of ice sheets should also contain a much wider range of asperity sizes. It is difficult to ascertain at this stage how these results directly scale to the field setting. Additional experimental and observational studies are needed to understand the relative importance and scaling of the multiple factors that control ice stream flow and stability. Here we have explored specifically the effects of periodic velocity variations on the frictional response and found that they can be modeled well using a forward model that includes rate- and state-dependent friction, elasticity, and a sinusoidal driving velocity. Future work will include more in-depth applications of the model to tidally modulated ice stream and glacier slip.

CONCLUSION
The results from this study show that rate- and state-dependent friction formulations can describe oscillatory frictional behavior in ice. The Aging and Slip forms of state evolution appear to be equally suitable for describing the velocity steps in this study. The ability of small variations in velocity to create large variations in the frictional response is likely due to high healing at the ice-rock interface, which at this high homologous temperature is orders of magnitude greater than healing values measured in low-temperature studies of rocks. Both the experimental ice-on-rock system studied here (which replicates most of the conditions of ice streams) and natural ice streams are more influenced by the tides than their land-based, rocky brethren (Vidale et al., 1998). Although analysis of velocity steps shows velocity-strengthening behavior at all conditions tested here, distinct, repeatable stick-slips were observed at some conditions. A companion paper (Skarbek et al., 2022) provides a way of analyzing oscillatory data directly, and in so doing, determined velocity-weakening behavior.

DATA AVAILABILITY STATEMENT
The datasets generated by this study can be downloaded from USAP-DC at https://doi.org/10.15784/601497.
9,945.6
2021-12-24T00:00:00.000
[ "Environmental Science", "Geology" ]
Stochastic multi-scale models of competition within heterogeneous cellular populations: Simulation methods and mean-field analysis
We propose a modelling framework to analyse the stochastic behaviour of heterogeneous, multi-scale cellular populations. We illustrate our methodology with a particular example in which we study a population with an oxygen-regulated proliferation rate. Our formulation is based on an age-dependent stochastic process. Cells within the population are characterised by their age (i.e., the time elapsed since they were born). The age-dependent (oxygen-regulated) birth rate is given by a stochastic model of oxygen-dependent cell cycle progression. Once the birth rate is determined, we formulate an age-dependent birth-and-death process, which dictates the time evolution of the cell population. The population is under a feedback loop which controls its steady-state size (carrying capacity): cells consume oxygen, which in turn fuels cell proliferation. We show that our stochastic model of cell cycle progression allows for heterogeneity within the cell population induced by stochastic effects. Such heterogeneous behaviour is reflected in variations in the proliferation rate. Within this set-up, we have established three main results. First, we have shown that the age at the G1/S transition, which essentially determines the birth rate, exhibits a remarkably simple scaling behaviour. Besides the fact that this simple behaviour emerges from a rather complex model, this allows for a huge simplification of our numerical methodology. A further result is the observation that heterogeneous populations undergo an internal process of quasi-neutral competition. Finally, we investigated the effects of cell-cycle-phase-dependent therapies (such as radiation therapy) on heterogeneous populations. In particular, we have studied the case in which the population contains a quiescent sub-population. Our mean-field analysis and numerical simulations confirm that, if the survival fraction of the therapy is too high, rescue of the quiescent population occurs. This gives rise to the emergence of resistance to therapy, since the rescued population is less sensitive to therapy.
for phenotypes to increase their robustness as time progresses). These properties are exploited by tumours to increase their proliferative potential and resist therapies (Kitano, 2004). In addition to complex, non-linear interactions within individual cells, there exist intricate interactions between different components of biological systems at all levels: from complex signalling pathways and gene regulatory networks to complex non-local effects where perturbations at the whole-tissue level induce changes at the level of the intra-cellular pathways of individual cells (Alarcón et al., 2005; Ribba et al., 2006; Macklin et al., 2009; Osborne et al., 2010; Deisboeck et al., 2011; Powathil et al., 2013; Jagiella et al., 2016). These and other factors contribute towards a highly complex dynamics in biological tissues, which is an emergent property of all the layers of complexity involved. In parallel to model development, algorithms and analytic methods are being developed to allow for more efficient analysis and simulation of such models (Alarcón, 2014; Spill et al., 2015; de la Cruz et al., 2015).
In the case of cancer biology, the multi-scale interactions between intracellular changes at the genetic or molecular-pathway level and tissue-level heterogeneity can conspire to generate unfortunate effects such as resistance to therapy (Merlo et al., 2006; Gillies et al., 2012; Greaves and Maley, 2012; Chisholm et al., 2015; Asatryan and Komarova, 2016). Heterogeneity plays a major role in the emergence of drug resistance within tumours and can be of diverse types. There is heterogeneity in cell types due to an increased gene mutation rate as a consequence of genomic instability and other factors (Merlo et al., 2006; Greaves and Maley, 2012; Chisholm et al., 2015; Asatryan and Komarova, 2016). Heterogeneity can also be caused by the complexity of the tumour microenvironment (Alarcón et al., 2003; Gillies et al., 2012; Chisholm et al., 2015), in which diverse factors such as tumoural or immune cells (Kalluri and Zeisberg, 2006; Grivennikov et al., 2010), or the extracellular matrix and its physical properties, strongly influence cancer cell behaviour. Note that hypoxia is also known to change the tumour microenvironment. In either case, heterogeneity within the tumour creates the necessary conditions for resistant varieties to emerge and be selected upon the administration of a given therapy. The main aim of this paper is to analyse the properties of heterogeneous populations under the effects of fluctuations, both within the intracellular pathways which regulate (individual) cell behaviour and those associated with the intrinsic randomness due to the finite size of the population. To this purpose, we expand upon the stochastic multi-scale methodology developed in our previous work, where it was shown that such a system can be described by an age-structured birth-and-death process, instead of a branching process (Danesh et al., 2012; Durrett, 2013). The coupling between the intracellular and the birth-and-death dynamics is carried out through a novel method to obtain the birth rate from the stochastic cell-cycle model, based on a mean first-passage time approach. Cell proliferation is assumed to be activated when one or more of the proteins involved in the cell-cycle regulatory pathway hit a threshold. This view allows us to calculate the birth rate as a function of the age of the cell and the extracellular oxygen in terms of the associated mean first-passage time (MFPT) problem (Redner, 2001). The present approach differs from that of our previous work in that our treatment of the intracellular MFPT is done in terms of a large-deviations approach, the so-called optimal path theory (Freidlin and Wentzell, 1998; Bressloff and Newby, 2014). This methodology allows us to explore the effects of intrinsic fluctuations within the intracellular dynamics, in particular a model of the oxygen-regulated G1/S transition which dictates when cells are prepared to divide, as a source of heterogeneous behaviour: fluctuations induce variability in the birth rate within the population (even to the point of rendering some cells quiescent, i.e., stuck in G0), upon which a cell-cycle-dependent therapy acts as a selective pressure. This paper is organised as follows. Section 2 provides a summary of the structure of the multi-scale model. In Section 3, we give a detailed discussion of the intracellular dynamics, i.e., the stochastic model of the oxygen-regulated G1/S transition, and its analysis. In Section 4, we summarise the formulation of the age-structured birth-and-death process, the numerical simulation technique, and the mean-field analysis of a homogeneous population.
In Section 5, we discuss how noise within the intracellular dynamics induces heterogeneity in the population and analyse the stochastic dynamics of competition for a limited resource within such heterogeneous populations. In Section 6 we further study the effects of noise-induced heterogeneity on the emergence of drug resistance upon administration of a cell-cycle-specific therapy. Finally, in Section 7 we summarise our results and discuss our conclusions as well as avenues for future research.

2. Summary of the multi-scale model
Before proceeding with a detailed discussion of the different elements involved in the formulation of the stochastic multi-scale model, it is useful to provide a general overview of the overall structure of the model, which is closely related to that of the model proposed in Alarcón et al. (2005). The model we present in this article integrates phenomena characterised by different time scales, as schematically shown in Fig. 1. The model is intended to tackle the growth of, and competition between, cellular populations under the restriction of a finite amount of available resource (in this case, oxygen) supplied at a finite rate, S. The general approach used in this model is a natural generalisation of the standard continuous-time birth-and-death Markov process and its description via a Master Equation (Gardiner, 2009). As we will see, the consideration of the multi-scale character of the system, i.e., the inclusion of the physiological structure associated with the cell-cycle variables, introduces an age structure within the population: the birth rate depends on the age of the cell (i.e., the time elapsed since the last division), which determines, through the corresponding cell-cycle model, the cell-cycle status of the corresponding cells. The evolution of the concentration of oxygen, c(t) (resource scale, see Fig. 1), is modelled by Eq. (1), a balance between the oxygen supply rate, S, and consumption by the cells, where N_T is the number of cellular types consuming the resource c, and N_i(t), i = 1, …, N_T, is the number of cells of type i at time t. Note that, in general, N_i(t) is a stochastic process and, therefore, in principle Eq. (1) should be treated as a stochastic differential equation (Oksendal, 2003). The second sub-model considered in our multi-scale model, associated with the intracellular scale (see Fig. 1), is a stochastic model of oxygen-regulated cell-cycle progression. This sub-model is formulated using the standard techniques of chemical kinetics modelling (Gillespie, 1976), so that the mean-field limit of the stochastic model corresponds to the deterministic cell-cycle model formulated in Bedessem and Stephanou (2014). This model provides the physiological state of the cell in terms of the number of molecules of each protein involved in the cell cycle of a cell of a given age, a. From such a physiological state, we derive whether the G1/S transition has occurred. The cell-cycle status of a cell of age a is determined in terms of whether the abundance of certain proteins which activate the cell cycle (cyclins) has reached a certain threshold. In our particular case, if at age a the cyclin levels are below the corresponding threshold, the cell is still in G1. If, on the contrary, the prescribed threshold level has been reached, the cell has passed into S and is therefore ready to divide.
This implies that the probability of a cell having crossed the threshold of cyclin levels at age a can be formulated in terms of a mean first-passage time (MFPT) problem, in which one analyses the probability of a Markov process hitting a certain boundary (Redner, 2001; Gardiner, 2009). Unlike our earlier approach, based on approximating the full probability distribution of the stochastic cell-cycle model, in the present approach the passage time is (approximately) solved in terms of an optimal trajectory (path) approach (Freidlin and Wentzell, 1998; Bressloff and Newby, 2014). At the interface between the intracellular and cellular scales sits our model of the (age-dependent) birth rate, which defines the probability of birth per unit time (cellular scale) in terms of the cell-cycle variables (intracellular scale). The rate at which our cell-cycle model hits the cyclin activation threshold, i.e. the rate at which cells undergo the G1/S transition, is taken as proportional to the birth rate. In particular, the birth rate is taken to be a function of the age of the cell as well as the concentration of oxygen, since the oxygen abundance regulates the G1/S transition age, a_{G1/S}(c), i.e. the time (age) elapsed between the birth of a cell and its G1/S transition:

b(a, c) = \tau_p^{-1} H(a - a_{G1/S}(c)),   (2)

i.e. cell division occurs at a constant rate, \tau_p^{-1}, provided cells have undergone the G1/S transition, and H is the Heaviside function. In other words, we consider that the duration of the G1 phase is regulated by the cell-cycle model, whereas the duration of S-G2-M is a random variable, exponentially distributed with average duration equal to \tau_p (see Fig. 1). The third and last sub-model is that associated with the cellular scale. It corresponds to the dynamics of the cell population and is governed by the Master Equation for the probability density function of the number of cells (Gardiner, 2009). The stochastic process that describes the dynamics of the population of cells is an age-dependent birth-and-death process where the birth rate is given by Eq. (2), with a_{G1/S}(c) provided by the intracellular model. The death rate is, for simplicity, considered constant. As a consequence of the fact that the birth rate is age-dependent, our Multi-Scale Master Equation (MSME) does not take the standard form for unstructured populations; rather, it is an age-dependent Master Equation. A detailed description of each sub-model and its analysis is given in Sections 3 and 4. 3. Intracellular scale: stochastic model of the oxygen-regulated G1/S transition Biological background Cell proliferation is orchestrated by a complex network of protein and gene expression regulation, the so-called cell cycle, which accounts for the timely coordination of proliferation with growth and, by means of signalling cues such as growth factors, tissue function (Yao, 2014). Dysregulation of such an orderly organisation of cell proliferation is one of the main contributors to the aberrant behaviour observed in tumours (Weinberg, 2007). The cell cycle has the purpose of regulating the successive
Fig. 1. Schematic representation of the different elements that compose our multi-scale model. We show the different levels of biological organisation as well as the characteristic time scales associated with each of these layers: the resource scale, i.e. oxygen, which is supplied at a constant rate and consumed by the cell population; the intracellular scale, i.e.
oxygen-regulated cell cycle progression which determines the age-dependent birth rate into the cellular layer, and, finally, the cellular scale, which is associated to the stochastic population dynamics. activation of the so-called cyclin-dependent kinases (CDKs) which control the progression along the four phases of cycle: G1 (first gap phase), S (DNA replication), G2 (second gap phase), and M (mitosis) Goldbeter, 2009, 2011;Gerard et al., 2015). These four phases must be supplemented with a fifth, G0, which accounts for cells that are quiescent due to lack of stimulation (i.e. absence of growth factors, lack of basic nutrients, etc.) to proliferate. Recent models of the cell cycle organise the complex regulatory network into CDK modules, each centred around a cyclin-CDK complex which is key for the transition between the cell cycle phases (see, for example, Gerard and Goldbeter, 2009): cyclin D/CDK4-6 and cyclin E/CDK2 regulate progression during the G1 phase and elicit the G1/S transition, cyclin A/CDK2 promote progression during S phase and orchestrates the S/G2 transition, and, finally, cyclin B/CDK2 brings about the G2/M transition. The activity of each of these cyclin-CDK complexes is regulated in a timely manner, so that each phase of the cell cycle ensues at the proper time, by means of transcriptional regulation, post-transcriptional modifications (e.g. phosphorylation), and degradation (via ubiquitination) in which a large number of other components participate, including transcription factors, enzymes, ubiquitins, etc. In the present paper, we propose a coarse-grained description of the cell cycle phases by lumping S, G2, and M into one phase, so that we consider a two-phase model G1 and S-G2-M, as shown in Fig. 1, Alarcón et al. (2004). In particular, we consider that cells can only divide once they have entered the S-G2-M phase at a constant rate. Entry in S-G2-M is regulated by a (stochastic) model of the G1/S transition which takes into account the regulation of the duration of the G1 phase by hypoxia (lack of oxygen). The abundance of oxygen is known to be one of the factors that regulates the duration of the G 1 phase of the cell cycle. The issue of the regulation of the G 1 /S transition by the oxygen concentration has been the subject of several modelling studies (Alarcón et al., 2004;Bedessem and Stephanou, 2014). These models focus on the hypoxia-induced delay of the activation of the cyclins either through activation of cyclin inhibitors (Alarcón et al., 2004) or via up-regulation of the HIF-1α transcription factor (Bedessem and Stephanou, 2014). From the modelling point of view, both of them are mean-field models, thus neglecting fluctuations. In this section, we formulate a stochastic version of the model of Bedessem and Stephanou (2014), of which a schematic representation is shown in Fig. 2. HIF-1 mediates adaptive responses to lack of oxygen (Semenza, 2013). HIF-1 is a heterodimer consisting of two sub-units: HIF-1α and HIF-1β. Whilst the latter is constitutively expressed, HIF-1α is O 2 -regulated. In the presence of adequate oxygen availability is negatively regulated by the von Hippel-Lindau (VHL) tumour suppressor protein, which allows HIF-1α for degradation. VHL loss-of-function mutations are common in many types of tumours, which allows for de-regulated HIF-1α degradation (Semenza, 2013). 
HIF-1 is involved in a number of cellular responses including switch from oxidative phosphorylation to glycolysis, activation of angiogenic pathways, and inhibition of cell cycle progression (Bristow and Hill, 2008;Semenza, 2013). Mean-field model of the regulation of the G1/S transition by hypoxia Bedessem and Stephanou formulate a model of the effect of hypoxia (mediated by HIF-1) on the timing of the G1/S transition (Bedessem and Stephanou, 2014). The involvement of HIF-1 in cell cycle regulation is complex and not completely understood. There is evidence that HIF-1 delays entry into S(-G2-M) phase by activating p21, a CDK inhibitor (Koshiji et al., 2004;Ortmann et al., 2014). HIF-1 up-regulation of p21 mediates indirect down-regulation of cyclin E (Goda, 2003). Further to HIF-1 regulation of the cell cycle, there exists a feedback regulation of cell cycle proteins of HIF-1. Hubbia et al. (2014) report that CDK1 and CDK2 physically and functionally interact with HIF-1α: CDK1 down-regulates lysosomal degradation of HIF-1α, thus consolidating its stability and promoting its transcriptional activity. On the contrary, CDK2 activates lysosomal degradation of HIF-1α and promotes G1/S transition. Bedessem and Stephanou (2014) do not take into account all these issues and, for simplicity, chose to focus on the well-documented effect of HIF-1 on cyclin D (Wen et al., 2010;Ortmann et al., 2014). Bedessem and Stephanou (2014) model formulation is based on the following assumptions: 1. The G1/S is modelled by a biological switch which emerges from the mutual inhibition between a cyclin (in this case, cyclin E) and a ubiquitin complex (SCF complex): The latter marks the former for degradation whereas cyclin E mediates inactivation of the SCF complex. This mutual inhibition gives rise to a bistable situation in which two stable fixed points coexist, each associated with the G1 phase (high SCF activity, low cyclin E concentration) and the S-G2-M phase (low SCF activity, high cyclin E concentration). Activation and inactivation of the SCF complex are assumed to follow Michaelis-Menten kinetics. 2. As in previous models (Tyson and Novak, 2001;Novak and Tyson, 2004;Alarcón et al., 2004), the G1/S transition is brought about by triggering a saddle-node bifurcation, whereby the G1 phase fixed point collides with the unstable saddle, driving the system into S-G2-M fixed point. Two factors drive the system through this bifurcation: cell growth (logistic increase of the cell mass, Tyson and Novak, 2001) and activation of the E2F transcription factor. In Bedessem and Stephanou (2014), both factors are taken to up-regulate the transcription of cyclin E. 3. Activation of E2F is modelled in terms of the E2F-Retinoblastoma protein (RBP) switch (Lee et al., 2010). Briefly, E2F is captured (bound) by unphosphorylated RBP. Upon phosphorylation, RBP releases E2F which activates transcription of G1/Stransition promoting cyclins, such as cyclin E (Alberts et al., 2002). Further, E2F can be in unphosphorylated (active) form and phosphorylated (inactive) form. Following Novak and Tyson (2004), Bedessem and Stephanou assume that fraction of active E2F and RBP-bound E2F are in adiabatic equilibrium with unphosphorylated RBP and total free E2F (Bedessem and HIF−1a CycD RbNP E2FA CycE SCF Growth Hypoxia Fig. 2. Schematic representation of the elements involved in the model of hypoxiaregulated G 1 /S transition proposed by Bedessem and Stephanou (2014). 
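The mutual inhibition between cyclin E and the SCF complex described in assumption 1 above is what produces bistability. As a purely illustrative aid, the sketch below integrates a generic two-variable mutual-inhibition switch from two different initial conditions and shows convergence to two distinct stable states; the equations and parameters are a toy model, not the actual Bedessem and Stephanou (2014) system.

```python
import numpy as np

def rhs(x, y):
    """Generic mutual-inhibition switch (illustrative only): each species is
    produced at a rate repressed by the other and degraded linearly."""
    dx = 1.0 / (1.0 + y**4) - 0.5 * x
    dy = 1.0 / (1.0 + x**4) - 0.5 * y
    return dx, dy

def integrate(x, y, t_end=100.0, dt=0.01):
    """Simple forward-Euler integration to the attractor."""
    for _ in range(int(t_end / dt)):
        dx, dy = rhs(x, y)
        x, y = x + dt * dx, y + dt * dy
    return x, y

# Two initial conditions settle into two different stable fixed points,
# analogous to the G1 (high SCF, low CycE) and S-G2-M (low SCF, high CycE) states.
print(integrate(2.0, 0.1))   # high-x / low-y state
print(integrate(0.1, 2.0))   # low-x / high-y state
```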
Within the framework of this model, the negative-feedback between CycE and SCF is the key modelling ingredient for the system to emulate the behaviour of a cell during the G1/S transition. The relative balance between CycE (which promotes the G1/S transition) and SCF (G1/S transition inhibitor) is regulated by growth and hypoxia, so that the timing of the transition depends upon these two factors. These basic hypotheses are primarily based on previous models (Tyson and Novak, 2001;Novak and Tyson, 2004;Alarcón et al., 2004). The resulting pathway is schematically represented in Fig. 2. Stochastic G1/S transition model Based on the hypotheses summarised in Section 3.2, Bedessem and Stephanou (2014) formulated a mean-field model which is able to reproduce such behaviours as delay of the G1/S due lo lack of oxygen as well as hypoxia-induced quiescence. Here, we present a stochastic model (see Fig. 3 and Table 1), whose mean field limit is the model formulated in Bedessem and Stephanou (2014). We analyse this model using the stochastic quasi-steady state approximation we have developed in Alarcón (2014) and de la Cruz et al. (2015). The deterministic model formulated in Bedessem and Stephanou (2014), which we have briefly described in Section 3.2, is based on a series of reactions shown in Fig. 3, which include Michaelis-Menten kinetics for activation and inactivation of SCF complexes. Our stochastic model of the G1/S transition builds upon the stochastic (Markovian) description of the same set of reactions. Our model is predicated on the stochastic dynamics of the state vector being described by a Markov jump process (Grimmett and Stirzaker, 2001;Kampen, 2007), whereby the state of the system, X, changes by an amount r i when the elementary reaction i occurs. The waiting time between Markovian events is exponentially distributed, the process is characterised by the associated transition rates, i.e. ( ( + Δ ) = + | ( ) = ) = ( )Δ + (Δ ) P X a a X r X a X W X a O a i i 2 . Using law of mass action (Gillespie, 1976) as our basic modelling framework, the transition rates of each elementary process are given in Table 1. Once we have determined the transition rates associated with each elementary reaction (or channel), the dynamics of the system is given by the Chemical Master Equation of a (non-structured) Markov Process, ( ) X a : is the probability of the state vector of the system to be X at age, i.e. the time reckoned from the last division, a. The transition rates, ( ) W X i , the vectors r i (whose components are the variation of the number of each chemical species upon occurrence of reaction i) are given in Table 1 and determine the dynamics of the system. Even for moderately complex models, Eq. (3) has no solution in closed form. Therefore, in order to study the properties of the system one must resort to numerical simulation (Monte Carlo) or asymptotic approximations. In the next section, we present an asymptotic analysis based on a recently developed form of stochastic quasi-steady state approximation. Bedessem and Stephanou (2014). The reactions correspond (from top to bottom) to: hypoxia-inhibited synthesis and degradation of CycD, enzyme-catalysed, CycE-mediated inactivation of SCF, enzyme-catalysed activation of SCF, synthesis (regulated by growth and active E2F) and degradation (up-regulated by active SCF) of CycE, synthesis and degradation of Rb, and, last, synthesis and degradation of E2F. 
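Since the Chemical Master Equation (3) admits no closed-form solution, the paper resorts to Monte Carlo simulation. The following is a minimal sketch of Gillespie's direct method as it would be applied to the reaction channels of Table 1; because the table's propensities and stoichiometries are not reproduced here, the example runs a placeholder one-species birth-death network, and the actual CycD/SCF/CycE/Rb/E2F channels would be supplied in the same format.

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie(x0, stoich, propensities, t_end):
    """Direct-method SSA: `stoich` is a list of state-change vectors r_i and
    `propensities` a list of functions W_i(x); both would be read off Table 1
    of the paper (here they are placeholders)."""
    x, t = np.array(x0, dtype=float), 0.0
    while t < t_end:
        w = np.array([W(x) for W in propensities])
        w_tot = w.sum()
        if w_tot == 0.0:
            break
        t += rng.exponential(1.0 / w_tot)          # waiting time to next reaction
        i = rng.choice(len(w), p=w / w_tot)        # which reaction fires
        x += stoich[i]
    return x

# Toy example: synthesis and degradation of a single species (not the actual
# cell-cycle network, whose rates are given in Table 1).
stoich = [np.array([1.0]), np.array([-1.0])]
props = [lambda x: 5.0, lambda x: 0.1 * x[0]]
print(gillespie([0.0], stoich, props, t_end=100.0))
```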
The negative feedback (mutual inhibition) between SCF and CycE mediates bistable behaviour in this model of the G1/S transition. Some of the transition rates associated to the reactions shown in are not constant but depend on the number of molecules of chemical species i present at a particular time, X i . For the definition of the quantities X i , we refer the reader to Table 1. Table 1 Reaction probability per unit time, The mass is assumed to grow following a logistic law: . Furthermore, Following Novak and Tyson (2004), we assume that at each time, the active E2F, The equilibrium between E2F-Rb complexes; free E2F and free Rb is given by Novak and Tyson (2004): Variable Description X 1 , X 8 Number of Cyclin D and Cyclin E molecules, respectively X 2 , X 5 Number of inactive and active SCF molecules, respectively X 3 , X 6 Number of SCF-activating and SCF-inactivating enzyme molecules, respectively X 4 , X 7 Number of enzyme-inactive SCF and enzyme-active SCF complexes, respectively X 9 , X 10 Number of free RbP and E2F molecules, respectively we have developed a stochastic version of the classical QSS approximation, the so-called semi-classical QSS approximation (SCQSSA) which, within the framework of the optimal path theory, allows us to tackle systems which exhibit separation of time scales, such as enzyme-catalysed reactions. Since these types of reaction feature prominently in our stochastic model of the hypoxia-regulated G1/S transition (see Fig. 3), we will use the SCQSSA to analyse the effects of intrinsic noise on the stochastic model of the hypoxiaregulated G1/S transition (as determined by the transition rates shown in Table 1). This approximation allows us to study noiseinduced phenomena which are relevant for the timing of the G1/S transition and, therefore, bear upon the population dynamics. Following the SCQSSA methodology summarised in Appendix A, we derive the following set of equations which describe the optimal path associated with the stochastic G1/S transition model, Table 1: where a is the rescaled age variable = a kESt 7 (see Appendix B, Table B1). The oxygen dependencies enter the model though the [ ] H dependent parameter κ 2 (see Tables B.3-B.5). The reader is referred to Appendix A for a summary of the method and Appendix B for a detailed derivation of Eqs. (4)-(12) which hereafter is referred to as the semi-classical quasi-steady state approximation (SCQSSA) of the stochastic cell-cycle model and to Alarcón (2014) and de la Cruz et al. (2015) for a full account of the SCQSSA methodology. Note that there are other approximations which do not require the QSSA assumption, such as the one described in Kurtz (1978) and Ball et al. (2006). Furthermore, since we will be interested in the random effects associated to the enzyme-regulated dynamics of SCF, encapsulated in the parameters p 3 and p 6 , we have taken ( ) = p a 1 i for all ≠ i 3, 6. This corresponds to analysing the marginal distribution integrating out all the stochastic effects associated to all X i with ≠ i 3, 6. Stochastic behaviour of the G1/S model We now proceed to study the behaviour of the stochastic model of the oxygen-regulated G1/S transition. We pay special attention to those aspects in which we observe a departure of the stochastic system from the mean-field behaviour. In particular, we highlight the effects of modifying the relative abundance of SCFactivating and inactivating enzymes, including the ability of inducing oxygen-independent quiescence. 3.5.1. 
The relative abundance of the SCF activating and inactivating enzymes controls the timing of the G1/S transition In references Alarcón (2014) and de la Cruz et al. (2015), we have shown that, under SCQSSA conditions, the momenta p 3 and p 6 , i.e. the momenta coordinates associated with the SCF-activating and inactivating enzymes, respectively, are determined by the probability distribution of their initial (conserved) number. In particular, if we assume that the initial number of SCF-activating and inactivating enzyme molecules, E 1 and E 2 , is distributed over a population of cells following a Poisson distribution with parameter E, we have shown that (de la Cruz et al., 2015): . With this in mind, we can analyse the effect of changing the relative concentration of SCF-activating and inactivating enzymes on the timing of the G1/S transition. Our results are shown in Figs. 4 and 5. Fig. 4 illustrates that, for a fixed oxygen concentration, the G1/S transition is delayed by depriving the system of SCFactivating enzyme: as the ratio = p p e e / / 3 6 1 2 of SCF activating and deactivating enzyme increases, the G1/S transition takes longer to occur. Then, Fig. 5 shows that the G1/S transition age ( ) a c p p , , G S 1/ 3 6 decreases when p p / 3 6 increases. Furthermore, increasing the oxygen concentration c from c¼ 0.1 to c¼ 1 shifts the curve towards lower transition ages ( ) a c p p , , G S 1/ 3 6 . Note that this prediction is beyond the reach of the mean-field limit Bedessem and Stephanou (2014). Induction of quiescence In view of the results of Section 3.5.1, we have proceeded to a more thorough analysis of the effect of varying the ratio p p / 3 6 , which we recall that, within the SCQSSA, is equal to the ratio between the abundance of SCF-activating and inactivating enzymes, on the behaviour of the SCQSSA system Eqs. (4)-(12). In particular, we have investigated the bifurcation diagram of Eqs. (4)-(12) with p p / 3 6 as the control parameter. Our results are shown in Fig. 6. We observe that, regardless of the value of m, there exists a range of values of the control parameter for which the saddle-node bifurcation, which gives rise to the G1/S transition, does not occur (i.e. only the G1-fixed point is stable). This result implies that depletion of SCF-activating enzyme, or, equivalently, over-expression of SCF-inactivating enzyme can stop cell-cycle progression by locking cells into the so-called G0 state, i.e. quiescence. These results are confirmed by direct simulation of the stochastic cell-cycle model (Table 1) using Gillespie's stochastic simulation algorithm (Gillespie, 1976), see Fig. 7. 3.6. Scaling theory of the G1/S transition age We finish our analysis of the intracellular dynamics by formulating a scaling theory of one of the fundamental quantities in our multi-scale model, namely, the G1/S transition age, ( ) a c p p , , G S 1/ 6 3 , which determines the age-dependent birth rate (see Eq. (2)). In this section, we will show that, in spite of the complexity of the SCQSSA formulation of the oxygen-dependent cell-cycle progression model (see Eqs. (4)-(12)), a G S 1/ exhibits remarkable regularities with respect to its dependence on the oxygen concentration and the cell-cycle parameters p 6 and p 3 . Such regularities hugely simplify our multi-scale methodology. We can see this regularity in Fig. 8. Fig. 8(a) for small c, with the disagreement increasing with increasing c. Thus, a good scaling approximation for a G S 1/ is given by Fig. 8(d). 
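Recalling the assumption of Section 3.5.1 above, the initial copy numbers of the SCF-activating and SCF-inactivating enzymes are Poisson distributed across the cell population, so the ratio p3/p6, and hence the G1/S transition age, varies from cell to cell. The sketch below simply samples such enzyme numbers and reports the spread of the resulting ratios; the mapping of the ratio onto an actual transition age is left symbolic, since the SCQSSA relation is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_enzyme_ratios(n_cells, mean_copies=20):
    """Sample per-cell copy numbers of the SCF-activating (E1) and
    SCF-inactivating (E2) enzymes from a Poisson distribution and return
    E1/E2, used here as a stand-in for the ratio p3/p6 that modulates the
    G1/S transition age."""
    e1 = rng.poisson(mean_copies, size=n_cells)
    e2 = np.maximum(rng.poisson(mean_copies, size=n_cells), 1)  # avoid /0
    return e1 / e2

ratios = sample_enzyme_ratios(100000)
print(ratios.mean(), ratios.std())   # cell-to-cell variability in p3/p6
```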
The critical oxygen concentration for quiescence c cr can be estimated analytically (with parameter values taken from Table B2): Table B2. The parameter a 0 can be estimated as follows. Let A(c) be defined as: goes from 3 to 1. The parameters b i , e i , and J i are defined in Appendix B, Table B2 Fig. 4. Series of plots illustrating how the ratio p p / 6 3 , which is associated with the ratio of the number of SCF-inactivating and SCF-activating enzymes, modulates the timing of the G1/S transition. Parameter values as given in Table B2. Initial conditions are provided in Table B4. Cellular scale: multi-scale Master Equation and path integral formulation We start by summarising the formulation of the multi-scale Master Equation (MSME) for the population dynamics model in terms of an age-structured stochastic process . First, we consider a simple age-dependent birthand-death process where ( ) n a t , stands for the number of cells of age a at time t. Both a and t are dimensionless according to the scaling prescribed in Table B1, Appendix B. The time variable is → t kESt 7 and a is defined after Eq. (12). The offspring of such cells with age a¼ 0. where ν ( ( ) ) = ( + ( )) ( ) W n a a t b a n a , , (see Table 2), μ is the (age-independent) death rate, and the age-dependent birth rate, b(a), is given by is the generalised coordinate associated with the concentration of CycE which must exceed a threshold value, CycE T for the cell-cycle to progress beyond the G 1 /S transition. Before this transition occurs, cells are not allowed to divide. By re-arranging Eq. (17) and taking the limit δ δ = → t a 0, we obtain: where (see Table 2). Eqs. (19) needs to be supplemented with the appropriate boundary condition at a ¼0, ( = ) P n a t , 0, 0 . We proceed by first considering the number of births that occur within the age group = a a j during a time interval of length δt. Since we are assuming that our stochastic model is a Markov process where, within each age group, birth and death occur independently and with exponentially distributed waiting times, the number of births, ( ) B a j , is distributed according to a Poisson distribution: Its probability density is therefore given by: Since Eq. (20) is a convolution, using the well-known property of the probability generating function, the generating function associated with P(B), is given by Grimmett and Stirzaker (2001): , , is the generating function associated with ( ( )) P B a j j : j b a n a t p 1 j j Therefore, by taking δ δ . 21 Numerical method Before proceeding further, we briefly describe the numerical methodology that we use to simulate the stochastic multi-scale model. In essence, the numerical method is an extension of the hybrid stochastic simulation algorithm used in to accommodate the age structure of the cell populations we deal with here . For simplicity, we restrict our description of the algorithm to a homogeneous cell population. Its generalisation to heterogeneous populations composed of a variety of cellular types is straightforward. Similarly to the procedure described in , we start by defining the population vector t is the set of indexes which label all those age groups which, at time, t, are represented within the population, i.e. all those age groups such that ( = ) > n a a t , 0 j . Having defined ( ) t , we summarise the numerical algorithm: 1. Set initial conditions: . We also set the value of the ratio of the SCF-regulating enzymes p p / 6 3 . 2. 
At some later time, t, the system is characterised by the quantities c(t), ( ) t , ( ) t , and N(t) 3. Generate two random numbers, z 1 and z 2 , uniformly distributed in the interval ( ) 0, 1 4. Use z 1 to calculate the exponentially distributed waiting time to the next event: 1 2 j j . The rates W 1 and W 2 are given in Table 2 5. Update the oxygen concentration at τ ( + ) c t by solving the associated ODE (1) between τ ( + ) t t , taking c(t) as initial condition. We use a 4 stage Runge-Kutta solver 6. Update the age-dependent birth rate for each ∈ j , i.e. Steps 3 to 10 are repeated until some stopping condition (e.g. ≥ t T) is satisfied Fig. 8. Series of plots showing the scaling analysis of the G1/S transition age. Plot (a) shows that below the critical value r cr , which corresponds to the value of the ratio p p / 6 3 , above which there is no transition to quiescence (i.e. if > p p r / cr 6 3 the G1/S transition age is finite when c¼ 0), ( ) a c p p , , G S 1/ 6 3 follows an algebraic decay with a universal p p / 6 3 -independent exponent provided that the oxygen, c, is rescaled by the critical oxygen concentration, 6 3 decays exponentially with the oxygen concentration with a characteristic concentration c 0 which, provided p p / 6 3 is larger enough than r , cr is p p / 6 3 -independent. Plot (b) shows how c cr varies as p p / 6 3 is changed. Similarly, plot (d) shows how + a varies p p / 6 3 changes. Parameter values as given in Table B2. Table 2 This table summarises the elementary events involved in the age-dependent birthand-death process. Cells of age a produce offspring at a rate proportional to the age dependent birth rate, ( ) b a , Eq. (18). We are assuming that upon cell division both cells are reset to = a 0. Therefore, upon proliferation, one cell is removed from the population of age a and two cells are added to the population of age = a 0. For simplicity, death is assumed to be age-independent and to occur at a rate proportional to the death rate, μ. Event Reaction Transition rate, Note that Step 5 does not involve the use a stochastic method of integration of the ODE which rules the time evolution of the oxygen concentration, Eq. (1). This is due to the fact that, within the time interval τ ( + ] t t , , the population stays constant, so that Eq. (1) can be solved by means of a non-stochastic solver. Steady-state of a homogeneous population: mean-field analysis Before proceeding further, in order to check the numerical algorithm proposed in Section 4.1, we analyse how it compares with results regarding the steady-state of the mean-field limit of a homogeneous (i.e. composed by one cellular type only) population Hoppensteadt (1975). The mean-field equations associated with the stochastic multi-scale model are: with boundary condition: t baQ a td a 0, 2 , . and birth rate given by: 6 3 is given by Eq. (13) According to Hoppensteadt (1975), in order to ascertain whether a steady-state solution, i.e. whether the system settles onto an age distribution where the proportion of cells of each age does not change, we seek for a separable solution: . Defining μ ν ( ) = + ( ) a ba, we obtain: , is therefore given by: For this quantity to be positive τ ν < 1 p must hold. This condition states that for a steady state to be reached, the average waiting time to division after the G1/S transition, τ p , must be smaller than the average life span of the cell, ν À 1 . Eq. (29) determines the stationary value of the oxygen concentration, ∞ c . The long-time dynamics of Eq. 
(23) can therefore be summarised as follows: given the values of τ p , ν, and the cell-cycle parameters (see Table B2 in Appendix B), the population evolves and consumes oxygen until the oxygen concentration reaches a steady value ∞ c . At this point, the population of resident cells has settled onto a steady state where its age structure does not change. The total number of cells is also constant and given by (see Eq. (23)): In order to verify the age-structured SSA proposed in Section 4.1, we compare its results with the mean-field predictions, which should be in agreement with the stochastic behaviour of the system for large values of the carrying capacity, ∞ N . Results are shown in Fig. 9. We observe that, as predicted by our steady-state analysis, the stochastic simulations show how the resident population goes through an initial (oxygen-rich) phase of exponential growth. As the population grows, oxygen is depleted and the resident population eventually saturates onto a number of cells which fluctuates around the mean-field prediction of the carrying capacity (Eq. (30)). Further verification of the validity of our numerical methodology is provided in Fig. 10. Our mean-field theory predicts that, everything else remaining unchanged, the average steady-state population should increase linearly with the rate of oxygen delivery, S (see Eq. 30). By contrast, the average equilibrium oxygen concentration does not depend on S, as shown in Eq. (29), and, consequently, it should stay constant upon increasing the rate of oxygen delivery. Fig. 10 shows that our simulations agree with these mean-field predictions. Quasi-neutral competition within heterogeneous populations A problem of fundamental importance in several biological and biomedical contexts is that of a population composed by a heterogeneous mixture of coexisting cellular types. A particularly relevant example of such a situation is that of cancer, where heterogeneity within the cancer cell population is assumed to be a major factor in the evolutionary dynamics of cancer as well as the emergence of drug resistance (Gatenby and Vincent, 2003;Merlo et al., 2006;Gillies et al., 2012;Greaves and Maley, 2012). Within this context, we are interested in (i) exploring the stability of a coexisting heterogeneous population and (ii) the effects such heterogeneity has on the long-term effects a cell-cycle dependent therapy. The origins of heterogeneity of cancer cell populations is normally attributed to genetic variability arising from chromosomal instability and increased mutation rate (Merlo et al., 2006). Our discussion of the model of the G1/S transition (see Section 3) suggests a different source of heterogeneity associated with stochastic effects due to intrinsic fluctuations. We have shown, by means of both the SCQSSA analysis and stochastic simulations, that the relative abundance of SCF-regulating enzymes, which is quantified in our analysis by the ratio p p / 6 3 (see Sections 3.5.1, 3.6 and Appendix A). In Section 3.5.1 we have shown that the timing of the G1/S transition, and, consequently, the overall birth rate is strongly affected by changes in this quantity. In view, of this we associate heterogeneity to a distribution of the abundance of such enzymes, whereby we associate cell phenotypes with different values of the ratio p p / 6 3 . We further assume that this heterogeneity is hereditary, i.e. daughter cells inherit the value of the ratio p p / 6 3 from their mother. 
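To make the above concrete, here is a minimal sketch of the hybrid, age-structured stochastic simulation algorithm of Section 4.1, extended to a heritable per-cell enzyme ratio as just described. All parameter values, the placeholder transition-age function and the forward-Euler oxygen update (the paper uses a 4-stage Runge-Kutta step) are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative parameters (not the paper's values)
S, k_c, tau_p, mu = 0.3, 0.01, 1.0, 0.05

def a_g1s(c, r):
    """Placeholder G1/S transition age: decreasing in oxygen c and increasing
    in the inherited enzyme ratio r (standing in for p6/p3)."""
    return (1.0 + r) * (1.0 + np.exp(-c / 0.5))

def hybrid_ssa(n0=20, c0=1.0, t_end=20.0):
    ages = np.zeros(n0)                                  # one age per cell
    ratios = rng.poisson(20, n0) / np.maximum(rng.poisson(20, n0), 1)
    c, t = c0, 0.0
    while t < t_end and ages.size > 0:
        b = np.where(ages >= a_g1s(c, ratios), 1.0 / tau_p, 0.0)  # birth rates
        w_birth, w_death = b.sum(), mu * ages.size
        w_tot = w_birth + w_death
        tau = rng.exponential(1.0 / w_tot)               # waiting time to next event
        c = max(c + tau * (S - k_c * ages.size * c), 0.0)  # Euler oxygen step
        ages += tau
        t += tau
        if rng.random() < w_birth / w_tot:               # birth event
            i = rng.choice(ages.size, p=b / w_birth)
            ages[i] = 0.0                                # mother resets to age 0
            ages = np.append(ages, 0.0)                  # daughter enters at age 0
            ratios = np.append(ratios, ratios[i])        # daughter inherits the ratio
        else:                                            # age-independent death
            i = rng.integers(ages.size)
            ages = np.delete(ages, i)
            ratios = np.delete(ratios, i)
    return ages.size, c

print(hybrid_ssa())
```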
Competition between two sub-populations To proceed further, we consider the case of the birth-and-death dynamics of a heterogeneous population composed by two subpopulations, ( ) n a t , In the remaining part of this section we will consider two cellular populations, resident and invader. Each of these phenotypes are determined in terms of the values four parameters. The resident cells are characterised by two population-dynamics parameters, namely, the average time to division after the G1/S transition, τ p 1 , and the death rate, ν 1 . We consider two further parameters, p 3 1 and p 6 1 , associated with the cell-cycle progression dynamics of the resident cells (see Section 3.4). Similarly, the invader is characterised by the corresponding parameters: τ p 2 , ν 2 , p 3 2 and p 6 2 . Mean-field coexistence versus quasi-neutral stochastic competition We proceed to analyse the conditions under which two populations are capable of long-term coexistence. In particular, we analyse a scenario in which the mean-field description predicts long -term coexistence between within a heterogeneous population leads to mutual exclusion of all strands but one through socalled quasi-neutral stochastic competition (Lin et al., 2012;Kogan et al., 2014;. The starting point for our study is the mean-field analysis carried out in Section 4.2. According to these results, (mean-field) populations evolve until a concentration of oxygen, ∞ c , is reached so that the associated replication number The replication number of either population is given by: This mean-field scenario is the basis for the study of the longterm stochastic dynamics of two populations which satisfy Eq. . The reproduction number is the average number of offspring per cell. We know from elementary considerations (Grimmett and Stirzaker, 2001) that the value of such quantity allows us to classify birth-death/branching processes. If > R 1 0 the population grows, on average, exponentially and has a finite probability of eventual survival. In this case, the process is referred to as super-critical. If < R 1 0 the population undergoes exponential decline (on average) and the extinction probability is equal to 1. The last case, in which = R 1 0 , the so-called critical case, the population, on average, stays constant. However, due to effects of noise, extinction occurs with probability 1, with the probability of survival up to time t asymptotically tends to ( ) ∼ − P t t S 1 (Kimmel and Axelrod, 2002). In our case, both the resident and the invader undergo a critical stochastic dynamics, where, once the steady state has established itself, the population evolves very close to the mean-field line of fixed points until fixation of one of the species (and, consequently, extinction of the other) occurs. In order to numerically check this scenario, we could proceed to estimate the survival probability at time t, P S (t). However, since this quantity exhibits a fat-tail behaviour, this would be computationally costly. A more efficient method is to resort to the asymptotic of the extinction time with system size, which in this case can be identified with the carrying capacity, K (Hidalgo et al., 2015). Typically, a quasi-neutral competition is associated with an algebraic dependence of the average extinction time of either population, T E , on the system size, in this case determined by the carrying capacity, K (Doering et al., 2005;Lin et al., 2012;Kogan et al., 2014). In Fig. 
11 we plot simulation results for the competition between two identical populations. In particular, we study how the average extinction time of either population, T E , varies as the carrying capacity is changed. We observe that this quantity exhibits a linear dependence on the carrying capacity: Our scenario produces the same qualitative results as Lin et al. (2012) and Kogan et al. (2014) who studied the average extinction time of birth-and-death processes engaged in quasi-neutral competition. In both papers, the average extinction time was reported to depend linearly on system size. Study of the effects of cell-cycle-dependent therapy In Section 5 we have analysed how the dependence of the cellcycle progression on the concentration of SCF-activating and SCFinactivating enzymes allows us to engineer heterogeneous populations where invasion and coexistence may occur. In this section, we further explore the ability of inducing quiescence by varying the ratio between SCF-activating and SCF-inactivating enzymes, this time in connection with the ability of such populations to withstand the effects of cell-cycle-dependent therapy (Powathil et al., 2012;Gabriel et al., 2012;Billy and Clairambault, 2013;Powathil et al., 2013). In particular, we consider a scenario where two cellular populations coexist. Initially, one of these populations consists of a set of cells actively progressing through the cell-cycle which have reached a steady state characterised by the mean-field equilibrium Eqs. (28)-(30). The second population consists of quiescent cells whose cell-cycle is locked into the G0 phase and therefore do not proliferate. These cells are further assumed to undergo apoptosis at a very slow rate. More specifically, the ratio SCF-activating and SCF-inactivating enzymes in the quiescent cell population is such that, for the steady-state level of oxygen for the active cells (see Eq. (29)), cells are locked into the G1-fixed point (see Fig. 6). In this section we show that the presence of a quiescent population within a heterogeneous population may lead to resistance to cell-cycle-dependent therapy. In particular, we show that, whereas such therapies effectively reduce or even eradicate the active cell population, the feedback between the therapy-induced decrease in cell numbers and the associated increase in oxygen availability can yield to the quiescent population to enter the active state and thus regrow the population. In this sense, we claim that the quiescent population has a stem-cell-like effect whereby, under the action of therapeutic agent, can repopulate the system Alarcón and Jensen, 2010. Mean-field analysis We start our mean field analysis by considering a heterogeneous population composed by cells of two types: type 1 and type 2 cells. Type 1 consists of cells with values of ≡ p p so that they are locked in G0 (i.e. not cycling). The associated mean-field dynamics are given by: ( ) N t 2 are the total cell population of type 1 and type 2 cells, respectively: Fig. 11. Simulation results corresponding to the competition between the sub-populations of a heterogeneous populations. We show how the average extinction time of either population, T E , varies as the carrying capacity, K, changes. We observe that the dependence is linear. For simplicity, the two sub-populations are assumed to be identical (i.e. with the same characteristic parameter values for both the intracellular dynamics (cell-cycle) and the population-level dynamics (birth and death rates)). 
The carrying capacity = 4 , which correspond to = · · · · · K 0.9969 10 , 1.9301 10 , 2.6956 10 , 3.4367 10 , 4.2661 10 For simplicity, we assume that both populations consume oxygen at the same rate, k 1 . We further assume that ν ν ⪡ 2 1 , i.e. type 2 (quiescent) cells die at a much slower rate than type 1 (active) cells. In Section 4.2, we have already analysed under which conditions Eqs. which determines the steady-state value of the oxygen concentration ∞ c . At this point, the population of resident cells has settled onto a steady state where its age structure does not change. The total number of cells is also approximately constant and given by: where we have used the fact that ν ν ⪡ 2 1 . The cell-cycle parameters p 3 2 and p 6 2 have been chosen so that, for = ∞ c c , , i.e. type 2 cells are initially locked into G0 (see Fig. 6). Once the population reaches this therapy-free quasi-equilibrium state, we assume that a therapy which only acts on proliferating cells is administered. Examples of such therapies abound in cancer treatment and can take the form of cell-cycle specific drugs or radiotherapy (Powathil et al., 2012;Gabriel et al., 2012;Billy and Clairambault, 2013;Powathil et al., 2013). We characterise the efficiency of the therapy by the so-called survival fraction, F S , i.e. the percentage of cells which survive the prescribed dose. For example, in radiotherapy F S is usually taken to be given by the linear quadratic model: D stands for the radiation dosage expressed in Grays and α and β are cell type-specific parameters. In the present context, we do not specify any particular form of therapy and we simply take ∈ [ ) F 0, 1 S . Initially, the therapy only affects type 1 cells (since type 2 are not proliferating). Therapy affects the birth and death rates of the type 1 population, which now read: The resulting mean-field equation is given by: The action of therapeutic agent on the active population initially induces a decline of the population, which, in turn, involves an increase in the available oxygen concentration. The latter has the effect of accelerating the rate of progression of type 1 cells through the cell cycle. In the absence of the quiescent population, eventually both effects would find a balance and the population of active cells would settle onto a new equilibrium characterised by: since a G S 1/ is a decreasing function of the oxygen concentration. Reoxygenation during cell-cycle dependent therapy, in particular, radiotherapy, has been predicted by other models (Kempf et al., 2015). Consider now the effect of this process on the type 2 cell population which is initially quiescent. We know that hypoxia-induced arrest of the cell cycle is reversible (Alarcón et al., 2004;Bedessem and Stephanou, 2014), i.e. upon increase of the concentration of oxygen quiescent cells may re-enter the cell cycle and become proliferating. Re-entry of quiescent cells into the cell cycle is predicated upon a sufficient increase in the oxygen contraction: > ( ) c c p p , , i.e the initial oxygen concentration is such that type 2 cells are quiescent, but, as the therapy is administered and proceeds to act upon the type 1 cells, the oxygen concentration increases until it reaches its critical re-entry concentration. At this point, type 2 cells abandon quiescence and become active and competition between type 1 and type 2 cells ensues. 
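For the radiotherapy case mentioned above, the survival fraction can be computed from the standard linear-quadratic model, F_S = exp(-(alpha D + beta D^2)); the alpha and beta values in the short sketch below are illustrative rather than cell-line specific.

```python
import numpy as np

def survival_fraction_LQ(D, alpha=0.15, beta=0.05):
    """Standard linear-quadratic model, F_S = exp(-(alpha*D + beta*D^2)),
    with illustrative (not cell-line specific) parameters in Gy^-1 and Gy^-2."""
    return np.exp(-(alpha * D + beta * D**2))

for D in (1.0, 2.0, 5.0):
    print(D, survival_fraction_LQ(D))
```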
In order to assess the long-time behaviour of the system, we first study the equilibrium of the type 2 cell population upon re-entry into cell-cycle progression. Its mean-field dynamics is given by: and c is determined by Eq. (38). Recall that, upon re-entering cell-cycle progression, type 2 cells are no longer immune to the therapy. Although in general the survival fraction is type-dependent, for simplicity we assume that F S has the same value for both cell types. The equilibrium condition is once again given in terms of the associated reproduction number, i.e. = R 1 0 T 2 which yields: Critical dosage In order to gain some degree of control over the behaviour described in the previous section, it would be useful to provide an estimate of the critical dosage above which therapy-induced reoxygenation is capable of activating quiescent cells. We characterise the therapy dose by means of the critical survival fraction, F S C . Recall that the characteristic equation for the oxygen-dependent growth rate, σ ( ) c 1 , of the population of active cells is given by: is satisfied, type 2 cells out-compete type 1 cells and the system, composed entirely of type 2 cells, settles onto the steady state prescribed by Eq. (52). In this sense, quiescent cells have stem-cell-like behaviour, in the sense that they can repopulate the system. In this second scenario, therapy is not completely without virtue since therapy drives the system to be taken over by a slower-cycling (less aggressive) phenotype. Simulation results In order to check the accuracy of the mean-field analysis carried out in Section 6.2 regarding the critical survival fraction for rescue from quiescence. We start by showing (Fig. 12) two typical realisations of the stochastic population dynamics which illustrate the rescue mechanism. In these simulations, we first let the active population settle on to its steady state. We then apply a sustained therapy with constant survival fraction. A more aggressive treatment (F S ¼0.6 in Fig. 12) greatly affects the active population: the amount of active cells killed by the therapy induces re-oxygenation of the population above the critical oxygen level for activation of the quiescent population whereupon the quiescent cells become proliferating. In Figs. 12(a) and (c), we show that upon activation of the quiescent population, a competition between both populations ensues, which eventually leads to extinction of the active population. A less aggressive therapy (F S ¼ 0.7 in Fig. 12) also induces death of the active population and re-oxygenation. However, in this case, the latter is not intense enough to induce activation of the quiescent cells (see Fig. 12(d)) and therefore the active cells will repopulate the system as the quiescent population stays on its course to eventual extinction, as shown in Fig. 12(b). Fig. 13 shows simulation results for the variation of probability of fixation of the quiescent population as the survival fraction of the therapy, F S , changes. Our simulation results show qualitative agreement with our mean-field theory (see Sections 6.1 and 6.2): as the survival fraction increases (i.e. the therapy becomes less efficient), the probability of fixation abruptly decreases from almost certainty of fixation to almost certainty of extinction. We observe that our mean-field theoretical predicts a critical value for F S slightly smaller than the observed when fluctuations due to finite size effects are present. 
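The fixation probabilities reported in Fig. 13 are, in essence, Monte Carlo estimates over replicate runs of the stochastic model. The wrapper below sketches that estimation; `simulate_once` stands in for the full multi-scale simulation (not reproduced here), and the toy stand-in merely mimics a sharp transition around some critical survival fraction.

```python
import numpy as np

rng = np.random.default_rng(4)

def fixation_probability(simulate_once, F_S, n_replicates=100):
    """Estimate the probability that the quiescent population reaches fixation
    at a given survival fraction F_S by repeating the stochastic simulation;
    simulate_once(F_S) is assumed to return True when the quiescent clone
    fixes and False when it goes extinct."""
    wins = sum(simulate_once(F_S) for _ in range(n_replicates))
    return wins / n_replicates

# Toy stand-in simulator: a sharp drop in fixation probability around an
# assumed critical survival fraction of about 0.65.
toy = lambda F_S: rng.random() < 1.0 / (1.0 + np.exp(40.0 * (F_S - 0.65)))
for F_S in (0.5, 0.6, 0.65, 0.7, 0.8):
    print(F_S, fixation_probability(toy, F_S))
```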
However, we observe that, as the carrying capacity of the system is increased, the critical value of F S converges to the mean-field value. Discussion In this paper, we have presented and studied a stochastic multiscale model of a heterogeneous, resource-limited cell population. This model accounts for a stochastic intracellular dynamics (in this particular case, a model of the oxygen-regulated G1/S transition) and an age-structured birth-and-death process for the cell population dynamics. Both compartments are coupled by (i) a model for the time variation of resource (oxygen) abundance which regulates the rate of cell-cycle progression, and (ii) a model of the age-dependent birth rate which carries out the coupling between the intracellular and the cellular compartments (see Fig. 1 for a schematic representation of the model and Section 2 for a summary of the model formulation). Our analysis of the stochastic dynamics of the oxygen-regulated G1/S transition, which is a generalisation of the mean-field model presented in Bedessem and Stephanou (2014), has revealed a number of previously unreported properties related to the presence of fluctuations. In particular, the optimal path theory and the quasi-steady state approximation allow us to explore the effect of the SCF-regulating enzymes on the timing of the G1/S transition. The relative abundance of SCF-activating and inhibiting enzymes regulates the rate at which cells reach the G1/S transition: excess of SCF-activating enzyme can delay the transition and even rendering the cell quiescent regardless of oxygen concentration beyond the predictions of the mean-field model (see Sections 3.5.1 and 3.5.2). Furthermore, we have shown that the effects on timing of the G1/S transition of the relative abundance of SCF-activating enzyme give rise to a scaling form dependence of the age to the transition, a G S 1/ , whereby this quantity is a function of ( ) a c p p , / G S 1/ 6 3 which takes the form of Eq. (13) (see Section 3.6). Taken together, these results imply that stochasticity in the intracellular dynamics naturally generates variability within the population: an otherwise homogeneous population presents a distribution of birth rates induced by variability in the relative abundance of SCF-activating enzyme within the population of cells. This variation in the duration of the cell-cycle allows us to analyse the dynamics of a stochastic heterogeneous population under resource limitation conditions. To this end, we consider populations formed by sub-populations of cells characterised by differing relative abundance of SCF-activating and inhibiting enzymes. We further assume that this heterogeneity is heritable (i.e. daughter cells inherited the ratio of SCF-regulating enzymes from their mother). In this scenario, we have shown sub-populations within heterogeneous population engage in quasi-neutral competition: sub-populations of cells get extinct in an average time which is of the order of the carrying capacity of the system (see Section 5.2). In the context of modelling cancer cells populations, where heterogeneity is a main contributor to the complex dynamics of cancer (Anderson et al., 2006;Merlo et al., 2006;Anderson and Quaranta, 2008;Gillies et al., 2012;Greaves and Maley, 2012), this result is of relevance since it allows us to estimate the rate at which sub-populations or clones disappear from the tumour. 
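The statement that sub-populations go extinct in an average time of the order of the carrying capacity can be illustrated with a drastically simplified caricature: a neutral two-type Moran process at fixed total size K. This is not the full age-structured model; it only reproduces the linear scaling of the mean extinction time with K discussed above.

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_extinction_time(K, n_runs=50):
    """Mean absorption time, in generations (K elementary steps), of a neutral
    two-type Moran process started from a 50:50 mix, a toy stand-in for the
    quasi-neutral competition of Section 5."""
    times = []
    for _ in range(n_runs):
        n, steps = K // 2, 0
        while 0 < n < K:
            p_change = 2.0 * n * (K - n) / K**2     # prob. the composition changes
            steps += rng.geometric(p_change)        # skip the 'nothing happens' steps
            n += 1 if rng.random() < 0.5 else -1    # neutral: up or down equally likely
        times.append(steps / K)                     # elementary steps -> generations
    return np.mean(times)

for K in (50, 100, 200):
    print(K, mean_extinction_time(K))               # grows roughly linearly with K
```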
The fact that this rate is proportional to the inverse of the carrying capacity reveals a highly dynamic scenario where clones are quickly decaying and being replaced within the tumour. This can have a profound impact on a variety of evolutionary phenomena such as emergence of drug resistant phenotypes. From the modelling perspective, we should note that quasi-neutral competition is a purely stochastic scenario: the mean-field limit predicts coexistence between the corresponding cell types. We have further explored the issue of emergence of drug resistant cell types by analysing a case study in which a quiescent population can be rescued from latency by the application of a cell-cycle dependent therapy. Examples of such therapies are radiotherapy or cytotoxic drugs designed to target cells in specific stages of cell cycle progression (Powathil et al., 2012). In the particular example analysed in Section 6, we have shown that in mixed population composed by active (proliferating) and quiescent cells, if the drug is not efficient enough (characterised in terms of its associated survival fraction, F S ), quiescent cells are rescued from latency and eventually reach fixation within the population, i.e. the activated quiescent cells out-compete the original active cells until the latter population becomes extinct (Alarcón and Jensen, 2010). It is noteworthy the fact that in this process of quiescence rescue the original cancer population is replaced by a much more resistant population as the activated Our results show that the methods and models presented in this paper are of great potential importance for the analysis of the complex dynamics of heterogeneous populations under resource limitation, in particular for the study of emergence of drug resistance in heterogeneous cancer cell populations. Several important issues have been left out of the present work. A major contributor to heterogeneity within a cancer cell population is spatial heterogeneity (Anderson et al., 2006;Gillies et al., 2012) which is closely related to micro-environmental heterogeneity. A further issue which should be analysed in depth concerns the scaling properties of the age to the G1/S transition (see Section 3.6), in particular whether this is a general property of the cell-cycle dynamics or rather a specific attribute of the stochastic model presented here. A thorough analysis of these issues falls beyond the scope of the present paper and are left for future research. cón, 2014;de la Cruz et al., 2015). In order to proceed further, we assume, as per the Briggs-Haldane treatment of the Michealis-Menten model for enzyme kinetics (Briggs and Haldane, 1925;Keener and Sneyd, 1998), that the species involved in the system under scrutiny are divided into two groups according to their characteristic scales. More specifically, we have a subset of chemical species whose numbers, X i , scale as: , whilst the remaining species are such that their numbers, X j , scale as: . Key to our approach is the fact that S and E must be such that: We further assume (de la Cruz et al., 2015) that the generalised coordinates, Q i , scale in the same fashion as the corresponding variable X i , i.e. Dimensionless variables and parameters corresponding. Table B4 Initial conditions used in simulation system (4)
Corporate social responsibility, profits and welfare with managerial firms This paper analyses the equilibrium outcomes in a duopoly market where firms follow corporate social responsibility (CSR) behaviours under managerial delegation. It is shown that in the subgame perfect Nash equilibrium of the game, both firms emerge as CSR-type, and the firms’ profitability (resp. the welfare of consumers and society) are beneficiated (resp. harmed) by the CSR behaviour. This result is in sharp contrast with the conventional result (established under non-managerial firms) that the higher the CSR sensitivity to consumer surplus, the lower (resp. higher) the firms’ profitability (resp. the consumer surplus and social welfare). Introduction In the recent years, the debate on firms' social responsibility has been raised more frequently also in the academic literature (e.g. Baron 2001Baron , 2009Jensen 2001;Goering 2007Goering , 2008Lambertini and Tampieri 2010;Benabou and Tirole 2010). According to a well-known classification (Garriga and Melè 2004), the most relevant corporate social responsibility (henceforth, CSR) theories and related approaches are focused on one of the following aspects of social reality: economics, politics, social integration and ethics. Needless to say, the realm of economics is that concerning our approach, according to which corporation is an instrument 1 for wealth creation whose sole social responsibility is the maximisation of shareholder value (for instance measured by the share price) which in turn, in the standard industrial organisation literature, leads to a short-term profits orientation. In such a context (where the shareholder value maximisation is the supreme reference for corporate decision-making, as in the basic Cournot model), there is no room for CSR (unless the latter directly or indirectly positively affects economic costs of firms). 2 In the context of the basic Cournot model, the conventional wisdom as regards the welfare effects of the introduction of social concerns in the firms' behaviour argues that, in general, increasing social concern or responsibility of the firms is necessarily welfare improving. For example, this is the view prevailing in the European Union and between most scholars. 3 On the other hand, profits are expected to reduce adopting CSR behaviour, although recently it has been argued that shareholder value maximisation is not incompatible with satisfying certain interests of people with a stake in the firm (stakeholders) provided that it is specified a long-term value maximisation rather than the short-run profit maximisation, such as the ''enlightened value maximisation'' proposed by Jensen (2000). In this paper, we show that this conventional wisdom is not necessarily true, since the effects of adopting CSR behaviours on profits and social welfare may be also dependent upon (1) the interaction between the strategic market context in which the socially concerned firms operate; and (2) the choice of hiring or not a manager for delegating the strategic choices. 1 In fact Garriga and Melè (2004) denote as ''instrumental theories'' those belonging to this economic approach, whose the most well-known representative is the Nobel prize Friedman (1970) stating that ''the only one responsibility of business towards society is the maximisation of profits to the shareholders within the legal framework and the ethical custom of the country''. 
2 As known in this standard literature-and also in the present model-firm's costs are not at all dependent of the level of engagement in social activities by the same firms. Of course if this was not true, then any investment in social demands that would reduce firm's costs would also produce an increase of the shareholder value, as in the case clearly analysed by the same Friedman (1970) (quoted in Garriga and Melè 2004, p. 53): ''It will be in the long run interest of a corporation that is a major employer in a small community to devote resources to providing amenities to that community or to improving its government. That makes it easier to attract desirable employees, it may reduce the wage bill or lessen losses from pilferage and sabotage or have other worthwhile effects''. Moreover the literature has pointed out that CSR could have the potential to ameliorate firms' profitability also through other various channels: for instance by reducing turnover rates and operating costs, by enhancing efficiency, by attracting better and/ or more loyal and motivated employees (e.g. Chong and Tan 2010). However, in the absence of the above-mentioned motives, the expected result is that the firm's profit is always reduced by a CSR. For showing this, we adopt the Cournot duopoly model with managers following the managerial delegation literature (e.g. Vickers 1985;Fershtman and Judd 1987). As widely observed in real world, CSR rules are often adopted by large corporations where ownership and control are separated. In fact, according to the KPMG Survey of Corporate responsibility reporting (2013), 93% among the world's largest 250 companies records CRS activities. Forbes (2014) reports the results of the Reputation Institute (2014) CSR survey, which confirms that the world's top ten companies with the best CSR reputations are widely known large, managerial companies. 4 In particular, we aim to answer the following research questions. First, which effect has firms' social responsibility on quantity, price, consumer surplus and total welfare? Second, as to firms, could it pay off to be socially responsible? Third, as to consumers and society, is the adoption of socially responsible rules by firms really welfare improving? We assume that the shareholders may instruct their manager to pursue social objectives (represented, as usual in the literature, by an interest for the welfare of consumers); however, as in the standard profit-seeking world, the choice of the manager's incentive contract by shareholders aims to maximise profits. This could mean that the stakeholder participation in governance is in hiring managers selecting the optimal output level according to the stakeholder interests; nevertheless, it does not extend down to stipulate manager's contracts. The current paper contributes to the literature by providing with a description in ''strategic'' terms of the multi-various phenomenon of the CSR. Several scholars have focused their attention on the reasons that lead companies to embrace CSR behaviours (Baron 2001(Baron , 2009García-Gallego and Georgantzís 2009;Benabou and Tirole 2010;Reinhardt and Stavins 2010;Lambertini and Tampieri 2010). Baron (2001) put emphasis on the fact that companies may strategically follow CSR rules because guided by their selfishness. 
In fact, ''social activists'' such as non-governmental organizations (NGOs) can directly exercise ''social pressure'' on companies to adopt ''responsible behaviours'' through threats (boycott, negative propaganda) or rewards (endorsements). As a consequence, CSR can simply be a keen response to that pressure. On the other hand, Benabou and Tirole's (2010) standpoint is that several interacting motivations (ranging from pure altruism to material motivation, social and self-esteem concerns) lead firms to pro-social CSR behaviour. Moreover, as Reinhardt and Stavins (2010) argue, if there are customers willing to pay for ethical goods, and this can be profitable, CSR arises as a strategy like any other form of product differentiation: according to the basic economics, the opportunity emerges because of the presence of asymmetric information, economies of scale, intellectual-property protection and the successful defence of the ensuing niche against imitators. Nonetheless, the previous contributions do not deal with the issue of managerial delegation with firms of CSR-type, a subject which has been scanty studied in the literature. To the best of our knowledge, few exceptions are three recent articles: Goering (2007); Kopel and Brand (2012); Manasakis et al. (2014). Goering (2007) investigates a mixed duopoly setting in which a private non-profit CSR firm competes with a private profit-maximising company. The stakeholders of the CSR firm select a contract for their managers. The author focuses on the Stackelberg competition mode, in which the non-profit firm is the market leader and he shows that, in subgame perfect equilibria, the managers of the leading company generally will not be given the true objective function to optimise. Manasakis et al. (2014) study the incentives of firms' owners to commit voluntarily to CSR activities in oligopoly. Their approach is slightly different because the socially responsible attributes are attached to products, with consumers forming expectations about their existence and level. In other words, those authors consider the ''demand-side'' (the consumers') CSR preferences, while the abovementioned works mainly focuses on the ''supply-side'' (the firms') CSR preferences, in the sense that firms choose the quantity to supply to maximise also consumers' interests. The key result is that hiring an ''individually'' socially responsible manager and delegating to him the CSR effort and market decisions represents a commitment device for the firm's owners and credibly signals to consumers that the firm will undertake CSR activities. Differently from the two above-mentioned papers, Kopel and Brand (2012) investigate, similarly to the current paper, the ''strategic'' role played by CSR (measured by the interest for consumer's surplus by shareholders) in a gametheoretic context in the frame of the Cournot model. 5 Similar to Goering (2007), those authors analyse a mixed quantity-setting duopoly in which a socially concerned firm competes with a profit-maximising firm. Both firms may opt to hire a manager who determines the output to produce on behalf of the firm's owner. To hire a manager and delegate the output choice emerges as subgame perfect equilibrium for both firms. In the case of similar cost of production, the CSR firm has both a higher market share and profit. 
The relationship between the share of consumer surplus the CSR firm takes into account and its profit is non-monotonic: in fact, as the share increases, the CSR firm's profit first increases and then decreases. However, all those contributions assume that stakeholders may also decide on managers' contracts and, in any case, they do not study an endogenously determined equilibrium in a duopoly with both CSR firms. 6 Hence, this paper aims to fill this gap. Although numerous articles have explored the empirical relationship between CSR performance and profitability performance, their results have often been contradictory (even within a given analysis) and, thus, no definitive consensus exists. For instance, (1) most researchers have found only a negative relationship (see, inter alia, Bromiley and Marcus 1989; Davidson et al. 1987; Davidson and Worrel 1988); (2) some others have found an inconclusive relationship (e.g. Aupperle et al. 1985; Ingram and Frazier 1983); and (3) there is also an increasing number of articles showing a positive correlation between the social responsibility and financial performance of corporations (e.g. Griffin and Mahon 1997; Roman et al. 1999; Waddock and Graves 1997). It is shown that under managerial delegation, the subgame perfect Nash equilibrium (SPNE) implies that both firms follow CSR rules, and the firms' profitability is always enhanced by their sensitivity to consumer surplus. This means that, if managers are delegated to select the optimal output following CSR rules, then the choice of firms to be CSR is due to economic convenience instead of altruistic behaviour. 7 On the other hand, such a choice turns out to be welfare damaging, despite the fact that it aims to protect consumers' interests. These results contrast with the conventional wisdom. The theoretical and empirical implications are rather intriguing. As regards the former implication, we recall that these unconventional results are obtained in a standard model in which owners instruct their managers to maximise not only the short-term profits (i.e. the shareholder value) but also the consumer's surplus. As regards the latter implication, we note that our model may provide the theoretical basis, rather unexpectedly, also within the ''instrumental theory'' of CSR for explaining some of the empirical findings showing a positive correlation between CSR performances and profitability performances. Therefore, these findings may be important for the current policy debate on firms' social concern, in particular as regards social welfare as well as the motivation for the firm's owner to take into account the objectives of the stakeholders of his own firm. The rest of this paper is structured as follows. Section 2 introduces the model setup, and Sect. 3, after having determined the SPNE in which both firms follow CSR rules, summarises the results. The last section concludes our findings. The model We assume the standard linear inverse market demand p = a − q_i − q_j, where p denotes price, and q_i and q_j are the firms' output levels for i, j = 1, 2 and i ≠ j. For tractability, both firms are assumed to have zero production costs. Following the recently established literature (e.g. Goering 2007, 2008; Lambertini and Tampieri 2010), in the current model, we interpret the social concerns as part of consumer surplus: the feature of a CSR firm is to be sensitive to it. Consequently, we suppose that the firm wishes to maximise profits plus a fraction k of the market consumer surplus in its objective.
It follows that the participation of the stakeholders in governance applies to market choices: once involved in the firm's governance, they set the ''social responsibility'' level. Thus, the private owners of the socially responsible firms take as exogenously given the level of ''social concern'', k, given, for instance, by the ''customary toughness'' of the stakeholders incorporated in the firm's objective function. This is coherent with the results of the empirical study of Spitzeck and Hansen (2010), who find that the stakeholder engagement mechanism is mainly confined to ''dialogue and issues'' advisory, while the ''structural'' strategic choices (i.e. those different from the output choices) of the company are outside their sphere. Furthermore, because firms compete for the same clients in the industry, it can be soundly assumed that stakeholders may ask both companies for an identical level of ''social concern''. 8 As a consequence, the CSR firm's objective function may be specified as a simple parameterised combination of profits, which are given by π_i = (a − q_i − q_j)q_i, and consumer surplus (CS). Thus, the CSR objective function (W) is W_i = π_i + kCS, where k ∈ [0, 1] denotes the weight that CSR firms assign to consumer surplus. 9 The benchmark model In this case, the firms follow CSR rules; however, they are not managerial. The analysis is carried out as usual through the maximisation of (2) with respect to quantity; solving the system of the two reaction functions for i, j = 1, 2 and i ≠ j, one gets the equilibrium output (3). Substituting (3) backwards, we obtain the equilibrium profits and consumer surplus, respectively, where the superscript CSR denotes that firms are socially concerned. Note that the satisfaction of the non-negativity constraints on profits ultimately requires that k ≤ 1/2, that is, the firm's interest in consumers' welfare must not be too high. This inequality also holds true for the rest of the paper. The model with managerial firms We introduce managerial delegation in the usual way, by assuming that the owners of both firms hire a manager and delegate the output decision to this manager. 10 Each manager receives a fixed salary plus a bonus element, which is related to a weighted combination of the firm's profits and sales. Moreover, we also follow the standard assumption of managerial delegation theory that the fixed component (salary) of the manager's pay is chosen by the firm's owner such that the manager exactly gets his/her opportunity cost, which is normalised to zero. More specifically, following for instance Vickers (1985), Jansen et al. (2007, 2009), Fanti and Meccheri (2013), and Meccheri and Fanti (2014), manager i receives a bonus that is proportional to u_i = π_i + b_i q_i, where the weight b_i is chosen by owner i to maximise its own objective function, and it can be either positive or negative because the owner may provide incentives or disincentives to the manager's choice of output (sales). The timing of the game is as follows. At stage 1, every firm's owner decides the weight on sales in the manager's bonus formula. At stage 2, each manager competes in quantities. The equilibrium concept considered is the SPNE, obtained by solving the model by backward induction. Given the CSR firm's objective function (2), u_i (which drives manager i's utility) can be rewritten as u_i = W_i + b_i q_i. Hence, the equilibrium of the second stage of the game (the market game) must satisfy the first-order conditions ∂u_i/∂q_i = 0 for i, j = 1, 2 and i ≠ j.
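Before turning to the reaction functions, the benchmark (no-delegation) case described above can be checked symbolically. The sketch below is a minimal illustration assuming the specification given in the text: linear demand p = a − q_i − q_j, zero costs, CS = (q_i + q_j)²/2 and W_i = π_i + kCS; the closed forms it prints follow from these assumptions and are offered as an illustrative check, not as a transcription of the paper's equations (3)-(5).

```python
import sympy as sp

a, k, qi, qj = sp.symbols('a k q_i q_j', positive=True)

# Profits and consumer surplus under linear demand p = a - q_i - q_j (zero costs assumed)
p = a - qi - qj
profit_i = p * qi
cs = (qi + qj) ** 2 / 2

# CSR objective: profits plus a fraction k of consumer surplus
W_i = profit_i + k * cs

# First-order condition of firm i, imposing symmetry q_i = q_j
foc_i = sp.diff(W_i, qi)
q_star = sp.solve(foc_i.subs(qj, qi), qi)[0]
print(sp.simplify(q_star))             # a/(3 - 2*k)

pi_star = sp.simplify(profit_i.subs({qi: q_star, qj: q_star}))
print(sp.factor(pi_star))              # a**2*(1 - 2*k)/(3 - 2*k)**2
# Profits are non-negative iff 1 - 2k >= 0, i.e. k <= 1/2, as stated in the text.
```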
From (9), we obtain the reaction functions (10). Note that it can be shown that the decision of both firms of CSR-type to hire a manager is an SPNE of a standard managerial delegation game in which each firm chooses whether to hire a manager; however, the analysis of such a game is beyond the scope of this paper. After the usual calculations, one gets the equilibrium output and profits as functions of the weights on quantities in the managers' bonus schemes. Maximising profits with respect to b_i and then solving the system of the resulting reaction functions in the bonus space, one gets the (symmetric) equilibrium value for b, with ∂b_i/∂k < 0: CSR has a mitigating effect on the manager's bonus. Lemma 1 The weight on quantities is positive (resp. negative) when the sensitivity to consumer surplus is low (resp. high). The received literature argues that it is optimal for firms' owners to twist their managers' incentives away from strict profit maximisation towards sales incentives. Lemma 1 reveals that, in this model, this occurs only if the social concern of the firm is sufficiently low; otherwise, it is optimal for owners to twist their managers' incentives away from sales. Substituting the equilibrium weight on quantities backwards, one gets the subgame perfect equilibrium output, profit and consumer surplus in the delegation case, respectively, where the first superscript, D, denotes ''managerial delegation'', and the second denotes that the firms follow CSR rules. Observing that when k = 0 firms simply maximise profits, the duopoly outcomes in the standard profit-maximisation case follow, where the superscript PS stands for ''profit-seeking'' firms. Results The above results report the equilibrium outcomes under exogenously given situations of only profit-seeking managers and socially concerned managers. Now we extend the above analysis by letting the firm endogenously choose whether to instruct its manager to follow CSR rules. We show that the endogenous decision of following CSR rules emerges as the SPNE outcome. The mixed case: one manager follows CSR rules and the other one is profit-seeking Let us consider the case in which firm i's manager maximises the social objective function, while the rival j's manager maximises profits. In the second stage, firm i's manager maximises (8) with respect to quantity. Solving the FOC for q_i, we derive (18), which represents firm i's output as a function of the rival firm's output. On the other hand, firm j's manager maximises the utility function (19). From (19), the FOCs for firm j determine the reaction function (20). Solving the system composed of (18) and (20) for the firms' quantities, we obtain firm i's output and firm j's optimal response. Both response functions are expressed as functions of the firm's own and the rival firm's bonus weights. In the first stage of the game, firm i's owner maximises, with respect to its manager's bonus weight, the profit function obtained after inserting (21) and (22) into it. Solving with respect to b_i leads to the reaction function for owner i. Likewise, firm j's owner maximises its own profit function, and solving with respect to b_j leads to the reaction function for owner j. In the asymmetric equilibrium of the first-stage subgame, we obtain the equilibrium bonus weights. Note that under CSR rules, the owner may even fix a negative bonus when its social concern is relatively high.
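Before examining the asymmetric case further, the symmetric delegation game in which both firms are CSR can be reproduced by backward induction under an assumed specification. The sketch below takes manager i's utility to be u_i = π_i + kCS + b_i q_i (the Vickers-style quantity bonus appended to the CSR objective, as described above) and lets each owner choose b_i to maximise profits; under these assumptions the output reproduces the qualitative properties stated in the text (the bonus weight equals a/5 at k = 0, decreases in k, and turns negative when k is high, as in Lemma 1), although the exact closed forms are those implied by this specification rather than a transcription of the paper's equations.

```python
import sympy as sp

a, k = sp.symbols('a k', positive=True)
qi, qj, bi, bj = sp.symbols('q_i q_j b_i b_j', real=True)

profit = lambda q_own, q_riv: (a - q_own - q_riv) * q_own
cs = (qi + qj) ** 2 / 2

# Stage 2: each manager maximises u = profit + k*CS + b*q (assumed bonus structure)
u_i = profit(qi, qj) + k * cs + bi * qi
u_j = profit(qj, qi) + k * cs + bj * qj
q_sol = sp.solve([sp.diff(u_i, qi), sp.diff(u_j, qj)], [qi, qj], dict=True)[0]

# Stage 1: each owner chooses its bonus weight to maximise own profit, given stage-2 outputs
pi_i = sp.simplify(profit(qi, qj).subs(q_sol))
pi_j = sp.simplify(profit(qj, qi).subs(q_sol))
b_sol = sp.solve([sp.diff(pi_i, bi), sp.diff(pi_j, bj)], [bi, bj], dict=True)[0]

b_star = sp.simplify(b_sol[bi])
print(sp.factor(b_star))                 # equals a/5 at k = 0; changes sign as k grows
print(sp.simplify(sp.diff(b_star, k)))   # negative on k in [0, 1/2]: CSR mitigates the bonus

q_star = sp.simplify(q_sol[qi].subs(b_sol))
pi_star = sp.simplify(pi_i.subs(b_sol))
print(sp.factor(q_star), sp.factor(pi_star))
print(sp.simplify(sp.diff(pi_star, k)))  # positive for k <= 1/2: profit rises with k under delegation
```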
In this asymmetric equilibrium, more precisely, the sign of b_i depends on the strength of the social concern. In other words, the CSR firm's owner will twist the manager's incentives towards (away from) sales depending on whether the owner is strongly (weakly) oriented towards social concerns. Equilibrium analysis Now we are in a position to conduct the following analysis. We compare firms' profits under the two mixed cases, and we investigate which rule (CSR or profit-seeking) endogenously emerges as the SPNE for both firms in the presence of managerial delegation. Let us define the profit differentials between the relevant cases. Hence, we state the following: Result 1 In the SPNE outcome of the game both firms choose to instruct their managers to follow CSR rules. Proof Result 1 straightforwardly derives from the signs of the two profit differentials involved in the determination of the SPNE. Now, having established that both firms endogenously choose to be CSR-type, we investigate how profits and welfare vary with the importance of the CSR motive in the cases without and with managerial delegation. Then, the following results hold: 11 Result 2 (a) As expected, without managerial delegation, the higher k is, the lower profit is; (b) in contrast to the common wisdom, under managerial delegation, the higher k is, the higher profit is. Proof Recalling that k ≤ 1/2, this straightforwardly follows from the sign of ∂π^{CSR/CSR}/∂k in each of the two cases. Result 3 (a) As expected, under the standard profit-maximising behaviour without managerial delegation, the higher k is, the higher consumer surplus 12 is; (b) in contrast to the common wisdom, under managerial delegation, the higher k is, the lower consumer surplus is. Proof Recalling that k ≤ 1/2, this straightforwardly follows from the sign of ∂CS^{CSR/CSR}/∂k in each of the two cases. Hence, when firms are managerial, as a modern theory of corporate governance argues, an increase in their ''social concern'' with respect to the decisions on output may, rather counter-intuitively, actually decrease consumer surplus and increase firms' profitability. The explanation for this result is as follows. There are two opposite forces at work. On the one hand, the introduction of CSR and the increasing weight attached to the stakeholder-consumers' interests lead the firms to a ''more aggressive'' market behaviour through output expansion (Fanti and Buccella 2016). On the other hand, with managerial firms, CSR softens the magnitude of the manager's bonus on sales in equilibrium. Result 3 tells us that with managerial firms, the second effect of CSR dominates, leading to a reduction in output and, consequently, to a rise in prices and profitability. Moreover, this also has a negative effect on social welfare, given its direct correlation with consumer surplus. In fact, the value of consumer surplus in the social welfare function always dominates that of profits (a well-known result in the IO literature in the presence of standard linear demand and technology). Therefore, in the presence of managerial delegation, the negative effect of the ''social concern'' on consumers' welfare overcomes the positive effect on profits, and thus social welfare reaches a minimum compatible with the non-negativity constraint on profits. These results imply that an analysis of the impact of firms' social concern should take into account the market context and the strategic behaviours of the CSR firms, as well as the separation between control and ownership as regards the decisions on quantity competition.
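A quick numerical check of the comparative statics in Results 2 and 3 can be run with the closed forms implied by the assumed specification of the previous sketch (q = a/(3 − 2k) without delegation, q = a(2 − k)/(5 − 2k) with delegation). The values below are illustrative only; the pattern they display is what the results claim: without delegation profit falls and consumer surplus rises in k, while under delegation the ordering is reversed.

```python
import numpy as np

a = 1.0
k = np.linspace(0.0, 0.5, 6)

# Closed forms derived from the assumed specification (see the previous sketch)
q_nd = a / (3 - 2 * k)               # no delegation
q_d = a * (2 - k) / (5 - 2 * k)      # managerial delegation

def outcomes(q):
    price = a - 2 * q                # symmetric duopoly, zero costs
    profit = price * q               # per-firm profit
    cs = (2 * q) ** 2 / 2            # consumer surplus
    return profit, cs

pi_nd, cs_nd = outcomes(q_nd)
pi_d, cs_d = outcomes(q_d)

for i, kk in enumerate(k):
    print(f"k={kk:.1f}  no delegation: pi={pi_nd[i]:.4f} CS={cs_nd[i]:.4f}   "
          f"delegation: pi={pi_d[i]:.4f} CS={cs_d[i]:.4f}")
# Expected pattern: pi_nd falls and cs_nd rises in k, while pi_d rises and cs_d falls in k.
```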
Conclusions The paper shows that in a Cournot duopoly with managerial firms both firms obtain higher profits by adopting CSR rules in the product market game (i.e. by being sensitive to consumer surplus), and the firms' profitability is always increasing in their social concern. Conversely, the work shows that consumer surplus and social welfare are reduced when firms adopt CSR rules, and the higher the firms' social concern, the lower the consumers' and social welfare. This finding is in sharp contrast with the conventional result (established with non-managerial firms) that the higher the CSR sensitivity to consumer surplus, the lower (resp. higher) the firms' profitability (resp. the consumer surplus and social welfare). 13 Moreover, importantly, the firms' type (either CSR or profit-seeking) has been endogenously determined in an appropriate game context, showing that in the SPNE of the game, both firms are of CSR-type. The implication of our findings is striking: the choice to follow CSR rules at the production choice stage (which is also robust because it is endogenously determined) is not related to social interests, but is driven by profit-enhancing reasons. On the other hand, consumers and society should, rather paradoxically, hinder firms' social concerns. While in the owner-managed firm it is never profitable to take the consumer surplus into account in the firm's objective, in the managerial firm profits are higher when owners instruct their managers to take the consumer surplus into account in their objective; in particular, if owners may also choose ''how much'' to consider the consumer surplus, then a profit-maximising level of CSR always exists (for details, see the ''Appendix''). Therefore, the present work revisits the reasoning of Friedman (1970) in a context of separation between management and ownership, showing that owners can find it beneficial to delegate to managers engaged in CSR activities in order to improve the firms' profitability. Several extensions of the present simple model come to mind. Among others, we mention: (1) assuming price competition; (2) assuming a different structure of the manager's incentive contract, such as contracts related to the firms' relative profitability or market shares; (3) assuming that products are horizontally and vertically differentiated; (4) considering consumption externalities; (5) considering an endogenous timing of moves of the game; and (6) considering the presence of asymmetric and endogenous costs (such as unionised labour costs). These extensions are left for further research. Result 5 Profits under managerial delegation are larger than those under no delegation when, in the former case, owners confer on the consumer surplus a weight in the firms' objective function of 1 < k < 1.75 and, in the latter case, set k = 0. Result 5 implies that in the case in which owners may choose whether and how to be socially responsible, the common wisdom that managerial delegation is profit reducing at equilibrium is reversed (even for a wide range of the weight k). More interestingly, Results 4 and 5 jointly indicate, in contrast to the common wisdom, the unexpected result that at the market equilibrium shareholders find it more profitable to delegate to managers and to be strongly socially responsible (in the sense that they confer more importance to the welfare of consumers than to their own profits). The economic intuition for this result is grounded in the role played by the value of the sales incentive in the managerial contract.
In fact, given that consumer surplus has a substantial weight in the objective function, the owners strategically use quantities via the adoption of CSR and then, through the bonus, they may penalise managers with respect to output (although not always, depending on the appropriately weighted level of social engagement) in order to increase the firms' profitability.
6,133.4
2017-04-05T00:00:00.000
[ "Economics", "Business" ]
Weak Solutions in Elasticity of Dipolar Porous Materials The theories of porous materials represent a material length scale and are quite sufficient for a large number of the solid mechanics applications. In the following, we restrict our attention to the behavior of the porous solids in which the matrix material is elastic and the interstices are voids of material. The intended applications of this theory are to the geological materials, like rocks and soils and to the manufactured porous materials. The plane of the paper is the following one. In the beginning, we write down the basic equations and conditions of the mixed boundary value problemwithin context of linear theory of initially stressed bodies with voids, as in the papers of 1, 2 . Then, we accommodate some general results from the paper 3 , and the book 4 , in order to obtain the existence and uniqueness of a weak solution of the formulated problem. For convenience, the notations chosen are almost identical to those of 2, 5 . Introduction The theories of porous materials represent a material length scale and are quite sufficient for a large number of the solid mechanics applications. In the following, we restrict our attention to the behavior of the porous solids in which the matrix material is elastic and the interstices are voids of material.The intended applications of this theory are to the geological materials, like rocks and soils and to the manufactured porous materials. The plane of the paper is the following one.In the beginning, we write down the basic equations and conditions of the mixed boundary value problem within context of linear theory of initially stressed bodies with voids, as in the papers of 1, 2 .Then, we accommodate some general results from the paper 3 , and the book 4 , in order to obtain the existence and uniqueness of a weak solution of the formulated problem.For convenience, the notations chosen are almost identical to those of 2, 5 . Basic equations Let B be an open region of three-dimensional Euclidean space R 3 occupied by our porous material at time t 0. We assume that the boundary of the domain B, denoted by ∂B, is a closed, bounded and pice-wise smooth surface which allows us the application of the 2 Mathematical Problems in Engineering divergence theorem.A fixed system of rectangular Cartesian axes is used and we adopt the Cartesian tensor notations.The points in B are denoted by x i or x .The variable t is the time and t ∈ 0, t 0 .We will employ the usual summation over repeated subscripts while subscripts preceded by a comma denote the partial differentiation with respect to the spatial argument.We also use a superposed dot to denote the partial differentiation with respect to t.The Latin indices are understood to range over the integers 1, 2, 3 . In the following, we designate by n i the components of the outward unit normal to the surface ∂B.The closure of domain B, denoted by B, means B B ∪ ∂B. Also, the spatial argument and the time argument of a function will be omitted when there is no likelihood of confusion. The behavior of initially stressed bodies with voids is characterized by the following kinematic variables: In our study, we analyze an anisotropic and homogeneous initially stressed elastic solid with voids.We restrict our considerations to the Elastostatics, so that the basic equations become as follows. 
i The equations of equilibrium is as follows: ii the balance of the equilibrated forces is as follows: iii the constitutive equations are as follows: 2.5 In the above equations we have used the following notations: i ρ-the constant mass density; ii u i -the components of the displacement field; Marin Marin 3 iii ϕ jk -the components of the dipolar displacement field; iv ν-the volume distribution function which in the reference state is ν 0 ; v σ-a measure of volume change of the bulk material resulting from void compaction or distension; vi τ ij , η ij , μ ij -the components of the stress tensors; vii h i -the components of the equilibrated stress; viii F i -the components of body force per unit mass; ix G jk -the components of dipolar body force per unit mass; x L-the extrinsic equilibrated body force; xi g-the intrinsic equilibrated body force; xii ε ij , κ ij , χ ijk -the kinematic characteristics of the strain tensors; xiii C ijmn , B ijmn , . . ., D ijm , E ijm , . . ., a ij , b ij , c ijk , d i , ξ represent the characteristic functions of the material the constitutive coefficients and they obey to the following symmetry relations 2.6 The physical significances of the functions L and h i are presented in the works 6, 7 . The prescribed functions P ij , M ij and N ijk from 2.2 and 2.3 satisfy the following equations: N ijk,i P jk 0. 2.7 Existence and uniqueness theorems In the main section of our paper, we will accommodate some theoretical results from the theory of elliptic equations in order to derive the existence and the uniqueness of a generalized solution of the mixed boundary-value problem in the context of initially stressed bodies with voids.Throughout this section, we assume that B is a Lipschitz region of the Euclidian threedimensional space R 3 .We use the following notations: the Cartesian product is considered to be of 13-times.Also, W k,m is the familiar Sobolev space.With other words, W is defined as the space of all u u i , ϕ ij , σ , where 3.2 For clarity and simplification in presentation, we consider the following regularity hypotheses on the considered functions: i the constitutive coefficients are functions of class C 2 on B; ii the body loads F i , G jk , and H are continuous functions on B. The ordered array u i , ϕ jk , σ is an admissible process on Let ∂B S u ∪ S t ∪ C be a disjunct decomposition of ∂B, where C is a set of surface measure and S u and S t are either empty or open in ∂B.Assume the following boundary conditions: where the functions u i , ϕ jk , σ, t i , μ jk , and h are prescribed, u i , ϕ jk , σ ∈ W 1,2 S u , and t i , μ jk , h ∈ L 2 S t .Also, we define V as a subspace of the space W of all functions u u i , ϕ ij , σ which satisfy the boundary conditions: 3.4 On the product space W × W, we consider a bilinear form A v, u , defined by where 3.6 We assume that the constitutive coefficients are bounded measurable functions in B which satisfy the symmetries 2.6 .Then, by using relations 3.5 and 2.6 it is easy to deduce that A v, u A u, v . 3.7 Also, by using symmetries 2.6 into 3.5 , it results in A ijkmnr χ ijk u χ mnr u P ki u j,k u j,i − 2M ik u j,i ϕ jk 2d i σγ ,i g ij σ ,i σ ,j ξσ 2 dV, and thus A u, u 2 B UdV, 3.9 where U ρe is the internal energy density associated to u. 
We suppose that U is a positive definite quadratic form, that is, there exists a positive constant c such that C ijmn x ij x mn 2G ijmn x ij y mn 2F ijmnr x ij z mnr B ijmn y ij y mn 2D ijmnr y ij z mnr A ijkmnr z ijk z mn P ki x ji x jk − 2M ik x ji y jk 2N rik x ji z jkr 2a ij x ij w 3.10 for all x ij , y ij , z ijk , ω i , and w.Now, we introduce the functionals f v and g v by u i , ϕ jk , γ ∈ W be such that u i , ϕ jk , γ on S u may by obtained by means of embedding the space W 1,2 into the space L 2 S u . The element v u i , ϕ jk , σ ∈ W is called weak (or generalized) solution of the boundary value problem, if 12 hold each v ∈ V.In the above relations, we used the spaces L 2 B and L 2 S u which represent, as it is well known, the space of real functions which are square-integrable on B, respectively, on S u ⊂ ∂B. It follows from 3.10 and 3.8 that for any v v i , ψ jk , γ ∈ W. Let us consider the operators N k v, k 1, 2, . . ., 49, mapping the space W into the space L 2 B , defined by 3.14 It is easy to see that, in fact, the operators N k v, k 1, 2, . . ., 49, defined above, have the following general form: where n k r α are bounded and measurable functions on B. Also, we have used the notation D α for the multi-indices derivative, that is, 3.16 By definition, the operators N k v, k 1, 2, . . ., 49 form a coercive system of operators on Wif for each v ∈ W the following inequality takes place: 3.17 In this inequality, the constant c 1 does not depend on v and the norms |•| L 2 and |•| W represent the usual norms in the spaces L 2 B and W, respectively. In the following theorem, we indicate a necessary and sufficient condition for a system of operators to be a coercive system. is equal to m for each ξ ∈ C 3 , ξ / 0, where C 3 is the notation for the complex three-dimensional space, and The demonstration of this result can be find in 4 . In the following, we assume that for each v ∈ W, we have where the constant c 2 does not depend on v.We denote by P the following set: 3.23 In the following theorem, it is indicated a necessary and sufficient condition for the existence of a weak solution of the boundary-value problem.A v, u v, u define a bilinear form for each v, u ∈ W/P, where u ∈ u and v ∈ v.If it is supposed that the inequalities 3.17 and 3.20 hold, then a necessary and sufficient condition for the existence of a weak solution of the boundary value problem is p ∈ P ⇒ f p g p 0. 3.24 Moreover, the weak solution, u ∈ W, satisfies the following inequality: where c 3 is a real positive constant.Further, one has for each v ∈ W/P. For the prove of this result, see 3 . In the following, we intend to apply the above two results in order to obtain the existence of a weak solution for the boundary value problem formulated in the context of theory of initially stressed elastic solids with voids.Theorem 3.3.Let P {0}.Then there exists one and only one weak solution u ∈ W of our boundaryvalue problem. Mathematical Problems in Engineering Proof. from 3.13 and 3.14 we immediately obtain 3.20 .The matrix 3.20 has the rank 13 for each ξ ∈ C 3 , ξ / 0. Thus by Theorem 3.1 we conclude that the system of N k operators, defined in 3.14 , is coercive on the space W. 
According to definition 3.21 of P, we have that ε ij v 0, κ ij v 0, χ ijk v 0, γ i v 0, ψ 0 for each v ∈ P, v v i , ψ jk , ψ .So, we deduce that P reduces to P v v i , ψ jk , γ ∈ V : v i a i ε ijk b j x k , ψ jk ε jks b s , γ c , 3.27 where a i and b i and c are arbitrary constants and ε ijk is the alternating symbol.We will consider two distinct cases.First, we suppose that the set S u is nonempty.Then the set P reduces to P {0}, and therefore, condition 3.24 is satisfied.By using Theorem 3.2, we immediately obtain the desired result. In the second case, we assume that S u is an empty set.Then we have the following result.Proof.In this case, the boundary value problem P is given by 3.27 , where a i , b i , and c are arbitrary constants such that we can apply, once again, Theorem 3.2 to obtain the above result. Conclusion For the considered initial-boundary value problem the basic results still valid.Now, for different particular cases, the solution can be found because it exists and is unique. Theorem 3 . 1 . Let n psα be constant for |α| k s .Then the system of operators N p v is coercive on W if and only if the rank of the matrix by V/P the factor-space of classes v, wherev v p, v ∈ V, p ∈ P , Theorem 3 . 4 . The necessary and sufficient conditions for the existence of a weak solution u ∈ W of the boundary-value problem for elastic dipolar bodies with stretch, are given byB ρF i dV ∂B t i dA 0, B ρε ijk x j F k G jk dV ∂B ε ijk x j t k μ jk dA 0,3.28where ε ijk is the alternating symbol.
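A schematic statement of the variational framework used in this argument may help fix ideas. The LaTeX block below states, in generic form, the weak formulation, a coercivity-type inequality for the operator system, and the stability estimate satisfied by the weak solution; the notation (A, f, g, W, V, the operators N_k and the constants c_1, c_3) follows the text, but the displayed forms are indicative sketches and do not reproduce the paper's equations (3.5)-(3.25).

```latex
% Schematic weak formulation (indicative only):
% find u = (u_i, \varphi_{jk}, \sigma) \in W such that
\begin{equation*}
  A(v,u) = f(v) + g(v) \qquad \text{for all } v \in V ,
\end{equation*}
% with A symmetric, A(v,u) = A(u,v), and linked to the internal energy density U by
\begin{equation*}
  A(u,u) = 2\int_{B} U \,\mathrm{d}V , \qquad U \ \text{a positive definite quadratic form.}
\end{equation*}
% Coercivity of the operator system \{N_k v\}, k = 1,\dots,49, on W (an inequality of the type (3.17)):
\begin{equation*}
  \sum_{k} \lVert N_k v \rVert_{L^2(B)}^{2} + \lVert v \rVert_{L^2(B)}^{2}
  \;\ge\; c_1 \lVert v \rVert_{W}^{2} ,
\end{equation*}
% and a stability estimate of the form satisfied by the weak solution (cf. (3.25)):
\begin{equation*}
  \lVert u \rVert_{W} \;\le\; c_3 \bigl( \lVert f \rVert + \lVert g \rVert \bigr).
\end{equation*}
```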
3,192
2008-09-25T00:00:00.000
[ "Engineering", "Materials Science", "Mathematics" ]
Sparsity-Inducing Super-Resolution Passive Radar Imaging with Illuminators of Opportunity Shunsheng Zhang 1,*, Yongqiang Zhang 1, Wen-Qin Wang 1, Cheng Hu 2 and Tat Soon Yeo 3 1 Research Institute of Electronic Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China<EMAIL_ADDRESS>(Y.Z<EMAIL_ADDRESS>(W.Q.W.) 2 Electronics and Information School, Beijing Institute of Technology, Beijing 100081, China<EMAIL_ADDRESS>3 Department of Electrical and Computer Engineering, National University of Singapore, Singapore 117576, Singapore<EMAIL_ADDRESS>* Correspondence<EMAIL_ADDRESS>Tel.: +86-28-6183-1163 Introduction Passive radar exploiting illuminator of opportunity (IO), such as frequency modulation (FM) station [1,2], television station [3], etc., has been shown to successfully detect and track targets [4][5][6].While compared with active radar, passive radar has many advantages in low cost, low vulnerability to electronic jamming, counter low-flying target penetration and counter-stealth.Nevertheless, it is a challenge for existing passive radar techniques to achieve high-resolution imaging due to limited bandwidth of the opportunistic signals. Conventional inverse synthetic aperture radar (ISAR) [7,8] usually transmits a large bandwidth signal to achieve fine range resolution.However, in passive radar imaging, most IOs just irradiate a narrow bandwidth signal in Ultra High Frequency (UHF) or Very High Frequency (VHF) bands.Furthermore, if the target rotation angle relative to the radar is also small, then the radar shall also have a very limited cross-range resolution.Even for a large rotation angle, more complicated target-motion compensation is often required due to non-uniform rotational motion and/or fluctuation of target reflection characteristics. Several investigations have been performed for passive ISAR imaging [9][10][11][12][13].In [9], a smoothed pseudo Wigner-Ville distribution (SPWVD) has been applied for passive ISAR cross-range processing to achieve wide-angle imaging.However, this algorithm requires a large number of illuminators to provide a large rotation angle and a large computation complexity induced by the time-frequency transform.In [10,11], an experiment to enhance the cross-range resolution of passive ISAR image using digital TV based passive bistatic radar (PBR) has been carried out.It is demonstrated that a moderate resolution can be expected for the polar format algorithm (PFA), but there is a tradeoff between the imaging swath and the resolution.In [12,13], a passive ISAR processing scheme is presented by exploiting the multichannel digital television broadcasting (DVB)-T signals, which is verified by the experimental results of ship and civilian aircraft imaging.Nevertheless, since the rotation angle is only several degrees, it is still difficult to obtain a meaningful passive ISAR image.Thus, a large rotation angle should be employed for the system.In general, a large rotation angle means a long observation time.However, when the target is maneuvering, the target may not be irradiated by the IO and passive radar simultaneously, which makes long-term data collection very difficult.Moreover, portions of the data may be missed due to uncooperative IO.To resolve these problems, effective algorithms should be developed for passive ISAR super-resolution imaging, particularly for the cases with a small rotation angle as well as with limited measurements. 
In [14], an ESPRIT based super-resolution imaging algorithm based on external illuminators is proposed under the condition of small angular rotation, where ESPRIT [15] is used to estimate the spatial frequencies of different scatterers on the target.RELAX [16,17] can also estimate the amplitude and frequency of multi-component complex sinusoidal signals and has been successfully applied for super-resolution SAR imaging.Although the proposed scheme seems to work well for super-resolution passive radar imaging, we should be aware that both ESPRIT and RELAX are more or less sensitive to noise and modeling errors.In addition, applying ESPRIT or RELAX for passive radar imaging will require a considerable amount of measurements and, consequently, it is impractical for actual applications.A compressed sensing (CS) based algorithm was proposed in [18] to reconstruct passive ISAR images, but it is effective only when the multiple non-adjacent DVB-T channels are spectrally separated. In this paper, we propose a super-resolution passive ISAR imaging framework based on CS for the case with two illuminators and a small rotation angle.In this framework, the passive radar imaging is recast as a reconstruction problem of sparse signal with unknown amplitude and frequency.Next, the log-sum penalty function-based CS algorithm is employed to recover the amplitude and frequency parameters of strong scatterers.Simulated results demonstrate the superiority of the proposed method over other existing super-resolution algorithms such as ESPRIT and RELAX under the condition of low SNR and limited measurements. The remaining sections are organized as follows.In Section 2, the parametric passive radar imaging model is established.Section 3 presents the CS-based algorithm for super-resolution passive radar image formation.In Section 4, we compare the performance of the proposed algorithm and other super-resolution algorithms with simulated data.Finally, conclusions are drawn in Section 5. Parametric Passive Radar Imaging Model Consider a two-dimensional imaging model as shown in Figure 1.The reference point O is taken at the target center,T (r i , θ i ) and R (r 0 , θ 0 ) denote the i-th (i = 1, 2) illuminator and receiver coordinates, respectively; P k (r k , ϕ k ) denotes the k-th (k = 1, 2, ..., K) scatterer's coordinates on the target, and the distances from the k-th scatterer to the i-th illuminator and receiver are denoted by R k i and R k 0 , respectively. Passive radar imaging model. 
Suppose the i-th illuminator transmits a monochromatic continuous signal [19][20][21] s_i(t) = exp(j2πf_i t), where t is the fast time and f_i is the carrier frequency. The received signal reflected from the k-th scatterer is given by Equation (2), where A_k is the reflectivity of the k-th scatterer, and τ is the signal propagation time, τ = (R_i^k + R_0^k)/c, with c being the speed of light. In the far-field condition, since R_i^k ≫ r_k and R_0^k ≫ r_k, we can use far-field approximations of the bistatic ranges. Then, Equation (2) can be rewritten with λ_i = c/f_i being the wavelength. Assuming that the reference signal is available, the instantaneous complex envelope of the returned signal after demodulation and translational motion compensation can be obtained. Note that stationary illuminators and receiver and a moving target are assumed. In this case, the rotation velocity can be estimated by the method presented in [19], wherein a short observation time and a correspondingly small rotation angle, which can be regarded as uniform rotation, are assumed. In this paper, we suppose the target moves with a rotation angle Θ during the observation time and define the instantaneous rotation angle as ϕ_Θ ∈ [ϕ_k − Θ, ϕ_k]. Thus, Equation (8) can be rewritten accordingly, and the phase term can be reformulated through the trigonometrical transform into Equation (10). For such a small rotation angle, the approximation in Equation (11) holds. Substituting Equation (11) into Equation (10) yields Equation (12), which can be rewritten as Equation (13). Supposing the target consists of K scatterers, the received signal can then be represented by Equation (14), where h(k) and ω_k are the reflected amplitude and spatial frequency of the k-th scatterer, respectively. Equation (14) implies that the returned signal can be parameterized as a summation of K sinusoids with unknown amplitudes and frequencies. Moreover, in passive radar imagery, strong scatterers often only take up a fraction of the two-dimensional imaging scene. Therefore, passive radar imaging can be regarded as a sparse reconstruction problem, and, thus, we propose the following passive radar super-resolution imaging algorithm with a CS technique. Super-Resolution Imaging for Passive Radar The ESPRIT algorithm for passive radar imaging has been investigated in [14], which employs the rotation invariance technique and eigenvalue decomposition to estimate the amplitude and spatial frequency of each scatterer. In this section, two super-resolution algorithms, RELAX and CS, are presented in detail.
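Before turning to the estimators, the parametric model of Equation (14) can be made concrete with a short simulation: the slow-time echo is synthesised as a sum of K complex sinusoids with unknown amplitudes and spatial frequencies, plus white Gaussian noise. The number of samples, the frequency values and the SNR below are illustrative placeholders, not the paper's simulation parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 128                                  # number of slow-time samples (illustrative)
K = 4                                    # number of strong scatterers
h = rng.uniform(0.5, 1.5, K) * np.exp(1j * rng.uniform(0, 2 * np.pi, K))
omega = rng.uniform(-np.pi, np.pi, K)    # spatial frequencies (rad/sample)

m = np.arange(M)
# Equation (14)-style model: s(m) = sum_k h_k * exp(j * omega_k * m)
s_clean = (h[None, :] * np.exp(1j * np.outer(m, omega))).sum(axis=1)

snr_db = 20.0
noise_power = s_clean.var() / 10 ** (snr_db / 10)
noise = np.sqrt(noise_power / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
s = s_clean + noise                      # measured echo used by the estimators discussed below
```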
RELAX-Based Passive Radar Imaging RELAX, which relaxes the hypothesis on the additive noise and system errors, is an effective spectrum estimation algorithm. By adopting an iterative procedure, the spatial frequencies of the scatterers can be estimated by the nonlinear least squares (NLS) method. When the RELAX algorithm is employed, Equation (16) should be recast as Equation (17), where ĥ and ω̂ are the estimates of h and ω, respectively, and ‖·‖₂ denotes the ℓ2 norm. Let the residual data s_k be defined accordingly, where {h(i), ω_i} (i = 0, 1, ..., K − 1, i ≠ k) are assumed to be given. Then, the minimization of Equation (17) is equivalent to fitting the k-th sinusoidal component to s_k. The NLS estimates of h(k) and ω_k are given as in [16]. It should be noted that ω̂_k is the dominant peak position of ‖f^H(ω_k) s_k‖₂², which can be efficiently obtained by applying the fast Fourier transform (FFT) to the data sequence s_k padded with zeros. According to [22], the locally optimal solution of {h(i), ω_i} obtained by the NLS estimate is globally optimal. In this way, we can obtain two independent sets {ĥ, ω̂} estimated from the two external illuminators. Subsequently, a passive ISAR image can then be attained with these two estimated sets {ĥ, ω̂}. CS-Based Passive Radar Super-Resolution Imaging It is widely recognized that in the CS framework, the optimal reconstruction of an unknown sparse signal can be achieved from limited noisy measurements by solving a sparsity-constrained optimization problem [23]. The CS framework has been successfully applied in conventional ISAR and bistatic ISAR imaging [24][25][26][27]. In this paper, we use such a CS technique to accomplish super-resolution passive radar imaging with fewer television illuminators and a smaller rotation angle. Taking additive noise into account, Equation (16) can then be rewritten in the matrix form s = Φh + σ, where h = [h(0), ...]^T is the vector of scattering amplitudes, σ is the measurement noise, Φ is the dictionary matrix, and the (k, m) element of Φ is defined in terms of the candidate spatial frequencies. Note that the dictionary Φ is characterized by the unknown parameters {ω_k}, which should be estimated together with the unknown complex amplitudes {h_k}. Estimating {ω_k} and {h_k} can then be formulated as a sparse recovery problem, Equation (24), where ‖·‖₁ denotes the ℓ1 norm, δ is the noise level, and Φ ∈ C^{M×N} (M ≪ N, with N the length of the recovered signal) is an over-complete dictionary. Accurate reconstruction of the sparse signal can be achieved through existing recovery algorithms, such as orthogonal matching pursuit (OMP) [28] and basis pursuit (BP) [29]. In this paper, we use a log-sum sparsity-encouraging function for the sparse reconstruction. The log-sum penalty function has been extensively applied for sparse signal recovery [30][31][32], and its superiority over conventional ℓ1-type algorithms has been widely verified. In doing so, Equation (24) can be recast as Equation (25), where h_i denotes the i-th component of the vector h, and ε > 0 is a positive parameter to ensure that the function is well-defined. Thus, {ω_k} and {h_k} can be estimated by the algorithm proposed in [33].
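One common way to attack a log-sum penalised problem such as Equation (25) is iterative reweighting: at each step the concave log-sum penalty is majorised by a weighted quadratic term, which gives a closed-form update. The sketch below implements such a scheme over a uniform frequency grid and should be read as a generic stand-in for the algorithm of [33], not as that algorithm itself; the grid size, the regularisation weight lam and ε are illustrative choices.

```python
import numpy as np

def logsum_recover(s, N=512, n_iter=30, lam=1e-1, eps=1e-3):
    """Reweighted least-squares surrogate for min ||s - Phi h||^2 + lam * sum(log(|h_i|^2 + eps))."""
    M = len(s)
    m = np.arange(M)
    grid = np.linspace(-np.pi, np.pi, N, endpoint=False)   # candidate spatial frequencies
    Phi = np.exp(1j * np.outer(m, grid))                    # M x N over-complete dictionary
    h = Phi.conj().T @ s / M                                 # initial (matched-filter) estimate
    for _ in range(n_iter):
        w = 1.0 / (np.abs(h) ** 2 + eps)                     # weights from the log-sum majoriser
        A = Phi.conj().T @ Phi + lam * np.diag(w)
        h = np.linalg.solve(A, Phi.conj().T @ s)
    return grid, h

# Usage with the synthetic echo s from the earlier sketch:
# grid, h_hat = logsum_recover(s)
# strong = np.argsort(np.abs(h_hat))[-4:]      # keep the K largest components
# omega_hat = grid[strong]
```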
Once the amplitude and spatial frequency of each scatterer are obtained, the target position can be determined by searching the vector ω = {ω k } K k=1 .Here, it should be pointed out that the passive radar image achieved in this paper is a two-dimensional image.Thus, at least two illuminators are needed.Thus, the scatterers' position estimation problem can be further formulated as where {r , ϕ } are the estimates of {r i , ϕ i } i=K i=1 , and ω re f is the referenced spatial frequency.The position information of different scatterers can then be obtained by solving Equation (26). In summary, the flowchart of the proposed CS-based method for passive radar imaging is shown in Figure 2.After searching for positions using two sets of ω , and adding the scattering amplitude information in the reconstructed vector h , we can obtain the passive radar image of the target.Flowchart of the proposed compressed sensing (CS)-based method for passive radar imaging. Influence of Noise It should be emphasized that the use of the ESPRIT algorithm for super-resolution passive radar imaging requires a high SNR (SNR ≥ 30 dB).Otherwise, it will have a large reconstruction error.In contrast, the RELAX and CS methods achieve good recovery performance even in low SNR cases. In addition, the estimation of the noise level given in Equation (25) is still an open problem in sparse signal recovery.If δ is too large, a part of the signal component cannot be recovered.On the other hand, if δ is too small, some noise components may be treated as signals.In this paper, we estimate the noise level δ using the energy-based method presented in [34]. Influence of Measurements In CS-based passive radar imaging, accurate frequency estimation can be obtained only when a large M is employed.In our algorithm, we only require that M ≥ O (K log (N/K)) for Equation (25).In contrast, our investigations (detailed in Section 4) demonstrate that ESPRIT and RELAX algorithms cannot yield accurate frequency estimation for such a small M.This implies that our proposed CS algorithm can achieve better frequency estimation performance than the ESPRIT and RELAX algorithms when limited measurements are available. Why CS Performs Better Than ESPRIT and RELAX While both the ESPRIT and RELAX algorithms have been shown to be able to perform super-resolution imaging through spectra estimation; however, their reconstruction performance are restricted by the length of the measured data.For limited observation data, the passive radar imaging results obtained by the ESPRIT and RELAX algorithms will be very poor, which have been validated in Section 4.5.On the contrary, the proposed CS method can reconstruct a high-dimensional signal from low-dimensional observation data.This reason is that an unknown sparse signal may be accurately recovered from a limited number of measurements by solving a sparsity-driven optimization problem.In other words, the CS-based method is not sensitive to the data limitation.Therefore, CS can achieve better reconstruction performance than the ESPRIT and RELAX algorithms in super-resolution passive radar imaging. Simulation and Analysis In this section, we use simulation data to evaluate the performance of the proposed method, and compare it with the ESPRIT and RELAX algorithms under different SNRs and limited measurement conditions. 
Suppose the illuminator of opportunity A, represented by IO(A), locates at (35 km, 80 • ) and operates at the carrier frequency of 200 MHz, while the illuminator of opportunity B, denoted by IO(B), locates at (30 km, 20 • ) and works at 280 MHz.The receiver is located at (20 km, 0 • ).Additionally, the rotation angle Θ is assumed to be 4 • during the observation time. Frequency Estimation of Scatterers Applying CS The noise corrupted data with SNR = 20 dB is generated by adding white Gaussian noise into the simulated passive radar echo.Then, the proposed CS method is applied to estimate the amplitudes and spatial frequencies of six scatterers ((0,0), (3,0), (3,9 π/16), (3,−9 π/16), (3,15 π/16), (3,−15 π/16)) on the target.To provide a quantitative evaluation for estimation performance, the relative mean square error (MSE) is defined by where ω , ω represent the spatial frequency vector and its estimation of the scatterers, respectively.Tables 1 and 2 give the MSE of the estimated spatial frequency from IO(A) and IO(B) using the proposed CS method, respectively, where the MSEs are calculated according to Equation (27).It can be observed that the estimated frequencies coincide with the theoretical values.This indicates that the proposed CS method can estimate the frequency parameter with high accuracy. Position Estimation Using the Proposed Method In this section, we demonstrate that the scatterer's position can be obtained through two frequency vectors.The simulation parameters are the same as Section 4.1.We first use one set ω to search for the position of the scatterer (3, 0), which is shown in Figure 3a.Obviously, the scatterer (3, 0) cannot be distinguished in the polar plane.In Figure 3b, the red point corresponds to the position of the scatterer (3, 0) estimated by using two frequency sets.Likewise, we use two frequency sets to estimate the position of six scatterers via (26), as shown in Figure 4.It is clear that the coordinates of all scatterers are correctly estimated. Resolution Comparison In order to investigate the imaging resolution, an experiment of resolution comparison is carried out.The simulation parameters are the same as Section 4.1, and the scattering model of an extended target in the 2D plane is illustrated in Figure 5. Figure 6a corresponds to the reconstructed passive radar image obtained through the polar format algorithm (PFA) [10], and Figure 6b shows the passive radar imaging result reconstructed by the proposed method.We note that the passive radar image produced by our proposed method has better reconstruction performance than that of the PFA algorithm.Moreover, the proposed method can obtain higher resolution in the xand y-direction than the PFA algorithm.This implies that our proposed method can achieve super-resolution.In addition, we find that when these scatterers are not on the grid, the proposed CS method can still obtain the focused image. Imaging Performance versus SNR In this simulation, we examine the impacts of SNR on the passive radar imaging performance.We add different complex noise with Gaussian distribution into the simulated passive radar echoes with a SNR of 5, 10 and 20 dB, respectively.Similarly, the MSE of the estimated scatterers' position is used to evaluate the reconstruction performance. 
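The relative MSE of Equation (27) is straightforward to compute once the true and estimated vectors are aligned. The helper below assumes the usual ratio-of-squared-norms definition and pairs the entries by sorted order, which is an illustrative convention since the estimators return frequencies (or positions) in arbitrary order; it applies equally to the spatial-frequency MSEs of Tables 1 and 2 and to the position MSEs used later.

```python
import numpy as np

def relative_mse(truth, estimate):
    """Relative MSE: ||estimate - truth||^2 / ||truth||^2 (Equation (27)-style metric)."""
    truth = np.sort(np.asarray(truth, dtype=float))
    estimate = np.sort(np.asarray(estimate, dtype=float))   # naive pairing by sorted order
    return np.sum((estimate - truth) ** 2) / np.sum(truth ** 2)

# Example (illustrative values only):
# relative_mse([0.3, -1.2, 2.0], [0.31, -1.18, 1.97])
```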
Figure 7 compares the performance of the ESPRIT, RELAX and proposed CS algorithms in the passive radar imaging under different SNRs.The first, second and third row represent passive radar image obtained by the ESPRIT, RELAX and proposed CS algorithm, respectively.Note that the RELAX and proposed CS algorithms can provide position estimation with better accuracy than the ESPRIT algorithm under different SNR conditions.It should be emphasized that, among these three super-resolution algorithms, the estimation precision of the proposed CS algorithm is the highest under the same SNR condition. It can be concluded from the MSE results given in Table 3 that, with decreasing SNR, the MSE of the estimated position obtained by the ESPRIT algorithm degrades rapidly.This implies that the noise level has a great effect on its estimation performance.However, the MSEs obtained by the RELAX and proposed CS algorithms are acceptable under different SNRs.Moreover, the proposed CS algorithm outperforms the RELAX algorithm in the sense of estimation accuracy.Even if the SNR is lower than 0 dB, for example, −10 dB, the performance of the proposed CS-based passive radar imaging framework degrades slightly.Therefore, we can conclude that the proposed CS algorithm is suitable for super-resolution passive radar imaging, especially in low SNR situations. Imaging Performance versus Measurements In this experiment, we compare the reconstruction performance of different super-resolution algorithms in different amount of available measurement data.The used measurement data are 50%, 25% and 12.5% of the passive radar echo data, respectively, which can be obtained through two, four and eight times uniform decimation.Suppose the input SNR is 10 dB.The comparative imaging results are shown in Figure 8.The first, second and third rows represent super-resolution passive radar image obtained by the ESPRIT, RELAX and proposed CS algorithms, respectively.It is noticeable that the proposed CS algorithm can achieve more accurate reconstruction than both the ESPRIT and RELAX algorithms, even if the measured data is only 12.5% of the passive radar echo data.It can be noticed from the estimation MSEs given in Table 4 that the ESPRIT and RELAX algorithms are heavily influenced by the amount of measurement data.In contrast, the proposed CS algorithm achieves satisfactory MSE even with very limited amount of measurement data.This validates again that the proposed CS algorithm is very effective for super-resolution radar imaging in limited measurement cases.However, when the used measured data is less than 1/16 of the original passive radar echo data, the imaging performance of the proposed framework will be seriously deteriorated, and corresponding MSE of the proposed CS algorithm is about 0.1625. 
Conclusions This paper proposed a super-resolution passive radar imaging algorithm based on CS using a log-sum penalty function.After establishing the sparse signal model of passive radar imaging, we apply the CS technique to extract the spatial frequencies and scattering amplitudes of different scatterers.Next, the super-resolution passive radar imagery is reconstructed by searching the coordinates of each scatterer.Simulated data experiments have been provided to demonstrate that the proposed CS method outperforms other existing super-resolution algorithms such as ESPRIT and RELAX algorithms under both low SNR and limited measurement data.We have also shown that the proposed CS method allows for only two television illuminators and a small rotation angle to achieve super-resolution passive radar imaging.Furthermore, the proposed CS method can provide a new prospect to solve the off-grid problem. Figure 2 . Figure 2.Flowchart of the proposed compressed sensing (CS)-based method for passive radar imaging. Figure 3 . Figure 3. Position estimation of the scatterer located at (3, 0): (a) using single frequency set; and (b) using two frequency sets. Figure 4 . Figure 4. Position estimation of six scatterers using two frequency sets. Figure 5 . Figure 5. Scattering model of an extended target in the 2D plane. Figure 6 . Figure 6.Passive radar images obtained by using different methods: (a) reconstructed image with the polar format algorithm (PFA); and (b) reconstructed image with the proposed method. Figure 7 . Figure 7. Passive inverse synthetic aperture radar (ISAR) imaging results with different reconstruction methods under different signal-to-noise ratios (SNRs). Figure 8 . Figure 8. Passive ISAR imaging results with different reconstruction methods under different measurements. Table 1 . Estimated spatial frequency and its mean-square-error (MSE) from illuminator of opportunity (IO) (A) using the proposed compressed sensing (CS) method. Table 2 . Estimated spatial frequency and its MSE from IO(B) using the proposed CS method. Table 3 . MSEs of the estimated position using the ESPRIT, RELAX and proposed CS algorithms under different signal-to-noise ratios (SNRs). Table 4 . MSEs of the estimated position using the ESPRIT, RELAX and proposed CS algorithms under different measurements.
4,986.8
2016-11-08T00:00:00.000
[ "Mathematics" ]
Assessment of Micellar Water pH and Product Claims : Micellar waters are widely used skincare cleansing products. It is commonly considered that micellar waters do not need to be rinsed off. Products left on the skin can affect its pH, which typically ranges from 4.1 to 5.8 and plays a vital role in maintaining the integrity of the skin barrier. Our objective was to evaluate the pH of micellar waters and to investigate product claims and differences according to target skin type. The pH of 30 samples of different micellar waters was tested. The products were categorized into groups based on target skin type. Statistical analysis was performed on both quantitative and qualitative data. In addition to descriptive statistics, the Shapiro–Wilk test, Fisher's exact test, and the Kruskal–Wallis test were used, considering a minimum confidence level of 95%. The pH of the tested micellar waters ranged from 4.25 to 7.87. Most samples, 21 (70%), claimed to have a no-rinse formula. Most products, 18 (60%), also reported some type of testing having been performed. There were no statistically significant differences in pH between target skin types, but products "for all skin types" were the most likely to lack rinsing instructions. In conclusion, most micellar water samples had skin-friendly pH levels, and providers should carefully consider product characteristics for patients with skin conditions. Introduction Micellar water is a skincare product that utilizes micellar technology to effectively cleanse the skin of impurities, including makeup, sunscreen, and dirt. It typically consists of water mixed with surfactants (surface-active agents), such as cetrimonium bromide or other gentle detergents [1]. In a micelle, the hydrophilic parts of the surfactant molecules are oriented towards the surrounding water, while the hydrophobic parts are sequestered in the core of the micelle, away from the water. When micellar water is applied to the skin, the micelles can encapsulate hydrophobic compounds (like oils and fats), making it easy to remove them without the need for harsh rubbing or excessive friction [2] (Figure 1). Consumers might use micellar water in a variety of ways: it can be utilized as a standalone cleansing step or as a pre-cleansing step before using a traditional cleanser. Additionally, it serves as a precision correction tool in makeup application, as a tool for refreshment throughout the day, and as a helpful cleansing tool in situations where clean water is not available or its use on the skin is undesirable [3,4].
Products left on the skin for a prolonged time have the potential to affect the skin's barrier function [5]. The barrier function is maintained by the outermost layer of the epidermis called the stratum corneum. An essential component of this barrier is the acid mantle that helps to preserve the skin's natural pH, which is slightly acidic, typically ranging from 4.1 to 5.8 with slight variation depending on location and ethnicity [6]. Interestingly, some research suggests that skin with a pH below 5 tends to be in better overall condition [7]. The skin's acidity is a result of natural acids, such as lactic acid, produced by the skin's microbiome. Disruptions to this natural pH balance can impair the skin's barrier function, resulting in dryness, flakiness, and increased vulnerability to environmental irritants and allergens. Additionally, an imbalanced pH can create a more favorable environment for certain types of bacteria, viruses, and fungi [5]. The available research on micellar water products is extremely limited; studies on cleansing agents mostly focus on traditional cleansing methods such as soap and syndets [8] or do not provide information that would aid readers in selecting the most suitable products [9]. Despite a notable lack of information, these products are very popular among consumers with their market only continuing to grow [10]. This study aimed to evaluate the pH of commercially available micellar waters to verify whether they are slightly acidic and, therefore, consistent with the maintenance of the barrier function of human skin. Additionally, we strived to investigate whether the instructions for use on the labels of these products direct the consumer to forego rinsing and leave the product on the skin for a prolonged time, what testing was declared on the labels, and whether there are notable differences depending on the target skin type.
Materials and Methods This was an observational, analytical, cross-sectional, quantitative study. Products from 23 different brands of micellar water were purchased from a popular cosmetics and household goods store chain in Riga, Latvia (European Union). The researchers manually selected the products with the aim of including at least one sample from every brand that carried micellar water in their product selection at the time of conducting the study (February 2023). Samples included in the study represent a variety of price points and market shares, from local brands to large international conglomerates, brands sold widely in both department stores and in pharmacies: Algotherm®, Avene®, BABE®, Bioderma®, Dzintars®, Eucerin®, Eveline®, Garnier®, Isana®, Isispharma®, La Roche Posay®, L'Oréal®, Lumene®, Madara®, Marence®, Mixa®, Mossa®, Neutrogena®, Nivea®, Seal®, SVR®, Uriage®, and Vichy®. The pH of 30 samples of micellar waters was analyzed, target skin types were identified based on the information provided on the product labels, and subsequently the products were categorized into several different groups. Additionally, we noted the instructions of use, the presence of product testing claims, and whether the labels mention use around the eyes. This study did not require the ethics committee's approval, as it did not involve human subjects but only the biochemical analysis of micellar waters. The samples were paid for using the researchers' personal funds to avoid any conflicts of interest. The pH of all the micellar water samples was measured using the Vernier LabQuest2 pH meter calibrated to a pH of 4, at room temperature (approximately 21 degrees Celsius) without dilution. Each sample was tested 3 times with a time interval of 10 min. For statistical analysis, mean values of the three measurements were used, as the discrepancy between measurements did not exceed the accuracy range of the measuring device (±0.2 pH units).
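As a minimal illustration of the measurement protocol just described, the sketch below averages triplicate readings and flags any sample whose spread exceeds the meter's stated ±0.2 pH accuracy; the readings are invented placeholders, not the study's data.

```python
import numpy as np

# Hypothetical triplicate pH readings for two samples (placeholder values).
readings = {
    "sample_A": [5.41, 5.48, 5.45],
    "sample_B": [7.02, 7.10, 7.05],
}

DEVICE_ACCURACY = 0.2  # pH units, stated accuracy of the meter

for name, values in readings.items():
    values = np.asarray(values)
    spread = values.max() - values.min()
    mean_ph = values.mean()
    flag = "" if spread <= DEVICE_ACCURACY else "  <-- spread exceeds device accuracy"
    print(f"{name}: mean pH = {mean_ph:.2f}, spread = {spread:.2f}{flag}")
```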
Data were stored in a Microsoft Office 365 Excel® spreadsheet and evaluated using the IBM SPSS Statistics 29.0 software platform. The normality of quantitative data was assessed using different methods including the Shapiro-Wilk test, and non-parametric tests were chosen as the assumptions necessary for parametric tests were not met. Since we compared 4 (≥3) independent sample groups, the Kruskal-Wallis H test was utilized to look for differences in pH values between the groups. Qualitative data were assessed using Fischer's Exact test as the data did not fit the conditions of the Pearson Chi-Square test. Summary measures used in descriptive statistics were median, interquartile range, minimum, and maximum values. The test results were considered statistically significant if p < 0.05. Results The pH of the micellar waters ranged from 4.25 to 7.87. Seven (23.3%) had a pH < 5, fourteen (46.7%) micellar waters had a pH between 5 and 5.99, three (10.0%) had a pH between 6 and 6.99, and seven (23.3%) had a pH above 7.
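The statistical workflow described in the Methods above can be sketched with SciPy. The group values and contingency counts below are hypothetical placeholders rather than the study's data, and the R×C exact test the authors report is only approximated here.

```python
import numpy as np
from scipy import stats

# Hypothetical pH values per target-skin-type group (placeholders); the group
# sizes mirror the paper (9, 14, 3 and 4 samples).
groups = {
    "all skin types":       [5.2, 5.5, 6.9, 5.3, 7.1, 5.4, 5.5, 6.2, 5.2],
    "sensitive":            [5.3, 5.5, 7.0, 5.4, 5.6, 5.3, 7.2, 5.5, 5.4, 5.6, 5.3, 5.5, 7.1, 5.4],
    "dry/dehydrated":       [5.1, 5.4, 6.1],
    "normal/comb. to oily": [4.7, 5.2, 5.3, 6.3],
}

# Normality check per group (Shapiro-Wilk); non-normal data motivates the
# non-parametric Kruskal-Wallis H test used in the paper.
for name, values in groups.items():
    w, p = stats.shapiro(values)
    print(f"Shapiro-Wilk, {name}: W = {w:.3f}, p = {p:.3f}")

h, p = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis across groups: H = {h:.3f}, p = {p:.3f}")

# Effect size for a categorical association (Cramer's V) from a chi-square
# statistic; counts are illustrative, and the paper's p-value came from an
# exact test rather than the chi-square approximation.
def cramers_v(table):
    chi2, _, _, _ = stats.chi2_contingency(table, correction=False)
    return np.sqrt(chi2 / (table.sum() * (min(table.shape) - 1)))

rinse_table = np.array([[3, 6],    # all skin types: no-rinse claim vs. no instructions
                        [13, 1],   # sensitive
                        [2, 1],    # dry/dehydrated
                        [3, 1]])   # normal/combination to oily
print(f"Cramer's V = {cramers_v(rinse_table):.2f}")

# Note: SciPy's fisher_exact handles only 2x2 tables; the 4x2 exact test
# reported in the paper would need other software (e.g. R's fisher.test).
```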
Eleven (36.6%) samples tested above the pH of normal skin (Table 1). Upon investigating the intended audience outlined on the labels of micellar waters, we identified five different target skin types (normal, combination, oily, dry, and sensitive). Based on the instructions on the labels, we categorized the tested products into four groups: 9 (30%) samples were intended for all skin types, 14 (47%) for sensitive skin, 3 (10%) for dry/dehydrated skin, and 4 (13%) for normal/combination to oily skin (Table 1). Both products labeled "for all skin types" and those "for sensitive skin" consider individuals with sensitive skin as part of their target audience, which adds up to 23 (77%) of the tested micellar waters. Of the 30 samples, the labels of 21 (70.0%) samples claim to have a no-rinse formula, while the rest, 9 (30.0%), offer no instructions. Regarding the type of testing brands claimed to have conducted on their products, 10 (33.3%) claimed to have been both dermatologically and ophthalmologically tested, 8 (26.7%) were only dermatologically tested, and 12 (40.0%) did not report any testing having been performed. Most of the product labels, 24 (80%), contained information stating that use around the eyes is acceptable. There was no statistically significant difference in the distribution of pH values across categories of target skin types (Kruskal-Wallis test, H = 0.92, p = 0.822). The two groups with the largest sample size were also compared and the result remained the same. The group targeting all skin types had a median pH value of 5.46 (IQR = Q1-Q3 = 5.22-6.93), the sensitive skin group had a median value of pH 5.50 (IQR 5.33-7.02), the dry/dehydrated group had a median value of pH 5.37 (IQR 5.11-6.06), and the normal/combination to oily group had a median value of pH 5.26 (IQR 4.66-6.33) (Figure 2).
The products for sensitive skin tended to have directions claiming that rinsing is not necessary. Products for all skin types were the most likely to have no rinsing directions mentioned on the label; this association was statistically significant (Fischer's Exact test, df = 3, n = 30, p = 0.046) and strong (Cramer's V = 0.52) (Table 2). Products for sensitive skin tended to not have any testing declared on the label. Products for dry/dehydrated skin tended to be dermatologically tested. The association between target skin type and testing reported on the label was not statistically significant (Fischer's Exact test, p = 0.241). There was no statistically significant difference in the distribution of pH values across the different product testing groups (Kruskal-Wallis test, H = 2.315, p = 0.314). Products for sensitive skin tended to be suitable for use around the eyes, but no statistically significant associations were found between product labels mentioning use around the eyes and the target skin types (Fischer's Exact test, p = 0.141). Similarly, no statistically significant associations were found between use around the eyes and the pH values (Kruskal-Wallis test, H = 1.075, p = 0.300). Discussion Micellar water is a gentle skincare product using micellar technology to cleanse the skin without harsh rubbing, thus making it an attractive option for those with sensitive skin. Prolonged exposure to skincare products as well as the act of cleansing itself can affect the skin's pH and hence also its barrier function [5]. In this study, the pH of 30 micellar water samples was measured and the claims made on product packaging were critically assessed. While none of our samples had a very alkaline pH, almost a fourth had a pH above 7, and a third tested above the pH of 6, many of those from groups whose target skin type includes people with sensitive skin. This could indicate that the pH of the solution is not always intended to match that of the physiological skin pH but rather that of the eyes, since micellar waters are often used for eye makeup removal. The physiological pH of conjunctival tear fluid ranges from 6.30 to 7.23 and that of the conjunctival mucous discharge from 7 to 8 [11]. These are essential in maintaining the health and functionality of the ocular system. However, in our study, most products did mention use around the eyes and this did not correlate with the tested product pH. Although similar results were observed in other similar studies investigating liquid cleansers, in a study looking at children's soap the results showed a pH range of 4.40-7.90 in syndets and liquid soap with a single outlier of 9.90 in the category of specifically antibacterial liquid soaps [8].
In our study, there were no substantial differences found in pH values between different target skin types, which leads us to believe that the pH of the solution might not be the main aspect of product formulation that is taken into account when determining who will benefit from it the most.Alternatively, it might be relevant to assess the types of surfactants used in the formula, the presence of hydrating and soothing ingredients, and the absence of common irritants such as fragrant components and colorants in micellar water products.This idea is reinforced by the results found in the previously published literature showing some cleansing agents with a more acidic pH to be more irritating than their neutral counterparts, suggesting the importance of considering the interaction of the ingredients with the skin under skin pH conditions [12]. Interestingly, most of the tested samples were confirmed to claim to have a no-rinse formula; a trend toward these claims was noted in the category of micellar waters aimed at sensitive skin.Patients with inflammatory skin conditions such as atopic dermatitis are known to sometimes find washing with water to be irritating, especially in areas with hard water, which leads to increased surfactant deposition when combined with conventional cleansing agents [4].It could be that products marketed as no-rinse formulas are an attractive option to such patients as their use can help avoid using tap water. When it comes to whether our results can be extended to other markets, most of the micellar waters examined in this study are imported into the Latvian market.While many of these products are associated with well-known global brands, it remains uncertain whether their chemical composition and pH levels are consistent across international markets.Nevertheless, since the study specifically chose brands that are readily obtainable in common stores both within and beyond Latvia and evaluated a comparatively wide range of different brands, it is plausible that the findings can be extended to other European countries and the global market.Ideally, these outcomes should be reinforced by conducting further research in other geographical areas. This research underscores that there might be a relative pH inadequacy in numerous products marketed as suitable for sensitive skin.Healthcare providers should be mindful of the qualities of products recommended to patients, especially those with skin conditions that increase their sensitivity to irritants. Limitations of this study include the unbalanced sample sizes of micellar waters by skin type, which makes evaluating the correlations between the pH and target skin types of products challenging.Additionally, the sample size of 30 is rather low considering the overall number of products available globally. Conclusions Most of the micellar water samples were within the physiological pH of healthy skin, but a third of the tested products exceeded this level.The packaging of the micellar water products contained different instructions: most claimed to have a no-rinse formula, most were intended for sensitive skin, and the majority encouraged the use around the eye area.The only significant association was found to be between the product labeling "for all skin types" and the lack of rinsing instructions on the packaging. 
Healthcare providers should consider the characteristics of products to be recommended for patients with skin diseases in order to achieve optimal treatment results. Product pH is not the only characteristic that should be looked at, so further research on the differences between available micellar water products is needed to be able to recommend the most suitable products for patients' individual needs. Figure 1. Micellar structure and composition in an aqueous solution. Figure 2. Variation in pH according to target skin type. Table 1. Micellar waters according to their target skin type and tested pH. Table 2. Rinsing directions on the label of micellar water depending on target skin type.
4,790.8
2024-07-11T00:00:00.000
[ "Medicine", "Environmental Science", "Chemistry" ]
Microbial reduction of organosulfur compounds at cathodes in bioelectrochemical systems Organosulfur compounds, present in e.g. the pulp and paper industry, biogas and natural gas, need to be removed as they potentially affect human health and harm the environment. The treatment of organosulfur compounds is a challenge, as an economically feasible technology is lacking. In this study, we demonstrate that organosulfur compounds can be degraded to sulfide in bioelectrochemical systems (BESs). Methanethiol, ethanethiol, propanethiol and dimethyl disulfide were supplied separately to the biocathodes of BESs, which were controlled at a constant current density of 2 A/m2 and 4 A/m2. The decrease of methanethiol in the gas phase was correlated to the increase of dissolved sulfide in the liquid phase. A sulfur recovery, as sulfide, of 64% was found over 5 days with an addition of 0.1 mM methanethiol. Sulfur recoveries over 22 days with a total organosulfur compound addition of 1.85 mM were 18% for methanethiol and ethanethiol, 17% for propanethiol and 22% for dimethyl disulfide. No sulfide was formed in electrochemical nor biological control experiments, demonstrating that both current and microorganisms are required for the conversion of organosulfur compounds. This new application of BES for degradation of organosulfur components may unlock alternative strategies for the abatement of anthropogenic organosulfur emissions. Introduction Organosulfur compounds (OSCs) are naturally present in various environments, including oceans, marine estuaries, volcanos and salt marshes. These organsulfur compounds play an important role in the natural global sulfur cycle [1,2]. However, anthropogenic emissions of e.g. methanethiol and dimethyl disulfide account for 30% of the annual global emissions with 3222 GgS/year. Methanethiol, dimethyl disulfide and other compounds like ethanethiol and propanethiol are found in the pulp and paper industry, rayon and cellulose industry and (bio)gas streams [1]. Removal of organosulfur compounds from waste streams is required due to the low odor thresholds, high toxicity and their corrosive nature. The state-of-the-art treatment strategy for conversion and removal of organosulfur compounds is the oxidation to insoluble disulfides. This process, known as the Merox process, is in many cases economically unfavorable due to complex processing schemes, high OPEX and CAPEX and low efficiencies [3]. In addition, this process requires chelating chemicals, which can harm the environment. Lacking a suitable treatment method, organosulfur compounds are typically incinerated contributing to SO 2 emissions. The emissions of SO 2 are strictly regulated and are becoming more stringent over the years. Therefore, new cost-effective, environmental friendly strategies for the removal of organosulfur compounds are desired. In general, biological processes are considered environmentally friendly as they require ambient temperatures and pressures and do not require chemical catalysts. Methanethiol and dimethyl disulfide can be converted under aerobic conditions in bio trickling systems [4]. However, aerobic degradation only occurs at low concentrations and degradation rates are low. On the other hand, anaerobes, such as methanogenic archaea, are known to tolerate much higher thiol concentrations and, therefore, represent an alternative for the treatment of thiol containing waste streams. 
Several studies report successful biological reduction of methanethiol and dimethyl disulfide to methane and hydrogen sulfide by methylotrophic methanogens [5][6][7]. The degradation of ethanethiol and propanethiol appears to be much more challenging. Degradation in anaerobic bioreactors has, to the best of our knowledge, not been reported. Leerdam et al. were unable to convert these organosulfur compounds in anaerobic batch systems [8]. Only two studies report an enhanced production of ethane and propane in anoxic sediments when ethanethiol and propanethiol are supplied [9,10]. The conversion efficiencies were low, less than one percent of the added substrates, and Oremland et al. [9] suggested that conversions of ethanethiol were solely a result of co-metabolism and that growth on ethanethiol as sole substrate was not possible. Nevertheless, these studies suggest that biological conversion mechanisms exist. Bioelectrochemical systems (BESs) are an emerging biotechnology with a wide range of applications, e.g. electricity generation, metal recovery, chemicals synthesis, and wastewater treatment [11][12][13][14]. In bioelectrochemical systems, microorganisms catalyze anodic oxidation reactions or cathodic reduction reactions. Reaction rates can be manipulated by controlling electrode potential or current density. Complete reduction of organosulfur compounds at a biocathode would result in the formation of methane and sulfide. These products have the advantage that, in the (bio)gas industry, methane can be used directly and elemental sulfur can be recovered from the hydrogen sulfide with existing technologies. In this study, we demonstrate that methanethiol, ethanethiol, propanethiol and dimethyl disulfide can be converted at biocathodes, and that degradation requires both microorganisms and electricity. Bioelectrochemical cell setup Bioelectrochemical experiments were performed in 4 identical anaerobic reactors (H cells [15]). Cells consisted of two 150 mL chambers separated by a cation exchange membrane (8.02 cm² CEM, fumasep® FTCM-E, Fumatech, Germany). The experiment was performed under continuous stirring at 350 RPM. Gas produced at the cathode was collected in 1 L gas bags (Cali-5-Bond™, Calibrated Instruments Inc., USA) with an initial gas volume of 35 mL. Graphite felt electrodes (0.4 cm × 2 cm × 15 cm, CTG Carbon GmbH, Germany) connected to a platinum current collector were used as both anode and cathode electrodes. A 3 M KCl Ag/AgCl reference electrode (+210 mV vs SHE, Prosense, Oosterhout, the Netherlands) was inserted in the cathode chamber. The current was controlled with a potentiostat (Ivium, the Netherlands). Cells were operated at room temperature (22–25 °C). Medium and inoculum A bicarbonate medium similar to the medium used for biodesulfurization processes under haloalkaline conditions [16] was used during all experiments. The medium contained 49 g/L NaHCO3, 4.42 g/L Na2CO3 and 0.1 mL/L nutrient solution containing N, P and trace elements (Paqell B.V., The Netherlands). The medium was flushed with N2 for 20 min, resulting in a pH of 8.5. The anolyte contained the same carbonate buffer with 84.5 g/L potassium hexacyanoferrate(II) trihydrate. Hexacyanoferrate(II) is a typical substrate in bioelectrochemical systems, used as the electron donor at the anode to avoid crossover of produced oxygen from the anode to the cathode, which would occur if water were oxidized at the anode.
Cathode chambers were inoculated with a mixture of biomass selected based on its acclimation to anaerobic conditions, high salt concentrations, the presence of organosulfur compounds and the presence of methanogens. The inocula were obtained from (1) a chain elongation reactor fed with high methanol concentrations (250 mM) (total nitrogen (TN) 1.47 g/L) [17], (2) a granular anaerobic reactor operating at a high salt concentration (20 g Na/L) [18] (TN 4.3 g/L), (3) a digester for municipal wastewater treatment sludge (Ede, The Netherlands) combined with anaerobic sludge treating wastewater from the paper industry (Eerbeek, The Netherlands) (4.4 g/L TCOD), and (4) sulfide oxidizing biomass adapted to the presence of 0.5-2.5 mM dimethyl disulfide (TN 0.6 g/L) [19]. The biomass mix was obtained by combining 2 mL of the different inocula. 1 mL of the biomass mix was added to the cells during startup. Bioelectrochemical cell operation For initial proof-of-principle experiments, sludge from the paper industry and municipal wastewater treatment plant was used to inoculate two cells. The reactors were started with 1 mM methanol (150 μmol) and 0.1 mM methanethiol (15 μmol) at the biocathodes and were operated for one week to stimulate initial growth and to acclimate the biomass. Subsequently, medium was replaced and methanol (150 μmol) and methanethiol (22.5 μmol to cell 1 and 15 μmol to cell 2) were supplied once. Cells were galvanostatically controlled at 2 mA (2 A/m² relative to the projected surface area of the graphite felt cathode). Gas phase concentrations of organosulfur compounds and sulfide concentrations in solution were monitored for 5 days. In the next experimental run, we studied the degradation of all four organosulfur compounds. Each organosulfur compound was supplied to a biocathode in a pre-experimental run 13 days prior to the start of the experimental run, allowing microorganisms to adapt to experimental conditions. 1 mL of biomass mix (see section 2.2) and 150 μmol methanol were added to all cells. Upon the detection of methane, indicating biological activity, 15 μmol of methanethiol, ethanethiol, propanethiol and dimethyl disulfide were added to individual cells every weekday. Cells were galvanostatically controlled at 2 mA, obtaining a current density of 2 A/m² normalized to the projected surface area of the graphite felt electrode. Medium from the pre-experimental run was replaced at the start of the experimental run. Suspended biomass in the medium was collected by centrifuging for 15 min at 5000 rpm and was returned to the cells. Biomass mix (1 mL) and methanol (150 μmol) were added to the cells on day 0. Cells remained under galvanostatic control (2 mA, 2 A/m²) and 15 μmol of organosulfur compounds were added daily. After 12 days, organosulfur compound additions were doubled and the current density was doubled to 4 A/m². The cells were operated in this mode for 10 days. Electrochemical, biotic and abiotic control experiments Electrochemical control experiments were performed in the same electrochemical setup as described in 2.3 with a current density of 2 A/m². New graphite felt electrodes without microorganisms were used. Organosulfur compounds (15 μmol) were added individually to the cells, which were operated for 24 h. Additional electrochemical experiments were performed with 15 μmol dimethyl disulfide, 150 μmol methanol and biomass from the different sources added to individual cells. The cells were operated until sulfide production was shown in each of the cells.
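For orientation, the dosing amounts reported above can be converted into catholyte concentrations for the 150 mL cathode chamber; the short sketch below assumes that each addition dissolves fully in the catholyte.

```python
CATHOLYTE_VOLUME_L = 0.150  # 150 mL cathode chamber

def concentration_mM(amount_umol, volume_L=CATHOLYTE_VOLUME_L):
    """Convert an added amount in micromoles to a concentration in mM,
    assuming everything ends up dissolved in the catholyte."""
    return amount_umol * 1e-6 / volume_L * 1e3

print(concentration_mM(15))    # single 15 umol dose        -> 0.1 mM
print(concentration_mM(278))   # cumulative thiol-sulfur    -> ~1.85 mM
print(concentration_mM(555))   # cumulative DMDS-sulfur     -> ~3.7 mM
```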
Biotic control experiments with hydrogen as electron donor were performed in 250 mL serum flasks. At the start of the experiment the headspace was replaced with a CO2/H2 (80:20) gas mixture at 1.3 bar. Flasks contained 150 mL medium and methanol (150 μmol), and were inoculated with 1 mL biomass mix. The start pH was between 7.7 and 8. Upon the production of methane, each serum flask was supplied with 15 μmol of one single organosulfur compound. Flasks were placed in an incubator and mixed at 150 RPM at 22–25 °C for 30 days. Abiotic control experiments with methanethiol were performed in 250 mL serum flasks under 100% nitrogen at atmospheric pressure. Individual flasks were filled with 150 mL medium and water and 0.1 mM methanethiol. Analytical techniques and calculations We used organosulfur gas phase analysis to validate the use of sulfide as indicator for organosulfur compound degradation. Gas phase concentrations of methanethiol and dimethyl disulfide were analyzed as described by Roman et al., 2015 [20]. Analysis of organosulfur compounds to show their degradation is challenging as (i) organosulfur compound analyses are prone to errors due to their volatile nature, and (ii) a decrease in organosulfur concentrations does not necessarily indicate conversion, as it could also indicate leakage of organosulfur compounds from the system. Total sulfide (S2−, HS− and H2S) in the liquid was measured using the methylene blue method (Hach LCK 653), in which sulfide is converted to H2S. To avoid measurement errors due to the alkaline nature of the samples, dilutions were performed with 0.5 M sulfuric acid. The method was tested for interference with organosulfur compounds by measuring solutions of sulfide with and without organosulfur compounds, and no interference was observed. Sulfate and thiosulfate were analyzed with ion chromatography (see SI-1). Gas chromatography was used to measure CH4, N2, O2, CO2 and H2 using the methods described by Liu et al., 2017 [21]. Next Generation Sequencing was used to analyze the microbial community. The Powersoil DNA isolation kit was used for DNA extractions. DNA amplification via PCR and sequencing were performed as described by Takahashi et al. [22]. Coulombic efficiency was calculated as CE = (n · F · V · [OSC-S]) / (I · t), where V is the catholyte volume (0.15 L), [OSC-S] is the concentration of sulfur added (mol/L), n is the number of electrons for reduction of organosulfur compounds to CH4 and HS− (see Eq. (1)–(4)), F is the Faraday constant (96485 C/mol), I is current (A) and t is time (s). 3. Results and discussion Methanethiol degradation in BES Gaseous methanethiol and its oxidation product dimethyl disulfide, expressed as organosulfur compounds, and dissolved sulfide were analyzed over time at the biocathodes of two BES cells. Organosulfur compounds were almost completely removed from the gas phase in 5 days. The gas phase concentration decreased by 95% in cell 1, starting from 0.13 μmol, and by 82% in cell 2, starting from 0.19 μmol, while the sulfide in the liquid increased from <0.5 to 9.1 μmol (61 μM) in cell 1 and to 9.3 μmol (62 μM) in cell 2 (Fig. 1). A considerable part of the supplied methanethiol was converted and recovered as sulfide. After 5 days, 41% of the 22.5 μmol (0.15 mM) methanethiol supplied to cell 1 and 64% of the 15 μmol (0.1 mM) methanethiol supplied to cell 2 had been recovered as sulfide in the liquid phase. The fate of the remaining methanethiol fraction is unclear.
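A minimal sketch of the coulombic-efficiency calculation as written above, assuming a constant applied current; the numbers in the example are illustrative only, and the electron number n must follow the reduction stoichiometry of Eq. (1)–(4), which are not reproduced here.

```python
F = 96485.0  # Faraday constant, C/mol

def coulombic_efficiency(osc_s_molar, n_electrons, current_A, time_s, volume_L=0.150):
    """CE = n * F * V * [OSC-S] / (I * t): the fraction of the supplied charge
    that complete reduction of the added organosulfur-sulfur would require.
    Assumes a constant current; n must match the reduction stoichiometry."""
    charge_needed = n_electrons * F * volume_L * osc_s_molar
    charge_supplied = current_A * time_s
    return charge_needed / charge_supplied

# Illustrative numbers (assumed, not the paper's balance): 0.1 mM of
# methanethiol-sulfur reduced with 2 electrons at 2 mA over 5 days.
ce = coulombic_efficiency(osc_s_molar=1e-4, n_electrons=2,
                          current_A=2e-3, time_s=5 * 24 * 3600)
print(f"coulombic efficiency = {ce:.2%}")
```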
However, there are several possible reasons why not all methanethiol was recovered as sulfide. First, a fraction of the methanethiol could be unconverted and still present in the liquid phase, as there was still some methanethiol present in the gas phase. Second, methanethiol could be lost from the system through diffusion through, or adsorption onto, the membrane, tubing, electrodes and sampling ports. However, this was not quantified. Finally, the measured sulfide may underestimate the produced sulfide, since part of the sulfide could be used by microorganisms for growth. Abiotic controls, without electrical current, showed a minor decrease of gaseous organosulfur compounds of 1.1% in medium and 3.1% in Milli-Q (see SI-2), demonstrating that no degradation of methanethiol occurred in the absence of bacteria and electrical current. Sulfide formation was used in further experiments to indicate organosulfur compound conversions, since sulfide formation provides a convincing proof that organosulfur compounds are converted. Degradation of organosulfur compounds Addition of organosulfur compounds over 22 days resulted in a total addition of 278 μmol (1.9 mM) sulfur for methanethiol, ethanethiol and propanethiol (Fig. 2a) and 555 μmol (3.7 mM) sulfur for dimethyl disulfide (Fig. 2b). Each biocathode, operating with a different organosulfur compound, showed a similar trend in the formation of sulfide. Sulfide accumulated to 45 μmol (0.30 mM) for methanethiol, 44 μmol (0.29 mM) for ethanethiol, 42 μmol (0.28 mM) for propanethiol and 107 μmol (0.71 mM) for dimethyl disulfide during the experiment. The formation of sulfide in each of the biocathodes demonstrates that all organosulfur compounds were converted. Sulfide formation showed a faster increase with time after the current density and organosulfur compound additions were doubled, indicating that organosulfur compound reduction continued at higher rates. The increased levels of organosulfur compounds did not inhibit the microbial community, as sulfide production continued at an increased rate. Sulfur recoveries as sulfide were 18% for methanethiol, 18% for ethanethiol, 17% for propanethiol and 22% for dimethyl disulfide, compared to the total added sulfur in the form of organosulfur compounds. For methanethiol, this recovery is lower than the sulfur recovery obtained in the initial 5 day experiment with methanethiol (section 3.1), in which recoveries of 41% and 64% were obtained. It is likely that this difference in sulfur recovery is related to the difference in operation and experimental run time. For example, the longer experimental run (22 days) received 11 additions of methanethiol, whereas the 5 day experimental run received only one addition. A more detailed study to close sulfur balances is needed in future studies. 17% of the sulfur added in the form of methanethiol was detected as sulfate, while sulfate recovery remained lower than 5% in the experiments with other organosulfur compounds. The higher sulfate production for the methanethiol-fed biocathode might be a result of some oxygen intrusion found at this cathode (see SI-3). The presence of oxygen in this cell may have influenced the conversion mechanisms, as strictly anaerobic conditions could not be maintained. However, with the measured cathode potentials lower than −1 V vs. Ag/AgCl, it is expected that oxygen was quickly reduced. The formation of methane within the cell indicates that anaerobic processes were still in place.
The cathode potentials in all tests ranged between −1.15 and −1.43 V vs. Ag/AgCl and slowly decreased throughout the experiment, indicating that hydrogen formation took place. The main element of the medium consists of a carbonate/bicarbonate buffer. Under the operating conditions, methane can be formed biologically from this inorganic carbon source. Hydrogen and methane were also detected in the gas phase. The coulombic efficiency (the part of the total charge used for organosulfur compound reduction) was 1.7% for methanethiol, 3.4% for ethanethiol and 5.0% for propanethiol and dimethyl disulfide, under the assumption that the available organosulfur compounds were completely reduced towards methane and sulfide (see Eq. (1)–(4)). In our experiments, we did observe methane formation, but it is also possible that other products than methane, e.g. CO2, ethane or propane, were formed (not included in the calculation of electron efficiency). Other electron sinks in this process were hydrogen and methane (see SI-3), and growth of biomass (not quantified). The effect of thiol concentration and current density on thiol degradation was not further studied in this manuscript and will be the topic of further research. Organosulfur degradation requires a combination of electricity and microorganisms Electrochemical control experiments were performed to ensure that the degradation of organosulfur compounds was not merely a result of the applied current. The experiments showed no sulfide formation after 24 h of operation and indicate that microbial activity is required for the conversion of the tested organosulfur compounds. Cathode potentials during the control experiments ranged between −1.10 and −1.34 V vs. Ag/AgCl, similar to the cathode potential in the bioelectrochemical runs. During the bioelectrochemical experiments, hydrogen was produced at the cathode, which could be used as an alternative electron donor for organosulfur compound degradation. Therefore, biotic control experiments were performed with 20% hydrogen in the gas phase, without electrodes. Surprisingly, during 30 days, no sulfide was detected, demonstrating that both current and biomass are required for the degradation of the organosulfur compounds. Even though biodegradation of methanethiol [5][6][7][8][23][24] and dimethyl disulfide [6][7][8] in anaerobic environments has been frequently reported, this was not shown in our control experiments, which may be the result of different conditions, such as the haloalkaline medium, the selected inocula and the experimental run time. Oremland et al. studied the degradation of ethanethiol in the presence and absence of hydrogen; the obtained results were contradictory and showed both increased and decreased degradation in the presence of hydrogen [9]. The exact role of electrical current, and potentially hydrogen, in organosulfur degradation is still unclear and needs to be elucidated in further research. Microbial composition in organosulfur compound degrading BES Microbial community analyses were performed on the bioelectrodes collected at the end of the experiment (Section 3.2). A large similarity was found between the cathodes fed with ethanethiol, propanethiol and dimethyl disulfide (Fig. 3). Dominant families on the cathodes were Halomonadaceae and Clostridiaceae families 2 and XIV. The cathode fed with methanethiol showed a lower abundance of the two families Clostridiaceae 2 and XIV, while the presence of Rhodobacteraceae was increased.
The difference in microbial community on this cathode compared to the others potentially resulted from oxygen intrusion into this cell (see SI-3). The members of the families Halomonadaceae and Clostridiaceae 2, dominant in all cells, were also observed in a haloalkaline sulfide oxidizing bioreactor in the presence of methanethiol, ethanethiol and propanethiol [25]. Clostridium, one of the genera within the family Clostridiaceae, is well known to be electroactive [26]. Since a mixture of inocula was used to study organosulfur degradation, experiments were performed in the same bioelectrochemical test setup to evaluate whether each separate inoculum had the capacity to degrade organosulfur compounds. Here, dimethyl disulfide was used. The cell inoculated with a mix of paper industry sludge and municipal wastewater treatment plant sludge showed sulfide formation after 4 days. Sludge from the chain elongation reactor showed sulfide formation after 5 days, and sludge from the high salinity reactor and sludge adapted to the presence of organosulfur compounds both showed sulfide formation after 18 days. Degradation of dimethyl disulfide was thus possible with each inoculum source separately. Outlook Thiols and hydrogen sulfide are often both present in gaseous waste streams. Biodesulfurization technologies focused on the recovery of elemental sulfur from hydrogen sulfide suffer from the presence of organosulfur compounds [27]. The conversion of these organosulfur compounds towards hydrogen sulfide would in these cases be an advantage, as this can be detoxified together with the hydrogen sulfide and removed in existing, well-known, efficient treatment plants. This research presents a proof of principle for the reduction of methanethiol, ethanethiol, propanethiol and dimethyl disulfide at biocathodes towards sulfide. Various aspects need to be considered to further study and develop this new application of BES. Coulombic efficiencies in this experimental design were low and can be improved by limiting (i) the formation of methane resulting from carbon dioxide reduction and (ii) hydrogen formation at the electrode. When calculating the coulombic efficiency, we assumed a complete reduction towards methane; however, metabolic pathways and products were not further identified. Further design questions involve defining reaction kinetics and microbial growth rates, evaluating long-term process stability, and clarifying the role of methanol as co-substrate. The economic feasibility of this system will largely depend on the maximum attainable reduction rates and microbial toxicity limits. Regardless of the remaining research questions, this application of BES demonstrated a new potential strategy to biologically convert organosulfur compounds to sulfide, a product for which many efficient sulfur recovery technologies are available. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
5,017.6
2020-01-01T00:00:00.000
[ "Chemistry" ]
Bidirectional biomimetic flow sensing with antiparallel and curved artificial hair sensors Background: Flow stimuli in the natural world are varied and contain a wide variety of directional information. Nature has developed morphological polarity and bidirectional arrangements for flow sensing to filter the incoming stimuli. Inspired by the neuromasts found in the lateral line of fish, we present a novel flow sensor design based on two curved cantilevers with bending orientation antiparallel to each other. Antiparallel cantilever pairs were designed, fabricated and compared to a single cantilever based hair sensor in terms of sensitivity to temperature changes and their response to changes in relative air flow direction. Results: In bidirectional air flow, antiparallel cantilever pairs exhibit an axially symmetrical sensitivity between 40 μV/(m s−1) for the lower air flow velocity range (between ±10–20 m s−1) and 80 μV/(m s−1) for a higher air flow velocity range (between ±20–32 m s−1). The antiparallel cantilever design improves directional sensitivity and provides a sinusoidal response to flow angle. In forward flow, the single sensor reaches its saturation limitation, flattening at 67% of the ideal sinusoidal curve which is earlier than the antiparallel cantilevers at 75%. The antiparallel artificial hair sensor better compensates for temperature changes than the single sensor. Conclusion: This work demonstrated the successive improvement of the bidirectional sensitivity, that is, improved temperature compensation, decreased noise generation and symmetrical response behaviour. In the antiparallel configuration, one of the two cantilevers always extends out into the free stream flow, remaining sensitive to directional flow and preserving a sensitivity to further flow stimuli. Introduction Biological lateral line organ Flow sensors in nature often have a morphological polarity, such as the hair cell sensors in the lateral line of fish [1], in jellyfish [2], arthropods [3,4] and crickets [5][6][7][8], as well as the hair cells in audition of humans [9]. The lateral line of a fish is an intricate flow sensing network of individual sensors, called neuromasts, which are located on the surface and subsurface on the body of the fish. Over millions of years, two different types of neuromasts have evolved, canal and superficial neuromasts, which encode the pressure gradient over the body's surface [10] and velocity of the flow [11], respectively. Whereas superficial neuromasts are located at the outside of the skin being in direct contact to the water, canal neuromasts form a part of an internal canal system located beneath the skin in which water streams in by entering external openings. Although canal and superficial neuromasts vary in their anatomical structure, both neuromast types are similar in their functional principle: water flows into the canal or around the skin and bends a jellylike cupula protruded into the fluid. Canal neuromast cupulae are typically hemispherical with a diameter in the hundreds of micrometers, whereas the much smaller superficial neuromast cupulae are bullet-shaped and typically around 50-100 μm in height and 10 μm in width [12]. Figure 1 illustrates the anatomical structure of the neuromast. Neuromasts contain bundles of hair cell stereovilli which are deflected mechanically by a flow stimulus which triggers a membrane potential shift. 
The neuromast has a directional sensitivity which is determined by an axis of orientation of the stereovilli and kinocilium of the individual hair cells. The stereovilli increase in stepped length up to the tallest kinocilium, as shown in Figure 2. This morphology, as well as the presence and arrangement of the various tip links between the stereovilli and the kinocilium, generates a directional sensitivity and defines a best sensing direction. Indeed, as the flow bends the graded-height stereovilli, it deflects the cilia, allowing the mechanoreception of the flow stimulus to be transduced as a depolarisation in the membrane, as pictured in Figure 2. The opposing flow pushes the kinocilium towards the stereovilli, restricting their movement, relaxing the linkages and thus inhibiting the corresponding action potential [9]. Therefore, depending on the flow mechanical excitation direction, hair cells within the same neuromast assume a bending orientation that is antiparallel (i.e., opposite orientation) to each other, thus generating an axis of sensitivity or best direction. Depolarising events are associated with decreasing membrane electrical resistance, while hyperpolarization responses are associated with an increase in membrane electrical resistance [15]. A new flow stimulus which occurs on top of already existing flow is difficult to decouple using one sensor orientation. This is overcome in nature so that an oscillatory flow imposed in addition to a steady flow can be sensed using two sensory cells with opposite excitatory stimulus best directions [16]. The fish lateral line then further deploys these sensors to decouple multiple flow stimuli by using an array of neuromast sensors [17]. Another key feature of the directional sensitivity is that if the incoming flow stimulus is tested at varying angles to the biological hair cell, the response, as measured on the afferent nerve, is cosine-like in relation to the stimulus angle [1], as illustrated in Figure 1. The signal detection capability of superficial neuromasts strongly depends on the hydrodynamics of the boundary layer and the mechanical properties of the cupula [11]. Caused by its viscosity, water close to a body's surface moves slower than the flow stimulus, which creates a spatial gradient of flow velocity in the boundary layer. The boundary layer thickness is commonly defined as the distance from a surface where the velocity is nearly equal to the free stream velocity. It increases with distance from the leading edge and with decreasing flow velocity [18]. In the theoretical approach described by McHenry et al. [11], the deflection of the cupula structure in flow was modelled by considering the mechanical contributions of the boundary layer, the components of the cupula morphology and the fluid-structure interaction. By treating the cupula structure as a cylindrical beam, the forces generated by the flow upon and within the cupula were calculated and cupula deflection was described as a function of its height above the skin [11]. The first attempts to build artificial flow sensors were made by designing and developing hot-wire anemometry (HWA) sensors [31]. The HWA principle is based on a fluid flowing over a heated wire which causes heat loss due to convective currents [25]. As the resistance of the wire depends on its temperature, the cooling rate gives information about the flow velocity.
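The cosine-like directional tuning described above can be illustrated with a small, idealized model of two hair cells whose best directions are antiparallel; this is a toy model, not a fit to measured afferent responses.

```python
import numpy as np

def hair_cell_response(theta_deg, best_direction_deg, max_response=1.0):
    """Idealized cosine-like directional tuning of a single hair cell:
    positive (excitatory) for flow toward the best direction, negative
    (inhibitory) for flow against it."""
    theta = np.deg2rad(theta_deg - best_direction_deg)
    return max_response * np.cos(theta)

angles = np.arange(0, 360, 45)
cell_a = hair_cell_response(angles, best_direction_deg=0)     # one orientation
cell_b = hair_cell_response(angles, best_direction_deg=180)   # antiparallel partner
for ang, a, b in zip(angles, cell_a, cell_b):
    print(f"{ang:3d} deg: cell A {a:+.2f}, cell B {b:+.2f}")
```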
Researchers from the Micro and Nanotechnology Laboratory of the University of Illinois at Urbana-Champaign, Chen and Liu [32], developed a new type of flow sensor based on the HWA principle, realised by using an efficient microfabrication process which combines surface micromachining and a 3D assembly method [28]. Two support beams, each being 2.7 μm thick, elevate the thermal wire out of the plane up to only 1 mm, hence reducing interference to the flow. The thermal wire is made of platinum or tungsten and its length varies between 50 and 200 μm. This approach based on their novel 3D assembly method makes it possible to form large arrays of sensors on a variety of substrate materials. The research group from the MESA+ Institute for Nanotechnology at the University of Twente developed a capacitive artificial neuromast based sensor platform that was integrated as high-density arrays to measure high-frequency acoustic flow patterns based on drag force [33][34][35][36]. The artificial neuromast is made of a vertical pillar with heights between 400 and 1000 μm, heights similar to the sensory hairs found on crickets. To distinguish between rotation and translation normal to the substrate, two electrodes on a membrane were attached to the base of the pillar. The capacitive read-out indicates the movement of the pillar. High-density arrays were designed to increase the total capacitance value and thereby the overall sensitivity. Successfully carried out measurements (capacitance vs voltage, frequency dependence and directional sensitivity) demonstrated the viability of the capacitive-based flow sensing approach. With a pillar geometry of 900 μm × 50 μm and a two-dimensional sensor separation of ≈250 μm, the flow sensor array described by Bruinink et al. [35] achieved very high sensitivity (minimal detectable flow amplitude) down to 2 mm s−1 in air flow. The 100-fold increase in acoustic sensitivity in comparison to their first-generation capacitive-based flow-sensor arrays [33,34] was mainly achieved by an increase in pillar length from 450 to 900 μm, a decrease in diameter from 50 to 25 μm and the removal of the membrane curvature. Table 1 gives an overview of bent piezoresistive cantilever structures, a subset of closely related MEMS flow sensors for direct comparison in this study. A more comprehensive list of previously described piezoresistive based artificial hair sensors, including geometries such as vertical beams, vertical pillars and bent flags, can be found in [37]. To systematically investigate the response behaviour of their piezoresistive cantilever-based air flow sensor, Wang et al. [38] performed wind tunnel tests with three different cantilever beam lengths (400 μm, 1200 μm and 2000 μm) at air flow velocities ranging between 0 and ≈45 m s−1. Progressively increasing the air flow velocity increased resistance signals approximately linearly. Average sensitivities for the individual cantilever beam lengths were found to be 0.0134, 0.0227 and 0.0284 Ω/(m s−1), respectively. Aiming to detect air flow direction, Wang et al. [39] positioned four microcantilever beams (4000 μm long and ...). Table 1: Previously described, bent-cantilever-based flow sensors ordered by year of publication. Comparative overview of cantilever geometries and flow velocity related performance indicators (sensitivity S, measurement range R). Hair geometry defined as the product of length × width (or diameter) × thickness. Adapted from [37].
Du et al. [40] connected a rectangular plate (3000 μm × 2500 μm × 10 μm) to two squared cantilevers (500 μm × 500 μm × 10 μm) which bent as air flow hit the plate. While the plate received the drag force of the air flow, the cantilevers measured the drag force using platinum strain gauges. One variable resistor (strain gauge) on each of the two cantilevers and two fixed resistances on the sensor substrate were used in a Wheatstone bridge circuit, which converted the resistance change into a change in output voltage that decreased in the forward and increased in the backward direction with air flow velocities ranging between −18 m s−1 and 18 m s−1. The flow sensor demonstrated a sensitivity of 60 μV/(m s−1) at around 18 m s−1. By attaching two flow sensor units perpendicular to each other, the sensor demonstrated sensitivity to multidirectional air flow with a maximal error of 9.2% [41]. Zhang et al. [43] calibrated their bent piezoresistive flow sensors in deionized water with different cantilever beam lengths (100 μm, 200 μm and 400 μm) at varying water flow rates between 0 and 0.2 m s−1. The microcantilevers were able to measure small flow rates between 0 and 0.23 m s−1, with a sensitivity ranging between 1.5 and 3.5 Ω/(cm s−1). Qualtieri et al. [44] developed bent artificial hair sensors (600 μm × 100 μm × 0.7 μm) that demonstrated bidirectional sensitivity to nitrogen flow. Equipped with an 80 μm long nickel-chrome (80/20) piezoresistor (strain gauge), the artificial hair sensor was sensitive to nitrogen flow along both cantilever directions. However, the strain gauge resistance revealed an asymmetrical response behaviour. While the piezoresistance varied by around 0.44% in the forward flow direction (cantilever flattened), a curled-up cantilever varied only by 0.07% in the backward direction. Extending the cantilever and strain gauge length, Qualtieri et al. [45] characterized artificial hair sensors (1500 μm cantilever beam length) in water at flow velocities up to 0.5 m s−1. It was shown that the sensitivity of the flow sensor to a specific dynamic range can be tuned by choosing the thickness of a thin, waterproof parylene layer accordingly [46]. A parylene coating of 0.5 μm thickness showed strain-hardening behaviour with a linear sensitivity of ... Antiparallel cantilever design In this study, keeping the same cantilever dimensions as described by Qualtieri et al. [45], we expanded on our work to optimize the sensor response to flow by using an antiparallel configuration. Antiparallel cantilevers were fabricated and fully characterized, coupled as a sensor with bidirectional sensitivity. The novel sensor design and arrangement is calibrated and stimulated by air flow in multiple directions. The effects of temperature change were measured. To compare the antiparallel cantilever with a single cantilever design, a single hair sensor reference model with equal cantilever dimensions and material composition was designed, manufactured and investigated in the presented study, as shown in Figure 3. The main difference between a single and two antiparallel cantilevers is that one variable resistor (sensing strain gauge) forms a quarter-bridge configuration (in the case of the single cantilever), whereas two represent a half-bridge configuration (antiparallel cantilevers) in the Wheatstone bridge circuit. Flow velocities ranging between 10 m s−1 and 32 m s−1 were used to characterize the two sensor variants in air.
The Reynolds equivalence with regard to speed in water is ≈1/15 that in air, with kinematic viscosities of ν ≈ 1 × 10 −6 m 2 s −1 for water and ≈15 × 10 −6 m 2 s −1 for air at 20 °C [18]. That is, 10-30 m s −1 flow speeds in air are (Reynolds) equivalent to ≈0.7-2 m s −1 in water, which occur regularly in a biological context. A 35 cm trout, for instance, swims with a speed of up to 3.5 m s −1 (10 body lengths per second) [48]. Reynolds numbers for the cantilever were roughly estimated based on the assumption of ideal laminar air flow conditions, following the calculation presented in [49]. Reynolds numbers range between R e ≈ 67 (for 10 m s −1 ) and R e ≈ 213 (for 32 m s −1 ) for air at 20 °C (kinematic viscosity ν = 15.06 × 10 −6 m 2 s −1 ) and a cantilever beam width d = 100 μm (characteristic length). This suggests steady flow conditions in the wind tunnel at the cantilever tip. In laminar flow, the boundary layer thickness δ over a flat plate is approximately δ ≈ 4.91·√(νx/u ∞ ), with kinematic viscosity ν, distance x along the surface from the leading edge, and free-stream velocity u ∞ [18]. For instance, given an air flow of 30 m s −1 over a 30 cm sized object, the boundary layer would be approximately 1.9 mm thick. The presented flow sensors, inspired by the mechanoreceptive neuromasts found in fish, are adequate for measuring phenomena that take place in the boundary layer, as the total height of the flow sensor (cantilever tip height as measured from the surface) is 1.4 mm. The electrical measurement procedure depends on two piezoresistive strain gauges fixed to flexible cantilevers bent out-of-plane, which are deflected by the mechanical force of the moving fluid. Each micro strain gauge runs across the entire surface of the cantilever and accordingly changes its electrical resistance even at the slightest stretching and compressing of the beam. A built-in Wheatstone bridge circuit, which implements a half-bridge strain gauge configuration, maximizes the signal transduction and provides measurable differences in voltage proportional to the deformation of the beams. Voltage signals below and above the potential difference of the Wheatstone bridge circuit in its equilibrium (offset) state are directly related to the direction of the flow. The antiparallel artificial hair sensor is equipped with two bent cantilevers and has a square footprint of 2.5 mm 2 . The cantilevers reach approximately 1 mm tip height above the square sensor substrate. With a substrate height of ≈400 μm, the total height of the flow sensor adds up to 1.4 mm. Each cantilever is 1.5 mm long, 100 μm wide and, depending on the thickness of the parylene layer, 2 to 4 μm thick. Four contact pads, located in each of the four corners on the sensor top side, interface the power supply (excitation voltage) and voltage readout with peripheral data acquisition hardware. SEM pictures are presented in Figure 4 and Figure 5. As shown in Figure 5 and listed in Table 2, a single strain gauge is made up of 8 strain gauge periods with a pitch of 12 μm between two neighbouring periods. A single period is 1500 μm long, 4 μm wide and has a thickness of 100 nm. The equivalent circuit resistance between two adjoined points in an unexcited Wheatstone bridge circuit, say points A and B as pictured in Figure 6, is given by a parallel circuit of a single resistor in the one arm (R A,B ) and a series of three resistors in the other arm (R B,C + R C,D + R A,D ).
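To make the flow regime estimates above easy to check, the following minimal Python sketch (not taken from the authors' code; the function and variable names are ours) reproduces the quoted Reynolds numbers and the Blasius-type laminar boundary layer thickness:

    # Minimal sketch: order-of-magnitude flow estimates quoted in the text.
    nu_air = 15.06e-6      # kinematic viscosity of air at 20 degC, m^2/s
    d = 100e-6             # cantilever beam width (characteristic length), m

    def reynolds(u, L, nu):
        """Reynolds number Re = u * L / nu for flow speed u and characteristic length L."""
        return u * L / nu

    def blasius_thickness(x, u_inf, nu):
        """Laminar (Blasius) flat-plate boundary layer thickness: delta ~ 4.91*sqrt(nu*x/u_inf)."""
        return 4.91 * (nu * x / u_inf) ** 0.5

    print(reynolds(10.0, d, nu_air))               # ~66-67, i.e. Re ~ 67 at 10 m/s
    print(reynolds(32.0, d, nu_air))               # ~212-213, i.e. Re ~ 213 at 32 m/s
    print(blasius_thickness(0.30, 30.0, nu_air))   # ~1.9e-3 m, i.e. ~1.9 mm over a 30 cm object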
In the case of opposed points, say points A and C, the circuit consists of two arms with two strain gauges in series, respectively. Table 3 provides an overview of the strain gauge resistances at rest (without power supplied to the circuit) as measured prior to any experiments. Interconnected strain gauge resistances read approximately 31 kΩ for adjoined (e.g., R A,B ) and 41 kΩ for opposed (e.g., R A,C ) strain gauges. This result demonstrates that the four strain gauges have closely matched individual resistances: both readings are consistent with a nominal resistance of ≈41 kΩ per gauge.

The change in resistance for an upwardly bent strain gauge is caused by the compressing surface strain of the curled-up cantilever. To fulfil the condition that the volume of the strain gauge is constant even under compression, an increase of the strain gauge thickness is expected, which in turn decreases the electrical resistance. For a previously described cantilever beam under maximal compression in nitrogen flow, the least possible resistance value was found to be approximately 0.07% smaller than the resistance value in the resting, bent position [44]. This value is used to approximate the resistance value of the bent strain gauge under maximal compression in air flow. The theoretical voltage range is defined by the minimal and maximal possible resistance values in the circuit, which are reached in two conditions. Both conditions are met when the cantilevers undergo maximal loading, that is, maximal stretching for the one and maximal compressing for the other beam: for each of the two flow directions, one strain gauge is highly compressed (R bent(max) , flow hits the cantilever back side and causes the beam to curl up further) while the other is flat (R flat , flow hits the cantilever front side and flattens the beam). The output voltage V out , that is, the voltage of point D relative to point B, as shown in Figure 6, is defined as

V out = V D − V B = (R C,D /(R A,D + R C,D ) − R B,C /(R A,B + R B,C )) · V exc , (1)

where V D and V B are the potentials at point D and B, respectively, and V exc is the excitation voltage. As indicated in Figure 6, the variable resistors (strain gauges) are located in the right arm, that is, R A,B and R B,C . With an excitation voltage of 3.3 V, the offset (resting), minimal and maximal voltage signals for the antiparallel artificial hair sensors approximate V out(min) ≈ V out(offset) − 5.9 mV and V out(max) ≈ V out(offset) + 5.9 mV, that is, a fluctuation of ±5.9 mV around the resting offset voltage. Following the same mathematical procedure, the quarter-bridge configuration of a single artificial hair sensor (with strain gauge R A,B ) generates output signals that range between V out(min) ≈ −5.9 mV and V out(max) = 0 mV, with resting voltage V out(offset) ≈ −5.3 mV.

Results

Bidirectional air flow

Influence of temperature on micro strain gauges

As shown in Figure 8, the offset voltage of the Wheatstone quarter-bridge circuit (single cantilever) drifted by around 100 mV, from 7.22 V up to 7.32 V, after cooling down the sensor unit from 25 °C to 19 °C. This drift suggests that the sudden temperature decrease influenced the offset voltage for the single cantilever sensor. In the subsequent warming up phase (data not shown), the output dropped back to its offset voltage (7.22 V). Accounting for signal amplification, the actual drift was 1 mV. The temperature experiments were repeated for the antiparallel artificial hair sensors. The results in Figure 8 show that the Wheatstone half-bridge circuit with two bent strain gauges reduced the effects of temperature changes. A perfectly balanced half-bridge circuit with two identical resistors in both arms would fully compensate for temperature changes. Due to variation in the resistances (see Table 3), the half-bridge circuit was not perfectly balanced and a drift of 15 mV was measured.
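As a consistency check on the voltage estimates above, the following Python sketch evaluates the bridge relation of Equation (1) for a quarter- and a half-bridge. The topology (excitation across A-C, output between D and B), the use of the flat-gauge resistance as the fixed reference value, and the specific resistance ratios are assumptions chosen only so that the quoted −5.3 mV quarter-bridge offset is reproduced; only the ratios matter for V_out, and none of these numbers beyond the quoted voltages come from the authors:

    # Minimal sketch: Wheatstone bridge output V_out = V_D - V_B for the assumed topology.
    def bridge_output(r_ab, r_bc, r_cd, r_ad, v_exc=3.3):
        return (r_cd / (r_ad + r_cd) - r_bc / (r_ab + r_bc)) * v_exc

    R_FLAT = 41e3                        # assumed reference (flat-gauge) value; order of magnitude of Table 3
    R_BENT = 0.9936 * R_FLAT             # resting bent-gauge value chosen to give a ~-5.3 mV quarter-bridge offset
    R_BENT_MAX = (1 - 0.0007) * R_BENT   # maximally compressed gauge: 0.07 % below the resting bent value [44]

    # Quarter bridge (single cantilever): only R_AB is a bent, variable gauge.
    print(bridge_output(R_BENT, R_FLAT, R_FLAT, R_FLAT))       # ~ -5.3e-3 V (offset)
    print(bridge_output(R_BENT_MAX, R_FLAT, R_FLAT, R_FLAT))   # ~ -5.9e-3 V (minimum)
    print(bridge_output(R_FLAT, R_FLAT, R_FLAT, R_FLAT))       # 0 V (maximum, cantilever flattened)

    # Half bridge (antiparallel cantilevers): R_AB and R_BC are both bent gauges.
    print(bridge_output(R_BENT, R_BENT, R_FLAT, R_FLAT))       # 0 V (ideal resting offset)
    print(bridge_output(R_FLAT, R_BENT_MAX, R_FLAT, R_FLAT))   # ~ +5.9e-3 V (one flattened, one compressed)
    print(bridge_output(R_BENT_MAX, R_FLAT, R_FLAT, R_FLAT))   # ~ -5.9e-3 V (opposite flow direction)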
Accounting for signal amplification, the actual drift was 0.15 mV. The artificial hair sensors did not generate heat, which suggests that they changed their temperature with the environment only. Both flow sensors gave a consistent, sinusoidal response to the multidirectional air flow conditions, which was expected, as the drag force on the bent cantilevers is a function of flow direction (or rotation angle in the presented experiment). When a cantilever is rotated by angle α, the expected signal amplitude is A α = A max sin(α), where A max is the signal amplitude for the rotation angle of best directional sensitivity, namely 90° and 270° in the case of the two flow sensor variants during the experiments. The expected amplitude for a rotation angle α = 45° corresponds to approximately 71% of the maximal signal amplitude. Both flow sensor variants showed the expected signal output (±0.7 for the normalized data) for the rotation angles 45°, 135°, 225°, and 315°. However, the antiparallel artificial hair sensors generated a more precise sinusoidal response, as indicated by the higher R-square value (0.9842 for the antiparallel configuration vs 0.9660 for the single cantilever). In forward flow (270°), the single sensor reached its saturation limit, flattening at 67% of the ideal sinusoidal curve, whereas the antiparallel cantilevers saturated at 75%, as indicated by the residuals in Figure 9. Although both sensor variants showed a signal saturation for 90° and 270° (where the deviation from the ideal sinusoidal signal increased), the antiparallel strain gauges generated a higher signal in these particular positions. This behaviour can be explained, as one strain gauge is bent down while the other is curved up, which represents two changing resistances in opposite directions, one getting smaller and one getting bigger. That amplifies the electrical effect in the Wheatstone bridge circuit, which in turn supports higher ranges in the output voltage. Therefore, antiparallel cantilevers resulted in improved directional sensitivity and sinusoidal response to flow angle. While the antiparallel cantilevers provided maximal signal for the rotation angles of best directional sensitivity (90° and 270°), they inhibited voltage output for the two other main directions, where air flow hit the sensors from the side (0° and 180° in our experiment). In these two conditions, where the cantilever surface is minimal and aerodynamic forces are hardly exerted on the cantilever beam, the cantilevers oscillated around their resting position. Even under laminar conditions, the bent cantilever is a flexible bluff body that generates periodic turbulence, which causes oscillation around its resting position.

Multidirectional air flow

The histograms in Figure 10 and Figure 11 present the mean of 30,000 measurement values at rotation angles 0°, 90°, 180°, and 270°. As expected, the cantilevers oscillated around the resting position when the air flow hit both sensor variants from the side (0° and 180°). The single cantilever revealed a more skewed, broader distribution (Figure 10), whereas the antiparallel cantilevers showed a more symmetric, narrower distribution (Figure 11). The same behaviour was identified for the two rotation angles of best directional sensitivity, that is, 90° and 270°. The antiparallel artificial hair sensors were less noisy and showed a statistical symmetry, as indicated by the similarly shaped but mirror-inverted distribution for the forward and backward air flow directions.
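For reference, a small Python sketch of the expected sin(α) dependence used above (purely illustrative; no experimental data involved):

    # Minimal sketch: expected normalized amplitude A_alpha = A_max * sin(alpha), with A_max = 1.
    import math

    for alpha_deg in (0, 45, 90, 135, 180, 225, 270, 315):
        a = math.sin(math.radians(alpha_deg))
        print(f"{alpha_deg:3d} deg -> expected normalized amplitude {a:+.2f}")
    # 45, 135, 225 and 315 deg give |A| ~ 0.71, i.e. ~71 % of the maximal amplitude,
    # while 90 and 270 deg give the maximal (+/-1) response and 0/180 deg give ~0.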
This fact can also be explained when comparing the standard deviations for the directions of best sensitivity. While the single cantilever showed a difference of ≈0.43 between the two directions, that is 1.387 for the forward (cantilever flattened down at 270°) and 0.953 for the backward air flow direction (cantilever curled up at 90°), the antiparallel cantilevers showed a more consistent symmetry for these rotation angles (0.01 difference). The bigger difference in the standard deviation suggests that the single cantilever generated a noisier signal for the 270° (cantilever flattened down) when compared to the 90° rotation (cantilever curled up), which supports the assumption that the cantilever oscillated less in the compressed state, because it is less flexible in the compressed state. Discussion In this work, antiparallel artificial hair sensors, coupled as a flow sensor with bidirectional sensitivity, were fabricated and fully characterized. The novel sensor design and arrangement was calibrated and stimulated by air flow in multiple directions. Sensitivity to temperature changes was measured. All experiments and measurements were compared with a single hair sensor reference model with equal cantilever dimensions and material composition. The conducted experiments under bidirectional air flow conditions demonstrated that the antiparallel artificial hair sensors exhibit an axially symmetrical sensitivity range between 40 μV/(m s −1 ) for the lower air flow velocity range (between ±[10-20] m s −1 ) and 80 μV/(m s −1 ) for a higher air flow velocity range (between ±[20-32] m s −1 ). Compared to the cantilever based flow sensor described by Du et al. [40], which provided an asymmetrical best sensitivity of 60 μV/(m s −1 ) at 18 m s −1 , the presented antiparallel cantilevers demonstrated an improved sensitivity for air flow velocities higher than 20 m s −1 . Whereas the presented single cantilever reference model showed a saturation limit at −17.2 m s −1 (for a curled-up cantilever in backward air flow direction), the antiparallel configuration provided sensitivity to both air flow directions, overcoming the saturation limit by having the second cantilever extended out into the flow. The theoretical output voltage was estimated to fluctuate between ±5.9 mV around the offset voltage for an excitation voltage of 3.3 V. The measurement values ranged between 9.6 mV and 12.6 mV around the offset voltage of 11 mV for the antiparallel artificial hair sensor. The mismatch between the measured (3 mV) and the modeled (11.8 mV) output range can be explained by the fact that the maximal air flow velocity of the wind tunnel (32 m s −1 ) did not generate enough drag force to flatten the downwind cantilever completely. In our previous work, we obtained signal saturation in air at about 40 m s −1 for a comparable but not mechanically identical cantilever [47]. Other authors described detection limits for comparable cantilevers as 45 m s −1 [38], which confirms the suggested explanation for the mismatch in the measured output voltage range. The presented strain gauges were subject to temperature changes, which are caused by two effects. The first is a difference in convective cooling due to the perpendicular air flow over the strain gauges on the extended cantilevers versus the parallel flow over the gauges on the sensor substrate. 
The second effect is the difference in heat capacity of the thin cantilever versus that of the much more massive sensor substrate, which affects both the heat flow by conduction and the temperature change of the substrate itself. To investigate the influence of temperature on the micro strain gauges, the presented artificial hair sensors were subjected to a sudden decrease in temperature, from 25 °C down to 19 °C in about 4 minutes. As a consequence, the offset voltage of the single cantilever sensor drifted by 1 mV, whereas the antiparallel cantilevers showed a smaller influence of the temperature decrease, a drift of 0.15 mV. The Wheatstone half-bridge circuit with two bent strain gauges did not compensate for the temperature change completely, as it was not perfectly balanced (see the variation in the resistances in Table 3). Nevertheless, it reduced the influence of temperature by a factor of 6.7 when compared to the single cantilever sensor, as shown in Figure 8. In previously published work, various thermal effects on piezoresistors such as friction, self-heating and convection were described. Du et al. proposed an additional temperature resistance that could better compensate for temperature changes [41]. Other authors fabricated temperature compensation circuits together with the strain gauges on a single chip, as described by Ozaki et al. [50]. Their proposed (but not discussed) approach for on-chip temperature compensation was a Wheatstone full-bridge configuration with four active strain gauges. In general, temperature compensation is required when bent out-of-plane cantilevers operate in large air flow velocity ranges, where wind-induced convection and in turn temperature changes are likely to occur. Drag force on the bent cantilevers is a function of flow direction (or rotation angle) and therefore implies a sinusoidal response behaviour, which was already demonstrated by previously published work. The flow sensor described by Wang et al. [39] revealed a sinusoidal relationship between air flow angle and resistance variation, with the largest variation for the downwind, flattened cantilever and the least variation for the upwind, curled up cantilever. The resistance variations of a cantilever positioned perpendicular to the air flow direction was almost equal to zero. Ozaki et al. [50] and Du et al. [41] used a Wheatstone half-bridge circuit to convert the resistance variations into a related output voltage, which demonstrated the expected sinusoidal relationship between air flow angle and output voltage signal. In our work, as presented in Figure 9, the performed measurements under multidirectional air flow conditions matched the expected values, as both flow sensors gave a sinusoidal response to the multidirectional air flow. The single sensor reached its saturation limitation in forward flow at 67% of the ideal sinusoidal curve whereas the antiparallel cantilevers saturated later at 75%. When the air flow hit both sensor variants from the side (cantilevers oscillated around the resting position), antiparallel cantilevers showed a more symmetric, narrower distribution. For forward and backward air flow direction, the antiparallel artificial hair sensors were less noisy and showed a statistical symmetry, as indicated in the histograms in Figure 10 and Figure 11. This result implies that the antiparallel artificial hair sensors generated less noise while oscillating around their offset voltage and provided higher sensitivity over the entire multidirectional measurement range. 
The presented novel antiparallel flow sensor design was inspired by the neuromasts found in the lateral line of fish, sensory organs which have a directional sensitivity (determined by an axis of orientation of the individual hair cells) and morphological polarity [1]. As in the biological model, where hair cells within the same neuromast assume an antiparallel bending orientation with respect to each other, the presented antiparallel artificial hair sensors also demonstrated an axis of sensitivity or best direction.

Conclusion

For the two presented artificial hair sensors, a sinusoidal response to the flow stimulus has been measured, mimicking the cosine-like response function of the afferent nerves of the hair cells. The clear response angle for the antiparallel artificial hair sensors creates a unique sensor with improved directional sensitivity. Its response to oscillatory flow has yet to be determined; however, as one cilium flattens, the other remains extended out into the flow and may therefore be capable of responding to additional flow stimuli. Although the direction of flow cannot, in principle, be measured by the antiparallel cantilever configuration (an orthogonal configuration, which we are planning to build in the upcoming year, would have to be formed to directly measure two-dimensional flow direction), it demonstrated improved bidirectional sensitivity for flow sensing with curved artificial hair sensors. In the short term, the presented sensor platform will be investigated in liquids as well, once the sensors are embedded in a waterproof MEMS package. For future applications, we are targeting automotive, robotics, automation and air conditioning applications with variable air flow speeds, ranging between a few and 40 m s −1 . A first real-world case study with our flow sensor platform was designed and conducted by the Institute of Air Handling and Refrigeration in Dresden (Germany), which aimed at detecting angular momentum in industrial air ducts for controlling the speed of contra-rotating axial fans. A known technical problem of axial fans is the emergence of unavoidable swirl in the wake flow at high peripheral speeds. While contra-rotating axial fans already reduce the swirl in the wake flow [51], detecting angular momentum in real time using our flow sensor platform could possibly further reduce mechanically unfavourable flow dynamics in the wake. When compared to flow sensors which are based on the same electrical measurement procedure (piezoresistive, strain gauge) and mechanical structure (bent cantilever), the achieved sensitivity of our device is comparable to previously published work by other authors. Our presented research focussed on how to improve the response behaviour of bent piezoresistive cantilever structures. The successive improvement of the bidirectional sensitivity, that is, improved temperature compensation, decreased noise generation and symmetrical response behaviour, can be considered the primary result of the presented research. The specific advantage of our design is that we can measure flow speeds directly instead of having to infer them from Bernoulli assumptions based on pressure differences.

Experimental

Sensor fabrication

During the fabrication process, thin silicon nitride/silicon layers are stacked together on a silicon wafer substrate and precisely shaped into the desired cantilever geometry by performing an etching process.
In the following, we describe the MEMS fabrication process for the stress-driven artificial hair sensor. The fabrication process is subdivided into the following main steps:

• Depositing functional material layers: The flow sensor is based on a silicon-on-insulator (SOI) substrate which is made up of a 400 μm silicon wafer, a 2 μm thick silicon dioxide (SiO 2 ) insulation layer, and a 2 μm silicon device layer. Due to the residual stress in the material, the released SiN/Si bilayer bends out of the plane upon release [37]. The resulting cantilever thickness is 2.3 μm.

• MEMS packaging: After wire bonding the contact pads to an integrated circuit socket adapter (W13028RC, Winslow ADAPTICs), chemical vapour deposition of parylene at room temperature is performed, which adds a conformal 2 μm encapsulation layer to all sides of the cantilever beams as well as the flow sensor substrate (including circuitry). This process tunes the mechanical properties (flexural stiffness) of the artificial hair sensor, as described in [46].

Experimental methodology

Three different experiments were designed and conducted to investigate the performance of both stress-driven hair sensor variants to flow stimuli: 1. Both variants were compared and calibrated under bidirectional air flow conditions, that is, two air flow directions with reference to the two bending axes of the cantilevers. 2. Both variants were subjected to thermal variations to compare the influence of temperature on the micro strain gauges. 3. Both variants were compared under multidirectional air flow, that is, the flow sensors were gradually rotated until a full cycle was achieved (360°).

Bidirectional air flow

The facility used for the first experimental research was a subsonic wind tunnel, operated by the Department of Civil and Industrial Engineering at the University of Pisa. The closed-return Goettingen type wind tunnel had an open (round) test section of 1.1 m in diameter and 1.4 m in length, and was characterized by a free-stream turbulence level of 0.9%. A computer-controlled traversing rig positioned the artificial hair sensors at the centre of the wind tunnel. In our measurements, air flow velocities between 12 and 32 m s −1 were generated and tracked by a calibrated pitot tube and thermoresistance based measuring equipment. In the wind tunnel, Reynolds numbers range between R e ≈ 67 (for 10 m s −1 ) and R e ≈ 213 (for 32 m s −1 ), suggesting steady flow conditions in the wind tunnel at the cantilever tip. To investigate the electrical behaviour under bidirectional air flow conditions, two different experiments were performed with flow sensors aligned in the forward and backward flow direction with reference to the two bending axes of the cantilevers. For both experiments, the velocity range between 12 and 32 m s −1 was applied to measure the electrical response of the flow sensors. During the experiments, the built-in Wheatstone bridge circuit was excited with V exc = +3.3 V DC, while its voltage output was connected to a benchtop digital multimeter (Agilent 34405A).

Influence of temperature on micro strain gauges

As the electrical resistivity of metals is temperature-dependent, a drift in the offset voltage may occur while the flow sensor is in operation. To compare the influence of temperature on air flow measurements, both flow sensor variants (single and antiparallel artificial hair sensors) were subjected to a sudden decrease in temperature.
A cylindrical ice block (6 cm diameter and 7 cm height) was cooled down to −11 °C and placed directly beneath the flow sensor for 10 minutes, while the offset voltage of the Wheatstone bridge was recorded. The built-in Wheatstone bridge circuit was excited with V exc = +5.0 V DC. To exclude additional, highly complex thermodynamic effects, measurements were performed without air flow. A high-resolution infrared camera (FLIR, SC600-series, IR lens f = 13.1 mm), positioned at a distance of 20 cm, was used to track the temperature changes at the sensor substrate.
8,530.2
2019-01-03T00:00:00.000
[ "Engineering", "Biology" ]
Editorial: Natural Product Epigenetic Modulators and Inhibitors
Molecular Simulations Laboratory, Department of Chemistry, University of Buea, Buea, Cameroon; Institute of Pharmacy, Martin-Luther University of Halle-Wittenberg, Halle (Saale), Germany; Institute of Botany, Technical University Dresden, Dresden, Germany; Department of Pharmaceutical Chemistry, Istanbul University Istanbul, Istanbul, Turkey; Department of Drug Chemistry and Technologies, Faculty of Pharmacy and Medicine, Sapienza University of Rome, Rome, Italy; Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy

Epigenetics in drug discovery has recently drawn a lot of attention, and there is an increasing number of applications, as reflected in the growing number of publications and citations on this subject. This is because epigenetic dysfunction is widely implicated in several human diseases, including cancer, neurodegenerative and parasitic diseases. Epigenetics broadly involves covalent changes to the building blocks of chromatin, called nucleosomes. The reactions involved include methylation, acetylation, phosphorylation, ubiquitination and the reverse reactions of these. Natural products (NPs) are regaining attention as sources of lead compounds that can be used as a starting point for drug or epigenetic probe discovery. Besides, NPs have a documented large structural diversity, with unique and complex scaffolds showing promising activity and selectivity profiles against epigenetic targets. This special collection of articles is dedicated to the research topic "Natural Products Epigenetic Modulators and Inhibitors." This is a focused research area that covers NPs and NP mimics and derivatives, which play important modulatory or inhibitory roles in the epigenome. NPs and natural remedies have a potential role to play in the treatment of several diseases, including age-related and neurodegenerative disorders. The health benefits of NPs capable of inhibiting or modulating the zinc-independent histone deacetylases (HDACs), the sirtuins (benefits varying from anti-aging to cardiovascular disease, weight loss, etc.), have been summarized in a recent review article (Karaman Mayack et al., 2020). Among the sirtuins, sirtuin 1 (sirt1) is the most studied member, and its expression has been associated with an increase in insulin sensitivity. Besides, sirt1 is known to be implicated in tumorigenic and anticancer processes, as well as in the regulation of some essential metabolic pathways by controlling p53 deacetylation and modulating autophagy. Targeting sirt1 could affect caloric restriction and extend human lifespan. NPs (including polyphenols contained in products like fruits, vegetables, and plants) have recently been found to upregulate sirt1 activity (Iside et al.). Besides, epigenetic mechanisms (particularly histone deacetylase inhibition) have recently been reported to be involved in epilepsy and chronic pain development (De Caro et al.), as well as in several other disease-associated epigenetic mechanisms, including cancer (Cassandri et al.; Karaman Mayack et al., 2020). For this reason, several NPs have been repurposed for the treatment of different diseases and cancer itself (Montalvo-Casimiro et al., 2020; Moreira-Silva et al., 2020). Several recent chemoinformatics studies have been conducted with the goal of exploring the chemical space and target space of NPs that play epigenetic roles in nature (Sessions et al., 2020), including inhibitors of DNA methyltransferases (DNMTs) (Saldívar-González et al.).
The chemical and activity landscape of naturally occurring epigenetic modulation and inhibition has often motivated lead compound discovery for these targets by virtually screening NP databases (Naveja and Medina-Franco, 2015; Akone et al.). A survey of the structure-multitarget activity relationships and diversity of >7,000 compounds (including NPs), which had previously been tested against ∼50 epigenetic targets, showed that HDACs and other epigenetic targets are clustered in the chemical space, whereas the chemical space of inhibitors of the various DNMTs included in the study did not overlap. This observation indicated the selectivity of DNMTs toward their inhibitors. In addition, inhibitors of HDACs are known to reverse latency in the human immunodeficiency virus (HIV), facilitating antiviral activity to target and kill the virus in the "shock and kill" process (Andersen et al., 2018; Divsalar et al.). In the interesting review by Akone and collaborators (Akone et al.), the authors focus on NPs that inhibit DNA methyltransferases (DNMTs) and histone deacetylases (HDACs). Notably, they emphasize that these natural drugs can be considered promising candidates in the treatment of cancer. While DNA methylation and histone modifications represent epigenetic signatures regulating gene expression (Allis and Jenuwein, 2016) in physiological events (development, differentiation, proliferation), hypomethylation and hypermethylation of DNA have been observed in cancer cells (Qi and Xiong, 2018). For this reason, the identification of highly specific and non-toxic compounds represents a main topic in the field. The best-known drugs targeting DNMTs are azacytidine and decitabine. Despite their usefulness as epigenetic modulators, these compounds are highly toxic and poorly stable (Gnyszka et al., 2013). More recently, some natural compounds have been reported as efficient against DNMT activity; specifically, in the aforementioned review, the authors extensively describe the activity of (−)-epigallocatechin-3-gallate (contained in green tea), the polyphenol curcumin, the flavonoid quercetin, kazinol Q, resveratrol (3,4′,5-trihydroxystilbene), the quinone nanaomycin A, the isoflavone genistein, the isothiocyanate sulforaphane, the pentacyclic terpenoid boswellic acid, Z-ligustilide, the germacrane sesquiterpene lactone parthenolide and the ubiquinone derivative antroquinonol D. Moreover, they approach the analysis of HDAC inhibitors, due to their important role in the treatment of cancer progression (Lee et al., 2017). As previously described, also in the case of HDAC inhibitors, natural compounds represent a starting point for the development of structures that could be highly selective for the different HDAC isoforms and potentially active in cancer. Trichostatin A was the first natural HDAC inhibitor described, but its lack of selectivity limits its use (Khan et al., 2008; Shen and Kozikowski, 2016). More recently, different compounds have been designed and synthesized, and some of them show good efficacy in cancer cells and in different pathophysiological states (Rossi et al., 2018).
Specifically, the distinct HDAC inhibitors can be divided into zinc-binding inhibitors (linear HDAC inhibitors, cyclic tetrapeptides, cyclic depsipeptides) and non-zinc-binding inhibitors (e.g., ursolic acid, epicocconigrones A and B, curcumin, n-butyric acid and aceroside VIII). Overall, in this manuscript (Akone et al.) the authors highlight the importance of NPs, which can represent a basis for the development of specific and efficient new-generation compounds. In line with this paper, Fiorentino and co-workers (Fiorentino et al.) analyze the role of a distinct class of acetyltransferases, the lysine acetyltransferases (KATs), which transfer acetyl groups onto target proteins and are normally involved in cell signaling, metabolism, gene regulation, and apoptosis. It is well known that alteration of KAT enzymatic activity correlates with different pathological states (inflammation, neurological diseases and cancer). For these reasons, the conceptualization, design and synthesis of KAT inhibitors (KATi) could represent a powerful strategy to counteract these conditions. In this field of investigation, NPs have been identified as active compounds counteracting KATs. Specifically, the authors extensively report that NPs, despite their low selectivity, represent suitable scaffolds for the development of new derivatives that are potentially more specific and highly active. With this intent, Fiorentino et al. provide an overview starting from garcinol and isogarcinol, which represented the starting point for the development of new molecules. They then discuss epigallocatechin-3-gallate, a non-selective KATi active against p300, CBP, PCAF and Tip60 (Choi et al., 2009), delphinidin and curcumin, although the latter cannot be considered a selective KATi. The discussion is enriched by a focus on other NPs that are differentially active against KATs. Specifically, the authors analyze the potential of different quinones and of 6-pentadecylsalicylic acid (anacardic acid) and its derivatives, which show differing activity, especially on CBP and p300. Starting from this evidence, the authors also discuss the activity and effectiveness of alkaloids, which possess inhibitory activity against p300 and PCAF, as well as the role of prostaglandins and peptide metabolites. In summary, different NPs represent a good starting point for the development of new drugs in the field of KAT inhibitors, with low off-target effects and high efficacy. AUTHOR CONTRIBUTIONS All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication. ACKNOWLEDGMENTS This Research Topic would not have been possible without the contribution of all the researchers and authors. As Guest Editors, we would like to thank all the authors for their substantial contribution toward the success of this issue. We would like to thank the reviewers for their time and valuable constructive comments and suggestions. Lastly, a special word of appreciation goes to the members of the Frontiers Editorial Office for their assistance in making this work a success.
1,944.6
2021-03-04T00:00:00.000
[ "Chemistry" ]
A New Variational Model for Segmenting Objects of Interest from Color Images
We propose a new variational model for segmenting objects of interest from color images. This model is inspired by the geodesic active contour model, the region-scalable fitting model, the weighted bounded variation model and the active contour models based on the Mumford-Shah model. In order to segment desired objects in color images, the energy functional in our model includes a discrimination function that determines whether an image pixel belongs to the desired objects or not. Compared with other active contour models, our new model can not only avoid the usual drawback in the level set approach but also detect the objects of interest accurately. Moreover, we investigate the new model mathematically and establish the existence of the minimum of the new energy functional. Finally, numerical results show the effectiveness of our proposed model.

Introduction

Image segmentation is one of the fundamental problems in image processing and computer vision. Recently, variational methods have been extensively studied for image segmentation because of their flexibility in modeling and advantages in numerical implementation. In general, these methods can be categorized into two classes: edge-based models [1-3] and region-based models [4-6]. Edge-based models utilize the image gradient to stop evolving contours on object boundaries. The geodesic active contour (GAC) model is one of the most well-known edge-based models. It was proposed in [2] and is widely used in practice [7]. However, this model has one major disadvantage: given an initial curve, during the evolution, the energy may evolve to bad local minimizers. Region-based models have some advantages over the edge-based models. Firstly, region-based models do not utilize the image gradient, so they have better performance for images with weak object boundaries. Secondly, these models are significantly less sensitive to the location of initial contours. The Chan-Vese (CV) model [5] is one of the most popular region-based models. It is successful for an image with two regions, each of which has a distinct mean of pixel intensities. In order to handle images with multiple regions, Vese and Chan proposed the piecewise constant (PC) models [8], in which multiple regions can be represented by multiple level set functions. And yet, these PC models are not very successful for images with intensity inhomogeneity. In order to segment images with intensity inhomogeneity efficiently, local intensity information was incorporated into the active contour models [9-11]. For example, Li et al. [9] proposed the region-scalable fitting model, which draws upon local intensity means. This model is able to segment object boundaries accurately. In the papers [12, 13], both local intensity means and variances are used to characterize the local intensity distribution in their proposed active contour models. However, these local intensity means and variances must be defined empirically. In addition, similar forms of local intensity means and variances were also introduced in [14] for a statistical interpretation of the Mumford-Shah functional [15]. In a word, these methods have a certain capability of handling intensity inhomogeneity. Moreover, there are some models that make full use of the advantages of the edge-based and the region-based models, such as the model [7] that unifies the GAC model and the CV model. This model can obtain good results when the contrast between meaningful objects and the background is low.
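For orientation, the standard two-phase Chan-Vese energy referred to above takes the following form in the literature (shown here as a reminder in LaTeX notation; the exact notation, weights and any additional area term may differ from the formulation used later in this paper):

    E_{CV}(c_1, c_2, C) = \mu \, \mathrm{Length}(C)
        + \lambda_1 \int_{\mathrm{inside}(C)} \lvert f(x) - c_1 \rvert^2 \, dx
        + \lambda_2 \int_{\mathrm{outside}(C)} \lvert f(x) - c_2 \rvert^2 \, dx ,

where f is the gray-scale image, C the evolving contour, and c_1, c_2 the average intensities inside and outside C.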
When the aim is to detect all the objects, the above models are very efficient.However, if only the objects of interest OOI are concerned, the segmentation of OOI using active contour methods becomes difficult.In order to detect OOI, the properties of OOI must be known mathematically.However, it is not an easy task to find out a suitable mathematical description of these characters.Many segmentation methods have used shape or texture characters of OOI.For example, the active shape models ASM 16 , the geodesic active contour method incorporated with shape information 17 , and the supervised texture segmentation method 18 .In this paper, inspired by the weighted bounded variation model 19 , the geodesic active contour model, the region-scalable fitting model and the active contour models based on the Mumford-Shah model 7 , we propose a new model which can be applied to segmenting objects of interest from color images.In order to segment desired objects in color images, the energy functional in our model includes a discrimination function 20 which determines whether an image pixel belongs to the desired objects or not.Moreover, our new model uses not only the edge detector which contains information concerning boundaries of desired objects but also the spatially varying fitting functions which are used to approximate the image intensities.Compared with other active contour models, our method cannot only have the advantages of both edge-based and region-based models, but also avoid the usual drawback in the level set approach e.g., initiation and reinitiation .In particular, our new model can detect the desired objects accurately.Finally, we investigate this new model mathematically: we establish the existence of the minimum to the energy functional, analyze the property of it and implement the numerical algorithm efficiently. The remainder of the paper is organized as follows: in Section 2, we show some background.In Section 3, our new model is proposed.Theoretical results, iterating schemes and experimental results are also given in this section.Finally, we conclude our paper in Section 4. The Region-Scalable Fitting Model Let Ω ⊂ R 2 be the image domain and f : Ω → R be a given gray-scale image.The regionscalable fitting model is defined by minimizing the following energy functional: 2.2 The steady state solution of this gradient flow is 2.3 And it is known that 2.3 is the gradient descent flow of the following energy: 2.4 Some Related Models In 2 , the geodesic active contour model GAC is defined by the following minimization problem: where L c is the length of the curve c and the function g is an edge indicator function that vanishes at object boundaries.For a color image f f 1 , f 2 , f 3 , a stopping function g x was proposed in 21 where ∧ is the largest eigenvalue of the structure tensor metric g ij in the spatial-spectral space, and where x x 1 , x 2 and R, G, B represent the pixel values of Red, Green, and Blue after Gaussian convolution, respectively, that is, In order to segment the desired objects in color images, a novel stopping function depending on both the discrimination function of OOI and the image gradient was proposed in 20 : where G σ 1 is a Gaussian kernel and the discrimination function α x can be defined by the following steps in 20 . 1 m sample pixels are chosen from the OOI.The color information of the ith sample pixel is denoted by J i1 , J i2 , J i3 .Therefore, J J ij is a m × 3 matrix. 2 Compute the correlation coefficient matrix of J as ρ ρ i,j 3×3 . 
5 The confidence interval is constructed as with the degree of confidence 1 − 0 < < 1 , where If a 1 x belongs to the above interval, α x 1.If a 1 x does not belong to the above interval, α x 0. Our Proposed Model Let f f 1 , f 2 , f 3 : Ω → R 3 be a given color image where Ω ⊂ R 2 is the image domain a bounded open set .Inspired by the GAC model, the energy functional 2.4 , and the active contour models based on the Mumford-Shah model 7 , our new model is constructed.This model is to minimize the following energy functional w.r. 0 Ω , |ϕ| ≤ g}, g is a diffusion coefficient defined as the formula 2.8 , K σ , G σ 1 are the Gaussian kernels, and α x is a discrimination function.The purpose of constructing such a discrimination function is to derive the characteristics of desired objects so that these characteristics can be shown in the energy functional.This is done by analyzing m sample pixels chosen from the desired objects.The Principal Components Analysis PCA and interval estimation 22 are used in the analysis.By using PCA, a new set of variables which is called principal components, is obtained.By using the interval estimation, we can define an interval that covers many samples for each principal component.In our problem, at most the first two principal components are used.Without loss of generality, we only assume the first principal component is used.An interval a, b for this principal component can be constructed by the interval estimation.When every pixel x of the color image is projected from its RGB values to the first principal component axis, we get a new value u x .If the value is within the interval, the pixel is probabilistically regarded as a pixel in the desired objects.Thus the discrimination function α x based on the color information is constructed as 3.2 In order to understand more details of constructing this discrimination function, please see 23 . In the following, we analyze the above model from two aspects.Firstly, for any given u, according to the necessary condition of the minimization problem, it is known that the functions f i1 y , f i2 y i 1, 2, 3 must satisfy the following equations: where K σ takes larger values at the points near the center point y, and decreases to 0 as x goes away from y.Therefore, f i1 y , f i2 y i 1, 2, 3 are allowed to vary in space. Furthermore, for any given f i1 , f i2 i 1, 2, 3 , the model 3.1 can be converted into a simpler form.That is 3.4 If u is limited to a characteristic function 1 Ω c , the energy functional 3.4 can be changed into the following form: where C is a constant.In this case, the model 3.4 is equal to the following constrained minimization problem: when approximating f i of desired objects with spatially varying fitting functions f i1 , f i2 i 1, 2, 3 . The above analysis shows that our new model uses both the edge detector which contains information concerning boundaries of desired objects, and the spatially varying fitting functions f i1 y , f i2 y i 1, 2, 3 which are used to approximate the image intensities of desired objects.Just because of this, our model can segment the desired objects very accurately. Mathematical Results In 7, 24 , the proof of the existence of models has not been given.In the following, we will state the existence of the minimizer to the energy functional 3.4 and analyze the property of it.Theorem 3.1.For any given f i1 , f i2 ∈ L ∞ Ω (i 1, 2, 3), there exists a function u ∈ g −BV 0,1 Ω minimizing the energy functional E1 in 3.4 . 
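A minimal numpy sketch of a discrimination function of the kind constructed above may help fix ideas. It projects each pixel onto the first principal component of the m object samples and tests membership in a simple mean ± z·std interval; this interval is only a stand-in for the paper's interval estimation, the covariance is used in place of the correlation matrix, and all function and variable names are illustrative rather than taken from the paper:

    import numpy as np

    def discrimination_function(image_rgb, samples_rgb, z=1.96):
        """Binary map alpha(x): 1 where a pixel's projection onto the first principal
        component of the object samples falls inside a confidence-style interval, else 0.
        Simplified stand-in for the construction described in the text (PCA + interval
        estimation on m sample pixels chosen from the objects of interest)."""
        X = samples_rgb.reshape(-1, 3).astype(float)          # m x 3 sample matrix J
        mean = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)                          # sample covariance of the RGB values
        eigvals, eigvecs = np.linalg.eigh(cov)
        pc1 = eigvecs[:, np.argmax(eigvals)]                   # first principal component axis
        proj_samples = (X - mean) @ pc1
        a = proj_samples.mean() - z * proj_samples.std()
        b = proj_samples.mean() + z * proj_samples.std()
        proj_image = (image_rgb.reshape(-1, 3).astype(float) - mean) @ pc1
        alpha = ((proj_image >= a) & (proj_image <= b)).astype(float)
        return alpha.reshape(image_rgb.shape[:2])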
3.7 By 3.9 The structure tensor metric g ij is symmetric positive and ∧ is the largest eigenvalue of g ij .Thus , the BVnorm of u n is bounded.Thus, there is a subsequence, also denoted by {u n }, and u * ∈ BV Ω such that u n → u * strongly in L 1 Ω . Moreover, according to the formula u n → u * in L 1 Ω , we know that there is a subsequence, also denoted by {u n }, satisfying lim n → ∞ u n x u * x almost everywhere for x ∈ Ω.Since u n x ∈ 0, 1 for any x ∈ Ω, u * x ∈ 0, 1 almost everywhere for x ∈ Ω. Assume Mathematical Problems in Engineering Then That is, u n → u * strongly in L 1 Ω and u * x ∈ 0, 1 for every x ∈ Ω.Furthermore, by the lower semicontinuity for the g − BV space, we get 3.15 The formulas 3.14 -3.15 imply the weak lower semicontinuity of the energy functional Therefore, the infimum is attained by u * ∈ g − BV 0,1 Ω and it is a minimum of the energy functional E1. Similar to 7, 25 , we can obtain a property of minimizers to the energy functional 3.4 . Theorem 3.2. Let f i x i 1, 2, 3 , g x ∈ 0, 1 .For any given f i1 , f i2 (i 1, 2, 3), if u x is any minimum of the energy functional E1 defined in 3.4 , then for almost every μ ∈ 0, 1 , the characteristic function 1 Ω c 1 {x:u x >μ} is also a global minimum of the functional E1 where c is the boundary of the set Ω c . In addition, according to the above theorem, we know Ω c {x : u x > μ} is a minimizer of the following minimization problem 3.17 Numerical Implementation In the numerical algorithm, we do not deal with the new variational model 3.1 directly since too many equations need to be computed.In order to improve computational efficiency, we use the algorithm framework of the paper 24 to deal with our model.That is, we minimize the energy functional E by alternating the following steps. 1 Considering u fixed, compute f i1 and f i2 i 1, 2, 3 by using the formula 3.3 . 2 Considering f i1 and f i2 i 1, 2, 3 fixed, update u by using the iterative schemes of the minimization problem 3.4 .When a steady state is found, the final segmentation is obtained by thresholding u at any level in 0, 1 in our experiments, we choose μ 0.5 . In the following, we will give the iterative schemes of the minimization problem 3.4 .To solve this minimization problem, we firstly change it into the following unconstrained minimization problem: where ν u max{0, 2|u − 0.5| − 1} is an exact penalty function provided that the constant ζ is chosen large enough.According to the paper 7 , it is known that this unconstrained minimization problem has the same set of minimizers as the minimization problem 3.4 .The energy functional E2 is convex, so it doesn't possess local minimizers.Hence, any minimizer of E2 is global. Based on 26-28 , a convex regularization is used where θ is chosen to be small enough so that we almost have u v. Thus, we consider the iterative schemes of the above weak approximation as the iterative schemes of step 2 .Since this functional is convex, its minimum can be computed by minimizing this functional with respect to u and v separately.That is 2 u being fixed, we search for v as a solution of 3.21 According to 29 , we know the solution of 3.20 can be given by u v − θ div p, 3.22 where p p 1 , p 2 is given by The previous equation can be solved by the fixed point method p n 1 p n δt∇ div p n − v/θ 1 δt/g x ∇ div p n − v/θ , p 0 0, 0 . 
3.26 In this paper, our model is solved numerically by alternative minimization schemes and not by using the classical Euler-Lagrange equations method.When using Euler-Lagrange equations, one must consider Ω g x |∇u| 2 ε 2 dx instead of the weighted TV-norm, where the small parameter ε is necessary to prevent numerical instabilities.The direct result of this regularization parameter is the obligation to use a small temporal step to ensure a correct minimization process.Thus, a large number of iterations to reach the steady state solution is necessary.In other words, although it is correct, the segmentation process remains slow.In the algorithm of our paper, the iteration process 3.22 -3.24 is very similar to the standard Chambolle's projection algorithm.The only difference is the appearance of the factor g x .This iteration process has two important advantages: firstly, it can be implemented very fast since it uses many useful convex optimization tools.Secondly, it is more faithful to the continuous formulation of the energy since it does not use the additional artificial parameter ε.Therefore, our algorithm is fast and doesn't need so many iterations to reach the steady state.Moreover, the algorithm of our paper is numerically more efficient than classical level set methods since the level set methods need to initialize the active contour in a distance function and reinitialize it periodically during the evolution.In fact, the algorithm of our paper belongs to the algorithm framework mentioned in 24 .And there are many papers 7, 28 that have analyzed the complexity of this kind of algorithms.So we just analyze it simply in this paper.If more details are wanted, please see 7, 24, 28 . Experimental Results Our method can be implemented on many synthetic and natural images.First, we consider two different cases Figures 1 a and 2 a .Figure 1 a is an image containing three different objects.The blue rectangle is our desired object.Figure 2 a is not a simple image where the bird is our interested object.Figures 1 b , 2 b , 1 c , and 2 c display the final active contours and u got by using our new model, respectively.From these experimental results, we find our model can detect OOI, regardless of other objects. In the following, we compare our new model with the model 23 in Figures 3, 4, and 5. Figures 3 a , 4 a , and 5 a are the original images.Our purpose is to segment the plane in Figure 3 a , the yellow leaves in Figure 4 a and the green leaves of oranges in Figure 5 a .Figures 3 b , 4 b , 5 b , 3 c , 4 c , and 5 c denote the final active contours and u obtained by using our new model, respectively.According to these results, we see that our model can detect the boundaries of the desired objects very accurately.And these results are hard to achieve by using the model in 23 . 
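A minimal Python sketch of the alternating scheme described in the Numerical Implementation section above may be useful. The weighted dual (Chambolle-type) update for p follows the fixed-point formula quoted there; the fitting-function update uses the standard region-scalable-fitting form of Gaussian-weighted local averages, since the paper's Equation (3.3) is not reproduced in this extract, and the v-update of (3.21) and the penalty term are omitted. Discretization stencils and all names are illustrative assumptions, not the authors' code:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def grad(u):
        # forward differences with periodic wrap (illustrative stencil)
        gx = np.roll(u, -1, axis=1) - u
        gy = np.roll(u, -1, axis=0) - u
        return gx, gy

    def div(px, py):
        # discrete divergence adjoint to the forward-difference gradient above
        return (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))

    def fitting_functions(f, u, sigma):
        """Gaussian-weighted local averages inside/outside the region indicator u
        (assumed region-scalable-fitting form of the f_i1, f_i2 update)."""
        eps = 1e-8
        f1 = gaussian_filter(u * f, sigma) / (gaussian_filter(u, sigma) + eps)
        f2 = gaussian_filter((1 - u) * f, sigma) / (gaussian_filter(1 - u, sigma) + eps)
        return f1, f2

    def update_u(v, g, theta=0.1, dt=1.0 / 8, n_iter=50):
        """Weighted dual projection: u = v - theta*div(p), with the fixed-point
        update for p quoted in the Numerical Implementation section."""
        px = np.zeros_like(v)
        py = np.zeros_like(v)
        for _ in range(n_iter):
            gx, gy = grad(div(px, py) - v / theta)
            norm = np.sqrt(gx ** 2 + gy ** 2)
            px = (px + dt * gx) / (1 + (dt / g) * norm)
            py = (py + dt * gy) / (1 + (dt / g) * norm)
        return v - theta * div(px, py)

    # After convergence of the alternating steps, the segmentation is obtained by
    # thresholding u at mu = 0.5, as stated in the text.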
Conclusion

This paper describes a new variational model for segmenting desired objects in color images. This model is inspired by the GAC model, the region-scalable fitting model, the weighted bounded variation model and the active contour models based on the Mumford-Shah model. In order to segment objects of interest from color images, the energy functional in our model includes a discrimination function which determines whether an image pixel belongs to the desired objects or not. Compared with other active contour models, our new model can not only avoid the usual drawback in the level set approach but also detect the desired objects accurately. Our numerical results confirm the effectiveness of our algorithm. Moreover, we establish the existence of the minimum of the new energy functional and analyze its properties.

Figure 3: An example of an image segmentation. (a) The original image (our desired object is the plane). (b-c) The final active contour and u obtained by using our model (m = 7, δt = 1/8, λ i1 = λ i2 = 0.001 (i = 1, 2, 3), θ = 0.1). (d) The final active contour obtained by using the model in [23].

Figure 4: An example of an image segmentation. (a) The original image (our desired objects are the yellow leaves). (b-c) The final active contour and u obtained by using our model (m = 7, δt = 1/8, λ i1 = λ i2 = 0.001 (i = 1, 2, 3), θ = 0.1). (d) The final active contour obtained by using the model in [23].

Figure 5: An example of an image segmentation (our desired objects are the green leaves of the oranges; see the description in the text).
4,506.6
2010-06-28T00:00:00.000
[ "Computer Science" ]
Kant and Global Poverty: Guest Editors' Introduction to Special Issue
What duties do wealthy people have towards those living in poverty, in their own country or abroad? What duties do better-off people have towards those afflicted by life-threatening emergencies, such as starvation, epidemics and war? For anyone living in relative comfort, these questions are familiar, if uncomfortable. Two familiar answers are rooted in our duties to help others where we can. The more comforting answer sees these duties as individual requirements of charity. They are optional aspects of virtue, weak or supererogatory. A more uncomfortable view sees them as stringent duties to aid or rescue. 1 Kantian analyses of poverty agree that better-off people have duties to help others. But they also stress how limiting this focus can be. The moral concern of the better-off tends to figure poor people as needy recipients. A more adequate approach must recognise those living in poverty as agents and rights-holders. It must raise broader questions about the oppression and domination that exclude people from material resources. As Narayan (2000) puts it, poverty is also "powerlessness and voicelessness." Kant himself frames duties of beneficence as "imperfect." This is not because they are weak or unimportant. Rather, they do not directly guide action, at least in most contexts. The content of these duties depends greatly on someone's specific situation and the needs they can address. Their capacities to help depend on the means at their disposal -their relative wealth, of course, but also their power within political, economic and organisational structures. Kant captures this fact in the organisation of his last great work, the Metaphysics of Morals. 2 This distinguishes the domains of right and ethics. Except in extreme conditions of civil breakdown, most people's powers to act are structured by legal frameworks -the domain of right. Wealth and poverty exemplify this. Some people may have adequate or ample means, and may be able to help (some) others.
Those living in poverty are excluded from others' property, not just by walls and borders, but also by property rights and police forces. In other words, a Kantian approach to poverty cannot begin with the ethical, imperfect duties of the better-off. It must be oriented by the legal and political frameworks which govern people's access to vital resources and social institutions. Kant's own presentation may make this hard to see, and critics of inequality have often been sceptical of his approach. His account of right does not start with need or welfare. Instead, it is based on a formal account of external freedom: each person has the right to "independence from being compelled by another's choice" (6:237). The Metaphysics of Morals offers lengthy justifications of rights to property and contract, but says little about the rights of the poor or about states' duties to address poverty. It may seem, then, that a Kantian approach ultimately disappoints. His framework points to wider political issues behind familiar questions about duties to help. But in the end, it addresses poverty only in terms of "imperfect" duties. Those who benefit from existing systems of power and property must decide for themselves, how to weigh duties of beneficence against other commitments. Such a conclusion would be mistaken. Onora O'Neill (e.g. 2016) and Thomas Pogge (e.g. 2002) have long argued that much poverty owes to unjust structures. Hence there are Kantian duties of justice to challenge and reform these structures. More recently, Arthur Ripstein (2009: Ch. 9) has pressed a further claim. Poverty exposes people to domination by others; domination is the opposite of freedom. States therefore have duties to ensure everyone's material security. This duty does not reflect a benevolent concern for welfare, nor does it take a view about the fair distribution of resources. It arises because states' existence and powers are justified in terms of people's freedom: "independence from being compelled by another's choice." A Kantian approach also insists that people living in poverty are agents and rights-holders. No one denies that emergency aid may be important: thus Peter Singer's compelling example of the child who has fallen into a pond and risks drowning (1972). But widespread, on-going poverty is a different sort of emergency. Exploitative or exclusionary structures push people to the margins, leaving them without a secure place on this earth (Korsgaard 2018). Insofar as aid is an apt response to people's plight, it must support those persons' agency and autonomy, and avoid paternalism and self-congratulation. More broadly, unjust structures can only be changed collectively and on an institutional level (Allais 2015, Williams 2019. The articles in this special issue range broadly across these themes -from duties of beneficence to the injustices of poverty to the challenges of overcoming it. Sensen stresses that duties of aid can be very demanding; depending on the context, "imperfect" does not mean "optional." Sticker and Stohr both focus on the wrongs of indifference: we must not treat people as "mere things" (Sticker); we should be aware that widespread indifference has a self-perpetuating dynamic that fosters vice (Stohr). Problems of powerlessness and unfreedom are central to Pinzani's article as well as Mieth & Williams's. Injustice is central to Zylberman's discussion, while empowerment forms the core theme of Mosayebi's article. 
Igneski stresses the need to become a partner to collective action in order to challenge injustice, while Blöser stresses the importance of hope in challenging poverty and the injustices that it reflects. To introduce the articles in more detail, we group them as follows: New interpretations of Kant's Categorical Imperative Kant offers three main formulations of his supreme moral principle, the Categorical Imperative. Sensen focuses on the formula of universal law; Sticker focuses on the formula of humanity; Mieth & Williams on the formula of the realm of ends. Oliver Sensen argues against a familiar interpretation of the Formula of Universal Law: "act only in accordance with that maxim through which you can at the same time will that it become a universal law" (4:421). This is often taken to imply that duties of aid are imperfect and somehow optional. Against this, Sensen emphasises universal human ends such as self-preservation and developing one's capacities. Universalisation requires us to promote and protect these ends. In Sensen's view, mid-level principles enable this. These may be universal (e.g. "Do not murder"), or they may have cultural or conventional aspects (e.g. "Drive on the left"). These principles are morally obligatory insofar as they are necessary to protect universal human ends. Since self-preservation is such an end, some of these principles are bound to apply to situations where one person's life is in great peril. Depending on the urgency of the situation, how costly aid would be to a potential helper, and whether others could also help, Sensen argues that duties to aid can be stringent indeed. Martin Sticker focuses on the Formula of Humanity: "So act that you use humanity, in your own person as well as in the person of any other, always at the same time as an end, never merely as a means" (4:429). He agrees with Sensen that some failures to rescue are wrong: they represent culpable indifference to our fellow human beings. However, the Formula of Humanity does not readily accommodate this. Indifference to someone is not treating as a means: it does not see them as even instrumentally valuable. Sticker therefore suggests that we reframe indifference as treating as a mere thing -a non-person who has neither intrinsic value ("end-in-themselves") nor instrumental value ("(mere) means"). While many poor people suffer exploitation, others suffer from indifference, as do the victims of many emergencies. Sticker therefore contends that Kant's formula should read: So act that you use humanity, in your own person as well as in the person of any other, always at the same time as an end, never merely as a means, nor merely as a thing. In an earlier chapter, Corinna Mieth and Garrath Williams (2022) argued that poverty represents a profound violation of human dignity. Decent civic and economic relations affirm people's dignity as ends-in-themselves. In such conditions, people are able to act on their own terms as means for others. This idea reflects Kant's third formula: "a systematic union of rational beings through common objective laws [whose] purpose is precisely the relation of these beings to one another, as ends and means -[this] can be called a realm of ends" (4:433, our emphasis). By contrast, poverty involves powerlessness and exclusion from such "union." People may be exploited (that is, used as mere means) or deprived of meaningful social roles (that is, treated as useless). 
In their contribution here, Mieth & Williams consider the poverty and powerlessness affecting so many contemporary migrants, and present these as forms of exclusion that violate Kant's realm of ends. Echoing Hannah Arendt's account of statelessness and "superfluousness" (1951), they stress how Western states so often prohibit contemporary migrants from acting as means. In some cases, migrants are subject to paternalistic policies that discount their own agency; in many more cases, they are subject to active exclusion and hostility, depriving them of material resources, civic agency, legal protection, and worthwhile opportunities to act with and for others. Political dimensions The next three contributions take explicitly political approaches, focussing on rights to freedom and agency, and duties to join in collective action. Alessandro Pinzani approaches poverty in terms of states' duties to ensure the conditions of freedom, and more specifically in terms of proposals for a Universal Basic Income. The leading philosopher of UBI, Philippe Van Parijs, argues that paying an unconditional basic income to all citizens -regardless of their wealth or other income -secures "real freedom" (1995). Formal accounts of freedom, like Kant's, only provide for security and "self-ownership" (in the sense that no one else but I may control the use of my body). Van Parijs argues that this is not enough to support opportunities to develop and realise one's own plan of life. Pinzani rejects this characterisation. He argues that a UBI scheme may provide the best institutional realisation of Kant's formal ideal. UBI guarantees independence and avoids paternalism; it enables all citizens to participate in political life and to develop, as Kant puts it in the Doctrine of Virtue, their "moral perfection." In all these respects, it conforms to Kant's view of freedom: formal, but inextricably bound up with material factors. Reza Mosayebi frames extreme poverty as a denial of agency, and juridical empowerment as the corresponding remedy. The ability to meaningfully assert one's human rights enables political agency and contributes to self-respect. When civic and economic conditions disempower people, others must insist on their juridical standing and support their abilities to assert their human rights. Juridical empowerment aims to raise the consciousness of the impoverished about the causes of their poverty, and to increase awareness of human rights and the need for structural and legal reforms. Mosayebi brings several Kantian elements to bear on the poverty debate: the duty to self to maintain one's self-respect, the prohibition on using oneself as a mere means, and the duty to maintain one's rightful honour -in Kant's words, to assert "one's worth as a human being in relation to others" (6:236). Mosayebi contrasts the idea of juridical empowerment with a beneficiary-oriented view of poverty. The poor should not be viewed as passive recipients of aid or charity, but rather as juridical agents and rights-holders. Violetta Igneski considers a related problem: which actors bear duties to affirm the agency and rights of those living in poverty? Even relatively wealthy people may find themselves powerless to address institutional and structural problems; only collective action can really address these. Drawing on contemporary discussions of joint and collective action, Igneski therefore focuses on the duties of unstructured collectives, and on individual duties to join together with others to challenge poverty. 
She also emphasises how Kant's own ideas support such an approach. While Kant's legal theory emphasises the state as a collective actor, his moral writings repeatedly point to other collective activities -sometimes in quite practical form (the "public use of reason," the "visible" church), sometimes as ideals that should guide each person (as "citizens of the world," contributors to an "ethical commonwealth," or members of a "realm of ends"). Participation in these collective projects will mean very different things, depending on each person's situation and the forms of collective action in which they already participate. But to refuse such participation, in whatever forms may be possible, is to show indifference to profound injustice. Social dimensions of vice and virtue Three further contributions consider questions of vice and virtue, and show how they interact with social and political structures. Karen Stohr suggests that we can better understand duties towards the poor if we focus on richer people's indifference. This is not only a failure of beneficence or justice: it is a vice. This vice is insidious. It does not merely incentivise acting wrongly. It also gives rise to self-deception and complacency: a person sees her activity as justified when it is not. Moreover, vice has social dimensions. When indifference is widespread, social conditions encourage or "normalise" it. The vice of indifference is particularly dangerous when it comes to "imperfect duties" that allow for latitude and judgment. Judgments and attitudes will tend to be skewed, giving too much weight to personal and even selfish ends. Framing indifference as a vice enriches Kant's picture of moral failure, and helps us to envisage remedies. For example, we should see significant sacrifices for others' ends as exemplary rather than exceptional or (merely) supererogatory. This claim presents a challenge for another aspect of Kant's ethics, however. While Kant sees secrecy as a virtue in practising beneficence, Stohr suggests that contributions and sacrifices must be public, in order to counter the normalisation of indifference. Claudia Blöser investigates the role of hope, as people try to challenge and overcome poverty. Some development economists have argued that hope is essential to the agency and empowerment of people living in poverty. Programmes that try to aid and support the poor should enlist their hope "as a resource for sustaining agency in difficult circumstances." Without disputing this point, Blöser points out that it tends to individualise the issues. Like all our authors, Blöser sees poverty as an institutional and political problem. Kant's approach emphasises the need for hope to be rational. Hope becomes self-deceiving and illusory if I over-estimate my individual chances of earning more money or escaping an unjust situation. In this sense, hopelessness may be entirely rational, especially if civic conditions are corrupt or brutal. As such, efforts at change, and the hope involved in such efforts, must also address fundamental political structures. Ariel Zylberman explores the nature of the wrong done to those living in poverty, and how Kant's philosophy of right interacts with ethics in this domain. While some deny that poverty violates individual rights, Zylberman argues that poverty is incompatible with a person's "original" or "innate" right to freedom (6:237). In Zylberman's terms, this means that the wrong of poverty is relational. 
This is not, he argues, just a matter of person-to-person forms of mistreatment. It is also a matter of structural relations that are unequal, humiliating and subordinating. Alongside his expansive view of the original right to freedom, this relational understanding leads to a radical reconceptualization of duties concerning poverty. Against authors who see private beneficence as an adequate response to poverty, Zylberman argues that this is bound to be morally hazardous. The rights of the poor must correlate with duties of public right; private beneficence cannot uphold rights. When states fail to discharge their duties concerning poverty, however, better-off individuals still have duties. These duties should be reconceptualized. They are not private beneficence, as implied by the familiar questions raised at the start of this introduction. Instead, they represent private remedies for public failures.
Pseudo-Passive Time-of-Flight Imaging: Simultaneous Illumination, Communication, and 3D Sensing 3D time-of-flight (ToF) cameras have recently received a lot of attention due to their wide range of applications. Despite remarkable advancements in ToF imaging, state-of-the-art ToF cameras are still afflicted by the high power consumption of their illumination sources. To tackle this problem, we exploited existing lighting infrastructure, which ensures the ubiquitous presence of modulated light sources in indoor spaces that serve as opportunity illuminators. We explored the bistatic geometry for passive imaging using the pulse-based ToF approach. Our work is inspired by the recently introduced visible light communication (VLC) or light-fidelity (Li-Fi) infrastructure. VLC allows the infrastructure to provide indoor simultaneous illumination, communication, and sensing (SICS). To this end, we designed a bistatic geometry for the purpose of attaining passive 3D imaging. Such capabilities are achieved by exploiting the pulse shape of the autocorrelation function of real optical signals generated by VLC/Li-Fi modules (e.g., OpenVLC and LiFiMAX). We demonstrated passive imaging by means of matched filtering. In this work, we also studied different sampling strategies in the time-shift domain, including uniform, random, and sparse rulers, which is another step forward toward preserving high depth accuracy with a minimal number of measurements. The proposed methodology achieved successful depth reconstruction with negligible root-mean-square error (RMSE) even at low measurement signal-to-noise ratios (SNRs). Parametric models, such as Gaussian and sum of sines, are used to characterize the cross correlation functions and allow for robust parametric depth retrieval from a few measurements. Moreover, we attained a 20-mm worst case error for a target at 25 cm. The experiment proved that bistatic passive depth reconstruction is feasible. ToF cameras support applications such as gaming and human-machine interaction, previously reserved for LiDARs and stereo vision systems [1], [2], [3]. ToF is an active imaging technique that is able to produce both intensity and depth maps. A ToF camera computes the distance between the camera and the scene objects on a per-pixel basis by measuring the time that elapses between the projection of modulated light signals onto the scene and the arrival of the reflected photons at the camera; this time is proportional to the distance between the camera and each corresponding scene point. In recent years, rapid advances in solid-state technology have profoundly transformed the lighting infrastructure from conventional lamps (e.g., incandescent and halogen) to light-emitting diodes (LEDs). Thus, LEDs have become popular for displays and light sources because of their advantages such as long lifetime, small size, low cost, energy efficiency, and low switching transient [4]. The switching capability of LEDs enables the visible light channel to be modulated at frequencies high enough to be imperceptible to the human eye [5]. LED lighting has gained global recognition as a green lighting technology in recent years [6], [7]. LEDs are predicted to replace conventional lights eventually and become the ultimate light source for many applications [8]. This transition has transformed the lighting infrastructure into a novel communication paradigm.
This paradigm shift has accelerated the development of visible light communication (VLC). VLC is a promising mechanism and an accessible technology, driven by the ubiquitous proliferation of low-cost LED sources in indoor environments. LED-based VLC systems offer numerous advantages, including low-cost front ends, immunity to electromagnetic interference, high physical-layer security, and unregulated spectrum resources (400-900 THz) [9]. It is believed that VLC will play a significant role in fifth-generation (5G) and sixth-generation (6G) communication [10], [11]. This brings communication and lighting together within a single module. Therefore, the VLC infrastructure can provide multiple services simultaneously, such as communication, illumination, and now also ToF 3D imaging in indoor environments. In this context, the VLC infrastructure has laid down a solid foundation for pseudo-passive ToF sensing. In addition, VLC-enabled front ends from companies such as pureLiFi, Philips, and Oledcomm are accessible to the general public and have rapidly penetrated the commercial market, including homes, offices, and industrial buildings (see, e.g., http://purelifi.com/case-studies/). The commercialized VLC technology (IEEE 802.15.7) offers a data rate of 150 Mb/s, and research prototypes reach data rates of 1 Gb/s [12]. Light-based wireless communication (the so-called VLC) and ToF sensing systems have been developed independently in the two decades since the emergence of optical wireless communication (OWC). The passive imaging method we present transcends the communication-only or sensing-only philosophy and, alternatively, uses a VLC-enabled sensing approach based on the cooperation between VLC and ToF imaging. Thus, this study broadens the reach of VLC systems to synergistically support communication and sensing services. Active 3D imaging has been a vibrant research topic for many years [13], [14], [15]. Despite significant progress in ToF imaging, state-of-the-art ToF cameras are still susceptible to the high power consumption of their dedicated illumination units. Unfortunately, this problem remains unaddressed, and most of the recent progress is on the signal processing side. For example, in [16], the illumination system features 91 W of optical power for wide-area ToF imaging. This precludes the applicability of ToF imaging in specific scenarios. To overcome this problem, an alternative approach that uses an opportunity illuminator for sensing is proposed. This makes the built-in dedicated illumination unit redundant, thus enabling passive ToF 3D sensing. VLC has recently been used for passive ToF sensing without synchronization between the source and the ToF camera. This leads to an unknown depth offset in passive ToF imaging [17]. Nonetheless, this passive modality still needs refining to attain accurate depth recovery. By developing a bistatic sensing geometry and refining the ToF sensing pipeline for the passive imaging modality, we solved the synchronization problem by introducing a direct link between the emitter and a reference photodiode (RPD). In general, the bistatic configuration uses two parallel channels. One is the reference channel, which transmits signals between the VLC source and the reference PD to acquire an external reference signal for the photonic mixer device (PMD) camera. The other captures scene-related reflections. Informative measurements are achieved by cross-correlating the reference and reflected signals.
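The core of the bistatic idea can be illustrated with a short numerical sketch: a reference copy of the modulated light is cross-correlated with the scene reflection, and the lag of the correlation peak gives the extra propagation delay of the bounce path. The following Python/NumPy snippet is only illustrative; the waveform, sampling rate, attenuation, and delay values are assumptions for the example and are not taken from the hardware described in this article.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 1e9                         # sampling rate of the simulation (illustrative)
n = 4096
# Illustrative OOK-like reference waveform captured on the direct (reference) channel
bits = rng.integers(0, 2, n // 16)
reference = np.repeat(bits, 16).astype(float)

# Scene channel: a delayed, attenuated copy of the reference plus noise
true_delay = 87                  # delay in samples (illustrative)
reflected = 0.3 * np.roll(reference, true_delay) + 0.05 * rng.standard_normal(n)

# Cross-correlate the two channels; the lag of the peak estimates the extra delay
lags = np.arange(-n + 1, n)
xcorr = np.correlate(reflected, reference, mode="full")
est_delay = lags[np.argmax(xcorr)]

c = 3e8                          # speed of light (m/s)
extra_path = c * est_delay / fs  # path-length difference between bounce and direct link
print(true_delay, est_delay, extra_path)
```

In the actual system, the reference channel is provided by the RPD and the reflected channel by the PMD pixels, and the estimated delay still has to be mapped to depth through the bistatic geometry discussed later.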
This solution exploits the existing VLC infrastructure [18] to illuminate the scene with modulated light and synchronize the camera with an externally provided signal, allowing accurate depth reconstruction. To evaluate this alternative, we have used an OpenVLC1.3 module with a white LED and a LiFiMAX module with an infrared (IR) LED for our simulations [19], [20]. OpenVLC is a low-cost and open-source platform with a bandwidth of 1 MHz and supports a throughput of 400 kb/s. The LiFiMAX module can provide data rates of 40 and 100 Mb/s in uplink (UL) and downlink (DL), respectively. Moreover, in this work, the depth is reconstructed using different sampling methods in the time-shift domain. In an attempt to reduce the number of measurements, we compared different sampling approaches, both uniform and nonuniform. The signal processing community extensively relies on uniform sampling (US) in the communication and sensing domains, while nonuniform alternatives often remain unexplored, arguably except random sampling (RS). RS is used as a basis for compressive sensing to recover sparse signals from a few measurements [21]. In addition, for depth reconstruction, we used a matched filtering method. 1) Matched Filtering: A matched filter (MF) is a well-known signal processing technique for improving signal quality and estimating delays. The temporal shift is observed by correlating a known delayed signal (or template) with an unknown signal [22], [23], [24]. Matched filtering is often performed in an analog circuit that carries out a correlation operation and then finds the peaks, or in a digital circuit or computer that takes samples of the signal and calculates the discrete correlation function. The MF offers accurate results at a low computational cost, thus allowing for high frame rates. In this context, the sampling rate should be high enough. 6G technology promotes simultaneous communication and sensing. In this context, VLC infrastructure can be reused for multiple services. To the best of the authors' knowledge, this is the first work exploring VLC infrastructure as a drop-in replacement for the illumination unit in ToF depth cameras. The rest of this article is structured as follows. Section II provides a summary of related work in the area of pulse-based (PB) ToF and existing passive sensing methods. Section III is devoted to the system model. In Section IV, the depth recovery method and different sampling schemes are briefly discussed. The experimental setup is demonstrated and elaborated in Section V. Section VI provides the simulation performance of the system based on several sampling schemes and the first-ever passive-ToF 3D reconstruction. Finally, Section VII draws conclusions and proposes future lines of work. II. RELATED WORK OWC and ToF sensing have achieved unprecedented results independently in the recent past, but without mutual intersection. To date, no prior works have studied the intersection of both technologies for simultaneous communication and 3D sensing. Recently, the emergence of OWC variants, such as VLC, light-fidelity (Li-Fi), and free-space optical communication (FSOC), has brought interesting avenues for passive ToF imaging. These variants are frequently used for communications and illumination in indoor and outdoor settings. In parallel, ToF cameras have made great strides in depth reconstruction. In general, ToF cameras are divided into two operational modes, i.e., PB-ToF mode and continuous-wave (CW) ToF mode.
In the PB-ToF mode, the source emits a short pulse that illuminates the scene, and bounced-back signals are received by the ToF camera. ToF pixels perform an integration of the scene reflected light mixed with a demodulation control signal (DCS). The measurements are achieved by shifting the DCS with respect to the illumination control signal. In the CW mode, measurements are obtained for different phase shifts between the DCS and the illumination signal. The phased ToF camera was primarily pioneered by Schwarte [25]. His work paved the way for the success of ToF cameras in the computer vision community. Over the years, ToF imaging technology has become ubiquitous in a wide range of 3D imaging applications. The PMD camera is a leading-edge technology for CW-ToF cameras. These devices are able to extract depth from raw data following the phase stepping algorithm [26]. Such devices are frequently endorsed due to their mature processing pipeline and publicly accessible designs [27]. A related work outlined the fundamental operation of lock-in ToF cameras, their merits and shortcomings, the layout of ToF pixels, and a remedy to practical difficulties that appear when a PMD camera is being used in the presence of background light [26], [28], [29], [30]. Li-Fi was initially demonstrated by the German physicist Harald Haas. VLC is a subset of temporally structured lighting [31]. The challenging task is to encode the information in a lighting framework (one or multiple VLC sources). Zhang et al. [32] reported recent advancements in VLC hardware technology. They demonstrated that the blue LEDs and color converters attained 1485-and 470-MHz bandwidth, respectively. These advances not only boost the capacity of VLC channels but also facilitate passive ToF sensing. In 2019, VLC was used for ranging and vehicular communication. This work is not directly related, but the ToF technique is analogous [33]. The ToF sensing technique is a well-known problem: determining the distance from the reflection of a known signal. Recent research efforts have been devoted to the development of PB-ToF imaging systems. Typically, rectangular pulse shapes are used in PB-ToF sensors. It is challenging to generate perfectly square pulses since vertical rising or falling edges would require unlimited bandwidth. Sarbolandi et al. [34] used a PB-ToF camera (Hamamatsu area sensor S11963-01CR) featuring two electronic shutters, known as gates, which are used for the accumulation of photogenerated carriers from bounced-back photons. Both gates are triggered sequentially for a few nanoseconds. The distance to an object leads to a shift in the reflected signal compared to the emitted signal. Thereby, the time shift determines depth. Lang et al. [35] developed a PB-ToF technique for classifying materials based on their unique signatures. Wagner et al. [36] developed an interferometric technique for depth imaging that uses the inherent photon bunching signature of thermal light, which was initially demonstrated by Brown and Twiss [37], [38]. This work adds complexity to the system since the illumination signals should be conditioned before use. This method requires high-bandwidth detectors. In contrast to our work, other authors employed photometric stereo (PS) and aperture masking interferometry to perform passive imaging [39], [40], [41], [42]. Furthermore, the need for multiple sources and an appropriate footprint are fundamental problems in such techniques for scene reconstruction using low-coherence interferometry. 
The aforementioned passive ToF alternatives [36], [39] have significant drawbacks and are far from being practical. Differently, a VLC-enabled passive ToF system can use a single broadcast signal for multiple purposes, such as communication, illumination, and 3D sensing. This exploits the same spectrum and hardware resources, allowing it to play a significant role in the future wireless network industry. Unlike previous studies, we make use of the fortunate fact that VLC infrastructure is often found in homes, office settings, industrial zones, and vehicles, i.e., interesting application fields for ToF cameras [43]. We propose using the existing VLC infrastructure for illumination, turning the background light from a disturbance into a useful optical signal for the ToF camera. In light of existing literature, no prior works have shown passive ToF imaging by leveraging opportunity illuminators. This article demonstrates the VLC-enabled passive ToF system that can provide communication and 3D sensing for free in terms of additional power consumption. III. SYSTEM MODEL We consider a single-input-single-output (SISO) system that can emit signals aimed at both probing scene objects and enabling communication with DL users. Fig. 1 shows the detailed block diagram of our proposed VLC-enabled passive ToF sensing scheme. In this section, we first model the free-space optical channel, and later on, we define the mathematical model based on the communication and the passive ToF sensing perspectives. Consider a point-to-point VLC-based optical wireless link that broadcasts a DL signal toward the scene and the user. A bistatic setup is designed to synchronize the ToF camera. In this context, the reference signal for the PMD module can be obtained via a direct lineof-sight (LoS) link between the VLC source and the reference PD. The VLC source and the ToF camera should not be co-located in contrast to conventional ToF cameras. A. VLC Channel The underlying characteristics of a VLC channel are mainly dictated by the optical link configuration [44]. The most influential aspect of VLC communication performance is the quality of the communication channel. Kahn and Barry [45] demonstrated six different configuration cases for indoor VLC links based on the presence or absence of an LoS link between an optical transmitter (TX) and a receiver (RX). The degree of directionality and orientation between TX and RX determines the link configuration of indoor VLC systems. The direct LoS link is a dominant communication link configuration and is widely used in the literature [46]. The emitter beam and the field-of-view (FoV) of the RX define the optical transmission channel. In this case, the LoS channel facilitates obtaining a higher received light intensity (i.e., a higher signal-to-noise ratio (SNR) that can be used to achieve suitable data rates and a long communication range). Propagation of optical signals in an optical wireless channel [47] can be written as where P R denotes the received optical power, P T is the source transmitted signal power, G OTX is the gain of an optical TX, G ORX is the gain of the optical RX, and G D denotes the attenuation of light intensity over the propagation distance. A SL accounts for the other system-dependent losses due to the system design and the link misalignment configurations. Optical TXs (i.e., LEDs) are the core components of VLC communication. A Lambertian pattern models the intensity profile of LEDs in the spatial domain. 
In fact, the radiation pattern of a Lambertian source has a radially symmetrical profile and is controlled by its order of emission, m = −ln(2)/ln(cos(Φ1/2)). Here, Φ1/2 represents the half-power beamwidth of the TX. The existing literature on optical wireless channels has reported that the Lambertian radiation pattern is widely accepted by the VLC research community (see, e.g., [44], [48], and references therein). Considering the Lambertian TX beam aperture and the received light at the PD, the gain of the optical transceiver in the presence of free-space propagation losses can be expressed as in (2) [49], where Θ is the full-width transmit beam divergence angle, D_R denotes the optical RX aperture diameter, d_TRX is the distance between the TX and RX systems, and λ is the wavelength. Equation (1) can be rewritten as (3) by substituting the gain values. From (3), we can encapsulate the free-space propagation loss η, as given in (4). According to (4), the attenuation factor is also affected by the emitter's beamwidth, the RX's aperture diameter, and the propagation distance between TX and RX. A wider beamwidth leads to higher attenuation, and a larger aperture reduces the attenuation coefficient because the RX can collect more light. In practice, an LED generates a far-field pattern with a well-defined maximum-intensity region. For an LED with a Lambertian emission pattern, the angular intensity distribution is maximal at 0°, and the theoretical half-power beamwidth is Φ1/2 = 120°. The optical signal propagation loss (optical path loss) can be computed by making use of the LED beam divergence angle and the diameter of the PD aperture. B. VLC Communication Model VLC is a novel wireless communication technology supported by existing lighting infrastructure. Fig. 2 presents an indoor VLC link. The VLC infrastructure now serves as an opportunity illuminator in passive ToF sensing. We used two types of light-based wireless communication modules: OpenVLC1.3, which is a research platform, and the LiFiMAX module, which is a marketable product. The functionality of the modules is discussed in the sequel. 1) OpenVLC: This is one of the most common open-source and adaptable platforms available to the VLC research community. It emits a Manchester-coded on-off-keying (MC-OOK) modulated signal. Each bit is translated into two transition levels: a 0 bit is "HIGH" during the first half of the symbol period and a 1 bit is "HIGH" during the second half; otherwise, the signal is "LOW", as enunciated in (6). The transmitted data signal x_MC-OOK(t) for j ∈ {0, 1} can be formulated as in (5) [17], where v_j(t) denotes the pulse waveform and T is the period of each bit. Let us assume that x_MC-OOK(t) is the transmitted data stream. In reality, the emitted optical signal x_MC-OOK(t) does not exactly coincide with (5) due to the low-pass behavior of the LEDs. Therefore, our realistic theoretical model exploits the convolution between the emitted signal (5) and the LED impulse response function, h_LED(t) = e^(−t/τ)/τ, which is modeled as a first-order low-pass filter. This operation results in a probing signal for sensing and a transmitted signal for communication, expressed in (7) as p_TX^VLC(t) = (x_MC-OOK ∗ h_LED)(t) [46]. The optical signal propagates via the LoS channel model and is received by the DL RX (i.e., the PD). The PD translates the light intensity into an electrical output signal y_DL^PD(t).
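To make the probing-signal model concrete, the sketch below builds an MC-OOK bit stream and filters it with the first-order LED impulse response h_LED(t) = e^(−t/τ)/τ quoted above. The bit rate, time constant τ, and sampling rate are illustrative assumptions, not the parameters of the OpenVLC or LiFiMAX hardware.

```python
import numpy as np

rng = np.random.default_rng(1)

fs = 100e6                    # simulation sampling rate (illustrative)
bit_rate = 1e6                # on the order of the OpenVLC bandwidth (illustrative)
sps = int(fs / bit_rate)      # samples per bit
half = sps // 2

# MC-OOK mapping: a 0 bit is HIGH in the first half of the bit period,
# a 1 bit is HIGH in the second half, and LOW otherwise
zero_sym = np.r_[np.ones(half), np.zeros(sps - half)]
one_sym = np.r_[np.zeros(half), np.ones(sps - half)]
bits = rng.integers(0, 2, 64)
x_mc_ook = np.concatenate([one_sym if b else zero_sym for b in bits])

# First-order low-pass LED model: h_LED(t) = exp(-t/tau)/tau
tau = 50e-9                   # illustrative LED time constant
t = np.arange(0, 10 * tau, 1 / fs)
h_led = np.exp(-t / tau) / tau
h_led /= h_led.sum()          # unit-gain discrete impulse response

# Probing signal p_TX^VLC(t) = (x_MC-OOK * h_LED)(t): square edges become rounded
p_vlc_tx = np.convolve(x_mc_ook, h_led)[: len(x_mc_ook)]
print(p_vlc_tx[:5])
```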
The probing signal is further encapsulated as the received DL signal by performing the convolution between the probing signal (7) and the LoS channel response given in (9). This can be mathematically represented as [46] y PD where * denotes the convolution operator and n(t) is the independent additive white Gaussian noise (AWGN) with zero mean and unit-variance. Moreover, the LoS channel response is a time-shifted and scaled delta function that signifies the amplitude drop and delay of the transmitted data signal. As a result, an optical path loss becomes a significant parameter for characterizing the LED illumination potential. Indoor VLC channels may contemplate the effect of both the LoS and the non-LoS components at the RX end. LoS seems to be the most frequent method used for indoor OWC and illumination settings. In line with [50] and [51], the LoS channel resides between the optical TX and RX. In this work, we ignored the non-LoS components of the optical link without sacrificing generality and considered a single LoS channel between TX and RX. The LoS channel impulse response (CIR) can be written as where η is the attenuation coefficient that is introduced due to the optical channel propagation losses (path loss), t is the propagation time (delay offset of the LoS path) that the optical signal undergoes in free space, and δ(t) is the Dirac delta function. 2) LiFiMAX: Li-Fi is a wireless networking technology that has recently been developed for commercial applications as a fully networked device. We used the LiFiMAX system in this work. It can be easily installed on the ceilings of a conference room or workplace and it provides network access to any device equipped with a plug-and-play LiFiMAX dongle. This enables Internet connectivity for 16 users within 28-m 2 cell size with throughputs of 60 and 40 Mb/s in the DL and UL respectively. LiFiMAX uses carrierless amplitude and phase (CAP) modulation, one of the efficient spectral methods to overcome modulation bandwidth challenges in VLC. CAP modulation has seen extensive research interest in VLC applications as a result of its excellent spectral efficiency as well as its simplicity [52]. The transmitted signal can be expressed by where a n and b n are the real and imaginary parts of the nth symbol, respectively, T is the time period, and n is the symbol index. The pulse-generating orthogonal filters are given as follows: where g(t) is the root raised cosine filter (RRC), ω c is the angular frequency of the filter, and T is the symbol time period. CAP is a variant modulation scheme of quadrature amplitude modulation (QAM) signals for Li-Fi communication [53]. The received signal is further demodulated and decoded to retrieve the original data stream. Note that the UL VLC signals are not considered in this case. For the sensing purpose, the received signal at the reference PD is an analog signal that needs to be converted to the binary signal for the synchronization of the ToF camera. C. Passive ToF Sensing Model In the bistatic passive ToF paradigm, opportunity illumination sources (i.e., VLC sources) illuminate the scene, and the ToF camera acquires the reflected signal. We studied a PB-ToF method based on VLC-modulated light signals. The VLC signal interacts with the scene, and the observed ToF signal is affected by the scene response function (SRF), defined in (12), denoted by h s (t), where s represents the scene. Single reflection K = 1 is often considered in the general setting of the ToF camera. 
In the most general case, (K ≥ 1), a single ToF pixel may receive multiple reflections from the scene. In this context, the SRF is given by the expression [54] h (12) where δ(t) denotes the Dirac delta function and are the reflective components of the targets and corresponding delays that are introduced by K backscattered light paths [14]. The reflected signal is obtained by performing the convolution between the probing signal and the SRF, i.e., r (t) = ( p VLC TX * h s )(t) and this follows (7). The reflected signal is given by [55] where the SRF is the shift-invariant response function. The conventional CW-ToF method uses the internal signal generated by the ToF system as DCS [14], [56]. Differently, in this work, we exploited the thresholded version of the RPD signal, y RPD (t), as the DCS. This is a binary signal used to synchronize the ToF camera. The received signal is sampled at every bit duration T b . In that manner, we employed a thresholding scheme that compares the received signal amplitude to the threshold voltage V th ; this allows us to make a decision and generate logical levels low and high. The signal prior to thresholding is denoted as the RPD signal, y RPD (t), and the thresholding operation can be written as follows: where A is the required amplitude level of the ToF reference signal. Let us first explain the measurement gathering process to gain insights into how our proposed method processes the acquired data. During the acquisition time interval, the VLC emitter transmits signals continuously, which is long enough to capture the reflected light signals. We can work with a fixed time window of size τ , which is facilitated by the main lobe of the autocorrelation of VLC signals. This allows us to use a given range of samples in simulation and hardware experiments and accounts for the finite exposure time of the ToF camera. The cross correlation between (13) and (14) provides continuous-time measurements m(t) as given in (15). The measurements are obtained by shifting the control signal within a given range of delays, related to the depth range to cover. This yields samples of the cross correlation function where τ is the shift component. The discrete measurements can be written as m[i ] = m(t)| t =iT,i∈Z , where 0 ≤ t < T is the sampling range. IV. DEPTH RECOVERY APPROACH The focus of this section is now shifted to depth reconstruction using matched filtering. Now, our discussion is linked to our numerical experiments, how we built our simulations from real optical signals and attempted to invert the model. Later, we will use real measurements from PMD pixels. The starting point is the generation of measurements in order to emulate our numerical experiments. We briefly discuss the measurement generation process to perform matched filtering for depth recovery. The GT cross correlation function is shifted and downsampled by a shift operator, S q : R N → R N and a downsampling operator, D q : respectively. This yields a measurement vector that can be written as follows: where u τ = Nτ/T and the spacing between the samples is defined as R = N/Q. For the number of measurements acquired according to the considered cases, we adhered to the standard number of measurements, namely, Q ∈ {4, 8, 16}, while N Q and Q is a different representation of M as measurements. The MF correlates the test vector obtained from the reference cross correlation vector and the measurement vector [22], [57]. 
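A minimal sketch of this matched-filtering step is given below. The reference cross-correlation lobe is modelled here as a Gaussian (consistent with the parametric modelling reported later), and the grid size, lobe width, number of measurements Q, and noise level are illustrative assumptions. The estimated shift is the candidate delay whose shifted-and-downsampled test vector has the largest inner product with the measurement vector.

```python
import numpy as np

rng = np.random.default_rng(2)

N = 2048                              # fine grid in the time-shift domain
Q = 8                                 # number of measurements (cf. Q in {4, 8, 16})
grid = np.arange(N)

def lobe(center, width=200.0):
    """Illustrative Gaussian-shaped cross-correlation main lobe."""
    return np.exp(-0.5 * ((grid - center) / width) ** 2)

reference = lobe(N // 2)              # calibrated reference cross-correlation function

def shift_and_downsample(vec, u_tau, q):
    """Shift operator followed by uniform downsampling to q samples."""
    shifted = np.roll(vec, u_tau)
    idx = np.linspace(0, len(vec) - 1, q).astype(int)
    return shifted[idx]

# Noisy measurement vector for an unknown shift of the control signal
true_shift = 310
m = shift_and_downsample(reference, true_shift, Q) + 0.01 * rng.standard_normal(Q)

# Matched filter: keep the candidate shift whose test vector best matches the measurements
scores = [np.dot(m, shift_and_downsample(reference, tau, Q)) for tau in range(N)]
tau_hat = int(np.argmax(scores))
print(true_shift, tau_hat)            # tau_hat should land on (or near) the true shift
```

The recovered shift index would then be converted to a physical delay and, through the bistatic relation described in Section IV-A, to the true camera-to-target depth.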
To demonstrate the correct operation of MF, AWGN is added to the measurements for our simulation, which is a common channel model for noise. The SNR ranges from −30 to 100 dB for attaining measurement vectors. The reference cross correlation is generated in simulations and calibrated in experiments. We obtained the test vector by applying the same sequence of shifting and downsampling operations, but for a candidate delay to test, τ , so that a vector of the same dimension as the measurement vector is obtained that is a function of τ . The test vector can be generated as follows: The estimated time delay (18) is provided by determining the argument that maximizes the MF function, and from this, the depth estimation can be carried out as given in (19) where ., . denotes the standard inner product of vectors. The proposed methodology allows depth reconstruction, depending on various measurement SNR values. The translation of time shift into the distance is a simple linear computation Different sampling schemes are used in the time-shift domain. The depth range is given by the width of the main lobe of the cross correlation function. We sampled the cross correlation function according to uniform, random, and sparse ruler [58], [59] sampling schemes. A. Bistatic Geometry Bistatic ToF imaging is a new line of research; bistatic configurations of emitters and RXs have not yet been explored in 3D ToF imaging. In the passive modality setting, the estimated depth defines a 3D ellipsoid where the target point may lie due to the bistatic geometry. The foci of the ellipsoid are the VLC emitter and the ToF RX. Provided that we know the observation direction of each pixel of the ToF array due to lens calibration, we can compute the intersection between the ellipsoid and the direction vector. This defines the 3D location of the target point, thus retrieving the accurate depth between the camera and the target. The 3D location of the target can be written as follows: where T is the target position, R is the position of the RX, u R is the unit vector, and d RT is the distance between the RX and the target. The 3D ellipsoid is defined by where d ET is the Euclidean distance between the emitter and the target and d RT defines the Euclidean distance between the RX and the target. The depth offset, d o , is associated with the used cables and electronics. It is not related to the scene geometry. The complete proof takes profit from the lens calibration and uses the observation vector u R to attain the true depth. The proof of the final depth for the bistatic geometry is provided in the Appendix I and can be expressed as where d ER = E − R 2 2 and C x , C y , and C z are defined in the Appendix I. d is the total distance that is obtained by (19). u x , u Ry , and u Rz are the observation vector components in the x-, y-, and z-directions. The demonstrated results in Section VI-E are carried out by applying (19) first and this estimated depth is further used in (22) to obtain the true depth between the RX and the target. B. Generalized Sampling in Time-Shift Domain Classical sampling theory, based on the Shannon-Nyquist theorem, assumes US, but related work has shown that nonuniform sampling schemes may reduce the required number of samples without compromising the signal reconstruction quality [60]. We analyze three different sampling schemes in the time-shift domain: US, pseudorandom sampling, and sparse-ruler-based sampling. 
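Before the individual schemes are detailed below, the following sketch shows how the three kinds of sample locations might be laid out on a fine grid in the time-shift domain. The grid size and number of measurements are illustrative; the ruler marks correspond to the length-23 sparse ruler used as an example later in this subsection.

```python
import numpy as np

rng = np.random.default_rng(3)

N = 1024          # fine-grid resolution in the time-shift domain (illustrative)
Q = 8             # number of measurements

# Uniform sampling: fixed spacing over the shift range
uniform_idx = np.linspace(0, N - 1, Q).astype(int)

# Random sampling: Q locations drawn without replacement from the fine grid
random_idx = np.sort(rng.choice(N, size=Q, replace=False))

# Sparse-ruler sampling: marks of the length-23 ruler {0, 1, 2, 11, 15, 18, 21, 23},
# rescaled to the fine grid; off-grid positions are rounded to the nearest grid point
ruler = np.array([0, 1, 2, 11, 15, 18, 21, 23])
sparse_idx = np.round(ruler / ruler[-1] * (N - 1)).astype(int)

print(uniform_idx)
print(random_idx)
print(sparse_idx)
```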
1) Uniform Sampling: Reconstruction of a signal from a discrete number of sampling points, where the signal is sampled kT uniformly at fixed-time intervals, is known as US, where T is the time interval and is probably the most widely spread sampling technique. In an arbitrarily fine discrete domain, the samples would be located at the positions k ∈ {0, s, 2s, 3s, . . .}, where s is a discrete version of T . Fig. 3(a) shows US with fixed time intervals. Provided that our signals of interest are ultimately K -sparse, with K = 1 in the single-bounce case, Shannon sampling yields redundant and unnecessary samples, raising the computation complexity of subsequent processing. To overcome this limitation, one can make use of nonuniform sampling [61]. 2) Nonuniform Sampling: It deals with sets of sampling points that are not uniformly distributed over the sampling domain. Different methods can be used to define the location of the sampling points, which may be random or deterministic. In this work, we analyze the depth reconstruction and performance obtained by employing random [58], [62] and sparse ruler (see [59] and references therein) sampling on the time-shift domain. The current upsurge of interest in sampling signals at rates lower than their Nyquist rate can be credited to compressive sampling (also known as compressed sensing) [63], [64], which has fueled a substantial amount of research over the last several years, including in the ToF 3D imaging community [1]. Conclusively, we consider sparse measurements in our simulation settings. Theoretically, a sparse ruler is one that misses some marks, but it can still measure all integer distances between 0 and the ruler's length L. For example, a sparse ruler with the length L = 23 and M = 8 marks would be {0, 1, 2, 11, 15, 18, 21, 23}. The cardinality of the set M determines the number of measurements, and the value of the marks denotes the distance between each measurement sample and the reference sample. We have exploited an optimal sparse ruler of type Wichmann W (2,5). Note that, since all markers are set on a discrete grid (like a sequence), the distances between them are always expressed in terms of integers rather than time-shift units. 3) On-Grid and Off-Grid Sampling: The sampling schemes considered in this work rely on an underlying regular grid. The regular or fine gird sampling is governed by the hardware (i.e., oscilloscope). It is assumed that the sample location fits within the fine grid's resolution. For US and RS methods, on-grid samples can always be generated, regardless of the step size of the fine grid. In the context of the sparse-ruler sampling (SRS) method, samples may not coincide with the on-grid locations. In this case, the signal is further downsampled to match the fine grid resolution required by the sparse ruler. This results in a grid mismatch or locations that are off-grid [65]. One has to deal with the grid mismatch between the fine grid and the sparse-ruler-based sampling grid (off-grid). To this end, the interpolation method can be used to predict the samples located between the fine-grid GT samples and form a smooth curve between on-grid and off-grid sample locations. V. EXPERIMENTAL SETUP In this section, the focus is on designing a bistatic setup for passive ToF imaging. We provide the details of the hardware components that are used in the setup. The bistatic geometry of the VLC-enabled passive ToF setup is shown in Fig. 4(a). Our proposed experimental setup is shown in Fig. 4(b). 
The simulations and experiments are developed based on our proposed pipeline (cf. Fig. 1). We used an evaluation board containing a PMD camera module endowed with built-in external reference capabilities to control the PMD pixels. This module needs binary signals with 0 V as low and 1.8 V as high voltage levels of the external reference signal. In this case, we employed a thresholding operation that emulates the thresholding circuit to digitize the analog signal for external referencing. This circuit should come before the evaluation board. For the thresholding operation in our experimental settings, we used a pulse generator (HP 8082A) that carries out the thresholding operation. The key parameters of ToF setup and evaluation board are given in Tables I and II. VLC communication experiments were carried out in dark and bright room conditions. ToF sensing tests were conducted in the darkroom settings. In the experimental setup, we used an optical rail to move the emitter and RX easily. The rail width is our x, the height of the emitter/RX is y, and the depth is denoted by z. The length of the rail is 1 m. Here, we have provided a procedure for the proposed method and the process for acquiring data as follows. 2) The RPD is placed at the top of the ToF sensor. It is responsible for obtaining the optical signal for synchronizing the ToF camera. This establishes a direct link between the emitter and the RPD. 3) The RPD signal is inserted into the thresholding circuit, converting the analog signal to the digital signal. Furthermore, the thresholded signals are fed to the picosecond delayer (PSD), enabling custom delays. In our case, we selected our range of scenarios based on the range offered by the PSD. 4) The output signal of the PSD is launched into the evaluation board after the necessary voltage-level adaptation. 5) On the flip side, the reflections are acquired by the PMD camera. The ToF camera is mounted on the optical rail at the position of R = (7, 21.5, 60) cm. The evaluation board has a reference mixer, which takes the reflected signal and mixes it with the external reference signal. Assume that the signal, which is fed into the sensor board, is a thresholded representation of the VLC signal. This yields a cross correlation operation, and by delaying the signal, we can obtain measurements. 6) The target is placed at a distance of 25 cm from the ToF evaluation board. In our case, we used a MiraVera PMD version that features a high-resolution of 640 × 240 pixels. The Boehler star is used as a target in 3D. This is useful for evaluating the camera's real lateral resolution. VI. RESULTS AND DISCUSSION A. Evaluation of OpenVLC and LiFiMAX Modules For evaluating the performance of both exploited modules, we obtained optical signals using a Thorlabs PD (PDA 10A-EC) via an Agilent Oscilloscope (MDO4104-6). The optical signals can be seen in Fig. 5(a) and (b). The random data signals are generated from both modules (OpenVLC and LiFiMAX). These signals are used to emulate the passive modality. The acquired data signals are thresholded by following (14) with respect to their local average. B. Depth Reconstruction Error in the Presence of Noise A number of simulations are carried out using real optical signals of both modules. We exploited the two different ranges of each of the modules. 
For the OpenVLC module, which has the largest time period, the range can be covered up to 150 m, but we restricted simulations to a range of 6.72 m because of our PSD, which can provide shifts only up to the considered range. In this section, we group our results based on different sampling approaches, according to Section IV-B, and the number of measurements. Synthetic ToF data measurements were generated and modeled as pulse-shaped cross correlation samples, as reported in Section VI-A. The shape is approximately Gaussian with a standard deviation of 44.8 ns for OpenVLC and 10 ns for LiFiMAX. The measurements are generated via (15) by following the considered sampling methods in the time-shift domain. We took into account 40 realizations for each approach used. Four samples of the correlation function are often used for phase computation in consumer devices, such as PMD cameras. Without sacrificing generality, samples are studied using a single-echo case in which the cross correlation function range has been partitioned into several measurement samples. The depth reconstruction results of this study are shown in box plots as shown in Fig. 6 for the OpenVLC module. We utilized root-mean-square error (RMSE) as our performance metric to evaluate the depth reconstruction performance of our method, i.e., where N is the number of independent depth scenarios,d is the estimated depth, and d GT is the ground-truth depth. The RMSE is considered in logarithmic scale RMSE[dB] = 10 log 10 (RMSE). The measurements SNR is controlled ranging from −30 to 100 dB. The matched filtering results are restricted up to 50 dB since a plateau at a negligible error was attained, as presented in Fig. 6. Several depth tests are carried out for a number of considered ranges. When the SNR is greater than 0 dB, a matched filtering method attained a negligible error. A more comprehensive analysis showed few outliers in all sampling schemes. One may observe a large error for SRS due to the combined effect of noise and the off-grid phenomenon due to a mismatch between the fine grid and the grid required by the sparse ruler. The matching filtering approach performs well, with US and RS. The MF performs better since the transition from failure to successful reconstruction happens at a much lower measurement SNR. Fig. 7 presents the depth reconstruction error for matched filtering using the LiFiMAX module. The same procedure is conducted for the Li-Fi module. We attained a −90-dB RMSE value for uniform, −95 dB for random, and almost −80 dB for sparse ruler. It is observed here that sparse-ruler depth reconstruction error has a minimal difference in results with respect to the other two techniques. Our results demonstrate how depth retrieval from samples gained according to different sampling schemes performed in noisy environments. Assume that we have noisy measurements, and the noise might be from the ToF sensors or the surroundings. We attempted to provide the results of the measurement SNR and the estimated depth SNR. The effects of measurement SNR on the estimated depth SNR were studied for both the OpenVLC and LiFiMAX modules. We provide the OpenVLC SNR results here. The estimated SNR becomes constant after 0-dB SNR in matched filtering, as shown in Fig. 8. C. Parametric Modeling The measurements are samples of the cross correlation model in (15). The data points were constrained to a range that meets the range of our PSD. In the time-shift domain, a window size of about 45 ns was taken into consideration. 
The cross correlation was obtained from the full range. The cross correlation is approximated as a sum of Gaussian functions in (24) and a sum of sines model in (25) for the OpenVLC. The sum of sines and Fourier models in (26) were found to be best in describing the LiFiMAX cross correlation function where t is the time scale of the cross correlation function, a i , b i , c i , and ω are the model parameters, and n = 2. The fit models for the considered range cases are shown in Fig. 9 against the points used for fitting. Plots of Fig. 9 demonstrate an excellent match with the cross correlation functions, with R 2 = 0.9999 in almost all cases considered. Fig. 10 shows the depth reconstruction using the Gaussian model for the OpenVLC module and the sum of sines model for the LiFiMAX cross correlation function. We attained successful depth reconstruction up to constant for both modules. D. Performance Evaluation of VLC Communication This section focuses on the communication-oriented system performance, considering both modules. Here, we concentrate our analysis on the OpenVLC module due to the fact that it is a research-based module, while the LiFiMAX module is a commercial product. We evaluated the OpenVLC module performance in terms of SNR and theoretical bit error rate (BER) as a function of distance. These metrics are used to examine the MC-OOK VLC link for indoor communication. The BER is given by BER = Q f ((SNR) 1/2 ), where Q f is a quality factor that can be defined as Q f = erfc((SNR/(2) 1/2 )). Fig. 11(a) represents the throughput of the OpenVLC link as a function of distance. It can be seen that this can provide illumination without affecting the communication data rate. Fig. 11(b) shows the measured SNR and theoretical BER of an indoor VLC channel using the ThorLabs PD and the Agilent oscilloscope. The eye diagrams are measured after downsampling to one sample per bit. The MC-OOK modulation is used and an eye diagram is generated by making use of OpenVLC random data signals. The eye-diagram settings are enabled in the oscilloscope to see the eye patterns. It is possible to define the quality of a transmission network using eye diagrams. The SNR can be computed as follows: First insights can be gained by analysis of the eye diagram's vertical and horizontal characteristics. We choose to measure the vertical amplitude and horizontal eye openings, as shown in Fig. 12. E. Preliminary 3D Imaging Results This experiment aims to demonstrate that the concept of passive depth recovery works in practice. To this end, we used only four uniformly distributed samples for depth reconstruction. The results are shown in Fig. 13. Fig. 13(a) shows the so-called Boehler star [66] used as a target, which is a 3D representation of the Siemens star used to determine the spatial resolution of depth sensors. The VLC module is used to acquire raw images with custom delays, and the exposure time is maintained at 5 ms. The depth can be obtained via (22) using the lens normals. In the experiments, we recorded the measurements for the reference function, #» Y GT , in the range provided by our PSD. One can cover the complete range of the main lobe of the cross correlation function. A plane is used to acquire the reference function data. Then, MF is used to retrieve the depth from the measurements by leveraging the recorded reference function. The 3D reconstruction in Fig. 
13 is the first one ever obtained from a ToF camera without a dedicated illumination system and validates the proposed passive ToF imaging concept. VII. CONCLUSION AND FUTURE OUTLOOK In this work, we proposed a novel VLC-enabled passive imaging pipeline that allows depth estimation up to machine precision in simulations. This study opens up new possibilities for simultaneous communication and 3D sensing systems by exploiting different sampling schemes in the timeshift domain. The validity of the proposed method has been demonstrated using matched filtering, resulting in a depth reconstruction accuracy of 95% at suitable SNR levels for both modules. The key advantage of our work is a drop-in replacement of the classical ToF illumination units by existing, uncontrolled VLC sources. This method drastically reduces the power consumption of ToF cameras and eliminates temperature drift effects from measurements that are generated by the light source [67]. We have also shown that this method performs well with noisy measurement data in simulations. Low-complexity parametric models, including Gaussian and the sum of sines, have been proposed to characterize the cross correlation functions of both modules. Extremely low fitting errors confirmed the validity of the proposed models. Hardware experiments validate our proposed methodology by attaining the worst case depth error of 20 mm at a 25-cm target distance. In addition, we acknowledged that the depth accuracy depends on the VLC source bandwidth. The communication signals are not optimized for sensing purposes. The design of modulation waveforms that are jointly optimized for both purposes is a subject of future work. On the theoretical front, we have provided a complete formulation of the direct sensing model and the method we use for solving the inverse problem. Our method raises interesting considerations regarding the hardware. We intend to explore our hardware implementation further in the future. (22) Remember that (22) is derived from (20) and (21) since the emitter and the RX locations are known. In this regard, (21) is further rearranged and represented in the Euclidean distance between the emitter and the target. By omitting d o , this can be expressed as follows: (29) where E = (E x , E y , E z ) and T = (T x , T y , T z ) are 3D locations of the emitter and the target, respectively. Equation (21) is broken down into 3D coordinate components and it can be defined as T x = R x + u Rx d RT T y = R y + u Ry d RT T z = R z + u Rz d RT (30) where R = (R x , R y , R z ) are the 3D coordinates of the RX and u R = (u Rz , y Ry , u Rz ) are the components of observation vector. We substitute (30) in (29) and, after carrying out some re-arrangements and manipulation, we get d 2 − 2dd RT + d 2 RT = C 2 x + C 2 y + C 2 z +d 2 RT u 2 Rx + u 2 Ry + u 2 Rz −2d RT C x u Rx + C y u Ry + C z u Rz (31) where d 2 ER = C 2 x + C 2 y + C 2 z is the Euclidean distance between emitter and RX, where C x = E x − R x , C y = E y − R y , and C z = E z − R z are the differences of the x-, y-, and z-coordinates of emitter and RX, respectively. At this point, it is noted that the sum of normal vector components is equal to unity, (u Rx 2 + u Ry 2 + u Rz 2 = 1). By substituting this in (31) and making some rearrangements, we obtain the bistatic depth recovery formulation as given in (22).
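As a small numerical check of the Appendix derivation, the snippet below recovers the receiver-to-target distance d_RT from the total path length d, the emitter and receiver positions, and the pixel's observation vector, using the relation d_RT = (d² − d_ER²) / (2(d − C·u_R)) that follows from (31). The emitter position and observation vector are illustrative assumptions; the receiver position and the 25-cm target distance echo the experimental setup.

```python
import numpy as np

# Illustrative bistatic geometry (metres); the receiver position and the 25-cm target
# distance mirror the experimental setup, the emitter and viewing direction are assumed
E = np.array([0.00, 0.30, 0.00])      # VLC emitter
R = np.array([0.07, 0.215, 0.60])     # ToF receiver (camera)
u_R = np.array([0.0, 0.0, -1.0])      # unit observation vector of the considered pixel

# Simulate the total emitter->target->receiver path length d for a known target
d_RT_true = 0.25
T = R + u_R * d_RT_true
d_total = np.linalg.norm(E - T) + d_RT_true

# Closed-form recovery from the Appendix: d^2 - 2*d*d_RT = d_ER^2 - 2*d_RT*(C . u_R)
C = E - R
d_ER = np.linalg.norm(C)
d_RT = (d_total**2 - d_ER**2) / (2.0 * (d_total - np.dot(C, u_R)))

print(d_RT_true, d_RT)                # the recovered distance matches the ground truth
```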
11,109
2022-11-01T00:00:00.000
[ "Engineering", "Physics" ]
The prognostic significance of tumour cell detection in the peripheral blood versus the bone marrow in 733 early-stage breast cancer patients Introduction The detection of circulating tumour cells (CTCs) in the peripheral blood and disseminated tumour cells (DTCs) in the bone marrow are promising prognostic tools for risk stratification in early breast cancer. There is, however, a need for further validation of these techniques in larger patient cohorts with adequate follow-up periods. Methods We assayed CTCs and DTCs at primary surgery in 733 stage I or II breast cancer patients with a median follow-up time of 7.6 years. CTCs were detected in samples of peripheral blood mononuclear cells previously stored in liquid-nitrogen using a previously-developed multi-marker quantitative PCR (QPCR)-based assay. DTCs were detected in bone marrow samples by immunocytochemical analysis using anti-cytokeratin antibodies. Results CTCs were detected in 7.9% of patients, while DTCs were found in 11.7%. Both CTC and DTC positivity predicted poor metastasis-free survival (MFS) and breast cancer-specific survival (BCSS); MFS hazard ratio (HR) = 2.4 (P < 0.001)/1.9 (P = 0.006), and BCSS HR = 2.5 (P < 0.001)/2.3 (P = 0.01), for CTC/DTC status, respectively). Multivariate analyses demonstrated that CTC status was an independent prognostic variable for both MFS and BCSS. CTC status also identified a subset of patients with significantly poorer outcome among low-risk node negative patients that did not receive adjuvant systemic therapy (MFS HR 2.3 (P = 0.039), BCSS HR 2.9 (P = 0.017)). Using both tests provided increased prognostic information and indicated different relevance within biologically dissimilar breast cancer subtypes. Conclusions These results support the use of CTC analysis in early breast cancer to generate clinically useful prognostic information. Introduction In recent years breast cancer survival rates have been steadily increasing, partly due to earlier diagnoses as a result of increased awareness and widespread mammography screening programmes. Despite these advances, approximately one-third of patients will develop distant metastasis, which represents the terminal step in the progression of the disease. The relative paucity of accurate prognostic tests has made it difficult to identify these high-risk patients to allow for more optimized adjuvant treatment decisions. Similarly, many patients at low risk of developing disseminated disease undergo toxic adjuvant chemotherapy treatments that are of little benefit. There is consequently a need for new prognostic tests with significantly increased sensitivity and specificity, particularly for those patients at the early stages of their disease. Tumour cells in the bone marrow (disseminated tumour cells (DTCs)) or circulating in the peripheral blood (circulating tumour cells (CTCs)), are potential progenitors of distant metastasis and may, therefore, represent important targets of such tests [1][2][3][4][5][6][7][8][9][10][11][12]. Indeed, the detection of CTCs and DTCs has been associated with both disease progression in metastatic breast cancer [13][14][15], as well as disease recurrence and distant spread in early-stage breast cancer [1,3,5,6,16]. Bone marrow has traditionally been the primary compartment in which the prognostic value of the detection of these cells has been investigated, as it is a common homing organ for tumour cells of epithelial origin [17]. 
While the detection of DTC is of prognostic significance [1,3,5], it has the inherent disadvantage of needing an invasive procedure for sample collection and, additionally, is less suitable for serial sampling for monitoring adjuvant treatment response. The use of peripheral blood as an alternative sampling material has, therefore, gained significant interest in recent years as it does not suffer from these drawbacks. However, there remains a need for good quality studies to confirm the clinical relevance of CTCs in larger patient cohorts, particularly in comparison to DTCs. So far relatively few such studies in early breast cancer have included mature outcome data. Furthermore, only limited data exist on the comparison of CTCs and DTCs in this context, and studies have shown variable results [7,18,19]. Whether CTC detection could replace DTC detection is still an open question, and the value of CTCs versus DTCs may differ among patients as a consequence of different tumour biology and/or dissemination characteristics. In the current study, we used our previously-developed [2,20] multi-marker QPCR-based CTC assay on thawed samples of liquid nitrogen-stored peripheral blood mononuclear cells (PBMC) from 733 early-stage breast cancer patients collected at the time of diagnosis. In addition, this patient series previously had bone marrow samples assayed for DTCs [1], which afforded an opportunity to directly compare the prognostic relevance of CTC and DTC detection in the same large patient group. Patient and sample selection Between May 1995 and December 1998, samples of peripheral blood and bone marrow were collected from 920 stage I-II primary breast cancer patients immediately prior to primary surgery, at five hospitals in Norway (Ullevål University Hospital (now Oslo University Hospital Ullevål), Norwegian Radium Hospital (now Oslo University Hospital Norwegian Radium Hospital), Baerum Hospital, Aker University Hospital, and Buskerud Hospital), as described previously [1]. After excluding patients from whom peripheral blood mononuclear cells (PBMC) were either unavailable for this study (n = 149) or who were eligible for and/or received preoperative systemic therapy (pT3/4 patients; n = 38), which would likely distort the true CTC levels, PBMC were thawed from liquid nitrogen storage for CTC analysis. Bone marrow samples from the same patients (n = 733) had previously been analysed for the presence of DTCs [1]. Patients were treated with breast-conserving surgery or breast ablation and axillary clearance in addition to adjuvant therapy where required, as described in Table 1. Systemic adjuvant treatment was given according to the national guidelines between 1995 and 1998. Patients with node negative disease having either tumours ≤ 2 cm or grade 1 tumours 2 to 5 cm in diameter did not receive systemic adjuvant treatment. In addition, patients > 65 years of age did not receive chemotherapy. The standard adjuvant chemotherapy regimen consisted of six cycles (3 qw) of intravenous cyclophosphamide 600 mg/m 2 , methotrexate 40 mg/m 2 , and fluorouracil 600 mg/m 2 . Ten patients received highdose chemotherapy. Tamoxifen was used as endocrine therapy in hormone receptor-positive patients. Radiotherapy was offered after breast-conserving surgery for all patients and after mastectomy for node-positive premenopausal patients and postmenopausal patients with four or more positive lymph nodes. 
The follow-up consisted of clinical examination at 6-to 12-month intervals at the hospital outpatient departments or by the patients' primary doctors, with mammography annually. Further diagnostic work-up was performed only if the patients had symptoms or signs of progression. The median follow-up time for metastasis-free survival was 7.1 years, and for breast cancer-specific survival 8.3 years. Written informed consent was obtained from all participants and the study was approved by the Regional Ethic Committee. Control subjects For the CTC study, 16 patients with metastatic breast cancer (M 1 disease, according to the Union Internationale Contre le Cancer criteria) were included as clinical "positive controls" as based on our previous studies [2] the majority (> 90%) were expected to have circulating tumour cells. They were invited to participate if they were between treatments or soon to start subsequent palliative treatment modality. A total of 28 samples of peripheral blood from healthy controls into which different numbers (5 to 1,000) of cultured breast tumour cells derived from a mixture of six breast cancer cell lines (MCF7, CAMA1, T47D, SKBR3, ZR-75-1, and MDA-MB-231) were also tested in parallel as a second positive control group. Twenty-five healthy (negative) control subjects matched for gender and age and randomly selected from hospital and scientific staff were also freshly collected. Mononuclear cells from these control groups were separated and stored frozen for up to one week to mimic the processing and storage conditions of the patient samples, before being thawed and assayed. For the DTC study, bone marrows from 98 healthy controls were assayed. Sampling and processing of peripheral blood (PB) A total of 50 mL of blood was collected immediately prior to primary surgery. Mononuclear cells, including any tumour cells present, were separated from the whole blood samples using Ficoll Hypaque (BD, Breda, Netherlands) and stored frozen in aliquots of 20 × 10 6 cells (except from 30 patients, from which less than 20 × 10 6 cells were available) in liquid nitrogen until tumour cell enrichment (10 to 15 years later). Tumour cells were separated from a single aliquot of peripheral blood mononuclear cells (PBMCs) using anti-EpCAM (clone HEA-125) and anti-ErbB2 Micro Beads (MACS ® ; Miltenyi Biotec, Leiden, Netherlands) according to the manufacturer's instructions. In brief, beads were incubated with the PBMCs for 30 minutes at 4°C, after which labelled cells were collected on a magnetic separation column. After removal of the column from the magnetic field, the retained EpCAM + and/or ErbB2 + cells were eluted, and stored at -70°C in lysis buffer (5 M Guanidine thiocyanate (Merck, Darmstadt, Germany), pH 6.8, 0.05 M tris, 0.02 M EDTA, 1.3% Triton) until mRNA isolation and cDNA synthesis. mRNA isolation and cDNA synthesis Total RNA was precipitated from the cell lysate and dissolved in lysis buffer from the μMACS™ One-step cDNA kit (Miltenyi Biotec). Oligo(dT) Micro Beads were added and the mixture placed onto the μMACS column in the thermo MACS™ Separator (Miltenyi Biotec, Leiden, Netherlands). The isolated mRNA was directly converted into cDNA as per the manufacturer's instructions, with an additional elution with 20 μl of elution buffer, resulting in a total volume of 70 μl. 
Quantitative real-time PCR Quantitative real-time PCR primers (Sigma Genosys, Cambridge, UK) and 5'-fluorescently FAM-labelled probes (Applied Biosystems, Nieuwerkerk a/d IJssel, The Netherlands) were designed using the Perkin Elmer Primer Express ® software (PE, Foster City, CA, USA) based on the published sequences of CK19, p1B, EGP-2, PS2, mammaglobin and SBEM as previously described [20] ( Table 2). All primers were designed to be intron-spanning to preclude amplification of genomic DNA. Commercially available primers and probes for the "housekeeping" genes β-actin and glyceraldehyde-3phosphate dehydrogenase (GAPDH) (Applied Biosystems) were also used. Serially diluted cDNA synthesized from the amplified RNA of a pool of 82 snap frozen breast cancer tissues was used to generate standard curves for control and marker gene expression. For all cDNA dilutions, fluorescence was detected from 0 to 50 PCR cycles for the control and marker genes in single-plex reactions, which allowed the deduction of the C T -value (threshold cycle) for each product. The C T -value is the PCR cycle at which a significant increase in fluorescence is detected due to the exponential accumulation of PCR products and is represented in arbitrary units (TaqMan Universal PCR Master Mix Protocol, Applied Biosystems) [21]. The "housekeeping" genes β-actin and GAPDH were used to confirm reaction efficiency. Each measurement was performed in triplicate. Quality control measures for the PCR reactions included the addition of a genomic DNA control and a non-template control. Quadratic discriminant analysis (QDA) 'CTC score' calculation QDA was used to calculate a 'CTC score' representing CTC presence or absence for each sample, as previously described [22,23], using the expression data of the four best marker genes (CK19, p1B, EGP and MmGl). QDA is a statistical approach to find the combination of quadratic and linear functions of variables (in this case marker genes), which leads to the optimal separation between groups (in this case metastatic breast cancer patients and healthy volunteers). It is a generalization of Fisher's Linear Discrimination Analysis (LDA), which allows only linear functions [24]. A positive discriminant score indicates the presence of tumour cells in a sample, and conversely a negative discriminant score indicates their absence. QPCR measurement and QDA data analysis in this way offer a simple and objective estimate of tumour cell presence in a given sample. Sampling and processing of bone marrow (BM) The processing and analysis of DTC has been previously described [1]. Briefly, BM was collected from patients under general anaesthesia just before primary surgery for suspected breast cancer. A total of 40 mL of BM was aspirated from anterior and posterior iliac crests bilaterally (10 mL per site) and after separation by density centrifugation, mononuclear cells (MNC) were collected and cytospins were prepared (5 × 10 5 MNC/ slide). Four slides were incubated with the anticytokeratin monoclonal antibodies (mAbs) AE1 and AE3 (Sanbio, Uden, The Netherlands), and the same number of slides were incubated with an isotype-specific irrelevant control mAb. The visualization step included the alkaline phosphatase/anti-alkaline phosphatase reaction, and the slides were counterstained with hematoxylin to visualize nuclear morphology. The cytospins were manually screened by light microscopy by a single pathologist (EB). The determination of the presence of DTC was based on Consensus morphology criteria [25]. 
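Since the 'CTC score' is obtained from a quadratic discriminant trained to separate the positive-control (metastatic) and negative-control (healthy) expression profiles, a minimal sketch of this type of scoring is given below using scikit-learn. The synthetic marker values, class sizes, and decision rule shown here are illustrative assumptions; the study's actual normalisation, marker weights, and thresholds are not reproduced.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

# Rows = samples, columns = the four marker genes (CK19, p1B, EGP, MmGl),
# expressed here as synthetic normalised expression values for illustration.
rng = np.random.default_rng(1)
healthy = rng.normal(loc=0.5, scale=0.5, size=(25, 4))      # negative controls
metastatic = rng.normal(loc=3.0, scale=1.5, size=(16, 4))   # positive controls

X = np.vstack([healthy, metastatic])
y = np.array([0] * len(healthy) + [1] * len(metastatic))    # 1 = tumour cells present

qda = QuadraticDiscriminantAnalysis().fit(X, y)

def ctc_score(sample):
    """Signed discriminant score: > 0 read as CTC-positive, < 0 as CTC-negative."""
    return float(qda.decision_function(np.asarray(sample, float).reshape(1, -1))[0])

patient = [2.1, 0.4, 1.8, 0.2]     # one hypothetical patient profile
score = ctc_score(patient)
print(f"CTC score = {score:.2f} -> {'CTC-positive' if score > 0 else 'CTC-negative'}")
```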
In the DTC-positive patients, a median of 1 cell was detected, with a mean of 98 cells (range 1 to 7,500). Of the bone marrow samples from 98 healthy controls, 4 contained ≥ 1 cell (3 with 1 cell, 1 with 2 cells) which were scored positive by the described technique. Those patients with a DTC positive event in the negative control (isotype specific), were excluded from DTC interpretation in the specific test (AE1AE3), which improved the specificity of the DTC analysis. Statistical analysis The primary endpoints for the survival analysis was breast cancer-specific survival (BCSS), which was measured from the date of surgery to breast cancer-related death or otherwise censored at the time of the last follow-up visit or at non cancer-related death, and metastasis-free survival (MFS), measured in the same way. Detection of CTC Results from a total of 733 stage I to II breast cancer patients with biomaterial available at the time of surgery for both CTC and DTC analysis are presented. For CTC detection, samples of peripheral blood from a group of healthy individuals (negative controls) and metastatic breast cancer patients (positive controls) were processed the same way as the patient samples. These were used to calculate weights for each of the tumour marker genes used in the CTC assay and to set the threshold of CTC positivity. In an analogous fashion, 28 blood samples from healthy volunteers into which cultured breast tumour cells were spiked were also investigated as positive controls, since all should contain detectable levels of tumour marker gene expression. This sample set generated comparable tumour marker weights to the metastatic patients. This analysis also demonstrated that the assay gave a positive CTC score when as few as one tumour cell per mL was present in the sample (data not shown). Using the above data to tune the QDA CTC score calculation, 58 of the 733 (7.9%) patients had a positive CTC score and determined CTC-positive, versus 0 out of 25 healthy controls in the negative control group. The CTC result was compared to clinical and histopathological parameters, as well as the DTC status, as described in Table 1. Patients determined to be CTCpositive were significantly more likely to have ER-negative (P = 0.002 (Pearson Chi-square test)), PR-negative (P = 0.046), HER2-positive (P < 0.001) tumours, and have tumour cells present in the bone marrow (P < 0.001). CTC status was next tested in a multivariate Cox regression model, including DTC status and standard prognostic clinical variables (lymph node status, histological grade, tumour size, hormone receptor status, HER2 status, and vascular invasion). In the final model, CTC status remained a significant prognostic factor for both relapse-free survival and breast cancer-specific survival (Table 4). Within this cohort, 367 lymph node-negative patients were considered at low risk of metastasis according to the prevailing guidelines at the time of study accrual and did not receive systemic adjuvant treatment. Of these patients, those that were CTC-positive had a significantly higher risk of metastasis and breast cancer death than those that were CTC-negative (MFS HR = 3.0, P = 0.014; BCSS HR = 3.6, P = 0.011) (Figure 2). In the subgroup of patients receiving chemotherapy (with or without hormonal treatment; n = 155), CTC status was also prognostic (MFS HR = 2.3, P = 0.042; BCSS HR = 3.1, P = 0.012). 
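The survival analyses reported here are Cox proportional-hazards models of MFS and BCSS with CTC status alongside the standard clinical variables. The following sketch shows how such a multivariate model can be fitted with the lifelines package on synthetic data; the simulated cohort, effect sizes, and censoring are assumptions chosen only to mimic the structure of the analysis, not the study data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic cohort: follow-up time in years, an event indicator (distant
# metastasis), and binary covariates with roughly the reported positivity rates.
rng = np.random.default_rng(2)
n = 733
ctc = rng.binomial(1, 0.079, n)            # ~7.9 % CTC-positive
dtc = rng.binomial(1, 0.117, n)            # ~11.7 % DTC-positive
node_positive = rng.binomial(1, 0.5, n)    # hypothetical lymph-node status
# Illustrative effect sizes only: each positive marker raises the hazard.
hazard = 0.05 * np.exp(0.9 * ctc + 0.6 * dtc + 0.7 * node_positive)
time = rng.exponential(1.0 / hazard)
event = (time < 8.0).astype(int)           # administrative censoring at ~8 years
time = np.minimum(time, 8.0)

df = pd.DataFrame({"time": time, "event": event,
                   "CTC": ctc, "DTC": dtc, "node_positive": node_positive})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.hazard_ratios_)                  # multivariate hazard ratios, cf. Table 4
```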
For patients treated with hormonal therapy only (n = 164), survival differences according to CTC status were not statistically significant (MFS HR = 1.3, P = 0.701; BCSS HR = 1.7, P = 0.376). Circulating tumour cell status versus disseminated tumour cell status Previously, we reported the clinical significance of DTC status in this patient cohort (over a shorter follow-up period) by immunocytochemical detection of cytokeratin-positive cells [1]. This allowed the direct comparison of the prognostic value of CTC detection to DTC detection. Of the 733 patients included in the study, the bone marrow samples of 86 patients (11.7%) contained DTCs. In univariate analyses DTC status was also a significant predictor of outcome, with a HR of 1.9 for MFS (P = 0.006) and 2.3 (P = 0.001) for BCSS (Table 3). However, in the multivariate analyses including CTCs, DTC status fell below the level of significance (Table 4). Patients in whom tumour cells were detected in both peripheral blood and bone marrow had a significantly poorer outcome than those with tumour cells detected in only one or neither compartment (MFS HR = 3.5, P = 0.001; BCSS HR = 3.0, P = 0.008; Figure 3). In addition, those patients who were exclusively CTC-positive or exclusively DTC-positive also had reduced survival compared to patients who were negative for both (CTC+/DTC-: BCSS HR = 2.2, P = 0.015; MFS HR = 1.9, P = 0.034; CTC-/DTC+: BCSS HR = 2.1, P = 0.003; MFS HR = 1.5, P = 0.127). A subgroup univariate analysis revealed that DTC status was only prognostic in lymph node-positive patients (MFS HR = 2.1, P = 0.004 and BCSS HR = 2.1, P = 0.007), and not in lymph node-negative patients (MFS HR = 0.6, P = 0.378 and BCSS HR = 1.0, P = 0.978). CTC status, however, appeared to be prognostic in both groups (MFS HR = 2.1, P = 0.015; BCSS HR = 1.9, P = 0.045 for lymph node-positive patients, and MFS HR = 2.3, P = 0.039; BCSS HR = 2.9, P = 0.017 for lymph node-negative patients; Figure 1). When patients were subgrouped by ER, PR, and HER2 status, CTC status, but not DTC status, was significantly associated with survival for ER-negative and/or HER2-negative patients. For ER-positive patients, those with detectable DTCs experienced reduced BCSS; this was not observed for CTC-positive patients. Both CTC and DTC status were associated with MFS (Table 5). Of the 143 ER/PR/HER2-negative ("triple negative") patients, CTC status, but not DTC status, was prognostic for both outcomes. Discussion We aimed to validate a previously-developed [2,20], highly sensitive multi-marker QPCR-based CTC assay in a large early-stage breast cancer patient cohort with mature outcome data. Bone marrow samples from these patients had previously been assayed for DTCs by immunocytochemical analysis using anti-cytokeratin antibodies [1]. This made it possible to also directly compare the prognostic value of the detection of tumour cells in the peripheral blood versus the bone marrow in the same large patient group. In this cohort of 733 early-stage breast cancer patients, CTC status was a significant predictor of both metastasis-free and breast cancer-specific survival, in agreement with other studies [2,6,7,26,27]. To our knowledge, this is the largest published study on CTCs with mature outcome data.
There was no significant difference in the distribution of many common clinical variables between CTC-positive and CTC-negative patients, barring that CTC-positive patients had tumours which were ER-negative, PR-negative and HER2-positive significantly more often, in agreement with other studies [28]. CTC-positive patients were in addition more likely to have tumour cells present in both their bone marrow, in accordance with the findings of Daskalaki et al., Pierga et al.,and Müller et al. [7,29,30] (Table 1). Importantly, multivariate analysis demonstrated that CTC status provided prognostic information independent of other common clinical variables (Table 4). Furthermore, the CTC analysis discriminated outcome among those patients deemed at sufficiently low-risk of distant metastasis by traditional risk assessment to not receive systemic treatment. Indeed, these "low-risk" lymph node-negative CTC-positive patients had a fiveyear BCSS of 75.0%, versus 93.0% for the CTC-negative patients ( Figure 2). The identification of such a poor prognosis subgroup within a group of traditionallydefined "low risk" patients indicates the potential value of CTC detection in routine clinical use in conjunction with other prognostic tests. Our results show that immunomagnetic tumour cell enrichment followed by multi-marker QPCR analyses on samples of PBMCs stored frozen for up to 15 years provides valuable clinical data. As expected, some cellular and nucleic acid degradation was observed in these samples after their removal from long-term storage, with tumour marker gene expression lower, variance higher (24-fold) and lower frequency of CTC positive cases (7.9%) than that observed with our recent study using fresh material from a similar patient group (n = 82) [2]. None of the 25 healthy controls assayed were determined to be CTC-positive, which suggested high specificity. On the other hand, it should be noted that the possibility of occasional tumour marker gene expression in non-malignant cells of breast cancer patients cannot entirely be excluded, which may decrease true specificity in a clinical setting. Interestingly, while CTC status was correlated with DTC status in these patients, it also provided additional prognostic information and outperformed DTC status in the multivariable analysis ( Figure 3, Tables 4 and 5). Keeping in mind the retrospective design of this study, the presented results indicate that the CTC status especially improves the ability to predict outcome in the ER negative subset. It is possible that CTCs and DTCs to some extent reflect distinct aspects of the disease and that their presence and relevance depend on the biology of the underlying breast cancer: bone marrow is a compartment in which tumour cells may remain dormant for many years [17], whereas peripheral blood represents a transitory route for tumours that may be more rapidly proliferating and/or actively migrating. This is consistent with the observation that CTC-positive patients are significantly more likely to have ER-/PR-/HER2+ (Table 1) or triple negative tumours (P = 0.003 by Pearson Chisquare), which tend to be more aggressive and exhibit high proliferative capacity [31]. DTCs by comparison, might have higher propensity to remain dormant but viable for extended periods, and, therefore, be more predictive of metastasis or relapse at later time points. This is consistent with our previous study in which the DTC status of a subset of patients from the current cohort was correlated to molecular subtype [32]. 
It was found that prognostic DTC status was most strongly correlated with patients with the less aggressive Luminal A-type tumours. In some previous studies CTC status was found to be considerably less prognostic than DTC status [10,11], in contrast to the present results. We cannot exclude the possibility that differences in the prognostic value of DTC detection in different studies may be due to the cytokeratin antibody used [33]. However, the novel CTC detection platform we have employed does offer high sensitivity and specificity by incorporating multi-marker QPCR-based detection and immunomagnetic tumour cell enrichment [20], resulting in increased prognostic power. This strategy also presents one limitation, however; by using anti-EpCAM and anti-ErbB2 Micro Beads for immunoselection, tumour cells that do not express or express low levels of EpCam or HER2 may not be enriched. Indeed, CTCs are known to be biologically heterogeneous (for review see [34]), and able to undergo phenotypic changes during migration. For example, some CTCs may lose EpCam in order to intravasate and reach circulation during epithelial-to-mesenchymal transition (EMT) [35]. Also, the use of an antibody directed against HER2 would favour the selection of HER2 positive CTCs, which may contribute to the observed significant overrepresentation of CTC-positive patients who have HER2-positive tumours. Although there are biological explanations for differences in the clinical impact of CTCs and DTCs, different methodological principles for the detection of CTCs and DTCs, including the possibility for different sensitivity, might also contribute to incongruent results. Therefore, parallel analysis of CTCs and DTCs with the same methodological principle should be encouraged in future trials. Our CTC assay was originally developed using fresh PBMCs, and in our previous study demonstrated more than twice the CTC positivity rate in early-stage patients than was observed using the liquid nitrogen-stored material here [2]. The use of our method on fresh PBMCs would, therefore, be expected to further increase the sensitivity and clinical utility of the assay. Future robust prospective studies are warranted to demonstrate this. Conclusions Our results add further evidence for the potential clinical value of CTC and DTC detection in risk stratification in early breast cancer. According to our data, approximately one in five breast cancer patients are likely to be CTC and/or DTC positive at the time of diagnosis and, significantly, a proportion of these will be classed as 'low-risk' by traditional prognostic measures and may not receive any form of systemic treatment, yet would probably derive considerable benefit from it. Our results also suggest that while CTC status may be easier to determine and monitor than DTC status due to the accessibility of the sampling material, both CTC and DTC status provide valuable clinical data that may be indicative of different disease aspects. Together, these results provide evidence that CTC and DTC detection generate prognostic information that would be useful in a clinical setting.
5,821.2
2011-06-14T00:00:00.000
[ "Medicine", "Biology" ]
Amelioration of autoimmune arthritis by adoptive transfer of Foxp3-expressing regulatory B cells is associated with the Treg/Th17 cell balance Background Foxp3 is a key regulator of the development and function of regulatory T cells (Tregs), and its expression is thought to be T cell-restricted. We found that B cells in mice can express Foxp3 and B cells expressing Foxp3 may play a role in preventing the development of collagen-induced arthritis (CIA) in DBA/1J mice. Methods Foxp3 expression was modulated in CD19+ B cells by transfection with shRNA or using an over-expression construct. In addition, Foxp3-transfected B cells were adoptively transferred to CIA mice. We found that LPS or anti-IgM stimulation induced Foxp3 expression in B cells. Foxp3-expressing B cells were found in the spleens of mice. Results Over-expression of Foxp3 conferred a contact-dependent suppressive ability on proliferation of responder T cells. Down-regulation of Foxp3 by shRNA caused a profound induction in proliferation of responder T cells. Adoptive transfer of Foxp3+CD19+ B cells attenuated the clinical symptoms of CIA significantly with concomitant suppression of IL-17 production and enhancement of Foxp3 expression in CD4+ T cells from splenocytes. Conclusion Our data indicate that Foxp3 expression is not restricted to T cells. The expression of Foxp3 in B cells is critical for the immunoregulation of T cells and limits autoimmunity in a mouse model. Rheumatoid arthritis (RA) is a debilitating autoimmune disease characterized by chronic inflammation and destruction of the joints has been considered to be a Th1 and/or Th17-mediated disease. However, B cells also play important roles in the pathogenesis of RA. B cells present within the synovial membrane of affected joints are involved directly in sustained inflammation in the rheumatoid synovium [3], and play a critical role in the synthesis of rheumatoid factor (RF) [23]. The therapeutic success of B cell depletion using a mAb against the B-cell surface molecule CD20 (Rituximab; RTX) has brought in a renewed focus on the role of B cells in the pathogenesis and control of RA and other autoimmune diseases [24,25]. Interestingly, regulatory B cells have also been proposed to play a role in the K/BxN arthritis mouse model, a model in which Igs are required for disease development. Furthermore, the number of regulatory B cells was negatively correlated with disease activity in new onset RA patients [26]. Several different Breg subsets have now been identified and characterized phenotypically as CD5 + B-1a, CD1d + marginal zone B cells, transitional-2-marginal zone precursor B cells, and CD1d hi CD5 + CD19 hi in mouse models. The transcription factor Foxp3 is a master regulator of Tregs, controlling their development and function. A role for Foxp3 in maintaining self tolerance has been shown in scurfy mice, and in patients with immunodysregulation, polyendocrinopathy, enteropathy, and X-linked (IPEX) syndrome as the causative genetic anomaly that results in severe autoimmune diseases [27][28][29]. The expression of Foxp3 in conventional T cells confers suppressive activity and induces the expression of associated molecules such as CD25, cytotoxic T lymphocyte antigen 4 (CTLA4), and glucocorticoid-induced TNF receptor-related protein (GITR) [30][31][32]. These findings suggest that B cells with suppressive activity may also express Foxp3. 
Foxp3 expressing CD19(+)CD5(+) B cell population was identified in human peripheral blood mononuclear cells and regulatory properties of this cell type was proposed [33]. The Foxp3 expressing regulatory B cells were identified as strong suppressors in milk allergy in human. In the present study, we investigated the existence of Foxp3-expressing B cells, and their regulatory roles in mice arthritis model, by testing whether they could regulate the proliferation of effector T cells in vitro through a cell-to-cell contact-dependent mechanism. Furthermore, we found a therapeutic effect of Foxp3 + B cells in autoimmune arthritis by performing cell transfer studies in CIA mice, an animal model of RA. We observed that the Foxp3 + B cells showed a strong suppressive effect against arthritis in CIA mice. Mice Male DBA/1J mice aged 6-8 weeks were purchased from the Charles River Laboratory (Yokohama, Japan). Mice were housed in groups of 10 and maintained at a mean temperature of 21 °C (±2 °C) on a 12-h light/12-h dark cycle, with food and water available ad libitum. All experimental procedures were examined and approved by the Animal Research Ethics Committee of the Catholic University of Korea (permit number: CUMC-2009-0044-01), which conforms to all National Institutes of Health of the USA guidelines. All surgeries were performed under isoflurane anesthesia, and all efforts were made to minimize suffering. Cell preparation and culture The A20 cell line (mouse B cell lymphoma) was purchased from the American Type Culture Collection (ATCC; Rockville, MD, USA). Spleens were collected for cell preparations from DBA/1J or arthritis mice. To purify CD19 + B cells or CD4 + T cells, splenocytes were incubated with CD19 or CD4 microbeads (Miltenyi Biotec, Auburn, CA, USA) and isolated on MACS separation columns. B cells or T cells determined by staining with FITC-labeled anti-CD19 mAb or PE-labeled anti-CD4 mAb, respectively (BD Biosciences Pharmingen, San Diego, CA, USA). These cells were >98 % purity. CD19 + B cells or A20 cells were cultured with various stimuli, such as 10 µg/ml lipopolysaccharide (LPS; Sigma-Aldrich, St. Louis, MO, USA) or 10 µg/ml anti-IgM (Jackson Immu-noResearch, West Grove, PA, USA) for 72 h. Flow cytometric analysis CD19 + B cells were resuspended in 4 % BSA flow buffer and blocked with CD16/CD32 Fc antibody (BD Pharmingen). Cells were incubated on ice for 30 min with FITClabeled anti-CD19 mAb or PerCP cy5.5-labeled anti-CD4 mAb (all from eBioscience, San Diego, CA, USA). For intracellular staining of Foxp3 and CTLA4, cells were fixed, permeabilized, and stained with FITC-or PElabeled anti-Foxp3 mAb and/or PE-labeled anti-CTLA4 mAb (all from eBioscience). Finally, cells were analyzed using a FACSCalibur (Becton-Dickinson, San Jose, CA, USA). Cells that stained positively for CD4, CD19, CD25, IL-17 and Foxp3 were counted visually at higher magnification by four individuals, and the mean values were presented in the form of a graph. Transfection of CD19 + B cells The Foxp3 open reading frame (ORF) was codon optimized for mammalian codon usage and was synthesized by Genscript Corp. (Piscataway, NJ, USA). The Foxp3 ORF was subcloned into the pcDNA3.1 (+) mammalian expression vector (Invitrogen, Carlsbad, CA, USA), digested with Hindlll and Xho l sites. The construct was verified by DNA sequencing (Macrogen, Seoul, Korea). Foxp3-specific targeting short hairpin RNA (shRNA) was purchased from Santa Cruz Biotechnology (Santa Cruz, CA, USA). 
For transfection, splenic CD19 + B cells were seeded in 24-well plates. Cells were transfected with 1.5 µg of DNA using poly-MAG and Magneto FACTOR plates (Chemicell GmbH, Berlin, Germany), according to the manufacturer's instructions. Cell viability was assessed using the Cell Counting kit-8 (CCK-8, Dojindo Laboratories, Kumamoto, Japan). Western blot analysis CD19 + B cell lysates were denatured in SDS, resolved by 10 % SDS-PAGE, and transferred to polyvinylidene fluoride membranes (Amersham Pharmacia, NJ, USA). Membranes were pre-incubated with 5 % skimmed milk in TBS for 2 h at room temperature. Primary Abs directed against Foxp3 (Santa Cruz Biotechnology), diluted 1/200 in blocking buffer (5 % skimmed milk in TBS), were then added and the samples incubated overnight at 4 °C. After the samples were washed for four times in TBST, HRPconjugated secondary Abs were added and incubated for 1 h at room temperature. Finally, membranes were washed in TBST and the hybridized bands were detected with an ECL detection kit (Pierce, Rockford, IL, USA). Suppression assay CD4 + CD25 − T cells were isolated from spleens of arthritic mice by magnetic bead cell sorting using an untouched CD4 + T Cell Isolation Kit II and CD25 Microbeads (all from Miltenyi Biotec, Bergisch Gladbach, Germany), according to the manufacturer's instructions. To assess the suppressive activities of Foxp3-transfected CD19 + B cells, CD4 + CD25 − responder T cells (5 × 10 4 / well) were cultured with a 1:1 ratio of shRNA or Foxp3transfected CD19 + B cells (which were stimulated with LPS or anti-IgM) in the presence or absence of bovine type II collagen (CII) (Chondrex Inc., Redmond, WA, USA), in a 100 ng/ml anti-CD3-coated 96-well plate. In some cases, Foxp3-transfected CD19 + B cells were placed in the inner transwell chamber. During the last 16 h of culture, cells were pulsed with 3 H-thymidine (1 μCi/ well; MP Biomedicals, Seven Hills, Australia). Cells were assessed for thymidine incorporation in a Microbeta counter (Wallac Oy 1450 MicroBeta; Wallac, Melbourne, Australia). Induction and clinical assessment of arthritis CII was dissolved in 0.1 M acetic acid solution (2 mg/ml) by gentle rotation at 4 °C overnight. Mice were injected intradermally at the base of the tail with 100 μg CII emulsified with an equal volume of CFA containing 2 mg/ ml Mycobacterium tuberculosis (Chondrex Inc). On day 14, a second injection of CII in IFA was administered. Arthritic indices were evaluated three times weekly by three or more independent investigators until 9 weeks after the first immunization. The scale of the arthritis index ranged from 0 to 4. Scores were defined as follows: 0, no evidence of erythema or swelling; 1, erythema and mild swelling confined to the mid-foot (tarsal part) or ankle joint; 2, erythema and moderate swelling extending from the ankle to the mid-foot; 3, erythema and moderate swelling extending from the ankle to the metatarsal joints; 4, erythema and severe swelling encompassing the ankle, foot, and digits [34]. Histological assessment of arthritis At sacrifice, knee joints (mid-tibia to mid-femur) were harvested, and the joints were fixed overnight in 4 % paraformaldehyde Decalcified limbs were embedded in paraffin and sectioned to a 7-µm thickness. Tissues were stained with hematoxylin-eosin (H&E), Toluidine blue, and Safranin O. For histological evaluation of arthritis, sections were evaluated in a blind manner. The scores were evaluated as described previously [35]. 
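To make the clinical read-out concrete, the sketch below turns per-paw scores on the 0 to 4 scale into a per-mouse arthritis index and compares two groups with a two-tailed Student's t test, as in the Statistical analysis section below. The group data, the group sizes, and the convention of summing the four paws into one index are assumptions made for illustration only.

```python
import numpy as np
from scipy import stats

# Hypothetical per-paw clinical scores (0-4) for each mouse in two groups.
cia_control = np.array([[3, 2, 3, 2], [4, 3, 3, 2], [3, 3, 2, 2], [4, 2, 3, 3]])
foxp3_transfer = np.array([[1, 1, 2, 0], [2, 1, 1, 1], [1, 0, 1, 1], [2, 1, 0, 1]])

def arthritis_index(per_paw_scores):
    """Sum the four paw scores (each 0-4) into a single index per mouse."""
    return per_paw_scores.sum(axis=1)

ctrl = arthritis_index(cia_control)
treated = arthritis_index(foxp3_transfer)
print(f"CIA control: {ctrl.mean():.1f} ± {ctrl.std(ddof=1):.1f}")
print(f"Foxp3+ B cell transfer: {treated.mean():.1f} ± {treated.std(ddof=1):.1f}")

# Two-tailed Student's t test; P < 0.05 considered significant.
t_stat, p_val = stats.ttest_ind(ctrl, treated)
print(f"t = {t_stat:.2f}, P = {p_val:.4f}")
```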
Adoptive transfer Splenic CD19 + B cells of naïve mice were purified with magnetic beads (Miltenyi Biotec). Purified spleen B cells were transfected with mock or Foxp3 over-expression construct and cultured with LPS for 72 h. Purified 10 × 10 6 Foxp3-transfected CD19 + B cells were suspended in a total of 200 μl saline and transferred intravenously into mice on days 7 and 28 after CII immunization. Statistical analysis Experimental values presented are the means ± SD. Student's t tests were performed to calculate the statistical differences between means of different variables, and P values less than 0.05 (two-tailed) were considered significant. LPS and IgM stimulation induced Foxp3 expression in mouse B cells To evaluate the effect of LPS and IgM on Foxp3 expression of B cells, we cultured splenic CD19 + B cells in the presence of LPS or IgM for 72 h. Foxp3 expression was ascertained by PCR, immunoblotting, and flow cytometry. Increased in Foxp3 mRNA and protein expression were observed in both cultured CD19 + primary B cells isolated from splenocytes of wild type mice (WT), and in a mouse B cell lymphoma line (A20), after LPS or IgM mAb stimulation (Fig. 1a, b). Flow cytometric analysis of cultured B cells in media alone showed that 0.73 % of B cells expressed Foxp3. After stimulation with LPS or IgM mAb, more than 10 % of the CD19 + cells expressed Foxp3 (Fig. 1c). Additionally, LPS and IgM mAb induced an increased level of Foxp3 expression in CD19 + B cells, as determined by confocal microscopy (Fig. 1d). CD19 + Foxp3 + B cells exist in spleens of CIA mice Our in vitro studies suggested that stimulation with BCR or BCR-independent activators induced Foxp3 + in B cells. Therefore, these cells may be related to an inflammatory environment such as RA. Next, we investigated whether B cells that expressed Foxp3 were present in the spleens of CIA mice. Foxp3 mRNA and protein expression was detected in splenic B cells under basal conditions and was further up-regulated by LPS or anti-IgM stimulation (Fig. 2a). By confocal microscopy, we identified Foxp3 expression in splenic CD19 + B cells, although the vast majority of Foxp3 + cells were CD4 + T cells (Fig. 2b). We observed more CD19 + Foxp3 + B cells in CIA than in WT DBA/1 mice, the numbers of which peaked at 5 weeks post-immunization (Fig. 2c). Foxp3 + B cells peaked at 5 weeks and resided in the B cell zone up to 11 weeks, although the frequency of Foxp3 + B cells at 8 weeks was less than at 5 weeks of immunization. Since T cell mediated immune response was occurred before B cell mediated immune response, the number of CD4 + Foxp3 + cells were more than that of CD19 + Foxp3 + cells in 5 weeks of immunization. We also detected CD19 + Foxp3 + B cells in CIA mice (Fig. 2d). These data indicated that Foxp3expressing CD19 + B cells exist naturally in the spleens of mice under inflammatory conditions such as CIA. Foxp3-transfected B cells have suppressor activity in vitro Foxp3 plays an essential role in the suppressive function of Treg cells. Foxp3 + Treg cells inhibit the activation of autoreactive T cells in an antigen-specific manner [36]. Therefore, we determined whether B cell suppressor activity depends on Foxp3 expression. CD19 + B cells were transfected with either Foxp3-specific shRNA, or an over-expression construct. Foxp3 over-expression led to a two-fold up-regulation compared to control upon non-stimulation and stimulation with LPS. Cells transfected with shRNA showed low levels of Foxp3 expression (Fig. 3a). 
Transfected cells did not undergo apoptosis (Fig. 3b). Since CIA is induced by type II collagen, we examined whether B cells expressing Foxp3 can regulate the proliferation of type II collagen-specific T cells. CD19+ B cells transfected with Foxp3 efficiently suppressed the proliferation of CD4+ T cells stimulated with CD3 mAb in the absence or presence of CII. In contrast, Foxp3-specific shRNA-transfected cells showed no inhibitory effect on T cell proliferation (Fig. 3c). These data showed that expression of Foxp3 is induced after BCR stimulation by LPS or anti-IgM, and that the suppressive activity of CD19+ B cells is due to the expression of Foxp3. Foxp3-expressing B cells mediate cell contact-dependent suppression of T cell proliferation To determine whether Foxp3+ B cells reduce T cell proliferation through cell-to-cell contact, we measured T cell proliferation using cells transfected with the Foxp3 over-expression vector. The use of an in vitro transwell assay allows confirmation of whether direct contact with Foxp3+CD19+ B cells is required to suppress the proliferation of CD4+CD25− T cells. Responder T cells were cultured with transfected B cells either in direct contact or separated by a transwell. The suppressor activity of CD19+ B cells transfected with Foxp3 was lost in the absence of cell-cell contact (Fig. 4a, b). These findings suggest that the suppressor activity of Foxp3-expressing B cells occurred via cell-cell contact. Foxp3-transfected B cells inhibit autoimmune arthritis in mice RA is an antigen-specific T cell-mediated autoimmune disease. Therefore, we tested whether Foxp3-transfected CD19+ B cells could ameliorate the development of arthritis in a CIA model. Splenic CD19+ B cells transfected with Foxp3 were transferred to DBA/1J mice on days 7 and 28 after the first immunization, and disease progression was monitored. In control mice, clinical disease was apparent on day 21 after the first immunization and became steadily worse until the end of the experiment (Fig. 5a). Mice receiving Foxp3-transfected B cells had lower clinical scores of the affected arthritic joints and a delay in the development of arthritis compared to the CIA mice (Fig. 5a, f). Consistent with the clinical observations, both the cellular infiltrate within the synovium and the level of cartilage degeneration were substantially lower in the group receiving Foxp3-transfected B cells, whereas CIA mice showed marked synovial thickening, pannus formation, and cartilage and bone destruction (Fig. 5b). Interestingly, numbers of CD4+Foxp3+CTLA4+ Treg cells were increased in the lymph nodes and the spleens of mice receiving Foxp3-transfected B cells compared to CIA mice (Fig. 5c). We also observed that Th17 cell numbers increased in the spleen of CIA mice. In contrast, a larger population of Foxp3+ Treg cells and a smaller population of Th17 cells were observed in mice receiving Foxp3-transfected B cells (Fig. 5d, e). Moreover, Foxp3-transfected B cells significantly reduced CIA development compared to mock-transfected B cells. These data suggest that Foxp3-expressing CD19+ B cells are protective against arthritis development.
(Fig. 2 caption: CIA was induced in 6- to 8-week-old male DBA/1J mice by intradermal immunization at the base of the tail with CII/CFA emulsion on day 0, followed by a booster injection on day 21, as described in the "Methods" section. a Purified B cells isolated from the spleens of CIA mice were cultured with 10 µg/ml LPS or 10 µg/ml anti-IgM; following 72 h of activation, Foxp3 mRNA (left) and protein (right) levels were determined by RT-PCR and Western blot, respectively, with the optical densities of Foxp3 bands normalized to those of β-actin. b Spleens of DBA/1J or CIA mice (5 weeks after primary CII immunization) were stained with CD19-FITC, CD4-PerCP-Cy5.5, and Foxp3-PE mAbs and analyzed by confocal laser microscopy; the numbers in the graphs indicate the counts of positive cells determined microscopically. c Kinetic analysis of Foxp3-expressing B cells in the spleens of CIA mice; original magnification, ×400. d The expression of CD19 and Foxp3 in CIA mouse splenocytes. The data represent one of three similar experiments; bar graphs show mean ± SD; statistical significance was defined as *p < 0.05 compared with 3 weeks after immunization, **p < 0.01, ***p < 0.001 compared with untreated cells.)
Discussion Foxp3 is the most specific marker of regulatory T cells [31,37]. Up to now, Foxp3 expression has been found only in CD4+ T cells and in some tumor cell lines [38]. B cells transformed with EBV were reported to express Foxp3, although normal B cells do not express Foxp3 [38]. In this study, we showed that Foxp3-expressing CD19+ B cells exist in both normal and autoimmune arthritis mice. The origins of these B cells and how they develop remain unclear. Given the fact that Foxp3+CD19+ B cells constitute only a small fraction of B cells, transfection of Foxp3 into B cells provides a useful method to generate regulatory B cells in vitro. In vitro-generated regulatory B cells can be utilized to inhibit the progression of ongoing autoimmune processes. Our data suggest that transfection of Foxp3 into CD19+ B cells induced functional regulatory T cells and suppressed effector T cell proliferation. As a result, Foxp3-transfected B cells delayed the onset of arthritis and suppressed its severity in CIA mice. Regulatory B cell subsets are recognized as an important component of the immune system. Several reports have shown that regulatory B cells influence T cell activation and inflammatory responses through the secretion of IL-10 [10]. Several phenotypes of regulatory B cells have been described. Peritoneal CD5+ B-1a cells are known to produce IL-10 [4,17]. CD5+ B cells also produce IL-10 upon IL-12 stimulation [39]. Splenic B cells with a CD21+CD23− MZ phenotype from lupus mice produce IL-10 in response to CpG stimulation [40]. Splenic CD1d^hi CD21+CD23+IgM+ B cells with a T2-MZP phenotype also produced IL-10 and inhibited the development of CIA [18]. An IL-10-producing CD1d^hi CD5+ regulatory B cell subset showed a suppressive effect against autoimmune encephalitis [20]. Recently, a novel subset of IL-10-producing regulatory B cells, distinct from MZ or B-1a cells, was discovered in the intestine and identified as CD5−CD11b−CD21+ B cells [41]. Regulatory B cells have been demonstrated to exert immunosuppressive functions by inducing Tregs or skewing the cytokine profile of effector T cells toward an immunosuppressive phenotype [42,43]. Transcription factors play important roles in the development and lineage commitment of lymphocytes. Little is known about the transcription factors that regulate the generation of regulatory B cells. Foxp3 is necessary and sufficient for Treg generation and function [44]. Also, Foxp3 expression has been thought to be restricted to T cells. In our study, the existence of Foxp3+ Bregs was demonstrated in a mouse arthritis model.
Foxp3 may play a role in the generation of Bregs, and the over-expression of Foxp3 in B cells induced regulatory effects. The BCR plays an important role in the development and proliferation of pre-B and B cells [45,46]. Similarly, Foxp3+ regulatory T cell differentiation and function in the periphery is also dependent on suboptimal TCR stimulation [47]. Our results revealed that expression of Foxp3 is induced after BCR stimulation by anti-IgM or LPS. The frequency of splenic Foxp3+CD19+ B cells was significantly lower in WT than in CIA mice. Up-regulation of Foxp3 in B cells of CIA mice may be a consequence of normal B cell activation under the influence of inflammatory conditions. Also, stimulation of Foxp3+CD19+ B cells with either LPS or anti-IgM increased their suppressor activity. Although BCR stimulation induced Foxp3, maximal and sustained Foxp3 expression may require additional stimulation with ligands such as CD40 and LPS. The increase in Breg cells in CIA animals raises the question of why these Breg cells cannot protect the host from arthritis. We speculate that the increased Breg cells are not sufficient in number or function to suppress over-activated effector T cells in the arthritis animal model in vivo. There are also other possibilities, such as increased apoptosis of Breg cells under inflammatory conditions. Further studies are needed to test this hypothesis. Interestingly, our data showed that adoptive transfer of Foxp3+CD19+ B cells increased the number of Foxp3+ Tregs in vivo. Vallerskog et al. [48] reported that Foxp3+ T cell numbers were significantly increased in the peripheral blood of rituximab-infused SLE patients. Alteration of T cell populations may be important in the B cell depletion therapy used for autoimmune diseases. Our results showed that Foxp3+ Bregs modulated a T cell population. The interaction between B and T cells will likely be important in the treatment of arthritis and other autoimmune diseases. The mechanism by which Foxp3+ B cells act in vivo needs to be elucidated. We observed that Foxp3+ B cells secreted large amounts of IL-10 and TGF-β (data not shown). IL-10- or TGF-β-producing B cells have been reported to play an important role in regulatory function [49][50][51]. IL-10 inhibits pro-inflammatory cytokines and supports regulatory T cell differentiation, and thus plays a pivotal role in immune tolerance. Conclusion In summary, we report a significant function of regulatory B cells expressing Foxp3 in CIA. The regulatory effect of Foxp3+ B cells was contact-dependent. Foxp3+ B cells successfully suppressed arthritis and induced the Treg cell population. Identification of the mechanism underlying the induction of Treg cells remains an open question for future studies.
(Fig. 5 caption: Representative photographs from each group are shown; original magnifications, ×40 and ×200. c Percentage of CD4+Foxp3+CTLA4+ Treg cells in the spleens, mesenteric lymph nodes (mLN) and draining lymph nodes (dLN) from control CIA mice or mice injected with Foxp3-transfected CD19+ B cells; cells were stained with CD4-PerCP-Cy5.5 mAb before permeabilization and then with Foxp3-FITC and CTLA4-PE mAbs, and FACS analyses show Foxp3+CTLA4+ cells gated on CD4+ T cells. d, e Spleens from mice of each group were stained with anti-CD4 (green) and anti-IL-17 (red) (upper image) or anti-CD4 (red), anti-CD25 (blue), and anti-Foxp3 (green) (lower image) antibodies; populations of CD4+CD25+Foxp3+ and CD4+IL-17+ T cells were analyzed using confocal laser microscopy and flow cytometry. f Mock- and Foxp3-transfected B cells (1 × 10^7) were injected i.v. on days 7 and 28 into CIA mice (n = 10 per group) that had been immunized with CII/CFA; original magnification, ×400. Data are expressed as means ± SD; *p < 0.05 compared with control CIA mice.)
5,551
2016-06-28T00:00:00.000
[ "Biology", "Medicine" ]
Characteristic Functionals of Dirichlet Measures We compute characteristic functionals of Dirichlet-Ferguson measures over a locally compact Polish space and prove continuous dependence of the random measure on the parameter measure. In finite dimension, we identify the dynamical symmetry algebra of the characteristic functional of the Dirichlet distribution with a simple Lie algebra of type $A$. We study the lattice determined by characteristic functionals of categorical Dirichlet posteriors, showing that it has a natural structure of weight Lie algebra module and providing a probabilistic interpretation. A partial generalization to the case of the Dirichlet-Ferguson measure is also obtained. 1. Introduction and main results. Let X be a locally compact Polish space with Borel σ-algebra B(X) and let P(X) be the space of probability measures on (X, B(X)). For σ ∈ P(X) we denote by D σ the Dirichlet-Ferguson measure [9] on P(X) with probability intensity σ. The characteristic functional of D σ is commonly recognized as hardly tractable [14] and any approach to D σ based on characteristic functional methods appears de facto ruled out in the literature. Notably, this led to the introduction of different characterizing transforms (e.g. the Markov-Krein transform [16,43] or the c-transform [14]), inversion formulas based on characteristic functionals of other random measures (in particular, the Gamma measure, as in [32]), and, at least in the case X = R, to the celebrated Markov-Krein identity (see e.g. [24]). These investigations are based on complex analysis techniques and integral representations of special functions, in particular the Lauricella hypergeometric function k F D [21] and Carlson's R function [5]. The novelty in this work consists in the combinatorial/algebraic approach adopted, allowing for broader generality and far reaching connections, especially with Lie algebra theory. Fourier analysis. Denote by D α k the Dirichlet distribution on the standard simplex ∆ k−1 with parameter α k ∈ R k + , which we regard as the discretization of D σ induced by a measurable k-partition X k of X (see §2 below). Our first result is the following. where i = √ −1 is the imaginary unit, Z n is the cycle index polynomial (2.1) of the n th symmetric group and f j denotes the j th power of f . Furthermore, the map σ → D σ is continuous with respect to the narrow topologies. The characteristic functional representation is new. It provides -in the unified framework of Fourier analysis -(a) a new (although non-explicit) construction of D σ as the unique probability measure on P(X) satisfying D σ = lim k D α k (see Cor. 3.16. Following [45], we call this construction a weak Fourier limit); (b) new proofs of known results on the tightness and asymptotics of families of Dirichlet-Ferguson measures (see Cor.s 3.12 and 3.13), proved, elsewhere in the literature, with ad hoc techniques; (c) the continuity statement in the Theorem, which strengthens [37,Thm. 3.2] concerned with norm-to-narrow continuity. This last result is sharp, in the sense that the domain topology cannot be relaxed to the vague topology. Representations of SL 2 -currents and Bayesian non-parametrics. The Dirichlet-Ferguson measure D, the gamma measure G [19,42] and the 'multiplicative infinite-dimensional Lebesgue measure' L + [42,45] play an important rôle in a longstanding program [20,42,46] for the study of representations of measurable SL 2 -current groups, i.e. spaces of SL 2 -valued bounded measurable functions on a smooth manifold X. 
Within such framework, connections between these measures and Lie structures of special linear type are not entirely surprising. In particular, the measure L + is constructed (see [45, §4.1]) as the weak Fourier limit for k → ∞ of rescaled Haar measures on the identity connected components dSL + k+1 in maximal toral -commutative -subgroups of the special linear groups SL k+1 (R). (For details on this construction see §4.2 below.) Relying on connections between cycle index polynomials and Pólya Enumeration Theory, we identify the special linear object acting on the Dirichlet distribution D α k as the dynamical symmetry algebra, in the sense of [26,28], of the Fourier transform D α k . In contrast with the case of L + , we are able to detail the action of the whole -non-commutative -dynamical symmetry algebra, and provide a suitable interpretation of this action in terms of Bayesian statistics. Indeed, one remarkable property [9,29,36] of Dirichlet measures is that their posterior distributions given knowledge on the occurrences of some categorical random variables are themselves Dirichlet measures with different parameters; that is, Dirichlet measures are self-conjugate priors. We show how this property is related to the action of the dynamical symmetry algebra. More precisely, for α ∈ ∆ k−1 and p ∈ (Z + 0 ) k denote by D p α the posterior distribution of the prior D α given atoms of mass p i at point i ∈ [k] (see property iii in §2.2). We prove the following. Theorem 1.2 (see Thm. 4.12). The dynamical symmetry algebra g k of the function D α (see Def. 4.4) is (isomorphic to) the Lie algebra sl k+1 (R) of real square matrices with vanishing trace. Furthermore, if α is chosen in the interior of ∆ k−1 , the universal enveloping algebra U(g k ) naturally acts on an infinite-dimensional linear space O Λα detailed in the proof. Special subalgebras of U(g k ) may be identified, whose actions fix the linear span O Hα ⊆ O Λα of the family of characteristic functionals { D p α } p varying p ∈ (Z + 0 ) k , or the linear span O Λ + α ⊆ O Λα of characteristic functionals of some distinguished improper priors of Dirichlet-categorical posteriors. Theorem 1.1 allows for a partial extension of this result to the infinite-dimensional case of D σ . Since L + and G may be expressed as product measures with D as the only truly infinitedimensional factor, cf. [43], we expect Theorem 1.2 to provide further algebraic insights on these measures. Quasi-invariance of D. (Quasi-)invariance properties of D, G and L + have been studied with respect to different group actions [20,34,35,42]. Given (X, σ) a Riemannian manifold with normalized volume measure σ, let G be some subgroup of (bi-)measurable isomorphisms of (X, B(X)). We are interested in the quasi-invariance of D σ with respect to the group action ψ.η := ψ η where ψ is in G, η is in P(X) and ψ η := η • ψ −1 denotes the push-forward of η via ψ. When X = S 1 and G = Diff(X), the quasi-invariance of D σ with respect to a similar action was a key tool in the construction of stochastic dynamics on P(X) with D σ or the related entropic measure P σ as invariant measures, see [34,38]. Whereas Theorem 1.1 allows for Bochner-Minlos and Lévy Continuity related results to come into play, the non-multiplicativity of D σ (corresponding to the non-infinite-divisibility of the measure) immediately rules out the usual approach to quasi-invariance via Fourier transforms [2,20,42,43]. 
Other approaches to this problem rely on finite-dimensional approximation techniques, variously concerned with approximating the space [34,35], the σ-algebra [20] or the acting group [11,45]. The common denominator here is for the approximation to be a filtration (cf. e.g. [20,Def. 9]) -in order to allow for some kind of martingale convergence -and, possibly, for the approximating objects to be (embedded in) linear structures (cf. e.g. [34,45]). The goal of Theorem 1.2 is ultimately to provide approximating sequences -at the same time of the space X, the σ-algebra on P(X) and the acting group -that are suitable in the sense above. Plan of the work. Preliminary results are collected in §2, together with the definition and properties of Dirichlet measures and an account of the discretization procedure that we dwell upon in the following. In §3 we prove Theorem 1.1. As a consequence, by the classical theory of characteristic functionals we recover known asymptotic expressions for D βσ when β → 0, ∞ is a real parameter (Cor. 3.13, cf. [37, p. 311]), propose a Gibbsean interpretation thereof (Rem. 3.14), and prove analogous expressions for the entropic measure P β σ on compact Riemannian manifolds [41], generalizing the case X = S 1 [34,Prop. 3.14]. In the process of deriving Theorem 1.1 we obtain a moment formula for the Dirichlet distribution in terms of the cycle index polynomials Z n (Thm. 3.3). In light of Pólya Enumeration Theory we interpret this result by means of a coloring problem ( §4.1). This motivates the study of the dynamical symmetry algebra l k of the Humbert function k Φ 2 resulting in the proof of Theorem 1.2. Finally, in §4.3 we study the limiting action of the dynamical symmetry algebra l k when k tends to infinity. Some preliminary results in topology and measure theory are collected in the Appendix. Combinatorial preliminaries. Set and integer partitions. For a subset L ⊆ [n] denote byL the ordered tuple of elements in L in the usual order of [n]. An ordered set partition of [n] is an ordered tupleL :=(L 1 ,L 2 . . . ) of tuplesL i such that the corresponding sets L i , termed clusters or blocks, satisfy ∅ L i ⊆ [n] and i L i = [n]. The order of the tuples inL is assumed ascending with respect to the cardinalities of the corresponding subsets and, subordinately, ascending with respect to the first element in each tuple. A set partition L of [n] is the family of subsets corresponding to an ordered set partition. This correspondence is bijective. For any set partition write L [n] and L r [n] if #L = r, i.e. if L has r clusters. A (integer ) partition λ of n into r parts (write: λ r n) is an integer solution λ ≥ 0 of the system, n · λ = n, λ • = r; if the second equality is dropped we term λ a (integer ) partition of n (write: λ n). We always regard a partition in its frequency representation, i.e. as the tuple of its ordered frequencies (cf. e.g. [3, §1.1]). To a set partition L r [n] one can associate in a unique way a partition λ(L) r n by setting λ i (L) := # {h | #L h = i}. Permutations and cycle index. A permutation π in S n is said to have cycle structure λ, write λ = λ(π), if λ i equals the number of cycles in π of length i for each i. Let S n (λ) ⊆ S n be the set of permutations with cycle structure λ, so that S n (λ(π)) = K π the conjugacy class of π and #S n (λ) = M 2 (λ) := n!/(λ! n λ ) [40, Prop. I.1.3.2]. Let now G < S n be any permutation group. 
The cycle index polynomial of G is defined by We write Z n := Z Sn for the cycle index polynomial of S n , satisfying, for t := (t 1 , . . . , t n ) and t k := (t 1 , . . . , t k ) with k ≤ n, the identities Z n (t) = 1 n! λ n M 2 (λ) t λ , Z n ((a 1) n t) = a n Z n (t) a ∈ R . (2.1) and the recurrence relation Definition 2.1 (Dirichlet distribution). We denote by D α (y) the Dirichlet distribution with parameter α ∈ R k + (e.g. [29]), i.e. the probability measure with density with respect to the k-dimensional Lebesgue measure on the hyperplane of equation y • = 1 in R k , concentrated on (the interior of) ∆ k−1 . Alternatively, for any measurable A ⊆ R k−1 , Whereas both descriptions are common in the literature, the first one makes more apparent property ii below. Namely, write '∼' for 'distributed as' and let Y be any ∆ k−1 -valued random vector. The following properties of the Dirichlet distribution are well-known: i. aggregation (e.g. [9, p. 211, property i • ]). For i = 2, . . . , k set y +i :=(y + y i e i−1 )î. Then, ii. quasi-exchangeability (or symmetry). For all π ∈ S k Y ∼ D α =⇒ Y π ∼ D απ . (2.5) iii. Bayesian property (e.g. [9, p. 212, property iii • ] for the case r = 1). Let W ∈ [k] r be a vector of [k]-valued random variables and P ∈ (Z + 0 ) k be the vector of occurrences defined by and denote by D p α the distribution of Y given P = p, termed here the posterior distribution of D α given atoms with masses p i at points i ∈ [k]. Then, Most properties of the Dirichlet distribution may be inferred from its characteristic functional k Φ 2 , a confluent form of the k-variate Lauricella hypergeometric function k F D (see e.g. [7]). Recall the following representations of c > a > 0 and its confluent form (or second k-variate Humbert function [7, ibid.] The distribution D α is moment determinate for any α > 0 by compactness of ∆ k−1 . Its moments are straightforwardly computed via the multinomial theorem as so that the characteristic functional of the distribution indeed satisfies (cf. [7, §7.4.3]) 2.3. The Dirichlet-Ferguson measure. Notation. Everywhere in the following let (X, τ (X)) be a second countable locally compact Hausdorff topological space with Borel σ-algebra B. We denote respectively by cl A, int A, bd A the closure, interior and boundary of a set A ⊆ X with respect to τ . Recall (Prop. 2.2) that any space (X, τ (X)) as above is Polish, i.e. there exists a metric d, metrising τ , such that (X, d) is separable and complete; we denote by diam A the diameter of A ⊆ X with respect to any such metric d (apparent from context and thus omitted in the notation). Denote by C c (X) (resp. C b (X)) the space of continuous compactly supported (resp. continuous bounded) functions on (X, τ (X)), (both) endowed with the topology of uniform convergence; by C 0 (X) the completion of C c (X), i.e. the space of continuous functions on X vanishing at infinity; by M b (X) (resp. M + b (X)) the space of finite, signed (resp. non-negative) Radon measures on (X, B(X)) -the topological dual of C c (X) and C 0 (X) -endowed with the the vague topology τ v (M b (X)), i.e. the weak* topology, and the induced Borel σ-algebra. Denote further by P(X) ⊆ M + b (X) (cf. Cor. 5.3) the space of probability measures on (X, B(X)). If not otherwise stated, we assume P(X) to be endowed with the vague topology τ v (P(X)) and σ-algebra B v (P(X)). On M + b (X) (resp. on P(X)) we additionally consider the narrow topology τ n (M + b (X)) (resp. τ n (P(X))), i.e. the topology induced by duality with C b (X). 
Finally, given any measure ν ∈ M b (X) and any bounded measurable function g on (X, B(X)), denote by νg the expectation of g with respect to ν and by g * : ν → νg the linear functional induced by g on M b (X) via integration. The following statement is well-known. A proof is sketched to establish further notation. Proposition 2.2. A topological space (X, τ (X)) is second countable locally compact Hausdorff if and only if it is locally compact Polish, i.e. such that τ (X) is a locally compact separable completely metrizable topology on X. Moreover, if (X, B(X)) additionally admits a fully supported diffuse measure ν, then (X, τ (X)) is perfect, i.e. it has no isolated points. Sketch of proof. Let (αX, τ (αX)) denote the Alexandrov compactification of (X, τ (X)) and α : X → αX denote the associated embedding. Notice that αX is Hausdorff, for X is locally compact Hausdorff; hence αX is metrizable, for it is second countable compact Hausdorff, and separable, for it is second countable metrizable, thus Polish by compactness. Finally, recall that X is (homeomorphic via α to) a G δ -set in αX and every G δ -set in a Polish space is itself Polish. The converse and the statement on perfectness are trivial. Partitions. Fix σ ∈ P(X). We denote by P k (X) the family of measurable non-trivial kpartitions of (X, B, σ), i.e. the set of tuples X := (X 1 , . . . , X k ) such that Given X ∈ P k (X) we say that it refines A in B if X i ⊆ A whenever X i ∩ A = ∅, respectively that it is a continuity partition for σ if σ(bd X i ) = 0 for all i ∈ [k]. We denote by P k (A ⊆ X), resp. P k (X, τ (X), σ) the family of all such partitions. Given X 1 ∈ P k 1 (X) and X 2 ∈ P k 2 (X) with k 1 < k 2 we say that X 2 refines X 1 , write . We denote the family of all such null-arrays by Na(X). Analogously to partitions, we write with obvious meaning of the notation Na(A ⊆ X) and Na(X, τ (X), σ). If σ is diffuse (i.e. atomless), then lim h σX h,i h = 0 for every choice of X h,i h ∈ X h with (X h ) h ∈ Na(X). Given a (real-valued) simple function f and a partition X ∈ P k (X), we say that f is locally Given a function f in C c we say that a sequence of (measurable) simple functions (f h ) h is a good approximation of f if |f h | ↑ h |f | and lim h f h = f pointwise. The existence of good approximations is standard. The Dirichlet-Ferguson measure. By a random probability over (X, B(X)) we mean any probability measure on P(X). For X ∈ P k (X) and η in P(X) set η X := (ηX 1 , . . . , ηX k ) and Recall (cf. [39]) that, if σ ∈ P(X) is diffuse, then for every k ∈ N 1 and y ∈ int ∆ k−1 there exists X ∈ P k (X) such that σ X = y. , Fleming-Viot with parent-independent mutation [8]; see e.g. [36, §2] for an explicit construction) is the unique random probability over (X, B(X)) such that Existence was originally proved in [9] by means of Kolmogorov Extension Theorem (cf. Fig. 1 below). A construction on spaces more general than in our assumptions is given in [18]. Other characterizations are available (see e.g. [36]). Since X is Polish (Prop. 2.2), in (2.11) it is in fact sufficient to consider u continuous with |u| < 1 and, by the Portmanteau Theorem, X ∈ P k (X, τ (X), σ) (cf. e.g. [41, p. 15]). Let P be a P(X)-valued random field on a probability space (Ω, F , P) and recall the following properties of D σ , to be compared with those of D α , i. realization properties: §4, Thm. 2], with supp P (ω) = supp σ [9, §3, Prop. 1] or [25]. In particular, if σ is diffuse and fully supported, then I is countable and {x i } i is P-a.e. 
dense in X. The sequence (η i ) i is distributed [12] according to the stick-breaking process. In particular, (independent also of the η i 's [6]) and σ-distributed. ii. σ-symmetry: for every measurable σ-preserving map ψ : X → X, i.e. such that ψ σ = σ, [15,Lem. 9.0] together with (2.10) and the quasi-exchangeability of D α ). In particular, P X is distributed as a function of σ X for every X ∈ P k (X) for every k. iii. Bayesian property [9, §3, Thm. 1]: Let W := (W 1 , . . . , W r ) be a sample of size r from P , conditionally i.i.d., and denote by D W σ the distribution of P given W, termed the posterior distribution of D σ given atoms W. Then, Discretizations. In order to consider finite-dimensional marginalizations of D βσ , we introduce the following discretization procedure (cf. [33] for a similar construction). Any partition X ∈ P k (X) induces a discretization of X to [k] by collapsing X i ∈ X to an arbitrary point in X i , uniquely identified by its index i ∈ [k], i.e. via the map pr X : The finite σ-algebra σ 0 (X) generated by X induces then a discretization of P(X) to the space P([k]) via the mapping µ → i µX i δ i . Since the latter space is in turn homeomorphic to the standard simplex ∆ k−1 via the mapping i y i δ i → y, every choice of X ∈ P k (X) induces a discretization of P(X) to ∆ k−1 via the resulting composition ev X = pr X . It is then precisely the content of (2.10) that any partition X as above induces a discretization of the tuple ((X, σ), (P(X), D βσ )) to the tuple (([k], α), (∆ k−1 , D α )), where α := β ev X σ is identified with the measure i α i δ i on [k] (cf. Fig. 1 below). Going further in this fashion, the subgroup S X of bi-measurable isomorphisms ψ of (X, B(X)) respecting X, i.e. such that ψ (X) := (ψ(X 1 ), . . . , ψ(X k )) = X up to reordering, is naturally isomorphic to the symmetric group S k , the bi-measurable isomorphism group Iso The canonical action of S X on X, corresponding to the canonical action of S k on [k], lifts to the action of S k on ∆ k−1 by permutation of its vertices, that is, to the action on P([k]) defined by π.y := π y under the identification of y with the measure i y i δ i . The following result is a rather obvious generalization of the latter fact, obtained by substituting degeneracy maps with arbitrary maps. We provide a proof for completeness. Remark 3.2. Assuming the point of view of conditional expectations rather than that of marginalizations, (2.10) may be restated as where σ 0 (X) denotes as before the σ-algebra generated by some partition X ∈ P k (X). The aggregation property (2.4) is but an instance of the tower property of conditional expectations, whereas its generalization (3.2) is a consequence of the σ-symmetry of D σ . Theorem 3.3 (Moments of D α ). Fix α > 0 and s ∈ R k . Then, the following identity holds The statement is equivalent toμ n =ζ n , which we prove in two steps. Step 1. The following identity holds By induction on n with trivial (i.e. 1 = 1) base step n = 1. Inductive step. Assume for every α > 0 and s in R kμ If k ≥ 2, we can choose j = . Applying (3.6) to both sides of (3.4) yields where the latter equality holds by lettingμ −1 := 0. Letting now α := α + e j and applying the inductive hypothesis (3.5) with α in place of α yields for every j = . By arbitrariness of j = , the bracketed quantity is a polynomial in the sole variables s and α of degree at most n − 1 (obviously, the same holds also in the case k = 1). 
As a consequence (or trivially if k = 1), every monomial not in the sole variable s cancels out by arbitrariness of s, yielding The latter quantity is proved to vanish as soon as in fact a particular case of the well-known Chu-Vandermonde identity Step 2. It holds thatμ n =ζ n . By strong induction on n with trivial (i.e. 1 = 1) base step n = 0. The inductive hypothesis, (3.4) and (3.6) yield By arbitrariness of j this implies thatζ n [s, α] −μ n [s, α] is constant as a function of s (for fixed α), hence vanishing by choosing s = 0. Remark 3.4. Here, we gave an elementary combinatorial proof of the moment formula for D α , independently of any property of the distribution. Notice for further purposes that, defining µ n [s, α] as in (3.3), the statement holds with identical proof for all α in C k such that α • ∈ Z − 0 . For further representations of the moments see Remark 3.11 below. Proposition 3.5. The function k Φ 2 [ts; 1; α] is the exponential generating function of the polynomials Z n , in the sense that, for all α ∈ ∆ k−1 , More generally, Proof. Recalling that k Φ 2 [α; α • ; s] = D α (s) by (2.9) and noticing that α • = 1, Theorem 3.3 provides an exponential series representation for the characteristic functional of the Dirichlet distribution in terms of the cycle index polynomials of symmetric groups, viz. Replacing s with −its above and using (2.1) to extract the term t n from each summand, the conclusion follows. The second statement has a similar proof. Remark 3.6. It is well-known that the characteristic functional of a measure µ on R d (or, more generally, on a nuclear space) is always positive definite, i.e. it holds that The following Lemma also appeared in [22]. Proof. Since D α is moment determinate, it suffices -by compactness of ∆ k−1 and Stone-Weierstraß Theorem -to show the convergence of its moments. By Theorem 3.3 (cf. also (2.1)), As a consequence of the Lemma further confluent forms of k Φ 2 may be computed: 3.2. Infinite-dimensional statements. Together with the introductory discussion, Proposition 3.1 suggests the following Mapping Theorem for D σ , to be compared with the analogous result for the Poisson random measure P σ over (X, B(X)) (see e.g. [17, §2.3 and passim]). The σsymmetry of D βσ and the quasi-exchangeability and aggregation property of D α are trivially recovered from the Theorem by (2.10). Theorem 3.9 (Mapping theorem for D σ ). Let (X, τ (X), B(X)) and (X , τ (X ), B(X )) be second countable locally compact Hausdorff spaces, ν a non-negative finite measure on (X, B(X)) and f : (X, B(X)) → (X , B (X)) be any measurable map. Then, Proof. Choosing X :=(g −1 (1), . . . , g −1 (k)), the characterization (2.11) is equivalent to the requirement that (g ) D ν = D g ν for any g : X → [k] such that every ν-representative of g is surjective, which makes X non-trivial for ν. Denote by S(X, ν, k) the family of such functions and notice that if h ∈ S(X , f ν, k), then g := h • f ∈ S(X, ν, k). The proof is now merely typographical: where the second equality suffices to establish that (f ) D ν is a Dirichlet-Ferguson measure by arbitrariness of h, while the third one characterizes its intensity as f ν. We denote by P(P(X)) the space of probability measures on (P(X), B n (P(X))), endowed with the narrow topology τ n (P(P(X))) induced by duality with C b (P(X)). We are now able to prove the following more general version of Theorem 1.1. Theorem 3.10 (Characteristic functional of D βσ ). 
Let (X, τ (X), B(X)) be a second countable locally compact Hausdorff space, σ a probability measure on X and fix β > 0. Then, Moreover, the map ν → D ν is narrowly continuous on M + b (X). Proof. Characteristic functional. Fix f in C c and let (f h ) h be a good approximation of f , locally constant on X h := (X h,1 , . . . , X h,k h ) with values s h for some (X h ) h ∈ Na(X). Fix n > 0 and set α h := βσ X h . Choosing u : thus, by Dominated Convergence Theorem, continuity of Z n and arbitrariness of f , ∀f ∈ C c µ D βσ n [tf * ] =n! β −1 n Z n t 1 βσf 1 , . . . , t n βσf n , t ∈ R . Using (2.1) to extract the term t n from Z n and substituting t with i t on the right-hand side, the conclusion follows by definition of exponential generating function. Continuity. Assume first that (X, τ (X)) is compact. By compactness of (X, τ (X)), the narrow and vague topology on P(X) coincide and P(X) is compact as well by Prokhorov Theorem. Let (ν h ) h∈N be a sequence of finite non-negative measures narrowly convergent to ν ∞ . Again by Prokhorov Theorem and by compactness of P(X) there exists some τ n (P(P(X)))-cluster point D ∞ for the family {D ν h } h . By narrow convergence of ν h to ν ∞ , continuity of Z n and absolute convergence of D · (f ), it follows that lim h D ν h = D ν∞ pointwise on C c (X), hence, by Corollary 5.3, it must be D ∞ = D ν∞ . In the case when X is not compact, recall the notation established in Proposition 2.2, denote by B(αX) the Borel σ-algebra of (αX, τ (αX)) and by P(αX) the space of probability measures on (αX, B(αX)). By the Continuous Mapping Theorem there exists the narrow limit τ n (P(X))lim h α ν h = α ν ∞ , thus, by the result in the compact case applied to the space (αX, B α ) together with the sequence α ν h , τ n (P(P(X)))-lim The narrow convergence of ν h to ν ∞ implies that α ν ∞ does not charge the point at infinity in αX, hence the measure spaces (X, B(X), ν * ) and (αX, B(αX), α ν * ) are isomorphic for * = h, ∞ via the map α, with inverse α −1 defined on im α αX. The continuity of α −1 and the Continuous Mapping Theorem together yield the narrow continuity of the map (α −1 ) . The conclusion follows by applying (α −1 ) to (3.8) and using the Mapping Theorem 3.9. In the case when ν h converges to ν ∞ in total variation, the continuity statement in the Theorem and the asymptotics for β → 0 in Corollary 3.13 below were first shown in [37,Thm. 3.2], relying on Sethuraman's stick-breaking representation. The following result was also obtained, again with different methods, in [37]. where, in the first case, δ : X → P(X) denotes the Dirac embedding x → δ x . Proof. The existence of D 0 σ and D ∞ σ as narrow cluster points for {D βσ } β>0 follows by Corollary 3.12. Retaining the notation established in Theorem 3.10, Corollary 3.8 yields for all k hence the order of the limits in each left-hand side of (3.11) may be exchanged, for the convergence in k is uniform with respect to β. This shows (3.9). By Theorem 3.10, βσ may be substituted with any sequence (β h σ h ) h with lim h β h = 0, ∞ and {σ h } h a tight family. Observe that, despite the similarity with Lemma 3.7, Corollary 3.13 is not a direct consequence of the former, since the evaluation map ev X is never continuous. Let Z H β := exp(−βH) , F β := −β −1 ln Z H β and G β :=(Z H β ) −1 exp(−βH) respectively denote the partition function, the Helmholtz free energy and (the distribution of) the Gibbs measure of the system. 
It was heuristically argued in [34, §3.1] that -at least in the case when (X, B, σ) is the unit interval - where: S is now an entropy functional (rather than an energy functional), Z β is a normalization constant and β plays the rôle of the inverse temperature. Here, D * σ denotes a non-existing (!) uniform distribution on P(X). Borrowing again the terminology, this time in full generality, one can say that for small β (i.e. large temperature), the system thermalizes towards the "uniform" distribution δ σ induced by the reference measure σ on the base space, while for large β it crystallizes to δ σ , so that all randomness is lost. Consistently with property i of D σ , we see that E D ∞ σ η i = 0 and E D 0 σ η i = δ i1 for all i, where δ ab denotes the Kronecker symbol; in fact, both statements hold with probability 1. It is worth noticing that a different interpretation for the parameter β has been given in [22], where the latter is regarded as a 'time' parameter in the definition of a PCOC. Remark 3.15. By the Continuous Mapping Theorem, both the continuity statement in Theorem 3.10 and the asymptotic expressions in Corollary 3.13 hold, mutatis mutandis, for every narrowly continuous image of D βσ , hence, for instance, for the entropic measure P β σ [34,41]. This generalizes [34, 3.14] and the discussion for the entropic measure thereafter. Corollary 3.16 (Alternative construction of D βσ ). Assume there exists a nuclear function space S ⊆ C 0 (X), continuously embedded into C 0 (X) and such that S ∩ C c (X) is norm-dense in C 0 (X) and dense in S. Then, there exists a unique Borel probability measure on the dual space S , namely D βσ , whose characteristic functional is given by the extension of (3.7) to S. Proof. By the classical Bochner-Minlos Theorem (see e.g. [10, §4.2, Thm. 2]), it suffices to show that the extension to S, say χ, of the functional (3.7) is a characteristic functional. By the convention in (2.2), χ(0 S ) = χ(0 Cc(X) ) = 1. The (sequential) continuity of χ on S follows by that on C 0 (X) and the continuity of the embedding S ⊆ C 0 (X). It remains to show the positivity (see Rmk. 3.6) of χ, which can be checked only on S ∩ C c (X) by · -density of the inclusions S ∩ C c (X) ⊆ C 0 (X). The positivity of χ restricted to C c (X) follows from the positivity of k Φ 2 in Remark 3.6 by approximation of f with simple functions as in the proof of Theorem 3.10. Remark 3.17. Let us notice that the assumption of Corollary 3.16 is satisfied, whenever X is (additionally) either finite (trivially), or a differentiable manifold, or a topological group (by the main result in [1]). In particular, when X = R d , we can choose S = S(R d ), the space of Schwartz functions on R d . Since f * is τ n (P(X))-continuous for every f ∈ C b (X) and bounded by f , the map G is continuous. Proof. The continuity of D β · is proven in Theorem 3.10. By e.g. [9,Thm. 3] for all f ∈ C c (X) one has D βσ f * = σf , hence G inverts D β · on its image. Finite-dimensional statements. Multisets. Given a set S, a (finite integer-valued ) S-multi-set is any function f : S → N 1 such that #f is finite, where # denotes integration on S with respect to the counting measure. We Recall that the number of [n]-multi-sets with cardinality r is r n /r! (see e.g. [40, §I.1.2]). 
• the symmetry property (2.5) when g = π ∈ S k and, more generally, Proposition 3.1; • the aggregation property (2.4); • the marginalization (2.10) (recall that pr X = ev X ); • the symmetry property (2.12) when f = ψ is measure preserving and, more generally, Theorem 3.9; the commutation of the solid sub-diagram delimited by the two dashed triangles corresponds to the requirement of Kolmogorov consistency. We say that two k-colorings where p k,i [t] := 1 · t i with 1 ∈ R k denotes the i th k-variate power sum symmetric polynomial. In the following we consider an extension of PET to multisets of colors and explore its connections -arising in the case G = S n -with the Dirichlet distribution D α . A different approach in terms of colorings, limited to the case α • = 1, was briefly sketched in [16, §7]. Let s α be an integer-valued multiset with α ∈ R k + , henceforth a palette. As before, we understand the elements s 1 , . . . , s k of its underlying set this is the coefficient of the monomial t Corollary 4.3. Let S n,k,r denote the set of S n -equivalence classes ϕ • of α-shadings of [n] such that α ≥ 0 and α • = r. Then, the probability p α h 1 ,...,h k of some ϕ • uniformly drawn from S n,k,r having exactly h i occurrences of the i th color satisfies Proof. The number of palettes with total number of shades r equals the number r n /n! of integer-valued [n]-multisets of cardinality r, thus, choosing r = α • , hence, by homogeneity The conclusion follows by Corollary 4.2 and Theorem 3.3. The study of D α in the case when α • = 1 is singled out as computationally easiest (as suggested by Theorem 3.3, noticing that 1 n = n!), α representing in that case a probability on [k], as detailed in §2. For these reasons, this is often the only case considered (cf. e.g. [16]). On the other hand though, the general case when α > 0 is the one relevant in Bayesian non-parametrics, since posterior distributions of Dirichlet-categorical and Dirichlet-multinomial priors do not have probability intensity. The above coloring problem suggests that the case when α ∈ (Z + ) k is interesting from the point of view of PET, since it allows for some natural operations on palettes, corresponding to functionals of the distribution. Indeed, we can change the number of colors and shades in a palette s α by composing any permutation of the indices [k] with the following elementary operations: • (i) 'widen', respectively (ii) 'narrow the color spectrum', by adding a color, say s k+1 , respectively removing a color, say s k . That is, we consider new palettes (s ⊕ s k+1 ) α⊕α k+1 , respectively (s 1 , . . . , s k−1 ) (α 1 ,...,α k−1 ) ; • (iii) 'reduce color resolution' by regarding two different colors, say s i and s i+1 , as the same, relabeled s i . In so doing we regard the shades of the former colors as distinct shades of the new one, so that it has α i + α i+1 shades. That is, we consider the new palette (sî) α +i ; • (iv) 'enlarge', respectively (v) 'reduce the color depth', by adding a shade, say the α th i+1 , to the color s i , respectively removing a shade, say the α th i , to the color s i . This latter operation we allow only if α i > 1, so to make it distinct from removing the color s i from the palette. That is, we consider the new palettes s α+e i , resp. s α−e i when α i > 1. 
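To make the coloring probabilities concrete, one natural reading of p^alpha_{h_1,...,h_k}, consistent with drawing colour frequencies from D_alpha and then n colours from those frequencies, is the Dirichlet-multinomial probability built from rising factorials. The sketch below is illustrative and assumes that reading; it is not code from the text. It evaluates this probability and checks it against the two-stage simulation.

import numpy as np
from scipy.special import gammaln

def dirichlet_multinomial_pmf(h, alpha):
    # P(counts = h) when Y ~ Dirichlet(alpha) and counts ~ Multinomial(n, Y), n = sum(h);
    # the rising factorials (alpha_i)^{(h_i)} and (alpha_.)^{(n)} are written via log-Gamma.
    h, alpha = np.asarray(h), np.asarray(alpha)
    n, a = h.sum(), alpha.sum()
    log_p = (gammaln(n + 1) - gammaln(h + 1).sum()
             + gammaln(alpha + h).sum() - gammaln(alpha).sum()
             - gammaln(a + n) + gammaln(a))
    return float(np.exp(log_p))

rng = np.random.default_rng(2)
alpha = np.array([1.5, 2.0, 0.5])
h = np.array([2, 1, 1])              # target occurrences of each colour, n = 4

# Two-stage simulation: colour frequencies Y ~ D_alpha, then n = 4 coloured draws from Y.
Y = rng.dirichlet(alpha, size=100_000)
counts = np.array([rng.multinomial(h.sum(), y) for y in Y])
print(dirichlet_multinomial_pmf(h, alpha), np.mean(np.all(counts == h, axis=1)))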
Increasing the color resolution of a multi-shaded color, say s k with α k > 1 shades, by splitting it into two colors, say s k and s k+1 with α k > 0 and α k+1 > 0 shades respectively and such that α k + α k+1 = α k , is not an elementary operation. It can be obtained by widening the spectrum of the palette by adding a color s k+1 with α k+1 shades and reducing the color depth of the color s k to α k . Thus, this operation is not listed above. We do not allow for the number of shades of a color to be reduced to zero: although this is morally equivalent to removing that color, the latter operation amounts more rigorously to remove the color placeholder from the palette. The said elementary operations are of two distinct kinds: (i)-(iii) alter the number of colors in a palette, while (iv)-(v) fix it. We restrict our attention to the latter ones and ask how the probability p α h 1 ,...,h k changes under them. By Corollary 4.3 this is equivalent to study the corresponding functionals of the n th moment of the Dirichlet distribution. For fixed k, we address all the moments at once, by studying the moment generating function Namely, we look for natural transformations yielding the mappings where C α is some constant, possibly dependent on α. Here 'natural' means that we only allow for meaningful linear operations on generating functions: addition, scalar multiplication by variables or constants, differentiation and integration. For practical reasons, it is convenient to consider the following construction. Definition 4.4 (Dynamical symmetry algebra of k Φ 2 ). Denote by g k the minimal Lie algebra containing the linear span of the operators E ±1 , . . . , E ±k in (4.2) endowed with the bracket induced by their composition. Following [26], we term the Lie algebra g k the dynamical symmetry algebra of the function k Φ[α; s] := k Φ 2 [α; α • ; s], characterized below. 4.1.2. Dynamical symmetry algebras. We compute now the dynamical symmetry algebra of the function k Φ[α; s] := k Φ 2 [α; α • ; s], in this section always regarded as the meromorphic extension (2.9) of the Fourier transform of D α (s) in the complex variables α, s ∈ C k . The choice of complex variables is merely motivated by this identification and every result in the following concerned with complex Lie algebras holds verbatim for their split real form. For dynamical symmetry algebras of Lauricella hypergeometric functions see [26,27] and references therein; we refer to [13] for the general theory of Lie algebra (representations) and for Weyl groups' theory. Notation and definitions. Denote by E i,j varying i, j ∈ [k +1] the canonical basis of Mat k+1 (C), with [E i,j ] m,n = δ mi δ nj , where δ ab is the Kronecker delta, and by A * the conjugate transpose of a matrix A. The following is standard. Then, the complex Lie sub-algebra l k of gl k+1 (C) generated by these vectors is l k = sl k+1 (C), with sl 2 -triples Denote further by f k < l k the sub-algebra spanned by {e i,j , f j,i , h i,j } i,j∈ [k] . Then, f k ∼ = sl k (C). Everywhere in the following we regard l k together with the distinguished Cartan sub-algebra h k < l k of diagonal traceless matrices spanned by the basis {h 0,j } j∈[k] ; the root system Ψ k induced by h k , with simple roots γ j corresponding to the sl 2 -triples of the vectors e j−1,j for j ∈ [k]; positive, resp. negative, roots Ψ ± k corresponding to the spaces of strictly upper, resp. strictly lower, triangular matrices n ± k . 
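Before using this structure, the type-A relations can be checked directly on elementary matrices. The following sketch, an independent numerical check rather than code from the text, verifies the sl_2-triple relations [h, e] = 2e, [h, f] = -2f, [e, f] = h for e = E_{i,j}, f = E_{j,i}, h = E_{i,i} - E_{j,j}, and confirms that these generators span a space of dimension (k+1)^2 - 1, the dimension of sl_{k+1}.

import numpy as np
from itertools import combinations

k = 3                                # work inside gl_{k+1}(R)
dim = k + 1

def E(i, j):
    m = np.zeros((dim, dim))
    m[i, j] = 1.0
    return m

def bracket(a, b):
    return a @ b - b @ a

# sl_2-triple relations for every pair i < j: [h, e] = 2e, [h, f] = -2f, [e, f] = h.
for i, j in combinations(range(dim), 2):
    e, f, h = E(i, j), E(j, i), E(i, i) - E(j, j)
    assert np.allclose(bracket(h, e), 2 * e)
    assert np.allclose(bracket(h, f), -2 * f)
    assert np.allclose(bracket(e, f), h)

# The off-diagonal E_{i,j} together with the differences E_{i,i} - E_{i+1,i+1} span the
# traceless matrices, so the generated Lie algebra has dimension (k+1)^2 - 1 = dim sl_{k+1}.
gens = [E(i, j) for i in range(dim) for j in range(dim) if i != j]
gens += [E(i, i) - E(i + 1, i + 1) for i in range(dim - 1)]
print(np.linalg.matrix_rank(np.stack([g.ravel() for g in gens])), (k + 1) ** 2 - 1)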
The inclusion f k < l k induces the decomposition of vector spaces (not of algebras) The subscript k is omitted whenever apparent from context. For fixed α ∈ C k regard k Φ[α; · ] as a formal power series and let f α : C 2k+1 s,u,t −→ C be Let A ⊆ C k . It is readily seen that the functions {f α } α∈A are (finitely) linearly independent, since so are the functions {f α (1, u, 1) ∝ u α } α∈A . Set and define the following differential operators, acting formally on O, where i, j ∈ [k], i = j and ∇ y := (∂ y 1 , . . . , ∂ y k ) for y = u, s. Term the operators E α i , resp. E −α i , raising, resp. lowering, operators. Finally, let g k be the complex linear span of the operators (4.4) endowed with the bracket induced by their composition. Actions on spaces of holomorphic functions. Let Λ α := α + Z k and set, for every ∈ R + , Proof. The statement on J α i is straightforward. Moreover, Remark 4.7. The variables u and t are merely auxiliary (cf. [28, §1]). The operators do not depend on the parameter α, rather, the subscripts indicate which indices they affect. Heuristically, the action of the operators (4.4) given in Lemma 4.6 may be derived from that [26, (1.5)] of operators in the dynamical symmetry algebra of k F D by a formal contraction procedure [26, p. 1398], letting (in the notation of [26]) α = 0, β = α, γ = α • and dropping redundancies. Remark 4.8. If α • = 1, the action of the lowering operators E −α i vanishes. This is natural when regarding f α as a formal power series, whereas it is conventional when regarding f α as a meromorphic function, for the functions (1 − α • )f α−e i are in fact -after cancellationswell-defined, not identically vanishing, and holomorphic in s even for α • = 1. The convention here reads 0 × ∞ = 0, which is consistent with the usual convention in measure theory when we identify α • − 1 with the quantity (σ − δ y )X for any y in X; the reason for such identification will be apparent in §4.3 below. where i, j = 0, . . . , k and p, q = 1, . . . , k with i = j, p = q and, conventionally, Proof. Given the action of the operators in (4.5) straightforward computations yield Proposition 4.11. Let ρ : l k → End(O) be the linear map defined by with j > i. Then, for any fixed α ∈ C k , the pair ρ α : Lie algebra representation of l k with image g k O Λα . Furthermore, the functions f α transform as basis vectors for ρ α , in the sense that for every v in the basis for l k and every α in Proof. By Corollary 4.9, ρ α is a well-defined linear morphism into End(O Λα ). The fact that f α transforms as a basis vector of O Λα is an immediate consequence of Lemma 4.6. For α ∈ Λ α such that α > 1, the actions of operators in (4.4) on O α are mutually different again by Lemma 4.6, hence ρ α is injective. In order to show that ρ α l = g O Λα is a Lie algebra of type A k and that ρ α is a Lie algebra representation, it suffices to verify Serre relations [13, §18.1] of type A for the operators ρ α v with v = v j in an sl 2 -triple corresponding to the simple root γ j in Ψ k . These are readily deduced from Lemma 4.10. In order to prove (i)-(ii) it suffices to show that, for all α ∈ Λ + α and ∈ Z + , one has , v in the basis of f, ∈ N 1 and w in the basis for h ⊕ r + . All of the above follow immediately from Lemma 4.6. Notably, since α • = 1, h acts on O α precisely by weight α. Since α ∈ ∆ k−1 , then f α+p ( · , 1, 1) = D α+p ( · ). By the Bayesian property of D α the space O Hα is spanned precisely by the Fourier transforms of the form D p α . It remains to show that U(r + ). 
The uniqueness of v follows by the fact that, since r + is Abelian, U(r + ) coincides with the (Abelian) symmetric algebra generated by r + (see [13, §17.2]). This proves (iii). In order to show (iv), recall (e.g. [13, §12.1]) that the Weyl group W k of Ψ k is isomorphic to S k+1 and its action on Ψ k may be canonically identified as dual to the action of S k+1 on h k via conjugation by permutation matrices in P k+1 ∼ = S k+1 < GL(h k ) ∼ = GL k+1 (C). Let P 2:k+1 < GL k+1 (C) denote the subgroup of permutations matrices whose action on Mat k+1 (C) fixes the first row and column. Clearly S k ∼ = P 2:k+1 < P k+1 . Composing the isomorphism ρ α with the identification of the action of P k+1 above completes the proof. 4.2. Invariant measures on simplices and affine spheres. In the following we lay out a comparison between the results obtained in the previous section and some known facts about the multiplicative infinite-dimensional Lebesgue measure L + [43]. For the reader's convenience, let us briefly recall the construction of L + given in [45]. to be compared with the density (2.3) of the Dirichlet distribution. The k · -invariance of the measures λ k,r k,β on rescaled affine spheres corresponds (a) in the finite-dimensional case -to the projective invariance of the measures L α with respect to the same action, with Radon-Nikodým derivative for L α -a.e. y ∈ M k−1 and (b) in the infinite-dimensional case -to the projective invariance [43, 4.1] of L + β,σ with respect to the action of the group of multipliers exp(C c (X)) M + b (X) given by g.η := g · η where g = e h ∈ C + b (X) for some h ∈ C c (X). The Radon-Nikodým derivative satisfies in this case (see [43, 4.1]) The commutative action of h k . It is the content of Theorem 4.12(i) that the characteristic functionals of the measures D α , varying α ∈ int ∆ k−1 , are projectively invariant under the action of the maximal toral subalgebra h k < l k in the representation ρ α . Since h k acts on O α by weight α (see the proof of Thm. 4.12(i)), for arbitrary J t := t 1 J α 1 + · · · + t k J α k ∈ h k one has The non-commutative action of l k and a family of distinguished improper priors. In contrast to the case of the measures L α on affine spheres -where only the action of the commutative subgroup dSL + k (R) < SL k (R) is taken into account -in the case of the Dirichlet distributions D α it is possible to detail the full non-commutative action of the algebra l k on their characteristic functionals. Incidentally, let us notice that the acting object, although of special linear type in both cases, is a (subgroup of a) Lie group in the first case, but the corresponding Lie algebra in the latter case. This is because the action is, in the first case, an action on measures themselves, whereas, in the second case, on their characteristic functionals. If α ∈ int ∆ k−1 , then (a) the action of basis elements in r + k amounts to take (characteristic functionals of) Dirichlet-categorical posteriors; it fixes the space O Hα of (characteristic functionals of) such posteriors. On the other hand, (b) the action of basis elements in r − k amounts to take (characteristic functionals of) Dirichlet-categorical priors; such priors should be allowed to be improper, in the sense that they are no longer probability measures, but rather (in-)finite definite (i.e., positive or negative, not signed) measures. 
Indeed, if we letD α be any such improper prior, with density given by (2.3) in the case when α ∈ Λ + α , thenD α has sign given by The action of r − k fixes the space O Λ + α of (characteristic functionals of) all such priors and vanishes on the line M α,0 , the singular set of the normalization constant B[α ] −1 . Finally, (c) the action of basis elements in f k contains every non-trivial combination of the actions (a) and (b), and fixes isoplethic hypersurfaces M α, , i.e. those where the intensity α has constant total mass α • . In this framework, the case α ∈ bd ∆ k−1 is spurious, since the intensity measure α should always be assumed fully supported. 4.3. Infinite-dimensional statements. For a ∈ R we denote by M >a b (X) the space of finite signed measures ν in M b (X) such that νX > a. Theorem 4.13. Let (X, τ (X), B(X)) be a second countable locally compact Hausdorff space and ν be a diffuse fully supported non-negative finite measure on X. Let further and Then, (iii) let σ be a diffuse fully supported probability measure on (X, τ (X)) and let further (X h ) h ∈ Na(X, τ (X), σ). For σ-a.e. x, such that X h,i h ↓ h {x}, and for every good approximation (f h ) h of f , locally constant on X h and uniformly convergent to f , there exist the pointwise limiting rescaled actions Proof. The functional Φ[ν, f ] is well-defined in the first place since νX > 0. For c, t > 0 denote by P c,t ⊆ R n the polydisk y ∈ R n | |y i | ≤ ct i . By induction and (2.2) it is not difficult to show that max Pc,t |Z n | = Z n [c(t 1) n ]; moreover, by (2.1) and Theorem 3.3, the latter equals t n c n /n!. As a consequence, for arbitrary ν in M >0 b (X) and f ∈ C c (X), letting y i := νf i above, Let now A be in B and (X h ) h as in (ii). Fix f in C c (X), set α h := ν X h and let (f h ) h be a good approximation of f , locally constant on X h with values s h . Equation (4.5) yields by summation More explicitly, since f h is constant on each X h,i with value s h,i , Proposition 3.5 yields Since |f h | ≤ |f | pointwise, the sequence f i h h converges strongly in L 1 ν for every i ≤ n for every n ∈ N 1 , thus by continuity of Z n , there exists the limit The proof of the statement for E A,−B is analogous. This completes the proof of (ii). The requirement that νX > 1 is necessary to the convergence of Φ[ν − δ y , f ] for y ∈ X in the definition of E A,−B , whereas it may be relaxed to νX > 0 in the case of E A . We will make use of this fact in the proof of (iii). Fix now x in X and let i h := i h (x) be such that X h,i h ↓ h {x}. By Lemma 5.1, the sequence (i h ) h is unique for σ-a.e. x. With the same notation of (ii), let now A = X h,i h in (4.7). Then, thus, (4.8) and (4.9) yield, together with the continuity of y → D σ+δy (f * ) for fixed f and σ, By the Bayesian property D x σ = D σ+δx , this yields the conclusion for the limiting raising action. Finally, since σ is a probability measure, (α h ) • = 1 for all h, thus by Lemma 4.10, where the second equality for the first limiting action follows by (3.11). In all three cases, independence of the limits from the chosen (good) approximation is straightforward. Appendix. We collect here some results in topology and measure theory. Lemma 5.1. Let (X, τ (X), B, σ) be a second countable locally compact Hausdorff Borel measure space of finite diffuse fully supported measure. Then, for every (X h ) h ∈ Na(X, τ (X), σ) for σ-a.e. x in X there exists a unique sequence (X h,i h ) h , with i h := i h (x), such that X h X h,i h ↓ h {x}. Proof. 
Proposition 2.2 justifies well-posedness of the requirements in the definition of (X h ) h . Without loss of generality, each X h,i may be chosen to be closed by replacing it with its closure cl X h,i = X h,i ∪ bd X h,i . Hence X h may be chosen to be consisting of closed sets (disjoint up to a σ-negligible set) with non-empty interior. It follows by the finite intersection property that every decreasing sequence of sets (X h,i h ) h such that X h,i h ∈ X h admits a non-empty limit, which is a singleton because of the vanishing of diameters. Vice versa, however chosen (X h ) h , for every point x in X it is not difficult to construct a (possibly non-unique) sequence X h,i h (with i h := i h (x)) convergent to x and such that X h,i h ∈ X h . Furthermore, letting x be a point for which there exists more than one such sequence, we see that for every h the point x belongs to some intersection X h,i 1 ∩ X h,i 2 ∩ . . . , hence, since every partition has disjoint interiors by construction, x ∈ bd X h,i 1 ∩ bd X h,i 2 ∩ . . . . Since for every h and i ≤ k h each set X h,i is a continuity set for σ, the whole union ∪ h≥0 ∪ i∈[k h ] bd X h,i is σ-negligible, thus so is the set of points x considered above, so that for σ-a.e. x there exists a unique sequence (X h,i h ) h such that X h,i h ∈ X h and lim h X h,i h = {x} and x belongs to each X h,i h in the sequence. Finally, recall the following form of Lévy's Continuity Theorem. . Let (Y, τ (Y )) be a completely regular Hausdorff topological space, V be a linear subspace of C(Y ) separating points in Y and χ be a complexvalued functional on V . If (µ γ ) γ is a narrowly precompact net of Radon probability measures on (Y, B(Y )) and lim γ µ γ (v) = χ(v) for every v in V , then (µ γ ) γ converges narrowly to a Radon probability measure µ, the characteristic functional thereof coincides with χ.
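To illustrate the discretization (2.10) numerically, the following sketch (ours; it relies on Sethuraman's stick-breaking representation mentioned in §2.3, with sticks Beta(1, beta) and atoms drawn from sigma) simulates D_{beta*sigma} on X = [0, 1] with sigma uniform and checks that the vector of partition masses has the moments of the finite-dimensional Dirichlet distribution with parameter beta*sigma(X_i).

import numpy as np

rng = np.random.default_rng(3)
beta, n_atoms, n_samples = 2.0, 2_000, 5_000
edges = np.array([0.0, 0.2, 0.5, 1.0])           # a 3-cell partition of X = [0, 1]

def sample_dirichlet_ferguson(beta, n_atoms):
    # Truncated stick-breaking sample of D_{beta * sigma}, sigma = Unif[0, 1]:
    # weights eta_i = B_i * prod_{j<i} (1 - B_j) with B_j ~ Beta(1, beta), atoms x_i ~ sigma.
    B = rng.beta(1.0, beta, size=n_atoms)
    eta = B * np.concatenate(([1.0], np.cumprod(1.0 - B)[:-1]))
    x = rng.uniform(size=n_atoms)
    return x, eta

masses = np.empty((n_samples, len(edges) - 1))
for m in range(n_samples):
    x, eta = sample_dirichlet_ferguson(beta, n_atoms)
    cell = np.searchsorted(edges, x, side="right") - 1
    masses[m] = np.bincount(cell, weights=eta, minlength=len(edges) - 1)

# Finite-dimensional marginal: (P(X_1), ..., P(X_k)) should be Dirichlet with
# parameter alpha = beta * sigma(X_i); compare empirical and exact means and variances.
alpha = beta * np.diff(edges)
dir_var = alpha * (alpha.sum() - alpha) / (alpha.sum() ** 2 * (alpha.sum() + 1))
print(masses.mean(axis=0), np.diff(edges))
print(masses.var(axis=0), dir_var)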
13,897.8
2018-10-23T00:00:00.000
[ "Mathematics" ]
Real-time TIRF observation of vinculin recruitment to stretched α-catenin by AFM Adherens junctions (AJs) adaptively change their intensities in response to intercellular tension; therefore, they integrate tension generated by individual cells to drive multicellular dynamics, such as morphogenetic change in embryos. Under intercellular tension, α-catenin, which is a component protein of AJs, acts as a mechano-chemical transducer to recruit vinculin to promote actin remodeling. Although in vivo and in vitro studies have suggested that α-catenin-mediated mechanotransduction is a dynamic molecular process, which involves a conformational change of α-catenin under tension to expose a cryptic vinculin binding site, there are no suitable experimental methods to directly explore the process. Therefore, in this study, we developed a novel system by combining atomic force microscopy (AFM) and total internal reflection fluorescence (TIRF). In this system, α-catenin molecules (residues 276–634; the mechano-sensitive M1-M3 domain), modified on coverslips, were stretched by AFM and their recruitment of Alexa-labeled full-length vinculin molecules, dissolved in solution, were observed simultaneously, in real time, using TIRF. We applied a physiologically possible range of tensions and extensions to α-catenin and directly observed its vinculin recruitment. Our new system could be used in the fields of mechanobiology and biophysics to explore functions of proteins under tension by coupling biomechanical and biochemical information. atomic force microscopy (AFM)-based single-molecule force spectroscopy, the mechanical disruption of the autoinhibitory M 1 /M 2 -M 3 interaction induces a drastic change in the conformation of α-catenin, together with increased mechanical stability. In addition, another single-molecule study showed that the affinity of α-catenin for the head domain of vinculin increases under tension 27 . However, although the α-catenin-mediated mechanotransduction is a dynamic molecular process, there is no experimental technique at the molecular level that allows us to observe vinculin recruitment to α-catenin under tension in real time. Therefore, in the present study, we developed a system combining AFM and total internal reflection fluorescence (TIRF) to directly observe the dynamic recruitment of vinculin to α-catenin. AFM-based techniques, such as DNA cutting/pasting 28 and structural imaging 29 , have been combined with TIRF-based molecular observation to investigate biomechanical phenomena at the molecular level. Using the new system, we extended α-catenin molecules (the mechano-sensitive M 1 -M 3 domain; residues 276-634) using AFM and simultaneously observed their recruitment of full-length vinculin molecules in real time using TIRF. We used full-length vinculin in the experiment to minimize tension-free binding of vinculin to α-catenin, as it forms an autoinhibited conformation against α-catenin [30][31][32] . A previous study based on isothermal titration calorimetry (ITC) reported no detectable interaction between the α-catenin M 1 -M 2 domain, which is an open form fragment without the autoinhibitory M 3 domain, and full-length vinculin 33 . In our study, we hypothesized that the mechanical force drastically destabilizes the M 1 domain of α-catenin to allow its interaction with full-length vinculin. As a result of the AFM-based α-catenin extension experiment, we obtained force curves (force versus time curves) as shown in Fig. 2a,b. 
In both figures, the tensile force is shown as positive and five force curves are overlaid. The time course starts at 0.6 s, i.e., at the contact phase in piezo-height control. The force curves were classified into two groups, with successful α-catenin extension (Fig. 2a) and without (Fig. 2b), based on the rupture force, F R , observed in the retract phase. The threshold on F R (50 pN) used for this classification was determined by comparison with a control extension experiment performed by AFM on coverslips without α-catenin modification (see Supplementary Fig. 1). In our experiment, approximately 52.6% of the force curves (2264 of 4305 curves) showed successful α-catenin extension and the remainder (approximately 47.4%) did not. Force curves with successful α-catenin extension showed approximately 15 pN of force during the hold phase (Fig. 2c), which was much higher than the force in curves without α-catenin extension (Fig. 2d). The contour maps in Fig. 2c,d represent the number density of force-time data points over all the curves in each group. The approximately 15 pN force found in the curves with α-catenin extension (Fig. 2c) would be sufficient to open the autoinhibition of α-catenin 27 and was within the physiological range 34 . We also suggest that the approximately 5 pN force observed in some curves without α-catenin extension (Fig. 2d) was caused by non-specific tethering. The experimental setup and loading protocol were as follows: (a) α-catenin fragments (residues 276-634), with a GST-tag at the N-terminus and a His 6 -tag at the C-terminus, were chemically modified on coverslips at the C-terminus and mechanically loaded using an AFM tip; simultaneously, full-length vinculin molecules (residues 1-1066), dissolved in solution, were observed by TIRF. Vinculin molecules modified with Alexa Fluor 488 dye were illuminated when they came into an evanescent field in TIRF (cyan region, approximately 150 nm from the surface of the coverslips). (b) Movement of the base of the AFM cantilever was regulated by a programmed piezo-height as follows: (i) decrease the piezo-height at a velocity of 500 nm/s to approach the AFM tip to the α-catenin-modified coverslip and stop when the reaction force reaches 500 pN (the piezo-height is set to zero at this point); (ii) keep the piezo-height constant to wait for an interaction between the AFM tip and α-catenin; (iii) increase the piezo-height by 40 nm at a velocity of 500 nm/s to extend α-catenin; (iv) keep the piezo-height constant at 40 nm to hold α-catenin under tension and wait for vinculin recruitment; and (v) increase the piezo-height by 260 nm at a velocity of 500 nm/s to retract completely from the coverslip and examine the resistance force of α-catenin with or without vinculin recruitment. Thus, using AFM, we mechanically opened the conformation of α-catenin to allow its recruitment of vinculin during the hold time. To explore the conformational change of α-catenin under tension, we performed an AFM-based structural imaging experiment. In this experiment, we tested wild-type and mutant (M319G and R326E) M 1 -M 3 domains of α-catenin. The mutated fragment, possibly with an attenuated intra-domain interaction between the M 1 and M 2 -M 3 domains, was previously reported to recruit full-length vinculin in a tension-insensitive manner 26 . We validated that labeling of full-length vinculin with Alexa Fluor 488 dye did not affect its affinity for α-catenin. As a result of the AFM structural imaging, we obtained three-dimensional features of the α-catenin fragments, as shown in Fig. 2e,f.
Wild-type α-catenin (Fig. 2e) exhibited three regions corresponding to three helix bundles, as proposed in previous structural biology studies 23,24 . By contrast, the mutated α-catenin (Fig. 2f) had a different configuration, with one region of a size similar to each domain in the wild-type and another, larger region. As the M 2 and M 3 domains of α-catenin are known to come close to each other by rotation in the absence of the M 1 domain 35 , we concluded that the larger region corresponds to the rotated M 2 and M 3 domains brought close together, and that the smaller region corresponds to the M 1 domain. This result implied that residues M319 and R326 are involved in the interaction between the M 1 and M 2 -M 3 domains. Real-time TIRF observation of vinculin recruitment to stretched α-catenin by AFM. The dynamics of full-length vinculin molecules were observed by TIRF simultaneously with the measurement of α-catenin extension by AFM. In movies obtained in conjunction with successful α-catenin extension, we observed a square pulse in the intensity (Fig. 3a) with a probability of approximately 0.71% (16 events/2264 events), in which the intensity increased during the hold phase and then decreased at the beginning of the retract phase (see Supplementary Material, Video S1). This result indicated that full-length vinculin interacted with α-catenin under approximately 15 pN tension and then dissociated upon further extension of α-catenin. Similar square pulses were observed with a probability of approximately 0.10% (2 events/2041 events) in the movies obtained without successful α-catenin extension; however, the timing of the decrease in intensity was not synchronized with the force measurement. These unsynchronized square pulses could be caused by transient association/dissociation of vinculin with α-catenin in the autoinhibited conformation under no tension. In movies either with or without successful α-catenin extension, we observed a different type of signal (Fig. 3b), namely a step increase in the intensity (see Supplementary Material, Video S2), with probabilities of approximately 0.75% and 0.59% (17 events/2264 events and 12 events/2041 events), in which the intensity increased independently of the hold phase and was sustained until each AFM measurement was completed. We further determined that the step increases resulted from nonspecific vinculin interaction with the surface of the coverslip rather than with α-catenin molecules, as we also observed step increases on coverslips without α-catenin modification (all other chemical modifications were applied in the same way). We also confirmed that the intensity in step increases was sustained for more than 20 s in longer observations, supporting that the drop in intensity in square pulses was caused by the dissociation of vinculin from α-catenin rather than by photo-bleaching. Furthermore, square pulses were not observed when we used the M 2 -M 3 domain (residues 385-634), without the M 1 domain. This result indicated that vinculin interacted with the M 1 domain of α-catenin but did not interact with the M 2 -M 3 domain or any linker components. Thus, using this novel AFM-TIRF combined system, we could directly observe the recruitment of full-length vinculin to α-catenin under tension at the molecular level. Conformational stabilization of α-catenin by vinculin recruitment. To understand how vinculin recruitment changes the force response of α-catenin, we analyzed the force curves in conjunction with the square pulses observed by TIRF, i.e., with vinculin recruitment (Fig. 4a); a minimal sketch of this kind of synchronized force/intensity classification is given below.
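The sketch below is an illustrative reconstruction of that classification, not the authors' analysis code: a force trace counts as a successful α-catenin extension when the largest rupture force in the retract window exceeds the 50 pN threshold, and an intensity trace counts as a square pulse when it rises above background during the hold phase and returns to background at the start of the retract phase. The time windows, array names, and background threshold used here are placeholders.

import numpy as np

# Placeholder time windows (s) and threshold, loosely following the protocol in the text.
HOLD = (1.6, 3.7)            # hold phase: alpha-catenin kept under tension
RETRACT = (3.7, 4.3)         # retract phase: full retraction from the coverslip
F_RUPTURE_MIN = 50.0         # pN, rupture-force threshold for a successful extension

def in_window(t, window):
    return (t >= window[0]) & (t < window[1])

def successful_extension(t, force_pn):
    # True if the largest tensile force in the retract window exceeds the 50 pN threshold.
    retract = force_pn[in_window(t, RETRACT)]
    return retract.size > 0 and retract.max() >= F_RUPTURE_MIN

def square_pulse(t, intensity, background, k_sigma=3.0):
    # True if the TIRF intensity is elevated during the hold phase and falls back to
    # background within the first ~0.3 s of the retract phase.
    threshold = background.mean() + k_sigma * background.std()
    hold_high = intensity[in_window(t, HOLD)].mean() > threshold
    back_down = intensity[in_window(t, (RETRACT[0], RETRACT[0] + 0.3))].mean() < threshold
    return hold_high and back_down

# Synthetic example traces (placeholders for recorded AFM force and TIRF intensity data).
rng = np.random.default_rng(0)
t = np.linspace(0.6, 4.3, 3700)
force = 15.0 * in_window(t, HOLD) + 120.0 * in_window(t, (3.7, 3.8))
intensity = 100.0 + 40.0 * in_window(t, HOLD) + rng.normal(0.0, 2.0, t.size)
print(successful_extension(t, force), square_pulse(t, intensity, background=intensity[t < 1.0]))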
In this analysis, the force curves with successful α-catenin extension (2264 curves) were further classified into two groups: curves with square pulses (16 curves) and those without (2248 curves). We then measured the unfolding force, F u , in the retract phase (Fig. 4b), which indicates the force response of α-catenin with and without vinculin recruitment, at every peak in the force curves except the last one, which was caused by detachment of the AFM tip from α-catenin. The histogram of F u for the force curves with square pulses (green bars, Fig. 4b) showed a higher mode value of approximately 180 pN, compared with approximately 100 pN for the force curves without square pulses (gray bars, Fig. 4b). This result indicated that α-catenin in an open conformation under tension is further mechanically stabilized by vinculin recruitment, consistent with previous observations 26,27 . The high F u of more than 600 pN in some force curves without square pulses (black bar, Fig. 4b), which we did not observe in our previous study 26 , could be caused by parallel fishing of several α-catenin molecules; we adopted a 50-fold higher concentration of α-catenin in this study than in our previous single-molecule study to increase the probability of obtaining an interaction between the AFM tip and α-catenin molecules. Thus, by analyzing the force curves in the retract phase, we confirmed that the square pulses observed in TIRF corresponded to vinculin association/dissociation with α-catenin under tension. Discussion Using the new AFM-TIRF system, we directly observed the recruitment of full-length vinculin to the M 1 -M 3 domain of α-catenin under tension. In the context of mechanobiology, certain proteins have been identified as mechano-chemical transducers or mechano-sensors, which expose a cryptic binding site under mechanical force to recruit other proteins 13,36-39 . However, the identification and characterization of mechano-sensor proteins is still at an early stage because of the lack of suitable experimental approaches to directly detect the interaction between proteins under force. Our new system can be used in the fields of mechanobiology and biophysics to unravel the functions of proteins under tension by coupling biomechanical and biochemical information. From our results, vinculin dissociation events were observed even in cases where α-catenin was not fully unfolded in the retract phase, i.e., the α-catenin extensions at the rupture events were shorter than its fully extended length of approximately 143 nm. This suggests that vinculin dissociated from α-catenin at the beginning of the retract phase (from 3.68 s to 4.20 s). This is consistent with our previous AFM measurement 26 , in which full-length vinculin-bound α-catenin showed large F u at small extensions (approximately 29.4 nm), suggesting that vinculin dissociated from α-catenin at this small amount of extension. Although the isolated head domain of vinculin is reported to dissociate only when the α-catenin M 1 -M 3 domain is almost fully extended 27 , our result suggested that full-length vinculin dissociates more readily from the α-catenin VBS than does the vinculin head domain.
Discussion

Using the new AFM-TIRF system, we directly observed the recruitment of full-length vinculin to the M 1 -M 3 domains of α-catenin under tension. In the context of mechanobiology, certain proteins have been identified as mechano-chemical transducers or mechano-sensors, which expose a cryptic binding site under mechanical force to recruit other proteins 13,36-39 . However, the identification and characterization of mechano-sensor proteins are still at an early stage of development because of the lack of suitable experimental approaches to directly detect the interaction between proteins under force. Our new system can be used in the fields of mechanobiology and biophysics to unravel the functions of proteins under tension by coupling biomechanical and biochemical information. From our results, vinculin dissociation events were observed even in cases where the α-catenin was not fully unfolded in the retract phase, i.e., the α-catenin extensions at the rupture events were shorter than its fully extended length of approximately 143 nm. This result suggests that vinculin dissociated from α-catenin at the beginning of the retract phase (from 3.68 s to 4.20 s). This result is consistent with our previous AFM measurement 26 , in which full-length vinculin-bound α-catenin showed large F u at small extensions (approximately 29.4 nm), suggesting that vinculin dissociated from α-catenin at this small extension. Although the isolated head domain of vinculin is reported to dissociate only when the α-catenin M 1 -M 3 domain is almost fully extended 27 , our result suggested that full-length vinculin dissociates more readily from the α-catenin VBS than does the vinculin head domain. We suggest that this difference was caused by the autoinhibitory interaction in full-length vinculin; under our conditions the tail domain of vinculin competed with the α-catenin VBS for interaction with the head domain of vinculin, which made it easier for the vinculin head domain to dissociate from the α-catenin VBS under high tension. Our results also suggested that full-length vinculin dissociates easily from α-catenin when the vinculin affinity for α-catenin decreases under attenuated tension, with recovery of the autoinhibited conformation 40 . The actual extension of α-catenin should also be discussed. We analyzed the moving average and deviation of the tip-coverslip distance, which differed by approximately 15 nm between the force curves with and without successful α-catenin extension (see Supplementary Fig. 2). This value was reasonable because the AFM cantilever was deflected by approximately 25 nm in the inverse (pushing) direction during the wait phase. When the piezo-height was increased to 40 nm in the extend phase, the AFM cantilever, initially deflected in the pushing direction, first returned to its original position over the first approximately 25 nm of piezo-height increase, and then deflected in the extending direction over the remaining approximately 15 nm of piezo-height increase. Thereby, the curves without α-catenin extension (gray line, Supplementary Fig. 2), most of which did not show even nonspecific tethering, showed a tip-coverslip distance of approximately 15 nm. The reason why the group with α-catenin extension (orange line, Supplementary Fig. 2) also showed a tip-coverslip distance of approximately 15 nm is likely that the cantilever stiffness was much higher than that of α-catenin or of other tethered components, such as polyethylene glycol (PEG). Although it is difficult to estimate the absolute value of the extension of α-catenin, since the other components, such as PEG, were also extended in series with α-catenin and the length of PEG varied, we emphasize that the force range of approximately 15 pN was sufficient to open the α-catenin conformation, as validated by previous reports 26,27 . The number of molecules tested in this study also needs to be discussed. In these experiments, we needed to increase the concentration of α-catenin on the coverslips so that the probability of an α-catenin-extending event could be increased to approximately 50%. As a result, it is possible that we occasionally loaded a couple of α-catenin molecules in parallel at the same time, because we observed a long tail in the unfolding force histogram in Fig. 4b. This parallel loading could also have been caused by the presence of a glutathione-S-transferase (GST)-tag at the N-terminus of α-catenin, as the GST-tag itself can form a dimer. Therefore, our results cannot be interpreted exclusively as single-molecule events, which might cause a variation in the configuration of the force curves. Nevertheless, during AFM structural imaging, the area corresponding to α-catenin molecules showed a symmetric distribution (see Supplementary Fig. 3), suggesting that most α-catenin molecules existed as monomers in the same buffer used in our AFM-TIRF experiment. We also verified that a certain fraction of the full-length vinculin molecules can exist as monomers, owing to the dithiothreitol (DTT) dissolved in the buffer used for the AFM-TIRF experiments (see Supplementary Fig. 4), but the remainder of the molecules formed dimers, trimers, or even higher order oligomers.
This analysis suggests that vinculin exists as a self-associated complex, as well as a monomer, in cells. In future work, it would be interesting to observe the effect of vinculin self-association on the interaction between α-catenin and vinculin under tension. Methods Protein purification. α-Catenin and vinculin were expressed and purified as reported in our previous study 26 . DNA fragments of mouse αE-catenin M 1 -M 3 (residues 276-634) and M 2 -M 3 (residues 385-634) were amplified by PCR and cloned into the pGEX6P-3 vector (GE Healthcare). A DNA fragment of mouse full-length vinculin (residues 1-1066) was cloned into the pET-6b (+) vector (Novagen). These plasmids were verified by DNA sequencing and transformed into Escherichia coli strain BL21Star (DE3) cells (Invitrogen). α-Catenin and vinculin molecules were expressed at 20 °C in Luria-Bertani medium supplemented with 0.1 mM isopropyl-β-d-thiogalactopyranoside. BL21Star cells expressing α-catenin and vinculin were suspended in 20 mM Tris-HCl buffer (pH 8.0) containing 150 mM NaCl and disrupted by sonication. The supernatant after ultracentrifugation was applied to a Glutathione Sepharose 4B column (GE Healthcare). Proteins eluted from the column were further purified by anion exchange (HiTrap Q HP, GE Healthcare) and gel filtration (Superdex 200 pg, GE Healthcare) chromatography. N-terminal His 6 tags on vinculin molecules were then cleaved using human rhinovirus 3C protease. For AFM structural imaging, GST-tags at N-termini of the wild-type and mutated fragments were cleaved by PreScission Protease (GE Healthcare). Chemical modification for coverslips and AFM tips. Coverslips (Matsunami Glass Ind. Ltd.) and silicon nitride AFM tips (OMCL-TR400PSA; spring constant, 0.02 N/m; curvature radius of tip, 15 nm; Olympus Co.) were chemically modified for use in AFM (Nanowizard3 Ultra; JPK Instruments AG) as described in a previous study 26 . To make a chamber to contain the working buffer with vinculin, we placed flat silicon rings on coverslips. The coverslips were cleaned in a plasma cleaner, sonicated in 1 M KOH for 15 min, and then sonicated in ethanol for 15 min. After cleaning, the coverslips were thoroughly washed in MilliQ-treated water (EMD Millipore, Hayward, CA, USA). For modification with α-catenin molecules, the coverslips were treated with 2% MPTMS/ ethanol for 15 min and then treated with 2 mM maleimide-C 3 -NTA (Mal-C 3 -NTA; DOJINDO Laboratories, Kumamoto, Japan) in 20 mM Tris-HCl buffer for 30 min. The NTA-modified coverslips were then treated with 10 mM NiCl 2 (Wako Pure Chemical Industries, Osaka, Japan)/MilliQ water for 30 min and further washed with MilliQ water and 20 mM Tris-HCl buffer. Using NTA-Ni 2+ -His 6 affinity binding, 500 μM α-catenin fragments in 20 mM Tris-HCl buffer were immobilized on the coverslips. The coverslips were finally washed with 20 mM Tris-HCl buffer. The AFM tips were first cleaned in a plasma cleaner and then treated with 2% APTES/MilliQ for 15 min. The tips were then treated with 6 mM Mal-PEG-NHS ester/20 mM Tris-HCl buffer for 30 min and with 10 mM glutathione/20 mM Tris-HCl buffer for 1 h. The remaining maleimide groups were quenched with 50 mM 2-mercaptoethanol/MilliQ water and washed with 20 mM Tris-HCl buffer. Fluorescent labeling of vinculin. Full-length vinculin molecules were labeled using the Alexa Fluor 488 Protein Labeling Kit (Thermo Fisher Scientific, Waltham, MA, USA), in which the primary amines in vinculin formed covalent bonds with the Alexa 488 dye. 
Full-length vinculin molecules and the Alexa 488 dye were conjugated at a 1:1 ratio at room temperature. Alexa-488-labeled vinculin molecules were purified using size exclusion chromatography. For each measurement using TIRF-combined AFM, Alexa 488-labeled vinculin molecules were dissolved at a concentration of 10 nM in working buffer (20 mM Tris-HCl pH 8.0, 1 mM DTT, and 2 mM Trolox). DTT was added to the working buffer to prevent aggregation of vinculin molecules caused by disulfide bonds, and Trolox was used to sustain the fluorescence of the Alexa 488 dye by maintaining reducing conditions in the working buffer.

Settings for TIRF combined with AFM. For the AFM-TIRF combined system, we first calibrated the actual force measured by AFM. We adjusted the laser position to the top of the cantilever in the working buffer using a non-oil 20x magnification lens and approached the bare coverslip to evaluate the relationship between the change in voltage from the photodiode and the deflection of the cantilever. The deflection was then calibrated to force based on the spring constant of the AFM cantilever. After calibration, we moved the base of the cantilever upward by 1000 nm and then removed the AFM head unit. The objective lens was then switched to a 100x objective lens for TIRF (UAPON 100XOTIRF; NA, 1.49; Olympus Co.), the α-catenin-modified coverslip was set on the stage and the space inside the silicon ring was filled with the working buffer containing vinculin. We set up the TIRF condition by controlling the optical axis of the 488-nm laser (OBIS 488 LS; Coherent, Inc., Santa Clara, CA, USA), so that the penetration depth of the evanescent field was 150 nm, which covered the length of a fully extended αE-catenin M 1 -M 3 fragment (approximately 143 nm). After focusing on the surface of the coverslip, the AFM head unit was brought back to the stage and the AFM tip was directed towards the coverslip surface. The power of the laser was first set at maximum for 30 min to diminish the fluorescence of any vinculin that was nonspecifically attached to the surface, and then the power density was decreased to 2.97 W/cm 2 using an ND filter to allow for AFM-TIRF measurement under conditions that avoided photo-bleaching. During the AFM force measurement, the fluorescence of vinculin was observed using an EM-CCD camera (iXon Ultra 888; Andor Technology Ltd., Belfast, UK); the exposure time was 50 ms and the final frame rate was approximately 14.76 frames/s. To synchronize the force measurement with the vinculin observation, we used the pulse signal output from the AFM controller, which then externally controlled the EM-CCD camera. Finally, we analyzed the intensity of fluorescence in a square region (15 × 15 pixels) around the position of the tip.

AFM structural imaging. For the AFM structural imaging, a fresh mica surface was obtained by peeling off the surface layers using tape. Wild-type and mutant α-catenin fragments (1 nM of each) in the working buffer (50 μL) were dropped onto the mica surface and allowed to adsorb for 1 h. The solution was then washed off using working buffer, and finally fresh working buffer (50 μL) was dropped onto the surface. Silicon AFM tips specialized for structural imaging (OMCL-AC160TS; spring constant, 26 N/m; resonant frequency, 300 kHz; curvature radius of tip, 7 nm; Olympus Co.) were used to obtain the images.
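For readers reproducing the TIRF settings, the penetration depth quoted above follows from the standard evanescent-field expression; the refractive indices and incidence angles below are illustrative assumptions, since the text only fixes the 488 nm wavelength and the 150 nm target depth.

import numpy as np

def penetration_depth_nm(wavelength_nm, n_glass, n_water, theta_deg):
    """Evanescent-field 1/e penetration depth for total internal reflection at a
    glass/water interface: d = lambda0 / (4*pi*sqrt(n1^2*sin^2(theta) - n2^2))."""
    theta = np.deg2rad(theta_deg)
    arg = (n_glass * np.sin(theta)) ** 2 - n_water ** 2
    if arg <= 0:
        raise ValueError("angle is below the critical angle; no total internal reflection")
    return wavelength_nm / (4 * np.pi * np.sqrt(arg))

# Assumed values: BK7-like coverslip (n ~ 1.52), aqueous buffer (n ~ 1.33), 488 nm laser
for theta in (62.0, 63.0, 64.0):
    print(theta, round(penetration_depth_nm(488.0, 1.52, 1.33, theta), 1))
# Around 63 degrees the depth is ~150 nm, enough to cover a fully extended
# alpha-catenin M1-M3 fragment (~143 nm).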
5,286.4
2018-01-25T00:00:00.000
[ "Biology", "Engineering", "Materials Science", "Physics" ]
3-(3,4-Dimethoxybenzyl)chroman-4-one

In the title compound, C18H18O4, the six-membered chroman-4-one ring adopts an envelope conformation with the C atom bonded to the bridging CH2 atom as the flap. The dihedral angle between the mean plane of the fused pyranone ring and the dimethoxy-substituted benzene ring is 89.72 (2)°. In the crystal, adjacent molecules are linked via C—H⋯π interactions.

Supplementary materials: Acta Cryst. (2013). E69, o241 [doi:10.1107/S1600536813000925]

No classical inter- or intramolecular hydrogen bonds are observed. The packing of the title compound is stabilized into a three-dimensional network by C—H⋯π intermolecular interactions, which serve to link inversion-related sheets (Fig. 2).

Experimental. 2′-Hydroxydihydrochalcone (0.1 g) was dissolved in ethanol (10 ml) and refluxed with paraformaldehyde (0.022 g) and 50% aqueous diethylamine (0.2 ml) for 7 h. Ethanol was distilled off and the residue was taken up in ethyl acetate. The ethyl acetate layer was washed with water, then with dilute HCl, and finally with water. Ethyl acetate was distilled off and the oily residue was column chromatographed over silica using petroleum ether:ethyl acetate (7:3) as eluent to give 3-(3,4-dimethoxybenzyl)-2,3-dihydro-4H-chroman-4-one. Single crystals of the title compound were grown from methanol by the slow evaporation technique, and white needle-like crystals were harvested at room temperature. M.p. 397 K.

Special details. Geometry. All e.s.d.'s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.'s are taken into account individually in the estimation of e.s.d.'s in distances, angles and torsion angles; correlations between e.s.d.'s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.'s is used for estimating e.s.d.'s involving l.s. planes.

Refinement. Refinement of F 2 against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F 2 ; conventional R-factors R are based on F, with F set to zero for negative F 2 . The threshold expression of F 2 > σ(F 2 ) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F 2 are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.

Fractional atomic coordinates and isotropic or equivalent isotropic displacement parameters (Å 2 )
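The reported dihedral angle between the two ring mean planes can in principle be recomputed from the atomic coordinates with a least-squares plane fit, as sketched below; the input arrays are placeholders, since the coordinate table in this copy is truncated.

import numpy as np

def plane_normal(points):
    """Unit normal of the least-squares mean plane through a set of 3-D points."""
    centered = points - points.mean(axis=0)
    # The singular vector with the smallest singular value is normal to the best-fit plane
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]

def dihedral_between_planes(ring1_xyz, ring2_xyz):
    n1, n2 = plane_normal(ring1_xyz), plane_normal(ring2_xyz)
    cosang = abs(np.dot(n1, n2))        # mean planes are unoriented, so take the acute angle
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# ring1_xyz / ring2_xyz would be (N, 3) arrays of Cartesian coordinates (in Angstroms) for
# the pyranone and dimethoxy-substituted benzene rings, obtained from the fractional
# coordinates and the unit-cell matrix; the published value is 89.72 (2) degrees.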
644
2013-01-16T00:00:00.000
[ "Chemistry" ]
Application of Residual Power Series Method to Fractional Coupled Physical Equations Arising in Fluids Flow

The approximate analytical solution of the fractional Cahn-Hilliard and Gardner equations has been acquired successfully via the residual power series method (RPSM). The approximate solutions obtained by RPSM are compared with the exact solutions as well as with the solutions obtained by the homotopy perturbation method (HPM) and the q-homotopy analysis method (q-HAM). Numerical results are presented through different graphs and tables. The fractional derivatives are described in the Caputo sense. The results highlight the power, efficiency, simplicity, and reliability of the proposed method.

The Gardner equation [17] (combined KdV-mKdV equation) is a useful model for the description of internal solitary waves in shallow water, ∂u/∂t + 6u ∂u/∂x ± 6u² ∂u/∂x + ∂³u/∂x³ = 0. These two forms are classified as the positive Gardner equation and the negative Gardner equation depending on the sign of the cubic nonlinear term [18,19]. The Gardner equation is widely used in various branches of physics, such as plasma physics, fluid physics, and quantum field theory [20,21]. It also describes a variety of wave phenomena in plasma and solid state [22,23]. The Cahn-Hilliard equation [24] is one type of partial differential equation (PDE) and was first introduced in 1958 as a model for the process of phase separation of a binary alloy below the critical temperature [25]. This equation is related to a number of interesting physical phenomena like the spinodal decomposition, phase separation, and phase ordering dynamics. On the other hand, it has become important in materials science [26,27]. The aim of this paper is to study the time-fractional Gardner equation [28-30] and the time-fractional Cahn-Hilliard equation [31-37], in which the classical time derivative is replaced by a Caputo fractional derivative of order α, where 0 < α ≤ 1, −∞ < x < ∞, and 0 ≤ t < T. Numerous methods have been used to solve these equations, for example, the q-homotopy analysis method [28], the new version of the F-expansion method [29], the reduced differential transform method [30], the generalized tanh-coth method [38], the generalized Kudryashov method [39], the extended fractional Riccati expansion method [31], the subequation method [32], the homotopy analysis method [33], the Adomian decomposition method [34], the improved (G′/G)-expansion method [35], the homotopy perturbation method [36], and the variational iteration method [37]. We solve the Cahn-Hilliard and Gardner equations by the RPSM.
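Since the fractional derivatives are taken in the Caputo sense, it is worth recalling the standard Caputo definition for the order range 0 < α ≤ 1 used here:

D_t^{\alpha} u(x,t) = \frac{1}{\Gamma(1-\alpha)} \int_0^{t} (t-\tau)^{-\alpha}\, \frac{\partial u(x,\tau)}{\partial \tau}\, d\tau, \qquad 0 < \alpha < 1, \qquad D_t^{1} u = \frac{\partial u}{\partial t}.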
The RPSM was first devised in 2013 by the Jordanian mathematician Omar Abu Arqub as an efficient method for determining the values of the coefficients of the power series solution for first- and second-order fuzzy differential equations [40]. The RPSM is an effective and straightforward way to construct power series solutions of linear and strongly nonlinear equations without linearization, perturbation, or discretization. In the last few years, the RPSM has been applied to solve a growing number of nonlinear ordinary and partial differential equations of different types, classifications, and orders. It has been successfully applied in the numerical solution of the generalized Lane-Emden equation [41], which is a highly nonlinear singular differential equation, in the numerical solution of higher-order regular differential equations [42], in the approximate solution of the nonlinear fractional KdV-Burgers equation [43], in constructing and predicting the solitary pattern solutions for nonlinear time-fractional dispersive PDEs [44], and in predicting and representing the multiplicity of solutions to boundary value problems of fractional order [45]. The RPSM distinguishes itself from various other analytical and numerical methods in several important aspects [46]. Firstly, the RPSM does not need to compare the coefficients of the corresponding terms and a recursion relation is not required. Secondly, the RPSM provides a simple way to ensure the convergence of the series solution by minimizing the related residual error. Thirdly, the RPSM is not affected by computational rounding errors and does not require large computer memory and time. Fourthly, the RPSM does not require any converting while switching from the low-order to the higher-order and from simple linearity to complex nonlinearity; as a result, the method can be applied directly to the given problem by choosing an appropriate initial guess approximation.

Applications. To illustrate the basic idea of the RPSM, we consider the following two time-fractional Gardner and Cahn-Hilliard equations.

Time-Fractional Gardner Equation. Consider the time-fractional homogeneous Gardner equation (23), subject to its initial condition. The exact solution in the classical case α = 1 is known in closed form. We define the residual function for (23) and, for the kth truncated series solution, the kth residual function Res k (x, t). To determine the first unknown coefficient function, we consider k = 1 in (27); from (19) at k = 1, and depending on the result of (22), in the case of k = 1 we have Res 1 (x, 0) = 0, which fixes the first coefficient. To determine the second coefficient function, we consider k = 2 in (27); again from (19), applying the fractional derivative operator on both sides and solving Res 2 (x, 0) = 0, we obtain the second coefficient. The solution in series form is then given by the resulting truncated fractional power series.

Numerical Results. This section deals with the approximate analytical solutions obtained by the RPSM for the Gardner and Cahn-Hilliard equations. In the classical case (α → 1), Figure 1 and Tables 1 and 2 present the comparison of the RPSM with q-HAM [28] and HPM [36]. In the fractional case, Figures 2, 3, and 4 describe the geometrical behavior of the solutions obtained by the RPSM for different fractional orders of the two equations.

Conclusions. This work has used the RPSM for finding the solution of the time-fractional Gardner and Cahn-Hilliard equations. A very good agreement between the results obtained by the RPSM and q-HAM [28] was observed in Figure 1(a) and Table 1. Figure 1(b) and Table 2 show that the method achieves a higher level of accuracy than HPM [36]. Consequently, the work emphasizes that the method introduces a significant improvement in this field over existing techniques.
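To make the RPSM recursion concrete, the following symbolic sketch applies it to a simpler linear test problem, the time-fractional diffusion equation D_t^α u = u_xx with u(x, 0) = sin x, rather than to the Gardner or Cahn-Hilliard equations themselves; the equation, initial condition and truncation order are illustrative assumptions. For this linear case, forcing each residual to vanish at t = 0 (the RPSM step of applying D_t^{(k-1)α} and setting t = 0) reduces to the coefficient recursion f_k = d²f_{k-1}/dx².

import sympy as sp

x, t, alpha = sp.symbols('x t alpha', positive=True)

# Fractional power series ansatz: u_K(x, t) = sum_k f_k(x) * t**(k*alpha) / Gamma(1 + k*alpha)
f = [sp.sin(x)]                       # f_0(x): the initial condition u(x, 0)
K = 5                                 # truncation order of the series
for k in range(1, K + 1):
    f.append(sp.diff(f[-1], x, 2))    # f_k = d^2 f_{k-1} / dx^2 from the vanishing residual

u_K = sum(fk * t**(k * alpha) / sp.gamma(k * alpha + 1) for k, fk in enumerate(f))
print(sp.simplify(u_K))

# Classical limit alpha -> 1: the truncated series should approach sin(x) * exp(-t)
approx = u_K.subs({alpha: 1, x: sp.pi / 2, t: sp.Rational(3, 10)})
print(sp.N(approx), sp.N(sp.exp(sp.Rational(-3, 10))))

In the classical limit α → 1 the truncated series approaches sin(x) e^(−t), which the two printed numbers confirm to within the truncation error.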
1,282.4
2018-07-04T00:00:00.000
[ "Physics", "Engineering", "Mathematics" ]
Cosmic Perspectives and the Myths We Need to Survive Big history can be defined as the attempt to understand the integrated history of the cosmos, Earth, life and humanity. Cosmic perspectives and biological evolution are the main scientific ingredients that can convert and broaden history into big history. The aim of this paper is to describe a dilemma that such a scientific, Darwinian big history must face: the inevitable incompatibility between an objective scientific search for truth and an evolutionary compulsion for brains to harbor useful fictions — the myths we need to survive. Science supports both sides of this dilemma. New and improved cosmic perspectives can’t just be scientifically accurate. To be of use they must leave room for the myths we humans need to survive. But, what are those myths? I discuss and question whether the following ideas qualify as such myths: a belief in an objective meaning for human life, humanism/speciesism, human free will and stewardship of the Earth. Correspondence | Charles H. Lineweaver. Citation | Lineweaver, C. H. (2019) Cosmic Perspectives and the Myths We Need to Survive. Journal of Big History, III(3); 81-93. DOI | https://doi.org/10.22339/jbh.v3i3.3350 The myths we have told ourselves for roughly two million years have helped us survive. But how much survival value do these parochial myths still contain for 8 billion people on a shrinking planet? What myths do we still need? The answers to these questions set the agenda for the construction of big history and modern cosmic perspectives. Every human culture has a worldview (Brown 1991) -a cosmic perspective -a weltanschauung -a context within which the world is explained, the gods are propitiated, and believers are protected. Most traditional worldviews have been blatantly self-serving: We are "the people". We are the good Greeks. They are the bad barbarians. We are the chosen ones. The Earth has been made for us. People of my religion go to heaven -believers in other religions go to hell. For such myths to become so ubiquitous, groups who thought they were the best people on Earth and favoured by the gods, must have had an adaptive advantage. These beliefs made us proud, gave us confidence and promoted our survival. Scientific worldviews are slowly displacing myths. Darwinian evolution continues to supplant anthropocentric creation stories. The most influential scientific revolutions are ones that change our view of ourselves -the ones that change our understanding of how we got here and how we fit in. This is because the meaning or purpose we find in life is strongly linked to who we think we are. The Copernican and Darwinian revolutions changed our worldview and undermined traditional beliefs about our privileged place in the universe (Kuhn 1957, 1962). They removed humans from the center of the universe and reduced our traditional pride and confidence in ourselves. But at the same time, they gave us a new pride in how much we have figured out about the universe and our place in it. When told about Darwin's idea that we evolved from ape-like ancestors (Darwin 1859, 1871), an elderly Victorian woman is reputed to have replied, "Let us hope it is not true, but if it is, let us pray it does not become widely known." If our local myths have taught us that our true position is in first class next to gods and angels, then it is painfully degrading to recognize our true place among terrestrial tetrapods.
Sociobiology (Wilson 1975) is the systematic study of the biological basis of all social behavior. It can be understood as a continuation of the Darwinian reassessment of who we think we are and a challenge to human exceptionalism. Sociobiology applies Darwinism to human society and human psychology (Wilson 1978), and has provoked such fierce resistance from the humanities and social sciences that the conflict became known as the sociobiology wars (Segerstrale 2000). The multifaceted resistance to Darwinism is described in "Darwin's Dangerous Idea" (Dennett 1995; see also Cronin 2013). Perhaps motivated by witnessing the nationalistic delusions that led to the Great War, Bertrand Russell (1919) described the prevalence and usefulness of comforting fictions: Every man, wherever he goes, is encompassed by a cloud of comforting convictions, which move with him like flies on a summer day. Russell (1928) thought we should push back against this "cloud of comforting convictions": There is a stark joy in the unflinching perception of our true place in the world, and a more vivid drama than any that is possible to those who hide behind the enclosing walls of myth. However, in our confident promotion of scientific perspectives in the modern world, we need to face the question: How stark can the scientific perception of our true place in the world become before our perception loses its survival value? How unflinching can we be before our stalwart behaviour becomes detrimental to our survival? Isn't flinching sometimes adaptive? If a scientific vision of our true place in the universe is too stark -if our true place is too bleak, meaningless and unable to sustain hope and optimism -no one will want that vision -and those who adopt it will probably be at a disadvantage. Myths -like Russell's "cloud of comforting convictions" -sustain us. And sometimes we need sustaining. Our stomachs empty, our babies and children starving, our loved ones succumbing to plague and death -the worldviews of our hunting and gathering ancestors were based on beliefs that promoted survival in such conditions. If we got too weak or discouraged, if our worldview did not maintain our courage in the face of adversity, our enemies sensed our vulnerability and attacked. Comfort cannot be easily discounted or trivialized in a mysterious, intimidating and dangerous world.

Gauguin's images suggest that he is not looking for scientific answers to these questions. The beginning of a human life is on the right, the end of a human life is on the left. There is a blue idol of a god, maybe some worshipping going on, but there are no evolving monkeys. In debt and despair, Gauguin painted this while mourning the sudden death of his nineteen-year-old daughter Aline. After finishing this painting, Gauguin unsuccessfully tried to kill himself with arsenic. (Image from Wiki Commons, Museum of Fine Arts, Boston)

Where can I get my next meal? How can I gather enough resources to attract a mate and reproduce? How can we keep our children alive? Most of our myths and morality evolved to help us successfully answer these questions -questions that have little to do with truths about the big picture, heliocentrism or our evolutionary relationship to monkeys. Has the world become safe enough to dispense with myths? We rich, well-fed moderns, armed with antibiotics and assured of our children's survival, have other means to find comfort.
Now that starvation no longer knocks at our doors, now that infectious disease is no longer due to the wrath of the gods, now that we have outsourced retribution and the enforcement of justice to the state (Diamond 2008), many of us feel comfortable discarding our culture's traditional myths and replacing them with less flattering truths that our egos can still put up with. If we are confident in who we are, we can afford to question the traditional beliefs that have given us importance and meaning. But how unflattering can the truths become and still promote our survival? Can we handle the unmythologized truth? For those of us trying to construct big history and better cosmic perspectives, the question becomes: How much truth can they contain and still perform their function?

Useful Untruths

Whatever may be the innermost feelings of individual scientists, science itself works by rigorous adherence to objective values. There is objective truth out there and it is our business to find it. (Dawkins, 2017, p 7)

Scientists are trained to look for the truth.

On the left, concepts are divided into useful (inside the green circles) and useless (outside the green circles). Since "useful" can be time- and context-dependent, we show multiple boundaries between useful and useless. On the right, concepts are divided into truths (inside the blue circle) and untruths (outside the blue circle). Scientists often say they are looking for truth and naively assume that all truths are useful. In contrast, Darwinian evolution produces the useful with no assumptions about truth. In the next figure, we combine these two concepts to show that not all truths are useful and not all useful concepts are true.

When we analyze data, we try to do so dispassionately. We suppress our hopes -we fight what we want to be true, so that the truth can emerge more easily. We search for objective truth through the emotional storms and confusion of our own subjectivity. In the scientific hunt for the truth, the useful often shows up. In the Darwinian hunt for the useful, the truth often shows up. There is often a correlation between true and useful. Let's ignore this well-known and popular overlap of truth and usefulness and consider the usefulness that does not overlap with the truth (Figs. 3 & 4). Here are some examples of the things that fall into the four categories of Figure 4:

1) Useful truths (in the middle). Modern medicine and technology are based on useful truths from biology, physics and chemistry. Useful truths underpin applied science of all kinds, e.g., the production by modern agriculture of drought-tolerant crops, cars, computers, the internet, cell phones and x-ray machines, etc.

2) Useless truths (on the right): knowledge so detailed that nobody cares, e.g. the positions and velocities of all the nitrogen molecules in your room exactly π seconds after you read this sentence, or the idea that your own group, or your own children, are not objectively better than other groups and other people's children. Mathematicians generate mountains of useless truths, but occasionally, a new branch of physics finds a use for some of them. Thus, occasionally useless truths are converted into useful truths by the changing boundary of what is useful.

3) Useless untruths (area surrounding both circles): incorrect data or bad information that no one cares about, or uses, or believes.

4) Useful untruths (on the left): arguably the most interesting set.
These include myths, religion, self-deception (Trivers 2000), flattery of self, flattery of others, dreams, nightmares, flights of fancy, belief in the superiority of your in-group (tribalism, nationalism), dehumanization of the members of the tribes you are fighting (xenophobia and racism), self-fulfilling prophecies, placebo effects. I will argue that humanism/speciesism and our belief in human free will are also in this category. In Wilson's 2013 "Letters to a Young Scientist" he reminds us why we do science and why science is right and religions are wrong: The scientific method has been consistently better than religious beliefs in explaining the origin and meaning of humanity… Colorful they are, and comforting to the minds of believers, but each contradicts all the others. And when tested in the real world they have so far proved wrong, always wrong. Something is amiss here. Evolution (and the human brain that it produced) shouldn't care if religious beliefs are "wrong, always wrong" as long as they keep their believers alive preferentially over non-believers. Wilson's sociobiology is all about the idea that brains (like livers and lungs) are organs that have been selected to keep us alive and reproduce (Barkow et al. 1999). It seems strange for the founder of sociobiology to expect adaptive religious beliefs to be true. Brains and their contents have been selected to support useful cosmic perspectives (not necessarily truthful ones). If true ideas are useful, then brains that harbour them will be selected for. If false ideas are useful, then brains that harbour them will be selected for. Religious beliefs have been tested in the real world. That is why there are so many extant believers. On this Darwinian view, we expect our cosmic perspectives (about questions such as "Who are we?", "What is our place in the universe?", "What is the origin and meaning of humanity?") to be useful, comforting, and an aid to survival, but not necessarily truthful. The new scientific light that Darwinism shines on the battle between truth and useful fictions is that there is no higher priority than survival. No truth-seeking mechanism, like science, can succeed if it undermines survival. Wilson wrote "The scientific method has been consistently better than religious beliefs in explaining the origin and meaning of humanity." But Gauguin et al. are not expecting a scientific explanation of the meaning of their lives. Scientific answers are not what they want to hear. Our traditional expectation is that truly "meaningful answers" must give the leading role to humans. But what ultimate "meaning" can science explain when there isn't any ultimate meaning? For Gauguin, the monstrous mindlessness of the cosmos is not among the acceptable explanations for the death of his nineteen-year-old daughter Aline. At such times, scientific views play second fiddle to myths, because we believe we are important and need input to support this idea that science seems unable to provide. What are the myths we need to survive?

Weinberg's pointlessness

Is there any meaning in all the information that scientists have amassed about our place in the Universe?
In one of the most cited passages in popular science, at the end of his book "The First Three Minutes" (1977) about the big bang origin of the universe, Steven Weinberg (winner of the 1979 Nobel prize for physics) muses: It is almost irresistible for humans to believe that we have some special relation to the universe, that human life is not just a more-orless farcical outcome of a chain of accidents reaching back to the first three minutes, but that we were somehow built in from the beginning… Below, the Earth looks very soft and comfortable -fluffy clouds here and there, snow turning pink as the Sun sets, roads stretching straight across the country from one town to another. It is very hard to realize that this all is just a tiny part of an overwhelmingly hostile Universe. It is even harder to realize that this present Universe has evolved from an unspeakably unfamiliar early condition, and faces a future extinction of endless cold or intolerable heat. The more the Universe seems comprehensible, the more it also seems pointless. I'm sure Weinberg's expectations are to blame for making the Universe seem pointless to him. The universe is only pointless to the degree that he insists it have a point. After having attracted some criticism for his use of the word "pointless", Weinberg backpedaled and articulated his thoughts a bit more carefully (see Lightman 1990). But Weinberg didn't go on to explain. Apparently, he was unable to describe a universe with a point -a universe in which humans have some objective meaning that science could discover. This is a relief in some quarters: 'If there is no meaning in it,' said the King, 'that saves a world of trouble, you know, as we needn't try to find any' (Carroll 1865).

Chesterton's Conservatism

The removal of useful untruths from our cosmic perspective seems to be a goal of science. Chesterton (1929) has some advice for reformers who would like to displace traditional myths; don't take down a fence until you know the reason it was put up. In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, "I don't see the use of this; let us clear it away." To which the more intelligent type of reformer will do well to answer: "If you don't see the use of it, I certainly won't let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it." Heeding Chesterton, before we tear down more of our ~ 2-million-year-old myths, we should figure out why they are there, so we can keep the ones we still need. What myths do we still need to tell about ourselves?

Harari's Fictions

Yuval Harari's recent books about humans and big history have been hugely successful (Harari 2015, 2017, 2018). He describes the beginnings of science as the discovery of our own ignorance. He postulates that our success as a species is mostly due to our ability to tell stories and to believe them. Our advantages over other species he chalks up to our credulity and our ability to delude ourselves into believing myths and fictions.
You could never convince a monkey to give you a banana by promising him limitless bananas after death in monkey heaven (Harari 2015). Among our most successful fictions are concepts that most people would not consider fictions: nations, money, democracy, capitalism, corporations, religion and human rights. The important question he keeps asking is: What are the myths we humans need to survive? Scientists are uncomfortable with this question and cannot easily address it within the confines of the scientific method. We are not necessarily looking for ideas that will help us survive. We are hunting for the truth, wherever that leads us. We are not trained to care about the survival implications of our truths. Astronomers do not request ethical clearances, or fill out health and safety impact statements before announcing their discoveries to the world. Most cosmologists are blissfully unaware of the effect their newly discovered truths will have on people. We do not know whether the idea of a multiverse will terrify us with yet another layer of anonymity, or help us become more humble and survive the next millennium. The idea of assessing the value of a scientific worldview has been limited to "Is it true?" not "Does it contribute to our survival?" To make a scientific worldview psychologically useful and more palatable to people who need more meaning and purpose in their lives, should we include a bit of human-centered mythology in our worldview? Fantasy writer P.C. Hodgell (2000) has little sympathy for such compromises between myth and science: That which can be destroyed by the truth should be. This attitude seems unnecessarily combative and ignores the nuances of the changing boundaries of what is "useful" (Figs. 3 & 4). Rather than seeing it as a battle, the relationship between truths and useful untruths can be seen as a symbiotic relationship that can be nudged conservatively (in Chesterton's sense): don't destroy a myth until you know why it is there.

Science and Survival

What is the purpose of life? Am I important? How hard should I fight to stay alive? How hard should I fight for my tribe? Can I find food? -or should I just give up? Scientific worldviews have effects on our answers to all these existential questions. And the effects are rarely as life-affirming as the effects of traditional myths. Science (and Darwinism in particular) erodes the trust that many people have had in their myths. This is one reason the leaders of native peoples all over the world are ambivalent about, or positively against, contributing their knowledge and genes to modern science (Marks 2009) -an undertaking whose main result will be to undermine native traditions even faster. In a life short and uncertain, it seems heartless to do anything that might deprive people of the consolation of faith when science cannot remedy their anguish. Those who cannot bear the burden of science are free to ignore its precepts. But we cannot have science in bits and pieces, applying it where we feel safe and ignoring it where we feel threatened - again, because we are not wise enough to do so. (Sagan 1997, p 279-80) Sagan writes that "we cannot have science in bits and pieces". But isn't that the way most people have science? And if we are "not wise enough" now, can't we learn and become wiser? Can't we measure people's worldviews and then later keep track of whether they survive or not?
If we want to displace traditional myopic myths, the survival value of our scientific worldview needs to outweigh the survival value of traditional self-serving worldviews. Sagan makes a similar suggestion: There is some cost-benefit analysis which must be applied, and if the comfort, consolation and hope delivered by mysticism and superstition is high, and the dangers of belief comparatively low, should we not keep our misgivings to ourselves? (Sagan 1997, p 281) Telling a non-scientific, illiterate society of hunter/gatherers about their African origins can be equivalent to insulting their gods and undermining their creation stories (Larson 2006). Native peoples are having their cultural identities pulled out from under them. Many cultures and languages are disappearing (Crystal 2000, Sutherland 2003). The rapid pace of technology has now placed all of us in the same position of rapidly losing our traditional myths. Like native peoples, we are all having our identities transformed. Our regional cultures are being taken away from us and replaced by a global culture homogenized in a technological blender of mass media, modern transportation and global communication (Habermas 2001).

Old Myths in the Modern World

It is not difficult to recognize the biases and lies of our current cosmic worldviews. They are the same self-serving lies that we have been telling ourselves for several million years -that our tribe is the best -that our species is the best -that the out-group should be ignored, left to die, or be killed. The most common form of myth creation is ignoring or being unaware of the big picture and telling only part of the story -telling the truth, but not telling the whole truth. If I am pretending to tell the story of all humanity, but I am only telling the story of one nation, then I am creating a nationalistic myth.

Figure 5. A skirmish between two Dani tribes in the Baliem Valley of the New Guinea Highlands. Their myths are mutually exclusive. Each thinks their group is better. The nationalistic myths of nation states are also mutually exclusive. Whether national myths promote or inhibit the survival of nationalists is an on-going concern of humanity. (photograph by Karl G. Heider, Peabody Museum of Archaeology and Ethnology, Harvard University)

Even if the story of the one nation is correct in every detail, it is still a myth because it is presenting itself as something larger than it is. This is the unappreciated myth-creating power of editing, or just ignorance. The debunking of these myths -partial stories parading as the full story -is one of the biggest problems that big history has to solve. In school, most of us were taught the history of the particular nation where we were brought up. We were taught national history. I was taught American history. The history I was taught was not incorrect, it was just that it left out other nations and other peoples. It largely ignored the native peoples of North America. The story did not explicitly state that our nation is the best. It was just that other nations were ignored. Nation states all over the world continue to indoctrinate their children with these myths created by restrictive national histories -the products of conveniently incomplete truths. Big history tries to remove the blinkered myopia and biased legacy of such national histories by considering everyone. Big historians are trying to amalgamate national histories into the history of humanity (Harari 2015).
They are also trying to include the scientific history of the universe -not only all people, but all biology. And not only all biology, but all matter (e.g. Christian 2005, Rodrigue, Grinin & Korotayev 2017). But big historians have an extra burden that scientists don't. Big historians are burdened by the adherence to a narrative structure meant for consumption by one species. Like Weinberg (1977), a human audience naturally yearns for the largest role possible for humanity. Some scientists focus their attention largely on the science of man and ignore other species (like my history teachers ignoring other nations). They are not telling explicit lies. The details about humanity are often correct. What is incorrect is the pretense of presenting the full picture while presenting a blinkered vision in which only one species is important. Jacob Bronowski's books "The Identity of Man" and "Ascent of Man" (Bronowski 1966, 1973) are good examples of telling the story of one species and pretending it is the story of all life. Most of the facts are correct, but the exclusion of non-humans creates a flattering myth: For me, the understanding of nature has as its goal the understanding of human nature, and of the human condition within nature… the human being is a mosaic of animal and angel. (Bronowski 1973) Based on such flattering myths, the "science" of human uniqueness is thriving. This biased politicized science is a good example of why science should not traffic in self-serving myths. It is biased because it doesn't ask "What kind of an animal are humans?" Rather it assumes we are better than other animals and asks, "What makes us better?" Like the myths of nationalism, it is a myth based on incomplete truths and an emotional appeal to human exceptionalism. It tells us that "Humans are unique" and ignores the more complete truth: "Humans are unique, just like every other species." As tribes become nation states, tribalism becomes nationalism. As nations recognize other nations and our common humanity, nationalism becomes humanism. Our in-groups have gotten bigger, but having a larger in-group solves one problem and creates another -it just moves the problem to a larger scale (Diamond 1997, Harari 2015). Increasing the size of the "in-group" from a nation to include all humanity may reduce wars between nations but may increase the war between species -between humanity and the rest of the biosphere. Valuing Homo sapiens above all other species is leading to the environmental degradation of the planet (Rees 2003, Grooten & Almond 2018) and ultimately, this isn't good for anyone. In more traditional self-centered myths, the "self" meant my tribe or my ethnic group. But in the aftermath of World War II, the idea that we are all people became a valuable new progressive myth (Harari 2015). Despite speaking different languages, and being of different religions and ethnic groups, the United Nations was created and the Charter of Human Rights was agreed to. For Bronowski and most modern myth makers the new "self" in "self-centered myths" has become all humanity. This is a powerful antidote to tribalism and racism, but it still excludes other apes and all other species. Thus, humanism has a downside - speciesism: the idea that my species is the best species. Unlike racism, speciesism has not yet been recognized as a self-centered prejudice harmful to the Earth. It is still seen in a positive light as a tool against racism.
From an ecological point of view, humanism is a subtle way of saying that the species Homo sapiens is more important than other species. Many humanists are keen on keeping chimps at arm's length. This is because, if humans are to be recognized as a first-class group, distinct from and better than other species - entitled to more rights than other species - then a larger biological distance helps justify these human rights and privileges. Some of the useful untruths of speciesism have been undermined by the work of Jane Goodall (2010) and DNA sequencing of our closest cousins, chimpanzees (Mikkelsen et al. 2005).

Free will and Stewards of the Earth

Scientific revolutions over the past few hundred years have changed our view of the world (Lucretius ~ 50 BC, Huxley 1863, Wallace 1904, Harari 2015). And they have changed our self-image. Many more changes are on the way. So many that Cronin (2013) thinks we have much to fear from our continued scientific attempts to understand ourselves. We are in a fight to protect human dignity and agency and free will and our speciesism. How else can we sustain the myth that our species is more important than all other species? The scientific examination of the concept of free will is an example of something we should fear because it could have dangerous implications for our self-image: When we consider whether free will is an illusion or reality, we are looking into an abyss. What seems to confront us is a plunge into nihilism and despair. (Dennett 2008) Sam Harris and Richard Oerton strongly disagree with Dennett's topography (Harris 2012, Oerton 2012). They think that the illusion of free will is a detrimental perpetuation of savagery into the modern world (see Clark's 2013 review of Oerton 2012). The useful fiction of free will and the illusion of control has produced a "we are the stewards of the Earth" mentality (Grinspoon 2016). But, we are certainly not acting like stewards when we clear land and monopolize it with monocultures for our growing numbers (Hardin 1993), displacing and significantly reducing populations of insects, birds and other wildlife (Diamond 2010, Wikelski & Tertitski 2016, Grooten & Almond 2018). Our self-serving speciesism gives our needs higher priority than the needs of other species, and has become a justification to expropriate resources everywhere and pollute the entire planet with our waste products (Daly & Farley 2010, Lineweaver & Townes O'Brien 2015). While constructing a cosmic perspective, keeping the good parts of humanism while abandoning these speciesist implications may enable us to change and survive.

Conclusion

Cosmic perspectives and biological evolution are the main scientific ingredients that can convert and broaden history into big history. However, when adding these ingredients, there is an inevitable incompatibility between the scientific search for truth and the evolutionary compulsion to believe in adaptive useful fictions. Self-serving beliefs have been a prominent universal feature of human cultures for sound evolutionary reasons. I point out and analyze the concept of useful untruths, and ask: What myths do we still need to survive? Following Chesterton, I suggest that before displacing a myth, we should find out what its purpose is and determine if we still need it to survive. I suggest this is the path forward for creating better cosmic perspectives.
In particular, I discuss and question the potentially useful untruths of i) an objective meaning to human life, ii) a bigger in-group and the double-edged nature of humanism, and iii) free will and the supposed human stewardship of the Earth.
7,184.8
2019-07-01T00:00:00.000
[ "Philosophy" ]
A rational design of multimodal asymmetric nanoshells as efficient tunable absorbers within the biological optical window

In this work, the optical properties of asymmetric nanoshells with different geometries are comprehensively investigated in the quasi-static regime by applying the dipolar model and effective medium theory. The plasmonic behaviors of these nanostructures are explained by the plasmon hybridization model. Asymmetric hybrid nanoshells, composed of an off-center core or a nanorod core surrounded by a spherical metallic shell layer, possess highly geometrically tunable optical resonances in the near-infrared regime. The plasmon modes of these nanostructures arise from the hybridization of the cavity and solid plasmon modes at the inner and outer surfaces of the shell. The results reveal that the symmetry breaking drastically affects the strength of hybridization between plasmon modes, which ultimately affects the absorption spectrum by altering the number of resonance modes, their wavelengths and absorption efficiencies. Therefore, offsetting the spherical core, as well as changing the internal geometry of the nanoparticle to a nanorod, not only shifts the resonance frequencies but can also strongly modify the relative magnitudes of the absorption efficiencies. Furthermore, higher-order multipolar plasmon modes can appear in the spectrum of an asymmetric nanoshell, especially in the nanoegg configuration. The results also indicate that the strength of hybridization strongly depends on the metal of the shell, the material of the core and the filling factor. Using an Au-Ag alloy as the shell material can provide a red-shifted, narrow resonance peak in the near-infrared regime by combining the specific features of gold and silver. Moreover, inserting a high-permittivity core in a nanoshell corresponds to a red-shift, while a core with a small dielectric constant results in a blue-shift of the spectrum. We envision that this research offers a novel perspective and provides a practical guideline for the fabrication of efficient tunable absorbers in the nanoscale regime.

Model and method

A key feature that characterizes the optical spectrum of hybrid nanostructures is the interaction between different plasmon modes at different surfaces of the plasmonic metal. Even the simplest possible symmetry breaking can alter these interactions and give rise to modified, and altogether new, plasmonic features. Here, two types of symmetry breaking in nanoshells are considered. One is core offsetting, which means that the spherical core is displaced and the nanoshell has a non-concentric core; the other is a change of core geometry, generated by replacing the spherical core with a nanorod. It is worth mentioning that these nanostructures are proposed based on recent nanofabrication techniques 17,27-33 . The discussion begins with the modal properties of the tunable asymmetric nanostructures capable of supporting multiple LSPRs. It is common to consider a nanorod as a spheroid in simulations. The spheroid is an ellipsoid with two equal axes that can look like either a stretched (prolate) or a flattened (oblate) sphere. Since the polarizability of a nanorod depends sensitively on its orientation with respect to the polarization of the incident light, two distinct geometries are considered for this structure. The nanorod is treated as a prolate or an oblate spheroid when the polarization of light is parallel to its length or width, respectively. Figure 1 shows the geometry of asymmetric nanoshells consisting of a dielectric core (ε c ) and a metallic shell (ε(ω)) embedded in the host medium (ε m ). For comparison, the schematic of the symmetric core-shell nanostructure is also shown in this figure (Fig. 1a). As one can see, the first asymmetric nanoshell is a nanoegg (NE) in which the inner core (R 1 ) is moved away from the center (by σ) but does not touch the shell (R 2 ) (Fig. 1b). For the second asymmetric nanoshell, the hybrid NP can be achieved by inserting a dielectric core with the shape of either a prolate (Fig. 1c) or an oblate (Fig. 1d) spheroid in the spherical shell. The geometry of these nanoshells, which are called PS and OS, respectively, is defined by three parameters: the minor (a) and major (b) radii of the spheroid and the radius of the outer spherical shell (R 2 ). Here, it is assumed that the incident light propagates along the y-axis and is linearly polarized in the x-direction. In the first instance, the optical response of a metallic NP depends on its characteristic size d. If d ≪ λ, where λ is the wavelength of the incident light in the surrounding medium, the optical properties of the NP can be discussed with the dipolar model in the quasi-static regime, where the NP shows mainly a dipolar-like response. In the dipolar model, when a NP is illuminated, the external field induces a dipole moment inside the NP which is proportional to the field, p = αE 0 , where E 0 is the external electric field amplitude experienced by the particle and α is the effective polarizability of the NP. Once the polarizability of the target structure is known, the scattering, absorption and extinction efficiencies can be calculated in the quasi-static limit 34 , where k = 2π√ε m /λ is the wavenumber of light in the medium.

Optical modeling of the eccentric nanoshell. To obtain the static dipole polarizability of the nanoegg (α NE ), the Laplace equation was solved in the two spherical coordinate systems by applying appropriate boundary conditions at the core-shell and shell-environment interfaces 23 ; in the resulting expression, the terms proportional to the offset σ are the dipole-quadrupole and quadrupole-dipole coupling constants of the solid and cavity sphere plasmons, respectively. The polarizability of the core-shell NP (α CS ) can be obtained by setting σ = 0 in Eq. (4).

Optical modeling of the nanoshell with a nanorod core. To find the static polarizability of the nanoshell with a spheroidal core, the effective medium theory (EMT) can be employed. According to this analytical method, the hybrid nanostructure in a homogeneous medium is replaced by a single nanoparticle with an effective dielectric function. Using this theory, first the effective dielectric function for the entire nanostructure is derived, and then a relation between the polarizability and the dielectric function is obtained through the Clausius-Mossotti relation 36,37 . Here, the core, with depolarization factor L c i and volume V c , is considered as a spheroidal inclusion that occupies a volume fraction f = V c /V sh of the spherical metallic NP, with depolarization factor L sh i and volume V sh , which acts as the host medium (i = x, y, z). The important parameters in the geometry of an ellipsoid are its depolarization factors. The three depolarization factors of any ellipsoid satisfy L x + L y + L z = 1. A sphere has three equal depolarization factors of 1/3. The other two special cases are prolate spheroids and oblate spheroids. For an oblate, the two major axes (a x < a y = a z ) are equal, while for a prolate, the two minor axes are of the same size (a x > a y = a z ).
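As a sketch of what such geometrical factors look like, the standard quasi-static expression for the depolarization factor of a prolate spheroid along its long axis is implemented below; whether it coincides exactly with the paper's Eq. (11) has not been checked here, and the aspect ratios are arbitrary illustrative values.

import numpy as np

def prolate_L_major(a_major, a_minor):
    """Depolarization factor of a prolate spheroid along its long axis
    (standard quasi-static result); the two transverse factors follow from
    L_major + 2*L_minor = 1."""
    e = np.sqrt(1.0 - (a_minor / a_major) ** 2)     # eccentricity
    if e == 0:
        return 1.0 / 3.0                            # sphere limit
    return ((1 - e**2) / e**2) * (np.log((1 + e) / (1 - e)) / (2 * e) - 1)

for aspect in (1.0, 1.2, 1.5, 2.0):                 # major/minor axis ratio
    Lz = prolate_L_major(aspect, 1.0)
    Lx = (1.0 - Lz) / 2.0
    print(f"aspect {aspect}: L_major = {Lz:.3f}, L_minor = {Lx:.3f}, sum = {Lz + 2*Lx:.3f}")

The printout confirms the sphere limit of 1/3 and that the three depolarization factors always sum to 1.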
The geometrical factors of the oblate and prolate along their major axis are defined by Eqs. (10) and (11), respectively 34,38 , where the eccentricity is e = √(1 − a²/b²), with a and b the minor and major radii of the spheroid. The depolarization factor along their minor axes is L minor = (1 − L major )/2. Once the effective dielectric function of the NP is known, the dipole (α d ) and quadrupole (α q ) polarizabilities can be calculated in the quasi-static regime 39 (Eq. (12)). The extinction and scattering efficiencies are then given by Eqs. (13) and (14), respectively, where x = kR 2 is the size parameter. Note that the absorption efficiency is calculated as Q abs = Q ext − Q sca . From Eqs. (4), (9) and (12), it is apparent that the polarizability of the NP contains various parameters, each with its own significance. Among these, the dielectric function is one of the most important for understanding the optical properties of the metallic nanoshell. The Drude-Lorentz model gives the size-dependent dielectric function of the NP as 40-43 ε(ω) = ε ∞ − ω p ²/(ω² + iγω), where ε ∞ is the dielectric constant of the bulk metal, ω p is the plasma frequency, ω is the photon energy and γ bulk is the electron collision damping in the metal. For small nanoshells, i.e. smaller than 50 nm, the dielectric function has to be modified due to the size dependence of the electron mean free path. The finite-size effect on the dielectric function of the NP is accounted for through γ = Av F /L eff , where A is a dimensionless parameter for matching theoretical and empirical data, v F is the Fermi velocity and L eff is the effective mean free path of the electrons, which can be calculated as 44 L eff = 4V/(S in + S out ), where V is the volume of the nanoshell and S in and S out represent the internal and external surfaces of the shell, respectively. Since gold and silver NPs exhibit the most interesting selective absorption in the visible and near-infrared regime, the optical properties of nanoshells whose metallic shells are made of one of these two materials, or of their combination in the form of an alloy, are investigated here. The data related to the dielectric functions of gold and silver are presented in Table 1 45 . The dielectric constants of the core and host medium are other important parameters. Here, the core is made of silica and the nanoshell is embedded in an aqueous environment. In the wavelength range of 300-1400 nm, the refractive indices of SiO 2 and water are taken as 1.47 and 1.33, respectively. Results and discussion Geometry. Here, the optical response of nanoshells with different geometries, including core-shell (CS), nanoegg (NE), core with prolate shape (PS) and core with oblate shape (OS), is investigated. For a fair comparison between the different geometries, the radius of the shell and the filling factor f (defined as the ratio of the core to nanoparticle volumes) are identical for all structures. A spherical nanoshell with an outer radius of 25 nm and a filling factor of f = 0.512 is considered; for a nanoshell with a spherical core, the radius of the core is therefore 20 nm. In the case of the nanoegg, the offset is considered to be in the range of σ = 1-4 nm. For the PS and OS nanostructures, the major and minor radii are varied to achieve different aspect ratios, while the volume of the core is fixed. Figure 2A reports the optical response of the nanoshells. Here, the core offset of the NE is 4 nm, and for the PS and OS nanostructures, (a, b) are (23 nm, 15.12 nm) and (18.65 nm, 23 nm), respectively.
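The spheroid geometrical factors and the size-corrected Drude permittivity referenced above can be sketched as follows. This is a minimal illustration using the standard quasi-static expressions for prolate and oblate spheroids; the parameter values (ε ∞ , ω p , γ, A, v F ) are left to the caller and are not the paper's Table 1 entries.

```python
import numpy as np

def prolate_L_major(a, b):
    """Depolarization factor of a prolate spheroid along its long (symmetry) axis;
    a = minor radius, b = major radius (requires a < b)."""
    e = np.sqrt(1.0 - (a / b) ** 2)                      # eccentricity
    return (1 - e ** 2) / e ** 2 * (-1 + np.log((1 + e) / (1 - e)) / (2 * e))

def oblate_L_equatorial(a, b):
    """Depolarization factor of an oblate spheroid along one of its long (equatorial)
    axes; a = minor (symmetry) radius, b = major radius (requires a < b)."""
    e = np.sqrt(1.0 - (a / b) ** 2)
    g = np.sqrt((1 - e ** 2) / e ** 2)
    return g / (2 * e ** 2) * (np.pi / 2 - np.arctan(g)) - g ** 2 / 2

def drude_epsilon(hw_eV, eps_inf, hwp_eV, hgamma_eV):
    """Drude dielectric function evaluated at photon energy hw (all quantities in eV)."""
    return eps_inf - hwp_eV ** 2 / (hw_eV ** 2 + 1j * hgamma_eV * hw_eV)

def size_corrected_gamma(hgamma_bulk_eV, A, v_F, L_eff, hbar_eVs=6.582e-16):
    """Surface-scattering correction: the bulk damping is commonly augmented by
    A*v_F/L_eff (v_F in m/s, L_eff in m); the result is returned in eV."""
    return hgamma_bulk_eV + A * hbar_eVs * v_F / L_eff

# Example: a 23 nm x 15.12 nm spheroid, and an illustrative damping correction.
print(prolate_L_major(15.12e-9, 23e-9), oblate_L_equatorial(15.12e-9, 23e-9))
print(size_corrected_gamma(0.07, A=1.0, v_F=1.4e6, L_eff=5e-9))
```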
For all structures presented in this part, the metal and dielectric of choice are gold and silica, respectively. It is found that the scattering efficiencies are smaller than the absorption ones, so most of the incident light is absorbed by these small nanoshells; therefore, only absorption spectra are presented in the following figures. As one can see, two distinct plasmon resonance peaks can be observed in the spectra of CS, PS and OS, while the spectrum of NE exhibits three LSPR peaks. To understand this plasmonic behavior, the plasmon hybridization of solid and cavity plasmon modes can be applied to this geometry. According to the hybridization model, a metallic nanoshell supports plasmon modes at the inner and outer surfaces of the shell which can interact with each other due to the finite thickness of the shell [46][47][48] . This interaction results in the splitting of the plasmon resonances into two hybridized resonances: the lower-energy bonding mode (BM) and the higher-energy antibonding mode (ABM). The strength of hybridization between the plasmon modes is controlled by the thickness of the metallic shell. The coupling between the incident light and the BM is strong due to the large net electric dipole moment of the bonding mode, whereas there is only a weak interaction between the optical field and the ABM, which has an electric quadrupolar nature 49,50 . As can be seen from Fig. 2A, both BM and ABM are observed in the absorption spectra of the CS, PS and OS structures. The low-energy plasmon modes of CS, PS and OS are located at 661 nm, 637 nm and 749 nm, and their high-energy modes are located at 437 nm, 435 nm and 442 nm, respectively. Besides the antibonding and bonding modes at 439 nm and 708 nm, another resonance mode is also observed in the spectrum of NE, at 562 nm. To gain further insight into the plasmon modes, the electric field distributions near the nanoshells at all LSPR maxima are also calculated and presented in Fig. 3. It is evident that there is a high field enhancement at the outer interface of the nanoshells for the bonding resonance modes, whereas the maximum electric field is strongly confined inside the nanoshell for the antibonding modes. Unlike CS, PS and OS, which show an almost symmetric field distribution at the outer surface, the electric field is not uniformly distributed at the outer surface of the NE; instead, it is concentrated on the side where the core is close to the shell, while almost no electric field is observed on the opposite side. Note that the highest field enhancement is achieved by the NE at 562 nm. As the geometry of the spherical core is altered toward elongated spheroids, the charge accumulation on the surface of the core, the thickness of the shell and therefore the energy of the cavity plasmon mode, which ultimately governs the hybridization, all change. As a result, the positions of the BM and ABM peaks and their absorption efficiencies can be tuned. It is well known that the polarizability of the NP determines the absorption efficiency, and its strength depends strongly on the magnitude of the charges on the surface of the nanoparticle 51 . This charge accumulation is influenced by the NP shape. Depending on the polarization of the incident light, charge separation on the surface of a spheroidal NP occurs along two different directions. Since the incident radiation is linearly polarized in the x-direction, the longitudinal and transverse plasmon modes are excited in the prolate and oblate, respectively.
For a more accurate discussion of the charge accumulation, the surface charge density at the metal-dielectric interfaces of the nanoshell should be considered 52,53 . The surface charge density is calculated by applying Gauss's law at the inner and outer surfaces of the nanoshell 21 . Figure 4 shows the surface charge density on the inner and outer surfaces of the nanoshells for the bonding plasmon modes. As expected, opposite charges are found on opposite sides of the surface along the direction in which the light is polarized (x-axis). Induced charges of the same sign reside on the inner and outer surfaces of the nanoshell, indicating the dipolar bonding plasmon mode. From Fig. 4 it is clear that when the spherical core is deformed toward elongated spheroids, the magnitude of the effective surface charge density as well as its spatial separation change 54,55 . When the electric field is transverse to the long axis of the spheroid (OS, Fig. 4C), surface charges are generated on the sides of the oblate. In this case, the magnitude of the induced charges and their spatial separation are decreased. Here, the effect of charge reduction is much greater and, therefore, the restoring force is reduced; this corresponds to a red-shift of the cavity plasmon mode. On the contrary, when the electric field is parallel to the long axis of the spheroid (PS, Fig. 4B), surface charges are generated on the tips of the prolate, the situation is reversed and a blue-shift of the cavity plasmon mode results 54,56 . It has been shown that the absorption peaks are extremely sensitive to the variation of the electron density distributed on the inner and outer surfaces of the nanoshell. The shift of the LSPR peak can be estimated from Eq. (17) 57 , where λ sp , N and η are the LSPR wavelength, the electron density and the particle shape factor, respectively. For the OS structure, where the coupling occurs between the dipolar mode of the shell and the transverse mode of the spheroidal core, there is a large change in the electron density; therefore, a considerable red-shift is observed in the bonding mode. The hybridization between the inner and outer plasmon modes becomes stronger, which results in a larger energy gap between BM and ABM. In addition, the coupling between the ABM and the incident optical field is enhanced, since this mode can acquire a net electric dipole moment. Although the resonance occurs at a higher wavelength and shifts toward the near-infrared regime, this plasmon mode is comparatively broadened. For PS, the situation is reversed and the BM mode experiences a minor blue-shift due to a decrease in electron density. Here, the hybridization is weak and only a small splitting of the plasmon modes occurs; however, the narrowing of the spectrum is the main advantage of this structure. As mentioned, another interesting outcome of Fig. 2A is the emergence of a new peak in the absorption spectrum of NE. The reason is as follows: for a CS with a concentric core, plasmon hybridization only occurs between solid and cavity modes of the same angular momentum number. However, when the center of the core is displaced with respect to the center of the shell, i.e. in the nanoegg, the selection rules for the interaction of plasmon modes are strongly modified and the hybridization of solid and cavity plasmons with different angular momenta is allowed. This hybridization leads to a red-shift of both BM and ABM toward lower energies, and a new resonance appears due to the coupling between dipole-quadrupole cavity and solid plasmon modes.
Multiple LSPR modes make the NE a good candidate for applications where simultaneous absorption enhancement at two or more different wavelengths is required. The results reported in Fig. 2A indicate that inserting an asymmetric core into a spherical nanoshell provides a more tunable LSPR than the concentric nanoshell, obtained by changing the core offset of the NE or the aspect ratio (AR = major radius/minor radius) of the spheroidal core. The absorption spectra of NEs with different core offsets are plotted in Fig. 2B. It is clear that only BM and ABM are observed in the absorption spectrum when the core offset is small. With increasing core offset, the interaction of dipole-quadrupole modes leads to the emergence of a new LSPR mode (MM) in the spectrum. A larger offset correlates with a larger red-shift and a smaller absorption efficiency of the BM LSPR; the position and efficiency of the ABM, on the other hand, remain almost constant. Figure 2C details the effect of the core offset on the LSPR wavelength and the absorption efficiency of BM and MM. Figure 2D,F show the absorption spectra of PS and OS, respectively, where the aspect ratio of the core is varied while the NP size and core volume fraction are kept fixed. It is found that the BM LSPR of PS and OS experiences a slight blue-shift and a significant red-shift, respectively, when the AR is increased. Based on these findings, by changing the AR the BM peak of OS can be tuned considerably, from 661 to 790 nm, whereas the BM peak of PS can be adjusted only slightly, from 661 to 620 nm. The ABM mode of both structures remains almost constant. Despite the advantage of a large resonance displacement, the absorption efficiency of the OS NP is reduced with increasing AR; on the contrary, this parameter does not significantly affect the absorption yield of the PS nanostructure. The effect of the AR of PS and OS on the LSPR wavelength and its absorption efficiency is reported in Fig. 2E,G, respectively. Material. The absorption spectrum of a NP is fully characterized by the resonance frequency, the maximum absorption cross-section and the bandwidth of the LSPR 58 . For photothermal applications, nanoparticles should possess a narrow LSPR with high absorption within the biological window. Since gold has a smaller real part and a greater imaginary part of the dielectric function, fewer polarized charges and more plasmonic damping are produced by Au NPs compared to Ag NPs 59,60 . In other words, the Au LSPR always occurs at a higher wavelength than the one calculated for the corresponding Ag NP; however, the absorption spectrum of gold is broadened and its maximum is much lower than the sharp LSPR peak of silver. In addition, Au NPs have good biocompatibility and high chemical stability, while Ag NPs show toxicity and chemical instability. Combining gold and silver as the shell material yields fascinating optical properties in comparison with monometallic NPs: an Au-Ag alloy merges the high homogeneity and biocompatibility of gold with the higher absorption cross-section of silver 61 . The use of this alloy as a nanoshell material provides two advantages, an LSPR red-shifted to the NIR regime and a narrow-bandwidth spectrum; moreover, further tunability can be obtained, since the percentage of each constituent can be varied. To investigate the effect of the metallic shell material, the absorption spectra of 20 nm nanoshells (CS, NE, PS and OS) with different metallic shells (Au, Ag and alloy) embedded in water were calculated, using the linearly mixed alloy permittivity sketched below.
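For reference, the alloy permittivity used in such a comparison can be formed by the simple linear-combination rule described in the next paragraph. The snippet below is a minimal sketch with assumed illustrative permittivities, not the paper's tabulated Au and Ag data.

```python
def alloy_epsilon(eps_au, eps_ag, x_au):
    """Linear mixing of the Au and Ag dielectric data: eps(Au_x Ag_1-x) = x*eps_Au + (1-x)*eps_Ag."""
    return x_au * eps_au + (1.0 - x_au) * eps_ag

# Assumed illustrative permittivities near 600 nm (placeholders only).
eps_au, eps_ag = -9.4 + 1.5j, -15.0 + 0.4j
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(x, alloy_epsilon(eps_au, eps_ag, x))
```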
Note that the dielectric function of the alloy Au x Ag 1−x can be estimated using a simple linear combination of the dielectric constants 62 , ε alloy = x ε Au + (1 − x) ε Ag , where x denotes the Au fraction in the alloy. The results in Fig. 5 clearly demonstrate how the absorption spectra are red-shifted and the absorption efficiency is reduced as the Au content of the shell material increases. This trend is observed for all structures (CS, NE, OS and PS). Conversely, the structures with a higher Ag content have narrower spectra with greater absorption efficiencies. Pinchuk et al. reported that the additional contribution of the interband electronic transitions should be considered in the dielectric function of a noble metal 58 . The resonance frequency and the full width at half maximum (FWHM) of the LSPR are then modified as in Eqs. (19) and (20), where χ is the interband susceptibility 63 . Since χ Au > χ Ag , the LSPR of Au occurs at higher wavelengths. For silver, the imaginary part of the susceptibility is very small, so that Im(χ) ≈ 0 in the optical range; as a result, the square root in Eq. (20) is close to unity and the FWHM approximately equals the damping rate. The plasmonic mode of Au, however, is broadened compared to that of Ag, due to the larger contribution of the imaginary part of the interband transitions to the bulk dielectric function of gold. It is also observed that the absorption efficiency is enhanced by increasing the amount of Ag in the nanoshell, owing to the large absorption coefficient of silver 64 . The amplitude of the absorption spectrum is roughly determined by the absorption coefficient α(ω) = 2κ(ω)ω/c, where κ(ω) = Im(ε)/(2n); κ and n are the extinction coefficient and refractive index, respectively. The details of the effect of the Au fraction on the BM LSPR wavelength, FWHM and absorption efficiency are presented in Fig. 3E,F. Core dielectric function. It is apparent that both the LSPR peak position and the intensity of the absorption spectra are significantly influenced by the dielectric properties of the core material. Using a magnetic core in a nanoshell offers the unique advantage of combining magnetic and plasmonic properties; the most commonly used magnetic NPs in biological applications are iron oxides, which exist in several different compositions. To exemplify the effect of the core dielectric and take advantage of magnetoplasmonic nanostructures, the optical properties of the nanoshells are calculated for four different core materials: hollow (i.e. a refractive index equal to that of the surrounding environment), silica, magnetite and hematite. Figure 6 presents the absorption spectra of nanoshells identical to those in Fig. 2A, except that the alloy Au 0.5 Ag 0.5 has been chosen as the shell material. Note that the dielectric constant of iron oxide varies with the wavelength of the incident light in the visible-infrared range 65 . Here, only the bonding modes are analyzed, because they are the most intense plasmonic modes in the spectral window of interest; the antibonding modes are not observable in the selected range (500-1000 nm). The results reveal that a difference in the values of the core dielectric constant shifts the LSPR peak, because the cavity plasmon mode is modified and, with it, the extent of the plasmon coupling. When the core region is filled with a higher-permittivity medium, the effective dielectric constant of the core increases, which results in a greater polarization of the dielectric medium.
As a consequence, the accumulated charges in the resonance zone are attenuated and, therefore, the restoring force is reduced; this corresponds to a lower resonant frequency (i.e., a larger wavelength), which is why the resonance peaks are red-shifted 66,67 . Filling the core region with a lower-permittivity medium reverses the situation. In other words, a core with a high permittivity leads to a red-shift, whereas a core with a small dielectric constant results in a blue-shift of the LSPR. Compared to SiO 2 @Au 0.5 Ag 0.5 , the hollow nanoshell is slightly blue-shifted, while the spectra of Fe 3 O 4 @Au 0.5 Ag 0.5 and Fe 2 O 3 @Au 0.5 Ag 0.5 are considerably red-shifted; the larger the real refractive index, the larger the red-shift of the LSPR peak. It is also observed that the LSPR peak intensities of the iron oxide cores are remarkably lower than those of the hollow-core and silica-core NPs. The dashed lines show, for comparison, the absorption spectra of the iron oxides computed with only the real part of the refractive index. It is clear that the positions of the LSPR peaks are identical for the absorbing and non-absorbing magnetoplasmonic NPs; however, the absorption of the absorbing ones is lower. The reason is that the strength of hybridization between the solid and cavity plasmon modes decreases when a core material with a complex refractive index is used, because the cavity plasmon modes are damped 68 . Therefore, when the imaginary component of the permittivity is not negligible, the intensity of the LSPR peak is weakened; the larger the imaginary refractive index, the larger the drop in intensity of the LSPR peak. It should be noted that for Fe 3 O 4 and Fe 2 O 3 the imaginary component of the refractive index in the optical range of 500-1000 nm is small; hence, its effect on the position of the resonance peak is negligible 69 . Filling factor. The impact of the filling factor (f) on the tunability of the resonance peak as well as on the absorption efficiency is investigated by changing the core dimension while the total size of the nanoshell remains constant. For a complete comparison, the absorption spectra of metal-coated silica and hematite NPs embedded in water are calculated for the CS, NE, PS and OS configurations and shown in Fig. 7. Here, the metal of choice is an alloy containing 50% gold. The nanoshell radius is fixed at 25 nm, whereas the value of f is increased from 0.31 to 0.77. For all configurations, the results clearly indicate that the resonance peak at longer wavelength shows a considerable red-shift, whereas the resonance peak at shorter wavelength exhibits a slight blue-shift, when the ratio between the core and NP volumes (f) is increased. Since these two modes exhibit opposite trends in peak displacement, well-separated LSPRs can be obtained at higher f. This trend is observed in the spectra of both SiO 2 @Alloy and Fe 2 O 3 @Alloy. As expected, a larger red-shift is observed in the absorption spectrum of the magnetic NP due to its larger real part of the dielectric permittivity. At the same time, higher-order plasmon modes are excited and additional LSPRs appear in the spectra of NE and OS for small f, due to the relatively strong symmetry breaking in these structures. Moreover, the absorption efficiency of the BM mode initially increases, reaches its maximum and then decreases; a toy sweep illustrating the bonding-mode red-shift with increasing f is sketched below.
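As a rough illustration of this trend (not a reproduction of the paper's calculation), the following sketch sweeps the filling factor for a concentric silica/gold nanoshell in water, using a plain Drude permittivity with assumed illustrative parameters; the bonding-mode absorption peak moves to longer wavelengths as f grows.

```python
import numpy as np

# Assumed illustrative Drude parameters for gold (in eV); not the paper's Table 1 values.
eps_inf, hw_p, hg = 9.0, 9.0, 0.07
R2, eps_c, eps_m = 25e-9, 1.47 ** 2, 1.33 ** 2            # silica core, water host

wl = np.linspace(450e-9, 1100e-9, 651)                    # wavelength grid (m)
hw = 1239.84 / (wl * 1e9)                                 # photon energy (eV)
eps_s = eps_inf - hw_p ** 2 / (hw ** 2 + 1j * hg * hw)    # gold shell permittivity

def bonding_peak(f):
    """Peak wavelength of Q_abs for a concentric silica/Au shell with core fraction f."""
    num = (eps_s - eps_m) * (eps_c + 2 * eps_s) + f * (eps_c - eps_s) * (eps_m + 2 * eps_s)
    den = (eps_s + 2 * eps_m) * (eps_c + 2 * eps_s) + f * (2 * eps_s - 2 * eps_m) * (eps_c - eps_s)
    alpha = 4 * np.pi * R2 ** 3 * num / den
    k = 2 * np.pi * np.sqrt(eps_m) / wl
    q_abs = k * np.imag(alpha) / (np.pi * R2 ** 2)
    return wl[np.argmax(q_abs)] * 1e9

for f in (0.31, 0.512, 0.77):
    print(f"f = {f:.3f}: bonding-mode absorption peak near {bonding_peak(f):.0f} nm")
```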
In addition, for the MM resonance peak of the NE configuration, a considerable red-shift together with a reduction in absorption is observed in the spectrum as the value of f increases. Another effect seen in Fig. 7 is that, for Fe 2 O 3 @Au 0.5 Ag 0.5 , the absorption intensity of the hybridized mode at the shorter wavelength is enhanced several times; in other words, the antibonding mode becomes visible in the absorption spectrum. As mentioned before, the properties of nanohybrid structures can be described in terms of plasmon hybridization, where the cavity plasmon modes associated with the inner surface of the shell interact with the solid plasmon modes at the outer surface of the shell. Apart from the geometric parameters, the energy of the cavity mode is determined by the dielectric permittivities of the core and shell, while the energy of the solid mode depends on the permittivities of the shell and surrounding medium 49 . Therefore, any variation in ε c leads to an adjustment of the cavity mode energy and a change in the absorption spectrum. For a nanoshell with a low-permittivity core, i.e. silica, the cavity plasmon mode lies at higher energy than the solid plasmon mode. As a result, the low- and high-energy hybridized modes are the bright and dark plasmons, respectively, and the absorption spectrum is dominated entirely by the low-energy mode. On the contrary, for a high-permittivity core material, i.e. hematite, the energy of the cavity plasmon mode is lower than the energy of the solid mode. In this regime, although the bonding mode is still present in the absorption spectrum, the higher-energy antibonding mode has the largest absorption efficiency 68 . It should be pointed out that this absorption enhancement is much stronger in OS NPs, so that the absorption of the ABM is several times larger than that of the BM. Recall that the difference in absorption amplitude can be attributed to the difference in charge separation 63 . Except for the PS nanostructure, where the absorption efficiency of the ABM mode changes only slightly, Q abs gradually decreases as the filling factor increases. Similarly, the effect of the filling factor on the optical properties of NPs with hollow and magnetite cores is also analyzed. Figure 8 details the effect of the filling factor on the absorption spectra of NPs with the different geometries CS, NE, PS and OS. The results signify that the filling factor is an important parameter for tuning the resonance in the wavelength range from 300 to 1400 nm. Conclusion In this paper, a detailed theoretical study of the optical response of asymmetric nanoshells with different geometries, including core-shell, nanoegg and nanorod core in a spherical shell (i.e., PS and OS), has been presented in the quasi-static regime by applying the dipolar model and effective medium theory. The plasmon hybridization model is employed to explain the plasmonic behavior of these nanostructures. According to hybridization theory, the interaction of the solid and cavity plasmon modes at the inner and outer surfaces of the nanoshell results in the splitting of the plasmon resonances into a lower-energy bonding mode (BM) and a higher-energy antibonding mode (ABM). Additionally, higher-order plasmon modes can be excited and appear in the absorption spectrum due to the relaxation of the selection rules of the plasmon interactions.
The strength of this coupling is remarkably altered either by changing the geometry of the core to a prolate or oblate spheroid or by offsetting the spherical core. In comparison with CS, a considerable red-shift together with a reduction in absorption is observed in the BM mode of PS and OS, while the shift of the ABM modes is slight. For the case of a nanorod core, the results indicate that the coupling between the plasmon modes of OS is stronger and the energy gap between BM and ABM is larger; the absorption spectrum of OS shifts toward higher wavelengths, into the near-infrared regime. Moreover, the ABM mode of OS is also enhanced due to its large electric dipole moment. On the other hand, although the red-shift of PS is not as large as that of OS, its LSPR is much narrower. For NE, besides BM and ABM, another resonance mode (MM) emerges in the spectrum as a result of the coupling between cavity and solid plasmon modes with different angular momenta. In addition, the bonding mode of NE is red-shifted and its absorption is decreased compared to CS. Furthermore, the absorption spectrum of the asymmetric nanoshell can be tuned over a wider spectral region either by changing the aspect ratio of the nanorod or the core offset of the nanoegg. Based on the findings, a larger aspect ratio correlates with a larger shift in the spectrum; similarly, a larger core offset leads to a larger red-shift of the spectrum. It is also found that the optical response of the nanoshell depends significantly on the material of the shell. In contrast to the silver-coated nanoshell, which provides a narrow LSPR at shorter wavelengths with high absorption efficiency, the broader and less absorbing LSPR of the gold-coated NP occurs at longer wavelengths. Combining gold and silver as the shell material is a good way to merge the advantages of both metals in a single NP; in addition, the absorption spectrum can be tuned over a wide spectral range by changing the percentage of gold or silver in the alloy. When the Au content in the alloy increases, the spectrum is red-shifted and the absorption efficiency is reduced, and the same trend is observed in the spectra of all geometries (CS, NE, PS and OS). One of the effective parameters for modifying the hybridization is the dielectric function of the core, since it determines the cavity plasmon mode energy; therefore, changing the material of the core changes the optical response of the nanoshell. Inserting a high-permittivity core in the nanoshell corresponds to a red-shift, while a core with a small dielectric constant results in a blue-shift of the bonding mode. Furthermore, using a magnetic core (magnetite or hematite) with a complex dielectric function in the metallic nanoshell can unite plasmonic and magnetic properties in a single NP. A larger red-shift is observed in the spectra of magnetite and hematite due to their large real refractive indices. At the same time, the LSPR absorption intensities of the magnetic NPs are remarkably lower than that of the silica-core nanoshell, because the imaginary part of their dielectric functions damps the cavity plasmon modes and thereby weakens the hybridization between the solid and cavity plasmon modes. Finally, it is observed that the resonance peaks at longer wavelengths show a significant red-shift as the filling factor increases, while the resonance peaks at shorter wavelengths exhibit a slight blue-shift; this trend is observed in the absorption spectra of all the NPs reported.
These findings open a new route to optimize the synthesis of hybrid nanostructures in various applications as far as absorption is concerned.
8,062.6
2021-07-23T00:00:00.000
[ "Physics" ]
Energy Saving Technology of 5G Base Station Based on Internet of Things Collaborative Control For time and space constraints, 5G base stations will have more serious energy consumption problems in some time periods, so it needs corresponding sleep strategies to reduce energy consumption. Based on the analysis of 5G super dense base station network structure, through the analysis of current situation and user demand, a cluster sleep method based on genetic algorithm is constructed under the support of genetic algorithm, which can realize the dynamic matching of energy consumption in time domain and space, and the low load base station enters the sleep state. In order to verify the performance of the algorithm, the simulation network structure is built on the MATLAB platform, and the advantages of the algorithm in this study are obtained through comparative analysis, and the relevant test parameters are set for the technical performance analysis of this study. The research shows that the method proposed in this paper has a certain energy-saving effect, can meet the energy efficiency requirements of 5G ultra dense base station, and in the ultra dense base station group, the complexity can also meet the system operation requirements, which has a certain degree of practicality, and can provide reference for the follow-up related research. I. INTRODUCTION With the continuous development of mobile communication technology, people's access to information and speed continue to improve, in the context of 5G network gradually popularized, a large number of base stations are in full swing. Compared with 4G network, 5G network requires higher number and density of base stations, so the resource consumption caused by the capacity of communication system should be fully considered in the process of mobile base station layout. The traditional macro cellular network architecture can not meet the energy consumption demand of the dense 5G base station, so we need to improve the traditional macro cellular network structure and propose a new network structure scheme. UDN technology is a heterogeneous network structure which is different from the traditional macro cellular network. This network structure has the characteristics of dense layout, high frequency reuse rate, and can effectively improve the network capacity. It is the basic network structure commonly used in 5G network at present. The associate editor coordinating the review of this manuscript and approving it for publication was Wei Wei . The layout of this small cellular network structure will lead to higher density of communication base stations. In order to improve the quality of network communication service, it is necessary to further reduce the energy consumption in network operation and realize green communication. Based on the demand of green communication under the background of 5G growing popularity, this study analyzes the sleep algorithm of base station, explores the energy-saving technology of 5G base station, combines the Internet of things technology, collects network data with the support of sensors, and constructs a centralized dynamic sleep method based on genetic algorithm to realize the system network cluster management of high-density and large-scale base station groups, effectively reduce the energy loss of centralized base station group and improve the system stability. 
A review of the literature shows that current research on 5G mainly focuses on networking modes and the stability of the operation process, while research on the energy-saving process is mostly one-sided. In this study, the traditional algorithm is improved upon, and the main research work includes the following: (1) this study abandons the traditional distributed base-station sleep mode and adopts a centralized sleep algorithm that considers all combinations of base stations, so as to find the best sleep scheme and improve both the energy-saving effect and the operational stability of the system; (2) the genetic algorithm is applied to the selection and calculation of the dormancy scheme, which effectively improves the calculation efficiency of the system, and a cluster mode of collaborative control is proposed, so that the energy-saving mode of each cluster is calculated in combination with the genetic algorithm; (3) in the base station clustering, a dynamic clustering algorithm based on the cooperation degree avoids the inability of traditional algorithms to adapt to the time-varying channels of the actual system, and further improves system stability. In summary, to address the high complexity of centralized sleep algorithms, this study proposes a centralized dynamic sleep strategy built on this series of algorithms. With the support of the Internet of Things data collection system, a dynamic analysis of base station sleep is carried out to obtain the best network sleep strategy in the shortest time, so as to effectively reduce network energy consumption and achieve the expected goal of base station energy saving. II. RELATED WORK This research builds on the work of many experts and scholars; by reviewing the literature, the current state of research can be understood. At present, green wireless communication technology has received extensive attention from the communication industry, and scholars in various countries have studied communication energy-saving technology from multiple perspectives, including network structure innovation, new system modes and channel protocol improvements. These analyses promote the development of green energy-saving technology and have produced many theories and technical methods. Academia has carried out research through several international projects, such as a European telecom design project [1], a French mobile communication network structure optimization project [2], and a UK project that examined the existing network structure and put forward a corresponding green energy-saving radio scheme [3]. International standardization organizations have now included green communication in the corresponding standards, which provides a reference for subsequent communication network construction. Traditional mobile communication network resource allocation mainly focuses on the effective allocation of network resources; the research parameters include spectrum efficiency, output signal strength, interference terms, etc. The reasonable allocation of these resources can effectively improve the efficiency of network operation, but this research approach is only applicable to 4G and earlier network structures.
The large amount of resource waste caused by 5G intensive base station group needs to be optimized by the corresponding green energy-saving technology. The energy-saving strategy based on wireless resource management is a common energy-saving technology in the current communication system. This technology not only needs to deal with the integration of various heterogeneous networks, but also needs to make full use of various network protocol information, and provide demand guarantee for the system at different terminals. According to the mobile communication network structure, the load analysis is carried out [4], and the corresponding base station group model is constructed. Combined with the simulation, the in-depth study of the base station dormancy scheme is carried out, and the corresponding optimization strategy is proposed, and finally an effective optimization scheme is obtained to achieve the expected goal of green energy saving [5]. In view of the high energy consumption of the base station, a random sleep scheme is proposed. The minimum energy consumption of the base station is set as the constraint condition, and the sleep probability of the base station is calculated and analyzed [6]. Based on the previous research of sleep strategy, the parameter of network coverage is added to design the model for performance simulation. The simulation results show that the actual energy consumption is reduced by 30%, but the dynamic load change is not realized. Reference [7] put forward to take the distance sense as a reference parameter, design the corresponding energy-saving sleep scheme, design the simulation experiment to analyze the energy consumption of the base station. From the research results, although it has some energy-saving effect, it still does not realize the dynamic change of the load [8]. Based on the research of mobile communication base station dormancy technology, choose the double-layer heterogeneous network as the basic network structure, and take the distance between the small base station and the macro base station as the decision condition, and implement the base station dormancy in the order of the distance from the macro base station from the near to the far. This method has certain effect, but it can not effectively identify the distance between the base station and the macro base station, so this method is not easy to promote [9]. According to the analysis of mobile communication network load, a simulation model based on load sensing is proposed, which takes the base station load as the basis of system identification. If the base station load is lower than the set value, it will be shut down to realize the base station sleep. Chaudhary et al. [10] combined with the mobile characteristics of mobile networks, and also use the load as the threshold value for load monitoring. When the load is lower than the threshold value, it will enter the sleep state, and when the load gradually increases, it will gradually open, so that the system can serve multiple mobile terminal users. Reference [11] select the heterogeneous network architecture to study the sleep system, design the base station group corresponding to the threshold, and carry out the system research with the service load as the sleep combination, and study the heterogeneous network macro base station and small base station, and analyze the best sleep combination scheme from the orthogonal spectrum of macro base station and small base station [12]. 
Based on the research of the structure of heterogeneous cellular network, this paper constructs the method of base station dormancy based on heterogeneous cellular network. The system mainly accesses the algorithm model according to the resource pairing. The small base station in this system uses the load threshold to sleep, while the macro base station in the base station uses the low load mode to sleep. Tian et al. [13] analyzed the architecture of the mobile communication network, selected the separation architecture, analyzed the random sleep of the small base station based on the architecture, and analyzed the channel when the base station slept, that is to say, the spectrum is released by the small base station. With the help of the small base station, the macro base station can effectively use these spectrum, so as to help more base stations to sleep. Reference [7] researches on the mobile vehicle network, add the sleep algorithm to the network, so as to realize the real-time update of the vehicle position of the base station. The design of cooperative communication between mobile communication base stations can effectively reduce the system energy consumption and realize the cooperation between the same layer. In this state, there are two main states, the sleep state and the active state. The base station is generally in one of these two states. When a base station is in the sleep mode, it can transfer to the adjacent base station in the mode, and interfere with other communication users within the sleep range of the base station. In addition, cooperation research can also be carried out through the base stations between different layers. In this way, the central base station mainly helps users within the coverage of the surrounding dormant base stations by expanding [14]. The statistics of the base station energy reduction model are shown in Table 1. There are also some experts and scholars who realize the base station sleep by combining new energy with base station, so as to effectively reduce the energy consumption of base station. In the research of base station, the random characteristics of base station load and the random characteristics of green energy saving of base station need to be considered. In order to maximize the stability of the system, the above two points need to be combined. When the base station is dormant, we need to consider some control strategies of the base station dormancy mode, and also need to study some other control modes of the base station dormancy, including the control of various signals of the base station. III. SYSTEM MODEL Compared with 4G network, 5G network requires higher number and density of base stations, so the resource consumption caused by the capacity of communication system should be fully considered in the process of mobile base station layout. Mobile cellular network is an important carrier of data transmission in mobile communication [15]. There is obvious tidal effect in the process of data transmission, which directly leads to low energy utilization. Therefore, when the traffic load of the base station is low, if it works with the maximum power, it will lead to serious waste of resources, thus, the base station with low load can be directly selected to enter the sleep state, so as to reduce the unnecessary energy loss in the network system. 
In the process of energy-saving technology analysis of 5G base station, this study directly from the perspective of user access, combined with the analysis of user communication quality and network energy efficiency, combined with genetic algorithm to build 5G base station energy-saving technology based on Internet of things collaborative control, and reasonably explain the relevant issues, and reasonably sleep the base station [16]. A. SYSTEM NETWORK MODEL Compared with 4G network, 5G network requires higher number and density of base stations, so the resource consumption caused by the capacity of communication system should be fully considered in the process of mobile base station layout. The traditional macro cellular network architecture can not meet the energy consumption demand of the dense 5G base station, so we need to improve the traditional macro cellular network structure and propose a new network structure scheme. UDN technology is a heterogeneous network structure which is different from the traditional macro cellular network. This network structure has the characteristics of dense layout, high frequency reuse rate, and can effectively improve the network capacity. It is the basic network structure commonly used in 5G network at present. The layout of this small cellular network structure will lead to higher density of communication base stations. In order to improve the quality of network communication service, it is necessary to further reduce the energy consumption in network operation and realize green communication. Affected by the wavelength, the density of 5G base station is far higher than 4G network (Fig.1). It is set that there are M service base stations in the base station, and they are expressed as M = {1, 2, · · · , M }, where BS m is the base station [17]. Suppose that there are N users in the network, and they are respectively expressed as N = {1, 2, · · · , N }, Where UN n is the user whose number is n, the real-time switch state of the matrix base station is represented by the matrix S = [s m ] l×M , the value of 1 indicates the active state, and the value of 0 indicates the dormant state. 1) NETWORK LOAD The network complexity in the system indicates the number of users of the network load at a certain time, and there is a positive correlation between the number of users and the network load [18]. Therefore, the network load can be represented by the number of users, and the relationship between the number of users of a certain day and the time can be represented as a histogram, from which it can be seen that the network load is in a dynamic change, among them, 0-8 points is the low period of network load, and the network load over 8 points shows a jump increase, and continues to 0 point, which is closely related to people's work and rest time (Fig.2) [19]. The statistical results are shown in Figure 2. 2) FLOW MODEL In order to improve the effective deployment of follow-up skills, it is necessary to analyze the model, establish the attribute parameters corresponding to the model, and the traffic model mainly corresponds to the spatial customer distribution relationship. The user distribution in this study conforms to the characteristics of random distribution ( Fig.3) [20]. In this study, experimental analysis was performed through matalab, and the results are shown in Figure 3. Figure 3 (a) shows Random distribution, Figure 3 (b) shows Uniform distribution, Figure 3 (c) shows Uniform distribution. B. 
SYSTEM ENERGY CONSUMPTION MODEL There are two main forms of base station energy consumption: dynamic energy consumption and static energy consumption. Static energy consumption is mainly basic power consumption without load. Dynamic energy consumption has a functional relationship with base station load, which can be expressed as: sleep mod e (1) P 0 − −SBS represents the basic circuit power consumption in the active state; m -represents the load related proportional coefficient; P T m − −SBS represents the transmission power; P sleep -represents the power consumption when SBS is in the sleep state. There are two main ways to calculate the energy efficiency of cellular network, one is to calculate the energy consumption per unit area, unit W m 2 . this method focuses on calculating the total energy consumption when calculating the energy consumption [21]. The other is to calculate the energy consumption per unit bit, unit bit s.W , simplification to bit J . This calculation method mainly calculates the comparison value between data transmission rate and energy consumption, so it is more comprehensive, including not only total energy consumption, but also data transmission rate. In this study, energy consumption per unit bit is used as energy efficiency calculation of cellular network, and the calculation formula is as follows [22]: In the above formula, C t -represents the total capacity of the target network; P t -represents the total energy consumption of the target network system. Combined with the network architecture model of this study, the statistical formula of network energy efficiency EE net can be shown in (3). In the above formula, N S -represents the number of base stations in the target network; C S i -represents the throughput of the i-th small base station; P S i -represents the power consumption of the i-th small base station. C. SYSTEM ACCESS MODEL 1) ESTABLISH PAIRING MODEL TO REALIZE PAIRING CONNECTION BETWEEN USERS AND BASE STATIONS Due to the dense 5G base stations, the distance between the base stations is small, and there is a serious phenomenon of repeated coverage among the coverage areas of each base station. Therefore, users face a variety of choices in the repeated coverage area. In order to improve the user experience, it is necessary to build a reasonable pairing between users and the base station. The relevant variables of the algorithm are as follows: USR (u, s ki ) -indicates whether user u has sent a request to base station s ki % within the coverage, 1 indicates that the request has been made, 0 indicates that the request has not been made, UFree (u) -indicates that the user access status, 1 indicates that the request has been connected or the user has not sent a request to the base station, 0 indicates that the user request has not passed, PT (u) -SINR connection reference table, and N max -indicates the maximum number of users connected to the base station. 2) ESTABLISH THE SLEEP ALGORITHM MODEL OF BASE STATION In order to establish the base station sleep algorithm model, the propagation loss between the base station and the user is set as pl, which can be expressed as follows: pl m,n = ε log 10 d m,n + c d (4) In the above formula, d m,n -represents the distance between base station BS m and user UE n ; c d -represents the distance independent coefficient factor; ε -represents the attenuation coefficient of signal propagation. 
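Looking back at the energy model of Sect. III-B, the per-bit energy efficiency can be illustrated with a short sketch. The per-station power numbers, the Boolean activity vector and the 50 Mbit/s full-load throughput below are placeholders chosen for illustration, not the values used in the paper.

```python
import numpy as np

def sbs_power(active, load_fraction, p0=6.8, delta=4.0, p_tx=1.0, p_sleep=4.3):
    """Per-small-base-station power (W): static part plus a load-proportional transmit
    term when active, a fixed sleep power otherwise (placeholder numbers)."""
    return np.where(active, p0 + delta * load_fraction * p_tx, p_sleep)

def network_energy_efficiency(throughput_bps, power_w):
    """Per-bit energy efficiency EE_net (bit/J): total throughput over total consumed power."""
    return throughput_bps.sum() / power_w.sum()

# Toy example: 10 small base stations, 4 of them asleep.
active = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0], dtype=bool)
load = np.array([0.6, 0.4, 0.0, 0.8, 0.0, 0.3, 0.5, 0.0, 0.7, 0.0])
throughput = active * load * 50e6          # assumed 50 Mbit/s per station at full load
print(network_energy_efficiency(throughput, sbs_power(active, load)))
```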
In order to establish the connection model between UE and BS, the signal to interference ratio γ m,n of each candidate link is formed as in formula (5), where pl m,n p m,n represents the power received by the user from the base station and Σ M k=1,k≠m pl k,n p k,n represents the interference generated by the other base stations to that user. When γ m,n is greater than the threshold ∧ th , UE n connects to BS m . The total power consumption of the system can be expressed as formula (6), and the transmission rate of BS m to UE n is [23] R m,n = B m,n · log 2 (1 + γ m,n ). As in formula (8), the ratio of total capacity to total energy consumption gives the energy efficiency f 1 = η EE of the system, while the number of active base stations is f 2 = S · E, where E = [1, 1, · · · , 1] N×1 . Based on the above analysis, the following model can be established according to the actual situation [24], where M max m represents the maximum number of users that the m-th base station can connect to and τ represents the probability of a user not being served. C1 indicates the switch state of each base station and the connection state between base station BS m and user UE n ; C2 indicates that a UE can be served by at most one BS; C3 limits the maximum number of UEs that a base station can be connected to; C4 indicates that any UE n can be connected to and served by a BS only when that BS is active; C5 is a rearranged form of γ m,n ≥ ∧ th ; C6 ensures that the outage probability of a UE is less than τ . Through the above, the energy-saving problem of the base station group is transformed into a multi-objective optimization problem under the constraints [C1 − C6]: the state with the minimum number of active base stations is the optimal energy-saving state. Therefore, it is necessary to optimize the active state of the base stations over the candidate states and find the one with the fewest active base stations. IV. CENTRALIZED SLEEP STRATEGY BASED ON GENETIC ALGORITHM A. SLEEP ALGORITHM MODEL In order to achieve the energy-saving effect, it is necessary to find the best combination among the many possible base station sleep combinations while the network is in normal operation. In this study, starting from the nonlinear constraints, a multi-objective solution model is built in combination with the genetic algorithm to explore the optimal sleep combination of the 5G base station group. 1) CHROMOSOME DESIGN OF GENETIC ALGORITHM Suppose there are N base stations in total and the corresponding state of each base station is G i , so that there are N such states. If the positions of these base stations and the corresponding state sequence are taken as a chromosome, encoded with real coding and with the chromosome length set to N, then the information structure can be expressed as in the corresponding formula, where s i represents the position of the i-th base station, G i represents the corresponding state of the i-th base station, Gene represents the chromosome and N represents the chromosome length. 2) FITNESS FUNCTION Each chromosome in the population encodes a base station dormancy strategy, and the quality of the strategy is mainly reflected by the fitness functions. Therefore, it is necessary to analyze the system access model to obtain the different base station dormancy combinations; a compact sketch of this encoding and of the resulting search loop is given below.
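As an informal illustration of how such a chromosome and its evaluation could be wired together, the sketch below encodes each candidate as a 0/1 activity vector, anticipates the SINR-threshold (K v) and over-limit (K t) penalties defined in the next part, and runs a basic selection/crossover/mutation loop. All constants, the toy power model and the random received-power matrix are assumptions for illustration only, not the paper's parameter settings or its Eqs. (12)-(14).

```python
import numpy as np

rng = np.random.default_rng(0)

def penalized_fitness(state, rx_power, lambda_th, n_max, alpha=10.0, beta=10.0, noise=1e-13):
    """Assumed combination of the threshold and over-limit penalties with an
    energy-efficiency-like objective; all coefficients are illustrative."""
    served = rx_power * state[:, None]                  # zero the rows of sleeping stations
    best = served.argmax(axis=0)                        # each user picks its strongest active BS
    signal = served[best, np.arange(rx_power.shape[1])]
    interference = served.sum(axis=0) - signal
    sinr = signal / (interference + noise)
    k_v = np.count_nonzero(sinr < lambda_th)            # users below the SINR threshold
    users_per_bs = np.bincount(best, minlength=state.size) * state
    k_t = np.count_nonzero(users_per_bs > n_max)        # overloaded base stations
    rate = np.log2(1.0 + sinr).sum()                    # proxy for network capacity
    power = state.sum() * 1.0 + 0.1 * state.size        # toy active + static power model
    return rate / power - alpha * k_v - beta * k_t

def ga_search(fitness, n_bs, pop_size=40, generations=80, p_cx=0.7, p_mut=0.02):
    """Minimal GA over on/off chromosomes (1 = active, 0 = asleep), following steps 1-7."""
    pop = rng.integers(0, 2, size=(pop_size, n_bs))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        picks = rng.integers(0, pop_size, size=(pop_size, 2))       # binary tournament selection
        winners = np.where(scores[picks[:, 0]] >= scores[picks[:, 1]], picks[:, 0], picks[:, 1])
        pop = pop[winners].copy()
        for i in range(0, pop_size - 1, 2):                         # one-point crossover
            if rng.random() < p_cx:
                cut = rng.integers(1, n_bs)
                pop[i, cut:], pop[i + 1, cut:] = pop[i + 1, cut:].copy(), pop[i, cut:].copy()
        flip = rng.random(pop.shape) < p_mut                        # bit-flip mutation
        pop[flip] ^= 1
    return pop[np.argmax([fitness(ind) for ind in pop])]

# Toy usage: 25 base stations, 200 users with random linear received powers.
rx = rng.random((25, 200)) * 1e-9
best_state = ga_search(lambda s: penalized_fitness(s, rx, lambda_th=0.5, n_max=20), 25)
print(best_state.sum(), "active base stations in the best chromosome found")
```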
In this process, the role of two genetic algorithms is to find the largest dormancy combination sequence of base station energy efficiency f l , which can be expressed as In the process of solving sequence combination optimization, sleep analysis is mainly carried out from the perspective of user and base station connection. Before the analysis, it is necessary to analyze the fitness function and over limit fitness function from the perspective of threshold adaptation. Threshold fitness function. This function mainly judges the base station dormancy condition by setting the threshold value, that is, to ensure that the signal interference between users does not affect the user experience, it is necessary to avoid the threshold sequence appearing in the process of population evolution, so the threshold fitness function can be expressed as: In the above formula, K i -indicates whether the SINR of each user and the base station is greater than the threshold value; ∧ th -indicates the threshold value of the connection between the user and the base station; γ m,n -indicates the signal to interference noise ratio of the connection between the user and the base station; n -indicates the number of users; K v -indicates the threshold fitness value. Over limit fitness function. Overrun means that the number of base station users cannot exceed the load limit of the base station, so as to ensure the stable operation of the base station. The calculation formula is as follows: In the above formula, K j -indicates whether the number of users connected by the base station exceeds the upper limit; b num -indicates the number of users connected by the base station; M max n -indicates the upper limit of the number of users connected by the base station; K t -indicates the over limit fitness value. According to the above analysis, the constraints can be taken as the constraints of building the model, so that the sleep policy model can be expressed as: In the above formula, the penalty coefficient of objective function α -is about threshold constraint, and that of objective function β -is about overrun constraint. Through the above analysis, the algorithm model of this study is constructed, and then the algorithm flow is analyzed. B. ALGORITHM FLOW Combined with the above algorithm analysis, genetic algorithm is used to optimize the base station sleep strategy, the algorithm flow (Fig.4), and the detailed process is described as follows: Step 1: initialize the genetic algorithm. Initialization parameters include population size m, maximum genetic algebra, generation gap gg4p, individual precision value, individual crossover probability, mutation probability, etc; Step 2: the constraints are established by combining UE and BS, and the corresponding access model is built on this basis; Step 3: the chromosome information in the algorithm is encoded by real coding, in which the gene chromosome is the position and state of the base station; Step 4: calculate the fitness of the function, get the fitness function through the objective function, carry in the numerical value for iterative calculation, and finally get the fitness corresponding to the threshold; Step 5: judge the iteration process by terminating the iteration function. If the conditions are met, output the best result and enter step 7. If the conditions are not met, enter step 6. Step 6: operator operation is carried out for genetic algorithm. The number of iterations is accumulated by 1 each time. 
The operator types are gene selection, crossover recombination and gene mutation; a new population is obtained through these operators, and the procedure then returns to step 4. Step 7: calculate the best sleep combination of the base stations and output the result. C. CLUSTERING TIME SEGMENT DIVISION The day is divided into time periods. There are s handover-triggering nodes in a day, and the base station load differs at the different sampling points (SPs), so the number of clusters corresponding to them also differs. If the network structure were re-clustered at every such time node, the frequent clustering would affect the stability of the system. Therefore, according to the SP load and its development trend, this study designs a threshold search scheme for the target optimization and divides the whole day under this threshold condition. Each time period is expressed as TD i , and the clustering algorithm is run on the SP network at the beginning of each time period. The clustering is mainly based on the following conditions: (1) For each SP in TD i , the difference between its load and the load of every other SP in TD i must be less than the load threshold G, G = ϕLoad max , where Load max is the load value when the network is fully loaded and ϕ is a decimal in [0, 1] indicating the proportion of G to the full load. (2) All SPs in a TD i must be consecutive; for example, SP 1 and SP 3 alone cannot be placed in the same TD i , while SP 1 , SP 2 and SP 3 can. The day is thus divided into regions; it is assumed that the daily load develops as shown in Fig. 5. Between 1:00 and 5:00 the load rises slowly, so the differences among SP 1 to SP 5 are small, the threshold is not exceeded, and these points can be classified as one TD, denoted TD1. Similarly, TD2 in Figure 5 can be obtained; the load rises quickly within TD2, so it is not suitable to merge this load into TD1. In the actual decision making, different choices of G lead to different results: more clusters lead to a further increase in energy consumption and to fewer adjacent base stations that can take over traffic, which prevents base stations from sleeping, so it is necessary to select an appropriate threshold G to balance the energy effect. If base stations in the same cluster are defined as adjacent base stations, a model can be established to balance energy consumption and complexity, C = αA + βB (16), subject to the constraints [C1 − C6] of formula (8); the optimization target is the balance between the energy consumption and complexity of the base stations. Here, α and β are numbers in the range [0, 1] representing the weights of energy consumption and complexity, respectively, with α + β = 1; in this study both are set to 0.5. A is the normalized energy consumption of the whole network, A = P cost / max P cost , where P cost represents the energy consumption under the current threshold G and max P cost the maximum energy consumption over all thresholds. Similarly, B is the normalized complexity, B = Complexity / max Complexity, where the complexity depends on a, the number of clusters of small base stations, N i , the population size selected in the genetic algorithm, M i , the number of base stations in the i-th cluster, and k i , the number of iterations completed by the i-th cluster; Complexity is the complexity generated by the current threshold G and max Complexity is the maximum complexity. A toy sketch of the segment-splitting rule described above is given below.
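As a small, self-contained illustration (not the paper's implementation), the following sketch groups consecutive sampling points into time segments using the spread-below-G rule described above; the hourly load profile and the value of ϕ are made-up examples.

```python
def split_day_by_load(load, phi=0.15):
    """Group consecutive sampling points (SPs) into time segments TD_i: a new segment starts
    whenever the load spread inside the current segment would reach G = phi * max(load)."""
    G = phi * max(load)
    segments, current = [], [0]
    for t in range(1, len(load)):
        window = [load[i] for i in current] + [load[t]]
        if max(window) - min(window) < G:
            current.append(t)
        else:
            segments.append(current)
            current = [t]
    segments.append(current)
    return segments

# Toy hourly load profile (arbitrary units), low at night and high during the day.
hourly_load = [5, 4, 4, 3, 3, 4, 8, 20, 35, 50, 60, 65, 70, 68, 66, 64, 62, 60, 58, 55, 45, 30, 15, 8]
print(split_day_by_load(hourly_load, phi=0.15))
```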
The threshold G that minimizes C in formula (16) is the optimal threshold value. In subsequent calculations the optimal threshold may differ somewhat from case to case, but it can always be obtained by the above method. D. BASE STATION CLUSTERING ALGORITHM In this study, a dynamic interest-tree clustering model is designed, controlled by the cooperation-factor parameters. From the perspective of the network structure, the clustering problem is directly transformed into the problem of interest-tree generation. The cooperation factor is defined as the degree of willingness of two base stations to cooperate with each other, and can be described as follows. If there are two base stations A and B and their cooperation factor is expressed as SF(A, B), then the signal-to-interference-ratio (SINR) benefit brought to A by the cooperation of base stations A and B can be expressed as: In the above formula, SINR_A^coop(A,B) represents the SINR at A when base stations A and B cooperate, and SINR_A^no-coop represents the SINR at A when they do not cooperate. It can be seen from formula (19) that, for base station A, the larger the value of SINR_A^coop(A,B), the larger the value of SF(A, B), the greater the SINR gain brought to A by the cooperation between A and B, and the greater the probability of cooperation between them. Similarly, formula (20) represents the received SINR gain from the cooperation of the two base stations. In that formula, SINR_B^coop(B,A) is the signal-to-interference ratio received by base station B when A and B cooperate, and SINR_B^no-coop is the signal-to-interference ratio received by base station B when they do not cooperate. The above analysis shows that cooperation between different base stations has a certain directionality, SF(A, B) ≠ SF(B, A), so the base-station cooperation problem becomes a two-way selection problem. There are N 5G base stations in the system, and the clustering result is expressed as Clu = {clu_1, clu_2, · · ·}, where |Clu| represents the number of clusters; b_i^k is the k-th base station in cluster i, and the user set of clu_i is expressed as u_i = {u_i^1, u_i^2, · · ·}. Base-station cooperation is used to eliminate the possible interference (Fig. 6). Assuming that the sum rate of the system is R_t, it can be expressed as shown in formula (21). If clu_i does not use the intra-cluster anti-interference technology, the received SINR of base station i can be expressed as: In the above formula, l_ki is the channel parameter vector between user k and base station i; |l_ki|^2 = pl_{k,j} is the propagation loss; σ_n^2 is the noise variance; and P_u is the transmission power of the user. If user i and user j cooperate through the collaborative technology, the mutual interference between them is eliminated, and the SINR can then be expressed as: This comparison shows that the base-station cooperative control technique can effectively improve the received SINR between the base stations. Therefore, after a cluster is established, the cooperative gain can be shared to improve system stability. It is not difficult to see from formula (24) that the gain can be shared through cooperation between base stations, effectively improving the system operation efficiency and data-transmission efficiency.
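A minimal sketch of the cooperation factor and the two-way selection it induces is given below. It assumes SF(A, B) is simply the ratio of cooperative to non-cooperative SINR at A, consistent with the description of formulas (19)–(20), and pairs two stations only when each proposes the other as its best partner; the matrices used are hypothetical inputs that would normally come from channel measurements.

```python
import numpy as np

def cooperation_factors(sinr_coop, sinr_solo):
    """SF[a, b] = SINR at a when a cooperates with b, divided by the SINR at a alone."""
    n = sinr_solo.size
    sf = np.ones((n, n))
    for a_idx in range(n):
        for b_idx in range(n):
            if a_idx != b_idx:
                sf[a_idx, b_idx] = sinr_coop[a_idx, b_idx] / sinr_solo[a_idx]
    return sf                      # generally SF[a, b] != SF[b, a], as noted in the text

def mutual_pairs(sf):
    """Two-way selection: pair (a, b) only if each is the other's best cooperation partner."""
    pref = sf.copy()
    np.fill_diagonal(pref, -np.inf)
    best = pref.argmax(axis=1)
    return [(a, int(best[a])) for a in range(sf.shape[0])
            if best[int(best[a])] == a and a < best[a]]

sinr_solo = np.array([2.0, 1.5, 3.0, 2.5])
sinr_coop = sinr_solo[:, None] * np.array([[1.0, 1.8, 1.1, 1.2],
                                           [1.9, 1.0, 1.1, 1.3],
                                           [1.1, 1.2, 1.0, 1.6],
                                           [1.2, 1.3, 1.7, 1.0]])
print(mutual_pairs(cooperation_factors(sinr_coop, sinr_solo)))   # -> [(0, 1), (2, 3)]
```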
Therefore, the clustering algorithm proposed in this study has a certain effect. The process of converting the cooperative base stations into a weighted connected graph proceeds from (a) to (c) in Fig. 7. B. SIMULATION RESULTS In order to prove the effectiveness of the proposed method, this paper studies the number of active base stations, the network energy efficiency, the algorithm complexity and the outage probability. Firstly, the dynamic sleep mechanism of the base stations is studied, that is, base stations with small load are dynamically controlled to switch to the sleep state. Combined with the genetic algorithm, the base-station dormancy problem can be transformed into a multi-objective optimization problem. In the experiment, parameters such as the QoS and outage probability of the system users are considered comprehensively to solve for the optimal dormancy scheme of the system. The number of active base stations and the energy-efficiency trend calculated by the genetic algorithm are shown in Fig. 8. In the initial state of system operation, all 25 base stations are active; as the algorithm iterates, the number of active base stations decreases, and from about generation 34 onwards it remains essentially constant, giving the optimal solution. The final number of active base stations is 13, so 12 of the 25 stations (48%) enter the sleep state. The genetic-algorithm-based base-station sleep algorithm puts the low-load base stations of the system into the sleep state, and the users within the coverage of a sleeping base station offload their traffic to the adjacent base stations. The resulting UE connection state, that is, the base-station activity, is shown in Fig. 9. Before the base-station sleep scheme is activated, all base stations are active. Fig. 10 shows that after the implementation of the centralized sleep scheme based on the genetic algorithm, a large share of the base stations have switched to the sleep state and the UE connections have been re-assigned accordingly. Increasing the number of sleeping base stations effectively reduces the system energy consumption. The simulation results of this process are shown in Fig. 11. As the number of dormant base stations increases during the iterations, the energy consumption of the system gradually decreases and the energy efficiency is enhanced. The figure shows that the system energy efficiency is initially about 0.02 and the optimal solution is found after 34 iterations, at which point the system energy efficiency has increased to about 0.027. Therefore, the genetic-algorithm-based base-station energy-saving technique has a clear effect and can effectively improve the energy efficiency of the system. Using the same method to study the clustering algorithm, we find that it is close to the genetic algorithm in energy-efficiency improvement, so the two have a similar effect in the search for the sleep strategy. The complexities of the genetic algorithm and the clustering algorithm are then compared and analyzed, and the results are shown in Fig. 12, which gives the complexity comparison between the two algorithms at each time instant.
From the above analysis, it can be seen that the clustering algorithm has a lower complexity than the genetic algorithm; especially when the number of base stations is large, the clustering algorithm has a more obvious advantage in computational complexity, and it is therefore well suited to the sleep decision for the base stations of a 5G ultra-dense base-station group. In subsequent 5G base-station energy-saving technology, the advantages of the clustering algorithm can thus be fully exploited. VI. CONCLUSION A 5G communication network has a large number of base stations, a high base-station density, and large fluctuations of users in the time and space domains, so there is a fairly obvious waste of resources during low-load periods. It is necessary to use base-station sleep technology to make the low-load base stations enter the sleep state, so as to reduce the system energy consumption and improve the system energy efficiency. This paper analyzes the principle and performance of distributed and centralized algorithms in the base-station sleep scheme and proposes a centralized dynamic cluster sleep strategy based on a genetic algorithm. The sleep strategy of 5G ultra-dense network base stations is studied, and a 5G base-station energy-saving technique based on Internet of Things collaborative control is built. Through the MATLAB simulation platform, the algorithm is simulated and analyzed: the relevant simulation parameters are set, the simulation data are collected and the statistical charts are drawn. The simulation results show that the proposed 5G base-station sleep scheme has a clear energy-reduction effect and can effectively improve the system energy efficiency. At the same time, the performance of the clustering algorithm and the genetic algorithm proposed in this study is analyzed. Both can effectively reduce the energy consumption of 5G base stations, and the clustering algorithm has the advantage in complexity and is more practical. JENG-SHYANG PAN received the Ph.D. degree in electrical engineering from the University of Edinburgh, U.K., in 1996. He is currently a Professor with the Shandong University of Science and Technology, the Harbin Institute of Technology Shenzhen and the Fujian University of Technology. His current research interests include soft computing, information security, big data mining, and signal processing.
9,259.6
2020-01-01T00:00:00.000
[ "Engineering", "Computer Science", "Environmental Science" ]
High–temperature droplet epitaxy of symmetric GaAs/AlGaAs quantum dots We introduce a high–temperature droplet epitaxy procedure, based on the control of the arsenization dynamics of nanoscale droplets of liquid Ga on GaAs(111)A surfaces. The use of high temperatures for the self-assembly of droplet epitaxy quantum dots solves major issues related to material defects, introduced during the droplet epitaxy fabrication process, which limited its use for single and entangled photon sources for quantum photonics applications. We identify the region in the parameter space which allows quantum dots to self–assemble with the desired emission wavelength and highly symmetric shape while maintaining a high optical quality. The role of the growth parameters during the droplet arsenization is discussed and modeled. The fabrication of high purity single and entangled photon sources is crucial for the development of quantum communication protocols 1,2 and quantum computation 3,4 , and it is a fundamental requirement for the realization of repeaters capable of transferring quantum entanglement over long distances 5,6 . Among the different light emitting platforms, semiconductor quantum dots (QDs) are very attractive, as they can be integrated with other photonic and electronic components in miniaturized chips. Single photon and entangled photon emitters have been fabricated by QDs using self-assembly techniques like Stranski-Krastanov and Droplet Epitaxy (DE) 7 . In particular, DE enables a fine tuning of the shape, size, density, and thus, of the emission wavelength of the nanostructures [8][9][10][11] with an emission range extending from 700 nm to 1.5 μm 12 . The high symmetry (111) surface, due to its C 3v symmetry, is optimal to reduce the fine structure splitting (FSS) [13][14][15] , but problematic for the growth via Stranski-Krastanov growth mode, since on (111) the relaxation of a strained III-V semiconductor epilayer immediately proceeds through the nucleation of misfit dislocation at the interface rather than through the formation of coherent 3D islands 16 . DE is able to self-assemble highly symmetric QDs on (111)A substrate, capable of polarization-entangled photon emission with very high fidelity 17 . Moreover, the choice of GaAs QDs allows a fast radiative recombination and a weaker impact of spin dephasing mechanisms [18][19][20][21] . DE QDs have been demonstrated 22 to improve the yield of entanglement-ready photon sources up to 95% while matching the emission wavelength with an atomic-based optical slow medium such as Rb atoms (as proposed in 23 ) for the fabrication of quantum memories and quantum repeaters. This result was achieved through the combination of low values of the excitonic FSS and radiative lifetime, together with the reduced exciton dephasing allowed by the choice of GaAs/AlGaAs QDs fabricated on (111)A substrates. The major DE drawback, related to the low temperature kept for the nanostructure crystallization and barrier layer deposition necessary for this growth technique, was overcome using an innovative high-temperature DE technique which is allowed by the use of (111) substrates. Here we present and investigate in the details a novel high-temperature DE growth technique, which is based on the control of the growth parameters (substrate temperature and As flux) on the Ga adatom diffusion during the GaAs QDs formation by droplet epitaxy on (111)A. 
In particular, we address the effect of Ga droplet arsenization for the formation of QDs at substrate temperature increased by about 300 °C with respect to previous reports 24 while avoiding QD elongation 25 due to anisotropy in Ga adatom diffusion [26][27][28] . We also address shape control issues, preserving hexagonal shape even at high temperatures, which has a strong impact on the optical quality and excitonic FSS. The highly symmetric dots obtained with our modified recipe show a mean line width experimental Details The growth experiments were performed in a conventional Gen II MBE system, on epiready GaAs (111)A substrates. The optimal control of the As flux during the growth was assured by a valved cell. The cracking zone temperature of the As cell was set in every experiment at 600 °C in order to provide As 4 molecules. After the oxide desorption at 580 °C, an atomically smooth surface was prepared by growing a 100 nm thick GaAs buffer layer and a 50 nm Al 0.3 Ga 0.7 As barrier layer after reducing the temperature to 520 °C. To achieve a smooth surface with minimal surface roughness (RMS below 0.5 nm), growth conditions were kept according to 29 . The RHEED pattern clearly showed a (2 × 2) surface reconstruction 30 . The substrate temperature was then decreased to 450 °C and the As valve closed in order to deplete the growth chamber from the arsenic molecules. When the background pressure reached a value below 1 × 10 −9 Torr, a Ga flux with a rate of 0.01 ML/s was supplied to the substrate surface to form Ga droplets. During the Ga supply the surface reconstruction did not show any change. One sample with Ga droplets, from now on D, was then removed from the growth chamber. For the other samples, in order to study the influence of substrate temperature and As flux during the crystallization on the QD formation, the substrate temperature was decreased to low temperature, 200 °C (sample L1) or medium temperature, 400 °C (sample M1) or increased to high temperature, 500 °C (all the samples of H series) and then the Ga droplets irradiated with As flux for 5 minutes. The irradiated As beam equivalent pressure (BEP) is reported in Table 1 with all the parameters used for the QD formation. A second set of samples was then prepared using the same recipe of the first set, but capping the nanostructures with a 10 nm of AlGaAs barrier layer grown at 500 °C, another 40 nm at 520 °C and a GaAs capping layer of 5 nm. This second set will be recognized adding a C at the end of the sample name. The morphological characterization of the uncapped samples was performed ex-situ by an Atomic Force Microscope (AFM) in tapping mode, using ultra-sharp tips capable of a resolution of about 2 nm. Ensemble photoluminescence measurements (PL) were carried out cooling the samples at 15 K and using the 532 nm line of a Nd:YAG continuous wave laser. The incident power on the samples was 0.5 mW with a laser spot size of approximately 80 μm. Results The AFM characterization of all the uncapped samples was performed by collecting different images in different areas of the sample, in order to study the density, the shape, the volume and the size distribution of the QD ensemble. Left panel of Fig. 1 shows a 1 × 1 μm 2 AFM scan of sample H4. The image demonstrates the formation of GaAs QDs. The density of QDs calculated over different images on this sample is approximately 8 × 10 8 cm −2 The height and equivalent radius distributions of the QDs are reported in the two right panels of Fig. 1. 
We found a mean height of 3.9 ± 0.5 nm and a mean equivalent radius of 35.3 ± 5.3 nm. The AFM characterization of sample D, on which only Ga deposition was performed, shows the formation of Ga droplets with spherical cup shape (see panel a of Fig. 2). The density of the droplets on sample D is approximately 7 × 10 8 cm −2 , the diameter is 50.4 ± 7.0 nm and the height 7.4 ± 1.1 nm, the contact angle is approximately 33.7°. The shape of the droplet is perfectly symmetric, without elongation in any crystallographic direction. Simple calculations considering the volume of the droplets, the density, and the amount of the deposited Ga, demonstrate with good agreement that all the gallium is collected inside the droplets. This is in agreement with the fact that (111)A surface is Ga terminated and the Ga excess, during gallium deposition, immediately creates droplets on the surface. The AFM characterization of samples L1, M1 and H1 (Fig. 2, [211] directions. These shapes are in agreement with the ones reported for larger islands by Jo et al. 15 this case, three sides are longer than the other three (40.1 ± 4.2 and 23.8 ± 3.0 nm, respectively), the mean height 3.0 ± 0.5 nm and the angle between the substrate and the sidewalls is approximately 18°. Panel e of Fig. 2 shows AFM scan of single QD grown on sample H4, where the droplets were arsenized at 500 °C with a BEP of 5 × 10 −5 torr. Here the QDs show a truncated pyramidal shape with regular hexagonal base. The mean height is 3.9 ± 0.5 nm, the hexagon side of 42.7 ± 3.2 nm, and the angle between the substrate and the sidewalls is approximately 14°. Likewise GaAs/AlGaAs droplet epitaxy QDs grown on (001), we expect the dots laterally limited by stepped facets 31 . In order to study the role of Ga adatom diffusion and incorporation during droplet arsenization, we analyzed the AFM images and measured the total volume of GaAs crystallized inside the QDs after the arsenization, as reported in Table 1. An important parameter of the DE-QDs which helps to elucidate the actual processes during the droplet crystallization is the ratio γ = V 1 /V between the volume of the final QDs (V 1 ) and the GaAs volume available (V) by the complete crystallization of the Ga contained in the droplets. It quantitatively sets the difference between a two-dimensional growth, where the droplet have the role of local group III reservoirs 32 and γ = 0, and the three-dimensional crystallization of the QD inside the original droplet. The experimental dependence of γ on As flux (J As ) and crystallization temperature T is reported in Fig. 5. The measured GaAs crystallized inside the QDs is always lower from the expected volume considering the initial Ga volume stored in the droplets, except for sample L1. The total volume of GaAs crystallized inside the QDs decreases with increasing arsenization temperature and with decreasing J As . Another important fabrication step, which affects the optical properties of the QDs, is the capping procedure. We used ensemble PL to investigate samples L1C, H1C and H4C as reported in Fig. 6 in black, red and blue, respectively. The peak around either 640 or 650 nm is related to AlGaAs barrier, whereas the emission bands at 650-690, 660-705 and 700-765 nm, for samples L1C, H1C and H4C respectively, are related to QD emission. 
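Before moving to the discussion, a rough sketch of how the ratio γ = V1/V introduced above can be estimated from the AFM quantities quoted in this section (droplet diameter, height and density; dot side, height, sidewall angle and density) is given below. It assumes spherical-cap droplets, truncated hexagonal-pyramid dots and a fixed liquid-Ga-to-GaAs molar-volume expansion factor; that factor, and the use of mean shapes instead of integrating the full AFM images, are assumptions, so the output is illustrative only.

```python
import numpy as np

GA_TO_GAAS = 2.4   # assumed molar-volume expansion from liquid Ga to GaAs; verify before use

def cap_volume(diameter_nm, height_nm):
    """Volume (nm^3) of a spherical-cap droplet from its base diameter and height."""
    r = diameter_nm / 2.0
    return np.pi * height_nm * (3.0 * r**2 + height_nm**2) / 6.0

def hex_frustum_volume(side_nm, height_nm, sidewall_deg):
    """Volume (nm^3) of a truncated pyramid with a regular hexagonal base (H4-like dots)."""
    apothem = side_nm * np.sqrt(3.0) / 2.0
    top_apothem = max(apothem - height_nm / np.tan(np.radians(sidewall_deg)), 0.0)
    top_side = top_apothem * 2.0 / np.sqrt(3.0)
    area = lambda s: 3.0 * np.sqrt(3.0) / 2.0 * s**2
    a_b, a_t = area(side_nm), area(top_side)
    return height_nm * (a_b + a_t + np.sqrt(a_b * a_t)) / 3.0

def gamma(droplet_d, droplet_h, droplet_density, dot_side, dot_h, dot_density, sidewall_deg):
    """gamma = V1/V: GaAs found in the dots over GaAs available from the droplets (per unit area)."""
    v_available = cap_volume(droplet_d, droplet_h) * droplet_density * GA_TO_GAAS
    v_in_dots = hex_frustum_volume(dot_side, dot_h, sidewall_deg) * dot_density
    return v_in_dots / v_available

# mean values quoted for sample D droplets and sample H4 dots; illustrative, not the paper's gamma
print(gamma(50.4, 7.4, 7e8, 42.7, 3.9, 8e8, 14.0))
```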
Discussion The morphology observed for the QDs grown on the sample on which Ga droplets were arsenized with low As flux, L1, M1 and H1, is in agreement with the description reported in 15 , attribuing the morphological evolution to the higher incorporation rate of Ga at A steps, respect to that at the B steps. The observed shape dependence on growth parameters is related to the kinetically controlled growth mode of the droplet epitaxy QDs, where diffusion and incorporation processes play a fundamental role. In particular, on (111)A substrates the QD shape is determined by the interplay between the Ga diffusion along the steps and the different incorporation probability of Ga at A and B step edges 33,34 . Ga atoms are, in fact, more easily incorporated along A steps, where each edge As atom displays two dangling bonds, than on along step B, where each edge As atom has only one dangling bond (see upper panel of Fig. 4), thus reducing the reactivity of step B respect to A. Let us analyze more in the details the change of the QD morphology as a function of the growth parameters. The shape of the QDs on samples L1 and H1 are graphically summarized in Fig. 3 and compared with the size of the original droplet. Comparing in panel b) the mean dimensions of Ga droplets on sample D (orange), and of QDs on samples L1 (light green) and H1 (light blue), it is possible to see that the formation of QD crystallized with 2 × 10 −6 torr As BEP, while increasing the substrate temperature from 200 to 500 °C, is dominated by incorporation exclusively along A steps, while the incorporation along B steps is suppressed. It can be observed clearly from panel b of Fig. 3 that the sides of the hexagonal dot on sample L1 (in light green) is mostly tangent to the base circle of the original droplet (orange), and that the sides of triangular dots on sample H1 (light blue) are mostly tangent to the base circle of the original droplet only along B steps, thus confirming the absence of incorporation in those directions. Also on sample M1 (not shown in the picture) the longer sides are mostly tangent along B steps to the base circle of the original droplet. These observations confirm that the shape of the QDs is determined by kinetically controlled diffusion and incorporation processes in which the different growth velocity between A and B steps determines anisotropy at high temperature. By increasing the As BEP up to 5 × 10 −5 torr at high substrate temperature, it is possible to obtain again QDs with hexagonal symmetrical shape. A graphical representation is reported in panel c of Fig. 3. Here we consider the shape and the mean size measured for the original gallium droplet (orange) and we compared it with the shape and the mean size measured for the QDs on sample H4 (dark green). From panel c can also be observed that on sample H4 all the sides of the hexagonal dots are away but at the same distance from the base circle of the original droplet. Let us discuss the presented phenomenology. We analyze the Ga diffusion length on B steps  B . When >  L B B , being L B the B step length, the Ga atom arriving at B-steps can easily diffuse along the step and then reach the high incorporation rate A steps. The consequent higher growth rate of the A steps, due to both higher incorporation probability and higher Ga atom flux respect to B-Steps, causes the disappearance of the A steps. This leads to a triangular QD shape limited by facets with B steps. 
On the other side, when <  L B B , there is no transfer of Ga atoms between B and A steps and Ga is incorporated at the arrival step. This condition leads to QDs with an hexagonal shape. The  B is controlled by the Ga diffusivity along the B steps D E kT exp( ) , where E D is the diffusivity activation energy, and by Ga average time τ ∝ J 1/ As between the arrival and the reaction at the step: The diffusion length is therefore minimized by low temperatures and/or high As fluxes growth conditions. The activation energy along B steps = E E /2 A D has been recently reported in Ref. 15 A . Comparing samples L1 and H4, the two which exhibit QDs with hexagonal shape, we observe that the the increase in the arsenization temperature from 200 °C (L1) to 500 °C (H4) determines an increase of a factor 5 of the diffusivity term in  B while the concurrent increase in J As from 2 × 10 −6 Torr (L1) to 7 × 10 −5 Torr decreases the Ga lifetime around factor 6. Therefore, the two effects cancel each-other in the determination of www.nature.com/scientificreports www.nature.com/scientificreports/ diffusion length, leaving almost unchanged  B . We then interpret the observed QD hexagonal shape in L1 and H4 samples as the results of <  L B B due to the low temperature (L1) and high As flux (H4) growth conditions. As a result, we can state that Ga diffusion length on B steps is the physical parameter that controls the kinetics of the growth, and thus, in turn, the QD shape. The described behavior allows for the growth of GaAs QDs by DE at substrate temperature much higher than the one typically used on (001) substrates and, compared to the data previously reported on (111)A substrates, to preserve the hexagonal shape also for arsenization performed up to 500 °C. This is expected to allow for the growth of materials with improved crystalline quality respect to the usual DE QDs crystallized at 200 °C. In fact, a low temperature of crystallization for the Ga droplets 35,36 and subsequent AlGaAs barrier deposition 37 is detrimental for the crystalline and the optical quality of the QDs, mainly due to the formation of vacancies and the incorporation of defects. We then consider the loss of material observed during the crystallization process, as reported in the last column of Table 1. To explain this behavior, we have to consider that the crystallization process involves the control of the growth kinetic, which allows to tune the fabricated nanostructures from three to two-dimensional. On GaAs (001) substrates, nanostructure shape can vary from compact islands to rings and eventually to flat disks extending to the droplet surroundings 38 . These different shapes can be achieved by tuning the speed of the arsenization processes that takes place within the metallic droplet (process 1 in Fig. 4) lower panel, and the Ga atoms diffusion and incorporation outside the droplet (process 2 in Fig. 4 lower panel). On GaAs (111)A substrates, the geometry of the fabricated nanostructure is different due to different symmetry, and we observe compact islands for all the measured parameter range but we observe that the ratio of Ga crystallized inside the rim of the original droplet and the one outside, is changing with the growth parameters T and J As . 
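The scaling relations quoted in the preceding paragraph are garbled in the extracted text; a hedged reconstruction, keeping only the dependences stated explicitly (a diffusivity activated with energy E_D and a mean waiting time inversely proportional to the As flux) and omitting all prefactors, is:

```latex
\[
\ell_B \simeq \sqrt{D\,\tau}, \qquad
D \propto \exp\!\left(-\frac{E_D}{k_B T}\right), \qquad
\tau \propto \frac{1}{J_{\mathrm{As}}}
\quad\Longrightarrow\quad
\ell_B \propto \sqrt{\frac{\exp\!\left(-E_D/k_B T\right)}{J_{\mathrm{As}}}} .
\]
```

Under this reading, the factor-of-5 increase of the diffusivity term from L1 to H4 quoted above is offset by the factor-of-6 reduction of the Ga lifetime, leaving the B-step diffusion length essentially unchanged and consistent with both samples showing hexagonal dots.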
The parameter which quantifies the balance between process 1 (the crystallization inside the original rim of the droplet) and process 2 (diffusion/incorporation) is the ratio γ, as it is expected to range from one in the pure three-dimensional growth to zero in the diffusion/incorporation two-dimensional growth. As previously reported for (001) surface, γ is a function of T and J As 39 . In Fig. 5 we reported the data related to parameter γ for the temperature series (L1, M1 and H1, from the top down in right panel) and for the As pressure series (from H0 to H5, from left to right, left panel). Considering the temperature series, it is possible to see that on sample L1 almost all the gallium is crystallized inside the hexagonal QDs, while on samples M1 and H1 a loss of about 46% and 91% of the original gallium deposited is measured. Considering the samples of As pressure series, arsenized at 500 °C with different As fluxes from 8 × 10 −7 to 7 × 10 −5 Torr, it is also consistently observed a loss in volume. The value of γ is increasing with the equivalent pressure of As irradiated during the droplet crystallization. For this series the loss in volume of GaAs crystallized inside the QDs is between 50% and 93%. It is interesting to notice that, on (111)A surface, the possibility of a loss of material outside the final nanostructure was already observed for InAs QDs grown by DE in 40 . www.nature.com/scientificreports www.nature.com/scientificreports/ To understand the observed behavior we have to consider how the droplet arsenization and the diffusion processes depend on the growth parameters. Let us first assume the droplet crystallization mechanism depends on the liquid-solid interface area, the arsenic solubility and diffusivity into the droplet and J As . Considering, as first order approximation, a slow dependence of the interface area on the growth time (process 1 in Fig. 4, the volume crystallized inside the droplet depends linearly on growth time t via the equation where ρ D is the constant that takes into account all the other factors (the arsenic solubility and diffusivity into the droplet) with the exception of the As flux J As . The growth rate of Ga adatoms incorporated outside the nanostructure depends in a more complex way on J As and T. The easiest resulting geometry observed on GaAs(001) substrates is a disk, with a radius given by the sum of the diffusion length of Ga atoms ( surf ) and of the radius of the droplet: A , where D 0 is the diffusivity prefactor, E D the surface diffusion activation energy and N d the surface density of As sites. The disk is therefore increasing its radius by increasing the substrate temperature and decreasing the As flux. The disk increases its thickness with a rate which is proportional to J As and to the product between As/Ga reaction probability and As residence time ζ R 41 . Considering that in most of the cases r surf 0 (with the exception of the low T range) and that the diffusion/incorporation process follows the same physics on (001) an (111)A, the growth rate of process 2 can be expressed by Where μ′ and μ are constant collecting geometrical and constant factors. The growth will proceed for a time τ until the Ga in the droplet is fully consumed. This condition is reached when It is possible to observe that in Eq. 3, γ is increased by increasing the As flux J As as γ = J As /(J As + const), and is decreased by increasing T as . Also increasing the term μζ ρ D / R D 0 , the value of γ is decreased. 
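The closed-form dependence of γ on J_As and T is only partially legible in the extracted text, so the snippet below is a hedged reconstruction of the single-fit-parameter model described there: γ = J_As/(J_As + const(T)), where const(T) is assumed to carry the surface-diffusion activation through exp(−E_D/k_BT) and the prefactor is the fitted combination read, with some uncertainty, as μζ_R D_0/ρ_D ≈ 2×10² Torr.

```python
import numpy as np

K_B = 8.617e-5        # Boltzmann constant, eV/K
E_D = 1.06            # eV, GaAs(111)A surface diffusion activation energy (ref. 43)
C0 = 2.0e2            # Torr, fitted combination mu*zeta_R*D0/rho_D as read from the text

def gamma_model(j_as_torr, temp_c):
    """Fraction of the droplet Ga crystallized inside the dot: J_As / (J_As + const(T))."""
    const = C0 * np.exp(-E_D / (K_B * (temp_c + 273.15)))
    return j_as_torr / (j_as_torr + const)

# temperature series at J_As = 2e-6 Torr (samples L1, M1, H1)
for label, t in [("L1", 200), ("M1", 400), ("H1", 500)]:
    print(label, round(gamma_model(2e-6, t), 2))
# prints roughly 1.0, 0.47 and 0.08, i.e. losses of ~0%, ~53% and ~92%,
# close to the measured trend of ~0%, 46% and 91% quoted above
```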
For this reason, a short As residence time ζ R on (111)A respect to (001) substrates, as reported in 42 , is expected to make the contribution of the crystallization process inside the droplet to be preminent even at high T, provided a sufficient increase of the As BEP. The upper limit for this effect is marked by the limited As solubility and diffusivity in the droplet for extremely high As flux. The dependence of γ on J As and T is reported graphically in Fig. 5 and compared with experimental values of the ratio. For the diffusion activation energy on GaAs (111)A surface we used the value as calculated by ref. 43 , E D = 1.06 eV. The value of the ratio μζ ρ = × D / 2 10 R D 0 2 Torr has been fitted to the temperature series L1, M1, and H1. The data are nicely reproduced by our model which depends on a single fit parameter. Understanding the relationship between the growth parameters and the shape of the nanostructures is fundamental for the fabrication of emitters with specific electronic and optical properties but it is then necessary to evaluate the effect of the deposition of the AlGaAs capping layer in terms of shape change and interdiffusion. The optical properties of the QDs capped with 50 nm Al 0.3 Ga 0.7 As and 5 nm of GaAs were studied by means of www.nature.com/scientificreports www.nature.com/scientificreports/ ensemble photoluminescence (PL) and single-band constant-potential model simulations on the second set of sample. As expected from simple considerations of quantum confinement energy, the size of the QDs, and in particular their height, is affecting the emission wavelength. The ensemble PL spectra of samples L1C, H1C and H4C are displayed in black, red and blue, respectively, in Fig. 6. The peak around either 640 or 650 nm is related to AlGaAs barrier, whereas the emission bands at 650-690, 660-705 and 700-765 nm, for samples L1C, H1C and H4C respectively, are related to QD emission. The different position of AlGaAs related peak in sample L1C can be attributed to a slightly different composition in the barrier. The distribution of the emission energy for sample H4C shows sizable modulations which are typical of QDs with low aspect ratio and can be attributed to monolayer fluctuations in height, as reported in 24 . We simulated the expected radiative recombination energies with the single-band constant-potential model 44 , using realistic dot shapes from the AFM images taken from the corresponding uncapped samples and linear dimensions given by average values from the experimental size distribution. The band parameters used in the calculation 45,46 are chosen consistently with previous studies on droplet epitaxy GaAs/AlGaAs QDs. The results for the ground state transition are shown by arrows in Fig. 6 alongside the ensemble PL spectra. It is worth notice that for the samples in which the substrate temperature during the QDs crystallization was set equal or higher than 400 °C, a small blueshift around 30 meV was found between the theoretical estimation and the centroid of the energy distribution. For a substrate temperature of 200 °C during the As crystallization (sample L1C), a blueshift larger than 200 meV was measured. We interpret such discrepancy as the outcome of interdiffusion at the AlGaAs/GaAs interface during the capping layer deposition, when the temperature is substantially increased up to 500 °C and the nanostructure is slowly covered with an AlGaAs layer. 
While the interdiffusion processes during the deposition of the capping layer are always present, they have much more limited impact for QDs crystallized at elevated temperature, and can be estimated in an interdiffusion of few monolayers as shown by cross section scanning tunneling microscpy measurements on DE-QDs 47 . In fact, when the QDs are crystallized at lower temperature, we expect the presence of a larger amount of vacancies and crystal defect, which enhance the interdiffusion process (see e.g. 35,36 ) when the temperature is raised again to 500 °C during the AlGaAs barrier deposition. According to previous estimates of the interdiffusion length in DE QDs 35,36 , we expect and interdiffusion lenght L ≥ 1 nm for the QDs grown at low temperature, thus being a sizeable fraction of the total QD height. As shown in 22 , micro-PL measurement of sample H4C confirms the high quality of the QDs. The measurements performed on the sample show a mean line width of the neutral exciton of about 15 μeV and a best value of 9 μeV, a mean fine structure splitting of 4.5 μeV, which results in the aforementioned large fraction (more than 95%) of emitters capable of generating entangled photons. The higher crystallization temperature of the Ga droplets allows for the formation of nanostructures with a reduced density of vacancies and defects. The possibility to set a higher substrate temperature without introducing anisotropy and then FSS, is made possible by the short residence time of As on (111)A respect to (001) substrates. A highly symmetric hexagonal shape also at high crystallization temperature is obtained controlling the Ga diffusion along B steps by increasing As flux, which permits to properly balance the Ga adatom incorporation along steps A and B. conclusions The presented high temperature droplet epitaxy procedure allows for the self-assembly of QDs with high optical quality and symmetric shape by crystallizing the QDs at substrate temperature higher than previously reported 15 . The improved optical quality is related to the high crystalline quality of the GaAs QDs and the surrounding AlGaAs barrier, as both are crystallized or deposited at a temperature close to the optimal one for the GaAs crystal growth on (111)A surface. We investigated and modeled the dependence of the QD shape and size on the parameters used during the crystallization process. The model proposed shows that high temperature droplet epitaxy on (111)A substrates is governed, as the standard droplet epitaxy on GaAs(001), by the balance between crystallization within the droplet and the process of Ga adatom detachment from the droplet, diffusion and incorporation into the crystal surrounding the droplet. The predominance of the former over the latter allows for the self-assembly of 3D islands. This is realized on GaAs (111)A substrates at high T owing to the low residence time of the As on the (111)A surface which hinders the diffusion/crystallization processes on the crystal surface around the droplet. The high As pressure required for the crystallization also permits the equalization of the growth velocities along the A and B steps by increasing the incorporation of Ga adatoms along the B steps, resulting in a symmetric hexagonal shape. 
PL measurements also demonstrated that high temperature of crystallization permits to preserve the shape of the QDs when capped, thus allowing for the reproducibility of the fabrication procedures which is a fundamental asset for the deterministic design of the emitters for wavelength-specific applications.
6,552
2019-08-07T00:00:00.000
[ "Physics", "Engineering" ]
Disconjugacy conditions and spectrum structure of clamped beam equations with two parameters In this work, we apply the disconjugacy theory and Elias's spectrum theory to study the disconjugacy of the equation u^(4) + βu'' − αu = 0 with two parameters α, β ∈ ℝ and the spectrum structure of the linear operator u^(4) + βu'' − αu coupled with the clamped beam conditions u(0) = u'(0) = u(1) = u'(1) = 0. As an application of our results, we obtain the global structure of nodal solutions of the corresponding nonlinear analogue based on bifurcation theory. Recently, Cabada and Enguiça [3] made an exhaustive study of the fourth-order linear operator u^(4) + Mu coupled with the clamped beam conditions (2). They obtained the exact values of the real parameter M for which this operator satisfies an anti-maximum principle. Such a property is equivalent to the fact that the related Green's function is nonnegative in [0, 1] × [0, 1]. When M < 0 they obtained the best estimate by means of spectral theory, and for M > 0 they attained the optimal value by studying the oscillation properties of the solutions of the homogeneous equation u^(4) + Mu = 0. By using the method of lower and upper solutions they deduced the existence of solutions for nonlinear problems coupled with (2), and proved a corresponding existence theorem. Ma et al. [13] also obtained the exact values of the real parameter M ∈ (−m_1^4, m_0^4) which guarantee the positivity of the linear equation u^(4) + Mu = 0, by using the disconjugacy theory [6] and the spectrum theory of Elias [7,8] and Rynne [18]. In addition, they obtained the spectrum structure of the linear operator u^(4) + Mu coupled with the clamped beam conditions (2). A natural question is whether or not there exists an optimal condition for the positivity and the spectrum structure of the general equation u^(4) + βu'' − αu = 0. Note that [4] gives a description of the interval of parameters for which the general linear nth-order ordinary differential equation is disconjugate on a given bounded interval I. This is an important result of disconjugacy theory; however, the particular choice of M involved limits its widespread application. It is the purpose of this paper to apply the disconjugacy theory [4,6] and the spectrum theory of Elias [7,8] and Rynne [18] to study the choice of α that provides the disconjugacy of the equation u^(4) + βu'' − αu = 0 and the spectrum structure of the linear operator u^(4) + βu'' − αu coupled with the clamped beam conditions (2). As applications of our results, we show the global structure of nodal solutions of the corresponding nonlinear problem (1), (2). For other results on the existence of solutions for fourth-order boundary value problems, see [2,10,12,14,15,20] and the references therein. The rest of the paper is arranged as follows: in Section 2, we state some preliminary results about disconjugacy and the spectrum of some general linear operators. Section 3 is devoted to sufficient conditions, and necessary and sufficient conditions, for the equation u^(4) + βu'' − αu = 0 to be disconjugate on [0, 1]; we also obtain the spectrum structure of the corresponding fourth-order linear operator. In Section 4, we apply our results on linear problems to show the global structure of nodal solutions of the nonlinear problem (1), (2).
The functions y 1 , · · · , y n ∈ C n [a, b] are said to form a Markov system if the n Wronskians where D = d/dt, and , · · · , n. Suppose the equation (7) is disconjugate on [a, b]. Let f be a continuous function on [a, b]. Let k be a positive integer, the (k, n−k) two-point boundary value problem has a unique solution y, since the corresponding homogeneous problem has no nontrivial solution. The solution can be represented in the form where the Green's function G(t, s) is defined by the following properties (1) As a function of t, G(t, s) is a solution of (7) on [a, s) and on (s, b] and satisfies the n boundary conditions (10); (2) As a function of t, G(t, s) and its first n − 2 derivatives are continuous at t = s, while G (n−1) (s + 0, s) − G (n−1) (s − 0, s) = 1. Suppose the equation (7) is disconjugate on [a, b]. Then the Green's function of (k, n − k) two-point boundary value problem satisfies (−1) n−k G(t, s) > 0, a < s < b, a < t < b. YANQIONG LU AND RUYUN MA Note that Cabada and Saavedra[5, Theorem 3.1] also obtain the sign of the Green's function of (k, n − k) two-point boundary value problem. For 2mth order (m ≥ 1), self-adjoint, disconjugate differential operator, Rynne [18] showed a spectrum theorem which will be the foundation of this paper. Suppose that for each i = 1, · · · , 2m, we have a function The functions L i u, i = 0, · · · , 2m will be called the quasi-derivatives of u. We consider the Banach spaces with the norm || · || 2m−1 which, for convenience, we will write as || · ||. Define an operator L : For each integer k ≥ 1 and ν ∈ {+, −}, let S k,ν denote the set of functions u ∈ E such that: (1) u has only simple zeros in (a, b) and no quasi-derivative of u is zero at a or b, other than those specified in (10); (2) u has exactly k − 1 zeros in (a, b); (3) νu > 0 in a deleted neighborhood of t = 0. , and the boundary conditions (10) are such that L is formally self-adjoint, that is where ·, · denotes the standard L 2 (a, b) inner product. Assume that (H0) p ∈ C 0 [a, b], and p ≥ 0 on [a, b], while p ≡ 0 on any interval of [a, b]. Then for each k ≥ 1 and each ν ∈ {+, −}, problem Lu = µp(t)u, t ∈ (a, b), y (i) (a) = 0, i = 0, 1, · · · , k − 1; has a unique solution (µ k , ψ k ) ∈ R + × S k,ν with ||ψ k || = 1. In addition: Let y k (t, a) be the solution of (7) which satisfies at t = a for k = 1, · · · , n − 1 the initial conditions y (n−k) k (a) = 1, y (n−j) k (a) = 0 (j = 1, · · · n; j = k). Denote the Wronskian of y 1 (t, a), · · · , y k (t, a). Remark 1. It may be note that W n (s, a), s ∈ [a, b] never vanishes because the solutions y 1 , · · · , y n are linearly independent. . Let L n and L m be two general linear differential operators of order n and m respectively. If the equations L n u(t) = 0 and L m u(t) = 0 are disconjugate on the interval I, then the composite n + mth order equation L n (L m u(t)) = 0 is also disconjugate on I. (2) λ 2 < 0 is the maximum of the biggest negative eigenvalues on T n [M ] in E k , with n − k odd. 3. Disconjugacy for the equation u (4) + βu − αu = 0. Let us consider the linear boundary value problem where α, β ∈ R are two parameters. First, we consider the case α = 0. In this case, (11) reduces to The problem (12) has nontrivial solution if and only if β solves the equation Moreover, the first positive solution of the equation (13) is β 1 = 4π 2 . Proof. We divide the proof into two cases. Case 1 β ∈ (0, β 1 ). In the virtue of Lemma 2.3, we only need to find a Markov fundamental system of solutions of u (4) + βu = 0. 
Next, we consider the case α > 0. In this case, by computing, the problem (11) has a nontrivial solution if and only if (α, β) solve the equation where Especially, if β = 0, then (11) degenerate the boundary value problem where C i , i = 1, 2, 3, 4 are arbitrary constants. Let u 1 be the unique solution of the initial value problem Then Let u 2 be the unique solution of the initial value problem Then u 2 (t) = cosh λt − cos µt, t ∈ [0, 1]. Let u 3 be the unique solution of the initial value problem Then u 3 (t) = µ 3 sinh λt + λ 3 sin µt, t ∈ [0, 1]. YANQIONG LU AND RUYUN MA Let u 4 be the unique solution of the initial value problem Then Then where σ ∈ (0, 1) is small enough. To show that the functions {z 1 , z 2 , ; z 3 , z 4 } form a Markov fundamental system of solutions of u (4) + βu − αu = 0 on [0, 1], we only need to provide the value range of α, β to ensure For convenience, in the sequel of the proof we replace For any fixed σ ∈ (0, 1) is small enough, by the sign of W 1 (t) and W 3 (t), it is not difficult to verify that W 1 (t), W 3 (t) is positive on [0, 1], so we only discuss the positivity of W 2 (t). Clearly, we can get that the first derivative and second derivative of W 2 as follows: Thus, we only need to show that W 2 (t) has at most one zero point on [0, 1], which is equivalent to solve the equation It is very difficult to solve the above equation since the parameter λ, µ is not given. Hence, we give some sufficient conditions to ensure the disconjugacy of u (4) + βu − αu = 0 on [0, 1]. Proof. To this end, in the virtue of Lemma 2.3, we only need to find a Markov fundamental system of solutions of u (4) + βu − α 0 u = 0. Subcase (c) α < − β 2 4 . Let a, b > 0 be the real number and satisfy Then Let us consider the following equation here It is worthy to point out that if β = 0, then (25) is degenerated (6). So the equation u (4) + αu = 0 has nontrivial solution if and only if α < 0 solve the equation (6), i.e. Remark 7. From Corollary 3, the disconjugacy of the equation u (4) +βu −αu = 0 is an important property to obtain the spectrum structure of the linear operator u (4) + βu − αu = 0 coupled with the clamped beam conditions u(0) = u(1) = u (0) = u (1) = 0, which provides an important theoretical basis for studying the steady-state solution of elastic beam equations in engineering and mechanics.
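The quantities λ and μ appearing above are never defined explicitly in the extracted text, but they can be reconstructed from the quoted particular solutions (cosh λt − cos μt, and so on): for the case α > 0 the characteristic equation of u^(4) + βu'' − αu = 0 has one positive and one negative root in r², which gives the following hedged reconstruction.

```latex
\[
r^{4}+\beta r^{2}-\alpha=0
\;\Longrightarrow\;
r^{2}=\frac{-\beta\pm\sqrt{\beta^{2}+4\alpha}}{2},
\qquad
\lambda^{2}=\frac{-\beta+\sqrt{\beta^{2}+4\alpha}}{2}>0,
\quad
\mu^{2}=\frac{\beta+\sqrt{\beta^{2}+4\alpha}}{2}>0 .
\]
```

With these roots, {cosh λt, sinh λt, cos μt, sin μt} is a fundamental system of solutions, consistent with the particular solutions u_2(t) = cosh λt − cos μt and u_3(t) = μ³ sinh λt + λ³ sin μt quoted above.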
2,671.2
2020-01-01T00:00:00.000
[ "Mathematics", "Physics" ]
Genetic Mutations and Treatment of Spinocerebellar Ataxias Cerebellar diseases and disturbs cause speed, amplitude and strength deficiency. Among cerebellar dysfunctions, the spinocerebellar ataxia is a pathology characterized by the presence of progressive cerebellar ataxia. Spinocerebellar ataxia has as its initial clinic manifestations the deterioration of equilibrium and coordination, beyond eye disturbances, progressive postural oscillation associated with dysarthria, dysphagia and pyramidal and extrapyramidal signs. Cerebellar ataxias are caused by a cerebellum disorder and its connections, which may be attributed by root causes in the cases of congenital and hereditary ataxias. This review brings a brief historical perspective about cerebellar functions and in addition, a discussion about a specific cerebellar dysfunction, spinocerebellar ataxia, emphasizing its physiology to a better comprehension of the disease, its clinic and the several types of this pathology. Introduction Ataxia in the terminology of Greek origin means "out of control".In medical terms "ataxia" refers to a disorder in the control of body posture, motor coordination, speech and eye movements.The term locomotor ataxia is being employed since the XIX century, often meaning "lack of motor coordination" [1].Cerebellar ataxias are caused by a cerebellum disorder and its connections, which may be attributed by root causes in the cases of congenital and hereditary ataxias [1]. This review brings a brief historical perspective about cerebellar functions and in addition, a discussion about a specific cerebellar dysfunction, spinocerebellar ataxia, emphasizing its physiology to a better comprehension of the disease, its clinic and the several types of this pathology. Cerebellar Dysfunction Studies of patients with cranial trauma since the First World War leaded to an initial understanding of cerebellum functionality.Holmes in 1917 performed detailed descriptions about cerebellar injuries, in which he attested that after the injury there were alterations in the patients' ability of performing sequential complex and gentle movements [2]. Cerebellum presents great importance to Central Nervous System (CNS), since it performs interrelations and essential functions.The main functions of cerebellum involve the coordination of motor activity, equilibrium and muscular tonus [3,4].In addition, cerebellum is a great integrator of information, as it receives upgrades all the time referring to spinal medulla activity and the ongoing movements.Cerebellum also coordinates a great number of afferent signs and alerts the appropriate motor centers to perform the necessary correlations in order to obtain a coordinated movement [5]. Cerebellar diseases and disturbs cause speed, amplitude and strength deficiency.Among cerebellar dysfunctions, the spinocerebellar ataxia is a pathology characterized by the presence of progressive cerebellar ataxia.Spinocerebellar ataxia has as its initial clinic manifestations the deterioration of equilibrium and coordination, beyond eye disturbances, progressive postural oscillation associated with dysarthria, dysphagia and pyramidal and extrapyramidal signs [6]. 
Progressive disturbs are characterized by the degenerations of spinocerebellar tracts.Among the neurological manifestations present, the visual loss and nystagmus are the most frequent characteristics of these diseases [7] Hereditary cerebellar ataxias may be classified based on the pattern of inheritance (autosomal recessive, autosomal dominant, x-linked and mitochondrial). Autosomal Dominant Cerebellar Ataxias (ADCA) The ADCA are known as hereditary spinocerebellar ataxias, which have a vast classification of neurodegenerative disorders, but all have in common the ataxia.The etiology, in most of the cases, is explained by the mutations that occur in the evaluated gene, characterized by the presence of a repeat, expansive and unstable CAG trinucleotide [7]. After the description of the first defective gene caused by ADCA, each new defective gene discovered received a numeration and ADCA started to be called spinocerebellar ataxia. The clinical signs presented by the patients with this disease are, march progressive ataxia, members' progressive ataxia, eye movements abnormalities, dysarthria, dysphagia, facial atrophy, tongue fasciculation, pyramidal and extrapyramidal signs, peripheric neuropathies, deafness and visual loss.The patient's clinic depends on what type of ADCA he has and which modified gene is involved.The clinical characteristics are better stablished to ADCA in which a specific genetic defect is identified. Spinocerebellar Ataxia (SCA) Types The most common spinocerebellar ataxias are characterized by an increased number of repetitions of a certain nucleotide sequence, above the number normally found in people without disease background.The enhancement of CAG trinucleotide repetitions is the most frequent cause of SCAs.These mutations are called dynamic expansions and arise from the instability of repeated DNA in the same locus.This fact causes a greater propensity to alterations in polymerase DNA enzyme, triggering an insertion of more nucleotides in this segment, favoring an expansion. In his study, Takiyama [8] reported that the biological basis of repetitions in expansion disorders is the instability of these repetitions during cell division, in meiosis as well as in mitosis, and that this instability causes, by the process of contraction and genic conversion, different cellular populations with different numbers of repetitions, forming a mosaic of cells.The instability in meiosis causes gonadal mosaicism, observed in patients' sperm.During transmission, these patients tend to present greater expansion dimension through generations.When this mosaicism occurs during mitosis, it is detected in somatic cells (somatic mosaicism), including lymphocytes, muscular cells and central nervous system, however, it is less significant than the gonadal mosaicism. In scientific literature, authors affirm that for most part of expansion disorders, there are differences related to clinical manifestation severity and age of symptoms' appearance.These differences depend from whom the expanded allele was inherited. 
Among SCAs, when the transmission is paternal, an increased number of repetitions in children is verified, unlike when the expanded allele is maternal, in which this tendency is not observed.Therefore, paternal transmission involves worse clinical manifestations.In these cases, patients usually present the symptoms earlier than their father did.Normally, the more serious are the symptoms and the earlier they appear, the greater is the number of repetitions.This observation is known as symptoms anticipation and it is common in expansion disorders [9]. Genetic Mutations and Treatment Among all spinocerebellar ataxias, the most common characteristic is cerebellum neurodegeneration.SCAs are differentiated by the involvement of extra cerebellar regions and genetic mutations during disease progression.This genetic involvement is what makes the disease propagate through generations, due to phenotype gap and the heterogynid that arises from expansions of DNA repetition. Until the last decade there was a great difficulty in identifying the type of genetic mutation when symptoms appeared, therefore the difficulty of specifying the course of the disease was also a difficult task.In dominant disease, the clinical observation of younger ages at onset and increasing severity in successive generations is referred to as genetic anticipation.Variability in age at onset, especially the anticipatory decrease in age at onset in successive generations, is regarded as a hallmark of polyglutamine expansion SCAs.The discovery that the CAG repeat length changes in size during transmission, germline mosaicism, is the molecular explanation for this finding.Whereas normal alleles are transmitted to off spring without modification, pure expanded alleles are unstable and the number of CAG repeats tends to increase during transmission.Paternal expansions are more likely to be unstable during transmission.This paternal bias is often attributed to the increased number of mitotic divisions preceding male gametogenesis, but could also be attributable to alterations in the activity and concentration of DNA repair proteins.another genetic characteristic is the so called polyglutamine expansion SCAs.The polyglutamine expansion SCAs share a mutational mechanism with other polyglutamine expansion diseases, such as Huntington disease and spinal bulbar muscular atropy, and perhaps a pathogenic process, even though most of the proteins involved in polyglutamine expansion diseases have unknown or unrelated functions [10]. The polyglutamine expansion SCAs caused by translated CAG repeat expansions are the most well studied group of AD-CAs.They share the same mutation, a CAG repeat, which manifests above a threshold of CAG repeats that varies according to the gene-usually above 37-40 repeats, but repeats are much smaller in the gene for SCA6 (>19)28 and much larger for SCA3 (>51) [10]. 
After the description of the first defective ACAD-causing gene, each newly discovered defective gene was numbered and ACAD was renamed Spinocerebellar Ataxia (SCA).The current classification based on genetic alterations comprises 31 types of SCA4.Its prevalence ranges from 0.9 to 3: 100,000, varying by type and continent.SCA1, 3 and 6 are the most frequent in the world.In Brazil, SCA3, also called Machado-Joseph Disease, is the most prevalent type [11].Volume 2018; Issue 02 Nowadays, genetic tests can be performed in commercial laboratories in order to obtain more accurate diagnostics, thereby, it is also possible to find available treatments and present a prognostic to each type of ataxia.With all this information, patients and their families may receive a better and more complete counseling.Magnetic resonance is an efficient exam to identify cerebellar regions involved in the disorders. Spinocerebellar ataxias are treated with therapies that aim to alleviate symptoms.These therapeutic techniques utilize the principles of motor learning, with the objective of improving the movement's coordination with the repetition of a functional movement that is specific to a certain task.Thus, the treatment consists in: gaining muscular strength, modulating tonus; improving trunk's stability through exercises; training transferences (from lying to seat, from seated to standing); normalizing the speed of the movements; training the march (utilizing regular and irregular surfaces, ramps, stairs); in a nutshell, it consists in providing the patients the greatest independence possible [12]. Therefore, functional training, also known as motor control exercises, may benefit patients with spinocerebellar dysfunction. To the present moment, there is no pharmacological treatment to SCAs, however, cholinergic, serotoninergic, gabaminergic and also dopaminergic drugs are being tested for this purpose.Researchers reported that rare cases of episodic SCAs may respond to acetazolamide treatment [13]. As highlighted in recent systematic reviews of randomized controlled trials there is currently a lack of high-quality research studies into the rehabilitation of cerebellar ataxia.Despite this, the literature on the treatment of cerebellar ataxia describes some approaches that warrant further investigation.Approaches may be broadly divided into those that aim to improve functional ability by compensating for the underlying deficit and those that aim to improve function through restorative techniques that involve adaptation and recovery within the neuromusculoskeletal system. Compensatory approaches include the use of strategies to encourage decomposition of movement into simpler single joint movements; visual and verbal cues to aid walking speed and stride length; the use of assistive technology to aid computer use; and aids such as customized seating and frames to help posture, balance and mobility [14]. Future Research Few scientific articles focus on therapeutic interventions, or aim to find new methods of treatment to this neurodegenerative disease.Therefore, new researches are necessary to evidence the importance of neurofunctional interventions in the treatment of SCA, as well as to search for new ways to treat this disease. Conclusion Autosomal dominant hereditary ataxias, currently called Spinocerebellar Ataxias (SCAs), are progressive neurogenerative diseases that trigger slow degeneration of cerebellum and its connections.
2,454.2
2018-10-09T00:00:00.000
[ "Psychology", "Biology" ]
Multi-Team Bertrand Game with Heterogeneous Players
In this paper, we propose a general form of a multi-team Bertrand game. We then study a two-team Bertrand game, in which each team consists of two firms, with heterogeneous strategies among teams and homogeneous strategies among players. We find the equilibrium solutions and the conditions for their local stability. Numerical simulations are used to illustrate the complex behaviour of the proposed model, such as period-doubling bifurcation and chaos. Finally, we use the feedback control method to control the model.

Introduction
Game theory [1,2] is the study of multi-person decision problems. Such problems arise in economics. A game is called one of incomplete information if at least one of the players does not know another player's payoff, as in an auction where bidders do not know each other's offers. Otherwise, it is called a complete information game. A game can also be classified as static or dynamic. There are two famous economic games: the first is the Cournot game [3] and the second is the Bertrand game [4]. In economic games, the first step is to construct the game. The second step is to solve it (find its Nash equilibria) and study the stability of these equilibria. Nash [5] showed that any finite game has at least one Nash equilibrium. Nature pushes us to form teams in all fields. This has at least two main advantages: the first is the improvement of our profit, and the second is that living in a team reduces risk. For example, forest animals live in teams (herds), since looking for food in a team is more efficient than doing it alone, and being in a team reduces predation risk both through earlier spotting of predators and because there is a higher probability that the predator will attack another member of the team. Another example is the competition between firms in a market: suppose M branches of McDonald's fast-food shops compete against N branches of Kentucky fast-food shops. The multi-team game has been studied in [6], where the concept was proposed and applied to the hawk-dove game, the prisoner's dilemma game and the Cournot game. The Cournot multi-team game has also been studied in [7][8][9][10][11]. The standard static Bertrand game has been studied in [12]. A duopoly Bertrand game with bounded rationality is studied in [13]. A multi-team Bertrand game is studied in [14] with two teams, but with the second team consisting of a single player. We construct the model in Section 2.
In Section 3 we analyse the model, i.e., we find its equilibrium points and their stability conditions. Some numerical analysis is carried out in Section 4 to show the complex behaviour of the model. Finally, in Section 5, we use the feedback control method to control our model.

The Model
The Bertrand game is a model of competition used in economics. It describes the interaction among firms that set prices and their customers who choose quantities at those prices. In this game there are at least two firms producing homogeneous products, and they compete by setting prices simultaneously. Consumers buy everything from the firm with the lowest price; if all firms charge the same price, consumers select randomly among them. Suppose there are $n$ firms in total (producing a certain product) in the market and that these firms are divided into $N$ teams. Let $p_{ij}$ be the price per unit of the product produced by firm $j$ in team $i$, and let $c_{ij}$ be the marginal cost of producing one unit of that product for firm $j$ in team $i$. Then the payoff of firm $j$ in team $i$, if it plays without the team, is given by Equation (1), where $N$ is the number of teams and $n_i$ is the number of firms in team $i$. The positive constants $a$ and $b$ are the demand parameters, where $b$ is the slope of the demand function. We propose that firms in the same team share some of their payoffs with their team mates. So let $\mu^{i}_{lj}$ denote the payoff rate that firm $l$ takes from the payoff of firm $j$ in the same team $i$; it is clear that $\mu^{i}_{lj} < 1$. For example, the final payoff of the first firm of the first team, if it plays with the team, is given by the corresponding expression, where $n_1$ is the number of firms in the first team. In general, the final payoff of firm $j$ in team $i$ is given by the analogous expression. In the case of two teams, where each team consists of two firms, Equation (1) gives the payoffs of the firms in each team; using the sharing assumption in Equation (2) and Equation (3), we obtain the final payoffs of the firms. In this model, we assume that the firms in the first team use the marginal profit method [15] to form their expectation for the next period, according to Equation (5), where $\alpha_{1j}$ is the speed (rate) of adjustment and is a function of the price $p_{1j}$. The firms in the second team use the Nash equilibrium [2] to make their decision for the next step, by solving Equation (6). We further assume that the speed of adjustment is linear in the price. Combining Equations (5) and (6), we obtain system (7). Equation (7) thus describes a system of two teams, each consisting of two firms, with homogeneous strategies among the firms in each team and heterogeneous strategies among the teams.

The Analysis of the Model
The steady-state (equilibrium) solutions are of particular interest [16]. In the context of difference equations, an equilibrium solution $\bar{x}$ is defined to be a value that satisfies $\bar{x} = f(\bar{x})$.
Then we can obtain the equilibrium solutions for our model as follows. Using Equation (8), the equilibrium points are given by solving the corresponding system. We obtain three boundary equilibrium points, and the fourth one is the coexistence equilibrium. The first, second and third boundary equilibria are given explicitly, and the most important one is the coexistence equilibrium $E_4$. The stability of these equilibrium solutions is determined by the eigenvalues of the Jacobian matrix of system (7), which is given by (10). An equilibrium solution is stable if the eigenvalues $\lambda_i$, $i = 1, 2, 3, 4$, of the Jacobian matrix (10) satisfy $|\lambda_i| < 1$. The eigenvalues for the first equilibrium point can be computed directly. For the other equilibrium points it is very difficult to compute the eigenvalues; instead, we work with the characteristic polynomial and apply the necessary and sufficient conditions of [17] for all of its roots to lie inside the unit circle. Evaluating the coefficients of the characteristic polynomial at the coexistence equilibrium, we find that the equilibrium solution $E_4$ of system (7) is stable under conditions (11). This means that in the long run all firms coexist, so the market will be stable.

Numerical Simulations
In this section we use numerical simulations to show the complicated behaviour of the model (stability, period-doubling bifurcation and chaos). We note that small cooperation among the firms in the same team (a sharing rate below 0.3459991) leads to complex behaviour in the system, while increasing this cooperation leads to stability.

Chaos Control
As seen in the last section, the adjustment rate $\alpha_{ij}$ and the payoff return $\mu^{i}_{lj}$ of the boundedly rational firms play an important role in the stability of the market. To avoid this complexity, we try to control the chaos. We use the feedback method [18] to control the adjustment magnitude. Modifying the first equation of our system gives the controlled system (12), where the parameter $k$ is the control factor, with a correspondingly modified Jacobian matrix. From Figure 3, we find that, as the control factor $k$ increases, the controlled system passes from chaotic to periodic and then to stable behaviour. Figure 4 shows the stable behaviour of the controlled system when $k = 0.5$. This means that if the firms in the first team adopt the feedback adjustment, the price system can switch from a chaotic to a regular or equilibrium state. Figure 1 shows the bifurcation diagram of the prices and profits with respect to the adjustment speed $\alpha_{11}$ while the other parameters are held constant: the equilibrium point $p = (0.5212497, 0.5254234, 0.5638708, 0.5599204)$ is locally stable for $\alpha_{11} < 0.7569938$; beyond this value the system becomes periodic and finally chaotic. The same behaviour occurs for the profits in Figure 1B at the same value of $\alpha_{11}$. Figure 2 shows the effect of changing the sharing parameters $\mu^{i}_{lj}$: it gives a bifurcation diagram for the prices and profits with respect to $\mu^{1}_{12}$, with the other parameter values the same as in Figure 1 except for the value of $\alpha_{11}$. Figure 1. The bifurcation diagram of the prices and the profits with respect to $\alpha_{11}$.
Figure 2. The bifurcation diagram of the prices and the profits with respect to $\mu^{1}_{12}$. Figure 3. The bifurcation diagram of the prices with respect to the control factor k. Figure 4. The behaviour of the prices with the control factor k = 0.5.
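Because the display equations of the model were lost in extraction, the following Python sketch is only a stand-in, not the paper's exact four-firm system: it iterates a standard two-firm bounded-rationality Bertrand map in which one firm adjusts along its marginal profit and the other best-responds, with a simple damping term playing the role of the feedback control factor k. All parameter values and the assumed control form are illustrative.

```python
# Minimal stand-in simulation (NOT the paper's exact model): linear demand
# q_i = a - b*p_i + d*p_j, one boundedly rational firm, one best-responding firm,
# and an assumed feedback-damping control with factor k.
import numpy as np

a, b, d = 4.0, 1.0, 0.5        # demand parameters (assumed)
c1, c2 = 0.0, 0.0              # marginal costs (assumed)
alpha = 0.45                   # speed of adjustment of the boundedly rational firm


def step(p1, p2, k=0.0):
    """One iteration of the price map."""
    # profit_1 = (p1 - c1) * (a - b*p1 + d*p2); its derivative w.r.t. p1 is:
    marginal_profit_1 = a - 2.0 * b * p1 + d * p2 + b * c1
    p1_next = p1 + alpha * p1 * marginal_profit_1
    p1_next = (p1_next + k * p1) / (1.0 + k)      # feedback control (assumed form)
    # firm 2 plays its static best response to the current p1:
    p2_next = (a + d * p1 + b * c2) / (2.0 * b)
    return p1_next, p2_next


def orbit(p0=(1.0, 1.0), n=2000, k=0.0):
    p1, p2 = p0
    out = np.empty((n, 2))
    for t in range(n):
        p1, p2 = step(p1, p2, k)
        out[t] = (p1, p2)
    return out


if __name__ == "__main__":
    free = orbit(k=0.0)[-200:, 0]        # oscillatory/complex without control
    controlled = orbit(k=0.5)[-200:, 0]  # damped by the feedback term
    print("price spread without control:", free.max() - free.min())
    print("price spread with control   :", controlled.max() - controlled.min())
```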
2,333.8
2011-09-29T00:00:00.000
[ "Mathematics" ]
General-order observation-driven models: ergodicity and consistency of the maximum likelihood estimator
The class of observation-driven models (ODMs) includes many models of non-linear time series which, in a fashion similar to, yet different from, hidden Markov models (HMMs), involve hidden variables. Interestingly, in contrast to most HMMs, ODMs enjoy likelihoods that can be computed exactly with computational complexity of the same order as the number of observations, making maximum likelihood estimation the privileged approach for statistical inference in these models. A celebrated example of general-order ODMs is the GARCH$(p,q)$ model, for which ergodicity and inference have been studied extensively. However, little is known about more general models, in particular integer-valued ones such as the log-linear Poisson GARCH or the NBIN-GARCH of order $(p,q)$, for which most of the existing results seem restricted to the case $p=q=1$. Here we fill this gap and derive ergodicity conditions for general ODMs. The consistency and the asymptotic normality of the maximum likelihood estimator (MLE) can then be derived using the method already developed for first-order ODMs. ODMs have the nice feature that the computations of the associated (conditional) likelihood and its derivatives are easy, so parameter estimation is relatively simple, and prediction, which is a prime objective in many time series applications, is straightforward. However, it turns out that the asymptotic properties of the MLE for this class can be cumbersome to establish, except when they can be derived using computations specific to the studied model (the GARCH(1, 1) case being one of the most celebrated examples). The literature concerning the asymptotic theory of the MLE when the observed variable has a Poisson distribution includes [23,21,24] and [46]. For the more general case where the model belongs to the class of one-parameter exponential ODMs, such as the Bernoulli, the exponential, the negative binomial (with known shape parameter) and the Poisson autoregressive models, the consistency and the asymptotic normality of the MLE have been derived in [11]. However, the one-parameter exponential family is inadequate for models such as multi-parametric, mixture or multivariate ODMs (the negative binomial with all parameters unknown and mixture Poisson ODMs are examples of this case). A more general consistency result has been obtained recently in [13], where the observed process may admit various conditional distributions. This result has later been extended and refined in [16]. However, most of the results obtained so far have been derived only within the framework of GARCH(1, 1)-type, or first-order, ODMs. Yet, to our knowledge, little is known for GARCH(p, q)-type, i.e. higher-order discrete, ODMs, as highlighted as a remaining unsolved problem in [44]. Here, following others (e.g. [42,29]), we consider a general class of ODMs that is capable of accounting for several lagged variables of both the hidden and the observation processes. We develop a theory for the class of general-order ODMs parallel to the GARCH(p, q) family and in particular we investigate the two problems listed below. a) Provide a complete set of conditions for general-order ODMs implying that the process is ergodic. b) Prove the consistency of the MLE. Under the assumption of well-specified models, this can be treated as two separate sub-problems. 1) Prove that the MLE is equivalence-class consistent.
2) Characterize the set of identifiable parameters. In principle, the general order model can be treated by embedding it into a firstorder one and by applying the results obtained e.g. in [13,16] to the embedded model. Yet the particular form of the embedded model does not fit the usual assumptions tailored for standard first-order ODMs. This is why, as pointed out in [44], solving Problem a) is not only a formal extension of available results. To obtain Theorem 8 where Problem a) is addressed, we derive conditions by taking advantage of the asymptotic behavior of iterated versions of the kernels involved. Incidentally, this also allows us to improve known conditions for some first-order models, as explained in . Once the ergodicity of the model is proved, we can solve Sub-problem b) 1). This is done in Theorem 10, which is obtained almost for free from the embedding in the ODM(1,1) case. Sub-problem b) 2) is much more involved and has been addressed in [17]. To demonstrate the generality, applicability and efficiency of our approach, we apply our results to three specific integer valued ODMs, namely, the log-linear Poisson GARCH(p, q) model, the negative binomial integer-valued GARCH or the NBIN-GARCH(p, q) model and the Poisson AutoRegressive model with eXogenous variables, known as the PARX model. To the best of our knowledge, the stationarity and ergodicity as well as the asymptotic properties of the MLE for the general loglinear Poisson GARCH(p, q) and NBIN-GARCH(p, q) models have not been derived so far. For the PARX(p, q) model, which can be considered as a vector-valued ODM in our study, such results are available in [1] but our approach leads to significantly different assumptions, as will be shown in Section 2.3.3. Numerical experiments involving the log-linear Poisson GARCH(p, q) and the NBIN-GARCH (p, q) models can be found in [40,Section 5.5] for a set of earthquake data from Earthquake Hazards Program [43]. The paper is structured as follows. Definitions used throughout the paper are introduced in Section 2, where we also state our results on three specific examples. In Section 3, we present our main results on the ergodicity and consistency of the MLE for general order ODMs. Finally, Section 4 contains the postponed proofs and we gather some independent useful lemmas in Section 5. Definitions and Examples 2.1. Observation-driven model of order (p, q). Throughout the paper we use the notation u ℓ:m := (u ℓ , . . . , u m ) for ℓ ≤ m, with the convention that u ℓ:m is the empty sequence if ℓ > m, so that, for instance (x 0:(−1) , y) = y. The observationdriven time series model can formally be defined as follows. Definition 1 (General order ODM and (V)LODM). Let (X, X ), (Y, Y) and (U, U) be measurable spaces, respectively called the latent space, the observation space and the reduced observation space. Let (Θ, ∆) be a compact metric space, called the parameter space. Let Υ be a measurable function from (Y, Y) to (U, U). Let (x 1:p , u 1:q ) →ψ θ u1:q (x 1:p ) : θ ∈ Θ be a family of measurable functions from (X p × U q , X ⊗p ⊗ U ⊗q ) to (X, X ), called the reduced link functions and let G θ : θ ∈ Θ be a family of probability kernels on X × Y, called the observation kernels. 
(ii) We further say that this model is a vector linearly observation-driven model of order (p, q, p ′ , q ′ ) (shortened as VLODM(p, q, p ′ , q ′ )) if moreover for some p ′ , q ′ ∈ Z >0 , X is a closed subset of R p ′ , U ⊂ R q ′ and, for all x = x 0:(p−1) ∈ X p , u = u 0:(q−1) ∈ U q , and θ ∈ Θ, for some mappings ω, A 1:p and B 1:q defined on Θ and valued in R p ′ , R p ′ ×p ′ p and R p ′ ×q ′ q . In the case where p ′ = q ′ = 1, the VLODM(p, q, p ′ , q ′ ) is simply called a linearly observation-driven model of order (p, q) (shortened as LODM(p, q)). Remark 1. Let us comment briefly on this definition. (1) The standard definition of an observation driven model does not include the admissible mapping Υ and indeed, we can define the same model without Υ by replacing the second equation in (2.1) by where (x 1:p , y 1:q ) → ψ θ y1:q (x 1:p ) : θ ∈ Θ is a family of measurable functions from (X p × Y q , X ⊗p ⊗ Y ⊗q ) to (X, X ), called the (non-reduced) link functions, and defined by However by inserting the mapping Υ, we introduce some flexibility and this is useful for describing various ODMs with the same reduced link functioñ ψ θ . For instance all LODMs or VLODMs use the form of reduced link function in (2.2) although they may use various mappings Υ's. This is the case for the GARCH, log-linear GARCH and NBIN-GARCH, see below, but also of the bivariate integer-valued GARCH model [9]. (2) When p = q = 1, then the ODM(p, q) defined by (2.1) collapses to the (first-order) ODM considered in [13] and [16]. Note also that if p = q, setting r := max(p, q), the ODM(p, q) can be embedded in an ODM(r, r), but this requires augmenting the parameter dimension which might impact the identifiability of the model. (3) Note that in our definition, Υ does not depend on θ. In some cases, one can simplify the link functionψ, by allowing one to let Υ depend on the unknown parameter. To derive identifiability conditions or to prove the consistency of the MLE, the dependence of the distribution with respect to the unknown parameter θ is crucial. In contrast, proving that the process is ergodic is always done for a given θ and it is thus possible to use our set of conditions to prove ergodicity with Υ depending on θ, hence reasoning for a given θ and a given Υ. Moreover our set of conditions (namely Conditions (A-3), (A-4), (A-5), (A-6), (A-7) and (A-8) in Theorem 8) depends on Υ only through the image space U, which can usually be taken to be the same even in situations where Υ may depend on θ. The GARCH model has been extensively studied, see, for example, [5,25,26,34,27] and the references therein. Many other examples have been derived in the class of LODMs. An important feature of the GARCH is the fact that x → G θ (x; ·) maps a family of distributions parameterized by the scale parameter √ x, which is often expressed by replacing the first line in (2.1) by an equation of the form Y k = √ X k ǫ k with the assumption that {ǫ n : n ∈ Z} is i.i.d. Such a simple multiplicative formula no longer holds when the observations Y k 's are integers, which seriously complicates the theoretical analysis of such models, as explained in [44]. Our results will apply to the GARCH case but we do not improve the existing results in this case. The real interest of our approach lies in being able to address general order integer-valued ODMs for which existing results are scarce. . Consider an ODM as in Definition 1. 
Then, for all k ≥ 0, the conditional distribution of (Y k , X k+1 ) given F k only depends on where we defined For all θ ∈ Θ and all probability distributions ξ on (Z, Z), we denote by P θ ξ the distribution of {X k , Y k ′ : k > −p, k ′ > −q} satisfying (2.1) and Z 0 ∼ ξ, with the usual notation P θ z in the case where ξ is a Dirac mass at z ∈ Z. We replace P by E to denote the corresponding expectations. We always assume that the observation kernel is dominated by a σ-finite measure ν on (Y, Y), that is, for all θ ∈ Θ, there exists a measurable function g θ : X × Y → R ≥0 written as (x, y) → g θ (x; y) such that for all x ∈ X, g θ (x; ·) is the density of G θ (x; ·) with respect to ν. Then the inference about the model parameter is classically performed by relying on the likelihood of the observations (Y 0 , . . . , Y n ) given Z 0 . For all z ∈ Z, the corresponding conditional density function p θ (y 0:n |z) with respect to ν ⊗(n+1) under parameter θ ∈ Θ is given by where, setting u k = Υ(y k ) for k = 0, . . . , n, the sequence x 0:n is defined through the initial conditions and recursion equations where, throughout the paper, for all j ∈ {1, . . . , p + q − 1}, we denote by Π j (z) the j-th entry of z ∈ Z. Note that x k+1 only depends on z and y 0:k for all k ≥ 0. Throughout the paper, we use the notation: for all n ≥ 1, y 0:n−1 ∈ Y n and z ∈ Z, ψ θ y 0:(n−1) (z) := x n , with x n defined by (2.7). (2.8) Note that for n = 0, y 0:−1 is the empty sequence and (2.8) is replaced by ψ θ ∅ (z) = x 0 = Π p (z). Given the initial condition Z 0 = z (i) , the (conditional) maximum likelihood estimatorθ z (i) ,n of the parameter θ is thus defined by where, for all z (i) ∈ Z, Note that since {X n : n ∈ Z} is unobserved, to compute the conditional likelihood, we used some arbitrary initial values for X (−p+1):0 , (the first p entries of z (i) ). We also use arbitrary values for the last q − 1 entries of z (i) . Note that we index our set of observations as Y 0:n (hence assuming 1 + n observations to compute L θ z (i) ,n ). The iterated function ψ θ Y 0:k can be cumbersome but is very easy to compute in practice using the recursion (2.7). Moreover, the same kind of recursion holds for its derivatives with respect to θ, allowing one to apply gradient steps to locally maximize the likelihood. In this contribution, we investigate the convergence ofθ z (i) ,n as n → ∞ for some (well-chosen) value of z (i) under the assumption that the model is well specified and the observations are in the steady state. The first problem to solve in this line of results is thus to show the following. This ergodic property is the cornerstone for making statistical inference theory work and we provide simple general conditions in Section 3.2. We now introduce the notation that will allow us to refer to the stationary distribution of the model throughout the paper. Definition 3. If (A-1) holds, then a) P θ denotes the distribution on ((X × Y) Z , (X × Y) ⊗Z ) of the stationary solution of (2.1) extended on k ∈ Z, with F k = σ(X −∞:k , Y −∞:(k−1) ); b)P θ denotes the projection of P θ on the component Y Z . We also use the symbols E θ andẼ θ to denote the expectations corresponding to P θ andP θ , respectively. We further denote by π θ X and π θ Y the marginal distributions of X 0 and Y 0 under P θ , on (X, X ) and (Y, Y), respectively. 
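As a concrete illustration of how cheap the recursion (2.7) and the conditional log-likelihood (2.10) are to evaluate, the minimal Python sketch below computes the conditional log-likelihood for the special case of a log-linear Poisson GARCH(1,1) (see Example 1 below), starting from an arbitrary initial state. The parameter values, initial condition and function names are illustrative assumptions; the general ODM(p, q) case only differs by carrying the last p hidden states and q-1 reduced observations in z, as in (2.4).

```python
# Minimal sketch of the conditional log-likelihood recursion for a log-linear
# Poisson GARCH(1,1): X_{k+1} = omega + a*X_k + b*ln(1+Y_k), Y_k | X_k ~ Poisson(exp(X_k)).
import numpy as np
from scipy.special import gammaln


def loglik_llpgarch11(theta, y, x_init=0.0):
    """Conditional log-likelihood of y_0:n given an arbitrary initial state x_init."""
    omega, a, b = theta
    x = x_init
    ll = 0.0
    for yk in y:
        lam = np.exp(x)                              # Poisson mean e^{x_k}
        ll += yk * x - lam - gammaln(yk + 1.0)       # log Poisson density at y_k
        x = omega + a * x + b * np.log1p(yk)         # one step of the recursion
    return ll


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    omega, a, b = 0.1, 0.5, 0.3                      # illustrative "true" values
    x, ys = 0.0, []
    for _ in range(500):                             # simulate from the model
        yk = rng.poisson(np.exp(x))
        ys.append(yk)
        x = omega + a * x + b * np.log1p(yk)
    y = np.array(ys)
    # crude profile over a; the MLE would maximize over the full parameter vector
    for a_try in (0.2, 0.5, 0.8):
        print(a_try, loglik_llpgarch11((omega, a_try, b), y))
```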
As a byproduct of the proof of (A-1), one usually obtains a function V X : X → R ≥0 of interest, common to all θ ∈ Θ, such that the following property holds on the stationary distribution (see Section 3.2). (A-2) For all θ ∈ Θ, π θ X (V X ) < ∞. It is here stated as an assumption for convenience. Note also that, in the following, for V : X → R ≥0 and f : X → R, we denote the V -norm of f by with the convention 0/0 = 0 and we write f V if |f | V < ∞. With this notation, under (A-2), for any f : X → R such that f V X , π θ X (|f |) < ∞ holds, and similarly, since π θ Y = π θ X G θ as a consequence of (2.1), for any f : Equivalence-class consistency and identifiability. For all θ, θ ′ ∈ Θ, we write θ ∼ θ ′ if and only ifP θ =P θ ′ . This defines an equivalence relation on the parameter set Θ and the corresponding equivalence class of θ is denoted by [θ] := {θ ′ ∈ Θ : θ ∼ θ ′ }. The equivalence relationship ∼ was introduced by [32] as an alternative to the classical identifiability condition. Namely, we say thatθ z (i) ,n is equivalence-class consistent at the true parameter θ ⋆ if Recall that ∆ is the metric endowing the parameter space Θ. Therefore, in (2.11), ∆(θ z (i) ,n , [θ ⋆ ]) is the distance between the MLE and the set of parameters having the same stationary distribution as the true parameter, and the convergence is considered under this common stationary distribution. Identifiability can then be treated as a separate problem which consists in determining all the parameters θ ⋆ for which [θ ⋆ ] reduces to the singleton {θ ⋆ }, so that equivalence-class consistency becomes the usual consistency at and only at these parameters. The identifibility problem is treated in [17,Proposition 11 and Theorem 12] for general order ODMs, where many cases of interest are specifically detailed. We will use in particular [17, Theorem 17 and Section 5.5]. Three examples. To motivate our general results we first explicit three models of interest, namely, the log-linear Poisson GARCH(p, q), the NBIN-GARCH(p, q) and the PARX(p, q) models. To the best of our knowledge, the stationarity and ergodicity as well as the asymptotic properties of the MLE for the general log-linear Poisson GARCH(p, q) and NBIN-GARCH(p, q) models have not been derived so far. Such results are available for PARX model in [1] but our approach leads to significantly different assumptions, as will be shown in Section 2.3.3. Once the ergodicity of the model is established, we can investigate the consistency of the MLE. This is done by first investigating the equivalence-class consitency of the MLE and then we only need to add an identifiability condition to get the consistency of the MLE, see Theorems 4, 5 and 7 hereafter. Such results pave the way for investigating the asymptotic normality. This is done in [40,Proposition 5.4 Example 1. The Log-linear Poisson GARCH(p, q) Model parameterized by θ = (ω, a 1:p , b 1:q ) ∈ Θ ⊂ R 1+p+q is a LODM(p, q) with affine reduced link function of the form (2.2) with coefficients given by with observations space Y = Z ≥0 , hidden variables space X = R, admissible mapping Υ(y) = ln(1 + y), and observation kernel G θ (x; ·) defined as the Poisson distribution with mean e x . Our condition for showing the ergodicity of general order log-linear Poisson GARCH models requires the following definition. 
For all x = x (1−(p∨q)):0 ∈ R p∨q , m ∈ Z ≥0 , and w = w (−q+1):m ∈ {0, 1} q+m , define ψ θ w (x) as x m+1 obtained by the recursion We can now state our result on general order log-linear Poisson GARCH models. The proof is postponed to Section 4.4. Remark 2. Let us provide some insights about Condition (2.14). (1) Using the two possible constant sequences w, w k = 0 for all k or w k = 1 for all k in (2.13), we easily see that (2.14) implies a 1:p ∈ S p and a 1: where we used the usual convention a k = 0 for p < k ≤ q and b k = 0 for q < k ≤ p and where (2) A sufficient condition to have (2.14) is Indeed, defining ρ as the left-hand side of the previous display, we clearly have, for all m > p ∨ q, w ∈ {0, 1} q+m and x ∈ R p∨q , (3) The first iteration (2.13) implies, for all w ∈ {0, 1} q and x ∈ [−1, 1] p∨q , Hence a sufficient condition to have (2.18) (and thus (2.14)) is (4) When p = q = 1, by Points (1) and (3) above, Condition (2.14) is equivalent to have |a 1 | < 1 and |a 1 + b 1 | < 1. This condition is weaker than the one derived in [13] where |b 1 | < 1 is also imposed. NBIN-GARCH model. Our next example is the NBIN-GARCH(p, q), which is defined as follows in our setting. with affine reduced link function of the form (2.2) with coefficients given by (2.12), observations space Y = Z ≥0 , hidden variables space X = R ≥0 , admissible mapping Υ(y) = y, and observation kernels G θ (x; ·) defined as the negative binomial distribution with shape parameter r > 0 and mean r x, that is, for all y ∈ Z ≥0 , We now state our result for general order NBIN-GARCH model. The proof is postponed to Section 4.5. Remark 3. Clearly, under the setting of Example 2, if Eq. (2.1) has a stationary solution such that µ = x π X (dx) < ∞, taking the expectation on both sides of the second equation in (2.1) and using that y π Y (dy) = r x π X (dx), then (2.20) must hold, in which case we get Hence (2.20) is in fact necessary and sufficient to get a stationary solution admitting a finite first moment, as was already observed in [48, Theorem 1] although the ergodicity is not proven in this reference. However, we believe that, similarly to the classical GARCH(p, q) processes, we can find stationary solutions to Eq. (2.1) in the case where (2.20) does not hold. This is left for future work. 2.3.3. The PARX model. The PARX model is similar to the standard INGARCH(p, q) model but with additional exogenous variables in the linear link function for generating the hidden variables. Following [1], these exogenous variables are assumed to satisfy some Markov dynamic of order 1 independently of the observations and of the hidden variables (see their Assumption 1). This leads to the following definition. Definition 6 (PARX model). Let d, p, q and r be positive integers. . . , d}. Let P(x) denote the Poisson distribution with parameter x and L be a given Markov kernel on R r × B(R r ). We define the Poisson AutoRegressive model with eXogenous variables, shortly written as PARX(p, q), as a class of processes parameterized by θ = (ω, a 1: , and satisfying Remark 4. Note that our a 1 , . . . , a p and b 1 , . . . , b q correspond to β 1 , . . . , β q and α 1 , . . . , α p of [1]. Here we allowed r to be different from d whereas in in [1] it is assumed that d = r and they are denoted by d x . Since the exogenous variables are observed we can recast the PARX model into a VLODM(p, q) by including the exogenous variables into the observations Y t and the hidden variable X t . 
Namely, we can formally see the above PARX model as a special case of VLODM as follows. Example 3 (PARX model as a VLODM). Consider a PARX model as in We end up this section by stating our result for general PARX model. For establishing the ergodicity of the PARX model, we need some assumptions on the dynamic of the exogeneous variables through the Markov kernel L. Namely we consider the following assumptions on the kernel L. (L-1) The Markov kernel L admits a kernel density ℓ with respect to some measure ν L on Borel sets of R r such that where · denotes the Euclidean norm on R r . (L-2) The Markov kernel L is weak-Feller (Lf is bounded and continuous for any f that is bounded and continuous). (L-3) There exist a probability measure π L on Borel sets of R r , a measurable function (L-4) There exists a constant M ≥ 1 such that for all (ξ, ξ ′ ) ∈ R 2r , such that the following assertions hold for L, its kernel density ℓ introduced in (L-1) and its drift function introduced in (L-3). We can now state our main result for the PARX model. Theorem 7. Consider the PARX(p, q) model of Definition 6, seen as the VLODM of Example 3. Suppose that (L-1)-(L-4) hold, and that, for all Then the following assertions hold. The proof is postponed to Section 4.6. Let us briefly comment on our assumptions. Remark 5. Contrary to the previous example, the PARX model requires an additional Markov kernel L and therefore additional assumptions on this kernel. We briefly comment on them hereafter and compare our assumptions to those in [1]. (i) Assumptions (L-1)-(L-3) are classical assumptions on Markov kernels to ensure its stability and ergodicity. In [1, Assumption 2], a different approach is used and instead a first order contraction of the random iterative functions defining the Markov transition is used. Their assumption would typically imply our (L-3) with V L (ξ) = 1 + ξ s for some s ≥ 1, with their Assumption(3)(ii) implying our (L-3)(iii). (ii) Our Assumption (L-4) is inherited from our approach for proving ergodicity using the embedding of the PARX model into a VLODM model detailed in Example 3. It is not clear to us whether it can be omitted for proving ergodicity. This question is left for future work. Note that we provide another formulation of this assumption in Remark 10, see Section 4.6. There is no equivalent of our Assumption (L-4) in [1] as their technique for proving ergodicity is different. However, although we have the same condition (2.22) as them for ergodicity, see their Assumption 3(i), they require an additional condition, Assumption 3(iii), which we do not needed. This condition involves both the coefficients a i and b i and the contraction constant ρ used in their stability assumption for the Markov transition of the covariates. In contrast, our condition on the parameters a i and b i of the model, merely imposed through (2.22), are completely separated from our assumptions (L-1)-(L-4) on the dynamics of the covariates. (iii) Assumptions (L-5) and (L-6) are not required to get ergodicity. they are needed to establish the equivalence class consistency of the MLE and the identifiability of the model. Like our Assumption (L-4) for proving ergodcity, Assumption (L-5) is inherited from the embedding of the PARX model into a VLODM model here used for proving the equivalent class consistency. Assumption (L-6) is a mild and natural identifiability assumption. It basically says that the covariates f 1 (ξ k ), . . . , f d (ξ k ) are not linearly related conditionally to ξ k−1 . 
If they were, it would suggest using a smaller set of covariates. The identifiability condition of [1] is different as it involves a condition both on the covariate distribution and on the parameters a 1 , . . . , a p and b 1 , . . . , b q , see their Assumption 5. To conclude this section, let us carefully examine the simple case where the covariates are assumed to follow a Gaussian linear dynamic and see, in this specific case, how our assumptions compare to that of [1]. More precisely, assume that L is defined by the following equation on the exogenous variables where ℵ is an r × r matrix with spectral radius ρ(ℵ) ∈ (0, 1), σ > 0, and η t ∼ N (0, I r ). Then Assumption (L-1) holds with and ν L being the Lebesgue measure on R r . It is straightforward to check (L-2) and (L-3) with V L (ξ) = e λ ξ for any λ > 0. Let us now check that Assumption (L-4) holds. First note that, setting Then we get that for all ξ, ξ ′ ∈ R r such that ξ − ξ ′ ≤ ǫ, for some positive constant M 0 only depending on ℵ , σ and ǫ. Finally, for all ξ, ξ ′ ∈ R r , and Assumption (L-4) holds. Preliminaries. In the well-specified setting, a general result on the consistency of the MLE for a class of first-order ODMs has been obtained in [13]. Let us briefly describe the approach used to establish the convergence of the MLEθ z (i) ,n in this reference and in the present contribution for higher order ODMs. Let θ ⋆ ∈ Θ denote the true parameter. The consistency of the MLE is obtained through the following steps. Step 1 Find sufficient conditions for the ergodic property (A-1) of the model. Then the convergence of the MLE to θ ⋆ is studied underP θ⋆ as defined in Definition 3. Step 2 Establish that, as the number of observations n → ∞, the normalized loglikelihood L θ z (i) ,n as defined in (2.10), for some well-chosen z (i) ∈ X p , can be approximated by where p θ (·|·) is aP θ⋆ -a.s. finite real-valued measurable function defined on (Y Z , Y ⊗Z ). To define p θ (·|·), we set, for all y −∞:0 ∈ Y Z ≤0 and y ∈ Y, whenever the following limit is well defined, Step 3 By (A-1), the observed process {Y k : k ∈ Z} is ergodic underP θ⋆ and provided thatẼ it then follows that Step 4 Using an additional argument (similar to that in [37]), deduce that the MLEθ z (i) ,n defined by (2.9) eventually lies in any given neighborhood of the set which only depends on θ ⋆ , establishing that where ∆ is the metric endowing the parameter space Θ. Step 5 Establish that Θ ⋆ defined in (3.2) [15], we proved a general result for partially observed Markov chains, which include first order ODMs, in order to get Step 5, see Theorem 1 in this reference. Finally Step 6 is often carried out using particular means adapted to the precise considered model. We present in Section 3.2 the conditions that we use to prove ergodicity (Step 1) and, in Section 3.3, we adapt the conditions already used in [16,15] to carry out Step 2 to Step 5 for first order model to higher order ODMs. Using the embedding described in Section 4.1, all the steps from Step 1 to Step 5 can in principle be obtained by applying the existing results to the first order ODMs in which the original higher order model is embedded. This approach is indeed successful, up to some straightforward adaptation, for Step 2 to Step 5. Ergodicity in Step 1 requires a deeper analysis that constitutes the main part of this contribution. As for Step 6, it is treated in [17]. 3.2. Ergodicity. 
In this section, we provide conditions that yield stationarity and ergodicity of the Markov chain {(Z k , Y k ) : k ∈ Z ≥0 }, that is, we check (A-1) and (A-2). We will set θ to be an arbitrary value in Θ and since this is a "for all θ (...)" condition, to save space and alleviate the notational burden, we will drop the superscript θ from, for example, G θ and ψ θ and respectively write G and ψ, instead. Ergodicity of Markov chains is usually studied using ϕ-irreducibility. This approach is well known to be quite efficient when dealing with fully dominated models; see [35]. It is not at all the same picture for integer-valued observation-driven models, where other tools need to be invoked; see [21,13,16] for ODMs(1,1). Here we extend these results for general order ODMs(p, q). Let us now introduce our list of assumptions. They will be further commented after they are all listed. We first need some metric on the space Z and assume the following. (A-3) The σ-fields X and U are Borel ones, respectively associated to (X, δ X ) and (U, δ U ), both assumed to be complete and separable metric spaces. For an LODM(p, q), the following condition, often referred to as the invertibility condition, see [41], is classically assumed. where S p is defined in (2.17). For an ODM(p, q) with a possibly non-linear link function, (I-1) is replaced by a uniform contracting condition on the iterates of the link function, see (A-4) below. In order to write this condition in this more general case, recall that any finite Y-valued sequence y, ψ θ y is defined by (2.8) with the recursion (2.7). Next, we may rewrite these iterates directly in terms of u 0:(n−1) instead of y 0:(n−1) . Namely, we can definẽ ψ θ u 0:(n−1) (z) := x n , with x n defined by (2.7) (3.4) so that ψ θ y 0:(n−1) (z) =ψ θ Υ ⊗n (y 0:(n−1) ) (z) for all z ∈ Z and y 0:(n−1) ∈ Y n . Now define, for all n ∈ Z >0 , the Lipschitz constant forψ θ u , uniform over u ∈ U n , where we set, for all v ∈ Z 2 , We use the following assumption on a general link function. (A-4) For all θ ∈ Θ, we have Lip θ 1 < ∞ and Lip θ n → 0 as n → ∞. The following assumption is mainly related to the observation kernel G and relies on the metrics introduced in (A-3) and on the iterates of the link functions defined in (3.4). (A-5) The space (X, δ X ) is locally compact and if q > 1, so is (U, δ U ). For all x ∈ X, there exists δ > 0 such that Moreover, one of the two following assertions hold. (A-6) There exist measurable functions V X : X → R ≥0 and V U : The following condition is used to show the existence of a reachable point. (A-7) The conditional density of G with respect to ν satisfies, for all (x, y) ∈ X × Y, and one of the two following assertions hold. (a) There exists y 0 ∈ Y such that ν( The last assumption is used to show the uniqueness of the invariant probability measure, through a coupling argument. It requires the following definition, used in a coupling argument. Under (A-8)-(i) (defined below), for any initial distribution ξ on (Z 2 , Z ⊗2 ), letÊ ξ denote the expectation (operator) asso- U → R ≥0 and a Markov kernel G on X 2 × Y, dominated by ν with kernel density g(x, x ′ ; y) such that, setting W Y = W U • Υ, we have GW Y W X and the three following assertions hold. (i) For all (x, x ′ ) ∈ X 2 and y ∈ Y, (ii) The function W X is symmetric on X 2 , W X (x, ·) is locally bounded for all x ∈ X, and W U is locally bounded on U. one of the two following assertions holds. 
(1) In many examples, (3.12) is satisfied with where φ is a measurable function from X 2 to X, in which case, GW Y should be replaced by GW Y • φ in (A-8). (3) If q = 1, the terms depending on ℓ both in (3.9) and (3.13) vanish. We can take V U = W U = 0 without loss of generality in this case. (4) Recall that a kernel is strong (resp. weak) Feller if it maps any bounded measurable (resp. bounded continuous) function to a bounded continuous function. By Scheffé's lemma, a sufficient condition for G to be weak Feller is to have that x → g(x; y) is continuous on X for all y ∈ Y. But then (3.7) gives that G is also strong Feller by dominated convergence. (5) Note that Condition (3.7) holds when G(x; ·) is taken among an exponential family with natural parameter continuously depending on x and valued within the open set of natural parameters, in which case G is also strong Feller. We are in this situation for both Examples 1 and 2. We can now state the main ergodicity result. Theorem 8. Let P z be defined as P θ z in Definition 1. For convenience, we postpone this proof to Section 4.2. The following lemma provides a general way for constructing the instrumental functions α and φ that appear in (A-8) and in Remark 6-(1). The proof can be easily adapted from [16, Lemma 1] and is thus omitted. Lemma 9. Suppose that X = C S for some measurable space (S, S) and C ⊆ R. Thus for all x ∈ X, we write x = (x s ) s∈S , where x s ∈ C for all s ∈ S. Suppose moreover that for all x = (x s ) s∈S ∈ X, we can express the conditional density g(x; ·) as a mixture of densities of the form j(x s )h(x s ; ·) over s ∈ S. This means that for all t ∈ C, y → j(t)h(t; y) is a density with respect to ν and there exists a probability measure µ on (S, S) such that We moreover assume that h takes nonnegative values and that one of the two following assumptions holds. Convergence of the MLE. Once the ergodicity of the model is established, one can derive the asymptotic behavior of the MLE, provided some regularity and moment condition holds for going through Step 2 to Step 5, as described in Section 3.1. These steps are carried out using [16, Theorem 1] and [15, Theorem 3], written for general ODMs (1,1). The adaptation to higher order ODMs(p, q) will follow easily from the embedding of Section 4.1. We consider the following assumptions, the last of which uses V X as introduced in Definition 3 under Assumptions (A-1) and (A-2). 1 ∈ U, a closed set X 1 ⊆ X, C ≥ 0, h : R + → R + and a measurable functionφ : Y → R ≥0 such that the following assertions hold with z (i) = (x Remark 8. In the case where the observations are discrete, one usually take ν to be the counting measure on the at most countable space Y. In this case, g θ (x; y) ∈ [0, 1] for all θ, x and y and Condition (B-3)(ii) trivially holds whatever X 1 is. We have the following result, whose proof is postponed to Section 4.3. Embedding into an observation-driven model of order We further denote the successive composition of Ψ θ y0 , Ψ θ y1 , ..., and Ψ θ y k by (4.3) Ψ θ y 0:k = Ψ θ y k • Ψ θ y k−1 • · · · • Ψ θ y0 . Note in particular that ψ θ y 0:k defined by (2.8) with the recursion (2.7) can be written as (4.4) ψ θ y 0:k = Π p • Ψ θ y 0:k , Conversely, we have, for all k ≥ 0 and y 0:k ∈ Y k+1 , where we set u j = Π p+q+j (z) for −q < j ≤ −1 and u j = Υ(y j ) for 0 ≤ j ≤ k and use the convention ψ θ y 0:j (z) = Π p−j (z) for −p < j ≤ 0. 
By letting Z k = (X (k−p+1):k , U (k−q+1):(k−1) ) (see (2.4)), Model (2.1) can be rewritten as, for all (4.6) By this representation, the ODM(p, q) is thus embedded in an ODM(1, 1). This in principle allows us to apply the same results obtained for the class of ODMs(1, 1) in [16] to the broader class of ODMs(p, q). Not all but some conditions written above for an ODM(p, q) indeed easily translate to the embedded ODM(1, 1). Take for instance (A-4). By (3.5), (3.6) and (4.5), we have, for all n ∈ Z >0 , using the convention Lip θ m = 1 for m ≤ 0, Hence the same assumption (A-4) will hold for the embedded ODM(1, 1). As an ODM(1, 1), the bivariate process {(Z k , Y k ) : k ∈ Z ≥0 } is a Markov chain on the space (Z × Y, Z ⊗ Y) with transition kernel K θ satisfying, for all (z, y) ∈ Z × Y, A ∈ Z and B ∈ Y, Remark 9. Note that (A-1) is equivalent to saying that the transition kernel K θ of the complete chain admits a unique invariant probability measure π θ on Z × Y. Moreover the resulting π θ X and π θ Y can be obtained by projecting π θ on any of its X component and any of its Y component, respectively. Note also that, by itself, the process {Z k : k ∈ Z ≥0 } is a Markov chain on (Z, Z) with transition kernel R θ defined by setting, for all z ∈ Z and A ∈ Z, (4.9) R θ (z; A) = ½ A (Ψ θ y (z))G θ (Π p (z) ; dy). 4.2. Proof of Theorem 8. The scheme of proof of this theorem is somewhat similar to that of [16,Theorem 2] which is dedicated to the ergodicity of ODM(1,1) processes. The main difference is that we need to rely on assumptions involving iterates of kernels such as (E-2), (A-4) and (A-8)(iv) below, to be compared with their counterparts (A-4), (A-7) and (A-8)(iv) in [16,Theorem 2]). Using the embedding of Section 4.1, we will use the following conditions directly applying to the kernel R of this embedding. Based on these conditions, we can rely on the two following results. The existence of an invariant probability measure for R is given by the following result. Obviously, we haveπR =π, which shows that R admits an invariant probability distributionπ. Now let M > 0. Then by Jensen's inequality, we have for all n ∈ Z ≥0 , Letting n → ∞, we then obtainπ(V ∧ M ) ≤ β 1−λ ∧ M . Finally, by the monotone convergence theorem, letting M → ∞, we getπV < ∞. Proposition 12. Assume (E-1) (E-3) and (E-4). Then the Markov kernel R admits at most one unique invariant probability measure. Proof. This is extracted from the uniqueness part of the proof of [13,Theorem 6], see their Section 3. Note that our Condition (E-3) corresponds to their condition (A2) and our Condition (E-4) to their condition (A3) (their α, Q,Q and Q ♯ being ourᾱ, R,R andR). Hence it is now clear that the conclusion of Theorem 8 holds if we can apply both Lemma 11 and Proposition 12. This is done according to the following successive steps. Step 2 Prove (E-2): this is done in Lemma 13 using (A-3), (A-5), (A-6) and the fact that, for all y ∈ Y, ψ y is continuous on Z, as a consequence of Lip 1 < ∞ in (A-4). Step 5 Provide an explicit construction forR andR satisfying (4.12) and (4.13) in (E-4): this is done in Lemma 15 using (A-8)(i); Step 6 Finally, we need to establish the additional properties of thisR required in (E-4), namely, (4.14) and (4.15). This will be done in the final part of this section using the additional Lemma 16. Let us start with Step 2. Lemma 13. If for all y ∈ Y, ψ y is continuous on Z, then (A-5) and (A-6) imply (E-2). Proof. We first show that R is weak Feller, hence R m is too, for any m ≥ 1. 
Further defineF : Z × X → R by setting, for all z ∈ Z and x ∈ X, Hence, with these definitions, we have, for all z ∈ Z, Rf (z) =F (z, Π p (z)), and it is now sufficient to show thatF is continuous. We write, for all z, z ′ ∈ Z and Since z →f (z, y) is continuous for all y, we have A(z, z ′ , x ′ ) → 0 as (z ′ , x ′ ) → (z, x) by (3.7) and dominated convergence. We have B(z, x, x ′ ) → 0 as x ′ → x, as a consequence of (A-5)(a) or of (A-5)(b). HenceF is continuous and we have proved that R m is weak Feller for all m ∈ Z >0 . We now proceed with Step 5. In order to achieve Step 6, we rely on the following result which is an adaption of [13, Lemma 9]. (4.21) and that W : Then, (4.14) and (4.15) hold. Proof of (ii). We apply Theorem 10. We have already shown that (A-4), (A-1) and (A-2) hold in the proof of Assertion (i), with V X (x) = e τ |x| for any τ > 0. Assumptions (B-1) and (B-2) obviously hold for the log-linear Poisson GARCH model (see Remark 7 for the second one). It now only remains to show that, using V X as above, (B-3) is also satisfied. We set X 1 = X which trivially satisfies (B-3)(i). Condition (B-3)(ii) is also immediate (see Remark 8). Next, we look for an adequatē φ, h and C ≥ 0 so that (iii) (iv) (v) and (vi) hold in (B-3). We have, for all θ ∈ Θ and (x, We thus set h(u) = e x (i) 1 u, C = 1 andφ(y) = A + B y for some adequate nonnegative A and B so that (iii) (using Remark 7 and Υ(y) = ln(1 + y) ≤ y), (iv) and (v) follow. Then we have G θφ (x) ≤ A + B e x V X , provided that we chose τ ≥ 1. This gives (vi) and thus (B-3) holds true, which concludes the proof. We start with (A-6), with V X (x) = x. We can further set V Y (y) = y since, we then have GV Y (x) = rV X (x) so that |GV Y | V X = r. With these definitions, (3.9) leads to On the other hand, we have that, for all z = (x (−p+1):0 , u (−q+1):(−1) ∈ Z = R p+q−1 ≥0 and all n = 1, 2, . . . , where, in the right-hand side of this equation, if n − k ≤ 0, E z [X n−k ] should be replaced by x n−k and if n − k ≤ −1, E z [Y n−k ] should be replaced by u n−k . By definition of G, E z [Y n−k ] = rE z [X n−k ]. Hence, denoting x n = E z [X n ] we get that the sequence {x k : k ∈ Z ≥0 } satisfies the recursion where here c k = a k + r b k ≥ 0 for k = 1, . . . , p ∧ q, c k = a k if q < k ≤ p and c k = rb k if p < k ≤ q. Moreover we can clearly find a constant C only depending on θ such that x 1:(p∨q) ∞ ≤ C(1 + |z| ∞ ). Then applying Lemma 20 and (4.32), we get that there exists C ′ > 0 and ρ ∈ (0, 1) such that, for all n ≥ 1, Condition (A-6) follows. We conclude with the proof of (A-8). Let us apply Lemma 9 with C = (0, ∞) = X and S = {1}, µ being the Dirac mass at point 1, j(x) = (1 + x) −r and h(x; y) = Γ(r+y) y ! Γ(r) x 1+x y , which satisfies (H'-1). This leads to α and φ satisfying (A-8)(i) and (3.14) by setting, for all x, Next, for all (x, x ′ ) ∈ Z 2 , we have We thus obtain (A-8)(iii) by setting W X (x, x ′ ) = (1 ∨ r). All the other conditions of (A-8) are trivially satisfied in this case, taking W U ≡ 1. Next, we look for an adequateφ, h and C ≥ 0 so that (iii) (iv) (v) and (vi) hold in (B-3). For all θ ∈ Θ, (x, x ′ ) ∈ X 1 and y ∈ Z ≥0 , we have We set C = 0, h(s) = s andφ(y) = A + B y for some adequate non-negative A and B so that (iii) (using Remark 7 and Υ(y) = y), (iv) and (v) follow. Then we have G θ ln +φ (x) ≤ A + B r x V X . This gives (vi) and thus (B-3) holds true, which concludes the proof. Proof of (iii). 
The proof of this point is similar to the proof of Theorem 4(iii) in Section 4.4, except that Condition (4.30) is replaced by (4.33) $\int \ln^{+}(|y|)\, \pi^{\theta}_{Y}(dy) < \infty$. We already checked that $G^{\theta} V_{Y}$ is dominated by $V_{X}$ with $V_{Y}(y) = y$, implying $\int y\, \pi^{\theta}_{Y}(dy) < \infty$. Hence (4.33) holds and the proof is concluded.

4.6. Proof of Theorem 7. The following remark about Assumption (L-4) will be useful. We now turn to checking (A-6). Using (L-3)-(iv) with n = 1 and h = V_L, to complete the proof of (A-6) it only remains to check (3.8). Recall that in this model (see (2.21)), using (L-3) and defining c_i as in Assertion (ii) of Lemma 21, we can rewrite the corresponding inequality. Combining (4.36) and (4.37) yields that, for any $\epsilon > 0$, we can find M large enough such that, with V defined by (3.9), the required bound holds for all $n \in \mathbb{Z}_{>0}$ and $u_n \in \mathbb{R}_{\geq 0}$. Thus, since $\varrho < 1$, the bound holds for n large enough, and it follows that, for any $\epsilon > 0$, the lim sup is controlled, where, in the second inequality, we used Lemma 20 together with Assertion (ii) of Lemma 21. We thus get (3.8). We have therefore checked all the assumptions of Theorem 8 for the VLODM of Example 3, and this concludes the proof of Assertion (i).

Useful Lemmas
The following result is used in the proof of Theorem 10. It is proven in [18, Lemma 19]. Lemma 17. (A-4) implies that, for all $\theta \in \Theta$, there exist $C > 0$ and $\rho \in (0, 1)$ such that $\mathrm{Lip}^{\theta}_{n} \leq C \rho^{n}$ for all $n \in \mathbb{Z}_{>0}$. A byproduct of Lemma 17 is the following result, which is used in the proof of Lemma 14. Assume (A-4) holds. Let $\theta \in \Theta$ and $u_{0} \in \mathsf{U}$, and set $u_{k} = u_{0}$ for all $k \in \mathbb{Z}_{\geq 0}$. Then there exists $x_{\infty} \in \mathsf{X}$ such that, for all $z \in \mathsf{Z}$, $\psi^{\theta}_{u_{0:n}}(z)$ converges to $x_{\infty}$ as $n \to \infty$ in $(\mathsf{X}, \delta_{\mathsf{X}})$. The two following lemmas are straightforward and their proofs are thus omitted. They are used in the proofs of Theorem 5 and Theorem 7, respectively. Lemma 20. Let r be a positive integer and $(\omega, c_{1:r}) \in \mathbb{R}^{1+r}$. Let $\{x_{k} : k \in \mathbb{Z}_{\geq 0}\}$ be a sequence satisfying $x_{n} = \omega + \sum_{k=1}^{r} c_{k} x_{n-k}$ for $n \geq r$. Suppose that the polynomial $P(z) = 1 - \sum_{k=1}^{r} c_{k} z^{k}$ has no roots in $\{z \in \mathbb{C} : |z| \leq 1\}$. Then there exist $\rho < 1$ and $C > 0$ such that, for all $n \in \mathbb{Z}_{\geq 0}$, $|x_{n} - \omega/P(1)| \leq C \rho^{n} (1 + \max(|x_{0}|, \ldots, |x_{r-1}|))$.
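The stability condition in Lemma 20 is easy to check numerically. The sketch below (illustrative coefficient values, hypothetical function names) verifies that P(z) has no roots in the closed unit disk and then iterates the recursion to confirm geometric convergence to omega/P(1); this is the kind of check one would apply, for instance, to the NBIN-GARCH mean recursion with c_k = a_k + r*b_k.

```python
# Sketch of the stability check behind Lemma 20: if P(z) = 1 - sum_k c_k z^k has
# no root in the closed unit disk, then x_n = omega + sum_k c_k x_{n-k} converges
# geometrically to omega / P(1). Coefficient values below are illustrative.
import numpy as np


def roots_outside_unit_disk(c):
    """True if P(z) = 1 - c_1 z - ... - c_r z^r has all roots with |z| > 1."""
    coeffs = np.concatenate(([1.0], -np.asarray(c)))   # P in increasing powers of z
    roots = np.roots(coeffs[::-1])                     # np.roots wants decreasing powers
    return bool(np.all(np.abs(roots) > 1.0))


def iterate(omega, c, x0, n=200):
    """Iterate x_n = omega + sum_{k=1}^{r} c_k x_{n-k} starting from x0 of length r."""
    x = list(x0)
    for _ in range(n):
        x.append(omega + sum(ck * x[-k] for k, ck in enumerate(c, start=1)))
    return x


if __name__ == "__main__":
    omega, c = 0.2, [0.4, 0.3]                         # sum(c) < 1: roots lie outside the disk
    print("stable:", roots_outside_unit_disk(c))
    traj = iterate(omega, c, x0=[5.0, 4.0])
    print("limit ~", traj[-1], "  omega/P(1) =", omega / (1.0 - sum(c)))
```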
12,140.8
2021-01-01T00:00:00.000
[ "Mathematics" ]
A Detailed Chemical Study of the Extreme Velocity Stars in the Galaxy
Two decades on, the study of hypervelocity stars is still in its infancy. These stars can provide novel constraints on the total mass of the Galaxy and its dark matter distribution. However, how these stars are accelerated to such high velocities is unclear. Various proposed production mechanisms for these stars can be distinguished using chemo-dynamic tagging. The advent of Gaia and other large surveys has provided hundreds of candidate hypervelocity objects to target for ground-based high-resolution follow-up observations. We conduct high-resolution spectroscopic follow-up observations of 16 candidate late-type hypervelocity stars using the Apache Point Observatory and the McDonald Observatory. We derive atmospheric parameters and chemical abundances for these stars. We measure up to 22 elements, including the following nucleosynthetic families: α (Mg, Si, Ca, Ti), light/odd-Z (Na, Al, V, Cu, Sc), Fe-peak (Fe, Cr, Mn, Co, Ni, Zn), and neutron capture (Sr, Y, Zr, Ba, La, Nd, Eu). Our kinematic analysis shows one candidate is unbound, two are marginally bound, and the remainder are bound to the Galaxy. Finally, for the three unbound or marginally bound stars, we perform orbit integration to locate possible globular cluster or dwarf galaxy progenitors. We do not find any likely candidate systems for these stars and conclude that the unbound stars are likely from the stellar halo, in agreement with the chemical results. The remaining bound stars are all chemically consistent with the stellar halo as well.

INTRODUCTION
High-velocity (HiVel) stars are unique dynamical probes for understanding the Galaxy (e.g., Hawkins et al. 2015a). Gravitationally bound HiVel stars can be used to constrain the total mass and local escape velocity of the Galaxy (e.g., Piffl et al. 2014; Williams et al. 2017). Unbound HiVel stars can be used to study the Galaxy's dark matter halo (e.g., Gnedin et al. 2005; Gallo et al. 2022). The acceleration mechanisms for these unbound HiVel stars remain unclear (see e.g., Tutukov & Fedorova 2009; Brown 2015, and references therein). Unbound HiVel stars, which we will refer to as hypervelocity stars (HVSs), were first proposed, with a more narrow usage, by Hills (1988). (Footnote 1: This definition is agnostic of production mechanisms for the unbound stars. Some authors use hypervelocity star to label objects from the Hills mechanism, and runaway/hyper-runaway stars for other fast-moving stars not produced in this manner. Under this alternative definition, there is then a distinction between bound and unbound hypervelocity stars. Other authors have adopted high velocity and extreme velocity to be agnostic to the origin/production mechanisms.) Hills (1988) defined HVSs as unbound stars moving on radial orbits from the Galactic Center (GC), potentially having galactocentric rest-frame velocities vGRF > 1000 km s−1. He argued these stars were the product of a 3-body encounter between a stellar binary and a supermassive black hole (SMBH). This production pathway is the so-called "Hills mechanism". However, there are myriad potential origins for HVSs because of the broad definition we adopt, including accreted systems (Reggiani et al. 2022), the stellar disc, and the stellar halo, among others (see e.g., Quispe-Huaynasi et al. 2022, and references therein). HVSs can constrain the total mass of the Galaxy (Rossi et al. 2017) and the environment at the GC (Kenyon et al.
2008; Brown 2015; Rossi et al. 2017; Marchetti et al. 2022). Furthermore, some models for HVS production from the Large Magellanic Cloud (LMC) provide indirect evidence for the existence of either a massive black hole (Edelmann et al. 2005; Boubert & Evans 2016a) or an intermediate mass black hole (Gualandris & Portegies Zwart 2007) at the center of the LMC. Brown et al. (2005) provided the first observational evidence for the Hills mechanism. They observed a B-type star (labeled HVS1) with vGRF ∼ 673 km s−1 and a galactocentric distance of 107 kpc (Brown et al. 2014). This is often claimed to be the first HVS observed; however, this depends on the definition of HVS being used. This serendipitous discovery and the numerous large-scale surveys of the Milky Way's stellar populations, e.g. the Radial Velocity Experiment (RAVE, Steinmetz et al. 2006), the Apache Point Observatory Galactic Evolution Experiment (APOGEE, Majewski et al. 2017), the Sloan Digital Sky Survey (SDSS, York et al. 2000), and Gaia (Gaia Collaboration et al. 2018a), have caused dramatic growth in the study of HiVel stars. Much of this investigation is focused on HVSs (reviewed in Brown 2015), with less attention paid to the bound stars. However, recent works have shed light on these bound stars as well (e.g., Hawkins et al. 2015a; Hawkins & Wyse 2018; Reggiani et al. 2022; Quispe-Huaynasi et al. 2022). The Hills mechanism alone cannot explain all the unbound stars observed in the Galaxy. Heber et al. (2008) showed that the B-type star HD 271791 could not have originated from the GC because its flight time would be at least twice the lifespan of the star. In addition, the apparent clumping of early-type HVSs around the constellation Leo (Brown et al. 2014; Brown 2015) does not agree with the expectation that HVSs from the Hills mechanism should be isotropically distributed around the GC. Hence competing ideas on production emerged, as well as variations on the Hills mechanism that could produce HVSs (e.g., a star interacting with a massive black hole binary, Yu & Tremaine 2003). Runaway stars are one such idea, where the observed star was jettisoned from its birth star cluster and accelerated to high velocities. This could be accomplished through dynamical evolution in clusters (Poveda et al. 1967), binary interactions (Leonard & Duncan 1988) or a binary supernova explosion (Blaauw 1961). Another possibility is the so-called double-degenerate double-detonation scenario, in which two white dwarfs orbit each other and the primary undergoes a helium shell detonation followed by a carbon core detonation in a Type Ia supernova; the secondary white dwarf is then accelerated to high velocity by the resulting explosion. This mechanism has been suggested for three HVS white dwarfs observed by Shen et al. (2018) and six HVS white dwarfs observed by El-Badry et al. (2023). Other origins include tidal stripping of globular clusters (Capuzzo-Dolcetta & Fragione 2015), satellite dwarf galaxies (e.g., Pereira et al. 2012; Boubert & Evans 2016a), or merging galaxies (e.g., Abadi et al. 2009; Pereira et al. 2012; Helmi et al.
2018).These HiVel stars could also originate in the stellar halo and be subsequently dynamically heated through a merger event.We refer the reader to Tutukov & Fedorova (2009) and Brown (2015) and references therein for a more comprehensive list of possible acceleration mechanisms.Even more production pathways exist for bound HiVel stars because the energies required are less extreme compared to the unbound stars.These production pathways for HVSs are often difficult to distinguish from one another entirely; however, some progress can be made studying the possible origins of observed HVSs and their spatial distributions across the Galaxy (Brown 2015;Hawkins & Wyse 2018). The small sample size of confirmed HVSs is a fundamental barrier to both disentangling the plethora of formation pathways proposed for these stars and their application to study the Galaxy.The small sample is both a property of their intrinsic rarity and our ability to detect these stars.The re-view by Brown (2015) estimates the sample size of confirmed HVS is ∼ 20 based on prior literature.Distinguishing production mechanisms on the basis of ejection velocity would require 50-100 (Sesana et al. 2007;Perets et al. 2009) HVSs.Applications of HVSs also can require much larger samples (e.g., Gallo et al. 2022, requires up to 800 HVSs to constrain the DM halo shape).Many studies have produced candidate HVSs following the discovery in Brown et al. (2005).Boubert et al. (2018) compiled a catalog of HVS candidates in the literature, finding over 500.Boubert et al. (2018) reexamined this catalog of candidates using Gaia DR2 measurements whenever possible, because of the uniform treatment of data and the improvements in astrometric precision, finding N ∼ 40 had a probability of being bound to the Galaxy below 50%.This sample only had 1 late type star2 present.A slew of new HVS candidates have been discovered following Gaia DR2 and DR3 (e.g., Bromley et al. 2018;Hattori et al. 2018;Marchetti et al. 2019;Raddi et al. 2021;Li et al. 2021;Marchetti 2021;Igoshev et al. 2023), the majority of which are oriented towards either late type stars or white dwarfs, which had not been readily sampled before Gaia DR2 (see e.g., Boubert et al. 2018).These developments in turn have spurred interest in characterizing these candidate HVSs and other HiVel stars (e.g., Hawkins & Wyse 2018;Reggiani et al. 2022;Quispe-Huaynasi et al. 2022).These studies use chemodynamic approaches to constrain the origins of these candidate HVSs.Regardless of whether the objects are truly bound or not, constraining the origin of the sample of HVS candidates is interesting because of the diverse range of phenomena that can produce these HiVel stars.Hawkins & Wyse (2018) finds their sample is comprised of halo stars, while Reggiani et al. (2022) and Quispe-Huaynasi et al. (2022) find large fractions (∼ 50%, and ∼ 86%, respectively) of their samples are consistent with an accreted origin. This study aims to expand the number of well characterized extreme velocity stars using candidates from the literature, in a similar vein as Hawkins & Wyse (2018) and Reggiani et al. (2022).We set out to take ground based observations of 16 candidate hyper velocity stars to more precisely constrain their radial velocities.We then chemically characterize them so that we may place constraints on their likely origin.This chemical characterization has seen success in Hawkins & Wyse (2018) and Reggiani et al. 
(2022).With our sample of 16 stars, we substantially enlarge the pool of extreme velocity stars with chemical abundances.In Section 2.1 we summarize our target selection.Section 2.2 details the data acquisition and reduction.Our methods for measuring the atmospheric parameters and chemical abundances are provided in Sections 3 and 4 respectively.A description of the kinematic analysis is given in Section 5.The results are presented in Section 6 and discussed in Section 7. Finally a summary is given in Section 8. Target Selection The goal of this work is to constrain the origins and production mechanisms for these HVS candidates.In order to achieve this goal, we start by selecting HVSs to follow up from various existing literature sources (Bromley et al. 2018;Hattori et al. 2018;Marchetti et al. 2019), and from Astronomical Data Query Language (ADQL) queries by the authors using Gaia DR2/DR3 data shown in Appendix A. Each method uses different selection criteria therefore we will summarize each.For brevity, we omit the various quality cuts imposed by each study and encourage the interested reader to see the original work for more details.Bromley et al. (2018) and Marchetti et al. (2019) used three dimensional (3D) velocities and orbit integration with a Milky Way (MW) gravitational potential.The two studies differ in selection criteria and the masses used for the MW potential (we refer the reader to Section 2.5 of Bromley et al. (2018) for more details on the differences between the works) and consequently may find different HVS candidates.Kenyon et al. (2018) have found radial and tangential velocities can be used in lieu of full 3D velocities as a reliable method for finding HVSs depending on the star's distance from the Sun.Tangential velocities are more useful for nearby stars (i.e., ≲ 10 kpc from the Sun) because of the lower uncertainties in parallax, while radial velocities (RVs) are useful at further distances.Hattori et al. (2018) finds 30 candidate HVSs within 10 kpc from the Sun using only the tangential velocities.Hattori et al. (2018) note that their approach is complementary with Marchetti et al. (2019), as they use different quality cuts on the astrometric data, allowing them to potentially sample a different group of stars.Finally, we have found candidates based on galactocentric radial velocities from Gaia DR3 data.The coordinate system transformations were done using Equation 1 from Hawkins et al. (2015a). Follow-up Observations The final target selection consisted of 16 late-type HVS candidates predominately in the northern hemisphere.This sample complements the data from Reggiani et al. (2022), who used a similar sample size of candidate HVSs in the southern hemisphere to study said stars' chemistry.In addition to the program stars, we observe stars from the Gaia Benchmark catalog (Heiter et al. 2015;Jofré et al. 2014;Blanco-Cuaresma et al. 2014a) and Bensby et al. (2014) catalog.These standard stars assist in refining the data reduction, verifying the data analysis, and calibrating derived abundances.Lastly, we reanalyze some data from Hawkins & Wyse (2018) and Reggiani et al. (2022) to assess the impact of methodological differences between the studies. High-resolution spectra were collected using two instruments: the ARC Echelle Spectrograph (ARCES) on the 3.5m Apache Point Observatory Telescope (Wang et al. 2003), and the Tull Echelle Spectrograph (TS, Tull et al. 1995) on the 2.7m Harlan J. 
Smith Telescope at the McDonald Observatory.ARCES observations completely sample 3800 − 9200 Å with a resolving power R = λ/∆λ ∼ 31500.TS observations used slit 4 with a resolving power of ∼ 60000 and a wavelength coverage of ∼ 3500 − 10000 Å with interorder gaps towards the redder wavelengths.For both instruments, standard calibration exposures were also obtained (i.e., biases, flats, and ThAr lamp).Raw data was reduced in the usual fashion (i.e., bias removal, flat-fielding, cosmic ray re-moval, scattered light subtraction, optimal extraction, and wavelength calibration) using pyRAF/IRAF3 . To normalize the spectra, we fit a pseudo continuum using cubic splines and iterative sigma clipping.Orders are then combined using a flux weighted average.We discard 50 pixels on either end of each order because of the poor signal due to the blaze function.We compared the normalization of the Gaia Benchmark stars we observed to a reference normalization (Blanco-Cuaresma et al. 2014a) to fine tune the sigma clipping parameters.The signal-to-noise ratio (SNR) was estimated at the end of the normalization and order stitching by calculating the standard deviation in the normalized flux with values between 1 and 1.2 over a 60 Å window4 in the middle of the chip at 5200 Å.Assuming Gaussian noise, we can transform this into a robust estimate for the true noise using a halfnormal distribution5 .The upper limit on the flux mitigates the impact of hot pixels and cosmic rays6 , while the lower limit avoids confusing absorption features with noise.Our median SNR over the range 5170−5230 Å was 28 per pixel for ARCES, 32 per pixel for TS.iSpec (Blanco-Cuaresma et al. 2014b) was used to perform the final bad pixel removal7 , radial velocity corrections and the barycentric correction.The radial velocity was determined using a cross correlation with an atomic line list.These results agreed very well with the Gaia DR3 radial velocities.The barycentric corrections were found using the built in iSpec calculator.A summary of the observational parameters can be found in Table 1.Data from Hawkins & Wyse (2018) and Reggiani et al. (2022) was processed in an identical manner. Two targets, Star 1 (Gaia DR3 source id 1400950785006036224) and Star 6 (Gaia DR3 source id 1042515801147259008) were observed with both ARCES and TS providing a check on the validity of our reduction method. The initial sample had a contaminant (Gaia DR3 source id 4150939038071816320).We believe this was from problems with the observed spectrum from the first version of Gaia DR2 leading to an erroneously large RV measurement, with |Vr| > 500 km s −1 .Our RV measurements indicate this is not an extreme velocity star, Vr ∼ −21.6±1 km s −1 which agrees with the Gaia DR3 estimate of Vr ∼ −22.3±3.9 km s −1 , with a total velocity similar to the Sun.We conclude it is not an extreme velocity star and is omitted from our data tables.It was processed in the same manner as the science sample and provides another check on our methodology.We compared our radial velocity measurements with the estimates from Gaia in Figure 1.We find good agreement between the Gaia DR2 estimates and our measurements, with the Gaia measurements being on average 7.5 km s −1 larger than the values we measured from our follow up observations. External Data We use external data for our targets to aid in the isochrone analysis in Section 3.2 and the kinematic analysis in Section 5. We use astrometric data from Gaia DR3 Gaia Collaboration et al. 
(2021). Rather than using parallax to estimate distance, we use the distance estimates from Bailer-Jones et al. (2021) because more than a third of our stars have relative parallax errors greater than 10%.

We use the following photometric data (when available):
(i) Gaia DR2 G band magnitude (Gaia Collaboration et al. 2016, 2018b; Evans et al. 2018)
(ii) 2MASS J, H, Ks bands and associated uncertainties (Skrutskie et al. 2006)
(iii) AllWISE W1 and W2 bands and associated uncertainties (Wright et al. 2010)
(iv) SkyMapper u, v, g, r, i, z bands and associated uncertainties (Wolf et al. 2018)
(v) Pan-STARRS g, y bands and associated uncertainties (Chambers et al. 2016; Magnier et al. 2020)
(vi) SDSS u, z bands and associated uncertainties (Blanton et al. 2017)

These bands must pass quality cuts. These cuts are identical to the recommendations put forward by each survey, with the exception of SDSS, where we allowed a bad pixel within 3 pixels of the centroid; otherwise none of our stars would have usable SDSS photometry. Since the SDSS u-band is the bluest band we use, retaining it is important for constraining the metallicity and extinction. All photometric data used in our subsequent fitting is provided in Table 2. We also use extinction estimates from BAYESTARS (Green et al. 2019).

ATMOSPHERIC PARAMETERS

One of the primary goals of the work is to measure the atmospheric properties (i.e., effective temperature, surface gravity, metallicity, microturbulence) of these HVS candidates. High quality measurements of these properties are necessary to infer chemical abundances, and thus constrain the origins and production mechanisms of these fast stars. Our spectra for the HVS candidates are low to mid SNR and appear metal poor based on visual inspection of the spectra. These data properties make a purely spectroscopic analysis challenging, as low SNR limits the number of weak absorption features we can use, and the metal poor nature implies that we must be careful about non-local thermodynamic equilibrium (NLTE) effects. To achieve the highest quality atmospheric parameters, we develop a workflow that combines spectroscopic, astrometric and photometric information simultaneously to find a self consistent model for the star, similar to Section 3 of Reggiani et al. (2022); however, we choose to use a spectral synthesis approach rather than the line-by-line synthesis used in that study due to the low SNR of our spectra. Our spectroscopic analysis is done using methods and models which assume local thermodynamic equilibrium (LTE); departures from these assumptions can arise in the metal poor regime and may be substantial (see e.g., Frebel et al. 2013), however the photometric information is less affected by this (see e.g., Frebel et al.
2013, and references therein). The photometric data also bypasses the problems of low SNR spectra, while being sensitive to both the effective temperature (Teff) and surface gravity (log g). However, the photometric metallicity ([Fe/H]) signal is weaker and heavily reliant on blue bands and extinction estimates. On the other hand, the spectroscopic fitting is more sensitive to the metallicity and microturbulent velocity (ξ) while largely agnostic about the presence of extinction. Hence, we measure Teff/log g using photometry and metallicity/ξ from spectroscopy. We employ the python code LoneStar for the spectroscopic analysis (see Section 3.1 for details) and the python package Isochrones for the photometric analysis (see Section 3.2 for details).

The step-by-step fitting process is as follows:
(i) Fit the spectrum with LoneStar to find initial guesses for all atmospheric parameters (i.e., Teff, log g, [Fe/H], ξ).
(ii) Fit the photometric data listed in Table 2 with Isochrones using values from LoneStar as a guess.
(iii) Re-fit the spectrum using LoneStar holding Teff/log g fixed from the photometric fit in step (ii). A guess for ξ is created using the surface gravity relationship from Kirby et al. (2009), their Eq. 2.
(iv) Re-fit the photometric data using Isochrones with the updated values from step (iii).

Table 2. A portion of the photometric data used for isochrone fitting. For compactness, we only display a subset of the columns. We assume an error of 0.005 mag for the Gaia G band magnitude. When values are not present or do not pass our quality cuts, a nan value is provided. The Gaia photometry corresponds to Gaia DR2, with the values from Gaia DR3 producing no changes. The J, σJ, H, σH, Ks, σKs columns correspond to the 2MASS survey. W1 and W2 are photometry from AllWISE. We use SkyMapper DR2 data, Pan-STARRS DR1 data, and SDSS IV data when available. A full machine readable version of the table is available online.

Typically it takes a couple of iterations to reach termination (i.e., the metallicity is consistent or stable in both methods). Convergence in metallicity is preferable but not always achievable. Differences of up to 0.2 dex in metallicity were found for some stars between the photometric and spectroscopic fits. This is in line with Bochanski et al. (2018), who find the mean spectroscopic and photometric metallicities of two clusters to be discrepant at the 0.15 dex level. Often this occurred for fits with anomalously high extinction, where a higher dust content compensated for a higher metal content. There are known shortcomings in photometric models of stars as well. In the event the two metallicity measurements do not agree within the total errors (i.e., the internal errors added in quadrature with the external errors), we use the spectroscopic metallicity. We reason that this represents the closest approximation to the real value because spectral lines are sensitive to the bulk abundance changes the metallicity represents. Metallicity and microturbulence are also not strongly correlated for those fits. In contrast, the photometric fits for metallicity show a strong degeneracy with extinction estimates even with strong priors on the dust, because our blue band photometry does not place strong enough constraints on the isochrone fit. We found this discrepancy between the photometric and spectroscopic metallicity was also present for the test star we analyzed from Reggiani et al. (2022), with a difference of ∼ 0.15 dex. However, if we consider only the spectroscopic metallicity, we find the same measurement as Reggiani et al. (2022).
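To make the loop above concrete, the following python sketch shows one way the iteration could be orchestrated. The two fitting callables are placeholders for user-written wrappers around the LoneStar and Isochrones fits (they are not the actual package interfaces), and the microturbulence guess uses the Kirby et al. (2009) surface-gravity relation, whose coefficients are quoted here from memory and should be checked against their Eq. 2.

# Minimal sketch of the iterative spectro-photometric fitting loop (steps i-iv).
# `fit_spectrum` and `fit_photometry` stand in for wrappers around LoneStar and
# Isochrones respectively; they are hypothetical, not the real package APIs.

def xi_guess_from_logg(logg):
    """Microturbulence guess from log g (Kirby et al. 2009, Eq. 2), in km/s."""
    return 2.13 - 0.23 * logg


def fit_star(spectrum, photometry, fit_spectrum, fit_photometry,
             tol=0.1, max_iter=5):
    # Step (i): unconstrained spectroscopic fit for initial guesses.
    spec = fit_spectrum(spectrum)            # dict with teff, logg, feh, xi
    phot = None
    for _ in range(max_iter):
        # Steps (ii)/(iv): photometric fit seeded with the current parameters.
        phot = fit_photometry(photometry, guess=spec)
        # Step (iii): re-fit the spectrum with Teff/log g fixed from photometry.
        spec = fit_spectrum(
            spectrum,
            fixed={"teff": phot["teff"], "logg": phot["logg"]},
            xi_guess=xi_guess_from_logg(phot["logg"]),
        )
        # Terminate once the two metallicity estimates are consistent.
        if abs(spec["feh"] - phot["feh"]) < tol:
            break
    # If the metallicities never agree within the errors, the spectroscopic
    # value is preferred (see the discussion above).
    return {"teff": phot["teff"], "logg": phot["logg"],
            "feh": spec["feh"], "xi": spec["xi"]}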
Internal errors for each parameter are derived from the method used to measure said parameter. Teff and log g are measured using photometry, and we use the posteriors from Isochrones as their internal uncertainties. ξ is measured solely from spectroscopy. The internal error for ξ from the posterior was small for all stars, and we took the largest value of 0.03 km s−1 as the assumed error for the entire sample. As discussed below, the external errors for ξ are roughly an order of magnitude larger, so this choice does not materially change the results. Lastly, the metallicity is measured in both the photometric and spectroscopic approaches. We prefer and use the spectroscopic value because, as previously stated, we have more confidence in its accuracy. The internal error for the metallicity was taken as the quadrature sum of the internal errors from the photometric and spectroscopic posteriors.

To evaluate the efficacy of our atmospheric parameter estimation, we compare our fits to the literature values for the standard stars. Since this study focuses on metal poor objects, we limit our comparison to objects with [Fe/H] ≲ −0.5 dex. We find the following median offsets and dispersions: ∆Teff = 181 K, σTeff = 40 K, ∆log g = 0.07 dex, σlog g = 0.13 dex, ∆[Fe/H] = 0.06 dex, σ[Fe/H] = 0.04 dex, ∆ξ = 0.01 km s−1, and σξ = 0.28 km s−1. The external error for each parameter is taken as the standard deviation of the difference, yielding σext [Fe/H] ∼ 0.08 dex, σext Teff ∼ 40 K, σext log g ∼ 0.13 dex, and σext ξ ∼ 0.28 km s−1. For the literature comparison our sample included HD 122563, which is a Gaia benchmark star. We elected to use a microturbulence value of 1.8 km s−1 rather than the value of 1.13 km s−1 listed in Jofré et al. (2014). We calculate this revised value using the Gaia-ESO relationship. We prefer our revised value as the literature value appears anomalous even relative to the other microturbulence values reported in the same work.

LoneStar

LoneStar is a python code written by T. Nelson to perform stellar atmospheric and abundance fitting for high resolution spectra. The goal of this package was to combine the benefits of traditional synthesis based approaches (e.g., BACCHUS; Masseron et al. 2016) with a Bayesian framework to improve the error analysis and work at lower SNR. The code is organized into two modules, abund and param. The latter will be detailed here, with additional details for the abundance fitting provided in Section 4.
The user designates an interpolator, a collection of wavelength regions of interest, which atmospheric parameters should be varied, and what priors to use for the Bayesian regression.The fitter then uses the Markov Chain Monte Carlo (MCMC) python package emcee11 to maximize the posterior probability distribution.We typically require 18-24 walkers and around 3000 iterations to converge.We attempt to account for the following sources of error when minimizing the data: flux errors, interpolator reconstruction errors, and synthesis errors.To accomplish this, we introduce an error softening term for remaining unaccounted for terms to improve performance which is simultaneously fit along with the atmospheric parameters.The following parameters can be varied or fixed: T eff , log g, [Fe/H], ξ and rotational broadening (V sin i).V sin i is applied on-the-fly using a convolution recipe from Gray (2008), which assumes a limb darkening coefficient of ϵ = 0.6.We use the wavelength sampling of ARCES for the atmospheric parameter fitting for a homogeneous analysis.This results in a downsampling of the data from TS by a factor of 2; however we have found this makes a negligible difference to the values fit for various test cases (including all stars from Nelson et al. 2021).Models are originally created with a wavelength sampling 3x higher than the TS data and subsequently downsampled to the ARCES wavelength space. The Payne (Ting et al. 2019) was used as the interpolator.We synthesized a library of ∼ 11000 spectra to train this artificial neural network with a single hidden layer containing 300 nodes12 .To create our library of synthetic spectra we randomly sampled the following intervals: 3900 K ≤ T eff ≤ 7000 K, 0 dex ≤ log g ≤ 5 dex, −3 ≤ [Fe/H] ≤ 1.For each combination of T eff , log g, and [Fe/H] we create three synthetic spectra by setting ξ equal to 0, 1.5, and 2.6 km s −1 .All synthetic spectra were constructed from MARCS model atmospheres (Gustafsson et al. 2008) using TURBOSPECTRUM (Plez 2012) for radiative transfer.MARCS models are calculated in 1D LTE.If the surface gravity is ≥ 3.0 dex, planeparallel models are used, and spherical models otherwise.If a combination of T eff , log g, and [Fe/H]lies between MARCs models, an interpolation is done to create the specified model atmosphere.The atmospheric composition uses solar abundances from Grevesse et al. (2007) scaled by metallicity for most elements.MARCs models use separate abundance estimates for C, N, and O (see Gustafsson et al. 2008, Section 4 for more information).We assumed the same composition for C, N and O as the models.We use Gaia-ESO line list version 5 (Heiter et al. 2019) for atomic transitions.The line list includes hyperfine structure splitting for Sc I, V I, Mn I, Co I, Cu I, Ba II, Eu II, La II, Pr II, Nd II, Sm II.We also include molecular data for CH (Masseron et al. 2014), C2, CN, OH, MgH (T.Masseron, private communication), SiH (Kurucz 1992), TiO, FeH, and ZrO (B.Pelz private communication). 
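As an illustration of the kind of Bayesian regression described above, the sketch below samples Teff, log g, [Fe/H], ξ, and an error-softening term with emcee against a generic emulator. The emulator call synth, the specific variance-inflation form of the error softening, and the walker and step counts are assumptions made for this example rather than LoneStar's actual implementation; the uniform prior ranges simply mirror the training grid quoted above, and the Gaussian priors adopted on later iterations are described in the next paragraphs.

import numpy as np
import emcee

# `synth(theta)` is a stand-in for a trained emulator (e.g., a Payne-like network)
# returning a normalized model spectrum on the observed wavelength grid; it is a
# hypothetical placeholder, not the real LoneStar interface.

def log_prior(theta):
    teff, logg, feh, xi, ln_f = theta
    # Uniform priors mirroring the synthetic training grid.
    if 3900 < teff < 7000 and 0 < logg < 5 and -3 < feh < 1 \
            and 0 < xi < 3 and -10 < ln_f < 1:
        return 0.0
    return -np.inf

def log_likelihood(theta, flux, flux_err, synth):
    model = synth(theta[:4])
    # Error-softening term ln_f inflates the variance to absorb unmodelled
    # error sources (flux, interpolation, and synthesis errors).
    var = flux_err ** 2 + np.exp(2.0 * theta[4]) * model ** 2
    return -0.5 * np.sum((flux - model) ** 2 / var + np.log(2.0 * np.pi * var))

def log_posterior(theta, flux, flux_err, synth):
    lp = log_prior(theta)
    if not np.isfinite(lp):
        return -np.inf
    return lp + log_likelihood(theta, flux, flux_err, synth)

def run_fit(flux, flux_err, synth, nwalkers=24, nsteps=3000):
    p0 = np.array([5000.0, 2.5, -1.0, 1.5, -3.0])  # Teff, log g, [Fe/H], xi, ln_f
    pos = p0 + 1e-3 * np.abs(p0) * np.random.randn(nwalkers, p0.size)
    sampler = emcee.EnsembleSampler(nwalkers, p0.size, log_posterior,
                                    args=(flux, flux_err, synth))
    sampler.run_mcmc(pos, nsteps, progress=True)
    flat = sampler.get_chain(discard=nsteps // 2, flat=True)
    # Best fit = median; 1-sigma bounds from the 16th and 84th percentiles.
    return np.percentile(flat, [16, 50, 84], axis=0)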
Wavelength masking is vital for an accurate atmospheric parameter fitting process because poorly modeled regions or problematic lines can alter the minimization.To begin, we limit the usable data to the range of 4500 − 6800 Å.The limit on the blue side arises from a combination of reduced detector sensitivity, low source flux from our HVS candidates because we targeted late-type stars, and difficulties inherent to accurately placing the continuum in regions of dense metal absorption.This makes accurate continuum placement for metal rich stars challenging in the blue.Hence to be uniform in our treatment of program and standard stars we exclude data below 4500 Å.The data showed wavelength calibration issues past 8000 Å for some stars, therefore we excluded two lines at ∼ 8500 Å from the line selection in Hawkins & Wyse (2018).With these two lines removed, the reddest line in our line selection for iron in the atmospheric parameter fitting was at ∼ 6750 Å, so an upper limit of 6800 Å was used for the synthesis.Next we exclude features from "bad" pixels which can arise from the following: leftover cosmic rays13 , scattered light features, or dead pixels.With this cleaned spectrum, we then mask wavelengths outside the vicinity of iron lines used in previous studies on metal poor stars by Hawkins & Wyse (2018) and Ji et al. (2020).The line core is taken as the local minimum closest in wavelength to the line data.The extent of the wavelength window around each line is determined by a first derivative test, however adopting a small ∆λ window of 0.5 or 1 Å around each iron line does not change the results. We use Bayesian regression to estimate the atmospheric parameters.Ordinary regression determines the best fit through minimizing the differences between the the error weighted sum of squared residuals (SSR) between the data and the model.Bayesian regression builds on this approach by including terms to represent the behavior of the model parameters based on previous knowledge.These additional terms are called priors.We initially adopt uninformative priors (i.e., uniform distributions) on all parameters.We limit the temperature to a range of 4000 to 6500 K based on the spectral types of the program stars.On subsequent iterations, where we fix T eff and log g, we adopt Gaussian priors for [Fe/H] and ξ.The mean for [Fe/H] is taken as the output from Isochrones.The mean for ξ is determined by inputting the Isochrones surface gravity estimate into the Kirby et al. (2009) relationship.We adopt standard deviations of 0.1 dex and 0.3 km s −1 for [Fe/H] and ξ, respectively.This choice represents our increased confidence in values of the parameters without being overly restrictive. Upon finishing a fit, Lonestar writes the chain file for the MCMC, a small record of the parameter fits (including fixed and freed quantities), and some diagnostic plots to visualize how the fit performed.The best fit is the median.The upper and lower 1σ errors are the 84th and 16th percentiles, respectively. Isochrones Isochrones is a package to fit MESA Isochrones and Stellar Tracks (MIST, Dotter 2016) models using the MultiNest wrapper PyMultinest (Buchner et al. 2014) to photometric data.Isochrones also uses Bayesian regression for data fitting, so the user can specify initial parameter values and priors for those values.If priors are not specified, Isochrones adopts default distributions, we refer the interested reader to their package documentation for these. Following the general procedure from Reggiani et al. 
( 2022), we input the following data: atmospheric parameters (T eff , log g, [Fe/H]), the median photo-geometric distance estimates from Bailer-Jones et al. ( 2021), extinction estimates from the dustmaps python wrapper for BAYESTARS (Green et al. 2019), and the photometric data for our HVS candidates described in Section 2.3.The dustmaps provided by BAYESTARS are 3D if the stars are inside the modeled volume. In cases where the star resides outside the modeled volume a 2D dustmap which integrates the modeled dustmap is used instead.All of our input quantities require error estimates.For the atmospheric parameters, we adopt 100 K, 0.5 dex, and 0.1 for T eff , log g, and [Fe/H], respectively.We assume an error floor of 0.01 mag for σA v computed by BAYESTARS.We adopt an error floor of 5 mmag for the photometry because it improved the fitting performance, similar to Reggiani et al. (2022) 14 . We adopt the default priors for all quantities aside from metallicity and distance.For metallicity, we use a uniform prior between -4 and 0.5.For distance, we use a Gaussian prior centered on the median photo-geometric distance from Bailer-Jones et al. ( 2021) and a standard deviation which is the difference in the upper and lower 1σ errors divided by two.We restrict the extinction to a range of 0 to Av + σA v + 0.1, where Av is the estimate produced from BAYESTARS, σA v is the 1σ error estimate from BAYESTARS.The extinction in Isochrones is largely constrained by blue band photometry, however most of our HVS candidates lacked good photometry in the blue.Hence constraining the extinction to realistic values was necessary.In the absence of these tight constraints, the dust can deviate substantially from the dustmaps estimates.This deviation could be caused by imperfect models or data problems, where the dust value could compensate for these shortcomings.We note that the uncertainties derived from Isochrones do not include any systematics.The fitting process only uses one set of models and the uncertainties reported are solely the posteriors from the Bayesian distributions. ABUNDANCES Once the atmospheric parameters are determined, we measure abundances for up to 22 elements with the abund module of LoneStar.The following elements are measured 15 : Na, Mg, Al, Si, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Sr, Y, Zr, Ba, La, Nd, and Eu.This list includes members from the light/Odd-Z, Neutron Capture, α elements, and Fe-peak nucleosynthetic families. The LoneStar abund module synthesizes spectra at different [X/H] ratios.We synthesize spectra at [X/H] = 0, ±0.3, ±0.6 in this fitting process.A model atmosphere is created using the best fit values determined from atmospheric parameter fitting.Synthetic spectra are created in the same manner as the atmospheric parameters with two exceptions.First, the spectra will have the abundance of the element of interest altered.Secondly, the spectra are created using a radiative transfer code rather than interpolation from a precomputed grid.During the initial abundance fitting, we do not assume an α enhancement based on the metallicity in order to be agnostic to the origins of these stars.The abundances change negligibly (median(∆[X/H]) < 0.02) when the average α abundances (i.e., Mg, Si, Ca, Ti 16 ) from the first iteration are fed into the analysis. 
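For each absorption feature, the per-line fitting described in the next section amounts to reading a best-fit abundance and its uncertainty off a χ2 curve sampled at the synthesized [X/H] offsets. The generic sketch below uses a parabolic interpolation of that grid and the ∆χ2 = 1 rule (see e.g., Coe 2009) for the 1σ error; this is a standard approach and not necessarily the exact interpolation LoneStar performs.

import numpy as np

def abundance_from_chi2(offsets, chi2):
    """Locate the best [X/H] offset and its 1-sigma uncertainty from a chi^2 grid.

    offsets : abundance offsets at which spectra were synthesized,
              e.g. [-0.6, -0.3, 0.0, 0.3, 0.6]
    chi2    : chi^2 of each synthetic spectrum against the observed line

    A parabola is fit to the grid; its minimum gives the abundance and its
    curvature gives the 1-sigma error via delta-chi^2 = 1.
    """
    a, b, c = np.polyfit(offsets, chi2, 2)
    if a <= 0:  # no local minimum: the line is insensitive over this range
        return None, None
    best = -b / (2.0 * a)
    sigma = 1.0 / np.sqrt(a)  # chi^2 rises by 1 at best +/- 1/sqrt(a)
    return best, sigma

# Example: a line whose chi^2 is minimised slightly below the scaled-solar value.
offsets = np.array([-0.6, -0.3, 0.0, 0.3, 0.6])
chi2 = np.array([30.0, 14.0, 10.0, 18.0, 38.0])
print(abundance_from_chi2(offsets, chi2))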
To measure the abundance for the species of interest, the user provides a line selection. The exact wavelengths of the theoretical and observed line will differ, primarily because of imperfect wavelength calibration and other data reduction artifacts. Such discrepancies can be significant if uncorrected (see e.g., Jofré et al. 2017, Section 4.1). We use the same line core and wing search algorithm as described for the Fe lines in Section 3.1. Then abundances for individual lines are found through χ2 minimization between the observation and the synthetic spectra (see footnote 17). We estimate the 1σ uncertainties using the width of the χ2 curve (see e.g., Coe 2009). We neither downsample nor mask pixels in this step. For each line, plots of the data and synthesis are provided for visual inspection of the fit quality. During the fit process, a line may be rejected for lack of sensitivity over the [X/H] range used (i.e., no change in the χ2 values), the automatic windowing failing, inadequate sampling of the line in the data based on the window limits, and a few other pathologies. If a line is rejected based on this automatic assessment, the line data and cause of the rejection are recorded in a tracker object. These are saved for the user to review later. For any line fit, a quality flag is created indicating if there are problems with the fit (e.g., a reduced χ2 greater than 3 or less than 0.5). Once all lines for a species are either fit or rejected, the abundances and quality flags are tabulated and output for the user. The line list selection for all elements and all stars is given in Table 3. We use a different line selection for metal poor stars (taken as [Fe/H] < −0.5) and metal rich stars. This is primarily a precaution against potential NLTE effects. In addition to modeling concerns, some lines may become measurable in the absence of the dense absorption caused by higher metallicities (e.g., towards the blue end of the spectrum).

15 We attempted to measure Li. The only star which had a detection of Li was the contaminant Gaia DR3 source id 4150939038071816320, which was a dwarf. The remainder of our sample were giants.
16 Ti is included here because an α enhancement in the MARCS models will include Ti.
17 If no local minimum is found using the input range of [X/H], the abundance range is adjusted to be centered on the abundance with the smallest χ2 in the test value set and the fit is repeated. The smallest χ2 value may occur on the upper or lower side of the abundance range. This process repeats up to 5 times, after which we conclude we are unable to adequately model the observation.

We use internal quality cuts to help filter out problematic abundance measurements from specific absorption features (e.g., Co from star 6 due to noise). These quality cuts will vary on a line-by-line and star-by-star basis; therefore the final line selection for each star may be slightly different. We require all absorption lines used for abundance determination to be at least a 3σ detection, where we use a local SNR estimate with the relation from Cayrel (1988) to approximate the uncertainty in the equivalent width based on the continuum placement. We supplement our automatic quality flagging with visual inspection of all lines in our selection for 7 stars of varying atmospheric parameters and SNR.
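The 3σ detection criterion can be written compactly. The sketch below uses a Cayrel (1988)-style estimate of the equivalent-width uncertainty from the local SNR; the prefactor of ∼1.5 and the assumption of roughly three pixels per resolution element are illustrative choices, not the exact values adopted in this work.

import numpy as np

def cayrel_ew_error(fwhm, pix_size, snr, prefactor=1.5):
    """Approximate equivalent-width uncertainty from continuum noise.

    Cayrel (1988)-style estimate: sigma_EW ~ prefactor * sqrt(FWHM * pixel) / SNR.
    The prefactor of ~1.5 is the commonly quoted value; the exact constant used
    in this work is an assumption here. Lengths in Angstroms, result in Angstroms.
    """
    return prefactor * np.sqrt(fwhm * pix_size) / snr

def is_detected(ew, fwhm, pix_size, snr, nsigma=3.0):
    """3-sigma detection criterion applied to a candidate absorption line."""
    return ew >= nsigma * cayrel_ew_error(fwhm, pix_size, snr)

# Example: a 60 mA line observed at R ~ 31500 near 5200 A with a local SNR of 30.
fwhm = 5200.0 / 31500.0          # ~0.17 A resolution element
pix_size = fwhm / 3.0            # assume ~3 pixels per resolution element
print(is_detected(ew=0.060, fwhm=fwhm, pix_size=pix_size, snr=30.0))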
Abundances reported are taken as the median of the lines that pass quality controls. The internal errors are estimated as the standard error (i.e., std(abundance)/√Nlines). If only one line is present, we take the uncertainty from the χ2 fit of that line as the internal error. To propagate the uncertainties from the atmospheric parameters we employ a sensitivity analysis in a similar fashion to Hawkins et al. (2020a) and Nelson et al. (2021). For each parameter, we perturb the best fit model and derive abundances for this perturbed model atmosphere. The difference between the abundances from the best fit and perturbed model is the error introduced from that parameter. These abundance errors are added in quadrature with the line-by-line statistical errors for [X/H] to determine the total error for an abundance measurement. One limitation of this process is that it does not account for covariances in uncertainties between the atmospheric parameters.

The Fe line selection between the atmospheric and abundance fitting is different. For the atmospheric parameters, we use the union of lines from Hawkins & Wyse (2018) and Ji et al. (2020), whereas the abundances only use lines from the former. The change in line selection comes from distinct goals in the param and abund analysis. The former was tasked with creating a starting point, so casting a wide net was desirable. The latter was a refinement of this fitting process, and so we decided to use the line list the author was more familiar with. This amounts to ∼ 70 fewer lines being used for Fe in the abundance determination compared to the metallicity fit. This change, along with quality selection cuts, produces an offset between the metallicity and iron abundance of −0.03 ± 0.07 for the entire sample and −0.01 ± 0.07 if we only consider stars with metallicity below −0.5.

NLTE corrections for Ca (Mashonkina et al. 2007), Co (Bergemann et al. 2010), Fe (Bergemann et al. 2012b,a), Mg (Bergemann et al. 2015, 2017), Mn (Bergemann & Gehren 2008), Si (Bergemann et al. 2013), and Ti (Bergemann 2011; Bergemann et al. 2012b) are accounted for on a line-by-line basis using online tables from MPIA. Star 5 (Gaia DR3 source id 1396963577886583296) lies outside the atmospheric parameter range of these published values; therefore we do not attempt to apply a correction for this star.

Table 3. A portion of our line selection for each element, its atomic properties and the absolute abundance we derive for each absorption feature. A full machine-readable version, including abundances for each star for each line, is available online. The lines used will vary between stars because of the quality checks. χ is the excitation potential in eV, log gf is the logarithm of the oscillator strength f multiplied by its statistical weight g, and log(ϵ) is the absolute abundance (after subtracting the Solar abundances) derived for this line. The solar abundances adopted are from Grevesse et al. (2007).

DYNAMICAL ANALYSIS

We employ a dynamical analysis of these HVS candidates to answer two questions: 1) Which, if any, of the HVS candidates are unbound or marginally bound? 2) For the unbound or marginally bound objects, what systems might be progenitors for these fast moving stars?

We use the python package galpy for this analysis. For each orbit, we used the radial velocities from our observations, the Bailer-Jones et al. (2021) photo-geometric distances, and the remaining astrometry from Gaia DR3. In general, there was very good agreement between the Bailer-Jones et al. (2021) distances and those fit from Isochrones.
To construct our covariance matrix, Σ, for the uncertainty analysis, we use the uncertainties and covariances for the right ascension (RA), declination (Dec), proper motion in RA (pmra), and proper motion in Dec (pmdec) from Gaia, and assume the RV and Bailer-Jones et al. (2021) distances are uncorrelated. We propagate measurement uncertainties to our orbit integration and other derived kinematic quantities through Monte Carlo sampling of the multivariate normal distribution N(⃗µ, Σ), where ⃗µ is the measured value for each quantity. This sampling is repeated 1000 times.

We use MWPotential2014 (Bovy 2015) to approximate the Galactic potential. This potential uses a Navarro-Frenk-White halo with a scale length of 16 kpc (Navarro et al. 1996). A Miyamoto-Nagai potential (Miyamoto & Nagai 1975) with a radial scale length of 3 kpc and a vertical scale height of 280 pc is used for the disc. Finally, the bulge has a power-law density profile with an exponent of -1.8 and is exponentially tapered at 1.9 kpc. We assume current values for the solar position and kinematics of R0 = 8.122 kpc, z0 = 20.8 pc (GRAVITY Collaboration et al. 2018a; Bennett & Bovy 2019), and a solar motion of (U⊙, V⊙, W⊙) = (12.9, 245.6, 7.78) km s−1 (Drimmel & Poggio 2018; GRAVITY Collaboration et al. 2018b; Reid & Brunthaler 2004).

Kinematics

The kinematics of the candidate HVSs are used to determine whether these objects are gravitationally bound or unbound to the Galaxy, as well as where these stars may have been produced. This production location in turn constrains how these stars were accelerated. To assess whether these candidate HVSs are bound, we use their present day kinematics along with a model of the Milky Way's gravitational potential from Williams et al. (2017). We note the Milky Way model used in Williams et al. (2017) differs from that used in Section 5. In Figure 2 we show total velocity (vtotal) as a function of spherical distance from the GC (r) for our candidate HVS stars (labeled by their alias), along with a Milky Way escape velocity curve with 1σ uncertainties based on the model and uncertainties from Williams et al. (2017). We calculate vtotal and r using the photo-geometric distances from Bailer-Jones et al. (2021), our ground-based RV measurements, and the remaining astrometry from Gaia DR3. The uncertainty band on the Williams et al. (2017) model is created using Monte Carlo sampling of their model parameter uncertainties. From this work, we see that only star 7 is likely unbound from the Galaxy, with stars 5 and 8 being marginally unbound (at the 1σ level). We have marked these stars in red in subsequent chemical plots to aid with their identification.

We used catalogs of globular clusters and Milky Way satellites in galpy to determine if there were any clear candidate progenitors for stars 5, 7, and 8. These catalogs for globular clusters and Milky Way satellites are based on Vasiliev (2019) and Fritz et al. (2018), respectively.
We integrated these systems using a similar framework as in the previous section; however, we only integrated back 300 Myr. A star traveling at 100 km s−1 in the radial direction would cover a distance of 30 kpc in this period, well outside the distances we expect our HVSs to have traveled either from the outer Galaxy inward or vice versa. This choice also helps minimize potential inaccuracies from the uncertainty in the input phase space parameters (x, y, z, vx, vy, vz) and the Galactic potential. For all systems examined, the point of closest approach for our objects is at least 10 times the half-light radius of the candidate origin system. Doubling the integration length to 600 Myr does not change the results. Extending the integration to 1.5 Gyr, the closest approach for stars 7 and 8 is ∼ 1 kpc from the star systems examined. Interestingly, stars 7 and 8 share the same system of closest approach in their orbits (NGC 6205), and the same second closest system (NGC 6341). Star 5 fares worse, with the closest approach being Draco II at 3.5 kpc, and the second closest system being NGC 6229 at ∼ 8 kpc.

We conducted a second round of kinematic analysis using a modified potential. Bland-Hawthorn & Gerhard (2016) estimate a dark matter halo mass 50% larger than the one used by default in MWPotential2014. In addition, galpy is capable of modeling the impact of the LMC's gravitational potential. galpy also provides a built-in way to estimate the escape velocity from different symmetric potentials. Due to the LMC breaking cylindrical symmetry, we could only find an escape velocity estimate using the heavier dark matter halo potential. We find the escape velocity curve is unchanged from the Williams et al. (2017) estimate. The system of closest approach for star 8 is NGC 5897, at a distance of ∼ 280 pc roughly ∼ 1.25 Gyr ago.

Figure 2. Displayed is the current spherical position and total velocity for each star in our sample. The numbers for each star correspond to their alias from Table 1. The error bars show the propagated uncertainties from the Monte Carlo sampling of the astrometry, RV, and distance uncertainties. The dark blue line represents the median escape velocity assuming the spherical model from Williams et al. (2017), with the lighter blue contours corresponding to the 1σ range, propagating the uncertainties in parameters from Williams et al. (2017). Only star 7 is definitively unbound, star 5 is marginally bound, and star 8 could be unbound based on the overlap in the error bars.

Stellar parameters and abundances

Atmospheric stellar parameters and chemical abundance measurements are displayed in Table 4. The full table includes both LTE and NLTE-corrected measurements when applicable.

Comparison to Prior Works

As part of our analysis, we observed stars from the Gaia benchmark stars (Jofré et al. 2014; Heiter et al. 2015) and stars from Bensby et al. (2014) to assess the differences in the atmospheric and abundance fits. We find a median offset of ∼ −181 K in effective temperature. Our analysis of the stars from Hawkins & Wyse (2018) shows a similar offset of ∼ −187 K when compared with the previous spectroscopic temperatures. We suspect this offset arises from the discrepancy in photometric and spectroscopic temperatures. We offer two lines of evidence to support this hypothesis. First, in Table 3 from Hawkins & Wyse (2018), photometric temperatures from Gaia DR2 are provided, which show an offset of ∼ −173 K. Second, Figure 2 from Frebel et al.
(2013) finds an offset of ∼ −200 K between the photometric and initial spectroscopic temperatures (i.e., before NLTE considerations).Assuming we can apply the general relationship from very metal-poor objects to less metal-poor stars, this would explain the offset in temperature.Additional discussion on the impact of NLTE on effective temperature can be found in e.g., Korn et al. (2003) and Mucciarelli & Bonifacio (2020). Offsets in the other atmospheric parameters are seen for the standard star sample.Comparing the atmospheric parameters for the stars in the Hawkins & Wyse (2018) sample we see ∆ log g = −0.9/− 0.52 dex; ∆[Fe/H] = −0.44/− 0.27 dex; ∆ξ = 0.1/ − 0.24 km s −1 for the LoneStar + Isochrones / LoneStar only fits respectively.These differences are an order of magnitude larger than those for the standard stars.These offsets could arise from differences in the treatment of NLTE, the low SNR of the data, and the fitting methods employed.Hawkins & Wyse (2018) gauges the influence of NLTE effects by redoing their fits using the photometric temperature instead, finding offsets of up to 0.3 dex in metallicity. Abundance differences between Bensby et al. (2014); Battistini & Bensby (2015Bensby ( , 2016) ) and this study are shown in Figure 3. Three elements lack literature comparisons: Copper (Cu), Lanthanum (La), and Europium (Eu).Copper and Lanthanum measurements were not available from the literature studies we referenced.No suitable measurements of Europium were found in our observations after filtering through our quality criteria. There are several plausible sources for these abundance offsets; we will consider differences in atmospheric parameters and NLTE corrections.NLTE corrections do not have a consistent affect on the abundances.For Ti and Co, they increase the offset relative to a pure LTE comparison by ∼ 0.1 dex, while the rest have negligible changes (i.e., ∆ < ±0.05 dex).To gauge the significance of the atmospheric parameters, we use the stars from Hawkins & Wyse (2018) as a proof of concept.After controlling for changes in bulk metallicity (i.e., using [X/Fe] rather than [X/H]) we find comparable offsets in Mg, Si, Ca, Zn, Sr, Nd, Y as observed in the data.Still further discrepancies could arise from the atomic data, the line selection, the visual inspection, continuum placement, and so on. When comparing our abundance measurements to the literature abundances, we do not use NLTE corrections from Section 4 unless otherwise specified because several of the comparison studies (e.g., Hawkins & Wyse 2018) only compute LTE abundances.In addition, we do not rescale our data based on the reference stars from Bensby et al. (2014); Battistini & Bensby (2015, 2016) because this rescaling cannot be done uniformly for all the literature samples we compare our data with. Literature Sources For context in Figure 4 and Figure 5, we include measurements from studies on the thin and thick disc (Bensby et al. 2014;Battistini & Bensby 2015, 2016), the bulge (Bensby et al. 2010;Gonzalez et al. 2015), the inner halo (Nissen & Schuster 2010), metal poor halo stars (Yong et al. 2013;Roederer et al. 2014), the LMC (Van der Swaelmen et al. 2013), and Fornax (Letarte et al. 2010).We have also included data from other studies on the chemistry of hyper/high velocity stars (Hawkins & Wyse 2018;Reggiani et al. 2022).We are comparing LTE abundances to one another in these plots. 
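The metallicity-offset correction used here and in the following comparison is a simple rescaling, sketched below with made-up numbers: an offset in [Fe/H] between two studies propagates directly into every [X/H] difference, so differencing on the [X/Fe] scale removes that common term.

import numpy as np

def xfe_offsets(delta_xh, delta_feh):
    """Convert abundance offsets relative to a literature study from the [X/H]
    scale to the [X/Fe] scale, removing the part of the offset that is simply a
    difference in adopted metallicity: [X/Fe] = [X/H] - [Fe/H]."""
    return np.asarray(delta_xh) - delta_feh

# Illustrative (made-up) numbers: a star measured 0.3 dex lower in [Fe/H] than in
# a literature study shows ~0.3 dex offsets in [X/H] even if [X/Fe] agrees.
delta_feh = -0.30
delta_xh = [-0.28, -0.35, -0.25]          # e.g. Mg, Si, Ca offsets in [X/H]
print(xfe_offsets(delta_xh, delta_feh))   # -> [0.02, -0.05, 0.05] in [X/Fe]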
Table 4. A portion of the atmospheric parameters and abundances for the HVS candidates. Stars are labeled by their alias from Table 1. The [Fe/H] value is the abundance determined for iron rather than the metallicity from the atmospheric parameter fitting; however, these values are nearly identical (|∆| < 0.03). ξ is the microturbulence velocity. We only provide internal errors for Teff and log g as the external errors are provided in the text and identical for each star. The internal errors for ξ (i.e., σξ) were negligible compared to the external error, therefore we do not list them. Abundances which lacked measurements are given a nan. The error in each abundance measurement is the total error described in Section 4. [X/Fe] values are provided in the digital version of this table. A full machine readable version of the table is available online.

The disagreement between our measurements and the literature is reduced when we only examine LTE abundances and remove offsets caused by differences in metallicity measurements (i.e., use ∆[X/Fe] instead of ∆[X/H]). Remaining differences are likely a consequence of differences in the Teff and ξ parameters. We find differences of ∼ 150 K in Teff and 0.4 km s−1 in ξ.

Chemical Abundances

The goal of this work is to use chemical tagging (Freeman & Bland-Hawthorn 2002) to constrain the origins of our late-type HVS candidates. Chemical evolution of the interstellar medium (ISM) is a fundamental ingredient for chemical tagging because most stellar abundances reflect the composition of their progenitor ISM. Broadly, a generation of stars will form with some initial composition. As the stars age, their interior composition will change from fusion; however, the surface composition remains roughly constant over their lifespan and hence can act as a fossil record of the progenitor system. This modified stellar composition is then dispersed into the ISM through some flavour of supernova (type Ia, type II, etc.), stellar winds, or other mechanisms (e.g., kilonovae). The chemical evolution of the ISM depends on the availability of new materials (i.e., the amount and type of feedback) as well as the mixing efficiency of said materials with the extant gas. The type and timescales of the feedback are dependent on the mass, and to a lesser extent the metallicity, of the stellar population (see e.g., Nomoto et al. 2013, and references therein).

For the first ∼ 1 Gyr, massive stars are thought to be the primary contributor to the chemical evolution of the ISM owing to their relatively short lifetimes compared to low mass stars (see e.g., Gilmore et al. 1989, Section 1.3). Hence at low metallicities (e.g., [Fe/H] ≲ −1 for the solar neighborhood), the abundance patterns of the Galaxy reflect the yields from massive stars. These yields have a metallicity dependence (see e.g., Nomoto et al. 2013).
After this period, feedback from lower mass stars (e.g., AGB winds, type Ia supernovae) becomes increasingly important as more low mass stars reach the point at which they can expel their matter into the surrounding environment. Since low mass stars are far more numerous than higher mass stars (e.g., Kroupa & Weidner 2003), eventually the feedback of materials into the ISM from the lower mass stars will tend to dominate the present day ISM composition in areas of continuous star formation within the Galaxy. The metallicity at which low mass stars start becoming important is dictated by the star formation rate. The total mass of star forming matter and the initial mass function (along with the metallicity distribution function) play a similarly pivotal role in what feedback mechanisms are possible to subsequently modify the ISM.

6.6 α elements: Mg, Si, Ca, Ti

The α elements are formed through the consecutive addition of helium nuclei (α-particles, see e.g., Burbidge et al. 1957). Titanium, while not formed in the same pathway (see e.g., Curtis et al. 2019), often follows the same trends, so it is frequently grouped with the α elements (e.g., Wheeler et al. 1989; Weinberg et al. 2019). At solar metallicities, [α/Fe] ∼ 0. The inflection point at [Fe/H] ∼ −1 is referred to as the 'knee'. The inner halo, bulge, thin disc, and thick disc all have distinct locations in the [α/Fe] vs [Fe/H] plane, corresponding to their evolution; however, the boundaries between the regions are not always well defined (see e.g., Feltzing & Chiba 2013; Hawkins et al. 2015b). Further, there are also signatures for accreted systems, which show a 'knee' at metallicities lower than -1 dex.

Results for individual α elements are shown in Figure 5. We also show the combined abundance pattern in Figure 4, where we have taken [α/Fe] as the median of the [Mg/Fe], [Si/Fe], and [Ca/Fe] abundances measured for that star.

Figure 4 caption (fragment). If not all three elements have measurements, we take the average instead. All abundances shown are taken to be in LTE. For reference, in each panel we also show the abundance ratios of the thin and thick disc (Bensby et al. 2014; Battistini & Bensby 2015, 2016, in grey), the high α halo (Nissen & Schuster 2010, in green), the low α halo (Nissen & Schuster 2010, in bright green), the metal poor halo (Roederer et al. 2014, in light blue; Yong et al. 2013, in tan), and the bulge (Bensby et al. 2010; Gonzalez et al. 2015, in brown). We include abundances from two contemporary studies on the abundances of hyper velocity candidates as well, Hawkins & Wyse (2018) in blue, and Reggiani et al. (2022).

Overall, we find relatively good agreement between the chemical patterns of our stars and the inner stellar halo. This is consistent with the picture from Hawkins & Wyse (2018). However, we do not see a significant low α component, in contrast to Reggiani et al. (2022) and Quispe-Huaynasi et al. (2022), which are both follow-up studies on the chemistry of HVS candidates.
We examined all stars with [α/Fe] ≲ 0.3 to check if these objects were consistent with accreted origins and the so-called 'low α halo' from Nissen & Schuster (2010). Nissen & Schuster (2010) show that the low and high α halos cluster differently in Toomre space and in [Ni/Fe] vs [Na/Fe] space. The usefulness of the Toomre space clustering is hampered by our selection criteria of fast moving stars. In the Toomre space, shown in Figure 6, our entire sample of candidate HVSs appears more consistent with the fast moving low α halo compared to the slower high α halo; however, most of our sample appears chemically consistent with the high α halo. This is also found when we compare the [Ni/Fe] and [Na/Fe] abundances shown in Figure 6, finding no stars which are unambiguously in the low α halo cluster.

Figure 5 caption (fragment). For reference, in each panel we also show the abundance ratios of the thin and thick disc (Bensby et al. 2014; Battistini & Bensby 2015, 2016, in grey), the high α halo (Nissen & Schuster 2010, in green), the low α halo (Nissen & Schuster 2010, in bright green), the metal poor halo (Roederer et al. 2014, in light blue; Yong et al. 2013, in tan), and the bulge (Bensby et al. 2010; Gonzalez et al. 2015, in brown). We include abundances from two contemporary studies on the abundances of hyper velocity candidates as well, Hawkins & Wyse (2018).

Figure 6 caption (fragment). For ease of reference, each star in the paper is labeled by the number in its alias (i.e., star 7 is labeled 7). Data from Nissen & Schuster (2010) is included for comparison. The high α, low α, and thick disc data from Nissen & Schuster (2010) are labeled HA (blue circle), LA (red square), and TD (black cross), respectively. Bottom: A comparison of the [Na/Fe] and [Ni/Fe] abundances for the candidate HVSs. Aliases are used for points similar to Figure 6. Data from Nissen & Schuster (2010) is included for comparison and labeled as in Figure 6. The abundances used and compared to in this panel use LTE.

6.7 Light/Odd-Z elements: Na, Al, V, Cu, Sc

Odd-Z elements are produced in a variety of nucleosynthetic pathways. Sodium (Na) and Aluminum (Al) can both experience strong NLTE effects at low metallicities (see e.g., Kobayashi et al. 2020, and references therein), making their interpretation difficult. Figure 5 displays our measurements of Na, Al, V, Cu, and Sc in [X/Fe] vs [Fe/H] space (black circles). As before, we see that our stars are consistent with the stellar halo. Unlike Reggiani et al. (2022), we do not see any evidence in [Na/Fe] of our stars being consistent with a dwarf or accreted galaxy. We find [Al/Fe] values which are over 1 dex greater than in the other literature we compared to. This difference in [Al/Fe] might be a result of the different line selection between our study and Yong et al. (2013), Roederer et al. (2014), and Reggiani et al. (2022), all of which use lines close to ∼ 4000 Å, a section of data that we discarded during the reduction (see Section 3). We instead use the 5557.06, 6696.02, and 6698.67 Å lines to measure the Aluminum abundance. These are weak lines in our program stars, so we are only able to measure Al in a handful of spectra. Applying NLTE corrections for Na does not meaningfully change the offsets between our values and Reggiani et al. (2022) for our line selection, so we conclude that most of our stars are likely not accreted or debris from a satellite galaxy.

6.8 Fe-peak elements: Cr, Mn, Co, Ni, Zn

The Fe-peak elements are primarily synthesized in Type Ia and core collapse supernovae (e.g., Nomoto et al. 2013).
These elements largely trace the iron abundance. As such, most of these elements are expected to have a roughly flat trend of [X/Fe] against metallicity. We plot our results for [Cr, Mn, Co, Ni, Zn/Fe] (black circles) in Figure 5. We find further evidence that these stars are likely from the halo based on their agreement with Nissen & Schuster (2010) and Hawkins & Wyse (2018).

6.9 Neutron Capture Elements: Sr, Y, Zr, Ba, La, Nd, Eu

The neutron capture elements are commonly split into those which are primarily produced in the slow neutron capture process (s-process) and the rapid neutron capture process (r-process). The s-process elements (Sr, Y, Zr, Ba, La, and Nd) are formed in AGB stars and then returned to the ISM through stellar winds. By contrast, r-process elements are formed with rapid neutron capture. The exact nature of what processes drive this is still an open area of research, with binary neutron star mergers being one such candidate (van de Voort et al. 2020). The neutron-capture element abundance ratios for our stars (black circles) for [Sr, Y, Zr, Ba, La, Nd, Eu/Fe] as a function of metallicity are shown in Figure 5. Of the neutron capture elements we measure, all are members of the s-process group except Eu. These elements are consistent with the stellar halo.

DISCUSSION

In our sample, we find only one star that seems unbound based on the adopted escape velocity from Williams et al. (2017). Further, there is one marginally bound star and one star which could potentially be marginally bound based on the overlap in its vtotal uncertainties and the escape velocity uncertainties.

... and M12). We found that none of our stars were enhanced in Al while simultaneously being depleted in Mg, and thus they are not consistent with being second generation globular cluster stars. The abundance plots are shown in Figure 7. Cabrera & Rodriguez (2023) provide 50% and 90% credible phase space regions for stars dynamically ejected out of 148 globular clusters in the Milky Way. For stars 7 and 8 we find no suitable clusters. Star 5 is at a much greater distance and therefore the proper motion and sky location are not as constraining on this star's origins. However, based on the second kinematic analysis, it could be plausible for star 8 to originate from NGC 5897. There is some probability, based on the Cabrera & Rodriguez (2023) models, for star 8 to be located in its present region. Star 8 also has a chemical makeup which agrees well with the previous characterization of NGC 5897 from Koch & McWilliam (2014). Koch & McWilliam (2014) measure a metallicity of ∼ −2.04 and [α/Fe] ∼ 0.34 based on 7 stars from the cluster. We measure star 8 with a metallicity of ∼ −1.97 and [α/Fe] ∼ 0.33, which are within the measurement errors. This conclusion is hampered by our [Na/Fe] measurement, which is ∼ 0.7 compared to their highest value of ∼ 0.6. They note that NLTE corrections at these parameter ranges might account for a difference of -0.05 dex. Higher SNR follow-up observations of star 8 would be useful to confirm this possible origin. These observations would also be useful for measuring more of the elements useful for studying populations of stars in globular clusters (e.g., O, Si, Al).

LMC

There is interest in detecting hyper velocity stars from the LMC, which could offer indirect evidence for the existence of a massive black hole in the center of the LMC (Edelmann et al. 2005). Boubert & Evans (2016b) provide on sky distributions for LMC stars ejected through the Hills' mechanism and Boubert et al.
(2017) create phase space distributions for LMC runaway stars. Our on-sky distribution of stars is in the lowest or second lowest density contours for both scenarios.

As pointed out in Reggiani et al. (2022), due to the lack of LMC stars over the metallicity range covered by our objects, ruling out that these stars came from the LMC on the basis of chemistry alone is difficult. However, we see no positive evidence (i.e., the chemical patterns are not consistent with LMC origins) favoring the LMC over the stellar halo based on chemical abundances.

Low α halo or Accreted system

As discussed in Section 6.6, there are no stars in our sample that appeared to simultaneously satisfy the Toomre clustering, [α/Fe] ≲ 0.3, and the [Ni/Fe] vs [Na/Fe] grouping that Nissen & Schuster (2010) found for the low α halo. Star 8 appears to have a lower [α/Fe] value but has very high [Na/Fe] and [Ni/Fe], making it incompatible with the Nissen & Schuster (2010) low α halo origin. Therefore we can rule out accretion as an origin for these HVSs. We can also rule out origins from a satellite galaxy with Fornax-like chemistry due to the disagreements in [Na/Fe] seen in Figure 5 and in [α/Fe] seen in Figure 4.

Galactic Center

Boubert & Evans (2016b) provide an on-sky distribution for stars ejected from the GC. Our stars fall outside the main density contours of this figure. This is supported by our orbit integration, which finds that stars 5, 7, and 8 have not originated within 7 kpc of the GC.

Star 7

This star has been previously found to be likely unbound and characterized in Bromley et al. (2018) and Du et al. (2018), in accord with our result; however, there is disagreement over the origin. Du et al. (2018), using LAMOST abundances, find an [α/Fe] of ∼ 0.3, which they use as evidence for this star having an accreted origin. We find [α/Fe] ∼ 0.4, typical of the stellar halo. Caution should be exercised when comparing these measurements in detail. The line selection, or region selection, used in Du et al. (2018) differs from ours, as we do not include Ti in our measurements. Bromley et al. (2018) use orbit integration to conclude this star may come from the LMC. While we are able to replicate the timing they give for the disc crossing, we do not find a similar result for the LMC close approach. We find no clear dynamical progenitor for this star and conclude it was likely born in the 'in situ' stellar halo.

SUMMARY

Hyper velocity stars are rare and useful objects to study, both due to their unclear origin and their potential applications for the understanding of the Galaxy. Since the first discovery by Brown et al. (2005), the subfield has seen rapid growth fueled by renewed interest and large scale surveys, in particular the Gaia mission. Brown (2015) estimated a total of 20 confirmed HVSs, and Boubert et al. (2018) find ∼ 500 candidate HVSs. Boubert et al. (2018) found that only 1 of the likely unbound HVS candidates was a late-type star.
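The boundedness assessment referred to in the Discussion above, and summarized below, reduces to comparing each star's total Galactocentric speed with the local escape speed while keeping track of both uncertainties. The following is a minimal sketch of that bookkeeping; the speeds, error bars, and the single flat escape-speed value are illustrative placeholders, not the Williams et al. (2017) model or the actual measurements.

```python
import numpy as np

def classify_boundedness(v_total, v_total_err, v_esc, v_esc_err):
    """Classify a star as 'unbound', 'marginally bound', or 'bound'.

    v_total, v_total_err : total Galactocentric speed and its 1-sigma error (km/s)
    v_esc, v_esc_err     : local escape speed and its 1-sigma error (km/s)
    The overlap rules below are illustrative, not the exact criteria of the paper.
    """
    if v_total - v_total_err > v_esc + v_esc_err:
        return "unbound"            # faster than escape even within the errors
    if v_total + v_total_err > v_esc - v_esc_err:
        return "marginally bound"   # error ranges overlap the escape speed
    return "bound"

# Illustrative values only; in practice the escape speed is evaluated at each
# star's Galactocentric radius, so the comparison is done star by star.
for name, v, dv in [("star A", 640.0, 30.0), ("star B", 540.0, 40.0), ("star C", 380.0, 20.0)]:
    print(name, classify_boundedness(v, dv, v_esc=550.0, v_esc_err=40.0))
```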
In this context, we aim to (1) confirm (or not) the HVS status of 16 candidate HVSs taken from the literature, (2) derive their chemical abundance pattern, (3) derive their dynamical properties, and (4) use these pieces of information to constrain their origins. We perform follow-up observations of 16 candidate hyper velocity stars from the literature to confirm their RVs and measure their chemical abundances. We used a combination of the Tull Echelle Spectrograph on the 2.7m HJST telescope at the McDonald Observatory and the ARCES spectrograph on the 3.5m APO telescope. We find good agreement between the RV measurements from Gaia and our ground-based observations. We use the full 6D kinematic information to assess whether these extreme velocity stars are likely unbound or not on the basis of the Milky Way escape velocity model from Williams et al. (2017). We confirm that one star (Gaia DR3 source id 1383279090527227264) is very likely unbound, and find two (Gaia DR3 source id 1396963577886583296; Gaia DR3 source id 1478837543019912064) which might be marginally bound (with the details depending on the exact model of the local escape speed used). The remainder appear high velocity (with v_total > 300 km s−1, and all but one with v_total > 350 km s−1) but bound. We use orbit integration to search for a possible dynamical origin of these stars. By comparing the orbital trajectories of the (marginally) unbound HVSs with those of known globular clusters (Vasiliev 2019) and satellite galaxies (Fritz et al. 2018), we attempt to determine the progenitor. We find that none of the marginally bound or unbound sources have a clear progenitor.

We measure chemical abundances for up to 22 species. These elements are Mg, Si, Ca, Ti (α group), Fe, Cr, Mn, Co, Ni, Zn (Fe-peak), Na, Al, V, Cu, Sc (odd-Z group), and Sr, Y, Zr, Ba, La, Nd, Eu (neutron capture). These elements span the main nucleosynthetic families. We find our sample is largely consistent with the abundance trends for the inner halo (see e.g., Figure 5). They do not appear to originate from globular clusters, the LMC, or the Galactic center. Unlike Reggiani et al. (2022) and Quispe-Huaynasi et al. (2022), we do not find any stars chemically consistent with the low [α/Fe] typical of accreted systems. There are three possible causes for this difference: (1) small number statistics/chance, (2) differences in abundance analysis, and (3) differences in target selection of HiVel star candidates. For (1), it is possible that with small samples (N ∼ 10-20 stars) we, by chance, sample different populations of HiVel stars. Additionally, the stellar parameter and abundance analysis methods differ between the various literature studies, which could lead to differences. Finally, the target selection of this study uses a combination of HVS candidates from four sources, as described in Section 2.1. Reggiani et al. (2022) also use Hattori et al. (2018) in their initial selection. However, Quispe-Huaynasi et al. (2022) use their own selection process, and Reggiani et al. (2022) use Herzog-Arbeitman et al. (2018) in addition to Hattori et al. (2018).

The lack of accreted stars in our sample is intriguing in light of recent results such as Mackereth & Bovy (2020), which propose that the majority (70%) of the halo is accreted. It is possible that the in situ halo stars we see were formed in the Milky Way and heated by an early accretion event (e.g., Belokurov et al.
2020). A possible origin for these fast moving stars is that they are the metal-weak tail of the splash distribution. Many of these stars are on retrograde orbits, as can be seen in Figure 6. This agrees with the observation from Belokurov et al. (2020). [Al/Fe] has been argued to distinguish between accreted and in situ stars (e.g., Hawkins et al. 2015b; Carrillo et al. 2022). On the basis of our high [Al/Fe] measurements, it is plausible that these stars formed in the Milky Way; however, these observations should be taken with caution. Beyond the difficulties with measuring Al in our moderate-SNR sample and the differences in Al measurements between our study and the studies we compare with (see Section 6.7), there are also potential problems with comparing IR measurements of Al with optical ones for metal poor stars (e.g., Carrillo et al. 2022).

To our knowledge, Gaia DR3 source id 1383279090527227264 is one of the first late-type HVSs with high resolution spectra and detailed chemical abundances. Our measurements suggest it is a halo star, ruling out several other proposed origin scenarios. Lacking a known progenitor star cluster, we conclude it was likely accelerated from the Galactic halo. Gaia DR3 is likely to reveal many new late-type HVS candidates for follow-up work.

Figure 1. Displayed is a comparison of the radial velocities measured in this study versus those from Gaia DR2. Generally we find very good agreement between the two studies. Error bars are included and are smaller than the typical point size. Dashed lines indicate a radial velocity of 0 (km s−1).

model used above. The top two systems change for Star 8 and are unchanged for stars 5 and 7.

Figure 3. Differences in abundances derived for stars in Bensby et al. (2014); Battistini & Bensby (2015, 2016) and this study. Elements which use NLTE corrections in this calibration plot are marked with asterisks below their label.

Figure 4. [α/Fe] abundance measurements and errors as a function of metallicity are plotted. Data points from the HVS candidates found in Section 5 are shown in red; the remainder of the sample is shown in black. [α/Fe] is taken as the median of [Mg/Fe], [Si/Fe], and [Ca/Fe]. If not all three elements have measurements, we take the average instead. All abundances shown are taken to be in LTE. For reference, in each panel we also show the abundance ratios of the thin and thick disc (Bensby et al. 2014; Battistini & Bensby 2015, 2016, in grey), the high α halo (Nissen & Schuster 2010, in green), the low α halo (Nissen & Schuster 2010, in bright green), the metal poor halo (Roederer et al. 2014, in light blue; Yong et al. 2013, in tan), and the bulge (Bensby et al. 2010; Gonzalez et al. 2015, in brown). We include abundances from two contemporary studies on the abundances of hyper velocity candidates as well, Hawkins & Wyse (2018) in blue, and Reggiani et al. (2022) in violet. Abundances for the LMC from Van der Swaelmen et al. (2013) (teal) and Fornax from Letarte et al. (2010) (gold) are also shown for additional context.

Figure 5. [X/Fe] abundance measurements and errors as a function of metallicity are plotted. Data points from the HVS candidates found in Section 5 are shown in red; the remainder of the sample is shown in black. All measurements compared are in LTE. For reference, in each panel we also show the abundance ratios of the thin and thick disc (Bensby et al.
2014;Battistini & Bensby 2015, 2016, in grey), the high α halo(Nissen & Schuster 2010, in green), the low α halo(Nissen & Schuster 2010, in bright green), the metal poor halo(Roederer et al. 2014, in light blue)(Yong et al. 2013, in tan), and the the bulge(Bensby et al. 2010; Gonzalez et al. 2015, in brown).We include abundances from two contemporary studies on the abundances of hyper velocity candidates as well,Hawkins & Wyse (2018) in blue, andReggiani et al. (2022) in violet.Abundances for the LMC from Van der Swaelmen et al. (2013) (teal) and Fornax from Letarte et al. (2010) (gold) are also shown for additional context. Figure 6 . Figure6.Top: A Toomre diagram for the HVS candidates in this paper.For ease of reference, each star in the paper is labeled by the number in its alias (i.e., star 7 is labeled 7).Data fromNissen & Schuster (2010) is included for comparison.The high α, low α, and thick disc data fromNissen & Schuster (2010) are labeled HA (blue circle), LA (red square), and TD respectively (black cross).Bottom: A comparison of the [Na/Fe] and [Ni/Fe] abundances for the candidate HVSs.Aliases are used for points similar to Figure6.Data fromNissen & Schuster (2010) is included for comparison and labeled as in Figure6.The abundances used and compared to in this panel use LTE. Figure 7 . Figure7.Abundance ratios for the candidate HVSs compared to those known to be observed in globular clusters.Background data is taken fromMasseron et al. (2019).If the star lacked 1 of the measurements in a given panel it was excluded from that panel. Table 1 . Hawkins & Wyse 2018;Reggiani et al. 2022) used in this study.A complete machine-readable version is available online.The astrometry is from Gaia DR3.We elect to use the distances fromBailer-Jones et al. (2021)in lieu of the values from parallax inversion from Gaia DR3 because more than a third of our science stars have relative parallax errors greater than 10%.Radial velocities are derived from the ground based follow-up observations along with the signal-to-noise ratio (SNR).The literature source of the candidate hyper velocity star is provided in the reference column.Stars used for calibrations or previous detailed in other studies (e.g.,Hawkins & Wyse 2018;Reggiani et al. 2022)are not included in our data tables.We use the abbreviation McD to indicate the observations were taken at the McDonald Observatory, and APO for the Apache Point Observatory.The ADQL entry in the source column indicates the targets were acquired from the ADQL query listed in Appendix A. , except where described otherwise in Section 4. in violet.Abundances for the LMC from Van der Swaelmen et al. (2013) (teal) and Fornax from Letarte et al. (2010) (gold) are also shown for additional context. (2010)(gold) are also shown for additional context.
17,581.2
2024-07-02T00:00:00.000
[ "Chemistry", "Physics" ]
and Our main theorem establishes the uniqueness of the common fixed point of two set-valued mappings and of two single-valued mappings defined on a complete metric space, under a contractive condition and a weak commutativity concept. This improves a theorem of the second author. 6(A,C) + 6(C,B) for all A B C in B(X). We say that a subset A of X is the limit of a sequence {A of nonempty subn sets of X if each point a in A is the limit of a convergent sequence {a }, where to the bounded sets A and B respectively, then the sequence {6(An,Bn)} verge converges to (A,B). This lemma was proved in [2]. Now let F be a mapping of X into B(X) We say that F is continuous at the point in X if whenever {x is a sequence of points in X converging to x n the sequence {Fx in B(X) converges to Fx in B(X).If F is continuous at n each point x in X, we say that F is a continuous mapping of X into B(X).A point z in X is said to be a fixed point of F if z is in Fz. For a selfmap I of (X,d), the authors of [3], extending the results of [2] and [4], defined F and I to be weakly commuting on X if (Flx,IFx) !max{(Ix,Fx), diam IFx} (I.I) for all x in X.Two commuting mappings F and I clearly commute, but two weakly )] [0, x/(ax + ah+l)]--IFx but for any x in X we have 6(Flx, IFx) x/(x + ah+l) x/a (Ix, Fx) Note that if F is a single-valued mapping, then the set {IFx} consists of a single point and therefore diam {IFx} 0 for all x in X. Condition (I.I) therefore reduces to the condition given in [5], i.e. d(Flx, IFx) !d(Ix, Fx) (1.2) for all x in X. In this paper we consider the family F of functions f from [0, ) into [0, ) The proof of this lemma is obvious but see also [15]. 2. RESULTS IN COMPLETE METRIC SPACES. Let F, G be two set-valued mappings of X into B(X) and let I J be two elfmaps of X such that F(X) !l(X), G(X) i J(X) (2.1) l.et Xo (Resp.yo be an arbitrary point in X and define inductively a sequence {i resp.{yn }) such that, having defined the point Xn_l (resp.yn_l), choose a point :n (resp.yn with IXn (resp.JYn) in FXn_l (resp.GYn_ 1) for n |,2, 'I'his can be done since the range of I (resp.J) contains the range of F (resp.G). We consider the following conditions: ( (y2) F continuous and IFx !Fix for all x in X ([) J continuous, (2) G continuous and JGx !GJx for all x in X Modifying the proof of theorem of [I] we are now able to prove the following: THEOREM I. Let F, G be two set-valued mappings of X into B(X) and let I, J be two selfmaps of X satisfying (2.1) and As in the proof of theorem of [24], we have on using inequality (2.3) p times and property (a): 6(FXm,GYn) fP(max{6(FXr'Gyq) m p _< r _< m n-p!q!n}) for m n > p.Thus (Fx m,Fxn) _< 6(FXm,GYs) + 6(GYs,FXr) 2 for m n > p The sequence {z is therefore a Cauchy sequence in the complete metric space X and so has a limit Z in X where z is independent of the particu- lar choice of each z It follows in particular that the sequence {Ix converges n n to z and the sequence of sets {Fx converges to the set {z} n Similarly, it can be proved that the sequence {Jyn converges to a point w and the sequence of sets {Gy n} converges to the set {w} Using (2.3) we have 6(Fx n,Gyn) _< f(max{d(IXn,JYn), 6 (Ixn'Gyn), 6(JYn,FXn Letting n tend to infinity and using lemma and properties (aa) and (aaa) it is seen that w z. 
Now suppose that (yl) holds.Then the sequence {12Xn and {IFXn converge to Iz and {Iz} respectively.Let w be an arbitrary point in Fix for n n n 1,2, Then since I weakly commutes with F we have on using (I.I) Letting n tend to infinity and using lemma we see that the sequence {w converges n to Iz But Iz is independent of the particular choice of w in Fix and this n n means that the sequence of sets {Fix converges to the set {Iz} n Using inequality (2.3) we have (FlXn'GYn) -< f(max{d(12Xn'JYn ), 6(12Xn,GYn), 6(Jy n,Flxn)}) Letting n tend to infinity and using lemma and property (aa), we have d(Iz,z) _< f(d(Iz,z)) which implies Iz z by (aaa). Letting n tend to infinity and using lemma and property (a), we obtain the ine- quality (Fu,z) !f(max{d(lu,z), 6(z,Fu)}) f((z,Fu)) Thus Fu {z} by (aaa) and since F and I weakly commute, we have z Fz Flu IFu Iz}. It follows that Iz z. We have therefore shown that if the conditions (yi) and (j), with i, I, 2, hold then Iz Jz z and Fz Gz {z}. That z is the unique common fixed point of F and I and of G and J follows easily.This completes the proof of the theorem. COROLLARY I. Let F, G be two set-valued mappings of X into B(X) and let I, J be two selfmaps of X satisfying (2.1) and (Fx,Fy) c.max{d(Ix,Jy),(Ix,Gy),(Jy,Fx)} ( for all x, y in X, where 0 c < I. Further, let F and G commute with I and J respectively.If F or I and G or J are continuous, then F, G, I and J have a unique common fixed point z.Further, Fz Gz {z and z is the unique common fixed point of F and I and of G and J. PROOF.As in the proof of theorem of [I], it is proved that (2.2) holds for any x0' Y0 in X.Since F and G commute with I and J respectively, we have Fix IFx and GJx JGx for all x in X.The thesis then follows from theorem if we assume that f(t) ct for all t O. The result of this corollary was given in [I]. We now give an example in which theorem holds but corollary is not applicable.Gx [0,x/(x + 8)], Ix Jx x for all x in X. By example i, F and G weakly commute with I. Further, we have GJx for all x in X. + 2y- + 2y' y and so I/(I + 2y) _< c which as y tends to zero, gives c _> I, a contradiction. 
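The weak commutativity conditions (1.1) and (1.2) are strictly weaker than commutativity, which is what the garbled example earlier in the text is meant to show. As a stand-in illustration of our own, the pair Ix = x/2 and Fx = x/(x + 2) on [0, 1] satisfies d(FIx, IFx) ≤ d(Ix, Fx) for every x, while FIx ≠ IFx whenever x > 0; the short check below verifies both claims numerically.

```python
import numpy as np

# Single-valued illustration of weak commutativity, condition (1.2):
# d(FIx, IFx) <= d(Ix, Fx) for all x, even though F and I do not commute.
I = lambda x: x / 2.0
F = lambda x: x / (x + 2.0)

x = np.linspace(0.0, 1.0, 10001)
FI = F(I(x))            # = x / (x + 4)
IF = I(F(x))            # = x / (2x + 4)

lhs = np.abs(FI - IF)          # d(FIx, IFx) = x^2 / ((x+4)(2x+4))
rhs = np.abs(I(x) - F(x))      # d(Ix, Fx)   = x^2 / (2x+4)

print(bool(np.all(lhs <= rhs + 1e-15)))   # True: F and I weakly commute
print(float(np.max(lhs)))                 # > 0:  F and I do not commute
```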
Mathematical Problems in Engineering Special Issue on Time-Dependent Billiards Call for Papers This subject has been extensively studied in the past years for one-, two-, and three-dimensional space.Additionally, such dynamical systems can exhibit a very important and still unexplained phenomenon, called as the Fermi acceleration phenomenon.Basically, the phenomenon of Fermi acceleration (FA) is a process in which a classical particle can acquire unbounded energy from collisions with a heavy moving wall.This phenomenon was originally proposed by Enrico Fermi in 1949 as a possible explanation of the origin of the large energies of the cosmic particles.His original model was then modified and considered under different approaches and using many versions.Moreover, applications of FA have been of a large broad interest in many different fields of science including plasma physics, astrophysics, atomic physics, optics, and time-dependent billiard problems and they are useful for controlling chaos in Engineering and dynamical systems exhibiting chaos (both conservative and dissipative chaos).We intend to publish in this special issue papers reporting research on time-dependent billiards.The topic includes both conservative and dissipative dynamics.Papers discussing dynamical properties, statistical and mathematical results, stability investigation of the phase space structure, the phenomenon of Fermi acceleration, conditions for having suppression of Fermi acceleration, and computational and numerical methods for exploring these structures and applications are welcome. To be acceptable for publication in the special issue of Mathematical Problems in Engineering, papers must make significant, original, and correct contributions to one or more of the topics above mentioned.Mathematical papers regarding the topics above are also welcome. Authors should follow the Mathematical Problems in Engineering manuscript format described at http://www .hindawi.com/journals/mpe/.Prospective authors should submit an electronic copy of their complete manuscript through the journal Manuscript Tracking System at http:// mts.hindawi.com/according to the following timetable: and if for arbitrary O, there exists an integer n n N such that A ! for n N where A is the union of all open spheres with n centres in A and radius LEMM_A I.If {A and {B are sequences of bounded subsets of (X,d) which con- n n
2,021.6
1986-01-01T00:00:00.000
[ "Mathematics" ]
A Scale-invariant Higgs Sector and Structure of the Vacuum In view of the current status of measured Higgs boson properties, we consider a question whether only the Higgs self-interactions can deviate significantly from the Standard-Model (SM) predictions. This may be possible if the Higgs effective potential is irregular at the origin. As an example we investigate an extended Higgs sector with singlet scalar(s) and classical scale invariance. We develop a perturbative formulation necessary to analyze this model in detail. The behavior of a phenomenologically valid potential in the perturbative regime is studied around the electroweak scale. We reproduce known results: The Higgs self-interactions are substantially stronger than the SM predictions, while the Higgs interactions with other SM particles are barely changed. We further predict that the interactions of singlet scalar(s), which is a few to several times heavier than the Higgs boson, tend to be fairly strong. If probed, these features will provide vivid clues to the structure of the vacuum. We also examine Veltman's condition for the Higgs boson mass. Introduction It appears that completion of the standard model (SM) of particle physics, as it stands, has been achieved with the discovery of the Higgs boson at the LHC experiments [1,2]. In particular, the two parameters of the Higgs potential, namely, the vacuum expectation value (VEV) of the Higgs field v H = µ H / √ λ H = 246 GeV and the Higgs boson mass m h = √ 2µ H ≈ 126 GeV, are now determined. Here, H denotes the Higgs doublet field H = (H + , H 0 ) T . This means that all the parameters of the SM have been determined. After the discovery, investigations of properties of the Higgs boson are being performed rapidly, such as measurements of its spin, CP , and couplings with various SM particles. Up to now, there are no evident contradictions with the SM predictions. Accuracies of the measurements are improving, and identification with the SM Higgs boson is becoming more likely. These features, however, do not readily lead to the conclusion that the SM is confirmed altogether. As an important piece, confirmation of the self-interactions of the Higgs boson is still missing, which is indispensable to unveil the structure of the Higgs potential. There are some phenomenological and theoretical problems in the Higgs sector of the SM as of today. For instance, there is a huge hierarchy between the electroweak and Planck scales, which generates the "Naturalness problem". Furthermore, even if we permit fine tuning of parameters, we face a problem that the vacuum of the Higgs potential is unstable or metastable at a high-energy scale. * These problems give us motivations to consider extensions of the SM Higgs sector. In view of the present status, it is an interesting question whether it is possible, with such an extension, that only the Higgs self-interactions are significantly different from the SM predictions, while other properties of the Higgs boson are barely affected. Naively, one may expect that deviations of the Higgs self-couplings from the SM values can be expressed as effects of higher-dimensional operators, which are suppressed by a cut-off scale Λ as (v H /Λ) n . It is based on a general model-independent argument in the case that the Higgs effective potential can be expanded as a polynomial in the Higgs field H. Since the cut-off scale may be of order a few TeV scale or higher, one may expect that the deviation is already constrained to be quite small. 
One direction to evade such an argument is to consider models with non-decoupling effects to Higgs self-interactions. There are many examples [3,4]. Here, we would like to consider a different possibility. In various physics of spontaneous symmetry breakdown (including condensed matter physics), there appear effective potentials which cannot be expanded in polynomials in the field variables, namely effective potentials which are irregular at the origin. For example, the theory of superconductivity at T = 0 gives such an effective potential, and in certain strongly interacting systems such singular potentials are also expected to appear (although they are difficult to compute reliably). As an example which is perturbatively computable within relativistic quantum field theory, we consider an extended Higgs sector with classical scale invariance. In this case, typically the Higgs effective potential takes a form ∼ λφ 4 [ln (φ 2 /v H 2 ) − 1/2], which is irregular at the origin via the Coleman-Weinberg (CW) mechanism [5], where φ = √ 2 Re H 0 . If this potential is expanded about the VEV in terms of the physical Higgs field h = φ − v H , the expansion generates powers of h higher than the quartic power, and they arise in powers of h/v H without being suppressed by a cut-off scale. At the same time, it is expected that the triple and quartic couplings of h can have order-unity deviations from the SM values. In recent years, CW-type potentials have been studied extensively as models of an extended Higgs sector, with different motivations. For instance, classical scale invariance as a solution to the hierarchy problem has become a hot subject. These are models which become scaleless at a new physics scale M (e.g. Planck scale), µ H 2 (M) = 0. In these models, typically the scale invariance is broken radiatively by the CW mechanism, which (in steps) leads to a generation of the electroweak scale as the Higgs VEV [6,7,8,9,10,11,12,13], or generates scales involving nonzero VEVs of other scalar fields [14,15,16,17,18,19,20,21,22,23,24]. Even without explicit realizations, classical scale invariance is often mentioned as a possibility or guideline for an underlying mechanism. The purpose of this paper is to explore physics in the vicinity of our vacuum, characterized by the effective potential of an extended Higgs sector. As a minimal extension of the SM Higgs sector, we consider a scale-invariant Higgs potential with a real singlet scalar field which belongs to the N representation of a global O(N) symmetry. Though there have been several works in which the same or similar extensions are considered, we re-analyze the model by examining the validity range of perturbative calculations of the effective potential in detail. We can obtain a phenomenologically valid potential within the validity range of a perturbative analysis. We clarify its behavior around the electroweak scale. We also examine Veltman's condition [25] for the Higgs mass, which is a criterion for judging a "Naturality" of the scalar potential by examining the coefficient of the quadratic divergence. Some of previous works on similar models are subject to possible instability of their predictions when higher-order perturbative corrections are included. Such instability was already pointed out by the original CW paper, and we re-examine this condition. Other works, which are legitimate with respect to the perturbative validity, use the framework of Gildener-Weinberg (GW) [26] to analyze the effective potential. 
In this framework, one concentrates on a one-dimensional subspace of the configuration space of the effective potential, namely that corresponding to the physical Higgs direction, and examines the potential shape on that subspace. On the other hand, we analyze the effective potential in a way closer to the original CW approach, which enables to clarify a global structure of the potential shape in the configuration space. Ref. [9] analyzed a gauged and non-gauged version of a scale-invariant model, and the latter is close to the one we examine. Phenomenologically we reproduce similar aspects, hence we state the differences of our analysis in comparison. As mentioned above, Ref. [9] uses the GW framework to analyze the properties of the effective potential. Instead we present a formulation which enables analysis of global properties of the effective potential. As a result, for instance, we are able to analyze the structure of the potential in every direction around the vacuum in the configuration space and predict the interactions among the physical scalar particles. This paper is organized as follows. In Sec. 2 we explain our model. In Sec. 3, we develop a theoretical framework needed for a perturbative analysis of the model. We give results of our numerical analysis in Sec. 4. Conclusions and discussion are given in Sec. 5. 2 Model and effective Higgs potential Lagrangian The scale-invariant limit of the SM has long been excluded experimentally. Hence, to impose classical scale invariance, we need to extend the Higgs sector. As a minimal extension, we consider a scale-invariant extension of the SM with an additional real singlet scalar field with a Higgs-portal coupling. We consider the case where the singlet scalar field is in the fundamental representation of a global O(N) group: S = (S 1 , · · · , S N ) T . The Lagrangian is given by where the real singlet field S interacts with itself and the Higgs doublet field H via the self-interaction and portal interaction with the coupling constants λ S and λ HS , respectively. Effective potential up to one-loop level The one-loop effective potential in the Landau gauge, renormalized in the MS-scheme, is given by Here, the expectation values of the scalar fields in the presence of source J are given by The index i denotes the internal particle in the loop, and their parameters are given by † W bosons: massive scalar bosons: n ± = 1 , M ± 2 = F ± , c ± = 3 2 ; NG bosons of the Higgs field: NG bosons of the singlet field: charged leptons (f = e, µ, τ ): and F ± are defined by Hereafter, we neglect all the Yukawa couplings except the top Yukawa coupling, y t = y (U) 33 ∼ O(1), since other Yukawa couplings are fairly small. It is customary to denote the summation over i together with particle's statistical factor as supertrace "STr," which we also use below. Renormalization group analysis We can extend the applicability range of the effective potential by a renormalizationgroup (RG) improvement. According to the general formulation [27,28], the (L + 1)-loop beta functions and anomalous dimensions can be used to improve the L-loop effective potential. Here, we obtain the leading-logarithmic (LL) potential by improving the treelevel potential by the one-loop beta functions and anomalous dimensions. 
It is argued in [27,28] that, within the range where ln(M i 2 /µ 2 ) are not too large, combining the oneloop effective potential with the one-loop beta functions and anomalous dimensions gives † Strictly speaking, the terminology "Nambu-Goldstone (NG) bosons" for the internal particles is inadequate except at the vacuum configuration and depends on how the symmetries are broken by the vacuum. More precisely, "NG bosons" represent the scalar modes which are orthogonal to the radial directions of H and S. a better approximation. Hence, we obtain an improved-next-to-leading-order (improved-NLO) potential by combining them. Comparing NLO, LL and improved-NLO potentials, we can examine validity (stability) of the predictions in the vicinity of the vacuum. The beta functions and anomalous dimensions are defined by respectively, where X is a coupling constant and A is a field, both renormalized at scale µ. The one-loop beta functions and anomalous dimensions of the model are given by The beta functions which do not include λ HS or λ S are suppressed. We obtain the improved potentials by taking the renormalization scale as t = ln( where Perturbatively valid parameter region In a previous work [16] a parameter region has been searched assuming In that case, however, the Coleman-Weinberg mechanism does not work properly, since the one-loop corrections cannot compete against the tree-level terms in such a parameter region if the order counting in perturbation theory is legitimate. (This is in analogy to the case of pure scalar φ 4 field theory considered in the original CW paper [5].) As a result, the renormalization scale considered in [16] is about 10 5 v H , where the large logarithmic corrections invalidate a perturbative analysis. In this section, we reconsider the parameter region, in which the results are perturbatively valid and the CW mechanism works properly. As mentioned in the Introduction, there is a framework of analysis to find a vacuum of a one-loop effective potential and to compute particle contents of scalar bosons systematically, which was introduced by Gildener and Weinberg [26]. In that framework we obtain the following general results. A perturbatively valid vacuum appears on the ray of a flat direction of the tree-level potential; a light scalar boson "scalon" (corresponding to the Higgs boson in our context) appears in addition to heavy scalar bosons; interactions among the scalon and the other scalar bosons are derived. Although it is a useful framework, we consider that our framework presented below is advantageous to analyze more global features of the potential, including the vicinity of the vacuum. Order counting in perturbative expansion We investigate an order counting among the coupling constants, with which the CW mechanism is expected to work in the perturbative regime. To realize the CW mechanism, the tree-level and one-loop contributions to the effective potential should be comparable and compete with each other. Parametrically this requires a relation to be satisfied. (This is in analogy to the case of massless scalar QED considered in the original CW paper [5], in which λ ∼ e 4 (4π) 2 ≪ 1 is required as a consistent parameter region.) Note that we take y t into account in Eq. (22) since the top quark gives the dominant oneloop contribution among the SM particles which couple to the Higgs particle. According to Eq. (22), we consider that λ H and λ HS 2 (as well as y t 4 ) are naively counted as the same order quantities in perturbative expansions. 
‡ It follows that the relation between λ H and λ HS should read |λ H | ≪ |λ HS | . (23) ‡ We do not consider a fine cancellation between the λ HS 2 and y t 4 contributions. Thus, Eq. (22) and Eq. (23) indicate that λ H 2 and λ HS 2 need to be counted as different orders, although they both belong to the one-loop contributions. Furthermore, λ HS needs to be large, at least of order N C /N y t ≈ 1.7 N −1/2 , in order to beat the top quark negative contribution, for stabilizing the vacuum. We derive a systematic approximation of the effective potential Eq. (5), taking into account the above order counting. First, the contributions from the NG bosons of the Higgs field on the right-hand-side of Eq. (5) can be written as follows: Secondly, F ± (φ, ϕ) in the fourth and fifth terms in Eq. (5) can be written as follows: where we have defined F ±app (φ, ϕ) as the approximate form of F ± (φ, ϕ). Then we substitute Eqs. (24) and (25) to the expression of the effective potential Eq. (5). We also have to apply Eq. (22) or Eq. (23) to the tree-level potential in Eq. (4). In particular, Eq. (22) can be interpreted as follows. Although the term proportional to λ H is tree-level, after taking into account the above order counting of the coupling constants, this term should be regarded as next-to-leading order (NLO), in contrast to the term proportional to λ HS , which is at the LO. For this reason, the LO contributions to the effective potential in the above order counting is given by We show V LO in Fig. 1. At LO, the potential is flat along the φ axis, which composes the minima of this potential. § Thus, at LO, the vacuum is not determined uniquely. At every vacuum the Higgs boson is massless, while the singlet scalers are massive. § If λ S is small and of the same order as λ H , λ S ϕ 4 should be counted as NLO. In this case, both φ and ϕ axes become the flat minima of V LO . Other features, especially the results presented in the next section, are hardly affected, if the global minimum of V LO + V NLO is on the φ axis. The NLO contributions are given by We explain the reason for omitting F −app in the next subsection. As seen above, at every vacuum of V LO , ϕ = 0 and the ϕ-direction becomes a massive mode. Hence, from consistency of the perturbative expansion, we expect that ϕ = 0 holds also at the vacuum of V LO + V NLO . According to the general argument on the potential with a flat direction at LO, with an appropriate choice of the couplings (λ H , λ HS , λ S ), V LO + V NLO exhibits a minimum on the φ-axis by the CW mechanism, and the Higgs boson becomes massive at NLO. Hence, the Higgs boson is generally expected to be much lighter than the singlet scalars. Thus, the SM gauge group is broken by the vacuum as required Comment on the Hessian matrix A special prescription is needed for computing the CW potential, in the case that the minimum of the effective potential is not determined uniquely at the LO of the perturbative expansion, such as in Eq. (26). Generally the arguments of logarithms in a one-loop effective potential include the eigenvalues m i 2 of the Hessian matrix of the tree-level scalar potential. The Hessian matrix is a matrix whose elements are given by ∂ 2 Vtree ∂x i ∂x j , where x i denotes a scalar field. Therefore, the eigenvalues m i 2 at the potential minimum coincide with the mass-squared eigenvalues of the scalar fields at tree level. 
In the case that the tree-level potential has a unique minimum, all the eigenvalues of the Hessian matrix at the minimum are positive, m i 2 > 0, and m i 4 ln m i 2 in the one-loop effective potential are well-defined. * Since in our case V LO is flat along the φ-axis, m φ 2 (the eigenvalue in the direction of φ) can be negative at configuration points (φ, ϕ) infinitesimally away from the φ-axis. Instead, if we determine the potential minimum of V LO + V NLO and compute the Hessian matrix of V LO +V NLO in the vicinity of the minimum, all the eigenvalues are positive. Thus, a naive perturbative treatment is inappropriate in computing quantum fluctuations in the vicinity of a vacuum in the case that there is a negative eigenvalue. Here we adopt a prescription † that (only) the eigenvalue m φ 2 cannot be determined by V LO , and that this eigenvalue should be determined by V LO + V NLO , whose effect through m φ 4 ln m φ 2 should be included in V NNLO . ‡ Since we do not compute V NNLO , in practice we simply neglect its contribution. The other eigenvalue m ϕ 2 determined by V LO is positive. In Eq. (7), the two eigenvalues are given by F ± (φ, ϕ). The eigenvalues of the modes orthogonal to these radial modes in the scalar sector are composed by those of the three degenerate modes of H and those of the N − 1 degenerate modes of S, all of which are non-negative; see Eqs. (7) and (24). § We consider that these should be included in the computation of V NLO . Based on these considerations, we omit the contribution of F −app (the eigenvalue in the direction φ) in Eq. (27). An analysis for a consistent treatment of the NG modes has been performed recently [29], which is similar in spirit to the above argument. We will further develop the above method for including higher-order corrections consistently in our future work. * For massless modes, such as NG bosons of the Higgs field, we define the values of m i 4 ln m i 2 in the limit m i 2 → 0, i.e., zero. See Eq. (24). † This corresponds to the following prescription for computing V eff by the background field method. We determine the propagator of the relevant quantum field not by the LO vertices alone but also by including one-loop self-energy corrections. Note that both LO vertices and one-loop self-energy corrections are dependent on the background fields (φ, ϕ) and determined from V LO and V NLO , respectively. ‡ This is consistent in the vicinity of the φ-axis since m φ 2 is an NLO quantity. § At the vacuum, the three modes orthogonal to the radial mode of H are identified with the NG modes, whose eigenvalues vanish to all orders, while all the N modes of S become degenerate, since the O(N ) symmetry is unbroken. Relation between λ H and λ HS at µ = v H To study the effective potential in the vicinity of the vacuum, a natural choice of the renormalization scale would be µ = v H , where v H = 246 GeV is the VEV of the Higgs field. Here, we set µ = v H and ϕ = 0 and examine a relation among the scalar couplings (λ H , λ HS , λ S ). (Other couplings are fixed to the SM values.) As a result we obtain a relation between λ H and λ HS . Setting ϕ = 0, the effective potential takes a form where C 1 and C 2 are constants dependent on λ H , λ HS and independent of λ S . This shows that both mass and VEV of the Higgs boson are independent of λ S . [This is not the case if we use RG-improved potentials, since terms including both λ S and ln( The potential minimum is determined by from which we obtain v H as a function of λ H , λ HS and µ. 
Then we set µ = v H : Since v H is the only dimensionful parameter, the right-hand-side is proportional to v H and we can divide both sides by v H . This gives a relation between λ H and λ HS , which are renormalized at µ = v H . The relation is shown in Fig. 2, obtained by solving Eq. (30) numerically. * Note that λ HS should be of order N C /N y t ≈ 1.7 N −1/2 or larger (see Sec. 3.1), while it should be smaller than order 4π to ensure perturbativity. Results of analysis 4.1 Phenomenologically valid parameters Using the effective potential V LO + V NLO obtained in the previous section, we search for a phenomenologically valid parameter region for the couplings (λ H , λ HS , λ S ). We require that the observed mass and VEV of the Higgs boson are reproduced. * If we neglect the gauge couplings (and all the Yukawa couplings except y t ), we obtain a simple relation: This gives a good approximation of the numerical result. are defined similarly to Eqs. (15) and (16) from V LO and V LO + V NLO with the scale choice t = ln( φ 2 + ϕ 2 /v H ), respectively. As mentioned, in the case (I), λ H and λ HS are fixed by m h and v H , while λ S can be taken as a free input parameter. Reflecting this feature, even in the cases (II) and (III), the values of λ H and λ HS are tightly constrained, while λ S can be taken fairly freely. For this reason, we take λ S as the input parameter for all three cases. In the table we take λ S = 0.10 as an example. In all cases there exist λ H and λ HS which reproduce the mass and VEV of the Higgs boson. Furthermore, in accord with the argument in Sec. 3.1, the mass of the singlet scalar is predicted to be several times larger than the Higgs boson mass in each case. We find that the predicted masses of the singlet scalar are consistent with each other within about 5% accuracy. The differences may be taken as a reference for the stability of our predictions. An undesirable feature is that the locations of the Landau pole are close and in the several TeV region. This originates from the large value of λ HS at the electroweak scale µ ≃ v H (see Sec. 3.1). Dependences of the predictions on λ S is as follows. If we raise the value of λ S , the locations of the Landau pole are even lowered, since the couplings λ H , λ HS , λ S increase with the renormalization scale as they influence each other. The mass of the singlet scalars are barely dependent on λ S , since only the fourth derivative in the ϕ direction of the effective potential is affected at LO. The values of λ H and λ HS do not change very much with λ S . For instance, if we take λ S = 0.5 and 1.0, the Landau pole appears at 3.2TeV and 2.8TeV respectively, while the mass of the singlet scalar changes little. To see the shape of the effective potential, we show in Fig. 3 (a) the contour plot of the potential in case (I) of Tab. 2. Since the potential is symmetric under φ → −φ or ϕ → −ϕ, we show only the upper-right part of the configuration space. As expected there is a global minimum along the φ-axis. There is also a shallower local minimum along the ϕ-axis, which is generated by the CW mechanism of a competition between λ S ϕ 4 . The effective potential for the SM is also shown. and λ HS 2 ϕ 4 ln(λ HS ϕ 2 /µ 2 ) terms. † In cases (II) and (III) of Tab. 2, we obtain qualitatively similar contour plots. In Fig. 3 (b) we show the effective potentials on the φ-axis (i.e., for ϕ = 0) for all three cases, together with the effective potential of the SM. We see that differences between the three approximations are small. 
In particular, the difference between the LL approximation (II) and the improved-NLO approximation (III) is hardly visible. These features show good stability of our predictions in the vicinity of the vacuum, and that they are within the validity range of perturbation theory. There is a significant difference between our effective potential and that of the SM. In comparison to the SM, the minimum of our potential is shallow, although the values of v H and m h are common. The clear difference of the potential shapes indicates that the higher derivatives of the potentials at the minimum are appreciably different. Namely, we anticipate that the Higgs self-interactions of our model are appreciably different from those of the SM. Next we examine the cases N > 1. In general, we expect that the location of the Landau pole is raised as compared to the N = 1 case. This is because the required value of λ HS to overwhelm the top-loop contribution decreases with N as 1/ Interactions among physical scalar particles 4.2.1 Our results Let us expand the effective potential about the vacuum after rewriting φ 2 → H † H, ϕ 2 → S · S and Then the effective potential takes a form where only up to dimension-four interactions are shown explicitly. We show the values of the coupling constants among the scalar particles in Tab. 4. They correspond to the parameters of Tabs. 2 and 3. In the case N = 1, the triple self-coupling of the Higgs boson λ hhh turns out to be larger than the SM value by a factor 1.7-1.8, while the quartic self-coupling λ hhhh is larger by a factor 3.7-4.5. The range of each value shows the level of accuracy of our prediction. (In general higher derivatives of the potential have larger uncertainties.) These values are barely dependent on the input value for λ S , since the effective potential in case (I) is independent of λ S on the φ-axis, and the dependence is weak in the cases (II) and (III). For N = 1, the coupling constants involving the singlet scalars are large and of order ten. They become even larger if the input value for λ S is taken to be larger. One may wonder if perturbation theory is valid with such large coupling constants. We remind the reader that the coupling constants in the original Lagrangian are not so large, and that our predictions have been tested to be within the perturbative regime in the vicinity of the scale of the vacuum. These large couplings are considered to be a typical feature of the present model and need to be tested by future experiments. For larger N, the Higgs triple self-coupling is barely dependent on N, while the quartic coupling decreases slightly with N. The N dependences of the couplings involving the singlet scalars are more evident. Generally we obtain smaller couplings for larger N. This originates from high sensitivities of these couplings on λ HS , which reduces with N as ∼ 1/ √ N, and can be regarded as a characteristic feature of our potential. In the cases (I) and (III) the values of λ ssss are not shown in the table, for the following reason. If we compute the fourth derivative of V NLO with respect to S and set h = | s| = 0, the contribution of the NG modes of H diverges. This does not happen in the case (II) since V NLO is not involved. The divergence originates from an infra-red region, k ∼ 0 in d 4 k/(k 2 ) 2 , and is an artifact of setting all the external momenta to zero. In physical amplitudes, momenta flowing into the quartic vertex are almost always non-zero, which regularize the IR divergence in the loop integral. 
Since the other couplings have similar values for the cases (I), (II), (III), we expect that the values of λ ssss for the case (II) would give reasonable estimates of the quartic vertex appropriate for physical amplitudes. Comparison with results using Gildener-Weinberg's framework Let us compare our results with those using the conventional GW framework. Table 6: λ S dependences of the couplings involving the singlet scalar bosons. We show in Tab. 5(a)-(c) comparisons of the couplings among the physical Higgs boson and singlet scalar(s), including their self-couplings, as defined in eq. (33). The effective potential in the GW framework coincides with our effective potential after setting ϕ = 0. Hence, the Higgs self-couplings are the same in both analyses. [We list the values for case (I) of our analysis as the corresponding values in the GW framework.] In conventional analyses using the GW framework, the couplings involving the singlet scalars are derived using the tree-level interactions (by substituting the VEV of the Higgs boson appropriately). Therefore, loop-induced effects are not included. In our case, since the portal coupling λ HS is large, its loop-induced effects can be large. These effects are most enhanced for N = 1, where λ HS is the largest; see Tab. 5(a). In particular there is a large enhancement of the singlet quartic self-coupling λ ssss , if λ S is small and the effect of λ HS -induced loop contribution is much larger. (We list the case λ S = 0.1 in the table.) Note that, although λ S is practically a free parameter, a smaller value is preferred with regard to the Landau pole. For completeness, we show the λ S dependences of our predictions in Tab. 6. We see that only λ ssss depends considerably on λ S , and the dependence is roughly consistent with that of 6λ S + const., as anticipated. In view of the large portal coupling, certainly it is sensible to include the loop-induced effects in computing the interactions involving the singlet scalars. In this case, we need to set up a formulation with proper account of order counting, as described in the previous section. It inevitably requires departure from the analysis in a one-dimensional subspace, i.e., the GW formulation. The differences between our results and those of the GW framework tend to decrease for larger N, since the portal coupling becomes smaller. This tendency can be confirmed in Tabs. 5(b)(c). Veltman's condition for the Higgs mass Veltman's condition is a condition for the quadratic divergence to vanish in the radiative correction to the Higgs potential. Generally the one-loop effective potential for the Higgs boson in a cut-off regularization scheme is given by where Λ is a UV regulator, and M 2 (φ)(≪ Λ 2 ) is the mass-squared matrix . The first term is a cosmological constant term, which we subtract by the counter term (with tremendous fine-tuning, but we will not be concerned about it here). The second term is a quadratically divergent term. It can either be subtracted (fine-tuning), which makes the theory unnatural, or vanish if holds at a scale µ 0 , which makes the theory natural. Here, δ φ denotes a fluctuation field vector around the VEV φ . In the SM, the coefficient of the quadratically divergent term is given by Thus, Veltman's condition is violated at order unity and fine-tuning is necessary if the cut-off scale is much higher than the electroweak scale. Here, we examine whether the fine-tuning is tamed comparatively in the scale-invariant model. 
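The order-unity violation of Veltman's condition in the SM quoted above can be made concrete numerically. The sketch below is a rough illustration only: it assumes the commonly quoted one-loop combination m_h^2 + 2 m_W^2 + m_Z^2 − 4 m_t^2, whose overall loop-factor convention varies between references and is not fixed by the text.

```python
import numpy as np

# Masses and the Higgs VEV in GeV.
m_h, m_W, m_Z, m_t, v = 126.0, 80.4, 91.2, 173.0, 246.0

# Dimensionless combination controlling the SM quadratic divergence
# (up to an overall 3/(16 pi^2)-type loop factor, convention-dependent).
combo = (m_h**2 + 2.0 * m_W**2 + m_Z**2 - 4.0 * m_t**2) / v**2
print(combo)   # ~ -1.4: an order-unity violation, dominated by the top-quark term

# Size of the induced Higgs mass shift for an illustrative cut-off of a few TeV:
Lambda = 5000.0  # GeV, illustrative only
delta_mh2 = 3.0 / (16.0 * np.pi**2) * combo * Lambda**2
print(np.sqrt(abs(delta_mh2)), "GeV, to be compared with m_h =", m_h, "GeV")
```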
The coefficient of Λ 2 is expressed by the matrix It is diagonal, since there is no mixing between the Higgs boson and the singlet scalar boson. Let us moderately examine only the coefficient of the Higgs boson for various N. The hh component of the above matrix takes values 0.76, 5.6, 12.8 for N = 1, 4, 12, respectively. Instead, we can examine the constraint by Veltman's condition on the parameters of the model, see Figs. 6(a)-(c), which also combine the phenomenologically valid regions plotted in Figs. 5. As can be seen, the fine-tuning is tamed particularly for N = 1, which may be a good feature of the model. ‡ On the other hand, the quadratic divergence for the singlet component is significant. Although λ S is a free parameter, it is expected to be positive semi-definite at the electroweak scale in order to stabilize the vacuum at LO. § One way to remedy this problem is to couple right-handed Majorana neutrinos to the singlet scalars, as advocated in [11]. Here, we do not pursue such possibilities and leave it as an open question. ‡ Since the cut-off scale, given by the Landau pole, is at several TeV scale in the N = 1 case, there will not be a serious fine-tuning problem below this scale. § Since one-loop effects induced by the portal coupling are large, there may be a consistent region where λ S is negative and still the vacuum is stabilized. Conclusions and Discussion Up to now experimental data show that properties of the Higgs boson are consistent with the SM predictions. It is an intriguing question, with respect to the structure of the vacuum, whether only the Higgs self-interactions not measured so far can deviate significantly from the SM predictions. A possible scenario is that the Higgs effective potential is irregular at the origin. In this analysis we studied an extension of the Higgs sector with classical scale invariance as an example of such potentials. As a minimal model, we considered the SM Higgs sector in the scale-invariant limit, coupled via a portal interaction to an N-plet of real scalars under a global O(N) symmetry (singlet under the SM gauge group). We analyzed the electroweak symmetry breaking by the CW mechanism using RG and examined interactions among the scalar particles which probe the structure of the potential around the vacuum. We computed the effective potential in the configuration space of the Higgs and singlet scalar fields (φ, ϕ). The input parameters are the Higgs self-coupling λ H , portal coupling λ HS , and self-coupling of the singlet scalars λ S at tree level. In order to obtain perturbatively valid predictions, we find that parametrically |λ H | ≪ |λ HS | needs to be satisfied. Furthermore, λ HS needs to be large, of order N C /N y t ≈ 1.7N −1/2 , in order to overwhelm the contribution of the top quark loop. We developed a special perturbative formulation for our model in order to analyze the (φ, ϕ) space, since a naive treatment fails. The consistent parameter region of the couplings (λ H , λ HS , λ S ) is found which gives a global minimum of the effective potential at (φ, ϕ) = (v H , 0) with the Higgs VEV v H = 246 GeV and the Higgs boson mass m h = 126 GeV. The potential is stable against RG improvements, showing validity of the perturbative predictions. λ H and λ HS are almost fixed, while λ S (> 0) can be taken fairly freely. The SM gauge group is broken as in the SM, while the O(N) symmetry is unbroken. (Hence, there is no NG boson.) The Higgs boson and singlet scalars do not mix at the vacuum. 
The mass of the N degenerate singlet scalars arises at LO of the perturbative expansion, whereas the mass of the Higgs boson is generated at NLO. Hence, generally the singlet scalars are much heavier than the Higgs boson. For N = 1 and λ S = 0.1 the mass is about 500 GeV, and it becomes lighter for larger N and heavier for larger λ S . Since the coupling λ HS should be large in order to beat the top loop contribution, the Landau pole appears at a few to a few tens TeV (the position of the Landau pole is higher for larger N and smaller for larger λ S ). Hence, the cut-off scale of this model is considered to be around this scale. For instance, this feature conflicts with a scenario which imposes a classically scale-invariant boundary condition at the Planck scale as a possible solution to the naturalness problem. Even without such a motivation, it can be a serious drawback of this model that the Landau pole is located so close to the electroweak scale. These features are consistent with the results of the analysis given in [9], while we presented more detailed analyses. We computed the triple and quartic (self-)couplings of the Higgs and singlet scalar particles at the vacuum. Computation of the interactions involving the singlet scalars is a unique aspect of the use of our formulation, since the portal coupling is large and loop-induced effects tend to be large. We obtain the Higgs triple and quartic self-couplings which are larger than the SM values by factors 1.6-1.8 and 2.8-4.5, respectively. The triple coupling is hardly dependent on N, while the quartic coupling decreases slightly with N. Both of them are barely dependent on λ S . According to the studies [31,32], we naively expect that the triple coupling can be detected at 2σ level at a future ILC, with an integrated luminosity of 1.1 ab −1 at a centre-of-mass energy √ s = 500 GeV and 140 fb −1 at √ s = 1 TeV. The couplings involving the singlet scalars are fairly large, of order ten, in the case of small N, reflecting the irregularity of the potential at the origin and a large value of λ HS . If probed, these large couplings provide fairly vivid clues to the structure of the effective potential in the vicinity of the vacuum. It is difficult to detect signals of this model at the current LHC experiments. Since there is no mixing between the Higgs and singlet scalar particles, the couplings of the Higgs boson with other particles are unchanged from the SM values at tree level. The singlet scalar particles interact with the SM particles only through the Higgs boson. Furthermore, the singlet scalars are heavy and can be produced only in pairs due to the O(N) symmetry (or Z 2 symmetry for N = 1). These features make it difficult to detect the singlet scalars directly at the LHC experiments. The production cross sections of the singlet scalars at the LHC are expected to be very small and it would be difficult to detect them, although a further detailed analysis is needed. On the other hand, it would be difficult to probe the extended Higgs sector from loop effects in various precision measurements. These appear as higher-order effects to the Higgs effects, and since the Higgs effects themselves are small, we expect that detection of an anomaly is non-trivial. From the cosmological point of view, there is a possibility that the singlet scalar boson(s) is a part of dark matter. The singlet boson is stable due to O(N) or Z 2 symmetry. 
Since the couplings between the singlet(s) and the Higgs boson, λ_hss and λ_hhss, are very large, the annihilation cross section is also large, which decreases the relic abundance. In the N > 1 cases, the total relic abundance is the sum over the individual singlets s_i. Then typically the scalar couplings decrease and the total relic abundance increases with N. Our naive estimate shows that the singlet(s) can be a component of dark matter, with an abundance of less than around 1% of the total dark matter relic abundance [33]. We are currently preparing a further detailed study.

We also examined Veltman's condition (vanishing of the quadratic divergence) for the Higgs mass. We find that for N = 1 the fine-tuning is relaxed compared to the SM, by the effect of the large portal coupling which cancels against the top-quark loop effects. Since the current level of fine-tuning in the SM indicates that the natural scale of the cut-off is quite close to the electroweak scale, this may be a welcome feature of the model. The relaxation of the fine-tuning is not significant for N > 1.

A possible scenario to avoid the Landau pole near the electroweak scale is to promote the global O(N) symmetry to a gauge symmetry (the group can also be replaced by another non-abelian group). An appropriate gauge-symmetric extension pushes up the location of the Landau pole, driven by the asymptotically free nature of the gauge coupling, and in an extreme case up to the Planck scale in the context of a radiative electroweak symmetry breaking scenario [9].

Since the Higgs self-interactions involving higher powers of the Higgs field are not suppressed and the scalar interactions are large, one may suspect that our model belongs to the class of strongly-interacting Higgs sectors, which can be analyzed, e.g., using a non-linear sigma model. We note that it is not a strongly-interacting model, at least around the electroweak scale. We re-emphasize that our predictions are well within the perturbative regime, and the usual loop expansion with only renormalizable interactions gives stable predictions with only a few input parameters. To realize a phenomenologically valid scenario, a slightly unusual order-counting is employed, as discussed in Sec. 3.
10,046.4
2015-03-10T00:00:00.000
[ "Physics" ]
Ebola Entry Inhibitors Discovered from Maesa perlarius

Ebola virus disease (EVD), a disease caused by infection with Ebola virus (EBOV), is characterized by hemorrhagic fever and a high case fatality rate. With limited options for the treatment of EVD, anti-Ebola viral therapeutics need to be urgently developed. In this study, over 500 extracts of medicinal plants collected in the Lingnan region were tested against infection with Ebola-virus-pseudotyped particles (EBOVpp), leading to the discovery of Maesa perlarius as an anti-EBOV plant lead. The methanol extract (MPBE) of the stems of this plant showed an inhibitory effect against EBOVpp, with an IC50 value of 0.52 µg/mL, which was confirmed by testing the extract against infectious EBOV in a biosafety level 4 laboratory. The bioassay-guided fractionation of MPBE resulted in three proanthocyanidins (procyanidin B2 (1), procyanidin C1 (2), and epicatechin-(4β→8)-epicatechin-(4β→8)-epicatechin-(4β→8)-epicatechin (3)), along with two flavan-3-ols ((+)-catechin (4) and (−)-epicatechin (5)). The IC50 values of the compounds against pseudovirions bearing EBOV-GP ranged from 0.83 to 36.0 µM, with 1 as the most potent inhibitor. The anti-EBOV activities of five synthetic derivatives together with six commercially available analogues, including EGCG ((−)-epigallocatechin-3-O-gallate (8)), were further investigated. Molecular docking analysis and binding affinity measurement suggested that the EBOV glycoprotein could be a potential molecular target for 1 and its related compounds.

Introduction

Ebola virus (EBOV) is the causative agent of Ebola virus disease (EVD), a disease associated with hemorrhagic fever and high case fatality rates [1][2][3]. With limited therapeutic options and a current outbreak of Ebola virus disease in Africa, there is an urgent need to develop novel anti-EBOV agents [4]. The study of anti-EBOV agents is largely hampered by the requirement of biosafety level 4 (BSL-4) containments. In the current study, to search for inhibitors of Ebola virus, we used a cell-based assay with replication-incompetent pseudotyped viral particles that express a specific viral glycoprotein on their surface; this assay can be performed in a BSL-2 containment [5]. The development of the "one-stone-two-birds" protocol was first reported in 2011 [6]. This approach has been subsequently applied in our plant drug discovery program and assisted in our successful discovery of a number of antiviral lead compounds [5,[7][8][9][10]. Over 50% of approved drugs are natural products or their derivatives and mimics, which illustrates that plants are an important source for drug discovery [11]. There are nearly 33,000 vascular plant species found in China, and 9300 seed plant species are used in traditional Chinese medicine [12]. Lingnan refers to the tropical and subtropical region to the south of the Nan Mountains, a mountain range that divides the south and central subtropical zones [13]. The numerous plant resources with wide biological diversity in this region provide enormous potential for the discovery of antiviral agents. A collection of over 500 medicinal plant extracts from the Lingnan region was tested against infection with Ebola-virus-pseudotyped HIV particles (EBOVpp).
This led to the identification of Maesa perlarius as an anti-EBOV plant lead, which was further investigated to discover active compounds through bioassay-guided separation. M. perlarius is a 1-3 m tall shrub mainly distributed in China, Thailand, and Vietnam [14,15]. M. perlarius has several ethnopharmacological uses in different regions. In China, M. perlarius is known as ji yu dan. The leaves of M. perlarius are made into a paste for the treatment of broken bones. In Cambodia, Laos, and Vietnam, the roots are used as diuretics and digestives. The leaves are used as a remedy for treating measles, and an infusion of the leaves can be prepared as a drink for postpartum care [15]. The phytochemistry of Maesa species has been extensively studied, with triterpenoid saponins being the best studied, followed by flavonoids and benzoquinones [16][17][18][19][20][21]. However, compounds with antiviral properties have rarely been reported from Maesa plants. In our bioassay-guided fractionation study of the methanol extract of M. perlarius, we isolated a number of active polyphenols composed of flavan-3-ol units, including procyanidins (also referred to as "proanthocyanidins"). Dimeric procyanidins and several flavan-3-ol monomers were found to be potent entry inhibitors against EBOV, with IC50 values ranging from 0.83 to 2.95 µM. Subsequent mechanism-of-action and binding affinity studies, aided by docking analysis and microscale thermophoresis (MST) technology, indicated that the flavan-3-ol compounds exerted their antiviral potency by interacting with the EBOV glycoprotein. In the present study, we report the isolation, structural determination, and modification of the identified polyphenol compounds and evaluate their anti-EBOV activity.

Stem Extract of M. perlarius Is Identified as a Potential Anti-EBOV Lead

The screening of Lingnan plant extracts led to the identification of M. perlarius as an anti-EBOV hit plant. The dose-response relationship of its methanol extract (MPBE) was established, and the IC50 value against infection with replication-incompetent EBOVpp was measured as 0.52 µg/mL (Figure 1; IC50 values were obtained by fitting a normalized-response, variable-slope model to the data using GraphPad Prism software). MPBE was further evaluated and validated using infectious Ebola virus in a biosafety level 4 (BSL-4) containment facility, which revealed an inhibitory effect against EBOV, with an IC50 value of 7.47 µg/mL.

Flavan-3-Ols and Procyanidins Identified from the Stems of M. perlarius

The methanol extract (MPBE, 529 g) of the stems of M. perlarius (5.5 kg) was fractionated by silica-gel normal-phase column chromatography with the use of a gradient elution system of dichloromethane (CH2Cl2)/acetone, followed by the gradient CH2Cl2/MeOH, to afford 84 fractions (MPBF1-84). MPBF64 (48.8 g), one of the fractions eluted by CH2Cl2:MeOH 1:1, was found to exhibit an inhibitory effect against replication-incompetent EBOVpp, with an IC50 value of 0.42 µg/mL. A portion of this fraction (4.1 g) was then subjected to further separation on a reverse-phase column packed with octadecylsilyl (ODS) silica gel and eluted by gradient MeOH/H2O to afford 11 fractions (MPBF64A-F64K) (Figure 2). The obtained fractions were evaluated for their anti-EBOV activity. Most fractions separated from F64 (except F64J and F64K) showed more than 50% inhibition of the EBOVpp luciferase signal compared to the luciferase signal of VSVpp (vesicular stomatitis virus (VSV)-pseudotyped HIV particles) at a concentration of 1.0 µg/mL.
There was no cytotoxicity against the host cells at a concentration of 20 µg/mL, confirming that the inhibitory effects of these fractions are not due to cytotoxicity but rather to action on the binding and entry stage of EBOVpp. Fractions MPBF64C-G showed negative inhibitory effects towards VSVpp, suggesting that these fractions might contain components that promote VSVpp infection. The active fraction MPBF64B (0.65 g) was further purified by a Sephadex LH-20 column and preparative HPLC to provide five compounds. By comparing their spectral data with those of previously identified compounds, they were identified as the known compounds procyanidin B2 (1), procyanidin C1 (2), epicatechin-(4β→8)-epicatechin-(4β→8)-epicatechin-(4β→8)-epicatechin (3), (+)-catechin (4), and (−)-epicatechin (5) (Figure 3).

Figure 2. Bioassay-directed fractionation of M. perlarius extract. The stems of M. perlarius (5.5 kg) were extracted by methanol (3 × 32 L, 24 h) at room temperature. The dried methanol extract (MPBE, 529 g) was fractionated by silica-gel normal-phase column chromatography with the use of a gradient elution system of dichloromethane (CH2Cl2)/acetone, followed by the gradient CH2Cl2/MeOH, to afford 84 fractions (MPBF1-84). MPBF64 (48.8 g), one of the fractions eluted by CH2Cl2:MeOH 1:1, was found to exhibit an inhibitory effect against EBOVpp, with an IC50 value of 0.42 µg/mL. A portion of this fraction (4.1 g) was then subjected to further separation by a reverse-phase column packed with octadecylsilyl (ODS) silica gel and eluted by gradient MeOH/H2O to give 11 fractions (MPBF64A-F64K). MPBF64B (0.65 g) was further purified by a Sephadex LH-20 column and preparative HPLC to provide compounds 1-5.
Anti-EBOV Activity Evaluation of Proanthocyanidins and Flavan-3-Ols

For a better comparison of the compounds' activities, we further investigated the dose-response relationship of their anti-EBOV activities, together with some commercially available B-type procyanidins and flavan-3-ols (Figure 4). The IC50 values of the compounds against EBOVpp ranged from 0.83 to 36.0 µM, and compound 1 was found to be the most potent (Table 1). All of the B-type procyanidins (1, 6, and 7) exhibited potent anti-EBOV activities, with IC50 values ranging from 0.83 to 1.52 µM. Both compounds 1 and 6 possess 2R, 3R, and 4R configurations on the upper unit. The difference in the configuration of C-2′ and C-3′ at the terminal unit is not crucial for the anti-EBOV activity of compounds 1 and 6. Compounds 6 and 7 have different stereochemistry on their upper units (6: 2R, 3R, and 4R; 7: 2R, 3S, and 4S), which contributes to the stronger anti-EBOV activity of 6 (IC50 = 0.95 µM) over 7 (IC50 = 1.52 µM). A similar relationship between structure and anti-EBOV activity can also be observed for gallocatechin (10) and epigallocatechin (11): with 2R and 3R stereochemistry, compound 11 showed 2.2 times stronger antiviral activity than compound 10 (2R and 3S). Catechin (4) and epicatechin (5) are the basic units used by nature to form proanthocyanidins, but the two monomers displayed much weaker anti-EBOVpp activity than the dimeric proanthocyanidins. However, the ester analogues EGCG (8) and ECG (9) displayed much stronger anti-EBOV activities than compounds 4 and 5. EGCG was also reported as an inhibitor of the endoplasmic reticulum (ER) chaperone heat shock 70 kDa protein 5 (HSPA5), which could inhibit EBOV infection [22]. Another study applied an in silico analysis to dock a series of polyphenol compounds to the Ebola viral protein VP24, which showed that EGCG and procyanidin B2 had binding affinity to VP24 [23].
Compounds 4, 5, 10, and 11 are flavan-3-ol monomers. With one more hydroxyl group substituted on ring B, the anti-EBOV activities of compounds 10 and 11 were ~3-4 times stronger than those of compounds 4 and 5, respectively. To extend the scope of the study, a series of additional flavan-3-ol analogues were screened for their inhibitory effects against EBOVpp (Figure S19). Surprisingly, none of them exhibited an anti-EBOV effect, even at concentrations of 50 µM. This suggests that the flavan-3-ol skeleton is crucial for the anti-EBOV activity of these compounds.
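For readers who want to reproduce this kind of dose-response analysis outside GraphPad Prism, the minimal sketch below fits a four-parameter (variable-slope) logistic curve to normalized infection data with SciPy. The concentrations and responses are invented placeholder numbers, not measurements from this study, and the fitted midpoint is simply reported as the IC50 of the hypothetical data set.

import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ic50, hill):
    """Normalized response vs. concentration with a variable Hill slope."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Placeholder dose-response data (concentration in µM, % EBOVpp infection).
conc = np.array([0.05, 0.15, 0.5, 1.5, 5.0, 15.0])
resp = np.array([98.0, 90.0, 62.0, 30.0, 9.0, 3.0])

# Initial guesses: 0-100% plateaus, midpoint near 1 µM, Hill slope of 1.
popt, _ = curve_fit(four_param_logistic, conc, resp, p0=[0.0, 100.0, 1.0, 1.0])
print(f"Fitted IC50 ≈ {popt[2]:.2f} µM (Hill slope {popt[3]:.2f})")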
Synthesis of ECG Analogues from (−)-Epicatechin to Study the Functional Group Requirement for Anti-EBOV Activity

Since ECG (9) showed strong anti-EBOV activity (Table 1), and considering its structural simplicity in comparison with EGCG, it was selected as a structural scaffold to synthesize ester analogues at C-3 and further explore the potential for enhancing anti-EBOV activity. (−)-Epicatechin (5) was used as the base structure for structural modification due to its commercial availability. The derivatization of compound 5 started with the selective benzyl (Bn) protection of its phenol groups by reaction with benzyl bromide and potassium carbonate (K2CO3) in super-dry DMF overnight. Following column separation to purify the 3′,4′,5,7-tetra-O-benzyl-(−)-epicatechin (12), the hydroxyl group at C-3 underwent Steglich esterification with various benzoyl chlorides, carbonyl chlorides, benzoic acids, and carboxylic acids in the presence of 4-dimethylaminopyridine (4-DMAP) to provide epicatechin derivatives with a structural similarity to EGCG. To improve the reactivity of the benzoic acids and carboxylic acids towards esterification, these acids were reacted with oxalyl chloride to form the relevant acyl chlorides. After esterification, the hydroxyl groups were deprotected by hydrogenolysis to yield the final products. Three benzoic acid esters (13b-15b) and two cycloalkylcarboxylate esters (16b and 17b) were synthesized (Scheme 1).

Scheme 1. Synthesis of ECG congeners.

The cytotoxicity and antiviral activity of these synthetic compounds were tested at 10 µM. None of the synthetic compounds exhibited improved anti-EBOV activity, and only 13b exhibited a >60% inhibitory effect against EBOVpp infection, which suggests that the 3,4,5-trihydroxybenzoate ester at C-3 of (−)-epicatechin could be crucial for anti-EBOV activity (Figure 5).
Molecular Docking Analysis and Microscale Thermophoresis (MST) Measurement Reveal EBOV-GP as the Potential Molecular Target of B-Type Procyanidins and Flavan-3-Ol Analogues

In recent years, computational approaches have been widely used to elucidate the binding targets of bioactive compounds at the molecular level. The characterization of the EBOV virion structure and the availability of co-crystallized 3D structures of the Ebola viral glycoprotein (EBOV-GP) with ligands provide a basis for molecular docking analysis. EBOV-GP is a single protein located at the outer membrane of EBOV and plays an important role in viral attachment, endosomal entry, and fusion. EBOV-GP possesses a trimeric structure, which is composed of the external segment, GP1, and the transmembrane unit, GP2 [24]. Up to now, 11 compounds have been found to interact directly with EBOV-GP. These chemically diversified structures were shown to bind at the same cavity of EBOV-GP but with varied binding modes [25][26][27][28]. It has been suggested that binding of these compounds to this large cavity of EBOV-GP could destabilize the protein and compromise EBOV entry [26]. A computational approach was thus employed to explore the possibility that this EBOV-GP binding cavity is the binding site of B-type procyanidins and their related compounds. To evaluate whether EBOV-GP is the potential target of B-type procyanidins and their related compounds, procyanidins B1 (6) and B2 (1) were docked with the binding domains of EBOV-GP crystal structures in complex with the reported anti-EBOV compounds using AutoDock Vina (Figure 6) [27]. These compounds could be docked to the binding pocket of toremifene (a known ligand of EBOV-GP) in the EBOV-GP crystal structure 5JQ7 [25,29]. The docking conditions were then further applied to the docking of other compounds. We docked eight B-type proanthocyanidins or flavan-3-ol monomers to the EBOV-GP binding cavity (Figure S38). The correlation between the docking score and the IC50 value of a ligand was also studied. As shown in Table 2, the docking scores and IC50 values were inversely correlated, with r = −0.919 (Pearson correlation coefficient, p < 0.01). There is a trend that a compound with a lower IC50 value has a lower (more negative) docking score, indicating that the compound may have a higher binding affinity to the cavity. The molecular docking results suggest that compounds 1 and 6 and their related compounds possibly bind to this EBOV-GP cavity and destabilize the Ebola viral protein, which suggests that EBOV-GP is the potential target protein for procyanidin B2 and its related compounds.
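As an illustration of how such a correlation between docking scores and measured potencies can be computed, the sketch below uses SciPy's Pearson correlation. Only the compound 4 entry (IC50 = 3.60 × 10^-5 M, docking score −6.5) is taken from Table 2; the other numbers and the choice of correlating the score with pIC50 = −log10(IC50) are assumptions for illustration, not the data or exact procedure of the paper.

import numpy as np
from scipy.stats import pearsonr

# Docking scores (kcal/mol) and IC50 values (M) for eight docked compounds.
# Placeholder values except the compound 4 entry (IC50 3.60e-5 M, score -6.5).
scores = np.array([-9.0, -8.7, -8.3, -6.5, -6.8, -8.0, -7.5, -7.1])
ic50_M = np.array([8.3e-7, 9.5e-7, 1.5e-6, 3.6e-5, 2.9e-5, 2.4e-6, 5.2e-6, 1.2e-5])

# Correlate the docking score with pIC50; with this convention a negative r
# means that stronger (more negative) scores accompany more potent compounds.
r, p = pearsonr(scores, -np.log10(ic50_M))
print(f"Pearson r = {r:.3f}, p = {p:.3g}")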
Table 2 lists the IC50 values (a) and docking scores (b) of the docked compounds; for example, compound 4 has an IC50 of 3.60 × 10^-5 M and a docking score of −6.5. (a) Half-maximal inhibitory concentration of a compound against infection with EBOVpp, expressed in molar concentration. (b) The docking scores are the free energies of binding calculated by the scoring function in AutoDock Vina [30].

To confirm the molecular interaction of B-type procyanidins with EBOV-GP, MST experiments were performed to measure the binding affinity between EBOV-GP and procyanidin B2 (1). MST is a biophysical method to characterize the interaction between a protein and a ligand by measuring the diffusion behavior of the labeled protein when the protein is excited by infrared light. The diffusion behavior of the protein is altered when it is complexed with an increasing number of unlabeled ligand molecules. The binding affinity (K_d) of the ligand can be determined from the titration curves [31]. By definition, the dissociation constant (K_d) refers to the concentration of ligand at which half of the ligand binding sites on the protein are occupied by ligand molecules [32]. A His-tag labeling strategy was employed to characterize the interaction between our protein of interest and a small molecule in purification-free cell lysate [33]. In this experiment, a cell lysate expressing EBOV-GP-His6 was used to determine the binding affinity. A pCMV vector coding for only His6 was used as a control. We determined the K_d value for procyanidin B2 binding to EBOV-GP to be 13.0 µM, whereas the K_d value for toremifene (positive control) was measured as 21.4 µM (Figure 7 and Table S6), which is similar to the values reported in the literature (24.3 µM and 14 µM) using different assay buffer systems [25,34]. On the other hand, toremifene and procyanidin B2 could not produce a binding curve with the negative control (Figure S40).
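For reference, the quantity extracted from such an MST titration is the equilibrium dissociation constant of a simple 1:1 binding model; the fraction of bound protein as a function of the free ligand concentration [L] follows the standard relation (a textbook expression, not a formula taken from this paper)

\[
f_{\mathrm{bound}} = \frac{[L]}{K_d + [L]},
\]

so that half of the protein is ligand-bound when [L] = K_d. With the value measured here for procyanidin B2, half-occupancy of EBOV-GP is therefore reached at a ligand concentration of roughly 13 µM.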
Discussion

Ebola virus disease (EVD) is a highly pathogenic disease with a case fatality rate of more than 60%. In the present study, a pseudotyped-virus-based screening assay system was established to discover specific EBOV inhibitors against EBOV-GP-mediated entry, which includes attachment, endocytosis, and fusion [6,24]. The inhibitors may target viral proteins or host factors that are involved in viral infection. The screening of 500 medicinal plants from the Lingnan region of China resulted in the identification of M. perlarius as an anti-EBOV plant lead. The methanol extract (MPBE) of the stem materials of this plant was subjected to column separation guided by our anti-Ebola pseudoviral system, leading to the isolation of three procyanidins (procyanidin B2 (1), procyanidin C1 (2), and epicatechin-(4β→8)-epicatechin-(4β→8)-epicatechin-(4β→8)-epicatechin (3)), along with two flavan-3-ol monomers ((+)-catechin (4) and (−)-epicatechin (5)). To the best of our knowledge, this is the first isolation and identification of procyanidins from a Maesa species.

Procyanidins are condensed tannins that are formed through the C-C linkage of flavan-3-ol monomers, i.e., catechin and epicatechin. However, the biochemical pathway of procyanidin synthesis is not fully understood. Similar to their monomers, procyanidins are polyphenolic compounds and are present in many edible plants. Procyanidins are abundant in unripe apples, barley, sorghum, peanuts, cacao, various berries, and grapes [35]. They were found to have antioxidative effects, and some procyanidin-rich plant extracts are commercially available as dietary supplements [36]. For example, Pycnogenol, the standardized extract of French maritime pine bark, was found to contain over 70% procyanidins [37]. The 3D structure of a procyanidin largely depends on the stereochemistry of the hydroxyl group at C-3 and the bonds formed between the monomers. Procyanidins with different spatial conformations differ in their interactions with molecules and ligands, resulting in different bioactivities [38]. Besides antioxidative effects, immunomodulatory, cardioprotective, and platelet-activation effects of procyanidins have also been reported [39]. Condensed tannins and their monomers were reported to exhibit antiviral activities against various kinds of viruses, including HSV, HIV, SARS-CoV, and the H1N1 influenza virus [40][41][42][43], and our study is the first report of in vitro anti-EBOV activity of procyanidins.

In the current study, the anti-EBOV activities of proanthocyanidins and flavan-3-ols were investigated. The IC50 values of these compounds against infection with EBOVpp in A549 cells ranged from 0.83 µM to 36.0 µM. Compound 1 was the most potent compound, with an IC50 value of 0.83 µM. B-type proanthocyanidins showed stronger overall anti-EBOV activity than their flavan-3-ol monomers. The configurations of the functional groups in proanthocyanidins also affect antiviral activity. Since proanthocyanidins and flavan-3-ols belong to large classes of phytochemicals, there are ample opportunities to explore the structure-activity relationship in order to identify anti-EBOV lead molecules among polyphenols.

Understanding of the anti-EBOV mechanism of procyanidins is very limited. The related compounds gallic acid (IC50 value of 25.4 µM against infection with recombinant infectious EBOV) and EGCG were previously reported as EBOV entry inhibitors. A time-of-addition assay revealed that gallic acid targeted a post-binding step, possibly interfering with GP-mediated fusion. EGCG was found to be an inhibitor of the ER chaperone HSPA5, an Ebola-virus-associated host protein [22,44]. In our study, we attempted to investigate EBOV-GP as the possible molecular target of procyanidin B2 and its related compounds by using molecular docking analysis in conjunction with MST technology to verify the binding affinity of the ligand-protein interaction.
The docking scores were calculated to approximate the binding energies of the flavan-3-ol derivatives to the protein target, EBOV-GP. Our docking analysis indicated that the structures of these flavan-3-ol derivatives could fit into a known EBOV-GP binding cavity targeted by multiple chemically diverse drugs. Though the scores are not equivalent to IC50 values [45], a good correlation was found by comparing the docking scores with the experimental in vitro IC50 values of the eight investigated flavan-3-ol compounds (Table 2). The molecular docking results were further supported by MST experiments measuring the EBOV-GP binding affinity of procyanidin B2 in comparison to toremifene. Different from surface plasmon resonance (SPR) technology, which detects a change in size after the ligand-target binding event, the measurement of the thermophoretic profile in MST is achieved by observing tiny changes in the solvation entropy of the target molecule. MST has several advantages in terms of sample preparation and method development: it does not require immobilization of the target protein on a surface or a complicated buffer system. Purification-free samples can also be used, which allows rapid determination of the binding affinity of a ligand-protein interaction [46]. Our MST experiments confirmed that procyanidin B2 is a ligand of EBOV-GP, with a K_d value of 13.0 µM, consistent with this compound binding to the same cavity of EBOV-GP as toremifene (whose K_d was measured as 21.4 µM). Taking these results into account, EBOV-GP could be the relevant antiviral target for these flavan-3-ol molecules, and the anti-EBOV potency of this type of compound could be further improved by tailoring the flavan-3-ol structures to fit the procyanidin B2 binding site with enhanced binding affinity.

General

Normal-phase column chromatography was performed using silica gel (100-230 mesh, Davisil, or 230-400 mesh, Merck, Darmstadt, Germany). Sephadex LH-20 (Amersham Biosciences UK Ltd., Amersham, UK) was used for size-exclusion chromatography. Reversed-phase column chromatography was carried out on ODS RP-18 silica gel (40-63 µm, Merck). Analytical high-performance liquid chromatography (HPLC) was carried out on an Agilent Technologies Series 1100 HPLC (Agilent Technologies, Inc., Santa Clara, CA, USA) with a diode-array detector (DAD). An analytical Thermo-C18 (150 × 4.6 mm) column was used in the analysis. Semi-preparative HPLC was performed on the same HPLC system equipped with a semi-preparative YMC-Pack ODS-A column, S-5 µm, 12 nm (250 × 20 mm) (YMC Co., Ltd., Kyoto, Japan). The high-resolution electrospray ionization mass spectrometry (HRESIMS) spectra were acquired on an Agilent 6540 time-of-flight mass spectrometer (Agilent Technologies). The optical rotations were measured with a JASCO P-1010 polarimeter (JASCO, Easton, MD, USA). The circular dichroism (CD) spectra were acquired on an Olis DSM 172 circular dichroism spectrophotometer (Olis, Athens, GA, USA). The infrared (IR) spectra were recorded on a PerkinElmer Spectrum One FT-IR spectrometer using potassium bromide (KBr) pellets (PerkinElmer, Waltham, MA, USA). The ultraviolet-visible (UV-vis) spectra were obtained on a Varian Cary 50 UV-Vis spectrophotometer (Agilent Technologies, Santa Clara, CA, USA). The 1D and 2D nuclear magnetic resonance (NMR) spectra were acquired with a Bruker Ascend 400 MHz spectrometer (Bruker, Karlsruhe, Germany).
The unit of chemical shifts (δ) in the NMR spectra is parts per million (ppm). The chemical shifts of deuterated methanol (CD3OD) were used as references (1H: 3.31 ppm; 13C: 49.00 ppm). All coupling constants (J) are expressed in Hertz (Hz). The standard pulse sequences provided by the manufacturer were applied in all NMR experiments.

Plant Materials

The stems of M. perlarius were collected in Hong Kong in October 2014. The collected plants were authenticated by Prof. Chen Hu-Biao of the School of Chinese Medicine, Hong Kong Baptist University. A voucher specimen (SHB0005) is available for inspection upon request at the Quality Research Laboratory/Phytochemistry Laboratory, School of Chinese Medicine, Hong Kong Baptist University. The plant materials were allowed to dry at room temperature for two weeks and were then cut into small pieces and ground into powder with a pulverizer.

Extraction and Isolation

The dried powder of the stems of M. perlarius (5.5 kg) was extracted with methanol (3 × 32 L, 24 h) at room temperature. The extract was filtered and concentrated to dryness on rotary evaporators to afford the methanol extract (MPBE, 529 g), which was fractionated by silica-gel normal-phase column chromatography as described above. The spectroscopic data of compound 1 matched those previously reported for procyanidin B2 (Table S3). The retention time, ion peak, and fragmentation pattern of compound 1 in LC-MS also matched those of the purchased procyanidin B2 standard (Chengdu Biopurify Phytochemicals Ltd., Chengdu, China). Compound 1 was identified as procyanidin B2.

Synthesis of ECG Derivatives

The starting compound, (−)-epicatechin, was purchased from Tokyo Chemical Industry (TCI, Shanghai, China). All other reagents were purchased from Sigma-Aldrich or J&K (Beijing, China). (−)-Epicatechin was first selectively protected with Bn groups as described in the literature [50]. The protected epicatechin (12) was then reacted with benzoyl chlorides, carbonyl chlorides, benzoic acids, and carboxylic acids in the presence of 4-DMAP in CH2Cl2 to afford the derivatives. After esterification, the hydroxyl groups were deprotected by hydrogenolysis to afford the final products. The spectra of these compounds are provided in the Supporting Information.

Cytotoxicity Assay

A sulforhodamine B (SRB) assay with modifications was used to evaluate the cytotoxicity of the samples against A549 cells [54]. Briefly, 4000 cells in 190 µL of complete DMEM medium were seeded in each well of a 96-well cell culture plate. After 24 h, 10 µL of a sample in 10% (v/v) DMSO was introduced to the 96-well plate. An amount of 10 µL of 10% (v/v) DMSO was used as the negative control, and paclitaxel was used as the positive control. After the treated cells were incubated at 37 °C for 72 h, 50 µL of cold trichloroacetic acid (TCA, 50%, w/v) was added to each well. The plates were incubated at 4 °C for an additional 1 h to fix the cells. The plates were then rinsed with slow-running tap water four times and dried at room temperature. An amount of 50 µL of SRB solution (0.4%, w/v) was then added to each well. The plates were incubated at room temperature for 10 min and were washed with 1% (v/v) acetic acid to remove the unbound dye. The plates were allowed to dry. The protein-bound dye in each well was dissolved in 100 µL of Tris base solution (10 mM, pH 10.5), followed by shaking on an orbital shaker for at least 30 min. The optical density (OD) values were measured at 515 nm in a microplate reader.
The CC50 (the concentration of an agent that causes a 50% reduction in cell viability) values were calculated with GraphPad Prism 5.0 (GraphPad Software, San Diego, CA, USA).

Production of EBOV and VSV Pseudovirions

Human embryonic kidney 293T cells were transiently transfected with either 1.875 µg of the VSV-G envelope-expression plasmid or 1.875 µg of the Ebola glycoprotein-expression plasmid (Zaire ebolavirus isolate IRF0164, partial genome) and 13.125 µg of the Env-deficient HIV vector (pNL4-3.Luc.R-E-) [55,56] in a 100 mm cell culture dish via Lipofectamine 3000 (ThermoFisher, Waltham, MA, USA) following the manufacturer's instructions. This pNL4-3 was derived from an infectious molecular clone of an SI, T-tropic virus (Michael et al., 1998) and is replication-deficient since the HIV is Env- and Vpr-. Additionally, the luciferase gene (luc) carried by this recombinant HIV vector serves as the reporter for HIV replication (reverse transcription, integration, and HIV gene expression). Forty-eight hours post-transfection, the supernatants were collected and filtered through a 0.45 µm pore size filter, and the generated pseudovirions (EBOVpp (EBOV pseudovirion): HIV vector with EBOV-GP incorporated on the surface; VSVpp (VSV pseudovirion): HIV vector with VSV-GP incorporated on the surface) were used for infection [44].

Pseudovirion Inhibition Screening Assay

The host A549 (human lung epithelial) cells were seeded at 4000 cells per well (96-well plate) in complete DMEM. Plant extracts, fractions, or compounds (10 µL) and the pseudovirus (90 µL) were incubated with the host cells. Seventy-two hours post-infection, cell lysates were collected for the luciferase assay using a Neolite Reporter Gene Assay System (PerkinElmer). The luciferase signals were measured in a PerkinElmer Envision 2104 multilabel reader.

Antiviral Assay against Infectious Ebola Virus

Samples were serially diluted at a ratio of 1:10, with 100 µM being the highest concentration tested. For determination of cell viability with CellTiter-Glo, HeLa cells were incubated for 48 h with the samples, after which time an equal volume of CellTiter-Glo reagent was added. Luminescence values were determined for control untreated cells and were used as a reference for sample-treated cells. The anti-infectious-EBOV assay was performed following the protocol published by Cui et al. [44]. To evaluate whether the samples inhibit infection, 40 µL of a sample was added to each well of VERO E6 cells. The EBOV inoculum was delivered in 10 µL (MOI = 1). After adding the virus, the cells were incubated for 1 h at 37 °C, after which time they received another 50 µL of sample. After 48 h, cells were fixed with 10% formalin, and infection was measured. EBOV-infected cells were detected with the human antibody KZ52. An anti-human secondary antibody conjugated to Alexa Fluor 488 was used to visualize infected cells, which were counterstained with DAPI. Wells receiving only virus and media were used to establish a "control infection." Control infection rates varied between plates, from approximately 11% to 35% (n = 4 wells per plate). Given that a 35% infection rate is more typical of what is seen in our group, the higher infection rate was used to determine experimental infection rates. Experimental values are expressed as a percentage of the control infection.

Preparation of Proteins for Docking

The target protein, the Ebola glycoprotein, was retrieved from the Protein Data Bank (https://www.rcsb.org/structure/5JQ7) (accessed on 30 December 2021). All bound waters, cofactors, and the original ligand were removed from the protein. Gasteiger charges were computed, and polar hydrogen atoms were subsequently added. The AutoDock atom types were defined using AutoDock Tools 1.5.6, the graphical user interface of AutoDock supplied by MGL Tools. The grid parameters were adjusted to cover the binding site of the original ligand in the protein crystal structure.

Molecular Docking Using AutoDock Vina

The prepared pdbqt files of the ligands and proteins were docked using AutoDock Vina [30]. For each docking, the poses with the eight best docking scores were obtained, and the binding interactions were analyzed.
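A docking run of this kind can be launched from the command line; the sketch below drives the AutoDock Vina executable through Python's subprocess module. The file names and the grid-box centre and size are placeholders (the paper does not report its exact grid settings), so this is a generic illustration of a Vina invocation rather than the authors' configuration.

import subprocess

# Hypothetical file names and grid values -- not the settings used in the paper.
cmd = [
    "vina",
    "--receptor", "5JQ7_GP_prepared.pdbqt",   # EBOV-GP prepared as described above
    "--ligand", "procyanidin_B2.pdbqt",
    "--center_x", "-15.0", "--center_y", "10.0", "--center_z", "20.0",
    "--size_x", "24", "--size_y", "24", "--size_z", "24",
    "--exhaustiveness", "8",
    "--num_modes", "8",                        # keep the eight best-scoring poses
    "--out", "procyanidin_B2_docked.pdbqt",
]
subprocess.run(cmd, check=True)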
Microscale Thermophoresis (MST) Experiment

The plasmids expressing His-tagged EBOV-GP (Ebola virus glycoprotein expression plasmid, C-His tag, cat. no. VG40304-CH, Sino Biological) and the pCMV negative control (cat. no. CV015, Sino Biological) were transfected into 293T cells in separate 90 mm dishes using Lipofectamine 3000. Forty-eight hours post-transfection, the transfected cells were washed with PBS twice, followed by the addition of M-PER lysis buffer supplemented with a protease inhibitor cocktail (Halt™ Protease Inhibitor Cocktail, EDTA-Free, Thermo Fisher Scientific) to collect the cell lysates. MST experiments were performed according to the NanoTemper Technologies protocol on a Monolith NT.115 (red/blue) instrument (NanoTemper Technologies, South San Francisco, CA, USA). In the experiment, the cell lysate expressing the His-tagged proteins was labeled with a Monolith His-Tag Labeling Kit RED-tris-NTA (NanoTemper Technologies), as described by the manufacturer. The final concentration of the target protein in the cell lysate was kept constant at 25 nM. PBST/M-PER (1:1) was used as the assay buffer. A small-molecule DMSO stock solution (10 µL) was diluted 1:10 with assay buffer and then serially diluted 1:1 in 10 µL of assay buffer to make a 16-sample dilution series (final concentrations ranging from 1.52 nM to 50 µM). Labelled cell lysate (10 µL) was added to the diluted drug solution, and the mixture was incubated for 20 min in the dark. The mixtures were then loaded into standard Monolith NT.115 capillaries. The experiments were performed on a Monolith NT.115 Red system using medium MST power and 30% LED power at 25 °C. The data acquired with MO.Control (NanoTemper Technologies) were analyzed using MO.Affinity Analysis 3.0.5 (NanoTemper Technologies), and the K_d values were determined by fitting the data with the K_d model.

Conclusions

Ebola virus disease (EVD) is associated with fulminant hemorrhagic fever and high case fatality rates. Researchers are making efforts to develop anti-EBOV drugs; however, biosafety level 4 laboratories are required to handle EBOV, which constrains EBOV-related studies. A pseudotyped virion system that consists of an envelope-deficient core from one virus, a luciferase reporter gene, and the envelope glycoprotein from another virus provides a convenient alternative platform for antiviral research. In the present study, we established a pseudotyped screening system involving two types of pseudovirions, EBOVpp and VSVpp, to evaluate the anti-EBOV activity of a sample. By comparing the inhibitory effects of a sample against the two pseudovirions, an antiviral agent that specifically targets EBOV can be identified. Natural resources are reservoirs of druggable compounds.
The considerable plant resources of the Lingnan region provide abundant potential for the discovery of lead molecules. We screened 500 plant extracts made from medicinal plants collected in this region, which led to the discovery of M. perlarius as an anti-EBOV lead. Prior phytochemical and pharmacological studies of Maesa plants mainly focused on triterpenoid saponins. Our bioassay-guided fractionation study of the methanol extract of the stems of M. perlarius resulted in the discovery of a number of flavan-3-ol oligomers as potent EBOV inhibitors. In particular, we identified B-type procyanidins, a class of condensed tannins, as a new category of anti-EBOV phytochemicals. Furthermore, procyanidin B2 (1) was docked to fit the binding pocket of EBOV-GP, which was supported by MST experiments comparing it with the known EBOV-GP ligand toremifene. Thus, the current study presents flavan-3-ol monomers and oligomers as lead molecules that can be used as scaffolds for the target-oriented synthesis of additional analogues possessing improved anti-EBOV potency.

Data Availability Statement: The data presented in this study are available on request from the corresponding authors.
9,745.4
2022-02-27T00:00:00.000
[ "Biology" ]
Research review on cutting temperature and cutting vibration of titanium alloy

Titanium alloy, known as a "strategic metal", is a typical difficult-to-machine material that produces high temperatures and strong vibration during cutting owing to its high strength, low thermal conductivity and low elastic modulus. During machining, tool wear is severe, and the accompanying cutting heat and cutting vibration are coupled to each other and change over time. Studying the temperature and vibration characteristics of the machining process helps to further understand the cutting mechanism, better control temperature and vibration, improve machining quality and reduce production cost. This paper reviews the current research status of cutting temperature and cutting vibration at home and abroad. Scholars have focused on the influence of cutting factors on temperature and vibration, based on finite element simulation and mathematical modelling, and research on the coupling of temperature and vibration in titanium alloy cutting should be further strengthened.

Introduction

Due to its high specific strength, excellent chemical corrosion resistance and biocompatibility, titanium alloy is widely used in high-precision industries, notably petroleum and power generation, marine engineering and the medical industry. Although titanium is not a scarce resource on Earth, it was not widely used for many years, mainly because of the high application cost of titanium alloys. Despite the rapid development and wide application of titanium alloys, their machining remains a major industrial problem. Titanium alloy is a typical difficult-to-cut metal material. The main problems in the cutting process include: (1) a small chip deformation coefficient, (2) high cutting temperature, (3) large cutting force per unit area, (4) severe work hardening, and (5) rapid tool wear. Therefore, it is of great significance to study thoroughly and systematically the turning temperature and turning vibration characteristics of titanium alloy during machining. In recent years, many scholars at home and abroad have studied the temperature and vibration characteristics of titanium alloy turning through theory, experiments, finite element simulation and predictive modelling.

2 Current status of cutting temperature

Owing to the high strength and low thermal conductivity of titanium alloy, high pressure and high temperature are generated in the tool-workpiece contact area, and almost all of the energy consumed is converted into heat. Cutting heat is generally dissipated through the chips, the tool, the workpiece and the surrounding media. Cutting titanium alloy produces high temperatures and strong vibration because of its high strength, low thermal conductivity and low elastic modulus. Reasonable control of the cutting temperature can reduce the effect of thermal deformation, reduce tool wear, extend tool life, and thus stabilize machining precision and quality.

Factors affecting the cutting temperature

Practice has proved that the workpiece material, the tool material and structure, the machining mode and the cutting parameter settings are all important factors affecting the cutting temperature.
Davies [1] reviewed several widely used cutting-temperature measurement methods, showing how temperature can be monitored during material removal and comparing the methods against criteria critical for material-removal research, thereby providing a reference for scholars in the field. Han [2] performed high-speed cutting and milling tests on Ti-6Al-4V titanium alloy and analyzed the trend of the cutting temperature under dry cutting, air-jet and nitrogen-jet conditions. Zhou [3] used low-temperature cooling technology; the results show that low-temperature cooling can significantly improve the stress and thermal-stress state of cutting tools. Liu et al. [4][5][6][7] studied ultrasonic vibration-assisted turning, examined the changes in amplitude, cutting velocity and displacement in different directions, and analyzed the influence of the two cutting methods (conventional and ultrasonic-assisted) on the cutting force, the maximum temperature and the machining properties of the material. Venkatesan [8] made an experimental study of laser-assisted machining of Inconel 718 using the turning process and analyzed the effects of the laser-cutting parameters (i.e. cutting speed, feed rate and laser power) on cutting force and cutting temperature. Xavior [9] carried out precision turning of Ti-6Al-4V titanium alloy; based on the experimental values, analysis of variance was carried out to understand the effects of cutting speed, feed rate, cutting depth and tool-tip radius on cutting force, surface roughness and cutting-tool temperature.

Use of finite element simulation to monitor and predict cutting temperature

With the development of computer science and technology, finite element simulation has become the main means of studying cutting heat and cutting temperature. He et al. [10][11] used finite element analysis (FEA) technology to model titanium alloy cutting, analyzed the influence of the cutting parameters on the cutting temperature, and established a regression model between cutting parameters and cutting temperature to predict the cutting temperature of titanium alloy. Veeranaath [12] used FEA technology to study the changes in machinability, cutting stress and temperature of PCBN tools in the dry machining of Ti-6Al-4V. Tian [13] performed a numerical simulation of the high-speed milling temperature field of titanium alloy TC4 and studied the influence of the cutting parameters on the rake- and flank-face temperatures and their distribution. Based on the Fluent solid energy equation, Du [14] constructed a finite element model of heat transfer on the tool-chip contact surface in the second deformation zone and discussed the effects of the coating material, coating thickness and actual tool-chip contact area on heat transfer when cutting H13 hardened die steel with coated tools. Yan [15] used FEA technology to establish a three-dimensional, three-factor cutting finite element model of TC4 titanium alloy and analyzed and compared the cutting forces of three different coated tools. He [16] used FEA technology to establish a cutting model and analyzed by simulation the effect of the cutting parameters on the cutting temperature field and the residual stress field. Combined with the Johnson-Cook material law, Zhang [17] established a three-dimensional finite element model (FEM) for oblique cutting and studied the distribution of cutting force, temperature and residual stress during oblique cutting of TC21 alloy. Wang [18] used two finite element software packages under the same simulation parameters to analyze the cutting temperature distribution and the trend of the cutting temperature with cutting speed.
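As a minimal illustration of the kind of empirical cutting-temperature regression model mentioned above (He et al. [10][11]) and discussed further in the next subsection, the sketch below fits a power-law model T = C·v^a·f^b·ap^c by log-linear least squares. The measurement values are invented placeholders, not data from the cited studies, and the power-law form itself is assumed here only as a common choice for such regressions.

import numpy as np

# Hypothetical measurements (cutting speed v [m/min], feed f [mm/rev],
# depth of cut ap [mm], measured cutting temperature T [deg C]).
v  = np.array([40, 60, 80, 100, 120, 140])
f  = np.array([0.10, 0.10, 0.15, 0.15, 0.20, 0.20])
ap = np.array([0.5, 1.0, 0.5, 1.0, 0.5, 1.0])
T  = np.array([520, 610, 640, 720, 760, 830])

# Empirical power-law model T = C * v^a * f^b * ap^c, linearized by taking logs:
# ln T = ln C + a ln v + b ln f + c ln ap, solved by ordinary least squares.
X = np.column_stack([np.ones_like(v, dtype=float), np.log(v), np.log(f), np.log(ap)])
coef, *_ = np.linalg.lstsq(X, np.log(T), rcond=None)
C, a, b, c = np.exp(coef[0]), coef[1], coef[2], coef[3]
print(f"T ≈ {C:.1f} * v^{a:.2f} * f^{b:.2f} * ap^{c:.2f}")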
Predictive modelling using mathematical methods

In addition to simulation analysis of the cutting temperature with finite element software, many scholars use mathematical models to establish analytical models related to the cutting temperature and to predict the change of the cutting temperature during machining with higher efficiency. Qu [19] established a mathematical model of the cutting temperature and cutting force in CNC machining of D406A and determined the parameters in the model by cutting tests. Ulutan [20] and Yan [21] predicted the temperature during machining based on a thermal modelling technique which can greatly reduce the calculation time without much impact on accuracy. Mohring [22] proposed a numerical iterative method for calculating the temperature of the contact area between the chip and the tool; the tool temperatures measured on the rake and flank faces were compared with the calculated temperatures. Shan [23] developed an improved analytical model based on the moving-heat-source method to predict the distribution of the cutting temperature in orthogonal cutting of the titanium alloy TC4; the relative difference of the predicted temperature was 0.49% to 9%, indicating high predictive accuracy. Dayan [24] used the full factorial method to design tests and established three prediction models for surface roughness, temperature and metal removal rate from the obtained test data; the established models are in good agreement with the test results. Pervaiz [25] combined traditional finite element machining simulations with computational fluid dynamics models to establish a new method for analyzing the temperature distribution of cutting tools and predicting the cutting heat and tool-tip temperature during machining. Imbrogno [26] developed a new 3D finite element model to predict the cutting force, temperature and corresponding microstructural changes in semi-finishing machining of Ti-6Al-4V under dry and cryogenic conditions, and calibrated and verified it.

Current research status of cutting vibration

Titanium alloy causes sharp cutting vibration during cutting due to its low elastic modulus and high strength. When the cutting vibration grows beyond a certain extent, tool vibration directly affects tool wear; by monitoring the tool vibration, information about tool wear can be obtained, improving the production quality and efficiency of cutting [27]. To this end, scholars have studied the influence on cutting vibration of the cutting parameters, the structure of the machine tool itself, the tool structure and other aspects of titanium alloy cutting.

Impact factors of cutting vibration

Practice has proved that the workpiece material, the tool material and structure, the machining mode and the cutting parameter settings are all important factors affecting the cutting vibration.
Sun [28] performed the first systematic numerical and theoretical study of the high-speed vibration-assisted cutting process of Ti-6Al-4V, which deepens the understanding of the vibration effect in metal cutting and provides practical guidance for the use of vibration-assisted machining. Yan [29] studied how tool vibration changes with the cutting parameters during machining of TA2 pure titanium and TC4 titanium alloy. Zhou [30] carried out external cylindrical turning of TA15 titanium alloy, collected and extracted characteristic values of the vibration signal, analyzed how the cutting amount influences the cutting vibration and the surface roughness, and explained the effect of the cutting amount on vibration. Zhu [31] systematically investigated the relationship between tool wear, chip morphology and cutting vibration, and found that tool wear aggravates the cutting vibration, which in turn aggravates wear at the chip edge. Monitoring and predicting tool wear based on cutting vibration Titanium alloy's high strength and low thermal conductivity produce serrated chips that aggravate vibration of the tool system; reasonable monitoring of the cutting vibration can mitigate this and thereby reduce tool wear and the deterioration of surface quality. Scholars analyze the optimal processing conditions of titanium alloy by building prediction models. Silva [32] proposed a model for predicting the milling force that is suitable for milling with toroidal (ring) tools and accounts for the cutting vibration during machining. Combined with FEM simulation software, Patil [33] performed a two-dimensional transient FEM simulation and an experimental characterization of titanium alloy under ultrasonic-vibration-assisted cutting conditions using DEFORM. Wang [34] used wavelet features and a support vector machine pattern recognition algorithm to establish a milling vibration recognition system that divides the vibration signal into three categories. Measures for reducing cutting vibration Tool holders of special structure, tools of special shape, and ultrasonic-vibration-assisted cutting can all be effective in reducing cutting vibration. By manufacturing hollow elements of special structure in the holder shaft, Vogel [35] greatly reduced the vibration amplitude of the tool and markedly improved the production quality of the workpiece. The textured tool proposed by Suzuki [36] effectively introduces process damping and suppresses vibration, performing well in experiments compared with ordinary tools. Rao [37] found that a new tool with a micro-porous pattern on the rake face and flank can reduce friction on the rake face and thus reduce vibration. Zheng [38] developed a rotary vibration control system based on multiple time delays; numerical simulation showed that the designed controller can effectively suppress the cutting vibration and improve the stability of the cutting process. Identifying the dynamics of the cutting process through operational modal analysis, Kim [39] proposed a new vibration monitoring method that can determine the stability level of cutting before the vibration becomes unstable. Martin [40] studied the influence of the cutting parameters on the vibration of machined parts; the experimental results show that the feed rate is the most important factor affecting the machining result.
Chen [41] proposed a novel nested artificial neural network that predicts cutting states such as cutting vibration and surface roughness more accurately than conventional RSM and linear regression models. Compared with traditional mathematical models, however, the newly proposed neural-network-type model shows a suspiciously high coefficient of determination and lacks tests on an unseen test set. Conclusion The temperature and vibration generated during the actual cutting process are coupled to each other and change over time. On the basis of theoretical analysis, scholars have studied the temperature and vibration of titanium alloy cutting. Nevertheless, the following points indicate how research on cutting temperature and vibration could be improved. (1) Testing: at present there are many research results on single-quantity measurement of either cutting temperature or cutting vibration, but few on their simultaneous acquisition. In future research, a simultaneous measurement system for cutting temperature and vibration could be built, providing an experimental basis for analyzing the coupling characteristics between them. (2) Intelligent prediction: most of the current literature analyzes the influence of cutting parameters and tool structure parameters on cutting temperature and vibration through experimental testing and theoretical study, and relies heavily on numerical simulation methods such as finite element models to analyze the temperature and vibration characteristics. Few scholars have established vibration and temperature prediction models; more such models could be built in the future to provide a reference solution to the current difficulty of measuring the turning temperature of titanium alloys on the factory floor. (3) Coupling characteristics: at present, most of the literature establishes empirical formulas for the turning temperature as a function of the cutting parameters, or prediction models of the cutting vibration as a function of the turning parameters. Few works establish a fitted model between cutting vibration and temperature or study the coupling characteristics between the two. Follow-up research on the coupling characteristics between cutting temperature and turning vibration therefore needs to be strengthened.
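The first and third conclusions call for simultaneously acquired temperature and vibration signals and a fitted relationship between them. The sketch below is one possible starting point under those assumptions: the signals are synthetic and purely illustrative, and a windowed-RMS-versus-mean-temperature Pearson correlation is only one of many possible coupling measures.

```python
import numpy as np

# Synthetic, synchronously sampled signals (illustrative only): tool temperature [deg C]
# and vibration acceleration [m/s^2], both sampled at 1 kHz during 10 s of cutting.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 0.001)
temp = 400 + 20 * np.tanh(t) + rng.normal(0, 1, t.size)
vib = (0.5 + 0.004 * temp) * np.sin(2 * np.pi * 180 * t) + rng.normal(0, 0.2, t.size)

# Reduce the fast vibration signal to a per-window RMS value so it can be compared
# with the slowly varying temperature (window length = 100 ms).
win = 100
n = t.size // win
vib_rms = np.sqrt((vib[: n * win].reshape(n, win) ** 2).mean(axis=1))
temp_avg = temp[: n * win].reshape(n, win).mean(axis=1)

# Pearson correlation between the windowed temperature and the vibration RMS series
r = np.corrcoef(temp_avg, vib_rms)[0, 1]
print(f"Pearson correlation between temperature and vibration RMS: r = {r:.2f}")
```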
3,257.8
2021-01-01T00:00:00.000
[ "Materials Science" ]
Evaluation of Glass-Ionomer versus Bulk-Fill Resin Composite: A Two-Year Randomized Clinical Study Background: The aim of this split-mouth design research was to compare the clinical performance of a glass-ionomer cement system on Class I/II cavities against the clinical performance of bulk-fill resin composite restoration materials. Methods: Thirty-five patients were randomized and enrolled in the study, aged between 10 and 12 years, all of whom had a matched pair of permanent mandibular carious molars with similar Class I/II. A total of 70 restoration placements were performed. The patients were each given two restorations consisting of either a glass-ionomer cement with a nano-filled coating or a bulk-fill resin composite after the use of a self-etch adhesive. The cumulative survival rates were estimated using log-rank test and the Kaplan–Meier method. For comparison of the restorative materials in line with the modified Ryge, the McNemar test and the Wilcoxon signed rank test were employed. Results: With regard to retention, the glass-ionomer cement system and bulk-fill resin composite performed similarly in permanent molars in Class I/II cavities over a period of up to 24-months (p > 0.05). Over the 24-month period, Class I restorations showed statistically better survival rates than Class II restorations (p < 0.05). In the case of glass-ionomer cement systems, over the two-year period, more common chipping and surface degradations were observed. Conclusions: The glass-ionomer cement system and bulk-fill resin composite restorative materials display good clinical performance over a period of 24-months. Introduction A range of different materials are available for use in the restoration of permanent and primary teeth, such as conventional glass ionomer cement (GIC), compomer, resin-modified glass ionomer cement, and composite resins [1][2][3]. All of these have been commonly used in Europe as alternatives to amalgam [4]. The most-favored material generally is GIC, which, through the release of fluoride, offers extra protection against caries [1,3,4]. It should be noted, however, when a subsequent systematic review of clinical studies was carried out, the results showed no conclusive evidence of a caries-inhibitory effect [5]. Moreover, questions were raised by many studies as to whether conventional GIC does contain antibacterial properties, particularly regarding the viability of residual bacteria in carious dentin after all types of glass-ionomer restorations [4][5][6]. Other concerns that have been raised about GIC are a suspected tendency for microleakage and some deficiencies in physical properties, resulting in greater wear in stress-bearing cavities [7]. Another alternative material to GIC used in pedodontics clinics are types of composites. The new composite materials introduced to the market are described as 'bulk-fill resin composites' (BRC). The introduction of bulk-fill resin materials brought changes in their composition [8]. For this reason, regarding the application of traditional light-curing resin-based composites, it has been recommended as a gold standard that progressive increments of controlled thickness are used [9]. The maximum increment of 2 mm thickness has been generally accepted. Despite this, this is a time-consuming process in the case of deep cavities and in the case of contamination between the increments, and carries an increased risk of air bubble development [8]. 
Such factors as the resin shade/type filler amount and the intensity and spectrum of the activation light were all found to have a bearing on the cure depth of the resin composite [9,10]. These need to be accompanied by qualities of good adaptation and low residual stress, as required for all materials and filling techniques, and are essential through polymerization [10]. In recent years, dental practitioners have benefited from a significant expansion in the range and number of restorative materials available for use. A high-viscosity GIC system (GICs), developed over the years, has emerged as a prominent player in recent years due to its physical and chemical attributes, including fluorine release, adhesion to tooth tissue, biocompatibility, and its similarity to dentin tissue in terms of the thermal expansion coefficient [11]. The key features required in restorative materials are their similarity to dental tissue in terms of physical and chemical structure, high aesthetic and mechanical properties of restorative materials, and the ability to remain in the mouth for a long time while enabling the tooth to function as expected. In addition to these issues in child dentistry, it is essential that restorative materials can be applied to cavities quickly and easily. It is for this reason that GIC has emerged as a leading restorative material in pedodontics [12]. However, GIC does have some disadvantages when compared to resin composites [11,13]. These include the aesthetic appearance, lower fracture resistance, and greater occlusal abrasion. High-viscosity GIC, on the other hand, has improved the physical properties of the conventional product by optimizing the distribution of polyacids and particles. Recently, a GIC was launched using a cavity conditioner and light-curing nano-filler self-adhesive protective resin surface coating (G-Coat Plus) [13]. The availability of an increasing range of materials for use in restorative treatment has led to a growth in adhesive dentistry, and the use of composite resin with adhesive systems has become widespread [14]. It is recognized that the placement of the composite restoration on molar teeth can be a time-consuming process. It is recommended that the composite is placed in the cavity using the incremental technique. Bulk-fill-type composite materials have been developed to facilitate material placement and polymerization [9,10]. The term "bulk technique" is used because the chemically polymerized composites are placed into the cavity in a single layer. The name of the composite is also the name of the application technique [9]. It is known that the application of only a single layer reduces the clinical time for the physician and improves comfort for the patient [14]. The system which initiates polymerization in these composites also shortens the curing time. BRC have been reported to polymerize slowly due to modified methacrylate resins [15]. Kim et al. used four types of BRC material in their assessment, to test for micro-rigidity of the resin thickness. They concluded that BRC with a thickness application of 4 mm is appropriate [16]. The aim of our study was to compare the two-year clinical performance of GICs when used with self-adhesive resin, against BRC, used in conjunction with a self-etch adhesive. Materials and Methods The study protocol of this randomized, controlled, clinical trial was approved by the Ethics Committee of the Ege University (ethical code: 13-4/7). 
The aim of the study, its procedures, and its related risks were explained to the children and their parents before its commencement. Informed consent forms were signed by all the parents. A total of thirty children attending the pediatric dentistry clinic of Ege University were included. Twenty-five of the patients received two restorations, while five of them received four restorations. Patient Selection The selection of the children was based on the following inclusion/exclusion criteria. Inclusion criteria were as follows: (i) Children aged between 10 and 12 years who were healthy, without any known history of systemic illness; (ii) Children who had a matched pair of maxillary and mandibular first/second permanent molars with an approximal or occlusal carious lesion of the same size; (iii) Children whose permanent molars had a dentinal lesion at least 2.5 mm deep; (iv) Children who had not received a fissure sealant application; (v) Children who could attend the clinic regularly for follow-up visits throughout the 24 months. Exclusion criteria were as follows: (i) Children with special needs; (ii) Uncooperative children; (iii) Children with molar incisor hypomineralization, enamel hypoplasia, or dental fluorosis; (iv) Children with parafunctional habits or bruxism; (v) Children with teeth that received excessive or no load due to malocclusion. Restoration Process The operators all carried out their examinations under operating light using a mirror and probe, and all reported caries of the same size. We used a split-mouth design for the two different restorative materials. In preparing the cavities after removal of the caries, diamond round burs were used in the aerator (air-turbine) handpiece. Tactile and standard visual criteria were used to verify the thorough removal of caries after cavity preparation. Cavity sizes (depths) were measured with the help of root canal instruments and an endometer. The BRC was applied in layers, each approximately 4 mm thick, whereas in the GICs group the material was placed in a single layer, regardless of the cavity depth. The power analysis of this study was performed based on the two different material evaluation periods to calculate the sample size. The sample size required for each group was determined to be at least 35 teeth (n = 70 restorations in total) using G*Power software version 3.1.9.7 for Windows (Heinrich Heine Universität, Düsseldorf, Germany), for a power of 94% and an effect size of 1.73. In total, two operators performed 70 restoration placements. One of the two materials was EQUIA, an encapsulated GICs marketed for use in all types of restorations. The other material was Tetric EvoCeram, applied in conjunction with the one-component bonding system AdheSE One F. The restorative material to be placed was randomly selected, before mixing of the materials and treatment of the dentin, by a dentist other than the operator (double-blinded), and the materials were handled in line with the manufacturers' instructions. The clinician used a SuperMat Universal matrix tensioning system (Kerr, Switzerland) to maintain tight adaptation of the restorations. Application of GICs: The cavities were conditioned with a cavity conditioner (GC Cavity Conditioner) for 10 s, applied with cotton pellets. The cavities were washed with water and air-dried. After the glass-ionomer capsule was activated, it was mixed in a Silver Mix 90 mixing device for 10 s and the GIC material was placed in the cavity. Moisture contamination was kept to a minimum.
At the end of this period, excess material was removed with diamond burs, and G-Coat Plus was applied and light-cured for 20 s. Once the restorations had initially set, the dentist carried out the necessary occlusal adjustment before applying G-Coat Plus over them. Application of BRC material: AdheSE was applied to the cavity with brushing agitation using a VivaPen for 20 s. The adhesive was polymerized for 10 s (Woodpecker, Yixing, China). The BRC material was placed into the cavity to a maximum thickness of 4 mm and then polymerized for 15 s. The silicone-based polishing material OptraPol was used for finishing the restoration after occlusion control. Any moisture that accumulated in this time was absorbed using cotton wool rolls. The clinician applied a dentin liner, TheraCal, in the deep dentinal lesions for pulpal protection, but teeth with pulpal exposure were excluded. A matrix was used for all Class II restorations to maintain tight adaptation (Table 1). Assessment and Statistical Analyses The examiners, who were not involved in the placement procedures, evaluated all the restorations at 6, 12, 18, and 24 months. Cohen's kappa values for inter-examiner and intra-examiner reliability, obtained after repeated examinations of 10% of the study group, were 0.91 and 0.82, respectively. The examiners performed the examinations during regular visits, using the modified Ryge criteria to evaluate the restorations (Table 2). In assessing the restorations, the location and type of tooth were recorded by the examiners. Two of the authors of the study entered the data into spreadsheets and analyzed them using statistical software (SPSS 13.0, SPSS, Chicago). Cumulative survival rates were estimated using the log-rank test and the Kaplan-Meier method. For comparison of the restorative materials in line with the modified Ryge criteria, the McNemar test and the Wilcoxon signed rank test were employed. Results The mean age of the children was 11.2 years (Table 3). The cavity sizes ranged from 2.5 to 6 mm (mean: 3.7 mm). The 6-month cumulative survival rate for the Class I/II restorations was 100% for both the GICs and the BRC restorations. The 12-month survival rate for Class I/II restorations was likewise 100% for the GICs and BRC restorations. When the one-year clinical observations were examined, the retention rate of the restorations was 100%. The survival rates for both materials were 100% for a mean observation period of 10.4 months. Two of the GICs restorations showed chipping, but none in the BRC group. No secondary caries were observed in any restoration. Marginal discoloration scores were similar for both materials, and no restoration scored Charlie, although the marginal fit was impaired in some cases. The examiners evaluated 29 patients (68 restorations in total) after 24 months of follow-up. The patient follow-up rate was 96.6%. The 24-month survival rates for Class I/II restorations were 97.1% and 100% for the GICs and BRC restorations, respectively. At the end of the 24-month follow-up, it was found that the retention, secondary caries, and marginal discoloration variables had no significant effect on the survival rates of the restorations (p > 0.05). After 24 months, the results indicated only three cases of chipping for the GICs in Class II cavities. For the same period, no failures were observed for BRC. Secondary caries were observed in GICs fillings (1 Charlie, 1 Bravo) and one BRC filling (1 Bravo).
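For reference, the survival comparison described in the statistical analysis above can be reproduced with standard open-source tools. The sketch below uses the lifelines library with invented follow-up data (not the study data) to illustrate the Kaplan-Meier estimate and the log-rank comparison of the two materials.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Invented follow-up data (months), for illustration only -- not the study data.
# event = 1 means the restoration failed; event = 0 means it was censored (still in place).
t_gic = np.array([24, 24, 24, 24, 24, 18, 24, 24, 12, 24])
e_gic = np.array([0, 0, 0, 0, 0, 1, 0, 0, 1, 0])
t_brc = np.array([24] * 10)
e_brc = np.zeros(10, dtype=int)

# Kaplan-Meier estimate of cumulative survival for one material
kmf = KaplanMeierFitter()
kmf.fit(t_gic, event_observed=e_gic, label="GIC")
print("GIC survival at 24 months:", float(kmf.predict(24)))

# Log-rank test comparing the two survival curves
res = logrank_test(t_gic, t_brc, event_observed_A=e_gic, event_observed_B=e_brc)
print(f"log-rank p-value: {res.p_value:.3f}")
```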
Marginal discoloration scores were similar for the two materials (7 GICs, 7 BRC) but surface porosity (10 GICs (2 Charlie), 3 BRC), and marginal deterioration (10 GICs (1 Charlie), 2 BRC) were all more prevalent with GICs in the Class II cavities. No statistically significant difference was apparent between the groups for either material (p > 0.05). Postoperative sensitivity scores were acceptable and similar for both materials (1 GICs and 1 BRC). Furthermore, no statistically significant difference was found after 24 months between the total numbers of Class I restorations patients that survived in the GICs and the BRC groups (p > 0.05). Over the 24-month period, Class I restoration patients showed statistically better survival rates than Class II restoration patients. The cumulative survival percentages at 6, 12, and 24 months were calculated for each of the two restorative materials, according to class. This gave the distribution of the retained restorations according to the modified Ryge categories. However, no statistically significant difference was demonstrated between the two materials for any of the criteria, or with respect to the BRC or the GICs restorations (Table 4). Discussion Regarding new GICs, long-term clinical trials have assessed the GICs' clinical performance in vivo studies in permanent molar teeth. Although some clinical studies have been carried out on the use of GIC as a restorative material in permanent teeth, only limited information is available on its use in relation to Class II cavities. We sought to establish whether reports were correct that the restorative GICs can be used effectively in both Class I and II cavities. In accordance with the modified Ryge criteria, to ensure as far as possible that objectivity was maintained, the two experienced experts were nominated, independent from the investigation team, to evaluate all the restorations. Our study therefore conformed with the recommendations of Hickel et al., who maintained that evaluation should be carried out by a researcher other than the physician who is applying the restorations [17]. A further check of the restorations was included at two years to complete the long-term study. In most clinical follow-up studies, modified Ryge evaluations criteria are used to assess the clinical performance of restorations [13,18]. Fridley et al. studied the clinical performance of GICs for the restoration of permanent molar teeth over 24 months. Their examination of 26 Class I and 125 Class II restorations in a total of 43 patients led to the conclusion that GICs was a suitable restoration material for permanent molar teeth. They concluded therefore that GICs can be used in Class I cavities of all sizes in permanent teeth, but it is more suitable for use in smaller Class II cavities [19]. In their study, Türkün and Kanık used GICs, GIC and Riva SC with two dissimilar surface coating materials (Fuji Varnish, G-Coat Plus). These were evaluated according to the modified Ryge criteria at the 6th, 12th and 18th month and the 6th year. They reported that, despite minor repairable defects, the overall clinical performance of GIC was found to be excellent. Compared to Riva SC, this was evident even in the case of large posterior Class II restorations at the final period of 6 years [20]. Khandewal et al. reported that the GICs was found to be successful at 88.8% for Class I cavities at the end of the 24-month follow-up [19]. Gürgan et al. 
carried out a four-year evaluation to determine the clinical performance of Gradia Direct restorations with GICs. No significant changes were reported with regard to color harmony, surface roughness, postoperative sensitivity, secondary caries, or anatomic form [13]. These findings accord with those of our study. Ersin et al. used composites and high-viscosity GIC in their study. They found that the Surefil composite and Fuji IX materials performed well in 419 Class I and Class II restorations. Their studies showed both restorative materials to be successful, with no difference between them (p > 0.05), although the Class II restorations were significantly less successful than the Class I restorations [21]. The criterion of retention is a crucial parameter in evaluating the clinical performance of restorations. The ADA has stated that the retention rate should be at least 90% after 18 months for a material to be considered clinically successful [22]. Gürgan et al. reported a follow-up of Gradia Direct restorations with GICs applied to the surface. At the end of a four-year period, two restorations using the glass-ionomer application had been lost and the success rate was found to be 97.1% (100% in Class I restorations and 92.39% in Class II restorations) [13]. Hickel et al. reported that the loss rate of GIC ranged from 0 to 25.8% at the end of one year. It has been reported that the main cause of the losses in Class II restorations, as opposed to Class I, is fracture under high occlusal load [23]. In our study, both restoration materials were evaluated as successful according to the ADA criteria for the 6-month and 24-month periods (100% and 97.1%). In our study, when the color matching of the restorations was evaluated, all restorations initially received the Alpha score. Both restoration materials showed similar results at the end of 24 months and were deemed successful. Gürgan et al. came to analogous conclusions in their studies evaluating the clinical performance of the composite resin Gradia Direct Posterior against GICs. They reported that the color matching of both restoration materials was successful and remained similar after four years. Gürgan et al. also reported, although their results were not statistically significant, more marginal coloration in the Gradia Direct Posterior restorations, placed with a self-etch adhesive, than in the GICs, attributing this to the weaker adhesion formed at the edges of the cavity [13]. Loss of marginal harmony between the tooth tissue and the material, interruption of the bond at the interface, application-related errors, and polymerization shrinkage can all play a role. This situation leads to microleakage, marginal coloration, and secondary caries [8,10]. Mahmoud et al. reported that coloration occurring between untreated enamel surfaces and residual composite excess could also be a factor [24]. In our study, there were no unsuccessful restorations in the 24-month evaluation period with regard to the margin criterion. It is acknowledged that roughness on the surface of restorations is often due to insufficient finishing and polishing. This affects the overall clinical performance of restorations through coloration, plaque accumulation, gingival irritation, and an increased susceptibility to recurrent caries. Polishing rubbers, abrasive finishing strips, and polishing brushes are used for this purpose, though Sof-Lex discs have been reported to be more successful than other polishing systems [25].
Our study indicates that effective finishing and polishing using these procedures significantly affects the clinical performance of restorations. It is thought that the G-Coat Plus surface-coating material applied over the GIC in the GICs group also contributes to the success of the restorations with respect to the surface roughness criterion after 24 months. In our study, when the restorations were evaluated according to this criterion, only two C-D scores were observed in the GICs group. In the clinical follow-up of our study, no significant difference was found between the two restoration materials. Postoperative sensitivity can stem from a range of different causes, including trauma, excessive drying, and marginal leakage during the procedure. It is thought that the coating and adhesive application, together with the TheraCal LC (Bisco, Schaumburg, IL, USA) pulp liner applied to deep cavities in the GICs and BRC restorations, can be effective in decreasing sensitivity. We set out in this study to determine whether Class I/II restorations using BRC would produce a higher retention rate in permanent teeth when compared with those carried out using GICs. Previously, clinicians favored resin-based composites for the treatment of permanent molars. Moreover, one of the cited studies suggested that resin-based composites, together with the adhesive systems recommended for use with them, might offer a longer survival time than GICs when used for Class II restorations [26]. Research by Hickel et al. found that annual failure rates for stress-bearing cavities of primary molars were 0 to 25.8 percent for GIC restorations, compared to 0 to 15 percent for resin-based composite restorations [17]. Our own findings showed the BRC to be satisfactory for Class I and II restorations in permanent teeth at the 24-month recall examination. In the case of the Class II restorations, the survival rate for the resin-based composite was higher than that for the GICs, though this difference was not statistically significant. It is possible, however, that a statistically significant difference might be obtained if the evaluation period were further extended. The BRC also has the added benefit of reducing sensitivity when applied in deep cavities. The main failure characteristic for Class I/II restorations in the case of both materials was the loss of the restoration. We rarely observed caries at the margin, and secondary caries, which are generally a major cause of restoration failure, were also rarely in evidence. For both materials in our study, the surface texture, integrity, anatomical form, and marginal discoloration of all restorations placed were found to be satisfactory. The GICs and BRC in both cavity types exhibited highly similar clinical performance over the observation period of two years. Consequently, the null hypothesis formulated at the beginning of this study was accepted. The GICs' features are an important matter for clinical practice in pediatric dentistry. This is probably due to the chemical adhesion of the glass-ionomer system to the tooth surface without the need for adhesive systems, its easy handling, and its uncomplicated application [21]. In our study, the use of GICs required relatively less time than the BRC restorations. Moreover, the use of GICs in pediatric dentistry is perhaps preferred over BRC materials owing to their biocompatibility and, especially, the fluoride recharge/release from the glass-ionomer structure [5,12].
Continued development of the mechanical and physical properties of both restorative materials will enable the construction of longer-lasting restorations. We believe that our study will provide useful guidance for future in vivo and in vitro studies on this subject and contribute to the further development of this area in the science of pedodontics. Conclusions Within the limitations of this study, we found that both the GICs and the BRC restorative materials exhibited satisfactory results after 2 years of clinical service. No statistically significant difference was found between the materials. However, Class I restorations showed statistically better clinical performance than Class II restorations according to the modified Ryge criteria for the GICs. Based on these results, both the BRC and the GICs can be recommended for use by clinicians as alternative options for Class I/II restorations and for deep cavities. However, it will be necessary in the longer term to continue to monitor the survival rate of GICs Class II restorations because of the Class II failures observed in our study.
5,474
2022-10-01T00:00:00.000
[ "Medicine", "Materials Science" ]
Spin transport in insulators without exchange stiffness The discovery of new materials that efficiently transmit spin currents has been important for spintronics and material science. The electric insulator Gd3Ga5O12 (GGG), a standard substrate for growing magnetic films, can be a spin current generator, but has never been considered as a superior conduit for spin currents. Here we report spin current propagation in paramagnetic GGG over several microns. Surprisingly, spin transport persists up to temperatures of 100 K ≫ Tg = 180 mK, the magnetic glass-like transition temperature of GGG. At 5 K and 3.5 T, we find a spin diffusion length λGGG = 1.8 ± 0.2 μm and a spin conductivity σGGG = (7.3 ± 0.3) × 10^4 S m−1 that is larger than that of the record-quality magnet Y3Fe5O12 (YIG). We conclude that exchange stiffness is not required for efficient spin transport, which challenges conventional models and provides new material-design strategies for spintronic devices. Gd3Ga5O12 (GGG) is an excellent substrate material for the growth of, e.g., high-quality YIG films [13]. Above Tg = 180 mK, it is a paramagnetic insulator (band gap of 6 eV [14]) with Gd3+ spin-7/2 local magnetic moments that are weakly coupled by an effective spin interaction [15] Jex ~ 100 mK. Recently, the spin Seebeck effect (SSE), i.e., thermal spin current generation, was observed in GGG at low temperatures (< 20 K) and high magnetic fields [16]. Here, we report long-range (500 nm) spin transport in a GGG slab (Fig. 1b) under applied magnetic fields even at much higher temperature (~100 K) than the Curie-Weiss temperature |ΘCW|, whereas at low temperatures GGG turns out to be a surprisingly good spin conductor. Results Material characterization. GGG does not exhibit long-range magnetic ordering at any temperature [15] (Fig. 1d), while its field-dependent magnetization is well described by the Brillouin function. The large low-temperature saturation magnetization of 7 μB per Gd3+ (Fig. 1c) is governed by the half-filled 4f shell of the Gd3+ local moments. Sample structure and measurement setup. We adopt the standard nonlocal geometry [5][6][7][8][9][10][11][12] to study spin transport in a device comprising two Pt wires separated by a distance d on top of a GGG slab (Fig. 2b and Supplementary Note 1). Here, spin currents are injected and detected via the direct and inverse spin Hall effects [1,17] (SHE and ISHE), respectively (Fig. 2a). A charge current, Jc, in one Pt wire (injector) generates a non-equilibrium spin accumulation μs with direction σs at the Pt/GGG interface by the SHE. When σs and the magnetization M in GGG are collinear, the interface spin-exchange interaction transfers spin angular momentum from the conduction electron spins in Pt to the local moments in GGG at the interface, thereby creating a non-equilibrium magnetization in the GGG beneath the contact that generates a spin diffusion current into the paramagnet. Some of it will reach the other Pt contact (detector) and generate a transverse voltage in Pt by means of spin pumping into Pt and the ISHE.
To our knowledge, long-range spin transport in magnets has been observed only below their Curie temperatures [5-12], e.g., in YIG, NiFe2O4, and α-Fe2O3, so magnetic order and spin-wave stiffness have been considered indispensable. Here, we demonstrate that a relatively weak magnetic field can be a sufficient condition for efficient spin transport, demonstrating that the dipolar interactions alone generate coherent spin waves. Observation of long-range spin transport through a paramagnetic insulator. First, we discuss the field and temperature dependence of the nonlocal detector voltage V in Pt/GGG/Pt with contacts at a distance d = 0.5 μm. We use the standard lock-in technique to rule out thermal effects (see Methods). A magnetic field B is applied at angle θ = 0 in the z-y plane (see Fig. 2b) such that the magnetization in GGG is parallel to the spin polarization σs of the SHE-induced spin accumulation in the injector. Figure 2c shows V(B) at 300 and 5 K. Surprising is the voltage observed at low, but not ultralow, temperatures, which increases monotonically as a function of |B| and saturates at about 4 T. Pt/YAG/Pt, where YAG is the diamagnetic insulator Y3Al5O12, does not generate such a signal (Fig. 3b); apparently the paramagnetism of GGG (Fig. 1c) is instrumental in the effect. We present the nonlocal voltage in Pt/GGG/Pt at 5 K as a function of the out-of-plane magnetic field angle θ and the injection current Jc as defined in Fig. 2b. The left panel of Fig. 2d shows that V at |B| = 3.5 T is described by V(θ) = Vmax cos²θ: it is maximal (Vmax) at θ = 0 and θ = ±180° (B // y) but vanishes at θ = ±90° (B // z) (also see Supplementary Note 2). Furthermore, V depends linearly on |Jc| (see the right panel of Fig. 2d). The same cos²θ dependence in Pt/YIG/Pt is known to be caused by the injection and detection efficiencies of the magnon spin current by the SHE and ISHE, respectively, which both scale like cos θ (refs. [5-7]). The SHE-induced spin accumulation at the Pt injector interface, i.e., the driving force for the nonlocal magnon transport, scales linearly with Jc, and so does the nonlocal V (ref. [5]). The above observations are strong evidence for spin current transport in paramagnetic GGG without long-range magnetic order. Temperature and high-field dependence of the nonlocal spin signal. Figure 3d shows the θ dependence of V at B = 3.5 T and various temperatures T. Clearly, V ~ Vmax(T) cos²θ, where Vmax(T) in Fig. 3a decreases monotonically for T > 5 K, nearly proportional to M(T) at the same B as shown in the inset to Fig. 3a. The field-induced paramagnetism therefore has an important role in the generation of |V|. Surprisingly, Vmax at 3.5 T persists even at 100 K, which is two orders of magnitude larger than |ΘCW| = 2 K. The exchange interaction at those temperatures can therefore not play any role in the voltage generation. In contrast, a large paramagnetic magnetization (~μB per f.u. at 3.5 T) is still observed at 100 K, consistent with long-range spin transport carried by the field-induced paramagnetism. At high fields, Fig. 3e shows a non-monotonic V(B, θ = 0): at T < 30 K, V gradually decreases with field after a maximum at ~4 T, a feature that becomes more prominent with decreasing T. A similar behavior has been reported in Pt/YIG/Pt [9] and interpreted in terms of the freeze-out of magnons: a Zeeman gap ∝ B larger than the thermal energy ∝ T critically reduces the magnon number and conductivity.
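The freeze-out argument compares the Zeeman energy gμ_B B with the thermal energy k_B T, and the field-induced moment itself follows a Brillouin function (Fig. 1c). As a rough order-of-magnitude cross-check using textbook formulas, not the authors' model, the sketch below evaluates both quantities for spin-7/2 Gd3+ with g ≈ 2, neglecting the Curie-Weiss correction.

```python
import numpy as np

mu_B = 9.274e-24   # Bohr magneton [J/T]
k_B = 1.381e-23    # Boltzmann constant [J/K]
g, S = 2.0, 3.5    # Gd3+: g-factor ~ 2, spin S = 7/2 (L = 0)

def brillouin(x, S):
    """Standard textbook Brillouin function B_S(x)."""
    a = (2 * S + 1) / (2 * S)
    b = 1 / (2 * S)
    return a / np.tanh(a * x) - b / np.tanh(b * x)

T = 5.0                               # temperature [K]
for B in (1.0, 3.5, 9.0):             # applied field [T]
    zeeman_over_thermal = g * mu_B * B / (k_B * T)
    x = g * mu_B * S * B / (k_B * T)
    m = g * S * brillouin(x, S)       # field-induced moment [mu_B per Gd3+]
    print(f"B = {B:3.1f} T: Zeeman/thermal = {zeeman_over_thermal:.2f}, "
          f"m = {m:.2f} mu_B (saturation 7 mu_B)")
```

At 5 K the textbook moment approaches saturation around a few tesla and the Zeeman energy becomes comparable to the thermal energy near 3.5 T, consistent with the saturation and freeze-out behavior described above.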
It appears that thermal activation of magnetic fluctuations is required to enable a spin current in GGG as well. Estimation of the spin diffusion length. By changing the distance d between the Pt contacts, we can measure the penetration depth of an injected spin current. Vmax at 5 K, as plotted in Fig. 4a, decreases monotonically with increasing d. A similar dependence in Pt/YIG/Pt is well described by a magnon diffusion model [18,19]. We postulate that the observed spin transport in GGG can be described in terms of magnon diffusion of purely dipolar spin waves. Since the GGG thickness of 500 μm >> d, we cannot use a simple one-dimensional diffusion model, which would predict a simple exponential decay Vmax(d) ~ exp(−d/λ). Considering two spatial dimensions (see Supplementary Note 4) leads to Vmax(d) = C K0(d/λ) (Eq. (1)), where K0(d/λ) is the modified Bessel function of the second kind, λ = √(Dτ) is the spin diffusion (relaxation) length, D is the spin diffusion constant, τ is the spin relaxation time, and C is a numerical coefficient that does not depend on d. By fitting Eq. (1) to the experimental data, we obtained λGGG = 1.82 ± 0.19 μm at B = 3.5 T and T = 5 K. A long spin relaxation length in an insulator implies weak spin-lattice relaxation by the spin-orbit interaction, which in GGG should be weak because the 4f shell of Gd3+ is half-filled with zero orbital angular momentum (L = 0). We tested this scenario by a control experiment on a Pt/Tb3Ga5O12 (TGG)/Pt nonlocal device with similar geometry (d = 0.5 μm). TGG is also a paramagnetic insulator with a large field-induced M at low temperatures (see the inset to Fig. 3a). However, Tb3+ ions have a finite orbital angular momentum (L = 3) and an electric quadrupole that strongly couples to the lattice [20]. Indeed, we could not observe any nonlocal voltage in Pt/TGG/Pt in the entire temperature range (see Fig. 3a-c). This result highlights the importance of a weak spin-lattice coupling for long-range paramagnetic spin transmission. Modeling of the nonlocal spin signal. We model the nonlocal voltages in a normal-metal (N)/paramagnetic-insulator (PI)/normal-metal (N) system by the magnon diffusion equation in the PI and interface exchange interactions at the metal contacts [21] with spin-charge conversion (see Supplementary Notes 3 to 6). The voltage in the Pt detector as a function of B and T is given by Eq. (2), where ξ(B, T) = gμB B/[kB(T + |ΘCW|)], g is the g-factor, μB is the Bohr magneton, kB is the Boltzmann constant, BS(x) is the Brillouin function of x for spin S, C1 and C2 are known numerical constants, S = 7/2 is the electron spin of a Gd3+ ion, gs is the effective spin conductance of the Pt/GGG interface, and σGGG is the spin conductivity of GGG. The observed V(B) is well described by Eq. (2) (see Fig. 4b). The best fit of Eq. (2) is achieved with σGGG = (7.25 ± 0.26) × 10^4 S m−1 and gs = (1.82 ± 0.05) × 10^11 S m−2. We determine σGGG and gs by fitting the experimental data to a numerical (finite-element) simulation of the diffusion [19] that takes the finite width of the contacts and of the GGG film into account (the details are discussed in Supplementary Note 9). Discussion The reported spin transport in GGG has not been observed in, nor does it affect, previous nonlocal experiments on YIG/GGG, conducted at room temperature [5,9] or at low temperatures and low fields [6-8], because under those conditions GGG is not magnetically active.
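Returning to the distance dependence, the fit of Eq. (1) can be reproduced with standard tools. The sketch below uses hypothetical (d, Vmax) pairs, not the measured data, to illustrate how λ is extracted from the modified Bessel function model.

```python
import numpy as np
from scipy.special import k0
from scipy.optimize import curve_fit

# Hypothetical nonlocal signal vs. injector-detector distance (invented values shaped
# roughly like the behaviour described in the text; not the measured data).
d_um = np.array([0.3, 0.5, 1.0, 1.5, 2.0, 3.0])        # contact separation d [um]
v_nv = np.array([95.0, 72.0, 42.0, 29.0, 21.0, 11.0])  # nonlocal voltage V_max [nV]

# Two-dimensional diffusion model of Eq. (1): V_max(d) = C * K0(d / lambda)
def model(d, C, lam):
    return C * k0(d / lam)

popt, pcov = curve_fit(model, d_um, v_nv, p0=[50.0, 1.0])
C_fit, lam_fit = popt
lam_err = np.sqrt(np.diag(pcov))[1]
print(f"spin diffusion length lambda = {lam_fit:.2f} +/- {lam_err:.2f} um")
```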
Non-negligible contributions from GGG may appear only at low temperatures and high fields, as observed in the local SSE of a Pt/YIG/GGG junction [22]. A few papers address the spin current in paramagnets [16,23-26]. Shiomi et al. and Wu et al. reported paramagnetic spin pumping in La2NiMnO6 and the SSE in GGG and DyScO3 in their paramagnetic phases, respectively. Spin currents were observed above but close to the magnetic ordering temperature and were therefore attributed to critical spin fluctuations (paramagnons). Here we find a nonlocal signal in GGG even at 100 K, much higher than |ΘCW| = 2 K in GGG (see Fig. 3a). The exchange interaction has been reported to cause spin diffusion in Heisenberg paramagnets [27] at zero magnetic field. We cannot measure a possible contribution to the spin transport by such diffusion at (nearly) zero magnetic field in our setup, because of the spin conductivity mismatch and the decrease of the spin diffusion length at small fields (see Supplementary Notes 7 and 8). However, direct spin diffusion is suppressed when the rotational symmetry is broken by anisotropy or applied magnetic fields [28] (exponentially so in Heisenberg ferromagnets [29]). The contribution from direct magnetic dipole-dipole interactions dominates nuclear spin diffusion, but is orders of magnitude smaller than what we observe [30]. A plausible mechanism is based on the long-range dipolar interaction, which becomes important when the Zeeman and thermal energies are of comparable magnitude. [Figure caption: the θ and B dependence of V for Pt/GGG/Pt at various temperatures; θ was varied by rotating the field at B = 3.5 T in the z-y plane, and for the B dependence the field was swept from −9 T to 9 T at θ = 0.] The paramagnetic system acquires a finite magnetization in an applied magnetic field, and the dipole interaction is enhanced, especially in GGG with its large Gd magnetic moments. The dipole interaction can support collective spin-wave excitations even in paramagnets, which for small wave numbers are identical to those of ferromagnets (see Supplementary Discussion). Finally, we address the large spin conductivity of GGG at low temperatures and high magnetic fields. Thermally occupied magnons with energies up to about kBT carry the spin transport. In YIG, their dispersion is dominated by the exchange interaction. Although a large exchange energy corresponds to a large magnon group velocity, exchange magnons have short wavelengths, which make them sensitive to local magnetic disorder such as grain boundaries that give rise to spin-wave scattering, limiting YIG's spin conductivity. In contrast, the dipolar interaction is long-ranged and therefore less affected by (short-range) disorder. In GGG, the long-range dipole interaction dominates the excitation of spin waves with frequencies comparable to the Zeeman energy (≈ gμBB). Therefore, paramagnets with strong dipolar but weak exchange interactions, such as GGG, may provide ideal spin conductors as long as the thermal fluctuations of the magnetic moments are sufficiently suppressed by applied magnetic fields. Moreover, the large spin conductance gs indicates a strong interface exchange interaction at a metal contact to GGG. In summary, we discovered spin transport in the Curie-like paramagnetic insulator Gd3Ga5O12 over several microns. Its transport efficiency at moderately low temperatures and high magnetic fields is even higher than that of the best magnetically ordered material, YIG.
Low-temperature experiments [31], covering the magnetic glass-like transition temperature of GGG, may be interesting to explore the link between spin frustration and spin transport in GGG. Magnonic crystals [2] can help to determine the length scales that govern the observed spin transport. Since paramagnetic insulators are free from the Barkhausen noise associated with magnetic domains and from magnetic after-effects (aging) [32] typical of ferromagnets and antiferromagnets, they are promising materials for future spintronic devices. Methods Sample preparation. Single-crystalline Gd3Ga5O12 (111), Y3Al5O12 (111), and Tb3Ga5O12 (111) slabs (500 μm in thickness) were commercially obtained from CRYSTAL GmbH, Surface Pro GmbH, and MTI Corporation, respectively. For the magnetization measurements, the slabs were cut into pieces 3 mm long and 2 mm wide. For the transport measurements, a nonlocal device with Pt wires was fabricated on top of each slab by e-beam lithography and a lift-off technique. The Pt wires were deposited by magnetron sputtering in a 10−1 Pa Ar atmosphere. Each Pt wire is 200 μm long, 100 nm wide, and 10 nm thick, and the separation distance between the injector and detector Pt wires ranges from 0.3 to 3.0 μm. A microscope image of a device is presented in Supplementary Fig. 1. We measured the temperature dependence of the resistance between the two Pt wires on the GGG substrate but found that the resistance of the GGG is too high to be measurable. Magnetization measurement. The magnetization of the GGG, YAG, and TGG slabs was measured using the vibrating sample magnetometer option of a Quantum Design physical property measurement system (PPMS) in a temperature range from 5 K to 300 K under external magnetic fields up to 9 T. Spin transport measurements. We measured the spin transport in a PPMS by a standard lock-in technique from 5 K to 300 K. An a.c. charge current was applied to the injector Pt wire with a Keithley 6221 current source, and the voltage across the detector Pt wire was recorded with a lock-in amplifier (NF 5640). The typical a.c. charge current had a root-mean-square amplitude of 100 μA and a frequency of 3.423 Hz. Data availability The data that support the findings of this study are available from the corresponding author on request.
3,794
2018-11-29T00:00:00.000
[ "Physics", "Materials Science" ]
A Fuzzy Multi-Criteria Model for Municipal Waste Treatment Systems Evaluation including Energy Recovery : One of the recent problems on waste sorting systems is their performance evaluation for proper decision making and management. For this purpose, multi-criteria methods can be used to evaluate the sorting system from both operational and financial perspectives. According to a recent literature review, there are no solutions for evaluating waste sorting systems that take into account: sorting point utilisation, sorting efficiency, waste stream irregularity, and technical system availability. In addition, the problem of data uncertainty and the need to use expert judgements indicate the need for the implementation of methods adjusted to the qualitative and quantitative assessment, such as the fuzzy approach. Following this, in order to overcome the presented limitations, the authors introduced the new assessment method for waste sorting systems based on multi-criteria model implementation and fuzzy theory use. Therefore, the developed model was based on a hierarchical fuzzy logic model for which appropriate membership function parameters and inference rules were defined. The specificity of the chosen assessment criteria and their justification was provided. The model has been implemented to evaluate one of the waste sorting plants in Wroclaw, Poland. Tests have been conducted for seven different configurations of waste sorting lines (with variable input parameters). The study focuses on analysing the amount of selected waste at each station in relation to the total stream size of each fraction. Efficiency was measured by the mass of the collected waste and the number of pieces of waste in each fraction. Based on the obtained results, estimations of particular parameters of the model were made, and the results were presented and commented on. It was shown that there is a significant relationship between the level of system evaluation and sorting efficiency and an inverse relationship with the level of RDF obtained. The analysis was based on Pearson’s linear correlation coefficient estimation and linear regression implementation. Introduction Socio-economic development determines an increase in the intensity of material consumption on a global scale. This increase occurs despite the orientation of the economy towards the rationalisation of production, services, and technologies. In order to minimise the amount of waste sent to landfills and incineration, due to the rapid depletion of landfill space and emissions of air pollutants from incineration, recycling is preferred [1] and, at a lower level of the waste treatment hierarchy, recovery (for example, by burning alternative fuel (RDF)). In 2018, the production of municipal solid waste (MSW) in 28 countries of the European Union reached 250.6 Mt, with the adoption of different management strategies involving recycling (48 wt%), incineration and thermal valorisation (29 wt%), and landfilling (23 wt%) [2]. A report issued by the European Commission in 2018 identified as many as 18 countries (out of 28) at risk of not achieving the required 50% recycling rate in 2020 [3]. One of the problems presented as an obstacle to achieving the required recovery levels is the lack of modern solutions in the available infrastructure. 
Therefore, to achieve the required level, it In Reference [20], the authors undertook to apply a new comprehensive approach to the long-term techno-economic evaluation of the performance of sorting plants for plastic waste recovery. The authors point out that MRF performance evaluation is carried out using different approaches even with respect to the time frame, which makes it difficult to compare the results. In Reference [18], the authors presented a case study from Spain that analysed the characteristics of input streams and final products from two types of recycling plants. The analysis aimed to determine the maximum efficiency of sorting systems for building demolition waste. The methodology presented by the authors allows a preliminary classification of options for the management of products resulting from sorting. Bottom ash from municipal solid waste incineration is usually processed to recover valuable raw materials, such as metals, and to produce mineral material for use in construction or destined for disposal. In Reference [21], the author presented a model to evaluate the quality and quantity of materials obtained using different recovery methods. The authors in Reference [22] presented a life-cycle assessment (LCA) of a bottom ash management and recovery system to identify environmental tipping points beyond which the burdens of recovery processes outweigh the environmental benefits of metal and mineral aggregate valorisation. In the next work, Reference [28], the authors assessed the technical possibilities of recovering the mineral fraction contained in compost (CLO) on a processing line designed for glass recovery. Such action fits into the EU strategy focused on circular waste management. A different approach was presented in Reference [24], where the authors proposed a comparison of available techniques used to treat electronic waste (e-waste) at an MRF in the state of California. The analysis was concerned with comparing the economic aspects of using the different techniques. In Reference [25], the authors analysed the recovery of beverage cartons at three light packaging waste treatment plants at different input materials and input weights. The authors highlighted uncertainties in how waste samples were collected to determine the various indicators. The article aimed to indicate the good practices necessary for proper waste sampling. In a subsequent publication, the same authors, in Reference [26], presented a methodical approach to the evaluation of waste treatment results. The basis was the analyses presented in the previous publication, while the result of the publication was the presentation of a method based on a decision tree. Due to the increasing demands on waste sorting systems, they are becoming more complex to provide greater material recovery from products with complex material mixtures. In order to increase efficiency and process complex material mixtures, separation systems are usually organised as highly integrated multi-stage systems. Therefore, in Reference [19], the authors have proposed an approach for modelling, analysing, and designing multistage separation systems to meet specific recovery/classification performance objectives. A similar approach can be found in Reference [12], where the authors presented a tool for predicting the performance of sorting systems and showed how performance depends on the system configuration and parameters, as well as the input materials. 
An interesting approach was presented in Reference [29], where the authors presented a probabilistic model of material separation processes. The model is generic and can be used to analyse the performance of recycle processes and material recycling systems, as well as other separation and purification processes. In Reference [27], the authors evaluated the limitations of using published partition coefficient datasets to model the sorting performance of mechanical unit operations commonly found in material recovery facilities (MRFs). A partition coefficient is generally defined as a ratio between 0 and 1 corresponding to the fraction of an input stream that ends up in a specific output stream. In Reference [23], the authors presented the results of research work, which consisted of checking and evaluating the automation of municipal waste sorting plants by supplementing or replacing manual sorting by sorting by a robot with artificial intelligence. The articles presented so far indicate different approaches to the evaluation of sorting systems. Their basis is based on a set of indicators, but the final assessment is based only on the comparison of individual indicators among themselves or reference to specific standards. In the literature, many publications approach the problem of evaluation in a more comprehensive way. This is evidenced by the following literature reviews, References [30,31] that analyse the use of multi-criteria methods (MCDM). In Reference [30], the authors performed a state-of-the-art analysis based on 260 articles. They analysed the combinations of the criteria used in the articles, mainly social, economic, and environmental. Only 2% of the methods dealt with recycling centres/sorting plants/MRF, but they did not consider the operational criteria. The second review cited, Reference [31], has shown that recycling is the most-researched problem in the literature. That is, 27 studies out of 80 have investigated the recycling problem. Most of these studies are about finding the best recycling technology/strategy. The vast majority of the articles dealing with the issue of sorting systems evaluation use the life cycle assessment method (LCA) (see, for example, References [13,16,32]) or life cycle cost (LCC) (see, for example, Reference [33]). In Reference [17], the authors evaluated the performance of a large material recovery facility based on a selected set of parameters. The chosen parameters were used to create a dashboard to help managers make system management decisions. In Reference [6], the authors have presented an integrated assessment of production processes in ecological, technical, and economic terms. The presented method was analysed in relation to the recovery of materials during iron and steel production. The method aimed to support investors' decisions in order to receive the best ecological results. This may not be a very popular approach, but the authors have shown that methods based on multicriteria methods (MCDM) can also be effectively used for decision-making in the area of environmental impact as methods based on life cycle analysis (LCA). The current evaluation approaches do not analyse the issue of RDF production as a component for evaluation. 
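A partition-coefficient description of the kind used in References [27,29] can be written compactly as a matrix that distributes each input material over the output streams; recovery and grade then follow directly. The sketch below is a generic illustration with invented masses and coefficients, not a model of any particular facility.

```python
import numpy as np

materials = ["PET", "HDPE", "paper", "residual"]
streams = ["PET bale", "HDPE bale", "paper bale", "reject/RDF"]

# Hypothetical input composition entering the sorting stage [t/h]
inp = np.array([2.0, 1.0, 3.0, 4.0])

# Partition coefficients P[i, j]: fraction of input material i routed to output stream j.
# Each row sums to 1, so all incoming mass is accounted for.
P = np.array([
    [0.85, 0.02, 0.03, 0.10],   # PET
    [0.03, 0.80, 0.02, 0.15],   # HDPE
    [0.01, 0.01, 0.88, 0.10],   # paper
    [0.02, 0.02, 0.06, 0.90],   # residual waste
])

M = inp[:, None] * P            # M[i, j] = mass of material i ending up in stream j [t/h]
stream_mass = M.sum(axis=0)

# Recovery: share of a target material captured in "its" stream (the diagonal here).
# Grade: purity of that stream.
for k, (mat, stream) in enumerate(zip(materials, streams)):
    recovery = M[k, k] / inp[k]
    grade = M[k, k] / stream_mass[k]
    print(f"{stream:10s}: recovery of {mat} = {recovery:.0%}, grade = {grade:.0%}")
```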
The Research Aim In summary, the development of waste sorting system management solutions has created the need for tools that (a) allow for assessing the current efficiency level, (b) indicate the directions of further development/changes that make it possible to meet environmental and legal requirements, and (c) include assessment approaches that take into account the operational and financial perspectives. In addition, the performance evaluation of waste sorting systems, on the one hand, is particularly challenging and requires taking into account specific aspects of the waste type and the waste treatment possibilities. On the other hand, the problem of data uncertainty and quality needs to be investigated. For this reason, the authors focus on the specifics of implementing multi-criteria assessment methods for evaluating the effectiveness of semi-automatic waste sorting systems. The introduced evaluation methods consider different parameters of a system evaluation (e.g., sorting point efficiency, sorting line efficiency index, uneven load of stations, availability of waste sorting lines). The proposed approach is also based on the use of fuzzy theory to assess the evaluation index (EI) of waste sorting system performance. Following this, the main contributions of this study are that: -We have defined the assessment parameters for waste sorting system efficiency, which are essential for the performance of waste sorting processes. -We have introduced a two-step assessment method to assess waste sorting system efficiency based on fuzzy theory. -We have implemented the proposed method to verify its diagnostic function and determine its labour intensity. -We have analysed the impact of the system evaluation on RDF energy quality and sorting efficiency. The present paper aims to develop a multi-criteria evaluation method that considers the different parameters of a system evaluation (e.g., sorting point efficiency, sorting line efficiency index, uneven load of stations, availability of waste sorting lines). The article is structured as follows. Section 1 indicates a research gap that will be filled by the model presented in Section 2. Sections 3 and 4 present the model using a selected real system as an example and discuss the results obtained. Section 5 presents the conclusions of the paper. A Fuzzy Multi-Criteria Model for Evaluating Municipal Waste Treatment Systems The described method consists of a fuzzy model of the treatment system and an analysis of the impact of the system evaluation on RDF energy quality and sorting efficiency. To consider the energy potential of the treatment system, the assessments obtained from the model should be compared with the RDF energy quality and efficiency (analysis presented in Section 3). In the presented method, a two-stage assessment is used in which the sorting points are assessed first and then the system. However, due to the nature of the input and output data, the method can be used to assess both the system and any part of it (a line, a set of automatic sorting machines, etc.). The method described was developed as part of a research project in accordance with the methodology described in References [30,34]. Twenty-three employees (including sorting plant managers and 18 system employees) were selected for its preparation. Surveys were conducted on a quarterly basis to determine the criteria and the values of the variables for each membership function.
The periodicity of the survey was intended to eliminate intuitive responses based on current system parameters. Based on the results obtained, an analysis was carried out to calculate the values to be assessed. Due to the high uncertainty of the data, a method based on fuzzy logic was chosen (it is recommended in the case of uncertainty in the input data). The next step was to define the names of the membership functions and to transfer the survey results into linguistic variables. Based on the results, we ran a regression to determine the shape of the membership functions. The result indicated that triangular functions could be used; they give unambiguous results, in contrast to Gaussian functions. The values of the individual variables were then determined based on expert opinion. Figure 1 shows a hierarchical fuzzy logic model for evaluating municipal waste treatment systems. The output variable of the model is the waste treatment system evaluation index (EI). It depends on the values of four input variables. The utilisation rate of sorting points (PU) is determined from the average value of the utilisation rate of all sorting points in the waste treatment system. The utilisation rate of individual points is determined based on a fuzzy model, where the inputs are the point efficiency rate (WP_j) and the waste stream irregularity (WV_j) for the evaluated point. The waste sorting points are mechanical devices used for waste sorting and manual sorting stations. Sorting point efficiency (WP_kj) is expressed as the ratio of the total amount of sorted waste at the tested sorting station to the assumed productivity. It results from the technical parameters of a device or the predispositions of a workstation. The form of the membership function of the sorting point efficiency variable is shown in Figure 2. Index k corresponds to the sorting line number, and j is the number of the workstation being evaluated.
The waste stream irregularity (WV_j) is calculated for a variable amount of waste processed in a specific time unit. The selected time unit must be identical to the one for which the workstation productivity was determined. The flux non-uniformity informs about the influence of flux variability (deviation from the average) on the workstation performance. The membership functions of the waste stream irregularity are presented in Figure 3. Determination of the inference rules and membership functions was made on the basis of experts' opinions. The experts were employees (management staff) of waste treatment facilities. They established a four-grade scale for the evaluation of points, together with the factors influencing the process, and determined the weights for the indicators. The presented membership functions and inference rules are the interpretation of the obtained opinions. The average value of the obtained sharpened (defuzzified) values of the sorting point utilisation index is the input variable for the fuzzy waste treatment system evaluation model. The membership function of the linguistic variable of the sorting point utilisation rate is presented in Figure 4.
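The triangular membership functions referred to in Figures 2-4 can be sketched in a few lines of code. The four linguistic grades and the breakpoints used below are hypothetical placeholders (the real ones were derived from the expert surveys), so the snippet only illustrates how a crisp sorting point efficiency value would be mapped to degrees of membership in the linguistic grades.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    x = np.asarray(x, dtype=float)
    left = np.ones_like(x) if b == a else (x - a) / (b - a)
    right = np.ones_like(x) if c == b else (c - x) / (c - b)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

# Hypothetical four-grade scale for the sorting point efficiency WP_kj
# (the actual breakpoints come from the expert surveys / Figure 2).
WP_GRADES = {
    "unacceptable": (0.0, 0.0, 0.4),
    "low":          (0.2, 0.45, 0.7),
    "medium":       (0.5, 0.7, 0.9),
    "high":         (0.8, 1.0, 1.0),
}

def fuzzify(value, grades):
    """Degree of membership of a crisp value in each linguistic grade."""
    return {name: float(trimf(value, *abc)) for name, abc in grades.items()}

if __name__ == "__main__":
    wp = 0.62   # ratio of sorted waste to assumed productivity at one workstation
    print(fuzzify(wp, WP_GRADES))   # approx. {'unacceptable': 0.0, 'low': 0.32, 'medium': 0.6, 'high': 0.0}
```

The same helper can be reused for the waste stream irregularity WV_j and the remaining input variables, each with its own grade boundaries.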
The effectiveness of a treatment system (PE) is defined as the ratio of the correctly sorted waste fractions to the total amount of waste that has been sorted; in the defining formula, O_kj denotes the total weight of the sorted waste streams at the j-th evaluated workstation in the k-th line, S_kj denotes the total weight of the waste stream at the j-th evaluated workstation on the k-th line, and S_1 denotes the total weight of the input waste stream. The membership function of the linguistic variable of the effectiveness of the processing system (PE) is presented in Figure 5. The value of PE is assumed to depend on the values of the minimum recovery levels set out in the regulations of the authorities of the country concerned. Results that are less than the indicated value are considered unacceptable. The irregularity of the workstation load is determined by the unevenness coefficient WV_TP; in its defining formula, WP_kj denotes the unevenness coefficient of the sorted waste streams at the j-th evaluated workstation in the k-th line, the mean value of this coefficient is calculated for the j-th evaluated workstation on the k-th line, and N denotes the number of observations. WV_TP ∈ [0, 1]. This indicator makes it possible to assess the correctness of the processing system with respect to the workstation load. The membership function of the linguistic variable irregularity of the workstation load is presented in Figure 6. Waste treatment systems, due to the nature of their work, are exposed to frequent breakdowns and downtimes. This results in a significant proportion of the average time being spent in a failed state. In addition, each downtime of a treatment system is repeatedly associated with a lengthy restoration of the system to full-load operation. Below the minimum readiness a_A, the system operating at the current load (the size of the input stream) is not able to process all the input material. The membership function of the linguistic variable availability of the processing system is presented in Figure 7.
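The display formulas for PE and for the unevenness coefficient did not survive in the extracted text above, so the sketch below implements only their verbal definitions: PE as the ratio of correctly sorted output mass to the total input mass, and the load irregularity as a normalised dispersion of the workstation loads. The exact published expressions may differ; the function names, array shapes, and numerical values are all assumptions made for illustration.

```python
import numpy as np

def treatment_effectiveness(sorted_out_kg, input_kg):
    """PE: correctly sorted mass summed over all workstations divided by the
    total input mass S_1. `sorted_out_kg` is a (lines x workstations) array
    of the correctly sorted output O_kj (assumed layout)."""
    return float(np.sum(sorted_out_kg) / input_kg)

def load_irregularity(loads_per_interval):
    """WV_TP in [0, 1]: a normalised measure of how unevenly the workstations
    are loaded over N observation intervals (assumed form, not the paper's
    exact formula)."""
    loads = np.asarray(loads_per_interval, dtype=float)   # shape: (N, workstations)
    per_station = loads.mean(axis=0)
    if per_station.max() == 0:
        return 0.0
    spread = per_station.max() - per_station.min()
    return float(spread / per_station.max())

if __name__ == "__main__":
    O = np.array([[120.0, 95.0, 80.0, 40.0, 110.0]])      # kg sorted per station (one line)
    print(round(treatment_effectiveness(O, input_kg=575.6), 3))
    loads = np.random.default_rng(0).uniform(20, 90, size=(8, 5))
    print(round(load_irregularity(loads), 3))
```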
Since only systems that meet the minimum readiness are capable of processing the total stream of collected waste, sorting efficiency becomes a key criterion. It implies the requirement to achieve minimum recovery rates and, at the same time, has a real impact on the company's revenue. The inference rules have been developed on this basis; they state that, if the system fails to achieve the minimum recovery level, the assessment value is unacceptable. Otherwise, the importance of the availability and utilisation indicators increases. The fourth indicator is used because the utilisation level of a treatment system informs about the available unused processing capacity. It is used to assess the correctness of the waste stream selection along with its unevenness. Based on the research conducted, 49 inference rules were developed (found in the appendix). The membership function of the output variable, the evaluation of the waste treatment system, is presented in Figure 8. After performing the inference using the described membership functions and rules, the result is obtained in the form of fuzzy sets. The quantification of the fuzzy values to a real value is carried out using the centre of gravity method. MATLAB R2017a software was used for this.
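A rough software analogue of the inference and defuzzification step is sketched below in plain NumPy: two invented rules are aggregated and the result is quantified with the centre-of-gravity method. The membership functions, the rule base (the paper uses 49 expert-derived rules and MATLAB R2017a), and the EI universe of discourse are assumptions for illustration only.

```python
import numpy as np

def trimf(x, a, b, c):
    left = np.ones_like(x) if b == a else (x - a) / (b - a)
    right = np.ones_like(x) if c == b else (c - x) / (c - b)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

# Discretised universe of the output variable EI (assumed to be [0, 1]).
ei = np.linspace(0.0, 1.0, 501)
EI_SETS = {"unacceptable": trimf(ei, 0.0, 0.0, 0.35),
           "acceptable":   trimf(ei, 0.25, 0.5, 0.75),
           "good":         trimf(ei, 0.65, 1.0, 1.0)}

def infer(pe_low, pe_high, pu_high):
    """Two illustrative Mamdani-style rules (not the paper's rule base):
       R1: IF PE is low                 THEN EI is unacceptable
       R2: IF PE is high AND PU is high THEN EI is good"""
    agg = np.maximum(np.minimum(pe_low, EI_SETS["unacceptable"]),
                     np.minimum(min(pe_high, pu_high), EI_SETS["good"]))
    # Centre-of-gravity (centroid) defuzzification of the aggregated fuzzy set.
    total = np.sum(agg)
    return float(np.sum(ei * agg) / total) if total > 0 else 0.0

if __name__ == "__main__":
    print(round(infer(pe_low=0.1, pe_high=0.8, pu_high=0.6), 3))
```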
Model Application-Case Study In order to evaluate the waste processing system using the method described in this paper, tests were conducted in one of the sorting plants in Wroclaw (Poland) for selectively collected waste (metal and plastic). The system under study was dedicated only to sorting selectively collected waste. It consisted of three lines for sorting fractions of different sizes. In addition, the individual lines contain other devices, such as induction sorters, gravity sorters, magnetic sorters, etc. Due to the complexity of the system, the paper presents the application of the method only for a selected fragment of the system. However, the method was verified on the basis of the full system and was accepted by experts. Only the part of the sorting installation dealing with the sorting of a waste stream with a fraction between 50 mm and 170 mm has been included in the evaluation. Figure 9 presents a drawing with the designation of the individual sorting stations. At each workstation, workers face each other and select the same types of waste. Table 2 presents the type of waste sorted at each workstation (e.g., blue PET, white PET, aluminium, Tetra Pak). Between the workers, there are chutes into which they throw sorted waste. In the case of workstations 2 to 5, apart from the chutes, the workers have bags into which they throw fractions other than those thrown into the chutes. The study included an analysis of the amount of selected waste at each station in relation to the total stream size of each fraction. Efficiency was measured by both the mass of collected waste and the number of pieces of waste in each fraction. In the present paper, the mass ratio was used to determine efficiency. Based on the findings of the experts, the values of the individual linguistic variables were determined. They are presented in Table 3. The study was conducted in 7 independent experiments, each covering one shift of plant operation (4 h) at a time. The data were collected to estimate the input parameters of the proposed waste treatment system evaluation model. Table 4 presents the evaluation parameters of the sorting points for the seven cases. According to the assumptions of the method, the data in Table 4 were used to calculate the value of the PU index. This result, along with the other values of the input indicators, is presented in Table 5.
The PU index values are relatively low. This is due to the fact that the average waste stream size for the individual cases was approximately 575.63 kg/h, whereas the installation should process approximately 1000 kg/h at a given station. This relationship can also be seen in the PE value for efficiency. The PE value is very high, but this is due to the low intensity of the work. It is also indicated by the WV_TP value, which shows that the workstations were loaded unevenly. The data obtained during the study show that workstations 1, 2, and 5 carried most of the load. In the extreme case, on workstation 1, the workers reached an efficiency of 85 pieces per minute on average, while, on workstation 4, only approximately 34 pieces of waste per minute were achieved. There were no system downtimes while conducting the study. This was due to the relatively short research time, from the beginning of the shift to the first break for employees (after about four hours). Discussion The model described hereinabove is generic and can be used to evaluate systems that sort any type of waste. However, from the point of view of systems dealing with sorting selectively collected waste, there is an important relationship between the evaluation values and the energy quality of RDF. Tetra Pak was treated as an average of the energy content of paper, plastic, and metal. The calorific values used are presented in Table 6 [35][36][37]. A case study is presented to demonstrate the use of the proposed method. The narrow range of EI values results from performing the assessment on a selected portion of the system in several scenarios. When evaluating the whole system, a wider EI scope would be achieved. However, it is impossible to present the entire system as a case study due to its complexity and the limited article length. We have presented the possibility of investigating the dependence of the energy value on the system's rating. When applying the method to another sorting system, one should follow the steps of our method and, finally, determine the dependence of the energy value on the grades obtained. The purpose of a sorting plant is to separate economically significant recyclable materials from other materials. From this point of view, there should be a high correlation between the assessment value and the sorting efficiency of the system. This relationship is presented in Figure 10. Pearson's linear correlation coefficient is R_xy = 0.756 for the rating and the percentage efficiency of waste sorting. The correlation at the presented level may seem low, but the obtained result is reasonable from the decision-makers' point of view. It should be noted that, at all workstations from 1 to 5, workers select the blue PET fraction. This follows from the significant difference between the purchase price of a ton of blue PET and that of the other fractions. From an economic point of view, there is a much higher concentration on selecting higher-value materials. Therefore, HDPE is selected only at the last station. As a consequence, the efficiency of picking blue PET is higher than that of HDPE, for example.
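For any pair of measured series, a correlation coefficient such as the one quoted above can be recomputed directly; the seven value pairs below are hypothetical placeholders, not the measurements from the paper's tables.

```python
import numpy as np

# Hypothetical EI ratings and percentage sorting efficiencies for seven
# experiments (illustrative values only).
ei = np.array([0.52, 0.55, 0.57, 0.60, 0.61, 0.64, 0.66])
eff = np.array([61.0, 60.5, 64.0, 66.5, 65.0, 69.0, 70.5])

r_xy = np.corrcoef(ei, eff)[0, 1]    # Pearson's linear correlation coefficient
print(round(r_xy, 3))
```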
As mentioned before, there is also a relationship between the evaluation value obtained from the method described in Section 2 and the energy quality of the RDF. From the point of view of the operation of the sorting plant, materials that are economically significant are selected. These materials are also characterised by a high calorific value. Therefore, as the rating increases, there is a decrease in RDF quality. The Pearson linear correlation coefficient in this case is R_xy = −0.732. Again, the value of the coefficient may seem relatively low, but this is mainly due to the previously described relationship, i.e., the uneven focus on selecting the different types of plastic. This relationship is shown in Figure 11. To sum up the above considerations, the method described hereinabove can be used to evaluate systems differing in the scale of the amount of waste processed and dealing with the sorting of different materials. Moreover, the method indicates a significant inverse relationship between the obtained evaluation value and RDF quality. Therefore, it may be applied for multi-criteria process optimisation purposes, e.g., when changing the arrangement of workstations and changing the priorities in the context of the fractions selected at individual workstations. Regression analysis makes it possible to establish the relationship between these quantities and to determine the confidence interval for the simple linear regression. It allows estimating deviations from the nominal level of the relationship between the system evaluation level and the RDF calorific value. The estimation of the confidence intervals for the parameters of the simple linear regression was developed using Reference [38]. An alpha significance level of 0.05 was assumed. Based on the confidence interval, the expert can read off the range of RDF energy depending on the focus on a particular type of fraction (most often, as a result of differences in the selling prices of raw materials, PET recovery is preferred).
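A minimal sketch of a simple linear regression with confidence intervals for its parameters is given below, assuming the textbook ordinary-least-squares formulas and an alpha of 0.05; the EI and calorific-value series are invented for illustration and the helper name is hypothetical.

```python
import numpy as np
from scipy import stats

def simple_linear_regression_ci(x, y, alpha=0.05):
    """Fit y = b0 + b1*x and return slope/intercept with their
    (1 - alpha) confidence intervals (standard OLS theory)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    b1, b0 = np.polyfit(x, y, 1)
    resid = y - (b0 + b1 * x)
    s2 = np.sum(resid ** 2) / (n - 2)                 # residual variance
    sxx = np.sum((x - x.mean()) ** 2)
    se_b1 = np.sqrt(s2 / sxx)
    se_b0 = np.sqrt(s2 * (1.0 / n + x.mean() ** 2 / sxx))
    t = stats.t.ppf(1 - alpha / 2, df=n - 2)
    return {"slope": (b1, b1 - t * se_b1, b1 + t * se_b1),
            "intercept": (b0, b0 - t * se_b0, b0 + t * se_b0)}

if __name__ == "__main__":
    ei = np.array([0.55, 0.58, 0.60, 0.63, 0.66, 0.70, 0.72])   # hypothetical EI values
    lhv = np.array([24.1, 23.8, 23.5, 23.2, 22.8, 22.4, 22.1])  # hypothetical RDF calorific value, MJ/kg
    print(simple_linear_regression_ci(ei, lhv))
```

From such an interval, the expert can read off a plausible range of RDF energy for a given focus on a particular fraction, which is the use described above.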
Conclusions Evaluation of waste treatment systems is the subject of current research. There are many one-dimensional methods for such evaluation. Within the framework of this paper, we presented a general method, based on fuzzy logic principles, for the multi-criteria evaluation of waste treatment systems and their impact on RDF energy quality. The paper is divided into four main parts. Within the first section (Introduction), the authors conducted a review of the state of the art in the evaluation of waste sorting systems. The conclusions made it possible to identify a research gap. The second part (A fuzzy multi-criteria model for evaluating municipal waste treatment systems) describes a multi-criteria model for evaluating municipal waste treatment systems. The parameters of the fuzzy logic model (membership functions, inference rules and sharpening functions) were defined. For their definition, the opinion of experts was relied upon. In Section 3 (Application of the model-a case study), the presented evaluation model was implemented for the evaluation of the Wrocław sorting installation for selectively collected waste (metal and plastic). In the present paper, we analysed the evaluation of the selected sorting line in seven selected cases (each describing one shift of the system operation). In the fourth part (Discussion), as part of the application work, estimations of the particular parameters of the model were made, and the results were presented and commented on. It was shown that there is a significant relationship between the level of the system evaluation and the sorting efficiency, and an inverse relationship with the quality of the RDF obtained. Further work will be carried out to develop a method of configuring the waste processing system taking into account the placement and configuration of the technical equipment, as well as the qualifications of the personnel operating the system. Work will also be carried out to develop a method of assigning the selected fraction to a sorting line operator. Guidelines will also be developed for creating a waste stream (in terms of morphology and size) on the sorting line (in conjunction with the qualifications of the sorting staff) that will enable the higher recovery of the desired raw material fraction. We envisage developing the proposed method to include cost considerations.
9,020.2
2021-12-21T00:00:00.000
[ "Environmental Science", "Engineering" ]
Design of sampling-based noniterative algorithms for centroid type-reduction of general type-2 fuzzy logic systems General type-2 (GT2) fuzzy logic systems (FLSs) have become a popular research topic over the past few years. Usually the Karnik–Mendel algorithms are the most prevalent approach to complete the type-reduction. Nonetheless, the iterative nature of these computationally intensive algorithms might impede their application. Among the improved algorithms, some noniterative algorithms can enhance the calculation efficiency greatly, while the relation between the discrete TR algorithms and the corresponding continuous TR algorithms is still an open problem. First, the sum and integral operations in discrete and continuous noniterative algorithms are compared. Then, three kinds of noniterative algorithms originating from the type-reduction of interval type-2 FLSs are extended to complete the centroid type-reduction of general T2 FLSs. Four computer simulations show that, when the number of samples is chosen suitably, the computational results of the discrete types of algorithms can accurately approach those of the related continuous types of algorithms, and the computational times of the discrete types of algorithms are obviously less than those of the continuous types of algorithms; this may offer guidance for designing and applying T2 FLSs. Introduction As is well known, the computational complexities of general T2 FLSs [1] are much higher than those of IT2 FLSs. Recently, as the alpha-plane representations of GT2 fuzzy sets (FSs [2][3][4][5][6]) were put forward by a few research groups, the computational load of GT2 FLSs can be significantly decreased. As the secondary membership grades of interval T2 FSs are equal to one, they can only measure the uncertainty of a membership function (MF) in a uniform manner. For GT2 FSs, whose secondary membership grades lie between zero and one, the uncertainty of the MF can therefore be measured in a non-uniform way. In other words, GT2 FSs may be considered as more advanced uncertainty models than IT2 FSs. As the design degrees of freedom increase, general T2 FLSs [7][8][9][10][11] have the latent capacity to outperform IT2 FLSs in dealing with more complex uncertainty environments. In general, T2 FLSs are made up of five blocks: fuzzifier, fuzzy rules, fuzzy inference, TR and defuzzification. Among these, the type-reduction translates a type-2 FS into a type-1 FS. Then, the defuzzification module transforms the T1 FS into a crisp output. The centroid type-reduction [12][13][14] is the most prevalent study method. In the early days, the iterative Karnik-Mendel (KM) algorithms [15] were developed to perform the TR of interval T2 FLSs. Then, the continuous type of Karnik-Mendel (CKM) algorithms [16,17] were put forward; moreover, their monotone property and super-convergence nature were also proved. Usually, two to six iteration steps are required for each KM algorithm. For the sake of reducing the computational cost, many other iterative algorithms were put forward gradually, such as the enhanced Karnik-Mendel (EKM) algorithms [18], the weight-based EKM algorithms [5,19], the enhanced opposite direction searching (EODS) algorithms [20][21][22] and so on.
The other kinds of noniterative algorithms, which compute the output of the systems directly, were also put forward; they include the Uncertainty Boundary (UB) algorithms [23], the Coupland-John algorithms [24], the Nagar-Bardini algorithms [13,25], the Nie-Tan algorithms [26,27], the Begian-Melek-Mendel algorithms [28,29] and so forth. Interval T2 FLSs based on the NB algorithms have shown excellent performance under the impact of uncertainty [30][31][32]. The most recent theoretical research on calculating the centroids of IT2 FSs shows that the continuous type of NT algorithms is actually an accurate TR method. Furthermore, interval T2 FLSs on the basis of the Begian-Melek-Mendel algorithms have advantages in both stability and robustness compared with T1 FLSs. All these works have established the foundations for investigating noniterative algorithms for completing the centroid TR of general T2 FLSs. The paper analyzes the sum and integral operations in the corresponding discrete and continuous algorithms. In addition, the Nagar and Bardini (NB) and Nie and Tan (NT) algorithms are shown to be two specific cases of the Begian-Melek-Mendel algorithms. For the GT2 FLSs based on the alpha-plane expression of GT2 FSs, we provide the inference, type-reduction and defuzzification according to these three types of noniterative algorithms. Then, four simulation experiments are adopted to illustrate that, when the sampled number of the primary variable is increased, the computational results of the discrete type of noniterative algorithms can accurately approach those of the corresponding continuous type of noniterative algorithms. The rest of the paper is arranged as follows. The next section provides the background knowledge of GT2 FLSs. Then, the third section gives the three kinds of noniterative NB, NT and BMM algorithms, and shows how to extend the three of them to complete the type-reduction of GT2 FLSs. According to the simulation instances, the fourth section illustrates and analyzes the computation results. Finally, the last section is the conclusion. GT2 FLSs From the viewpoint of inference [1,33,34], general T2 FLSs can be categorized into the Mamdani type [9,12,35] and the Takagi-Sugeno-Kang type [8,[36][37][38]. Consider a Mamdani general T2 FLS which has m inputs x_1 ∈ X_1, x_2 ∈ X_2, · · ·, x_m ∈ X_m and a single output y ∈ Y, and which is described by M rules, in which the s-th rule takes the usual Mamdani IF-THEN form. For each rule, the firing interval at the relevant α-level is computed with the t-norm T, where T is the minimum or product t-norm. Let the α-plane of the consequent H̃^s at the corresponding α-level be H̃^s_α. To get the fired-rule α-plane Ã^s_α, for each rule, the fired interval is combined with its relevant consequent α-plane. Aggregating all the Ã^s_α (s = 1, 2, . . . , M) with the maximum operation ∨ gives the α-plane Ã_α. Then, the type-reduced set Y_C,α(x) at the corresponding α-level is acquired by computing the centroid of this aggregated α-plane, in which the end-points l_Ãα(x) and r_Ãα(x) may be computed by different types of TR algorithms [5,6,[12][13][14][15][16][17][18][19][20][21][22][23][24][25][26][27], where the special IT2 FS R_Ãα = α/Ã_α, and N denotes the number of samples.
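To make the α-level firing-interval computation concrete, the sketch below evaluates the lower and upper firing strengths of a single rule with the minimum t-norm. The Gaussian antecedent bounds, the 0.8 scaling of the lower membership function, and all parameter values are assumptions chosen only so that the lower bound stays below the upper bound; they are not taken from the paper.

```python
import numpy as np

def gaussian_mf(x, mean, sigma):
    return float(np.exp(-0.5 * ((x - mean) / sigma) ** 2))

def rule_firing_interval(x, antecedents):
    """Lower/upper firing strength of one rule at a given α-level, using the
    minimum t-norm over the antecedent α-plane bounds (illustrative shapes)."""
    lows, ups = [], []
    for xi, (mean, sig_low, sig_up) in zip(x, antecedents):
        lows.append(0.8 * gaussian_mf(xi, mean, sig_low))  # lower bound of the α-plane
        ups.append(gaussian_mf(xi, mean, sig_up))           # upper bound of the α-plane
    return min(lows), min(ups)

if __name__ == "__main__":
    x = [0.3, 0.7]                                           # two crisp inputs
    antecedents = [(0.4, 0.15, 0.25), (0.6, 0.10, 0.20)]     # illustrative parameters
    f_low, f_up = rule_firing_interval(x, antecedents)
    print(f_low, f_up)                                       # lower/upper firing strengths
```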
Three types of noniterative algorithms Three types of discrete noniterative algorithms are provided here for investigating the centroid type-reduction of GT2 FLSs. Nagar-Bardini noniterative algorithms The closed-form NB algorithms [25] for IT2 FLSs were shown to offer distinct advantages in coping with a wide range of uncertainty. According to the inference, let the output be the GT2 fuzzy set Ã, and let Ã be evenly discretized into n points; then l_Ãα and r_Ãα can be calculated from the discretized upper and lower MFs. According to the continuous type of KM algorithms [14][15][16][17],[19], the continuous type of NB (CNB) algorithms can be stated analogously with integrals in place of sums. Then, the type-reduced sets and defuzzified outputs can be obtained in terms of Eqs. (10) and (11). Here, both the discrete NB algorithms and the continuous NB algorithms calculate the output as a linear combination of two special type-1 FLSs: one relies on the upper MFs, while the other depends on the lower MFs. Nie-Tan noniterative algorithms The discrete NT algorithms compute the defuzzified output of a GT2 FS at the corresponding α-level from the sampled average of the upper and lower MFs, and the continuous NT (CNT) algorithms solve it with the corresponding integrals. The most recent studies show that the CNT algorithms [26] are in reality an accurate TR approach. Here, the statements and explanations are provided as follows. Theorem 1 [17]. The random sampling methods (algorithms) obtain the precise centroid type-reduction of GT2 FLSs as the sampled number goes to infinity. Proof. According to the inference [43], suppose that R_Ãα is the output interval type-2 FS (under the α-level). Furthermore, along the y-axis, let the number of vertical slices be m, and along the u-axis, let l denote the number of horizontal slices; therefore, the number of embedded T1 fuzzy sets is l^m. Let ū(y) and u̲(y) be the upper bound and lower bound of the FOU of R_Ãα. Here, select M embedded type-1 fuzzy sets randomly from R_Ãα, and let μ_i(y) (i = 1, 2, . . . , M) be the i-th embedded FS. All M embedded fuzzy sets are then aggregated. As u_i(y_j) is a random number on [u̲(y_j), ū(y_j)], the right side of Eq. (19) can be rewritten accordingly. For Eq. (20), the left side is sampled randomly as M → ∞, and the right side is the aggregation. Note that l/M is only a constant and therefore has no bearing on the calculation of the centroid. For a continuous type interval type-2 FS R_Ãα, it suffices to replace the sums over j = 1, . . . , m and k = 1, . . . , l with the integrals over [y_min, y_max] and [u̲(y), ū(y)], and the same result follows. Theorem 2 [17]. For R_Ãα, let the membership function of the representative embedded FS be of the form u*(y) = [ū(y) + u̲(y)]/2. Then the centroid of the IT2 fuzzy set R_Ãα can be calculated from this representative embedded FS. Proof. For the interval type-2 FS R_Ãα, choose M embedded FSs. Let μ_i(y) be the membership function of the i-th embedded fuzzy set. Since y_j is a randomly selected vertical slice, μ_i(y) can be expressed at y_j as u_i(y_j) = u̲(y_j) + ϕ_i[ū(y_j) − u̲(y_j)], in which the random number ϕ_i is uniformly distributed on the interval [0, 1]. Computing the centroid is a prevalent approach to defuzzify a type-1 fuzzy set. As y_j is just a random vertical slice of y, on the basis of Eq. (26) it can be obtained that, according to Eq. (27) and Theorem 1, u*(y) = [ū(y) + u̲(y)]/2 is the membership function of the representative embedded FS. Furthermore, this can be used to compute an accurate centroid of R_Ãα. When R_Ãα is of the continuous type, it suffices to substitute the sum over i = 1, . . . , M by the integral over [y_min, y_max] in the proofs. Therefore, the continuous NT algorithms are actually equal to the exhaustive type-reduction algorithms, and this can be extended to perform the accurate type-reduction.
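A minimal NumPy sketch of the two reductions just described is shown below. The FOU is an arbitrary pair of Gaussian bounds, and the NB variant is written in a simplified equal-weight form rather than the exact closed form of Reference [25].

```python
import numpy as np

def nt_defuzzify(y, u_low, u_up):
    """Discrete Nie-Tan reduction: centroid of the 'representative' embedded
    set whose membership is the average of the FOU bounds."""
    avg = (np.asarray(u_low) + np.asarray(u_up)) / 2.0
    return float(np.sum(y * avg) / np.sum(avg))

def nb_defuzzify(y, u_low, u_up):
    """NB-style output: combination of two type-1 centroids, one from the
    lower MFs and one from the upper MFs (equal weights assumed here)."""
    y = np.asarray(y, float)
    c_low = np.sum(y * u_low) / np.sum(u_low)
    c_up = np.sum(y * u_up) / np.sum(u_up)
    return float(0.5 * (c_low + c_up))

if __name__ == "__main__":
    y = np.linspace(0.0, 10.0, 201)                       # sampled primary variable
    u_up = np.exp(-0.5 * ((y - 5.0) / 2.0) ** 2)          # upper bound of the FOU
    u_low = 0.6 * np.exp(-0.5 * ((y - 5.0) / 1.2) ** 2)   # lower bound of the FOU
    print(nt_defuzzify(y, u_low, u_up), nb_defuzzify(y, u_low, u_up))
```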
Begian-Melek-Mendel noniterative algorithms The discrete BMM algorithms compute the output directly, and the output of the GT2 FS at the corresponding α-level is obtained as a linear combination of two specific type-1 FLSs with two adjustable coefficients a and b. Then, the continuous BMM (CBMM) algorithms solve it with the corresponding integrals. In fact, the discrete BMM algorithms [14,28,29] can be considered as the more general form of the discrete NB algorithms and NT algorithms. The explanations are given as follows. Observing Eqs. (12), (13) and (29), it can be found that the NB algorithms and the BMM algorithms are completely the same when a = a_NB and b = b_NB; similarly, when a = a_NT and b = b_NT (with a_NT and b_NT defined as in Eq. (31)), the NT algorithms and the BMM algorithms are the same, as can be seen by observing Eqs. (29) and (31). Finally, the inner connections between the discrete and continuous algorithms for completing the centroid TR of GT2 FLSs are as follows: (1) The discrete noniterative algorithms compute the centroids by means of the sum operation over the sampled points y_i (i = 1, . . . , n), whereas the continuous noniterative algorithms compute them in terms of the integral. In theory, the computational results of the discrete noniterative algorithms converge to those of the continuous noniterative algorithms as the number of samples approaches infinity. (2) For the discrete noniterative algorithms, more precise calculation results can be acquired by increasing the number of samples.
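The following sketch illustrates the discrete BMM combination and the fact that a particular choice of the coefficients a and b reproduces the Nie-Tan result; the FOU and the specific choice of a and b are illustrative assumptions, not the exact coefficient definitions of the cited references.

```python
import numpy as np

def t1_centroid(y, u):
    return float(np.sum(y * u) / np.sum(u))

def bmm_defuzzify(y, u_low, u_up, a, b):
    """Discrete BMM output: a linear combination of the two type-1 centroids
    built from the lower and upper MFs of the FOU."""
    return a * t1_centroid(y, u_low) + b * t1_centroid(y, u_up)

if __name__ == "__main__":
    y = np.linspace(0.0, 10.0, 201)
    u_up = np.exp(-0.5 * ((y - 6.0) / 2.0) ** 2)
    u_low = 0.5 * np.exp(-0.5 * ((y - 4.0) / 1.5) ** 2)

    # Choosing a and b as the relative masses of the lower/upper MFs
    # recovers the Nie-Tan result, illustrating NT as a special case of BMM.
    a_nt = np.sum(u_low) / (np.sum(u_low) + np.sum(u_up))
    b_nt = np.sum(u_up) / (np.sum(u_low) + np.sum(u_up))
    y_bmm = bmm_defuzzify(y, u_low, u_up, a_nt, b_nt)
    y_nt = float(np.sum(y * (u_low + u_up)) / np.sum(u_low + u_up))
    print(y_bmm, y_nt)   # the two values coincide
```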
Simulations Four cases are given here. Suppose that the footprints of uncertainty (FOUs) and the related vertical slices (or secondary membership functions) of the output GT2 FS are known from the inference engine of the GT2 FLSs. Here, four commonly used GT2 FSs are selected for study. In case 1, the FOU is made up of piecewise-defined linear functions [2,3,5,6,19], while the related secondary MF is the trapezoidal type of membership function. In the second case, the FOU is made up of linear functions and Gaussian functions [13-16, 26-29, 33], and the corresponding secondary membership function is again the trapezoidal type. In case 3, the FOU is composed of Gaussian type functions [2,3,5,6,19], while the corresponding secondary membership function is the triangular type. In the last case, the FOU is based on a Gaussian primary membership function [13-16, 26-29, 33], and the corresponding secondary membership function is the triangular type. In the experiments, α is equally divided into Δ values as α = 0, 1/Δ, · · ·, 1. Here, we let Δ vary from one to a hundred with a step size of one. Figure 1 and Table 1 give the FOUs of all cases. In addition, Fig. 2 and Table 2 provide the related secondary membership functions. The continuous noniterative algorithms are considered as the benchmark to calculate both the type-reduced sets for Δ = 100 and the defuzzified outputs for Δ ranging from one to a hundred with a step size of 1. Here, the absolute errors between the three kinds of continuous noniterative algorithms and the sampling-based discrete noniterative algorithms for calculating the type-reduced sets and defuzzified outputs are provided in Figs. 5, 6, 7, 8, 9, 10. Next, quantitative studies of the means of the absolute errors are performed. Tables 3, 4, 5 provide the means of the absolute errors for the type-reduced sets between the continuous noniterative algorithms and the sampling-based discrete noniterative algorithms. Furthermore, the means of the absolute errors for the centroid defuzzified values are given in Tables 6, 7, 8. Next, we investigate the unrepeatable calculation times for the continuous noniterative algorithms and their corresponding sampling-based discrete noniterative algorithms. The computer programs are implemented with the software Matlab 2013a, and the hardware platform is a dual-core CPU Dell desktop. Tables 9, 10, 11, 12, 13, 14 provide the total computation times of the noniterative algorithms for calculating the type-reduced sets and defuzzified values, respectively, where the time unit is the second (s). From Figs. 5-10 and Tables 3-14, the following conclusions can be made: (1) For calculating both the centroid type-reduced sets and the defuzzified values, as the number of samples increases, the results of the sampling-based noniterative algorithms all become closer and closer to those of their corresponding continuous type of noniterative algorithms. (2) In both examples 1 and 4, when the number of samples is 100, the computational results of the discrete type of noniterative algorithms are the same as those of their related continuous type of noniterative algorithms (see Figs. 5a-10a and Figs. 5d-10d); while for the middle two cases, the number of samples must be increased to 19,000 for the computational results of the discrete type of noniterative algorithms to be the same as those of their related continuous type of noniterative algorithms (see Figs. 5b-10b and Figs. 5c-10c). (3) When calculating the average of the absolute errors for the type-reduced sets and defuzzified values, the discrete type of noniterative algorithms can closely approach their related continuous type of noniterative algorithms when the number of samples is chosen as 19,000 (see Tables 3, 4, 5, 6, 7, 8). (4) For calculating the type-reduced sets and defuzzified values, the specific computational times of the discrete noniterative algorithms are less than those of their corresponding continuous counterparts. Among them, the discrete type of noniterative algorithms with the maximum number of samples take the longest computation times. Despite this, the discrete type of noniterative algorithms with the maximum number of samples only take about 0.6244%, 0.5208%, 0.7708%, and 0.6617%, 0.5635%, 0.6875% of the computation times of their continuous counterparts, respectively (see Tables 9, 10, 11, 12, 13, 14). (5) According to items (1)-(4), it is obvious that the proposed sampling-based noniterative algorithms can be outstanding approximation approaches for their related continuous counterparts. Furthermore, the efficiencies of the former are distinctly higher than those of the continuous counterparts.
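The convergence behaviour reported in items (1)-(3) can be reproduced qualitatively with the toy experiment below, in which a very fine grid stands in for the continuous (integral-based) benchmark and the absolute error of the discrete NT result is printed for several sample counts. The FOU parameters are invented and do not correspond to the four cases of the paper.

```python
import numpy as np

def nt_centroid(y, u_low, u_up):
    w = (u_low + u_up) / 2.0
    return np.sum(y * w) / np.sum(w)

def fou_bounds(y):
    """An illustrative Gaussian FOU (parameters assumed)."""
    u_up = np.exp(-0.5 * ((y - 5.0) / 2.5) ** 2)
    u_low = 0.7 * np.exp(-0.5 * ((y - 4.0) / 1.5) ** 2)
    return u_low, u_up

if __name__ == "__main__":
    # A very fine grid stands in for the continuous (integral-based) benchmark.
    y_ref = np.linspace(0.0, 10.0, 200001)
    ref = nt_centroid(y_ref, *fou_bounds(y_ref))

    for n in (10, 100, 1000, 19000):
        y_n = np.linspace(0.0, 10.0, n)
        approx = nt_centroid(y_n, *fou_bounds(y_n))
        print(n, abs(approx - ref))   # the absolute error shrinks as n grows
```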
Conclusions and expectations The paper compares the sum and integral operations in discrete and continuous algorithms on the foundation of type-2 fuzzy logic theory. For four types of general T2 fuzzy sets with different kinds of FOUs and secondary MFs, the continuous noniterative algorithms are selected as the criterion against which the type-reduced sets and centroid defuzzified outputs are calculated. Simulation instances are given to show the performance of the proposed sampling-based discrete noniterative algorithms. When the number of samples is selected suitably, the proposed sampling-based noniterative algorithms can approach the corresponding continuous counterparts exactly. In addition, the efficiencies of the former are significantly higher.
3,983
2022-06-20T00:00:00.000
[ "Computer Science" ]
Preface to: Submanifolds in Metric Manifolds The present editorial contains 11 research articles, published in the Special Issue entitled “Submanifolds in metric manifolds” of the MDPI mathematics journal, which cover a wide range of topics from differential geometry in relation to the theory and applications of the structure induced on submanifolds by the structure defined on various ambient manifolds [...] The present editorial contains 11 research articles, published in the Special Issue entitled "Submanifolds in metric manifolds" of the MDPI mathematics journal, which cover a wide range of topics from differential geometry in relation to the theory and applications of the structure induced on submanifolds by the structure defined on various ambient manifolds. The geometry of some particular manifolds and their submanifolds, with examples and applications (characterization properties and inequality or equality cases), is studied. The concept of a Riemannian submersion between Riemannian manifolds is very popular in theoretical physics, as well as in differential geometry. In Contribution 1, the authors consider a contact-complex Riemannian submersion, which carries the almost-contact-metric structure of the domain manifold onto the almost-Hermitian structure of the target manifold. The authors provide several new results, showing mainly when the base manifold admits a Ricci soliton, when it is Einstein, when the fibres are η-Ricci solitons, and when they are η-Einstein. A classical challenge in Riemannian geometry is to discuss the geometrical and topological structures of submanifolds. In Contribution 2, the authors obtain some topological characterizations for the warping function of a warped product pointwise semi-slant submanifold in a complex projective space (where the constant sectional curvature c = 4). Some important applications of this theory can be found for the singularity structure in liquid crystals, in low-dimensional systems in statistical mechanics, and in physical phase transitions. Since paper 2 considers both the warped product manifold and homotopy-homology theory, its results can be used as part of physical applications. In Contribution 3, the authors prove the non-existence of stable integral currents in a compact oriented warped product pointwise semi-slant submanifold of a complex space form, under extrinsic conditions, which involve the Laplacian, the squared norm of the gradient of the warping function, and pointwise slant functions. The authors investigate the curvature and topology of submanifolds in a Riemannian manifold and the usual sphere theorems in Riemannian geometry. The second target of paper 3 is to establish topological sphere theorems from the viewpoint of warped product submanifold geometry with positive constant sectional curvature and pinching conditions in terms of the squared norm of the warping function and the Laplacian of the warping function as extrinsic invariants. The importance of spaces with constant curvature is well understood in cosmology. A cosmological model of the universe is obtained by assuming that the universe is isotropic and homogeneous. In Contribution 4, the authors focus on the Z-symmetric manifold admitting certain types of Schouten tensor, and they find some properties of Z-symmetric spacetimes admitting a Codazzi-type Schouten tensor.
The Kähler manifold is the subject of symplectic geometry. Contact geometry appears as the odd-dimensional counterpart of symplectic geometry, in which the almost-contact manifold corresponds to the almost-complex manifold. In Contribution 5, the authors study the *-Ricci operator on Hopf real hypersurfaces in the complex quadric. As the correspondence to the semi-symmetric Ricci tensor, the authors give a classification of real hypersurfaces in the complex quadric with the semi-symmetric *-Ricci tensor. In Contribution 6, the author studies the geometry of almost-paracontact and almost-paracomplex Riemannian manifolds. The author defines the first natural connection and constructs a relation between this connection and the Levi-Civita connection. Moreover, some properties of the curvature tensors, torsion tensors, Ricci tensors and scalar curvatures using these connections are given, along with an explicit five-dimensional example. A statistical manifold can be considered as an extension of a Riemannian manifold, such that the compatibility condition for the Riemannian metric is generalized. In Contribution 7, the authors present nearly Sasakian and nearly Kähler structures on statistical manifolds and show the relation between these geometric notions. Moreover, the authors study some properties of (anti-)invariant statistical submanifolds of nearly Sasakian statistical manifolds. The geometry of submanifolds in Golden or metallic Riemannian manifolds has been widely studied by many geometers. The notion of a Golden structure was introduced 15 years ago and has been of constant interest to several geometers. In Contribution 8, the authors propose a new generalization (apart from the one called the metallic structure), named the almost (α, p)-Golden structure. By adding a compatible Riemannian metric, the authors focus on the study of the structure induced on submanifolds in this setting and its properties. In Contribution 9, the authors study a new holonomic Riemannian geometric model, associated in a canonical way with the Gibbs-Helmholtz equation from classical thermodynamics. Using a specific coordinate system, the authors define a parameterized hypersurface in R 4 as the graph of the entropy function. The main geometric invariants of this hypersurface are determined and some of their properties are derived. Using this geometrization, the authors characterize the equivalence between the Gibbs-Helmholtz entropy and the Boltzmann-Gibbs-Shannon, Tsallis and Kaniadakis entropies, respectively, by means of three stochastic integral equations. They prove that some specific (infinite) families of normal probability distributions are solutions of these equations. This particular case offers a glimpse of the more general equivalence problem between classical entropy and statistical entropy. The background of Contribution 10 is the total space TM of the tangent bundle of a Riemannian manifold (M, g), endowed with a metric G constructed as a general natural lift of the metric of the base manifold. The author studies the conditions under which the (pseudo-)Riemannian manifold (TM, G), endowed with the Schouten-Van Kampen connection associated with the Levi-Civita connection of G, is a statistical manifold admitting torsion. The results obtained in this work lead to new examples of (quasi-)statistical structures on the tangent bundle of a Riemannian manifold.
In Contribution 11, the authors investigate the properties of the induced structure on a light-like hypersurface by a meta-Golden semi-Riemannian structure. Moreover, the authors study the properties of the invariant and anti-invariant light-like hypersurfaces, and screen semi-invariant light-like hypersurfaces of almost-meta-Golden semi-Riemannian manifolds. They also provide some useful examples. As the Guest Editor of this Special Issue, I would like to thank the MDPI publishing editorial team, who gave me the opportunity to undertake the role of Guest Editor for the Special Issue "Submanifolds in metric manifolds". I am very grateful to all the authors who have contributed through their research articles. Also, I would like to express my gratitude to all the reviewers for their valuable suggestions and critical comments, which have improved the quality of the papers in this Issue. We hope that the selected research studies will have a positive impact on the international scientific community, inspiring other researchers to study and expand on the topics covered in this book. Conflicts of Interest: The author declares no conflicts of interest.
1,566.6
2024-04-07T00:00:00.000
[ "Mathematics" ]
Nonvolatile Memristor-Based Reservoir Computing System with Reservoir Controllability Recent advances in reservoir computing (RC) using memristors have made it possible to perform complicated timing-related recognition tasks using simple hardware. However, the fixed reservoir dynamics in previous studies have severely limited the application fields. In this study, RC was implemented with a reservoir that consisted of a W/HfO2/TiN memristor (M), a capacitor (C), and a resistor (R), in which the reservoir dynamics could be arbitrarily controlled by changing their parameters. After the capability of the RC to identify the static MNIST data set was proven, the system was adopted to recognize sequential data sets [ultrasound (malignancy of lesions) and electrocardiogram (arrhythmia)] that have significantly different time constants (10^-7 vs. 1 s). The suggested RC system feasibly performed the tasks by simply varying the C and R, while the M remained unvaried. These functionalities demonstrate the high adaptability of the present RC system compared to the previous ones. Introduction The convolutional neural network (CNN), based on the back-propagation learning rule, combines the convolutional layer and the fully connected layer. 1 It shows outstanding performance in static image processing (recognition and classification). 2,3 However, when the temporal order of each input vector and the correlation between the input vectors are essential, such as for natural language recognition or translation, a method of processing the input over time is required, and a CNN is not suitable for this purpose. 4 Such an event-sequence or time-dependent network operation can generally be represented by the relationship between the input (xi) and the network state (Hi) as follows: Hi+1 = A(Hi, xi), where Hi+1 represents the state of the network at time step i+1, and A, Hi, and xi represent the activation function, the state, and the input, respectively, of the network at time step i. A typical network with such characteristics is a recurrent neural network (RNN) with the long short-term memory learning rule, 5 which mitigates the vanishing gradient problem of the classical RNN. 6 Nonetheless, these artificial neural networks perform vast amounts of multiplication and accumulation operations during the learning and inference steps. If these calculations are performed using the conventional von Neumann computing system, even with the latest graphics processing unit, the cost of achieving the required processing speed and the energy consumption will be enormous. 7 In this regard, the recent upsurge of studies on neural networks that use a memristor-based cross-bar array (CBA) based on Ohm's law and Kirchhoff's law is notable. 8- 13 If the memristor used in such neural networks can process the event-sequence-related and temporal information, it can perform the RNN functionality. An even more desirable functionality is to extract the features of the input information (raw data vector) and feed them to the next classification layer, which is usually a fully connected network (FCN), as for the CNN. Reservoir computing (RC) is capable of these functions. 14,15 RC, which consists of a randomly generated RNN (reservoir) and an FCN (readout layer), extracts features at the reservoir and analyzes the result in the readout layer.
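A software analogue of the state update Hi+1 = A(Hi, xi) is the echo-state-style sketch below, where A is a tanh of a fixed random recurrent map driven by the input; the reservoir size, weight scales, and spectral-radius target are illustrative assumptions and do not describe the memristor hardware discussed here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Echo-state-style reservoir: H_{i+1} = A(H_i, x_i) with A = tanh of a fixed
# random recurrent map plus the driven input (all weights are illustrative).
N_RES, N_IN = 100, 1
W_res = rng.normal(scale=0.1, size=(N_RES, N_RES))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))   # rescale spectral radius to 0.9
W_in = rng.normal(scale=0.5, size=(N_RES, N_IN))

def step(h, x):
    """One state update: the reservoir is fixed, only the readout is trained."""
    return np.tanh(W_res @ h + W_in @ x)

if __name__ == "__main__":
    h = np.zeros(N_RES)
    for x_i in ([0.0], [1.0], [1.0], [0.0]):          # a short input sequence
        h = step(h, np.array(x_i))
    print(h[:5])   # the reservoir state that would be fed to the readout (e.g., an FCN)
```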
The core part of the RC system is the reservoir, where the non-linear transformation of the input signal is performed, and the characteristics of the input signal are projected into a high-dimensional feature space, which is called the reservoir state. 16 Afterwards, the reservoir state and the target output are matched during the training of the readout layer. To constitute a high-performance RC system, the reservoir devices (or system) must express not only the signal amplitude but also the time information. When a memristor is used as the reservoir, its resistance must be determined by the different input pulse signals with varying amplitudes and the intervals between such input signals. If the input signals to be discerned have simple and obviously distinguishable patterns, a memristor can sufficiently discern them by assigning different resistance values to them. However, for complicated and similar input patterns, high separability is required, which is usually challenging to achieve with a given type of memristor. 17,18 Also, the input signals could have substantially different time constants, which further severely limits the memristor reservoir. 19,20 In this case, a high-capacity of the reservoir adaptation must be achieved not only by modifying the memristor but also by incorporating additional circuit components. In a software-based RC system, several studies reported optimized RC systems that showed high performance through reservoir adaptation by modifying several parameters and connections between recurrent nodes. 16,21 Recently, various studies were conducted on hardware-based RC systems that use volatile memristors, in which a volatile memristor was used to process a time-varying input. [17][18][19][20] In those studies, the reservoirs were constructed based on ionic diffusion dynamics (diffusive memristors), in which the diffusive memristor was operated as follows. It was initially reset to the high-resistance state (HRS), and the input pulse later set it to a low-resistance state (LRS). 17 However, such an LRS decays with time through the progressive rupture of the metallic (Ag) conducting filament (CF) with time, which was induced by the thermal energy of the ambient. 4 Therefore, it can respond not only to the number/strength of the input pulses but also to the timing between the input pulses and the time elapsed before the data read operation. However, there are several limitations in using such reservoir dynamics. First, the duration and interval of the input signal are limited to the time range in which sufficient conductance decay occurs. For this reason, in the previous studies, it took 1 to 20 ms for one memristor to process 4-bit data, which is a speed that is significantly insufficient to process a large amount of data. 17,18 Second, obtaining a reproducible reservoir state could be challenging. An Ag-filament-based diffusive memristor has a stochastic switching nature, 17 so the variation of the reservoir state will be significantly large. Finally, reservoir adaptation could be difficult to achieve, given that the reservoir dynamics are totally determined by the material property, which renders the previous system useful for applications with a time scale similar to that of the specific memristor. [18][19][20] In this study, a device based on an electron trap/detrap mechanism was used to solve the aforementioned issues. 
22,23 A W/HfO2/TiN (WHT) memristor goes into an LRS when the traps are filled with electrons and shifts to a high-resistance state (HRS) when the trapped electrons are detrapped. Since the resistance switching is based on electron trapping and not on ionic movement, reproducible results can be achieved. 24,25 In addition, since the work functions of the top and bottom electrodes differ only slightly, there is limited built-in potential, so the device has good retention properties (Supplementary Fig. S1a). 23,26 Nonetheless, the WHT memristor also has its own time constant of operation, so by itself it is unlikely to achieve reservoir adaptability over a sufficiently large time-constant range. This problem could be solved by combining the memristor with a capacitor (C) and a normal resistor (R). Under these circumstances, the R-C time constant of the circuit can be varied, and the memristor response to the temporal arrangement of the inputs can be controlled. Figure 1a shows the RC system that can control the reservoir dynamics using a memristor, a normal resistor, and a capacitor (1M1R1C). In this RC system, the charging and discharging of the capacitor transform the signals applied to the device into various forms so that the conductance state of the memristor can be varied depending on the magnitude and sequential arrangement of the input signal (Supplementary Fig. S1b, c). Such a configuration of the RC system allows the arbitrary variation of the response dynamics by adjusting the sizes of the resistor, the capacitor, the pulse width, etc. Therefore, the optimized RC system can be configured for tasks with vastly different time scales.

Device Analysis

The energy-dispersive X-ray spectroscopy line scan result (Supplementary Fig. S1d) shows the HfO2 layer between the TE and BE. Supplementary Fig. S1e shows the X-ray photoelectron spectroscopy (XPS) analysis of the W/HfO2 interface in the WHT device. Analysis of the W peak in the XPS data revealed the presence of tungsten sub-oxide (WOx, x<3) and a WO3 layer. 28,29 The symmetric energy band profile is unfavorable for fluent electronic bipolar resistive switching (eBRS). 23,26 However, the WO3 layer formed at the W/HfO2 interface can induce a Schottky barrier, whereas the HfO2/TiN interface constitutes a quasi-ohmic contact. 27,30 In particular, the chemical interaction between the HfO2 and TiN layers can produce defects within the HfO2 layer, which provide the system with the electron traps that are necessary to induce the eBRS mechanism. With the application of a positive bias to the TE, the traps were filled with electrons that were injected from the TiN BE through the quasi-ohmic contact, which switched the device to the LRS. Conversely, when a negative bias was applied, the device switched back to the HRS as the trapped electrons were detrapped, while the electron injection from the TE was blocked by the Schottky barrier at the W/HfO2 interface. 26 Due to the presence of the WOx layer, there was no need to set a current compliance (CC) during the operation.

Reservoir Generation

The physical reservoir must satisfy the following four requirements: 1) high dimensionality, 2) non-linear transformation, 3) separability, and 4) dynamic adaptation. In this study, the reservoir was implemented by configuring the circuit shown in Fig. 2a. Pulse streams were generated by a pulse generator (PG), where an input signal '1' is converted to a high level, and '0' is converted to a low level.
These pulse streams were delivered to channel 1 (CH1) and channel 2 (CH2) of an oscilloscope (OSC). A 50 Ω resistor was assigned to CH1, which allowed monitoring of the input pulse shape. In CH2, a 1 MΩ resistor was connected to the device-under-test (DUT, the memristor) in series. From the voltage measured across the CH2 resistor, the voltage applied to the DUT was inferred. Since the oscilloscope fixes the size of the CH2 resistor at 1 MΩ, the overall series resistance to the memristor was adjusted by connecting a load resistor (RL), as shown in the figure. Also, a capacitor was connected to the CH2 resistor in parallel, which stored the charge supplied by the applied pulse voltage. In this specific experimental setup, its value was fixed at 180 pF. However, the dynamic time constant of the RC system was varied by changing RL and the capacitance when necessary. The measurement consisted of two steps. In the first step, a pulse was generated at the PG, which caused SET switching in the memristor. In the second step, the conductance state of the memristor was read through a DC sweep. When the first '1' signal of the '01011' sequence came in (left panels of Figs. 2b and c), a capacitive charging current flowed through the DUT, and CH2 showed a corresponding gradual increase in the capacitor voltage, which saturated at ~2.5 V. When the second '0' signal came in, the capacitor was discharged, and the reverse current flowed into the DUT, which made its voltage negative, while CH2 showed a gradual decay of the capacitor voltage. It was noted from the CH2 voltage that the capacitor was not completely discharged during the 0.2 ms duration of the '0' signal, so when the subsequent '1' signal came in, the capacitive charging current was not as high as in the previous '1' signal case (with the DUT voltage peaking only up to ~2.5 V). Such an effect can be seen even more clearly with the subsequent '1' signal (the reference pulse), as there was almost no peak in the DUT. Therefore, in this case, the effective number of SET pulses applied to the DUT was only two (the first and second '1' among the total three '1's in the '01011' sequence). After the entire pulse sequence was over, the memristor resistance was 2.82 MΩ. In the case of the right panels in Figs. 2b and c, in contrast, each of the '1' signals is separated by '0' signals, and all three '1's in the '10101' sequence switched the DUT to the SET state, which made its resistance 2.67 MΩ, despite the application of the same nominal number of SET pulses (three) in the two cases. It should be noted, however, that the last two peaks had a lower effect in decreasing the memristor resistance than the first one due to their lower peak heights, which were induced by the incomplete discharging of the capacitor during the intervening '0' pulse cycles. This is not a demerit but actually a merit of this RC system, which allows even higher separability and adaptability. Therefore, this RC system can recognize not only the different input pulse numbers but also their timing. Figures 2b and c show several notable features. First, if the memristor had the same resistance for both the positive and negative current flows, the charging and discharging times of the capacitor, which can be estimated from CH2, would be identical. However, due to the built-in asymmetry of the band profile of the WHT memristor, the resistance at a positive bias of ~2.5 V was ~100 times lower than that at a negative bias of ~1.5 V. Therefore, the charging was much faster than the discharging. This is the first factor that allows the RC system to have higher separability and adaptability.
Second, the capacitance and RL values can be arbitrarily chosen to vary the charging and discharging times, which eventually affects the effectiveness of the voltage pulse applied to the memristor. Third, the input voltage pulse height and duration are another knob that can further change the RC dynamics. These features render the RC system extremely flexible and adaptable to various requirements, as shown in the next sections. Without the last reference pulse, such a systematic variation and examination of the memristor state control would have been improbable.

Modifying the Reservoir Dynamics

In this RC system with the given WHT memristor property and capacitance, RL and the pulse height/duration were varied to examine the separability of the memristor. The capacitance could also be varied, but it was fixed in this part of the experiment. Figure 3 shows several examples of the different degrees of separability of the RC system when these parameters were varied. The examples show the current value read at 0.5 V after the 16 different input patterns, from '0000' to '1111', were programmed to the PG, with the additional reference pulse added last. The x-axis numbers correspond to the different input patterns described in the inset table in Fig. 3e, and the different parameters, such as RL, the input pulse, and the reference pulse, for each graph in Fig. 3 are summarized in Table I. Further details of the measurements in Fig. 3 are provided in Supplementary Fig. S10. It was also noted that the '1000' pattern resulted in the highest memristor conductance, although there were only two SET pulses (the first '1' and the final reference pulse). This is because the reference pulse induced the highest peak voltage at the memristor, since the interval between the two pulses, during which the capacitor was fully discharged, was the longest (the details are shown in Supplementary Fig. S11). Of the six graphs in Fig. 3, Fig. 3c illustrates the critical features of this RC system particularly well. The only difference between Fig. 3c and Fig. 3a is the pulse length [100 µs (in a) vs. 200 µs (in c)]. As the pulse width increases, the capacitor discharge during the '0' input increases, and the subsequent '1' induces a higher peak voltage. The conductance levels in Fig. 3c can be clearly grouped into three levels, which are determined by the number of '1's immediately following a '0' (not the total number of '1's). For example, '0000' has only one '1' after a '0' (the reference pulse), so it induced the lowest conductance. Interestingly, '1111' has the same low conductance even though it had five '1' inputs (including the reference pulse). This is because the only effective '1' was the first one: none of the other '1's is preceded by a '0', so they cannot produce a peak voltage. Another characteristic and most desirable setting can be seen in Fig. 3f, in which RL was decreased to 10 kΩ and the pulse width was decreased to 200 ns. This setting makes the capacitor charging per voltage pulse ('1' signal) insufficient and its discharging during the '0' signals faster. Overall, this makes the memristor conductance more linearly dependent on the total number of '1's, as shown in Fig. 3f (an example of insufficient charging and details of the effects are included in Supplementary Fig. S12). A short pulse length is also beneficial for rapidly processing the input vectors.
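The pattern dependence and the three-level grouping described for Fig. 3c follow directly from first-order capacitor charging and discharging. The sketch below reproduces this behaviour with a deliberately simplified model, not the measured circuit: the memristor is collapsed into two fixed resistances (one per polarity, mimicking the asymmetry noted above), and the component values, pulse height, and "effective SET" threshold are illustrative assumptions. It enumerates the 16 input patterns plus the reference pulse and groups them by how many '1' pulses arrive with a sufficiently discharged capacitor.

```python
# Simplified first-order sketch: a '1' pulse only produces a large voltage
# peak across the memristor if the preceding '0' interval discharged the
# series capacitor. Component values and the SET threshold are assumptions.
import numpy as np

C = 180e-12            # capacitor (F), value quoted in the text
R_CHARGE = 3e4         # effective resistance while charging (assumption)
R_DISCHARGE = 3e6      # effective resistance while discharging, ~100x larger (assumption)
V_HIGH, V_LOW = 4.0, 0.0
PULSE_WIDTH = 200e-6   # long-pulse case, cf. Fig. 3c (s)
DT = 1e-7
SET_PEAK = 1.0         # peak DUT voltage counted as an effective SET pulse (assumption)

def peak_voltages(bits):
    """Return the peak voltage available to the DUT during each bit of the stream."""
    vc, peaks = 0.0, []
    for b in bits:
        vin = V_HIGH if b == '1' else V_LOW
        peak = 0.0
        for _ in range(int(PULSE_WIDTH / DT)):
            r = R_CHARGE if vin > vc else R_DISCHARGE
            vc += (vin - vc) / r * DT / C        # first-order charging/discharging
            peak = max(peak, vin - vc)           # voltage dropped across the DUT branch
        peaks.append(peak)
    return peaks

# 4-bit input patterns plus the appended reference '1' pulse.
groups = {}
for n in range(16):
    pattern = format(n, '04b')
    n_eff = sum(p > SET_PEAK for p in peak_voltages(pattern + '1'))
    groups.setdefault(n_eff, []).append(pattern)

for n_eff in sorted(groups):
    print(f"{n_eff} effective SET pulse(s): {groups[n_eff]}")
```

With these assumed values the patterns fall into three groups (one, two, or three effective SET pulses), mirroring the three conductance levels of Fig. 3c; the counting deliberately ignores second-order effects such as the extra-high reference-pulse peak in the '1000' case.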
Task Optimization: MNIST

To recognize the digit images in the Modified National Institute of Standards and Technology database (MNIST), the reservoir dynamics were optimized to implement an RC system suitable for the task. To do this, the raw MNIST data set, composed of 784 pixels (28 × 28), had to be reconfigured to meet the requirements of this specific RC system, which is basically a binary system (0 and 1 inputs). According to Fig. 3f, 16 levels of the memristor conductance were readily discerned, so 4 bits could be dealt with by one memristor. Therefore, the data in the 784-pixel images were binarized and chopped into 4-bit groups, which resulted in 196 4-bit input signals. To make the task analysis more efficient, the frequency of appearance of the inputs in the dataset was investigated, and it was confirmed that '0000' appeared most frequently, followed by '1111,' '1000,' '0011,' and '0001' (Supplementary Table 1). Therefore, in this task-optimized RC system, the task was performed effectively by setting the operation parameters so that the RC system could readily separate the responses to the inputs with a high frequency of appearance rather than separating the responses to all 16 inputs. The data points indicated by the red circle in Fig. 3f correspond to these frequently appearing signal sets. Accordingly, the 196 4-bit input image data were converted to a 196 × 1 memristor conductance vector (MCV), where the measurements were performed on a single 1M1R1C circuit, based on Fig. 3f. Using the 50,000 training images in the MNIST data set, 50,000 training MCVs were generated. These MCVs were used to train the 196 × 10 FCN (weights and biases), which was generated in a PyTorch simulation (Methods section). The trained RC system was used to infer the 10,000 MNIST test images, and the achieved accuracy was 90.1% (see Table II). This reservoir took 1 µs to process one pulse stream, which is 10^3-10^4 times faster than in the previous studies. 17,18 Table II also shows the comparison with other RC results using diffusive memristors and a software-based single-layer FCN. This study presents the only memristive RC system that performs reservoir adaptation, and it showed the best performance in terms of accuracy and latency. In addition, in this study, higher accuracy was achieved using a 1/4-size FCN compared to a software-based FCN. 1 The hardware size of the RC system could be further decreased as the number of bits processed by the reservoir (nBPR) increases, as long as the separability for the higher nBPR is guaranteed. Supplementary Fig. S13 shows the different read currents for 3- to 6-bit inputs (8 to 64 input patterns). Obviously, the separability decayed as the nBPR increased, but the read currents could still be used to recognize the MNIST data set because not all the input patterns mattered equally. Table III shows the variation in the test accuracy of the MNIST data set using the same method as above, but with different nBPRs. As the nBPR increased from 3 to 6, which was accompanied by a decrease in the required memristor number from 252 to 112, the accuracy decreased from 90.7% to 86.3% (the confusion matrices are included in Supplementary Figure S14), which is not much lower than in the software-based FCN (784 × 10). The next section demonstrates the most crucial merit of this RC system by showing its capacity to process time-series data using medical diagnostic data.
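Before moving on, the MNIST pipeline just described can be summarized in code: binarize each image, group the 784 pixels into 196 4-bit patterns, replace each pattern with a conductance value, and train a single 196 × 10 readout layer with softmax/cross-entropy and Adam, as outlined in the Methods. In the sketch below the lookup table is only a stand-in for the measured responses of Fig. 3f, and the binarization threshold, learning rate, and epoch count are illustrative assumptions (torchvision's train split contains 60,000 images, whereas the paper used 50,000 of them).

```python
# Sketch of the MNIST pipeline: image -> 196-element memristor conductance
# vector (MCV) via a hypothetical lookup table, then a 196x10 logistic-
# regression readout trained full-batch in PyTorch.
import torch
import torch.nn.functional as F
from torchvision import datasets, transforms

# Hypothetical conductance lookup: one value per 4-bit pattern (0..15),
# roughly increasing with the number of '1's, as in Fig. 3f.
conductance_lut = torch.linspace(0.1, 1.0, 16)

def image_to_mcv(img):
    """28x28 image tensor -> 196-element memristor conductance vector."""
    bits = (img.flatten() > 0.5).to(torch.long)               # binarize (threshold is an assumption)
    nibbles = bits.reshape(196, 4)
    idx = (nibbles * torch.tensor([8, 4, 2, 1])).sum(dim=1)   # 4-bit pattern -> index 0..15
    return conductance_lut[idx]

train_set = datasets.MNIST(".", train=True, download=True, transform=transforms.ToTensor())
X = torch.stack([image_to_mcv(img) for img, _ in train_set])
y = torch.tensor([label for _, label in train_set])

readout = torch.nn.Linear(196, 10)                 # 196x10 readout layer (weights + biases)
opt = torch.optim.Adam(readout.parameters(), lr=0.05)

for epoch in range(200):                           # full-batch training, as in the Methods
    opt.zero_grad()
    loss = F.cross_entropy(readout(X), y)          # softmax + cross-entropy loss
    loss.backward()
    opt.step()

print("final training loss:", loss.item())
```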
Task Optimization: Medical Diagnosis Medical diagnosis often requires analyzing time-varying data and making a quick diagnosis, but there are inevitable limitations such as high dependence on operators and high variability across different medical institutions. For a more accurate and objective medical diagnosis, a universal diagnosis system adaptable to various situations is essential. Automatic medical diagnosis using deep learning has considerable potential, and several studies have been conducted on it, 31-33 but most of them rely on the conventional image classification method, such as CNN. This means that the traditional medical diagnosis produces data images and analyzes them later, mostly ex-situ. This study suggests, however, a method for in-situ medical diagnosis in real-time using a 1M1R1C reservoir. The diagnostic application consists of two sections. The first section is a breast cancer diagnosis using ultrasound images, and the second section is an arrhythmia diagnosis based on electrocardiogram (ECG) results. These two applications have vastly different operating signal frequencies (MHz to Hz). In this study, a system for efficient medical diagnosis was implemented by optimizing the RC system for each task. 1) Diagnosis of malignancy in breast lesions Ultrasound is used to diagnose and monitor breast cancer. In contrast to the conventional CNN, where the preprocessed images are identified, the proposed RC system in this study directly uses ultrasonic raw data without an imaging process, as shown in Fig. 4a. In the conventional ultrasound diagnosis, the ultrasound is transmitted to the piezoelectric material, where electrical signals are generated. The signal processor processes these signals to generate an ultrasound image, which the operator analyzes to diagnose the disease. However, if the reservoir can directly process the ultrasonic signal, the imaging process can be skipped, and an automatic diagnosis will be made at the readout layer. Therefore, this system makes real-time diagnosis simpler than in the existing ultrasound diagnosis. 13 The dataset used in the experiment consisted of an open-access database of raw ultrasound signals acquired from malignant and benign breast lesions. 34 Each sample consisted of 510 ultrasound (10 MHz) echo lines. After they were preprocessed for measurement convenience, they were converted into pulse streams and applied to the memristor (Methods section). Figure 4b shows the results of the voltage-time (V-t) measurement for one echo line of a benign sample (inset in Fig. 4b, and Methods section). This method has two main advantages over the existing ultrasound diagnosis using CNN. First, diagnosis is performed using a much simpler system without a pre-imaging process. Second, one of the major difficulties in ultrasound analysis is the presence of artifacts. 31 CNN may have difficulty in recognizing such artifacts because it performs learning and inference with the information on the artifacts. On the other hand, when using 1M1R1C, even with additional stimulation by artifacts, the capacitor only maintains the charging state. Therefore, the reservoir state is determined by the overall contour rather than by fine artifacts, and it can show higher performance. 2) Real-time arrhythmia diagnosis Arrhythmia is a condition in which the heart has an irregular rhythm or an abnormal heart rate. Since malignant arrhythmia can cause sudden death due to a heart attack, 35 real-time ECG monitoring and diagnosis are required. 
The purpose of this experiment is to implement a system capable of real-time diagnosis of arrhythmia in response to an electrical signal caused by a heartbeat. For the experiment, a part of the MIT-BIH arrhythmia database 36 was used, and a task-optimized reservoir was utilized to distinguish between arrhythmia and normal cases. A reservoir capable of responding to a signal with a frequency of 0.8 to 1.2 Hz was constructed using a 1 µF capacitor parallel to CH2. In this case, a simple RC system composed of only one 1M1R1C reservoir could be used. Figure 4c shows a part of the ECG of a patient with arrhythmia. The electrical signal is generated at approximately 0.8-s intervals, and then arrhythmia occurs at 1.6 s (marked by a red arrow). When an electrical signal from a heartbeat is applied to the reservoir, the capacitor maintains a high charging level at a normal beat; but when an arrhythmia occurs, the capacitor is discharged during the longer interval, and SET switching occurs in the memristor at the next pulse (Supplementary Fig. S15). Since this reservoir responds only to arrhythmia, the reservoir state can reflect the pulse of the arrhythmia patient in real time. Figure 4d shows the results of 5-minute reservoir monitoring based on ECG data of normal (case 1) and arrhythmic (cases 2 and 3) patients. In cases 2 and 3, 49 and 81 arrhythmias occurred, respectively. As a result, the conductance monitored for case 3 was the highest, and the reservoir state was clearly distinguished according to the degree of arrhythmia. This single-reservoir system was able to detect different arrhythmia conditions in real time with low energy using a simple 1M1R1C circuit.

Conclusion

In this study, an RC system with high reservoir separability and dynamics controllability was demonstrated using a nonvolatile W/HfO2/TiN memristor. A reservoir was generated by composing a 1M1R1C circuit. From the asymmetric charging/discharging of the capacitor caused by the memristor, separability, which is the basic property of a reservoir, was achieved. In addition, the manner in which the reservoir reacted to the input signal was modified by changing various parameters such as the load resistor, the capacitance, the pulse width, and the pulse height. Using these characteristics, the RC system was optimized to perform a static-data-based MNIST recognition application and sequential-data-based medical diagnosis (ultrasound diagnosis and ECG-based diagnosis). For the MNIST recognition, a task-optimized system was used to improve the separability of the inputs that frequently appeared in the dataset. Furthermore, the tradeoff between the reduction of the readout layer size and the performance was confirmed by increasing the nBPR. The RC-system-aided medical diagnosis was conducted for two situations with extremely contrasting input frequencies (1 Hz and 10 MHz). By implementing a reservoir configuration suitable for each task, excellent performance was achieved without misdiagnosis. This is because this RC system has a circuit structure that can freely change the time constant according to various tasks. In particular, the most crucial point of this study is its demonstration that dynamic signals with vastly different time constants can be well distinguished by changing the resistor or capacitor added to the circuit, using only one type of memristor. The two types of hardware needed to implement the RC system proposed in this study are shown in Supplementary Fig. S16.
Figure S16a shows that RL can be changed by selecting different transistors at the end of the resistor metal line. In contrast, Fig. S16b shows that RL can be varied by changing the resistance of a normal memristor. Therefore, it is expected that the fabrication of the hardware for the array configuration will be simple and that the reservoir dynamics can easily be changed even in the fabricated hardware.

Methods

Memristor fabrication. An array of cross-bar-type W/HfO2/TiN memristors was fabricated. A 50 nm-thick TiN layer was sputtered (Endura, Applied Materials) on an SiO2/Si substrate, and the TiN layer was patterned into a line shape to form a BE. The 2- to 10-µm-wide TiN BEs were patterned using conventional photolithography and a dry-etching system. After the patterning, the residual photoresist was removed with acetone and cleaned sequentially with deionized water. Then, 4 nm of HfO2 was deposited using atomic layer deposition (ALD) at a 280 °C substrate temperature using a traveling-wave-type ALD reactor (CN-1 Co. Plus 200). Tetrakis-ethylmethylamido hafnium (TEMA-Hf) and O3 were used as the precursors for Hf and oxygen, respectively. On the HfO2 layer, 50-nm-thick W TEs were sputtered using the MHS- sputtering system.

PyTorch simulation for the readout layer of the RC system. The logistic regression algorithm was used to train the readout layer for the MNIST recognition and breast lesion classification. The reservoir state (x) in the form of an n × 1 vector (n = 196 for the MNIST recognition and n = 510 for the breast lesion classification) was multiplied by the weight matrix (W) of the readout layer to yield the weighted sum (z). The weighted sum was applied to the softmax function to yield the output (ŷ): ŷ_j = exp(z_j) / Σ_k exp(z_k). The sum of the elements of the output vector is 1, and the output of the softmax function was interpreted as a 'probability.' The cross-entropy loss was used as the loss function, which is defined as L = -(1/N) Σ_i y_i · log(ŷ_i), wherein N is the number of samples and y_i is the target output for input x_i. To minimize the loss, a gradient-descent-based Adam optimizer 39 was used. Full-batch-type learning of the readout layer was performed in PyTorch.

RC system for medical diagnosis. 1) Ultrasound-based breast lesion diagnosis: Each sample in the database consisted of 510 ultrasound (10 MHz) echo lines, and the length of each echo line was different for each sample (100-300 µs). For measurement convenience, samples in which breast lesions appeared within 40 µs were used for learning and inference. The raw ultrasound data were binarized and converted into 100 ns pulses, which corresponded to a 10 MHz frequency. Pulse streams consisting of 400 pulses, each 100 ns long, were applied to the memristor. The measurement setup was set at a 3.5 V pulse height, a 4 V reference pulse height, and a 30 kΩ load resistance. The use of a relatively large load resistance and a short pulse length made the reservoir sensitive to consecutive pulses, and the effect of the signal pulses remained until the reference pulse (Fig. 4b). After the measurement, the 510 × 2 readout layer was trained based on the reservoir state that consisted of the 510 reservoir responses for each input. 2) ECG-based arrhythmia identification: The measurement was performed under the conditions of no RL, a 2.5 V pulse height, a 200 ms pulse length, and a 1 µF capacitor.

Data availability. All the relevant data are available from the authors.
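The ECG measurement conditions just listed (no RL, 2.5 V pulses, 200 ms length, 1 µF capacitor) imply a circuit time constant on the scale of the heartbeat period, which is what lets the reservoir respond only to abnormally long beat intervals. The sketch below is a toy exponential-discharge model of that idea: the capacitor stays charged across normal ~0.8 s intervals but discharges during a 1.6 s gap, so the next beat produces a large SET peak. The discharge resistance, SET threshold, and beat times are illustrative assumptions, not measured values.

```python
# Toy model of arrhythmia detection with the 1M1R1C reservoir: only beats
# arriving after a long (arrhythmia-like) gap find a discharged capacitor
# and therefore drive a SET-level peak at the memristor.
import math

C = 1e-6               # capacitor used for the ECG task (F)
R_DISCHARGE = 1e6      # assumed effective discharge resistance -> tau = RC = 1 s
V_PULSE = 2.5          # pulse height from the Methods (V)
SET_THRESHOLD = 1.5    # assumed peak voltage needed for SET switching (V)

# Beat times in seconds: a regular ~0.8 s rhythm with one missed beat (1.6 s gap).
beat_times = [0.0, 0.8, 1.6, 2.4, 4.0, 4.8, 5.6]

vc = V_PULSE           # assume the capacitor starts charged by an earlier beat
last_t = None
for t in beat_times:
    gap = 0.0 if last_t is None else t - last_t
    if last_t is not None:
        vc *= math.exp(-gap / (R_DISCHARGE * C))   # exponential discharge between beats
    peak = V_PULSE - vc                            # voltage available to drive the memristor
    verdict = "SET (long, arrhythmia-like gap)" if peak > SET_THRESHOLD else "no SET"
    print(f"beat at t={t:4.1f} s, gap={gap:3.1f} s, peak = {peak:4.2f} V -> {verdict}")
    vc = V_PULSE                                   # the 200 ms pulse recharges the capacitor (simplification)
    last_t = t
```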
Figure 2: a) The measurement circuit used to generate the reservoir, consisting of a memristor, a load resistor, and a capacitor. CH1 shows the shape of the input pulse stream, and CH2 shows the voltages applied to the 1 MΩ resistor, from which the voltages across the DUT were obtained (green graph). b) The voltages applied to the memristor with a '0101+reference pulse' (left) and a '1010+reference pulse' (right). c) The voltages of the corresponding CH2, where the 4 V and 0 V voltage amplitudes represent '1' and '0,' respectively. The voltage across CH2 shows that the charging and discharging rates of the capacitor were asymmetric.

Figure 4: The automatic medical diagnosis system using the 1M1R1C reservoir and the experimental results in the two sections. a) A system for diagnosing the malignancy of breast lesions, which is much simpler than the existing method (inset in a). In this system, ultrasonic signals are applied directly to the reservoir, so the imaging step is omitted. b) V-t graph for one echo line of a benign sample (inset in Figure 4b). c) A part of the electrocardiogram of a patient with arrhythmia. Long intervals caused by abnormal beats discharged the capacitor, and the conductance of the memristor increased at the next pulse. d) Five-minute reservoir monitoring based on the ECG of one normal patient (case 1) and two arrhythmic patients (cases 2 and 3). When arrhythmia occurred, the conductance of the memristor increased. Case 3, which had the most severe arrhythmia symptoms, showed the highest conductance.
7,317.8
2021-01-13T00:00:00.000
[ "Computer Science", "Engineering" ]
Selection of the best point and angle of lateral ventricle puncture according to DTI reconstruction of peripheral nerve fibers

Abstract

This study aims to find accurate angles and depths for lateral ventricle puncture using diffusion tensor imaging (DTI) reconstruction, as well as to provide an optimized and alternative puncturing strategy. A total of 90 computed tomography (CT) image sets and 30 CT image sets with accompanying DTI were analyzed. The measurements were performed on the coronal, sagittal, and horizontal planes. Distances and angles were measured to determine the best angle and penetration depth for the puncture process. Important landmarks of the lateral ventricle were also measured, and the differences between the 2 hemispheres were assessed. The results showed that the vertical distance from the superior margin to the inferior margin of the lateral ventricle was 22.2 ± 0.5 mm and that the length was 124.1 ± 2.1 mm. In the frontal horn puncture approach, the penetration depth should be limited to between 105.2 and 109.4 mm, and the angle should be 71.6 ± 2.7°. In the occipital horn puncture approach, the puncturing depth ranged from 90.7 to 111.4 mm, and the angle was 15.3 ± 1.8°. In the parietal lobe puncture approach, which is first proposed in this study, the puncturing length should be 124.4 to 130.2 mm, and the angle was 56.6 ± 2.0°. The traditionally recommended protocol for lateral ventricle puncture is not accurate; the refined lateral ventricle puncture protocol established in this study will reduce injury and preserve function. A DTI examination combined with nerve fiber reconstruction is strongly recommended before lateral ventricle puncture, as it will help neurosurgeons determine the best puncturing angles and depths.

Introduction

The lateral ventricles, the 2 largest cavities of the ventricular system of the human brain, contain cerebrospinal fluid (CSF) and are among the most important parts of the brain. [1] Each lateral ventricle is a C-shaped structure that extends from the inferior horn in the temporal lobe and passes through a body in the parietal and frontal lobes. Once the lateral ventricle is damaged, severe problems such as ventricular hemorrhage may occur, which can lead to cerebrospinal fluid obstruction, hydrocephalus, ventricular expansion, increased intracranial pressure, brain herniation, and damage to the brain stem and hypothalamus. In addition, ventricular hemorrhage can cause hyperpyrexia, organ failure, and many fatal complications. Hemorrhage of the lateral ventricle has become a major public health problem in recent years. It is well accepted that lateral ventricle puncture can be used to evacuate ventricular hemorrhage and relieve intracranial pressure when hemorrhage occurs. [2] Compared with craniotomy, lateral ventricle puncture is minimally invasive and comparatively safe. Nevertheless, there are also reports of ventricle puncture failure, which can lead to many serious complications: infection, shunt obstruction, intracranial hemorrhage, etc. [3] Blame has been placed on the improper and inaccurate positioning of the puncture points and angles, because this procedure relies mainly on the experience of neurosurgeons. Moreover, the morphology of the nerve fibers and the lateral ventricle differs considerably between individuals.
Previous studies have already set up an established protocol of lateral ventricle puncture: puncture point was set as 10 cm superior to the nose root, 3 cm on the right side of the middle line, after infiltrating anesthesia and drilling skull, puncture was conducted on the frontal horn. [4] However, the morphology of lateral ventricle in pathologic and physiologic conditions varies a lot. Additionally, nerve fibers between different races and ages have significant variations. Therefore, the previous established procedure is not accurate and it still needs to be improved. It is very necessary to conduct such an imaging study to determine the best points and angles to perform lateral ventricle puncture safely and effectively. Diffusion tensor imaging (DTI) is a magnetic resonance imaging technique that enables the measurement of the restricted diffusion of water in nerve fibers in order to produce neural tract images, it provides useful as well as accurate structural information about nerve fibers. [5] It also allows us to sketch detailed morphologic information of nerve fibers around lateral ventricle, which have never been reported before. Here, we conducted such a study that aimed to determine the most proper points and the best angles to perform lateral ventricle puncture by DTI and CT imaging, which could reduce the injury rapidly when puncturing. Through the comprehensive investigation, an innovative lateral ventricle puncture procedure via parietal lobe was firstly brought out. Besides, this procedure combined with other 2 traditional lateral ventricle puncture procedure were fully evaluated and compared to each others to elucidate the merits and disadvantages of each procedure. This study not only contributed to a better neuro-imaging understanding of the lateral ventricle and its peripheral structures, but also provided refined procedures to guide the operations. Materials and methods The DTI measurement protocol was performed at Department of Radiology, the First Hospital of Jilin University. This study was approved by the Ethics committee of the First Hospital of Jilin University. All participants were given written informed consent. We enrolled 30 healthy adult participants from December 2016 to May 2017 who are more than 18 years old and volunteered to participate in the study, 30 CT and 30 DTI were acquired from the same adults for study. In addition, other retrospective imaging data of 90 healthy adults were also included in the present study. Participants were considered only if exhibited no symptoms of neurology. The exclusion criteria are pathologic or traumatic effects in their cerebral DTI image. The participants in this study including 57 males and 63 females aged from 22 to 87 years, with an average age of 52.5 years. DTI data were processed by DSI studio software to reconstruct the parameter of fractional anisotropy (FA), volume ratio (VR), relative anisotropy (RA), apparent diffusion coefficient (ADC), and the color FA map. The bilateral data were measured, and a comparison of the differences between 2 hemispheres was also assessed. CT imaging method All the CT imaging data were taken by a CT machine (GE Sigma HDxt; GE, Fairfield, CT) in Department of Radiology, the First Hospital of Jilin University. Then, the imaging data were reconstructed using the volume-rendering system to conduct a measurement. The measurements were performed on the coronal, sagittal, and axial planes after three-dimension reconstruction in computer, respectively. 
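For reference, the scalar DTI parameters listed above (ADC, FA, RA, and VR) are standard functions of the diffusion-tensor eigenvalues. The sketch below gives the textbook definitions for a single voxel; it illustrates the quantities themselves, not DSI Studio's internal implementation, and the example eigenvalues are made up.

```python
# Standard diffusion-tensor scalar maps for one voxel, computed from the
# tensor eigenvalues (units mm^2/s). Example eigenvalues are hypothetical.
import numpy as np

def dti_scalars(eigenvalues):
    """Return (ADC, FA, RA, VR) for one voxel given the tensor eigenvalues."""
    lam = np.asarray(eigenvalues, dtype=float)
    md = lam.mean()                                   # mean diffusivity / ADC
    dev = lam - md
    fa = np.sqrt(1.5 * np.sum(dev**2) / np.sum(lam**2))   # fractional anisotropy
    ra = np.sqrt(np.sum(dev**2)) / (np.sqrt(3.0) * md)    # relative anisotropy
    vr = np.prod(lam) / md**3                             # volume ratio
    return md, fa, ra, vr

# Example: a strongly anisotropic (fiber-like) voxel vs. an isotropic one.
for lams in ([1.7e-3, 0.3e-3, 0.3e-3], [0.8e-3, 0.8e-3, 0.8e-3]):
    adc, fa, ra, vr = dti_scalars(lams)
    print(f"eigenvalues={lams}: ADC={adc:.2e}, FA={fa:.2f}, RA={ra:.2f}, VR={vr:.2f}")
```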
After all the anatomical structures were identified, the sagittal plane was selected to make the measurements (as shown in Fig. 1A). The shortest distance (FAH) from the frontal puncture point (F) to the anterior horn of the lateral ventricle (AH) was measured, and the vertical distance of the lateral ventricle was also measured. The vertical distance of the lateral ventricle was defined as the distance from the inferior margin (IM) to the superior margin (SM) of the lateral ventricle in the sagittal plane. Then, the horizontal plane of the samples was selected (as shown in Fig. 2A), and we also measured the shortest distance (OPH) from the occipital puncture point (O) to the posterior horn of the lateral ventricle (PH), as well as the length of the lateral ventricle, defined as the distance from the anterior margin (AM) to the posterior margin (PM) of the lateral ventricle in the axial plane. Furthermore, another sagittal plane of the samples was selected for the innovative method proposed in this study; the puncture line (PM) was chosen to be parallel to most of the nerve fiber bundles (as shown in Fig. 3A), and we measured the shortest distance PPH from the parietal puncture point (P) to the posterior horn of the lateral ventricle (PH), as well as the distance along the chosen puncture line.

DTI method

DTI data acquisition: A Siemens 1.5 T MR scanner was used, with a gradient field of 40 mT/m and a slew rate of 200 mT/m/ms. Axial images were acquired using a diffusion-sensitive single-shot echo-planar imaging sequence with the following scanning parameters: TR = 8100 ms, TE = 106 ms, FOV = 230 mm × 230 mm, 128 × 128 matrix, 3 mm slice thickness, no gap, and 40 slices in total. Two diffusion weightings were used, with b values of 0 and 1000 s/mm². Diffusion-sensitizing gradients were applied along 6 directions. The total acquisition time was 7 minutes and 30 seconds.

Image processing: DSI Studio was used to analyze the images and to reconstruct the FA, VR, RA, and ADC maps and the color FA map. By comparing and analyzing the section images, the FA map, and the color FA map, the locations and shapes of the white matter fiber bundles and the lateral ventricle were identified and measured. After all the anatomical structures were identified, the reconstructed whole-brain nerve fibers were displayed (as shown in Fig. 4); the axis labels S, P, and L denote superior, posterior, and left, respectively. The sagittal plane with partial nerve fibers reconstructed near the frontal lobe was selected (as shown in Fig. 1B), and we measured the shortest distance FAH from the frontal puncture point (F) to the anterior horn of the lateral ventricle (AH), as well as the distance FB from the puncture point (F) to point B, which was determined by the direction parallel to the fiber bundles. In this way, FB was parallel to the course of the fiber bundles. FB is the refined puncture route first proposed in this study, whereas FAH is the traditional puncture route. The vertical distance of the lateral ventricle from IM to SM in the sagittal plane and ∠BFC, the angle between line FB and the coronal plane (C), were also measured. The horizontal plane with partial nerve fibers reconstructed near the occipital lobe was selected (as shown in Fig.
2B), we measured the distance (OPH) from the occipital puncture point (O) to the posterior horn of the lateral ventricle (PH) as well as the length of the lateral ventricle from AM to PM of the lateral ventricle in axial plane. ∠OPC, the angle between line OPH and coronal plane (C) was also assessed. The sagittal plane of the samples which reconstructed partial never fibers near the parietal lobe was selected (as shown in Fig. 3B), we measured the distance PM from the parietal puncture point (P) to the point which was determined by parallel to the fiber bundles (M). Therefore, PM was parallel to the shape of fiber bundles. The vertical distance of the lateral ventricle from IM to SM of the lateral ventricle in sagittal plane and ∠MPC, the angle between the line MP and coronal plane (C), were measured. Measurement data of the length and height of the lateral ventricle can help neurosurgeons to determine the morphology of the lateral ventricle as well as surrounding never fiber bundles easily and accurately, never fiber bundles can be identified with the help of lateral ventricle morphology. Statistical analysis All the statistics were entered into SPSS 18.0 (SPSS Inc, Chicago, IL) for analysis. The measurements were presented as mean ± standard deviation. We had normality tests for all the data. Differences between the 2 hemispheres were tested by an independent-sample t test. P < .05 noted that there was a statistical significance between 2 hemispheres. Results There was no significant difference between 2 hemispheres (P > .05). In addition, no significant differences were found between men and women (P > .05). Three lateral ventricle puncture procedures were fully investigated and measured as follows. Lateral ventricle puncture from the frontal horn approach The puncture point was selected as 10 cm superior to the nose root, 3 cm on the right side of the middle line, infiltrating anesthesia, and drilling skull. Point F in Figure 1 was considered as the puncture point. CT measurement results showed that the shortest distance between the puncture point (F) and anterior horn of the lateral ventricle (AH) was 94.7 mm, and the vertical distance of the lateral ventricle was 22.3 ± 0.3 mm. Then, DSI studio was used to reconstruct the important fiber bundles which were surrounding the lateral ventricle. The distance and angles were also measured in DTI image. DTI measurement results showed that the shortest distance between the puncture point (F) and anterior horn of the lateral ventricle (AH) was 94.6 mm, and the vertical distance of the lateral ventricle was 22.2 ± 0.5 mm. Results showed that the shortest distances and the vertical distance of the lateral ventricle in the CT measurement and DTI measurement were roughly the same. In this study, it also showed that the traditional puncture route tended to cause more damage and injury, because the traditional puncture route is not parallel to the shape of the lateral ventricle. Therefore, a refined frontal puncture procedure was firstly brought out by us, shown as the direction of FB in Figure 1B. The line FB was parallel to the most of fiber bundles, in this way, puncture process will cause less injury and damage though the distance FB was 105.2 ± 2.3 mm, which is a little longer than FA (traditional puncture route). Additionally, the DTI image can also be used to demonstrate the fiber bundle's position and its angles. The puncturing angle ∠BFC in the refined frontal puncturing process ought to be 71.6°in the sagittal plane. 
The landmark location data implied that the puncturing depth should be limited to 105.2 to 109.4 mm and the puncturing angle to 71.6 ± 2.7°; exceeding these values may damage other important structures or nerve fibers. All data of the CT and DTI measurements are shown in Table 1.

Lateral ventricle puncture from the occipital horn approach

The puncture point was located 4 to 7 cm superior to the external occipital protuberance and 3 cm lateral to the midline. Point "O" in Figure 2 was considered the puncture point. Initially, CT images were used to measure the shortest distance between the puncture point (O) and the posterior horn of the lateral ventricle (PH); this shortest distance was 89.7 mm, and the length of the lateral ventricle, defined as the distance between the anterior horn and the posterior horn, was 124.6 ± 1.5 mm. Then, DTI was also used to conduct the measurements. The DTI measurements showed that the shortest puncturing distance was 90.7 mm and that the length of the lateral ventricle was 124.1 ± 2.1 mm. In addition, DTI showed that the nerve fibers surrounding the posterior horn of the lateral ventricle were complex and contorted, which indicated that posterior horn puncture is not an ideal procedure and can cause many vision-related complications. OPH was selected as the puncturing route in the occipital horn puncture approach in Figure 2B, which was parallel to most of the fiber bundles surrounding the lateral ventricle. The distance OPH was measured as 90.7 ± 1.3 mm. The angle ∠OPC between the fiber bundles and the coronal plane was 15.3° in the horizontal plane. The measurement data implied that the puncturing depth should be limited to 90.7 to 111.4 mm and the puncturing angle to 15.3 ± 1.8°; beyond these values, the puncture may not reach the lateral ventricle and may damage other important structures. The data of all CT and DTI landmarks measured with this method are shown in Table 2.

Lateral ventricle puncture from the parietal lobe approach

This approach is an innovative method first developed in this study using DTI technology. The puncture point was selected as 35.5 cm superior to the nasal root and 2 cm lateral to the midline. Point P in Figure 3 was considered the puncture point. First, CT images were again used to conduct basic anatomical measurements of the shortest distance between the puncture point (P) and the posterior horn of the lateral ventricle.

Discussion

Lateral ventricle hemorrhage is a common and critical hemorrhagic cerebrovascular disease. Severe complications such as hematoma and different types of cerebrospinal fluid circulation disorder may occur depending on the amount of hemorrhage. Since there are many supplying arteries around this region, lateral ventricle hemorrhage tends to cause obstructive hydrocephalus, which leads to increased intracranial pressure and ultimately cerebral herniation. [6] In addition, hemorrhage breaking into the lateral ventricle alters the pressure on the thalamus and brain stem, which can finally lead to endocrine dysfunction, circulatory abnormalities, respiratory system malfunction, and other complications. Lateral ventricle puncture is a very effective and widely accepted way to treat lateral ventricle hemorrhage or obstructive hydrocephalus. Traditional management of the puncture process depends mostly on the operator's experience because of the lack of advanced imaging data and accurate localization devices.
[7] Therefore, it is difficult to determine a precise puncture point relying on CT images alone, and operators usually tend to apply the shortest puncture route in clinical practice. However, many failure cases and complications occur every year due to inaccurate localization and misunderstandings regarding this procedure. The complications include cognitive impairment, cerebral visual injury, hemorrhage, etc. [8] With the rapid development of modern science and technology, this situation urgently needs to be improved. Therefore, DTI technology combined with CT was used in this study to determine the best puncturing depths and angles for the different procedures; additionally, the advantages, disadvantages, and indications of each approach were also fully elucidated. The DTI technology enabled us to observe the location, morphology, and positional relationships of the fibers and the lateral ventricle clearly and accurately. Magnetic resonance imaging (MRI) 3D-rendering tools were used to analyze the T1-weighted images and DTI in DSI Studio. By reconstructing the fiber bundles, the spatial relationships between the fiber bundles and the lateral ventricle were displayed. After calculating the angles between the sagittal, coronal, and axial planes and the fiber bundles, the best angles for performing lateral ventricle puncture were determined.

Table 2: Measured values of OPH, AMPM, and ∠OPC from CT and DTI (mm and °). OPH was selected as the puncturing route in the occipital horn puncture approach, which was parallel to most of the fiber bundles surrounding the lateral ventricle. AMPM denotes the length from the anterior margin (AM) to the posterior margin (PM) of the lateral ventricle in the axial plane. ∠OPC denotes the angle between line OPH and the coronal plane (C). CT = computed tomography, DTI = diffusion tensor imaging, Ng = not given, SD = standard deviation.

Lateral ventricle puncture from the frontal horn approach

With the help of DTI technology, the puncturing depth and angles were determined by reconstructing the nerve fiber bundles. The reconstructed data and images showed that the nerve fibers of the frontal lobe are straight and regular, with only small variations in their course among individuals. Therefore, this region is ideal for performing lateral ventricle puncture. However, the reconstruction results also showed that the traditional operative route (the direction of FAH, shown in Fig. 1) crosses the main direction of most of the nerve fiber bundles in this region, which demonstrates that the traditional procedure will cause more damage to the brain. Because of the important functions of the frontal lobe, the puncture procedure was refined in this study to reduce damage and injury to the brain. The puncturing angle should be changed to 71.6 ± 2.7°, and the puncturing depth altered to 105.2 to 109.4 mm accordingly. Although the puncture depth increased slightly, the damage to the brain was indeed reduced. With the help of DTI technology, it could be seen clearly that the traditional route FAH differed considerably from the refined puncture direction FB and would lead to much more damage, potentially causing cognitive dysfunction. However, these complications could be avoided in the refined approach by setting the puncturing direction parallel to most of the nerve fiber bundles. Accordingly, the damage to the fiber bundles was reduced significantly.
Additionally, the result also proved that by the reconstruction of the never fiber bundles, the shape of fibers near the later ventricle could be seen clearly, in this way, it provided a solid reference and a better comprehension for the puncture operation. An individual MRI and DTI examination was recommended before puncture to achieve better therapeutic effects. Lateral ventricle puncture from the occipital horn approach This approach is the standard procedure in the cerebral ventricular shunt process, with the help of DTI technology combined with CT technology, we found that many refinement needs to be improved regarding to this approach. By reconstructing the never fiber surrounding the lateral ventricle, it showed that the nerve fiber of this region contorted with each other, fiber bundles of this region is messy because of Gratiolet bundles and other nerve fibers, so it is difficult to avoid all the nerve fibers. Besides, tractus opticus travelled across this region, many complications related to the vision sometimes occurred after treatment. Therefore, an ideal approach should avoid all the important fiber and nerve tracts. According to the DTI result, puncturing depth should be limited from 90.7 to 111.4 mm, and puncturing angle was 15.3 ± 1.8°in the occipital lateral ventricle approach. With this approach, the puncturing needle will cause the least damage and reach the posterior horn in a safe way. Compared with other 2 approaches, this approach tends to cause more complications. Lateral ventricle puncture from the parietal lobe approach This method was a new innovative method which is firstly brought out in this study by DTI reconstruction. The puncture point was set posterior to postcentral gyrus. Traditional concepts believed that the longer puncture depth it had, the severe damage it would cause. However, our study demonstrated it was apparently a popular misunderstanding. Based on the DTI/ diffusion tensor tractography (DTT) reconstruction, we could draw a firm conclusion that the key point causing complications and postsurgery damage was not the puncturing depth but the puncturing angles. If the angle is improper or inaccurate, the fiber bundles can be damaged no matter how deep the penetration is, resulting in character disorder or visual impairment. Additionally, nerve fiber bundle of this region is relatively regular and trim, the variation of fibers in this region is not significant. Besides, this region dominate the feeling of a person, compared with the character disorder or visual impairment caused by the damage of frontal or occipital lobe, paresthesia may be the lowest cost which could be accept by most people. With the anatomical data provided in this study, this innovative procedure is much easier to be conducted and it tends to cause less damage compared with the other 2 traditional procedures. In this study, we provided not only a new innovative technology but also a completely new puncture strategy which had never been used before. The DTI technology could not only observe the tracking fibers of a brain but also determine the best puncturing angles and depth in different operations. In addition, measurements data in this study are also easy to be transformed in other coordinate systems which are different from ours. Sometimes the damage of the fiber bundles is inevitable, but the damage extent can be reduced rapidly by choosing a best puncture point and a proper angle. 
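Given the reconstructed 3D coordinates of a puncture point and its intraventricular target, the puncture depth and the angle to the coronal plane (the quantity reported above as ∠BFC, ∠OPC, or ∠MPC) can be computed directly, which is also one way the measurements can be transferred to other coordinate systems. The sketch below is a generic geometric illustration with made-up coordinates, not data from this study.

```python
# Puncture depth and angle between a puncture line and the coronal plane,
# computed from two 3D points. The angle between a line and a plane is
# 90 degrees minus the angle between the line and the plane's normal
# (the anterior-posterior axis here). Coordinates are hypothetical (mm).
import numpy as np

def angle_to_coronal_plane(entry_point, target_point, ap_axis=(0.0, 1.0, 0.0)):
    """Return (puncture depth in mm, angle in degrees between the line and the coronal plane)."""
    entry = np.asarray(entry_point, dtype=float)
    target = np.asarray(target_point, dtype=float)
    direction = target - entry
    depth = np.linalg.norm(direction)
    n = np.asarray(ap_axis, dtype=float)              # coronal plane normal (anterior-posterior)
    sin_angle = abs(direction @ n) / (depth * np.linalg.norm(n))
    return depth, np.degrees(np.arcsin(sin_angle))

# Hypothetical frontal entry point F and intraventricular target B:
F = (30.0, 95.0, 40.0)
B = (28.0, 60.0, -60.0)
depth, angle = angle_to_coronal_plane(F, B)
print(f"puncture depth = {depth:.1f} mm, angle to coronal plane = {angle:.1f} deg")
```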
When a neurosurgeon conducts lateral ventricle puncture, the best puncturing angles and depth can easily be determined with a protractor and a ruler, with the help of the measurements in this study. [9] This study not only provides neurosurgeons with a better understanding of the positional relationship between the lateral ventricle and the peripheral tracts, but also provides a detailed guideline for lateral ventricle puncture, which should substantially reduce intraoperative and postoperative accidents. In addition, DTI technology has broad applications in neurosurgery. A DTI examination combined with nerve fiber reconstruction is strongly recommended before lateral ventricle puncture, as it will help neurosurgeons determine the best puncturing angles and depth for each individual. Although this study was conducted with an elaborate design and precise measurements, we have to admit that the sample size is still limited, and further studies need to be conducted in the future to confirm our conclusions.
5,553.8
2018-11-01T00:00:00.000
[ "Medicine", "Engineering" ]
Storage of photonic time-bin qubits for up to 20 ms in a rare-earth doped crystal

Long-duration quantum memories for photonic qubits are essential components for achieving long-distance quantum networks and repeaters. The mapping of optical states onto coherent spin-waves in rare earth ensembles is a particularly promising approach to quantum storage. However, it remains challenging to achieve long-duration storage at the quantum level due to read-out noise caused by the required spin-wave manipulation. In this work, we apply dynamical decoupling techniques and a small magnetic field to achieve the storage of six temporal modes for 20, 50 and 100 ms in a $^{151}$Eu$^{3+}$:Y$_2$SiO$_5$ crystal, based on an atomic frequency comb memory, where each temporal mode contains around one photon on average. The quantum coherence of the memory is verified by storing two time-bin qubits for 20 ms, with an average memory output fidelity of $F=(85\pm 2)\%$ for an average number of photons per qubit of $\mu_\text{in}$ = 0.92$\pm$0.04. The qubit analysis is done at the read-out of the memory, using a type of composite adiabatic read-out pulse we developed.

INTRODUCTION

The realization of quantum repeaters [1][2][3], and more generally quantum networks, is a long-standing goal in quantum communication. It will enable long-range quantum entanglement distribution, long-distance quantum key distribution (QKD), distributed quantum computation and quantum simulation 4. Many schemes of quantum repeaters rely on the heralding of entanglement between quantum nodes in elementary links 2,5, followed by local swapping gates 1 to extend the entanglement. The introduction of atomic ensembles as repeater nodes, and the use of linear optics for the entanglement swapping, stems from the seminal DLCZ proposal 2. A key advantage of atomic ensembles is their ability to store qubits in many modes through multiplexing [6][7][8][9][10][11][12], which is crucial for distributing entanglement efficiently and with practical rates 13. Rare-earth-ion (RE) doped crystals provide a solid-state approach for ensemble-based quantum nodes. RE doped crystals can provide multiplexing in different degrees of freedom 8,9,11,[14][15][16], efficient storage 17,18, long optical coherence times 19,20 and long coherence times of hyperfine states 21-24 that allow long-duration and on-demand storage of optical quantum states. Long optical coherence times, in combination with the inhomogeneous broadening, offer the ability to store many temporal modes 7,13. Repeater schemes based on both time and spectral multiplexing have been proposed 8,13. Here we focus on repeaters employing time-multiplexing and on-demand read-out in time 13, which require the long storage times provided by hyperfine states 25. The longest reported spin storage time of optical states with a mean photon number of around 1 in RE doped solids is about 1 ms in 151Eu3+:Y2SiO5 26. However, even near-term quantum repeaters spanning distances of 100 km or above would certainly require storage times of at least 10 ms, and more likely of hundreds of ms 25. A particular challenge of long-duration quantum storage in RE systems is noise introduced by the application of the dynamical decoupling (DD) sequences that are required to overcome the inhomogeneous spin dephasing 26 and the spectral diffusion 22,24.
To reduce the noise one can apply error-compensating DD sequences 27 , or increase the spin coherence time by applying magnetic fields to reduce the required number of pulses 22,28,29 . In this article we report on an atomic frequency comb (AFC) spin-wave memory in 151 Eu 3+ :Y 2 SiO 5 , in which we demonstrate storage of 6 temporal modes with mean photon occupation number µ in = 0.711 ± 0.006 per mode for a duration of 20 ms using an XY-4 DD sequence with 4 pulses. The output signal-to-noise ratio (SNR) is 7.4 ± 0.5, for an internal storage efficiency of η s = 7%, which excludes the contribution of losses due to the optical path between the memory output and the detector. These results represent a 40-fold increase in qubit storage time with respect to the longest photonic qubit storage in a solid-state device 26 . The improvement in storage time is due to the application of a small magnetic field of 1.35 mT, see also 24,29 , which increases the spin coherence time by more than an order of magnitude while simultaneously resulting in a Markovian spin diffusion that can be further suppressed by DD sequences. By applying a longer DD sequence of 16 pulses (XY-16) we demonstrate storage with µ in = 1.062 ± 0.007 per mode for a duration of 100 ms, with an SNR of 2.5 ± 0.2 and an efficiency of η s = (2.60 ± 0.02)%. In addition we stored two time-bin qubits for 20 ms and performed a quantum state tomography of the output state, showing a fidelity of F = (85 ± 2)% for µ in = 0.92 ± 0.04 photons per qubit. To analyse the qubit we propose a composite adiabatic control pulse that projects the output qubit on superposition states of the time-bin modes. The current limit in storage time is technical, due to heating effects in the cryo cooler caused by the high power of the DD pulses and the duty cycle of the sequence. The measured spin coherence time as a function of the DD pulse number n_p follows closely the expected n_p^(2/3) dependence, which suggests that considerably longer storage times are within reach with some engineering efforts. [Figure 1, panels (b) and (c): sketches of the experimental setup around the memory and filter crystals, respectively. The crystals are glued on a custom mount with two levels at different heights, in the same cryostat. The memory crystal is at the center of a small coil used to generate the RF signal. A larger coil (not shown) on top of the cryostat generates a static magnetic field along the D1 axis of the Y2SiO5 crystal. Optical beams are depicted with exaggerated angles for clarity. AOM: acousto-optic modulator; SPAD: single photon detector. (d) Sketch of the time sequence of pulses used for multimode spin-storage. (e) AFC efficiency measured as a function of AFC time 1/∆, with bright input pulses and magnetic field of 1.35 mT along D1. The solid line indicates the exponential fit resulting in zero-time efficiency η0 = (36 ± 3)% and effective coherence time T AFC 2 = (240 ± 30) µs, see text for details. Error bars represent 95% confidence intervals. More information on the experimental setup and pulse sequences can be found in the Methods section and in Supplementary Notes 1 and 3.] RESULTS The 151 Eu 3+ :Y2SiO5 system The platform for our quantum memory is a 151 Eu 3+ :Y 2 SiO 5 crystal with an energy structure at zero magnetic field as in Figure 1a. The excited and ground states can be connected via optical transitions at about 580 nm 30 .
The quadrupolar interaction due to the effective nuclear spin I = 5/2 of the Eu 3+ ions generates three doublets in both the ground and excited states, separated by tens of MHz. This structure allows to choose a Λ-system with a first ground state |g into which the population is initialized, connected to an excited state |e for optical absorption of the input light, and a second ground state |s for on-demand long-time storage. The full AFC-spin wave protocol 7,31 , sketched in Figure 1d, begins by initializing the memory so to have a comb-like structure in the frequency domain with periodicity ∆ on |g and an empty |s state, via an optical preparation beam (see Figure 1b). The initialization step closely follows the procedure outlined by Jobez et al. 32 . The photons to be stored are sent along the input path and are absorbed by the AFC on the |g ↔ |e transition, leading to a coherent superposition in the atomic ensemble. The AFC results in a rephasing of the atoms after a duration 1/∆, while normally they would dephase quickly due to the inhomogeneous broadening. Before the AFC echo emission, the excitation is transferred to the storage state |s via a strong transfer pulse. The radio-frequency (RF) field at 46.18 MHz then dynamically decouples the spin coherence from external perturbations and compensates for the spin dephasing induced by the inhomogeneous broadening of the spin transition |g ↔ |s . In our particular crystal, the shape of the spin transition absorption line is estimated to be Gaussian with a width of about 60 kHz (Supplementary Note 2). A second strong optical pulse transfers the coherent atoms back into the |e state, after which the AFC phase evolution concludes with an output emission along |e → |g . To implement the memory scheme, a coherent and powerful laser (1.8 W) at 580 nm is generated by amplifying and frequency doubling a 1160 nm laser that is locked on a high-finesse optical cavity 33 . The 580 nm beam traverses a cascade of bulk acousto-optic modulators (AOM), each controlling an optical channel of the experiment, namely optical transfer, memory preparation, filter preparation and input. The optical beams and the main elements of the setup are represented in Figure 1b, c. A memory and two filtering crystals are cooled down in the same closed-cycle helium cryostat to ∼ 4 K, placed on two levels of a single custom mount. The 1.2 cm long memory crystal is enveloped by a coil of the same length to generate the RF field. Another larger coil is placed outside the cold chamber and used to generate a static magnetic field. After the cryostat, the light in the input path can be detected either by a linear Si photodiode for experiments with bright pulses, or by a Si single photon avalanche diode (SPAD) detector for weak pulses at the single photon-level. For photon counting it is necessary to use a filtering setup ( Figure 1c) to block any scattered light and noise generated by the second transfer pulse. Another AOM acts as a temporal gate, before passing the beam through two filtering crystals that are optically pumped so to have a transmission window around the input photon frequency and maximum absorption corresponding to the transfer pulse transition. The 151 Eu 3+ :Y 2 SiO 5 crystals are exposed to a small static magnetic field along the crystal D 1 axis 34 . At zero magnetic field, the protocol enabled to achieve storage of multiple coherent single photon-level pulses up to about 1 ms 26 . 
However, it has been shown that even a weak magnetic field can increase the coherence lifetime 19,24,35,36 , which motivated us to use a ∼ 1.35 mT field along the D 1 axis 29 . Spin-wave AFC The AFC spin-wave memory consists of three distinct processes: the AFC echo, the transfer pulses and the RF sequence, and each process introduces a set of parameters that will need to be optimized globally in order to achieve the best possible SNR, multimode capacity and storage time. Below we briefly describe some of the constraints leading to the particular choice of parameters used in these experiments. The maximum AFC spin-wave efficiency is limited by the AFC echo efficiency for a certain 1/∆, which typically decreases exponentially as a function of 1/∆. We can define an effective AFC coherence lifetime T AFC 2 and efficiency η AFC as η AFC = η 0 exp −4/(∆T AFC 2 ) 32 , where η 0 depends on the optical depth and the AFC parameters. With an external magnetic field of 1.35 mT D 1 , we obtained T AFC 2 = (240 ± 30) µs with an extrapolated zerotime efficiency of η 0 = (36 ± 3) %, see Figure 1e. The η 0 efficiency is consistent with the initial optical depth of 6 in our double-pass input configuration (each pass provides an optical depth of about 3). The effect of the fieldinduced Zeeman split on the AFC preparation process is discussed in detail Ref 29 . In short no adverse effects of the comb quality is expected when the comb periodicity is a multiple of the excited state splitting, provided more than two ground states are available for optical pumping as in Eu 3+ :Y 2 SiO 5 , while other periodicities can lead to a lower AFC efficiency. In Figure 1e the modulation period of about 25 µs indeed corresponds to the excited state splitting of 41.4 kHz. The exponential decay implies that there is a trade-off between the memory efficiency (favoring short 1/∆) and temporal multi-mode capacity (favoring long 1/∆). In addition we must consider the shortest input duration that can be stored, which is limited by the effective memory bandwidth. The optical transfer pulses should ideally perform a perfect coherent population inversion between states |e and |s , uniformly over the entire bandwidth of the in-put pulse. Efficient inversion with a uniform transfer probability in frequency space can be achieved by adiabatic, chirped pulses 37 . Here we employ two HSH pulses proposed by Tian et al. 38 , which are particularly efficient given a limitation in pulse duration. For a fixed Rabi frequency the bandwidth of the pulse can be increased by increasing the pulse duration 37 , which however reduces the multimode capacity of the AFC spin-wave memory. Considering as a priority to preserve the storage efficiency while still being able to store several time modes, we set 1/∆ = 25 µs, corresponding to the first maximum (with efficiency 28%) on the AFC echo decay curve in Figure 1e. Given the 1/∆ delay, we optimized the HSH control pulse duration, leading to a bandwidth of 1.5 MHz for a HSH pulse duration of 15 µs. The remaining 10 µs were used to encode 6 temporal modes, giving a mode duration of T m = 1.65 µs. Each mode contained a Gaussian pulse with a full-width at half-maximum of about 700 ns. The RF sequence compensates for the inhomogeneous spin dephasing and should ideally reduce the spectral diffusion due to spin-spin interactions through dynamical decoupling (DD) 24,39-41 . 
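The trade-off just described between efficiency (favoring short 1/∆) and temporal multimode capacity (favoring long 1/∆) can be made concrete with the quoted fit η_AFC = η0 exp(−4/(∆ T2_AFC)). The short sketch below is a minimal illustration assuming the fitted values η0 = 36% and T2_AFC = 240 µs together with the 15 µs HSH pulse and 1.65 µs mode duration given above; the function names are ours. The smooth fit gives roughly 24% at 1/∆ = 25 µs, slightly below the 28% measured at the first maximum of the modulated decay.

```python
import math

def afc_efficiency(afc_delay_us, eta0=0.36, t2_afc_us=240.0):
    """AFC echo efficiency from the quoted fit: eta = eta0 * exp(-4 / (Delta * T2_AFC))."""
    return eta0 * math.exp(-4.0 * afc_delay_us / t2_afc_us)

def mode_capacity(afc_delay_us, transfer_pulse_us=15.0, mode_duration_us=1.65):
    """Temporal modes that fit in the AFC delay once the HSH transfer pulse is accounted for."""
    return max(0, int((afc_delay_us - transfer_pulse_us) // mode_duration_us))

for inv_delta in (20, 25, 50, 100):  # candidate values of 1/Delta in microseconds
    print(f"1/Delta = {inv_delta:>3} us: eta_AFC ~ {afc_efficiency(inv_delta):.1%}, "
          f"modes ~ {mode_capacity(inv_delta)}")
```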
However, effective dynamical decoupling requires many pulses, with pulse separations less than the characteristic time of the spin fluctuations. Pulse errors can then introduce noise at the memory output 26 , which in principle can be reduced by using error-compensating DD sequences 27 . In practice, however, other factors such as heating of the crystal due to the intense RF pulses limit the effectiveness of such sequences, and noise induced by the RF sequence is the main limitation in SNR of long-duration AFC spin-wave memories 9,26,42 . Characterization with bright pulses We first present a characterization of the memory using bright input pulses and a linear Si photodiode, implementing four decoupling sequences with a number of pulses ranging from a minimum of 2 to a maximum of 16. (1) = (47 ± 2) ms, as expected for a Ornstein-Uhlenbeck spectral diffusion process 40,41,44 . A similar scaling was obtained in Holzäpfel et al. 24 , using a slightly different experimental setup, magnetic field and Λ-system, which indicates that much longer storage times could be achieved. However, adding more pulses for the same storage times introduces additional heating, causing temperature-dependent frequency shifts of the optical transition 30,45 . This technical issue could be addressed in the future by optimizing the heat dissipation in proximity of the crystal. The extrapolated zero-time efficiencies vary between 6 and 9%, and the data appears relatively scattered around the fitted curves for the longer decoupling sequences. These two observations might be a sign of the presence of beats originating in the different phase paths available to the atoms during storage, due to the small Zeeman splitting of the ground state doublets in this regime of weak magnetic field. Similar effects have been shown in a more detailed model of interaction between a system with splittings smaller than the RF pulses chirp 29 . Single-photon level performance We now discuss the memory performance at the single photon level. The dark histograms in Figure 3 show three examples of spin storage outputs with their respective input modes for reference. The lighter histograms show the noise background, measured while executing the complete memory scheme without any input light (see Methods section for details). This noise floor, when integrated over the mode size T m , gives us the noise probability p N . When compared to the sum of the counts in the retrieved signal in the mode, the summed noise count is well below the retrieved signal for all the storage times here reported. We used an XY-4 type of RF sequence for storage at 20 ms, XY-8 for 50 ms and XY-16 for 100 ms. Table 1 summarizes the relevant results, in particular with SNR values ranging from 7.4 to 2.5 for 20 ms and 100 ms respectively. The average input photon number per time mode µ in is close to 1 in all cases, although it varies slightly. To account for this, an independent figure of merit is the parameter µ 1 = p N /η, which corresponds to the average input photon number that would give an SNR of 1 in output 46 . Since it scales as the inverse of the efficiency 26 , it increases with storage time, but for all cases studied here it is well below 1. The noise probability p N varied from 7 · 10 −3 to 11 · 10 −3 , see Table 1, similar to previous experiments 26 . An independent noise measurement at 20 ms showed that the XX, XY-4 and XY-8 resulted in almost identical noise values of p N = 7.4 · 10 −3 , 8.1 · 10 −3 and 8.6 · 10 −3 (error ±0.3 · 10 −3 ), respectively. 
This shows that pulse area errors are effectively suppressed by the higher order DD sequences, and that the read-out noise is caused by other types of errors, which at this point are not well understood. If compared with the efficiencies measured with bright pulses in Figure 2, the storage efficiency measured at the single photon-level is noticeably lower for 50 and 100 ms. We believe this is due to the long measurement times required for accumulating the necessary statistics, which exposes the experiment to long-term fluctuations affecting optical alignment in general and specifically fiber coupling efficiencies. Nonetheless, our results show that our memory is capable of storing successfully multiple time modes at the single photon level, with a SNR that is in principle compatible with storage of quantum states 26 for up to 100 ms. More information on the memory parameter estimations from the data can be found in Supplementary Table 2. Time-bin qubit storage To characterize the quantum fidelity of the memory, we analyzed the storage of time-bin-encoded qubits. Both qubits were prepared in the ideally pure superposition state ψ in = 1/ √ 2 (|E + |L ), where |E and |L represent the early and late time modes of each qubit. Exploiting our 6-modes capacity, we encoded the components |E and |L of the first qubit into the temporal modes 2 and 3 respectively, and similarly for the second qubit in modes 5 and 6. To perform a full quantum tomography of the memory output state, represented by the density matrix ρ out , one needs to be able to perform measurements of the ob- servables represented by the Pauli matrices σ x , σ y and σ z . The observable σ z can simply be measured using histogram traces as shown in Figure 3. The σ x and σ y observables can be measured by making two partial readouts of the memory 46-48 , separated by the qubit mode spacing T m , where each partial transfer pulse should ideally perform a 50% transfer as both modes are emitted after the second transfer pulse. In the past, this has been achieved by using two distinct, shorter transfer pulses 46,48,49 , separated by T m , which in practice can reduce the efficiency below the ideal 50% transfer 46 . This is particularly true for long adiabatic, chirped pulses, which would then need to be severely shortened to produce two distinct pulses separated by the mode spacing T m . In addition the first control pulse would need to be reduced in duration as well, as the chirp rate of all the transfer pulses should be the same 37 . To overcome the efficiency limitation for qubit analysis based on partial read-outs with adiabatic pulses, we propose a composite pulse that can achieve the ideal 50% partial transfer, independently of the pulse duration and mode separation. The composite HSH pulse (cHSH) is a linear sum of two identical adiabatic HSH pulses, with their centers separated in time by T m . The cHSH has a characteristic amplitude oscillation due to the interference of the two chirps, see Figure 4a. Intuitively, one can think of each specific frequency within the AFC band-width as being addressed twice by the cHSH, once by each component, at two distinct times separated exactly by T m , despite the fact that the whole cHSH pulse itself is much longer than T m . As a consequence, the addressed atomic population partially rephases after the pulse at two times separated by T m . An alternative method for analysing qubits consists in using an AFC-based analyser in the filtering crystal 50,51 . 
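The composite pulse proposed above is, by construction, the sum of two identical adiabatic chirped pulses whose centres are offset by the mode spacing T_m, and this interference is what produces the characteristic amplitude oscillation of Figure 4a. The sketch below illustrates the construction with a generic sech-shaped chirped pulse standing in for the actual HSH waveform of Tian et al.; the pulse shape and the relative phase θ are illustrative assumptions, with only the 15 µs duration, 1.5 MHz bandwidth and 1.65 µs mode spacing taken from the text.

```python
import numpy as np

def chirped_pulse(t, center_us, duration_us=15.0, bandwidth_mhz=1.5):
    """Generic sech-shaped adiabatic chirped pulse, a stand-in for the HSH waveform."""
    beta = 10.0 / duration_us                       # truncation of the sech envelope (1/us)
    x = beta * (t - center_us)
    envelope = 1.0 / np.cosh(x)
    # phase of a tanh frequency sweep spanning `bandwidth_mhz` (t in us, frequency in MHz)
    phase = np.pi * bandwidth_mhz / beta * np.log(np.cosh(x))
    return envelope * np.exp(1j * phase)

def chsh_pulse(t, mode_spacing_us=1.65, theta=0.0, **kwargs):
    """Composite pulse: sum of two identical chirped pulses offset by T_m, relative phase theta."""
    center = 0.5 * (t[0] + t[-1])
    return (chirped_pulse(t, center - 0.5 * mode_spacing_us, **kwargs)
            + np.exp(1j * theta) * chirped_pulse(t, center + 0.5 * mode_spacing_us, **kwargs))

t = np.linspace(0.0, 30.0, 6000)                    # time axis in microseconds
amp = np.abs(chsh_pulse(t, theta=0.0))              # amplitude beating, qualitatively as in Fig. 4a
print(f"peak composite amplitude: {amp.max():.2f}")
```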
However, we observed that the SNR was deteriorated when using the same crystal as both filtering and analysing device. The cHSH-based analyser resulted in a significantly better SNR after the filters, yielding a higher storage fidelity. The phase difference θ between the two cHSH components sets the measurement basis, where θ = 0 (θ = π/2) and θ = π (θ = 3π/2) projects respectively on the |+ and |− eigenstates of σ x (σ y ), encoded in the early-late time modes basis as 1/ √ 2 (|E + e iθ |L ). Note that this type of analyser can only project onto one eigenstate of each basis, hence two measurements are required per Pauli operator. Figure 4b shows the histograms corresponding to the two σ x projections. After measuring the expectation value of all three Pauli operators, we can reconstruct the full quantum stateρ out using direct inversion 52 , after verifying that the corresponding state matrix is indeed physical. We hence derive a fidelity of F = (85 ± 2) %, averaged over the two qubits. Raw counts and the resulting expactation val- ues can be found in Supplementary Table 4, with corresponding numbers of experiment repetitions in Supplementary Table 6. The average number of photons per qubit was µin = 0.92±0.04 and the reconstructed density matrixρ out is shown in Figure 4c. The purity of the reconstructed state is P = (76 ± 3) %, which limits the maximum achievable fidelity in absence of any unitary errors to 87%. This indicates that the fidelity is limited by white noise generated by the RF sequence at the memory read-out. Another element supporting this conclusion is given by the fidelity measured with bright pulses, so that the noise is negligible (see Supplementary Note 4), yielding a value of 96 %. We further note that the σ z measurement yielded a SNR of 3.48 ± 0.15, which when scaled up to the single photon level in one time bin becomes about 7.0. This is compatible with the value reported in Table 1 The measured qubit fidelity can be compared to different criteria for quantum storage. In this work we characterize the memory by storing qubits encoded onto weak coherent states. In this context Specht et al. 53 introduced a classical fidelity limit by comparing to a measure-andprepare strategy, where the memory inefficiency and multiphoton components of the states are exploited. Nevertheless, for the efficiency of 7.39% and the mean qubit photon number of µ in = 0.92 the criterion gives a maximum classical fidelity of 81.2% (see Supplementary Note 4), such that our qubit fidelity at 20 ms surpasses the classical limit. We can also consider future applications of the memory in terms of storing a qubit encoded onto a true single photon (Fock state), for which the classical limit is F = 2/3 54 . For storage of true single photon qubits it can be shown that this limit can be surpassed provided that the probability p of finding the photon before the memory is larger than the µ 1 parameter (see Table 1) 9,26 . Recent quantum memory experiments in praesodymium-doped Y 2 SiO 5 has reached a heralding efficiency of 19% of finding a true single photon before the memory 55 , which with our µ 1 = 0.098 at 20 ms storage time would result in a theoretical qubit fidelity of about 75%. The fidelity can be improved by a combination of increasing current memory efficiency and single-photon heralding efficiency. DISCUSSION The results presented here demonstrate that longduration quantum storage based on dynamical decoupling of spin-wave states in 151 Eu 3+ :Y 2 SiO 5 is a promising avenue. 
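The direct-inversion reconstruction and the fidelity and purity figures quoted above reduce to a few lines of linear algebra: ρ_out = (I + ⟨σx⟩σx + ⟨σy⟩σy + ⟨σz⟩σz)/2, checked for positivity and then compared with the ideal state (|E⟩ + |L⟩)/√2. A minimal sketch follows; the expectation values in it are invented placeholders, not the measured data.

```python
import numpy as np

# Pauli operators in the {|E>, |L>} time-bin basis
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def rho_from_pauli(ex, ey, ez):
    """Direct inversion: rho = (I + <sx> sx + <sy> sy + <sz> sz) / 2."""
    return 0.5 * (np.eye(2) + ex * SX + ey * SY + ez * SZ)

def is_physical(rho, tol=1e-9):
    """Check positivity of the reconstructed state (unit trace is built in)."""
    return bool(np.all(np.linalg.eigvalsh(rho) >= -tol))

def fidelity_with(rho, psi):
    """Fidelity <psi|rho|psi> of rho with a pure target state."""
    return float(np.real(psi.conj() @ rho @ psi))

psi_target = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)   # ideal (|E> + |L>)/sqrt(2)

# Placeholder expectation values for illustration only -- not the measured data.
rho = rho_from_pauli(ex=0.60, ey=0.08, ez=0.03)
print("physical:", is_physical(rho),
      "fidelity:", round(fidelity_with(rho, psi_target), 3),
      "purity:", round(float(np.real(np.trace(rho @ rho))), 3))
```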
In terms of qubit storage, we observe a 40-fold increase in storage time with respect to the previous longest quantum storage of photonic qubits in a solid-state device 9 . Currently, the storage time in 151 Eu 3+ :Y 2 SiO 5 is limited by the heating observed when adding more pulses in the decoupling sequence, which is a technical limitation, but the classical storage experiments by Holzäpfel et al. 24 suggest that even longer storage times are within reach in 151 Eu 3+ :Y 2 SiO 5 . Solving the heating problem will also allow applying DD pulses in rapid succession with fixed time separation, as done by Holzäpfel et al. 24 , giving more flexibility in the read-out time and reducing any dead time of the memory. In general the implications of the timing of DD sequences have not yet been addressed in rate calculations of quantum repeaters. Our observation that DD sequences with more pulses did not generate more read-out noise is key to achieving longer storage times also at the quantum level. We also note that these techniques could be applied to Pr 3+ doped Y 2 SiO 5 crystals, where currently quantum entanglement storage experiments are limited to about 50 µs 56 . Another interesting avenue is to apply these techniques to extend the storage time of spin-photon correlation experiments 42,51 in rare-earth-doped crystals. Expanded setup The core of the setup consists of a closed-cycle pulsed helium cryostat with a sample chamber at a typical temperature of 3.5 K. In the sample chamber, a custom copper mount holds the memory crystal, with dimensions 2.5 x 2.9 x 12.3 mm along the (D 1 , D 2 , b) axes 34 , and a series of two filtering crystals of similar size. All these are 151 Eu 3+ :Y 2 SiO 5 crystals with a doping concentration of 1000 ppm 26 . Around the memory crystal, a copper coil generates the RF field to manipulate the atoms on their spin transitions. The coil is coupled to a resonator circuit, with the resonance tuned to the 46 MHz spin transition, which produces a Rabi frequency of 120 kHz, corresponding roughly to an AC field of amplitude 12 mT. Before the resonator, the RF signal is created by an arbitrary waveform generator and amplified with a 100 W amplifier coupled to a circulator to redirect unwanted reflections from the resonator system. The input beam is used to create the optical pulses to be stored, and it goes through the memory crystal twice with a waist diameter of 50 µm. The memory preparation beam is overlapped with the input path with a larger spot size of around 700 µm, to ensure homogeneity of the preparation along the crystal length, with an incident angle of about 1°. The transfer beam is overlapped in a similar way, with a beam diameter at the waist of 250 µm. The AFC structure is prepared with a 3 MHz total spectral width, which however is not fully exploited since the 1.5 MHz bandwidth of the HSH transfer pulses limits the effective memory bandwidth. The width of the transparency and absorption windows in the filter crystals was 2 MHz, with a total optical depth of approximately 7.4 in the absorption window through the two crystals. The relative spectral extinction ratio should thus be exp(7.4) : 1 = 1636 : 1. More information on the setup can be found in Supplementary Note 1 and Supplementary Note 2. Photon counting and noise measurement The quantities µ in and p N in Table 1 correspond to the average number of photons at the memory output for a single storage attempt.
They are obtained by summing raw detections in modes of duration T m = 1.65 µs, then dividing by the number of experiment repetitions, averaging over the 6 modes, and dividing by the detector efficiency η D = 57 % and cryostat-to-detector path transmission (typically between 17 and 20 %). The histograms in Figure 3 and Figure 4b are obtained in the same way for a binning resolution of 200 ns. Noise photons at the memory read-out originate from excitation of ions from the |g state to the |s state during the DD sequence, due to imperfections of the RF pulses 26,27 . These ions are then excited by the readout transfer pulse and decay on the |e -|g transition trough spontaneous emission. These noise photons are thus spectrally indistinguishable from the stored photons. The spontaneous character was verified by observing that its decay constant corresponds to the radiative lifetime T 1 . Note that without RF manipulation the read-out noise is significantly reduced, i.e. the main SNR limitation in current spin-wave experiments is due to RFinduced photon noise. The noise parameter p N indicates the probability of a noise photon being emitted by the memory during a time corresponding to the mode size T m = 1.65 µs. For the spin storage data at T s = 20 ms, visible in Figure 3a and Table 1, it is measured independently by blocking the input beam during the full storage sequence in the same time-modes in which the retrieved modes would be. A more detailed analysis per-mode and the exact number of repetitions for all experiments are reported respectively in Supplementary Table 3 and Supplementary Table 5. To decrease the total acquisition time of the experiments at T s = 50 and 100 ms, we calculated the respective p N values reported in Table 1 from a ∼ 225 µs time window centered at about 190 µs after the first retrieved mode in the same dataset. By doing so, we exploited the fact that the noise floor is due to spontaneous emission with a decay time of 1.9 ms 19 , and can be considered uniform up to ∼ 200 µs after readout. This is confirmed experimentally on the 20 ms datasets, as the difference in p N calculated in the retrieved mode position of the data with input blocked correspond to the result of the procedure above within the Poissonian standard deviation. The histograms displaying noise in Figure 3b and c are representative regions of the noise floor taken at about 20 µs after the retrieved modes in the same dataset. All errors are estimated from Poissonian standard deviations on the raw detector counts and propagated considering the memory temporal modes as independent. DATA AVAILABILITY The data sets generated and/or analysed during the current study are available from the corresponding authors upon reasonable request.
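The per-mode normalization described in the Methods above (sum the raw counts in each T_m window, divide by the number of repetitions, correct for the detector efficiency and the path transmission, and average over modes) is simple bookkeeping. The sketch below illustrates it with invented count numbers; only the 57% detector efficiency and the 17-20% path transmission are taken from the text.

```python
def photons_per_mode(counts_per_mode, repetitions, detector_eff=0.57, path_transmission=0.18):
    """Average photon number per temporal mode at the memory output.

    Raw counts summed in each T_m window are divided by the number of storage
    attempts, corrected for detector efficiency and cryostat-to-detector
    transmission, and averaged over the modes.
    """
    per_mode = [c / (repetitions * detector_eff * path_transmission) for c in counts_per_mode]
    return sum(per_mode) / len(per_mode)

# Invented counts for 6 modes over 10,000 attempts -- illustration only, not the measured data.
signal_counts = [58, 61, 55, 60, 57, 59]
noise_counts = [8, 7, 9, 8, 7, 8]
reps = 10_000
mu_out = photons_per_mode(signal_counts, reps)   # eta * mu_in, output photons per mode
p_noise = photons_per_mode(noise_counts, reps)   # noise probability p_N per mode
print(f"output signal ~ {mu_out:.3f} photons/mode, p_N ~ {p_noise:.4f}, SNR ~ {mu_out / p_noise:.1f}")
```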
7,366.8
2021-09-14T00:00:00.000
[ "Physics" ]
Fault Detection of Inline Reciprocating Diesel Engine: A Mass and Gas-Torque Approach Early fault detection and diagnosis for medium-speed diesel engines are important to ensure reliable operation throughout the course of their service. This work presents an investigation of the combustion-related fault detection capability of crankshaft torsional vibrations in diesel engines. The proposed methodology describes a way of detecting early faults in an operating six-cylinder diesel engine. A model of the six-cylinder DI diesel engine is developed accordingly. As in earlier work by the same authors, the torsional vibration amplitudes are used to superimpose the mass and gas torques. The mass and gas torque analysis is then used to detect faults in the operating engine. The DFT of the measured crankshaft speed under steady-state operating conditions at constant load shows significant variation in the amplitude of the lowest major harmonic order. This holds both for uniform operation and for faulty conditions, and the lowest harmonic orders may be used to correlate their amplitudes with the gas pressure torque and mass torque for a given engine. The amplitudes of the lowest harmonic orders (0.5, 1, and 1.5) of the gas pressure torque and mass torque are used to map the fault. A method capable of detecting the faulty cylinder of an operating Kirloskar diesel engine of the SL90 Engine-SL8800TA type is developed, based on the phases of the lowest three harmonic orders. Introduction The great interest of the engine manufacturing industry and research centers in crankshaft vibration [1] and instantaneous speed analysis for fault or misfire detection in internal combustion engines is due to the positive results obtained using different approaches. A misfire event can be considered as an impulsive excitation of the crankshaft, related to the lack of torque resulting from the missing combustion. This excitation causes a sudden engine speed decrease followed by an oscillation with characteristic amplitude and frequency associated with the torsional behavior of the engine-load system.
A number of methods are devised to detect nonuniformities in the contributions of the cylinders to the total engine output by analyzing the variation of the measured crankshaft's speed may be classified in different groups such as observer-based method, pattern recognition, statistical estimators, order domain methods, and a combination of these methods.In [2], a good overview of the earlier state of the art has been presented.The model developed in [3] has been extensively used with very good results to predict and control torsional vibrations to correlate the amplitudes of the major harmonic orders of the crankshaft's speed variation with the average indicated mean effective pressure of the engine.Balancing the cylinder-wise torque contributions of automotive and other high-speed engines is usually addressed by reconstructing the cylinder-wise netindicated torques, by direct use of the angular acceleration [4].The method suggested [5,6], in which the measured angular speed was directly related to the nonuniform torque contribution to detect misfire.In order to determine the cylinder-wise net-indicated torque, the oscillating torque applied on the flywheel was reconstructed from measurements of the angular speeds of the crankshaft.The oscillating torque can be reconstructed by using a single mass engine model and one angular speed measurement [7] by assuming that the crankshaft is rigid and that the engine is sufficiently decoupled from the transmission and load,.Obtaining an estimate of the prevailing load torque [7,8] uses the fact that the instantaneous torque from the engine is zero at topdead center (TDC) and bottom-dead-center (BDC).In [9], misfire was simulated by disconnection of fuel supply of one cylinder of the engine which describes the measurements of vibro-acoustic exhaust locomotive engine signals for misfire phenomenon simulation and some results of their nonlinear analysis.Instead of estimating the engine load torque [10] proposed an observer-based method, which uses the measured engine load torque directly to reconstruct the cylinderwise net-indicated torques of a six-cylinder engine.In [11], a novel method, based on the polar coordinate system of the instantaneous angular speed (IAS) waveform, has also been introduced: to improve the discrimination features of the faults compared to the FFT-based approach of the IAS signal.In [11], two typical experimental studies were performed on 16-and 20-cylinder engines, with and without faults, and diagnose the results by the proposed polar presentation method.Method for evaluating the torque nonuniformity between the various cylinders of an internal combustion engine due to statistical dispersion in manufacturing or aging in the injection system is proposed in [12], where the problem of the crankshaft torsional vibration effects is presented on the engine speed fluctuations, that make the misfire detection critical, when multiple misfires are present in the same cycle.In [13], a simple algorithm based on an energy model of the engine which requires the measurement of the instantaneous angular speed for misfire detection in reciprocating engines is proposed.By processing the engine dynamics in the angular domain, variations in the working parameters of the engine, such as external load and mean angular speed, are compensated.A dimensionless feature has been abstracted for evaluation of the combustion as well as compression process of each cylinder. 
In this work, a method of detecting early faults in medium-speed power plant diesel engines is proposed, in which, as in [14,15], torsional vibration amplitudes are used to superimpose the mass and gas torques. The mass and gas torque analysis is then used to detect faults in the operating engine. Problem Definition The problem studied in this work is to identify an early faulty cylinder by using the amplitudes of torsional vibration and the lowest harmonic orders (0.5, 1, and 1.5) of the inertia masses and the DFT of the measured engine speed signal based on gas forces. In this paper, the authors present how the amplitudes of torsional vibration play an important role in the performance diagnosis of internal combustion engines, which is a current area of research for the engine manufacturing industry and research centers. Engine Model Figure 1 illustrates a complete mass-elastic model of a four-stroke six-cylinder inline diesel engine system. There is a flywheel on the engine shaft to even out the rotational speed of the engine shaft. [Figure 1: Mass-elastic model of a four-stroke six-cylinder inline diesel engine (labels: Pulley, Flywheel).] The rectangles in Figure 1 represent masses with links rotating in relation to the shafts. For example, the mass of the piston/crank mechanism of each cylinder is illustrated by a rectangle. Damping is represented by the box/plate symbol and the stiffness of the shafts is represented by the spring symbol. In Figure 1, J 1 , . . ., J 7 are the inertias of each cylinder and the pulley, k 1 , . . ., k 8 are the stiffnesses, and c 1 , . . ., c 6 are the viscous dampings, respectively. Results and Discussion In this work, as per [1,6,14,15], a detailed torsional vibration analysis was carried out and is used to calculate the torsional vibration amplitude for the given operating engine. The information in Table 1 pertains to the four-stroke, six-cylinder inline diesel engine. Tables 2, 3, and 4 show the detailed torsional vibration analysis for the engine under consideration. Interpretation and Calculation of Resultant Phase Vector Based on Reciprocating Masses Here, the engine under consideration is the SL90 model manufactured by Kirloskar Oil Engine, Pune-03. For the considered six-cylinder engine used as an example, the crankshaft has crank throws spaced 120° apart. The firing sequence selected is 1-5-3-6-2-4. Considering that the order number is the number of complete cycles occurring during one crankshaft revolution, the first-order vectors will be spaced 120° apart in the sequence corresponding to the firing order. Second-order vectors rotate at twice the crank interval, so each succeeding second-order vector will be spaced at 2 × 120° = 240° apart in the correct sequence, and so on. Torsional Resonance Calculation by Holzer Tabulation Method. A phase vector diagram as shown in Figure 2 is drawn for the torsional resonance of 1387.22 rpm for the given engine to analyze the minor and major critical harmonic orders. It is seen that the third order is the major critical order. Since the 3rd-order excitation of the 3-node vibration mode falls within the operating range of 750 to 2200 rpm, a phase vector diagram as shown in Figure 2 is drawn for the torsional resonance of 1387.33 rpm or 4162 vpm (Vibration per Minute).
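The phase-vector construction described above can be automated: for harmonic order k, successive vectors in the firing sequence are spaced by k × 120°, so orders that are multiples of half the cylinder number have all vectors aligned and become the major orders. The sketch below is a minimal illustration of this bookkeeping; the firing order and 120° spacing are taken from the text, while the helper names are ours and the Table 5 amplitudes are not reproduced.

```python
FIRING_ORDER = [1, 5, 3, 6, 2, 4]      # firing sequence of the considered engine
FIRING_INTERVAL_DEG = 120.0            # crank angle between successive firings (720/6)

def phase_angles(order):
    """Phase angle (deg) of each cylinder's vector for a harmonic order:
    successive firings are spaced by order x 120 deg, as described above."""
    return {cyl: (order * i * FIRING_INTERVAL_DEG) % 360.0
            for i, cyl in enumerate(FIRING_ORDER)}

def is_major_order(order):
    """Major orders (multiples of half the cylinder number) have all vectors aligned."""
    return len({round(a, 6) for a in phase_angles(order).values()}) == 1

for k in (0.5, 1.0, 1.5, 3.0, 6.0):
    angles = ", ".join(f"cyl {c}: {a:.0f}" for c, a in phase_angles(k).items())
    print(f"order {k}: {angles} -> major order: {is_major_order(k)}")
```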
Figures 2(a) and 2(b) shows the phase angle diagram with resultant vectors for the 0.5 order and l order with phase angles 60 • and 120 • .The cylinder number and the amplitude vectors corresponding to it are shown in Figure 2 are drawn by using data in Table 5.It is found that more than one vector has the same angular position, so these have been arithmetically added making the diagram to the right.Then, by taking cosine components and algebraically adding the total cosine (vertical) component for the resultant vector is found, and by taking the sine components the resultant sine component is similarly found.The resultant vector magnitude is then It is found that the fourth-order successive vectors are spaced at 4 × 120 = 480 • = (360 + 120 • ) apart and, therefore, have the same pattern as for the first order, similarly for the 7th and 10th orders.Figure 2(a) shows the 0.5, 2.5, 3.5, 5.5, 6.5 orders.The 1.5, 4.5, 7.5 orders as shown in Figure 2(c) all act up and down, as drawn, and have a moderately large resultant.In the case of order numbers which are multiples of half the number of cylinders, that is, 3, 6, 9, and so forth, for the considered six-cylinder engine, all the vectors act in the same direction (not shown in Figure 2), for this reason, they are called "Major Orders." For 0.5 Order.Phase vector angle, θ = tan −1 (sum of all vertical components/sum of all horizontal components) θ = −49.50• . For 1.5 Order.Phase-vector angle θ = tan −1 (sum of all vertical components/sum of all horizontal components) θ = 90 • .For given engine the resultant phase vector can be calculated as above, and this Resultant phase Vector can be represented (by arrow having magnitude equal to amplitude of torsional vibration) as in Figures 2(a), 2(b), and 2(c). Interpretation and Calculation of Resultant Phase Vector Based on Gas Forces Extensive experiments were conducted on a four-stroke six cylinder direct injection diesel engine (Kirloskar SL90-SL8800TA).The engine was operated at constant speed and full loads.To simulate a early faulty cylinder, the engine was operated under normal working condition.The pressures were measured in all cylinders by piezoelectric pressure transducers.The mean-indicated pressure (MIP) and the gas pressure torque (GPT) of each cylinder were calculated from the pressure traces.The GPT and the measured speed were subjected to a discrete Fourier transform (DFT) to determine the amplitudes and phases of their harmonic components.Figure 3 shows an actual pressure curve generated by the pressure transducers on all six cylinders when engine was working under normal condition.The results of the application of this technique to given DI Diesel engine are graphically shown in Figures 4 and 5. 
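The resultant-phase-vector arithmetic used above for both the mass-based and the gas-force-based diagrams (sum the sine and cosine components of the individual vectors and take θ = tan⁻¹(vertical/horizontal)) can be written compactly as below. The per-cylinder amplitudes would come from Table 5, which is not reproduced here, so the example values are placeholders; note how nearly uniform amplitudes almost cancel for a non-major order, as expected.

```python
import math

def resultant_vector(vectors):
    """Resultant of phase vectors given as {cylinder: (amplitude, angle_deg)}.

    Sums the sine (vertical) and cosine (horizontal) components and returns the
    magnitude and the angle theta = atan2(sum vertical, sum horizontal).
    """
    horiz = sum(a * math.cos(math.radians(ang)) for a, ang in vectors.values())
    vert = sum(a * math.sin(math.radians(ang)) for a, ang in vectors.values())
    return math.hypot(horiz, vert), math.degrees(math.atan2(vert, horiz))

# Placeholder amplitudes (Table 5 values not reproduced); angles for the 0.5 order.
vectors_05 = {1: (1.00, 0), 5: (0.98, 60), 3: (0.95, 120),
              6: (0.92, 180), 2: (0.90, 240), 4: (0.88, 300)}
magnitude, theta = resultant_vector(vectors_05)
print(f"0.5-order resultant: magnitude ~ {magnitude:.3f}, phase ~ {theta:.1f} deg")
```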
It is observed that when the cylinders are uniformly contributing to the total engine torque, the first three harmonic orders (K = 0.5, 1, 1.5) play a significant role in the frequency spectrum of the total gas-pressure torque and, consequently, appear with a very low contribution in the frequency spectrum of the crankshaft's speed [5,6].If the frequency spectrum of the crankshaft's speed corresponding to uniform cylinders operation is compared to the spectrum corresponding to a faulty cylinder, one may see that the major difference is produced by the amplitudes of the first three harmonic orders.As far as the cylinders operate uniformly, these amplitudes are maintained under a certain limit.Once a cylinder starts to reduce its contribution, the amplitudes of the first three harmonic orders start increasing.These amplitudes may be used to determine the degree by which a cylinder reduces its contribution to the total gas pressure torque.The identification of the faulty cylinder may be achieved by analyzing the phases of the lowest three harmonic orders.Figure 4 is drawn by reconstructing the pressure traces of the six cylinders in a sequence corresponding to the firing order (1-5-3-6-2-4) with engine operating under normal conditions.Figure 5 shows the lowest three harmonic orders of the measured speed, respecting the measured amplitudes and phases.It is seen that only for the expansion stroke of cylinder 5, all three harmonic curves have, simultaneously, a negative slope.In the phase-angle diagrams (Figure 5) of these orders, the resultant vector corresponding to the harmonic component of the measured speed is also represented (shown by arrow having magnitude equal to amplitude of vibration) is drawn by using data in Figure 6.One may see that, for each of the three considered orders, the vectors are pointing toward the group of cylinders that produces less work.The cylinder that is identified three times among the less productive cylinders is the faulty as shown in Figure 5. Method for Detecting Fault in Operating Engine Based on the Figures 2 and 5, the following method is developed.(1) The phase-angle diagrams, considering the firing order of the engine are drawn for the lowest three harmonic orders placing in the top dead centre (TDC) the cylinder that fires at 0 • in the considered cycle.(2) On these phase angle diagrams, the corresponding vectors of the measured speed are represented in a system of coordinate axes. (3) The cylinders toward which the vectors are pointing are the less contributors and receive a "-" mark.If there are cylinders that receive a "-" mark for all three harmonic orders they are clearly identified as less contributors to the engine total output. (4) This method is able to identify a faulty cylinder at very early stage, that is, before starting engine manufacturing (Table 6) and as soon as its contribution reduces from rated value to zero with respect to the contribution of the other cylinders (Table 7). 
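The identification step above rests on the amplitudes and phases of the lowest harmonic orders of the measured speed. For a four-stroke engine, order k corresponds to k oscillations per crankshaft revolution, i.e. 2k oscillations per 720° engine cycle, so orders 0.5, 1 and 1.5 land in the first three bins of a DFT taken over one complete cycle. The sketch below demonstrates this on a synthetic speed record; the signal and all values are illustrative, not the measured data.

```python
import numpy as np

def lowest_order_components(speed_rpm):
    """Amplitude and phase of orders 0.5, 1 and 1.5 from the crankshaft speed sampled
    uniformly over one complete four-stroke cycle (720 deg of crank angle).

    Order k means k oscillations per revolution, i.e. 2k per engine cycle, so
    orders 0.5, 1 and 1.5 sit in DFT bins 1, 2 and 3 of the cycle-long record.
    """
    n = len(speed_rpm)
    spectrum = np.fft.rfft(np.asarray(speed_rpm, dtype=float)) / n
    return {order: (2.0 * abs(spectrum[b]), float(np.degrees(np.angle(spectrum[b]))))
            for order, b in ((0.5, 1), (1.0, 2), (1.5, 3))}

# Synthetic speed record (illustration only): mean speed plus a weak 0.5-order fluctuation,
# roughly what a single under-contributing cylinder produces.
crank = np.linspace(0.0, 4.0 * np.pi, 720, endpoint=False)     # one 720-deg engine cycle
speed = 1387.0 + 2.0 * np.cos(0.5 * crank - np.radians(30.0))  # rpm
for order, (amp, phase) in lowest_order_components(speed).items():
    print(f"order {order}: amplitude ~ {amp:.2f} rpm, phase ~ {phase:.1f} deg")
```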
Conclusions The proposed methodology describes a way of detecting early faults in an operating diesel engine, and Tables 6 and 7 show the validation of the developed methodology. Here, the torsional vibration amplitudes in Table 2 are used to detect early faults in the operating diesel engine. In this paper, a combination of mass torque and gas torque is used to map the fault. From this work it is found that the amplitudes of the lowest harmonic orders (0.5, 1, and 1.5) of the inertia masses and the DFT of the measured speed based on gas forces may be used effectively to map the fault. This method is capable of detecting an early faulty cylinder (here, cylinder number 5) based on the phases of the lowest three harmonic orders as soon as its contribution starts to drop from the rated value towards zero, relative to the contribution of the other cylinders, owing to its inertia forces. This methodology can also be used for early fault diagnosis in an operating SI engine. Figure 4: Detection of a faulty cylinder from the phases of the lowest three harmonic orders. Table 1: Engine basic data. Table 2: Natural frequency of whole system = 1148 vpm. Table 3: Natural frequency of whole system = 2379 vpm. Table 4: Natural frequency of whole system = 4162 vpm. Table 5: Summary of the values of torsional vibration amplitude from Holzer Tables 2, 3, and 4.
3,500
2012-09-20T00:00:00.000
[ "Engineering" ]
Rough Set Theory Based Fuzzy TOPSIS on Serious Game Design Evaluation Framework , Introduction This study aims to provide a system evaluation model for multimedia game design educators selecting the most suitable MACF design from a series of evaluation criteria.The selection of MACF design criteria is regarded as multicriteria problem, including the subjectivity, uncertainty, and fuzziness in the evaluation process. Based on the above factors, six reasons are proposed for the evaluation.(a) The confirmation of criteria is determined by group experts in which they present subjective and objective considerations.(b) TOPSIS logic is rational and understandable.(c) The judgment rules close to human thinking could be acquired through fuzzy linguistic evaluation.(d) The alternatives could be compared with the objectives and criteria decision-making process.(e) The weights of importance between criteria are taken into account and included in the comparison.(f) The calculation process is simple and easy to understand [1]. Furthermore, fuzzy Delphi method is utilized for screening the criteria indicators form the literature.The entire fuzzy AHP is used for pairwise comparison of weights, as there are plenty of evaluation criteria in the literatures.It is expected to reduce the large amount of comparisons with AHP and achieve the final sequence.Fuzzy TOPSIS therefore simplifies the process of AHP and rapidly calculates the ideal solution and positive ideal solution.The alternatives are compared with ideal solution and positive ideal solution for the sequence.A project is regarded as the best one when it is close to ideal solution and far away from positive ideal solution.The optimal projects are further sequenced.The remainder of this study is structured as follows Section 2 briefly describes the proposed methods.In Section 3, proposed model for meaningful serious-game flow selection is presented and the stages of the proposed approach are explained in detail.How the proposed model is used on a real world example is explained in Section 4. In Section 5, conclusions and suggestions are discussed. Related Works 2.1.Rough Set Theory.Rough set theory proposed by Pawlak [2] has been shown as a useful mathematical tool for exploring data patterns [3] and a tool for decision support systems, especially dealing with imprecise, uncertain, and vague information in the decision process.It is used for distinctive classification and recognition, and it has achieved many goals and has been used for machine learning, researching, expert systems, and decision support system and has been widely applied in diverse domains, including finance, manufacturing, medicine, and image processing.In RST [4], a dataset can be described as an information system containing decision rules in the form "if the conditional attributes apply, then the decisional attributes apply." Chen and Cheng [5] construct hybrid models by using Rough Set (RS) classifiers to provide meaningful decision rules as knowledge-based systems from an intelligent perspective, and offers an alternative method for forecasting credit ratings and assessing the quality of RS classification systems in the global banking industry.Therefore, by using RS to execute selecting important attribute and then classify selecting attributes via intelligent technique. 
Fuzzy Multiple Criteria Decision-Making 2.2.1.Fuzzy Set.Fuzzy set theory is a mathematical theory pioneered by Zadeh [6], which is designed to model the vagueness or imprecision of human cognitive processes.The key idea of fuzzy set theory is that an element has a degree of membership in a fuzzy set [7,8].A fuzzy set is defined by a membership function that maps elements to degrees of membership within a certain interval, which is usually the value [0, 1].If the value assigned is zero, the element does not belong to the set (it has no membership).If the value assigned is one, the element belongs completely to the set (it has total membership).Finally, if the value lies within the interval, the element has a certain degree of membership (it belongs partially to the fuzzy set) [9].Table 1 shows the structure of triangular fuzzy numbers that are used in this paper. Fuzzy Delphi Method. Fuzzy Delphi Method was proposed by Garris et al. [10], and it was derived from the traditional Delphi technique and fuzzy set theory.Noorderhaben [11,12] indicated that applying the Fuzzy Delphi Method to group decision can solve the fuzziness of common understanding of expert opinions.van Laarhoven and Pedrycz [13] proposed the FAHP, which is to show that many concepts in the real world have fuzziness.Therefore, the opinions of decision-makers are converted from previous definite values to fuzzy numbers and membership numbers in FAHP.2.2.3.Fuzzy AHP.Satty [14] proposed the analytic hierarchy process (AHP) methodology which was a systematic method developed.It is to solve complex and multicriteria decision problems powerfully.Cheng et al. [15] improve the AHP by Fuzzy theory.Hsieh et al. [16] employed fuzzy analytic hierarchy process (FAHP) method to solve the problem of planning and design tenders selection in public office building.And FAHP method was also applied in the research of Chen et al. [17] to evaluate expatriate assignments.Thus, in this study, due to the fuzziness existed in the part of evaluation criteria, we decide to adopt the RST and FDM to form the primary evaluation criteria of MACF selection, and employ the FAHP to calculate the weight of individual criteria so as to establish the fuzzy multicriteria model of MACF selection criteria. Fuzzy TOPSIS. Chen and Hwang [18] proposed TOP-SIS multiple criteria method to identify solutions from a finite set of alternatives and initially proposed.Hwang and Yoon [19] define the ideal solution and negative ideal solution.The optimal solution should have the shortest distance from the positive ideal solution and the farthest from the negative ideal solution.In recent years, several researchers adopt fuzzy TOPSIS methods and applications to solve the problem and conflict [20][21][22][23][24]. Fuzzy TOPSIS methodology requires preliminarily information about the relative importance of the criteria.This importance is expressed by attributing a weight to each considered criterion.Chen adapted the methodology to calculate the weight of each criterion and to evaluate it by fuzzy AHP [25]. Learning Theory in Serious Game Design.Serious games have been some attempts to bring in learning effectiveness evaluation models.Garris et al. 
[10] presented a farreaching input-process-output model of instructional games and learning that has implications for the design and implementation of effective instructional games.Prensky [26] proposed digital game-based learning (DGBL) which includes activities that involve learning through solving problems or overcoming challenges posed in games.Specifically, learning arises as a result of the game's tasks; knowledge is enhanced through the game's content, and skills are developed while playing the game [27].The design of digital games is critical in learning.A successful digital game must involve challenge, curiosity and fantasy to increase interest and intrinsic motivation for learning [28], and Added practice and exercise in the game, which can help students to retain information more easily [29], provide immediate feedback, and activate prior knowledge by requiring players to use previously learned skills in order to advance to higher levels of the game [30]. ARCS Learning Motivation Model.The ARCS model is a problem solving approach to designing the motivational aspects of learning environments to stimulate and sustain students' motivation to learn [31].There are two major parts to the model.The first is a set of categories representing the components of motivation.The second part of the model is a systematic design process that assists in creating motivational enhancements that are appropriate for a given set of learners.To accurately measure the change in learner motivation, Karoulis and Demetriadis [32] indicated that the ARCS model can be the standard of how much the learning motivation is increased by the game [33].The four dimensions of ARCS are the following.Attention: attention which increases the learner's curiosity, relevance: establishment of the relevance of the learning content to learners, confidencefeedback to the learner, through the effort and the learning process of self-control, and satisfaction: the satisfaction or reward the learner can gain. Flow Theory. Csikszentmihalyi [34] proposed the original definition of flow and he defined it as "the holistic experience that people feel when they act with total involvement." Flow describes a state of complete absorption or engagement in an activity and refers to the optimal experience [35]. During the state, people are extremely involved with activity that nothing seems to matter.Csikszentmihalyi [34,35] summarized the most commonly exhibited factors of flow into nine characteristic dimensions, including clear goals, immediate feedback, potential control, the merger of action and awareness, personal skills well suited to given challenges, concentration, loss of self-consciousness, time distortion, and autotelic experience.The concept has been broad applied in studies such as sports, work, shopping, rock climbing, dancing, games, and others [35]. 
It is important that the challenge that the player faces in the game matches the player's skill.If challenge is significantly higher than player's skill, the player will fell anxiety.In contrast, if the challenge is significantly lower than player's skill, the player will feel bored.The three-channel model of flow explains the above situation in Figure 1.Therefore, for keeping a player in a flow state, game designers should ensure that while a player's skill increases, the challenges also should become more difficult.[36] proposed meaningful learning strategy with research of cognition and learning.Meaningful learning importance lies in enabling students' acquisition of new knowledge and it is relevant to previous experiences in the personal's information and unique understandings [37].In recent years, several researches have employed mobile technologies to support the achievement of meaningful learning [38,39].Huang et al. [40] design and implement a meaningful learning-based evaluation model for ubiquitous which based on previous study are active, authentic, constructive, cooperative and personal.Several characteristics of u-learning are also linked to attributes of meaningful learning.Therefore, this study adopted these five characteristics of meaningful learning as the evaluation criteria of MACF selection.[41] was utilized to measure the cognitive load items.Extraneous cognitive load is the source for the cognitive load when delivering textbook information or presenting to spend learning memory resources for learners.Cognitive load is based on the load when learners understand how to deal with materials [42,43].Therefore, cognitive load depends on learners having prior knowledge and learning memory resources.In serious game design, it is usually used as supplementary learning material for enhancing learning achievement.There are three criteria to intrinsic cognitive load from the structure and complexity of the materials for cognitive load.The criteria items are shown in Table 2. The Proposed MACF Evaluation Model A MACF design evaluation model (Figure 2) is proposed in this study.By integrating group experts' wisdom with AHP and multicriteria decision-making (MCDM) in RST based Fuzzy TOPSIS, three stages are included: (1) Criteria selected by RST and FDM identification of necessary criteria for MACF design by RST and fuzzy Delphi method; (2) Calculating the weights of criteria by Fuzzy AHP; (3) Evaluating and determining the ranking by Fuzzy TOPSIS. Stage 1 (criteria selected by RST and FDM).MACF design evaluation criteria are first collected through literature, selecting attribute by RST and the hierarchy model identified by experts are screened with Fuzzy Delphi Method.[49] X Note: V: selected, X: not selected, and DF: defuzzification. Step 1 (collect the criteria of decision group from related work and RST).RST addresses the continuing problem of vagueness, uncertainty, and incompleteness by applying equivalence classes to partition training instances using lower and upper approximations [50]. 
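Before the formal definitions given next, a minimal sketch of these lower and upper approximations may help: classes of objects indistinguishable on the chosen attributes are collected into the lower approximation when they lie entirely inside the target set and into the upper approximation when they merely intersect it, with the boundary as their difference. The attribute names and ratings below are hypothetical toy data, not the criteria of this study.

```python
from collections import defaultdict

def indiscernibility_classes(table, attrs):
    """Partition the objects of `table` into classes indistinguishable on `attrs`."""
    classes = defaultdict(set)
    for obj, row in table.items():
        classes[tuple(row[a] for a in attrs)].add(obj)
    return list(classes.values())

def approximations(table, attrs, target):
    """Lower / upper approximation and boundary region of the object set `target`."""
    lower, upper = set(), set()
    for cls in indiscernibility_classes(table, attrs):
        if cls <= target:
            lower |= cls            # classes fully contained in the target
        if cls & target:
            upper |= cls            # classes that intersect the target
    return lower, upper, upper - lower

# Toy information table with hypothetical criteria names and ratings (illustration only).
table = {
    "o1": {"challenge": "high", "feedback": "yes", "selected": 1},
    "o2": {"challenge": "high", "feedback": "yes", "selected": 1},
    "o3": {"challenge": "low",  "feedback": "yes", "selected": 0},
    "o4": {"challenge": "low",  "feedback": "no",  "selected": 1},
    "o5": {"challenge": "low",  "feedback": "no",  "selected": 0},
}
target = {o for o, row in table.items() if row["selected"] == 1}
lower, upper, boundary = approximations(table, ["challenge", "feedback"], target)
gamma = len(lower) / len(table)     # quality of classification for this decision class
print("lower:", lower, "upper:", upper, "boundary:", boundary, "gamma:", gamma)
```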
Let ⊆ and ⊆ be an information system.The set is approximated using information from and then by constructing the lower and upper approximation sets, which are, respectively, The lower approximation requires the classification of elements in as members of set using knowledge in .The upper approximation requires the classification of elements in as possible members of set using knowledge in () = − which is the -boundary region of set , comprising those objects that cannot be clearly classified as members of set using knowledge in .Set is considered "rough" with respect to knowledge in when the boundary region is not empty.The quality of classification of by expresses the percentage of objects that can be correctly assigned to set using attribute , as follows: If () = 1, then the decision table is consistent; otherwise, it is inconsistent.Reduct and core are the two key RST concepts [4,51].Given and ∈ , the reduct is the minimal nonredundant subset of attributes of the original dataset for IND() = IND(), and the reduct can classify the universe with the same quality.The core comprises the most relevant attributes in the original dataset and cannot be removed from an information system.Let RED() denote all reducts of .The intersection of RED() is a core of , as it is shown by (3).Pawlak described RST in detail [4,51] as follows: CORE () = ∩RED () . (3) Step 2 (establish the fuzzy positive reciprocal matrix [ã ÿ ]). The core set of RST is a basic attribute of evaluation for experts.Linguistic variables in a questionnaire are first utilized for finding out the evaluation score of each alternate factor's significance given by the experts.Triangular fuzzy numbers are then used for integrating the experts' opinions to show the fuzziness of significance between factors.The triangular fuzzy number is established with the expert's evaluation values acquired from the questionnaire following the equation w , , , and .The calculation equations are shown as follows. Assuming the evaluation of the th factor's significance given by the th expert, among experts, being the fuzzy weight W of the th factor appears, where Step 3 (calculation of triangular fuzzy number w of each criterion significance).The evaluation of each criterion's triangular fuzzy number given by the experts is calculated for the triangular fuzzy number w of the alternate factor's significance. Assuming the evaluation of the th initial indicator's significance given by the th expert being ÿ = ( ÿ , ÿ , ÿ ), = 1, 2, 3, . . ., , the fuzzy weight of the th initial indicator is calculated as below: where W : fuzzy weight, : evaluation of the th initial indicator's significance given by the th expert, : number of indicator, : number of expert. The total value of the triangular fuzzy number W = ( , , ) of indicators is organized with defuzzification, proposed by Chen and Hwang [18], to present the consensus of the Delphi members to the indicator evaluation scale. Step 4 (defuzzification).Center-of-gravity (COG) tends to perform defuzzification the fuzzy weight W of each alternate factor being the actual value .The equation is show as below. Step 5 (screening evaluation indicators and establishing evaluation criteria).Finally, a proper factor could be screened from numerous factors by setting the threshold (determined by Scree test criterion).The screening follows the principles as below. Situation 1.When ≥ , accept the th factor as the evaluation indicator. Situation 2. 
In practice, the value of the threshold r can be determined by the scree test criterion: the defuzzified values (DF_1, DF_2, . . ., DF_n) are first ordered, a line graph is drawn, and the maximum turning point of the scree plot is taken as the threshold r.

Stage 2 (calculate the weights of criteria by FAHP). The criteria confirmed by the fuzzy Delphi method (FDM) are weighted hierarchically with AHP. The evaluation criteria weights of MACF design form a matrix through pairwise comparisons.

Step 1 (set up the fuzzy pairwise comparison matrix). Let Ã = [ã_ij] be the n × n fuzzy pairwise comparison matrix, where n is the number of criteria and ã_ij is the fuzzy judgement of the relative importance of criterion i over criterion j, with ã_ji = 1/ã_ij.

Step 2 (defuzzification). The fuzzy entries ã_ij = (l_ij, m_ij, u_ij) are converted to crisp values with the center-of-gravity method, a_ij = (l_ij + m_ij + u_ij)/3, giving the crisp comparison matrix A.

Step 3 (calculate the relative weights). The weight vector W = (w_1, . . ., w_n) is obtained from A, for example by normalizing the geometric means of its rows: w_i = (∏_j a_ij)^(1/n) / Σ_k (∏_j a_kj)^(1/n).

Step 4 (validity test of the weights). (I) Solve AW = λW to obtain the eigenvector associated with the weights. (II) Substitute the known A and W to obtain the maximum eigenvalue λ_max. (III) Calculate the consistency index CI = (λ_max − n)/(n − 1). (IV) Calculate the consistency ratio CR = CI/RI, where RI is the random index for a matrix of order n. (V) Consistency test: the judgements are acceptable when CR < 0.1.

Stage 3 (evaluating and determining the ranking by TOPSIS). The decision team applies the evaluation criteria weights and determines the optimal ordering of the alternatives. Our overall survey instrument was based on both previously published surveys (ARCS, flow theory, meaningful learning, and cognitive load) and the serious game-based learning literature. To capture meaningful serious game flow selection practices in game design evaluation, we built on the MACF selection criteria and gathered and developed the instruments for the serious game design selection criteria from these different sources. All instruments were organized into 15 critical constructs, as represented in Table 2. A MACF design evaluation with m alternatives and n criteria can be expressed in matrix format as D = [x_ij], where A_1, A_2, . . ., A_m are the feasible alternatives, C_1, C_2, . . ., C_n are the evaluation criteria, x_ij is the performance rating given by the decision-makers to alternative A_i against criterion C_j, and w_j is the weight of criterion C_j. The algorithm of the fuzzy decision-making method for this selection problem is given as follows.

Step 1 (adopt the evaluation criteria). From Stage 1, the committee of decision-makers adopts the FDM results to identify the evaluation criteria.

Step 2 (adopt the weights of the evaluation criteria). From Stage 2, the importance weights of the criteria are obtained with the appropriate linguistic variables by fuzzy AHP.

Step 3 (evaluate the ratings against each criterion). Linguistic rating variables are used to evaluate the ratings of the alternatives with respect to each criterion.

Step 4 (normalize the decision matrix). The normalized value is calculated as r_ij = x_ij / sqrt(Σ_i x_ij²).

Step 5 (construct the weighted normalized decision matrix). The weighted normalized value is calculated as v_ij = w_j · r_ij.
Step 6 (determine the ideal solutions). Determine the positive ideal solution A+ (the maximum weighted normalized value on each benefit criterion) and the negative ideal solution A− (the minimum value on each benefit criterion) from the weighted normalized decision matrix; for cost criteria the roles of maximum and minimum are reversed. In the corresponding equations, J_1 is the set of benefit criteria and J_2 is the set of cost criteria:

A+ = {(max_i v_ij | j ∈ J_1), (min_i v_ij | j ∈ J_2)}, A− = {(min_i v_ij | j ∈ J_1), (max_i v_ij | j ∈ J_2)}.

Step 7 (calculate the Euclidean distances). Equations (21) and (22) give the distances of each alternative from the positive and negative ideal solutions:

d_i+ = sqrt(Σ_j (v_ij − v_j+)²),   (21)        d_i− = sqrt(Σ_j (v_ij − v_j−)²).   (22)

Step 8 (calculate the ranking order of all alternatives). Calculate the closeness coefficient of each alternative, CC_i = d_i− / (d_i+ + d_i−). According to the closeness coefficient, the assessment status of each alternative can be understood and the ranking order of all alternatives determined.

Evaluated Game Description
Game 1. The energy education game functions as follows. Green City is a serious game designed to deliver energy education in elementary school. It takes the form of a mobile game played on an Android pad. Each game level corresponds to one of eleven building locations around Taiwan, each instantiated as a team in the game. Each game element is equipped with real-world energy sensors that measure energy use, and this monitoring is the main source of each team's score. Additionally, players play simple, fast eco-action minigames, take quizzes, and learn about energy efficiency from sources on the cloud. The actions of a team of players combine to win awards that improve and upgrade the virtual representation of their team building. Figure 3(b) shows teams cooperating in the virtual environment to build the energy city.

Game 2. The software engineering project management game functions as follows [52]. The game situation: the construction of the game includes the situation and character design in addition to the design of the game screen. The story is set in a computer and internet service company whose clients and equipment are becoming more numerous and more complicated. The company therefore wants to develop systems that can answer clients' questions and increase efficiency. The player must help the company evaluate and develop software, act as different roles in the development process, and complete the different tasks of each role to complete the software development. The interface design: the game in this study takes the story background, environment, and age of the players into consideration. The game provides five different roles to choose from. Figure 4(b) shows that every role corresponds to different situations and tasks, and the player can go through the different roles to learn the tasks of the various positions. In the requirement analysis, this study uses a maze game that shows problem signs and the player's position. When passing a problem sign, the character must stop, and the player must solve the current problem in order to keep going forward. In this task, the multiple-choice questions are designed from the meeting records in the game. Besides solving all the problems in the maze, Figure 4(a) shows that the player must find a way out, which increases interest and keeps the player's attention on game-based learning. In this task, the player must also distinguish the requirements into functional and nonfunctional ones. The player then takes an evaluation and receives a score, which is provided to the teacher for reference.
Game 3. The 3D CCGBLS game functions as follows. The learners can understand the operation procedure through a game-based simulation of a clinical path. With the game-based simulation of the clinical path, the system is expected to achieve three objectives: (1) the simulation of various operations allows the player to become familiar with the operation process; (2) the operation simulation allows the player to understand the complications in the operation process; (3) the healthcare information offered in the game can assist the player in acquiring knowledge. The game provides four different game situation levels to choose from. Figure 3(a) shows a doctor visiting and talking with a patient. In the cardiac catheterization game, this study uses first-person view control, which shows the view during cardiac catheterization training. The player must solve all the problems in the game in order to keep going forward. In this task, the multiple-choice questions are designed from the meeting records in the game. Besides solving all the problems in the game, Figure 5(b) shows that the player must find a way out, which increases interest and keeps the player's attention on game-based learning. In this task, the player must also distinguish the emergencies that happen occasionally in the 3D-CCGBLS. If the answer is wrong, the health points decrease by one, the question reappears, and the countdown is reset in order to give the player the chance to correct the mistake. The player must answer within a limited time, which increases the challenge of the game. At the end of the learning phase, the player takes a learning evaluation; the learner's learning data are then collected into the learner's portfolio by a mobile intelligent agent, and the player receives a score that is provided to the teacher for reference.

Game 4. The blood circulation system serious game, called Red Adventure, functions as follows. Aiming at the function of blood circulation in human bodies, the learners log in to the system from the menu in Figure 6(a) and enter the unit practice through the level menu in Figure 6(b); with the main menu and the role selection they enter the unit learning of the circulation systems of the heart and blood, the liver, the spleen, and the lung. The game starts at the right atrium and goes through the left lung, left atrium, spleen, liver, right atrium, right lung, and left atrium, where the circulation of blood covers the pulmonary circulation and the systemic circulation before returning to the pulmonary circulation. In addition to the collection of oxygen and carbon dioxide, oxygen or carbon dioxide is released at specific locations in the process, where there are also attacks of bacteria and wounds to recover from. Before leaving the learning level of each organ, the questions in the game must be answered correctly to obtain learning feedback.

Identification of Necessary Criteria. In this stage, we focused on the analysis of the evaluation criteria for MACF selection. The experts chosen were therefore professionals in areas related to our study with experience in serious game design. In addition, they were required to have rich working experience in serious game design, to hold positions of at least department-manager rank, and to have more than 10 years of working experience. In general, the number of experts ranges from three to fifteen [11,12]. The questionnaire of this study was sent to eleven serious game design experts as subjects.
After that, we designed the questionnaire on a 9-point fuzzy semantic differential scale (see Table 1) and asked the selected experts to answer the survey instrument. The selected experts assigned a relative importance to every collected variable with respect to the three dimensions of ARCS, flow, and meaningful learning in order to confirm the critical constructs as the evaluation criteria of MACF selection. The expert questionnaires were collected, and a triangular fuzzy function for every potential variable was established, as represented in Table 2. When selecting the evaluation criteria, a criterion was generally considered important if its relative importance was greater than 65%. According to this filtering of the collected experts' questionnaires, 15 important criteria were commonly agreed upon by the 11 experts, and a total of 15 instrument items were eliminated; they are listed in Table 2. According to the experts' decision, the evaluation hierarchy of the MACF model was built. There are four levels in the decision hierarchy structured for the MACF design evaluation criteria (Figure 7). The overall goal of the MACF decision process is on the first level of the hierarchy, the criteria are on the second level, the subcriteria are on the third level, and the evaluated MACF models are on the fourth level.

4.3. Calculate the Weights of Criteria. This stage was employed to calculate the fuzzy weights, in the following three steps. Step 1 (collection): this method was based on the experts' precise values, with the experts' opinions synthesized using the geometric mean rather than fuzzy numbers input directly by the experts. Step 2 (defuzzification): since the weights of all evaluation criteria were fuzzy values, it was necessary to compute a nonfuzzy value through defuzzification; the defuzzified weight Wi was obtained through the formulas above. Step 3 (normalization): in order to compare the relative importance among the evaluation criteria effectively, we normalized the obtained weights; they are listed in Table 2. The results of the finalized AHP steps are shown in Table 3. From these results, the specialization, interactivity, and accuracy of the MACF selection can be assessed. The results obtained from the computations based on the pairwise comparison matrix are provided in Table 3. They show that playfulness (C24), skills (C22), attention (C11), and personalized (C35) are the four most important criteria in the MACF selection process as determined by fuzzy AHP. The consistency ratio of the pairwise comparison matrix is calculated as 0.068 < 0.1, so the weights are shown to be consistent and they are used in the selection process.
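As an illustration of the consistency check quoted above (CR = 0.068 < 0.1), the following sketch computes the principal-eigenvector weights, the consistency index, and the consistency ratio for a crisp (already defuzzified) pairwise comparison matrix; the example matrix is illustrative and is not the matrix used in the study.

```python
# Minimal AHP consistency check on a crisp pairwise comparison matrix.
import numpy as np

# Saaty's random index RI for matrix orders 1-9.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights_and_cr(A):
    """Return the principal-eigenvector weights and the consistency ratio of A."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)          # Perron (largest) eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                      # normalized priority weights
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)         # consistency index
    cr = ci / RI[n]                      # consistency ratio
    return w, cr

# Illustrative 3 x 3 comparison matrix (not the study's matrix).
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w, cr = ahp_weights_and_cr(A)
print(w.round(3), "consistent" if cr < 0.1 else "revise judgements")
```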
Evaluating and Determining the Final Rank of the Alternatives. In the following step, the decision-makers assessed the quality of the alternative games. The same fuzzy scale is used for the evaluation as in fuzzy AHP, and the decision matrix with alternatives and criteria can be expressed with the linguistic terms in Table 1. In the case study, there are four game alternatives. After constructing the fuzzy decision matrix and the normalized matrix, the TOPSIS expressions were used for the calculation [25]. The last step of the methodology consists of ranking the candidate game projects by their closeness to the ideal solution. The performance indices are computed to rank the alternatives, and the obtained results are given in Table 4. The evaluation results point out that Game 1 has the best score overall (Game 1 > Game 3 > Game 2 > Game 4).

Conclusion and Suggestions
Multimedia game design education criteria evaluation refers to the selection of instructional strategies, which can affect the learning effectiveness of game design education. Many alternatives and factors across different game design criteria have to be taken into account, so an efficient decision evaluation method is necessary for reinforcing the quality of decision evaluation for MACF design. A systematic evaluation process for MACF design is proposed in this study, which applies triangular fuzzy numbers to express the evaluation linguistics and to consider both subjective judgment and objective analysis. A mixed fuzzy multicriteria decision-making method is further applied to complete the group decision evaluation model. The evaluation model completed the MACF design evaluation based on the literature and the experts' definitions, and the criteria comparisons of the case then yield the optimal design sequence. The criteria weights are acquired by fuzzy AHP, which feeds the calculations of the positive and negative ideal solutions in fuzzy TOPSIS in the criteria decision process. Meanwhile, the weighted decision evaluation is further calculated according to these weights to assess the alternatives and determine their sequence. The proposed MACF design evaluation decision model has largely enhanced the working efficiency of MACF design education. Fuzzy TOPSIS simplifies the AHP-based evaluation process and rapidly generates the results and sequence of the decision evaluations. Moreover, the calculation of indicator weights is important for the evaluation comparison in the case, since different weights could generate different priority sequences of the evaluation results. The weights are determined through the experts' group decision to avoid prejudice, reduce bias in the decision process, and improve the correctness of the criteria evaluation [53]. Most decisions are complex and conflicting, so decision-makers should consider solving such problems with scientific methods. The development of the MACF design evaluation decision model can assist game design educators in proposing teaching strategies. A mixed multicriteria decision-making evaluation model is combined in this study. Although the research model is demonstrated here on MACF design, future research could extend the approach.

Figure 1: Flow. Table 1: Membership function of fuzzy scale. Table 2: Criteria of MACF evaluation model selected by experts using RST and FDM. Table 3: The summary of evaluation criteria weight. Table 4: The final evaluation and ranking of alternative.
6,748
2013-12-23T00:00:00.000
[ "Computer Science" ]
The Effects of Thermal and Atmospheric Pressure Radio Frequency Plasma Annealing in the Crystallization of TiO2 Thin Films: Amorphous TiO2 thin films were respectively annealed by 13.56 MHz radio frequency (RF) atmospheric pressure plasma at discharge powers of 40, 60 and 80 W and by thermal treatment at the corresponding substrate temperatures (Ts). Ts was estimated through three measurement methods (thermocouple, Newton's law of cooling and OH optical emission spectra simulation), which gave closely consistent results of 196, 264 and 322 °C, respectively. Compared with thermal annealing, this RF atmospheric pressure plasma annealing process has an obvious effect in improving the crystallization of the amorphous films, based on the XRD and Raman analysis of the films. Amorphous TiO2 film can be changed to anatase film at a Ts of about 264 °C by 30 min of plasma treatment, while it almost remains amorphous after 322 °C thermal treatment for the same time.

Introduction Titanium dioxide (TiO2) film has been extensively investigated in the last decades by many researchers. TiO2 is a chemically stable and body-friendly semiconductor material with a high refractive index [1], photocatalytic activity [2], and a wide band gap [3]. With these outstanding properties, TiO2 is widely used as a white pigment in paints and toothpastes [4], an antireflection coating [5], a photocatalyst in the environmental [6,7] and energy fields [8,9], and a transport layer in dye-sensitized solar cells [10] and perovskite solar cells [11]. TiO2 has three naturally occurring polymorphs: anatase, brookite, and rutile [1]. Usually, anatase requires a substrate temperature over 450 °C, and in excess of 550 °C is needed to prepare rutile [12]. Anatase has a larger band gap (3.2 eV) than that of rutile (3.0 eV), and the photocatalytic activity of anatase is obviously superior to that of rutile [13]. Reducing the crystallization temperature of anatase will broaden the application field of TiO2. RF plasma treatment has been developed to prepare anatase TiO2 film at low temperature [14][15][16]; amorphous TiO2 film turns to anatase even at room temperature [14]. However, these treatment processes usually need a vacuum system. Atmospheric pressure plasma is widely studied as a continuous process with reduced equipment and handling costs [17,18]. The synergistic effect of plasma deposition and substrate temperature in improving the crystallization of TiO2 film through atmospheric pressure kHz plasma was found and studied in our previous work [19]. In this paper, we have investigated the crystallization process of amorphous TiO2 film annealed by RF atmospheric pressure plasma and by thermal treatment at its corresponding Ts. Amorphous TiO2 thin films were successfully crystallized through 30 min of plasma annealing treatment at 60 W (Ts ~264 °C). However, it is found that the films thermally treated at higher temperatures were still amorphous.
Thin Films Deposition The amorphous TiO2 thin films were prepared by atmospheric pressure dielectric barrier discharge (AP-DBD) chemical vapor deposition. The experimental details can be found in our previous work [19]. The flow rates were set as Ar:O2:TiCl4 (carrier gas) = 1000:10:25 (sccm). The temperature during deposition was not more than 50 °C. The deposition time was 30 min. Quartz was used as the substrate in the thin film deposition. The deposited TiO2 thin films are amorphous and were used as the original samples. The thicknesses of the films are about 105 µm.

Plasma Treatment An RF generator (13.56 MHz, RF-10S/PWT, Advanced Energy, Fort Collins, CO, USA) was used as the power source in plasma annealing. The discharge chamber was made of quartz and the chamber wall is 1 mm thick. The discharge gap is 2 mm. The films were treated separately at three discharge powers: 40, 60 and 80 W. Ar with a flow rate of 500 sccm was used as the treatment gas. The treatment time was 30 min. The waveforms of the applied voltage and discharge current were measured by wideband voltage probes (Tektronix P5100, Beaverton, OR, USA) and wideband current probes (Pearson 2877, London, UK), and recorded on a digital oscilloscope (Tektronix DPO 4101). Optical emission spectra of the RF plasma were investigated by an optical emission spectrometer (OES, Avantes Avaspec-2048tec, Apeldoorn, The Netherlands). The schematic of the experimental setup is shown in Figure 1.
Temperature Measurement during Plasma Annealing Three test methods were used to estimate the substrate temperature during the discharge. (a) As shown in Figure 1, a thermocouple (TES1310, Taipei, Taiwan) was set between the bottom electrode and the chamber, where it is assumed the temperature is close to the substrate temperature during the plasma treatment. (b) After plasma treatment, the temperature of the bottom electrode decreases with time; by recording this temperature decrease with time and using Newton's law of cooling, the original temperature of the electrode was calculated. (c) The gas temperature at the different discharge powers is simulated from the OH peak of the plasma OES.

Heating Treatment The TiO2 films were heated in the same chamber with the same Ar flow rate as in the plasma treatment. The heating temperatures were set following the temperatures of the different power discharges. The treatment lasted 30 min, the same as the plasma treatment.

Thin Films Analysis The samples were characterized by powder X-ray diffraction (XRD) measurements recorded with a Rigaku D/max-2550 PC X-ray diffractometer (Tokyo, Japan) with a rotating anode and a Cu Kα radiation source (λ = 1.54056 Å). Raman measurement was carried out using a Raman spectrometer (Renishaw inVia-Reflex, Wotton-under-Edge, UK). The power of the laser was 5 mW, and the laser excitation was 532 nm. Scans were taken over an extended range (100-800 cm−1). The morphology of the obtained samples (original and P3) was determined with a field emission scanning electron microscope (FESEM, Hitachi S-4800, Tokyo, Japan). Methylene blue (MB, Sigma-Aldrich, St. Louis, MO, USA) was employed as the dye to evaluate the photocatalytic activity of the films. The samples were settled in 20 mL of 5 mg/L MB aqueous solution. The quartz container with the sample and solution was placed on a magnetic stirrer plate (Leici JB-1, Shanghai Leici Co., Ltd, Shanghai, China) with a stirrer bar placed in the solution. The photocatalytic reaction was conducted at room temperature under UV light (PLS-SXE300UV, Beijing Perfectlight Co., Ltd, Beijing, China), which provided 6.6 W of ultraviolet light. The distance between the lamp and the base of the container was 1 m. The decomposition of MB was monitored by measuring the absorbance of the aliquot solution using a UV-vis spectrophotometer (Shimadzu UV-2600, Kyoto, Japan) in a liquid cuvette with deionized water as reference. The concentration of the MB solution was measured every 30 min.

I-V Curve and OES Test of the Discharge Figure 2a shows the voltage and current waveforms of the argon RF discharge at 80 W. The input peak-to-peak voltage (Vp-p) is about 1800 V, and the input peak-to-peak current (Cp-p) is about 0.44 A. The Vp-p and Cp-p of the different treatment powers are shown in Table 1. The OES of the 80 W RF plasma is shown in Figure 2b; the inset is the discharge photograph. The Ar (671.8 nm), Ar (723 nm) and other typical emission lines are visible. Since the discharge is at atmospheric pressure, OH (309 nm) [20], N2 (337.1, 357.7, 371.1 nm) [21] and O (777.1 nm) [22] lines also appear in the plasma.
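As a side note on the MB monitoring described in the Thin Films Analysis subsection above, the measured absorbance is commonly reduced to a degradation curve by taking absorbance as proportional to concentration (Beer-Lambert), so that C/C0 = A/A0; the sketch below illustrates this reduction with invented absorbance readings rather than measured data.

```python
# Reduce MB absorbance readings to a degradation fraction and an apparent
# first-order rate constant via ln(C0/C) = k * t. Values are hypothetical.
import numpy as np

t_min = np.array([0, 30, 60, 90, 120, 150], dtype=float)
A = np.array([1.00, 0.91, 0.82, 0.72, 0.62, 0.51])   # hypothetical A/A0 readings

c_over_c0 = A / A[0]                  # Beer-Lambert: concentration ratio
removal = 1.0 - c_over_c0[-1]
k_app = np.polyfit(t_min, np.log(A[0] / A), 1)[0]    # slope of ln(C0/C) vs t
print(f"MB removed after {t_min[-1]:.0f} min: {removal:.0%}, k_app = {k_app:.4f} 1/min")
```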
Temperature Measured Since the temperature of the discharge has a great influence on crystallization, and it is not easy to obtain an accurate substrate temperature during the plasma discharge, three methods were employed to estimate the substrate temperature during plasma treatment at the different discharge powers.

Thermocouple Using the thermocouple, the temperature of the bottom of the chamber (Tc) at 80 W is about 322 °C, and the temperatures measured at 40 and 60 W are 196 and 264 °C, as shown in Table 2.

OH Peak Simulation The OH (309 nm) rotational temperature is usually used to estimate the gas temperature of a plasma (Tg) [23]. Using the LIFBASE software (version 2.1.1), the OH rotational temperatures of the discharge were simulated; the result at 80 W is shown in Figure 3. The simulated result shows that the gas temperature was 307 °C. The gas temperatures simulated from the OH peak of the OES (To) at discharge powers of 40 and 60 W are 207 and 277 °C, as shown in Table 2.
Newton's Law Newton's law of cooling gives an exponential cooling curve with time [24]. The electrode temperature change with time after the discharge was switched off was recorded and is shown in Figure 4 until the temperature stabilizes. The temperature change over the first 2-39 s is shown in the inset of Figure 4 and fitted with Newton's law of cooling, which has the form T(t) = Tenv + (T0 − Tenv) exp(−kt), where T is the temperature (°C), t is the time (s), T0 is the electrode temperature at t = 0, Tenv is the ambient temperature, and k is the cooling constant. Extrapolating the fitted curve to t = 0 s, the plasma temperatures at the different discharge powers (Tn) are calculated and shown in Table 2.

The thermocouple test temperature (Tc), the gas temperature simulated from the OH peak (To) and the temperature calculated by Newton's law of cooling (Tn) are summarized in Table 2. Tc, To and Tn are closely consistent at the same discharge power, and they all show an increasing trend with increasing discharge power. The highest temperature is assumed to be not above 322 °C at the highest discharge power of 80 W.
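To make the cooling-curve extrapolation above concrete, the following is a minimal sketch of fitting Newton's law of cooling to a recorded temperature decay and reading off the t = 0 intercept; the data points and fitted values are invented for illustration and are not the measurements behind Table 2.

```python
# Fit Newton's law of cooling, T(t) = Tenv + (T0 - Tenv)*exp(-k t), to a cooling
# curve and take the fitted T0 as the electrode temperature during discharge.
import numpy as np
from scipy.optimize import curve_fit

def newton_cooling(t, T0, Tenv, k):
    return Tenv + (T0 - Tenv) * np.exp(-k * t)

t = np.array([2, 5, 10, 15, 20, 25, 30, 35, 39], dtype=float)              # s
T = np.array([307, 289, 263, 240, 221, 204, 189, 177, 168], dtype=float)   # degC (invented)

popt, _ = curve_fit(newton_cooling, t, T, p0=(320.0, 25.0, 0.05))
T0_fit = popt[0]
print(f"Extrapolated electrode temperature at t = 0 s: {T0_fit:.0f} degC")
```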
Thin Film Characterization In order to compare the effects of thermal and plasma treatment in improving film crystallization, three amorphous films were respectively thermally treated at the same substrate temperatures of 196, 264 and 322 °C as at the three discharge powers. The same chamber and Ar gas flow rate were used to ensure consistent treatment conditions, except that the plasma discharge was off. The treatment conditions and the sample names are given in Table 3.

The XRD spectra of the samples treated by plasma and by heating are respectively shown in Figures 5a and 5b. The original as-deposited sample is amorphous. The XRD spectrum of sample P1, treated by 40 W plasma, shows a weak (101) peak at a 2θ value of 25.4° in Figure 5a. Samples P2 and P3 show similar XRD patterns: peaks identified at 2θ values of 25.4°, 38.1°, 48.2° and 53.9°, corresponding to the crystal planes (101), (004), (200) and (105) respectively, confirm the crystal phase against the standard XRD pattern of anatase TiO2 (JCPDS file No. 21-1272). Samples H1 (196 °C) and H2 (264 °C) are amorphous, as the XRD spectra in Figure 5b show, while sample H3 (322 °C) shows a weak (101) peak. Comparing the XRD results of the samples treated by plasma and by heating indicates that plasma annealing is more effective in promoting the crystallization of amorphous TiO2 film than thermal treatment. Anatase TiO2 film can be obtained by atmospheric pressure RF plasma treatment for only 30 min at a temperature of 264 °C.

Raman shift spectra were recorded and the results are shown in Figure 6. As shown in Figure 6a, the original sample and P1 are amorphous, while P2 and P3 show the major anatase TiO2 Raman bands at 144, 197, 399, 515, and 639 cm−1. These bands can be attributed to the five Raman-active modes of the anatase phase corresponding to Eg, Eg, A1g, B1g, and Eg, respectively [25]. Figure 6b shows the Raman shift spectra of the samples with the different heating treatments; all of these samples are amorphous. The Raman results are in good agreement with the above XRD results.

The surface morphology of the original sample and the sample treated by plasma at 80 W was characterized by FE-SEM and is shown in Figure 7. The original sample is fluffy; the film is made of small TiO2 particles. No clear outline of the agglomerated particles is presented, and the morphology indicates poor connection between the aggregated nanoparticles. Figures 7b and 7d show the morphology of the TiO2 thin film treated by 80 W RF plasma. The surface has been etched by the plasma and columnar TiO2 stands on the substrate. The FE-SEM results show that the plasma has a strong etching effect on the amorphous TiO2 thin film.
The promoting influence of RF atmospheric pressure plasma treatment on the crystallization of amorphous TiO2 film can be attributed to its pronounced sheath structure. Plasma positive ions (such as Ar+) are accelerated through the plasma sheath and bombard the film surface directionally during the treatment [26,27]. Previous work has shown that energetic particles bombarding the film induce atomic movement. In general, amorphous phases have higher Gibbs free energy than crystalline phases; to reach a crystalline phase, the material must overcome an activation energy barrier. In conventional furnace annealing, this energy barrier is overcome by externally supplied thermal energy. In RF plasma irradiation, however, the initial energy state is activated to a higher energy level as a result of momentum transfer from the plasma, so the activation energy barrier is reduced and the film is crystallized effectively by the RF plasma irradiation [28]. Both the XRD and Raman results show that the RF plasma treatment enhanced the crystallization of the TiO2 film. Anatase TiO2 film was prepared at about 264 °C, which is less than the usual phase change temperature (450 °C) from amorphous to anatase by thermal annealing. Thus, positive ions accelerated through the RF plasma sheath are considered to be more effective than the randomly moving neutral Ar molecules present during purely thermal treatment, which possibly explains the improved crystallization of the amorphous films under plasma [29]. Figure 8 shows the photocatalytic activity of P3 and H3. After 150 min of irradiation by UV light, the MB concentration is decreased to about 51% by sample P3, while sample H3 shows almost the same result as with no catalyst. As is known, anatase shows higher photocatalytic activity than the amorphous phase [30].
However, the photocatalytic activity is not outstanding compared with well-crystallized TiO2 film [19].

Conclusions This study shows the effects of RF atmospheric pressure plasma treatment and heating annealing on the crystallization of amorphous TiO2 film. Three methods were employed to estimate the substrate temperature Ts during plasma treatment, and closely consistent values of Tc, To and Tn were obtained. The highest temperature is assumed to be not above 322 °C at the highest discharge power of 80 W. The amorphous TiO2 samples become anatase after 60 and 80 W plasma treatment for 30 min (the corresponding Ts is 264 and 322 °C). However, the samples remain amorphous after heating treatment for the same time and at the same temperature. The TiO2 film was also etched by the plasma treatment, and some TiO2 particles were removed from the film. Atmospheric pressure RF plasma is a prospective method for promoting the crystallization of amorphous films at low temperature and within a short time.
6,028.4
2019-05-31T00:00:00.000
[ "Physics" ]
X-ray diffractive imaging of highly ionized helium nanodroplets Finding the lowest energy configuration of N unit charges on a sphere, known as Thomson’s problem, is a long-standing query which has only been studied via numerical simulations. We present its physical realization using multiply charged He nanodroplets. The charge positions are determined by x-ray coherent diffractive imaging with Xe as a contrast agent. In neutral droplets, filaments resulting from Xe atoms condensing on quantum vortices are observed. Unique to charged droplets, however, Xe clusters that condense on charges are distributed on the surface in lattice-like structures, introducing He droplets as experimental model systems for the study of Thomson’s problem.

I. INTRODUCTION In 1904, J. J. Thomson sought to find the configuration of N unit charges on a sphere that minimizes the overall Coulombic energy [1]. A related problem of packing particles on a sphere has since appeared in the research of highly ordered finite systems such as viral morphology, crystallography, and molecular structure [2][3][4][5]. Generally, Thomson's problem remains unsolved, though numerical simulations have yielded a variety of configurations depending on the number of charges. Locally, calculations yield charge distributions characterized by triangular lattice configurations with approximately six nearest neighbors per charge [6][7][8][9][10][11][12]. Surprisingly, it has been shown that the minimum energy configuration is not always the one that places the charges at the furthest distance from each other, nor the configuration with the greatest symmetry. It is not immediately clear if an idealized Thomson problem that ignores the roles of thermal or quantum mechanical kinetic energies has any practical significance. However, charged submicrometer helium droplets present an extraordinary experimental realization [10] because they remain liquid down to absolute zero temperature and can hold hundreds of charges, which are effectively confined and pushed to the droplet's surface due to their mutual Coulomb repulsion. Surface liquid helium acts as a suitable support to study the structure and collective properties of two-dimensional Coulomb systems, i.e., those made of electrons or cations [13][14][15][16]. Closely related to the Thomson problem are electrons inside multielectron bubbles in liquid helium which were discussed as model systems to study the electronic structure of two-dimensional gases of electrons on a sphere with substantial ripplon coupling [17]. A very recent study reports stable bubbles containing six and eight electrons [18]. However, the study of multielectron bubbles is hindered by dynamic instability and their irregular shape [19,20]. It is inevitable that a physical realization of the Thomson problem differs from its purely mathematical cousin in some important respects. Exact localization of charges on a sphere is physically untenable because it requires infinite potentials.

FIG. 1. Schematic of the experiment. Helium droplets are ionized via electron impact and pass through plane capacitor electrostatic deflectors. Ionized helium droplets are doped with Xe atoms which cluster around the positions of the charges and serve as markers. The droplets are interrogated with the XFEL. Diffraction patterns are recorded on a pnCCD detector and processed using a phase retrieval algorithm to obtain the density profile and charge distributions.
The predicted configuration for 18 charges is used as an example to simulate the displayed diffraction pattern. Pursuant to a helium droplet, the charges are held within the droplet by some finite solvation potential. Therefore, their radial positions can vary to some extent. The surface tension of liquid helium is rather small, which may result in some local surface deformation. Some departure from a spherical shape may also be caused by the droplet's rotation. It would therefore be very interesting to see the effect of the physical modalities on the configuration. Recently, Laimer et al. demonstrated that multiply charged 4 He droplets can be produced via electron impact [21]. The cations in He droplets likely exist in the form of covalently bound He 3 + units, which are formed within less than 10 ps from He + ions that are initially produced upon electron impact [22]. Here, the arrangement of charges in helium droplets is studied via scattering of radiation from an x-ray free electron laser (XFEL). Visualization of charges is achieved by doping the droplets with xenon (Xe) atoms, which cluster around the charges and serve as contrast agents. The cluster positions are obtained from diffraction images using an iterative phase retrieval algorithm [23]. Distinctly different from neutral helium droplets, aggregation within charged helium droplets leads to fractured dotlike patterns with compact spots of Xe clusters throughout the droplets. We assign the compact dots to charged Xe clusters near the droplet surface. II. EXPERIMENTAL The experiments are performed at the Small Quantum Systems (SQS) instrument of the European XFEL [24,25] using the Nano-sized Quantum Systems (NQS) end station. A schematic is shown in Fig. 1. Neutral helium droplets are produced via cryogenic nozzle beam expansion of pressurized helium gas through a 5 μm nozzle into vacuum [26]. The droplets are ionized by an electron gun operating at 40-100 V acceleration voltage and ionization currents between 5 μA and 1.3 mA [21]. Charged droplets pass through two parallel plate capacitors (each containing 2 × 2 cm 2 plates placed 2 cm apart from each other), used to verify the charging through electrostatic beam deflection. The ionized droplets are doped with Xe atoms (see Fig. 1) in a pickup cell. The focused XFEL beam [∼3.5 μm full width at half maximum (FWHM) focal diameter] intersects the Xe-doped helium droplets ∼950 mm downstream from the droplet source. The XFEL is operated at 10 Hz with 1 FEL pulse per bunch train, a photon energy of 1.5 keV, a mean pulse energy of 3 mJ, and a nominal pulse duration of ∼25 fs. Diffraction images are recorded with a 1-Mpixel positive-negative charge coupled device (pnCCD) detector (1024 × 1024 pixels, 75 × 75 μm 2 each) [27], centered along the XFEL beam axis 542 mm behind the interaction point. The detector consists of two separate panels (1024 × 512 pixels each), located above and below the x-ray beam with a central, rectangular cutout to accommodate the primary beam [19]. Each diffraction pattern is used to reconstruct the density distribution of Xe clusters within a single, isolated doped droplet (see Fig. 1) by applying an iterative phase retrieval algorithm termed droplet coherent diffractive imaging (DCDI) [23]. III. 
RESULTS Diffraction images of neutral droplets are recorded for the initial droplet beam characterization and confirm that the droplets have an average radius of 250 nm (roughly 10 9 atoms) and an average aspect ratio (AR) (major diffraction axis/minor diffraction axis) of 1.04, similar to previous conditions [28]. The nonsphericity can be assigned to centrifugal deformation of rotating droplets [26,28,29]. At low angular momenta and aspect ratios of 1 < AR < 1.1, these can be described as spheroids. The flux of atoms carried by the droplets is monitored by the pressure rise in the beam dump chamber [26]. Ionization of the droplets did not lead to any measurable decrease (<10%) in flux. However, the application of ∼100 V/cm to the parallel plate capacitors downstream from the ionizer completely extinguishes the droplet flux, demonstrating effective charging. Initial measurements are performed on neutral Xe-doped droplets; see Fig. 2. The diffraction images, Figs. 2(1a) and 2(2a), exhibit rings close to the center and speckles in the outer region due to scattering off Xe clusters. Figures 2(1b) and 2(2b) show the corresponding density reconstructions (helium is depicted in blue and Xe clusters in red/yellow) as obtained via DCDI. The Xe clusters form filaments aligned along a common direction. They are the result of Xe atoms condensing on the cores of quantum vortices, as documented in our previous works [23,26,29,30]. Figure 3 shows diffraction patterns and density reconstructions from charged, Xe-doped 4 He nanodroplets. The diffraction images Figs. 3(1a)-3(3a) resemble those for uncharged droplets. Figure 3(4a) is noteworthy as it contains six Bragg spots, arranged hexagonally, which have been marked by white rings. Density reconstructions are shown in Figs. 3(1b)-3(4b). In distinction to neutral droplets, aggregation within charged helium droplets appears unique in that it leads to fractured dotlike patterns with compact spots of Xe clusters. The comparison of Xe densities in charged droplets (Fig. 3) with neutral droplets (Fig. 2), obtained in this work as well as in previous works [23,26,29,30] indicates important differences. Xe clusters in neutral droplets are seen as filaments. However, it is possible that a droplet's direction of angular momentum can be aligned with the x-ray beam such that the cylindrical vortices appear as dots on the detector plane. The coexistence, however, of dot-like features and filaments having not been observed in neutral doped droplets in several studies [23,26,29,30], indicates that the dots are not the result of a particular viewing angle. Thus, we assign the dots to the location of charged Xe clusters near the droplets' surface. Figure 3(3b) shows a much larger droplet (AR = 1.01, a = 550 nm, N = ∼ 10 10 helium atoms) with several small clusters arranged in the center. During the experiments, droplets 1, 2, and 3 from Fig. 3 had a constant doping level, set by evaporation of ∼20% of the constituent helium atoms. Considering that the pickup of one Xe atom leads to evaporation of ∼250 helium atoms [26], the number of doped Xe atoms is estimated to be ∼ 7 × 10 5 Xe atoms for the droplets in Figs. 3(1b) and 3(2b), and ∼10 7 Xe atoms for Fig. 3(3b) (or about 10 -3 per helium atom). Figure 3(4b) exemplifies aggressive doping and charging conditions in larger droplets. Here, droplets were produced at nozzle settings of 60 bar, 4 K and with the ionizer set to 200 eV and 1.3 mA emission current. 
The droplet (AR = 1.01, a = 337 nm, N = ∼ 4 × 10 9 He atoms) in Fig. 3(4b) was doped with ∼1.6 × 10 7 Xe atoms, leading to evaporation of ∼50% of helium atoms. The reconstruction shows a very dense pattern of small clusters. The details cannot be fully resolved. Note that the resolution limit of the experiment is ∼20 nm [23] or three pixels in Fig. 3(4b), whereas the average distance between the Xe clusters is estimated to be ∼40 nm. IV. DISCUSSION Significant differences in the Xe density distributions upon doping neutral versus charged droplets are readily apparent. While neutrals exhibit exclusively filament-shaped Xe structures, charged droplets contain compact, dot-shaped Xe clusters often in greater numbers than the filaments. Evidently, the presence of charges influences the formation of Xe clusters. Charges in He droplets were previously proposed to serve as nucleation centers for embedded atoms [21], catalyzing the formation of clusters such as in Fig. 3(1b). The charge acquired by the droplet is estimated from the current density in the experiment (∼ 1 A/m 2 ), time of flight of the droplet through the ionizer (∼25 μs), and assuming the ionization cross section of the droplet corresponds to its geometric cross section. The estimated number of ∼30 charges can be compared with the observed 16 dot-shaped clusters in Fig. 3(1b). The agreement is reasonable considering the coarse estimate and the fact that upon creation, some charges leave the droplet in the form of He n + clusters [31]. Calculations [10] show that charges in helium droplets reside close to the surface. For 20 charges, they are submerged by about 3% of the droplet radius, and move closer to the surface with increasing charge [10]. Xe clusters forming around the charges increases the solvation energy, pulling the charged clusters deeper inside. Calculations [32] show that the potential energy of Xe clusters is flat in the droplet's interior but increases significantly ∼10 nm from the surface. Thus, we expect charges encapsulated in Xe clusters will reside close to the surface. The distance will likely be comparable to the resolution of the present experiments of ∼20 nm. Upon doping helium droplets, vortices and surface charges compete to attract the dopants. Like other particles, some charges will be captured by vortices, while others will remain free [31,33,34]. The partitioning depends on the droplet size, number of vortices, and the binding energy of He + to vortices. The capture impact parameter for Xe atoms by a vortex has been estimated to be ∼0.5 nm [35], and the capture cross section of a 200 nm long filament is ∼200 nm 2 . This can be compared to the cross section for the capture of a Xe atom by a charge of ∼1 nm 2 . This estimate assumes that He 3 + and Xe particles move thermally inside the He droplet and combine if their induction interaction energy exceeds 0.38 K. Therefore, Xe atoms are mostly attracted to vortices. The situation in Fig. 3(1b) appears fortunate as only two vortices are present. However, the interpretation of Fig. 3(1b) is not straightforward, as the filaments likely contain multiple charges, the locations of which cannot be determined. Figure 3(2b) shows an elliptic constellation of Xe clusters around the droplet center. The corresponding threedimensional (3D) structure is not obvious. For example, the figure may correspond to charged clusters attached to vortices arranged in a circle and viewed sideways, as was observed previously [30]. 
The other Xe clusters may correspond to charges on the droplet's surface that are not attached to vortices. Most images obtained in this work resemble that of Fig. 3(3b), exhibiting a constellation of clusters in the middle that lacks recognizable symmetry, and a small density on the periphery. Figure 3(4b) represents the extreme case of intensive charge exposure (200 eV at 1.3 mA emission current) coupled with heavy Xe doping. Although cluster configurations in Fig. 3(4b) cannot be resolved, some conclusions can be drawn from the observation of hexagonal Bragg spot arrangements in Fig. 3(4a). This pattern may originate from a system having short-range order, but lacking long-range order, such as a two-dimensional (2D) liquid. From the spot scattering angle of about 0.021 rad, the average distance between the scattering centers is estimated to be ∼40 nm. The distribution of Xe clusters at a comparable level of doping in neutral droplets (∼40% of helium atoms evaporated) was previously obtained [36] and displays different features such as a network of interconnected filaments. Therefore, we assign the presence of excess Xe clusters in Fig. 3(4b) to droplet charging. Previously observed Bragg spots in neutral droplets were assigned to lattices of quantum vortices. However, the smallest observed distance between vortices was about ∼150 nm, [26] much larger than the estimated distance between scattering centers in Fig. 3(4b). A droplet having such a tight distance between vortices will rotate at an angular velocity about two times higher than the stability threshold for droplet fission [26,28]. Assuming the scattering centers are distributed evenly (with an average distance of 40 nm) on the surface of an R = 340 nm droplet, then the number of charges is estimated to be ∼1000. In comparison, the Rayleigh criterion [10] predicts a maximum charge of 1800. It is likely that the diffraction in Fig. 3(4a) corresponds to a droplet containing a few vortices, but the formation of filaments is suppressed by aggressive charging and doping. Small angle diffraction from a 3D object is well described by the 2D Fourier transform of the column density along the light propagation direction. For a lattice on a sphere, the projection will accurately reflect the distances between the clusters near the droplet center, but peripheral distances will appear smaller. The combined effect will be a rapid decrease in the intensity of the Bragg spots in the radial direction. This agrees with the observation of the first order Bragg spots in Fig. 3(4a). Therefore, the results in Figs. 3(4a) and 3(4b) are consistent with the Xe clusters evenly filling the surface of the droplet. The location of charges in a spherical helium droplet was studied via Monte Carlo calculations [10], where at low T similar configurations were found as in previous calculations minimizing the potential energy [6][7][8]11,12]. Melting of the charge lattice at higher temperature is characterized by a sharp increase in the number of dislocations, which in the limit of large N can be associated with the Kosterlitz-Thouless transition. The melting point corresponds to temperatures T * = 0.025 and 0.01 for N = 32 and 92 charges, respectively. Here, the dimensionless temperature T * is defined as the ratio of the absolute temperature to the Coulomb energy of two elementary charges at a distance equal to the radius of the droplet. 
In ⁴He droplets, T* ≈ 5 × 10⁻³ at R = 200 nm, suggesting that a lattice could be observed in helium droplets relevant to this work. Reconstructions from small-angle scattering diffraction patterns provide the coordinates of charges in the X–Y plane of the detector. The values of the Z coordinates along the direction of the x-ray beam, Z = ±√(S² − (X² + Y²)), can be obtained by assuming that all charges have the same distance S from the droplet center. The sign of Z determines whether the charge resides on the front or the far side of the droplet. Distinguishing charge locations on the front and far sides could be achieved by finding a configuration of minimum potential energy. Currently, this approach is hindered by the presence of vortex filaments, which may also contain some charges with unknown locations. If vortices could be eliminated, the droplets would presumably contain only point-like clusters, which would contain larger numbers of xenon atoms than in the current experiment and produce correspondingly brighter diffraction signals. Our previous work [23] shows that a shift along the Z coordinate results in a phase shift of the complex density of xenon atoms, which could be used to distinguish locations on the two surfaces. This would provide accurate (±10 nm) positions of charges on the front and far sides of the droplet and close to the center. However, the reconstructed Z values for points on the periphery of the droplet will have large uncertainties. To achieve a full 3D reconstruction, large-angle scattering experiments will be needed.
V. CONCLUSIONS In summary, charged helium droplets are a promising experimental realization of Thomson's problem of determining the minimum-energy configuration of charges on a sphere. This work shows that droplets a few hundred nm in radius and containing up to ∼1000 charges can be utilized as Thomson systems. While Thomson-type surface lattices of charge sites were observed, it was also discovered that quantum vortices can coexist with the charge lattice and act as dominant scavengers of Xe atoms. Nevertheless, our results indicate that quasi-free charges occupy positions near the droplet surface, forming lattice structures consistent with numerical solutions of Thomson's problem. Future experiments with ³He droplets could provide opportunities to advance these studies, as such droplets are devoid of quantum vortices above ∼0.15 K. The data that support the findings of this study are stored under a proper DOI and are available from the corresponding authors upon reasonable request at Ref. [37].
4,414.2
2022-06-21T00:00:00.000
[ "Physics" ]
An Analysis of E-commerce Enterprises Sustainable Development and Financing Structure O2O (Online to Offline) is a new e-commerce mode that has brought new development opportunities for e-commerce enterprises. Through investing and financing, some e-commerce enterprises improved their performance rapidly in the short term, but others face serious operational risk and even bankruptcy. Whether e-commerce enterprises have achieved sustainable development is still an open question, and the relationship between financing structure and sustainable development needs to be studied. Based on sustainable development theory and the Higgins sustainable growth model, this paper uses SPSS to analyze the financial statement data of 98 listed e-commerce companies. The empirical results reveal that actual growth rates do not match sustainable growth rates, which means that e-commerce enterprises have not achieved sustainable growth, and that the sustainable growth rate is correlated with the financing structure. Lastly, this paper puts forward constructive suggestions for adjusting the financing structure.
Introduction
In recent years, electronic commerce has gradually become a hot topic because of the O2O business model. Enterprises in many sectors, such as catering, transport and education, are involved in e-commerce. On the one hand, enterprises take advantage of the internet to build green supply chains, making the whole supply chain and the enterprises themselves more efficient. On the other hand, enterprises expand the market through financing and obtain a rapid performance promotion in a very short term. There are also enterprises, however, that do not have a clear profit model: they grow too fast, and their resources and management ability cannot match the growth speed. This situation makes enterprises face serious business difficulties, even the risk of bankruptcy, so enterprises pay close attention to the issue of sustainable development. From the perspective of sustainable development theory, this paper selects 98 listed e-commerce companies and uses SPSS to carry out an empirical analysis. Based on the Higgins sustainable growth model, the study analyzes the financial statements of the listed companies. The result shows that the enterprises do not achieve sustainable growth. On the basis of this result, the paper further analyzes the relationship between financing and sustainable growth. Finally, we put forward several methods to adjust the financing structure so as to help enterprises achieve sustainable growth.
Theoretical Framework
Research on the development of electronic commerce enterprises mainly focuses on two aspects: evaluation of sustainable development and analysis of financing structure. First, sustainable development evaluation. The earliest research achievements in the foreign literature are four classic models of sustainable development. Higgins first proposed the SGR (Sustainable Growth Rate) model [1].
He emphasized the effective allocation of resources and was against enterprises blindly pursuing growth. The Higgins model was the first to use a quantitative method to study sustainable development, and it remains the most classic sustainable growth model. Van Horne optimized the sustainable development model on the basis of Higgins [2]. He studied the sustainable growth model under static and dynamic environments respectively, but the model parameters are complex, which increases the difficulty of applying it. The Higgins model and the Van Horne model are based on the same accounting equation (assets = liabilities + owner's equity). The other two classical sustainable growth models are the Rappaport model and the Colley model, which are based on a cash-flow perspective [3]. Other scholars have expanded the concept and models of sustainable development and applied them in different industries. Garbie put forward sustainable development evaluation indicators for manufacturing enterprises and evaluated their sustainable development through a mathematical quantitative model [4]. Santiteerakul & Sekhari (2015) applied the sustainable development model to the evaluation of enterprises in the field of supply chain [5]. Domestic research on sustainable development evaluation mainly focuses on the financial perspective and studies the sustainable development of enterprises in different areas. Xiaoyan Zhang (2009) studied the sustainable development situation of listed companies in ethnic autonomous regions, combined with financing, investment and dividend distribution, and put forward suggestions to promote sustainable development [6]. Hong Qin (2015) examined the relationship between investment scale, cash flow and sustainable development in gas enterprises, and put forward several methods to maintain the stability of investment and the long-term sustainable development of the enterprise [7]. Fuzhou Han (2014) used the sustainable growth model to analyze 44 listed e-commerce companies in the United States over 2008-2012 [8]. The financial data showed that the enterprises did not achieve sustainable growth. He put forward that financing, profits and other factors influence the sustainable development of the enterprises, but he did not carry out a further empirical test. The existing literature covers enterprises in many industries, such as the biological pharmaceutical industry, the coal industry and the information technology industry. On the one hand, the e-commerce supply chain has been restructured and has brought performance growth; on the other hand, rapid financing expansion and enterprise bankruptcies coexist. Whether e-commerce enterprises have realized sustainable development is still a question. Therefore, hypothesis 1 is put forward. Hypothesis 1: The sustainable growth rate and the actual growth rate are not significantly different.
According to the research literature, enterprises sustainable development of different industries show that investment and financing closely related to sustainable development of the enterprise.Thus research further analyses the relationship between the financing structure and the sustainable development in electronic commerce enterprises.Literature review showed that the pecking order theory of financing is the cornerstone of modern financing structure theory.According to this theory, the financing order should be internal financing, external financing, indirect financing, direct financing, debt financing and equity financing.Internal financing has a low cost and risk.It is good for the promotion enterprise's performance.Enterprises financing usually choose external financing first.Pecking order theory and the concept of sustainable development have intrinsic consistency.Rong Qiu (2016) studied e-commerce enterprise financing mode, analyzed the relationship between the financing and financing efficiency.Finally, he solved the problem of enterprises financing difficulties and put forward constructive suggestions [9].Jianping Tu, Xue Yang (2013) analysed supply chain finance of the e-commerce platform and its influence on enterprises development [10].He also studied the similarities and differences between different traditional financing models.The quantitative research of financing structure and sustainable development in the electronic commerce field are not too much, the main result of quantitative research is financing structure and corporate performance.Although performance and sustainable development of enterprises have internal consistency, but they still belong to different object of study.So research takes the methods and ideas for reference.Based on the pecking order theory, Donglin Yong (2012) studied the performance of listed tourism companies, found that internal financing is positively related to corporate performance [11], external financing is negatively related to corporate performance.Feifei Dai (2014) [12], Pengfei Zhang (2015) [13] made a research in different industries, found similar conclusion.Hypothesis 2 and hypothesis 3 are based on these theories. Hypothesis 2: The internal financing of electronic commerce listed companies is positively related to the sustainable growth rate. Hypothesis 3: The external financing of electronic commerce listed companies is negatively related to the sustainable growth rate. 
Research on the relationship between equity financing and enterprise development has reached three different conclusions. 1) Equity financing ability is positively related to corporate earnings. Lixia Yu and Gen Zhao (2011) studied A-share listed company data and found that directed equity issuance and corporate performance are significantly positively related [14]. Xian Huang (2009) put forward that the ownership structure of listed companies and corporate benefit change in the same direction [15]. Lijun Liu carried out an empirical study of the financing structure of listed companies in China; according to this research on financing preference, listed companies prefer the order of "equity financing, debt financing, internal financing". 2) The equity structure is negatively related to profitability. Xiangjian Zhang and Jin Xu (2005) found that listed companies' performance declined after equity placements. 3) Equity financing and corporate profitability show an inverted U-shaped relationship. Feifei Dai (2014) found that either too high or too low ownership concentration is detrimental to the improvement of business performance. Pecking order theory holds that enterprises should give preference to debt financing before equity financing. But due to the particularity of the capital market in China, where the relevant financial system is imperfect, e-commerce listed companies prefer equity financing (venture capital and private equity) to creditor's rights (debt) financing. So hypothesis 4 is put forward. Hypothesis 4: The equity financing of electronic commerce listed companies is positively related to the sustainable growth rate. As for the relationship between creditor's rights financing and enterprise sustainable development, there are two different opinions. 1) Creditor's rights financing and profitability are positively related. Zheng Ni and Shanwei Wei (2006) found that creditor's rights financing sends positive signals to the market and is related to corporate value. 2) Creditor's rights financing is negatively related to profitability. Xian Huang (2009) found that creditor's rights financing influences enterprise benefits in the opposite direction. Xiujuan Gu and Yaping Kong (2010) found, through an empirical study, a significant decline in performance after the issuance of bonds. Based on pecking order theory, the risk and cost of creditor's rights financing are higher; therefore hypothesis 5 is put forward. Hypothesis 5: The debt financing of electronic commerce listed companies is negatively related to the sustainable growth rate.
Model and Variables
To study the development of e-commerce enterprises, the research uses the Higgins sustainable growth model to evaluate the sustainable growth rate. The model contains four dimensions: sales net interest rate, total asset turnover, equity multiplier and retention ratio. The sales net interest rate reflects the enterprise's operating ability; total asset turnover reflects the turnover capacity of its assets; the equity multiplier reflects the degree of debt financing, the financing mode and the enterprise's capital structure; the retention ratio reflects the enterprise's distribution policy. This model structure is convenient for carrying out the sustainable development evaluation: it helps managers improve management according to the result of the performance evaluation, adjust the financing structure and financial leverage, and achieve sustainable growth.
To study the relationship between financing structure and the enterprise sustainable growth rate, the paper builds five regression models. The traditional financing structure includes a bond financing rate, but the public financial data of e-commerce companies show that bond financing projects are few, and such inadequate data are not suitable as an independent variable. The variables therefore include the internal financing rate and the external financing rate. The internal financing rate can be divided into the depreciation financing rate (DF) and the retained earnings financing rate (RF). The external financing rate can be divided into the stock financing rate (SF), short-term debt financing rate (SBF), long-term debt financing rate (LBF), commercial credit financing rate (CF) and fiscal financing rate (FF). To study internal financing, external financing and their relationship with the enterprise sustainable growth rate, and to verify hypothesis 2 and hypothesis 3, the research builds regression model 1.
Samples and Data
The research object is e-commerce listed companies, and the research focuses on sustainable development. In order to make the conclusion more persuasive, the research selects a long time series of data and analyzes the annual financial statements; otherwise it is easy to cause instability in the conclusion. Sample companies refer to the China Securities Regulatory Commission website, the information website of listed companies and the e-commerce section of the Flush financial network. Enterprises in an abnormal financial position (ST companies), for example those with a negative net profit for two consecutive years, are excluded. Finally the research selects 98 listed e-commerce companies; the list is shown in the Appendix at the end of the paper. Given that the samples from the 2015 financial statements are not sufficient, the paper uses data from 2010 to 2014, five years, 490 samples in total. The financial statement data come from the CSMAR database and its branches, such as the company research series database, the financial statements of Chinese listed companies, and the financial index analysis database of Chinese listed companies. CSMAR is the first and also the largest database of economic, financial and securities information in China. All the information in the database comes from the companies' financial statements.
The definition of variables:
SGR: Higgins sustainable growth rate, SGR = P × A × R × T.
P: Business net interest rate, P = net profit/business income.
A: Total asset turnover, A = operating income/average total assets; average total assets = (ending balance of assets + ending balance of assets a year ago)/2.
R: Earnings retention rate, R = (net profit − cash dividend)/net profit.
T: Equity multiplier, T = final total assets/initial owners' equity.
g: The actual growth rate, g = (current business income − business income of the previous period)/business income of the previous period.
To make an overall comparison of the sustainable growth rate and the actual growth rate, and to study which is higher and more stable, the paper carries out a descriptive statistical analysis of the sample data, as shown in Table 1. The results in Table 1 show that the actual growth rate is around 20%, which is higher than the sustainable growth rate (7.5%). In terms of the maximum, the actual growth rate is higher than the sustainable growth rate; in terms of the minimum, the actual growth rate is lower than the sustainable growth rate. The actual growth rate varies over a larger range: compared with the standard deviation of the actual growth rate (0.34), that of the sustainable growth rate is 0.15, which is relatively stable. The data are consistent with the actual situation, and the sustainable growth rate shows a good development trend.
Sustainable Growth Test
The distributions of the sustainable growth rate and the actual growth rate are unknown. To test whether the e-commerce listed companies achieve sustainable growth, the research chooses a non-parametric test, the Wilcoxon signed-rank test, at a 95% confidence level. The Wilcoxon signed-rank test results are shown in Table 2. The test results show that among the 490 firm-year observations of listed e-commerce companies, the number of negative ranks is 319, which accounts for more than 65% of the total, and the number of positive ranks is 171. The actual growth rate is generally higher than the sustainable growth rate, and there are no ties between the actual growth rate and the sustainable growth rate. The test result is significant, so hypothesis 1 is rejected. The test results are in conformity with reality: e-commerce listed companies adopt financing strategies in great quantities, the actual growth rate is too high, resources and management cannot match this speed, and part of the enterprises' capital chains collapsed. The enterprises do not achieve green sustainable development.
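A minimal sketch of the two computations described above: the Higgins sustainable growth rate SGR = P × A × R × T with the ratio definitions given here, and a Wilcoxon signed-rank comparison of SGR against the actual growth rate g. The function name and all numbers below are invented toy values for illustration; they are not the paper's 490 firm-year observations.

```python
# Toy illustration of the Higgins SGR and the signed-rank test used in Table 2.
import numpy as np
from scipy.stats import wilcoxon

def higgins_sgr(net_profit, revenue, avg_total_assets,
                cash_dividend, end_total_assets, begin_equity):
    P = net_profit / revenue                        # business net interest rate
    A = revenue / avg_total_assets                  # total asset turnover
    R = (net_profit - cash_dividend) / net_profit   # earnings retention rate
    T = end_total_assets / begin_equity             # equity multiplier
    return P * A * R * T

# Single-firm example with made-up statement items (revenue is used for both
# business income and operating income here, a simplification).
print(higgins_sgr(net_profit=120, revenue=1600, avg_total_assets=2000,
                  cash_dividend=30, end_total_assets=2100, begin_equity=900))

# Sample-level comparison of SGR against the actual growth rate g.
rng = np.random.default_rng(0)
sgr = rng.normal(0.075, 0.15, 490)    # simulated sustainable growth rates
g = rng.normal(0.20, 0.34, 490)       # simulated actual growth rates
stat, p_value = wilcoxon(g - sgr)     # paired signed-rank test
print(f"Wilcoxon statistic = {stat:.1f}, p-value = {p_value:.4f}")
```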
Regression Analysis
The results of regression model 1 are shown in Table 3: the internal financing rate is positively related to the sustainable growth rate, with a sig. value of 0.016. The result is significant, and hypothesis 2 is verified. Internal financing cost is low, since it mainly comes from retained earnings; this is consistent with the sustainable development principle that enterprises should choose internal financing first. The external financing rate is negatively related to the enterprise sustainable growth rate, with a sig. value of 0.004. The result is significant, and hypothesis 3 is verified. External financing costs more than internal financing and its risk is bigger, so enterprises should control the scale of external financing.
The results of regression model 2 are shown in Table 4: the equity financing rate is negatively related to the sustainable growth rate, with a sig. value of 0.002. The result is significant, and hypothesis 4 is rejected. An analysis of the present financing situation of electronic commerce enterprises shows that the financing scale exceeds the range supported by sustainable growth; in order to achieve sustainable development, enterprises should control the scale of equity financing. The rate of debt financing is negatively related to the sustainable growth rate, with a sig. value of 0.001. The result is significant, and hypothesis 5 is verified. Debt financing cost is higher, which is consistent with reality, and debt financing projects are relatively few, because listed electronic commerce companies enjoy preferential tax policies in China, such as tax incentives, which is conducive to the development of the enterprises. The depreciation financing rate has no significant relation with the sustainable growth rate, because electronic commerce companies are mostly asset-light and the proportion of depreciation financing is small, so its impact on sustainable growth is not stable, which is consistent with reality (Table 5).
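A sketch of the kind of cross-sectional regression reported in Tables 3-5, here mirroring regression model 1 (sustainable growth rate on internal and external financing rates) with ordinary least squares. The data are synthetic and the variable names are ours; the paper's actual estimates come from SPSS on the CSMAR sample.

```python
# Sketch of regression model 1 (SGR on internal and external financing rates)
# using ordinary least squares. All data below are synthetic; real inputs would
# be the financing rates computed from the annual financial statements.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 490
internal_financing = rng.uniform(0.0, 0.6, n)
external_financing = rng.uniform(0.0, 0.8, n)
# Synthetic response: positive loading on internal, negative on external financing.
sgr = (0.05 + 0.20 * internal_financing - 0.15 * external_financing
       + rng.normal(0.0, 0.05, n))

X = pd.DataFrame({"internal_financing": internal_financing,
                  "external_financing": external_financing})
X = sm.add_constant(X)
model = sm.OLS(sgr, X).fit()
print(model.summary())   # coefficients, t statistics and significance levels
```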
Table 4. Regression model 2 results.
The results of regression model 3 are shown in Table 5: the retained earnings financing rate is positively related to the sustainable growth rate, with a sig. value of 0.043. The result is significant, which further verifies hypothesis 2. The regression result for the commercial credit financing rate is not significant; the research finds that commercial credit financing includes notes payable, accounts payable and advance payments, and the limited data make the estimate unstable. The long-term debt financing rate and the stock financing rate are negatively related to the sustainable growth rate, with sig. values of 0. The results are significant, which further verifies hypothesis 3. The short-term debt financing rate is not significant, because the main purpose of financing for electronic commerce companies is capital expansion, listed companies are more inclined to long-term loan financing, and short-term financing involves a smaller amount of data. The fiscal financing rate is positively related to the sustainable growth rate, and the result is significant.
Table 5. Regression model 3 results.
Conclusions
The innovation of this research is combining sustainable development theory with pecking order theory and using the Higgins sustainable growth model to study the financing, growth and bankruptcy of the companies. The empirical results show that the electronic commerce listed companies do not achieve sustainable growth. The main contributions of the research are analysing the growth rate of e-commerce listed companies and providing a model for reference in enterprise financing decisions. The corporate financing structure is related to the sustainable growth rate: the internal financing rate is positively related to the sustainable growth rate, and the external financing rate is negatively related to the sustainable growth rate. Further analysis finds that the positive correlations include the retained earnings financing rate and the fiscal financing rate, while the negative correlations include the equity financing rate, the long-term debt financing rate and the stock financing rate.
3,896.2
2016-09-06T00:00:00.000
[ "Business", "Economics" ]
Control of parallel non-observable queues: asymptotic equivalence and optimality of periodic policies We consider a queueing system composed of a dispatcher that routes deterministically jobs to a set of non-observable queues working in parallel. In this setting, the fundamental problem is which policy should the dispatcher implement to minimize the stationary mean waiting time of the incoming jobs. We present a structural property that holds in the classic scaling of the system where the network demand (arrival rate of jobs) grows proportionally with the number of queues. Assuming that each queue of type $r$ is replicated $k$ times, we consider a set of policies that are periodic with period $k \sum_r p_r$ and such that exactly $p_r$ jobs are sent in a period to each queue of type $r$. When $k\to\infty$, our main result shows that all the policies in this set are equivalent, in the sense that they yield the same mean stationary waiting time, and optimal, in the sense that no other policy having the same aggregate arrival rate to \emph{all} queues of a given type can do better in minimizing the stationary mean waiting time. This property holds in a strong probabilistic sense. Furthermore, the limiting mean waiting time achieved by our policies is a convex function of the arrival rate in each queue, which facilitates the development of a further optimization aimed at solving the fundamental problem above for large systems. Introduction In computer and communication networks, the access of jobs to resources (web servers, network links, etc.) is usually regulated by a dispatcher.A fundamental problem is which algorithm should the dispatcher implement to minimize the mean delay experienced by jobs.There is a vast literature on this subject and the structure of the optimal algorithm strongly depends on i) the information available to the dispatcher, ii) the topology of the network and iii) how jobs are processed by resources.We are interested in a scenario where: • The dispatcher has static information of the system; • The network topology is parallel; • Resources process jobs according to the first-come-first-served discipline. Static information means that the dispatcher knows the probability distributions of job sizes and inter-arrival times but cannot observe the dynamic state of resources such as the current number of jobs in their queues.This scenario can be of interest in the context of volunteer computing, cloud computing, web server farms, etc.; see, e.g., [24,25,20] respectively. In this framework, the problems of finding an algorithm, or policy, that minimizes the mean stationary delay and of determining the minimum mean stationary delay are both considered difficult; see, e.g., [2,1] for an overview.A policy can be defined as a function that maps a natural number n, corresponding to the n-th In this paper, we focus on the first subproblem and, under some system scaling, we identify a set of policies that are optimal.This result is used to reduce the second subproblem to the solution of a convex optimization. 
One of the folk theorem of queueing theory says that determinism in the inter-arrival times minimizes the waiting time of the single queue [18,23].In view of this classic insight and fixing fractions p, it is not surprising that an optimal policy tries to make the arrival process at each resource as regular (or less variable) as possible.Thus, our stochastic scheduling problem can be essentially converted into a problem in word combinatorics.If the dispatcher must ensure fractions p, the main result known in the literature is that balanced sequences are optimal admission sequences [19,1].However, balanced sequences of given rates p are known to exist in very few particular cases.These cases are captured by Fraenkel's conjecture, which is still open to the best of our knowledge [36]; see also [2, Chapter 2], which contains an overview of which rates p are balanceable. Matter of fact, the problem of finding an optimal deterministic policy is still considered difficult [10,3,38,22,2,7,21].The only exceptions are when resources are stochastically equivalent, where round-robin1 is known to be optimal in a strong sense [27], or when the dispatcher routes jobs to two resources, where balanced sequences can be always constructed no matter the value of rates p [19].In presence of more than two queues, we stress that balanced sequences with given rates p do not exist in general.This non-existence makes the problem difficult and one still wonders which structure should an optimal policy have when p is not balanceable.When the routing is performed to two resources, jobs join the dispatcher following a Poisson process and service times have an exponential distribution, the optimal rates p as function of the inter-arrival and service times have a fractal structure, see [16,Figure 8].This puts further light on the complexity of the problem even in a simple scenario. While deterministic policies are believed to be more difficult to study than probabilistic policies (deterministic and probabilistic in the sense described above), they can achieve a significantly better performance [3].This holds also for the variance of the waiting time because, as discussed above, the arrival process at each resource is much more regular in the deterministic case, especially if there are several resources as we show in this paper.A particular class of deterministic policies, namely billiard policies, have been recently implemented in the context of large volunteer and cloud computing to improve the performance of real applications such as SETI@home [24,25]. Contribution In the framework described above, we are interested in deriving structural properties of deterministic policies when the system size is large.We study a scaling of the system where the arrival rate of jobs, λk, grows to infinity proportionally with the number of resources (queues in the following), Rk, while keeping the network load (or utilization) fixed.This scaling is often used in the queueing community.Specifically, there are R types of queues, and k is the number of queues in each type, i.e., the parameter that we will let grow to infinity.Beyond issues related to the tractability of the problem, this type of scaling is motivated by the fact that the size of real systems is large and that replication of resources is commonly used to increase system reliability. 
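The folk theorem cited above can be illustrated numerically: at the same arrival rate, deterministic inter-arrival times give a smaller mean waiting time than exponential ones in a single FCFS queue. The sketch below iterates Lindley's recursion with exponential service times; it is an illustration under these specific distributional assumptions, not part of the paper's argument.

```python
# Mean stationary waiting time of a single FCFS queue via Lindley's recursion
# W_{n+1} = max(W_n + S_n - T_{n+1}, 0), comparing deterministic and exponential
# inter-arrival times at the same rate. Purely illustrative.
import numpy as np

def mean_waiting_time(interarrivals, services, warmup=10_000):
    w, total, count = 0.0, 0.0, 0
    for i, (t, s) in enumerate(zip(interarrivals, services)):
        w = max(w + s - t, 0.0)      # Lindley's recursion
        if i >= warmup:
            total += w
            count += 1
    return total / count

rng = np.random.default_rng(42)
n = 100_000
lam, mu = 0.8, 1.0                       # arrival and service rates (load 0.8)
services = rng.exponential(1.0 / mu, n)  # i.i.d. exponential service times

det = np.full(n, 1.0 / lam)              # deterministic inter-arrivals (D/M/1)
poi = rng.exponential(1.0 / lam, n)      # exponential inter-arrivals (M/M/1)

print("D/M/1 mean wait:", mean_waiting_time(det, services))
print("M/M/1 mean wait:", mean_waiting_time(poi, services))
# Theory: the M/M/1 mean wait is rho/(mu - lam) = 4.0; the D/M/1 value is smaller.
```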
First, with respect to a class of periodic policies, we define the random variable of the waiting time of each incoming job.This is done using Lindley's equation [26] and a suitable initial randomization.Using such randomization, we can adapt the framework developed by Loynes in [28] to our setting where jobs are sent to a set of parallel queues.In particular, Theorem 1 shows the monotone convergence in distribution of the waiting time of each incoming job.Then, with respect to a given vector p ∈ N R , we define a certain subset of policies that are periodic with period k r p r and such that exactly p r jobs are sent in a period to each queue of type r.While further details will be developed in Section 3.1, this set is meant to imply that queues of a given type are visited in a round-robin manner and that arrivals are "well distributed" among the different queue types.When k → ∞, our main result states that all the policies in this set are equivalent, in the sense that they yield the same mean stationary waiting time, and optimal, in the sense that no other policy having the same aggregate arrival rate to all queues of a given type can do better in minimizing the mean stationary waiting time.In particular, we show that the stationary waiting time converges both in distribution and in expectation to the stationary waiting time of a system of independent D/GI/1 queues whose parameters only depend on p, λ and the distribution of the service times.This is shown in Theorems 2 and 3, respectively. The main idea underlying our proof stands in analyzing the sequence of stationary waiting times along appropriate subsequences.Along these subsequences, it is possible to extract a pattern for the arrival process of each queue that is common to all members of the subsequence.Such pattern is exploited to establish monotonicity properties in the language of stochastic orderings.These properties hold for the considered subsequences only: they do not hold true along any arbitrary subsequence and counterexamples can be given.These properties will imply the uniform integrability of the sequence of stationary waiting times and will allow us to work on expected values. Summarizing, fixing the proportions p of jobs to send to each queue type and given k large, our results state that all the policies belonging to the set that we identify in Section 3.1 yield the same asymptotic performance and are asymptotically optimal.Furthermore, using known properties of the D/GI/1 queue, we obtain that the stationary mean waiting time obtained in our limit is a convex function of p.This reduces the complexity of subproblem ii) above because it boils down to the solution of a convex optimization problem. This paper is organized as follows.Section 2 introduces the model under investigation and provides a characterization of the stationary waiting time (Theorem 1); Section 3 introduces a class of policies and presents our main results (Theorems 2 and 3); Section 4 is devoted to proofs; finally, Section 5 draws the conclusions of this paper. Parallel queueing model We consider a queueing system composed of R types of queues (or resources, servers) working in parallel.Each queue of type r is replicated k times, for all r = 1, . . 
., R, so there are kR queues in total.Parameter k is a scaling factor and we will let it grow to infinity.The service discipline of each queue is first-comefirst-served (FCFS) and the buffer size of each queue is infinite.A stream of jobs (or customers) joins the queues through a dispatcher.The dispatcher routes each incoming job to a queue according to some policy and instantaneously.Figure 1 illustrates the structure of the queueing model under investigation.In the following, indices r, κ, n will be implicitely assumed to range from 1 to R, from 1 to k, in N, respectively. All the random variables that follow will be considered belonging to a fixed underlying probability triple (Ω, F, Pr). Let (T n,κ,r ) n∈N be given sequences of i.i.d.random variables in R +2 .These sequences are all assumed to be independent each other.Quantity T (k) n is interpreted as the inter-arrival time between the n-th and the (n + 1)-th jobs arriving to the dispatcher.Quantity S (k) n,κ,r is interpreted as the service times of the n-th job arriving at the κ-th queue of type r.We assume that S r .For the arrival process at the dispatcher, we will refer to the following cases. Case 1.The process (T (k) n ) n∈N is a renewal process with rate λk and such that Var T It is clear that Cases 2 and 3 are both more restrictive than Case 1.Let • denote the L 1 -norm. Let Quantity q r,κ will be interpreted as the proportion of jobs sent to queue (r, κ). Since q is a vector of rational numbers, n * < ∞.Let V be a discrete random variable with values in {1, . . ., n * } such that Pr(V = i) = 1/n * , for all i = 1, . . ., n * .We assume that V is independent of any other random variable. Let A q (k) be the set of all functions π : N → {1, . . ., R} × {1, . . ., k} such that for all r and κ q for all n, where 1 E denotes the indicator function of event E. Thus, these functions are periodic with period n * and n * q r,κ is the number of jobs sent in a period to queue (r, κ).We refer to each element π ∈ A q (k) as a policy (or a q-policy) operated by the dispatcher, and it is interpreted as follows: π(n) = (r, κ) means that the n-th job arriving to the dispatcher is sent to the k-th queue of type r if n ≥ V , otherwise it means that the n-th job is discarded.Thus, the outcome of random variable V gives the index of the first job that is actually served by some queue.In other words, π(V ) is the first queue that serves some job. Let (T (k) n,κ,r (π)) n∈N be the sequence of inter-arrival times that are induced by policy π at the κ-th queue of type r (under any of the cases above).By construction, T (k) n,κ,r (π) is the sum of a deterministic number of inter-arrival times seen at the dispatcher.The arrival process (T (k) n,κ,r (π)) n∈N can be made stationary if it is allowed a shift in time and a suitable randomization (independent of V and of any other random variable) for the inter-arrival time of the first arrival of each queue.We assume for now that this has been done (details will be given at the beginning of Section 4).As done in [28] and according to [15, p. 
456], this implies that we can extend the stationary process (T (k) n,κ,r (π)) n∈N to form a stationary process (T (k) n,κ,r (π)) n∈Z (clearly, the same holds for the process (S The waiting time of the n-th job arriving to the κ-th queue of type r induced by a policy π ∈ A(k) is denoted by W (k) n,r,κ (π).It is the time between its arrival at the dispatcher (or equivalently at the queue) and the start of its service, and it is defined as follows: for n = 0, W (k) n,r,κ (π) = 0 and for n > 0, where x + def = max{x, 0}.Equation ( 2) is known as Lindley's recursion [26].The assumption that W (k) 0,r,κ (π) = 0 serves to avoid technicalities3 .It is known that the sequence of random variables (W and that n+1,r,κ (π), where ≤ st denote the usual stochastic order; see [28].We refer to W (k) r,κ (π) as the stationary waiting time of jobs at the κ-th queue of type r. Given π, let f : N → N×{1, . . ., R}×{1, . . ., k} be a mapping with the following meaning: f (n) = (n , r, κ) means that the n-th job arriving to the dispatcher is the n -th customer joining queue (r, κ).If n < V , no job is sent to any queue and thus we assume where f j refers to the j-th component of f .With this notation, quantity is the waiting time of the n-th job arriving to the dispatcher induced by a policy π ∈ A q (k), for all n ≥ n * .Now, let (Q r,κ ) ∀r,κ denote a partition of set {1, . . ., n * }.The subsets Q r,κ , for all r and k, are thus disjoint and we further require that the number of points in Q r,κ is n * q r,κ .Let r,κ (π)'s with weights q.Next theorem says that W (k) (π) can be interpreted as the right random variable describing the stationary waiting time of jobs achieved with policy π ∈ A q (k).It is proven by adapting the framework developed by Loynes in [28].In the remainder of the paper, convergence in distribution (in probability) is denoted by Theorem 1.Let Case 1 hold.Let q be such that q r,κ λ < µ r for all r, κ and π ∈ A q (k).Then, and By using the monotone convergence theorem, Theorem 1 implies that also the moments of Finally, for a given p ∈ R R + , we define the auxiliary random variable W r (p), which corresponds to the stationary waiting time of a D/GI/1 queue with inter-arrival times (T n,r (p)) n∈N where T n,r (p) = p /(p r λ) and service times (S n,1,r ) n∈N , for all r.Let also V def = V (p) be a random variable with values in {1, . . ., R} independent of any other random variable and such that Pr(V = r) = p r / p , for all r, and let Note that EW (p) = r pr p EW r (p) can be interpreted as the mean waiting time of R independent D/GI/1 queues averaged over weights p/ p . We will be interested in establishing convergence results for W (k) when k → ∞.With respect to some policies to be defined, we will show forms of convergence to W (p). Discussion Some remarks about the model above and Theorem 1 follow: • To agree on a definition for EW (k) or other moments of W (k) , one does not necessarily need to know the distribution of W (k) , e.g., one can achieve that using Cesaro sums [2].Matter of fact, existing works agree on the structure of EW (k) without constructing the distribution of W (k) .On the other hand, our approach needs to know the distribution of W (k) (and thus Theorem 1) because we can prove convergence results for EW (k) only through a distributional convergence argument, that is [11,Theorem 3.5,pp. 
31].In particular, to prove EW (k) converges to EW (p), we will prove convergence in distribution of W (k) to W (p) and then the uniform integrability of the sequence of the W (k) 's.This is the reason why we need the characterization of the distribution of the stationary waiting time W (k) , which we give in Theorem 1 and prove through classical arguments. • Though several works focused on finding policies that minimize the expected value EW (k) (see the introduction), the analysis of EW (k) in our scaling where k → ∞ seems new. • We assume that q is a vector of rational numbers.From a practical standpoint, this is not a loss of generality for obvious reasons.As an additional remark to support this assumption, we note that it has been proven that in several cases the q-vector that minimize min π∈Aq(k) EW (k) (π) is indeed rational [2, Theorem 32, p.136]; see also [37]. Our approach needs this assumption to prove (7).In the case where π(n) is not periodic, a case that we do not consider here, and the limit lim n→∞ does not exist, it would be interesting to know whether some convergence in distribution of W (k) f (n) (π) occurs.A totally different argument will be needed here. • The fact that in Case 1 we require that Var T (k) n = o(k) is not a loss of generality for Theorem 1, as we do not let k → ∞ there.Case 1 covers the case where T (k) n has a (Gk, α)-phase-type distribution where both G and α are not functions of k. • We may have assumed that each queue of type r was replicated kz r + o(k) times instead of just k times.This is essentially equivalent to our setting and our proofs can be easily adapted to this case though we do not do it for simplicity. Main results For p ∈ Z R + , let and The set A p (k) is interpreted as the set of all periodic policies for which λk pr p is the aggregate mean arrival rate of jobs to all type-r queues.We consider the following problem. As discussed in the introduction, this is considered as a difficult problem.We are interested in establishing structural properties of Problem 1 when k is large.In the following, we define a certain class of policies and then present our main results. A class of periodic policies as the subset of all policies π ∈ A p (k) that satisfy the following properties: • The sequence (π 1 (n)) n∈N has period p and p r is the number of jobs sent to type-r queues per period, for 1 ≤ r ≤ R. The first property specifies the periodicity of π with respect the types of queues, while the second with respect to queues.Thus, the second property says that queues of type r are accessed by jobs in a round-robin (or cyclic) order starting from queue 1, and we need it to ensure that the cardinality of p |, does not vary with k.One can verify that These policies can be implemented in a distributed manner.More precisely, one can think that there are two tiers of dispatchers: the dispatcher in the first tier schedules jobs inter -group, while the dispatchers in the second tier, R in total, schedule jobs intra-group and implement round-robin. With respect to a sequence of policies (π (k) ) k∈N , where p , we will show (Lemma 2) This means that the finite dimensional distributions of the arrival process at each queue will be 'close' to the deterministic process, when k is large, which implies that some form of convergence to W (p) should occur in view of the continuity of the stationary waiting time [12]. Asymptotic equivalence and optimality Next theorem proves a first form of convergence of Theorem 2. 
Let Case 1 hold.Let p ∈ Z R + be such that λ pr p < µ r for all r.Let also an arbitrary sequence (π (k) ) k∈N be given where π Theorem 2 is not enough to claim that EW (k) (π (k) ) converges as well.Convergence of the first moment is important from an operational standpoint, as in practice one desires to optimize over EW (k) or Var W (k) .Under some additional assumptions, next theorem states that also the expected value and the variance of W (k) converge.Furthermore, it states that all the policies in set C (k) p are both asymptotically equivalent and asymptotically optimal, with respect to the criterion in Problem 1. Theorem 3. Let Case 2 or 3 hold.Let p ∈ Z R + be such that λ pr p < µ r for all r.Let also an arbitrary sequence (π (k) ) k∈N be given where π Provided that k is large, thus, no other policy in A p (k) \ C (k) p can do better than any π (k) ∈ C (k) p to minimize EW (k) .It also explicits the limiting value of EW (k) (π (k) ), which is EW (p).It is known that EW (p) is a convex function in p (e.g., [30]). To prove Theorem 3, we use the well-known fact that [5] for any π ∈ A p (k), where ≤ icx denotes the increasing-convex order (see, e.g., [33,35] for their definition).Using this lower bound first and then that the waiting time of the D/GI/1 queue is convex increasing in its arrival rate, it is not difficult to show that Then, we prove that the sequence EW (k) (π (k) ) is upper bounded by a sequence that converges to the lower bound in (19).An observation here is that the lower bound (18) holds under conditions that are weaker than those assumed in this paper; see [23].For instance, it is possible to extend (19) (and thus Theorem 3) to the case where i) policies are not periodic, ii) the fractions of jobs to send in each queue are not necessarily rational numbers, and iii) policies are randomized [31], that is the case where π(n) is any probability mass function over the set of queues.We do not investigate these extensions in further detail. Proofs In this section, we develop proofs for Theorems 1, 2 and 3. Before doing this, we fix some additional notation and show how it is possible to make the arrival process at each queue stationary with respect to any policy in π (k) ∈ A q (k), q ∈ Q kR + such that q = 1 (as assumed in Section 2).Let us consider the κ-th queue of type r and its arrival process (T (k) n,κ,r ) n∈N .Each inter-arrival time clearly depends on the policy π (k) implemented by the dispatcher, i.e., T n,κ,r (π (k) ), though in the following we drop such dependence for notational simplicity.Since π (k) is periodic by construction with period n * and n * q r,κ jobs have to be sent within a cycle to the queue identified by the couple (r, κ), the sequence (T (k) n,κ,r ) n∈N is composed of a repeated pattern of n * q r,κ inter-arrival times, that we can write as where each quantity A (k) j,κ,r , j = 1, . . ., n * q r,κ , is the sum of a deterministic number (that depends on π (k) ) of inter-arrival times to the dispatcher.We denote such number by a (k) j,r,κ .Thus, where = st denotes equality in distribution. If for some p ∈ Z R + , then we notice that a (k) j,r,κ does not vary with κ because by symmetry the arrival processes of all queues of a given type are equal, in distribution, up to a shift in time.In this case, the arrival process at any queue of type r becomes a sequence composed of a repeated pattern of p r inter-arrival times that we can write as 2,κ,r , . . ., A (k) pr,κ,r . 
Therefore, when p , we will just write a and that Now, we want to make (T n,κ,r ) n∈N stationary.This can be done as follows by randomizing over the first inter-arrival time of queue (r, κ).Now, let us consider the auxiliary random variables U r,κ , for all r and κ, which we assume independent each other and of any other random variable and having a uniform distribution in [0, 1].Then, we take Therefore, if j+1,r,κ and so forth according to the pattern (20).Defined in this manner, one can see that (T (k) n,r,κ ) n∈N is stationary as desired.At this point, the issue is the following.Consider two queues, say (r, κ) and (r , κ ), and suppose that T j ,r ,κ .We should check whether policy π (k) is actually able to induce arrival processes at the queues equal (samplepath-wise) to the ones built above through (25).One can easily see that this can be done with a possible shift of time for the arrival process at the queues and possibly discarding a finite number of jobs.This is allowed because these operations do not change the stationary behavior. In the remainder, we will use stochastic orderings.We will denote by ≤ st , ≤ cx and ≤ icx , the usual stochastic order, the convex order and the increasing convex order, respectively; we point to, e.g., [33,35] for their definition. We will also refer to the following lemma, which can be easily proven. Lemma 1.Let N be a finite positive integer and suppose (f k,n ) (k,n)∈N×{1,...,N } is a semi-infinite array of numbers such that for some constant c, lim k→∞ f k,n = c, for n ∈ {1, . . ., N }.Then, for any sequence We now give proofs for our results, i.e., Theorems 1, 2 and 3. Proof of Theorem 1 Let random variables V 1 and V 2 be given such that f (n+1) (π) be the Loynes waiting times (see [28, p. 501]) obtained when the first queue to serve a job is given by the outcome of V 1 and V 2 , respectively; thus, W (k) f (n) (π) is the waiting time at time 0 with n jobs in the past at queue (r, κ), provided that f (n) = (n , r, κ).Then, we let the (Loynes) waiting times be driven by the same realizations of the random inter-arrival and service times. We now prove (7).Since π is periodic with period n * , we first observe that π returns the same queue along subsequence (n n * + i) n∈N , for all i = 1, . . ., n * .Similarly, also the second and third component of f (n n * + i) do not change along these subsequences, though they are not known in advance because they depend on the outcome of random variable V , see (4).Thus, for all i = n * + 1, . . ., 2n * , by construction we have Pr(f and we get In (27b), we have conditioned on f (i).In (27c), we have used that r,κ (π); see [28].In (27d), we have used the definition of W (k) (π).Now, since the limit in (27d) does not depend on i, the proof is concluded by applying Lemma 1 once noted that n * is a finite positive integer. 
Proof of Theorem 2 We first observe that we can prove this theorem under some assumption on the sequence (π (k) ) k∈N .Given π (1) ∈ C p , we require that for all k: One may refer to these sequences as the 'natural' scaling of policy π (1) : in the two-tier interpretation of our policies, (28) means that the dispatcher at the first tier implements the same policy, to queue types, when k grows.These sequences will be assumed along this proof.If this theorem holds for these sequences, then it also holds for all the sequences in view of Lemma 1 and of the fact that the cardinality of C (V ) = 1 and π (prm+i) (V ) = π (pr(m−1)+i) (V ), which in some sense couples the arrival processes at queues of the (p r m + i)-th and (p r (m − 1) + i)-th systems. Without loss of generality, let us assume that (r, 1) is the queue that receives the first job.Now, Fact 1 holds true because (p r m + i) − (p r (m − 1) + i) = p r jobs must be sent to some queues of type r of the (p r m + i)-th system in the time interval [V + a (pr(m−1)+i) j,r , V + a This argument applies to all the other queues because we have considered periodic policies. We show Fact 1 in the following example, to help understanding its meaning and proof. Since (32) holds for all i = 1, . . ., lcm(p), by using Lemma 1 we obtain As a comment, we note here that (33) could be proven without Fact 1 and what has followed.However, we stress that we will need Fact 1 later anyhow, as it will play a crucial role in the proof of our main result Theorem 3 (see Lemma 3.iii). Lemma 2. Under the hypotheses of Theorem 2, In (34b), (34c) and (34e), we have used Chebyshev's inequality, the law of total variance and that the T (k) n 's are i.i.d., respectively.In (34f), we have used (33) and that a n,r,κ = p λpr , we can use the continuity of the stationary waiting time (see [12,Theorem 22]) to establish that Using (35) and that Cesaro sums converge if each addend converges, we obtain as desired. Proof of Theorem 3 Proof of (15) and ( 16).Given that C p ⊆ A p (k), ( 15) and ( 16) hold true if for all π ∈ A p (k) and lim k→∞ Let EW r,κ (x) be the mean waiting time of a D/GI/1 queue with arrival rate λx and i.i.d.service times having the same distribution of S 1,r,κ .Inequality (37) is a fairly direct application of known results: for all π ∈ A p (k), In (39b), we have used the lower bound in [23].In (39c), we have used Karamata's inequality once noticing that i) EW r,κ (x) = EW r,1 (x), ii) the majorization ( pr p , . . ., pr p ) ≺ (kq r,1 , . . ., kq r,k ) holds, and iii) the mean waiting time of a D/GI/1 queue is convex increasing in the arrival rate (see, e.g., [30,Theorem 5], [17]), which means that q r,κ EW r,κ (kq r,κ ) is convex in q r,κ . We now prove (38).As in the proof of Theorem 2, this can be done assuming that (28) holds.Thus, the sequences (28) will be assumed along this proof. The remainder of the proof basically works as follows.First, we bound the waiting times of our G/GI/1 queues through the waiting times of suitable GI/GI/1 queues.Second, we show that the sequence of such waiting times converges in distribution to W (p).Then, we show that the waiting times of such GI/GI/1 queues are non-increasing in the ≤ icx -sense along the sequences k m + i, for all i = 1, . . ., lcm(p), which allows us to conclude that the sequence is uniformly integrable; this is the point where we will use Fact 1. Finally, we use [11,Theorem 3.5,pp. 31] to conclude that also the sequence of the expected values converges to EW (p). 
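As a numerical sanity check of the convergence asserted in Theorems 2 and 3 (not part of the proof), the sketch below considers a single queue type: a Poisson stream of rate λk is dispatched round-robin to k identical FCFS queues with exponential services, so each queue sees Erlang-k inter-arrival times of mean 1/λ, and the simulated mean waiting time approaches that of the corresponding D/GI/1 queue as k grows. All distributional choices here are ours, for illustration only.

```python
# Numerical illustration: with round-robin dispatch over k identical queues, a
# tagged queue sees Erlang-k inter-arrival times with mean 1/lam, which become
# nearly deterministic as k grows, so its mean waiting time should approach that
# of a D/GI/1 queue with inter-arrival time 1/lam.
import numpy as np

def lindley_mean_wait(interarrivals, services, warmup=5_000):
    w, total, count = 0.0, 0.0, 0
    for i, (t, s) in enumerate(zip(interarrivals, services)):
        w = max(w + s - t, 0.0)          # Lindley's recursion
        if i >= warmup:
            total += w
            count += 1
    return total / count

rng = np.random.default_rng(7)
lam, mu, n = 0.8, 1.0, 50_000            # per-queue arrival rate, service rate

for k in (1, 2, 5, 20, 50):
    # Inter-arrival time at a tagged queue = sum of k dispatcher inter-arrival
    # times, i.e. Erlang-k with mean 1/lam when the dispatcher sees rate lam*k.
    ta = rng.exponential(1.0 / (lam * k), size=(n, k)).sum(axis=1)
    s = rng.exponential(1.0 / mu, n)
    print(f"k = {k:3d}: mean wait = {lindley_mean_wait(ta, s):.3f}")

# Deterministic-arrival limit: a D/M/1 queue at load 0.8 (about 1.7, versus the
# M/M/1 value of 4.0 obtained at k = 1).
s = rng.exponential(1.0 / mu, n)
print("D/M/1 limit:", round(lindley_mean_wait(np.full(n, 1.0 / lam), s), 3))
```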
Associated to each queue of type r, we define an auxiliary random variable, T r , such that Next lemma provides properties satisfied by T (k) r that will be used later.We recall that k m = m lcm(p), see (29).iii) For all i = 1, . . ., lcm(p), Proof.Proof of i).This is an immediate consequence of (33). Proof of ii).For all i = 1, . . ., lcm(p), let j * i ∈ arg min j=1,...,pr a Proof of iii).We use that X ≤ icx Y if and only if there exists an other random variable Z such that X ≤ st Z and Z ≤ cx Y [29]. Using (31) where j * i is defined above in point i).Thus, we have −T Figure 1 : Figure 1: Structure of the parallel queueing model under investigation. Problem 1 . Let p ∈ Z R + be given.Determine the optimizers and the optimal objective function value of min π∈Ap(k) Fact 1 . For m ∈ N, let k m def = m lcm(p),(29)where lcm(p) denotes the least common multiple of p 1 , . . ., p R .The subsequences (k m + i) m∈N , for all i = 1, . . ., lcm(p), play a key role in our proof of Theorem 3. Along these subsequences, next fact holds true and follows by construction of the policies in set C(k)p : it is a direct consequence of the fact that queues of the same type are visited in a round-robin manner.For m > 1, j = 1, . . ., p r and i ∈ N, a By construction, we have π (prm+i) 2 − 1 ] , for j = 1, and the number of arrivals at the dispatcher in that interval is exactly p by construction of the policies in C (k) p for any k (see subsection3.1) ,κ converges in probability for each n, also the finite dimensional distributions of the process (T (k) n,r,κ ) n∈N converge to the one of the constant process with rate λpr p .Together with the fact that ET (k) n,r,κ = lim k→∞ ET (k) Lemma 3 . Under the hypotheses of Theorem 3, the following properties hold: λpr in probability, as k → ∞. Lemma 1, the convergence in ii) holds if we can show that T all i = 1, . . ., lcm(p).Given (43), this amounts to show that a ∀i = 1, . . ., lcm(p).(44)Weprove the former by showing (the stronger statement) that a all j.Now, using that{|X − c| > 2 } ⊆ {|X − EX| > } ∪ {|EX − c| > } for a random variable X,The second term in the right-hand side of former inequality tends to zero as k → ∞ by(33).The following shows that also the first term goes to zero: ), we have used Chebyshev's inequality and that the T (k) n 's are independent.In (46b), we have used that a (k) j,r ≤ k p . (k m + i) 2 ( remains to show that −Z ≤ cx −T (km+i) r , which is equivalent to show that Z ≤ cx T holds trivially under Case 3. Now, let Case 2 hold.Noticing that both T (km+i) r and Z have Erlang distributions with the same mean, to prove (49) is enough to show that Var Z ≤ Var T k m+ i)(k m + i + lcm(p)) (k m + i + lcm(p)) 2 (51b) ≤ a (km+i) j * i ,r (k m + i) 2 = λ 2 Var T (km+i) r(51c)as desired. Table 1 : Illustrative example for the decomposition in Fact 1.Since Fact 1 holds for any m > 1, we can make the replacement m → m lcm(p) and that
8,507.8
2014-04-17T00:00:00.000
[ "Mathematics" ]
Anatomy of Higgs mass in Supersymmetric Inverse Seesaw Models We compute the one loop corrections to the CP even Higgs mass matrix in the supersymmetric inverse seesaw model to single out the different cases where the radiative corrections from the neutrino sector could become important. It is found that there could be a significant enhancement in the Higgs mass even for Dirac neutrino masses of $\mathcal{O}$(30) GeV if the left-handed sneutrino soft mass is comparable or larger than the right-handed neutrino mass. In the case where right-handed neutrino masses are significantly larger than the supersymmety breaking scale, the corrections can utmost account to an upward shift of 3 GeV. For very heavy multi TeV sneutrinos, the corrections replicate the stop corrections at 1-loop. We further show that general gauge mediation with inverse seesaw model naturally accommodates a 125 GeV Higgs with TeV scale stops. On the other hand, neutrino masses constitute one of the strongest signatures of physics beyond standard model. It is imperative that any supersymmetric extension of the standard model should also contain an explanation for non-zero neutrino masses. Among many ideas to generate tiny neutrino massse, the inverse seesaw model [20] is interesting as it is applicable at the weak scale with neutrino Yukawa couplinig of order one, and thus testable at colliders like LHC. In the present work, we revisit the consequences of the inverse seesaw model for the lightest CP even Higgs boson mass [21,22]. We find parameter regions in which the one-loop corrections to the light Higgs mass can be very significant, leading to an increase of O(10) GeV, for the neutrino Yukawa coupling larger than about 0.2. This is in the line of observations of Refs. [23,24] which explored the role of extra vector like matter at TeV scale in increasing the light Higgs mass. We then apply these corrections to phenomenological minimal supersymmetric standard models (PMSSM) and general gauge mediaed supersymmetry breaking models (GMSB) where the Higgs mass can become 125 GeV for supersymmetry breaking scale around TeV. The paper is organized as follows. In the next section, we present one loop corrections to the Higgs mass and study the various parameter regimes. In the section 3, we work out two numerical examples in (1) PMSSM (2) General gauge mediated supersymmetric inverse seesaw model. We conclude in the section 4. Appendix A contains the main formulae, whereas appendices B and C contain RGE equations and some ancillary formulae. II. ONE LOOP CORRECTIONS TO THE HIGGS MASS IN MSSM The inverse seesaw model is characterized by a small lepton number violating mass, unlike the Type-I seesaw, the right-handed neutrinos can be as light as TeV or even below, with their Yukawa couplings of order one. This is achieved by having an additional singlet field, which we denote by S. The superpotential for this model is given as where W M SSM stands for the standard MSSM superpotential, and N c and S are singlet fields carrying lepton number +1,-1 respectively. We consider one right-handed neutrino and one singlet S field in the discussion. It can be easily generalized to the case of two/three generations. The mechanism of how neutrino gets mass is well documented in the literature. We revisit it here briefly. In the basis, {ν L , N c , S}, the mass matrix, M ν , for the neutral leptons is given by Where m D = Y N H u . 
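The explicit 3×3 mass matrix referred to just above did not survive extraction. The snippet below builds what we assume to be the standard inverse-seesaw texture in the {ν_L, N^c, S} basis (m_D and M_R off-diagonal blocks, a small μ_S entry in the S-S position) and diagonalizes it numerically, exhibiting the light eigenvalue ≈ μ_S m_D²/(M_R² + m_D²) under the hierarchy μ_S ≪ m_D ≪ M_R discussed in the text. The parameter values are illustrative only.

```python
# Numerical sketch of the (assumed) standard inverse-seesaw texture:
#
#         ( 0     m_D   0    )
#  M_nu = ( m_D   0     M_R  )
#         ( 0     M_R   mu_S )
#
# The lightest eigenvalue should scale as mu_S * m_D^2 / (M_R^2 + m_D^2).
import numpy as np

m_D  = 30.0     # GeV, Dirac mass (illustrative, cf. the O(30) GeV benchmark)
M_R  = 1000.0   # GeV, right-handed neutrino mass scale (illustrative)
mu_S = 1e-7     # GeV, small lepton-number-violating mass (illustrative)

M_nu = np.array([[0.0,  m_D, 0.0],
                 [m_D,  0.0, M_R],
                 [0.0,  M_R, mu_S]])

eigvals = np.linalg.eigvalsh(M_nu)        # symmetric matrix, real eigenvalues
light = min(abs(eigvals))
print("eigenvalues (GeV):", eigvals)
print("lightest |m_nu| (eV):", light * 1e9)
print("mu_S * m_D^2 / (M_R^2 + m_D^2) (eV):",
      mu_S * m_D**2 / (M_R**2 + m_D**2) * 1e9)
```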
The eigenvalues can be derived straightforwardly. Since the inverse seesaw model is typically a low scale model, unlike the traditional seesaw mechanisms, one wonders whether it can give a large enough contribution to the light CP-even Higgs boson mass. This is more important to explore in the regions where $m_D$ can be relatively large, of order 10 GeV or more. It should be noted that the range $m_D \sim (0.2 - 0.3)\,v$ has been explored by collider searches [25,26]. There are constraints, however, on the size of $m_D$ for a given value of $M_R$ from electroweak precision tests [27]. This constraint is strictly for the electron and muon generations; for the third generation it is slightly weaker, at the level of 0.07. This requires $M_R \gtrsim 3$ TeV for $m_D$ close to the top quark mass.

To compute the corrections to the light Higgs mass from the neutrino sector, we use the one-loop effective potential methods of Coleman-Weinberg [28]. These methods have been used to derive the well-known one-loop corrections from the top/stop sector [29,30], and we extend them to the neutrino sector in the inverse seesaw model. The scalar potential in this model contains the corresponding soft terms. In the basis $\{\tilde{\nu}_L, \tilde{N}^c, \tilde{S}\}$, the mass matrix $M^2_{\tilde{\nu}}$ for the sneutrinos can be written down, with the elements marked by $*$ corresponding to the symmetric entries of the mass matrix. The eigenvalues of this mass matrix can be easily derived in the limit $\mu_S \ll m_D \ll M_R$, as required by the inverse seesaw mechanism and the electroweak precision tests; to leading order in $m_D M_R / d_2,\, m_D X_N / d_1 \ll 1$ they take a simple form. One-loop corrections to the Higgs mass matrix are then derived from the one-loop effective scalar potential in its standard Coleman-Weinberg form. In the basis $\Phi^T = (\mathrm{Re}\{H^0_d\}, \mathrm{Re}\{H^0_u\})$, the corrected CP-even Higgs mass matrix is the sum of the tree-level mass matrix $M^2_0$ and the contributions $\Delta M^2_t$ and $\Delta M^2_\nu$ from the top/stop sector and the neutrino/sneutrino sector, respectively. In the following we write down the contribution from the neutrino sector in a compact notation, where the $B_{ij}$ and $A_{ijk}$'s are given in the Appendix. While these formulae are written for a single generation of right-handed and singlet neutrinos, they can be easily generalized to three generations of right-handed and singlet neutrinos.

The neutrino contributions to the light Higgs mass, though similar to those from the top/stop sector, have a couple of distinct features: (a) there is no colour factor associated with the neutrino contributions, so they are typically lower than the top/stop contributions by a factor of three; (b) the fermionic contributions from the right-handed neutrinos can be significant, reducing the total contribution to the Higgs mass. This is highly dependent on the hierarchies between the relevant parameters, namely the soft masses and the right-handed neutrino masses. To understand the overall relevance of these contributions, we will consider a few interesting cases below. Note that in our numerical analysis, we restrict $m_D M_R / d_2$ and $m_D X_N / d_1$ to be less than 0.1. We now consider the case where $M_R$ is the largest mass scale in the theory; this limit has been considered earlier in Ref. [22]. In this case, the enhancement in the light Higgs mass is much smaller and restricted to a few GeV, because the neutrino 1-loop correction, which is negative, significantly suppresses the total contribution from the neutrino sector. This is illustrated in Fig. 3.
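As a point of reference for the size of such effects, the well-known top/stop correction obtained from the Coleman-Weinberg potential has the schematic form below; this is the standard textbook expression, quoted here only for orientation, and the neutrino/sneutrino contribution discussed above is analogous but lacks the colour factor of 3 and involves $(m_D, M_R)$ and the sneutrino masses in place of $(m_t, m_{\tilde{t}})$:

$$V_1 = \frac{1}{64\pi^2}\, \mathrm{STr}\, \mathcal{M}^4 \left( \ln\frac{\mathcal{M}^2}{Q^2} - \frac{3}{2} \right), \qquad
\Delta m_h^2\big|_{t/\tilde{t}} \simeq \frac{3\, m_t^4}{4\pi^2 v^2} \left[ \ln\frac{m_{\tilde{t}}^2}{m_t^2} + \frac{X_t^2}{m_{\tilde{t}}^2} \left( 1 - \frac{X_t^2}{12\, m_{\tilde{t}}^2} \right) \right],$$

with $X_t$ the stop mixing parameter and $Q$ the renormalization scale. Dropping the factor of 3 and replacing the top/stop quantities by the corresponding neutrino-sector ones gives the rough scaling of $\Delta M^2_\nu$.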
III. APPLICATIONS TO PMSSM AND GMSB

In the present section, we present two numerical examples as an application of the above calculation.

A. PMSSM and Inverse Seesaw

The phenomenological MSSM is a low energy parameterisation of the supersymmetry breaking soft terms in terms of 19-22 parameters (see, for example, [31]), and we study the inverse seesaw model within this framework. As we can clearly see from the figure, even if $m_{SUSY}$ is below 1 TeV, there is enough contribution from the neutrino sector to reach a 125 GeV Higgs mass. As the right-handed neutrino mass $M_R$ increases, we get closer to case 2 discussed in the previous section, where larger right-handed neutrino masses make the contribution of the right-handed neutrinos to the Higgs mass negative, thus reducing the Higgs mass. This effect is seen in the right panel of Fig. 6.

B. GMSB and Inverse Seesaw

Minimal gauge mediation models have been strongly constrained by the recent discovery of the Higgs mass of 125 GeV [5]. This is because the stop mixing parameter $X_t$ is predicted to be very small in these models. $X_t$ can be made large through renormalisation group corrections, but this would require gluino masses greater than 8 TeV. Thus, the only way these models can accommodate a light CP-even neutral Higgs boson with a mass around 125 GeV is by increasing the masses of the stops beyond 4 TeV, a range far beyond the LHC reach. One can then consider modifications, including either messenger-matter interactions, new fields or new interactions, to achieve the Higgs mass within the required ballpark. The feature of small $X_t$ also persists in general gauge mediation, which is an umbrella for all possible gauge mediation scenarios, both in the perturbative and non-perturbative regimes. A recent analysis of the Higgs mass in general gauge mediation is presented in Ref. [32]. Incorporating the inverse seesaw model in general gauge mediation could generate the Higgs mass in the right ballpark, due to the additional corrections induced by the neutrinos. We study this possibility in the present subsection. The set-up of general gauge mediation we consider is specified by boundary conditions at the messenger scale; these generate a large enough positive contribution to $m^2_N$ at the weak scale, since $m^2_{H_u}$ is negative at the weak scale from the requirement of electroweak symmetry breaking. The question then remains whether, with the above boundary conditions, it is possible to reproduce either of the conditions $m_N \gtrsim M_R$ or $m_L \gtrsim M_R$ needed to enhance the Higgs mass significantly. Assuming as before only one right-handed neutrino and one singlet, we find that it is indeed possible to generate a Higgs mass of 125 GeV. We have to choose an appropriate boundary condition for the third generation sleptons such that it is close to the $M_R$ mass; in Table I we present the corresponding parameter choices.

An interesting application of this model lies in general gauge mediation, where we have shown that implementing the inverse seesaw model can enhance the light Higgs mass to 125 GeV for stops lighter than a TeV, without resorting to any mechanism to enhance the stop mixing parameter $X_t$.

Appendix A: Appendix

Here we have collected the expressions for the $B_{ij}$ and $A_{ijk}$'s which are used in the calculation of the one-loop corrected Higgs mass, where we have suppressed the generation indices.

Appendix B: RGE equations in SISM

In the last section of the appendix we present the renormalisation group equations for some of the superpotential and soft terms relevant to the analysis of general gauge mediation.
To derive the formulae we use the standard results available in the literature [33,34], following the notation used there. From eq. (C5), it is evident that the Higgs mass receives a large correction from the sneutrino masses. As the sneutrino masses implicitly depend on $m_D$, an increase in $m_D$ increases the Higgs mass.
2,612.4
2014-05-21T00:00:00.000
[ "Physics" ]
Estimating marginal likelihoods from the posterior draws through a geometric identity Abstract This article develops a new estimator of the marginal likelihood that requires only a sample of the posterior distribution as the input from the analyst. This sample may come from any sampling scheme, such as Gibbs sampling or Metropolis–Hastings sampling. The presented approach can be implemented generically in almost any application of Bayesian modeling and significantly decreases the computational burdens associated with marginal likelihood estimation compared to existing techniques. The functionality of this method is demonstrated in the context of probit and logit regressions, on two mixtures of normals models, and also on a high-dimensional random intercept probit. Simulation results show that the simple approach presented here achieves excellent stability in low-dimensional models, and also clearly outperforms existing methods when the number of coefficients in the model increases. Introduction Bayesian model selection relies on the posterior probabilities of the H candidate models M 1 , . . . , M H conditional on the data [3,19]. Two major strategies exist for estimating the posterior probabilities p(M h |y) for each of the h = 1, . . . , H candidate models: (a) the model indicators are considered as part of the parameter space and jointly drawn during MCMC sampling together with all other unobserved quantities [17]. The drawback of this strategy is the necessity of including all candidate models in the estimation simultaneously. If the analyst then wishes to consider an additional model that was not a part of the initial candidate set, another simultaneous estimation needs to be carried out. This exacts a significant cost on the analyst in terms of computing time and effort. Strategy (b) estimates the marginal likelihood for each model which allows for easy calculation of the posterior probabilities independent from the estimation of the other candidate models [19,27]. Despite this appealing characteristic, calculating the marginal likelihood is a non-trivial integration problem, and as such it is still associated with significant effort on the part of the analyst and potential imprecision in the case of high-dimensional or multi-level models. Comparative studies of existing estimation techniques for the marginal likelihood only provide clear evidence of precision for candidate models of lesser dimensions, while Bayesian analysis frequently requires more complex models [2,12,14]. Regardless of which of these two methodological approaches is used to identify the model with the highest posterior probability, reporting the marginal likelihood of this model supports the comparability and reproducibility of the modelling results and thus is a desirable practice. This article presents a technique for estimating the marginal likelihood, which alleviates both of the drawbacks that hinder existing approaches. The method presented herein requires only a sample of the pos-terior distribution as an input and is thus implementable as a generic function allowing for its use in a variety of applications. Additionally, the approach shows significantly less sensitivity to an increase in the number of model coefficients compared to existing approaches. We We refer to the integral in the denominator in (1.3) as the non-normalized posterior integral over A and abbreviate it by κ A . 
Integrating 1 over a K-dimensional bounded set has the geometric interpretation of a generalized volume, or hypervolume, and we will refer to the integral in the numerator of (1.3) as the volume of A. From (1.3) we see that the original integration problem in (1.1) can be split up into two separate problems, namely the volume of A and the non-normalized posterior integral over A. Splitting up the estimation of the marginal likelihood into these two sub-problems reduces the computational effort significantly and leads to very stable estimates, in particular in the case of high-dimensional models. The simpler computation results from the relative ease of implementing techniques to estimate a geometric volume as opposed to techniques for estimating a high-dimensional integral. Additionally, the posterior distribution can be used as an importance density for estimating κ_A, where all drawbacks of earlier approaches exploiting the posterior distribution as importance density are overcome by limiting the region of integration to A, as defined subsequently.

The remainder of this article is organized as follows: Section 2 details the proposed estimation method. Section 3 applies the developed method to several types of models: probit and logit regression, mixtures of normals, and the random intercept probit. For each of the applications, a comparison to existing methods is presented, such as Chib's method [5], importance sampling, and bridge sampling [21]. Section 4 summarizes and concludes.

The approach

In this section we present a new estimator for the marginal likelihood by separately estimating the volume of A and the corresponding non-normalized posterior integral as defined in Section 1. We also present a method for choosing A in such a way that the quotient of these estimators yields a stable estimate of the marginal likelihood. First, we turn to the non-normalized posterior integral. A common technique for numerical integration is importance sampling, and since the posterior distribution is part of the numerator in the non-normalized posterior integral, this suggests the posterior as the importance density. Since draws from the posterior distribution are usually available as a natural output of Bayesian analysis, the importance sampling estimator of the posterior integral, κ̂_A, given in (2.1), is available almost ad hoc once A has been defined; there, θ^(l) with l = 1, ..., L refers to the posterior draws after the burn-in.

Earlier attempts at estimating the marginal likelihood by importance sampling have likewise exploited the posterior distribution as importance density, most prominently the harmonic mean estimator p̂_HM [22]. In this approach, after some reordering, one integrates both sides of Bayes' theorem over the whole prior domain, and since integrating the prior over its own domain yields 1, the harmonic mean estimator is given by
$$\hat{p}_{HM} = \left[ \frac{1}{L} \sum_{l=1}^{L} \frac{1}{p(y|\theta^{(l)})} \right]^{-1}.$$
It is well known that p̂_HM suffers from pseudo-bias due to the use of an unrestricted prior domain as the region of integration [20], and from significant instability [12]. To be valid, the importance density must have its support defined across the entire region of integration. The posterior distribution, however, generally does not have such wide support in practice, and thus is an improper choice of importance density in this respect. The instability stems from the dominating effect that only a small number of draws from the MCMC sampler has on the sum in equation (2.2).
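For concreteness, the geometric identity and the restricted importance-sampling estimator that the text describes can be written as follows; since the displayed equations (1.3) and (2.1) themselves are not reproduced above, this rendering is a reconstruction from the surrounding description rather than the authors' exact notation:

$$p(y) \;=\; \frac{\int_A 1\, d\theta}{\int_A \dfrac{p(\theta|y)}{p(y|\theta)\, p(\theta)}\, d\theta} \;=\; \frac{V_A}{\kappa_A}, \qquad
\hat{\kappa}_A \;=\; \frac{1}{L} \sum_{l=1}^{L} \frac{\mathbb{1}\{\theta^{(l)} \in A\}}{p(y|\theta^{(l)})\, p(\theta^{(l)})},$$

which holds because $\int_A p(\theta|y)/p^*(\theta|y)\, d\theta = V_A / p(y)$ for any set $A$ inside the posterior support, with $p^*(\theta|y) = p(y|\theta)\,p(\theta)$ the non-normalized posterior; the marginal likelihood estimator is then $\hat{p}_A = \hat{V}_A / \hat{\kappa}_A$.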
Some of the L posterior draws usually result in low values of p(y|θ (l) ), and since the likelihood enters the estimator reciprocal, these small values dominatep HM . However, shortcomings of the harmonic mean estimator guide the way to a definition of A allowing a stable estimation of the marginal likelihood from identity (1.3). Firstly, to ensure the posterior distribution is a proper choice for the importance density, as required in our approach, the region of integration A must have full support of the posterior distribution. Secondly, to avoid instability, the region of integration A may only contain points for which the sum in (2.1) is stable independently of any specific run of the MCMC sampler. To address both of these requirements and at the same time allow for a simple estimation of the volume and the non-normalized posterior integral we define A as the intersection of two sets A 1 and A 2 . Set A 1 is defined by a threshold value ρ and only those points θ lie in A 1 whose non-normalized posterior p * (θ|y) exceeds this threshold, such that Considering the series p * ,(1,...,L) = p * (θ (1) |y), . . . , p * (θ (L) |y), a natural way of determining the threshold ρ is to ensure that the lowest values of p * ,(1,...,L) are not destabilizing the sum in (2.1) by setting ρ as a quantile of p * ,(1,...,L) . We define ρ as the median of p * ,(1,...,L) because of its robustness even in models of high dimension, and because the posterior distribution has sufficient support to serve as importance density wherever the non-normalized posterior exceeds its median. Thereby, an almost perfectly stable estimation of (2.1) is ensured by excluding all values of p * ,(1,...,L) from the estimation ofκ A that might destabilize the sum in (2.1), and a graphical demonstration thereof is given subsequently after the derivation of the final set A. However, usage of A 1 as A from (1.3) does not allow easy estimation of its volume and would constitute an integration problem with comparable burdens as the original formulation of the marginal likelihood in (1.1). To facilitate easy estimation of the volume of A a second set A 2 is defined, in such a way that the volume of the intersection of A 1 and A 2 can be estimated by means of statistical standard techniques only. As long as A 1 and A 2 have the same dimension K and an intersection A, the volume of this intersection V A can be written as V A = πV A 2 , where π refers to the ratio of points lying in A 2 that also lie in A 1 , and V A 2 is the volume of A 2 . Hence, for an easy estimation of V A we define A 2 in such a way that its volume V A 2 can be calculated analytically and that drawing uniformly from within it is efficient and feasible. Then π can simply be estimated by drawing K-dimensional vectors θ (r) uniformly from A 2 such that where I( ⋅ ) refers to the indicator function, and R is the number of random draws from within A 2 . Notice that all of the above is valid for unimodal as well as for multimodal distributions. However, henceforth we derive the algorithm for estimating the marginal likelihood for unimodal distributions, while a minor change which will allow for the estimation of p(M|y) for multimodal distributions is presented in Section 3.2 and applied to a mixture of normal distributions. Even though other definitions of A 2 are possible, we choose a K-dimensional ellipsoid as A 2 . 
This choice shows outstanding efficiency of the resulting estimator, and to dispel doubts that the choice of an ellipsoid might limit the applicability of the approach, the applications in Section 3 focus on posterior distributions with non-elliptical contours. The set of points θ lying in A_2 is thus defined by an ellipsoid condition, where C is a positive definite matrix of dimension K × K with its eigenvectors defining the principal axes of the ellipsoid, and θ* is a point with support of the posterior distribution; we define θ* as the posterior mode to ensure substantial overlap between A_2 and A_1. The last step in defining A_2 is thus choosing C. Consider the matrix T = (θ^(1), ..., θ^(L)) and its covariance matrix D = cov(T); we define C in terms of D and a scalar α with domain ℝ_+, which is employed as a tuning parameter in the presented approach. In all applications in Section 3, α was set in such a way that the resulting intersection of A_1 and A_2 contains about 49 % of the L posterior draws θ^(l). We include in the Appendix a trivial binary search method for finding the value of α. The choice of setting α to contain 49 % of the posterior draws can be understood intuitively: since ρ is defined as the median of p*,(1,...,L), set A_1 contains exactly (for an even L) 50 % of the posterior draws, and since A is the intersection of A_1 and A_2, it cannot contain more than these 50 %. While the majority of points θ^(l) for which p*(θ^(l)|y) > ρ have a comparably low distance to the posterior mode θ*, in high-dimensional models with very non-elliptical contours some points far away from the mode can still fulfill p*(θ^(l)|y) > ρ. If α is chosen to include all 50 % of the posterior draws fulfilling this condition, the resulting ellipsoid from (2.3) might have much mass in its outer layers, where only few points exceed the threshold ρ, with potentially negative effects on the efficiency of estimating π̂. Setting α to include the 49 % of draws with the lowest distance to the mode increases the efficiency of the estimator π̂, while no evidence of a negative impact on the stability of κ̂_A is found.

Figure 1 compares the values of the non-normalized posterior p*(θ^(l)|y) for all l = 1, ..., L draws of the posterior distribution to those values of p*(θ^(l)|y) for which θ^(l) ∈ A, for a low- and a high-dimensional model. The almost completely flat lower boundary of the non-normalized posterior values under A allows for an extraordinarily stable estimation of the posterior integral in (2.1), independent of the dimension of the respective model.

Algorithm I for the estimation of the marginal likelihood is thus given by:
(1) Run an MCMC sampler to obtain L posterior draws θ^(l) after the burn-in, calculate the series of non-normalized posterior density values p*,(1,...,L), and set ρ to its median, θ* to the posterior mode, and D to the covariance matrix of the posterior draws.
(2) Determine the tuning parameter α (e.g. by the binary search given in the Appendix), which fixes the ellipsoid A_2.
(3) Draw R vectors θ^(r) uniformly from within A_2, count the number r̄ of draws for which p*(θ^(r)|y) > ρ, and set π̂ = r̄/R.
(4) Estimate the volume of A as V̂_A = π̂ V_{A_2}, and obtain the estimator for the non-normalized posterior integral κ̂_A from (2.1).
(5) Calculate the final estimator of the marginal likelihood as p̂_A = V̂_A / κ̂_A.

An algorithm for efficiently drawing uniformly from within a hyperellipsoid is found in the Appendix and requires minimal computing time even for large values of R. The volume V_{A_2} is calculated rather than estimated, and the Appendix shows a recursive algorithm returning log(V_{A_2}) with minimal computing time even for very high K.
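A compact sketch of Algorithm I in Python is given below. It assumes the reconstructed identity p̂_A = V̂_A/κ̂_A from above and an ellipsoid of the form {θ : (θ − θ*)ᵀ(αD)⁻¹(θ − θ*) ≤ 1}; the function names (log_post_unnorm, sample_in_ellipsoid, log_volume_ellipsoid) and this parameterization of C are illustrative assumptions, not the authors' code.

import numpy as np
from scipy.special import gammaln, logsumexp

def log_volume_ellipsoid(M):
    # log volume of {x : x' M^{-1} x <= 1} = log V_unitball(K) + 0.5 * log det(M)
    K = M.shape[0]
    log_unit_ball = (K / 2) * np.log(np.pi) - gammaln(K / 2 + 1)
    _, logdet = np.linalg.slogdet(M)
    return log_unit_ball + 0.5 * logdet

def sample_in_ellipsoid(center, M, R, rng):
    # R uniform draws from {theta : (theta - center)' M^{-1} (theta - center) <= 1}
    K = len(center)
    z = rng.standard_normal((R, K))
    z /= np.linalg.norm(z, axis=1, keepdims=True)      # uniform directions on the unit sphere
    radii = rng.uniform(size=R) ** (1.0 / K)           # radii giving uniformity in the unit ball
    return center + (radii[:, None] * z) @ np.linalg.cholesky(M).T

def estimate_log_marglik(draws, log_post_unnorm, alpha, R=100_000, seed=0):
    # draws: (L, K) posterior sample; log_post_unnorm(theta) = log p(y|theta) + log p(theta)
    rng = np.random.default_rng(seed)
    logp = np.array([log_post_unnorm(th) for th in draws])
    log_rho = np.median(logp)                           # threshold defining A_1
    theta_star = draws[np.argmax(logp)]                 # proxy for the posterior mode
    M = alpha * np.cov(draws, rowvar=False)             # assumed ellipsoid matrix (alpha * D)
    # Volume of A: fraction of uniform draws from A_2 that also lie in A_1, times V_{A_2}
    u = sample_in_ellipsoid(theta_star, M, R, rng)
    pi_hat = np.mean([log_post_unnorm(th) > log_rho for th in u])
    log_VA = np.log(pi_hat) + log_volume_ellipsoid(M)
    # kappa_hat: average of 1/p*(theta|y) over the posterior draws inside A (log-sum-exp for stability)
    inside = logp > log_rho
    log_kappa = logsumexp(-logp[inside]) - np.log(len(draws))
    return log_VA - log_kappa                           # log of p_hat_A = V_hat_A / kappa_hat_A

In practice α would be tuned, for instance by the binary search mentioned in the text, so that roughly 49 % of the posterior draws fall inside A.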
Thus, the calculation of π̂ and V_{A_2}, and consequently V̂_A, is achieved with low computational effort and high precision.

Applications

In this section the developed approach is applied to several important classes of models. The estimator is compared to those existing methods that have consistently shown excellent performance within the respective class of models in earlier comparative studies; these are the approach of Chib [5], along with importance sampling and bridge sampling [21]. Chib's method exploits the basic marginal likelihood equation $p(y) = p(y|\theta^*)\,p(\theta^*)/p(\theta^*|y)$, where θ* is a high density point for which the posterior mode is used, and will be referred to throughout this section. An estimate of the marginal likelihood thus requires an estimate of the posterior density at θ*, and we follow the estimation technique for the posterior ordinate p̂(θ*|y) as outlined in [5] throughout this section. Chib's method does not require any input other than draws from the Gibbs sampler; however, for each variable block within the Gibbs sampler one needs to re-run a reduced sampler. In particular for hierarchical and multi-level models, this incurs a certain burden on the analyst. Furthermore, Chib's estimator as outlined above is applicable in a Gibbs setting only, while a generalization of this approach also allows estimation of the marginal likelihood based on output from the Metropolis-Hastings algorithm [6].

Importance sampling and bridge sampling both require as an input an importance density q(θ) which approximates the posterior distribution; the degree to which this approximation successfully mimics the posterior distribution has a substantial impact on the efficiency of the estimators. For the importance sampling estimator, m = 1, ..., M draws θ^(m) are obtained from q(θ), and an estimator is then given by averaging the non-normalized posterior values weighted by the importance density ordinates, $\hat{p}_{IS} = \frac{1}{M}\sum_{m=1}^{M} p^*(\theta^{(m)}|y)/q(\theta^{(m)})$. If q(θ) does not have fatter tails than the posterior density, the importance sampling estimator can become unstable, resulting in high standard errors. The bridge sampling technique somewhat alleviates this problem by weakening the demands on the importance density with respect to its tail behavior [12]; the bridge sampling estimator of [21] is obtained as the limit of a recurrence that iteratively updates p̂_BS^(t) from the previous value p̂_BS^(t−1) using both the importance draws and a subset of the posterior draws.

The problem of finding an adequate importance density is not trivial; in particular for high-dimensional densities, tuning of the importance density can become very time consuming. An approach to define the importance density q(θ) based on output from the Gibbs sampler is found in [8], where θ is split into D blocks (θ_1, ..., θ_D) and, based on the conditional distributions of these parameter blocks, a mixture importance density is constructed as a by-product of the Gibbs sampler, as in (3.1). Each mixture component is the product of the conditional densities from which θ was drawn in the s-th iteration of the Gibbs sampler, and l_1, ..., l_S refer to a series of S Gibbs iterations chosen by the analyst; see [9,11] for details. Then q(θ) converges to the posterior density when S → ∞. Throughout this section an importance density as defined in (3.1) is used, unless stated otherwise. Besides comparing point estimates, the expected relative mean-squared error (RE²) gives evidence on the stability of the estimators, and the respective estimation techniques for RE² are summarized in [12].
However, recent research investigated the accuracy of these estimation techniques [2] and calculated numerical standard errors of the outlined estimators for 500 independent runs of the MCMC sampler. The analysis shows that there is significant risk of overestimating or underestimating the standard errors under given techniques when these require as an input the specific lag at which the autocorrelation of the MCMC chain tapers off (which is the case for all estimators here except the importance sampling estimatorp IS ). However, since this paper aims to complete a pure comparison of the presented estimator to the existing ones, we abstract from this discussion and estimate RE 2 without relying on a measure of the autocorrelation by the bootstrap method, even though this increases computational effort significantly (see e.g. [18] for details about the bootstrap method). For each bootstrap replication ofp A , we draw with replacement L out of the L posterior draws and carry out the estimation ofp A anew. The same applies for bootstrap replications ofp CH when the model is estimated by a one-block Gibbs sampler. For models requiring multi-block samplers only the high density point θ * and the conditional density of the first variable block can be calculated from the bootstrap sample, while for each of the remaining variable blocks the respectively reduced Gibbs sampler is required to be carried out anew to obtain the ordinate of the particular conditional density. Similarly, M out of the M importance draws are drawn to obtain a bootstrap replication ofp IS , and these importance draws are combined with M out of the M selected posterior draws forp BS . Reported standard errors are calculated from 200 bootstrap replications each. Defining the quantities L, M, S and R to yield comparable computational time with respect to the calculation of the different estimators, and thus allow for a fair comparison, would require tuning these quantities in each application. However, we set these quantities in such way thatp A andp CH show about the same computation time, while even under low values of S importance sampling and bridge sampling require significantly more time. In all subsequent applications we obtain L = 100,000 draws from the Gibbs sampler as input forp A andp CH . Additionally, we set R = 100,000 to ensure precise estimation ofπ considering its standard error is √π (1−π ) R . However, to ensure precise estimation even for high-dimensional distributions with very non-elliptical contours and potentially small values ofπ , we define as a generic rule the lower bound of draws to ber = 1,000. M and S are set in the respective application. Probit and logit The most exploited data with respect to comparing estimation techniques of the marginal likelihood are the prostatic nodal involvement data, see e.g. [5,7,14]. These data contain N = 53 patients with cancer of the prostate and the binary response indicates whether the cancer has spread to the surrounding lymph nodes. Potential explanatory variables are the patient's age at time of diagnosis (x 1 ), the level of serum acid phosphate (x 2 ), the binary result of an X-ray examination (x 3 ), the size of the tumor (x 4 ), with 0 for small and 1 for large, and the pathological grade of the tumor (x 5 ), with 0 if less serious and 1 if more severe. All of the above cited articles agree on the addition of an intercept for modeling the binary outcome. 
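The bootstrap standard-error computation described earlier in this section can be sketched as follows; the estimator argument is any function that maps a resampled posterior sample to a log marginal likelihood estimate, for instance a closure around the Algorithm I sketch given above, and the 200 replications follow the number used in the text.

import numpy as np

def bootstrap_se(draws, estimator, n_boot=200, seed=1):
    # draws: (L, K) posterior sample; returns the bootstrap standard error of the estimator
    rng = np.random.default_rng(seed)
    L = draws.shape[0]
    reps = []
    for _ in range(n_boot):
        idx = rng.integers(0, L, size=L)      # resample L out of the L draws with replacement
        reps.append(estimator(draws[idx]))    # recompute the estimate on the bootstrap sample
    return float(np.std(reps, ddof=1))

For multi-block samplers, as noted in the text, some conditional ordinates of Chib's estimator additionally have to be recomputed by re-running the reduced samplers on each replication, which this sketch does not cover.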
Each of the aforementioned three instances in the literature where these data are exploited for marginal likelihood estimation assumes a different model specification and prior setting. We use a probit link [5,7] as well as a logit link function [14], given by $P(y_i = 1) = \Phi(x_i \beta)$ for the probit and $P(y_i = 1) = \exp(x_i \beta)/(1 + \exp(x_i \beta))$ for the logit, where x_i is a row vector containing the explanatory variables of respondent i, the column vector β holds the corresponding coefficients, and Φ(⋅) refers to the cumulative distribution function of the standard normal distribution. We use the prior setting of [7] for the probit model specification: a normal prior with mean 0 and standard deviation 10 for the intercept, and normal priors with mean 0.75 and a standard deviation of 5 for the other explanatory variables. To achieve comparability between the probit and logit models, in the logit specification all parameters of the normal priors are scaled by 1.6 [14]. The probit models are estimated as outlined in [1], while the logit models are estimated by auxiliary mixture sampling as explained in [13] and [15]. Since the presented approach exploits the posterior distribution as importance density in a similar way as the harmonic mean estimator does, we also report p̂_HM in this section to demonstrate the substantial impact of the proposed limitation of the region of integration to the set A. We set the number of draws from the importance density to M = 20,000 for p̂_IS, and combine these draws with M = 20,000 randomly selected posterior draws for the calculation of p̂_BS. The mixture importance density is constructed from S = 100 components as outlined in (3.1). Based on three independent runs of the MCMC sampler for each model, Table 1 displays the outcomes of the different estimators of the marginal likelihood on the logarithmic scale.

Firstly, for models of relatively low dimension, i.e. M_1 - M_7 and M_10 - M_16, we see that p̂_IS, p̂_BS, and p̂_CH have the smallest standard errors, while the accuracy of p̂_A clearly allows for its application without increasing the risk of false model choice to a relevant degree. Secondly, we see that the advantage of the existing methods vanishes when the dimension of the investigated model increases. In particular, for M_9 and M_18 we see that p̂_A outperforms the other estimators except the bridge sampling estimator p̂_BS. Thus the estimator p̂_A is more robust to an increase in the complexity of the model than importance sampling and Chib's method in this example. This statement is also supported by comparing the standard errors for the model of lowest dimension, M_1, and the model of highest dimension, M_18: they have increased by a factor of 5 or more for all other estimators, while the standard errors of p̂_A have not responded to the increase of dimensionality within each group of models with the same link function.

Table 1 (header): Variables | Model | ln(p̂_HM) | ln(p̂_IS) | ln(p̂_BS) | ln(p̂_CH) | ln(p̂_A).

Mixture of normals

A general formulation of a mixture distribution is $p(y_i|\theta) = \sum_{s=1}^{S} \eta_s\, f(y_i|\theta_s)$. Denoting by I_i the indicator holding the assignment of observation y_i to a mixture component, where this indicator takes labels s = 1, ..., S, the model likelihood and the prior of the series I^N = (I_1, ..., I_N) are invariant to re-labelling. When the prior of the parameters θ_s is invariant too, the posterior distribution of a mixture model is multimodal with respect to the S! permutations of the indicator labels. This is a challenge for marginal likelihood estimation of mixture models [11].
Parameter estimation of mixture models through MCMC sampling frequently suffers from the un-identifiability of the indicator labels, which can result in undesired label switching. However, several techniques have been proposed to address the label switching problem [10,26]. We first consider a posterior sample θ L,1 = (θ (1) , . . . , θ (L) ) in which all draws belong to one permutation of the indicator labels. Since the marginal likelihood is the normalizing constant of the whole unconstrained posterior distribution, not only of the subspace belonging to one permutation, calculatingp A from only θ L,1 would result in an incomplete estimate of the marginal likelihood. Thus we define a second sample of the posterior distribution θ L,S! = (θ (1) , . . . , θ (L) ) in which the draws represent a balanced sample from all of the S! permutations of the labels. In this section the random permutation sampler of [10] is applied to obtain θ L,S! , while other approaches exploring all modes of the posterior distribution could be used as well (see e.g. [4]). A derivative of Algorithm I allowing the estimation of the marginal likelihood for multimodal mixture models is then found by first estimating ρ, θ * , D, A, and thusπ andV A , from θ L,1 . Secondly based on the thereby defined A integralκ A is subsequently estimated from θ L,S! . Thusp A can be implemented for mixture models as generic as for unimodal distributions, however, two runs of the MCMC sampler are required. Nevertheless, since the construction of the importance density from (3.1) requires output from a Gibbs sampler run exploring all modes of the posterior as well, while usually a sample θ L,1 is additionally needed for parameter estimation, the requirement of two posterior samples does not mean additional burden compared to importance density based techniques in practice. Here we demonstrate the function of the proposed estimator by applying it to the simulation example used in [10] and [16]. Two examples of a mixture of two normal distributions are defined. The first example is defined in such a way that the two states are easily separated by the Gibbs sampler and set η 1 = η 2 = 0.5, μ 1 = 1, μ 2 = 2, σ 2 1 = 0.01, and σ 2 2 = 0.05. The second example is defined with the same coefficients as the first example, but with σ 2 1 = 0.1 and σ 2 2 = 0.15, resulting in significant overlap of the distributions. We simulate 1,000 observations from each of the two mixture distributions, and prepare estimation by defining natural conjugate priors for μ s and σ 2 s . An inverse gamma distribution is selected as prior for σ 2 s where the scale parameter and the shape parameter equal 3, imposing a vague prior considering the magnitudes of the true parameters. Then the prior of μ s is a normal distribution with mean 0 and variance 100σ 2 s . Vector (η 1 , η 2 ) follows a priori a Dirichlet distribution with parameters a η 1 = a η 2 = 5. The Gibbs sampler as outlined in [23] is then applied to obtain the posterior sample θ L,1 , and combined with the random permutation step to obtain θ L,S! . The approach presented here is compared to importance sampling and bridge sampling where the importance density in (3.1) is constructed from S = 100 components, and M is again set to 20,000. As discussed in the literature (see e.g. [12]) Chib's method does not perform comparably well when applied to multimodal models as it does for unimodal models, and we omit reporting the outcomes of this estimator here. 
The left plot in Figure 2 displays a scatter plot of σ 2 1 and σ 2 2 from all l = 1, . . . , L draws of the series θ L,S! of the MCMC sampler (black, in the background) overlaid by those draws for which θ (l) ∈ A (in the front). We see that the posterior distribution of σ 2 1 and σ 2 2 under the two permutations overlap to a significant extent and that set A is centered around one of the two permutations of the component labels. very low standard errors and outperform the estimator presented in this article. However, in absolute terms the standard errors ofp A are still very small and would usually have no practical impact on model choice. For the parameter specification with overlap of the distributions of σ 2 1 and σ 2 2 bridge sampling again has the smallest standard errors, while the magnitudes have increased noticeably. Additionally, we see that the standard errors of importance sampling and the method presented here are on the same scale. We interpret the results from this simulation study as evidence thatp A provides unbiased and stable estimates when applied to multimodal models. Considering the significant challenges associated with marginal likelihood estimation of such mixture models, and the limited applicability of Chib's estimator to them, the presented approach can significantly contribute to making the task of model selection less cumbersome for this important class of models. Random intercept probit In this subsection the sensitivity of the discussed estimators of the marginal likelihood to an increase of the dimensionality of the candidate models is explored. This paper provides for the first time a comparison of the discussed estimation techniques for a high-dimensional unit-level model, and discloses the shortcomings of existing approaches. As one instance of a comparative study exploring the existing techniques with respect to a unit-level model, Frühwirth-Schnatter and Wagner [14] fit a random intercept logit model, with up to K = 25 coefficients, to agricultural data and compare the estimates from Chib's method, as well as from importance and bridge sampling under the mixture importance density from (3.1). In this section we increase the number of model dimensions in five applications up to K = 142 to demonstrate the extraordinary stability of the presented estimator in comparison to the existing approaches. The applications of this section exploit data on reading proficiency in US schools. Data stem from the "Program for International Student Assessment" (PISA), and here data from the year 2009 is used as provided in [25]. Reading proficiency is measured in terms of a continuous score, however, since limited dependent variable models have shown to be a bigger challenge with respect to marginal likelihood estimation, the score is censored in this example. Consequently, the observed dependent variable y is defined according to whether a student exceeded the median score of all students (1) or fell behind it (0). As explanatory variables we use a student's grade (x 1 ; a number between 9 and 11), gender (x 2 ; 1 for female, 0 for male), whether a student's family has lived in the USA for at least three generations (x 3 ; 1 for yes, 0 for no), the index of economic, social and cultural status (x 4 ; a continuous value centered at 0), and an indicator whether a student has stated they enjoy reading (x 5 ; a continuous value centered at 0). 
Skrondal and Rabe-Hesketh [24] estimate an older version of the PISA data and add a random intercept for each school to capture overdispersion. In this application we use a probit link function, so that the model is given by $P(y_{it} = 1) = \Phi(x_i \beta + \gamma_t)$ with school-specific random intercepts $\gamma_t \sim N(\mu, \sigma^2)$, where t = 1, ..., T refers to the school of student i, whose characteristics are held by the row vector x_i. We define a normal prior for β such that β ∼ N(0, 100 × I), where I refers to the identity matrix with five columns. We specify a natural conjugate prior for μ and σ². The variance of the random intercept is estimated to be 0.28 in [24], and we specify an inverted Gamma distribution as its prior, σ² ∼ G⁻¹(2, 0.5), which is vague enough for this application. The prior of μ is then a normal distribution with mean 0 and variance 100σ². The Gibbs sampler is outlined in the Appendix.

This model is estimated for five different data sets. The first data set contains all 21 public schools in the PISA sample from the Northeast of the US, the second contains the 28 schools of the US region West, the third contains the 36 schools from the Midwest, and the fourth contains the 50 schools from the remaining region, South. For the fifth data set we combine these regions and estimate reading proficiency for all 135 public schools in the sample. Since the model estimates an individual intercept for each school, the total number of coefficients K is the number of schools in the respective data set plus 7.

The estimation techniques explored here are the method presented in this article, Chib's method, and importance and bridge sampling. As discussed earlier, the mixture importance distribution from equation (3.1) approaches the posterior distribution for S → ∞, and S = 100 performed well for the low-dimensional models of the preceding sections. However, in a hierarchical model each of the S components of the mixture importance density, being a product of full conditional densities, becomes a very narrow and almost spike-shaped distribution conditional on a draw of the latent variables Z = (z_11, ..., z_NT)'. As a result the final importance density has hardly any mass in between its S spikes and clearly fails the requirement of having fatter tails than its target distribution, with a negative impact on the stability of p̂_IS. The hedgehog-like shape of the importance density may also induce problems for bridge sampling. When constructing the importance density from (3.1) in an unsupervised manner, it is likely that some of the posterior draws θ^(1), ..., θ^(M_2) intended for use during bridge sampling have originally been drawn from one of the S components of the mixture importance density. If a draw θ^(m_2) has been drawn from one of the S components of the importance density (i.e. from the center region of one of the spikes), then the magnitude of q(θ^(m_2)) is exceedingly high compared to an importance density constructed from components from which none of the posterior draws have been drawn. As a result, the sum over the posterior draws in the denominator of the bridge sampling recurrence, whose terms have q(θ^(m_2)) in the numerator and a weighted combination of q(θ^(m_2)) and p*(θ^(m_2)|y)/p̂_BS^(t−1) in the denominator, is dominated by these high values of q(θ^(m_2)), with negative effects on the stability of the estimator. To avoid this source of instability, we split the posterior sample and construct the importance density from the first half of the L posterior draws, θ^(1), ..., θ^(L/2), and select the M_2 posterior draws for use during bridge sampling from the second half θ^(L/2), . . .
, θ (L) . Following the preceding discussion about possible drawbacks of the mixture importance density when applied to unit-level models, we compare it to another approach for the unsupervised construction of an importance density. Likewise as for (3.1) θ is split into D blocks (θ 1 , . . . , θ D ), and the marginal distribution of each block q(θ d ) is estimated from the posterior draws. The product of the estimated marginal densities is then used as importance density (see e.g. [7]): Here, the marginal distribution of μ is estimated by a normal distribution with its mean being the mean of the posterior draws (μ (1) , . . . , μ (L) ), and its variance as the variance of the posterior draws. The marginal distribution of σ 2 is approximated by a log-normal distribution with its mean as the mean of the log-posterior draws (log(σ 2,(1) ), . . . , log(σ 2,(L) )), and its variance as the variance of this series. Likewise, q(θ d ) of β and of the random intercepts γ 1 , . . . , γ T are approximated by normal distributions with their means and their variances estimated from the respective posterior draws. As this construction of the importance density decreases its ability of drawing from high-density regions of the posterior distribution compared to the mixture importance density approach, we set M = 1,000,000. For the mixture importance density, we set S = 1,000 and M = 50,000. Considering these demanding settings for both importance densities, importance sampling and bridge sampling require long computational runtime, while calculation ofp A is much faster. Table 3 displays the results of the comparative study. While the magnitudes of the estimates cannot be compared between the different values of K as these are related to different data sets, the standard errors allow conclusions about the sensitivity of the respective estimators to an increase of the number of coefficients of the underlying model. Firstly, we see that constructing the importance density as in (3.3) can be ruled out for this model and we skip reporting estimates beyond K = 28. Secondly, we see from the high standard errors that, even for this relatively low value of K, importance sampling applying the mixture importance density produces less reliable estimates than the remaining three techniques, and this reliability worsens with an increase in K. However, up to K = 57 all three remaining techniques provide estimates with relatively low standard errors, whilep A is by far the most stable for all values of K. Thirdly, in practice one would usually wish to analyze all public schools in the US in the same model, which is marked in Table 3 through K = 142. For this model we see that all existing techniques are not able to produce stable estimates, while in contrast p A provides outstandingly stable estimates with low standard errors showing only very limited sensitivity to the increase in the number of model coefficients from K = 28 to K = 142. The outstanding stability ofp A is a result of directly addressing the magnitude of the non-normalized posterior as the criterion to qualify for A. For importance density based methods the importance density has to approximate the posterior distribution in all K dimensions, whilep A only relies on the univariate aggregate p * (θ|y). This is a significant advantage with respect to the stability of the estimates. 
The right plot in Figure 2 compares all posterior draws of the coefficients μ and σ² for the data with K = 28 to those draws lying in A, and demonstrates that the posterior distribution has very non-elliptical contours, which obviously do not hamper the stability of p̂_A.

Table 3: PISA data; logarithm of the different estimators for five different data sets. Importance sampling and bridge sampling using an importance density constructed as the product of estimated marginal densities as in (3.3) are referenced by p̂¹_IS and p̂¹_BS, while p̂²_IS and p̂²_BS use a mixture importance density constructed as in (3.1); p̂_CH refers to Chib's method, and the estimator proposed in this paper is referenced as p̂_A. Relevant standard errors are given in parentheses; results from three independent MCMC runs per data set are reported.

Conclusion and discussion

This article develops a new approach to marginal likelihood estimation and compares it to existing estimation techniques. This comparison reveals two major advantages of the presented approach. Firstly, it requires only a sample of posterior draws as an input from the analyst, and these draws can come from any sampling scheme, such as Gibbs sampling or Metropolis-Hastings sampling. Furthermore, and in contrast to some widely used earlier approaches, only draws from one run of the MCMC sampler are required, except for multi-modal posterior distributions, where a second run is required. Hence, the presented approach can be implemented generically for many important classes of models. This property substantially decreases the burden for the analyst associated with computing the marginal likelihood. The second advantage of the presented approach is improved stability when the number of model coefficients increases, as illustrated in Section 3.3. Herein, the high level of stability, in contrast to the existing approaches, was shown up to a 142-dimensional random intercept probit. While this result can encourage analysts to report the marginal likelihood as part of Bayesian analyses more frequently, no general rule can be derived as to the number of dimensions for which the method will provide stable estimates. Therefore, assessments of the relationship between the number of dimensions, the type of the analyzed model, and the precision of the resulting estimates are suggested for future research. Comparable uncertainty remains regarding the best choice of the tuning parameters for the presented approach, namely ρ and α. However, despite the very different model types on which the presented approach was tested in Section 3, all applications utilized the same values for these parameters. The high stability of the estimates in these applications suggests that the precision of the method may not be sensitive to the choice of the tuning parameters. Nevertheless, future research should provide a better understanding of how and to what extent an optimized choice of these parameters could further improve the precision of the method. As a final remark, and as mentioned in Section 2, we note that in principle the choice of the geometric object for A_2, which was chosen to be a K-dimensional ellipsoid in this article, is a tuning parameter. The availability of methods for drawing uniformly from within an ellipsoid, and for providing the log of its volume even when K is high, makes it a convenient and efficient choice. Nevertheless, it cannot be ruled out that a different definition of A_2 could further improve the properties of the presented approach.
A Subroutines for Algorithm I

A.1 Log of volume of a K-dimensional ellipsoid

Algorithm 1 returns the log of the volume of a K-dimensional ellipsoid.

Algorithm 1. Log-volume of a K-dimensional ellipsoid.
1: procedure LogVolume(e(1), . . ., e(K))    # e(k): eigenvalues defining the main axes
2:   v(0) = log(1); v(1) = log(2)
3:   # Calculates the log-volumes of i-dimensional spheres of unit radius
4:   for i = 2 to K do
5:     v(i) = log(2π/i) + v(i−2)
6:   end for
7:   # Scales the volume of the K-dimensional unit sphere with respect to the main axes
8:   log(V_{A_2}) = v(K) + Σ_{k=1..K} log(√e(k))
9: end procedure

A.2 Drawing uniformly from within a K-dimensional ellipsoid

Algorithm 2 returns one point θ^(r) uniformly drawn from within a K-dimensional ellipsoid.

Algorithm 2. Drawing uniformly from within a K-dimensional ellipsoid.
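Section 2 also refers to a trivial binary search for the tuning parameter α; a Python sketch of such a search is given below. It assumes the ellipsoid parameterization {θ : (θ − θ*)ᵀ(αD)⁻¹(θ − θ*) ≤ 1} used in the earlier sketch, which is one natural reading of the definition of A_2 but is stated here as an assumption rather than the authors' exact formula.

import numpy as np

def tune_alpha(draws, logp, theta_star, D, target=0.49, lo=1e-6, hi=1e6, iters=60):
    # Binary search for alpha so that the intersection of A_1 (p* above its median)
    # and the ellipsoid A_2 contains roughly target * 100% of the posterior draws.
    in_A1 = logp > np.median(logp)
    diff = draws - theta_star
    d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(D), diff)   # (theta - theta*)' D^{-1} (theta - theta*)
    for _ in range(iters):
        alpha = 0.5 * (lo + hi)
        frac = np.mean(in_A1 & (d2 <= alpha))    # fraction of draws lying in A_1 and in A_2
        if frac < target:
            lo = alpha                            # ellipsoid too small: enlarge it
        else:
            hi = alpha                            # ellipsoid large enough: shrink it
    return 0.5 * (lo + hi)

Here draws is the (L, K) posterior sample, logp the corresponding non-normalized log-posterior values, theta_star the posterior mode and D the covariance matrix of the draws; the returned α can be passed directly to the Algorithm I sketch given earlier.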
10,157
2020-08-05T00:00:00.000
[ "Computer Science", "Mathematics" ]
ANALYSIS OF DYNAMIC PARAMETERS OF THE SYSTEM OF RASTER FORMATION AND CONTROL

The presented research work analyzes the sensing system whose main aim is raster formation and control of this process using optical measuring equipment and high-precision angle encoders. The optical measuring equipment is used for raster position detection, while the angle encoders control the tape speed. The main parameter of the raster formation process is a fixed transportation speed, but it is difficult to realize because of imperfections of the device elements. The article analyzes the dispersion of vibration accelerations of the raster formation device and tape in two directions (transverse and longitudinal) and presents an analysis of their parameters by applying the theory of covariance functions. The results of the measurements of vibration accelerations at fixed points of the device construction and the tape were recorded on a time scale in the form of digital arrays (matrices). Values of the auto-covariance and inter-covariance functions of the digital arrays of the vibration acceleration measurement data were calculated by changing the quantization interval on the time scale. Software developed in the Matlab 7 operator package environment was used in the calculations. The results are useful for optimizing the processes of forming the construction and the structure of the symbols.

The presented research examines the tape transport system consisting of electromechanical tape traction and its fixed tensioning mechanisms, as well as a tape tilt mechanism operating on sliding friction. This system is mounted on a massive granite base built on the foundation using passive vibration insulators. The research and data processing method and the results of the experimental study of the layout system are proposed in the article. The authors' experience in the construction, analysis and optimization of mechatronic precision devices is presented in papers [1][2][3][4]. The formation process takes place in the dynamic mode because both the steel tape and the laser raster formation head are continuously moving during the process. In the assessment of the significant points of the construction being examined, an analysis of the vibration dispersion of the device construction and its tape points was carried out by applying the covariance function theory. A Gaussian process emulator with a separable covariance function imposes constraints on the emulator and may negatively affect its performance in applications where separability does not hold; Zhang et al. [5] propose a multi-output Gaussian process emulator with a non-separable auto-covariance function to avoid the limitations of separable emulators.

Introduction

Steel tapes, on the surface of which certain symbols such as rasters and special marks are formed, are often used for metrological and technological needs, such as for measuring displacement. These symbols are formed by the movement of a steel tape and the beams of a light source, which affect either the band surface or its special coatings. The undesired deviations in the position and shape of the symbols formed in this way mainly depend on uncontrolled deviations of the said movement of the tape and the light beam from the set parameters. One of the most important components of such deviations are vibrations of the band resulting from the excitation sources of the tape transporting system mechanism and other internal and external vibrations.
Uncontrolled changes in their position and displacement speed, as well as band deformations that have an adverse effect on the quality of the formed structures, occur as a result of these vibrations. They also affect the accuracy and reliability of the active control of the process carried out in real time. This effect on the defined system parameters depends on the vibration parameters: frequencies, amplitudes and other statistical characteristics. Knowing these parameters is important for designing the construction of the tape thrust mechanism operating on slip friction. The main parameter of the raster formation process is a fixed transportation speed, but it is difficult to realize because of imperfections of the device elements. This method may be used to produce a precision metrological scale on a stainless steel tape. The covariance function theory was used in the analysis of the vibration parameters. Graphical expressions of the covariance functions show the change of the probabilistic inter-dependence of the vibration parameters over the timeline. The degree of change of these spatial relations of the vibration parameters depends on the technological and dynamic properties of the tape device being examined. During the experiment, four sensors were laid out on the structural elements of the device and a fifth sensor was placed on the tape of the device. Vibration accelerations of all five sensors were measured in the longitudinal and transverse directions of the tape movement.

Simulation of vibration parameters

The research presents an option for calculating the most reliable trend value of the vibration vector by applying the method of least squares. It is assumed that the vibration vector trend is a discrete quantity with a fixed value. Use of the least squares method partially eliminates random vibration errors. When processing large-scale measurement data, the least squares method also provides asymptotically efficient values of the calculated parameters in the case where the measurement data distribution is not normal. The array of measurements of vibration accelerations consists of 5 vectors φ (columns). Each vector is understood as a random function with random measurement errors. The most reliable trend value φ̄, which is also called a weighted average, is calculated for each vector φ by applying the least squares method. It is assumed that the trend value of the vector changes according to a harmonic law, where the forecasted wavelength corresponds to the length of the vibration accelerations vector. The parameter equation of the individual vector value φ_i reads

φ_i = a_i φ̄ + ε_i,  i = 1, 2, ..., n,  (1)

where ε_i is a random acceleration error, φ_i is an acceleration value and φ̄ is the acceleration vector trend; the coefficient a_i is expressed through the assumed harmonic law, with the value of the unit of measure in radians. Equation (1) can also be written in matrix form.
Guella et al. [6] provide characterization theorems for high dimensional spheres, as well as for the Hilbert sphere. Kithulgoda et al. [7] present a strategy for incremental maintenance of the Fourier spectrum under changes in concept that take place in data stream environments. Singh and Singh [8] extended generalized Chebyshev-Fourier moments (G-CHFMs) and generalized pseudo-Jacobi-Fourier moments (G-PJFMs) to represent color images using quaternion algebra. Zhang et al. [9] used spectral densities to calculate the average entropy of mixtures of random density matrices and showed that the average entropy of the arithmetic-mean state of n qubit density matrices randomly chosen from the Hilbert-Schmidt ensemble is never decreasing with the number n. Kurt and Eryigit [10] studied the dynamics of a two-state system coupled to an environment with peaked spectral density. Alotta et al. [11] provided a complete characterization of normal multivariate stochastic vector processes. Band et al. [12] focused on fractal Fourier analysis, in which the graphs of the basic functions are Cantor sets, discontinuous at a countable dense set of points, yet with good approximation properties. Zhang et al. [13] simulated fractal signals generated under varying signal lengths and scaling exponents that indicate a range of physiologically conceivable fractal signals. Liu et al. [14] aimed to improve the computational accuracy of the fractal dimension of the profile curve. Florindo and Bruno [15] proposed a novel technique for the numerical calculation of the fractal dimension of fractal objects that can be represented as a closed contour. Xie et al. [16] proposed a layered online data reconciliation strategy based on a Gaussian mixture model for complex industrial processes with multiple modes. Odry et al. [17] demonstrated the use of a special test bench that both enables simulations of various dynamic behaviors of wheeled robots and measures their real attitude angles along with the raw sensor data. Song et al. [18] proposed a structure-preserving bilateral texture filtering algorithm to smooth the texture while preserving salient structures. Du et al. [19] proposed a correlation filter based visual tracking approach that integrates spatial-temporal adaptive feature weights into the DCF method to derive an efficient solution. Li et al. [20] proposed an algorithm for adaptive multiscale morphological filtering (AMMF), and its effect was evaluated on a simulated signal. Nathen et al. [21] presented an extension of a Large Eddy Simulation based approach in the framework of Lattice Boltzmann Methods.

During the research, an analysis of the raster formation device construction and of the dispersion of vibration accelerations of its tape points was carried out. The tape transport system consists of electromechanical tape traction and its fixed tensile mechanisms, as well as a tape tilt mechanism operating on sliding friction. The accuracy of the parameter values calculated by the least squares method is assessed using their covariance matrix K_φ̄. Since the trend in this task is a scalar quantity, the covariance matrix reduces to a single variance, where σ'_0 is a value of the standard deviation σ_0, assessed according to the corresponding formula.

Object of the research and measurement results

This article examines a raster formation device (Figure 1). The research work analyzes the tape transport system consisting of the electromechanical tape traction and its fixed tensile mechanisms, as well as a tape tilt mechanism operating on sliding friction. Vibrations in two directions (longitudinal and transverse) were measured at 4 points of the raster device (point 1, the fixed tension mechanism of the tape; point 2, the tape displacement measurement system; point 3, the tilting node; point 4, the traction-tension system) and on the metal tape. The article analyzes vibration signals at significant points of the metal scale production device when the device is in operating mode.
Measurements were conducted in the two directions X and Y (Figure 1). The most reliable trend value of the vibration accelerations vector φ is calculated by applying the condition of the least squares method, εᵀPε → min, where P is the n × n diagonal matrix of weights of the vibration accelerations, with the weights p_i = σ₀²/σ²_φi, where σ₀ is the standard deviation of the measurement result φ₀ whose weight is accepted as equal to one (p₀ = 1). Thus, the σ₀ value is selected freely, because it has no impact on the calculation results. The value of the measurement result φ₀ is selected so that the weights p_i are close to one (to reduce the scope of calculations). Equation (7) shows that the value of σ_φi depends on the value of the vibration acceleration φ_i; thus, an acceleration of higher value is of lower accuracy, because σ_φi grows with φ_i. Using Equation (5), the corresponding expression for the weights is obtained, with the accepted average value taken as 10. The extremum of function (4) is determined by calculating its partial derivative with respect to the parameter φ̃, equating it to zero and solving the equation received; the solution is the weighted average φ̃ = (AᵀPφ)/(AᵀPA) (Table 2).

The auto-covariance function of one data array, or the inter-covariance function of two arrays, K_φ(τ), is expressed as [24,25]: K_φ(τ) = M[φ₁(t)·φ₂(t + τ)], or, in discrete form, K_φ(kΔT) = (1/(n − k))·Σ_{i=1…n−k} φ₁(u_i)·φ₂(u_{i+k}), where φ₁ and φ₂ are the centred (trend-removed) signal values, τ = kΔT is a variable quantisation interval, k is the number of measurement units, ΔT is the value of the measurement unit, t is time and M is the symbol of the mean. The covariance model of vibration signal parameters which the authors used in this article is described in detail in papers [3,23]. Arrays of the vibration acceleration measurement data were obtained using 1-axis accelerometers 8344. The main components of the vibration acceleration amplitudes were in the frequency range from 280 to 420 Hz (Figure 5). The lowest level of vibrations in the Y and X directions was in the tape tilting node (Figure 1, point 3), where the formation of rasters takes place, and it was 6 mm/s² (in the Y direction) and 12 mm/s² (in the X direction).

Covariance model of vibration signal parameters and the results of the analysis of the vibration acceleration model Guchhait and Banerjee [22] proposed a variant of a constitutive equation error based material parameter estimation procedure for linear elastic plates, developed from partially measured free vibration signatures. The conception of a stationary random function is the scientific ground of this theoretical model, considering that the errors of measuring the vibration parameters are random and of the same accuracy, i.e., the average error M_Δ = const → 0 and the dispersion D_Δ = const. The covariance function of digital signals depends on the difference between the arguments only, i.e., on the quantisation interval on the time scale [23]. The theoretical model was subject to an assumption that errors of digital signals of vibrations are random and possibly systematic. The measurement data trend is eliminated in each vector of the measurement arrays of vibration parameters. Each measurement vector is a random function due to random measurement errors; to bring its expression closer to the shape of a stationary function, the trend components, which were calculated in application of the method of least squares, were eliminated. Measurement data arrays were processed using the developed computer software based on Matlab programme package operators. Values of the quantised interval of the simplified covariance functions range from 1 to n/2 = 8000.
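The simplified covariance functions described above can be estimated directly from the recorded acceleration vectors. The following Python sketch illustrates the standard discrete estimator; it assumes the trend has already been removed (it additionally subtracts the mean for safety) and normalises by K(0) to obtain the correlation-coefficient form in which values such as 0.4-0.8 are reported. It is an illustration of the estimator, not the authors' Matlab software.

```python
import numpy as np

def covariance_function(x, y=None, max_lag=None):
    """Discrete auto-covariance (y is None) or inter-covariance function
    K(k) = 1/(n-k) * sum_i x_i * y_{i+k} over centred signals,
    evaluated for quantised lags k = 0 .. max_lag."""
    x = np.asarray(x, dtype=float)
    y = x if y is None else np.asarray(y, dtype=float)
    n = min(x.size, y.size)
    max_lag = n // 2 if max_lag is None else max_lag
    xc, yc = x[:n] - x[:n].mean(), y[:n] - y[:n].mean()   # centring (trend assumed removed)
    lags = np.arange(max_lag + 1)
    K = np.array([np.dot(xc[: n - k], yc[k:n]) / (n - k) for k in lags])
    return lags, K

def normalised(K):
    """Correlation-coefficient form of the covariance function, K(k) / K(0)."""
    return K / K[0]

# illustrative use on two synthetic acceleration records sampled at a fixed interval:
# rng = np.random.default_rng(0)
# t = np.arange(16000) * 1e-4
# a1 = np.sin(2 * np.pi * 300 * t) + 0.1 * rng.standard_normal(t.size)
# a2 = np.sin(2 * np.pi * 300 * t + 0.5) + 0.1 * rng.standard_normal(t.size)
# lags, K12 = covariance_function(a1, a2, max_lag=8000)
# r12 = normalised(K12)
```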
The value K′_φ(τ) of the simplified auto-covariance function K_φ(τ) was calculated for each vibration vector, and graphical expressions of 16 simplified auto-covariance functions were obtained. The simplified inter-covariance functions of the acceleration vectors of the array have average correlation coefficient values for the majority of pairs of vectors ranging in the 0.4-0.8 interval, which is illustrated by the data of the inter-correlation of the 1st acceleration vector with the vectors of all the other 7 accelerations and between vectors 2 and 3, 2 and 7, 2 and 8, 3 and 6, 3 and 7, 3 and 8, and 6 and 8. Thus the correlation between the majority of acceleration vectors is significant enough, which shows the potentially inter-related vibrations of the nodes of the device construction.

Conclusions Analysis of the dynamic parameters of the sensing system construction, whose main aim is the raster formation and the control of this process using the optical measuring equipment and high precision angle encoders, revealed that the highest level of vibrations of up to 32 mm/s² was in the traction-tension system when assessing the longitudinal vibrations; when assessing vibrations in the transverse direction, the highest level of up to 190 mm/s² was formed in the fixed tension mechanism of the tape. The main components of the vibration acceleration amplitudes were in the frequency range from 280 to 420 Hz. Expressions of the simplified auto-covariance functions of the acceleration vectors of the arrays are similar; only the 10th simplified auto-covariance function, of the vector of transverse vibration accelerations of the tape, has a different expression than the other auto-covariance functions. The simplified inter-covariance functions of the acceleration vectors of the array have average correlation coefficient values ranging within the 0.4-0.8 interval for the majority of pairs of vectors. Thus, the correlation between the majority of pairs of acceleration vectors is significant enough, which shows the possibly inter-related vibrations of the construction nodes of the device. The simplified auto-covariance and inter-covariance functions show the change of the probabilistic interdependence of the vibration parameters of the raster formation tape device on the time scale. This degree of change depends on the relation between the vibrations of the structural nodes of the device being examined and the dynamic properties. Graphical expressions of the simplified auto-covariance and inter-covariance functions are two-tiered, which is related to the fact that the structure of the vibration signals is a composite function that has vibrations of large amplitudes with long intervals and vibrations of relatively small amplitudes.
3,677.8
2020-07-08T00:00:00.000
[ "Physics" ]
Structural Health Monitoring Using Ultrasonic Guided-Waves and the Degree of Health Index This paper proposes a new damage index named degree of health (DoH) to efficiently tackle structural damage monitoring in real-time. As a key contribution, the proposed index relies on a pattern matching methodology that measures the time-of-flight mismatch of sequential ultrasonic guided-wave measurements using fuzzy logic fundamentals. The ultrasonic signals are generated using the transmission beamforming technique with a phased-array of piezoelectric transducers. The acquisition is carried out by two phased-arrays to compare the influence of pulse-echo and pitch-catch modes in the damage assessment. The proposed monitoring approach is illustrated in a fatigue test of an aluminum sheet with an initial notch. As an additional novelty, the proposed pattern matching methodology uses the data stemming from the transmission beamforming technique for structural health monitoring. The results demonstrate the efficiency and robustness of the proposed framework in providing a qualitative and quantitative assessment for fatigue crack damage. Introduction Crack initiation and propagation are the main driving forces of damage in many engineering materials [1]. In the absence of corrective actions, for example, decreasing the load levels or performing corrective maintenance actions, a crack may propagate up to the catastrophic failure of a structural component. A wide range of non-destructive testing (NDT) techniques have been proposed to support operation and maintenance decision making [2][3][4]. Visual inspection, radiography, or ultrasound [5,6] are some examples of these NDT techniques; these are regarded as highly reliable but also time consuming (requiring the interruption of service) and dependent on the structure [7]. Alternatively, structural health monitoring (SHM) techniques enable a continuous on-board monitoring of the structural health facilitating the conditionbased maintenance of the structure [8]. Some SHM techniques are especially suited for structures made of conductive materials such as metals (e.g., eddy currents [9]). Others, however, allow a more generic application to structures made of any kind of isotropic or anisotropic materials. For instance, strain gauges and fiber Bragg gratings [10] are typically used to monitor strain deformation in structures. However, their area of coverage is relatively low, thus potentially missing small defects unless a dense network of sensors To overcome this limitation, this paper proposes the use of a novel pattern matching post-processing method for the detection and monitoring of fatigue damage in isotropic materials based on signal features stemming from each characteristic point (CP) of the signal. These points correspond to the peak amplitudes (maximum and minimum) of the acquired signal. The ToF of the peaks are chosen as the base of comparison to detect and monitor damage in a structure [37]. More specifically, a set of these points acquired in the undamaged state are further used to build a trapezoidal function similarly to a membership function or fuzzy set [38,39]. Additional measurements in non-pristine states are evaluated in the proposed function, whereby a degree of health (DoH) of the structure is provided as output. The feasibility and efficiency of the proposed algorithm (based on [40]) are demonstrated in a real fatigue test of an aluminum plate. 
The ultrasonic transmission beamforming technique with a linear phased-array of six PWAS is adopted to monitor (i.e., excite and receive ultrasonic guided-waves) the structure with an enhanced power every 1000 loading cycles at the same load level. Another symmetric phased-array (which only receives ultrasonic signals) is used to support the results obtained from the former one. The experimental results show that the proposed methodology is able to detect fatigue damage at an early stage (i.e., the crack onset) and to monitor the crack growth in an effective and efficient manner. Furthermore, the ease of implementation of this methodology makes it potentially applicable to industrial environments in order to perform real-time SHM. The remainder of the paper is organized as follows-Section 2 describes the proposed post-processing method based on the comparison of the ToF of the signal peaks; Section 3 shows experimental setup as well as the results obtained from this method in the fatigue experiment; Section 4 discusses the potential impact of the proposed methodology on ultrasonic guided-wave based SHM; finally, Section 5 provides concluding remarks and suggests future works. Methodology The proposed detection and monitoring methodology based on a novel damage index is presented in this section. The index, referred to here as DoH, relies on ultrasonic guided-wave data taken from the SHM of structural panels. More specifically, the DoH uses baseline ultrasonic data acquired when the structure is in pristine state as a basis of comparison for further measurements when the structure is potentially damaged. To enhance the computational efficiency of this technique, only the ToF of the signal peaks above a user-specified amplitude threshold (named as A t ) are used as representative information of the raw ultrasonic data, as shown in Figure 1a. The peaks are obtained applying a sliding window to the ultrasonic signal. Note that the width of this window is chosen to be proportional to the period of the signal, that is, the inverse of the central frequency. The maximum and minimum points within these moving windows are selected, hence obtaining the signal peaks. Furthermore, to partially address the irreducible uncertainty of these measurements and add robustness to the damage index, an arbitrary amount of repeated signals are acquired leading to a set of peaks with small differences in both time and amplitude (see Figure 1b). These variations are assumed to be an indicator of the measurement (or aleatory) uncertainty [41], which supports the previous robustness claim of the proposed damage index. Similarly to a fuzzy set, a trapezoidal function is created around the set of ToF points acquired in pristine state (see Figure 1c). This function makes it possible to assess the structural health by analyzing the ToF mismatch between the CPs (see Figure 1b for reference) acquired in both the degraded state and the pristine state. Note that as this function gives degree of membership values within the interval [0, 1], the proposed damage index will carry qualitative information about the structural degradation. Mathematically, the evaluation within the previously established trapezoidal function (or set) of CP j i , namely the i-th CP of the j-th signal, i = 1, . . . , N, j = 1, . . . , m, will provide the degree of membership µ j i ∈ [0, 1] of such a point to the set, which is defined as follows: where ToF Figure 1c. 
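To make the peak-based post-processing concrete, the following Python sketch extracts the characteristic points with a sliding window approximately one period wide and evaluates a trapezoidal membership function built around the pristine-state time-of-flight values. The window stride, the thresholding rule and the flat-top and flank parameters are assumptions of this sketch; the paper's exact parameter choices are stated in the case-study section.

```python
import numpy as np

def characteristic_points(trace, fs, fc, a_t=0.2):
    """Extract ToF and amplitude of the signal peaks (characteristic points, CPs):
    maxima and minima found in windows ~ one period wide, kept only if their
    absolute amplitude exceeds a_t * max(|trace|)."""
    trace = np.asarray(trace, dtype=float)
    win = max(int(fs / fc), 2)                  # window width proportional to one period
    thr = a_t * np.abs(trace).max()
    tofs, amps = [], []
    for start in range(0, trace.size - win, win):   # stride = window width (assumption)
        seg = trace[start:start + win]
        for idx in (np.argmax(seg), np.argmin(seg)):
            if abs(seg[idx]) >= thr:
                tofs.append((start + idx) / fs)
                amps.append(seg[idx])
    return np.array(tofs), np.array(amps)

def trapezoid_membership(tof, core_lo, core_hi, flank):
    """Degree of membership of a ToF value in the trapezoidal set built from the
    pristine-state CPs: 1 inside [core_lo, core_hi], decreasing linearly to 0
    over an extra margin `flank` on each side."""
    if core_lo <= tof <= core_hi:
        return 1.0
    if tof < core_lo:
        return max(0.0, 1.0 - (core_lo - tof) / flank)
    return max(0.0, 1.0 - (tof - core_hi) / flank)
```

In practice, core_lo and core_hi would span the ToF scatter of the repeated pristine measurements for each characteristic point, with flank covering additional unmodelled measurement uncertainty.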
This is to account for further measurement uncertainty that is not captured by the initial repeated measurements. Note also the intervals j i and L j i need to be suitably defined by the modeler in the proposed approach. Thus, when assessing damage through an acquired guided-wave, this method evaluates if CP j i falls within L j i , then µ j i = 1 meaning that the structure is unaltered. Alternatively, if the structure suffers a permanent damage, the ultrasonic guided-waves will be affected by this damage through slight ToF mismatches [42]. In this case, the CP j i is likely to fall within the interval j i due to an advance or delay of the acquired signal, which in turn leads to µ The assessment of a degree of membership µ j i ∈ [0, 1) may be an indication of structural damage, but it can also occur due to environmental or numerical noise. To overcome this issue, every CP j i is assessed and their degrees of membership µ j i are obtained, the global degree of membership of the j-th ultrasonic signal (i.e., M j , with M being the capital letter of µ) is obtained as follows: Note that the function g(·), conservatively chosen in this paper as the minimum of the degrees of membership, might be adopted differently such as the weighting and segmentation of the ToF or the amplitude [43]. Note also that in Equation (2) the function g(µ j i ) has been subtracted from the unity, consequently, M j can be viewed as a damage index which increases as the defect becomes more severe, and vice versa. Moreover, when a set of ultrasonic signals are available and post-processed by this method, an array (or matrix) of M j values rather than a single value can be obtained. The use of additional M j values ultimately leads to a more reliable monitoring since the influence of sensor malfunctioning in the monitoring decision greatly decreases. In this case, the resulting array or matrix is referred here to as the DoH matrix of a structure, with values from 0 (maximum health) to 1 (maximum degradation). A schematic workflow of the proposed methodology is shown in Figure 2. Case Study The proposed DoH damage index (Section 2) for damage detection and monitoring is illustrated in this section. To this end, a fatigue test on an aluminum plate with a central notch has been carried out and its structural health is monitored by means of the DoH using ultrasonic guided-waves excited and received by PWAS and a SHM ultrasound system (SHMUS). Fatigue Testing Configuration The fatigue test has been performed using a middle tension [M(T)] specimen with a centered crack and loaded in tension using a positive loading ratio. The test specimen of dimensions 245 mm × 500 mm × 1 mm has been extracted from a 1003x503 QQA250/5 'O' 2024 aeronautic grade aluminum sheet using a water jet cutting procedure. The notch has been centered with respect to the test sample centerline and machined using a laser cutting procedure, with the final dimensions specified in Figure 3a. The total length of the machined notch is 22.5 mm approximately. An Instron 8850 servo-hydraulic fatigue testing machine has been used to conduct the fatigue experiment. A fatigue pre-cracking procedure has been applied to develop a fresh and straight crack front to mitigate the effect of the machined started notch [44]. The crack onset and growth is recorded through a digital still camera. A tension-controlled fatigue test has been performed with a stress ratio ∆R = 0.1 and maximum load amplitude P max = 10 kN, which corresponds to the 60% of the aluminum yield strength. 
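For reference, the loading quantities implied by the stated stress ratio and maximum load can be checked with a few lines of arithmetic. The gross-section stress below ignores the notch and is only an approximate plausibility check; whether the quoted 60% figure refers to this nominal stress is an assumption of the sketch.

```python
# Worked check of the fatigue loading quantities quoted above (values taken from the text).
P_MAX_KN = 10.0                    # maximum load
STRESS_RATIO = 0.1                 # R = P_min / P_max
WIDTH_MM, THICK_MM = 245.0, 1.0    # specimen gross cross-section

p_min_kn = STRESS_RATIO * P_MAX_KN                         # 1.0 kN, the minimum load level
delta_p_kn = P_MAX_KN - p_min_kn                           # 9.0 kN load range
sigma_max_mpa = P_MAX_KN * 1e3 / (WIDTH_MM * THICK_MM)     # ~40.8 MPa nominal gross-section stress

print(p_min_kn, delta_p_kn, round(sigma_max_mpa, 1))
```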
The test was performed up to 100,000 cycles with a cycling frequency of 20 Hz. The reader is referred to Figure 3 for further information about the experimental set-up. Note that the upper and lower 50 mm bands of the specimen are used to clamp the plate into the fatigue testing machine. Ultrasonic Guided-Wave Based Tests The detection and monitoring of the crack onset and growth has been carried out using two linear phased-arrays of six PWAS, consisting of six evenly spaced PWAS that are linearly placed in the structure [19]. Piezoelectric ceramic disc transducers of 7 mm diameter and 0.5 mm thickness have been used in this fatigue test. These are used as transmitter-receiver (T i ) and receiver (S i ) arrays for the generation and reception of the ultrasonic guided-waves, respectively. Note that the transmitter-receiver array (T i ) works in pulse-echo mode by simultaneously generating and acquiring ultrasonic signals, while the receiver array (S i ) functions as a sensor only, also known as pitch-catch mode. It is important to note that the PWAS T 5 malfunctioned during the fatigue test and therefore its associated data have not been considered for SHM purposes. These PWAS have a radial mode of vibration and a resonant frequency centered at 300 kHz. They have been evenly spaced with a separation of 10 mm and symmetrically placed with regards to the sample centerline, as observed in Figure 3, and bonded to the aluminum surface using 3M™ Scotch-Weld™ DP490 epoxy adhesive. The piezoelectric transducers are managed by a SHMUS, which is a custom-built compact electronic device for ultrasonic guided-wave based SHM. It includes 12 arbitrary waveform generators and 12 acquisition systems (refer to [45] for further details of the SHMUS). The SHMUS is controlled by USB using a tailor-made control and processing software installed in a laptop. The schematic of the experimental set-up for damage monitoring is shown in Figure 4. The excitation signals are sinusoids of 4 cycles, with 300 kHz frequency (in accordance to the resonant frequency of the chosen PWAS) and 45 volts peak to peak amplitude. The ultrasonic signals are acquired using a sampling frequency of 60 MHz and 12 bits of resolution. Furthermore, the transmission beamforming technique [46] has been adopted for the monitoring of the specimen during the fatigue test. Using this technique, synchronously delayed signals were applied to the T i PWAS (refer to Figure 3a), so that the wave fronts can be constructively summed at a pre-established direction creating a main wave beam [47]. The ultrasonic tests have been carried out by steering the main beam at different directions, that is, from ξ 1 = 0 • to ξ 37 = 180 • with an increment of ∆ξ = 5 • , which sweeps the entire monitoring area. Note that the ultrasonic signals are acquired by both the T i and S i phased-arrays of PWAS, hence receiving two sets of 37 × 6 signals (i.e., 37 angles and 6 PWAS for each phased-array). Programmed inspections during the fatigue experiment have been carried out every 1000 cycles. For each individual inspection, the fatigue test has been stopped at the minimum load level and an ultrasonic inspection has been performed using the transmission beamforming technique. The received ultrasonic signals (available in [48]) by the SHMUS have been de-noised using a bandpass filter centered at the frequency of excitation (i.e., 300 kHz). This allows a range of frequencies around the frequency of excitation to pass, while attenuating frequencies out of the scope. 
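A de-noising filter with the characteristics specified in the next paragraph (passband 250-350 kHz, stopbands below 200 kHz and above 400 kHz, at least 60 dB of stopband attenuation) can be reproduced, for example, with SciPy. The filter family chosen below (Chebyshev type II) and the zero-phase application are assumptions of this sketch, since the paper does not state which design was used.

```python
import numpy as np
from scipy import signal

FS = 60e6  # acquisition sampling frequency, 60 MHz

def design_bandpass():
    """Bandpass around the 300 kHz excitation: pass 250-350 kHz, stop below
    200 kHz and above 400 kHz with at least 60 dB attenuation (assumed cheby2 design)."""
    return signal.iirdesign(wp=[250e3, 350e3], ws=[200e3, 400e3],
                            gpass=1.0, gstop=60.0, ftype='cheby2',
                            output='sos', fs=FS)

def denoise(raw_trace):
    """Zero-phase filtering of one acquired ultrasonic trace."""
    return signal.sosfiltfilt(design_bandpass(), np.asarray(raw_trace, dtype=float))
```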
In this case the passband is defined in the interval [250 kHz, 350 kHz], while the stopbands, that is, attenuated frequency ranges, are defined within [0 kHz, 200 kHz] and [400 kHz, ∞) with a minimum attenuation of −60 dB. Figure 5 shows the representation of this bandpass filter in both time and frequency domains. Damage Monitoring Results The ultrasonic data acquired in pristine state at the minimum load level (i.e., 1.0 kN) have been used to build the trapezoidal functions (refer to Equation (1)). The parameters used to create these sets are: (1) and L j i are set so that the measurement uncertainty is considered while avoiding that the trapezoidal functions of two adjacent peaks overlap. Note that the evaluation of a new signal in the previous functions has been carried out using a smaller amplitude threshold of A t = 10%, so that a control loop is created for the amplitude in an analogous manner to a hysteresis controller [49]. Figure 6a-d show the DoH matrix of the signals acquired at different fatigue cycles, namely 1000, 20,000, 50,000, and 100,000. Note that two DoH matrices are provided for each dataset, the one on the top is for the pulse-echo array (using T i PWAS) and the one on the bottom is for the pitch-catch array (using S i PWAS). As is evident from the results, a clear degradation of the structure is appreciated by the increase of the DoH damage index when increasing the number of cycles. Notwithstanding, working with the DoH matrices may be limited in practice due to their complex interpretation. Hence the mean value of M j , j = 1, . . . , m (i.e., all the values of the DoH matrices), is used here as a simplified statistic of the SHM data evolution, as shown in Figure 7a. Results also show that a higher damage index value is obtained using the data acquired in the S i sensors (grey line) compared to the T i sensors (black line). Furthermore, a comparison of the evolution of both the DoH index and fatigue crack length is also depicted in Figure 7a. The measurements of the crack length are obtained by digitizing different points in the crack path, as shown in Figure 7b. A remarkable agreement in the trend of both damage indicators is observed throughout the duration of the fatigue test. Note also that the early damage is detected using the two phased-arrays around 10,000 cycles, although the S i sensors are able to provide an earlier indication of damage (i.e., even before it is visible by optical means). Note that other statistics that illustrate the damage information obtained from the beamforming tests can be used. In particular, the adoption of the mean of the DoH matrix has shown accuracy in monitoring of the degradation of the structure. However, this statistic shows no indication of the dispersion of the cells of the DoH matrices and works as a filter of the evolution data. To provide such dispersion information, the evolution of the individual damage indices M j for the array of sensors T i when focusing at 60 • and 120 • are shown in Figure 8a,b, respectively. The equivalent data for the upper phased-array with S i sensors are shown in Figure 8c,d. These curves depict the evolution of the 60 • column of the DoH matrices, equivalent to the ones shown in Figure 6. ξ1 ξ2 ξ3 ξ4 ξ5 ξ6 ξ7 ξ8 ξ9 ξ10 ξ11 ξ12 ξ13 ξ14 ξ15 ξ16 ξ17 ξ18 ξ19 ξ20 ξ21 ξ22 ξ23 ξ24 ξ25 ξ26 ξ27 ξ28 ξ29 ξ30 ξ31 ξ32 ξ33 ξ34 ξ35 ξ36 ξ37 panels (a,b)) and pitch-catch (panels (c,d)) modes at two particular directions. 
As observed from Figure 8, the evolution of M j for each PWAS has very different paths even for two adjacent PWAS. For instance, the M j value of the sensors S 2 and S 3 at 120 • (see Figure 8d) have significantly different evolutions from 15,000 cycles onwards. This difference highlights the high sensitivity of the ultrasonic guided-waves when interacting with damage. Given that a different angle is formed between the damage and sensor, the reflected part of the incident wave may vary drastically [50]. Therefore, different sensor locations can acquire very different amplitudes of the scattered wave. Nevertheless, the damage onset is accurately captured by most of the sensors of the same phased-array at the same time. On the Ultrasonic Guided-Waves Results The proposed methodology for damage detection and monitoring based on the DoH index, has been illustrated using ultrasonic guided-waves and a fatigue test. The damage index is based on pattern matching using fuzzy logic fundamentals, which provide it with robustness against measurement uncertainties. The adoption of the transmission beamforming technique for SHM entails the collection of large amount of data [46,51]. This allows a more robust damage detection, providing redundancy and ensuring an accurate monitoring in the event of hardware failure, for example, sensor malfunction. One basic assumption considered in this work is that the damage severity increases as does the ToF mismatch between signals acquired at different instants of time. Note that an important source of signal variability, which stems from environmental factors (e.g., temperature or humidity [52]), is not considered in this work. Additionally, changes in the properties of the coupling material between the PWAS and the structure will add variability on the received signals that is unrelated to the structural integrity. The consideration of these factors would imply the adoption of compensation techniques [53] and signal shape and amplitude prediction methods [54] so that the evaluation of the structural integrity becomes independent from these variables. This constitutes a desirable extension of the proposed methodology in order to monitor structures in the long term. The ultrasonic tests were designed so that the wave beams focused at 37 directions using two phased-arrays: (1) a sensing array (S i ) placed at the upper part of the plate in pitch-catch mode and (2) a generating one (T i ), which was also acquiring in pulse-echo mode, placed at the bottom. In the literature, authors have typically used sensing PWAS in the opposite side of the damage [55] to measure ultrasonic guided-waves with higher amplitude and reduce the electronic complexities stemming from pulse-echo. To study their SHM performance, a comparison of both pulse-echo and pitch-catch approaches has been provided in this paper. In this context, Figure 7a shows that both S i and T i sensors were able to provide an accurate damage monitoring, although the sensors in the upper part of the plate (S i ) showed a higher sensitivity for the early stages of damage. Notwithstanding this, the use of a pulse-echo approach is more practical in real-world engineering scenarios where operation and maintenance impositions may dictate the location of sensors and actuators. The DoH matrices contain a large amount of damage-related information, which may be complex to interpret in practice. 
In this paper, we choose the mean value as a representative statistic of this large amount of SHM information, although other descriptive statistics could be adopted. This statistical simplification along with the computational efficiency of the signal post-processing, make the proposed methodology suitable for realtime and on-board SHM applications. Moreover, even in the event of a malfunctioning PWAS, the amount of post-processed signals is sufficiently large so that the mean of the DoH is capable of monitoring damage in an accurate manner (as shown in the results), further emphasizing the robustness of the proposed approach. It is also noticeable from the results that a similar evolution pattern is obtained for both the proposed DoH index and the crack growth (refer to Figure 7a). To further investigate this relationship, both DoH indexes and their corresponding crack growth data are displayed in Figure 9. In particular, the data from the DoH matrices for each of the 37 directions, averaged over the six PWAS of each phased-array, are shown in Figure 9a,b, respectively, as compared to the measured fatigue crack length. These scatter plots reveal the dispersion of the DoH values with respect to the different monitoring directions used in the transmission beamforming test. However, a clear correlation is observed from these plots, and further revealed in Figure 9c,d which relate the mean of the DoH index and the crack length. The variability stemming from the 37 beamforming directions is also depicted using the 90% and 50% confidence bands. Note that although the array of S i sensors provides a more certain index (with narrower bands), the T i sensors in pulse-echo are still able to predict the crack length with a remarkable precision. This remarkable correlation suggest that the DoH index can be used to predict the crack length in a probabilistic manner. Furthermore, a state-space dynamical system representation of the data will provide a predictive model of the crack length whereby the remaining useful life of the structure can be estimated (see [56]), using just the DoH as input data. This further development would require the assessment of a higher number of specimens, which would provide additional data to build a robust predictive model. 90% uncertainty band 50% uncertainty band Pitch-catch array (d) Mean and uncertainty bands of (b) Figure 9. Relationship between crack growth and mean of DoH for both pulse-echo and pitch-catch working modes. Panels (a,b) provide the data for the 37 directions with an average value of DoH with respect to the PWAS in the array. Additionally, (c,d) show the mean and 90% and 50% uncertainty bands of the DoH data. On the Fatigue Properties of the Specimen The experimental results obtained in the aluminum M(T) specimen can be also assessed in terms of crack length evolution and rate for the adopted stress range and load ratio. It is worth mentioning that no appreciable change in stiffness was measured in the damaged specimen. The stress intensity range ∆K is obtained according to the indications of ASTM E647 for a M(T) specimen as follows [44]: where α = 2a/W, 2a is length of the machined notch in mm, B denotes the plate thickness in mm, and W is width of the specimen in mm. The parameter ∆P = P max − P min with P max and P min being the maximum and minimum load applied in the fatigue test, respectively. 
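The stress-intensity range and crack-growth-rate calculations referenced above can be written compactly. The sketch below uses the standard ASTM E647 expression for the M(T) geometry, dK = (dP/B)*sqrt(pi*alpha/(2W)*sec(pi*alpha/2)) with alpha = 2a/W, which is consistent with the variables just defined but is assumed here because the equation itself is not reproduced in the text; it also includes the secant crack-growth-rate estimate described in the next paragraph.

```python
import numpy as np

def delta_k_mt(delta_p_n, a_mm, w_mm, b_mm):
    """Stress-intensity-factor range for an M(T) specimen (standard ASTM E647 form, assumed):
    dK = (dP/B) * sqrt(pi*alpha/(2W) * sec(pi*alpha/2)), alpha = 2a/W.
    Inputs in N and mm; output converted to MPa*sqrt(m)."""
    alpha = 2.0 * a_mm / w_mm
    dk_n_mm15 = (delta_p_n / b_mm) * np.sqrt(np.pi * alpha / (2.0 * w_mm)
                                             / np.cos(np.pi * alpha / 2.0))
    return dk_n_mm15 / 31.62                     # N/mm^(3/2) -> MPa*sqrt(m)

def crack_growth_rate(a_mm, n_cycles):
    """Secant estimate of da/dN between consecutive crack-length readings,
    reported at the average crack size of each increment."""
    a, n = np.asarray(a_mm, float), np.asarray(n_cycles, float)
    dadn = np.diff(a) / np.diff(n)               # mm/cycle
    a_bar = 0.5 * (a[1:] + a[:-1])               # average crack size of the increment
    return a_bar, dadn

# illustrative call with values of the order used in the test (hypothetical crack length):
# delta_k_mt(delta_p_n=9000.0, a_mm=11.25, w_mm=245.0, b_mm=1.0)
```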
Notice that Equation (3) is valid for linear-elastic, isotropic, and homogeneous materials, and that it excludes potentially influencing effects derived from residual stresses or crack closure [57]. Note also that Equation (3) provides the values of ∆K with 1% accuracy for the clamped end specimen configuration used in the experimental fatigue test [44]. Additionally, the crack growth rate da/dN is obtained using the secant method, that is, calculating the slope of the straight line joining two adjacent points, as follows: where a n and N n denote the crack length and the number of cycles of the n-th point, respectively. Given that the crack growth rate is an average over an increment (n + 1), the average crack sizeā = 1/2(a n+1 + a n ) is adopted to calculate ∆K [44]. Thus, the crack growth rate along with the stress intensity range are obtained from the experimental results and are shown in Figure 10. These data are compared to the Walker crack growth equation [58] adopting the parameters for a 2024-T3 clad and bare aluminum sheet, L-T (M2EA11AB1) [59]. It is noticeable from Figure 10 that the fatigue properties obtained from the experiments are relatively close to those available in the literature, considering the high variability that is typically manifested in fatigue tests [60]. This similarity is obtained even though (1) the fatigue tests do not strictly follow the ASTM E647 recommendations in terms of specimen aspect ratio; and (2) the data from the Walker equation is for an aluminum material with a different heat treatment than the one used in the tests. Finally, a comment is added in relation to the correlation between crack length and DoH index. Indeed, motivated by the strong correlation of the proposed DoH index with respect to the crack length (as previously discussed in Section 4.1), an estimation of the stress intensity factor could also be provided as long as the crack growth rate was predicted. This estimation could be particularly useful for an efficient characterization of the structure under consideration, for example, to provide the stress state at a crack tip and to establish a fracture failure criterion [61]. In this sense, additional future lines of work are under consideration regarding data normalization and damage classification. At a mechanical test level, testing different sample geometries, materials such as carbon fiber reinforced polymers, and notch shapes are also required to validate the feasibility of the proposed damage index in monitoring and predicting damage features in more complex scenarios. However, it is worth mentioning that the proposed extensions for this DoH-based methodology related to the diagnosis of damage are valid for a controlled environment where only fatigue damage is carried out. Note that additional sources of damage, for example, impact or corrosion, would be accounted for in the DoH matrix, although the proposed descriptive statistic (mean value) would be unable to differentiate them. To overcome this limitation, pattern recognition approaches or machine learning could be applied to the DoH matrix data at the cost of a higher computational cost. Conclusions An ultrasonic guided-wave based post-processing approach for the detection and monitoring of damage in isotropic materials is proposed in this paper. The methodology relies on a damage index, named as the DoH index, which is based on the ToF variation of the characteristic points of signals. This index enables a real-time and computationally efficient continuous monitoring. 
The accuracy and robustness of the proposed method in assessing the health condition of a structure under fatigue load is illustrated using experimental data from a fatigue test in an aluminum M(T) plate. The following conclusions can be drawn: • The proposed DoH damage index detects the damage onset and monitors its growth in a thin-walled metallic plate under fatigue testing conditions. It also shows a remarkable correlation with the fatigue crack length evolution. • The damage evolution is accurately monitored using both the pulse-echo and sensing only phased-arrays of PWAS along with the transmission beamforming technique. • The robustness of the proposed damage index builds on a pattern matching process, partially using fuzzy logic fundamentals, and information stemming from the transmission beamforming tests. • The observed accuracy alongside the large amount of data contained in the DoH matrices encourage the use of the proposed method for a reliable diagnosis of damage features and mechanical properties. Further work to improve and extend the proposed approach is under consideration regarding: (1) the application of environmental compensation techniques; (2) the embedment of the proposed damage index in a prognostics framework to predict the remaining useful life of the structure; and (3) the investigation of the damage index sensitivity with regards to different specimen geometries, materials such as composites, notch shapes, and damage forms along with their locations.
6,689.2
2021-02-01T00:00:00.000
[ "Materials Science" ]
Predicted Secretome of the Monogenean Parasite Rhabdosynochus viridisi : Hypothetical Molecular Mechanisms for Host-Parasite Interactions : Helminth parasites secrete several types of biomolecules to ensure their entry and survival in their hosts. The proteins secreted to the extracellular environment participate in the pathogenesis and anthelmintic immune responses. The aim of this work was to identify and functionally annotate the excretory/secretory (ES) proteins of the monogenean ectoparasite Rhabdosynochus viridisi through bioinformatic approaches. A total of 1655 putative ES proteins were identified, 513 (31%) were annotated in the UniProtKB/Swiss-Prot database, and 269 (16%) were mapped to 212 known protein domains and 710 GO terms. We identified six putative multifunctional proteins. A total of 556 ES proteins were mapped to 179 KEGG pathways and 136 KO. ECPred predicted 223 enzymes (13.5%) and 1315 non-enzyme proteins (79.5%) from the secretome of R. viridisi . A total of 1045 (63%) proteins were predicted as antigen with a threshold 0.5. We also identified six venom allergen-like proteins. Our results suggest that ES proteins from R. viridisi are involved in immune evasion strategies and some may contribute to immunogenicity. Introduction Monogeneans are ectoparasitic flatworms commonly found on the gills of marine and freshwater fish. Monogenean infections can cause excess mucosal secretion, epithelial damage, hemorrhage, osmotic problems, and gill atrophy leading to respiratory system failure [1]. In addition, lesions caused by monogenean parasites can facilitate secondary infections by bacteria, which can lead to the death of the fish host [2]. Some monogenean species belonging to the Diplectanidae family have been responsible for diseases and mortality in finfish aquaculture [3][4][5][6]. Several diplectanids of the genus Rhabdosynochus have been reported infecting the gill lamellae of wild and cultured snooks, Centropomus spp. (Perciformes: Centropomidae) [7,8]. Particularly, in northwestern Mexico, the diplectanid Rhabdosynochus viridisi has been reported causing mortality in a broodstock of Pacific white snook, Centropomus viridis [9]. This fish species is a well-suited candidate for marine cage aquaculture [10]. Additionally, snooks reared in laboratory and in a pilot commercial-scale farm have been affected by R. viridisi (Morales-Serna F.N. pers. obs.). Therefore, fish-parasite interactions should be better understood to support future research focused on the development of treatments to prevent or control monogenean infections in snooks. Parasites secrete several biomolecules to ensure their entry and survival in the host. In particular, the proteins secreted to the extracellular environment, also known as secretome or excretory/secretory (ES) proteins, participate in the pathogenesis and immune Parasitologia 2023, 3 34 responses [11]. Although the secreted proteins could participate in the homeostasis of the parasite or its defense against other pathogens (e.g., part of the integument against bacteria), a critical role in the interaction with the host has been proposed. Some ES proteins induce apoptosis [12] or cause proteolysis in order to remodel tissues for invasion [13], and others act as immunomodulators to evade the host's immune system for parasite establishment and adaptation [14,15]. Immune evasion is a strategy used by pathogenic organisms to maximize their probability of transmission to a fresh host. 
It may involve (1) hiding from the immune system (e.g., antigen mimicry or masking, diversity/polymorphism, variation of parasite antigens); (2) interfering with the function of the immune system (e.g., blocking pattern recognition receptors (PRRs) and downstream signaling, inhibition of humoral factors); or (3) destroying elements of the immune system (e.g., induction of apoptosis of immune cells) [16][17][18]. The identification and analysis of ES proteins in parasitic flatworms have mainly been carried out in species of cestodes and trematodes that mostly infect mammals [19][20][21][22], whereas monogeneans have received less attention [23,24]. To further our understanding of monogeneans and their interaction with their fish hosts, we need better knowledge about their secreted proteins. Experimental studies focused on the extraction of these proteins in monogeneans may be challenging due to their small size (in the range of micrometers) and the difficulty of obtaining sufficient individuals to purify the required amounts of proteins. In this case, analysis in silico may be useful to predict ES proteins and to guide future experiments [25]. Thus, based on a recently released transcriptome of R. viridisi [26], the aim of the present work was to characterize in silico the ES proteins. We analyzed the possible roles of secretory antigens in immunity against this monogenean, with emphasis on recent developments regarding their immunomodulatory action and potential involvement in evasion of the host immune defense. In Silico Identification and Functional Annotation of ES Proteins A total of 1655 non-redundant ES proteins from R. viridisi were predicted (Supplementary Materials). Only 513 proteins (31%) were annotated in the UniProtKB/Swiss-Prot database (Supplementary Materials) and 182 (11%) showed sequence similarity to proteins reported in WormBase Parasite (Supplementary Materials). Eighteen proteins (including three C-type lectins and five peptidases) showing high similarity to fish (Teleostei) and helminth (Lophotrochozoa) proteins were included in the secretome (Supplementary Table S1). Of the 1655 predicted ES proteins, 269 (16%) were mapped to 212 known protein domains. The most represented protein domains were immunoglobulin, leucine-rich repeat domain (involved in protein-protein and protein-ligand interactions), and zinc finger (involved in protein-nucleic acid interactions), which are also present in non-ES proteins (Supplementary Materials). Lectin C-type domain (carbohydrate-binding), CAP domain, and serine and cysteine peptidase domains, related to immune-regulatory proteins, were also found. We identified six putative multifunctional proteins (Supplementary Table S2). Two of them, thioredoxin and thrombospondin 1, are included in MultitaskProtDB. We mapped 269 (16%) ES proteins to 710 GO terms (229 Biological Process, 239 Cellular Component, and 242 Molecular Function categories). The most enriched GO terms ( Figure 1) in R. viridisi were: cell (GO:0005623), cell part (GO:0044464), organelle (GO:0043226), membrane (GO:0016020), and membrane part (GO:0044425) at the Cellular Component category; anatomical structure development (GO:0048856) at the Molecular Function category, and cellular process (GO:0009987), cellular metabolic process (GO:0008152), and nitrogen compound metabolic process (GO:0006807) at Biological Process category. 
Similarly, we mapped 2419 (6.3%) non-ES proteins to 6324 GO terms (2057 Biological Process, 2133 Cellular Component, and 2134 Molecular Function categories). As shown in Figure 1, the GO terms with higher representation of ES proteins with respect to non-ES proteins were endomembrane system, cell periphery, plasma membrane, plasma Table S3). ECPred predicted 223 enzymes (13.5%), 1315 non-enzyme proteins (79.5%), and 117 proteins without any prediction (7%) from the secretome of R. viridisi (Supplementary Materials). The putative enzymes were classified according to six classes of the Enzyme Commission (EC). Most of the predicted enzymes were represented by hydrolases (53%), followed by transferases (30%) and oxidoreductases (10%) (Figure 2A). Only 41% of the hydrolases were classified into EC subclasses ( Figure 2B): 23% esterases (EC 3.1), 15% peptidases (EC 3.4), and 3% glycosylases (EC 3.2). Of the 1655 ES proteins, 12 were identified as Carbohydrate-active enzymes (CAZymes). They were included into nine different families (Supplementary Table S4) of the three major classes: five glycoside hydrolases, five glycosyltransferases, and two polysaccharide lyases. MEROPS analysis of the ES proteins resulted in the identification of 27 peptidases and 14 peptidase inhibitors (Supplementary Materials). Cysteine and serine peptidases were the most represented. The peptidase sub-families found in the secretome are shown in Figure 3. The I93 subfamily of peptidase inhibitors was the most abundant. ECPred predicted four glycosylases and 18 peptidases; however, dbCAN2 predicted five glycosylases and the alignment against the MEROPS database resulted in 27 peptidases in the secretome of R. viridisi. the ES proteins resulted in the identification of 27 peptidases and 14 peptidase inhibitors (Supplementary Materials). Cysteine and serine peptidases were the most represented. The peptidase sub-families found in the secretome are shown in Figure 3. The I93 subfamily of peptidase inhibitors was the most abundant. ECPred predicted four glycosylases and 18 peptidases; however, dbCAN2 predicted five glycosylases and the alignment against the MEROPS database resulted in 27 peptidases in the secretome of R. viridisi. Antigenicity Prediction of ES Proteins A total of 1045 (63%) ES proteins were predicted as antigens with a threshold of 0.5. Of these, 43 had a score higher than 0.9 (Supplementary Table S5). Of those, only six were functionally annotated (Collagen alpha-2(I) chain, GTPase NRas, Collagen alpha-2(IV) chain, Beta-1,4-galactosyltransferase 2, Inactive histone-lysine N-methyltransferase 2E, and Transcription factor 12). The proteins with unknown function were submitted to Phyre 2 and the Protein Data Bank (PDB) template name with higher confidence (>90%) and percentage of identity (>25%) was selected. Table 1 shows the six most antigenic proteins and VAL proteins. Discussion This is the first study predicting secreted proteins of a monogenean member of the Diplectanidae family. Previously, Caña-Bozada et al. [23] identified the ES proteins in four monogenean species, Eudiplozoon nipponicum, Gyrodactylus salaris, Neobenedenia melleni, and Protopolystoma xenopodis. Similarly, our present work suggests that various predicted ES proteins from R. viridisi are potentially involved in adhesion, penetration, incorporation of nutrients, and immunomodulation of host cells (Figure 4), which are functions with important roles in pathogenesis [14]. GO annotations of ES proteins from R. 
viridisi agree in most represented Molecular Functions and Biological Process categories with those previously reported for other helminths [25]. However, the annotation of membrane and membrane part in the Cellular Component category is only similar to other Platyhelminthes, including monogeneans [23]. For example, the secretion of cadherins, protocadherins, integrins, and collagen alpha chains (Supplementary Table S3) could be related to cell-cell adhesion after the physical insertion of haptors in the epithelial tissue, important to anchor and transient attachment for locomotion, achieved in monogeneans through cooperation between adhesive secretion [27] and the haptor [28]. The overrepresentation of hydrolases in the secretome of parasitic helminths has been reported elsewhere [25] and could be associated with proteolysis. Cytosolic cathepsins B and D could induce apoptosis in T cells [29]. These processes, proteolysis and apoptosis, contribute to the penetration of the parasite and destruction of host cells and immune receptors [17]. Putative cathepsins, essential for survival and presenting low homology to fish proteins, have been previously considered as potential drug targets due to their overrepresentation in ES proteins of other monogenean species [23]. We found the presence of cathepsins B, C, D, and L in both ES and non-ES proteins from R. viridisi. It has been reported that cathepsins L are important for nematodes and can degrade haemoglobin, serum albumin, immunoglobulins, fibronectin, collagen I, and laminin under acidic conditions, and its enzymatic activity is host-specific [30]. Also, cathepsins L have been characterized in the monogeneans N. melleni [31] and E. nipponicum [32]. Eudiplozoon nipponicum abundantly expresses cathepsins B, D, L1, and L3, which play a critical role in haemoglobin processing and immunomodulation [24]. Rhabdosynochus viridisi, like other mucus-feeding monogeneans, could use elastase-like serine peptidases for extracellular digestion [32] instead of cathepsins, so it remains to verify the possible role of these peptidases in the secretome. A predominance of inhibitors of the subfamily I93 was observed in the secretome of R. viridisi. This subfamily contains inhibitors of metallopeptidases that have been proposed to participate in extracellular matrix turnover, tissue remodeling, and other cellular processes in parasitic helminths [33]. In addition, the serpins (serine peptidase inhibitors I01, I02, I08) and cystatin (cysteine peptidase inhibitor I29) secreted by R. viridisi, could block complement activation, prevent inflammation, and induce immunosuppression by subverting Th1 mechanisms and drawing the immune system towards a Th2/Treg response, as has been reported for E. nipponicum [34,35]. Inhibitors of serine and cysteine peptidases, belonging to family I29, are highly transcribed and secreted by adult parasites of E. nipponicum, although they are not present in the secretome in large quantities [24]. The multifunctional proteins thioredoxin and thrombospondin 1 (TSP-1) found in the R. viridisi secretome could be related to pathogen virulence activity [36]. Thioredoxins of nematodes are ES proteins with antioxidant, anti-inflammatory, and anti-apoptotic activities. They also modify monocytes and epithelial cells, binding and inducing the time-dependent release of cytokines [37]. Thioredoxin has been also reported in the secretome of the monogenan E. nipponicum [24]. 
TSP-1 interacts with other extracellular matrix proteins to regulate cellular behavior. Moreover, it is secreted as a compensatory mechanism for controlling inflammation and protecting tissues from excessive damage [38]. TSP-1 has not been reported in monogeneans or other platyhelminths, but it is known that thrombospondin 2 (TSP-2) in the trematodes Clonorchis sinensis and Fasciola hepatica is a virulence and immunomodulation-related transcript that causes transforming growth factor beta stimulation [39]. In humans, TSP-1 and TSP-2 can interact with various ligands, such as structural components of the extracellular matrix, cytokines, cellular receptors, growth factors, proteases, and other stromal cell proteins [40]. This is the first report of thrombospondin 1 in a monogenean parasite. The 18 predicted ES proteins from R. viridisi with high similarity to fish and helminths proteins possibly have mimicry function. For instance, Hebert et al. [41] observed that mimicry candidate proteins from the behavior-altering cestode Schistocephalus solidus had specific sequence similarity with proteins of the host (the threespine stickleback, Gasterosteus aculeatus). It has been suggested that a pathogen can mimic and substitute host proteins in order to hijack the host cellular processes [42]. Therefore, we would expect that potential mimicry proteins secreted by R. viridisi could be involved in immune-regulatory functions. The most represented protein domains in the R. viridisi secretome were immunoglobulin (Ig), zinc finger, and leucine-rich repeats. Ig-like domain functions in Caenorhabditis elegans include cell-cell recognition, cell-surface receptors, muscle structure, and the immune system [43]. Zinc finger proteins participate in numerous physiological processes, such as cell proliferation, differentiation, and apoptosis, thereby maintaining tissue homeostasis. 
They are also implicated in transcriptional regulation, ubiquitin-mediated protein degradation, signal transduction, actin targeting, DNA repair, cell migration, and numerous other processes [44]. Particularly, zinc finger proteins are required for chromosome-specific pairing and synapsis during meiosis in C. elegans [45] and for survival of adult worms of Schistosoma japonicum [46]. The potential functions of zinc finger proteins in immune system regulation, both at the transcriptional and post-transcriptional levels, have been proposed recently [47]. Likewise, leucine-rich repeat domains are present in several immune receptors of animals and also participate in protein-protein interactions [48]. The most represented KEGG pathway, human papillomavirus (HPV) infection, could be associated with immune evasion. The HPV oncoproteins downregulate the expression of proinflammatory cytokines and chemokines, and upregulate the expression of immunosuppressive genes in host cells. They also degrade host proteins [49]; for example, we found the E3 ubiquitin-protein ligase UBR4 in the secretome, which seems to be involved in removing pro-apoptotic and pro-inflammatory molecules via the ubiquitin-proteasome system [50]. Several KEGG BRITE objects in ES proteins were related to membrane proteins suggesting a role in host-pathogen interactions. Monogenean-specific antigens [51,52] and fish antibodies have been detected in several infections [53,54]. However, these studies are limited due to the difficulty of obtaining anti-Fc antibodies specific for different fish species, and high-quality protein extracts of small parasites, which are difficult to maintain in culture. The detection and identification of parasite antigens and humoral responses is important for vaccine development, and bioinformatic predictions have become essential. The present study suggests that most (63%) of the ES from R. viridisi are potential antigens and, therefore, could be processed and presented to T cell receptors, as well as recognized by host antibodies. According to the bioinformatic prediction of Gomez et al. [55], ES proteins of the platyhelminth Taenia solium are enriched in antigenic regions as compared to non-ES proteins. Antigens promoting antibody response could alleviate infection throughout antibody-dependent cellular cytotoxicity (ADCC) mechanisms [56]. For example, some structures of Schistosoma spp. and Fasciola spp., covered by antibodies, are destroyed by toxic proteins and reactive species released by cells with Fc receptors [57]. The most probable antigen predicted by Vaxijen was annotated as collagen alpha-2(I) chain. The collagen alpha-2(IV) chain also presented a score higher than 0.9. Recently, it was reported that human collagen alpha-2 type I stimulates collagen synthesis, wound healing, and elastin production in normal human dermal fibroblasts [58]. Type IV collagen, the most abundant constituent of the basement membrane, is highly conserved among vertebrates and invertebrates, regulating cell adhesion and migration [59]. Other most probable antigens are apparently unique to the species R. viridisi, since they did not have functional annotation by sequence homology. Species-specific antigens are common to members of a single species that improve the efficiency of immunological diagnosis and immunoprophylaxis of helminthic diseases [60]. 
Further research to identify antigens from the ES proteins in the tegument of monogeneans could shed light on the development of better strategies to prevent or control fish diseases. These antigens, such as membrane or tegumental proteins, have been proposed as vaccine candidates due to the probability of interaction with the host's immune system [61][62][63][64]. VAL proteins were also identified. These proteins are generally secreted in parasitic stages [65] and there is an increasing interest in characterizing their immunological properties to design anthelmintic vaccines [66][67][68]. They are ubiquitous ES products that paralyze plants and animals, and have been previously named as sperm-coating protein/Tpx/antigen 5/pathogenesis-related-1/Sc (SCP/TAPS), cysteine-rich secretory proteins/antigen 5/pathogenesis-related 1 (CAP), or activation-associated secreted proteins (ASPs); the CAP domain seems to be conserved in evolution, allowing the binding of small hydrophobic ligands, but little is known regarding endogenous ligands [65]. There are no experimental studies of VAL proteins from monogenean species, and only 41 VAL transcripts from N. melleni have been reported [69]. Of the six VAL proteins of R. viridisi, only the peptidase inhibitor 16 (PI16) showed similarity to unnamed protein products from P. xenopodis, which is another monogenean species. PI16 suppresses the chemokine chemerin activation [70], which impairs endothelial cell inflammation [71]. GLIPR1-like protein 1 has been reported in ES from other parasites, suggesting a role in cell adhesion [72,73]. Pathogenesis-related proteins form a protective barrier against invasive pathogens, at least in plants [74]. Pre-mRNA-splicing factor ISY1 homolog is a component of the spliceosome C complex required for the selective processing of microRNAs (miRNAs) during embryonic stem cell differentiation. It is also involved in pre-mRNA splicing in the nucleus [75]. Scoloptoxin SSD976 is a voltage-gated calcium channel inhibitor [76] that could impair Ca 2+ signaling, T cell activation, and proliferation in fish [77]. Venom allergen-like protein 4 (VAL-4) has been reported as secreted by other helminths. VAL-4 presents lipid-binding properties and can sequester small hydrophobic ligands; this could be a mechanism to modulate immune responses in the host [65]. It remains to be determined if there is functional homology of these VALs in the context of monogenean-fish interaction, at least to verify their role in the parasite life cycle and modulation of the host. The present study provides an in silico characterization of the secretome of R. viridisi; nevertheless, it has some limitations. The identified ES proteins of R. viridisi depend on the quality of the annotated transcriptome, and it is possible that proteins presenting high homology to fish belong to the host. Moreover, there is a lack of data from monogenean proteins, and annotation based only on sequence homology is not totally reliable. It is necessary to verify the presence of the secreted proteins through a proteomic approach, and to identify key virulence factors by functional studies, which are limited due to the size and difficulty of handling R. viridisi specimens. Antigenicity prediction is based on a human host, whose immune system is different from fish; the latter is composed of low-affinity antibodies, while antigen presentation and ADCC remain unverified. Materials and Methods Protein sequences were retrieved from an available R. viridisi transcriptome [26]. 
The specimens used for that transcriptome were all adults isolated from the gills of their fish host C. viridis, reared in laboratory conditions [26]. The predicted ES proteins were identified following the bioinformatic workflow of Gahoi et al. [25], which filters sequences based on signal peptide, discarding sequences with transmembrane regions, subcellular localization of mitochondrial proteins, endoplasmic reticulum retention signal, and GPI-anchor proteins. To rule out contaminant sequences, the ES proteins were aligned using BLASTP against the NCBI non-redundant protein database (E-value < 1 × 10^-5). The sequences that were best hits with Protostomia (taxid: 33317) were retrieved, and the remaining sequences were considered contaminants. Proteins with similar E-value among Teleostei, Lophotrochozoa, and Platyhelminthes databases were included. ES proteins were annotated based on the Uniprot-SwissProt and WormBase Parasite [78] databases, using BLASTP (E-value < 1 × 10^-4). ES proteins without any homologue in the NCBI non-redundant protein database were considered as the specific secretome of R. viridisi. The domains were identified with HMMSCAN v3.1.b2 [79] using the Pfam database as reference [80]. Possible multifunctional proteins (with two or more domains associated with different biochemical functions) were identified using the domain annotation and the MultitaskProtDB-II protein database [81]. The Gene Ontology (GO) terms were retrieved with the PANNZER2 server [82] and were subsequently plotted and analyzed with the WEGO 2.0 server [83] by Pearson Chi-Square test, using the entire proteome as the reference group. A significant enrichment was considered when the p-value < 0.05. The GO terms were updated with QuickGO [84]. KEGG Orthology (KO) IDs and KEGG pathways were retrieved from the Trinotate annotation [26]. Conclusions The present work presents hypothetical molecular mechanisms of host-parasite interactions of the monogenean R. viridisi with its host. We identified potential proteins secreted by this parasite and discussed their possible contribution to the pathogenesis and anthelmintic responses. Using bioinformatic analysis, we found that most of the ES from R. viridisi are potential antigens. This is the second study reporting the presence of VAL transcripts in a monogenean parasite. Further research is necessary to confirm the presence and function of these ES proteins in vivo. Tables file, containing Table S1: Proteins with similarity between fish and parasites; Table S2: Possible multifunctional proteins found in predicted ES proteins of R. viridisi; Table S3: Adhesion proteins identified in R. viridisi; Table S4: Families of carbohydrate-active enzymes (CAZymes) in predicted ES proteins of R. viridisi; Table S5
5,343.8
2023-01-10T00:00:00.000
[ "Biology" ]
Forest Stand Inventory Based on Combined Aerial and Terrestrial Close-Range Photogrammetry In this article we introduce a new method for forest management inventories especially suitable for highly-valued timber where the precise estimation of stem parameters (diameter, form, and taper) plays the key role for market purposes. The unmanned aerial system (UAS)-based photogrammetry is combined with terrestrial photogrammetry executed by walking inside the stand and the individual tree parameters are estimated. We compare two automatic methods for processing of the point clouds and the delineation of stem circumference at breast height. The error of the diameter estimation was observed to be under 1 cm root mean square error (RMSE) and the height estimation error was 1 m. Apart from the mentioned accuracy, the main advantage of the proposed work is shorter time demand for field measurement; we could complete both inventories of 1 hectare forest stand in less than 2 h of field work. Introduction Increasing demands for sustainable forestry require highly precise and fast forest inventories. Tree height and diameter at breast height (DBH) are the most important parameters in taking forest inventories. DBH is usually measured manually with a caliper or a measuring tape and it is recorded together with species during field inventories [1]. Tree height is usually measured by a height measuring instrument based on angle and distance measurements. Although the spatial information (horizontal structure of stand or tree positions) is difficult to obtain, it represents important information regarding the forest inventory itself (the stem density) as well as for studies of distance-dependent growth and dynamics of trees and stands [2]. Manual inventory methods for collecting forest information used nowadays are labor cost- and time-demanding and there is a need to find alternative methods. Many methods for simplifying DBH measurement using non-contact methods have been proposed in the past decades, summarized, for example, by Clark et al. [3]. Their findings show that accuracy, productivity, and cost requirements, as well as practical restrictions, must be considered when evaluating whether a project benefits from a new instrument compared to traditional field measurements. Nowadays, advanced technologies that enable the acquisition of three-dimensional (3D) data for further analysis are widely used in research of specific forest environments. Some of these technologies can provide fast data acquisition of large areas, e.g., aerial photogrammetry (AP) or airborne laser scanning (ALS), and these are commonly used for commercial purposes in forest inventories [4][5][6]. Point clouds derived from images by AP are usually generated by computer vision/photogrammetry techniques, such as structure from motion (SfM) [7]. SfM operates in accordance with principles of traditional stereoscopic photogrammetry, using well-defined geometrical features captured in multiple images from different angular viewpoints to generate a 3D point cloud [8]. However, these technologies have limited (ALS) or almost no (AP) ability to penetrate dense canopies and gain information about the ground, undergrowth, or even shape and size of stems, because the view from the top does not allow sufficient detail of internal forest structure, especially in the case of richly-structured forests with different spatial, age, and tree species composition. Wallace et al.
(2016) [8] recently described the difference between AP and ALS from a small unmanned aerial vehicle and demonstrated that the ALS has better canopy penetration than the AP. AP and ALS can provide good information about forest canopy structure for the analysis, e.g., heights of trees, but tree height estimates may be underestimated because, in the case of ALS, laser pulses could miss the treetops (especially in the case of conifers with narrow canopies), could be reflected from lower levels of the forest stand and, thus, detect the sides of the trees instead of the treetops [9,10]. Both technologies have the advantage of larger scale and very fast data acquisition of relatively large forest areas in a short time and with very high resolution. A lower density of point clouds could influence the ability of successful individual tree detection [8]. Apart from large scale measurements based on aerial data acquisition, terrestrial laser scanning (TLS) is being increasingly applied in forestry. Although TLS provides a large amount of detailed information from the forest environment and about forest and tree parameters [6,11,12], the current price of laser scanning devices still makes it difficult to be deployed in management inventories. There are several studies on estimation of stem attributes using terrestrial photogrammetry (TP); the work by [13] was performed on analogue images and concludes that manual measurements in analogue images would be unfeasible due to long processing times. More recently, [14,15] presented methods for stem profiling and measurement based on digital photogrammetry. This method, however, places several constraints on the imaging process. Dick et al. [16] described a technique for the deployment of panoramic 360-degree photographs for tree horizontal structure mapping. The proposed technique was tested on a large set of plots and the authors reported the overall accuracy of 0.38-0.44 m in distance and 2.3-2.5 in azimuth measurements. In 2014, Liang et al. [17] described a method for terrestrial photogrammetry using a hand-held consumer camera and reported 88% accuracy in stem detection and 2.39 cm error in diameter estimation. The authors described the main advantages of terrestrial photogrammetry compared to terrestrial laser scanner to be the low cost of the equipment, fast field measurement, and automated data processing. Surový et al. [18] described the accuracy of point reconstruction on individual stems using a hand-held camera with a stop-and-go method and showed that the spatial error proved to be stabilized when more than eight cameras point to the evaluated point. Forsman et al. [1] developed a method for terrestrial mapping of tree structure using a multi-camera rig. The rig was composed of six cameras which simultaneously captured images. After the creation of a point cloud, the stem points are fitted with a circle function for diameter estimation. The authors reported the highest accuracy on plots with clearly visible stems and the DBH estimation was reported to be with a RMSE of 2.8-9.5 cm. Miller et al.
[19] designed and described a method for individual tree models using SfM and multi-view stereo-photogrammetry (MVS) techniques and they estimated the 2D and 3D accuracy of the reconstruction. They used xylometry as the most accurate ground truth validation of volume and found the total volume to be estimated with a RMSE of 18.53% (slightly better for stems, and much higher error was reported for branch volume estimation). The 2D metrics reported by the authors were better; 3.74% RMSE for height estimation and 9.6% for DBH. However, it is important to mention that only smaller trees were used in the study. In our study, we wanted to estimate the accuracy and feasibility of the deployment of SfM and MVS techniques for mature forest stands with canopy heights of 25 m and higher and to evaluate the possible replacement of terrestrial manual measurements, both because of time-efficiency and higher precision, as well as richer data (including the stem form, taper, horizontal asymmetry, etc.). For that purpose, we conducted geodetic measurement of almost 1 hectare of forest stand with subsequent AP and TP data acquisition and data processing for further accuracy and suitability assessment. Study Area The test area is located in the eastern part of Czech-Moravian Highland (Figure 1). This site was established in 1891 according to the former methodology of the Forest Research Institute in Mariabrunn, Vienna. It was conducted under registration number 170 and it is one of the oldest well-preserved forest research plots in Central Europe. Originally, it was established for the research of the effect of different planting clasps on the production and function of the subsequent forest stands on an area of 1.5 hectares. Current research on this plot focuses on mechanical and environmental stability of forest stands with regard to health conditions and transformation of the species, age, and spatial structure of the forest. The plot is located 650 m above sea level with a mostly flat surface, with a slight slope to the prevailing southeast aspect. The research plot was chosen because of a previous geodetic survey of the position of all trees, including surveying DBH and heights of individual trees. Only a part of the area was selected for the purpose of this study: an area of 0.8 hectares with 100% of 124-year-old spruce growth. The aim was to find a homogeneous surface without forest regeneration and almost no other vegetation in the understory.
Field Survey A geodetic survey was carried out in combination with GNSS measurements with a Topcon Hiper Pro (Topcon Corporation, Tokyo, Japan) and Trimble M3 Total Station (Trimble Navigation Limited, Sunnyvale, CA, USA) in the Czech national coordinate system (so-called Unified Triangulation Cadastral Network). DBH of trees were measured by Häglof caliper (Häglof Sweden AB, Långsele, Sweden) in two perpendicular directions, and tree heights were measured by a Häglof vertex device. For accurate measurement, each tree was measured three times from different places, whereas the distance was equivalent to at least the height of the trees. Final DBHs and heights of the trees were calculated as the arithmetic mean of all measures. A total of 118 trees were measured with the average height of 31 m (max = 36 m, min = 22 m), and the mean DBH 38.2 cm (max = 54.5 cm, min = 18.5 cm) (Table 1). All surveyed positions correspond with the closest point at the base of the stem. The characteristics of measured trees are presented in Table 1. During the geodetic survey, the locations of altogether 10 ground control points (GCP) were also measured, located across the plot in places with low canopy density for the alignment with the UAS-based model. We used white metal targets with size of 20 cm × 20 cm with a black cross (size 5 cm × 5 cm) for good recognition on images both from AP and TP (Figure 2).
Aerial Image Acquisition Aerial images were acquired by a multirotor unmanned aerial system (UAS) hexacopter DJI S800 Spreading Wings (DJI, Shenzhen, China) with a Sony NEX 5R camera (Sony Corporation, Tokyo, Japan) with a Sigma lens with a fixed focal length of 19 mm. The sensor size of the camera is APS-C (24 mm × 16 mm) and the resolution is 16 megapixels (4912 × 3264 pixels). Aerial photographs were taken at midday on 10 July 2015 from a height of 120 m. Data processing of a total of 210 images was carried out in AGISOFT Photoscan 1.2.5 software (Agisoft LLC, St. Petersburg, Russia) into the orthophoto with resolution of 1 cm and a dense point cloud that consists of more than 18 million points (Figure 3). The AGISOFT Photoscan Professional software, which implements modern SfM algorithms, was used to generate a point cloud from the RGB photographs. Detailed descriptions of the Photoscan workflow can be found, for example, in [20,21]. Terrestrial Photogrammetry Terrestrial photogrammetry was carried out by the same camera placed in a special holder on a measuring pole at a height of 4 m pointing forward. The holder was designed so that the axis of the camera forms an angle of approximately 45 degrees with the vertical axis (Figure 4). The image shooting was done based on continuous capturing of images during crossing through the forest with a speed of approximately 5 km per hour with the interval of one second per captured image. This procedure ensured high overlap of images, thus, most locations were covered by more than nine images. For referencing, the same GCPs as for aerial images were used. In this way, 1774 images altogether were collected and further processed into a dense point cloud of more than 79 million points (Figure 5).
Data Processing The automatic classification procedure in AGISOFT PhotoScan consists of two steps. At the first step the dense cloud is divided into cells of a certain size. In each cell the lowest point is detected. Triangulation of these points gives the first approximation of the terrain model. At the second step a new point is added to the ground class, providing that it satisfies two conditions: it lies within a certain distance from the terrain model, and the angle between the terrain model and the line connecting this new point with a point from the ground class is less than a certain angle. The second step is repeated while there still are points to be checked.
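As a rough illustration of this two-step procedure, the sketch below implements a simplified version in Python. It is not the Photoscan implementation: the cell size, distance and angle thresholds are placeholder values, and the distance/angle test is made against the nearest already-classified ground point rather than against a triangulated terrain model.

```python
import numpy as np

def classify_ground(points, cell=5.0, max_dist=0.3, max_angle_deg=15.0, max_iter=50):
    """Simplified two-step ground classification (illustrative only).

    points : (N, 3) array of x, y, z coordinates.
    Step 1: the lowest point of each grid cell seeds the ground class.
    Step 2: non-ground points are added iteratively when they are close to
    the current ground set both in height and in angle.
    """
    # Step 1: lowest point per planimetric cell
    ij = np.floor(points[:, :2] / cell).astype(int)
    ground = np.zeros(len(points), dtype=bool)
    for key in {tuple(k) for k in ij}:
        idx = np.where(np.all(ij == key, axis=1))[0]
        ground[idx[np.argmin(points[idx, 2])]] = True

    # Step 2: iterative densification of the ground class
    for _ in range(max_iter):
        added = False
        g = points[ground]
        for i in np.where(~ground)[0]:
            # nearest classified ground point as a crude terrain proxy
            j = np.argmin(np.linalg.norm(g - points[i], axis=1))
            dz = points[i, 2] - g[j, 2]
            dxy = np.linalg.norm(points[i, :2] - g[j, :2]) + 1e-9
            angle = np.degrees(np.arctan2(abs(dz), dxy))
            if abs(dz) < max_dist and angle < max_angle_deg:
                ground[i] = True
                added = True
        if not added:
            break
    return ground
```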
After processing and classification into ground and non-ground points in AGISOFT Photoscan, both (aerial and terrestrial) point clouds were exported to LAS 1.3 format (LASer File Format for Exchange of ASPRS-The Imaging and Geospatial Information Society) for subsequent processing in ArcGIS 10.3 software (ESRI, Redlands, CA, USA), with the Spatial Analyst extension, and LAStools (Rapidlasso GmbH, Gilching, Germany) (Figure 6). The aim of the data processing was automatic identification of tree positions and automatic detection of tree DBH and tree heights. First, we processed the data from TP, with ground points used for digital terrain modelling and the non-ground points serving for the modelling of tree stems. After that it was possible to create the horizontal cross-section of the point cloud at breast height above terrain with the LASheight tool. A DTM created from classified ground points was used as reference. Due to sufficient point density of the point cloud, only a very thin cut with a height interval of 10 mm was used (all points in the interval from 1.29 m to 1.30 m above terrain were chosen). Further, the cross-sections in LAS format were converted into a geodatabase feature class and a buffer zone with the diameter of 10 cm was created around each point with subsequent merging into one feature. After breaking down into single isolated features, all segments were assigned a unique ID of separated polygons. This procedure enabled us to get relevant point cuts of individual trees (point clusters), which were subsequently delineated by circumscribed circle or convex hull methods and, as a result, they gave us the approximate shape of stem cross-sections. By transferring the centers of gravity of these elements, horizontal centers of the trees were obtained as a new point layer. Automated Delineation of Stem Circumference and Diameters We tested two automatic methods for delineation of stem circumference: circle fitting and convex hull.
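The two delineation ideas can be sketched as follows, assuming a stem cross-section has already been extracted as a 2D point cluster. The algebraic circle fit and the SciPy convex hull stand in for the ArcGIS geoprocessing actually used in the study; function names and the example stem are illustrative only.

```python
import numpy as np
from scipy.spatial import ConvexHull

def dbh_circle_fit(xy):
    """Algebraic (Kasa) least-squares circle fit; returns the diameter in
    the units of the input coordinates."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return 2 * np.sqrt(c + cx ** 2 + cy ** 2)

def dbh_convex_hull(xy):
    """Convex hull of the cross-section; the diameter is derived from the
    hull perimeter assuming a roughly circular, fully covered stem."""
    hull = ConvexHull(xy)
    perimeter = hull.area  # for 2D input, ConvexHull.area is the perimeter
    return perimeter / np.pi

# Example: a noisy 40 cm stem imaged from one side only (partial arc)
rng = np.random.default_rng(0)
theta = rng.uniform(0, 1.5 * np.pi, 400)
pts = 0.20 * np.c_[np.cos(theta), np.sin(theta)] + rng.normal(0, 0.003, (400, 2))
print(dbh_circle_fit(pts), dbh_convex_hull(pts))
```

With partial coverage the circle fit still recovers the diameter, while the hull-based estimate degrades, which mirrors the trade-off described in the text.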
First, we conducted the processing of stem cuts by means of a regular circle. This method allows processing of the point cloud into a circle even in the cases when the whole stem is not covered by images from all sides. The resulting accuracy of this method can be negatively influenced by the irregularity of the tree stem (Figure 7). The second method for DBH delineation is an algorithm called the convex hull method. Formally, when we have a set of points X, the convex hull may be defined as the intersection of all convex sets containing X or as the set of all convex combinations of points in X. With the latter definition, convex hulls may be extended from Euclidean spaces to arbitrary real vector spaces; they may also be generalized further, to oriented matroids [22]. It creates a connected polygon of points and, in the case of high density data, it can describe the whole shape of the stem in a tree cut with very low generalization. Tree Height Assessment A terrain DTM was extracted from the minimum elevation data in the TP point cloud and a DSM was extracted from the maximum elevation data in the AP point cloud. Based on the difference of DSM and DTM, we calculated a canopy height model (CHM) expressing the height of the objects above ground. Heights of trees were assigned to a tree point layer by zonal statistics as local maxima of tree canopies conducted by the inversed watershed segmentation of CHM [23].
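A schematic NumPy version of the CHM step is given below; the moving-window local-maximum search is only a stand-in for the inverse watershed segmentation and zonal statistics used in the paper, and the window size and minimum height are placeholder values.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def canopy_height_model(dsm, dtm):
    """CHM = DSM - DTM, clipped at zero; both rasters share the same grid."""
    return np.clip(dsm - dtm, 0, None)

def treetop_heights(chm, window=9, min_height=5.0):
    """Candidate treetops as local maxima of the CHM.

    A pixel is kept if it equals the maximum of its neighbourhood and
    exceeds a minimum height threshold.
    """
    local_max = maximum_filter(chm, size=window) == chm
    tops = np.argwhere(local_max & (chm > min_height))
    return [(int(r), int(c), float(chm[r, c])) for r, c in tops]
```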
Stem Volume Calculation Stem volumes were calculated by an allometric equation for Norway spruce according to Petráš and Pajtík [24] (Equation (1)): V = 0.000031989 × (d + 1)^1.8465 × h^1.1474 − 0.0082905 × (d + 1)^−1.0204 × h^0.8961 (1), where d is the DBH and h is the tree height. Our method was transformed into a model for automatic calculation based on AP and TP photogrammetric point clouds as a source layer with final results in the form of a tree point layer with attributes of height, DBH, and tree volume. The detailed workflow is shown in Figure 8. Accuracy Assessment According to the report from AGISOFT Photoscan, a horizontal precision of 0.034 m and a vertical precision of 0.011 m calculated on GCPs was reached during the processing of AP data. After classification of ground points and noise points, the final point cloud with average density of 1890 points per square meter was created. In the case of TP, we reached a horizontal precision of 0.028 m and a vertical precision of 0.004 m, with an average density of 10,385 points per square meter (Table 2). In the first part, we assessed the accuracy of terrestrial photogrammetry for digital terrain modeling. Thanks to the previous geodetic survey of tree locations we could evaluate the vertical accuracy of the created digital terrain model (DTM). This validation was carried out by comparison of field-measured geodetic points (position of trees) with the DTM interpolated from classified ground points from TP, using the elevation difference Z_error = Z_geo − Z_DTM (Equation (2)), where Z_geo is the elevation of geodetic points and Z_DTM is the elevation taken from classified ground points from TP interpolated into raster DTM. Results of this assessment were statistically evaluated using basic parameters (min, max, mean, standard deviation, and root mean square error), with RMSE = sqrt(Mean_Zerror^2 + SD_Zerror^2) (Equation (3)),
where RMSE is the root mean square error, Mean_Zerror is the mean, and SD_Zerror is the standard deviation of all values. The final calculated DBHs, tree heights, and tree stem volumes were compared with field measurements of trees for the agreement of accuracy of automatic forest stand inventory and, again, statistically evaluated by basic statistic parameters analogous to elevation in the previous step. Accuracy of TP for Tree Detection Based on the comparison of geodetic surveys, we found out that TP-based tree positions reached the positional accuracy with RMSE lower than 0.5 m with a maximum of almost 1 m (Table 3). This inaccuracy could be caused by the method of geodetic surveys, because all surveyed positions correspond with the closest point at the base of the stem, while the automatically-detected trees from TP were identified directly by their centers from the point cloud. In terms of assessing the number of trees per hectare or stand, and the overall structure of the forest, these errors are negligible. Accuracy of TP for Terrain Modeling Secondly, we evaluated the accuracy of DTM from terrestrial photogrammetry in comparison with the geodetic survey. We compared 118 positions from geodetically measured terrain points (measured locations of trees) with the DTM created from ground points from TP. The results in Table 4 show sufficient accuracy for terrain modelling with a Mean Error of 0.002 m, a standard deviation of 0.065 m and the same RMSE of 0.065 m. These minor deviations show the high quality of DTM generated from TP. Accuracy of Height, DBH, and Tree Stem Volume Estimations Results of automatic processing of the photogrammetric point cloud show a high precision for the estimation of DBH and the height of trees, as well as for further calculation of tree stem volumes (Table 5). In the case of tree heights, we reached an overall RMSE of 1.016 m with the mean value of −0.152 m, which indicates a slight disagreement with tree heights obtained from the UAS point cloud. Despite the best efforts in tree height's ground measurement, the error on the ground may be comparable to, or even larger than, the mean value we encountered as the difference. In the case of DBH, very good results were achieved using both of the methods for modelling the stem perimeter. The convex hull method showed results twice as accurate as the circumscribed circle (Table 5). The accuracy of less than 1 cm in the DBH is more than sufficient as it is within the tolerance of caliper measurement. The calculation of stem volume is highly influenced by the used equation (Equation (1)), but the obtained RMSE in comparison with field-measured data of less than 0.1 cubic meter is also sufficient. Figure 9 shows the relation between the data measured by the photogrammetry method in comparison with the ground truth. It can be seen that the measured heights appear slightly larger than the heights from photogrammetry. However, the RMS error is relatively small (1 m). The error of one meter is, as mentioned before, of rather little importance in forest inventory in the Czech Republic and, on the other hand, the ground truth equipment for height measurement may have an error of comparable size. Figure 9 shows there is no fluctuation of this error (e.g., no difference in smaller and larger trees), so we believe there is no systematic error, neither in AP nor TP data, and the error itself is a combination of the errors in both measurements.
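The volume equation and the error statistics can be combined in a short sketch such as the following. The assumed units (d in cm, h in m, V in m³) are a common convention for this family of volume equations but should be verified against [24]; variable names are illustrative.

```python
import numpy as np

def stem_volume(d_cm, h_m):
    """Stem volume from Equation (1) as reconstructed above; the assumed
    units (d in cm, h in m, V in m^3) should be checked against [24]."""
    return (0.000031989 * (d_cm + 1) ** 1.8465 * h_m ** 1.1474
            - 0.0082905 * (d_cm + 1) ** -1.0204 * h_m ** 0.8961)

def error_stats(reference, estimate):
    """Mean error, standard deviation, and RMSE as in Equations (2)-(3)."""
    err = np.asarray(reference) - np.asarray(estimate)
    mean_e, sd_e = err.mean(), err.std(ddof=1)
    rmse = np.sqrt(mean_e ** 2 + sd_e ** 2)
    return mean_e, sd_e, rmse

# e.g. compare photogrammetric DBH estimates against caliper measurements:
# mean_e, sd_e, rmse = error_stats(dbh_field_cm, dbh_convex_hull_cm)
```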
Figure 10 shows the relation of results obtained by photogrammetry compared with data from ground truth for the two studied scenarios: convex hull and circumscribed circle. The circumscribed circle shows lower accuracy (higher RMSE) mostly due to the above-mentioned irregularity of stem horizontal cross-section and the occlusion problems. Similarly, as in the height estimation, the error is constant along the diameters and the overall accuracy is very good (acceptable for management inventory purposes). Figure 11 shows the behavior of the error for the stem volume estimations for the ground truth data and for both techniques for the estimation of DBH. As expected, the results are very similar to the diameter estimation error behavior and follow the same pattern.
In total, processing of the terrestrial point cloud by convex hull achieves more than double the precision compared to a circumscribed circle. The complete inventory of the forest stand with the area of 0.5 hectare with 118 trees, including aerial and terrestrial field survey, took less than two hours (one hour for GCPs measurement, half an hour for aerial data capturing and about 15 min for terrestrial photogrammetry). Subsequent computer processing (practically not requiring human intervention) took 24 h. Discussion This study has demonstrated the ability of combined aerial and terrestrial photogrammetry for automatic calculation of tree parameters specially targeted to forest management inventory purposes. Our intention was to test the possible use of a common commercial digital camera for the creation of 3D models capable of providing high-resolution realistic point clouds which can be used for the delineation of individual trees and the estimation of common inventory parameters, such as the number of trees per hectare, height, and the diameter of individual trees. The combination of both methods provided excellent characterization of the stand terrain compared to geodetic measurements (RMSE of 0.065 m, which was possible because of a lack of dense vegetation) and very good characterization of individual trees in their positions (RMSE of 0.463 m), tree heights (RMSE of 1.016 m), and DBH (RMSE of 0.911 cm for convex hull and 1.797 cm for the circumscribed circle method). When comparing stem volumes calculated on the basis of the equation (Equation (1)), we also achieved very good results (RMSE of 0.082 m³ for convex hull processing and 0.180 m³ for circumscribed circle processing) (Figure 10). The calculation of stem volume is based on empirical equations and regression models that are location and forest stand specific [24][25][26]. The final accuracy is highly influenced by the vertical shape of the stem and it is presented only to illustrate the difference between the methods. The equation used [24] is the most suitable one, because it was primarily calculated for the territory of Central Europe. We found only a small correlation between the absolute DBH error and the deviation from the circular shape. In the case of multidimensional regression, it was found that DBH error changes depending both on DBH (increases with the size of DBH) and on the deviation of the stem shape from the regular circular shape. At trees with a regular circular shape the error is relatively constant, unchanging with the diameter of the stem and mostly positive. Contrary to this, at trees with an irregular shape this error steeply decreases with diameter and is mostly negative (Figure 12).
Apart from the mentioned accuracy, the method of combined aerial and terrestrial photogrammetry for forest inventory is very time-efficient in terms of field work and can reduce labor costs. In comparison with field measurement, we can reach sufficient accuracy in heights of trees, almost the same accuracy in DBH and, furthermore, we can get information about tree positions for forest structure modeling. However, the method can add some more time and money costs in terms of data processing, because it increases the requirements on software and hardware equipment. After an optimization process, this methodology could serve as a good tool for fast forest inventory even in a commercial sector. Possible drawbacks for the deployment of this methodology in practice may be the rich vertical structure of the forest stands, where it may be difficult to distinguish the points belonging to the stem from the points belonging to the regeneration or understory vegetation. Such problems may be solved by spectral-based classification of the pixels (point clouds). In temperate climate zone mixed forests rich in species, it may be also possible to use this kind of classification for species detection. Another drawback may be the occlusion of reference points in very dense canopies. The proposed method is targeting the mature stands before harvest where some canopy openings are expected, but in case of a fully closed upper crown layer these may become invisible. The possible solutions, like placement of points outside the canopy, trying to enhance the visibility of those in the stand, may be the subject of further research in this area.
Conclusions In this work we presented a novel method for the determination of management inventory parameters using the combination of aerial and terrestrial SfM photogrammetry. This methodology may be a cheap and fast alternative for terrestrial measurement, with special focus on quick and precise determination of stem attributes in high-value timber class stands. We found the accuracy for height estimation to be highly precise, with a RMSE of 1 m, and for DBH estimation to be 0.9 cm and 1.8 cm for the two tested methods of convex hull and circumscribed circle, respectively. The two methods were selected as the most practical to be applied for the automatic shape detection. The method of convex hull, however, appears to be more precise and, so, it is recommendable for the point fields resulting from this kind of data acquisition.
Figure 1. Location of the Mariabrunn research plot within the Czech Republic. Figure 2. Design of the GCP. Figure 3. Ortophotos from the UAS (A) and terrestrial photogrammetry (B), with locations of GCPs. Figure 5. Detailed 3D view on point cloud from TP. Figure 6. 3D view of point clouds from terrestrial and aerial photogrammetry. Figure 11. Calculated stem volumes from field measured parameters × automatic processing: (a) convex hull; and (b) circumscribed circle. Figure 12. Surface plot of the DBH error, field-measured DBH, and deviation from the circle shape. Table 1. Characteristics of measured trees. Table 2. Properties of the point clouds produced from AP and TP. Table 3. Comparison of locations of measured trees from geodetic survey and from the automatic detection of stems. Table 4. Comparison of DTM accuracy from geodetically-measured points and from TP. Table 5. Comparison of field measurement with calculated tree heights, DBH, and stem volumes.
10,984.4
2016-07-30T00:00:00.000
[ "Environmental Science", "Mathematics" ]
Comparative study of ensemble models of deep convolutional neural networks for crop pests classification Pest infestations on wheat, corn, soybean, and other crops can cause substantial losses to their yield. Classification of crop pests is of considerable importance for accurate and intelligent pest control. Ensemble models can effectively improve the accuracy of crop pests classification and different ensemble models can produce different results. To study the advantages and disadvantages of ensemble models under different agricultural production environments, six basic models are trained on a D0 dataset. Then, three models with the best classification performance are selected. Finally, the ensemble models, i.e., linear ensemble named SAEnsemble and nonlinear ensemble SBPEnsemble, are designed to combine the basic models for crop pests classification. The accuracies of SAEnsemble and SBPEnsemble improved by 0.85% and 1.49% respectively compared to the basic model with the highest accuracy. Comparison of the two proposed ensemble models shows that the accuracy of SAEnsemble is lower when the pests are very similar to the background, while SBPEnsemble is more likely to make wrong predictions under the condition of huge difference between pests and background. Therefore, in practical agricultural production, choosing linear or nonlinear ensemble methods according to different situations can effectively improve the accuracy of crop pests classification. crop production losses. Once pests develop in fields, only timely diagnosis by farmers can enable effective treatment. Pest prevention methods depend on the species of pest, but sorting pest species manually is cumbersome and inefficient because of the high similarity and complex structure among pests. Traditional pest classification methods include the use of the K-means clustering algorithm, which requires the manual extraction of insect image features. This is time-consuming when the dataset is large [10]. As relevant information features are extracted from images manually, the learning system is not automated [4]. In another study, a framework that classified leaf diseases of five diverse plants using image processing and machine learning algorithms achieved an accuracy of 83-94% [1]. Traditional pest classification methods rely on manual operation and are generally used for small labels and dataset of small categories. In the face of pest classification, these methods appear increasingly inefficient and miscellaneous. With the rapid development of deep convolution networks in recent years, researchers have begun to use CNN to develop image classification systems for pest classification. CNN can learn features at all locations in an image, and the shallow layers can learn simple features, while the deep layers can learn complex features, so CNN show better performance in the task of pest classification. To improve the accuracy of CNN, deepening the network is commonly used. But as the network deepens, the CNN model can be rapidly large and consume a lot of memory resources and storage resources. To propose simple and efficient methods, ensemble learning [29] is proposed. Ensemble learning accomplishes learning tasks by combining different models, which can be used for classification problems, regression problems, feature selection, outlier detection, etc. Ensemble learning is a technical framework that combines models along different methods to achieve better results. 
It consists of two main steps, the first step is to choose different models, and the second step is to choose a combination method and then combine these models into a new model. There are many methods of Ensemble learning, including hard voting method, soft voting method, learning method, etc. The learning methods are divided into two main categories, linear learning, and nonlinear learning. Linear learning focuses on learning the weights of each model, representing the relationships between the models. While nonlinear learning takes the output of different models as the input of a new model for training, representing the internal relationships of the models. Therefore, with the basic models unchanged, ensemble models using the linear learning method and ensemble models using the nonlinear learning method will produce different results, according to which the different results can guide actual agricultural production. The basic models used in this paper are Xception, InceptionV3, and MobileNetV2. The key contributions of the proposed work are as follows: 1. Two Ensemble models are proposed, SBPEnsemble, a linear ensemble model, and SAEnsemble, a nonlinear ensemble model. 2. Ensemble models are comprehensively compared, advantages and disadvantages of the two ensemble models in agricultural production are summarized. 3. The classification performance of basic models and the applicable pest species are compared. The rest of this paper is organized as follows: Section 3 introduces the dataset and methods used in the current work. Section 4 presents the experimental results, advantages and disadvantages of the two proposed ensemble models. Section 5 compares the two proposed ensemble models with other studies. Finally, Section 6 presents some conclusions and future prospects. Literature review There has been a lot of research dedicated to the classification of insect pests. GoogLeNet model is used to classify ten common crop pests and achieved an increase of 6.22% in accuracy compared to the most advanced method [28], the model can extract features at different scales. Thenmozhi and Reddy [26] built a CNN manually, compared this model with other advanced CNN pest classifiers, and analyzed the influence of transfer learning. Many methods are used to enhance the data to improve CNN performance and tested on two pest dataset [20], these data enhancement methods greatly improve the robustness of the model. The faster region-based CNN (Faster R-CNN) is trained to detect the location of lesions on leaves, its classification accuracy of 7 types of tea tree insect pests was 89.4% [17]. Fast R-CNN resolves two problems of R-CNN, numerous candidate frames to be computed and the slow training speed. Its training speed is 8 times faster than R-CNN while improving the accuracy. VGG16 and InceptionV3 are used to detect and identify rice pests, a two-stage model derived from fine-tuning was proposed, which also achieved good results [7]. Fangyuan et al. [11] proposed a cascade pest-classification method based on two stages framework, and a context-aware attention network was constructed to classify the pest images. Both of the above methods combine different models, making reasonable use of the advantages of different models. An important experiment is conducted in which models were trained using images obtained under experimental conditions and field conditions [2]. 
This method compares the accuracy of CNN models under different conditions, and puts the scenario of CNN usage in a natural scene, proving that no matter how accurate the model was, the accuracy would be sharply reduced by nearly 50% when applied to the actual production situation. Many algorithms in pest classification are based on the following models: Xception, InceptionV3, Vgg16, Vgg19, MobileNetV2, Resnet50, and SqueezeNet [13,19,21,22,24,30,34]. These models usually have different network structures and characteristics for learning different features and the generalization ability of the model also differs, which leads to the differing classification performance of different models for different classes of crop pests. It is a good scheme to ensemble CNN models to create a system with high predictive capacity. An ensemble model based on a genetic algorithm is proposed to improve the classification accuracy of a basic model for multiple types of insect pests [3]. In recent years, transformer architectures have changed many areas of deep learning. It was first proposed in [27], which is an architecture based on attention mechanisms that completely dispenses with convolution operations. Influenced by this, Visual Transformer(ViT) was proposed, a transformer architecture inspired by natural language processing (NLP) tasks. After experiments, it was found that this method outperformed deep convolution-based networks in terms of training time and structure [9], but the network also had the problem of having too many parameters, which made it difficult to be directly ported to mobile devices for agricultural production. To solve the problem of too many parameters, a new deformable self-attentive module was proposed, in which the location of key sum-value pairs can be selected in a data-dependent manner, and this method has reduced the number of ViT parameters to some extent and achieved good results in the field of pest classification [31]. The merits and demerits of existing works are listed in Table 1. Dataset In this paper, the dataset is a publicly available D0 dataset that consists of 40 kinds of crop pests. All 4508 RGB images having resolutions of 200 × 200 were proposed [33]. The corresponding numbers and label of each class are listed in Table 2. The D0 dataset was divided into three groups: training, verification, and testing, with 3151 images being used for training, 378 for verification, and 943 for testing. The D0 dataset contains classes that are common such as the Halyomorpha halys and Pieris rapae, classes that have Obvious contours such as Sesamia inferens, classes that have unique features such as Eurydema domulus, two very similar classes such as Corythucha ciliat and Corythucha marmorata. Samples of 10 classes are shown in Fig. 1. Methods To improve accuracy compared to the basic model and to select a suitable ensemble model under different situations. This paper contains the following two main processes. First, six basic models are pre-trained by using the transfer learning on the D0 dataset, and three models having the best performance are selected. Then, the two ensemble models are designed to investigate crop pest classification and compare them to three traditional ensemble models. Figure 2 shows the framework of the comparative study. Transfer learning Transfer learning is an emerging family of machine learning techniques and has been actively studied in machine learning and AI communities in recent years [16]. 
It aims to solve new learning tasks using fewer examples by exploiting information gained from solving related tasks [25]. There are two transfer learning methods used here. In the first, all layers of the pre-trained model are frozen except the layers added by the researchers. In the second, only part of the layers of the pre-trained model is frozen, namely the convolution layers close to the input, so that a large amount of low-level information is retained. In this comparative study, Vgg16, Vgg19, and Resnet50 use the first method, while InceptionV3, Xception, and MobileNetV2 use the second method. Xception Xception uses depth-wise separable convolution instead of the traditional convolution operation, which helps maintain classification accuracy while reducing the number of parameters and the amount of computation [6]. This paper freezes the first 120 layers of Xception and adds three fully connected layers containing 512 neurons. The activation function is ReLU [14], which is used to introduce non-linearity, and a Softmax activation function is used in the final output layer to classify the pests. Figure 3a shows the model structure. It can be seen that many layers of Xception are frozen, which retains its ability to extract shallow features. On this basis, a few fully connected layers are added, and the number of neurons is reduced from 512 directly to 40, skipping intermediate sizes such as 216 and 108, which reduces the computation. InceptionV3 InceptionV3 is an implementation of GoogLeNet that is able to deconstruct features into smaller convolution sections [35]. Its network can be efficiently decomposed into small convolution kernels, which greatly reduces the number of parameters of the model and the chance of overfitting [18]. This paper freezes the first 270 layers of InceptionV3 and adds three fully connected layers containing 512 neurons. The corresponding activation function is ReLU, and a Softmax activation function is used in the final output layer to classify the pests. The specific structure is shown in Fig. 3b. MobileNetV2 For MobileNetV2, the fast execution speed makes experimenting and parameter tuning much easier, while the low memory consumption is a desirable quality in the context of an ensemble of networks [5]. This paper freezes all layers of MobileNetV2 and adds two fully connected layers containing 512 neurons. The model structure is given in Fig. 3c. All layers of MobileNetV2 are frozen because MobileNetV2 is a model with very few parameters, which is suitable for the pest classification task in this paper. Ensemble models In this paper, the ensemble models are based on the voting method [15], which is divided into two kinds. One is hard voting, which includes one ensemble model: each basic model votes for the class with its highest prediction probability, and each model has one vote. When the voting results of the three models all differ, the result of the model with the highest accuracy on the verification set is used. The second is soft voting, which includes three ensemble models. The first takes the average of all models' predicted probabilities for each class and selects the class with the highest averaged probability as the result. 
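As a concrete illustration of the frozen-base setup described above for Xception (first 120 layers frozen, fully connected head, 40-way softmax output), a minimal Keras sketch is given below. This is not the authors' released code: the 200 × 200 input size matches the D0 images, while the pooling choice, optimizer, and learning rate are illustrative assumptions.

```python
# Minimal sketch of the frozen-base transfer-learning setup described above
# (Xception base, first 120 layers frozen, fully connected head, 40 pest classes).
# Input size, pooling, and optimizer settings are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 40  # number of pest classes in the D0 dataset

base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False,
    input_shape=(200, 200, 3), pooling="avg")

# Freeze the first 120 layers so that low-level features are retained.
for layer in base.layers[:120]:
    layer.trainable = False

x = base.output
for _ in range(3):                       # three fully connected layers of 512 neurons
    x = layers.Dense(512, activation="relu")(x)
out = layers.Dense(NUM_CLASSES, activation="softmax")(x)  # final classification layer

model = models.Model(inputs=base.input, outputs=out)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
```

The same pattern applies to InceptionV3 (first 270 layers frozen) and MobileNetV2 (all layers frozen), with only the frozen-layer count and the size of the added head changing.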
This simple averaging has the disadvantage that the accuracy of each model is not considered, so a model with low accuracy may have an outsized impact on the result. To solve this problem, the second soft-voting ensemble model is proposed. This ensemble model takes the accuracy of the three models on the verification set as the weights and linearly multiplies the weights with the predicted results. In this way, models with high accuracy are assigned higher weights. However, the accuracy of a model is not necessarily the best weight. To obtain the best weights, the linear ensemble model based on simulated annealing, SAEnsemble, is proposed; it is the third soft-voting model. Linear ensemble Simulated annealing [32] is a stochastic optimization algorithm derived from the hill-climbing algorithm. Its starting point is the similarity between the annealing process of solid materials in physics and combinatorial optimization problems. It starts from a relatively high initial temperature; as the temperature parameter decreases, probabilistic jumps allow the search to escape local optima and eventually approach the global optimum of the objective function in the solution space. In this paper, the initial solution space consists of three random values between 0 and 1, one weight per basic model, and the corresponding initial objective function value y_max is obtained. At each step it is checked whether T is higher than Tmin: if so, the loop continues; if not, the loop terminates. Inside the loop, each new solution is perturbed up and down within a range of 0.05 × 0.025 × T around the old one. After new solutions are obtained, it is checked whether they all lie between 0 and 1. If they do, the objective function value yNew is calculated. If yNew is greater than y, the new solution space is accepted and then compared with the previous maximum objective function value y_max; if it is greater, the maximum objective function value and the corresponding solution space are updated. If yNew is smaller than the previous y, then Eq. (1) determines probabilistically whether the solution space is updated or not. This process is repeated until the optimal objective function value and the optimal solution space are attained. The steps described above are shown in Fig. 4, which illustrates the process of obtaining the best weights in SAEnsemble; a minimal sketch of this weight search is also given below. Nonlinear ensemble The linear ensemble assigns a weight to each basic model directly, which ignores the important nonlinear relations inside the outputs of the basic models. A nonlinear ensemble instead takes the outputs of the basic models as the input of another new network for training. In this comparative study, each basic model outputs the prediction probabilities of 40 classes at one time, so the three models output 120 prediction probabilities in total, which are taken as the inputs of a Back Propagation (BP) neural network. This network uses a loss function to measure the distance between the probability distribution predicted by the network and the ground-truth probability distribution. The sample loss is reduced by increasing the output probability at the position corresponding to the target value. The network iterates repeatedly to obtain the optimal solution, namely the 40 best class predictions for an image. This ensemble model is called SBPEnsemble. 
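The following is a minimal, hedged sketch of the simulated-annealing weight search behind SAEnsemble described above. The perturbation width, cooling factor, and the use of validation accuracy as the objective are assumptions made for illustration and are not taken from the original implementation; the acceptance rule follows the standard Metropolis form referred to in the text as Eq. (1).

```python
# Minimal sketch of a simulated-annealing search for the three ensemble weights.
# Cooling schedule, perturbation width, and objective are illustrative assumptions.
import numpy as np

def accuracy(weights, probs, labels):
    """Validation accuracy of the linearly weighted ensemble.
    probs: array of shape (3, n_samples, 40) with per-model class probabilities."""
    combined = np.tensordot(weights, probs, axes=1)      # (n_samples, 40)
    return np.mean(np.argmax(combined, axis=1) == labels)

def sa_search(probs, labels, T=1.0, T_min=1e-3, cooling=0.95, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.random(3)                                     # three weights in (0, 1)
    y = y_max = accuracy(w, probs, labels)
    best_w = w.copy()
    while T > T_min:
        w_new = w + rng.uniform(-1, 1, 3) * 0.05 * T      # perturb around old solution
        if np.all((w_new >= 0) & (w_new <= 1)):           # keep weights inside [0, 1]
            y_new = accuracy(w_new, probs, labels)
            if y_new > y or rng.random() < np.exp((y_new - y) / T):  # Metropolis acceptance
                w, y = w_new, y_new
                if y > y_max:                             # track the best solution so far
                    y_max, best_w = y, w.copy()
        T *= cooling                                      # cool the temperature
    return best_w, y_max
```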
The first layer of SBPEnsemble is an input layer of 512 neurons, with 120-size onedimensional data input. It randomly throws away half of the data. The second layer contains 512 neurons, regularized by L2, and the activation function is ReLU. It also randomly throws away half of the data. The third layer, i.e., the output layer, contains 40 neurons and uses a linear activation function. Figure 5 shows the specific structure. Experimental setup and training processes Before the image is input into the model, it is also subjected to data enhancement using image processing methods such as rotation, scaling, and mirroring. The experimental environment is 3080TI of the Ubuntu operating system, using Python, Keras, and OpenCV learning framework. InceptionV3 has the most parameters, and it takes the longest time to train. The training times of a basic model are listed in Table 3. SAEnsemble is based on the simulated annealing algorithm, its computational complexity O(n) depends on the size of the input sample m, the number of nodes N, which is a finite constant. SBPEnsemble is based on BP neural network, Fig. 4 Process of obtaining the best weights for the models through simulated annealing and its computational complexity O(n) depends on the size of input samples m and the number of neural nodes n. After comparing the computational complexity of the two algorithms, the computational complexity of SAEnsemble grows faster than that of SBPEnsemble, so SAEnsemble requires a longer runtime. It can also be seen from the runtime that SAEnsemble is about three times that of SBPEnsemble. In this comparative study, three basic models were used for transfer learning, two ensemble models were used for training. Each model was trained through 100 epochs. The process of model parameter adjustment is as follows. First, three optimizers, SGD, Adam, and Adagrad, are selected. The accuracy of the three basic models after training is shown in Fig. 6a, it can be seen that the Adam optimizer has the best result. Adam is the most commonly used optimizer with the fastest convergence. Then the learning rate is adjusted. Three different learning rates, 0.0005, 0.001, and 0.002, are used respectively. It can be seen from Fig. 6b that MobileNetV2 is most suitable for a 0.001 learning rate, while InceptionV3 V3 and Xception both use a 0.005 learning rate. The final Accuracy, precision, and recall of all models are shown in Table 4. Performance of the CNN models in pest classification In classes with the highest accuracy of each basic model, it is found that Xception can outperform the other models on some beetles' classes, such as classes 12,17, 23, 24, and 25, see the second line of Fig. 7. To input one image into the three basic models, the Fusion feature map extracted from the last convolution layer of Xception is precisely concentrated on the target region, while MobileNetV2 also concentrates in the target region but the region is more divergent and the learning performance is not as good as Xception, see Fig. 8. However, MobileNetV2 show better performance on some moths' classes, such as class 1,9,20, 26,35, which is shown in the third line of Fig. 7. To input one image into the three basic models, only MobileNetV2 learned the correct area, see Fig. 9. InceptionV3 has the highest accuracy on test sets. The different basic model has different classification performance for pest class, this is one of the reasons to combine the three basic models. The process of the Fusion feature map is shown in Fig. 10. 
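Before the ensemble results are examined, the following minimal sketch shows one way the SBPEnsemble stacking network described above (120 concatenated class probabilities, two 512-unit layers with dropout, 40 linear outputs) could be assembled. The first layer's activation, the dropout rate of 0.5, the L2 strength, and the loss and optimizer settings are illustrative assumptions rather than the authors' exact configuration.

```python
# Minimal sketch of the SBPEnsemble stacking network described in the text:
# 120 concatenated class probabilities -> two 512-unit layers with dropout
# -> 40 linear outputs. Dropout rate, L2 strength, loss, and optimizer are
# illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

def build_sbp_ensemble():
    inp = layers.Input(shape=(120,))                      # 3 models x 40 class probabilities
    x = layers.Dense(512, activation="relu")(inp)         # first 512-neuron layer
    x = layers.Dropout(0.5)(x)                            # "randomly throws away half of the data"
    x = layers.Dense(512, activation="relu",
                     kernel_regularizer=regularizers.l2(1e-4))(x)  # L2-regularized layer
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(40, activation="linear")(x)        # 40 outputs, linear activation
    model = models.Model(inp, out)
    model.compile(optimizer="adam",
                  loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
                  metrics=["accuracy"])
    return model
```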
Table 5 presents the precision, accuracy, and F1 of SBPEnsemble on the test set. The model can fully classify 23 insect pests, with an average classification rate of 96.55%. Table 6 presents the accuracy of SAEnsemble on the test set. The model can fully identify 22 insect pests, with a classification rate of 96.03%. The distributions of the two matrices are roughly the same because both ensembles combine the same three basic models, but there are some differences in performance between the two ensemble models. SAEnsemble comprehensively considers the features extracted by each model to obtain the weights and then carries out a linear ensemble. When the insect pest and the background are very similar, the features extracted by the basic models are not distinctive, leading to classification errors in SAEnsemble, as in Fig. 11. Likewise, when the characteristics of two insect pests are very similar, the features extracted by the three basic models are also similar, which leads to classification errors in SAEnsemble. The feature images extracted from two such classes by the three basic models are shown in Fig. 12. The nonlinear ensemble model SBPEnsemble is a BP neural network that learns from all the outputs of the three models. From a mathematical point of view, it avoids, to some extent, the issues that SAEnsemble faces with indistinct or highly similar features. However, it is slightly weaker than SAEnsemble when there is a large difference between pest and background. In such a case, SAEnsemble can accurately capture the characteristics of the insect pest and classify it correctly, while SBPEnsemble may make wrong predictions, see Fig. 13. Therefore, in a real-world production situation, when the pest has a high degree of similarity with the background, SBPEnsemble can be used to combine the three basic models; when the pest is clearly distinguishable from the background, SAEnsemble may achieve better results. Influence of background on SAEnsemble and SBPEnsemble To further study the impact of the image background, two groups of images are randomly selected from the images with low accuracy. In the first group, the pests are very similar to the background; in the second group, the pests are very different from the background. The classification results of SBPEnsemble and SAEnsemble are given below each image, as shown in Fig. 14. In the first group of images, SBPEnsemble classifies all images correctly, while SAEnsemble correctly classifies only two pictures. In the second group of images, SBPEnsemble makes two wrong classifications while SAEnsemble classifies all images correctly. The experimental results show that the nonlinear ensemble model SBPEnsemble is better than SAEnsemble when the pests are very similar to the background, whereas when the pests are clearly different from the background, the linear ensemble model SAEnsemble performs better. According to the above results, selecting an appropriate ensemble method in agricultural production can effectively improve the accuracy. This section presents result illustrations of the two proposed models and compares them with other studies on the D0 dataset. It also contains a comparison of model training with and without foreground-based enhancement. 
Result illustration of two ensemble models When checking the prediction results of the five models, including the three basic models and the two proposed ensemble models, it is found that even when the predicted values of the three basic models differ, SAEnsemble and SBPEnsemble can obtain the same (correct) results. The results obtained by the ensemble models are not based only on the basic model with the highest accuracy, InceptionV3, but also take into account the two other models with lower accuracy, Xception and MobileNetV2, as shown in the second row of Fig. 15. This shows that although an ensemble model is likely to rely on the model with the highest accuracy, it also learns from the other two models with lower accuracy, thereby obtaining better results, and indicates that the ensemble model is effective. Comparison of the proposed model with existing methods On the D0 dataset, [26] proposed a CNN model with twelve layers that achieved a classification accuracy of 95.97%. Xie et al. [33] created an automated system for crop pest classification. A performance comparison of SAEnsemble, SBPEnsemble, and other methods on the SMALL and IP102 datasets is presented in Table 7. The two proposed ensemble models have the highest accuracy. Compared with other pest classification work, such as GoogLeNet, the two ensemble models proposed in this paper have fewer parameters and higher accuracy. The two-stage model is too complex, so its operability is not as good as that of the ensemble models. Although the transformer has the highest accuracy, it has too many parameters to be deployed on devices, so its practicability is poor. The two ensemble models proposed in this paper have a simple structure, can effectively improve the accuracy over a basic model, and can be deployed on devices. When facing a complex agricultural environment, the appropriate ensemble method can be selected to improve the accuracy of pest classification. Limitations of the ensemble model The two ensemble models cannot achieve high accuracy when the classification accuracy of the basic models is low, as shown in Fig. 16 for classes 5-7, 32, and 35. Classes 5-7 are pests with small targets and no obvious characteristics, while classes 32 and 35 are too similar to classes 18 and 0, respectively. In these cases the classification accuracy of the basic models is very low, leading to poor ensemble results. Whether for the linear ensemble or the nonlinear ensemble, the final result depends on the accuracy of the basic models, which is the biggest limitation of the ensemble method. Conclusion This paper proposed and compared two ensemble models, SAEnsemble and SBPEnsemble, for crop pest classification. The experimental results on the D0 dataset of 4508 crop pest images show that SAEnsemble and SBPEnsemble achieved high accuracy rates of 95.54% and 96.18% respectively, which are 0.85% and 1.49% higher than the best basic model. In addition, the different basic models have different classification performance across pest classes: Xception is suitable for pests of the beetle class and MobileNetV2 is suitable for pests of the moth class. Different ensemble models can be selected according to the actual agricultural production conditions: the linear ensemble is suitable for cases where the pest profile is distinguishable from the background, and the nonlinear ensemble is suitable for cases where the pest profile is similar to the background. Overall, however, the performance of the nonlinear ensemble is better than that of the linear ensemble. 
In the future, this can serve as a guide to aid decision-making by farmers and help in preventing pest transmission. Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
6,217.4
2021-12-10T00:00:00.000
[ "Agricultural and Food Sciences", "Computer Science" ]
Is Solar Wind electron precipitation a source of neutral heating in the nightside Martian upper atmosphere? Solar Wind (SW) electron precipitation is able to deposit a substantial amount of energy in the nightside Martian upper atmosphere, potentially exerting an influence on its thermal structure. This study serves as the first investigation of such an issue, with the aid of the simultaneous measurements of both neutral density and energetic electron intensity made on board the recent Mars Atmosphere and Volatile Evolution (MAVEN) spacecraft. We report that, from a statistical point of view, the existing measurements do not support a scenario of noticeable neutral heating via SW electron precipitation. However, during 3%−4% of the MAVEN orbits for which data are available, strong correlation between nightside temperature and electron intensity is observed, manifested as collocated enhancements in both parameters, as compared to the surrounding regions. In addition, our analysis also indicates that neutral heating via SW electron precipitation tends to be more effective at altitudes below 160 km for integrated electron intensities above 0.01 ergs·cm−2·s−1 over the energy range of 3−450 eV. The results reported here highlight the necessity of incorporating SW electron precipitation as a heat source in the nightside Martian upper atmosphere under extreme circumstances such as during interplanetary coronal mass ejections. Introduction The thermal structure of the dayside Martian upper atmosphere is well known to be driven by solar Extreme Ultraviolet (EUV) heating (e.g. Forbes et al., 2008;Krasnopolsky, 2010), a scenario that is supported by recent measurements made with several instruments on board the Mars Atmosphere and Volatile Evolution (MAVEN) spacecraft . Mean neutral temperatures are readily obtained from the observed scale heights of the total atmospheric density measured by the Accelerometer (Zurek et al., 2017), the densities of various atmospheric constituents measured by the Neutral Gas and Ion Mass Spectrometer (NGIMS) (Mahaffy et al., 2015a;Stone et al., 2018;Cui J et al., 2018), as well as the intensities of distinctive airglow emission features, such as the ultraviolet double emission and the CO Cameron band emission, measured by the Imaging Ultraviolet Spectrograph (Jain et al., 2015(Jain et al., , 2018. The combined analysis of Bougher et al. (2017) confirmed that the temperatures estimated from different data sources are in agreement and reveal pronounced variations with solar EUV irradiance and heliocentric distance. In addition, existing evidence also suggests that the role of solar forcing may sometimes be overwhelmed by other forcing processes, such as atmospheric waves and tides (e.g. Forget et al., 2009;. In the nightside Martian upper atmosphere where the solar radiation is switched off, energetic electron precipitation, primarily of Solar Wind (SW) origin, has been considered as the dominant source of energy deposition. For instance, it has been proposed that SW electron precipitation is responsible for the formation of a patchy and sporadic nightside ionosphere on Mars (e.g. Cui J et al., 2015Cui J et al., , 2019Girazian et al., 2017), that it causes auroral emission (Bertaux et al., 2005), and that it possibly powers nightside atmospheric escape as well (e.g. Zhang Q et al., 2020). SW electron precipitation may also exert an influence on the thermal structure of the nightside Martian upper atmosphere. 
Historically, the thermal structure has not been well constrained by observations on the nightside of Mars (Schofield et al., 1997;Magalhães et al., 1999) and the role of energetic electron precipitation has consequently not been explored in detail. Indeed, no thermospheric global circulation model calculations to date have incorporated electron precipitation as a potential external energy source (e.g. Bougher and Robel, 1991;Bougher et al., 1999Bougher et al., , 2000Angelats i Coll et al., 2005;González-Galindo et al., 2005, 2009a. Fortunately, our knowledge of the thermal structure of the nightside Martian upper atmosphere has been greatly improved over the past few years by the accumulation of MAVEN NGIMS measurements. The recent NGIMS investigation of Stone et al. (2018) revealed a mean nightside temperature of 127 ± 8 K in the Martian upper atmosphere, which was about 50% of the mean subsolar temperature of 260 ± 7 K. In addition, the analysis of Pilinski et al. (2018), also using the NGIMS data, identified several persistent nightside temperature structures near the dusk and dawn terminators of Mars, which were interpreted as caused either by nightside dynamical heating associated with converging and downwelling winds or by terminator waves originating in the lower atmosphere. Complementary to the NGIMS temperature data, the Solar Wind Electron Analyzer (SWEA) on board MAVEN (Mitchell et al., 2016) is able to make simultaneous measurements of energetic electron intensity in the vicinity of Mars, thus allowing the role of SW electron precipitation on nightside thermal structure to be explored for the first time. This serves as the main motivation of the present study. Temperature Neutral temperatures in the nightside Martian upper atmosphere can be readily retrieved from the NGIMS measurements either via isothermal fitting to the neutral density data (e.g. Mahaffy et al., 2015a) or via downward integration of the momentum equation of an atmospheric species (e.g. Stone et al., 2018). The NGIMS is a quadrupole mass analyzer capable of measuring the densities of various constituents of the Martian upper atmosphere and ionosphere over the mass range of 2−150 Da with a mass resolution of 1 Da (Mahaffy et al., 2015b (Fu MH et al., 2020). For comparison, we define the dayside and near-terminator regions as those with SZA < 90° and 90° < SZA < 110°, respectively. The entire dataset contains 1700 dayside orbits, 550 nearterminator orbits, and 570 nightside orbits. For each MAVEN orbit, the available NGIMS density measure-ments allow neutral temperatures to be derived from six individual species: CO 2 , O, N 2 , CO, Ar, and He (Mahaffy et al., 2015a). This requires that the distribution of CO 2 , as the dominant species, be in hydrostatic equilibrium and the distribution of the remaining species be in diffusive equilibrium. For the above reason, He is not considered because it is a relatively light species possibly subject to strong escape (Bovino et al., 2011;Gu H et al., 2020a). In addition, CO is also not considered because it has been argued that the NGIMS CO densities are unreliable due to incomplete knowledge of the relevant cracking patterns (Wu XS et al., 2020). For the remaining four species, CO 2 , O, N 2 , and Ar, their NGIMS densities are likely to be subject to the well-known effect of wall contamination (e.g. Cui J et al., 2009), as displayed in Figure 1, comparing the mean temperatures derived from isothermal fittings to the inbound and outbound density measurements separately. 
The figure clearly suggests that the outbound temperatures are substantially higher than the inbound temperatures for CO 2 and O, a feature that has been attributed to wall contamination rather than realistic variations of the ambient atmosphere (e.g. Fox et al., 2017; Fu MH et al., 2020). In contrast, the inbound and outbound temperatures from N 2 and Ar are nearly identical, indicative of negligible wall contamination. For this study, the NGIMS Ar densities, including both the inbound and outbound measurements, are used to infer the thermal structure of the nightside Martian upper atmosphere. Compared with some previous investigations relying on the outbound NGIMS CO 2 densities only (e.g. Cui J et al., 2018), using the NGIMS Ar densities effectively doubles the available sample size. To get an overview of the thermal structure of the nightside Martian upper atmosphere, we show in Figure 2 the nightside temperature distribution based on isothermal fittings to NGIMS Ar density data from the periapsis altitudes (typically 150 km) up to 200 km during all available MAVEN orbits. The dayside and near-terminator temperature distributions are also shown for comparison, where we treat the dawn and dusk terminators separately. According to the NGIMS measurements, the median temperature in the Martian upper atmosphere declines from 200 K on the dayside to 125 K on the nightside, whereas the dawn and dusk terminator temperatures have comparable mean values of 157 K and 163 K, respectively. The scatter in temperature is smallest, about 20%, on the dayside, increases slightly to about 25% near the dusk terminator and on the nightside, and is largest, about 35%, near the dawn terminator. The mean nightside temperature that we obtain is fully consistent with the value of 127 K reported in Stone et al. (2018). During each MAVEN orbit, two individual neutral temperature profiles can also be derived from the NGIMS Ar data, one from inbound and the other from outbound, via the downward integration of the diffusive equilibrium equation in the form of

\frac{1}{n_{\rm Ar}} \frac{{\rm d} n_{\rm Ar}}{{\rm d} r} + \frac{1}{T} \frac{{\rm d} T}{{\rm d} r} + \frac{GMm}{kTr^{2}} = 0, \qquad (1)

where n_Ar is the Ar number density, k is the Boltzmann constant, T is the neutral temperature, m is the Ar atomic mass, G is the gravitational constant, M is the planetary mass, and r is the planetocentric radius. By way of illustration, Figure 3 displays an example of the nightside Ar density profile obtained during the inbound portion of MAVEN orbit #1600, along with the respective temperature profile derived from Equation (1), assuming an upper boundary temperature of 100 K. Several caveats need to be emphasized: (1) The derived temperature profile depends somewhat on the choice of the upper boundary temperature at 200 km, but previous investigations have indicated that this arbitrary choice has very little impact on the temperatures at one scale height below the upper boundary (e.g. Cui J et al., 2018). (2) The effect of eddy mixing has been implicitly ignored in deriving the temperature profile, which is a reasonable approximation because the homopause altitude on Mars is usually located well below the MAVEN periapsis (e.g. Slipski et al., 2018). (3) Temperature extraction using Equation (1) relies on the assumption that the observed Ar density variation along the spacecraft trajectory is purely vertical, which means that the horizontal variation in the Martian upper atmosphere has been neglected (e.g. Stone et al., 2018). (4) Recently, Leclercq et al. 
(2020) have argued that the procedure of downward integration is likely to fail in the presence of wave activity, but this should not affect the temperatures averaged over a large number of orbits, provided that the phase shifts between different waves are random. In Figure 4, we show the temperature profiles derived from NGIMS Ar density data acquired during the inbound portions of several representative nightside MAVEN orbits. In each panel, the thick solid line corresponds to the case with the upper boundary temperature taken to be the mean temperature at 200−250 km from isothermal fitting, whereas the thin solid lines show the temperature profiles with the upper boundary temperatures shifted by −20 K, −10 K, +10 K, and +20 K, respectively. The figure clearly demonstrates that for each case, a convergency is reached among different temperature profiles at 10 km below the upper boundary. Distinctive wavelike features are clearly seen in the figure, indicative of the omnipresence of gravity waves in the Martian upper atmosphere (e.g. England et al., 2017;Siddle et al., 2019). It is also noteworthy that the temperatures at the lowest altitude obtained from the inbound and outbound data for each orbit are usually not identical despite the fact that they essentially characterize the same atmospheric regions. This must be connected to horizontal variations that have been implicitly neglected in downward integration (e.g. Stone et al., 2018). Finally, we discuss the uncertainties in temperatures derived both from isothermal fitting and from downward integration, which are crucial for a rigorous identification of the potential impact of energetic electron precipitation on the thermal structure of the nightside Martian upper atmosphere. Typically, the temperature uncer- Earth and Planetary Physics doi: 10.26464/epp2021012 3 tainty near and above 190 km is likely to be large and due mainly to the unknown upper boundary temperature, whereas at relatively low altitudes, the temperature uncertainty is 10 K and is dominated by horizontal variations in the Ar density . Correlation Between Energetic Electron Precipitation and Thermal Structure Valuable information on SW energetic electron precipitation in the vicinity of Mars is collected by the MAVEN SWEA, a symmetric hemispheric electrostatic analyzer that measures electron intensity over the energy range of 3−4.6 keV with a resolution of 17% (Mitchell et al., 2016). The instrument is also capable of measuring the arrival direction of energetic electrons within a field of view (FOV) of 360° × 120° of which 8% is blocked by the spacecraft body (Mitchell et al., 2016). Energetic electron precipitation in the nightside Martian upper atmosphere is notably characterized by the appearance of electron depletion that has been shown to be strongly modulated by the ambient magnetic field configuration (e.g. Steckiewicz et al., 2015Steckiewicz et al., , 2017Niu DD et al., 2020). This means that if SW electron precipitation indeed serves as a significant source of neutral heating, a visible correlation between neutral temperature and crustal magnetic field intensity should be revealed by existing observations. Especially, reduced temperatures near the strong crustal magnetic fields that are preferentially located over the southern hemisphere of Mars (Acuña et al., 1998;Connerney et al., 1999) are expected. However, this is not the case as indicated in Figure 5 where tude of 400 km. 
We therefore conclude that from a statistical point of view, SW electron precipitation does not exert a significant impact on the thermal structure of the nightside Martian upper atmosphere. A similar conclusion could also be reached in a more direct manner by inspecting the variation of the nightside neutral temperature as a function of the integrated energetic electron intensity over the same vertical column of 150−200 km, over the energy range of 3−450 eV, as well as over the full instrument FOV. The lower energy limit corresponds to the minimum electron energy measured by the SWEA, ignoring the spacecraft charging effect (Mitchell et al., 2016), at which SW electron precipitation could deposit energy locally via impact excitation of atmospheric neutrals (e.g. Gu H et al., 2020b). The upper energy limit corresponds to where the measured electron intensity on the nightside of Mars drops rapidly to the noise level (Niu DD et al., 2020). The variation between the nightside temperature and integrated electron intensity is depicted in Figure 6, essentially revealing no correlation. Note that the uncertainties in both isothermal neutral temperature and integrated electron intensity are typically very small and are thus not shown in the figure. In Figure 7, we compare further the NGIMS-derived temperature profiles and the SWEA-derived electron energy spectra (in terms of the omnidirectional electron intensity), accumulated during several representative nightside MAVEN orbits as indicated in the figure legend with "i" standing for inbound and "o" standing for outbound. The displayed temperature profiles are those obtained with the upper boundary temperatures set as the mean temperatures at 200−250 km, during these orbits (see Section 3). Figure 7 clearly suggests a multifaceted thermal structure in the nightside Martian upper atmosphere, which we describe in detail below. The top row of Figure demonstrate distinctive structures uncorrelated with the SWEAbased energetic electron intensity measurements made during the same orbits. For the first example, strong temperature fluctuations are seen, which are more likely associated with atmospheric gravity waves widely observed in the Martian upper atmosphere (e.g. England et al., 2017;Siddle et al., 2019). For comparison, electron precipitation displays a quite different pattern, most pronounced above 190 km where the temperature drops to a minimum. For the second example, the nightside temperature increases nearly monotonically from 165 km towards the lower boundary, to be distinguished from the observed energetic electron precipitation with an isolated enhancement covering 10 km centered around 160 km. Similar situations are encountered for the remaining two examples, each demonstrating a continuous band of electron precipitation extending to at least 200 km whereas the temperature structure is distinctly characterized by the presence of several localized temperature enhancements. In the bottom row of Figure where simultaneous enhancements in electron intensity are also observed. For the second example, the nightside temperature is enhanced over the altitude range of 155−175 km with two localized maxima near 160 and 170 km, collocated with a continuous band of substantial energetic electron precipitation that also exhibits two localized maxima at similar altitudes. The third example is characterized by a consistent pattern of nightside thermal structure and electron precipitation with three localized enhancements near 165, 170, and 190 km. 
For the last example, an apparent correlation between nightside temperature and electron intensity is also suggested by the MAVEN data revealing simultaneous enhancements near 180 and 190 km. By scrutinizing the entire data set, we find that the majority of MAVEN orbits show observations similar to those in the top row of Figure 7, in which the neutral temperature is uncorrelated with energetic electron intensity, whereas only 3.5% show apparent correlations between the two parameters. The identification of correlation is based on visual inspection of coexisting localized enhancements in both NGIMS-based temperature and SWEAbased electron intensity, a necessarily imprecise procedure that is clearly subject to some uncertainties. Despite this, the results presented here undoubtedly indicate that observational evidence for the possible impact of SW electron precipitation on nightside temperature is rare. It is also noteworthy that the nightside temperature can rise to 200 K or higher, which is well above the nightside mean temperature of 125 K, but when it does so, it is associated observationally with enhanced SW electron precipitation. As mentioned at the end of Section 2, the temperature uncertainty from downward integration is 10 K, which is much smaller than the typical temperature enhancement seen in Figure 7. The analysis presented so far relies on collocated enhancement in both temperature and electron intensity on the nightside of Mars, but this represents only one aspect of the SW impact on the atmospheric thermal structure. For further investigation, we divide all available nightside SWEA measurements made at any given altitude into several consecutive bins with different ranges of electron intensity, and we compute the respective mean nightside temperature for each bin based on simultaneous NGIMS Ar measurements. The corresponding mean nightside temperature profiles for different bins are displayed in Figure 8, revealing a pronounced temperature enhancement below 160 km for the highest electron intensity bin of 0.01−0.1 ergs·cm -2 ·s -1 , where the electron intensity is obtained by integrating the SWEA spectrum over the energy range of 3−450 eV (see above) and over the full instrument FOV. This indicates that the impact of SW electron precipitation tends to be effective at relatively low altitudes, only when the ambient electron intensity is sufficiently high. We also caution that the scattering in temperature is large (not shown in the figure), typically comparable with the maximum temperature difference seen at the bottom of Figure 8. Clearly, a firm conclusion on the role of SW electron precipitation on neutral heating must rely on a careful analysis that suppresses other possible sources of temperature variability below 160 km. Concluding Remarks and Prospects SW electron precipitation deposits a substantial amount of energy in the nightside Martian upper atmosphere and causes electron impact ionization in the absence of solar EUV radiation (e.g., Girazian et al., 2017;Lillis et al., 2018;Cui J et al., 2019). It is also likely that such an external energy deposition exerts an influence on the nightside atmospheric thermal structure, a process, however, that has never been explored in previous studies. 
Here we combine the NGIMS measurements of neutral density (Mahaffy et al., 2015a) and the SWEA measurements of energetic electron intensity (Mitchell et al., 2016), both made on board the recent MAVEN spacecraft, which has allowed us to evaluate, for the first time, the question of SW impact on the nightside thermal structure of the Martian upper atmosphere. Two different approaches are used to infer upper atmospheric temperatures on the nightside of Mars, providing either mean temperatures from isothermal fittings (e.g., Mahaffy et al., 2015b) or vertical temperature profiles from downward integration of the diffusive equilibrium equation (e.g., Stone et al., 2018;Cui J et al., 2018), both using the NGIMS Ar density data as inputs. The fact that the measured Ar densities are free from wall contamination (e.g., Cui J et al., 2009) indicates that both the inbound and outbound measurements could be readily used for the purpose of this study. The isothermal temperature obtained in the nightside Martian upper atmosphere shows no correlation with either the energetic electron intensity or the crustal magnetic field strength. This suggests that, from a statistical point of view, SW electron precipitation does not exert a noticeable impact on the nightside thermal structure, and further validates the rationale underlying existing global circulation models of the Martian upper atmosphere (e.g. Angelats i Coll et al., 2005;González-Galindo et al., 2005, 2009a, of which none have included energy deposition via SW electron precipitation as a viable external energy source. Despite this, our scrutinization of the entire MAVEN data set does reveal individual cases, during 3%−4% of available MAVEN orbits, that are characterized by apparent correlation between patterns of nightside thermal structure and of SW electron precipitation. Such a correlation is commonly manifested as one or several collocated enhancements in both NGIMS-based temperature and SWEA-based electron intensity. Meanwhile, a comparison of mean temperature profiles at different levels of electron intensity indicates that SW-induced neutral heating, if indeed occurring in the nightside Martian upper atmosphere, tends to be effective only below 160 km and when electron intensities are greater than 0.01 ergs·cm −2 ·s −1 integrated over the energy range of 3−450 eV. Based on the preliminary results presented in this study, we conclude that neutral heating caused by SW electron precipitation can in general be ignored in the nightside Martian upper atmosphere, but should be seriously considered under some extreme circumstances, such as during interplanetary coronal mass ejections (Wilson et al., 2020 and references therein). Part of our analysis in Section 3, leading to the possible identification of SW impact, relies on visual identification of collocated enhancements in both temperature and electron intensity, a procedure that is subject to obvious uncertainties and limitations. A more quantitative and thorough analysis is required to answer more precisely the question of SW effects on the Martian nightside thermal structure. Meanwhile, it is also important that the neutral heating rate due to SW electron precipitation be further parameterized quantitatively with the aid of the combined MAVEN data set, similar to the calculations that we have made for the solar EUV heating rate in the dayside Martian upper atmosphere (Gu H et al., 2020b). Such an extended analysis will be covered in a follow-up investigation. 
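As a compact, hedged illustration of the Ar-based temperature retrieval used in Section 2 (downward integration of the diffusive equilibrium relation, Equation (1)), the following sketch shows one possible numerical implementation. The physical constants are standard values, while the discretization, the sample density profile, and the boundary handling are assumptions rather than the authors' actual processing pipeline.

```python
# Minimal numerical sketch of the Ar-based temperature retrieval: downward
# integration of diffusive equilibrium,
#   n*k*T(r) = n_top*k*T_top + G*M*m * integral_{r}^{r_top} n(r')/r'^2 dr'.
# The boundary temperature and grid are illustrative only.
import numpy as np

K_B = 1.380649e-23            # Boltzmann constant (J/K)
G = 6.674e-11                 # gravitational constant (m^3 kg^-1 s^-2)
M_MARS = 6.417e23             # mass of Mars (kg)
M_AR = 39.948 * 1.66054e-27   # Ar atomic mass (kg)

def temperature_profile(r, n_ar, t_top):
    """Neutral temperature vs. planetocentric radius r (m, increasing),
    from Ar number density n_ar (m^-3) and an assumed top temperature t_top (K)."""
    integrand = n_ar / r**2
    total = np.trapz(integrand, r)                       # integral over the full column
    # cumulative integral from the lowest level up to each altitude
    below = np.concatenate(([0.0],
        np.cumsum(np.diff(r) * 0.5 * (integrand[1:] + integrand[:-1]))))
    integral_to_top = total - below                      # integral from each level to the top
    pressure_like = n_ar[-1] * K_B * t_top + G * M_MARS * M_AR * integral_to_top
    return pressure_like / (n_ar * K_B)                  # T(r)
```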
[Figure legend: integrated electron intensity bins of 10^-6 to 10^-5, 10^-5 to 10^-4, 10^-4 to 10^-3, 10^-3 to 10^-2, and 10^-2 to 10^-1 erg·cm^-2·s^-1]
5,285
2021-01-01T00:00:00.000
[ "Environmental Science", "Physics" ]
Improved Catalytic Transfer Hydrogenation of Levulinate Esters with Alcohols over ZrO 2 Catalyst : Levulinic acid (LA) and its esters (alkyl levulinates) are polyfunctional molecules that can be obtained from lignocellulosic biomass. Herein, the catalytic conversion of methyl and ethyl levulinates into γ -valerolactone (GVL) via catalytic transfer hydrogenation (CTH) by using methanol, ethanol, and 2-propanol as the H-donor/solvent, was investigated under both batch and gas-flow conditions. In particular, high-surface-area, tetragonal zirconia has proven to be a suitable catalyst for this reaction. Isopropanol was found to be the best H-donor under batch conditions, with ethyl levulinate providing the highest yield in GVL. However, long reaction times and high autogenic pressures are needed in order to work in the liquid-phase at high temperature with light alcohols. The reactions occurring under continuous gas-flow conditions, at atmospheric pressure and a relatively low contact time (1 s), were found to be much more efficient, also showing excellent GVL yields when EtOH was used as the reducing agent (GVL yield of around 70% under optimized conditions). The reaction has also been tested using a true bio-ethanol, derived from agricultural waste. These results represent the very first examples of the CTH of alkyl levulinates under continuous gas-flow conditions reported in the literature. Introduction The increased need to promote sustainable industrial development has led to the search for alternatives and renewable raw materials and feedstocks for production. One of the most promising alternatives to petrochemistry are biorefineries based on lignocellulosic biomass valorization. Lignocellulose is composed of three main components and bio-polymers, namely lignin, hemicellulose, and cellulose from which a wide plethora of valuable bio-platform molecules can be obtained [1,2]. Among them, levulinic acid (LA) can be obtained from cellulose through a multi-step sequence of reactions in which hydrolysis of the bio-polymer is followed by dehydration and hydration reactions to yield the target product and formic acid [3]. With both a ketonic and carboxylic function group, LA is a molecule of great interest and with great versatility. In particular, LA has been studied and used for the synthesis of various value-added chemicals (namely tetrahydrofuran, γ-valerolactone (GVL), angelica lactones (AL), and 1,4-pentanediol (1,4-PDO), among others) [4]. For all of these reasons, its production has increased from 450,000 kg/year (in the 1990s), to around 3800 tons/year in 2020 [3]. It is therefore not surprising that the United States Department of Energy has classified LA as one of the top twelve most promising bio-based building block chemicals [5]. Nowadays, the most applied strategy in LA valorization is through reduction, by means of catalytic hydrogenation with molecular hydrogen (H2) in a gas and liquid-phase. Some studies show the catalytic transformation of LA and its esters using homogeneous catalysts [6,7]. However, this approach is not attractive given the complexity of catalyst separation and recovery [8]. Therefore, the use of heterogeneous catalysts has been investigated over the last 10 years. The use of catalysts containing noble metal nanoparticles supported on high surface area materials, which have permitted reaching complete conversion and high selectivity of GVL (98-99%) has been reported [9]. 
However, noble-metal catalysts are expensive and harsh conditions (high hydrogen pressure and reaction temperature) are still required to achieve satisfying yields and selectivity toward the target products [9][10][11]. In addition, most of the published studies still depend on H2 as the reducing agent. This renders the process less sustainable since, through an energetically intensive procedure (e.g., methane steam reforming), fossil fuels are still the main feedstock for hydrogen production [12,13]. A suitable and sustainable alternative, is represented by the catalytic transfer hydrogenation (CTH) using the Meerwein-Ponndorf-Verley (MPV) mechanism. This approach uses organic molecules, such as alcohols, that behave as hydrogen donors towards the carbonyl group in the presence of a catalyst containing both acid and base Lewis sites [7,9,14]. Secondary alcohols, such as isopropanol, are more suitable for the chemical process, since the proposed reaction mechanism has shown a greater stability of the secondary carbocation formed as an intermediate during the reaction [15]. Typically, noble-metal catalysts have been used for the CTH of LA. In particular, the use of supported Ru with isopropanol as the preferred H-donor has led to us achieving yields above 80% for GVL working on the liquid-phase and using alkyl levulinates as the main substrates [16,17]. In addition, Raney Ni has produced yields of up to 99% for GVL while working with mild conditions (room temperature-80 °C), however with the use of an excess of isopropanol [17]. Since CTH through the MVP mechanism is achieved while in the presence of a catalyst containing Lewis acid/base pairs, several studies have focused on the use of ZrO2 for the CTH of LA and its esters [18]. Indeed, ZrO2 has been found to be a suitable catalyst for the target reaction due to its amphoteric character. This allows the simultaneous activation of both the carbonyl group of LA and the alcohol on the active sites of the support [18,19]. Additionally, for this reason, modified-ZrO2 (e.g., Zr-Al mixed oxides) were tested for the liquid-phase (batch configuration) CTH of ethyl levulinate (EL) using different alcohols as H-donors: methanol, ethanol, 1-and 2-propanols, and cyclohexanol. As expected, isopropanol gave the highest selectivity for GVL (83-84%) at 95% of EL conversion [20]. Even though isopropanol seems to be a great H-donor, it is known that secondary alcohols are likely to form their corresponding ketones which may favor unwanted side reactions (e.g., aldolic condensations). Therefore, simpler molecules such as methanol and ethanol have become attractive alternatives [7,15,21]. However, mechanistic studies have shown that primary alcohols are less available to undergo hydride shift [22]. Nevertheless, ethanol represents an interesting alternative given its high abundance, non-toxicity, sustainability, environmentally benign nature and finally, it can be obtained through fermentation of biomasses [17,23]. Presently, most of the published studies for the CTH of LA and its esters using ZrO2 based catalyst have been performed in the liquid-phase using batch reactors and isopropanol as the Hdonor. Furthermore, levulinate esters are characterized by lower boiling points compared to LA and can be directly obtained by the acid-catalyzed alcoholysis of cellulose-derived carbohydrates. 
In this way, the need for an additional esterification step of LA is avoided, and finally, the sustainability of their direct utilization as raw materials is increased significantly [24]. For all these reasons, we decided to investigate the CTH of the levulinate ester using a synthesized high specific surface area tetragonal zirconia catalyst in a gas-phase continuous-flow, fixed-bed reactor. In particular, we thoroughly and systematically investigated the effects of different parameters (contact time, reaction temperature, the type of alcohol used as the H-donor and the effect of the leaving group of the levulinate ester), as well as the study of the reaction and catalyst's deactivation mechanism [25]. The results obtained in the gas-phase were also carefully compared to the ones obtained in the liquid-phase and batch conditions [26]. Herein, the most interesting results are summarized and highlighted. Materials and Methods Reagents and standards were analytical grade, all supplied by Sigma-Aldrich (Merk, Kenilworth, NJ, USA) and used as received. A real mixture of bio-ethanol (derived from molasses and cereal fermentation) was provided by Caviro, a leading Italian wine producer group. The bioethanol volumetric composition was: ethanol 95%, acetic acid 1.3%, ethyl acetate 1.2%, methanol 1.8%, aldehydes and acetals 0.7%. For all the details related to catalyst preparation and characterization refer to ref. [25]. For all the details related to both the continuous-flow gas-phase reactor and the liquid-phase autoclave reactor and catalytic test procedures please refer to refs. [25,26]. Results and Discussion Firstly, the effect of the reaction temperature on the feasibility of the methyl levulinate (ML) reduction to GVL using ethanol as the H-donor was investigated in the gas-phase over ZrO2 ( Figure 1). Interestingly, in our reaction conditions (see Figure 1 caption), ML conversion was complete at the lowest temperature, while GVL was formed only between 200 and 300 °C with a maximum yield of 65% at 250 °C. In addition, the carbon balance of the reaction follows a volcano plot with a maximum of 90% at 250 °C. For all these reasons, 250 °C was selected as the best reaction conditions for the following tests. In this way, we performed a comprehensive investigation of the effect of the alcohols used as Hdonors in the CTH of ML in both the gas-phase (continuous-flow reactor), and liquid-phase (batch, autoclave reactor) conditions. From analysis of the obtained results ( Figure 2) a few important points can be highlighted: • The gas-phase continuous flow conditions produced a superior activity (higher ML conversion) and greater yield of GVL compared to the liquid-phase, regardless the alcohol used as the Hdonor; Isopropanol has been confirmed as the best H-donor for the liquid-phase conditions while, in the continuous-flow system in the gas-phase conditions, ethanol and isopropanol led to very similar results with complete conversion of ML (at least for six hours of reaction) and a very good GVL yield (from 60 to 80%); • When performing the catalytic tests using EL as the substrate (instead of ML), using the same reaction conditions as shown in Figure 2, a slight improvement in the obtained results in terms of catalytic activity and GVL yield were observed. This phenomenon may be attributed, to a limited extent, to the increased efficiency of EtOH as the leaving group in the intramolecular cyclization of EL to angelica lactones [26]. 
However, in the long-term stability tests performed with ethanol (Figure 2a), the progressive continuous deposition of heavy carbonaceous compounds over the catalytic surface, led to the blockage and poisoning of the active Lewis acid sites, and therefore, led to a progressive decrease in the conversion and change in chemo-selectivity, promoting the alcoholysis of angelica lactones back to EL. Nevertheless, in situ regeneration of the catalyst was successfully achieved, by performing a heat treatment, by feeding air at 400 °C for 2 h and permitted the almost complete recovery of the initial catalytic behavior, proving that the deactivation of ZrO2 is reversible and that regeneration could be achieved in a simple and practical manner. Finally, a real bio-ethanol mixture was used as the H-donor for ML at the previously optimized conditions. Figure 3 shows that bio-ethanol gave a slightly less satisfactory catalytic performance compared to the one obtained with HPLC-grade ethanol. This is probably due to the presence of impurities in bioethanol (i.e., acetic acid and aldehydes) that could foster the formation of heavy carbonaceous compounds on the catalytic surface, promoting the faster deactivation of the catalyst and in this way decreasing the maximum obtainable GVL yield. Conclusions The CTH of alkyl levulinates with a range of alcohols compared to a heterogeneous zirconia catalyst showed a superior catalytic behavior when the reaction is performed in a fixed-bed, continuous flow system in the gas-phase. In particular, high surface tetragonal ZrO2 is a suitable catalyst for the target reaction due to its ability to activate both the substrate and the alcohol over Lewis acid and basic sites. During the first 300 min of the reaction, under the optimized conditions, a full conversion was achieved and the reaction was selective towards the formation of GVL. At longer reaction times on stream, the deposition of heavy carbonaceous compounds over the active sites was observed. In this way, deactivation of the catalyst and, in addition, a change in the chemoselectivity of the reaction was observed (e.g., alcoholysis of angelica lactones to yield EL). The latter effect was observed in all the catalytic tests performed in the gas-phase, regardless the type of alkyl levulinate or the chemical nature of the alcohol used. Moreover, the in situ regeneration of the catalyst was achieved when exposing it to a flow of air at 400 °C for 2 h, restoring its initial catalytic behavior. On the other hand, ZrO2 was not able to activate methanol as the H-donor, achieving a low conversion of ML and low amounts of GVL. Isopropanol was proven to be an excellent H-donor, allowing the complete conversion of ML and high yield of GVL. Nevertheless, when ethanol (and bio-ethanol) was used as the H-donor, a similar catalytic behavior was observed in this way opening new possibilities towards a sustainable route to GVL production. Author Contributions: Gas-phase catalytic tests, catalysts characterization and data analysis, T.T.; gas-phase catalytic tests, catalyst preparation, P.B.V.; liquid-phase catalytic tests and data analysis, E.P., R.P. and F.M.; writing and manuscript editing, N.D. and F.C. All authors have read and agreed to the published version of the manuscript. Funding: This research received no external funding. Acknowledgments: We gratefully acknowledge the financial support received from SINCHEM Grant to carry out this work. 
SINCHEM is a Joint Doctorate program selected under the Erasmus Mundus Action 1 (framework agreement no. 2013-0037) of the European Union. Conflicts of Interest: The authors declare no conflict of interest.
2,968
2020-11-09T00:00:00.000
[ "Chemistry" ]
Feature Extraction of Speech Signal Based on MFCC (Mel cepstrum coefficient) Smart power plant is to establish a modern energy power system to achieve safe, efficient, green, and low-carbon power generation. Its characteristics are that the production process can be independently optimized, the relevant systems can collect, analyze, judge, and plan their own behavior, and intelligently and dynamically optimize equipment configuration and its parameters. This paper focuses on the optimal recognition state of MFCC in smart power plants. In this paper, we propose that by changing the number of filters and the order of MFCC to view the expression effect of the final MFCC parameter, the evaluation index of the effect is “accuracy”, the evaluation index—accuracy in the neural network. In this paper, the network is built through a Python programming environment, and the comparative experiment is adopted to analyze the influence of each parameter on the speech information expression effect of MFCC parameters. Introduction The wide application of intelligent voice control technology in the electric power industry is an inevitable trend in developing intelligent power plants.In the future era of the Internet of Things, the interactive mode of "speaking demand--getting feedback" will be further extended.All electrical appliances will be able to "listen" or even "speak"; voice control will become an important means to build smart power plants. One of the most important links in speech recognition is the extraction of speech feature parameters.A good speech feature parameter should have five attributes [1] : universality, uniqueness that is not easy to imitate, the stability of not changing with time, robustness without noise interference, and the convenience of being easy to extract. MFCC speech signal feature was first proposed by Davis and Mermelstein [2] .The main core step of this feature is realized in the frequency domain of the speech signal, which is to convert the frequency axis to the Mel frequency scale and then convert the inverted spectrum operation to the inverted spectrum domain. In the characteristic binding of MFCC, Guo [3] used the combined feature parameters of MFCC and △ MFCC to identify the weighted VQ algorithm.This combined feature was first proposed by D. Reynolds in 1996; Zhang [4] introduced a new feature parameter LPMCC (LPCC mel transform).This parameter is a fusion of the LPCC and MFCC.And in the paper, we proposed a new extraction algorithm --based on the Fisher criterion extraction algorithm in the MFCC extraction algorithm; Li [5] proposed a dual acoustic feature integration method based on the x-vector architecture.Unlike previous methods that directly stitch the two acoustic features, it lets two different feature parameters first enter two different branches of the neural network.This is an integration method in the previous layer of the statistical pooling layer. In the study of endpoint detection, Wen [6] took the lead in using the improved double-gate method of calculating the amplitude mean P of the beginning T frame to eliminate noise and, at the same time, using the improved AMDF (cycle short-time average amplitude difference function) method for base sound detection, which improved the accuracy of endpoint detection; Bai et al. [7] is adding the first three components of MFCC, obtaining MFRE (MER energy ratio) as the speech endpoint detection measure.Finally, the fuzzy C mean clustering algorithm delineates the two-threshold threshold. 
Regarding the study of MFCC denoising, Li et al. [8] compared the speech characteristics of MFCC and LPCC in 2010 and concluded that MFCC outperforms LPCC in both noise shielding and noise resistance. Zhou [9] proposed a spectral-subtraction scheme applied to the representative vector in order to eliminate the influence of music noise during MFCC parameter extraction; experimental verification showed that this method removes burr noise effectively. Hu et al. [10] reconstructed the spectrum at the FFT step of MFCC extraction with a noise-compensation method that exploits the harmonic structure of the voiced spectrum. Jin et al. [11] combined a speech-denoising WaveNet with MFCC, feeding the features into the dilated convolution layers of the original model through an additional fully connected layer and convolution layer. In the algorithm improvement of MFCC extraction, Wen et al. [12] introduced the MidMFCC parameter in the field of emotion recognition; this parameter combines MFCC and IMFCC over complementary frequency bands (roughly 0~2000 Hz and 2000~4000 Hz), which improves the recognition rate of the system to a certain extent. The feature extraction of the speech signals The extraction process of MFCC is as follows: first, the speech data are preprocessed; then the fast Fourier transform (FFT) is applied, the frequency scale is converted by the Mel filter bank, and finally the logarithm and the discrete cosine transform (DCT) are taken [13]. The specific process is shown in Figure 1 (flow chart of MFCC extraction). The speech set used in this paper is the THCHS-30 speech library released by Tsinghua University, recorded from 2000 to 2001. It contains 10,893 sentences from 30 male and female speakers for training. All the samples in the voice library were recorded with a carbon microphone in a quiet office environment. The speech format is WAV, the sampling frequency is 16 kHz, the sample size is 16 bits, and the language is Chinese. Pretreatment The preprocessing chain is shown in Figure 2 (the basic process of speech signal analysis and processing). Digital processing: the voice signal first passes through a band-pass (anti-aliasing) filter and is then sampled at fixed time intervals, so that the continuous time-domain signal becomes a discrete sequence of amplitude values; finally, A/D conversion quantizes the sampled amplitudes into binary codes according to the chosen quantization step. Pre-emphasis: a first-order digital filter is used to boost the power of the high-frequency components above about 800 Hz, so that the spectrum remains flat (with a comparable SNR over the whole band) in the subsequent processing. The filter is $H(z) = 1 - a z^{-1}$, where $a$ is the pre-emphasis coefficient (set to 0.97 later in this paper). Windowing and framing: the Hamming window is selected as the window function for framing. The Hamming window is $w(n) = 0.54 - 0.46\cos\left(2\pi n/(N-1)\right)$, $0 \le n \le N-1$, where $N$ is the window length. Framing is the process of applying the window: the window length is chosen according to the sampling period, and the speech signal is segmented and weighted so that each segment is multiplied by the selected window, $s_w(n) = s(n)\,w(n)$.
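As a rough illustration of the preprocessing steps above, the following minimal Python/NumPy sketch applies pre-emphasis, splits the signal into overlapping frames, and multiplies each frame by a Hamming window. The 16 kHz sampling rate and the 0.97 pre-emphasis coefficient are the values quoted in this paper; the 25 ms frame length and 10 ms frame shift are assumed, typical choices rather than values taken from the paper.

```python
import numpy as np

def preprocess(signal, fs=16000, alpha=0.97, frame_ms=25, shift_ms=10):
    """Pre-emphasis + overlapping framing + Hamming windowing (illustrative sketch)."""
    # Pre-emphasis: y(n) = x(n) - alpha * x(n-1), i.e. H(z) = 1 - alpha * z^-1
    emphasized = np.append(signal[0], signal[1:] - alpha * signal[:-1])

    N = int(fs * frame_ms / 1000)   # frame length in samples
    M = int(fs * shift_ms / 1000)   # frame shift in samples
    num_frames = 1 + max(0, (len(emphasized) - N) // M)

    # Slice the signal into overlapping frames
    idx = np.arange(N)[None, :] + M * np.arange(num_frames)[:, None]
    frames = emphasized[idx]

    # Hamming window: w(n) = 0.54 - 0.46 * cos(2*pi*n / (N-1))
    window = 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(N) / (N - 1))
    return frames * window

# Example: one second of synthetic "speech" at 16 kHz
frames = preprocess(np.random.randn(16000))
print(frames.shape)   # (number of frames, frame length)
```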
Endpoint detection: short-time energy and short-time zero-crossing rate are used to remove high-frequency ambient noise and low-frequency interference and to carry out endpoint detection. Let the pre-processed speech signal of frame $i$ be $x_i(n)$; its short-time energy $E_i$ is calculated as $E_i = \sum_{n=0}^{N-1} x_i(n)^2$. The short-time zero-crossing rate counts how often the waveform changes sign (crosses the zero level); in practice it is computed on the discrete signal. Let the short-time zero-crossing rate of the speech frame be $Z_i$, defined as $Z_i = \frac{1}{2}\sum_{n=1}^{N-1} \left|\mathrm{sgn}[x_i(n)] - \mathrm{sgn}[x_i(n-1)]\right|$, where $\mathrm{sgn}[\cdot]$ is the sign function, $\mathrm{sgn}[x] = 1$ for $x \ge 0$ and $\mathrm{sgn}[x] = -1$ for $x < 0$. After digital processing, pre-emphasis, windowing and framing, the time-domain signal of each frame is obtained. Fast Fourier transform (FFT) The preprocessed time-domain signal of each frame is transformed into the frequency domain by the fast Fourier transform, giving the spectrum $X(k)$ and the amplitude spectrum $|X(k)|$. The DFT of the speech frame is $X(k) = \sum_{n=0}^{N-1} x(n)\, e^{-j 2\pi n k / N}$, $0 \le k \le N-1$. Mel filter bank The amplitude spectrum $|X(k)|$ is passed through the Mel filter bank $H_m(k)$ ($m = 1, \ldots, M$); its output $S(m)$ is $S(m) = \sum_{k=0}^{N-1} |X(k)|\, H_m(k)$, $m = 1, 2, \ldots, M$. Discrete cosine transform Finally, the logarithm of the filter-bank output is converted to the cepstral domain by the discrete cosine transform, which yields the Mel cepstrum coefficients $C(p)$, i.e., the MFCC feature parameters to be extracted: $C(p) = \sum_{m=1}^{M} \ln S(m)\,\cos\!\left(\pi p (m - 0.5)/M\right)$, $p = 0, 1, 2, \ldots, P-1$. Pre-processing results The Python code reads the original speech of the selected samples, and the energy variation over time is presented as a waveform map (Figure 3). As can be seen from Figure 3, part of the unprocessed speech signal has very high energy. A pre-emphasis operation is therefore applied so that a comparable signal-to-noise ratio is maintained when the spectrum is computed; the pre-emphasis coefficient is set to 0.97. After pre-emphasis, the overlapping-frame method is used to window and frame the voice signal, as shown in Figure 4 (the overlapping frame method), where N is the frame length and M is the frame shift. The CNN network model construction In order to make better use of the above data set for training and testing, a one-dimensional CNN is used to build the model. There are four convolution layers with kernel lengths 1, 3, 3 and 5; the activation function is "relu", the optimizer is "adam", the classifier is "softmax", and the evaluation criterion is "accuracy". The specific network structure is shown in Figure 5.
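A minimal sketch of the FFT, Mel filter bank, logarithm and DCT stages described above is given below. The number of filters (26) and the MFCC order (13) are the values used later in Section 3.3; the FFT size of 512 and the HTK-style triangular filters built on the Mel scale (2595·log10(1 + f/700)) are standard textbook choices assumed here, not the paper's exact implementation.

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_from_frames(frames, fs=16000, nfft=512, n_filters=26, order=13):
    """Amplitude spectrum -> triangular Mel filter bank -> log -> DCT (illustrative sketch)."""
    mag = np.abs(np.fft.rfft(frames, nfft))            # |X(k)|, shape (frames, nfft//2 + 1)

    # Triangular filters equally spaced on the Mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_filters + 2)
    bins = np.floor((nfft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    H = np.zeros((n_filters, nfft // 2 + 1))
    for m in range(1, n_filters + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        H[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        H[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)

    S = mag @ H.T                                       # filter-bank output S(m)
    # Orthonormal DCT-II; differs from the unnormalised textbook form only by a constant factor
    C = dct(np.log(S + 1e-10), type=2, axis=1, norm='ortho')[:, :order]
    return C                                            # MFCC matrix (frames x order)
```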
Exploring the influence of C0 and data set division in MFCC In order to better display the specific feature form of MFCC, the selected preprocessed samples in Figure 4 are subject to 3D visualization after extracting the MFCC feature parameters, where the abscissa represents the order of MFCC, the ordinate represents the frequency number, and the vertical coordinate represents the inverted spectrum value.As shown in Figure 7 (left), the C0 value of the first dimension of MFCC is very large, and in general, it is not used as part of the characteristic parameters.Usually, C0 will be replaced with the log value of energy.The replaced MFCC parameter is shown in Figure 7 (right).It can be seen that the MFCC, after replacing C0, is more consistent with the distribution of the inverted spectrum value.In order to explore the influence of C0 substitution and data set division on the identification effect of the test set, the data set is divided into four groups, and the ratio between the training set and the test set is divided into 1:1, 2:1, 3:1:4:1.The experiment compares the two MFCC feature parameters, and the experimental data are shown in Table 1. 1, after replacing C0 with log value, the test set recognition rate is better than that of no replacement because the previous C0 direct component is too large, affecting the identification efficiency. Analysis of the number of Mel filters and the characteristic order influence Because the information expression performance of the extracted MFCC speech signal feature parameters contained in the speech is related to the number of band pass filters contained in the band pass filter group and the MFCC order set when performing the DCT transformation.Therefore, 10000 speech is divided into a training set of 7500 speech and a test set of 2500 speech, under different and conditions the data set is imported into the set model for training, and then the trained model to identify the test set, the total accuracy of 25 people in the sample is recorded, The experimental results are presented in Table 2.As can be seen from the above figure, when the order of MFCC remains unchanged, the change of accuracy caused by changing the number of filters in the Mel filter group is not large, and there is no substantial improvement or decrease trend.However, when the number of filters M is limited, and the MFCC order P is determined, it can be found that with the continuous growth of , the recognition rate of the test set will also have a large increase.This shows that the last step in the process of MFCC extraction of discrete cosine transformation (DCT) plays a decisive role in the expression of MFCC, the presence of DCT can remove the correlation of the original feature vector, but at the same time, it will truncate the original feature vector, after the DCT change MFCC can get the minimum mean square error, we let the extracted feature vector retain the energy of the original feature vector to the greatest extent.With the increase of order , the truncated original feature vector during DCT will be less, and more speech information contained in the original feature vector can be retained, which is why the test set recognition rate will increase with the increase of order . 
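Returning to the C0 substitution described at the start of this subsection, a minimal sketch of the replacement is given below. The `mfcc` and `frames` arrays are assumed to come from the sketches above, and `eps` is only a guard against taking the logarithm of zero.

```python
import numpy as np

def replace_c0_with_log_energy(mfcc, frames, eps=1e-10):
    """Replace the first MFCC dimension (C0) by the log of the frame energy."""
    energy = np.sum(frames ** 2, axis=1)     # short-time energy per frame
    out = mfcc.copy()
    out[:, 0] = np.log(energy + eps)         # C0 -> log energy
    return out
```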
Meanwhile, it can be seen in Figure 8 that the recognition rate increases with the MFCC order , but the curve changes very little and remains constant after 24.When MFCC order reaches a certain degree, although the test set total recognition rate will still increase with the increase of orders, the promotion effect is relatively less.This means the original feature vector is short enough when the retained voice information is truncated to a certain degree.MFCC has included basically all the unique information of the speaker, continuing to increase the order , to reduce the truncated part can increase the recognition rate.And it is very low. In realizing the function of voice print recognition, in addition to the total recognition rate, the operation speed of the model itself is also a part of the function that needs to be given enough attention.From the above experiment, the recognition rate is relatively small, affected by the number of filters, and greatly influenced by MFCC order, so the following experiment will test filter number whether it is unchanged.It will also change the MFCC order , explore the change of model training and test operation time, and set the filter number to 30.The specific test results are in the following Table 3: As the growth of MFCC order changes, the operation time of the model will also increase in a linear growth mode, so that the linear growth multiple is , which can be calculated from the data in Table 3, and the growth multiple is about equal to 20.That is to say, with the growth of MFCC order to a certain extent, although the recognition rate will still increase, the rate of change will decrease, while the operation time required for model training and testing will continue to grow at the growth multiple . Considering the above, the optimal solution is MFCC order 24..When the MFCC order is 24, accuracy is a relatively high standard.If the order is less than 24, as the order grows, the recognition rate of change is still large, two orders corresponding to the recognition rate gap are obvious.When the order is greater than 24, the two different order accuracy difference is not big, but the gap between the operation time is obvious. Analysis of MFCC dynamic properties To test how much effect the difference of MFCC would have on MFCC expression in voice print recognition and the performance of MFCC and its two differences for improvement in noise resistance, artificial noise is added to the original pure speech, and the speech set is set as SNR of 20, 15, 10, 5, 0 for testing.The experimental results are shown in Table 4, and the MFCC parameter was selected for testing in this section.where represents the effective power of the speech signal, while represents the effective power of the noise. As can be seen from Equation ( 11), will decrease with the increase of the effective power of noise.Therefore, for a segment of speech, the higher the means that, the smaller the noise is, the purest the sound is, and the higher the sound quality is.When SNR=0, the representative sound and noise power are equal. 
In order to more clearly observe the anti-noise performance of each of the MFCC feature parameters, Table 4 is drawn as a bar chart, as shown in Figure 9.As can be seen from Figure 9, with the decrease of SNR, the recognition rate of the test set under the three combinations of feature parameters gradually decreases, but at =0, when the signal energy is equal to the noise energy, the lowest recognition rate can still reach 85.86%, and the average recognition rate is more than 90%, indicating that the anti-noise performance of MFCC itself is very excellent.However, after increasing the first and second-order difference between MFCC, although the test set recognition rate is higher than using MFCC alone as the identification feature parameter, the decreasing trend of recognition rate did not slow down with the increase of artificial noise power.Therefore, according to this phenomenon, it can be concluded that MFCC has excellent noise resistance, and although the combination of adding differences to MFCC can increase the recognition rate of the speaker recognition model, it does not further improve the noise resistance performance of MFCC. Figure 3 Figure 3 Waveform map of the speech signal Figure 5 Figure 5 The network structure model for sound print recognition 3.3.Identification model testing based on MFCC and CNN In order to test whether the constructed MFCC extraction program and the CNN speaker recognition program can run normally, the MFCC test experiment was performed first.In this experiment, 26 Mayer filter sets were set in the MFCC extraction program, and the final output of the 13 th -order MFCC feature parameters was obtained.After the samples in 10, 000 speech data sets are preprocessed and the MFCC feature extracted, they can be imported into the built CNN voice print recognition model program.The CNN recognition program is the same as that in Figure 5, and 100 batches of training set samples (including the verification set) are set.The final test results are shown in Figure 6. Figure 6 Figure 6 Sample model accuracy rate (left panel) and Sample model loss rate (right panel) As seen from the graph line of Figure 6, the accuracy and loss rate of the model in the training period changed greatly.At the beginning of the training, the accuracy of the model rose in an almost straight line, and then with the gradual increase of the training batch, the change rate gradually leveled out.The loss rate also drops almost straight first and then gradually flattens.After 20 training sessions, the accuracy and loss rate did not increase or decrease substantially, which indicates that the model reached the maximum recognition degree.With the increasing number of training batches, the accuracy and loss rate of the final training set are fitted and do not differ much.From the two graphs above, we can determine that the recognition model used and the feature parameters extracted in this paper can be used normally. Figure 9 Figure 9 MFCC difference and noise on the recognition rate Table 1 C0 and the effect of the dataset division on the test set accuracy Table 2 Effect of the number of Mel filters and the MFCC order on the test set accuracy According to the above table, data draw the influence of different Mel filter numbers and change MFCC order of the total recognition rate of the test set, as shown in Figure8. 
Table 3: The model operation time changes with the MFCC order. Table 4: MFCC differences and the effect of noise on the test-set recognition rate. When a speech signal is mixed with environmental noise, the signal-to-noise ratio (SNR) is usually used to evaluate the purity of a segment of speech. SNR is the ratio of signal power to noise power; it is commonly expressed in dB and defined as $\mathrm{SNR} = 10 \log_{10}\left(P_{\mathrm{signal}} / P_{\mathrm{noise}}\right)$, where $P_{\mathrm{signal}}$ is the effective power of the speech signal and $P_{\mathrm{noise}}$ is the effective power of the noise.
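The anti-noise experiments in this section mix artificial noise into clean speech at prescribed SNR values (20, 15, 10, 5 and 0 dB) and append first- and second-order differences to the MFCC. A hedged sketch of both operations follows; the paper does not state which noise type was used, so white Gaussian noise is an assumption, and the simple frame-to-frame difference used here for the deltas is likewise an illustrative choice.

```python
import numpy as np

def add_noise_at_snr(clean, snr_db):
    """Mix white Gaussian noise into a clean signal at the requested SNR (dB)."""
    noise = np.random.randn(len(clean))
    p_signal = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    # Scale the noise so that 10*log10(p_signal / p_noise_scaled) equals snr_db
    noise *= np.sqrt(p_signal / (p_noise * 10 ** (snr_db / 10)))
    return clean + noise

def add_deltas(mfcc):
    """Append first- and second-order differences of the MFCC along the time axis."""
    d1 = np.diff(mfcc, axis=0, prepend=mfcc[:1])
    d2 = np.diff(d1, axis=0, prepend=d1[:1])
    return np.concatenate([mfcc, d1, d2], axis=1)
```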
4,489.8
2023-09-01T00:00:00.000
[ "Computer Science" ]
Dynamics and fragmentation mechanism of (C5H4CH3)Pt(CH3)3 on SiO2 surfaces The interaction of trimethyl(methylcyclopentadienyl)platinum(IV) ((C5H4CH3)Pt(CH3)3) molecules on fully and partially hydroxylated SiO2 surfaces, as well as the dynamics of this interaction were investigated using density functional theory (DFT) and finite temperature DFT-based molecular dynamics simulations. Fully and partially hydroxylated surfaces represent substrates before and after electron beam treatment and this study examines the role of electron beam pretreatment on the substrates in the initial stages of precursor dissociation and formation of Pt deposits. Our simulations show that on fully hydroxylated surfaces or untreated surfaces, the precursor molecules remain inactivated while we observe fragmentation of (C5H4CH3)Pt(CH3)3 on partially hydroxylated surfaces. The behavior of precursor molecules on the partially hydroxylated surfaces has been found to depend on the initial orientation of the molecule and the distribution of surface active sites. Based on the observations from the simulations and available experiments, we discuss possible dissociation channels of the precursor. Introduction Nanoscale device applications require a growth of regular or specially patterned transition metal nanodeposits. Electron beam induced deposition (EBID), is a size and shape selective deposition process capable of writing low dimensional, sub-10 nm patterns on conducting and insulating substrates with tunable electronic properties [1][2][3][4][5]. However, the deposits obtained often contain less than 50% of metal, which is detrimental to their conductivity. The incomplete dissociation of the precursor molecules on the substrate during the deposition process leaves a significant organic residue, thus impairing the quality of the deposits [3,6]. This lowers the range of applicability of EBID for nanotechnological applications. Several postfabrication approaches (such as annealing, post-deposition annealing in O 2 , exposure to atomic hydrogen, post-deposition electron irradiation) were proposed as viable techniques [7,8], but these approaches are not completely free from repro-ducibility issues. Therefore, to improve the metal content and to address the nature of the organic contamination, a fundamental understanding of how the molecules behave on the substrates is necessary. This will be helpful, either to modify the existing class of precursor materials or to design a novel set of precursors, specific for electron beam deposition. To address this, we have made a series of DFT studies in which we considered fully and partially hydroxylated SiO 2 surfaces as a representative for untreated and electron beam pretreated substrates and investigated the adsorption [9,10] and dynamics of several carbonyl precursors [11]. (C 5 H 4 CH 3 )Pt(CH 3 ) 3 , in which Pt bonds directly to three methyl groups and a methylated cyclopentadienyl ring is a widely used precursor to obtain Pt deposits. Although the dissociation mechanism of (C 5 H 4 CH 3 )Pt(CH 3 ) 3 leading to the Pt deposit remains unknown, studies for a family of precursors similar to (C 5 H 4 CH 3 )Pt(CH 3 ) 3 in atomic layer deposition (ALD) conditions are available [12]. The studies in this review fairly agree that (1) the presence of surface hydroxyl groups are the source for protons that help in the evolution of H 2 , CH 4 and H 2 O during the deposition process, and (2) the molecules dissociate or associate through a ligand exchange process [13]. 
Recently, examination of the behavior of this molecule on fully hydroxylated SiO 2 surfaces has shown that the molecule remains physisorbed [14]. These static T = 0 K DFT computations provide insights into the bonding of (C 5 H 4 CH 3 )Pt(CH 3 ) 3 to this surface, but are limited to the adsorption behavior of the molecule. Several questions remain, such as the behavior of (C 5 H 4 CH 3 )Pt(CH 3 ) 3 on electron beam pretreated substrates, the role of temperature in the dissociation of the precursor on untreated surfaces, and the possible mechanism by which the precursor dissociates on the electron beam pretreated surfaces. Hence, in order to extend the knowledge on the adsorption and to address the open questions in the deposition process, in this study we use DFT and finite temperature DFT-based molecular dynamics (MD) simulations and investigate the adsorption behavior of (C 5 H 4 CH 3 )Pt(CH 3 ) 3 on fully and partially hydroxylated SiO 2 surfaces. We focus on explaining the initial reactions and the possible fragmentation pathways of the (C 5 H 4 CH 3 )Pt(CH 3 ) 3 molecule on the SiO 2 surface, and we explain the nature of organic contamination in the deposits. Computational Details DFT calculations for the substrates, (C 5 H 4 CH 3 )Pt(CH 3 ) 3 precursor molecules, and the combined substrate/precursor molecule system were performed using the projector augmented wave (PAW) method [15,16] as implemented in the Vienna ab initio simulation package (VASP-5.2.11) [17][18][19]. The generalized gradient approximation in the parameterization of Perdew, Burke and Ernzerhof [20] was used as the approximation for the exchange and correlation functional. In addition, dispersion corrections [21] were used to simulate the long-range van der Waals interactions. All calculations were performed in the scalar relativistic approximation. A kinetic energy cut-off of 400 eV was used and all ions were fully relaxed using a conjugate gradient scheme until the forces were less than 0.01 eV/Å. In the geometry optimizations for the molecule and the surface models the Brillouin zone was sampled at the Γ point only. Spin polarization was considered for all calculations. Different spin states (i.e., in each case the two lowest possible spin states) were considered, and only the results of the ground state are reported below. The adsorption energy (E A ) was defined as E A = E adsorbate+slab − E slab − E adsorbate , where E adsorbate+slab , E slab and E adsorbate are the energies of the combined system (adsorbate and the slab), of the slab, and of the adsorbate molecule in the gas phase in a neutral state, respectively. The most stable systems were considered for studying the dynamics of the adsorbed precursor molecule. MD simulations were performed for 20 ps on a canonical ensemble at a finite temperature of T = 600 K using the Nose-Hoover thermostat [22]. The temperature of 600 K [23] was chosen in accordance with a typical experiment used for (C 5 H 4 CH 3 )Pt(CH 3 ) 3 deposition. The Verlet algorithm in its velocity form with a time step of Δt = 1 fs was used to integrate the equations of motion. For these simulations, we have used a reduced (2 × 2 × 2) SiO 2 supercell in order to reduce the computational time. For reaction modeling studies, all species (transition states (TS) and intermediates (INT)) in the proposed reaction paths given below in Scheme 1 were traced at the PM6 (parametrized model 6) level using the Berny algorithm implemented in the Gaussian 09 software [24].
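As a trivial but explicit restatement of the adsorption-energy definition above, a short Python helper is shown below; the numerical values in the example are placeholders, not energies from this work.

```python
def adsorption_energy(e_combined, e_slab, e_adsorbate):
    """E_A = E(adsorbate + slab) - E(slab) - E(adsorbate); negative values indicate stable adsorption."""
    return e_combined - e_slab - e_adsorbate

# Hypothetical DFT total energies in eV (placeholders only)
print(adsorption_energy(-1050.42, -1000.00, -49.00))   # -> -1.42 eV
```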
on SiO 2 substrates The interaction of the precursor molecule with the substrate, in general, depends on both the orientation of the adsorbate and the adsorption site on the substrate. The interaction of (C 5 H 4 CH 3 )Pt(CH 3 ) 3 with the fully hydroxylated SiO 2 substrate surfaces was investigated by placing the molecule with different orientations on several bonding sites and the most stable configuration is the one in which the methylcyclopentadienylring and two of the methyl groups that are directly bonded to Pt are oriented towards the substrate [14]. In a similar way, the bonding of (C 5 H 4 CH 3 )Pt(CH 3 ) 3 precursor molecules on partially hydroxylated SiO 2 surfaces were investigated. The initial configurations of (C 5 H 4 CH 3 )Pt(CH 3 ) 3 consid- ered for simulations on the partially hydroxylated surfaces are shown schematically in Figure 1. We simulate the precursor adsorption on two different partially hydroxylated surface models, that differ in the number of available active sites on the surfaces; 11% (Figure 1, upper panel) and 22% ( Figure 1, lower panel). On these surfaces, two sets of (C 5 H 4 CH 3 )Pt(CH 3 ) 3 orientations were considered: (1) reclining orientations (Model-1/1a and Model-2/2a), that differ in the orientation of the methylcyclopentadienyl ring of (C 5 H 4 CH 3 )Pt(CH 3 ) 3 , and (2) upright configurations (Model-3/3a and Model-4/4a) in which either three of the methyl groups bonded to Pt or the centroid of the methylcyclopentadienyl ring bonds to the substrate. These initial configurations are allowed to relax without any constraints. The configurations where (C 5 H 4 CH 3 )Pt(CH 3 ) 3 is placed on the bridging sites move spontaneously to on-top sites during geometry relaxation and hence are not discussed further. The calculated adsorption energies (E A ) are summarized in Table 1 and the configurations are summarized in Figure 2. The calculated values of E A of (C 5 H 4 CH 3 )Pt(CH 3 ) 3 on 11% partially dehydroxylated surfaces indicates that the most stable configurations are Model-2 (OA Me2Cp , E A = −1.443 eV) and Model-4 (OA Cp , E A = −1.458 eV) given in Figure 2a,b. The Table 1: Adsorption energies of (C 5 H 4 CH 3 )Pt(CH 3 ) 3 on partially hydroxylated SiO 2 surfaces with 11 and 22% defects. The models listed correspond to the configurations in Figure 1. All values in eV/unit cell. Bonding of (C 5 H 4 CH 3 )Pt(CH 3 ) 3 has also been considered on the 22% partially dehydroxylated surfaces and the relaxed 297 eV, respectively. We do not observe any spontaneous dissociation on these configurations. However, when the relaxations are started with Model-3a orientation (OB Me3 ), a stronger adsorption with E A = −2.769 eV is observed. In this case, one of the three methyl groups that bonds directly to the Pt atom dissociates, as has been observed in experimental investigations [3,[25][26][27][28]. The detached methyl group was found to bind to one of the surface active sites. However, this situation was not observed on 11% partially dehydroxylated surfaces indicating that the dissociation is assisted by the neighboring active site present on the 22% partially dehydroxylated surfaces. The next most stable configurations on the 22% partially dehydroxylated surface are obtained from Model-4a ((OB Cp ) E A = −2.360 eV). A comparison of the calculated adsorption energies for the 11% and 22% cases indicates that the molecules are more stable on the latter. Furthermore, no dissociation on the 11% dehydroxy-lated surface was observed. 
These observations illustrate that the orientation of (C 5 H 4 CH 3 )Pt(CH 3 ) 3 on SiO 2 surfaces, and the availability and location of active sites on the surface are crucial factors in dictating the dissociation of precursors and the growth mechanism of deposits. Dynamics of the (C 5 H 4 CH 3 )Pt(CH 3 ) 3 precursor on SiO 2 substrates To gain further insight on the growth process of Pt deposits, dynamics of (C 5 H 4 CH 3 )Pt(CH 3 ) 3 on both fully and partially hydroxylated surfaces were simulated. Due to the high computational expense, however, a reduced 2 × 2 × 2 supercell of SiO 2 (Figure 3a), was used. This reduced supercell provides a closer packing of precursor molecules (the nearest-neighbor distance between two Pt atoms in (C 5 H 4 CH 3 )Pt(CH 3 ) 3 is ca. 8 Å in the reduced cell compared to ca. 15 Å in the original supercell) and an enhanced concentration of surface hydroxyl vacancies is provided in the reduced cell on partially dehydroxylated surfaces (i.e., 25% and 50% compared to 11% and 22%). However, the environment around these sites is similar (i.e., the sites are isolated on the 25% partially dehydroxylated surface and lo- cated adjacent to each other on the 50% partially dehydroxylated surface). Selected configurations on fully and partially hydroxylated SiO 2 surfaces are computed using this reduced supercell and compared for consistency (Table 2) 494 eV). Neverthless, the calculated values of E A when using the reduced supercell are higher than those when using the 3 × 3 × 4 slabs, owing to the enhanced concentration of the hydroxyl groups and defect sites, and the closer packing of the molecules. On the 50% dehydroxylated cells, where two surface active sites are located adjacent to each other, one of the methyl groups, which was originally bonded to Pt, fragments and bonds to the adjacent vacant site, as observed in the 3 × 3 × 4 cell (see Figure 2d and Figure 3e). These DFT relaxed structures (as shown in Figure 3b-e) are considered as starting point for molecular dynamics simulations. The trajectory of the MD simulations for the four considered cases are shown in Figure 4. The analysis of the trajectory indicates that on the fully hydroxylated surfaces (Figure 4a) the molecule exhibits significant changes in orientation and a drift similar as we found for carbonyl precursors [11]. The Pt-Cp ring and Pt-methyl bonds fluctuate with a maximum of 5% and 1%, respectively. Apart from this, in the simulation window, we do not observe any further indications of the dissociation of the precursor molecules. The trajectory of MD simulations and arising configurations of (C 5 H 4 CH 3 )Pt(CH 3 ) 3 on the (25% and 50%) partially dehydroxylated SiO 2 surfaces in MD simulations are shown in Figure 4b-d. When simulation is started with the configuration shown in Figure 3b, a significant bond weakening of Pt-Cp ring and Pt-methyl is observed (Figure 4b). Quantitatively, the respective bond fluctuations computed for Pt-Cp ring and Pt-methyl bonds were 11% and 9%, respectively. However, when MD simulations are performed with Figure 3d as starting configuration, where a part of the methylcyclopentadienyl ring is bonded to the surface Si atom during the simulations, the bonding of that part with the Pt(CH 3 ) 3 part of the precursor is weakened. When simulations are extended further, the Pt(CH 3 ) 3 part detaches and moves to the vacuum leaving the methylcyclopentadienyl ring bonded to the substrate (see Figure 4c). 
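The bond-fluctuation percentages quoted in the MD analysis above can be obtained from the trajectory as the relative deviation of a bond length from its time average. A minimal NumPy sketch of that post-processing step follows; the trajectory array is synthetic, and since the paper does not spell out its exact definition of "fluctuation", the maximum deviation from the mean used here is an assumption.

```python
import numpy as np

def max_bond_fluctuation(bond_lengths):
    """Maximum relative deviation (in %) of a bond length from its time average."""
    mean = bond_lengths.mean()
    return 100.0 * np.max(np.abs(bond_lengths - mean)) / mean

# Synthetic example: a Pt-C bond oscillating around 2.05 Angstrom over 20 ps sampled every 1 fs
t = np.arange(20000)
trajectory = 2.05 + 0.05 * np.sin(2 * np.pi * t / 500.0)
print(f"{max_bond_fluctuation(trajectory):.1f} %")   # about 2.4 %
```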
On the 50% partially dehydroxylated surface, (Figure 4d), no further association or dissociation of methyl groups or detachment of methylcyclopentadienyl rings are observed. These results illustrate that the (C 5 H 4 CH 3 )Pt(CH 3 ) 3 precursor molecules exhibit a tendency to fragment upon their interaction with the partially hydroxylated sites and a drifting without fragmentation on the fully hydroxylated surfaces. On the partially hydroxylated surfaces, the dissociation begins either with the release of one of the methyl groups bonded directly to Pt or the detachment of the methylcyclopentadienyl ring. This indicates that surface active sites are necessary for the fragmentation of precursor molecules and this is because of the electron density located on the Si atoms, which is crucial for bonding with the precursor molecule and further fragmentation [9,10]. The 25% and 50% partially dehydroxylated surfaces represent electron beam pretreated surfaces and therefore the pretreatment of the substrate with the electron beam helps in favoring dissociation of the precursor molecule and efficient deposition. Since during the MD simulations we do not observe the dissociation of the methylcyclopentadienyl ring or the release of methyl groups from the surface Si sites, it is expected that they might block the active sites from the approach of incoming precursor molecules. Pathways for (C 5 H 4 CH 3 )Pt(CH 3 ) 3 precursor fragmentation In order to further understand the nature of the interaction of (C 5 H 4 CH 3 )Pt(CH 3 ) 3 with partially hydroxylated SiO 2 surfaces, the barriers for adsorption and fragmentation of these molecules to the partially hydroxylated groups were simulated. For this purpose, a representative simple cluster model Si(OH) 3 is considered. Although this does not account for the complete surface model, it is a good approximation to evaluate the reaction barriers for different reactions observed through the MD simulations. In the literature, such a simple hydroxylated M(OH) x model has been used as a representative model to investigate deposition reactions on Al [29], Hf [13] and Si [30][31][32]. Also, the energies reported in this section are computed at the PM6 level of theory, and a relative comparison should enable reasonable understanding of the possible fragmentation pathways qualitatively. In a recent investigation, thickness-controlled site-selective Pt deposits were obtained by direct atomic layer deposition (ALD) in which ALD was performed on EBID patterned substrates. In this ALD process, an O 2 pulse is used to obtain better nucleation, even though the qualitative understanding of the role of O 2 remains uncertain. Therefore, in this section we consider the involvement of O 2 in the pathways at different stages of the reaction and calculate the energetics to compare and elucidate the role of O 2 [33]. Our simulations of the adsorption and dynamics of (C 5 H 4 CH 3 )Pt(CH 3 ) 3 on SiO 2 surfaces show that the dissociation begins with either the detachment of methyl groups or of the methylcyclopentadienyl ring. In the cluster model, the interaction of ( Figure 5). Ethane elimination, which leads to a Pt atom (PA-TS1) bonded to the Si surface has a barrier of about +1.989 eV. This reaction is exothermic by −0.798 eV. The first step (PBTS-1) in Path B leading to a second methyl elimination has a barrier of +0.724 eV and occurs in a similar fashion as the first methyl removal. This reaction resulting in species PBINT-1 is exothermic by −0.207 eV. 
Comparing the pathways considered and the Scheme 1: Possible reaction channels for the dissociation of (C 5 H 4 CH 3 )Pt(CH 3 ) 3 after its interaction with partially hydroxylated SiO 2 surfaces and in an O 2 environment. Possible pathways for the release of (top panel) a first methyl group and of (bottom panel) subsequent methyl groups. computed activation energies as shown in Figure 5, the third methyl elimination in presence of oxygen, and the subsequent formation of Pt-oxy species (Path-B, C, D), have a high activation barrier. The formed Pt and Pt-oxygen species might agglomerate and form Pt deposits and the role of O 2 on this process has not been explored in this investigation. Some experimental studies are available and possible mechanisms were proposed [34]. Simula- tions in this study were performed at the PM6 level, with no corrections for weak interactions and at T = 0 K. Taking into account the temperature of most processes, which ranges between 200 and 400 °C, some of the high-barrier pathways might also be operative. Also, the role of secondary electrons and the point defects that might occur as a result of electron impingement on the surface has not been explored. This study therefore provides a preliminary understanding of how the (C 5 H 4 CH 3 )Pt(CH 3 ) 3 molecule can dissociate on substrates that qualitatively represent electron beam pretreated surfaces. Conclusion We have performed theoretical simulations on the dynamics and fragmentation mechanisms of (C 5 H 4 CH 3 )Pt(CH 3 ) 3 on fully and partially hydroxylated SiO 2 surfaces. Our results provide clues for the most stable configurations of (C 5 H 4 CH 3 )Pt(CH 3 ) 3 on SiO 2 surfaces and their dynamical behavior. These results illustrate that the fragmentation of the precursor molecule begins with either the detachment of either the methylcyclopentadienyl ring or the methyl group. Detached methylcyclopentadienyl rings and the dissociated methyl groups block the surface active sites and might be the source of organic contamination. Since the composition of the obtained deposits dictate the conductance behavior, it can be speculated that a design of suitable precursors for electron beam induced deposition might be more efficient than the use of traditional ALD precursors. With our reaction modeling studies, possible pathways by which the precursor molecule can fragment on SiO 2 surfaces were also explored.
4,573.8
2017-06-22T00:00:00.000
[ "Chemistry", "Physics" ]
Symmetry Methods of Flow and Heat Transfer between Slowly Expanding or Contracting Walls An analysis has been carried out for the flow and heat transfer of an incompressible laminar and viscous fluid in a rectangular domain bounded by two moving porous walls which enable the fluid to enter or exit during successive expansions or contractions. The basic equations governing the flow are reduced to the ordinary differential equations using Lie-group analysis. Effects of the permeation Reynolds number R e , porosity R, and the dimensionless wall dilation rate α on the self-axial velocity are studied both analytically and numerically.The solutions are represented graphically.The analytical procedure is based on double perturbation in the permeation Reynolds number R e and the wall expansion ratio α, whereas the numerical solution is obtained using Runge-Kutta method with shooting technique. Results are correlated and compared for some values of the physical parameters. Lastly, we look at the temperature distribution. Introduction The studies on the boundary layer flow and heat transfer over a stretching surface have become more and more prominent in a number of engineering applications.For instance, during extrusion of a polymer sheet, the reduction of both thickness and width takes place in a cooling bath.The quality of the final product depends upon the heat transfer rate at the stretching surface.In the past, many experimental and theoretical attempts on this topic have been made.Such studies have been presented under the various assumptions of small Reynolds number , intermediate , large , and arbitrary .The steady flow in a channel with stationary walls and small has been studied by Berman [1].Dauenhauer and Majdalani [2] numerically discussed the two-dimensional viscous flow in a deformable channel when −50 < < 200 and −100 < < 100 ( denotes the wall expansion ratio).Majdalani et al. [3] analyzed the channel flow of slowly expanding-contracting walls which leads to the transport of biological fluids.They first derived the analytic solution for small and and then compared it with the numerical solution. The flow problem given in study [3] has been analytically solved by Boutros et al. [4] when and vary in the ranges −5 < < 5 and −1 < < 1.They used the Lie-group method in this study.Mahmood et al. [5] discussed the homotopy perturbation and numerical solutions for viscous flow in a deformable channel with porous medium.Asghar et al. [6] computed exact solution for the flow of viscous fluid through expanding-contracting channels.They used symmetry methods and conservation laws. The flow and heat transfer in square domain have been studied by Noor et al. [7].Our main aim is to study the heat transfer in a rectangular domain.In this study, symmetry methods are applied to a natural convection boundary layer problem.The main advantage of such a method is that it can successfully be applied to a nonlinear differential equation.The symmetries of differential equations are those groups of transformation under which the differential equation remains invariant, that is, a symmetry group maps any solution to any other solutions.The symmetry solutions are quite popular because they result in reductions of independent variable of the problem. 
Mathematical Problems in Engineering The purpose of this paper is to generalize the flow analysis and heat distribution of [4].The salient features have been taken into account when the fluid saturates the porous medium.Like in [4], the analytic solution for the arising nonlinear flow problem is given by employing the Lie-group method (with and as the perturbation quantities).Finally, the graphs of velocity and temperature are plotted and discussed. Mathematical Formulation of the Problem Let us consider a rectangular domain bounded by two walls of equal permeability that enable the fluid to enter or exit during successive expansions or contractions.The walls expand or contract uniformly at the time-dependent rate ḣ .The continuous sheet aligned with the -axis at = 0 means that the wall is impulsively stretched with the velocity along the -axis and (, ) as our surface temperature.At = ℎ(), it is assumed that the fluid inflow velocity is independent of the position.A thin fluid film with uniform thickness ℎ() rests on the horizontal wall.The governing time-dependent equations for mass, momentum, and energy are given by where and V are the velocity components in the and directions, respectively, and is temperature.We assume that the fluid properties are constant.Here, is the fluid density, is the dynamic viscosity, and is the thermal conductivity of an incompressible fluid.Thus, the kinematic viscosity is ] = /, is the acceleration due to gravity, is the coefficient of the thermal expansion, and the thermal diffusivity is = / , where is the specific heat, is the pressure, is time, and and are the porosity and permeability of porous medium, respectively.The appropriate boundary conditions are where ℎ() is the film thickness.The boundary condition reflects that the fluid motion within the liquid film is caused by the viscous shear arising from the stretching of the elastic wall.Now we will express the axial velocity, normal velocity, and boundary conditions in terms of the stream function Ψ.From the continuity Equation ( 1), there exists a dimensional stream function Ψ(, , ) such that which satisfies (1) identically. Solution of the Problem This section derives the similarity solutions using Lie-group method under which (10)-(12) are invariant.(17 We set The vector field given by ( 16) is a symmetry generator of (10)-( 12) if and only if [3] is the third prolongation of . We now introduce the total derivatives by differentiating (15) with respect to , , and and construct Choosing small when ℎ ≈ , the system of (10)-( 12) has the six parameter Lie-group points of symmetries generated by 3.2.Invariant Solution.When calculating invariant solutions under the group generators 3 and 4 , we found that there are no invariant solutions.Then 5 and 6 give solutions of ( 1)-( 3) and this contradicts the boundary conditions. Analytical Solution. The aim of this section is to find the solutions of ( 40)-(42) using double perturbation [3,4].For small and , we expand Using (43) into (40)-(42) and then solving the resulting problems for small and , we obtain It is noted that for → ∞, the expression of () in [4] is recovered.Let From ( 41), (45), and (46), we obtain Solving the above problems and using (46), one obtains 3.4.Numerical Solution.Now the numerical solution of (40)-(42) has been obtained using shooting method with Runge-Kutta scheme.that the higher porosity leads to higher self-axial velocity near the center and lower near the wall.The results for < 0 are quite opposite to that of > 0. 
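The shooting technique with a Runge-Kutta integrator mentioned in Section 3.4 can be illustrated with a short SciPy sketch. Since the paper's reduced system (40)-(42) is not reproduced here, the sketch deliberately uses a generic second-order two-point boundary value problem as a stand-in; only the method (guess the missing initial slope, integrate with RK45, and adjust the guess until the far boundary condition is met) mirrors the paper's procedure.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Stand-in BVP: f'' = -R * f * f' on [0, 1] with f(0) = 0, f(1) = 1  (NOT the paper's system)
R = 1.0

def rhs(eta, y):
    f, fp = y
    return [fp, -R * f * fp]

def shoot(slope):
    """Integrate from eta = 0 with a guessed f'(0) and return the miss at eta = 1."""
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, slope], method='RK45', rtol=1e-8)
    return sol.y[0, -1] - 1.0          # residual of the far boundary condition

# Bracket and solve for the initial slope that satisfies f(1) = 1
slope = brentq(shoot, 0.1, 5.0)
profile = solve_ivp(rhs, (0.0, 1.0), [0.0, slope], dense_output=True)
print(slope, profile.sol(0.5)[0])      # converged f'(0) and the mid-interval value f(0.5)
```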
A comparative study of these figures further indicates that the self-axial velocity near the center in case of injection with expanding wall and high porosity is higher than injection with contracting wall and high porosity. Results and Discussion The plots of self-axial velocity / for permeation Reynolds number = −1 (suction) and = 0.5, −0.5 (expansion and contraction, resp.) over a range of have been displayed in Figures 3 and 4. In case of > 0, these graphs depict that the higher porosity leads to lower selfaxial velocity near the center and higher near the wall.For < 0, these figures depict that the lower porosity leads to higher self-axial velocity near the center and lower near the wall.By comparing Figures 3 and 4, we note that the self-axial velocity near the center in case of suction with expanding wall and high porosity is higher than suction with contracting wall and high porosity. The behaviour of the self-axial velocity / for wall dilation rate = −0.5 (contraction) and = 1, −1 (injection and suction) over a range of has been displayed in Figures 2 and 3.For > 0, Figure 3 shows that the higher the porosity , the lower the self-axial velocity at the center and higher near the wall.Figure 2 shows that the higher the porosity , the higher the self-axial velocity at the center and lower near the wall.When < 0, Figure 3 elucidates that the lower porosity gives a higher self-axial velocity near the center and a lower one near the wall.Figure 2 elucidates that the lower porosity gives a lower self-axial velocity near the center and a higher one near the wall.A comparative study of Figures 2 and 3 indicates that the self-axial velocity near the center in case of injection with contracting wall and high porosity is higher than suction with contracting wall and high porosity. The variations of self-axial velocity / for wall dilation rate = 0.5 (expansion) and = 1, −1 (injection and suction) over a range of have been plotted in Figures 1 and 4. When > 0, then Figure 1 shows that the higher the porosity , the higher the self-axial velocity at the center and lower near the wall.Figure 4 shows that the higher the porosity , the lower the self-axial velocity at the center and higher near the wall.When < 0, Figure 1 describes that the lower porosity gives lower self-axial velocity near the center and higher near the wall.Figure 4 provides that the lower porosity yields higher self-axial velocity near the center and lower near the wall.Comparison of Figures 1 and 4 leads to the conclusion that the self-axial velocity near the center for suction with expanding wall and high porosity is higher than injection with expanding wall and high porosity.Tables 1, 2, 3, and 4 depict that the percentage error decreases when increases. Figures 5, 6, 7, and 8 plot the behaviour of self-axial velocity over a range of with fixed and . For > 0, Figures 5-8 witness that the greater leads to higher self-axial velocity at the center and lower near the wall.For < 0, these figures show that an increase in contraction ratio leads to lower self-axial velocity near the center and higher near the wall.By comparing Figures 5 and 6, we note that the self-axial velocity near the center in case of suction with expanding wall and high porosity is higher than injection with expanding wall and high porosity. 
Comparison of Figures 5 and 8 shows that the self-axial velocity near the center in case of injection with expanding wall and low porosity is higher than injection with expanding wall and high porosity.Comparative study of Figures 6 and 7 reveals that the self-axial velocity near the center in case of suction with expanding wall and high porosity is higher than suction with expanding wall and low porosity.By comparing Figures 7 and 8, the self-axial velocity near the center in case of injection with expanding wall and low porosity is higher than suction with expanding wall and low porosity. Tables 5, 6, 7, and 8 indicate that the percentage error is an increasing function of .The self-axial velocity / for porosity parameter = 0.5 (high porosity) and wall dilation rate = 0.5, −0.5 (expansion and contraction, resp.) over a range of has been sketched in Figures 9 and 10.For > 0, we found that increasing injection leads to a lower self-axial velocity at the center and a higher one near the wall.When < 0, Figures 9 and 10 indicate that increasing suction ratio leads to a higher self-axial velocity near the center and a lower one near the wall.Comparison of Figures 9 and 10 shows that the self-axial velocity near the center in case of injection with expanding wall and high porosity is higher than injection with contracting wall and high porosity. Figures 11 and 12 provide the variation of self-axial velocity / for porosity parameter = −0.5 (low porosity) and wall dilation rate = 0.5, −0.5 (expansion and contraction, resp.) over a range of .In case of > 0, Figures 11 and 12 show that increasing injection leads to a higher self-axial velocity near the center and a lower one near the wall.For < 0, Figures 11 and 12 show that increasing suction ratio leads to a lower self-axial velocity at the center and a higher one near the wall.A comparison between Figures 11 and 12 shows that the self-axial velocity near the center in case of Table 1: Comparison between analytical and numerical solutions for self-axial velocity / at = 0.1 for = 1, = 0.5. Numerical method Percentage error (%) = −1 The self-axial velocity / for porosity parameter = −0.5, 0.5 (low and high porosity, resp.,) and wall dilation rate 10 shows that increasing injection leads to a lower self-axial velocity near the center and a higher one near the wall.Figure 11 shows that increasing injection leads to a higher self-axial velocity near the center and a lower one near the wall.In case of < 0, Figure 10 shows that increasing suction ratio leads to a higher self-axial velocity at the center and a lower one near the wall.Increasing suction ratio leads to a lower self-axial velocity at the center and a higher one near the wall (Figure 11).A comparison shows that the self-axial velocity near the center in case of injection with contracting wall and low porosity is higher than injection with contracting wall and high porosity (Figures 10 and 11).(expansion) over a range of .In case of > 0, Figure 9 shows that increasing injection leads to a lower self-axial velocity near the center and a higher one near the wall. 
Figure 12 shows that increasing injection leads to a higher self-axial velocity near the center and a lower one near the wall.In case of < 0, Figure 9 depicts that increasing suction ratio leads to a higher self-axial velocity at the center and a lower one near the wall.Figure 12 shows that increasing suction ratio leads to a lower self-axial velocity at the center and a higher one near the wall.By comparing Figures 9 and 12, the self-axial velocity near the center in case of injection with expansion wall and low porosity is higher than injection with expansion wall and large porosity.Tables 9, 10, 11, and 12 show the percentage error decrease for a small . The plots in Figure 13 elucidate that the temperature distribution is constant throughout and it is independent of 10 Mathematical Problems in Engineering physical parameter.Numerical solution for temperature is similar to our analytical solution, and therefore, temperature distribution has no error. Conclusions In this paper, we have generalized the flow analysis of [4] with the influence of porous medium and heat transfer.Analytical solution for the arising nonlinear problem is obtained by using Lie symmetry technique in conjunction with a secondorder double perturbation method.We have studied the effects of porous medium (), permeation Reynolds , and wall dilation rate on the self-axial velocity and temperature distribution within the fluid.We compared the analytical solution with the numerical solution for self-axial velocity for the different values of , , and . It was found that the temperature distribution has no error since analytical solution is similar to numerical solution and both are equal to one.We also found that as increases, the percentage error decreases and that temperature distribution is constant throughout.Here, we have noticed that the obtained analytical results match quite well with the numerical results for a good range of these parameters.We also noticed that in all cases, the self-axial velocity has similar trend as in [4], that is, the self-axial velocity approaches a cosine profile.Finally, we observed that when approaches infinity, our problem reduces to the problem in [4] and our results (analytical and numerical) also reduce to the results in [4]. Figures 9 , Figures 9, 10, 11, and 12 illustrate the behaviour of self-axial velocity over a range of with fixed and .The self-axial velocity / for porosity parameter = 0.5 (high porosity) and wall dilation rate = 0.5, −0.5 (expansion and contraction, resp.) over a range of has been sketched in Figures9 and 10.For > 0, we found that increasing injection leads to a lower self-axial velocity at the center and a higher one near the wall.When < 0, Figures 9 and 10 indicate that increasing suction ratio leads to a higher self-axial velocity near the center and a lower one near the wall.Comparison of Figures9 and 10shows that the self-axial velocity near the center in case of injection with expanding wall and high porosity is higher than injection with contracting wall and high porosity.Figures11 and 12provide the variation of self-axial velocity / for porosity parameter = −0.5 (low porosity) and wall dilation rate = 0.5, −0.5 (expansion and contraction, resp.) 
over a range of .In case of > 0, Figures11 and 12show that increasing injection leads to a higher self-axial velocity near the center and a lower one near the wall.For < 0, Figures11 and 12show that increasing suction ratio leads to a lower self-axial velocity at the center and a higher one near the wall.A comparison between Figures11 and 12shows that the self-axial velocity near the center in case of Figures 9 and 12 indicate the behaviour of self-axial velocity / for porosity parameter = −0.5, 0.5 (low and high porosity, resp.) and wall dilation rate = 0.5
3,939.6
2013-06-18T00:00:00.000
[ "Physics", "Engineering" ]
A Single Image Dehazing Method Using Average Saturation Prior

Outdoor images captured in bad weather are prone to poor visibility, which is a fatal problem for most computer vision applications. The majority of existing dehazing methods rely on an atmospheric scattering model and therefore share a common limitation: the model is only valid when the atmosphere is homogeneous. In this paper, we propose an improved atmospheric scattering model to overcome this inherent limitation. Adopting the proposed model, a corresponding dehazing method is also presented. In this method, we first create a haze density distribution map of a hazy image, which enables us to segment the hazy image into scenes according to the haze density similarity. Then, in order to improve the atmospheric light estimation accuracy, we define an effective weight assignment function to locate a candidate scene based on the scene segmentation results and therefore avoid most potential errors. Next, we propose a simple but powerful prior named the average saturation prior (ASP), which is a statistic of extensive high-definition outdoor images. Using this prior combined with the improved atmospheric scattering model, we can directly estimate the scene atmospheric scattering coefficient and restore the scene albedo. The experimental results verify that our model is physically valid and that the proposed method outperforms several state-of-the-art single image dehazing methods in terms of both robustness and effectiveness.

Introduction

Due to atmospheric suspended particles (aerosols, water droplets, etc.) that absorb and scatter light before it reaches a camera, outdoor images captured in bad weather (haze, fog, etc.) are significantly degraded and yield poor visibility, such as blurred scene content, reduced contrast, and faint surface color. The majority of applications in computer vision and computer graphics, such as motion estimation [1, 2], satellite imaging [3, 4], object recognition [5, 6], and intelligent vehicles [7], are based on the assumption that the input images have clear visibility. Thus, eliminating these negative visual effects and recovering the true scene, which is often referred to as "dehazing," is highly desired and has strong implications. However, dehazing is a challenging problem, since the magnitude of the degradation is fundamentally spatially variant.
As a challenging problem, a variety of methods have been proposed to address this task using different strategies.The first category of methods removes haze based on traditional image processing techniques, such as histogram-based [8,9] and Retinex-based methods [10].However, the recovered results may suffer from haze residue and an unpleasing global visual effects, since the adjustment strategies do not consider the spatial relations of the degradation mechanism.A more sophisticated category of haze removal methods attempts to improve the dehazing performance by employing multiples images that are taken in different atmosphere conditions [11][12][13].Although the dehazing effect can be enhanced since extra information about the hazy image is obtained through different atmospheric properties, the limitation of these methods is evident because the acquisition step is difficult to perform.Another category of methods estimates the haze effects using a camera and polarization filter which is identically positioned [14][15][16].But these methods are only valid for mist images, where the polarization light is the major degradation factor [17]; moreover, these methods are normally time-consuming. Recently, benefiting from the atmospheric scattering model, many state-of-the-art model-based single image dehazing methods have been proposed [11,[18][19][20][21][22][23][24][25][26][27][28].Although significant progress has been made, two main limitations remain.First, the atmospheric scattering model that researchers have adopted is only valid in homogeneous atmosphere conditions, as we noted in Section 2, and therefore these model-based methods commonly lack robustness.Second, many model-based single image dehazing methods that recover clear-day images rely on the error-prone empirical assumption as the atmospheric scattering coefficient due to the estimation difficulty and model complexity and therefore are limited in terms of robustness and effectiveness. For instance, He et al. [21] proposed the dark channel prior based on the statistical observation, which enables us to directly obtain an approximate estimation of the transmission map.Despite its effectiveness in most cases, this method cannot process inhomogeneous hazy images due to the atmospheric scattering model limitation and may fail for the sky region where the prior is broken.Based on [21], Meng et al. [23] added a boundary constraint and estimated the transmission map via a weighted contextual regularization.However, it is subject to color distortion for the white object, since it cannot fundamentally address the problem of ambiguity between the surface color and haze.Fattal's method [19] is proposed with the assumption that the surface shading factor and transmission functions are statistically uncorrelated in a local patch and estimate the transmission within the segmented scenes which have the constant scene albedo.Although it can achieve an impressive effect when recovering a homogeneous mist image, it fails for dense haze scenes and inhomogeneous hazy images where the assumption is invalid.Tan [18] proposed a novel dehazing method by assuming that the clear-day images have higher contrast than the hazy images; however, results generated using a Markov Random Field (MRF) tend to be oversaturated, since this method is similar to the contrast stretching in general.Based on the Bayesian probabilistic model and the atmospheric scattering model, Nishino et al. 
[22] jointly estimated the scene albedo and depth by completely leveraging its latent statistical structures.Despite the almost perfect dehazing results for dense hazy images, the results tend to be overenhanced for the mist image.Tarel et al. 's method [20] estimates the veil by using combinations of filters; the advantage of this method is its linear complexity and therefore it can be implemented in real time.Nevertheless, the dehazing effect is prone to be invalid where depth changes drastically, since the median filter involved provides poor edge-preserving performance.Zhu et al. [24] proposed the color attenuation prior, and the depth information can be well estimated via this prior and a linear model; nevertheless, it fails in inhomogeneous atmosphere conditions since the atmospheric scattering model it adopted is invalid.Wang et al. [26] proposed a fusion-based method to remove haze, but haze remains when processing inhomogeneous hazy images because the atmospheric scattering model may be invalid in these cases.Besides, this method is based on wavelet fusion and tends to be invalid for dense hazy images because the ambiguity between the image color and haze cannot be well separated.Although Jiang et al. [25] introduced an efficiency hierarchical method for gray-scale image dehazing, it cannot well handle inhomogeneous scenes and suffers from color distortion due to the limitations of the model, which is similar to [26]. In this paper, we propose an improved atmospheric scattering model and a corresponding single image dehazing method that is aimed at overcoming the two main aforementioned limitations.Compared with the previous methods, major contributions of our method are presented.(1) We propose an improved atmospheric scattering model to address the limitation of the current model, which is only valid under homogeneous atmosphere conditions.By considering the inhomogeneous atmosphere, the proposed model has better properties with respect to the validity and robustness.(2) Based on the proposed model, we create a haze density distribution map and train the relevant parameters using a supervised learning method, which enables us to segment the hazy image into scenes based on the haze density similarity.Therefore, the inhomogeneous atmosphere problem can be effectively converted into a group of homogeneous atmosphere problems.(3) Using the segmented scenes, combined with the proposed scene weight assignment function, we can effectively improve the estimation accuracy of the atmospheric light by excluding most of the potential errors.(4) Few dehazing methods estimate the atmospheric scattering coefficient but simply use the error-prone empirical values due to their difficulty and model complexity; even this problem has been noticed by many researchers.We estimate the atmospheric scattering coefficient via the proposed ASP. The remainder of this paper is structured as follows.In the next section, we propose the improved atmospheric scattering model based on the limitation analysis of the current model.In Section 3, we present a novel single image dehazing method, which includes three key steps: scene segmentation via a haze density distribution map, improved atmospheric light estimation, and scene albedo recovery via the ASP.In Section 4, we present and analyze the experimental results.In Section 5, we summarize our method. 
Improved Atmospheric Scattering Model

In computer vision and computer graphics, most model-based dehazing methods [11, 18-28] rely on the following atmospheric scattering model, which is formulated under the homogeneous atmosphere assumption [11, 12, 29, 30]:

I(x, y) = J(x, y) · t(x, y) + A · (1 − t(x, y)),  (1)

where (x, y) is the pixel index, I(x, y) denotes the hazy image, J(x, y) = A · ρ(x, y) represents the corresponding clear-day image, A is the atmospheric light, which is a constant value throughout the whole image, ρ(x, y) is the scene albedo, and t(x, y) is the transmission, which is defined as

t(x, y) = e^(−β · d(x, y)),  (2)

where d(x, y) is the scene depth and β is the atmospheric scattering coefficient, which describes the ability of a unit volume of atmosphere to scatter light in all directions [11, 24, 29]. Note that the atmospheric scattering coefficient β is a fixed scalar in (1), which indicates that the attenuation magnitude is constant throughout the entire hazy image. However, according to [11, 19, 24, 29], β is determined by the density of atmospheric suspended particles for a particular light wavelength. Consequently, the constant setting of β in (1) is only valid in homogeneous atmosphere conditions, as discussed by [21, 24, 26, 29]. That is, when we simply regard the atmospheric scattering coefficient as a constant in inhomogeneous atmosphere conditions, the transmission in some scenes is inevitably prone to be underestimated or overestimated.

In addition, the atmosphere is inhomogeneous in most practical scenarios, since haze has an inherent dynamic diffusion property according to Fick's laws of diffusion [31]. As shown in Figure 1, the haze density spatially varies between different boxes within a hazy image.

However, this problem can be relatively alleviated. Although the haze density spatially varies across the entire image, the haze density within a particular local region is approximately the same, since the haze diffusion is physically smooth [11]. For instance, the haze density is generally similar within the same boxes in Figure 1. Thus, the inhomogeneous hazy image can be converted into a group of homogeneous scenes based on the haze density similarity, and each scene can be regarded as an independent homogeneous subimage. Based on this notion and inspired by [32, 33], we redefine the atmospheric scattering coefficient in (1) as a scene-wise variable and propose an improved atmospheric scattering model as

I(x, y) = A · ρ(x, y) · e^(−β(k) · d(x, y)) + A · (1 − e^(−β(k) · d(x, y))),  (x, y) ∈ Ω(k),  (3)

where k is the scene index, Ω(k) is the pixel index set for the k-th scene, and β(k) is the scene atmospheric scattering coefficient, which is constant within a scene but varies between scenes. With this improvement, we address the inherent limitation of the atmospheric scattering model, because all types of hazy images can be precisely modeled.
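To make the scene-wise formulation of (3) concrete, the following sketch synthesizes and inverts the model for given albedo, depth, scene labels, and per-scene coefficients. The array layout, function names, and the use of NumPy are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def synthesize_hazy(albedo, depth, scene_labels, betas, A):
    """Forward model of Eq. (3) with a scene-wise scattering coefficient.

    albedo       : H x W x 3 scene albedo rho(x, y) in [0, 1]
    depth        : H x W scene depth d(x, y)
    scene_labels : H x W integer map, value k = scene index of Omega(k)
    betas        : dict {k: beta(k)} of scene scattering coefficients
    A            : length-3 atmospheric light vector
    """
    beta_map = np.zeros_like(depth, dtype=float)
    for k, beta_k in betas.items():
        beta_map[scene_labels == k] = beta_k
    t = np.exp(-beta_map * depth)[..., None]          # scene-wise transmission
    A = np.asarray(A, dtype=float).reshape(1, 1, 3)
    return A * albedo * t + A * (1.0 - t)             # I = A*rho*t + A*(1 - t)

def recover_albedo(hazy, depth, scene_labels, betas, A):
    """Invert Eq. (3) for the scene albedo once all other quantities are known."""
    beta_map = np.zeros_like(depth, dtype=float)
    for k, beta_k in betas.items():
        beta_map[scene_labels == k] = beta_k
    t = np.exp(-beta_map * depth)[..., None]
    A = np.asarray(A, dtype=float).reshape(1, 1, 3)
    rho = (hazy - A * (1.0 - t)) / np.maximum(A * t, 1e-6)
    return np.clip(rho, 0.0, 1.0)
```

The two functions are mutual inverses up to clipping, which is the property the dehazing pipeline below relies on.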
A Novel Single Image Dehazing Method

In this section, we present a novel single image dehazing method that is based on the proposed improved atmospheric scattering model. In this method, we first create a haze density distribution map to describe the spatial relations of the haze density for a hazy image and further segment the hazy image into scenes based on the haze density similarity. Then, as a by-product of segmentation, we can improve the estimation accuracy of the atmospheric light using a proposed scene weight assignment function. Next, based on the proposed ASP and the depth information provided via [24], we can estimate the scene atmospheric scattering coefficient and recover the true scene albedo.

Definition of a Haze Density Distribution Map. Based on the improved atmospheric scattering model, we need to segment the hazy image into scenes based on the similarity of the haze density. Thus, the spatial distribution of haze density for a hazy image should be obtained. However, to our knowledge, there does not yet exist a pixel-based no-reference haze density distribution model that is consistent with the practical judgement of haze density. Choi et al. [34] propose the fog aware density evaluator (FADE), a patch-based evaluator that assesses the fog density of an entire hazy image or a local patch. However, a patch-based assessment is relatively computationally expensive and therefore cannot be implemented as an intermediate step. Thus, a high-efficiency pixel-based strategy for describing the spatial relations of the haze density for a hazy image is required.

According to [34-36], the haze density representation is primarily correlated with three measurable statistical features of a hazy image I(x, y): the brightness component V(x, y), the texture details component ∇I(x, y), and the saturation component S(x, y). Thus, inspired by [24], we create a linear model named the haze density distribution map that can be expressed as

D(x, y) = w1 · V(x, y) + w2 · ∇I(x, y) + w3 · S(x, y) + w4,  (4)

where D(x, y) is the haze density distribution map and w1, w2, and w3 are the corresponding unknown parameters for each component. Inspired by [37, 38], we should further consider the representation error, such as the quantization error caused by the three components and noise; thus, we set the total representation error as w4. According to (4), all the components are combined to yield a description of the haze density distribution. Note that the three components are relatively independent, and a slight deviation of one component will therefore not affect the other components.

To obtain all relevant parameters of the linear model, we employ a supervised learning method. We employ 500 training samples to train this linear model; each training sample consists of a pair of a hazy image and the corresponding truth haze density map (in order to prepare the training data, we collect 500 various types of hazy images from the Internet and use them to produce the corresponding truth haze density). Considering the superior accuracy of FADE, we adopt it as the reference for the truth haze density representation. The training strategy is designed as follows: we utilize the gradient descent algorithm to estimate the linear parameters w1, w2, w3, and w4 by minimizing the squared error

E(w1, w2, w3, w4) = (1/|Λ|) Σ_{p∈Λ} (D(p) − D*(p))²,

where D*(p) is the FADE-based reference, |Λ| is the total number of pixels within the training hazy images, and p is the pixel index of the training hazy images. Taking the partial derivatives of E(·) with respect to w1, w2, w3, and w4, the parameters are updated as

wi := wi − η · ∂E/∂wi,  i = 1, ..., 4,

where the notation := indicates that the value of wi in the left term is set to be the value of the right term and η is the learning rate. After training, we obtain the following optimal model parameters (retaining four decimal places for precision): w1 = 0.9313, w2 = 0.1111, w3 = −1.4634, and w4 = −0.0213. The most important advantage of this model is its linear complexity. Once the model parameters have been determined, this model can be used for modeling the haze density distribution of any hazy image.
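A minimal sketch of the linear haze density model of (4) and a least-squares fit by gradient descent is given below. The concrete component definitions (HSV value channel for brightness, Sobel gradient magnitude for texture details, HSV saturation) and the scikit-image helpers are our assumptions; only the published weights are taken from the text.

```python
import numpy as np
from skimage import color, filters   # assumed helpers for HSV conversion and gradients

# Published weights of the linear model in Eq. (4).
W1, W2, W3, W4 = 0.9313, 0.1111, -1.4634, -0.0213

def haze_density_map(rgb):
    """Pixel-wise haze density D = w1*V + w2*|grad| + w3*S + w4 (rgb in [0, 1])."""
    hsv = color.rgb2hsv(rgb)                 # H x W x 3
    brightness = hsv[..., 2]                 # V(x, y)
    saturation = hsv[..., 1]                 # S(x, y)
    texture = filters.sobel(brightness)      # gradient magnitude as a texture proxy
    return W1 * brightness + W2 * texture + W3 * saturation + W4

def fit_weights(features, reference, lr=0.05, iters=2000):
    """Gradient-descent fit of (w1..w4) to a FADE-style reference.

    features  : N x 3 matrix of (brightness, texture, saturation) per training pixel
    reference : length-N vector of reference haze densities
    """
    X = np.hstack([features, np.ones((features.shape[0], 1))])   # append bias column
    w = np.zeros(4)
    for _ in range(iters):
        err = X @ w - reference
        w -= lr * (2.0 / len(reference)) * (X.T @ err)            # gradient of the MSE
    return w
```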
In Figure 2, we select several inhomogeneous and homogeneous hazy images with various haze densities (see Figure 2(a)) and demonstrate the corresponding haze density distribution maps (see Figure 2(b)). The dark blue areas indicate the thinnest haze and the dark red areas represent the densest haze, and the color changes from dark blue to dark red with increasing haze density. Note that the generated haze density distribution maps are visually consistent with the spatial features of the haze density.

However, we note that the haze density distribution map contains excessive texture details. This is caused by the depth structure of the scene objects, which may affect the components (brightness, texture details, and saturation) that we adopted to model the density distribution map. The haze density distribution should be flat and independent of any image structure [33]. Although the excessive texture details imply microscopic haze density differences, additional processing would incur extra computational cost. The accuracy is slightly sacrificed if we eliminate part of the excessive texture details; however, we consider this a reasonable trade-off. Thus, we utilize the guided total variation (GTV) model [33] to refine the haze density distribution map, as in (8), where D_ref is the refined haze density distribution map, ω is the weight function, D is the input haze density distribution map, and the guidance image is defined as the haze density distribution map itself. According to [33], (8) can be expressed and processed in an iteration form (9), and we set λ1 = 1, λ2 = 12, and λ3 = 1 as the regularization parameters for the approximation term, the smoothing term, and the edge-preserving term, respectively. By comparing Figures 2(b) and 2(c), we note that the excessive texture details have been significantly eliminated.

Scene Segmentation. Using the refined haze density distribution map D_ref, our goal in this step is to segment the map into a group of scenes based on the haze density similarity. After segmentation, pixels within a particular scene should share an approximately identical haze density, which further implies that they have the same scene atmospheric scattering coefficient. This problem is fundamentally similar to data clustering; thus, we convert this segmentation process into a clustering problem and adopt the k-means clustering algorithm [39, 40]. The clustering procedure can be expressed as

arg min_Ω Σ_{k=1}^{K} Σ_{p∈Ω(k)} ‖D_ref(p) − c_k‖²,  (10)

where K is the cluster number, Ω(k) is the k-th cluster, and c_k is the cluster center. After extensive experiments and qualitative and quantitative comparisons (as demonstrated in Section 4), we obtain the relatively balanced cluster number K = 3. The k-means clustering algorithm iteratively forms mutually exclusive clusters of a particular spatial extent by minimizing the mean square distance from each pattern to the cluster center. Denoting the value of the objective after the t-th iteration by J(t), where t is the iteration index, the procedure stops when a convergence criterion is satisfied, and we adopt the typical convergence criterion [40] of no (or minimal) difference after the t-th iteration, that is, |J(t) − J(t−1)| < ε. We set ε = 10⁻⁴ to terminate this procedure. Note that because the clustering step optimizes the within-cluster sum of squares (WCSS) objective and there only exists a finite number of such partitions, the algorithm must converge to a (local) optimum. However, the segmentation results may exhibit instability or oversegmentation, because the k-means clustering algorithm is uncorrelated with the spatial location and there is no guarantee that the global optimum is obtained. Thus, we further refine the result via a fast MRF method [41] and denote the refined result as D_mrf. Figure 3 shows the corresponding results D_mrf for the images of Figure 2.
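The scene segmentation step can be sketched as follows, assuming a standard k-means implementation (scikit-learn) in place of whatever implementation the authors used; the subsequent MRF refinement is not included.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_scenes(density_map, n_scenes=3, tol=1e-4):
    """Cluster pixels of the refined haze density map into haze-density scenes.

    Returns an H x W label map; label k plays the role of the scene index Omega(k).
    """
    h, w = density_map.shape
    samples = density_map.reshape(-1, 1)        # one feature per pixel: its haze density
    km = KMeans(n_clusters=n_scenes, tol=tol, n_init=10, random_state=0)
    labels = km.fit_predict(samples)
    return labels.reshape(h, w)
```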
Estimation of Atmospheric Light. The atmospheric light is an RGB vector that describes the intensity of the ambient light in the hazy image. As discussed by [42], current single image dehazing methods estimate the atmospheric light either by user-interactive algorithms [19, 24] or based on the most haze-opaque (brightest) pixels [18, 21, 43-45]. Nevertheless, the located brightest pixels may belong to an interference object, such as an extra light source, a white/gray object, or high-light noise. As demonstrated in Figure 4, both He et al.'s method [21] (in the green box) and Namer et al.'s method [15] (in the blue box) locate an interference object as the atmospheric light in a challenging hazy image.

As a by-product of scene segmentation, we can cope with this challenging task by designing a scene weight assignment function. Using this function, we can locate a candidate scene that excludes most of the interference objects. This function is designed based on three basic observations:

(1) The probability that a scene contains the most haze-opaque (brightest) pixels is proportional to the haze density [18, 21, 44, 45]. This can be inferred from the atmospheric scattering model: when the haze density is infinite for a pixel, that pixel reduces to the atmospheric light.

(2) The most haze-opaque (brightest) pixel belongs to the sky region with higher probability, and interference objects, such as rivers, extra light sources, and roads, are primarily located spatially lower than the sky scene. Thus, we can avoid these types of interference objects by considering the scene vertical index.

(3) Most existing dehazing methods are not suitable for white/gray interference objects (cars, animals, etc.) because they are not sensitive to the white/gray color [24]. However, the scene coverage ratio for these objects is significantly smaller than the scene coverage ratio for a sky scene.

Accordingly, we assign a weight to each segmented scene in D_mrf by considering the scene haze density, the scene average height, and the scene coverage ratio. The scene weight assignment function (12) combines these three quantities, where res is the resolution of the hazy image and D_k and |Ω(k)| are the scene haze density and pixel number, respectively, for each segmented scene. Based on (12), each segmented scene is assigned a weight, and we take the scene with the top weight as the candidate scene. In addition, to further eliminate the effect of high-light noise, we locate the top 0.1% brightest pixels within the candidate scene as the potential atmospheric light and take the average value of these pixels as the atmospheric light.
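The following sketch illustrates the candidate-scene selection and the top-0.1% averaging. Since the exact form of the weight function (12) is not reproduced here, the weight used below is only an illustrative combination of the three observed cues (scene haze density, average height, coverage ratio), not the paper's formula.

```python
import numpy as np

def estimate_atmospheric_light(hazy, scene_labels, density_map, top_fraction=0.001):
    """Locate a candidate scene and average its brightest pixels.

    The scene weight below is an illustrative stand-in for Eq. (12): it rewards
    high haze density, a position near the image top, and large coverage.
    """
    h, w, _ = hazy.shape
    ys = np.arange(h)[:, None] * np.ones((1, w))           # row index of every pixel
    best_scene, best_weight = None, -np.inf
    for k in np.unique(scene_labels):
        mask = scene_labels == k
        density = density_map[mask].mean()                  # scene haze density
        height = 1.0 - ys[mask].mean() / h                  # close to 1 near the top
        coverage = mask.sum() / (h * w)                      # scene coverage ratio
        weight = density * height * coverage                 # illustrative combination
        if weight > best_weight:
            best_scene, best_weight = k, weight

    # Top 0.1% brightest pixels of the candidate scene give the atmospheric light.
    mask = scene_labels == best_scene
    intensity = hazy[mask].mean(axis=1)                      # per-pixel gray intensity
    n_top = max(1, int(top_fraction * intensity.size))
    top_idx = np.argsort(intensity)[-n_top:]
    return hazy[mask][top_idx].mean(axis=0)                  # RGB vector A
```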
As shown in Table 1, we list the assigned weights (retained to four decimal places for precision) of each scene (scenes 1, 2, and 3) for Figure 3 and depict the located potential atmospheric light in the red outlined areas in Figure 5. We successfully locate the atmospheric light and avoid most of the interference objects, as expected.

We also tested our method on the same challenging hazy image (Figure 4) and depict our result in the red outlined areas of Figure 4. By comparison, we demonstrate the advantage of our method.

Average Saturation Prior. Hazy images often lack visual vividness because the scene contents are extremely blurred, with reduced contrast and faint surface colors. Inspired by [21, 24], we conduct a number of experiments on various types of hazy images and high-definition clear-day outdoor images to identify statistical regularities.

Interestingly, as the experimental demonstration in Figure 6 shows, the RGB histograms of a hazy image are almost identically distributed (see Figure 6(c)). Conversely, the RGB histograms of a high-definition clear-day outdoor image of the same scene are significantly distinguishable (see Figure 6(d)). As shown in Figure 6(c), we also notice that the hazy image contains nearly zero pixels that are pure black (RGB 0, 0, 0) or pure white (RGB 1, 1, 1), whereas the high-definition clear-day outdoor image includes numerous such pixels (see Figure 6(d)).

These observations on the hazy image RGB histograms indicate that most pixels in a hazy image are extremely similar, which causes the poor visibility, and vice versa. We infer that this observation translates into statistical regularities of the average saturation distribution; thus we perform extensive tests on various types of hazy images and high-definition clear-day outdoor images.

Similar to [21, 24], we collect a large number of hazy images and high-definition clear-day outdoor images from the Internet using several search engines (with the keywords "hazy image" and "high-definition clear-day outdoor images"). We then randomly select 2,000 hazy images and obtain their average saturation probability distribution (see Figure 7(a)). Next, we select 4,000 high-definition clear-day outdoor images with landscape and cityscape scenes (where haze usually occurs), manually cut out the sky regions (considering the similarity between the sky region and hazy regions), and obtain the corresponding average saturation probability distribution (see Figure 7(b)).

The average saturation probability distribution of hazy images, as shown in Figure 7(a), is distinctly concentrated at approximately 0.005 (more than 40% at 0.005 and with a cumulative probability of more than 70% from 0 to 0.01). This finding indicates that few pixels are nearly pure white or black, which confirms our second observation on Figure 6(c). Thus, this result strongly suggests that the average saturation of a hazy image tends to be a very small value (0 to 0.01 with an overwhelming probability).

The average saturation probability distribution of high-definition clear-day outdoor images is shown in Figure 7(b). We compute the expectation of the average saturation; the results indicate that the average saturation of a high-definition clear-day outdoor image tends to be 0.106 with high probability. As demonstrated in Section 4, to further evaluate this conclusion, we select another six possible average saturation values and compare the dehazing effect both qualitatively and quantitatively on another 200 various types of hazy images.
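For reference, the average saturation statistic underlying the ASP can be computed as below; treating "saturation" as the HSV saturation channel is our assumption, since the text does not name a specific color model.

```python
import numpy as np
from skimage import color

def average_saturation(rgb):
    """Mean HSV saturation of an H x W x 3 RGB image in [0, 1]."""
    hsv = color.rgb2hsv(np.clip(rgb, 0.0, 1.0))
    return float(hsv[..., 1].mean())

# According to the prior, a hazy image typically yields a value close to 0,
# while a high-definition clear-day outdoor image tends toward roughly 0.106.
```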
Scene Atmospheric Scattering Coefficient Estimation and Scene Albedo Recovery. To our knowledge, most existing dehazing methods take an error-prone empirical value as the atmospheric scattering coefficient. Despite the valuable progress that has been made to overcome this problem, there is likely no optimal solution for all types of hazy images (even homogeneous images) due to the variation in haze density. For instance, Zhu et al. tested numerous atmospheric scattering coefficient values in [24] to pursue an optimal solution; however, the atmospheric scattering coefficient is simply assumed to be 1 in that method. Shi et al. [46] also tried to address the problem by considering the impact of Earth's gravity on the atmospheric suspended particles; however, the dehazing results tend to be unstable [33]. By combining the proposed ASP with the improved atmospheric scattering model, we can effectively estimate the scene atmospheric scattering coefficient within each scene. We derive from (3)

ρ(x, y) = (I(x, y) − A) / (A · e^(−β(k) · d(x, y))) + 1,  (x, y) ∈ Ω(k).  (13)

Note that I(x, y) is given and A is estimated in Section 3.2; the scene albedo ρ(x, y) is now a function with respect to the scene atmospheric scattering coefficient β(k) and the scene depth d(x, y). Due to significant progress in estimating the scene depth [22, 24], we assume that the scene depth d(x, y) is given by [24]. Therefore, the scene albedo ρ(x, y) is a function only with respect to the scene atmospheric scattering coefficient β(k). For convenience of expression, we rewrite (13) as

ρ(x, y) = F(β(k)),  (x, y) ∈ Ω(k).  (14)

Next, based on the proposed ASP, we can obtain the scene atmospheric scattering coefficient β(k) as

β(k) = arg min_β | s(F(β)) − 0.106 |,  (15)

where s(⋅) is the average saturation computing function. Note that (15) is a convex function, and we can obtain the optimal solution for the scene atmospheric scattering coefficient using the golden section method [47], setting the termination criterion to 10⁻⁴ according to [48, 49]. Once we estimate the scene atmospheric scattering coefficient β(k) for all scenes, the corresponding scattering map is obtained. Considering that the scene atmospheric scattering coefficient estimation is inherently a scene-wise process, we utilize the guided total variation model [33] to increase the edge-consistency property. Figure 8 shows four example demonstrations of hazy images (Figures 8(a) and 8(b) are homogeneous hazy images, and Figures 8(c) and 8(d) are inhomogeneous hazy images) and the obtained corresponding scattering maps. It can be noticed that the scattering maps are consistent with the corresponding hazy images.

According to (3), we can directly obtain the scene albedo ρ(x, y), since all the unknown coefficients are determined, including the atmospheric light A, the scene atmospheric scattering coefficient β(k), and the scene depth d(x, y). Then, the clear-day image can be recovered as J(x, y) = A · ρ(x, y).
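A sketch of the scene-wise coefficient estimation of (13)-(15) follows, using a plain golden-section search over an assumed bracket for β(k); the bracket limits, tolerances, and helper names are illustrative choices, not values from the paper.

```python
import numpy as np

ASP_TARGET = 0.106   # average saturation expected of a clear-day outdoor image

def mean_saturation(pixels):
    """HSV saturation averaged over an (N, 3) array of RGB pixels in [0, 1]."""
    mx, mn = pixels.max(axis=1), pixels.min(axis=1)
    sat = np.where(mx > 1e-6, (mx - mn) / np.maximum(mx, 1e-6), 0.0)
    return float(sat.mean())

def recover_scene_albedo(scene_pixels, A, scene_depth, beta):
    """Eq. (13) restricted to one scene: rho = (I - A) / (A * exp(-beta*d)) + 1."""
    t = np.exp(-beta * scene_depth)[..., None]
    rho = (scene_pixels - A) / np.maximum(A * t, 1e-6) + 1.0
    return np.clip(rho, 0.0, 1.0)

def scene_beta(scene_pixels, A, scene_depth, lo=0.01, hi=5.0, tol=1e-4):
    """Golden-section search for beta(k) driving the recovered scene's
    average saturation toward the ASP value, as in Eq. (15)."""
    phi = (np.sqrt(5.0) - 1.0) / 2.0
    objective = lambda b: abs(
        mean_saturation(recover_scene_albedo(scene_pixels, A, scene_depth, b)) - ASP_TARGET
    )
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if objective(c) < objective(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return 0.5 * (a + b)
```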
Experiments

Given a hazy image with N pixels, segmented into K scenes after j iterations, the computational complexity of the proposed method is linear in the number of pixels once the linear parameters w1, w2, w3, and w4 in (4) have been obtained via training. In our experiments, we implemented our method in MATLAB, and approximately 1.9 seconds is required to process a 600 × 400 pixel image on a personal computer with a 2.6 GHz Intel(R) Core i5 processor and 8.0 GB RAM.

In this section, we first demonstrate the experimental procedure for determining the clustering number in (10). Then we demonstrate the validity of the proposed ASP through qualitative and quantitative experimental comparisons. Next, in order to verify the effectiveness and robustness of the corresponding dehazing method, we test it on various real-world hazy images and conduct a qualitative and quantitative comparison with several state-of-the-art dehazing methods, namely those of Tarel et al. [20], Zhu et al. [24], He et al. [21], Ju et al. [33], and Meng et al. [23]. The parameters of our method are all given in Section 3, and the parameters of the five state-of-the-art dehazing methods are set to be optimal according to [20, 21, 23, 24, 33] for a fair comparison.

For quantitative evaluation and comparison, we adopt several extensively employed indicators, including the percentage of new visible edges, the contrast restoration quality, FADE, and the hue fidelity. According to [50], the first indicator measures the ratio of edges that are newly visible after restoration, and the second verifies the average visibility enhancement obtained by the restoration. The FADE indicator, proposed by [34], is an assessment of the haze removal ability. The hue fidelity indicator, presented by [51], is a statistical metric of the hue fidelity after restoration. Higher values of the two visibility indicators imply better visual improvement after restoration, lower FADE values indicate less haze residual (that is, a better dehazing ability), and a smaller hue fidelity value indicates that the dehazing method maintains better hue fidelity.

Experimental Comparison for the Clustering Number. As described in Section 3.2, we segment the hazy image into a group of scenes based on the haze density similarity. To determine a relatively balanced clustering number k, we conduct a large number of experiments on different hazy images using different values of the clustering number. We then compare the dehazing effect in terms of the qualitative comparison, the computational time, and the quantitative comparison using the three indicators.

Figure 9 shows five example experimental demonstrations of the qualitative comparison using different clustering numbers k, and Figures 10-13 show the corresponding quantitative comparison results for the three indicators and the computational time, respectively. Through qualitative comparison, we find that the dehazing effect improves when k increases from 1 to 3 and tends to stabilize afterwards. As we can see, when k equals 1 (which implies removing haze using the current atmospheric scattering model (1)), the haze residual is obvious (see Figure 9(b)); when k reaches 3 (see Figure 9(d)), haze is completely removed, the details of the scenes are adequately restored, the recovered color is natural and visually pleasing, and no overenhancement appears. However, the dehazing effect then tends to stay the same even as k continues to increase (see Figure 9(d) compared with Figures 9(e)-9(j)).

This observation is consistent with the quantitative comparison results, as shown in Figures 10-12: when k increases from 1 to 3, the values of the two visibility indicators rise (see Figures 10 and 11), which means more edges are recovered and better visibility enhancement is obtained. The FADE value decreases markedly (see Figure 12), which implies that more haze is removed when k increases from 1 to 3.
Despite the increased computational time, as shown in Figure 13, we consider this a reasonable trade-off for the better dehazing effect. Afterwards, with the increased clustering number (from 3 to 9), the values of the two visibility indicators tend to be stable, and the FADE value fluctuates and even rises slightly, as shown in Figures 10-12. Meanwhile, the computational time rises along with the increasing clustering number.

The observations for the five example experimental demonstrations are consistent with the results of more than 200 experiments. Consequently, we consider a clustering number of three to be a balanced choice for our method.

Experimental Comparison for ASP. In Section 3.3, we propose the ASP based on the statistics of extensive high-definition clear-day outdoor images, and the results indicate an average saturation of 0.106 with high probability for a high-definition clear-day outdoor image. To further verify the validity of this conclusion, we test and compare the dehazing effect using different values of the average saturation (0.01, 0.05, 0.106, 0.15, 0.2, 0.25, and 0.3) on another 200 hazy images. Four example experimental demonstrations are depicted in Figure 14. Through qualitative comparison, it is obvious that the dehazing magnitude is approximately proportional to the average saturation value, especially when the average saturation value rises from 0.01 to 0.15 (see Figures 14(b)-14(e)). However, when the average saturation value goes beyond 0.15, the recovered image looks dim and the color tends to be unnatural (see Figures 14(f)-14(h), the close-range scene in Test 1 and Test 2, the upper left corner and middle part in Test 3, and the long-range scene in Test 4). When the average saturation equals 0.106, as shown in Figure 14(d), our method unveils most of the details, recovers vivid color information, and avoids overenhancement, with minimal halo artifacts.

The corresponding quantitative comparisons for Figure 14 are shown in Figures 15-18. In addition to the three indicators used above, we also measure and compare the hue fidelity. As shown in Figures 15 and 16, when the average saturation value rises from 0.01, the values of the two visibility indicators increase, reach a high value when the average saturation value equals 0.106, and fluctuate slightly afterwards. This indicates that more newly visible edges are obtained and a better visual effect is achieved when the average saturation value reaches 0.106. This observation is consistent with Figure 17: the FADE value declines significantly and tends to be stable when the average saturation value equals 0.106, which implies that the best haze removal effect is achieved at 0.106. However, as shown in Figure 18, the hue fidelity value stays at a low level and then increases dramatically when the average saturation value exceeds 0.106, which means that color distortion inevitably appears. These observations on the four example experimental demonstrations are consistent with most of the remaining experimental results; thus the ASP is physically valid and able to handle various types of hazy images.

Qualitative Comparison. Considering that the five state-of-the-art dehazing methods are able to generate nearly perfect results on ordinary hazy images, a visual ranking of the methods is difficult to complete. Thus, we select six challenging images, including a homogeneous dense haze image (Figure 19(a)), a homogeneous image with large white or gray regions (Figure 20(a)), a homogeneous image with a sky region (Figure 21(a)), a homogeneous image with rich texture details (Figure 22(a)), an inhomogeneous long-range image (Figure 23(a)), and an inhomogeneous close-range image (Figure 24(a)). Figures 19-24 demonstrate the qualitative comparison of the five state-of-the-art dehazing methods with our method. The original hazy images are displayed in column (a); columns (b) to (g), from left to right, depict the dehazing results and the corresponding zoom-in patches of the methods of Tarel et al., Zhu et al., He et al., Ju et al., Meng et al., and our method, respectively.

As shown in Figure 19(b), Tarel et al.'s method is obviously unable to process the dense hazy image. This is due to the fact that Tarel et al.'s method uses a geometric criterion to decide whether an observed white region belongs to the haze or to a scene object; thus it is unreliable under dense haze conditions. In Figures 20(b) and 22(b), we can notice that Tarel et al.'s results are overenhanced, while Zhu et al.'s method leaves noticeable residual haze in the inhomogeneous images. Obviously, this is because the atmospheric scattering model adopted is invalid under inhomogeneous atmosphere conditions. Due to the inherent problem of the dark channel prior, He et al.'s method cannot be applied to regions where the brightness is similar to the atmospheric light (the sky region in Figure 21(d) is significantly overenhanced). Moreover, similar to Zhu et al.'s method, He et al.'s method tends to be unreliable when processing inhomogeneous hazy images; as we can see from the zoom-in patches of Figures 23(d) and 24(d), haze cannot be removed globally. Despite Ju et al.'s method producing a really good result, overexposure (see the zoom-in patches of Figures 22(e) and 24(e)) and color distortion effects (see the upper part of Figures 22(e) and 23(e)) appear, since the transmission estimation method is parameter sensitive. As shown in Figure 20(e), Ju et al.'s method recovers most of the scene objects but suffers from overenhancement. Meng et al.'s method is based on [21] and further improves the dehazing effect by adding a boundary constraint, but the problem of ambiguity between the image color and haze still exists and therefore fails for the sky region in Figure 21(f). In addition, Meng et al.'s results significantly suffer from overall color distortion, as illustrated in Figures 21(f), 22(f), and 23(f). In contrast, our method removes most of the haze, unveils the scene objects well, maintains the color fidelity, and eliminates the overenhancement, with minimal halo artifacts. Note that, by taking advantage of the proposed improved atmospheric scattering model, our method is effective for both homogeneous and inhomogeneous hazy images.

Quantitative Comparison. To quantitatively assess and rate the five state-of-the-art dehazing methods and our method, we compute the four indicators for the dehazing results of Figures 19-24 and list the corresponding results in Tables 2-5. For convenience, we indicate the top value in bold and italics and the second-highest values in bold. According to Table 2, our results yield the top value for Figure 24, which is a typical inhomogeneous hazy image. Although our results only achieve the second top value for Figures 19, 20, 22, and 23, this indicator must be interpreted with care, because a large number of newly visible edges can also result from noise amplification. For instance, Tarel et al.'s results have the highest value for Figures 19-24, but the relevant visual effects are either overenhanced or suffer from halo artifacts. Conversely, our results avoid most of these negative effects. As shown in Table 3, our dehazing results achieve the top values for both inhomogeneous hazy images (Figures 23 and 24) and the second top values for Figures 19 and 22, which verify the validity of the proposed atmospheric scattering model and the effectiveness of our method. Although we only obtain the third top values for Figures 20 and 21, our results are more visually pleasing. Although Ju et al.'s and Tarel et al.'s results achieve the top and the second top values for Figures 20 and 21, overenhancement is evident in Figures 20(b) and 20(e), haze is significant in Figure 21(b), and the corners of the sky region in Figure 21(e) tend to be dark. As shown in Table 4, the ability of the dehazing methods to maintain color fidelity can be assessed through these results. He et al.'s results get the best values for all the dehazing results, our results achieve the second-best values for three hazy images (Figures 19, 21, and 23), and our results are very close to the second-best score for Figures 20 and 22. Thus, our method generally maintains the color fidelity for most of the challenging hazy images. However, this indicator may only partially reveal the ability of a dehazing method and is not sensitive to overenhancement. For instance, He et al.'s results suffer from overenhancement (refer to Figure 21(d)), and Tarel et al.'s results are overenhanced for Figure 22 yet achieve the second-best score. Thus, exploration of an integrated indicator that is consistent with human visual judgement is necessary. Because the FADE indicator correlates well with human judgements of fog density [34], we compute the values of this indicator for all dehazing results and list them in Table 5. As shown in Table 5, our method outperforms the other methods for Figures 20-24 and has the second-best value for Figure 19. This finding verifies the outstanding dehazing effect of our method, and this conclusion is consistent with our observation of the qualitative comparison. Importantly, we prove the power of our method for dehazing inhomogeneous hazy images. We attribute this advantage to the proposed improved atmospheric scattering model and the corresponding dehazing method.

In Table 6, we provide a comparison of the computational times. Note that our method is significantly faster than most of the other methods and relatively close to the computation time of Zhu et al.'s method. The high efficiency of our method is primarily attributed to the linear model, which describes the haze density distribution and therefore simplifies the estimation procedure by using a scene-based method instead of a per-pixel or patch-based strategy.

Discussion and Conclusions

In this paper, we have proposed an improved atmospheric scattering model to overcome the inherent limitation of the current model. This improved model is physically valid and has an advantage with respect to effectiveness and robustness. Based on the proposed model, we further improve the effectiveness of the corresponding single image dehazing method, since we abandoned the assumption-based atmospheric scattering coefficient and estimated it via the proposed ASP. In this method, by means of the proposed haze density distribution map and the scene segmentation, the inhomogeneous problem can be converted into a group of homogeneous ones. Then, we further propose the ASP based on statistics of extensive high-definition outdoor images and estimate the scene atmospheric scattering coefficient via the ASP. Next, as a by-product of scene segmentation, we effectively increase the estimation accuracy of the atmospheric light by defining a scene weight assignment function. Experimental results verify the robustness of the proposed improved atmospheric scattering model and the effectiveness of the corresponding dehazing method.

Although we have overcome the inherent limitation of the current atmospheric scattering model and have identified a method for estimating the scene atmospheric scattering coefficient based on the proposed ASP, a problem remains unsolved. Despite extensive experimental assessment and comparison, finding the optimal solution for scene segmentation (the clustering problem) is a difficult mathematical task due to the variety of hazy images. To address this task, some machine learning methods can be considered, and we leave this problem for our future research.

Figure 1: Examples of hazy images with inhomogeneous atmosphere. Haze density is approximately the same within a box but varies between boxes.
Figure 2: (a) Various types of inhomogeneous and homogeneous hazy images. (b) Corresponding haze density distribution maps. (c) Relevant refined haze density distribution maps.
Figure 4: A challenging hazy image for locating the atmospheric light. The result of [21] is depicted in the green box, the result of [15] in the blue box, and our result in the red outlined areas.
Figure 5: Located potential atmospheric light using our method (in the red outlined areas).
Figure 7: (a) Average saturation probability distribution of hazy images. (b) Average saturation probability distribution of high-definition clear-day outdoor images.
Figure 8: Hazy images (homogeneous and inhomogeneous) and the corresponding scattering maps.
Figure 9: Five example experimental demonstrations of qualitative comparison using different clustering numbers. (a) Hazy image. (b-j) Left to right: the recovered images using values of k from 1 to 9, respectively.
Figure 11: Indicator values using different clustering numbers k.
Figure 12: FADE values using different clustering numbers k.
Figure 17: FADE values using different average saturation values.

Table 2: Values of the new-visible-edges indicator for the dehazing results of Figures 19(a)-24(a) using different methods.
Table 3: Values of the contrast restoration indicator for the dehazing results of Figures 19(a)-24(a) using different methods.
Table 4: Values of the hue fidelity indicator for the dehazing results of Figures 19(a)-24(a) using different methods.
Table 5: Values of the FADE indicator for the dehazing results of Figures 19(a)-24(a) using different methods.
9,854.6
2017-01-01T00:00:00.000
[ "Computer Science" ]
Gadolinium contrast agents: dermal deposits and potential effects on epidermal small nerve fibers

Small fiber neuropathy (SFN) affects unmyelinated and thinly myelinated nerve fibers, causing neuropathic pain with distal distribution and autonomic symptoms. In idiopathic SFN (iSFN), which accounts for about 30% of cases, the underlying aetiology remains unknown. Gadolinium (Gd)-based contrast agents (GBCAs) are widely used in magnetic resonance imaging (MRI). However, side effects including musculoskeletal disorders and burning skin sensations have been reported. We investigated whether dermal Gd deposits are more prevalent in iSFN patients exposed to GBCAs, and whether dermal nerve fiber density and clinical parameters are likewise affected. 28 patients (19 females) with confirmed or no GBCA exposure were recruited in three German neuromuscular centers. iSFN was confirmed by clinical, neurophysiological, laboratory and genetic investigations. Six volunteers (two females) served as controls. Distal leg skin biopsies were obtained according to European recommendations. In these samples, Gd was quantified by elemental bioimaging and intraepidermal nerve fiber (IENF) density via immunofluorescence analysis. Pain phenotyping was performed in all patients, quantitative sensory testing (QST) only in a subset (15 patients; 54%). All patients reported neuropathic pain, described as burning (n = 17), jabbing (n = 16) and hot (n = 11), and five QST scores were significantly altered. Compared to an equal distribution, significantly more patients reported GBCA exposure (82%), while 18% confirmed no exposure. Compared to unexposed patients and controls, significantly increased Gd deposits and lower z-scores of the IENF density were confirmed in exposed patients. QST scores and pain characteristics were not affected. This study suggests that GBCA exposure might alter IENF density in iSFN patients. Our results pave the road for further studies investigating the possible role of GBCAs in small fiber damage, but more investigations and larger samples are needed to draw firm conclusions.

Introduction

Small fiber neuropathy (SFN) is defined as damage to small unmyelinated C and thinly myelinated Aδ nerve fibers, causing neuropathic pain with distal distribution and autonomic symptoms [1]. The underlying aetiology of SFN includes metabolic, toxic, autoimmune or genetic disorders. However, the aetiology in approximately 30% of patients remains unknown and is characterized as idiopathic SFN (iSFN) [2][3][4][5]. Small nerve fiber function cannot be evaluated with routine neurological tests, such as nerve conduction studies (NCS), which only detect impairment of the fast-conducting A-α (motor NCS) and A-β (sensory NCS) fibers. Special methods, such as quantitative sensory testing (QST), are helpful and needed to evaluate small fiber function [6]. Analyzing intraepidermal nerve fiber density (IENFD) in small skin biopsies is a recommended morphometric technique to enable the diagnosis of SFN [7, 8]. Our recently published animal study suggested that exposure even to macrocyclic contrast agents can be associated with neuropathological findings resembling the small nerve fiber pathology seen in humans: mice treated with macrocyclic gadolinium (Gd)-based contrast agents (mGBCAs) and a linear GBCA (lGBCA) showed Gd deposition in the skin and a significant reduction of IENFD compared to controls.
Additionally, terminal axonal swelling was observed in the animals treated with the linear GBCA [9]. Alkhunizi et al. [10] showed that Gd could be found in the spinal cord and peripheral nerves of rats repeatedly exposed to linear and macrocyclic GBCAs. However, only the treatment with the lGBCA (gadodiamide) was associated with pain hypersensitivity.

Gd, a heavy metal of the lanthanide group, has been used as the base for contrast agents in magnetic resonance imaging (MRI) for the last three decades. As a free ion, Gd can inhibit calcium channels through competitive binding, thereby disturbing Ca²⁺ homeostasis and mitochondrial function [11]. Moreover, Gd activates and sensitizes the vanilloid receptor TRPV1, an important pain receptor in humans [12]. To overcome such toxicity, chelated forms of Gd, classified as linear or macrocyclic (ionic or non-ionic), have been manufactured and used in humans. In general, macrocyclic GBCAs are thermodynamically and kinetically more stable than linear GBCAs [13, 14].

Although GBCAs were supposed to have a convincing risk profile, two major crises followed: (1) GBCA-associated nephrogenic systemic fibrosis in patients with kidney insufficiency in 2006 [15], and (2) the description of Gd deposits in human brains after application of GBCAs in 2014 [16]. These side effects have been predominantly reported in patients treated with linear GBCAs. Moreover, "symptoms associated with gadolinium exposure" (SAGE), such as fatigue, musculoskeletal disorders and burning skin sensations, have been reported to be more prevalent for linear than for macrocyclic GBCAs [17]. Although the clinical and pathological consequences of Gd retention in the brain, and of SAGE in general, were still unclear, in 2017 the European Medicines Agency (EMA) embraced a precautionary position to safeguard patients, and the marketing authorization of linear GBCAs was suspended in the EU [18], with the exception of a few linear GBCAs for special applications. However, investigations on this topic are still of broad interest, as linear GBCAs continue to be used outside the EU, and macrocyclic GBCAs continue to be applied worldwide.

In this study, we aim to investigate whether skin Gd deposits are more prevalent in patients with iSFN who have been exposed to GBCAs, and whether an effect on IENFD and clinical parameters can be observed.

Subjects and samples

This prospective observational study was carried out in three German neuromuscular centers (Giessen, Ulm and Mainz). The inclusion criterion was definite SFN according to the NEURODIAB criteria [19]. Patients were included in the study if idiopathic SFN was confirmed by clinical, neurophysiological, laboratory and genetic investigations. Patients were excluded if they had clinical signs of large fiber involvement or pathological nerve conduction (NC) studies, or if an underlying aetiology for SFN was present. Besides metabolic causes, infectious diseases, and immune-mediated and paraneoplastic syndromes, genetic syndromes such as sodium channelopathies, Fabry disease and TTR amyloidosis were ruled out [20]. During a standardized interview, the iSFN patients were asked about GBCA exposure, how often GBCAs had been applied, and the time point of the last exposure. If the patients were unsure, the radiologist responsible for the MR examinations was contacted and the type, brand and volume of GBCA applied were noted. If the GBCA exposure remained unknown, the patients were excluded. The final sample consisted of twenty-eight patients fulfilling the NEURODIAB criteria of definite SFN.
Of these participants, 23 iSFN patients (82%) reported exposure to GBCAs (iSFNe) and 5 (18%) declared that they had never been exposed to GBCAs (iSFNne). These frequencies deviate significantly (Chi² = 11.6, p < 0.001) from the expected equal distribution (14 cases per exposure group). Quantitative sensory testing (QST) could be performed in 15 patients (54%). The distribution across the GBCA exposure groups is given in Table 1. The reasons for the MRI examinations were heterogeneous, including imaging of the brain, joints, and pelvis. The gender and age distribution of these groups, as well as the respective values of the controls, are given in Table 1. All patients received a skin biopsy at the distal leg according to recommendations [8]. Additional skin biopsies from six healthy subjects without a history of GBCA exposure or neuropathic pain were included in the study.

Quantifying intraepidermal nerve fiber density (IENFD)

To determine the intraepidermal nerve fiber density (IENFD), standard procedures were performed. From each biopsy, sections were stained with an antibody against Protein Gene Product (PGP) 9.5, a neuron-specific protein that labels axons in the peripheral nervous system [21, 22]. IENFD was determined according to published counting recommendations. For all analyses, IENFD values were z-transformed, z_individual = (IENFD_individual − IENFD_reference) / SD_reference, using the age- and sex-matched reference values. IENFD was considered significantly "reduced" when it was below the 5th percentile of the reference data (z_IENFD < −1.64) [7]. The investigators were blinded to the samples during the morphometric analysis.

Elemental bioimaging of gadolinium deposits in skin samples

From each skin biopsy sample, 10 µm thick cryosections were prepared and subjected to an elemental bioimaging procedure that can detect Gd in different organs [23] and that has been used in a previous animal study [9]. The skin Gd concentration was determined using laser ablation-inductively coupled plasma-mass spectrometric imaging (LA-ICP-MSI), as shown in Fig. 1. Laser ablation allows a subsequent spatially resolved elemental analysis via inductively coupled plasma-triple quadrupole mass spectrometry (ICP-TQMS), especially for metals in various tissues [23, 24]. A laser spot size of 25 µm and a corresponding stage speed of 100 µm/s were selected for high-throughput ablation. The formed aerosol is atomized in the plasma and analyzed in the mass spectrometer, partly after reaction to the detected species (e.g., ¹⁵⁸Gd¹⁶O⁺) in the triple quadrupole mass analyzer. Using an appropriate software package, the transient signal of the ICP-MS is used to reconstruct the spatial distribution of the analytes within the biopsy samples (Fig. 1).

To evaluate samples with overall low expected Gd concentrations, as in the case of human skin biopsies, a script-based semi-quantitative approach was developed, which introduces the Normalized Event Rate (NER) as an indicator of the real Gd concentration. Utilizing this value, all patients were classified according to the likelihood of a prior GBCA injection: lower than 3× the standard deviation (SD) of the controls (unlikely), greater than 3×SD and lower than 10×SD of the controls (possible), and greater than 10×SD of the controls (likely). For further information regarding the calculation of the NER, please refer to the supplemental material (Supplemental Material 1).
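A minimal sketch of the two quantitative classifications described above (the IENFD z-transformation and the NER-based exposure classes) might look as follows; whether the control mean is subtracted before comparison against the SD multiples is detailed only in the supplement, so the code follows the wording of the text literally.

```python
import numpy as np

def ienfd_z_score(ienfd, reference_mean, reference_sd):
    """z_individual = (IENFD_individual - IENFD_reference) / SD_reference.
    A value below -1.64 (5th percentile) is treated as significantly reduced."""
    return (ienfd - reference_mean) / reference_sd

def classify_gd_exposure(ner, control_ners):
    """Classify a patient's Normalized Event Rate against the control distribution:
    unlikely : NER <  3 * SD(controls)
    possible : 3 * SD <= NER < 10 * SD(controls)
    likely   : NER >= 10 * SD(controls)
    """
    sd = np.std(control_ners, ddof=1)
    if ner < 3 * sd:
        return "unlikely"
    if ner < 10 * sd:
        return "possible"
    return "likely"
```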
To analyze possible group differences or associations, we used both the frequency of patients within the classes (unlikely, possible, and likely) and the individual NER as a more quantitative estimate of Gd in the tissue. Quantitative sensory testing (QST) Quantitative sensory testing (QST) was performed according to the protocol of the German Research Network on Neuropathic Pain (DFNS) in 18 (55%) patients with iSFN [25,26]. iSFN patients with or without Gd were compared to the normative data set of the German network on neuropathic pain (DFNS) and with each other [27]. A total of 11 parameters were used in the analyses: the thermal detection thresholds for the perception of cool (CDT) and warm (WDT), the thermal pain thresholds (cold pain threshold [CPT]; heat pain threshold [HPT]), the mechanical detection threshold (MDT), the mechanical pain threshold (MPT), a stimulus-response function for mechanical pain sensitivity (MPS), pain in response to light touch (dynamic mechanical allodynia [DMA]), the vibration detection threshold (VDT), the wind-up ratio (WUR) to assess pain summation to repetitive pinprick stimuli, and the pressure pain threshold (PPT) at the thenar eminence. QST data were z-transformed into a standard normal distribution (zero mean, unit variance) for each single parameter to allow a comparison of QST parameters independent of their physical units, using the following expression (except for DMA): Z = (value_patient − mean_controls)/SD_controls. Z-scores below zero indicate a loss of function; z-scores above zero indicate a gain of function. Thus, elevations of thresholds (CDT, WDT, HPT, CPT, PPT, MPT, MDT, and VDT) result in negative z-scores, whereas increased ratings (MPS and WUR) result in positive z-scores. Pain questionnaires The current, maximum, and mean pain intensity of the last 4 weeks was obtained on a numeric rating scale in every SFN patient (anchors: 0 = no pain; 10 = worst pain imaginable). Pain quality and distribution were assessed using the German Pain Questionnaire of the German Pain Society, a section of the International Association for the Study of Pain [28]. Statistical analysis For QST parameters, comparisons to the normative data were performed using t-tests as recommended [27]. Since only 2 patients reported DMA, no further analysis was calculated for this parameter. Due to the small sample size, a bootstrapping procedure (number of samples = 1000) was used for the one-sample t-test using SPSS version 28.0 for Mac OS X (IBM Corp. Released 2021. IBM SPSS Statistics for Macintosh, Version 28.0. Armonk, NY, USA: IBM Corp). The iSFN patients with (iSFNe; exposed) and without (iSFNne; not exposed) confirmed GBCA exposure were compared by non-parametric Mann-Whitney U tests. Statistical evaluation of the LA-ICP-MSI-derived NERs and the z-transformed IENFD values was performed by non-parametric tests (Kruskal-Wallis test followed by Dunn's multiple comparisons test) using GraphPad Prism version 9.4.0 for Mac OS X (GraphPad Software, San Diego, CA, USA). Nominal or ordinal variables were analyzed by frequency tables and Chi² tests as well as rank correlation analyses using SPSS version 28.0. For all analyses, the significance criterion was set to p = 0.05 and multiple comparisons were adjusted to the number of comparisons (Bonferroni correction). Data availability Data are available in the tables.
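As a minimal sketch of the QST z-transformation and the exposed-versus-unexposed comparison described above (assuming SciPy and entirely invented example values; this is not the SPSS/Prism workflow actually used by the authors):

```python
import numpy as np
from scipy import stats

def qst_z(value_patient, mean_controls, sd_controls):
    """z-transform one QST parameter against the DFNS normative data
    (after sign alignment, loss of function < 0, gain of function > 0)."""
    return (value_patient - mean_controls) / sd_controls

# Hypothetical z-scores for one QST subtest (e.g., CDT) in the two exposure groups
z_isfne  = np.array([-2.1, -1.4, -3.0, -0.8, -1.9])   # GBCA exposed (illustrative)
z_isfnne = np.array([-1.1, -2.4, -0.5])               # not exposed (illustrative)

# One-sample comparison of the pooled patient z-scores against the normative mean of zero
t, p_norm = stats.ttest_1samp(np.concatenate([z_isfne, z_isfnne]), popmean=0.0)

# Exposed vs. non-exposed: non-parametric Mann-Whitney U test
u, p_group = stats.mannwhitneyu(z_isfne, z_isfnne, alternative="two-sided")
print(f"t = {t:.2f}, p = {p_norm:.3f}; U = {u:.1f}, p = {p_group:.3f}")
```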
Patients' GBCA exposure and pain characteristics All patients had length-dependent clinical signs and symptoms of small nerve fiber damage and normal sural nerve conduction studies. 23 patients showed significantly reduced IENFD (z ≤ −1.64). All five patients with normal IENFD presented with pathological thermal detection thresholds (z ≤ −1.96). Therefore, definite iSFN was diagnosed in all 28 included patients according to the NEURODIAB criteria [19]. The standardized interviews about type, brand, duration since the last application, and injected volume of GBCA yielded detailed data for 15 (65%) of the 23 iSFNe patients. This information, together with the individual results of the elemental bioimaging and the z-scores of the intraepidermal nerve fiber density (IENFD), is given in Table 2. The QST examinations that were available for 15 of our iSFN patients (54%) confirmed the findings of other SFN studies [29] (see Fig. 2). The iSFN patients showed lower z-scores in the following subtests: CDT, WDT, VDT and PPT. The PPT z-score was significantly higher when compared to normative data [27]. Comparing QST data between iSFNe and iSFNne (red vs. green) showed no significant difference. One iSFNe and one iSFNne patient reported DMA. Patients' elemental bioimaging of gadolinium deposits All GBCA-related analyses were restricted to the 23 iSFNe patients, the 5 iSFNne patients and the 6 unexposed controls. The individual results of the elemental bioimaging analysis for all 28 iSFN patients and the 6 controls enrolled in this study can be found in Table 3 (table legend: f = female, m = male; a: yes = confirmed exposure, no = no exposure, c = controls; b: compared to the reference values of Lauria et al., reduced = below the 0.05 quantile; c: Gd exposure according to the classifier, see Fig. 1). The application of the classifier approach (color-coded grouping in Fig. 3A) revealed that iSFNe patients were labelled as possible or likely, while the iSFNne patients and controls were mainly classified as unlikely. Statistically, the three groups (x-axis in Fig. 3) differed significantly (Chi²: 24.06; p < 0.001) with respect to the results of the NER-based classification (e.g. only possible or likely cases in the iSFNe group). This association was also confirmed by the ordinal-by-ordinal correlation, resulting in a Kendall's tau-b of 0.54 (p < 0.001), indicating that a likely classification of the NER obtained in the elemental bioimaging analyses was associated with being in the iSFNe group. Figure 3A also shows the quantitative results of the elemental bioimaging analyses of the Gd signals in the skin biopsy samples. Here, the mean rank values of the normalized event rates (NER) of the three groups were significantly different (Kruskal-Wallis statistic: 15.0; p < 0.001). Dunn's multiple comparison tests yielded significantly higher Gd deposits in skin samples of the iSFNe (GBCA exposed) patients compared to controls, but no significant difference to the iSFNne (GBCA not exposed) patients could be statistically confirmed. iSFNne patients and healthy controls did not differ significantly with respect to their NERs. However, due to the small number of patients with more detailed information about the type, dose, or duration since the last GBCA treatment (see Table 2), an in-depth analysis of this association was not possible (Fig. 4). Nevertheless, neither the type of GBCA (linear vs. macrocyclic) nor the duration since the last application of the GBCA appeared to be associated with the NER obtained by the elemental bioimaging approach.
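The categorical and rank-based tests reported in this paragraph (a Chi² test on the classifier frequencies, Kendall's tau-b for the ordinal association, and a Kruskal-Wallis test on the NERs) can be reproduced generically with SciPy, as in the hedged sketch below; the counts and NER values are invented for illustration and do not reproduce the study data, and Dunn's post-hoc tests would require an additional package such as scikit-posthocs.

```python
import numpy as np
from scipy import stats

# Hypothetical classifier counts (rows: iSFNe, iSFNne, controls;
# columns: unlikely, possible, likely) -- illustrative only.
table = np.array([[0, 10, 13],
                  [4,  1,  0],
                  [5,  1,  0]])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

# Ordinal-by-ordinal association (Kendall's tau-b) using per-subject integer codes
group = [0] * 23 + [1] * 5 + [2] * 6                    # 0 = iSFNe, 1 = iSFNne, 2 = controls
label = [2] * 13 + [1] * 10 + [0] * 4 + [1] * 1 + [0] * 5 + [1] * 1  # 0/1/2 = unlikely/possible/likely
tau, p_tau = stats.kendalltau(group, label)

# Group comparison of the continuous NER values (Kruskal-Wallis)
ner_e  = np.random.gamma(9, 1, 23)   # placeholder NERs, exposed
ner_ne = np.random.gamma(2, 1, 5)    # placeholder NERs, not exposed
ner_c  = np.random.gamma(2, 1, 6)    # placeholder NERs, controls
H, p_kw = stats.kruskal(ner_e, ner_ne, ner_c)
print(chi2, tau, H)
```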
Patients' intraepidermal nerve fiber density (IENFD) The z-transformed IENFD values of the three groups included in the GBCA-related analyses (iSFNe, iSFNne, controls) are shown in Fig. 3B. In Fig. 3B, significantly reduced values are shown as red dots (z-score < −1.64) and normal IENFD values as black dots (z-score > −1.64). Of all patients with iSFN (iSFNe and iSFNne), 85% had a significantly reduced IENFD when compared to the reference data. In the control subjects and most of the iSFNne patients, the IENFD z-scores were in the normal range of the reference data [1,7]. The non-parametric analysis revealed a significant difference among the three groups (Kruskal-Wallis statistic: 12.9; p < 0.001), and Dunn's multiple comparisons showed that the IENFD z-scores of the iSFNe patients were significantly lower than in controls. Even though Fig. 3B shows a large difference in the IENFD z-scores between the iSFNe and iSFNne groups, Dunn's multiple comparisons could not confirm a significant difference between the two groups. The analysis of the binary IENFD scores (significantly vs. not significantly reduced; red vs. black dots in Fig. 3B) revealed that among the iSFNe patients (n = 23), 21 patients (91.3%) showed significantly reduced IENFD z-scores, while among the iSFNne patients (n = 5) only two patients (40.0%) had an IENFD z-score below −1.64. Accordingly, the respective odds ratio quantifying the strength of the association between GBCA exposure and significant IENFD reduction is 15.8 (95% CI 1.6-157.6; p = 0.01), indicating an almost 16-fold higher risk for reduced IENFD z-scores in iSFN patients who previously received a GBCA during an MRI examination. Testing the association of the z-transformed IENFD with the QST subtests by rank correlations (Spearman's rho) showed only one significant correlation, of rho = −0.57 (p = 0.03), between the IENFD z-score and the mechanical pain threshold (MPT). However, when adjusting for multiple comparisons, this association was no longer significant. Discussion Gadolinium has been used for contrast agents in magnetic resonance imaging (MRI) for decades. In recent studies, Gd deposits have been detected in several organs, including the brain [14,15]. In animal studies, a neurotoxic effect of Gd on small nerve fibers has been reported [10], but human studies are lacking. In our study, we analyzed dermal Gd deposits from patients with confirmed iSFN and healthy volunteers with LA-ICP-MSI, which allows a spatially resolved element analysis for metals in various tissues [23,24]. With this method, small amounts of Gd can be detected [24]. We were able to detect higher dermal Gd deposits in iSFN patients with confirmed GBCA exposure compared to healthy controls without exposure to GBCA. A modulating effect of Gd deposits on small nerve fibers in patients with iSFN can be assumed. The iSFN patients and controls were not matched with respect to age; for the IENFD, an impact of this difference was avoided by using age-adjusted z-scores. As gadolinium is a rare earth element, environmental exposure is unlikely and, therefore, the results of the normalized event rates of Gd deposits obtained by LA-ICP-TQMS analysis might not be affected by the age differences. The non-significant difference between the iSFNne patients and controls in the NER might serve as support for this. However, further studies with larger samples are needed to confirm this age independence of Gd deposits in human skin. In our study, we included only patients with iSFN that was diagnosed after a thorough work-up.
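The odds ratio quoted above follows directly from the reported 2 × 2 counts (21 of 23 exposed vs. 2 of 5 unexposed patients with reduced IENFD); the short sketch below illustrates the standard log-odds (Woolf) confidence interval, which recovers the reported 15.8 (1.6-157.6), although the authors do not state which method they actually used.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with Woolf (log-normal) confidence interval for a 2x2 table:
        exposed:   a reduced IENFD, b not reduced
        unexposed: c reduced IENFD, d not reduced
    """
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# 21/23 exposed vs. 2/5 unexposed patients with significantly reduced IENFD
print(odds_ratio_ci(21, 2, 2, 3))   # ~ (15.75, 1.6, 157.6), matching the reported values
```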
The aim of the study was to analyze the presence of dermal Gd deposits in patients with iSFN, and therefore we included patients with the diagnosis of definite iSFN. The analysis of IENFD is the gold standard for the diagnosis of SFN. However, QST analysis is still justified, and pathological results allow the diagnosis of definite SFN even if the IENFD is still within the normal range [19]. It is known from previous studies that IENFD further decreases over time, resulting in a reduced IENFD at a later stage of the disease [30]. The recruitment of only definite SFN cases, together with our rigid diagnostic regimen to identify truly idiopathic SFN and minimize the risk of other diseases causing the reported length-dependent symptoms of small fiber damage, are the prerequisites for discussing a possible impact of Gd deposits on small fiber function. Therefore, our choice of included patients allows valid conclusions. Other underlying aetiologies for SFN were thoroughly ruled out [5]. Clinical criteria such as neuropathic pain were confirmed in all patients with the pain questionnaire. QST was performed in a subset of patients and showed pathological results in all of them. This procedure allows us to argue for a possible role of Gd or GBCA deposits in small fiber damage in our iSFN patients, as suggested in animal studies [10]. Accordingly, in skin biopsies with more likely dermal Gd deposits, the IENFD was significantly lower compared to patients with no exposure or controls in whom no likely Gd deposits could be confirmed by our semi-quantitative elemental bioimaging approach. The time of symptom onset was very heterogeneous, and the IENFD was not significantly reduced in all skin samples. However, all patients presented with length-dependent neuropathic symptoms as the main diagnostic parameter of SFN. Due to the high variability of Gd exposure in terms of dosage and time, no correlation of dermal Gd deposits with exposure could be observed. A study analyzing skin biopsies a short time after Gd exposure would therefore be interesting. Our data cannot provide the pathophysiological mechanism underlying the small fiber damage after Gd exposure. Possible explanations remain speculative and mainly include the unknown mechanisms of Gd release from the macrocyclic GBCAs that are routinely used nowadays. However, our findings render likely the suggestion that GBCAs might mediate neurotoxicity in susceptible patients. Further studies identifying factors that increase the risk of GBCA side effects on the peripheral nervous system would be of great benefit.
Fig. 2 Individual values, mean and standard deviation of the z-scores for the QST subtests obtained from 15 iSFN patients. Red dots indicate patients with confirmed GBCA application (iSFNe), while green dots represent patients without GBCA exposure (iSFNne). According to Magerl et al. (2010), t-tests were calculated to test significant differences against a matched control group created as a fictitious subpopulation of the reference group, and stars above the subtest labels indicate a Bonferroni-corrected p value lower than 0.0045.
Fig. 3 Chemical and neuropathological analyses of the skin biopsy samples of the iSFN groups (iSFNe: exposed to GBCA; iSFNne: not exposed to GBCA) and controls showing (A) the individual normalized event rates of Gd deposits obtained by LA-ICP-TQMS analysis, their medians (black lines) as well as the classifier results (red: unlikely; blue: possible; green: likely) and (B) individual and median (black lines) z-scores of intraepidermal nerve fiber density (IENFD). Z-scores lower than −1.64 indicate a significant reduction (red dots in panel B) compared to normative data. Groups were compared by Dunn's multiple comparisons tests (*p < 0.05, **p < 0.01, ***p < 0.001).
Fig. 4 Individual measures and median values (black lines) of the normalized event rates derived from the LA-ICP-TQMS Gd detection in the skin biopsies from iSFN patients with exposures to linear or macrocyclic GBCAs only, mixed GBCA exposures, and those who were never exposed to any type of GBCA. The numbers close to the dots indicate the individual duration, or range of durations (in years), from the last GBCA injection to the skin biopsy. The classifier results (red: unlikely; blue: possible; green: likely) were used to label the individual NERs.
In iSFN patients with reported Gd exposure we were able to detect dermal Gd deposits. For this purpose, an elemental bioimaging approach was established using the LA-ICP-MSI method, which requires only minimal sample preparation and low amounts of tissue. For the application described here, a script-based semi-quantitative approach was developed utilizing the background Gd sensitivity as a normalization approach, and the established NERs are expected to be comparable between different instruments and studies. Taken together, we describe a capable tool to detect traces of Gd in small tissue samples, which may not only be used for the detection of Gd but may also be transferred to the detection of other elements with a low natural background in skin biopsies, such as platinum (Pt) from anti-cancer drugs [31]. Surprisingly, we could not observe clear differences between macrocyclic and linear GBCAs with respect to the Gd deposition. In the animal studies [9,10] there is clear evidence that (1) linear GBCAs release more Gd3+ ions into the tissue (higher deposition), and (2) due to the higher amount of free Gd ions, the toxicity on small nerve fibers is more severe. However, due to the retrospective coding of the GBCA type in our study and three patients with mixed GBCA types in our iSFN sample, the most prevalent type of GBCA exposure was only known for 15 patients (45%). In these patients, the IENFD was significantly reduced; however, the Gd deposition (NER classified as possible or likely) was not associated with this SFN pathology. Thus, to further investigate the proposed neurotoxic mechanism and any adverse effects on distal small nerve fibers in humans, larger cohorts with detailed information about the GBCA exposures are needed. Although the subgroup of iSFNne is rather small, no differences in QST parameters or pain characteristics could be detected between the iSFN groups. This is in accordance with previous studies showing that neuropathic pain derives from lesions of the somatosensory nervous system and cannot be linked to a specific underlying disease [32]. Moreover, no correlations of QST and Gd deposits could be shown in our patients. Further studies with a larger number of patients are needed to elucidate a possible connection. Limitations Any attempt to establish a dose-response relationship using either the type of GBCA or the duration since the last GBCA exposure as a predictor for the Gd deposition in the skin biopsies or the neuropathological findings was limited by the small number of patients providing the necessary information.
On a purely descriptive level, no systematic association became obvious that could be interpreted as a causal link between crucial characteristics of GBCA exposure (e.g. linear GBCAs releasing more Gd3+) and quantitative estimates of Gd deposition in tissue (NER). Furthermore, these crucial exposure characteristics could not be used to prove any causality of a Gd3+-related reduction of the IENFD in iSFN patients. Moreover, studies investigating the relation between the Gd skin deposits and the time and type of GBCA administration could further increase the application range of this method. Here, better documentation of the GBCA exposures of the iSFN patients is needed. Conclusions The previous findings of in vivo experiments with GBCA-exposed mice [9] seem to be relevant for humans, but further studies are needed to shed light on the mechanisms underlying this possible adverse side effect of GBCAs. Nevertheless, our study showed that in iSFN patients with exposure to GBCAs, dermal Gd deposits could be detected and the IENFD was significantly reduced compared to iSFN patients without exposure to GBCAs and to controls. However, the design of our study is not suitable to conclude that GBCA exposure is a risk factor for the development of iSFN. Here, a large prospective cohort study would be needed to clarify a causative role of Gd deposition in the etiology of iSFN.
6,283.2
2023-05-04T00:00:00.000
[ "Medicine", "Biology" ]
HiJAKing SARS-CoV-2? The potential role of JAK inhibitors in the management of COVID-19 JAK kinase inhibitors are being investigated as a way of managing cytokine storm in patients with severe COVID-19. Soon after infection with the novel coronavirus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), began to spread around the world, the search for potentially useful drugs, while awaiting development of a vaccine, was on. Once SARS-CoV-2 enters the host, drugs capable of enhancing the innate or adaptive immune response could assist in controlling dissemination of the virus. Later on, in severely ill patients, hyperimmune activation leading to cytokine storm occurs and is a potential target for modulation to prevent further lung damage. Therefore, currently approved drugs used to treat rheumatologic and inflammatory diseases, such as biological disease-modifying anti-rheumatic drugs (DMARDs), were immediately considered as candidate agents to be repurposed for use in patients with severe coronavirus disease 2019 (COVID-19). The angiotensin-converting enzyme 2 (ACE2) transmembrane protein provides the pathway for SARS-CoV-2 to infect epithelial cells. ACE2 is highly expressed on lung alveolar epithelial cells, which explains why the lungs are particularly vulnerable to injury following SARS-CoV-2 infection (1). ACE2 also exerts a natural protective effect against acute lung injury mediated by the renin-angiotensin system, and its down-regulation after virus entry contributes to the onset of the acute respiratory distress syndrome (ARDS) (1). The immune responses seen in severe COVID-19 resemble what has been observed in other beta-coronavirus infections, with the inflammatory response driven by CD4+ and CD8+ T cells, natural killer (NK) cells, monocytes, and the cytokines that these immune cells secrete. Among 41 patients with laboratory-confirmed COVID-19 pneumonia described by Huang and colleagues (2), the 13 patients requiring intensive care showed significantly higher serum levels of many cytokines and chemokines, including interleukin-2 (IL-2), IL-7, IL-10, granulocyte colony-stimulating factor (G-CSF), tumor necrosis factor-α (TNF-α), and IL-1Ra, compared with those who were not in an intensive care unit (ICU). In comparison with healthy controls, they also showed higher serum levels of IL-6, IL-9, IL-13, granulocyte-macrophage CSF (GM-CSF), interferon-γ (IFN-γ), IL-1β, IL-8, and IL-17 (2). In a study of 50 patients with COVID-19, Hadjadj et al. (3) found an increase in peripheral blood of IL-6 and IL-6-induced genes, TNF-α and TNF-α pathway-related genes, as well as IL-10. Therefore, the use of Janus kinase (JAK) inhibitors (JAKi) targeting IL-6 and other cytokines with JAK-dependent signaling is one way to restrain the excessive level of cytokine signaling. Even if cytokines measured in the serum do not reflect their tissue concentrations, high tissue levels of IFN-γ and other cytokines may account for ACE2 down-regulation contributing to the progression of SARS-CoV-2 pneumonia. IL-1β, IL-6, and IL-12 seem to be the main drivers of the hyperimmune response leading to cytokine storm via activation of T helper 1 (TH1) and NK cells and release of chemokines, which, in turn, recruit more neutrophils and inflammatory macrophages, further enhancing lung inflammation (4).
In this context, the inflammatory response resembles the cytokine release syndrome (CRS) observed in patients receiving chimeric antigen receptor (CAR) T cell therapy and bispecific T cell-engaging antibodies. In CAR T cell-driven CRS, elevated secretion of IL-6 by monocytes and macrophages is thought to be the main driver of symptoms. The clinical signs of CRS are life-threatening complications (fluid-refractory hypotension and cardiac dysfunction, respiratory failure, coagulopathy, and renal and liver failure), which can be effectively treated with anti-cytokine therapy targeting IL-6-IL-6R signaling. More than 30 clinical trials with the anti-IL-6R antibodies tocilizumab and sarilumab are ongoing, and an open-label pilot study of tocilizumab in hospitalized patients with severe COVID-19 demonstrated improvements in respiratory and laboratory parameters (5). Other classes of tyrosine kinase inhibitors, such as Bruton's tyrosine kinase (BTK) inhibitors, are also being tested for their ability to interfere with the CRS observed in patients with severe COVID-19; preclinical studies and case series have suggested that the BTK inhibitor ibrutinib may provide protection against severe lung injury (6). Baricitinib, a targeted synthetic DMARD approved for rheumatoid arthritis (RA), has been proposed as a possible therapeutic option for COVID-19. Baricitinib is a first-generation JAKi targeting JAK1 and JAK2 enzymatic activity, which was also predicted, based on artificial intelligence algorithms, to inhibit the AP2-associated kinase 1 (AAK1) and the cyclin G-associated kinase (GAK), members of the numb-associated kinase (NAK) family involved in clathrin-mediated endocytosis (7). Therefore, mechanistically, baricitinib might impair SARS-CoV-2 endocytosis and the early stages of virus spread while also inhibiting the signaling of several cytokines involved in the pathogenesis of viral pneumonia and the emergence of cytokine storm (Fig. 1). On the other hand, IFN-γ and IL-4, two cytokines whose signaling relies on JAKs, inhibit ACE2 expression in vitro at the transcriptional level, leading to a reduction of infection, replication, and excretion of the related SARS-CoV in Vero E6 cells (8). Several clinical trials were initiated to examine the use of baricitinib or other JAK inhibitors such as ruxolitinib and tofacitinib in patients with COVID-19. Initial results are available from a small open-label study of patients with mild-to-moderate pneumonia. Twelve patients received baricitinib (4 mg for 14 days) in addition to standard of care (ritonavir-lopinavir and hydroxychloroquine) and were compared with 12 age- and sex-matched controls who received standard of care alone (9). The primary endpoint of the study was the percentage of patients treated with baricitinib requiring intensive care admission; secondary endpoints included the clinical course of the disease and treatment-induced adverse events (9). Baricitinib-treated patients achieved significantly greater improvements in clinical signs (fever, cough, and dyspnea), lung function tests, and C-reactive protein (CRP) values. None of the patients in the baricitinib arm was admitted to intensive care, compared with 33% of the controls; moreover, after 2 weeks, 7 of 12 (58%) of the baricitinib-treated patients compared with 1 of 12 (8%) of the controls were discharged (9). Despite these initial positive findings, there are several JAKi-associated issues to be considered.
JAKi administration is known to result in an increased risk of varicella zoster reactivation in patients with RA, which may raise concerns about the inhibition of antiviral cytokines, mainly type I IFN. However, SARS-CoV-2 takes advantage of an escape mechanism to block the IFN regulatory pathway at an early step; the highly pathogenic beta-coronaviruses encode viral proteins that potently antagonize type I IFN production or degrade type I IFN mRNA. Chu et al. (10) showed that SARS-CoV-2 did not significantly trigger the expression of IFN and IFN-related genes in human lung tissues infected with SARS-CoV-2. Moreover, the type I IFN response is high in patients with mild to moderate COVID-19, but it is profoundly reduced in critically ill patients, who display low plasma levels of IFN-α and IFN-β and downregulation of IFN-stimulated genes (3). The preliminary clinical evidence that oral administration of baricitinib provides a benefit in COVID-19 (8) could be due to impairment of viral entry and/or to control of the CRS-like excessive inflammation occurring in these patients (5). If the latter is a major component of the therapeutic efficacy, it will be important to assess circulating levels of these cytokines, or at least CRP levels, which are known to correlate with IL-6, before and after the initiation of the therapy. In light of the ability of JAKi to inhibit multiple cytokines, it would be expected that patients with the highest serum levels of IL-6 and other inflammatory cytokines would be the ones that benefit the most from this class of drugs. Furthermore, the incidence of SARS-CoV-2 infection among patients with RA treated with maintenance baricitinib, compared with patients with RA treated with conventional synthetic or biological DMARDs, may provide clues as to whether the antiviral effect suggested by Stebbing and colleagues (7) provides pre- and early post-exposure protection to those receiving baricitinib. As public health officials work to resolve this global health threat as quickly as possible, physicians and patients want to be confident that any repurposed drugs adopted for widespread use in patients infected by SARS-CoV-2 have acceptable safety profiles. The reduction of NK cells detected during baricitinib and tofacitinib treatment suggests that use of these drugs in patients with COVID-19 should be limited in duration. Another possible concern for using JAKi in COVID-19 is the thromboembolic risk associated with this class of drugs. At the later critical stages of the disease, most patients with COVID-19 develop coagulation abnormalities, which are associated with a poorer prognosis. The mechanisms explaining the coagulopathy are not fully clear, with direct effects of SARS-CoV-2 on endothelial cells, effects of the cytokine storm, and anti-phospholipid antibodies being potential contributors. In this scenario, any drugs with the potential of increasing thrombotic risk should be used cautiously. Within the next few weeks and months, the clinical approach to treating patients with COVID-19 will surely be refined, guided by the results from ongoing prospective, randomized trials, as will the role for JAKi as well as other small molecules that have the potential to interfere with the cytokine cascade driving CRS.
2,070.6
2020-05-08T00:00:00.000
[ "Biology" ]
Near-infrared-induced electron transfer of a uranyl macrocyclic complex without energy transfer to dioxygen I. General Methods and Experimentation Instruments: UV-Vis-NIR spectral studies were carried out both in Osaka and Austin using JASCO and Varian Cary 5000 spectrophotometers, respectively. Unless otherwise indicated, all UV-Vis-NIR spectroscopic measurements were carried out at room temperature in dichloromethane. 1H NMR spectra were recorded at room temperature on a Varian DirectDrive 400 MHz instrument using deuterated chloroform as the solvent. One salient feature of expanded porphyrins is the presence of Q-like bands with high extinction coefficients in the near-infrared (NIR) portion of the electromagnetic spectrum [24]. We report here that the uranyl macrocyclic complex of cyclo[1]furan[1]pyridine[4]pyrrole (1; Fig. 1), which has an absorption maximum at 1177 nm [25], can undergo NIR-induced electron transfer via the triplet excited state with electron donors and acceptors without energy transfer to dioxygen to produce singlet oxygen. This finding is ascribed to the fact that both the one-electron oxidized radical cation and one-electron reduced radical anion forms of 1 are sufficiently stable to allow for characterisation by spectroscopic means. The uranyl macrocyclic complex (1) was synthesised as reported previously [25]. Complex 1 has 22 π-electron aromatic heteroannulene character, exhibiting NIR absorption bands at 846 and 1177 nm as shown in Fig. 2 (black line). The one-electron oxidation of 1 with magic blue resulted in formation of the radical cation of 1 (1+), which has NIR absorption bands at 1280, 1732 and 1930 nm (Fig. 2, red, and Fig. S1 in ESI†). These changes in the absorption were also seen under conditions of spectroelectrochemical oxidation at 0.76 V vs. SCE (Fig. S2 in ESI†). A planar structure of the radical cation 1+ was obtained on the basis of an unrestricted DFT (UB3LYP/B1) calculation, which revealed spin delocalization over the macrocyclic core (Fig. S3b in ESI†). The singly occupied molecular orbitals (SOMOs) were found to be ligand-based MOs (Fig. S4 in ESI†). Time dependent (TD)-UDFT analyses of the radical cation 1+ revealed an allowed HOMO-LUMO transition band (f = 0.0513), a finding that is consistent with the experimental NIR absorption spectrum (Fig. S5 and Table S1 in ESI†). Thus, a stable open-shell [4n − 1] π-electron form is predicted under conditions of both chemical and electrochemical oxidation (see cyclic voltammogram in Fig. S6 in ESI†). In fact, the radical cation (1+) proved relatively stable, with no significant change in the absorption spectrum over 50 minutes post addition of oxidant (Fig. S7 in ESI†). Fig. 1 Cyclo[1]furan[1]pyridine[4]pyrrole and its uranyl complex (1). 1H NMR spectroscopy was also used to monitor the production of the presumed radical cation. Adding aliquots of magic blue to a deuterated dichloromethane solution of 1 led to a concentration-dependent broadening of the peaks, as would be expected for the formation of a paramagnetic radical species (Fig. S8 in ESI†). A corresponding electron paramagnetic resonance (EPR) spectroscopic analysis of 1 revealed a signal at g = 2.0051 whose intensity increased upon the addition of magic blue until approximately 1.0 equiv. had been added (Fig. S9 in ESI†). Slow diffusion of hexanes into a solution of dichloromethane containing 1 and one equiv. of magic blue led to the formation of diffraction-grade single crystals. The crystal structure (Fig.
3) revealed that a β-position of one of the 2,5-diethyl-substituted pyrrole rings is sp3 hybridized and bears a chlorine substituent, forming 1-Cl+·SbCl6−. When dissolved in dichloromethane, this crystalline product gives rise to an absorption spectrum more similar to that of the neutral macrocycle 1 than to that of the radical cation species 1+ (i.e., maxima at 846 and 1177 nm are seen, Fig. S10 in ESI†). Using the TD-DFT method, a simulated spectrum was generated that matches that of the structurally characterized chloride adduct of 1 2+ (see Fig. S11 in the ESI†). The intense NIR absorption feature can be interpreted in terms of broken degeneracy for the LUMO pairs due to the unsymmetric core structure (Fig. S12 and Table S2 in ESI†). The formation of the chloride adduct is ascribed to disproportionation of 1+, followed by the nucleophilic addition of Cl− to the resulting dication, 1 2+, because the same absorption spectrum was obtained when complex 1 was oxidized with 1 equiv. of magic blue in the presence of ten equiv. of tetrabutylammonium chloride (TBACl) for 10 min (Fig. S13 in ESI†). Once produced, the chloride adduct 1-Cl+ is stable in solution for at least 24 h (Fig. S14 in ESI†). The radical anion 1− was produced by subjecting the original uranyl complex 1 to one-electron chemical reduction using cobaltocene as a reductant. The addition of cobaltocene to 1 resulted in the appearance of new absorption features at 998 and 1930 nm in the absorption spectrum (Fig. 2, blue, and Fig. S15 in ESI†). These spectral features match those seen upon spectroelectrochemical reduction of 1 in dichloromethane (Fig. S16 in ESI†). An EPR spectrum expected for a radical (i.e., a feature at g = 2.003) was also recorded for this reduced species (Fig. S17 in ESI†). On this basis, it was assigned to be the 4n + 1 π-electron radical anion form of 1 (1−). The radical anion 1− proved stable for up to 16 hours in dichloromethane when produced via chemical reduction (Fig. S18 in ESI†). Similar spectral features were noted under conditions of spectroelectrochemistry when 1 was subjected to reduction at −0.02 V vs. SCE (Fig. S16 in ESI†). Similarly, a spin-delocalized structure was predicted for 1− (Fig. S19 in ESI†). A TD-DFT calculation produced a stick spectrum concordant with the experimental absorption spectrum for 1− (Fig. S20 in ESI†). The compositions of the major transitions of 1− are found to be straightforward: from any of the HOMOs to the LUMO (B) (Fig. S21 and Table S3 in ESI†). Furthermore, evaluation of the HOMOs for the radical cation and radical anion forms of 1 revealed increased conjugated character in the case of the radical anion 1− relative to the radical cation 1+. These differences may be reflected in the apparent greater stability observed for the radical anion (Table S4 in ESI†). Femtosecond laser excitation of a CH2Cl2 solution of 1 at 393 nm resulted in observation of a transient absorption band at 620 nm with bleaching at 1150 nm, which matches the absorption maximum of 1 as shown in Fig. 4 [26]. The time profile of the recovery of bleaching at 1150 nm could be fit to two exponentials. The fast component has a lifetime of 2.5 ps (Fig. 4b), while the much slower component has a lifetime of 5 ns (Fig.
4c). The fast recovery component of the ground state bleaching at 1150 nm is ascribed to the relaxation from the singlet excited state to the ground state in competition with the fast intersystem crossing to form the triplet excited state due to the heavy atom effect of uranium [27]. The much slower component is considered to reflect the decay of the triplet excited state of 1 (3 1*). The relatively short lifetime of 3 1* as compared with triplet excited states of organic compounds, which have microsecond lifetimes, reflects the large spin-orbit coupling of uranium [27]. The ability of 3 1* to undergo reactions was examined first by testing whether it was able to induce the conversion of triplet oxygen (the ground state) to the corresponding singlet form. However, efforts to measure the phosphorescence due to singlet oxygen at 1273 nm in fully deuterated benzene (C6D6) revealed no appreciable signal. Additionally, we also confirmed that no oxygenation of anthracene, used as a singlet oxygen trap reagent, occurs (to produce epidioxyanthracene) in an O2-saturated CH2Cl2 solution containing 1 under visible-NIR photoirradiation with a solar light simulator fitted with a light cut-off filter (λ > 490 nm, 1 sun for 3 h). Such an observation is consistent with the energy of 3 1* being lower than that of singlet oxygen (0.97 eV) [28]. In the presence of tetracyanoethylene (TCNE) as an electron acceptor, NIR-induced electron transfer from 3 1* to TCNE occurred as shown in Fig. 5a, where the negative transient absorption band due to the ground state bleaching at 1150 nm, which agrees with the NIR absorption band of 1 in Fig. 2, recovered at a much faster rate in the presence of TCNE than in its absence (Fig. 4c). The driving force of back electron transfer was calculated to be 0.54 eV based on the one-electron oxidation potential of 1 (0.76 V vs. SCE) and the one-electron reduction potential of TCNE (0.22 V vs. SCE) [29], whereas the driving force of NIR-induced electron transfer is estimated to be smaller than 0.43 eV from the energy of 3 1* (<0.97 eV, vide supra). Because the driving force of back electron transfer is larger than that of NIR-induced electron transfer, back electron transfer may be faster than NIR-induced electron transfer, thus accounting for the fact that no transient absorption features ascribable to 1+ or TCNE− were observed in Fig. 5a [30]. The rate constant of electron transfer from 3 1* to TCNE was determined to be 6.7 × 10^9 M^−1 s^−1 from the slope of a linear plot of the pseudo-first-order rate constant of the NIR-induced electron transfer vs. [TCNE] (Fig. 5b). When TCNE was replaced by 9,10-dihydro-10-methylacridine (AcrH2) as an electron donor, NIR-induced electron transfer from AcrH2 to 3 1* occurred as shown in Fig. 6. The driving force of back electron transfer is determined to be 0.83 eV from the one-electron oxidation potential of AcrH2 (0.81 V vs. SCE) [30] and the one-electron reduction potential of 1 (−0.02 V vs. SCE), whereas the driving force of NIR-induced electron transfer is estimated to be smaller than 0.14 eV from the energy of 3 1* (<0.97 eV). In this case as well, back electron transfer from 1− to AcrH2+ was faster than the NIR-induced electron transfer because of the larger driving force of back electron transfer at 298 K. The rate constant of electron transfer from AcrH2 to 3 1* was determined to be 9.0 × 10^9 M^−1 s^−1 from the slope of a linear plot of the pseudo-first-order rate constant of NIR-induced electron transfer vs.
concentration of AcrH2 (Fig. 6b) [30]. This value is close to the diffusion-limited value in CH2Cl2 (~10^10 M^−1 s^−1). On this basis, we conclude that electron transfer from AcrH2 to 1 is an exergonic process. The radical ion pair was detected by EPR spectroscopy under NIR photoirradiation (λ > 490 nm) at low temperature (77 K). The EPR spectrum is characterized by a broad signal due to AcrH2+ [30], which overlaps with the signal due to 1− at g = 2.003 (see ESI†, Fig. S22). In conclusion, the uranyl macrocyclic complex (1) can be converted readily to the corresponding radical cation and radical anion states, both of which are stable on the laboratory timescale. The ability to access both 4n + 1 and 4n − 1 π-electron states is thought to underlie the fact that complex 1 acts as an efficient NIR-absorbing photosensitiser with a low triplet excited state energy (<0.97 eV). Specifically, it undergoes NIR-induced electron transfer with appropriately chosen electron donors and acceptors without energy transfer to dioxygen. Such a NIR photosensitiser, characterized by its absorption maximum at 1177 nm, may find applications for NIR light energy conversion.
Fig. 3 Top (left) and side (right) views of the solid state structure of the chloride adduct of 1, 1-Cl·SbCl6. Crystals used for this analysis were obtained via the slow diffusion of hexanes into a dichloromethane solution of 1 containing 1 equiv. of "magic blue".
Fig. 4 (a) Transient absorption spectra of 1 observed at 1, 50 and 3000 ps after femtosecond laser excitation of a deaerated CH2Cl2 solution of 1 (10 mM). (b) Time profile of bleaching at 1150 nm in the 0-20 ps delay time range. (c) Time profile of bleaching at 1150 nm in the 0-3000 ps delay time range. The gray lines in (b) and (c) are drawn by means of a double-exponential curve fitting.
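The driving-force bookkeeping used in the preceding paragraphs, and the extraction of a bimolecular rate constant from pseudo-first-order kinetics, can be summarised in the short Python sketch below; only the potentials and the triplet-energy bound are taken from the text, while the quenching data points are invented for illustration and the script is not part of the original analysis.

```python
# Energetics of photoinduced electron transfer for complex 1
# (potentials in V vs. SCE, energies in eV, values as quoted in the text)
E_T_max    = 0.97    # upper bound on the triplet excited-state energy of 1
E_ox_1     = 0.76    # one-electron oxidation potential of 1
E_red_1    = -0.02   # one-electron reduction potential of 1
E_red_TCNE = 0.22    # one-electron reduction potential of TCNE
E_ox_AcrH2 = 0.81    # one-electron oxidation potential of AcrH2

# Back electron transfer driving force: e(E_ox(donor) - E_red(acceptor))
dG_bet_TCNE  = E_ox_1 - E_red_TCNE        # 0.54 eV
dG_bet_AcrH2 = E_ox_AcrH2 - E_red_1       # 0.83 eV

# Forward (NIR-induced) electron transfer involving the triplet state:
# driving force < E_T - e(E_ox - E_red)
dG_et_TCNE  = E_T_max - dG_bet_TCNE       # < 0.43 eV
dG_et_AcrH2 = E_T_max - dG_bet_AcrH2      # < 0.14 eV

# Bimolecular rate constant from pseudo-first-order kinetics: k_obs = k_et * [Q],
# i.e. the slope of k_obs vs. quencher concentration (made-up points below).
conc  = [0.5e-3, 1.0e-3, 2.0e-3]          # quencher concentration, M
k_obs = [3.35e6, 6.7e6, 13.4e6]           # observed pseudo-first-order rates, s^-1
k_et  = sum(k * c for k, c in zip(k_obs, conc)) / sum(c * c for c in conc)  # slope through origin
print(dG_bet_TCNE, dG_et_TCNE, dG_bet_AcrH2, dG_et_AcrH2, f"{k_et:.2e} M^-1 s^-1")
```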
2,895.2
2015-04-02T00:00:00.000
[ "Chemistry" ]
PPARγ mRNA in the adult mouse hypothalamus: distribution and regulation in response to dietary challenges Peroxisome proliferator-activated receptor gamma (PPARγ) is a ligand-activated transcription factor that was originally identified as a regulator of peroxisome proliferation and adipocyte differentiation. Emerging evidence suggests that functional PPARγ signaling also occurs within the hypothalamus. However, the exact distribution and identities of PPARγ-expressing hypothalamic cells remains under debate. The present study systematically mapped PPARγ mRNA expression in the adult mouse brain using in situ hybridization histochemistry. PPARγ mRNA was found to be expressed at high levels outside the hypothalamus including the neocortex, the olfactory bulb, the organ of the vasculosum of the lamina terminalis (VOLT), and the subfornical organ. Within the hypothalamus, PPARγ was present at moderate levels in the suprachiasmatic nucleus (SCh) and the ependymal of the 3rd ventricle. In all examined feeding-related hypothalamic nuclei, PPARγ was expressed at very low levels that were close to the limit of detection. Using qPCR techniques, we demonstrated that PPARγ mRNA expression was upregulated in the SCh in response to fasting. Double in situ hybridization further demonstrated that PPARγ was primarily expressed in neurons rather than glia. Collectively, our observations provide a comprehensive map of PPARγ distribution in the intact adult mouse hypothalamus. Introduction Peroxisome proliferator-activated receptor gamma (PPARγ) is a ligand-activated transcription factor that was originally identified as a regulator of peroxisome proliferation and adipocyte differentiation (Issemann and Green, 1990;Dreyer et al., 1992;Kliewer et al., 1994;Tontonoz et al., 1994;Amri et al., 1995). PPARγ has been implicated in the cellular effects of endogenous fatty acids in peripheral metabolic tissues (Debril et al., 2001;Ahmadian et al., 2013). Furthermore, the thiazolidinedione drugs, which target PPARγ, are effective treatments for type 2 diabetes (Knouff and Auwerx, 2004;Knauf et al., 2006). A large body of evidence also suggests that functional PPARγ signaling occurs within the central nervous system (CNS). Specifically, PPARγ agonists coordinate the expressions of genes that are involved in neuronal fatty acid metabolism and the responses to brain injury (Heneka et al., 2000;Sundararajan et al., 2005;Tureyen et al., 2007;Quintanilla et al., 2008;Schintu et al., 2009;Zhao et al., 2009). PPARγ signaling has also recently been reported to be involved in the central control of glucose, feeding behavior and energy homeostasis (Diano et al., 2011;Lu et al., 2011;Ryan et al., 2011;Garretson et al., 2015). The hypothalamus has been implicated in the aforementioned actions of PPARγ on metabolic functions. Thus, the identification of PPARγ-expressing sites and cell types in the hypothalamus would greatly benefit our understanding of the mechanisms that underlie the neural control of metabolic functions. However, reports on the expression level and distribution of PPARγ in the CNS have been contradictory. For example, early studies found that PPARγ protein and mRNA were either absent or expressed at low levels that were close to the limits of detection in the adult rodent brain (Issemann and Green, 1990;Braissant and Wahli, 1998;Cullingford et al., 1998;Wada et al., 2006). 
However, recent mRNA mapping studies have consistently demonstrated detectable PPARγ expression in the cortex, hippocampus, and olfactory bulb (García-Bueno et al., 2005;Bookout et al., 2006a;Ou et al., 2006;Victor et al., 2006;Gofflot et al., 2007;Sobrado et al., 2009;Lu et al., 2011;Liu et al., 2014). Evidence of significant PPARγ expression in other brain sites is rather limited. Three studies have detected PPARγ protein by western blot and immunohistochemistry in the midbrain (Breidert et al., 2002;Park et al., 2004;Carta et al., 2011). Additionally, several other studies have reported a significant amount of PPARγ in the whole hypothalamus or identified feeding-related hypothalamic nuclei using quantitative PCR (qPCR) and antibody-based techniques (Mouihate et al., 2004;Sarruf et al., 2009;Diano et al., 2011;Lu et al., 2011;Ryan et al., 2011;Long et al., 2014). Other studies described PPARγ in mediobasal hypothalamic neurons using in situ hybridization histochemistry (ISH) (Long et al., 2014;Garretson et al., 2015). Notably, the results of the latter studies are at odds with those of prior mRNA studies that showed minimal hypothalamic and midbrain PPARγ (Bookout et al., 2006a;Gofflot et al., 2007). Additionally, antibody-based studies have yielded highly inconsistent results and, therefore cast doubt on the specificities of the currently available antibodies. Specifically, PPARγ immunoreactivity has been found either in neurons (Park et al., 2004;Ou et al., 2006;Victor et al., 2006) or in a mixed population of neurons and unidentified glial cells (Moreno et al., 2004;García-Bueno et al., 2005;Sarruf et al., 2009;Zhao et al., 2009;Carta et al., 2011;Lu et al., 2011). Furthermore, these same studies have described PPARγ immunoreactivities in different cell compartments and furthermore disagree on the exact anatomical distribution of PPARγ immunoreactive cells within the CNS. In face of all of these aforementioned inconsistencies in the available literature, this study sought to evaluate the anatomical distribution of PPARγ-expressing brain cells using ISH and qPCR in a spatially resolved manner, with a special emphasis on the hypothalamus. Moreover, we studied hypothalamic PPARγ mRNA regulation in response to metabolic challenges. Animals and Diets Wild-type mice on a C57Bl/6 genetic background were obtained from the UTSouthwestern Medical Center Animal Resource Center. All mice used in our ISH study were young adult males (4-to 8-week-old) that were housed in a light-controlled (12 h on/12 h off; lights on at 7 a.m.) and temperature-controlled environment (21.5-22.5 • C) in a barrier facility. The animals used for histology were fed ad libitum on a standard chow (Harlan Teklad TD.2016 Global). The above procedures were approved by the Institutional Animal Care and Use Committee of the University of Texas Southwestern Medical Center at Dallas. Tissue Collection and Preparation On the day of sacrifice, mice were anesthetized with an overdose of chloral hydrate (500 mg/kg, i.p.) between 8:00 a.m. and 11:00 a.m. For RNAScope R ISH experiments, the brains were rapidly dissected from anesthetized mice and frozen on dry ice on a piece of aluminum foil. Using a cryostat, 14-16 µm brain sections were collected on SuperFrost slides and stored at −80 • C. For the qPCR experiments, the samples were collected and prepared exactly as we have previously described (Bookout et al., 2006b;Lee et al., 2012). 
Chromogenic and Fluorescent ISH As summarized in Table 1, double-Z oligo probes were designed by the manufacturer (Advanced Cell Diagnostics); please note that the probes used in this study did not distinguish between the known isoforms of PPARγ. The tissue was processed for ISH following the manufacturer's instructions. Briefly, the tissue was fixed in 10% formalin and pretreated with a protease-based solution (pretreatment 4), followed by hybridization at 40 °C for 2 h. The probes were mixed using the recommended ratio of 50:1 by volume (c1:c2 probes). Signal amplification was achieved using either diaminobenzidine (chromogenic) or specific fluorophores (FITC and Cy5). Incubation with diaminobenzidine was 10 min. The sections were counterstained with either Fast-Red (Sigma #N3020) or DAPI (Vector Laboratories; H-1500). Totals of 4 and 5 mice were used for the chromogenic and fluorescent ISH, respectively. qPCR Whole hypothalamus and brown adipose tissue (BAT) samples and laser-captured microdissected samples were collected according to methods detailed in two of our previously published studies (Bookout et al., 2006b;Lee et al., 2012). The BAT and hypothalamus samples were taken from a cohort of C57/Bl6 male mice of 8-9 weeks of age (n = 6; fed on Harlan Teklad #7001 chow) (Bookout et al., 2006b;Lee et al., 2012). Hypothalamic laser-dissected samples were obtained from another cohort of C57/Bl6 male mice of 6 weeks of age (n = 12/group) that included groups that were fed ad libitum on chow diet, fasted for 24 h, or fed on a high-fat diet for 14 weeks (42% kcal from fat, 0.2% cholesterol; Harlan Teklad TD.88137) (Lee et al., 2012). Real-time qPCR gene expression analysis was performed exactly as we have previously described (Lee et al., 2012). To avoid sample bias, whole hypothalamus and BAT RNA were diluted to similar levels as LCM-derived RNA. All RNA samples were treated and analyzed in parallel. All data are expressed as the mean ± SEM. The statistical analyses were performed with GraphPad Prism 6 software. The data were analyzed with a Two-Way ANOVA (nutritional challenge vs. brain site) followed by Dunnett post-hoc tests. The probes are listed in Table 1. Graphs were made using GraphPad Prism 6. Digital Image Acquisition and Analysis Bright-field images were captured using the 10×, 20×, and 40× objectives of a Zeiss microscope (Imager ZI) attached to a digital camera (Axiocam). A Zeiss Stemi 2000-C binocular microscope was also used to capture low-magnification dark-field images of our diaminobenzidine-labeled tissues. Identical exposure parameters were applied to samples within each experiment. Fluorescent digital images were acquired with a 63× oil objective of a Leica TCS SP5 confocal microscope (UT Southwestern Live Cell Imaging Core). Scanning parameters included a pinhole of 1 and a line average of 8. Laser intensity, gain and offset were adjusted appropriately to improve the signal/background. We collected stacks of up to 25 optical sections separated by a step of ∼0.3-0.4 µm in a 512×512 pixel format. NIH Image J software was used to generate our final TIFF images with combined Z stacks. The Imaris 8.1.2 software was also used to produce orthogonal views of confocal z-stacks. As done by us in the past (Gautron et al., 2010), the distribution of PPARγ in the brain was evaluated in three mice by considering the density of diaminobenzidine-labeled cells per brain region independently of its surface (Table 2).
Densities were subjectively determined by visual inspection of brain sections as follows: very low, ±; low, +; moderate ++; high + + +. Our results are meant to provide inherently qualitative estimates. The distribution of PPARγ in the hypothalamus was also represented on drawings. Briefly, the outline of hypothalamic sections was drawn in Adobe Photoshop CS5.1 using images taken with a Zeiss Stemi 2000-C binocular microscope. Diaminobenzidine-labeled PPARγ-expressing cells were manually marked by individual dots. In addition, we evaluated the hybridization strengths within select diaminobenzidine-labeled hypothalamic nuclei compared to the motor cortex (Bregma −1.70 mm). Brightfield images of non-counterstained sections were taken at 40×. Image J was used to select areas of interest containing labeling. To isolate diaminobenzidine-labeled areas from background, images were binarized. The mean integrated intensity was then calculated for each selected area and divided by the surface of the area. This was repeated in 2-3 sections per brain site in 3 different mice. The resulting ratio reflected the intensity of the ISH strengths. The image editing software Adobe Photoshop CS5.1 was used to combine digital images into final plates with annotations. The size, contrast, brightness, vibrance, and sharpness of our images were adjusted for presentation purposes (as indicated in the legends), while careful attention was applied to ensure no adjustments showed anything that did not originally exist. Specifically, the adjustments were always uniformly applied to all of the images from the same experiment. DAPI-counterstained nuclei were converted to gray to achieve better contrast. Finally, the abbreviations were derived from Franklin and Paxinos's Mouse Brain in Stereotaxic Coordinates (third edition). General Distribution of Extra-hypothalamic PPARγ-expressing Sites The distribution of PPARγ-expressing cells was studied using a novel chromogenic ISH technique (RNAScope R ) (Wang et al., 2012). The ISH signal was represented as a brown precipitate (diaminobenzidine) distributed in the cytoplasm immediately surrounding the cell nuclei (Figure 1). This approach generated virtually no background and therefore facilitated the identification of individual PPARγ-expressing cells. As a positive control, we attempted to detect the expression of cyclophilin mRNA in the brain (Figures 1A,B). Consequently, a very robust signal was ubiquitously detected across the entire mouse brain. Brown adipose tissue (BAT) was used as a positive control tissue for PPARγ expression. As anticipated, PPARγ was detected in many cells across the BAT in presumptive adipocytes (Figures 1C,D). Within the cortex, a robust PPARγ signal was also seen in many cells (Figures 1E,F). As a negative control, ISH was performed using a probe recognizing the prokaryotic gene dapB. Brain sections were completely devoid of a signal ( Figure 1G). The distribution of PPARγ-expressing sites and the strengths of the hybridization signals are briefly described below and recapitulated in Table 2. In the forebrain, the VOLT and subfornical organ (SFO) both exhibited very robust signals (Table 2; Figures 2A,B). The hybridization signal was particularly high in the ependymal layer bordering the lower edge of the organ of the vasculosum of the lamina terminalis (VOLT) and the SFO (Figures 2A,B). Less robust signals were also detected in the central capillary plexus of the VOLT and the ventromedial core of the SFO. 
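As a rough illustration of the semi-quantitative ISH signal analysis described in the Methods above (binarize a bright-field region of interest in ImageJ, then relate the labeled signal to the area of the region), the following Python/NumPy sketch uses a placeholder threshold and random arrays in place of real images; it is a stand-in for, not a reproduction of, the ImageJ procedure actually used.

```python
import numpy as np

def ish_signal_ratio(roi, threshold):
    """Semi-quantitative ISH signal strength for one region of interest (ROI).

    roi       : 2D array of bright-field pixel intensities (darker = more DAB label)
    threshold : intensity below which a pixel is considered labeled (binarization)

    Returns the labeled-pixel fraction, used here as a stand-in for ImageJ's
    mean integrated intensity of the binarized selection divided by its surface.
    """
    labeled = roi < threshold            # binarize: True where DAB signal is present
    return labeled.sum() / roi.size      # normalize by the surface of the ROI

# Illustrative use with random "images" standing in for 40x bright-field crops
rng = np.random.default_rng(0)
cortex_roi = rng.integers(40, 200, size=(200, 200))   # strongly labeled region (placeholder)
arc_roi    = rng.integers(90, 230, size=(200, 200))   # weakly labeled region (placeholder)
ratio = ish_signal_ratio(cortex_roi, 100) / max(ish_signal_ratio(arc_roi, 100), 1e-9)
print(f"cortex / ARC signal ratio: {ratio:.1f}")
```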
Interestingly, PPARγ was not observed in other known circumventricular organs, including the median eminence and area postrema. The entire neocortex exhibited positive hybridization signals that increased in intensity from the deepest to the most superficial cortical layers (Figures 1E,F, 2C; Table 2). PPARγ expression levels varied slightly among cortical areas, differences which ought to be related to differences in neuron structure and glial density (Elston et al., 2011). PPARγ was also expressed at a high level in the olfactory bulb, most prominently in the mitral cell layer and granular cells (Table 2; Figure 2D). Other forebrain structures with moderate signals included a region encompassing the basomedial amygdala (Table 2; Figure 2G), select thalamic nuclei (Table 2; Figure 3C), and the choroid plexus (ChP) and ependymal cells of the cerebroventricular system (Figures 3D,E). Moderate levels of expression were also observed in the hippocampus and extended to the entire subiculum (Table 2; Figures 3A,B). Significant signal was also present in the globus pallidus and the lateral portion of the caudate putamen (Table 2). In the rest of the forebrain, PPARγ expression was quite limited. Nonetheless, scattered cells exhibiting above-background signal were detected in the ventral pallidum, septal nuclei, islands of Calleja, and zona incerta, among a few other sites (Table 2). The hypothalamus will be separately described in a later section. Outside of the forebrain, positive cells generally expressed PPARγ at lower levels and were distributed in a more diffuse manner. In particular, a large number of sites in the midbrain and hindbrain exhibited a weak signal, including many brainstem nuclei containing sensory or motor neurons. Among these sites, the hypoglossal nucleus (12N) displayed the most evident signal (Table 2; Figure 3E). In the other sites, including, but not limited to, the dorsal raphe (DR), the hybridization signals were very low and inconsistent (Table 2; Figure 3F). In the cerebellum, a moderate signal was distributed in the granular layer and, to a lesser extent, in the Purkinje cells and the cells of the molecular layer (Table 2; Figures 2E,F). Finally, it must be noted that PPARγ was never observed in the white matter. (Figure legend: abbreviations are listed in Table 2; scale bar is 40 µm; arrows indicate representative positive cells.) Detailed Analysis of Hypothalamic PPARγ Expression and Regulation ISH revealed very limited signals within the hypothalamus (Figures 4, 5). The suprachiasmatic nucleus (SCh) was the only hypothalamic nucleus in which the signal was clearly above background (Figures 4A, 5A,B). Using the chromogenic approach, cells with several brown dots per profile were commonly observed in the SCh (Figures 4A, 5B). The signal strengths were estimated to be roughly 5 times lower in the SCh than in the outer layers of the motor cortex (Figure 5C). The ependymal layer of the 3rd ventricle was also clearly positive for PPARγ (Figures 4C,D). The signal was more concentrated in the mediobasal portion of the 3rd ventricle (Figures 4C,D), an area that is enriched in tanycytes (Robins et al., 2013). In comparison, very little signal was contained within the rest of the hypothalamus, including in the paraventricular nucleus of the hypothalamus (PVN), the retrochiasmatic area (RCA), and the arcuate nucleus (ARC) (Figures 4B-D, 5A). In the latter regions, PPARγ was expressed in scattered cells at very low levels that were close to the limit of detection.
For example, the signal strengths were estimated to be roughly 45 times lower in the ARC than in the outer layers of the motor cortex (Figure 5C). Because our results were at odds with two studies describing abundant ARC PPARγ (Long et al., 2014; Garretson et al., 2015), we next further investigated hypothalamic PPARγ mRNA by qPCR. As depicted in Figure 6, PPARγ mRNA expression was detected in the whole hypothalamus (Figure 6A). However, PPARγ expression in the hypothalamus was 140-fold lower than that in the BAT. To determine which hypothalamic nuclei accounted for the detected expression, we analyzed PPARγ expression in laser-capture microdissected hypothalamic nuclei according to the methods we have previously utilized (Bookout et al., 2006b; Lee et al., 2012). In agreement with our ISH results, the hypothalamic site that exhibited the highest level of expression was the SCh (Figure 6B). PPARγ was also detectable in the ARC, albeit at a low level (mean Ct ∼29.2). In all other examined hypothalamic nuclei, PPARγ was near the limit of detection (mean Ct >30) (Figure 6B). Because studies reported up-regulated PPARγ mRNA in the mediobasal hypothalamus of high-fat fed mice (Diano et al., 2011; Long et al., 2014; Garretson et al., 2015), we considered the possibility that the PPARγ expression level might be up-regulated by nutritional challenges. Therefore, we systematically compared samples from fasted, chow-fed and high-fat fed mice. In the SCh, PPARγ was significantly upregulated in response to fasting (Figure 6B). However, in other nuclei, the levels of PPARγ expression were not regulated by either fasting or high-fat feeding (Figure 6B). To further validate the sensitivity of our approach, we concurrently examined two marker genes known to be abundantly expressed in identified feeding-related hypothalamic nuclei and to be regulated by nutritional challenges. As anticipated, neuropeptide Y (NPY) was only detected in the ARC and was up-regulated by fasting (Figure 6C). Vasopressin (AVP) was only found in the SCh and PVN (Figure 6D). AVP expression in the PVN was reduced by fasting. Identities of the PPARγ-expressing Cells We used a multiplex fluorescent approach to further characterize the identities of the PPARγ-expressing cells (Wang et al., 2012). The hybridization signal was represented by individual fluorescent dots that decorated the cytoplasm adjacent to the cell nucleus (Figures 7A-D). In the brain, signals were detected in the previously described PPARγ-expressing areas, including, among other examples, high levels in the cortex (Figure 7A) and VOLT (Figures 7B-D). The overall distribution of the observed fluorescent signals was directly comparable to that reported with the chromogenic approach. Nonetheless, the fluorescent approach was less sensitive, since areas previously described to contain very low levels of PPARγ mRNA did not contain fluorescent signals (i.e., feeding-related hypothalamic nuclei). Several elements suggested that the PPARγ-expressing cells in the parenchyma were primarily neurons. First, the distribution in well-defined parenchymal nuclei was suggestive of a neuronal localization. Second, based on the size (>10 µm), morphology (round with multiple nucleoli), and distribution (almost exclusively in the gray matter) of the DAPI-counterstained nuclei of the parenchymal PPARγ-expressing cells, the majority of these cells could be inferred to be neurons.
Third, double fluorescent ISH revealed that PPARγ extensively colocalized with the pan-neuronal marker Rbfox3 mRNA (also known as NeuN) (Figure 7A). For example, we observed nearly complete colocalization between the PPARγ and Rbfox3 mRNAs in the cortex (Figure 7A). Similar observations were made across the rest of the parenchyma. The VOLT, SFO, and the cerebroventricular system were the only sites containing PPARγ-expressing cells that did not exhibit colocalized Rbfox3 mRNA (Figures 7B-D). The arrangement of PPARγ-expressing cells in the latter structures was reminiscent of that of ependymocytes (Nehmé et al., 2012) (Figure 7D). FIGURE 5 | (A) Drawings of PPARγ hybridization signals in serial sections at the level of the hypothalamus. The estimated distance from the bregma (mm) is indicated in blue next to each section. Each red dot represents one identified diaminobenzidine-positive cell. Abbreviations can be found in Table 2. (B) PPARγ hybridization signals (brown) in the SCh (tissue was not counterstained). (C) Semiquantitative analysis of the ISH signal strengths in select hypothalamic nuclei compared to the cortex. Data represent mean ± S.E.M. for three different animals. On the horizontal axis of this graph, an arbitrary unit is used to express the ratio of the mean ISH signal/area (see Methods for details). Scale bar is 50 µm. Abbreviations: Opt, optic chiasm; ARC, arcuate nucleus; CX, cortex; DMH, dorsomedial hypothalamus; VMH, ventromedial hypothalamus; SCh, suprachiasmatic nucleus of the hypothalamus. Discussion This study examined PPARγ mRNA expression in the intact adult mouse brain. Although many different types of cultured glial cells have been shown to express PPARγ (Bernardo and Minghetti, 2008; Dentesano et al., 2014), our data indicate that PPARγ is enriched in neurons in the intact brain. Contrary to our expectations, PPARγ mRNA was expressed at very low levels in all feeding-related hypothalamic nuclei. In contrast, PPARγ was found to be expressed in neurons at high to moderate levels in the entire neocortex, hippocampus, allocortex, and cerebellar cortex. The general pattern of PPARγ-expressing cells was reminiscent of that of other members of the PPAR family in the brain (Gofflot et al., 2007). Moreover, the enrichment of PPARγ in cortical and hippocampal neurons is consistent with the documented neuroprotective actions of PPARγ agonists in these neurons. PPARγ was also observed in select sensory and motor nuclei and in previously uncharacterized PPARγ-expressing sites, such as the SCh and circumventricular organs. We hope that our study will serve as a foundation for future research that aims to systematically delineate the physiological significance and transcriptional targets of PPARγ in both hypothalamic and extrahypothalamic sites. The functional significance of our findings for the central regulation of metabolism is discussed below. Technical Considerations This study used ISH approaches to detect PPARγ mRNA in the mouse brain. In our hands, both chromogenic and fluorescent ISH were suitable for the mapping of PPARγ transcripts. Nonetheless, the chromogenic technique (RNAscope®) was the most sensitive of all. This technique also presented the advantage of generating almost no background and aided the identification of mRNA-expressing cells. Moreover, our ISH approaches present the advantage of not relying on the use of antibodies.
This is important because peripheral immunoglobulins tend to accumulate in circumventricular organs and the ARC near the median eminence (Hazama et al., 2005; Yi et al., 2012), often resulting in unwanted staining of these brain sites. Notably, the brain accumulation of immunoglobulins is exacerbated in metabolically-challenged animals (Hazama et al., 2005; Yi et al., 2012). Nonetheless, we must acknowledge one caveat regarding our findings. Although we do not expect the distribution of PPARγ to vary considerably with age, we cannot exclude age-related changes in PPARγ expression levels. In particular, the animals used for our ISH mapping were younger than those included in our qPCR study. This was due to the necessity of exposing our animals to dietary challenges (14 weeks on high-fat diet). Hence, we believe that additional ISH studies are warranted to examine PPARγ expression in the brains of older mice. Despite this caveat, the data obtained from qPCR of laser-captured hypothalamic sites strongly mirrored our ISH results both in terms of distribution and expression levels. Therefore, we are confident of the specificity and sensitivity of our anatomical findings. In contrast, we were not able to prove the specificity of PPARγ protein detection in the CNS using antibody-based techniques (data not shown). In particular, we tested without success two antibodies against PPARγ [Santa Cruz-7196 (H100) (Sarruf et al., 2009); Cell Signaling-2435S (C26H12) (Lu et al., 2011)]. Unfortunately, those antibodies detected several proteins other than PPARγ by western blot and, thus, might not be suitable for the detection of brain PPARγ by immunohistochemistry or western blot. Another antibody (Abcam ab191407) did not detect PPARγ by immunohistochemistry. In our opinion, our failure to identify a specific antibody against PPARγ is attributable to the fact that antibodies, and this is particularly true of antibodies against receptors and signaling molecules, commonly produce unreliable staining (Saper and Sawchenko, 2003; Ivell et al., 2014). For example, it has been repeatedly shown that even antibodies that are widely used in the literature produce false positive results (Sim et al., 2004; Herkenham et al., 2011). This lack of selectivity might explain the discrepancies between the results of previously published immunohistochemical studies and our own data. Distribution and Regulation of PPARγ in the Hypothalamus We observed moderate PPARγ expression in the SCh and, to our surprise, very little expression in feeding-related hypothalamic nuclei. These findings contrast with those of recent studies that linked hypothalamic PPARγ signaling to the regulation of energy metabolism using mouse genetics (Lu et al., 2011; Long et al., 2014). However, it must be stressed that the mouse lines that have previously been utilized to delete PPARγ from the CNS were not selective for the hypothalamus and consequently might have resulted in reduced PPARγ signaling in other brain regions. This situation applies to the synaptophysin-Cre mouse, which displays widespread Cre activity in both the hypothalamus and extrahypothalamic sites, in addition to peripheral sites such as the mesenchyme and the alimentary, urinary and reproductive systems (see the recombinase activity at http://www.informatics.jax.org/allele/MGI:2176055).
The same shortcoming applies to POMC-Cre mice, which have been demonstrated to display Cre activity during development in over 62 brain sites, including many PPARγ-containing sites (e.g., the cortex, SCh, hippocampus) (Padilla et al., 2012). Two pharmacological studies also showed that the central administration of PPARγ agonists stimulated food intake (Ryan et al., 2011; Garretson et al., 2015). However, the brain site(s) responsible for these effects has not been elucidated, and accumulating evidence suggests that PPARγ agonists may act on brain cells via receptor(s) other than PPARγ (Woster and Combs, 2007; Thal et al., 2011). More convincingly, the overexpression of PPARγ selectively in the adult rat hypothalamus using a viral approach resulted in altered feeding behavior and body weight (Ryan et al., 2011). The exact hypothalamic site(s) involved in these effects has not been characterized. Two prior studies suggested that ARC neurons may be important in the feeding effects of PPARγ (Long et al., 2014; Garretson et al., 2015). While it cannot be entirely ruled out that PPARγ signaling in the ARC may regulate feeding behavior, its expression level in this region was extremely low. Alternatively, based on our observations, it is tempting to hypothesize that the SCh and/or tanycytes might play a role in the neural control of metabolism via PPARγ. Although these sites are not traditionally considered to be critical to the regulation of energy balance, there is emerging evidence of their implication in the control of energy expenditure and locomotor activity (Bookout et al., 2013; Balland et al., 2014). Notably, the presence of PPARγ in the SCh also suggests an involvement in circadian control. While PPARγ has been linked to the regulation of circadian rhythms in peripheral tissues (Chen and Yang, 2014), to the best of our knowledge, the role of PPARγ in the SCh molecular clock machinery has not been well studied. It is particularly interesting that SCh PPARγ expression was upregulated by fasting, presumably due to increased circulating levels of fatty acids. Combined with the observation that food availability exerts a robust influence on clock gene expression in the SCh (Horikawa et al., 2005), our data suggest that PPARγ signaling in the SCh may serve as a link between metabolic cues and circadian rhythms. Overall, our observations call for a reappraisal of the role of hypothalamic PPARγ. FIGURE 7 | Double fluorescent ISH for PPARγ and Rbfox3 in the mouse brain. (A) DAPI counterstaining (gray) and hybridization signals for PPARγ (green) and Rbfox3 (red) in the neocortex. The white arrow indicates a representative cell with coexpression of both transcripts. The orthogonal view (Imaris) further illustrates the proximity of PPARγ and Rbfox3 mRNAs in the cell labeled with a white arrow. The above data indicate that PPARγ mRNA is primarily localized in neurons. (B,C) DAPI counterstaining (gray) and hybridization signals for PPARγ (green) and Rbfox3 (red) in the VOLT. PPARγ expression is apparent in the capillary plexus (cp), dorsal cap (dc), lateral zone (lz) and ependyma (ep). PPARγ and Rbfox3 were not coexpressed. (D) Details of the hybridization signals for PPARγ (green) in the VOLT ependyma. Adjustments in contrast, brightness, and vibrance were made uniformly. Abbreviations: AVPe, anterior ventral periventricular nucleus. Scale bar in (A) is 5 µm. Scale bar in (B) is 20 µm and applies to (C). Scale bar in (D) is 12 µm.
Possible Role(s) of Circumventricular PPARγ in Metabolism Interestingly, high levels of PPARγ were identified in two circumventricular organs that have been implicated in the regulation of hydromineral homeostasis and drinking behaviors (Fitzsimons, 1998; Oka et al., 2015). This novel observation is important considering that the PPARγ agonists used in diabetes treatment exert unwanted adverse effects on body fluid homeostasis, including an elevated rate of edema (Rizos et al., 2009). In addition to the actions of these agonists on the kidney and cardiovascular system, our data support the idea that the circumventricular organs might be involved in PPARγ agonist-induced fluid retention. It should also be noted that the SFO is increasingly being recognized to regulate metabolic functions in addition to hydromineral homeostasis (Ferguson, 2014). Furthermore, an intact VOLT is required for the maintenance of a normal body temperature (Romanovsky et al., 2003). It is possible that the mouse lines that have previously been utilized to delete PPARγ from the CNS also targeted these areas and hence resulted in a metabolic phenotype. Considering that chylomicrons and very-low-density lipoproteins do not cross the blood-brain barrier, it is logical that PPARγ activity might peak in these two sites devoid of a blood-brain barrier. Hypothetically, the fatty acids released from the VOLT and SFO diffuse into the cerebrospinal fluid contained in the 3rd ventricle to be transported across the brain. If our hypothesis is correct, then PPARγ signaling in the SFO and VOLT might be a primary determinant of fatty acid uptake into the CNS. PPARγ signaling in the choroid plexus might also play a role in lipid metabolism in the brain.
6,925.8
2015-09-03T00:00:00.000
[ "Biology" ]
Energy-Aware and Proactive Host Load Detection in Virtual Machine Consolidation The VM consolidation technique improves the energy consumption and resource efficiency of physical servers, and thus improves the quality of service (QoS). In this article, a server threshold prediction method is proposed that focuses on server overload and server underload detection in order to improve server utilization and to reduce the number of VM migrations, which consequently improves the VMs' QoS. Since the VM consolidation problem is very complex, an exponential smoothing technique is utilized for predicting server utilization. The results of the experiments show that the proposed method outperforms existing methods in terms of power efficiency and the number of VM migrations. Introduction Cloud computing has emerged as one of the most important technological developments of recent years, growing rapidly to provide almost unlimited processing, storage, and network resources to its users [13]. The growing interest in cloud resources has led to the creation of enormous cloud data centers, which consume large amounts of energy and incur high operational costs. Worldwide data center power utilization data show a nonlinear increase over the past ten years, and a similar trend is predicted for the future [28]. Cloud providers operate many data centers around the world [27]. Energy is one of the most significant factors in the total cost of managing a data center and its facilities [18]. Several methods have been proposed in the literature to enhance resource utilization in cloud environments and thereby improve energy efficiency. As some examples, load balancing and resource utilization were improved in [6] using a combination of reinforcement learning (RL) and coral reefs optimization. Multiple RL-based agents were utilized in [7] for online scheduling of dependent tasks of the cloud's workflows to improve resource utilization. Cooperative RL agents managed the cloud resources in [8] for multiple online scientific workflows. SARSA RL agents and a genetic algorithm were used in [9] for cloud resource utilization. VM consolidation is a common and useful method to enhance resource efficiency and power effectiveness in data centers, and it has been extensively investigated in recent years. For example, server consolidation frameworks for cloud data centers have been reviewed in [2]. Server consolidation techniques in virtualized data centers have also been reviewed in [33]. A joint VM and container consolidation approach was utilized in [17] for energy-aware resource management in cloud data centers. EMC2 was proposed in [20] as an energy-efficient and multi-resource fairness VM consolidation approach for cloud data centers. Another energy-efficient VM consolidation algorithm for cloud data centers was also proposed in [38]. VM consolidation relies on virtualization as the main technique for achieving better performance and improving energy consumption. Virtualization enables data centers to deploy and run multiple VM instances on a single physical server, and during a VM's lifetime it is possible to move it among physical servers without downtime. To reduce the number of active servers, VMs should be regularly relocated during the consolidation process using live migration according to the resource demands, and to reduce energy consumption, idle servers should be switched to sleep or power-off mode [11].
Due to the dynamic resource utilization of most current applications, caused by their highly variable workloads, consolidating VMs in cloud data centers is complex. Unrestricted consolidation of VMs may cause performance degradation by increasing the applications' requests for resources. If the resources requested by an application are not provisioned, its response time increases, degrading the quality of service (QoS) of that application. The QoS is defined over the services the cloud service provider delivers to the end-users. The main contributions of this paper are summarized as follows: - Proposing a server threshold prediction method that focuses on detecting overloaded and underloaded servers. - Improving server utilization, reducing the number of VM migrations, and enhancing the QoS. - Utilizing an exponential smoothing technique for predicting server utilization to overcome the complexity of the VM consolidation problem. The remainder of the paper is organized as follows: Section 2 discusses some recent related works. Section 3 explains the system model and defines the problem; the proposed method for predicting server utilization is also described in Section 3. Section 4 presents the simulation results and, finally, Section 5 concludes the work. Related Works There are 4 sub-problems in the VM consolidation procedure which need to be solved [25]. The first sub-problem is to decide when a host is overloaded. The next sub-problem is to select and identify the proper VMs to migrate from an overloaded host. VM placement is the third sub-problem, in which a proper host for the selected VM is picked. The last sub-problem is to identify the underloaded servers in order to consolidate the VMs of these servers onto a smaller number of active servers [36]. Depending on the specific problems of VM consolidation and the perspective of the research, a variety of strategies have been suggested in this area of the literature. A systematic review of challenges and open issues for server consolidation in virtualized data centers has been presented in [1]. The data center's workload changes continuously, which affects the resource efficiency and energy utilization of the physical servers [5]. Most existing studies determine the server's upper threshold for overloaded hosts according to the host resource utilization. By applying VM consolidation on the overloaded and underloaded hosts, energy consumption is reduced and the QoS is improved [19,35,39]. VM placement was carried out under optimal conditions in cloud data centers in [16]. A VM placement method, called MBFD, has been proposed in [12] that reduced the number of running hosts based on a bin-packing heuristic algorithm called BFD (best-fit decreasing) [29]. Another VM placement method was also proposed in [22] for OpenStack Neat consolidation. A fuzzy threshold-based adaptive algorithm has been proposed in [31] for identifying the overloaded and underloaded hosts. The fuzzy inference engine utilized the information of the host (resource utilization) to estimate a numerical threshold of CPU utilization. Witanto et al. [34] focused on a method to trade off between the QoS and the power consumption based on the system preferences. The concept of dynamic selection was also proposed in [34], by searching a set of consolidation methods and comparing their various impacts on service level agreement (SLA) and energy to accommodate the data center with the desired preferences.
The issue of cooperation between memory and processor was addressed in [4], where a multi-input multi-output controller was introduced for VM consolidation from a resource-capacity perspective while respecting SLA constraints. Dynamic modification of the resource capacity of a collection of VMs was considered for serving a multi-tier application. While the proposed method achieved a higher level of resource consolidation, its computational overhead was also high. The K-means clustering-based adaptive three-threshold framework of [37] divided the hosts into four categories according to the amount of their loads. The number of VM migrations and the effects of these migrations on the QoS were decreased in the proposed method of [22] by consolidating VMs based on evaluating the possibility that the resource conditions change into the overload state. A comprehensive study of VM consolidation has been conducted in [10]. Two VM selection policies, namely the MMT (minimum migration time) policy and the MC (maximum correlation) policy, were proposed in [10]. The MMT policy selects for migration the VM with the shortest required migration time, and the MC policy selects for relocation the VM whose CPU usage has the highest correlation with that of the other VMs, in order to decrease the probability of overloading the host. Ant colony optimization and extreme learning were applied to VM consolidation in cloud data centers in [23]. Pearson correlation was used for dynamic VM consolidation in [24]. Another energy-efficient method for dynamic VM consolidation was also proposed in [21]. Some applications of VM consolidation in cloud computing were provided in [40]. The proposed method of this paper exploits a server threshold prediction method to detect overloaded and underloaded servers. This method attempts to overcome the complexity of the VM consolidation problem by utilizing an exponential smoothing technique for predicting server utilization. Server utilization is improved and the number of VM migrations is reduced using this method, and thus the QoS is enhanced. The Proposed Method Predicting the appropriate times for relocating the VMs is a key factor for improving the effectiveness of VM consolidation, which leads to consolidating the VMs on fewer hosts. Various statistical and computational techniques are used for this prediction. Due to the nature of the VM consolidation problem, we leverage the extended (double) exponential smoothing method applied to each host [14,15]. Exponential smoothing is a rule-of-thumb forecasting procedure that uses an exponential window function for smoothing time series data. Although the effects of distant periods are not significant in the predictions, it is nevertheless appropriate that all previous periods are involved in the predictions. Simple exponential smoothing has been extended to enable the forecasting of data with a trend. This technique includes a level smoothing equation and a trend smoothing equation (for ℓ_t and b_t), used by a forecast equation [3,14,15]: ℓ_t = α·y_t + (1 − α)(ℓ_{t−1} + b_{t−1}); b_t = β(ℓ_t − ℓ_{t−1}) + (1 − β)·b_{t−1}; ŷ_{t+h} = ℓ_t + h·b_t, where ℓ_{t−1} is the estimate of the level at time t−1, b_{t−1} is the estimate of the growth rate (trend) at time t−1, ℓ_t and b_t are respectively the estimates of the level and the trend at time t, α (0≤α≤1) and β (0≤β≤1) are respectively the smoothing parameters for the level and the trend, y_t is the observation of the series at time t, and ŷ_{t+h} is the forecast h steps ahead. The initial values of the parameters can be based on past statistics or on initial knowledge and experience [3].
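As an illustration of the smoothing equations above, the following is a minimal sketch (not the authors' implementation) of double exponential smoothing for one-step-ahead prediction of a host's CPU utilization. The utilization series, the smoothing parameters, and the initialization are illustrative assumptions.

```python
# Minimal sketch of Holt's (double) exponential smoothing for one-step-ahead
# prediction of host CPU utilization, following the level/trend equations above.
# The series, the parameter values, and the initialization are illustrative.

def holt_forecast(y, alpha=0.5, beta=0.3, horizon=1):
    """Return the h-step-ahead forecast after smoothing the observed series y."""
    level, trend = y[0], y[1] - y[0]           # simple initialization from the first points
    for obs in y[1:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (prev_level + trend)   # level equation
        trend = beta * (level - prev_level) + (1 - beta) * trend   # trend equation
    return level + horizon * trend                                 # forecast equation

cpu_utilization = [21, 24, 28, 26, 23]         # hypothetical host CPU utilization (%)
print("Predicted next utilization:", holt_forecast(cpu_utilization))
```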
The parameters α and β can be adjusted by analyzing the results for different values in order to decrease the difference between real and predicted values. Aggressive VM migration leads to extra overhead and energy consumption. The proposed method of this paper aims to prevent unnecessary VM migrations by predicting the status of host resources, thereby improving the QoS and energy consumption. For example, assume 5 VMs with the same specifications are running on a host. The actual utilization of the different VMs at different times is shown in Table 1, and the CPU utilization predicted by the proposed method is shown in Table 2. As shown in Table 2, the predictions are very close to the real numbers, and they are very helpful in identifying the hosts that are overloaded and underloaded. Of course, the predictions are not identical to what happens in reality; a certain percentage of prediction error is tolerable, provided the predicted values remain within an acceptable distance of the real values around the decision threshold. To show that this approach can be useful for consolidating the VMs, consider the following example in a data center. Suppose the host threshold is 27%. The host threshold can have different values; this percentage is just an example to illustrate how the proposed solution works. According to this threshold, at time 3 the host is considered overloaded, and consequently some of its VMs would have to be selected and migrated to other appropriate hosts. However, if the system can anticipate what is going to happen to this host in the short term, it may be able to make a better decision. According to the predictions in Table 2, resource usage will decrease by time 5. Therefore, the proposed method avoids marking this host as overloaded and allows it to continue, because it predicts that the host will return to the normal state within a short time. In other words, by predicting the future state of the host from its current state, the proposed method prevents unwanted migrations, and thus prevents the reduction in the QoS that the VM migrations would cause. This method can also be utilized to determine the underloaded hosts. Suppose that all data center hosts are in a normal state and their VMs are fully running without resource shortage, and there is only one underloaded host, whose utilization is lower than the specified threshold. The proposed method tries to relocate the VMs from the underloaded host to the other hosts, and puts the host into sleep state or shuts it down to reduce energy consumption and improve efficiency. Whenever the data center needs resources, the proposed method powers on or wakes up the host. Switching hosts on and off, or putting them to sleep and waking them up, is costly and time-consuming. When the method puts a host to sleep or shuts it down and, a few minutes later, the host has to be restarted due to the growing resource demand of the VMs, overhead costs such as the reboot time are imposed on the host. This causes the system to experience a shortage of resources and to face a level of service violation. In this situation, predicting the data center utilization (DCU) can help the method identify the underloaded hosts more accurately.
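Taken together, the overload rule just illustrated and the DCU-based underload rule developed in the example that follows can be sketched as below. The thresholds, utilization values, and function names are illustrative assumptions rather than the paper's actual implementation.

```python
# Sketch of the two threshold decisions: the overload rule from the 27% example
# above, and the underload rule based on the predicted overall data center
# utilization (DCU) described in the example that follows. Values are illustrative.
OVERLOAD_THRESHOLD = 27.0    # percent, from the overload example
UNDERLOAD_THRESHOLD = 30.0   # percent, from the underload example

def is_overloaded(current_util, predicted_util):
    # A host above the threshold is flagged only if it is also predicted
    # to stay above the threshold in the near future.
    return current_util > OVERLOAD_THRESHOLD and predicted_util > OVERLOAD_THRESHOLD

def is_underloaded(host_util, current_dcu, predicted_dcu):
    # A host below the threshold is consolidated (VMs migrated away, host slept)
    # only if overall data center demand is not expected to rebound.
    return host_util < UNDERLOAD_THRESHOLD and predicted_dcu < current_dcu

print(is_overloaded(current_util=28.0, predicted_util=24.5))                 # False: no migration
print(is_underloaded(host_util=15.0, current_dcu=62.0, predicted_dcu=55.0))  # True: migrate and sleep
```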
For example, consider a data center that has 5 hosts, whose utilizations at different time intervals are shown in Table 3.

Table 3. Data centers' utilization (%) and their prediction
        Time 1  Time 2  Time 3  Time 4  Time 5  Time 6  Time 7  Time 8  Time 9
Host 1    84      74      84      84      68      55      43      80      75
Host 2    85      35      79      82      75      60      15      75      35
Host 3    75      75      45      75      60      75      85      60      60
Host 4    60      78      60      77      55      60      75      65      …

The utilizations of the hosts are different and change over time. At time 3, as shown in Table 3, the utilization of host 5 is lower than the threshold (30%), and thus the host should be considered an underloaded host. However, the prediction of future processor utilization is used for the final decision. Since the predicted overall utilization of the data center is still above the threshold, the method ignores this host and lets it continue running, because there is a possibility that its resources will be needed again. Hence, the host is not determined to be an underloaded host. Consider time 7, when the utilization of host 5 is again lower than the threshold and, looking at the predictions, a relative decrease in the total DCU can be expected. Therefore, this server is designated as an underloaded server, provided the other hosts have sufficient resources to accept the VMs of this host. Simulation Result The proposed method is executed on the CloudSim simulation toolkit [26]. The CloudSim toolkit is used to simulate cloud data centers and their infrastructures. The experiments are carried out under the following conditions: 1. The real data from the CoMon project (a monitoring system for PlanetLab) are used for the experiments. The data include the processor utilization of more than 1,000 VMs located on hosts in more than 500 locations around the world. The data were collected on 10 random days in March and April 2011 [30]. 2. The simulation environment includes 300 servers of common types available in data centers, including HP ProLiant DL360 G7, HP ProLiant DL360 Gen9, and HP ProLiant ML110 G5. A random model is used to model the amount of memory and bandwidth, and the processor uses the CoMon project workload model. Table 4 shows the specifications of the intended servers. Amazon EC2 VMs are also utilized (from https://aws.amazon.com/ec2/instance-types/), which include 5 different types of VMs. The details of these VMs are summarized in Table 5. The proposed method is implemented and its performance is evaluated against other VM consolidation techniques, based on the interquartile range (IQR) and local regression (LR). These algorithms are evaluated in combination with MMT and MC. IQR applies statistical analysis for dynamic adaptation of the utilization threshold. LR estimates the future resource utilization to adjust the threshold accordingly. The first algorithm, LrMmt, uses the LR algorithm with parameter 1.2 to detect server overload and employs the MMT algorithm to select the VMs. The second algorithm, IqrMc, uses the IQR algorithm with safety parameter 1.5 to detect overloaded servers and exploits the MC algorithm to select the VMs. This comparison is based on different criteria, including energy consumption, the number of VM migrations, SLA violations, and the number of host shutdowns. Table 6 shows the results obtained on one of the days. Fig. 1 shows the comparison of LrMmt, IqrMc, and the proposed method based on the number of VM migrations. The number of migrations in the proposed method is 3154, which is 30% and 31% lower than the number of migrations in LrMmt and IqrMc, respectively.
In the proposed method, the number of migrations is significantly diminished by eliminating relocations from hosts that are not genuinely overloaded or underloaded. When the method has to decide whether a host is overloaded or not, it predicts the state of the host's resources at the next point in time based on its past resource consumption. Fig. 2 shows the comparison of LrMmt, IqrMc, and the proposed method based on their energy consumption. The proposed method is 5% and 20% better than LrMmt and IqrMc, respectively. Workload prediction in the proposed method prevents turning on new hosts, and thus reduces the energy consumption. When the number of allocated resources of a host exceeds the threshold, one or more VM(s) should be migrated to other hosts. If a method cannot find a proper host for the migrated VM(s), a new host must be turned on for these VM(s), which increases the energy consumption of the data center. The proposed method predicts the future status of the host's resources and, if it detects that the host will return to the normal state within a short time, it does not consider it overloaded. Therefore, the proposed method prevents such VM migrations and avoids the associated costs. Fig. 3 shows the comparison of LrMmt, IqrMc, and the proposed method in terms of the SLA. The proposed method performs better than both the LrMmt and IqrMc methods. The proposed method performs better than LrMmt in terms of the SLA parameter because of the improvement in the number of VM migrations. The proposed method detects the host's threshold by predicting it from the host's current conditions and its history in order to prevent unwanted migrations, and thus addresses the performance degradation that occurs due to the VM migrations. The proposed method offers a higher level of SLA than IqrMc, caused by hosting more VMs; consequently, more VMs tend to compete for resources, which in some cases causes a shortage of resources. Conclusions One of the main concerns of cloud data centers is to improve energy efficiency and to increase the profitability of these data centers, while keeping the quality of service at an acceptable level.
In the proposed method of this paper, the problem of virtual machine (VM) consolidation was addressed by considering host overload and host underload detection. The proposed method mapped the future state of the data center by predicting the hosts' workloads, and then, based on these predictions, decided whether to designate a host as overloaded or underloaded. An exponential smoothing technique was also leveraged to predict host utilization and support better decisions. As a result, improper migrations were avoided and energy consumption was reduced. In addition, more accurate recognition of the underloaded hosts prevented continuous shutdowns of the hosts, and thus avoided a large number of VM relocations.
4,884.6
2021-06-17T00:00:00.000
[ "Computer Science" ]
On the altered states of machine vision. Trevor Paglen, Hito Steyerl, Grégory Chatonsky, by Antonio Somaini. Abstract: The landscape of contemporary visual culture and contemporary artistic practices is currently undergoing profound transformations caused by the application of technologies of machine learning to the vast domain of networked digital images. The impact of such technologies is so profound that it leads us to raise the very question of what we mean by "vision" and "image" in the age of artificial intelligence. This paper will focus on the work of three artists - Trevor Paglen, Hito Steyerl, Grégory Chatonsky - who have recently employed technologies of machine learning in non-standard ways. Rather than using them to train systems of machine vision with their different operations (face and emotion recognition, object and movement detection, etc.) and their different fields of application (surveillance, policing, process control, driverless vehicle guidance, etc.), they have used them in order to produce entirely new images, never seen before, that they present as altered states of the machine itself. The landscape of contemporary visual culture and contemporary artistic practices is currently undergoing profound transformations caused by the application of technologies of machine learning - one of the areas of so-called "artificial intelligence" - to the vast domain of networked digital images. Three strictly interrelated phenomena, in particular, are producing a real tectonic shift in the contemporary iconosphere, introducing new ways of "seeing" and new "images" - we'll return later to the meaning of these quotation marks - that extend and reorganize the field of the visible, while redrawing the borders between what can and what cannot be seen.
These three strictly interrelated phenomena are: (1) the new technologies of machine vision fuelled by processes of machine learning such as the Generative Adversarial Networks (GAN); (2) the ever-growing presence on the internet of trillions of networked digital images that are machine-readable, in the sense that they can be seen and analyzed by such technologies of machine vision; (3) the entirely new images that the processes of machine learning may generate. Considered from the perspective of the longue durée of the history of images and visual media, the appearance of these three phenomena raises a whole series of aesthetic, epistemological, ontological, historical and political questions. Their impact on contemporary visual culture is so profound that it leads us to raise the very question of what we mean by "vision" and "image" in the age of artificial intelligence. What is "seeing" when the human psycho-physiological process of vision is reduced, in the case of machine vision technologies, to entirely automated operations of pattern recognition and labelling, and when the various applications of such operations (face and emotion recognition, object and motion detection) may be deployed across an extremely vast visual field (all the still and moving images accessible online) that no human eye could ever attain? In speaking of "machine vision", are we using an anthropomorphic term that we should discard in favor of a different set of technical terms, specifically related to the field of computer science and data analysis, that bear no connection with the physiological and psychological dynamics of human vision? Artists-researchers such as Francis Hunger and scholars such as Andreas Broeckmann (with his notion of "optical calculus" as "an unthinking, mindless mechanism, a calculation based on optically derived input data, abstracted into calculable values, which can become part of computational procedures and operations"), 1 Adrian MacKenzie and Anna Munster (with their ideas of a "platform seeing" operating onto "image ensembles" through an "invisual perception"), 2 Fabian Offert and Peter Bell (with their idea according to which the "perceptual topology" of machines is irreconcilable with human perception) 3 have argued for the necessity of moving beyond anthropocentric frameworks and terms, highlighting the fact that machine vision poses a real challenge for the humanities. Can we still use the term "image" for a digital file, encoded in some image format, 4 that is machine-readable even when it is not visible to human eyes, or that becomes visible on a screen as a pattern of pixels only for a small fraction of time, spending the rest of its indefinite lifespan circulating across invisible digital networks? Can concepts such as that of "iconic difference" [ikonische Differenz], 5 which highlights the fundamental perceptual difference between an image and its surroundings (its "hors champ"), still be applied to machine-readable images? And what is the status of the entirely new images produced by processes of machine learning? These are images that are not produced through some traditional form of lens-based analog or digital optical recording, nor through traditional computer-generated imagery (CGI) systems, but rather through processes belonging to the wide realm of artificial intelligence that either transform existing images in ways that were impossible until a few years ago, or create entirely new images, never seen before.
What do such images represent, what kind of agency do they have, how do they mediate our visual relation to the past, the present, and the future? And why have such new images generated by processes of machine learning so often been considered, both in popular culture and in the work of contemporary artists, to be the product of some kind of altered state - a "dream", a "hallucination", a "vision", an "artificial imagination" - of the machine itself? Before we analyse the way in which this last question is raised, in different ways, by popular computer programs such as Google's DeepDream (whose initial name echoed Christopher Nolan's 2010 film "Inception"), and by the recent work of contemporary artists and theorists such as Trevor Paglen, Hito Steyerl and Grégory Chatonsky, let us begin with a quick overview of the current state of machine vision technologies, with their operations and applications, and of the new images produced by processes of machine learning that are increasingly appearing throughout contemporary visual culture. The impact of machine learning technologies on contemporary visual culture First tested in the late 1950s, with image recognition machines such as the Perceptron (developed at the Cornell Aeronautical Laboratory by Frank Rosenblatt in 1957), and then developed during the 1960s and 1970s as a way of imitating the human visual system in order to endow robots with intelligent behavior, machine vision technologies have entered a new phase, in recent years, with the development of machine learning processes, and with the possibility of using immense image databases accessible online as both training sets and fields of application. 6 If in the 1960s and 1970s the goal was mainly to extract three-dimensional structures from images through the localization of edges, the labelling of lines, the detection of shapes and the modelling of volumes through feature extraction techniques such as the Hough transform (invented by Richard Duda and Peter Hart in 1972, on the basis of a 1962 patent by Paul Hough), the recent development of machine learning techniques and the use of vast image training sets organized according to precise taxonomies - such as ImageNet, in which 14 million images are organized according to 21,000 categories derived from the WordNet hierarchy (a large lexical database of English nouns, verbs, adjectives and adverbs) 7 - have allowed a rapid increase in the precision of all the operations of machine vision. Among such operations, we find pixel counting; segmenting, sorting, and thresholding; feature, edge, and depth detection; pattern recognition and discrimination; object detection, tracking, and measurement; motion capture; color analysis; optical character recognition (this last operation allowing for the reading of words and texts within images, extending the act of machine "seeing" to a form of machine "reading"). For a few years now, such operations have been applied to the immense field of machine-readable images. A field whose dimensions may be imagined only if we understand that any networked digital image - whether produced through some kind of optical recording, or entirely computer-generated, or a mix of the two, as is often the case - may be analysed by machine vision technologies based on processes of machine learning such as the Generative Adversarial Networks (GAN).
8 Starting from vast training sets containing images similar to the ones the system has to learn to identify, and feeding such training sets into an ensemble of two adversarial neural networks that act as a Generator and a Discriminator in competition against one another, GAN-based machine vision systems have gradually become more and more precise in performing their tasks. All the main smartphone producers have equipped their devices with cameras and image processing technologies that turn every photo we take into a machine-readable image, and internet giants such as Google and Facebook, as well as a host of other companies, have developed machine vision and face recognition systems capable of analysing the immense quantity of fixed and moving images that exist on the internet and that continue to be uploaded every day, raising all sorts of ethical and political issues and highlighting the need for a broader legal framework that for the moment is largely missing. 9 Considered together, such machine vision systems are turning the contemporary digital iconosphere and the entire array of contemporary screens, with their various dimensions and degrees of definition, 10 into a vast field for data mining and data aggregation. A field in which faces, bodies, gestures, expressions, emotions, objects, movements, places, atmospheres and moods - but also voices and sounds, through technologies of machine hearing - may be identified, labelled, stored, organized, retrieved, and processed as data that can be quickly accessed and activated for a wide variety of purposes: from surveillance to policing, from marketing to advertising, from the monitoring of industrial processes to military operations, from the functioning of driverless vehicles to that of drones and robots, from the study of the inside of the human body through the analysis of medical imaging all the way up to the study of the surface of the Earth and of climate change through the analysis of satellite images. Even disciplines that might seem to be distant from the most common current applications of machine vision technologies, such as art history and film studies, are beginning to test the possibilities introduced by such an automated gaze, capable of "seeing" and analyzing, according to different criteria, vast quantities of visual reproductions of artworks or vast corpuses of films and videos in an extremely short time. 11 In order to fully understand the impact of machine learning on contemporary visual culture, we need to add, to the vast field of machine vision technologies that we just described, the new images produced by processes of machine learning - often, the same GAN that are used to train and apply machine vision systems - that either transform pre-existing images in ways that were impossible until a few years ago, or create entirely new images, never seen before. In the first case, we are referring to processes of machine learning capable of transforming existing images, with very different applications: producing 3D models of objects from 2D images; changing photographs of human faces in order to show how an individual's appearance might change with age (as with the app FaceApp) or by merging a face with another face (Faceswap); 12 animating in a highly realistic way the old photograph of a deceased person (Deep Nostalgia, developed by MyHeritage); 13 creating street maps from satellite imagery; 14 taking any given video and "upscaling" it, by increasing its frame rate and its definition.
An emblematic example of this last application, which may alter significantly our experience of visual documents of the past, would be the videos realized by Denis Shiryaev 15 in which, through a process of machine learning, a Lumière film such as L'Arrivée d'un train en gare de La Ciotat (1896) is transformed from the original 16 frames per second to 60 frames per second, from the original 1.33:1 format to a contemporary 16:9 format, and from the original, grainy 35mm analog film to a 4K digital resolution. 16 In other examples of images transformed by machine learning, the transformations are much more radical, as happens with the so-called "deepfakes": videos that use neural networks in order to manipulate the images and the sounds of pre-existing videos - in some cases a single, pre-existing image - producing new videos that have a high potential to deceive. Among the many examples that can now be found across the internet in different domains such as pornography, politics and social media are pornographic videos in which faces of celebrities are swapped onto the bodies of porn actors, a TikTok account with a whole series of odd videos by a "Deep Tom Cruise", 17 or speeches by public figures such as Barack Obama 18 and Queen Elizabeth 19 whose content has been completely altered in such a way that the movements of their mouths perfectly match the new, invented words they are uttering. And among the applications of deepfakes in the musical realm are the "new" songs by long deceased singers, whose style and voice are reproduced in a highly realistic way by applications of machine learning such as Jukebox, developed by OpenAI 20 : a "resurrecting" function akin to that of the animated photographs of deceased persons in Deep Nostalgia. In the second case, the use of machine learning processes leads to the creation of entirely new images or sections of images: modelling patterns of motion (for example, crowd motion) in video, thereby leading computer generated imagery, in some cases fuelled by artificial intelligence, to produce new kinds of "contingent motion"; 21 producing highly photorealistic images of objects and environments for different kinds of advertising; inventing perfectly realistic faces of people that actually do not exist through open source software such as StyleGAN, and then making them accessible through projects such as Philip Wang's This Person Does Not Exist. 11 See for example the various experiments being developed at the Google Arts & Culture Lab: "Google Arts & Culture", accessed November 3, 2021, https://artsandculture.google.com/, or the way in which the EYE Film Museum in Amsterdam is testing new ways of accessing its collections through a program fuelled by machine vision systems: "Jan Bot", accessed December 2, 2021, https://www.jan.bot/. 12 The website of Faceswap, which announces itself as "the leading free and OpenSource multiplatform Deepfakes software": "Faceswap", accessed November 3, 2021, https://faceswap.dev/. 13 "Deep Nostalgia", accessed November 3, 2021, https://www.myheritage.fr/deep-nostalgia. 14 R. Matheson, "Using artificial intelligence to enrich digital maps. Model tags road features based on satellite images, to improve GPS navigation in places with limited map data", MIT News, January 23, 2020, https://news.mit.edu/2020/artificial-intelligence-digital-maps-0123.
22 To these widespread applications of machine learning we may add the hybrid, unprecedented imagery produced by the software DeepDream, created in 2015 by the Google engineer and artist Alexander Mordvintsev: a program that uses Convolutional Neural Networks in order to enhance patterns in pre-existing images, creating a form of algorithmic pareidolia, the impression of seeing a figure where there is none, which is here generated by a process which repeatedly detects in a given image patterns and shapes that the machine vision system has been trained to see. 23 The results of such a recursive process, in which every new image is submitted again to the same kind of pattern and shape recognition, are images that recall an entire psychedelic iconography that spans cinema, photography, the visual arts and even so-called art brut: images that are here presented as a sort of dream - a hallucinogenic, psychedelic dream - of the machine itself. The original image (Fig. 1) has been modified by applying ten (Fig. 2) and then fifty (Fig. 3) iterations of the software DeepDream, the network having been trained to perceive dogs. Exploring the "altered states" of machine vision through Generative Adversarial Networks The idea that lies at the basis of the DeepDream software - the idea that machine learning technologies, and in particular the Convolutional Neural Networks (CNN) and Generative Adversarial Networks (GAN), may be used in order to explore and reveal the altered states of machine vision - can also be found in the recent works of artists (often active also as theorists) such as Trevor Paglen, Hito Steyerl, and Grégory Chatonsky. 24 By insisting on the "creative", image-producing potential of Generative Adversarial neural networks - rather than on their standard application for the training of machine vision systems - the three of them explore a vast field of images that they consider to be "hallucinations", "visions of the future", or the product of an "artificial imagination", characterized by a new form of realism, a "disrealism". Trevor Paglen's entire work as an artist and theorist, since 2016, has been dedicated to the attempt to understand and visualize the principles that lie at the basis of machine vision technology. Through texts (sometimes written in collaboration with the researcher Kate Crawford), 25 exhibitions, performances, and works made of still and moving images, Paglen has tried to highlight not only the social and political biases that are inherent in the way machine vision systems are structured, but also the way in which such systems diverge profoundly from human vision. 26 In an article published in December 2016 in The New Inquiry with the title "Invisible Images (Your Pictures Are Looking at You)", 27 Paglen discusses the new challenges that arise in a context in which "sight" itself has become machine-operated and separated from human eyes, participating in a vast field of image operations. Arguing that what we are currently experiencing is part of a vast transition from human-seeable to machine-readable images - a new condition in which "the overwhelming majority of images are now made by machines for other machines, with humans rarely in the loop" - Paglen writes that if we want to understand the invisible world of machine-machine visual culture, we need to unlearn to see like humans.
Exploring the "altered states" of machine vision through Generative Adversarial Networks

The idea that lies at the basis of the DeepDream software -the idea that machine learning technologies, and in particular Convolutional Neural Networks (CNN) and Generative Adversarial Networks (GAN), may be used in order to explore and reveal the altered states of machine vision -can also be found in the recent works of artists (often active also as theorists) such as Trevor Paglen, Hito Steyerl, and Grégory Chatonsky. 24 By insisting on the "creative", image-producing potential of Generative Adversarial neural networks -rather than on their standard application for the training of machine vision systems -the three of them explore a vast field of images that they consider to be "hallucinations", "visions of the future", or the product of an "artificial imagination", characterized by a new form of realism, a "disrealism". Trevor Paglen's entire work as an artist and theorist since 2016 has been dedicated to the attempt to understand and visualize the principles that lie at the basis of machine vision technology. Through texts (sometimes written in collaboration with the researcher Kate Crawford), 25 exhibitions, performances, and works made of still and moving images, Paglen has tried to highlight not only the social and political biases that are inherent in the way machine vision systems are structured, but also the way in which such systems diverge profoundly from human vision. 26 In an article published in December 2016 in The New Inquiry with the title "Invisible Images (Your Pictures Are Looking at You)", 27 Paglen discusses the new challenges that arise in a context in which "sight" itself has become machine-operated and separated from human eyes, participating in a vast field of image operations. Arguing that what we are currently experiencing is part of a vast transition from human-seeable to machine-readable images -a new condition in which "the overwhelming majority of images are now made by machines for other machines, with humans rarely in the loop" -Paglen writes that "if we want to understand the invisible world of machine-machine visual culture, we need to unlearn to see like humans. We need to learn how to see a parallel universe composed of activations, keypoints, eigenfaces, feature transforms, classifiers, training sets, and the like". 28 We need to unlearn to see like humans. But how can we not see like humans, how can we step out of our human point of view? Accomplishing this apparently impossible task -a task which echoes the recurrent philosophical problem of how to step out of one's own socio-historical position, of one's own cognitive and emotional framework -has been the main goal of Trevor Paglen's artistic practice during the last few years, as we can see in a body of works that was initially produced in 2017 through various collaborations with computer vision and artificial intelligence researchers as an artist-in-residence at Stanford University, and was first exhibited at the Metro Pictures Gallery in New York in an exhibition entitled A Study of Invisible Images (September 8 - October 21, 2017), 29 before being presented at various other galleries and museums such as the Osservatorio of the Fondazione Prada in Milan, or the Carnegie Museum of Art in Pittsburgh, in an exhibition entitled Opposing Geometries (2020). The works in the exhibition at Metro Pictures present a possible response to the challenge of how to penetrate within systems of machine vision that tend to expel the human gaze from their processes. Among them, we find the attempt to master the machine learning techniques that are commonly used for machine vision applications, in order to hack them and lead them to produce entirely new images, never seen before, that may be considered a form of hallucination of machine vision. This is what happens in a series of still images entitled Adversarially Evolved Hallucinations, which Paglen developed through a non-standard application, in three steps, of Generative Adversarial Networks. 30 The first step consisted in establishing new, original training sets. Instead of using the usual corpuses of images that are used to train machine vision systems in recognizing faces, objects, places and even emotions -corpuses that are often derived from pre-existing and easily available image databases such as the already mentioned ImageNet -Paglen established new training sets composed of images derived from literature, psychoanalysis, political economy, military history, and poetry. Among the various taxonomies he used in order to compose his training sets we find "monsters that have been historically interpreted as allegories of capitalism", such as vampires, zombies, etc.; "omens and portents", such as comets, eclipses, etc.; "figures and places that appear in Sigmund Freud's The Interpretation of Dreams", a corpus which includes various symbols from Freudian psychoanalysis; "Eye-Machines", a series of images clearly inspired by Harun Farocki's video installations Eye-Machine I, II, III (2001-2003) and containing images of surveillance cameras or of spaces under surveillance; "American Predators", a corpus containing various predatory animals, plants, and humans indigenous to the United States, mixed with military hardware like predator drones and stealth bombers.

28 Ibid. 29 A series of images of the works presented in the exhibition can be found at the following address: "Trevor Paglen. A study of Invisible Images", Metro Pictures, accessed January 19, 2020, http://origin.www.metropictures.com/exhibitions/trevor-paglen4. 30 Information on the Adversarially Evolved Hallucinations can be found on Trevor Paglen's website: T. Paglen, "Hallucinations", accessed November 3, 2021, https://paglen.studio/2020/04/09/hallucinations/.
The second step consisted in feeding these unusual training sets into the two neural networks of the GAN system: the Discriminator and the Generator. These two networks begin interacting with one another in an adversarial, competitive way, in such a way that the Discriminator, after having received the initial training set, has to evaluate the images that it receives from the Generator, deciding whether or not they resemble those of the training set. As the process unfolds through reiterated exchanges between the two neural networks, the Discriminator becomes more and more precise and effective in evaluating the images that are submitted to it. The third step consists in the artist intervening in the process and choosing to extract, at a given moment, one of the images produced by the Generator: an image that emerges from the sequence of the adversarial exchanges, and that is the result of one of the countless attempts by the Generator to test the precision of the Discriminator, trying to fool it. In the case of the series of the Adversarially Evolved Hallucinations, all the images selected by Paglen seem to bear some kind of resemblance to the ones contained in the original training sets -even though we cannot really assess the degree of this resemblance, because the training sets are not accessible to us -while displaying at the same time different forms of deviations and aberrations that recall a sort of psychedelic imaginary. Among Paglen's Adversarially Evolved Hallucinations, we find images with titles such as Vampire (Corpus: Monsters of Capitalism), Comet (Corpus: Omens and Portents), The Great Hall (Corpus: The Interpretation of Dreams), Venus Flytrap (Corpus: American Predators), A Prison Without Guards (Corpus: Eye-Machines). In the case of Vampire (Corpus: Monsters of Capitalism), the Discriminator was trained on thousands of images of zombies, vampires, Frankensteins, and other ghosts that have been at some point -be it in essays, literary texts, or films -associated with capitalism. Paglen then set the Generator and Discriminator running until they had synthesized a series of images that corresponded at least in part to a specific class within the given corpus. From all of the even slightly acceptable options that the GAN generated, Paglen then selected the one we see in the series exhibited at Metro Pictures as the "finished work".
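The adversarial back-and-forth described in these steps can be summarized in a schematic training loop. The sketch below is a generic, toy GAN in PyTorch, not Paglen's code: real_batch stands in for samples drawn from one of his custom corpora, and the tiny fully connected networks are placeholders for actual generator and discriminator architectures.

```python
# Generic GAN training step (toy sizes, placeholder networks).
import torch
from torch import nn

latent_dim, img_dim = 64, 28 * 28                 # arbitrary toy dimensions

generator = nn.Sequential(                        # proposes images from noise
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh())

discriminator = nn.Sequential(                    # scores "does this look like the corpus?"
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    b = real_batch.size(0)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)

    # Discriminator: learn to tell corpus images from generated ones.
    fake = generator(torch.randn(b, latent_dim)).detach()
    loss_d = bce(discriminator(real_batch), ones) + bce(discriminator(fake), zeros)
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator: try to fool the discriminator.
    fake = generator(torch.randn(b, latent_dim))
    loss_g = bce(discriminator(fake), ones)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

    return fake
```

The "third step" then simply amounts to sampling the generator at some moment of this loop and keeping, by hand, one of its outputs as the finished work.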
There may be multiple reasons behind Trevor Paglen's decision to call these images "hallucinations". To begin with, such images recall a type of imagery that we might consider to be "psychedelic" or "surrealist": in some of them, we definitely see echoes of Max Ernst, or Salvador Dalí. A second reason may lie in the attempt to emphasize the fact that the result of this non-standard application of the processes of machine learning -a process which unfolds within a closed machine-to-machine space, the invisible space of the back-and-forth between the Generator and the Discriminator from which human eyes are excluded -produces images that, just like human hallucinations, have no footing in exterior reality, or may merge in unpredictable ways with shapes and forms stemming from the perception of the outer world. Finally, the term "adversarially evolved hallucinations" may underline the fact that these images are the result of a machine learning process gone astray: a process which has been hacked and led to drift away from its original, standard applications. The reference to the term "hallucinations", though, should not be misleading. What Trevor Paglen tries to show us with his Adversarially Evolved Hallucinations has really nothing to do with a disruption of the orderly functioning of human consciousness. What they highlight, rather, is the radical otherness of machine vision when compared with human vision: a radical otherness based on operations that have nothing to do with human, embodied vision, and which we may just try to grasp through the impossible attempt of "unlearning to see like humans". We find a different application of images produced by machine learning in This is the Future, an installation by Hito Steyerl which was presented at the Venice Biennale in 2019, and which was conceived as an expansion of the exhibition Power Plants at the Serpentine Gallery in London the previous year. In the Venice installation, Hito Steyerl arranged on different platforms a series of nine videos in which one could see images resembling some kind of "vegetal" time-lapse imagery: flowers quickly blossoming and spreading out, plants and bamboo shoots growing in height and width. What interested Steyerl in the use of neural networks in this installation was the predictive nature of machine learning, and the status of "visions of the future" of its imagery: the fact that neural predictive algorithms operate through statistical models and predictions based on immense, "big data" databases, and are therefore related to the vast spectrum of predictive systems (be they financial, political, meteorological, environmental, etc.) that are present in contemporary "control societies", while at the same time being part of the longue durée of the history of prediction systems elaborated by human cultures. The main video presented in Hito Steyerl's installation, entitled This is the Future: A 100% Accurate Prediction, consists of images produced through a collaboration with the programmer Damien Henry, author of a series of videos entitled A Train Window 31 in which a machine learning algorithm has been trained to predict the next frame of a video by analyzing samples from the previous image, in such a way that, as in a perfect feedback loop, each output image becomes the input for the next step in the calculation. In this way, after intentionally choosing or producing only the first image, all the following ones are generated by the algorithm, without any human intervention. In Hito Steyerl's work, this idea of a video entirely made up of images "predicted" by neural networks and located, as the video says, "0.04 seconds in the future", is presented as a new kind of documentary imagery: an imagery that can at the same time predict and document the future, as paradoxical as this may seem. The video begins with white text on a black background that reads: "These are documentary images of the future. Not about what it will bring, but about what it is made of".

31 The videos, programmed with Tensorflow, are available at: D. Henry, "A train window", Magenta, October 3rd, 2018, https://magenta.tensorflow.org/nfp_p2p. I thank both Hito Steyerl and Damien Henry for useful information on the coding used in This is the Future.
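The feedback loop at the core of A Train Window and This is the Future can likewise be reduced to a short schematic. According to the note above, Henry's videos were programmed with TensorFlow; the sketch below uses PyTorch only for brevity, and its predictor is an untrained placeholder, since the point is the rollout logic: a model that maps one frame to the next is run on its own outputs, so that after the seed frame every image "predicts" a moment that never existed.

```python
# Schematic autoregressive rollout: each predicted frame becomes the next input.
import torch
from torch import nn

class NextFramePredictor(nn.Module):
    """Maps a frame (1 x 3 x H x W, values in [0, 1]) to a predicted next frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, frame):
        return self.net(frame)

def rollout(model, first_frame, n_frames=250):
    """Feedback loop: after the chosen first frame, every image is machine-generated."""
    frames = [first_frame]
    with torch.no_grad():
        for _ in range(n_frames - 1):
            frames.append(model(frames[-1]))
    return frames

# Hypothetical usage: seed with a single chosen image, let the loop run.
model = NextFramePredictor().eval()     # in real use, trained weights would be loaded here
seed = torch.rand(1, 3, 128, 128)       # stand-in for the intentionally chosen first frame
video = rollout(model, seed)
```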
The next five sections of the video -Heja's Garden, The Future: A History, Bambusa Futuris, Power Plants and Heja's Prediction -lead us to a psychedelic landscape of images that morph sample images stemming from categories such as "sea", "fish", "flower", "rose" or "orchid": each of them is produced by a neural network that, as the electronic voice accompanying the video tells us, "can see one fraction of a second into the future". Second Earth (2019) by Grégory Chatonsky takes another route into the iconosphere produced by GAN-driven machine learning. What interests him is the idea of an "artificial imagination", capable of visualizing, through the means of artificial intelligence, "the hallucination of a senseless machine, a monument dedicated to the memory of the extinct human species". 32 Himself in charge of the coding that lies at the base of the various elements and the various media mobilized in his work, Chatonsky works in particular on what he calls the "chaînage", the "sequencing" of different artificial intelligence systems that, taken all together, produce a whole cascade of new forms: among them, neural networks capable of generating new texts (read by synthetic voices) starting from some given text databases, or capable of generating images from given texts, and texts from given images, in a new kind of AI-powered ekphrasis. The metamorphic universe that we see in the videos of Second Earth, a work that Chatonsky presents as an "evolving installation", evokes the idea of a form-generating power that used to be rooted in nature and which is now taken over by machines which are incorporating and re-elaborating the trillions of images that humans have uploaded on the internet as a sort of hypertrophic memory. The text accompanying the work reads: "Accumulate data. Outsourcing our memories. Feed software with this data so that it produces similar data. Produce realism without reality, become possible. Disappear. Coming back in our absence, like somebody else". 33 As products of a "realism without reality", what Chatonsky calls a "disrealism", 34 the images produced through Generative Adversarial Networks in Second Earth do have a hallucinatory, oneiric, "surrealistic" quality that bears a strange kind of "family resemblance" to the ones that we find in Paglen's Adversarially Evolved Hallucinations, in Steyerl's This is the Future, and in the work of other artists who have recently explored GAN-generated imagery, such as Pierre Huyghe in his installation Uumwelt (2018). In Chatonsky's Second Earth, these images do refer to some kind of "outer reality", but the status of this outer reality is highly unclear. On the one hand, they bear traces of the images contained in the training sets that have been employed in order to activate the GANs: in the case of the stills from one of the videos in the installation that are here reproduced, such training sets probably referred to categories such as "birds", "faces", "eyes", etc., and the images contained in the training sets -be they actual photographs, or stills from videos -do in most cases refer to some profilmic reality. On the other hand, extracted as they are from the "latent space" of a process of machine learning in motion from the pole of absolute noise to the pole of a perfect resemblance to the images of the training set, the images of Second Earth refer to another kind of reality, one that does not exist yet.
We are here in the domain of "anticipation", rather than "prediction", 35 in the perspective of an exploration of a non-human "artificial imagination", rather than in the denunciation of the pervasive presence of systems of control and surveillance, as was the case in the work of Hito Steyerl. At the basis of Chatonsky's "Second Earth", we find the observation that the machine was becoming capable of automatically producing a phenomenal quantity of realistic images from the accumulation of data on the Web. This realism is similar to the world we know, but it is not an identical reproduction. Species metamorphose into each other, stones mutate into plants and the ocean shores into unseen organisms. The result: this "second" Earth, a reinvention of our world, produced by a machine that wonders about the nature of its production. Over fifty years ago, in his seminal Understanding Media: The Extensions of Man (1964), Marshall McLuhan formulated the idea that art could become, in some decisive historical moments, a form of "advance knowledge of how to cope with the psychic and social consequences of the next technology", and added that new art forms might become in these moments "social navigation charts", helping us find some orientation across a sensorium entirely transformed by new media and new technologies. Today, while we witness the first signs of what promises to be a massive impact of artificial intelligence on all areas of our psychic, social, and cultural life, the works of artists such as Trevor Paglen, Hito Steyerl and Grégory Chatonsky do appear like "navigation charts": their exploration of the altered states of machine vision through the appropriation and the détournement of technologies such as Generative Adversarial Networks helps us better understand the aesthetic, epistemological and political implications of the transformations that such technologies are producing within contemporary visual culture. Taken together, they highlight the fact that what is at stake is the very status of what we mean by "image" and by "vision" in the age of artificial intelligence.
8,024.8
2022-05-16T00:00:00.000
[ "Computer Science", "Art" ]
Conservation Laws in Quantum-Correlation-Function Dynamics

For a complete and lucid discussion of quantum correlation, we introduced two new first-order correlation tensors defined as linear combinations of the general coherence tensors of the quantized fields and derived the associated coherence potentials governing the propagation of quantum correlation. On the basis of these quantum optical coherence tensors, we further introduced new concepts of scalar, vector and tensor densities and presented some related properties, such as conservation laws and the wave-particle duality for quantum correlation, which provide new insights into photon statistics and quantum correlation.

Introduction
Fields interact with atoms in a fundamentally random or stochastic way. As a consequence, statistical interpretation of the outcome of most optical experiments becomes indispensable due to the fact that any measurement of light is accompanied by certain unavoidable fluctuations [1,2]. Among various descriptions of the statistical properties of light, correlation between the fields at different space-time points, known as the correlation function, has long been recognized as the most fundamental physical quantity that plays a crucial role in photon correlations. The concept of optical correlation, first introduced by Wolf, has laid a foundation on which many important problems in classical statistical optics can be treated in a unified way. Also worth special mention is the quantum theory of optical coherence created by Glauber [3][4][5], who took a quantum electrodynamics approach to the problems of photon statistics. Because of its theoretical and practical importance, coherence theory and quantum optics have developed into a challenging, multifarious field of research. In quantum optics, detection of photons based on the absorption of photons via the photoelectric effect constitutes the basis of the measurements of the optical field. Photon statistics and quantum correlations have been studied extensively. However, the discussions in most papers are restricted to the correlations between the same kinds of fields (either between electric fields E themselves or between magnetic fields H themselves) at different space-time points, and the mixed correlation between electric field and magnetic field, introduced by Mehta and Wolf [6], seems to have received less attention in spite of its importance. Although the quantum theory for the correlations between the same kind of fields has already been established and is indeed informative, this alone cannot constitute a self-consistent theoretical framework for the full description of the coherence properties of the quantum electromagnetic fields. The need for a more self-contained quantum correlation theory involving the EH-mixed correlations has motivated us to take a fully quantum electrodynamics approach to the problems of quantum correlations. In this paper, we will introduce, for the first time, two second-order correlation tensors, $E_{jk}(x_1, t_1; x_2, t_2)$ and $S_{jk}(x_1, t_1; x_2, t_2)$, for the complete and lucid description of quantum correlation on the basis of the quantized field theory.
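The correlation tensors discussed here are quantum averages of normally ordered products of field operators, $\mathrm{Tr}\{\rho\, \hat{E}^{(-)}\hat{E}^{(+)}\}$ and the like; for a single mode these reduce, up to mode functions and constant prefactors, to moments such as $\mathrm{Tr}\{\rho\, a^{\dagger}a\}$. The following toy NumPy sketch (not taken from the paper) evaluates that trace for a Fock state and a coherent state in a truncated number basis; the truncation dimension and state parameters are arbitrary.

```python
# Toy single-mode illustration of Tr{rho a^dagger a} in a truncated Fock basis.
import numpy as np
from math import factorial

N = 30                                            # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), k=1)        # annihilation operator: a|n> = sqrt(n)|n-1>
n_op = a.conj().T @ a                             # number operator a^dagger a

def density_matrix(psi):
    psi = psi / np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

fock3 = np.zeros(N); fock3[3] = 1.0               # Fock state |n = 3>

alpha = 1.5                                       # coherent state |alpha>, expanded in Fock basis
coherent = np.array([np.exp(-abs(alpha)**2 / 2) * alpha**n / np.sqrt(factorial(n))
                     for n in range(N)])

for label, psi in [("Fock |3>", fock3), ("coherent, alpha=1.5", coherent)]:
    rho = density_matrix(psi)
    g1 = np.trace(rho @ n_op).real                # Tr{rho a^dagger a}
    print(f"{label:22s}  Tr(rho a^dag a) = {g1:.3f}")
# Expected output: 3.000 for the Fock state and ~|alpha|^2 = 2.250 for the coherent state.
```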
In formal analogy to the energy conservation, momentum conservation, and angular momentum laws for vector fields in classical electromagnetism, some new densities derived from E jk and S jk are introduced to quantum correlation theory and are related by new continuity 2 Advances in Optical Technologies equations, which indicate new conservation laws in quantum correlation theory. In terms of Fock states and coherent states, we will represent the newly introduced densities and reveal the nature of the wave-particle duality for the quantum correlation. Quantum Coherence Tensors To fully characterize the correlations involving all the components of the electromagnetic fields at any space-time points, more general coherence matrices are needed. In addition to Glauber's E-only correlation tensor [3][4][5], other first-order magnetic H-correlation and EH mixed-correlation tensors have been defined by Mehta and Wolf [6], namely, , where ρ is the density operator describing the field, the symbol Tr{· · · } stands for the trace operation, the indices ( j, k = 1, 2, 3) label Cartesian components: ( x, y, z), and we have already separated the electromagnetic field operators into their positive-frequency parts E (+) and H (+) and their negative-frequency parts E (−) and H (−) , respectively. Since the electric and magnetic field operators are related by Maxwell's equations in free space, their corresponding negative frequency operators E (−) and H (−) can also be expressed in the tensor form as where c indicates the speed of light, ε jkl is the unit tensor of Levi-Civita with antisymmetry, ∂ 1 j and ∂/∂t 1 are the differential operations to be performed with respect to the point x 1 and t 1 , and Einstein's summation convention has been employed. One of the essential aspects in which quantum field theory differs from classical theory is that two values of the field operators taken at different space-time points do not, in general, commute with one another. After multiplying both sides of (2) by E (+) m (x 2 , t 2 ) and placing it under the operator signs, we obtain the relation If we take the quantum average of (6) and interchange the order of the quantum average and differential operations, from the definition of coherence tensor in (1), we have If we apply the same procedure, we obtain In a similar manner, the divergence condition yields the following equations: After adding (7) to (10), and subtracting (9) from (8), we obtain two equations, respectively: where The tensor E jk may be called the quantum energy coherence tensor and S jk the quantum energy-flow coherence tensor, which have formal resemblance to those of the classical theory [7]. Moreover, from (11) and (12), or from (13) and (14), one has Advances in Optical Technologies 3 The equations (15)- (16) and (19)- (20) are basic equations for the propagation of the energy coherence tensor and the energy-flow coherence tensor in free space. Another set of similar equations, which involve r 2 and t 2 , can also be derived in a similar way. So far we have derived the governing equations for the correlations between all the components of the quantum electromagnetic fields at any space-time points. In order to understand the importance of the newly introduced quantum energy and energy-flow coherence tensors, let us discuss the photoelectric detection of the optical field within the quantum mechanical interaction picture. 
In most of optical experiments, measurements of the electromagnetic field are based on the absorption of photons (energy) via the photoelectric effect. Based on a heuristic argument, Glauber has given the expression for the average counting rate of an ideal photodetection [4], which is proportional It is important to note here that photon absorption by an ideal detector measures the average value of the Hamiltonian of the quantized radiation field: E (−) · E (+) + H (−) · H (+) , and not that of E (−) · E (+) . One should note that the energies of electric field and magnetic field are indistinguishable in photon absorption process in the detector. Therefore, the probability per unit time that a photon be absorbed by an ideal detector at point x at time t should be proportional to the expectation value of the normally ordered operator Meanwhile, due to the well-known fact that electromagnetic radiation carries both energy and momentum, any interaction between photons and matters for the exchange of energy will inevitably involve the exchange of momentum. Just as Poynting vector in the classical electromagnetic field, where S ∝ E * ×H+E×H * = E * ×H−H * ×E, it is convenient to introduce the notion of energy flow operator, defined by The operator S so defined is of course the energy flow associated with the photon absorption in light of the optical equivalence theorem [8,9]. Therefore, the average value for the i component of the detected momentum should be proportional to which is the expectation value of the operator S with normal ordering. Recording the energy and energy flow of photons with a single detector does not exhaust the available measurements we can make upon the field. In the more general expression, the fields E (−) , E (+) , H (−) , and H (+) are evaluated at the different space-time points. Therefore, these tensors E jk and S jk furnish a general measure of quantum optical correlations involving all the components of the electromagnetic field at any space-time points and give the field intensity (which is proportional to the counting rate of the detector) and energy flow vector (which is proportional to the kinetic momentum of light) as the special case for x 1 = x 2 and t 1 = t 2 . One should be aware of the facts that energy transition is always associated with momentum changes in photon absorption process, and the energies of electric field and magnetic field are indistinguishable in the ideal photodetector. It is the indistinguishability that makes the introduction of E jk and S jk meaningful for the better definition of photon-delayed coincidences on the basis of the general first-order coherence tensors. Coherence Tensor and Vector Potentials Borrowing from the general potential theory [10], we can also introduce a new concept of the coherence tensor potential A lm from which the energy coherence tensor E jk can be obtained as since the divergence of the curl of any tensor is zero, namely, If we substitute from (24) into (16), we have where a m is coherence vector potential. A lm and a m must now be determined in such a way as to satisfy the remaining Maxwell equations. Let us maintain the following gauge transform relation between A lm and a m : Substituting (26) into (20), and making use of the gauge transform in (27), we can easily find that the coherence vector potential satisfies the wave equation: Similarly, substituting (23) and (27) into (15), we have which indicates the wave equations governing the propagation of the coherence tensor potential. 
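The two wave equations referred to in this passage did not survive extraction. Given the stated analogy with the electromagnetic potentials in free space, they can be expected to take the homogeneous d'Alembert form in the first pair of arguments; the following is a hedged reconstruction of that form, not the paper's own equations:

$$
\left(\nabla_1^{2} - \frac{1}{c^{2}}\frac{\partial^{2}}{\partial t_1^{2}}\right) a_m(\mathbf{x}_1, t_1; \mathbf{x}_2, t_2) = 0,
\qquad
\left(\nabla_1^{2} - \frac{1}{c^{2}}\frac{\partial^{2}}{\partial t_1^{2}}\right) A_{lm}(\mathbf{x}_1, t_1; \mathbf{x}_2, t_2) = 0 .
$$

An analogous pair in the variables $(\mathbf{x}_2, t_2)$ would presumably follow in the same way, as the paper notes for the coherence tensors themselves.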
It should be noted that the proposed coherence tensor and vector potentials have some similar mathematical behavior with the usual vector and dual potentials [11,12], but the underlying physics is completely different. The key point of the proposed coherence potentials is to govern propagation of quantum correlation, whereas the usual vector and dual potentials are developed for the description of the electromagnetic wave in nonlinear dielectric media. The significance of the coherence tensor and vector potentials can be explained as follows. Just as the vector and scalar potentials play a more fundamental role in quantum electrodynamics than the electromagnetic fields themselves [13], the newly introduced coherence tensor potential and coherence vector potential are expected to play a more fundamental role in the descriptions of the quantum correlation and entanglement than the energy coherence tensor and the energy-flow coherence tensor. Conservation Law of Quantum-Correlation-Function Energy Noting the formal analogy to Maxwell's equations in free space, we can introduce some concepts to characterize the spatiotemporal evolution of the energy coherence tensor and the energy-flow coherence tensor. Borrowed from scalar theory in classical wavefields, an energy-flow-density-like quantity has been defined and observed experimentally when a generic coherence vortex is reported [14,15]. Under the framework of classical scalar coherence theory, the associated conservation law has been deduced. However, with the optical equivalence theorem for normally order operators [8,9] as a clue, it is possible to extend this concept so as to take into account the vectorial character of light to formulate much rigorous laws for quantum optics. Let us now take the complex conjugate of (15) and multiply by S jm . We then obtain If we add (30) with its complex conjugate, we have If we make use of relation ε jkl = −ε lk j and interchange the dummy suffices j and m, we have In a similar way, we obtain from (16) 1 c After adding (32) to (33), we readily obtain where Apart from a constant factor, (34) has a formal resemblance to that of the classical theory (see [7,Equation (3.5)]), while the physical meaning has received much less attention in the latter. From its formal analogy to energy conservation law in electromagnetism, we note that (34) is a continuity equation in which the scalar quantity W(x 1 , t 1 ; x 2 , t 2 ) may be regarded as representing an energy-density-like quantity of quantum correlation (which we term the quantum-correlation-function energy density) and the vector quantity T(x 1 , t 1 ; x 2 , t 2 ) as representing a flow-density-like quantity of quantum correlation (which we term the quantum-correlation-function flow density). Just as the magnitude and the direction of the Poynting vector represent, respectively, the field intensity and the direction of light energy flow in classic optics, the magnitude and the direction of the quantum-correlationfunction flow density vector T(x 1 , t 1 ; x 2 , t 2 ) give, respectively, a measure of the intensity of quantum correlation and the direction of the quantum correlation flow. If we integrate (34) throughout a volume V bounded by a closed surface S and apply Gauss' theorem, we have where n denotes the unit outward normal to S. Equation (37) may be given the following interpretation. 
The rate of increase (or decrease) of the quantum-correlation-function energy contained in V is equal to the rate at which the quantum-correlation-function energy enters (or leaves) V through the boundary S via the quantum-correlationfunction flow. With this interpretation (34) expresses the conservation law of quantum-correlation-function energy, which may be regarded as natural generalizations of those obtained on the basis of classical scalar coherence theory [15]. Conservation Law of Quantum-Correlation-Function Momemtum Now, let us consider a new law on the conservation of a momentum-like quantity in quantum correlation theory. If we take the complex conjugate of (15) and multiply by ε in j E nm and make use of tensor identity ε jkl ε in j = ε kl j ε in j = δ ki δ ln − δ kn δ li (where δ ki is equal to zero if k / = i, and equal to unity if k = i ), we then obtain Advances in Optical Technologies 5 where the use has been made of the divergence-free condition in (19). Similarly, if we apply the similar operation to (16) and use the tensor identity, we have where the use has also been made of the divergence-free condition in (20). Let us add (38) and its complex conjugate to (39) and its complex conjugate; we finally arrive at where If we integrate (40) throughout the volume V bounded by a closed surface S and apply Gauss' theorem in the tensor form, we have where n k is the k component of unit outward normal vector n. From its formal analogy to momentum conservation law in electromagnetism, we interpret the vector T(x 1 , t 1 ; x 2 , t 2 ) as a momentum-density-like quantity of quantum correlation (which we term the quantum-correlation-function momentum density). Furthermore, the symmetric tensor W ki (x 1 , t 1 ; x 2 , t 2 ) may be regarded as a Maxwell-stress-tensorlike quantity of quantum correlation (which we term the quantum-correlation-function stress tensor), representing the flux of the kth component of the quantum-correlationfunction momentum in the ith direction. Hence, the above equation states that the rate of gain (or loss) of quantumcorrelation-function momentum in the closed volume is equal to the flux of quantum-correlation-function momentum flowing into (or out of) the volume across the bounding surface. With this interpretation, (40) expresses the conservation law of quantum-correlation-function momentum, which bears a striking resemblance to the classical one [16,17]. Relation between Energy and Momentum in Quantum Correlation Function Borrowing from the classical optics [18], we can introduce some geometrical concepts to represent the quantumcorrelation-function flow. Let us consider the propagation of the quantum correlation function with a speed c through an area S normal to the direction of propagation. During a very small interval of time Δt, only the quantumcorrelation-function energy contained in the cylindrical volume, (cΔtS)W, will flow across S. We now make the reasonable assumption (for vacuum) that the quantumcorrelation-function flow is in the direction of propagation of the quantum correlation. 
The magnitude of the corresponding quantum-correlation-function flow is then Since Tc 2 = T (in (41)), we can express the quantumcorrelation-function energy in terms of the magnitude of the quantum-correlation-function momentum, namely, Notice, from (45), that if some amount of quantumcorrelation-function energy W is transported per square meter per second, then there will be a corresponding quantum-correlation-function momentum W/c transported per square meter per second. Thus, the mathematical relation between the energy and momentum in quantum correlation function is essentially identical to the energy-momentum relation in classical electrodynamics. Conservation of Quantum-Correlation-Function Angular Momentum Finally, we remark that the quantum-correlation-function angular momentum in quantum theory of optical coherence can also be treated in a similar way. The angular momentum density associated with the correlation tensors of the quantized field is given by where T i is the quantum-correlation-function momentum density discussed above. The flux of angular momentum density in quantum correlation function is given by where W ki is the quantum-correlation-function stress tensor given above. Conservation of angular momentum in quantum correlation function is also expressed by which again bears the close analogy to the classical theories of coherence [17]. Specific Examples and Physical Interpretations So far, on the basis of the theory of the quantized electromagnetic field, we have introduced some vector and tensor 6 Advances in Optical Technologies densities to the quantum theory of optical coherence and have developed a very close analogy between the quantum mechanical and classical theories of coherence. To offer much more penetrating insights into the role played by quantumcorrelation-function conservations in the description of the coherence properties of the quantized field, we will carry out mathematical development through the use of some particular sets of quantum states. Without loss of generality, the expansion for the vector potential of the quantized multimode electromagnetic field with its linear polarization along the x -direction takes the form where ω m is the angular frequency of the mth mode, L is the side length of a cubical volume, and h.c. stands for the Hermitian conjugate of the preceding term. From the expansions for the vector potential A of the electromagnetic filed, the electric and magnetic field operators E and H can therefore be written as By inserting the operators of the quantized fields E and H into the definition for energy coherence tensor E jk and the energy-flow coherence tensor S jk , we can evaluate the proposed quantum-correlation-function energy W and quantum-correlation-function momentum T at some particular set of quantum states of the field. Quantum Correlation Function in Fock States. As a first example, consider a multimode of the electromagnetic field in an eigenstate of Fock states (photon number states). After calculating the E-only correlation tensor, H-only correlation tensor, and EH mixed-correlation tensors from (1), we have where Tr{ ρ a † m a m } = n m is the average photon number of the mth mode, and we have put for short φ m = ω m n m exp[iω m (z 2 /c−t 2 −z 1 /c+t 1 )]. Substituting from (51) into (17)-(18), we obtain It is immediately seen from (55) that the density of quantumcorrelation-function energy will depend on the two spatiotemporal variables only through their difference and has both particle and wave manifestations. 
For the single-point correlation function, where z 1 = z 2 ,t 1 = t 2 , we have Thus the quantum-correlation-function energy density is discrete with particle behavior. Meanwhile, the quantumcorrelation-function energy density has uniform distribution for all the spatiotemporal position. Meanwhile, the quantum-correlation-function energy density exhibits wavelike features for the two-point correlation in photon delayed coincidence. With this interpretation, (55) expresses the simultaneous wave-particle duality of quantum-correlationfunction energy density. The dynamical behavior of the quantum correlation is governed by the total quantumcorrelation-function energy that takes the form Advances in Optical Technologies 7 Similarly, with the use of (52)-(53) for the expressions of the quantum energy and energy flow coherence tensor, the quantum-correlation-function momentum vector density has only z -direction component: As expected, the quantum-correlation-function momentum also exhibits the wave-particle duality. Then, the total quantum-correlation function momentum within the cubical volume is From the formal analogy to the concept of single photon in quantum optics, we may envision the quantum correlation of a single mode carrying a discrete quantumcorrelation-function energy W = n 2 D 2 ω 2 /L 3 with a quantum-correlation-function momentum T z = W /c = n 2 D 2 ω 2 /(cL 3 ). In light of the wave-particle duality of quantum physics, it is natural for the photon to carry the properties of both waves and of particles. Owing the Wolf [19], it has been known since 1955 that the optical coherence exhibits wave properties since the coherence function obeys a couple of wave equations. Here, it is interesting to note that, besides wave properties, the quantum correlation function also has particle nature since the correlations between the discrete photons (photon correlations) play the essential role in the quantum theory of optical coherence. Quantum Correlation in Coherent States. In quantum optics, it is also common practice to describe the state of the photon field in terms of coherent states [4,5]. As stressed by Glauber, one of the important properties for the coherent states is that the correlation functions can be factorized if the field is in coherent state. Therefore, the first-order correlation functions reduce to the factorized forms: , in which the functions E (x, t), and H (x, t) and their corresponding complex conjugates play the role of the eigenvalue and satisfy the Maxwell equations. According to (50), the eigenvalue functions E j (x, t) and H j (x, t) are thus given by where α m is eigenvalue of the annihilation operator a m , that is, a m |α m = α m |α m . As we readily see, the energy and energy flow coherence tensors of the quantized field satisfy Using the results of (62), we find that Similarly, the quantum-correlation-function momentum density takes the form where Re{· · · } stands for the real part of the complex eigenvalue. After substituting (61) into (63)-(64), we see that the coherent state of a single mode has quantum-correlationfunction energy W = (Dω) 2 |α| 4 /L 3 with quantumcorrelation-function momentum T z = c(Dk) 2 |α| 4 /L 3 . Up to now we have considered the conservation laws in the quantum correlation function based on the correlation tensors of first-order only. It should also be noted that the generalization to higher-order correlation tensors can also be derived in a similar way. 
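Glauber's factorization property quoted above can be checked numerically with a short toy calculation (not from the paper): for a coherent state the first-order correlation factorizes, $\langle a^{\dagger}a\rangle = \langle a^{\dagger}\rangle\langle a\rangle$, whereas for a Fock state it does not, since $\langle a\rangle = 0$ while $\langle a^{\dagger}a\rangle = n$. Truncation dimension and state parameters are arbitrary.

```python
# Toy check of first-order factorization for coherent vs. Fock states (single mode).
import numpy as np
from math import factorial

N = 40
a = np.diag(np.sqrt(np.arange(1, N)), k=1)        # annihilation operator in truncated basis

def expect(op, psi):
    psi = psi / np.linalg.norm(psi)
    return psi.conj() @ (op @ psi)

alpha = 2.0
coherent = np.array([np.exp(-abs(alpha)**2 / 2) * alpha**n / np.sqrt(factorial(n))
                     for n in range(N)])
fock4 = np.zeros(N); fock4[4] = 1.0

for label, psi in [("coherent, alpha=2", coherent), ("Fock |4>", fock4)]:
    n_avg = expect(a.conj().T @ a, psi).real                 # <a^dagger a>
    fact = (expect(a.conj().T, psi) * expect(a, psi)).real   # <a^dagger><a>
    print(f"{label:18s}  <a^dag a> = {n_avg:.3f}   <a^dag><a> = {fact:.3f}")
# coherent: both ~4.000 (factorizes);  Fock |4>: 4.000 vs 0.000 (does not factorize).
```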
Meanwhile, the proposed conservation laws for the quantum correlation function may also provide insight into other correlation phenomena in quantum mechanics. For example, the Einstein-Podolsky-Rosen (EPR) paradox draws on a phenomenon predicted by quantum mechanics, known as quantum entanglement, to show that measurements performed on spatially separated parts of a quantum system can apparently have an instantaneous influence on one another [20]. Since the photons are emitted from a common source, correlations between these photons have already been established once an entangled state is created [21,22]. In an entangled two-photon state, a measurement of one chosen variable of photon one will determine the outcome of a measurement of the corresponding variable of the other photon. No matter how far apart these two photons are, the proposed conservation laws always hold for this quantum system. It is these conservation laws for the quantum correlation function that add an additional condition on the propagation of the separated photons in their entangled state when no influence resulting from one measurement can possibly propagate to the other photon in the available time. This may point to the origin of the nonlocal behavior, or "spooky action at a distance", of quantum mechanics.

Conclusion
In summary, we have derived the expressions for the energy coherence tensor and the energy-flow coherence tensor on the basis of quantum field theory, which provides new insights into photon statistics and quantum correlation. In terms of an electromagnetic model, we have introduced new quantities into the quantum theory of optical coherence based on the newly defined quantum optical coherence tensors and have related them by continuity equations, giving new conservation laws for the quantum correlation function. Furthermore, we have theoretically investigated the propagation of the quantum correlation function, which establishes a new relationship between photon correlation and quantum electrodynamics and reveals the wave-particle duality of quantum correlation. Further exploration in this direction may lead to a new field in quantum optics that may be referred to as Quantum Correlation Dynamics.
5,745.6
2010-06-22T00:00:00.000
[ "Physics" ]