Development of liquid biofuel market: Impact assessment of the new support system in Ukraine
In the light of the growing importance of biofuels in the world and Ukraine's potential for their production, the current research focuses on analysing the future development of the liquid biofuel market and production possibilities in Ukraine until 2030 using the AGMEMOD model. The AGMEMOD model is an econometric, dynamic, partial equilibrium, multi-commodity model which has the capacity to evaluate changes in Ukrainian agricultural policy and the impacts of political decisions on the agricultural sector in Ukraine. The current paper examines the introduction of state aid in the form of direct support and tax preferences for liquid biofuel producers in order to meet the needs of the domestic biofuel market and to achieve the indicative target of a 10% share of biofuels in total motor fuel consumption by the transport sector by 2020. For the quantitative assessment of these effects, the AGMEMOD model was used. The results of this study indicate that implementation of direct state support and the introduction of systems for returning and cancelling excise duty for biofuel producers will stimulate the achievement of the indicative target of 10% biofuel consumption by the transport sector.
INTRODUCTION The Food and Agriculture Organization of the United Nations (FAO) calls biofuels "the largest source of new demand for agricultural production in the past decade" and claims that they represent a new "market fundamental affecting prices for all cereals" (ALIMENTERRE, 2017). Ukraine is among the largest grain and oilseed producers in the world and has abundant natural resources for a further increase of its crop production. Grains and oilseeds occupy more than half (55%) of the usable agricultural area (UAA). More than half of the country's territory is covered by highly fertile black soil known locally as "chornozem". Ukraine has around 42.7 million hectares (ha) of agricultural land. Wheat, sunflower seed, rapeseed, and maize are the main crops, and their production is largely export-oriented (SSSU, 2017a). Throughout 2011-2017, for example, Ukraine was consistently among the world's ten largest exporters of wheat, sunflower seed, and maize. Therefore, Ukraine has considerable potential for biomass production, including liquid biofuels, which today are the only direct substitute for oil in transport available on a significant scale. Ukraine has one of Europe's highest biomass potentials for biofuels, and it should be used effectively: biodiesel, 2 million tons; bioethanol, from 2 million tons to 5 million tons; biogas, about 35 billion m³; solid biofuels, 40 million toe (UABio, 2012). The liquid biofuel market studied here consists of several basic product segments: bioethanol (made from sugar and starch crops), biodiesel (made from vegetable oils) and alternative motor fuel (hereinafter AMF) for cars running on bioethanol. As a member of the Energy Community, Ukraine has implemented EU Directive 2009/28/EC on the promotion of renewable energy (RE) and committed that the share of green energy in the overall consumption structure would be 11% in 2020 (European Parliament, 2009). Taking into account the commitments undertaken by Ukraine with its accession to the Energy Community, the Resolution of the Cabinet of Ministers of Ukraine of October 01, 2014 No. 
902 titled "National Renewable Energy Action Plan for the period up to 2020" (hereinafter -NREAP) has established mandatory national indicative targets for the use of renewable energy sources with the final energy consumption in the transport sector in 2020 being 10% (IEA, 2015). According to the Energy Strategy of Ukraine until 2035, the share of green energy in the overall consumption will be 25% in 2035 (CMU, 2017). However, the actual results of RE's development are threatening the planned targets: in 2017, the share of renewable energy sources (RES) in the energy balance was only 1.47% (NCSREPU, 2018), which is almost 7.5 times less than the 2020 target. According to many experts, the slow pace of growth of "green" energy in the country is conditioned by the imperfection of existing economic mechanisms managing and supporting the development of this sector. The ineffective interaction between stakeholders of various sectors of the economy: public, financial (Masharsky et al, 2018) has a significant negative impact on the development of the private sector of RE. Provided the appropriate framework conditions it is worth to consider Ukraine's ability to achieve the above objectives for the consumption of liquid biofuels by the transport sector, analyse the state of the biofuel production in Ukraine, identify the obstacles that exist in achieving the mandatory national indicative targets, assess the feasibility of implementation taken on Ukraine's international obligations regarding motor biofuels, and propose measures to enable the obligations. As the use of biofuels is driven by mandates -which in most countries are introduced in terms of percentage of the total fuel use -this means changes in fossil fuel use could change how biofuels are used. In general, mandates on the domestic use of bioethanol or biodiesel play an important role in modelling the demand for biofuels. Domestic mandates for biofuels can be either binding or non-binding depending on country-specific use. The mandate is non-binding if the mandated level of biofuels use is below the market equilibrium and binding when the domestic mandate pushes biofuels use and production beyond the conventional market equilibrium. In the case of a binding mandate, the biofuels price is above the equilibrium price, while the non-binding mandate has no effect on the market equilibrium. Where the mandate is binding, this is considered policy support to biofuels producers (OECD, 2018). For the quantitative assessment of the above objectives, the model AGMEMOD -an econometric, dynamic model of partial equilibrium modelling the effects of changes in agrarian policy on production, consumption, imports, exports, and prices of agricultural products -was used (AGMEMOD, 2018). The AGMEMOD includes about 50 agricultural products (including product groups) in about 35 countries (including country groups), which allows assessing the impact of policy decisions on the agrarian sector and modelling the future development of relevant indicators. For the first time, a motor biofuel market unit was created in the model and projections for the development of the liquid biofuel market in Ukraine by 2030 using the AGMEMOD model were done. 
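The binding versus non-binding mandate logic described above can be illustrated with a small numerical sketch. The supply and demand functions and all numbers below are hypothetical placeholders chosen for illustration only; they are not taken from AGMEMOD or from this paper. The point is simply that a mandate below the market-clearing quantity leaves the equilibrium untouched, while a mandate above it forces the price up the supply curve.

```python
# Illustrative sketch of a binding vs. non-binding biofuel mandate.
# Functional forms and numbers are hypothetical, not from AGMEMOD or the paper.

def demand(price):
    # market (non-mandated) demand for biofuel, thousand tons
    return max(0.0, 500.0 - 2.0 * price)

def supply(price):
    # domestic biofuel supply, thousand tons
    return max(0.0, -100.0 + 3.0 * price)

def equilibrium(lo=0.0, hi=1000.0, tol=1e-6):
    # bisection on excess demand to find the market-clearing price
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if demand(mid) - supply(mid) > 0:
            lo = mid
        else:
            hi = mid
    p = 0.5 * (lo + hi)
    return p, supply(p)

def price_under_mandate(mandate_qty):
    # a binding mandate forces use beyond the market equilibrium; the price
    # must then move up the supply curve to call forth the mandated quantity
    p_eq, q_eq = equilibrium()
    if mandate_qty <= q_eq:
        return p_eq, "non-binding"
    p_mandate = (mandate_qty + 100.0) / 3.0   # inverse of the hypothetical supply function
    return p_mandate, "binding"

if __name__ == "__main__":
    p_eq, q_eq = equilibrium()
    print(f"equilibrium: price={p_eq:.1f}, quantity={q_eq:.1f}")
    for q in (100.0, 350.0):
        p, status = price_under_mandate(q)
        print(f"mandate of {q:.0f}: price={p:.1f} ({status})")
```

In a full multi-market model the same comparison would be carried out market by market and year by year, with the mandated quantity entering the demand side of the respective balance sheet.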
The main directions of market development are identified under the introduction of support for biofuel producers in the form of direct subsidies and tax preferences, the implementation of which will ensure rapid development of the market, primarily through the increase of biofuel production and its wide use in Ukraine and the achievement of the indicative targets in accordance with the NREAP. There is a body of research on the use of biomass and biofuels, but most of it focuses on climate change and GHG emission reductions. For example, Gielen et al. (2002, 2003) study the optimal use of biomass for GHG emission reductions using the BEAP model. Gül et al. (2009) utilize a global MARKAL model, denoted the Global Multi-regional MARKAL model (GMM), to analyse long-term prospects of alternative fuels in personal transport, focusing on biofuels and hydrogen. In that study, the bottom-up energy system model is linked to the climate change model MAGICC (in a similar manner as Turton, 2006). Börjesson et al. (2014) investigate the cost-efficient use of biofuels in road transport under system-wide CO2 reduction targets to 2050, and the effects of implementing targets for an almost fossil-free road transport sector by 2030, using the bottom-up optimization model MARKAL_Sweden, which covers the entire Swedish energy system including the transport sector. The paper is structured as follows. Section 2 deals with the modelling approach for the simulation of policy impacts in the biofuel sector, Section 3 describes the data, modelling scenarios and methodology, Section 4 presents the empirical modelling results and discussion, and Section 5 concludes the study.
Simulation models "No model can serve all purposes". With this statement, VAN TONGEREN et al. (2001) give an overview of the most significant models used for agricultural economic analysis and classify them according to specific criteria: scope of representation, regional scope, regional unit of analysis, dynamics, trade representation, treatment of quantitative policies, availability of data and parameter estimation. There is a rich variety of modelling techniques and models that are used for simulating the effects of policy changes in the agricultural sector, in particular in the bioenergy sector. The modelling techniques are usually mathematical programming, simulation, and econometrics. The models may be static or dynamic; focus on single or several commodities; deal with the entire economy, i.e., general equilibrium models, or with one or several sectors, i.e., partial equilibrium models; and the modelling results may be aggregated to regional, national or multinational levels. The models may consider bilateral trade flows, or the world market may be represented as a point market (APD, 2017). Depending on the purpose of the analysis, synergies of mathematical programming, simulation and econometric techniques, as well as other estimation methods, such as genetic algorithms or investment appraisal methods, may be used for modelling the bioenergy sector. One example of such models, used for policy advice at the EU level and worldwide, is the Aglink-Cosimo model. The biofuels component of the Aglink-Cosimo model is a structural economic model that analyses the world supply and demand for bioethanol and biodiesel. 
The biofuels module is a recursive dynamic, partial equilibrium model used to simulate the annual market balances and prices for the production, consumption and traded quantities of bioethanol and biodiesel worldwide. This module is completely integrated into the cereals, oilseeds and sugar component of the Aglink-Cosimo model. The production of biofuels drives the additional demand for agricultural commodities, in particular coarse grains, vegetable oil, and sugar. The biofuels module is linked to the other components of the Aglink-Cosimo model mainly via the food-based feedstock demand, which includes, amongst other commodities, maize, sugar, wheat, and rice (OECD, 2018). The Joint Research Centre carried out the study "Impacts of the EU Biofuel Target on Agricultural Markets and Land Use: A Comparative Modelling Assessment", which analysed the impacts and consequences of biofuel policies using three models (namely ESIM, AGLINK-COSIMO, and CAPRI), as well as several studies based on the IMPACT and GTAP models (Mueller and Pérez Domínguez, 2008). The MIRAGE (Modelling International Relationships in Applied General Equilibrium) global computable general equilibrium (CGE) model allows a quantitative analysis of the global economic and environmental impact of biofuel development. Primary among the major methodological innovations introduced in the model is the new modelling of energy demand, which allows for substitutability between different sources of energy, including biofuels. This is facilitated by the extension of the underlying Global Trade Analysis Project (GTAP) database, which separately identifies bioethanol with four subsectors, biodiesel, five additional feedstock crop sectors, four vegetable oil sectors, fertilizers, and the transport fuel sectors. The model was also modified to account for the co-products generated in the bioethanol and biodiesel production processes and their role as inputs to the livestock sector. This model assesses the greenhouse gas emissions (focusing on CO2) associated with direct and indirect land-use changes as generated by the model for the year 2020 (ATLASS Consortium, 2010). The version of GTAP used in the studies by Hertel et al. (2008) and Taheripour et al. (2008a), known as GTAP-E, has been specially extended to deal with biofuel and climate change policies. The MARKAL Sweden energy system model is an application of the well-established MARKAL model and can be described as a dynamic, bottom-up, partial equilibrium energy system model (Gül et al., 2009). Through optimization, the model provides the overall welfare-maximizing system solution that meets the defined model constraints over the studied time horizon. Welfare maximization implies that the cost of energy service supply and the costs due to losses in consumer surplus are minimized. MARKAL Sweden applies a long-term time horizon reaching from 1995 to 2050. The model takes a comprehensive view of the Swedish energy system and describes all relevant sectors, including electricity, district heating, industry, transport, premises and services (Börjesson Hagberg et al., 2016). The partial equilibrium (PE) ESIM model, in the version used in the study by Banse and Grethe (2008), contains explicit supply and demand functions for biodiesel and ethanol. It distinguishes three feedstocks for each biofuel and differentiates them further according to whether or not they have been grown on set-aside land. 
The model considers four by-products: gluten feed and meals from three different oilseed crops. ESIM models each EU Member State individually, incorporates a wide range of EU domestic agricultural and trade policies, and endogenously determines a very rich set of agricultural prices. However, fossil energy prices are taken as exogenous and, being a comparative static model, it does not allow for any lagged adjustment (adjustments to price changes or other shocks take place within the current year). Net trade flows are endogenous. There are many other models for simulating biofuel and bioenergy development. However, they all take into account and assess the impact of setting indicative targets under Directive 2009/28/EC and the environmental performance of the EU biofuel policy as concretized in the RED. Nevertheless, our aim is to show what these models are, what methodology they apply, and how their approaches can be used in our research, because we are the first to implement the Ukrainian liquid biofuel market within the AGMEMOD framework (AGMEMOD UA). Moreover, in the current research the AGMEMOD model is used because of its advantages in comparison to the models and approaches reviewed above. The AGMEMOD model stands for Agricultural Member State Modelling and was established in 2001. We chose this model because it is a partial equilibrium, dynamic, multi-country and multi-market model that is used for analysing the effects of agricultural policies on the respective sectors of the European Union (EU) and of Ukraine (AGMEMOD, 2018). The impacts of the 2003 reform of the Common Agricultural Policy (CAP) on the agri-food sector in Finland were assessed empirically by Jansik et al. (2014). The AGMEMOD model is also widely used by the European Commission, since policy, administration and industry need medium-term projections of the expected developments in the agri-food markets for their decision-making processes; the Commission presents such projections for the EU (Salamon et al., 2019). Since 2012 the Ukraine country block (AGMEMOD UA) has been included in the AGMEMOD model (Banse et al., 2012). In this part of the model, we have added a new liquid biofuel sector: bioethanol (including alternative motor fuel based on bioethanol) and biodiesel. Most of the equations in the model have been estimated using annual data over the period 1973-2017, or over shorter periods where data were not available (such as for the new member states). The variables entering each sub-model represent consecutive positions in the balance sheet of each market. On the supply side, beginning stocks, production, and imports are considered; on the demand side, domestic use, exports and ending stocks are modelled. For each product in each country, the respective domestic (market-clearing) prices are also modelled. The equilibrium in each market is reached in the model also at the level of the whole EU. The necessary condition for the model to be solved is that the equality between supply and demand must hold in each market in each country (Hamulczuk & Hertel, 2009). The general structure of the model is presented below (see Figure 1). The main advantages of the AGMEMOD model include: the model simulates a wide range of agricultural product markets and related parameters such as market prices, production, consumption, import, export, yield and land use. 
The model can simulate the effects of the policy reforms that are of key interest for the current project. The model considers changes in the general economic environment; in particular, GDP and population growth rates, as well as the currency exchange rate, are taken into account as exogenous parameters. The model is dynamic and allows modelling changes on a year-to-year basis. Most of the core functions of the model are estimated econometrically; the results of such an estimation provide a more realistic outcome in terms of parameter estimation and choice of the functional form in comparison to the results of calibration. The model is disaggregated to regional and producer group levels, which allows consideration of regional and producer differences in the effects of policy changes and the development of the sector.
Data description To create projections of liquid biofuel development, AGMEMOD uses a combination of exogenous and endogenous data (parameters). A change in exogenous variables may determine the assumptions of the scenarios simulated by the model. Future values of variables (to 2030) that are exogenous to the model (that is, not estimated by the model), such as GDP, the GDP deflator, the exchange rate of the national currency, the population of Ukraine, world prices for diesel and gasoline, gasoline and diesel consumption in Ukraine, as well as the excise duty on alternative motor fuel, gasoline and biodiesel, are forecast estimates of various institutions. The separate exogenous variables of AGMEMOD Ukraine are shown in Table 1. The data for our research were collected from publications of the State Statistics Service of Ukraine (SSSU), from personal communication with the largest biofuel producer in Ukraine, "Ukrspyrt", and with the Ukrainian association of alternative fuel producers "Ukrbiopalivo". Historical and projected values of world market prices, national Gross Domestic Product (GDP), the GDP deflator, the currency exchange rate, and the population growth rate are acquired from the United States Department of Agriculture (USDA), the Food and Agriculture Organization of the United Nations (FAO) and the EU Agricultural Outlook. Where necessary data are not available, they are estimated as projections from previous periods. Using the data collected, equations representing the indicators of the biofuel market in Ukraine are estimated as time-series regressions. They are then introduced into the AGMEMOD Ukraine country model. Table 1. The exogenous variables of AGMEMOD Ukraine. Source: exogenous variables of AGMEMOD Ukraine, 2019; authors' results.
Modeling scenarios To assess the development of the motor biofuel market (bioethanol, alternative motor fuel, biodiesel), the following scenarios were developed for the achievement of the above-mentioned goals: "Policy_10%", "Direct support_10%", "Direct support", "Returning excise duty", "Cancelling excise duty" and "Baseline Scenario". 
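Before turning to the individual scenarios, the estimation step described in the data section above (behavioural equations fitted as time-series regressions and then inserted into the country model) can be illustrated with a minimal sketch. The variable names, functional form and data below are synthetic placeholders, not the actual AGMEMOD Ukraine specification or the SSSU data.

```python
# Minimal sketch of estimating one AGMEMOD-style behavioural equation as a
# time-series regression (here: bioethanol production responding to its own
# lag and the producer price). All names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
years = pd.Index(range(2010, 2018), name="year")            # short sample, as in the paper
price = pd.Series(80 + 5 * rng.standard_normal(len(years)), index=years)
prod = pd.Series(0.0, index=years)
prod.iloc[0] = 40.0
for t in range(1, len(years)):                               # synthetic data-generating process
    prod.iloc[t] = 10 + 0.6 * prod.iloc[t - 1] + 0.3 * price.iloc[t] + rng.standard_normal()

df = pd.DataFrame({"production": prod, "price": price})
df["production_lag"] = df["production"].shift(1)
df = df.dropna()

X = sm.add_constant(df[["production_lag", "price"]])
model = sm.OLS(df["production"], X).fit()
print(model.params)      # estimated coefficients of this kind would feed the country model
print(model.rsquared)
```

With so few annual observations, the estimated parameters are fragile, which is exactly the limitation acknowledged in the conclusions of the paper.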
The "Policy_10%" scenario is designed to assess the required level of bioethanol consumption from different crops (wheat, corn, rye, sugar beet) and biodiesel (rapeseed and sunflower oil) based on the performance of the -NREAP in order to determine the amount of bioethanol, AMF and biodiesel consumption to achieve 10% of biofuels in the total consumption of motor fuels by 2030 ( Table 2 Assessment of the contribution of energy from renewable sources to the transport sector to achieve the mandatory indicative targets for 2020, ktoe Source: Summarized by the author on the basis of NREAP Due to NREAP, these indicators should be achieved by 2020. Taking into account the low level of liquid biofuels production in Ukraine in 2017, achieving these indicative targets is impossible until 2020 without state support. Therefore, the values of bioethanol and biodiesel consumption were calculated from 2018 to 2030. It should be noted that according to the scenario "Policy_10%" Ukraine needs to reach 503 thousand tons of bioethanol and AMF consumption and 79 thousand tons of biodiesel by 2030 (Table 3). Table 3 Volume of consumption of liquid biofuels to achieve 10% biofuel use in the overall structure of motor fuels by 2030 Source: Authors' results. To calculate the above indicators, the following assumptions were taken into account: 90% of the total consumption in the bioethanol sector could be achieved by producing bioethanol and AMF from sugar beet, 6% from corn, 3% from wheat, 1% from rye. As for indicators in the biodiesel sector, consumption of 79 thousand tons of biodiesel in total fuel consumption can be achieved by using 60% biodiesel of rapeseed oil and 40% from sunflower oil. Today in Ukraine there is no direct support and incentives for the production of liquid biofuels and development of the appropriate market. However, given the situation on the liquid biofuel market, the achievement of 10% of RES in the transport sector by 2020 is impossible without government support. That why we should assess the effect, implementing direct support and tax preferences for bioethanol and biodiesel producers on further biofuel development. The "Direct Support_10%" scenario is designed to assess the implications of introducing a new producer support system in the form of direct subsidies to stimulate and expand biofuel production and achieve 10% biofuel consumption (bioethanol, alternative motor fuel, biodiesel) in total consumption of motor fuels. Support is provided for producers who process the above products into bioethanol, alternative motor fuel, and biodiesel and depending on the volume of these produced biofuels. The size of direct support in the AGMEMOD model implemented in the form of price additions in the calculation of UAH per 100 kg of preferential products. Pricing additions are included in scenario Vol.13, No.1, 2020 equations, which results in calculating the impact on biofuel production. To model the "Direct Support_10%" scenario, it was necessary to determine the total amount that would be allocated from the budget to the support program in the form of direct budget subsidies to bioethanol producers, components based on it and biodiesel. This was done by expert estimation and using the AGMEMOD model. To achieve 10% biofuel consumption by the transport sector in the total consumption of motor fuel, the support for bioethanol producers, alternative motor fuel, biodiesel and mixtures based on it was 5 500 UAH / 100 kg of the finished product. 
To simulate the state support scenario, this amount of subsidy per 100 kg is added to the producer price of the product being subsidized; it is applied in the equations of the processing model and affects the production and consumption of the product concerned. The "Direct Support" scenario is designed to assess the implications of introducing a new system of direct support for producers, calculated and estimated depending on the expected level of gasoline and diesel consumption by 2030, in order to meet the needs of consumers for bioethanol, alternative motor fuel and biodiesel on the domestic motor fuel market. Demand for bioethanol, alternative motor fuel, and biodiesel is closely related to the consumption of gasoline and diesel, respectively, since bioethanol is used as an octane-enhancing additive in the production of traditional gasoline and alternative motor fuels, and biodiesel as a blending component in the production of conventional diesel and products based on it. Based on data on gasoline consumption in Ukraine for the period from 2012 to 2017, the demand for bioethanol in Ukraine was calculated as a function of gasoline consumption and the percentage of bioethanol blended into it (5%, 7% and 10%, respectively) over the period 2012-2017. In 2017, the transport sector consumed 1 986 thousand tons of gasoline and 5 149 thousand tons of diesel fuel. Assuming that approximately the same proportions of fuel consumption by energy content (30% gasoline, 70% diesel) are maintained until 2020, replacement will have to follow roughly the same split of 30% gasoline and 70% diesel. Therefore, to maintain the current structure of fuel consumption by the transport sector at the 2017 level (in particular, to preserve the approximate shares of vehicles running on gasoline and diesel) and to provide 10% energy consumption from RES by the transport sector, 109 323 thousand tons of fuel-equivalent bioethanol and 325 thousand tons of fuel-equivalent biodiesel will be needed. It is worth noting that the State Enterprise "Ukrspirt" can produce 160 thousand tons per year with its existing production capacities. Consequently, it is theoretically possible to provide the required amount of bioethanol (152 thousand tons) by 2020. To do this, it is enough to run the plants of the State Enterprise "Ukrspirt" at full capacity. The "Returning excise duty" scenario is designed to assess the implications of introducing a new support system for producers of alternative motor fuels and biodiesel in the form of returning the excise duty to producers on the sales of alternative motor fuels and biodiesel for a year. The "Returning excise duty" scenario was implemented in the form of price additions: the excise duty on alternative motor fuel and biodiesel, calculated in UAH per 100 kg of the supported products, is added to the producer price of the product and applied in the equations of the processing model, affecting the production and consumption of the products concerned. The "Cancelling excise duty" scenario is designed to assess the consequences of abolishing the excise duty on the sale of alternative motor fuels and biodiesel produced by these biofuel producers. The tool for supporting the development of this sector is the tax exemption of bioethanol and biodiesel as excisable products. 
According to the Tax Code of Ukraine, as of 01.11.2018, excise duty on the alternative motor fuel was 130 EUR /1000 kg, biodiesel -91.2 EUR / 1000 kg (Verkhovna Rada of Ukraine 2014). In 2030, the amount of excise duty per 100 kg of alternative motor fuel production will amount to 353.63 UAH / 100 kg, and biodiesel -248.08 UAH / 100 kg. If we cancel the above types of excise, then the producers price of alternative fuel motor will increase by 5% in 2030 (from 2338.8 UAH / 100 kg to 2 456.1 UAH / 100 kg), the producers price of biodiesel and products based on it will increase by 13.27% (from 2 637.3 UAH / 100 kg to 2 987.2 UAH / 100 kg). It follows that producing bioethanol for the producer will be more profitable than producing alternative motor fuel since bioethanol is not an excisable product, but alternative motor fuel is excisable goods and the price of alternative motor fuel includes the corresponding excise duty. As for the biodiesel production, as expected, the abolition of excise duty for the producer will increase its production and will increase profitability. The "Baseline scenario" -based on the assumption that during the projected period 2018-2030, the policy framework conditions in general in Ukraine remain at 2017 level and the biofuel sector does not receive any state support from 2018 on. This also means that the model considers such factors as conditions of Deep and comprehensive free trade area agreement (DCFTA) as well as other trade regulations, military conflicts in the Donbas region and annexed Crimea, which is excluded from modelling as they were in 2017. EMPIRICAL RESULTS AND DISCUSSION Below we will present the results of the modelling of the liquid biofuel market in Ukraine using the AGMEMOD model under the scenarios "Direct support_10%", "Direct support", "Returning excise duty", "Cancelling excise duty" compared to the Baseline scenario. These results reflect changes in the production, consumption, import and export of bioethanol, alternative motor fuels, biodiesel and products based on it in Ukraine up to 2030. For the analysis of the results of simulation of support scenarios and tax preferences, consideration should first be given to changing the price of bioethanol, alternative motor fuel, and biodiesel when introducing each type of support and tax preferences to the biofuels producers concerned. As noted above, the size of support in the AGMEMOD model implemented in the form of price additions in the calculation of UAH per 100 kg of preferential products. Pricing additions are included in the scenario equations, which results in calculating the impact on biofuel production. The modelling results of price additions and changes presented in producer prices in 2030 compared to the Baseline scenario (Table 4). Table 4 The modelling results of price additions and changes presented in producer prices in 2030 Source: Authors' results. Table 4 illustrates that in the "Direct Support_10%" scenario, the largest price additions in 2030 estimated for all products presented in modelling (bioethanol, alternative motor fuel and biodiesel). The growth of prices in comparison with the Basic Scenario was also the highest for bioethanol (+ 331.53%), AMF (+ 348.17%) and biodiesel (+ 308.76%). According to the" Direct Support" scenario, we find the largest price additions for bioethanol and AMF, as well as price increases compared to the Baseline scenario, respectively, the largest for bioethanol (+ 60.28%) and AMF (+ 63.3%). 
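The percentage changes reported in Table 4 follow from simple arithmetic: the price addition per 100 kg is added to the baseline producer price, and the change is expressed relative to that baseline. A minimal sketch of this arithmetic is given below, using the excise-cancellation figures quoted in the text (AMF: 2 338.8 to 2 456.1 UAH / 100 kg; biodiesel: 2 637.3 to 2 987.2 UAH / 100 kg); the helper function itself is purely illustrative.

```python
# Arithmetic behind the price additions: the support (or retained excise duty)
# per 100 kg is added to the baseline producer price, and the reported change
# is the percentage difference. Figures are those quoted in the text for the
# excise-cancellation scenario in 2030.

def price_with_addition(baseline_uah_per_100kg, addition_uah_per_100kg):
    new_price = baseline_uah_per_100kg + addition_uah_per_100kg
    pct_change = 100.0 * (new_price - baseline_uah_per_100kg) / baseline_uah_per_100kg
    return new_price, pct_change

# AMF: 2 338.8 -> 2 456.1 UAH/100 kg, i.e. roughly +5%
print(price_with_addition(2338.8, 2456.1 - 2338.8))
# Biodiesel: 2 637.3 -> 2 987.2 UAH/100 kg, i.e. roughly +13.27%
print(price_with_addition(2637.3, 2987.2 - 2637.3))
```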
Since the distribution of budget subsidies directly depends on the volume of production accumulated by each sector, the sectors that produce less receive fewer subsidies. Under the "Returning excise duty" scenario, the largest price additions are for the AMF; the increase in prices compared to the Baseline scenario was (+15.12%) for the AMF and (+9.41%) for biodiesel. From the above it follows that, as expected, the greatest effects and price additions are obtained under direct support for biofuel producers. The results of the "Direct Support_10%" scenario indicate that support for producers of bioethanol, alternative motor fuel and biodiesel in the form of direct subsidies will contribute positively to the rapid development of the bioethanol and biodiesel sectors and will give an impetus to achieving the indicative target of 10% bioethanol and AMF consumption by 2020, reaching consumption of alternative motor fuel of 540.06 thousand tons, of bioethanol of 540.08 thousand tons, and of biodiesel of almost 79 thousand tons by 2030 in the total consumption of transport fuel. Comparing the simulation results with the Baseline scenario, in 2030 bioethanol production with state support will increase from 12.91 thousand tons to 445.87 thousand tons, almost 34.5 times (Figure 2). Analysing the modelling results for biofuel domestic use in the "Direct Support_10%" scenario, bioethanol domestic use by 2030 will increase from 90.38 thousand tons to 538.5 thousand tons (5.96 times), alternative motor fuel from 89.98 to 538.13 thousand tons (5.98 times), and biodiesel up to 78.81 thousand tons (Figure 3). Given the low level of bioethanol and alternative motor fuel production and the almost complete absence of biodiesel production, it is clear that in order to achieve the indicative targets by 2020, a significant amount of money must be allocated for this purpose: 5 500 UAH / 100 kg of finished product to reach more than 500 thousand tons of bioethanol (alternative motor fuel) consumption by 2020, and 5 500 UAH / 100 kg to reach biodiesel consumption of 79 thousand tons by 2030. The results of the "Direct Support" scenario indicate that direct support will positively influence the rapid development of the bioethanol sector to meet the needs of the domestic market. Thus, bioethanol production will increase 7 times (from 12.91 to 90.6 thousand tons) compared to the Baseline scenario, and alternative motor fuel almost 12.75 times (from 6.61 to 84.31 thousand tons). Bioethanol domestic use will increase from 90.38 to 170.85 thousand tons (1.89 times), and alternative motor fuel from 89.98 to 170.4 thousand tons (1.89 times). Figure 2. Bioethanol and AMF production in the Baseline and "Direct Support_10%" scenarios. Figure 3. Bioethanol and AMF domestic use in the "Direct Support_10%" scenario compared to the Baseline scenario. Source: own evaluation. From the above it follows that support for biofuel producers in the form of direct state subsidies to bioethanol and AMF producers at the level of 1 000 UAH / 100 kg will, as expected, contribute to the increase of biofuel production and its use to meet the domestic needs of the transport fuel market in Ukraine. The calculations show that 152.5 thousand tons of bioethanol is required to meet the needs of the domestic market. 
The simulation results have confirmed that in 2020, due to direct subsidies for bioethanol and alternative motor fuel producers, starting in 2018 at a rate of 1 000 UAH / 100 kg, bioethanol consumption in the volume of 172.82 thousand tons could be achieved, AMF for such same conditions -172.71 thousand tons. The introduction of the system of returning excise duty for AMF producers will stimulate the increase of the AMF production, in comparison with the Baseline scenario, almost by 3.87 times (from 6.61 to 25.62 thousand tons), and its domestic use in 2030, respectively, will increase by 1.2 times (from 89.98 thousand tons to 109.72 thousand tons). However, as compared with the direct support scenarios for producers of biofuels, as expected, production and consumption rates are somehow lower. The introduction of returning excise duty system for biodiesel producers for the sold biodiesel will stimulate an increase in biodiesel production, compared to the Baseline Scenario by 6 times (from 0.01 to 0.06 thousand tons), and its domestic use in 2030, respectively, will increase by 1.04 times (from 0.89 thousand tons to 0.93 thousand tons). The returning excise duty for the sector did not yield the desired effect due to the lack of its production and the high excise duty on biodiesel, which is currently almost equal to the excise duty on diesel. The results of the "Cancelling excise duty" Scenario indicate that the new support system will also have a positive impact on the development of the alternative fuel and biodiesel sector. The biodiesel production is expected to increase by 8 times from the Baseline scenario (from 0.01 to 0.08 thousand tons), but consumption will decrease by 2.2% (from 0.89 to 0.87 thousand tons). As for the impact of the excise duty system for the AMF sector, the following effects are expected: production will increase by 1.95 times from 6.61 thousand tons to 12.91 thousand tons, and its domestic use in 2030 will slightly increase from 89.98 to 90.38 thousand tons. It should be noted that the main factor that influenced on the increase in production of AMF and biodiesel in the abolition of excise duty was the increase in producer prices, respectively, on alternative motor fuels and biodiesel. Thus, the price of the AMF increased by 5% from 2 338.8 UAH. / 100 kg to 2 456.1 UAH / 100 kg, biodiesel at 13.27% -from 2 637.3 UAH. / 100 kg to 2 987.19 UAH. / 100 kg ( Figure 4, Figure 5). It should be noted that analysing the modelling results of the introduction of direct support for biofuel producers, will give impetus to the increased use of bioethanol and AMF almost 6 times by 2030 compared to the Baseline scenario. However, given the low level of bioethanol and AMF production and almost absence of biodiesel production, it is clear to reach the indicative target by 2020 necessary allocate a significant amount of funds: 5500 UAH /100 kg to reach bioethanol (AMF) consumption more than 500 thousand tonnes and 5500 UAH /100 kg to reach biodiesel consumption up to 79 thousand tonnes by 2030. The introduction of a system of refund of excise duty to AMF producers for the sold alternative fuel will stimulate an increase of AMF production, compared to the Baseline scenario by almost 4 times, and its use will increase by 1.2 times in 2030, respectively. 
The introduction of a system of refunding the excise duty to biodiesel producers for the biodiesel sold will stimulate an increase of biodiesel production in this scenario of 6 times compared to the Baseline Scenario, and its use in 2030, respectively, will increase 1.04 times. Excise duty reimbursement for the biodiesel sector did not produce the desired effect due to the lack of production and the high excise duty on biodiesel, which is now almost on par with the excise duty on diesel. In general, the results of the "Cancelling excise duty" scenario have shown an increase in the production and consumption of AMF due to the introduction of the excise tax abolition for AMF producers; for biodiesel there is an increase in production, but compared to the Baseline scenario consumption will decrease by 2.2%. It should be noted that the main factor behind the increase in the production of AMF and biodiesel under the abolition of excise duty was the increase in the producer prices of AMF and biodiesel, respectively. Thus, the AMF price increased by 5%, from 2 338.8 UAH / 100 kg to 2 456.1 UAH / 100 kg. Other studies concerning the achievement of indicative targets for biofuel consumption by 2020 were also analysed. Most of them focus on the impact of biofuel development on food production, analyse transport forecasts for emission changes relative to a reference without biofuels, and investigate land claims under different scenarios of agricultural development. For example, in India biofuel initiatives have gained momentum, with the national biofuel policy targeting 20% blending of both petrol and diesel by 2017 (Das et al., 2011). That study takes the southern Indian state of Karnataka as an example and aims at estimating the potential to achieve the policy targets. The study spatially analysed land-use change owing to biofuel expansion and its effects on food production. The research used an integrated modelling framework to simulate land-use change and bioenergy production under two scenarios, Industrial Economy (IE) and Agricultural Economy (AE). Results indicated that meeting the 20% blending target is a challenging goal under both scenarios. Bioethanol requirements can be nearly fulfilled (88% under IE and 93% under AE) because of sugarcane expansion. However, biodiesel demands cannot be fulfilled using only degraded lands as currently planned in India; additional agricultural land (3-4% of the total cropland) will be required for jatropha-based biodiesel production. Food production will not be directly impacted until 2025, because the largest source of additional land could be short- and long-term fallows. Frederiksen (2013) developed two scenarios for biofuel introduction: a conservative one, following the EU renewable energy target for transport of 10% in 2020 and keeping this level to 2030, and a more ambitious one, with a biofuel share that increases to 25% in 2030. Bioethanol and biodiesel were the selected fuel types, and their respective shares were assumed identical. Moreover, it was assumed that the growth in bioethanol use was increasingly provided by 2nd generation bioethanol, while 1st generation bioethanol was kept at a 5% level. Forecasts of road traffic to 2030 were developed, initially based on an oil price of $65 per barrel and later including a variant based on $100 per barrel. 
The transport forecasts were analysed for emission changes relative to a reference with no biofuel, and the land claims were investigated in different scenarios for agricultural development. Three different scenarios were developed, assuming different targets for the contribution of biofuels in transport. All three scenarios were based on the general activity and energy projections used for the new framework on climate and energy, as described in the new "Trends to 2050" manuscript (EC, 2013). In the reference scenario used in this study, which is consistent with the Trends to 2050 study by the European Commission, biofuels constitute 93% of total renewable energy use in transport, thus assisting member states reaching their renewable targets. All these positive results were achieved on the basis of relevant policies that supported and promoted biofuels use and the acceptance of biofuels by the key market players (Dr. Leonidas Ntziachristos, et al., 2014). Therefore, well -designed laws and regulations -supported by strong institutions and efficient administrative procedures -are necessary for biofuel production to prosper. Reducing excessive regulations of biofuel activities will improve the business environment that contributes to increasing competitiveness and growth of the sector. CONCLUSION Ukraine has a great competitive advantage in the production of biofuels as availability of the feedstock, fertile soils and supports through investments and know-how from abroad (Janda & Stankus, 2017). Whereas the country disadvantageously exports feedstock to Europe for cheaper price and purchase expensive gas and oil instead. Thus, national interest should be shifted from the export of raw material to processing them into final biofuel products. Based on the experience of leading countries in the biofuels market, Ukraine should overcome energy dependence through the establishment of biofuel production and its utilization within the country. According to Ukrainian Association of alternative fuels producers, main barriers that hinder Ukrainian biofuel industry from rising are as follows: high rate of excise duty that made the production of biofuel noncompetitive to traditional motor fuels and highly corrupted process of regulation of bioethanol production and fulfilment of standard technical requirements (Janda & Stankus, 2017). It is expected that in the Baseline scenario biofuel production will not face major changes, because the use of raw materials for food and feed consumption, as well as their export, will remain a more profitable option for Ukrainian producers. However, it is expected to motivate biofuel production by the introduction of a minimum of 10 % biofuel use by the transport sector. In particular, the increase of demand for biofuel by at least 10% shall positively affect the respective domestic market price and, consequently, positively influence the use of commodities for biofuel production. Therefore, it might be the case that correctly specified domestic policy will trigger the development of biofuel production in Ukraine. We propose to introduce obligatory admixture with traditional gasoline and diesel and return nonexcise production of alternative fuel and biodiesel fuel for a certain period (up to 10 years) to achieve the indicative target of 10% biofuels consumption by the transport sector and then promote the excise duty at an economically justified level of 10-20% of the excise tax rate on traditional gasoline and diesel. 
All of the above measures, such as direct support for liquid biofuel producers, tax incentives for producers and the introduction of a mandatory blending rate, will contribute to achieving the indicative target of 10% biofuel consumption by the transport sector. It is worth noting, however, that in conducting the research we encountered the following limitations of the analysis: many assumptions due to the lack of official statistical data, a limited number of observations (covering 2010-2017), difficulties in the regression estimation and, as a result, the need for caution in interpreting the market simulation results. The next steps of our research will be the implementation of other biofuel markets (the biogas market) in AGMEMOD and the statistical estimation of the respective equations; the involvement of biofuel market experts (i.e., stakeholders) in the review of simulation results; and adjustments and corrections of the equation parameters and/or assumptions.
Superconductivity and Superfluidity Currently there is a common belief that the explanation of the superconductivity phenomenon lies in understanding the mechanism of the formation of electron pairs. Paired electrons, however, cannot form a superconducting condensate spontaneously. These paired electrons perform disorderly zero-point oscillations, and there is no force of attraction in their ensemble. In order to create a unified ensemble of particles, the pairs must order their zero-point fluctuations so that an attraction between the particles appears. As a result of this ordering of zero-point oscillations in the electron gas, superconductivity arises. This model of condensation of zero-point oscillations makes it possible to obtain estimates for the critical parameters of elementary superconductors which are in satisfactory agreement with the measured data. On the other hand, the phenomenon of superfluidity in He-4 and He-3 can be explained similarly, as due to the ordering of zero-point fluctuations. It is therefore established that both related phenomena are based on the same physical mechanism.
Here M_i is the isotope mass. It is important that another explanation of the isotope effect in superconductors can be given. In recent decades it has been shown experimentally that isotopic substitution in metals can directly lead to a change of the crystal cell parameters. This is a consequence of the fact that the zero-point oscillations of atoms in some crystal lattices are anharmonic (while in other crystals they are harmonic). It leads to a dependence of the electron density on the isotope mass, i.e., it changes the Fermi energy and other important characteristics of the electronic system of the metal and thus may affect the properties of the superconductor. As follows from measurements carried out on Ge, Si, diamond and light metals such as Li [2], [3], there is a square-root dependence of the force constants on the isotope mass, which is required for Eq.(1). The same dependence of the force constants on the isotope mass was found in tin [4]. Unfortunately, there are no direct measurements of the influence of isotopic substitution on the electronic properties of metals, such as the electronic specific heat and the Fermi energy. Nevertheless, it seems that the influence of isotope substitution on the critical temperature of superconductors does not necessarily point to the work of the phonon mechanism in the formation of pairs. The electrons can be coupled by other interactions, and isotopic substitution causes a change in the Fermi energy and thus can affect the critical parameters of the superconductor. The existence of superconductivity is the result of ordering in the system of conduction electrons. It is shown in [1] that the interaction of the zero-point oscillations of the collectivized electrons can play the role of the physical mechanism leading to this ordering.
2 The condensate of ordered zero-point oscillations of the electron gas J. Bardeen was the first to turn his attention to a possible link between superconductivity and zero-point oscillations [5]. The special role of zero-point vibrations is due to the fact that in metals all movements of electrons have been frozen out except for these oscillations. 
As the temperature decreases, the conduction electrons lose their thermal excitations and, if there is a mechanism that combines the electrons into pairs obeying Bose-Einstein statistics, they tend to form a condensate at the level of minimum energy. Thus, the ordering in the gas of conduction electrons can exist as a result of two mechanisms working together (see Fig.(1)). The electron pairing. First, an energetically favourable electron pairing must occur in the electron gas. The pairing of electrons can occur due to the magnetic dipole-dipole interaction. In order for the magnetic dipole-dipole interaction to merge two electrons into a singlet pair at a temperature of about 10 K, the distance between these particles must be small enough, where a_B = ħ²/(m_e e²) is the Bohr radius. That is, two collectivized electrons must be localized within the volume of one lattice site. This is in agreement with the fact that superconductivity can occur only in metals with two collectivized electrons per atom, and cannot exist in the monovalent alkali and noble metals. The pairing of electrons above T_c ([6], [7]) indicates that pairing is a necessary but not sufficient condition for the existence of superconductivity. The condensate of zero-point oscillations. The condensation of the electron pairs, as bosons, is the additional necessary condition for the arising of superconductivity: the electron pairs must condense at a lower energy level, which should occur at the expense of their interaction. Since at low temperatures only the zero-point oscillations exist, a further lowering of energy can occur through the arising of coherent zero-point oscillations, i.e., an ordering of their amplitudes, frequencies and phases. The lowering of the electron energy at pairing due to the magnetic dipole-dipole interaction. Let an electron gas have density n_e and Fermi energy E_F. Each electron of this gas is confined inside a cell with linear dimension λ_F. Neglecting the interactions of the electron gas, its Fermi energy can be written as in [8]. However, a conduction electron interacts with the ions through its zero-point oscillations. If the ion system is considered as a positive background uniformly spread over the cells, the electron inside a cell has a potential energy, and it performs zero-point oscillations inside the cell with an amplitude a_0 and a corresponding energy. In accordance with the virial theorem [12], if a particle executes a finite motion, its potential energy E_p is related to its kinetic energy E_k by the simple relation |E_p| = 2E_k. From this we find the amplitude of the zero-point oscillations of the electron in the cell. Starting from the quantization condition, one can determine the frequency of the zero-point oscillations Ω_0 and their wavelength L = c/(2πΩ_0). These zero-point oscillations form an oscillating electric dipole moment of the electron, with a corresponding amplitude value. The interaction of the electrons via their dipole moments should lead, at sufficiently low temperatures, to the formation of an ordered condensate of the zero-point oscillations. The identity of the electron pairs leads to the equality of the frequencies and amplitudes of the zero-point vibrations. The oscillations of electron pairs located at distances equal to an integer number of wavelengths L will be in phase or, more exactly, shifted by an integer multiple of 2π. 
Since the ordering of the oscillations should occur due to the electromagnetic interaction of the oscillating dipoles, the energy minimum should correspond to the antiphase mode of neighbouring oscillators, at which the distance Λ_0 between them equals half the wavelength L induced by the oscillating dipoles. Hence follows Eq.(11), together with the relation between the density of the ordered condensate and the density of the Fermi gas from which it is formed. The use of the above equations allows us to find the linear size of the volume of pair localization. This linear dimension is of the order of 10⁻⁶ cm and plays in this model a role similar to that of the Pippard coherence length in the BCS theory. The comparison of these calculated values with the measured data is shown in the corresponding table. As a result of the electromagnetic interaction of the particles, the energy density of the electron system decreases by a corresponding value; taking into account Eq.(7), we obtain the relation between the particle density and the value of the gap in the condensate spectrum. It should be noted that these ratios differ from the corresponding expressions for a Bose condensate obtained in many textbooks (see, e.g., [8]): the expressions for the ordered condensate of zero-point oscillations have the coefficient α on the right-hand side. From Eq.(15) we can obtain a further relation, the comparison of which with the measured data is shown in the corresponding table.
3 The critical parameters of the zero-point oscillations condensate. 3.1 The temperature dependence of the energetic distribution of condensate particles. The phenomenon of condensation of zero-point oscillations in the electron gas has characteristic features. The evaporation of the condensate into the normal state should be classified as an order-disorder transition. This condensate can be destroyed by heating as well as by the application of a sufficiently strong magnetic field. Therefore, there must be a link between the critical temperature and the critical magnetic field of the condensate, which should appear in superconductors if superconductivity occurs through the ordering of zero-point oscillations. Let us assume that at a given temperature T < T_c the system of vibrational levels of the conduction electrons consists of two levels: a basic level, characterized by antiphase oscillations of the electron pairs at the distance Λ_0, and an excited level, characterized by in-phase oscillations of the pairs. Let the basic level be populated by N_0 particles and the excited level by N_1 particles. Two electron pairs with in-phase oscillations have a high interaction energy and cannot form the condensate. The condensate can be formed only by the particles that make up the difference between the level populations, N_0 − N_1. In dimensionless form, this difference defines the order parameter. In the theory of superconductivity, the order parameter is by definition determined by the value of the energy gap. Counting energy from the level ε_0 and passing to the dimensionless variables δ ≡ Δ_T/Δ_0, t ≡ kT/kT_c and β ≡ 2Δ_0/kT_c, we have δ = (e^{βδ/t} − 1)/(e^{βδ/t} + 1) = th(βδ/t). This solution coincides very accurately with the well-known transcendental equation of the BCS theory, which was obtained by integrating over the phonon spectrum, and is in quite satisfactory agreement with the measurement data. After numerical integration we can obtain the average value of the gap. Fig. 3: Comparison of the energies E_T (Eq.(24)) and E_H (Eq.(25)) for type-I superconductors. 
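The two-level population argument above rests on a standard statistical-mechanics identity: for two levels split symmetrically by 2ε and populated according to Boltzmann weights, the relative population difference (N_0 − N_1)/(N_0 + N_1) equals th(ε/kT), where th denotes the hyperbolic tangent. The sketch below checks this identity numerically; it is a generic illustration of the argument, not the author's specific normalization of the order parameter or gap.

```python
# Generic illustration of the two-level population argument: for two levels
# split by 2*eps with Boltzmann populations, the relative population
# difference (N0 - N1)/(N0 + N1) equals tanh(eps/kT). Standard identity,
# shown here numerically; purely illustrative.
import math

def population_difference(eps_over_kT):
    n0 = math.exp(+eps_over_kT)   # lower (basic) level, weight exp(+eps/kT)
    n1 = math.exp(-eps_over_kT)   # upper (excited) level, weight exp(-eps/kT)
    return (n0 - n1) / (n0 + n1)

for x in (0.1, 0.5, 1.0, 2.0, 5.0):
    print(f"eps/kT = {x:4.1f}  (N0-N1)/(N0+N1) = {population_difference(x):.4f}"
          f"  tanh = {math.tanh(x):.4f}")
```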
The critical parameters of the zero-point oscillations condensate and superconductivity. To convert the condensate into the normal state, we must raise half of its particles into the excited state (the gap collapses under this condition). To do this, taking into account Eq.(23), a unit volume of the condensate must receive a corresponding amount of energy. On the other hand, we can obtain the normal state of the electrically charged condensate by applying a magnetic field of the critical value H_c, with the corresponding energy density. As a result, we obtain the relation between them. The comparison of the critical energy densities E_T and E_H for type-I superconductors is shown in Fig.(3). The obtained agreement between the energies E_T (Eq.(24)) and E_H (Eq.(25)) can be considered quite satisfactory for type-I superconductors [10], [11]. A similar comparison of data for type-II superconductors gives results that differ by approximately a factor of two. Correcting this calculation, apparently, does not make sense. The purpose of these calculations was to show that the description of superconductivity as the effect of the condensation of ordered zero-point oscillations is in accordance with the available experimental data, and this goal can be considered achieved. The zero-point oscillations and the critical magnetic field of superconductors. The direct influence of an external magnetic field of the critical value applied to the electron system is too weak to disrupt the dipole-dipole interaction of two paired electrons. To destroy superconductivity it is enough to destroy the ordering of the electron zero-point oscillations, and for this a not very strong magnetic field is sufficient. Using Eqs.(26) and (14), we can express the gap through the critical magnetic field and the magnitude of the oscillating dipole moment. The properties of the zero-point oscillations of the electrons should not depend on the characteristics of the mechanism of association and existence of the electron pairs. Therefore, one should expect this equation to be valid for type-I as well as type-II superconductors (for a type-II superconductor, H_c = H_c1 is the first critical field). The satisfaction of this condition is illustrated in Fig.(4). Fig. 4: Comparison of the calculated energy of the superconducting pairs in the critical magnetic field with the value of the superconducting gap. Triangles mark type-II superconductors, squares type-I superconductors. The vertical axis shows the logarithm of the product of the calculated value of the oscillating moment of the electron and the critical magnetic field; the abscissa shows the value of the gap.
4 The critical temperature of a superconductor and its electronic specific heat. The electron density of states and specific heat. Let us consider the process of heating the electron gas in a metal. Upon heating, electrons from levels slightly below the Fermi energy are raised to higher levels. As a result, the levels closest to the Fermi level, from which electrons were forming bosons at low temperature, become vacant. At the critical temperature T_c, all electrons from the energy band from E_F − Δ to E_F are moved to higher levels (and the gap collapses). At this temperature superconductivity is destroyed completely. 
This band of energy can be filled by N_Δ particles, where f(E) is the Fermi-Dirac function and D(E) is the number of states per unit energy interval; the factor of two in front of the integral arises because two electrons occupy each energy level. To find the density of states D(E), one needs the difference between the energy of the system at T = 0 and at a finite temperature. In calculating D(E) we must take into account that two electrons can be placed on each level; thus, from the expression for the Fermi energy one obtains the Sommerfeld constant γ. Using similar arguments, we can calculate the number of electrons that populate the levels in the range from E_F − Δ to E_F. For a unit volume of material, Eq.(29) can be rewritten accordingly. Taking into account that for superconductors Δ_0/kT_c = 1.76, numerical integration gives the density of electrons that are thrown up above the Fermi level in a metal at the temperature T = T_c, where the Sommerfeld constant γ refers to a unit volume of the metal.

5 The Sommerfeld constant and the critical temperature of superconductors

The type-I superconductors

The above calculations make it possible to relate the critical temperature of a superconductor directly to its electronic specific heat, an experimentally measurable parameter of the solid state. The density of superconducting carriers at T = 0 was calculated earlier (Eq.(16)). The comparison of the values n_0 and n_e(T_c) is given in the corresponding table. From the data obtained above, one can see that the condition for the destruction of superconductivity upon heating can, for type-I superconductors, be written as the equation n_e(T_c) ≃ 2n_0. Eq.(37) allows us to express the critical temperature of the superconductor through its Sommerfeld constant, Eq.(38). The comparison of the temperature calculated from Eq.(38) (corresponding to the complete evaporation of the electrons with energies in the range from E_F − Δ_0 up to E_F) with the experimentally measured critical temperatures of superconductors is given in Table (5.1.2) and in Fig.(6). We can also rewrite the value of the gap; this expression is in good agreement with the previously obtained Eq.(17). The de Broglie wavelengths of the Fermi electrons expressed through the Sommerfeld constant are shown in Tab. 5.1.3; for comparison, the de Broglie wavelengths of the superconducting pairs (Eq.(10)) and the ratio of the density of superconducting carriers to the density of fermions are also given in this table. It can be seen that the ratio of these densities is close to 10^-5 in order of magnitude, in full compliance with the previously obtained Eq.(12). The agreement between the earlier estimate Eq.(18) and these data can be considered satisfactory. The obtained estimate of the ratio Δ_0/kT_c is in satisfactory agreement with the measured data [10], which for type-I superconductors are listed in Table (5.1.1).

The estimation of the properties of type-II superconductors

The situation is different for type-II superconductors. In this case the measurements show that these metals have an electronic specific heat an order of magnitude greater than the calculation based on a free electron gas predicts. The peculiarity of these metals is associated with the specific structure of their ions: they are transition metals with an unfilled inner d-shell (see Table 5.2).
It can be assumed that this increase of the electronic specific heat of these metals is associated with a characteristic interaction between the free electrons and the electrons of the unfilled d-shell. Since the heat capacity of the ionic lattice of metals at the low temperatures under consideration is negligible, only the electronic subsystem is thermally active. At T = 0, the electrons at the Fermi level have the maximum kinetic energy; when heated, these electrons gain additional kinetic energy. In transition metals, however, only a fraction of the heating energy transferred to the metal is consumed in increasing the kinetic energy of the electron gas. Another part of the energy is spent on the magnetic interaction of a moving electron. At contact with the electron d-shells, a moving free electron induces on them a magnetic field of a certain order of magnitude. Since the magnetic moment of a d-electron is approximately equal to the Bohr magneton, the energy of the magnetic interaction between a moving conduction electron and a d-electron can be estimated accordingly; this energy is not connected with the process of the destruction of superconductivity. Whereas in metals with a filled d-shell (type-I superconductors) the whole energy of heating goes into the kinetic energy of the conduction electrons, in transition metals only a small part of the heating energy is spent on it, as expressed by Eq.(47). Therefore, whereas the dependence of the gap on the heat capacity in type-I superconductors is defined by Eq.(38), for type-II superconductors the relation Eq.(47) must be taken into account in determining this dependence. As a result of this estimation we obtain Eq.(48). The comparison of the results of these calculations with the measured data (Fig.(7)) shows that for the majority of type-II superconductors the estimate Eq.(48) can be considered quite satisfactory.

Alloys and high-temperature superconductors

To understand the mechanism of high-temperature superconductivity, it is important to establish whether the high-T_c ceramics are type-I or type-II superconductors, or whether they form a special class of their own. To address this issue, one can use the dependence of the critical parameters on the electronic specific heat established above, together with the fact that the specific heats of type-I and type-II superconductors differ considerably. There is a difficulty on this way: the density of the electron gas in high-temperature superconductors is not known with confidence. However, the density of atoms in metal crystals does not differ too much from one material to another, which facilitates distinguishing type-I from type-II superconductors by means of Eq.(38). For type-I superconductors this equation gives a quite satisfactory estimate of the critical temperature (as was done above, see Fig.6). For type-II superconductors this estimate gives an overestimated value, because their specific heat contains an additional term associated with the polarization of the d-electrons. Indeed, such an analysis makes it possible to divide all superconductors into two groups, as is evident from Fig.(8). The alloys Nb_3Sn and V_3Si are generally considered to be type-II superconductors, so it seems quite natural that they are placed in the close neighbourhood of Nb.
Some excess of the calculated critical temperature over the experimentally measured value for the ceramic Tl_2Ba_2Ca_2Cu_3O_10 can be attributed to the fact that the measured heat capacity may include contributions from conduction electrons in non-superconducting elements (layers) of the ceramic. It is no news that this compound, as well as the ceramic YBa_2Cu_3O_7, belongs to the type-II superconductors. However, the ceramics (LaSr)_2CuO_4, Bi-2212 and Tl-2201 should, according to this figure, be regarded as type-I superconductors, which is somewhat unexpected.

Understanding the mechanisms of the superconducting state is important to open the way to solving a technological problem that was the dream of the last century: to fabricate a superconductor that is easy to produce (in the sense of ductility) and has a high critical temperature. To move in this direction it is important, first of all, to understand the mechanism limiting the superconducting properties. Consider a superconductor carrying a current. The value of this current is limited by the critical velocity of the carriers, v_c. Comparing Eqs.(70) and (41), we find that the critical velocity of the superconducting carriers is about a hundred times smaller than the Fermi velocity. Thus the critical velocity appears to be equal to the speed of sound v_s, since according to the Bohm-Staver relation [19] the speed of sound is of this same order. This makes it possible to consider the destruction of superconductivity as the overcoming of the sound barrier by the superconducting carriers: if they move without friction at speeds below the speed of sound, then after overcoming the sound barrier they acquire a friction mechanism. As a result of this assumption we obtain the corresponding relation. If so, then to obtain superconductors with a high critical temperature one must synthesize materials with a high speed of sound. This agrees with the fact that ceramics, compared to metals and alloys, have higher elastic moduli. However, this same circumstance leads to a contradiction with the processability of the material: for a material to be malleable it must have relatively low elastic constants (at room temperature). A solution to this paradoxical problem remains elusive.

6 About the London penetration depth

6.1 The traditional approach to the calculation of the London penetration depth

It is commonly accepted to develop the theory of the London penetration depth (see, for example, [11]) in several steps.

Step 1. At first, the action of an external electric field on the free electrons is considered. In accordance with Newton's law, free electrons gain acceleration in an electric field E. The resulting directional motion of the "superconducting" electron gas with density n_s creates a current with density j = n_s e v, where v is the carrier velocity. Differentiating with respect to time and substituting into Eq.(52), one obtains the first London equation.

Step 2. Applying the operation rot to both sides of this equation and using Faraday's law of electromagnetic induction, rot E = −(1/c) ∂B/∂t, one obtains the relation between the current density and the magnetic field: rot j + (n_s e²/(m_e c)) B = 0.
Step 3. Selecting the stationary solution of Eq.(55) and performing simple transformations, one concludes that there is a so-called London penetration depth of the magnetic field into a superconductor, λ_L.

The London penetration depth and the density of superconducting carriers

The London penetration depth is one of the measurable characteristics of superconductors, and for many of them it equals a few hundred Angstroms [15]. In Table (1) the measured values of λ_L are given in the second column. However, if one uses these experimental data to calculate the density of superconducting carriers n_s according to Eq.(58), the result is several orders of magnitude too large (see the middle column of Tab.(1)). Indeed, only a small fraction of the free electrons can combine into Cooper pairs, namely only those electrons whose energies lie in the thin strip of the energy spectrum near E_F. We can therefore expect the concentration of superconducting carriers among all free electrons of the metal to be at the level kT_c/E_F ≈ 10^-4, whereas the concentrations calculated from Eq.(58) are 2-3 orders of magnitude higher (see the last column of Table (1)). The reason for this discrepancy apparently lies in the use of a non-equivalent transformation. At the first stage, in Eq.(52), the rectilinear acceleration in a static electric field is considered; in such motion there is no current circulation. Therefore the application of the operation rot in Eq.(55) is not correct in this case: it does not lead to Eq.(57), but to a pair of equations and to an indeterminate expression for rot j.

The adjusted estimation of the London penetration depth

To avoid this incorrectness, let us consider the balance of magnetic energy in a superconductor placed in a magnetic field. This magnetic energy is composed of the energy of the penetrating external magnetic field and the magnetic energy of the moving electrons.

The magnetic energy of a moving electron. Using the formulas of [13], let us estimate the ratio of the magnetic to the kinetic energy of an electron (charge e, mass m_e) moving rectilinearly with velocity v ≪ c. The density of the electromagnetic field momentum is given by the corresponding equation: when moving with velocity v, the electric charge carrying an electric field of intensity E creates a magnetic field, with the density of the electromagnetic field momentum given (at v ≪ c) by the corresponding expression. As a result, the momentum of the electromagnetic field of a moving electron is obtained, where the integrals are taken over the entire space occupied by the particle's fields, and ϑ is the angle between the particle velocity and the radius vector of the observation point. In calculating the last integral, under the condition of axial symmetry with respect to v, the contributions from the components of the vector E perpendicular to the velocity cancel each other for all pairs of space elements located diametrically opposite on a magnetic force line. Therefore, according to Eq.(65), the corresponding component of the field can be taken instead of the full vector E.
Taking this into account, going over to spherical coordinates and integrating over the angles, we obtain the field momentum. If the integration of the field is limited by the electron Compton radius r_C = ħ/(m_e c), then for v ≪ c we obtain the corresponding result. In this case, taking Eq.(63) into account, the magnetic energy of a slowly moving electron pair is obtained, where we introduce the notation Λ. The part of the free energy of the superconductor connected with the application of a magnetic field is then written down, and minimization of the free energy, after simple transformations, shows that Λ is the penetration depth of the magnetic field into the superconductor. In view of Eq.(16), from Eq.(75) we can estimate the values of the London penetration depth (see table (6.3.2)). The agreement of the obtained values with the measured data can be considered quite satisfactory. The resulting refinement may be important for estimates in the framework of the Ginzburg-Landau theory, where the London penetration depth is used in comparing calculations with the specific parameters of superconductors.

About the superfluidity of liquid helium

The main features of the superfluidity of liquid helium became clear a few decades ago [16], [17]. L.D. Landau explained this phenomenon as a manifestation of the quantum behavior of a macroscopic object. However, the causes and the mechanism of the formation of superfluidity are still not clear; there is no explanation of why the λ-transition in helium-4 occurs at about 2 K. The related phenomenon, superconductivity, which can be regarded as the superfluidity of the conduction electrons, can be described quantitatively if one considers it as a consequence of the ordering of the zero-point oscillations of the electron gas. It therefore seems appropriate to consider superfluidity from the same point of view.

The atoms in liquid helium-4 are electrically neutral, have no dipole moments and do not form molecules. Yet some electromagnetic mechanism should be responsible for the phase transformations of liquid helium (as in other condensed substances, where phase transformations are related to changes of energy on the same scale). In liquid helium the atom density is n_4 ≈ 2·10^22 cm^-3, so that a single atom is trapped by the adjacent atoms within a volume of linear dimension roughly λ ≈ 2·10^-8 cm. In this volume the atom performs zero-point oscillations with amplitude a_0 ≈ λ, which allows helium to remain liquid even at T = 0. The radius of the atom r_a, defined by the first Bohr orbit, is about ħ²/(Z m_e e²), i.e. approximately 3·10^-9 cm, almost an order of magnitude smaller. The frequency ω_0 of the zero-point oscillations of an atom in liquid helium can be determined from the quantization condition, where m_4 = 6.7·10^-24 g is the mass of the He-4 atom. In the zero-point oscillations, the atoms collide through their electron shells and during the short collision time they reverse the direction of their motion; at the same time the nuclei of the atoms are affected by inertia. Let us assume a value for the collision time; in this case the inertial force F ≈ m_4 (a_0 ω_0)²/r_a acts on the nucleus periodically and displaces it relative to the center of the negatively charged shell by the distance δ_ω, i.e. it leads to the existence of an oscillating electric dipole moment of the atom, d_ω = e δ_ω. To determine the polarizability of helium under the action of the inertial force, one can use the Clausius-Mossotti equation [18], which describes the polarizability under the action of an external electric field.
Here α is the polarizability of the He atom and ε ≈ 1.055 is the dielectric permittivity of liquid helium at T → 0. In accordance with this equation, under the action of the oscillating force F applied to the nucleus, the atom acquires an oscillating electric dipole moment whose amplitude depends on the polarizability [18]. A numerical calculation gives α ≈ 2.5·10^-24 cm^3, δ_ω ≈ 7·10^-11 cm and d_ω ≈ 4·10^-20 g^{1/2} cm^{5/2} s^{-1}. At relatively high temperatures the zero-point oscillations are independent. At sufficiently low temperature, the interaction energy of the oscillating dipole moments of neighboring atoms leads to an ordering of the system, in which coherence of the zero-point oscillations is established throughout the whole ensemble of particles. Substituting the numerical values gives the critical temperature of this ordering; thus it can be seen that the ordering energy of the oscillating dipoles is consistent, in order of magnitude, with the energy of the λ-transition in helium-4. It is difficult to carry out more accurate calculations along this route, primarily because the collision time of two atoms in the liquid can only be estimated very roughly.

A similar explanation can be given for the transition to the superfluid state of helium-3. The difference is that in this case the electromagnetic interaction should order the magnetic moments of the He-3 nuclei. We can estimate the temperature at which this ordering happens. Due to the zero-point oscillation, the electron shell creates an oscillating magnetic field at the nucleus. Because the magnetic moment of the He-3 nucleus is approximately equal to the nuclear Bohr magneton µ_nB, the ordering in this system must occur below the corresponding critical temperature, which is in good agreement with the measured data.

Conclusion

It is generally accepted that the existence of the isotope effect in superconductors leaves only one way to explain the phenomenon of superconductivity, namely the phonon mechanism. However, there are experimental grounds for believing that the isotope effect in superconductors may be a consequence of another effect, and therefore that a non-phonon mechanism may lie at the basis of superconductivity. A satisfactory agreement with the measured data can be obtained if we regard the superconductivity of both type-I and type-II superconductors as the result of a condensation of ordered zero-point oscillations of the electrons. The density of superconducting carriers and the critical temperature of the superconductor are determined by the peculiarities of the interaction of the zero-point oscillations, and the critical magnetic field of a superconductor is defined by the mechanism of destruction of the coherence of the zero-point oscillations of the electrons. The evaluations show that the critical parameters of superconductors depend on the Sommerfeld constant (or the Fermi energy) and do not depend on the electron-phonon interaction. The temperature dependence of the energy gap is determined by the mechanism standard for order-disorder transitions. There are both type-I and type-II superconductors among the high-temperature superconducting ceramics. The general conclusion obtained from the agreement between the calculated results and the measured data is that the superconductivity of elementary metals is the result of the ordering of their electron systems, i.e. it is based on a non-phonon mechanism.
7,768.6
2010-08-16T00:00:00.000
[ "Physics" ]
Analysis of Short-Circuit and Dielectric Recovery Characteristics of Molded Case Circuit Breaker according to External Environment : A molded case circuit breaker (MCCB) is one of the most important safety components protecting a load from overcurrent in a power distribution system. The MCCB, which is mainly installed in switchboards and distribution boxes, may be affected by external temperatures and magnetic fields, but these factors are still excluded from product standards and performance evaluation. This paper presents an experimental study of the adverse effects of external temperature and external magnetic fields on the short-circuit characteristics and dielectric recovery strength of an MCCB. Regarding temperature, both the short-circuit characteristics and the dielectric recovery strength are found to change approximately linearly with the external temperature. Comparing the relative changes at 45 °C with those at 35 °C (both taken with respect to 25 °C), the ratios are 1.58, 1.53, and 1.79 for t10, t21, and t32 in the short-circuit characteristics, and 1.59, 1.69, and 1.53 for ti, tm, and tl in the dielectric recovery strength. Under an external magnetic field, the short-circuit characteristics deteriorate only in the t21 period, which changes by 8.56%. The dielectric recovery strength decreases by 4.92% in the initial section (ti) and by 14.45% in the later section (tl). It has been confirmed that the external magnetic field interferes with the emission of hot gas.

Introduction

A molded case circuit breaker (MCCB) is the circuit breaker closest to the consumer among the medium- and low-voltage circuit breakers installed to protect the load from overcurrent in the power distribution system. It is therefore essential to secure the reliability of the circuit breaker in order to protect the load. The MCCB is mainly installed in the electrical room of a building and in the distribution boxes of each floor. In such an environment, the circuit breaker is affected by ambient conditions such as temperature and the magnetic fields of devices installed nearby. After the development process, the circuit breaker is tested according to the product standard to verify its function and performance before mass production. The product standards used for breakers rated at AC 1000 V or less (DC 1500 V or less) are as follows. In IEC 60947-2 "Low-voltage switchgear and controlgear-Part 2: Circuit-breakers", the trip characteristics, short-circuit characteristics, withstand voltage, dielectric strength and mechanical strength of the circuit breaker are tested, and a temperature rise test is performed between the other tests [1]. However, the temperature rise test in this standard focuses only on the temperature rise (∆T) and does not verify performance such as the trip characteristics and short-circuit characteristics as a function of the ambient temperature of the environment where the breaker is installed. In addition, circuit breaker manufacturers generally manage the trip performance according to ambient temperature, but they pay less attention to the short-circuit characteristics. As mentioned above, in the environment where a circuit breaker is installed, not only is heat generated, but magnetic fields are also generated by neighboring devices and systems.
IEC 60947-1 "Low-voltage switchgear and controlgear-Part 1: General rules", a product standard for low-pressure switchgear, includes verifying the performance of magnetically sensitive devices with the magnetic field test from an EMC perspective [2]. However, based on this, Annex F of IEC 60947-2, an EMC requirement applied to breakers with overcurrent protection, does not include the corresponding magnetic field test. Although it is not a magnetic field-sensitive device as much as a protection relay, communication, and measurement device (etc.) that communicates with a parent system or directs measurement and operation, the operation of a circuit breaker may be affected by a magnetic field. Therefore, it is necessary to analyze and consider the impact of external conditions on the breaker. Even if it is not required by the product standard, the dielectric recovery characteristics between electrodes by residual hot gas existing in the arc extinguishment chamber after current zero are also important for performance among the characteristics of the circuit breaker. When separating electrodes during the blocking process, the arcs generated between the electrodes form high-temperature hot gas inside the arc-extinguishment chamber. This causes a decrease in the dielectric strength between the fixed electrode and the moving electrode to remain inside the arc extinction chamber even after the arc is extinguished after the current zero. dielectric. This means that although the circuit breaker operates normally, the overcurrent blocking fails, resulting in load burnout [3]. Therefore, the dielectric recovery characteristics of the circuit breaker according to the installation environment must be considered to increase the reliability of the circuit breaker as well as the short-circuit characteristics. Currently, research on the impact of the surrounding environment on breakers and components is being conducted. Szulborski, M et al. analyzed the temperature distribution of the entire current path (from the trip coil to the fixed electrode, moving electrode, arc runner, arc extinguishment unit, and wire terminal) of the miniature circuit breaker (MCB) through a finite element method (FEA) simulation [4]. They also analyzed the magnetic field that generates a force moving toward the splitter plate inside the arc extinguishment chamber [5]. Yang, W et al. analyzed the effect of pressure, relative humidity, and temperature difference on MCB according to altitude through simulation [6]. In addition, various studies on LV or MV breakers are being conducted, and in particular, studies related to the DC system are increasing [7][8][9][10][11][12][13]. The composition of this paper is as follows: 2. Consideration of external environmental conditions, 3. experiment studies, and 4. conclusions. Section 2 deals with the consideration of external environmental conditions. Section 3 shows the results of measuring the short-circuit characteristics and the dielectric recovery voltage under external temperature and magnetic field conditions. Finally, Section 4 analyzes the experimental results and provides conclusions. Previous studies have conducted assessments on the effect of the shape and material of the internal structure of the MCCB on the insulation recovery characteristics. In this study, the short circuit characteristics and insulation recovery characteristics under the external environmental conditions that the MCCB may be subjected to are analyzed. 
This is valuable because these aspects are not verified by the current product standards. In addition, directions for improving the performance of each characteristic are suggested.

Temperature

A previous paper confirmed the effect of the material of the splitter plate inside the arc extinguishment chamber on the dielectric recovery characteristics of the breaker [14]. In that work, the influence of the thermal conductivity of the material on the dielectric recovery voltage was analyzed through one-way ANOVA (Analysis of Variance) [15]. Table 1 shows the dielectric recovery voltage results when the splitter plate located in the arc extinguishment chamber is made of steel, aluminum, and copper. The dielectric recovery voltage was measured three times for each period to obtain the average value.

Table 1. Measurement result of DRV in previous paper [14].

Figure 1 shows the main effect plot according to the material. In the initial and medium periods, the dielectric recovery voltage increases in proportion to the thermal conductivity; in particular, copper shows a dielectric strength higher than the average value of the data. However, in the latter period the dielectric recovery voltage is not proportional to the thermal conductivity, and aluminum appears to be the highest. In general, the dielectric recovery voltage is mainly affected by the cooling of the hot gas in the initial stage and by its emission in the latter stage. Therefore, thermal conductivity is the main factor increasing the dielectric recovery in the initial period, which is sensitive to temperature. If the temperature around the circuit breaker changes, it affects the cooling and emission of the hot gas inside the arc extinguishment chamber, so an analysis is needed.

Magnetic Field

The short-circuit characteristic of the circuit breaker is affected by the Lorentz force generated by the shape of the current path from the terminal to the fixed electrode. Therefore, the force opening the moving electrode changes according to the magnitude of the inflowing overcurrent. In addition, depending on the magnetic field applied to the arc formed after electrode separation, the force acting on the movement of the arc changes. Figure 2 shows the effect of this magnetic field on the breaker. As the current flows from the fixed electrode to the moving electrode, the direction of the force applied to the arc varies depending on the direction of the external magnetic field. This force either disturbs or helps the arc move toward the splitter plate. Therefore, it is necessary to analyze the movement of the arc according to the external magnetic field [16].
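As an aside on the one-way ANOVA mentioned in the Temperature subsection above, the following Python sketch shows the kind of test involved. The three readings per material are hypothetical; only the approximate mean DRV levels of the initial period (about 97 V for steel, 155 V for aluminum and 196 V for copper, quoted later in the comparison with the previous study) are taken from the paper, so this illustrates the procedure rather than reproducing Table 1.

from scipy import stats

# One-way ANOVA: does the splitter-plate material (i.e. its thermal conductivity)
# have a significant effect on the initial-period dielectric recovery voltage?
drv_steel    = [95, 97, 99]      # V, hypothetical triplets around the reported means
drv_aluminum = [152, 155, 158]
drv_copper   = [193, 196, 199]

f_stat, p_value = stats.f_oneway(drv_steel, drv_aluminum, drv_copper)
print(f"F = {f_stat:.1f}, p = {p_value:.5f}")
# A small p-value indicates that the material has a statistically significant effect
# on the DRV, which is the kind of conclusion drawn from Table 1 and Figure 1.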
Experiment Studies

The circuit used in the experiment is illustrated in Figure 3, which is generally used to measure the dielectric recovery voltage and the inflow of over-current [17]. The capacitor bank CS charged by the rectifier generates overcurrent at the desired frequency with inductor L. An overcurrent generated by the operation of Thyr flows into the circuit breaker, and capacitor C0 connected in parallel to the circuit breaker is installed to model a recovery voltage of the system and arbitrarily perform re-ignition. In addition, an experiment is constructed by selecting the ambient temperature and the external magnetic field as external conditions applied to the circuit breaker. These experimental setups appear in Sections 3.1 and 3.2, respectively.
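For orientation, here is a rough numeric sketch of the kind of LC discharge source described above. The paper does not list its component values at this point, so the capacitance, inductance and charging voltage below are assumptions chosen only to show how the capacitor bank and the inductor set the test frequency and the prospective overcurrent.

import numpy as np

# Loss-free LC estimate for the overcurrent source of Figure 3 (values assumed).
C_s = 10e-3      # F, assumed capacitor-bank capacitance
L   = 100e-6     # H, assumed series inductance
V0  = 500.0      # V, assumed charging voltage

f_ring = 1.0 / (2.0 * np.pi * np.sqrt(L * C_s))   # ringing frequency of the discharge
I_peak = V0 * np.sqrt(C_s / L)                    # peak current of the first half-cycle
print(f"f ~ {f_ring:.0f} Hz, I_peak ~ {I_peak / 1e3:.1f} kA")
# With these numbers the loop rings at about 160 Hz with a ~5 kA peak; in the real
# test circuit the values are chosen to give the desired frequency and overcurrent.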
In order to analyze the short-circuit characteristics and the dielectric recovery voltage, the important measuring points in the waveform are shown in Figure 4. Figure 4a shows each point in the operation of the circuit breaker when an overcurrent flows in. t0 is the time at which the overcurrent flows into the breaker, and t1 is the time when the electrodes begin to separate. t2 is the time when the arc formed after electrode separation reaches the splitter plate, and t3 is the time at which the current reaches zero after the arc is extinguished on the splitter plate. Therefore, t10 represents the time until the moving electrode separates from the fixed electrode after the inflow of the overcurrent, t21 represents the time for the arc to move to the splitter plate after the contact is disconnected, and t32 represents the time for the arc reaching the splitter plate to be extinguished. Figure 4b shows each point of the dielectric recovery characteristics of the circuit breaker after the current reaches zero. When a recovery voltage exceeding the dielectric strength is applied between the fixed electrode and the moving electrode after the current reaches zero, the arc is re-formed and the voltage is discharged. The voltage at this time is VDRV, and the time taken from the current zero to the re-ignition is tDRV.

Temperature Test Results

In order to measure the short-circuit characteristics and the dielectric recovery voltage according to the ambient temperature, a simple chamber capable of maintaining a predetermined temperature is constructed (Figure 5). The experiment is conducted after maintaining the predetermined temperature for 1 h by a heater connected to the thermostat. Figures 6-8 show the short-circuit characteristics at 25 °C, 35 °C, and 45 °C, respectively, and Table 2 shows the numerical values of these results.
The results are as follows. The values of t10 when the external temperature is 25 °C, 35 °C, and 45 °C are 2.77 ms, 2.14 ms, and 1.78 ms, respectively. Compared to the value at 25 °C, the values at 35 °C and 45 °C are reduced by 22.5% and 35.6%, respectively. The electrodynamic repulsion force that determines the value of t10 consists of the Holm force and the Lorentz force [18]. Before the fixed electrode and the moving electrode open, the current flows intensively through a very small contact area; the electromagnetic repulsive force generated by the magnetic flux density between these electrodes is called the Holm force. Before the overcurrent enters the fixed electrode, it flows along adjacent path segments in opposite directions; the resulting repulsive force is called the Lorentz force. The Holm force and the Lorentz force are expressed by the following formulas, in which I is the current, H is the hardness of the material, A is the area of the electrode, l is the length of the electrode arm, and r is the distance between the electrodes. As seen from the formulas, the Lorentz force contains no temperature variable and is not affected by temperature. In the formula for the Holm force, the hardness H of the material is affected by temperature; the hardness H in this formula is the Brinell hardness (BH). Figure 9 shows the temperature-dependent ratio of BH relative to the value at 25 °C [19].
Looking at the temperature range from 25 °C to 45 °C, the BH value increases, so the Holm force increases and t10 shortens. The values of t21 when the external temperature is 25 °C, 35 °C, and 45 °C are 4.24 ms, 4.91 ms, and 5.27 ms, respectively. Compared to the value at 25 °C, the values at 35 °C and 45 °C increase by 15.9% and 24.3%, respectively. t21 is the period in which the arc current generated between the fixed electrode and the movable electrode moves in the direction of the splitter plate. The force acting on the arc current in this period is largely due to the centrifugal force caused by the movable electrode and to the pressure difference of the heated gas generated by the arc discharge. Assuming that the centrifugal force is constant, the pressure difference is given by the following relation: the pressure difference that generates the force is determined by the temperature difference, which shows that the force decreases when the external temperature rises. In addition, a high external temperature limits the extinguishing of the arc current in the air. The values of t32 when the external temperature is 25 °C, 35 °C, and 45 °C are 1.45 ms, 1.54 ms, and 1.60 ms, respectively. Compared to the value at 25 °C, the values at 35 °C and 45 °C increase by 5.7% and 10.2%, respectively. It is predictable that the lower the temperature of the splitter plate, the more advantageous it is for extinguishing the arc current.
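A minimal arithmetic note before the ratios are discussed in the next paragraph: taking only the relative changes quoted above, the quoted "ratios" can be read (my interpretation) as the relative change at 45 °C divided by the relative change at 35 °C.

changes = {                     # (% change at 35 degC, % change at 45 degC) vs 25 degC
    "t10": (-22.5, -35.6),
    "t21": (+15.9, +24.3),
    "t32": (+5.7, +10.2),
}
for name, (d35, d45) in changes.items():
    print(f"{name}: 45 degC change / 35 degC change = {d45 / d35:.2f}")
# -> 1.58, 1.53 and 1.79, i.e. exactly the values quoted below for t10, t21 and t32.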
The t10, t21, and t32 values at 35 °C and 45 °C were compared with the corresponding values at 25 °C. The ratio of the relative change at 45 °C to the relative change at 35 °C is 1.58, 1.53, and 1.79 for t10, t21, and t32, respectively. The characteristics thus change at a similar rate in all periods, with t32 slightly more affected; that is, the temperature affects the metal more than the air. Table 3 shows the DRV at 25 °C, 35 °C, and 45 °C; it lists the C0 values (0.47, 1, and 10 µF) and the corresponding average voltage and average time. Using these voltages and times, Figure 13 illustrates the DRV V-t curve according to the temperature. This paper observes the DRV characteristics by dividing the time into the initial time (ti), the medium time (tm), and the late time (tl). In general, the initial time is affected by the cooling performance of the splitter plate, and the later time is affected by the emission of the hot gas generated by arc extinguishment [20]. The results are as follows. The average times corresponding to the C0 values (0.47 µF, 1 µF, 10 µF) are 0.95 µs, 1.86 µs, and 4.45 µs, respectively. At ti, the DRVs at 25 °C, 35 °C, and 45 °C are 279 V, 242 V, and 220 V, respectively; at tm they are 381 V, 345 V, and 320 V, and at tl they are 482 V, 408 V, and 369 V. The rates of change of the DRV at 35 °C and 45 °C relative to the 25 °C DRV are as follows: at ti, −13.3% and −21.1%; at tm, −9.4% and −16.0%; at tl, −15.4% and −23.4%, respectively. Comparing ti and tl, the rates of change are similar. This result shows that the arc-cooling performance deteriorates in the initial time because of the increased temperature of the splitter plate, while in the later time the hot gas is not sufficiently emitted because of the high external temperature. The ratio of the relative change at 45 °C to that at 35 °C is 1.59, 1.69, and 1.53 at ti, tm, and tl, respectively, so the DRV also changes at a similar rate in all periods.
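The same arithmetic applied to the DRV values quoted above (all voltages taken from the text) reproduces both the per-temperature change rates and the 1.59/1.69/1.53 ratios.

drv = {                         # V, at 25 / 35 / 45 degC
    "ti": (279, 242, 220),
    "tm": (381, 345, 320),
    "tl": (482, 408, 369),
}
for name, (v25, v35, v45) in drv.items():
    d35 = (v35 - v25) / v25 * 100.0
    d45 = (v45 - v25) / v25 * 100.0
    print(f"{name}: {d35:+.1f}% at 35 degC, {d45:+.1f}% at 45 degC, ratio {d45 / d35:.2f}")
# -> about -13.3%/-21.1% (ti), -9.4%/-16.0% (tm), -15.4%/-23.4% (tl),
#    with ratios 1.59, 1.69 and 1.53.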
The temperature-dependent change is significant compared to the previous study. The results according to the thermal conductivity of the splitter plate obtained in the past are as follows [14]. The thermal conductivities of copper, aluminum, and steel are 320, 196, and 62, respectively, and the corresponding DRV results in the initial period are 196 V, 155 V, and 97 V. The rise rates of aluminum and copper compared to steel are 59.8% and 102.1%, respectively, so the trend of the results follows the thermal conductivity. That study was characterized by a large nonlinearity, which is why mainly experimental studies were conducted; from this point of view, the linearity of the present results with temperature is of great significance.

Disturbing Magnetic Field Test Results

Figure 14 shows the test setup used to confirm the characteristics of the circuit breaker under a disturbing magnetic field. In order to apply a magnetic field to the arc extinguishment chamber of the circuit breaker, a core wound with an enamel wire is installed on the circuit breaker. A silicon steel plate-laminated core is used, and the current is applied to the 18 AWG enamel wire with 87 turns. The force acting between the disturbing magnetic field and the arc current is predicted using a magnetic equivalent circuit. The following equation gives the magnitude of the magnetic flux generated by the current applied to the coil, where Φ is the magnetic flux produced by the external magnetic field, N is the number of coil turns, I is the coil current, and Rtotal is the total magnetic reluctance of the magnetic equivalent circuit.
According to this equation, if the core is not saturated, the magnetic flux produced by the external field is proportional to the current applied to the coil. In addition, the force applied to the arc current is given by the following equation, where fv is the volume force density, expressed in Equation (5) in terms of the cross product of the current density (J) and the magnetic flux density (B) and the product of the magnetic field (H) and the divergence of B. In this paper, when 5 A flows through the core, the maximum values of the magnetic field and the force are approximately 4 × 10^4 A/m and 52 mN, respectively. These values act in a direction that interferes with the motion of the arc current.
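A back-of-envelope check of the field level quoted above can be made from the coil data alone. Only the turn count and the 5 A current come from the text; the effective magnetic path length, and the assumption that the magnetomotive force drops uniformly along it, are illustrative.

# Magnetic-circuit estimate for the disturbing-field coil (87 turns, 5 A from the text).
N_turns = 87
I_coil  = 5.0                  # A
mmf     = N_turns * I_coil     # magnetomotive force, ampere-turns

l_path  = 1.0e-2               # m, assumed effective magnetic path length (~1 cm)
H       = mmf / l_path         # A/m, field if the mmf drops uniformly along that path
print(f"mmf = {mmf:.0f} A-turns, H ~ {H:.1e} A/m")
# ~4.4e4 A/m, the same order as the ~4e4 A/m maximum quoted above; the 52 mN force
# additionally depends on the arc geometry and flux density and is not estimated here.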
Figure 15 shows the short-circuit characteristics in the presence of the disturbing magnetic field, and these results are summarized in Table 4. The results are as follows. First, for t10 there is little difference, only +0.01 ms. This means that the influence of the disturbing magnetic field on the Holm force and the Lorentz force governing t10 is very small. In the case of the Holm force, the hardness of the material and the area of the electrode are not factors affected by magnetic fields [21]. The Lorentz force may be affected by magnetic fields, but for the Lorentz force to be large the spacing between the reciprocating current paths should be small, whereas for the Lorentz force to be strongly influenced by a magnetic field the spacing should be large; the two requirements are inversely related. In practice, circuit breakers have small spacings, so the effect on the Lorentz force is insignificant. For t21, the time increases by +0.35 ms compared to the case with no magnetic field. The disturbing magnetic field generates a force in a direction that hinders the arc current moving toward the splitter plate. The increase in time in this period may also affect the subsequent period t32 and can lead to interruption failure. In the case of t32, the change is not as meaningful as for t10; the disturbing magnetic field does not affect arc extinguishment in this period. The increase/decrease ratios of the periods t10, t21, and t32 under the disturbing magnetic field, compared with the case of no magnetic field, are −0.27%, 8.56%, and 0.15%, respectively. Only t21 shows a significant change relative to the no-magnetic-field case, and this also increases the overall time by 4.08%. Figure 16 shows the dielectric recovery voltage (DRV) under the disturbing magnetic field. Table 5 shows the values of the dielectric recovery voltage under the disturbing magnetic field; each C0 value (0.47, 1, and 10 µF) is listed with the average voltage and average time. Using these voltages and times, Figure 17 shows the DRV V-t curve under the disturbing magnetic field. The results are as follows. The average times corresponding to the C0 values (0.47 µF, 1 µF, 10 µF) are 1.08 µs, 1.75 µs, and 4.73 µs, respectively. At ti, the DRVs with and without the disturbing magnetic field are 320 V and 305 V, respectively; at tm they are 382 V and 361 V, and at tl they are 491 V and 429 V. Based on the case with no magnetic field, the DRV change rates under the disturbing magnetic field are 4.92%, 5.82%, and 14.45% for ti, tm, and tl, respectively. This rate of change is approximately three times greater in the late period than in the early and medium periods. The initial DRV characteristics are affected by the cooling characteristics of the splitter plate, the temperature change of the arc current, and energy loss in the air, whereas the DRV characteristics of the later stage are affected by the hot gas emission after arc extinguishment. This ionized hot gas is strongly affected by the magnetic field and is prevented from moving to the exhaust; for this reason, there is a large difference in the late period (tl).

Conclusions

In this paper, an analysis of the short-circuit characteristics and dielectric recovery strength of a molded case circuit breaker under external environmental conditions is performed. The external temperature and a disturbing magnetic field are set as the external environmental conditions. The temperature results show that both the short-circuit characteristics and the dielectric recovery strength deteriorate as the temperature increases. The ratio of the relative change at 45 °C to that at 35 °C (both with respect to 25 °C) has similar values for both the short-circuit characteristics and the dielectric recovery strength: 1.58, 1.53, and 1.79 for t10, t21, and t32 of the short-circuit characteristics, and 1.59, 1.69, and 1.53 for ti, tm, and tl of the dielectric recovery strength. In other words, when the external temperature increases from 25 °C to 45 °C, both the short-circuit characteristics and the dielectric recovery strength deteriorate at a similar rate.
This shows a tendency similar to that of the previous experiment in which the thermal conductivity of the splitter plate was varied [14]. The second external environmental condition is the disturbing magnetic fields. For this experiment, an external magnetic field is introduced in the direction that hinders the interruption of the circuit breaker. As a result, the short-circuit characteristics show a meaningful change only at t21, and there is no significant difference in the other periods. However, if the distance between the current paths that generate the Lorentz force is larger in other circuit breakers, a difference is also expected to appear at t10. The dielectric recovery strength shows a significantly larger difference in the late period than in the early and medium periods; the ability of the magnetic fields to interfere with the release of hot gases is evident. Such external magnetic fields can act in various ways depending on the circuit breaker installation environment, and only the influence in one direction is dealt with in this paper. The external temperature and disturbing magnetic field conditions covered in this paper are not those tested under product standards. In addition, greater degradation is expected as the environments in which circuit breakers are installed become more compact and the peripheral devices more complex. Based on this paper, it is expected that more performance evaluations of circuit breakers in various environments will be conducted.
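As a small arithmetic cross-check, the percentage changes quoted above can be reproduced directly from the DRV values given in the text (305 V vs. 320 V, 361 V vs. 382 V, 429 V vs. 491 V); the helper below assumes nothing beyond those quoted numbers.

```python
# Reproducing the DRV change rates quoted in the text from the voltages given there
# (values without / with disturbing magnetic fields).
def pct_change(without_field: float, with_field: float) -> float:
    """Relative change of the disturbed case with respect to the undisturbed case, in %."""
    return 100.0 * (with_field - without_field) / without_field

drv = {"t_i": (305, 320), "t_m": (361, 382), "t_l": (429, 491)}
for period, (v0, v1) in drv.items():
    print(f"{period}: {pct_change(v0, v1):+.2f} %")   # approx +4.92, +5.82, +14.45 %
```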
2D gravitational Mabuchi action on Riemann surfaces with boundaries We study the gravitational action induced by coupling two-dimensional non-conformal, massive matter to gravity on a Riemann surface with boundaries. A small-mass expansion gives back the Liouville action in the massless limit, while the first-order mass correction allows us to identify what should be the appropriate generalization of the Mabuchi action on a Riemann surface with boundaries. We provide a detailed study for the example of the cylinder. Contrary to the case of manifolds without boundary, we find that the gravitational Lagrangian explicitly depends on the space-point, via the geodesic distances to the boundaries, as well as on the modular parameter of the cylinder, through an elliptic θ-function. Introduction and generalities In two dimensions the standard Einstein-Hilbert action of gravity is a topological invariant and does not provide any dynamics for the metric. However, when matter is coupled to two-dimensional gravity with metric g one may compute the matter partition function Z mat [g] first and then define an "effective" gravitational action as JHEP11(2017)154 where g 0 is some reference metric. This gravitational action then is to be used in the functional integral over the metrics, after appropriately fixing the diffeomorphism invariance. Obviously any gravitational action defined this way will satisfy a cocycle identity S grav [g 1 , g 2 ] + S grav [g 2 , g 3 ] = S grav [g 1 , g 3 ] . (1.2) Well-known examples of such gravitational actions are the Liouville [1], Mabuchi and Aubin-Yau actions [2][3][4][5], as well as the cosmological constant action S c [g 0 , g] = µ 0 d 2 x( √ g − √ g 0 ) = µ 0 (A − A 0 ). While the Liouville action is formulated entirely in terms of g 0 and the conformal factor σ (defined as g = e 2σ g 0 ), the Mabuchi and Aubin-Yau actions crucially involve also directly the Kähler potential φ. In the mathematical literature they appear in relation with the characterization of constant scalar curvature metrics. Their roles as two-dimensional gravitational actions in the sense of (1.1) have been put forward in [6]. In particuler, ref. [6] has studied the metric dependence of the partition function of non-conformal massive matter on compact Riemann surfaces and shown that a gravitational action defined by (1.1) contains these Mabuchi and Aubin-Yau actions as first-order corrections (first order in m 2 A where m is the mass and A the area of the Riemann surface) to the Liouville action. The study of [6] was further confirmed and generalized in [7] where an exact formula for the gravitational action for any value of the mass was obtained, its expansion in m 2 A giving back the results of [6]. The Mabuchi action has drawn much attention in recent years. It has been suggested in [6] that it may serve as a candidate action for novel two-dimensional quantum gravity models, motivated partly by applications in Kähler geometry [8], such as its relation to the stability in Kähler geometry [9]. Some further physical properties of the Mabuchi action such as the critical exponent and the spectrum have been studied in [10][11][12] and [13]. Quite amazingly, the Mabuchi action also emerges as a subleading term in the gravitational effective action in Quantum Hall wave functions, such as the Laughlin state [14]- [19]. 
There the Mabuchi action corresponds to the Wen-Zee term in the Chern-Simons description of the Quantum Hall effect [20] and the coefficient in front of it controls the celebrated Hall viscosity. All these results and developments focussed on compact Riemann surfaces without boundaries. Obviously, it is most important to generalize the Mabuchi action, and more generally the determination of the subleading terms in the gravitational action, to the case where the Riemann surface has boundaries. In particular, with view on the relation to the Quantum Hall effect, obtaining the generalization of the effective action including the boundary effects would be most interesting. Here, ou goal will be somewhat more modest. References [6] and [7] considered a massive scalar field with action living on a compact Riemann surface M of genus h. As shown in these references, this leads to a gravitational action that, when expanded in m 2 A gives the Liouville action to JHEP11(2017)154 lowest order, S L [g 0 , g] ≡ S L [g 0 , σ] = M d 2 x √ g 0 σ∆ 0 σ + R 0 σ , g = e 2σ g 0 . (1.4) and, to first order, a combination of the Mabuchi and Aubin-Yau actions: where the Kähler potential φ is related to the conformal factor σ and the areas A and A 0 of M as measured by g and g 0 through the relation In this note, we want to study how these results get modified when the two-dimensional Riemann surface M has boundaries ∂M. A priori, two things could happen: the corresponding gravitational actions could get additional boundary contributions, and the bulk gravitational Lagrangian at a point x could explicitly depend on the geodesic distances between x and the boundaries. We will indeed observe both of these. Obviously, in the presence of boundaries, we have to impose some boundary conditions. Our choice will be guided by two requirements: we want ∆ g + m 2 , i.e. ∆ g = e −2σ ∆ g 0 ≡ e −2σ ∆ 0 to be hermitian and we want to preserve the fact that The hermiticity condition yields the vanishing of the boundary term ∂M dl n a (∂ a ϕ 1 ϕ 2 − ϕ 1 ∂ a ϕ 2 ) , (1.10) where n a is the normal vector of the boundary and dl the invariant line element on the boundary. (See the appendix for the definition of the normal vector). As usual, this leads to two possible choices of boundary conditions: either ϕ = 0 (Dirichlet) or n a ∂ a ϕ = 0 (Neumann) on the boundary. Actually, the modified Neumann (Robin) conditions n a ∂ a ϕ = c ϕ with real c are also possible. Our second condition reads dl n a ∂ a f , (1.11) selecting the Neumann boundary conditions. In particular, if the massive matter field(s) X obey these boundary conditions, one may freely integrate by parts in the matter action and JHEP11(2017)154 the equality of both expressions in (1.3) still holds for a manifold M with boundaries. From now on, we will always assume that the matter field(s) obey Neumann boundary conditions. What about the Kähler field φ and the conformal factor σ ? It follow from (1.7) that φ also must satisfy Neumann conditions (in the metric g 0 ). Indeed, the area should be given by A = √ g 0 e 2σ which, by (1.7) implies that 0 = √ g 0 ∆ 0 φ which is possible only if showing that it is not compatible to impose Neumann boundary conditions also on σ. Our main result is the formula (5.18) for the first-order (in m 2 A) correction to the gravitational action on a Riemann surface with boundaries: On the first line, S L is the Liouville action including the boundary contributions. The second line explicitly shows the terms that generalize the Mabuchi action. 
Here G R,bulk [g 0 ] is a certain renormalized Green's function "at coinciding points" that depends on the point on the Riemann surface, and in particular on the geodesic distances to the various boundaries. One could integrate by parts the Laplacian in ∆ 0 φ G instead, but this would also generate additional boundary terms since G (0) R,bulk does not obey the Neumann boundary conditions. We offer various re-writings of this expression involving different variations of renormalized Green's functions at coinciding points, see e.g. (5.19). To get more insight into the meaning of this rather abstract formula, we worked out the simplest case of a Riemann surface with boundary, which is a cylinder. In this case the Green's function is well-known and we explicitly determined the various versions of renormalized Green's functions at coinciding points. As expected, these quantities depend on the distance to the two boundaries of the cylinder. (In the case of the compact torus the corresponding functions are just constants.) We explicitly determined them in terms of elliptic theta functions, cf (6.29): Again, we offer some equivalent rewriting of this action, cf (6.30). The plan of this paper is the following. In the next section we will set up the basic frame to compute the gravitational action from the appropriate spectral ζ-function. The strategy is to determine the variation of the gravitational action under an infinitesimal change of the conformal factor of the metric and, in the end, to integrate this relation to obtain the gravitational action itself. In section 3, we introduce some standard tools -Green's functions and heat kernels -and discuss their specific features on manifolds with boundaries. In section 4 we show how these quantities are related to local ζ-functions with emphasis and the singularities resulting from the boundaries. In section 5 we put JHEP11(2017)154 everything together to determine the gravitational action. The lowest order term in a small mass expansion, of course, just gives back the Liouville action, including a boundary term, while the first-order term in m 2 A gives us what we call the Mabuchi action on the manifold with boundaries, as written above in (1.12). Let us emphasize again that it does not involve a boundary term, but the bulk Lagrangian explicitly depends on the geodesic distances to the various boundary components. In section 6 we work out the explicit example of a cylinder and one sees this dependence through an elliptic θ-function that depends on the distances to the two boundaries and on the modular parameter of the cylinder, cf (1.13). Finally, we study what happens for an infinitely long cylinder -viewed as a model of euclidean time and space being a circle. In this case we find that the Mabuchi Lagrangian reduces to the standard Mabuci Lagrangian (1.5) with R 0 = 0 and h = 0. Basics We call ϕ n and λ n the eigenfunctions and eigenvalues of the hermitian (thanks to the boundary condition 1 ) differential operator appearing in S mat : Since ∆ g + m 2 is real, one may choose the eigenfunctions ϕ n to be real, which we will always assume (unless an obvious complex choice has been made, like the standard spherical harmonics on the round sphere). We take the indices n to be n ≥ 0 with n = 0 referring to the lowest eigenvalue. In particular, the Laplace operator always has a constant zero-mode, ϕ 0 = 1 √ A and thus λ 0 = m 2 , since this constant obviously obeys the Neumann boundary condition. As usual, these ϕ n form a complete set of eigenfunctions. 
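The displayed equations of this "Basics" passage have been garbled by the text extraction. A reconstruction consistent with the surrounding prose (Neumann conditions, orthonormality, the constant zero mode, and completeness) reads as follows; this is a plausible restatement rather than a verbatim copy of the paper's equations.

```latex
% Spectral problem for the massive Laplacian with Neumann conditions (reconstruction from the prose):
\[
(\Delta_g + m^2)\,\varphi_n = \lambda_n\,\varphi_n ,\qquad
\int_{\mathcal M} \mathrm{d}^2x\,\sqrt{g}\;\varphi_n\,\varphi_k=\delta_{nk},\qquad
n^a\partial_a\varphi_n\big|_{\partial\mathcal M}=0 ,
\]
\[
\varphi_0=\frac{1}{\sqrt{A}},\qquad \lambda_0=m^2 ,\qquad
\sum_{n\ge 0}\varphi_n(x)\,\varphi_n(y)=\frac{\delta^{(2)}(x-y)}{\sqrt{g(y)}}\; .
\]
```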
The matter partition function is defined with respect to the decomposition of the matter field on these eigenfunctions ϕ n : X = n≥0 c n ϕ n as In the massless case, this has to be slightly modified, see [6,7]. Of course, the determinant is ill-defined and needs to be regularized. We will use the very convenient regularization-renormalization in terms of the spectral ζ-functions: By Weil's law (see e.g. [21]), the asymptotic behaviour of the eigenvalues for large n is λ n ∼ n A and, hence the spectral ζ-functions are defined by converging sums for Re s > 1, and by analytic continuations for all other values. In particular, they are well-defined JHEP11(2017)154 meromorphic functions for all complex values of s with a single pole at s = 1 with residue 1 4π (see e.g. [21]). A straightforward formal manipulation shows that ζ ′ (0) ≡ d ds ζ(s)| s=0 provides a formal definition of − n≥0 log λ n , i.e. of − log det(∆ g + m 2 ): There is a slight subtlety one should take into account, see e.g. [21]. While the field X is dimensionless, the ϕ n scale as A −1/2 ∼ µ where µ is some arbitrary mass scale (even if m = 0), and the c n as µ −1 . It follows that one should write D g X = n µdcn 2π . This results in Z mat = n λn µ 2 The regularization-renormalization of determinants in terms of the ζ-function may appear as rather ad hoc, but it can be rigorously justified by introducing the spectral regularization [21]. The regularized logarithm of the determinant then equals ζ ′ (0) + ζ(0) log µ 2 plus a diverging piece ∼ AΛ 2 (log Λ 2 µ 2 + const), where Λ is some cutoff. This diverging piece just contributes to the cosmological constant action, and this is why the latter must be present as a counterterm, to cancel this divergence. Thus, we finally arrive at It is important to notice that this formula expressing the gravitational action in terms of the ζ-function is true whether the Riemann surface has a boundary or not. Of course, the ζ-function for a manifold with boundary will have some properties that differ from the case without boundary. Formally, the ζ-functions are always defined by (2.3), but the properties of the manifold are encoded in the eigenvalues λ n that appear in the sum. The strategy of [6] and [7], that we will also follow here, was to determine the infinitesimal change of the ζ-functions from the infinitesimal change of the eigenvalues λ n under an infinitesimal change of the metric, and then to integrate this relation to get S grav . The change of the eigenvalues is obtained from (almost) standard quantum mechanical perturbation theory, as we discuss next. Perturbation theory We want to study how the eigenvalues λ n and eigenfunctions ϕ n change under an infinitesimal change of the metric. Since g = e 2σ g 0 , the Laplace operator ∆ g and hence also ∆ g +m 2 only depend on the conformal factor σ and on g 0 : ∆ g = e −2σ ∆ 0 and thus under a variation δσ of σ one has where, of course, ϕ k |δσ|ϕ n = d 2 x √ g ϕ k δσϕ n . One can then apply standard quantum mechanical perturbation theory. The only subtlety comes from the normalisation condition JHEP11(2017)154 in (2.1) which also gets modified when varying σ [6,21]. One finds δλ n = −2(λ n − m 2 ) ϕ n |δσ|ϕ n , (2.8) Let us insists that this is first-order perturbation theory in δσ, but it is exact in m 2 . Note the trivial fact that, since λ 0 = m 2 , one consistently has δλ 0 = 0 . 
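Similarly, the ζ-function relations described a few sentences above (the definition of the spectral ζ-function, the formal identity for the determinant, and the µ-dependent measure) can be written out as below. The overall sign convention for S_grav in the last line is a choice made here for definiteness, since (1.1) and (2.6) are garbled in this extraction, and should be checked against the original paper.

```latex
% zeta-regularized determinant and gravitational action (reconstruction; sign convention assumed):
\[
\zeta_g(s)=\sum_{n\ge 0}\lambda_n^{-s},\qquad
\zeta_g'(0)\;=\;-\sum_{n\ge 0}\log\lambda_n\;\equiv\;-\log\det\!\big(\Delta_g+m^2\big)\quad\text{(formally)},
\]
\[
Z_{\mathrm{mat}}[g]=\prod_{n\ge 0}\Big(\frac{\lambda_n}{\mu^2}\Big)^{-1/2}
\;\Longrightarrow\;
\log Z_{\mathrm{mat}}[g]=\tfrac12\Big[\zeta_g'(0)+\zeta_g(0)\,\log\mu^2\Big] ,
\]
\[
S_{\mathrm{grav}}[g_0,g]=-\log\frac{Z_{\mathrm{mat}}[g]}{Z_{\mathrm{mat}}[g_0]}
=-\tfrac12\Big[\zeta_g'(0)-\zeta_{g_0}'(0)+\big(\zeta_g(0)-\zeta_{g_0}(0)\big)\log\mu^2\Big] .
\]
```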
(2.10) Variation of the determinant As mentioned above, in order to compute S grav [g 0 , g] as given by (2.6), we will compute δζ ′ (0) ≡ δζ ′ g (0) and δζ(0) ≡ δζ g (0) and express them as "exact differentials" so that one can integrate them and obtain the finite differences ζ ′ g 2 (0) − ζ ′ g 1 (0) and ζ g 2 (0) − ζ g 1 (0). From (2.8) one immediately gets, to first order in δσ, As noted before, δλ 0 = 0 and, hence, there is no zero-mode contribution to the second term and one could just equally well rewrite the following results in terms of the ζ-functions defined by excluding the zero-mode [7]. Here, however, this is not of particular interest to us, and thus is a (bi)local ζ-function. As we will see, ζ(s, x, x) has a pole at s = 1 for every x. For x in the bulk, this pole is the only singularity. However, as x goes to the boundary there could be, a priori, additional singularities for other values of s, in particular for s = 0. Keeping this in mind we find 14) The rest of this paper is devoted to computing d 2 x √ g δσ(x)ζ(s, x, x) on a Riemann surface with boundaries and extracting its behaviour as s → 0 and s → 1. More generally, we will compute √ gf (x)ζ(s, x, x) where f is some sort of "test function". Once we have JHEP11(2017)154 determined these quantities, we will get the variation of the gravitational action under an infinitesimal change of metric as Finally note that the variations of the conformal factor δσ, of the Kähler potential δφ and the area δA are related as 3 Some technical tools: Green's functions and the heat kernel In this section we discuss some standard technical tools. We assume that the Riemann surface M has a boundary ∂M and that we have imposed Neumann boundary conditions. Throughout this section we assume that some fixed metric g has been chosen on M. Complete set of eigenfunctions and Green's functions Recall from (2.1) that the ϕ n and λ n are the orthonormal eigenfunctions and eigenvalues of (∆ g + m 2 ), subject to the Neumann boundary condition n a ∂ a ϕ m = 0 on ∂M. They form a complete set which means that they obey the completeness relation Actually, this continues to hold also if x or y are on the boundary 2 As always, the Green's function of an operator like ∆ g + m 2 can be given in terms of the eigenfuctions and eigenvalues as Again, this continues to hold also if x or y are on the boundary. In the massless case, λ 0 = 0 and this zero-mode must be excluded from the sum. We put a tilde on all quantities from which the zero-mode has been excluded: Furthermore, we will add a superscript (0) on all quantities that refer to the massless case. Obviously, G(x, y), G(x, y) and G (0) (x, y) satisfy the Neumann boundary conditions in each of their arguments. The heat kernel The heat kernel and integrated heat kernel for the operator ∆ g + m 2 are similarly defined in terms of the eigenvalues and eigenfunctions (2.1) as It is obvious from this definition that K(t, x, y) satisfies the Neumann boundary conditions in both arguments x and y and is the solution of Note that it immediately follows from either (3.4) or (3.5) that the massless and massive heat kernels are simply related by K(t, x, y) = e −m 2 t K (0) (t, x, y). As is also clear from (3.4), for t > 0, K(t, x, y) is given by a converging sum and is finite, even as x → y. For t → 0 one recovers various divergences, and, in particular exhibits the short distance singularity of the Green's function which is well-known to be logarithmic. 
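Since the explicit formulas in this part are garbled, the following short numerical check may be useful. It verifies, for the circle that enters the interval and cylinder examples worked out next, that the spectral definition of the heat kernel agrees with its Poisson-resummed form as a sum over winding geodesics. The circumference 2π and the test values are choices made here for illustration.

```python
# Numerical illustration of the identity behind section 3.3: on a circle of
# circumference 2*pi, the spectral sum for the (massless) heat kernel equals the
# Poisson-resummed sum over winding geodesics,
#   K(t,x,y) = sum_m exp(-(x-y+2*pi*m)^2 / (4t)) / sqrt(4*pi*t).
import numpy as np

def K_spectral(t, d, nmax=200):
    """Eigenfunction sum: zero mode plus cosine modes, d = x - y."""
    n = np.arange(1, nmax + 1)
    return 1.0 / (2 * np.pi) + (1.0 / np.pi) * np.sum(np.exp(-n**2 * t) * np.cos(n * d))

def K_images(t, d, mmax=20):
    """Sum over geodesics winding m times around the circle."""
    m = np.arange(-mmax, mmax + 1)
    return np.sum(np.exp(-(d + 2 * np.pi * m) ** 2 / (4 * t))) / np.sqrt(4 * np.pi * t)

d = 1.3                                   # arbitrary separation x - y
for t in (0.05, 0.5, 2.0):
    print(t, K_spectral(t, d), K_images(t, d))   # the two columns agree
```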
The behaviour of K for small t is related to the asymptotics of the eigenvalues λ n and eigenfunctions ϕ n for large n, which in turn is related to the short-distance properties of the Riemann surface. It is thus not surprising that the small-t asymptotics is given in terms of local expressions of the curvature and its derivatives. Indeed, on a compact manifold without boundaries one has the well-known small t-expansion: 3 where ℓ 2 (x, y) ≡ ℓ 2 g (x, y) is the geodesic distance squared between x and y. For small t, the exponential forces ℓ 2 to be small (of order t) and one can use normal coordinates around y. This allows one to obtain quite easily explicit expressions for the a r (x, y) in terms of the curvature tensor and its derivatives. They can be found e.g. in [21] and, in particular, 6 . At present, on a manifold M with boundaries, this asymptotic expansion must still be valid as long as x and y are "not too close" to the boundary. However, as the points get close to the boundary, we expect extra contributions to become important. Examples Before going on, it is useful to discuss some very simple examples of manifolds with boundaries: the one-dimensional interval, the two-dimensional cylinder which is the product of the interval and a circle, and the two-dimensional half-sphere. Example 1: the one-dimensional interval The simplest example is a one-dimensional manifold that is just the interval M = [0, π] with trivial metric and ∆ = −∂ 2 x . We also take m = 0. The normalized eigenfunctions that satisfy the Neumann boundary conditions are ϕ 0 = 1 √ π and ϕ n (x) = 2 π cos nx, n = 1, 2, . . ., and the eigenvalues are λ n = n 2 . Then we formally have for any function f (λ): For f = 1 this is just the completeness relation (3.1) with the right-hand side equal to δ(x − y) + δ(x + y) where the δ are 2π-periodic Dirac distributions, i.e. defined on the circle S 1 = [0, 2π]. With x, y ∈ M \ ∂M =]0, π[, x + y never is 0 mod 2π and the δ(x + y) never contributes. Actually, x + y = x − y C , where y C = −y is the image point of y due to the boundary at y = 0. One would also expect additional image points due to the second boundary, but because of the 2π periodicity, these additional image points are equivalent to y and y C . If y = y B is on the boundary, say y = 0, then the image point y C coincides with y (possibly mod 2π) and we get 2δ(x − y B ) = 2δ(x). But π 0 dxδ(x) = 1 2 and in any case the integral of the right-hand side of (3.1) correctly gives 1. If we let f (n 2 ) = 1 n 2 , the relation (3.8) expresses the Green's function G I on the interval with Neumann boundary conditions in terms of a sum of Green's functions G S 1 on the circle: 4 G a construction well-known as the method of images. Similarly, if we let f (n 2 ) = e −tn 2 we get a relation that expresses the heat kernel on the interval K I (t, x, y) as the sum of two heat kernels K S 1 on the circle, one at x, y and the other at x, −y: Actually, the sums can be expressed in terms of the theta function The small-t asymptotics is obtained by applying Poisson resummation, or equivalently the modular transformation of θ 3 under τ → − 1 τ , 14) and which expresses the heat kernel on the circle as a sum over all geodesics going from x to y, winding an arbitrary number n times around the circle and having length squared (x−y+ 2πn) 2 . This is of course the expected result for the diffusion (Brownian motion) on a circle. For small t, the leading term in K S 1 (t, x, y) always is the n = 0 term. 
For K S 1 (t, x, −y), however, the leading term is n = 0 if x + y < π, while it is n = −1 if π < x + y. Thus appears, since all other a r involve the curvature and vanish in our present example. In any case, we see that for small t, the first term is exponentially small unless x is close to y within a distance of order √ t. Similarly, the second term is exponentially small unless x + y (or 2π − x − y) is of order √ t which is possible only if x and y both are close to the boundary at 0 (or at π), and thus also close to each other, within a distance of order √ t. Thus, for x or y in the bulk, the second term does not contribute to the small-t expansion. It is only if both points go to one and the same boundary that the second term becomes important. We see that we can just as well write this small-t asymptotic expansion as where the sum is over the different boundary components and y (2) C = 2π − y. While the use of image points is familiar from solving the Laplace equation for simple geometries in the presence of boundaries, we have seen that we should actually think of (x − y (i) C ) 2 as the length squared of the geodesic from x to y that is reflected once at the boundary ∂M i . Geodesics with multiple reflections necessarily are much longer and give exponentially subleading contributions. Of course, if one uses the exact expression (3.14) for K S 1 (t, x, y) + K S 1 (t, x, −y) the heat kernel of the interval is expressed as a sum over all geodesic paths from x to y being reflected an arbitrary number of times at the two boundaries. Example 2: the cylinder The two-dimensional cylinder is just an interval times a circle, I × S 1 . Thus, if we choose the interval of length a and the circle of circumference 2b, the normalized eigenfunctions of the Laplace operator satisfying the Neumann boundary conditions are JHEP11(2017)154 The heat kernel for the Laplace operator then simply is the product of the heat kernel for the circle and the heat kernel of the interval as just given in the previous example, with the obvious replacements π → a, b: with the corresponding torus obviously having periods 2a and 2b. Poisson resummation or equivalently the modular transformation formula for θ 3 yields Again, this expresses the heat kernel as a sum over all geodesics going from x to y winding m times around the circle direction of the cylinder and being reflected 2n times (for the first term) or 2n + 1 times (for the second term) at the boundaries of the cylinder. Example 3: the upper half sphere Our last example involves a curved two-dimensional manifold with a boundary: let M be the upper half of the standard round sphere of unit radius, i.e. M = S 2 + , parametrized by θ ∈ [0, π 2 ] and φ ∈ [0, 2π]. Then the boundary ∂M is just the circle at θ = π 2 and the normal derivative is n a ∂ a = ∂ θ . The eigenfunctions of the Laplace operator ∆ on the sphere S 2 are the spherical harmonics Y m l and, obviously, they still satisfy ∆Y m l = l(l + 1)Y m l on S 2 + . However, not all of them satisfy the Neumann boundary condition. As is well known, the parity of the Y m l is (−) l , so that It follows that Y m l is even (odd) under reflection by the equator at θ = π 2 if l − m is even (odd), and hence satisfies Neumann (Dirichlet) conditions at θ = π 2 . Thus, for each l, there are l +1 allowed values of m. It follows for even l −m that S 2 We see that the orthonormal eigenfunctions ϕ n of the Laplace operator on M obeying the boundary conditions simply are the √ 2Y m l with l − m even. 
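The parity statement just used for the half-sphere can be checked numerically: under reflection about the equator only the associated Legendre factor of Y_l^m is affected, and it picks up the sign (−1)^(l+m) = (−1)^(l−m), so l − m even gives Neumann and l − m odd gives Dirichlet behaviour at the equator. The sketch below uses scipy's lpmv for this check; the test point is arbitrary.

```python
# Parity of the associated Legendre functions entering Y_l^m: under theta -> pi - theta,
# i.e. cos(theta) -> -cos(theta), P_l^m picks up (-1)^(l+m) = (-1)^(l-m).
import numpy as np
from scipy.special import lpmv

x = 0.37                                   # cos(theta) at an arbitrary test point
for l in range(5):
    for m in range(l + 1):
        ratio = lpmv(m, l, -x) / lpmv(m, l, x)
        assert np.isclose(ratio, (-1) ** (l + m))
print("parity (-1)^(l+m) confirmed for l <= 4")
```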
It also follows from (3.20 JHEP11(2017)154 where it is of course understood that l ≥ 0 and |m| ≤ l. If we simply take f = 1, this is just the completeness relation, with it's right-hand side being Here, the first term is just 1 As for the interval and the cylinder, this image point is always outside of M, except if y is on the boundary. In the latter case both δ's contribute equally and one has 2δ(cos θ)δ(φ − φ ′ ) which correctly gives 1 when integrated over S 2 + . If necessary, this shows again that the completeness relation (3.1) continues to hold for x or y on the boundary. If we let f l(l + 1) = 1 l(l+1)+M 2 in (3.21) this relation expresses the Green's functions of ∆ + M 2 on the upper half sphere S 2 + in terms of a sum of two Green's function on the sphere, one at x and y and the other at x and y C : An analogous relation holds for the G when the zero-mode is excluded, as well as for G (0) when M = 0 and the zero-mode is excluded. It is interesting to study the short-distance singularity of this Green's function. In two dimensions, the short-distance singularity of the Green's function is logarithmic, and one has e.g. G (0) Then, on the half sphere, the singularity as θ → θ ′ for any (θ ′ , φ ′ ) / ∈ ∂M is given by this same logarithmic singularity, since G (0) is twice as large, i.e. − 1 2π log(θ − θ ′ ) 2 , in agreement with the factor 2 that accompanied the δ(x − y B ). Finally, taking f (λ) = e −tλ , we get the corresponding relation between the heat kernels: The heat kernel continued As it appeared from the previous examples, in simple geometries, the Green's functions and the heat kernel can be obtained by a method of images from the corresponding Green's functions or heat kernels on a "bigger" manifold without boundary, by a method of images. In all three cases we have seen that where K is the heat kernel on the "bigger" compact manifold and y C the "image point" of y. However, we have also seen in the example of the interval that the leading term in the asymptotic small-t expansion to be used for K(t, x, y C ) differs depending on whether x and y are close to one or the other boundary. Thus the small-t asymtotic expansion has the following form JHEP11(2017)154 Indeed, for m = 0, the heat kernel describes the diffusion (Brownian motion) of a particle on the manifold from x to y. On a flat manifold, this is given as a sum over the geodesic paths from x to y as 1 4πt e −ℓ 2 (r) (x,y)/(4t) , where ℓ (r) (x, y) is the geodesic length of the r th path. In particular, if the manifold has boundaries, there are (possibly infinitely) many geodesic paths that involve one or several reflections at the boundaries. We write ℓ i (x, y) for the length of the geodesic path from x to y that involves exactly one reflection at the boundary component ∂M i . Moreover, on a curved manifold, each 1 4πt e −ℓ 2 (r) (x,y)/(4t) gets multiplied by a power series in t with coefficients that can be determined order by order from the differential equation (3.5), see e.g. [22]. For small t the leading terms can involve at most one reflection, resulting indeed in the form (3.25), with ℓ 2 (x, y i (x,y)/4t . However, for small t, the terms involving e −ℓ 2 i (x,y)/4t with one reflection at ∂M i can only contribute if the points x and y are close to the boundary ∂M i , and close to each other, within a distance ≃ √ t. As t → 0, one zooms in close to the boundary which thus becomes flat. 
Now for a flat boundary, the length of the geodesic from x to y involving one reflection at ∂M i is the same as the length of the geodesic from x to the "mirror" image point y (i) C and e −ℓ 2 1 (x,y)/4t ≃ e −ℓ 2 (x,y (i) C )/4t , with any differences at finite t being included in a redefinition of the coefficients a (i) k (x, y). Thus, we can rewrite (3.25) equivalently as where ℓ(x, ∂M i ) denotes the geodesic distance of the point x to the boundary ∂M i . Thus Here, the local expressions a r (x, x) are the same as on a compact Riemann surface without boundary, e.g. a 1 (x, x) = R(x) 6 . If we are going to take the t → 0 limit, we will find that the terms involving the boundaries drop out, unless the point x is on the boundary ∂M i . In this case the corresponding boundary terms diverge for t → 0 (as do the bulk terms). Thus these boundary terms behave as a Dirac delta concentrated on the boundary. To be more precise, let us look at the heat kernel evaluated at x = y and integrated over the manifold against a "test function" f : Then the first term in (3.27) just gives the usual bulk result, while each of the boundary terms yields Again, for small t, the exponential forces x to be close to the boundary. We may then view the integral as an integral over the boundary and an integral normal to the boundary. For JHEP11(2017)154 a given boundary point x B we can Taylor expand all quantities around this point and do the integral in the normal direction. The leading small-t term of this normal integral then simply is given by (using Riemann normal coordinates around where g is the metric induced on the boundary, so that dx B g(x B ) = dl. The O(t 0 )corrections to this expression involve the normal derivatives of √ g and of f . As a result the small-t asymptotic expansion of (3.28) has the form (3.31) The leading small-t singularity ∼ t −1 is given by the usual bulk term, while the boundary-terms yield subleading singularities ∼ t −1/2 . Of course, this formula involves the heat kernel for all values of t, not just the small-t asymptotics. However, for s = 0, −1, −2, . . ., 1 Γ(s) has zeros and the value of ζ(s, x, y) is entirely determined by the singularities of the integral over t that arise from the small-t asymptotics of K. As shown above, the latter is given by local quantities on the Riemann surface. In particular, for any point not on the boundary of M, we have On the other hand, the values for s = 1, 2, 3, . . . or the derivative at s = 0 cannot be determined just from the small-t asymptotics and require the knowledge of the full spectrum of ∆ g + m 2 , i.e. they contain global information about the Riemann surface. Clearly, ζ(1, x, y) = G(x, y) is singular as x → y. For s = 1, ζ(s, x, y) provides a regularization of the propagator. It will be useful to study in more detail the singularities of ζ(s, x, y) which occur for s → 1 and x → y. More generally, as is clear from (4.1), any possible singularities of ζ(s, x, y) for s ≤ 1 come from the region of the integral where t is small. Thus, we tentatively let where µ is some (arbitrary) large scale we introduce to separate the singular and nonsingular parts, so that ζ − ζ sing is free of singularities. For large µ 2 , say µ 2 A ≫ 1, where JHEP11(2017)154 A is the area of our manifold, we can use the small-t asymptotics (3.25) or (3.26) of K to evaluate ζ sing . With t small, the e −ℓ 2 /4t are exponentially small unless ℓ 2 t. This means that in the first sum we must have y = x + O( √ t) and in the second sum y (i) . 
Since a 0 (x, y) = a 0 (x, x) + O(ℓ 2 (x, y) R) = 1 + O(t R), and similarly for a i 0 (x, y (i) C ), and since the O(tR) terms do not contribute to the singularity at s → 1, we define ζ sing (s, x, y) more precisely as where the exponential integral (or incomplete gamma) function is defined by As z → 0, the E r (z) are regular for r > 1 and have a logarithmic singularity for r = 1 (see the appendix): For x = y, the exponential integral functions are non-singular and we can set s = 1 in (4.4), with the singularity appearing as the short-distance singularity for x → y. We have: up to terms that vanish for x = y. If moreover x → y → ∂M i , i.e. they go to one of the boundaries, one has (again, up to terms that vanish in this limit) (4.9) On the other hand, for s = 1, we can set x = y directly in (4.4). More precisely, we assume Re s > 1 and analytically continue in the end. Then (recall ℓ 2 i (y, y) = 4ℓ 2 (y, ∂M i )) ζ sing (s, y, y) = µ 2−2s 4πΓ(s) If y / ∈ ∂M, only the first term yields a pole at s = 1, while for y ∈ ∂M the second term also yields the same pole and, hence, the residue is doubled: JHEP11(2017)154 Thus, we see that ζ(s, x, x) has a pole at s = 1 with residue a 0 (x,x) 4π = 1 4π for x / ∈ ∂M and residue a 0 (x,x) 2π = 1 2π for x ∈ ∂M i . Just as for the heat kernel itself, we will actually encounter expressions where ζ sing (s, y, y) is multiplied by some f (y) and integrated over the manifold. Proceeding similarly to the derivation of (3.31) we find M d 2 y √ gf (y)ζ sing (s, y, y) = µ 2−2s 4πΓ(s) This integrated expression exhibits poles at s = 1, 1 2 , − 1 2 , −1, − 3 2 , . . . and no pole at s = 0. This infinite series of poles translates the discontinuous behaviour between (4.10) and (4.11) due to the fact that the limits s → 1 and z → 0 of E s (z) do not commute, as detailed in the appendix. In particular, one has All boundary terms are at least ∼ 1 µ and we can thus restate the previous relation as lim s→1 1 + (s − 1) d ds + log µ 2 M d 2 y √ g f (y) ζ sing (s, y, y) In any case, ζ R (s, x, y) = ζ(s, x, y) − ζ sing (s, x, y) (4.16) is free of singularities and, in particular, has finite limits as s → 1 and x → y, in one order or the other, i.e. ζ R (1, x, x) is finite and well-defined. We then let This is an important quantity, called the "Green's function at coinciding points". Note that G ζ (y) contains global information about the Riemann surface and cannot be expressed in terms of local quantities only. Combining (4.13) and (4.17) we get JHEP11(2017)154 Note that the precise definition of G ζ depends on our choice of µ, as is also obvious from this last relation since its left-hand side is µ-independent. Maybe it is useful to pause and comment on the role of µ. It was introduced to separate ζ into its singular and regular parts. One might thus view it as some sort of UV-cutoff. But contrary to a usual UV-cutoff, our formula are valid for any finite µ and the relevant quantities that will appear in the gravitational action, such as (4.18) do not depend on the value of µ. One might then take the limit µ → ∞ to simplify the formula, but one must be aware that G ζ itself does not have a well-defined limit, only the combination G ζ (y)− 1 4π log µ 2 µ 2 does. The other ingredient needed for computing the variation of the gravitational action was (s, x, x). Again, one sees from (4.12), by replacing ζ sing (s, x, x) by ζ(s, x, x) that d 2 x √ g δσ(x)ζ(s, x, x) actually has poles for s = 1, 1 2 , − 1 2 , −1, . . . 
but not for s = 0, since the would-be pole is cancelled by the 1 Γ(s) . Hence, d 2 x √ g δσ(x)ζ(s, x, x) is regular at s = 0 and, adding the bulk contribution (4.2) and the boundary contribution read from (4.12), we get Let us relate G ζ (y) to the Green's function G(x, y) at coinciding points with the short-distance singularity subtracted. Since ζ R (s, x, y) = ζ(s, x, y) − ζ sing (s, x, y) is free of singularities, we may change the order of limits. If we first let s = 1, so that ζ(1, x, y) = G(x, y) and ζ sing (1, x, y) is given by (4.8), we find We know that G ζ (y) is a non-singular quantity for all y ∈ M, in particular also on the boundary. The logarithm subtracts the generic short-distance singularity of G(x, y), while the E 1 subtract the additional singularities present whenever y ∈ ∂M i . If, as before, we multiply this relation by some smooth f (y) and integrate over the manifold, we get in particular for these E 1 -terms: with F (y, µ) defined in (4.14). It follows that we may rewrite (4.18) as where G R,bulk (y) = lim x→y G(x, y) + 1 4π log ℓ 2 (x, y) µ 2 4 + 2γ (4.23) JHEP11(2017)154 is the Green's function at coinciding points with its bulk singularity subtracted. If necessary, (4.22) again shows that this does not depend on the arbitrarily introduced µ (although it does depend on µ which was part of our definition of the functional integral). While the quantity G R,bulk (x) has the advantage of being µ-independent, it has a (logarithmic) singularity as x approaches the boundary. However, we know that these singularities must be integrable as is clear from the equality of (4.22) with (4.18) which is finite, independently of the arbitrary choice of µ. As will become clear next, while G ζ (x) satisfies Neumann boundary conditions, this is not the case of G R,bulk (x). To study the boundary condition satisfied by G ζ (y) we only need its behaviour in the immediate vicinity of the relevant boundary component which can be read from (4.20): (4.24) Now, G(x, y) satisfies the Neumann condition in both its arguments. The same is true for the sum of the logarithms, up to terms that vanish as x and y approach the boundary. This can be seen as follows: as one zooms in close to the boundary, the boundary becomes flat and the geometry locally Euclidean, and using Riemann normal coordinates in the normal and tangential directions around the relevant boundary point (such that the boundary is at zero normal coordinate), one has ℓ 2 (x, y) ≃ (x t − y t ) 2 + (x n − y n ) 2 as well as | x n =0 = 0, i.e. the sum of the logarithms satisfies the Neumann condition in x up to terms that vanish as x and y approach the boundary. Since ℓ 2 i (x, y) is symmetric in x and y, the same is true in y. Now if any function h(x, y) satisfies the Neumann condition in both arguments, the function H(y) = lim x→y h(x, y) = lim ǫ→0 h(y + ǫ, y) then obviously also satisfies the Neumann condition. We conclude that G ζ satisfies the Neumann boundary condition on every boundary component ∂M i , i.e. n a ∂ a G ζ (y) = 0 , for y ∈ ∂M . (4.25) It follows that, if φ is any smooth function that also satisfies Neumann conditions, one has It is now also clear that G R,bulk does not satisfy the Neumann condition since its definition lacks the crucial third term in (4.24). We can now evaluate (4.18) for f = ∆φ and use (4.26) to get where Φ(y, µ) = ∆φ + 1 2 √ πµ ∂ n ∆φ + 1 12µ 2 ∂ 2 n ∆φ + . . .. 
At this point one might be tempted to take µ → ∞ to get rid of the last term but, of course, one must remember that G ζ also JHEP11(2017)154 depends on µ. However, this relation shows that, since the left-hand side does not depend on µ, the quantity M d 2 y √ g φ(y) ∆G ζ (y) has a finite limit as µ → ∞ and we arrive at the two following equivalent expressions: Both ways of writing require a comment: while the µ → 0 limit of the integral involving ∆G ζ exists, this is not the case of ∆G ζ (y) itself for y on the boundary. On the other hand, in the integral invoving G R,bulk , even though G R,bulk does not satisfy the Neumann condition, one might want to integrate by parts generating a boundary term: (4.29) However, this is not possible: both terms on the r.h.s. are meaningless since ∆G R,bulk (y) has a non-integrable singularity as y approaches the boundary (expected to be ∼ 1/ℓ 2 (y, ∂M)), and ∂ n G R,bulk is infinite everywhere on the boundary. The Mabuchi action on a manifold with boundaries We are now in position to assemble our results and determine the gravitational action on a Riemann surface with boundaries. As already explained, the strategy is to use the infinitesimal variation of S grav under an infinitesimal change of the metric as given by (2.15), and then to integrate δS grav to obtain S grav [g, g 0 ]. Inserting (4.19) and (4.22) into (2.15), we immediately get Note that this is not an expansion in powers of m 2 but an exact result. Our perturbation theory was a first order perturbation in δσ, not in m 2 . Indeed, G ζ and G R,bulk still depend on m 2 and we get exactly the first two terms in an expansion in powers of m 2 if we replace them by the corresponding quantities G R,bulk defined for the massless case. However there is a subtlety here, since in the massless case the zero-mode must be excluded from the sum over eigenvalues defining the Green's function. If we denote with a tilde all quantities lacking the zero-mode contribution we have JHEP11(2017)154 The quantities G, G ζ and G R,bulk all have a smooth limit as m → 0. Thus, the expansion in powers of m 2 reads 3) The first term is independent of m and corresponds to − 1 24π times the variation of the Liouville action on a manifold with boundary: 5 while the δA 2A -term contributes a piece 1 2 log A A 0 to S grav . Recall that, contrary to φ or δφ, the field σ and its variation δσ do not satisfy the Neumann condition. To go further, we need the variation of G under an infinitesimal variation of the metric corresponding to δσ. At this point it turns out to be easier 6 to study the variation of G R,bulk . The latter is obtained exactly as for a manifold without boundary. The simplest derivation just uses the differential equation satisfied by G(x, y) to obtain δG(x, y) = −2m 2 d 2 z √ g G(x, z) δσ(z) G(z, y) , (5.5) which satisfies the Neumann conditions. Alternatively, one can use the perturbation theory formulae (2.8) and (2.9) to obtain δG(x, y) = n δϕ n (x)ϕ n (y) + ϕ n (x)δϕ n (y) λ n − ϕ n (x)ϕ n (y)δλ n λ 2 n (5.6) in agreement with (5.5). Next, the variation of ℓ 2 (x, y) was given e.g. in [6,7,21]. In the limit x → y one simply has ℓ 2 (x, y) ≃ g ab dx a dx b = e 2σ(y) g (0) ab dx a dx b which shows that one has δℓ 2 (x, y) ≃ 2δσ(y) ℓ 2 (x, y) and, hence, lim x→y δ log ℓ 2 (x, y)µ 2 = 2 δσ(y) . (5.7) 5 With respect to the metrics g0 and g = e 2σ g0 one has dl = e σ dl0 as well as n a = e −σ n a 0 and thus also ∂n ≡ n a ∂a = e −σ ∂ 0 n . 
One sees that dl ∂nδσ = e σ dl0 e −σ ∂ 0 n δσ = dl0 ∂ 0 n δσ = δ(dl0 ∂ 0 n σ) = δ(dl ∂nσ). 6 The relevant formulae for studying the variation of G (0) ζ and of ∂M dl Σ are given in an appendix. JHEP11(2017)154 It follows that Separating the zero-mode parts 1 m 2 A , this is rewritten as (Note that the first term in (5.9) integrates to zero and does not contribute in (5.10).) Thus where G R,bulk is computed from the Green's function without zero-mode of the massless theory. The order m 2 term in (5.11) is given by the variation of the functional where we explicitly indicated the dependence of G R,bulk on the metric g. Thus In order to express Φ G [g] − Φ G [g 0 ] as a local functional of σ and φ, we use again (5.9) in the zero-mass limit and replace δσ in the first term by δA 2A − A 4 ∆δφ according to (2.16): (5.14) Note that we integrated the Laplace operator by parts without generating boundary terms since both G and δφ satisfy the Neumann boundary conditions (which is not the case for δσ) and then we used the differential equation (3.3). Equation (5.14) can be integrated as It is now straightforward to obtain JHEP11(2017)154 As already emphasized, contrary to G(x, y) or G ζ (x), the quantities G R,bulk and G (0) R,bulk do not satisfy the Neumann condition. Moreover, ∂ n G R,bulk and ∂ G (0) R,bulk are singular on the boundary. Thus, we arrive at (Recall our notation: a tilde on any quantity means that we removed the contribution of the zero-mode, and a superscript (0) means that we are computing in the massless limit. So G R,bulk is G R,bulk computed in the massless limit with the contribution of the zero-mode removed. The same remark applies to G (0) ζ used below.) The first line contains the usual Liouville action along with a factor + 1 2 log A A 0 , as well as a contribution to the cosmological constant action. The cosmological constant action is required in any case to act as a counterterm to cancel the divergence that accompanies the ζ ′ (0) when properly evaluating the determinant, e.g. with the spectral cut-off regularization as was done in [21]. The terms in the second line are the genuine order m 2 corrections. Using the second equality in (4.28), we can rewrite the latter using G R,bulk . In particular, this allows us to integrate by parts the Laplacian and to take the µ → ∞ limit. Recall that µ was arbitrary and our equations are valid for all values of µ. However, G ζ does not have a well-defined µ → ∞ limit, but G ζ (y) − 1 4π log µ 2 µ 2 does, and so does ∆ 0 G ζ , as well as G (0) ζ . We then get Written this way, the order m 2 -terms ressemble the usual Mabuchi plus Aubin-Yau actions found for manifolds without boundary [6,7]. However, here the function ∆ 0 G (0) ζ (x) no longer is a simple expression but depends non-trivially on the point x and in particular on the distances from the various boundary components. Of course, the same is true for G (0) R,bulk (x). The cylinder In this section we work out the gravitational action for the simplest two-dimensional manifold with a boundary: the cylinder. As we have seen in section 3.3.2 the heat kernel and hence also the Green's function on the cylinder are obtained from the corresponding quantities on the torus by a method of images. Thus to get the Green's function of the Laplace operator on the cylinder of length T and circumference 2πR we first determine the Green's function on the torus with periods 2T and 2πR, i.e. modular parameter τ = iπ R T . 
JHEP11(2017)154 Actually, it is not more complicated to obtain the Green's function for a torus with arbitrary modular parameter τ , but since we will be only interested in the "straight" cylinder, we will explicitly consider the square torus with purely imaginary τ . With respect to our general notation, throughout this section we consider a fixed reference metric g 0 and corresponding Laplacian ∆ 0 and Green's functions G(z 1 , z 2 ; g 0 ) although we will mostly drop the reference to g 0 . Green's function on the torus To get the Green's function on the torus with periods 2a and 2b , in principle, one could take the heat kernel K torus (t, x 1 , x 2 , y 1 , y 2 ) as constructed from the eigenfunctions (3.17) and eigenvalues of the Laplace operator, cf (3.18) and integrate over t form 0 to ∞. However, we have not been able to find any useful formula for . Instead, we will follow the usual approach to identify a suitable doubly periodic solution of the Laplace equation with the correct singularity at the origin. It will be convenient to use a complex coordinate z. Thus, in this section we will change our notation with respect to the previous one and call x and y the real and imaginary parts of z: When we need to label two points, 7 we will use z 1 = x 1 + iy 1 and z 2 = x 2 + iy 2 . We thus have a square torus with modular parameter τ = i b a . The reference metric g 0 is just the standard metric ds 2 = dzdz and ∆ 0 = −4∂ z ∂ z . • It is obvious from the factorization of the logarithm that for z = 0 : where A 0 is the area of the torus. • As z → 0 one has which together with the previous relation ensures that • One can show that d 2 z g(z) = 0 . (6.8) • We have the symmetry properties Thus G(z 1 , z 2 ) = g(z 1 − z 2 ) (6.10) is the appropriate Greens's function on the torus. It would be satisfying to show that this coincides with the expression for the Green's function obtained by integrating the heat kernel one gets from the eigenfunction expansion but, as already mentioned, we have not been able to find a corresponding identity in the literature. One can then define the renormalized Green's function at coinciding points G R (z) on the torus, after subtracting the short-distance singularity as (1 − q 2n ) 2 , (6.11) with q = e −πb/a . As was expected from the isometries of the torus G R is a constant. Green's function and Green's functions at coinciding points on the cylinder We now construct the Green's function on the cylinder of length T (coordinate x) and circumference 2πR (coordinate y). We choose to impose Neumann boundary conditions at x = 0 and x = T . Let g be the function defined in (6.2) with a = T and b = πR: JHEP11(2017)154 Again, the Neumann boundary conditions are achieved by adding to g(z 1 − z 2 ) the same function with z 2 = x 2 + iy 2 replaced by the appropriate image points. The boundary at x = 0 requires the image point z C 2 = −z 2 = −x 2 + iy 2 , while the boundary at x = T would require to add the image points of z 2 and z C 2 , i.e. T + (T − x 2 + iy 2 ) = 2T + z C 2 and T +(T +x 2 +iy 2 ) = 2T +z 2 . However, due to the 2T -periodicity these points are equivalent to z 2 and z C 2 and adding g (or g) at these points would result in an over-counting. Of course, this is in agreement with the relation (3.18) between the heat kernels of the torus and the cylinder, from which the corresponding Green's functions could be obtained by integration over t. 
Thus we let (6.13) Using the symmetry properties (6.9), one easily verifies that this indeed satisfies Neumann conditions at x 1 = 0 and T as well as at x 2 = 0 and T , e.g. From (6.7) we see that G cyl satisfies, for any x 2 = 0, T , . Integrating the right-hand side of (6.15) over the cylinder then correctly yields 0. Next, we need to determine the various Green's functions at coinciding points that played an important role for formulating the gravitational action, i.e. G cyl ζ (z) and G cyl R,bulk (z). In the present specific case of the cylinder it is useful to first define yet another function G cyl R (z) by G cyl R (z) = (6.16) = lim The additional terms subtract the bulk singularity at z 1 → z 2 , as well as the boundary singularities that occur as y 1 → y 2 and x 1 → x 2 → 0 or T . Explicitly we find that G cyl R only depends on x = Re z (as well as on q = e −π 2 R/T , of course): One sees again, that this is non-singular, even as x → 0 or x → T . It is clear from its definition that G cyl R satisfies Neumann boundary conditions, as follows also from the explicit expression just given. JHEP11(2017)154 is well-defined and its limit as µ → ∞ exists for every φ obeying the Neumann boundary conditions. If one thinks of the cylinder as a simple Euclidean version of one compact space and one time dimension, one would like to study the limit where the cylinder becomes infinitely long, i.e. T → ∞. However, as T R → ∞, one has q → 1 and the sum over n diverges, hence this expression ceases to be valid. To study the behaviour as T R → ∞, one must first do the modular transformation τ ≡ iπ R T → τ = − 1 τ = i T πR . This reads for θ 1 , as well as for θ 2 (which we will need below), This will allow us to write the theta functions as sums of powers of q = e iπ τ = e −T /R . Note that the first argument ν τ now is imaginary which will turn the sin and cos in (6.20) into sinh and cosh. We get, through similar manipulations as above, where q = e iπ τ = e −T /R . Note that the poles at x = 0 cancel and that this representation as an infinite sum is convergent and finite for all |x| < T . Note also that, although not obvious on (6.24), within this interval (−T, T ) these functions are periodic under x → x+T . Then the finiteness at x = 0 implies finiteness at x = T , too. While (6.24) is a perfectly satisfactory expression, if we think of the x-direction as time, we want time to be finite with "infinite past" and "infinite future" infinitely far away. Hence we let so that finite t corresponds to values in the "middle" of the cylinder. It is then natural to first re-express the θ 1 1 2 + t T iπ R T appearing in (6.20) JHEP11(2017)154 In order to study the limit of an infinitely long cylinder, we have seen that one has to set x = T 2 + t and use (6.27) in order to obtain the limit T → ∞ as given by (6.28 (6.31) Of course, A 0 = 2πRT and A also go to infinity in this limit, and the second and third terms of the Lagrangian have finite coefficients. As for the kinetic term, only those eigenvalues of ∆ 0 that scale as 1 A give finite contributions. Comparing (6.31) with the Mabuchi action of a manifold without boundary as defined in (1.5), we see that this corresponds to the Mabuchi action with h = 0 and vanishing background curvature (i.e. R 0 = 0): Of course, this equality is to be understood as an equality of the Lagrangian densities, rather than of the actions. While R 0 = 0 was to be expected for a cylinder, the replacement χ = 2(1 − h) → 2 was, maybe, not that obvious to guess. 
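The passage to the q-tilde = e^{−T/R} expansion used above for the long-cylinder limit rests on the modular transformation of the θ-functions. As a quick numerical illustration (for θ3 at zero argument, the simplest case; the θ1 and θ2 transformations used in the text work analogously), one can check the Jacobi identity directly:

```python
# Jacobi's identity theta_3(0 | i*s) = s^(-1/2) * theta_3(0 | i/s), i.e.
#   sum_n exp(-pi*s*n^2) = s^(-1/2) * sum_n exp(-pi*n^2/s),
# which is the Poisson resummation converting the q-expansion into the
# qtilde-expansion used for the infinitely long cylinder.
import numpy as np

def theta3(s, nmax=60):
    n = np.arange(-nmax, nmax + 1)
    return np.sum(np.exp(-np.pi * s * n**2))

for s in (0.1, 0.5, 3.0):                 # s plays the role of the (rescaled) modular parameter
    print(s, theta3(s), theta3(1.0 / s) / np.sqrt(s))   # the two columns agree
```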
Acknowledgments We are grateful to the referee for suggesting various improvements of the manuscript. A Boundary integrals for two-dimensional manifolds Integration by parts on a two-dimensional manifold with boundaries generates boundary terms. The natural way to implement this is via Stoke's theorem for the integration of an exact 2-form. Let's work this out. Let α = α a dx a be a 1-form on M. Let α be its restriction to ∂M (i.e. the pullback of α under the inclusion map of ∂M into M): α = α a d x a where the d x a are the "projections" of the dx a on the tangent to ∂M. We define the not necessarily normalized tangent vector t a as d x a = t a dl where dl is the proper length one-form on ∂M. Thus α = α a t a dl . (A.1) As an example, let M be the unit sphere S 2 , with standard coordinates θ, ϕ, with the polar cap θ ≤ θ 0 removed. Then ∂M is the circle at θ = θ 0 , so that d θ = 0 and d ϕ = dϕ. Now dl = sin θ dϕ so that t θ = 0 and t ϕ = 1 sin θ . Integrals of the 1-form α over ∂M can be immediately evaluated as without the need to introduce a metric. However, sometimes, the 1-form α is the Hodge dual of some other 1-form β, i.e. α = * β, and this requies a metric. Indeed, we have * dx a = g ab ǫ bc dx c = g ab √ g ǫ bc dx c , (A.3) JHEP11(2017)154 Only the n = 0 term is potentially divergent as ǫ → 0 and x → 0, and separating it from the rest of the sum we get: (−x) n n!(n + ǫ) , (−x) n n! n + O(ǫ) , This relation holds exactly for all s > 1 and, hence, also in the limit s → 1. Not only the limit as x → 0 of (B.9) is different from (B.10), it is also singular. This non-commutativity of the limits s → 1 and x → 0 can be traced back to the behaviour of d ds x s−1 = x s−1 log x. If we first let x → 0 assuming s > 1 it yields 0, while letting first s → 1 assuming x > 0 we get log x. C Some additional variational formulae When establishing the gravitational action, we had chosen to study the properties of G R,bulk rather than those of G ζ . If one chooses to study the properties of the latter instead, one needs, in particular, the variation of G (0) ζ for finite µ. This requires the use of some additional variational formulae which we summarize in this appendix. First, the variation of G ζ also involves the variation of E 1 ℓ 2 (y,y i C )µ 2 4 . For x → y and close to the boundary one has δℓ 2 (y, y i C ) ≃ 2δσ(y B ) ℓ 2 (y, y i C ) . (C.1) JHEP11(2017)154 It follows that δE 1 ℓ 2 (y, y i C )µ 2 4 = E ′ 1 ℓ 2 (y, y i C )µ 2 4 ℓ 2 (y, y i C )µ 2 2 δσ(y i B ) = −2e − ℓ 2 (y,y i C )µ 2 4 δσ(y i B ) , (C.2) where we used (B.3) Thus, we get One also encounters i ∂M i dl δσ: where L(∂M) is the total length of the boundary. The finite variation of G (0) ζ follows from (C.3) in the zero-mass limit as Finally, at finite µ the gravitational action can then be found to be S grav [g, g 0 ] = − 1 24π S L [g, g 0 ] + 1 2 log Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
15,325.8
2017-11-01T00:00:00.000
[ "Mathematics" ]
Enhancing Financial Fraud Detection through Addressing Class Imbalance Using Hybrid SMOTE-GAN Techniques : The class imbalance problem in finance fraud datasets often leads to biased prediction towards the nonfraud class, resulting in poor performance in the fraud class. This study explores the effects of utilizing the Synthetic Minority Oversampling TEchnique (SMOTE), a Generative Adversarial Network (GAN), and their combinations to address the class imbalance issue. Their effectiveness was evaluated using a Feed-forward Neural Network (FNN), Convolutional Neural Network (CNN), and their hybrid (FNN+CNN). This study found that regardless of the data generation techniques applied, the classifier’s hyperparameters can affect classification performance. The comparisons of various data generation techniques demonstrated the effectiveness of the hybrid SMOTE and GAN, including SMOTified-GAN, SMOTE+GAN, and GANified-SMOTE, compared with SMOTE and GAN. The SMOTified-GAN and the proposed GANified-SMOTE were able to perform equally well across different amounts of generated fraud samples. Introduction The financial sector faces a significant challenge in the form of financial fraud, encompassing various forms of criminal deception aimed at securing financial gains, including activities like telecommunication fraud and credit card skimming.The proliferation of electronic payment technology has propelled online transactions into the mainstream, thereby amplifying the occurrence of fraudulent schemes.The prevalence of these fraudulent transactions has led to substantial losses for financial institutions.However, the large daily transactions pose a challenge for humans in manually identifying fraud.Recently, deep learning techniques have been explored and have shown promising results in detecting financial fraud Alarfaj et al. (2022); Fang et al. (2021); Kim et al. (2019).Unfortunately, most real-world financial fraud datasets suffer from a severe class imbalance issue, where the fraud data's proportion is significantly lower than that of nonfraud.In binary classification, class imbalance often leads to biased predictions favoring the majority class Johnson and Khoshgoftaar (2019).Consequently, the classifier's performance on the minority class is compromised, especially when encountering dissimilar frauds.Overcoming this problem poses a significant challenge, as classifiers are expected to achieve high precision and recall in fraudulent class. To address this problem, several oversampling methods have been employed to generate minority samples.Synthetic Minority Oversampling TEchnique (SMOTE) interpolates between the existing minority data to synthesize minority samples Chawla et al. (2002).Generative Adversarial Networks (GANs) comprise a discriminator that aims to differentiate between real and generated samples and a generator that strives to deceive the discriminator by synthesizing realistic samples Goodfellow et al. (2014).GANs have shown superior results compared with SMOTE Fiore et al. (2019).However, SMOTE may cause overgeneralization issues.GAN, primarily designed for image generation, is not ideal for handling the class imbalance problem.To overcome these limitations, SMOTified-GAN employs SMOTE-generated samples instead of random noises as input to the GAN Sharma et al. (2022). In addition to the aforementioned data generation techniques, other hybrids of SMOTE and GAN are worth exploring.This study presents the following contributions: 1. 
Introducing two data generation techniques, SMOTE+GAN and GANified-SMOTE, designed to effectively address the class imbalance issue in finance fraud detection.2. Conducting a comprehensive comparison between the proposed oversampling methods and existing data generation techniques, utilizing precision, recall, and F1-score as key performance metrics. 3. Evaluating the performance of the data generation techniques across various neural network architectures, including a Feed-forward Neural Network (FNN), Convolutional Neural Network (CNN), and the proposed hybrid FNN+CNN.4. Analyzing the impact of training classifiers on different proportions of the generated minority samples. Related Work The task of detecting financial fraud can be approached as a binary classification challenge, where classifiers examine the patterns within fraudulent and legitimate transactions to classify new transactions accurately.Consequently, it is crucial to possess an ample and diverse dataset to enable classifiers to grasp the inherent patterns of both transaction categories.Addressing the issue of inadequate fraudulent samples in the training dataset, various methodologies have been introduced to create artificial fraud instances and supplement the original data.These techniques include SMOTE, GAN, and SMOTified-GAN.SMOTE Chawla et al. (2002) has been widely applied to imbalanced training datasets.More than 85 SMOTE variations were proposed by 2018, including SMOTE+TomekLinks, SMOTE+ENN, Borderline-SMOTE, and Adaptive Synthetic Fernández et al. (2018).Recent studies proposed Radius-SMOTE Pradipta et al. (2021), which prevents overlap among generated samples, and Reduced-Noise SMOTE Arafa et al. (2022), which removes noise after oversampling.In financial fraud detection, SMOTE and its variations have been widely utilized to resample highly imbalanced datasets before training models such as AdaBoost Ileberi et al. (2021) and FNN Fang et al. (2021).Besides the finance domain, SMOTE and its variations have found extensive application in other fields dealing with highly imbalanced datasets.In bio-informatics, SMOTE has been used to discriminate Golgi proteins Tahir et al. (2020) and predict binding hot spots in protein-RNA interactions Zhou et al. (2022).In medical diagnosis, SMOTE and its variations have been employed for diagnosing cervical cancer Abdoh et al. (2018) and prostate cancer Abraham and Nair (2018).SMOTE has also been used to predict diabetes Mirza et al. (2018) and heart failure patients' survival Ishaq et al. (2021). GANs Goodfellow et al. (2014) and their variations have more recently been employed for generating minority samples to tackle the class imbalance problem.Douzas and Bacao (2018) utilized a conditional GAN (cGAN) which can recover the distribution of training data to generate minority samples.To address the mode collapse issue, Balancing GAN was proposed to generate more diverse and higher-quality minority images Mariani et al. (2018).However, in this technique, the generator and discriminator cannot simultaneously reach their optimal states, leading to the development of IDA- GAN Yang and Zhou (2021).In financial fraud detection, GAN has been employed to generate fraud samples for imbalanced datasets before training classifiers, such as AdaBoost-Decision Tree Mo et al. (2019) and FNN Fiore et al. (2019).These studies have reported that the GAN achieves higher AUC, accuracy, and precision compared with SMOTE.Interestingly, Fiore et al. 
(2019) found that the best performance was achieved when twice as many GAN-generated fraud samples as the original fraud data were added to the training dataset.In other financerelated domains, GANs have been utilized to address class imbalance in money laundering detection in gambling Charitou et al. (2021).GANs and their variations have also been used extensively for high-dimensional imbalanced datasets, such as images Mariani et al. (2018); Scott and Plested (2019) and biomedical data Zhang et al. (2018).Recent studies have successfully applied GANs and their variations to generate minority samples in bio-informatics Lan et al. (2020). Despite the notable accomplishments of SMOTE and GAN, these methods have certain limitations.SMOTE may introduce noise that leads to overgeneralization Bunkhumpornpat et al. (2009).While GANs can generate more "realistic" data, they may not be ideal for handling imbalanced data, as it was originally designed for generating images using random noise.Additionally, there may be insufficient real minority data available for training the GAN Mariani et al. (2018).To address these limitations, Sharma et al. (2022) proposed SMOTifed-GAN, which employs SMOTE-generated samples as input for GAN instead of random numbers, resulting in improved performance compared with SMOTE and GAN. In early studies, financial fraud detection systems predominantly depended on rulebased methodologies, wherein human expertise in fraud was translated into rules to anticipate fraudulent activities Zhu et al. (2021).However, the evolving behaviors of fraudsters and the increasing size of transaction datasets have posed challenges in identifying fraud-related rules manually.As a result, research has shifted towards machine learning methods, such as naive Bayes, logistic regression, support vector machine, random forest, and decision tree (Ileberi et al. 2021;Ye et al. 2019;Zhu et al. 2021), which can "learn" fraud and nonfraud patterns from given datasets.Nonetheless, machine learning techniques require extensive data preprocessing before training the classifier Alarfaj et al. (2022); Kim et al. (2019); Zhu et al. (2021). In recent years, deep learning has gained popularity in financial fraud detection due to its superior performance compared with traditional machine learning approaches Alarfaj et al. (2022); Fang et al. (2021); Jurgovsky et al. (2018); Kim et al. (2019).Some studies have approached financial fraud detection as a sequence classification problem, considering the temporal sequence of transactions as a crucial factor.Sequential models, such as Gated Recurrent Units Branco et al. (2020), Long Short-Term Memory (LSTM) Jurgovsky et al. (2018), and Time-aware Attention-based Interactive LSTM Xie et al. (2022), have been proposed.However, since most available financial fraud datasets lack timesequence information, sequential models may not be suitable in such cases.Due to the vector format of finance fraud datasets without time-sequence information, FNNs are considered a suitable choice Fang et al. (2021); Fiore et al. (2019); Kim et al. (2019).Initially designed for image processing and classification, CNNs have also been found effective in financial fraud detection Alarfaj et al. (2022); Chen and Lai (2021); Zhang et al. (2018).Their 1D convolution layers can extract patterns within smaller segments of a transaction vector. Building on Fiore et al. 
(2019)'s findings, this study aimed to assess the performance of a model using varying amounts of minority samples in the training dataset. To achieve this, the study explores the use of SMOTE, GAN, SMOTified-GAN, and other variants of hybrid SMOTE and GAN. Consequently, a combination of SMOTE- and GAN-generated minority samples, along with GANified-SMOTE, was proposed to fulfill the research aims. Finally, FNN, CNN, and FNN+CNN models were employed to ensure a fair evaluation of the performances of different data generation techniques. Data Preprocessing The experiment utilized the Kaggle (2018) credit card fraud dataset, consisting of 284,807 transactions conducted by European credit card holders over two days in September 2013. This dataset comprises 31 numerical features, including Time, Amount, Class, and 28 other unnamed features. The 'Time' feature represents the elapsed time in seconds since the first transaction, while the 'Amount' feature denotes the transaction amount. The 'Class' label indicates fraudulence, utilizing binary values, where 1 and 0 represent fraud and nonfraud, respectively. Notably, only 492 transactions (0.172%) are classified as fraudulent, resulting in a highly imbalanced distribution. To facilitate gradient descent convergence and mitigate bias towards features with larger magnitudes, all features except the 'Class' label were rescaled to the range [0, 1] while maintaining the original feature distribution. Each value X of a given feature was transformed into a new value X' according to Equation (1), X' = (X − X_min)/(X_max − X_min), where X_min and X_max represent the minimum and maximum values of the feature, respectively. Subsequently, the dataset was divided into a training set comprising 80% of the data (227,451 nonfraud and 394 fraud) and a testing set comprising the remaining 20% (56,864 nonfraud and 98 fraud). Data Generation Methods To address the issue of class imbalance, this study explored five data generation techniques: SMOTE, GAN, and their respective combinations. SMOTE SMOTE creates synthetic minority samples rather than duplicating existing ones to avoid overfitting. For a specific minority data point x represented as a vector, a vector x_k is randomly chosen from its k-nearest neighbors to generate a new sample x' using Equation (2), x' = x + λ (x_k − x), where λ is a random number drawn uniformly from [0, 1]. In this study, 394 instances of fraudulent data from the training dataset were utilized with the SMOTE technique, employing five nearest neighbors, to generate additional fraud samples, as depicted in Figure 1. Figure 1. SMOTE employed in this study utilizing five nearest neighbors for random interpolations and generating minority samples. 
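As a concrete illustration of the preprocessing and SMOTE steps just described, the following Python sketch applies min-max rescaling, the 80/20 split, and SMOTE with five nearest neighbours. It is only a minimal reconstruction: the file path, the random seeds, the use of stratification, and the choice of scikit-learn/imbalanced-learn are assumptions, since the paper does not publish its code.

```python
# Illustrative sketch of the preprocessing and SMOTE resampling described above.
# Requires pandas, scikit-learn and imbalanced-learn; path and seeds are assumed.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from imblearn.over_sampling import SMOTE

df = pd.read_csv("creditcard.csv")                 # Kaggle credit card fraud dataset (assumed path)
X, y = df.drop(columns=["Class"]), df["Class"]     # 30 numerical features + binary label

# Min-max rescaling of every feature to [0, 1], as in Equation (1)
X = pd.DataFrame(MinMaxScaler().fit_transform(X), columns=X.columns)

# 80/20 split; stratification keeps the 0.172% fraud ratio in both sets (assumed)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# SMOTE with five nearest neighbours; sampling_strategy=1.0 balances the classes
# (Test A); a smaller ratio would approximate the 788 extra samples of Test B.
smote = SMOTE(k_neighbors=5, sampling_strategy=1.0, random_state=42)
X_res, y_res = smote.fit_resample(X_train, y_train)
```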
GAN A GAN comprises a generator G and a discriminator D that engage in a competitive training process to improve their respective objectives.The discriminator aims to correctly classify real samples x and fake samples generated by the G(z), where z represents random noise or the latent space input to the G.The D's predictions for real and generated samples are denoted as D(x) and D(G(z)), respectively.By considering real samples with a label of 1 and generated samples with a label of 0, the D's loss function is defined in Equation ( 3), where E calculates the error or distance between the D's prediction and the true label.The G's objective is to generate realistic fake samples from random noise that can deceive the D into misclassifying them.The G's general loss function, defined in Equation ( 4), allows it to improve the quality of the generated samples based on the feedback received from the D's classification.As the G and D continue enhancing their performance, the quality of the generated minority samples improves. The proposed GAN architecture, as shown in Figure 2, consists of a 5-layer FNN G with respective neuron counts of 100, 256, 128, 64, and 30.The G takes 100 random noises sampled from a normal distribution.LeakyReLU activation function (Equation ( 5)) is used in all hidden layers, and dropout layers with a dropout rate of 0.2 are added after each hidden layer to mitigate overfitting.The output layer employs the sigmoid activation function (Equation ( 6)) to produce values between 0 and 1.Similarly, the D is a 5-layer FNN with identical activation functions and dropout layers.However, the neuron counts are 30, 128, 64, 32, and 1 for each layer.The D employs a stochastic gradient descent (SGD) optimizer with a learning rate of 0.05.The loss function depicted in Figure 2 is binary cross-entropy (Equation ( 7)), as the D's task involves binary classification.The GAN network also employs an optimizer with the same learning rate as the D, but the loss function utilizes the mean squared error metric (Equation ( 8), where y i is the true label and ŷi is the predicted class) as feedback for the G. (5) The fraud data from the training dataset were utilized to train D, enabling it to recognize patterns in real fraud data and generate fraud samples.Since there were only 394 fraudulent data points available for training, the batch size was reduced to 32.The number of training epochs was set to 1000 to allow sufficient time for the G and D to improve their performance.Following training, the G is employed to generate fraud samples based on the required number of minority samples. SMOTified-GAN GAN can learn patterns from minority data, resulting in more authentic minority samples.However, using random noise as input for the GAN G can be seen as generating samples from scratch, making it more challenging to train the G to produce high-quality samples.By utilizing SMOTE-generated samples as input, the generation process becomes simpler as the G begins with pre-existing fraud samples (Sharma et al. 2022).In the proposed approach, SMOTE was applied with the five nearest neighbors to generate double the number of fraud samples.Figure 3 illustrates that 788 SMOTE-generated samples were used as input for the GAN G.The hyperparameters of the GAN in the SMOTified-GAN model remained the same as the regular GAN, except for the number of neurons in the input layer of the G, which was adjusted to 30 to match the 30 features present in the SMOTE-generated fraud samples. 
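The GAN described above can be sketched in Keras roughly as follows. The layer widths, LeakyReLU activations, dropout rate of 0.2, SGD optimizer with learning rate 0.05, binary cross-entropy for the discriminator, and mean-squared-error feedback for the generator follow the text; everything else (the exact training loop, the LeakyReLU slope, the labelling convention) is an assumption. For SMOTified-GAN, the only change indicated in the text is that the generator input becomes the 30-dimensional SMOTE-generated samples instead of 100-dimensional random noise.

```python
# Minimal Keras sketch of the GAN used for fraud-sample generation (assumptions noted above).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_generator(input_dim=100):
    # For SMOTified-GAN, input_dim would be 30 (SMOTE samples replace the noise).
    return keras.Sequential([
        layers.Input(shape=(input_dim,)),
        layers.Dense(256), layers.LeakyReLU(), layers.Dropout(0.2),
        layers.Dense(128), layers.LeakyReLU(), layers.Dropout(0.2),
        layers.Dense(64),  layers.LeakyReLU(), layers.Dropout(0.2),
        layers.Dense(30, activation="sigmoid"),       # 30 rescaled features in [0, 1]
    ])

def build_discriminator():
    d = keras.Sequential([
        layers.Input(shape=(30,)),
        layers.Dense(128), layers.LeakyReLU(), layers.Dropout(0.2),
        layers.Dense(64),  layers.LeakyReLU(), layers.Dropout(0.2),
        layers.Dense(32),  layers.LeakyReLU(), layers.Dropout(0.2),
        layers.Dense(1, activation="sigmoid"),
    ])
    d.compile(optimizer=keras.optimizers.SGD(learning_rate=0.05),
              loss="binary_crossentropy")
    return d

generator = build_generator()
discriminator = build_discriminator()

# Stacked model used to train G; D's weights are frozen only inside this model.
discriminator.trainable = False
gan = keras.Sequential([generator, discriminator])
gan.compile(optimizer=keras.optimizers.SGD(learning_rate=0.05), loss="mse")

def train_gan(X_fraud, epochs=1000, batch_size=32):
    """X_fraud: numpy array of the 394 real fraud rows from the training set."""
    for _ in range(epochs):
        idx = np.random.randint(0, len(X_fraud), batch_size)
        noise = np.random.normal(size=(batch_size, 100))
        fake = generator.predict(noise, verbose=0)
        # Train D on real (label 1) and generated (label 0) fraud samples.
        discriminator.train_on_batch(X_fraud[idx], np.ones((batch_size, 1)))
        discriminator.train_on_batch(fake, np.zeros((batch_size, 1)))
        # Train G through the stacked model: try to make D output 1 for fakes.
        gan.train_on_batch(noise, np.ones((batch_size, 1)))
```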
SMOTE+GAN To address the limitations of SMOTE and GAN, a hybrid approach was proposed and employed to enhance the ratio of fraudulent data in the training dataset.The SMOTEgenerated and GAN-generated fraud samples were directly combined with the original training dataset without any alterations, as depicted in Figure 4.The combined dataset comprised an equal contribution from both the SMOTE-and GAN-generated samples, amounting to half of the total required generated data. GANified-SMOTE Another hybrid method, GANified-SMOTE, was implemented.The random interpolation makes SMOTE-generated samples susceptible to the noise present in the dataset.Consequently, the generated minority samples are located near the boundary of the majority class, leading to higher misclassification rates.Conversely, GAN can learn the underlying patterns of the minority class, reducing the impact of such noise.By utilizing GAN-generated data for SMOTE interpolations, the limitations of SMOTE can be overcome.Additionally, applying SMOTE on the GAN-generated data can decrease reliance on the prominent patterns of the minority class, thereby mitigating overfitting.Figure 5 illustrates the utilization of fraud samples generated by the GAN, which are then processed with SMOTE to generate the necessary number of fraud samples.The resulting output from SMOTE is combined with the original training dataset, which originally contained 394 authentic fraud data points. Summary of Data Generation Methods Table 1 presents an overview of the types and quantities of fraud data utilized in each data generation method.Two experiments were conducted for each method to assess the impact of varying amounts of generated data in the training dataset.In the first experiment (Test A), the training dataset was adjusted to achieve a balanced distribution of 50% fraud and 50% nonfraud samples.In the second experiment (Test B), only 788 fraud samples were generated, twice the number of the original fraud data in the training dataset.This choice was based on the finding (Fiore et al. 2019) that injecting twice as many GAN-generated fraud samples as the original fraud data produced the optimal outcome. 
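A minimal sketch of the data flow of the two hybrid schemes is given below. The nearest-neighbour interpolation function is a bare-bones stand-in for SMOTE with k = 5, and generate_gan stands for sampling from the trained GAN generator of the previous snippet; the even split of the required samples and the number of GAN seeds are assumptions.

```python
# Sketch of the SMOTE+GAN and GANified-SMOTE data flows (hypothetical helper names).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_like(X_minority, n_new, k=5, seed=0):
    """Interpolate between each sample and one of its k nearest neighbours (SMOTE-style)."""
    X_minority = np.asarray(X_minority)
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_minority)
    _, idx = nn.kneighbors(X_minority)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_minority))
        j = idx[i][rng.integers(1, k + 1)]        # skip position 0 (the sample itself)
        lam = rng.random()
        out.append(X_minority[i] + lam * (X_minority[j] - X_minority[i]))
    return np.array(out)

def smote_plus_gan(X_fraud, generate_gan, n_new):
    """SMOTE+GAN: half of the synthetic frauds from SMOTE, half from the GAN."""
    half = n_new // 2
    return np.vstack([smote_like(X_fraud, half), generate_gan(n_new - half)])

def ganified_smote(X_fraud, generate_gan, n_new):
    """GANified-SMOTE: SMOTE-style interpolation applied to GAN-generated frauds."""
    seeds = generate_gan(len(X_fraud))            # GAN output serves as the interpolation seeds
    return smote_like(seeds, n_new)
```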
For both experiments, fraud samples were generated using five data generation techniques after splitting the complete dataset into training and testing sets.The testing dataset was not utilized for data generation to ensure that the validation conducted using these unseen data reflects the model's performance when applied to real-world financial fraud detection systems, as these systems encounter unseen data.A preliminary investigation was conducted to assess several hyperparameter configurations of the FNN (Feed-forward Neural Network) along with SMOTE-generated samples to tackle the problem of class imbalance.The two most effective models were selected as classifiers to evaluate all the data generation techniques.Table 2 contains the hyperparameters used for these models.Both models employed the Rectified Linear Unit (ReLU) activation function for their hidden layers.To counter overfitting, dropout layers with a dropout rate of 0.1 were inserted after each hidden layer.The output layer utilized the sigmoid activation function to ensure that the output probabilities fall within the range of 0 to 1, representing the likelihood of a transaction being fraudulent.For the loss function, binary cross-entropy was employed.Due to the substantial size of the training dataset, a batch size of 128 was chosen, and the training process was executed over 100 epochs, allowing for multiple iterations to refine the model. CNN Similarly to the FNN, various hyperparameter configurations of the CNN were tested, and the two best-performing models were selected for further investigation.The hyperparameters for these models are presented in Table 2.Both models began with an input layer of dimensions (30, 1).Subsequently, a 1D convolutional layer and a max-pooling layer were incorporated, followed by a flattening layer and a dense layer consisting of 50 neurons, utilizing the ReLU activation function, along with a dropout layer featuring a dropout rate of 0.1.The output layer consisted of a single neuron activated by the sigmoid function.The kernel size and pool size for both models were set to 3 and 2, respectively.The initial findings indicated that the CNN models reached a stable loss and accuracy after the 50th epoch.Consequently, the number of training epochs was set to 50, providing sufficient time to refine the models and observe their performance. FNN+CNN FNN and CNN models tend to misclassify nonfraudulent transactions, while demonstrating an intuitive ability to identify the same fraudulent transactions.Consequently, this study integrated the two models to enhance the final prediction, aiming to reduce the false-positive rate within the fraud class.By leveraging the strengths of both models and combining their insights, it was anticipated that the integrated approach would yield improved accuracy and more reliable identification of fraudulent transactions. 
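Before turning to the two integration methods, the classifiers described above can be sketched in Keras as follows. The activation functions, dropout rate of 0.1, sigmoid output, binary cross-entropy loss, kernel and pool sizes, and epoch counts follow the text; the hidden-layer widths, filter count, and optimizers are placeholders, since Table 2 is not reproduced here.

```python
# Keras-style sketch of the FNN and CNN classifier families (layer widths are placeholders).
from tensorflow import keras
from tensorflow.keras import layers

def build_fnn(hidden=(64, 32, 16)):
    model = keras.Sequential([layers.Input(shape=(30,))])
    for n in hidden:
        model.add(layers.Dense(n, activation="relu"))
        model.add(layers.Dropout(0.1))
    model.add(layers.Dense(1, activation="sigmoid"))
    # SGD is used here because the text reports it outperformed Adam for the FNN.
    model.compile(optimizer="sgd", loss="binary_crossentropy", metrics=["accuracy"])
    return model    # trained with batch_size=128, epochs=100 in the paper

def build_cnn(filters=32):
    model = keras.Sequential([
        layers.Input(shape=(30, 1)),
        layers.Conv1D(filters, kernel_size=3, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Flatten(),
        layers.Dense(50, activation="relu"),
        layers.Dropout(0.1),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model    # trained for 50 epochs in the paper

# Example usage (X_res, y_res from the resampled training set):
# fnn = build_fnn(); fnn.fit(X_res, y_res, batch_size=128, epochs=100)
# cnn = build_cnn(); cnn.fit(X_res[..., None], y_res, batch_size=128, epochs=50)
```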
Method I: The final prediction is classified as fraudulent only if both FNN and CNN predict the transaction as fraudulent.The detailed processes and decision steps of this method are depicted in Figure 6a.Both the FNN and CNN output a probability of a transaction being fraudulent, where a value greater than 0.5 is considered fraudulent.Hence, the integrated model predicted a transaction as fraudulent only if both models' output surpassed 0.5.Intuitively, it is improbable for a nonfraudulent transaction to be classified as fraudulent by both models, given their tendency to learn distinct patterns associated with fraud and nonfraud.This integration reinforces the reliability of the fraud prediction, since it must satisfy both conditions.Method II: The initial study demonstrated that the first method successfully enhanced the precision of fraud detection but resulted in a decrease in the recall.Therefore, another method was proposed to increase recall while maintaining a high precision.In certain instances, one of the models produces a value close to 1, indicating a high probability of the transaction being fraudulent.Conversely, the other model generates a value below but close to 0.5.According to the first method, these transactions would be predicted as nonfraudulent.However, intuitively, such transactions are more likely to be fraud, since one of the models strongly indicates fraud.To address this scenario, the sum of the output values from both models is utilized to make the final prediction.If the sum exceeds a selected threshold, the prediction will be fraudulent.The detailed processes and decision steps of this method are depicted in Figure 6b. Deep Learning Models with SMOTE-Generated Data In this study, two FNN and two CNN models were developed to determine the optimal configuration, and their specific hyperparameters are outlined in Table 2.The training process took place on a machine equipped with an 11th Gen Intel Core i7-11375H CPU, 16GB RAM, Intel Iris Xe Graphics, and NVIDIA GeForce RTX 3060 Laptop GPU.The training dataset consisted of an equal distribution of fraud and nonfraud data, with the fraud samples generated using the SMOTE technique (refer to Section 3.2.1).To evaluate their performance, these variations were tested on the testing dataset, and various metrics were employed, including training accuracy, loss, and time, as well as testing precision (PR), recall (RC), F1-score (F1), and root mean squared error (RMSE). In Table 3, all the top-performing models achieved impeccable precision, recall, and F1-score (PR = RC = F1 = 1.00) for the nonfraud class, and their recalls in the fraud class were satisfactory (RC ≥ 0.85).However, their precision and F1-score in the fraud class did not meet the desired criteria.FNN1, which utilized a lower learning rate, demonstrated higher precision and F1-score compared with FNN2.Similarly, CNN2, with fewer filters, exhibited higher precision and F1-score compared with CNN1, albeit with a slightly lower precision.Consequently, FNN2 and CNN2, boasting the highest F1-score within their respective FNN and CNN models, were selected as the top-performing models for integration (FNN+CNN).It is worth noting that FNN models using the SGD optimizer yielded superior results compared with those employing Adam, while the opposite was observed for CNN.Additionally, CNN's training time was longer than that of FNN due to the fewer epochs utilized. 
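The two integration rules can be written as simple decision functions over the two models' output probabilities, as sketched below; the default threshold is only a placeholder, not the value selected in the study.

```python
# Sketch of the FNN+CNN integration rules (Methods I and II) described above.
import numpy as np

def method_one(p_fnn, p_cnn):
    """Fraud only if both models individually predict fraud (output > 0.5)."""
    return ((p_fnn > 0.5) & (p_cnn > 0.5)).astype(int)

def method_two(p_fnn, p_cnn, threshold=1.3):
    """Fraud if the summed outputs exceed a chosen threshold (placeholder value)."""
    return ((p_fnn + p_cnn) > threshold).astype(int)

# Example usage with the classifiers from the previous sketch:
# p_fnn = fnn.predict(X_test).ravel()
# p_cnn = cnn.predict(X_test[..., None]).ravel()
# y_pred = method_two(p_fnn, p_cnn)
```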
Table 3.The results of the two top-performing models within FNN and CNN variations are presented.The selection of the best model was based on comparing their recalls first, followed by their precision.This research employed four selected FNNs and CNNs to evaluate the impact and performance of the generated data.The results can be found in Table 4.When employing the identical data generation method and incorporating an equal quantity of fraudulent data, both versions of FNN and CNN yielded comparable outcomes.This demonstrates that the outcomes derived from a data generation method are not significantly influenced by the classifier's parameters.The FNN generally yielded a higher recall compared with the CNN, albeit with lower precision.In test A, both SMOTE and SMOTE+GAN exhibited significantly lower precision and F1-score across all models, despite demonstrating a high recall.However, there was substantial improvement observed in test B for these two methods, where synthetic fraud samples were injected at a ratio of twice the original records.Additionally, SMOTE+GAN achieved the highest F1-score in three out of four models during test B. The proposed GANified-SMOTE generally yielded slightly higher precision and F1-score than GAN, despite having a lower recall.This can be attributed to GAN's ability to capture the original fraud data's characteristics, resulting in the GAN-generated fraud samples being clustered in regions with a high concentration of the original fraud data.The SMOTE's application on the GAN-generated samples generates additional fraud samples in between them, potentially causing a 'blurring' effect on the fraud data's features.This could explain the generally lower recall of GANified-SMOTE in comparison with GAN.As a tradeoff, GANified-SMOTE achieves a lower false positive rate, leading to higher precision compared with GAN.When compared with SMOTified-GAN, the proposed GANified-SMOTE generally demonstrated slightly lower precision and F1-score, while maintaining a similar recall.SMOTified-GAN generates fraud samples using SMOTE-generated samples as input, which can result in the production of more realistic and diverse samples.The SMOTified-GAN-generated samples tended to be more centrally distributed within fraud areas and less centrally distributed within nonfraud areas, which could explain the higher precision and F1-score observed. FNN+CNN The top-performing models from the FNN and CNN were combined to create two distinct hybrid FNN+CNN methods, as illustrated in Figure 6.The results of the FNN+CNN approach for Method I and II are depicted in Table 5. 
Method I exhibited improved precision for detecting fraudulent cases compared with using FNN or CNN models alone.In Test A, the SMOTE and SMOTE+GAN showed significant improvements in precision, despite a slight decrease in recall, particularly when compared with the FNN model.This decline can be attributed to the fact that fraud predictions must meet two distinct conditions, resulting in a reduced number of predicted fraud cases.Consequently, the fraud class's precision increases, since precision is determined by the ratio of true fraud cases to predicted fraud cases.However, this trade-off leads to a decrease in the fraud class's recall.Nevertheless, the overall F1-score exhibited a slight increase compared with the individual FNN and CNN models.In Method II, different thresholds ranging from 1.1 to 1.9 were tested to determine the optimal threshold value.Since Method II's goal was to enhance recall, the best threshold value was determined based on recall. Overall, Method II yielded better results than Method I.The findings demonstrated that the proposed hybrid FNN+CNN approach in Method II outperformed the FNN and CNN models individually.Similar to the observations on the FNN and CNN, injecting twice the number of fraud samples as the original fraud data using SMOTE and SMOTE+GAN yielded better performance than a 50:50 distribution of fraud and nonfraud samples.The performance of GAN, SMOTified-GAN, and GANified-SMOTE was not significantly affected by the number of injected fraud samples.The proposed GANified-SMOTE technique achieved the highest precision for both integration methods and also exhibited high F1score and recall.This may be attributed to the pure variations in the FNN and CNN used in the hybrid model performing well with GANified-SMOTE.However, the GANified-SMOTE's performance on the FNN and CNN variations was similar.Therefore, it can be concluded that the proposed GANified-SMOTE can achieve high performance regardless of the number of injected fraud samples. Comparison with Existing Studies To evaluate the proposed methodologies, a comparison was made with previous studies that utilized the same dataset, as presented in Table 6.Given the trade-off between precision and recall, attaining flawless outcomes for models, whether existing or proposed, remains elusive.The outcomes observed by (Fiore et al. 2019) and (Sharma et al. 2022) upon applying SMOTE, GAN, and SMOTified-GAN exhibited relatively modest recalls (below 0.80), indicating limited detection of fraudulent transactions.Consequently, their F1-scores generally trailed behind those of the proposed methods.This serves to illustrate the proficiency of the proposed models in effectively identifying fraudulent transactions while upholding a minimal misclassification rate for nonfraudulent data. All the implemented techniques exhibited higher recall rates compared with the existing studies.However, this improvement came at the expense of lower precision when compared with previous research.One potential explanation for this discrepancy could be the differences in the classifiers utilized.Previous studies Fiore et al. (2019); Sharma et al. (2022) employed classifiers with less than four layers, whereas the proposed classifier consisted of at least four.Consequently, the enhanced classifier was able to better learn the distinguishing characteristics of fraudulent data, improving the identification of such instances.However, this also led to an increased misclassification of nonfraudulent data. 
Another factor that could have influenced the outcomes is the stochastic nature of the SMOTE (and the GAN) and deep learning models (the FNN).Despite the lower precision, the F1-score of the proposed methodologies surpassed that of the previous studies, except for SMOTE on Test A. This observation highlights the significant impact of classifier parameters on its performance, irrespective of the data generation methods employed.This observation aligns with the previous findings, indicating an overall enhancement in the F1-score when utilizing a hybrid of FNN and CNN, regardless of the specific data generation methods employed.Nonetheless, data generation methods can still affect the performance of the same classifier in different variations.The results from Fiore et al. (2019) and Sharma et al. (2022) demonstrated an increase in recall and F1-score when employing GAN as opposed to SMOTE.However, in this study, the implemented GAN did not improve recall but only enhanced the F1-score on Test A. The challenges in training the GAN may have resulted in the lower quality of generated fraudulent samples.Conversely, SMOTE's random interpolation may not effectively capture the distinguishing characteristics of fraud instances.Therefore, combining SMOTE and GAN in a hybrid approach could result in the two complementing each other and better a representation of the fraudulent data.The proposed SMOTE+GAN demonstrated a slight improvement in recall on Test A compared with SMOTE.Additionally, the implemented SMOTified-GAN and the proposed GANified-SMOTE successfully improved the F1-score. Conclusions The present study introduces SMOTE+GAN and GANified-SMOTE techniques as innovative solutions to counteract class imbalance, thereby offering financial institutions an effective tool for reducing losses due to fraudulent activities.Additionally, the integration of FNN and CNN in predicting transaction categories is proposed.The effectiveness of the newly proposed data generation methods was assessed against existing techniques using an FNN, CNN, and FNN+CNN as classifiers.The outcomes highlight the potency of GANified-SMOTE, particularly when coupled with the proposed FNN+CNN classifier, in augmenting the F1-score for fraudulent data.This high F1-score indicates the method's capacity to identify a substantial portion of fraudulent transactions with reduced misclassification of legitimate transactions.Notably, GANified-SMOTE and SMOTified-GAN consistently exhibit commendable performance across varying quantities of generated minority samples.Furthermore, the research underscores the significant impact of the classifier's hyperparameter settings on classification performance, irrespective of the employed data generation methods. 
In light of this experiment utilizing an online-acquired dataset, it is crucial to recognize that the study's findings may not perfectly simulate real-world scenarios marked by ever-evolving fraudulent behaviors.Future endeavors should validate the efficacy of the proposed methods within actual financial institutions.Moreover, while the experiment employs a labeled dataset with presumed accurate class labels, real-world datasets often pose the challenge of being unlabeled and necessitating comprehensive preprocessing.To tackle class labeling issues, future investigations could explore the potential of unsupervised learning in data generation.Furthermore, to firmly establish the effectiveness of the proposed methods, this study acknowledges that comparisons with existing research were limited.Factors like classifier selection may have influenced observed improvements.Therefore, to enhance generalizability, future research should involve additional classifiers and ablation studies.These efforts would serve to validate the performance of the data generation methods in diverse scenarios. Figure 2 . Figure 2. GAN architecture employed in this study consisting of a 5-layer FNN generator and discriminator, leading to the generation of the final minority samples depicted by the blue square. Figure 3 . Figure3.SMOTified-GAN architecture that employed SMOTE-generated samples as the input for the G, deviating from the traditional GAN approach that uses random noise.The final minority samples were produced and depicted in the blue square. Figure 4 . Figure 4. SMOTE+GAN architecture was employed to generate minority samples by directly incorporating SMOTE and GAN techniques.These generated samples were then merged with the original training dataset. Figure 5 . Figure 5. Proposed GANified-SMOTE architecture that involved the application of SMOTE to the GAN-generated samples, as indicated by the red dashed square. Figure 6 . Figure 6.Flowchart illustrating the two methods employed for FNN+CNN. Table 1 . Variations in the types and quantities of fraud data utilized for each data generation method. Table 2 . The hyperparameters of the two top performing models within FNN and CNN variations. Table 4 . The outcomes of employing five methods to generate data on four models were evaluated on two distinct minority samples. Table 5 . The integration of FNN2+CNN2 with Method I and II was evaluated using different data generation techniques on two minority samples, and the top-performing results for each measurement in tests A and B are highlighted in bold. Table 6 . Comparison of results obtained by existing studies and the proposed methods.Test A is the result for including twice-generated fraud samples as much as the original fraud samples, whereas Test B is the result for 50:50 fraud and nonfraud distributions.The highest Precision, Recall, and F1-score are highlighted in bold.
7,439.6
2023-09-05T00:00:00.000
[ "Computer Science", "Business" ]
Collimation method studies for next-generation hadron colliders In order to handle extremely-high stored energy in future proton-proton colliders, an extremely high-efficiency collimation system is required for safe operation. At LHC, the major limiting locations in terms of particle losses on superconducting (SC) magnets are the dispersion suppressors (DS) downstream of the transverse collimation insertion. These losses are due to the protons experiencing single diffractive interactions in the primary collimators. How to solve this problem is very important for future proton-proton colliders, such as the FCC-hh and SPPC. In this article, a novel method is proposed, which arranges both the transverse and momentum collimation in the same long straight section. In this way, the momentum collimation system can clean those particles related to the single diffractive effect. The effectiveness of the method has been confirmed by multi-particle simulations. In addition, SC quadrupoles with special designs such as enlarged aperture and good shielding are adopted to enhance the phase advance in the transverse collimation section, so that tertiary collimators can be arranged to clean off the tertiary halo which emerges from the secondary collimators and improve the collimation efficiency. With one more collimation stage in the transverse collimation, the beam losses in both the momentum collimation section and the experimental regions can be largely reduced. Multi-particle simulation results with the MERLIN code confirm the effectiveness of the collimation method. At last, we provide a protection scheme of the SC magnets in the collimation section. The FLUKA simulations show that by adding some special protective collimators in front of the magnets, the maximum power deposition in the SC coils is reduced dramatically, which is proven to be valid for protecting the SC magnets from quenching. the collimator jaws with their momentum modified only slightly in direction, but significantly in magnitude. In other words, this process converts the transverse halo particles into off-momentum halo particles. Thus, those protons will be able to escape from the transverse collimation system and are lost as soon as they reach the downstream DS section. In order to largely reduce irradiation dose rate to the SC magnets at the downstream DS section of the betatron collimation insertion (IR7) for each beam, two local collimators need to be added in the DS section. However, there is not enough space for additional collimators due to the compact design in the DS region. One solution to create space for additional collimators is to movethe cold magnets in the DS section [13]. Another solution is to replace the two original main dipoles of 8.3 T by two new shorter dipoles with new Nb3Sn magnet technology which can work at 11 T [14]. For the design of future proton-proton colliders, due to the increasing probability of single diffractive interaction with the increase in energy [15][16][17][18], the problem of beam losses in the DS where the dispersion starts to increase becomes more important and should be treated with greater care. For FCC-hh, an analogous solution to the HL-LHC with local dedicated protection collimators in the DS [19], and the problem of DS losses can be almost solved. However, it would be costly to make this kind of arrangement in the DS regions, since the same space arrangement will be applied to all the arcs due to the symmetry of the ring. 
In this paper, a different approach is presented, which arranges both the transverse and momentum collimation in the same cleaning insertion. In this way, the downstream momentum collimation system will clean off the particles with large momentum deviation including those experience single diffractive interactions with the primary transverse collimators. In this way, one can get rid of beam losses in the DS regions and design the arc lattice as compact as possible. However, the challenge of this method is how to join two different collimation sections. In general, the transverse collimation section is designed to be approximately dispersion-free, but relatively large dispersions are required at the locations of primary collimators in the momentum collimation section [20]. Different from the momentum collimation section at the LHC where dispersion is intentionally designed non-zero between the two adjacent DS sections, thus a chicane-like and achromatic design for the momentum collimation section is adopted here. The schematic of the method is depicted in Fig. 1, where arc dipole magnets with a simplified design of only single aperture instead of twin aperture are used, where the associated cryomodules should be designed specially to allow the pass-through of another beam pipe. In this way, sufficient longitudinal space can be available here to add necessary protective collimators and shielding in room-temperature. Detailed studies including lattice design and multi-particle simulations using the MERLIN code [18,[21][22][23] have been carried out to check the validity of the method. The result shows that this method works as expected and the beam losses at the downstream DS can be suppressed. In addition, SC quadrupole magnets with special protection are being considered to provide more phase advance in the transverse collimation section, where room-temperature magnets are used due to high radiation dose rate. This measure can enhance the transverse collimation efficiency. FIG. 1. Layout of the combined transverse and momentum collimation method For next-generation hadron collider, the above method is a general and applicable solution for collimation system. As the first conceptual approach phase, the studies presented here are mainly based on the parameters of the Super Proton-Proton Collider (SPPC), which is the second phase of the CEPC-SPPC project [24]. The layout and main parameters of the SPPC are given in the Appendix. COLLIMATION The top beam kinetic energy of SPPC in the baseline design is 37.5 TeV, which is about five times of that at LHC. The energy of halo particles is too high to be dissipated in a straightforward way. One general method to stop high-energy protons is to use multi-stage collimators. Depending on the collimators' functions, they are divided into several families. The primary collimators will intercept or scatter the primary halo particles, and the secondary collimators will intercept the secondary beam halos that are formed by the particle's interaction with the primary collimators. Sometime tertiary collimators will intercept the so-called tertiary beam halos (i.e. what emerges from the secondary collimators). At LHC, tertiary collimators are placed at the IPs, which define the minimum machine aperture in the inner triplet magnets of the IR region and protect the bottlenecks, represented by the inner triplets at the interaction points with the smallest  * (the beta function at the interaction point). 
The absorbers will stop the hadron showers from the upstream collimators and additional collimators are used to protect the SC magnets. As critical components in ensuring the safe operation of an accelerator like SPPC, based on the experience at LHC [19], the material of collimators would have to meet requirements: good conductivity to reduce coupling impedance, high robustness to resist abnormal beam impacts, good absorption ability for cleaning efficiency [4]. Unfortunately, not all the three conditions can be fulfilled by the same material. A robust material, such as graphite, would increase the coupling impedance, which is important for collective beam instabilities and limits the machine performance. On the other hand, a good conductor, such as copper, is not robust enough, which means that the collimator jaws can be damaged even in the normal operation mode. Thus different materials will be used. As the closest objects to the circulating beam, primary and secondary collimators must withstand the highest dose of deposited energy without permanent damage. For this reason, they are made of robust carbon fiber-carbon composite. However, referring the collimation study for FCC-hh [25,26], for an extremely high stored beam energy of about 10 GJ, assuming the total beam loss within 0.2 hour in the transverse collimation section, primary or secondary collimators have to resist power load of several hundreds of kW, it is extremely challenging for the robustness of the collimators. As for the tertiary collimators and absorbers, due to lower heat power load, they are made of high Z material, such as copper and tungsten, which can absorb particles efficiently and reduce the impedance relatively. Meanwhile, the number of collimators and sharing of phase space coverage ensure that the large level of energy deposition is distributed among them, avoiding single device overloaded. Thus, the location for each collimator needs to be optimized according to the  functions, in order to obtain larger gap openings to reduce impedance issues and obtain appropriate phase advance between collimators to improve the cleaning efficiency. A. Requirements for the lattice design for transverse collimation At LHC, collimators represent more than 90% of the impedance of all the accelerator components [27], and they produce the transverse wall impedance which scales inversely proportional to the third power of collimator gap size. Thus one effective method to reduce the impedance is to enlarge the collimator gap, which means that the collimators must be located at large β values in the case of the unchanged ratio of gaps over beam size in σ (normalized transverse rms beam radius). In addition, with a larger β, the same change in Courant-Snyder invariant means a larger change in amplitude, which enhances the impact parameter and reduces the out-scattering probability. Therefore, the  function is required to be larger in the collimation insertion than in the arcs. To have high collimation efficiency in a multi-stage collimation system, the phase advance between different stages of collimators is also very important, thus a long insertion is needed to produce enough phase advances. For proton accelerators, transverse collimation plays a major role relative to momentum collimation, thus the former has higher requirements for the lattice design and collimators and will withstand higher radiation doses. 
According to the principles of two-stage betatron collimation [28], in the one-dimensional case the optimal phase advance μ_opt between the primary and secondary collimators should satisfy cos μ_opt = n_1/n_2, where n_1 and n_2 denote the apertures of the primary and secondary collimators in units of σ, respectively. For the two-dimensional case, the optimization becomes more complicated. At LHC, the long straight sections offer a phase advance μ_x,y ≈ 2π. In order to minimize the maximum betatron amplitudes of protons surviving the collimation system, the longitudinal positions of the collimators (equivalently, the phase advances between collimators) were optimized with the code DJ [29,30]. For next-generation colliders, a reasonable idea to improve transverse collimation efficiency is adding one more collimation stage to the four-stage collimation system used at LHC, which means a larger phase advance is needed in the transverse collimation section. On the premise of guaranteeing the beta functions without significantly increasing the total length of the collimation section, replacing warm quadrupoles by cold quadrupoles in the section is the only viable method. Next, we will explore the feasibility of this method in detail, together with the design scheme using conventional warm quadrupole magnets. B. Requirements for the lattice design for momentum collimation In general, a particle reaches the primary collimator with a mixture of betatron amplitude and momentum deviation. So we can define the largest momentum deviation δ_max with which a particle can pass through the primary momentum collimator by the following formula [28]: δ_max = n_1 √ε / η_1, where n_1 denotes the aperture of the primary momentum collimator in units of σ (containing the dispersive part), ε denotes the geometric rms emittance, and η_1 denotes the normalized dispersion at the collimator. If the maximum normalized dispersion in the primary momentum collimation section is larger than the one at the DS or the whole arc section, in principle there will be very little beam loss in the downstream DS section or even in all the arc sections, based on the fact that the arc aperture is larger than n_1. For more specific considerations, the normalized dispersion at the primary momentum collimator must satisfy [31] η_1 ≥ n_1 η_D,arc / (A_arc,inj(δ_p = 0) − n_2) to avoid cold losses at the DS or in the arc, where A_arc,inj(δ_p = 0) denotes the arc aperture for on-momentum particles in units of σ, η_D,arc is the normalized dispersion with errors in the focusing quadrupole magnets, and n_1 and n_2 denote the apertures of the primary and secondary momentum collimators. In addition, to ensure that the cut of the secondary halo is independent of the particle momentum, the dispersion derivative D′_x at the position of the primary momentum collimator must satisfy D′_x = 0 [28]. As the momentum collimation deals with a much smaller halo than the transverse collimation does, and the impact parameters at the primary momentum collimators are also much larger, the collimation efficiency is not a problem. C. Lattice Scheme I with room-temperature quadrupoles in the transverse collimation section In order to confirm the effectiveness of the novel method, the SPPC collimation system is used as a test-bench. As shown in the Appendix, two very long straight insertions, LSS1 and LSS5, each with a length of 4.3 km, are used for collimation and extraction, respectively. In a dedicated collimation section, warm quadrupoles are usually used for their high radiation resistance. However, for very high energy proton beams, the focusing strength is a problem. 
Thus we use quadrupole groups here each representing several quadrupole units arranged together and acting as one quadrupole. For the momentum collimation section, in order to produce the required dispersion, four groups of cold dipoles of arc dipole type are used. Meanwhile, cold quadrupoles are also used to control the betatron functions in the limited space. Figure 2 shows the optics in Lattice Scheme I for the SPPC collimation system, which is similar to the lattice design in FCC-hh to some extent [19,32]. The main parameters are listed in Table I. D. Lattice Scheme II with only SC magnets For a multi-stage collimation system, the primary and secondary collimators generate secondary and tertiary halo particles that extend several σ beyond the collimator settings, and some of them escape from the collimation insertion and are lost on the inner SC triplets at IPs where the apertures are reduced by the very large -functions. At LHC, in order to locally provide additional protection from the tertiary halo [33], 16 tertiary collimators in pairs are installed at each side of the four experiment insertions. These tertiary collimators also can protect the triplets from the mis-kicked beams, for example, due to failures of the normal conducting separation magnets [34]. One source of the machine-induced backgrounds at the detectors is due to the upstream interaction of beam protons with residual gas molecules or collimators. According to the study at LHC [35,36], the beam-gas interaction is the main contribution of background, higher than the beam-halo by one order of magnitude. For SPPC, the stored energy in the beam is as high as 9.1 GJ per beam, about 25 times of that of the LHC at design energy, and the development of hadronic and electromagnetic shower become more intense due to higher proton energy. It is foreseeable that the tertiary halo in the machine will be much more severe. One more stage of collimators installed in the transverse collimation section will convert the tertiary beam halo into quaternary beam halo, thus can help to dilute the halo particles in the experiments and reduce the risk of quenching in SC inner triplets, and the experimental background level may be reduced more or less. However, when warm quadrupoles are used, there is not enough phase advance to add additional collimators due to the weak focusing strength, or significant space will have to be added. As the space is so precious, therefore, we try to apply SC quadrupoles in the transverse collimation section to create more focusing cells. These quadrupoles are very different from those in the arcs, they will be designed with enlarged apertures and lower pole strength (no higher than 8 T), and are somewhat comparable to the triplet quadrupoles used in the experiment insertions at LHC. In this way, much higher transverse collimation efficiency can be obtained, so that the probability of particle losses in the downstream momentum collimation section and the residual halo at the experiments will be reduced largely. The phase advance between the secondary and tertiary collimators should be similar as the one between the primary and secondary collimators, assuming that most of the tertiary beam halo particles are emitted from the secondary collimators. Figure 3 shows the lattice functions. The main parameters are listed in Table II. Same as for Lattice Scheme I, we also need to consider the two collimation systems for each beam in one insertion. 
The distance between the two beams is set to about 30 cm in the arcs, which is considered enough to install one collimator for one beam but cannot accommodate an additional collimator at the same location for the other beam. In the momentum collimation section, the horizontal separation from the other beam is enlarged to 1.64 m, which allows the installation of the collimators for both beams. Meanwhile, we apply SC quadrupoles with twin apertures for the two beams in the overlapping region with nominal separation. The layout of the collimation section is shown in Fig. 4. FIG. 4. Layout of the collimation insertion. P/S/T/AB denote primary collimator, secondary collimator, tertiary collimator and absorber. A. Collimation inefficiency To quantify the performance of the collimation system more precisely, the local cleaning inefficiency η̃_c is defined as the ratio of the number N_i of protons lost at any location of the ring in a given bin of length L_i (set to 10 cm in general) to the total number N_tot of lost protons [4]: η̃_c = N_i / (L_i N_tot). For slow and continuous losses, the circulating intensity in the machine can be described as N(t) = N_0 e^(−t/τ), where N_0 is the nominal intensity and τ is the finite beam lifetime. At LHC, in order to ensure commissioning and performance in nominal running, conservative minimum lifetimes τ_min are assumed as 0.2 hour at top energy and 0.1 hour at injection energy [37]. For an operation with the minimum beam lifetime τ_min, the total intensity N^q_tot is limited by the quench limit R_q according to N^q_tot ≤ R_q τ_min / η̃_c, where η̃_c is the local cleaning inefficiency as defined in Eq. (5), and the quench limit R_q, in units of protons/m/s, is related to the transmission capability and the maximum deposited energy density, which defines the allowed maximum local proton loss rate [38]. Figure 5 shows the maximum total intensity at the quench limit as a function of the local cleaning inefficiency, assuming that minimum beam lifetimes of 0.1 hour at the injection energy and 0.2 hour at the top energy must be satisfied, just like at the LHC. In the baseline design of SPPC, the SC magnets in the arcs use the full iron-based HTS technology [39], and the field strength of the main dipoles is 12 T. However, as the magnet technology is still being developed [40], the quench limit is not yet available. In this article, the quench limit value R_q for the SPPC arc magnets is estimated by an energy-scaling formula [41], where the energy E is in units of TeV. The same scaling was applied in the FCC-hh design [19] from the NbTi technology at LHC to the Nb3Sn technology at FCC-hh, assuming the same quench level of 5 mW/cm³ [38,42]. Thus the quench limit R_q is estimated as 0. B. Simulation results of the beam loss distributions Multi-particle simulations using the two lattice schemes described in Sections IIC and IID have been carried out with the MERLIN code, which is a C++ accelerator library that is easy to extend and modify. Its organizational structure can be found in Reference [43]. This code shows good agreement with the well-known collimation version of SixTrack (SixTrack+K2) following benchmarking work [44]. In the code, protons are considered lost if they undergo an inelastic interaction within the collimator jaws or if they intercept the mechanical beam pipe. The local cleaning inefficiency η̃_c is used as the measure of performance for the collimation simulations. 
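To make the relations above concrete, the short Python sketch below evaluates the optimal phase advance, the momentum cut of the primary momentum collimator, the local cleaning inefficiency, and the intensity limit imposed by the quench limit. All numerical inputs are placeholders chosen for illustration only; they are not SPPC design values.

```python
# Numerical sketch of the collimation relations used above (all inputs are illustrative).
import numpy as np

# Two-stage betatron collimation: optimal phase advance, cos(mu_opt) = n1/n2
n1, n2 = 7.6, 8.8                       # collimator apertures in units of sigma (assumed)
mu_opt = np.degrees(np.arccos(n1 / n2))

# Momentum cut of the primary momentum collimator: delta_max = n1*sqrt(eps)/eta1
eps  = 4.6e-12                          # geometric rms emittance [m rad] (assumed)
eta1 = 0.15                             # normalized dispersion at the primary collimator (assumed)
delta_max = n1 * np.sqrt(eps) / eta1

# Local cleaning inefficiency from a tracking run: losses per bin of length L_i
N_i, L_i, N_tot = 50.0, 0.1, 1.0e8      # lost protons in the bin, bin length [m], total losses (assumed)
eta_c = N_i / (L_i * N_tot)             # [1/m]

# Maximum stored intensity allowed by the quench limit: N_tot_max = R_q * tau_min / eta_c
R_q     = 1.0e6                         # quench limit [protons/m/s] (assumed)
tau_min = 0.2 * 3600.0                  # minimum beam lifetime at top energy [s]
N_tot_max = R_q * tau_min / eta_c

print(mu_opt, delta_max, eta_c, N_tot_max)
```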
Besides the arc sections, only the functional lattices of the collimation and experiment insertions have been used; all the other insertions, such as the RF, injection and extraction insertions, are replaced by periodic FODO structures. The physical aperture in the arcs is set to the inner aperture of the beam screen [45] that absorbs the synchrotron radiation; its cross section is a superposition of an ellipse and a rectangle with a mean radius of about 15 mm. In the transverse collimation section, the apertures of the warm quadrupoles are 60 mm, corresponding to more than 85 σ, in Lattice Scheme I, while the cold quadrupole apertures are enlarged to 70-80 mm, corresponding to more than 130 σ, in Lattice Scheme II, which is about the same as the triplet magnet apertures in the experiment regions. In the momentum collimation section, the quadrupole aperture is enlarged slightly, on the premise that the pole-tip field does not exceed the preset value of 8 T. The collimator parameters used in the simulations are listed in Table III, where T stands for transverse, M for momentum, P for primary, S for secondary, the second T for tertiary, Q for quaternary, AB for absorber and C for collimator; the collimator settings are quoted from the LHC design settings [34] and Run I operational settings [46]. In order to increase the accuracy of the calculated local cleaning inefficiency, 100 million protons are tracked for 300 turns in the SPPC ring. To save computing time, the initial beam distribution is represented by a so-called halo distribution. For example, for horizontal halo collimation the horizontal distribution consists of two short arcs with a radius equal to the TPC half-gap, and the vertical distribution is a Gaussian cut at 3 σ, as shown in Fig. 6 (a minimal sketch of such a distribution is given below). The impact parameter at the primary collimators is chosen as 1 μm, which corresponds to negligible emittance growth from the previous turn, following the settings used in simulations of the LHC collimation system [41]. Based on these parameter settings, simulations are carried out for both horizontal and vertical halo collimation. This simulation method and these assumptions maintain good computing performance, as illustrated in references [41,46,47]. With an initial horizontal halo distribution, the proton losses can be reduced by half by introducing 11 tertiary collimators, but this could still lead to cold dipole quenches if no further protection measures are taken. In contrast, with an initial vertical halo distribution, the tertiary collimators reduce the proton losses by about one order of magnitude. As shown in Fig. 7, for an initial horizontal halo distribution one sees significant proton losses at the cold dipoles due to the single-diffractive effect, even with the help of the tertiary collimators. To solve this problem, protective collimators (used as absorbers) can be placed there. Unlike the arc DS regions, where the lattice structure is very constrained and the space is very tight, it is much easier to provide space for room-temperature collimators in the momentum collimation section. According to the positions of the lost particles, three protective collimators made of tungsten, with an aperture of 10 σ and a length of 1 m, the same as the absorbers in the transverse collimation section, are placed there to intercept the particles related to the single-diffractive effect.
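The halo distribution referred to above can be sketched as follows. This is a minimal illustration in normalized phase-space coordinates, not the MERLIN input format; the half-gap value and the angular width of the arcs are placeholder assumptions.

```python
import numpy as np

def generate_halo(n, tcp_half_gap_sigma=7.0, arc_half_width=0.02, vert_cut=3.0, seed=1):
    """Horizontal plane: points on two short arcs of radius equal to the primary
    collimator half-gap (in units of sigma), mimicking particles grazing the jaws.
    Vertical plane: Gaussian truncated at `vert_cut` sigma. All values illustrative."""
    rng = np.random.default_rng(seed)
    # two short arcs centred on the two jaws (phases near 0 and pi)
    phi = rng.uniform(-arc_half_width, arc_half_width, n) + rng.choice([0.0, np.pi], n)
    x, xp = tcp_half_gap_sigma * np.cos(phi), tcp_half_gap_sigma * np.sin(phi)
    # truncated Gaussian in the vertical plane
    y, yp = rng.normal(size=(2, n))
    bad = (np.abs(y) > vert_cut) | (np.abs(yp) > vert_cut)
    while bad.any():
        y[bad], yp[bad] = rng.normal(size=(2, bad.sum()))
        bad = (np.abs(y) > vert_cut) | (np.abs(yp) > vert_cut)
    return x, xp, y, yp

x, xp, y, yp = generate_halo(100_000)
```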
The specific locations of the three protective collimators are as follows: one is placed between the third and fourth dipole magnets of the first dipole group to intercept particles with very large momentum deviation, with the cryostat for this dipole group split into two parts to allow the insertion of a room-temperature collimator; another is placed before the quadrupole between the first and second groups of dipoles, to protect that quadrupole; the third is placed in front of the second group of dipole magnets. The proton loss distribution of the cleaning insertion with the protective collimators in place is shown in the corresponding figure. As mentioned earlier, the beam losses in the experiment regions are also a major concern. According to the simulation results, the tertiary collimators intercept the tertiary halo effectively: the proton losses at the quaternary collimators are reduced by more than one order of magnitude in the experiment region LSS7, and by a factor of four in the experiment region LSS3, compared to Lattice Scheme I; the results are shown in Fig. 11. This means that adding one more collimation stage in the transverse collimation section is very helpful in reducing the residual halo particles in the experiment regions.

FIG. 11. Proton loss distribution along the full ring in Lattice Scheme I (a) and II (b), with an initial vertical halo distribution.

A. Quench limits

When a high-energy proton interacts with dense matter, hadronic and electromagnetic cascades are the main process of energy deposition; they continuously produce thousands of low-energy particles until all of them are stopped and absorbed in the matter. These processes occur in the interactions between the primary protons and the collimators or vacuum chamber. If the secondary showers deposit energy in the SC magnet coils and the local energy or power deposition exceeds the quench limit, the SC magnets will experience a quench, i.e. a transition from the superconducting to the normal-conducting state [48]. In general, the quench limit is a function of the local magnetic field, the geometrical loss pattern, the operating temperature, the cooling conditions, and the time distribution of the beam losses [49]. In order to protect the SC magnets in the collimation section from quenches, it is very important to shield the particle showers and reduce the energy or power deposition in the magnet coils. In this section, we provide the protection schemes for the SC quadrupoles and dipoles used in the collimation section of Lattice Scheme II. For simplicity, only steady-state beam loss is considered, in which the heat in the coils is constantly removed by the helium bath through the cable insulation [49]. To reduce the energy deposition in the SC coils, the cold quadrupoles in the transverse collimation section are designed with enlarged apertures and a lower magnetic field. On the one hand, the larger aperture means a larger acceptance, so that the magnet intercepts as few particles as possible; on the other hand, the quench limit increases as the magnetic field decreases. As shown in Table II, the highest pole-tip magnetic field is 8 T, which is lower than that of the IR quadrupoles at the LHC. Considering the He II and helium boiling heat-transfer mechanisms, which allow extracting more heat from the cable than solid conduction through the cable insulation alone, the estimated quench limit in the cable of the cold quadrupoles ranges from 50 to 100 mW/cm3 [49,50].
For the SC dipoles used in the momentum collimation section, the magnetic field is 12 T, based on the full iron-based HTS technology planned for SPPC. However, some physical properties of this cable have yet to be determined. Thus a conservative estimate of the quench limit in the cable of the cold dipoles is 5-10 mW/cm3 [51], the same as for Nb3Sn cable.

B. Energy deposition in the SC quadrupoles

The Monte Carlo analysis of the energy or power deposition uses the proton loss distribution from the tracking simulations as the source term for the shower simulation; the magnetic fields are not included in the simulation. As mentioned in Section IIIB, in order to reduce the probability of particle losses in the SC coils of the quadrupoles, they are designed with a wider aperture, which for the case of QD in Fig. 12 is 80 mm. The coil material is a mixture of 50% niobium-titanium and 50% copper. The cross section of the SC quadrupoles used in the transverse collimation section is shown in Fig. 13, referring to the wide-aperture insertion-region quadrupoles at the LHC [48].

FIG. 13. Cross section of the SC quadrupole in the transverse collimation section.

As a high-Z material, tungsten has been chosen for the shielding, since it absorbs the particle shower most effectively. In the geometry model, the shielding is a hollow cylinder placed 1 m in front of the quadrupole, with a length of 3 m and an inner half-aperture of 10 mm, or about 37 σ; the outer radius is set to 300 mm to cover the yoke of the SC quadrupole. This has proven to be tight enough to intercept the particle shower, but wide enough not to violate the multi-stage collimation hierarchy. Referring to the energy deposition study for FCC-hh [26], with the assumption that the total beam is lost on the collimation system within a time period of 0.2 h, as used in the design of the LHC collimators, the maximum power on the dogleg warm dipoles is up to 1.1 MW, which is considered too high for the dipoles to be cooled easily. The situation at SPPC is similar. In this study, the loss rate corresponding to the total beam being lost in the collimators over one hour is used to calculate the power deposition; such an abnormal beam loss would normally trigger extraction of the beam to the external beam dump within a few seconds. Figure 14 shows the results for the maximum power deposition density, where the bins are chosen as a compromise between calculation precision and computing time: 0.5 cm in radius, 2° in azimuth and 5-10 cm in length (smaller bins in regions where the gradient is large) along the defocusing quadrupole QD. According to the FLUKA calculation, the total power on the shielding is up to 480 kW and the peak power density is about 1.3 kW/cm3, located at the front face of the shielding. This value may be too high for the tungsten to bear even for a short period; further optimization studies of the shielding material and structure, including a cooling system for the shielding, should be carried out in the future.

FIG. 14. Maximum power deposition density along the quadrupole QD after the primary collimators.

Compared to the case without shielding (red line), the maximum power deposition along QD with shielding is reduced by three orders of magnitude (blue line). In the shielded case, the peak power deposition is located at the first bin, i.e. the entrance of QD, with a value of 57.7 mW/cm3. A possible explanation for this peak is the shower emerging from the end part of the shielding block.
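As a rough cross-check of the heat-load numbers quoted above, the back-of-the-envelope sketch below relates them to the total power carried by lost protons under the stated one-hour-lifetime assumption. It assumes every lost proton deposits its full energy somewhere in the collimation insertion; it is an illustration, not part of the FLUKA analysis.

```python
# Back-of-the-envelope estimate under the one-hour beam-lifetime assumption.
E_stored_J = 9.1e9        # stored energy per beam quoted for SPPC, in joules
tau_s = 3600.0            # assumed beam lifetime of 1 hour, in seconds

P_loss_total_W = E_stored_J / tau_s            # average power carried by lost protons (~2.5 MW)
frac_shield = 480e3 / P_loss_total_W           # fraction intercepted by the quadrupole shielding
frac_prot_coll = 0.9e3 / P_loss_total_W        # fraction on the first protective collimator

print(f"total loss power at 1 h lifetime: {P_loss_total_W/1e6:.2f} MW")
print(f"fraction on shielding: {frac_shield:.1%}, on protective collimator: {frac_prot_coll:.2%}")
```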
To mitigate this peak, an optimized approach is to slightly increase the aperture of the rear half of the shielding, giving a so-called step-like shielding. Figure 15 shows the simulation results after this optimization, in which the aperture of the rear half of the shielding is enlarged from 10 mm to 10.5 mm. With the step-like aperture, the power deposition is reduced to below 30 mW/cm3, which is safe with respect to the quench limit of 50-100 mW/cm3.

C. Energy deposition in the SC dipoles

For the evaluation of the quench risk in the SC dipoles used in the collimation system, the fourth dipole of the first dipole group, which is closest to the first protective collimator, is considered, as it is the cold dipole receiving the highest radiation dose. Figure 16 shows the 3D geometry model in FLUKA and the cross section of the SC dipoles, in which the coil material is a mixture of 50% silver and 50% SmFeAsO0.8F0.2, one type of iron-based wire [54]. The input distribution of protons used in FLUKA is provided by the code MERLIN, which records the coordinates of the protons lost in the first protective collimator. The upstream shower is not included here, as it is considered to be cleaned off by the transverse collimators and absorbers. For a one-hour beam lifetime, the power load on this protective collimator is 0.9 kW. Figure 17 shows the power deposition in the coils along the two most exposed dipoles, B1-4 and B1-5. The maximum power deposition is 4.5 mW/cm3, which is below the quench limit of 5-10 mW/cm3. At present, additional shielding for the SC coils following the protective collimator is not necessary. However, this issue needs to be reconsidered for shorter beam lifetimes or possible upgrade plans; related studies have been carried out for FCC-hh [55].

V. CONCLUSIONS

A novel collimation method for future proton-proton colliders is proposed, which arranges both the transverse and momentum collimation systems in the same cleaning insertion and employs SC quadrupoles in the transverse collimation section. The design and simulation results with the SPPC parameters demonstrate the effectiveness of the method. Its two major features are: the momentum collimation section immediately following the transverse collimation section can effectively clean the particles with large momentum deviation produced in the transverse collimators, thus practically eliminating beam loss in the downstream DS section; and the application of SC quadrupoles in the transverse collimation section helps create one more collimation stage, which turns out to be very effective in reducing beam loss in the momentum collimation and experiment sections. Simulations with the FLUKA code have shown that, with suitable protection, the SC magnets in the collimation section can be kept safe from quenches caused by radiation effects. The main design goal of a collimation inefficiency of 3.55×10^-7 m^-1 in the cold regions can be fulfilled very well. Although the details have been worked out with the SPPC parameters, the method should be applicable more generally to proton colliders of this scale. Even with the risk of particle loss in the cold regions due to beam-halo cleaning eliminated, it is foreseen that for colliders with ultra-high luminosity the collision debris will give a significant contribution to particle losses around the experimental points.
Given the great challenges for the optics design and protection scheme of the experiment insertions, it may be effective to apply the momentum collimation method in the same long straight sections in order to avoid cold losses in the downstream DSs of the experiment regions. This work should be done in the future. SPPC is a complex accelerator facility and will be able to support research in different fields of physics, similar to the multi-use accelerator complex at CERN. Besides the energy-frontier physics program in the collider, the beams from each of the four accelerators in the injector chain can also support their own physics programs. The four stages, shown in Fig. 19, are a proton linac (p-Linac), a rapid cycling synchrotron (p-RCS), a medium-stage synchrotron (MSS) and the final-stage synchrotron (SS). This research can take place during periods when beam is not required by the next-stage accelerator.
7,987
2016-11-16T00:00:00.000
[ "Physics", "Engineering" ]
A Combined Numerical and Experimental Study of Heat Transfer in a Roughened Square Channel with 45° Ribs Experimental investigations have shown that the enhancement in heat transfer coefficients for air flow in a channel roughened with low-blockage (e/Dh < 0.1) angled ribs is on average higher than that in a channel roughened with 90° ribs of the same geometry. Secondary flows generated by the angled ribs are believed to be responsible for these higher heat transfer coefficients. These secondary flows also create a spanwise variation in the heat transfer coefficient on the roughened wall, with high levels of the heat transfer coefficient at one end of the rib and low levels at the other end. In an effort to investigate the thermal behavior of the angled ribs at elevated Reynolds numbers, a combined numerical and experimental study was conducted. In the numerical part, a square channel roughened with 45° ribs of four blockage ratios (e/Dh) of 0.10, 0.15, 0.20, and 0.25, each for a fixed pitch-to-height ratio (P/e) of 10, was modeled. Sharp as well as round-corner ribs (r/e = 0 and 0.25) in a staggered arrangement were studied. The numerical models contained the smooth entry and exit regions to simulate exactly the tested geometries. A pressure-correction-based, multiblock, multigrid, unstructured/adaptive commercial software package was used in this investigation. The standard high-Reynolds-number k-ε turbulence model, in conjunction with the generalized wall function for most parts, was used for turbulence closure. The thermal boundary conditions applied to the CFD models matched the test boundary conditions. In the experimental part, a selected number of these geometries were built and tested for heat transfer coefficients at elevated Reynolds numbers up to 150 000, using a liquid crystal technique. Comparisons between the test and numerically evaluated results showed reasonable agreement between the two for most cases. Test results showed that (a) 45° angled ribs with high blockage ratios (> 0.2) at elevated Reynolds numbers do not exhibit a good thermal performance, that is, beyond this blockage ratio the heat transfer coefficient decreases with the rib blockage, and (b) CFD could be considered a viable tool for the prediction of heat transfer coefficients in a rib-roughened test section.

INTRODUCTION

The heat transfer coefficient in a channel flow can be increased by roughening the walls of the channel. One such method, used over the past thirty years in internal cooling passages, is to mount rib-shaped roughness elements on the channel walls. These ribs, also called turbulators, increase the level of mixing of the cooler core air with the warmer air close to the channel wall, thereby enhancing the cooling capability of the passage. Geometric parameters such as channel aspect ratio (AR), rib height-to-passage hydraulic diameter or blockage
ratio (e/Dh), rib angle of attack (α), the manner in which the ribs are positioned relative to one another (in-line, staggered, crisscross, etc.), rib pitch-to-height ratio (P/e), and the rib shape (round versus sharp corners, fillets, rib aspect ratio (ARt), and skewness towards the flow direction) have pronounced effects on both the local and the overall heat transfer coefficients. Some of these effects were studied by different investigators such as Burggraf [1], Chandra et al. [2,3], Han et al. [4,5,6], Metzger et al. [7], Taslim et al. [8,9], and Webb et al. [10]. Among the geometries closest to the present investigation are those studied in the papers by Lau et al. [11] and Han et al. [12]. These last two references deal with the heat transfer characteristics of turbulent flow in a square channel with angled discrete ribs. The first investigated the heat transfer performance of 30° and 90° discrete, parallel, and crossed ribs. The second investigation studied the augmentation of heat transfer in square channels roughened with parallel, crossed, and V-shaped ribs. While their rib pitch-to-height ratio of 10 was identical to that in this study, the rib height-to-channel hydraulic diameter ratio was 0.0625 in both investigations, which is below the range tested in the present investigation (0.083-0.167). However, the results for the smallest rib tested in this investigation are compared with those reported in the two above-mentioned references.

COMPUTATIONAL MODELS

The computational models were constructed for a 45° rib-roughened channel with eight ribs on each side in a staggered arrangement. The domain included the entry and exit regions, exactly simulating the test setup. Numerical models were meshed and run for ribs with all sharp corners as well as for ribs with round top corners of r/e = 0.25. The rib blockage ratio (e/Dh) varied as 0.10, 0.15, 0.20, and 0.25, while the rib pitch-to-height ratio (P/e) remained constant at 10. Figure 1 shows a representative meshed domain for the rib geometry of e/Dh = 0.25, which includes the air flow region between three ribs on the bottom wall and two ribs on the top wall. This arrangement continues on both sides to cover the entire channel length of 152.4 cm with a total of sixteen ribs. The computational domain size for the other rib geometries was the same. The CFD analysis was performed using the Fluent/UNS solver by Fluent, Inc., a pressure-correction-based, multiblock, multigrid, unstructured/adaptive solver. The standard high-Reynolds-number k-ε turbulence model, in conjunction with the generalized wall function, was used for turbulence closure. Other turbulence models available in this commercial code, short of a two-layer model, which required a change in mesh arrangement for each geometry and was beyond the scope of this investigation, were also tested and did not produce results significantly different from those of the k-ε model. Mesh independence was achieved at about 700 000 cells for a typical model. Cells in all models were entirely hexagonal, a preferred choice for CFD analyses, and were varied in size bigeometrically from the boundaries to the center of the computational domain in order to have a finer mesh close to the boundaries. Figure 2 shows the details of the mesh distribution on the surface of the domain. It is seen that there are regions of high mesh concentration close to the rib surfaces in order to capture the viscous effects in the recirculating zones.
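Because the standard k-ε model with generalized wall functions requires the first near-wall cell to sit in the log-law region (roughly 30 < y+ < 300), a quick estimate of the corresponding physical cell height is useful when building such meshes. The sketch below is an illustration under stated assumptions (smooth-wall Blasius friction, standard air properties, the 5.08 cm square channel described in the test-section part, rib effects ignored); it is not a value reported in the paper.

```python
# Rough first-cell-height estimate for a wall-function mesh in the ribbed channel.
nu = 1.5e-5          # kinematic viscosity of air, m^2/s (assumed)
D_h = 0.0508         # hydraulic diameter of the square channel, m
Re = 150_000         # upper end of the tested Reynolds number range

U = Re * nu / D_h                      # bulk velocity, m/s (~44 m/s)
cf = 0.079 * Re ** -0.25               # Fanning friction coefficient (Blasius, smooth wall)
u_tau = U * (cf / 2.0) ** 0.5          # friction velocity, m/s

for y_plus in (30, 100):
    y = y_plus * nu / u_tau            # physical wall distance for the target y+
    print(f"y+ = {y_plus:3d} -> first-cell height ~ {y * 1e3:.2f} mm")
```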
TEST SECTIONS

Figure 3 shows schematically the layout of the test apparatus, the channel cross-sectional area, and details of the rib geometry for a typical test setup. The steady-state liquid crystal technique was employed to measure the heat transfer coefficients between a pair of ribs in these test sections. In this technique, the most temperature-sensitive color displayed by the liquid crystals is chosen as the reference color corresponding to a known temperature. By sensitive variation of the Ohmic power to a constant-heat-flux thin-foil heater beneath the liquid crystals, the reference color is moved from one location to another such that the entire area between two ribs is eventually covered by the reference color at constant flow conditions. This process results in a series of photographs, each corresponding to a certain location of the reference color. The area covered by the reference color in each photograph is then digitized, and an area-weighted average heat transfer coefficient is calculated along with the iso-Nu contours. The test section, with a length of 152.4 cm, had a 5.08 cm × 5.08 cm cross-sectional area. Three walls of this channel were made of 1.27-cm-thick clear acrylic plastic. The fourth wall, on which the heaters and liquid crystal sheets were attached and all measurements were taken, was made of a 10.16-cm-thick machinable polyurethane slab. Ribs were machined to size from acrylic plastic stock and were mounted on two opposite walls in a staggered arrangement at a 45° angle with the channel flow. The entrance region of all test sections was left smooth to produce well-established hydrodynamic and thermal boundary layers. Heat transfer measurements were performed for an area between a pair of ribs in the middle of the roughened zone, corresponding to X/Dh = 15. Five 5.08 × 27.94 cm custom-made etched-foil heaters with a thickness of 0.15 mm were placed on the polyurethane wall where measurements were taken, using a special double-stick 0.05-mm-thick tape with minimal temperature deformation characteristics. The heaters covered the entire test section length, including the smooth entry length. However, they did not extend over the actual rib surfaces nor onto the acrylic plastic sidewalls. Thus the reported heat transfer coefficients are averages over the wall surface area between a pair of ribs. Heat transfer coefficients on the rib surfaces themselves are reported by investigators such as Metzger et al.
[13], Korotky and Taslim [14,15], and Taslim and Lengkong [16,17]. Encapsulated liquid crystals sandwiched between a mylar sheet and a black-paint coat, collectively having a thickness of 0.127 mm, were then placed on the heaters. The test sections were covered on all sides by 5-cm-thick styrofoam sheets to minimize heat losses to the environment, except for a small window on the opposite wall at the location where photographs of the liquid crystals were taken. The radiative heat loss from the heated wall to the unheated walls, as well as losses to the ambient air, were taken into consideration when heat transfer coefficients were calculated. A 35 mm programmable camera, in conjunction with proper filters and background lighting to simulate daylight conditions, was used to take photographs of the isochrome patterns formed on the liquid crystal sheet. Surface heat flux in the test section was generated by the heaters through a custom-designed power supply unit. Each heater was individually controlled by a variable transformer. Before testing, the liquid crystal sheets were calibrated in a water bath and the reference color was measured. A contact micromanometer with an accuracy of 0.025 mm of water column measured the pressure differential across the rib-roughened channel. A critical venturi meter, with choked flow for all cases tested, measured the total mass flow rate entering the test section. Experimental uncertainties, following the method of Kline and McClintock [18], were estimated to be 6% for the heat transfer coefficient.

RESULTS AND DISCUSSION

Figure 4 shows a comparison between the numerical and experimental results for all rib blockage ratios and a typical pitch-to-height ratio of P/e = 10. For consistency, the numerical results of region 4 (see Figure 3), corresponding to the location of the camera where data were collected, are compared with the test results. While a maximum difference of about 20% is observed for the ribs with the highest blockage ratio at the lowest Reynolds number, the differences are reduced to about 2% for the intermediate blockage ratios of 0.15 and 0.2, especially at the higher end of the Reynolds number range. The agreement between the numerical and test results for the lower-blockage ribs is very encouraging, given that a combination of moderate mesh distribution and the traditional k-ε turbulence model was used, with a reasonable convergence time on a typical PC. The high-blockage ribs, especially when they are angled with respect to the flow direction as was the case in this investigation, create such a complex flow field that capturing all effects may require a much finer mesh and possibly a two-layer turbulence model. Nevertheless, CFD codes are becoming a viable tool for predicting convective heat transfer coefficients in parametric studies during the early stages of cooling system design. It should be noted that the highest tested Reynolds number was around 140 000, while the numerical cases were run for up to 200 000 and, as can be seen, the numerical Nusselt numbers continued to increase with the same trend. Figure 4 also indicates that, unlike the case of 90° ribs, higher-blockage ribs at a 45° angle with the axial flow direction do not necessarily produce higher heat transfer coefficients than the smaller ribs. It is seen that as the blockage ratio increases from 0.1 to 0.15, the heat transfer coefficient increases accordingly. However, as the blockage ratio further increases, the heat
transfer coefficient starts to decrease. Both test and numerical results confirm this behavior. A possible explanation is that for bigger ribs, the air, after tripping over these large ribs, does not reattach to the channel surface between the ribs as effectively as it does when the ribs are at a 90° angle with the flow direction; thus a fairly large recirculating zone forms behind the large ribs and the cooling air does not interact thermally with the channel surface. In the extreme case, if the larger angled ribs are positioned too close to each other, the core air may jump from the top of one rib to another and entirely miss the area in between the ribs. In such cases, a recirculating air zone similar to a cavity flow fills the space between the ribs and the heat transfer coefficient is reduced remarkably. Figure 5 shows the numerical results for the smallest rib geometry (e/Dh = 0.1) along the channel, from region 1 through region 7, for sharp-corner ribs (r/e = 0) as well as ribs with round corners (r/e = 0.25). Solid symbols represent the sharp-corner rib results and hollow symbols show the results for round-corner ribs. Several observations are made. Region 1 shows a lower heat transfer coefficient because this region (see Figure 3) does not benefit from the secondary flows created by any upstream ribs or by ribs on the opposite wall. There is a remarkable increase in heat transfer coefficient along the flow direction from region 1 to region 4. This increase, however, slows down asymptotically from there on. Secondary flows caused by the presence of the angled ribs swirl along the ribs and create a pair of counter-rotating cells (Fan et al. [19] and Metzger et al. [20]), as depicted in Figure 6. These vortices have an additive effect along the channel and, as a result, the Nusselt number increases along the channel from region 1 to region 7. It is also noted that, similar to the case of the 90° ribs investigated by these and other investigators, sharp-corner ribs produce higher heat transfer coefficients. Vortices shed from the sharp corners enhance the mixing of the near-wall warm air with the cooler core air, thus increasing the heat transfer coefficients compared to those of round-corner ribs, which are more streamlined. As the blockage ratio increases, some of these trends change, as discussed shortly. Figure 7 presents the same results for a larger rib (e/Dh = 0.15). A similar behavior, to a lesser degree, is observed for most cases, except that the Nusselt number variation in the flow direction is only pronounced from region 1 to region 2.
For the other regions, the Nusselt number shows a slight increase first and then a slight decrease from region 4 to region 7. It is speculated that the stronger secondary flows created by these larger ribs establish the flow domain faster than in the previous case of smaller ribs, so only a slight variation in Nusselt number is seen. The same observations are made for the next rib blockage ratio of 0.2, the results of which are shown in Figure 8. However, for a yet larger rib geometry (e/Dh = 0.25), the results of which are shown in Figure 9, all the trends that were observed for the smaller ribs have changed. First, sharp-corner ribs produced lower heat transfer coefficients than round-corner ribs. For these large ribs with sharp corners, the air, after separating from the rib top surface, does not reattach as effectively as it does for round-corner ribs. At the same time, although stronger vortices may shed from the sharp edges, they are dissipated into the core flow and do not get a chance to scrub against the surface area between the ribs. The end result is a lower heat transfer coefficient for the sharp-corner ribs. It is also seen that, for these large ribs, the Nusselt number decreases along the test section. It is speculated that the secondary flows and the vortices shed from the rib corners additively reduce the flow reattachment strength in the flow direction, thus causing a continuous reduction in the heat transfer coefficient along the flow direction. Figures 10 and 11 show the isotherms and iso-Nu contours, extracted from the numerical solutions, for a typical region between a pair of ribs in the middle of the channel. It can be seen that the bottom-right region corresponds to the lowest temperature and the highest Nusselt number. Liquid crystal displays of the isochromes confirm the same behavior. These figures indicate that, unlike 90° ribs, angled ribs create a remarkable spanwise variation in the heat transfer coefficient. As mentioned earlier, the spanwise, counter-rotating, double-cell secondary flows created by angling the ribs are responsible for this variation. It can also be seen that on the top wall, close to the ribs at both ends, there are regions of low Nusselt numbers (high temperatures). Again, the liquid crystal displays showed the same behavior. This phenomenon can best be explained by looking at Figure 12, which shows the velocity vectors close to the ribs and channel surfaces. All the high Nusselt number regions mentioned above correspond to high-velocity regions in this figure. These high-velocity regions correspond to locations where the rotating cells, shown in Figure 6, bring the cooler core air into contact with the channel surface. The low Nusselt number regions, on the other hand, see exactly the opposite action, that is, the rotating cell draws the air towards the channel center, thus slowing down the axial flow around those regions.

CONCLUSIONS

Comparisons between the test and numerically evaluated results showed good agreement between the two for most cases. Therefore, CFD could be considered a viable tool for the prediction of heat transfer coefficients in rib-roughened test sections, especially for parametric studies early in the design process. It was also concluded that, unlike ribs mounted perpendicular to the flow direction, 45° angled ribs with high blockage ratios (> 0.2) do not exhibit a good thermal performance, that is, as the rib blockage increases, the heat transfer coefficient decreases, and round-corner high-blockage ribs are superior to those with sharp corners.
Figure 1: A representative mesh arrangement for a ribbed section of the channel. Figure 2: Details of the mesh arrangement on the channel surface. Figure 3: Schematics of a typical test section and ribs. Figure 10: Representative isotherms on the surface area between a pair of ribs. Figure 12: Representative velocity vectors around the ribs and on the channel center plane.
4,222.4
2005-01-01T00:00:00.000
[ "Physics", "Engineering" ]
Investigating the Effect of Recruitment Variability on Length-Based Recruitment Indices for Antarctic Krill Using an Individual-Based Population Dynamics Model Antarctic krill (Euphausia superba; herein krill) is monitored as part of an on-going fisheries observer program that collects length-frequency data. A krill feedback management programme is currently being developed, and as part of this development, the utility of data-derived indices describing population-level processes is being assessed. To date, however, little work has been carried out on the selection of optimum recruitment indices, and it has not been possible to assess the performance of length-based recruitment indices across a range of recruitment variability. Neither has there been an assessment of uncertainty in the relationship between an index and the actual level of recruitment. Thus, until now, it has not been possible to take into account recruitment index uncertainty in krill stock management or when investigating relationships between recruitment and environmental drivers. Using length-frequency samples from a simulated population, where recruitment is known, the performance of six potential length-based recruitment indices is assessed by exploring the index-to-recruitment relationship under increasing levels of recruitment variability (from ±10% to ±100% around a mean annual recruitment). The annual minimum of the proportion of individuals smaller than 40 mm (F40 min, %) was selected because it had the most robust index-to-recruitment relationship across differing levels of recruitment variability. The relationship was curvilinear and best described by a power law. Model uncertainty was described using the 95% prediction intervals, which were used to calculate coverage probabilities and assess model performance. Despite being the optimum recruitment index, the performance of F40 min degraded under high (>50%) recruitment variability. Due to the persistence of cohorts in the population over several years, the inclusion of F40 min values from preceding years in the relationship used to estimate recruitment in a given year improved its accuracy (mean bias reduction of 8.3% when including three F40 min values under a recruitment variability of 60%).

Introduction

Krill is an important link between lower trophic levels (phytoplankton) and higher-order predators such as penguins and whales in the Antarctic marine ecosystem [1]. Krill has also been the focus of both long-term scientific research and a commercial fishery (e.g. [2][3][4]). Multiple sources of data from scientific surveys and the fishery have generated databases that provide information on key life-history characteristics of krill such as growth, mortality and recruitment. Typically, scientific research on krill has focussed on the summer period, when logistics and operational factors are more amenable; however, the commercial fishery for krill operates year-round [5]. The Commission for the Conservation of Antarctic Marine Living Resources Scheme of International Scientific Observation (CCAMLR SISO; www.ccamlr.org) was initiated in 1992 to collect data from the krill fishery, including representative length-frequency data, from commercial captures on board fishing vessels. Recent increases in observer coverage levels in the krill fishery [6] have provided an increase in the data available from the fishery, both spatially and temporally.
The database of krill lengths represents an opportunity to investigate krill population dynamics at scales not typically feasible using data from scientific surveys. Depending on the area, the longevity of krill in the wild is estimated to range between 4 and 7 years, with an age at maturity of about 3 years [7]. Given the relatively short life-cycle and relatively high mortality rate of krill, variation in the level of recruitment is a major contributor to inter-annual variability in the abundance of krill (e.g. [8]). Measuring recruitment directly (where all new individuals in the population are recorded) in wild populations is typically only possible in a very small number of closed terrestrial systems (e.g. St Kilda Soay sheep [9]) and is impractical for marine taxa. Several studies of krill have developed methods to estimate recruitment based on changes in the population size-structure, using length measurements of individual krill caught with nets (e.g. [10,11]). The rationale behind these methods is that recruitment (i.e. the number of one-year-old individuals entering the population) can be estimated based on the increase in the proportion of smaller (and by inference younger) individuals in the population. These proportional indices of recruitment have been instrumental in investigating krill population dynamics (e.g. [12][13][14]) and ecosystem processes (e.g. [15][16][17]), and more recently, within integrated assessment frameworks [18,19]. Quantifying the relationship between inter-annual changes in recruitment and in environmental variables such as ice-cover [20][21][22] and ocean currents [13,23] is crucial to our understanding of the drivers of population dynamics, and enables extrapolation to future krill population states. Such analyses, however, rely on assumptions about the relationship between absolute population recruitment and proportional indices of recruitment derived from length-frequency data. In many studies, krill recruitment is estimated using a proportional index 'R1', defined as the ratio of the number of 1-year-old individuals to the total number of individuals (e.g. [12,24]). This ratio can be calculated by using maximum likelihood to fit age-specific mixtures of normal distributions to population-level length distribution data [10]. As there are currently no cost-effective and precise methods to age krill [25], the allocation of modes in length distributions to age-classes is dependent upon an underlying growth model. Therefore, while there is no practical method to estimate absolute recruitment using length-frequency data, there is a need to develop and validate alternative methods for that purpose. Ideally, for a population with constant recruitment, a length-based index should accurately reflect recruitment, and changes in recruitment should be reflected in changes in the index. That said, the relationship between krill recruitment indices and absolute recruitment has to date not been quantified, and this has two important implications. Firstly, the performance of a given index (that is, how accurately the index represents absolute recruitment) is unknown. In an extreme example, this may lead to an index returning the same estimated recruitment under low or high recruitment. In this circumstance, the recruitment index would contain no, or misleading, information. Secondly, krill recruitment varies inter-annually (e.g.
[2]) and it is extremely unlikely that a given recruitment index will perform equally well across all biologically plausible levels of recruitment variability. Indeed, a priori it is reasonable to expect the performance of recruitment indices to decrease with increasing recruitment variability, especially because the absolute level of recruitment is not affected by recruitment in the previous year whereas a relative index is. Nevertheless, as it is not possible to determine recruitment variability directly, it is important that a recruitment index performs adequately across the largest possible range of recruitment variability. Since recruitment and its variability cannot be observed directly, regression analyses based on simulated data offer a means to examine the relationship between a length-based index and recruitment, and especially to investigate the uncertainty arising from increased recruitment variability. The relationships between recruitment and length-based indices of recruitment of krill were investigated at differing levels of recruitment variability in a population simulated using an individual-based model. In order to produce results that have direct relevance to the interpretation of the data collected as part of the CCAMLR SISO, individual krill were subsampled within the model according to a length-dependent selectivity function estimated for krill commercial fishing gear [26]. The specific goals of this work were to: (i) investigate the relationships between length-based indices and recruitment; (ii) select an optimum recruitment index from a suite of recruitment indices under various levels of recruitment variability; (iii) use a regression analysis to determine the relationship between the recruitment index and absolute recruitment; (iv) determine the performance of the selected recruitment index; and (v) reduce uncertainty in the recruitment index-absolute recruitment relationship by including consecutive index values from preceding years.

Candidate length-based recruitment indices

Based on length frequency distributions, recruitment can be estimated using order statistics. Two order statistics, the median length (mm) and the proportion of individuals smaller than 40 mm (F40, %), were used in this investigation (Fig. 1). The size of 40 mm was chosen as an appropriate cut-off to segregate recruits from older cohorts, once recruits became dominant in the length frequency distributions (Fig. 1, after April). Using a cut-off size instead of fitting a normal distribution to each length-frequency mode [10] was chosen as a simpler and less ambiguous approach when compared to the often difficult and sometimes subjective task of determining modes from observations. Recruitment (i.e. the sum of one-year-old individuals entering the population in a given year) is a single annual event, while length-frequencies, and therefore length-based indices, are known to vary at the sub-annual scale (e.g. [3,11]). Typically, krill recruitment is summarised as an annual index [10][11][12]; therefore, the monthly recruitment indices, median length (mm) and F40 (%), are summarised by calculating their annual minimum, annual maximum and annual span (maximum minus minimum), resulting in six candidate indices of annual recruitment. Using a krill population dynamics model in which recruitment was set randomly each year, the distribution of each index as a function of recruitment was investigated to determine which index would provide the optimum indicator of recruitment.
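As a concrete illustration of the six candidate indices described above, the sketch below computes the monthly median length and F40 from monthly length samples and then summarises each as an annual minimum, maximum and span. The toy input and variable names are assumptions for illustration only; in the study these statistics are computed from the individuals available for capture in the simulation model (the paper's own code is written in R).

```python
import numpy as np

def monthly_order_statistics(lengths_by_month, cutoff_mm=40.0):
    """Return monthly median length (mm) and F40 (% of individuals < cutoff)."""
    medians, f40 = [], []
    for lengths in lengths_by_month:             # one array of krill lengths per month
        lengths = np.asarray(lengths, dtype=float)
        medians.append(np.median(lengths))
        f40.append(100.0 * np.mean(lengths < cutoff_mm))
    return np.array(medians), np.array(f40)

def annual_indices(monthly_values):
    """Summarise 12 monthly values into the three candidate annual indices."""
    return {"min": monthly_values.min(),
            "max": monthly_values.max(),
            "span": monthly_values.max() - monthly_values.min()}

# Toy example: 12 months of length samples (mm), purely illustrative
rng = np.random.default_rng(0)
samples = [rng.normal(loc=30 + 2 * m, scale=6, size=5000) for m in range(12)]

medians, f40 = monthly_order_statistics(samples)
print("median-length indices:", annual_indices(medians))
print("F40 indices:", annual_indices(f40))       # F40 min is the index selected later
```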
Simulations

Krill recruitment and its variability cannot be observed directly, so a model of krill population dynamics (Fig. 2), parameterised using values from the primary literature, was used to simulate biologically plausible krill populations under various levels of recruitment variability.

Simulating a krill population

The model (Fig. 2), developed using R 3.0.3 [27], had a monthly resolution and, for each individual in the population, the likelihood of survival, the growth increment and the probability of capture by the fishery were computed sequentially at each time-step. The population was tracked for ten years and, in each ten-year simulation, a random number of recruits was released each summer. During each simulation, the monthly median length and F40 were computed from all individuals available for capture by the fishery. The number of recruits entering the population and the monthly median length and F40 values in the final year were calculated and stored. Increasing levels of recruitment variability were achieved by releasing a number of recruits randomly set around a mean of 4×10^6 individuals, over a range increasing from ±10% to ±100% in 10% increments (i.e. 10 levels of recruitment variability). For each level of recruitment variability, 2,000 simulations were run, resulting in a total of 20,000 ten-year simulations. The model outputs were used to investigate the link between recruitment and each of the recruitment indices (Section 2.1) under different levels of recruitment variability. Recruitment: Annual recruitment was simulated by releasing a random number of 1-year-old individual krill into the model over the course of the summer period (25% in November and January and 50% in December). Mortality: Siegel [7] reviewed krill life-history parameters and determined that realistic estimates of natural mortality range between 0.66 yr^-1 and 1.35 yr^-1 (mean = 1.0 yr^-1). To apply this mean rate in our model it must be converted via the following relationship [28]:

M = 100 × (1 − e^(−m)),

where M is the proportional rate used in the individual-based model (in % time^-1) and m is the exponential decay rate used in population dynamics models (in time^-1); in this case, a mortality rate of 1.0 yr^-1, or 0.0833 month^-1, corresponds to 8% month^-1. A constant mortality rate of 8% month^-1 was therefore used to determine the transition of each individual between time-steps. At each time-step, a probability P_M was drawn at random from a uniform distribution bound between 0 and 100%, and where P_M > M the individual survived and entered the next time-step. Upon entry into the next time-step, the age of the individual was incremented by a month. Growth: Each recruit was assigned an initial length drawn at random from a normal distribution (mean = 21.742 mm, standard deviation = 2 mm); the initial mean was estimated using a von Bertalanffy growth curve commonly used for krill [29], with a standard deviation resulting in realistic dispersions of lengths around each mode (Fig. 1). Subsequent individual growth was computed at each time-step using a seasonally varying von Bertalanffy growth model in line with the model presented by Siegel (1987 [30]; see Information S1).

Simulating capture by the fishery

The proportion of individuals available for capture by the fishery was determined by a length-dependent selectivity function. An individual was considered to be available for capture based on the commercial fishery selectivity ogive given in Krag et al.
(2014 [26]), such that at each time-step a probability P_S was drawn at random from a uniform distribution bound between 0 and 1, and the individual was available for capture when P_S was less than the value of the selectivity ogive at the individual's length L. No further sub-sampling (i.e. inclusion of sampling error) was applied; therefore all surviving individuals that were available for capture by the fishery were included in the computation of the monthly length-based indices (but were not removed from the population). The selectivity ogive used in the model is the best currently available estimate for commercial krill fishing gear. It is, however, important to note that it is based on a 15.4 mm diamond mesh size [26], and that our findings would only apply to krill sampled with a gear of similar mesh size and type.

Simulating recruitment variability

Different levels of recruitment variability were simulated within bounds defined by a recruitment variability amplitude (Rvar). The number of recruits released each year in the model, R(y), was computed as the sum of a mean value Rm and a deviation Rd (R(y) = Rm + Rd). Deviations of different amplitude were achieved using a number drawn at random (RR, %) from a uniform distribution bound between -Rvar and +Rvar, with Rvar (%) corresponding to a recruitment variability amplitude ranging from 10% to 100%, and:

Rd = Rm × RR / 100.

For example, with Rm = 4×10^6 and Rvar = 50%, the number of recruits released in a given year was randomly set between 2×10^6 and 6×10^6 (i.e. 4×10^6 ± 50%). Using a mean recruitment of Rm = 4×10^6, each Rvar value (10% to 100% in 10% increments) was used in 2000 simulations of ten-year krill population dynamics. The numbers of individuals (4×10^6) and simulations (2000) enabled the production of a sufficiently representative set of simulations and individual histories to investigate the effect of recruitment variability on the population size structure. The ten-year duration of each simulation ensured that the population reached a stable state under constant recruitment (a minimal code sketch of these simulation steps is given below).

Selecting the optimum recruitment index

Within the krill population model, recruitment variability, absolute recruitment and the corresponding values of each length-based index are known. Comparing recruitment to each recruitment index under different levels of recruitment variability, the performance of each index was assessed using two criteria: (i) the recruitment index is monotonically related to absolute recruitment - this is important as no other information can be used to determine absolute recruitment, so any underlying recruitment-to-index relationship must be capable of being predicted using simple (single explanatory variable) regression; and (ii) the recruitment index is unbiased across all ranges of recruitment variability. This is important because the variability of recruitment in reality is unknown, so the relationship between recruitment and the recruitment index should, ideally, remain unchanged under any level of recruitment variability.

Predicting recruitment using a recruitment index

Once the optimum length-based recruitment index, I, was found amongst those tested, a simple formula, R = f(I), to estimate recruitment as a function of that index was determined by regression analysis. A regression analysis was performed on the model outputs (recruitment versus index values). For each amplitude of recruitment variability, the change in performance of the index as a function of recruitment variability was assessed.

Assessing predictive performance

The purpose of f(I) is predictive; it is not intended for inference.
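Before turning to the assessment of f(I), the simulation steps described above (recruitment variability, monthly mortality, growth and fishery selectivity) can be summarised in a short sketch. This is a simplified illustration: the growth and selectivity parameters below are placeholders, not the seasonally varying Siegel growth model or the Krag et al. (2014) ogive used in the paper, and the paper's own model was implemented in R.

```python
import numpy as np

rng = np.random.default_rng(42)

M_MONTH = 8.0      # monthly mortality in percent (from m = 1.0 yr^-1, i.e. 0.0833 month^-1)
R_MEAN = 4e6       # mean annual recruitment (individuals)

def annual_recruits(rvar_pct):
    """R(y) = Rm + Rd, with Rd = Rm * RR/100 and RR drawn uniformly in +/- Rvar."""
    rr = rng.uniform(-rvar_pct, rvar_pct)
    return R_MEAN * (1.0 + rr / 100.0)

def survive(lengths):
    """Monthly survival: an individual survives when its uniform draw P_M (0-100%) exceeds M."""
    p_m = rng.uniform(0.0, 100.0, lengths.size)
    return lengths[p_m > M_MONTH]

def grow(lengths, k_month=0.04, l_inf=60.0):
    """Placeholder monthly von Bertalanffy increment (illustrative parameters only)."""
    return lengths + k_month * (l_inf - lengths)

def available_to_fishery(lengths, l50=32.0, slope=0.5):
    """Placeholder logistic selectivity ogive (not the Krag et al. 2014 parameters)."""
    sel = 1.0 / (1.0 + np.exp(-slope * (lengths - l50)))
    return lengths[rng.uniform(size=lengths.size) < sel]

# One illustrative monthly update for a single cohort of new recruits
# (scaled down by 1000 to keep the example light).
n_recruits = int(annual_recruits(rvar_pct=50) / 1e3)
lengths = rng.normal(21.742, 2.0, n_recruits)    # initial lengths of recruits (mm)
lengths = grow(survive(lengths))                  # one month of mortality and growth
sampled = available_to_fishery(lengths)           # individuals visible to the fishery
print(f"{lengths.size} survivors, {sampled.size} available to the fishery this month")
```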
In order to assess the predictive performance of f(I), the prediction error (%) was computed as the relative difference between predicted and simulated recruitment,

prediction error (%) = 100 × (R_predicted − R_simulated) / R_simulated.

In addition, the performance of f(I) in capturing recruitment uncertainty was assessed at each level of recruitment variability by computing the coverage probability, here defined as the percentage of simulated recruitment values that fell inside the 95% predicted recruitment intervals.

Predicting recruitment using past index values

A recruitment event can potentially impact krill population size structure over several years, and additional information describing current recruitment may be contained in index values from previous years. Using the optimum length-based recruitment index from the candidate indices, the relationship between recruitment in the last year of the simulations and values of that index in preceding years was investigated; for instance, recruitment in year 10 (y10) can be expressed as a function f of the index in that year and in the years preceding it, R_y10 = f(I_y10, I_y9, ...).

Selecting a recruitment index

The selection of the optimum recruitment index from the six candidate indices was based on (i) the distribution of index values as a function of recruitment and (ii) the impact of recruitment variability on these distributions (Fig. 3). The indices derived from F40, the proportion of individuals smaller than 40 mm, had a monotonic relationship with absolute recruitment across all ranges of recruitment variability (Fig. 3A-C), making F40 indices potentially useful measures of krill recruitment. The indices derived from the median length had more complex relationships with absolute recruitment (Fig. 3D-F). The span and maximum of the median (Fig. 3E, F) had highly non-monotonic responses, and were eliminated as potential indices. Of all the indices considered, the minimum F40 index (Fig. 3A) followed the clearest monotonic trend with recruitment and provided the strongest differentiation between low and high recruitment. In contrast, the span of F40 index (Fig. 3B) covered a wide range of recruitment values, making a regression analysis problematic. The maximum of F40 index (Fig. 3C) had poor coverage of lower recruitment values and no clear relationship with recruitment. The minimum of median index (Fig. 3D) had difficulty accounting for lower recruitment, with high index variability when absolute recruitment was less than 3×10^6 individuals. Increasing recruitment variability resulted in increased variability in all indices. Amongst all indices, the minimum F40 had the lowest variability across all levels of recruitment variability. Furthermore, the relationship between recruitment and minimum F40 consistently followed a curvilinear trend across levels of recruitment variability. The minimum F40 (F40 min) was therefore selected as the optimum recruitment index, and its relationship with recruitment (R) was best described using a linear regression of log-transformed values (i.e. a power law), with an intercept (b0) and a slope (b1):

log(R) = b0 + b1 × log(F40 min).

Subsequent analyses are carried out on the F40 min index.

Regression predictive performance

The curvilinear regression (Eq. 7) was fitted to the model outputs from each level of recruitment variability (Fig. 4). The regression successfully captured increasing recruitment variability, as demonstrated by a widening of the prediction intervals (Fig. 4). The predictive performance of each regression was assessed by calculating the coverage probability as the percentage of simulated recruitment values falling inside the prediction intervals (Fig. 5).
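A minimal sketch of the regression and assessment steps just described is given below, assuming the simulated recruitment values and the corresponding F40 min values are available as arrays. It fits log(R) = b0 + b1·log(F40 min) by least squares, builds approximate 95% prediction intervals, and computes the coverage probability and prediction error; the synthetic stand-in data and the normal-approximation intervals are assumptions of this sketch, not the authors' analysis code.

```python
import numpy as np

def fit_power_law(f40_min, recruitment):
    """Fit log(R) = b0 + b1 * log(F40 min) by least squares; return the coefficients
    and the residual standard deviation used for the prediction intervals."""
    x, y = np.log(f40_min), np.log(recruitment)
    b1, b0 = np.polyfit(x, y, 1)
    sigma = (y - (b0 + b1 * x)).std(ddof=2)
    return b0, b1, sigma

def predict_with_pi(f40_min, b0, b1, sigma, z=1.96):
    """Point prediction and approximate 95% prediction interval on the R scale
    (parameter uncertainty is neglected, which is small with thousands of runs)."""
    mu = b0 + b1 * np.log(f40_min)
    return np.exp(mu), np.exp(mu - z * sigma), np.exp(mu + z * sigma)

# Synthetic stand-in for the simulation outputs (illustrative only):
rng = np.random.default_rng(1)
r_true = rng.uniform(0.1e6, 8e6, 2000)                                    # "known" recruitment
f40_vals = 10.0 * (r_true / 1e6) ** 0.6 * rng.lognormal(0.0, 0.1, 2000)   # F40 min (%)

b0, b1, sigma = fit_power_law(f40_vals, r_true)
r_hat, lo, hi = predict_with_pi(f40_vals, b0, b1, sigma)

coverage = np.mean((r_true >= lo) & (r_true <= hi))       # coverage probability
pred_err = 100.0 * (r_hat - r_true) / r_true              # prediction error (%)
print(f"coverage: {coverage:.1%}, median prediction error: {np.median(pred_err):+.1f}%")
```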
The 95% prediction intervals were selected so that, when a model is performing predictions inadequately, less than 95% of the simulated recruitment values fall inside the prediction intervals. Based on the coverage probability, the regression performed adequately up to 50% recruitment variability (Fig. 5). Above 50% recruitment variability, the predictive performance progressively degraded, with <95% of recruitment simulations falling inside the 95% prediction intervals. Under the widest range of recruitment variability (Rvar = 100%), where recruitment was randomly set between 0 and 8×10^6 individuals, 93.2% of the absolute recruitment values fell inside the 95% prediction intervals (Fig. 5). The regression parameters obtained under Rvar = 100% are given in Table 1. The range of prediction errors (Eq. 4) increased with increasing recruitment variability, from between −7.4% and +8.8% at Rvar = 10% to between −86.3% and +942.5% at Rvar = 100% (Fig. 6). Although the median of all prediction errors remained close to zero, the boxplots illustrate that the prediction error distribution was asymmetric, with overestimates being more prevalent. This was due to the fact that the simulated recruitment was bound between values determined by Rvar (e.g. between 0 and 8×10^6 individuals under Rvar = 100%), while the regression could freely extrapolate estimated recruitment to higher values.

Multiannual Recruitment Formula

To improve predictions of recruitment, a multiannual linear regression of the log-transformed model outputs was used, in which the explanatory variables were past values of the minimum F40 (F40 min). Using three years as an example, the number of recruits released in the tenth year of the simulations (R_y10) was estimated as:

log(R_y10) = b0 + b1 × log(F40 min_y10) + b2 × log(F40 min_y9) + b3 × log(F40 min_y8).

Including past consecutive values of minimum F40 to predict recruitment narrowed the range of prediction errors for simulations under low recruitment variability (Rvar = 30%, Fig. 7A). The improvement was less evident under moderate recruitment variability (Rvar = 60%, Fig. 7B), in which case including three consecutive values of minimum F40 brought a similar improvement to including more values. Under high recruitment variability (Rvar = 90%, Fig. 7C), the narrowing of the range of errors was almost negligible, particularly when including more than three consecutive values of minimum F40. The mean bias reduction (mean of absolute errors) resulting from the inclusion of three consecutive values was 16.5%, 8.3% and 3.6% under Rvar values of 30%, 60% and 90%, respectively.

Discussion

The recruitment index F40 min (the minimum proportion of individuals smaller than 40 mm in a given year) was selected as the optimum index from the six candidate indices. F40 min was selected as optimum because, in addition to its monotonic relationship with recruitment that held across a range of recruitment variability, the index-to-recruitment relationship could be expressed using a simple curvilinear regression. In simulations of high recruitment variability (Rvar > 60%, Fig. 4), the log-linear model did not perfectly capture the underlying index-to-recruitment relationship. Whilst more complex regressions may have achieved this in specific instances, it is unlikely that such models would have performed equally well for all amplitudes of recruitment variability.
In this research, we were seeking a model that performed well across a range of recruitment variability; in reality, recruitment variability is unknown, so one cannot apply a more complex model to suit high-variability situations, hence a model that performed best over a range of recruitment variability was selected. Important to the process of recruitment metric selection were the underlying population model and the calculation of recruitment indices on a monthly basis, both of which will be discussed in subsequent paragraphs. The impact of recruitment variability on length-based recruitment indices was investigated using an individual-based population model. The model captured complex population-level processes emerging from cyclical and variable recruitment by accounting for the co-existence of cohorts belonging to recruitment events of different intensities. Varying the range over which recruitment took place led to simulated pulses in krill numbers, a phenomenon observed at South Georgia [3], and enabled us to test recruitment index performance against a biologically plausible, albeit simulated, krill population. Proportional indices of recruitment such as F40 have traditionally been computed using length data available from a single survey or pooled at an annual scale [2, 7, 8, 11-13, 17, 20, 22, 24]. However, due to growth and mortality, length-frequencies vary at the sub-annual scale (e.g. [3], Fig. 1). Therefore, pooling length data into a single annual length-frequency distribution may conflate several underlying population processes, potentially confounding the signal produced by recruitment events. When searching for the optimum length-based recruitment index, order statistics (median and F40) were calculated on length data aggregated by month (see section 2.1). Monthly order statistics were summarised into a single annual recruitment estimate, and the minimum value of F40 within a given year was found to be the optimum index of the recruitment that occurred in the summer of that year. Increasing recruitment variability resulted in increased uncertainty in length-based recruitment indices (Figs. 3-5). Recruitment variability up to 50% was successfully captured using the 95% prediction intervals calculated from the curvilinear regression based on F40 min (Figs. 6, 7). True recruitment variability cannot be determined, so it is not possible a priori to select a particular regression from those determined (Fig. 4). In the absence of additional information on recruitment variability, it is recommended that the curvilinear model fitted to the widest range of recruitment variability (Rvar = 100%) is used. Under high recruitment variability, the improvement brought by the use of a multiannual formula was only minimal (Fig. 7C). Whilst under low and moderate recruitment variability the multiannual formula yielded improved predictions, it performed poorly under high recruitment variability. Outside of the simulation, true recruitment variability is unknown, so it is not possible to determine when to use such a formula. Therefore, the simpler single-year formula obtained under high recruitment variability is recommended to estimate annual recruitment (Table 1). High population variability was not always accurately represented by the curvilinear regression, with less than 95% of the simulated population falling inside the 95% prediction intervals when recruitment variability exceeded 50%.
Large prediction errors in situations of high recruitment variability suggest length-based indices are of limited value, a fact that has been previously raised in the case of fish stock assessments (e.g. [31]). More positively, the approach presented here provides an objective mechanism through which to assess the utility of recruitment indices, and which enables researchers to incorporate uncertainty when considering the links between recruitment and environmental drivers. Furthermore, our results indicated that using these indices to track recruitment events could provide an objective approach to estimate the magnitude of, and confidence associated with, these events. In particular, the uncertainty around recruitment estimates appeared to increase with the magnitude of the recruitment event, suggesting that, whilst beneficial, correlation analysis between estimated recruitment and environmental drivers will be more difficult for highly uncertain, large recruitment events. Since a recruitment event will impact the population size structure over several years, a length-frequency distribution at a given instant may carry information on recruitment events that occurred in previous years. Including information on the population size structure over the years preceding a given recruitment event could improve the accuracy of that recruitment estimate. Improvement in length-based recruitment estimates via multiannual estimates has been suggested in previous studies (e.g. [11]), and was successfully demonstrated here when recruitment variability was less than 60% (Fig. 7). In this study, improvement in the prediction of recruitment was itself dependent on recruitment variability, since increased recruitment variability weakened the link between current and previous recruitment indices. Nevertheless, an improvement in the accuracy of the recruitment predictions was obtained under all ranges of recruitment variability and, given that the actual variability of recruitment in the real world is unknown, adopting such an approach could be beneficial. However, as stated above, the improvement was only minimal (mean bias reduction of 3.6% under Rvar = 90%) under high recruitment variability. In addition to analyses of ecological significance, the results presented here could be beneficial to the management of the krill fishery. Stock assessment models are designed to estimate population parameters by determining the set of parameters enabling the best fit between simulations and observations, including length-frequency distributions (e.g. [19]). Stock assessment models could benefit from the method of recruitment estimation presented here for their initialisation through a time-series of estimated recruitment. Additionally, model verification could be performed through a comparison of stock-assessment and simulation model outputs (Eq. 7). The underlying model used to simulate population dynamics was parameterised using values drawn from the published literature. In order to establish the baseline response, the model structure was intentionally kept simple and made to replicate the behaviour of an average population sampled homogeneously. More complex modelling schemes could be devised in the future to account, in particular, for biological variability, such as inter-annual changes in growth, mortality, recruitment timing and duration, as well as spatial and temporal biases in sampling effort, and to investigate their impact on length-based recruitment estimates.
In addition, recruitment was set to occur each year in simulations independently of the status of the adult population. A complete mechanistic life-cycle model could be formulated in the future to account for the maturation of individuals in the population and their participation in the spawning stock. Such a level of detail would enable investigating processes affecting recruitment variability such as generation time, lifespan and age at maturity. The approach of decoupling recruitment from the reproductive status of the population is robust in that it makes no assumptions about the links between the two and enables the performance of recruitment indices to be assessed without formulating hypotheses on these links. Despite the relatively simple model structure, the findings presented still bring a significant improvement in our ability to extract information from length measurements. The modelling approach described here could be applied to any species targeted by a length-based survey, provided there is sufficient temporal coverage to determine the bounds of the chosen length-based index (e.g. the determination of the minimum F40 in a given year in our case). A potential future application of this approach is the estimation of recruitment based on time-series of krill length measurements collected as part of the CCAMLR Scheme of International Scientific Observation, which could unveil crucial information on the population dynamics of Euphausia superba.
Understanding of the Key Factors Determining the Activity and Selectivity of CuZn Catalysts in Hydrogenolysis of Alkyl Esters to Alcohols

CuZn catalysts are promising catalysts for ester hydrogenolysis, but more knowledge is needed to optimize their catalytic performance. In this work, we consider the impact of CuZn catalyst composition on their structure, activity, selectivity, and stability in ester hydrogenolysis. Four catalysts with various Cu/Zn ratios were synthesized by co-precipitation and characterized in the as-prepared, calcined, reduced, and spent states by XRF, XRD, N2 physisorption, CO2-TPD, NH3-TPD, and N2O chemisorption. XRD data revealed the effect of the composition on the size of Cu and ZnO particles. The catalytic performance was investigated using an autoclave. All catalysts exhibited a high methyl hexanoate conversion of about 48-60% after 3 h, but their activity and selectivity were found to depend on the Cu/Zn ratio. The conversion of methyl hexanoate and hexyl hexanoate was compared to explain the observed product selectivity. Moreover, the stability of the catalysts was investigated in three consecutive reaction cycles and correlated with changes in the size of the constituent particles. When different esters were tested, a slight decrease in conversion and an increase in alcohol selectivity with growing molecule size were observed. The obtained results allow conclusions to be drawn about the optimal composition that provides good performance of CuZn catalysts in ester hydrogenolysis.

Introduction
The hydrogenolysis of carboxylic acid esters is a reaction of great commercial interest. It allows the selective production of the corresponding alcohols and their derivatives, which can further be used as raw materials in the production of surfactants, plasticizers, cosmetics, and other chemicals [1][2][3]. For several decades, the process of obtaining alcohols from carboxylic acid esters has been based on employing the traditional and highly efficient Adkins catalysts, which include copper as an active metal and Cr2O3 (ca. 40-50 wt%) [1,4]. The role of the latter component is to prevent metallic Cu particles from sintering and to maintain their high dispersion during the catalytic process. Typically, the Adkins catalysts operate at high temperatures (200-300 °C) and hydrogen pressures (140-300 bar) [5]. Consequently, great potential lies in finding catalytic formulations that would exhibit high activity in the hydrogenolysis of esters and high selectivity to alcohols when the process is carried out under milder reaction conditions. In addition, the synthesis of Adkins catalysts is accompanied by the formation of a large amount of Cr5+ and Cr6+ containing toxic wastes that are harmful to the environment [6,7]. Cr-free catalysts based on noble metals, i.e., Pd [8,9], Pt [10,11], Ru [12,13], and Rh [14], demonstrate good catalytic performance in the hydrogenolysis of esters to the corresponding alcohols under mild reaction conditions. However, the high cost of such catalysts hinders their widespread industrial adoption. Therefore, non-noble metals have attracted much attention as active components in catalysts for the production of alcohols from esters. Ni- and Co-based catalysts are active in the hydrogenolysis of both the C-O and C-C bonds, provoking the occurrence of decarbonylation/decarboxylation reactions with the formation of hydrocarbons, thus resulting in a decreased selectivity to the target alcohols [15].
Copper-based catalysts, on the contrary, are selective in the hydrogenolysis of the C-O bond and the hydrogenation of the carbonyl group but exhibit no activity in the hydrogenolysis of the C-C bond [16][17][18][19][20][21][22]. Thus, Cu-based catalysts have great potential for use in the hydrogenolysis of esters to alcohols. This especially applies to bulk CuZn catalysts, which are very effective in various applications, in particular in methanol synthesis from syngas [23][24][25][26]. The performance of CuZn catalysts has also been investigated in the hydrogenolysis of different compounds, including glycerol to propane-1,2-diol [27], dimethyl or diethyl succinate to butane-1,4-diol [20], the hydrogenation of succinic anhydride [28], and dimethyl adipate to 1,6-hexanediol [21,22,29]. CuZn catalysts were reported to be highly efficient in dimethyl adipate hydrogenolysis, reaching a dimethyl adipate conversion of 97% at a temperature of 205 °C and 160 bar [21]. Our previous research on catalyst systems containing Cu as the active component promoted by ZnO, Al2O3, and MgO showed that CuZn catalysts had the highest ester conversion and selectivity to hexane-1,6-diol in the reaction of adipic acid dimethyl ester [30]. Moreover, CuZn catalysts outperformed the traditionally used CuCr catalysts, known as Adkins catalysts, in the same reaction of dimethyl adipate hydrogenolysis, making them a good candidate to be an environmentally friendly alternative to CuCr catalysts [21]. However, as far as we are aware, no comprehensive work on the performance of such CuZn catalysts in the hydrogenolysis of other carboxylic acid esters has been reported. Several studies considered the effect of the Cu/Zn ratio on the structural characteristics and the performance of the catalysts, and it was concluded that ester conversion increased with a growth in the copper surface area [28] or with a decrease in the copper particle size [21]. In our recent article [30], we concluded that ZnO served as a structural promoter in CuZn catalysts for dimethyl adipate hydrogenolysis that improves the properties of the catalysts by increasing the BET surface area and the specific copper area as well as stabilizing the Cu crystallites. However, ZnO could not be considered an activity promoter, which was evidenced by the calculation of the TOF value for the CuZn catalyst. Nevertheless, there is an urgent need for more comprehensive information for a deeper understanding of how the Cu/Zn ratio affects not only the activity but also the selectivity and stability of CuZn catalysts in the hydrogenolysis of carboxylic acid esters. Although the efficiency of CuZn catalysts for ester hydrogenolysis has been confirmed, the stability of such catalysts as well as their performance in the hydrogenolysis of carboxylic acid esters of different chain lengths have not been extensively studied. The present study is focused on comparing the activity, selectivity, and stability of CuZn catalysts varying in Cu/Zn ratio in the hydrogenolysis of methyl hexanoate. The differences in the performance of the catalysts were assessed by considering their physico-chemical properties, in particular by determining the size of the Cu and Zn particles in the as-prepared, calcined, reduced, and spent materials. In addition, to explain the observed dependence of product selectivity on reactant conversion, we compared the activity of the prepared catalysts in the conversion of methyl hexanoate and of hexyl hexanoate, an intermediate reaction product obtained by a transesterification reaction.
Finally, to the best of our knowledge, we are the first to demonstrate the effect of the carbon chain length of the reactant on the activity and selectivity of CuZn catalysts by carrying out experiments with different methyl esters.

Chemical Analysis
Four CuZn-AP precursors with varying Cu/Zn ratios were prepared by a co-precipitation method. XRF analysis evidenced that the obtained Cu/Zn ratios in the as-prepared precursors were very close to the theoretical values estimated from the chemical composition of the initial nitrate solutions (Table 1). The high efficiency of the synthesis method used was also confirmed by the colorless aqueous filtrate obtained after separating the filter cake, which suggested that nearly all copper cations from the initial nitrate solution were incorporated into the precipitate. Considering the XRF data, it could therefore be concluded that zinc cations were also largely incorporated into the precipitate. Accordingly, the synthesis method used under strictly controlled conditions should ensure a reproducible preparation of CuZn samples with the desired chemical composition. It was also assumed that the chemical composition of the samples did not change after the calcination step. The phase composition from the as-prepared catalyst precursors to the spent samples was investigated using XRD (Figure 1). XRD patterns of the as-prepared materials are presented in Figure 1A. Samples 0.5CuZn-AP and 1CuZn-AP possessed the aurichalcite structure Cu2Zn3(CO3)2(OH)6 (ref. code 00-038-0152), which is characterized by typical reflections at 2θ = 13.0°, 24.2°, 34.1°, 41.9°, and 50.1° [22,31,32]. 3CuZn-AP and 6CuZn-AP contained almost exclusively the zincian malachite phase (Cu0.8Zn0.2)2(OH)2CO3 (ref. code 01-079-7851) with characteristic reflections at 2θ = 14.8°, 17.6°, 24.1°, and 31.9° [22,33]. In general, the results from Figure 1A evidenced that almost phase-pure materials were prepared and used for further characterization purposes, although the presence of other minor hydroxycarbonate phases with a content of up to 5% could not be ruled out [33].
The calcination of the as-prepared precursors at T = 350 °C resulted in the disappearance of the reflections from the hydroxycarbonate phases and the appearance of reflections from CuO and ZnO phases, which suggested the formation of mixed CuZn oxides (Figure 1B). Previously, it was reported that the complete decomposition of the zincian malachite phase to oxides occurred between 300 and 350 °C, while a residual characteristic diffraction peak at 2θ = 13.0° from the aurichalcite phase could still be present in the XRD patterns of an aurichalcite precursor after its calcination at 350 °C [22]. In contrast, the XRD patterns of the calcined samples prepared from the CuZn precursors in the present study did not show any reflections from either the aurichalcite or the zincian malachite phase, which suggested total decomposition of the as-prepared hydroxycarbonates (Figure 1B). The CuO and ZnO crystallite sizes in the calcined samples with different Cu/Zn ratios were evaluated using the Scherrer equation for the reflections at 2θ ≈ 38.7° and 31.8°, respectively. Previously, it was suggested that aurichalcite-derived catalysts had smaller Cu crystallites than catalysts derived from zincian malachite or copper-hydrozincite [34][35][36]. Nonetheless, some studies indicated that there was no difference in the CuO particle size of the calcined samples as a result of the precursor phase composition [31]. Similar to the latter study, no obvious dependence of either the CuO or the ZnO crystallite size on the Cu/Zn ratio was observed for the samples prepared in the present study. Based on the XRD results, the estimated size of the CuO crystallites in the mixed oxides was 5-6 nm, while that of the ZnO crystallites was 6-7 nm (Figure 2).
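For reference, the Scherrer estimate used above can be reproduced with a few lines of code; the shape factor K = 0.9, the Cu Kα wavelength and the peak width in the example are illustrative assumptions (and instrumental broadening is neglected), so the snippet is a sketch rather than the exact procedure applied in this work.

```python
# Sketch of the Scherrer crystallite-size estimate, D = K * lambda / (beta * cos(theta)),
# with beta the peak FWHM in radians; instrumental broadening is neglected here.
import math

def scherrer_size_nm(two_theta_deg, fwhm_deg, k=0.9, wavelength_nm=0.15406):
    theta = math.radians(two_theta_deg / 2.0)   # Bragg angle
    beta = math.radians(fwhm_deg)               # FWHM in radians
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative call: a CuO reflection near 2-theta = 38.7 deg with ~1.5 deg FWHM
print(round(scherrer_size_nm(38.7, 1.5), 1))    # about 5.6 nm
```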
The state of copper as well as the size of the copper and zinc particles before the catalytic runs were determined in CuZn mixed oxides reduced at 210 °C. The measurement of the metallic copper particle size failed for the samples reduced by the standard method: after being unloaded from the reactor, the reduced samples were immediately re-oxidized by atmospheric oxygen, which was evidenced by the obvious and significant heating of the samples. As evidenced by their XRD patterns, most of the copper in the reduced samples was present as CuO species with only a residual presence of Cu and Cu2O (Figure 1C). To prevent the re-oxidation of the reduced copper species, the CuZn samples were reduced at 210 °C by the standard method and then treated with MeHe, as described in the Materials and Methods. XRD patterns of the reduced and treated samples (Figure 1D) showed that the Cu particles in the catalysts corresponded to Cu0. This suggested that (i) the total reduction of CuO species occurred at the reduction temperature of 210 °C and (ii) the treatment of the freshly reduced catalyst with the organic compound prevented the re-oxidation of metallic copper in air. Thus, the performed treatment made it possible to assess the state of the ZnO and Cu species in the reduced CuZn catalysts just as at the beginning of a catalytic run. Figure 2A shows that the size of the Cu species in the reduced and treated CuZn-R samples before a catalytic run was already larger in comparison with the calcined CuZn-C counterparts. This trend became more pronounced with increasing Cu/Zn ratio in the catalysts, with the copper particle size increasing from 6.1 nm to 12.6 nm for reduced 0.5CuZn-R and 6CuZn-R, respectively. Figure 2B evidences that the size of the ZnO particles also changed after the reduction step, regardless of the chemical composition of the catalysts, slightly increasing from 10.3 nm to 12.8 nm with the increasing Cu/Zn ratio from 0.5 to 6.

Catalyst Surface Area
The textural properties of the calcined and reduced samples were investigated using N2 physisorption. The results (Table 1) showed that the values of both the BET surface area and the total pore volume had no clear dependence on the Cu/Zn ratio of the calcined CuZn-C samples, being in the range of 59-65 m²·g⁻¹ and 0.23-0.26 cm³·g⁻¹, respectively. The similarity in textural properties was consistent with the XRD data, which indicated no change in CuO and ZnO particle size for the CuZn-C samples regardless of the Cu/Zn ratio (Figure 2), as also observed in our recent work [22]. The BET surface area of both CuO and ZnO was lower in comparison with that of the CuZn mixed oxides, as shown in our recent article [37], also reflecting the effect of the small size of the constituent particles.

TPD of Adsorbed CO2 and NH3
The acid-base sites of the calcined samples were probed using NH3-TPD and CO2-TPD.
Figure 3 depicts the TPD profiles of NH3 and CO2 adsorbed on the CuZn-C mixed oxides, while Table 2 shows the number of both basic and acidic sites calculated from the area under the obtained TPD curves. The number of acid sites in the calcined samples was in the range of 0.135-0.168 mmol·g⁻¹ and demonstrated an increasing trend with the growth of the Cu/Zn ratio in the CuZn-C samples from 0.5 to 3, followed by a slight decrease for the 6CuZn-C sample.
The number of basic sites in the calcined samples was in the range of 0.176-0.237 mmol·g⁻¹ and demonstrated a similar dependence on the Cu/Zn ratio: the largest concentration of basic sites was observed for 3CuZn-C, while it slightly decreased for the other catalysts. In our previous study, it was shown that the concentration of both acidic and basic sites in CuZn mixed oxides was larger than in single-phase ZnO or CuO [29], which might be due to the high dispersion of the individual oxidic species. Therefore, a decrease in the acid-base characteristics of the samples with either a low or a high Cu/Zn ratio, i.e., when approaching the pure CuO or ZnO phase composition, was in line with the previous findings.

Specific Copper Surface Area
The specific copper surface area was determined using N2O chemisorption. Prior to N2O chemisorption, the CuZn-C mixed oxides were reduced, the reduction process was monitored, and hydrogen consumption profiles were recorded. The curves of H2 consumption during the reduction of CuO species in CuZn mixed oxides with different Cu/Zn ratios (see the Supplementary Materials, Figure S1) evidenced that, regardless of the chemical composition of the samples, the reduction of the constituent CuO species started at about 150 °C. The total amount of consumed H2 increased with the growth of the Cu content in the samples, while calculations showed that the CuO content in the catalysts (Table 3) corresponded well to the chemical composition of the CuZn-AP precursors determined by XRF (Table 1), only slightly overestimating the actual copper content. Thus, it was confirmed that all catalysts were fully reduced prior to the N2O chemisorption experiments. The specific surface area of copper was evaluated from the N2O chemisorption experiments with the reduced CuZn catalysts. The largest values of specific Cu surface area (S_Cu) per gram of sample were determined for 1CuZn-R and 3CuZn-R, S_Cu = 15.1 m²·g_cat⁻¹, while the lowest value was obtained for 6CuZn-R, S_Cu = 11.9 m²·g_cat⁻¹ (Table 3). The recalculation of the specific copper area per mass of copper demonstrated a definite declining trend from S_Cu = 50 m²·g_Cu⁻¹ to S_Cu = 17 m²·g_Cu⁻¹ with the growth in the copper content of the samples (Table 3). Accordingly, the Cu dispersion calculated using a formula from [30] also gradually decreased from 3.7% to 1.3% as the Cu/Zn ratio increased from 0.5 to 6. The obtained Cu dispersion values correlated well with the values obtained in other studies on Cu-containing catalysts [22,30,38-40].
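For orientation, the arithmetic that links an N2O uptake to a specific copper surface area and dispersion can be sketched as follows. The 2:1 Cu_s:N2O stoichiometry and the surface density of 1.47 × 10^19 Cu atoms per m² are commonly used literature conventions, and the numbers in the example are illustrative rather than data from this work; the exact formula of ref. [30] may differ.

```python
# Sketch of the standard N2O-titration arithmetic for copper catalysts
# (surface stoichiometry 2 Cu_s + N2O -> Cu2O_s + N2; 1.47e19 Cu atoms per m^2).
# Conventions and example numbers are assumptions, not values from this study.
N_A = 6.022e23            # Avogadro constant, 1/mol
CU_ATOMS_PER_M2 = 1.47e19
M_CU = 63.55              # g/mol

def cu_surface_area(n_n2o_umol_per_g):
    """Specific Cu surface area in m2 per gram of catalyst."""
    surface_cu_atoms = 2.0 * n_n2o_umol_per_g * 1e-6 * N_A
    return surface_cu_atoms / CU_ATOMS_PER_M2

def cu_dispersion(n_n2o_umol_per_g, cu_weight_fraction):
    """Fraction of Cu atoms exposed at the surface, in percent."""
    n_surface_cu = 2.0 * n_n2o_umol_per_g * 1e-6    # mol surface Cu per g catalyst
    n_total_cu = cu_weight_fraction / M_CU          # mol total Cu per g catalyst
    return 100.0 * n_surface_cu / n_total_cu

# Illustrative call: an uptake of 180 umol N2O per g for a ~60 wt% Cu sample
print(round(cu_surface_area(180.0), 1))      # about 14.7 m2 per g of catalyst
print(round(cu_dispersion(180.0, 0.60), 2))  # about 3.81 %
```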
The Hydrogenolysis of Methyl Hexanoate (MeHe) in the Presence of CuZn-R Catalysts Varied in Cu/Zn Ratio
The four prepared catalysts and a Zn-free Cu catalyst were tested in methyl hexanoate (MeHe) hydrogenolysis at a reaction temperature of 210 °C and p(H2) = 10 MPa. Figure 4A depicts the change in MeHe conversion as a function of reaction time. All CuZn-R catalysts exhibited a high conversion of MeHe under the chosen reaction conditions, which significantly exceeded that of the single-phase Cu catalyst (Figure 4A). In contrast, the single-phase ZnO catalyst possessed zero activity in the reaction. These results unambiguously proved the promoting effect of the ZnO species in the CuZn-R catalysts for the hydrogenolysis of methyl esters, as proposed in [30]. Figure 4A also evidenced that the catalysts performed differently during the reaction; consequently, the performance of the catalysts at the beginning and at the end of the catalytic runs was compared separately. The activity of the CuZn-R catalysts during the initial 20 min of reaction was evaluated in terms of the initial reaction rate, r_ini = n_MeHe·g_cat⁻¹·min⁻¹, where n_MeHe stands for the number of mmoles of MeHe consumed during this reaction time. The initial reaction rate decreased in the following order: 3CuZn-R > 6CuZn-R > 1CuZn-R > 0.5CuZn-R (Table 4). The growth of the initial activity with the growth of the Cu/Zn ratio in the catalysts from 0.5 to 3 appeared logical, as it was related to the gradual growth of the Cu content in the catalysts, i.e., to an increase in the number of active sites. However, the drop in the initial reaction rate observed for 6CuZn-R, with the largest Cu content, should be considered in detail. To explain the obtained results, it is necessary to consider the XRD data for the CuZn samples before and just after the reduction step. The size of both the copper and zinc particles in the CuZn-R samples considerably increased in comparison with that in CuZn-C, and this effect became more evident with an increase in the Cu/Zn ratio in the catalysts (Figure 2A). Therefore, the considerable sintering of the Cu species in 6CuZn-R occurred already at the reduction step and, accordingly, the small surface area of copper and its low dispersion (Table 3) were responsible for the decreased initial activity of this catalyst in the catalytic run. The increase in the size of the copper particles at the reduction step in 3CuZn-R was less dramatic, which made it possible to maintain the surface area and dispersion of copper at a relatively high level, 15.1 m²·g⁻¹ and 1.85%, respectively. As a consequence, the initial activity of 3CuZn-R was higher than that of 6CuZn-R (Table 4). With a similar surface area of copper in 1CuZn-R and 3CuZn-R, the dispersion of copper in the former catalyst was larger, 2.73%. Nevertheless, the initial activity of 1CuZn-R was lower than that of 3CuZn-R. Moreover, 0.5CuZn-R had a rather large copper surface area and the highest Cu dispersion among all prepared samples, but the initial activity of this catalyst was even lower than that of 6CuZn-R. The observed discrepancy between the physico-chemical and catalytic results allowed assuming that additional reasons could stand behind the decreased initial activity of the catalysts with a low Cu/Zn ratio.
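Before discussing these possible reasons, it may help to make explicit the quantities used throughout this section. The sketch below shows one conventional way to compute conversion, selectivity, yield, and the initial rate r_ini from measured mole amounts; the function names, the 20 min sampling time, and the stoichiometric basis chosen for HeHe are illustrative assumptions rather than the authors' data-processing conventions.

```python
# Sketch (illustrative, not the authors' processing script) of the quantities used
# in this section, computed from mole amounts of methyl hexanoate (MeHe),
# hexan-1-ol (1-Heol) and hexyl hexanoate (HeHe). All mole inputs are in mmol.
def conversion(n_mehe_0, n_mehe_t):
    """MeHe conversion (%) after reaction time t."""
    return 100.0 * (n_mehe_0 - n_mehe_t) / n_mehe_0

def selectivity(n_product, n_mehe_0, n_mehe_t, mehe_per_product=1.0):
    """Selectivity (%) to a product on a converted-MeHe basis.
    mehe_per_product: MeHe equivalents per mole of product; the value to use
    for HeHe depends on the selectivity basis adopted (assumed here, not
    stated explicitly in the text)."""
    return 100.0 * mehe_per_product * n_product / (n_mehe_0 - n_mehe_t)

def yield_percent(n_product, n_mehe_0, mehe_per_product=1.0):
    """Yield (%) of a product with respect to the MeHe fed."""
    return 100.0 * mehe_per_product * n_product / n_mehe_0

def initial_rate(n_mehe_0, n_mehe_20min, catalyst_mass_g, t_min=20.0):
    """r_ini in mmol of MeHe consumed per gram of catalyst per minute."""
    return (n_mehe_0 - n_mehe_20min) / (catalyst_mass_g * t_min)

# Illustrative call with fabricated numbers (mmol)
print(conversion(100.0, 55.0))          # 45.0 %
print(initial_rate(100.0, 88.0, 1.0))   # 0.6 mmol g^-1 min^-1
```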
One possible reason is the poorer accessibility of the smaller metallic Cu particles, embedded within the excess ZnO species, to the reactant molecules compared to the bigger Cu particles. Additionally, the intrinsic activity of the copper species in the catalysts varied in Cu/Zn ratio could be evaluated by considering their turnover frequency (TOF). Table 4 shows that the TOF gradually increased with the growth of the Cu content in the catalysts. On the other hand, an increase in the Cu/Zn ratio resulted in an increase in the copper particle size in the reduced catalysts (Figure 2). The combination of these two trends suggested that an increase in copper particle size in the reduced catalysts contributed to the TOF increase for MeHe hydrogenolysis (Figure 5A). Thus, an increase in the Cu/Zn ratio in the catalysts from 0.5 to 3 resulted both in an increase in the number of copper particles, which was reflected in an increase in the active copper surface area, and in an increase in the size of such particles, which was promotional for an increase in TOF. Accordingly, the initial activity reached the highest value for the 3CuZn-R catalyst. A further increase in copper content resulted in a further increase in the size of copper particles with a high TOF value. At the same time, the surface area and dispersion of copper in 6CuZn-R significantly decreased and, accordingly, the initial activity of the 6CuZn-R catalyst also slightly decreased even at the largest TOF value calculated for this catalyst. With increasing reaction time, a difference in the performance of the catalysts became obvious (Figure 4A): the larger the Cu/Zn ratio, the more pronounced the drop in the activity of the catalysts at the end of the experiments. 0.5CuZn-R was the least active catalyst at the beginning of the reaction but demonstrated the most stable activity with reaction time, while 6CuZn-R, with a high copper content, was the least stable in its performance. Despite the observed differences in the initial activity of 0.5CuZn-R, 1CuZn-R and 3CuZn-R, the final MeHe conversion for these catalysts was approximately similar, in the range of 58-60% (Figure 4A). It could be assumed that the observed difference in the performance of the catalysts was connected with a change in their properties during the reaction, which was demonstrated by a change in the composition of the reaction products. Previously, it was shown that the hydrogenolysis of dimethyl adipate on CuZn catalysts resulted in the formation of 1,6-hexanediol as the targeted product and abundant by-products formed by the transesterification reaction route, with other products being present in minor amounts [21,22,29].
In the case of MeHe hydrogenolysis, only two main reaction products were observed: hexan-1-ol (1-Heol), the product formed by the hydrogenolysis reaction route, and hexyl hexanoate (HeHe), produced by the transesterification route between MeHe and the formed 1-Heol. Figure 4B depicts the selectivity to the reaction products as a function of reaction time for the different CuZn catalysts. In all cases, the 1-Heol selectivity gradually decreased and, accordingly, the HeHe selectivity increased during the experiments, which was most pronounced for the catalysts with a high Cu/Zn ratio, 3CuZn-R and 6CuZn-R. Figure 4C shows that the yield of 1-Heol was very similar for all studied catalysts at low MeHe conversion, with a linear increasing trend. With further growth in MeHe conversion, the linear growth of the 1-Heol yield was observed for 0.5CuZn-R and 1CuZn-R, while in the case of 3CuZn-R and 6CuZn-R an obvious deviation from a straight line was observed. On the other hand, the HeHe yield progressively increased with the growth in MeHe conversion, and this trend was again most pronounced for 3CuZn-R and 6CuZn-R (Figure 4D). The observed change in the yield of the reaction products suggested that the properties of the active sites in the catalysts responsible for the hydrogenolysis and transesterification routes differed. In order to explain the observed trends in the performance of the catalysts, an XRD study of the samples after the reaction was carried out.
A comparison of the size of the constituent particles in the freshly reduced and spent catalysts evidenced that an enlargement of the Cu⁰ particles occurred during the reaction, which was most obvious for the catalysts with a high Cu/Zn ratio. The size of the copper particles in 6CuZn-R and 6CuZn-AR increased from 12.2 nm to 16.8 nm, while for 0.5CuZn-R and 0.5CuZn-AR this increase was only marginal, from 6.0 nm to 6.5 nm (Figure 2A). Therefore, the degradation in the hydrogenolysis activity observed for the CuZn catalysts with a high copper content could be explained by the gradual sintering of metallic copper particles during the reaction. These results thus provided additional evidence that the promoting effect of the zinc oxide particles was related to stabilizing the copper particles, and it was most pronounced for catalysts with a high ZnO content, i.e., with a Cu/Zn ratio in the range of 0.5-1. In our recent article [37], we have shown that ZnO particles rather than metallic Cu⁰ were responsible for the occurrence of the transesterification reaction between the methyl ester and the formed alcohol. In general, the transesterification reaction can proceed with the participation of either acidic or basic sites [41][42][43][44]. The results from Table 2 suggest that the number of both kinds of active sites in the CuZn catalysts increased with the growth in the Cu/Zn atomic ratio from 0.5 to 3, followed by a slight decrease for the 6CuZn-R sample. From this point of view, the increased selectivity to HeHe with the increasing Cu/Zn ratio from 0.5 to 3 was not surprising. Nevertheless, the largest selectivity to HeHe, observed for 6CuZn-R, was hard to explain when considering exclusively the acid-base properties of the catalyst. Therefore, in addition to the acid-base characteristics of the calcined CuZn-C samples, other reasons for the observed trends in selectivity to hydrogenolysis and transesterification products could be considered. First of all, the study of the catalysts after the reaction using N2 physisorption showed that both the BET surface area and the total pore volume of the spent 0.5CuZn-AR were similar to the values obtained for the freshly calcined 0.5CuZn-C, while both values were dramatically lower in the case of 6CuZn-AR (Table 1). The observed trend in the textural properties of the spent catalysts correlated well with their stability in the reaction and with the observed product selectivity. Indeed, the sintering of the Cu particles may contribute to a decrease in the total surface area. Additionally, the mesoporous space of the spent catalysts with a high transesterification activity could be occupied by high-molecular-weight compounds which were not removed by simple washing after the reaction. To obtain additional information on the transesterification performance of the CuZn catalysts, experiments on the mutual processing of a MeHe and 1-Heol mixture (2:1 mol/mol) under the same reaction conditions were carried out (see the Supplementary Materials, Figure S2). These experiments demonstrated that the yield of HeHe as a transesterification product formed by the reaction between MeHe and 1-Heol increased with the growth of the Cu/Zn ratio in the CuZn-R catalysts. Among them, 6CuZn-R possessed a remarkable transesterification performance (see the Supplementary Materials, Figure S2).
For comparison, the performance of ZnO and reduced Cu catalysts in the conversion of the MeHe and 1-Heol mixture was also evaluated (both samples were prepared by the calcination of either Zn or Cu precursors synthesized by the same recipe as was used for the CuZn samples). Figure S2B shows that the yield of HeHe was larger on single-phase ZnO than on the CuZn-R catalysts, suggesting that the ZnO component in the CuZn-R catalysts was responsible for the transesterification reaction route in methyl ester hydrogenolysis. Nevertheless, the calculation of the corrected initial rate of HeHe formation based on the ZnO content of the catalysts showed that the activity in the transesterification route increased with an increase in the Cu/Zn ratio, and it was even larger for 6CuZn-R than for single-phase ZnO (Table 5). Taking into account the XRD data (Figure 2B), the observed trend in the transesterification performance correlated well with the size of the ZnO particles in the catalysts (Figure 5B). In addition, the performance of the single-phase ZnO sample also agreed well with the general trend. Based on the obtained results, it could be assumed that both the acid-base properties of the CuZn catalysts and the size of the ZnO species were responsible for the performance of the CuZn-R catalysts in the transesterification step.

The Hydrogenolysis of Hexyl Hexanoate in the Presence of CuZn-R Catalysts with Different Cu/Zn Ratio
Considering the physico-chemical characteristics of the CuZn mixed oxides and their change during the reaction may help in explaining the observed differences in MeHe conversion and product selectivity. However, these approaches could not shed light on the reasons for the successive increase in the content of HeHe as a transesterification reaction product. The accumulation of HeHe among the reaction products with reaction time was puzzling since this compound, also being an ester, should also be converted by hydrogenolysis. Therefore, the hydrogenolysis reactivity of MeHe and HeHe should be compared under similar reaction conditions. Figure 6 demonstrates that, like MeHe, HeHe was effectively transformed by the hydrogenolysis route in the presence of all studied CuZn catalysts, yielding two 1-Heol molecules. The HeHe conversion was in the range of 53-65% after 180 min, while the character of the change in HeHe conversion also followed the dependence on the Cu/Zn ratio previously observed for MeHe hydrogenolysis. The initial activity of 1CuZn-R and 3CuZn-R in HeHe hydrogenolysis was nearly similar (Table 4), but the activity of the latter significantly slowed down during the reaction (Figure 6). On the other hand, the initial activity of 0.5CuZn-R was lower, but the HeHe conversion over this catalyst at the end of the reaction exceeded 60%, i.e., approximately similar to that observed for the most active 1CuZn-R. Finally, the HeHe conversion over 6CuZn-R was the lowest regardless of reaction time (Figure 6). The comparison of the TOF values for MeHe and HeHe hydrogenolysis showed that the TOF_MeHe/TOF_HeHe ratio was only slightly below 1 for 0.5CuZn-R and 1CuZn-R, while it increased to 1.3 and 1.8 for 3CuZn-R and 6CuZn-R, respectively (Table 4). The performed calculations evidenced that the catalysts with a low copper content demonstrated approximately similar performance in the hydrogenolysis of the two esters, while MeHe was converted more effectively than HeHe on the catalysts with a high Cu/Zn ratio.
The obtained TOF_MeHe/TOF_HeHe values thus allowed explaining the accumulation of HeHe among the reaction products formed during MeHe hydrogenolysis on 3CuZn-R and 6CuZn-R. However, the performance of the catalysts with a low Cu/Zn ratio requires additional consideration. The activity of the 1CuZn-R catalyst in the simultaneous conversion of MeHe and HeHe mixtures with different MeHe/HeHe ratios (100/0, 85/15, 70/30, 50/50, 30/70, 15/85, and 0/100) was studied under the same reaction conditions. The conversion of the two esters was calculated based on their content in the initial MeHe/HeHe mixtures and after 180 min of reaction. The conversion of MeHe constantly increased with decreasing MeHe content in the reaction mixture, reflecting a concentration effect (Figure 7). In contrast, the conversion of HeHe gradually decreased with decreasing HeHe content (or with increasing MeHe content) in the MeHe/HeHe mixture. Moreover, at a MeHe/HeHe molar ratio of 85/15 the conversion of HeHe had a negative value (Figure 7): this could be interpreted as its additional formation from MeHe by the transesterification reaction route rather than its conversion by the hydrogenolysis route. Consequently, the decrease in HeHe conversion in the other experiments could also be partially explained by the formation of this compound upon the interaction of MeHe with the formed 1-Heol. Nevertheless, it should be kept in mind that MeHe was converted over 1CuZn-R predominantly to 1-Heol rather than to HeHe (Figures 4C and 5D). Therefore, both the absence of HeHe conversion at a low HeHe content in the mixture (MeHe/HeHe = 85/15) and the decrease in HeHe conversion even at a low MeHe content (MeHe/HeHe = 15/85) might also mean that the smaller MeHe molecules could exert an inhibiting effect on the conversion of the bigger HeHe molecules. This assumption could explain the trends in the change of product selectivity observed during the hydrogenolysis of dimethyl adipate (DMA) in our recent studies [21,22,30,37,45,46]. Indeed, the selectivity to 1,6-hexanediol was below 20% over a broad range of DMA conversion, and it sharply increased only at DMA conversion approaching 100% [37]. Similarly, it was found in a long-time experiment with MeHe as the feed that the selectivity to 1-Heol constantly decreased and, conversely, the selectivity to HeHe increased along with the growth in MeHe conversion up to ≈80%, followed by a sharp increase in the 1-Heol selectivity at the expense of the HeHe selectivity (Figure 8).
The results suggested that, during the hydrogenolysis of methyl esters in the presence of CuZn catalysts, a high selectivity to alcohols could not be reached until almost total conversion of the initial ester was achieved and, consequently, the consumption of the transesterification product exceeded its formation.

The Stability of CuZn Catalysts in Repeated Reaction Cycles
To evaluate the effect of the Cu/Zn ratio on the stability of the CuZn catalysts in repeated MeHe hydrogenolysis cycles, the reaction mixture after the completion of the first catalytic run was removed from the autoclave through an outlet valve, followed by the addition of fresh reactant into the reactor. Figure 9 demonstrates the MeHe conversion obtained in three consecutive catalytic runs (A) and the selectivity to 1-Heol at the end of each experiment (B) as a function of the Cu/Zn ratio in the catalysts. The catalysts with a lower Cu content, 0.5CuZn and 1CuZn, demonstrated stable performance in three consecutive runs of MeHe hydrogenolysis. In contrast, the MeHe conversion gradually decreased in repeated reaction cycles in the case of the catalysts with a higher Cu content, and the decrease in MeHe conversion was most significant for 6CuZn-R (Figure 9A). Indeed, as evidenced by the XRD data, several reaction cycles resulted in a further growth in the size of the Cu particles in the spent catalysts after the third cycle, 3CuZn-AR3 and 6CuZn-AR3 (see the Supplementary Materials, Figure S3), so a gradual decline of their hydrogenolysis performance could be assumed.
Nevertheless, the selectivity to 1-Heol at similar MeHe conversion did not change to a great extent for all the catalysts (Figure 9B), which was consistent with the very small change in the size of the ZnO particles during the consecutive reaction runs (see the Supplementary Materials, Figure S3). The presented results evidence that CuZn catalysts with a Cu/Zn ratio of 0.5 to 1 show remarkable performance in terms of activity, selectivity, and stability in MeHe hydrogenolysis. The Hydrogenolysis of Methyl Esters with Different Lengths of Carbon Chain In the previous sections we compared the performance of the CuZn catalysts in the hydrogenolysis of two esters with the same C6 acid radical but different alcohol radicals, i.e., MeHe and HeHe. Consequently, the effect of the alkyl chain length of methyl esters on their conversion should also be elucidated. The hydrogenolysis activity of the 1CuZn-R catalyst was therefore compared in experiments with methyl hexanoate (MeHe), methyl octanoate (MeOc), methyl laurate (MeLa), and methyl stearate (MeSte). As evidenced from Figure 10A, the ester conversion in the presence of the 1CuZn catalyst gradually decreased from ≈60% for both methyl hexanoate and methyl octanoate to ≈47% for methyl stearate. The observed decline in ester conversion with the growth of the carbon chain can probably be related to the increasing competition of the larger molecules for active sites on the catalyst surface. Nevertheless, at the same ester conversion, the alcohol selectivity slightly increased with increasing carbon chain length (Figure 10B), which could be explained either by the difficulty for large molecules to interact with each other or by an increased hydrogenolysis activity of the resulting ester. Therefore, the hydrogenolysis of ester molecules with longer carbon chains prevailed over the transesterification step, which requires the participation of two bulky molecules. Overall, the performed experiments evidence the outstanding catalytic performance of CuZn catalysts in the hydrogenolysis of methyl esters of various sizes, allowing the corresponding alcohols to be produced with high selectivity. Preparation of Catalysts Precursors for CuZn catalysts with copper-to-zinc ratios from 0.5 to 6.0 were prepared by a co-precipitation method based on a recipe described in [21]. An aqueous solution of copper and zinc nitrates, Cu(NO3)2·3H2O (99.0%, Penta, s.r.o., Prague, Czech Republic) and Zn(NO3)2·6H2O (99.6%, Lach:ner, s.r.o., Neratovice, Czech Republic), in a proportion giving the desired Cu/Zn ratio and a total concentration of 0.5 mol·L−1, and an aqueous solution of the precipitant Na2CO3 (99.4%, Lach:ner, s.r.o.) with a concentration of 1.0 mol·L−1 were simultaneously dosed into a beaker containing distilled water preheated to 60 °C. The precipitation temperature was 60 °C; the flow rate of the mixed salt solution was fixed while that of the precipitant solution was continuously adjusted by changing the pump performance to keep a constant pH value of 7 ± 0.1. After completing the precipitation, the mixture was aged for 90 min, the obtained precipitates were filtered and washed with distilled water, and finally the wet cakes were dried at 60 °C for 24 h.
The as-prepared precursors were further named xCuZn-AP, where x stands for the Cu/Zn ratio. Calcined samples were produced by calcination of the precursors at 350 °C in air for 3 h and named xCuZn-C. Catalyst Characterization The phase composition of the prepared samples and the particle size of the relevant phases present in the catalysts were determined by X-ray diffraction (XRD) using a PANalytical X'Pert3 Powder diffractometer (Malvern Panalytical Ltd., Malvern, UK) and Cu Kα radiation. The XRD patterns were recorded in the range 2θ = 5°-70°. The crystallite sizes were calculated using the Scherrer equation, and the reflections at 2θ ≈ 31.8°, 43.3°, and 38.6° were used for the particle size calculations of ZnO, Cu, and CuO, respectively [47]. The copper and zinc content in the catalyst precursors was analyzed by XRF using an ARL 9400 XP spectrometer (Thermo ARL, Switzerland) equipped with a rhodium lamp. It was assumed that the Cu/Zn ratio remained the same after the calcination step. The absence of Cu or Zn leaching was confirmed by the analysis of the liquid reaction products by AAS using an Agilent 280FS AA (Agilent Technologies, Santa Clara, CA, USA), where a mixture of acetylene and air was used as the atomization flame. Nitrogen physisorption was measured at 77 K using a static volumetric adsorption system (TriFlex analyzer, Micromeritics, Norcross, GA, USA). The samples were degassed at 473 K (12 h) prior to the N2 adsorption analysis to obtain a clean surface. The obtained adsorption isotherms were fitted using the Brunauer-Emmett-Teller (BET) method for the specific surface area and the Barrett-Joyner-Halenda (BJH) method for the distribution of mesopores. The copper surface area was measured by the RFC technique carried out on an AutoChem II 2920 (Micromeritics Instrument Corp., Norcross, GA, USA) connected on-line to an RGA 200 quadrupole mass spectrometer (Prevac, Rogów, Poland). The details of the measurements were described previously [21,22]. Temperature-programmed desorption (TPD) of CO2 and NH3 was carried out using a Micromeritics AutoChem II 2920 instrument (Micromeritics Instrument Corp., Norcross, GA, USA) equipped with a thermal conductivity detector (TCD) and an MKS Cirrus 2 quadrupole mass spectrometer (MKS Instruments, Inc., Andover, MA, USA). Prior to the adsorption of CO2, a catalyst was heated under a helium flow (50 mL·min−1) up to 300 °C and kept at this temperature for 60 min to remove impurities from the sample. In the following step, the sample was cooled down to the adsorption temperature of 25 °C and treated with a flow of CO2/He (50%) for 30 min. The sample was then purged with helium for 90 min to remove physisorbed CO2. Afterwards, a linear temperature program (10 °C·min−1) was started at 25 °C and the sample was heated up to 450 °C. In the experiments on NH3 adsorption, a catalyst was heated under a helium flow (50 mL·min−1) up to 300 °C and kept at this temperature for 60 min to remove impurities from the sample. In the following step, the sample was cooled down to the adsorption temperature of 70 °C and treated with a flow of NH3/He (2.5%) for 30 min. The sample was then purged with helium for 105 min to remove physisorbed NH3. Afterwards, a linear temperature program (10 °C·min−1) was started at 70 °C and the sample was heated up to 450 °C.
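As a concrete illustration of the Scherrer-equation estimate mentioned above, the sketch below converts a peak position and width into a crystallite size; the shape factor K ≈ 0.9, the Cu Kα wavelength of 0.15406 nm, and the example peak values are illustrative assumptions, not data reported in this work.

```python
import math

def scherrer_size(two_theta_deg: float, fwhm_deg: float,
                  wavelength_nm: float = 0.15406, k: float = 0.9) -> float:
    """Estimate crystallite size (nm) from an XRD reflection via the Scherrer equation.

    D = K * lambda / (beta * cos(theta)), with beta the peak FWHM in radians.
    """
    theta = math.radians(two_theta_deg / 2.0)   # Bragg angle
    beta = math.radians(fwhm_deg)               # FWHM converted to radians
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical example: a ZnO reflection near 2theta = 31.8 deg with an assumed
# FWHM of 0.8 deg corresponds to a crystallite size of roughly 10 nm.
print(f"{scherrer_size(31.8, 0.8):.1f} nm")
```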
Catalyst Testing All catalytic experiments on the hydrogenolysis of carboxylic acid methyl esters were carried out in a Parr stainless-steel autoclave with a reactor volume of 300 mL. Methyl hexanoate (MeHe, Sigma-Aldrich, ≥99%, Merck Life Science spol. s r.o., Prague, Czech Republic), hexyl hexanoate (HeHe, Sigma-Aldrich, ≥97%), methyl octanoate (MeOc, Sigma-Aldrich, 99%), methyl laurate (MeLa, Sigma-Aldrich, ≥98%), and methyl stearate (MeSte, Sigma-Aldrich, ≥96%) were used as reactants for the tests. Before a catalytic experiment, a calcined catalyst (usually 1.5 g) was loaded into the autoclave and reduced in situ at 210 °C using H2 (99.9%, SIAD Czech, s.r.o., Prague, Czech Republic). The state of copper and the size of both Cu and ZnO particles after the reduction step were determined by unloading the reduced samples from the autoclave. To prevent the re-oxidation of the metallic copper species by oxygen in air, hydrogen was replaced with nitrogen after completing the reduction step, the reactor was cooled down to room temperature, methyl hexanoate was added to the reduced sample, and the mixture was stirred for 30 min. The sample was then separated from the excess of methyl hexanoate by filtration, washed with acetone, and dried at ambient temperature. The reduced and treated samples were further named xCuZn-R. To start a catalytic experiment, after the reduction step and autoclave cooling, 0.69 mol of a methyl ester was loaded into the reactor. In the case of methyl stearate, a mixture of the ester with decalin (decahydronaphthalene, mixture of isomers, Aldrich, 98%) in a volume ratio of 1:2 was used. The reaction temperature was kept at 210 °C, while the hydrogen pressure was kept at 100 bar. The effect of both internal and external diffusion was evaluated by performing catalytic experiments with varying stirring rates and catalyst weights (see the Supplementary Materials, Table S1). Based on these results, the stirring rate was kept at 600 rpm. Liquid reaction products were periodically (5, 10, 20, 40, 60, 90, 120, and 180 min) withdrawn from the reactor during the experiment, diluted with acetone (1:25 V/V), and analyzed by an Agilent 7820 GC-FID (Agilent Technologies, Santa Clara, CA, USA) using an HP-5 capillary column (30 m length, 0.32 mm i.d., 0.25 µm film thickness). The conversion of methyl esters was evaluated using Equation (1). Due to the absence of cracking reactions (confirmed by the GC analysis of the products), product selectivity was calculated using Equation (2). Methanol was excluded from consideration because it was present in both the liquid and gaseous product streams, which prevented its accurate quantification. X = [(N_ester, t=0 − N_ester, t=t)/N_ester, t=0] × 100% (1) S_i = (N_ester, i/N_ester, tot.) × 100% (2) where N_ester, t=0 stands for the initial number of methyl ester moles, N_ester, t=t stands for the number of methyl ester moles at reaction time t, N_ester, i stands for the number of methyl ester moles converted to product i, and N_ester, tot. stands for the total number of converted methyl ester moles. All catalytic experiments were repeated 2-3 times to confirm their reproducibility, and the experimental error was evaluated as ±5%.
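A minimal sketch of how Equations (1) and (2) translate into code is given below; the function and variable names, as well as the example mole numbers, are hypothetical and serve only to illustrate the bookkeeping.

```python
def conversion(n_ester_t0: float, n_ester_t: float) -> float:
    """Equation (1): fraction of the methyl ester converted at time t."""
    return (n_ester_t0 - n_ester_t) / n_ester_t0

def selectivity(n_ester_to_product_i: float, n_ester_converted_total: float) -> float:
    """Equation (2): share of the converted ester ending up in product i
    (applicable here because no cracking products were detected)."""
    return n_ester_to_product_i / n_ester_converted_total

# Hypothetical GC-derived mole balance for one sampling point:
n0, nt = 0.69, 0.28                  # mol MeHe loaded / remaining
to_heol, to_hehe = 0.30, 0.11        # mol MeHe converted to 1-Heol and to HeHe
x = conversion(n0, nt)               # ≈ 0.59
s_heol = selectivity(to_heol, n0 - nt)   # ≈ 0.73
s_hehe = selectivity(to_hehe, n0 - nt)   # ≈ 0.27
print(f"X = {x:.2f}, S(1-Heol) = {s_heol:.2f}, S(HeHe) = {s_hehe:.2f}")
```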
In the experiments with MeHe, the turnover frequency (TOF) was calculated to evaluate the specific activity of the copper species in the catalysts with varying Cu/Zn ratio using the equation TOF = (M_MeHe · x_MeHe) · σ_Cu · N_A/(m_cat · S_Cu), where M_MeHe is the amount of MeHe loaded into the reactor (mmol), x_MeHe is the MeHe conversion, N_A is the Avogadro constant, m_cat is the weight of the catalyst (g), S_Cu is the specific copper surface area (m²·g−1), and σ_Cu is the cross-sectional area of copper, equal to 0.0154 nm². To follow changes in the characteristics of the CuZn samples after the catalytic tests, all spent catalysts were separated from the reaction mixture by filtration, washed with acetone, and dried at 60 °C for 24 h. The resulting spent samples were named xCuZn-AR. To assess the stability of the CuZn catalysts in repeated catalytic cycles, their performance was evaluated in three consecutive reaction cycles. For this purpose, the spent catalysts after a catalytic run were separated from the reaction mixture, washed with acetone, dried at ambient temperature, and used in the next experiment without any additional treatment. In this case, the spent catalysts after the third catalytic run were named xCuZn-AR3. Conclusions The results presented in this study contribute to the understanding of the catalytic performance of CuZn catalysts in the hydrogenolysis of carboxylic acid esters to the corresponding alcohols. First, experiments with catalysts differing in Cu/Zn ratio from 0.5 to 6 confirmed the positive role of ZnO species in stabilizing metallic copper particles and preventing their sintering. Based on N2O chemisorption and XRD results, it was suggested that the initial activity of the catalysts in the conversion of methyl hexanoate was determined by the surface area of the copper particles and their size. As a consequence, the maximum initial activity in the reaction was observed for the catalyst with a Cu/Zn ratio of 3. Nonetheless, the performance of the catalysts changed during the reaction, and a lower Cu/Zn ratio favored their stability in ester conversion. The high stability of the catalysts with Cu/Zn ratios of 0.5 and 1 in the hydrogenolysis of esters was also confirmed by experiments with three successive reaction cycles. The obtained results were explained by considering XRD results on the comparative evaluation of the size of Cu and ZnO particles in the calcined, reduced, and spent catalysts. Second, methyl hexanoate conversion resulted in two main products: hexan-1-ol, formed via the direct hydrogenolysis of the ester, and hexyl hexanoate, resulting from the transesterification of methyl hexanoate with the produced hexanol. Selectivity toward the transesterification route unexpectedly increased with a decrease in the content of ZnO particles, i.e., the active sites for this reaction step. By correlating the obtained catalytic results with the XRD and TPD data, it was concluded that the observed transesterification selectivity could be related to both the acid-base properties of the CuZn catalysts and the size of the ZnO particles. Third, to explain the excessive formation of the transesterification product, dedicated experiments were carried out on the conversion of either pure hexyl hexanoate or methyl hexanoate, as well as of their mixtures. The obtained results demonstrated that the activity of the CuZn catalysts in the conversion of both esters was comparable.
The accumulation of hexyl hexanoate among the reaction products of methyl hexanoate hydrogenolysis could be explained by the constant formation of the hexyl ester from the methyl ester and by the inhibiting effect of methyl hexanoate on hexyl hexanoate conversion. As a consequence, the selectivity to hexyl hexanoate constantly increased with the growth in methyl hexanoate conversion, and a high selectivity to hexanol as the target reaction product could not be achieved until the methyl hexanoate conversion approached 100%. Finally, the investigated CuZn catalyst showed high efficiency in the conversion of methyl esters with different carbon chain lengths, varying from hexanoate to stearate. Only a slight decrease in the conversion of the starting ester and an increase in alcohol selectivity (at the same conversion level) were observed in the performed experiments, which could be explained by a steric effect. The results obtained in this work can be used in forthcoming studies to optimize the properties of copper-containing catalysts for the selective hydrogenolysis of esters to the corresponding alcohols. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/catal11111417/s1, Figure S1: The profiles of H2 consumption obtained for CuZn mixed oxides with different Cu/Zn ratio, Figure S2: MeHe conversion observed during the conversion of the MeHe + 1-Heol mixture in the presence of reduced CuZn catalysts with different Cu/Zn ratio as well as single-phase ZnO and Cu catalysts, Figure S3: The comparison of Cu (A) and ZnO (B) particle size in spent catalysts after the first (AR) and the third (AR-3) catalytic runs.
14,226.8
2021-11-22T00:00:00.000
[ "Chemistry" ]
Solar Panel Tilt Angle Optimization Using Machine Learning Model: A Case Study of Daegu City, South Korea : Finding the optimal panel tilt angle of a photovoltaic system is an important matter, as it allows the amount of sunlight received to be converted into energy efficiently. A number of studies have used various research methods to find the tilt angle that maximizes the amount of radiation received by the solar panel. However, recent studies have found that conversion efficiency is not solely dependent on the amount of radiation received. In this study, we propose a solar panel tilt angle optimization model using machine learning algorithms. Rather than trying to maximize the received radiation, the objective is to find the tilt angle that maximizes the converted energy of photovoltaic (PV) systems. Considering various factors such as weather, dust level, and aerosol level, five forecasting models were constructed using linear regression (LR), least absolute shrinkage and selection operator (LASSO), random forest (RF), support vector machine (SVM), and gradient boosting (GB). Using the best forecasting model, our approach showed an increase in PV output compared with optimal-angle models. Introduction Recently, research on and use of photovoltaic power generation have been increasing worldwide. With issues such as the depletion of natural resources and environmental pollution, securing sustainable green energy and using it more effectively has become important. In particular, photovoltaic power generation has attracted a great deal of attention because it uses semi-permanent energy sources such as the sun, but efficient deployment has been limited by factors such as location, climate, and installation type. There have been numerous efforts to implement photovoltaic systems in South Korea. The country relies heavily on imports of fossil fuels as its source of energy, and its energy-consumption rate is among the world's top 10 [1]. Nonetheless, due to the negative effects fossil fuels have on the environment, the Korean government plans to build rural-area photovoltaic (PV) systems. Following this trend, numerous studies have been conducted by Korean researchers on PV systems, including topics such as the regional differences in the optimal orientation of PV systems and the optimal PV model under residential conditions to minimize cost [2,3]. Solar energy is converted into electricity using photovoltaic (PV) technology, which receives solar irradiance on its panel as a source of energy. Roman [4] noted that how much electricity a solar system produces depends on how much sunshine it receives. Therefore, the more sunlight a PV panel collects, the more energy it produces. Accordingly, previous studies have focused on estimating solar radiation and the optimal tilt angle of the solar panel to maximize the amount of solar irradiation. Jamil et al. [5] estimated the availability of solar radiation for south-facing flat surfaces in the humid subtropical climatic region of India, and monthly, seasonal, and annual optimum tilt angles were estimated. Benghanem [6] analyzed the optimal choice of the tilt angle of the solar panel in order to collect the maximum solar irradiation in Madinah, Saudi Arabia. Wei [7] constructed forecasting models to estimate surface solar radiation on an hourly basis and the solar irradiance received by solar panels at different tilt angles, to enhance the capability of PV systems in Tainan City, Taiwan. Nevertheless, the amount of sunlight reaching the PV panel is not the sole factor determining the maximum power generated.
Although not as impactful as solar radiation, factors such as elevation, humidity, and weather condition were found to be other important variables in determining solar power generation [8]. Dinçer and Meral [9] found that factors such as cell temperature, MPPT (maximum power point tracking), and energy conversion efficiency affect solar cell efficiency. Since each PV module consists of different solar cell structures, materials, and technologies, it is difficult to expect a unified spectral response when an equal amount of solar radiation is given. As such, finding the optimal tilt angle of a solar panel to receive maximum sunlight does not guarantee that the PV module can exploit it fully. Martin and Ruiz [10] analyzed the angular losses of the incident radiation and the effect of surface soiling. They calculated the optical losses under given field conditions relative to the normal-incidence situation of a PV module with a clean surface. They found that dust influenced the angular loss meaningfully. This finding suggests that the angle at which maximum sunlight reaches the PV module is not necessarily the optimal angle; rather, the optimum results from a complex entanglement of a wide variety of factors. Therefore, the objective of this study is to construct a forecasting model to estimate solar power generation and to derive, through simulation, an angle that can maximize it, considering various conditions such as weather, dust level, and aerosol level. PV data from 22 solar power plants in Daegu city, South Korea, weather data ranging from January 2016 to March 2018, and sun location data were used as input variables. The rest of this paper is organized as follows: Section 2 describes the study site and data. Section 3 introduces the proposed methodology of PV panel optimization based on the machine learning algorithm. Section 4 evaluates the result of the proposed model and compares the predicted solar power based on the optimized panel angle against the original angle. Finally, Section 5 presents the conclusions of this study. Study Site and Data The study site is in Daegu city, South Korea. The collected data are from 22 PV modules out of the 246 present in Daegu city. Solar Power Generation Data A set of 173,568 records of solar power generation data was acquired from the 22 PV modules. The collection period of the data ranges from January 2016 to March 2018. The data consist of relevant features such as module capacity, installation location, module azimuth angle, and panel angle. The panels' angles were all fixed, as shown in Table 1. Meteorological Data The meteorological data of Daegu Metropolitan City were collected through the Meteorological Agency's Open Weather Portal. The meteorological office operates a single meteorological observatory in the city and collects time-series data such as temperature, precipitation, wind speed, humidity, and sunshine. Synoptic meteorological observations are ground observations that are performed at the same fixed time at all observatories in order to determine the weather on the synoptic scale. The size of the scale refers to the spatial extent and longevity of the high- and low-pressure systems expressed on a weather map. The attributes of the collected dataset are shown in Table 2. The mass concentrations of aerosols, i.e., the microdust (µg/m³), were collected using a dust monitor (PM10) placed in Daegu Metropolitan City.
The dust monitor is a device that continuously measures the concentration of particles having a diameter of 10 µm or less among the aerosols floating in the atmosphere. In addition, aerosol data were collected (Table 3). Aerosols are solid or liquid particles floating in the air, usually with sizes of about 0.001-100 µm, and are caused by natural factors such as dust, ash, and sea salt, as well as by artificial factors such as emissions from urban and industrial facilities, incineration, and automobiles. They affect climate change by blocking or absorbing the solar radiation reaching the surface, or by changing cloud formation and physical properties. The Meteorological Agency observes the aerosol concentration by particle size from 0.5 to 20 µm at the Anmyeon Island Climate Change Monitoring Center as part of the World Meteorological Organization's Global Atmosphere Watch (GAW) program. Sun Position Data The hourly solar position for Daegu City during the 2016-2018 period was calculated using theoretical equations. The declination angle, the hour angle, the zenith angle, the elevation angle, and the azimuth angle were the solar position variables used in this study. In addition, the ratios of beam radiation and diffuse radiation on a tilted surface were also calculated. The declination angle, denoted by δ, has a seasonal variance due to the tilt of the earth on its axis of rotation and the rotation of the earth around the sun; it is calculated as a function of n_d, the day of the year. The hour angle, denoted by ω, is the hourly angle of the sun's movement from east to west on the celestial sphere of the Earth. The sun's positional change is 15° per hour, since a full rotation of the earth on its axis takes 24 hours; the hour angle is therefore calculated from H, the time in 24-hour format. The zenith angle, denoted by θ, is the angle between the sun and the point directly overhead at the measuring location; it is calculated from the declination, the hour angle, and λ, the latitude of the measuring location. The elevation angle, denoted by α, is the angle between the direction of the sun as seen from the observation point and the horizontal plane. The azimuth angle, denoted by ξ, describes the horizontal direction of the sun relative to the observer. The ratio of the average daily beam radiation on a tilted surface and the ratio of the average daily diffuse radiation on a tilted surface were calculated using the equations proposed by Liu and Jordan [11]. The equation for the ratio of the average daily beam radiation on a tilted surface (R_b) depends on the geographic location of the observation point; since the observation point of this study is located in the northern hemisphere, the corresponding equation was used, where φ is the latitude, β is the solar panel's tilt angle, and ω_ss is the sunset hour angle. Lastly, the ratio of the average daily diffuse radiation on a tilted surface (R_d) was calculated with the corresponding Liu-Jordan relation.
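The display equations themselves were not preserved in this extract. As an illustration only, the sketch below uses the standard Cooper approximation for the declination and the Liu-Jordan diffuse-ratio relation; these specific forms, and the example location and date, are assumptions, not the authors' exact expressions.

```python
import math

def declination(n_d: int) -> float:
    """Solar declination (deg); standard Cooper approximation, assumed here."""
    return 23.45 * math.sin(math.radians(360.0 * (284 + n_d) / 365.0))

def hour_angle(hour: float) -> float:
    """Hour angle (deg): 15 deg per hour, zero at solar noon."""
    return 15.0 * (hour - 12.0)

def zenith(lat: float, delta: float, omega: float) -> float:
    """Solar zenith angle (deg) from latitude, declination, and hour angle."""
    lat_r, d_r, w_r = map(math.radians, (lat, delta, omega))
    cos_z = math.sin(lat_r) * math.sin(d_r) + math.cos(lat_r) * math.cos(d_r) * math.cos(w_r)
    return math.degrees(math.acos(cos_z))

def r_d(beta: float) -> float:
    """Liu-Jordan ratio of diffuse radiation on a surface tilted by beta (deg)."""
    return (1.0 + math.cos(math.radians(beta))) / 2.0

# Hypothetical example: Daegu (~35.9 N), day 172, 10:00 solar time, tilt 30 deg
delta = declination(172)
alpha = 90.0 - zenith(35.9, delta, hour_angle(10))   # elevation angle
print(f"delta={delta:.1f} deg, elevation={alpha:.1f} deg, R_d={r_d(30):.3f}")
```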
Methodology Procedures The procedure of this study is as shown in Figure 1. Data Collections As mentioned in the previous section, the PV module data, meteorological data, and sun position data were the data required for this study. As each PV module's collection period differed within the range of January 2016 through March 2018, the meteorological data and sun position data were collected for this whole period. Data Preprocessing All collected data were recorded on an hourly basis. As our proposed model predicts each PV module's monthly and annual output, the collected data were aggregated accordingly to match the unit. Additionally, for every PV site, we calculated the average daily beam radiation (R_b) and average daily diffuse radiation (R_d) for all possible panel tilt angles ranging between 0 and 90 degrees using the equations stated in the previous section. Data were originally collected from 69 PV sites in Daegu city, but we chose only 22 of them because the others had missing data. Correlation Analysis In the data preprocessing stage, we performed correlation analysis on the 22 PV sites and calculated the correlation between the input features and the PV output to select relevant features for our forecasting model. From the 31 available features, 14 were selected, as shown in the resulting Table 4. Modeling In machine learning, predictive methods serve different objectives depending on which type of prediction problem a researcher works on. Since our objective is to construct a model that can successfully learn from the data to predict the PV output, which is a continuous variable, regression learners were considered as our predictive method candidates. In this work, gradient boosting was used as our model's base algorithm. A gradient boosting machine is an ensemble method that constructs base learners to maximally correlate them with the negative gradient of the loss function associated with the whole ensemble [12]. Ensemble methods often improve predictive performance owing to their generalization power and computational advantage [13]. More specifically, a gradient boosting machine constructs a sequence of regression trees, where each tree predicts the residual of the preceding tree, and the machine aggregates the predictions additively to minimize the loss [14].
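Before the algorithm comparison that follows, a minimal sketch of how such a forecasting-plus-simulation pipeline could be wired together is shown below. It assumes scikit-learn, a pandas DataFrame per module with hypothetical column names (year, pv_output, r_b, r_d, and the selected weather/dust features), and a helper ratio_fn for the tilt-dependent radiation ratios; it illustrates the approach under these assumptions rather than reproducing the authors' implementation or tuned hyperparameters.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

def train_forecaster(df: pd.DataFrame, feature_cols: list) -> GradientBoostingRegressor:
    """Fit a gradient-boosting regressor on the 2016-2017 portion of one module's data."""
    train = df[df["year"] < 2018]                      # hypothetical split column
    model = GradientBoostingRegressor(random_state=0)  # defaults; grid search omitted
    model.fit(train[feature_cols], train["pv_output"])
    return model

def optimal_tilt(model, month_row: pd.Series, feature_cols: list, ratio_fn):
    """Sweep tilt angles 0-90 deg and return the angle with the highest predicted output.

    ratio_fn(angle) must return the (R_b, R_d) pair for that angle, e.g. from the
    Liu-Jordan relations; only these tilt-dependent features change during the sweep.
    """
    best_angle, best_out = 0, -np.inf
    for angle in range(0, 91):
        row = month_row.copy()
        row["r_b"], row["r_d"] = ratio_fn(angle)
        pred = model.predict(pd.DataFrame([row])[feature_cols])[0]
        if pred > best_out:
            best_angle, best_out = angle, pred
    return best_angle, best_out
```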
Compared to other machine learning algorithms, gradient boosting has proven to be very successful in experimental comparisons of learning algorithms [15,16]. It has also been applied successfully in industrial applications [17,18]. In terms of optimization, the gradient boosting algorithm has relatively few parameters to tune. In order to verify that the gradient boosting algorithm is a good fit for our study, we compared the predictive performance of different algorithms. For the comparison, we randomly selected one of the 22 PV modules (S07-04) and trained each model on a subset of its data (January 2016-December 2017). The trained models were then validated using the remaining portion of the PV dataset (January 2018-March 2018). The root-mean-square error (RMSE), which represents the difference between the predicted output and the actual output, was calculated for each model, as shown in Table 5. From this result, we verified that the gradient boosting (GB) model showed the lowest RMSE (train: 2.5152, test: 5.5122). Thus, we chose the trained gradient boosting model for our simulation model after tuning it using a grid-search algorithm. Model Simulation for Monthly/Annual PV Optimal Tilt Angle For every PV module, monthly optimal tilt angles were derived by simulating our trained model. We defined the optimal tilt angle as the angle that maximizes the PV output. The simulation period was January 2017-December 2017 and the simulated angles ranged from 0 to 90 degrees. Among the simulated angles, the angle that produced the highest PV output was recorded as the monthly optimal tilt angle, and the corresponding PV output was recorded as well. Similarly, a simulation of 2017 as a whole was done for the annual PV optimal tilt angle, and the single angle that produced the highest PV output for the entire year was recorded as the optimum. Results The estimation result for the 2017 PV outputs is shown in Table 6. The estimated PV output is the annual PV output predicted by our forecasting model. The original panel angles were applied for the estimation. The trained model successfully simulated the annual PV output with parameters identical to the original condition. The simulation result for the 2017 PV outputs is shown in Table 7. Here, our trained model simulates each PV module's annual output by applying: (1) the computed yearly optimal angle and (2) the computed monthly optimal angles. The comparison was made based on the model's estimated PV output shown in the previous result. The yearly optimal angles of the PV modules were 1-29°. Most of the modules had a small increase in PV output at the yearly optimum angle. The S06-03 module showed the smallest improvement in output (0.03%) while the S07-04 module showed the largest (4.02%). In terms of angular difference, the S06-03 module required the least angular change and the S07-04 module required the largest angular change. Similarly, we could see that the other modules' rates of improvement and rates of angular change were positively correlated. This pattern partially indicates how efficient the currently applied angles of the PV modules are. The difference in PV output was even more significant when the angles were adjusted monthly using the monthly optimum angles. For every PV module, the PV output for the monthly adjusted case was significantly better than for the yearly adjusted case. The S13-02 module showed the smallest improvement (1.89%) and the S07-05 module showed the largest (6.32%).
Although costly, the result suggests that it is advisable to adjust the panel angle on a monthly basis to achieve high efficiency. Samples of monthly optimum angles and outputs are shown in Appendix A. As shown in Table 8, when all other conditions are the same and only the angle of the PV panel is adjusted as suggested by our model, we can expect a total increase of 0.83% (22,452 kWh) in overall PV output when adjusted with the yearly optimum angle, and of 3.32% (91,662 kWh) when adjusted with the monthly optimum angles. To gain a realistic insight into these results, we used the LCOE (levelized cost of electricity) value for the solar energy conversion value [19]. In Korea, the LCOE value for 100 kW facilities was 147.1 Korean Won (KRW)/kWh. By converting the additional power generated, the 22 sites save 3302 thousand KRW (147.1 × 22,452) per year with the yearly optimum angle and 13,483 thousand KRW (147.1 × 91,662) with the monthly optimum angles. Conclusions In this paper, a forecasting model based on the gradient boosting algorithm was proposed to predict the amount of solar power generated by PV modules on both a monthly and a yearly basis, and the model was then used to simulate the energy generation of the PV modules to derive the monthly/yearly panel tilt angles that maximize it. The study site was Daegu city in South Korea. The model used solar power generation data, meteorological data, and sun position data. Compared to the originally fixed angles, fixing the panel angles at the yearly optimal angle brought a slight increase (0.83%) in the overall energy generated by the PV modules. The performance change of the individual PV modules varied from 0.03% to 4.02%, suggesting that the actually applied angles of these modules differed in efficiency. When the optimal angle of each PV module was calculated and adjusted on a monthly basis, the overall energy generation showed an even higher increase (3.32%) relative to the original angles. The performance change of the individual PV modules varied from 1.89% to 6.32%. Although all modules were located in a single city and share similar geometrical attributes, the optimal angles differed to some degree. We also calculated how much economic value is gained when these changes are applied in the real world on an annual basis: valued at the LCOE, the additional energy corresponds to 3302 thousand KRW per year with the yearly optimum tilt angles applied and 13,483 thousand KRW with the monthly optimum tilt angles applied, compared with the original tilt angles. The sun positional data were calculated from data collected by a single meteorological observatory. Although the studied PV modules were located in the same city and would not show significant differences in sun positional data between modules, we could expect a more precise and reliable outcome in both the modeling and simulation stages if the sun-related data could be measured for each module. We acknowledge a limitation in generalizing our findings to PV modules under various geographical conditions, since the experiment was done on PV modules located within a single city. In a future study, we plan to collect PV module data from different cities in order to improve the generalization of our approach. In addition, since our study collectively combined different factors and applied machine learning techniques, it was somewhat difficult to single out each individual feature's effect.
Future studies could address issues such as the effect of rain in clearing the dust level and thereby increasing PV output, using feature engineering or statistical techniques. Conflicts of Interest: The authors declare no conflict of interest. Appendix A This section presents the monthly optimum angles and corresponding PV outputs of some of the PV modules to visualize the monthly optimum case (e.g., the monthly PV output of module S13-02).
4,786.6
2020-01-21T00:00:00.000
[ "Computer Science" ]
Superconducting Nanowire Single-Photon Detectors for Quantum Information The superconducting nanowire single-photon detector (SNSPD) is a quantum-limit superconducting optical detector based on the Cooper-pair breaking effect induced by a single photon, which exhibits a higher detection efficiency, lower dark count rate, higher counting rate, and lower timing jitter than its counterparts. SNSPDs have been extensively applied in quantum information processing, including quantum key distribution and optical quantum computation. In this review, we present the requirements placed on single-photon detectors by quantum information, as well as the principle, key metrics, latest performance, and other issues associated with SNSPDs. Representative applications of SNSPDs in quantum information will also be covered. Introduction: Superconductivity, discovered by the Dutch physicist Heike Kamerlingh Onnes on April 8, 1911, is one of the most renowned macroscopic quantum effects [1]. Various applications of superconductors have been demonstrated in the past century with the increasing understanding of superconductivity. For example, superconducting magnets have been widely applied in commercial magnetic resonance imaging machines and several major science projects, including the International Thermonuclear Experimental Reactor [2] and superconducting maglev [3]. The superconducting quantum interference device is one of the most sensitive magnetic devices for biomagnetism and geophysics exploration [4; 5]. Superconductors can be used for sensing and detection because of their many extraordinary properties, including zero resistance, the Josephson effect, and Cooper-pairing. A photon is the quantum of the electromagnetic field, including electromagnetic radiation such as light and radio waves; it is an elementary particle with a definite energy E = hν = hc/λ, where h = 6.626 × 10⁻³⁴ J·s is Planck's constant, c = 2.998 × 10⁸ m/s is the speed of light in vacuum, and ν and λ are the frequency and wavelength of the photon, respectively. Microwave photons, terahertz photons, visible/near-infrared (NIR) photons, and high-energy photons/particles all exist. Superconductors can be engineered for detecting photons. Various superconducting sensors and detectors have demonstrated unparalleled performance over almost the whole electromagnetic spectrum, from low-frequency microwaves to high-energy particles. In this review, we focus on the detection of traditional photons at NIR and visible wavelengths (400-2000 nm), having a typical energy of approximately 1-5 × 10⁻¹⁹ J (0.6-3 eV). NIR and visible light are popular bit carriers for optical communication. NIR and visible photons are also among the key quantum bit carriers of quantum information (QI). The photon detection mechanism of superconductors can be simple and straightforward. According to the BCS theory developed by Bardeen, Cooper, and Schrieffer in 1957 [6], pairs of electrons (Cooper-pairs) are formed in superconductors via the electron-phonon interaction when the temperature T is lower than the superconducting transition temperature Tc. A Cooper-pair has a minimum binding energy Eg = 2Δ(T), where Δ(T) is the energy gap of the superconducting material, which is sensitive to T.
When T << Tc, Eg = 2Δ(0) = 3.528 kB·Tc, where kB = 1.381 × 10⁻²³ J/K is the Boltzmann constant. If a photon is absorbed by a superconductor, it may break a Cooper-pair and produce two quasi-particles if the photon energy E(λ) is larger than Eg. Let us consider a traditional low-temperature niobium nitride (NbN) superconductor for calculation purposes. NbN has a Tc of 16 K, corresponding to a Δ(0) of 3.2 meV. Theoretically, a single NIR photon having a wavelength of 1550 nm and an energy of 0.8 eV can break 125 Cooper-pairs. If the Cooper-pair breaking event produces a measurable physical quantity that can be captured using an appropriate instrument, then a single-photon detection event is registered. Different types of superconducting single-photon detectors (SPDs) exist. They may have different operation principles, use different device structures and materials, and generate different output signals even though they all rely on the Cooper-pair breaking mechanism. According to their operation principles, superconducting SPDs can be divided into different types [7]: the transition edge sensor (TES), the superconducting tunnel junction (STJ), the microwave kinetic inductance detector (MKID), and the superconducting nanowire single-photon detector (SNSPD). The TES usually comprises an ultralow-temperature superconducting film, such as tungsten (W), which produces a measurable resistive change within a sharp normal-to-superconducting transition upon photon absorption. The TES exhibits high detection efficiency, low speed, high timing jitter, and unique photon-number resolvability, and it usually requires a sub-Kelvin operating temperature [8; 9]. When an STJ operates as an SPD, one superconducting film (electrode) absorbs the photons and the photon energy is converted into broken Cooper-pairs and phonons. The transfer of the charge carriers from one electrode to another results in a measurable electrical current through the STJ [10]. An MKID is a thin-film, high-Q superconducting micro-resonator whose resonance frequency and internal quality factor change when incoming photons break Cooper-pairs in the superconductor. The frequency-shift and internal-dissipation signal measurements are referred to as the frequency readout and dissipation readout, respectively [11; 12]. STJs and MKIDs exhibit optical photon detection ability; however, no practical detectors have yet been developed. Meanwhile, an SNSPD usually has a nanowire/nanostrip structure. When a photon is absorbed by the current-biased SNSPD, a local resistive domain appears, resulting in a voltage pulse, which indicates a detection event. An SNSPD has high detection efficiency, low dark count rate, high speed, and low timing jitter [13]. Semiconducting SPDs, such as the single-photon avalanche diode (SPAD), have been widely applied. SPADs are avalanche photodiodes biased above the avalanche breakdown in Geiger mode, where a self-sustaining avalanche current can be triggered by an incident single photon.
Compact Si SPADs having a detection efficiency of more than 70% are commercially available for the detection of visible photons. InGaAs/InP SPADs are produced for detecting NIR photons because InGaAs has a lower bandgap than Si. However, their detection efficiency is usually not more than 30%, and the dark count rate is several tens of thousands of counts per second. A trickier NIR photon detection module is the so-called upconversion SPD. The upconversion SPD utilizes sum-frequency generation in a periodically poled lithium niobate waveguide or bulk crystals, converting the NIR photons into shorter-wavelength photons and detecting them using a Si SPAD, which may increase the detection efficiency to more than 40%. Quantum mechanics and information science are two significant scientific revolutions of the 20th century [14]. Although many innovations based on quantum mechanics have been successfully applied in information science and technology (e.g., lasers and transistors), they do not involve the direct control or manipulation of quantum states at the single-quantum level. In the previous few decades, modern science and technology have enabled the control and manipulation of various quantum systems together with information science, producing an emerging field: QI. The science and technology of QI can produce revolutionary advances in the fields of science and engineering, involving communication, computation, precision measurement, and fundamental quantum science. This is usually called "the second quantum revolution." The considerable potential of QI is expected to attract research funds of tens of billions of US dollars over the next few years from the governments of many countries and regions, including Australia [15], Canada [16], China [17], Europe [18], Japan [19], Russia [20], the United Kingdom [21], and the United States [22]. Many renowned information technology companies, such as Google, IBM, Microsoft, Huawei, and Alibaba, are also participating in the QI race. As an emerging and fast-growing field, QI does not have a uniform definition and classification. Different countries also have different classifications with respect to their initiatives. From the application perspective, QI technology has three directions: quantum communication, quantum computation/simulation, and quantum measurement/metrology. Quantum sensors and detectors are the core devices of QI systems. An SPD is essential for photon (optical quantum) measurement/metrology, or for any measurement/metrology where the signal can be converted into photons. For quantum communication and computation, an SPD will play an irreplaceable role in QI systems as long as a photon functions as the quantum state carrier [23]. Table 1 summarizes the state-of-the-art performances of various SPDs for QI at the telecom-band wavelength of 1550 nm, which is one of the key wavelengths for quantum communication and optical quantum computation. Different SPDs have been reviewed by Hadfield in 2009 [24] and by Eisaman et al.
in 2011 [25]. The data in Table 1 indicate that SNSPDs surpass their counterparts as an excellent SPD candidate for QI experiments. Many applications have been demonstrated using SNSPDs. Commercial SNSPDs are produced by several companies. Encouraged by the major success of SNSPDs in QI, this review focuses on SNSPDs and their applications in QI, although other impressive applications, such as deep-space communication [26] and light detection and ranging [27; 28], exist. Furthermore, several general or specific reviews of SNSPDs can be found [29][30][31][32][33]. This review introduces the SPD requirements arising from QI, the state of the art in SNSPDs, and the applications in QI, which will provide readers with a broad, application-oriented perspective. This study contains six chapters. Chapter 2 focuses on the SPD requirements arising from QI. Chapter 3 introduces SNSPDs and the operation principle, performance parameters, key issues, and latest achievements related to them. Chapter 4 summarizes the SNSPD applications with respect to QI. Chapter 5 presents other issues, including standardization and the commercial market. For clarity and completeness, we will also refer interested readers to more specialized literature on the different topics. Quantum communication Cryptography is the core of secure communication. Information-theoretically secure communication can be achieved using the one-time-pad method, where the key must be as long as the message and cannot be reused [42; 43]. The manner in which such a long key can be distributed in the presence of an eavesdropper is called the key distribution problem. Quantum key distribution (QKD), the core of quantum communication, was developed to solve this central challenge. QKD provides information-theoretical security based on the basic principles of quantum physics. For comparison, traditional cryptography methods are based on computational complexity, and their security depends on the algorithm or the available computational power. QKD, or quantum communication, is believed to be the first commercial application of quantum physics at the single-quantum level. The first QKD protocol was proposed by Bennett and Brassard in 1984 and is referred to as BB84 [44]. Subsequently, many other protocols were proposed and demonstrated to make QKD more practical and robust against various possible attacks, including decoy-state QKD [45][46][47], measurement-device-independent QKD (MDI-QKD) [48], and the latest twin-field QKD (TF-QKD) [49]. For details of quantum cryptography, refer to the review article by Gisin et al. in 2002 [50] and the latest review by Xu et al. in 2019 [51].
We may consider a classical QKD system using the BB84 protocol as an example. A sequence of single photons carrying qubit states is sent to Bob by Alice through a quantum channel, as shown in Figure 1(a). Alice uses the polarization states of single photons to encode random bits. Bob randomly selects the measurement basis, either rectilinear or diagonal, to perform measurements using two SPDs. The sifted key is obtained by keeping only the polarization data encoded and detected in the same basis. Alice and Bob can share the final secret key after performing additional classical post-processing on the sifted key. A QKD system (Figure 1(b)) needs several key components, including single-photon sources, a quantum channel, and SPDs. Two or four SPDs are necessary to implement the BB84 QKD system. Regardless of the manner in which the QKD protocols are developed, SPDs play an irreplaceable part in QKD systems whenever a single photon is used to transmit the quantum state. The performance metrics of QKD systems, such as the maximum transmission distance, secure key rate (RSK), and quantum bit error rate (RBE), depend on the performance of the SPDs. Simple expressions for the decoy-state BB84 QKD (Equation (1)) relate these metrics to the detector parameters, where η is the detection efficiency of the SPD, f is the clock frequency, u is the average photon number per pulse, L is the total channel loss, and Rdc is the dark count rate of the SPD. Equation (1) indicates that the performance of the QKD system depends on the key parameters of the SPDs, including η and Rdc. The dead time and timing jitter (TJ) of the SPDs will also affect the performance of a high-speed QKD system. Figure 1. (a) Schematic of the BB84 protocol. Two or four SPDs are required in the system with/without a polarization modulator. Reproduced with permission. [51] Copyright 2019, the authors (arXiv); (b) A typical QKD system with a decoy-state BB84 protocol using polarization coding. LD, laser diode; BS, beam splitter; F, neutral density filter; PBS, polarizing beam splitter; λ/2, half waveplate. Reproduced with permission. [50] Copyright 2002, American Physical Society.
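Equation (1) itself is not reproduced in this extract. As a rough illustration of how η, f, u, L, and Rdc enter the count rates, the sketch below gives a simplified, non-decoy back-of-envelope estimate of the sifted count rate and the dark-count contribution to the error rate; it is not the secure-key-rate formula of the reviewed work, and all numbers are hypothetical.

```python
import math

def raw_rates(f_clock: float, u: float, loss_db: float, eta: float,
              r_dc: float, gate_s: float):
    """Crude (non-decoy) estimate: sifted click rate and dark-count QBER floor."""
    transmittance = 10 ** (-loss_db / 10.0)
    p_signal = 1.0 - math.exp(-u * transmittance * eta)   # click prob. per pulse
    p_dark = r_dc * gate_s                                  # dark click prob. per gate
    r_sift = 0.5 * f_clock * (p_signal + p_dark)            # factor 1/2 from basis sifting
    qber_dark = 0.5 * p_dark / (p_signal + p_dark)          # dark counts are random
    return r_sift, qber_dark

# Hypothetical link: 1 GHz clock, u = 0.5, 40 dB channel loss, 100 dark counts/s,
# 1 ns gate; compare an SNSPD-like eta = 0.8 with a SPAD-like eta = 0.2.
for eta in (0.8, 0.2):
    r, q = raw_rates(1e9, 0.5, 40.0, eta, 100.0, 1e-9)
    print(f"eta={eta}: sifted ~{r:.0f} cps, dark-count QBER ~{q:.2e}")
```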
Quantum computation Quantum computers use quantum superposition to process information in parallel. Thus, they have a fundamental computing advantage over classical computers and could decrypt most modern encrypted communication. Some prototype quantum computation systems have been developed based on the theoretical and experimental studies conducted over the previous two decades. At the end of 2019, Google first demonstrated quantum advantage/supremacy using a processor containing programmable superconducting quantum bits (qubits) to create quantum states on 53 qubits [52]. Noisy intermediate-scale qubit devices will be useful for exploring many-body quantum physics. They may also have other useful applications. However, a 100-qubit quantum computer will not change the world immediately; we should consider it a significant step toward achieving more powerful quantum technologies in the future [53]. Unlike quantum communication, which is mostly based on photons/optical quanta, quantum computers can be built using several different physical qubits, including superconducting circuits [52], trapped ions [54], quantum dots [55; 56], nuclear magnetic resonance [57], nitrogen-vacancy centers in diamond [58], and photons [59]. In addition to the superconducting quantum computer, the optical quantum computer has approached a milestone in terms of quantum advantage. Lu et al. demonstrated boson sampling with 20 input photons and a 60-mode interferometer in a 10¹⁴-dimensional Hilbert space, which is equivalent to 48 qubits [60]. To build an optical quantum computer, one needs indistinguishable single photons, low-loss photonic circuits, and high-efficiency SPDs. Figure 2 shows an experimental setup of boson sampling using the optical quantum computer [60]. A single InAs/GaAs quantum dot, resonantly coupled to a microcavity, is used to create pulsed resonance-fluorescence single photons. For demultiplexing, 19 pairs of Pockels cells and polarizing beam splitters are used to actively translate a stream of photon pulses into 20 spatial modes. Optical fibers with different lengths are used to compensate the time delays. The 20 input single photons were injected into a 3D-integrated, 60-mode, ultra-low-loss photonic circuit comprising 396 beam splitters and 108 mirrors. Finally, the output single photons were detected by 60 SNSPDs. All the coincidences were recorded using a 64-channel coincidence count unit (not shown in Figure 2). The coincidence counting (CC) rate of n-photon boson sampling can be expressed as in Equation (2), where Rpump is the pumping repetition rate of the single-photon source, ηQD is the single-photon source brightness, ηde is the demultiplexing efficiency of each channel, ηc is the average efficiency of the photonic circuit, ηSPD is the detection efficiency of the SPDs, and S is the ratio of no-collision events to all possible output combinations. Equation (2) shows that an ηSPD of approximately 1 is critical for boson sampling when n is large. Increasing ηSPD from 0.3 (a typical value for a SPAD) to 0.8 (a typical value for an SNSPD) at a wavelength of 1550 nm can improve CC (n = 50) by 21 orders of magnitude. Thus, the sampling time can be considerably reduced. Furthermore, the SPDs should be sufficiently fast to match the Rpump of the single-photon source; otherwise, ηSPD cannot be guaranteed. Figure 2.
Experimental setup of boson sampling using 20 photons. The setup includes four key parts, i.e., a single-photon device, demultiplexers, a photonic circuit, and SNSPDs. Reproduced with permission. [60] Copyright 2019, American Physical Society. SPDs for QI The preceding analysis indicates that SPDs with high detection efficiency, low dark count rate, high counting rate, and low TJ are indispensable for QI. Semiconducting SPADs were previously widely applied in QI. However, their performance cannot keep up with the pace of QI development. SNSPDs were first demonstrated in 2001 [13]. Their performance has improved considerably over the previous two decades, greatly advancing QI science and technology as a key enabling technology. History An SNSPD based on the non-equilibrium hotspot effect observed in ultrathin superconducting films was first proposed by Kadin et al. in 1996 [61]. The concept of the SNSPD was successfully demonstrated by Gol'tsman using an NbN strip (200 nm wide and 5 nm thick) [13]. Although the first SNSPD result was only moderately satisfactory, it attracted the attention of the superconducting electronics and QI communities. The first experimental demonstration of an SNSPD for QKD was conducted by Hadfield et al. in 2006 [62]. The considerable potential of SNSPDs for various applications, including QI, has fascinated researchers all over the world. The performance of SNSPDs has been effectively enhanced based on improved knowledge and experience in materials, design, fabrication processes, and theory. Numerous milestones in QI experiments have been achieved using SNSPDs with unparalleled parameters, such as loophole-free tests of local realism [63], QKD over more than 500 km of optical fiber [64; 65], and boson sampling with 20 input photons [60]. Another successful milestone achieved outside the scope of QI is deep-space communication [26]. In 2013, NASA achieved laser communications between a lunar-orbiting satellite and ground stations on Earth with downlink data rates of up to 622 Mb/s utilizing SNSPDs at a wavelength of 1550 nm. An increasing number of exciting achievements in QI will be attained in the foreseeable future. Detection mechanism The microscopic detection mechanism of the SNSPD is complicated and not well understood. Various relevant studies have been conducted using different theoretical models and from different aspects. No unified model can explain all the experimental results, and some experimental results are not consistent. Regardless, we briefly explain the detection process using Cooper-pair breaking and an electrothermal feedback model [66] without considering the detailed microscopic mechanisms [67; 68].
Let us begin with a typical SNSPD, which is usually a nanowire/nanostrip with a width of approximately 100 nm made of an ultrathin (5-10 nm thick) superconducting film (such as NbN or WSi). An SNSPD is usually cooled to a temperature of less than 0.5Tc and current-biased at a value close to but smaller than its switching current Isw, such as 0.9Isw (Figure 3[a]-i). The switching current Isw is defined as the maximum current at which the nanowire can sustain superconductivity. When a photon is absorbed by the nanowire, it may break hundreds of Cooper-pairs because a single photon's energy (~1 eV) is usually two to three orders of magnitude higher than the binding energy of a Cooper-pair (say, 3.2 meV for NbN). The depaired quasi-particles form a hotspot in the nanowire, which repels the supercurrent into the superconducting path around the hotspot (Figure 3[a]-ii). The current around the hotspot may then exceed the switching current, enabling the hotspot to expand and form a resistive slot across the nanowire (Figure 3[a]-iii, iv). The resistive slot (usually several hundred ohms) will grow along the direction of the nanowire (Figure 3[a]-v) due to the Joule heating effect. Simultaneously, the current in the nanowire will be forced to flow into the readout circuit; thus, a measurable signal can be obtained. With the reduced bias current in the nanowire and thermal relaxation via the substrate, the resistive slot will cool and finally disappear (Figure 3[a]-vi); the SNSPD is then ready for the next incoming photon. Figure 3(b) gives a simple equivalent circuit in which the SNSPD is modeled as a switch in parallel with a resistor (Rn(t)) and in series with an inductor. The switch is used to simulate the detection event triggered by a photon. Rn(t) simulates the dynamic resistive slot with a time-dependent resistance. Generally, the total inductance of an SNSPD comprises the magnetic (geometric) inductance and the kinetic inductance (Lk). The kinetic inductance is described by the imaginary part of the complex conductivity in the superconducting state and is considerably larger than the geometric inductance. Therefore, Lk is often adopted to represent the total inductance of SNSPDs [69]. This circuit gives the voltage pulse shown in Figure 3(c), which usually has a sharp rising edge with a time constant τ1 = Lk/(Z0 + Rn) and a slow falling edge with a time constant τ2 = Lk/Z0, where Z0 = 50 Ω is the impedance of the readout circuit. τ1 is usually approximately 1 ns, and τ2 is in the range of a few nanoseconds to a few hundred nanoseconds, which is related to the active area size of the SNSPD. (Figure 3(c): the output signal of an SNSPD upon a detection event [31]. The inset added to the original reference is a schematic of a meandered SNSPD. Reproduced with permission [31]. Copyright 2012, IOP Publishing Ltd.) Although the above schematics are simple, they can generally explain how the SNSPD works and what happens after the detection of a photon, which facilitates understanding from the application perspective. However, many parameters must be designed and tuned carefully to make a functional SNSPD, with respect to different aspects such as materials, geometry, circuits, and operating parameters.
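As a numerical illustration of the pulse-shape estimate above, the sketch below evaluates the rise and fall time constants τ1 = Lk/(Z0 + Rn) and τ2 = Lk/Z0 and builds an idealized two-exponential pulse; the kinetic inductance and hotspot resistance values are assumptions chosen only to give time constants in the typical ranges quoted in the text.

```python
import numpy as np

L_K = 500e-9      # kinetic inductance, assumed 500 nH
Z_0 = 50.0        # readout impedance, ohm
R_N = 500.0       # hotspot resistance during the detection event, assumed 500 ohm

tau1 = L_K / (Z_0 + R_N)     # rising-edge time constant  (~0.9 ns here)
tau2 = L_K / Z_0             # falling-edge time constant  (~10 ns here)

t = np.linspace(0, 100e-9, 2000)
# Idealized normalized pulse: fast rise limited by tau1, slow recovery set by tau2
pulse = (1 - np.exp(-t / tau1)) * np.exp(-t / tau2)
print(f"tau1 = {tau1*1e9:.2f} ns, tau2 = {tau2*1e9:.1f} ns, peak ≈ {pulse.max():.2f}")
```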
Figure 4(a) shows the scanning electron microscopy (SEM) image of an SNSPD with an optical cavity on top. The NbN film was structured into meandered nanowires using electron-beam lithography and reactive ion etching. The upper-left image shows the edge of the active area. The meandered nanowire has a width of 100 nm and a filling ratio of 0.5. It usually has a round corner that turns back and forth over the active area. The round corner structure avoids the current crowding effect, which may reduce the switching current of the SNSPD [70; 71]. The upper-right image is an optical image of a packaged SNSPD, with a fiber aligned vertically from the top [72]. Figure 4(b) shows the raw amplified signal of single-photon detection recorded by an oscilloscope, which is consistent with the simulation result shown in Figure 3(c). Metrics of SNSPD • Detection efficiency The system detection efficiency (SDE) is a term that is easily accepted by users. SDE is defined as a parameter that indicates how effectively the SNSPD system can detect photons, including all the losses inside the system. SDE can usually be expressed as SDE = ηcoupling × ηabsorption × ηintrinsic, where ηcoupling is the efficiency with which the incident photons are coupled to the active area of a detector, ηabsorption is the absorption efficiency of the photons coupled to a detector, and ηintrinsic is the triggering efficiency with which the absorbed photons produce a detectable electric signal. The ideal SNSPD system should have an SDE of unity, which is difficult to achieve in practice. If all three η values are 0.97, the final SDE is 0.91. In addition, even an excellent commercial fiber connector with a loss of 0.1 dB inside the system results in an efficiency loss of about 0.02. Three different optical structures have been developed for SNSPDs. The most popular one is the vertical-coupling method, which also provides the highest SDE as a standalone detector (Figure 4). The second is the waveguide-coupled method, with superconducting nanowires fabricated on top of the waveguide. Waveguide-coupled SNSPDs have shown modest SDEs due to the coupling loss of the waveguide; however, ηabsorption and ηintrinsic were claimed to be close to 1, which may play an important role in integrated quantum photonics [32; 73]. The final one is the microfiber coupling method, which utilizes the evanescent field of a microfiber when the superconducting nanowires are in close contact with the microfiber. Microfiber-coupled SNSPDs obtained an SDE of more than 50% over the broadband spectrum from visible to NIR, possibly finding interesting applications in sensing and spectrometry [74; 75]. This review will focus on the vertically coupled SNSPDs. To achieve a high coupling efficiency, we should make the active area large enough to enable the effective coupling of the incoming photons. Various optical methods, such as lenses, are helpful for enhancing the coupling. Several packaging methods have been developed for effective coupling, including the use of cryogenic nanopositioners and a novel self-alignment technique [30], with which the coupling loss can become as low as 1% [76]. High absorption is more challenging than high coupling for SNSPDs. Previous SNSPDs made of ultrathin films with a simple meandered nanowire structure exhibit a low absorption of 0.3 (OS1 in Figure 5(a)). Rosfjord et al.
initially introduced a cavity structure to effectively enhance the absorptance (OS2 in Figure 5(a)) [77]. To further improve the absorptance, more sophisticated cavities were designed, such as a double cavity structure with backside illumination (OS3 in Figure 5(a)) [78] and a dielectric mirror (distributed Bragg reflector) with frontside illumination (OS4 in Figure 5(a)) [38]. In principle, the optimized OS3 and OS4 may achieve an absorption of more than 0.99 according to simulation. In practice, however, imperfect materials and deviations in the geometric parameters of the optical layers and the superconducting nanowires may reduce the absorption below the simulated values. A multilayer design may help to reduce the influence of the aforementioned imperfections [79]. The interface reflection at the surfaces of the fiber tip and detector chip also slightly reduces the absorption. ηintrinsic is directly and closely dependent on the superconducting quality, geometric design, and fabrication precision of the superconducting nanowires. It is also considerably influenced by the operation parameters (i.e., temperature and bias current). Moreover, the geometric and physical uniformity influences ηintrinsic. Any nanowire defect may result in a reduced maximum bias current, preventing ηintrinsic from reaching unity. Based on the saturated plateau in the bias-current dependence of the SDE, ηintrinsic can reach unity in SNSPDs made of various materials, including WSi [39], MoSi [40], and NbN [38]. The aforementioned analysis indicates that SNSPDs with an SDE close to unity can be obtained. Table 1 shows SDE values of more than 90% at a wavelength of 1550 nm for WSi, NbN, and MoSi SNSPDs. At the Rochester Conference on Coherence and Quantum Optics in 2019, Reddy et al. reported a MoSi SNSPD with an SDE of 98%, which is the highest value observed for SNSPDs [41]. Similar results were obtained at other wavelengths [81]. We believe that a maximum SDE close to unity will be possible for SNSPDs of WSi and NbN upon further optical optimization, and also for the other wavelengths from visible to NIR. However, this raises an interesting metrology issue related to the accuracy and uncertainty of the measured SDE, because no optical power meter at the quantum level is better than SNSPDs [82]. • DCR The dark count rate (DCR or Rdc) is defined as the recorded false counts in unit time with respect to the detection events and represents the noise level of an SNSPD. In the measurement, DCR is the number of counts in unit time recorded with no illumination. DCR consists of background DCR (bDCR) and intrinsic DCR (iDCR).
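To make the SDE decomposition discussed above concrete, here is a minimal Python sketch that multiplies the three component efficiencies and converts a connector loss quoted in dB into an efficiency factor; the specific numbers (0.97 per component, 0.1 dB connector loss) simply restate the illustrative values from the text rather than measured data.

```python
def db_loss_to_efficiency(loss_db: float) -> float:
    """Convert an insertion loss in dB into a transmission efficiency."""
    return 10 ** (-loss_db / 10)

def system_detection_efficiency(eta_coupling, eta_absorption, eta_intrinsic,
                                connector_loss_db=0.0):
    """SDE = eta_coupling * eta_absorption * eta_intrinsic, times any extra loss."""
    return (eta_coupling * eta_absorption * eta_intrinsic
            * db_loss_to_efficiency(connector_loss_db))

# Illustrative values from the text: three 0.97 efficiencies and a 0.1 dB connector
sde = system_detection_efficiency(0.97, 0.97, 0.97, connector_loss_db=0.1)
print(f"SDE ~ {sde:.3f}")  # ~0.89: 0.91 from the three efficiencies, minus ~2% connector loss
```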
As a detector with quantum-limit sensitivity, bDCR is difficult to avoid because photons are present everywhere owing to the presence of thermal electromagnetic radiation in a working condition.According to Planck's law of black-body radiation, matter with a certain temperature will radiate photons because of the thermal motion of the particles in matter.In case of fibercoupled SNSPDs, the thermal radiation of the fiber at room temperature will produce photons that may transmit through the fiber and that may be detected as background dark count.In addition, any stray light penetrating into the fiber may contribute to bDCR.In case of spacecoupled SNSPD, bDCR will be considerably large owing to severe environmental radiation.Thermal radiation has a broadband feature and may be filtered partially by filters.However, all the filters need to be operated at low temperatures (40 K or lower) and with an acceptable low loss.Several techniques, such as cooled SMF [83], standalone fiber optic filters or filter bench [84; 85], and dielectric film filters either on the chip [86] or on the tip of a fiber [87], can effectively reduce bDCR with an acceptable sacrifice in terms of SDE.For an SMF-coupled SNSPD, 80% SDE was achieved at a DCR of 0.5 Hz; this is shown in Figure 6 with a bandpass filter (BPF) being used on the fiber end-face [87].When the bias current is high (for example, Ib/Isw > 0.9 in Figure 6[a]), iDCR will be dominant with respect to the DCR.The origin of iDCR is related to the spontaneous vortex motion in the nanowire and is usually exponential to the bias current (black triangles in Figure 6[a]).A few theoretical models (fluctuations of the order parameter, thermally activated and quantum phase slips, and vortex excitations) and experimental studies have investigated the origin of intrinsic dark counts [67].Although no unified conclusion on the origin has been achieved yet, an SNSPD with a low DCR can be used as long as it is biased not close to Isw. iDCR can be neglected when the bias current of an SNSPD is lower than 0.9Isw, as shown in Figure 6[a]. 
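The statement above that the intrinsic dark counts rise roughly exponentially with bias current, while the background contribution is essentially bias-independent, can be captured by a toy model; the following Python sketch (with assumed, purely illustrative parameters) shows how the total DCR is background-dominated at moderate bias and intrinsic-dominated close to Isw.

```python
import math

def dark_count_rate(ib_over_isw, bdcr=0.5, a0=1e-10, k=25.0):
    """Toy DCR model: constant background plus an exponential intrinsic term.

    bdcr  : background dark counts (Hz), set by stray light / thermal radiation
    a0, k : assumed amplitude and steepness of the intrinsic (vortex-related) term
    """
    idcr = a0 * math.exp(k * ib_over_isw)
    return bdcr + idcr

for bias in (0.80, 0.85, 0.90, 0.95, 0.99):
    print(f"Ib/Isw = {bias:.2f}: DCR ~ {dark_count_rate(bias):.2f} Hz")
```

With these assumed parameters the intrinsic term is negligible below 0.9 Isw and dominates above it, mirroring the behavior shown in Figure 6(a).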
• Timing jitter TJ is a key parameter in which SNSPDs surpass SPADs and other SPDs. TJ represents the deviation of the timing of the single-photon response voltage pulse from the true photon arrival time. Although it is not yet a critical parameter for current QI applications, its significance is expected to be revealed soon in high-speed QKD and other applications. To understand the origin of TJ, the contribution of each part of the TJ measurement system should be understood. The user generally focuses more on the system TJ than on the detector TJ. The system TJ jsys can be presented as jsys^2 = jInt^2 + jSNR^2 + jlaser^2 + jSYNC^2 + jSPC^2, where jInt, jSNR, jlaser, jSYNC, and jSPC are the jitters from the SNSPD, the signal-to-noise ratio (SNR) of the output signal, the laser, the synchronization signal of the laser, and the single-photon counting (SPC) module, respectively. jInt is mainly contributed by the SNSPD geometry. When photons are absorbed at different locations along the long nanowire, the triggered electric signal propagates toward the ends of the nanowire with different arrival times. This effect was discovered and verified by a differential cryogenic readout [88]. A long nanowire corresponds to a large geometrically contributed jitter. The random time delay between photon absorption and the appearance of the resistive area contributes to the intrinsic jitter [89]. Various types of inhomogeneity, such as in gap energy, line width, and thickness, may also produce jitter [90]. jSNR is vital to achieving a practical SNSPD because the amplitude of the original signal at low temperature is ~1 mV. The original signal should be amplified to achieve a high SNR. The slew rate of the rising edge of the signal also plays a role, because jitter is measured by calculating the histogram of the timing at which the rising edge crosses a certain voltage. Thus, the device parameters (bias current, kinetic inductance, etc.) and amplifier specifications (gain, noise, and bandwidth) influence jitter [91]. A cryogenic amplifier [81] and an impedance-matching taper [92] can also improve the SNR, reducing jSNR. jlaser, jSYNC, and jSPC are instrumental specifications. The first two are as small as sub-ps and are usually neglected. jSPC may vary from a few ps to ~20 ps. The best commercial SPC module may have a full width at half maximum (FWHM) jSPC of <3 ps [93]. For research purposes, an oscilloscope with an embedded jitter analysis function (sub-ps jitter) may be adopted to replace the SPC module to reduce the contribution of jSPC. However, the data collection time will be much longer than when a professional SPC module is used [94]. The latest results show that the system TJ jsys can be as low as 3 ps (FWHM). However, this was obtained from a 5-µm-long nanowire, which is not a practical detector because of its low absorption [95]. For a practical NbTiN SNSPD having an active area diameter of 14 µm, sub-15 ps jsys was reported with an SDE of ~75% at a wavelength of 1550 nm [81].
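The quadrature sum above (which assumes the jitter contributions are independent and approximately Gaussian) is easy to evaluate; the following short Python sketch combines a few illustrative, assumed FWHM contributions to show how the largest terms dominate jsys.

```python
import math

def system_jitter(*components_ps):
    """Combine independent (approximately Gaussian) jitter contributions in quadrature."""
    return math.sqrt(sum(j ** 2 for j in components_ps))

# Illustrative FWHM contributions in ps (assumed values, not from a specific setup):
# detector geometry/intrinsic, SNR-limited, laser, sync, and SPC module
j_sys = system_jitter(10.0, 8.0, 0.5, 0.5, 3.0)
print(f"jsys ~ {j_sys:.1f} ps FWHM")  # dominated by the detector and SNR terms
```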
• Counting rate The counting rate (CR) describes how fast the detector can react to the incoming photons and is the reciprocal of the dead time or the pulse width. The maximum counting rate (MCR) is physically limited to the order of 10 GHz by the thermal relaxation time between the nanowire and the substrate after the absorption of a photon, which is usually a few tens of ps [96; 97]. Practically, SNSPDs have a large kinetic inductance, which slows the recovery of the bias current to its initial value after hotspot generation. As estimated from Figure 3(c), for a NbN SNSPD with a 7-nm-thick film, an active area of 15 µm × 15 µm, a fill ratio of 37.5%, and Lk = 1.2 µH, the CR is estimated to be ~40 MHz. However, the current recovery of the SNSPD is a gradual process, and the SDE also recovers gradually. Thus, the SDE does not completely recover when the next photon arrives at a repetition rate of CR. Hence, the number of counts is usually smaller than CR × SDE. Nonetheless, an SNSPD may count incoming photons at a repetition rate higher than CR, with a lower efficiency. The useful information for the user is the SDE dependence on the input photon intensity or the real count rate (Figure 7). To increase CR, Lk can be reduced by dividing a single nanowire into multiple nanowires. A few different structures, such as arrays [98], interleaved nanowires [99], and parallel nanowires [100; 101], exist. Figure 7 shows a detector with 16 interleaved nanowires that attained a CR of 1.5 GHz with an SDE of ∼12% at a photon flux of 1.26 × 10^10 photons/s. Optimization of the readout circuits may further improve the MCR, which is limited by the latching effect [102]. • Other parameters Apart from the four aforementioned general parameters, some other parameters are valuable for certain applications. Wavelength: Although an SNSPD is a natural broadband detector due to its low gap energy [103], an optical cavity is usually integrated to improve the absorption at a certain wavelength, limiting its broadband property. For specific wavelengths from UV (315 and 370 nm) to NIR (up to 2 µm), SNSPDs with high SDE have been reported [80; 104-106]. Broadband SNSPDs with high SDE can be obtained by introducing a broadband cavity or some specific optical structure. Recently, interest in the development of broadband and multispectral SNSPDs has been growing [107-109]. Photon number resolvability (PNR): An ideal SPD should have PNR capability. However, most SPDs, with the exception of TES, do not have this ability. As a triggering detector, a traditional SNSPD cannot distinguish the photon number. However, with a special readout circuit, the SNSPD shows the potential for PNR [110]. Several SNSPDs with quasi-PNR ability were reported on the basis of different space multiplexing methods [99; 111; 112], and a PNR of up to 24 has been observed [113]. If multiple photons are absorbed by a single pixel, the photon number cannot be resolved. Polarization sensitivity: The classical SNSPD is naturally sensitive to polarization because of its anisotropic meandered nanowire structure. Transverse electric mode photons have a higher absorption than transverse magnetic mode photons. To reduce the polarization sensitivity, various structures, such as spiral, three-dimensional, and fractal structures [79; 114-117], were developed, and a polarization extinction ratio (PER) of less than 1.1 was obtained. Some studies adopted the opposite approach, amplifying the polarization sensitivity to obtain an SNSPD with a high PER of more than 400 [118-120].
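As a sanity check on the ~40 MHz counting-rate estimate above, the following Python sketch estimates the recovery time from the kinetic inductance and readout impedance (τ2 = Lk/Z0) and takes the counting rate as roughly the reciprocal of that recovery time; treating a single time constant as full recovery is a simplification, so the result is only an order-of-magnitude figure.

```python
def estimated_counting_rate(L_k, Z0=50.0):
    """Rough counting-rate estimate: CR ~ 1 / tau_fall, with tau_fall = Lk / Z0."""
    tau_fall = L_k / Z0          # current-recovery time constant (s)
    return 1.0 / tau_fall        # counts per second, order-of-magnitude only

# Example from the text: Lk = 1.2 uH gives tau_fall = 24 ns and CR ~ 40 MHz
cr = estimated_counting_rate(1.2e-6)
print(f"CR ~ {cr / 1e6:.0f} MHz")
```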
The selection of suitable candidate materials should ensure that the intrinsic detection efficiency becomes unity.Therefore, the key parameter is Tc (gap energy or Cooper-pair binding energy).Apart from the material itself, other geometric parameters also influence Tc, including the thickness of the film and width of the nanowire.A long nanowire structure indicates that SNSPD operates as multiple SNSPDs in series with the same bias current.The performance is bias-current sensitive; hence, nanowire homogeneity is vital with respect to the overall detector performance.For materials with a low Tc, a single photon can break more Cooper-pairs; thus, the SNSPD is more sensitive.For the SNSPD made of WSi (Tc = 5 K), obtaining an intrinsic detection efficiency of 1 is easy; this value is registered as a clear saturated plateau in the SDE-Ib curve (see Figure 1[a] in [39]).Such a saturated plateau is difficult to obtain in case of the SNSPD comprising NbN (Tc = 16 K) (see Figure 7 in [37]).Strict fabrication requirements (line width and uniformity) may be released partially for SNSPD made using low-Tc materials. However, the SNSPDs made of low-Tc materials have some disadvantages.Usually, low-Tc materials have a low critical current density (maximum direct current flowing through the superconductors without resistance divided by the cross-section area) and a large kinetic inductance, which cause the SNSPD to have a small output signal amplitude (low SNR→large TJ) and low CR.Another practical issue is the burden on cryogenics.Low Tc indicates high cryocooling costs.Most of the Nb(Ti)N SNSPDs work at temperatures of 2-3 K.However, the SNSPDs made of WSi and MoSi often operate at temperatures lower than 1 K, requiring more complicated and luxurious cryogenics. Vodolazov recently discussed the detection mechanism of an SNSPD based on the kinetic equation approach [68].The specific heat capacities of electrons and phonons are the parameters of the candidate materials that can influence the SNSPD detection dynamics.Several micron-wide dirty superconducting bridges can detect a single near-infrared or optical photon.The experimental work on single photon detection with 0.5-5-µm-wide NbN bridge has also been reported [132]. The final question often raised by users about the materials is the possibility of using HTS such as YBa2Cu3O7−x.Unfortunately, HTS cannot be used.First, HTS has a large gap energy, resulting in low sensitivity for single-photon detection.Second, as a multi-component compound, achieving good nanoscale homogeneity for a 10-micron-scale or a larger active area is difficult.Third, stability and controllability are challenging to achieve in case of ultrathin HTS films.Registering a single-photon response from HTS nanowires is possible.However, developing a practical SNSPD with an acceptable performance is not realistic using the current material technology. 
Cryogenics Superconductors have always been associated with cryogenics and will remain so until room-temperature superconductors are invented. To operate SNSPDs and other superconducting sensors and detectors, the working temperature should usually be lower than 0.5 Tc. For Nb(Ti)N SNSPDs, two-stage commercial Gifford-McMahon (G-M) cryocoolers with a cooling power of about 0.1 W at 4.2 K are widely adopted; they can run continuously at a temperature as low as 2 K with a power consumption of ~1.5 kW [133]. Liquid 4He operating at 4.2 K is an alternative, although it is a rare resource with a non-negligible running cost. Reducing the vapor pressure of 4He can provide a working temperature lower than 2 K via the superfluid effect. NbTiN SNSPDs may operate at relatively high temperatures (4-7 K), which is interesting for reducing the cryogenic requirements [104]. Lower temperatures are preferable for running WSi and/or MoSi SNSPDs. Sub-1 K coolers, such as the three-stage cryocooler [134], 3He refrigerator, adiabatic demagnetization refrigerator, or dilution refrigerator, which are costly and non-portable, are often used. For scaled applications, the size, weight, and power (SWaP) of the cooler cannot be neglected. The SWaP of the two-stage G-M cryocooler is considerably high for a communication base station, let alone the sub-1 K systems. Some studies aimed to reduce the SWaP of a cryocooler for SNSPDs using space-application-compatible cryocooler technology [135-138]. Recently, a prototype cryocooler, with a two-stage high-frequency pulse-tube cryocooler and a 4He Joule-Thomson cooler driven by linear compressors, reached a minimum temperature of 2.6 K with an input power of 320 W and a weight of 55 kg. The SNSPD hosted by this cryocooler had an SDE of 50% and a TJ of 48 ps [135]. Although this performance is encouraging, several steps remain to ensure the practicality and cost efficiency of this cooler. Niche market Because the SNSPD is a cutting-edge technology, the commercial market is one of the driving forces in improving the performance and making the system practical, user-friendly, and affordable. Six startup companies, namely, ID Quantique (Switzerland) [139], PHOTEC (China) [140], Photon Spot (USA) [141], Quantum Opus (USA) [142], SCONTEL (Russia) [143], and Single Quantum (Netherlands) [144], are working on the commercialization of the SNSPD technology; the first among these companies is SCONTEL, which was founded by Gol'tsman in 2004. The market for this technology is small (~20 M USD in 2019) compared with the market for most semiconducting products. Figure 8 shows the global system installation information. However, the market is expected to continue to grow along with the global investment in and commercialization of QI technologies. Applications in QI Since the invention of the SNSPD in 2001, it has attracted considerable attention from various research fields. The first application of the SNSPD was to diagnose VLSI CMOS circuits in 2003 [145]. The first application in QI started with the characterization of a single-photon source, which was presented by Hadfield et al.
[146].Subsequently, several applications have emerged in different fields such as QI, biological fluorescence [147; 148], deep-space communication [26], and light detection and ranging, including satellite laser ranging [27; 28; 91; 149-153].The applications in QI are considerably impressive and systematic because QI requires high-performance SPDs.We will introduce the applications in QI in three categories: quantum communication, quantum computation, and others. Quantum communication The InGaAs/InP SPADs dominated in the previously conducted QKD experiments.However, their performance cannot match with the development pace of the QKD because of low DE and high DCR.Until now, the best result obtained using SPADs was reported in 2015, where a QKD containing more than 307 km of optical fiber based on a coherent one-way protocol was demonstrated with the best InGaAs/InP SPADs (DEs of 20%-22%, DCR of ~1 cps) [154].These detectors were actually cooled to 153 K using a Stirling cryocooler. The first QKD experiment using SNSPDs was conducted by Hadfield et al. in 2005 [62].A secure key rate exchange of more than 42.5 km was demonstrated using twin SNSPDs with an SDE of 0.9% and a DCR of 100 Hz.Although the SBER is lower than that when InGaAs SPAD is used, this indicates the considerable potential of using high-performance SNSPD.In 2007, Takesue et al. reported a QKD record of a 12.1-bps secure key rate over 200 km of fiber using SNSPD, which surpassed the distance record achieved using SPAD [155].Subsequently, more and more QKD experiments have been performed using SNSPDs, effectively improving the QKD distance and key rate.Improvements from single-photon source and other devices and theoretical developments, including new protocols, also contribute to the progress of QKD.Table 2 presents the important experiments of QKD using SNSPD and related parameters (SNSPD parameters, QKD results, and the adopted protocols).Almost all the current transmission distance records of QKD in fiber have been obtained using SNSPDs.Before 2017, majority of the optical quantum computation experiments were performed at wavelengths of approximately 800 or 900 nm, where high-quality single-photon sources and SPDs (SPAD: DE of ~60% for 800 nm and ~30% for 900 nm) are available [173][174][175][176]. Further developments acquired SPDs with high SDE and repetition rate, which cannot be achieved using SPADs.In 2017, He et al. demonstrated four-photon boson sampling using two SNSPDs (SDE of 52% at 910-nm wavelength and CR of 12.9 MHz), which surpassed the previous results over 100 times [177].Then, Wang et al. demonstrated scalable boson sampling with photon loss, where 13 SNSPDs were adopted [178].Furthermore, using 24 SNSPDs with an SDE of 75% at a wavelength of 1550 nm, Zhong et al. presented first 12-photon genuine entanglement with a state fidelity of 0.572  0.024 [179]. 
The same group recently developed solid-state sources containing highly efficient, pure, and indistinguishable single photons and 3D integration of ultralow-loss optical circuits.The team performed experiments by feeding 20 pure single photons into a 60-mode interferometer (60 SNSPDs with an SDE of 60%-82%) (see Figure 2 for the schematics).This result yielded an output with a 3.7 × 10 14 -dimensional Hilbert space, more than 10 orders of magnitude larger than those obtained in previous experiments, which, for the first time, enters into a genuine sampling regime, where it becomes impossible to exhaust all the possible output combinations [60].The results were validated against distinguishable and uniform samplers with a confidence level of 99.9%.This result was equivalent to 48 qubits and approached a milestone for boson sampling [180].We believe that quantum supremacy/advantage with boson sampling will be achieved soon using high-performance SNSPDs. Other QI applications. In the previous 15 years, apart from QKD and optical quantum communication, numerous advanced QI applications that use SNSPD have been demonstrated, such as quantum teleportation [181; 182], quantum storage [183; 184], quantum money, quantum switch, quantum digital signature, quantum fingerprint, quantum data lock, quantum ghost imaging, quantum time transfer, and quantum entropy.For lack of space, not all the achievements can be presented in this review.Therefore, some selected results are introduced here. Bell's inequalities validation: In 1964, John Stewart Bell, discovered inequalities that allow an experimental test of the predictions of local realism against those of standard quantum physics [185], which can answer the question about the completeness of the formalism of quantum mechanics raised by Albert Einstein, Boris Podolsky, and Nathan Rosen (also known as the EPR paradox) [186].In the ensuing decades, experimentalists performed increasingly sophisticated tests with respect to Bell's inequalities.However, these tests have always contained at least one "loophole," allowing a local realist interpretation of the experimental results [187].In 2015, by simultaneously closing two main loopholes, three teams independently confirmed that we must definitely renounce local realism [63; 188; 189]; one team at National Institute of Standards and Technologies (NIST) closed the detection loophole using SNSPDs [63]. 
The NIST experiment was based on the scheme presented in Figure 9 [187].The team used rapidly switchable polarizers (A and B) located more than 100 m from the source to close the locality loophole.WSi SNSPDs with an SDE of 91%  2% were required to close the detection loophole.Pairs of photons were prepared using a nonlinear crystal to convert a pump photon into two "daughter" entangled photons (v1 and v2).Each photon was sent to a detection station with a polarizer, the alignment of which was randomly established.The team achieved an unprecedented high probability that when a photon enters one analyzer, its partner enters the opposite analyzer.Based on this and the high-efficiency SNSPDs, a heralding efficiency of 72.5% can be achieved, which was larger than the critical value of 2/3.The confidence level of the measured violation of Bell's inequality can be evaluated by the probability p that a statistical fluctuation in a local realist model would yield the observed violation.The reported p is 2.3  10 −7 , which corresponded to a violation by 7 standard deviations.The results firmly establish several fundamental QI schemes such as device-independent quantum cryptography and quantum networks.Quantum random number generation: Randomness is important for many information processing applications.Device-independent quantum random number generation based on the loophole-free violation of a Bell inequality is the objective in QI science.Liu et al. used the stateof-the-art quantum optical technology to create, modulate, and detect the entangled photon pairs, achieving an efficiency of more than 78% from creation to detection at a distance of approximately 200 m; two SNSPDs with SDEs of more than 92% were obtained at a wavelength of 1550 nm [190].Then, 6.25 × 10 7 quantum-certified random bits were obtained in 96 h with a total failure probability of less than 10 −5 .This achievement is a crucial step toward a key aspect of practical applications that require high security and genuine randomness. Integrated quantum photonics (IQP): IQP is an important field in quantum information, which may provide an integrated platform for almost all the optical QI applications.A single-photon detector is an indispensable technology for IQP.Pernice et al. first demonstrated the integration of SNSPD on a silicon waveguide, which had an on-chip DE of 91% [73].Subsequently, the SNSPDs on waveguides made of various materials, such as SiN [191; 192], GaAs [193; 194], AlN [195], LiNbO3 [196], and diamond [197], were demonstrated, paving the way for all types of IQP applications that use SNSPDs.We refer the readers to a recent review [32] and references therein for details. Summary and outlook Because of the extensive development of QI, SNSPDs have progressed rapidly in terms of science and technology as well as application.A niche market is available and growing continuously.On the contrary, high-performance SNSPDs have been advancing many fields, including but not limited to QI.However, there is considerable room to further improve the SNSPDs.Some of the related points are listed below. 1. 
Performance improvement: Users always want SNSPDs with better performance. The requirements for all-round SNSPDs are increasing, demanding that two or more parameter requirements be satisfied simultaneously. For example, TF-QKD requires SNSPDs with high SDE and low DCR. Boson sampling needs many SNSPDs with high SDE and CR. Special applications need unique SNSPDs for other wavelengths, such as mid-infrared or longer wavelengths, broadband SNSPDs, and SNSPDs with the PNR ability. 2. Array technology: Fabricating a single-pixel SNSPD is easier than fabricating an SNSPD array. An array requires films with high homogeneity and a nanofabrication process with high uniformity. The readout of an SNSPD array is another key technology for array applications because the detectors operate at considerably low temperatures. Some inspiring results and studies have recently been reported [98; 198]. For example, the first kilopixel SNSPDs with a row-column multiplexing architecture were demonstrated in 2019 [98]. However, their performance, such as efficiency and uniformity, should be further improved. 3. Cryogenics: The SWaP and the price of the system decide whether this technology can be massively applied. The SWaP and the price of the detectors can be considerably reduced by improving their yield and performance. To a certain degree, cryogenics will determine the future of SNSPDs. We need to design and develop more compact, portable, and affordable cryocoolers customized for SNSPDs. Finally, the extent of this development will depend on the commercialization and industrialization of QI. Figure 3. (a) Schematic of the detection cycle; (b) a simplified circuit model of an SNSPD; and (c) the output signal of an SNSPD upon a detection event [31]. The inset added to the original reference is a schematic of a meandered SNSPD. Reproduced with permission [31]. Copyright 2012, IOP Publishing Ltd. Figure 4. (a) SEM image of a meandered SNSPD with an optical cavity on the top. The upper-left inset shows that the nanowire has a line width of 100 nm, and the right inset shows an optical image of a packaged SNSPD with fiber alignment; (b) oscilloscope single shot of a single-photon response signal. Figure 5. (a) OS1-OS4: four kinds of optical structures used for vertically coupled SNSPDs. (b) The calculated optical absorptance for different optical structures at 532, 1064, and 1550 nm. At 532 and 1064 nm, an MgO substrate was adopted in the simulation for OS3 and OS4. At 1550 nm, a Si substrate was used in the simulation of OS3 and OS4. Reproduced with permission [80]. Copyright 2017, IOP Publishing Ltd.
Figure 6. (a) Normalized bias-current dependence of the SDE and DCR for an SNSPD coupled using a fiber with and without a BPF on its end-face. The intrinsic DCR of the device is also plotted. (b) SDE as a function of DCR without (blue dots) and with (red squares) a BPF. (c) Top: figure of merit (FOM), FOM = SDE/(DCR × jsys). Bottom: noise equivalent power (NEP) as a function of bias current for devices with/without the end-face BPF, NEP = hν·√(2·DCR)/SDE. Reproduced with permission [87]. Copyright 2018, IOP Publishing Ltd. Figure 7. (a) SEM image of a 16-nanowire interleaved SNSPD. (b) SDE and DCR as functions of the bias current for a single pixel in the 16-pixel SNSPD. (c) Total SDE and DCR as functions of the bias current of the 16-pixel SNSPD. (d) The measured individual and combined nanowire CRs vs. the input photon rate. The red dashed line is a guide to the eye. (e) The individual and total SDEs as a function of the measured CR. The colored curves represent the results for each nanowire, whereas the dotted curve represents the total SDE obtained by summing over the 16 nanowires. Reproduced with permission [99]. Copyright 2019, IEEE Publishing Ltd. Figure 8. Estimated world market for sold SNSPD systems (incomplete data from private partners). Figure 9. The apparatus for performing the Bell test. A source emits a pair of entangled photons v1 and v2. Their polarizations are analyzed by polarizers A and B (gray blocks) aligned along the directions a and b. Each polarizer has two output channels labeled as +1 and −1. The final correlation of the photons can be determined by the coincidence detectors (SNSPDs). (APS/Alan Stonebraker)
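Since the figure-of-merit and NEP expressions quoted in the Figure 6 caption are simple combinations of the metrics discussed in this review, a short Python sketch can evaluate them; the detector numbers used here (SDE 0.8, DCR 0.5 Hz, 50 ps jitter) are illustrative assumptions, and the NEP is written in the conventional hν·√(2·DCR)/SDE form.

```python
from scipy.constants import h, c  # Planck constant, speed of light

def figure_of_merit(sde, dcr_hz, jitter_s):
    """FOM = SDE / (DCR * jitter); larger is better."""
    return sde / (dcr_hz * jitter_s)

def noise_equivalent_power(sde, dcr_hz, wavelength_m):
    """Conventional NEP for a photon-counting detector: h*nu*sqrt(2*DCR)/SDE."""
    photon_energy = h * c / wavelength_m
    return photon_energy * (2 * dcr_hz) ** 0.5 / sde

# Illustrative detector: SDE = 0.8, DCR = 0.5 Hz, system jitter = 50 ps, 1550 nm photons
print(f"FOM ~ {figure_of_merit(0.8, 0.5, 50e-12):.2e}")
print(f"NEP ~ {noise_equivalent_power(0.8, 0.5, 1550e-9):.2e} W/Hz^0.5")
```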
12,404
2020-05-31T00:00:00.000
[ "Physics" ]
Boundary Treatment for the Subsonic/Alfvénic Inner Boundary at 2.5 R ⊙ in a Time-dependent 3D Magnetohydrodynamics Solar Wind Simulation Model A new magnetohydrodynamics (MHD) simulation model of the global solar corona and solar wind is presented. The model covers the range of heliocentric distance from 2.5 solar radii, so that coronal mass ejections at the earliest phase near the Sun can be treated in the future. This model is constructed by introducing a characteristics-based boundary treatment to an existing heliosphere 3D MHD model. In tailoring a set of characteristic equations for this new model, we assume that the coronal magnetic field is open to interplanetary space and that the solar coronal plasma is flowing outward everywhere at 2.5 solar radii. The characteristic equations for the subsonic/Alfvénic inner boundary surface are satisfied by altering the plasma density and/or temperature to maintain a polytropic relationship. In this article, the details of the characteristics-based boundary treatment for the middle of the corona (named CharM) are provided. The quasi-steady states of the solar wind derived from simulations with various choices of a parameter in the boundary treatments are compared and examined. Although further improvements are needed, we apply the new boundary treatment to simulations for three Carrington rotation periods from the minimum to maximum phase of the solar activity cycle, and show that an optimal choice yields a reasonable quasi-steady state of the transonic/Alfvénic solar wind matching the specified subsonic/Alfvénic plasma speed at 2.5 R ⊙. Introduction 3D time-dependent magnetohydrodynamics (MHD) simulations are a powerful tool in the field of solar physics and space weather research. The simulation approach can trace numerically the propagation of coronal mass ejections (CMEs) from the innermost heliosphere to interplanetary space to the position of the Earth (e.g., Han et al. 1988;Smith & Dryer 1990;Dryer et al. 1991;Odstrcil et al. 2004;Hayashi et al. 2006;Shen et al. 2011Shen et al. , 2014Wold et al. 2018;An et al. 2019;Singh et al. 2022;Wu et al. 2022). Before simulating interplanetary disturbances such as CME propagation, the ambient quiet solar wind must be prepared. To obtain the quasisteady state of the solar wind as a proxy of the ambient solar wind, time-relaxation MHD simulation approaches have been widely used (e.g., Steinolfson et al. 1982;Han et al. 1988;Linker et al. 1990;Usmanov 1993;Wang et al. 1993;Washimi & Sakurai 1993;Usmanov & Dryer 1995;Hayashi 2005;Feng et al. 2010Feng et al. , 2021Hayashi et al. 2022aHayashi et al. , 2022b. The solar wind originates from the solar surface base of the coronal hole as subsonic/Alfvénic flow. The plasma flows are heated and accelerated, and the flow becomes supersonic/Alfvénic at some distance from the solar surface. A time-relaxation approach with a time-dependent MHD simulation model can solve the nonlinear interactions between the coronal magnetic field and plasma and numerically reproduce the transonic/Alfvénic solar wind in a physics-based manner. The same MHD model for the time-relaxation simulation can be used to simulate the temporal evolutions of CME events as responses to numerical perturbation mimicking the initial phase of the CME. MHD simulation studies for the solar corona, solar wind, and disturbance propagations, in general, have used one of two choices in setting the inner boundary surface. 
The first choice is to place the inner boundary surface at 1 R e or the base of the corona on which the MHD variables, such as the radial components of the magnetic field (B r ), can be straightforwardly specified as realistic input boundary values from observations (e.g., Usmanov 1993;Usmanov & Dryer 1995). The other is to place the inner boundary surface in a supersonic/Alfvénic region. By setting the inner boundary sphere well beyond the critical points (i.e., at r = 15 ∼ 20R e ), simple boundary conditions such as the fixed boundary condition are allowed. In this case, it is necessary to introduce assumptions to infer the MHD boundary values on the inner boundary surface from observations and/or models (e.g., . It is often desired to simulate the solar corona and solar wind in a range of the heliocentric distance from the middle of the corona. For example, one can use information about CME events at an initial phase from coronagraph observations and/ or magnetic flux-rope models (e.g., Chen 1996;Thernisien et al. 2006;Wood & Howard 2009) to straightforwardly simulate the evolution of the CME event. Because the information is often about CMEs after their launches from the lower corona, the heliocentric distance of the simulated region is not necessarily set from 1R e in this example. In addition, we want to exclude the lowermost part of the solar corona, as the Courant-Friedrich-Levy condition is severe for regions with large Alfvén speeds and steep radial gradients of plasma quantities. By excluding the lowermost corona, we can allocate more computational resources to solving the temporal evolutions in the solar wind in distant regions. Therefore, an MHD model covering the region from the upper part of the solar corona to interplanetary space can bring benefits to space weather studies. The plasma flow in the solar corona is overall subsonic/ Alfvénic. In the subsonic/Alfvénic regions, the temporal variations of MHD variables are determined by the states of the neighboring regions in the lateral (latitudinal and longitudinal) directions and the radial direction. Because some of the necessary information is missing, MHD simulations with the subsonic/Alfvénic boundary surface often suffer computational difficulties, such as unphysical numerical vibrations and instabilities near the boundary surface that grow exponentially in time and eventually cause computational failures. Characteristics-based boundary treatments (e.g., Nakagawa et al. 1987;Wu & Wang 1987) are a powerful mathematics-based tool to compensate for the lack of information about the subsonic/ Alfvénic boundary surface. In this boundary treatment, the relationship among the temporal variations of the MHD variables is treated in a manner such that the characteristic equations of the MHD equations and the specified boundary constraints will be satisfied simultaneously (e.g., Hayashi 2005). Recently, we constructed a new subsonic/Alfvénic boundary treatment for MHD simulations with the inner boundary sphere set at the height of the middle of the corona (henceforth named CharM). We implement the CharM boundary treatment in an existing MHD simulation model, the heliosphere 3D MHD (H3DMHD) model (Han et al. 1988;Wu et al. 2007Wu et al. , 2015. In this new model, the heliocentric distance of the inner boundary sphere is chosen to be 2.5 solar radii (R e ). 
It is reasonable to assume that the radial component of the plasma flow is always positive (V r > 0) on the inner boundary surface at 2.5R e , because the solar coronal plasma appears to be flowing outward nearly radially at this distance. Under this assumption, the CharM boundary treatment can be simple, because it does not have to consider the case of stagnant plasma (V r ∼ 0). The heliocentric distance of the inner boundary sphere coincides with the radius of the outer boundary sphere or the source surface of the potential field source surface (PFSS) model (Schatten et al. 1969;Altschuler & Newkirk 1969). In this article, the two variables, V r and B r , on the inner boundary surface will be fixed. The characteristic method plays another role than providing computational stability: it seeks an optimal set of unspecified plasma temperatures and/or densities such that will be compatible with the specified plasma speed (V r ) and the magnetic field B r . Because it is difficult to determine the plasma temperature and density in the middle of the corona, this feature can be a useful numerical tool. Test simulations are conducted for demonstrating these capabilities and the limitations of this model. This article is organized as follows: details of the timedependent 3D MHD model and the characteristics-based boundary treatment (CharM) are given in Sections 2 and the Appendix. The simulation results are shown in Section 3. A summary and discussion are given in Section 4. MHD Model The H3DMHD (Wu et al. 2007) model is a 3D timedependent MHD simulation model for the global solar wind. The model solves time-dependent 3D MHD equations in the Sun-centered spherical coordinate system: where ñ, V, B, P g ,  , r, t, and g are the mass density, the plasma flow velocity, the magnetic field vector, the gas pressure, the energy density (=ñv 2 /2 + P g /(γ − 1) + B 2 /8π), the position vector originating at the center of the Sun, time, and the solar gravitational force (−GM ☉ r/r 3 ), respectively. The colons (:) denote the dyadic products of vectors. The plasmas are assumed to consist of fully ionized hydrogen, and the plasma pressure and plasma density are related with P g = 2ñk B T p /m p , where k B , m p , and T p are the Boltzmann constant, the mass of protons, and the plasma temperature, respectively. For the test simulations, we here set the specific heat ratio γ = 5/3. The current model does not include the effects of coronal heating; hence the solar wind plasma quantities in distant regions may not be realistic. Nonetheless, we here choose this value because γ will have to be equal to 5/3 when the heat source term is included in the coronal simulations in the future. This value of the specific heat ratio yields reasonably good agreements in the CME propagation simulations (e.g., Liou et al. 2014). The H3DMHD model solves the MHD equations in the rest frame. In the simulations shown in this article, 110 grids are placed along the radial direction to cover the heliocentric distance from 2.5 R e to 19.0 R e , with a constant grid size of Δr = 0.15R e . We note here that the heliocentric distance of the upper boundary surface can be set farther away, at 1 au or beyond. We choose a rather small number to reduce the computation times. The angular sizes of the grids in the latitudinal and longitudinal directions are both uniform and set to be 5°. 
In order that the effect of the solar rotation will be included, the boundary maps of the specified MHD variables (B r, V r, and ñ or T p) are longitudinally shifted at the angular velocity of the solar sidereal rotation, Ω, of 360°.0 per 24.3 days. Characteristic Boundary Treatments The set of the practical forms of the equations is given in the Appendix. In brief, under the concept of the characteristics of the hyperbolic eight-variable MHD equation system, we are allowed to specify five constraints at the 2.5 R e inner boundary surface, because there are at least five MHD wave modes directed outward from the Sun (or inward into the simulation domain). Among several possible choices, we choose the following. Two sets of conditions specify four of the five constraints: the first set is that the radial components of the magnetic field (B r) and plasma flow (V r) are unchanged in the frame rotating with the Sun at the solar sidereal angular velocity (Ω). Here we choose to fix V r to test the present boundary treatment. The second set is that the plasma flow and magnetic field are parallel when seen in the rotating frame, in order to satisfy the requirement for magnetic solenoidality (Yeh & Dryer 1985). The first set is expressed with two equations for the temporal variations of the vector components, ∂ t V r = 0 and ∂ t B r = 0. The second set is expressed as B θ /B r = V θ /V r and B f /B r = (V f − Ω r sin θ)/V r. In this work, the remaining constraint is given as a polytropic relationship, ∂ t (P g /ñ α ) = 0. The exponent (α) is the only parameter controlling the CharM boundary treatment in this study, and it is not intended to represent any physics in the lower part of the solar corona. Hence, the exponent (α) is not necessarily equal to the specific heat ratio in the MHD equations (γ = 5/3). For example, with α = 1, the CharM boundary treatment alters the boundary density and simultaneously keeps the temperature constant in time, ∂ t T p ∝ ∂ t (P g /ñ) = 0. Setting α equal to a large number (such as 100) is equivalent to setting a condition of (near-)constant density, ∂ t ñ ≈ 0. Setting α equal to zero is equivalent to setting a condition of (near-)constant plasma pressure, although we do not test this case in this study. In the following sections, we conduct the simulations with various values of α: 1, 1.2, 1.4, 5/3, 3.0, and 100.0. The value α = 5/3 is chosen because it is the same as the specific heat ratio (γ) in the governing MHD equations. The values 1.2 and 1.4 lie between the two cases α = 1 and α = 5/3. The value 100 is chosen as a proxy of an infinitely large value of α, and the value 3.0 is set as an example case between α = 100 and α = 5/3. Hereafter, α is called the boundary-polytrope index. The characteristic equations to be solved are given in the Appendix. It must be mentioned here that the characteristic method is flexible and that equations other than those shown in the Appendix are possible. Boundary Values at 2.5 R ☉ and Initial Values In this study, we use the source surface magnetic field data, which generally yield a weaker source surface field than anticipated from the 1 au in situ measurements. To compensate for the magnetic flux, typically a factor of 5 is sufficient, if we assume that no flux cancellation takes place in interplanetary space. In this study, we choose a factor of 10, to run the codes with a slightly stronger magnetic field strength for testing.
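As an illustration of how the polytropic boundary constraint introduced above behaves, the following minimal Python sketch (an assumption-level toy, not the actual CharM implementation) rescales the boundary temperature when the characteristic treatment changes the boundary density while keeping P g/ñ^α fixed; with α = 1 the temperature stays constant, while with a large α even a small density change would require a huge temperature change, so the density is effectively frozen.

```python
def temperature_after_density_update(T_old, rho_old, rho_new, alpha):
    """Keep P_g / rho^alpha constant (P_g ~ rho*T), so T scales as (rho_new/rho_old)^(alpha-1)."""
    return T_old * (rho_new / rho_old) ** (alpha - 1.0)

# Toy example: the boundary density is raised by 20% by the characteristic update
for alpha in (1.0, 1.2, 5 / 3, 3.0, 100.0):
    T_new = temperature_after_density_update(T_old=1.0e6, rho_old=1.0, rho_new=1.2, alpha=alpha)
    # For large alpha the implied temperature change is enormous, which is why
    # the large-alpha limit effectively acts as a (near-)constant-density condition.
    print(f"alpha = {alpha:6.2f}: T changes by factor {T_new / 1.0e6:.3f}")
```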
We apply the speed prediction formula (in kilometers per second) to infer the radial component of the plasma speed (V r) at r = 18 R e from the flux-tube expansion factor f s. Figure 1(c) shows the inferred V r at 18 R e. The initial and boundary values of V r in this simulation study are estimated from the inferred 18 R e values through a linear relationship with the height measured from 1 R e (i.e., V r scales with r′ − 1), where r′ is the heliocentric distance in units of the solar radius. The initial plasma number density n (=ñ/m p) is determined as n = n 1au (V 1au /V r)(215/r′)^2. The bottom-boundary plasma temperature (T p) is determined from the assumption that the scaled sum of the kinetic, gravitational potential, and thermal energies, r^2 {ñV^2/2 − ñGM ☉ /r + 2k B ñT p /m p /(γ − 1)}, is constant and equal to the average at 1 au. For this estimation, the average number density, plasma speed, and temperature at 1 au (n 1au, V 1au, and T 1au) are set to 8.0 cm −3 , 420.0 km s −1 , and 10 4 K, respectively. The initial values of the plasma density and temperature above the inner boundary surface (r > 2.5 R e) are reduced to 50% and 10% of these estimates, respectively, so that the initialized solar wind starts flowing outward smoothly. Above the inner boundary surface at r > 2.5 R e, the initial values of the latitudinal and longitudinal components of the magnetic field and plasma velocity (B θ, B f, V θ, and V f) are set to zero at t = 0. We assume that the inner boundary sphere (at r = 2.5 R e) is rotating rigidly at the rotation rate Ω, as the coronal magnetic field below this height is sufficiently strong to control the plasma flow. The longitudinal component of the plasma flow at r = 2.5 R e is given as V f = Ω r sin θ, where θ is the colatitude counted from the solar north pole. The parallel condition between the magnetic field and the plasma flow must be satisfied in the frame rotating with the Sun; hence the longitudinal component of the magnetic field is given as B f = B r (V f − Ω r sin θ)/V r. The latitudinal component of the magnetic field is given as B θ = B r V θ /V r. The CharM boundary treatment uses the MHD variables as they are at the boundary grids, except that the longitudinal component of the plasma flow in the rotating frame (V f − Ω r sin θ) is used instead of V f in the nonrotating simulation frame. The boundary variables that are not altered through the CharM boundary treatment (such as B r and V r) are calculated with linear longitudinal interpolation of the original map data at each simulation time step. The variables altered through the CharM boundary treatment (X) are further altered as ∂ t X = − Ω∂ f X with the upwind differencing method, to take into account the effect of the solar rotation. It is worth noting that the relationships among the components of B and V can be set rather arbitrarily. Any choice that satisfies the induction equation, with V defined in the frame rotating with the boundary map of the fixed B r, is allowed here. This is a condition for maintaining the solenoidality of the simulated magnetic field with the boundary B r (Yeh & Dryer 1985). Results from Time-relaxation Simulations The simulated solar wind system evolves from the initial state in accordance with the MHD equations and the CharM boundary treatment.
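A small Python sketch can make the boundary-value recipe above concrete: speed interpolated linearly with height, number density from mass-flux conservation scaled back from 1 au (215 R ☉), and temperature from the constant scaled energy sum. This is a hedged reconstruction of the procedure described in the text; the speed prediction formula itself is not reproduced here, so the assumed 18 R ☉ speed `v18` is treated as an input.

```python
G, M_SUN, R_SUN = 6.674e-11, 1.989e30, 6.957e8   # SI constants
K_B, M_P, GAMMA = 1.381e-23, 1.673e-27, 5.0 / 3.0

def vr_at(r_prime, v18):
    """Linear-in-height interpolation of V_r between 1 R_sun (0) and 18 R_sun (v18), km/s."""
    return v18 * (r_prime - 1.0) / (18.0 - 1.0)

def density_at(r_prime, vr_kms, n_1au=8.0, v_1au=420.0):
    """Mass-flux conservation: n = n_1au * (V_1au / V_r) * (215 / r')^2, in cm^-3."""
    return n_1au * (v_1au / vr_kms) * (215.0 / r_prime) ** 2

def scaled_energy(r_prime, vr_kms, n_cm3, T_K):
    """r'^2 { rho v^2/2 - rho G M_sun / r + 2 k_B rho T / m_p / (gamma - 1) } in SI units."""
    rho, v, r = n_cm3 * 1e6 * M_P, vr_kms * 1e3, r_prime * R_SUN
    return r_prime ** 2 * (rho * v ** 2 / 2 - rho * G * M_SUN / r
                           + 2 * K_B * rho * T_K / M_P / (GAMMA - 1))

def temperature_at(r_prime, vr_kms, n_cm3, e_ref):
    """Invert the scaled-energy sum for T, holding it equal to the 1 au reference value."""
    rho, v, r = n_cm3 * 1e6 * M_P, vr_kms * 1e3, r_prime * R_SUN
    thermal = e_ref / r_prime ** 2 - rho * v ** 2 / 2 + rho * G * M_SUN / r
    return thermal * (GAMMA - 1) * M_P / (2 * K_B * rho)

# 1 au reference values quoted in the text: 8 cm^-3, 420 km/s, 1e4 K (r' = 215)
e_1au = scaled_energy(215.0, 420.0, 8.0, 1.0e4)
v_b = vr_at(2.5, v18=400.0)            # an 18 R_sun speed of 400 km/s is an assumed example
n_b = density_at(2.5, v_b)
T_b = temperature_at(2.5, v_b, n_b, e_1au)
print(f"2.5 R_sun boundary: V_r ~ {v_b:.0f} km/s, n ~ {n_b:.2e} cm^-3, T_p ~ {T_b:.2e} K")
```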
Figure 2 shows the profiles of the radial component of plasma velocity (V r ) in the radial direction at a selected location of the latitude of 0°and the Carrington longitude of 180°at several physical times simulated with α = 1 (fixing T p at 2.5R e ), as an example. As seen in Figure 2, the initial plasma flow immediately starts evolving, to eventually reach a (quasi-)steady state. The typical relaxation time under the present simulation settings is found to be about 15 to 20 hr on the physical timescale. We regard the simulated state at t = + 20 hr as a steady time-relaxed state. Time-relaxed States with Various Values of Boundarypolytrope Index, α In the present model, the boundary-polytrope index, α, controls the behavior of the CharM inner boundary treatment. Figure 3 shows boundary maps of the plasma density (in the left column) and temperature (in the right column) obtained with α = 1, 1.2, 5/3, and 100, at t = 20 hr. As designed, the boundary temperature with α = 1 ( Figure 3(e)) is not altered from the initial setting of the boundary value. Similarly, the boundary density with α = 100 ( Figure 3(d)) has little changed from the initial setting. In Figures 3(a)-(c) and (f)-(h), the distributions are rather blurred, because of the numerical error in transporting in the longitudinal direction as ∂ t ñ = − Ω∂ f ñ or ∂ t T p = − Ω∂ f T p . Overall, the shapes of the simulated boundary ñ and T p are similar with different α. However, the values are substantially different: the difference in the boundary density between the cases with α = 1 (Figure 3(a)) and α = 100 (Figure 3(d)) is about a factor of 4. The difference between the boundary temperature with α = 1 (Figure 3(e)) and α = 100 (Figure 3(h)) is about a factor of 5. During the time relaxation, the boundary treatment with α = 1 (α = 100), in general, increases the boundary density (temperature) to achieve a sufficiently large plasma pressure on the boundary surface so that the boundary plasma can flow outward at the specified speed under the presence of the nonuniform boundary magnetic field. The lateral gradient of the boundary magnetic field pressure can result in the direction of the simulated magnetic field being slightly diverted from the radial direction. The nonradial magnetic field above the inner boundary surface can obstruct the smooth plasma outward flows, and the influence of the obstruction can propagate backward to the inner boundary surface, because the plasma flows near the inner boundary surface are yet subsonic/ Alfvénic. Without the characteristics-based boundary treatment adjusting the plasma pressure (P g ∝ ñT p ), the simulated solar wind may not reach the steady state, as shown in Section 3.2. The higher plasma density at 2.5 R e results in higher total mass flux with the fixed V r . Because the initial values of the mass density were calculated so that the total mass will well match those measured at the Earth (or 1 au), such an increase is not indeed desired. Similarly, the higher boundary temperature results in unrealistic high solar wind speeds in distant regions. Figure 4(a) shows the speed profile in the radial direction at the selected location of the latitude of 0°(at the solar equator) and the Carrington longitude of 180°, derived with various different values of α. Substantial differences in the speed near the outer boundary (19 R e ) are seen. In Figures 4(b) and (c), the profiles of V r in time-relaxed states are shown for two other periods, CR 2059 and 2095. 
These two periods, CR 2059 and 2095, correspond to the minimum phase between the solar activity cycles 23 and 24 and the ascending phase of solar cycle 24, respectively. These two periods are chosen in order to examine whether the present model can handle the transonic/Alfvénic solar wind at periods other than CR 2126 (near the maximum phase of solar cycle 24 or at the first peak of the sunspot number in solar cycle 24). As seen in the plots of Figure 4, the present MHD model can indeed yield the steady state of the transonic/Alfvénic flow through the time relaxation (for 20 hr on a physical timescale) for 2.5R e r 19R e for these two periods as well. The shapes of the obtained profiles of V r shown in Figures 4(b) and (c) are similar to those in 4(a) (for CR 2126), except that the values of V r are dependent on the boundary values specified for each period and their gradients in the longitudinal and latitudinal directions. Although it is not perfectly satisfactory, among the tested cases, the choice of α = 1.2 appears to yield most reasonable results. Figure 5 shows the latitude-longitude maps of the simulated variables on the inner boundary surface at r = 2.5R e and the outer boundary surface at r = 19.0R e . As seen in Figure 5(e), the flow speed (V r ) at r = 19.0R e has moderate contrast, ranging from about 350 to 480 km s −1 . The velocity contrast is a reasonable one for the quiet solar wind, although further examinations and test simulations are needed. Figure 6 shows the profile of the plasma flow speed (V r ) in the radial direction from three simulation cases, sampled at a location (S22.5°, 180°): (a) the (quasi-)steady state derived with α = 1 (thick line) as a reference; (b) the case with fixed boundary conditions for all eight MHD variables (without the characteristics-based boundary treatment); and (c) the same as the second case, except that the temperature on the inner boundary surface is multiplied by 5. Results with and without the CharM Boundary Treatment As seen in Figure 6, the case without the characteristicsbased boundary treatment, but with the same initial boundary values as in case (a), is unstable, yielding even negative values of V r . The selected location for the radial profile was near the source of the heliospheric current sheet and the lateral (latitudinal and longitudinal) gradient of B r is relatively large. We think that this magnetic configuration is one of the factors causing the falling plasma in the second case (b). The plasma flow with insufficient thermal energy (temperature) or density cannot pass through the oblique narrow paths near a laterally expanding flow region due to the gradients of the magnetic pressure. The falling plasma cannot be steady. The current setting (enforcing V r > 0 on the inner boundary surface) allows the numerical vibrations move outward, rather than stay at a certain region. Hence the vibration may not grow exponentially in time. This allows us to continue simulation (b), but the results may not be usable for other simulations, such as CME propagation simulations. 1.01, 1.1, 1.2, 1.3, 1.46, and 5/3) in the MHD equations. The case with γ = 1.01 corresponds to the near-isothermal case. The value of 1.46 is the one inferred from the Helios data analysis (Totten et al. 1995). The boundary values of V r at 2.5 R e are fixed and identical among these cases. The radial profiles are sampled at the latitude of 0°(the solar equator) and the Carrington longitude of 180°. 
In the reference case (a), the CharM boundary treatment alters (usually increases) the mass density, to maintain the steady flow that matches the specified V r on the inner boundary surface. Without such a (numerical) mechanism, we cannot obtain the steady-state matching, given that V r has been rather arbitrarily determined. Profile (c) shows that the transonic/Alfvénic simulation with the fixed boundary conditions but a higher bottom-boundary temperature can yield a smooth V r profile. However, the speeds in the distant regions are about 900 km s −1 , which are unreasonably high as a quiet solar wind speed. From these results, we claim that the present CharM boundary treatment can reasonably reduce the chance of obtaining unreasonable or unstable solutions. This is an important advantage, because it is difficult to know in advance whether a certain combination of the plasma density, temperature, and velocity will result in a stable smooth flow or not. The Parker solution (Parker 1958(Parker , 1965) with a larger specific heat ratio will infer such combinations of the boundary values that are usable as the subsonic/Alfvénic fixed boundary values, but only for the case without the presence of the magnetic field. Summary and Discussion A characteristics-based boundary treatment, named as CharM, is introduced. This boundary treatment model is designed to provide numerical stability and robustness to MHD simulations for the transonic/Alfvénic solar wind, starting from a middle height in the corona. Results from time-relaxation simulations with various values of the boundary-polytrope index (α), a free parameter in the boundary treatment, are compared and examined to find that α = 1.2 is a tentatively optimal number in the present model. The characteristics-based boundary treatment is designed to assist transonic/Alfvénic MHD simulations by adjusting the boundary temperature and/or density to obtain a reasonable steady-state solution matching the specified boundary velocity and magnetic field. Insufficient thermal energy given/specified on the inner boundary at 2.5 R e can lead to the downward flow seen in the simulation with the fixed boundary condition. It is found that the CharM boundary treatment can alter the plasma values within the specified polytropic constraint, to achieve a steady state of the solar wind flow at r > 2.5R e . Because the plasma quantities at the middle of the corona are not well determined, this functionality is an important model feature for assisting Sun-to-Earth simulation models, such as those for CME propagations in the upper corona and interplanetary space. There are several items to be solved and improved in the present CharM boundary treatment. The present model with α = 1 tends to yield a steady-state solar wind with higher temperatures at 2.5R e . Similarly, the runs with a large value of α (=100) tend to yield a steady-state solar wind with a higher boundary plasma density. The optimal boundary temperature and/or density sought by CharM must be different if we include coronal heating and/or acceleration in the governing MHD equations. One of the next steps is to include additional energy and momentum sources or to apply a smaller value of γ, such as those inferred from in situ measurements (γ = 1.46; Totten et al. 1995), to mimic the thermal energy supply. In Figure 7, the velocity profiles of the steady state derived with various values of the specific heat ratio (γ) are shown. 
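Before turning to the specific-heat-ratio comparison of Figure 7, the Parker-solution guidance mentioned above can be sketched numerically. The snippet below solves the classical isothermal Parker wind equation on its transonic branch; it ignores the magnetic field entirely and is only meant to illustrate how such a solution can supply trial subsonic boundary speeds. The coronal temperature, the sound speed, and the critical radius are assumed, illustrative values.

```python
import numpy as np
from scipy.optimize import brentq

def parker_isothermal_u(x):
    """Transonic branch of the isothermal Parker wind:
       u**2 - ln(u**2) = 4*ln(x) + 4/x - 3,
    with u = v/c_s and x = r/r_c (r_c = G*M_sun / (2*c_s**2))."""
    rhs = 4.0 * np.log(x) + 4.0 / x - 3.0
    f = lambda u: u * u - np.log(u * u) - rhs
    if x < 1.0:
        return brentq(f, 1e-8, 1.0 - 1e-12)   # subsonic below the critical point
    if x > 1.0:
        return brentq(f, 1.0 + 1e-12, 50.0)   # supersonic above it
    return 1.0

# Illustrative numbers: c_s ~ 180 km/s for a ~2 MK isothermal corona gives
# r_c ~ 2.9 R_sun, so r = 2.5 R_sun sits slightly below the critical point.
c_s_km_s, r_c_rsun = 180.0, 2.9
for r_rsun in (2.5, 5.0, 10.0, 19.0):
    u = parker_isothermal_u(r_rsun / r_c_rsun)
    print(f"r = {r_rsun:5.1f} R_sun : v ~ {u * c_s_km_s:6.1f} km/s")
```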
In this comparison, the boundary-polytrope index (α) is set to 5/3, and the boundary values of V r are the same. The smaller value of the specific heat ratio (closer to 1) yields a higher speed near the outer boundary. The values of V r of the nearisothermal case (γ = 1.01) at the outer boundary surface are more than twice as large as those with γ = 5/3. The profiles in the region close to the inner boundary surface are rather similar to each other, though the profile with γ = 5/3 becomes rather flat for r > 5R e , while the profiles with smaller γ keep increasing with respect to the heliocentric distance. The plasma density and temperature on the inner boundary surface in the time-relaxed steady states differ with different values of γ as a consequence of the nonlinear interaction between the inner boundary and the simulated solar wind above the inner boundary surface during the time-relaxation simulation. The interactions will be more complicated if we include the source terms of the energy and/or momentum in the governing MHD equations. It is an advantage of the present model that it is capable of studying such nonlinear interactions between the subsonic/Alfvénic boundary surface and the transonic/ Alfvénic solar wind above the boundary surface, although careful simulation setups are necessary. As the present model adjusts the boundary plasma temperature and density in a manner to maintain the polytropic relationship, P g /ñ α , the final boundary temperature and density of the steady state are dependent on the initial guesses. To prepare initial values of the boundary plasma temperature, we simply assumed that the temperature and density are functions of the plasma velocity that is inferred from the PFSS model and the flux tube expansion factor ( f s ). New methods of inferring the plasma quantities at the middle of the corona are desired. The present model can be improved with such new input information, which can in turn help to check the appropriateness of the model. In this article, we fix the boundary V r then construct the characteristic equations shown in the Appendix. The values of the specified 2.5 R e V r are rather small, in part because we want to examine how well CharM can handle the boundary MHD variables in a situation far away from being supersonic/ Alfvénic. It is possible to construct sets of characteristic equations where the boundary V r is allowed to change. For example, we can assume that the constant mass flux and the temporal variations of the boundary V r and ñ can be related as ∂ t (ñV r ) = ñ∂ t V r + V r ∂ t ñ = 0, instead of Equation (A2). Starting with this, we can construct a new set of characteristic equations, replacing the equation set (A7). We will test other possible choices as well, as the concept of the characteristics of the hyperbolic MHD system allows us to find suitable choices for retrieving desired solar wind features, such as good agreements of the mass flux at the Earth or reasonable velocity profiles with respect to the heliocentric distance near the Sun. where L l,m is the element of the left eigen matrix of the MHD equation system, for lth wave modes (numbered in nonincreasing order of eigenvalues, and mth variables (numbered in a order: ñ, V r ñ, V θ ñ, V f ñ, B θ , B f , and  ). The characteristic equation for the radial component of magnetic field, ∂ t B r = − V r ∂ r B r + L, can be isolated from the seven other characteristic equations. 
The components of the left eigen matrix (L l,m ) and the eigenvalues (λ l ) used in this study are from Cargo & Gallice (1997) and also given in Hayashi (2005). The right-hand side of the mth MHD variable is calculated in the same manner as the grids above the inner solar surface boundary sphere. In this present simulation study, we want to fix the radial component of plasma flow (V r ) and magnetic field (B r ). The specified V r is always set to be positive, V r > 0. In this case, four of seven eigenvalues, λ 1 = V r + V F , λ 2 = V r + V A , λ 3 = V r + V S , and λ 4 = V r , are always positive. The number of outgoing wave modes with negative eigenvalues, λ l < 0, representing the modes propagating from the domain of computation toward the Sun, is equal to or less than three. Therefore, we are allowed to specify at least five constraints to complete the equation system for determining the temporal variations of all eight MHD variables. We first consider a case where all three remaining eigenmodes (with λ 5 = V r − V S , λ 6 = V r − V A , and λ 7 = V r − V F ) are negative. In this study, we choose the five constraints as follows. The first condition is to fix B r in time, ∂ t B r = 0. This condition must be accompanied by two constraints, V r B θ − V θ B r = 0 and V r B f − V f B r = 0. We want to keep the boundary V r distributions, hence the fourth condition is expressed as ∂ t V r = 0. The fourth condition is expressed as With this, the temporal variations of B θ and B f can be written as As the last condition, we choose the polytropic relationship between the plasma density and pressure (or temperature), which is controlled with the boundary-polytrope index α. Setting α = 1 is equivalent to setting a fixed boundary condition for the temperature (∂ t T p ∝ ∂ t (P g /ñ) = 0). Setting α = ∞ is equivalent to setting a fixed boundary condition for the plasma density (∂ t ñ = 0). By setting α = 0, the boundary surface gas pressure is unchanged in time (∂ t P g ). In this work, we only test the cases with α 1. It is worth noting that the relationship is not necessarily an exponent polytropic one: we introduce the polytropic relationship for simplicity. Indeed, any relationship between the plasma density and pressure (or temperature), ∂ t P g ≔ F∂ t ñ, where F is any function, can be used for constructing the characteristics-based boundary treatment. With the equations above, the temporal evolution of the energy density (the left-hand side of Equation (4)) is rewritten as By coupling these constraints among the temporal variations and characteristic equations (Equation (A1)), we finally obtain a set of three characteristic equations with three unknown variables, ∂ t ñ, ∂ t (ñV θ ), and ∂ t (ñV f ): for the three modes with eigenvalues, λ 5 = V r − V S , λ 6 = V r − V A , and λ 7 = V r − V F . This equation set is always solvable. When λ 5 > 0, λ 6 > 0, or λ 7 > 0, we are allowed to specify more than five constraints. It is possible to design the boundary treatment to switch a set of characteristic boundary treatments for the five constraints (with five positive eigenvalues) to/from those for six, seven, or eight positive eigenvalues; however, it is rather complicated, and such switching is often very frequent and causes numerical instability. In this present model, we simply set the total contribution of an incoming wave mode to zero when its eigenvalue is positive (the wave is incoming and directed outward from the Sun): for l = 5, 6, 7. 
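The last step described above amounts to assembling and solving a small linear system at every boundary cell. The sketch below shows only the mechanics: the left-eigenvector rows and right-hand sides are placeholder numbers, whereas in the actual model they are evaluated from the local boundary state using the eigen decomposition of Cargo & Gallice (1997).

```python
import numpy as np

def boundary_update(L_rows, rhs, eigenvalues):
    """Solve for (d/dt n, d/dt nV_theta, d/dt nV_phi) from the three characteristic
    equations of the modes that may propagate sunward.

    L_rows      : (3, 3) rows of the left eigen matrix restricted to the unknowns
    rhs         : (3,)   characteristic right-hand sides for those modes
    eigenvalues : (3,)   lambda_5..lambda_7 = V_r - V_S, V_r - V_A, V_r - V_F
    """
    rhs = np.where(np.asarray(eigenvalues) > 0.0, 0.0, rhs)  # zero incoming modes
    return np.linalg.solve(np.asarray(L_rows, dtype=float), rhs)

# Placeholder numbers purely to exercise the mechanics.
L_rows = [[1.0, 0.2, 0.0],
          [0.1, 1.0, 0.3],
          [0.0, 0.4, 1.0]]
rhs = [0.5, -0.1, 0.2]
eigenvalues = [10.0, -30.0, -120.0]   # slow-mode eigenvalue positive (V_S < V_r < V_A)
print(boundary_update(L_rows, rhs, eigenvalues))
```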
It is evident that when the boundary flow is super-Alfvénic, all RHS l will be zero; hence, all of the three temporal variations, ∂ t ñ, ∂ t (ñV θ ), and ∂ t (ñV f ), will be calculated to be zero (equivalent to the fixed boundary condition). The denominators in the equations shown in this Appendix are never equal to zero. The radial component of plasma flow, V r , is the only variable that can be zero in general cases; however, the present model specifies always positive V r (outward plasma flow) at r = 2.5R e . The plasma density, ñ, is always positive unless the vacuum is considered. A parameter, (γ − 1), is not equal to zero unless the isothermal case is considered.
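For completeness, a helper of the kind implied by the eigenvalue bookkeeping above can be written as follows; it computes the standard MHD fast, Alfvén, and slow speeds for radial propagation and counts how many of the seven eigenvalues are positive (outgoing). The input numbers in the example are arbitrary and only indicative of mid-coronal conditions.

```python
import numpy as np

MU0 = 4.0e-7 * np.pi

def radial_eigenvalues(rho, p_gas, v_r, b_r, b_theta, b_phi, gamma=5.0 / 3.0):
    """Seven MHD eigenvalues for radial propagation (SI units):
       V_r + V_F, V_r + V_A, V_r + V_S, V_r, V_r - V_S, V_r - V_A, V_r - V_F."""
    a2 = gamma * p_gas / rho                        # sound speed squared
    b2 = (b_r**2 + b_theta**2 + b_phi**2) / (MU0 * rho)
    var2 = b_r**2 / (MU0 * rho)                     # radial Alfven speed squared
    disc = np.sqrt(max((a2 + b2) ** 2 - 4.0 * a2 * var2, 0.0))
    vf = np.sqrt(0.5 * (a2 + b2 + disc))            # fast magnetosonic speed
    vs = np.sqrt(max(0.5 * (a2 + b2 - disc), 0.0))  # slow magnetosonic speed
    va = np.sqrt(var2)                              # Alfven speed
    return np.array([v_r + vf, v_r + va, v_r + vs, v_r,
                     v_r - vs, v_r - va, v_r - vf])

lams = radial_eigenvalues(rho=5.0e-16, p_gas=2.0e-6, v_r=5.0e4,
                          b_r=5.0e-6, b_theta=1.0e-6, b_phi=1.0e-6)
print(lams / 1.0e3)                        # km/s
print("outgoing modes:", int(np.sum(lams > 0.0)))
```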
8,647.8
2023-09-01T00:00:00.000
[ "Physics" ]
Grain Sieve Loss Fuzzy Control System in Rice Combine Harvesters : The main working parts of the cleaning device of a rice combine harvester can be controlled by an established control strategy in real time based on the monitored grain sieve loss. This is an efficient way to improve their cleaning adaptability, since as a consequence, the main working parameters of combine harvesters can automatically adapt to crop and environment changes, and the corresponding cleaning performance can be improved. To achieve the target of cleaning control based on the monitored grain sieve loss, a fuzzy control system was developed, which selected S7-1200 PLC as the main control unit to build the lower computer hardware system, utilized ladder language to complete the system compilation, and used LabVIEW 14.0 software to design the host–computer interface. The effects of fan speed, guide plate angle, and sieve opening on the grain sieve loss and grain impurity ratio have been investigated through a large number of bench tests. The relevance level of the operating parameters on the performance parameters has been determined also, and finally, a fuzzy control model was developed for the cleaning system. The experiment results indicated that the designed fuzzy control model can control the cleaning section settings, such as fan speed and guide plate angle automatically, and reduce the grain sieve loss to some extent. Introduction The use of combine harvesters for harvesting rice fields is rapidly increasing year by year in China as the planting area and yield keep increasing [1]. As there are different rice varieties and harvesting times, the harvesting performance of the combine harvesters significantly fluctuates under different crop-harvesting conditions. The automation of the combine harvesters is one good way to guarantee the harvesting performance, and the flagship combine harvesters made by European companies have realized the functions of operation process fault diagnosis, forward speed control, adaptive threshing and cleaning, chassis lift control, and so on, significantly improving the overall operating efficiency and performance [2][3][4][5][6][7]. However, the relevant research on the state monitoring of the operation of combine harvesters is still in its infancy period in China. Most combine harvesters merely have engine revolution speed monitoring devices installed, parameter settings can only be adjusted when combine harvesters stop working based on the experience of the operator, and the cleaning performance varies dramatically. The cleaning device as one of the core parts of the machine and cleaning performance is another one of the major factors to weigh the performance of the whole machine. However, to date, automatic control systems for the cleaning units are not commercially available. Scientists have carried out enormous relevant research on the cleaning processes of combine harvesters, and some investigations can be found investigating the effect of fan revolution speed, the area of the fan air inlet, sieve opening, and sieve vibration amplitude on the movement of grains and material other than grain (MOG) in the cleaning shoe, and several corresponding mathematical models have been developed [8][9][10][11]. Better insight into the characteristics of the cleaning process was obtained owing to the interpretation of these mathematical models. 
However, a complete mathematical model that is applicable to the cleaning process would need to include several equations that are difficult to obtain and would probably be very complex. On the other hand, on the basis of analyzing the existing experiment data, and comparing the merits and drawbacks of the existing control strategy, a common drawback of most standard modeling and control techniques is that they are always based on an accurate mathematical model, and cannot make effective use of the expert knowledge of experienced engineers and operators [12]. The fuzzy control methodology, which combines the advantages of the white-box and black-box approaches, is widely used in agriculture machinery control systems, and improves the performance greatly [13][14][15]. Therefore, utilizing sensor technology and fuzzy control theory to develop a control system that can monitor grain sieve loss and the main working parts of the cleaning system that can be controlled by the established control strategy in real time is an efficient way to improve their cleaning adaptability. On the basis of studying the grain sieve loss sensor [16,17], this paper mainly studies the correlation between grain sieve loss and the related working parameters (fan speed, guide plate angle, and sieve opening) to determine the main factors affecting cleaning performance; then, a cleaning process fuzzy control system was designed to maintain the cleaning system with a good cleaning performance. Overall Research Method To achieve the cleaning process control based on the monitored grain sieve loss, the technical blueprint adopted in this paper is introduced as follows: (1) First, we outline the working parameters, including on-line monitoring and automatic adjustment technology, and design relevant actuating devices to pave the way for the step-less of the working parameters to require less adjustment during cleaning. (2) Experiment results have shown that the cleaning performance is affected by several working parameters, and the effect of each working parameter on cleaning performance varies significantly. Therefore, relationships among working parameters (sieve opening, fan speed, guide plate angle, etc.) and performance parameters (sieve losses, grain impurity rate) have to be investigated through a large set of bench tests, and it is important to determine the relevance of the parameter settings and performance results according to the obtained test results. The most important information should be extracted, then the candidate input variables need to be ranked as possible control variables. (3) The control model is the key to the automation of the cleaning section. On the basis of analyzing the existing test data, and comparing the merits and drawbacks of the existing control algorithm, we establish a control strategy for cleaning control. At last, we verify the robustness and adaptability of the control model though a test bench experiment. Working Principle of the Multi-Duct Cleaning System A cleaning test bench is shown in Figure 1. Some working parameters of the test bench, such as sieve opening, vibration frequency, and fan revolution speed can be adjusted separately. In the working parameters' adjustment process, a push rod is used to drive the relevant mechanism to fulfill the task of sieve opening and guide adjusting the plate angle. The fan speed and sieve vibration frequency can be controlled by adjusting the corresponding motors' shaft revolution. 
In addition, the cleaning throughput can be adjusted within 0.5-4.0 kg·s −1 by controlling the vibration frequency of the electromagnetic vibration feeder. The test bench is also equipped with sensors to extract the airflow velocity, fan speed, and grain sieve loss. The working process of the multi-duct cleaning device is as follows: a return plate was added under the longitudinal axial flow threshing rotor to concentrate its threshed material and the tailings' return to the start of the cleaning shoe, and evenly distribute them. The grain pan and sieve have the same vibration frequency and amplitude. However, they have an opposite vibration direction angle to reduce the overall vibration. Thus, all of the threshed outputs can enter into the sieve surface in a uniform way. Then, at the joint action of vibration and airflow from different fan outlets, the grain will penetrate the sieve promptly. The designed working parameter adjusting mechanism is shown in Figures 2 and 3. The principle of the working parameter adjusting mechanism can be found in our patent (PCT/CN2015/074348). The adjustment range of the working parameters is shown in Table 1. Appl. Sci. 2019, 9, 114 3 of 13 However, they have an opposite vibration direction angle to reduce the overall vibration. Thus, all of the threshed outputs can enter into the sieve surface in a uniform way. Then, at the joint action of vibration and airflow from different fan outlets, the grain will penetrate the sieve promptly. The designed working parameter adjusting mechanism is shown in figures 2 and 3. The principle of the working parameter adjusting mechanism can be found in our patent (PCT/CN2015/074348). The adjustment range of the working parameters is shown in Table 1. Hardware and Software System of the Test Bench The hardware circuit of the control system mainly comprises the power circuit, working parameters' knob-adjusting circuit, the displacement sensor signal-acquiring circuit, the motor revolution speed-controlling circuit and grain sieve loss, fan revolution speed, the return plate, and the sieve vibration frequency-acquisition circuit. The EPLAN 8.23 software (EPLAN, Mattis Nerheim, Germany) was used to design the corresponding circuit, integrate the above-mentioned circuits, and connect the electrical components according to the input/output (I/O) distribution to construct the hardware system. The software system is composed of the Human-Machine Interface (HMI) and the lower computer program. We selected the SIEMENS S7-1200 PLC, SM1231 AI8 analog input module, SM122 DQ16 digital output module, CM1241 RS485 communication module, frequency converter G120C (SIEMENS, Munich, Germany), and some transmitters to build the lower computer hardware system. Ladder language was utilized to complete the system compilation. The HMI was programmed by LabVIEW 14.0 software (National Instruments, Austin, TX, USA) to complete the information display and storage functions, and send instructions to the lower computer. Modbus-TCP communication protocol was utilized to fulfill the task of information exchange between the lower computer system and the HMI. The control strategy can be programmed in LabVIEW 14.0; once the control strategy is activated, the inherent algorithm selects the current grain sieve loss and current working parameters as input, and the target value of the ideal working parameters can be calculated out. 
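The host-side hand-off of those target values can be pictured as a small register write. The sketch below is illustrative only: the register map, the scaling, and the transport stub are hypothetical stand-ins for the actual Modbus-TCP exchange between the LabVIEW host and the S7-1200.

```python
# Illustrative only: register addresses, scaling and the transport stub are
# assumptions, not the actual Modbus-TCP map of the test bench.

REG_FAN_SPEED_RPM = 100        # hypothetical holding-register addresses
REG_GUIDE_PLATE_DEG_X10 = 101

def write_register(address, value):
    """Stand-in for the Modbus-TCP write performed by the host computer."""
    print(f"write holding register {address} <- {value}")

def send_setpoints(fan_speed_rpm, guide_plate_deg):
    """Clamp the computed targets to the actuator ranges and send them."""
    fan = int(min(max(fan_speed_rpm, 1100), 1400))   # rpm limits used in the bench tests below
    plate = min(max(guide_plate_deg, 23.0), 41.0)    # guide plate II angle, degrees
    write_register(REG_FAN_SPEED_RPM, fan)
    write_register(REG_GUIDE_PLATE_DEG_X10, int(round(plate * 10)))  # 0.1 deg resolution

send_setpoints(1325.7, 28.6)
```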
Then, they were transferred into the lower computer system through communication protocol to finish process of adjusting the working parameters. The designed hardware structure of the operating monitoring and control system is shown in Figure 4. Cleaning Performance Evaluation under Different Working Parameters The sieve loss ratio and grain impurity ratio in the grain tank are two main factors that are used to judge the cleaning performance, and the higher those two values are, the worse the cleaning performance. Taking the fan speed, guide plate angle, and sieve opening as experimental factors, a cleaning experiment was carried out in the test bench ( Figure 1) utilizing the threshed outputs from the thresh-separation test bench [18]. The total amount of threshed outputs was 60 kg, the feeding rate was 2.5 kg/s. An oil-skin was used to collect all the cleaning residual at the rear of the test bench; then, the full grains were filtered out from the MOG using the stationary re-cleaner (Agriculex ASC-3 Seed Cleaner, Guelph, Ontario, CA, Canada), weighed, and the grain sieve loss ratio was calculated. The grain impurity ratio can be calculated by sampling from the grain tank (0.2-6 kg with an accuracy of ± 1 g) according to the national standard in China (DG/T 014-2009). The grain sieve loss should be ≤1%, and the grain impurity ratio should be ≤2% (Chinese standards JB/T 5117-2006). A preliminary screening experiment was designed with D-optimal design criterion, which focuses on precise parameter estimation, and the experiment results indicated that the fan speed, guide plate II angle, and sieve opening are the main factors that affect the cleaning performance. In this paper, a response surface experiment was designed with I-optimal design criterion in JMP12.0 software (SAS, Cary, NC, USA) to learn the variation trends in the cleaning performance [19]. The basic characteristics of the rice used in the test are shown in Table 2. Grain Loss Control Strategy and Performance Checking Selecting the deviation of the monitored grain sieve loss and its deviation variation rate as input variables, a fuzzy model was developed to control the cleaning process. The grain sieve loss sampled frequency was 10 Hz. By analyzing the time series of the recorded grain sieve loss under different working conditions, the basic domain of grain sieve loss was obtained. The output variables were the controlled quantities obtained by applying fuzzy inference to the corresponding input variables. To check the control performance of the designed fuzzy cleaning controller, the fuzzy query tables for the working parameters were obtained separately according to the maximum membership principle. Then, the corresponding proportional coefficients were multiplied into the values in the fuzzy query tables to obtain the actual regulation. After converting the actual regulation into the 'if-else' control statements respectively, the experiment was carried out in the test bench to check the controller performance. Before the experiment, the threshed outputs container in the test bench was filled with 120-kg threshed outputs, and it was ensured that all of the threshed outputs could be fed into the cleaning system within 50 s, which took several trials. The control effect could be checked by comparing the monitored grain sieve loss variation. 
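For readers without JMP, the kind of second-order response-surface fit and p-value screening reported below can be approximated in outline with ordinary least squares. The snippet uses hypothetical observations, not the study's data, and a reduced set of second-order terms; the full I-optimal design includes all quadratic and interaction terms.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical records: fan speed (rpm), guide plate II angle (deg),
# sieve opening (mm), grain sieve loss ratio (%).
df = pd.DataFrame({
    "fan":     [1100, 1100, 1100, 1300, 1300, 1300, 1500, 1500, 1500, 1200, 1400, 1350],
    "plate":   [13,   29,   45,   13,   29,   45,   13,   29,   45,   21,   37,   25],
    "opening": [20,   25,   30,   25,   30,   20,   30,   20,   25,   25,   22,   28],
    "loss":    [0.60, 0.40, 0.30, 1.00, 0.70, 0.55, 2.00, 1.40, 1.10, 0.65, 0.90, 0.85],
})

# Reduced second-order response-surface model for the grain sieve loss ratio.
model = smf.ols("loss ~ fan + plate + opening + I(fan**2) + fan:plate", data=df).fit()
print(model.pvalues.round(3))   # terms with p < 0.05 are taken as significant
```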
Airflow velocities at certain points inside the cleaning shoe were measured by self-heating hot-wire anemometry (VS110, Nanjing, Neng Zhao Technology Co., Ltd., Nanjing, China, with a scope of 0.5-50 m·s −1 and a resolution of 0.01 m·s −1 ) to reflect the changing of the cleaning working parameters. The cleaning experiment was carried out under the condition that the controller was not being activated first, and the initial working parameters of the cleaning system were defined as the combination that had the worst cleaning performance. Then, about 10 s later, when the cleaning system was filled with threshed materials and the anemometer monitoring value was stable, the controller was activated. The corresponding sieve loss number was continuously recorded, which was beneficial to analyze the control effect afterwards. At last, collecting all of the cleaning residue on the oil-skin, the grains were obtained by removing the MOG through using a re-cleaner (Agriculex ASC-3 Seed Cleaner, Guelph, ON, Canada), and the grain sieve loss ratio could be obtained immediately. The grain sieve loss ratio was compared through utilizing the controller and the grain sieve loss ratio at the initial working parameters. The experimental process and location of the anemometers and grain sieve loss sensor in the cleaning shoe were as shown in Figure 5. Response Surface Experiment Results Analysis The JMP 12.0 software was used to analyze the effects of the main working parameters on grain sieve loss utilizing the experimental results shown in Table 3, and the analysis results are shown in Table 4. From Table 3 can be learnt that the grain sieve loss ratio and grain impurity ration could meet the Chinese standard in the most of conditions, compared with the single duct cleaning device, the cleaning performance was improved significantly [20]. When using JMP and other professional statistical software for hypothesis testing, the p value (p value) is often used to quantify the statistical significance of the evidence. In general, p < 0.05 is significant, p < 0.01 is very significant, meaning that the probability of sampling errors caused by differences between samples is less than 0.05 or 0.01 [19]. From the sieve loss response surface experiment results shown in Table 4, it can be seen that the combination of fan speed and guide plate II angle can accurately reflect the variation trend of grain sieve loss. To establish the mathematical model for monitoring the grain sieve loss, the grain distribution at the tail sieve was studied under the different fan speeds and guide plate II angles. The interaction profiler for grain sieve loss is shown in Figure 6. Relationship among Working Parameters and Grain Impurity Ratio The analysis of the surface response experiment results regarding the grain impurity ratio is shown in Table 5. It can be seen from Table 5 that the correspondence p-value of the sieve opening is ≤0.05. Therefore, it is considered that the sieve opening is the main variable that affects the grain impurity ratio. To understand the variation in the grain impurity ratio under different working parameters, a prediction profiler for the grain impurity ratio is shown in Figure 7. In Figure 7, the closer the desirability value is to 1, the more satisfactory the result; the closer the desirability value is to 0, the more unsatisfactory the result. 
From Figure 7, it can be learned that the corresponding desirability of the sieve opening experiences a great fluctuation, while the corresponding desirability of the fan speed and guide plate II angle changes only slightly. This further confirms that the sieve opening is the main parameter determining the grain impurity ratio in the grain tank. Fuzzy Controller for Cleaning System There is no commercial grain impurity ratio monitoring system available at present; thus, it is currently impossible to obtain the grain impurity signals in a grain tank in real time. From the experiment shown in Table 5, it can be learned that when the sieve opening is 25 mm, the cleaning system has the lowest grain impurity ratio in the grain tank. Therefore, fixing the sieve opening at 25 mm and selecting the fan speed and guide plate II angle as control variables allows a control strategy to be established that keeps the grain impurity ratio ≤2% in the grain tank. As the control target is to keep the grain sieve loss ratio ≤0.5% under the condition of a grain impurity ratio ≤2% in the grain tank, at the feeding rate of 2.5 kg/s the optimal set point of the grain sieve loss ratio is 0.5%, corresponding to a grain sieve loss of 12.5 g/s, or about 420 grains/s assuming a 1000-grain mass of rice of 30 g. Based on the proportion of grain mass in the monitoring area, the control threshold is set to six grains/100 ms. The control variables of the fuzzy controller are the fan speed and the guide plate II angle. The adjusting range of the fan speed is 1100-1500 rpm, while the guide plate II angle is distributed within 13-45°. As shown in Figure 8, the experimental results indicated that the grain sieve loss increases at a rate of 0.75 g/r as the fan speed increases, and decreases at a rate of about 2.2 g/° with increasing guide plate II angle. The cleaning system has a good performance with the fan speed at 1300 rpm and a guide plate II angle of 29°, as the corresponding grain sieve loss and grain impurity ratio are relatively low. Combined with the cleaning performance under different conditions, the grain sieve loss ratio is larger when the fan speed is 1500 rpm, which is not conducive to obtaining a better cleaning performance. Therefore, the fan speed can be adjusted in the range of 1100-1400 rpm, and the guide plate II angle can be changed within 23-41°.
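The set point and the grains/100 ms control threshold quoted above follow from straightforward unit conversions; a short check is given below, in which the monitoring-area fraction is an assumed value chosen to reproduce the stated threshold, since it is not reported explicitly here.

```python
# Sketch of the set-point arithmetic; the monitoring-area fraction is assumed.
feed_rate_kg_s = 2.5          # feeding rate
target_loss_ratio = 0.005     # 0.5 % grain sieve loss target
thousand_grain_mass_g = 30.0  # 1000-grain mass of the rice variety

loss_g_per_s = feed_rate_kg_s * 1000.0 * target_loss_ratio            # 12.5 g/s
loss_grains_per_s = loss_g_per_s / (thousand_grain_mass_g / 1000.0)   # ~417 grains/s
loss_grains_per_100ms = loss_grains_per_s / 10.0                      # ~42 grains/100 ms

monitoring_area_fraction = 6.0 / loss_grains_per_100ms                # assumed ~0.14
threshold_grains_per_100ms = loss_grains_per_100ms * monitoring_area_fraction

print(loss_g_per_s, round(loss_grains_per_s), round(threshold_grains_per_100ms))
# -> 12.5, 417, 6
```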
Figure 8 shows the effects of the working parameters on grain loss with a confidence interval of 95% ((a) fan speed; (b) guide plate II angle). Experiments have shown that the fan speed has a major effect on grain sieve loss. Thus, a faster loop (cycle time of 5 s), which regulates the fan speed, and a slower loop (cycle time of 10 s), which regulates the guide plate II angle, were designed to control the working process. During the working process, the cleaning controller checks whether the current grain sieve loss is close to the optimal set point. If the monitored grain loss number is much higher than the optimal set point, the faster loop is activated and the fan speed is changed. Once the monitored grain sieve loss approaches the optimal set point, the slower control loop manages the fine-tuning of the guide plate II angle to keep the grain sieve loss below the set point without impairing the cleaning efficiency. If the monitored grain sieve loss number falls below the optimal grain sieve loss set point, the fan speed can be increased to some extent to ensure cleaning efficiency. The basic domain of the grain sieve loss is obtained as shown in Table 6. The membership functions are all triangular, as shown in Figure 9; E and EC are the fuzzy domains of the input variables (the loss deviation and its rate of change), and U represents the fuzzy domain of the outputs. The fuzzy subsets of the input and output linguistic variables are expressed as negative big (NB), negative middle (NM), negative small (NS), zero (ZO), positive small (PS), positive middle (PM), and positive big (PB). The fuzzy system includes 49 rules each for the fan speed and the guide plate II angle, as shown in Tables 7 and 8.
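A minimal sketch of the inference scheme is given below. The triangular membership functions, rule-table lookup, and maximum-membership defuzzification mirror the structure of the controller described above, but the rule entries, fuzzy-domain scaling, and output coefficient are illustrative placeholders rather than the actual 49-rule tables (Tables 7 and 8) or the quantization factors used on the test bench.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

LABELS = ["NB", "NM", "NS", "ZO", "PS", "PM", "PB"]
CENTERS = dict(zip(LABELS, np.linspace(-6, 6, 7)))   # fuzzy domains of E, EC and U

def fuzzify(x):
    """Membership degree of x in each of the seven fuzzy subsets."""
    return {lab: tri(x, c - 2, c, c + 2) for lab, c in CENTERS.items()}

# Placeholder rule table: rule[e_label][ec_label] -> output label for the fan-speed loop.
# The real controller uses 49 expert rules per output (Tables 7 and 8).
RULES = {e: {ec: "ZO" for ec in LABELS} for e in LABELS}
RULES["PB"]["PB"] = "NB"   # loss far above set point and rising -> cut fan speed hard
RULES["NB"]["NB"] = "PB"   # loss far below set point and falling -> raise fan speed

def fan_speed_correction(e, ec, k_u=20.0):
    """Maximum-membership defuzzification of the aggregated rule firing strengths.

    e  : quantized deviation of the grain sieve loss from the set point
    ec : quantized rate of change of that deviation
    k_u: assumed output scaling coefficient, rpm per unit of U
    """
    mu_e, mu_ec = fuzzify(e), fuzzify(ec)
    strength = {}
    for le, me in mu_e.items():
        for lec, mec in mu_ec.items():
            w = min(me, mec)
            if w > 0.0:
                out = RULES[le][lec]
                strength[out] = max(strength.get(out, 0.0), w)
    best = max(strength, key=strength.get)            # maximum membership principle
    return k_u * CENTERS[best]                        # rpm adjustment to apply

print(fan_speed_correction(e=6.0, ec=6.0))  # loss far above set point and rising -> -120.0
```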
Controller Performance Checking The initial working parameters are as follows: the fan speed is 1500 rpm, the guide plate I angle is 26.5°, the guide plate II angle is 13°, and the sieve opening is 25 mm. The experimental results indicated that there is a large sieve loss under this condition. From Figure 10, it can be seen that when the controller was not activated in the first 10 s, the airflow velocity at the upper outlet and tail sieve is large because of the higher fan speed and the smaller guide plate II angle. Under the accelerated action of airflow, the threshed material is easily blown out, resulting in a sharp increase in grain sieve loss. After activating the controller at 10 s, the monitored grain loss gradually decreases. Once the grain sieve loss stabilizes, the controller adjusts the fan speed to increase the cleaning efficiency, and the grain loss increases as the fan speed increases. At last, the grain sieve loss is stable near the set point, and the grain sieve loss is reduced. The combination of working parameters in the cleaning device determines the airflow distribution in the cleaning shoe. Therefore, the change of the airflow field is indirect proof of the changes in the working parameters. The airflow velocity variation in the first 20 s is shown in Figure 11, and the control performance of the controller is verified by the airflow velocity variation. From Figure 11, it can be seen that the airflow velocity at different measurement points varies dynamically during the cleaning process, which proves that the relevant working parameters are changing under the action of the controller. At 10 s, the airflow velocity distribution in the cleaning shoe is far from the ideal airflow velocity distribution. According to previous experience, the grain sieve loss is larger and the relevant working parameters need to be adjusted in order to prevent the grain loss from continuing to increase. After the control algorithm was activated at 10 s, the airflow velocity in the cleaning shoe rapidly reduced by 15 s and gradually became close to the ideal airflow velocity distribution at 20 s. The calculated grain sieve loss with the activated controller was 0.53%. However, from Table 4, it can be known that the grain sieve loss is 0.83-2.01% under working conditions similar to the initial working parameters: fan speed 1500 rpm, sieve opening 20-30 mm, guide plate II angle 13-45°, and guide plate I angle 26.5°. The grain sieve loss after the controller is activated is therefore significantly reduced. Since the control algorithm was not activated in the first 10 s, the grain sieve loss was a bit higher.
Figure 11. Variation of airflow velocity distribution above the sieve within the first 20 s. Conclusions Selecting the S7-1200 PLC to build the lower computer hardware system, utilizing ladder language to complete the system compilation, and using LabVIEW 14.0 software to design the host-computer interface, a multi-duct cleaning device performance monitoring and control system was developed. The effects of fan speed, guide plate angle, and sieve opening on the sieve loss ratio and grain impurity ratio were investigated through a large number of bench tests. The experimental results indicated that the combination of fan speed and guide plate II angle can accurately reflect the variation trend of grain sieve loss; the grain sieve loss increases at a rate of 0.75 g/r as the fan speed increases, and decreases at a rate of about 2.2 g/° with increasing guide plate II angle. Based on the proportion of grain mass in the monitoring area, the control threshold is set to six grains/100 ms.
Then, a fuzzy control model of the cleaning process was developed for the multi-duct cleaning system, and the experimental results indicated that when the controller was not activated in the first 10 s, the airflow velocity at the upper outlet and tail sieve was large because of the higher fan speed and the smaller guide plate II angle. Under the accelerated action of airflow, the threshed material is easily blown out, resulting in a sharp increase in grain sieve loss. After activating the controller at 10 s, the monitored grain loss gradually decreased. With the stability of the grain sieve loss, the controller adjusts the fan speed to increase the cleaning efficiency. The grain loss increased as the fan speed increased. The designed fuzzy control model can fulfill the automated control of cleaning settings, such as fan speed and guide plate angle, and thus reduce grain sieve loss.
7,274.4
2018-12-29T00:00:00.000
[ "Computer Science" ]
Circulating granulocyte lifespan in compensated alcohol‐related cirrhosis: a pilot study Abstract Although granulocyte dysfunction is known to occur in cirrhosis, in vivo studies of granulocyte lifespan have not previously been performed. The normal circulating granulocyte survival half‐time (G − t ½), determined using indium‐111 (111In)‐radiolabeled granulocytes, is ~7 h. In this pilot study, we aimed to measure the in vivo G − t ½ in compensated alcohol‐related cirrhosis. Sequential venous blood samples were obtained in abstinent subjects with alcohol‐related cirrhosis over 24 h post injection (PI) of minimally manipulated 111In‐radiolabeled autologous mixed leukocytes. Purified granulocytes were isolated from each sample using a magnetic microbead‐antibody technique positively selecting for the marker CD15. Granulocyte‐associated radioactivity was expressed relative to peak activity, plotted over time, and G − t ½ estimated from data up to 12 h PI. This was compared with normal neutrophil half‐time (N − t ½), determined using a similar method specifically selecting neutrophils in healthy controls at a collaborating center. Seven patients with cirrhosis (six male, aged 57.8 ± 9.4 years, all Child‐Pugh class A) and seven normal controls (three male, 64.4 ± 5.6 years) were studied. Peripheral blood neutrophil counts were similar in both groups (4.6 (3.5 − 5.5) × 109/L vs. 2.8 (2.7 − 4.4) × 109/L, respectively, P = 0.277). G − t ½ in cirrhosis was significantly lower than N − t ½ in controls (2.7 ± 0.5 h vs. 4.4 ± 1.0 h, P = 0.007). Transient rises in granulocyte and neutrophil‐associated activities occurred in four patients from each group, typically earlier in cirrhosis (4–6 h PI) than in controls (8–10 h), suggesting recirculation of radiolabeled cells released from an unidentified focus. Reduced in vivo granulocyte survival in compensated alcohol‐related cirrhosis is a novel finding and potentially another mechanism for immune dysfunction in chronic liver disease. Larger studies are needed to corroborate these pilot data and assess intravascular neutrophil residency in other disease etiologies. Introduction and Background Advanced forms of alcohol-related liver disease are associated with high rates of bacterial and fungal sepsis, which are a frequent cause of hospitalization and death in patients with cirrhosis (Verma et al. 2006). In part, this relates to defects in neutrophil function, including impaired phagocytic capacity and high resting oxidative burst (Mookerjee et al. 2007). More recently, neutrophil dysfunction has been shown to occur in those with compensated cirrhosis and to be transmissible to the neutrophils of healthy controls by incubation in plasma from cirrhotic subjects (Tritto et al. 2011). In vitro studies have shown increased rates of neutrophil apoptosis in decompensated versus compensated liver disease, mediated through increased capsase-3 activity (Ram ırez et al. 2004). This, coupled with hypersplenism, has been used to explain neutropenia in cirrhosis. The normal in vivo circulating neutrophil lifespan is controversial, with a wide range of values depending upon the method of measurement. When determined from the sequential recovery of autologous radiolabeled granulocytes from peripheral blood, the normal intravascular half-life (t ½ ) is~7 h (Saverymuttu et al. 1985). A much longer value of 5.4 days has recently been described using in vivo heavy water ( 2 H 2 O) labeling (Pillay et al. 
2010), although significant concerns exist regarding the validity of this method (Li et al. 2011;Tofts et al. 2011). Indium-111 oxine ( 111 In) is a gamma-emitting radionuclide with a 67 h physical half-life that preferentially labels neutrophils in a stable manner and is therefore ideally suited for in vivo study of neutrophil kinetics. Neutrophils constitute the majority of circulating granulocytes, the remainder comprising small numbers of eosinophils and basophils. Granulocytes are key to the innate immune response, although their lifespan in patients with compensated cirrhosis is currently unknown. Aims In this pilot study, we aimed to determine the intravascular survival time of 111 In-radiolabeled granulocytes in subjects with compensated alcohol-related cirrhosis. Patients and Methods Subjects with compensated alcohol-related cirrhosis (Child-Pugh class A) were recruited from the outpatient liver clinic at our institution. Cirrhosis was diagnosed either from previous liver biopsy or the combination of clinical findings and compatible radiology (typically computed tomography showing an irregular liver margin and features of portal hypertension). In all cases, other causes of liver disease were diligently excluded. To avoid confounding effects from alcohol-induced bone marrow toxicity, all had been abstinent from alcohol for ≥6 months prior to recruitment. In all instances, we sought to verify self-reported abstinence by reference to primary and secondary care records and excluded those in whom there was uncertainty. For comparison, healthy controls without liver disease were studied at a collaborating center. All participants were ambulatory outpatients at the time of the study without clinical evidence of active or recent infection. Leukocyte radiolabeling protocol All subjects underwent conventional indium-111 ( 111 In)labeled leukocyte scintigraphy. Autologous mixed leukocytes were radiolabeled in vitro under sterile conditions according to published guidelines (Roca et al. 2010), taking precautions at all stages to minimize ex vivo cell perturbation. Briefly, 45 mL of venous blood was mixed with the anticoagulant acid-citrate-dextrose. Erythrocytes were allowed to sediment over 45 min, aided by the addition of 1% methylcellulose. A leukocyte-rich, platelet-depleted cell pellet was obtained by centrifugation of the supernatant and washed once with normal saline. The cells were resuspended in saline and incubated with approximately 25 MBq 111 In-oxine for 15 min, after which radiolabeling was terminated by the addition of autologous platelet-poor plasma. The radiolabeled leukocytes were pelleted, the supernatant aspirated, and cell-associated and unbound radioactivity measured to calculate the radiolabeling efficiency. Radiolabeled mixed leukocytes were resuspended in a further 3 mL plateletpoor plasma and injected intravenously. The administered radioactivity was~20 MBq. Leukocyte labeling in normal controls recruited at the collaborating center was performed using 111 In-tropolone, an alternative ligand to oxine, although the labeling procedures were otherwise identical. Granulocytes Sequential peripheral venous blood samples were obtained between 30 min and 10 h postinjection (PI) of 111 In-radiolabeled mixed leukocytes and again between 20-25 h PI. 
Purified granulocytes were separated from each whole-blood sample using a magnetic microbead-based antibody technique positively selecting for the granulocyte-specific antigen CD15 (autoMACS, Miltenyi Biotec, Bergisch Gladbach, Germany) (Zahler et al. 1997). The CD15-associated radioactivity was measured using a γ counter (WIZARD 1480, PerkinElmer, MA) and expressed relative to the number of granulocytes per sample, determined using a hemocytometer. These values were expressed as a percentage of the peak value in each subject and plotted over time. Circulating granulocyte survival half-life (G − t½) was calculated from the gradient of an exponential fitted to the data points acquired up to 12 h PI. Neutrophils In normal controls studied at the collaborating center, purified neutrophils were isolated from peripheral blood samples obtained up to 24 h PI using a similar negative-selection antibody-microbead technique, specifically selecting for neutrophils (RoboSep, StemCell Technologies, Vancouver, Canada). The normal neutrophil half-life (N − t½) was determined in the same manner as G − t½. Statistical analysis Data are presented as mean ± standard deviation, median (interquartile range), or number (%), and all reported P-values are two-tailed. Quantitative variables were compared using Student's t-test or analysis of variance (ANOVA) and the Mann-Whitney U-test or Kruskal-Wallis test for parametric and nonparametric data, respectively. The study received external ethical approval and all participants gave informed written consent. Patient characteristics Seven patients with cirrhosis and seven normal controls were studied. One had undergone previous liver biopsy and in the remaining six the diagnosis rested on clinical and radiological grounds. Compared with normal controls, subjects with cirrhosis were younger and a greater proportion were male, although these differences were not statistically significant (Table 1). Total peripheral leukocyte, neutrophil, and platelet counts were similar in both groups. All patients with cirrhosis were Child-Pugh class A (Child-Pugh score 6 in one case and 5 in all other subjects) and the stated median duration of abstinence from alcohol was 18 (11-84) months. Five (71.4%) had previously been admitted with episodes of hepatic decompensation (severe alcoholic hepatitis, n = 3; variceal hemorrhage, n = 1; and refractory ascites, n = 1). All had radiological evidence of cirrhosis (irregular nodular liver margin), and in addition, two had splenomegaly, the maximum spleen size being 12.7 cm. Intravascular granulocyte and neutrophil lifespan Mean normal circulating N − t½ determined from six healthy controls was 4.4 ± 1.0 h. One normal control in whom N − t½ was 14.4 h was deemed to be an outlier and excluded from the analysis. Mean G − t½ in cirrhosis was significantly shorter than the normal N − t½ (2.7 ± 0.5 h, P = 0.007) (Fig. 1). G − t½ and N − t½ were unrelated to the peripheral neutrophil count (ρ = −0.249, P = 0.412). Figure 2 shows example blood clearance curves in a normal control (A) and a patient with compensated cirrhosis (B). Cell-associated radioactivity up to 12 h PI followed a monoexponential decay function. Pooled recovery values up to 12 h PI in normal controls (Fig. 2C) displayed a greater spread than in cirrhosis (Fig. 2D), giving rise to accordingly lower R² values (0.74 vs. 0.88, respectively).
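The half-life estimate described above is a log-linear fit to the early recovery data; a minimal sketch, using made-up recovery values for illustration, is:

```python
import numpy as np

# Time points (h post injection) and cell-associated activity (% of peak);
# the numbers below are illustrative, not study data.
t = np.array([0.5, 1, 2, 4, 6, 8, 10, 12], dtype=float)
activity = np.array([100, 93, 78, 57, 40, 29, 21, 15], dtype=float)

# Monoexponential decay A(t) = A0 * exp(-k t): fit ln(A) against t.
slope, intercept = np.polyfit(t, np.log(activity), 1)
half_life_h = np.log(2) / abs(slope)

# Extrapolate the early fit to a late sample (e.g. 22 h PI) to see whether an
# observed late activity exceeds the value predicted by the early decay.
predicted_22h = np.exp(intercept + slope * 22.0)
print(f"t1/2 = {half_life_h:.1f} h, predicted activity at 22 h = {predicted_22h:.1f} %")
```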
In both groups, the intravascular half-life determined from the exponential fit of pooled recovery values was similar to the mean of individual lifespan measurements (G − t½ in cirrhosis 3.0 h vs. normal N − t½ 5.3 h). Normal neutrophil recovery values after 20 h PI lay close to the extrapolated exponential generated from measurements up to 12 h PI. However, in cirrhosis, granulocyte-associated radioactivity in later peripheral blood samples was higher than that expected from the earlier data. In 13 of 14 late samples (92.9%) obtained more than 20 h PI, CD15-associated radioactivity was greater than that anticipated from the extrapolated exponential function generated using data up to 12 h PI (Fig. 2D). (Table 1 footnote: data are presented as mean ± standard deviation, median (IQR), or number (%); MELD, model for end-stage liver disease score; normal values: leukocyte count 4-11 × 10⁹/L, neutrophil count 2-7.5 × 10⁹/L, platelet count 150-450 × 10⁹/L.) Transient rises in neutrophil-associated and CD15-associated activities were observed in four normal controls and four patients with cirrhosis, respectively (example recovery curves shown in Fig. 3). These transient rises typically occurred later in normal subjects (~8-10 h PI) compared to those with cirrhosis (~4-6 h PI), commensurate with the shorter G − t½ in cirrhosis (Fig. 3). Discussion Although defects in neutrophil function have been reported in various forms of chronic and acute-on-chronic liver disease, this pilot study is the first to report the in vivo measurement of granulocyte lifespan in cirrhosis. A number of findings are noteworthy. Firstly, the normal intravascular neutrophil residency time determined from the recovered neutrophil fraction of radiolabeled mixed leukocytes was shorter than previously reported (~5 h vs. ~7 h) (Saverymuttu et al. 1985). Previous similar studies have radiolabeled purified neutrophils, a process that requires substantial ex vivo cell manipulation (Saverymuttu et al. 1985), which risks activating cells and consequently altering their in vivo behavior. The method utilized in this study minimized ex vivo cell perturbation during radiolabeling and isolated pure cell lines from the whole-blood samples used in the measurement of radiolabeled neutrophil or granulocyte recovery. Furthermore, we determined the t½ of granulocytes and neutrophils using samples of whole blood obtained over a longer period (up to 12 h rather than 5 h in previous studies) (Saverymuttu et al. 1985).
A transient increase in the time courses of neutrophil- and granulocyte-associated activities was observed in normal controls and those with alcohol-related cirrhosis (Fig. 3). These findings suggest recirculation of radiolabeled cells into the circulating granulocyte pool from sites of margination, resonating with recently reported findings using 111 In-labeled eosinophils (Farahi et al. 2012). The source(s) of recirculating granulocytes remain unknown and warrant further study with dynamic gamma camera imaging over carefully selected time points. A "tail" in the granulocyte recovery data was observed in alcohol-related cirrhosis, likely reflecting eosinophils isolated alongside neutrophils using CD15-positive selection. Eosinophils have been shown to have a longer intravascular residency time than neutrophils (~25 h) (Farahi et al. 2012). Eosinophils constituted just 2.8 ± 1.2% of total granulocytes in these individuals and are therefore unlikely to substantially affect recovery values obtained up to 12 h PI. However, as a consequence of their long intravascular lifespan, over time eosinophils form a greater proportion of residual circulating radiolabeled cells. The greater proportion of radiolabeled eosinophils in samples obtained after 20 h PI is likely to account for the prolonged curve seen in CD15-positive separations in cirrhosis, which was not observed with purified neutrophil separations in normal controls. Limitations inherent in the study methodology include the use of differing cell separation techniques in cirrhosis and normal controls due to the recruitment of subjects in two separate centers. However, since neutrophils comprise the vast majority of granulocytes (95.3 ± 2.1% in those with cirrhosis), the comparison between N − t½ and G − t½ appears to be scientifically appropriate. By calculating G − t½ from samples up to 12 h PI, the effect of radiolabeled eosinophils within CD15-positive samples is minimized. It is possible that the shorter G − t½ we identified in cirrhosis relates in part to differences either in leukocyte radiolabeling or in cell selection techniques. Our method relies upon rapid and even distribution of radiolabeled cells between circulating and marginating granulocyte pools, and upon a steady rate of mature granulocyte release from the bone marrow during the period of measurement. Men were over-represented in the cirrhotic group, consistent with the male predominance of those affected by alcohol-related liver disease. However, we did not identify any difference in granulocyte or neutrophil residency according to gender (P = 0.819). Finally, the sample sizes were comparatively small and would therefore benefit from further studies to corroborate the findings in a larger cohort. We attempted to measure G − t½ using this technique in a further cohort with decompensated liver disease in the setting of severe alcoholic hepatitis (n = 3, data not shown). These individuals exhibited significant variability in peripheral blood neutrophil counts after radiolabeled leukocyte administration. Attempts to correct radiolabeled cell recovery data for changes in the peripheral neutrophil count generated radically differing values, and hence we were unable to determine G − t½ in these subjects with any degree of confidence.
However, future research with more refined techniques and a larger sample size may enable measurement of granulocyte lifespan in other forms of chronic liver disease, as well as in acute and acute-on-chronic liver failure. In conclusion, in this pilot study we have shown for the first time that the intravascular granulocyte lifespan is suppressed in compensated alcohol-related cirrhosis. This is of potential significance given the existing evidence for neutrophil dysfunction and the resulting susceptibility to infection in cirrhosis. Infection is a frequent trigger of hepatic decompensation, often culminating in acute-on-chronic liver failure, and remains an important predictor of in-hospital mortality. We identified the intravascular granulocyte lifespan in abstinent subjects with compensated alcohol-related cirrhosis to be substantially lower than both that of normal controls and the previously reported normal circulating survival half-time for 111In-labeled granulocytes (Saverymuttu et al. 1985). The delayed recirculation phenomena observed in both normal individuals and those with cirrhosis are novel findings and warrant further study to determine the foci from which radiolabeled cells are released into the circulation.
3,757.6
2016-09-01T00:00:00.000
[ "Medicine", "Biology" ]
Anomalous anti-damping in sputtered β-Ta/Py bilayer system An anomalous decrease in the effective damping parameter αeff in sputtered Ni81Fe19 (Py) thin films in contact with a very thin β-Ta layer, without necessitating the flow of DC-current, is observed. This reduction in αeff, which is also referred to as an anti-damping effect, is found to be critically dependent on the thickness of the β-Ta layer; αeff is highest, i.e., 0.0093 ± 0.0003, for bare Ni81Fe19(18 nm)/SiO2/Si compared to the smallest value of 0.0077 ± 0.0001 for β-Ta(6 nm)/Py(18 nm)/SiO2/Si. This anomalous anti-damping effect is understood in terms of an interfacial Rashba effect associated with the formation of a thin protective Ta2O5 barrier layer, and also the spin pumping induced non-equilibrium diffusive spin-accumulation effect in the β-Ta layer near the Ta/Py interface, which induces an additional spin orbit torque (SOT) on the moments in Py leading to a reduction in αeff. The fitting of Δα(tTa) revealed an anomalous negative interfacial spin mixing conductance of (−1.13 ± 0.05) × 10¹⁸ m⁻² and a spin diffusion length of 2.47 ± 0.47 nm. The increase in αeff observed above tTa = 6 nm is attributed to the weakening of SOT at higher tTa. The study highlights the potential of employing β-Ta based nanostructures in developing low power spintronic devices having a tunable as well as low value of α. In recent years, the Rashba spin orbit interaction (RSOI) has emerged as a powerful tool for significantly enhancing the spin transfer torque (STT) in a ferromagnetic (FM) layer when it is in contact with a heavy metallic nonmagnetic (NM) layer, i.e., in NM/FM hetero-structures, e.g., Bi2Se3/Py, Ta/CoFeB/MgO, etc [1][2][3][4][5][6][7][8]. In the presence of charge current through the NM layer, the RSOI forces the spins at the interface via the spin Hall effect (SHE) in the transverse direction, thereby creating a non-equilibrium spin accumulation near the NM/FM interface 2. In the presence of a DC magnetic field, the accumulated interfacial non-equilibrium spin density interacts with the magnetization of the FM layer via ferromagnetic exchange coupling and eventually reverses the magnetization at high current density. Referred to as the anti-damping of the magnetization precession 3,4,[9][10][11][12][13][14], this phenomenon essentially lowers the Gilbert damping constant (α) when compared to the case of a bare FM. A similar RSOI-like anti-damping effect also originates from the Berry curvature, associated with phases with broken inversion symmetry, which produces an SOT that counteracts the magnetization dynamics 15. As the effect, by its fundamental origin, relies on the local accumulation of spins near the interface, the Rashba effect is also referred to as the interfacial spin Hall effect 3. The anti-damping observed in the Rashba effect fundamentally arises due to a local modification in the spin orbit interaction near the interface, which gives rise to the Rashba spin orbit torque (RSOT) necessary for the lowering of α.
It may be pointed out that this so-called SHE-RSOT, which is significant only when the thickness of the NM layer (tNM) is comparable to or smaller than its spin-diffusion length (λSD), is quite different from the bulk SHE driven STT (observed when tNM > λSD), wherein both damping and anti-damping effects could occur depending upon the magnitude and direction of the DC-current 16. In this latter case of bulk SHE-STT in NM/FM bilayers, interfacial contributions to the STT arising from local interactions at the interface are usually very weak and hence are often ignored 3. Nowadays, interface physics plays an important role in technological applications like magnetic random access memory, magnetic data storage and spin based logic devices. Hence, the influence of the nature of the interface in the NM/FM bilayer system on the spin pumping is of paramount importance for the realization of spintronic devices. Recently, theoretical groups have reported that the RSOI also occurs in ferromagnetic semiconductors (FMS), e.g., MnxGa1−xAs, etc [13][14][15]17,18. It is to be noted that in all these reports, the non-equilibrium spin density is created when the FMS is subjected to an rf-current. This non-equilibrium spin density exerts an SOT on the magnetization via the exchange coupling with the carriers' magnetic moments. However, at higher currents, the noise due to both the Oersted field and the associated heating effects dominates over the SOT and leads to suppression or disappearance of the anti-damping [19][20][21][22][23]. This makes it very difficult to separate the contributions from the RSOT and SHE-STT. In this communication, we present experimental evidence of anti-damping SOT in β-Ta/Py/SiO2/Si(100) bilayers without any DC-current flowing through the Ta. Although the observed anti-damping effect in these β-Ta/Py bilayers bears similarity to the DC-current induced anti-damping effect observed in Ta/CoFeB bilayer systems [24][25][26], the anti-damping observed in the present case of β-Ta/Py bilayers is, however, anomalous since it is observed in the absence of DC-current to the β-Ta layer. Based on the analyses of the line broadening in the ferromagnetic resonance (FMR) spectra recorded on bilayers having different Ta layer thicknesses (tTa) deposited in-situ over a Py layer of constant thickness tPy = 18 nm, it is proposed that the observed anti-damping effect has its origin associated with a Rashba-like interfacial SOT arising due to the spin accumulation at the β-Ta/Py interface [27][28][29][30][31]. The anti-damping effect is found to be systematically dependent on tTa, becoming more and more pronounced with the increase in tTa till about 6 nm. Above tTa ~ 6 nm, its strength decreases monotonically and becomes more or less independent of tTa above 8 nm. These experimental results, which are manifestations of the FMR induced spin-pumping mechanism in β-Ta/Py bilayers, are explained in terms of a negative interfacial effective spin mixing conductance (g↑↓) which depends on tTa. The studies suggest that Ta can act as a potential candidate material for inducing a Rashba-like torque leading to lower α, which could be useful for developing potential spintronic devices with relatively low power dissipation due to the absence of any DC-current. Py thin films (thickness 18 nm) were grown at room temperature on SiO2/(100)Si substrates (SiO2 is the native oxide layer on Si) by pulsed DC magnetron sputtering using a 99.99% pure Py target.
The β-Ta layers of different thicknesses varying from 1-24 nm (in steps of 1 nm from 1 to 8, 2 nm from 8 to 12 and 4 nm from 12-24 nm) were grown on top of Py(18 nm) by using 99.99% pure Ta target. The base pressure of sputtering chamber was ∼ 2 × 10 −7 Torr and Ar working pressure of ~3.2 × 10 −3 Torr was maintained during bilayer growth. The in-plane magnetization of β-Ta/Py thin films was measured by Physical Property Measurement System (PPMS) (Model Evercool-II from Quantum Design Inc) facility at IIT Delhi. The X-Ray diffraction studies on these thin films have been done by using X'Pert-Pro x-ray diffractometer (XRD) with Cu-K α (1.54 Å) source for studying the phase purity and orientation aspects of the Ta and Py thin films. The thicknesses of individual layers and interface roughness (~0.4 nm) were accurately determined by x-ray reflectivity (XRR) measurements. The in-plane resonance field H r and linewidth ∆H were measured by using broadband lock-in-amplifier based ferromagnetic resonance (LIA-FMR) technique developed in-house with the help of a vector network analyzer (VNA) in an in-plane magnetic field configuration employing a coplanar waveguide (CPW). Figure 1a shows the schematic of the FMR set up. The VNA (HP make Model 8719 ES) sourced the microwave signal at a particular frequency to the CPW placed within in the pole gap of an electromagnet as shown in Fig. 1a. The FM/NM bilayer thin film sample (size 1 × 4 mm 2 ) is mounted on the central signal line (S) of CPW (see Fig. 1b) such that the film-side is in contact with S while the substrate side faces upward. In this geometry, the external DC-magnetic field H from the electromagnet and microwave field h rf of the CPW are transverse to each other, and both lie parallel to the film-plane. The resonance condition is obtained by sweeping H at different constant values of microwave frequency (5-10 GHz). To improve the signal to noise ratio, the DC-magnetic field was modulated using an AC-field of an optimized strength of 1.3 Oe at 211.5 Hz frequency which was obtained by powering a pair of Helmholtz coils from the reference oscillator of the lock-in-amplifier from Stanford Research Systems Inc. (Model-SR 830 DSP). The output signal, essentially the derivative of the signal from the sample, locked at 211.5 Hz was detected by the LIA via an RF-diode detector. X-ray photoelectron spectroscopic (XPS) spectra were recorded using SPECS make system which uses MgK α (1253.6 eV) source and hemispherical energy analyzer (pass energy of 40 eV with a resolution of ~0.3 eV) to probe the surface of the β-Ta/Py bilayers. Figure 2 shows the X-Ray diffraction patterns recorded in θ-2θ mode on Py (18 nm), β-Ta (30 nm) and β-Ta ( ) t Ta /Py bilayer thin films, where t Ta corresponds to thickness of Ta layer. The formation of highly textured β-Ta phase is established from (i) the presence of very intense (002) and (004) peaks from the β-Ta phase at 2θ value of 33.5° and 70.1°, respectively, and (ii) the absence of the most intense peaks of α-Ta phase, namely the (110) peak at 2θ = 38.4° (which overlaps with (202) and (211) of β-Ta) and the isolated (200) peak at 2θ = 56.0°. It may be noted that, in the present case, the formation of the phase pure β-Ta required a relatively higher sputtering power ~150 W (over 2" dia target area). 
The 2θ peak position of 33.2°, corresponding to a d value of 2.70 Å (very close to the reported value of 2.67 Å for Ta) 32, reveals that the growth of the Ta thin films is in the desired tetragonal β-phase having a preferential orientation of the (200) planes. This is consistent with its measured room-temperature resistivity of 180 μΩ.cm, which agrees very well with the values reported in the literature 24,33. While we did not observe any discernible XRD peak in the bare Py film in θ-2θ scan mode due to its small thickness (18 nm), the glancing angle XRD pattern (inset of Fig. 2) recorded at 0.5° and 1°, however, confirmed the growth of Py having a (111) preferred orientation. The spin dynamic response of these bilayer films is investigated by analyzing the FMR spectra recorded by reducing the external dc-magnetic field from the saturation magnetization state of the β-Ta/Py bilayers at different constant microwave frequencies lying in the range of 4-10 GHz. For determining Hr and ∆H at constant frequencies, the observed FMR spectra were fitted with the derivative of a Lorentzian function, as shown by solid lines (Fig. 3(a)). The frequency dependence of Hr observed for the β-Ta(1 nm)/Py(18 nm) bilayer films is shown in Fig. 3(b). The observed Hr vs. f data are fitted (solid lines in Fig. 3(b)) by using the Kittel equation 34, where γ is the gyromagnetic ratio (= 1.856 × 10¹¹ Hz/T) when the spectroscopic splitting factor g is taken as 2.1. Given the higher thickness of the Py layer, i.e., 18 nm in the present case, it is very reasonable to ignore the anisotropy term '−2K/Ms' in equation (1), and hence the Kittel equation reduces in the present case to equation (2). The values of the saturation magnetization 4πMs, obtained from the fitting of the Hr versus f data of the bilayers, are found to lie in the range of 938-1013 mT. Within the error of estimation, the 4πMs values so determined on the β-Ta(tTa)/Py bilayers are clearly large compared to that of the bare Py layer (see dotted line in Fig. 4), clearly ruling out the presence of any magnetically dead layer in these bilayers having different tTa. This is in sharp contrast to the reported work on Ta/CoFeB 33,35. Instead, in the present case, an increase in the 4πMs value with increasing tTa till about 6 nm can be noted from Fig. 4. It is conjectured that this increase in 4πMs (~8.0% higher compared to that in the bare Py layer) inferred from the FMR measurement could result from the presence of extra spin density due to diffusive spin accumulation 3,36 in the β-Ta layer. This accumulation, which has induced extra magnetization, is indeed theoretically shown to originate via the locally strong spin orbit coupling near the interface due to the proximity with the FM layer 1,3,13,36. There exist some reports in the literature wherein the magnetic proximity effect is reported in Ta/FM structures 37,38. The 4πMs value as determined from the PPMS measurement of the β-Ta(6 nm)/Py(18 nm) bilayer was found to be 805.24 emu/cc (≈ 1012 mT), which agrees reasonably well with the value estimated from the fitting of the FMR data on the same sample (Fig. 4). In order to gain a deeper insight into the FMR induced spin pumping in these β-Ta/Py bilayers, we now turn to the frequency dependence of ∆H. Figure 5 shows the observed frequency dependence of ∆H (open data symbols) for these β-Ta(tTa)/Py bilayer thin films. It can be seen that ∆H increases linearly with the resonance frequency.
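The typeset Kittel relation referred to above as equations (1) and (2) did not survive text extraction. A commonly used in-plane form consistent with the surrounding description (an anisotropy term −2K/Ms that is neglected for the 18 nm Py layer) is reproduced here as an assumption, not necessarily the authors' exact expression:

f = \frac{\gamma}{2\pi}\sqrt{\left(H_r - \frac{2K}{M_s}\right)\left(H_r - \frac{2K}{M_s} + 4\pi M_s\right)}
\quad\longrightarrow\quad
f = \frac{\gamma}{2\pi}\sqrt{H_r\left(H_r + 4\pi M_s\right)} \qquad (K \to 0)

With γ = 1.856 × 10¹¹ Hz/T (g = 2.1), fitting Hr versus f to the reduced form yields the 4πMs values quoted in the text.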
This linear increase clearly suggests that the damping of the precession in this β-Ta/Py bilayer system is governed by the intrinsic Gilbert phenomenon, i.e., magnon-electron (ME) scattering. The observed frequency dependence of ∆H is fitted with equation (3) 39, where ∆H0 accounts for line broadening owing to extrinsic contributions (e.g., scattering due to magnetic inhomogeneities, etc.) to the Gilbert damping. Normally, the presence of the inhomogeneous broadening contribution ∆H0 is indicative of inferior film quality. In the present case, ∆H0 ~ 0 (0 to 0.2 mT, i.e., only 1-5% of ∆H, see Figs. 5 and 6(a)) for the various bilayers, indicating the excellent film quality of these samples. The 2nd term in equation (3) represents the intrinsic ME contribution to the linewidth and is proportional to the Gilbert damping constant. In fact, for bilayers such as β-Ta/Py in the present case, the usual damping parameter α should be replaced with αeff so as to account for the extra contribution Δα coming from the spin pumping contributions, i.e., the effective Gilbert damping constant αeff = α + Δα. From the fittings of the ∆H versus f data of Fig. 5, we obtained the variation in ∆H0 and αeff as tTa is varied in the 1-24 nm range. The results are plotted in Fig. 6(a,b), respectively. Within the error of fitting, the inhomogeneous line broadening ∆H0 can be seen to be nearly constant. It can be seen from Fig. 6(b) that, irrespective of the values of tTa, the observed values of αeff for these β-Ta(tTa)/Py bilayers are smaller than that of the bare Py sample, which possessed α = 0.0093 ± 0.0003. This observed decrease in αeff is quite remarkable and suggests the existence of an anti-damping effect (even in the absence of any DC-current) in these β-Ta(tTa)/Py bilayers. In addition, αeff initially exhibits a monotonic decrease as tTa is increased, and attains the smallest value of 0.0077 ± 0.0001 near tTa = 6 nm (Fig. 6(b)). Thereafter, αeff exhibits a relatively sharp increase to 0.0086 ± 0.0001 at tTa = 8 nm followed by a relatively small variation in αeff. As tTa is increased from 0 to 6 nm, the observed significant anti-damping (i.e., decrease in the effective α, quantified by Δα = −0.0016) in these β-Ta/Py samples can be understood on the basis of the generation of a (Rashba-like) interfacial SOT at the interface of the β-Ta/Py bilayers. It is emphasized here that the origin of the anti-damping effect observed in the present case cannot be attributed to the SHE-STT, since the decrease in αeff is observed in the absence of DC-current. Combined with the observed increase in 4πMs due to the proximity induced strong spin-orbit coupling (Fig. 4), the FMR results (Fig. 6b) therefore substantiate the presence of a strong SOT leading to negative Δα (anti-damping) when Ta is deposited on the Py layer. To gain further insight into the anti-damping effect observed in the present case, the results have been analyzed within the framework of the theoretical model proposed by Tserkovnyak, Bauer et al. [27][28][29][30][31] on the basis of the diffusive spin accumulation hypothesis. According to this model, in the absence of DC-current in a NM/FM bilayer system, the spin current pumped into the NM layer 40 is usually governed by the interfacial spin mixing conductance (g↑↓) 41, where Re(g↑↓) is the real part of g↑↓. It may be noted that the pumped spin current JS is polarized perpendicular both to the instantaneous magnetization m and to its time derivative dm/dt.
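Equation (3) for the linewidth is likewise absent from the extracted text; the standard Gilbert form implied by the description (an inhomogeneous offset plus a frequency-linear term proportional to the damping) is given here as an assumption:

\Delta H = \Delta H_0 + \frac{4\pi\,\alpha_{\mathrm{eff}}}{\gamma}\, f

so that αeff follows directly from the slope of the ΔH versus f fits shown in Fig. 5.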
Before we discuss JS further, we recall that this transfer of spin angular momentum S from the FM to the NM layer is known 27 to depend critically upon the nature of the NM layer via a parameter ∈, which is defined as the ratio of the spin flip parameter (τsf) within the NM layer to the spin-injection/pumping rate (τsp) from the FM layer. Tserkovnyak, Bauer et al. [27][28][29][30][31] showed that in the case of ∈ > 0.1, spin accumulation at the interface is not possible, in sharp contrast to the case when ∈ < 0.1 43,44. In the latter case, the spin angular momentum (S) associated with the spins accumulated at the FM/NM interface creates a non-equilibrium spin density in the NM layer 27,31. As a consequence of this, a back flow of spin current (denoted JS0) towards the FM layer occurs, which exerts an additional torque on the magnetization. In the present case of β-Ta(tTa)/Py(18 nm) bilayers, the existence of this extra torque (i.e., SOT) could account for the observed decrease in αeff (or the anti-damping effect), since ∈ < 0.1 for the nonmagnetic Ta layer present in our bilayers [43][44][45]. In order to ascertain the presence of SOT (without DC-current), the observed thickness dependence of αeff in these β-Ta(tTa)/Py(18 nm) bilayers (Fig. 6(b)) was quantitatively analyzed using the theoretical predictions of the decrease in interfacial diffusive spin accumulation with the thickness of the NM layer [27][28][29][30][31]. The spin accumulation at the interface is very sensitive to the spin diffusion length λSD of the β-Ta layer 33,36. We argue that for the thickness regime 0 < tTa < λSD (≈ 2.74 nm for β-Ta) 25,36, the strength of the spin accumulation near the interface region is expected to dominate over the effect of the SOC of the β-Ta layer. This suggests that, instead of damping, the spin accumulation can generate JS0, which can contribute to the smaller α. Understandably, there would be a critical value of tTa above which the SOT caused by JS0 weakens (due to the lowering of JS0 as tTa is increased) compared to the case when spin accumulation is absent, which would result in an increase in αeff 36. In the present case of β-Ta, this crossover in αeff is expected to lie near tTa = 6 nm (which is ~2λSD, since above this Ta film thickness no back flow of S to the FM layer occurs due to the loss of spin coherence within the bulk of Ta 43). The initial decrease in αeff observed with the increase in tTa therefore finds a natural explanation within this diffusive spin accumulation model. Above tTa = 6 nm, JS0 diminishes due to the decrease in spin accumulation expected at higher tTa, accounting for the rise in αeff observed above tTa = 6 nm. These results are in excellent agreement with the results obtained by Jiao et al., who observed a higher ISHE signal in different bilayers of Py with Ta, Pt, and Pd only in the thickness regime tNM ≤ λSD (ref. 36). According to ref. 46, in the presence of spin accumulation in β-Ta, the net change in the effective Gilbert damping due to SOT is determined in terms of g↑↓ by equation (5), where μB is the Bohr magneton. This equation shows that the SOT is an interfacial effect and hence is expected to decrease with the thickness of the Ta layer beyond 2λSD. Figure 7 shows the plot of Δα versus tTa. From the fit of the data in Fig. 7 to equation (5), one can experimentally find the interfacial spin mixing conductance and the spin diffusion length of the Ta layer. The values of g↑↓ and λSD determined from the fitting are (−1.13 ± 0.05) × 10¹⁸ m⁻² and 2.47 ± 0.47 nm, respectively.
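Equation (5), which links the anti-damping Δα to the spin mixing conductance, is also not reproduced in the extracted text. A commonly used spin-pumping expression of this type, given here only as an assumption about the general form and not as the authors' exact equation, is

\Delta\alpha = \frac{g\,\mu_B}{4\pi M_s\, t_{\mathrm{Py}}}\,\mathrm{Re}\!\left[g^{\uparrow\downarrow}_{\mathrm{eff}}(t_{\mathrm{Ta}})\right]

where the tTa dependence enters through the effective spin mixing conductance; the specific thickness-dependent form the authors fitted in Fig. 7 to extract g↑↓ and λSD cannot be recovered from the extracted text.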
This value of λSD is very close to the theoretically reported value of 2.70 ± 0.40 nm by Morota et al. 25. We can also determine the transparency (T) of the interface (which compares the spin current density that actually diffuses from the FM layer into the NM layer with the spin current density generated via the spin pumping process from the FM layer, such that T < 1) by using equation (6) 47. Here g↑↓ and λSD are determined from the fitting parameters of equation (5), σTa is the conductivity of the β-Ta layer (= 5.5 × 10⁵ ohm⁻¹ m⁻¹), h is the Planck constant, and e is the charge of the electron. The calculated value of T from equation (6) is −0.98 (± 0.05) at tTa = 6 nm. The negative and large value of T, combined with the low and negative value of g↑↓ in the β-Ta/Py bilayer, suggests that there is poor band matching between the Py and β-Ta layers, which causes back reflection of S into the FM layer from the interface. This leads to the possibility of an exchange of torque by the spins present on the two sides in close proximity to the NM/FM interface via the Rashba effect 8,13,14,26,[47][48][49]. Thus, it is concluded that the observed anomalous anti-damping in β-Ta/Py bilayers could be accounted for by the presence of a non-equilibrium spin density (providing strong experimental support to Tserkovnyak's theoretical model) near the NM/FM interface. It is reiterated that anti-damping has also been reported earlier in ferromagnetic semiconductors like MnxGa1−xAs 15 and at the interface of NM/FM bilayer systems 33, due to the generation of RSOT by a non-equilibrium spin density with the application of RF and DC-currents. While the anomalous anti-damping observed in these β-Ta(1-24 nm)/Py(18 nm) bilayers appears to provide strong experimental support in favor of the theoretical predictions of the decrease in interfacial diffusive spin accumulation with the thickness of the NM layer [27][28][29][30][31], it is quite likely that the decrease in αeff in Ta-capped Py layers, compared to the higher αeff of bare Py, could also be a result of the protection of the underlying Py layer by the formation of a protective oxide barrier. To ascertain the contribution of the oxide barrier in lowering αeff, we performed XPS measurements and XRR simulations on a few representative samples, namely the bare Py film and the Py/β-Ta(4 nm) bilayer. The XPS spectrum recorded on the bare Py film (see Fig. 8(a)) did not clearly support the formation of (antiferromagnetic) NiO. On the other hand, the XPS spectra of the Py/β-Ta(4 nm) bilayer clearly revealed the formation of a thin Ta2O5 top layer protecting the Ta as well as the Py underlayer (see Fig. 8b,c for the Ta-4f and O-1s levels, respectively) 50. Thus, the bilayer with tTa = 1 nm is, in fact, Ta2O5/Py. This finds support from ref. 16, wherein a 1 nm Ta cap is reported to fully protect the surface of the Py layer from oxidation. Since Ta2O5 is known to possess inversion asymmetry [ref. 51], this bilayer is expected to exhibit a significant amount of anti-damping SOT 51 due to the non-equilibrium spin accumulation arising predominantly from the interfacial Rashba effect, consistent with the pronounced drop observed in αeff in the tTa = 1 nm bilayer (Fig. 6b) as compared to the bare Py. A similar fall in αeff was also reported by Allen et al. (ref. 33). It is evident that at higher tTa, the thin metallic layer of Ta will eventually isolate the Py from the Ta2O5.
Figures 9a-c show the simulated and experimental XRR spectra recorded on samples with tTa = 0, 3 and 4 nm, obtained by considering an oxide layer on top of the bilayers. The fitted values of the layer thicknesses and their roughnesses match quite well with the nominal thicknesses of the Ta and Py layers. It is evident that the XRR simulations provide additional experimental support in favor of a thin protective cap of NiO (t ~ 1 nm) on Py and of Ta2O5 (~2 nm) on top of the β-Ta(3,4 nm)/Py(18 nm) bilayers. Thus, the XPS and XRR measurements together suggest that the anti-damping effect in β-Ta(tTa)/Py(18 nm) bilayers occurs due to the interfacial Rashba effect (predominant till tTa ~ 3 nm) and to the spin pumping induced spin accumulation in the β-Ta layer for tTa < 6 nm. Eventually, when tTa is increased above ~6 nm (≅ 2λSD), αeff understandably starts exhibiting the usual spin-pumping driven damping effect due to the transfer of spin angular momentum into β-Ta. It is to be noted here that recently Akylo et al. 51, Kim et al. 52, and Qiu et al. 53 have also independently established the enhancement in the 'effective field' due to the Rashba effect with the increase in tNM for layers having strong spin orbit coupling. In summary, the FMR studies performed on β-Ta(1-24 nm)/Py(18 nm)/SiO2/Si revealed an anomalous decrease in the effective Gilbert damping constant as compared to the bare Py(18 nm) layer. The analyses of the FMR line broadening data suggest that the anomalous behavior can be satisfactorily understood by considering the dominance of a Rashba-like spin orbit torque at the interface of the β-Ta/Py bilayer, due to the formation of a thin protective Ta2O5 barrier layer and the spin pumping induced non-equilibrium diffusive spin-accumulation effect in the β-Ta layer, until its thickness is smaller than its spin diffusion length, i.e., tTa ≤ 6 nm. The study clearly establishes that, owing to the very small spin diffusion length, the thickness of the non-magnetic β-Ta layer in these bilayers is very critical to the Gilbert damping in the adjacent Py layer. Above a Ta thickness of 6 nm, α increases in magnitude due in part to the spin de-coherence at higher tTa and in part to the decrease of the Rashba-like spin orbit torque away from the interface. The observed decrease in the effective Gilbert damping constant in the β-Ta/Py bilayer is very promising, and demonstrates the potential of using Ta based nanostructures in developing low power spintronic devices, as it no longer necessitates the presence of a DC-current for tuning the damping parameter.
6,469.4
2016-01-19T00:00:00.000
[ "Physics" ]
MEMS Tunable Diffraction Grating for Spaceborne Imaging Spectroscopic Applications Diffraction gratings are among the most commonly used optical elements in applications ranging from spectroscopy and metrology to lasers. Numerous methods have been adopted for the fabrication of gratings, including microelectromechanical system (MEMS) fabrication which is by now mature and presents opportunities for tunable gratings through inclusion of an actuation mechanism. We have designed, modeled, fabricated and tested a silicon based pitch tunable diffraction grating (PTG) with relatively large resolving power that could be deployed in a spaceborne imaging spectrometer, for example in a picosatellite. We have carried out a detailed analytical modeling of PTG, based on a mass spring system. The device has an effective fill factor of 52% and resolving power of 84. Tuning provided by electrostatic actuation results in a displacement of 2.7 μm at 40 V. Further, we have carried out vibration testing of the fabricated structure to evaluate its feasibility for spaceborne instruments. Introduction In recent years, extensive research has been carried out in the industry for the development and launching of small satellites into space. Small satellites have been found to be very successful for dedicated applications such as remote sensing due to features like low mass (1 kg to 50 kg), small size, low power consumption and low manufacturing costs [1]. One such effort aims to build a dedicated small-satellite-compatible, miniaturized spectroscopic cameras for remote sensing [2]. The integration of MEMS technology is promising for the future development of such systems and for small satellite programs in general [3,4]. One candidate device for a satellite-carried spectrometer is a tunable diffraction grating [5]. More generally, tunable gratings are attractive for sensing, displays [6] and tunable lasers [7][8][9]. Since a small footprint is essential for the tunable grating in the micro-satellite setting, the use of MEMS technology for the fabrication of our grating is particularly appealing. Earlier approaches to tuning gratings have been by means of grating light valve [10], microfluidic actuation [11], piezoelectric actuation [12], electrostatic actuation [13][14][15], thermal actuation [16], and elastomeric actuation [17]. In our work we have chosen electrostatic actuation that can work in wide range of temperatures and pressures with low power, and can provide sufficient actuation range for our considered application. The most common electrostatic actuators are limited by small displacement to a few micrometers. This is because electrostatic comb-drive based actuator encounters the pull-in instability if sufficiently high actuation voltages are attempted [18]. This limits the maximum size of a electrostatic pitch tunable diffraction grating to about a few millimeters. However, they can be used for integrated computational imaging spectroscopic applications by making use of optical diversity techniques [19]. Diverse measurements with different optical transfer functions can be obtained by varying the pitch of the diffraction grating in these optical systems. By this method and with computational algorithms, multiple undersampled images can be used to obtain a super-resolution image [20]. For such imaging spectroscopic applications, large resolving power diffraction gratings with high fill factors are found to be advantageous. 
To achieve this, careful analysis and design of the micromechanical structure is necessary. This paper describes the analysis of MEMS based analog pitch tunable diffraction grating using Silicon-On-Insulator (SOI) technology. The following sections describe the design, modeling, fabrication and testing of our device. Further, the feasibility of using silicon micromachined PTG for space-borne instruments is investigated by subjecting it to mechanical vibrations. Pitch Tunable Diffraction Grating (PTG) Light incident on the diffraction grating is dispersed according to the diffraction relation given by where θ is the incident angle, θ m is the diffracted angle of order m, λ is the incident wavelength, and Λ is the pitch of the diffraction grating. The value of diffraction orders (m) are integers and m = 0 gives the non-dispersive term. The schematic working principle of a PTG is depicted in Figure 1. Tuning can be incorporated by elongating or compressing the pitch, thereby changing the diffraction angle (θ m ). The pitch change of the diffraction grating from Λ to Λ + dΛ leads to a change in diffraction angle dθ m , which is given by where dΛ = x N is the pitch change, x is the displacement achieved in actuation and N is the number of grating lines. It is evident that larger deflection provides better tuning range. However, larger deflection also leads to change in the duty cycle (DC), and a significant drop in diffraction efficiency (η) [21] expressed as The DC in this context is defined as the ratio of beam width to the pitch. Maximum efficiency of about 10% is obtained when duty cycle is 50%. Micromechanical Implementation In our design, we employ in-plane electrostatic comb-drive pair actuation mechanism for its simplicity and relative ease of fabrication [22][23][24]. The schematic of the electrostatic actuation based PTG is depicted in Figure 2. The grating grooves are implemented as a set of beams, which are supported by holding springs. The grating beams together with the holding springs are suspended by the actuating springs that are connected to the anchors. The actuation springs are equipped with a comb-drive-fingers structure that generates electrostatic force with the application of voltage. The electrostatic force generated by the comb-drive pairs can be modeled by a planar parallel plate capacitor [25]. This generated force is balanced by mechanical force, thereby effectively increasing the overlap area. The net electrostatic force generated by the comb-drive pairs consists of a combination of pull-in force by the parallel plates, fringing fields (edge effects) and ground-plane levitation force (between the suspended structure and the substrate). Neglecting the fringing fields and ground-plane levitation force, the capacitive force generated between the comb-drive pairs is given by where F e is the electrostatic force, n is the number of comb fingers, 0 is the permittivity of free space, t is the thickness of the device, g is the separation between the parallel plates, V is the applied voltage, F m is the mechanical force generated, k e f f is the effective spring constant of the structure, and x is the displacement obtained. This force, in turn, pulls the actuator spring in the x direction to balance the electrostatic force. The larger tuning range can be achieved by increasing the actuation voltage. 
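As a rough numerical illustration of the grating relation and the pitch change dΛ = x/N described above, the sketch below propagates an actuator displacement into a first-order diffraction-angle shift. The pitch and displacement values are assumptions chosen to be of the same order as the device described here, not the paper's exact design values.

import numpy as np

wavelength = 488e-9   # incident wavelength (m), as used in the optical test
pitch = 12e-6         # assumed grating pitch (m)
N = 84                # number of grating lines (the quoted resolving power)
x = 2.7e-6            # assumed total actuator displacement (m)
m = 1                 # diffraction order

theta_1 = np.arcsin(m * wavelength / pitch)              # normal incidence, before tuning
d_pitch = x / N                                          # pitch change per period, dΛ = x/N
theta_2 = np.arcsin(m * wavelength / (pitch + d_pitch))  # after tuning
print(f"first-order angular shift ≈ {abs(theta_1 - theta_2) * 1e6:.0f} µrad")

With these assumed numbers the shift comes out at roughly a hundred microradians, i.e., the same order of magnitude as the 218 µrad shift reported later from the camera measurement.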
Optical Design The intensity profile of a diffraction grating when N slits are illuminated by a collimated beam with normal incidence, can be obtained by An important parameter that needs to be considered while designing the diffraction grating is the resolving power (R), defined as number of grating lines (N). The resolving power of the diffraction grating for two wavelengths (λ 1 , λ 2 ) which are closely spaced is given by The full width half maximum (FWHM) of the diffraction profile depends on N. Figure 3a shows the first order diffraction profile when a wavelength of 488 nm is incident on N slits. It is evident from the plot that FWHM reduces i.e., resolution increases with the increase in N. Resolution also depends on the wavelength regime which is depicted in Figure 3b, where the number of grating lines plotted against resolution for three wavelengths. Here the center wavelength and wavelength spacing are taken as (λ 1 + λ 2 )/2 and (λ 1 − λ 2 ) respectively. Further, it is evident that to obtain a spectral resolution less than 10 nm for 632 nm wavelength in 1st order diffraction, there needs to be a minimum 64 grating lines. Micromechanical Design The stiffness constant of the holding flexure can be derived by treating the flexure as a four guided cantilever structure. The expression for computing the stiffness value for a single holding spring with thickness t, is obtained from where k x h is the holding spring stiffness in x direction, E is Young's modulus, I x h is the moment of inertia and w h (L h ) is the width (length) of the holding springs. The spring constant of the actuating spring can be treated as a two guided beam structure connected in parallel. Accordingly, the spring stiffness value of the actuating arm in the lateral direction (k x a ) for one-side is where k x a is the actuating spring stiffness in x direction, I x a is the moment of inertia of the actuating spring and w a (L a ) is the width (length) of the actuating springs. The spring stiffness of the actuating arm in the y direction is computed as where k y a is the actuating spring stiffness in y direction. The springs of the actuating arm are designed to provide maximum tangential displacement (minimum tangential stiffness, k x a ) and minimum normal displacement (maximum normal stiffness, k y a ). The stiffness ratio has to be large enough to avoid lateral pull-in instability which limits the actuation range of the device. It is obtained as For design, the effective spring constant (k e f f ) has to be computed first, followed by the evaluation of the other parameters. Yu et al. reported that the displacement of the comb drive fingers drops significantly when the grating beams are connected with the actuating spring [26]. This discrepancy can be explained by a detailed analytical expression of the effective spring constant. The effective stiffness of the structure can be derived from the mass-spring model which solely depends on actuating spring stiffness (k a ), holding spring stiffness (k h ), and the number of grating periods (N). Mass-Spring Model To compute the unknown effective spring constant (k e f f in Equation (4)), the structure in Figure 2 can be modeled as a mass-spring system portrayed in Figure 4. The equivalent spring constant between two grating beams (k eq ) is the parallel combination of springs connected in series. The stiffness of a spring connected to N beams can be computed as a combination of N springs of individual stiffness k eq (= k h ) connected in series. 
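The closed-form stiffness expressions referenced above did not survive extraction. The sketch below uses the textbook fixed-guided beam stiffness k = E t w³/L³ as a stand-in and combines the springs in the way the mass-spring description suggests (four guided beams per holding flexure, N holding springs in series, actuating springs in parallel); all dimensions and the exact combination are assumptions, not the paper's design values.

# Rough stiffness estimate for a guided-beam MEMS flexure (SI units); illustrative only.
E = 169e9                    # Young's modulus of silicon (assumed orientation-averaged value)
t = 10e-6                    # device-layer thickness
w_h, L_h = 2e-6, 150e-6      # assumed holding-spring beam width and length
w_a, L_a = 2e-6, 300e-6      # assumed actuating-spring beam width and length
N = 84                       # number of grating periods

def k_guided(w, L):
    # Stiffness of a single fixed-guided beam of width w, length L and thickness t.
    return E * t * w**3 / L**3

k_h = 4 * k_guided(w_h, L_h)   # holding flexure treated as four guided beams
k_a = 2 * k_guided(w_a, L_a)   # one-sided actuating arm treated as two guided beams in parallel
k_eff = k_a + k_h / N          # N holding springs in series feeding the actuating spring
print(f"k_h = {k_h:.2f} N/m, k_a = {k_a:.2f} N/m, k_eff ≈ {k_eff:.2f} N/m")

With these placeholder dimensions the effective stiffness lands near 1 N/m, the same order as the designed 0.94 N/m quoted later in the paper.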
The force generated from the comb drives on either side of the device pulls the first N 2 grating beams in one direction and the remaining beams in the other direction to obtain maximum tuning range. The effective spring constant of the structure when electrostatic force is applied in both directions can be computed as After obtaining k e f f , the remaining parameters (L h , L a , w h , w a ) are chosen based on the fabrication constraints. The length of the grating beams is chosen to have an effective diffraction area of 1 mm × 1 mm. The minimum feature size of the structure is selected as 2 µm, which is the critical dimension for photolithography. The design values are summarized in Table 1. In-Plane Modal Analysis The in-plane mechanical resonant frequencies of the PTG are computed using a mass-spring model for stability analysis. The natural frequencies of the mass-spring system can be derived by equations of motion for an undamped linear system using where x is the time dependent vector that describes the motion, M is the mass matrix and K is the stiffness matrix given by where m a is the sum of mass of the actuating arm and the mass of comb fingers attached to it, m g is the sum of the mass of grating beam and the four supporting holding beams, k a (=k x a ) is the actuating spring stiffness along x direction, and k h (=k x h ) is the holding spring stiffness along x direction. To find the harmonic solution, we assume that x takes the from X sin(ωt) which results in − MXω 2 sin ωt + KX sin ωt = 0 (13) On further simplification, the above equation gives KX = ω 2 MX. The generalized eigenvectors are given by substituting vector v and λ in the above equation gives the form Kv = λMv. The eigenvalue solution λ is computed by MATLAB for M and K. The natural frequency of the system for the ith eigenvalue is then The analytical model for structural analysis and resonant frequency was validated by 3D finite element simulations before proceeding to the fabrication. Microfabrication The fabrication of the device was performed using surface machining technology on a 100 mm diameter SOI wafer with 10 µm device layer thickness, 2 µm buried oxide, 450 µm handle layer, <100> crystal orientation, boron doping and a resistivity between 1 Ω cm to 20 Ω cm. The wafer was initially dipped in buffered hydrofluoric acid (HF) solution for 2 min and was rinsed using de-ionised (DI) water. The wafer was spin dried and then placed on a hot plate at 110 • C for 30 min to make the surface dehydrated. Before performing the spin coating, the wafer was treated with HMDS primer for 120 s to make the surface hydrophilic, thus improving the adhesion of photoresist with the wafer. A brief schematic representation of fabrication process is depicted in Figure 5. In the first step, contact pads for external electrical connection are patterned by photolithography using AZ9260 photoresist on the SOI wafer and then sputtered with Titanium (Ti)/ Gold (Au), further followed by lift-off (steps 1-3 in Figure 5). Ti serves as the adhesion layer between Si and Au. To protect the Ti from buffered oxide etchant (BOE) in the later step of fabrication, the wafer is again patterned using AZ9260 photoresist and sputtered with Chromium (Cr)/ Gold (Au) on top of the existing pads, thereby, completely covering Ti layer (steps 4-6 in Figure 5). Further, photolithography for the PTG structure is carried out with a thinner photoresist, AZ7217 followed by Deep Reactive Ion Etching (DRIE) process to obtain nearly vertical walls. 
Finally, the sacrificial oxide layer beneath the structure is removed by BOE solution (steps 7-9 in Figure 5). The main challenge during drying is the stiction between adjacent grating beams and also in the holding springs due to the surface tension between water and the device. To overcome this problem, critical CO 2 drying process is adopted. By this method, the wafer is dried with liquid CO 2 under critical point temperature and pressure by which the physical properties of gaseous and liquid states remain unchanged. The SEM images of the fabricated device are depicted in Figure 6. Device Characterization and Testing We found that device performance was dependent on micro fabrication tolerances. With the conditions of the DRIE system we employed, etching produces a negative sloped profile in the beams where the trench space is large and a positive profile where trenches are closely spaced [27,28]. This discrepancy in etching profile might result in relatively large variation in the expected device performance [29]. The comparison between the designed values from the analytical modeling and the fabricated values are shown in Table 1. The modeling and performance analysis of such etching profiles has been carried out in [30]. Based on the fabricated structure, we modified the analytical modeling of displacement as where θ, φ and ψ are the etching profile angles in comb-drive fingers, actuation springs and holding springs, respectively. The spring stiffness with the modified design is calculated to be 1.56 N m −1 against the initially designed value of 0.94 N m −1 . Further, the negatively tapered etching profile in the comb-drives leads to reduction in electrostatic force from 8.8 µN to 4.6 µN for the driving voltage of 40 V. Static Measurements The fabricated PTG was tested in a probe station by observing the deflection of the grating beams with the applied voltage. Figure 7a,b depict the tuning of PTG without and with the driving voltage along with the difference between two images. The inset in Figure 7c shows a line profile along the white dashed line from the two images of the grating. Stretching of the grating beams is achieved by the moving comb fingers during actuation. The displacement obtained in the comb fingers is plotted with respect to the voltage in Figure 8. The plot clearly shows that displacement follows a quadratic relation with the applied voltage and also the measured values match with the modified design. For safe working of the device, the driving voltage has to be less than the pull-in instability voltage which is given by Since our design has higher stiffness ratio (k y a /k x a in Equation (10)), and the operating voltage was far less than the pull-in voltage, the device did not face any lateral pull-in instability. However, we observed that the device was sticking to the handle layer of the wafer when higher driving voltages were applied. This can be rectified by modifying the fabrication steps by incorporating additional photolithography on the handle layer followed by etching. Optical Characterization The fabricated grating was also tested optically. We illuminated the PTG with a laser beam of wavelength 488 nm, and the diffraction pattern was observed on a dark screen that is shown in Figure 9. To validate the shift in the diffracted angle, the 1st order diffraction spot (m = +1) was detected on a camera. Further, voltage was applied through the contact pads; and the resulting shift in the first order diffraction was measured to be four pixels. 
This corresponds to a change of 218 µrad in diffraction angle. A comparison of our PTG with other research works is shown in Table 2. Mechanical Vibration Test The reliability tests that are generally carried in the satellite missions are radiation tests, vacuum tests, thermal cycling tests, thermal shock tests, mechanical shock tests and mechanical vibration tests. Out of these, the MEMS devices are more susceptible to mechanical shock and vibrations. The level of mechanical shock depends on the type of the mission. In general, for a satellite assignment there are three levels of shock that are experienced: launching, separation from the rocket, and landing. Typical shock values experienced by previously reported missions are 4.5 G for the Russian Soyuz vehicle [32] and 3 G for NASA Space Shuttle [33]. To measure the level of shock, a finite element simulation (ANSYS 17.2) was performed with different shock levels. The simulations were carried out with the same geometry and with the designed values described in Table 1. Anisotropic silicon was chosen as the material for all the simulations. In one simulation, a shock of 5 G was applied along the weakest out-of-plane direction of the PTG. It was found that maximum stress is concentrated on the actuating springs and there was no significant stress in the optical grating region (Figure 10). The result shows that maximum principal stress is 4.15 MPa, far less than the fracture strength of Silicon (7000 MPa). Since our fabricated device is stiffer than the design values, the device would most likely surpass the shock test. We also studied the mechanical strength of our device by subjecting it to mechanical vibrations. The mechanical vibration is an important concern during the satellite launch. The typical values of the frequency of vibration are 1 Hz to 100 Hz [34]. Hence, it is important to make sure that the natural frequency of the device does not fall within this range of frequencies leading to fracture of the device. To verify that, the PTG is not susceptible to vibrations, we carried out vibration test using a mini-shaker by varying its frequency using a function generator. The amplitude of excitation is controlled by increasing the voltage in the function generator. We also observed that the amplitude of excitation reduces with the increase in frequency. The vibration response of the mini shaker was measured by a laser displacement sensor and the corresponding amplitude-frequency plot is shown in Figure 11a. Figure 11. Vibration test analysis of the PTG with (a) frequency response of the vibrations measured using laser displacement sensor; (b) Performance of our device measured experimentally before and after the vibration test. The in-plane vibrations and out-of-plane vibrations were performed by placing the device in different orientations. A different device was used for vibration testing as we did not prefer damaging our best device. The process ran for 1 h and no mechanical failure of the structure was observed. After undergoing the vibration test, the device was evaluated in a probe station. Figure 11b shows the displacement-voltage relation of the device before and after the test. The test results show that the device was capable of sustaining mechanical vibrations. Conclusions We designed and fabricated a silicon-based pitch tunable diffraction grating using micromachining technology. A detailed analytical modeling was carried out for the PTG using a mass-spring model. The device was fabricated with a 10 µm thick SOI wafer. 
The micro-mechanical performance was analyzed for the fabricated structure with the modified design. The tuning produces a displacement of 2.7 µm for an actuation voltage of 40 V. With the application of different voltages, the displacement was found to follow the expected quadratic law, in good agreement with the nominal parameters. The diffraction grating was tested on an optical bench by applying actuation voltage to observe the positional shift in the diffraction orders. Finally, the device was subjected to vibration tests and was found to meet the criteria for spaceborne applications.
4,933.6
2017-10-01T00:00:00.000
[ "Engineering", "Physics" ]
Comparison of intelligent modelling techniques for forecasting solar energy and its application in solar PV based energy system: The measurement of solar energy data is a difficult task, and such data are rarely available even for stations where measurements have been carried out. Further, PV power forecasting is an important element of a smart energy management system. In the present scenario, utilities are developing smart-grid applications, and PV power forecasting is a key tool for this new paradigm. The forecasting of solar energy during clear-sky conditions can be carried out easily using mathematical models; however, forecasting under the influence of hazy, cloudy, and foggy sky conditions is not accurate with these models. Therefore, intelligent modelling techniques, i.e. fuzzy logic, artificial neural network (ANN), and adaptive-neural-fuzzy-inference system (ANFIS) models, are proposed based on sky conditions, namely clear/sunny sky, hazy sky, partially cloudy/foggy sky, and fully cloudy/foggy sky conditions, for forecasting global solar energy. To design the models, 15 years of averaged datasets of meteorological parameters were used for distinct climate zones across India. Further, a comparison of the intelligent models has been carried out with regression models using statistical indicators. The proposed model has been implemented for short-term PV power forecasting under composite climatic conditions. Simulation results confirm that the ANFIS model is superior for PV power forecasting as compared to the other models. Greek symbols: δ, solar declination angle (°); γ, temperature parameter at MPP (dimensionless); ϕ, latitude of the region (°); ωs, mean sunrise hour angle (°); nday, days in a year beginning from 1st January onwards (dimensionless); ηo, optical efficiency (%). Introduction Renewable energy resources and their effective use are closely allied with the sizing, optimisation, and operation of solar energy systems. Solar energy is an environment-friendly, clean, and inexhaustible source of energy that can be effectively utilised for the generation of power. A reasonably accurate knowledge of solar resource availability is of prime importance for solar engineers in the development and design of solar photovoltaic (PV)-based energy systems. Unfortunately, the availability of solar radiation data is scarce because of high instrument cost, limited spatial coverage, and limited length of record. Due to the unavailability of measured data, forecasting global solar energy at the Earth's surface is of prime importance. For this purpose, it is essential to develop models based on more readily available meteorological data for forecasting global solar energy [1][2][3][4][5][6][7][8][9][10]. Solar radiation models range from mathematical models to hybrid intelligent models. In the past, various mathematical models, such as REST, modified Hottel, CPRC2, and REST2, have been developed for estimating global solar energy under cloudless skies [11][12][13][14].
Recent research carried out shows that the mathematical models available in the literature are not accurate, primarily due to the extreme simplicity of parameterisation; however, empirical models based on multiple regression analysis are presented for estimating global solar energy. Angstrom presented the first attempt for estimating global solar energy using sunshine hours under clear sky conditions [15]. In recent research, Bayrakci et al. [16] proposed empirical models for estimating global solar energy for the Turkey region. In this work, 105 literature models are assessed with the aid of statistical validation tests. Also Benson's models are investigated and compared. It is found in this research that the cubic and quadratic models are appropriate for January-June and July-December periods, respectively. Further, several correlations are available for the estimation of global solar energy correlated with one or more meteorological parameters [17][18][19][20]. Most of the previous researches have been carried out for Middle East countries; however, very few models discussed forecasting global solar energy for Indian climatic conditions [21,22]. In recent work by Khalil and Aly [23] empirical models have been evaluated based on statistical error-tests for estimating global solar energy using sunshine hours, relative humidity, and ambient temperature for Saudi Arabia region. It is observed in this research that maximum solar energy can be achieved during summer while this value diminishes during autumn and winter. The regression models developed so far for assessing global solar energy were available for clear sky conditions; however, such models are unsuitable for estimating global solar energy during cloudy sky conditions. Presence of moisture, dust, clouds, and aerosols in the lower atmospheric region causes uncertainty in the atmosphere. The reduction in extra-terrestrial solar radiation occurs due to the external atmosphere which varies from 30% in a clear sky condition to 100% in a cloudy/foggy sky condition. For Indian climatic conditions where about 50-100 days are cloudy, accurately estimating global solar energy based on multiple regression analysis is a tedious task [24][25][26][27]. Therefore, intelligent modelling techniques have been introduced for forecasting global solar energy. A detailed literature survey on the issue reveals artificial intelligence techniques focusing on the theoretical aspects, principles, and design methodology. Further, the hybrid intelligent system has been introduced like the adaptive-neuralfuzzy inference system (ANFIS) which integrate the features of artificial neural network (ANN) and fuzzy logic approach [28]. The fuzzy logic models are introduced wherein probabilistic approaches do not give a realistic description of the phenomenon. Sen [29] proposes a fuzzy logic model using sunshine duration for estimating solar energy. Most of the previous researches investigated the fuzzy model for forecasting solar energy and its application in the field of renewable energy system [30][31][32][33][34][35]. In recent research, Suganthi et al. presented an application of fuzzy logic based models in renewable energy systems namely solar, wind, bio-energy, micro-grid, and hybrid systems. In this research, it is found that the fuzzy-based models are extensively used in recent years for site assessment, for installing of PV systems, power point tracking in solar PV systems, and its optimisation [36]. Recently, Perveen et al. 
proposed a sky-based model employing fuzzy logic modelling for forecasting global solar energy using meteorological parameters namely dew point, sunshine duration, ambient temperature, wind speed, and relative humidity. It is observed in this research that with the inclusion of dew point as a meteorological parameter the accuracy of the model significantly increases [37]. For complex systems with large datasets, maintaining accuracy for such data sets using fuzzy logic modelling would be a tedious task. Therefore, ANN-based models are introduced, employing artificial intelligent techniques which are data-driven and can subsequently perform the structure simulation. The ANN model is ideal for modelling non-linear, dynamic and complex system [38][39][40][41][42][43][44][45][46][47][48][49]. Chang et al. proposed a radial basis function neural network (RBFNN) based model for short-term power forecasting wherein 24 h of input data at 10-min resolution have been considered for training the proposed neural network. In this research, obtained results have been compared with other ANN-based methods, and the result shows that the RBFNN model is more accurate [50]. In recent research, Khosravi et al. proposed a comparison of the multilayer feed-forward neural network (MLFFNN), and support vector regression with a radial basis function (SVR-RBF) for forecasting wind speed. In this research, temperature, pressure, relative humidity, and local time are considered as input parameters, and statistical indicators show that the SVR-RBF model outperforms MLFFNN model [51]. Detailed literature review reveals that for estimation of complex functions, an accurate analysis of some neurons and hidden layers with the aid of ANN is a difficult task as they are large in number. Also, large training time is involved in such a neural network, which subsequently slows down the response of the system. The existing neural network model does the summation operation; however, it does not operate based on the product of weighted inputs. Therefore, hybrid intelligent systems are introduced for forecasting solar energy which is a fusion of ANN and fuzzy logic approach for forecasting global solar energy. Many researchers have investigated the integrated features of ANFIS in forecasting global solar energy and its application in wind power forecasting [52][53][54][55][56][57][58][59]. Jang [60] has presented an architecture underlying the principle of ANFIS which is implemented within the framework of adaptive networks. In this work, the proposed ANFIS can construct an input-output mapping based on stipulated data pairs. In recent research, Liu et al. [61] proposed a hybrid methodology for shortterm power forecasting using ANFIS. In this research, individual forecasting models are presented such as back-propagation neural network, least squares support vector machines, and RBFNN. The results of the comparison reveal that the proposed hybrid methodology using ANFIS presents a significant improvement in accuracy. Most of the intelligent models discussed in the literature for forecasting global solar energy were available for clear sky conditions; however, very few researchers have explained about modelling based on variation in sky-conditions defined as clear sky, hazy, foggy, and cloudy sky conditions and for widely changing climatic conditions based on distinct climate zones. 
Further, most of the previous studies employing hybrid intelligent systems forecast global solar energy using meteorological parameters such as the duration of sunshine hours and wind speed; however, little literature elucidates the use of dew-point in addition to the other available meteorological parameters, even though its inclusion significantly increases the accuracy of the models. Therefore, in this work, an attempt has been made to establish intelligent models such as the fuzzy logic approach, ANN, and ANFIS models for forecasting global solar energy based on sky-conditions defined as clear (type-a) sky, hazy (type-b) sky, partially cloudy/foggy (type-c) sky, and fully cloudy/foggy (type-d) sky conditions, and for five weather stations across India covering widely varying climatic conditions, i.e. composite, warm and humid, hot and dry, cold and cloudy, and moderate climates, respectively. Simulations have been carried out for global solar energy forecasting based on the meteorological parameters dew-point, ambient temperature, sunshine hours, relative humidity, and wind speed. Further, the proposed intelligent models have been compared with empirical regression models with the aid of statistical validation tests. The obtained results have been further applied to short-term PV power forecasting of a solar PV-based energy system employing 250 W multi-crystalline solar PV modules operated at maximum power point (MPP) conditions under composite climatic conditions. This work is arranged in the following manner. Section 2 presents the methodology. Section 3 discusses the implementation of intelligent modelling techniques for solar PV-based energy systems. Section 4 presents statistical validation tests. Section 5 presents results and discussions. The conclusion is drawn in Section 6, and Section 7 presents the references. In this work, the measured meteorological data include the duration of sunshine hours, dew-point, relative humidity, ambient temperature, wind speed, and global solar energy. The 15-year averaged measured data have been obtained from IMD (Indian Meteorological Department), NIWE (National Institute of Wind Energy), and NISE (National Institute of Solar Energy) [62,63]. The data have been normalised to the 0.1-0.9 range so as to avoid convergence issues, for five weather stations which represent distinct climatic conditions. India possesses wide-ranging weather conditions, and the criterion for assigning a location to a climate zone depends on the sky conditions prevailing for six months or more. The factors affecting climate include location, latitudinal extent, monsoon winds and so on. In 1988, Bansal and Minke [64] evaluated the mean averaged data from 233 weather stations and identified five distinct climate zones, which are presented in Table 1. Sky-based classification In this work, the classification based on sky-conditions can be described as follows [65]: 2.2.1 Clear/sunny (type-a) sky: if the sunshine duration is equal to or more than 9 h and the diffuse radiation is equal to or less than 25% of the global solar energy. Hazy sky (type-b): if the sunshine duration lies between 7 and 9 h and the diffuse radiation is >25% and <50% of the global solar energy. Partially cloudy/foggy sky (type-c): if the sunshine duration lies between 5 and 7 h and the diffuse radiation is >50% and <75% of the global solar energy.
Fully cloudy/foggy sky (type-d): if the sunshine duration is lower than 5 h and the diffuse radiation is >75% of the global solar energy. Regression models for global solar energy estimation Angstrom proposed the first model correlating global solar energy with sunshine duration, which was later improved by Page and Prescott [66,67]. In this approach, the value of the global solar energy H_g can be obtained by multiplying the forecasted clearness index (H_g/H_o) by H_o, the extra-terrestrial solar radiation, which can be calculated using standard geometric relations [68,69]. When the correlation involves several meteorological parameters, it takes the form of a multiple linear regression, expressed by (1) below:

p = a + b q_1 + c q_2 + d q_3 + e q_4 + f q_5 + … + n q_n,   (1)

where p is the estimated quantity, q_1, …, q_n are the meteorological parameters, and a, b, …, n are the regression coefficients. In the geometric relations for H_o and S_o, G_sc = 1367 W/m^2 is the solar constant, δ is the declination angle, ϕ represents the location latitude, ω_s is the mean sunrise hour angle, and n_day is the day number counted from 1st January; S_o is the maximum possible sunshine duration and S is the daily sunshine duration. Forecasting global solar energy using the fuzzy logic approach Fuzzy logic models are employed in forecasting global solar energy. In this work, the variables are described by the fuzzy terms VL, LM, MH, HH, and VH, and five such membership functions defined over the 0.1-0.9 range are implemented in a fuzzy inference system in MATLAB for forecasting global solar energy. The fuzzy membership function for ambient temperature is presented in Fig. 1, and the models have been developed using the MATLAB fuzzy logic toolbox. Assessment of global solar energy based on ANN The ANN architecture employing a feed-forward neural network is designed in such a way that the output variables are calculated from the variables at the input side. The ANN architecture presented in this research comprises three layers: the first layer has five input parameters; the hidden layer uses the tan-sigmoid transfer function 'tansig', described by tansig(x) = 2/(1 + e^(−2x)) − 1, where x is the input; and the output layer uses the linear transfer function 'purelin', which is suitable for the given fitting problem, as shown in Fig. 2. The neural network toolbox in MATLAB has been used for implementing the neural network algorithm, and 'TRAINLM' (Levenberg-Marquardt back-propagation) is used for training the network. The output of the network can be modelled by

y_i = f( Σ_j w_ij x_ij + θ_i ),

where x_ij is the incoming signal from the j-th neuron (at the input layer), θ_i is the bias of neuron i, w_ij are the connection weights directed from neuron j to neuron i (at the hidden layer), and f is the transfer function. Hybrid intelligent system for forecasting global solar energy The ANFIS is a graphical representation of the fuzzy-Sugeno system which lies within the framework of adaptive networks. The ANFIS architecture makes use of a hybrid learning rule which combines gradient-descent back-propagation and the least-squares algorithm. In this architecture, the input-output mapping does not slow the response of the network, i.e. the complexity is reduced. One of the advantages of the hybrid system is its faster convergence rate. In this work, MATLAB has been used for data training and testing using the function 'anfisedit' in the command window. ANFIS architecture: It is a multilayer feed-forward network which comprises nodes with directed links, with a function similar to the Takagi-Sugeno FIS model, as shown in Fig. 3 [70].
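To make the three-layer feed-forward architecture described above concrete, the following is a minimal illustrative sketch, not the authors' implementation: the synthetic data, the ten hidden neurons, and the use of scikit-learn with the LBFGS optimiser (in place of the MATLAB toolbox and 'trainlm') are assumptions made only for illustration.

# Minimal sketch of a three-layer feed-forward ANN for global solar energy
# forecasting: five meteorological inputs, tan-sigmoid ("tanh") hidden layer,
# linear output. Hidden-layer size and the LBFGS optimiser are illustrative
# choices, not taken from the paper.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: [dew point, ambient temperature, sunshine hours,
# relative humidity, wind speed] -> global solar energy (arbitrary units).
X = rng.uniform(size=(500, 5))
y = 2.0 + 3.0 * X[:, 2] - 1.0 * X[:, 3] + 0.2 * rng.standard_normal(500)

# Normalise inputs and target to the 0.1-0.9 range, as done in the paper.
x_scaler = MinMaxScaler(feature_range=(0.1, 0.9))
y_scaler = MinMaxScaler(feature_range=(0.1, 0.9))
Xn = x_scaler.fit_transform(X)
yn = y_scaler.fit_transform(y.reshape(-1, 1)).ravel()

X_tr, X_te, y_tr, y_te = train_test_split(Xn, yn, test_size=0.2, random_state=0)

ann = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                   solver="lbfgs", max_iter=2000, random_state=0)
ann.fit(X_tr, y_tr)

# Forecast and map back to physical units.
y_pred = y_scaler.inverse_transform(ann.predict(X_te).reshape(-1, 1)).ravel()
print("first five forecasts:", np.round(y_pred[:5], 3))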
Layers of ANFIS: It comprises five layers as follows: 2.6.3 Layer 1: In layer 1, each node acts as an adaptive node which gives the degree of membership, O_1,i = μ_A_i(x) and O_1,j = μ_B_j(y), where O_1,i and O_1,j represent the node output functions and μ is the membership function degree for the fuzzy sets A_i and B_j, respectively. 2.6.4 Layer 2: In this layer, the node is fixed (non-adaptive) and labelled as '∏'; its output is the product of the incoming signals, O_2,i = w_i = μ_A_i(x) μ_B_i(y), i = 1, 2. 2.6.5 Layer 3: In layer 3, the node is fixed (non-adaptive) and labelled as 'N', which indicates normalisation of the firing strength, O_3,i = w̄_i = w_i/(w_1 + w_2), i = 1, 2. 2.6.6 Layer 4: In layer 4, every node is an adaptive node whose output is O_4,i = w̄_i f_i = w̄_i (p_i x + q_i y + r_i), where w̄_i is the normalised firing strength from layer 3 and {p_i, q_i, r_i} is the consequent parameter set. 2.6.7 Layer 5: It is the output layer, which comprises a single fixed node labelled as 'Σ' and sums the incoming signals, O_5,1 = Σ_i w̄_i f_i = (Σ_i w_i f_i)/(Σ_i w_i). Implementation of intelligent modelling techniques for solar PV-based energy system The large-scale penetration of solar PV technology in the smart energy management system has become a challenging task. The output power variation in a solar PV-based energy system can lead to unstable operation of the power system. The fluctuations in the output lead to issues in its use and subsequently reduce the usable PV generation capacity. The stability of the utility grid and the power quality may be damaged because of the imbalance between demand and supply. In this research, 250 W multi-crystalline solar PV modules have been employed for short-term PV power forecasting, operated at MPP conditions under composite climatic conditions. Performance specification of 250 W multi-crystalline solar PV modules (module efficiency and related ratings) The generation of power from a solar PV-based energy system can be explained by equations of the following standard form [71]:

P_PV = N_PVS N_PVP P_PV,STC (G_T/1000) [1 + γ (T_j − 25)] and T_j = T_amb + G_T (N_OCT − 20)/800,

where P_PV,STC is the rated power output of a single PV array at MPP conditions, N_PVS is the number of photovoltaic arrays in series, G_T is the solar irradiance in W/m^2 (1000 W/m^2 at STC), P_PV is the power output of the PV array at MPP, γ is a temperature coefficient at MPP, N_PVP is the number of PV arrays in parallel, T_j is the junction temperature of the solar panel in °C, T_amb is the ambient temperature in °C, and N_OCT is a constant (the nominal operating cell temperature). Statistical validation tests Various statistical validation tests have been performed for evaluating the performance of the models. Mean percentage error (MPE) It is defined in terms of the deviation between the measured and forecasted values:

MPE = (1/x) Σ_{i=1..x} [(m_i − f_i)/m_i] × 100.

Mean bias error (MBE) It characterises the bias between the measured and forecasted data:

MBE = (1/x) Σ_{i=1..x} (f_i − m_i).

Root mean square error (RMSE) It is expressed by the equation:

RMSE = [ (1/x) Σ_{i=1..x} (f_i − m_i)^2 ]^(1/2).

Coefficient of determination (R^2) It can be defined as

R^2 = [ Σ_{i=1..x} (m_i − m_a)(f_i − f_a) ]^2 / [ Σ_{i=1..x} (m_i − m_a)^2 Σ_{i=1..x} (f_i − f_a)^2 ],

where x is the number of observations, m_i and f_i are the i-th measured and forecasted data, and m_a and f_a are the averaged measured and forecasted data, respectively. Estimating global solar energy using regression modelling In this section, empirical models have been established using multiple regression analysis, correlating global solar energy with meteorological parameters such as sunshine duration, atmospheric pressure, wind speed, ambient temperature, relative humidity, rainfall, and cloudiness index. Statistical validation tests are used for evaluating the model performance and are illustrated in Table 2.
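As a quick reference, the four validation metrics defined above can be computed as in the sketch below; this is illustrative code using the symbol conventions just given (m = measured, f = forecasted), and the sign conventions for MPE and MBE follow the reconstructed definitions rather than equations quoted from the paper.

# Statistical validation metrics used above (MPE, MBE, RMSE, R^2).
# Sign conventions for MPE and MBE are assumptions, noted here, since the
# paper's original equations were not recoverable from the text.
import numpy as np

def mpe(m, f):
    """Mean percentage error (%)."""
    m, f = np.asarray(m, float), np.asarray(f, float)
    return np.mean((m - f) / m) * 100.0

def mbe(m, f):
    """Mean bias error (same units as the data)."""
    m, f = np.asarray(m, float), np.asarray(f, float)
    return np.mean(f - m)

def rmse(m, f):
    """Root mean square error."""
    m, f = np.asarray(m, float), np.asarray(f, float)
    return np.sqrt(np.mean((f - m) ** 2))

def r2(m, f):
    """Coefficient of determination (squared-correlation form)."""
    m, f = np.asarray(m, float), np.asarray(f, float)
    mm, fm = m - m.mean(), f - f.mean()
    return (np.sum(mm * fm) ** 2) / (np.sum(mm ** 2) * np.sum(fm ** 2))

# Example with placeholder measured/forecasted daily solar energy values:
measured = [5.1, 5.8, 6.2, 4.9, 5.5]
forecast = [5.0, 5.9, 6.0, 5.1, 5.4]
print(mpe(measured, forecast), mbe(measured, forecast),
      rmse(measured, forecast), r2(measured, forecast))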
Principal component analysis has been applied to the developed models to obtain the correlations with the highest correlation coefficients, and it is observed that the seven-parameter correlations provide the best fit, which makes them useful for global solar energy estimation in the distinct climatic zones across India; they are presented in Table 3. From Table 3, it is revealed that for the hot and dry climate the best-fit model is achieved by (55) with MPE = 0.36% and R^2 = 0.64; for the warm and humid climate (see Fig. 4), the best-fit model is achieved by (56) with MPE = 1.93% and R^2 = 0.87; for the composite climate, (57) gives the best fit with MPE = 0.42% and R^2 = 0.71; for the moderate climate, (58) gives the best fit with MPE = 0.25% and R^2 = 0.81; and for the cold and cloudy climate, (59) gives the best fit with MPE = 1.5% and R^2 = 0.71, respectively. Intelligent models for forecasting global solar energy In this section, a comparison between the fuzzy logic, ANN, and ANFIS-based models has been carried out for global solar energy forecasting under different sky-conditions, using the meteorological parameters dew-point, relative humidity, sunshine hours, wind speed, and ambient temperature, for distinct climatic conditions. The performance has been evaluated using statistical validation tests and is presented in Table 4, from which the following inferences can be drawn. Clear/sunny (type-a) sky: Jodhpur climatic conditions are favourable for this sky-condition, as the MPE obtained by simulating the measured and forecasted data using the intelligent models is observed to be lower for this station than for the other stations. The averaged MPE is observed to be 0.31% by employing the fuzzy logic approach, 0.05% by using ANN, and with the ANFIS-based model the error reduced to 0.00002681%. The reason is that the Jodhpur climate is hot and dry, wherein sunny conditions exist throughout the year. Further, the graphical representation between the measured and forecasted data has been shown in Fig. 5a. Hazy (type-b) sky: Delhi climatic conditions are favourable for this sky-condition, as the MPE obtained by simulating the measured and forecasted data using the intelligent models is observed to be lower for this station than for the other stations. The averaged MPE is observed to be 0.42% by employing fuzzy logic, 0.14% by using ANN, and the error reduced to 0.00001653% with the ANFIS model. The reason is the higher humidity levels, which vary from 25-35% during dry periods to 60-90% during wet periods. Further, the graphical representation between the measured and forecasted data has been shown in Fig. 6b. Partially cloudy/foggy (type-c) sky: Chennai climatic conditions are favourable for this sky-condition, as the MPE obtained by simulating the measured and forecasted data using the intelligent models is observed to be lower for this station than for the other stations. The averaged MPE is observed to be 0.30% by employing fuzzy logic, 0.20% by using ANN, and with the ANFIS-based model the error reduced to 0.00002036%. This is apparently due to high diffused radiation owing to cloudy sky conditions. During summer, the temperature can reach as high as 30-35°C, whereas during winter, the temperature lies between 25 and 30°C. Further, the graphical analysis between the measured and forecasted data has been shown in Fig. 4c.
Fully cloudy/foggy (type-d) sky: Shillong climatic conditions are favourable for this sky-condition, as the MPE obtained by simulating the forecasted and measured data using the intelligent models is observed to be lower for this station than for the other stations. The averaged MPE is observed to be 1.30% by employing fuzzy logic, 0.46% by using ANN, and the error reduced to 0.00001428% with the ANFIS model (see Fig. 7). This is because during winter the global solar radiation is low, with a high amount of diffused radiation, which makes winter extremely cold. Further, the graphical representation between the measured and forecasted data has been shown in Fig. 8d. Comparison of intelligent models with regression models Further, a comparative analysis of the intelligent models, namely the fuzzy logic, ANN, and ANFIS-based models, has been carried out against the regression models, and the performance, evaluated using statistical validation tests for the composite climate of India, is presented in Table 5. It is evident from Table 5 that the hybrid intelligent systems perform best in comparison to the other models for forecasting global solar energy. The averaged MPE obtained by using the regression models is 1.67% for composite climatic conditions. However, far better results are obtained by using the intelligent models for global solar energy forecasting. With the fuzzy logic methodology, the averaged MPE reduced to 0.41%, which is considerably lower than that of the regression models, and the MPE further reduced to 0.12% by using the ANN model. It is, therefore, revealed from the results that by employing hybrid intelligent systems the obtained error is the lowest. This is because the ANFIS-based model presents a specified mathematical structure, which makes it a good adaptive approximator. Further, for a network of similar complexity, the ANFIS model converges faster. Implementation of intelligent models in solar PV-based energy systems Solar PV power forecasting is an important element of the smart grid approach, which helps in the optimisation of the smart energy management system so that renewable power generation can be integrated efficiently. Since the power generated from the solar energy resource is fluctuating and non-linear in nature, it is very difficult to estimate the power output with mathematical models; therefore, intelligent approaches based on the fuzzy logic, ANN, and ANFIS models have been applied for power forecasting of the solar PV-based energy system employing multi-crystalline 250 W solar PV modules operating at MPP tracking conditions, and the results are presented in Table 6. From Table 6, it is evident that by employing the hybrid intelligent approach, i.e. the ANFIS model, the averaged MPE obtained is 0.0001%, which is far less than for the other models. By employing fuzzy logic, the MPE obtained is 0.01%, while with the ANN model an MPE of 0.0021% is achieved. Hence, the hybrid modelling approach is far more accurate and precise than the other models. Further, it is seen from the results that for all months of the year, the MPE is lowest in the case of the ANFIS model. For the winter season (January), the averaged MPE by employing the fuzzy logic approach is 0.09%, using ANN the error reduced to 0.004%, and with the ANFIS model the error further reduced to 0.0003%. Similarly, for the summer season (June) the averaged MPE by employing fuzzy logic is 0.07%, by using ANN the error reduced to 0.0033%, and with the ANFIS model the error further reduced to 0.0001%.
It can also be observed that the error is large for the rainy season (August) because of the large uncertainties associated with the data. The averaged MPE by employing fuzzy logic is 0.28%, using ANN the error reduced to 0.0304%, and with the ANFIS model the error further reduced to 0.0003%. Given the above, it is found that the ANFIS model performs better than the other models in terms of convergence rate and learning and training ability. The ANFIS methodology makes use of the training patterns more efficiently than the other methods and hence reduces the computational time complexity. It has certain advantages such as ease of design, robustness, and adaptability to the nonlinearity associated with the data. The ANFIS methodology integrates the features of both fuzzy logic and ANN, which increases the system accuracy and makes the system response much faster. Further, parallel computation is allowed in the ANFIS structure, which presents a well-structured representation with a hybrid platform for solving complex problems and is a feasible alternative to conventional model-based control schemes. This hybrid approach deals with the issues associated with variations and uncertainty in the power plant parameters and structure, thereby improving the system robustness. Further, it allows better integration with other control design methods. Intelligent model for short-term PV power forecasting The generation of power from a renewable energy resource is gaining attention because of advancements in the field of solar PV-based energy systems. In the present scenario, power bidding is done on a 10-min timescale by many distribution companies. Further, the uncertainty and the variability associated with the solar PV power plant lead to inappropriate operation. Hence, short-term power forecasting is mandated for the successful and efficient integration of solar power generating plants into the utility grid. In this research, intelligent modelling techniques, namely fuzzy logic, ANN, and the hybrid approach, are presented for very short-term power forecasting of a solar PV-based energy system under composite climatic conditions. The inputs include measurements of solar irradiance, cell temperature, and PV generation for the day at a 10-min timescale, which are used for short-term forecasting of the PV power output, which varies according to the weather conditions. Various factors affect the power generation, such as climatic variations, solar insolation, the temperature of the solar panel, ambient temperature, and the topographical position. It is therefore a tedious task to describe the output with a single model; consequently, the output is modelled based on different sky-conditions, namely sunny, hazy, partially cloudy/foggy, and fully cloudy/foggy sky conditions, using different meteorological parameters, as these factors make a significant impact on the power output of the solar PV systems; the results are presented in Tables 7-10, respectively. Clear/sunny sky: From Table 7, it can be seen that the performance of the sunny sky model is better than that of the other sky-condition models in power forecasting of the solar PV-based energy system. The averaged measured power during a sunny sky day is 98 W. The MPE obtained is 0.077% by employing the fuzzy logic methodology, the error reduces to 0.0079% by using ANN, and it further reduces to 0.000054% with the ANFIS methodology. The day variation between the measured and forecasted power over 24 h obtained with the intelligent models is presented in Fig. 9a.
Hazy sky: It is observed that the MPE obtained by employing the fuzzy logic methodology for this sky condition is 0.049% and the error reduced to 0.022% by using ANN; with the ANFIS model the MPE is the lowest, further reduced to 0.004%, as shown in Table 8. The averaged measured power during a hazy sky day is 82 W. The day variation between the measured and forecasted power over 24 h obtained with the intelligent models is presented in Fig. 9b. Partially cloudy/foggy sky: For this sky condition, the averaged measured power is 76 W. By using fuzzy logic, the MPE is 1.20% and this error reduced to 0.20% by using ANN; with the ANFIS model the MPE is the lowest, further reduced to 0.003%, as shown in Table 9. The day variation between the measured and forecasted power over 24 h obtained with the intelligent models is presented in Fig. 9c. Fully cloudy/foggy sky: From Table 10, it is evident that the PV power output is lower during the fully foggy/cloudy sky condition, with an averaged measured power of only 27 W. The MPE is 0.21% by employing the fuzzy logic methodology, the error reduces to 0.091% by using ANN, and with ANFIS the error obtained is 0.0014%. The day variation between the measured and forecasted power over 24 h obtained with the intelligent models is presented in Fig. 9d. Such forecasts would help manage energy supply and demand in a smart grid environment. This research will help stakeholders such as power engineers, technocrats, utilities, designers, service providers, and operation engineers in developing the smart energy management system, wherein PV-based power forecasting is one of the key components of this new paradigm. This research would be practically useful in providing appropriate control, optimisation, power smoothening, real-time dispatch, assessment of the requirement of additional generating stations, and the selection of an appropriate energy storage system, which may mitigate the issues of power fluctuations in solar PV-based energy systems. From Fig. 9, it is evident that the generation of power in a solar PV-based energy system varies significantly with variation in sky-conditions. This observation reveals that the forecasting model should be based on weather classifications. However, for composite climatic conditions, the sunny and hazy models outperform the other models. In this case, the PV system is installed at the National Institute of Solar Energy, Delhi, where sunny days prevail during most of the year. Conclusion In this research, different models based on intelligent approaches such as fuzzy logic, ANN, and ANFIS have been developed and presented for solar energy forecasting using meteorological parameters. The results obtained from the different models are compared with the regression models using statistical indicators. Based on the comparative analysis, it is revealed that the ANFIS-based model provides more accurate results than the other intelligent models. Short-term PV power forecasting may be implemented for many applications, such as providing appropriate control for PV system integration, optimisation, power smoothening, real-time power dispatch, assessing the requirement of additional generating stations, and the selection of appropriate energy storage.
Quantum no-scale regimes in string theory We show that in generic no-scale models in string theory, the flat, expanding cosmological evolutions found at the quantum level can be attracted to a"quantum no-scale regime", where the no-scale structure is restored asymptotically. In this regime, the quantum effective potential is dominated by the classical kinetic energies of the no-scale modulus and dilaton. We find that this natural preservation of the classical no-scale structure at the quantum level occurs when the initial conditions of the evolutions sit in a subcritical region of their space. On the contrary, supercritical initial conditions yield solutions that have no analogue at the classical level. The associated intrinsically quantum universes are sentenced to collapse and their histories last finite cosmic times. Our analysis is done at 1-loop, in perturbative heterotic string compactified on tori, with spontaneous supersymmetry breaking implemented by a stringy version of the Scherk-Schwarz mechanism. Introduction Postulating the classical Lagrangian of the Standard Model in rigid Minkowski spacetime proved to be a very efficient starting point for computing quantum corrections. However, beyond this Standard Model, theories sometimes admit a gravitational origin. In particular, considering N = 1 supergravity models in dimension d = 4, where local supersymmetry is spontaneously broken in flat space, and restricting the Lagrangians to the relevant operators gives renormalizable classical field theories in rigid Minkowski spacetime, where supersymmetry is softly broken [1]. In that case, consistency of the picture should imply the possibility to commute the order of the above operations, namely first computing quantum corrections and then decoupling gravity. To explore this alternative point of view in arbitrary dimension d, the classical supergravity theories may be viewed in the framework of no-scale models [2] in string theory, for loop corrections to be unambiguously evaluated. By definition, the no-scale models are classical theories where local (extended) supersymmetry is (totally) spontaneously broken in flat space. In this context, the supersymmetry breaking scale is a scalar field which is a flat direction of a positive semi-definite classical potential. Therefore, if its vacuum expectation value is undetermined classically, a common wisdom is that this no-scale structure breaks down at the quantum level (see e.g. [3]). One way to implement a spontaneous breaking of supersymmetry in string theory is via coordinate-dependent compactification [4,5], a stringy version of the Scherk-Schwarz mechanism [6]. An effective potential is generated at 1-loop and is generically of order O(M d ), where M is the supersymmetry breaking scale measured in Einstein frame. Assuming a mechanism responsible for the stabilization of M (above 10 TeV for d = 4) to exist, one then expects the quantum vacuum to be anti-de Sitter-or de Sitter-like, with no way to obtain a theory in rigid Minkowski space, once gravity is decoupled. Exceptions may however exist. In type II [7] and open [8] string theory, the 1-loop effective potential V 1-loop of some models vanishes at specific points in moduli space. 
In heterotic string, the closest analogous models [9] are characterized by equal numbers of massless bosons and fermions (observable and hidden sectors included), so that V 1-loop is exponentially suppressed when M (σ) , the supersymmetry breaking scale measured in σ-model frame, is below the string scale M s [10][11][12]. These theories, sometimes referred as super no-scale models, can even be dual to the former, where V 1-loop vanishes [13]. However, all these particular type II, orientifold or heterotic models are expected to admit non-vanishing or non-exponentially suppressed higher order loop corrections [14], in which case they may lead to conclusions similar to those stated in the generic case. Moreover, the particular points in moduli space where V 1-loop vanishes or is exponentially small are in most cases saddle points. As a consequence, moduli fields are destabilized and, even if their condensations induce a small mass scale M H < M such as the electroweak scale, the order of magnitude of V 1-loop ends up being of order O(M d−2 M 2 H ) [11], which is still far too large to be compatible with flat space. In the present work, we will not assume the existence of a mechanism of stabilization of M that would lead (artificially) to an extremely small cosmological constant. Instead, we take seriously the time-dependance of M induced by the effective potential, in a cosmological setting. We show the existence of an attractor mechanism towards flat Friedmann-Lemaître-Robertson-Walker (FLRW) expanding universes, where the effective potential is dominated by the kinetic energies of M and φ, the dilaton field. Asymptotically, the cosmological evolution converges to that found in the classical limit, where the no-scale structure is exact. For this reason, we refer to this mechanism as an attraction to a "quantum no-scale regime". In these circumstances, flatness of the universe is not destabilized by quantum corrections, which justifies that rigid Minkowski spacetime can be postulated in quantum field theory. We stress that even if the effective potential, which scales like M d , is negligible from a cosmological point of view, the net value of the supersymmetry breaking scale M remains a fundamental ingredient of the theory in rigid spacetime, since it determines the order of magnitude of all soft breaking terms. Note however that the analysis of the constraints raised by astrophysical observations about the constancy of couplings and masses, or the validity of the equivalence principle, stand beyond the scope of the present work [15]. The above statements are shown in heterotic string compactified on a torus, with the total spontaneous breaking of supersymmetry implemented by a stringy Scherk-Schwarz mechanism [4,5]. Actually, we analyze a simplified model presented in Sect. 2, where only a small number of degrees of freedom are taken into account. To be specific, we consider in a perturbative regime the 1-loop effective action restricted to the scale factor a, as well as M and φ. In terms of canonical fields, the scalars can be described by a "no-scale modulus" Φ with exponential potential, and a free scalar φ ⊥ . Notice that numerous works have already analyzed such systems, namely scalar fields with exponential potentials [16,17], sometimes as autonomous dynamical systems or by finding explicit solutions. Motivated by different goals, these studies often stress the onset of transient periods of accelerated cosmology. 
Such models have been realized by classical compactifications involving compact hyperbolic spaces, S-branes or non-trivial fluxes (field strengths) [18]. In the present paper, we find that the space of initial conditions of the equations of motion can be divided into two parts, and we present explicitly the resulting cosmologies in Sects. 3-6. 1 In the first region, which is referred as supercritical and exists only if V 1-loop is negative, no classical limit exists. Thus, the universe is intrinsically quantum and its existence is found to be limited to a finite lapse of cosmic time. On the contrary, when the initial conditions sit in the so-called subcritical second region, the perturbative solutions can be seen as deformations of classical counterparts. It is in this case that attractions to quantum no-scale regimes take place. If, as mentioned before, the latter can correspond to flat expanding evolutions, we also find that other quantum no-scale regimes exist, which describe a Big Bang (or Big Crunch by time reversal). Moreover, when V 1-loop is positive, a short period of accelerated expansion can occur during the intermediate era that connects no-scale regimes of the two previous natures [18]. Whereas when V 1-loop is negative, M decreases as the universe expands and is thus forever climbing its potential [17]. Notice that this behaviour contradicts the naive expectation that M should run away to infinity and lead to large, negative and a priori non-negligible potential energy. We also point out that the above perturbative properties are expected to be robust when higher order loop corrections are taken into account. 2 Finally, we summarize our results and outlooks in Sect. 7. The setup In this section, we consider a simplified heterotic string no-scale model in dimension d ≥ 3, in the sense that the dynamics of only a restricted number of light degrees of freedom is taken into account. Our goal is to derive the 1-loop low energy effective action and associated field 1 A particular case in dimension 4 is already presented in Ref. [19], and describes the cosmological evolution of a universe at finite temperature T , when T M . 2 This supposes the implementation of a regularization scheme to get rid off infrared divergences arising at genus g, when massless propagators and non-vanishing tadpoles at genus g − 1 exist. This may be done by introducing a small mass gap by curving spacetime [20]. equations of motion to be solved in the following sections. At the classical level, the background is compactified on n ≥ 1 circles of radii R i , times a torus, (2.1) The volume moduli of T 10−d−n are supposed to be small enough for the lightest Kaluza-Klein (KK) mass scale cM s associated with this torus to be very large, c 1. On the contrary, the n circles are supposed to be large, R i 1, and are used to implement a coordinate-dependent compactification responsible for the total spontaneous breaking of supersymmetry [4,5]. In σ-model frame, we define the resulting low supersymmetry breaking scale to be At the quantum level, assuming a perturbative regime, an effective potential is generated at 1-loop [10,11,21], where Z is the genus-1 partition function and F is the fundamental domain of SL(2, Z), parameterized by τ ≡ τ 1 + iτ 2 . In the second expression, n F , n B count the numbers of massless fermionic and bosonic degrees of freedom, while v d,n > 0 depends (when n ≥ 2) on the n − 1 complex structure moduli, R i /R d , i = d + 1, . . . , d + n − 1. 
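Schematically, and keeping only the dependence made explicit in the surrounding discussion, the 1-loop potential in this regime behaves as below; the overall coefficient of the suppressed terms and the precise numerical factors are left implicit here and are not quoted from the paper:

V_1-loop ≃ (n_F − n_B) v_{d,n} M_(σ)^d + O( M_s^d e^{−c M_s / M_(σ)} ),   valid for M_(σ) ≪ c M_s ,

where the first term arises from the towers of pure Kaluza-Klein modes of the massless states and the second collects the exponentially suppressed contributions of the heavier states.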
The origins of the different contributions are the following: -The n B + n F towers of pure KK modes associated with the massless states and arising from the n large directions yield the term proportional to M d (σ) . -On the contrary, the pure KK towers based on the states at higher string oscillator level lead to the exponentially suppressed contribution. -Finally, all states with non-trivial winding numbers along the n large directions, as well as the unphysical i.e. non-level matched states yield even more suppressed corrections, Since we restrict in the present paper to the regime where M (σ) cM s , we will neglect from now on the exponentially suppressed terms. Splitting the dilaton field into a constant background and a fluctuation, φ dil ≡ φ dil + φ, the 1-loop low energy effective action restricted to the graviton, φ and the radii R i 's takes the following form in Einstein frame 3 : In this expression, R is the Ricci curvature, is the Einstein constant, and the potential is dressed with the dilaton fluctuation, 5) where M is the supersymmetry breaking scale measured in Einstein frame, Note that the classical limit of the theory is recovered by taking κ 2 → 0. In order to write the equations of motion, it may be convenient to perform field redefinitions. The kinetic term of the scalar field M being non-canonical, we define the so-called "no-scale modulus" Φ as where α is an appropriate normalization factor, Moreover, the effective potential being by construction independent on the orthogonal com- the latter is a canonical free field. By also redefining the complex structure deformations as the action takes the final form (2.11) where the 1-loop effective potential is To keep the toy model as simple as possible, we treat the complex structures as constants, ϕ k ≡ cst., k = 1, . . . , n − 1, and ignore as well the remaining internal moduli (other than the volume d+n−1 i=d R i appearing in the definitions of Φ and φ ⊥ ). Looking for homogeneous and isotropic cosmological evolutions in flat space, we consider the metric and scalar field The equations of motion for the lapse function N , scale factor a, no-scale modulus Φ and φ ⊥ take the following forms, in the gauge N ≡ 1 which defines cosmic time x 0 ≡ t, where H ≡ȧ/a. In order to solve the above differential system, we consider a linear combination of the three first equations that eliminates both K and V 1-loop , which is a free field equation identical to that of φ ⊥ . Integrating, we havė where c ⊥ , c Φ are arbitrary constants. Note that under time-reversal, the constants c Φ , c ⊥ change to −c Φ , −c ⊥ . To proceed, it is useful to eliminate the effective potential between Eqs (2.14) and (2.15), The above equation is however a consequence of the others, as can be shown by taking the time derivative of Eq. (2.14) and using the scalar equations (2.16), (2.17). Therefore, we can solve it and insert the solution in Friedmann equation (2.14) in order to find, when n F − n B = 0, the fully integrated expression of the no-scale modulus or M . To reach this goal, we first use Eqs (2.19) to express the scalar kinetic energy K as a function of the scale factor and H, so that Eq. (2.20) becomes a second order differential equation in a only. Second, when c Φ = 0, we introduce a new (dimensionless) time variable τ , in terms of which this equation becomes ( 2.22) Note that using the definition of α in Eq. (2.8), we have 0 < ω < 1, for arbitrary d ≥ 3 and n ≥ 1. 
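For orientation, the homogeneous equations of motion referred to above (Eqs (2.14)-(2.17)) take the standard form for gravity coupled to the two scalars in d dimensions; the normalisations written below follow the usual FLRW conventions and may differ from those of the paper by field redefinitions:

(d−1)(d−2)/2 · H^2 = κ^2 [ (1/2) Φ̇^2 + (1/2) φ̇_⊥^2 + V_1-loop ] ,
Φ̈ + (d−1) H Φ̇ + ∂V_1-loop/∂Φ = 0 ,
φ̈_⊥ + (d−1) H φ̇_⊥ = 0 ,

so that the free field φ_⊥ obeys φ̇_⊥ ∝ 1/a^{d−1}, which is the origin of the integration constant c_⊥ appearing in Eqs (2.19).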
Finally, using again Eqs (2.19), Friedmann equation (2.14) takes an algebraic form, once expressed in terms of time τ , We will see that the forms of the solutions for the scale factor a and the supersymmetry breaking scale M depend drastically on the number of real roots allowed by the quadratic polynomial P(τ ). Moreover, in order to find the restrictions for string perturbation theory to be valid, we will need to display the dilaton field evolution. Using the definitions of the scalars Φ and φ ⊥ , we have where φ ⊥ is determined by its cosmic time derivative, or To derive the above relation, we have used the definition of τ and Eq. (2.21) to relate the time variables t and τ , In the following sections, we describe the cosmological evolution obtained for arbitrary c ⊥ /c Φ , which admits a critical value (2.27) corresponding to a null discriminant for P(τ ). Supercritical case When c ⊥ and c Φ = 0 satisfy the supercritical condition 1) P(τ ) has no real root. Due to Friedmann equation (2.23), the no-scale model must satisfy n F −n B < 0. Moreover, the classical limit κ 2 → 0 is not allowed (!) This very fact means that in the case under consideration, the cosmological evolution of the universe is intrinsically driven by quantum effects. In particular, the time-trajectory cannot allow any regime where the 1-loop effective potential may be neglected. To be specific, integrating Eq. (2.21), we find and a 0 is an integration constant, while combining this result with Eq. (2.23) yields We see that all solutions describe an initially growing universe that reaches a maximal size before contracting. In the limits τ → ∞, = ±1, the expression a(τ ) together with the definition of τ yield where t is an integration constant. For c Φ > 0, this describes a Big Bang at t t , while c Φ < 0 corresponds to a Big Crunch at t t . Since A > 0, we have in these regimes 5) which shows that the evolution of the universe at the Big Bang and Big Crunch is dominated by the no-scale modulus kinetic energy, partially compensated by the negative potential energy. As announced at the beginning of this section, the quantum effective potential plays also a fundamental role at the bounce, since To study the domain of validity of perturbation theory during the cosmological evolution, it is enough to focus on the dilaton in the above τ → ∞ limits. Eq. (2.25) shows that asymptotically, φ ⊥ converges to an integration constant, so that Eq. (2.24) leads to Thus, the consistency of the 1-loop analysis is guaranteed late enough after the Big Bang and early enough before the Big Crunch. Moreover, the scale factor is assumed to be large enough, for the kinetic energies in Eq. (3.5) to be small compared to the string scale. This is required not to have to take into account higher derivative terms in the effective action or, possibly, the dynamics of the whole string spectrum. For the above two reasons, the cosmological evolution can only be trusted far enough from its formal initial Big Bang (t t sign c Φ ) and final Big Crunch (t t −sign c Φ ). To summarize, the supercritical case realizes a quantum universe whose existence is only allowed for a finite lapse of cosmic time (unless string theory resolves the Big Crunch and allows a never-ending evolution). Since the quantum corrections to the off-shell classical action allow new cosmological evolutions which describe the birth of a world sentenced to death, we may interpret this finite history as that of an "unstable flat FLRW universe" arising by quantum effects. 
It is however not excluded that the expanding phase of the solution (3.2), (3.3), (2.25) may be related in some way to some cosmological era of the real world. Subcritical case and quantum no-scale regimes When the integration constants c ⊥ and c Φ = 0 satisfy the subcritical condition P(τ ) admits two distinct real roots, Important remarks follow from Friedmann equation (2.23). First, the bosonic or fermionic nature of the massless spectrum determines the allowed ranges of variation of τ , Second, taking the classical limit κ 2 → 0 is allowed, and yields evolutions τ (t) ≡ τ − or τ (t) ≡ τ + . Therefore, the classical trajectories are identical to those obtained for quantum super no-scale models, i.e. when n F − n B = 0. In the following, we start by describing the cosmological solutions in the super no-scale case, and then show that the quantum evolutions for generic no-scale models (n F − n B = 0) admit quantum no-scale regimes, i.e. behave the same way. When τ ≡ τ ± , 5 Eqs (2.21) and (2.23) being trivial, we use the definition of τ given in Eq. (2.22) to derive the scale factor as a function of cosmic time t, where t ± is an integration constant. For c Φ > 0, this describes a never-ending era t > t ± of expansion, initiated by a Big Bang occurring at t = t ± . Of course, the solution obtained by time-reversal satisfies c Φ < 0 and describes an era t < t ± of contraction that ends at the Big Crunch occurring at t ± . Integrating the no-scale modulus equation in (2.19), we find where Φ ± is an integration constant and In total, when the 1-loop effective potential vanishes (up to exponentially suppressed terms), the cosmological evolution is driven by the kinetic energies of the free scalar fields, The dilaton evolution is found using Eq. (2.24), where φ ⊥± is an integration constant and Unless P ± vanishes, in which case the dilaton is constant, the string coupling g s = e φ varies monotonically between perturbative and non-perturbative regimes. For instance, the 5 One may think that the space of solutions in the super no-scale case is divided in two parts, corresponding to either τ ≡ τ + or τ ≡ τ − . This is however not true. Including the critical case of Sect.5.1, all the evolutions are actually of the form solution τ ≡ τ + is perturbative in the large scale factor limit, c Φ (t − t + ) → +∞, when P + > 0. In order to translate this condition into a range for c ⊥ /c Φ , we introduce 2α(ω d−2 n + 1) , (4.10) which satisfy 0 < ± γ ± < γ c , and find P + > 0 if and only if In a similar way, the solution τ ≡ τ − is perturbative in the small scale factor limit, . (4.12) As already mentioned in the supercritical case, beside the conditions for the g s -expansion to be valid, the above solutions suppose the scale factor to be large enough, for the higher derivative terms (α -corrections) to be small. Because of this constraint, the Big Bang (t t ± ) and Big Crunch (t t ± ) behaviours are only formal. Case n F − n B = 0 Let us turn to the analysis of a generic no-scale model, thus characterized by n F − n B = 0. In this case, τ can actually be treated as a time variable and, integrating Eq. (2.21), we find where a 0 > 0 is an integration constant. Using this result with Friedmann equation (2.23) yields M d , which can be written in two different suggestive ways, where we have defined by time-reversal, with c Φ < 0. We see that all branches start and/or end with a vanishing scale factor, when τ → ±∞ or τ → τ − . 
6 In all cases, whether da/dτ vanishes, is infinite or is finite when a(τ ) → 0, we will see that da/dt diverges at a finite cosmic time, thus describing a formal Big Bang or Big Crunch. Note that when n F − n B = 0, all branches allow τ to approach τ + and/or τ − . When this is the case, the behaviour τ (t) → τ ± yields, using the definition of τ given in Eq. (2.22), for some integration constant t ± . This shows that the cosmological evolution of the universe as well as that of the scalars Φ and φ ⊥ approach those found in the super no-scale case n F − n B = 0, i.e. for vanishing 1-loop effective potential (up to exponentially suppressed terms). For this reason, we define the limits τ → τ ± of the generic no-scale models as "quantum no-scale regimes". These are characterized by phases of the universe dominated by the scalar kinetic energies, (4.17) In fact, when τ → τ + , the divergence of the scale factor, a(τ ) → +∞, and the fact that K + > 0 imply that the quantum potential is effectively dominated. Moreover, Eq. (4.16) shows that this regime lasts for an indefinitely long cosmic time, c Φ t → +∞. In a similar way, when τ → τ − , since a(τ ) → 0 and K − < 0, the effective potential is again dominated, and this process is realized when cosmic time approaches t − , c Φ (t − t − ) → 0 + . To summarize, when τ → τ ± , assuming a perturbative regime, the quantum cosmological evolution of the no-scale model is attracted to that of a classical background (κ 2 = 0), where the no-scale structure 6 The trajectories allowing τ to approach τ − are shown in Fig. 1(ii) in the case da/dτ → ±∞, when τ → τ − . This occurs when On the contrary, when γ M < |c ⊥ /c Φ | < γ c , one has da/dτ = 0 at τ = τ − . Finally, |c ⊥ /c Φ | = γ M implies |da/dτ | to be finite and non-vanishing at τ = τ − . However, these different behaviours do not play any important role in the sequel. is exact. In particular, the temporal evolution of the no-scale modulus Φ approaches that of a free field, so that the no-scale structure tends to withstand perturbative corrections. Note however that the cosmological solutions found for no-scale models satisfying n F − n B < 0 can also admit other regimes. As in the supercritical case, the latter correspond to limits τ → ∞, = ±1, describing a formal Big Bang or Big Crunch, (4.18) where t is an integration constant. In these circonstances, the universe is dominated by the kinetic energy and negative potential of the no-scale modulus, as summarized in Eq. (3.5). To determine the domain of validity of perturbation theory, we integrate Eq. (2.25), which introduces an arbitrary constant mode φ ⊥0 , and use Eq. (2.24) to derive the time dependance of the dilaton, (4.19) where P ± is defined in Eq. (4.9). Therefore, the conditions L ± > 0 for perturbative consistency of the attractions to the quantum no-scale regimes τ → τ ± are those found in the super no-scale case: For τ → τ + , c ⊥ /c Φ must satisfy Eq. (4.11), while for τ → τ − , c ⊥ /c Φ must respect Eq. (4.12). In particular, when n F − n B > 0, the cosmological evolution between τ − and τ + is all the way perturbative if γ − < c ⊥ /c Φ < γ + . In this case, the quantum potential is negligible, V 1-loop K, throughout the evolution, except in the vicinity of τ = 1, where it induces the transition from one no-scale regime to the other. On the contrary, the regimes τ → ∞, which can be reached when n F − n B < 0, can be trusted up to the times the evolutions become non-perturbative, as follows from Eq. 
(3.7), which is valid for arbitrary For n F − n B = 0, the subcritical case can also give rise to non-trivial dynamics of the no-scale modulus, which can be summarized as follows, for instance in the case c Φ > 0: (i) When √ ωγ c < |c ⊥ /c Φ | < γ c , even if the effective potential is dominated in the no-scale regime τ → τ − , M d turns out to diverge, as can be seen in Fig. 2(i), which shows the three branches M d can follow as a function of τ . The directions of the evolutions for increasing cosmic time t are again indicated for c Φ > 0. Along the trajectories satisfying τ > τ + , the universe expands and is attracted to the no-scale regime τ → τ + , while M (t) decreases. Thus, even if this is counterintuitive, the supersymmetry breaking scale forever climbs its negative effective potential (n F − n B )v d,n M d [17]. This fact contradicts the expectation that M should increase and yield a large, negative potential energy. On the contrary, the situation is more natural in the other branches. For the solutions satisfying τ < τ − , if M (t) also starts climbing its negative potential, it is afterwards attracted back to large values, with the turning point sitting at τ = ω. Finally, along the branch τ − < τ < τ + , the universe expands and is attracted to the no-scale regime τ → τ + , while M (t) drops along its positive (ii) When |c ⊥ /c Φ | < √ ωγ c , as shown in Fig. 2(ii), M d vanishes when τ → τ − . Along the branch τ > τ + , M (t) climbs as before its negative potential [17], while for τ < τ − , it drops. The branch τ − < τ < τ + is the most interesting one: While the scale factor increases, M (t) first climbs the positive potential (n F − n B )v d,n M d , and then descends. At the turning point located at τ = ω, we haveΦ = 0 and V 1-loop > 0, which is enough to show that for small enough |c ⊥ |, the scale factor accelerates for a lapse of cosmic time [16][17][18]. However, the resulting e-fold number being of order 1, this acceleration of the universe does not yield any efficient inflationary effect 7 . (iii) Finally, Fig. 2 The directions of the evolutions for increasing cosmic time t are indicated for c Φ > 0. Solid curves refer to no-scale models with n F − n B < 0, while the dashed ones refer to no-scales models with n F − n B > 0. To conclude on the subcritical case, let us stress again that the dynamics arising from initial conditions such that c Φ > 0 when n F − n B ≥ 0 , or c Φ > 0 and τ i > τ + when n F − n B < 0 , (4.20) enter the no-scale regime τ (t) → τ + . 8 Critical case In the critical case, which corresponds to the polynomial P(τ ) has a double root τ + = τ − = 1, and the no-scale model must satisfy n F −n B ≤ 0, as follows from Eq. (2.23). In the sequel, we show that the qualitative behaviour of the associated cosmological evolutions are similar to those found in the subcritical case. Case n F − n B = 0 The solutions in the super no-scale case are actually those found in the subcritical case for where t 0 , Φ 0 , φ ⊥0 are integration constants and The above evolutions are perturbative in the large scale factor regime, c Φ (t − t 0 ) → +∞, when P 0 > 0. This is the case for c ⊥ /c Φ = −γ c , as well as for For the no-scale models with negative 1-loop effective potentials, Eqs (2.21) and (2.23) yield where a 0 > 0 is an integration constant. Fig. 1(iii) shows in solid lines the two branches τ > 1 and τ < 1 the scale factor a(τ ) can follow, while the dotted line τ ≡ 1 corresponds to the critical super no-scale case. 
The trajectories admit one of the two limits τ → 1 + or τ → 1 − , which lead in terms of cosmic time to where t ± is an integration constant. The behaviour τ → 1 + describes an expanding or (4.6) vanishes for r = 0, the effective potential is still dominated by the moduli kinetic energies, 6) which proves that the limits τ → 1 ± describe quantum no-scale regimes. On the contrary, the limits τ → ∞, = ±1, yield Big Bang/Big Crunch behaviours, as shown in Eqs (4.18) and (3.5). Case c Φ = 0 What remains to be presented is the cosmological evolution for c Φ = 0. The supersymmetry breaking scale can be found by integrating the no-scale modulus equation in (2.19), which where Φ 0 is a constant. This result can be used to write Friedmann equation (2.14) as which for c ⊥ = 0 requires the no-scale model to satisfy n F − n B < 0. Note that this is not surprising since this case is somehow "infinitely supercritical". As a result, no-scale regimes are not expected to exist. To be specific, when c ⊥ = 0, the above differential equation can be used to determine the cosmic time as a function of the scale factor, and t * is an integration constant. Defining (6.4) the time variable t varies from t + to t − . At t = t + , a Big Bang initiates an era of expansion that stops when the scale factor reaches its maximum a max at t = t * . Then, the universe contracts until a Big Crunch occurs at t = t − . Close to the initial and final times t ± , the scale factor behaves as which leads to scalings similar to those given in Eq. (3.5), namely However, the scalar φ ⊥ converges in this case to a constant, so that Eq. (2.24) yields As a result of Eqs (6.6) and (6.7), the cosmological solution we have found can only be trusted far enough from the formal Big Bang and Big Crunch, not to have to take into account α -and g s -corrections. Finally, when c ⊥ = 0 and n F − n B < 0, the maximum scale factor a max is formally infinite, so that no turning point exists anymore. In fact, relations (6.5) and (6.7) become equalities: The evolution for = +1 describes a never-ending era t > t + of expansion, while the trajectory for = −1 describes an era t < t − of contraction. For super no-scale models, i.e. when n F − n B = 0, the case c ⊥ = 0 yields the trivial solution where all fields a, Φ, φ ⊥ are static. Summary and conclusion We have considered the low energy effective action of heterotic no-scale models compactified on tori down to d dimensions. At 1-loop, the effective potential backreacts on the classical background, which is therefore time-dependent. Interested in homogeneous and isotropic cosmological evolutions, we have restricted our analysis to the dynamics of the scale factor a(t), the no-scale modulus Φ(t) and a free scalar φ ⊥ (t), which is a combination of the dilaton and the volume involved in the stringy Scherk-Schwarz supersymmetry breaking [4,5]. The space of solutions can be parameterized by (c ⊥ /c Φ , τ i ), where c ⊥ , c Φ are integration constants and τ i is the initial value of τ (t) = 2A dc Φȧ a d−2 (see Eq. (2.22)). Fig. 3 shows the partition of the R 2 -plane of cosmological solutions: The interior of the ellipse is realized by models where n F − n B > 0, and yields trajectories τ (t) which follow dashed vertical lines, from bottom to up when c Φ > 0. Similarly, the exterior of the ellipse corresponds to models having n F − n B < 0, with τ (t) following vertical lines from up to bottom when c Φ > 0. 
The trajectories in the supercritical regions (I) and (I ), which have no classical counterparts, are characterized by a bounded scale factor. The evolution starts and ends with a Big Bang and a Big Crunch, where the universe is dominated by the kinetic energy and quantum potential of the no-scale modulus. Translated in terms of a perfect fluid of energy density ρ and pressure P , we have However, the above regime τ → ±∞ of low scale factor can only be trusted until higher order corrections in g s and α become important. On the contrary, the solutions in the subcritical regions (II) and (III) are attracted to the quantum no-scale regime τ → τ + , which restores the no-scale structure [2] as the universe expands, and is easily (if not always, see Eq. (4.11)) perturbative in g s . As a result, the evolution is asymptotically dominated by the classical kinetic energies of Φ and φ ⊥ . The endless expansion and flatness of the universe are compatible with quantum corrections, which justifies that rigid Minkowski spacetime may be postulated in quantum field theory. Moreover, the evolutions in the subcritical regions (III) and (IV) admit the second quantum no-scale regime τ → τ − , which is realized as the scale factor tends (formally) to 0. In total, the trajectories in region (III) connect two regimes where with possibly an intermediate period of accelerated cosmology, however too short to account for inflation. In regions (II) and (IV), the state equation of the fluid evolves between P ∼ ρ and P ∼ 2A d−1 + 1 ρ. On the one hand, the drop in M (t), which takes place in the quantum no-scale regime τ → τ + and is independent of the sign of the potential, forbids the existence of any cosmological constant i.e. fluid satisfying P ∼ −ρ. On the other hand, neglecting the time-evolution (which makes sense at a cosmological scale) of the scale factor and scalar fields to end up with a theory in rigid Minkowski spacetime valid today, the energy density M d is effectively constant but not coupled to gravity. Thus, from either of these points of view, the words "cosmological" and "constant" exclude somehow each other. Note that this is also the case in other frameworks. For instance, compactifying a string theory on a compact hyperbolic space, flat FLRW solutions can be found, where the volume of the internal space is timedependent and plays formally the role of M (t) in the present work. Its associated canonical field, which is similar to Φ, admits an exponential and positive potential, however arising at tree level [16][17][18]. This setup can be realized by considering S-brane backgrounds or non-trivial fluxes. In all these cases, it is important to study the constraints arising from variations of couplings and masses at cosmological time scales, as well as present violations of the equivalence principle [15]. The simple model we have analyzed in details in the present work can be upgraded in various ways. First of all, the full dependence of the 1-loop effective potential on the internal metric, internal antisymmetric tensor and Wilson lines can be computed [11]. New effects then occur, due to the non-trivial metric of the moduli space and the existence of enhanced symmetry points [22]. Another direction of study consists in switching on finite temperature T [5,19,[23][24][25]. To see that the qualitative behaviour of the evolutions may be modified, let us assume T M and an expanding universe in quantum no-scale regime τ → τ + . 
In this case, one finds [19] that M/T decreases, so that the screening of thermal effects by quantum corrections eventually stops. In fact, new attractor mechanisms exist [19,24]. For instance, when n_F − n_B > 0, quantum and thermodynamic corrections balance so that the free energy, which is nothing but the effective potential at finite temperature, yields a stabilization of M(t)/T(t). At late times, the evolutions satisfy Eq. (7.4) and are said to be "radiation-like". This is justified since the total energy density and pressure satisfy ρ_tot ∼ (d − 1)P_tot, where ρ_tot and P_tot take into account the thermal energy density and pressure derived from the free energy, as well as the kinetic energy of the no-scale modulus Φ [19,23,24].
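The "radiation-like" label can be motivated by a standard scaling argument; the short derivation below is a sketch added for orientation (it reproduces textbook flat-FLRW relations in d space-time dimensions and is not quoted from the paper, whose precise statement is its Eq. (7.4)).

```latex
% Sketch: flat FLRW background in d space-time dimensions, single perfect
% fluid, normalization constants omitted. The quoted relation
%   rho_tot ~ (d-1) P_tot
% corresponds to the equation-of-state parameter of radiation:
\[
  w \;\equiv\; \frac{P_{\rm tot}}{\rho_{\rm tot}} \;=\; \frac{1}{d-1}.
\]
% Energy conservation and the Friedmann equation then give the usual scalings
\[
  \dot\rho_{\rm tot} + (d-1)\,H\,(\rho_{\rm tot}+P_{\rm tot}) = 0
  \;\Rightarrow\;
  \rho_{\rm tot} \propto a^{-(d-1)(1+w)} = a^{-d},
  \qquad
  H^2 \propto \rho_{\rm tot}
  \;\Rightarrow\;
  a(t) \propto t^{2/d},
\]
% i.e. the familiar a(t) proportional to t^{1/2} radiation era for d = 4.
```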
Spin-orbit splitting of quantum well states in n-monolayer Ir/Au(111) heterostructures

(Received 9 October 2019; revised manuscript received 24 April 2020; accepted 11 May 2020; published 5 June 2020)

The effect of spin-orbit coupling on quantum well states (QWSs) in atomically thin Ir adlayers deposited on the Au(111) substrate is studied in the framework of the density functional theory. Varying the Ir film thickness from 1 to 3 atomic layers, we find numerous Ir-derived QWSs, which are mainly of d character. The resulting band dispersion of QWSs appearing around the surface Brillouin zone center in a wide Au(111) energy gap is analyzed in the framework of the Rashba model. In all such QWSs, the fitted values of the Rashba parameter exceed 2 eV Å. The maximal value of 6.4 eV Å was obtained for the 1-monolayer-Ir/Au(111) system. We explain such large spin splitting by hybridization between different QWSs. Strong enhancement is observed in the density of electronic states at the surface in the energy region around the Fermi level caused by these QWSs. DOI: 10.1103/PhysRevB.101.235409

I. INTRODUCTION

In crystals without inversion symmetry, the twofold Kramers degeneracy of the electron energy bands protected by time-reversal symmetry is lifted. As a result, these bands are split into two sets by the spin-orbit coupling (SOC), and each of these bands is characterized by the momentum and spin [1,2]. This effect constitutes a basis for a wide variety of phenomena realized in condensed matter systems [3][4][5]. In particular, at surfaces and interfaces, where the crystal symmetry is broken, the phenomenon known as Rashba spin-orbit coupling takes place in the two-dimensional electron states [6,7]. The research fields exploiting this spin-orbit interaction have received great attention since, besides the fascinating physics, promising materials suitable for room-temperature spin-based device applications can be developed. One of the directions of ongoing investigations is the search for materials characterized by a large strength of SOC, which can be quantified by the Rashba parameter α_R. This parameter determines the linear term in the resulting energy band dispersions measured from the Rashba point, where the two bands intersect. Several routes to finding the systems with the largest SOC parameter have been exploited. For instance, the spin-orbit splitting of the energy bands in metallic systems is significantly larger in comparison to the conventional semiconductor systems. A large splitting of the surface states due to the spin-orbit interaction was first discovered at the noble metal surfaces [8][9][10].
To enhance α R , an obvious choice is to use the heavier elements like lead and bismuth. Thus, strong Rashba spin-orbit splitting of the surface-state bands was found on low-index Bi surfaces [11,12] and in ultrathin Bi films deposited on a Si(111) surface [13,14]. Exceptionally large spin-orbit Rashba splitting was observed in the surface states realized in ordered surface alloys formed by heavy elements like Bi or Pb on the surfaces of noble metals Ag or Cu [15][16][17]. In such surface alloys the surface state energies and spin-orbit splitting can be tuned by changing the composition parameters. Even larger splitting was found in the surface and bulk states of layered polar semiconductors, like BiTeI [18][19][20][21], which is about one order of magnitude larger than that in the conventional semiconductor quantum wells [22,23]. In the calculations for organic-inorganic perovskite compounds (OIPCs), the Rashba parameters in a wide range from <0.1 to almost 10 eV Å were obtained [24]. The measurements using the angle-resolved photoemission spectroscopy found Rashba parameters α R of 7 ± 1 and 11 ± 4 eV Å [25]. Spin-orbit interaction has also received great attention in the field of topological insulators [26][27][28][29][30]. In addition to topological surface state in the fundamental energy gap, an unoccupied Dirac cone like surface state was found in the local energy gap of the topological insulators [31][32][33]. The SOC effects are greatly enhanced in the systems of reduced dimensions. Therefore a natural way to induce a giant spin-orbit splitting consists in exploiting the quantumwell states (QWSs) in atomically thin films that originate from a quantization of the electronic states in the surfaceperpendicular direction [34][35][36]. For instance, the SOC effect on the d and s-p QWSs in Au adlayers on W(110) and Mo(110) was studied experimentally and theoretically [37]. The QWSs observed in a Bi monolayer (ML) adsorbed on Cu(111) [17] show a remarkably strong Rashba spin-orbit splitting reaching the Rashba parameter between α R = 1.5 and 2.5 eV Å. These QWSs are totally unoccupied and located at about 2 eV above the Fermi level (E F ). Of a special interest are the quantum states with energies close to E F . Such kind of states can be introduced by, e.g., transition metal atoms. Theoretically, Wu and Li showed [38] that giant SOC in the electronic states close to E F can occur in thin free-standing Ir films with thickness of few atomic layers when one of the sides is covered by a H ML. Even though the spin-orbit interaction in H atoms is extremely weak, the H-Ir interaction induces such a splitting by breaking the inversion symmetry of the film. A deposition of a graphene ML on the H-terminated 3ML Ir(111) results in a slight shift of the energies of the Ir electronic states [38]. This correlates with a small effect on the Ir(111) surface state dispersion upon the graphene deposition observed in photoemission [39]. An extensive first-principles study of the adsorption of Ir on the Au(111) surface considering the Ir coverage from 1/9 up to one ML was performed by Freire et al. [40]. However, the details of electronic structure were not reported in that publication. On the experimental side, recently the Ir-Au bimetallic systems created by deposition of Ir atoms on Au substrates have started to attract attention as promising candidates for automotive exhaust catalysis. 
However, the existing experimental work in the field of catalysis was focused on the synthesis of small Ir nanoparticles (NPs), which offer maximum surface/volume ratios. Thus Ahn et al. [41] found that at sub-ML coverage, Ir islands of ≈2-3 nm in diameter grow on the Au surface. Upon increase of the Ir coverage, dense pyramidal Ir islands with a thickness equivalent to 8 Ir MLs formed on the Au(111) surface. The interface was atomically sharp in both cases. In the experiment by Štrbac et al. [42], after a short deposition time the Au(111) surface was covered with deposited Ir islands one to four MLs high. The lateral size of the deposited Ir islands ranged from 10 to 25 nm. For longer deposition times, the lateral size of the Ir islands ranged from 25 to 50 nm without an increase in the deposit height. However, the exact atomic distribution in the NPs grown in these and other publications [43][44][45][46][47][48][49][50] was not studied. Up to now, the Ir/Au(111) systems have been grown in inherently dirty electrodeposition processes, where it is difficult to control the film morphology. On the other hand, to explore the Rashba effects in atomically thin Ir films on Au(111) one would need to grow such films in a well-controlled ultrahigh vacuum environment. However, it might be difficult to realize the formation of Ir films on Au(111), since in density-functional calculations the Ir atoms on the Au(111) surface prefer to localize beneath the top Au atoms [40]. At the same time, experimentally Zhang et al. [51,52] detected movement of deposited Ir atoms into the Au bulk only at temperatures above 400 K. Moreover, they speculated that the as-deposited 3D Ir islands may change to a more uniform morphology after gentle heating below 400 K. This suggests that it might be possible to create metastable atomically thin nML-Ir/Au(111) systems employing a substrate cold enough to inhibit alloying or three-dimensional (3D) island formation [53]. In this work, we study the electronic structure of the nML-Ir/Au(111) heterostructures with a number n of Ir atomic layers ranging from one to three. The number of valence electrons in Ir is one less than in Pt. Consequently, the energy positions of the Ir QWSs around E_F are shifted upward in comparison with the nML-Pt/Au(111) system studied recently [54], which offers additional possibilities in engineering the electronic states in such bimetallic systems. The dispersion shape and properties of the Ir-related QWSs as well as the layer-resolved density of states (LDOS) are analyzed. The impact of the spin-orbit interaction on these states is quantified by analyzing the electronic structure and LDOS obtained with and without spin-orbit coupling (WSOC) included. In particular, we analyze the SOC effect on the Ir-derived QWSs of d character in the framework of the Rashba model. The rest of the paper is organized as follows. In Sec. II, a brief description of the methods used and some computational details are given. In Sec. III, we present the electronic structure of the systems under study. The summary and conclusions are presented in Sec. IV.

II. CALCULATION METHODS

The calculations were performed within the framework of the density functional theory formalism by the projector augmented wave method employing a pseudopotential scheme implemented in the VASP code [55,56]. For the description of the exchange-correlation effects, the local density approximation in the Ceperley-Alder parametrization was used [57].
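As a rough illustration of how a slab calculation of this type might be set up (it is not the authors' actual input), the sketch below uses the ASE interface to the VASP code. The layer counts, plane-wave cutoff, smearing and vacuum thickness are assumptions introduced here; the LDA functional and the SOC on/off switch follow the description above, and the k-mesh and convergence criterion anticipate the values quoted in the next paragraph.

```python
# Hedged sketch (not the authors' input): a 3ML-Ir/Au(111) slab with a VASP
# calculator attached through ASE. Only the LDA functional, the 11x11x1 k-mesh,
# the 1e-8 eV convergence criterion and the SOC switch follow the text; the
# rest (slab thickness, cutoff, smearing, vacuum) is illustrative.
from ase.build import fcc111
from ase.calculators.vasp import Vasp

a_au = 4.04                      # optimized Au lattice constant from the text (Angstrom)
n_ir = 3                         # number of Ir adlayers (1, 2 or 3 in the paper)

# One-sided, thinner slab built purely for illustration; the paper uses
# 23 Au layers with Ir adlayers on both sides of the film.
slab = fcc111('Au', size=(1, 1, 11), a=a_au, vacuum=12.0)

# Replace the topmost n_ir Au layers by Ir, keeping the Au in-plane lattice
# constant, i.e. pseudomorphic Ir adlayers as described in the text.
top_layers = sorted(range(len(slab)), key=lambda i: slab.positions[i, 2])[-n_ir:]
for i in top_layers:
    slab[i].symbol = 'Ir'

calc = Vasp(xc='LDA',            # Ceperley-Alder LDA, as in the paper
            encut=400,           # plane-wave cutoff in eV (assumed, not given)
            kpts=(11, 11, 1),    # Monkhorst-Pack grid from the paper
            ismear=1, sigma=0.1, # smearing scheme (assumed)
            ediff=1e-8,          # total-energy convergence from the paper
            lsorbit=True)        # set False for the WSOC reference calculation
slab.calc = calc
# energy = slab.get_potential_energy()   # would launch the actual VASP run
```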
The configurations 5d^10 6s^1 and 5d^7 6s^2 were used for the valence electrons in Au and Ir, respectively. The self-consistent electron density was determined using a Monkhorst-Pack scheme [58] with an 11 × 11 × 1 grid of k points in the surface Brillouin zone (SBZ). For all calculations, the total energy was converged to 10^-8 eV. The optimized bulk lattice parameters are a_0 = 4.04 Å for Au and a_0 = 3.82 Å for Ir. The calculated lattice constants are in good agreement with the experimentally determined values a_0 = 4.045 Å for Au [59] and a_0 = 3.8341 Å for Ir [60]. The clean Ir(111) and Au(111) surfaces of a semi-infinite crystal were simulated with a 23-layer film. The same number of layers was employed for the description of the gold substrate in the case of the nML-Ir/Au(111) systems, where the Ir adlayers consisting of n atomic layers were placed on both sides of the slab. The in-plane lattice parameter for the Ir adlayers was chosen to be equal to the Au bulk constant. For each system, we performed optimization of the vertical atomic positions in the Ir adsorbate layers and the four outermost layers of the Au substrate on each side of the film. Fifteen internal Au atomic layers were kept in their bulk positions. It is known that heterostructures based on atomically thin adlayers may have complex behavior depending on the concentration of the adatoms. An especially complex atomic structure can be realized for sub-ML and 1 ML adlayer thicknesses. Frequently, an adlayer is formed on top of the substrate, while in other cases the adsorbate forms an ordered ML beneath the surface atomic layer of the substrate. Regarding Ir adsorption on the flat Au(111) surface, from the study of sub-ML and 1 ML Ir adsorption on Au(111) Freire et al. [40] concluded that the top gold ML prefers to segregate above the Ir ML. The same conclusion was derived from the calculations of the segregation energy in the 3ML Au-Ir-Ir and Ir-Au-Ir systems [61]. Such behavior can be explained by the large positive surface segregation energy for the Ir impurity on the Au(111) surface calculated by Ruban et al. [62]. Based on this theoretical work, here we studied the electronic structure of the 1ML-Ir/Au(111) heterostructure for two vertical positions of the Ir ML: on top of the Au(111) surface and beneath the surface Au atomic layer. In the case of 2 and 3 Ir MLs, the on-top position of the Ir adlayer was chosen. To reveal the impact of SOC on the electronic states of the nML-Ir/Au(111) systems, the band structure is calculated with and without spin-orbit coupling included. In this work, we concentrate on the Ir-induced QWSs around the SBZ center. For the analysis of their spin-orbit splitting, we employ a simple model accounting for SOC by the Rashba Hamiltonian [2,6,7]. The parameters of this model are the effective mass m* and the Rashba parameter α_R. The band maximum (minimum) is shifted by SOC from k = 0 (the Γ point of the SBZ) by k_0 and is found at an energy E_0 with respect to the position at Γ, and the Rashba parameter is defined as α_R = 2E_0/k_0. An alternative way consists in the determination of the linear dispersion term of the spin-orbit split band pair in the vicinity of the Rashba point [63].

A. Clean Au(111) and Ir(111) surfaces

The electronic structure of a clean Au(111) surface was previously studied in detail both experimentally and theoretically in numerous publications. In this work, we take as a reference the one reported in Fig. 4 of Ref. [54].
In gold, the 5d valence electronic bands are completely occupied and reside at energies below about −2 eV and the states around E F are of the mainly s-p character. An attractive feature of the Au(111) surface electronic structure is the presence of a wide energy gap in the bulk spectrum around the SBZ center at energies above ≈−1.4 eV. In this energy gap there exits a surface state of s-p character with a parabolic-like dispersion and energy of ≈ −0.5 eV at the point with respect to the Fermi level. Away from the point this surface state disperses upward, crosses E F and can be traced up to about 1 eV above E F . This state is frequently referred to as a Shockley surface state. Its wave function is localized in the surface region with substantial expansion to the vacuum side [64]. Such spatial localization of this surface state can explain its disappearance upon the deposition of various adlayers. In particular, it occurs in the case of the alkali atom ultra-thin coverage on Cu (111) [65,66] and Pt(111) [66] as well as in the case of the Pt thin film adsorption [54]. The same behavior we observe in the present case of Ir adsorption on Au(111). The spin-orbit coupling has a strong impact on the dispersion of the Au(111) Shockley surface state [54,67]. Its energy is shifted by several tens meV downward upon the inclusion of spin-orbit interaction in the calculation. Moreover, this state experiences notable Rashba spin-orbit splitting, which can be described with a splitting coefficient (Rashba parameter) α R ≈ 0.35 eV Å [8]. A number of other occupied surface states of d character can be found in Au(111) below −2 eV. These states experience notable splitting upon the inclusion of spin-orbit interaction. Their characteristics obtained in our calculation are close to those found in other density-functional calculations [9,10,[67][68][69][70] and agree with the photoemission experiment data [68]. In order to better understand the electronic structure of atomically thin Ir films and the impact by the Au substrate, in Fig. 1, we present the electronic structure of a clean Ir(111) surface. The electronic structure calculated with taking into account the spin-orbit interaction is reported in Fig. 1(a). Here, one can see that at energies below ≈1.5 eV the electronic states are of mainly d character. Above this energy, the electronic states have essentially s-p character. In the center of the SBZ there is an energy gap in the unoccupied part of the spectrum. Its bottom is at 1.23 eV, with the border having a parabolic-like upward dispersion. A similar energy gap exists on the (111) face of noble and other transition metals. However, in contrast to the noble metal surfaces, Pd(111), and Pt(111) [71][72][73][74][75][76], this gap in Ir(111) does not support a Shockley-like surface state. We explain this fact by a symmetry of the lower edge of the energy gap which has d character in Ir(111). For instance, in the (111) noble metal surfaces the symmetry of the lower edge is of a p type. As a result, the condition of a "p-s" inverted energy gap for the existence of a Shockley surface state is fulfilled [77]. In Ir(111), the symmetry of this gap is "d-s". This correlates with the absence of such a state on the Os(0001) surface, where a similar d-s energy gap exists [78]. In general, our calculated electronic structure of Ir(111) is very close to that calculated by Dal Corso [79]. Furthermore, the calculated data are in agreement with the photoemission experiments [80,81]. 
Several surface states can be found in Fig. 1(a) in the unoccupied part at finite wave vectors. The majority of them have resonance character. With the spin-orbit interaction included, the majority of these states reduce its localization at the surface as can be deduced from the comparison with Fig. 1(b), where the Ir(111) electronic structure obtained without the inclusion of the spin-orbit term is reported. With SOC included, only the surface state located in close vicinity to E F along the MK direction enhances its surface character due to the shift into the energy gap interior. Its energy position and dispersion are very close to the data of the photoemission experiment [82] that found this state at E F at the K point locating and dispersing downward with changing the wave vector from K toward the point. In the occupied part of the SOC electronic structure reported in Fig. 1(a), we find several surface and resonance states with strong localization at the surface owing to its d character. In particular, in the vicinity of the point just below E F , we find a pair of spin-split surface states SS and SS crossing each other at the energy of −0.18 eV at . This is in agreement with the photoemission value of −0.34 eV at [39,83]. A signature of these states can be detected in the normal emission spectra reported by Pletikosić et al. [81] and Elmers el al. [84] in the 0. as well. A downward dispersion of two resonance branches SS and SS follows a conventional Rashba-like shape despite the band splitting at the point due to the finite-thickness effect. As seen in Fig. 1(b), in the Ir(111) electronic structure obtained without spin-orbit interaction this surface state pair merges into a doubly degenerate band denoted SS. Furthermore, without the inclusion of the spin-orbit interaction the SS band is shifted upward by about 0.11 eV and has a maximum at the energy of −0.07 eV. At the point, this surface state has a dominating s-porbital character as is evidenced by its charge density distribution reported in Fig. 2(a). However, its orbital composition rapidly transforms into a d type at finite wave vectors. In Figs. 2(b) and 2(c), we show the charge density plot for the upper SS and lower SS spin-split branches evaluated at k = 0.04 M. Even at such small k we observe the drastic transformation of the character of this surface state (especially the lower-energy SS one) into a d type. This is accompanied by more efficient penetration into the crystal. Upon the in-crease of the wave vector, this tendency is maintained for both branches. In Fig. 1(c), we zoom the dispersion of these spin-split surface states showing its spin polarization, which demonstrates a typical Rashba-like spin texture. As seen in the left panel of Fig. 1(c), the in-plane xy spin polarization is significantly larger than the z spin component presented in the right panel. In our calculation, the Rashba splitting parameter α R for this state is 1.1 eV Å. This value agrees rather well with the experimentally measured value of α R ≈ 1.3 eV Å [39]. Our characteristics regarding this surface state are close to the calculated data for the 15-and 18-atomic-layer-thick slabs of Ir in Refs. [39] and [81], respectively. Since the surface states SS and SS appear in the region where the bulk-band states exist, their dispersion has a typical resonance character. This is evidenced by the avoidingcrossing behavior involving bulk-like energy bands. 
Reaching the binding energy about 1.5 eV this surface state merges the bulk states in accord with the experiment [39,84]. In Fig. 1(a), below E F one can find several other surface states at finite wave vectors. Some of them are true surface states owing to localization in the bulk energy gaps, like the spin-split ones with the energies of −1.12 and −1.38 eV at the K point. The other surface states disperse over a lowerenergy gap with the energies of −2.76 and −3.09 eV at the K point. At wave vectors around the SBZ center, one can observe a surface state with the energy of −2.73 eV. It has a parabolic-like dispersion with the upward dispersion from the point. Its clear surface character is ensured by its d type and location in a symmetry energy gap. A detailed description of properties of these and other surface states can be found in Ref. [79]. In Fig. 1(d), we present LDOS for the WSOC and SOC cases, as well as the difference between LDOS for four upper layers and LDOS of a central layer of the slab (which can be considered as representing a bulklike one). The surface states with dominating d-type character give rise to the strong peaks in LDOS at the surface. Especially a strong enhancement of the charge can be found in LDOS of the surface layer in the energy intervals between −1.4 and −0.2, −0.1 and 0.5, and below −1.7 eV. The effect of SOC in LDOS is significant and leads to the redistribution of all the features. B. 1ML-Ir/Au(111) In Figs. 3(a) and 3(b), we show the electronic structure of the 1ML-Ir/Au(111) system calculated, respectively, with and without SOC. The region delimited by a pink rectangular with addition of the spin texture is presented in Fig. 3(c) on the enhanced scale. We find that adsorption of the Ir ML produces a strong impact on the electronic structure of a pure Au(111) surface. First of all, we do not observe any signature of a Shockley s-p surface state in the wide energy gap around the SBZ center. Instead, in Fig. 3(a) we find six energy bands in the energy gap around the point. Due to its spatial confinement to the Ir ML we interpret these states as the Ir-derived QWSs. At the point we observe the energy gap in 0.30 eV between the bands 1 -1 and 2 -2 . The energy gap between the bands 2 -2 and 3 -3 is 0.39 eV. The indirect gaps between the bands 1 and 2 is 0.20 eV, whereas that between the states 2 and 3 is 0.04 eV. The most visible effect of the SOC inclusion is the spin splitting of all electronic states localized at the surface and a substantial energy shift of many of them. Moreover, the shape of these Ir-derived QWS bands changes notably. In particular, the bands 1 and 1 present a more parabolic-like behavior in the point vicinity. At finite wave vectors, they are spin split. However, it is impossible to fit the dispersion of two resulting bands 1 and 1 using the Rashba model, since the energy splitting varies significantly with the wave vector. We relay such behavior to strong hybridization of these states with the QWSs 2 and 3 upon the switching on of the spin-orbit interaction. In the case of the 1ML Ir adlayer, this hybridization is notably stronger than in the 1ML-Pt/Au(111) case, where it is possible to describe the spin-orbit splitting of a similar band by a Rashba model with the splitting coefficient α R = 1.5 eV Å [54]. At the point, the QWSs 1 and 1 coincide at the Rashba point with the energy of 0.5 eV. Its charge density distribution is reported in Fig. 4(a). 
Here the orbital composition of these states consists of dominating d orbital at Ir ions with a small portion localized in the vicinity of the top Au atomic layer. In both directions, the bands 1 and 1 disperse upward resembling the dispersion of the unoccupied s-p Ir bulk-like states of Fig. 1 In the same energy gap, we find the other Ir spin-separated QWSs with the energy of 2.24 eV at M. Due to its almost flat dispersion over a large portion of the SBZ, the bands 2 and 2 give a strong contribution to LDOS in the surface region around E F . As is seen in Fig. 3(d), LDOS at the Ir atomic layer around E F is almost two times larger than the Ir bulk value. Owing to the penetration of the wave function of the QWSs 2 and 2 into the Au substrate, LDOS at E F in the top Au atomic layer is enhanced in comparison with the Au bulk interior by almost the same factor. Other Ir-derived QWSs are observed in the Au energy gaps around the SBZ borders. Their properties resemble those of the Pt-induced QWSs in the 1ML-Pt/Au(111) system studied recently [54]. Therefore we do not discuss such states in this work. In order to understand the evolution of the Ir-derived QWSs upon switching on the spin-orbit interaction, we investigated how the electronic structure of the 1ML-Ir/Au(111) system changes with varying the spin-orbit coupling coefficient α so . The calculations were performed for α so = 0.1, 0.3, 0.5, 1.0, and 2.0. The resulting SOC electronic structure is shown in Fig. 5. In the panels (a)-(e), the dispersion of the states includes its spin projection in the xy plane. The size of the symbols is proportional to its projection magnitude. In the panels (f)-(j) the same band structure contains information about the z spin component of each state. One can see how the spin splitting of the energy bands with the localization at the Ir atomic layer gradually increases with the increasing the magnitude of α so . At small α so 's, the bands 1, 2, and 3 split into three separate groups without mixing between them at any α so . At the smallest α so value, the bands 2 and 2 in Figs. 5(a) and 5(f) have similar magnitude of the xy and z spin components at any finite k . In the case of α so = 0.3 the spin polarization of these states is mainly in the xy plane at k in the vicinity of . At larger k the bands 2 and 2 have spin polarization predominantly in the z plane. For α so = 0.5, the spin polarization of the bands 2 and 2 changes substantially. At finite k along K the xy and z spin components are of the same size at the wave vectors in the Au(111) energy gap. Upon entering the region with the Au bulk-like energy bands the z spin orientation prevails. Along the M direction the magnitudes of the xy and z spin components become similar upon approaching the energy gap border and maintain such ratio for larger k . When α so reaches a value of 1.0 the spin texture of all the QWSs changes significantly. Along K inside the Au energy gap the spin in the bands 2 and 2 orients mainly in the xy plane. Outside the gap the spin orientation in the z direction becomes dominating. As for the M direction, the xy spin orientation is preferential at any k . Notice that the spin polarization of the QWS 2 and 2 with the variation of k is different from a conventional Rashbalike shape exemplified in the case of the bands 1 and 1 . Thus, the spin orientation of the bands 2 and 2 changes from positive (negative) to negative (positive) at finite k 's along the same symmetry direction. 
This was observed in other systems [37] and can be explained in our case by the strong hybridization with the band 3 . If we increase α so up to 2.0 the spin texture of the 2 and 2 bands in Figs. 5(e) and 5(j) has a dominating xy polarization at all the wave vectors. The same, although at a lesser scale, is observed in the spin alignment of the QWS bands 1 and 1 . Regarding the transformation of the band 3 into a couple of the spin-split bands 3 and 3 , its evolution with the α so magnitude in Fig. 5 reveals that its spin texture is similar to that in a Rashba spin-split scenario at α = 0.1, 0.3, and 0.5. These bands have the xy spin orientation at any k . This situation changes when the α so value reaches 1.0 and 2.0. In these cases, the spin orientation of the 3 band changes at finite k 's. Again, we relay such spin-texture behavior in the band 3 to its strong hybridization with the 2 bands. As seen in Fig. 3(c), between the bands 2 and 3 there is a small energy gap only. Strong hybridization between the QWSs 2 -2 and 3 -3 is evidenced in their spatial localization. In Fig. 6(a), we report charge density distribution for the states 2 and 2 at the point. Here one can observe that in addition to the d-type contribution it has a strong admixture of the s-p character in the region above the Ir ML. Some portion of such symmetry can be found at the Ir-Au interface and between the top and the second Au atomic layers. When k is at 0.03 M the charge density of the 2 and 2 experiences notable transformation, as seen in Figs. 6(b) and 6(c). At such a small k , the state 2 loses its s-p contribution significantly and now the d admixture prevails. The same, although on a less scale is observed for the state 2 . Once we shift k to 0.08 M the charge density for the states 2 and 2 reported in Figs. 6(d) and 6(e), respectively, is almost completely dominated by the d-type contribution. Variation in the charge density distribution of the QWSs 3 and 3 with the departure from the point occurs as Fig. 7(b). On the contrary, Fig. 7(c) shows that it remains almost the same in the QWS 2 . When k changes to 0.08 M the presence of the s-p admixture in the charge density of these states shown Figs. 7(d) and 7(e) becomes more evident. In contrast to the bands 1 and 1 , where the Rashba splitting model can not be applied, the spin splitting of the bands 2 -2 and 3 -3 is described by this model with certain limitations. The main problem is the bands 2 and 2 , since it is not clear which sign of the effective mass is appropriate in this case. We take it negative considering the dispersion of the band 2 in the WSOC case of Fig. 3(b). Another problem consists in ambiguity in choosing a region in the k space for the fitting procedure. In this work, we used two regions in the k space for fitting. As reported in Table I, with fitting the dispersion of the bands 2 and 2 at k close to the band 2 maximum, we obtain α R = 6.4 eV Å. This value is probably the largest spin-splitting coefficient reported up to now in metallic systems. In the case of fitting the dispersion of the bands 2 and 2 in a region close to the Rashba point we obtain α R = 5.0 eV Å which is rather large as well. Notice that in the case of the Pt ML adsorption studied in Ref. [54] such a procedure could not be applied for the description of the spin splitting of the similar QWSs. Regarding the spin splitting in the bands 3 and 3 , the fitting procedure again is rather ambiguous. 
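To make the dependence on the fitting window concrete, a minimal sketch of such a two-window Rashba fit is given below. The dispersion arrays are placeholders rather than the QWS data of this work; the quadratic-plus-linear form and the negative effective mass follow the Rashba-model parameterization introduced in Sec. II (effective mass m* and Rashba parameter α_R).

```python
# Hedged sketch of a window-dependent Rashba fit; the band energies would come
# from the calculated QWS dispersions, which are not reproduced here, so the
# arrays below are placeholders.
import numpy as np

HBAR2_OVER_ME = 7.62  # hbar^2/m_e in eV*Angstrom^2 (approximate)

def fit_rashba(k, e_lower, e_upper, m_eff):
    """Least-squares fit of a spin-split pair
    E_pm(k) = E0 + hbar^2 k^2 / (2 m_eff m_e) +/- alpha_R * k
    over the supplied k-window; returns (E0, alpha_R) in eV and eV*Angstrom."""
    kinetic = HBAR2_OVER_ME * k**2 / (2.0 * m_eff)
    # Stack both branches into one linear system in (E0, alpha_R).
    a = np.concatenate([np.column_stack([np.ones_like(k), -k]),
                        np.column_stack([np.ones_like(k), +k])])
    b = np.concatenate([e_lower - kinetic, e_upper - kinetic])
    (e0, alpha_r), *_ = np.linalg.lstsq(a, b, rcond=None)
    return e0, alpha_r

# Placeholder dispersions (NOT data from this work), just to show usage:
k = np.linspace(0.0, 0.10, 21)            # wave vector, 1/Angstrom
m_eff = -0.5                              # assumed negative effective mass (see text)
e_up = 2.0 + HBAR2_OVER_ME * k**2 / (2 * m_eff) + 3.0 * k
e_dn = 2.0 + HBAR2_OVER_ME * k**2 / (2 * m_eff) - 3.0 * k

# Two fitting windows, mimicking the "band-maximum" and "Rashba-point" choices.
for lo, hi in [(10, 21), (0, 11)]:
    e0, alpha_r = fit_rashba(k[lo:hi], e_dn[lo:hi], e_up[lo:hi], m_eff)
    print(f"window {lo}-{hi}: E0 = {e0:.2f} eV, alpha_R = {alpha_r:.2f} eV A")
```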
If we take as a reference the regions of k close to the maximum in the band 3 dispersion, the fitting gives α R = 5.1 eV Å. Choosing the -point vicinity results in α R = 3.3 eV Å. In order to address the Au segregation effect in the 1ML-Ir/Au(111) system, in this work, we investigated the case when an Ir ML is situated under the Au surface atomic layer. Figure 8 demonstrates the band structure of such a system calculated with (a) and without (b) SOC. By comparing these two panels we conclude that the switching on the spin-orbit interaction produces a similar effect as in the previously discussed system. Comparison of the SOC band structure presented in Fig. 8(a) with that in Fig. 3(a) reveals that the variation in the Ir ML position with respect to the Au surface atomic layer produces relatively little effect on the energy position and dispersion on the Ir-induced QWSs. We relay such insensitivity of the Ir-derived electronic states to its predominantly d character. Nevertheless, some modifications can be appreciated. As for the QWSs 1 -1 , 2 -2 , and 3 -3 of primary interest here, their energy positions in the Au energy gap experience downward shift by about 0.2 eV. The shape of the bands 1 and 1 is hardly affected by the variation in the Ir ML position. On the contrary, the downward shift of the band 2 depends slightly on the wave-vector magnitude being maximal at the SBZ center. As a result, its dispersion in the Au energy gap becomes almost flat and appears closer to E F crossing the corresponding Rashba point. As for the bands 3 and 3 its dispersion is also affected by such downward shift. In the K direction, the bands 3 and 3 disperse down to energy of −0.35 and about −0.6 eV, respectively. After reaching the bottom the dispersion of corresponding resonances becomes positive. After crossing with another resonance state its dispersion changes the sign again. Entering the Au energy gap around the K point these QWSs reestablish its true surface character with very close dispersion and almost reaches E F at K. On the contrary, along the M direction the QWS bands 3 and 3 disappear rather quickly upon entering the Au bulk-band continuum. Some other changes can be noted in the dispersion of the other Ir-derived states. For instance, the lowest-energy Ir QWSs in the upper-energy gat at K shift upward and locate in the Au energy gap increasing its localization in the Ir ML. The spin texture of the QWS bands 1 -1 , 2 -2 , and 3 -3 shown in Fig. 8(c) resembles that in Fig. 3(c) although a small reduction in the xy spin amplitude in the QWS reported on the left panel of Fig. 8(c) can be noted. As for the z spin orientation, the right panel of Fig. 8(c) confirms that it is negligible for all the states above −1 eV. The fitted values for the Rashba splitting coefficients for the QWSs 1, 2, and 3 can be found in Table I. Some reduction of the α R can be noted in comparison to the Ir top position. Nevertheless, the unusually large values are obtained even in this case despite local symmetry in the Ir ML vicinity in the z direction and essentially d character of the Ir QWSs. This can be explained by the role played by the s-p component in these states, which is larger expanded in the space as seen in Fig. 9 and can feel the different environment around the Au top atomic layer in comparison to the gold interior. Regarding LDOS reported in Fig. 
8(d), its comparison with the case when the 1ML Ir atomic layer is placed on top of the Au(111) surface reveals a few quantitative differences but qualitatively its behavior is similar. Spin polarization for this case also very close to the previous one. Thus we may conclude that all features of this heterostructure are not very sensitive to the position of the Ir adlayer with respect to the top Au atomic layer. C. 2ML-Ir/Au(111) The calculated electronic structure of the 2ML-Ir/Au(111) system is presented in Fig. 10. In the SOC band structure of Fig. 10(a), one can see that the addition of an Ir ML produces numerous modifications in the electronic states in the surface region. For instance, the number of the Ir-derived QWS bands increases in the Au energy gaps at the SBZ borders and its energy positions are different from those in Fig. 3(a). In particular, four unoccupied spin-resolved QWS bands appear in the vicinity of the MK line. Such an increase of the QWS number and energy variation is in accord with the increased thickness of the Ir film. The number of Ir-derived QWS pairs in the Au energy gap around the SBZ center increases from three to four as well. The dispersion of the bands 1 and 1 resembles that in the case of 1ML-Ir/Au(111), although its parabolic shape in the vicinity of in Fig. 10(a) is more pronounced. Similar to the WSOC case of Fig. 10(b) these bands are degenerate at the point. However, like in the 1ML-Ir/Au(111) case, the value of spin splitting of the QWSs 1 and 1 depends on the wave vector magnitude. As a result, we could not fit their dispersion by the Rashba model in a satisfactory way. The spin texture of the QWSs 1 and 1 reported in Fig. 10(d) presents strong z spin component in the -point vicinity, whereas the in-plane component is negligible. Upon increase of the wave vector, the spin alinement becomes in-plane-like, similar to a typical Rashba-like in-plane spin texture. Due to SOC, the energy gap in 0.39 eV exists between the QWSs 1 -1 and 2 -2 at the point. The minimal indirect gap between the QWSs 1 and 2 is of 0.15 eV. The dispersion of the QWS 2 is positive at any k in Fig. 10(a). Along the K symmetry direction this QWS reaches the Au energy gap border at the energy about 0.65 eV merging the QWS 2 . Interaction with the Au bulk-like states of the resulting resonance is so strong that it disappears very quickly. Along the M direction, the dispersion of the QWSs 2 and 2 becomes the same after leaving the Au energy gap. It can be traced over the large distance since after entering the Au projected-bulk-state continuum it maintains its strong surface state character. It disappears in the vicinity of the M point only. Comparing Figs. 10(a) and 10(b), one can see that the SOC switching on results in a strong spin-orbit splitting of the doubly degenerate QWS 2 in the Au energy gap. The resulting QWSs 2 and 2 experience also notable downward shift at the SBZ center. Interestingly, the spin splitting of the QWS 2 is notably larger than the similar Pt-derived QWS in the 2ML-Pt/Au(111) system [54]. Moreover, the shape of the 2 and 2 QWS bands in 2ML-Ir/Au(111) allows us to apply the Rashba model. The values of the Rashba coefficient are reported in Table I are unusually large. Strong hybridization of these QWSs with other Ir QWSs in the vicinity of the SBZ center is reflected in their spin texture reported in Fig. 10(c). 
There one can see that the in-plane spin orientation of QWSs 2 and 2 changes the sign on each side from the Rashba point which is not described by the Rashba model. In Fig. 10(a), beyond the Rashba point vicinity, the QWSs 3 and 3 have strong downward dispersion upon increasing the wave vector. In the K direction, upon reaching the Au band-gap boundary the QWS 3 transforms into a strong resonance with a minimum at the energy of −0.38 eV. At larger k its dispersion can be hardly traced due to strong hybridization with other Ir QWS. On the contrary, the QWS 3 disappears rather quickly after leaving the energy gap. The same we observe for both these QWSs in the M direction. The dispersion of the spin-split QWSs 3 and 3 is modified significantly in comparison with the "parent" QWS 3. At the point the energy position of the Rashba point of these bands locates at significantly lower energy shifting to 0.10 eV. Its dispersion shape and spin-texture reported in Fig. 10(c), is closer to the conventional Rashba model. This is also reflected in the smaller difference between the Rashba coefficients obtained in the two fitting procedures as evidenced from Table I. An increased number of the Ir MLs results in the addition of two other Ir-derived spin-degenerate QWSs at lower energies in the Au energy gap around the SBZ center. In Figs. 10(a) and 10(b), these states are labeled as 4 and 5. They are degenerate at at −0.85 eV. A similar set of doubly degenerate QWSs is observed in the 2ML-Pt/Au(111) system at the bottom of the Au energy gap [54]. However, SOC does not separate the QWSs 4 and 5 at the SBZ center in 2ML-Ir/Au(111) like it occurs in 2ML-Pt/Au(111) [54]. In the vicinity of the dispersion of the QWS 4 band is almost flat in both symmetry directions. It disappears beyond the energy gap. However, along the K it reappears again at larger wave vectors and energies as a strong resonance with positive dispersion that can be detected up to the energy of about 1.5 eV above E F . Being flat inside the energy gap the QWS 5 band dispersion almost coincides with that of the QWS 4. In the K direction, the QWS 5 state after reaching the energy gap border splits into two spin-resolved resonances with negative dispersion. In Fig. 10(a), its presence can be observed down to the energy of −1.7 eV. A much more weak resonance linked to the QWSs 4 and 5 with a similar downward dispersion can be detected in M as well. In the M the QWS 5 maintains almost the same energy over about 1/3| M|. At larger wave vectors it disperses upward, crosses E F , and reaches the M point vicinity where it ceases to exist. At the SBZ borders, all other Ir QWSs experience notable spin-orbit splitting as well. The splitting depends strongly on the wave vector. Therefore it is difficult to apply a Rashba model for the description of these states. The presence of the QWSs with strong localization in the Ir adlayer produces significant increase in LDOS of both Ir MLs as seen in Fig. 10(d). In particular, a strong increase is observed in the energy interval from −1.1 to 0.6 eV. Even a larger LDOS increase can be found in the surface Ir atomic layer at energies around −2.5 eV. In the interface Au atomic layer, we also observe some enhancement of LDOS around E F and in the −0.9 to −1.7 eV energy interval. D. 3ML-Ir/Au(111) In Figs. 11(a) and 11(b), the SOC and WSOC band structure of the 3ML-Ir/Au(111) system is presented, respectively. 
Their comparison demonstrates that in the 3ML-Ir/Au(111) system the inclusion of SOC produces a notable impact on the Ir-derived QWSs. In particular, as in the previously discussed systems, the energy position and dispersion of the QWSs change significantly. Nevertheless, the spin splitting is reduced in all the QWSs in accord with the increased thickness of the Ir adlayer. The QWSs 1 -1 , 2 -2 , and 3 -3 can be found at somewhat higher energies than in the systems containing the thinner Ir adlayers. Their dispersion is rather close to the 2ML-Ir/Au(111) case. The upper-energy QWSs 1 and 1 have an upward parabolic-like dispersion with a clear surface-like character up to energies about 2.5 eV. These bands have the bottom at 1.0 eV and are spin-split at finite wave vectors. However, such a splitting is notably smaller than in Figs. 3(a) and 10(a). Again, the Rashba model can not be applied for the description of the spin splitting of these states in 3ML-Ir/Au(111). The QWSs 2 and 2 have a much flatter dispersion. Inside the Au energy gap the spin splitting of the QWSs 2 and 2 in Fig. 11(a) is re duced notably as well. This is confirmed by the fitted Rashba coefficients reported in Table I, which drop significantly in comparison with the previously discussed three systems. Their energy at the Rashba point is 0.58 eV and the separation caused by SOC diminishes upon approaching the energy gap boundaries in both symmetry directions. In the K direction, the QWSs 2 and 2 evolve in the Au energy gap up to the energy about 1.1 eV. Having reached the gap boundary these bands transform into resonance and quickly lose their surface character inside the Au bulk-band continuum. Along M these QWSs are degenerate beyond the gold energy gap and have a resonance character with positive dispersion. Interestingly, even at a such Ir film thickness, the Ir-derived QWSs feel rather efficiently the details of the substrate band structure. Like in 2ML-Ir/Au(111), in Fig. 11(a), we find the Ir generated QWSs 3 and 3 in the vicinity of E F . The Rashba point of these states is at 0.22 eV. As seen in Fig. 11(a), these QWSs exist only inside the energy gap. In both symmetry directions, these QWSs have one minimum and one maximum. Such dispersion shape ensures that the lower-energy band 3 crosses E F only. Besides the strong deviation of the dispersion of these QWSs from a parabolic-like shape, its spin texture in Fig. 11(c) has essentially a conventional in-plane alignment. Nevertheless, at finite wave vectors along K, a nonzero z spin component in these QWSs can be detected. The Rashba coefficients for these states is reduced significantly in comparison to the other systems. In the occupied part of the band structure of Fig. 11(a), we find other four Ir-induced QWSs denoted as 4, 5, 6, and 7, which are spin-degenerate at the point. As Figs. 11(a) and 11(c) evidence, SOC produces strong energy separation of the upper-energy QWSs 4 and 5. Thus, at , an energy gap of 0.42 eV opens between them. On the contrary, the spin-orbit splitting of the QWSs 4 and 5 is rather modest. Only at k in the vicinity of 0.3 K large energy separation in the pair originated from the QWS 4 is observed. It is caused by strong hybridization with the QWS 3 . We relay to this hybridization the notable z spin component in these QWSs observed in Fig. 11(c) along K. Beyond the point, the upper QWS 4 disperses upwards and splits into two branches by SOC. At larger wave vectors its dispersion becomes negative. 
Upon approaching the gap boundaries on both sides from the SBZ center, it transforms into a weak resonance. After that, it is difficult to follow its dispersion. The lower-energy QWS 5 splits into two spin-resolved bands at finite wave vectors with a positive dispersion and a large effective mass around the SBZ center. After reaching the gap boundary in the K direction, its dispersion becomes negative. It has a minimum at about −0.73 eV. At larger wave vectors this state loses its surface character. In contrast, along the M direction, after entering the Au-projected-bulkstate continuum the QWS 5 presents pronounced positive dispersion and reaches the maximum at 0.84 eV in the vicinity of the M point. The presence of two other Ir QWSs labeled as 6 and 7 at the bottom of the substrate energy gap around the SBZ center in contrast with a single QWS 6 in 3ML-Pt/Au(111) outside the Au band gap [54] is explained by the upper energy position of the valence d bands in Ir. In the 3ML-Ir/Au(111) SOC band structure, we find that an energy gap of 0.10 eV opens between the QWSs 6 and 7. However, the spin-orbit splitting of the QWSs 6 and 7 is small. Only upon the transformation into resonance states after leaving the energy gap, the energy separation between the spin-split bands becomes notable. However, it is difficult to analyze its dispersion in terms of the Rashba model due to strong hybridization effects. Other QWSs with true surface and a resonance surface character can be observed at finite wave vectors in Fig. 11(a). The properties of the majority of them are very similar to those discussed in 3ML-Pt/Au(111) [54]. Therefore we do not concentrate on them in this publication. Figure 11(d) demonstrates that the quantization of the Ir electronic states in 3ML-Ir/Au(111) produces a strong enhancement in LDOS in four upper atomic layers. Strong peaks in the vicinity of E F with the maximum at 0.2 eV are observed in all three Ir atomic layers. In the occupied part, the enhancement of charge in LDOS is maximal in the surface Ir layer. Interestingly, in the interface Ir layer, LDOS below E F is notably larger than in the second layer. In the interface Au atomic layer LDOS is enhanced as well in certain energy regions. Especially the large increase we observe in the energy region between −0.8 and −1.7 eV. IV. CONCLUSIONS We have performed the density-functional-theory calculations of the electronic structure of the nML-Ir/Au(111) heterostructures with n = 1, 2, and 3. Varying the adlayer thickness allowed us to study the formation of the Ir-derived valence quantum-well states in a systematic way with special attention to the states in a wide s-p energy gap of the Au(111) substrate at the surface Brillouin zone center. In order to address the impact of spin-orbit interaction in such systems, the calculations with and without the spin-orbit coupling were performed. We find that the spin-orbit splitting of the Ir quantumwell states is very large in nML-Ir/Au(111) at all the considered n. In particular, the maximal value of the Rashba coefficient α R = 6.4 eV Å was obtained for one QWS in 1ML-Ir/Au(111). Moreover, for the upper-energy QWSs in the vicinity of the SBZ center in all the systems the values for α R exceeding 2 eV Å were obtained. We relay such strong spin-orbit splitting in the Ir-derived QWSs to its dominating d character and strong mutual hybridization. 
The large number of quantum well states strongly localized at the surface results in a strong enhancement of the layer-resolved density of states at energies around the Fermi level in comparison with the clean Ir(111) and Au(111) surfaces. The energy positions of the QWSs with respect to the Fermi level depend strongly on the Ir adlayer thickness, which may be attractive for the engineering of such electronic systems. We expect that such systems would be interesting for the experimental observation of the effects found in this study.
Ancient administrative handwritten documents: X-ray analysis and imaging The heavy-element content of ink in ancient administrative documents makes it possible to detect the characters with different synchrotron imaging techniques, based on attenuation or refraction. This is the first step in the direction of non-interactive virtual X-ray reading. Reading ancient handwritings: role of X-rays We have correlated the chemical analysis by X-ray fluorescence spectroscopy (XRF, with a portable instrument) of 15th to 17th century administrative Italian documents and the X-ray imaging of the handwritten characters using synchrotron radiation. This combined approach has enabled us to extensively and flexibly characterize the ink composition prior to the imaging tests, then to exploit the properties of synchrotron radiation for advanced X-ray imaging (including refraction-based contrast when attenuation is weak) and tomography, and finally to correlate the results. Heavy elements in inks (Del Carmine et al., 1996) normally allow their detection by X-ray attenuation. However, strong concentration fluctuations occur between different manuscripts and between different areas of the same specimen. Sometimes low concentration impedes detection by attenuation; an alternative method is refractive-index contrast (Hwu et al., 1999) in differential phase contrast imaging . Our study is part of the international Venice Time Machine (VTM) project (http://dhvenice.eu). The Archivio di Stato in Venice holds about 80 km of archival documents spanning over ten centuries and documenting every aspect of the Venetian Mediterranean Empire. If unlocked and transformed into a digital data system, this information could change significantly our understanding of European history. But the sheer mass of data is a problem. VTM plans to digitalize and decipher the entire collection in 10-20 years. To facilitate and accelerate this task, the project also explores new ways to virtually 'read' manuscripts, rapidly and non-invasively. We specifically plan to use X-ray tomography to computer-extract page-by-page information from sets of projection images. The raw data can be obtained without opening or manipulating the manuscripts, reducing the risk of damage and speeding up the process. This approach is based on precursor projects exploiting X-rays to decipher documents. Notably, synchrotron light was used to retrieve 'lost' text from the 'Archimedes Palimpsest' (Bergmann, 2000) with X-ray fluorescence. The use of X-ray tomography to analyze handwriting was pioneered by Seales et al. (Lin & Seales, 2005;Baumann et al., 2008;Seales et al., 2011) with the EDUCE project. A top-level program in this direction was launched by T. Wess of Cardiff University (Mills et al., 2012;Patten et al., 2013). In particular, the project virtually 'unrolled' scrolls producing flat readable images and assessed the possibility of damage by X-ray exposure. These efforts are part of several pioneering studies exploiting synchrotron techniques for the humanities and art (Janssens, 2011;Creagh & Bradley, 2007;Caforio et al., 2014;Morigi et al., 2010;Reischig et al., 2009;Dik et al., 2008Dik et al., , 2010Možir et al., 2012;Faubel et al., 2007;Kennedy et al., 2004;Gunneweg et al., 2010;Murphy et al., 2010). In general, such efforts have demonstrated high effectiveness in clarifying issues, notably those related to chemical and microstructural properties and their relations to issues such as environment-related damage, origin, dating etc. 
However, they still realise only a small fraction of the potential applications of synchrotron experiments in these domains. Along the path to deciphering the Venice collection, we must deal with a series of key issues. A fundamental one is the nature of the ink in everyday documents (as opposed to pieces of high artistic or historical value) and its detection by X-rays. This is the subject of our present study. The issue is by no means clear a priori. The very nature of the VTM project requires studying archival documents such as ship records, notary papers, work contracts, tax declarations, commercial transactions and demographic accounts. For such items, the ink composition is scarcely documented. Furthermore, the Archivio di Stato collection spans ten centuries, with inevitable fluctuations in the ink chemistry. Clarifying these issues has a remarkable and multi-faceted potential impact. First, in addition to the Venice collection the techniques could be applied to many documents at risk throughout the world. Second, the chemical and microstructural information is also relevant for the deterioration mechanisms and could help in preventing them. Our main results are the following. First, heavy elements are systematically detected in the inks of personal and commercial documents over the three centuries investigated here. Second, the chemical composition is basically consistent with the historical records of the inks (Yale University Library Special Collections Conservator Unit, 2012; Capella, 420). Third, the relative amounts of heavy elements change drastically between different document areas and from manuscript to manuscript, with no historical trend. Fourth, there is, as expected, a direct correlation between the quality of attenuation-contrast images and the chemical composition. In the best situations the X-ray attenuation image quality is good enough to perform tomographic reconstruction of phantom 'volumes' created by stacking ancient manuscript fragments. In the worst cases, even simple visualization is a problem. In such cases, the properties of synchrotron light become very helpful. Indeed we exploited its spatial coherence (Margaritondo, 2002) to record images with refractive-index contrast. As demonstrated by the extensive experience (Hwu et al., 1999) with other types of specimens, this could allow the recognition of faded-out or very weak characters. Chemistry of ancient inks in everyday administrative writings Why could heavy elements be present in ancient inks? Let us start from black inks (Yale University Library Special Collections Conservator Unit, 2012; Capella, 420). For many centuries, Europe widely used a formula generically denominated as 'iron gall', a name suggesting the presence of iron. A less common black ink, with no heavy elements, was the Roman atramentum scriptorium, based on lampblack with a gum binder. In addition to black inks, part of the documents, typically those with important writings, used coloured inks. Some of them contained heavy elements, e.g. mercury in cinnabar red (Delaney et al., 2014). Other common recipes did not use heavy elements, e.g. the Brazil red (Yale University Library Special Collections Conservator Unit, 2012). Even the generic iron gall formula corresponded to a wide variety of ingredients and recipes (Yale University Library Special Collections Conservator Unit, 2012). The basic fabrication process was the reaction of an acid with an iron compound. 
The most common procedure involved tannic acid (C₇₆H₅₂O₄₆) and iron sulfate (FeSO₄) in rainwater, white wine or vinegar (Smith, 2009). Tannic acid was obtained from plants, the richest source being the 'galls' produced by trees in response to parasite attacks (e.g. by gall wasps); for example, the British oak galls or the top-quality 'Aleppo galls'. Iron sulfate was known as 'green vitriol', extracted from mines, notably coal mines. The reaction of tannic acid with FeSO₄ produced, with oxygen exposure, ferrotannate, a black pigment. In addition to the pigment, the black inks also contained a water-soluble binder. One of the most common was gum arabic. This is a natural product of trees, e.g. acacia, rich in polysaccharides and glycoproteins; its main component is arabin (Smith, 2009; Ruggiero, 2002), the calcium salt of the polysaccharide arabic acid. Other ingredients could be present, such as logwood pigment. One important property of iron gall inks is their time evolution. Although a small quantity of black pigment is developed with the oxygen present in the solution, most is produced with atmospheric oxygen after writing, over hours or days. Furthermore, the ink is corrosive over very long periods of time, chemically attacking the substrate. The ink-substrate interaction was extensively analyzed by Banik et al. (1983) and Neevel & Reissland (1997); Proost et al. (2004) and Kanngiesser et al. (2004) attacked this issue with synchrotron radiation XANES (X-ray absorption near-edge structure) and microfluorescence techniques. The above features agree with our chemical analysis, based on XRF. We performed the XRF experiments with a portable µ-XRF spectrometer ARTAX (model 200; Bruker). This instrument uses an air-cooled fine-focus Mo X-ray source with a collimator. The detector is a Peltier-cooled silicon drift device with a 10 mm² active area, reaching a resolution better than 150 eV for the Mn Kα fluorescence, with a count rate up to 10⁵ s⁻¹ and a dead time < 10% at 4 × 10⁴ s⁻¹. The instrument is equipped with a visible-light CCD camera (20× magnification) and with a pointing laser, to identify and picture the analyzed area. In our tests, XRF spectra were recorded under He flux, operating the spectrometer at 15 kV and 1500 µA (exposure time 180 s). The specimens were carefully handled to avoid contamination but posed no particular fragility problems. The portable instrument enabled us to analyze whole manuscripts and several micro-areas of each manuscript. Its performance was perfectly adequate for our chemical analysis without requiring a synchrotron source. In principle, the XRF instrument could also yield images, but the corresponding time per image would have been exceedingly long (several days per picture) for our final objectives, and it would exceedingly complicate tomography. Figs. 1-4 include XRF spectra taken from 200 µm-wide (the collimator size) spots of 16th and 17th century Italian writings of personal and administrative nature and from a 15th century religious parchment manuscript. We acquired them using the same geometric conditions for all specimens. All spectra revealed Fe in the ink areas, as opposed to the substrate areas. In addition, we detected Ca in the same ink areas. This is consistent with the use of gum arabic as the binder. Calcium was also found in the substrate of the parchment specimens, presumably due to chalk used during support preparation (Van der Snickt et al., 2008). These rather straightforward results are accompanied by several other findings. 
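Where a more quantitative comparison of ink and substrate spots is useful, the Fe signal can be reduced to a net peak area instead of a visual reading of the spectra. The following minimal sketch assumes an exported two-column spectrum (energy, counts); the file names, the Fe Kα window and the simple linear background model are illustrative assumptions, not part of the instrument software:

```python
import numpy as np

def net_peak_area(energy_kev, counts, line_kev=6.40, half_width=0.15):
    """Estimate the net area of a fluorescence line: integrate the counts in a
    window around the line energy and subtract a linear background taken from
    two side bands."""
    energy_kev = np.asarray(energy_kev, dtype=float)
    counts = np.asarray(counts, dtype=float)
    in_peak = np.abs(energy_kev - line_kev) <= half_width
    left = (energy_kev > line_kev - 3 * half_width) & (energy_kev < line_kev - 2 * half_width)
    right = (energy_kev > line_kev + 2 * half_width) & (energy_kev < line_kev + 3 * half_width)
    # Linear background interpolated between the mean levels of the side bands.
    x_bg = np.array([energy_kev[left].mean(), energy_kev[right].mean()])
    y_bg = np.array([counts[left].mean(), counts[right].mean()])
    background = np.interp(energy_kev[in_peak], x_bg, y_bg)
    return float(np.sum(counts[in_peak] - background))

# Hypothetical usage: compare the Fe K-alpha area of an ink spot and a nearby
# substrate spot (exported spectra; the file names are placeholders).
# e_ink, c_ink = np.loadtxt("ink_spot.txt", unpack=True)
# e_sub, c_sub = np.loadtxt("substrate_spot.txt", unpack=True)
# print(net_peak_area(e_ink, c_ink), net_peak_area(e_sub, c_sub))
```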
In the 1590 (Fig. 1) and 1664 (Fig. 3) specimens the ink also contains Cu and Zn, which are known to contribute to the ink-substrate interaction (Banik et al., 1981). Furthermore, the Fe and Ca content varied substantially. We reached this semi-quantitative conclusion using spectra recorded with geometric, voltage and current conditions as similar as possible for different specimens and areas. The different amounts of Fe correlate well with the X-ray attenuation contrast, as shown in Figs. 1-4. These figures include visible photographs as well as X-ray attenuation images.
Figure 1 Results for a 1590 Italian (Tuscany) handwritten record. Top right: X-ray fluorescence spectra taken from the 200 µm-wide spots marked on the top-left visible picture. Bottom: comparison of a 14 mm × 18 mm visible picture (left) with an X-ray image taken in the free-propagation mode of the TOMCAT beamline (15 keV, exposure time 80 ms) (Stampanoni et al., 2007). In this acquisition mode, two X-ray effects are visible: attenuation allows character recognition whereas phase contrast enhances the paper fibres.
Figure 2 Results like those of Fig. 1, for a 1646 Italian (Tuscany) specimen (image size 18 mm × 14 mm). Here, the iron concentration in the ink is too low to detect characters with X-ray attenuation, whereas the paper fibres are still visible by phase contrast.
Figure 3 Results like those of Fig. 1, for a 1664 Italian (Tuscany) specimen (image size 18 mm × 14 mm). The radiograph reveals otherwise nearly invisible holes caused by ink-induced paper corrosion.
The radiographs were taken at the TOMCAT beamline (Stampanoni et al., 2007) of the Swiss Light Source at the Paul Scherrer Institute. The X-ray source was a 2.9 T superbending magnet with a critical photon energy of 11.1 keV, and an electron beam (source) size of 46 µm × 16 µm. The main optical component is a fixed-exit double-crystal multilayer monochromator that covers the energy range 6-45 keV. The crystal optics is mounted on two independent high-precision goniometers. The first crystal has motorized pitch, roll and horizontal translation; the second crystal has the same degrees of freedom and, in addition, yaw and vertical translation. The entire system is positioned on a base plate that can be vertically adjusted. The vertical size of the beam is 'controlled' by moving the end-station along the beam path (up to 15 m travel range). For the images of Figs. 1-3, the photon energy was 15 keV, and 25 keV for Fig. 4. Throughout the image-recording experiments the storage ring current was kept constant at 400 mA by operation in the top-up mode. TOMCAT can record images with different modes, described in detail by Stampanoni et al. (2007). The images in Figs. 1-3 were taken with the free-propagation operation mode and reflect both phase contrast and attenuation contrast. Phase contrast is primarily visible in the microscopic substrate features. Attenuation contrast prevails in the ink areas. The effects of iron on X-ray attenuation contrast are visible when comparing Fig. 1 and Fig. 3 with Fig. 2: in the 1646 specimen the Fe peak is rather weak and the X-ray image shows no clear evidence of characters. In addition to black (iron gall) ink, ancient manuscripts could also have colour inks. Our 15th century parchment specimen included red characters, written with ink containing Hg (cinnabar). Hg was indeed detected in the X-ray fluorescence spectra, as seen in Fig. 4, and produced excellent attenuation contrast. On the contrary, the Fe signal from the black characters is weak, and the contrast low. 
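The link between Fe content and attenuation contrast can be illustrated with a simple Beer-Lambert estimate. The sketch below is only an order-of-magnitude illustration: the mass attenuation coefficients and the areal densities are rough placeholder values (for real work they would be taken from standard tabulations and from the measured ink load), not measurements from this study.

```python
import numpy as np

# Rough, illustrative mass attenuation coefficients (cm^2/g); quantitative
# values should be taken from standard tables such as NIST XCOM.
MU_RHO = {
    ("Fe", 15.0): 57.0,        # approximate value at 15 keV
    ("Fe", 25.0): 14.0,        # approximate value at 25 keV
    ("cellulose", 15.0): 1.1,
    ("cellulose", 25.0): 0.45,
}

def transmission(areal_density_g_cm2, mu_rho_cm2_g):
    """Beer-Lambert transmission T = exp(-(mu/rho) * areal density)."""
    return np.exp(-mu_rho_cm2_g * areal_density_g_cm2)

def ink_contrast(fe_areal_density, paper_areal_density, energy_kev):
    """Relative contrast between an inked spot (paper + Fe) and bare paper."""
    t_paper = transmission(paper_areal_density, MU_RHO[("cellulose", energy_kev)])
    t_ink = t_paper * transmission(fe_areal_density, MU_RHO[("Fe", energy_kev)])
    return (t_paper - t_ink) / t_paper

# Hypothetical numbers: ~0.5 mg/cm^2 of Fe in a stroke on ~10 mg/cm^2 of paper.
for e in (15.0, 25.0):
    print(e, "keV: relative ink contrast ~", round(ink_contrast(5e-4, 1e-2, e), 3))
```

Under these assumptions the attenuation contrast of the stroke drops by several times when going from 15 keV to 25 keV, which is consistent with using the lower photon energy for the paper specimens and explains why a weak Fe peak (as in the 1646 specimen) can leave the characters invisible in attenuation.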
Positive tests of tomographic reconstruction are shown in Fig. 5 for a 1679 specimen, consisting of a stack of eight 0.8 cm-diameter fragments simulating a small volume. We took a set of projection images over a π rad range, at an angular distance of π/10³ from each other, using 15 keV photons and an exposure time of 10 ms per picture. Fig. 5 shows three tomographically reconstructed 'pages' with recognizable characters (comparable results were obtained for the other pages) and two three-dimensional reconstructed side views. Note that the spatial resolution (6.5 µm detector pixel) is largely sufficient not only to distinguish different 'pages' but also to determine whether the writing is on the front or on the back of each 'page'.
Figure 5 Tomography results for a 1679 Italian (Tuscany) document. Centre: three reconstructed images of virtual 'pages' with different characters, corresponding to the visible photographs on the left-hand side. Right: three-dimensional reconstructions showing side views of the pages with ink visible over them. The tomography was performed for a stack of eight manuscript fragments simulating a small volume. The bright area in the two right-hand-side images is a small magnet keeping the fragment stack in place.
Alternate imaging modes How could we handle the cases of low attenuation contrast? Synchrotron light can provide the answer, allowing imaging techniques with contrast mechanisms related to the real part of the complex refractive index, i.e. to phase-related phenomena like refraction (Margaritondo & Hwu, 2013). Fig. 6 shows a test: the imaged area is the same as that of the red characters of Fig. 4, but the contrast mechanism is different. The image pair was recorded using differential phase contrast (DPC), discussed in detail by Weitkamp et al. (2005). With suitable pixel-by-pixel mathematical processing, the raw DPC images yield pictures corresponding to absorption, scattering and refraction. Fig. 4 shows indeed DPC absorption-contrast images obtained in this way. Fig. 6 shows instead scattering and refraction images. The important point here is that the refraction images reflect the local specimen morphology rather than only its chemical composition. We can speculate that what we see in Fig. 6 reflects the substrate morphology modified by the writing process or by the ink-substrate interaction. This could be used to detect a character when the attenuation contrast gives a faint picture.
Figure 4 Results similar to those of Fig. 1, for 15th century religious writing on parchment, of Italian origin (image size 18 mm × 7 mm). The bottom part shows a comparison of two visible pictures of red and black ink characters and, below them, two X-ray absorption images (recorded with a grating interferometer at TOMCAT, DPC mode, 25 keV, exposure time 80 ms) (Stampanoni et al., 2007).
Conclusions and perspectives Our tests yielded overall positive results along the path to virtual X-ray reading but stressed some potential problems. The positive point is that the European recipes used to fabricate common inks over several centuries produced, in most cases, heavy-element contents sufficient for character detection by X-ray attenuation. We verified that the corresponding image quality is suitable for advanced tomographic reconstruction, at least for a limited number of pages. In some instances, however, the heavy-element content was too weak. Our preliminary tests indicate that these cases can potentially be handled with refraction images. 
Note that the attenuation images and the refraction images carry a wealth of information besides the written characters. They can notably reveal information, complementary to other experimental techniques (Proost et al., 2004; Kanngiesser et al., 2004), related to the ink-substrate interaction, which in many cases leads to corrosion and deterioration. This will hopefully contribute to the identification of ways to prevent long-term damage. Results such as those discussed here open the way to a new strategy for harvesting information from ancient documents, alternative to page-by-page recording of visible pictures. A single tomographic data set could yield the same information more rapidly and with minimized interaction with the document. Many problems remain to be solved along this path: in particular, we must test the extension of the tomographic reconstruction to larger-size documents. Also crucial will be the development of adequate software, for example for automatic analysis of reconstructed images, and the technique must still be optimized to reach its best performance. On the positive side, throughout our tests we found no evidence whatsoever of radiation damage, consistent with Patten et al. (2013).
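As an illustration of the page-by-page extraction principle discussed above, a minimal filtered back-projection sketch with scikit-image on a purely synthetic cross-section of a 'stack of pages' could look as follows; it is not the beamline reconstruction pipeline, and all numbers are invented.

```python
import numpy as np
from skimage.transform import radon, iradon  # scikit-image

# Toy cross-section of a small 'stack of pages': thin horizontal strips
# (paper) carrying a few denser spots (ink). Purely synthetic values.
size = 128
phantom = np.zeros((size, size))
for row in range(28, 101, 12):        # page positions
    phantom[row, 30:98] = 0.2         # paper attenuation (arbitrary units)
    phantom[row, 45:49] = 1.0         # 'ink' deposits
    phantom[row, 80:84] = 1.0

# Parallel-beam projections over 180 degrees, as in a tomographic scan.
theta = np.linspace(0.0, 180.0, 400, endpoint=False)
sinogram = radon(phantom, theta=theta)

# Filtered back-projection; 'filter_name' is the keyword in recent
# scikit-image releases (older releases call it 'filter').
reconstruction = iradon(sinogram, theta=theta, filter_name="ramp")
print("mean absolute reconstruction error:",
      round(float(np.abs(reconstruction - phantom).mean()), 4))
```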
Altered EEG resting-state large-scale brain network dynamics in euthymic bipolar disorder patients Background Neuroimaging studies provided evidence for disrupted resting-state functional brain network activity in bipolar disorder (BD). Electroencephalographic (EEG) studies found altered temporal characteristics of functional EEG microstates during depressive episodes in different affective disorders. Here we investigated whether euthymic patients with BD show deviant resting-state large-scale brain network dynamics as reflected by altered temporal characteristics of EEG microstates. Methods We used high-density EEG to explore between-group differences in duration, coverage and occurrence of the resting-state functional EEG microstates in 17 euthymic adults with BD in an on-medication state and 17 age- and gender-matched healthy controls. Two types of anxiety, state and trait, were assessed separately with scores ranging from 20 to 80. Results Microstate analysis revealed five microstates (A-E) in global clustering across all subjects. In patients compared to controls, we found increased occurrence and coverage of microstate A that did not significantly correlate with anxiety scores. Conclusion Our results provide neurophysiological evidence for altered large-scale brain network dynamics in BD patients and suggest the increased presence of microstate A to be an electrophysiological trait characteristic of BD.
1 Introduction Bipolar disorder (BD) is a common and severe psychiatric disorder, with an important personal and societal burden (Cloutier et al., 2018; Eaton et al., 2012). The worldwide prevalence of BD is considered to range between 1% and 3% (Merikangas et al., 2011; Ferrari et al., 2016). BD patients are frequently misdiagnosed and often identified at late stages of disease progression, which can lead to inadequate treatment (Hirschfeld, 2007) and worse functional prognosis (Vieta et al., 2018). A better understanding of the underlying pathophysiology is needed to identify objective biomarkers of BD that would improve diagnostic and/or treatment stratification of patients. Possible candidates for neurobiological biomarkers in BD could arise from the abnormalities of functional brain networks. Evidence from brain imaging studies consistently points to abnormalities in circuits implicated in emotion regulation and reactivity. Particularly, attenuated frontal and enhanced limbic activations are reported in BD patients (Chen et al., 2011; Houenou et al., 2011; Kupferschmidt and Zakzanis, 2011). Interestingly, regions implicated in the pathophysiology of the disease, such as the inferior frontal gyrus, the medial prefrontal cortex (mPFC), and the amygdala, present altered activation patterns even in unaffected first-degree relatives of BD patients (Piguet et al., 2015), pointing towards brain alterations that could underlie disease vulnerability. Moreover, evidence from functional magnetic resonance imaging (fMRI) studies showed aberrant resting-state functional connectivity between frontal and meso-limbic areas in BD when compared to healthy controls (Vargas et al., 2013). A recently developed functional neuroanatomic model of BD suggests, more specifically, decreased connectivity between ventral prefrontal networks and limbic brain regions including the amygdala (Strakowski et al., 2012; Chase and Phillips, 2016). 
The functional connectivity abnormalities in BD in brain areas associated with emotion processing were shown to vary with mood state. A resting-state functional connectivity study of emotion regulation networks demonstrated that subgenual anterior cingulate cortex (sgACC)-amygdala coupling is critically affected during mood episodes, and that functional connectivity of the sgACC plays a pivotal role in mood normalization through its interactions with the ventrolateral PFC and posterior cingulate cortex (Rey et al., 2016).
The patients were recruited from the Mood Disorders Unit at the Geneva University Hospital. A snowball convenience sampling was used for the selection of the BD patients. Control subjects were recruited by general advertisement. All subjects were clinically evaluated using a structured clinical interview (DIGS: Diagnostic Interview for Genetic Studies; Nurnberger et al., 1994). BD was confirmed in the experimental group by the usual assessment of the specialized program, an interview with a psychiatrist, and a semi-structured interview and relevant questionnaires with a psychologist. Exclusion criteria for all participants were a history of head injury and current alcohol or drug abuse. Additionally, a history of psychiatric or neurological illness and of any neurological comorbidity were exclusion criteria for controls and bipolar patients, respectively. Symptoms of mania and depression were evaluated using the Young Mania Rating Scale (YMRS) (Young et al., 1978) and the Montgomery-Åsberg Depression Rating Scale (MADRS) (Williams and Kobak, 2008), respectively. Participants were considered euthymic if they scored < 6 on the YMRS and < 12 on the MADRS at the time of the experiment, and had been stable for at least 4 weeks before. All patients were medicated, receiving pharmacological therapy including antipsychotics, antidepressants and mood stabilizers, and had to be under stable medication for at least 4 weeks. The experimental group included both BD I (n = 10) and BD II (n = 7) types. To check for possible demographic or clinical differences between groups, subject characteristics such as age, education or level of depression were compared using independent t-tests. Anxiety is highly associated with BD (Simon et al., 2004; 2007) and is a potential confounding variable when investigating microstate dynamics at rest. For example, decreased duration of EEG microstates at rest in patients with panic disorder has been reported (Wiedemann et al., 1998). To check for possible differences in anxiety symptoms, all subjects were assessed with the State-Trait Anxiety Inventory (STAI) (Spielberger et al., 1970). Anxiety as an emotional state (state-anxiety) and anxiety as a personal characteristic (trait-anxiety) were evaluated separately. Scores of both state- and trait-anxiety range from 20 to 80, higher values indicating greater anxiety. The scores were compared between patients and controls using independent t-tests. This study was carried out in accordance with the recommendations of the Ethics Committee for Human Research of the Geneva University Hospital, with written informed consent from all subjects. All subjects gave written informed consent in accordance with the Declaration of Helsinki. The protocol was approved by the Ethics Committee for Human Research of the Geneva University Hospital, Switzerland. 
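The group comparisons described above (independent t-tests, a fallback to Mann-Whitney U tests when variances are inhomogeneous, and Spearman correlations with the STAI scores, as used later in the Results) can be outlined as follows. This is an illustrative sketch with synthetic numbers standing in for the study data, not the authors' analysis code:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-subject values (17 patients, 17 controls);
# in the study these would be age, STAI scores, or microstate parameters.
patients = rng.normal(loc=0.22, scale=0.05, size=17)   # e.g. microstate coverage
controls = rng.normal(loc=0.18, scale=0.05, size=17)
anxiety = rng.integers(20, 81, size=17)                # STAI scores range 20-80

# Demographic/clinical variables: independent two-sample t-test.
t_stat, t_p = stats.ttest_ind(patients, controls)

# Check homogeneity of variances; if violated, use the Mann-Whitney U test.
lev_stat, lev_p = stats.levene(patients, controls)
if lev_p < 0.01:
    u_stat, u_p = stats.mannwhitneyu(patients, controls, alternative="two-sided")
    print("Mann-Whitney U:", u_stat, "p =", round(u_p, 4))
else:
    print("t-test:", round(t_stat, 2), "p =", round(t_p, 4))

# Association between a microstate parameter and anxiety: Spearman correlation.
rho, rho_p = stats.spearmanr(patients, anxiety)
print("Spearman rho =", round(rho, 2), "p =", round(rho_p, 4))
```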
EEG recording and pre-processing The EEG was recorded with a high-density 256-channel system (EGI System 200; Electrical Geodesics Inc., OR, USA), at a sampling rate of 1 kHz, with Cz as the acquisition reference. Subjects were sitting in a comfortable upright position and were instructed to stay as calm as possible, to keep their eyes closed and to relax for 6 minutes. They were asked to stay awake. To remove muscular artifacts originating in the neck and face, the data were reduced to 204 channels. Two to four minutes of EEG data were selected based on visual assessment of the artifacts and band-pass filtered between 1 and 40 Hz. Subsequently, in order to remove ballistocardiogram and oculo-motor artifacts, infomax-based Independent Component Analysis (Jung et al., 2000) was applied on all but one or two channels rejected due to abundant artifacts. Only components related to physiological noise, such as ballistocardiogram, saccadic eye movements, and eye blinking, were removed based on the waveform, topography and time course of the component. The cleaned EEG recordings were down-sampled to 125 Hz, the previously identified noisy channels were interpolated using a three-dimensional spherical spline (Perrin et al., 1989), and the data were re-referenced to the average reference. All the preprocessing steps were done using MATLAB and the freely available Cartool Software 3.70 (https://sites.google.com/site/cartoolcommunity/home), programmed by Denis Brunet.
EEG data analysis To estimate the optimal set of topographies explaining the EEG signal, a standard microstate analysis was performed using k-means clustering (see Supplementary material). The duration in milliseconds indicates the most common amount of time that a given microstate class is continuously present. The global explained variance for a specific microstate class was calculated by summing the squared spatial correlations between the representative map and its corresponding assigned scalp potential maps at each time point, weighted by the GFP (Murray et al., 2008).
Clinical and demographic variables There were no significant differences in age and level of education between the patient and the control groups. In both groups, very low mean scores on depression and mania symptoms were observed, which did not significantly differ between the two groups. BD patients showed higher scores on the state and trait scales of the STAI. For all subject characteristics, see Table 1. Since some microstate parameters showed a non-homogeneity of variances in the two groups (Levene's tests for the microstate C coverage and microstates A and C duration; p < 0.01), we calculated Mann-Whitney U tests to investigate group differences for the temporal parameters of each microstate. We found significant between-group differences for microstate classes A and B. Both microstates showed increased presence in patients in terms of occurrence and coverage. The two groups did not differ in any temporal parameter of microstates C, D, or E. The results of the temporal characteristics of each microstate are summarized in Table 2 and Fig. 2. 
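A minimal sketch of the polarity-invariant ('modified') k-means clustering underlying such a microstate analysis is given below. It is a simplified illustration in Python rather than the Cartool implementation used in the study; the synthetic data, the peak-picking rule and the parameter values are our own assumptions:

```python
import numpy as np

def gfp(eeg):
    """Global field power: spatial standard deviation across channels at each
    time point; eeg has shape (n_channels, n_samples), average-referenced."""
    return eeg.std(axis=0)

def microstate_kmeans(maps, n_states=5, n_iter=25, seed=0):
    """Polarity-invariant (modified) k-means on EEG topographies.
    maps: (n_maps, n_channels) array of topographies taken at GFP peaks."""
    rng = np.random.default_rng(seed)
    maps = maps / np.linalg.norm(maps, axis=1, keepdims=True)
    templates = maps[rng.choice(len(maps), n_states, replace=False)]
    for _ in range(n_iter):
        # Assignment step: absolute spatial correlation ignores polarity.
        labels = np.abs(maps @ templates.T).argmax(axis=1)
        # Update step: first principal component of each cluster's maps.
        for k in range(n_states):
            cluster = maps[labels == k]
            if len(cluster) == 0:
                continue
            _, _, vt = np.linalg.svd(cluster, full_matrices=False)
            templates[k] = vt[0]
    return templates, labels

# Usage sketch on synthetic data standing in for 204-channel recordings
# down-sampled to 125 Hz (real input would be the preprocessed EEG).
rng = np.random.default_rng(1)
eeg = rng.standard_normal((204, 125 * 120))          # two minutes of fake data
g = gfp(eeg)
peaks = np.where((g[1:-1] > g[:-2]) & (g[1:-1] > g[2:]))[0] + 1
templates, labels = microstate_kmeans(eeg[:, peaks].T, n_states=5)
# Temporal parameters (occurrence per second, coverage, mean duration) are
# then computed after back-fitting the templates to every time point.
```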
Clinical correlations The results of Spearman's rank correlation revealed no significant associations between the MADRS and YMRS scores and the occurrence or coverage of microstates A and B (all absolute r-values < 0.30).
Alpha rhythm The Mann-Whitney U test showed significantly decreased alpha power (p < 0.03, Z-value 2.7) in the BD compared to the HC group (see Fig. 3). The results of Spearman's rank correlation revealed no significant associations between the alpha power and the occurrence or coverage of microstates A and B (all absolute r-values < 0.40).
4 Discussion Our study presents the first evidence for altered resting-state EEG microstate dynamics in euthymic patients with BD. Patients were stable and did not significantly differ in their depressive or manic symptomatology from healthy controls at the time of the experiment. Despite this fact, they showed an abnormally increased presence of microstates A and B, the latter correlating with the anxiety level. In an earlier combined fMRI-EEG study, microstate A was associated with the auditory network (Britz et al.). The increased presence of microstate A in euthymic BD patients might be related to the hyperconnectivity of the underlying networks that involve the temporal lobe, insula, mPFC, and occipital gyri. Anxiety symptoms were previously associated with greater severity and impairment in BD (Simon et al., 2004). In our BD patients, we found an abnormally increased presence of microstate B that was associated with higher anxiety. In particular, both the occurrence and the coverage were positively correlated with trait-anxiety scores, whereas only the coverage was correlated with state-anxiety scores. The observed change in microstate B dynamics might therefore be more related to a relatively stable disposition than to the actual emotional state. Previous studies also suggest that anxiety may influence visual processing (Phelps et al., 2006; Laretzaki et al., 2010) and that connections between the amygdala and visual cortex might underlie enhanced visual processing of emotionally salient stimuli in patients with social phobia (Goldin et al., 2009). Our finding of an increased presence of microstate B positively associated with anxiety level in euthymic BD patients is consistent with these observations. Additionally, a more regular appearance of microstate B and increased overall temporal dependencies among microstates were recently reported in mood and anxiety disorders, suggesting a decreased dynamicity in switching between different brain states in these psychiatric conditions (Al Zoubi et al., 2019). Another microstate study on anxiety disorders reported a decreased overall resting-state microstate duration in panic disorder (Wiedemann et al., 1998). That early study, however, did not assess temporal characteristics of different microstates separately, and it is therefore difficult to compare those findings with our observations. Further evidence is needed to determine whether the increased presence of microstate B in our experimental group is a characteristic feature of BD or anxiety, or whether it is related to both conditions. We found an unchanged duration but a higher occurrence and coverage of the A and B microstates in BD patients. In other words, these microstates were not sustained longer once present, yet their presence was increased in patients compared to healthy controls. 
A possible explanation for this finding appears to be a redundancy in the activation of the sensory and autobiographic memory networks. Altered microstate temporal parameters were also observed in patients with multiple sclerosis, moreover predicting depression scores and other clinical variables (Gschwind et al., 2016). It was suggested that multiple sclerosis affects the "sensory" (visual, auditory) rather than the higher-order (salience, central executive) functional networks (Michel and Koenig, 2018).
BD patients were previously shown to display lower alpha power as compared to healthy controls (Basar et al., 2012), as was the case here. We failed, however, to find any significant correlation between the altered microstate dynamics and the decreased alpha power. Our findings, therefore, further support the previously reported independence of microstates from EEG frequency power fluctuations. In summary, the results of the current study seem to indicate that dysfunctional activity of the resting-state brain networks underlying microstates A and B is a detectable impairment in BD during a euthymic state. The presence of microstates A and B represents a measure that might be applicable in clinical practice, although using these parameters for early identification of BD at the individual level could prove challenging. If future studies confirm the same pattern in prodromal or vulnerable subjects, it could help the detection of at-risk subjects and therefore open the possibility of early intervention. The present study has, however, some limitations. Our low sample size made it impossible to examine any potential influence of medication on the microstate parameters by comparing patients receiving a specific drug with those not receiving it. Possible effects of medication on our results should therefore be taken into account. For the same reason, it was not possible to examine any potential influence of subtypes of BD on the microstate results.
Conclusions Our study described altered EEG resting-state microstate temporal parameters in euthymic bipolar patients. Our findings provide an insight into the resting-state global brain network dynamics in BD. Since the increased presence of microstate A is not unique to BD patients, having been reported also in other psychiatric disorders (see Michel and Koenig, 2018), it might be considered only as a non-specific electrophysiological marker of BD. Moreover, studies examining possible interactions between microstate dynamics and BD symptoms are needed to better understand the dysfunction of large-scale brain network resting-state dynamics in this affective disorder.
Figure 3 Median, quartiles and non-outlier range (whiskers) of the average alpha (8-14 Hz) power across 204 channels for each subject group; note the significantly decreased alpha power in the BD compared to the HC group (p < 0.03, Z value 2.7).
9 Conflict of Interest The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. 10 Author Contributions AD designed the study, performed the analysis, and wrote the initial draft; JMA, AGD and CP were responsible for clinical assessment; CMM served as an advisor; CB collected the HD-EEG data and was responsible for the overall oversight of the study. All authors revised the manuscript. 
The funding sources had no role in the design, collection, analysis, or interpretation of the study.
Superior properties of the PRESB preconditioner for operators on two-by-two block form with square blocks Matrices or operators in two-by-two block form with square blocks arise in numerous important applications, such as in optimal control problems for PDEs. The problems are normally of very large scale, so iterative solution methods must be used. Thereby the choice of an efficient and robust preconditioner is of crucial importance. For some time a very efficient preconditioner, the preconditioned square block (PRESB) method, has been used by the authors and coauthors in various applications, in particular for optimal control problems for PDEs. It has been shown to have excellent properties, such as a very fast and robust rate of convergence that outperforms other methods. In this paper the fundamental and most important properties of the method are stressed and presented with new and extended proofs. Under certain conditions, the condition number of the preconditioned matrix is bounded by 2 or even smaller. Furthermore, under certain assumptions the rate of convergence is superlinear. Introduction Iterative solution methods are widely used for the solution of linear and linearized systems of equations. For early references, see [1-3]. A key aspect is then to use a proper preconditioning, that is, a matrix that approximates the given matrix accurately but is still much cheaper to solve systems with, and which results in tight eigenvalue bounds of the preconditioned matrix, see e.g. [4-6]. This should hold irrespective of the dimension of the system and thus allow fast large scale modelling. Thereby preconditioners that exploit matrix structures can have a considerable advantage. Differential operators or matrices on coupled two-by-two block form with square blocks, or which have been reduced to such a form from a more general block form, arise in various applications. The simplest example is a complex valued system (A + iB)(x + iy) = f + ig, where A, B, x, y, f and g are real valued, which, in order to avoid complex arithmetic, is rewritten in an equivalent real valued two-by-two block form, e.g. [[A, -B], [B, A]] [x; y] = [f; g], that is, a form where no complex arithmetic is needed for its solution. For examples of the use of iterative solution methods in this context, see e.g. [7-10]. As we shall see, much more important examples arise, for instance, when solving optimal control problems for partial differential equations. After discretization of the operators, matrices of normally very large scale arise, which implies that iterative solution methods must be used with a proper preconditioner. The methods used are frequently of a coupled, inner-outer iteration type which, since the inner systems are normally solved with variable accuracy, implies that a variable iteration outer acceleration method, such as in [11], or the flexible GMRES method [12], must be used. However, as we shall see, for many applications sharp eigenvalue bounds for the preconditioned operator can be derived, which are only influenced to a minor extent by the inner solver, so one can then even use a Chebyshev iterative acceleration method. This implies that there are no global inner products to be computed, which can save much computer time, since computations of such inner products are mostly costly in data communication and other overhead, in particular when the method is implemented on parallel computers. Over the years numerous preconditioners of various types have been constructed. 
For instance, in a Google Scholar search of a class of matrices based on Hermitian or skew-Hermitian splittings, one encounters over 10,000 published items. Some of them have been tested, analysed and compared in [13]. It was found that the preconditioned square block (PRESB) method has superior properties compared to them and also to most other methods. It is most robust, it leads to a small condition number of the preconditioned matrix, which holds uniformly with respect to both problem and method parameters, and sharp eigenvalue bounds can be derived. The method can be seen as a further development of an early method used in [14], and also of the method in [15]. The method has been applied earlier for the solution of more involved problems, see e.g. [16-18]. We consider here only methods which can be reduced to a form with square blocks. Some illustrative examples of optimal control of parabolic problems with time-harmonic control can be found in [19-22]. In this paper we present the major properties of the PRESB preconditioner on the operator level, with short derivations. This includes the presentation of a typical class of optimal control problems in Sect. 2, an efficient implementation of the method in Sect. 3, derivations of spectral properties with sharp eigenvalue bounds in Sect. 4, an inner-product-free implementation of the method in Sect. 5, and conditions for a superlinear rate of convergence in Sect. 6. To shorten the presentation, we use the shorthands r.h.s. and w.r.t. for "right hand side" and "with respect to", respectively. Symmetric and positive definite and symmetric and positive semidefinite matrices are abbreviated spd and spsd, respectively. The nullspace of an operator A is denoted N(A). A basic class of optimal control problems For various iterative solution methods used for optimal control problems, see [23-35]. For a comparison of PRESB with some of the methods referred to above, see [13]. Some methods are based on the saddle point structure of the arising system and use the MINRES method [28,36] as acceleration method, see e.g. [37-40]. Other methods use the GMRES method as acceleration method [6,12]. In this paper we present methods based on the PRESB preconditioner. This method has been used for optimal control problems, see e.g. [13,19,21]. For other preconditioning methods used for optimal control problems, see [41-45]. For comparisons with some of the other methods referred to above, see [7,13,46]. A particularly important class of problems concerns inverse problems, where an optimal control framework can be used. Examples include parameter estimation [47] and finding inaccessible boundary conditions [48], where a PRESB type preconditioner has been used. As an illustration, we consider a time-independent control problem, first using H¹-regularization and then L²-regularization, with control function u and target solution y as described in [49]; see also [46,50] for more details. For the H¹-regularization, let Ω ⊂ R^d be a bounded connected domain, such that an observation region Ω₁ and a control region Ω₂ are given subsets of Ω. It is assumed that Ω₁ ∩ Ω₂ is nonempty. The problem is to minimize the cost functional (2.1) subject to a PDE constraint (2.2), Ly = f, with given boundary conditions, where c is differentiable and d − ½ ∇·c ≥ 0. Here the fixed boundary term g admits a Dirichlet lift g̃ ∈ H¹(Ω), and β > 0 is a proper regularization constant. 
For notational simplicity we assume now that c = 0 and d = 0. Then the corresponding Lagrange functional takes a standard saddle-point form, where y ∈ g̃ + H¹₀(Ω), u ∈ H¹(Ω₂) and λ is the Lagrange multiplier, and its inf-sup solution equals the solution of (2.1), (2.2). (In the following we omit the integration measure dΩ.) The stationary solution of the minimization problem, i.e. where ∇L(y, u, λ) = 0, fulfils a system of PDEs in weak form, (2.3), for the state and control variables and for the Lagrange multiplier. Using the splitting y = y₀ + g̃, where y₀ ∈ H¹₀(Ω), the system can be homogenized. In what follows, we may therefore assume that g = 0, and hence y ∈ H¹₀(Ω). We consider a finite element discretization of problem (2.3) in a standard way. Let us introduce suitable finite element subspaces and replace the solution and test functions in (2.3) with functions in the above subspaces. We fix given bases in the subspaces, and denote by y, u and λ the corresponding coefficient vectors of the finite element solutions. This leads to a system of equations of the form (2.4), where M₁ and M₂ are the mass matrices used to approximate y and u, i.e. corresponding to the subdomains Ω₁ and Ω₂. In the same way, K and K₂ are the stiffness matrices corresponding to Ω and Ω₂, respectively, and the rectangular mass matrix M corresponds to function pairs from Ω × Ω₂. Here λ and y have the same dimension, as they both represent functions on Ω, whereas u only corresponds to node points in Ω₂. We also note that the last r.h.s. is 0 due to g = 0. In the general case where g ≠ 0 we would have a nonzero term in the last r.h.s., i.e. the non-homogeneity would only affect the r.h.s. and our results would remain valid. Problem (2.3), as well as system (2.4), has a unique solution. Properly rearranging the equations, we obtain the matrix form (2.5). We note that M₂ + K₂ is symmetric and positive definite, so we can eliminate the control variable u in (2.5); hence we are led to a reduced system (2.6) in a two-by-two block form. Here one introduces the scaled vector λ̃ := (1/√β) λ and multiplies the second equation in (2.6) with −1/√β. With the corresponding notation this yields the system matrix (2.7); for this method we assume that K is spd. Similarly, after reordering and a change of sign we obtain an alternative two-by-two block form. In this method K can be nonsymmetric, in which case the matrix block in position (1, 2) is replaced by Kᵀ. For the L²-regularization method, the H¹ regularization term ½β‖u‖² is replaced by the corresponding L² term, which leads to a reduced system of the same structure with M₀ = M M₂⁻¹ Mᵀ. Our aim is to construct an efficient preconditioned iterative solution method for this linear system and to derive its spectral properties and mesh-independent superlinear convergence rate. Construction and implementational details of the PRESB preconditioner Consider an operator or matrix in a general two-by-two block form (3.1), where A and the symmetric parts of B and C are spsd and where the nullspaces N(A) and N(B), and N(A) and N(C), intersect only trivially. Hence A + B and A + C are nonsingular. If B = C, a common solution method (see e.g. [40]) is based on a block diagonal preconditioning matrix. A spectral analysis shows that the eigenvalues of the so preconditioned matrix lie in two intervals placed symmetrically about the origin. This preconditioning method can be accelerated by the familiar MINRES method [36]. Due to the symmetry of the spectrum, its convergence can be based on the square of the optimal polynomial for the interval [1/√2, 1], which has spectral condition number √2 and corresponds to a convergence factor (2^{1/4} − 1)/(2^{1/4} + 1) ≈ 1/12. But note that the indefiniteness of the spectrum requires a double computational effort compared to a single interval. 
To avoid the indefinite spectrum and to enable use of the GMRES method as acceleration method, we now consider the PRESB preconditioner (3.2). Its spectral properties will be shown in the next section. In particular, when B = C, the matrix P_A simplifies accordingly. In the case of the system matrix (2.7) of the control problem, the PRESB preconditioner has the form P_h^(1). We now show that there exists an efficient implementation of the preconditioner (3.2). It can be factorized into a product of block triangular factors, from which its inverse follows explicitly. Therefore, besides some vector operations and an operator or matrix-vector multiplication with B, an action of the inverse involves a solution with the operator or matrix A + B and one with A + C. In some applications A is symmetric and positive definite and the symmetric parts of B, C are also positive definite, which can enable particularly efficient solutions of these inner systems. The above forms have appeared earlier in [13]. Remark 3.1 A system with P_A can alternatively be solved via its Schur complement system, with a corresponding Schur complement S̃. Clearly, one can also use S̃ as a preconditioner to the exact Schur complement S = A + B A⁻¹ C of A, which gives the same spectral bounds as the PRESB method. For further information about the use of approximations of Schur complements, see [5,23]. However, this method requires the stronger property that A is nonsingular and, besides solutions with A + B and A + C, it also involves a solution with A to obtain the corresponding iterative residual. In addition, when the solution vector x has been found, it needs one more solution with the matrix A to find the vector y. Furthermore, in many important applications A is singular. Therefore the method based on Schur complements is less competitive than a direct application of (3.5). 
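The two key facts above — one solve with A + B and one with A + C per application of the inverse, and the favourable spectrum of the preconditioned matrix — can be checked numerically on a toy problem. The sketch below assumes the special case B = C with the two-by-two form [[A, B], [-B, A]] and the PRESB preconditioner [[A, B], [-B, A + 2B]]; the sign convention and the model matrices (a 1D Laplacian for A, a scaled identity for B) are our own illustrative choices and need not match the paper's numbering:

```python
import numpy as np
from scipy.sparse import diags, identity, bmat
from scipy.sparse.linalg import splu, LinearOperator, gmres

# Model blocks: A = scaled 1D Laplacian (spd), B = scaled identity (spsd).
n = 200
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsc() * (n + 1)
B = (identity(n) * 0.5).tocsc()

# Two-by-two block system and its PRESB preconditioner (case B = C).
Ablk = bmat([[A, B], [-B, A]]).tocsc()
P = bmat([[A, B], [-B, A + 2 * B]]).tocsc()

# Applying P^{-1} needs two inner solves (here both with A + B, since C = B)
# plus one multiplication with B; we factorize A + B once.
ApB = splu((A + B).tocsc())
def apply_presb(r):
    r = np.asarray(r).ravel()
    r1, r2 = r[:n], r[n:]
    t2 = ApB.solve(r2 - r1)        # solve with A + C (= A + B here)
    x = ApB.solve(r1 - B @ t2)     # solve with A + B
    return np.concatenate([x, x + t2])
M = LinearOperator((2 * n, 2 * n), matvec=apply_presb)

# The eigenvalues of P^{-1} A_blk are expected in [1/2, 1] for this case.
evals = np.linalg.eigvals(np.linalg.solve(P.toarray(), Ablk.toarray()))
print("eigenvalues in [%.3f, %.3f]" % (evals.real.min(), evals.real.max()))

# GMRES with the PRESB preconditioner converges in a handful of iterations.
b = np.ones(2 * n)
x, info = gmres(Ablk, b, M=M, maxiter=200)
print("GMRES info:", info, " residual:", np.linalg.norm(Ablk @ x - b))
```

With these choices the computed eigenvalues indeed lie in [1/2, 1], in line with the bounds derived in the next section, and the preconditioned GMRES residual drops to the default tolerance within a few iterations.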
For a small such relative part the convergence of the iterative solution method will be exceptionally rapid. As we will show later, such small parts can occur for time-harmonic problems with a large value of the angular frequency. We present now a proof of rate of convergence under the weaker assumption that A is spsd. Proposition 4.4 Let A and B Proof The generalized eigenvalue problem takes here the form Hence and it follows from (4.4) that Clearly, any vector x ∈ N (B+B * ) corresponds to an eigenvalue λ = 1. It follows from Hence, λ = 1 in this case also. To estimate the eigenvalues λ = 1, we can consider subspaces orthogonal to the space for which λ = 1. We denote the corresponding inverse of A as a generalized inverse, A † . It holds then Spectral properties of the preconditioned matrix, P (1) h for the basic optimal control problem We recall that the preconditioner P (1) h is applicable only if K is spd. To find the spectral properties of the preconditioned matrix P can use an intermediate matrix, and first find the spectral values for B −1 P (1) h and then for this gives the wanted properties. Let then μ denote an eigenvalue of the generalized eigenvalue problem, Here We note that if ξ = 0, then η = 0, since K is spd. Since ξ + η ∈ N ( M 1 − M 0 ) ⊥ , it follows then that both ξ = 0 and η = 0 and Hence μ is contained in an interval bounded independently of the parameters h and β. Consider now the eigenvalue problem The second row yields again M 1 ξ = Kη. Substituting this in the first equation, leads to Taking the inner product with η, and using (Kξ then we readily obtain: where θ min and θ max are defined in (4.7). In order to study the uniform behaviour of θ min and θ max as β → 0, note that the definition of M 1 and M 0 implies More precisely, we can make the estimate as follows. We have On the other hand, the previously seen equality M 1 ξ = Kη implies that Kη has zero coordinates where M 1 ξ has, i.e. in the nodes outside 1 is bounded below uniformly in β. Hence, altogether, θ min , θ max and ultimately the spectrum of P −1 h A h are bounded uniformly w.r.t β ≤ c h 4 . Spectral analyses for the preconditioner P (2) h The analyses of the preconditioning matrix C = P (2) h in (2.9) of A = A (2) h will take place in two steps. We introduce then an intermediate matrix B for which the preconditioning of C follows from Sect. 4.1. We assume here that the observation domain is a subset of the control domain. Hence P (2) h = BB −1 C will be considered as the preconditioner to A and using the already described eigenvalue bounds for B −1 C, we only have to derive eigenvalue bounds for B −1 A. Let then where M is a weighted average, Note that since 0 ⊂ 1 , E is symmetric and positive semidefinite. Hence from Here We note that the upper bound in (4.9) is taken for ξ = 0. Then it follows from (4.8) that K T η = 0. Hence Hence the spectral condition number of B −1 A is bounded by As we have seen, it holds that the condition number of Since γ 0 and γ 1 are not known in general a proper value of the parameter α can be α = 1/2. Then However, if γ 0 is small, but γ 1 sufficiently larger than unity, then it is better to let α = 1 − ε, where ε is small. Then On the other hand, if γ 0 is large, that is if the observation domain 0 nearly equals the control domain, we note that γ 0 → ∞ and ε). In fact, if M 0 = M 1 , then E = 0, and we can let α = 0 i.e. M = M 0 = M 1 . In all cases, the considered bounds hold uniformly with respect to regularization parameter β and in principle also w.r.t. 
the mesh parameter h. Remark 4.1 Other well-known preconditioning strategies for general two-by-two block matrices, such as block-triangular preconditioners, are also applicable, cf., e.g., [24,55,56]. We do not discuss them here any further. Although robust with respect to the involved parameters, in [7,13,46,50] some of them have been shown to be computationally less efficient than PRESB on a benchmark suite of problems. The PRESB preconditioning method is not only fastest in general, but also more robust. Its convergence factor is bounded by nearly 1/6, which shows that after just 8 iterations the norm of the residual has decreased by a factor of about 0.5 · 10⁻⁶. Moreover, it is even somewhat faster due to the superlinear convergence to be discussed in Sect. 6. Inner-outer iterations The use of inner iterations to some limited accuracy perturbs the eigenvalue bounds for the outer iteration method. As pointed out in [51], see also [5], one must then in general stabilize the Krylov iteration method. However, it has been found that for the applications we are concerned with, the perturbations are quite small and, even if they can give rise to complex eigenvalues, one can ignore them, as the outer iterations are hardly influenced by them. Inner product free methods Krylov subspace type acceleration methods require computations of global inner products, which can be costly, in particular in parallel computer environments, where the inner products need global communication of data and start-up times. It can therefore be of interest to consider iterative solution methods where there is no need to compute such global inner products. Such methods have been considered in [52], but here we present a shorter proof and some new contributions. As we have seen, the PRESB method mostly results in sharp eigenvalue bounds. This implies that it can be very efficient to use a Chebyshev polynomial based acceleration method instead of a Krylov based method, since in this method no global inner products arise. As shown e.g. in [52,57], the method takes the form presented in the next section. Numerical tests in [52,58] show that it can outperform other methods even on sequential processors. A modified Chebyshev iteration method Given eigenvalue bounds [a, b], the Chebyshev iteration method, see e.g. [1-5], can be defined by a three-term recursion. For problems with outlier eigenvalues one can first eliminate, i.e. 'kill', them, here illustrated for the maximal eigenvalue, by use of a corrected right-hand-side vector; one then solves, by use of the Chebyshev method, for the remaining eigenvalue bounds and finally computes the full solution. However, due to rounding and small errors in the approximate eigenvalues used, the Chebyshev method makes the dominating eigenvalue component 'awake' again, so only very few steps should be taken. This can be compensated for by repetition of the iteration method, but then for the new residual. The resulting algorithm, a reduced condition number Chebyshev method, is: for a current approximate solution vector x, until convergence, solve B⁻¹A x̃ = q by the Chebyshev method with reduced condition number, compute x = x̃ + (1/λ_max) q, and repeat. In some problems a large number of outlier eigenvalues larger than unity appear. Normally they are well separated. 
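Before turning to the case of many outliers, a minimal sketch of the basic Chebyshev recursion for given eigenvalue bounds [a, b] of the preconditioned matrix is shown below (the classical three-term form found in standard textbooks, e.g. Saad). It needs no inner products, but it does not reproduce the full reduced condition number algorithm with eigenvalue deflation described above, and the toy usage is our own:

```python
import numpy as np

def chebyshev(A, b, apply_prec, lmin, lmax, x0=None, maxiter=50):
    """Preconditioned Chebyshev iteration for M^{-1}A with eigenvalue bounds
    [lmin, lmax]; no inner products are required."""
    x = np.zeros(len(b)) if x0 is None else x0.copy()
    theta = 0.5 * (lmax + lmin)      # centre of the spectrum
    delta = 0.5 * (lmax - lmin)      # half-width of the spectrum
    sigma1 = theta / delta
    rho = 1.0 / sigma1
    r = b - A @ x
    d = apply_prec(r) / theta
    for _ in range(maxiter):
        x = x + d
        r = b - A @ x
        rho_new = 1.0 / (2.0 * sigma1 - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * apply_prec(r)
        rho = rho_new
    return x

# Toy usage: a diagonal matrix whose spectrum already lies in [0.5, 1],
# standing in for a PRESB-preconditioned operator; preconditioner = identity.
rng = np.random.default_rng(0)
A = np.diag(rng.uniform(0.5, 1.0, size=100))
b = rng.standard_normal(100)
x = chebyshev(A, b, apply_prec=lambda r: r, lmin=0.5, lmax=1.0, maxiter=30)
print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```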
One can then add those closer to unity to the interval [1/2, 1], to form a new interval [1/2, λ₀], where λ₀ > 1 but not very large, and let the remaining eigenvalues, say in [λ₁, λ_max], form a separate interval. After scaling the intervals one then gets two intervals for which a polynomial preconditioner with the polynomial λ(2 − λ) can be used. It is also possible to use a combination of the Chebyshev and Krylov methods, that is, to start with a Chebyshev iteration step and continue with a Krylov iteration method. This has the advantage that the eigenvalues can be better clustered after the first Chebyshev iteration step, so the Krylov iteration method will converge superlinearly fast from the start. If the eigenvalues of the preconditioned matrix are contained in the interval [1/2, 1], we then use a corresponding polynomial preconditioner P. Let μ denote the eigenvalues of P(B⁻¹A). Then μ(λ) = λ(3 − 2λ), so min μ(λ) = μ(1/2) = μ(1) = 1 and max_λ μ(λ) = 9/8, which is attained for λ = 3/4. Hence the convergence rate factor for a corresponding Krylov subspace iteration method (see e.g. [3]) becomes bounded accordingly, which leads to a very fast convergence and which is further improved by the effect of clustering of the eigenvalues. Superlinear rate of convergence for the preconditioned control problem As we have seen, the condition number can be small, but not in all applications. Even if it is small, it can be of interest to examine the appearance of a superlinear rate of convergence. Under certain conditions one observes a superlinear rate of convergence of the preconditioned GMRES method. Below we first recall well-known general conditions for the occurrence of this, and then derive this property in applications for control problems. For some early references on superlinear rates of convergence, see [59-61,69] and the authors' papers [66,70]. Preliminaries: superlinear convergence estimates of the GMRES method Consider a general linear system with a given nonsingular matrix A ∈ R^{n×n}. A Krylov type iterative method typically shows a first phase of linear convergence and then gradually exhibits a second phase of superlinear convergence [5]. When the singular values properly cluster around 1, the superlinear behaviour can be characteristic for nearly the whole iteration. We recall some known estimates of superlinear convergence, also valid for an invertible operator A in a Hilbert space. When A is symmetric positive definite, a well-known superlinear estimate of the standard conjugate gradient (CG) method is as follows, see e.g. [5]. Let us assume that the decomposition A = I + E holds, where I is the identity matrix, and let λ_j(E) denote the jth eigenvalue of E in decreasing order. In our case the matrix is nonsymmetric, for which also several Krylov algorithms exist. In particular, the GMRES method and its variants are most widely used. Similar efficient superlinear convergence estimates exist for the GMRES method in the case of the decomposition (6.2). The sharpest estimate has been proved in [59] on the Hilbert space level for an invertible operator A ∈ B(H), using products of singular values and the residual error vectors r_k := A u_k − b. Here the singular values s_j of a general bounded operator are defined as the distances from the best approximations with rank less than j. Hence s_j(A⁻¹) ≤ ‖A⁻¹‖ for all j, and the right hand side (r.h.s.) above is bounded by ∏_{j=1}^{k} s_j(E) · ‖A⁻¹‖^k. 
The inequality between the geometric and arithmetic means then implies the following estimate, which is analogous to the symmetric case (6.3): 1, 2, ...), (6.5) whose r.h.s. is a sequence decresing towards zero. We note that the above Hilbert space setting is particularly useful for the study of convergence under operator preconditioning, when the preconditioner arises from the discretization of a proper auxiliary operator. Such results have been derived by the authors in various settings, based on coercive and inf-sup-stable problems, with applications to various test problems such as convection-diffusion equations, transport problems, Hemholtz equations and diagonally preconditioned optimization problems, see, e.g. [64][65][66]. This approach will be used in the present chapter as well. Operators of the control problem in weak form Let us consider the control problem (2.3). We introduce the inner products with β > 0 defined in (2.3). Define the bounded linear operators Q 1 : and also, similarly Then system (2.3) can be rewritten as follows: that is, where we stress that these quations correspond to the weak form and are obtained by Riesz representation. This can be written in an operator matrix form Well-posedness and PRESB preconditioning in a Hilbert space setting The uniqueness of the solution of system (6.7) can be seen as follows: if b = 0, then setting the third and first equations into the second one, respectively, we obtain u + Q * 2 Q 1 Q 2 u = 0, whence, multiplying by u, we have Since Q 1 is a positive operator, we obtain u 2 ≤ 0, that is, u = 0, which readily implies y = 0 and λ = 0. Now, since the 3 by 3 operator matrix in (6.8) is a compact perturbation of the identity, uniqueness implies well-posedness (i.e. if 0 is not an eigenvalue then it is a regular value, as stated by Fredholm theory, see, e.g. [62]). Hence for any b ∈ H 1 0 ( ) there exists a unique solution (y, u, λ) of system (6.7), moreover, this solution depends continuously on b. System (6.7) can be reduced to a system in a two-by-two block form by eliminating u using the second equation u = −Q * 2 λ, in analogy with (2.6): Now let us introduce the product Hilbert space As seen above, for any b ∈ H, after eliminating u, system (6.9) has a unique solution (y, λ), which depends continuously on b. This means well-posedness, in other words, L is invertible, hence the inf-sup condition holds: According to (3.4), we define the PRESB preconditioning operator as Further, letting (that is, the remainder term), we have the decomposition Now one can see similarly to the case of L that P is also invertible: first, uniqueness of solutions for systems with P follows just as in the algebraic case described in Sect. 3, using that Q 1 and Q 2 Q * 2 are positive operators, and then the well-posedness follows again from Fredholm theory. Consequently, we can write (6.17) in the preconditioned form (6.18) The finite element discretization Recall the system matrix (2.7) and the preconditioner (3.4), where, for simplicity, we will omit the upper index "(1)" in what follows: These matrices are the discrete counterparts of the operators L and P in (6.11) and (6.15). Recall the definitions M 1 : Further, let us define the matrices Here the "energy matrix" S h corresponds to the energy inner product (6.10), and Q h is the discrete counterpart of the operator Q. Then the decomposition can be written in the preconditioned form where I h denotes the identity matrix (of size corresponding to the DOFs of the FE system). 
Using the definition of the stiffness matrix, a useful relation holds between S h and the underlying inner product , . H in the product FEM subspace Namely, if x, w ∈ V h are given functions and c, d are their coefficient vectors, then where · denotes the ordinary inner product on R n . In the sequel we will be interested in estimates that are independent of the used family of subspaces. Accordingly, we will always assume the following standard approximation property: for a family of subspaces (V h ) ⊂ H, for any u ∈ H, dist(u, V n ) := min{ u − v n H : v n ∈ V n } → 0 (as n → ∞). (6.24) Superlinear convergence for the control problem Our goal is to study the preconditioned GMRES first on the operator level and then for the FE system. Convergence estimates in the Sobolev space Our goal is to prove superlinear convergence for the preconditioned form of (6.13): First, the desired estimates will involve compact operators, hence we recall the following notions in an arbitrary real Hilbert space H : 1, 2, . . .) the ordered eigenvalues of a compact self-adjoint linear operator F in H if each of them is repeated as many times as its multiplicity and |λ 1 (F)| ≥ |λ 2 (F)| ≥ · · · (ii) The singular values of a compact operator C in H are where λ j (C * C) are the ordered eigenvalues of C * C. Proposition 6.1 The operators Q 1 and Q 2 in (6.6) are compact. Proof The L 2 inner product in a Sobolev space generates a compact operator, see, e.g. [63]. The operators Q 1 and Q 2 correspond to L 2 inner products on 1 and 2 , hence they arise as the composition of a compact operator with a restriction operator from to 1 or 2 in L 2 ( ). Altogether, Q 1 and Q 2 are compositions of a compact operator with a bounded operator, hence they are also compact themselves. Proof Using the invertibility of P and L, the compactness of P −1 Q and the decomposition (6.18), we may apply estimate (6.5) with operators A := P −1 L and E := P −1 Q. The fact that s j (P −1 Q) → 0 implies that ε k → 0. Later on, we will be interested in estimates in families of subspaces. In this context the following statements involving compact operators will be useful, related to inf-sup conditions and singular values: Proposition 6.3 [64,66] Let L ∈ B(H) be an invertible operator in a Hilbert space H, that is, (6.28) and let the decomposition L = I + E hold for some compact operator E. Let (V n ) n∈N + be a sequence of closed subspaces of H such that the approximation property (6.24) holds. Then the sequence of real numbers satisfies lim inf m n ≥ m. Convergence estimates and mesh independence for the discretized problems Our goal is to prove mesh independent superlinear convergence when applying the GMRES algorithm for the preconditioned system (6.29) Here the system matrix is A = P −1 h A h , and we use the inner product c, d S h := S h c · d corresponding to the underlying Sobolev inner product via (6.23). Owing to (6.22), the preconditioned matrix is of the type (6.2), hence estimate (6.5) holds in the following form: 1, 2, . . . , n). (6.30) In order to obtain a mesh independent rate of convergence from this, we have to give a bound on (6.30) that is uniform, i.e. independent of the subspaces Y h and h . This will be achieved via some propositions on uniform bounds. An important role is played by the matrix In accordance with Proposition 6.3, we consider fine enough meshes such that the following inf-sup property can be imposed: there existsm > 0 independent of h such that Proof Both matrices are self-adjoint w.r.t. 
the K-inner product since M 1 and M 0 are symmetric. Hence, first, where C is the Poincaré-Friedrichs embedding constant and y stands for the function in the subspace Y h whose coefficient vector is y. Further, Here, for a fixed vector λ, denote v := (M 2 + K 2 ) −1 M T 0 λ. Then for the functions v and λ in the subspaces U h and h , whose coefficient vectors are v and λ, respectively. Hence, from the Cauchy-Schwarz inequality, Now, the definition of v, (6.33) and (6.34) yield Hence for any vector y = 0, denoting z := N −1 y, we have |Nz| 2 K = |y| 2 K := Ky · y ≤ (K + M 1 )y · y = K −1 (K + M 1 )y, y K = N −1 y, y K = z, Nz K ≤ |z| K |Nz| K , hence |Nz| K ≤ |z| K , i.e. N K ≤ 1, which is independent of h. Secondly, since M 0 is also positive semidefinite, the same proof applies to (I + K −1 M 0 ) −1 as well. Finally, the independence property for K −1 M 0 has already been proved in Proposition 6.5. Altogether, our proposition is thus also proved. Now we can derive our final result: Theorem 6.2 Let our family of FEM subspaces satisfy properties (6.24) and (6.32). Then the GMRES iteration for the n×n preconditioned system (6.29), using PRESB preconditioning (6.19), provides the mesh independent superlinear convergence estimate and (ε k ) k∈N + is a sequence independent of h. Proof Owing to Corollary 6.2 and Proposition 6.6, there exist constants C 0 , C 1 > 0 such that independently of h. We can easily see that the matrices A −1 h S h are also uniformly bounded in S h -norm. Namely, inequality (6.32) yields inf c∈R n c =0 From the above, we obtain This has been proved in [66] for another compact operator and energy matrix, and the argument is analogous to our case: in fact, it directly follows from Proposition 6.4 (b) if P is the projection to our product FEM subspace V h . Then, combining this estimate with (6.38) and using Proposition 6.4 (a), we obtain Altogether, using (6.39) and (6.40), the desired statements (6.36)-(6.37) readily follow from (6.30). Extended problems The distributed control problem (2.1) and (2.2) has proper variants, see also [49]. The finite element solution of these problems leads to similar systems as in (2.5), such that the mass matrix block M 0 is replaced by some other blocks, corresponding again to proper discretized compact operators. Based on this, one can repeat the arguments of the previous subsections and similarly obtain mesh independent superlinear convergence of the preconditioned GMRES iteration under the PRESB preconditioner. These analogous derivations are not detailed here, we just mention the problems themselves based on [49] and indicate the full analogy of their structures. Boundary control of PDEs The boundary control problem involves the minimization of the same functional (2.1) subject to the PDE constraint where the control function u is applied on the boundary, but f is a fixed forcing term. The FE solution of this problem leads to a similar system as in (2.5), where the mass matrix M 0 is replaced by a matrix N connecting interior and boundary basis functions. The mass and stiffness matrices for u now act on the boundary: they are denoted by M u,b and K u,b . Altogether, the matrix analogue of (2.5) takes the form Control under box constraints In real problems one often has to take box constraints into account, in which the functions y and/or u are assumed to satisfy additional pointwise constraints. 
For the state variable y, this prescribes y a ≤ y ≤ y b for some given constants y a and y b , and similarly, for u we prescribe u a ≤ u ≤ u b . An efficient way to handle such problems includes penalty terms in the objective function and semi-smooth Newton iterations for their minimization, see [30,49]. See also [67,68]. To this paper further related references, see [66,[68][69][70][71][72][73][74][75][76]. The arising linear systems (after proper rearrangement) have a form similar to (2.5). For the state constrained case the matrix is where ε > 0 is a small penalty parameter and G A is a diagonal matrix with values 0 or 1 indicating whether y satisfies the box constraint in that coordinate. The reduced matrix and the PRESB preconditioner are derived again analogously to (6.19). The new factors G A at the mass matrix M y do not change the fact that the term G A M y G A corresponds to a discretized compact operator, hence the structure of this problem is again analogous to the previous ones. Concluding remarks It has been shown that the PRESB preconditioning method applied for two-by-two block matrix systems with square blocks can outperform other methods, such as the block diagonally preconditioned MINRES method. The PRESB method can be accelerated by the GMRES method, which results in a superlinear rate of convergence. Since in some problems the eigenvalue bounds are known and often tight, one can as an alternative method use a Chebyshev acceleration which doesn't give a superlinear convergence but saves computational vector inner products and therefore saves wasted elapsed computer times for global communications between processors. supported by the Hungarian Ministry of Human Capacities, and further, it was supported by the Hungarian Scientific Research Fund OTKA SNN125119. Funding Open access funding provided by Eötvös Loránd University. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
9,318.4
2020-08-26T00:00:00.000
[ "Mathematics", "Engineering" ]
Detection of a Spinning Object at Different Beam Sizes Based on the Optical Rotational Doppler Effect : The rotational Doppler effect (RDE), as a counterpart of the conventional well-known linear Doppler effect in the rotating frame, has attracted increasing attention in recent years for rotating object detection. However, the effect of the beam size on the RDE is still an open question. In this article, we investigated the influence of the size of the probe light; i.e., the size of the ring-shaped orbital angular momentum (OAM)-carrying optical vortex (OV), on the RDE. Both the light coaxial and noncoaxial incident conditions were considered in our work. We analyzed the mechanism of the influence on the RDE under the light coaxial, lateral misalignment, and oblique incidence conditions based on the small-scatterer model. A proof-of-concept experiment was performed to verify the theoretical predictions. It was shown that both the signal-to-noise ratio and the frequency spectrum width were related to the OV size. The larger the beam size, the stronger the RDE signal observed in the practical detection. Especially in the lateral misalignment condition, the large OV size effectively reduced the signal spreading and enhanced the signal strength. These findings may be useful for practical application of the optical RDE in remote sensing and metrology. Introduction As a powerful measurement tool, Doppler velocimetry has been applied in many areas in our world [1]. In recent years, with the rise of research on vortex structured beams, the rotational Doppler effect (RDE) has gradually become a research hot topic [2][3][4][5][6][7][8][9][10]. The concept of the RDE can be traced back to as early as the 1980s, when Bruce et al. introduced the concept of angular Doppler effect, which was associated with the spin angular momentum (SAM) [11]. Based on the conservation of energy and angular momentum, a frequency shift ±Ω could be observed in the scattered beam after the circular polarized light interacted with the rotating body. With the confirmation that the Laguerre-Gaussian (LG) beam could carry orbital angular momentum (OAM) [12], the RDE associated with OAM has further attracted people's interests [13,14]. In 2013, Lavery et al. systemically proposed the scheme of the detection of a spinning object by using the OAM carrying optical vortex (OV) based on RDE [2]. Since then, a wide range of spinning target measurement solutions have been reported, such as the measurement of the rotational speed [15], rotation direction [16], and even angular acceleration and compound motion [8,17,18]. The core element to realize rotation detection by using RDE is the OV beam, the phase distribution of which contains the helical (spiral) component described by the factor exp(i ϕ), where is the topological charge of the OV and ϕ denotes the angular coordinate. In recent years, plenty of methods have been reported for the generation and the measurement of the OV [19][20][21]. The spiral phase term causes a phase singularity (and the amplitude zero) in the center of the OV light field, and therefore forms a doughnut-shaped intensity distribution for the single mode and a petal-like distribution for the superpositions of modes with different [22]. Both the single mode and the superposition mode have a central dark core in their intensity cross section. 
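As a quick illustration of the doughnut-shaped intensity and the helical phase factor exp(iℓϕ) described above, the following Python sketch evaluates an LG mode with p = 0 at its waist plane; the particular values of ℓ and the waist are our own choices, made only for visualization.

```python
import numpy as np

# Minimal sketch (assumed parameters): amplitude of an LG mode with p = 0 at the
# waist plane, used only to visualize the dark core and the bright ring.
l, w0 = 25, 0.8e-3                      # topological charge, beam waist [m]
x = np.linspace(-5e-3, 5e-3, 501)
X, Y = np.meshgrid(x, x)
r, phi = np.hypot(X, Y), np.arctan2(Y, X)

amp = (np.sqrt(2) * r / w0) ** abs(l) * np.exp(-r**2 / w0**2)
field = amp * np.exp(1j * l * phi)      # helical phase factor exp(i*l*phi)
intensity = np.abs(field) ** 2

# The intensity vanishes on axis (phase singularity) and peaks on a ring:
center = len(x) // 2
ring_radius = x[np.argmax(intensity[center, center:]) + center]
print(f"dark core at r = 0, bright ring near r = {ring_radius*1e3:.2f} mm")
```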
Different from the conventional linear Doppler velocimetry, the OV-based RDE has a close relationship with the relative discrepancy between the OV axis and the axis of rotation [23,24]. Most of the previous works were conducted under the condition that the OV axis was strictly coaxial with the rotating axis, which is difficult to achieve in practical measurements. In recent years, the so-called noncoaxial RDE, which occurs when there is a misalignment between the rotating and OV axis, also has been investigated thoroughly [24,25]. There are several different interpretations of the origin of the optical RDE; one obvious and clear interpretation is the intensity modulation of the superposed OV beam [26]. Based on the small-scatterer model of RDE, when the OV propagation axis is coaxial with the object rotation axis, a frequency shift of the same magnitude occurs, since each scattering point within the light field has the same linear speed Ωr in the plane, where r is the radial coordinate of the point scatterer [6,27]. This means that the beam radius of the OV relative to the rotating object may influence the measurement result of the RDE. However, the influence of the beam size on the RDE has not been studied enough in previous research. In this work, we investigated the influence of the OV size on the RDE and realized the spinning speed measurement under different beam radius. From the perspective of the OV, its beam size was determined by both the topological charge and the beam waist. In terms of experimental equipment, this size also could be adjusted by changing the telescope aperture in the optical path. The influence of the OV size on the RDE under both the coaxial and noncoaxial conditions was considered. Since the size of the target to be detected in practical RDE velocimetry may range from molecules to macroscopic object, the investigation of the OV size is an important guide for practical applications. Concept and Principles As the paraxial approximate solution of Helmholtz equations, the LG mode associated with OAM is widely used in RDE-based measurements. Here, we took the classical LG mode as the probe OV and performed a theoretical analysis, as well as the following proofof-concept experiment. The standard expression of an LG beam in circular-cylindrical coordinates can be written as [23,28]: where p and l are the radial and azimuth index, respectively; C is a constant that stands for the amplitude; L |l| p represents the generalized Laguerre polynomial of order p and degree |l|; and z R is the Rayleigh range expressed by z R = πw 0 2 /λ, where ω 0 is the beam waist at the initial plane (z = 0) where the beam is narrowest. The functions ω(z) and R z are the radius of the Gaussian beam and curvature radius of the wavefront, respectively. k = 2π/λ is the overall wave number of the light beam. The beam radius of the OV is defined as the distance between the point where the field amplitude falls to the outermost 1/e of the maximum value and the center of the beam. Therefore, based on Equation (1) of the complex amplitude of an electric field in LG mode, the beam radius of the LG mode can be given as: where w(z) is the radius of the Gaussian beam at the propagation distance z, which is given as: According to Equations (2) and (3), the beam size of the LG mode is determined by the topological charge , radial index p, propagation distance z, and the initial beam waist w 0 . 
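A small numerical sketch of this dependence, using the definition of the beam radius given above (outermost point where the field amplitude falls to 1/e of its maximum); since Equations (2) and (3) are not reproduced here, the waist-plane LG amplitude with p = 0 and the sample values of w0 are our assumptions.

```python
import numpy as np

def lg_outer_radius(l, w0, n=4000, r_max=20e-3):
    """Outermost radius where the LG (p = 0) amplitude drops to 1/e of its peak,
    matching the beam-radius definition used in the text (waist plane assumed)."""
    r = np.linspace(0, r_max, n)
    amp = (np.sqrt(2) * r / w0) ** abs(l) * np.exp(-r**2 / w0**2)
    amp /= amp.max()
    outside = np.where(amp >= np.exp(-1))[0][-1]   # last sample above 1/e
    return r[outside]

for w0 in (0.4e-3, 0.6e-3, 0.8e-3):
    print(f"w0 = {w0*1e3:.1f} mm -> w_pl ≈ {lg_outer_radius(25, w0)*1e3:.2f} mm")
```

The outer radius grows in proportion to w0 for fixed ℓ and p, which is why the beam waist of the hologram is a convenient handle on the probe size.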
Considering that the topological charge is directly related to the magnitude of the RDE frequency shift, p determines the intensity shape of the light field, which also influences the signal-to-noise ratio of the measurement result [29]. Within the Rayleigh diffraction distance, the propagation distance z has little effect on the beam size. It is obvious that the beam waist w 0 is directly related to the beam radius ω pl and does not change the other parameters of the OV light field. Therefore, w 0 can be used as a variable to investigate the effect of beam size on the RDE. Although the beam size can be adjusted by adding optical elements such as beam expanders, the complexity of the test setup is further increased. We first considered the RDE under the OV coaxial incidence condition, as shown in Figure 1a. Since the beam radius w pl is the same with the radius r of the small scatterer, the frequency shift introduced by the rotation is given as ∆ f = Ω/2π, where Ω is the rotational speed of the object. It can be seen that the magnitude of the RDE frequency is only determined by the topological charge of the OV and the rotational speed. Therefore, the beam size has no influence on the RDE in theory. This is the ideal situation, and is rarely encountered in practical measurements; the case in which the vortex light is not coaxial with the rotating axis is more common. Now let us consider the noncoaxial incidence condition when there is a lateral misalignment between the OV axis and the rotating axis, as shown in Figure 1b. Based on the principle of the noncoaxial RDE, when there is a small lateral misalignment d between the rotating axis and the beam axis, the RDE can be expressed as [30]: where θ indicates the angular coordinates of the scattering point in the column coordinate system, which determines the position of the small scatterer within the light field [30]. For each small scatterer within the light field, its RDE frequency shift is related to the beam radius w pl . The above formula manifests that the RDE frequency shift signal is not one single peak anymore for each small scatterer. The corresponding spectral width can be expressed as ∆ f = Ωd/πw pl . Therefore, the larger the OV radius, the smaller the spread of the spectrum for a fixed misalignment. Apart from that, when the value of d/w pl is a constant, the width of the frequency spectrum will be unchanged. However, this does not mean that the frequency bandwidth can be reduced indefinitely as the beam radius increases. Because the practical measurement process is influenced by many factors, the beam quality and the reception of the scattered light are also influenced by the beam radius. Under ideal conditions, the RDE frequency shift bandwidth can be reduced to a single peak, but in practice, there is a limit that depends on the specific measurement conditions. In general, when the lateral misalignment and the total energy of the probe OV remain constant, the larger the radius of the beam, the more concentrated the RDE frequency shift signals is, and thus the greater the intensity of the signal. Another fundamental noncoaxial condition is the light oblique incidence, as shown in Figure 1c. Compared to the situation of coaxial incidence, each small scatterer within the probe beam will not experience the same frequency shift anymore due to the ring radius w pl being equal to the radial position r of the small scatterer. 
On one hand, owing to the oblique illumination, the beam profile on the object will change from annular to elliptic annular. On the other hand, the angle between the scatterer velocity and the beam Poynting vector is changed, producing an additional linear Doppler effect for each tiny scatterer. Combined with the beating frequency effect on the observation of the optical frequency shift, the modulated frequency under light oblique incidence can be expressed as [25]: where γ is the oblique angle and θ_z gives the position of each scatterer in the coordinates. Based on the above equation, the beam size seems to have no influence on the RDE under light oblique incidence, because the expressions of the RDE under OV oblique incidence are not related to the beam radius. However, this is still the ideal condition, and the practical measurement is a different story. Since the power of the probe OV beam is constant, the larger the beam size, the more the total energy of the beam is dispersed, and the receiver may then collect a weaker echo light signal. Experimental Setup To prove the theoretical analysis, we conducted a proof-of-concept experiment as shown in Figure 2. The laser source generated a beam with a wavelength of 532 nm. After it was expanded, the laser spot was large enough to fully cover the screen of the spatial light modulator (SLM, Hamamatsu X15213, Japan). Since the SLM could only modulate light in a horizontal linear polarization state, a linear polarizer (LP) was arranged before the SLM. The computer generated the holograms of the OV beam in LG mode according to Equation (1), and the holograms were then uploaded to the screen of the SLM. By changing the corresponding parameters of the holograms in the computer, the characteristics of the generated OV beam could be adjusted conveniently. The light reflected from the SLM may have contained several diffraction orders due to the effect of the grating phase. Therefore, a 4f filter was arranged after the SLM to remove the undesired orders, leaving only the first order, which was the desired OV. The light intensity distribution is shown in Figure 2c. Finally, the probe light illuminated the surface of the rotating object, and an RDE frequency shift was produced after the OV beam interacted with the object. The echo light was received through a beam splitter (BS) by an avalanche photodetector (APD). A data-acquisition card (DAC) was connected to the APD for the photoelectric signal conversion, and the signal was then sampled by the computer for the subsequent processing and RDE frequency shift extraction. The computer was also connected to the rotor for rotational speed control and to the translation stage for object position adjustment. The rotating object was a plane rotating disk, as shown in Figure 2b; its surface was covered with silver paper to increase the intensity of the echo light. In the experimental operation, we adjusted the beam size by changing the beam waist parameter ω_0 in the hologram, and changed the state of coaxial or noncoaxial incidence of the OV by adjusting the position and angle of the translation stage. All the other settings remained the same. Results and Discussions Based on the above experimental setup, we first conducted the experiments in the condition that the OV beam had coaxial incidence on the rotating object. Here, the rotational speed was set at Ω = 50 rounds per second (rps), and the topological charge of the probe light employed here was ℓ = ±25 with a zero-order radial index (p = 0). The corresponding simulated and experimental results are shown in Figure 3. As the beam waist of the OV hologram was set to different values, the actual radius w_pl of the OV profile shining on the rotating surface was 5.5 mm, 4.5 mm, and 3.5 mm, as shown in Figure 3a,d,g, respectively. In the measurement process, we sampled for 0.1 s at a sampling rate of 10,000 Hz. The measured time domain signals are presented in Figure 3b,e,h. As the beam radius decreased, the scattered light intensity gradually increased and the energy of the beam became more concentrated. Correspondingly, the amplitude of the RDE frequency shift signals decreased as the beam radius decreased, as shown in Figure 3c,f,i. The magnitude of the RDE frequency shift was in good agreement with the theoretical prediction, which was f_mod = 2500 Hz, and the corresponding rotational speed could be calculated directly. Although the magnitude of the RDE frequency shift was not affected by the beam size, the signal-to-noise ratio (SNR) of the RDE frequency shift was closely related to the beam size: a larger OV beam size produced a stronger RDE signal, because the larger the size of the light field, the more scattered light could be collected by the receiver. In addition to the coaxial incidence cases described above, there were also noncoaxial incidence cases, and we further conducted the experiment under the noncoaxial incidence condition. The first condition was a small lateral misalignment d between the beam axis and the rotating axis (d < w_pl). The lateral misalignment here was set to 0.5 mm relative to the beam radius of 2.5 mm, 4 mm, or 5 mm, as shown in Figure 4a,d,g, respectively. Under the noncoaxial condition, the scattering inhomogeneity of the surface of the rotating object had a significant influence on the probe light; therefore, the fluctuation of the time domain signal was more significant than in the coaxial incidence condition shown in Figure 3. The measured time domain signals under different beam sizes are shown in Figure 4b,e,h. The most obvious up and down vibrations were directly caused by the rotation, and the frequency of the vibration was equal to the rotating frequency, while the RDE frequency shift was contained in the high-frequency component of the time domain signal. After being fast Fourier transformed, the RDE signals in the frequency domain could be obtained. Under the noncoaxial incidence condition, the RDE frequency shift signal was no longer a single peak but was broadened to a certain width. As the beam size increased, the signal spectrum became narrower. Since the total energy of the echo light was constant, which was decided by the laser power, the more concentrated the signal spectrum was, the greater the signal strength, as shown in Figure 4c,f,i. Compared with the result measured for w_pl = 2.5 mm, the signal for w_pl = 5 mm was stronger and possessed fewer missing frequency peaks. The corresponding measurement results were in good agreement with the theoretical prediction according to Equation (4). Therefore, a large-radius probe OV could be used to minimize the measurement error caused by the lateral misalignment. There are two ways to measure the specific rotating speed. One is based on the frequency spectrum width of the broadened signals: the rotational speed can be calculated as Ω = π∆f·w_pl/d. The other employs the adjacent frequency difference of the discrete RDE signals [24]: the rotating frequency is equal to the adjacent frequency difference. Both require a strong frequency signal in order to obtain an accurate rotating speed. The corresponding results manifested that a larger beam size could tolerate a larger lateral misalignment, which is helpful in practical detection applications. Under the small lateral misalignment condition, we further conducted the experiments under different beam radii. As shown in Figure 5, the larger the lateral misalignment, the larger the bandwidth of the RDE signal; when the lateral misalignment was constant, the larger the beam size, the smaller the signal bandwidth.
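The trend just described can be reproduced with a rough small-scatterer sketch. The geometry below (a single scatterer orbiting the rotation axis at radius w_pl, a pure ±ℓ petal pattern with no radial envelope) and all parameter values are our own simplifications, not the authors' simulation; it is meant only to show that the spectral spreading for a fixed offset d shrinks as w_pl grows.

```python
import numpy as np

def rde_spectrum(l=25, w_pl=5e-3, d=0.5e-3, f_rot=50.0, T=0.1, fs=10_000):
    """Toy small-scatterer model: one scatterer orbits the rotation axis at
    radius w_pl while the +/-l petal pattern is centred a distance d away.
    The radial envelope is ignored; only the azimuthal sampling matters here."""
    t = np.arange(0, T, 1.0 / fs)
    xs = d + w_pl * np.cos(2 * np.pi * f_rot * t)   # scatterer w.r.t. the beam axis
    ys = w_pl * np.sin(2 * np.pi * f_rot * t)
    phi = np.arctan2(ys, xs)
    intensity = np.cos(l * phi) ** 2                # petal pattern of the +/-l superposition
    spec = np.abs(np.fft.rfft(intensity - intensity.mean()))
    return np.fft.rfftfreq(t.size, 1.0 / fs), spec

for w in (2.5e-3, 5e-3):                            # same 0.5 mm offset, two beam sizes
    freq, spec = rde_spectrum(w_pl=w)
    spread = np.sqrt(np.sum(spec**2 * (freq - 2 * 25 * 50.0)**2) / np.sum(spec**2))
    print(f"w_pl = {w*1e3:.1f} mm -> rms spread about 2500 Hz ≈ {spread:.0f} Hz")
```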
Another basic noncoaxial condition is the OV obliquely illuminating the rotating object. According to Equation (5), the magnitude of the RDE was still not affected by the size of the OV. We first conducted the simulation under a beam waist of 0.8 × 10−3, 0.6 × 10−3, and 0.4 × 10−3, respectively; the corresponding beam radius w_pl was 5 mm, 4 mm, and 2.5 mm, respectively. The simulated results are shown in Figure 6a-c. Under the beam oblique incidence, the frequency spectrum was broadened. As the beam size decreased, the SNR of the signal was surprisingly increased; however, the value of the frequency shift and the width of the frequency spectrum were not affected. Subsequently, we conducted a practical measurement based on the rotating disk. The experimental results were almost in agreement with the simulated results, which showed that the smaller the beam size, the higher the SNR of the signals. However, there was still an obvious difference between the simulated and experimental results: the frequency difference between each two adjacent signal peaks was half of that in the simulated results, while the value of the frequency difference actually should have been twice the value of the rotation speed based on the off-axis OAM mode expansion theory [31]. The reason for the denser signal spectrum of the experimental results was that the mode purity of the probe beam was not as high as in the simulation counterpart, and it was difficult to achieve incident conditions in the experiment in which only a tilt angle existed without a lateral offset. In general, however, the regularity of the SNR variation with the OV size was consistent. Discussion and Conclusions For the coaxial and lateral misalignment incidence conditions, a larger beam size was able to elicit a higher SNR of the RDE signals, based on the experimental results. Intuitively, a large beam radius corresponds to a large light field area, so the intensity of the detected light can be increased. However, the radius of the probe beam cannot be increased indefinitely. One reason is that the larger the beam radius, the larger the area of its hollow dark core, so small targets can easily be missed through the middle. The other reason is that the aperture of the lenses in the OV generation optical path is limited, meaning the size of the probe beam cannot be too large. In practical remote-sensing applications, the beam quality of the OV beam is easily affected by atmospheric turbulence. It is convincing that a larger beam size is more susceptible to atmospheric disturbances [32,33], which would also affect the measurement accuracy of the results.
Although the experimental conclusions were clear, the actual detection process was affected by a variety of factors. Therefore, we could analyze the effect of the beam size under specified conditions, but the optimal beam size selection under multiple combined conditions remains a challenge to be solved. In summary, we investigated the influence of the OV beam size on the measurement of the RDE. Theoretically, by changing the beam waist of the holograms, the beam radius can be adjusted without changing any other parameters of the probe OV. The relationship between the beam size and the RDE was analyzed in different light incidence conditions. Although the magnitude of the RDE frequency shift was not affected by the beam size in the coaxial and light oblique incidence conditions, the SNR and the distribution of the signals were related to the beam size under light. Experimentally, we designed a proof-ofconcept experiment to prove our hypothesis. Under the coaxial beam incidence condition, the larger the size of the OV beam, the higher the SNR of the RDE signal. Under the noncoaxial beam incidence condition, the SNR of the signals gradually increased in the case of laterally displaced incidence as the beam size increased, while in the case of oblique incidence, the opposite result was observed. Our findings provide a useful guide to the practical detection of RDE-based measurements, and may promote the application of RDE metrology techniques. Author Contributions: Conceptualization, S.Q. and Y.R.; methodology, S.Q. and T.L.; formal analysis, S.Q. and X.Z.; writing-original draft preparation, S.Q. and R.T.; writing-review and editing, T.L. and Y.R.; supervision, Y.R. All authors have read and agreed to the published version of the manuscript.
6,843.2
2022-07-25T00:00:00.000
[ "Physics" ]
Surrogate “Level-Based” Lagrangian Relaxation for mixed-integer linear programming Mixed-Integer Linear Programming (MILP) plays an important role across a range of scientific disciplines and within areas of strategic importance to society. The MILP problems, however, suffer from combinatorial complexity. Because of integer decision variables, as the problem size increases, the number of possible solutions increases super-linearly thereby leading to a drastic increase in the computational effort. To efficiently solve MILP problems, a “price-based” decomposition and coordination approach is developed to exploit 1. the super-linear reduction of complexity upon the decomposition and 2. the geometric convergence potential inherent to Polyak’s stepsizing formula for the fastest coordination possible to obtain near-optimal solutions in a computationally efficient manner. Unlike all previous methods to set stepsizes heuristically by adjusting hyperparameters, the key novel way to obtain stepsizes is purely decision-based: a novel “auxiliary” constraint satisfaction problem is solved, from which the appropriate stepsizes are inferred. Testing results for large-scale Generalized Assignment Problems demonstrate that for the majority of instances, certifiably optimal solutions are obtained. For stochastic job-shop scheduling as well as for pharmaceutical scheduling, computational results demonstrate the two orders of magnitude speedup as compared to Branch-and-Cut. The new method has a major impact on the efficient resolution of complex Mixed-Integer Programming problems arising within a variety of scientific fields. The associated systems are created by interconnecting I smaller subsystems, each having its own objective and a set of constraints. The subsystem interconnection is modeled through the use of system-wide coupling constraints. Accordingly, the MILP problems are frequently formulated in terms of cost components associated with each subsystem with the corresponding objective functions being additive as such: Furthermore, coupling constraints are additive in terms of I subsystems: The primal problem (1), (2) is assumed to be feasible and the feasible region is assumed to be bounded and finite. The MILP problems modeling the above systems are referred to as separable. Because of the discrete decisions, however, MILP problems are known to be NP-hard and are prone to the curse of combinatorial complexity. As the size of a problem increases, the associated number of combinations of possible www.nature.com/scientificreports/ solutions (hence the term "combinatorial") increases super-linearly (e.g., exponentially) thereby making problems of practical sizes difficult to solve to optimality; even near-optimal solutions are frequently difficult to obtain. A beacon of hope to resolve combinatorial difficulties lies through the exploitation of separability through the dual "price-based" decomposition and coordination Lagrangian Relaxation technique. After the relaxation of coupling constraints (2), the coordination of subproblems amounts to the maximization of a concave nonsmooth dual function: where Here L(x, y, is the Lagrangian function. The Lagrangian multipliers ("dual" variables) are the decision variables with respect to the dual problem (3), and it is assumed that the set of optimal solutions is not empty. The minimization within (4) with respect to {x, y} is referred to as the "relaxed problem. 
" While the sizes of the primal and the relaxed problems are the same in terms of the number of discrete variables, the main advantage of Lagrangian Relaxation is the exploitation of the reduction of the combinatorial complexity upon decomposition into subproblems. Accordingly, the number of discrete decision variables within the primal problem is n = I i=1 n x i , so the worst-case complexity of solving the primal problems is O(e I i=1 n x i ) . By the same token, the worst-case complexity required to solve the following subproblem is O(e n x i ) . The decomposition "reverses" the combinatorial complexity thereby exponentially reducing the effort. The decomposition, therefore, offers a viable potential to improve the operations of existing systems as well as to scale up the size of the systems to support their efficient operations. While decomposition efficiently reduces the combinatorial complexity, the coordination aspect of the method to efficiently obtain the optimal "prices" (Lagrangian multipliers) has been the subject of an intense research debate for decades because of the fundamental difficulties of non-smooth optimization. Namely, because of the presence of integer variables x, the dual function (3) is non-smooth comprised of flat convex polygonal facets (each corresponding to a particular solution to the relaxed problem within (4)) intersecting at linear ridges along which the dual function q( ) is non-differentiable; in particular, q( ) is not differentiable at * thereby ruling out the possibility of using necessary and sufficient conditions for the extremum. As a result of the non-differentiability of q( ) , subgradient multiplier-updating directions, however, are non-ascending directions thereby leading to a decrease of dual values; subgradient directions may also change drastically thereby resulting in zigzagging of Lagrangian multipliers (see Fig. 1 for illustrations) and slow convergence as a result. www.nature.com/scientificreports/ Traditional methods to maximize q( ) rely upon iterative updates of Lagrangian multipliers by taking a series of steps s k along subgradient g(x k , y k ) directions as: is a an optimal solution to the relaxed problem (4) with multipliers equal to k . Within the Lagrangian Relaxation framework, subgradients are defined as levels of constraint violations if present, can be handled by converting into equality constraints by introducing non-negative real-valued slack variables z The multipliers are subsequently projected onto the positive orthant delineated by restrictions ≥ 0. Because of the lack of differentiability of q( ) , notably, at the optimum * , the stepsize selection plays an important role to guarantee convergence to the optimum as well as for the success of the overall Lagrangian Relaxation methodology for solving MILP problems. One of the earlier papers on the optimization of non-smooth convex functions, with q( ) being its member, though irrespective of Lagrangian Relaxation, is Polyak's seminal work 22 . Intending to achieve the geometric (also referred to as "linear") rate of convergence so that � k − * � is monotonically decreasing, Polyak proposed the stepsizing formula, which in terms of the problem under consideration takes the following form: Within (7) and thereafter in the paper the standard Euclidean norm is used. Subgradient directions, however, 1. are generally difficult to obtain computationally when the number of subproblems (5) to be solved is large, and 2. 
change drastically thereby resulting in zigzagging of Lagrangian multipliers and slow convergence. Moreover, 3. stepsizes (7) cannot be set due to the lack of the knowledge about the optimal dual value q( * ). To overcome the first two of the difficulties above, the Surrogate Subgradient method was developed by 23 whereby the exact optimality of the relaxed problem (or even subproblems) is not required. As long as the following "surrogate optimality condition" is satisfied: the multipliers can be updated by using the following version of the Polyak's formula and convergence to * is guaranteed. Here "tilde" is used to distinguish optimal solutions {x k , y k } to the relaxed problem from the solutions {x k ,ỹ k } that satisfy the "surrogate optimality condition" (8). Unlike that in Polyak's formula, parameter γ is less than 1 to guarantee that q( * ) > L(x k ,ỹ k , k ) so that the stepsizing formula (9) is well-defined, as proved by Zhao et al. 23 . Once {x k ,ỹ k } are obtained, multipliers are updated by using the same formula as in (6) with stepsizes from (9) and "surrogate subgradient" multiplier-updating directions g(x k ,ỹ k ) used in place of subgradient directions g(x k , y k ) . Besides reducing the computational effort owing to (8), the concomitant reduction of multiplier zigzagging has also been observed. The main difficulty is the lack of knowledge about q( * ) . As a result, the geometric/linear convergence of the method (or any convergence at all) is highly questionable in practice. Nevertheless, the underlying geometric convergence principle behind the formula (8) is promising and will be exploited in "Results" section. One of the first attempts to overcome the difficulty associated with the unavailability of the optimal [dual] value is the Subgradient-Level method developed by Goffin and Kiwiel 24 by adaptively adjusting a "level" estimate based on the detection of "sufficient descent" of the [dual] function and "oscillation" of [dual] solutions. In a nutshell, a "level" estimate is set as q k lev = q k j rec + δ j with q k rec being the best dual value ("record objective value") obtained up to an iteration k, and δ j is an adjustable parameter with j denoting the j th update of q k lev . Every time oscillations of multipliers are detected, δ j is reduced by half. In doing so, stepsizes appropriately decrease, q k lev increases (for maximization of non-smooth functions such as (3)) and the process continues until δ j → 0 and q k lev → q( * ). To improve convergence, rather than updating all the multipliers "at once, " within the Incremental Subgradient methods 25 , multipliers are updated "incrementally. " Convergence results of the Subgradient-Level method 24 have been extended for the Incremental Subgradient methods. Within the Surrogate Lagrangian Relaxation (SLR) method 26 , the computational effort is reduced along the lines of the Surrogate Subgradient method 23 discussed above, that is, by solving one of a few subproblems at a time. To guarantee convergence, within SLR, distances between multipliers at consecutive iterations are required to decrease through a specially-constructed contraction mapping until convergence. As demonstrated by Bragin et al. 26 , the SLR method converges faster as compared to the above-mentioned Subgradient-Level method 24 and the Incremental Subgradient methods 25,27 for non-smooth optimization. 
Unlike the Subgradient-Level and Incremental Subgradient methods 25,27 , the SLR method does not require obtaining dual values to set stepsizes, which further reduces the effort. Aiming to simultaneously guarantee convergence while ensuring fast www.nature.com/scientificreports/ reduction of constraint violations and preserving the linearity, the Surrogate Absolute-Value Lagrangian Relaxation (SAVLR) method 28 was developed to penalize constraint violations by using l 1 "absolute-value" penalty terms. The above methods are reviewed in more detail in Supplementry Information Section. Because of the presence of the integer variables, there is the so-called the duality gap, which means that even at convergence, q( * ) is generally less than the optimal cost of the original problem (1), (2). To obtain a feasible solution to (1), (2), the subproblem solutions when put together may not satisfy all the relaxed constraints. Therefore, to solve corresponding MILP problems, heuristics are inevitable and are used to perturb subproblem solutions. The important remark here is that the closer the multipliers are to the optimum, generally, the closer the subproblem solutions are to the global optimum of the original problem, and the easier it is to obtain feasible solutions through heuristics. Therefore, having fast convergence in the dual space to maximize the dual function (3) is of paramount importance for the overall success of the method. Specific heuristics will be discussed at the end of the "Results" section. Results Surrogate "Level-Based" Lagrangian Relaxation. In this subsection, a novel Surrogate "Level-Based" Lagrangian Relaxation (SLBLR) method is developed to determine "level" estimates of q( * ) within the Polyak's stepsizing formula (9) for fast convergence of multipliers when optimizing the dual function (3). Since the knowledge of q( * ) is generally unavailable, over-estimates of the optimal dual value, if used in place of q( * ) within the formula (9), may lead to the oscillation of multipliers and to the divergence. Rather than using heuristic "oscillation detection" of multipliers used to adjust "level" values 24 , the key of SLBLR is the decision-based "divergence detection" of multipliers based on a novel auxiliary "multiplier-divergence-detection" constraint satisfaction problem. "Multiplier-Divergence-Detection" problem to obtain the estimate of q( * ). The premise behind the multiplierdivergence detection is the rendition of the result due Zhao et al. 23 : Theorem 1 Under the stepsizing formula such that {x k ,ỹ k } satisfy the multipliers move closer to optimal multipliers * iteration by iteration: The following Corollary and Theorem 2 are the main key results of this paper. Corollary 1 If then Theorem 2 If the following auxiliary "multiplier-divergence-detection" feasibility problem (with being a continuous decision variable: ∈ R m ) admits no feasible solution with respect to for some k j and n j , then ∃ κ ∈ [k j , k j + n j ] such that Proof Assume the contrary: ∀κ ∈ [k j , k j + n j ] the following holds: By Theorem 1, multipliers approach * , therefore, the "multiplier-divergence-detection" problem admits at least one feasible solution - * . Contradiction. (16) it follows that ∃ q κ,j such that q κ,j > q( * ) and the following holds: The equation (18) can equivalently be rewritten as: Therefore, A brief yet important discussion is in order here. 
The overestimate q j of the dual value q( * ) is the sought-for "level" value after the j th update (the j th time the problem (15) is infeasible). Unlike previous methods, which require heuristic hyperparameter adjustments to set level values, within SLBLR, level values are obtained by using the decision-based principle per (15) precisely when divergence is detected without any guesswork. In a sense, SLBLR is hyperparameter-adjustment-free. Specifically, neither "multiplier-divergence-detection" problem (15), nor the computations within (18)-(20) requires hyperparameter adjustment. Following Nedić and Bertsekas 27 , the parameter γ will be chosen as a fixed value γ = 1 I , which is the inverse of the number of subproblems and will not require further adjustments. Note that (15) simplifies to an LP constraint satisfaction problem. For example, after squaring both sides of the first inequality � − k j +1 � ≤ � − k j � within (15), after using the binomial expansion, and canceling � − k j � 2 from both sides, the inequality simplifies to To speed up convergence, a hyperparameter ζ < 1 is introduced to reduce stepsizes as follows: Subsequently after iteration k j+1 , the problem (15) is sequentially solved again by adding one inequality per multiplier-updating iteration until iteration k j+1 + n j+1 − 1 is reached for some n j+1 so that (15) is infeasible. Then, stepsize is updated by using q j+1 per (21) and is used to update multipliers until the next time it is updated to q j+2 when the "multiplier-divergence-detection" problem is infeasible again, and the process repeats. Per (21), SLBLR requires hyperparameter ζ , yet, it is set before the algorithm is run and subsequently is not adjusted (see "Numerical testing" section for empirical demonstration of the robustness of the method with respect to the choice of hyperparameter ζ). To summarize the advantage of SLBLR, hyperparameter adjustment is not needed. The guesswork of when to adjust the level-value, and by how much is obviated -after (15) is infeasible, the level value is formulaically recalculated. On improvement of convergence. To speed up the acceleration of the multiplier-divergence detection through the "multiplier-divergence-detection" problem, (15) is modified, albeit heuristically, in the following way: Unlike the problem (15), the problem (22) no longer simplifies to an LP problem. Nevertheless, the system of inequalities delineate the convex region and can still be handled by commercial software. Discussion of (22). Equation (22) is developed based on the following principles: 1. Rather than detecting divergence per (15), convergence with a rate slower than √ 1 − 2 · ν · s is detected. This will lead to a faster adjustment of the level values. While the level value may no longer be guaranteed to be the upper bound to q( * ) , the merit of the above scheme will be empirically justified in the "Numerical testing" section. 2. While the rate of convergence is unknown, in the "worst-case" scenario √ 1 − 2 · ν · s is upper bounded by 1 with ν = 0 , thereby reducing (22) to (15). The estimation of √ 1 − 2 · ν · s is thus much easier than the previously used estimations of q( * ) (as in Subgradient-Level and Incremental Subgradient approaches). 3. As the stepsize approaches zero, √ 1 − 2 · ν · s approaches the value of 1 regardless of the value of ν , once again reducing (22) to (15). 
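A minimal sketch of the kind of feasibility check behind (15) and its linearized form: given a short window of multiplier iterates, each squared distance inequality reduces to a linear constraint, and infeasibility of the resulting LP signals that the multipliers have started to diverge. The function name, the solver choice (SciPy's linprog), and the toy iterates below are ours, not the paper's.

```python
import numpy as np
from scipy.optimize import linprog

def divergence_detected(lams):
    """Check whether the system 'there exists lam with
    ||lam - lams[t+1]|| <= ||lam - lams[t]|| for every consecutive pair'
    is infeasible.  Each squared inequality reduces to the linear constraint
    2*(lams[t] - lams[t+1]) . lam <= ||lams[t]||^2 - ||lams[t+1]||^2."""
    lams = np.asarray(lams, dtype=float)
    A = 2.0 * (lams[:-1] - lams[1:])
    b = np.sum(lams[:-1]**2, axis=1) - np.sum(lams[1:]**2, axis=1)
    res = linprog(c=np.zeros(lams.shape[1]), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * lams.shape[1], method="highs")
    return not res.success        # infeasible -> divergence detected

# Iterates spiralling outwards admit no common "closer point", so the check fires:
diverging = [(1.0, 0.0), (0.0, 1.5), (-2.0, 0.0), (0.0, -3.0), (4.0, 0.0)]
print(divergence_detected(diverging))    # expected: True
# Iterates contracting towards the origin remain feasible (lam = 0 works):
converging = [(1.0, 0.0), (0.5, 0.0), (0.25, 0.0), (0.125, 0.0)]
print(divergence_detected(converging))   # expected: False
```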
update multipliers per (6) by using g(x k ,ỹ k ) as end if 9: i ← i + 1 10: < ε then search for feasible solutions x f eas , y feas that satisfy (2) to obtain a feasible cost There are three things to note here. 1. Steps in lines 15-16 are optional since other criteria can be used such as the number of iterations or the CPU time; 2. The value of q( k ) is still needed (line 1) to obtain a valid lower bound. To obtain q( k ) , all subproblems are solved optimally for a given value of multipliers k . The frequency of the search for the value q( k ) is determined based on criteria as stated in point 1 above; 3. The search for feasible solutions is explained below. Search for feasible solutions. Due to non-convexities caused by discrete variables, the relaxed constraints are generally not satisfied through coordination, even at convergence. Heuristics are thus inevitable, yet, they are the last step of the feasible-solution search procedure. Throughout all examples considered, following 28 (as discussed in Supplementary Information), l 1 -absolute-value penalties penalizing constraint violations are considered. After the total constraint violation reaches a small threshold value, a few subproblem solutions obtained by the Lagrangian Relaxation method are perturbed, e.g., see heuristics within accompanying CPLEX codes within 28 to automatically select which subproblem solutions are to be adjusted to eliminate the constraint violation to obtain a solution feasible with respect to the overall problem. Numerical Testing. In this subsection, a series of examples are considered to illustrate different aspects of the SLBLR method. In "Demonstration of convergence of multipliers based on a small example with known optimal multipliers" section, a small example with known corresponding optimal Lagrangian multipliers is considered to test the new method as well as to compare how fast Lagrangian multipliers approach their optimal values as compared to Surrogate Lagrangian Relaxation 26 and to Incremental Subgradient 25 methods. In "Generalized Assignment Problems" section, large-scale instances of generalized assignment problems (GAPs) of types D and E with 20, 40, and 80 machines and 1600 jobs from the OR-library (https:// www-or. amp.i. kyoto-u. ac. jp/ membe rs/ yagiu ra/ gap/) are considered to demonstrate efficiency, scalability, robustness, and competitiveness of the method with respect to the best results available thus far in the literature. In "Stochastic job-shop scheduling with the considerationof scrap and rework" section, a stochastic version of a job-shop scheduling problem instance with 127 jobs and 19 machines based on Hoitomt et al. 29 is tested. In "Multi-stage pharmaceutical scheduling" section, two instances of pharmaceutical scheduling with 30 and 60 product orders, 17 processing units, and 6 stages based on Kopanos et al. 13 are tested. For "Demonstration of convergence of multipliers based on a small example with known optimal multipliers" section and "Generalized Assignment Problems" section, SLBLR is implemented within CPLEX 12.10 by using a Dell Precision laptop Intel(R) Xeon(R) E-2286M CPU @ 2.40GHz with 16 cores and installed memory (RAM) of 32.0 GB. For "Stochastic job-shop scheduling with the considerationof scrap and rework" section and "Multi-stage pharmaceutical scheduling" section, SLBLR is implemented within CPLEX 12.10 by using a server Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz with 48 cores and installed memory (RAM) of 192.0 GB. 
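Before the numerical results, a compact, self-contained toy may help fix ideas about Polyak-type steps driven by a level estimate. The rule used below is the simple Subgradient-Level-style update recalled in the introduction (record value plus a halving parameter δ), not the SLBLR divergence test, and the five-variable instance is invented purely for illustration.

```python
import numpy as np

# Toy separable problem (ours, for illustration only):
#   min  sum_i c_i x_i   s.t.  sum_i a_i x_i >= b,   x_i in {0,1}
c = np.array([4.0, 3.0, 6.0, 5.0, 7.0])
a = np.array([2.0, 1.0, 3.0, 2.0, 4.0])
b = 6.0

def dual_value(lam):
    """q(lam): the relaxed problem separates, so each x_i is set independently."""
    x = (c - lam * a < 0).astype(float)
    return float(np.sum((c - lam * a) * x) + lam * b), x

# Subgradient-Level style updates: level = record value + delta, with delta
# halved whenever the multiplier has travelled a prescribed distance R.
lam, q_rec, delta, R, travelled = 0.0, -np.inf, 4.0, 1.0, 0.0
for k in range(200):
    q, x = dual_value(lam)
    q_rec = max(q_rec, q)
    g = b - a @ x                       # subgradient = violation of the relaxed constraint
    if abs(g) < 1e-12:
        break
    step = (q_rec + delta - q) / g**2   # Polyak step with the level q_rec + delta
    lam_new = max(0.0, lam + step * g)
    travelled += abs(lam_new - lam)
    if travelled > R:                   # "oscillation detected": tighten the level
        delta, travelled = delta / 2.0, 0.0
    lam = lam_new
print(f"best dual value found: {q_rec:.3f} at lam ≈ {lam:.3f}")
```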
Demonstration of convergence of multipliers based on a small example with known optimal multipliers. To demonstrate convergence of multipliers, consider the following example (23), due to Bragin et al. 30, a small problem with two relaxed constraints. As proved by Bragin et al. 30, the optimal dual solutions are λ*_1 = 0.6 and λ*_2 = 0. Inequality constraints are converted to equality constraints after introducing slack variables. In Fig. 2, the decrease of the corresponding distances from the current multipliers to the optimal multipliers (‖λ^k − λ*‖) is shown, and the SLBLR method is compared with the Incremental Subgradient method 25 and the Surrogate Lagrangian Relaxation method 26. Within the SLBLR method, problem (15) is used to detect divergence, and ζ = 1/2 is used to set stepsizes within (21). In essence, only one hyperparameter was required, which has a quite simple explanation: "when the stepsize is 'too large,' cut the stepsize in half." As demonstrated in Fig. 2, the SLBLR method converges fast, with ‖λ^k − λ*‖ decreasing roughly along a straight line on a log-scale graph, suggesting that the rate of convergence is likely linear, as expected. As for the Incremental Subgradient method, two hyperparameters are required: R and δ (the values used are shown in parentheses in the legend of Fig. 2 (left)). A trial-and-error analysis indicated that "acceptable" values are R = 0.25 and δ = 24. Increasing or decreasing R to 0.5 and 0.125, respectively, does not lead to improvements. Likewise, increasing or decreasing δ to 48 and 12, respectively, does not lead to improvements either. "Plateau" regions in the figure are caused by the fact that as stepsizes get smaller, a larger number of iterations is required for multipliers to travel the predetermined distance R; during these iterations, stepsizes are not updated and multipliers may oscillate around a neighborhood of the optimum without getting closer. While the above difficulty can be alleviated and convergence can be improved by hyperparameters τ, β, and R_l as reviewed in the Supplementary Information, an even larger number of hyperparameters would then be required. As for the Surrogate Lagrangian Relaxation method, several pairs of hyperparameters (M and r) have been used as well (the values used are shown in parentheses in the legend of Fig. 2 (right)), yet the performance of Surrogate Lagrangian Relaxation does not exceed the performance of the SLBLR method. Herein lies the advantage of the novel SLBLR method: the decision-based principle behind computing the "level" values. This is in contrast to the problem-dependent choice of hyperparameters R and δ within the Subgradient-Level 24 and Incremental Subgradient 25 methods, and the choice of M and r within Surrogate Lagrangian Relaxation 26,28 (see the "Introduction" section and the Supplementary Information for more detail). Even after obtaining "appropriate" values of the aforementioned hyperparameters by using a trial-and-error procedure that entails effort, the results obtained by Surrogate Lagrangian Relaxation 26 and the Incremental Subgradient method 25 do not match or beat those obtained by the SLBLR method. The specific reasons are: 1. Heuristic adjustments of the "level" values are required 24,25 based on multiplier "oscillation detection" or "significant descent" (for the minimization of non-smooth functions). However, these rules do not detect whether multipliers "start diverging."
" Moreover, oscillation of multipliers is a natural phenomenon when optimizing nonsmooth functions as discussed in "Introduction" section since multipliers may zigzag/oscillate across ridges of the function, so the multiplier "oscillation detection" may not necessarily warrant the adjustment of level values. On the other hand, multiplier "oscillation" is detected by checking whether multipliers traveled a (heuristically) predetermined distance R, hence, the divergence of multipliers can go undetected for a significant number of iterations (hence, the "plateau" regions shown in Fig. 2 (left)), depending on the value of R. To the best of the (23) min www.nature.com/scientificreports/ authors' knowledge, the subgradient-and surrogate-subgradient-based methods using Polyak's stepsizes with the intention of achieving the geometric/linear convergence rate either require q( * ) , which is unavailable, or require multipliers to travel infinite distance to guarantee convergence to the optimum * 24 . 2. While SLR avoids the need to estimate q( * ) , the geometric/linear convergence is only possible outside of a neighborhood of * 26 . Precisely for this reason, the convergence of multipliers within SLR with the corresponding stepsizing parameters M = 30 and r = 0.01 (as shown in Fig. 2 (right)) appears to follow closely convergence within SLBLR up until iteration 50, after which the improvement tapers off. Generalized assignment problems. To demonstrate the computational capability of the new method as well as to determine appropriate values for key hyperparameters ζ and ν while using standard benchmark instances, large-scale instances of GAPs are considered (formulation is available in subsection 4.2 of Supplementary Information). We consider 20, 40, and 80 machines with 1600 jobs (https://www-or.amp.i.kyoto-u.ac.jp/members/ yagiura/gap/). To determine values for ζ within (21) and ν within (22) to be used throughout the examples, several values are tested using GAP instance d201600. In Table 1, with fixed values of ν = 2 and s 0 = 0.02 , the best result (both in terms of the cost and the CPU time) is obtained with ζ = 1/1.5 . With the value of ζ = 1/4, the stepsize decreases "too fast" thereby leading to a larger number of iterations and a much-increased CPU time as a result. Likewise, in Table 2 with fixed values of ζ = 1/1.5 and s 0 = 0.02 , it is demonstrated that the best result (both in terms of the cost and the CPU time) is obtained with ν = 2 . Empirical evidence here suggests that the method is stable for other values of ν. The robustness with respect to initial stepsizes ( s 0 ) is tested and the results are demonstrated in Table 3. Multipliers are initialized by using LP dual solutions. The method's performance is appreciably stable for the given range of initial stepsizes used (Table 3). SLBLR is robust with respect to initial multipliers 0 (Table 4). For this purpose, the multipliers are initialized randomly by using the uniform distribution U [90,110]. For the testing, the initial stepsize s 0 = 0.02 was used. As evidenced from Table 4, the method's performance is stable, exhibiting only a slight degradation of solution accuracy and an increase of the CPU time as compared to the case with multipliers initialized by using LP dual solutions. To test the robustness as well as scalability of the method across several large-scale GAP instances, six instances d201600, d401600, d801600, e201600, e401600, and e801600 are considered. 
SLBLR is compared with the Depth-First Lagrangian Branch-and-Bound method (DFLBnB) 31, Column Generation 32, and Very Large Scale Neighborhood Search (VLSNS) 33, which, to the best of the authors' knowledge, are the best methods for at least one of the above instances. For completeness, a comparison against Surrogate Absolute-Value Lagrangian Relaxation (SAVLR) 28, which is an improved version of Surrogate Lagrangian Relaxation (SLR) 26, is also performed. The latter SLR method 26 has previously been demonstrated to be advantageous against other non-smooth optimization methods, as explained in the "Introduction" section. Table 5 presents feasible costs and times (in seconds) for each method. The advantage of SLBLR is the ability to obtain optimal results across a wider range of GAP instances as compared to other methods. Even though the comparison in terms of the CPU time is not entirely fair, feasible-cost-wise SLBLR decisively beats previous methods. For the d201600 instance, the results obtained by SLBLR and the Column Generation method 32 are comparable. For instance d401600, SLBLR obtains a better feasible solution, and for instance d801600, the advantage over the existing methods is even more pronounced. To the best of the authors' knowledge, no other reported method has obtained optimal results for instances d401600 and d801600. SLBLR outperforms SAVLR 28 as well, thereby demonstrating that the fast convergence offered by the novel "level-based" stepsizing, other things being equal, translates into better results as compared to those obtained by SAVLR, which employs the "contraction mapping" stepsizing 28. Lastly, the methods developed in 31-33 specifically target GAPs, whereas the SLBLR method developed in this paper has broader applicability. Stochastic job-shop scheduling with the consideration of scrap and rework. To demonstrate the computational capability of the method to solve large-scale stochastic MILP problems, a job-shop scheduling problem is considered. Within a job shop, each job requires a specific sequence of operations, each with its own processing time. Operations are performed by a set of eligible machines. To avoid late shipments, expected tardiness is minimized. Limited machine capacity brings a layer of difficulty since multiple "individual-job" subproblems are considered together, competing for limited resources (machines). Another difficulty arises because of uncertainties, including processing times [34][35][36][37][38][39] and scrap [40][41][42]. Re-manufacturing of one part may affect and disrupt the overall schedule within the entire job shop, thereby leading to unexpectedly high delays in production. In this paper, we modified the data from Hoitomt et al. 29 by increasing the number of operations of several jobs (e.g., from 1 to 6) and decreasing the capacities of a few machines; the data are in Tables S1 and S2. The stochastic version of the problem with the consideration of scrap and rework is available within the manuscript by Bragin et al. 42. With these changes, the running time of CPLEX spans multiple days, as demonstrated in Fig. 3. In contrast, within the new method, a solution of the same quality as that obtained by CPLEX is obtained within roughly 1 hour of CPU time.
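The coordination idea, described in more detail in the next paragraph, relaxes the machine-capacity coupling constraints so that each job can be scheduled independently; the surrogate subgradient used to update the corresponding multipliers is then simply the capacity violation implied by the current job-level solutions. The sketch below illustrates that computation only; the data layout and numbers are hypothetical, and the actual stochastic model with scrap and rework follows Bragin et al. 42.

```python
# Toy sketch: given per-job schedules (sets of (machine, time) slots they occupy)
# and per-slot capacities, return the capacity violations used as the
# multiplier-updating (surrogate) subgradient for the relaxed constraints.
def capacity_subgradient(schedules, capacity):
    load = {}
    for job_slots in schedules:
        for slot in job_slots:
            load[slot] = load.get(slot, 0) + 1
    return {slot: load.get(slot, 0) - cap for slot, cap in capacity.items()}

# Two hypothetical jobs both requesting machine 0 at time 3 (capacity 1):
g = capacity_subgradient([{(0, 3), (1, 4)}, {(0, 3)}], {(0, 3): 1, (1, 4): 1})
# g[(0, 3)] == 1 signals a violated slot, pushing its multiplier up per (6);
# g[(1, 4)] == 0 means that slot's constraint is tight but not violated.
```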
The new method is operationalized by relaxing machine capacity constraints 42 and coordinating the resulting job subproblems; at convergence, the beginning times of several jobs are adjusted by a few time periods to remove the remaining machine-capacity constraint violations. Multi-stage pharmaceutical scheduling. To demonstrate the capability of the method to solve scheduling problems complicated by the presence of sequence-dependent setup times, a multi-stage pharmaceutical scheduling problem proposed by Kopanos et al. 13 is considered. Setup times vary based on the sequencing of products on each unit (machine). Scheduling in this context is combinatorial in the number of product orders, units, and stages. The new method is operationalized by relaxing constraints that couple individual processing units. [Table 4. Robustness results for instance d201600 with respect to the initial multipliers λ^0. The best feasible cost values obtained are in bold.] [Table 5. Comparison against the best results currently available. *The optimality is certified by the LP optimal values, which are 97105 and 97034 for instances d401600 and d801600, respectively. **The optimality is certified through the lower-bound results of Posta et al. 31. †Not solved to optimality within 24 hours and not reported within the original paper by Posta et al. 31. −These instances were not considered within the papers by Sadykov et al. 32 and Bragin et al. 28.] [Fig. 4. Feasible cost by case number.] With a relatively small number of product orders, 30, an optimal solution with a feasible cost of 54.97 was found by CPLEX within 1057.78 seconds. The optimality is verified by running CPLEX until the gap is 0%; it took 171993.27 seconds to verify the optimality. SLBLR takes a slightly longer time to obtain the same solution: 1647.35 seconds (Fig. 4 (left)). In contrast, with 60 product orders, CPLEX no longer obtains good solutions in a computationally efficient manner; a solution with a feasible cost of 55.98 is obtained after 1,000,000 seconds. Within SLBLR, a solution with a feasible cost of 55.69 is obtained within 1978.04 seconds. This constitutes more than two orders of magnitude of improvement over CPLEX, as demonstrated in Fig. 4 (right; log scale). When the number of products is doubled, CPLEX's performance deteriorates drastically, while the performance of SLBLR remains scalable. Discussion This paper develops a novel MILP solution methodology based on the Lagrangian Relaxation method. Salient features of the novel SLBLR method, inherited from the previous versions of Lagrangian Relaxation, are: 1. reduction of the computational effort required to obtain Lagrangian-multiplier-updating directions and 2. alleviation of the zigzagging of multipliers. The key novel feature of the method, which the authors believe gives SLBLR the decisive advantage, is the innovative exploitation of the underlying geometric-convergence potential inherent to Polyak's stepsizing formula without the heuristic adjustment of hyperparameters for the estimate of q(λ*): the associated "level" values are determined purely through the simple auxiliary "multiplier-divergence-detection" constraint-satisfaction problem. Through testing, it is discovered that SLBLR is robust with respect to the choice of initial stepsizes and multipliers, computationally efficient, competitive, and general. Several problems from diverse disciplines are tested and the superiority of SLBLR is demonstrated.
While "separable" MILP problems are considered, no particular problem characteristics such as linearity or separability have been used to obtain "level" values, and thus SLBLR has the potential to solve a broad class of MIP problems.
7,583.8
2022-03-09T00:00:00.000
[ "Mathematics" ]
Geographical resistome profiling in the honeybee microbiome reveals resistance gene transfer conferred by mobilizable plasmids The spread of antibiotic resistance genes (ARGs) has been of global concern as one of the greatest environmental threats. The gut microbiome of animals has been found to be a large reservoir of ARGs, which is also an indicator of the environmental antibiotic spectrum. The conserved microbiota makes the honeybee a tractable and confined ecosystem for studying the maintenance and transfer of ARGs across gut bacteria. Although it has been found that honeybee gut bacteria harbor diverse sets of ARGs, the influences of environmental variables and the mechanism driving their distribution remain unclear. We characterized the gut resistome of two closely related honeybee species, Apis cerana and Apis mellifera, domesticated in 14 geographic locations across China. The composition of the ARGs was more associated with host species than with geographical distribution, and A. mellifera had a higher content of ARGs in the gut. There was a moderate geographic pattern of resistome distribution, and several core ARG groups were found to be prevalent among A. cerana samples. These shared genes were mainly carried by the honeybee-specific gut members Gilliamella and Snodgrassella. Transferrable ARGs were frequently detected in honeybee guts, and the load was much higher in A. mellifera samples. Genomic loci of the bee gut symbionts containing a streptomycin resistance gene cluster were nearly identical to those of the broad-host-range IncQ plasmid, a proficient DNA delivery system in the environment. By in vitro conjugation experiments, we confirmed that the mobilizable plasmids could be transferred between honeybee gut symbionts. Moreover, “satellite plasmids” with fragmented genes were identified in the integrated regions of different symbionts from multiple areas. Our study illustrates that the gut microbiota of different honeybee hosts varied in their antibiotic resistance structure, highlighting the role of the bee microbiome as a potential bioindicator and disseminator of antibiotic resistance. The difference in domestication history is highly influential in the structuring of the bee gut resistome. Notably, the evolution of plasmid-mediated antibiotic resistance is likely to promote the probability of its persistence and dissemination. Background The overuse of antibiotics has led to environmental contamination through landfills, treated wastewater draining, and waste from livestock farms. The widespread use of antibiotics imposes a selection force for disseminating antibiotic resistance genes (ARGs) among bacteria [1]. ARGs carried by environmental microorganisms are picked up by animals and are potentially transferred into their native associated bacteria [2]. Recent surveillance revealed a transmission of NDM-beta-lactamase-producing bacteria from a poultry farm to wild birds [3]. Escherichia coli producing extended-spectrum β-lactamase has been detected in the gut of wild gulls feeding on human waste ashore [4]. Clinically relevant resistance genes were also reported in migratory birds with low human contact and transmitted geographically distantly [5]. Thus, animals not only mirror the presence of ARGs in the contaminated environment but also serve as possible reservoirs and potential vectors of multidrug-resistant microbes [6].
Wildlife and insects have been investigated as potential indicators of environmental dissemination of AMRs [7]. Specifically, honeybees are important plant pollinators, playing a fundamental role in the pollination of plant species in both natural ecosystems and agricultural crops. During pollinating within an area of ~ 3 km from the hive, honeybees interact with environmental microorganisms. The environmental antibiotic-resistant bacteria and the associated ARGs may transfer between the environmental and gut bacteria [8]. Such transfer potentially contribute to the spread of ARGs to new pollination sites during foraging trips [9]. Tetracyclines have been used in the control of American foulbrood in A. mellifera, which is caused by the pathogen Paenibacillus larvae. These treatments have led to severe accumulation of tetracycline resistance genes in the bee gut microbiome [10]. Moreover, the tetR genes are consistently associated with mobile elements showing high similarity to those characterized by human pathogens or domesticated animals, indicating an intermediate role of honeybees in the environmental transmission of ARGs [11]. In addition, the sulfonamide resistance gene sul2 was detected in the bee gut, which had the highest sequence similarity to the IncQ plasmid [12]. IncQ is a family of plasmids with a unique strand-displacement mechanism of replication and functions within a broad range of hosts [13]. These results suggest that IncQ plasmids might mediate ARG transfer in the bee gut. The honeybee Apis cerana represents an important pollinator in Asia; however, its gut resistome remains uncharacterized, in contrast to multiple studies on A. mellifera. Both A. cerana and A. mellifera have simple and host-restricted gut bacteria dominated by only 5-9 core bacterial genera. The core gut members, including Gilliamella, Snodgrassella, Bartonella, Bifidobacterium, and Lactobacillus Firm-4 and Firm-5, account for > 95% of the whole gut community [14]. However, bacteria of the same phylotype were observed to cluster separately, corresponding to the host species, leading to varied gut microbial compositions between A. cerana and A. mellifera [15]. It has been shown that the gut bacteria from the same genus isolated from honeybees and bumblebees differed in their ARG carriage profile [16]. Specifically, bacterial strains isolated from Chinese bumblebees possessed the multidrug resistance gene emrB, while tetracycline resistance genes were uniquely present in gut bacteria from the USA. Unlike A. mellifera, A. cerana were mostly domestically managed without constant transfer for pollination in China, and the traditional beekeeping practices of A. cerana maintain a semiferal nature less affected by artificial domestication [17]. Thus, the gut of A. cerana represents a promising model to evaluate the local ARG burden driven by long-term selective pressures [18]. Moreover, the conserved microbiota makes the honeybee a tractable, realistic, and confined ecosystem for studying the transfer and maintenance of ARGs across gut bacteria [19]. Therefore, honeybee may serve as an ideal indicator of the environmental antibiotic resistance. Here, we examined whether honeybees from different geographical locations exhibited distinct resistome profiles, and the potential horizontal transfer of ARGs among honeybee gut symbionts. Bee samples of A. cerana were collected from 14 geographical locations across China. Using metagenomic sequencing, we showed that the A. 
cerana gut resistome was dominated by different types of ARGs from those in A. mellifera, and the load and diversity varied for samples from different locations. The bee gut symbiotic members carried several core ARGs prevalent in samples from all locations. Transferrable ARGs were frequently detected, and streptomycin resistance loci were identified in the genomic regions of different symbionts from multiple areas. Finally, we confirmed the conjugative transfer of ARGs mediated by the mobilizable plasmid among the core bacterial genera Gilliamella, Snodgrassella, and Bartonella, which are specific to the honeybee gut environment. These results clarify the role of horizontal gene transfer among gut symbionts in the spread of antibiotic resistance. Sample collection and DNA extraction A. cerana bees were obtained from 18 different apiaries in 14 provinces across China (Fig. 1). We sampled worker bees from each colony between April 2017 and January 2019. A. mellifera bees were collected in Jilin in September 2017. All hives of A. cerana and A. mellifera were from traditionally managed stationary apiaries, where beekeepers do not exchange queens and manage colonies without transfer. None of the hives sampled in this study had any history of antibiotic treatment. We sampled 5-10 honeybees from each apiary for the metagenomic analysis. A full list with detailed information is summarized in Dataset S1. The entire gut was pulled out from the tail of adult bees without touching the abdomen surface, using sterilized forceps [20]. Individual guts were stored in 1.5-ml tubes with 100% ethanol or directly frozen at -80 °C and were transported to the laboratory. DNA extraction and shotgun metagenomic sequencing The whole gut DNA was extracted using the CTAB method described by Kwong et al. [21]. A total of 126 honeybee gut samples were used for metagenomic sequencing. Sequencing libraries were generated using the NEBNext Ultra II DNA Library Prep Kit for Illumina (New England Biolabs, MA, USA), and the library quality was assessed on a Qubit 3.0 Fluorometer (Life Technologies, Grand Island, NY) and an Agilent 4200 (Agilent, Santa Clara, CA) system. The libraries were then sequenced on the Illumina HiSeq X-Ten platform with 150 bp paired-end reads. Metagenomic sequencing generated 739 Gb of data with an average of 5.86 Gb for each sample. Adaptor trimming and quality control of the raw sequencing data were carried out using the fastp software (-q 20, -u 10) [22]. The reads were then mapped to the reference genomes of A. cerana (GCA_001442555) and A. mellifera (GCA_003254395), respectively, using the BWA-MEM algorithm with the options "-t 4, -R, -M" [23] to filter out host-derived reads. Microbiome and resistance gene analysis The bacterial community profiling was performed using the Metagenomic Intra-Species Diversity Analysis System (MIDAS) pipeline as described in Su et al. [24]. The Shannon and Simpson indices were calculated with the 'vegan' package. Principal coordinates analysis (PCoA) was performed based on Bray-Curtis distance using the 'vegan' package. We characterized the resistome structure by aligning reads to the MEGARes database following the AmrPlusPlus pipeline [25] with modifications. Sequencing reads were mapped to the MEGARes database using the BWA algorithm (-t 10, -R). The SAM-formatted alignment file was analyzed using ResistomeAnalyzer (80% nucleic acid sequence similarity) for the quantification of ARGs.
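A minimal sketch of these read-processing steps (QC with fastp, removal of host-derived reads by mapping to the bee reference genome, then alignment to MEGARes) is shown below. File names, the use of samtools to extract read pairs in which neither mate maps (flag 12), and the assumption that references are pre-indexed with bwa index are illustrative; the study itself specifies only the fastp and BWA options quoted above.

```python
# Sketch of the QC, host-read filtering and MEGARes alignment steps (assumed file
# names; references are assumed to be bwa-indexed beforehand).
import subprocess

def run(cmd):
    subprocess.run(cmd, shell=True, check=True)

sample = "bee_sample_01"

# 1. Adaptor trimming / quality control with fastp (-q 20, -u 10 as in the text).
run(f"fastp -q 20 -u 10 -i {sample}_R1.fq.gz -I {sample}_R2.fq.gz "
    f"-o {sample}_clean_R1.fq.gz -O {sample}_clean_R2.fq.gz")

# 2. Map against the host genome (e.g., GCA_001442555 for A. cerana) and keep
#    only read pairs in which both mates are unmapped (samtools flag 12).
run(f"bwa mem -t 4 -M host.fa {sample}_clean_R1.fq.gz {sample}_clean_R2.fq.gz | "
    f"samtools fastq -f 12 -1 {sample}_nohost_R1.fq.gz -2 {sample}_nohost_R2.fq.gz -")

# 3. Align the host-filtered reads to MEGARes for downstream ARG quantification
#    with ResistomeAnalyzer.
run(f"bwa mem -t 10 megares.fa {sample}_nohost_R1.fq.gz {sample}_nohost_R2.fq.gz "
    f"> {sample}_megares.sam")
```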
The abundance of different resistance types was calculated at different levels (Gene, Group, Mechanism, and Class) corresponding to the annotation in the MEGARes database. The sequencing read abundance of ARGs was then normalized by the number of reads mapped to the 16S rRNA genes. Thus, the normalized abundance of ARGs was transformed as a "copy of ARG per copy of 16S rRNA gene" as suggested by Li et al. [26]. The number of the total 16S rRNA gene sequences was determined by METAXA2 (-f q, -plus T) [27]. To specifically detect transferrable ARGs in our metagenomic data, we analyzed our assembled contigs with ResFinder [28]. A threshold of at least 90% similarity and 60% of the reference length was used. The bacterial origin of ARGs was predicted by assigning taxonomy to metagenomic-assembled contigs harboring antibiotic resistance genes. We assembled clean reads using MEGAHIT (-r, -t) to obtain contigs. To exclude the contigs without ARGs, we aligned the reads previously mapped to the ARGs in the MEGARes database (see above) to the assembled contigs with BLASTn, and only contigs possessing ARGs were applied for the taxonomy classification. The taxonomy of the assembled contigs containing ARGs was determined by comparing to the custom genomic database of both honey and bumble bee gut bacteria [29] using the two-way reciprocal best hit analysis. We used Procrustes analysis to determine the correlation of resistome profiles and the microbiota compositions of each sample as described by Munk et al. [30]. We performed Hellinger transformation of the matrices of the gene-level ARG composition and the species-level microbiota taxonomy and calculated the Bray-Curtis dissimilarities between each pair of data (ARGs-Microbiota composition). The symmetric Procrustes correlation coefficients between the dissimilarity matrix of microbiota and resistome ordinated by PCoA were analyzed using the 'protest' function in the vegan R package. Antibiotic susceptibility testing and conjugation experiment Strains of the honeybee gut bacteria, Gilliamella apicola, Gilliamella apis, Bartonella apis, and Snodgrassella alvi, were isolated from the gut of A. mellifera [29]. The core gut bacteria of bee guts were obtained by plating the gut homogenates on Brain Heart Infusion (BHI; Oxoid, Basingstoke, UK)) or Columbia agar medium supplemented with 5% (vol/vol) defibrinated sheep blood (Solarbio, Beijing, China) at 35 °C under a CO 2 -enriched atmosphere (5%) for 2 days. The identification of the isolated single colonies was performed by sequencing the 16S rRNA genes. The occurrence of type IV secretion system (T4SS)-associated genes necessary for the conjugation of the IncQ plasmid was detected by a BLASTn search using SecReT4 v2.0 [31]. E. coli MFDpir strain was cultivated on LB agar supplemented with 0.3 mM diaminopimelic acid at 37 °C for 24 h. We determined the antibiotic susceptibility of the bee gut bacterial strains towards different antibiotics, including chloramphenicol, kanamycin, ampicillin, and spectinomycin. Bacterial cells were grown and diluted to 10 7 CFU/ml. We inoculated 5 μl of the bacterial culture on plates supplemented with a gradient concentration of each antibiotic. Each bacterial strain was tested in triplicates, and plates without antibiotics were used as control. The inhibition of antibiotics on different strains was determined by incubating plates for 48 h. The details of strain resistance to antibiotics are shown in Dataset S5. The in vitro conjugation assays used either E. 
coli MFDpir or G. apis W8126 as the donor strains and other honeybee gut bacteria (Gilliamella apis, Gilliamella apicola, Snodgrassella alvi, Bartonella apis) as the recipient strains. To generate donor strains, we transferred the IncQ plasmid pBTK519 (Addgene Plasmid #110603) with the kanamycin resistance gene into E. coli MFDpir and G. apis W8126 via conjugation [32]. The donor strains were grown on LB or BHI agar plates with 50 μg/ ml kanamycin. Cells were harvested from the plates, suspended in 1 ml of 1 × PBS, and then centrifuged for 5 min at 6000×g. The supernatants were removed, and the cells were washed by 1 × PBS twice. Washed cells were resuspended in PBS to a final concentration of 10 8 CFU/ ml. The donor and recipient strains were mixed in a 1:1 ratio for the conjugation experiment. Thirty microliters of the mixtures were spotted onto BHI agar containing 0.3 mM diaminopimelic acid (DAP) and incubated for 16 h. After incubation, we scraped the cell mixture into a 1.5-ml centrifuge tube and washed twice with 1 ml sterile 1 × PBS to remove residual DAP. Then, we plated 100 μl of the mixtures on selective plates with 25 μg/ml of kanamycin and the other designated antibiotics for different recipient strains (Dataset S5). Candidate transconjugants were picked and passaged again on selective medium. The identity was confirmed by PCR amplification and Sanger sequencing of the 16S rRNA gene. All the conjugative mating experiments were conducted in three biological triplicates. Regional distribution of the A. cerana gut resistome across China A total of 94 gut samples of A. cerana were collected from 14 provinces across China (5-10 guts from each sampling site). We collected samples from two different sites in Hainan, Yunnan, and Sichuan provinces ( Fig. 1; Dataset S1). To compare the resistome patterns of A. cerana and A. mellifera, we also collected 32 guts of A. mellifera from Jilin Province. These A. mellifera colonies were domesticated without transportation for pollination. After filtering reads derived from bee hosts, we obtained an average of 12 million reads (150 bp paired-end) per sample by shotgun metagenomic sequencing. First, we analyzed the gut community structure of both bee species sampled in this study. Overall, the gut microbiota of both A. mellifera and A. cerana was dominated by a few core bacterial genera, as described in previous studies [21]. The genus Apibacter specific to eastern honeybees and bumblebees was detected only in the A. cerana gut, while Bartonella was more abundant in A. mellifera ( Figure S1a). The diversity of the gut community of A. mellifera from Jilin was higher than that of A. cerana sampled from the same area and most A. cerana from other locations ( Figure S1b, c). PCoA showed that the microbiomes of A. mellifera formed a separate cluster with those of A. cerana ( Figure S1d). Detailed analyses of the gut composition of A. cerana have been included in a separate study [24]. We used the AMR++ pipeline to analyze the ARG composition of each sample. Overall, the guts of A. cerana and A. mellifera harbored 78 and 306 groups of ARGs, respectively, presumably conferring resistance to 37 and 26 classes of antibiotics (Dataset S2). ARGs conferring resistance to classes of aminoglycosides, elfamycins, macrolide-lincosamide-streptogramin (MLS), and cationic antimicrobial peptides were present in almost all bees (Fig. 2a). The ARGs of aminoglycosides and MLS were dominant in the majority of A. cerana samples, while A. 
mellifera from Jilin had a larger proportion of ARGs belonging to the tetracycline (16.8%) and sulfonamide (13.2%) families. Although ARGs of tetracyclines were not abundant in most A. cerana guts, they were prevalent in samples from Guangdong, Jiangxi, Hainan, Fujian, and Sichuan provinces (Figure S2). The abundance of ARGs of aminocoumarins, bacitracin, fluoroquinolones, fosfomycin, nucleosides, trimethoprim, and multidrug resistance was low in both bee species, all showing an average abundance of < 7%. The diversity of ARGs at the levels of classes and groups (defined by the AMR++ database) [25] was not significantly different between most A. cerana samples, but it was higher in A. mellifera than in A. cerana from several locations (Fig. 2b, c). Specifically, the average group number of ARGs in A. mellifera was more than 4 times higher than that in A. cerana, both collected from Jilin (Fig. 2c). Consistently, A. cerana samples showed lower Shannon diversity indices than A. mellifera, except in Guangdong and Jiangxi (Fig. 2d). To exclude the impact of differences in gut bacterial load, the relative abundance of ARGs was then normalized by the copy number of 16S rRNA genes. The ARG load was significantly higher in the gut of A. mellifera than in that of A. cerana collected from Jilin and those from many other locations (Fig. 2d). PCoA based on the Bray-Curtis distance showed that, when plotted by sampling sites and host species, the resistomes of all sampled honeybees exhibited clear segregation (Fig. 2f). Notably, the gut resistome composition of A. mellifera was clearly separated from that of the A. cerana samples (Fig. 2f, left panel). Since the resistome patterns might be influenced by the composition and diversity of the gut microbiota, we wondered if the varied resistome of honeybees was primarily affected by the microbiota structure. Procrustes analysis indicated that samples with similar taxonomic compositions did not necessarily show similar resistome patterns (P = 0.139, Fig. 2g), and the Procrustes residuals were extremely high in samples from Guangdong, Jiangxi, Taiwan, and Fujian provinces (Fig. 2h), indicating that the resistome was not determined merely by the bacterial composition in A. cerana. Since regional factors cannot be excluded as the cause of the differences in the gut resistome of the two honeybee species, we further assessed the distribution of ARGs in the public metagenomic datasets of A. cerana and A. mellifera sampled from Japan and Switzerland [33]. The resistomes of both honeybee species from all locations were dominated by aminoglycoside resistance genes (Figure S2). All A. mellifera samples had a higher ratio of tetracycline resistance genes than the A. cerana samples. The tetracycline resistance genes were more abundant in A. mellifera from Japan and Switzerland than in A. mellifera from China. Thus, the composition of the resistome differs mainly between A. cerana and A. mellifera, while it is conserved within species across regions. Country-wide and location-specific core resistome Since the gut microbiota of A. cerana is relatively consistent across China (Figure S1), we wondered if the ARG components are stable across locations. To characterize the ARGs that were stable among A. cerana samples across China, we identified core ARGs detected in > 50% of all samples (country-wide), as well as among sample sets from different regions (region-specific). This followed the definitions from previous resistome studies [34].
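This prevalence rule can be made concrete with a small sketch; the table below uses invented sample names, ARG group labels and abundance values purely for illustration, and the >50% thresholds follow the definitions given above.

```python
import pandas as pd

# Toy abundance table: rows = samples, columns = ARG groups (values > 0 = detected).
abund = pd.DataFrame(
    {"GROUP_A": [0.4, 0.2, 0.0, 0.3], "GROUP_B": [0.0, 0.0, 0.5, 0.6]},
    index=["GD1", "GD2", "JX1", "JX2"],
)
location = pd.Series(["Guangdong", "Guangdong", "Jiangxi", "Jiangxi"], index=abund.index)

present = abund > 0
countrywide_core = present.columns[present.mean() > 0.5]            # >50% of all samples
location_core = {
    loc: present.loc[location == loc].mean().loc[lambda p: p > 0.5].index.tolist()
    for loc in location.unique()
}
# "Core of specific locations": core within a location but not country-wide core.
specific_core = {loc: [g for g in groups if g not in countrywide_core]
                 for loc, groups in location_core.items()}
```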
We identified six groups of ARGs widespread in > 50% of A. cerana samples across all sampling sites. They were defined as "core of country-wide" ARGs, including genes conferring resistance against aminoglycosides (A16S, RRSC, RRSH), cationic antimicrobial peptides (CAP16S), MLS (MLS3S), and elfamycins (TUFAB) (Fig. 3a). Overall, the relative abundance of A16S and MLS23 was higher than that of the other country-wide core groups. All six country-wide ARGs were abundant in A. cerana from Guangdong, while all country-wide ARG groups were rare in samples from Jiangxi (Fig. 3b). In addition, 15 groups of ARGs were specifically present in a high proportion (> 50%) of samples from particular locations and were defined as "core of specific locations" ARGs. They consisted of genes conferring resistance against aminoglycosides, beta-lactams, rifampin, sulfonamides, and tetracyclines (Fig. 3c, Figure S3). Although A. cerana samples from Jiangxi had a low level of country-wide ARGs, specific core ARGs accounted for a large proportion of their resistome (Fig. 3a). Moreover, Jiangxi had the largest number of specific core ARGs (8 groups), which were also prevalent in A. cerana from Guangdong, Hainan, and Fujian (Fig. 3c). Interestingly, GYRBA (aminocoumarins), GYRB (fluoroquinolones), and PTSL (fosfomycin) were specific core ARGs in samples from Taiwan. Core and transferrable ARGs are carried by different gut bacteria To explore the contributions of different gut bacteria to the resistome, the distribution of ARGs among gut bacteria was assessed by counting the identified ARG-taxon associations (see "Methods"). We traced the origin of all country-wide and specific core ARGs. We found that the same group of ARGs could be carried by different bacterial species (Dataset S3), and the prevalence of ARG groups positively correlated with the number of carrier taxa at the species level (Fig. 4a). Most of the ARGs were carried by the core bacterial genera in the gut of A. cerana, and the aminoglycoside resistance genes showed the broadest taxonomic ranges and highest frequency (Fig. 4b). Interestingly, ARGs were mainly carried by Gilliamella, especially those for resistance to fosfomycin (88%), beta-lactams (73%), fluoroquinolones (63%), and MLS (57%). In contrast, rifampicin resistance genes were contributed mainly by Bifidobacterium, and 37% of ARGs against tetracyclines originated from Snodgrassella. Since ARGs were harbored by multiple gut species, we then identified potential transferrable ARGs from assembled metagenomic contigs using ResFinder [28]. A total of 100 ARGs with transfer potential were identified (Dataset S4). [Fig. 3 Core and location-specific ARG components of A. cerana. a Relative abundance of the core ARGs detected in > 50% of A. cerana samples (core of country-wide) and those detected in > 50% of samples only from certain locations (core of specific locations); other ARGs occurred in < 50% of samples at any sampling site. b Distribution of the six groups of country-wide core ARGs in bees from different locations; different letters (a, b, c, d) above each bar indicate statistical differences between A. cerana sampling sites (LSD test, P < 0.05). c Presence of the core ARG groups specific to different locations.] Interestingly, Guangdong and Jiangxi provinces, which had the highest ARG richness, also possessed more transferrable ARGs, whereas transferrable ARGs were barely identified in Hunan and Jilin provinces (Fig. 4c). The richness of transferrable ARGs was higher in A. mellifera than in A. cerana.
Seven transferrable ARGs were prevalent in both A. cerana and A. mellifera, which were shared by all bacterial members in the gut of A. mellifera (Fig. 4d). However, in A. cerana, tetM, and tetW were present only in gram-positive Bifidobacterium and Lactobacillus. Consistently, genes specific to A. mellifera were carried by almost all gut bacteria. In A. cerana, Snodgrassella and Gilliamella were the major contributors to transferrable ARGs. Notably, strA and sul2 genes were detected in the A. cerana-specific gut member Apibacter. The strA, strB, and sul2 genes are always organized as an antibiotic resistance gene cluster widely distributed in plasmids and chromosomally integrated elements [35]. We then identified the genetic arrangement of transferrable ARGs and associated mobile gene elements on the assembled contigs in A. cerana gut samples. IncQ plasmid-mediated sul2-strA-strB transmission was frequently identified in A. cerana We found that the sul2, strA, and strB genes also cooccurred in the contigs of the Gilliamella and Snodgrassella strains. Moreover, the contigs contained genes for the origin of replication (oriV), mobilization (mobABC), and replication (repABC). Interestingly, these genes formed a genomic region highly syntenic to the IncQ plasmid RSF1010, and they were detected in strains even from different geographic locations (Fig. 5a). Although almost all sequences from the symbiont contigs were identical to those of RSF1010, deletions, insertions, and substitutions were detected along the loci. Notably, the sul2 and strA genes were detected in complete forms in all contigs, but the strB genes were always truncated. Large fragments of deletions were frequently detected inside the integration regions, mainly spanning the rep-BAC region and the strB gene from all contigs, and the deletions inside integration regions were flanked by short homologous sequences. Furthermore, identical point mutations in sul2, strA, and even in the intergenic regions were detected in Gilliamella and Snodgrassella from various locations, suggesting that these mutations occurred before the integrations or were horizontally transferred between different bacterial species. Another gene cluster was found in the A. cerana-specific Apibacter contigs from Jiangxi, Guangdong, and Tibet, nearly identical to pMS260, a broad-host-range IncQ family plasmid. Similarly, the strB and mobC genes were incomplete due to partial deletions (Fig. 5b). Experimental validation of the IncQ-mediated transmission of ARGs Thus far, our results have shown the widespread transferrable ARGs in A. cerana gut symbionts, and the horizontal transfer of ARGs might be associated with the IncQ plasmid as a potential vector. Therefore, we tested whether ARGs could be transferred between bee gut bacteria via the IncQ plasmid. Since IncQ is a non-conjugative but mobilizable plasmid that requires a mating pair channel encoded in bacterial host chromosomes to fulfill its transfer, only donor strains possessing a T4SS can transfer IncQ plasmids between different bacteria. We searched for T4SS component genes in honeybee gut strains, identifying them in a few strains from Gilliamella. We then used G. apis W8126 as the donor strain to test the transferability of the IncQ plasmid between different gut symbionts. Phylogenetically different strains from Gilliamella, Snodgrassella, and Bartonella, which are gram-negative core members in the bee gut, were used as potential recipients, and E. 
coli MFDpir was included as a positive control of the donor strain [36]. First, we tested the natural antibiotic sensitivity of nine recipient strains for the discrimination of transconjugants in subsequent conjugation assays (Dataset S5). Then, we introduced the plasmid pBTK519, which was genetically assembled with the RSF1010 backbone carrying kanamycin resistance, into the donors (Fig. 6a). After coculturing the donor and recipient strains for 16 h, we evaluated the conjugation events using selective plates supplemented with different antibiotics. We found that E. coli MFDpir could deliver the IncQ plasmid to all recipient strains. Although successful conjugative transfer was detected in all Bartonella strains, only S. alvi M0351 and G. apis M0364 could receive the plasmid when using G. apis W8126 as the donor (Fig. 5b). No transconjugants were detected in two strains from the G. apicola species, suggesting a low conjugation efficiency. Thus, our experiments indicated that the mobilizable IncQ plasmid could be transferred between honeybee gut symbionts, contributing to the broad dissemination of ARGs in different gut bacteria. [Fig. 6 IncQ plasmid transmission between honeybee-specific gut symbionts. a Experimental procedure for in vitro conjugation. IncQ plasmid pBTK519 (KanR) was used for transferability tests between different donor and recipient strains. Donor strains of E. coli MFDpir and G. apis W8126 harboring pBTK519 were co-incubated with strains from three honeybee gut bacterial genera, Gilliamella, Snodgrassella, and Bartonella, as recipients. BHI agar supplemented with kanamycin and another designated antibiotic was used for transconjugant selection. b Experimental results of in vitro conjugation. The phylogenetic tree was constructed by the maximum-likelihood method (RAxML) based on the whole genomes of the isolated strains. The donor G. apis strain W8126 is shown in red.] Discussion The gut microbiome has attracted great attention as it functions as a reservoir and potential ARG source [37]. Although the A. cerana gut composition appears to have low variation among different geographical regions (Figure S1), a clear location-dependent resistance pressure was seen in our study. This pressure might be caused by honeybees being subjected to geographically different antibiotic burdens from the local environment [12]. During their usual foraging activities, honeybees can cover wide areas where agricultural, industrial, and other anthropogenic activities occur. Therefore, honeybees are likely to be exposed to contaminated environmental sources, such as pollen, nectar, and water. There are positive associations between ARG abundance in beehive products and anthropogenic environments, suggesting that ARGs might originate from the honeybee foraging environment [38]. Our results indicated that the gut resistome differs between the two honeybee species and among A. cerana from various geographic locations. Previous studies have shown that A. mellifera has a larger and more diverse gut community than A. cerana [33], which is consistent with our findings (Figure S1). These features may contribute to the observed higher ARG abundance and diversity in A. mellifera. In addition, A. mellifera carry more tetracycline and sulfonamide resistance genes than A. cerana (Fig. 2a), which could be caused by different breeding environments and habits. For example, A. mellifera are suitable for breeding in plain areas with concentrated honey sources, and the activity range of A. mellifera is closer to human beings [39]. Moreover, this may also reflect the history of using tetracycline and other related drugs to control bee diseases [40]. The gut microbiome of A. cerana is dominated by six country-wide ARGs prevalent in samples across China. Notably, three of these ARGs are aminoglycoside resistance genes. Aminoglycosides are natural antibiotics derived from actinomycetes and are frequently administered to treat bacterial infections [41]. Aminoglycoside resistance genes are detected frequently in rivers [42], livestock [30], and the human gut [43]. Moreover, recent evidence has suggested that other elements (e.g., heavy metals) can select for and stimulate the stabilization of aminoglycoside resistance genes [44]. Our results showed that most ARGs within the A. cerana core resistome were maintained in the core gut members of honeybees (Fig. 4b). The two proteobacteria in the bee gut, Gilliamella and Snodgrassella, contributed most to the core resistome. Most ARGs were mainly carried by Gilliamella, while Snodgrassella was the major carrier of tetracycline resistance genes. A previous screening of the honeybee gut also showed that most of the tetracycline-resistant clones were Snodgrassella, which harbored tetracycline resistance loci at high frequencies [10]. However, the distinct resistome profiles of different locations were unlikely to be caused merely by microbiome variance, since the gut community composition was discordant with the resistome in the Procrustes analysis (Fig. 2g, h). Compared to that of A. mellifera, the gut resistome of A. cerana consists of fewer ARGs and less ARG diversity. Even after taking the larger gut bacterial population size and diversity into account, the normalized abundance of ARGs in A. cerana was lower. This result suggests that A. mellifera was under higher antibiotic resistance pressure. A. mellifera is more intensively managed than A. cerana, and there was a long history of oxytetracycline use to control A. mellifera larval diseases in the USA [10]. Furthermore, A. mellifera is the most frequent crop floral visitor, while A. cerana visits local plant species more often [45]. The pollination preference of the two species might also contribute to the varied resistome profile. Transferable ARGs were the dominant driving force for the overall dissemination of antibiotic resistance. We found that gut samples harboring more ARGs also possessed more transferable ARGs, suggesting that there might be an environmental stress, such as antibiotic selection pressure, that maintains the ARGs [46]. Correspondingly, our results showed that transferrable ARGs were present in all A. cerana samples from different locations; however, the load was much lower than that in A. mellifera samples. We found syntenic resistance loci with high sequence similarity across bee gut bacteria. These contigs were found in samples even derived from different districts, indicating a high potential for horizontal transfer between bacterial hosts and environments. In particular, two sets of contigs particularly widespread between hosts harbored a sul2-strAB cluster. Previous studies have detected a high prevalence of streptomycin resistance genes in bees from the USA, and the strAB genes are associated with the Tn5393 transposon in Snodgrassella [8]. The association of ARGs with mobile elements, such as plasmids and transposons, is critical to facilitate the spread of ARGs between environments [46].
We identified that the nucleotide sequences of the whole genomic region with flanking replication and mobility genes were almost identical to those from the IncQ plasmids (Fig. 5). We found that the genomic regions of A. cerana symbionts were essentially identical to two plasmids belonging to the IncQ-1 subgroup. This finding indicates that the IncQ plasmids are integrated into the chromosome of bee gut bacteria, as found in Salmonella enterica [47] and Vibrio cholerae [48]. Interestingly, we detected sequence deletions in the integrated plasmid fragments, especially in the strB and replicon modules. It has been shown that the IncQ plasmid can evolve "satellite plasmids" with replicon deletions, which promises an immediate fitness advantage enabling the maintenance and further transmission of antibiotic resistance traits [49]. In our findings, deletions inside integration regions were flanked by short homologous sequences, as found in the satellite plasmid, suggesting that plasmid evolution might participate in the dissemination of ARGs [49]. In addition, the sul2-strA-strB gene cluster was detected in different honeybee gut bacteria, and they were probably derived from different sources of IncQ plasmid origin. We found that the inserted plasmid was widely distributed and persisted almost unchanged across indigenous honeybee symbionts, and this phenomenon was also observed in Salmonella from bovine and human sources [35]. Accordingly, our in vitro conjugation assays demonstrated that all tested honeybee gut species were successful recipients of the IncQ plasmid. However, several recipient strains failed in plasmid acquisition despite multiple attempts when G. apis W8126 was used as the donor. Specifically, the conjugation efficiency was extremely low for two phylogenetically distant strains (W8136 and W8127), which might be due to genetic divergence [50] or the different restriction modification systems causing genetic isolation [51]. Conclusions In this study, we provided a comprehensive overview of the distribution of antibiotic resistance elements in honeybees across China, highlighting the role of the bee microbiome as a reservoir of resistance genes and a potential bioindicator of local antibiotic pressure. Horizontal transfer occurs widely among native gut symbionts, promoting dissemination of antibiotic resistance between honeybee gut bacteria and environmental species. Future works using the honeybee model system could assist the exploration of resistance spread driven by mobile elements in the gut environment and the in vivo evolution of plasmid-mediated antibiotic resistance, which alleviates the fitness costs and favors persistence and propagation.
7,932.4
2022-05-03T00:00:00.000
[ "Environmental Science", "Biology" ]
Queen reproductive tract secretions enhance sperm motility in ants Queens of Acromyrmex leaf-cutting ants store sperm of multiple males after a single mating flight, and never remate even though they may live for decades and lay tens of thousands of eggs. Sperm of different males are initially transferred to the bursa copulatrix and compete for access to the long-term storage organ of queens, but the factors determining storage success or failure have never been studied. We used in vitro experiments to show that reproductive tract secretions of Acromyrmex echinatior queens increase sperm swimming performance by at least 50% without discriminating between sperm of brothers and unrelated males. Indiscriminate female-induced sperm chemokinesis makes the likelihood of storage directly dependent on initial sperm viability and thus provides a simple mechanism to secure maximal possible reproductive success of queens, provided that initial sperm motility is an accurate predictor of viability during later egg fertilization. Introduction In promiscuous mating systems, sperm compete for direct egg fertilization or access to storage sites, whereas females may bias the outcome by chemically modulating the direction (chemotaxis) or speed (chemokinesis) of sperm motility [1,2]. Faster-swimming sperm are usually more successful in fertilizing eggs [3][4][5], but very little is known about the actual processes involved and the magnitude of effects exerted by female secretions. Quantifying such effects is difficult because interindividual variation in duration of sperm storage and time towards female re-mating is normally high [6,7]. The social Hymenoptera (ants, bees and wasps) offer interesting exceptions to this rule, because mate choice and insemination are restricted to a single brief time window early in adult life, during which virgin queens store all the sperm they will ever obtain during their life. Exclusive single queen mating is ancestral in all ants, bees and wasps that evolved morphologically distinct queen and worker castes [8,9], but polyandry (the storage of multiple ejaculates) evolved in several evolutionarily derived lineages [10]. In such clades, ejaculates compete for access to the queen's sperm storage organ (the spermatheca), from where sperm will be used to fertilize eggs for up to several decades [11]. Because re-mating later in life is impossible, a queen's lifetime reproductive fitness will depend on the quantity and viability of the sperm stored after early life insemination [12]. This should imply strong selection for storing only viable sperm until the maximal storage capacity is reached. Acromyrmex leaf-cutting ants are highly suitable to investigate whether virgin queens have evolved mechanisms to preferentially store sperm of the highest possible quality because: (i) all queens are inseminated by multiple males [13,14], (ii) queens have a large fluid-filled bursa copulatrix where ejaculates are deposited before a fraction of sperm can enter the smaller spermatheca by active motility [13,15] and (iii) the time span between insemination and final storage is only a few hours [13]. We used Acromyrmex echinatior to investigate whether queen reproductive tract secretions affect sperm motility such that faster sperm are more likely to become stored and whether any such effects are universal or discriminate against related sperm, as inbreeding in haplodiploid insects can incur fitness costs by increasing the probability of diploid larvae developing into sterile males [16,17]. 
Material and methods Colonies of A. echinatior were excavated in Gamboa, Panama from 2001 to 2014 and reared in Copenhagen at 25°C and 60-70% relative humidity. Winged reproductives were collected shortly before each trial and checked for sexual maturity during dissection [18] with watchmaker forceps in Hayes saline (see the electronic supplementary material for details). Accessory testes of 16 males were punctured to collect subsamples of outflowing sperm with a pipette tip previously loaded with 3 µl Hayes saline containing 375 µM of the SYTO 13 (Molecular Probes) fluorescent dye (see the electronic supplementary material for details). Mixtures were gently pipetted into a counting chamber (SC-20-01-04-B, Leja), after which spermatozoa were observed with a spinning-disc confocal microscope (Revolution XD, Andor) at 20× magnification. The fluorescent dye was excited with a 488 nm laser and motility recorded at 30 frames per second with an Andor iXon DU-897-BV EMCCD camera. For each male, we obtained two 5 s recordings, between which we changed the field of vision within the same counting chamber, expecting that sperm motility parameters should remain similar unless measurements were affected by technical noise. [Figure 1 (partial caption): To visualize the two sperm storage organs whose fluids were used in the experiments, this particular virgin queen was artificially inseminated with blue dye prior to dissection. Bars are mean ± s.e. and horizontal lines specify significance of differences (****p < 0.0001).] We analysed recordings with the computer-assisted sperm analyser (CASA) plugin [19] for IMAGEJ (http://imagej.nih.gov/ij/) and assessed measurement repeatability with the R package rptR, method REML [20,21]. Using the same methods, we obtained 5 s recordings of spermatozoa from 10 individual males (two replicate experiments) while swimming in: (i) reproductive tract fluid from a virgin queen collected from the same colony as the focal male, (ii) virgin queen fluid from an unrelated colony and (iii) Hayes saline as a control. To obtain female fluid, we dissected the bursa copulatrix and spermatheca of virgin queens (figure 1a), gently punctured these organs in 3 µl Hayes in a 0.2 ml PCR tube, centrifuged at 17 000g for 3 min, and transferred 1.5 µl supernatant (or Hayes only) into 2 µl Hayes containing SYTO 13 (375 µM final concentration) before using 3 µl aliquots as test fluids. Because CASA yields intercorrelated measures of sperm velocity, we performed a principal component analysis (JMP v. 12) incorporating curvilinear velocity (VCL), velocity on the average path (VAP) and straight-line velocity (VSL) and used the first principal component (PC1) as a proxy of overall sperm velocity. PC1, the proportion of motile sperm and sperm linearity (LIN) were subsequently used as dependent variables in linear mixed-effects models fitted by restricted maximum likelihood (see the electronic supplementary material for details). Results All measurements of sperm motility within the same counting chamber were highly repeatable, which confirmed that our methods were reliable (proportion of motile sperm: R = 0.89, s.e. […]). The proportion of motile sperm increased by almost 70% (figure 1b) when spermatozoa were exposed to queen reproductive fluid compared with Hayes saline (F1,104 = 69.55, p < 0.0001), but reproductive fluids from related and unrelated females had similarly enhancing effects (F1,104 = 0.48, p = 0.49).
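The velocity proxy introduced in the methods above (PC1 of the intercorrelated VCL, VAP and VSL measures) can be sketched as follows; the original analysis was run in JMP, so this numpy-based version, the standardization step and the toy velocity values are illustrative assumptions only.

```python
import numpy as np

velocities = np.array([        # columns: VCL, VAP, VSL (toy values)
    [95.0, 60.0, 45.0],
    [120.0, 75.0, 58.0],
    [80.0, 52.0, 40.0],
    [140.0, 90.0, 70.0],
])
z = (velocities - velocities.mean(axis=0)) / velocities.std(axis=0)   # standardize
eigval, eigvec = np.linalg.eigh(np.cov(z, rowvar=False))
pc1 = z @ eigvec[:, -1]        # scores on the first principal component (largest eigenvalue)
# pc1 would then serve as the overall sperm-velocity response in the mixed models.
```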
Sperm velocity (PC1; figure 1c) increased by ca. 50% in queen fluids compared with the Hayes controls (F1,104 = 49.09, p < 0.0001), similar to the three original variables that loaded PC1 (49.4% for VCL, 52.2% for VAP, 57.2% for VSL) and once more without segregation between related and unrelated female fluids (F1,104 = 0.10, p = 0.75). Sperm movement (figure 1d) was consistently more linear (25.8% increase) when swimming in female fluid compared with Hayes saline (F1,104 = 23.35, p < 0.0001), with no difference between related and unrelated female reproductive fluids (F1,104 = 0.01, p = 0.91). To exclude the possibility that proteins or other compounds from non-reproductive tissues could have similar effects, we ran an extra series of controls repeating the experiment with haemolymph and hindgut fluid. The effects of these extra controls were identical to Hayes saline for the proportion of motile sperm and sperm velocity, but similar to reproductive tract fluid for sperm linearity (figure 2), suggesting that only the effects on sperm motility and velocity are induced by compounds specific to the female reproductive tract. Discussion Our results indicate that the fluid contained in the reproductive tract of A. echinatior queens activates sperm and increases swimming performance by at least 50% for both related and unrelated males. The results are consistent with these female secretions having evolved mechanisms analogous to the mammalian sperm hyperactivation process [22] to ensure that only the most viable sperm become stored in the spermatheca, where they will stay viable for a potential life span of up to two decades. Our finding that female reproductive tract fluid indiscriminately affects related and unrelated sperm does not refute that inbreeding can incur fitness costs, as matched matings at the sex-determining locus are known to impose genetic load on Acromyrmex colonies in the form of diploid males [16]. Rather, it suggests that chemokinesis and self-non-self recognition cannot be combined, or that the likelihood of small effective population size and inbreeding by chance is sufficiently low to preclude selection for costly discrimination. Whether kin recognition would occur with undiluted secretions remains to be seen, but such an artefactual explanation seems unlikely as other effects of reproductive fluids are rather insensitive to dilution (this study and [15,23]). Our results are consistent with the exceptional selection forces on sperm competition and storage that we outlined in the Introduction as being unique for the social Hymenoptera. Even in evolutionarily derived lineages where multiple insemination is the norm, ant queens accumulate all the sperm they will ever have in a single mating flight just after reaching sexual maturity. Earlier studies showed that male seminal fluid of Acromyrmex and Atta leaf-cutting ants incapacitates the sperm of other males after insemination [15], and that spermathecal secretions neutralize this negative effect in Atta, where sperm is deposited almost immediately in the spermatheca without protracted pre-storage in the bursa copulatrix [23,24]. However, Acromyrmex sperm need to actively swim towards final storage [13]. This suggests that the two leaf-cutting ant genera have fundamentally different mechanisms of female control over sperm incapacitation among competing males, with Atta queens apparently using mass elimination of sperm competition upon final storage and Acromyrmex using individual sperm chemokinesis.
Our in vitro experimental results suggest that Acromyrmex queens fill their spermatheca gradually while prioritizing the most viable sperm present in a larger pool of candidate sperm in the fluid-filled bursa copulatrix [13]. As the Atta sperm storage mechanism is truly exceptional, we expect that our present findings for Acromyrmex are more representative for ants in general. Our study shows that the unusual characteristics of ant mating systems based on lifetime commitment of sexual partners provide interesting opportunities to test aspects of sperm competition and female manipulation of sperm storage, which cannot be experimentally manipulated with the same feasibility in solitary insects where promiscuous re-mating across the female life span is the norm. Ethics. Ant collection and exportation followed regulations in Panama and importation regulations in Denmark. Treatment of ants followed guidelines at the University of Copenhagen. Data accessibility. The datasets supporting this article have been uploaded as part of the electronic supplementary material.
Historical Analysis of Bank Profitability Using CAMEL Parameters: Role of Ownership and Political Regimes in Pakistan In the first sixty years of its existence, the financial sector of Pakistan experienced two prominent episodes. First, there was experimentation with the ownership structure of financial institutions, which began with the promotion of private-sector ownership; the institutions were then nationalized in the 1970s. Subsequently, the process was reversed in the 1990s, transferring most of the banking assets back to the private sector. Second, on the political front, autocrats interrupted the democratic order many times, ruling for 33 long years in total. The objective of this study is to take stock of the performance of the banking industry when it was in private hands vis-à-vis when banks were nationalized and, as a supplement, to evaluate the impact of dictatorship versus democracy on the performance of the banking industry. Using a historical dataset, this study analyses banking sector performance with CAMEL parameters. Our main findings are that when banks are in private hands their profitability is positively related to the quality of their assets and management and negatively related to capital adequacy and liquidity. However, when banks are under government ownership, asset quality and liquidity become irrelevant in determining profitability, whereas capital adequacy and management quality continue to affect it. This implies that government ownership works like an implicit guarantee for banks (a) that they will remain solvent in the short run, and (b) that the government will absorb losses emanating from the deterioration of bad assets. As regards political regimes, the study finds no noticeable difference in the impact of bank-specific parameters whether a democratic government is in place or a dictatorship is imposed in the country. These findings have implications for bank regulation, monetary policy and legal reforms in the financial sector. Introduction For almost three decades, terms like reforms, restructuring, liberalization and privatization have been widely used both in the literature and in practice. Economic literature abundantly explores the nexus between finance and growth, and the main objective is to find ways of enhancing efficiency in the financial intermediation process. Different countries at different stages of economic development choose a pace and direction for the reform of their financial systems to advance and sustain their successes and to mitigate the effects of past mistakes. Among others, a major issue under research relates to governments' ownership and control of financial institutions. Calari (2004) made the interesting observation that about 40 percent of the world population lives in countries where the government is the major owner of the banking system. Essentially, the inordinate engagement of the government in the financial sector is justified on the grounds that public ownership of banks and Development Financial Institutions (DFIs) serves to boost their role as conduits for channelizing funds to targeted and underserved sections of the economy.
From a long-term development perspective, it is generally highlighted that the development of those targeted industries or sectors should not be retarded just because the private sector lacks initiative, incentives or capacity.Japan, South Korea, Singapore and lately China are some of the leading examples where the industrialization process has largely been sponsored by state-owned DFIs or those countries have histories of autocratic rules.For countries like Germany, Patel (2004) states that command mode of financial intermediation was used in the post-World War II era to boost development process of the country.Economics literature offers host of studies on the linkage between performance and efficiency indicators and the outcome of reforms and privatization processes however they present mixed results leaving this issue inconclusive.One of the technical reasons projected for absence of certainty about effects of ownership of financial institutions on their performance is heterogeneity of the banks, and others include simultaneity of number of reforms measures initiated during the course of overall restructuring and liberalization of the financial sector.Nonetheless researcher generally agree that when it comes to assessment of bank performance then it is the private entrepreneurship who sets the incentives right, not the government. Pakistan, a developing country, presents an interesting example of dominant involvement of government, sometimes as a heavyweight owner of financial assets and sometimes asserting itself through the autocratic rule in the country.In short, government has always taken keen interest in the development of the financial sector, inventing ways like credit rationing and fixing credit pricing to channel much needed financial resources to selected sectors of the economy, even until earlier this decade.It is further argued that performance of the financial institutions may be mapped, on almost one-to-one basis, onto the real sector and that is why government directly owns financial institutions or indirectly mandates rules of the game.In this context analyzing performance, efficiency and stability of financial institutions under different ownership regimes presents an intriguing area of research. The standard theoretical literature presents two popular views on government shareholdings of financial institutions -Development View and Political View (Lewis, 1955 andGerschenkorn, 1962).The former states that in countries with weak financial institutions, government ownership of strategic institutions, including banks, is needed to jump start both financial and economic development.In contrast, the Political View takes lead from the principle-agent problem and stipulates that government ownership of financial institutions creates multiple distortions leading to inefficiencies, which is why change in ownership from public to private sector is promoted to enhance efficiency of financial institutions. It is argued that if the government owns the means of production then the principal-agent relation holds between the state and the society, and the ensuing agency cost needs to be minimized.Identification of this objective implies that an ideal solution is to completely divest public ownership of assets.In simple words, support for privatization is argued on the grounds of actual or perceived distortions created by public ownership and control, as governmentowned institutions work at less than the optimal level. 
In the words of Megginson and Netter (1998), "the theoretical arguments for the advantages of private ownership of the means of production are based on a fundamental theorem of welfare economics: Under strong assumptions, a competitive equilibrium is Pareto optimal.However, the assumptions include requirements that there are no externalities in production or consumption, that the product is not a public good, that the market is not monopolistic in structure, and that information costs are low.Thus, a theoretical argument for government intervention based on efficiency grounds rests on an argument that markets have failed in some way such that one or more of these assumptions do not hold, and that the government can resolve the market failure".This leads us to the idea of government's role as the lender of last resort during crisis times as seen most recently during the Global Financial Crisis (GFC), when ownership of a number of financial institutions was again transferred back to government domain.However, this is considered to be an urgent and temporary measure, focused on mitigating the impact of such an unprecedented crisis. Most of the recent studies on banking sector performance in Pakistan use recent data from 1990s onwards and evaluate operational efficiencies using data envelopment analysis.There is hardly any study which takes account of overall financial history of the country.To the best of our knowledge this is the first study which uses parameters from CAMEL framework, the most recent supervisory model, and takes a historical account of banking sector performance from 1953 to 2008.Therefore this research builds on the previous work and fills the gap in economic literature by (a) extending analysis on the overall financial history of Pakistan, and (b) using CAMEL technique to offer unique perspective on the variation in the factors which are relevant to banks' profitability under private ownership versus government control. Literature Review Starting from 1974 for almost two decades of government ownership and control had rendered those institutions completely ineffective, operationally as well as financially.Their commercial viability was at stake as most of them had their equity wiped out by bad debts.Every year government had to allocate funds from tax payers' money to sustain institutions which were effectively running down the stream due to: (a) inept human resource allocation; (b) technical inefficiencies; (c) lack of entrepreneurial incentives; (d) collusion of the bank managers and their boards; and (e) dishonest and politically-connected bank borrowers; all examples of distortions in the financial intermediation process resulting in bad lending practices.These factors, taken together, provided the impetus for privatization process.Reform process not only involved change of ownership and control of banks to private hands, but it was also accompanied with overhauling of the supervisory structure at the central bank (risk-based on-site and off-site supervisory structure was introduced in the early part of this decade).Initiatives to reset the financial system of the country kicked in early 1990-91 and continued through the whole decade.This process involved privatization of state-owned banks, issuance of license to open new private banks, strengthening and rationalization of branch networks, etc. 
Financial sector reforms in Pakistan started in 1989 and they graduated subsequently into ridding the banks, the leading financial institutions, of government ownership transferring them to private hands (Burki andAhmad, 2010, 2011).The root cause of this change of ownership was to give private entrepreneurs a chance to turn those inefficient financial bureaucracies into viable commercial financial concerns.Theory of efficient resource allocation through market forces was the main driving force. A market-based economic system is inherently incompatible with state involvement in financial business.The general consensus is that privately owned financial institutions are better performers and more efficient than stateowned.While comparing performance, most of the studies find that state-owned, partially privatized, fully privatized or private and foreign banks build an order (from least to the best) of efficiency.For instance, Berger, Hasan, and Klapper (2004) use data of 28 developing countries to show that foreign banks are most efficient, followed by private banks while state-owned banks are least efficient.Boardman and Vining (1989) using four profitability ratios and two measures of X-efficiency find that state-owned and mixed (state and private) ownership enterprises are significantly less profitable and productive than are privately owned firms.They also find that mixed enterprises are no more profitable than SOEs, suggesting that full private control, not just partial ownership, is essential to achieving performance improvement.Similar argument has been put forth by empirical research by La Porta, Lopez-de-Silanes and Shleifer (2002). Focusing on the banking sector, Sathye (2005) has used five years data (1998)(1999)(2000)(2001)(2002) on Indian banking industry and tested if the performance of state-owned banks is consistent with privatized or private banks.He applies difference in mean test of significance to confirm if the partially privatized banks perform better and their efficiency indicators exceed those of public banks.He records that partially privatized banks are quickly catching up with fully private banks as there is no significant difference in their performance and efficiency indicators.He argues that gradual off-loading of state-ownership to private sector is an optimal strategy and that it should be accompanied by wider reforms in financial sector. Various studies have also been conducted to empirically test the relationship between privatization and performance on the domestic banking industry.Using parametric approach, Di Patti and Hardy (2005) have analyzed Pakistani bank data and show that foreign banks are more profit efficient, followed by private and then state-owned banks, but the average cost efficiency of these banks is similar.Another study on the domestic banking system is by Burki & Niazi (2009).They use data envelopment analysis and data from 1991 to 2000 to determine that foreign banks show superior cost efficiency than private and state-owned banks, but that the efficiency of foreign banks deteriorates once the consolidation stage of the financial reforms is over. 
In a similar vein Galal, Leroy, Jones and Ingo (1994), Megginson, Nash & Randenborgh (1994), La Porta andLopez-de-Silanes (1997), andMegginson Jeffry &Netter (1998) promote the idea of privatization to improve performance of state-owned companies.Boubarki and Cosset (1998) empirically investigated performance of firms in pre-and post-privatization using international data from 21 developing countries including Pakistan, and find that an average representative firm after passing through the privatization process performs better, accumulates reserves and pays dividend regularly.Taking lead from that study, Hakro and Akram (2009), using Pakistani firm level data, have compared the operating and financial performance of pre-and post-privatization periods of firms in almost all sectors of the economy, including banks.Their sample covers 49 out of 161 privatized units during 17 years starting from year 1988.Considering privatization as an event, they have applied test of difference in mean/median values method (by using Wilcoxon signed-rank test) in 3 years before and after privatization sample.However, unlike other studies they found that profitability, efficiency, output and dividends parameters are not significantly different in the two time periods leading them to conclude that the privatization process has not resulted in significant performance enhancement of banks.Main problem in their study is the application of the test of significance on a very small time series, built on limited sample set. Alternate methods of testing potential performance and efficiency enhancements from privatization include 'cost benefit analysis' (for instance used by Galal et al, 1994), ratio analysis, data envelopment analysis (used by Bousoffiane, Martin and Parker, 1997), X-efficiency analysis (Qayyum and Khan, 2007), stochastic frontier analysis (Burki and Ahmad, 2010), etc.In a most recent study Maththew (2010) has applied DEA on Pakistan banking data and finds that in contrast to low technical efficiency enhancements, cost inefficiencies of banking institutions has declined over time.However, the author acknowledges discrepancies in the short-span data set used for analysis and gives mixed conclusions while highlighting limitations of his estimation technique. Motivation of the Study: The debate on the role of ownership of banks by public versus private owners is rooted in the issue of corporate performance and efficiency enhancement.Public sector ownership of banks, especially in the form of overall nationalization of banking assets, is generally considered counterproductive as it distorts managerial incentives to deliver on their promises.Moreover bank regulation becomes internalized as government assumes role of the owner, ombudsman and the arbiter, etc. Similarly literature on impact of political regimes on the performance of banks and its relation with the ownership is scarce with reference to developing countries.It needs to be explored if the impact of a dictatorial role has any similarities with the nationalization of banking assets or otherwise. 
Using almost sixty years of data on the banking industry, this study investigates variability in the impact of ownership of banking institutions as it moves from the private to the public sector. In addition, the study explores variation in profitability under different political regimes (democracy and dictatorship). We hypothesize that the profit motive of investors operates under private ownership of financial institutions and links the quality of assets, the quality of bank management, bank liquidity and capital adequacy with profitability. We assume that when banks are in private hands their profitability increases with improvements in the quality of assets and management and falls with the accumulation of liquidity and increases in capital. However, if the government takes over as owner and controller of financial institutions, then capital adequacy and management quality continue to affect profitability, but asset quality and liquidity drop out of the list of its determinants. In other words, dominance of the government, either as an owner of financial assets or as an autocrat controlling the entire regime, eliminates the importance of short-term solvency and asset quality as factors affecting the performance of banking institutions. The likely reason for this behavior could be the implicit guarantee provided by the government for the short-term solvency and viability of financial institutions if asset quality deteriorates. As regards political regimes, our conjecture is that democracy behaves like private ownership of financial institutions and dictatorship has similarities with government ownership of financial institutions. Ownership: Private vs. Public We hypothesize that under private ownership of financial institutions profitability is linked positively with asset and management quality, whereas liquidity and capital adequacy affect profitability negatively. However, when financial assets are under government ownership and control, only capital adequacy and the quality of bank management continue to affect bank profitability; liquidity and the quality of bank assets become irrelevant. A potential explanation may be drawn from the government's capacity to become an implicit guarantor of short-term solvency and to assume the credit risk when asset quality deteriorates. Political Regime: Democracy vs. Dictatorship As regards political regimes, our conjecture is that bank profitability is affected in the same way by the quality of assets and bank management, liquidity and capital adequacy whether democratic order is maintained in the country or a dictator is ruling it.
These hypotheses have been tested by means of the CAMEL framework. Using a linear regression model with dummies for private-public ownership and for political regimes, and with various ratios accounting for asset quality, capital adequacy, liquidity and management performance as independent variables, the behavior of Return on Equity (ROE) and Return on Assets (ROA) has been explored. To the best of our knowledge, no other study has employed CAMEL parameters on historical data for the banking industry in Pakistan. Applying standardized ratios to the aggregate dataset covering almost 60 years allows a straightforward evaluation of the impact of a variety of factors on the profitability of the banking industry. This section on the literature review is followed by a discussion of the data, its sources and structure. Section 4 specifies the methodology, after which the paper presents its findings. The last section concludes the paper. It is important to take note of the different conventions used in constructing the variables of interest. For instance, while using bank-level data, two heads of accounts, namely 'Head office and interbank adjustments' and 'Contingent assets/liabilities as per contra', have been dropped in calculating the consolidated total assets/liabilities position of the banking sector. The former entries are set off when banks' head offices compile data from all branches for the annual accounts, and the latter, being the same on both the assets and liabilities sides, directly drops out. Further, total capital has been defined as the sum of paid-up capital, reserves and the balance brought forward from the previous year; earning assets have been defined as the sum of advances and investments; total deposits are the sum of demand and time deposits; investment includes federal and provincial government securities, shares of local and foreign companies and other investments; and net profit has been calculated as the difference between total receipts (income) and total disbursements (expenses). Data Description For comparing the performance of banking institutions during different ownership regimes, the periods 1956 to 1973 and 1991 to 2008 together have been taken as private-sector ownership and the period between 1974 and 1990 as public ownership of banking assets. As supplemental work, the period from 1953 to 2008 has been divided into democratic and autocratic rules. The former represents the time in which elected governments were in place (1953 to 1958, 1971 to 1977, and 1989 to 1999) and the latter represents dictatorial rule (1958 to 1969, 1977 to 1988, and 1999 to 2008). Methodology Using a unique aggregated bank-level dataset covering 60 years of the financial history of the country, we have employed CAMEL parameters to measure the effect of different bank-specific factors on the profitability of banks in Pakistan. Our model takes the following form, where Earnings, used as the dependent variable, measures the profitability of banks alternatively as the ratio of net profit to total equity (Return on Equity, ROE) and as the ratio of net profit to total assets (Return on Assets, ROA).
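The display equation for the model did not survive text extraction. A plausible reconstruction in LaTeX, based solely on the variable descriptions in the surrounding text (the original notation, and the way the dummies enter, may differ), is:

\[
\text{Earnings}_t \;=\; \beta_0 \;+\; \sum_{i} \beta_i X_{i,t} \;+\; \sum_{j} \gamma_j C_{j,t} \;+\; \varepsilon_t \qquad (1)
\]

Here $X_t$ collects the bank-specific CAMEL ratios (capital adequacy, asset quality, management performance, liquidity) and $C_t$ the macro controls (real GDP growth, M2 growth, growth of the branch network). One common way to deploy the ownership and political-regime dummies, consistent with the regime-specific coefficients reported later, is to interact them with the bank-specific regressors:

\[
\text{Earnings}_t \;=\; \beta_0 \;+\; \sum_{i} \beta_i X_{i,t} \;+\; \sum_{i} \delta_i \,\bigl(D_t \cdot X_{i,t}\bigr) \;+\; \sum_{j} \gamma_j C_{j,t} \;+\; \varepsilon_t ,
\]

with $D_t$ equal to 1 in private-ownership (or democratic) years and 0 otherwise.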
On the right-hand side of the equation, Xt is a vector of explanatory variables and the βi are their respective coefficients. The vector Xt includes popular determinants of bank profitability such as the total capital to total assets ratio (a measure of capital adequacy), the growth rate of assets (a measure of the asset quality of banks), the total expenditures to total income ratio (a measure of management performance), and finally the total deposits to total advances ratio (a measure of bank liquidity). The vector Ct of control variables includes the growth rate of real Gross Domestic Product (GDP), the growth rate of money supply (M2), and the growth rate of the banks' branch network. These variables generally control for the effects of macro-monetary shocks. The last item in equation (1), εt, represents a white-noise error term. The following models present the functional forms in which dummy variables have been deployed to judge the impact of (a) ownership (private versus public) on bank profitability, and (b) political regime (democracy versus autocratic rule) on bank profitability. Bank Profitability: Earnings, which add strength to banks' capital base and ensure the viability of banking institutions in the long run, may be gauged by return on equity (RoE) and return on assets (RoA). The data series shows that during the period 1956-73, when banking assets were in private hands, the profitability of banks declined from 1.3 in 1956 to 0.8 in 1973. A similar pattern was observed during the nationalization period, where profitability was healthy in the beginning and then started declining, with a small uptick during 1983-84. Thereafter, during the 1990s and 2000s, the period of financial reforms, profitability improved after the privatization of MCB and ABL in 1991. However, it declined later with the erosion of earning assets and the amassing of non-performing assets of HBL and UBL, dipping to losses during 1997-98. Moreover, the freezing of foreign currency accounts in 1998 also had a detrimental impact on the profitability of foreign banks operating in Pakistan. However, in the 2000s bank profitability improved significantly along with the handing over of HBL and UBL to the private sector. Capital Adequacy: The total capital to total assets ratio, a measure of capital adequacy, shows the capacity of financial institutions to absorb losses after all other options of risk absorption have been exhausted. During 1958-1963 this ratio improved; then, as the banks expanded their business, the ratio declined and, except for the years 1969-70, continued on its downtrend, touching the 1.8 percent mark in 1976. Afterwards it started rising, crossing the 5 percent mark in 1988. During the nationalization period a blanket guarantee was issued by the government to the deposit holders of all the nationalized banks. That was a time when capital was linked to deposits, and the sovereign guarantee to depositors did not motivate the banks to expand their capital base. In 1981 the paid-up capital of most of the nationalized banks was increased simultaneously, indicating that the decision to expand capital was more administrative in nature than based upon technical reasons. Later on, during the period of reforms and privatization, banks' capital base improved considerably. The process of reforms and restructuring forced mergers and acquisitions of many small and fragile banks, thereby increasing their fall-back cushion; in addition, implementation of the Basel Capital Accord compelled banks to expand their total capital according to the level and nature of the risks they are exposed to.
Asset Quality: The asset quality of banks can be gauged by the growth rate of total assets, and the data show that it deteriorated during the nationalization period. The growth rate of advances picked up with the growth in overall economic activity in 1959. This period, which set the foundation of financial liberalization, witnessed marked growth in advances. In 1965, when the Indo-Pak war broke out, growth in advances declined. In 1966 the introduction of a concessionary lending scheme for small loans had a beneficial effect on the growth in advances, modestly pulling it up. Later on, in 1968, credit ceilings were introduced as a credit control tool, which had a detrimental impact on the growth of advances in 1969. Another factor that affected the growth of advances was the change in the interest rate on advances charged by bigger banks, which was increased from 8 percent to 9 percent; later, the maximum interest rate charged by smaller banks was also increased from 10 to 11 percent in 1972. Management: Like any other service industry, the financial sector's performance is heavily dependent upon the health and soundness of its management. It is the most difficult aspect to quantify objectively across firms in the same industry as well as over time. Two variables are popularly used to gauge the management soundness of banks: the total expenditures to total income ratio and earnings per employee. In this study we have relied on 'profit per employee' as a measure of management performance. In the earlier part of the pre-nationalization period it was very low on account of the linear expansion of the industry. However, when the government started granting licenses to new private banks, it went up very quickly. Part of the explanation comes from unhealthy competition among banks, and the rest is a story of massive corruption and inefficiencies in the banking practices of that time. The average ratio of this period was, however, more than 10 points lower than the average of the nationalization period. A closer look at the table shows that soon after nationalization this ratio came down from 88.4 percent in 1973 to 86.0 percent in the first two years of the nationalization period. Starting from 1976 it began rising, touched its peak in 1985, slid down a little afterwards and rose again in 1990. Liquidity: Although collecting deposits and extending loans is considered to be the main activity of deposit money institutions, a high advances to deposits (ADR) ratio indicates liquidity strain, which has the potential to damage the overall performance of banks. A comparison of the pre- and post-nationalization periods with the government-owned and controlled era of 1974-90 shows that the ratio was high during the nationalization period, when schemes of directed credit were introduced and nationalized banks did heavy funding of public sector enterprises. A cursory look at the pre-nationalization period shows that this ratio remained above the 80 percent mark from 1964 to 1970, a period when private sector banks mushroomed. Insider and connected lending practices siphoning depositors' money to related parties of the banking institutions were pervasive during the period. In the current period, after the end of public sector dominance, this ratio has become stable and has stayed around a yearly average of 67.6 percent. Results This section presents regression results on the impact of bank-specific factors on bank profitability after controlling for macroeconomic variables and the size of the banking industry.
Descriptive Analysis Table 1 gives summary statistics for selected variables. Both the dependent and the main independent variables are ratios of different factors, as explained in other sections, and the control variables are in percentage growth form. As the data show, most of the variables are asymmetrical (three negatively and the rest positively skewed). Over a period of 56 years, mean return on equity is 28 percent whereas mean return on assets is only 1 percent. Asset quality, which has been measured as the growth of assets, has a mean of 16 percent with a standard deviation of 8 percent. Liquidity, measured by the ratio of advances to deposits, has a mean of 71 percent and a standard deviation of 10 percent. Profit per employee, a measure of the management quality of the overall banking industry, has a mean of 10 percent with a standard deviation of 23 percent. This shows wide variability in the size of the banks: some are small and others are very large. The average capital to total assets ratio is close to 2 percent with a standard deviation of 1 percent. Since risk-weighting of assets is a modern concept and has been in practice only for the last 10 to 15 years or so, total assets have been used in the denominator, which explains the smaller size of the capital adequacy measure. Table 1: Summary Statistics of Selected Variables. From the pool of control variables, money supply, an indicator of monetary expansion in the country, grew at an average rate of 13 percent with a standard deviation of 6 percent; the branch network of the banks, a proxy for overall expansion in banking business, expanded by 7 percent with a standard deviation of 12 percent; and GDP (at constant factor cost), a measure of the overall growth of the economy, grew on average by 5.28 percent with a standard deviation of 2.32 percent. This section discusses regression estimates of alternative specifications. The basic model covers all the years, and ROE has been used as the dependent variable to explore the impact of asset quality, bank liquidity, capital adequacy and the management quality of banks. In all the models, the growth rates of the broad measure of money supply (M2) and gross domestic product (GDP), along with the size of the banking industry's branch network, have been used as controls. Table 2 shows regression results in three different settings. The first model (Overall) uses all observations in the dataset and tests the dependence of ROE on the above-stated variables. It shows that growth of assets, liquidity and better management quality have a positive influence on bank profitability; however, only the impact of management quality is statistically significant. On the other hand, capital adequacy has a significant negative impact. Models 2 and 3 use a dummy for the ownership variable. Regression estimates show that when banking assets are in the private sector's hands (ownership dummy, D own = 1), asset and management quality affect bank profitability positively, whereas liquidity and capital adequacy influence bank profitability negatively. All these coefficients are statistically significant. These findings are consistent with the argument that private ownership of financial resources is better than the alternative of transferring them into the public domain, because the former has the right set of incentives to innovate and contain overall costs (Shleifer, 2004).
Regression Analysis On the other hand, when banking assets are owned by the government (ownership dummy, D own = 0) only capital adequacy is the relevant determinant; other variables change their direction of relation with the ROE however they are statistically insignificant showing that they have no influence in determining bank profitability.A potential explanation of these results could be that due to implicit guarantee of the state the bank profitability under government ownership becomes independent of most of the bank specific factors.One can argue that under government ownership banks may be investing very heavily in risk-free government securities which are available for anytime to generate cash thereby delinking liquidity from profitability of the banks. To check for robustness of the results, alternate measure of profitability, ROA, has been regressed on set of parameters as above and, on the whole, the findings are consistent with the outcomes of ROE as a dependent variable.Regression estimates with ROA are presented in Appendix Table A1.From Table 3, the first column (Model 1: Overall) gives regression estimates without the use of dummy variable.Column 2 (Model 2: D Rule = 1) presents results of regression estimate when dummy variable is set to account for years when country was under democratic dispensation.The last column (Model 3: D Rule = 0) accounts for those years when country was witnessing autocratic rule.Model 2 shows that when a democratic order is maintained in the country the quality of banking assets and that of their management exert positive significant impact on the profitability of the banking industry however liquidity and capital adequacy affect profitability negatively.On the other hand, Model 3 shows that when country is ruled by dictatorship regression coefficients for asset quality and liquidity become statistically insignificant whereas capital adequacy and quality of management continue to have significant negative and positive impact respectively.Potentially, it is the internalization of the short term solvency under dictatorial rule, like under government ownership, in the business model of the banking institutions.These results show that a dictatorial regime behaves more like an implicit government guarantee for short term solvency of the banking institutions and also provide blanket coverage for any deterioration in the quality of assets.However, like in other models, an increase in bank capital imposes a cost and depresses the profitability and a better management gives a boost to it. Table A2 in Appendix shows results for impact of change in political regime on the behavior of banks specific parameters interacting with bank profitability.Here, ROA, an alternate measure of profitability has been regressed on set of parameters, and on the whole, the findings are consistent with the outcomes of ROE as a dependent variable.Regression estimates with ROA are presented in Table 3. 
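To make the estimation strategy concrete, the sketch below shows one way the ownership-dummy regression described above could be run; it is illustrative only. The file name and the column names (roe, roa, cap_adequacy, asset_growth, exp_income, adr, gdp_growth, m2_growth, branch_growth, d_own) are hypothetical placeholders, and the interaction specification is one plausible reading of how the paper's dummies enter, not a reproduction of the authors' exact setup.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pakistan_banking_1953_2008.csv")

# Interacting every CAMEL ratio with the ownership dummy lets the coefficients
# differ between private (d_own = 1) and public (d_own = 0) ownership years,
# while the macro controls are common to both regimes.
formula = (
    "roe ~ (cap_adequacy + asset_growth + exp_income + adr) * d_own"
    " + gdp_growth + m2_growth + branch_growth"
)
ols_fit = smf.ols(formula, data=df).fit(cov_type="HC1")  # heteroskedasticity-robust s.e.
print(ols_fit.summary())

# Robustness check with ROA as the alternative profitability measure.
roa_fit = smf.ols(formula.replace("roe", "roa"), data=df).fit(cov_type="HC1")
print(roa_fit.summary())

Splitting the sample into private-ownership and public-ownership years and estimating the model separately on each subsample would be an equally valid reading of Models 2 and 3.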
Conclusion In this study we have explored the impact of changes in ownership and political regimes on bank performance by deploying parameters of the CAMEL framework. Return on equity, and then return on assets, as measures of bank profitability, have been regressed on dummies for private-public ownership and for political regimes, along with time series of asset quality, capital adequacy, liquidity and management performance. This approach allows a straightforward evaluation of the impact of these factors on the profitability of the banking industry using historical data. Unlike other studies on privatization and its impact on the performance of banking institutions, our focus is to capture the variation in the behavior of different bank-specific variables in relation to bank profitability when they are tested in competing ownership models (private versus government) and political orders (democracy versus dictatorship). The study estimates bank profitability by innovatively using parameters from the CAMEL framework, a standardized bank supervisory model, and makes use of a historical dataset on the banking industry of Pakistan. The findings highlight interesting features related to the profitability of the banking industry. After controlling for the effects of real GDP growth, growth of money supply (M2) and growth of bank branches (as a proxy for the size of the banking industry), the base model shows that the main determinants of bank profitability (measured by ROE and ROA) are the quality of banking assets and of their management, bank liquidity and capital adequacy. Our main findings are that bank profitability is positively related to the quality of assets and management and negatively related to liquidity and capital adequacy. In the next stage, dummy variables are used to estimate the impact of (a) a change in bank ownership from the private to the public domain, and (b) a change in political regime from democracy to dictatorship. Regression estimates show that when banking institutions are under private control, bank profitability increases with improvements in the quality of assets and management, and it goes down when banks increase their capital base or hoard liquidity. These findings are consistent with the profit motive of investors when banking assets are in private hands. However, when banks are owned by the government, the linkage of asset quality and liquidity with profitability becomes statistically insignificant, implying that the government becomes a guarantor of short-term solvency and provides cover for asset deterioration. Using a similar set of parameters, the study finds that the behavior of the bank-specific independent variables (capital adequacy, asset quality, management quality and bank liquidity) does not change whether democracy is the mode of governance or a dictator is running the country. The paper has lessons for countries that have experienced nationalization of banking assets and for those that have witnessed alternating democratic and dictatorial rules. It suggests that government ownership or shifts in the political order may limit the dividends from the efficiency induced by the incentives of private investors, especially those residing in democracies.
This study is based on scheduled banks' data from 1953 to 2008.Largely the data for the period 1953-1989 have been taken from various issues of Banking Statistics of Pakistan ii , an annual publication of the SBP, and supplemented by other sources including various issues of Statistical Bulletin published by State Bank and Handbook of Statistics on Pakistan Economy (2010) and annual audited accounts of the banks wherever they are available. Table 2 : Impact of Ownership and Bank Specific Parameters on Profitability (Dependent Variable: ROE) Table A2 : Impact of Political Regimes and Bank Specific Parameters on Profitability
Comparison Principle for Hamilton-Jacobi-Bellman Equations via a Bootstrapping Procedure. We study the well-posedness of Hamilton–Jacobi–Bellman equations on subsets of R^d in a context without boundary conditions. The Hamiltonian is given as the supremum over two parts: an internal Hamiltonian depending on an external control variable and a cost functional penalizing the control. The key feature in this paper is that the control function can be unbounded and discontinuous. This way we can treat functionals that appear e.g. in the Donsker–Varadhan theory of large deviations for occupation-time measures. To allow for this flexibility, we assume that the internal Hamiltonian and cost functional have controlled growth, and that they satisfy an equi-continuity estimate uniformly over compact sets in the space of controls. In addition to establishing the comparison principle for the Hamilton–Jacobi–Bellman equation, we also prove existence, the viscosity solution being the value function with exponentially discounted running costs. As an application, we verify the conditions on the internal Hamiltonian and cost functional in two examples. Introduction and aim of this note The main purpose of this note is to establish well-posedness for first-order nonlinear partial differential equations of Hamilton-Jacobi-Bellman type on subsets E of R^d, in the context without boundary conditions and where the Hamiltonian flow generated by H remains inside E. In (HJB), λ > 0 is a scalar and h is a bounded continuous function on E; the Hamiltonian H has the variational form (1.1), reproduced below, in which θ ∈ Θ plays the role of a control variable. For fixed θ, the function Λ can be interpreted as a Hamiltonian itself. We call it the internal Hamiltonian. The function I can be interpreted as the cost of applying the control θ. The main result of this paper is the comparison principle for (HJB) in order to establish uniqueness of viscosity solutions. The standard assumption in the literature that allows one to obtain the comparison principle in the context of optimal control problems (e.g. [2] for the first-order case and [10] for the second-order case) is that either there is a modulus of continuity ω such that |H(x, p) − H(y, p)| ≤ ω(|x − y|(1 + |p|)), (1.2) or that H is uniformly coercive, that is, H(x, p) → ∞ as |p| → ∞, uniformly in x. (1.3) More generally, the two estimates (1.2) and (1.3) can be combined in a single estimate, called pseudo-coercivity, see [4, (H4), Page 34], which uses the fact that the sub- and supersolution properties roughly imply that the estimate (1.2) only needs to hold for appropriately chosen x, y and p such that H is finite uniformly over these chosen x, y, p. The pseudo-coercivity property is harder to transfer to our setting, as in this way the control on H does not necessarily imply the same control on Λ, in particular in the case when I is unbounded. We return to this issue below. The estimates (I) and (II) are not satisfied for Hamiltonians arising from natural examples in the theory of large deviations [12,13] for Markov processes with two scales (see e.g. [6,18,27,29] for PDEs arising from large deviations with two scales, and [3,16,17,20,21] for other works connecting PDEs with large deviations). Indeed, in [6] the authors mention that well-posedness of the Hamilton-Jacobi-Bellman equation for examples arising from large deviation theory is an open problem. Recent generalizations of the coercivity condition, see e.g. [9], also do not cover these examples. In the large deviation context, however, we typically know that we have the comparison principle for the Hamilton-Jacobi equation in terms of Λ.
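The displayed equations (HJB) and (1.1) referred to above were lost in extraction. Based on the abstract and on the later use of the notation f − λHf = h, a plausible LaTeX reconstruction (the exact formulation in the original may differ) is:

\[
f(x) \;-\; \lambda\, H\bigl(x, \nabla f(x)\bigr) \;=\; h(x), \qquad x \in E, \tag{HJB}
\]

with the Hamiltonian given in variational form by

\[
H(x,p) \;=\; \sup_{\theta \in \Theta} \bigl\{ \Lambda(x,p,\theta) \;-\; I(x,\theta) \bigr\}, \qquad (x,p) \in E \times \mathbb{R}^d. \tag{1.1}
\]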
In addition, even though I might be discontinuous, we do have other types of regularity for the functional I, see e.g. [32]. Thus, we aim to prove a comparison principle for (HJB) on the basis of the assumption that we have the following natural relaxations of (or the pseudo-coercive version of) (I) and (II). (i) For θ ∈ Θ, define the Hamiltonian H θ (x, p) := Λ(x, p, θ). We have an estimate on H θ that is uniform over θ in compact sets K ⊆ Θ. This estimate, for one fixed θ, is in spirit similar to the pseudo-coercivity estimate of [4] and is morally equivalent to the comparison principle for H θ . The uniformity is made rigorous as the continuity estimate in Assumption 2.14 (Λ5) below. (ii) The cost functional I(x, θ) satisfies an equi-continuity estimate of the type |I(x, θ) − I(y, θ)| ≤ ω I,C (|x − y|) on sublevel sets {I ≤ C} which we assume to be compact. This estimate is made rigorous in Assumption 2.15 (I5) below. To work with these relaxations, we introduce a procedure that allows us to restrict our analysis to compact sets in the space of controls. In the proof of the comparison principle, the sub-and supersolution properties give boundedness of H when evaluated in optimizing points. We then translate this boundedness to boundedness of I, which implies that the controls lie in a compact set. The transfer of control builds upon (i) for Λ(x, p, θ 0 x ) when we use a control θ 0 x that satisfies I(x, θ 0 x ) = 0. This we call the bootstrap procedure: we use the comparison principle for the Hamilton-Jacobi equation in terms of Λ(x, p, θ 0 x ) to shift the control on H to control on Λ and I for general θ. That R. C. Kraaij and M. C. Schlottke NoDEA this sketch, we refrain from performing localization arguments that are needed for non-compact E. Thus, to summarize, we use the growth conditions posed on Λ and I and the pseudo-coercivity estimate for Λ to transfer the control on the full Hamiltonian H to the functions Λ and the cost function I. Then the control on Λ and I allows us to apply the estimates (i) and (ii) to obtain the comparison principle. Next to our main result, we also state for completeness an existence result in Theorem 2.8. The viscosity solution will be given in terms of a discounted control problem as is typical in the literature, see e.g. [2,Chapter 3]. Minor difficulties arise from working with H that arise from irregular I. Finally, we show that the conditions (i) to (vi) are satisfied in two examples that arise from large deviation theory for two-scale processes. In our companion paper [26], we will use existence and uniqueness for (HJB) for these examples to obtain large deviation principles. Illustration in the context of an example As an illustrating example, we consider a Hamilton-Jacobi-Bellman equation that arises from the large deviations of the empirical measure-flux pair of weakly coupled Markov jump processes that are coupled to fast Brownian motion on the torus. We skip the probabilistic background of this problem (See [26]), and come to the set-up relevant for this paper. Let G := {1, . . . , q} be some finite set, and let is the set of probability measures on G. Let F = P(S 1 ) be the set of probability measures on the one-dimensional torus. We introduce Λ and I. • Let r : G × G × P(E) × P(S 1 ) → [0, ∞) be some function that codes the P(E) × P(S 1 ) dependent jump rate of the Markov jump process over each bond (a, b) ∈ Γ. The internal Hamiltonian Λ is given by • Let σ 2 : S 1 × P(G) → (0, ∞) be a bounded and strictly positive function. 
The cost function I : E × Θ → [0, ∞] is given by Aiming for the comparison principle, we note that classical methods do not apply. The functionals Λ are not coercive and do not satisfy (I). We show in "Appendix E" that they are also not pseudo-coercive as defined in [4]. The functional I is neither continuous nor bounded. Once can check e.g. that if θ is a finite combination of Dirac measures, then I(μ, θ) = ∞. We show in Sect. 5, however, that (i) to (vi) hold, implying the comparison principle for the Hamilton-Jacobi-Bellman equations. The verification of these properties is based in part on results from [23,32]. Summary and overview of the paper To summarize, our novel bootstrap procedure allows to treat Hamilton-Jacobi-Bellman equations where: • We assume that the cost function I satisfies some regularity conditions on its sub-levelsets, but allow I to be possibly unbounded and discontinuous. • We assume that Λ satisfies the continuity estimate uniformly for controls in compact sets, which in spirit extends the pseudo-coercivity estimate of [4]. This implies that Λ can be possibly non-coercive, non-pseudo-coercive and non-Lipschitz as exhibited in our example above. In particular, allowing discontinuity in I allows us to treat the comparison principle for examples like the one we considered above, which so far has been out of reach. We believe that the bootstrap procedure we introduce in this note has the potential to also apply to second order equations or equations in infinite dimensions. Of interest would be, for example, an extension of the results of [10] who work with continuous I. For clarity of the exposition, and the already numerous applications for this setting, we stick to the finite-dimensional firstorder case. We think that the key arguments that are used in the proof in Sect. 3 do not depend in a crucial way on this assumption. The paper is organized as follows. The main results are formulated in Sect. 2. In Sect. 3 we establish the comparison principle. In Sect. 4 we establish that a resolvent operator R(λ) in terms of an exponentially discounted control problem gives rise to viscosity solutions of the Hamilton-Jacobi-Bellman equation (HJB). Finally, in Sect. 5 we treat two examples including the one mentioned in the introduction. Main results In this section, we start with preliminaries in Sect. 2.1, which includes the definition of viscosity solutions and that of the comparison principle. We proceed in Sect. 2.2 with the main results: a comparison principle for the Hamilton-Jacobi-Bellman equation (HJB) based on variational Hamiltonians of the form (1.1), and the existence of viscosity solutions. In Sect. 2.3 we collect all assumptions that are needed for the main results. Preliminaries For a Polish space X we denote by C(X ) and C b (X ) the spaces of continuous and bounded continuous functions respectively. If X ⊆ R d then we denote by C ∞ c (X ) the space of smooth functions that vanish outside a compact set. We denote by C ∞ cc (X ) the set of smooth functions that are constant outside of a compact set in X , and by P(X ) the space of probability measures on X . We equip P(X ) with the weak topology induced by convergence of integrals against bounded continuous functions. Throughout the paper, E will be the set on which we base our Hamilton-Jacobi equations. We assume that E is a subset of R d that is a Polish space which is contained in the R d closure of its R d interior. This ensures that gradients of functions are determined by their values on E. 
Note that we do not necessarily assume that E is open. We assume that the space of controls Θ is Polish. We next introduce viscosity solutions for the Hamilton-Jacobi equation with Hamiltonians like H(x, p) of our introduction. Definition 2.1. (Viscosity solutions and comparison principle) Let Consider the Hamilton-Jacobi equation We say that u is a (viscosity) subsolution of equation ( We say that v is a (viscosity) supersolution of Eq. (2.1) if v is bounded from below, lower semi-continuous and if, for every f ∈ D(A)there exists a sequence We say that u is a (viscosity) solution of Eq. A similar simplification holds in the case of supersolutions. Remark 2.4. For an explanatory text on the notion of viscosity solutions and fields of applications, we refer to [8]. Remark 2.5. At present, we refrain from working with unbounded viscosity solutions as we use the upper bound on subsolutions and the lower bound on supersolutions in the proof of Theorem 2.6. We can, however, imagine that the methods presented in this paper can be generalized if u and v grow slower than the containment function Υ that will be defined below in Definition 2.13. Main results: comparison and existence Then R(λ)h is the unique viscosity solution to f − λHf = h. Remark 2.9. The form of the solution is typical, see for example Section III.2 in [2]. It is the value function obtained by an optimization problem with exponentially discounted cost. The difficulty of the proof of Theorem 2.8 lies in treating the irregular form of H. Assumptions In this section, we formulate and comment on the assumptions imposed on the Hamiltonians defined in the previous sections. The key assumptions were already mentioned in the sketch of the bootstrap method in the introduction. To these, we add minor additional assumptions on the regularity of Λ and I in Assumptions 2.14 and 2.15. Finally, Assumption 2.17 will imply that even if E has a boundary, no boundary conditions are necessary for the construction of the viscosity solution. We start with the continuity estimate for Λ, which was briefly discussed in (i) in the introduction. To that end, we first introduce a function that is used in the typical argument that doubles the number of variables. Definition 2.10. (Penalization function) We say that Ψ : and if x = y if and only if Ψ(x, y) = 0. We will apply the definition below for G = Λ. Definition 2.11. (Continuity estimate) Let Ψ be a penalization function and let G : Suppose that for each ε > 0, there is a sequence of positive real numbers α → ∞. For sake of readability, we suppress the dependence on ε in our notation. (2.6) Remark 2.12. In "Appendix C", we state a slightly more general continuity estimate on the basis of two penalization functions. A proof of a comparison principle on the basis of two penalization functions was given in [23]. The continuity estimate is indeed exactly the estimate that one would perform when proving the comparison principle for the Hamilton-Jacobi equation in terms of the internal Hamiltonian (disregarding the control θ). Typically, the control on (x ε,α , y ε,α ) that is assumed in (C1) and (C2) is obtained from choosing (x ε,α , y ε,α ) as optimizers in the doubling of variables procedure (see Lemma 3.5), and the control that is assumed in (C3) is obtained by using the viscosity sub-and supersolution properties in the proof of the comparison principle. The required restriction to compact sets in Lemma 3.5 is obtained by including in the test functions a containment function. 
• For every c ≥ 0, the set {x | Υ(x) ≤ c} is compact; To conclude, our assumption on Λ contains the continuity estimate, the controlled growth, the existence of a containment function and two regularity properties. Assumption 2.14. The function Λ : E × R d × Θ → R in the Hamiltonian (2.2) satisfies the following. For any x ∈ E and θ ∈ Θ, the map p → Λ(x, p, θ) is convex. We have Λ(x, 0, θ) = 0 for all x ∈ E and all θ ∈ Θ. (Λ3) There exists a containment function Υ : E → [0, ∞) for Λ in the sense of Definition 2.13. (Λ4) For every compact set K ⊆ E, there exist constants M, C 1 , C 2 ≥ 0 such that for all x ∈ K, p ∈ R d and all θ 1 , θ 2 ∈ Θ, we have (I5) For every compact set K ⊆ E and each M ≥ 0 the collection of functions {I(·, θ)} θ∈ΩK,M is equicontinuous. That is: for all ε > 0, there is a δ > 0 such that for all θ ∈ Ω K,M and x, y ∈ K such that d(x, y) ≤ δ we have To establish the existence of viscosity solutions, we will impose one additional assumption. For a general convex functional p → Φ(p) we denote Assumption 2.17. The set E is closed and convex. The map Λ is such that In Lemma 4.1 we will show that the assumption implies that ∂ p H(x, p) ⊆ T E (x), which in turn implies that the solutions of the differential inclusion in terms of ∂ p H(x, p) remain inside E. Motivated by our examples, we work with closed convex domains E. While in this context we can apply results from e.g. Deimling [11], we believe that similar results can be obtained in different contexts. is intuitively implied by the comparison principle for H and therefore, we expect it to hold in any setting for which Theorem 2.6 holds. Here, we argue in a simple case why this is to be expected. First of all, note that the comparison principle for H builds upon the maximum principle. . As x = 0 is a boundary point, we conclude that f (0) ≤ g (0). If indeed the maximum principle holds, we must have The comparison principle In this section, we establish Theorem 2.6. To establish the comparison principle for f − λHf = h we use the bootstrap method explained in the introduction. We start by a classical localization argument. We carry out the localization argument by absorbing the containment function Υ from Assumption 2.14 (Λ3) into the test functions. This leads to two new operators, H † and H ‡ that serve as an upper bound and a lower bound for the true H. We will then show the comparison principle for the Hamilton-Jacobi equation in terms of these two new operators. We therefore have to extend our notion of Hamilton-Jacobi equations and the comparison principle. This extension of the definition is standard, but we included it for completeness in the appendix as Definition A.1. This procedure allows us to clearly separate the reduction to compact sets on one hand, and the proof of the comparison principle on the basis of the bootstrap procedure on the other. Schematically, we will establish the following diagram: In this diagram, an arrow connecting an operator A with operator B with subscript 'sub' means that viscosity subsolutions of f − λAf = h are also viscosity subsolutions of f − λBf = h. Similarly for arrows with a subscript 'super'. We introduce the operators H † and H ‡ in Sect. 3.1. The arrows will be established in Sect. 3.2. Finally, we will establish the comparison principle for H † and H ‡ in Sect. 3.3. Combined these two results imply the comparison principle for H. Proof of Theorem 2.6. We start with the proof of (a). Let f ∈ D(H). 
Then Hf is continuous since by Proposition B.3 in "Appendix B", the Hamiltonian H is continuous. We proceed with the proof of (b). Fix h 1 , h 2 ∈ C b (E) and λ > 0. Let u 1 , u 2 be a viscosity sub-and supersolution to f − λHf = h 1 and f − λHf = h 2 respectively. By Lemma 3.3 proven in Sect. 3.2, u 1 and u 2 are a sub-and supersolution to Definition of auxiliary operators In this section, we repeat the definition of H, and introduce the operators H † and H ‡ . We proceed by introducing H † and H ‡ . Recall Assumption (Λ3) and the constant C Υ := sup θ sup x Λ(x, ∇Υ(x), θ) therein. Denote by C ∞ (E) the set of smooth functions on E that have a lower bound and by C ∞ u (E) the set of smooth functions on E that have an upper bound. Definition 3.2. (The operators H † and H and set Preliminary results The operator H is related to H † , H ‡ by the following Lemma. We only prove (a) of Lemma 3.3, as (b) can be carried out analogously. As the function [u − (1 − ε)f ] is bounded from above and εΥ has compact sublevel-sets, the sequence x n along which the first limit is attained can be assumed to lie in the compact set Denote by f ε the function on E defined by By construction f ε is smooth and constant outside of a compact set and thus lies in establishing (3.2). This concludes the proof. The comparison principle In this section, we prove the comparison principle for the operators H † and H ‡ . The proof uses a variant of a classical estimate that was proven e.g. in [8,Proposition 3.7] or in the present form in Proposition A.11 of [7]. Fix Additionally, for every ε > 0 we have that Let u 1 be a viscosity subsolution and u 2 be a viscosity supersolution of f − λH † f = h 1 and f − λH ‡ f = h 2 respectively. We prove Theorem 3.4 in five steps of which the first two are classical. We sketch the steps, before giving full proofs. Step 1: We prove that for ε > 0 and α > 0, there exist points x ε,α , y ε,α ∈ E satisfying the properties listed in Lemma 3.5 and momenta p 1 This step is solely based on the sub-and supersolution properties of u 1 , u 2 , the continuous differentiability of the penalization function Ψ(x, y), the containment function Υ, and convexity of p → H(x, p). We conclude it suffices to establish for each ε > 0 that Step 2 : We will show that there are controls θ ε,α such that As a consequence we have For establishing (3.7), it is sufficient to bound the differences in (3.9) by using Assumptions 2.14 (Λ5) and 2.15 (I5). Step 3: We verify the conditions to apply the continuity estimate, Assumption 2.14 (Λ5). Step 4 : We verify the conditions to apply the estimate on I, Assumption 2.15 (I5). R. C. Kraaij and M. C. Schlottke NoDEA Step 5 : Using the outcomes of Steps 3 and 4, we can apply the continuity estimate of Assumption 2.14 (Λ4) and the equi-continuity of Assumption 2.15 (I5) to estimate (3.9) for any ε: which establishes (3.7) and thus also the comparison principle. We proceed with the proofs of the first four steps, as the fifth step is immediate. Proof of Step 1: The proof of this first step is classical. We include it for completeness. For any ε > 0 and any α > 0, define the map Φ ε,α : Let ε > 0. By Lemma 3.5, there is a compact set K ε ⊆ E and there exist points and lim α→∞ αΨ(x ε,α , y ε,α ) = 0. (3.12) As in the proof of Proposition A.11 of [23], it follows that At this point, we want to use the sub-and supersolution properties of u 1 and u 2 . 
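The explicit formula for the map Φ_{ε,α} used in Step 1 is missing above. One standard choice, consistent with the roles of the penalization function Ψ and the containment function Υ and with the related proofs in [23], is the following sketch (not a verbatim quote of the paper):

```latex
\Phi_{\varepsilon,\alpha}(x,y)\;=\;
\frac{u_1(x)}{1-\varepsilon}\;-\;\frac{u_2(y)}{1+\varepsilon}
\;-\;\alpha\,\Psi(x,y)\;-\;\varepsilon\bigl(\Upsilon(x)+\Upsilon(y)\bigr).
% Boundedness of u_1, u_2 and compactness of the sublevel sets of Upsilon give optimizers
% (x_{eps,alpha}, y_{eps,alpha}); letting alpha -> infinity forces
% alpha * Psi(x_{eps,alpha}, y_{eps,alpha}) -> 0, which is the content of (3.12).
```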
Define the test functions ϕ ε,α Using (3.11), we find that u 1 − ϕ ε,α 1 attains its supremum at x = x ε,α , and thus Denote p 1 ε,α := α∇ x Ψ(x ε,α , y ε,α ). By our addition of the penalization (x − x ε,α ) 2 to the test function, the point x ε,α is in fact the unique optimizer, and we obtain from the subsolution inequality that With a similar argument for u 2 and ϕ ε,α 2 , we obtain by the supersolution inequality that With that, estimating further in (3.13) leads to Thus, (3.6) in Step 1 follows. Proof of Step 3, (i) and (ii) : We first establish (i). By the subsolution inequality (3.14), 21) and the lower bound (3.18) follows. We next establish (ii). By the supersolution inequality (3.15), we can estimate To perform this estimate, we first write To estimate the second term, we aim to apply the continuity estimate for the controls θ 0 ε,α . To do so, must establish that (x ε,α , y ε,α , θ 0 ε,α ) is fundamental for Λ with respect to Ψ. By Assumption 2.15 (I3), for each ε the set of controls θ 0 ε,α is relatively compact. Thus it suffices to establish inf These two estimates follow by Assumption 2.14 (Λ4) and (3.18) and (3.19). The continuity estimate of Assumption 2.14 (Λ5) yields that This means that there exists a subsequence, also denoted by α such that Thus, we can estimate (3.24) by (3.27) and (3.26). This implies that (3.22) holds for the chosen subsequences α and that for these the collection (x ε,α , y ε,α , θ ε,α ) is fundamental for Λ with respect to Ψ establishing Step 3. Existence of viscosity solutions In this section, we will prove Theorem 2.8. In other words, we show that for h ∈ C b (E) and λ > 0, the function R(λ)h given by is indeed a viscosity solution to f −λHf = h. To do so, we will use the methods of Chapter 8 of [19]. For this strategy, one needs to check three properties of R(λ): The operator R(λ) is a pseudo-resolvent: for all h ∈ C b (E) and 0 < α < β we have Thus, if R(λ) serves as a classical left-inverse to 1 − λH and is also a pseudoresolvent, then it is a viscosity right-inverse of (1 − λH). For a second proof of this statement, outside of the control theory context, see Proposition 3.4 of [24]. Establishing (c) is straightforward. The proof of (a) and (b) stems from two main properties of exponential random variable. Let τ λ be the measure on R + corresponding to the exponential random variable with mean λ −1 . • (a) is related to integration by parts: for bounded measurable functions z on R + , we have • (b) is related to a more involved integral property of exponential random variables. For 0 < α < β, we have Establishing [7]. Since we use the argument further below, we briefly recall it here. We need to show that for any compact set K ⊆ E, any finite time T > 0 and finite bound M ≥ 0, there exists a compact set K = K (K, T, M ) ⊆ E such that for any absolutely continuous path γ : for any 0 ≤ τ ≤ T , so that the compact set K := {z ∈ E : Υ(z) ≤ C} satisfies the claim. We proceed with the verification of Conditions 8.10 and 8.11 of [19]. By Proposition B.1, we have H(x, 0) = 0 and hence the application of H to constant 1 function 1 satisfies H1 = 0. Thus, Condition 8.10 is implied by Condition 8.11 (see Remark 8.12 (e) in [19]). We establish that Condition 8.11 is satisfied: for any function f ∈ D(H) = C ∞ cc (E) and x 0 ∈ E, there exists an absolutely continuous path x : [0, ∞) → E such that x(0) = x 0 and for any t ≥ 0, To do so, we solve the differential inclusioṅ where the subdifferential of H was defined in (2.9) on page 10. 
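The display referred to under (a) above ("integration by parts") is not reproduced. The identity presumably meant is the elementary Fubini computation for an exponential time τ_λ with mean λ, stated here for illustration:

```latex
\int_0^\infty \lambda^{-1}e^{-t/\lambda}\int_0^{t} z(s)\,\mathrm{d}s\,\mathrm{d}t
\;=\;\int_0^\infty e^{-t/\lambda}\,z(t)\,\mathrm{d}t
\;=\;\lambda\int_0^\infty \lambda^{-1}e^{-t/\lambda}\,z(t)\,\mathrm{d}t ,
% equivalently E[ \int_0^{\tau_\lambda} z(s) ds ] = \lambda E[ z(\tau_\lambda) ],
% which is what makes R(lambda) act as a classical left-inverse of (1 - lambda H).
```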
Since the addition of a constant to f does not change the gradient, we may assume without loss of generality that f has compact support. A general method to establish existence of differential inclusionsẋ ∈ F (x) is given by Lemma 5.1 of Deimling [11]. We have included this result as Lemma D.5, and corresponding preliminary definitions in "Appendix D". We use this result for F (x) := ∂ p H(x, ∇f (x)). To apply Lemma D.5, we need to verify that: (F1) F is upper hemi-continuous and F (x) is non-empty, closed, and convex for all x ∈ E. (For the definition of T E , see Definition 2.16 on page 10). Part (F1) follows from the properties of subdifferential sets of convex and continuous functionals. H is continuous in (x, p) and convex in p by Proposition B.1. Part (F3) is a consequence of Lemma 4.1, which yields that F (x) ⊆ T E (x). Part (F2) is in general not satisfied. To circumvent this problem, we use properties of H to establish a-priori bounds on the range of solutions. Step 1: Let T > 0, and assume that x(t) solves (4.4). We establish that there is some M such that (4.1) is satisfied. By (4.4) we obtain for all p ∈ R d , and as a consequencė x(t)∇f (x(t)) − H(x(t), ∇f (x(t))) ≥ L(x(t),ẋ(t)). Since f has compact support and H(y, 0) = 0 for any y ∈ E, we estimate H(y, ∇f (y)). R. C. Kraaij and M. C. Schlottke NoDEA By continuity of H the field F is bounded on compact sets, so the first term can be bounded by Therefore, for any T > 0, we obtain that the integral over the Lagrangian is bounded from above by M = M (T ), with H(y, ∇f (y)). From the first part of the, see the argument concluding after (4.2), we find that the solution x(t) remains in the compact set for all t ∈ [0, T ]. Step 2 : We prove that there exists a solution x(t) of (4.4) on [0, T ]. Using F , we define a new multi-valued vector-field F (z) that equals F (z) = ∂ p H(z, ∇f (z)) inside K , but equals {0} outside a neighborhood of K. This can e.g. be achieved by multiplying with a smooth cut-off function g K : E → [0, 1] that is equal to one on K and zero outside of a neighborhood of K . By the estimate established in step 1 and the fact that Υ(γ(t)) ≤ C for any 0 ≤ t ≤ T , it follows from the argument as shown above in (4.2) that the solution y stays in K up to time T . Since on K , we have F = F , this implies that setting x = y| [0,T ] , we obtain a solution x(t) of (4.4) on the time interval [0, T ]. Proof. Fix x ∈ E and p 0 ∈ R d . We aim to prove that The result will follow from the following claim, where ch denotes the convex hull. Having established this claim, the result follows from Assumption 2.17 and the fact that T E (x) is a convex set by Lemma D.4. We start with the proof of (4.7). For this we will use [22,Theorem D.4.4.2]. To study the subdifferential of the function ∂ p H(x, p 0 ), it suffices to restrict the domain of the map p → H(x, p) to the closed ball B 1 (p 0 ) around p 0 with radius 1. To apply [22,Theorem D.4.4.2] for this restricted map, first recall that Λ is continuous by Assumption 2.14 (Λ1) and that I is lower semi-continuous by Assumption 2.15 (I1). Secondly, we need to find a compact set Ω ⊆ Θ such that we can restrict the supremum (for any p ∈ B 1 (p 0 )) in (4.6) to Ω: In particular, we show that we can take for Ω a sublevelset of I(x, ·) which is compact by Assumption 2.15 (I3). Let θ 0 x be the control such that I(x, θ 0 x ) = 0, which exists due to Assumption 2.15 (I2). 
Let M * be such that (with the constants M, C 1 , C 2 as in Assumption 2.14 (Λ4)) Note that M * is finite as p → Λ(x, p, θ 0 x ) is continuous on the closed unit ball B 1 (p 0 ). Then we find, due to Assumption 2.14 (Λ4), that if θ satisfies I(x, θ) > M * , then for any p ∈ B 1 (p 0 ) we have where ch denotes the convex hull. Now (4.7) follows by noting that I(x, θ) does not depend on p. Examples of Hamiltonians In The purpose of this section is to showcase that the method introduced in this paper is versatile enough to capture interesting examples that could not be treated before. First, we consider in Proposition 5.1 Hamiltonians that one encounters in the large deviation analysis of two-scale systems as studied in [6] and [27] when considering a diffusion process coupled to a fast jump process. Second, we consider in Proposition 5.7 the example treated in our introduction that arises from models of mean-field interacting particles that are coupled to fast external variables. This example will be further analyzed in [26]. with non-negative rates r : Suppose that the cost function I satisfies the assumptions of Proposition 5.9 below and the function Λ satisfies the assumptions of Proposition 5.11 below. Then Theorems 2.6 and 2.8 apply to the Hamiltonian (5.1). Proof. To apply Theorems 2.6 and 2.8, we need to verify Assumptions 2.14, 2.15 and 2.17. Assumption 2.14 follows from Proposition 5.11, Assumption 2.15 follows from Proposition 5.9 and Assumption 2.17 is satisfied as E = R d . Remark 5.2. We assume uniform ellipticity of a, which we use to establish (Λ4). This leaves our comparison principle slightly lacking to prove a large deviation principle as general as in [5]. In contrast, we do not need a Lipschitz condition on r in terms of x. While we believe that the conditions on a can be relaxed by performing a finer analysis of the estimates in terms of a, we do not pursue this relaxation here. Remark 5.3. The cost function is the large deviation rate function for the occupation time measures of a jump process taking values in a finite set {1, . . . , J}, see e.g. [13,14]. Remark 5.4. In the context with a = 0 and I as general as Assumption 2.15, we improve upon the results of Chapter III of [2] by allowing a more general class of functionals I, that are e.g. discontinuous as for example in Proposition 5.7 below. In [10] the authors consider a second order Hamilton-Jacobi-Bellman equation, with the quadratic part replaced by a second order part. They work, however, with continuous cost functional I. An extension of [10] that allows for a similar flexibility in the choice of I would therefore be of interest. Remark 5.5. Under irreducibility conditions on the rates, as we shall assume below in Proposition 5.9, by [15] the Hamiltonian H(x, p) is the principal eigenvalue of the matrix A x,p ∈ Mat J×J (R) given by a(x, 1)p, p + b(x, 1), p , . . . , a(x, J) Next we consider Hamiltonians arising in the context of weakly interacting jump processes on a collection of states {1, . . . , q} as described in our introduction. We analyze and motivate this example in more detail in our companion paper [26]. We give the terminology as needed for the results in this paper. The empirical measure of the interacting processes takes its values in the set of measures P ({1, . . . , q}). The dynamics arises from mass moving over the bonds (a, b) ∈ Γ = (i, j) ∈ {1, . . . , q} 2 | i = j . As the number of processes is send to infinity, there is a type of limiting result for the total mass moving over the bonds. 
We will denote by v(a, b, μ, θ) the total mass that moves from a to b if the empirical measure equals μ and the control is given by θ. We will make the following assumption on the kernel v. (a, b, μ, θ) is either identically equal to zero or satisfies the following two properties: exists a decomposition v(a, b, μ, θ) = v † (a, b, μ(a))v ‡ (a, b, μ, θ) such that v † is increasing in the third coordinate and such that v ‡ (a, b, ·, ·) is continuous and satisfies v ‡ (a, b, μ, θ) > 0. A typical example of a proper kernel is given by v(a, b, μ, θ) = μ(a)r(a, b, θ) where L x is a second-order elliptic operator locally of the form Proof. To apply Theorems 2.6 and 2.8, we need to verify Assumptions 2.14, 2.15 and 2.17. Assumption 2.14 follows from Proposition 5.13 and Assumption 2.15 follows from Proposition 5.10. We verify Assumption 2.17 in Proposition 5.19. Remark 5.8. The cost function stems from occupation-time large deviations of a drift-diffusion process on a compact manifold, see e.g. [15,32]. We expect Proposition 5.7 to extend also to non-compact spaces F , but we feel this technical extension is better suited for a separate paper. Suppose that the rates r : {1, . . . , J} 2 × E → R + are continuous as a function on E and moreover satisfy the following: each pair (i, j), we either have r(i, j, ·) ≡ 0 or for each compact set K ⊆ E, it holds that Verifying assumptions for cost functions I Then the Donsker-Varadhan functional I : E × Θ → R + defined by Hence I is uniformly bounded on K × Θ, and (I4) follows with U x the interior of K. R. C. Kraaij and M. C. Schlottke NoDEA (I5) : Let d be some metric that metrizes the topology of E. We will prove that for any compact set K ⊆ E and ε > 0 there is some δ > 0 such that for all x, y ∈ K with d(x, y) ≤ δ and for all θ ∈ P(F ), we have Let x, y ∈ K. By continuity of the rates the I(x, ·) are uniformly bounded for x ∈ K: For any n ∈ N, there exists w n ∈ R J such that By reorganizing, we find for all bonds (a, b) the bound r(a, b, x). Thereby, evaluating in I(y, θ) the same vector w n to estimate the supremum, We take n → ∞ and use that the rates x → r(a, b, x) are continuous, and hence uniformly continuous on compact sets, to obtain (5.3). the second-order elliptic operator that in local coordinates is given by where a x is a positive definite matrix and b x is a vector field having smooth entries a ij x and b i x on F . Suppose that for all i, j the maps is the minimizer of I(x, ·), that is I(x, θ 0 x ) = 0. This follows by considering the Hille-Yosida approximation L ε x of L x and using the same argument (using w = log u) as in Proposition 5.9 for these approximations. For any u > 0 and ε > 0, Sending ε → 0 and then using (5.5) gives (I2). (I3): Since Θ = P(F ) is compact, any closed subset of Θ is compact. Hence any union of sub-level sets of I(x, ·) is relatively compact in Θ. (I4): Fix x ∈ E and M ≥ 0. Let θ ∈ Θ {x},M . As I(x, θ) ≤ M , we find by [31] that the density dθ dz exists, where dz denotes the Riemannian volume measure. As the dependence is continuous in y, we can find a open set U ⊆ E of x such that there are constants c 1 , c 2 , c 3 , c 4 , c 1 , c 3 being positive, that do not depend on θ, such that for any y ∈ U : From (5.7), (I4) immediately follows. (I5): Since the coefficients a x and b x of the operator L x depend continuously on x, assumption (I5) follows from Theorem 2 of [32]. 
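The defining expression for the Donsker–Varadhan functional is missing from the extracted text. Its standard form in the finite-state jump setting (with the diffusion analogue obtained by replacing the sum by an integral against θ, as in the argument "using w = log u" above) is the following; this is the textbook form, not a verbatim quote of the paper:

```latex
I(x,\theta)\;=\;-\inf_{u>0}\;\sum_{a=1}^{J}\theta(a)\,\frac{(L_x u)(a)}{u(a)},
\qquad
(L_x u)(a)\;=\;\sum_{b\neq a} r(a,b,x)\,\bigl[u(b)-u(a)\bigr],
% and, for the diffusion example with generator L_x on the compact manifold F,
% I(x,\theta) = - inf_{u > 0, u in D(L_x)} \int_F (L_x u / u) d\theta .
```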
Verifying assumptions for functions Λ Furthermore, there exists a constant L > 0 such that for all x, y ∈ E and z ∈ F , and suppose that the functions b are one-sided Lipschitz. Then Assumption 2.14 holds. Remark 5.12. The above proposition is slightly more general than what we consider in Proposition 5.1, as there we assume that F = {1, . . . , J} is a finite set. Proof. Continuity of Λ is a consequence of the fact that is the pairing of a continuous and bounded function V (x, p, ·) with the measure θ ∈ P(F ). (Λ2): Let x ∈ E and θ ∈ P(F ). Convexity of p → Λ(x, p, θ) follows since a(x, z) is positive definite by assumption. If p 0 = 0, then evidently Λ(x, p 0 , θ) = 0. (Λ3): We show that the map Υ : E → R defined by Υ(x) := 1 2 log 1 + |x| 2 is a containment function for Λ. For any x ∈ E and θ ∈ P(F ), we have , and the boundedness condition follows with the constant (Λ4): Let K ⊆ E be compact. We have to show that there exist constants M, C 1 , C 2 ≥ 0 such that for all x ∈ K, p ∈ R d and all θ 1 , θ 2 ∈ P(F ), we have Fix θ 1 , θ 2 ∈ P(F ). We have for x ∈ K a(x, z)p, p dθ 1 (z) ≤ a K,max a K,min a(x, z)p, p dθ 2 (z) In addition, as a K,min > 0 and b K,max < ∞ we have for any C > 0 and sufficiently large |p| that Thus, for sufficiently large |p| (depending on C) we have . We proceed with the example in which Λ depends on p through exponential functions (Proposition 5.7). Let q ∈ N be an integer and Γ : = (a, b) a, b ∈ {1, . . . , q}, a = b be the set of oriented edges in {1, . . . , q}. Proposition 5.13. (Exponential function Λ) Let E ⊆ R d be the embedding of E = P({1, . . . , q}) × (R + ) |Γ| and Θ be a topological space. Suppose that Λ is given by where v is a proper kernel in the sense of Definition 5.6. Suppose in addition that there is a constant C > 0 such that for all (a, b) ∈ Γ such that v(a, b, ·, ·) = 0 we have Then Λ satisfies Assumption 2.14. Verifying the continuity estimate With the exception of the verification of the continuity estimate in Assumption 2.14 the verification in Sect. 5.2 is straightforward. On the other hand, the continuity estimate is an extension of the comparison principle, and is therefore more complex. We verify the continuity estimate in three contexts, which illustrates that the continuity estimate follows from essentially the same arguments as the standard comparison principle. We will do this for: • Coercive Hamiltonians This list is not meant to be an exhaustive list, but to illustrate that the continuity estimate is a sensible extension of the comparison principle, which is satisfied in a wide range of contexts. In what follows, E ⊆ R d is a Polish subset and Θ a topological space. Then the continuity estimate holds for Λ with respect to any penalization function Ψ. For the empirical measure of a collection of independent processes one obtains maps Λ that are neither uniformly coercive nor Lipschitz. Also in this context one can establish the continuity estimate. We treat a simple 1d case and then state a more general version for which we refer to [23]. We have Now note that y ε,α − x ε,α is positive if and only if e pε,α − 1 is negative so that the first term is bounded above by 0. With a similar argument the second term is bounded above by 0. Thus the continuity estimate is satisfied. where v is a proper kernel. Then the continuity estimate holds for Λ with respect to penalization functions (see Sect. C) Here we denote r + = r ∨ 0 for r ∈ R. 
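The displayed estimate in the proof of (Λ3) is missing. The elementary computation behind it is sketched below; the quadratic-plus-linear form of Λ and the growth conditions on a and b are assumptions made here for illustration, since the exact hypotheses sit in the garbled part of the text.

```latex
\nabla\Upsilon(x)\;=\;\frac{x}{1+|x|^{2}},\qquad
|\nabla\Upsilon(x)|\;\le\;\tfrac12\quad\text{for all }x,
% so that, for Lambda(x,p,theta) built from <a(x,z)p,p> + <b(x,z),p>,
\sup_{\theta}\,\Lambda\bigl(x,\nabla\Upsilon(x),\theta\bigr)
\;\le\;\sup_{z}\Bigl[\,\|a(x,z)\|\,\frac{|x|^{2}}{(1+|x|^{2})^{2}}
+|b(x,z)|\,\frac{|x|}{1+|x|^{2}}\Bigr]\;<\;\infty ,
% provided ||a(x,z)|| grows at most quadratically and |b(x,z)| at most linearly in |x|.
```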
In this context, one can use coercivity like in Proposition 5.15 in combination with directional properties used in the proof of Proposition 5.17 above. To be more specific: the proof of this proposition can be carried out exactly as the proof of Theorem 3.8 of [23]: namely at any point a converging subsequence is constructed, the variables α need to be chosen such that we also get convergence of the measures θ ε,α in P(F ). Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons. org/licenses/by/4.0/. Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. A. Viscosity solutions In Sect. 3 we work with a pair of Hamilton-Jacobi equations instead of a single Hamilton-Jacobi equation. To this end, we need to extend the notion of a viscosity solution and that of the comparison principle of Sect. 2.1. We say that u is a (viscosity) subsolution of Eq. (A.1) if u is bounded, upper semi-continuous and if for all (f, g) ∈ A 1 there exists a sequence x n ∈ E such that We say that v is a (viscosity) supersolution of Eq. (A.2) if v is bounded, lower semi-continuous and if for all (f, g) ∈ A 2 there exists a sequence x n ∈ E such that As before, if test functions have compact levelsets, the existence of a sequences can be replaced by the existence of a point. B. Regularity of the Hamiltonian In this section, we establish continuity, convexity and the existence of a containment function for the Hamiltonian H of (2.2). We repeat its definition for convenience: To prove that H is continuous, we use Assumption 2.15. What we truly need, however, is that I Gamma converges as a function of x. We establish this result first. Before we start with the proof, we give a remark on the generality of its statement and on the assumption that Θ is Polish. Remark B.4. The proof of upper semi-continuity of H works in general, using continuity properties of Λ, lower semi-continuity of (x, θ) → I(x, θ) and the compact sublevel sets of I(x, ·). To establish lower semi-continuity, we need that the functionals I Gamma converge as a function of x. This was established in Proposition B.2. Remark B.5. In the lemma we use a sequential characterization of upper hemicontinuity which holds if Θ is Polish. This is inspired by the natural formulation of Gamma convergence in terms of sequences. An extension of our results to spaces Θ beyond the Polish context should be possible to Hausdorff Θ that are k-spaces in which all compact sets are metrizable. φ(x, p) is non-empty as θ 0 x ∈ φ(x, p) and it is compact due to Assumption 2.15 (I3). We are left to show that φ is upper hemi-continuous. We proceed with proving lower semi-continuity of H. 
Suppose that (x n , p n ) → (x, p), we prove that lim inf n H(x n , p n ) ≥ H(x, p). Let θ be the measure such that H(x, p) = Λ(x, p, θ) − I(x, θ). We have • By Proposition B.2 there are θ n such that θ n → θ and lim sup n I(x n , θ n ) ≤ I(x, θ). • Λ(x n , p n , θ n ) converges to Λ(x, p, θ) by Assumption (Λ1). establishing that H is lower semi-continuous. The Lagrangian L is obtained as the supremum over continuous functions. This implies L is lower semi-continuous. C. A more general continuity estimate In classical literature, the comparison principle for the Hamilton-Jacobi equation f − λHf = h is often proven using a squared distance as a penalization function. This often works well due to the quadratic structure of the Hamiltonian. In different contexts, e.g. for the Hamiltonians arising from the large deviations of jump processes, this is not natural, see the issues arising in the proofs in [16,23]. In absence of a general method to solve these issues, ad-hoc procedures can be introduced. One such ad-hoc procedure introduced in [23] is to work with multiple penalization functions (in that context {Ψ 1 , Ψ 2 }) that explore different parts of the state-space. Any argument that has been carried out in the main text can be carried out with the generalization of the continuity estimate below. In other words, the operator G evaluated in the proper momenta is eventually bounded from above and from below. D. Differential inclusions To establish that Condition 8.11 of [19] is satisfied in the proof of Theorem 2.8, we need to solve a differential inclusion. The following appendix is based on [11,28] and is a copy of the one in [25]. We state it for completeness. Let D ⊆ R d be a non-empty set. A multi-valued mapping F : D → 2 R d \ {∅} is a map that assigns to every x ∈ D a set F (x) ⊆ R d , F (x) = ∅. If we assume sufficient regularity on the multi-valued mapping F , we can ensure the existence of a solution to differential inclusions that remain inside D. Definition D.2. Let D ⊆ R d be a non-empty set and let F : D → 2 R d \ {∅} be a multi-valued mapping. (i) We say that F is closed, compact or convex valued if each set F (x), x ∈ D is closed, compact or convex, respectively. Then the differential inclusionγ ∈ F (γ) has a solution on R + for every starting point x ∈ D. Note that the comparison principle for f −λΛf = h does in fact hold. This is due to the fact that one only needs to establish that lim inf α→∞ Λ(x α , p α ) − Λ(y α , p α ) ≤ 0 for appropriately chosen x α , y α , p α without absolute value signs, see Proposition 5.17. The removal of absolute value signs is essential: the comparison principle for f − Λf = h for Λ(x, p) = x [e p − 1] fails. This is related to the statement that an associated large deviation principle fails, see Example E of [33].
Atomic structure calculations of helium with correlated exponential functions The technique of quantum electrodynamics (QED) calculations of energy levels in the helium atom is reviewed. The calculations start with the solution of the Schr\"odinger equation and account for relativistic and QED effects by perturbation expansion in the fine-structure constant $\alpha$. The nonrelativistic wave function is represented as a linear combination of basis functions depending on all three interparticle radial distances, $r_1$, $r_2$ and $r = |\vec{r}_1-\vec{r}_2|$. The choice of the exponential basis functions of the form $\exp(-\alpha r_1 -\beta r_2 -\gamma r)$ allows us to construct an accurate and compact representation of the nonrelativistic wave function and to efficiently compute matrix elements of numerous singular operators representing relativistic and QED effects. Calculations of the leading QED effects of order $\alpha^5m$ (where $m$ is the electron mass) are complemented with the systematic treatment of higher-order $\alpha^6m$ and $\alpha^7m$ QED effects. I. INTRODUCTION The helium atom is the simplest many-body atomic system in the nature. Since the advent of quantum mechanics, helium was used as a benchmark case for developing and testing various calculational approaches of many-body atomic theory. Today, the nonrelativistic energy of various helium electronic states can be computed with an essentially arbitrary numerical accuracy [1,2]. The same holds also for the leading-order relativistic correction. Subsequently, the quantum electrodynamics (QED) effects in the atomic structure of helium can be clearly identified and studied by comparison of theoretical predictions with the large body of available experimental data. Experimental investigations of helium spectra have progressed rapidly over the years, recently reaching the precision of a few tens of Hertz [3]. For light atomic systems such as helium, relativistic and QED corrections to energy levels can be systematically accounted for by the perturbation expansion in the fine-structure constant α. The starting point of the expansion is the nonrelativistic energy of order α 2 m (= 2 Ry, where m is the electron mass and Ry is the Rydberg energy). The leading relativistic correction is of order α 4 m, whereas QED effects enter first in order α 5 m. A large body of work has been done in recent years to calculate QED effects in helium spectra. Extensive calculations of helium energies were accomplished by Gordon Drake et al. [4][5][6]. Their calculations are complete through order α 5 m and approximately include some higher-order QED effects. The next-order α 6 m QED correction was for a long time known only for the fine-structure intervals [7,8]. For individual energy levels, these effects were derived and calculated numerically by one of us (K.P.) [9][10][11]. The higher-order α 7 m QED effects were evaluated by us first for the fine structure [12][13][14] and just recently for the triplet n = 2 states of helium [15][16][17]. The purpose of this article is to review and systematize the technique of calculations of the helium atomic structure, developed in numerous investigations over the last three decades. The starting point of the calculations is the Schrödinger equation, which is solved variationally after expanding the wave function into a finite set of explicitly-correlated basis functions depending on all three interparticle radial distances. 
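The displayed cusp-condition formula did not survive the text extraction. In its standard (Kato) form, averaged over angles, it reads as follows, with λ as quoted in the text:

```latex
\frac{1}{\langle\psi\rangle}\,
\left.\frac{\partial\langle\psi\rangle}{\partial r}\right|_{r=0}\;=\;\lambda,
\qquad
\lambda=\tfrac12\ \ (\text{electron--electron}),\qquad
\lambda=-Z\ \ (\text{electron--nucleus}),
% equivalently <psi(r)> ~ psi(0) * (1 + lambda r + ...) near the coalescence point r = 0.
```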
It has been known for a long time [18] that inclusion of the interelectronic distance explicitly into the basis set is crucially important for constructing an accurate representation of the two-electron wave function. Moreover, it has also long been recognized [19] that an accurate wave-function representation should satisfy the so-called cusp conditions at the two-particle coalescence points | r i − r j | = 0. The cusp condition is expressed [20], after averaging over angles and for the singlet states, as where r is an interparticle distance and the parameter λ = 1/2 for the electron-electron and λ = −Z for the electron-nucleus cusp (where Z is the nuclear charge number). The two most succesful basis sets used in the literature for high-precision calculations of the atomic structure of helium are: the Hylleraas basis set adopted by Drake et al. [4][5][6] and the exponential basis set put forward by Korobov [21,22] and used in numerous calculations of our group. Both these basis sets are explicitly correlated and are able to reproduce the cusp conditions with great accuracy. In the present work we will concentrate on the exponential basis set, because only this basis has been successfully used in calculations of higher-order QED effects so far. II. WAVE FUNCTIONS The spatial wave function ψ LML with a specified total angular momentum L and its momentum projection M L for a two-electron atom is standardly represented as where f l1l2 is the radial part of the wave function, r = r 1 − r 2 , andr = r/r. Furthermore, Y l1l2 LML are the bipolar spherical harmonics, where l 1 m 1 l 2 m 2 |lm is the Clebsch-Gordan coefficient and Y lm are the spherical harmonics. We stress that the radial part of the wave function is assumed to be explicitly correlated, i.e., the function f depends on all interparticle distances, r 1 , r 2 , and r. In this case, the sum over l 1 and l 2 in Eq. (2) is restricted [23] by two conditions (A) : l 1 + l 2 = L , or (B) : l 1 + l 2 = L + 1 , (4) which lead to wave functions of different parities (−1) l1+l2 . The bipolar spherical harmonics are usually handled in the spherical coordinates using the apparatus of Racah algebra, see, e.g., Ref. [24]. We find, however, that calculations with explicitly correlated functions are more conveniently performed in Cartesian coordinates. One of the reasons is that the action of numerous momentum operators encountered in calculations is most easily evaluated in the Cartesian coordinate system. The corresponding calculations can easily be automatized and performed with the help of systems of symbolic computations. For this purpose, the expansion of the wave function is more conveniently made in terms of the bipolar solid harmonics. In order to define them, we start with the solid harmonics, where the normalization coefficient A L is fixed below. The solid harmonics obey the following summation rule, where (r i r j r k . . .) (L) is a traceless and symmetric tensor of the order L constructed from components of the vector r with Cartesian indices i, j, k . . .. and the summation over these Cartesian indices is implicit. The last equation determines A L , which is related to the coefficient of x L in the Legendre polynomial P L (x), specifically, We now define the bipolar solid harmonics Y l1l2 LML as where R ≡ r 1 × r 2 , ξ is an arbitrary vector, and the righthand-side of the above equations does not depend on ξ after the L-fold differentiation. 
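Because the summation rule for the solid harmonics is phrased in terms of symmetric traceless Cartesian tensors, a short numerical spot-check of the rank-2 case may be useful. The tensor convention below (trace subtracted with a factor 1/3) and the overall constant in the Legendre-polynomial identity are the usual ones and are assumed here rather than copied from the paper.

```python
import numpy as np

def traceless_rank2(r):
    """Symmetric traceless rank-2 tensor (r_i r_j)^(2) = r_i r_j - delta_ij |r|^2 / 3."""
    return np.outer(r, r) - np.eye(3) * np.dot(r, r) / 3.0

rng = np.random.default_rng(0)
r, s = rng.normal(size=3), rng.normal(size=3)
Tr2, Ts2 = traceless_rank2(r), traceless_rank2(s)

assert np.isclose(np.trace(Tr2), 0.0)          # traceless
assert np.allclose(Tr2, Tr2.T)                 # symmetric

# Contracting the two tensors reproduces the L = 2 Legendre polynomial up to a constant:
# sum_ij (r_i r_j)^(2) (s_i s_j)^(2) = (2/3) |r|^2 |s|^2 P_2(cos gamma),  P_2(x) = (3x^2 - 1)/2,
# which is the rank-2 instance of the solid-harmonic summation rule.
cosg = np.dot(r, s) / (np.linalg.norm(r) * np.linalg.norm(s))
lhs = np.sum(Tr2 * Ts2)
rhs = (2.0 / 3.0) * np.dot(r, r) * np.dot(s, s) * (3.0 * cosg**2 - 1.0) / 2.0
assert np.isclose(lhs, rhs)
print("rank-2 traceless-tensor identities verified")
```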
The bipolar solid harmonics are proportional to the corresponding bipolar spherical harmonics with a prefactor that does not depend on angles, so their angular parts are exactly the same. Now, using Eq. (6), we obtain that the bipolar solid harmonics Y l1l2 LML obey the analogous summation rule where Y l1l2 i1..iL are the symmetric and traceless tensors of rank L with Cartesian indices i 1 . . . i L , The summation formula (10) shows that the matrix elements with the spatial wave function can be represented in terms of matrix elements with the Cartesian wave function as follows where Q is an arbitrary spatial operator. Eq. (14) is the Cartesian representation of the spatial wave function used in the present work. We now present explicit formulas for the Cartesian wave functions for different values of the angular momentum and parity. For L = 0 we have l 1 = l 2 = 0 and only even parity. The wave function is just a scalar, where the upper sign in ± corresponds to the singlet and the lower sign to the triplet state. For L = 1, we have (l 1 , l 2 ) = (0, 1), (1, 0) for the odd parity and (l 1 , l 2 ) = (1, 1) for the even parity. The corresponding wave functions are vectors, The L = 2 odd and even wave functions are second-rank tensors, where we suppressed arguments of the radial functions F and G and the elementary second-rank tensors are defined as Explicit expressions for the L = 3 and L = 4 functions can be found in Appendix A of Ref. [25]. The spatial wave functions are normalized by III. EVALUATION OF MATRIX ELEMENTS The spin-dependent wave function with definite values of the total momentum J, its projection M , the angular momentum L, and the spin S is given by where M S is the spin projection, χ SMS is the spin function, and ψ LML is the spatial wave function. As described in the previous section, in our calculations we evaluate all matrix elements in Cartesian coordinates. The spatial wave function with the angular momentum L is represented in the form (14); namely, as a traceless tensor of rank L symmetric in all Cartesian indices carried by r 1 , r 2 , and r 1 × r 2 . In addition, it is assumed that the wave function has a definite symmetry with respect to r 1 ↔ r 2 . The norm and the expectation value of any spinindependent operator are immediately reduced to the spatial matrix element, where the summation over Cartesian indices is implicit. This equation is sufficient for determining the nonrelativistic wave function and the nonrelativistic energy. The relativistic and QED corrections involve operators depending on the electron spin. The expectation value of an arbitrary operator Q on a state with definite J, for the singlet S = 0 states, is expressed as where I is the unity matrix, J = L + S, S = ( σ 1 + σ 2 )/2, and the trace is performed in the 4-dimensional space of two spins. Further evaluation of the matrix element proceeds by performing the trace of the operators in the spin space, with help of the following trace rules, In the case of a spin-independent operator Q, Eq. (25) is reduced to Eq. (24). For the triplet states one considers three values of J = L−1, L, L+1. The expectation value then takes the form For the spin-independent operators, this equation is equivalent to Eq. (24). The coefficients A JL and B JL are obtained by considering two particular cases, (2) . The left-hand-side of Eq. (31) is then immediately expressed in terms of J and L, whereas the right-hand-side is evaluated by using This consideration gives for J = L+1, L, and L−1, correspondingly. 
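The list of spin-trace rules is not reproduced above. As an illustration of the kind of identities needed when reducing Eqs. (25) and (31) to spatial integrals, the short script below verifies a few representative traces in the 4-dimensional two-spin space; it is a numerical check of standard identities, not the paper's actual rule set.

```python
import numpy as np

# Pauli matrices; sigma1 = sigma x 1 and sigma2 = 1 x sigma act on electrons 1 and 2.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
sigma1 = [np.kron(s, I2) for s in (sx, sy, sz)]
sigma2 = [np.kron(I2, s) for s in (sx, sy, sz)]
S = [(a + b) / 2 for a, b in zip(sigma1, sigma2)]       # total spin S = (sigma1 + sigma2)/2

tr = lambda M: np.trace(M).real
s1s2 = sum(a @ b for a, b in zip(sigma1, sigma2))       # sigma1 . sigma2

assert np.isclose(tr(np.eye(4)), 4)                          # Tr 1 = 4
assert all(np.isclose(tr(m), 0) for m in sigma1 + sigma2)    # Tr sigma_a^i = 0
assert np.isclose(tr(s1s2), 0)                               # Tr (sigma1 . sigma2) = 0
assert np.isclose(tr(s1s2 @ s1s2), 12)                       # Tr (sigma1 . sigma2)^2 = 12
for i in range(3):
    for j in range(3):
        assert np.isclose(tr(S[i] @ S[j]), 2.0 if i == j else 0.0)  # Tr S^i S^j = 2 delta_ij
print("two-spin trace identities verified")
```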
These are all the formulas needed to factorize out the spin dependence of matrix elements and to express them in terms of spatial integrals. The expectation values of an arbitrary operator Q for the singlet and triplet wave functions are obtained from Eqs. (25) and (31). We now write explicitly the corresponding expressions. The results for the S states are For the P states, we obtain The results for the D states are IV. INTEGRALS WITH EXPONENTIAL BASIS FUNCTIONS The radial parts of the wave function (14) are represented as linear combinations of the exponential basis functions, where c k are linear coefficients, N is the size of the basis, and α k , β k , and γ k are nonlinear parameters obtained in the process of the basis optimization. One of the great features of the exponential basis functions is that the evaluation of radial integrals is very simple. A calculation of radial matrix elements of various operators with wave functions of the form (46) is reduced to evaluation of the integrals I(i, j, k), For matrix elements of the nonrelativistic Hamiltonian, only integrals with non-negative values of i, j, and k are required. All such integrals can be obtained by differentiation of the master integral I(0, 0, 0) over the nonlinear parameters, for n i , n j , n k ≥ 0. The expression for the master integral I(0, 0, 0) is very simple, Matrix elements of relativistic corrections involve integrals with additional inverse powers of r 1 , r 2 , and r, whose evaluation requires two additional master integrals. Their expression can be obtained by integrating Eq. (49) with respect to the corresponding nonlinear pa- where Li 2 is the dilogarithm function [26]. Other integrals for relativistic corrections are obtained by differentiating the above formulas for master integrals. We note that Eq. (50) contains a spurious singularity at α = β. The zero in the denominator is compensated by the vanishing logarithm function and thus is not a real singularity but can lead to numerical instabilities. In order to transform Eq. (50) to an explicitly regular form, we introduce a regularized logarithm function ln 1 (x) by separating out the first term of the Taylor expansion, Introducing ln 1 (x) with x = (α − β)/(β + γ) in Eq. (50), we obtain a regular representation of this formula. In practical calculations we encounter more spurious singularities of this kind. They are eliminated with the help of functions ln n (x), which are introduced analogously to ln 1 (x) by separating n first terms of the Taylor expansion of ln(1 + x). Matrix elements of QED corrections involve several integrals with large negative powers of radial distances, like 1/r 3 , 1/r 4 , and even 1/r 5 . Such integrals are singular and need proper definitions. With the exponential functions, it is possible to obtain simple and numerically stable representations for such integrals. The corresponding procedure is described in Appendix A. Numerical results for basic singular integrals for the 2 3 S and 2 3 P states of helium are presented in Table I. In our calculations of the α 7 m QED effects [27], integrals with ln r were encountered for the first time, where γ E is the Euler gamma constant. Such integrals are evaluated with the help of the following set of master integrals [27]: where Li 3 is the trilogarithm function [26]. Eq. (56) is valid for α > β. The corresponding result for α < β is obtained by the analytic continuation with help of the following identities [26] The result for the case of α = β is straightforwardly obtained from Eq. 
(56). V. NONRELATIVISTIC ENERGY AND WAVE FUNCTION The nonrelativistic Hamiltonian of the helium atom for the infinitely heavy nucleus is where p a = −i ∇ a is the momentum operator of the electron a and Z is the nuclear charge number (Z = 2 for helium). The Schrödinger equation is A direct solution of the Schrödinger equation is standardly substituted by the problem of finding the minimum or, generally, a stationary point of the variational functional The variational eigenvalues obtained from this functional are the upper bounds to the true eigenvalues, and the corresponding eigenvectors provide the linear coefficients c k of the basis-set expansion (46). It is important that the variational principle works equally well for the ground and for the excited states. The finite nuclear mass correction to the nonrelativistic energy is induced by the nuclear kinetic energy operator where M is the nuclear mass and P = − p 1 − p 2 is the nuclear momentum. There are two ways to incorporate the nuclear mass effect to the nonrelativistic energy: (i) to include the operator δ M H into the nonrelativistic Hamiltonian H 0 and solve the nuclear-mass dependent Schrödinger equation and (ii) to solve the Schrödinger equation for the infinitely heavy nucleus and to account for the nuclear mass effects by perturbation theory. In our calculations with the exponential basis we found that the inclusion of δ M H into the nonrelativistic Hamiltonian leads to numerical instabilities for S states (but not for P and higher-L states). So, for S states we account for the nuclear mass effects by perturbation theory (up to the third order in 1/M [28]), whereas for the P and D states we usually include δ M H into the solution of the Schrödinger equation. We checked that for the P and D states both methods yield equivalent results. It should be mentioned that in the literature it is customary to split the operator δ M H into the mass-scaling and mass-polarization parts, The effect of the mass scaling (caused by the first term in Eq. (63)) can be incorporated into the nonrelativistic Hamiltonian (59) by switching to the reduced-mass atomic units r → µ r, where µ = 1/(1 + m/M ) is the reduced mass. As a result, the mass-scaling term leads to the appearance of the reduced mass prefactor in the nonrelativistic energy E 0 → µ E 0 and only the mass polarization term needs to be accounted for separately. We find it more convenient to keep the nuclear kinetic energy operator in the closed form (62), because this greatly simplifies consideration of higher-order recoil QED effects. Because the nonrelativistic Hamiltonian H 0 does not depend on spin, its matrix elements are immediately reduced to radial integrals with the spatial wave functions according to Eq. (24). Computing the action of gradients ∇ 1,2 on the wave functions (14), we express the matrix elements ψ|H 0 |ψ as a linear combination of integrals I(i, j, k) with i, j, k ≥ 0, which are rational functions of the nonlinear parameters α n , β n , and γ n . The choice of the nonlinear basis parameters α n , β n , and γ n is crucially important for obtaining an accurate and compact representation of the wave function and the energy E 0 . The general approach is to perform the variational optimization of the basis parameters, by searching for a minimum of the eigenvalue of the Hamiltonian matrix corresponding to the desired reference state. 
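The matrix elements of this Hamiltonian matrix reduce, as described in Sec. IV, to the radial integrals I(i,j,k) generated from the master integral. The sympy sketch below makes that generation explicit. Because the paper's own displays for I(i,j,k) and I(0,0,0) are not reproduced above, the normalization used here — I(0,0,0) = 1/((α+β)(β+γ)(γ+α)), corresponding to a weight r₁r₂r/2 on the triangle domain |r₁−r₂| ≤ r ≤ r₁+r₂ — is an assumed, Korobov-style convention; the numerical cross-check at the end verifies that convention for one parameter set.

```python
import numpy as np
import sympy as sp
from scipy.integrate import tplquad

al, be, ga = sp.symbols('alpha beta gamma', positive=True)

# Master integral in the assumed convention:
#   I(i,j,k) = 1/2 * Int dr1 dr2 dr  r1**i r2**j r**k exp(-al*r1 - be*r2 - ga*r),
#   with r integrated over |r1 - r2| <= r <= r1 + r2.
I000 = 1 / ((al + be) * (be + ga) * (ga + al))

def I(i, j, k):
    """Integrals with non-negative powers, obtained by differentiating the master
    integral with respect to the nonlinear parameters (one -d/d(parameter) per power)."""
    expr = I000
    for sym, n in ((al, i), (be, j), (ga, k)):
        if n:
            expr = (-1) ** n * sp.diff(expr, sym, n)
    return sp.simplify(expr)

print(I(1, 0, 0))   # one extra power of r1
print(I(1, 1, 1))   # one extra power of each radial distance

# Numerical cross-check of the master integral for one parameter set.
a, b, g = 1.1, 0.9, 0.7
num, _ = tplquad(lambda r, r2, r1: 0.5 * np.exp(-a * r1 - b * r2 - g * r),
                 0.0, 60.0,                        # r1 (60 ~ infinity for these exponents)
                 lambda r1: 0.0, lambda r1: 60.0,  # r2
                 lambda r1, r2: abs(r1 - r2),      # lower limit of r
                 lambda r1, r2: r1 + r2)           # upper limit of r
print(num, 1.0 / ((a + b) * (b + g) * (g + a)))    # should agree to quadrature accuracy
```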
Because the optimization of each individual nonlinear parameter is not effective from the computational point of view, we use the approach introduced by Vladimir Korobov [21]. In this method, the (real) nonlinear parameters α, β, and γ are quasirandomly distributed in the intervals and the parameters A 1,2 , B 1,2 , and C 1,2 are determined by the variational optimization. We note that the nonlinear parameters as well as A 1,2 , B 1,2 , and C 1,2 can be both positive and negative. However, in order to ensure the normalizability of the wave function and its physical behavior at large r 1 , r 2 , and r, we require that where ǫ ∼ √ 2 E io , with E io being the ionization energy. The performance of the basis set can be significantly improved if one introduces several sets of intervals A 1,2 , B 1,2 , and C 1,2 which are optimised variationally. In our calculations we use typically two or three sets of intervals. This can be considered as an analogue of several different exponential scales in the Hylleraas-type calculations by Drake at al. [6,29]. We also note that in calculations for excited 1snl states it is advantageous to include several screened hydrogenic wave functions of the type φ Z 1s ( r 1 ) φ Z−1 n ′ l ( r 2 ) with n ′ ≤ n in the basis, whose parameters are excluded from optimization. This ensures that the variational optimization is localized at the local minimum with the desired principal quantum number n and does not collapse to lower n's. Our procedure for determination of the nonrelativistic wave function and energy looks as follows. For a given size of the basis N the nonlinear parameters α n , β n , and γ n with n = 1, . . . , N are distributed quasirandomly within the initial set of intervals with parameters A i , B i , and C i . Then, the N × N matrix of the nonrelativistic Hamiltonian H 0 is computed. The linear coefficients c n and the desired reference-state eigenvalue E 0 are determined by the inverse iteration method. The inversion of the Hamiltonian matrix is performed by the LDU decomposition method. This procedure is repeated for different sets of the parameters A i , B i , and C i , searching for the minimum value of the energy eigenvalue. A disadvantage of working with the exponential basis is that the basis quickly degenerates as N is increased (i.e. the determinant of the Hamiltonian matrix becomes very small), which leads to numerical instabilities in linear algebra routines. Because of this the usage of an extended-precision arithmetics is mandatory. In our calculations we used the Fortran 95 libraries for the octupleprecision (about 64 digits) arithmetics written by V. Korobov [30], quad-double routine by D. H. Bailey, and MP-FUN/MPFR library by D. H. Bailey [31]. Table II shows an example of the convergence of numerical results with the exponential basis with increase of the basis size. We observe that with just N = 200 basis functions one obtains the nonrelativistic energy with about 10-digit accuracy. VI. RELATIVISTIC CORRECTION The relativistic correction splits the nonrelativistic energy levels with quantum numbers L > 0 and S > 0 into sublevels according to the value of the total momentum J. This effect is known as the fine structure. It is often convenient to consider separately the centroid energy levels obtained by averaging over all J sublevels, and the fine-structure intervals between individual J sublevels. 
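As a minimal runnable illustration of the variational machinery just described — a spread of exponential nonlinear parameters, a generalized eigenvalue problem with a non-orthogonal basis, and convergence of the lowest eigenvalue — the sketch below treats a one-electron hydrogen-like ion instead of helium, so that the matrix elements are analytic one-liners. The basis functions exp(−α_k r), the even-tempered choice of exponents, and the matrix-element formulas are simplifications introduced here for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.linalg import eigh

Z = 1.0                                    # hydrogen-like nuclear charge
alphas = 0.15 * 1.8 ** np.arange(10)       # even-tempered exponents; in the helium case the
                                           # (alpha_k, beta_k, gamma_k) are quasirandom and
                                           # variationally optimized as described above

# Analytic s-wave matrix elements for phi_k(r) = exp(-alpha_k r)
# (radial weight r^2; the common 4*pi factor cancels in the generalized problem):
A = alphas[:, None] + alphas[None, :]
S = 2.0 / A**3                             # overlap        <phi_k | phi_l>
T = np.outer(alphas, alphas) / A**3        # kinetic energy <phi_k | -(1/2) Laplacian | phi_l>
V = -Z / A**2                              # nuclear term   <phi_k | -Z/r | phi_l>

E = eigh(T + V, S, eigvals_only=True)      # generalized symmetric eigenvalue problem
print(E[0], -Z**2 / 2)                     # lowest variational eigenvalue vs the exact -Z^2/2
```

The lowest eigenvalue approaches −Z²/2 from above as the basis grows, which is the variational upper-bound property used throughout Sec. V; pushing the number of exponents much higher makes the overlap matrix S ill-conditioned, mirroring the loss of linear independence and the need for extended-precision arithmetic described above.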
The centroid energy is defined as The relativistic correction is induced by the Breit Hamiltonian, which is conveniently separated into the spin-independent and the spin-dependent parts, In the leading order of perturbation theory, the spinindependent part H A contributes only to the centroid energy, whereas the spin-dependent part H fs causes the fine structure splitting. A. Centroid energy The spin-independent part of the Breit Hamiltonian is given by where P = − p 1 − p 2 is the nuclear momentum. In order to account for the finite nuclear mass effects, the expectation value of the operator H A should be evaluated with the eigenfunctions ψ M of the Schrödinger Hamiltonian with the finite nuclear mass (i.e. the sum of Eqs. (59) and (62)). Alternatively, the wave function ψ M can be constructed by perturbation theory in 1/M . In our calculations, we include the nuclear recoil effect for the relativistic correction perturbatively for the S states, and nonperturbatively for the L > 0 states. The matrix element of H A is reduced to the radial integral with the spatial wave functions according to Eq. (24) and can be evaluated numerically. However, the expectation values of the operators p 4 a and δ 3 (r a ) are slowly converging with respect to the size of the basis because these operators are nearly singular. It is possible to significantly improve the speed of convergence if one transforms these operators to a more regular form [32]. Specifically, for a given nearly singular operator H X we search for another, more regular operator H XR and an additional operator Q X , which satisfy the following equation where {. , .} denotes the anticommutator. It is obvious that H X = H XR , as long as the expectation value is evaluated with the eigenfunctions of the Hamiltonian H 0 . In practice, it is usually possible to find such a pair of operators H XR , Q X that the most singular part of H X is absorbed in the anticommutator. The additional operator Q X is generally a combination of Z/r 1 , Z/r 2 , and 1/r, with the coefficients in front of these terms determined by requiring the cancellation of all Dirac-δ-like contributions. Specifically, we find the following regularized form of the operator H A (without the nuclear recoil) [10] where V = −Z/r 1 − Z/r 2 + 1/r. The operator ∇ 2 1 ∇ 2 2 in the above formula is not self-adjoint and requires an explicit definition. Its action on a trial function φ on the right should be understood as plain differentiation (omitting δ 3 (r)); no differentiation by parts is allowed in the matrix element. It can be checked that the operators H A and H AR satisfy the following equation where Formulas with the finite nuclear mass are analogous but more lengthy; they are given by Eqs. (62)-(67) of Ref. [33]. Table III presents numerical results for the leading relativistic correction to the 2 3 P centroid energy, performed with different basis sets. We observe that, for the same basis size, the number of correct digits for the matrix element is half as much as for the nonrelativistic energy. B. Fine structure The fine structure of energy levels is induced by spindependent operators. The spin-dependent part of the Breit Hamiltonian is conveniently written as a sum of three operators with different spin structure, with where κ = α /2π + O(α 2 ) is the anomalous magnetic moment correction and σ a is the vector of Pauli matrices acting on a'th electron. 
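The defining formula for the centroid, referenced at the start of this section, is missing from the extracted text; it is the usual statistically weighted average over the fine-structure sublevels:

```latex
E_{\mathrm{cent}}
\;=\;\frac{\sum_{J}(2J+1)\,E_{J}}{\sum_{J}(2J+1)}
\;=\;\frac{1}{(2L+1)(2S+1)}\sum_{J=|L-S|}^{L+S}(2J+1)\,E_{J},
% e.g. for the 2^3P term: E_cent = ( E_{J=0} + 3 E_{J=1} + 5 E_{J=2} ) / 9 .
```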
We note that the operators H B , H C , and H D contain radiative corrections in form of the electron anomalous magnetic moment. In this way we account for the complete QED effects of order α 5 m to the fine structure. It should be mentioned that the matrix element of H C is nonzero only if the operator is sandwiched between wave functions with different spin values. Therefore, any symmetrical matrix element of H C vanishes, and this operator does not contribute in the leading order of perturbation theory. We note, however, that H C contributes to the second-order perturbation corrections (in the order α 6 m). In order to perform the spin-angular reduction in the matrix elements of H fs , it is convenient to introduce spatial operators Q B , Q C , and Q D , explicitly separating the spatial and the spin degrees of freedom, Using Eqs. (31), (38)- (41) and performing traces of the spin operators, we express all matrix elements in terms of spatial radial integrals. For the 3 P states, we obtain where for J = 0, 1, and 2, respectively. For the 3 D states, an analogous calculation yields where for J = 1, 2, and 3, respectively. VII. LEADING QED CORRECTION The leading QED contribution is of the order α 5 m. For the fine structure, this contribution is already accounted for by the electron anomalous magnetic moment terms in the Breit Hamiltonian, as given by Eqs. (74)-(76). So, we need to examine only the centroid energy. The spin-independent mα 5 Hamiltonian representing the leading QED effects was derived in the 1950s by Araki and Sucher [34,35] where ln k 0 is the so-called Bethe logarithm defined as and 1/r 3 ǫ is the regularized 1/r 3 operator (distribution) defined by its matrix elements with an arbitrary smooth function f ( r) as +4 π δ 3 (r) (γ E + ln ǫ) . (90) The nuclear recoil correction to the leading QED contribution consists of two parts, where δ M H is defined by Eq. (62), and δ M H (5) is the recoil addition to the α 5 m Hamiltonian given by [36] Here, δ M ln k 0 is the correction to the Bethe logarithm ln k 0 induced by the nonrelativistic kinetic energy operator P 2 /2, and 1/r 3 a,ǫ is the regularized 1/r 3 a operator defined analogously to Eq. (90). The recoil correction to the Bethe logarithm δ M ln k 0 is often separated into the mass-scaling and masspolarization parts, where δ p1p2 denotes the perturbation due to the mass polarization operator p 1 · p 2 . The corresponding separation for the 1/r 3 ǫ matrix element reads From the computational point of view, the numerical evaluation of the QED effects involves two new features, as compared to the relativistic correction: matrix elements of the singular operators 1/r 3 and 1/r 3 a and the Bethe logarithm. Calculation of expectation values of singular operators with exponential basis functions is examined in Appendix A; it does not present any computational difficulties. On the contrary, the computation of the Bethe logarithm is rather nontrivial; it is examined in the next section. A. Bethe logarithm There are two different approaches developed for the calculation of the Bethe logarithm in few-electron atoms. The first one starts with the definition (89) and uses the basis-set representation of the Hamiltonian as a sum of the spectrum of the eigenfunctions. The difficulty is that the sum in the numerator is nearly diverging because the dominant contribution comes from the high-energy continuum states of the spectrum. This problem is solved by using a basis set whose spectrum of pseudostates spans a huge range of energies [37]. 
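The display defining the Bethe logarithm did not survive extraction. In the form commonly used for helium — and consistent with the quantities D = 2πZ⟨δ³(r₁)+δ³(r₂)⟩ and ∇ ≡ ∇₁+∇₂ introduced in the next subsection — it reads as follows (a standard expression quoted as a hedged reconstruction, in atomic units):

```latex
\ln k_0\;=\;
\frac{\bigl\langle\,\vec\nabla\,(H_0-E_0)\,\ln\!\bigl[2(H_0-E_0)\bigr]\,\vec\nabla\,\bigr\rangle}
     {\bigl\langle\,\vec\nabla\,(H_0-E_0)\,\vec\nabla\,\bigr\rangle},
\qquad
\bigl\langle\,\vec\nabla\,(H_0-E_0)\,\vec\nabla\,\bigr\rangle
\;=\;2\pi Z\,\bigl\langle\,\delta^{3}(r_1)+\delta^{3}(r_2)\,\bigr\rangle,
% with nabla = nabla_1 + nabla_2 acting on the nonrelativistic eigenstate of energy E_0.
```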
An alternative approach was first introduced by C. Schwartz [23] and further developed by V. Korobov [38][39][40]. Within this method, the Bethe logarithm ln k 0 is represented as an integral over the momentum of the virtual photon, with subtracting the ultraviolet asymptotics and performing the limit, where D = 2πZ δ 3 (r 1 ) + δ 3 (r 2 ) , ∇ ≡ ∇ 1 + ∇ 2 , and The asymptotic expansion of J(k) for large k reads Splitting the integration interval (0, Λ) into two parts (0, K) and (K, Λ), where K is an arbitrary cutoff parameter, we can rewrite Eq. (95) as The above expression is finite, does not depend on K, and is suitable for a numerical evaluation. We now address the angular reduction in the secondorder matrix element J(k) given by Eq. (96). It is performed in several steps. First, we represent the gradient acting on the reference-state wave function ∇ j ψ i1..iL as a sum of irreducible Cartesian tensors, as described in Appendix B. For example, the gradient acting on a Pstate wave function ∇ j ψ i is represented as a sum of the L = 0, L = 1, and L = 2 irreducible Cartesian tensors, which induce, correspondingly, the L = 0, L = 1, and L = 2 angular-momentum contributions from the resolvent. The second-order matrix element of an irreducible tensor Φ i1..iL is transformed as where Φ i1..iJ is the solution of the inhomogeneous Schrödinger equation Inserting the explicit representation of Φ as a sum over the spectrum, we obtain An alternative way to arrive at this expression is to observe that the scalar product Φ|ψ includes an integration over the continuous and a summation over the discreet variables, namely Φ|ψ ≡ Φ i1..iL |ψ i1..iL = i1..iL d 3n r Φ i1..iL * (r) ψ i1..iL (r). The advantage of the integral representation of the Bethe logarithm is that J(k) has a form of the symmetric second-order perturbation correction and thus obeys the variational principle. We therefore can variationally optimize the basis-set representation of the resolvent For lower values of k, the basis can be variationally optimized if one fixes pre-optimized parameters of the more deeply bound states with E n < E 0 . Our numerical procedure was performed in two steps. First, we optimized the basis for several different scales of the photon momentum, k = 10 i , with typical values of i = 1, .., 4. After that, the computation of the function J(k) was performed with a basis obtained by merging together the optimized sets for the two closest k i points, thus essentially doubling the size of the basis. In the second step, we perform the integration over k. The integral over (0, K) (with the typical choice of K = 10) was calculated analytically, after the full diagonalization of the Hamiltonian matrix. The remaining interval was split into two parts, (K, K 2 ) and (K 2 , ∞), with the typical choice of K 2 = 10 4 . The integral over the former was performed with help of Gauss-Legendre quadratures, after the change of variables t = 1/k 2 . The remaining part of the integral was calculated analytically, after fitting numerical values of J(k) to the known form of the asymptotic expansion, where pol(x) denotes a polynomial of x. The first terms of this expansion are given by Eq. (97), whereas the higher-order coefficients are obtained by fitting. Calculations of the Bethe logarithm for the finite nuclear mass can be performed analogously to the above, or by perturbation theory. The numerical procedure for evaluation of the recoil correction to the Bethe logarithm by perturbation theory is described in Appendix A of Ref. 
[41]. Table IV presents a comparison of different calculations of the Bethe logarithm for the 2 3 P state of helium. The most accurate results for the ground and excited states of helium are obtained by Korobov in Ref. [40]. Results for He-like ions can be found in Refs. [37,41]. VIII. α 6 m QED EFFECTS The α 6 m QED corrections to energy levels in atoms are represented by the sum of the expectation value of the effective α 6 m Hamiltonian H (6) and the second-order perturbation correction induced by the Breit Hamiltonian, where H We note that in order to avoid admixture of higher-order contributions in E (6) , we have to retain only the α 4 m part in the definition of the Breit Hamiltonian, i.e., to set the magnetic moment anomaly κ → 0 in the definitions (74)-(76). This is indicated by the superscript "4" in the corresponding operators. Formulas for the effective α 6 m Hamiltonian H (6) are rather lengthy and will not be reproduced here. In the case of fine structure, they were first obtained by Douglas and Kroll in 1974 [42] and later re-derived in Refs. [43,44]. For the energy centroid, the situation is greatly complicated because of the appearance of numerous diverging operators. The corresponding derivation was accomplished by one of us (K.P.), in Ref. [9] for the triplet states and in Ref. [10] for the singlet states of helium. The complete formulas suitable for numerical evaluation can be found in Ref. [25]. The nuclear recoil α 6 m correction has the same structure as the non-recoil one, but the expressions for the operators are much more complicated. This correction was calculated in Ref. [33] for the triplet states and in Ref. [45] for the singlet states of helium. A. Second-order terms We now discuss the evaluation of the second-order contributions, represented by the second term in Eq. (103). Such corrections were first calculated for the fine structure by Hambro [46] and by Lewis and Serafino [7]. Later, the fine-structure calculations were greatly improved in Refs. [47][48][49]. For the centroid energies, the second-order corrections were calculated in Refs. [10,11] for the 2S and 2P states and in Refs. [25,50] for the nD states of helium. It is convenient to rewrite Eq. (103), expressing the second-order perturbation correction more explicitly, (105) We note that the non-symmetrical second-order corrections (the last two terms in the above equation) vanish for the centroid energy, but contribute to the fine structure. The second-order perturbative corrections are calculated as follows. In the first step, we perform traces over the spin degrees of freedom in the matrix elements. Then we decompose the product of a tensor operator Q and the reference-state wave function ψ into the irreducible tensor partsψ, as described in Appendix B. In the last step we calculate the second-order matrix elements induced by the irreducible partsψ as (see Eq. (101)) The numerical evaluation of symmetrical second-order contributions was carried out with the variational optimization of the nonlinear parameters of the basis set for the resolvent 1/(E 0 − H 0 ). Convergence of numerical results is often rather slow, especially for contributions with H AR . This is associated with the fact that the effective wave function |δψ = 1/(E 0 − H 0 ) ′ |H AR has an integrable singularity at r a → 0. In order to represent such wave functions with the exponential basis, very large (both positive and negative) exponents are required. 
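The evaluation of the symmetrical second-order corrections can be illustrated schematically at the level of finite matrices. In the sketch below, H0 and Q are random symmetric stand-ins rather than the actual Breit or α^6 m operators; the reduced resolvent 1/(E0 − H0)' is realized both as an explicit sum over the excited spectrum and as a projected linear solve, which is closer to how large basis-set calculations are organized.

```python
# Matrix-level sketch of a symmetric second-order correction
# <psi0| Q 1/(E0 - H0)' Q |psi0>; H0 and Q are random stand-ins,
# not actual helium operators.
import numpy as np

rng = np.random.default_rng(0)
n = 6
H0 = rng.normal(size=(n, n)); H0 = 0.5 * (H0 + H0.T)     # model Hamiltonian
Q  = rng.normal(size=(n, n)); Q  = 0.5 * (Q + Q.T)       # model perturbation

E, V = np.linalg.eigh(H0)
E0, psi0 = E[0], V[:, 0]                                  # reference state

# (a) reduced resolvent as a sum over all states except the reference one
corr = sum((psi0 @ Q @ vn) ** 2 / (E0 - En)
           for En, vn in zip(E[1:], V.T[1:]))
print("sum over spectrum :", corr)

# (b) equivalent projected linear solve: (E0 - H0) dpsi = P Q psi0,
#     with P projecting out psi0, then correction = <psi0| Q |dpsi>
P = np.eye(n) - np.outer(psi0, psi0)
rhs = P @ (Q @ psi0)
dpsi = np.linalg.lstsq(E0 * np.eye(n) - H0, rhs, rcond=None)[0]
dpsi = P @ dpsi
print("linear-solve check:", psi0 @ Q @ dpsi)
```

Both routes give the same number; in actual calculations the second form is used with a variationally optimized basis for the resolvent, as described above.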
In order to effectively span large regions of parameters, we used non-uniform distributions of the nonlinear parameters. E.g., for the nonlinear parameters α i we used the distributions of the kind [9] with a = 2 and 3, where the variable t i has a uniform quasirandom distribution over the interval (0, 1) and the variables A 1,2 are subjects of variational optimization. An example of the convergence study of the second-order correction H AR 1 (E0−H0) ′ H AR is given in Table V. Numerical evaluation of non-symmetrical second-order contributions was carried out with basis sets, optimized for the corresponding symmetrical corrections. IX. α 7 m QED EFFECTS The α 7 m QED correction to energy levels in atoms is given [12] by the sum of the relativistic correction to the Bethe logarithm E L , the expectation value of the effective α 7 m Hamiltonian H (7) , and the perturbation of the α 5 m QED operator by the Breit Hamiltonian, The regularized effective α 5 m Hamiltonian is [17] H (5) where H R is non-Hermitian and is assumed to act on a ket trial function φ on the right. The relativistic correction to the Bethe logarithm is rather complicated. We will not discuss its calculation here, but direct the reader to original studies. This correction was first calculated for the fine structure of the 2 3 P state; the corresponding calculations for helium and helium-like ions were performed in Refs. [12][13][14]. In our recent investigation [15] we performed a calculation for the energy centroid of the 2 3 S and 2 3 P states. For singlet states of helium, this correction has never been calculated so far. The derivation of the effective α 7 m Hamiltonian H (7) for helium is an extremely difficult problem. It was first accomplished by one of us (K.P.) for the fine structure in Refs. [12,13]. Recently, we performed [16,17] the derivation of H (7) for triplet states of helium and calculated [27] the corresponding correction to the energies of the 2 3 S and 2 3 P states. For singlet states, the effective α 7 m Hamiltonian is unknown. From the computational point of view, the main difficulty of the evaluation of the α 7 m correction is the calculation of the Bethe-logarithm contribution E L . The computational scheme is similar to that for the plain Bethe logarithm and is described in Ref. [15]. Conversely, the computation of the expectation value of H (7) and the second-order corrections is very similar to the calculation of the α 6 m corrections. X. OTHER EFFECTS The finite nuclear size correction is given by (in relativistic units) where R is the root-mean-square nuclear charge radius, and the expectation value of the Dirac δ functions is assumed to include the finite-nuclear-mass correction induced by δ M H. The higher-order QED effects are approximated on the basis of known results for hydrogenic atoms. Specifically, the hydrogenic one-loop and two-loop corrections for the 2s state of He + are given by [51] An approximation for the higher-order α 8 m QED correction to the ionization energies of the helium atom is obtained from the corresponding hydrogenic 2s contribution by XI. COMPARISON OF THEORY AND EXPERIMENT In this section we summarize numerical results of QED calculations of energy levels in 4 He and compare theoretical predictions with available experimental results. Table VI presents such a comparison for transitions between states with the principal quantum number n = 2. 
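Returning to the choice of nonlinear parameters discussed above, the following sketch generates a non-uniform, quasirandom distribution of exponents. The stretching formula alpha_i = A1 + (A2 − A1) t_i^a is an assumed form consistent with the verbal description; the explicit expression of Ref. [9] is not reproduced in the text.

```python
# Sketch of generating non-uniformly distributed nonlinear basis parameters
# from a quasirandom sequence; the mapping alpha_i = A1 + (A2 - A1) * t_i**a
# is an assumed form, with a = 2 or 3 biasing the points toward A1.
import numpy as np

def quasirandom(n, irrational=np.sqrt(2.0)):
    """Low-discrepancy points t_i in (0, 1): fractional parts of i*sqrt(2)."""
    i = np.arange(1, n + 1)
    return np.mod(i * irrational, 1.0)

def nonlinear_parameters(n, A1, A2, a):
    t = quasirandom(n)
    return A1 + (A2 - A1) * t**a

# A1, A2 play the role of the variationally optimized bounds
print(nonlinear_parameters(n=20, A1=0.5, A2=500.0, a=3))
```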
We note that our present theoretical uncertainty for the 2 3 S -2 1 S transition is increased as compared to our previous work [28]. The reason is an accidental cancelation of the estimated α 7 m term between the 2 3 S and 2 1 S states in Ref. [28]. Now the α 7 m correction is calculated for the 2 3 S state and the theoretical uncertainty is defined by the 2 1 S state only. Table VI shows good agreement of theory and experiment for the singlet-singlet and triplettriplet transitions but some tension for the singlet-triplet transitions. Specifically, we note a 2.3 σ deviation from the experimental result [53] for the 2 3 S-2 1 P transition (with σ denoting the standard deviation). Of particular importance is the agreement observed for the 2 3 P -2 3 S transition, because in this case two triplet states are involved, for which the theoretical accuracy is the highest. Theoretical calculations of energies for the 2 3 S and 2 3 P states [17] are complete through order α 7 m, with resulting theoretical uncertainty below 100 kHz, whereas for the 2 1 S and 2 1 P states the theory [28] is complete up to order α 6 m only and the theoretical accuracy is on the level of 1 MHz. For the D states, theoretical calculations [25,50] are also complete at order α 6 m, but the absolute theoretical precision is much higher since the QED effects are smaller. In general, we conclude that for the intrashell n = 2 transitions there is good agreement for transitions between the states with the same spin multiplicity and some tension for the states of different spin multiplicity. The situation becomes even more strained when we consider ionization energies and transitions involving states with different n's. The corresponding comparison is presented in Table VII. We immediately notice that all differences between theory and experiment are of the same sign and that most of them are outside of the theoretical error bars. The largest discrepancies are found for the 2 3 S 1 -3 3 D 1 and the 2 3 P 0 -3 3 D 1 transition, of 6 and 12 σ, correspondingly. These transitions involve the triplet states, for which theoretical uncertainties are the smallest, so that 0.5 MHz differences from the experimental values lead to large relative deviations. The comparison in Tables VI and VII suggests that there might be a contribution missing in theoretical calculations of energy levels, which weakly depends on L but strongly depends on the principal quantum number n (the latter is natural because the 1/n 3 scaling is typical for all QED effects). This conjecture was put forward in Ref. [50] and since then strengthened by subsequent calculations and measurements. Such a missing contribution most likely originates from the α 6 m or α 7 m QED corrections because all other theoretical effects are crosschecked against independent calculations [5]. Table VIII presents the comparison of theoretical and experimental results for the fine-structure intervals of the 2 3 P state in 4 He. Theoretical predictions for these intervals are of greater accuracy than for other intervals of the n = 2 manifold. This is both due to the fact that the theory of these intervals [14,57] is complete at the order α 7 m and due to the smallness of QED effects. We observe a generally good agreement between theory and experiment for the fine-structure intervals. The only tension is a 1.4 σ deviation for the P 1,2 interval measured in Ref. [3]. 
We note that all pre-2010 experimental results were to a greater or lesser degree influenced by unaccounted quantum-interference effects and were reevaluated in Refs. [74,75]. Summarizing, we have reviewed a large amount of work accomplished during the last decades in calculations of QED effects in the atomic structure of the helium atom. The leading-order α 5 m QED effects are nowadays well established by independent calculations and tested by comparison with numerous experiments. However, recent calculations of higher-order α 6 m and α 7 m QED effects revealed some small but systematic deviations from high-precision experimental transition energies. Having in mind the importance of the helium spectroscopy for determination of nuclear properties and fundamental constants, we conclude that further theoretical and experimental efforts are needed in order to find the reasons behind the observed discrepancies. In calculations of the Bethe logarithm and the secondorder perturbation corrections, we encounter a problem of decomposition of products of irreducible Cartesian tensors into the irreducible parts. In this section we collect formulas required for such decompositions. The product of two vectors is represented as a sum of a symmetric and traceless second-rank tensor, a vector, and a scalar, The product of a vector and a symmetric and traceless second-rank tensor is decomposed as where This identity can be verified by contracting Eq. (B2) with δ ij and ǫ ijk . It can be easily extended to the higher-rank tensors Q. Finally, we present the decomposition of the product of two symmetric and traceless tensors P ij and Q kl , required for calculations of second-order corrections involving D-states, P ij Q kl = (P ij Q kl ) (4) + ǫ ika T jal + ǫ jka T ial + ǫ ila T jak + ǫ jla T iak + δ ik T jl + δ il T jk + δ jk T il + δ jl T ik
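The lowest-rank case, the decomposition of the product of two vectors described above, is easy to verify numerically. The sketch below splits p_i q_j into its symmetric-traceless, antisymmetric (vector) and trace (scalar) parts and checks that the antisymmetric part carries the cross product; the test vectors are arbitrary.

```python
# Numerical check of the decomposition of a product of two vectors into its
# irreducible parts: symmetric-traceless tensor, vector (antisymmetric) part,
# and scalar (trace) part.
import numpy as np

p = np.array([1.0, -2.0, 0.5])
q = np.array([0.3,  1.1, -4.0])

T = np.outer(p, q)                                  # full product p_i q_j
trace_part    = np.eye(3) * np.trace(T) / 3.0       # scalar part, carries p.q
sym_traceless = 0.5 * (T + T.T) - trace_part        # irreducible rank-2 part
antisym       = 0.5 * (T - T.T)                     # vector part, carries p x q

assert np.allclose(T, sym_traceless + antisym + trace_part)
vec = 2.0 * np.array([antisym[1, 2], antisym[2, 0], antisym[0, 1]])
assert np.allclose(vec, np.cross(p, q))
print("decomposition verified")
```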
Event-Based Quantum Mechanics: A Context for the Emergence of Classical Information : This paper explores an event-based version of quantum mechanics which differs from the commonly accepted one, even though the usual elements of quantum formalism, e.g., the Hilbert space, are maintained. This version introduces as primary element the occurrence of micro-events induced by usual physical (mechanical, electromagnetic and so on) interactions. These micro-events correspond to state reductions and are identified with quantum jumps, already introduced by Bohr in his atomic model and experimentally well established today. Macroscopic bodies are defined as clusters of jumps; the emergence of classicality thus becomes understandable and time honoured paradoxes can be solved. In particular, we discuss the cat paradox in this context. Quantum jumps are described as temporal localizations of physical quantities; if the information associated with these localizations has to be finite, two time scales spontaneously appear: an upper cosmological scale and a lower scale of elementary “particles”. This allows the interpretation of the Bekenstein limit like a particular informational constraint on the manifestation of a micro-event in the cosmos it belongs. The topic appears relevant in relation to recent discussions on possible spatiotemporal constraints on quantum computing. Introduction Quantum mechanics (QM) is the current theory of reference in the study of the micro-physical world. The application of its principles led to the full elucidation of the behavior of matter at the particle scale, of the atomic nuclei, of atoms, of molecules, of condensed matter. Such principles are at the base of the prodigious development of today's quantum technologies. In addition, it was the starting point for the formulation of quantum field theory. Nevertheless, the nature of the physical world at the scale of quantum processes remains the subject of debate [1]. In particular, the mechanisms that convert the quantum information associated with these processes into classical information, thus allowing the emergence of the macroscopic classical world, remain elusive. In this article we wish to investigate these mechanisms in a new light. In order to illustrate the approach, let us consider an unstable microscopic quantum system, let us say an atomic nucleus of Radium 226 in isolation. This system decays in Radon 222 with a very long half-life (1600 years); this means that the nuclear quantum amplitude undergoes a slow unitary time evolution tending asymptotically to the amplitude of the nuclear state designated as Radon 222. This description would potentially be applicable both to the single nucleus of Radium 226 and to a set of nuclei of Radium 226 prepared at the same initial moment. Now, the peculiarity of QM is that the amplitude of a single nucleus of Radium 226 can undergo a discontinuous jump to the amplitude representative of the Radon 222. This discontinuous evolution is what is called a quantum jump (QJ). It is noteworthy, first of all, that a QJ is an event in the history of a single nucleus; identical nuclei simultaneously prepared in the same way will decay at different times. Therefore, the time evolution of the set of all these nuclei will lead to a mixture consisting of a decreasing fraction of non-decayed nuclei (whose amplitudes will be expressed by the same superposition of Radium 226 and Radon 222) and a growing fraction of Radon 222. Moreover the decay, that is the QJ, is an objective physical fact. 
Indeed a detector placed near the nucleus that undergoes the decay will detect an alpha particle, product of the decay (and possibly the gamma photon of rearrangement). By applying the principles of conservation of physical quantities such as charge, energy and impulse, we obtain that the quantities transported by the decay products are those related to the transformation of Radium 226 in Radon 222. Since the emission of these products is instantaneous with respect to the unitary evolution of the nuclear amplitude, it follows that the QJ is instantaneous on the scale of this second process (and it can be assumed that it is instantaneous in the strict sense). Therefore, there is an objective physical process, the QJ, which converts quantum information, associated with a superposition, in classical information associated with a mixture. This process operates on a microscopic scale, therefore in the absence of any noise that can produce decoherence and in total isolation. Furthermore, it does not depend on the presence or absence of measurement devices (which can reveal at most the decay products when the process has ended). The QJ is clearly an outcome of the known and usual physical interactions: specifically, the strong nuclear interaction governing the alpha decay. These interactions produce two effects: a unitary evolution of the amplitudes that can be described in principle by the quantum equations of motion and their discontinuous and non-unitary variation constituted by the QJ. In other words, each interaction induces both Hamiltonian and non-Hamiltonian effects on the evolution of quantum amplitudes. The quantum jumps were introduced by Bohr in his famous trilogy [2][3][4], in relation to the explanation of atomic spectral lines. Direct observation of atomic QJs and their discrimination from the underlying unitary process required the development of refined experimental methods that became available in the mid 1980s. With the method of ionic traps was possible to use "shelved" atoms for this observation [5]. Subsequently, quantum jumps were directly observed in a number of atomic, molecular, electromagnetic and nuclear microsystems [6][7][8][9][10][11][12][13][14][15][16][17][18][19]. However, while the QJs represent a well established experimental fact, the current formulations of the QM are not based on the explicit recognition of their existence. It is good to repeat that QJ is a real physical phenomenon (for example, the effective transmutation of an atomic nucleus with the emission of observable decay products) that occurs as the effect of a specific interaction (in the example, a strong interaction). It should not be confused with its effect, i.e., the reduction of quantum amplitude and the consequent production of classical information. Furthermore, while this information may become available as a measure of the knowledge acquired by a human observer, the presence of such an observer is not a necessary condition for the QJ to take place. We take the QJs as elementary "events" of an objective physical world, independent of the presence of observers. Furthermore, we adopt an ontological perspective where the macroscopic systems are stable aspects of recurrent schemes of QJs. The QM, reformulated according to these principles, thus becomes a description of the physics of this objective world. Now we want to discuss some of the advantages of this perspective. 
Since the QJs induce a reduction of quantum amplitude, one may question if they may have some relationship with the other phenomenon to which the QM attributes the same property, that is, the measurement. In this paper we will assume that the reduction induced by the quantum measurement process is nothing but the effect of a quantum jump on the evolution of the complex consisting of the measurement apparatus and the micro-entity. For consistency with the previous reasoning, the jump will be the non-Hamiltonian effect of the (even negative) interaction between the apparatus and the micro-entity. In this way the measurement becomes an ordinary physical phenomenon and we can consider it as a special case of QJ. The concepts of "measurement" and "observable" therefore lose their centrality in the construction of the theory. The peculiarities of quantum measurement will be examined later. The succession of the QJs experienced over time by the same elementary "quantum system" is naturally a discrete set: two successive QJs are separated by a finite time interval. The lack of existence of events related to the system in this interval means that in this interval the "system", intended as a set of QJs, does not exist in time. The elementary "quantum system" is therefore not an object, in the classical sense: it is a causal connection between single QJs. Its identity will be characterized by the existence of a specific Hilbert space where the transition amplitude between a QJ and the next is evaluated (if the number of particles is not conserved, we will have instead a Fock space, but in this paper we will not discuss this circumstance). The quantum amplitude (bra or ket) associated with a system is therefore not the "state" of some object; in particular, a superposition of amplitudes should not be intended as a superposition of "states". The situation is very different if the "quantum system" is not elementary, that is, it consists of a multiplicity of elementary constituents able to interact between themselves with production of many QJs at the same time. The characterization of the system will then also require the specification of these QJs. The case of classical macro-systems is that of the ideal limit in which the system is defined entirely by average properties, relatively stable over time, of an enormously high number of QJs. Such a system is an object, it is endowed with continuous existence in time and its state is described by classical variables, to which classical measures of information are attributable. By attributing the role of primary constituents of the physical world on the quantum scale to the QJs is therefore possible to construct a version of the QM in which the transition from the (fundamental) quantum level to the classical one is fully understandable. To construct such a version we have to remember that a QJ projects the quantum amplitude of the system on a given subspace of the Hilbert space; it can therefore be represented by a self-conjugated operator. The QJ is therefore the localization, in the temporal domain, of that projection and of the physical quantities associated with it. Temporal localizations of physical quantities thus assume the role of primary elements of physical reality, with the result that this one is made up of events, and no longer objects. 
This formulation assigns the same theoretical and experimental significance to two inseparable and complementary aspects of the quantum world: that localized in time (QJs) and that not localized in time (amplitudes). The relationship between these two aspects can be described in terms of the relationship between quantum information coded in the Hilbert space and classical information produced in the QJs. In particular, the Bekenstein limit becomes a limit on classical information which can be expressed in a finite volume of space by the QJs. If this limit has to be finite, the temporal de-localization must be limited both above, on the cosmological scale, and below. This last constraint determines the appearance of a scale of elementary particles, an element which is not explicit in the usual formulation of the QM. The evolution of the superposition of amplitudes in a quantum computer, however, does not happen in spacetime. Therefore, this evolution does not seem to be limited by spatio-temporal constraints, which instead can afflict the input-output operations. The recent debate on the possibilities of quantum computing raised by Davies and Aaronson [20,21] should be considered in this perspective. The structure of this paper is as follows. In Section 2, the quantum basic notions are re-arranged according to the principles outlined in this introduction; in particular the notion of event is specified and the Born rule is introduced. Section 3 briefly introduces the notions of classical system and measurement. Section 4 is dedicated to the discussion of the cat paradox. In Section 5 we discuss the relationship between localization and information. In Section 6, some considerations about the possible spatiotemporal limits of quantum computing are discussed according to the perspective illustrated in this paper. A comparison with other similar approaches in the literature is reported in Section 7. Open problems and the possibility of future research are briefly sketched in Conclusions. Rewriting Quantum Postulates In this section we present a reformulation of the basic concepts of the QM according to the recognition of the fundamental nature of the QJs as events on the quantum scale. In order to highlight the impact of this recognition, we will refer to the semi-formalized presentation level typical of textbooks. In our intervention on the standard formalism we will try to be as conservative as possible, compatibly with the variations we intend to represent. We assume two primitive theoretical concepts: (1) A real variable t ∈ T ⊂ R, the 'time'. (2) A 'rigged' Hilbert space H with a scalar product ϕ|ψ ∈ C; |ψ , |ϕ ∈ H. Postulate 1. ∃E ⊂ T such that: (a) E is finite or countable; We call the application f the 'manifestation' of the 'event' |ψ t k ψ t k |. As one can see, this event is a self-adjoint operator on H; t k is the 'instant' when the event 'occurs'. It is assumed that t k+1 ≥ t k , ∀k ∈ N and there are not events manifested in the interval (t k , t k+1 ). The postulate 1 thus defines a causal structure of successive events based on a conditional probability. Let us consider now a linear operator o on H, which is diagonal in the orthonormed and complete basis |φ i ; i = 1, 2, 3...: The projector |φ i φ i | is a possible image of the application f at the time t if ∃|ψ ∈ H such that Prob[(|ψ t ψ t |) (|φ i,t φ i,t |)] ≡ 0, where t ∈ T. If o i ∈ R and each |φ i φ i | is a possible image of the application f then o is called 'physical quantity' on the space H. Postulate 2. 
The conditional probability of two successive events |ψ t k ψ t k | and |ψ t k+1 ψ t k+1 | is expressed as: where H = H + is a physical quantity on H called 'Hamiltonian'. Postulate 2 defines a rule for the conditional probability that connects two subsequent events (Born rule). The mean value of the physical quantity (1) manifested in the event |ψ t k ψ t k | is given, as it is easy to verify, by: The postulate 2 does not privilege either of the two directions of time. However, it is also possible to introduce a time-oriented dynamics by defining the pair of amplitudes: |ψ t = S|ψ t k ; quantum 'forward' amplitude (5) ψ t | = ψ t k+1 | S + ; quantum 'backward'amplitude to which the two time evolution equations, equivalent to Equation (2): ih∂ t |ψ t = H|ψ t for t ∈ [t k , t k+1 ) 'forward' evolution (6) − ih∂ t ψ t | = H ψ t | for t ∈ (t k , t k+1 ] 'backward' evolution (7) are respectively associated. In general, however, it is |ψ t t=t k+1 = |ψ t k+1 and ψ t | t=t k = ψ t k |. This circumstance is named 'quantum jump' or 'quantum discontinuity'. For t ∈ (t k , t k+1 ) it is formally possible to define the mean values ψ t | o|ψ t , ψ t | o|ψ t , but they have not a direct physical meaning as they are not manifested in an event. Along this alternative route, the Born rule can be introduced as follows. Let A = Σ i a i |a i a i | be a physical quantity. Let us imagine n → ∞ mental copies of the causal connection between two subsequent events (without intermediate events), and impose that these copies differ only for the final event. The final events will all be, by hypothesis, of the kind |a i a i |; this is possible in virtue of the definition of A as a physical quantity, which assures the existence of a manifestation f with these images. The final 'average' event will then be: From (8) we have ∑ i P i = 1, so that P i is the probability of the manifestation of the |a i a i | event whose existence is guaranteed by the postulate 1. Since the initial event is fixed, the quantum forward amplitude at the instant immediately preceding the manifestation of the final event is also fixed and we denote it as: Thus, the density operator ρ = |ψ ψ| associated with this amplitude is also fixed. The operator |a k a k | associated with the final event transforms |ψ in c k |a k , and then ρ in c * k c k |a k a k |. The same operator transforms Λ in P k |a k a k | . The coefficients c i in Equation (9) evolve in time according to Equation (6) but their physical meaning remains undefined. We define it by imposing that the effects of the two transformations (of ρ and Λ) induced by the final event are equal, so that this event induces the transition ρ → Λ. As a result of this definition we obtain P k = c * k c k that is the Born rule. The postulate 2 can therefore be replaced by the evolution Equation (6) for the forward amplitude, adding as a new postulate the transition ρ → Λ induced by the final event (projection postulate). It is important to note that, despite the apparent irreversibility implied by both the choice of a determined temporal direction and the decoherence implicit in the projection postulate, such construction is completely time-symmetrical. One could start from the evolution Equation (7) for the backward amplitude, keep the final event fixed and assume the initial event as a variable. In this case the density operator ρ will be the one associated with the backward amplitude at the moment of the initial event, while Λ will be the 'initial average event'. 
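A small numerical illustration of the transition ρ → Λ discussed above: projecting ρ = |ψ⟩⟨ψ| with a final event |a_k⟩⟨a_k| leaves the weight c_k* c_k, the same weight that appears in the average event Λ, which is the Born rule. The three-dimensional basis and amplitudes are arbitrary examples.

```python
# Projecting rho = |psi><psi| with the event |a_k><a_k| leaves the weight
# |c_k|^2, matching the corresponding term of the average event Lambda.
import numpy as np

c = np.array([0.6, 0.8j, 0.0])              # example amplitudes, sum |c_i|^2 = 1
c = c / np.linalg.norm(c)
basis = np.eye(3, dtype=complex)            # |a_i> taken as the canonical basis

psi = basis @ c
rho = np.outer(psi, psi.conj())             # density operator before the event
Lam = sum(abs(c[i])**2 * np.outer(basis[:, i], basis[:, i].conj())
          for i in range(3))                # 'average' final event

k = 1
Pk = np.outer(basis[:, k], basis[:, k].conj())          # event |a_k><a_k|
print("weight from rho:    ", np.trace(Pk @ rho @ Pk).real)
print("weight from Lambda: ", np.trace(Pk @ Lam @ Pk).real)
print("|c_k|^2:            ", abs(c[k])**2)
```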
One will then have to impose that, as a result of the initial event, ρ → Λ. Postulate 2 seems preferable because it avoids a separate postulate of projection. Before closing this section we would like to discuss briefly two valid concepts both for a manifestly time-symmetrical description and an apparently time-oriented formulation. Let us consider the event |ψ t k ψ t k |. If then the diagonal elements |φ i φ i | in the expansion of |ψ t k ψ t k | are named 'virtual sub-events' of the event |ψ t k ψ t k |. Crossing one of the two slits in the double slit experiment is a typical example of virtual sub-event. The virtual sub-events are not manifested, that is no application f : The second relevant point is that while the event |ψ t ψ t | represents the localization of the 'quality' ψ t (and the physical quantities associated with it) in the time domain, as for the spatial localization some additional remarks are necessary. The three-dimensional space enters our scheme only through the form of the Hamiltonian H. For example, posing: where ∆ is the Laplacian operator on the Euclidean three-dimensional space E 3 , the equation ih∂ t |ψ t = H|ψ t admits solutions dependent on x ∈ E 3 : The operator impulse is then defined as p = −ih ∇, and it acts on the space of the solutions ψ(x, t). Equation (12) clearly shows that the operators |x x| are 'virtual sub-events' of the event |ψ t ψ t |. This fact is generally denoted as 'spatial delocalization' of the wave-function ψ(x, t), although an 'a-spatiality' is really involved here, in the sense that the 'position' is not actually manifested except in the particular case of a quality ψ t coincident with a particular spatial position x. It seems useful to point out here that the use of a Hamiltonian operator does not indicate the motion of anything, but rather it has to be seen as a 'probability gradient', a notion that unifies different formalisms such as Bohm potential and Feynman path integrals [22]. The fact that the application f introduced with the Postulate 1 is "sensitive" to time but not to space generates the well-known phenomenon of the non-separability of entangled amplitudes, well exemplified by the amplitudes of a pair of identical particles of spin 1/2 in a state of singlet. If one of the two particles is sent to a polarizer that separates the two beams with opposite spin values, each of which is subsequently directed to a detector, it is possible to have a "click" in one of the two detectors. This means that at that moment f localizes a precise spin value (better, a value of the spin component along the measurement axis) in a spatial region corresponding to the detector volume. With this, the virtual sub-event corresponding to the specific component of the singlet actually selected by the measurement becomes a real event. The spin of the other particle is therefore localized in time at the same instant. It remains not localized in space, but it can become so if the other particle is also subjected to a similar measurement. General Remarks on Measurements Temporal localization is associated with interaction micro-events (quantum jumps) and not with measurements performed by an experimenter. The measurement procedures were therefore absent in our previous discussion. It is only now that, having defined the context of a more fundamental quantum reality whose events are temporal localizations, we can move on to the description of measurement procedures as particular physical processes within that reality. 
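The singlet example mentioned above can be made quantitative with standard two-spin numerics. The sketch below projects particle 1 onto spin-up along the measurement axis and extracts the conditional state of particle 2, which comes out as the pure spin-down projector; nothing beyond textbook quantum mechanics is assumed.

```python
# Singlet correlation: the event "particle 1 found spin-up" leaves particle 2
# in the spin-down state at the same instant.
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2.0)

P_up_1 = np.kron(np.outer(up, up), np.eye(2))   # event on particle 1 only
post = P_up_1 @ singlet
prob = post @ post                              # probability of this outcome = 1/2
post = post / np.sqrt(prob)

M = post.reshape(2, 2)                          # amplitudes c_{s1 s2}
rho2 = M.T @ M.conj()                           # reduced state of particle 2
print("probability:", prob)
print("particle-2 state:\n", rho2)              # pure |down><down| projector
```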
These procedures involve particular macroscopic entities called "measurement apparatuses"; we must therefore define, in succession, classical macroscopic bodies and measurement apparatuses in the context of the quantum reality constituted by temporal localizations. We emphasize that this is a fundamental difference with respect to the conventional formulation, which is notoriously agnostic about the existence of a quantum reality. We call 'classical macroscopic body' a cluster of events whose averaged properties evolve deterministically over time, within the limits allowed by the finiteness of the quantum of action h. Let us try to clarify formally the concept, at least sufficiently for the purposes of our argument. If the cluster were empty, that is, there were no events but only the possibility of their manifestation, such possibility would be (we assume) expressed by quantum amplitude |ψ , to which the density operator ρ = |ψ ψ| would correspond. We now postulate that |ψ is decomposable in a basis of amplitudes |ψ k , such that the actual manifestation of the events inside the cluster reduces ρ to its diagonal component in this basis. In other terms, ρ becomes a linear superposition ρ of density operators ρ k = |ψ k ψ k | : When this happens we will say that the cluster is a classic macroscopic body. In practice, (13) expresses the total decoherence of ρ in a basis selected by the dynamics of the events themselves. Of course, the quantum equation of time evolution (whose validity is here assumed as universal) applies to the operator ρ , and it can be objected that in general this evolution does not lead to the final 'state' (13). This objection, however, does not take into account that the quantum evolution of the amplitudes (or more generally of the density operator), as it is usually defined, is relative to the unperturbed situation between two successive events. It defines the probability of occurrence of the next event, but the actual manifestation of this one modifies the initial condition of the next evolution stage. Consequently, the quantum equation of time evolution must be applied starting from this new condition and this happens many times in a single unit of time. The unitarity of the time evolution is thus broken and the average result is decoherence. A classical macroscopic body is made decoherent by the same discontinuities that constitute its essence, in a basis depending on its internal dynamics. According to this view, a classical macroscopic body is an object, which is actualized in space and time precisely because it is a complex of actualizations in the temporal domain (events). It is persistent over time and endowed with a substance (its events) with attributes (the average properties of the cluster). The evolution of these attributes can admit a classical (approximate) description. On the other hand, the single event and the connection between two successive events represent phenomena that cannot be classically described and therefore are, in this sense, 'entirely quantum'. Let us consider now the elements |φ k of a second Hilbert space H (the "particle"); we assume these elements are the eigenvectors of a physical quantity o . If the manifestation of the event |φ k φ k | at the instant t ev implies with certainty the transition λ k → δ kk for any value of k, k and t ev , this transition is called a 'measurement' of o, with 'result' |φ k φ k |. The classical macroscopic body is then called a 'measurement apparatus' of the physical quantity o. 
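A toy Monte-Carlo sketch of this mechanism: each run manifests one jump event in the basis selected by the dynamics, and the run-averaged density operator loses its off-diagonal (coherence) terms, in line with Eq. (13). The dimension and amplitudes are arbitrary.

```python
# Repeated jump events drive the averaged density operator toward its
# diagonal (decoherent) form in the basis selected by the dynamics.
import numpy as np

rng = np.random.default_rng(1)
dim, runs = 3, 20000
psi = np.array([0.5, 0.5 + 0.5j, 0.5]); psi /= np.linalg.norm(psi)
probs = np.abs(psi)**2; probs /= probs.sum()

rho_before = np.outer(psi, psi.conj())          # has off-diagonal coherences
rho_avg = np.zeros((dim, dim), dtype=complex)
for _ in range(runs):
    k = rng.choice(dim, p=probs)                # one jump: projection onto state k
    e = np.zeros(dim, dtype=complex); e[k] = 1.0
    rho_avg += np.outer(e, e.conj())
rho_avg /= runs

print("before jumps:\n", np.round(rho_before, 3))
print("jump-averaged:\n", np.round(rho_avg, 3))  # approaches diag(|psi_k|^2)
```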
Denoting with |φ = Σ k c k |φ k the probability amplitude of the events |φ k φ k |, evaluated at the instant t, the density operator associated with the complex measurement apparatus plus particle is then defined as ρ|φ φ| where ρ is defined by relation (13). Since the interaction between the particle and the apparatus, mediated by the QJ, is diagonal on the basis |ψ k |φ k , this operator evolves to: This expression equates the diagonal component (on the basis |ψ k |φ k ) of the density operator calculated in absence of quantum jumps, that is ρ |φ φ|, to which the same consideration previously made for ρ can be now applied. Measurements are made possible by selective coupling between the 'particle' operators |φ k φ k | and the (already) de-coherent ρ k states of a classical macroscopic body. Before the quantum jump, each ρ k is coupled to any |φ k φ k | and vice versa (without entanglement); the quantum jump selects a specific coupling ρ k |φ k φ k | . We deal with a two stage measurement process. The first stage involves only microscopic components of the measurement apparatus and consists of the interaction that determines the manifestation of the event |φ k φ k |. The second stage consists of amplification/registration of this event, which in turn determines the transition λ k → δ kk . While the amplification/recording phenomena can be described, with high precision, in classical language (which does not preclude their exact description in quantum terms), the first stage is entirely quantum. The manifestation of |φ k φ k |, when considered in itself without regard of the following amplification/recording, rather represents the localization of the value o k of o in the time domain. It is our opinion that these localizations correspond to what other researchers have called 'hidden measurements' [23,24]. Several non-local or contextual 'ε-machines' can be imagined. For example, in [25,26] the Born rule is exactly reproduced by the selection of a specific 'loop' associated with a specific 'transaction'. However, these aspects are beyond the scope of this paper and we leave them open. As it can be seen, with respect to standard presentation the difference lies mainly in the differentiation that is made between events (quantum jumps), which are localizations of packets of physical quantities in time domain, and measurement procedures. To clarify further the concept, let us examine two concrete cases. Let us first imagine an electron incident on a single slit screen. If the electron is absorbed by the screen, a quantum jump occurs in the Hilbert space associated with the combination 'electron + atom of the screen absorbing the electron'. In this jump, the quantum amplitudes that represented the electron in flight and the atom in a stationary state are transformed into new amplitudes describing the electron bound to the atom and this one (we say) in an excited orbital. This quantum jump takes place at a precise instant of time, corresponding to the absorption of the electron; it localizes both the electron and the atom in time domain. At the same time, the electron is localized in the three-dimensional space with an accuracy defined by the volume occupied by the final atomic orbital. If the electron is not absorbed and passes through the slit (negative interaction with the screen), it is only the quantum amplitude of the electron to undergo a quantum jump at the time of passage. 
This jump converts the quantum amplitude of the electron into a new amplitude that exhibits the phenomenon of diffraction. It localizes the electron in time (moment of passage) and at the same time in three-dimensional space; the accuracy of spatial localization is defined by the slit size. In these processes an interaction occurs (real in one case, negative in the other) with temporal localization of a packet of physical quantities. Such localization is the quantum jump. However, the localization event is not amplified (nor recorded) on a macroscopic scale and there is therefore no measurement. The second case is that of an electron which impacts a photographic plate and reduces a silver atom contained within a silver halide granule dispersed in the plate emulsion. This case is quite similar to that of the real interaction with the screen atom, with the difference, however, that the microscopic reduction event can be amplified macroscopically through the photographic process (development, fixing, washing). During this process, the state of all the atoms of the granule is at first changed (amplification); this change is successively made permanent (recording) through the fixing process. The final result is a macroscopic modification: the appearance of a darkened granule in the plate emulsion. The subsequent scanning of the plate can only detect an already existing condition: the presence of a darkened grain. The following inferences seem therefore clear: (1) the measurement process is a possible, but not necessary, concomitance of the electron temporal localization event and these two things should not be confused; (2) before the quantum jump, the electron is spatially delocalized in correspondence of the positions of a multitude of silver atoms in different grains and each grain interacts with the entire electron; (3) when the jump occurs, this delocalization is reduced and the electron is spatially localized in correspondence of a specific silver atom within a single, specific grain; (4) before its interaction with the electron, and regardless of it, each grain was actualized as a multitude of quantum jumps; the interaction simply modifies the course of this actualization. Therefore, no Schrödinger cat situation arises. This situation is modeled by Equation (14) as follows: λ * k λ k represents the fraction of silver atoms in k-th grain; c * k c k is the probability of presence of the electron in correspondence of the k-th grain. The quantum jump selects a specific state ρ k |φ k φ k | with probability λ * k λ k c * k c k . The main advantage of this proposal lies, in our view, in the most definite delimitation of the role of measuring apparatus. To illustrate the concept, consider the distribution of molecular speeds in a gas at a definite temperature. The properties of this distribution are determined by the impacts between the gas molecules and between these and the molecules of the vessel walls. These impacts are all QJs that localize molecules in space-time. However, none of these impacts, which contributes to defining the classical system 'gas', is observed by the experimenter. In fact, the amplification and recording of the single impact is lacking (the system is at the equilibrium and without memory) so that we cannot speak of the energy and impulse exchanged in these impacts as 'observables'. It is instead appropriate to talk about objective physical quantities associated with the single molecule or exchanged in the single impact. 
Thus, it is justified the well-known fact that quantum mechanics can be successfully applied to phenomena that are not actually observed, such as the formation of a single chemical bond in a bulk of molecules or the impacts of gas molecules with the walls. Experimental apparatuses represent a condition in the definition of quantum amplitudes, but they are not their cause. Thus, even the known difficulties encountered in the application of the formalism to cosmology are removed, especially in the first moments after the initial singularity, when no observer and no setup could still exist. Considerations about the "Cat Paradox" In this section we apply the formalism defined in the previous sections to the concrete case of the pointer states of a measurement apparatus interacting with a micro-entity. In particular, we will consider the paradigmatic case of the "cat paradox" [27], trying to highlight how this paradox does not occur in the present formulation. In the conventional version of the QM the "cat paradox" is represented by the following situation: where: G = nucleus in ground state, E = nucleus in excited state, L = live cat, D = dead cat. Let us see now how things are in the formulation we propose. First, the two states of the cat are decoherent because they are associated with distinct sets of actualizations (micro-events). Each of them is coupled with a distinct value of a dichotomous variable: "the QJ occurred" or "the QJ did not occur". The QJ is here the nuclear decay. Consider the nuclear amplitude: The occurrence of the QJ corresponds to the action of the projector |G G| on ψ n ; the result of this action is A|G . The non-occurrence of the QJ corresponds to the action of the operator 1 − |G G| on ψ n ; the result of this action is B|E . We have therefore the two decoherent couplings A|G |D and B|E |L that originate the total density matrix: which is the particular form assumed by Equation (14) for the specific problem. The single experimental run begins with the preparation (|E E|)(|L L|), corresponding to the fact that a QJ has not yet occurred. The nuclear amplitude evolves as a superposition of G and E until the decay occurs. When the decay occurs, the density matrix relative to the single run becomes (|G G|)(|D D|). This sudden transition is the killing of the cat. The unitary time evolution of the superposition of G and E has no reflection on the status of the cat in the single run, except when the QJ occurs. What happens is that the QJ converts the qubit ψ n to the corresponding bit "E or G". After that the macroscopic measurement apparatus couples with this bit. Thus the QJ transforms quantum information in classical information. The decoherence of the L, D states of the cat makes them refractory to the superposition, and therefore unsuitable to act as the basis states of a qubit that can be the input of a QJ. They can only read the occurrence (or non-occurrence) of the QJ. Instead, the states of the particles that contribute in a vertex of interaction to produce a QJ can undergo to a superposition and are in fact, generally, entangled. The unitary time evolution concerns the qubit before the QJ; it affects the apparatus only through the conversion of the qubit by the QJ. The time evolution of Ω is then non unitary. In the conventional version, quantum amplitudes (bra or ket) are associated with the states of a system. 
It turns out therefore incomprehensible that the same physical situation (the cat + nucleus complex) is simultaneously represented by a superposition and a mixture. Two "states" cannot in fact be superposed and inchoerent at the same time. In this version, however, the amplitudes are associated with events and not with states of any system; moreover, the nature of this association is different for the superposition and for the mixture. In the conceptual experiment of the cat, the superposition of amplitudes associated with the nucleus (the "qubit") is relative to the two outcomes G and E, conditioned by past events (the "preparation"), of an event that has not yet occurred. The conjugated superposition is relative to the results, conditioned by future events (the "detection"), of a past event already happened. The elements of the density matrix of the cat + nucleus complex (the "bit") are instead related to events actualized in the present moment and to their logical negatives: the single nucleus is "already decayed" (G) or "not-yet-decayed" (E). As can be seen, the essential point is that the cat is a set of actualizations and its qualifications L and D represent two distinct complexes of classical properties a, b, ... z defined on this set, in a way such that: Moreover: In this expressions the symbols ¬, ∨, ∧ represent respectively the negation, the inclusive disjunction (the Latin vel) and the conjunction of classical logic. We immediately see that the properties attributed to the cat do not respect the relations of quantum logic. The distributivity of ∧ with respect to ∨ means that properties a, . . . , z are relative to a level of description that is enormously coarse if compared to the fineness of the quantum of action; on this level the possible non-commutativity of physical quantities does not play any role. A property of this type could be, for example, "inside the cat the blood circulation is active" or the opposite. It is evident that, with respect to the truth of statements of this kind, the quantum delocalization is irrelevant and we are therefore in the full domain of application of classical physics. Let us thus consider the quantum amplitude: The projector |ψ ψ| describes a set of classical properties, as requested, if and only if |α| 2 = 1 or |β| 2 = 1. Otherwise, although it is a well-formed expression of quantum formalism, it will have an empty semantic set. It is the same situation that occurs in the grammar of the ordinary language with expressions like "the liquid pencil" or "the children of a sterile woman". In terms of the axiomatic discussed in this paper, |ψ ψ| can be manifested as an event if and only if |α| 2 = 1 or |β| 2 = 1: (∃t ∈ T, f : t → |ψ ψ|) =⇒ (|α| 2 = 1) ∨ (|β| 2 = 1) Accordingly, in our re-formulation of quantum mechanics, the measurement apparatus is not in a superposition of pointer states. Therefore, if the measurement apparatus and the micro-entity undergoing the measurement are closed inside a box inaccessible to an external observer, who however knows the initial state of the complex apparatus + micro-entity, this observer cannot legitimately deduce that such complex is in a superposition state. In fact, this is not the law of evolution of this complex in our version of quantum mechanics. Therefore, paradoxes such as that of "Wigner's friend" cannot arise, nor more elaborate paradoxes such as the one recently discussed by Frauchiger and Renner [28,29]. 
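For concreteness, the mixed operator Ω of the nucleus + cat complex introduced above can be written out with tensor (Kronecker) products, as in the sketch below; the amplitudes A and B are example numbers. The purity Tr Ω^2 < 1 confirms that Ω is a statistical mixture rather than the pure superposition of the conventional presentation.

```python
# The decoherent couplings A|G>|D> and B|E>|L> give a mixed operator Omega,
# not a superposition; A and B are example amplitudes.
import numpy as np

G, E = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # nuclear basis
D, L = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # cat basis
A, B = np.sqrt(0.3), np.sqrt(0.7)

proj = lambda v: np.outer(v, v)
Omega = (abs(A)**2 * np.kron(proj(G), proj(D))
         + abs(B)**2 * np.kron(proj(E), proj(L)))

print("trace :", np.trace(Omega))            # 1, as required
print("purity:", np.trace(Omega @ Omega))    # < 1: a proper statistical mixture
```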
Quantum Jumps and Bits: Localization in Time as Information In this section, in which we will take the liberty to be a little more speculative, we would like to reconsider the idea of 'event', understood as 'localization in time', from an informational perspective. It seems to us that a possible starting point in this direction is represented by the uncertainty principle. In our proposal, this fundamental principle of QM describes the intrinsic limitation of the manifestation of physical quantities on a quantum scale (let us consider, for example, the a-spatiality already mentioned in the case of the position). An expression of type σ(q)σ(p) ≥ h/4π indicates that it is not possible to reduce the product of the amounts of delocalization σ(q) of the position q and σ(p) of the momentum p below the limit value h; this value then sizes the volume of an elementary cell in the phase space. The volume occupied by a physical system in its phase space therefore contains a finite number of distinguishable states. The information associated with the manifestation of one of these states is therefore finite, though it may be enormous. Bekenstein estimated an upper bound for information I associated with a system with total energy E enclosed in a sphere of radius R in ordinary three-dimensional space [30]: The finiteness of I defines a range of classical values attributable to two non-commuting variables such as position and momentum. An examination of the system on a finer scale leads us into realms dominated by delocalization and entanglement, such as that of atomic orbitals and their transitions [31]. On the other hand, a compromise between classical properties and quantum uncertainty can arise when weak measurements are performed [32]. A particular form of the Bekenstein constraint valid for confined systems within a horizon of events (it was originally deduced for black holes in [33]) is the following: where A is the horizon area and l ≈ 10 −33 cm is the Planck length. I represents the information enclosed within the horizon and therefore 'lost' from the point of view of the world outside the horizon, for which it represents the entropy associated with the horizon. Let us consider now the transition of a physical quantity, delocalized in time, to its condition completely localized in time, the passage we identified with the notion of 'event' or QJ. Such a passage corresponds to the acquisition of information on the temporal localization of the physical quantity (or the packet of physical quantities). If we compare with a metaphor inspired by information technology, the time domain to a memory storage device (we say a hard disk), then the 'event' is the irreversible act of writing a data packet on the disk. In systemic terms, we can talk about the informational openness of QM, correlated with the conversion of primordial non-local information into a measurable form according to Shannon and Turing. Primordial information is clearly a formal cause in the Aristotelian sense, with the additional characteristic of being synchronic and therefore very different from the diachronic efficient cause which is usual in physics (dynamical causality expressed by the unitary evolution of amplitudes). This one is probably only an appearance of this deeper formal causation, as perceived from the time domain. Bohm proposed the term 'active information' [34]. 
There are very interesting models of these aspects, such as the implicate order explored with non-commutative geometry by Basil Hiley, but here we will not go into details on this [35,36]. All this suggests that the natural habitat of QM is pre-temporal [37,38]. One can ask whether there is a minimum proper time interval θ₀ between two successive temporal localizations of the same particle. One can also wonder whether the coordination of events by an observer is limited to a horizon of radius ct₀ centered on the observer itself (c is the limit speed). We can attempt to estimate t₀ from the experimental value of the cosmological constant λ, which is of the order of 10⁻⁵⁶ cm⁻². Assuming that the origin of such a constant is the presence of a de Sitter horizon, the relation λ = 4/(3c²t₀²) must hold, which provides ct₀ ≈ 10²⁸ cm. The de Sitter horizon area is then A ≈ (10²⁸ cm)², and inserting this value into (23) we obtain I ≈ 10¹²³. We can divide the portion of the contemporaneousness space of the observer, internal to the observer's de Sitter horizon, into 'cells' of volume (cθ₀)³, each corresponding to a distinguishable spatial localization. Since the radius of the horizon is ct₀, the number of such cells is N ≈ (ct₀)³/(cθ₀)³. Each cell can be in one of two states: 'on' if a localization occurs in it, 'off' otherwise. The number of possible states is clearly 2^N and the information associated with these states is log₂(2^N) = N. We assume that quantum jumps manifest the elementary components of the physical system within an elementary cell of volume h³ in their phase space, so that each state will correspond to a cell of volume h^(3N) in the phase space of the total system. The logarithm of the number of states is therefore the same information I of (23) and we have I ≈ (ct₀)³/(cθ₀)³. From this relation we obtain cθ₀ ≈ 10⁻¹³ cm, a result that is of the same order as both the classical radius of the electron and the range of the strong interactions (color confinement). The time interval θ₀ ≈ 10⁻²³ s is of the same order as the 'chronon' introduced by Caldirola in his classical theory of the electron, namely the time interval between two successive localizations of the electron in space-time [39,40]. If this reasoning is correct, the impact of the finiteness of I on the temporal localization process would consist in the simultaneous appearance of two scales, both independent of cosmic time: one cosmological (the de Sitter radius), the other at the particle level (the chronon). This suggestion could open new perspectives of unification between elementary particle physics and cosmology. We also observe that, starting from the two fundamental constants of the localization process, i.e., the intervals t₀ and θ₀, it is possible to define a maximum acceleration ct₀/θ₀². On a spatial interval cθ₀, corresponding to the typical scale of the elementary particles, this acceleration is reached if a speed variation cθ₀/(d_P/c) is implemented in a temporal interval d_P/c, where ct₀/θ₀² = cθ₀/(d_P/c)². Substituting I = [(ct₀)/(cθ₀)]³ and A = 4π(ct₀)² into Equation (23), we can see that d_P ≈ l. It is of course possible to proceed in reverse order, defining the Planck scale through the maximum acceleration, and thus obtaining (23). The foundation of (23) therefore appears to be the global-local connection manifested in the process of localization, rather than some form of "holographic principle".
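The order-of-magnitude chain described above can be reproduced with a short numerical sketch; all inputs are the rounded values quoted in the text, so the outputs should be read only at the level of powers of ten.

```python
import math

# Illustrative inputs from the text (cgs units)
lam = 1e-56          # cosmological constant, cm^-2
l_planck = 1e-33     # Planck length, cm
c = 3e10             # speed of light, cm/s
ln2 = math.log(2)

# de Sitter radius from lambda = 4/(3 c^2 t0^2), i.e. c*t0 = sqrt(4/(3*lambda))
ct0 = math.sqrt(4.0 / (3.0 * lam))
A = 4 * math.pi * ct0 ** 2                 # horizon area
I = A / (4 * l_planck ** 2 * ln2)          # horizon information, Eq. (23)

# chronon scale from I ~ (ct0 / c*theta0)^3
ctheta0 = ct0 / I ** (1.0 / 3.0)
theta0 = ctheta0 / c

print(f"ct0      ~ {ct0:.1e} cm")          # ~1e28 cm (de Sitter radius)
print(f"I        ~ {I:.1e} bits")          # ~1e123
print(f"c*theta0 ~ {ctheta0:.1e} cm")      # ~1e-13 cm (classical electron radius scale)
print(f"theta0   ~ {theta0:.1e} s")        # ~1e-23 s (Caldirola chronon scale)
```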
This suggests a purely informational interpretation of the Planck scale. The connection of this scale with the universal gravitational constant could be accidental and perhaps due to the limitations imposed on the principle of local equivalence between gravity and inertia by the existence of a maximal acceleration. We leave this topic open for subsequent research work. Quantum Computing in and Beyond Spacetime In this section we will discuss the problem of the possible existence of spatiotemporal constraints on quantum computing, raised by some authors [20,21]; our reference context will be the one described in the previous section. We will argue that such a constraint exists on the cosmological scale, but that it is in practice unattainable and, in any case, devoid of effects on actually implementable quantum computing schemes. Our reasoning can be considered an answer to Davies [20]. We will start by considering the case of a single qubit: This qubit will be by hypothesis associated with a particle. We also assume that the basis amplitudes |+⟩, |−⟩ are spatially encoded, in the sense that the attribution of one of these two amplitudes to the particle corresponds to its localization in one of two distinct spatial regions. We can consider the actual case of a particle of spin 1/2 sent to a Stern-Gerlach analyzer which separates the two components of the spin along the direction of the applied magnetic field. The click of the counter "+" downstream of the analyzer will be at the same time the measurement of the spin with result +1/2 and the spatial localization of the particle within the volume of the counter. A similar consideration will apply to the click of the "−" counter. Consider then a system of n particles, each associated with a qubit with basis amplitudes distinct from those of all the others. By encoding each of these amplitudes with a distinct spatial region, the total amplitude of the n particles will be a superposition of 2^n n-tuples of distinct spatial regions. While the quantum computation occurs at the level of the phase relations between these n-tuples, which are a-spatial and therefore not subject to any constraint of a spatial nature, the situation is a little different for the single n-tuple. It is indeed evident that every n-tuple of space regions must belong to space, and therefore be contained in it as a subset. Now, the maximum number n of qubits with spatially encodable basis amplitudes will be given by the maximum number of possible spatial positions for a particle at a given moment, within the cosmological horizon. As we have argued in the previous section, this number is I ≈ 10¹²³. In other words, the number of qubits will be subject to the Bekenstein limit n ≤ 10¹²³. Naturally this limit is satisfied by all current and future quantum computer projects. However, it is also possible to show that it is unreachable by computers made up of ordinary matter with stable nuclei and electrons. The average density of ordinary matter in the Universe (excluding dark matter and dark energy) is ≈ 5 × 10⁻³¹ g/cm³. Almost all of this mass is represented by nucleons, with an individual mass of 1.67 × 10⁻²⁴ g. This yields a nucleonic density of 3 × 10⁻⁷/cm³. About half of the nucleons are protons and, assuming that their charge is neutralized by as many electrons, an electronic density of 1.5 × 10⁻⁷/cm³ is obtained. We therefore have a particle density of 4.5 × 10⁻⁷/cm³.
Assuming a de Sitter radius of 1.4 × 10 28 cm, there are therefore in total 0.5 × 10 79 particles (nucleons and electrons). A lot less than 10 123 . Comparisons with Other Approaches No theory is born of nothingness, and every scholar is well aware of walking with others. In this section we aim to take stock of the kinships that have been a source of inspiration for us. Although the formal tools introduced in this work are the same as the current formulation of the QM, the prescriptions on their use are different, and this leads to a difference in the predictions obtainable with the two versions of the theory. Naturally, nothing new can be expected regarding the description of what in the usual jargon can be defined as "micro-systems" represented by "pure states"; therefore, the representation of the configuration in isolation of atoms, atomic nuclei and molecules-for example-will be the same. However, there will be significant differences in the treatment of interacting systems, in particular those with many particles, since they will present, in appropriate limit conditions, transition phenomena to the status of classical objects not described in the usual version. In our proposal, therefore, the inclusion of a specific ontological element (the identification of the reduction of the amplitude with the physical phenomenon of the "jump") leads to predictive differences not obtainable in the context of a simple interpretation of the theory in its current formulation. Other approaches try to obtain similar results by modifying the dynamics of the theory, a stratagem that is not used in the context of our proposal. This is the case of spontaneous localizations contemplated by the Ghirardi-Rimini-Weber (GRW) approach, both in the form of "hits" and as continuous processes obtained by adding appropriate stochastic terms to the quantum motion equation [41][42][43][44]. In particular, the version known as GRW-flash [45][46][47] may have some similarities with our proposal, so it is important to summarize also the differences. First of all, we do not hypothesize spontaneous localizations (that is, independent from ordinary interactions) that happen randomly and that remain elusive on the experimental plan. Instead, we identify the processes of reduction of the wave function with specific objective physical phenomena whose existence is today experimentally well demonstrated: the quantum jumps. These jumps are induced by the usual physical interactions (electromagnetic, weak, nuclear etc.). Our description, therefore, does not contemplate the possibility of spontaneous flashes distinguished from ordinary interactions, but has instead to do with a quantum discontinuity connected with such interactions [31]. This point is relevant with reference to the measurement theory (see the conclusion of [47]). Moreover, we see no reason to favor the positional basis and we mean the localization in a temporal, not a spatial sense. What is instantiated is the projector on the quantum amplitude and not the spatial position. We see the peculiarity of the quantum description in the fact that, according to it, the ordinary interactions modify the amplitudes according to one or the other of two causal schemes that are mutually irreducible and, at the same instant of time, mutually exclusive: the unitary evolution and the QJ. 
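A back-of-the-envelope check of the particle count just quoted, using the rounded values of the text, can be written as a short sketch:

```python
import math

rho = 5e-31            # mean density of ordinary (baryonic) matter, g/cm^3
m_nucleon = 1.67e-24   # nucleon mass, g
R = 1.4e28             # de Sitter radius used in the text, cm

n_nucleons = rho / m_nucleon           # ~3e-7 per cm^3
n_electrons = 0.5 * n_nucleons         # one electron per proton (~half the nucleons)
n_total = n_nucleons + n_electrons     # ~4.5e-7 per cm^3

V = 4.0 / 3.0 * math.pi * R ** 3
N_particles = n_total * V

print(f"particle density ~ {n_total:.1e} /cm^3")
print(f"particles within the horizon ~ {N_particles:.1e}")  # ~5e78, far below 1e123
```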
To the dynamical causation (which involves the transfer of energy between spatio-temporal regions) a formal causation is added (which involves a certain localization information entering the spatio-temporal domain). The appearance of a particle scale in the context of the basic formalism recalls the representation of "corpuscles" as localized concentrations of field energy, guided by a pilot wave; a representation typical of non-linear theories such as those proposed by the de Broglie school (see e.g., [48,49]). The difference, however, is that in the context of the present approach (which preserves the linearity of conventional formalism) we describe the temporal localization of physical quantities, rather than the spatial localization of a pre-existent globule. In other words, our idea of objectivity of the physical world mirrors an ontology of events rather than substances, according to Russell and Whitehead's classic ideas. In describing both the relata and the relations, i.e., quantum jumps and transition amplitudes respectively, we have used their full temporal symmetry offered by the QM in its usual version. We have therefore considered both the transition amplitude and its conjugate and, for the QJ, both the ket and its conjugated bra. Both these choices recall the Transactional Interpretation of the QM (TIQM) introduced by Cramer in the 1980s [50][51][52] and advocated in particular by Kastner (see eg [53] for a general presentation of her views). As far as we are concerned, we have carefully considered the TIQM since its appearance, letting us be inspired by its principles. However, we have developed these principles in a different form from the one originally proposed by Cramer [26,54]. Our personal elaboration naturally connects to the postulates presented in this article and clarifies them. However, it has led us on a path that differs from the TIQM for some significant differences, which we will now try to summarize. The basis of transactional narrative is the simultaneous emission of the "offer" and its conjugated (the "confirmation"), respectively in the future and past light cone of the emission point. The same status of reality is attributed to the offer and the confirmation and the superposition of offers and confirmations induces the exchange of quanta. The emitter is therefore also an absorber, and the whole description is traced back to specific properties of the absorbers [55]. It seems to us that this description concerns the propagation of specific fields on the temporal (or spatio-temporal) domain and is therefore internal to this domain. It involves, even in the process of double emission or absorption-emission, the only efficient diachronic causality; that is, it has to do with a dynamical causality in space-time. In fact the reference model assumed by TIQM is the Wheeler-Feynman electrodynamics [50]. The basis of our reformulation is instead the localization of physical quantities in the temporal domain. The causality involved in the localization process is formal, not efficient; it is synchronic (in that it connects the atemporal realm of quantities, that is, Hilbert space, to temporal domain), rather than diachronic (connection between instants). There is no process of emission or absorption of quantum fields in the classical sense of the term. There is no object, field or particle, which acts as emitter-absorber. 
The connection between events is timeless (the time labels the different actual localizations) and the transition amplitudes are a time-symmetrical aspect of this connection: there is no propagation of fields in time or space-time. The role of the offer-confirmation pair is assumed here by a projector, that is, an algebraic entity that represents a transformation, and which can be matched with the elements of a classical propositional calculus. To be more exact, we consider a cosmic process of manifestation that associates a temporal label to the projectors, thus carrying out a localization action to which information is associated, which is thus "entered" in the temporal domain; and in this sense we speak of a formal cause. The process we are trying to describe seems to be more fundamental than what can be captured with the ontological categories of a classical field theory, even if extended in a time-symmetric and non-local sense and applied to quantum fields. It is sufficient to consider that the potential non-separability of amplitudes appears here on a native level. In this sense, perhaps the closest suggestion is the implicate order of Bohm-Hiley [56], however, with the important clarification that its explication occurs at the level of quantum discontinuity. As we have seen, informational considerations on discontinuity seem to connect in a natural way to two important scales: that of elementary particles and that of Planck, perhaps opening up new perspectives of foundational research on a theory of inestimable success like the QM. Another research program with which our approach has clear convergences is that of decoherence [57][58][59][60][61][62]. Therefore, it seems opportune to emphasize here the similarities and differences, and we think that the best way to do this is to discuss briefly an elementary ideal case. Consider a classical macroscopic system; as we have seen, it consists of a normally very large number of elementary quantum components (for example, molecules). Let S be one of these components and E the complex consisting of the remaining components. Let us consider a specific interaction between S and the individual elements of E such that the total amplitude of S + E at time t is: |ψ = α|s 1 |e 1 + β|s 2 |e 2 (25) For simplicity, we have considered only two (orthogonal) basis amplitudes s 1 and s 2 of S and two (not necessarily orthogonal) amplitudes e 1 and e 2 of E. Now consider the trace of |ψ ψ| on the states of E; it will contain terms of interference proportional to the real part of the scalar product of the two states of E. If these ones are orthogonal, the interference disappears, whereas in the general case it will be attenuated. Because S is randomly chosen among the elementary components of the macroscopic system, and this random choice determines the components of E, we have that the possible existence of an interaction of this type between S and E, independent on this choice, leads to a quantum decoherence between the elementary components of the system. This decoherence makes the system, in a certain sense, "more classical" because it diagonalizes the density matrix of each of its components, averaged on the degrees of freedom of the others, on a basis selected by the interaction itself. Of course, coherence does not disappear, it is simply transferred from the basis elements of S to the amplitudes of S + E. This is the mechanism underlying the theory of decoherence. In our approach all this remains true. 
Now, however, at a given instant of time the total amplitude expressed by the preceding equation "collapses" into one of the basis amplitudes of S (s 1 or s 2 ) due to a quantum jump. The probability of the two results is defined by the diagonal components of the density matrix in the basis s 1 , s 2 . After the collapse, the off-diagonal coefficients reappear as a result of unitary evolution, until a subsequent quantum jump. Thus the mean density matrix will have both diagonal and off-diagonal terms. The decay of the off-diagonal terms in a "multi-hit" process (where each "hit" is a QJ) was analyzed by Simonius in a pioneering work [63]. The decoherence time is a function of the frequency of the Rabi oscillations of the free S system, of the degree of orthogonality between the amplitudes of E and of the (normally Poissonian) temporal distribution of "hits", in turn dependent on state variables such as temperature and pressure. The mean effect of a succession of quantum jumps on the evolution of the density matrix therefore seems to be what the theory of decoherence describes. From our point of view, however, the fundamental process is a piecewise unitary evolution, whose intervals are joined by genuinely non-unitary discontinuities. Conclusions As is well known, the absence of trajectories in quantum formalism and the instantaneous cancellation of the wave-function when the particle impacts on an absorber make any interpretation of Quantum Mechanics (QM), based on a realistic classical approach, extremely difficult. This difficulty has generated the two still dominant trends in the debate about the nature of QM. On the one hand, the idea of the incompleteness of the theory goes back to Einstein and arrives to current proposals like that of 't Hooft. On the other hand, the 'pragmatic' approach of Bohr is centered on what the observer can say of the world through measurement procedures (for a good recent review, see [1]). It should be said that none of the two trends provides universally shared answers for the weirdest aspects of QM. We recall that the 'realistic' and 'pragmatic' trends have been established long before the experimental confirmation of non-local aspects. If, for the Copenhagen view, non-locality is an 'unexpected host', for realistic theories it is difficult to reconcile QM and relativity. Both of the interpretative lines have tried to retain the image of persistent micro-physical object bearers of persistent properties. Realists used 'globules' driven by a medium, while pragmatists saw quantum amplitudes as descriptors of the 'state' of a 'system'. Our re-reading of QM basic postulates identifies the 'quantum jump' with the notion of 'physical event', i.e., the temporal localization of a set of physical quantities, without any necessary relation with the measurement processes and apparatus. The causal connections between events is assured by the unitary time evolution of amplitudes, corresponding to a condition of temporal de-localization of those same quantities. This connection (if any) is probabilistic, and is described by the Born rule. No 'micro-object' is assumed in the time interval between two subsequent events. The quantum amplitudes associated with the preparation or post-selection are therefore not 'states' of any 'system'; in particular, a linear superposition of these amplitudes has not to be intended as a superposition of 'states'. 
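The following numpy sketch is a toy illustration of the two points above, not the Simonius calculation itself: the first part traces Eq. (25) over E and shows the interference term attenuated in proportion to the overlap ⟨e1|e2⟩; the second part assumes a simplified dephasing free evolution interrupted by Poissonian 'hits' onto the s1/s2 basis and shows the off-diagonal term of the ensemble-averaged density matrix decaying as the hit rate grows. The coefficients, overlap and rates are placeholders of ours.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Part 1: decoherence by entanglement with E, Eq. (25) --------------------
s1, s2 = np.array([1, 0], complex), np.array([0, 1], complex)
e1 = np.array([1, 0], complex)
overlap = 0.3                                   # <e1|e2> = 0.3 (not orthogonal)
e2 = np.array([overlap, np.sqrt(1 - overlap ** 2)], complex)

alpha = beta = 1 / np.sqrt(2)
psi = alpha * np.kron(s1, e1) + beta * np.kron(s2, e2)

rho_SE = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
rho_S = np.einsum('ikjk->ij', rho_SE)           # partial trace over E
print("reduced rho_S:\n", rho_S.round(3))       # off-diagonal = alpha*conj(beta)*<e2|e1>

# --- Part 2: decay of the ensemble-averaged coherence under random 'hits' ----
def mean_coherence(hit_rate, t=2.0, delta=1.0, dt=0.02, n_traj=2000):
    """Toy model: the free evolution only adds a relative phase between s1 and s2;
    Poissonian 'hits' (rate hit_rate) project the state onto the {s1, s2} basis.
    Returns |rho_01| of the ensemble-averaged density matrix at time t."""
    phase = np.array([np.exp(-1j * delta * dt / 2), np.exp(+1j * delta * dt / 2)])
    coh = 0j
    for _ in range(n_traj):
        state = (s1 + s2) / np.sqrt(2)
        for _ in range(int(t / dt)):
            state = state * phase                          # unitary (dephasing) stretch
            if rng.random() < hit_rate * dt:               # quantum jump ("hit")
                state = s1.copy() if rng.random() < abs(state[0]) ** 2 else s2.copy()
        coh += state[0] * np.conj(state[1])
    return abs(coh) / n_traj

for rate in (0.5, 2.0, 8.0):
    print(f"hit rate {rate}: |rho_01(t=2)| ~ {mean_coherence(rate):.3f} (0.5 with no hits)")
```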
Temporal localizations of physical quantities thus assume the role of primary elements of physical reality, with the result that this one is made up of events, and no longer objects. The Hilbert space thus becomes the basic mathematical structure that allows the definition of events as projection operators on the one hand, and the specification of the conditional probability of causally related events on the other. This approach eliminates the fundamental role of measurement-based notions (such as that of 'observable') while retaining the potentiality of the conventional formalism. Detaching the meaning of quantum mechanical formalism from the narrow situation represented by the usual experimental setup with the stages of preparation, propagation and observation broadly extends the scope of application of formalism itself. The widespread use of formalism that has been made over the decades to describe the structure of matter (particles, nuclei, atoms, molecules, condensed states) is in this way justified. The quantum jump is a non-unitary operation that converts quantum information (encoded in forward or backward amplitude) into classical information; an operation which is represented by an appropriate projector of a Hilbert space. The QJ consists in the temporal localization of the physical quantities summarized in this projector. "Matter" is understood here as the complex of these localizations and not as some kind of support for localized quantities. Each QJ is an elementary interaction in the course of which a quantum of action is exchanged and therefore materializes a single elementary cell (bit) in the phase space of a physical system. An analysis of the cosmological limits to positional information associated with these localizations was carried out in Section 5. It is important not to confuse this information with that processed by a quantum computer, as the Universe or a computer that simulates it. The cosmological limits induced on this second information are analyzed in Section 6 and the results are congruent with the well known calculation of Lloyd [64]. Differently from Lloyd's assumptions, the concept of "event", which is here distinct from that of "observation", does not imply in itself any irreversibility. As a result, most of the QJs are not registered. Quantum information is processed between two successive QJs, causally connected through a transition amplitude. This processing does not "take place" in the usual sense of the term because the quantum superposition, considered in the time interval between the two QJs, is a pure mathematical construct (Section 2) that evolves only in the sense of its parametric dependence on the instant of the second QJ (forward amplitude) or first QJ (backward amplitude). Not surprisingly, the attempt to analyze this evolution in spatio-temporal terms leads to virtual trajectories joining virtual sub-events, for example the two paths in the double slit experience. These paths are naturally interfering and it has been argued that this is the origin of the hyper-Turing nature of quantum computing [65]. Taking this suggestion, an important field that opens to further investigations is that of the clarification of the relationships between the hyper-computational aspects of quantum computing and the non-local structure of elementary interactions modeled by quantum theories. The structure seems to indicate an emergent aspect of the spatio-temporal ordering of events.
Insights about the structure of farnesyl diphosphate synthase (FPPS) and the activity of bisphosphonates on the proliferation and ultrastructure of Leishmania and Giardia Background The enzyme farnesyl diphosphate synthase (FPPS) is positioned in the intersection of different sterol biosynthesis pathways such as those producing isoprenoids, dolichols and ergosterol. FPPS is ubiquitous in eukaryotes and is inhibited by nitrogen-containing bisphosphonates (N-BP). N-BP activity and the mechanisms of cell death as well as damage to the ultrastructure due to N-BP has not yet been investigated in Leishmania infantum and Giardia. Thus, we evaluated the effect of N-BP on cell viability and ultrastructure and then performed structural modelling and phylogenetic analysis on the FPPS enzymes of Leishmania and Giardia. Methods We performed multiple sequence alignment with MAFFT, phylogenetic analysis with MEGA7, and 3D structural modelling for FPPS with Modeller 9.18 and on I-Tasser server. We performed concentration curves with N-BP in Leishmania promastigotes and Giardia trophozoites to estimate the IC50via the MTS/PMS viability method. The ultrastructure was evaluated by transmission electron microscopy, and the mechanism of cell death by flow cytometry. Results The nitrogen-containing bisphosphonate risedronate had stronger anti-proliferative activity in Leishmania compared to other N-BPs with an IC50 of 13.8 µM, followed by ibandronate and alendronate with IC50 values of 85.1 µM and 112.2 µM, respectively. The effect of N-BPs was much lower on trophozoites of Giardia than Leishmania (IC50 of 311 µM for risedronate). Giardia treated with N-BP displayed concentric membranes around the nucleus and nuclear pyknosis. Leishmania had mitochondrial swelling, myelin figures, double membranes, and plasma membrane blebbing. The same population labelled with annexin-V and 7-AAD had a loss of membrane potential (TMRE), indicative of apoptosis. Multiple sequence alignments and structural alignments of FPPS proteins showed that Giardia and Leishmania FPPS display low amino acid identity but possess the conserved aspartate-rich motifs. Conclusions Giardia and Leishmania FPPS enzymes are phylogenetically distant but display conserved protein signatures. The N-BPs effect on FPPS was more pronounced in Leishmania than Giardia. This might be due to general differences in metabolism and differences in the FPPS catalytic site. Background Farnesyl diphosphate synthase (FPPS) is a key enzyme in sterol metabolism. It is positioned at the intersection of different pathways, including those involved in the biosynthesis of isoprenoids, dolichols, ubiquinones and ergosterol/cholesterol. Giardia and other early diverging eukaryotes do not synthesize ergosterol or cholesterol de novo in contrast to Leishmania and trypanosomatids that synthesize ergosterol instead of cholesterol, which is produced by humans and other mammals. The pathway for ergosterol biosynthesis includes enzymes that differ from cholesterol biosynthesis, making the ergosterol biosynthesis pathway a potential target for chemotherapy [1,2]. Other pathways and enzymes of sterol metabolism include isoprenoid/prenylation and the dolichol biosynthesis. These pathways are ubiquitous in eukaryotes but have not received much attention. 
Genomic analysis has facilitated prediction of several metabolic pathways among eukaryotic organisms [3] and these predicted pathways enable comparisons to be made between sterol metabolism in early branching protozoans such as Giardia and Leishmania. Leishmaniasis is a complex of diseases. There are more than 20 Leishmania species that cause different diseases, i.e. visceral leishmaniasis (VL), cutaneous leishmaniasis (CL) and mucocutaneous leishmaniasis (MCL). Leishmaniasis occurs in 102 countries, and CL is the most common and widespread [4]. More than 70% of the CL cases occur in 10 countries: Afghanistan, Algeria, Brazil, Colombia, Costa Rica, Ethiopia, the Islamic Republic of Iran, Peru, Sudan and the Syrian Arab Republic [4]. Around 90% of the global VL cases are reported in only six countries: Bangladesh, Brazil, Ethiopia, India, South Sudan and Sudan. In the Americas, Leishmania infantum is the etiological agent of VL [5], which is lethal if not treated. Brazil has a high burden of CL and VL with an incidence rate of 1.46 and 0.41 cases per 10,000 inhabitants, respectively [4,6], CL cases are widespread throughout the Brazilian national territory and VL cases are reported in 21 states [7]. Leishmaniasis has been spread to previously non-endemic areas including urban centers. Indeed, nearly 1600 Brazilian cities have autochthonous transmission [7]. Giardia is the causative agent of giardiasis. It is a major cause of diarrhea in humans and an important public health problem [8,9]. Giardia duodenalis (syn. G. intestinalis and G. lamblia) is divided into eight genetic assemblages (A-H) [10,11] and possesses two morphological forms: trophozoites that infect the duodenum; and cysts that facilitate disease transmission by contaminating soil, food, and water following excretion in the feces. Giardia duodenalis assemblages A and B are responsible for human giardiasis and these types are globally distributed [9,10,12]. Giardia sterol metabolism is restricted to a few metabolic pathways [13] including the isoprenoid, the dolichol, and the ubiquinone or coenzyme Q (CoQ) pathways. CoQ is a component of the electron transport chain in aerobic organisms such as Leishmania, but is detected at much lower levels in Giardia, which has a poorly developed endomembrane system and lacks organelles including the Golgi and mitochondria [14,15]. In contrast to Giardia, Leishmania has a complex lifecycle and sterol metabolism. It has adapted to a life-cycle that alternates between the promastigote (the infective form found inside the phlebotomine vector) and the amastigote form that resides inside the macrophages of the mammalian host. Leishmania has a sophisticated endo-membrane system, evolved mitochondria, and possesses the main enzymes and pathways of sterol metabolism. The enzyme profile of sterol metabolism and the presence of sterol-metabolizing gene sequences in the genome of Giardia and Leishmania suggest that the five carbon isoprene units, isopentenyl diphosphate (IPP) and its isomer dimethylallyl diphosphate (DMAPP), are synthesized via the mevalonate pathway (MEV) [3]. The IPP and DMAPP metabolites are substrates of farnesyl diphosphate synthase (FPPS) and lead to production of 15 carbon farnesyl diphosphate (FPP). FPP is a key intermediate of sterol metabolism with a role in the post-translational modification of proteins via farnesyl transferase as well as in protein prenylation of the Ras superfamily of small GTP-binding proteins. 
FPP is also the precursor of several biomolecules with distinct biological functions, including the polyisoprenoids composed of 11 to 23 isoprene units known as dolichols [16]. Dolichols are carriers of N-glycan and glycosylphosphatidylinositol (GPI). They are inserted in the internal membrane of the endoplasmic reticulum (ER) and have a role in post-translational modification of proteins. Leishmania and Giardia produce dolichols with 11 to 12 isoprene units [17,18]. Giardia lost the capacity to synthesize ergosterol and cholesterol de novo during evolution, but it does possess the enzymes of the MEV pathway, including FPPS. Comparative analyses based on profiling of sterol biosynthetic enzymes of 46 eukaryotic proteomes showed that farnesyl/geranyl diphosphate synthase (FPPS and GPPS) and the farnesyl transferase complex are ubiquitous in all organisms studied, including Giardia. This indicates that isoprenoid production is indispensable for all eukaryotes [3]. Giardia FPPS displays the conserved motifs and protein signatures found in FPPS of other organisms but has low identity with FPPS of humans and Leishmania, as evaluated previously by multiple sequence alignment and phylogenetic analyses [19]. In L. major, the FPPS structure was elucidated via crystallography [20]. Functional characterization of the recombinant FPPS has demonstrated that the enzyme is strongly inhibited by nitrogen-containing bisphosphonates (N-BP) such as risedronate [21,22]. N-BPs have been the frontline treatment for bone disorders including osteoporosis, tumor-associated bone disease, and Paget's disease [23]. N-BPs lead to depletion of the FPP and GGPP isoprenoids required for prenylation of small GTPase proteins. The failure of protein prenylation due to N-BP is one of the main mechanisms behind decreased bone resorption by osteoclasts [24]. Bisphosphonates have also been shown to be active against some protozoans [19,25] but have not been tested on L. infantum and G. duodenalis. Furthermore, the mechanism of death and the effect on mitochondrial function and ultrastructure due to N-BP treatment have not been rigorously explored in parasitic protozoans. We performed molecular modelling of FPPS sequences from L. infantum and from the distantly related FPPS enzyme of Giardia. Phylogenetic analysis of Leishmania FPPS and of different isolates of Giardia was also performed. We tested the effect of N-BPs on the protozoans L. infantum and Giardia to evaluate their effects on protozoan proliferation, viability and ultrastructure. Our results suggest that the isoprenoid pathway may represent an interesting target for evaluating mechanisms of cell death and a target for anti-parasitic drugs. Multiple sequence alignment and phylogenetic reconstruction A Basic Local Alignment Search Tool (BLASTp) search was performed with FPPS sequences of proteins experimentally characterized and deposited in the Protein Data Bank (PDB) and UniProtKB/Swiss-Prot (Table 1). The sequences were downloaded in FASTA format to perform multiple alignment and phylogenetic analysis. Multiple sequence alignment was performed using MAFFT v7 (EMBL-EBI search and sequence analysis tool; https://www.ebi.ac.uk/Tools/msa/mafft/) applying the BLOSUM62 matrix, a 1.53 gap open penalty and default parameter settings [27]. Phylogenetic analysis of FPPS sequences was performed in MEGA 7 [28,29]. The evolutionary history was inferred by the Maximum Likelihood method based on the JTT matrix-based model [30].
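As a rough illustration of this alignment-plus-tree step, the sketch below uses Biopython; it assumes a local MAFFT installation and a hypothetical input file fpps_sequences.fasta, and for brevity it builds a neighbour-joining distance tree rather than the MEGA7 maximum-likelihood/JTT workflow with bootstrap replicates that was actually used.

```python
from io import StringIO

from Bio import AlignIO, Phylo
from Bio.Align.Applications import MafftCommandline
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Align the FPPS protein sequences with a local MAFFT installation
mafft_cline = MafftCommandline(input="fpps_sequences.fasta")  # hypothetical input file
stdout, stderr = mafft_cline()
alignment = AlignIO.read(StringIO(stdout), "fasta")

# Distance matrix from the alignment (BLOSUM62 scoring) and a neighbour-joining tree
calculator = DistanceCalculator("blosum62")
constructor = DistanceTreeConstructor(calculator, "nj")
tree = constructor.build_tree(alignment)

Phylo.draw_ascii(tree)                              # quick text rendering of the topology
Phylo.write(tree, "fpps_nj_tree.nwk", "newick")     # save for downstream viewers
```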
The bootstrap consensus tree inferred from 1000 replicates was used to represent the evolutionary history of the analyzed taxa [31]. Initial tree(s) for the heuristic search were obtained automatically by applying the Neighbor-Joining and BioNJ algorithms to a matrix of pairwise distances estimated using a JTT model. The topology with the superior log likelihood value was then selected. Theoretical modeling of FPPS The amino acid sequences of Leishmania infantum and Giardia intestinalis (Assemblage A isolate WB; gene: GL50803_6633) related to farnesyl pyrophosphate synthase were used to construct the 3D theoretical structure models of this enzyme. These sequences were subjected to BLASTp searches (https://blast.ncbi.nlm.nih.gov/Blast.cgi?PAGE=Proteins) to identify potential template structures from the PDB for prediction of the Giardia and Leishmania FPPS structures. Regarding the FPPS target sequence for L. infantum, 50 models were generated with the standard automodel routine and optimized via the variable target function method (VTFM) until 300 iterations were achieved, using Modeller version 9.18 [32] and PDB 4JZB [20] as the template. The model with the lowest discrete optimized protein energy (DOPE) value was selected. The system's energy was minimized using Chimera software by applying the default parameters (http://www.rbvi.ucsf.edu/chimera). Additionally, the sequence of Giardia WB FPPS was submitted to the I-TASSER server (https://zhanglab.ccmb.med.umich.edu/I-TASSER/) to obtain a model via threading prediction. Both models were submitted to the SAVES server (http://servicesn.mbi.ucla.edu/SAVES/) to be evaluated by validation programs (Ramachandran plot, ERRAT and Verify3D). The three-dimensional structures were generated using PyMOL [33]. All structures have electrostatic surface maps created using PyMOL with the APBS plugin (default parameters), computed assuming a pH of 7.0. Promastigotes were incubated with each compound for 72 h at 26 °C. Tetrazolium salt-based viability assay Parasite cultures of Leishmania promastigotes and G. duodenalis trophozoites were centrifuged at 2000×g for 10 min at 4 °C. Before centrifugation, G. duodenalis trophozoites were placed on ice for 10 min and then shaken to detach the parasites. After centrifugation, the medium was removed and the pellet was resuspended in the same volume of saline buffer composed of 21 mM HEPES, 0.7 mM Na₂HPO₄, 137 mM NaCl, 5 mM KCl at pH 7.4, supplemented with 6 mM glucose (SBG). For each condition, 100 µl of each parasite cell suspension was transferred in triplicate to a 96-well plate. To perform the negative control, 100 µl of each parasite suspension was transferred in duplicate and fixed with 0.4% paraformaldehyde. Afterwards, 20 µl of MTS/PMS mixture was added to each well containing 100 µl of protozoa in SBG buffer [34]. To produce the MTS/PMS mixture, 50 µl of PMS (Sigma-Aldrich; P9625) stock solution was added to 1 ml of MTS stock solution (CellTiter 96 AQueous MTS Reagent Powder, G1112; Promega, São Paulo, Brazil). Electron microscopy To evaluate the ultrastructure of protozoans treated with N-BPs, Leishmania promastigotes and G. duodenalis trophozoites were incubated and fixed with 2.5% glutaraldehyde, 2.0% paraformaldehyde in cacodylate buffer, and post-fixed with 1% osmium tetroxide and 0.8% potassium ferrocyanide in 0.1 M cacodylate buffer (pH 7.4) for 1 h at room temperature. The samples were then washed, dehydrated in acetone, and embedded in Epon.
Thin sections were stained with uranyl acetate and lead citrate and observed via transmission electron microscopy (Tecnai™ Spirit TEM; FEI Company, São Paulo, Brazil). Flow cytometry analysis of L. infantum promastigotes treated with alendronate Programmed cell death was evaluated in L. infantum promastigotes treated for 24, 48 and 72 h with 100 µM alendronate and 10 µM miltefosine (M5571; Merck, São Paulo, Brazil) as a control. After treatment, parasites were analyzed by flow cytometry (BD Accuri™ C6; BD Biosciences, São Paulo, Brazil) using the BD Accuri C6 software. Early and late apoptotic processes were distinguished using the vital dye 7-amino-actinomycin (7-AAD; BD Pharmingen, São Paulo, Brazil) as well as Annexin-V-FITC, which binds to the exposed phospholipid phosphatidylserine (PS) in membranes (FITC Annexin V Apoptosis Detection Kit I; BD Pharmingen). Briefly, 1 ml of L. infantum culture with approximately 2-8 × 10⁶/ml promastigotes was centrifuged at 2000×g for 10 min, and the pellet was suspended in 100 µl of binding buffer according to the manufacturer's suggestions. Then 5 µl of annexin-V-FITC and/or 5 µl of 7-AAD (10 mg/ml) was added and incubated for 15 min. We then added 400 µl of binding buffer to a final volume of 500 µl. The samples were analyzed with a flow cytometer (BD Accuri C6, BD Biosciences), and 20,000 events were acquired. Controls were performed on promastigotes whose membranes had been permeabilized with 0.5% Triton X-100 for 15 min, followed by incubation with 7-AAD (FL3) and/or Annexin-V-FITC (FL1). The forward and side scatter plots (FSC-H × SSC-H) were used to evaluate the promastigote population with respect to cellular volume and shape. The L. infantum promastigotes were incubated with 100 nM tetramethylrhodamine ethyl ester (TMRE, BD Pharmingen) to investigate the mitochondrial membrane potential after treatment with bisphosphonates. A stock solution of TMRE (1 mM) was prepared in DMSO and stored at −20 °C. Thereafter, 1 ml of promastigotes incubated with bisphosphonates or miltefosine for 48 or 72 h, approximately 5-8 × 10⁶/ml, was centrifuged at 2000×g for 10 min. The pellets were suspended in 1 ml of saline buffer (137 mM NaCl, 5 mM KCl, 0.7 mM Na₂HPO₄, 6 mM glucose, and 21 mM HEPES, pH 7.3) and 0.1 µl of TMRE stock solution (FL2) was added and incubated for 15 min at room temperature. The promastigotes were then centrifuged, washed, and evaluated by flow cytometry (BD Accuri C6, BD Biosciences). Despite the low identity of Giardia FPPS with Leishmania and other eukaryotes, Giardia FPPS has conserved aspartate-rich motifs that are characteristic of FPPS enzymes. The first aspartate (D)-rich motif (FARM) is composed of DDXXD and is found in eukaryotic organisms (Fig. 1a, b). The second aspartate-rich motif (SARM) has the DDXXD sequence (Fig. 1c, d). Crystallographic studies have shown that these motifs face each other and create a binding pocket. Both motifs are also involved in the catalytic site of the FPPS enzyme of several organisms, even in E. coli. The FARM sequence motif is conserved from Homo sapiens to bacteria (Fig. 1a) and displays two conserved arginine (R) residues and one lysine (K) downstream. These sequences are involved in binding of the substrate and coordination with Mg²⁺ ions (Fig. 1a). A comparison of the L. major, L. infantum and Giardia WB FARM motifs indicates that the aspartate residues are conserved (Fig. 1b). Phenylalanine (F) is a key residue involved in limiting the product chain length in Leishmania.
The tyrosine (Y) residue has the same role in trypanosomes (Fig. 1b). Multiple sequence alignment was performed to compare the SARM motif, which is composed of conserved aspartate residues in different organisms (Fig. 1c), and to compare the SARM motif and surrounding residues from trypanosomatids and Giardia (Fig. 1d). A sequence comparison between Leishmania and Giardia WB demonstrates conserved lysine residues (Fig. 1d). These lysines are also conserved in other organisms (Fig. 1c) and are involved in binding the substrate phosphate in coordination with Mg²⁺ [35]. Indeed, a phenylalanine (F) residue is observed upstream of the SARM motif in all sequences of the different organisms except for Giardia, which has a lysine (K) residue (Fig. 1b). Previous work identified seven conserved regions in FPPS from trypanosomatids [19]; in Giardia, the seventh region, located in the C-terminal region, has a conserved arginine residue and eight extra amino acids (Fig. 1e). Three-dimensional structure prediction and structural analyses Here, the 3D model of the L. infantum FPPS was generated via comparative modeling based on the L. major structure (PDB ID 4JZB). The sequence alignment between the L. major and L. infantum FPPS proteins had 96.95% identity with a coverage of 99%. We chose the model with the lowest DOPE score (−47931.92) among the 50 generated models. The Ramachandran plot of the selected model shows that 100% of all residues were allocated in energetically allowed regions, with 99.2% in the favored region. The overall quality factor achieved with ERRAT was 92.6554, and the Verify3D server estimated that 94.20% of the residues of L. major FPPS had an averaged 3D-1D score ≥ 0.2. These results indicate that the refined model has good quality and is reliable for further computational analysis. The best results for the Giardia WB FPPS sequence were obtained with PDB 6B02 from a similarity search against PDB sequences using BLASTp [36]. The alignment between the Giardia WB FPPS and the template sequences showed that the identity and coverage were 29.57% and 81%, respectively. Structural prediction by threading methods was applied due to this low sequence identity with any other organism and the lack of experimentally elucidated three-dimensional structures. The Ramachandran plot of the resulting model had 93.8% of all residues allocated in energetically allowed regions, with 76.2% in the favored region. The ERRAT analysis showed an overall quality factor higher than 81.7. Verify3D showed that 65.36% of the amino acids scored ≥ 0.2 in the 3D/1D profile. Thus, the Giardia FPPS theoretical model has sufficient quality to perform in silico analysis. The structures are conserved when analyzing the alignment of the FPPS structures from organisms belonging to the Leishmania genus (Fig. 2a); the RMSD is 0.221 Å. The conserved residues found in the FARM (DDIMD) and SARM (DDVMD) motifs for L. major and L. infantum are represented in Fig. 2b and c, respectively. However, the structural alignment between the L. major crystallographic structure and the G. duodenalis theoretical model displayed a lack of conserved residues in the regions outside the FARM and SARM motifs (Fig. 2d-f). The aspartic acid residues present in the FARM and SARM motifs (DDXXD) are part of the catalytic cavity and have an important role in protein function. Meanwhile, the residues represented by XX are the same for L. major and L. infantum in FARM (Fig. 2b) and SARM (Fig. 2c) but are different for the G. duodenalis FARM (Fig.
2e) and SARM (Fig. 2f) motifs. Importantly, even if there is a difference between the electrostatic potentials on the surfaces of the three-dimensional structures of the FPPS enzymes, the site of the FARM and SARM motifs is electrostatically negative (Fig. 2g), with the surface electrostatic potential mapped with PyMOL/APBS from −1 kT/e (red, negative) to +1 kT/e (blue, positive). Effect of N-BPs on Leishmania and Giardia The expression of sterol biosynthetic enzymes is upregulated in the insect stages of Trypanosoma brucei and Trypanosoma cruzi (procyclic and epimastigote forms) [3]; thus, we evaluated N-BP inhibition on the promastigotes of L. infantum. Giardia has four enzymes of the MVA pathway, acetoacetyl-CoA thiolase, HMG-CoA synthase, HMG-CoA reductase and mevalonate kinase, expressed in the trophozoite stage [37]. We performed at least three assays with each bisphosphonate in the range of 5-500 μM on promastigotes of L. infantum for 72 h of incubation. We also evaluated the cytotoxic effect of bisphosphonates on G. duodenalis trophozoites and created concentration curves from 10 μM to 1 mM for 48 h of incubation. Risedronate, ibandronate and alendronate displayed greater antiproliferative activity on promastigotes of L. infantum than on trophozoites of G. duodenalis (Table 2). Only risedronate and ibandronate displayed antiproliferative activity in G. duodenalis, as evaluated by the IC50 (Table 2). Effect of N-BPs on the ultrastructure of Leishmania and Giardia Leishmania promastigotes treated with risedronate, ibandronate and alendronate displayed the same ultrastructural alterations. To correlate the ultrastructural alterations caused by N-BPs with mechanisms of cell death in Leishmania, we evaluated L. infantum promastigotes treated with 100 µM alendronate or 20 µM risedronate. Promastigotes accumulated small vesicles in the Golgi region near the kinetoplast (Fig. 3d, e) and showed mitochondrial swelling (Fig. 3e, k), altered cell division (Fig. 3h), formation of intracellular vesicles and lamellae (Fig. 3f-j), blebbing of the plasma membrane (Fig. 3g), and nuclear pyknosis and chromatin condensation (Fig. 3f, j). There was also an invagination of the plasma membrane in the flagellar pocket region without membrane rupture (Fig. 3i, j), as well as concentric membranes in regions of the mitochondria and myelin figures (Fig. 3k, l). Membrane integrity can distinguish apoptosis from necrosis. Promastigotes treated with alendronate had preserved plasma and nuclear membranes, as evaluated by electron microscopy. Giardia duodenalis treated with 300 µM risedronate or ibandronate for 48 h had a high frequency of concentric membranes near the nuclei, as well as nuclear pyknosis and membrane layers and lamella formation on the nucleus (Fig. 4d, g, h). There was membrane detachment and formation of intracellular lamellae in the cytoplasm (Fig. 4d-f). Small vesicles were distributed in the cytoplasm (Fig. 4e). Intense myelin figures suggested nuclear engulfment (Fig. 4i), which can be caused by membrane accumulation from the endoplasmic reticulum. These ultrastructural alterations are due to disorganization of the endomembrane system.
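For readers interested in how IC50 values of this kind are typically extracted from such concentration curves, the following scipy sketch fits a four-parameter logistic to a viability series; the data points are invented for illustration and are not taken from Table 2.

```python
import numpy as np
from scipy.optimize import curve_fit

def dose_response(conc, top, bottom, ic50, hill):
    """Four-parameter logistic: viability as a function of drug concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical viability data (% of untreated control) for a concentration series (uM)
conc = np.array([5, 10, 25, 50, 100, 250, 500], float)
viability = np.array([98, 92, 70, 45, 28, 12, 6], float)

p0 = [100.0, 0.0, 50.0, 1.0]          # initial guesses: top, bottom, IC50, Hill slope
params, _ = curve_fit(dose_response, conc, viability, p0=p0, maxfev=10000)
top, bottom, ic50, hill = params
print(f"estimated IC50 ~ {ic50:.1f} uM (Hill slope {hill:.2f})")
```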
Evaluation of programmed cell death in Leishmania treated with N-BP We evaluated apoptosis and necrosis caused by N-BPs with three probes: (i) Annexin-V-FITC that labels exposed phosphatidylserine; (ii) 7-amino-actinomycin D (7-AAD) to evaluate plasma membrane integrity; and (iii) TMRE to estimate the loss of mitochondrial potential. A negative control comprised of viable promastigotes without N-BPs treatment was not stained with Annexin-V-FITC and/or 7-AAD. Control promastigotes after 72 h of cultivation did not undergo the process of apoptosis or necrosis (Fig. 5a). The L. infantum promastigotes treated with 10 µM miltefosine for 48 or 72 h had two populations of promastigotes that increased with time as seen in the SSC × FSC plot (Fig. 5b). After 72 h of incubation with miltefosine, population 1 displayed co-staining with annexin-V and 7-AAD or just 7-AAD staining, and population 2 was not stained or displayed only annexin-V staining. Incubation of L. infantum promastigotes with 10 µM miltefosine induced co-staining in 12% and 31% after 48 and 72 h, respectively. Staining with only annexin-V was reported in 9.2% and 8.9 % and staining with 7-AAD was observed in 1.7% and 21% of promastigotes. These results indicate that miltefosine caused apoptosis and necrosis. Promastigotes treated with 100 µM alendronate for 24, 48 and 72 h (Fig. 5c) displayed Annexin-V-FITC staining in 22.8%, 36% and 34% of promastigotes, respectively. Co-staining with annexin-V-FITC and 7-AAD was observed in 30%, 24% and 25% of the population. There was low staining with 7-AAD; 0.8% to 1.3% after 24 to 72 h. The high percentage of single labelling with annexin-V indicates exposure of PS and early apoptosis. The co-staining and the nearly absent labelling of promastigotes with 7-AAD alone reflects apoptosis in an advanced stage in L. infantum promastigotes. The dispersion plot (SSC × FSC) shows that promastigotes treated with alendronate have two populations. The cells with volume preserved (over 60% of the total) were composed of unlabeled promastigotes or those only labelled with annexin-V (Fig. 5c). The second population was comprised mainly of promastigotes co-stained with annexin-V and 7AAD and represents damaged promastigotes. We included controls comprising Annexin-V-FITCand 7-AAD-labelled promastigotes of Leishmania permeabilized with Triton X-100. The forward and side scatter plot (Fig. 5d) demonstrated a loss of membrane integrity and cellular volume. As expected, co-staining with annexin-V-FITC and 7-AAD was observed in 66-74% of promastigotes (Fig. 5d). Evaluation of mitochondria membrane potential damage in Leishmania treated with N-BP We also evaluated the mitochondrial potential with tetramethylrhodamine ethylesterpercholate (TMRE) in the same culture of L. infantum promastigotes treated for 72 h with N-BPs or miltefosine. TMRE is a cationic lipophilic dye that accumulates in the active mitochondrial of viable protozoans; the fluorescent intensity is a direct measure of its accumulation and cellular metabolism. We evaluated L. infantum promastigotes untreated with drugs and unlabeled with TMRE, as control, by means of a light scattering plot (SSC × FSC) (Fig. 6a) and FL2 (Fig. 6b). The evaluation of mitochondrial function with TMRE (FL2) displayed that 93.6% of the promastigotes incorporated TMRE, represented by M2. The M2 population displayed two contiguous peaks: one had higher TMRE incorporation and was represented by 25.5% of the promastigotes; and the second one is composed of 68.5% (Fig. 6c). 
This second group is likely promastigotes in stationary phase, they had reduced division and mitochondrial activity. Similarly, two populations of promastigotes from L. donovani were observed with TMRE and increased after 6 and 7 days of cultivation [38]. After incubation with miltefosine and alendronate, another population with low or absent TMRE incorporation appeared and was represented by M1. This population increased with miltefosine concentration from 5 to 10 µM (not shown). The dispersion plot (SSC × FSC) demonstrated two populations of promastigote (Fig 6d, g) indicative of promastigotes with reduced cellular volume and membrane damage. After incubation with TMRE, M1 represented 51.1% of promastigotes affected by 10 µM miltefosine under apoptosis or necrosis, and M2 represented promastigotes with functional mitochondria (45.9%; Fig. 6e). The overlay (Fig. 6f ) of the TMRE positive control (Fig. 6c) versus promastigotes treated with miltefosine (Fig. 6e) showed a decrease of the M2 subpopulation. Alendronate (100 µM) abolished mitochondrial membrane potential but in a lower percentage of promastigotes than miltefosine. In our assays, 100 µM alendronate could reduce mitochondrial membrane potential in approximately 28% of promastigotes as represented by M1 (Fig. 6h). The M2 population predominated in promastigotes treated with alendronate (71.3%). An overlay of the histograms shows that the positive control with TMRE and treated with alendronate for 72 h had a M1 population with slight displacement of M2 to the lower fluorescent emission indicating lower TMRE labeling (Fig 6i). As expected, the same population labelled with annexin-V and 7-AAD had a loss of mitochondrial functions. Discussion A previous report compared trypanosomatids with apicomplexans by multiple sequence alignment and predicted seven conserved domains in the FPPS enzyme [19]. We highlighted these conserved domains in the alignment of Leishmania FPPS with different Giardia assemblages, and FPPS sequences from other organisms (Fig. 1a, c). The Giardia FPPS enzyme possesses 405 amino acids, which is about 40 amino acids longer than FPPS from Leishmania and trypanosomatids: this is reflected in long internal loops and the carboxyl terminal region (Fig. 1e). The human FPPS carboxy terminal tail has a conserved domain VII and adopts a rigid configuration. The conserved Arg 351 side chain forms a bridge with the terminal Lys 353 (K). There are amino acid residues that switch on and off the tail configuration [39]. Leishmania major and T. cruzi FPPS lack this mechanism. However, Giardia has 8-mers increase in the carboxyl terminal tail with a possible similar role. Despite the differences between Leishmania and Giardia FPPS sequences, the main FPPS signatures were identified in Giardia sequences including the FARM motif. This motif allows the classification of FPPS in type I enzymes (eukaryotic origin) or type II enzymes (prokaryotic origin). Type I FARM has a DDXXD signature found in trypanosomatids, S. cerevisiae, H. sapiens and in Giardia (Fig. 1b), type II displays two extra amino acids inside the FARM sequence, i.e. DDXXXXD sequence [40,41], as demonstrated in E. coli (Fig. 1a). Previous studies demonstrated that aromatic or bulky amino acids at the fourth to fifth amino acids upstream of the FARM motif are fundamental for product specificity and length. They are another characteristic of type I enzymes. Figure 1a shows E. coli FPPS, a type II enzyme without these residues before the FARM motif. 
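As a small illustration of the signatures discussed above, the sketch below scans a protein sequence for the type I (DDXXD) and type II (DDXXXXD) aspartate-rich patterns and flags aromatic or bulky residues in the positions immediately upstream of a FARM-like match; the sequence fragment is invented and the window of upstream positions is ours, chosen only to mirror the description in the text.

```python
import re

# Invented FPPS-like fragment, for illustration only
seq = "MKLAVHFGAKDDIMDSSATRRGQPCWYRKPGVSLDAIKNDDVMDLIGDEK"

TYPE_I = re.compile(r"DD..D")        # DDXXD: eukaryotic-type FARM/SARM signature
TYPE_II = re.compile(r"DD....D")     # DDXXXXD: prokaryotic-type FARM (e.g. E. coli)
BULKY = set("FYWH")                  # aromatic/bulky residues relevant upstream of FARM

for m in TYPE_I.finditer(seq):
    start = m.start()
    upstream = seq[max(0, start - 5):start]               # 5 residues before the motif
    flags = [f"{aa}@-{5 - i}" for i, aa in enumerate(upstream) if aa in BULKY]
    print(f"DDXXD motif '{m.group()}' at position {start + 1}; "
          f"bulky upstream residues: {flags or 'none'}")

print("type II (DDXXXXD) matches:", [m.group() for m in TYPE_II.finditer(seq)])
```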
These residues are also involved in the binding of NB-P inhibitors and interactions with the phenyl ring of bisphosphonates [41]. The aspartate-rich domains have a role in substrate binding through the phosphates in coordination with Mg +2 ions. The carboxyl group of aspartate, residues 98-102 of FARM, and residues 250-254 of SARM coordinate with the phosphate atoms inside the catalytic pocket. The phosphates of N-BP also make hydrogen bonds with the amino groups in Lys 207 and Lys 264 [20,35]. Human FPPS has two phenylalanine residues (Fig. 1a) at the fifth and fourth positions upstream of the FARM motif [20]. In L. major, His 93 (H) replaces the phenylalanine at the fifth position, and there is a Phe 94 at the fourth position. In T. cruzi FPPS, both phenylalanines are replaced, the fifth one by His 93 (H), and the fourth by the aromatic amino acid Tyr 94 (Y) [20,21]. In Giardia, the fifth phenylalanine upstream the FARM motif is replaced by Ala 115 (A), but the Phe 116 (F) at the fourth position is maintained. Phenylalanine and tyrosine can limit prenyl chain elongation. Amino acid residues with a smaller side chain (alanine or glycine) can impact the product specificity and increase the space inside the pocket to accommodate longer length prenyl chains. Thus, functional characterization of Giardia FPPS, i.e. cloning, mutagenesis and kinetic analysis, can elucidate the amino acids that are involved in the affinity for the substrates, for inhibitors (NB-P), and the prenyl chain product. Previous work demonstrated that FPPS from Giardia formed a separate branch distinct from trypanosomatids (kinetoplastids) and prokaryotes [19], but phylogenetic analysis and multi alignment comparing Giardia assemblages with Leishmania FPPS proteins were not performed. Thus, we constructed a phylogenetic tree of FPPS enzymes from 17 organisms (Fig. 1f ): Giardia and Leishmania FPPS sequences were placed in distinct clades. Molecular modelling of Giardia FPPS using a threading approach was performed due to the low identity of FPPS from Giardia with FPPS from Leishmania and other organisms. The model generated from the enzymes helps to explain the conformation adopted by the Giardia FPPS including the position of the amino acid residues inside and surrounding the FARM motif and the SARM motif (Fig. 2). These are involved in the binding of the substrate and the inhibitors. Some studies have tested the activity of N-BPs in different protozoans: in vitro assays of T. cruzi and L. donovani amastigotes inside infected Vero cells, T. brucei trypomastigotes, Toxoplasma gondii tachyzoites, and Plasmodium falciparum intraerythrocytic stages. All models demonstrated that N-BPs, especially aromatic compounds such as risedronate, have significant antiprotozoal activity with an IC 50 in the nanomolar or low micromolar range [42]. The IC 50 for N-BPs for intracellular amastigotes of T. cruzi was 147 ± 31.2 µM for alendronate and 123 ± 26.4 µM for risedronate. In amastigotes of L. donovani, IC 50 for N-BPs was 82.5 ± 14.6 µM for alendronate and 2.3 ± 0.3 µM for risedronate. This demonstrates that the aromatic N-BP, risedronate, has a better activity in L. donovani amastigotes. Promastigotes of Leishmania spp. are good models to evaluate inhibition of the sterol pathway including mechanisms of cell death. They can be used to evaluate the damage and impact on the parasite ultrastructure due to N-BPs and other inhibitors. 
Despite being the form found in the insect, the dividing promastigotes are easy to cultivate, have active mitochondria, and display high expression of FPPS as evaluated by mRNA and polycistronic RNA [21]. Our study used promastigotes of L. infantum and showed that risedronate had higher activity (13.8 ± 6.0 µM) followed by ibandronate (85.1 ± 26.5 µM) and alendronate (112.2 ± 61.2 µM) ( Table 2), without toxicity to the host cells as evaluated in RAW and LLCMK2 cells (data not shown). N-BPs are strong inhibitors of the recombinant FPPS enzyme. N-BPs such as risedronate, alendronate, and pamidronate are competitive inhibitors of IPP and GPP, substrates of FPPS. The recombinant FPPS of T. cruzi displayed a higher affinity for risedronate with a K i of 0.032 µM than alendronate and pamidronate (K i of 1.04 µM and 2.02 µM, respectively) [22] as estimated by a Dixon plot. Furthermore, when the affinity for each N-BPs was evaluated by the IC 50 , human recombinant FPPS displayed higher affinity for risedronate with an IC 50 of 0.010 µM versus T. cruzi recombinant FPPS with an IC 50 of 0.037 µM for risedronate [22]. Leishmania major recombinant FPPS enzyme also showed higher activities for risedronate (IC 50 of 0.17 µM) than ibandronate (IC 50 of 0.48 µM) [21]. These differences in the IC 50 of the FPPS recombinant enzymes from different organisms can be associated with the amino acid residues found in the binding pocket surrounding the FARM and SARM motifs; these can alter the affinity for the N-BP inhibitors. Our electron microscopic data showed that N-BPs causes several alterations in intracellular membrane and organelles of Leishmania such as myelin figures, mitochondrial swelling, plasma membrane blebs and membrane disorganization (Fig. 3). Previous studies demonstrated that L. amazonensis treated with specific inhibitors of ergosterol biosynthesis display morphological alterations and cell death associated with sterol depletion [2]. We also observed Golgi disorganization with small vesicles distributed in the cytoplasm as well as invagination near the flagellar pocket (Fig 3); these are suggestive of alterations in the exocytosis and endocytosis. The inhibition of protein prenylation by bisphosphonates, inhibitors of prenyl protein transferases, or inhibitors of mevalonate or isopentenyl pyrophosphate synthesis (lovastatin, mevastatin and phenylacetate) can profoundly affect cell morphology, cell replication, intracellular signal transduction, and lead to cell death by apoptosis, as demonstrated elsewhere [22,24]. Programmed cell death was described previously in trypanosomatids and in L. donovani promastigotes and amastigotes, caused by parasites in stationary phase or induced by pentostan and amphotericin B. These drugs can induce PPL-cleavage activity, change membrane integrity, increase the electron density in the cytoplasm, and lead to nuclear condensation [38]. Previous publications describing T. cruzi epimastigotes treated with the sterol biosynthesis inhibitors, ketoconazole and lovastatin, indicated branching of the mitochondrial membranes including a concentric pattern of the inner mitochondrial membrane in contact with kinetoplast and myelin figures suggestive of autophagy [43]. There are also reports of increased intensity in the membrane potential after rhodamine treatment [43]. In contrast, we did not observe increased mitochondrial potential, as evaluated via TMRE nor branching of mitochondrial membranes in promastigotes by electron microscopy after treatment with N-BP. 
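As an aside on the dose-response values compared above: IC50 estimates like these are typically obtained by fitting a sigmoidal inhibition curve to viability or enzyme-activity measurements across a concentration series. The study does not report its fitting procedure, so the following is only a generic sketch with entirely hypothetical numbers, not a reproduction of the authors' analysis.

```python
# Minimal sketch: estimating an IC50 by fitting a four-parameter logistic
# (Hill) curve to viability data. Concentrations and viability values are
# hypothetical placeholders, not data from this study.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0])      # µM, hypothetical
viability = np.array([98.0, 95.0, 82.0, 55.0, 22.0, 8.0])  # % of untreated control

params, _ = curve_fit(four_pl, conc, viability,
                      p0=[0.0, 100.0, 30.0, 1.0], maxfev=10000)
bottom, top, ic50, hill = params
print(f"Estimated IC50 ~ {ic50:.1f} µM (Hill slope {hill:.2f})")
```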
Our results suggest that necrosis is not the main mechanism of N-BP-induced death in L. infantum promastigotes: only ~1% of the promastigotes were stained with 7-AAD alone after 48 to 72 h of incubation with alendronate (Fig. 5). Indeed, staining with Annexin-V alone and co-staining with 7-AAD led to 34% and 25% labelling, respectively (Fig. 5), and the co-stained population had a loss of mitochondrial membrane potential (TMRE) (Fig. 6). In contrast, T. cruzi epimastigotes treated with ketoconazole and lovastatin for 12 h had marked co-staining with Annexin-V and PI or staining with PI alone, and very little staining with Annexin-V alone [43]. Instead, we suggest that apoptosis is the main mechanism of death caused by N-BPs in L. infantum promastigotes. Inhibition of protein prenylation by N-BPs has been described as one of the main mechanisms underlying decreased bone resorption by osteoclasts, and apoptosis has been reported for osteoclasts treated with N-BPs [44]. Indeed, bisphosphonates such as clodronate and etidronate, and N-BPs such as pamidronate, alendronate, and risedronate, induced apoptosis in Caco-2 human epithelial cells at concentrations of 10 and 1000 µM [45]. A recent in vivo study demonstrated that subcutaneous administration of zoledronic acid in mice inhibits prenylation of Rab1A, Rab5B, Rab7A, and Rab14 in mouse peritoneal macrophages [46]. The biosynthesis of ubiquinone, or coenzyme Q (CoQ), a component of the electron transport chain in aerobic organisms such as Leishmania, can also be affected by N-BP inhibition of FPPS; this may be correlated with the mitochondrial damage observed by electron microscopy in L. infantum promastigotes treated with risedronate and alendronate. Depending on the Leishmania species and life stage, CoQ8 and CoQ10 are detected in smaller amounts, while CoQ9 is the predominant homologue and has been detected in all organisms examined, including those without identifiable mitochondria such as Giardia, which has only a remnant mitochondrion (mitosome) [15]. N-BP activity was more pronounced in L. infantum than in trophozoites of G. duodenalis, as evaluated by the viability assay: we found an IC50 of 271 ± 62 µM for ibandronate and 311 ± 120 µM for risedronate (Table 2). The higher anti-proliferative activity of N-BPs such as risedronate against L. infantum, compared to Giardia, may relate to the minimal sterol metabolism of Giardia and to differences in the catalytic site and binding pocket of FPPS in each organism. The inhibitory concentrations of risedronate and ibandronate were higher in Giardia trophozoites than in Entamoeba, indicating lower activity against Giardia. Previous studies evaluated N-BP activity in the amitochondriate Entamoeba histolytica compared with the apicomplexan parasite Plasmodium: trophozoites of E. histolytica and intraerythrocytic stages of P. falciparum displayed IC50 values above 200 µM for alendronate and pamidronate, whereas the IC50 values were 73.5 µM and 123 µM for risedronate and 53.6 µM and 50.1 µM for ibandronate, respectively [47]. In Giardia and E. histolytica, inhibition of FPPS by N-BPs can impact the biosynthesis of dolichol and isoprenoids because ergosterol biosynthesis is absent. Another aspect to consider is the expression of FPPS in different subcellular compartments, which could affect the intracellular distribution of N-BPs. An FPPS-GFP fusion, for example, demonstrated localization to peroxisomes in the amoeba Dictyostelium discoideum [48].
An enzyme of the mevalonate pathway, 3-hydroxy-3-methylglutaryl-coenzyme A (HMG-CoA) reductase, is an integral membrane enzyme of the endoplasmic reticulum (ER) in the amitochondriate Giardia. The ER membranes are the site of polyisoprenoid and dolichol biosynthesis. Trophozoites of G. duodenalis treated with N-BPs displayed many more myelin figures (Fig. 4) than L. infantum promastigotes. Concentric membranes around the nucleus and around other organelles are indicative of autophagy. The main pathways affected by inhibition of FPPS are protein prenylation and dolichol biosynthesis. In accordance with previous biochemical evidence on the incorporation of labelled FPP and GGPP isoprenoids into GTP-binding proteins, Giardia performs isoprenylation of 50 kDa and 21-26 kDa proteins [49]. Prenylation is essential for GTP-binding protein function because it is required for protein association with intracellular membranes and for protein-protein interactions involved in intracellular vesicular transport, endocytosis, and exocytosis. Conclusions Inhibition of the enzyme FPPS by N-BPs can cause a shortage of GPP, FPP and GGPP, which are intermediate metabolites involved in the regulation of cellular functions and homeostasis. A shortage of FPP can cause failure in the isoprenylation of proteins such as the nuclear lamins and the Rab GTPases, which are anchored to cellular membranes through their prenyl groups; these proteins participate in vesicular transport, endocytosis and exocytosis. Deficits in dolichol synthesis interfere with asparagine (N)-linked glycosylation, which regulates numerous cellular activities such as glycoprotein quality control, intracellular trafficking and cell-cell communication. These alterations are consistent with our findings: disorganization of intracellular membranes culminating in Leishmania apoptosis. The inhibition caused by N-BPs in promastigotes of Leishmania and trophozoites of Giardia suggests that they are good models to evaluate protein prenylation and mechanisms of cell death. FPPS lies at a branch point of the sterol metabolic pathways; it is a key enzyme in the mevalonate pathway and a good candidate for drug design. Based on the catalytic site and mechanism of catalysis of FPPS in each organism, it is possible to develop specific bisphosphonate inhibitors with high affinity for the FPPS expressed in each protozoan.
9,305.8
2020-04-05T00:00:00.000
[ "Biology", "Medicine" ]
Nanoemulsion containing essential oil from Xylopia ochrantha Mart. produces molluscicidal effects against different species of Biomphalaria (Schistosoma hosts) BACKGROUND This work describes a chemical study of the essential oil from leaves of Xylopia ochrantha, an endemic Annonaceae species from Brazil, and its activity against Biomphalaria species. Considering its poor solubility in aqueous media, the essential oil was nanoemulsified to evaluate its action in controlling some mollusc species of the genus Biomphalaria, snail hosts of Schistosoma mansoni, the causative agent of schistosomiasis, a disease that mainly affects tropical and subtropical countries. OBJECTIVES The main aims of this work were to analyse the chemical composition of the essential oil from X. ochrantha and to evaluate the effect of its nanoemulsion on molluscs of the genus Biomphalaria and on their oviposition. METHODS Chemical analysis was performed by gas chromatography coupled to mass spectrometry. Nanoemulsions were prepared by a low energy method and characterised by particle size and polydispersity index. Biological assays were performed to evaluate the mortality of adults of B. glabrata, B. straminea and B. tenagophila, and of their ovipositions, upon contact with the most stable nanoemulsion for 24 and 48 h. FINDINGS Chemical analysis by mass spectrometry revealed bicyclogermacrene and germacrene D as the major constituents of the essential oil. The formulation with a hydrophilic-lipophilic balance (HLB) of 9.26 was the most suitable for the oil delivery system. This nanoemulsion caused mortality in B. tenagophila, B. straminea and B. glabrata of different sizes at levels ranging from 50 to 100% within 48 h. Additionally, the formulation could inhibit the development of deposited eggs. CONCLUSION Thus, these results suggest the use of nanoemulsified essential oil from X. ochrantha as a possible alternative in controlling some Biomphalaria species involved in the schistosomiasis cycle. Schistosomiasis is an acute and chronic parasitic disease caused by trematode worms of the genus Schistosoma and is transmitted by several types of snails. It affects people in 78 countries, mainly in tropical and subtropical regions, and is the second most widespread parasitic disease in the world after malaria. In 2014, at least 61.6 million people worldwide were treated for schistosomiasis. (1) The acute form of this disease causes symptoms like fever, fatigue, myalgia, malaise and non-productive cough, whereas later stages present abdominal pathologies such as diarrhoea, diffuse abdominal pain, and hepatosplenomegaly. A chronic condition occurs when Schistosoma deposits its eggs and reactions from the host immune system lead to urinary, intestinal, hepatic, and ectopic forms of the disease. (2) Human intestinal schistosomiasis is caused by Schistosoma mansoni, which uses molluscs of the genus Biomphalaria as its intermediate host. In many cases, prevention methods like the eradication of Biomphalaria and Oncomelania snails using chemical pesticides are relevant to control the disease. Niclosamide (Bayluscide, Bayer, Leverkusen, Germany) is the only commercially available molluscicide recommended by the World Health Organization (WHO) for large-scale use in Schistosomiasis Control Programs. (3) However, this drug is toxic to other organisms, and resistance to niclosamide requires a search for new drugs and substances to be used in vector control. (4) Therefore, the WHO encourages a search for alternative drugs in schistosomiasis control.
Natural products can be seen as a promising alternative as they are plentiful in schistosomiasis-endemic countries and have a large number of different substances in their extracts, which hinders the appearance of vectors-resistance. (5) Essential oils are used to the control of snails, and plant species with high yields of essential oil as from genus Xylopia, showed activity against Biomphalaria glabrata, a relevant intermediate host of S. mansoni. (6) However, essential oils have low solubility in water, which may compromise their activity against Biomphalaria. Therefore, they must be processed into a formulation capable of viable use in biological activity studies. Nanotechnology has been used to solve problems of hydrosolubility and stability for active substances. There are several nanocarriers of drugs including nanoparticles, liposomes, and nanoemulsions (NEs). (7) Nanoemulsions have been widely used as nanocarriers of hydrophobic drugs and essential oils. The majority of NEs are dispersions of nanometric oil droplets in water stabilized by surfactants. The droplet size of a nanoemulsion is typically in the range of 20-200 nm. (8,9) Some key advantages of these nanocarriers are easy preparation, simple composition, the possibility of industrial-scale production, low cost of production, and high thermodynamic stability. (10,11) Xylopia ochrantha Mart. is an endemic Annonaceae species from Brazil and is popularly known as "imbiúprego". There is little information regarding the chemical, pharmaceutical, and biological activities from this species. (12,13) Thus, this work aimed to identify the chemical composition of essential oil from X. ochrantha leaves and to evaluate the activity of a nanoemulsion produced with this oil on the developmental stages (oviposition) and mortality of different Biomphalaria species. Extraction of essential oil -A mixture of leaves (1000 g) from the three specimens collected was turbolised with distilled water. The material was then placed in a five L bottom flask and subjected to hydrodistillation for 4 h in a Clevenger-type apparatus. In the end, the oils were collected and stored at 4ºC for further analyses. Gas chromatography/mass spectrometry analysis -The essential oil recovered was analysed using a GC-MS-QP5000 (Shimadzu) gas chromatograph, equipped with a mass spectrometer using electron ionisation. Gas chromatographic (GC) conditions were as follows: injector temperature, 260ºC; flame ionisation detection (FID) temperature, 290ºC; carrier gas, Helium; flow rate, 1 mL/ min; and split injection with split ratio 1:40. The oven temperature was initially 60ºC and was increased to 290ºC at a rate of 3ºC/min. One microliter of each sample, dissolved in dichloromethane (1:100 mg/μL), was injected into a DB-5 column (0.25 mm I.D, 30 m in length, and 0.25 μm film thickness). Mass spectrometry (MS) electron ionisation was at 70 eV and scan rate was 1 scan/s. Retention indices (RI) were calculated by extrapolating the retention times of a mixture of aliphatic hydrocarbons (C 9 -C 30 ) analysed under the same conditions. (14) Substances were identified by comparing their retention indices and mass spectra with those reported in literature. (15) The MS fragmentation pattern of compounds was also compared with NIST mass spectra libraries. Quantitative analysis of the chemical constituents was performed by flame ionisation gas chromatography (CG/FID), under the same conditions as GC/MS analysis. 
Percentages of these compounds were obtained by FID peak area normalisation method. Nanoemulsification method and determination of hydrophilic-lipophilic balance (HLB) -Emulsification was performed by modification of the low energy method described by Ostertag et al. (16) Emulsions comprised 5% (w/w) of X. ochrantha oil, 5% (w/w) of surfactants, and 90% (w/w) water. Tween 20 and Span 80 were used as surfactants to prepare the nanoemulsions. Various emulsions with HLB values ranging from 4.3 to 16.7 were prepared by mixing the surfactants in different proportions. For preparation of nanoemulsions, oily phase constituted by X. ochrantha oil and surfactants was homogenised by magnetic stirring (400 rpm) for 30 min at room temperature. After this, the aqueous phase (distilled water) was added to the oily phase under the same continuous magnetic stirring (400 rpm) for 1 h. The formulations were analysed to verify their stability by analysis of droplet size and polydispersity index values. The formulation with the lowest values of droplet size and polydispersity index indicated the adequate oil HLB. Nanoemulsion characterisation -Droplet size and polydispersity index (PDI) of the nanoemulsions were determined by photon correlation spectroscopy (Zetasizer ZS, Malvern, UK). Droplet measurements were performed in triplicate and average droplet size was expressed as the mean diameter ± standard deviation. Molluscicidal assays -Nanoemulsion of essential oil from X. ochrantha (100 ppm) was diluted to three different concentrations (80 ppm, 60 ppm, and 40 ppm) in distilled water. For the assay, 10 adult snails of B. glabrata species (measuring 10-12 mm diameter, from Sumidouro, RJ, Brazil) were exposed to 2 mL of the nanoemulsion, in a 24-well plate for 48 h at room temperature, according to the bioassay adapted from WHO, 1983 (17) (Fig. 1). Adult snails free of S. mansoni infection were kept in breeding grounds at the Laboratory for Analysis and Promotion of Environmental Health (LAPSA/IOC). Snail mortality was recorded over the following 48 h, and was compared with the nanoformulation control (formulation without oil at 100 ppm), positive control [Bayluscide WP 70® (Niclosamide) at 1 ppm], and negative control (distilled water). Absence of snail retraction into their shells after stimulation of the cephalopodal mass and/or release of haemolymph were the criteria of death. These assays were performed in triplicate. Complementary assessments of mortality and inhibition of posture development were performed with B. straminea, B. tenagophila, and B. glabrata from other locations (Ressaca, Belo Horizonte, MG, Brazil). For each analysis of mortality, according to a previous protocol, 2 mL of the DL 25 , DL 50 , and DL 90 sample concentrations obtained in the previous analysis with B. glabrata were used, and the groups of molluscs were divided into 10 individuals, and were separated according to size (3-5 mm, 4-6 mm, and 8-10 mm). Analysis of postures viability of B. glabrata (Ressaca, MG) was performed by treating them (two and five days old) with 2 mL of DL 25 , DL 50 , and DL 90 sample concentrations, and counting the number of eggs that were not viable in relation to the initial amount, at 24 and 48 h (Fig. 2). As in the previous assay, the results were compared with the positive, negative, and nanoformulation controls. 
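The retention indices mentioned in the GC-MS methods above are conventionally computed by linear interpolation between the retention times of the bracketing n-alkanes of the co-analysed C9-C30 series (the van den Dool and Kratz approach). A minimal sketch, with hypothetical retention times since the authors' alkane data are not reported here:

```python
# Minimal sketch of a linear (temperature-programmed) retention index:
# RI = 100 * (n + (rt_x - rt_n) / (rt_next - rt_n)), where n is the carbon
# number of the n-alkane eluting just before the analyte. All retention
# times below are hypothetical placeholders.
def retention_index(rt_x, alkane_rts):
    """alkane_rts: dict mapping alkane carbon number -> retention time (min)."""
    carbons = sorted(alkane_rts)
    for n, n_next in zip(carbons, carbons[1:]):
        t_n, t_next = alkane_rts[n], alkane_rts[n_next]
        if t_n <= rt_x <= t_next:
            return 100 * (n + (rt_x - t_n) / (t_next - t_n))
    raise ValueError("Retention time falls outside the alkane window")

alkanes = {14: 21.3, 15: 24.8}                # hypothetical C14/C15 retention times
print(round(retention_index(24.3, alkanes)))  # -> 1486 for a hypothetical sesquiterpene
```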
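The required-HLB screening described above relies on the standard rule that the HLB of a surfactant blend is the mass-weighted average of its components. Assuming the commonly tabulated values of 16.7 for Tween 20 and 4.3 for Span 80 (which match the 4.3-16.7 range stated in the Methods), a small sketch reproduces the HLB of 9.26 reported later for the 40:60 Tween 20/Span 80 blend:

```python
# Minimal sketch: HLB of a binary surfactant blend as a mass-weighted average.
# The individual HLB values are commonly cited literature values (assumption),
# consistent with the 4.3-16.7 range screened by the authors.
def blend_hlb(fraction_tween20, hlb_tween20=16.7, hlb_span80=4.3):
    return fraction_tween20 * hlb_tween20 + (1 - fraction_tween20) * hlb_span80

for tween in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0):
    print(f"{tween:.0%} Tween 20 / {1 - tween:.0%} Span 80 -> HLB {blend_hlb(tween):.2f}")
# The 40% Tween 20 / 60% Span 80 blend gives HLB 9.26, the formulation
# selected in the Results on the basis of droplet size and PDI.
```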
Concentrations that killed 50% and 90% (LC50 and LC90) of the exposed snails in 24 h and 48 h (compared with the negative-control cultures) were expressed as the mean and standard deviation and were statistically analysed using Student's t-test (p ≤ 0.05) in the R software (MASS package). RESULTS Essential oil - Hydrodistillation of the fresh leaves yielded 2.0 g of a bright green essential oil (0.2% yield). The oil density was 0.8 g/mL and its pH was 5.0. In total, 27 substances were identified, with a predominance of sesquiterpenes and hydrocarbons (68.55%). The main substances found in these analyses were bicyclogermacrene (25.18%) and germacrene D (20.90%); β-pinene (8.07%), sylvestrene (6.50%), and E-caryophyllene (6.23%) were also present (Table I). Nanoemulsion - Droplet size and PDI were used to select the HLB of the NE (Table II). The formulation with an HLB of 9.26 (40% Tween 20 and 60% Span 80) showed a droplet size of 114 ± 1 nm and a PDI of 0.1 ± 0.0, which are characteristic of a stable nanoemulsion. The selected NE exhibited a translucent and homogeneous appearance. Molluscicidal assays Effects on B. glabrata juveniles and adults from the Sumidouro and Ressaca regions - Molluscicidal activity was tested in juveniles (3-5 mm) and in adults of 6-7 mm and 8-10 mm, with specimens collected from two regions, Sumidouro (RJ, Brazil) and Ressaca (MG, Brazil). Within 24 h, the NE caused a high mortality rate among adults of 6-7 mm (100% at the concentration of 47 ppm). When used at a concentration of 32 ppm, the NE caused mortality in 96.7% of adults from this region. Adult molluscs (8-10 mm) were affected to a lesser extent, exhibiting mortality of 3.3% in 24 h at the concentration of 47 ppm (Table III). NE treatment for 48 h largely maintained the results observed after 24 h (Table IV). Treatment for 48 h of molluscs of 3-5 mm with 78 and 32 ppm caused 100% and 93.3% mortality, respectively. When treated for 48 h, molluscs of 6-7 mm showed 100% mortality in the presence of 47 ppm NE (Figs. 3, 4), and molluscs of 8-10 mm also reached 100% mortality when treated with 47 ppm (Table IV). Effects on B. tenagophila adults - At the highest concentration tested (78 ppm), molluscs of 3-5 mm were totally affected after 24 h, with 100% mortality. Molluscs of 6-7 mm also showed 100% mortality at 24 h when tested at the 47 ppm concentration; in this same period, 32 ppm NE caused 96.7% mortality. In larger molluscs (8-10 mm), 47 ppm NE caused a mortality rate of 26.7% in 24 h (Table III). NE treatment for 48 h caused 100% mortality in molluscs of 3-5 mm and 6-7 mm at 47 ppm (Fig. 5). In molluscs of 8-10 mm, the concentration of 47 ppm resulted in 53.3% mortality (Table IV). In all these analyses, the positive control was fully effective for the three species at all sizes, and the NE free of essential oil did not cause mortality. Effects on egg posture development - In the analysis of egg postures, the NE rendered eggs from recent oviposition (two days old) unviable within 24 h, with 100% inhibition for B. straminea, B. tenagophila, and B. glabrata (Ressaca, MG) at both concentrations tested (47 and 78 ppm) (Table V). Five-day-old ovipositions treated with 47 ppm NE showed 91.7% unviable eggs after 24 h of treatment in B. straminea. At the concentration of 78 ppm, a reduced effect in B. straminea (17.3%) was observed.
For B. tenagophila, treatment with 47 ppm resulted in 49.4% unviable eggs, and 78 ppm caused a greater effect, with 82.5% unviable eggs in 24 h. B. glabrata (Ressaca, MG) showed 62.7% unviable eggs with 47 ppm and 73.5% with 78 ppm for the same egg stage and time point (Table V). After 48 h, the two-day-old eggs of all species were rendered unviable (Table VI). For late ovipositions (five days), the NE caused the maximal effect on B. straminea and B. glabrata (Ressaca, MG), while B. tenagophila at 47 ppm and 78 ppm showed 70.4% and 95.2% unviable eggs, respectively. DISCUSSION Bicyclogermacrene and germacrene D, the two major constituents of the leaf essential oil from X. ochrantha, are also main components in other species of the genus, such as X. langsdorffiana A.St.-Hil. & Tul., X. aethiopica (Dunal) A.Rich., and X. aromatica (Lam.) Mart. Furthermore, a high occurrence of sesquiterpenes in the foliar essential oil is also observed in these species. (6,18,19) The yield of leaf essential oil (0.20%) is similar to that of other species of the genus, which ranges from 0.07% to 0.43% relative to fresh leaves. (19,20,21) Bicyclogermacrene is best known for its antimicrobial and fungicidal activities. (22,23,24) Germacrene D is cited in the literature as a mimic of insect pheromones and is also involved in other important mechanisms of the insect-plant relationship. (25,26) Additionally, a previous study demonstrated molluscicidal activity of the leaf essential oil of X. langsdorffiana against B. glabrata, in which germacrene D was the major compound. (6) This may suggest an association with the activity found for the essential oil from X. ochrantha against B. glabrata, since the two plant species belong to the same genus and both have germacrene D as one of their two main constituents. Although the constituents of the essential oil from X. ochrantha are associated with relevant biological activities, they have low solubility in water, which hinders their dispersion in aquatic environments. Thus, delivery in aqueous carriers such as nanoemulsions (NEs) is required. NEs were successfully produced using a low-energy method of spontaneous emulsification, which is simple and uses only magnetic stirring. (8) The aqueous phase was dripped over the organic phase under continuous magnetic stirring for 1 h. The migration of surfactants from the oil phase to the aqueous phase produces interfacial turbulence and the spontaneous formation of nanometric oil droplets that are coated and stabilised by the surfactants (27) (Table II). The following criteria were used for the analysis and selection of the formulation: size < 200 nm; PDI < 0.25; and organoleptic characteristics: absence of phase separation and precipitation. The formulation with an HLB of 9.26 (40% Tween 20 and 60% Span 80) showed low droplet size and polydispersity index values, which are characteristic of a nanoemulsion system. Development of a suitable nanoformulation with a droplet size below 200 nm for the essential oil of X. ochrantha allows more efficient aqueous dispersibility, stability, and delivery of the actives to the site of action. NEs as delivery systems for essential oils permit interaction between the active principles and biological membranes.
In general, this occurs through one of four main routes: (1) increased surface area and passive transport across the plasma membrane of cells; (2) fusion of the droplets with the cell membrane, delivering the substances specifically at the site of action; (3) a reservoir effect with sustained release of the essential oil; and (4) electrostatic interaction between positively charged droplets and the negatively charged biological membrane, favouring bioadhesion and biological effects. (8) These facts highlight the relevance of bionanotechnology as an effective alternative for the control of diseases through the control of intermediate hosts of parasites present in the environment. Moreover, the biodegradability of the formulation is another factor that minimises the residual toxicity of molluscicidal agents in the environment. Molluscicidal activity of the nanoemulsion containing the essential oil from X. ochrantha against Biomphalaria molluscs could be observed in all evaluated species, including individuals of different ages/sizes and at different stages of egg development. According to the observed mortality of B. glabrata of 10-12 mm (Sumidouro, RJ), the LC90/24 h of the oil (85.46 ppm) is considered satisfactory, as the WHO recommends an LC90/48 h below 100 ppm for a plant sample to be considered active. (3) Differences in action among species could be observed: comparing the NE at 47 ppm over 24 h, B. tenagophila appeared to be the most susceptible, although all species of 3-5 mm and 6-7 mm presented a high mortality rate. Adult individuals of B. glabrata (8-10 mm) exhibited resistance, and treatment for 48 h at 47 ppm caused an effect similar to that at 24 h, in contrast to older specimens (8-10 mm) of B. straminea, whose susceptibility increased at 48 h. Another observation emerges when comparing individuals of different sizes within the same species: smaller/younger molluscs were less resistant to the NE, suggesting that older individuals may have a more developed resistance system against its active components. The NE also acted on oviposition, with an inhibitory effect at the concentrations of 47 ppm and 78 ppm. All eggs from recent ovipositions (two days old) presented 100% mortality within 24 h. In eggs from older ovipositions (five days old), 100% inhibition of development was observed only after 48 h, although inhibition at varying degrees was also observed at 24 h. These results suggest a higher resistance to the oil compounds by the more developed eggs of B. tenagophila. Finally, these biological assays demonstrated the molluscicidal activity of the nanoemulsified essential oil from X. ochrantha and its inhibition of different developmental stages of Biomphalaria molluscs, underscoring the biological relevance of a species for which little scientific information exists regarding the control of disease transmitters. In conclusion - This study reported the molluscicidal activity and chemical composition of the essential oil from leaves of X. ochrantha, a Brazilian endemic species. This is the first report on the activity of this species in the control of molluscs that take part in infectious disease cycles, more specifically Biomphalaria species involved in the transmission of schistosomiasis. Thus, this result suggests the use of nanoemulsified essential oil from X.
ochrantha as a promising alternative for Biomphalaria control, as this plant species is endemic to a country affected by schistosomiasis.
4,527.6
2019-04-08T00:00:00.000
[ "Biology" ]
Concreteness and abstraction in everyday explanation Abstract A number of philosophers argue for the value of abstraction in explanation. According to these prescriptive theories, an explanation becomes superior when it leaves out details that make no difference to the occurrence of the event one is trying to explain (the explanandum). Abstract explanations are not frugal placeholders for improved, detailed future explanations but are more valuable than their concrete counterparts because they highlight the factors that do the causal work, the factors in the absence of which the explanandum would not occur. We present several experiments that test whether people follow this prescription (i.e., whether people prefer explanations with abstract difference makers over explanations with concrete details and explanations that omit descriptively accurate but causally irrelevant information). Contrary to the prescription, we found a preference for concreteness and detail. Participants rated explanations with concrete details higher than their abstract counterparts and in many cases they did not penalize the presence of causally irrelevant details. Nevertheless, causality still constrained participants' preferences: They downgraded concrete explanations that did not communicate the critical causal properties. Keywords Explanation · Causal reasoning What is the difference between a good description and a good explanation of an event? The former is an enumeration of the state of affairs at the time and place of the event, while the latter is an account of why or how that event came to be. Consider a description of a car accident resulting in a pedestrian being injured. One can describe the location of the accident, the speed of the car, the conditions of the tires and the road. One could also refer to the make and color of the car, whether the car radio was on and even the clothes the driver was wearing, the color of the victim's eyes, the number of nearby cars and pedestrians: an exhaustive array of facts that were true when the event in question took place. Arguably, the amount of information contained in a good description is constrained only by pragmatic considerations: the available time and space modulated by the needs and patience of the audience.
On the other hand, given that the central aim of explanation is to provide understanding (be it for purposes of diagnosis, prediction, or pure aesthetic pleasure; Keil, 2006), only information in service of that aim should be included. Arguably, the color of the driver's eyes in the above example does not enhance anyone's understanding regarding the car accident; therefore, it might be descriptively relevant but it is probably not explanatorily relevant. Determining what is explanatorily relevant depends on the account of explanation one adopts, and philosophers have long debated the nature of explanation, especially in scientific practice (Hempel, 1965; Kitcher, 1981; Salmon, 1984; Strevens, 2008). A parallel question is whether explanation quality increases as accuracy increases. Unlike a description, whose quality is often contingent on the precision of its representation, there are reasons to believe that "abstracting" an explanation (i.e., removing certain details or decreasing its precision) might in fact improve it. For some philosophers, abstraction signifies an undesirable departure from reality. On such views, an ideal explanation would mention every relevant factor at the highest degree of precision, but this quickly becomes unattainable either due to incomplete knowledge or to practical limitations. Cartwright (1983) argues that through abstraction, scientific explanations become false since they apply only in ideal conditions not found in nature. Railton (1981) argues that abstraction is a fair compromise, but a compromise nevertheless. For Nowak (1992), the distance from reality is progressively minimized by successive scientific theories: starting from an abstract but false theory of a phenomenon, progress is achieved by adding more and more influencing factors and specifying them with more and more accuracy, such that the theory is brought closer and closer to reality. Others, however, attribute value to abstraction. Jorland (1994), for example, thinks that abstraction improves explanations by leaving out nonessential factors, thus enabling "one to grasp the essence" (p. 274). Garfinkel (1981) explains that hyperconcrete explanations (i.e., explanations that contain too much detail) are overly sensitive to slight perturbations. Explaining an accident by referring to the car's high speed, for example, is more robust than referring to the speed and the color of the driver's shirt: while the former will explain multiple accidents, the latter will lead to different explanations for accidents that occurred at the same speed but in which the driver wore a blue, red, or purple shirt. Explanations that are too concrete are not merely "too good to be true" (i.e., impractical) but rather "too true to be good" (Garfinkel, 1981, p. 58). Similarly, proponents of causal, and especially counterfactual, theories of explanation (Hitchcock & Woodward, 2003; Kuorikoski & Ylikoski, 2010; Strevens, 2007; Weslake, 2010) believe that an explanation can be improved even by leaving out terms that assert causal influence on the event to be explained (the explanandum). Specifically, Strevens (2007) argues that properly abstracted explanations are explanatorily superior to their concrete and, by definition, more accurate counterparts by focusing on difference makers (facts or events in the absence of which the explanandum would not occur). Thus, the criterion for mentioning a detail when explaining some phenomenon is whether that detail makes a difference to the phenomenon's occurrence.
When explaining why it took approximately 2 seconds for the apple falling from a tree to reach the ground, even if the gravitational pull from the moon did exert some influence on the apple, it did not change the fact that the flight of the apple lasted for approximately the time it did. On that basis, the removal of lunar influence from the terms mentioned in the explanation improves its quality. In short, Strevens (2007) distinguishes causal influence from difference making and argues that it is difference making that is key to explanatory quality. In a similar vein, Garfinkel (1981) points out that explanations are often intended to help the prevention of future occurrences of the explanandum. Details that make no difference have no use and may in fact hinder such practical goals. Philosophical discussions often focus on explanations as used in science in an attempt to arrive at normative principles regarding abstraction. What principles do people use when evaluating everyday explanations? Is abstraction still valued? Do people choose to mention the exact speed of the car, or is it preferable to say that the car was moving fast or that the speed of the car fell within a certain range of values, all of which would still lead to the accident? Similarly, does including causally irrelevant but accurate information, such as the make of the car or even the color of the driver's shirt, reduce the quality of the explanation? Experimentally, Weisberg, Keil, Goodstein, Rawson, and Gray (2008) have shown that when judging behavioral explanations, people value the presence of neuroscientific details, even when these do not do any explanatory work (see also Fernandez-Duque, Evans, Christian, & Hodges, 2015). Their hypothesis is that psychological accounts that appeal to neuroscience (a lower level discipline dealing with phenomena that are more microscopic than psychology's) generate an unwarranted sense of understanding (Trout, 2008; see also Hopkins, Weisberg, & Taylor, 2016). Another option, however, is that people do not have a bias toward neuroscience-based or even reductionist explanations per se but have a preference for concretization, for the detail and precision that a lower level explanation provides. In that case, one would expect to observe a preference for concreteness even when competing explanations do not differ in their level of reduction. In the present set of experiments, we address these questions by obtaining ratings of competing explanations of everyday events that differ in the amount of information removed or abstracted from descriptions. At the extreme is what we call "irrelevant" explanations: explanations that contain all the information present in the description including causally irrelevant detail (i.e., information about events that do not plausibly exert any causal influence on the explanandum). 1 The removal of irrelevant information yields what we call "concrete" explanations, containing only causally relevant information specified with a high degree of accuracy. Finally, by replacing precise descriptions with qualitative statements, we have constructed "abstract" explanations that specify only the difference makers, the events without which the explanandum would not occur. Experiments 1a and 1b Participants were presented with a description of either a landslide (Experiment 1a) or a bad strawberry yield (Experiment 1b). To increase the believability of the descriptions, the stories were presented as newspaper reports (see Fig. 1).
In addition, although the stories were made up, the information we used was copied from various specialized sources (e.g., geological surveys, pest management reports). After reading the report, participants were asked to rate the quality of three explanations that differed in their degree of concreteness (irrelevant, concrete, and abstract). Following the philosophical views on the importance of causality and counterfactual dependence in explanation (Garfinkel, 1981; Hitchcock & Woodward, 2003; Strevens, 2007), we expected noncausally related details to be penalized by participants. Furthermore, given the level at which the question was posed ("Why was there a landslide?" rather than "Why did the landslide happen in this particular way?"), we expected participants to penalize details that had a causal influence but did not make a difference as to whether or not the landslide occurred, such as the exact diameter of soil particles. In contrast, if the tendency toward reductionism (Trout, 2008; Weisberg et al., 2008) is actually a particular case of a more general tendency for accuracy, then people should prefer the more detailed explanations. 2 Participants and materials We recruited 61 participants for each of Experiments 1a and 1b through Amazon's Mechanical Turk. The mean age was 32.9 years (SD = 11.6) in Experiment 1a and 31.7 years (SD = 11.1) in Experiment 1b. There were 35 females (57.4%) in Experiment 1a and 34 (55.7%) in Experiment 1b. In this and all experiments, participants were paid $0.50 for taking part, and the experiments were programmed in Adobe Flex 4.6 and conducted over the Internet (all experiments and the collected raw data can be seen at http://goo.gl/0rMOQd). Design and procedure All participants rated all three explanations. After two introductory screens that welcomed participants and asked them to minimize disturbances during the experiment, participants saw the description of a landslide (Experiment 1a) or a strawberry harvest (Experiment 1b) presented as a newspaper report (see Fig. 1). The landslide report included some noncritical information aimed at increasing the believability of the report (e.g., "Road crews have begun the cleanup effort"). The main causal factors were the slope, the consistency, and the vegetation of the hill. Three additional causally irrelevant facts described the color of the hill's particles, the edibility of the vegetation, and the position of the hill relative to the premises of a local festival. 3 Similarly, the strawberry report initially mentioned the strawberry market growth and went on to describe a bad strawberry season by referring to the temperature, the attacks of a strawberry bug, and the winds during ripening. The three causally irrelevant details with respect to the poor strawberry harvest were the size of the bug, the direction of the winds, and the color of the strawberry flowers. In the next screen, participants were asked, based on the information contained in the newspaper report, to rate the quality of the three explanations by placing a marker on sliders that ranged from poor to excellent. The way explanations should be rated was left intentionally vague as we wanted participants to apply their own intuitive criteria. To make sure that participants were not treating the question as a memory test, the critical information from the newspaper report was repeated in this screen. The order in which explanations appeared on the screen was randomized for each participant.
The three explanations (see Table 1) differed in their degree of concreteness: They either repeated, omitted, or altered the information contained in the original newspaper report. In each scenario, six facts were manipulated. Three facts were included unchanged in the concrete and the irrelevant explanations but were abstracted in the abstract explanation. Another three were included only in the irrelevant explanation and were omitted from both the concrete and the abstract explanations. Thus, there were two critical comparisons: one between the concrete and the abstract explanations, with the latter containing the same information at a higher level of abstraction, and a second comparison between concrete and irrelevant explanations that differed only in the three additional but causally irrelevant facts contained in the latter. After rating the three explanations, participants were asked to report the causal relevance of each of the terms used in the explanations. For each of the nine terms (three concrete, three abstract, three irrelevant), participants were asked two questions intended to test whether causal relevance guides the evaluation of explanations: Did they agree with the assertion that the term "was a cause of the landslide/poor strawberry production" (henceforth: causal ratings), and did they agree with the assertion that the term "affected the particular way in which the landslide happened"/"affected particular aspects of this year's poor strawberry production" (henceforth: causal influence ratings)? These questions also served as a validity check: They allowed us to verify whether our designation of certain terms as causally irrelevant agreed with people's intuitions. The two versions of the causal question (causality and causal influence) were meant to capture potential differences between "what" and "how" causation (Gerstenberg, Goodman, Lagnado, & Tenenbaum, 2015). In other words, it may be the case that participants prefer concrete terms (e.g., 37 degrees slope) rather than abstract terms (e.g., steep slope), even though both adequately explain what happened (e.g., a landslide), because only the concrete term explains exactly how it happened, despite the fact that this is not what is being asked.
Fig. 1 Stimuli used in Experiments 1a (left) and 1b (right). The information contained in the right column of each picture was shown again when participants were asked to rate the explanations.
Results Participants rated irrelevant factors significantly lower than concrete factors both when asked whether the factor caused the explanandum (p < .001 in both experiments) and when asked whether it influenced aspects of the explanandum (p < .001 in both experiments). Irrelevant factors were also rated lower than abstract factors in both direct causal questions (p < .001 in both experiments) and causal influence questions (p < .001 in both experiments). Finally, although abstract factors were rated higher than concrete factors, this difference was generally not significant, with one exception: In Experiment 1a, when asked whether each of the factors caused the landslide, participants were more likely to report that abstract factors (fine particles, steep slope, sparse vegetation) were more causally responsible than concrete factors (particles with diameter 2/64 of an inch, 37 degrees slope, 13% vegetation coverage) at a significant level (p = .004).
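The within-participant comparisons of causal and causal influence ratings reported above are the kind of contrasts usually run as paired tests; the text does not state the exact procedure, so the sketch below is only one plausible way to compute such a comparison, using made-up rating vectors rather than the actual data.

```python
# Minimal sketch: paired comparison of causal ratings for concrete vs.
# irrelevant factors. The ratings are hypothetical placeholders (one value
# per participant, on a 0-100 slider scale), not the study's data.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n = 61                                          # sample size reported for Experiment 1a
concrete = rng.uniform(60, 95, n)               # hypothetical causal ratings
irrelevant = concrete - rng.uniform(5, 40, n)   # hypothetically lower for irrelevant factors

t_stat, p_value = ttest_rel(concrete, irrelevant)
print(f"paired t({n - 1}) = {t_stat:.2f}, p = {p_value:.2g}")
```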
Discussion In contrast to what difference-making accounts of explanation propose regarding abstraction (Hitchcock & Woodward, 2003; Strevens, 2007), the findings of Experiment 1 indicate a preference for detail, even in cases in which detail is not judged to be causally relevant to the explanandum. 4 Explanations in which the causal terms were abstracted received consistently lower ratings than explanations mentioning exact values. Yet when asked about the causal role of the factors that featured in each explanation, participants rated concrete and abstract factors equally highly, with abstract factors receiving significantly higher causal ratings on one occasion. Similarly, participants did not penalize the presence of irrelevant factors in explanations, despite the fact that they judged those factors to have minimal causal relation to the explanandum. One question arising from these results is the role of causation in the way people evaluate explanations. Given the long philosophical tradition connecting causation and explanation (Psillos, 2002; Salmon, 1984; Woodward, 2003), it is surprising that the potency of each factor to bring about the explanandum was not the main determinant of the way the explanations were evaluated. The abstract terms, although equally efficacious, resulted in weaker explanations, while, conversely, causally irrelevant factors did not weaken the explanations. It might be argued that, at least for some comparisons, it was not concreteness per se that guided participants but the presence of numerical values as an indication of explanation quality. For that reason, we conducted an additional experiment, not reported here, using the same materials as in Experiment 1 but where abstract explanations included numerical ranges. Thus, the term "low temperature," for example, was changed to "low temperature (0-5 degrees Celsius)." Ratings for abstract explanations were unchanged, as were the overall comparisons that we have reported here. There are several other ways to understand our findings. Concreteness might signal the expertise or the truthfulness of the person providing the explanation (Bell & Loftus, 1985, 1989). Alternatively, it might be that the description of the exact conditions in which an event occurred facilitates understanding other aspects of the situation, thus achieving better unification (Kitcher, 1981). For example, the fact that the accident one is trying to explain took place close to the hospital, although not causally relevant to the accident itself, might explain other aspects of the event, such as the swift arrival of the ambulance. Before further discussing ways to account for our results, the next set of experiments will reevaluate the current findings in a simpler and more controlled context. Experiments 2a and 2b The second experiment aimed for a more controlled investigation of the surprising findings obtained in Experiment 1. That experiment had the benefit of ecological validity but had only two scenarios and used only explanations of natural phenomena. Finally, the explanations incorporated multiple factors that could have interacted in complex ways. With that in mind, the following experiments used short explanations for multiple everyday events that varied only a single factor and included social as well as physical phenomena. Participants and materials Sixty-one participants were recruited through Amazon's Mechanical Turk for Experiment 2a and 60 for Experiment 2b. The mean age was 33.0 years (SD = 9.2) in Experiment 2a and 34.6 years (SD = 11.2) in Experiment 2b.
Design and procedure Experiment 2a compared concrete to irrelevant explanations, and Experiment 2b compared concrete to abstract explanations. In each case, the two explanations differed by a single detail. Both experiments had a 12 (scenario) × 2 (explanation type) repeated-measures design. After a few introductory screens, participants were asked to rate two explanations for each of 12 everyday events. 5 Each screen presented the description of the event to be explained, a question specifying the explanandum, and two explanations, each followed by a slider ranging from poor to excellent. The order of the events to be explained was randomized, as was the left-right position of the two explanations for each event. In Experiment 2a, the two explanation types (concrete and irrelevant) were identical apart from the fact that the irrelevant explanation contained an extra detail that had no causal connection to the explanandum. For example, one of the stories described Michael's road accident, mentioning that Michael had drunk eight vodka shots and three glasses of gin and tonic at Joe's bar and asking, "Why did Michael have an accident?" The concrete version explained the accident by saying that the eight vodka shots and the three glasses of gin and tonic that Michael consumed severely reduced his concentration and increased his reaction time. The irrelevant explanation was identical, except that it also mentioned that Michael consumed the drinks at Joe's bar. Experiment 2b compared concrete and abstract explanations for the 12 scenarios. While the concrete versions referred to the exact values or quantities that were mentioned in the scenarios, the abstract explanations used a higher level of description for the critical term. For example, in the story of Michael's accident, the concrete explanation was identical to the one used in Experiment 2a, while the abstract explanation mentioned that Michael had consumed "an excessive amount of alcohol" instead of the particular drink types and quantities. In Experiment 2a, ratings were affected by the scenario, F(7.16, 429.8) = 3.94, p < .001, η² = .02, and by the explanation type, F(1, 60) = 9.61, p < .001, η² = .02, as well as by their interaction, F(7.99, 479.4) = 3.94, p < .001, η² = .02. Although there were variations between scenarios, participants rated only two of 12 irrelevant explanations higher than their concrete counterparts. Discussion The current results replicated people's preference for concrete explanations but also showed that causally irrelevant details made explanations less appealing. In the vast majority of the scenarios, participants gave higher ratings to explanations that contained particular details rather than abstracted versions of those details. Unlike in Experiment 1, explanations were rated significantly lower when they contained causally irrelevant information. Apart from using different scenarios, there are two candidate explanations for the discrepancy between Experiments 1a and 2a in the way irrelevant details were treated. An important difference is that each irrelevant explanation in Experiments 1a and 1b contained three causally irrelevant details rather than one, as in Experiment 2a. Perhaps discarding three details removes too much information from the explanations, even if that information is not causally connected to the explanandum. Alternatively, the simultaneous presentation of three (Experiment 1) rather than two (Experiment 2) competing explanations might have led to attraction effects (Huber, Payne, & Puto, 1982) in Experiment 1.
A strong preference for concrete over abstract explanations in Experiment 1 might have increased the ratings for irrelevant explanations that contained the same concrete descriptions. This is consistent with the fact that the difference between concrete and irrelevant explanations in Experiment 2a was smaller than the difference between concrete and abstract explanations in Experiment 2b. Removing detail through abstraction may be less desirable than removing irrelevant detail.
Fig. 3 Mean values for causal ratings (i.e., "X caused the landslide/poor strawberry yield") and causal influence ratings (i.e., "X affected the particular way in which the landslide happened"/"X affected particular aspects of this year's poor strawberry production") averaged over participants.
Both proposed accounts, together with the fact that irrelevant explanations still received high ratings (significantly higher than the midpoint) and the reluctance to abstract away, suggest that concreteness and accuracy are valued properties of explanations. However, the possibility remains that good explanations must appeal to causality. After all, the critical causal property was presumably inferred even in the concrete explanations of Experiment 2. For example, "eight vodka shots and the three glasses of gin and tonic" is for most people an "excessive amount of alcohol." So the former phrase both communicates the difference maker and provides specific details. But the expression of detail appears to be important too. Admittedly, there are many ways to transform a concrete explanation into an abstract one, just as there is a wide variety of details one can add to an abstract explanation. Our results are surely influenced by some of our choices. There is no easy way to counteract this issue besides conducting further work and testing more variables. In this direction, the next pair of experiments radically changes the type of concrete information that is included. We assess the role of causation by testing whether people continue to prefer concreteness even when critical causal properties are not communicated. Experiments 3a and 3b The aim of the final set of experiments is twofold. First, we wish to see if people's attitude to causally irrelevant details generalizes to a different set of scenarios. Second, we will assess people's preference for concreteness over abstraction in cases where concrete details fail to transmit the causally critical properties, allowing us to evaluate the perceived importance of causality in explanation.
For example, to explain a fire in a warehouse, the concrete explanation attributed the fire to the presence of ethyl chloride while the abstract explanation referred to a highly flammable material. Similarly, to explain someone's respiratory problems, the concrete explanation mentioned the presence of carbon dioxide at a level of 3,000 ppm while the abstract explanation referred to a very high level of carbon dioxide. Discussion In Experiment 3a, irrelevant details were not penalized as was the case in Experiments 1a and 1b. This rules out accounts based on attraction effects or the number of details included in explanations, discussed earlier. Experiment 3b shows that the preference for concreteness that was observed in previous experiments does not persist when the concrete terms fail to communicate the causal properties of the event that brought about the explanandum. Although concrete explanations are still rated significantly higher than the midpoint, people prefer explanations that convey causal information as predicted by causal accounts of explanation (Salmon, 1984;Woodward, 2003). General discussion We have investigated the extent to which people prefer explanations that present an accurate account of the state of affairs at the time of the explanandum or, alternatively, whether abstraction improves the perceived quality of explanations. In our experiments, abstract explanations either removed information that had no causal relation to the explanandum or replaced precise terms with more abstract ones that highlighted difference-making properties. In violation of certain philosophical prescriptions (Garfinkel, 1981;Hitchcock & Woodward, 2003;Strevens, 2007;Weslake, 2010), abstraction is not in itself a desirable feature of an explanation. It becomes the preferred option only when it improves the communication of causal properties. In our experiments, people show a tendency for concreteness as long as the causal properties are not obscured by technical terms. Therefore, although causality appears to be a necessary property of a good explanation, causal terms are not selected based on their difference-making properties but rather on how accurately they match the events that took place. Because people attend to whether the explanation offers a causal property, one might expect that details that are not causally related to the explanandum would reduce an explanation's judged quality. Our results on this question were mixed, but some conclusions can be drawn. In two out of three experiments presented here, participants rated irrelevant explanations as high as concrete ones. Even though in Experiment 2a explanations without irrelevant details were preferred, ratings for irrelevant explanations significantly exceeded the midpoint in every scenario that we have tried. Causally irrelevant information is not penalized as strongly as one might expect. Since recent philosophical prescriptions (Strevens, 2007) suggest the removal even of factors that exert causal influence on the explanandum, provided these are not difference makers, the reluctance of our participants to penalize even causally irrelevant factors indicates a misalignment between the proposed normative principles and actual everyday practice. Our findings extend the endorsement of concreteness and detail beyond the domain of neuroscience (Weisberg et al., 2008). They also extend it beyond the scope of reductionism (Hopkins et al., 2016;Trout, 2008). 
The competing explanations in our experiments did not differ in level of analysis. For example, causally irrelevant details, such as the edibility of the vegetation at a hill where a landslide occurred (Experiment 1a), are not more microscopic but simply more descriptive of the events that are being explained, yet participants did not penalize their presence in explanations. In contrast to the current set of studies, the majority of philosophical work is concerned with explanations of lawlike regularities. 7 As a result, rather than evaluating explanations for token events, like a landslide at a particular location, most normative accounts assess explanations of type events, such as the natural phenomenon of landslides. It remains an open question how people would evaluate explanations of such regularities and whether explanations that appealed only to difference makers would, in those cases, be preferred. More generally, although the current results point toward a preference for detail, it is reasonable to assume that this might depend on a variety of factors, such as the type of phenomena and the type of concrete details, the way the explanation is abstracted or concretized, as well as the interests of the receiver and their assumed background knowledge. We already saw in Experiment 3b, for example, that concrete details are not preferred when they fail to communicate the critical causal properties. Similarly, it is plausible that for certain types of properties (e.g., functional properties) there could be a stronger preference for abstraction. All these are open possibilities with further work required to decide whether the bias for concreteness is in fact universal or whether a mixed model where detail is sometimes preferred and other times penalized is more appropriate. Given our observations, a pressing question is what underlies the observed preference for concreteness that leads to deviations from philosophical prescriptions. Our experiments inspire a few conjectures. Abstract explanations are more generalizable by retaining only the essence of the causal mechanism (Jorland, 1994;Nowak, 1992) and facilitating prediction. Detailed explanations, on the other hand, provide more information about the particular instance. This is more useful for understanding and perhaps explaining additional aspects of the particular instance (Kitcher, 1981;Strevens, 2007). Therefore, it might be the case that preferences regarding explanations depend on one's aims. For example, referring back to our car accident example, a policy maker aims to prevent further accidents, while an insurance agent aims to understand as many aspects of the accident as possible. The former might show a preference for abstract, generalizable explanations due to their potentially preventative role (Garfinkel, 1981), while the latter might prefer explanations with concrete details that can prove useful when processing the insurance claim. Thus, the tendency toward precision might be explained by participants defaulting to a more backwards-looking stance toward explanations. Similarly, irrelevant details might have helped participants visualize how the explanandum came to be. The fact, for example, that someone suffered from respiratory problems is adequately explained by his exposure to high levels of carbon dioxide, irrespective of whether or not this exposure took place in the school where he works (Experiment 3a). 
This causally irrelevant information, however, helps one imagine the mechanism that led to the explanandum, which by itself might promote a feeling of understanding and, moreover, explain why children were endangered or why a legal case was brought against the builder. What is apparent is that people do not adhere to the normative principles put forward by some philosophers. Explanations in everyday usage serve goals beyond uncovering how an explanandum came to be. These goals are better served through detail and accuracy, making abstraction a less than ideal option. Author note This project/publication was made possible through the support of a grant from The Varieties of Understanding Project at Fordham University and The John Templeton Foundation. The opinions expressed in this publication are those of the author(s) and do not necessarily reflect the views of The Varieties of Understanding Project, Fordham University, or The John Templeton Foundation. Cake Lucia's cake was a disaster. The cake has been in Lucia's oven for 9 hours. & Why was Lucia's cake ruined? -Concrete: Because the cake was left in the oven for 9 hours, and thus it was completely burnt. -Abstract: Because the cake was left in the oven for a very long time, and thus it was completely burnt. -Irrelevant: Because the cake was left in Lucia's oven for 9 hours, and thus it was completely burnt. Accident Michael had a road accident. He had drunk eight vodka shots and three glasses of gin and tonic at Joe's bar. & Why did Michael have an accident? -Concrete: Because the eight vodka shots and the three glasses of gin and tonic that Michael consumed severely reduced his concentration and increased his reaction time. -Abstract: Because the excessive amount of alcohol that Michael consumed severely reduced his concentration and increased his reaction time. -Irrelevant: Because the eight vodka shots and the three glasses of gin and tonic that Michael consumed at Joe's bar severely reduced his concentration and increased his reaction time. Job Kevin's application for a job was unsuccessful. He sent his application via priority mail 35 days after the application deadline. & Why was Kevin's application unsuccessful? -Concrete: Because his application was sent 35 days after the deadline, and thus he was not even considered for the position. -Abstract: Because his application was sent long after the deadline, and thus he was not even considered for the position. -Irrelevant: Because his application was sent via priority mail 35 days after the deadline, and thus he was not even considered for the position. 8. Lake Llyn Dinas, a lake near Beddgelert in north Wales, has frozen. The water temperature in the lake that is famous for its trout fishing was 14 degrees Fahrenheit. & Why did the lake freeze? -Concrete: Because, since the temperature of the lake was 14 degrees Fahrenheit, the lake's water molecules were all locked together, forming ice. -Abstract: Because, since the temperature of the lake was below 32 degrees Fahrenheit, the lake's water molecules were all locked together, forming ice. -Irrelevant: Because, since the temperature of the lake that is famous for its trout fishing was 14 degrees Fahrenheit, the lake's water molecules were all locked together, forming ice. Bleeding Tina's finger was bleeding heavily. She had cut her finger ¼ inches deep with a kitchen knife. & Why did Tina's finger bleed heavily? -Concrete: Because the ¼-inch deep cut damaged blood vessels that carry blood under high pressure. 
-Abstract: Because the nonsuperficial cut damaged blood vessels that carry blood under high pressure. -Irrelevant: Because the ¼-inch deep cut by the kitchen knife damaged blood vessels that carry blood under high pressure. Inflation The rate of inflation has increased. Earlier in the year, the left-wing government has raised the sales tax by 10%. & Why did the rate of inflation increase? Elections In the last German elections, the Free Democrats (FDP) failed to meet the 5% vote threshold required to enter the parliament. The party, which was established in 1948, secured 4.8% of the vote. & Why did the FDP fail to enter the German parliament? -Concrete: Because the party won 0.2% fewer votes than required to exceed the threshold. -Abstract: Because the party won fewer votes than required to exceed the threshold. -Irrelevant: Because the party that was established in 1948 won 0.2% less than required to exceed the threshold. Appendix B: Stimuli used in Experiments 3a and 3b 1. Respiratory Peter was suffering from respiratory problems. The concentration of carbon dioxide in the school where he was teaching was regularly at the very high level of 3,000 ppm. & Why was Peter suffering from respiratory problems? -Concrete: Because he was regularly exposed to carbon dioxide at the level of 3,000 ppm. -Irrelevant: Because he was regularly exposed to carbon dioxide at the level of 3,000 ppm at the school where he was teaching. -Abstract: Because he was regularly exposed to a very high level of carbon dioxide. Club Melissa had a headache after leaving the night club. The music in the recently renovated club was playing extremely loudly at 120 dBA SPL. & Why did Melissa have a headache? -Concrete: Because the music in the club was playing at 120 dBA SPL. -Irrelevant: Because the music in the recently renovated club was playing at 120 dBA SPL. -Abstract: Because the music in the club was playing extremely loudly. 3. Welfare Hospitals across the country struggled to cope with demand. The government had severely reduced the health-care budget by 2.4%, contrary to what was promised before the elections. & Why did the hospitals struggle to cope with demand? -Concrete: Because the government had reduced the health-care budget by 2.4%. -Irrelevant: Because, contrary to what was promised before the elections, the government had reduced the health-care budget by 2.4%. -Abstract: Because the government had severely reduced the health-care budget. Holidays Paul did not enjoy his holidays. During his stay, there were strong winds with average speed of 46 mph blowing from the west, which prevented Paul from enjoying the beach. & Why did not Paul enjoy his holidays? -Concrete: Because the 46-mph winds prevented him from enjoying the beach. -Irrelevant: Because the 46-mph winds blowing from the west prevented him from enjoying the beach. -Abstract: Because the strong winds prevented him from enjoying the beach. Flood Larry's house was flooded. The Yare river, which is nearby, has overflown for the third time in that year. & Why was Larry's house flooded? -Concrete: Because Yare, which is nearby, could not withhold the water. -Irrelevant: Because Yare, which is nearby, could not withhold the water for the third time in that year. -Abstract: Because a river, which is nearby, could not withhold the water. Medical Daniel was stressed. The outcome of the scintigraphy medical test that he had 5 days earlier was positive. & Why was Daniel stressed? -Concrete: Because the outcome of his scintigraphy was positive. 
-Irrelevant: Because the outcome of the scintigraphy that he had 5 days earlier was positive. -Abstract: Because the outcome of his medical test was positive. Nausea Mary was feeling very nauseated. She had accidentally ingested 8 grams of paracetamol, much more than the recommended maximum daily dose. & Why was Mary feeling very nauseated? -Concrete: Because she had ingested 8 grams of paracetamol. -Irrelevant: Because she had accidentally ingested 8 grams of paracetamol. -Abstract: Because she had overdosed on paracetamol. Landslide There was a landslide at the hill close to David's house. Throughout the cold winter day, intense rain was falling at the rate of 0.63 inches per hour. & Why was there a landslide close to David's house? -Concrete: Because, throughout the day, rain was falling at the rate of 0.63 inches per hour. -Irrelevant: Because, throughout the cold winter day, rain was falling at the rate of 0.63 inches per hour. -Abstract: Because, throughout the day, intense rain was falling. 9. Fire A large fire destroyed the warehouse. Around noon on a very hot day, at a time when many people were still working, a large quantity of the very flammable ethyl chloride was being transported in the warehouse. & Why was there a fire in the warehouse? -Concrete: Because a large quantity of ethyl chloride was being transported in the warehouse at a time when the temperature was very high. -Irrelevant: Because a large quantity of ethyl chloride was being transported in the warehouse at a time when the temperature was very high and many people were still working. -Abstract: Because a large quantity of a very flammable material was being transported in the warehouse at a time when the temperature was very high. Strawberry Barbara's strawberry yield was very poor that year. The strawberries that were planted in early autumn attracted the leaf-eating tarsonemid mite. & Why was Barbara's strawberry yield poor that year? -Concrete: Because her strawberries attracted the tarsonemid mite. -Irrelevant: Because her strawberries that were planted in early autumn attracted the tarsonemid mite. -Abstract: Because her strawberries attracted a leafeating mite. 11. TV Kevin's TV had no picture even though it did have sound. The cold-cathode florescent lamp, a component present in LCD TVs and used to generate light, was malfunctioning. & Why did Kevin's TV have no picture even though it did have sound? -Concrete: Because the cold-cathode florescent lamp was malfunctioning. -Irrelevant: Because the cold-cathode florescent lamp, a component present in LCD TVs, was malfunctioning. -Abstract: Because the component used to generate light was malfunctioning. Accident John had a car accident. He was driving after having drunk three glasses of tsikoudia, a very strong alcoholic drink made from pomace. & Why did John have an accident? -Concrete: Because he was driving after having drunk three glasses of tsikoudia. -Abstract: Because he was driving after having drunk three glasses of a very strong alcoholic drink. -Irrelevant: Because he was driving after having drunk three glasses of tsikoudia made from pomace. Open Access This article is distributed under the terms of the Creative Comm ons Attribution 4.0 International License (http:// creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
9,672.6
2017-05-11T00:00:00.000
[ "Philosophy" ]
Spherical and Hyperbolic Fractional Brownian Motion

We define a Fractional Brownian Motion indexed by a sphere, or more generally by a compact rank one symmetric space, and prove that it exists if, and only if, 0 < H ≤ 1/2. We then prove that Fractional Brownian Motion indexed by a hyperbolic space exists if, and only if, 0 < H ≤ 1/2. At last, we prove that Fractional Brownian Motion indexed by a real tree exists when 0 < H ≤ 1/2.

Introduction
Since its introduction [10,12], Fractional Brownian Motion has been used in various areas of application (e.g. [14]) as a modelling tool. Its success is mainly due to the self-similar nature of Fractional Brownian Motion and to the stationarity of its increments. Fractional Brownian Motion is a field indexed by R^d. Many applications, such as texture simulation or geology, require a Fractional Brownian Motion indexed by a manifold. Many authors (e.g. [13,8,1,7,2]) use deformations of a field indexed by R^d. Self-similarity and stationarity of the increments are lost by such deformations: they become only local self-similarity and local stationarity. We propose here to build Fractional Brownian Motion indexed by a manifold. For this purpose, the first condition is a stationarity condition with respect to the manifold. The second condition is with respect to the self-similar nature of the increments. Basically, the idea is that the variance of the Fractional Brownian Motion indexed by the manifold should be a fractional power of the distance. Let us be more precise. The complex Brownian motion B indexed by R^d, d ≥ 1, can be defined [11] as a centered Gaussian field such that B(O) = 0 (a.s.) and E|B(M) − B(M′)|² = d(M, M′), where d(M, M′) is the Euclidean distance between M and M′. The Fractional Brownian Motion B_H indexed by R^d can be defined [10,12] as a centered Gaussian field such that B_H(O) = 0 (a.s.) and E|B_H(M) − B_H(M′)|² = d^{2H}(M, M′). The complex Brownian motion B indexed by a sphere S^d, d ≥ 1, can be defined [11] as a centered Gaussian field such that B(O) = 0 (a.s.) and E|B(M) − B(M′)|² = d(M, M′), where O is a given point of S^d and d(M, M′) is the distance between M and M′ on the sphere (that is, the length of the geodesic between M and M′). Our first aim is to investigate the fractional case on S^d. We start with the circle S^1. We first prove that there exists a centered Gaussian process (called Periodical Fractional Brownian Motion, in short PFBM) such that B_H(O) = 0 (a.s.) and E|B_H(M) − B_H(M′)|² = d^{2H}(M, M′), where O is a given point of S^1 and d(M, M′) is the distance between M and M′ on the circle, if, and only if, 0 < H ≤ 1/2. We then give a random Fourier series representation of the PFBM. We then study the general case on S^d. We prove that there exists a centered Gaussian field (called Spherical Fractional Brownian Motion, in short SFBM) such that B_H(O) = 0 (a.s.) and E|B_H(M) − B_H(M′)|² = d^{2H}(M, M′), where O is a given point of S^d and d(M, M′) is the distance between M and M′ on S^d, if, and only if, 0 < H ≤ 1/2. We then extend this result to compact rank one symmetric spaces (in short CROSS). Let us now consider the case of a real hyperbolic space H^d. We prove that there exists a centered Gaussian field (called Hyperbolic Fractional Brownian Motion, in short HFBM) such that B_H(O) = 0 (a.s.) and E|B_H(M) − B_H(M′)|² = d^{2H}(M, M′), where O is a given point of H^d and d(M, M′) is the distance between M and M′ on H^d, if, and only if, 0 < H ≤ 1/2. At last, we consider the case of a real tree (X, d). We prove that there exists a centered Gaussian field such that B_H(O) = 0 (a.s.) and E|B_H(M) − B_H(M′)|² = d^{2H}(M, M′), where O is a given point of X, for 0 < H ≤ 1/2.

2. Assume 0 < H ≤ 1/2. Let us parametrize the points M of the circle S^1 of radius r by their angles x. B_H can be represented as a random Fourier series whose coefficients are determined by the Fourier coefficients f_n introduced below, where (ε_n)_{n∈Z} is a sequence of i.i.d. complex standard normal variables.
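The existence condition for the PFBM can be illustrated numerically. The sketch below is not part of the paper; it assumes the unit circle with base point O at angle 0, builds the covariance matrix implied by B_H(O) = 0 and E|B_H(M) − B_H(M′)|² = d^{2H}(M, M′) on a grid of angles, and checks positive semi-definiteness through the smallest eigenvalue.

```python
import numpy as np

def geodesic_distance(x, y):
    """Geodesic distance between angles x and y on the unit circle."""
    diff = np.abs(x - y) % (2 * np.pi)
    return np.minimum(diff, 2 * np.pi - diff)

def pfbm_covariance(angles, H, origin=0.0):
    """Covariance R_H(x, x') = 1/2 (d(x,O)^2H + d(x',O)^2H - d(x,x')^2H)
    implied by B_H(O) = 0 and E|B_H(M) - B_H(M')|^2 = d(M, M')^(2H)."""
    d_to_origin = geodesic_distance(angles, origin) ** (2 * H)
    pair_dist = geodesic_distance(angles[:, None], angles[None, :]) ** (2 * H)
    return 0.5 * (d_to_origin[:, None] + d_to_origin[None, :] - pair_dist)

angles = np.linspace(0.0, 2 * np.pi, 200, endpoint=False)
for H in (0.25, 0.5, 0.75):
    R = pfbm_covariance(angles, H)
    min_eig = np.linalg.eigvalsh(R).min()
    print(f"H = {H:.2f}: smallest eigenvalue of the covariance matrix = {min_eig:.3e}")

# For H <= 1/2 the smallest eigenvalue stays non-negative (up to rounding error),
# consistent with the existence result; for H > 1/2 clearly negative eigenvalues
# appear, so no Gaussian field with this covariance exists.

# For an admissible H, a sample path can be drawn from the multivariate normal
# (check_valid="ignore" tolerates tiny negative eigenvalues from rounding).
H = 0.4
R = pfbm_covariance(angles, H)
sample = np.random.default_rng(0).multivariate_normal(
    np.zeros(len(angles)), R, check_valid="ignore")
```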
Proof of Theorem 2.1
Without loss of generality, we work on the unit circle S^1: r = 1. Let M and M′ be parametrized by x, x′ ∈ [0, 2π[. The covariance function of B_H, if it exists, is R_H(x, x′) = ½ [d^{2H}(x, 0) + d^{2H}(x′, 0) − d^{2H}(x, x′)]. Let us expand the function x → d^{2H}(x, 0) in Fourier series with coefficients f_n, n ∈ Z. We will see that the series Σ_{n∈Z} |f_n| converges. It follows that equality (7) holds pointwise. We can therefore write R_H in terms of the coefficients f_n, no matter whether x − x′ is positive or negative. We now prove that R_H is a covariance function if and only if 0 < H ≤ 1/2. Let us study the sign of f_n, n ∈ Z. Since f_{−n} = f_n, let us only consider n > 0. Using the concavity/convexity of the functions x → x^{2H}, one sees that:
1. H ≤ 1/2. All the f_n are negative and (8) is positive.
2. H > 1/2. We check that, if B_H exists, then we should have: all the f_n with n even are positive, which constitutes a contradiction.
In order to prove the representation (5), we only need to compute the covariance.

3 Spherical Fractional Brownian Motion
Proof of Theorem 3.1
Let us first recall the classification of the CROSS, also known as two-point homogeneous spaces [9,17]: spheres S^d, d ≥ 1, real projective spaces P^d(R), d ≥ 2, complex projective spaces P^d(C), d = 2k, k ≥ 2, quaternionic projective spaces P^d(H), d = 4k, k ≥ 2, and the Cayley projective plane P^16. [6] has proved that Brownian Motion indexed by a CROSS can be defined. The proof of Theorem 3.1 begins with the following Lemma, which implies, using [6], the existence of the Fractional Brownian Motion indexed by a CROSS for 0 < H ≤ 1/2.
Lemma 3.1 Let (X, d) be a metric space. If the Brownian Motion B indexed by X and defined by B(O) = 0 (a.s.), E|B(M) − B(M′)|² = d(M, M′) exists, then the Fractional Brownian Motion B_H indexed by X and defined by B_H(O) = 0 (a.s.), E|B_H(M) − B_H(M′)|² = d^{2H}(M, M′) exists for 0 < H ≤ 1/2.
Proof of Lemma 3.1 For λ ≥ 0, 0 < α < 1, one has λ^α = (α / Γ(1 − α)) ∫_0^∞ (1 − e^{−λu}) u^{−1−α} du. We then have, for 0 < H < 1/2, the corresponding integral representation of d^{2H}(M, M′), obtained by taking λ = d(M, M′) and α = 2H. Denote by R_H(M, M′) the covariance function of B_H, if it exists: R_H(M, M′) = ½ [d^{2H}(O, M) + d^{2H}(O, M′) − d^{2H}(M, M′)]. Let us check that R_H is positive definite: (9) is clearly positive and Lemma 3.1 is proved.
We now prove by contradiction that the Fractional Brownian Motion indexed by a CROSS does not exist for H > 1/2. The geodesics of a CROSS are periodic. Let G be such a geodesic containing O. Therefore, the process B_H(M), M ∈ G, is a PFBM. We know from Theorem 2.1 that the PFBM exists if, and only if, 0 < H ≤ 1/2.
Proof of Corollary 3.1 Let φ be the isometric mapping between M and the CROSS and let d′ (resp. d) be the metric of M (resp. the CROSS). Then, for all M, M′ ∈ M, one has d′(M, M′) = d(φ(M), φ(M′)). Let O′ be a given point of M and O = φ(O′). Denote by R′_H (resp. R_H) the covariance function of the Fractional Brownian Motion indexed by M (resp. the CROSS). It follows that R′_H is positive definite if, and only if, R_H is positive definite. Corollary 3.1 is proved.

Hyperbolic Fractional Brownian Motion
Let us consider real hyperbolic spaces H^d with their geodesic distance d. The HFBM is the centered Gaussian field such that B_H(O) = 0 (a.s.) and E|B_H(M) − B_H(M′)|² = d^{2H}(M, M′), where O is a given point of H^d.

Real trees
A metric space (X, d) is a real tree (e.g. [3]) if the following two properties hold for every x, y ∈ X.
• There is a unique isometric map f_{x,y} from [0, d(x, y)] into X such that f_{x,y}(0) = x and f_{x,y}(d(x, y)) = y.
• The range of every continuous injective map q from [0, 1] into X with q(0) = x and q(1) = y coincides with the range of f_{x,y}.
Theorem 5.1 The Fractional Brownian Motion indexed by a real tree (X, d) exists for 0 < H ≤ 1/2.
Corollary 4.1 Let (M, d) be a complete Riemannian manifold such that M and H^d are isometric. Then the Fractional Brownian Motion indexed by M and defined by
B_H(O) = 0 (a.s.), E|B_H(M) − B_H(M′)|² = d^{2H}(M, M′), M, M′ ∈ M, exists if, and only if, 0 < H ≤ 1/2. The proof of Corollary 4.1 is identical to the proof of Corollary 3.1. Let (M, d) be a complete Riemannian manifold such that M and a CROSS are isometric. Then the Fractional Brownian Motion indexed by M and defined by B_H(O) = 0 (a.s.), E|B_H(M) − B_H(M′)|² = d^{2H}(M, M′), M, M′ ∈ M, exists if, and only if, 0 < H ≤ 1/2.
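The sign analysis of the Fourier coefficients f_n in the proof of Theorem 2.1 can also be checked numerically. The following sketch is not from the paper; it approximates the coefficients of x → d^{2H}(x, 0) on the unit circle by quadrature and prints their signs for two values of H, which should come out negative for H below 1/2 and partly positive (for even n) for H above 1/2.

```python
import numpy as np

def fourier_coefficients(H, n_max=10, n_grid=20001):
    """Approximate f_n = (1/2pi) * integral over [0, 2pi] of d(x,0)^(2H) * cos(n x) dx,
    where d(x, 0) = min(x, 2pi - x) is the geodesic distance on the unit circle.
    The function is even, so the complex Fourier coefficients are real."""
    x = np.linspace(0.0, 2 * np.pi, n_grid)
    g = np.minimum(x, 2 * np.pi - x) ** (2 * H)
    return {n: np.trapz(g * np.cos(n * x), x) / (2 * np.pi)
            for n in range(1, n_max + 1)}

for H in (0.3, 0.8):
    coeffs = fourier_coefficients(H)
    # values within quadrature error of zero are reported as "0"
    signs = {n: ("+" if v > 1e-8 else "-" if v < -1e-8 else "0")
             for n, v in coeffs.items()}
    print(f"H = {H}: signs of f_1..f_10 -> {signs}")
```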
1,965.6
2005-12-21T00:00:00.000
[ "Mathematics" ]
Elastic modulus evolution of rocks under heating–cooling cycles

Rocks decay significantly during or after heating–cooling cycles, which can in turn lead to hazards such as landslides and stone-building collapse. Nevertheless, the deterioration mechanisms are unclear. This paper presents a simple and reliable method to explore the mechanical property evolution of representative sandstones during heating–cooling cycles. It was found that rock decay takes place in both the heating and cooling processes, and that dramatic modulus changes occur near the α–β phase transition temperature of quartz. Our analysis also revealed that the rock decay is mainly attributed to internal cracking. The underlying mechanism is the heterogeneous thermal deformation of mineral grains and the α–β phase transition of quartz.

This paper aims to reveal the decay mechanisms of representative sandstones during cyclic heating–cooling treatments. To this end, the Young's modulus evolution of the sandstones with temperature will be characterized with the aid of the HTIET (see Fig. 1). The underlying deterioration mechanisms will be revealed via a mean-field model linking the modulus evolution with the crack density change in rocks. Figure 2 shows the modulus changes of the yellow and red sandstones with temperature during two continuous heating–cooling cycles (peak temperature 800 °C). It can be seen that the initial modulus of the yellow sandstone is around 8 GPa, while that of the red sandstone is around 20 GPa. The difference in Young's modulus of these sandstones is due to their differences in mineral composition, porosity and crack density. The modulus changes of the yellow sandstone during heating can be seen in four stages. The modulus first increases slightly from room temperature to 250 °C. It then decreases steadily until reaching 500 °C. In the range of 500 °C to 600 °C, the modulus drops quickly. Above 600 °C, however, it increases again, and dramatically so. During cooling, the modulus decrease is first minor, but then becomes dramatic when the temperature approaches 600 °C. After that, the change turns out to be very small again. Unlike the changes in the heating stage of the first heating–cooling cycle, in the second heating–cooling cycle the modulus first decreases slightly, but then shows a dramatic increase when the temperature approaches 600 °C. The change of modulus during the cooling stage of the second heating–cooling cycle is similar to that of the first cycle. It is noted that the critical temperature with dramatic modulus changes in the heating process is higher than that in the cooling stage. Similarly, the modulus change of the red sandstone in the heating process contains four stages as well. Different from those of the yellow sandstone described above, however, the initial increase (first stage) and the consequent decrease (second stage) of the modulus of the red sandstone are much more significant. Moreover, compared with the dramatic modulus increase of the yellow sandstone near 600 °C, the modulus change of the red sandstone is very small. Obviously, the modulus variations revealed above indicate that the rock materials in the heating–cooling cycles have experienced rather complicated deterioration processes that cannot be explained by post-test characterizations.
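The four-stage description above can be turned into a simple labelling rule for a measured modulus–temperature heating curve. The sketch below is purely illustrative and is not part of the paper: the synthetic data points, the unsmoothed slope estimate and the slope threshold separating the steady from the dramatic decrease are all assumptions.

```python
import numpy as np

def segment_stages(temperature_C, modulus_GPa, rapid_threshold=-0.03):
    """Label each point of a heating curve with the four stages described in the text:
    I   - modulus increasing,
    II  - moderate decrease,
    III - rapid decrease (slope below `rapid_threshold`, in GPa per deg C),
    IV  - increasing again after the rapid drop has been seen.
    Threshold and data are illustrative choices, not taken from the paper."""
    T = np.asarray(temperature_C, dtype=float)
    E = np.asarray(modulus_GPa, dtype=float)
    slope = np.gradient(E, T)            # finite-difference slope dE/dT
    stages = np.empty(len(T), dtype="<U3")
    seen_rapid_drop = False
    for i, s in enumerate(slope):
        if s < rapid_threshold:
            seen_rapid_drop = True
            stages[i] = "III"
        elif s < 0:
            stages[i] = "II"
        else:
            stages[i] = "IV" if seen_rapid_drop else "I"
    return stages

# Synthetic heating curve loosely shaped like the yellow-sandstone description (not data).
T = np.array([25, 150, 250, 350, 450, 520, 560, 590, 620, 700, 800], dtype=float)
E = np.array([8.0, 8.2, 8.3, 7.8, 7.2, 6.6, 5.0, 3.8, 5.5, 7.5, 9.0])
for t, e, s in zip(T, E, segment_stages(T, E)):
    print(f"{t:5.0f} C  E = {e:4.1f} GPa  stage {s}")
```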
Results
To further clarify the deterioration of the sandstones, we carried out the tests with different peak temperatures, as shown in Fig. 3. The modulus difference at the beginning (room temperature) may be due to the different initial structure defects in the specimens. In the heating stage, the changes of modulus up to the different peak temperatures are similar to each other. In the cooling stage, a dramatic drop of Young's modulus appears when the peak temperature is 600 °C and beyond. A short increase and a slight decrease then take place after the dramatic drop. In the cases of peak temperatures below 600 °C, the moduli decrease smoothly from the peak temperature to room temperature during cooling, particularly for the yellow sandstone. It is interesting to note that the modulus of the yellow sandstone decreases significantly in both the heating and cooling processes, while the decrease of the red sandstone occurs mainly in the heating process. In a real fire, the heating rate can be up to 20 °C/min 21,30. Moreover, when a fire rescue is conducted, the cooling rate could be very high as well. Therefore, it is very important to clarify the effect of a high heating–cooling rate on the rock decay. Thus we carried out the tests at a heating–cooling rate of 15 °C/min (the maximum rate doable in our lab configuration). Figure 4 shows the modulus changes of the two sandstones at different heating–cooling rates. It can be seen that the rate does not affect the basic trend of the modulus change. The modulus difference in the heating stage is mainly due to the different initial status of the specimens (crack density and distribution, etc.). At 15 °C/min, during heating the dramatic modulus drop and increase take place at a higher temperature than at 5 °C/min. This is because the heat conductivities of rocks are very small, leading to temperature gradients and a delay of the temperature change in the rocks.

Discussion
A rock is a natural aggregate of minerals with internal defects (cracks and pores). Therefore, its elastic modulus is determined by both the mineral components and the internal defects. Due to anharmonic atomic vibration 31 and phase transformation 32, the moduli of minerals will change with temperature. Furthermore, in a heating–cooling cycle, thermal expansion/shrinkage-induced stress concentration will trigger and enhance the growth of internal cracks. All these factors can affect the microstructures and properties of the rocks and thus lead to the complicated variations of modulus with temperature observed in Figs. 2, 3 and 4.

The modulus changes of major minerals with temperature. The modulus changes of the major minerals in sandstones with temperature are summarized in Fig. 5. Quartz is the most important component in sandstones. Some studies have been carried out on the modulus changes of quartz with temperature, including both single-crystalline 32,33 and polycrystalline 32,34 quartz. Considering that the distribution of quartz grains in rocks is random, here we only focus on the modulus changes of polycrystalline quartz 32. As shown in Fig. 5, the Young's modulus of polycrystalline quartz decreases slightly with temperature from 25 to 520 °C, then decreases quickly towards a minimum point, followed by a dramatic increase near 573 °C. The steady decrease of the elastic constants at low temperatures is attributed to atomic force-constant softening 35.
The dramatic change of modulus near 573 °C is due to the α–β phase transformation of quartz [32][33][34]. This has been proved to be energetically dominant and can be explained in the framework of Landau theory 34. Apparently, this transition should be the main reason for the dramatic modulus changes of rocks near 600 °C. It is noted that the critical transition temperature can be affected by pressure, which may explain the difference in the critical transition temperatures 34. Feldspars (KAlSi3O8–NaAlSi3O8–CaAl2Si2O8) are a group of rock-forming tectosilicate minerals. Different from quartz, the structure of feldspar is very stable even at 1,000 °C 36. Therefore, its modulus only decreases slightly with temperature due to anharmonic atomic vibration 31,37. The modulus–temperature relationship of feldspars can be expressed by a linear function 38, where 85 GPa is the averaged Young's modulus of feldspars at room temperature. Calcite is also very stable below 700 °C. It was reported 39 that the changes of the bulk modulus and shear modulus of calcite can be expressed as K (GPa) = 79.57 − 0.023(T − 25) and G (GPa) = 32.23 − 0.009(T − 25), respectively. According to the classic elastic relationships, the Young's modulus changes of calcite can then be calculated (see Fig. 5). At high temperatures above 700 °C, however, calcite will break down via the reaction CaCO3 → CaO + CO2 39. Hematite is the mineral component making the sandstone red. This mineral is very stable up to 1,200 °C 40, and therefore its modulus also has a linear relationship with temperature, as shown in Fig. 5 31. Compared with the other mineral components, the modulus of clay is much smaller (~ 6.2 GPa 41) due to its internal defects. During a heating–cooling cycle, the nucleation of cracks occurs in the minerals. Therefore, in the following analysis we assume that the modulus of clay is constant during the heating–cooling cycle, and the effect of internal defects will be studied separately.

The averaged mineral modulus of rocks. Unlike igneous rocks, which have interlocking crystal grains, sandstones contain loosely coupled mineral grains embedded in clay 2,3. Assuming that the mineral grains are randomly distributed, the average modulus of the minerals can be estimated by the Voigt–Reuss–Hill (VRH) average method 42,43 (Eq. (2)). Using Eq. (2), one can get the average mineral modulus of the yellow and red sandstones and their changes with temperature, as shown in Fig. 6a, b, respectively. For comparison, the measured modulus (E_m) during the first heating stage is also added in the figures. It is clear that below 300 °C, for both cases the average moduli of the minerals decrease with temperature, which is in contrast with the initial increase of the measured modulus E_m. Moreover, although the red sandstone has similar mineral components to those of the yellow sandstone, the decrease of E_m of the former during the heating stage is much larger than that of the latter. In the neighborhood of 600 °C, the sudden decrease and increase of E_m are apparently the effect of the α–β quartz transition. However, the basic trends of the two sandstones are very different. More importantly, the modulus changes of the mineral components are reversible, which also contrasts with the behaviour of the rocks. Therefore, to explain the experimental results, we must consider the evolution of internal cracks in the rocks as well.
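As a numerical illustration of the VRH averaging step above, the sketch below combines temperature-dependent mineral moduli into a single average. It is not the paper's Eq. (2) verbatim: the volume fractions, the quartz values and the feldspar slope are placeholders, the clay modulus is held at 6.2 GPa as assumed above, and the average is applied directly to Young's moduli rather than to bulk and shear moduli.

```python
import numpy as np

def vrh_average(volume_fractions, moduli):
    """Voigt-Reuss-Hill average of mineral moduli (GPa) weighted by volume fraction."""
    v = np.asarray(volume_fractions, dtype=float)
    v = v / v.sum()                      # normalize fractions
    e = np.asarray(moduli, dtype=float)
    e_voigt = np.sum(v * e)              # upper bound (uniform strain)
    e_reuss = 1.0 / np.sum(v / e)        # lower bound (uniform stress)
    return 0.5 * (e_voigt + e_reuss)

def calcite_young_modulus(T):
    """Young's modulus of calcite from the reported K(T) and G(T), via E = 9KG/(3K+G)."""
    K = 79.57 - 0.023 * (T - 25.0)
    G = 32.23 - 0.009 * (T - 25.0)
    return 9.0 * K * G / (3.0 * K + G)

def feldspar_young_modulus(T, slope=0.01):
    """Linear decrease from 85 GPa at room temperature; the slope is a placeholder."""
    return 85.0 - slope * (T - 25.0)

def mineral_moduli(T, quartz_modulus):
    """Moduli of the four phases considered: quartz (from E(T) data such as Fig. 5),
    feldspar, calcite, and clay (assumed constant)."""
    return [quartz_modulus, feldspar_young_modulus(T), calcite_young_modulus(T), 6.2]

# Hypothetical quartz/feldspar/calcite/clay volume fractions, not the paper's values.
fractions = [0.65, 0.15, 0.10, 0.10]
for T, E_quartz in [(25.0, 95.0), (400.0, 85.0), (560.0, 40.0)]:  # quartz values illustrative
    print(f"T = {T:5.0f} C: E_VRH = {vrh_average(fractions, mineral_moduli(T, E_quartz)):.1f} GPa")
```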
Crack evolution during heating–cooling. In rocks, three types of microcracks can be distinguished 23: (1) intergranular microcracks that lie close to grain boundaries, (2) intragranular microcracks that emanate from a pore or a grain boundary, and (3) transgranular microcracks that run across one or several grains. However, it is very difficult to carry out direct and independent measurements of the crack density, for example by scanning electron microscopy 44. It is even more difficult to accurately characterize the evolution of cracks during the heating–cooling process. According to the theory of damage mechanics 22,23, the effective modulus of a solid with micro-cracks can be expressed as a function of the crack density, where υ_0 is the averaged Poisson's ratio of the matrix and ρ is the crack density. The crack density can be expressed as ρ = (1/V) Σ_{m=1}^{N} (a^{(m)})³, where a^{(m)} is the dimension of the m-th crack and V is the specimen volume 22. Considering that the changes of the Poisson's ratio of the mineral components with temperature are very small, the variation rate of the crack density with temperature can be expressed in terms of the ratio of the averaged mineral modulus to the measured modulus: the changes of crack density are proportional to the changes of E_VRH/E_m. Using this relation, we can qualitatively analyze the evolution of the crack density during a heating–cooling cycle. As shown in Fig. 6a, b, in stage I for both cases the average modulus of the mineral components E_VRH decreases while the measured modulus E_m increases. That is, in this stage E_VRH/E_m decreases with temperature and thus the crack density should decrease as well. This can be attributed to the temporary closure of cracks. In rocks, most microcracks appear at the grain boundaries of mineral grains. The width of a crack is normally of the order of several micrometers and the grain size ranges from several hundred micrometers to a few millimeters 44. Considering that the coefficient of thermal expansion of minerals is of the order of 1 × 10⁻⁵ 1/K 45, the thermal expansion of quartz, feldspar and calcite grains can push the surrounding microcracks to close. In stage II, E_m starts to decrease at a higher rate than E_VRH. Therefore, E_VRH/E_m and the crack density increase accordingly. The decrease of E_m should be attributed to the formation and propagation of new cracks. In this stage, the continuous thermal expansion of the mineral grains makes them contact and push each other. Due to their non-uniform shapes and constraints, some grains will be pushed to move or rotate, creating new cracks around the grains. The higher the temperature, the more new cracks and voids are created, leading to the continuous decrease of Young's modulus. In stage III, the measured modulus decreases dramatically, corresponding to the dramatic modulus decrease of quartz. It is noted that in the yellow sandstone, from 500 °C to the minimum point, E_VRH/E_m changes from 13.3 to 11.15, indicating that the crack density is decreasing slightly. However, in the red sandstone, the change of the ratio is difficult to determine. We can only conclude that the crack density change is not significant in this stage. After the transition point, i.e., in stage IV, the ratio E_VRH/E_m of the yellow sandstone changes from 11.15 to 18.33, indicating a dramatic increase of the crack density. This is because the volume of β quartz is much larger than that of α quartz, creating more cracks during the phase transition.
E VRH of the red sandstone increases dramatically, while E m remains almost the same, and thus its crack density should also increase in this stage. Overall, the modulus changes in stage IV should be a balanced result of quartz hardening and crack nucleation softening. Similar analysis can be conducted to understand the crack evolution during cooling and in the second heating-cooling cycle. As shown in Fig. 2a, in the first cooling process, the modulus of the yellow sandstone decreases slightly at high temperature and then drops very quickly around 600 ˚C, following the modulus change of quartz. Ratio E VRH /E m does not change too much in the process, indicating that crack density remains almost unchanged in the process. Hence, the sudden decrease of E m is mainly due to the modulus decrease of quartz. The same conclusion can be made on the red sandstone. Since the crack density in the red sandstone generated in the heating process is much higher than that in the yellow sandstone, the effect of quartz phase transformation in the former is much smaller than that in the latter. After then, E VRH keeps increasing but the change of E m is very small in both of the sandstones. Therefore, ratio E VRH /E m and crack density should increase in this stage. Considering that during the cooling stage all the grains are shrinking , the increase of crack density should be attributed to the opening of existing cracks that were closed by thermal expansion stress. The decrease of modulus observed in the test with other peak temperatures should be due to the same reason of crack opening. Since many cracks have been generated in the first heating-cooling cycle, the nucleation of new cracks during the second cycle is very small. At the beginning of heating, the thermal expansions of mineral grains are not enough to close the cracks. E m increases again in the yellow sandstone only when a large expansion is produced by the α − β quartz transition. In the second cooling process, the trend is almost reversible, indicating that a few new cracks are generated in the process. Young's modulus measured under different conditions. Variations of rock modulus with temperature in the literature are not consistent with each other [46][47][48][49] . This is because different techniques and rocks were used in measuring the modulus. As shown in Fig. 7, the reported modulus can be divided into two categories, measured (1) at high temperature and (2) at room temperature after heat treatment. Our result clearly shows that the moduli of rocks measured at the peak temperature are completely different from those measured at room temperature after heat treatment. To clarify this further, we summarized the modulus changes with different peak heating temperatures in Fig. 7. For the convenience of comparison, all the moduli in the figure were normalized by the corresponding moduli at room temperature. Although the rock types are different, it is clear that the moduli measured at high temperature are larger than those measured after heat treatment. Moreover, at a temperature below 300 ˚C the measured moduli at high temperature increase slightly due to the closure of cracks, while the moduli measured after heat treatment always decrease. Therefore, in discussing the temperature effect on the rock decay, it is very important to differentiate the experimental results obtained by different techniques, because the microstructure of rocks could be significantly changed during the cooling process. 
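The proportionality between crack-density changes and changes in E_VRH/E_m invites a very small computational check. The sketch below is illustrative only: the absolute E_m and E_VRH values are invented (only the last three E_VRH/E_m ratios echo the 13.3, 11.15 and 18.33 values quoted above for the yellow sandstone), and the proportionality constant from the damage-mechanics relation is omitted, so only relative changes are meaningful.

```python
import numpy as np

def crack_density_proxy(E_vrh, E_m):
    """Ratio E_VRH/E_m along a temperature path. Per the analysis above, changes in
    crack density are proportional to changes in this ratio; an increasing ratio
    indicates crack opening or nucleation, a decreasing ratio indicates crack closure.
    The proportionality constant (fixed by the averaged Poisson's ratio) is omitted."""
    ratio = np.asarray(E_vrh, dtype=float) / np.asarray(E_m, dtype=float)
    return ratio, np.diff(ratio)

# Hypothetical yellow-sandstone heating path: room temperature, ~500 C, the pre-transition
# minimum, and just above the alpha-beta transition (values invented for illustration).
E_m_path   = np.array([8.0, 6.5, 5.5, 5.0])       # measured modulus, GPa
E_vrh_path = np.array([95.0, 86.5, 61.3, 91.65])  # VRH mineral average, GPa

ratio, delta = crack_density_proxy(E_vrh_path, E_m_path)
print("E_VRH/E_m along the path:", np.round(ratio, 2))
print("changes (crack-density proxy):", np.round(delta, 2))
```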
Conclusions
This paper has presented a comprehensive investigation into the decay of red and yellow sandstones in heating–cooling cycles by instantly monitoring their modulus changes. In the heating stage the modulus changes of both rocks can be divided into four stages, i.e., stage I: an initial slight increase from room temperature to ~ 300 °C; stage II: a steady decrease from 300 to 500 °C; stage III: a dramatic decrease from 500 to 600 °C; and stage IV: above the critical temperature near 600 °C. In the cooling stage, the changes of modulus consist of two stages divided by the critical temperature. In the second cycle, significant modulus changes only occur near the critical temperature of the α–β phase transformation of quartz. Changing the heating–cooling rate does not affect the basic trend. The modulus changes are mainly attributed to the evolution of internal cracks and the α–β phase transformation of quartz. According to the theory of composite materials and damage mechanics, a relationship between the measured modulus and the crack density was established, which can be straightforwardly used for revealing the decay mechanisms of rocks during heating–cooling cycles.

Methods
Two rocks, a yellow sandstone and a red sandstone from Xuzhou, China, are studied in this paper. The mineralogical composition of the sandstones was identified by performing powder X-ray diffraction (XRD) analysis. The yellow sandstone mainly consists of quartz (66.5 vol%). All the specimens, with dimensions of 40 × 4 × 10 mm (± 0.1 mm), were cut from large rock blocks (visibly free of fractures). The surfaces of the specimens were carefully polished with abrasive paper (up to #800). The specimen is suspended by supports in the furnace (see Fig. 1), in which heating–cooling cycles with different peak temperatures and rates were conducted (see Table 1). At the peak temperature, a holding time of 10 min was set for all tests. In the cases with a peak temperature of 800 °C, a second heating–cooling cycle was also conducted. During the heating–cooling cycles, the specimen was excited by the impact bar every 15 s. The vibration signal of the specimen was collected by a ceramic hollow bar and recorded by a high-precision microphone outside the furnace. The Young's modulus can be calculated by Eq. (5) 26,28, where m is the mass of the bar, f is the fundamental flexural resonant frequency of the bar, L is the length, w is the width, and h is the thickness. For each heating/cooling cycle, repeated tests were carried out at least three times. Considering that the mass and dimensions of the specimen change during the heating–cooling cycles, all measured moduli were corrected accordingly via Eq. (5). After obtaining the evolution of Young's modulus with temperature, a mean-field model is established to analyze the crack density changes of the sandstones during the heating–cooling cycles.

Data availability
The data used to support the findings of this study are included within the article.
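The paper's Eq. (5) is not reproduced in the text above. As an illustration only, the sketch below uses the standard ASTM E1876-type expression for the dynamic Young's modulus of a rectangular bar in fundamental flexural resonance, which is the usual form employed with the impulse excitation technique; the specimen mass, frequency, and the simplified correction factor T1 (strictly valid for slender bars with length/thickness ≥ 20) are assumptions, not values from the paper.

```python
def young_modulus_flexural(mass_g, freq_hz, length_mm, width_mm, thickness_mm):
    """Dynamic Young's modulus (Pa) of a rectangular bar from its fundamental flexural
    resonant frequency, in the ASTM E1876-type form commonly used with the impulse
    excitation technique: E = 0.9465 (m f^2 / w)(L^3 / h^3) T1. The simplified
    correction factor T1 is used here; the paper's exact Eq. (5) may differ."""
    t1 = 1.000 + 6.585 * (thickness_mm / length_mm) ** 2
    return (0.9465 * (mass_g * freq_hz ** 2 / width_mm)
            * (length_mm ** 3 / thickness_mm ** 3) * t1)

# Illustrative numbers for a 40 x 10 x 4 mm sandstone bar (mass and frequency are made up).
E = young_modulus_flexural(mass_g=3.6, freq_hz=5000.0,
                           length_mm=40.0, width_mm=10.0, thickness_mm=4.0)
print(f"E = {E / 1e9:.1f} GPa")  # of the order of the ~8 GPa reported for the yellow sandstone
```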
4,599
2020-08-14T00:00:00.000
[ "Geology" ]
A technique for quantifying intracellular free sodium ion using a microplate reader in combination with sodium-binding benzofuran isophthalate and probenecid in cultured neonatal rat cardiomyocytes Background Intracellular sodium ([Na+]i) kinetics are involved in cardiac diseases including ischemia, heart failure, and hypertrophy. Because [Na+]i plays a crucial role in modulating the electrical and contractile activity in the heart, quantifying [Na+]i is of great interest. Using fluorescent microscopy with sodium-binding benzofuran isophthalate (SBFI) is the most commonly used method for measuring [Na+]i. However, one limitation associated with this technique is that the test cannot simultaneously evaluate the effects of several types or various concentrations of compounds on [Na+]i. Moreover, there are few reports on the long-term effects of compounds on [Na+]i in cultured cells, although rapid changes in [Na+]i during a period of seconds or several minutes have been widely discussed. Findings We established a novel technique for quantifying [Na+]i in cultured neonatal rat cardiomyocytes attached to a 96-well plate using a microplate reader in combination with SBFI and probenecid. We showed that probenecid is indispensable for the accurate measurement because it prevents dye leakage from the cells. We further confirmed the reliability of this system by quantifying the effects of ouabain, which is known to transiently alter [Na+]i. To illustrate the utility of the new method, we also examined the chronic effects of aldosterone on [Na+]i in cultured cardiomyocytes. Conclusions Our technique can rapidly measure [Na+]i with accuracy and sensitivity comparable to the traditional microscopy based method. The results demonstrated that this 96-well plate based measurement has merits, especially for screening test of compounds regulating [Na+]i, and is useful to elucidate the mechanisms and consequences of altered [Na+]i handling in cardiomyocytes. Background The sodium ion (Na + ) is the main determinant of the body fluid distribution, and transsarcolemmal Na + gradient is a key regulator of the various intracellular ions and metabolites. In the heart, the concentration of free intracellular Na + ([Na + ] i ) has been shown to increase in the presence of cardiac diseases including ischemia, heart failure, and hypertrophy [1][2][3][4][5]. Because [Na + ] i is important in modulating the electrical and contractile activity, quantifying [Na + ] i is of great interest. Therefore, several techniques for measuring [Na + ] i have been established to clarify the mechanisms and consequences of altered [Na + ] i regulation, and the standard procedure currently used for measuring [Na + ] i in a single cell is a fluorescent microscopy-based method [6][7][8][9][10]. Sodiumbinding benzofuran isophthalate (SBFI), the most widely used Na + -sensitive fluorescent indicator provides spatial and temporal resolution of [Na + ] i with sufficient selectivity in the presence of physiological concentrations of other ions [11]. The ratiometric measurement with SBFI permits us to cancel out variable dye concentrations in the cells and shares the same filter equipment used for the Ca 2+ indicator, Fura-2. 
Although the use of microscopy and ratio imaging in combination with SBFI has some merits, including the fact that it requires a minimal number of cells, permits the discrimination against dye leaked out of the cells, and provides the ability to see indicator compartmentalization [6], this technique requires a fluorescence microscope equipment to switch between filters. Furthermore, it is difficult to test the effects of several types of compounds and/or compounds at several concentrations simultaneously. On the other hand, a method using a cell suspension loaded with fluorescent indicator in a cuvette recorded by a spectrophotometer has been reported, but it might not be adequate for living adherent cells. Moreover, when one measures [Na + ] i in cells using a closed culture space without a perfusion chamber system to wash out the dye leaked from the cells, this leaked dye reduces the accuracy of the measurements of [Na + ] i [6]. Microplate readers with a 96-well format have been widely used in combination with various types of cellbased applications, including measuring the fluorescence intensity, because it employs a standardized rapid protocol for screening and examining multiple cell types and compounds, while requiring small amounts of materials. A method for measuring [Ca 2+ ] i in adherent cells attached to a 96-well microtiter plate using a microplate reader has been reported previously [12]. However, to our knowledge, no microplate reader-based method has previously been applied to measure [Na + ] i in cardiomyocytes in combination with SBFI. Moreover, there are few reports on the long-term effects of compounds on [Na + ] i in cultured cells, although rapid changes in [Na + ] i during a period of seconds or several minutes have been widely discussed. In comparison with adult cardiomyocytes, neonatal cells have the advantage of being easily cultured and having a longer viability. Therefore, we applied cultured neonatal rat ventricular cardiomyocytes (NRVM) in this system to examine the chronic effects of compounds on [Na + ] i . The aim of this study was to investigate a new method to measure [Na + ] i in NRVM attached to a 96-well microtiter plate using a microplate reader and to confirm the rational in vivo calibration method for SBFI in this system. We also investigated the effects of probenecid against dye leakage out of the cells. To confirm the reliability of this technique, the rapid effects of the Na + /K + ATPase inhibitor, ouabain, on [Na + ] i were evaluated. We further examined the chronic effects of aldosterone on [Na + ] i in NRVM to illustrate the utility of the new method. Results and discussion Probenecid prevents the leakage of SBFI from cardiomyocytes As SBFI-AM hydrolyzes, the 340/380 nm excitation ratio gradually increases [6]. In our preliminary experiment, the fluorescence intensity continued to gradually increase during the measurements, even after the 60-minute period that had been previously reported to allow for complete hydrolysis [6]. Di Virgilio, et al. reported that the consequences of dye (Fura-2) leakage were relevant for experiments in closed cuvettes, because secreted dye can account for a considerable percentage of the total fluorescence signal [13][14][15]. Because each well of the 96-well plate that we used in our experiment was also a closed space, the gradual increase of fluorescence intensity after recording for a 60-minute period, at which time the completion of hydrolysis was expected [6], was speculated to be the result of dye leakage. 
Probenecid, an organic anion transport blocker, has been reported to prevent Fura-2 leakage from cells, and this effect has also been reported for SBFI used to measure [Na + ] i [16]. Cao, et al. reported the value of [Na + ] i in neocortical neurons, and demonstrated that several compounds induced changes in [Na + ] i using a microplate reader with a 96-well format [17]. However, they did not use probenecid in their experiments. They might have been able to successfully measure [Na + ] i in neocortical neurons without taking into account the dye leakage, because the significance of dye leakage from the cells depends on the cell line. To determine whether probenecid prevents dye leakage from cardiomyocytes in our 96-well microplate-based experiment, we compared the fluorescence ratio of SBFI in the cells incubated with the recording medium in the presence and absence of 1 mM probenecid. Because a stable SBFI fluorescence ratio was obtained after approximately 80 min of recording with 1 mM probenecid in the preliminary experiment, the relative fluorescence ratio compared to that at 80 min was estimated. Figures 1A and 1B clearly show the inhibitory effect of probenecid on the dye leakage from cardiomyocytes. A stable fluorescence ratio was obtained for at least 30 min after 80 min of recording in the presence of probenecid, while the ratio continued to increase in the wells without probenecid (solid line in Figure 1C. At 120 min recording, there was an estimated 8% increase in the SBFI ratio, indicating an approximately 8-10 mM increase in [Na + ] i ). This result indicates that probenecid is essential to prevent the overestimation of [Na + ] i caused by dye leakage. The concentrations of probenecid and time needed for treatment to inhibit dye leakage vary among different types of cells [13][14][15]. For our present method, probenecid effectively blocked SBFI efflux at a concentration of 1 mM, and was added only during the recording period after SBFI had been loaded into the cells. Therefore, in further experiments, we measured [Na + ] i in NRVM in Tyrode solution in the presence of 1 mM of probenecid. Several reports have suggested that probenecid can lead to unwanted effects in cells. In particular, probenecid has been reported to reduce the rise in [Ca 2+ ] i induced by depolarization of the plasma membrane or by a receptor-directed agonist, such as bradykinin [14]. Although this may not affect [Na + ] i itself, attention is needed for the function of cells when using this agent for a long time. [18,19] and adult cells [9,[20][21][22] measured by microscopy or a spectrophotometer, which ranged from 5 to 13 mM. These results suggest that our method has sensitivity comparable to the microscopy-based method. The value of [Na + ] i in myocytes depends on the ionic strength, pH, and the composition of the solutions used during the isolation of the myocytes [6,23,24]. In addition, the [Na + ] i levels in freshly prepared and cultured cells have been reported to be different for other cell lines [25]. Therefore, the protocol used needs to be carefully understood to ensure that an accurate comparison can be made of the absolute value of [Na + ] i . Dye compartmentalization has been reported when SBFI is loaded at physiological temperature (37°C). However, this could be reduced by loading SBFI at room temperature [26,27]. 
In fact, the fraction of SBFI compartmentalized has been reported to range from 10 to 50% [7,8,10], and it is still uncertain even when it is recorded by microscopy, because the loaded dye concentration and loading time have varied among experiments. The disadvantages associated with population-averaged protocols using a plate reader and multiple cells, which thus meant that we could not directly detect indicator compartmentalization in the cells in each experiment are considered to be negligible, due to the fact that the changes in the fluorescence ratio are considered to mainly reflect the changes in the cytoplasmic [Na + ] levels [6][7][8]. Transient effects of ouabain on [Na + ] i in cardiomyocytes Ouabain is a specific Na + /K + pump inhibitor that has been widely used for the treatment of patients with heart failure and atrial fibrillation. Ouabain is known to transiently alter [Na + ] i in cardiomyocytes. To confirm the reliability of our technique, we examined the effects of ouabain at con- This result is comparable to the results reported using ouabain or another specific Na + /K + pump inhibitor, strophanthidin, which were measured by fluorescent microscopy or a spectrophotometer [7,9,19], suggesting that our present technique detects the changes in [Na + ] i induced by agents with accuracy comparable to the traditional microscopy-based method. The long-term effects of aldosterone on [Na + ] i in cardiomyocytes We and others have recently reported that aldosterone induces [Na + ] i elevation in cultured cardiomyocytes, and that this effect was rapid, non-genomic, and occurred in a mineralocorticoid receptor-independent fashion [28,29]. Although there was a previous report that aldosterone activated Na + /H + exchange in cardiomyocytes [30], the longterm effect of aldosterone on the estimated value of [Na + ] i in cardiomyocytes is still unknown. To clarify this, we measured [Na + ] i in NRVM after treatment with vehicle or aldosterone at a concentration of 0.1 nM to 100 nM for 24 h using the new method. The mean value of [Na + ] i in cells treated with 100 nM aldosterone was significantly higher than that of cells treated with vehicle (9.1 ± 0.5 mM vs 6.7 ± 0.4 mM, n = 11, P < 0.01), although a lower concentration of aldosterone did not affect [Na + ] i ( Figure 4). This result indicates that chronic aldosterone exposure alters [Na + ] i handling in cardiomyocytes, which might have (patho) physiological effects in the heart. Most of the previous studies about [Na + ] i in the heart were focused on the rapid effects of agents. However, altered [Na + ] i handling under pathological conditions, including heart failure and cardiac hypertrophy, is a continuous phenomenon. In this context, using NRVM and investigating the change in [Na + ] i after long-term treatment with various compounds may be helpful for understanding the mechanisms and consequences of [Na + ] i handling in the heart. Conclusions The results of this study indicate that using a microplate reader and a ratiometric measurement of SBFI used in combination with probenecid provides accurate values for [Na + ] i in NRVM attached to 96-well plates. This method has merits in that it allows for the changes in [Na + ] i in cultured cells treated with several types or concentration of agents to be measured simultaneously, and provides a more thorough investigation of the long-term effects of agents. 
In addition, the present method can be applied to measure [Na + ] i in other types of adherent cells with some modification of the concentration of probenecid and length of treatment. Preparation of cardiomyocytes and cell culture All animal procedures conformed to the National Institutes of Health Guide for the Care and Use of Laboratory Animals and were approved by the Animal Research Committee of Jikei University. NRVM were isolated from one-to three-day-old Sprague-Dawley rats according to the manufacturer's protocol from Worthington Biochemical (Lakewood, NJ). Purified NRVM were plated at a density of 1*10 5 cells/well in 96 well clear bottom plates in low-glucose (1000 mg/liter) DMEM (GIBCO) supplemented with 10% fetal bovine serum (GIBCO), 20 mM HEPES and antibiotics (100 U/ml penicillin G and 100 μg/ ml streptomycin; Wako). The cells were allowed to attach at 37°C in a 5% CO 2 atmosphere, and subconfluent myocyte monolayers were obtained after 48 h. Sixteen hours before treatments with the indicated agents, the medium was replaced with DMEM supplemented with charcoalstripped FBS (GIBCO). In the experiments using ouabain, after attainment of a stable fluorescence ratio, we replaced 50 μl of medium in each well with 50 μl of a ouabain-containing solution. The microplate reader can take measurements in each well of a plate within 90 seconds, and the fluorescence intensity was automatically recorded every 2.5-5 minutes. In each microplate, NRVM of the same preparation in 10 wells were prepared with Tyrode solution in the absence of SBFI to measure the background signals of NRVM and microplates. Mean fluorescence signals from the 10 SBFIunloaded wells at 340 nm and 380 nm were subtracted from the individual signals of SBFI-loaded wells at each wavelength. All of the experimental conditions, including in vivo calibrations, were performed in sextuplicate. In vivo calibration of SBFI The in vivo calibration of SBFI was accomplished, similar to the previous reports, by exposing the cardiomyocytes to various concentrations of extracellular [Na + ] (0-20 mM) in the presence of 1 mg/l gramicidin D, 100 μM strophanthidin, 2 mM EGTA, and the pH was adjusted to 7.1 with Tris base [9,20]. Myocytes had been treated with gramicidin D to allow the free movement of Na + , K + , and H + , strophanthidin to inhibit the Na + /K + pump, and EGTA to increase the permeability of the cell membrane to Na + [6,8,9]. Using these agents, a stable equilibrium between the intracellular and the extracellular [Na + ] was achieved. A linear fit of the calibration plots between 0 and 20 mM [Na + ] i was used to convert SBFI fluorescence ratios (340/380 nm) to values of [Na + ] i . The calibration solutions were prepared by mixing two solutions of equal ionic strength. One solution contained 145 mM Na + (30 mM NaCl, 115 mM sodium gluconate) and no K + , while the other one had 145 mM K + (30 mM KCl, 115 mM potassium gluconate) and no Na + . Under these calibration conditions, the effect of K + on SBFI is negligible in physiological [Na + ] i between 0 and 20 mM, although SBFI is known to be sensitive to K + [9]. A calibration was performed at the end of each experiment. Statistical analyses The data are expressed as the means ± standard error for the indicated number of experiments. The statistical analyses were performed using Student's t test and one way ANOVA, followed by Scheffe's test. Values of P < 0.05 were considered to be significant.
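To make the ratio-to-concentration conversion concrete, the following sketch follows the in vivo calibration described above: the background signals (the mean of the 10 SBFI-unloaded wells at each wavelength) are subtracted, the 340/380 nm ratio is computed and a linear fit over the 0-20 mM calibration points converts ratios to [Na + ] i . The variable names and data layout are illustrative assumptions and not part of the original protocol.

```python
import numpy as np

def sbfi_ratio(f340, f380, bg340, bg380):
    """Background-corrected SBFI ratio for SBFI-loaded wells.

    f340, f380   : raw fluorescence from SBFI-loaded wells (arrays)
    bg340, bg380 : raw fluorescence from the 10 SBFI-unloaded wells
    """
    return (f340 - bg340.mean()) / (f380 - bg380.mean())

def fit_calibration(na_mM, calib_ratios):
    """Linear fit ratio = slope * [Na+] + intercept over the 0-20 mM standards."""
    slope, intercept = np.polyfit(na_mM, calib_ratios, deg=1)
    return slope, intercept

def ratio_to_na(ratio, slope, intercept):
    """Convert a background-corrected 340/380 ratio into [Na+]i (mM)."""
    return (ratio - intercept) / slope

# Example with hypothetical calibration wells exposed to 0, 5, 10, 15 and 20 mM [Na+]:
# slope, intercept = fit_calibration(np.array([0, 5, 10, 15, 20]), calib_ratios)
# na_i = ratio_to_na(sample_ratio, slope, intercept)
```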
3,893.4
2013-12-01T00:00:00.000
[ "Biology" ]
Deep Stacking Network for Intrusion Detection Preventing network intrusion is the essential requirement of network security. In recent years, people have conducted a lot of research on network intrusion detection systems. However, with the increasing number of advanced threat attacks, traditional intrusion detection mechanisms have defects and it is still indispensable to design a powerful intrusion detection system. This paper researches the NSL-KDD data set and analyzes the latest developments and existing problems in the field of intrusion detection technology. For unbalanced distribution and feature redundancy of the data set used for training, some training samples are under-sampling and feature selection processing. To improve the detection effect, a Deep Stacking Network model is proposed, which combines the classification results of multiple basic classifiers to improve the classification accuracy. In the experiment, we screened and compared the performance of various mainstream classifiers and found that the four models of the decision tree, k-nearest neighbors, deep neural network and random forests have outstanding detection performance and meet the needs of different classification effects. Among them, the classification accuracy of the decision tree reaches 86.1%. The classification effect of the Deeping Stacking Network, a fusion model composed of four classifiers, has been further improved and the accuracy reaches 86.8%. Compared with the intrusion detection system of other research papers, the proposed model effectively improves the detection performance and has made significant improvements in network intrusion detection. Introduction With the development of the Internet of Things (IoT), device embedding and connection have generated more and more network data traffic [1]. The increase in data volume has also led to more threats to network security. With the updating of network technology, more and more malicious attacks and threat viruses are appearing and spreading at a faster speed [2]. As the main means to defend against advanced threats, network intrusion detection faces new challenges. There are two common detection methods: feature-based detection and anomaly-based detection [3]. When the attack signature is known, signaturebased detection is very useful. Conversely, anomaly-based detection can be used for known or unknown attacks. As a traditional network attack detection method, the intrusion detection system based on feature detection is widely used because of its simplicity and convenience. Its shortcomings are also obvious. The feature-based intrusion detection system cannot detect unknown attack types and the detection accuracy is limited by the feature size and update speed of the signature database. In recent years, researchers have tried to introduce other technologies in intrusion detection to solve this problem, especially the recent emergence of machine learning technology. Many researchers have applied machine learning algorithms, such as decision trees, k-nearest neighbors, support vector machines and deep neural networks to the field of intrusion detection and have achieved some initial results. However, according to the 'no free lunch' theorem, we cannot find the best algorithm [4]. Each algorithm model may be outstanding in some aspects and inferior to other • An ensemble learning system DSN is proposed, consisting of the decision tree, k-nearest neighbor, random forest and deep neural network. 
DSN improves the accuracy of intrusion detection technology and provides a new research direction for intrusion detection. • The proposed DSN combines the predictions of multiple basic classifier models, fusing decision information and improving the generalization and robustness of the detection model. • We use a real NSL-KDD dataset to evaluate our proposed system. The experimental results show that DSN has better performance than traditional methods and most current algorithms. We consider that the proposed system has good application prospects for IDS. The rest of the paper organizes as follows: Section 2 briefly reviews intrusion detection technology. Section 3 describes the dataset and the algorithms used, including decision tree, deep neural network and deep stacking network. In Section 4, the proposed DSN algorithm is described in detail. The experiments of choosing basic classifiers and comparison experiments of performance analysis are given respectively in Section 5. Finally, Section 6 provides some personal opinions and conclusions, including further work afterward. Related Works In recent years, many scholars have tried to use machine learning algorithms to study new intrusion detection methods [5]. Several studies have suggested that by selecting relevant features, the detection accuracy and performance of IDS can be considerably improved [6]. Hodo et al. [7] analyzed the advantages of various machine learning methods in intrusion detection and discussed the influence of FS in IDS. Janarthanan et al. [8] conducted experiments to compare the effects of features on various machine learning algorithms and pointed out some most important features in intrusion detection. Some scholars focus on using Feature selection (FS) to improve intrusion detection performance. Bamakan et al. [9] presented a novel support vector machine (SVM) with FS by Particle swarm optimization (PSO), which improved the performance of classification for IDS. Elmasry et al. [10] applied two PSO algorithms to perform feature selection and hyperparameter selection respectively, which improved the detection effect of deep learning architectures on IDS. Thaseen et al. [11] designed a multiclass SVM with chi-square feature selection, which reduces the time of training considerably and effectively improves the efficiency of the algorithm. Deep learning has achieved many successes in speech detection, image recognition, data analysis and other fields, becoming the preferred solution to many problems. Many scholars have also begun to use deep learning to solve intrusion detection problems. Wu et al. [12] designed a convolutional neural network (CNN) to select features from data sets automatically and set the weight coefficient of each class to solve the problem of sample imbalance. Muhammad et al. [13] proposed the IDS based on a stacked autoencoder (AE) and a deep neural network (DNN), which reduced the difficulty of network training and improved the performance of the network. Yang et al. [14] designed a DNN with an improved conditional variational autoencoder (ICVAE) to extract high-level features, over- Although machine learning and deep learning have certain advantages in intrusion detection, the disadvantages are also obvious. A single algorithm tends to have a high detection rate for certain attack categories while ignoring the detection effect of other attack categories. In order to solve this problem, many scholars try to use the idea of integrated learning to solve the problem of intrusion detection. 
Rahman et al. [15] proposed an adaptive intrusion detection system based on boosting with naive Bayes as the weak (base) classifier. Syarif [16] applied and analyzed three traditional ensemble learning methods for intrusion detection. Gao [17] proposed an adaptive voting model for intrusion detection, which consists of four different machine learning methods as the base classifiers, resulting in an excellent performance. The development of the above-mentioned intrusion detection technologies is encouraging, but these classification technologies still have detection deficiencies, such as being insensitive to unknown attacks and a low detection rate when detecting a few attacks. In order to overcome these problems, this paper uses preprocessing technology to deal with the dataset and selects the basic classifier of ensemble learning selected to construct the ensemble learning model DSN. Finally, the system DSN solves the above-mentioned problems by learning the advantages of each classifier. NSL-KDD Dataset Introduction The famous public KDDCUP' 99 is the most widely used data set for the intrusion detection system [18]. However, there are two critical problems with this data set, which seriously affect the performance of the evaluated system. One is that many redundant duplicate records will cause the learning algorithm to be biased towards identifying duplicate records. Second, the sample ratio is seriously unbalanced and some attack categories exceed 70%, making them too easy to be detected, which is not helpful for multi-class detection. Both of these problems have seriously affected the evaluation of intrusion detection performance. To solve these problems, Tavallaee proposed a new data set NSL-KDD [19,20], which consists of selected records of the complete KDD data without mentioned shortcomings [5]. Table 1 shows the detailed information of the dataset NSL-KDD. Many scholars have carried out a series of studies on NSL-KDD and analysis shows that the NSL-KDD data set is suitable for evaluating different intrusion detection models [21]. Therefore, we selected the NSL-KDD data sets to validate the proposed model. Table 1 shows the distribution of the NSL-KDD dataset. Decision Tree Decision tree (DT) is a commonly used machine learning method to complete classification and regression tasks. The decision tree model has a tree structure, starting from the root node and branching using the essential features of the data. Each branch represents the output of a feature and each child node represents a category. The classification decision tree is a kind of supervised learning and the required classification model can be obtained by giving sample training. The input data finally completes the classification task through the judgment of each node. According to the criteria for judging branch characteristics, decision trees can be divided into ID3, C4.5 and CART. ID3 uses a greedy strategy and uses information gain based on information entropy as a branch criterion. In the classification problem, take a data set D with K classes as an example. The information entropy of probability distribution is defined as follows: where p k represents the probability of sample points belonging to k class. Choose feature A as the split node, the conditional entropy and information gain is defined as follows: where D j represents the sample subset of class j in feature A. The greater the information entropy, the higher the uncertainty of the sample set. 
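For reference, the ID3 quantities described above can be written in their standard textbook form (the notation mirrors the description in the text; this is not a reproduction of the paper's own typeset equations):

```latex
% Information entropy of data set D with K classes
H(D) = -\sum_{k=1}^{K} p_k \log_2 p_k

% Conditional entropy after splitting on feature A with subsets D_j
H(D \mid A) = \sum_{j} \frac{|D_j|}{|D|} \, H(D_j)

% Information gain of feature A
g(D, A) = H(D) - H(D \mid A)
```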
The essence of the classification learning process is the reduction of sample uncertainty (that is, the process of entropy reduction). The greater the information gain, the better the classification effect of the feature on the sample set. Therefore, the feature with the largest information gain should be selected for the split. Deep Neural Network Algorithm Deep neural network (DNN) is a deep learning algorithm widely recognized by scholars. Figure 1 shows the basic structure of a DNN. The network structure of a DNN includes the input layer, hidden layers and output layer, and adjacent layers are fully connected: neurons within a layer have no connections with each other, while each neuron is connected with all the neurons in the next layer. After each layer of the network, an activation function acts on the output, which strengthens the effect of network learning. Therefore, a DNN can also be understood as a large perceptron composed of multiple perceptrons. Taking the forward propagation calculation of the ith layer as an example, the formula is as follows: a^(i) = f(w^(i) a^(i-1) + b^(i)), with a^(0) = x, where x represents the input value, w represents the weight coefficient matrices, b represents the bias vector and f is the activation function. In a multi-class network, ReLU is usually used as the activation function: f(x) = max(0, x). The loss function measures the output loss of the training samples, and the back propagation of the network is calculated through the loss function to optimize the network structure. In classification tasks, cross-entropy is usually chosen as the loss function: L = -(1/N) Σ_{n=1}^{N} Σ_{i=1}^{M} y_i log(p_i), where N represents the number of samples in the input data set, M represents the number of categories, y_i indicates whether class i corresponds to the real category and p_i represents the predicted probability of category i. Deep Stacking Network Algorithm Individual machine learning algorithms usually have shortcomings and cannot complete complex task requirements. 
Therefore, we try to combine many different machine learning algorithms to form a learning system. We call this learning system ensemble learning and the algorithms that make up the learning system are called individual learners. Ensemble learning can be divided into two categories. One type is serialization methods that have strong dependencies between individual learners and must be generated serially, such as boosting and AdaBoost. The other is parallelization methods that can be generated at the same time without strong dependencies on individual learners, such as bagging and random forest (RF). Stacking is a combination strategy that combines the calculation results of individual learners. Wolpert [22] put forward the idea of stacked generalization in 1992, using another machine learning algorithm to combine the results of individual machine learning devices. This method improves the performance of the algorithm, reduces the generalization error and makes the model more widely used. Deng proposed the use of deep neural networks as the combined layer algorithm to further improve the performance of the stacking model, called the deep stacking network (DSN) [23]. DSN usually consists of two modules. The first module is the classifier module, composed of classifiers with different classification performances and performs preliminary prediction processing on the input data. 'Stacking' refers to concatenating all output predictions with the original input vector to form a new input vector for the next module. The second module is the prediction fusion module. By training the new combined input data obtained from the previous layer, a new network can be obtained. The network can effectively use the output data obtained from the previous layer for further processing. The prediction result output by the network is more accurate and closer to the true value. The Proposed Intrusion Detection Method In this paper, a deep stacking network model is designed which selects commonly used machine learning algorithms, such as support vector machines (SVM), decision trees, random forests, k-nearest neighbors (KNN), AdaBoost, deep neural networks (DNN), etc., as the basic classifiers. Through comparative testing, we select four machine learning methods as the basic classifiers. Through data preprocessing and deep neural network tuning, the best detection effect is finally obtained. Figure 2 shows the algorithm flow of the proposed model, mainly includes following 7 steps: 1. Input the original NSL-KDD training data set. The pre-processing module discretizes the string information in the data set, filters important feature selection, handles imbalanced data and normalizes the data. 2. Use 10-fold cross-validation to divide the pre-processed dataset and then input the data into various classifiers for training. 3. After using training data to conduct cross-validation training for all algorithms, select algorithms with better detection accuracy and operational performance as the basic classifier. Then discretize the classification results. 4. Input the predicted classification result of the training set and the original category as the training set, initialize the parameter weights of the neural network, train the network parameters and finally generate the Deep Stacking Network model. 5. Input the NSL-KDD testing data set. The pre-processing module discretizes the character information in the data set, selects essential features and normalizes the data. 6. 
Use the trained basic classifier to initially predict the classification results and discretize the results. 7. Input the preliminary predicted classification results into the trained neural network to obtain the prediction results of the Deep Stacking Network model. Data Pre-Processing Data pre-processing is a necessary step for data analysis, and it is also an essential part of an intrusion detection system. The pre-processing stage mainly includes four units: one-hot encoding, feature selection, imbalance handling and data normalization. One-Hot-Encoding There are 41 features in the NSL-KDD dataset, including 3 string features and 38 continuous value features. In machine learning, character type information cannot be used directly and needs encoding methods to convert it. One-hot-encoding is one of the most commonly used methods to numeralize categorical features [24]. It converts each character type feature into a binary vector, marking the corresponding category as 1 and the others as 0. For example, the feature protocol_type has three attributes: tcp, udp and icmp. By one-hot-encoding, tcp is encoded into (1, 0, 0), udp into (0, 1, 0) and icmp into (0, 0, 1). 
Overall, the three character type features protocol_type, service, flag are mapped into 84-dimensional binary values. Otherwise, the num_outbound_cmds feature value is 0, so this feature is removed. Therefore, the original 41-dimensional NSL-KDD can be transformed into a new 121-dimensional data set. Feature Selection Feature selection (FS) is a commonly used method of data aggregation. In some ways, we can select important features and remove the remaining redundant features to alleviate the problem of dimensionality disaster. Similarly, removing irrelevant features can reduce the difficulty of machine learning tasks and increase the efficiency of storage space utilization. In some machine learning algorithms, FS can help the algorithm improve detection performance, especially decision tree algorithms [25]. Imbalance Handling It can be clearly seen from the table that the training samples are imbalanced on the NSL-KDD data set. Unbalanced training samples will cause the trained model to be biased to recognize most sample categories, resulting in the degradation of the model's detection performance. Therefore, we choose to process the training samples. Commonly used methods for processing unbalanced data sets include under-sampling and over-sampling. The model in this paper uses undersampling to process the training samples of the data set. From a security perspective, the intrusion detection system should identify the attack type as much as possible and can appropriately reduce the normal traffic data in the training sample when inputting the training sample, so that the focus of model training is to identify the attack type. We use the non-replacement method to sample the normal flow data randomly. Table 2 shows the sample distribution of the new data set. Different dimensions of input data usually have different dimensions and orders of magnitude. When using machine learning, data normalization is a very necessary measure. The transformed NSL-KDD has 121-dimensional features and there are big differences between the features, so we use data normalization to reduce the differences for improved performance [26]. In this paper, the zero-mean normalization and the min-max normalization method are adopted to reduce the differences in different dimensions. The zero-mean normalization processes the data by changing the average value to 0 and the standard deviation to 1. The formula is as follows: where Z i and σ, respectively, represent the mean and standard deviation value of the ith feature Z i and Z ij represents the feature value after normalization. The min-max normalization scales the data to the interval [0, 1] through a linear transformation. The formula is as follows: where max(Z i ) and min(Z i ), respectively, represent the maximum and minimum value of the ith feature Z i and Z ij represents the normalized feature value between [0, 1]. Training Classifiers The classifier module reads the preprocessed data and uses the ten-fold cross-validation method to process the training data. In the 10-fold cross-validation method, the entire training set is randomly divided into 10 folds, of which 9 folds work as sub-training data and the remaining 1 fold works as sub-validation data. The read data is used for model training. We first choose KNN, RF, SVM, DT, LR, DNN to process the data. Two standardization methods are mentioned in Section 4.1. According to the different characteristics of machine learning algorithms, we have different data standardization methods for different algorithms. 
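In standard notation, the two standardization methods mentioned above can be written as follows (this is the conventional z-score and min-max form consistent with the verbal description; the symbols mirror those used in the text):

```latex
% Zero-mean (z-score) normalization of the i-th feature Z_i
Z_{ij}' = \frac{Z_{ij} - \bar{Z}_i}{\sigma_i}

% Min-max normalization to the interval [0, 1]
Z_{ij}' = \frac{Z_{ij} - \min(Z_i)}{\max(Z_i) - \min(Z_i)}
```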
Most studies have proved that feature selection can improve the effectiveness of decision tree algorithms [26]. Therefore, feature selection is applied when configuring the decision tree algorithm. Commonly used feature selection methods include the correlation coefficient method, the PSO feature selection method, etc.; in this paper, the PSO method is selected as the feature selection method [27]. Table 3 shows the different preprocessing methods of the different algorithms. The classification decision tree in this article uses the ID3 algorithm to build the tree model. First, feature selection is performed, reducing the number of features of the input data from 121 to 56. Then the data after feature selection are used as the input for decision tree training. Each time, the feature with the largest information gain is selected as the bifurcation node and each child node connects two branches to build a binary decision tree. In the decision tree, choosing the bifurcation point is the key to the classification performance of the decision tree. Take the 4-layer decision tree trained on NSL-KDD data as an example, where src_bytes is the root node of the decision tree; the decision tree model is shown in Figure 3. 
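A minimal sketch of the decision-tree base classifier described above is given below. It assumes the preprocessed 121-dimensional NSL-KDD matrix has already been reduced to the 56 selected features; note that scikit-learn's DecisionTreeClassifier builds CART-style binary trees, so criterion="entropy" only approximates the ID3 information-gain splitting described in the text.

```python
from sklearn.tree import DecisionTreeClassifier

def train_decision_tree(X_selected, y):
    """Train the DT base classifier on the 56 selected NSL-KDD features.

    X_selected : array-like, shape (n_samples, 56), features kept after selection
    y          : array-like, shape (n_samples,), 5-class labels
                 (Normal, DoS, Probe, R2L, U2R)
    """
    # Entropy-based splits approximate the information-gain criterion of ID3;
    # each split is binary, matching the binary tree described in the paper.
    clf = DecisionTreeClassifier(criterion="entropy", random_state=0)
    clf.fit(X_selected, y)
    return clf

# Usage with hypothetical arrays:
# dt = train_decision_tree(X_train_56, y_train)
# y_pred = dt.predict(X_test_56)
```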
Proposed Deep Stacking Network The Deep Stacking Network (DSN) is divided into two layers. In the first layer, the classifier module, each classifier has 10 different model parameter structures based on 10-fold cross-validation. Each model performs a result prediction on its corresponding verification set, giving the prediction results of 10 verification sets. The set of 10 validation sets corresponds to a complete training set, so we superimposed the prediction results of the 10 validation sets to obtain the prediction results for the complete training set. This prediction result on the training set helps us evaluate the performance of the classifier and is used as the new training set input to the next layer. At the same time, each model makes predictions on the data that need to be predicted, and the mode of the predicted values of these 10 models is taken as the prediction result of the classifier, which is used as the new test set input to the next layer. Figure 5 shows the process of data 'stacking'. So far, we have not only made full use of the training effect of the complete training set but also used the entire training set for model performance evaluation. In the algorithm proposed in this paper, 4 classifiers are selected as the basic classifiers of the Deep Stacking Network. Each classifier corresponds to a new training set and a new test set. One-hot encoding is then used to convert the prediction results from character variables to discrete variables and from 1-dimensional features to 5-dimensional features. For example, the prediction result Probe becomes (0, 0, 1, 0, 0). Therefore, a total of 20-dimensional training set features and 20-dimensional test set features can be obtained. Combining the original classification label of the training set with the features of the new training set forms the input of the new training set. The features in this training set can also be called appearance features; these appearance features help us learn the influence of each basic classifier in the network. Figure 6 shows the composition of the new training set and test set. 
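The following sketch illustrates the first-layer 'stacking' step described above: out-of-fold predictions from the four base classifiers are collected over 10-fold cross-validation, one-hot encoded into 5-dimensional vectors and concatenated into 20-dimensional meta-features, while test-set predictions are obtained from the fitted models. The classifier settings and data shapes are placeholders; only the data flow follows the description in the text (and the test-set step deviates slightly, as noted in the comments).

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

N_CLASSES = 5  # Normal, DoS, Probe, R2L, U2R (assumed integer-encoded as 0..4)

BASE_MODELS = [
    KNeighborsClassifier(),
    DecisionTreeClassifier(criterion="entropy", random_state=0),
    RandomForestClassifier(random_state=0),
    MLPClassifier(hidden_layer_sizes=(2048, 1024, 512, 256, 128),
                  max_iter=200, random_state=0),   # stands in for the DNN
]

def one_hot(labels, n_classes=N_CLASSES):
    """Discretize predicted class labels into 5-dimensional binary vectors."""
    return np.eye(n_classes)[np.asarray(labels, dtype=int)]

def build_meta_features(X_train, y_train, X_test, cv=10):
    """First DSN layer: 10-fold out-of-fold predictions per base classifier,
    one-hot encoded and concatenated into 20-dimensional meta-features."""
    train_parts, test_parts = [], []
    for model in BASE_MODELS:
        # Out-of-fold predictions cover the complete training set exactly once.
        oof_pred = cross_val_predict(model, X_train, y_train, cv=cv)
        train_parts.append(one_hot(oof_pred))
        # Refit on the full training set to predict the test set
        # (the paper instead takes the mode over the 10 fold-specific models).
        model.fit(X_train, y_train)
        test_parts.append(one_hot(model.predict(X_test)))
    return np.hstack(train_parts), np.hstack(test_parts)

# meta_train, meta_test = build_meta_features(X_train, y_train, X_test)
# The second-layer fusion network is then trained on meta_train together with
# the original training labels.
```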
In the second layer, the prediction fusion module, we use a simple neural network for decision fusion. Since decision fusion does not need to explore the deep relationship between features and labels, we use a neural network with one hidden layer for model training. The neural network structure is shown in Figure 7. The network parameters are adjusted by inputting the new training set; the ReLU function is used as the activation function and the softmax function is applied before the output to obtain the final classification result. Performance Evaluation In this article, nine indicators commonly used in intrusion detection are used to evaluate the performance of the intrusion detection system, including four confusion matrix indicators of true positive (TP), true negative (TN), false positive (FP) and false negative (FN), and five evaluation indicators of accuracy (ACC), precision, recall rate, F1-score and multi-class accuracy (MACC). Table 4 shows the confusion matrix. The five evaluation indicators are defined as follows: ACC = (TP + TN)/(TP + TN + FP + FN), precision = TP/(TP + FP), recall = TP/(TP + FN) and F1-score = 2 × precision × recall/(precision + recall). The ACC is usually an indicator for traditional binary classification tasks; according to the standard of multi-attack classification, multi-class accuracy (MACC) is proposed, which can help us better compare the performance of classifiers: MACC = (number of samples successfully classified)/(total number of samples). Experimental Setup The proposed system is run on a laboratory computer with an Intel(R) Core(TM) i7-9750H CPU @ 2.60 GHz and 16.00 GB of RAM using Python on Windows 10. All experiments are performed on the preprocessed NSL-KDD dataset. Firstly, the appropriate basic classifiers are selected by screening suitable machine learning algorithms. After selecting the basic classifiers, we conduct experiments on the complete system to evaluate the performance of the model. Figures 3 and 6 show the structures of the two neural networks used in the system, DNN and DSN. The number of neurons in the hidden layers of the DNN is 2048-1024-512-256-128, the number of neurons in the hidden layer of the DSN is 128, the activation function of the hidden layers is ReLU and the activation function of the output layer is Softmax. The optimization algorithm of the two networks is Adam [28], for which two important parameters need to be set, namely the learning rate and the number of epochs. When the learning rate of the network is too high, the loss function of the networks will oscillate without convergence; if the learning rate is too low, the slow convergence rate will hinder the update of the networks. Therefore, choosing an appropriate learning rate is very important for network performance optimization. 
In this experiment, a set of learning rates [0.1, 0.01, 0.001, 0.0001, 0.00001] is selected as the candidate parameters of the two networks and the accuracy of the network on the verification set is used as the measurement standard. Similarly, the number of iterations is also critical to the optimization of the network. A large number of epochs wastes time and easily leads to overfitting, while a small number of epochs results in insufficient network convergence and poor model learning performance. This experiment finds the appropriate number of iterations from the changing pattern of the loss function value during network training. In order to find the right parameters, we use the 10-fold cross-validation method mentioned in Section 4.2. For the basic classifier DNN, as shown in Figure 8, the learning rate is optimal between 0.0001 and 0.00001 and is finally set to 0.00003. As shown in Figure 9, the experiment shows that the training loss basically does not change after 50 iterations, so we set the number of iterations to 50. For the DSN, as shown in Figure 10, the learning rate reaches the maximum accuracy at 0.001, so we choose 0.001 as the learning rate. As shown in Figure 11, the loss function of the network stabilizes after 20 iterations, so we set the number of iterations to 20. In the feature selection of the DT, different numbers of features were tried to test the classification effect. As shown in Figure 12, when the number of features is 56, the best accuracy of 99.78% can be achieved. Therefore, the number of DT features selected in this article is set to 56. The parameters of the other basic classifiers are set according to the default parameters provided by the Sklearn library. 
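A minimal sketch of the second-layer fusion network with the hyperparameters reported above (one hidden layer of 128 ReLU units, softmax output, Adam with learning rate 0.001, 20 epochs) is shown below using tf.keras. The input shape assumes the 20-dimensional meta-features produced by the four base classifiers, and the remaining settings (batch size, one-hot label encoding) are illustrative assumptions rather than values taken from the paper.

```python
import tensorflow as tf

def build_fusion_network(n_meta_features=20, n_classes=5):
    """Second DSN module: a one-hidden-layer network for decision fusion."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(n_meta_features,)),
        tf.keras.layers.Dense(128, activation="relu"),        # 128 hidden neurons
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),  # tuned rate
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

# fusion = build_fusion_network()
# fusion.fit(meta_train, y_train_onehot, epochs=20, batch_size=128)  # 20 epochs
```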
In order to establish a good ensemble learning model, it is first necessary to screen the basic classifiers with excellent performance. In the experiment, a 10-fold cross-validation method was used to evaluate the performance of the six selected algorithms. We consider the effect of the algorithms from the perspective of the predicted success rate for each attack type so that the characteristics of each classifier can be analyzed, which helps us choose good basic classifiers to improve the performance of the entire intrusion detection system. Table 5 shows the results of cross-validation on the new training set. From the table, it can be seen that the three algorithms KNN, DT and RF have outstanding performance in detection accuracy. Among them, RF has the best performance in detecting the Normal category, DT has the best performance in detecting the Probe and R2L categories and DNN has the best performance in detecting the DoS and U2R categories. In terms of time spent, DT used the shortest time and SVM used the longest due to slow modeling. Stacked generalization requires the chosen classifiers to be both good and different, so we choose KNN, DT, RF and DNN, which have outstanding performance in various aspects, as the basic classifiers of the DSN network. Results and Discussion Tables 6 and 7 respectively show the performance of each classifier on the test set and the performance results of the DSN model on the NSL-KDD test set. From the perspective of accuracy, DT and DNN reach high accuracy. Among them, RF has the best performance in detecting the Normal category, DT has the best performance in detecting the Probe and R2L categories and DNN has the best performance in detecting the DoS and U2R categories. This is basically the same as the previous results on the validation set and meets the different requirements of a good classifier. 
The DSN model is not prominent in each attack category, but it combines the advantages of four basic classifiers, improves the overall classification accuracy and also solves the problem of the low accuracy of a single algorithm in certain categories of attack recognition. In terms of training and testing time, the proposed model is acceptably higher than most algorithms and lower than SVM. The multi-class detection accuracy of DSN reached 86.8%, the best performance. In order to better demonstrate the performance of this system in intrusion detection, we will compare the proposed model with the intrusion detection algorithms proposed by seven scholars, including DNN, RNN, Ensemble Voting and SAAE-DNN. Table 8 shows the classification accuracy of the algorithm on NSL-KDD Test+ and NSL-KDD Test-21, respectively. The classification accuracy of DSN on NSL-KDD Test+ is 86.8% and the classification accuracy on NSL-KDD Test-21 is 79.2%, which is significantly higher than other comparison algorithms. [29] Bagging NSL-KDD Test+ / 84.25% Kanakarajan [30] GAR-forest NSL-KDD Test+ / 85.05% GAO [17] Ensemble Voting NSL-KDD Test+ / 85.2% Tang [31] SAAE-DNN NSL-KDD Test+ 87.74% 82.14% Yang [14] ICVAE-DNN NSL-KDD Test+ 85.97% / Proposed method DSN NSL-KDD Test-21 83.19% 79.2% Yin [32] RNN-IDS NSL-KDD Test-21 / 64.67% Yang [33] MDPCA-DBN NSL-KDD Test-21 / 66.18% Tang [31] SAAE-DNN NSL-KDD Test-21 / 77.57% Conclusions and Future Work This paper proposes a novel intrusion detection approach called DSN that integrates the advantage of four machine learning methods. For the real network dataset NSL-KDD, we use Pre-processing to normalize data. In the experiment, four of the six machine learning methods were selected as the basic classifiers for ensemble learning. The integrated learning model DSN gathers the advantages of four different classifiers and improves the performance of the algorithm. Compared with other researches, it is proved that our ensemble model effectively improves the detection accuracy. The DSN proposed in this article has a good application prospect, which is worthy of further exploration. The data used in the experiment is NSL-KDD, which is an unbalanced data set. Therefore, the use of this data set for training will inevitably lead to the learning result biased towards the majority of samples. How to use limited training samples to improve the adaptability of multi-classification is the key to solving the problem. Ensemble learning is an excellent method that can improve the performance of the model in a short time. However, it is not advisable to use ensemble learning methods blindly. Different algorithms are suitable for different classification situations. It is necessary to select the correct algorithm as the basic classifier to fundamentally improve the overall effect of the model. From the condition of model optimization, the most important thing is to optimize the data, followed by the optimization algorithm and finally, the parameters of the optimization algorithm. Future work will focus on improving IDS performance. I consider the detection cost of each algorithm for different attack categories as a measurement standard and design a new IDS. At the same time, I will choose other intrusion detection data sets collected from reality to experiment with the intrusion detection performance of the algorithm. Designing an intrusion detection system capable of parallel processing and learning is our next step of work.
9,886.8
2021-12-22T00:00:00.000
[ "Computer Science" ]
Orphan Genes Shared by Pathogenic Genomes Are More Associated with Bacterial Pathogenicity Recent pangenome analyses of numerous bacterial species have suggested that each genome of a single species may have a significant fraction of its gene content unique or shared by a very few genomes (i.e., ORFans). We selected nine bacterial genera, each containing at least five pathogenic and five nonpathogenic genomes, to compare their ORFans in relation to pathogenicity-related genes. Pathogens in these genera are known to cause a number of common and devastating human diseases such as pneumonia, diphtheria, melioidosis, and tuberculosis. Thus, they are worthy of in-depth systems microbiology investigations, including the comparative study of ORFans between pathogens and nonpathogens. We provide direct evidence to suggest that ORFans shared by more pathogens are more associated with pathogenicity-related genes and thus are more important targets for development of new diagnostic markers or therapeutic drugs for bacterial infectious diseases. Earlier studies found that ORFans are shorter, have lower GC content, and evolve more rapidly (6)(7)(8)(9)(10). Therefore, ORFans were once thought to be mispredicted proteincoding genes. However, accumulating experimental evidence has been demonstrated that many ORFans correspond to real and functional proteins (7,(11)(12)(13)(14)(15)(16)(17)(18)(19)(20)(21)(22)(23)(24). In addition, it has been suggested that newly evolved ORFan genes often confer new traits and play significant roles in assisting their host organisms to adapt to the ever-changing environments (5,9). For example, an ORFan gene named neaT was characterized in extraintestinal pathogenic (ExPEC) Escherichia coli to have a key role in the virulence of ExPEC in zebrafish embryos (24). Therefore, although molecular biologists tend to focus more on conserved genes, the taxonomically restricted ORFans are likely to be more important for the emergence of species-specific traits: e.g., the ability of pathogens to infect their hosts. Previously, ORFans have been shown to be enriched in genomic islands (GIs) of bacterial genomes (25). GIs are defined as horizontally transferred gene (HGT) clusters that often contain virulence factor (VF) genes and can transform nonpathogens to pathogens. Hence, many GIs are also known as pathogenicity islands (PAIs), a term we prefer to use in this article. In fact, PAIs were shown to contain more VF genes than the rest of the genome (26). Another study showed that 39% of ORFans in 119 prokaryotic genomes were found in clusters of genes with atypical base compositions (27), which correspond to horizontally transferred foreign elements from other bacteria or viruses. However, none of the previous large-scale analyses of prokaryotic ORFans (e.g., references 4, 28, 29, and 30) have distinguished pathogens and nonpathogens. Recent pangenome analyses of numerous bacterial pathogens and their closely related nonpathogenic strains have suggested that each genome of a single species may have a significant fraction of unique gene content known as the variable genome (31)(32)(33)(34)(35)(36)(37)(38)(39)(40)(41). Many of the unique genes are lineage-specific ORFans; those unique genes residing in PAIs or prophages may have contributed to the bacterial pathogenicity (42,43). 
In this study, our goal was to study the association between ORFans and pathogenicity of bacteria by analyzing fully sequenced bacterial genomes, which have been classified into pathogen (P) and nonpathogen (NP) groups. We identified ORFans adopting the pangenome idea, according to which proteins from the variable genome are ORFans. Compared to previous studies, the novelty of this study is that we have classified ORFans into different groups: SS-ORFans (strain-specific ORFans present in just one genome), PS-ORFans (pathogen-specific ORFans shared by pathogenic genomes), and NS-ORFans (nonpathogen-specific ORFans shared by nonpathogenic genomes). Specifically, using bacterial genomes from nine bacterial genera, we aimed to address the following questions by comparing genomes of the same genus. (i) Do pathogens have more genes than nonpathogens? (ii) Do pathogens have a higher percentage of ORFans than nonpathogens? (iii) Do pathogens have more pathogenicity-related genes (PRGs), such as genes in prophages and PAIs and genes identified as HGTs and VFs, than nonpathogens? (iv) Which group of ORFans is more represented in the four types of PRGs and thus is more likely to be associated with bacterial pathogenicity? RESULTS Overall comparisons of ORFans between pathogens and nonpathogens in nine genera. The nine bacterial genera with more than five complete pathogenic genomes and five complete nonpathogenic genomes are shown in Table 1 (also see Materials and Methods). Here "complete" means that the genomes are fully determined and assembled. Bacteria of these genera are known to cause a number of common and devastating human diseases (see Table S1 in the supplemental material). Table 2, the 505 genomes are grouped into 340 pathogenic (P) genomes (1,255,580 proteins) and 165 nonpathogenic (NP) genomes (657,172 proteins). The percentages of ORFans are calculated relative to the gene contents in the two groups of genomes, respectively (see Fig. 1 and Materials and Methods for how we defined the four groups of ORFans). In the 340 P genomes, the percentage of SS-ORFans is 1.39% and the percentage of PS-ORFans is 4.48%. Similarly, in the 165 NP genomes, the percentage of SS-ORFans is 2.60% and the percentage of NS-ORFans is 6.00%. Hence, the overall percentage of ORFans seems higher in NP than P genomes, which agrees with a previous study (19% nonpathogen-associated genes versus 14% pathogen-associated genes) (26). As shown in Furthermore, Table 2 also shows the four groups of ORFans further broken into the four types of PRGs (pathogenicity-related genes [explained in Materials and Methods]). For example, the percentage of SS-ORFans in P genomes carried by prophages is 12.24%, which was calculated by no. of SS-ORFans in prophages/total no. of SS-ORFans: 2,138/17,455. For prophages and PAIs, it is clear that ORFans of P genomes are more likely to be carried by PAIs and prophages than ORFans of NP genomes (e.g., for prophages, P genomes [18.75% ϩ 12.24%] versus NP genomes [9.50% ϩ 8.54%]). When looking at different ORFan groups, the percentage of PS-ORFans is always the highest (18.75% for prophages and 30.41% for PAIs). Additionally, it appears that ORFans are more likely to be carried by PAIs and prophages than non-ORFans in both P and NP genomes, which extends the finding made in reference 25. For VFs, the numbers of ORFans annotated as VFs are very small, in contrast to much larger numbers for non-ORFans. Notably, 259 (0.66%) NS-ORFans are VFs, compared to 2,718 (4.84%) PS-ORFans being VFs. 
A previous study has shown that VFs are highly enriched in PAIs compared to non-PAI regions (26). Interestingly, here we showed that most VFs are found in non-ORFans (more conserved genes shared by P and NP genomes). This is likely because, as indicated in reference 26, there are VFs commonly found in P and NP genomes, which are more abundant in bacterial genomes than those pathogen-associated VFs. For HGTs, non-ORFans were excluded in our HGT identification because they do not qualify, "having limited blastp hits in taxonomically close (genus-level) genomes" (see Materials and Methods). Table 2 shows that NP genomes have higher percentages of ORFans identified as HGTs than P genomes, contrary to the other three types of PRGs. However, it should be noted that Table 2 combined ORFans of the nine genera as a whole for comparisons. Thus, the above observations could be biased due to the fact that some genera have more genomes (e.g., Streptococcus) or have better-annotated PRGs (e.g., Escherichia) than others. To obtain more statistically robust results without biases, we have counted the number of ORFans in each genome (see Data Set S1 in the supplemental material), calculated the percentages, and further statistically compared the P and NP genomes in each genus. Pathogens do not always have more genes than nonpathogens. The pairwise nonparametric Wilcoxon test P values (the second column of Table 3) show that not all genera have their P genomes carrying more genes than NP genomes. In four out of the nine genera: Bacillus, Escherichia, Pseudomonas, and Streptococcus, the P genomes have a higher number of genes than NP genomes. However, it is the opposite in three other genera: Clostridium, Corynebacterium, and Mycobacterium. This result remains the same even when excluding plasmids in the analysis. This finding largely agrees with a previous study (44), which compared the number of genes in four genera (Bacillus, Escherichia, Pseudomonas, and Burkholderia) using a smaller data set. Entwistle et al. Pathogens do not always have more PRGs than nonpathogens. In Table 3, we have also compared the percentage of PRGs between P and NP genomes in each genus. (Detailed counts are available in Data Set S1.) For prophage-carried genes, Table 3 shows that, although in Escherichia, pathogens tend to have more genes located in prophages than nonpathogens (44), in the other eight genera pathogens do not have more prophages than nonpathogens. For PAIs, in two genera (Burkholderia and Escherichia), the percentage of genes located in PAIs is higher in P genomes, while in two other genera (Clostridium and Pseudomonas), it is the opposite. Thus, it was inaccurate to conclude based on Table 2 that there is a higher percentage of prophages and PAIs in P genomes of all nine genera, because this is only true for Escherichia (Table 3), which dominated the prophage and PAI data. For VFs, four genera (Corynebacterium, Listeria, Mycobacterium, and Pseudomonas) have a higher percentage of VF-carried genes in P than NP genomes. Lastly, for HGTs, four genera (Burkholderia, Clostridium, Corynebacterium, and Mycobacterium) have a lower percentage of ORFans derived from HGT in P than NP genomes. Therefore, the genus-by-genus statistical tests showed that pathogens do not always have more PRGs than nonpathogens, and the observations vary between different genera. The percentage of PS-ORFans is always higher than that of SS-ORFans in pathogens, which is not true in nonpathogens. 
When taking the P and NP genomes of the nine genera as a whole for comparison, a sequence of percentages was observed in Table 2: % NS-ORFans (NP) > % PS-ORFans (P) > % SS-ORFans (NP) > % SS-ORFans (P). For more accurate comparisons without bias from combining different genera, we have performed genus-by-genus statistical tests, and for each genus, four comparisons with the four groups of ORFans have been made (see Fig. 2 legend). Wilcoxon nonparametric test P values for these comparisons can be found in Table S2 in the supplemental material. The detailed counts of different ORFans are available in Data Set S1. For the comparison SS-ORFans (P) versus SS-ORFans (NP), only in Escherichia was the percentage of SS-ORFans (P) significantly higher than the percentage of SS-ORFans (NP); in six genera (Burkholderia, Corynebacterium, Listeria, Mycobacterium, Pseudomonas, and Streptococcus), it is the opposite. For the comparison PS-ORFans (P) versus NS-ORFans (NP), in three genera (Escherichia, Burkholderia, and Streptococcus), the percentage of PS-ORFans is significantly higher than the percentage of NS-ORFans; however, in three other genera (Bacillus, Corynebacterium, and Pseudomonas), it is the opposite. All of these findings suggest that nonpathogens do not necessarily have more ORFans than pathogens, because different genera behave differently. For the comparison PS-ORFans (P) versus SS-ORFans (P), in the nine genera, the percentage of PS-ORFans is always significantly higher than the percentage of SS-ORFans. This suggests that ORFans tend to be shared by different pathogenic genomes. However, for the comparison NS-ORFans versus SS-ORFans (NP), in four genera (Bacillus, Clostridium, Corynebacterium, and Pseudomonas), the percentage of NS-ORFans is significantly higher than the percentage of SS-ORFans, while in Escherichia, the percentage of NS-ORFans is significantly lower than the percentage of SS-ORFans, and in the other four genera, there is no significant difference. Therefore, unlike in P genomes, NS-ORFans are not always more abundant than SS-ORFans in NP genomes. PS-ORFans are always more abundant than SS-ORFans in PRGs in pathogens, which is not true in nonpathogens. We continued by comparing the percentages of the different ORFan groups within the four types of PRGs (prophages in Table 4, PAIs in Table 5, VFs in Table 6, and HGTs in Table 7), which is a novel analysis of this study. For prophages, PAIs, and VFs, we first compiled a list of proteins encoded by these PRGs in each genome, and then we separated PRGs into SS-ORFans, PS-ORFans, and non-ORFans in pathogenic (P) genomes and into SS-ORFans, NS-ORFans, and non-ORFans in nonpathogenic (NP) genomes. Lastly, we calculated their percentages for Wilcoxon tests. For HGTs, non-ORFans were excluded in the Wilcoxon tests of Table 7. The detailed counts of different ORFans in different PRGs are available in Data Set S1. The most interesting observation from Tables 4 to 7 is that the percentage of PS-ORFans is significantly higher than the percentage of SS-ORFans in P genomes of almost all the genera for all four types of PRGs. (Listeria in Table 6 has a P value of 0.5, because only 1 out of the 40 Listeria genomes has VFs, and thus, the P value is not meaningful.) This also agrees with the finding made in Fig. 2 and Table S2 that in P genomes of the nine genera, the percentage of PS-ORFans is always higher than the percentage of SS-ORFans. This finding suggests that PS-ORFans (shared by multiple P genomes) are more associated with bacterial pathogenicity than SS-ORFans (unique in each genome).
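The genus-by-genus comparisons described above can be sketched roughly as follows. This is a hypothetical illustration, not the authors' code: the data layout and numbers are invented, and the "pairwise nonparametric Wilcoxon test" between the two independent groups of genomes is taken to be the Wilcoxon rank-sum test, for which SciPy's Mann-Whitney U implementation is equivalent.

```python
# Sketch: per-genus comparison of a per-genome percentage (e.g., % ORFans)
# between pathogenic (P) and nonpathogenic (NP) genomes.
from collections import defaultdict
from scipy.stats import mannwhitneyu  # rank-sum test for two independent groups

# hypothetical records: (genus, P/NP label, per-genome percentage)
per_genome = [
    ("Escherichia", "P", 4.8), ("Escherichia", "P", 5.1),
    ("Escherichia", "NP", 2.9), ("Escherichia", "NP", 3.2),
    # ... one entry per genome, for all nine genera (from Data Set S1)
]

by_genus = defaultdict(lambda: {"P": [], "NP": []})
for genus, group, pct in per_genome:
    by_genus[genus][group].append(pct)

for genus, groups in sorted(by_genus.items()):
    if len(groups["P"]) > 1 and len(groups["NP"]) > 1:
        stat, p = mannwhitneyu(groups["P"], groups["NP"], alternative="two-sided")
        print(f"{genus}: U = {stat:.1f}, P = {p:.3g}")
```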
In contrast, in NP genomes, the comparison of the percentages of NS-ORFans and SS-ORFans for the four types of PRGs does not show such uniformity. Particularly, for prophages and PAIs (Tables 4 and 5), most of the genera show no significant difference. To study what functions are overrepresented in ORFans, we have compared the GO annotation of our four ORFan data sets against that of a protein data set randomly selected from the entire gene content of the nine genera. A binomial test was run on each GO term to test if the ORFan count is significantly higher than the random protein count. Data Set S2 in the supplemental material provides the top-ranked GO terms that are significantly overrepresented in the four groups of ORFans. As expected, GO terms related to phages (such as DNA integration, virus tail fiber assembly, and viral genome ejection) are among the most overrepresented functions found in PS-ORFans. Interestingly, DNA integration is also in the top 10 GO terms found in the other three ORFan groups. In addition, two GO terms (DNA excision [related to DNA repair after recombination] and response to nutrient [related to extracellular stimulus]) are found in the top 10 terms for three of the four ORFan groups. A database of ORFans of pathogenic bacteria. All the ORFan data generated in this study are provided through an online database, ORFanDB (http://cys.bios.niu.edu/ORFanDB/). The website features an embedded interactive web application that allows a user to select a species and then further narrow their selection based on strain and ORFan type using a set of nested tabs. The final nested tab ("Protein Information") reveals data about the ORFan, such as hits in PRGs, a JBrowse instance showing the genomic neighborhood, and genome metadata curated from JGI (Joint Genome Institute). There is also a download page from which the user can download all the data available, genus-specific data, or ORFan type-specific data. Lastly, a help page and an about page were created to provide the user with information on how to use the application. DISCUSSION Previous literature has studied the four types of pathogenicity-related genes (PRGs) using comparative genomics approaches (25-27, 44). Two papers have specifically compared prophages (44) and VFs (26) between pathogens and nonpathogens. In addition, we and others have focused on developing new computational methods for the identification of ORFans in hundreds of bacterial genomes and metagenomes (2-4, 6). Despite these previous efforts, the novelty of the current work is that we have separated ORFans into four different groups, which enabled us to compare them within/between pathogens and nonpathogens of the same bacterial genus, particularly in terms of their relative abundance in the four types of PRGs. Before this study, the previous literature had already suggested that (i) at least in some genera, P genomes are larger than NP genomes (44), (ii) ORFans are overrepresented in PAIs compared to the rest of the genome (25), and (iii) combining genomes from different genera, overall, P genomes have fewer ORFans than NP genomes (26). Our data extended these findings. For example, for finding i, Table 3 showed that in four out of nine genera, P genomes have more genes than NP genomes, whereas in the other five genera, this is not true.
For finding ii, the previous finding was extended with four groups of ORFans in Table 2, which showed the following for genes located in PAIs: % PS-ORFans (P) > % SS-ORFans (NP) ≈ % SS-ORFans (P) > % NS-ORFans (NP) >> % non-ORFans (P) > % non-ORFans (NP). This finding was also extended to prophages, showing the following: % PS-ORFans (P) > % SS-ORFans (P) > % NS-ORFans (NP) > % SS-ORFans (NP) >> % non-ORFans (P) > % non-ORFans (NP). For finding iii, Table 2 confirmed that NP genomes have a higher overall percentage of ORFans than P genomes, but also showed that the percentage of SS-ORFans (NP) is higher than the percentage of SS-ORFans (P), and the percentage of NS-ORFans (NP) is higher than the percentage of PS-ORFans (P). However, we argued that an unbiased genus-by-genus comparison was required to obtain a more accurate result. When comparing them in each genus (Fig. 2 and Table S2), the percentages of NS-ORFans (NP) and SS-ORFans (NP) were no longer always higher than those of PS-ORFans (P) and SS-ORFans (P), respectively. For example, in Escherichia, the percentage of PS-ORFans (P) was significantly higher than that of NS-ORFans (NP) and the percentage of SS-ORFans (P) was significantly higher than that of SS-ORFans (NP). The most significant findings of this study are that in pathogens of the nine genera, the percentage of PS-ORFans was consistently higher than that of SS-ORFans (Fig. 2 and Table S2), and the percentage of PS-ORFans annotated to be PRGs (all four types) was also consistently higher than that of SS-ORFans (Tables 4 to 7). These findings were even more intriguing given that, in nonpathogens of the nine genera, such a strong and uniform pattern (i.e., % NS-ORFans > % SS-ORFans) did not exist. To add even more support for these findings, we have run an "all versus all" blastp search on the 56,196 PS-ORFan and 39,437 NS-ORFan data sets (Table 2) separately. Then we counted how many genera each query ORFan had hits in. In total, 2,437 (4.34%) PS-ORFans and 2,088 (5.29%) NS-ORFans also have blastp hits in genera other than their own. After grouping ORFans based on the number of genera (ORFan conservation), we plotted the percentages of each group matching prophages and PAIs and observed a positive correlation for PS-ORFans but not for NS-ORFans (Fig. 3). We also did the same for VFs and HGTs (see Table S4 in the supplemental material). VFs showed a similar pattern, but the numbers were too small to be significant. HGTs had positive correlations in both PS-ORFans and NS-ORFans. Overall, this further suggests that the more conserved PS-ORFans are (found in more genera), the more likely they are pathogenicity related. In contrast, this is not true for NS-ORFans - at least in prophages and PAIs. From the evolutionary selection perspective, new genes from phages, distant bacteria, PAIs, and other mobile genetic elements can constantly enter the host genome through horizontal gene transfer; however, these new genes have to go through the natural selection process, where only those providing selective advantage to their bacterial hosts (i.e., pathogenicity) are eventually fixed in the pathogen population (e.g., found in multiple pathogenic genomes of the same genus). It should be mentioned that such an HGT selection model works for any genes and any biological processes in any genomes. Notably, in nonpathogens, we also observed a significant percentage of ORFans and PRGs (Table 2).
However, the selection of PRGs and ORFans in nonpathogens may not be as strong and universal as in pathogens. These findings strongly suggest that the PS-ORFans that are shared by multiple pathogens are more likely than SS-ORFans to transform a nonpathogen into a pathogen. Therefore, PS-ORFans should be considered better targets for identifying novel PRGs and for developing diagnostic/therapeutic drugs. Lastly, other than ORFans that originated through horizontal gene transfer (gene gain) from phages or other bacteria, there are other important factors that can also account for bacterial pathogenicity, such as gene loss due to genome reduction (i.e., smaller P genomes), modification of the core genome (non-ORFans) with single nucleotide polymorphisms (SNPs), indels, and recombinations (42,43,45). Although not a focus of this study, some of these factors such as SNPs found in PRGs of non-ORFans may be a more plausible reason for infectious disease outbreaks, which usually happen in a relatively short evolutionary time scale, as revealed by the numerous recent whole-genome shotgun sequencing efforts for genomic epidemiology studies (e.g., reviewed in references 46, 47, and 48). MATERIALS AND METHODS Genome data. In total, 6,005 completely sequenced and assembled bacterial genomes were downloaded from the RefSeq database (ftp://ftp.ncbi.nih.gov/genomes/refseq/bacteria) as of August 2017, denoted as Bacteria-DB. A list of bacterial genomes at http://www.pathogenomics.sfu.ca/pathogen-associated/2014/ was manually curated and classified into pathogen (P) and nonpathogen (NP) groups by the Brinkman lab (26). As this list was from an older version of the RefSeq database, a smaller number of genomes was curated and available at the above web link than in the Bacteria-DB we used. The 2,864 GenBank accession numbers (ACs) of these genomes were used to extract their RefSeq data files (genomic fna, protein faa, etc.) from the Bacteria-DB. Out of the 2,864 ACs, 2,479 were found in Bacteria-DB. Nine genera with >5 pathogenic and >5 nonpathogenic genomes (in total, 505 genomes) were kept for further analyses. ORFan identification. As shown in Fig. 1, for each bacterial genus, we used all of its genomes (P and NP) to make a combined proteome (all proteins of a genome). We then ran an "all versus all" blastp search (E value of <0.01) using DIAMOND (49), and based on the search result, we classified proteins of each genome into the following: 1. SS-ORFans: strain-specific ORFans, defined as proteins with DIAMOND hits restricted to the query genome (two groups of SS-ORFans: those from P and those from NP); 2. PS-ORFans (only in P): pathogen-specific ORFans, defined as proteins with DIAMOND hits restricted to ≥2 pathogenic genomes; 3. NS-ORFans (only in NP): non-pathogen-specific ORFans, defined as proteins with DIAMOND hits restricted to ≥2 non-pathogenic genomes; 4. Non-ORFans: defined as the rest of the proteins in the genomes. PRGs. Four types of genes were identified in the 505 genomes: prophage genes, PAI genes, VF genes, and HGT genes. The genomic locations of ORFans were compared to the genomic locations of prophages in the PHASTER database (50) and to the genomic locations of PAIs in the IslandViewer database (51). The ORFan genes in prophages and PAIs were then classified into SS-ORFans, PS-ORFans, and NS-ORFans groups. To determine if an ORFan is a virulence factor (VF) gene, ORFan sequences were blastp searched against the VFDB (52) using DIAMOND (E value of <1e-5).
Horizontally transferred (HGT) genes were identified as proteins having limited blastp hits in taxonomically close (genus-level) genomes but more hits in taxonomically distant (order-level) genomes. To determine if an ORFan is horizontally transferred, ORFan sequences were blastp searched against the protein sequences of the Bacteria-DB (6,005 genomes of various taxonomic phyla) using DIAMOND (E value of <1e-5). We defined an ORFan to be horizontally transferred if it has very few blastp homologs within the studied genus, but has blastp homologs in other taxonomic orders. Specifically, the DIAMOND result was filtered to remove all hits of the same genus as the ORFan query. Then the taxonomic lineages of the remaining hits were examined. If the ORFan has all its remaining hits from different taxonomic orders (two levels up from genus in the taxonomy hierarchy), it means that the ORFan does not have blastp hits in genomes of the same genus other than those used for ORFan identification, but has hits in genomes of more distant orders. This is evidence of gene transfer from distant organisms, and such ORFans were retrieved as HGTs. For example, a PS-ORFan protein, WP_001086421.1, from Escherichia coli APEC O1 (GCF_000014845) has a small number of blastp hits within the Escherichia genus (all hits are from pathogenic genomes) and no other hits within the Enterobacterales order. However, it has numerous hits in other orders of the Gammaproteobacteria class and orders of other bacterial phyla. Such an atypical taxonomic distribution of WP_001086421.1's blastp hits can be explained either by HGT from distant organisms into pathogens of the Escherichia genus or by massive gene loss within the Enterobacterales order. As the Enterobacterales order is one of the most sequenced bacterial orders (thousands of genomes in Bacteria-DB), the chance of massive and independent gene loss is much smaller than the chance of recent HGT. This is true for all the genomes of the nine genera, for they are all from well-represented orders in the genome database. Functional annotation of ORFans. We modified a workflow reported in reference 3 to annotate ORFans with Gene Ontology functional descriptions. DIAMOND was used to compare all the ORFans to the UniProt database. The best hit of each ORFan was kept if the alignment identity was ≥80% and the E value was ≤0.01. The GO terms of the UniProt hits were then assigned to the ORFans by parsing the UniProt ID mapping file downloaded from the UniProt ftp site. In total, 39,330 ORFans were annotated with GO using UniProt2GO. ORFans that were not annotated by UniProt2GO were then compared to the PDB70 database using the more sensitive profile-based tool hhsearch (53). The results were parsed to keep the best hit if the probability was ≥80% and the E value was ≤1. The GO terms of the PDB hits were then assigned to the ORFans by parsing the PDB2GO mapping file downloaded from the GOA (GO annotation) ftp site. In total, 13,053 ORFans were annotated with GO using PDB2GO. Altogether, 52,383 ORFans were mapped to GO terms. For GO enrichment analysis, 100,000 proteins were randomly selected from the nine genera, and subjected to the same workflow to be mapped to GO terms. The R function binom.test was used to compare the number of ORFans with a specific GO term (limited to the 5th level of GO terms from BP [biological process] and MF [molecular function] categories) to the number of random genes with the same GO term. P.adjust in R was used to adjust for multiple comparisons.
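As a rough illustration of the ORFan identification scheme described above, the classification can be sketched as follows. This is not the authors' pipeline: the DIAMOND output is assumed to have been reduced to query-subject pairs, and the protein-to-genome and genome-to-P/NP mappings are assumed to be supplied separately.

```python
# Sketch of the SS-/PS-/NS-/non-ORFan classification from an all-vs-all
# DIAMOND blastp run (E < 0.01) within a single genus.
from collections import defaultdict

def classify_orfans(diamond_hits, protein2genome, genome2label):
    """diamond_hits: iterable of (query_protein, subject_protein) pairs.
    protein2genome: protein accession -> genome accession.
    genome2label:   genome accession  -> "P" or "NP"."""
    hit_genomes = defaultdict(set)
    for query, subject in diamond_hits:
        hit_genomes[query].add(protein2genome[subject])   # includes the self-hit

    classes = {}
    for protein, genomes in hit_genomes.items():
        labels = {genome2label[g] for g in genomes}        # {"P"}, {"NP"}, or both
        if genomes == {protein2genome[protein]}:
            classes[protein] = "SS-ORFan"                  # hits confined to own genome
        elif labels == {"P"} and len(genomes) >= 2:
            classes[protein] = "PS-ORFan"                  # >=2 pathogenic genomes only
        elif labels == {"NP"} and len(genomes) >= 2:
            classes[protein] = "NS-ORFan"                  # >=2 nonpathogenic genomes only
        else:
            classes[protein] = "non-ORFan"                 # shared across P and NP
    return classes
```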
Data availability. The data from this study were organized into a MySQL database. A web application was written in R, using primarily the Shiny package, to provide a user interface to explore these data. Shiny Server was used to host the publicly available website, ORFanDB, in which all of the ORFan data have been made available (http://cys.bios.niu.edu/ORFanDB/).
6,218.8
2019-02-12T00:00:00.000
[ "Biology", "Medicine" ]
The black box problem revisited. Real and imaginary challenges for automated legal decision making This paper addresses the black-box problem in artificial intelligence (AI), and the related problem of explainability of AI in the legal context. We argue, first, that the black box problem is, in fact, a superficial one as it results from an overlap of four different – albeit interconnected – issues: the opacity problem, the strangeness prob - lem, the unpredictability problem, and the justification problem. Thus, we propose a framework for discussing both the black box problem and the explainability of AI. We argue further that contrary to often defended claims the opacity issue is not a genuine problem. We also dismiss the justification problem. Further, we describe the tensions involved in the strangeness and unpredictability problems and suggest some ways to alleviate them. Introduction One of the most pressing problems related to the use of AI in the decision-making processes is the so-called black box problem (Castelvecchi 2016;Rudin 2019).Various AI tools, especially those based on the machine learning mechanism, are designed to analyze huge sets of data, find patterns 'hidden' therein, and offer a solution (e.g., a decision to a legal case, a medical course of action, granting a loan, etc.). The problem is that -for various reasons -we often do not know how or why the algorithm got to the proposed solution.This may obviously be problematic.Imagine we decided to use AI to adjudicate fairly simple legal cases in order to reduce judge's caseload and speed up legal proceedings.The question is, whether we would be content in accepting such judgment 'no matter what', or rather we would require at least the basic knowledge of how the algorithm arrives at its decisions.Should we allow AI algorithms to be 'black boxes', or should we rather have the ability to look into the boxes to understand what happens? In the AI scholarship the black-box problem is often addressed under the label of explainability1 (Guidotti et al. 2018;Miller 2019;Arrieta et al. 2020;Vilone and Longo 2021).In general, an AI system (e.g., a machine learning model) may be considered explainable if it can be understood by humans through an external, simplified representation (though explanatory value of internal components of the models is also discussed -see Jain and Wallace 2019; Wiegreffe and Pinter 2019).The issue is frequently tackled from two different perspectives -technical (IT) and legal -between which there seems to be a significant gap.Law itself does not define explainability; it does however provide rules, mostly requirements, which in the literature are linked to the issue of explainable automated decision making (Doshi-Velez et al. 2017, Wachter et al. 2017a, Bibal et al. 2021).Those requirements are not homogenous: • some concern the private sector, other apply to the public administration and judiciary, • some are specific to the automated decision making, other relate to decision making in general, • (maybe most importantly) some pertain to the AI systems as a whole, others to the instances of decisions. The private automated decision making with regard to explainability is regulated by the growing, mostly with successive EU acts, body of specific rules (see e.g.art.13.2(f) and 14.2(g) of the GDPR and amended art.6(a) of Directive 2011/83 on Consumer Rights).Even those rules do not, however, contain a unified concept of explainability.According to Bibal et al. 
(2021) they may be interpreted in technical terms and categorized as requirements for revealing of: • the main data features used in the model or in a decision, • all data features used in a decision, • the combination of data features used in a decision, • the whole model. On the other hand, automated administrative and judicial decision making, if at all permissible, seems to be bound by the general rules adopted in administrative and court procedures.In general, the decisions should be motivated (justified) with reasons: facts, legal provisions and, in case of judicial decisions, reasoning related to the arguments of the litigation parties.From the technical perspective, the system should be able to provide not only the decision, but also the relevant legal rules and proper arguments (Atkinson et al. 2020;Bibal et al. 2021). Such considerations have also led to the development of the explainable AI movement (XAI) (Gunning and Aha 2019).It is based on the assumption that AI algorithms already perform tasks of such importance that they should be 'white boxes', transparent not only to their creators but also to end-users (or anyone who may be affected by the algorithm's decision).In this context, the US National Institute of Standards and Technology developed four principles of XAI: (1) The system should be able to explain its output and provide supporting evidence (at least). (2) The given explanation has to be meaningful, enabling users to complete their tasks.If there is a range of users with diverse skill sets, the system needs to provide several explanations catering to the available user groups. (3) This explanation needs to be clear and accurate, which is different from output accuracy.(4) The system has to operate within its designed knowledge limits to ensure a reasonable outcome (Phillips et al. 2021). A similar idea is sometimes proposed in the context of the laws of robotics, which are easily transferable to the general AI research, when it is suggested that in addition to the famous three laws identified by Asimov a fourth criterion should be added -the explainability of AI actions (Murphy andWoods 2009, Wachter et al. 2017b). Are these proposals (legal regulations, XAI movement, etc.) the right way to proceed?Are AI algorithms really black boxes?And even if they are -do we really need to worry about it?In this short essay we would like to suggest that the black box problem is, in fact, a superficial one as it results from an overlap of four different -albeit interconnected -issues: the opacity problem, the strangeness problem, the unpredictability problem, and the justification problem.Let us consider all those issues one by one. Minds as blackboxes Imagine one has designed a sophisticated algorithm which aids judges to determine the optimal sentence for a repeat offender.The algorithm is fed with data pertaining to the offender's personal life, the circumstances of their first as well as subsequent offenses, but also with a huge database on other repeat offenders: what were the circumstances of their actions, what were their sentences, how they behaved in prison, what was their life after they were released, etc.The size of the database is such that no man would be able to get acquainted with it, not to mention analyze it, within their lifespan.Meanwhile, the algorithm, run on a fast computer, needs only seconds to come up with a verdict.How should we perceive such verdicts?Do we know what happens when the algorithm does its magic? 
The answer is yes and no.Yes, because we have designed the algorithm in such a way that it looks for patterns in the huge database.The pattern may be, for example, that repeat offenders who are highly intelligent and have no permanent employment are more likely to keep breaking the law, therefore a longer sentence (and hence a longer isolation) is called for in their case.What we know is that the algorithm looks for such patterns.What we do not know is what the pattern on which the algorithm 'based its decision' is, and what exactly were the steps that led the algorithm to such a conclusion. Is this problematic?At the surface, it seems like a very bad way to make important decisions.In public life, and in the law particularly so, we strive for transparency, and there seems to be no transparency in the machine learning 'magic'.But let us compare our algorithm with a real judge, who makes a decision in a similar case.Do we really know what is going on in the judge's head?Can we be sure what is the pattern they base their decision on?The last decades of research in experimental psychology and neuroscience suggest a plain answer to this question, and the answer is 'no' (Bargh and Morsella 2008, Guthrie et al. 2001, Brożek 2020).The way people make most -if not all -of their decisions is unconscious.In our decision-making, there are no clearly identifiable 'steps', which we are aware of and can control.Usually, the decision simply appears in our minds, 'as if from nowhere' (Damasio 2006). The interesting thing is that -in experimental psychology and neuroscience -what we are trying to figure out is what is the mechanism of the unconscious decisionmaking.Thus, in relation to the functioning of the human mind we are looking for the kind of knowledge we already have in the case of any AI algorithm.For example, it is very likely that the human mind is a powerful pattern-finder.What we are interested to learn is how such patterns are found, what is the pattern-detection mechanism, and not what patterns are actually found, how do they look, or why the mind has based its decision on this rather than a different pattern. From this perspective, the human mind is much more of a black box than the most sophisticated machine learning algorithm.For the algorithms, we at least know how they work, even if we cannot explain why they have arrived at a particular decision.In the case of the human mind, we have only a tentative outline of the answer to the question how it works (Bonezzi et al. 2022). Our inability to understand exactly what an AI algorithm does is sometimes referred to as the opacity problem (Burrell 2016;Zednik 2021).However -when compared with our knowledge and understanding of the way the human mind works -the algorithms are not really that opaque.The opacity problem does not seem to be a genuine issue.At the same time, we do not question the decisions humans make, or at least not in the way we put into doubt the decisions made by AI.We do not treat minds as black boxes, even if they seem to be black boxes par excellence.Why is it so? 
Stranger things A simple answer to the question posed at the end of the previous section is that we do not consider our minds 'black boxes' because we are familiar with them.AI algorithms, and in particular machine learning algorithms, seem like black boxes, because they are unfamiliar: they are 'strange things' we are not yet accustomed to.The operative word here is 'yet'.Ever since the beginning of human civilization, we have created many artifacts which -initially strange to us -have become familiar companions in our daily lives.Writing, print, steam engines, electricity, automobiles, radio, television, computers, mobile phones -they all once were sources of fear and awe, mysterious black boxes, but, with time, we have got accustomed to them.The same may be true of AI algorithms providing us with practical (legal, medical, technical) decisions: give us some time together and we will get familiar with them. This answer -that we consider AI algorithms 'black boxes' because they are 'stranger' than other things -may be true, but it is at the same time somewhat shallow.We believe that there is a deeper reason for human beings to be perfectly happy with the decision-making processes of the human mind, while feeling uneasy when letting AI algorithms decide.In order to understand it, we need to spare a few words on folk psychology. Folk psychology is the ability of mind-reading, i.e., of ascribing mental states to other people.A more detailed characterisation -albeit not an incontestable onehas it that folk psychology is a set of the fundamental capacities which enable us to describe our behavior and the behavior of others, to explain the behavior of others, to predict and anticipate their behavior, and to produce generalizations pertaining to human behavior.Those abilities manifest themselves in what may be called the phenomenological level of folk psychology as "a rich conceptual repertoire which [normal human adults] deploy to explain, predict and describe the actions of one another and, perhaps, members of closely related species also.(…) The conceptual repertoire constituting folk psychology includes, predominantly, the concepts of belief and desire and their kin -intention, hope, fear, and the rest -the so-called propositional attitudes" (Davies and Stone 1995). One can also speak of the architectural level of folk psychology which consists of the neuronal and/or cognitive mechanisms which enable ascribing mental states to others.Importantly, this level is not fully transparent or directly accessible to our minds -while we are able to easily describe the conceptual categories we use to account for other people's behavior (at the phenomenological level), we usually have no direct insight into the mechanisms behind mind-reading (Brożek and Kurek 2018). 
It follows that we understand and explain behavior - including decision-making - not as it happens, but as seen through the lenses of the folk-psychological conceptual scheme. Moreover, we are in principle not aware that this interpretive mechanism is at work, since the architectural level of folk psychology is not something we may observe. This fact explains why we have no problem in accepting decisions made by other people - even if actually their minds are black boxes to us. We do not see it that way, because the conceptual apparatus of folk psychology makes us interpret the behavior of others as an outcome of a decision-making process which seems to be transparent and perfectly understandable. At the same time, we have a problem accepting a decision made by an AI algorithm, because this is not the way decisions are made, at least from the point of view of folk psychology. Thus, our thesis is that the opacity problem is not the real problem with AI algorithms. The real issue lies somewhere else and may be deemed the strangeness problem. Moreover, the strangeness in question is not superficial - it will not dissolve once we get accustomed to the AI algorithms making decisions for us. Such algorithms are different from cars, airplanes and mobile phones, because they seem to be doing what - according to the folk psychological conceptual scheme - only real people, equipped with rational minds and free will, can do. Is it possible to overcome this difficulty? It is extremely difficult to answer this question. On the one hand, the research in psychology and anthropology shows that the folk psychological conceptual scheme is a cultural creation - it differs from culture to culture (Lillard 1998). For example, the very concept of agency and decision-making is different in Western culture and in Eastern cultures or the cultures of indigenous peoples of the Amazon (Morris and Peng 1994). From this perspective, it may be possible for the folk psychological conceptual scheme to evolve with time into something different, e.g., a framework which accepts AI algorithms as capable of decision-making. On the other hand, at least some mechanisms behind folk psychology seem to be inborn. In particular, as suggested by research in developmental psychology, it seems that folk psychology is deeply rooted in the human ability to spontaneously distinguish between two kinds of interactions (causality) in the world - physical and intentional (Bloom 2004). We perceive the interactions between physical objects as governed by a different set of laws than the intentional actions of other people. Only in the latter case can one speak of genuine decision-making processes. The question is, therefore, whether AI algorithms can fall into this second category - can we perceive them as intentional?
It seems that the answer is negative as long as we do understand how the algorithm functions. Historically, humans attributed intentionality to physical objects - stones, trees or rivers (Hutchison 2014). Even today, we have a tendency to anthropomorphize inanimate objects (e.g., robots which perform some mundane tasks). However, given our current worldview, once we know that a vital decision (e.g., to a legal case or pertaining to medical treatment) is generated by a 'soulless' algorithm, such anthropomorphic attributions may not be possible. They would require a major shift in our worldview, which - given its nature - is difficult if not impossible to imagine. Thus, it does not seem likely that we will perceive AI algorithms as making genuine decisions, especially the more vital ones. The strangeness problem is an enduring one. Cognitive safety Let us imagine now that someone has developed an extremely complex AI algorithm based on deep machine learning which has one goal: to provide an answer to the question in the form 'what is the sum of x + y', where x and y are natural numbers in the range from 1 to 100. The algorithm uses a huge dataset and is highly accurate - in fact, it has been used several million times and has always given a correct answer. Disregarding the fact that there are much less complex computational ways for adding natural numbers, we would probably never question the algorithm. The reason is that it gives answers which are expected. What is expected gives us no headache. The human mind is a wonderful mechanism, capable of maneuvering in a highly complex and unpredictable world. Because of this complexity and unpredictability, the mind naturally gravitates towards often used and well-tested behavioral patterns and previously accepted beliefs. It is somewhat cognitively rigid, or to put it differently, it is a cognitive conservative (Kruglanski 1989; Webster and Kruglanski 1994; Brożek 2020). Revolutions in our individual cognitive spheres as well as in our culture do not happen too often and take some time to exercise real influence on what we think and do. One important lesson which comes from psychology and the cognitive sciences is that the human mind strives for certainty (Kruglanski 1989). This need is deeply rooted in us by the evolutionary processes and manifests itself, inter alia, in our emotional mechanisms (Kruglanski 1989; Brożek 2020). In recent years much attention has been paid to the so-called epistemic emotions (Gopnik 2000). Contradiction or some other inconsistency in our experience - be it Einstein's uneasiness that the observed motion of Mercury minimally deviates from the predictions of Newton's theory, or the feeling that 'something would be wrong' if we just went on holiday ignoring the fact that our uncle is terminally ill - always generates an emotion: of curiosity, disorientation or even anxiety. It motivates us to seek an explanation of where the cognitive dissonance comes from. The feeling that 'something is wrong' is the main driving force behind Einstein's search for the general theory of relativity; but it is the same force which makes us skip vacation in the face of a serious illness in our family. Without epistemic emotions there would be no discoveries, whether spectacular or small. The reduction of anxiety and disorientation, satisfying one's curiosity, and sometimes amusement or revelation, are the rewards we get for making our worldview more coherent (Hurley et al. 2011).
One would be mistaken, however, in claiming that emotions have only a positive influence on our cognitive processes, motivating us to search for better answers to the questions we pose operating in the physical and social environment. Emotions can also significantly disturb the thinking process - not only in extreme cases, where strong emotions like fear enter the stage, but also in quite ordinary decision processes. For example, it is difficult to 'work' for a longer time with two alternative and mutually inconsistent hypotheses. The mind has an inclination to quickly settle such a conflict rather than analyze the consequences of each hypothesis and systematically compare them. It is connected to the fact that - as we have observed above - our emotions drive us to certainty and reward us for it. It is difficult to accept that we base our beliefs and actions on conjectures rather than on solid and unshakable foundations. It is easier to believe that we have reached certainty even if objectively we are far from it (Kruglanski 1989). This drive for cognitive safety makes it difficult for us to accept outcomes of a decision-making process which are unexpected. It doesn't matter who or what is making the decisions: the mere fact that the outcome is unexpected makes us uneasy. For the same reason, a decision-making mechanism - be it a human being, an AI algorithm or an oracle - which produces expected outcomes is much easier to accept. This is problematic in the context of our discussion for the following reason: the role of AI algorithms is not only to replace us in some cognitive tasks such as making a medical diagnosis or delivering a legal decision. Given that AI algorithms - and in particular machine learning - are capable of analyzing huge datasets in ways far exceeding the abilities of the human mind, our hope is that the algorithms will produce better outcomes than humans are capable of. However, this means that these outcomes will be unexpected. In this way we arrive at the tension inherent in what we may call the unpredictability problem: we do not welcome surprises, while this is exactly what the AI algorithms are made for. Justify me In the law - and, more generally, in social life - we expect decisions to be justified or at least justifiable. The typical perception of how lawyers - and, in particular, judges - operate is based on three tenets: (1) Legal reasoning has a clearly identifiable structure. (2) Legal reasoning consists in carrying out operations on sentences (beliefs) in an algorithmic way. (3) Legal reasoning is based - in one way or another - on the rules of classical logic. As a consequence, legal reasoning aims at providing a solution to a legal case which is justified (rational) (Wróblewski 1992; Alexy 2009; Stelmach and Brożek 2006; Hage 2005). Meanwhile, research in cognitive science shows that the actual processes of practical reasoning are a far cry from such an ideal model. Although there is no one single, commonly accepted theory of actual legal thinking, the existing approaches seem to share (to a greater or lesser degree) the following assumptions (Brożek 2020; Brożek et al. 2021): (1) Most (if not all) legal decisions are made in a way which has no identifiable structure nor consists of algorithmic steps. The decisions appear in one's mind as if from nowhere. (2) Most, if not all, decisions are made in (a) an unconscious way, where the unconscious processes are largely an effect of (b) social training and are based on (c) emotional reactions.
(3) In practical decision-making, reason (rational argumentation) has a secondary role.It either serves as an ex post factum rationalization of the decisions made (to defend those decisions against the criticism of others) or, in the best case, it has an indirect or otherwise hugely limited influence on the decisionmaking process (Haidt 2001). These facts underline another difficult tension: between the way we perceive rationality and justification, and the way in which justification is usually produced.The classical stance is that conscious, rational deliberation is what precedes the decision (Kant 1909).From this perspective, it is quite understandable that an outcome of the work of an AI algorithm, which cannot be traced back and repeated, and hence remains 'mysterious', cannot be treated as rational or justified.In other words, the decision reached by the algorithm does not meet our standards of justification -at least as long as the way of reaching the decision is considered constitutive of its justificatory power. However, a different approach -the one that takes seriously the actual mechanisms of (human) decision-making -opens the way for a different understanding of rationality and justification.It is not important how the decision was reached; the only question is whether the decision can be defended (justified) ex post.From this perspective, decisions made by AI algorithms can be rational, when an appropriate (acceptable) justification can be adduced in their favor. In fact, this perspective paves the way for a reconceptualisation of how algorithms for legal decision-making (or for aiding legal decision-making) should be structured.The general idea is to 'mimic' the behavior of the human mind.The envisaged system would consist of two components or modules: the 'intuitive' and the 'rational'.The intuitive module would enable the system to learn from experience (i.e., large datasets) what are the patterns connecting types of legal problems with the corresponding legal decisions.For such an architecture, some machine learning seems to be the best option.The rational module, in turn, would be based on the existing (mainstream) models in AI & Law, i.e. it would be based on the use of some appropriate logical system (e.g., a kind of defeasible logic).However, the goal of the module would be different than usually assumed in the AI & Law literature: instead of producing a legal decision in the case at hand, it would aim at justifying a decision reached by the intuitive module.Thus, the rational module would 'work backwards': given a decision (based on the knowledge accumulated in the datasets and 'uncovered' by machine learning algorithms), it would search for a proper justification for it (this is similar to the idea of post-hoc explanations; however, the goal of the rational module would be to look for justifications, not explanations).Such a procedure is not mysterious; in fact, this is what the original ancient Greek meaning of 'analysis' is.As the great mathematician Pappus put it: For in analysis we suppose that which is sought to be already done, and we inquire from what it results, and again what is the antecedent of the latter, until we on our backward way light upon something already known and being first in order.And we call such a method analysis, as being a solution backwards (anapalin lysin) (quoted after Hintikka and Remes 1974). 
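A purely conceptual sketch of the two-module architecture outlined above is given below. All names, the toy rule base, and the string-matching "prover" are hypothetical placeholders; a real system would use a trained machine-learning model for the intuitive module and a defeasible-logic prover for the rational one.

```python
# Sketch: an "intuitive" module proposes a decision learned from data, and a
# "rational" module then works backwards, searching for an acceptable ex post
# justification of that decision.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Justification:
    premises: List[str]   # legal rules and case facts used in the defence
    conclusion: str       # the decision being defended

def intuitive_module(case_features: dict) -> str:
    """Stand-in for a pattern-finding model trained on past cases."""
    # e.g. return trained_classifier.predict([case_features])[0]
    return "dismiss claim"

def rational_module(decision: str, case_facts: List[str],
                    rule_base: List[str]) -> Optional[Justification]:
    """Backward search: try to support `decision` from rules and facts."""
    support = [rule for rule in rule_base if decision in rule]
    if support:
        return Justification(premises=case_facts + support, conclusion=decision)
    return None   # no acceptable justification could be constructed

decision = intuitive_module({"claim_value": 1200})
justification = rational_module(
    decision,
    case_facts=["contract signed", "payment missing"],
    rule_base=["if limitation period expired then dismiss claim"],
)
```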
The proposed architecture requires one more element.If the process of constructing a justification for the 'intuitive solution' cannot be completed (e.g., there appears a contradiction), one cannot simply accept the intuitive decision.What is needed is a kind of 'feedback loop' between the intuitive and the rational components (in this, the proposed solution further differs from the ideas pertaining to post-hoc explanations). One can envisage it in various ways.In particular, it may function as a veto (if the intuitive decision cannot be justified, it is simply rejected and the intuitive module is activated to search for another solution), or it may be a more constructive mechanism (i.e., some modifications to the intuitive solution are introduced and tested in a backand-forth procedure between the intuitive and the rational modules). The soundness of the computational architecture outlined above notwithstanding, the moral of our considerations is that the justification problem in relation to the decisions made by AI algorithms remains a genuine one as long as we claim -for example after Kant -that the way the decision is made generates the justificatory power of that decision.Once this requirement is abandoned, and the method of the ex-post justification is allowed, the decisions of AI algorithms may be rendered rational; moreover, it is possible to construct the computational system in such a way that it takes advantage of the incredible power of pattern-finding in large datasets, while at the same time providing us with a justification for the decision made. Conclusion & perspectives Let us repeat the question which propelled our analysis: is there really a black box problem in relation to the AI algorithms?We believe that this question involves four different, although interconnected issues: the opacity, the strangeness, the unpredictability and the justification problems.Our analysis suggests that -contrary to the often expressed opinion -opacity problem is not significant.In fact, we do understand and can explain the operations of AI algorithms in a much better and more complete way than the functioning of the human mind.However, there is an additional problem here.The algorithms involved may be quite sophisticated so that only well-trained specialists may fully grasp their mechanics.From this perspective, the XAI and related postulates seem reasonable: the algorithm user (or someone whose legal or economic interest may be influenced by the decision made by the algorithm) should have access to an understandable (even if simplified) description of the functioning of the algorithm.If this is not the case, at least for some users the algorithms would remain 'black boxes'. Similarly, the justification problem discussed above does not seem to be a genuine one as long as we do not consider justification to be generated by the way in which a decision is made.This is a crucial observation.As we have seen, in the short discussion pertaining to the explainability of AI, it is sometimes claimed that an explainable algorithm for solving legal issues should provide us with reasons for the given decision.Once we admit that justification may be constructed ex post, this requirement can be met. 
The other two issues - of strangeness and unpredictability - are more problematic. The unpredictability of the decisions made by AI algorithms is what generates our distrust in them in the first place; however, it also represents what is really powerful in machine learning and related methods. They were designed to find patterns in datasets which cannot be analyzed by humans with their limited computational capacities. There is a genuine tension here. Fortunately, it does not seem to affect the question of explainability of AI as relevant to law. When we properly address the other problems - opacity, by providing a (simplified and) understandable description of the algorithm, and justification, by generating an acceptable ex post justification for the decision - even an unpredictable decision may be acceptable: ultimately, in such a case we would know what happened (i.e., how the algorithm works) and that the outcome is justified. The strangeness problem is also troublesome. It seems that - given our folk psychological conceptual apparatus - we cannot treat AI algorithms as genuine decision-makers. But this creates a kind of cognitive tension which is difficult to alleviate. There will always be a sense of strangeness and the transpiring need for a better understanding and justification of what AI algorithms do. Therefore, the postulates of the XAI movement as well as other suggestions pertaining to explainability of AI in the context of law are relevant. Our insight is, however, that the need for them does not arise from the opacity of AI algorithms but rather from their strangeness. We believe that our observations provide a new perspective on the discussions currently taking place in the AI&Law literature and pertaining to XAI. Arguably, most of the issues dealt with within the field of XAI & law stem from two fundamental questions: 1. When, if ever, should explainability be required by the law? 2. What kind of explanation would be optimal from the legal perspective? In the contemporary literature it is by far easier to find answers to the second question. As Bibal et al. (2021) point out, the general opposition is between explaining the mathematics of models and providing explanations that make sense to humans. The AI & law researchers seem to prefer the latter solution (Pasquale 2017; Selbst and Barocas 2018; Mittelstadt et al. 2019; Hacker et al. 2020) and offer its variants (Ye et al. 2018; Prakken 2020; Prakken and Ratsma 2021). This line of research also includes numerous papers interpreting the existing (or, as in the case of the EU AI Act, pending) legal requirements on explainability, criticizing them or proposing new mechanisms and provisions (Goodman and Flaxman 2017; Malgieri and Comandé 2017; Wachter et al. 2017a; Selbst and Powles 2018; Casey et al. 2019; Zuiderveen Borgesius 2020; Grochowski et al. 2021; Kaminski 2021; Hacker and Passoth 2022; Sovrano et al. 2022). It is worth noting that de lege lata objections state that the rules are vague, too weak or incompatible with the AI conceptual grid rather than unnecessary.
On the other hand, the first question (on the need for explainability in law) is tackled less often and mostly indirectly.Most prominently, Rudin (2019) advocates replacing explainable black boxes with inherently interpretable models, at least in the case of high-stakes decisions.It may be seen as abandoning one requirement (explainability) in favor of another, arguably a stricter one (interpretability).Similar concerns have recently been raised in the context of non-discrimination law by Vale et al. (2022). The account we presented above suggests that more balance is needed between the two issues.We agree with Rudin that explainability may not be the holy grail of AI & law.However, our premise is different: explainability is valuable, but does not have to be required to a greater extent than in the case of human decision making (at least when black boxes perform no worse than humans).Strangeness is more problematic than opacity, hence the criteria for explanation (justification) should be rather psychological (understanding, trust) than technical.Moreover, one should not dismiss various approaches to providing explanations without testing them (see Prakken and Ratsma 2021) in different legal contexts, as the needs and risks differ between sectors.The same applies to the development of legal rules on explainability (see Zuiderveen Borgesius 2020) -we should shape the law on the basis of existing problems, not adjust the problems to the law.
7,457.8
2023-04-04T00:00:00.000
[ "Law", "Computer Science" ]
Design and validation of an accelerator for an ultracold electron source I. BRIGHT ELECTRON SOURCES AND THEIR APPLICATIONS Pulsed high brightness electron sources are used, for example, in measuring the temperature of surfaces after interaction with ultrafast lasers [1], in observing transient structure in femtosecond chemistry [2], or in realizing high brightness x-ray sources [3]. The brightest pulsed electron sources are based on the photoemission process to produce electron bunches that are subsequently accelerated in strong electric fields [4]. A way to improve beam brightness is to reduce the source size, because the brightness is proportional to the beam current I divided by the surface area of the beam cross section A and the solid angle Ω associated with the uncorrelated angular spread, i.e., B ∝ I/(A Ω). One example of this approach is an electron source based on carbon nanotube (CNT) field emitters [5]. They are currently the brightest electron sources available. Here, the electrons are emitted from a submicron surface and are able to produce a current of up to 1 A. Some applications, such as ultrafast electron diffraction [6], x-ray free-electron lasers [7], or x-ray production by Compton scattering [8], can also benefit from higher brightness, but require much larger currents than CNTs can provide. In fact, the required currents can only be produced in pulsed mode. For these cases, an alternative route to increasing brightness was proposed [9]. Brightness depends inversely on the square of beam emittance, which in turn depends on the square root of the source temperature T, where ε_{n,x} = (1/(mc)) √(⟨x²⟩⟨p_x²⟩ − ⟨x p_x⟩²) is the so-called root-mean-square (rms) normalized emittance [4]. Here, m is the electron mass, c the speed of light, x the transverse position, p_x the transverse momentum, and ⟨ ⟩ indicates averaging over the entire distribution. Therefore, if we are able to produce electron bunches with a low initial temperature, emittance will also be low and the brightness high, without having to reduce the source size. In this way, pulsed operation with high peak currents and low emittance can be achieved. Our approach to improve the present brightness of pulsed electron beams is based on this idea of a low temperature source [9]. Here, laser-cooled atoms [10] are ionized just above threshold and an ultracold plasma (UCP) is created [11]. The electrons of this plasma are initially created with a temperature of approximately 1 mK. Because of the heating process inside the plasma, the electrons quickly equilibrate to a higher temperature in the order of 10 K, which is still orders of magnitude lower than the electron temperature in photoguns [4]. To prevent a space-charge-induced increase in emittance, high electric fields must be turned on with subnanosecond rise time to bring a beam as fast as possible to sufficiently high energies. It has been shown in Claessens et al.
[9] that the brightness of such an electron beam can be orders of magnitude higher than what exists now in the field of (sub)picosecond pulsed electron sources. In order to achieve the full potential of this type of source, a specialized accelerator structure is required. It combines an atom trap [12] with the possibility to create fast high voltage fields. To this end, we developed a special diode structure together with a pulsed power supply. This article presents the design of both the accelerating structure and the pulsed power supply and shows its value as an accelerator for our cold-atom-based electron source. It is shown that in this first intermediate setup, ultralow emittances of 0.04 mm mrad can be achieved in pulsed mode, for bunch charges up to 0.1 pC and 80 ps bunch lengths. The resulting brightness is 130 times lower than that of the Linac Coherent Light Source (LCLS) electron source at SLAC [13]. Further improvement of the pulsed high voltage supply, by sharpening the voltage pulse to subnanosecond rise times, should lead to the same emittance, but much shorter pulses of 0.1 ps, resulting in a brightness 10 times higher than the LCLS source. Our final goal is to combine the accelerator presented in this paper with a 1 MV-0.1 ns rise time voltage power supply, as proposed in [9]. With that, the source can attain a brightness 30 times higher than the LCLS electron source. Using laser-triggered spark gap technology to switch MV voltages, it is possible to generate 1 ns long and 1 MV high pulses with 0.1 ns rise and fall time. As has been shown by several groups, including our own [14], such pulses can be applied across gaps as small as 1 mm without breakdown, for the simple reason that 1 ns is too short for a breakdown to occur. The acceleration structure presented in this paper is suited for guiding such voltage pulses. The setup presented here is a first step towards the realization of the electron source concept presented in [9]. II. ACCELERATOR DESIGN A technical drawing of the accelerator is given in Fig. 1. It has a coaxial structure. The advantage of using a coaxial geometry is that it can guide very steep field gradients. The designed structure is tapered to reduce reflections of the incoming electric field. The accelerator consists of an inner conductor on which a negative voltage is applied, and an outer conductor which is grounded. A glass ring is used to support the inner conductor of the structure. The structure is designed so as to allow the trapping of a cloud of cold atoms at the center of the accelerating structure, the so-called acceleration point, shown in Fig. 1. The design parameters are given in Table I. The atom trapping process needs six laser beams [10]. A typical size for the diameter of such a laser beam is 10 mm. There are six holes of 20 mm diameter in the outer conductor for the access of these beams. The beams intersect each other at the acceleration point, where the electrons are initially created. One of the laser beams is brought to that point via a mirror placed inside the inner conductor. In addition, there are also holes for the ionizing laser beam and for the electron beam. The inner conductor is connected to a high voltage feedthrough.
In the design process we have maximized the electric field amplitude at the point where the electrons will be initially created. In practice this means that the distances between the inner and outer conductor are kept as small as possible. At the same time, the fields must stay below the breakdown limit in vacuum, which we conservatively assumed to be 100 kV/cm. For that reason, the distance between the inner and outer conductor is 16 mm, the radius of curvature R_i at the end of the inner conductor is 10 mm, and the radius of curvature R_a at the acceleration point is 7 mm.

The accelerating structure was first tested with a DC voltage in order to see if it can sustain the maximum DC voltage that it was designed for, namely, 30 kV. The metal surface was conditioned by slowly bringing the inner conductor to the maximum operating voltage.

A cylindrically symmetric field map for this structure, calculated with SUPERFISH [15], is shown in Fig. 2. The dimensions of the inner conductor, outer conductor, glass ring, and ceramic part of the feedthrough, as listed in Table I, together with their relative permittivities, are used as input. The holes in the outer conductor, which break cylindrical symmetry, are left out. In the figure, the equipotential lines are shown. The electric field strength at the acceleration point is 0.37 kV/cm per kV of input voltage. The field map is used for analysis and simulations.

In our first experiments we are going to use typical electric field rise times of 30 ns. The corresponding wavelength is on the order of meters. This is much larger than the dimensions of the accelerating structure. Therefore, the electric field rises uniformly in the entire structure, not causing reflections. Consequently, the static field map can also be used for the pulsed situation.

The same accelerating structure can be used with even shorter rise times, on the order of 150 ps, which we plan to employ in the future. The wavelength corresponding to a subnanosecond rise time becomes comparable with the structure dimensions. In this case, the field can be diminished in amplitude before it reaches the acceleration point due to reflections associated with impedance mismatch. By tapering the inner conductor, the impedance mismatch, and therefore the reflections, are minimized.

To check that the combination of the high voltage feedthrough and the accelerator also works at high frequencies, a Hewlett Packard 8753C network analyzer was used. The amplitude and phase of the electric field reflected by the setup were measured as a function of the frequency of the electromagnetic wave. The frequency interval used in this experiment was between 300 kHz and 1 GHz, corresponding to rise times between 1.2 µs and 350 ps, respectively. The amplitude of the reflected signal was found to be constant within 0.6 dB over this range. A near-linear dependence of the phase lag versus frequency is observed in the 200-900 MHz range (Fig. 3), corresponding to a reflection of the wave at a fixed point located at 0.70 ± 0.02 m beyond the point where the network analyzer is connected to the accelerating structure. This is the effective distance that a pulse has to travel from the input on the feedthrough up to the acceleration point, namely, first through the 0.1 m long feedthrough made from a ceramic with ε_r ≈ 9, second through a 0.2 m long connection pipe, and third through the 0.2 m long acceleration structure. We conclude that up to 350 ps rise time there is no significant distortion of an input high voltage signal.
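As a quick numerical cross-check (a sketch, not code from the original work), the SUPERFISH coefficient of 0.37 kV/cm per kV and the assumed 100 kV/cm breakdown limit can be combined to confirm that the 30 kV design voltage leaves a comfortable margin; the parallel-plate figure across the 16 mm gap is only a crude estimate and ignores the curvature of the electrodes.

```python
# Sanity check of the quoted field figures (illustrative sketch, not the authors' code).
FIELD_COEFF = 0.37      # kV/cm at the acceleration point per kV of applied voltage (SUPERFISH)
BREAKDOWN = 100.0       # kV/cm, conservative vacuum breakdown limit assumed in the design
GAP_CM = 1.6            # 16 mm spacing between inner and outer conductor

def field_at_acceleration_point(voltage_kv: float) -> float:
    """Field (kV/cm) at the acceleration point for a given applied voltage (kV)."""
    return FIELD_COEFF * voltage_kv

def crude_gap_field(voltage_kv: float) -> float:
    """Parallel-plate estimate of the field (kV/cm) across the 16 mm gap (ignores curvature)."""
    return voltage_kv / GAP_CM

for v in (10.0, 24.0, 30.0):
    print(f"{v:5.1f} kV -> {field_at_acceleration_point(v):5.2f} kV/cm at the acceleration point, "
          f"~{crude_gap_field(v):5.2f} kV/cm across the gap (limit {BREAKDOWN:.0f} kV/cm)")
```

Even at the full 30 kV both estimates stay well below the assumed breakdown limit, which is consistent with the conservative design choice.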
We have also analyzed the pulse propagation with the 3D time domain solver CST-MICROWAVE STUDIO (MWS). An illustration of the process can be seen in Fig. 4. Here, the pulse is shown at three different moments in time: as the pulse enters the structure (a), halfway through the structure (b), and as it reaches the acceleration point (c). In this simulation, the access holes in the outer conductor are also included. For a 150 ps rise time, we find an electric field strength at the acceleration point of 0.34 kV/cm per kV of input voltage, i.e., an 8% decrease compared to the SUPERFISH calculations. The rise time behavior at the accelerator entrance and at the acceleration point was also monitored in the MWS simulations, as shown in Fig. 5. It can be seen from Fig. 5 that the 150 ps rise time of an input pulse remains the same at the acceleration point. The accelerator structure is therefore very well suited for sub-ns rise time voltage pulses.

III. FAST HIGH VOLTAGE GENERATION

As stated in our proposal [9], to produce bright electron bunches, a high electric field should be turned on very fast. The proposed source works at a voltage of 1 MV switched on in 150 ps, which in principle can be produced with state-of-the-art technology [14]. The required laser-triggered spark gap technology is, however, very cumbersome in use and needs further development before it can be applied in practice. In our first experiments we will use commercially available technology to switch 30 kV in tens of nanoseconds. The corresponding fields are high enough to extract bunches of up to 1 pC from a UCP, with a very low emittance.

The system used to produce fast and high electric fields consists of two components: a DC high voltage power supply and a transistor-based switch setup. The high voltage supply unit is a Brandenburg Model 807R. It produces a maximum DC voltage of 30 kV and delivers a maximum current of 1 mA. The DC power supply is connected via a high voltage coaxial cable to the switch box. The power supply feeds the transistor-based switch. The fast high voltage transistor switch is a Behlke Model HTS 300. It is a solid state switch used to generate high voltage pulses with a very fast leading edge. The maximum operating voltage is 30 kV, with a peak current of 30 A. The rise time given in the specifications (with a loading capacitance of 20 pF) is 15 ns at a voltage of 24 kV. The pulse duration is 150 ns.

To produce the desired rise time of the voltage, the switch is inserted in a classical pulsed discharge circuit (Fig. 6). This means that the energy is collected from a primary energy source (a DC power supply in this case) and is then rapidly released from storage and converted to pulsed form. The output signal goes via a commercial DC ultrahigh-vacuum feedthrough (Kurt J. Lesker Co. Ltd. Model EFT 3012093) into the accelerator structure.
The charging resistor R1 is a 40 MΩ high voltage resistor (Caddock, Type MX485), which limits the current to less than 1 mA. The charging capacitor C1 is 2 nF. The time constant for the charging circuit is 80 ms. The system can therefore be operated at a repetition rate of a few hertz. After the switch is closed, the charge accumulated on C1 is transferred to the accelerator structure, represented by a capacitor C2 with 12 pF capacitance. Because of the large difference between the buffer capacitor C1 and C2, there is only a small decrease in voltage on C1 after closing the switch. R2 is a 50 Ω current limiting resistor that protects the switch. The time constant for the loading of the accelerator is R2·C2 ≈ 0.6 ns, so the voltage rise time is determined by the rise time of the transistor switch.

The droop time of this circuit is R3·C1 ≈ 10 µs. After 150 ns, the switch opens again. The influence of droop on the voltage on C2 during the switch closure time is only 2% of the maximum voltage. At this point the voltage on C2 will drop and, together with the leakage resistor R3 of 5 kΩ, it gives a decay time constant of 60 ns.

Using a Tektronix high voltage probe (Model P6015A) with a capacitance of 3 pF, the output signal given by this switch circuit has been measured on the inner wire of the vacuum feedthrough. A typical voltage-time characteristic is shown in Fig. 7. The negative high voltage pulse first increases linearly, is then followed by a flat top of 150 ns length, and finally decays exponentially, in accordance with the design values.

FIG. 5. CST-MICROWAVE STUDIO simulation results for a pulse with a rise time of 150 ps. The input electric field is represented by squares (left axis) and the electric field at the acceleration point by bullets (right axis). The maximum electric field at the input was scaled to 1 kV/cm.

The rise time (defined as 10%-90% of the voltage amplitude) has been measured as a function of the DC voltage (see Fig. 8). Above 5 kV, the rise time depends linearly on the voltage, with a slope of 0.70 ± 0.01 kV/ns. Accordingly, one gets a rise time of 24.5 ns at a voltage of 24 kV, somewhat larger than the specifications of the switch.

IV. ELECTRIC FIELD MEASUREMENT

This section deals with the static and dynamic measurement of the electric fields produced in the accelerating structure.

A. Static electric field measurement with cold ions

One possibility for measuring the static electric field produced at the acceleration point is a time-of-flight (TOF) method. A cloud of cold atoms is produced at the acceleration point using the same procedure as described in [16]. The cold atoms are subsequently ionized by a pulsed dye laser with a wavelength of 480 nm and a 6 ns pulse length. The ionization volume has a cylindrical shape with a radius of 70 µm and a height of a few millimeters. The orientation of the cylindrical volume is perpendicular to the acceleration direction. A positive DC voltage is applied to the inner conductor of the accelerator. After the ionization, the ions are accelerated towards a microchannel plate detector placed at a distance L = 282 mm from the acceleration point. With the help of an oscilloscope triggered by the ionization laser pulse, it is possible to measure the TOF between the photoionization and the moment that the ions reach the detector.
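A minimal sketch of how such a TOF trace could be converted into an ion energy and a local-field estimate, anticipating the conversion spelled out in the next paragraph. The ion species (rubidium-85) and the (z, TOF) values below are illustrative assumptions, not measured data.

```python
# Illustrative sketch of the TOF -> energy -> field conversion (not the authors' analysis code).
import numpy as np

AMU = 1.66053906660e-27     # kg
Q_E = 1.602176634e-19       # C
M_ION = 85 * AMU            # rubidium-85, assumed here only for illustration
L = 282e-3                  # m, acceleration point to detector
L0 = 14.5e-3                # m, correction for the acceleration region (see next paragraph)

def ion_energy_eV(tof_s: float) -> float:
    """Kinetic energy (eV) of an ion covering L - L0 at its final speed in time tof_s."""
    v = (L - L0) / tof_s
    return 0.5 * M_ION * v**2 / Q_E

# Hypothetical (z, TOF) samples; z is the initial axial position of the ionization volume.
z_cm = np.array([-0.2, -0.1, 0.0, 0.1, 0.2])     # cm
tof_us = np.array([6.1, 6.0, 5.9, 5.8, 5.7])     # microseconds, made-up numbers

U_eV = np.array([ion_energy_eV(t * 1e-6) for t in tof_us])
E_V_per_cm = np.gradient(U_eV, z_cm)             # local field ~ (1/q) dU/dz

for z, U, E in zip(z_cm, U_eV, E_V_per_cm):
    print(f"z = {z:+.1f} cm   U = {U:7.1f} eV   E ~ {E:7.1f} V/cm")
```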
The TOF measurement can be used to calculate the energy U that the ions gain in the accelerator,

U = (1/2) M [(L − l0)/t_TOF]²,   (4)

where M is the ion mass, t_TOF the measured time of flight, and l0 = 14.5 mm corrects for the fact that the ions are accelerated in the first few millimeters and are not immediately at their maximum velocity. The value of l0 is determined from simulations. The ionization volume can be precisely moved to different axial positions z, within a few millimeters. The TOF measured in this manner gives a z-dependence. The z-derivative (1/q) dU/dz of the energy is equal to the local electric field.

Figure 9 shows TOF measurements as a function of the initial position z. Also shown is the ion energy U as a function of z calculated with Eq. (4). The energy U(z) is found to depend approximately linearly on z, which allows us to calculate the electric field at the acceleration point: E(z = 0) = 0.33 ± 0.05 V/cm per V of acceleration voltage, in agreement with the value of 0.37 V/cm calculated with the SUPERFISH field map (see Fig. 2). From Fig. 9 it can also be seen that, e.g., at a voltage of 1.36 kV, the measured energy U(0) = 665 eV is, within experimental uncertainty, in agreement with the energy of 678 eV calculated from the integral of eE(z) over z using the SUPERFISH field map.

The main source of uncertainty comes from the determination of the axial position z of the ionization volume. In the setup it is determined by two CCD cameras that image the trapped cold gas in two perpendicular directions. This position is accurate to 0.5 mm. We conclude therefore that the local electric field and the beam energy agree with the calculated design values within the experimental accuracy of 15%.

B. Pulsed electric field measurement with the Pockels effect

As explained in Sec. III, the rise time of the electric field was measured with the help of a high voltage probe at the exit of the switch box. It should also be checked that the rise time is the same at the acceleration point, where the electrons will be created. To this purpose, an ellipticity measurement method that employs the Pockels effect was used [17]. The method is based on measuring a change in birefringence induced by an electric field. A lithium niobate crystal (LiNbO3) was used for this purpose. The characteristics of this crystal are given in Table II [18]. Lithium niobate has been chosen because its saturation time, i.e., the time in which an internal electric field builds up and cancels the external electric field, is much longer than the rise times we want to measure, which are on the order of ns. ZnTe crystals, for example, are not suitable for this purpose, but can be used for subpicosecond rise times when combined with an ultrafast laser [19]. The setup consists of a HeNe laser with a wavelength of 633 nm, a polarizing beam splitter (PBS) cube, a quarter-wave plate (QWP), another PBS cube, and two photodiodes (P1 and P2) (see Fig. 10).
The lithium niobate crystal is placed with the help of a small PVC mount between the high voltage electrodes. The first PBS makes the laser beam linearly polarized. After passing the crystal, the beam passes through the QWP and becomes circularly polarized. It is subsequently divided by the second PBS. Diodes P1 and P2 therefore measure two equal signals when no electric field is present. When a voltage is applied, the electric field induces birefringence and therefore an ellipticity in the laser beam polarization, which results in a difference between the two photodiode signals, (I1 − I2)/(I1 + I2), that is proportional to the induced phase shift. The phase shift due to the crystal, Eq. (6), is proportional to n0³ r21 E d/λ, where λ is the laser wavelength, r21 the electro-optical coefficient, n0 the refractive index, E the internal electric field in the crystal, and d the length that the laser beam travels through in the crystal.

A measurement of the electric field with the crystal placed at the acceleration point is shown in Fig. 11. Also plotted is the signal given by the high voltage probe as measured on the inner conductor of the accelerator. It can be seen that the signal on the inner conductor of the accelerator has the same rise time as the signal measured by the lithium niobate crystal at the acceleration point.

In the quasistatic situation, applying a DC voltage of 3 kV, a ratio (I1 − I2)/(I1 + I2) of 0.042 was found. Using Eq. (6), this leads to an electric field inside the crystal of 90 V/cm. A calculation of the electric field with the CST-EM STUDIO simulation program gave 65 V/cm, which is 30% lower. In view of the fact that the electro-optic coefficient r21 is not very accurately known and the lithium niobate crystal has a strong influence on the field geometry, this is satisfactory agreement.

V. VALORIZATION OF THE DESIGN

As stated in Sec. II, an electric field map was made in SUPERFISH for the DC case (see Fig. 2). By multiplying the electric field of the map with the measured time dependency of the voltage source output (Fig. 11), the map can also be used in the pulsed situation. Fields generated in this way were used in the GPT code to simulate the behavior of an electron bunch accelerated in the designed structure. GPT calculates charged particle trajectories in 3D electromagnetic fields, including all space-charge effects [20].

As initial bunch conditions, exactly the same pancake conditions as in Claessens et al. [9] were chosen, i.e., a radial distribution with R = 2 mm and a thickness of 15 µm. The simulations have been performed for two different charges, 100 and 1 fC. The high voltage slew rate, which is an essential parameter for our simulations, was 0.7 kV/ns (see Sec. III). The rms normalized emittance [see Eq. (3)] and the rms bunch length as a function of the longitudinal position z obtained from these simulations are shown in Fig. 12.

It can be seen from Fig. 12(a) that the beam has in both situations a very low rms normalized emittance of 0.039 mm mrad (100 fC) and 0.035 mm mrad (1 fC). These values remain constant after leaving the accelerating structure. For 100 fC, the electron bunch is initially compressed to 20 ps due to velocity bunching [9] [Fig. 12(b)] and then expands linearly in time, reaching a length of 80 ps at 300 mm from the initial point. In the case of a smaller charge, 1 fC, the velocity bunching effect brings the bunch to a 0.7 ps length. By applying, for example, a pulse of 1 MV in 150 ps, a 100 fC bunch can be compressed to 20 fs. Clearly, the amount of bunch compression is limited by space-charge forces.
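As a rough cross-check of the brightness comparison quoted in the next section, the scaling B ∝ I/(ε_x ε_y) can be evaluated with the bunch parameters given above; constant prefactors in the definition of brightness cancel in the ratios, and the peak-current estimate is deliberately crude.

```python
# Rough brightness scaling B ~ I / (eps_x * eps_y); prefactors cancel in the ratios.
def brightness(charge_C: float, bunch_length_s: float, emittance_mm_mrad: float) -> float:
    peak_current = charge_C / bunch_length_s            # crude estimate of the peak current
    return peak_current / emittance_mm_mrad**2

lcls         = brightness(1e-9,    10e-12, 1.0)    # LCLS injector values cited from Ref. [13]
intermediate = brightness(100e-15, 80e-12, 0.04)   # this setup: 100 fC, 80 ps, 0.04 mm mrad
final_goal   = brightness(100e-15, 20e-15, 0.04)   # projected: 1 MV in 150 ps, 20 fs bunch

print(f"intermediate / LCLS = {intermediate / lcls:.4f}  (about 1/130)")
print(f"final goal   / LCLS = {final_goal / lcls:.1f}  (about 30x)")
```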
Compared with the LCLS injector at SLAC [13], where electron bunches of 1 nC have been measured with a length of 10 ps and an emittance of 1 mm mrad, our intermediate setup will be able to deliver a 130 times lower brightness. However, our final configuration of 1 MV in 150 ps will be able to deliver a 100 fC bunch with a 30 times higher brightness.

VI. CONCLUSIONS AND OUTLOOK

A setup specially developed for a new type of pulsed high brightness electron source based on cold atom traps has been described. It consists of a DC power supply, a switch setup, and a dedicated accelerator. The electric field rise time has been measured with an electro-optical method. The static electric field at the acceleration point and the energy the accelerated particles gain have also been measured using cold-ion TOF measurements. The experimental results and the calculated values agree within the error limits.

Compared with the experiment described in [16], this accelerator has the advantage of being cylindrically symmetric, and it provides a higher energy of the electron bunches, making them less sensitive to stray magnetic fields. Simulations show that this setup will be able to produce 100 fC electron bunches with an emittance of 0.04 mm mrad and a bunch length of 80 ps.

For shorter bunch lengths, an important step is to further reduce the rise time of the electric field. A possible solution is magnetic compression of the high voltage pulse using saturable ferrite cores. In the past, this method produced pulses in the same voltage range with subnanosecond rise time [21,22]. The advantage of a shorter rise time is that the beam energy will be higher. Then, the electrons are less sensitive to space-charge effects and the emittance will therefore be even lower. Also, the bunch length will be smaller due to stronger velocity bunching, leading to higher current and thus higher brightness. A transmission line transformer (TLT) can be used after the sharpener to increase the amplitude of the high voltage. A specially designed broadband TLT will be available in our group in the future [23]. It is capable of multiplying subnanosecond high voltage pulses by a factor of 10. In this way, the final goal of 1 MV-150 ps technology can be reached.

FIG. 1. (Color) Technical drawing of the accelerator (cross section): (a) outer conductor; (b) inner conductor; (c) acceleration point; (d) glass ring; (e) mirror; (f) pumping holes; (g) laser beams; (h) radius of curvature on the inner conductor R_i; (i) radius of curvature at the acceleration point R_a.

FIG. 3. Phase shift of the reflected signal as given by the network analyzer.

FIG. 4. (Color) CST-MICROWAVE STUDIO simulation for a pulse with a rise time of 150 ps. (a) The pulse at the entrance; (b) the pulse halfway through the accelerator; (c) the pulse reaching the acceleration point. In these pictures the vacuum is visualized. The electric field strength is indicated by colors: red is the maximum electric field strength and green the minimum.

FIG. 6. Schematic of the transistor-based switch used to produce the pulsed high voltage. The component values are R1 = 40 MΩ, R2 = 50 Ω, R3 = 5 kΩ, C1 = 2 nF, and C2 = 12 pF.

FIG. 7. Output signal of the switch measured using a Tektronix P6015A high voltage probe.

FIG. 12. GPT simulations using a SUPERFISH field map with a slew rate of 0.7 kV/ns: (a) rms normalized emittance; (b) rms bunch length, both as a function of the axial position z. The dashed line is for 100 fC and the solid line for 1 fC.

TABLE I.
Parameters used in the design of the accelerator.
6,164.2
2008-05-07T00:00:00.000
[ "Physics" ]
Fourier Integral Operator Model of Market Liquidity: The Chinese Experience 2009–2010

This paper proposes and motivates a dynamical model of the Chinese stock market based on linear regression in a dual state-space connected to the original state-space of correlations between the volume-at-price buckets by a Fourier transform. We apply our model to the price migration of orders executed by Chinese brokerages in 2009–2010. We use our brokerage tapes to conduct a natural experiment, assuming that the tapes correspond to randomly assigned, informed, and uninformed traders. Our analysis demonstrates that customers' orders were tightly correlated, in the highly nonlinear sense of prediction by the neural networks, with Chinese market sentiment, significantly correlated with the returns of the Chinese stock market, and exhibited no correlations with the yield of the bellwether bond of the Bank of China. We did not notice any spike of illiquidity transmitting from the US Flash Crash in May 2010 to trading in China. The explanatory power of the predictions can be inferred from the squares of the correlation coefficients. For instance, if we treat broker "b" as informed, trader "a" could have predicted the bond yield for the next month from their imbalances with an explanatory power ρ = r² = 0.5178² ≈ 26.8%. Test results were poorly reproducible on successive runs of the network.

Introduction

The purpose of this paper is to extend methods of spectral analysis which facilitate visual representation of trading patterns, including high-frequency data (see [1]; this paper does not use high-frequency data but analyzes the database of all executed orders by a select anonymous brokerage). The research intends to find new uses of Fourier analysis and deep learning in financial econometrics. Namely, we analyze predictable market dynamics in the state-space dual to the price-to-volume distribution and connected with the original state-space by a Fourier transform, as an alternative to the conventional vector autoregression (VAR). Regression residuals were subjected to analysis using several neural networks. Another novelty is the use of deep learning not to predict the market data but to verify a protocol which imitates a natural experiment.

The objective of our analysis is to validate the utility of this new method. In principle, it applies to datasets of any size, yet we use a limited dataset of the daily retail trades in the Chinese stock market during the years 2009-2010 so that calculations can be performed on a PC. These datasets were provided by the retail brokerages to the regulator as a matter of compliance. The application of the state-space approach to the analysis of market microstructure is not new. Hendershott and Menkveld (2014) [2] specifically emphasized this line of research, with application to HFT, as an alternative to the autoregressive class of models. A standard way to analyze state-space distributions is Kalman filtering, which [3] notably implemented to distinguish between liquidity-driven and informed trading components of the trading volume for the S&P 500. The preprocessing stage described below is our new alternative to Kalman filtering. We employ a correlation measure of the state-space invented by [4], but here apply it to the price bucket in its entirety rather than to individual stocks. Taking correlations as the first stage of data de-noising has been done by [5], in particular, for his studies of the "Flash Crash" of 6 May 2010. The paper is structured as follows.
In Section 2, we provide a literature review. In Section 3, we provide summary statistics of the databases at our disposal. In Section 4, we describe the state-space of the problem. In Section 5, we compose the model of predictable trade and a description of its inputs and outputs. In Section 6, we provide validation of our model for the predictable variation of trading intensity. The residuals of our prediction model are then analyzed through shallow and deep learning networks simulating the decision process of the traders in response to new events. We discuss the information which can be gleaned by the fictitious traders in the subsequent Section 7. In Section 8, we investigate the prevalence of low-priced stock in the early Chinese stock market. In Section 9, we introduce a dynamic version of the Amihud illiquidity measure, which we employ in Section 10 for a single event study. This single event study concerns the hypothesized influence of the US "Flash Crash" and is analyzed on the basis of the Amihud illiquidity measure.

Literature Review

The empirical market microstructure has to deal with many complexities: the latency of execution, and incomplete or deliberately manipulated data. One such complexity is that order flow "lives" in transaction time rather than in physical time [6,7]. Another is that the real trading costs can be hard to estimate and relatively easy to conceal [8]. We partially circumvent limitations of the first kind by using interday correlations of intraday price migrations, using volume buckets as regression panels. The correlations should be unaffected by the absence of transaction time stamps in our database because of the T + 1 rule, unique to the Chinese stock markets, described in [9,10]. Many semi-empirical measures have been used to describe market behavior on a microstructure level [11]. The most popular or theoretically well-researched are VPIN (Volume-Synchronized Probability of Informed Trading) [12], volume imbalances [13-15], VWAP (Volume-Weighted Average Price) and its modifications [16-19], and the different versions of the Amihud measure [20,21]. Note that the VPIN or VWAP measures do not distinguish particular stocks, placing them in uniform price buckets. This is the methodology we accept for the current paper; however, given the relatively shallow depth of the 2009-2010 stock market in China, we had to modify it for the available data. Consequently, in the low-priced stock segment (below 10 renminbi, further CNY), a bucket can contain a portfolio of similarly priced stocks, while in the high-priced market segment, one or zero stocks are more the norm (see Section 7 for details). Three types of volume-per-weighted-price distributions were analyzed: the "Buy" volumes, the "Sell" volumes, and the imbalance volumes, which could be called BVWAP, SVWAP, and IVWAP, in deference to extant terminology. The simulations were done with "Buy", "Sell", and "Imbalance" separately, but we use IVWAP in the rest of the paper unless explicitly indicated, because it is the most transparent in terms of interpretation. Our definition of imbalance varies slightly from the standard one [16], where it is defined as twice the difference between buy and sell volume divided by their arithmetic mean. We use geometric means for the same purpose (see Section 3). In the case of imbalances, our measure is similar to the VPIN distribution proposed by Easley, Prado, and O'Hara [12], except that it does not involve the computation of intra-bucket price variance.
Instead, we use day-to-day correlations of volumes within a given price bucket. We also employ the measure inspired by [20] to test the contagion between America and China during the days surrounding the Flash Crash. Evidence of contagion had previously been presented by [22].

To test our methodology, we used the brokerage tapes of several Chinese brokerages submitted to the mainland Chinese stock exchange as a matter of regulatory compliance during 2009-2010. These tapes contain only completed trades; they do not have timestamps beyond one day, but display most of the trades with "Buy" or "Sell" indicators across the entire price range. Because the tapes divide trades into "Buy" and "Sell" (less than 10% of the records miss this stamp), we do not rely on the algorithmic estimation of this division as in [23]. Whatever incompleteness exists in our data, it lies in the reporting procedures for the brokerages which existed during these years. To analyze the volumetric data, we combine them into uniform buckets of 0.5 CNY, so that a typical number of buckets is around 150-180 on any given day during 2009-2010. We track the migration between the buckets as an indicator of the direction of trading. Then, we build a model of the Chinese stock market microstructure in a dual space, connected by the Fourier transform with the original state space, which we further analyze by (deep) learning algorithms.

In the above analysis, we follow in the footsteps of Foster and Viswanathan (1996) [24], who developed a theoretical model of several groups of traders who try to predict the actions of others. Our model allows us to gauge how these predictions could have panned out empirically. We use three metrics of market reaction: the Chinese market sentiment [25-27], returns on the Shanghai stock market, and yields on a Bank of China 10-year bond. Using our model, we can directly and relatively parsimoniously explore the conditions prevailing in dark pools, artificially supplying or denying our assumed traders any external information about the activities of their colleagues or the direction of indexes.

The use of neural networks to analyze financial data now seems routine, although only a few substantive papers had been published as late as five years ago [28]. Only in 2020 did top journals begin to publish research papers which used neural nets [29]. The difference between the present research and all the papers known to this author (see the above-cited papers and [30]) is that the present research does not try to use deep learning to beat forecasts of the market data, which is their conventional purpose. We certainly cannot match the sophistication of the algorithms being used by modern HFT firms and hedge funds and the computational power available to them [31-33], though our "primitive" algorithms could have been closer to the state of the art in 2009-2010. Therefore, a protocol similar to the analysis of natural experiments (see Section 7 for more details) and a small, relatively outdated database were used to compensate for the lack of available resources.

Summary Statistics of the Databases

Brokerage tapes provided as Excel files have the format shown in Figure 1.
The brokerage tapes include the date, the order price in CNY, the type of order ("buy" or "sell"), as well as the order's volume. They did not have a timestamp in the years 2009-2010. Further on, we attribute separate spreadsheets as the "tapes" of fictitious traders zero through four for all brokerages in order to conduct a natural experiment. As we can see from the data in Table 1, the summary statistics for individual traders are comparable, and we treat them as an extra level of randomization of our data. We plot the stock volumes and prices from one of the tapes in Figure 2. Visual inspection confirms an overall growth in trading throughout the two years, which is not surprising for a developing stock market, and shows a few spikes but no other obvious tendencies.
Table 1. Summary statistics of Brokerage 5 for all the 484 trading days during 2009-2010. Table 1 queries the data from the original datasets. Traders are marked 0 through 4. Volume data are rounded to one share. The stock price is expressed in Chinese Yuan. The index B or S refers to a "Buy" or a "Sell" order on a given tape. We used the sample volume variance (SVV) as an indicator of volume dispersion; the daily standard deviation of the volume can be approximated as √484 times √SVV. There is no systematic difference between the first four tapes, while records in the fifth tape gravitate towards higher-priced stocks.

The tapes do not distinguish between individual stocks. Given the comparatively thin volume of trading in the stock markets of Mainland China during 2009-2010, we can only surmise that the trades within a given price bucket belong to one stock or a maximum of two stocks. Our database included eight brokerages with the symbols rfokp4c, hvw5se4, zbe0rgv0, qwixupca, qguyi05q, gxbmxv0, q1ysmbyz, and 5vuyp3bu. Only the last three of these brokerage tapes contained complete data on the volume; we further denote them by their first symbols as "g", "q", and "5". Most numerical examples in this paper, as well as the data, refer to Brokerage 5. Even though it is likely that the division of trades between tapes is arbitrary, for later analysis we shall imagine them as belonging to separate "traders". This corresponds to the intuitive idea that in modern high-frequency trading, the trader is a computer algorithm which arbitrarily parses the state-space. Our tapes contain (see Table 1) several tens of thousands of trades over the period of two years. For comparison, a characteristic latency of a trading signal is τ ≈ 2-3 ms, which roughly corresponds to computer messages cycling the circumference of New York City and its vicinity at the speed of light [6]. The inherent latency of trading quotes is even shorter; see [34], Table 1. Thus, if one wants to project this rate onto the intensity of modern high-frequency trading, all the tapes of one brokerage would correspond to 7-8 min of wholesale trading. This illustrates the utility of observing emerging markets, where tendencies that would require very large datasets to analyze can be observed with much less granularity.
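A minimal sketch of the bucketing step described above; the column names of the tape ('date', 'price', 'side', 'volume') are assumptions about the layout of Figure 1, and the toy numbers are purely illustrative.

```python
# Sketch of turning a brokerage tape into daily volume-per-bucket vectors (0.5 CNY buckets).
import numpy as np
import pandas as pd

BUCKET = 0.5  # CNY

def daily_bucket_volumes(tape: pd.DataFrame, side: str = "B") -> pd.DataFrame:
    """Rows: trading days; columns: price buckets; values: total traded volume on that side."""
    t = tape[tape["side"] == side].copy()
    t["bucket"] = np.floor(t["price"] / BUCKET).astype(int)
    return t.pivot_table(index="date", columns="bucket", values="volume",
                         aggfunc="sum", fill_value=0)

# Toy tape with purely illustrative numbers.
toy = pd.DataFrame({
    "date":   ["2009-01-05"] * 3 + ["2009-01-06"] * 3,
    "price":  [9.80, 10.20, 25.40, 9.90, 10.10, 25.60],
    "side":   ["B", "S", "B", "B", "B", "S"],
    "volume": [1200, 800, 300, 1500, 400, 250],
})
print(daily_bucket_volumes(toy, side="B"))
```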
Formation of the State-Space

We apply three stages of data analysis in our model. In the first stage, we allocate all daily orders to the price buckets. From these price buckets, we construct a state-space from the day-to-day correlations of order volumes, defined in Equation (1), which we use as the new vectors of our state-space. The intuition for this definition is that only a reasonably small number of price buckets contribute to this measure, because the daily price change for any stock is expected to be small compared to its price. Indeed, only price changes below 8 CNY were consistently observed on each trading day, although these changes could obviously be larger on some days. Equation (1) was written under the assumption that day-to-day correlations of order volumes exhibit more stability than the volumes themselves. Heuristically, this assumption is supported by the existence of the unique T + 1 rule in the Chinese stock markets, according to which one has to hold a stock for one day or more before selling [9], so that intraday noise must be uncorrelated between today and tomorrow.

The predictive power of the correlations (if any) can be used by an informed broker in the following manner. If the correlations ρ_ij between buckets were persistent, a broker could predict the migration x in a given price bucket i from the previously observed migrations (Equation (2)). In real life, brokers can use a next-day predictor ρ̂_ij to guess their next-day order. Note that in 100%-efficient markets, or for markets in equilibrium, our state variable is exactly zero. Covariance matrices could be more consistent from the mathematical point of view, but they are harder to interpret intuitively and to visualize, particularly because they grow as the square of the volume with more active trading. Moreover, covariance matrices, because of their nonlinear growth with average volume, obscure the participation of "high-impact trades", i.e., trades which influence the price much in excess of their size (Xiaozhou, 2019).

In Equation (1) we use three types of variables as "volumes": a volume of buy trades, a volume of sell trades, and an imbalance volume, which we define as the difference between buy and sell volume at a given price bucket. In the case of imbalance statistics, our measure is reasonably similar to the VPIN proposed by [12]. Our state-space is a discrete space of sixteen price buckets, separated by ∆ = 0.5 CNY (daily changes of stock prices by more than 16∆ = 8 CNY were seldom observed in the sample). We split the entire trading book into 0.5 CNY buckets by the price change (a few hundred buckets encompassing the entire stock price range) but use only the first sixteen buckets. Using a larger number causes a spurious periodicity in our data. A smaller granularity would leave too few events in each bucket to allow confident averaging, while a larger granularity would average over most daily price changes. The division of the trading book into equal buckets allows us to avoid the problem that the trades in our database are not stamped with the name of a particular stock. The trading volume of all stocks experiencing a "zero" or "significant" price change goes into the same bucket. Our construction of the phase space potentially allows a two-way analysis: a panel analysis based on individual buckets and a time-series analysis which follows the evolution of buckets through time. The second stage of our analysis is building a dynamic model of trading.
Our only assumption is that the state variables evolve by linear dynamics, which we estimate from our data (Equation (3)). For our analysis, we use a dual state-space obtained by the Fourier transform of the initial state-space of the model. Philosophically, our choice of the state variable is based on the Bochner theorem in functional analysis, which states that the covariance of a weakly stationary stochastic process always has a representation as a Fourier integral of a stationary measure [35]. Hence, a broad class of stochastic processes can be represented in the form above [36] (Chapters 14 and 15). Here, we only display our model in the form we used in our analysis. The Fourier transform of this dynamics gives a linear regression in the dual state-space, Equation (4). In Equation (4), because of the Fourier transform, the vectors are assumed to be complex, i.e., with twice the dimensionality of the original state-space. The beta matrix has a dimension of 32 × 32 if we separate the real and imaginary parts. Because our initial state vectors are real, there is a hidden symmetry in the coefficients and some rows in the beta matrix are identically zero. The Kronecker delta in the regression residual assumes that all spurious correlations between volumes disappear within one to two days. Note that we make no assumptions about the random process governing the price dynamics. The only limitation of Equation (4) is the size of the beta matrix we use to approximate a continuous Fourier integral operator [37,38]. The inverse Fourier transform of our beta operator is analogous to the Q-operator in the Markovian model of the Limit Order Book (LOB) of [39]. The original daily state vectors are recovered through the inverse Fourier transform (Equations (5)-(7)), where the hat denotes the predicted variable. They can have a small complex part because of the finite representation of decimals in the computer, which we ignore.

In the third stage of our analysis, we employ neural networks to make sense of the regression residuals, i.e., to determine whether they reflect real economic surprises or are a result of noise trading. We do not know the prediction algorithms being used by the traders and, with time, they might become more complicated than anything we can devise. Therefore, we try an inverted strategy of deep learning. Namely, given the unpredictable part of the day-to-day volume correlations, we try to predict the realized indexes of the Chinese economy. The intuition behind this method is that if there is systematic unexpected buying or selling pressure in the market, it must reflect prevailing market sentiment.

First Validation of the Model

We have tested our model's beta estimator for the different traders in our database. Our results are represented by sets of 484 × 16 matrices (the number of trading days during 2009-2010 times the number of price buckets). The correlations between the columns and rows of the matrix β̂_ωω′ for the imbalance volumes in Formula (3) are given in Table 2. (In the series of our tests, we used Buy, Sell, Total Volume, and Imbalance indicators. The results were broadly similar across all selected measures (for instance, see Appendix B). For most of this paper, except Section 7 where we used "Buy" quotes, we selected imbalances to represent our data. In particular, imbalances can be directly compared with the "Cost of Trading" measure [40].) Complex beta matrices have dimensions 32 × 32 because of the real and imaginary parts of the Fourier-transformed state vectors.
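As an illustration of this dual-space pipeline (Fourier transform, linear map between consecutive days, inverse transform), the following sketch uses random stand-in data and assumes an ordinary least-squares estimator for the beta matrix, which the text does not spell out.

```python
# Sketch of the dual-space regression: FFT of the daily 16-component state vectors,
# a least-squares linear map between consecutive days, and an inverse FFT of the prediction.
import numpy as np

rng = np.random.default_rng(0)
T, K = 484, 16                                 # trading days, price buckets
X = rng.normal(size=(T, K))                    # stand-in for the real daily state vectors

F = np.fft.fft(X, axis=1)                      # complex dual-space vectors
A, B = F[:-1], F[1:]                           # today and tomorrow in the dual space

beta, *_ = np.linalg.lstsq(A, B, rcond=None)   # complex K x K beta matrix
X_hat = np.fft.ifft(A @ beta, axis=1).real     # back to the original state space
residual = X[1:] - X_hat                       # later fed into the neural networks

print("share of variance captured by the linear map:",
      round(1.0 - residual.var() / X[1:].var(), 3))
```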
Yet, under the inverse Fourier transform, because of the internal symmetry, both the prediction and residual vectors are real. We display temperature maps of an estimation for a single tape in Figure 3. Table 2 reports the correlations of the beta matrices of Equation (4), computed between columns and rows, for the temporal correlation of imbalance volumes (see Equation (1)); the tape numbering corresponds to the rows in Table 1, and the correlations are symmetric across the diagonal. All the correlations between coefficients are insignificantly different from unity. In testing regression (3) for the five data tapes, the beta matrices are practically identical despite the state vectors being vastly different. This suggests the robustness of our model for the predictable component of the daily correlation of the imbalances. A similar picture was also observed when correlating betas between the Buy and Sell tapes.

Predictable Component of the Bid-Ask Volume Correlations

As a criterion for the quality of approximation of Equations (5) and (6), we use the vector error estimator for the predictor and the residuals, Equation (8), where σ is the empirical volatility of the data and X̂ is the estimate from the regression of Equation (5). In Equation (8), we retained the multiplier 1/T, with T = 484 trading days, in the denominator as well as in the numerator for clarity. The index n = 1 ÷ 16 numbers a vector of the state-space. A typical plot of the variances of the predictor and the residual is given in Figure 4.

We note that the predictor and the residual time series have zero correlation by construction. However, the coincidence of the time-weighted variances between the price buckets in Figure 4 is quite impressive, and it is typical for a tally of the imbalances. For quantifying the determination of the regression prediction and the regression residuals, we used running correlations of panel variances for the 484 trading days in the sample. The matrix of these correlations is provided in Table 3. From this matrix, we observe that 30-40% of the daily variability of the traders' samples and 50% of the monthly variability is contributed by the prediction variance, and the rest by the regression residuals.
The same observation, that the variance of the empirical distributions is split approximately 50:50 between the predictor and the residual, can be made from Figure 4, where we display the original empirical data (blue), the predictor variance P_n (green), and the variance of the residuals F_n (red dash), all integrated over the 484 trading days. In Table 3, the indices i = 1 ÷ 5 and j = 1 ÷ 5 denote individual traders, and the entries indicate the explanatory power of the linear regression. Correlations between different brokerage tapes are statistically insignificant. This exercise suggests that the traders' samples are independent in the sense of linear regression. For a trader, it means that processing data from another trader by linear regression does not contribute any valuable information. Individual traders can fairly predict their own correlations between today's and the next day's volumes, i.e., the persistence of their own demand across all price buckets, but not the correlations for other traders.

Analysis of the Phase Space Regression Residuals

The model of Equations (3) and (4) describes a predictable component in the day-to-day correlations of the trading volume of price migrations, including the zeroth price bucket (price changes below 0.5 CNY). The residuals contain both microstructure noise and reactions to unpredictable economic events in the market. To analyze the residuals, we employ several methods inspired by neural networks. We do not know what kind of training algorithms traders might be using, given the quick progress in algorithmic finance and computational power since 2009-2010 and even as this paper is being written. Hence, we employ the following method. Instead of predicting the out-of-sample trading data, we attempt to backdate market data through a simulated experiment: a neural network trained on our order data attempts to predict the Chinese market sentiment index, returns on the Shanghai stock index, and yields on the bellwether 10-year bond of the Bank of China (for details of the protocol, see below). Because of the monthly periodicity of the sentiment index, for the consistency of our tests we used monthly stock returns and monthly bond yields as well. In our case, real-life trading algorithms would have to predict the "unexpected" direction of price changes imprinted in brokerage orders given their information on the markets. Yet, we assign to our imagined traders, represented by the brokerage tapes, a much simpler task of predicting a monthly index given their observation of the day-on-day correlation of orders within a given range of price change.
(This reasoning is based on an unproven but intuitive assumption that an economically simpler problem, guessing a "covert" index from proprietary trading data rather than the other way around, is less demanding algorithmically.) Our procedure corresponds to the following stylized situation. We select a randomly chosen "informed" trader who observes orders from her own clients and trains her network by predicting the index. We use her data as a network input and then simulate the behavior of other traders whom we consider uninformed as to the direction of the three chosen indexes but who, controllably, can observe or be kept in the dark concerning the actions of their colleagues from the same brokerage (Figure 5). The situation of "leaky brokers" has been described in [41] in the following terms: "When considering the theoretical soundness of a market equilibrium in which brokers leak order flow information, one may wonder why an informed asset manager is willing to trade with brokers that tend to leak to other market participants . . . The broker would enforce this cooperative equilibrium across subsequent rounds of trading. In particular, the broker can exclude from the club the managers that never share their private information and reward with more tips the managers that are more willing to share".

The results were averaged over six or twelve independent runs of the network and were not significantly different. The best results were obtained by a seven-layer convolutional neural network (CNN), though other options have been explored (Appendix C). Conventionally, a CNN is used for image recognition and analysis. Essentially, we used the matrices of the residuals of the output regression (depicted as a heat map in Figure 3) as if they were digitized visual images in order to predict the direction of an index. Table 4 displays the trials with randomly selected informed traders in both training and predictive samples, as well as training samples with only "uninformed" traders, i.e., traders who observe only bid and ask volumes per basket, without access to current or past magnitudes of the index. Unlike the results from Table A1 in Appendix C, the statistically significant results from Table 4 were broadly reproducible on successive runs of the network.
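A sketch of a small convolutional network of the kind described above; the paper only states that a seven-layer CNN with ReLU activations was used, so the layer sizes, sample count, and stand-in data below are assumptions, not the authors' exact configuration.

```python
# Sketch of a small CNN regressing a monthly index from residual matrices treated as images.
import numpy as np
from tensorflow.keras import layers, models

rng = np.random.default_rng(1)
n_samples, H, W = 24, 16, 16
X = rng.normal(size=(n_samples, H, W, 1)).astype("float32")   # stand-in residual "images"
y = rng.normal(size=(n_samples,)).astype("float32")           # stand-in monthly index values

model = models.Sequential([
    layers.Input(shape=(H, W, 1)),
    layers.Conv2D(16, 3, padding="same", activation="relu"),
    layers.Conv2D(16, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(2),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1),                                           # regression output
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=4, verbose=0)

# Predictions made from another trader's residuals would then be correlated with the
# realized index, as in Table 4.
print(model.predict(X[:3], verbose=0).ravel())
```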
The general conclusion from Table 4 is that the CNN can reliably predict Chinese market sentiment from daily imbalances, the prediction of stock market returns is usually significant at 10% but not at 5%, and the yield of the BOC bond cannot be inferred from the imbalances. The implication of insider information does not improve the prediction very much for the market sentiment index, somewhat helps to predict the direction of the stock index, albeit well within the assumed 10% statistical dispersion of the results, and is irrelevant for the direction of bond yields. In all cases, there is little difference whether an "uninformed" trader trains her network on the imbalances of her informed colleague or of another uninformed trader.

Table 4. Correlation of predictions of the three indexes. Traders "a" and "b" were designated as informed about the index. The first two columns include informed traders in both the training and prediction samples. The second pair of columns includes data from an informed trader only in the training sample. The third pair of columns does not have an informed trader at all. For comparison, we show the results obtained by a single 500-round run and by changing the ReLU into the TANH perceptron function. The lower left rectangle indicates data obtained from six TANH trials. (A) Correlations of monthly predictions based on market sentiment with actual data.

Discussion: Possible Sampling Issues

The reporting files do not contain identification of the execution price and a particular stock. Our formation of the price buckets was based on a uniform division of the price range in a trading book into intervals of 0.5 CNY. We considered this choice optimal because it allows a significant number of price buckets (120 ÷ 400). Furthermore, migration of the price by more than 8 CNY in a given day is rare, and we can use a parsimonious 16 × 16 matrix approximation for the Fourier evolution operator. This, or any similar choice, for instance an arbitrary 0.4317 CNY, ensures that there could be one, several, or no trades in a particular price bucket. Yet, the Chinese stock market, which in 2009-2010 was in its nascence, was dominated by low-priced stocks (below 10 CNY). At the end of 2009, of the 293 listed equities, only 34 issues (11.6%) had a mean price below 10 CNY, and 259 had a price above that number. Hence, the lower price buckets could be systematically different from the upper buckets in that they might contain several stocks, while the price buckets above the average (see Table 1) can contain only one stock or none at all. To clarify this problem, we artificially split the trader books into parts comprising the stocks with a price below 10 CNY and above 10 CNY. Of course, there could be some borderline migration of stocks priced slightly above 10 CNY into the first portfolio and stocks priced slightly below into the second. We expect a small influence of this issue on day-to-day trading and we ignored it in our analysis.
When we accomplished the procedure described in Sections 4-7 for our censored trading books, the results were broadly the same as those observed in Table 4. Namely, if one uses portfolios of low-priced stocks to train the net and then predicts the index from the trading books in which only the high-priced stocks are included, one can confidently infer the sentiment index; the Shanghai stock index is predicted in some runs but with low statistical validity, and there is no correlation between the net trained on our traders' positions and the yields on the 10-year bond of the Bank of China. These results do not change much if we train the net on the high-priced stocks and leave the prediction to the trader of the low-priced stocks. Only the prediction of the sentiment index slightly improves (the correlation grows from ~95-97% to ~97-99%), which suggests that the higher-priced stocks were more liquid and, hence, had better predictive value. As is the case with all statistical experiments, this observation is only tentative. Empirical Liquidity of the Chinese Stock Market in the Period 2009-2010 A proposed microstructure model of the Chinese stock market allows us to analyze both predictable and unpredictable frictions resulting from two interleaving factors: (1) imperfect balance between buy and sell orders, and (2) securities changing value during trading. The net cost of trading is computed similarly to [40], though their formula can accept different conventions. Our formula presumes that the brokerage sells an asset in today's quantity marked to market at yesterday's buy price and replenishes its inventory sold yesterday at today's ask price, with the cost to the customer being equal in magnitude and opposite in sign. Of course, the signs in Equation (9) are arbitrary. In Equation (9), π_t is our definition of the cost of trading, and p_a, p_b are the ask/bid price buckets. V_b and V_s are the volumes of buy/sell orders. The index i = 1-16 signifies the price bucket. Note that in market equilibrium, in Equation (7), the cost averaged over all buckets is equal to the (constant) bid-ask spread times the daily turnover and is always non-negative. Outside of equilibrium, the sign of π_t can be arbitrary because of fluctuating stock prices. Our analysis by CNN indicates that the net cost of trading is a fair predictor of the market direction in the sense that we have outlined in a previous section (Figure 5). Namely, if we assume that the broker or regulator is "blind" to the order size, she can get a clear idea about the Chinese market sentiment from the trading costs only. Her idea of stock market direction would be imperfect but statistically significant and, finally, there is no connection to the Bank of China bond prices through our model. While this exercise is purely imaginary as applied to the Chinese brokerages, we suggest that this conclusion-that, in the observed period, trading costs reflected market sentiment more or less mechanically (see Figure 6)-can help traders and regulators alike in the case of "Dark Pools". In the latter case, the information about the exchange's strategy is covert and can be gleaned only indirectly.
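The displayed form of Equation (9) does not survive in this excerpt, so the snippet below is only a plausible reading of the verbal description above (replenish yesterday's sales at today's ask, mark today's purchases to yesterday's bid), not the paper's actual formula; the per-share "roundtrip" lambda, which the following paragraphs recast Equation (9) into, is likewise an assumed reconstruction from the stated intuition.

```python
# Hypothetical reconstruction (NOT the paper's Equation (9)): per-bucket net
# cost of trading and a per-share roundtrip illiquidity lambda, based only on
# the verbal description in the text.
import numpy as np

def trading_cost(p_ask_t, p_bid_prev, v_sell_prev, v_buy_t):
    """Per-bucket cost pi_{i,t}: replenish yesterday's sales at today's ask,
    mark today's purchases to yesterday's bid.  Arrays indexed by price bucket."""
    return p_ask_t * v_sell_prev - p_bid_prev * v_buy_t

def roundtrip_lambda(pi, volume):
    """Assumed reading of the dynamic Amihud-type measure: average cost of a
    one-share roundtrip inside each bucket."""
    volume = np.where(volume > 0, volume, np.nan)   # avoid division by zero
    return pi / volume

# Toy example with 16 price buckets.
rng = np.random.default_rng(0)
p_bid_prev = np.linspace(5.0, 12.5, 16)              # yesterday's bid buckets (CNY)
p_ask_t = p_bid_prev + 0.01                          # today's ask, small spread
v_sell_prev = rng.integers(100, 1000, size=16).astype(float)
v_buy_t = rng.integers(100, 1000, size=16).astype(float)

pi = trading_cost(p_ask_t, p_bid_prev, v_sell_prev, v_buy_t)
lam = roundtrip_lambda(pi, v_buy_t + v_sell_prev)
print(pi.round(2), lam.round(4), sep="\n")
```

In the static case (equal buy and sell volumes and unchanged prices) this reading reduces to spread times turnover per bucket, consistent with the equilibrium property stated above; outside equilibrium the sign can flip, as the text notes.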
Further on, we use the liquidity lambda to predict the same indexes. The intuitive meaning of our version of the Amihud measure is that it represents the average cost for the agent to make a roundtrip inside the same price bucket with one share. To provide a glimpse of the magnitude and volatility of λ, we display its daily dynamics in Figure 7. Equation (9) can be recast into a (dynamic) version of the liquidity measure of [20]. Figure 7. Daily dynamics of λ (Equation (9)) across all price buckets for a single trader. Single Event Analysis Our sample includes the day of the Flash Crash in US stock markets (6 May 2010), which corresponds, depending on the time of day, to either trading day 326 or 327 in our sample. There is no visible anomaly in the liquidity of the Chinese stock market during or after that day. It is interesting to analyze this event using our methods. The spillovers from the established stock markets into the Chinese markets had previously been studied by [42] using the measure of volatility proposed by [43,44]. They observed that before 2010, the Chinese stock market produced volatility spillovers to Taiwan and Hong Kong, but its influence on the European and North American markets was statistically insignificant. However, the shocks in the US stock market affected every stock market they studied. Beginning in approximately 2007, some pushback from the Chinese stock market could be observed. The question of whether the mutual influences between the stock markets were macroeconomic or microstructural in nature was investigated by [22]. They also observed asymmetric shocks, i.e., shocks propagating predominantly from US markets into China but not the other way around. Li and Peng noticed that a structural shock in the US markets usually decreases correlations between the Chinese and American stock markets. We decided to investigate the influence of the US stock markets on China by our methods of the CNN analysis of the regression residuals. We display the testing strategy in Figure 8. The observation period (years 2009-2010) is split into eight overlapping samples of 60 days each. One sample of two adjacent periods (usually, but not necessarily, the first) is used for network training. This constitutes one training and five predictor samples. We then attempt to predict monthly indexes backward from the training data.
The null hypothesis is that predictions obtained from subsamples of the illiquidity measure are no different from each other. We, of course, would prefer that the null be rejected for the samples containing the Flash Crash in America. Figure 8. Schematic description of the testing procedure. We use the data on λ, which we consider a measure of market friction, from the training sample to predict indexes from the λ's in five other samples. In the drawing above, the training sample comes first in calendar time, but its position within the entire sample can be arbitrary. The statistical significance of the correlations of the predicted monthly indexes for one of the traders (tapes) is shown in Table 5. Note that, in agreement with the results of Section 6, almost no statistically significant correlation for the prediction of the return on the stock index, and none for the yields on the 10-year bond, can be observed. On the contrary, the prediction of the market sentiment from the observed Amihud illiquidity measure is robust. This corresponds to the testing of H1, and of H2 above where the right-hand side is replaced by zero. As a rule, the null cannot be rejected for any of the three indexes. Table 5. (A) Probabilities for prediction of the sentiment index, (B) probabilities for prediction of stock market returns, and (C) probabilities for prediction of bond yields. Highlighted are the subsamples which include the US Flash Crash of 6 May 2010. We observe from Table 5 that the null hypothesis (that illiquidity in the subsamples including the date of the American Flash Crash behaves no differently from the other samples) cannot be rejected. Only one probability in Table 5 is below 10%, and it is not stable with respect to consecutive runs of the neural network with a different seed. We cannot discern an influence of a shock from microstructure data, yet the results, e.g., of Li and Peng (2017), suggest that the transmission of shocks was real. This suggests that the transmission between the US and Chinese markets was mainly through macroeconomic fundamentals. Alternatively, as we have seen from Table 5, our deep learning-based method may simply require too long a sample to resolve the spillover.
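The subsample testing loop itself can be sketched as follows; the window length, overlap, and the predict_indexes() placeholder (standing in for the trained network) are assumptions for illustration and not the paper's implementation.

```python
# Illustrative sketch of the testing procedure: split the period into
# overlapping 60-day windows, then check whether predictions in each window
# correlate with the realized index.  The predictor is a placeholder; in the
# paper it is the network trained on one designated window.
import numpy as np
from scipy.stats import pearsonr

def overlapping_windows(n_days, length=60, step=30):
    """Start/stop indices of overlapping windows covering n_days."""
    return [(s, s + length) for s in range(0, n_days - length + 1, step)]

def predict_indexes(lam_window):
    """Placeholder for the trained network acting on daily lambdas (assumption)."""
    return lam_window.mean(axis=1)          # trivially: average lambda per day

rng = np.random.default_rng(1)
n_days, n_buckets = 480, 16
lam = rng.normal(size=(n_days, n_buckets))     # synthetic daily lambdas
index = rng.normal(size=n_days)                # synthetic daily index

for start, stop in overlapping_windows(n_days)[1:]:
    pred = predict_indexes(lam[start:stop])
    r, p = pearsonr(pred, index[start:stop])
    print(f"days {start:3d}-{stop:3d}: r = {r:+.2f}, p = {p:.2f}")
```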
Conclusions In our paper, we propose a microstructure model of the Chinese stock market. We build it from a state-space of interday correlations of volumes between price buckets. Because of the T + 1 rule in the Chinese stock markets, interday correlations are expected to significantly reduce the microstructure noise. For a 100%-efficient market in equilibrium, our time series would be exactly equal to zero. This model is analyzed by OLS in a dual state-space, connected to the original state-space by a Fourier transform across the multiple price buckets. This procedure corresponds to an approximation of the predictable fraction of the time evolution of trading volumes by a Fourier integral (pseudodifferential) operator of a general form. In a discrete case, this operator can be represented by a matrix of arbitrary dimension (selected for convenience as 32 × 32) acting on the space of the Fourier coefficients (see Appendix A for details). This method is completely general and can be applied to any time series that can be grouped into panels, in our case volumes per given stock price. The presented model based on the Fourier integral operators, augmented by deep learning methods, can be calibrated daily by market makers. It predicts the next day's volume at a price change x (see Equation (2)) and has no fundamental price inputs. Hence, the prediction reveals the dynamics of brokerage demand driven by its clientele without intervening events in the market. The proposed method (or protocol, in the parlance of deep learners) consists of three stages: (1) preprocessing, filtering using Fourier integral operators; (2) processing, recognition of the sequence of residual matrices using a neural network; and (3) postprocessing, index forecast with the trained network. It is universal and can be applied to any trading system as a continually updated model. It can simulate any given market with a level of complexity chosen by the broker and/or the regulator. Namely, each day, a predictable trading forecast is provided by a multivariate filter (a Fourier integral operator in the current paper, but other methods, e.g., the Kalman filter, can be used as well). This stage is "dumb" in the sense that it does not take into account intervening changes in market fundamentals. Past residuals to the regression can be independently predicted by the neural network and added to the results of the filtering. In the third stage, different validation strategies can be applied. This protocol can incorporate generic processes such as multivariate ARMA (similar to the one used in the paper) or a specific microstructure model, e.g., Roll's [4]. The main conclusion of our analysis is that the unpredictable dynamics of trades completed by the Chinese brokers in the period 2009-2010 were tightly correlated with the Chinese market sentiment in the highly nonlinear sense of machine learning. This can be interpreted as investors trading according to the available market information they receive. The returns on the stock index were predictable, but only barely statistically significant. Because the predictive power of the trades for the actual stock market returns was small, this indicates investors were following a herd mentality. We observed no connection between the trading activity and the yields on the bellwether 10-year bond of the Bank of China. This may signify that market risk in an emerging market, which describes the Chinese stock market in 2009-2010, has only a small dependence on prevailing borrowing rates. Finally, we tested whether the Flash Crash of American markets on 6 May 2010 was reflected in the liquidity of the Chinese stock market.
For this, we used a dynamic version of the Amihud illiquidity measure λ. The illiquidity measure predicted by the CNN was almost as good a predictor of market sentiment as the VAP (see Figure 6), and we used it to analyze a possible contagion of the American Flash Crash on the Chinese markets. With our methods, we did not find any statistically significant evidence that liquidity was higher or lower than average during the period preceding or following the Flash Crash. This indicates a need for a finer measure of contagion between the American and Chinese stock markets. In particular, samples 60 days long can be too crude to resolve the influence of the Flash Crash, which may have lasted only 1-2 trading days. The limitations of the proposed method are that all regressions at the preprocessing stage must be linear. Non-linear regressions lead to non-linear integral operators on the right-hand side, which may be more complicated or less computationally stable than the original equations. The second limitation is that the choice of neural network (discussed in Appendix C) can be rather arbitrary, with few hard-wired criteria to guide the selection. Moreover, the main limitation of this study is its restricted array of free data from the Shenzhen Stock Exchange and its limited computer resources. (All computer algorithms were tailored to be run on a conventional PC using no more than 15-30 min of processor time in the case of the R algorithms.) However, all of the paper's methods can be applied to datasets of arbitrary size. Conflicts of Interest: The author declares no conflict of interest. Appendix A Mathematically, one can make the following observation about the temporary correlations as the state-space variables. Let $\tilde{u}_t = u_t + \epsilon_{1t}$ and $\tilde{v}_t = v_t + \epsilon_{2t}$ be our time series, where $u_t$ and $v_t$ are signals with correlation coefficient ρ and variances $\sigma_u^2$ and $\sigma_v^2$, and the epsilon terms are microstructure noises with variances $\sigma_1^2$ and $\sigma_2^2$, uncorrelated with the signals and with each other. Then, their noisy correlation for a small white noise becomes
$$\tilde{\rho} = \frac{\rho}{\sqrt{\left(1 + \sigma_1^2/\sigma_u^2\right)\left(1 + \sigma_2^2/\sigma_v^2\right)}} \approx \rho\left(1 - \frac{\sigma_1^2}{2\sigma_u^2} - \frac{\sigma_2^2}{2\sigma_v^2}\right).$$
This is a downward-biased estimator of the true correlation, stationary as long as the second moments of the noise are small and stationary. Furthermore, the correlation coefficients are concentrated in the [−1, 1] range and, thus, are more amenable to intuitive and graphic interpretation. Appendix B. Pseudodifferential Operators Pseudodifferential operators are a particular case of the Fourier integral operators [43]. Frequently, financial time series are described by AR(n) models, which are, in turn, the discrete analogs of differential operators with constant coefficients. Pseudodifferential operators can be considered as extensions of ARMA(p,q) models. From that angle, conventional ARMA(p,q) models are discrete pseudodifferential operators with a rational function as a symbol. The formal definition of the pseudodifferential operator goes as follows (e.g., [37,38,45]). One can easily write a solution for a Cauchy problem for the Kolmogorov-Fokker-Planck equation, describing Ito diffusion, through a pseudodifferential operator. Indeed, where L is the generator of the Ito diffusion of the following form. Here, a is an n-dimensional vector, Σ is the n × n matrix, and i, j = 1 ÷ n index the dimensions of the state-space.
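The displayed equations of Appendix B did not survive extraction. For orientation only, the standard textbook forms they appear to refer to are reproduced below; the paper's own equation labels and notation may differ, and reading Σ as the diffusion covariance matrix is an assumption.

```latex
% Standard definition of a pseudodifferential operator with symbol a(x, xi):
(P u)(x) = \frac{1}{(2\pi)^n} \int_{\mathbb{R}^n} e^{\, i x \cdot \xi}\, a(x,\xi)\, \hat{u}(\xi)\, d\xi .

% Generator of an Ito diffusion with drift a and n-by-n matrix Sigma
% (assumed here to be the diffusion covariance matrix):
L = \sum_{i=1}^{n} a_i \, \partial_{x_i} + \tfrac{1}{2} \sum_{i,j=1}^{n} \Sigma_{ij}\, \partial_{x_i} \partial_{x_j} .

% For constant a and Sigma, the Cauchy problem  \partial_t f = L f,  f(0,\cdot) = f_0,
% is solved by a pseudodifferential operator acting on f_0:
f(t, x) = \frac{1}{(2\pi)^n} \int_{\mathbb{R}^n} e^{\, i x \cdot \xi}\,
          e^{\, t \left( i\, a \cdot \xi - \frac{1}{2}\, \xi^{\top} \Sigma\, \xi \right)}\, \hat{f_0}(\xi)\, d\xi .
```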
This problem's solution expressed in a form of Equation (A1) looks as: In our case, the phase-space regression has a form of Equation (4) of the main text: To better demonstrate a connection of this regression to the pseudodifferential operators, we replace discrete time steps in Equation (A6) with continuous time. The transfer Equation (A7) becomes: The value of the state vector as a function of a state variable x for an arbitrary time T can be expressed as a moving average-type equation: In Equation (A4), the symbol of the pseudodifferential operator is equal to: Appendix C. Types of Neural Networks Used in Our Estimations The first test used the prediction of three indexes from the first four average moments using a shallow neural network with one hidden layer. When we feed the network data from the stock imbalances, the shallow learning network predicts a constant answer indicating zero information about the direction of the indexes. This negative result is not all bad because it suggests that the distribution of the residuals is practically indistinguishable from a normal distribution, a nice enough feature for such an artificial model. Figure A1. Example of estimating of shallow three-layer network with one hidden layer for the prediction of the market sentiment (stock returns, bond yields) from the first four moments of the error distribution. The input data for the network are pre-processed and the data for the first four monthly moments are being fed into the network. Blue arrows from circles marked as '1' signify the training of one sample by the actual market index data.
For our second test, we have used a complete set of residual matrices and the following experimental procedure (Figure 4 in the main text). We trained a relatively deep 10-layer network on a sample of all monthly regression errors from a randomly chosen trader, omitting the last day of each month, and tested this prediction on the same trader's data sample of the last days of the month. Then, we tried to predict our indexes (sentiment, stock returns, and bond yields) back from the supposedly unexpected changes-provided by our regression-in other trading tapes. One of the difficulties in dealing with neural networks is that results frequently represent multidimensional tensors. By their origin, they cannot be listed compactly in two-dimensional tables and their presentation on a sheet of paper or a computer screen is non-intuitive. We present the results of the 10-layer network in Table A1. Training on buy or sell signals had some limited power for predicting market indexes from a sample of imbalance correlations for the same trader, or imbalances for other traders whom we considered "blind", i.e., who predict the direction of indexes from their observations of the imbalance quotes. Some outputs from this model are shown in Figure A2. The inverse path using the same network, i.e., predicting indexes by the training sets on unexpected changes in imbalances, was added for control. The scalar network results show little dependence on the number of training rounds and functional shapes of individual transmission functions (ReLU-rectified linear unit, hyperbolic tangent, or logit). Figure A3. Examples of the Chinese sentiment index (blue) and its backward prediction (red) by a 10-layer scalar network from buy quotes (correlation ρ = 0.5841) and sell quotes (correlation ρ = 0.2934) of randomly selected traders. Traders are "uninformed", i.e., they make predictions based on their observations through a network trained by an informed trader. Results are poorly reproducible.
Our third exercise was to use a seven-layer convolutional neural network (CNN) sketched in Figure A1. CNN is conventionally used for image recognition and analysis. Essentially, we used matrices of output regression (an example is provided in Figure 2) as if they were digitized information for the visual images to predict the direction of an index. Unlike the results from Table 3, the statistically significant results from Table 4 were broadly reproducible on successive runs of the network. Table A1. Select runs of a 10-layer neural network for the backward prediction of monthly indexes from traders' activity. Capital letters B, S, and I mean "Buy", "Sell", and "Imbalance" samples of residuals. Letters a-d refer to a particular trader. An arrow designates training vs. prediction tapes. The symbols r1-r3 refer to the correlation coefficients of the neural network predictor and the actual indexes. The explanatory power of the predictions can be inferred from the squares of the correlation coefficients. For instance, if we treat broker "b" as informed, trader "a" could have predicted the bond yield for the next month from their imbalances with an explanatory power of r3² = 0.5178² ≈ 26.8%. Test results were poorly reproducible on successive runs of the network. Finally, we tested the Long Short-Term Memory (LSTM) network and post-processed the results by several smoothing algorithms (mean, moving average, exponential moving average) over the previous 21 days-the average trading month in China for the years in question-to compare the results with the monthly indexes. The box chart of the procedure is portrayed in Figure A5. We plot some of the outputs in Figure A6.
The results of this procedure do not depend much on the number of epochs and batch sizes; they are slightly better than those of the 10-layer network above but worse than those of the CNN, which we use in the main text. Figure A5. Box chart of the Long Short-Term Memory estimation in our paper's context. Index input signifies one of three monthly indexes (sentiment, stock market returns, and yield on the BOC bond). Daily predictions of the indexes are smoothed through post-processing before being compared to the original inputs. The LSTM network was run with default Keras parameters. Figure A6. Backward LSTM prediction of the monthly sentiment index (blue) using data from the broker's own training tape (orange) and three other brokers (blue, grey, and yellow dash). The general downward tendency of the sentiment for the period, but not much else, can be predicted from other brokers' tapes.
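The LSTM-plus-smoothing step can be sketched as below, assuming default Keras settings as the caption of Figure A5 indicates; the window length, layer size, and synthetic inputs are illustrative assumptions rather than the paper's configuration.

```python
# Illustrative sketch: an LSTM trained on daily features, with its daily output
# smoothed over 21 trading days (an average Chinese trading month) before
# comparison with a monthly index.  Sizes and data are placeholders.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

window, n_features, n_days = 21, 16, 480
rng = np.random.default_rng(2)
features = rng.normal(size=(n_days, n_features)).astype("float32")
index = rng.normal(size=n_days).astype("float32")      # synthetic daily index

# Build rolling windows of length `window` ending at each day.
X = np.stack([features[t - window:t] for t in range(window, n_days)])
y = index[window:]

model = keras.Sequential([
    keras.Input(shape=(window, n_features)),
    layers.LSTM(32),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Post-process daily predictions with a 21-day moving average before comparing
# them to the monthly index (the paper also tries mean and exponential smoothing).
daily_pred = model.predict(X, verbose=0).ravel()
smoothed = np.convolve(daily_pred, np.ones(window) / window, mode="valid")
print(smoothed[:5])
```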
15,530.2
2022-07-14T00:00:00.000
[ "Economics", "Mathematics" ]
A Stacked Microstrip Antenna Array with Fractal Patches A novel microstrip antenna array, which utilizes Giuseppe Peano fractal shaped patches as its radiation elements and adopts a two-layer stacked structure for achieving both wideband and high-gain properties, is proposed. A parametric study shows that the proposed antenna's size can be arbitrarily adjusted by changing the fractal proportion while high aperture efficiency is maintained. Two prototypes with 2 × 2 and 4 × 4 fractal patches, respectively, on each layer are designed, fabricated, and measured. Both simulation and measurement results demonstrate that the proposed antenna possesses encouraging performances of wideband, high directivity, and high aperture efficiency simultaneously; for example, for the two prototypes, their S11 < −10 dB impedance bandwidths are 23.49% and 18.49%, respectively; at the working frequency of 5.8 GHz, their directivities are 12.2 dBi and 18.2 dBi, and their corresponding aperture efficiencies are up to 91.0% and 90.5%, respectively. Introduction Directional antennas with high radiation gain are key devices in many practical applications such as remote wireless communication. Due to their attractive features like low weight, low profile, small size, and ease of manufacture, employing microstrip antennas to form arrays is a widely adopted method to design directional antennas with high gain. However, it is well known that microstrip antennas have an intrinsically narrow bandwidth [1], typically a small percentage of the center frequency. In view of the explosive growth of wireless systems and the booming demand for a variety of new wireless applications, it is important to design directional antennas with both wide bandwidth and high gain to cover a wide frequency range. Indeed, there are countless studies in the literature on designing microstrip antennas with high gain or wide bandwidth, but few of them address both properties together. To tackle the narrow bandwidth problem of microstrip antennas, various techniques have been proposed. Among them is stacking one or several parasitic layers on a microstrip antenna [2], and various related methods have been widely used. For example, an 8-layer stacked patch unit assembly allows for a great bandwidth of more than 50% of the center frequency [3], and the use of a high dielectric constant substrate for the driven layer and a low dielectric constant substrate for the superstrate can offer more than 25% bandwidth [4,5]. The intrinsic properties of fractal geometries are conducive to the miniaturization of antenna size and the realization of multiband or broadband characteristics [6,7]; an L-shape slot loaded broadband patch antenna enhances the gain without affecting the broadband impedance matching characteristics [8]; artificial magnetic conductor structures are employed as the antenna's magnetic ground plane for bandwidth enhancement and radiation gain improvement of a patch antenna [9]. A dipole antenna with a double electromagnetic band gap (EBG) reflector is presented for wide operating bandwidth and high gain [10]. However, there are some defects in these designs. Some of them have a high profile; some require special materials and increase production costs; some have complex structures, which increases the difficulty of manufacture. More details are shown in Table 1.
In this work, a novel microstrip antenna array, which employs a two-layer stacked structure and Giuseppe Peano fractal shaped patches for realizing both wideband and high gain properties, is proposed, analyzed, and measured. The remainder of the paper is organized as follows. Section 2 introduces the configuration of the proposed antenna array. A parametric study is presented in Section 3. Simulated and measured results are presented in Section 4. Antenna Configuration Fractals are geometrical shapes which are self-similar, repeating themselves at different scales. With the development of fractal theory, the nature of fractal geometries has been exploited in many fields of engineering and science, including antenna design. The utilization of fractal geometries in antenna design has led to the evolution of a new class of antennas called fractal shaped antennas. The Giuseppe Peano fractal is a class of fractal geometries. Its recursive procedure is shown in Figure 1. A Giuseppe Peano fractal starts from a segment with length l2 and allows its central part with length l1 to break into two zigzag sections; it is constructed iteratively by growing new zigzag sections that have a specific length ratio n = l2/l1 with respect to their parent section. As depicted in Figure 2, when the Giuseppe Peano fractal is applied to the edges of the square patch, this fractal patch with different sections resonates at different frequencies, which together form a wide working frequency band. The configuration of the proposed microstrip antenna array is illustrated in Figure 3. This antenna utilizes a two-layer stacked structure, containing a radiation layer and a parasitic layer. Each layer is printed on a PCB (printed circuit board) with relative permittivity εr = 2.55 and thickness h = 1 mm. The two layers are separated by an air gap of 3.2 mm. On the top surface of the radiation layer, Giuseppe Peano fractal shaped patches are etched periodically. They act as radiators and are connected to a microstrip corporate feeding network to form an array. On the bottom surface of the parasitic layer, the same patches as those on the radiation layer are also etched. These patches are parasitic elements for enhancing the bandwidth and gain of the antenna array. This antenna is fed from a 50 Ω coaxial connector. The microstrip corporate feeding network consists of a series of T-junctions to deliver electromagnetic energy uniformly and multiple-section quarter-wavelength impedance transformers to achieve impedance matching, and it provides equal-amplitude and in-phase excitation to all fractal shaped patches. Parametric Study Here, we investigate the effect of the antenna's parameters on its performance characteristics. In this section, the proposed antenna works at 5.8 GHz, has 2 × 2 and 4 × 4 fractal patches on the radiation and parasitic layers, respectively, and adopts second-iteration Giuseppe Peano fractal patches. Effect of the Fractal Proportion. The fractal proportion n is defined as shown in Figure 1. The greater n is, the closer the Peano fractal radiator is to a square patch. The 2 × 2 square patches antenna array, which works at the same frequency of 5.8 GHz, is illustrated in Figure 4. Each layer is printed on a PCB with relative permittivity εr = 2.55 and thickness h = 1 mm, the same as the fractal ones.
It is obvious that the S11 < −10 dB impedance bandwidth is about 14.02% (from 5.37 GHz to 6.18 GHz). At its working frequency of 5.8 GHz, the antenna has an input reflection coefficient of −22.65 dB, which indicates that a good impedance match has been achieved. Different 2 × 2 array antennas with different fractal proportions (n = 2, n = 2.5, n = 3, n = 3.5, n = 4, and n = 4.5), all working at 5.8 GHz, are optimized by a genetic algorithm (GA). Figure 7 reveals the simulated reflection coefficient of the antenna arrays with different fractal proportions and Figure 8 reveals the simulated directivity. From Table 2, one can observe that the fractal proportion has a great influence on the antenna performance. As the fractal proportion increases, the aperture area of the proposed antenna changes accordingly; the corresponding aperture efficiencies, calculated from the directivity, are shown in Figure 9. The vast majority of them are more than 85%. The average aperture efficiency (from 5.4 GHz to 6.1 GHz) is shown in Table 3. Table 3 demonstrates that the average aperture efficiency of the Peano fractal antenna array is higher than that of the square antenna in the same working band. The maximum value of average aperture efficiency is obtained when the fractal proportion n = 4. Effect of Fractal Iteration. First and second iterations are respectively applied to the edges of the square patch. All the parameters of the second-iteration antenna array are set to be the same as those of the first one. The comparison of the reflection coefficients of these two antennas is drawn in Figure 10. One can observe that the impedance bandwidth for S11 < −10 dB is 24.18% (from 4.98 GHz to 6.35 GHz), which is much wider than that of the square patches antenna array and slightly larger than that achieved in the first iteration of the fractal patches antenna array. As the iteration of the fractal geometry increases, its resonance frequency decreases, which may lead to an effective antenna miniaturization. However, for iterations higher than the second, the antenna design becomes quite complicated and its fabrication becomes difficult. The comparison of the simulated radiation patterns in the E-plane and H-plane of these two antennas is shown in Figure 11. The comparison indicates that the influence of the fractal iteration on the radiation patterns is almost negligible. Effect of Array Elements Number. In this part, a proposed antenna with 4 × 4 Giuseppe Peano fractal radiating elements working at 5.8 GHz is also optimized by GA. Figure 12 compares the simulated reflection coefficient of the 2 × 2 square patch antenna and that of the 4 × 4 Giuseppe Peano fractal antenna. As is well known, when the number of array elements increases, the impedance bandwidth of the antenna array decreases because of the mutual coupling between the array elements [13,14]. Although the element number of the fractal antenna array is four times that of the square antenna, the impedance bandwidth for S11 < −10 dB is 18.43% (from 5.32 GHz to 6.40 GHz), which is much wider than that of the square patches antenna array introduced previously. Experimental Results Two prototype antennas with 2 × 2 and 4 × 4 radiating elements, respectively, have been fabricated and measured, as shown in Figure 13. Some glass sticks with a diameter of 5 mm are used for propping them up. Figure 14 shows the comparison of the measured and simulated reflection coefficients of the prototype antennas.
The measured and simulated results are in good agreement. From the measurement, the S11 < −10 dB impedance bandwidth of the antenna is about 23.49% (from 5.07 GHz to 6.42 GHz) for the 2 × 2 fractal array antenna and 18.19% (from 5.34 GHz to 6.41 GHz) for the 4 × 4 fractal array antenna, respectively. At the working frequency of 5.8 GHz, the antennas have input reflection coefficients of −16.95 dB and −18.81 dB, respectively, which indicates that a good impedance match has been achieved. Figures 15 and 16 depict the simulated and the measured radiation patterns at different frequencies within the effective frequency band. Conclusion A novel Giuseppe Peano fractal antenna array is presented. Structural parameters of the proposed antenna are optimized by a parallel GA to achieve both high gain and wideband properties over a desirable frequency band centered at 5.8 GHz. Two prototype antennas were fabricated and measured. The measurement results and the simulation results agree well and show that the optimized antenna array possesses some encouraging properties. By comparing the proposed antenna with the square patches one, the important conclusions resulting from this study are as follows. (1) The Giuseppe Peano fractal configuration provides extremely high flexibility to achieve broadband performance while maintaining a higher average aperture efficiency in the operating frequency band. The fractal proportion can be selected according to design requirements; for example, if the impedance bandwidth is a major consideration in the design, a fractal proportion close to 3.5 is comparatively suitable; if the design requires making full use of the high aperture efficiency to maintain high directivity over the bandwidth, a value approximating 4 is more reasonable. (2) In the case of the same number of array elements and the same working frequency, the Giuseppe Peano fractal antenna array can more effectively reduce the required aperture area (by 51%) than the traditional square patches antenna array. As the iteration of the fractal geometry increases, its resonance frequency decreases; this may lead to an effective miniaturization of the antenna. At the same time, the radiation pattern is essentially unchanged. (3) Although the element number of the fractal antenna array is four times that of the square antenna, the impedance bandwidth for S11 < −10 dB is 18.43% (from 5.32 GHz to 6.40 GHz), which is still much wider than that of the square array antenna introduced previously (about 14.02%, from 5.37 GHz to 6.18 GHz). This clearly shows that the introduction of the fractal radiation unit can reduce the mutual coupling between the antenna elements. Given the conclusions above, the merits of wideband performance and high aperture efficiency make the proposed antenna a good candidate for various applications. Figure 9: The simulated aperture efficiency of the different fractal proportion antenna arrays. Figure 10: Comparison of the simulated reflection coefficient of the 1st iteration and 2nd iteration. Figure 11: Comparison of the radiation patterns of the fractal iterations, 1st and 2nd. Figure 12: Comparison of the simulated reflection coefficient of the 2 × 2 square patch antenna and the 4 × 4 Giuseppe Peano fractal antenna. Figure 13: Both layers of the fabricated prototype antennas. Figure 14: The measured and simulated reflection coefficients of the prototype antenna.
Figure 15: Measured and simulated radiation patterns on the E-plane and the H-plane at different frequencies of the 2 × 2 fractal antenna array. Table 1: Detailed data of the antennas mentioned previously. Table 2: Details of different antennas.
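The fractional bandwidths quoted in this paper follow directly from the band edges, and aperture efficiency follows from the textbook relation between directivity and physical aperture. The short script below is only an illustration of those standard formulas; the aperture area used in the last line is a placeholder, not a value taken from the paper.

```python
# Fractional impedance bandwidth from band edges, and the textbook aperture
# efficiency definition.  The example aperture area below is a placeholder.
from math import pi

C = 299_792_458.0  # speed of light, m/s

def fractional_bandwidth(f_low_ghz, f_high_ghz):
    """Bandwidth relative to the center frequency, in percent."""
    fc = 0.5 * (f_low_ghz + f_high_ghz)
    return 100.0 * (f_high_ghz - f_low_ghz) / fc

def aperture_efficiency(directivity_dbi, freq_ghz, area_m2):
    """eta = D * lambda^2 / (4 * pi * A_phys), with D converted to linear scale."""
    d_lin = 10 ** (directivity_dbi / 10.0)
    lam = C / (freq_ghz * 1e9)
    return d_lin * lam ** 2 / (4.0 * pi * area_m2)

# Band edges quoted in the text:
print(round(fractional_bandwidth(5.37, 6.18), 2))   # ~14.0 % (text: 14.02 %)
print(round(fractional_bandwidth(5.07, 6.42), 2))   # ~23.5 % (text: 23.49 %)
print(round(fractional_bandwidth(5.34, 6.41), 2))   # ~18.2 % (text: 18.19 %)

# Hypothetical 0.004 m^2 aperture, 12.2 dBi at 5.8 GHz (placeholder values):
print(round(aperture_efficiency(12.2, 5.8, 0.004), 2))
```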
3,033.8
2014-02-09T00:00:00.000
[ "Engineering", "Physics" ]
CD4+ Th immunogenicity of the Ascaris spp. secreted products Ascaris spp. is a major health problem of humans and animals alike, and understanding the immunogenicity of its antigens is required for developing urgently needed vaccines. The parasite-secreted products represent the most relevant, yet complex (>250 proteins), antigens of Ascaris spp., as they define the pathogen-host interplay. We applied an in vitro antigen processing system coupled to quantitative proteomics to identify potential CD4+ Th cell epitopes in Ascaris-secreted products. This approach considerably restricts the theoretical list of epitopes obtained with conventional CD4+ Th cell epitope prediction tools. We demonstrate the specificity and utility of our approach on two sets of candidate lists, allowing us to identify hits excluded by either one or both computational methods. More importantly, one of the candidates identified experimentally clearly demonstrates the presence of pathogen-reactive T cells against these antigens in healthy human individuals. Thus, our work pipeline identifies the first human T cell epitope against Ascaris spp. and represents an easily adaptable platform for characterization of complex antigens, in particular for those pathogens that are not easily amenable to in vivo experimental validation. INTRODUCTION Ascaris spp. infections currently affect around 820 million people, leading to impaired growth, impaired physical fitness and cognition, and reduced general performance, in particular in children 1. Considering the worldwide prevalence and intensity of Ascariasis in humans, it is critical to overcome the current limitations in controlling this parasitic infection. Improvements in infrastructure and educational programs, together with revised mass drug administration programs, will certainly contribute to mitigating the impact of Ascariasis 1. However, the ideal solution would be the development of vaccines that prevent the commonly observed re-infection after chemotherapy 2,3. Candidate vaccines to prevent Ascaris spp. infection should be able to trigger effective antibody responses targeting antigens essential for the parasite to complete its life cycle 4. The challenge is that these large, multicellular and cuticularized parasites confront the host with a complex mixture of protein antigens about whose importance for infection and immunogenicity we still know very little. Ascaris spp. actively excrete and secrete complex mixtures of molecules, the excretory/secretory (ES) products, which are essential in the parasite's communication with its host and in shaping the host immune response [5][6][7]. The ES proteins comprise critical targets for vaccination in animal models and are expected to bear targets for vaccination in humans as well (reviewed in ref. 2). Interestingly, animal models have clearly shown the dependency of the targets of the antibody responses on MHCII restriction [8][9][10]. Thus, the deterministic nature of T-B cell responses-which implies that CD4+ Th cell epitopes define the target of the antibody response-should contribute to defining targets for the design of candidate subunit vaccines 11. Therefore, identifying the epitopes that are involved in the host's natural CD4+ Th cell responses is essential to understand, monitor or modulate the adaptive immune mechanisms that orchestrate Ascaris spp. expulsion.
CD4+ Th cells recognize antigenic peptides presented by the major histocompatibility complex class II (MHCII) proteins expressed on antigen-presenting cells (APC). Peptides from antigens only become immunogenic when they are selected for presentation and remain bound to MHCII molecules for a sufficient time to allow T cell surveillance. Thus, the abundance of antigens, their resistance to degradation and their affinity for the MHCII will define the potential immunogenicity of any peptide. To date, conventional in silico approaches predict peptide-MHC (pMHC) affinity mostly based on MHC pocket occupation by the peptide amino acids 12,13, which is usually not very accurate for MHCII. Indeed, the IEDB CD4+ T cell immunogenicity prediction tool captures only around 50% of the immune response with peptides ranked below the 20th percentile. However, these approaches ignore relevant aspects affecting epitope selection, such as the dynamics of peptide-MHCIIs 14,15 (pMHCII), the peptide-editing function of HLA-DM 16, and the influence of proteolytic activities on antigen presentation 17. Although integrating proteolytic degradation improves the current conventional methods 18, a robust in silico ranking of the most effectively presented peptides is still elusive. Experimental approaches based on recombinant proteins or subcellular fractions containing endosomal compartments rich in MHCII have been applied to define epitope selection on single antigens 19,20. Culture of primary DC and quantitative immunopeptidomics of infected cells have also been used to define CD4+ Th cell epitopes from Listeria monocytogenes in mice 21. However, the complex infection cycle of Ascaris spp. and the complexity of its antigens pose a considerable challenge for any of these methods. Consequently, to date, there is no experimental set-up described to define the CD4+ Th immunogenicity of complex antigenic mixtures such as nematode ES. Nematodes are suggested to be polygamous and often show a female-biased sex ratio within their distribution in a host. Analyzing female and male ES antigens separately for their influence on CD4+ Th cell responses presents an unbiased approach to account for pathogen gender-heterogeneity during infection, sexual dimorphism (e.g., size) and gender-associated genes/proteins, as reported previously for other parasitic nematodes 6,7,23,24. We generated human T cell lines from healthy volunteers reacting to ESF or ESM antigens using the antigen-specific T cell enrichment and expansion described by Bacher et al. 25 (Supplementary Fig. 1a). This approach helped to overcome the expected low in vivo frequency of any potential Ascaris ES-specific CD4+ Th cells in healthy (uninfected) donors. The presence of reactive T cells and their low frequency were confirmed by CD40-L staining (Supplementary Fig. 1b). CD40-L is specifically expressed by CD4+ Th cells shortly after TcR-mediated antigen recognition, irrespective of the restricting MHC allele, and can be used to assess and enrich antigen-specific T cells 26. Re-stimulation of the generated cell lines specific for ES antigens resulted in a remarkable increase in CD40-L+ cells when compared to the corresponding controls (Fig. 1a). Upregulation of CD40-L and CD40-L/cytokine co-expression (Supplementary Fig. 1c) after re-stimulation confirms a functional CD4+ Th phenotype of Ascaris-reactive T cells.
Interestingly, when testing the reactivity of T cell lines against the mismatched ES antigen, we could detect a lower proportion of reactive T cells, suggesting the presence of gender-specific T cell responses (Fig. 1b). The observed gender-specific T cell responses motivated us to perform a proteomic characterization of ESF and ESM antigens 6,7 (Fig. 1c-e). Our aim was to identify ESF and ESM antigens bearing potential CD4+ Th epitopes. By combining the use of the exponentially modified Protein Abundance Index (emPAI) with 16O/18O-labeling (Fig. 1d), we defined the ESM and ESF composition, thereby retrieving absolute protein abundances of the ESM and ESF extracts (Fig. 1e) and the relative difference between them (Fig. 1e). This analysis yielded additional information regarding the differences in abundance between ESF and ESM (Supplementary Table 1) in comparison to the previously described dataset 7. The difference in the number of protein sources found here when compared with the previous study (175 previously vs. 254 in our case) arises from our conservative approach when considering the source of tryptic peptides. Rather than selecting a single leading protein, we explicitly kept protein entries with small differences in their primary sequences that are not clearly distinguishable by conventional shotgun MS. This characterization could be used either as a reduced sampling space for potential CD4+ Th cell epitopes vs. the whole Ascaris spp. predicted proteome, or as an internal control for further experiments in which these ~250 proteins are expected to be the immunogenic determinants of host responses to Ascaris spp. We found large differences in the abundance of proteins either within a single extract or between ESM and ESF (fold-differences of up to 10³ for emPAI and 10⁷, respectively). The identification of CD4+ Th cell epitopes from the ES products, with 1.5 × 10⁵ potential 15-mers, represents a challenge. In silico, one could make use of binding prediction tools such as NetMHCIIPan to reduce the number of candidates, since only strong- and weak-affinity binders are expected to be immunogenic. For a set of seven common MHCII allotypes, NetMHCIIPan defines between 1.2 × 10⁴ and 3.9 × 10⁴ peptides for the ES (Supplementary Fig. 2a). More sophisticated approaches such as the IEDB CD4 T cell immunogenicity prediction tool 27 (IEDBcd4) rely on a combination of binding affinity prediction and a neural network trained on experimentally validated epitopes from the IEDB. Thus, candidates ranked in the first 20th percentile would bear around half of the immunogenic candidates of an antigen. In this case, the number of CD4+ Th cell epitope candidates within the ES lies in the range of 800-1500 for the same MHCII allotypes (Supplementary Fig. 2b). However, these methods ignore relative and total protein abundances that could be of relevance when assessing the immunogenicity of ESF and ESM. Experimentally, the most popular approaches rely on the identification of pathogen-derived MHCII-bound peptides by MS. Either cell cultures or animal models infected with the pathogen, or provided with the corresponding antigens, are the sources for isolation of MHCII peptides. The limited availability of adequate infection models and the lack of biological replicating capacity of the ES pose a major challenge for any of such approaches.
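To make the scale of this in silico filtering step concrete, the sketch below enumerates overlapping 15-mers and keeps those under a percentile-rank cutoff. The predicted_percentile_rank() function is a hypothetical stand-in for the output of tools such as NetMHCIIPan or IEDBcd4, not a real API call, and the toy protein sequence is made up; the cutoff of 20 simply mirrors the percentile discussed above.

```python
# Hypothetical illustration of in silico candidate reduction: slice each ES
# protein into overlapping 15-mers and keep peptides whose predicted percentile
# rank falls under a cutoff.  predicted_percentile_rank() is a placeholder for
# a real predictor's parsed output, not an actual API.
import random

def fifteen_mers(sequence, length=15):
    return [sequence[i:i + length] for i in range(len(sequence) - length + 1)]

def predicted_percentile_rank(peptide, allele):
    """Placeholder: returns a pseudo-random rank in [0, 100)."""
    random.seed(hash((peptide, allele)) & 0xFFFF)
    return random.uniform(0, 100)

def candidate_epitopes(proteins, allele, cutoff=20.0):
    """proteins: dict {protein_id: sequence}; returns (protein_id, peptide, rank)."""
    hits = []
    for pid, seq in proteins.items():
        for pep in fifteen_mers(seq):
            rank = predicted_percentile_rank(pep, allele)
            if rank < cutoff:
                hits.append((pid, pep, round(rank, 1)))
    return sorted(hits, key=lambda h: h[2])

toy_proteins = {"ES_protein_1": "MKVLAAGIVTLSAAQAHEEDKVTTVKAPAQ"}   # made-up sequence
print(candidate_epitopes(toy_proteins, "DRB1*07:01")[:5])
```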
We hypothesized that an in vitro experimental approach recapitulating key events of antigen presentation, such as proteolysis of antigens by cathepsins and catalyzed peptide exchange by HLA-DM, should contribute to defining potential CD4+ Th epitopes. We reasoned that this minimalist experimental approach described by Sadegh-Nasseri's group 19 should facilitate the identification of potential epitopes when using complex antigenic mixtures. Indeed, using this system, the lower background of self-peptides will benefit the MS identification of pathogen-derived antigenic peptides when compared to mass spectrometry analysis of MHCII-associated peptidomes from cell culture or in vivo samples. We used for our experiments the ES antigens (ESM or ESF), two common MHCII allotypes (DRB1*07:01 or DRB1*15:01, Supplementary Fig. 2c) preloaded with the placeholder class II invariant chain peptide (CLIP), HLA-DM, which functions as peptide editor 16 (details in Supplementary Fig. 2d), and the commercial proteases previously described 19 (Fig. 2a). Recombinant MHCII proteins are resistant to the proteases (Fig. 2a) and can be further pulled down from the mixtures to elute and determine the bound peptides by LC-ESI-MS. We used MaxQuant for peptide identification, considering a customized database that includes all the potential entries derived from the Ascaris genome and the molecules of the in vitro reconstitution system (MHCII, HLA-DM and proteases) (Fig. 2b). As expected, the majority (>99.0%) of the peptides belong to protein sources identified in the ES. The information on MS1 intensity for each identified peptide was used by our recently described epitope analysis tool, PLAtEAU 28, to define consensus peptides, including their relative abundance, for each series of peptides that contain a common core but vary in the length of their N-terminal and C-terminal extensions (Fig. 2b). This experimental workflow dramatically reduces the number of potential CD4+ Th epitopes to fewer than 10³ peptides for all conditions, which, once analyzed with PLAtEAU, resulted in around 150 candidate epitopes for each condition (details given in Fig. 2c and Supplementary Fig. 2e). The full list of identified peptides consists of a total of 335 potential T cell epitopes, which were annotated based on the abundance of the corresponding protein source, the predicted affinity for each allotype, and whether the binding core is found in any peptide defined by the IEDBcd4 (Supplementary Table 2). There is a considerable overlap between the consensus peptides selected by each allotype for the ESF and ESM, and around one third are found in the lists of IEDBcd4-predicted immunogenic peptides (Fig. 2c and Supplementary Fig. 2f). More interestingly, certain peptides are found exclusively enriched for either ESF or ESM. The presence of predicted weak- and high-affinity binders increases from 10 and 1% in the pool of all potential peptides of the ES to 20 and 7% in the experimentally determined peptides, respectively. The presence of peptides with no predicted affinity for the MHCII allotypes used may represent mainly intermediate steps of the antigen processing reactions that are stable enough to survive the immunoprecipitation process and that would be exchanged under the more dynamic conditions of protein turnover in the living cell. Thus, the protein concentrations available under the test-tube conditions assayed may represent a limiting factor.
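The consensus-peptide idea can be sketched as follows. This is not the published PLAtEAU algorithm but a minimal, assumed illustration of grouping nested peptides that share a core within one protein and expressing their pooled MS1 intensity relative to the total ion current; the example peptides and intensities are invented.

```python
# Minimal illustration (not the actual PLAtEAU implementation): group peptides
# from the same protein that overlap in position, take the shared region as the
# consensus core, and report pooled MS1 intensity relative to total ion current.
def group_nested_peptides(peptides):
    """peptides: list of (protein, start, end, ms1_intensity), 0-based [start, end)."""
    total_ion_current = sum(p[3] for p in peptides) or 1.0
    groups = []
    for prot, start, end, inten in sorted(peptides):
        for g in groups:
            if g["protein"] == prot and start < g["end"] and end > g["start"]:
                # Overlapping peptide: shrink the consensus core to the shared region.
                g["core_start"] = max(g["core_start"], start)
                g["core_end"] = min(g["core_end"], end)
                g["start"], g["end"] = min(g["start"], start), max(g["end"], end)
                g["intensity"] += inten
                break
        else:
            groups.append({"protein": prot, "start": start, "end": end,
                           "core_start": start, "core_end": end, "intensity": inten})
    for g in groups:
        g["relative_abundance"] = g["intensity"] / total_ion_current
    return groups

example = [("Ov17", 127, 146, 3.0e7), ("Ov17", 129, 144, 1.5e7), ("PABA1", 10, 25, 4.0e6)]
for g in group_nested_peptides(example):
    print(g["protein"], g["core_start"], g["core_end"], round(g["relative_abundance"], 3))
```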
Hierarchical clustering analysis of the annotated potential epitopes based on their abundance reveals that both MHCII molecules selected mostly epitopes from proteins with intermediate and high emPAI values, and in particular from those enriched (e.g., ESF:ESM ratio > 2) in the respective antigen source (ESM or ESF) (volcano plots in Fig. 2d and Supplementary Fig. 2g). We selected a limited set of peptides that would allow us to test the performance of the reconstituted in vitro system on its own and in comparison to in silico prediction tools for defining immunogenic candidates (Fig. 2e). We initially selected a limited set of six candidates, including the Ov17 (F1LAR2 127-146) consensus peptide defined exclusively under DRB1*07:01 + ESF conditions (predicted to be immunogenic by IEDBcd4 but with weak affinity for the restricting allele). This peptide represents an ideal candidate to prove the selectivity and performance of our experimental approach. Fig. 2 The use of a reconstituted in vitro antigen processing system facilitates the detection of CD4+ Th cell epitopes. a ES products are incubated in vitro with recombinant proteins under adequate buffer conditions. The proteolytic activities used degrade most of the components of the reactions except for the MHCII proteins, which are subsequently pulled down using a conformation-specific antibody (L243). b MaxQuant is used for their identification, and PLAtEAU defines series of nested peptides and retrieves the consensus peptides and corresponding MS1 intensities from the MaxQuant output. For each peptide a relative abundance value is retrieved based on the MS1 intensity and the total ion current of the run. This approach yields a list of candidate epitopes with relative abundance values and predicted affinities. c Summary of the performance of the experimental determination of candidate antigens for each condition tested. The overlap between the peptide sets (based on predicted binding cores to facilitate comparisons) for each allotype and ES antigen, and the predicted IEDB immunogenicity score, is shown as Venn diagrams (sized according to numbers; the IEDB set consists of 3678 entries and is cut in this figure but shown in full in Supplementary Fig. 2f). d Mapping of the identified potential epitopes to their corresponding protein sources using the intensity color code shown in the legend. e Summary of peptides used to evaluate the performance of the reconstituted in vitro system. The peptide sequence is shown in the first column, with the binding core predicted for DRB1*07:01 underlined. The corresponding protein source, with the amino acid positions covered by the peptide, is indicated in the second column. Abbreviated UniProt names are provided. The last columns include the relative and total abundance of each protein source, the predicted binding affinity for DRB1*07:01, and whether any peptide with the same binding core (underlined) is predicted to be immunogenic by the IEDBcd4 prediction tool. f Representative dot plot example for an ESF antigen-specific T cell line generated from a healthy DRB1*07-typed volunteer and re-stimulated with either whole ES antigen (40 µg/mL), no antigen (w/o), or a pool of synthetic peptides (25 µg/mL for each peptide) selected from the in vitro reconstituted HLA-DRB1*07:01 experimental data set (f). g Summary of CD40-L frequencies among CD4+ T cells, indicative of peptide recognition by CD4+ T cells, for whole ES antigen, peptide pool, and single peptide re-stimulations. Peptide sequences are indicated in the table in (f).
Combined are data from the same healthy, DRB1*07-typed volunteer from n = 3 separate experiments with n = 2 separate re-stimulations (1st peptide set) or n = 1 experiment with n = 3 separate re-stimulations (2nd peptide set). CD40-L frequencies per experiment were corrected for the individual background CD40-L expression of the w/o antigen/w/o APC controls (mean with SEM). h Representative dot plots of an Ascaris ESF-specific, DRB1*07 T cell line analyzed for Ascaris ESF peptide-specific tetramer staining. The left side indicates the overall frequency of ESF antigen-specific CD4+ cells after expansion compared to control. The right side shows the corresponding tetramer staining with DRB1*07:01-Tet-CLIP (control), Tet-RtBP and Tet-Ov17 gated on CD4+ T cells after expansion. Italic numbers indicate the calculated Tet+ frequency relative to the proportion of ESF antigen-specific T cells. Experiments in swine and mouse models have shown the potential of the OV17 antigen (F1LAR2/As16) for conferring protection against Ascaris spp. infection [29][30][31], and, furthermore, the restricting allele reaches up to 9% of the global population and even higher frequencies in Ascaris spp. endemic areas (up to 10.5% in Africa and 15% in Asia) 32. Other peptides included in this list comprise candidates excluded by either the in silico or the experimental approaches. We queried whether the TcR pool present in healthy individuals would respond to these candidates. We derived ESF T cell lines from a healthy, DRB1*07-typed volunteer and assessed CD40-L expression after restimulation with either whole antigen, the selected pool of six peptides, or single-peptide-loaded APCs (Fig. 2f). Strikingly, we found that the OV17 and, to a lower extent, PABA1_1 peptides, which we selected as potential CD4+ Th cell epitopes experimentally and which were also considered potentially immunogenic by IEDBcd4, yield a T cell response above the background (dashed line) (Fig. 2g). We further verified the specificity of the experimental approach by testing this same set of peptides using ESF-expanded cell lines from a DRB1*03:01; DRB1*15:01 volunteer (Supplementary Fig. 2h). We additionally tested a second set of peptides including candidates found by our experimental approach and neglected by either one or both in silico tools (Fig. 2e). Interestingly, we confirm the immunogenicity of four candidates excluded by the IEDBcd4 and with weak affinity prediction for DRB1*07:01 (Fig. 2e, g). Note that again in these experiments we re-confirm the Ov17 peptide as a prime candidate of immunogenicity in the DRB1*07:01 background. In summary, the proposed experimental approach outperforms the use of either prediction tool on its own. Considering the co-dominant expression of other HLA genes in the donor-derived cell lines, the reactivity observed could arise from the presentation of the corresponding peptides by any MHCII allotype present in the APCs. We confirmed the DRB1*07:01 restriction of OV17 peptide presentation by tetramer staining of ESF CD4+ T cell lines. A significant pool of CD4+ T cells responding to this peptide is detected when it is displayed by DRB1*07:01 tetramers compared to control tetramers (DR7 CLIP or DR7 RtBP tetramers; Fig. 2h and Supplementary Fig. 1e). Of note, the overall low frequency of DRB1*07:01-Tet-Ov17+ cells can be explained by the low frequency of whole antigen-reactive T cells for the cell line applied in that assay (only 6.9% CD40-L+, Fig.
2g), but still reflect that 2.8% of all ESF reactive T cells bind DRB1*07:01-Tet Ov17 . Together, we demonstrate that a reduced set of in vitro and ex vivo experiments is extremely useful to define human CD4 + T h cell epitopes from complex antigenic mixtures bypassing the need of animal models 33 or immunization in humans 34 . The use of controlled redox potential and/or the gamma-interferon-inducible lysosomal thiol reductase 35 should further contribute to improving the presented approach by facilitating access to disulfide-bonded epitopes. However, presentation of the selected candidates already represents an excellent platform for testing the importance of specific antigen processing factors such as HLA-DM 36 or its competitive inhibitor HLA-DO 37 . Conditions can be tuned to include distinct MHC allotypes or combinations thereof, and the nature of the antigen can reach from single proteins to complex mixtures derived from secretomes, complete pathogens or cellular lysates. From a biological point of view we characterize the OV17 (F1LAR2 127-146 ) epitope as a human CD4 + T h cell epitope for Ascaris spp. and show its restriction by the DRB1*07 allotype. Immunization with this antigen in animal trials elicits considerable protective immunity to Ascaris spp. including specific antibodies and CD4 + T h cells [29][30][31] . It will thus be of great importance to further characterize immune responses against the OV17/AS16 peptide in either carriers or non-carriers of DRB1*07:01 to confirm the potential of these antigen for vaccination. Another interesting candidate is PABA1 10 , showing a completely different peptide selection pattern for the two alleles used, with a considerably higher number of peptides selected by DRB1*15:01 when compared to DRB1*07:01 (7 vs. 1 respectively). Prospectively, the stage is then set to investigate a pooled sample of a limited number of antigens to profile the T cell immune status of infected individuals. Furthermore, gaining access to HLA-typed material will contribute to define the deterministic nature of B-T cell responses to complex antigens to rationalize the development of vaccine subunits to Ascaris spp. Antigen preparation Excretory-secretory (ES) antigens were prepared from worm culture supernatants of male and female adult Ascaris spp. worms obtained from a local slaughter house. In brief, worms were separated by sex and washed several times in a balanced salt solution (BSS) containing antibiotics and used as culture media for adult worms (127 mM NaCl, 7.5 mM NaHCO 3 , 5 mM KCl, 1 mM CaCl 2 , 1 mM MgCl 2 , 200 U/mL penicillin, 200 μg/mL streptomycin, 50 μg/mL gentamicin, 2.5 μg/mL amphotericin B) and kept at 37°C with 5% CO 2 . Media was replaced on a daily basis, sterile filtered through a 0.22 μM vacuum-driven filter system and collected for ES antigen preparations starting 48 h after beginning of worm culture and finally stored at −20°C until further use. Worm culture supernatants collected over 1 week were further concentrated using centrifugal protein concentrators with a 5 kDa MWCO (Vivaspin, Sartorius, Göttingen, Germany) to obtain the final, concentrated ESF antigen (from female worms) and ESM (from male worms). Mass spectrometry Peptide mixtures were analyzed by a reversed-phase capillary system (Ultimate 3000 nanoLC) connected to an Orbitrap Velos (Thermo Fischer) using conditions and settings described in the ref. 28 . 
In brief, peptides were reconstituted in 0.1% (v/v) TFA, 5% (v/v) acetonitrile, and 6.5 µL was loaded into a reversed-phase capillary nano liquid chromatography system (Ultimate 3000, Thermo Scientific, USA) connected to an Orbitrap Velos mass spectrometer (Thermo Scientific). LC separation was performed on a capillary column (Acclaim PepMap100 C18, 2 μm, 100 Å, 75 μm i.d. × 25 cm, Thermo Scientific) at an eluent flow rate of 300 nL/min. Mobile phase A contained 0.1% formic acid in water, and mobile phase B contained 0.1% formic acid in acetonitrile. The column was pre-equilibrated with 3% mobile phase B, followed by a linear increase up to 50% mobile phase B in 50 min. Mass spectra were acquired in a data-dependent mode utilizing a single MS survey scan (m/z 350–1500) with a resolution of 60,000 in the Orbitrap, and MS/MS scans of the 20 most intense precursor ions in the linear trap quadrupole. For quantitative proteomics, we used 16O/18O labeling. Peptides eluted from the reconstituted in vitro antigen processing system were measured as described, and MaxQuant software (version 1.5.2.8) was used for peptide identification. Customized databases featuring reviewed and non-redundant UniProt Ascaris spp. proteins (accessed March 2017) were used for the in vitro reconstituted experiments, to which we added all other recombinant proteins used in the assay, namely human cathepsins, HLA-DR2 and HLA-DR7, as well as HLA-DM. No enzyme specificity was used for the search, and a tolerance of 10 ppm was allowed for the main ion search and 50 ppm for the MS/MS identification. The "match between runs" feature was enabled. The FDR was set at 0.01 (1%). Reverse IDs and known contaminants such as keratins were filtered out before further data analysis. NetMHCIIPan 3.2 was used to predict peptide binding to the indicated HLA-DR allotypes. The protein database generated upon the MS analysis (only quantified proteins) was loaded into the webserver using 2% and 10% cutoffs for strong and weak binders, respectively (unless otherwise indicated). Constructs, protein expression, and purification DNA constructs encoding the HLA molecules used in this study were generated according to the sequences available in the IMGT/HLA database (http://www.ebi.ac.uk/ipd/imgt/hla/). The cDNAs encoding the different HLA subunits were cloned into pFastBacDual. Recombinant proteins were expressed in Sf9 cells infected at an MOI of 5 for 72 h. Supernatants were concentrated and dialyzed (in PBS) using a Vivaflow200 tangential filter. To purify the target proteins by immunoaffinity chromatography, the concentrated and buffer-exchanged supernatants were applied to either anti-HLA-DR FF-Sepharose or M2 anti-FLAG (Sigma) 16 resin for HLA-DR and HLA-DM, respectively. Depending on the application, HLA-DR molecules were further treated prior to their use with thrombin only (20 U/mg protein; for tetramer preparation) or with thrombin and V8 protease (10 U/mg; in vitro reconstituted system) for 2 h at 37°C. Subsequently, the proteases were inactivated by adding Complete protease inhibitor cocktail (Roche) and the proteins were further gel-filtrated, while HLA-DM proteins were directly subjected to gel filtration. Both types of HLA proteins were gel-filtrated on a Sephadex S200 column. Fractions containing the proteins of interest were pooled and concentrated with Vivaspin 10 kDa MWCO spin filters. Peptide selection and synthesis The complete list of peptides obtained after PLAtEAU analysis for all experiments was loaded into Excel as a single list.
We used the Excel random selection function to generate lists of four peptides from the whole dataset. The resulting sets of four peptides (500 iterations) were screened to define those with two peptides originating from the same antigen (ESF or ESM) and found in one set of experiments (DRB1*07:01 or DRB1*15:01). An additional criterion was the presence, within the set of four peptides, of at least one peptide not found in the corresponding experiment (used as a control). The list of peptides indicated corresponds to the set fulfilling these criteria and having the largest distance between pIm values. Synthetic peptides were subsequently purchased from Peptides and Elephants (Berlin, Germany). Purity as stated by the vendor was more than 95%. All peptides were protected at their N-termini and C-termini by the addition of an Ac and an NH2 group, respectively. In vitro reconstituted antigen processing system The cell-free reconstituted in vitro system described by Sadegh-Nasseri et al. 19 was modified according to the specific needs of the experiments. HLA molecules (1 μM) together with the candidate antigens (200 μg/mL) and HLA-DM (0.25 μM) were incubated for 2 h at 37°C in 50 mM citrate-phosphate buffer, pH 5.2, in the presence of 150 mM NaCl. Cathepsins were added to the reaction mixtures after incubation with L-cysteine (6 mM) and EDTA (5 mM). Cathepsin B (Enzo), H (Enzo), L (Enzo), and S (Sigma) were used for our in vitro experiments at molar ratios (cathepsin:substrate) ranging from 1:250 to 1:500. The final reaction mixture was incubated at 37°C for 2–5 h. Afterwards, the pH was adjusted to 7.5 and iodoacetamide was added (25 μM). Immunoprecipitation (IP) of the pMHCII complexes was performed using L243 covalently linked to Fast Flow Sepharose. Peptides were eluted from the purified MHCII by adding 0.1% TFA to the samples. Peptides were separated from the MHCII molecules by using Vivaspin filters (10 kDa MWCO) and a subsequent reversed-phase C18 enrichment. The filtrates were further lyophilized and resuspended for mass spectrometry analyses in a mixture of H2O:AcN:TFA (0.94:0.05:0.01). For assessing precursor frequencies of Ascaris ES antigen F-specific CD4+ T cells, PBMC were stimulated as described above and Brefeldin A (3 µg/mL, Thermo Fisher Scientific) was added after the first 2 h of stimulation. Generation of Ascaris suum ES antigen-specific T cell lines and restimulation The generation of Ag-specific T cell lines was performed as described by Bacher et al. 25. In brief, the stimulated and isolated CD154+ Ascaris ES antigen F-specific T cells were cultured 1:100 with autologous, mitomycin C (Sigma-Aldrich)-treated feeder cells in X-VIVO™ 15 (Lonza, Basel, Switzerland) supplemented with 5% human AB serum (PAN-Biotech, Aidenbach, Germany), 100 U/mL penicillin, 100 μg/mL streptomycin (PAN-Biotech, Aidenbach, Germany), and 50 ng/mL IL-2 (PeproTech, NJ, US). Cells were expanded for 14 days and the culture medium was replenished with IL-2-containing medium when needed. For restimulation after 14 days, autologous PBMC were CD3-depleted using a BD FACSAria™ III cell sorter or CD3-bead MACS and co-cultured 1:1 with the expanded T cell lines in the presence of the indicated antigens. For assessing the frequency of total Ascaris ES antigen F- and/or M-reactive T cells after expansion, co-cultured cells were restimulated with 40 µg/mL ES antigen F for 6 h.
For addressing peptide specificities, restimulation was performed with single, synthetic peptides (25 µg/mL) or a pool of all peptides (25 µg/mL of each peptide). Brefeldin A (3 µg/mL) was added after the first 2 h of stimulation. Antibody staining and flow cytometric analysis Cells were acquired using BD FACSCanto II (with Diva software, Heidelberg, Germany) and post-acquisition data analysis was carried out using FlowJo software (TreeStar, Ashland, OR, US). Tetramer preparation and staining Purified recombinantly expressed HLA molecules were treated with Thrombin and subsequently subjected to size exclusion chromatography (Sephadex S200). The placeholder peptide CLIP was exchanged by the indicated peptides incubating HLA molecules with 50 molar excess of the desired peptide for 72 h in the presence of molecular loading enhancers. In brief, the FR dipeptide (150 μM) and AdaEtOH (100 μM) were used to promote CLIP exchange for the corresponding peptides. After gel filtration the peptide loading of HLA-II complexes was verified by MS. The generated peptide HLA class II complexes were biotinylated in a BirA sequence (DRB chain) using a BirA ligase (Avidity). The Biot-peptide-HLA class II complexes were then used to generate tetramers using Streptavidin-PE. Tetrameric complexes were finally separated by gel filtration and stored in PBS + NaAz (0.02%). Statistical analysis GraphPad Prism 7.0 software (GraphPad Software San Diego CA, USA) was used in general for statistical analysis. Variance was calculated with the two-way ANOVA method. The null hypothesis was rejected when the p value was lower than 0.05. Perseus software 39 was mainly used to analyze the MS data. Epitopes identified by the PLAtEAU algorithm (% Intensity from the TIC) were loaded as matrixes into Perseus. Data was log2 transformed and missing values were imputed as 0. The resulting matrices were plotted as heat-maps. Columns were hierarchical clustered with "average" as agglomeration method and "Pearson correlation" as distance matrix. Rows were ordered by hierarchical clustering using "average" as agglomeration method and "euclidean" as distance matrix. Epitopes eluted from each experimental condition were grouped and used to define the mean intensity value for each peptide or epitope. p values were calculated based on the observed intensities using t-test, in this case an FDR of 0.01 and a S0 = of 2 were used. Reporting summary Further information on experimental design is available in the Nature Research Reporting Summary linked to this article. DATA AVAILABILITY The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium (http://proteomecentral.proteomexchange.org) via the PRIDE partner repository 40 with the dataset identifier PXD015924 and PXD015012. Constructs for the recombinant expression of MHCIIs are available upon request. The PLAtEAU algorithm can be retrieved from https://github.com/e-morrison/plateau, or used as a webtool at: https://plateau.bcp.fu-berlin.de/.
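The Perseus steps described in the statistical analysis (log2 transformation, imputation of missing values as 0, average-linkage clustering with a Pearson-correlation distance on columns and a Euclidean distance on rows) can be approximated outside Perseus as follows; the input layout (epitopes as rows, experimental conditions as columns) and the example values are assumptions.

```python
# Approximate re-implementation of the clustering steps described above
# (epitope % TIC matrix -> log2 -> hierarchical clustering); illustrative only.
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, leaves_list
from scipy.spatial.distance import pdist

# Hypothetical epitope intensities (% of TIC) per condition
data = pd.DataFrame(
    {"DR7_ESF": [1.2, 0.0, 3.4], "DR7_ESM": [0.8, 2.1, 0.0],
     "DR15_ESF": [0.0, 0.5, 2.9], "DR15_ESM": [0.1, 1.8, 0.2]},
    index=["epitope_1", "epitope_2", "epitope_3"],
)

log2 = np.log2(data.replace(0, np.nan))   # log2 transform
log2 = log2.fillna(0.0)                   # impute missing values as 0

# Columns: average linkage, Pearson-correlation distance
col_link = linkage(pdist(log2.T.values, metric="correlation"), method="average")
# Rows: average linkage, Euclidean distance
row_link = linkage(pdist(log2.values, metric="euclidean"), method="average")

ordered = log2.iloc[leaves_list(row_link), leaves_list(col_link)]
print(ordered)
```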
7,305.2
2019-07-14T00:00:00.000
[ "Biology" ]
On the convergence of double Elzaki transform In this research, we have studied the convergence properties of the double Elzaki transformation and the results have been presented in the form of theorems on convergence, absolute convergence and uniform convergence of the double Elzaki transformation. The double Elzaki transform of a double integral has also been discussed for integral evaluation. Finally, we have solved a Volterra integro-partial differential equation by using the double Elzaki transformation. Introduction Integral transforms are valuable for the simplification that they bring about, most often in dealing with differential equations subject to particular boundary conditions. Proper choice of the class of transformation usually makes it possible to convert not only the derivatives in an intractable differential equation but also the boundary values into terms of an algebraic equation that can be easily solved. The solution obtained is, of course, the transform of the solution of the original differential equation, and it is necessary to invert this transform to complete the operation. An integral transform is a mathematical operator that produces a new function g(α) by integrating the product of an existing function f(t) and a so-called kernel function K(α, t) between suitable limits. The process, which is called transformation, is symbolized by the equation g(α) = ∫ K(α, t) f(t) dt. Several transforms are commonly named for the mathematicians who introduced them. In the Laplace transform, the kernel is e^(−st) and the limits of integration are zero and plus infinity; in the Fourier transform, the kernel is (2π)^(−1/2) e^(−ist) and the limits are minus and plus infinity. In Schiff (2013), the Laplace transform of f is defined as F(s) = ∫₀^∞ e^(−st) f(t) dt, whenever the limit exists (as a finite number). When it does, the above integral is said to converge. If the limit does not exist, the integral is said to diverge and there is no Laplace transform defined for f. The notation L(f) will also be used to denote the Laplace transform of f, and the integral is the ordinary Riemann (improper) integral. The parameter s belongs to some domain on the real line or in the complex plane. In Belgacem et al. (2003) and Watugala (1993), a new integral transform, called the Sumudu transform, was defined for functions of exponential order. We consider functions in the set A, defined by A = { f(t) : there exist M, τ₁, τ₂ > 0 such that |f(t)| < M e^(|t|/τⱼ) if t ∈ (−1)ʲ × [0, ∞) }. For a given function in the set A, the constant M must be finite, while τ₁ and τ₂ need not simultaneously exist, and each may be infinite. Instead of being used as a power of the exponential as in the case of the Laplace transform, the variable u in the Sumudu transform is used to factor the variable t in the argument of the function f. Specifically, for f(t) in A, the Sumudu transform is defined by G(u) = S[f(t)](u) = ∫₀^∞ f(ut) e^(−t) dt, u ∈ (−τ₁, τ₂). Belgacem (2007) presented the fundamental properties, an analytical investigation of the Sumudu transform and applications to integral equations. In Belgacem (2007) and Belgacem and Karaballi (2006), all existing Sumudu shifting theorems and recurrence results were generalized; applications to convolution-type integral equations, with a focus on production problems, were also presented, and it was shown that the inverse Sumudu transform of a singular function satisfies the Tauberian theorem, where the Dirac delta function fails. In Belgacem (2006, 2009), the Laplace transform definition is implemented without resorting to Adomian decomposition or homotopy perturbation methods. He also applied the natural transform to Maxwell's equations and obtained the transient electric and magnetic field solutions.
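Because the convergence results that follow refer throughout to the Elzaki transform and its two-variable extension, it is convenient to record their commonly used forms here. These are standard statements, written in notation analogous to the Sumudu definition above, and are offered for reference rather than as quotations of the cited papers.

```latex
% Standard single- and double-variable Elzaki transforms (reference forms).
\begin{align}
  E\{f(t)\}(v) = T(v)
    &= v\int_{0}^{\infty} f(t)\,e^{-t/v}\,dt,
       \qquad v \in (-k_{1}, k_{2}),\\[4pt]
  E_{2}\{f(x,t)\}(p,s) = T(p,s)
    &= p\,s\int_{0}^{\infty}\!\!\int_{0}^{\infty}
       f(x,t)\,e^{-\left(\tfrac{x}{p}+\tfrac{t}{s}\right)}\,dx\,dt .
\end{align}
```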
In Belgacem and Silambarasan (2017b), The Samudu transform is applied to arbitrary powers Dumont bimodular Jacobi elliptic functions for arbitrary powers. Belgacem (2010) applied the Samudu transform applications to Bessel's functions and equations. In Belgacem and Silambarasan (2017a), the Samudu transform integral equation is solved by continuous integration by parts, to obtain its definition for trigonometric functions. Belgacem and Al-Shemas (2014) proposed ideas towards the mathematical investigations of the environmental fitness effects on populations dispersal and persistence. Goswami and Belgacem (2012) giave a sufficient condition to guarantee the solution of the constant coefficient fractional differential equations by Samudu transform. The Elzaki Transform is a new integral transform introduced by Elzaki (2011a). Elzaki Transform is modified form of Laplace Transform. Elzaki transform is well applied to initial value problems with variable coefficients and solving integral equations of convolution type. Elzaki Transform is also used to find solution of system of partial differential equations (Elzaki, 2011b). Typically, Fourier, Laplace and Sumudu transforms are the convenient mathematical tools for solving differential equations. Elzaki Transformation is defined for the function of exponential order. Consider a function in the set S defined as For a given function f(t) in the set S, the constant M must be finite, number k 1 , k 2 may be finite or infinite. The Elzaki Transform denoted by the operator E is defined as The variable v in this transform is used to factorize the variable t in the argument of the function f. The purpose of this study is to show the applicability of this interesting new transform and its efficiency in solving some convergence theorems. Convergence theorem of double Elzaki integral In this section, we prove the convergence theorem of double Elzaki integral. Theorem 2.1: Let ( , ) be a function of two variables continuous in the positive quard of the xt-plane. If the integral converges at = and = then integral converges for < , < . for the proof we will use the following lemmas. converges at s= then the integral converges for < Proof: Consider the set Now let 1 → 0. Both terms on the right which depend on 1 approach a limit and The given theorem is proved if the integral on the right converges. Now by using the "Limit test" for convergence (Widder, 2005). For this we have converges for ≤ and integral converges at = then the integral (6) converges for < Proof: Therefore ( Now let R 2 → ∞. If s < s o , the first term on the right approaches zero. The given theorem is proved if the integral on the right converges. Now by using the Limit test for convergence (Widder, 2005). We consider h (x, s)dx converges for p < p o . The proof of the Theorem 2.1 is as follows where converges for < . Therefore, the integral in RHS of (10) converges for < , < . Hence the integral Therefore the integral (1) converges absolutely for (p ≤ p o s ≤ s o ). Uniform convergence In this section we prove the uniform convergence of double Elzaki Transform. Double Elzaki transform of double integral We now find the double Elzaki transform of double integral. Application of double Elzaki transform in Volterra integro-partial differential equation We use the double Elzaki transform to solve the problem which is already solved in Moghadam and Saeedi (2010) using Differential transform method. Example 5.1: Consider the following Volterra Integro Partial Differential Equation. 
Taking the single Elzaki transforms of the governing equation and then applying the double inverse Elzaki transform, we obtain the solution of (20). Conclusion We have proved the convergence, absolute convergence and uniform convergence of the double Elzaki transform. Besides these, we obtained the double Elzaki transform of a double integral and used it to solve a Volterra integro-partial differential equation.
1,681.4
2018-06-01T00:00:00.000
[ "Mathematics" ]
3D Object Recognition Using Fast Overlapped Block Processing Technique Three-dimensional (3D) image and medical image processing, which are considered big data analysis, have attracted significant attention during the last few years. To this end, efficient 3D object recognition techniques could be beneficial to such image and medical image processing. However, to date, most of the proposed methods for 3D object recognition experience major challenges in terms of high computational complexity. This is attributed to the fact that the computational complexity and execution time are increased when the dimensions of the object are increased, which is the case in 3D object recognition. Therefore, finding an efficient method for obtaining high recognition accuracy with low computational complexity is essential. To this end, this paper presents an efficient method for 3D object recognition with low computational complexity. Specifically, the proposed method uses a fast overlapped technique, which deals with higher-order polynomials and high-dimensional objects. The fast overlapped block-processing algorithm reduces the computational complexity of feature extraction. This paper also exploits Charlier polynomials and their moments along with support vector machine (SVM). The evaluation of the presented method is carried out using a well-known dataset, the McGill benchmark dataset. Besides, comparisons are performed with existing 3D object recognition methods. The results show that the proposed 3D object recognition approach achieves high recognition rates under different noisy environments. Furthermore, the results show that the presented method has the potential to mitigate noise distortion and outperforms existing methods in terms of computation time under noise-free and different noisy environments. Introduction Significant effort has been dedicated to developing efficient and reliable remote healthcare systems with the Internet of Things (IoT) applications [1]. This development can be achieved through transmitting efficient and secure medical images and videos of the patients and processing them in a fast and reliable way. To this end, advanced remote monitoring schemes of the patients become essential. In particular, efficient three-dimensional (3D) object recognition techniques could be beneficial to process the images and videos of medical systems. This is due to the ability of object recognition to enable feature extraction, which is essential as it provides unique characteristics that can identify objects [2]. Besides, object recognition is also considered as having the most significant importance in the industrial environment, as it represents each object individually and can distinguish the object [3]. These cues are used to extract discriminative features for accurate recognition [4]. Therefore, there is an increasing interest in object recognition, especially in the fields of machine vision, pattern recognition, and machine learning applications [5][6][7]. Various domains, including facial identification [8], gender description [9], and gesture analysis, among others, use object recognition. Object recognition is also used in object identification, medical diagnosis, security applications, multimedia communication, and computer interface applications [4,10]. 
Related Works Object recognition and classification can be considered essential techniques, which are beneficial in various applications such as healthcare systems, pattern recognition, molecular biology, and computer vision [11][12][13][14][15]. To this end, significant research works have been developed for efficient 3D object recognition. Besides, feature extraction for 3D objects is extremely useful for classification [16]. Extensive researches have been carried out to develop 3D object classification methods. Some of these works are based on the principles of moment invariants and 3D moments. To this end, a method of 3D translation, rotation, and scale invariants (TRSI) was developed in [17] from geometric moments and an alternative approach was presented later in [18]. A tensor approach to derive the rotation invariants from the geometric moments was proposed in [19]. Besides, an automatic algorithm was proposed in [20,21] to generate 3D rotation invariants from geometric moments. Recently, a 3D Hahn moments combined with convolutional neural networks (CNN) was proposed in [22] to enhance the 3D object classification. Specifically, the work in [22] proposed a hybrid approach based on combining the 3D discrete Hahn moments and CNN to improve 3D object classification. A multi-layer artificial neural network (ANN) perception approach was proposed in [23] for the classification and recognition of 3D images. In [24], a deep learning approach based on neural network and Racah-based moments was proposed for 3D shape classification. Additionally, in [16], an approach based on the combination of 3D discrete orthogonal moments and deep neural network (DNN) algorithms was proposed to improve the classification accuracy of the 3D object features. In [25], a 3D discrete Krawtchouk moments method was proposed for content-based search and retrieval applications. In [26], a 3D image analysis was considered using Krawtchouk and Tchebichef polynomials, where orthogonal moments were exploited to characterize various types of 2D and 3D images. To this end, orthogonal moments are used in many applications such as image analysis [27,28], face recognition [29], pattern recognition [30,31], steganography [32], image reconstruction [33,34], and medical image analysis [35,36]. The recognition process depends extremely on the feature extraction process, which is used to distinguish between different objects. To this end, the process of object localization and object normalization is considered essential for feature extraction technique [37]. As such, essential issues in object recognition and computer vision applications are the extraction of significant features from objects [38]. Object recognition to date is still a challenging problem that affects pattern recognition. This is because the accuracy of object recognition can be affected by class variations [4,39,40]. In particular, different methods are utilized to extract the features from the images. These methods can be classified as deep-learningbased methods, orthogonal-moment-based methods, and texture-based methods [41][42][43][44][45]. While the recognition accuracy of deep-learning-based methods can be very high, these methods run into a substantial amount of computational complexity, as explained in [46][47][48]. In the orthogonal moment approaches, the features of the object are calculated efficiently through the use of Orthogonal Polynomials (OPs) techniques [49]. 
Due to their effectiveness, orthogonal moments (OMs) and OPs have been widely exploited in recent years for pattern recognition, form descriptors, and image analysis [50,51]. The OMs-based method gives a powerful capability for evaluating the image components because the image components can be efficiently represented in the transform domain [49]. In many object recognition applications, OMs can be utilized to extract features. It is possible to consider the OMs as a scalar approach that is utilized to define and characterize a function. Such OMs can be used to achieve an effective extraction of the features. The OPs function also contains the coordinates of an image in addition to OMs [52,53]. According to work performed in [44], OMs can be exploited in feature extraction from images with various geometric invariants, including translation, scaling, and rotation. In general, various types of moments can be used for image processing. For instance, due to their simplicity, geometric moments are favored above other types of moments [54]. To depict an image with the least amount of redundancy possible, a Zernike and pseudo-Zernike moments approach was developed in [55]. In [55], a moments-based approach was proposed by exploiting the fractional quaternion for colored image detection [56]. This is because the fractional quaternion, which is considered an opposed approach to integer-order polynomials, can represent functions, according to [44]. Furthermore, the diagnosis of plant diseases has been accomplished using fractional-order moments [57]. In [58], the image analysis used Zernike and Legendre polynomials, which act as the kernel functions for Zernike and Legendre moments, respectively. In particular, the Zernike moments approach has the property of invariance in addition to its capability of image data storing and processing with the least amount of redundancy. However, because the Zernike moments approach focused only on the continuous domain, such an approach would require image coordinate adjustments and transformations for discrete domain [59,60]. To address the challenge of computing continuous moments in image analysis, the discrete OMs approach has recently been proposed [61]. To this end, Mukundan presented a series of moments in [62] that uses discrete Tchebichef polynomials to analyze the image. Typically, the extraction approaches are divided into global and local features. The former is also called a holistic-based approach [63], which can capture the essential characteristics of the full human face image. At the same time, it is known as the component-based approach or block-processing-based approach, from specific areas in images [64]. In the global feature-based approach, various imaging setups are used to achieve improved performance for feature extraction [65]. To this end, several feature extraction techniques have been proposed so far to enable a global feature-based approach [66,67]. In block processing or what is known as the local feature-based approach, the image features can be extracted locally by utilizing OMs, which entails processing the image's blocks after it has been divided into several blocks to ease their processing. In this approach, the signals such as images and videos can be divided efficiently into several blocks so that they transfer to another domain to extract the features [68]. The signal characteristics can be stored locally in memory to prepare it for the next step of processing. 
The work in [63] demonstrates that the (local) block-processing-based approach achieves better performance in feature extraction compared with the (global) holistic-based approach. One technique for extracting local features is the local binary patterns method [69][70][71]. In addition, the combination of global-and local-based approaches, which is termed the hybrid features extraction-based approach, aims to achieve the highest object recognition accuracy [72,73]. It is demonstrated that block processing, which represents local feature extraction, can achieve the highest recognition accuracy with the trade-off of higher computation complexity. Specifically, compared with global features, local features are thought to be more reliable and improve recognition accuracy, see, e.g., [74][75][76]. To this end, partitioning the images using image block processing has the potential of extracting the blocks of any image and analyzing them sequentially. From the perspective of computer memory, this operation is not sequential, which is seen as a major flaw in performance and a crucial difference between the memory and the speed of the CPU. While such an operation would result in additional cache misses and replacements, accessing the complete matrix sequentially can aid in maintaining spatial locality [68]. The removal of additional procedures will speed up the extraction of local features. Specifically, extracting local features from the image blocks using discrete transform will decrease the computational complexity, which is called a fast overlapped block-processing method for feature extraction [68]. Although several advanced methods have been proposed for object recognition, the accuracy and running time are to date considered challenging issues that need to be addressed. Therefore, finding a quick and accurate mechanism for 3D object detection is necessary. Additionally, most of the exciting works need to account for the impact of undesirable noise on recognition. Hence, there is a limited understanding of the effect of noisy environments. Therefore, investigating the proposed method in the noise condition is significant to characterize the effectiveness of the feature extraction for object recognition processes. Paper Contributions To overcome the aforementioned challenges, a robust object recognition algorithm that exploits Charlier polynomials and their moments is proposed. The proposed algorithm has a powerful capability for the characterization and feature extraction of the signals of the 3D objects effectively. In addition, to extract the features effectively and in a fast manner, this paper exploits an overlap block-processing technique to provide a construction of auxiliary matrices, which essentially extends the original signal to prevent the time delay in the loops computation. Furthermore, the proposed method is evaluated in the noise condition to characterize the effectiveness of the proposed method in feature extraction for object recognition processes. The major contributions of this paper can be summarized as follows: (1) Proposing an advanced design for robust 3D object recognition, which takes into account the accuracy, computational complexity, and execution time. (2) Exploiting the powerful Charlier polynomials to extract the features of the 3D objects. (3) Developing a fast overlapped block-processing algorithm, which shows more accurate processing for the blocks of the image to perform fast feature extraction with low complexity. 
The proposed overlapped block-processing method is mainly used to decrease the computation time. (4) Finally, implementing the support vector machine (SVM) to classify object recognition features accurately. To this end, a well-known dataset known as the McGill benchmark dataset is used for performance evaluation [77]. The results demonstrate that the proposed method achieves high recognition accuracy with lower computational complexity. Furthermore, the results demonstrate that the proposed method is able to reduce noise distortion and outperforms traditional methods under both clean and noisier environments. These achievements signify the importance of the proposed method for the future implementation of 3D object recognition. Paper Organization The paper is organized as follows. In Section 2, the orthogonal polynomials and their moments are introduced. In Section 3, the methodology of the proposed method for feature extraction and recognition of 3D objects is presented. In Section 4, the performance evaluation of the proposed method and the numerical results are discussed. Finally, the conclusion of the paper is presented. Preliminaries of OPs and Their OMs The mathematical definition of the utilized OPs is explained in this section. Additionally, this section also describes the computation of the OMs for the 3D signals. Charlier Polynomials Computation and Their Moments This subsection discusses the Charlier polynomials and their moments. In addition, the existing three-term recurrence (TTR) relation is described. Several studies have considered the use of Charlier polynomials due to its accuracy and effectiveness [78]. To this end, research on the application of Charlier polynomials has been divided into two main areas: moment computation algorithms and recurrence relation algorithms. For the recurrence relation-based algorithms, the research works exploit the n-direction and x-direction of the matrix. However, generating high-order polynomials is not possible in these recurrence algorithms. This is due to the use of the initial values and the number of recurrence times. The research works make use of either the x-direction or the n-direction of the recurrence algorithm as their calculation algorithms. To the best of our knowledge, no research studies have looked into using Charlier polynomials and their moments for 3D object detection. This paper investigates the effect of using Charlier polynomials for 3D object recognition. This paper also aims to provide an efficient method for achieving a recurrence relation to compute Charlier polynomials for high-order polynomials. In what follows, the Charlier polynomials and their moments computation are presented. Computation of Charlier Polynomials Charlier polynomials C n (y; p) of dth dimension can be calculated as follows: n, where p denotes the parameter of the Charlier polynomials, and 2 F 0 represents the mathematical formulation of the hypergeometric series, which is expressed as [79] where (a) k denotes the ascending factorial, which is termed as the Pochhammer symbol [79]. Following the expressions provided by Equations (1) and (2), Charlier polynomials can be written as It is worth noting that the orthogonality condition should be met with Charlier polynomials. Besides, the weighted function can be applied to the Charlier polynomials so that where D = N − 1, ω C (x; p), which denotes the weighted function and ρ C (d; p) represents the squared norm of Charlier polynomials dx. 
The weighted function and the squared norm of Charlier polynomials are provided in expressions (5) and (6), respectively. It is worth noting that the calculation of the Charlier polynomials' coefficients provided by the expression in Equation (3) may cause numerical instability. Hence, to overcome this issue, a weighted normalized Charlier polynomial is applied. To this end, the nth order of weighted normalized Charlier polynomials can be expressed aŝ Computation of Charlier Moments This subsection discusses the computation of Charlier moments. The Charlier moments, denoted as transform coefficients, are scalar quantities utilized to demonstrate signals without redundancy [49,80]. For a one-dimensional (1D) signal, denoted as f (x), Charlier moments can be computed in the moment domain as where µ n denotes the Charlier moments and Ord represents the maximum number of orders utilized for signal representation. To obtain the signalf (x) from the Charlier domain (moments domain), inverse transform can be utilized as follows: For a two-dimensional (2D) signal f (x, y) of size N × M, the Charlier moments with 2D signal, denoted as µ nm , can be computed as where the parameters Ord 1 and Ord 2 denote the highest order used for the representation of the signal. To reconstruct the 2D signalf (x, y), denoted asf = f , from the Charlier domain, the following inverse transformation is used: To compute the moments for higher dimensional space, in our case, the 3D signal, f (x, y, z), the following formula is used: Charlier Coefficients Computation Using Recurrence Relation Algorithm This section presents the algorithm exploited to compute the coefficients of Charlier polynomials. It is worth noting that the algorithm used in this paper is the recurrence relation, which has been presented in [78]. The computation of initial values of Charlier polynomials' coefficients is essential for obtaining an efficient and reliable recurrence relation algorithm. It should be noted that both three-term recurrence relations algorithms in the x-direction and the n-direction depend on two sets of initial values. To this end,Ĉ 0 (x; p) andĈ 1 (x; p) are the two initial values used in the three-term recurrence relation algorithm in the x-direction. In general, calculating the set of initial values is mathematically intractable. This is attributed to incorrectly computed values. To address this issue, a logarithmic function is used [78]: where logΓ denotes the logarithmic mathematical operation for the gamma function. For the range n > p, n = p + 1, p + 2, . . . , N − 1, the following expression is used: After computing the coefficients for x = 0, they are used to compute the Charlier polynomials' coefficients for x = 1 using the following recurrence relation: To this end, the polynomial space of the Charlier polynomials is divided into two portions: lower triangle and upper triangle [78]. These portions are known as "Part 1" and "Part 2", which are shown in Figure 1. Charlier polynomials' coefficients in the lower triangle matrix ("Part 1") are obtained using three three-term recurrence relations. In addition, Charlier polynomials' coefficients in the upper triangle matrix ("Part 2") are obtained using the symmetry relation provided in the expression given bŷ C n (x; p) =Ĉ x (n; p) n = 0, 1, . . . , N − 1, and x = 0, 1, . . . , n − 1. 
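Before the x-direction recurrence used for "Part 1" is detailed below, a compact numerical sketch of the quantities defined in this section, namely the weighted, normalized polynomials and the forward/inverse moment pair, may be helpful. The sketch uses the standard n-direction three-term recurrence together with the textbook Charlier weight and squared norm, so it illustrates the same polynomial matrix but is not a reproduction of the x-direction algorithm of [78]; the block size and maximum order are placeholder values.

```python
# Sketch: weighted, normalized Charlier polynomials and 1-D Charlier moments.
# Standard n-direction three-term recurrence is used here; the paper's own
# scheme (x-direction recurrence with the initial values of [78]) differs in
# detail but targets the same polynomial matrix.
import numpy as np
from scipy.special import gammaln

def charlier_matrix(N: int, p: float, order: int) -> np.ndarray:
    """Rows n = 0..order-1 of C_hat[n, x] = C_n(x; p) sqrt(w(x; p)/rho(n; p)),
    with w(x; p) = exp(-p) p^x / x! and rho(n; p) = n!/p^n (standard forms)."""
    x = np.arange(N, dtype=float)
    C = np.zeros((order, N))
    C[0] = 1.0
    if order > 1:
        C[1] = (p - x) / p
    for n in range(1, order - 1):               # C_{n+1} from C_n and C_{n-1}
        C[n + 1] = ((n + p - x) * C[n] - n * C[n - 1]) / p
    log_w = -p + x * np.log(p) - gammaln(x + 1)  # log of Poisson-type weight
    n_idx = np.arange(order, dtype=float)
    log_rho = gammaln(n_idx + 1) - n_idx * np.log(p)
    return C * np.exp(0.5 * (log_w[None, :] - log_rho[:, None]))

N, order = 64, 16
C_hat = charlier_matrix(N, p=N / 2, order=order)   # p = block size / 2

f = np.random.rand(N)                              # a 1-D "signal"
mu = C_hat @ f                                     # forward: mu_n = sum_x C_hat[n,x] f(x)
f_rec = C_hat.T @ mu                               # inverse (low-order approximation)

# Orthonormality is exact on the infinite lattice; after truncating x to
# [0, N-1] it holds only approximately, so we simply report the deviation.
gram_err = np.abs(C_hat @ C_hat.T - np.eye(order)).max()
print("max |Gram - I|:", gram_err,
      " reconstruction MSE:", np.mean((f - f_rec) ** 2))
```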
After the weighted normalized Charlier polynomials' identity and initial values calculations are presented, the calculation of the Charlier polynomials' coefficients in "Part 1" is performed by exploiting the three-term recurrence in the x-direction. To this end, the following calculations arê where x = 1, 2, . . . , N − 1 and n = x, x + 1, . . . , N − 1; the parameters A and B are obtained, respectively, as [78] For more clarification, the utilized algorithm for the weighted normalized Charlier polynomials is summarized in Algorithm 1. 11: for n = 0 to N − 1 do 12:Ĉ n (1; p) ← (p−n) √ nĈ n (0; p) 13: end for 14: {Compute the coefficients in "Part 1"} 15: for x = 1 to N − 1 do 16: for n = x to N − 1 do Methodology of the Proposed Feature Extraction and Recognition Method of 3D Object This section presents the feature extraction and recognition processes for the presented 3D object recognition algorithm. For any recognition system, a feature extraction process is employed to represent signals. As a result, local feature extraction can be used to enable more effective object recognition systems rather than global feature extraction due to their effectiveness, as discussed earlier in the introduction. Therefore, the 3D image might be separated into blocks to increase recognition accuracy. Each block has a size of B x × B y × B z . The Charlier polynomials are generated using the procedures in Section 2.4, where Charlier polynomials can be generated with parameter p. The brief methodology of the presented 3D recognition algorithm is shown in Figure 2. First, the 3D image information is obtained. Then, the Charlier polynomials are generated with parameter p. Next, the overlapped polynomials are generated to reduce the computation cost. After that, the fast 3D moments' computation is used to transform the 3D images into the moment domain. Finally, the features are normalized and used to train the SVM model for recognition. The global-based feature extraction approach is, to some extent, inaccurate for noisy environments, which highly impedes the characterization of efficient 3D object algorithms in more realistic settings. Moreover, the performance of 3D object recognition accuracy may be degraded in noisy environments [45,81]. Therefore, preprocessing for the 3D object becomes essential to mitigate the effect of noise but it may come at the expense of increasing the computation complexity. The extraction of local features leads to a high computation cost because the traditional method is used, which is considered a bottleneck for real-time application [82]. The local features are extracted after partitioning the 3D object into sub-blocks. For more clarification, see Figure 3. A fast overlapped block-processing technique is exploited to overcome the above challenges. To extract local features, most applications use a non-overlapping blockprocessing technique. On the other hand, overlapped block processing could enhance the accuracy of 3D object recognition [45,81]. Typically, the processing of the blocks in parallel will significantly raise the cost of computing. We solved this issue by using the fast overlapped block processing described in [68]. The fundamental idea behind fast overlapped block processing (FOBP) is to extend the image by adding auxiliary matrices, which does away with the requirement for nested loops. The computing cost of the feature extraction procedure will be drastically reduced by eliminating the nested loops (see Figure 4). 
Suppose a 3D image F with a size of N x × N y × N z needs to be partitioned into overlapped blocks. The size of the blocks are B x × B y × B z with overlapping sizes of v x , v y , and v z in the x, y, and z-direction, respectively. This lead to a total blocks (T Blocks ) of T Blocks = Blocks x × Blocks y × Blocks z , For further details about the expressions above, see Figure 5. Suppose the matrix G represents the extended version of F and can be computed as follows [68]: where E x , E y , and E z are rectangular matrices with sizes of (B x · Blocks x × N x ), (B y · Blocks y × N y ), and (B z · Blocks z × N z ), respectively. For further elucidation, the matrix E d is shown in Figure 6, where d represents the dimensions (x, y, and z). To compute the moments (M) for a 3D image using matrix multiplication, Equation (12) can be rewritten as follows [83]: By substituting Equation (22) in Equation (23), we obtain Note that M represents the matrix form of the moments µ nml . By following the proof presented in [68], Equation (24) can be rewritten as follows: where Q d are computed as follows: where the matrix R d can be obtained as follows: where matrix I denotes an identity matrix, ⊗ denotes the Kronecker product, and H d represents the Charlier polynomials. Note that d represents the dimensions x, y, or z. Due to the matrices independence from the image, they are generated, stored, and repeatedly used [45,68]. The process for the matrices generation is depicted in Figure 7. After the matrices (Q x , Q y , and Q z ) are generated, the images are transformed into the Charlier moment domain to extract features (see Algorithm 2). Then, these features are normalized to obtain the feature vector. Finally, the objects are classified based on the extracted features. Algorithm 2 The 3D moments computation [83] Input: F = 3D image, Q d = Charlier polynomials. Output: FV = Charlier moments. 1: Generate extended 3D image (G) from the 3D image F {Equation (22).} 2: Get stored Charlier polynomials Q x , Q y , and Q z {Using Equation (26).} 3: for z = 1 to Ord z do 4: M ← M + R z ⊗ Q x G Q y 5: end for 6: FV ← reshape(M) {Reshape the computed moments as a feature vector.} 7: return FV {Note: in the training and testing phases, the feature vector is normalized.} In this paper, the normalized feature vector is obtained and considered an input to the classifier. To this end, a label (ID) is considered for each input image of the objects. The classification procedure is performed in this paper using SVM. The SVM technique is selected here due to its effectiveness in optimizing the margin between hyperplane separation classes and data [84]. Furthermore, the SVM technique can be very efficient for object recognition. This is attributed to the fact that SVM is more robust to signal fluctuation [85]. In this paper, LIB-SVM is used in the classification process [86]. Figure 8 shows a model of the proposed 3D object recognition method. Experiments and Discussions In this section, the performance of the proposed Charlier polynomials algorithm for 3D object recognition is evaluated. In this experiment, the well-known McGill datasetdeveloped in [77]-is used as a benchmark dataset. In particular, this dataset contains 19 classes, denoted as 3D objects. These 3D objects are named as planes, spiders, spectacles, snakes, pliers, octopus, teddies, dolphins, fours, ants, humans, tables, chairs, dinosaurs, fishes, hands, cups, craps, and birds. Samples of the aforementioned 3D objects are shown in Figure 9. 
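To make the block-partitioning quantities above concrete, the following sketch enumerates overlapped 3D blocks in the straightforward loop-based way that the fast overlapped scheme replaces with a single matrix product. The per-axis stride and block-count formula is the usual one and should be read as an assumption with respect to the exact expressions of [68].

```python
# Baseline (loop-based) extraction of overlapped 3-D blocks, i.e., the blocks
# that the fast overlapped scheme reproduces via the extended image G.
import numpy as np

def overlapped_blocks(F: np.ndarray, block=(16, 16, 16), overlap=(2, 2, 2)):
    """Yield overlapped sub-blocks of a 3-D array F."""
    steps = [b - v for b, v in zip(block, overlap)]        # stride per axis
    counts = [max(1, 1 + (n - b) // s)
              for n, b, s in zip(F.shape, block, steps)]   # blocks per axis
    for ix in range(counts[0]):
        for iy in range(counts[1]):
            for iz in range(counts[2]):
                x0, y0, z0 = ix * steps[0], iy * steps[1], iz * steps[2]
                yield F[x0:x0 + block[0], y0:y0 + block[1], z0:z0 + block[2]]

F = np.random.rand(64, 64, 64)                             # voxelized 3-D object
blocks = list(overlapped_blocks(F, block=(16, 16, 16), overlap=(2, 2, 2)))
print(len(blocks), blocks[0].shape)
```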
In this experiment, the results are obtained over 19 different objects and various effects. These effects are translations and rotations. The sample objects are translated in the x, y, and z axes and their combinations (xy, xz, yz, and xyz) range from (1, 1, 1) to (10, 10, 10) with a step of (1, 1, 1). In addition, for each direction, the sample objects are rotated in the x, y, z, xy, xz, yz, and xyz axes between 10 • and 360 • with a step of 10 • . The resulting number of samples per object is 1252, which produces a total number of 23,788 samples for all objects. The flow diagram process of the 3D object recognition is shown in Figure 8. For the 3D object recognition, different block sizes are considered in this experiment, which are given by the block sizes of 64 × 64 × 64, 32 × 32 × 32, and 16 × 16 × 16. Tables 1-3 present the performance results of block sizes of 64 × 64 × 64, 32 × 32 × 32, and 16 × 16 × 16, respectively. Besides, different overlap sizes are also considered in addition to the sizes of the testing and training sets considered during this experiment, which are given as 70% and 30%, respectively. As discussed earlier, the proposed solution for 3D object recognition and feature extraction is Charlier polynomials (see Algorithm 1) with parameter p = Block Size/2. The SVM model is used in the proposed algorithm for object classification. The LIB-SVM library developed in [86] is used to train the extracted features. The SVM kernel exploits LIB-SVM and uses the radial basis function. In the training phase, five-fold cross-validation is utilized to obtain the values of the SVM parameters (see Figure 2). The recognition accuracy is the number of correct predictions divided by the total number of predictions as follows: Tables 1-3 reported the recognition rate for clean and noisy environments. First, we will discuss the clean environment results; then, the noisy environment will be considered successively for different types of noise. The results in Table 1 show that the accuracy of block size of 64 × 64 × 64 starts at 68.25% and increases to 80.04% as the overlap block size is increased from 0 × 0 × 0 to 16 × 16 × 16, which shows an improvement ratio of 14.73%. This implies that increasing the overlap block size can help in improving the recognition accuracy. For the block size of 32 × 32 × 32 given in Table 2, the object recognition accuracy starts with a value of 76.28% at overlap size of 0 × 0 × 0, which achieves an accuracy improvement of 8.03% higher than that obtained with the block size of 64 × 64 × 64 at an overlap size of 0 × 0 × 0. In addition, the object recognition accuracy of the block size of 32 × 32 × 32 is increased to 80.10% at an overlap block size of 4 × 4 × 4. For the block size of 16 × 16 × 16 given in Table 3, the object recognition accuracy is increased from 70.58% to 80.10% as the overlap size is increased from 0 × 0 × 0 to 2 × 2 × 2. To this end, the highest accuracy performance is achieved at a block size of 64 × 64 × 64 when an overlap size of 16 × 16 × 16 is used, at a block size of 32 × 32 × 32 when an overlap size of 4 × 4 × 4 is exploited, and at a block size of 16 × 16 × 16 when an overlap size of 2 × 2 × 2 is utilized, as illustrated in Tables 1-3, respectively. In a nutshell, the best accuracy can be achieved when the overlap block size is increased and the block size is decreased. 
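A minimal classification harness mirroring the protocol above (70/30 split, RBF-kernel SVM, five-fold cross-validation, accuracy defined as correct predictions divided by total predictions) might look as follows. scikit-learn's SVC wraps LIBSVM, but the feature matrix, labels, and parameter grid below are placeholders rather than the actual moment features and LIB-SVM settings used here.

```python
# Sketch of the classification stage: RBF-kernel SVM, 5-fold CV, 70/30 split.
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 128))          # stand-in for normalized moment features
y = rng.integers(0, 19, size=500)        # 19 object classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)
grid = GridSearchCV(
    make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    param_grid={"svc__C": [1, 10, 100], "svc__gamma": ["scale", 0.01, 0.001]},
    cv=5,
)
grid.fit(X_tr, y_tr)

correct = (grid.predict(X_te) == y_te).sum()
accuracy = correct / len(y_te)           # correct predictions / total predictions
print(f"recognition accuracy: {100 * accuracy:.2f}%")
```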
Different noisy environments are considered for further evaluation of the proposed object recognition method, and the results for each type are reported. GN stands for Gaussian noise, SPN for salt-and-pepper noise, and SPKN for speckle noise. Several noise levels are considered for each type of noise. From Table 1, it is evident that for GN at all density values from 0.0001 to 0.0005, the accuracy increases as the overlap size increases. The same observation holds for SPN and SPKN at all noise density values. Moreover, Table 2 shows that for all noise types and densities, the highest accuracy is achieved at the largest overlap size. On the other hand, for a block size of 16 × 16 × 16, higher accuracy is obtained with an overlap size of 1 × 1 × 1 for all noisy environments. The results show that the best-case scenario is obtained at a block size of 32 × 32 × 32 and an overlap size of 4 × 4 × 4. The recognition accuracy starts at very low values for the block size of 64 × 64 × 64 with an overlap size of 0 × 0 × 0, as given in Table 1. The accuracy then increases to its highest values at a block size of 64 × 64 × 64 and an overlap size of 16 × 16 × 16, as given in Table 1; a block size of 32 × 32 × 32 and an overlap size of 4 × 4 × 4, as shown in Table 2; and a block size of 16 × 16 × 16 and an overlap size of 2 × 2 × 2, as demonstrated in Table 3. To evaluate the performance of the presented algorithm, a comparison with existing works in terms of recognition accuracy is presented in Table 4. It can be observed from Table 4 that the average recognition accuracy of the presented algorithm outperforms that of the existing works. According to these results, the recognition accuracy of the presented algorithm is significantly higher than that of the existing algorithms for all the given block sizes (64 × 64 × 64, 32 × 32 × 32, and 16 × 16 × 16) and overlap sizes (16 × 16 × 16, 4 × 4 × 4, and 1 × 1 × 1). Therefore, it can be concluded that the presented algorithm can be useful in object recognition applications. Furthermore, to provide a further performance evaluation of the proposed method, its computation time is compared with that of the traditional algorithm. Figure 10 reports the average computation time over 10 runs for both the proposed and the traditional algorithm under different block and overlap sizes. In addition, the performance improvement factor between the two algorithms is provided; it is obtained by dividing the computation time of the traditional algorithm by that of the proposed algorithm. Figure 10 shows that the proposed algorithm significantly outperforms the traditional algorithm, with an average improvement factor across all settings of around 4.70. The proposed recognition algorithm achieves its highest improvement factor, 7.86, when the block size is 16 × 16 × 16 with an overlap size of 4 × 4 × 4. This clearly signifies the efficiency of our algorithm when a small block size, i.e., 16 × 16 × 16, is considered.
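For readers wishing to reproduce the noisy-environment tests above, the following sketch shows one plausible way to corrupt a 3D volume with the three noise types (GN, SPN, SPKN). The exact noise models and density parameterization used by the authors are assumptions here; the density values simply mirror the 0.0001-0.0005 range reported in the tables.

```python
import numpy as np

def add_noise(volume, kind="gaussian", density=0.0003, seed=None):
    """Corrupt a 3D volume with Gaussian, salt-and-pepper, or speckle noise.

    Assumes the volume is scaled to [0, 1]; these are generic noise models,
    not necessarily the authors' exact implementations.
    """
    rng = np.random.default_rng(seed)
    noisy = volume.astype(float).copy()
    if kind == "gaussian":            # GN: additive zero-mean Gaussian noise
        noisy += rng.normal(0.0, np.sqrt(density), volume.shape)
    elif kind == "salt_and_pepper":   # SPN: flip a fraction of voxels to 0 or 1
        mask = rng.random(volume.shape) < density
        noisy[mask] = rng.integers(0, 2, int(mask.sum()))
    elif kind == "speckle":           # SPKN: multiplicative noise
        noisy *= 1.0 + rng.normal(0.0, np.sqrt(density), volume.shape)
    return noisy
```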
Conclusions This paper presents an efficient algorithm for 3D object recognition with low computational complexity and fast execution time based on Charlier polynomials. The proposed algorithm has a powerful capability for extracting the features of a 3D object quickly. This is attributed to the overlapped block-processing technique, which virtually extends the signals into auxiliary matrices and thereby avoids the delay of explicit loop computation. In addition, to characterize the effectiveness of the proposed 3D object recognition method, noisy environments were considered in the evaluation and comparison. This paper also employed the SVM algorithm to classify the 3D object features. The proposed 3D object recognition method was evaluated under different environments. The results illustrate that the proposed approach achieves high recognition accuracy as well as low computation time under the different noisy environments considered. These achievements signify the potential of the proposed 3D object recognition method for future applications. Conflicts of Interest: The authors declare no conflict of interest.
8,005.2
2022-11-26T00:00:00.000
[ "Computer Science" ]
Super-enhancer profiling identifies novel critical and targetable cancer survival gene LYL1 in pediatric acute myeloid leukemia Background Acute myeloid leukemia (AML) is a myeloid neoplasm that makes up 7.6% of hematopoietic malignancies. Super-enhancers (SEs) represent a special group of enhancers, which have been reported in multiple cell types. In this study, we explored super-enhancer profiling through ChIP-Seq analysis of AML samples and AML cell lines, followed by functional analysis. Methods ChIP-seq analysis for H3K27ac was performed in 11 AML samples, 7 T-ALL samples, 8 B-ALL samples, and the NB4 cell line. Genes and pathways affected by GNE-987 treatment were identified by gene expression analysis using RNA-seq. One of the super-enhancer-associated genes affected by GNE-987 treatment was LYL1 basic helix-loop-helix family member (LYL1). shRNA-mediated gene interference was used to down-regulate the expression of LYL1 in AML cell lines, and knockdown efficiency was detected by RT-qPCR and western blotting. The effect of knockdown on the growth of AML cell lines was evaluated by CCK-8. Western blotting was used to detect PARP cleavage, and flow cytometry was used to determine the effect of knockdown on apoptosis of AML cells. Results We identified a total of 200 genes that were commonly associated with super-enhancers in ≧10 AML samples and were found to be enriched in the regulation of transcription. Using the BRD4 inhibitor GNE-987, we assessed the dependence of AML cells on transcriptional activation for growth and found that GNE-987 treatment predominantly inhibits cell growth in AML cells. Moreover, 20 candidate genes were selected by combining the super-enhancer profile with the gene expression profile, and among them LYL1 was observed to promote cell growth and survival in human AML cells. Conclusions In summary, we identified 200 common super-enhancer-associated genes in AML samples, and a series of those genes are known cancer genes. We also found that GNE-987 treatment downregulates the expression of super-enhancer-associated genes in AML cells, including the expression of LYL1. Further functional analysis indicated that LYL1 is required for AML cell growth and survival. These findings promote understanding of AML pathophysiology and elucidate an important role of LYL1 in AML progression. Supplementary Information The online version contains supplementary material available at 10.1186/s13046-022-02428-9. Background Acute myeloid leukemia (AML) is a myeloid neoplasm that accounts for 7.6% of hematopoietic malignancies. In bone marrow (BM), AML arises from the oncogenic transformation of hematopoietic progenitors, which damages the blood tissue. According to reports, the long-term survival rate of patients with AML is less than 20% [1][2][3]. It has also been reported that approximately 18,000 AML cases are diagnosed each year in Europe [4]. The roles of multiple genes differ between pediatric AML and adult AML. AML is complex, and exploring its pathogenic mechanisms will help improve the current state of AML treatment [5][6][7][8]. A series of hub genes have been identified in AML. RUNX family transcription factor 1 (RUNX1) dysfunction is reported to be one of the major pathogenic mechanisms of AML [9]. RUNX1 point mutations have been identified in myelodysplastic syndrome-related AML. It has been reported that somatic RUNX1 mutations have been found in approximately 10% of patients with de novo AML [10].
Myeloperoxidase (MPO) has been widely accepted as a marker for AML diagnosis, and it is also associated with AML prognosis [11]. Cyclin-dependent kinase 6 (CDK6) is another key molecule in the development of AML. It functions as a driver of mixed-lineage leukemia rearrangements [12]. Pediatric AML is different from adult AML, as their biological processes and clinical prognoses are distinct [13,14]. Childhood AML is reported to have fewer somatic mutations and more cytogenetic abnormalities than adult AML. The epigenetic landscapes of pediatric and adult AML are also different. Furthermore, differences in prognosis between childhood AML and adult AML have also been reported. Super-enhancers (SEs) represent a special group of enhancers that have been reported in multiple cell types [15]. Super-enhancers recruit a particularly large number of transcription factors/cofactors and induce the transcription of many target genes, compared with typical enhancers (TEs). H3K27ac is one of the frequently used indicators for super-enhancer identification [15]. Aberrant expression of genes triggered by super-enhancers participates in many biological processes; therefore, the screening and identification of hub genes driven by super-enhancers have attracted the attention of many researchers. Super-enhancers have been reported to be implicated in multiple types of cancer. A super-enhancer promotes the growth and survival of t(4;14)-positive multiple myeloma [16]. HJURP was reported to be an SE-associated gene in t(4;14)-positive multiple myeloma. A super-enhancer activates the histone chaperone HJURP, which leads to abnormal overexpression of HJURP in t(4;14)-positive multiple myeloma. Overexpression of HJURP further promotes tumor cell proliferation and is associated with poor outcome in t(4;14)-positive multiple myeloma. A super-enhancer was found to activate the Wnt/beta-catenin pathway and promote the proliferation of liver cancer cells [17]. In hepatocellular carcinoma specimens, a liver-specific super-enhancer drives lncRNA-DAW, leading to activation of the Wnt/beta-catenin pathway. Oncogenic super-enhancers were also identified in colorectal cancer through genome-wide profiling [18]. Via a genome-wide investigation of the enhancer distribution in colorectal cancer tissues, super-enhancer loci were identified. Super-enhancers were found to govern PHF19 and TBC1D16 and participate in colorectal cancer tumorigenesis. In addition, a super-enhancer was reported to play a role in glioma progression [19]. In glioma cells, TMEM44-AS1 activates Myc signaling, and Myc binds to the super-enhancer of TMEM44-AS1, forming a positive feedback loop. Myc was reported to interact with mediator complex subunit 1 and regulate the super-enhancer of TMEM44-AS1 in glioma cells. The small-molecule Myc inhibitor Myci975 can alleviate the glioma cell growth promoted by TMEM44-AS1. A super-enhancer was also found to be involved in squamous cell carcinoma [20]. Super-enhancers were reported to form at cancer stemness genes, and disruption of super-enhancers using BET inhibitors was reported to inhibit the self-renewal of cancer stem cells in head and neck squamous cell carcinoma. A super-enhancer was reported to control the expression of TP63, which is involved in cancer stem cell self-renewal in head and neck squamous cell carcinoma.
BRD4 was reported to recruit MED1 and p65 to form super-enhancers, and a BRD4 inhibitor was reported to disrupt super-enhancers and decrease the tumorigenic potential of cancer stem cells in head and neck squamous cell carcinoma. Furthermore, super-enhancers are known to play a role in triple-negative breast cancer [21]. The super-enhancer heterogeneity among breast cancer subtypes was uncovered through multiomic profiling. Certain genes (including FOXC1, MET, and ANLN) were identified to be regulated specifically by triple-negative breast cancer-specific super-enhancers. A super-enhancer-driven master regulator of invasion and metastasis was identified in triple-negative breast cancer. In addition, a super-enhancer is reported to be abnormally activated and result in CHPT1 overexpression, which leads to enzalutamide resistance in castration-resistant prostate cancer [22]. In this study, we performed super-enhancer profiling through ChIP-Seq analysis of AML cell lines and AML samples, followed by functional analysis. We identified 200 common super-enhancer-associated genes in AML samples, and a series of those genes are cancer genes. We also found that GNE-987 treatment downregulates the expression of super-enhancer-associated genes in AML cells, including the expression of LYL1. Further functional analysis indicated that LYL1 is required for AML cell growth and survival. These findings provide novel insights into the pathophysiology of AML and elucidate a crucial role of LYL1 in promoting AML progression. Samples This study was performed according to The Code of Ethics of the World Medical Association (Declaration of Helsinki). The ethics committee of Children's Hospital of Soochow University approved this study (No. SUEC2000-021 & No. SUEC2011-037). Written informed consent was obtained from each participating individual's guardian. A total of 11 pediatric AML bone marrow samples, 7 T-ALL bone marrow samples, and 8 B-ALL bone marrow samples were collected, and H3K27ac signals were detected by ChIP-Seq. The clinical characteristics of the patients are shown in Table 1 and Supplementary Table 1. Cell lines and culture Human AML cell lines, including NB4, Kasumi-1, MV4-11, and HL-60, were acquired from the cell bank of the Chinese Academy of Sciences. Cells were cultured at 37 °C in RPMI medium (Thermo Fisher Scientific) supplemented with 1% penicillin-streptomycin (Beyotime Biotechnology, Shanghai, China) and 10% fetal bovine serum (Biological Industries, CT, USA), in a humidified incubator with 5% CO2. Cell viability assay Leukemia cells were seeded in 96-well plates, and GNE-987 was added at different concentrations. Control-group cells were treated with 0.05% dimethyl sulfoxide (DMSO) without GNE-987 in complete medium. Cell viability was determined by a Cell Counting Kit-8 (CCK-8) assay (Dojindo Molecular Technologies, Tokyo, Japan) according to the manufacturer's instructions after 24 h of drug treatment. Each concentration was tested in three independent experiments. Cell proliferation was quantified using GraphPad Prism 7.0 (GraphPad Software Inc., San Diego, CA, USA).
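The CCK-8 viability readings described above are typically summarized as 50% inhibitory concentrations, as reported in the Results. A minimal sketch of one way to obtain such a value is given below; the four-parameter logistic model, SciPy in place of GraphPad Prism, and the example readings are all assumptions for illustration, not the authors' analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

# Hypothetical viability values (% of DMSO control) at each GNE-987 dose (nM).
dose = np.array([1, 5, 10, 25, 50, 100, 200], dtype=float)
viability = np.array([98, 90, 75, 52, 31, 18, 10], dtype=float)

params, _ = curve_fit(four_pl, dose, viability, p0=[0, 100, 25, 1])
print(f"estimated IC50 ≈ {params[2]:.1f} nM")
```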
Lentivirus preparation and infection We constructed short hairpin RNAs (shRNAs) targeting LYL1 (shown in Supplementary Table 2) using the pLKO.1-puro lentiviral vector (IGE Biotechnology Ltd., Guangzhou, China). We also constructed PLVX-LYL1 (shown in Supplementary Table 2) in this study. To prepare lentivirus, we purchased the envelope plasmid and packaging plasmid from Addgene (pMD2.G: #12,259; psPAX2: #12,260; Cambridge, MA, USA). Next, we co-transfected pMD2.G, psPAX2, and the transfer plasmid into 293FT cells using polyethylenimine (linear MW 25,000 Da, 5 mg/mL, pH 7.0) (cat. No. 23966-1; Polysciences, Warrington, PA, USA). We replaced the entire volume of culture medium with fresh medium after 6 h. We harvested the viral supernatant at 48 h post-transfection and filtered it through a 0.22 μm filter. We then infected the leukemia cells with lentivirus in the presence of 10 μg/mL Polybrene (Sigma-Aldrich) for 24 h. Finally, we selected stable cells using puromycin (Sigma-Aldrich). RNA preparation and real-time PCR We extracted total RNA with TRIzol® reagent (Invitrogen, CA, USA). We reverse-transcribed total RNA to cDNA with a High-Capacity cDNA Reverse Transcription Kit (Applied Biosystems, CA, USA). Quantitative real-time PCR was performed with LightCycler® 480 SYBR Green I Master mix (cat. No. 04707516001; Roche, Penzberg, Germany) in a LightCycler 480 Real-Time System (Roche). We then calculated mRNA expression levels. Apoptosis assay We harvested leukemia cells and washed them with cold PBS. The cells were then suspended in 1× binding buffer and stained with fluorescein isothiocyanate (FITC)-Annexin V and PI solution using the FITC-Annexin V apoptosis kit (cat. No. 556420; BD Biosciences, Franklin Lakes, NJ, USA). Apoptosis was analyzed using flow cytometry (Beckman Gallios™ Flow Cytometer; Beckman). RNA-seq analysis and data processing RNA-seq was performed using protocols from Novogene Bioinformatics Technology Co., Ltd. (Beijing, China). Library construction was the first step, with reverse transcription of total RNA to cDNA. Next, the cDNA library was sequenced. We then filtered the raw reads and mapped the clean reads using HISAT. We calculated gene expression levels and identified differentially expressed genes with DESeq2 (P < 0.05 and fold-change > 2 or fold-change < 0.5). Differentially expressed genes were further analyzed with the R package clusterProfiler [23] and the DAVID Bioinformatics Resources v6.8 online server (https://david.ncifcrf.gov) for enrichment analysis. Chromatin immunoprecipitation (ChIP) We crosslinked 3-5 × 10^7 cells with 1% formaldehyde for 10 minutes and then quenched the crosslinking reaction. ChIP-seq peaks were called with MACS2 [25] according to the parameters -g hs -n test -B -q 0.01. We then converted the bedgraph files generated by MACS2 to bigwig files with the UCSC bedGraphToBigWig tool and visualized the bigwig files using the Integrative Genomics Viewer (IGV) [26]. We identified super-enhancers by the ROSE (Rank Ordering of Super Enhancers) method [27,28], according to the parameters -s 12500 -t 2000 (-s, stitching distance; -t, TSS exclusion zone size). Public Hi-C data collection and analysis Hi-C data for the THP-1 cell line (GSE126979) were downloaded from the Gene Expression Omnibus database. Read mapping and loop calling were performed with HiC-Pro (v.3.1.0) [32]. For alignment, MboI restriction sites in the hg38 build were used.
HiC-Pro uses Bowtie2 for mapping, and we specified --very-sensitive -L 30 --score-min L,-0.6,-0.2 --end-to-end --reorder for the global options and --very-sensitive -L 20 --score-min L,-0.6,-0.2 --end-to-end --reorder for the local options. We used 'GATCGATC' as the ligation site during the mapping process. Statistical analysis Student's t-test or the Mann-Whitney U test was used for comparisons between two groups. Statistically significant P values are labeled as follows: * for P < 0.05, ** for P < 0.01, and *** for P < 0.001. Statistical analysis was performed using GraphPad Prism 7.0 (GraphPad Software, Inc., La Jolla, CA, USA). Super-enhancers are enriched at transcriptional regulatory genes in AML samples and AML cells To identify genes correlated with super-enhancers in AML, we carried out H3K27ac ChIP-seq analysis in 11 AML samples (Table 1 and Supplementary Table 1). In this study, NB4, MV4-11, and THP-1 cells were also used as representative AML cell lines. For these 3 AML cell lines, we carried out H3K27ac ChIP-seq analysis in the NB4 cell line and analyzed public H3K27ac ChIP-seq datasets for the MV4-11 and THP-1 cell lines (GSE80779 and GSE123872) [29,30]. Additionally, we included 7 T-ALL samples and 8 B-ALL samples to compare their H3K27ac signals with those in AML samples. Putative super-enhancers identified in each of the 11 AML samples are shown in Fig. 1A-K and Supplementary Table 4, with RNA-seq results for 9 of the 11 AML samples shown in Fig. 1L. Putative super-enhancers identified in the 7 T-ALL samples and 8 B-ALL samples are shown in Supplementary Tables 5 and 6. The principal component analysis (PCA) and clustering results based on the peak signals clearly distinguished AML samples from T-ALL or B-ALL samples (Fig. 2A and B). Next, a total of 200 genes were selected that were commonly correlated with super-enhancers in ≧10 AML samples (Supplementary Table 7). Gene ontology enrichment analysis suggested that these 200 genes were enriched in the regulation of transcription and the regulation of myeloid cell differentiation (Fig. 2C, Supplementary Table 8). RNA-seq results of 9 AML samples suggested that the expression levels of the 200 super-enhancer-associated genes were significantly higher than those of the other genes (Fig. 1L, Supplementary Table 9). Our results indicate that the genes commonly correlated with super-enhancers in AML are generally involved in the regulation of transcription. Super-enhancers in AML are associated with known cancer genes Among the 200 above-mentioned genes, a series of genes have been determined to be involved in cancers. For instance, super-enhancers were present at the MPO gene locus in 10 AML samples and 3 AML cell lines (NB4, MV4-11, and THP-1) (Fig. 2D). Similarly, we found super-enhancers at the SPI1 locus in 11 AML samples and 3 AML cell lines (NB4, MV4-11, and THP-1) (Fig. 2E). Additionally, super-enhancers were observed at the ZFP36L2 gene locus in all AML samples (Fig. 2F). Compared to T-ALL or B-ALL, super-enhancers associated with MPO were observed to be AML specific, while super-enhancers associated with ZFP36L2 were common to all three hematological diseases (AML, T-ALL, and B-ALL) (Supplementary Figs. 1 and 2). In addition, compared to T-ALL, super-enhancers associated with SPI1 were found to be AML and B-ALL specific (Supplementary Fig. 3). According to these findings, these gene loci are particularly activated in AML.
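The super-enhancer calls used throughout this section come from the ROSE procedure cited in the Methods, which ranks stitched enhancers by H3K27ac signal and labels those beyond the inflection point of the scaled, ranked curve as super-enhancers. The sketch below is a simplified stand-in for that ranking step, assuming a precomputed signal table; it is not the authors' pipeline.

```python
import numpy as np
import pandas as pd

def call_super_enhancers(enhancers: pd.DataFrame) -> pd.DataFrame:
    """ROSE-style super-enhancer call from stitched-enhancer H3K27ac signal.

    `enhancers` is assumed to have a 'signal' column (e.g., input-subtracted
    H3K27ac reads per stitched region). Regions ranked above the point where
    the scaled signal curve reaches a slope of 1 are labeled super-enhancers.
    """
    ranked = enhancers.sort_values("signal").reset_index(drop=True)
    x = np.arange(len(ranked)) / max(len(ranked) - 1, 1)        # rank scaled to [0, 1]
    y = (ranked["signal"] / ranked["signal"].max()).to_numpy()  # signal scaled to [0, 1]
    slopes = np.gradient(y, x)                                  # numerical slope of the curve
    cutoff = int(np.argmax(slopes > 1.0))                       # first rank where slope exceeds 1
    ranked["is_super_enhancer"] = np.arange(len(ranked)) >= cutoff
    return ranked

# Hypothetical usage: a table of stitched enhancers with one signal column.
demo = pd.DataFrame({"signal": np.sort(np.random.lognormal(2.0, 1.0, 1000))})
print(call_super_enhancers(demo)["is_super_enhancer"].sum(), "super-enhancers called")
```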
Although the sample size of this cohort was small, our results indicate that super-enhancers are correlated with cancer genes and lead to abnormal activation of those genes during AML progression. The BRD4 inhibitor GNE-987 inhibits AML cell growth To evaluate the dependence of AML cell growth on transcriptional activation, we next examined the effect of the BRD4 inhibitor GNE-987 on AML cells. We chose four AML cell lines: NB4, Kasumi-1, HL-60, and MV4-11. In this analysis, all four AML cell lines showed high sensitivity to GNE-987, with 50% inhibitory concentrations of less than 50 nM (Fig. 3A). Cell cycle analysis indicated that GNE-987 treatment led to cell-cycle arrest in G1 phase (Fig. 3B). Of note, the apoptosis rates of AML cells were significantly increased after GNE-987 treatment (Fig. 3C, D, E, F). Western blotting analysis also showed PARP cleavage after GNE-987 treatment in the four AML cell lines (Fig. 3G), indicating that GNE-987 induced apoptotic cell death. These results indicate that GNE-987 treatment predominantly inhibits the growth of AML cells. GNE-987 treatment inhibits the expression of super-enhancer-associated genes in AML cells We next carried out RNA-seq to obtain gene expression profiles after GNE-987 treatment in NB4 cells. A total of 11,834 genes were identified to be differentially expressed between the control group and the GNE-987-treated group (Fig. 4A, Supplementary Table 10). Through gene ontology enrichment analysis, these 11,834 genes were found to be enriched in ribonucleoprotein complex biogenesis (Fig. 4B). Notably, the expression levels of the super-enhancer-associated genes HEMGN, LYL1, ANKRD13D, RREB1, NACC1, ZEB2, SCYL1, ASNA1, TNRC18, GSE1, TRMT1, SLC39A13, ZFP36L2, FRMD8, PTMA, SPEN, and PAF1 were significantly downregulated after GNE-987 treatment based on the RNA-seq results (Fig. 4C, D). qRT-PCR validation further showed that these genes were downregulated in NB4 cells treated with GNE-987 (Fig. 4E). These findings suggest that GNE-987 efficiently downregulates the expression of super-enhancer-associated genes in AML cells. Selection of candidate cancer genes in AML by super-enhancer and gene expression profiles Our ChIP-seq results suggested that many genes involved in AML pathogenesis are correlated with super-enhancers. The expression of these genes was significantly inhibited by GNE-987 treatment. Therefore, we next combined super-enhancer profiling with gene expression profiling to identify critical cancer genes implicated in AML pathogenesis. We focused on the 200 genes that were associated with super-enhancers in ≧10 AML samples (Fig. 4C; Supplementary Table 7). Additionally, we performed H3K27ac ChIP-Seq after GNE-987 treatment in NB4 cells and performed filtering to identify genes that also harbor super-enhancers in NB4 cells (Fig. 4C, Supplementary Table 11). We next performed filtering to identify genes that were also significantly downregulated after GNE-987 treatment (P value < 0.05, log2 fold change < −1) in NB4 cells (Fig. 4C). Filtering according to the above stringent criteria narrowed the list of candidates down to 20 genes (Fig. 4C, D). LYL1 is required for AML cell growth and survival Among those 20 genes, LYL1 was associated with super-enhancers in 10 AML samples (Figs. 4D and 5A). The Hi-C data also revealed an interaction between the super-enhancer and LYL1 in the THP-1 cell line. Super-enhancers associated with LYL1 were common to all three hematological diseases (AML, T-ALL, and B-ALL) (Supplementary Fig. 4).
The public BRD4 ChIP-Seq data for the AML cell line MV4-11 (GSE101821) showed that the gene region of LYL1 had strong signals (Fig. 5B, track 1), and BRD4 was found to function cooperatively with CEBPE and RUNX1 (Supplementary Fig. 5), suggesting a potential role for BRD4 in the transcriptional regulation of LYL1 in AML. LYL1 also harbored a super-enhancer in NB4 cells, and the H3K27ac signal was significantly decreased in the LYL1 gene region in NB4 cells treated with GNE-987 (Figs. 4D and 5B (tracks 2-3)). In fact, LYL1 expression was significantly downregulated by GNE-987 treatment in both the NB4 and Kasumi-1 cell lines (Fig. 5B (tracks 4-7), 5C, 5D, 5E, 5F). We compared the expression pattern of LYL1 between AML cases and healthy controls based on a public transcriptomic dataset (GSE114868) [33] and found that LYL1 was significantly overexpressed in AML samples (Fig. 5G). Knockdown of LYL1 demonstrated that loss of LYL1 significantly inhibited the growth of the NB4 and Kasumi-1 cell lines (Fig. 6A, B, C). Consistent with this finding, the apoptosis rates of NB4 and Kasumi-1 cells were significantly increased after LYL1 knockdown (Fig. 6D, E). Cell viability experiments further showed that, in LYL1-overexpressing AML cells, cell growth was not significantly influenced by GNE-987 (24 h of treatment) compared with AML cells without LYL1 overexpression (Fig. 6F), suggesting that the effect of GNE-987 on AML cell growth depends in part on LYL1. We next assessed the binding pattern of LYL1 across genomic regions in three public ChIP-Seq datasets (for the NB4, HL-60, and Kasumi-1 cell lines, respectively; GSE63484) [31] and found that LYL1 functions cooperatively with Elf4, RUNX2, CEBPD, TEAD3, GATA2, and Twist2 (Fig. 7A, B, C). Importantly, LYL1 was found to bind to the promoter regions of 386 genes in all three AML cell lines (NB4, HL-60, and Kasumi-1) (Fig. 7D, Supplementary Table 12). qRT-PCR validation further showed that these genes were significantly downregulated in NB4 cells in response to LYL1 silencing (Fig. 7E). Together, these results suggest that LYL1 is required for the growth and survival of human AML cells. Discussion AML is an aggressive neoplasm, and its prognosis is poor [8]. AML is complex, and exploring its pathogenic mechanisms will help improve the current state of AML treatment [5][6][7][8]. Multiple hub genes have been identified in AML. RUNX1 dysfunction is a major pathogenic mechanism of AML [9]. MPO is a marker for AML diagnosis and prognosis [11]. CDK6 is another key molecule in AML development [12]. Super-enhancers are attracting the attention of many researchers at present. Super-enhancers recruit a particularly large number of transcription factors/cofactors and induce the transcription of many target genes, compared with typical enhancers. Super-enhancers have been reported to be frequently associated with cancer genes [28,[34][35][36]. A super-enhancer promotes the growth and survival of t(4;14)-positive multiple myeloma [16]. A super-enhancer was found to activate the Wnt/beta-catenin pathway and promote the proliferation of liver cancer cells [17]. Oncogenic super-enhancers were also identified in colorectal cancer through genome-wide profiling [18]. In addition, a super-enhancer was reported to play a role in glioma progression [19]. A super-enhancer was also found to be involved in squamous cell carcinoma [20]. Furthermore, super-enhancers are known to play a role in triple-negative breast cancer [21].
In addition, a super-enhancer has been reported to be abnormally activated and result in CHPT1 overexpression, which leads to enzalutamide resistance in castration-resistant prostate cancer [22]. To date, the biological significance of super-enhancers in AML remains unclear, and it is useful to identify critical super-enhancers and associated genes that are required for the development of AML. In our study, we found super-enhancers at 200 gene loci in ≧10 AML samples. These genes are required for AML progression. Strikingly, we identified super-enhancers at the LYL1 gene locus in 10 AML samples. We also found that GNE-987 treatment downregulates the expression of super-enhancer-associated genes in AML cells, including the expression of LYL1. Further functional analysis indicated that LYL1 is required for AML cell growth and survival. These results elucidate a crucial role of LYL1 in promoting AML progression. The ChIP-Seq results in AML cell lines differed from those in AML patient samples in this study, and further ChIP-Seq analysis of AML cell lines is necessary to confirm this discrepancy. LYL1 was identified to function in clear cell renal cell carcinoma [37]. It was reported to have a strong association with immune infiltration in clear cell renal cell carcinoma. Copy number amplification of LYL1 was reported in gliosarcoma based on a comparison at the molecular level between gliosarcoma patients and glioblastoma patients [38]. LYL1 expression also showed differences between osteosarcoma and control samples, and the expression of LYL1 was significantly decreased in osteosarcoma cell lines compared to normal cells [39]. LYL1 gene amplification was also determined to be involved in the development of uterine corpus endometrial carcinoma, and LYL1 gene amplification was reported to be a risk factor for poor prognosis in patients with uterine corpus endometrial carcinoma according to a study of 370 patients with this disease [40]. LYL1 is able to maintain primitive erythropoiesis, and it was reported to bind to a subset of stem cell leukemia (SCL) targets according to ChIP-seq analysis in a human erythroleukemia cell line [41]. LYL1 functions in platelet production in mice [42]. It regulates GATA1 expression and functions cooperatively with SCL in platelet production. It has been reported that the expression of LYL1 is higher in AML than in normal bone marrow, and LYL1 was found to be overexpressed in myelodysplastic syndrome compared with normal bone marrow [43]. LYL1 is a member of a heptad of transcription factors that play roles in human CD34+ haematopoietic stem and progenitor cells (HSPCs), and LYL1 has prognostic significance in AML [44]. In this study, LYL1 was found to be associated with super-enhancers in 10 AML samples, suggesting that aberrant expression of LYL1 triggered by super-enhancers probably plays a role in AML development. It has also been reported that transcriptional regulation can be both BRD4-dependent and BRD4-independent [45]; therefore, the expression of LYL1 might be regulated in both BRD4-dependent and BRD4-independent manners. To our knowledge, there is no reported enhancer regulation of LYL1 to date, and whether the expression of LYL1 is also regulated by BRD4-independent enhancers needs to be explored in future studies. Conclusion In summary, we identified 200 common super-enhancer-associated genes in AML samples, and a series of those genes are cancer genes.
We also found that GNE-987 treatment downregulates the expression of super-enhancer-associated genes in AML cells, including the expression of LYL1. Further functional analysis indicated that LYL1 is required for AML cell growth and survival. These findings promote the understanding of AML pathophysiology and elucidate an important role of LYL1 in AML progression.
5,805.8
2022-07-16T00:00:00.000
[ "Biology" ]
Performance-Evaluation Index for Precision Poverty Alleviation in China’s Shaanxi Province An effective and accurate poverty-alleviation system is necessary for eradicating poverty and promoting regional economic and social growth. Performance evaluation plays a key role in developing a precise poverty-alleviation policy. However, systematic performance evaluation of precise poverty-alleviation efforts has been largely ignored in the literature. This study sorts the poverty-alleviation performance of 10 major urban areas in Shaanxi Province using the count and analysis method. The empirical findings show that, among the poverty indexes, the yield of agricultural products has the greatest impact on poverty alleviation. Furthermore, the poverty-alleviation performance of Xianyang, Weinan, and Ankang is relatively high. The efforts of Xi’an and Baoji are at the middle level, and those of Tongchuan and Yan’an are at a relatively low level. This paper identifies the poverty-alleviation performance status of each area at the indicator level and then offers a corresponding analysis and proposes countermeasures. Introduction Poverty alleviation has become one of the most serious problems faced by developed and developing countries. The significance of the problem can be best seen in the inclusion of poverty alleviation in the United Nations Sustainable Development Goals (SDGs). Ending poverty in all its forms everywhere is the top priority of the United Nations SDGs (Baloch, Danish, Khan, & Ulucak, 2020; Baloch, Danish, Khan, Ulucak, & Ahmad, 2020). Poverty brings with it serious social and economic problems in general and for developing countries in particular. Poverty afflicts developing and under-developed countries in various ways. It is seen in income inequality, lack of productive assets, chronic hunger and malnutrition, shortage of clean water, homelessness, high unemployment, low life expectancy, lack of education, and ongoing social injustice. Among developing nations, China has made reasonable efforts to reduce and alleviate poverty. The 19th National Congress of the Communist Party of China emphasized poverty-alleviation efforts and the need to define reasonable poverty-alleviation goals and strengthen the evaluation and supervision of poverty-alleviation efforts (Wang, 2018). In particular, Shaanxi Province, which had 1.69 million impoverished people in 2017 (about 570,000 fewer than in 2016), is an example of poverty-alleviation efforts having achieved remarkable results (Yan et al., 2018).
Despite poverty-alleviation efforts achieving some results, there are still many practical challenges that need to be addressed. For example, there has yet to be an identification of the object of poverty-alleviation efforts that provides a dynamic and accurate identification and targeting mechanism. Moreover, the use of poverty-alleviation funds has lacked a comprehensive and clearly separated input-supervision mechanism (He, 2019). In addition, there is no effective way of screening poverty-alleviation projects (Z. Zhao, 2020). A mechanism that can effectively coordinate poverty-alleviation efforts has yet to be developed (Y. Zhang, 2020). Each of the regions in China faces its own problems and situations. There is thus a need for a comprehensive evaluation indicator for poverty alleviation that offers an accurate picture of the situation. This paper proposes a multidimensional poverty-alleviation indicator, taking Shaanxi Province as an example, to provide a scientific and accurate theoretical basis for poverty-alleviation work. Studies in the 1990s began measuring poverty by utilizing unidimensional indicators such as income levels. Later studies show that measuring poverty through these unidimensional indicators might limit attempts to understand more complex features of poverty. As a result, research has gradually begun to shift from unidimensional to multidimensional assessments. For example, the multidimensional poverty index (MPI) ranks 10 indicators, including education, health, and living standards, to comprehensively measure dimensions of poverty (Chakravarty, 1997). An advantage of using the MPI is that it includes key factors relevant to the less-privileged, thereby effectively enabling an analysis of the dilemmas of the poor. Moreover, the MPI can be used to horizontally compare the characteristics of poverty in different regions and can also show the vertical incidence of poverty. At the same time, the MPI reflects the extent of deprivation experienced by individuals or families. Two groups of scholars address precision poverty alleviation. One group reviews and analyzes poverty-alleviation policies, the limitations of the departments concerned (Fu, 2017; Liao, 2016), and the lack of autonomy of the working group (Huang, 2018). The second group of authors focuses on special cases of poverty alleviation in selected areas. These authors consider the experience of poverty-alleviation efforts in Yanchi County and Ningxia, highlighting the significance of poverty-alleviation work in the country (Li & Song, 2017; H. Liu, 2016). Many studies point to the practical dilemmas faced in poverty alleviation and development and how strategies might be optimized (J. Chen & Gong, 2017; Shi et al., 2017). Fan and Zhou (2017) analyze the fragmentation dilemma in precision poverty alleviation and propose an "anti-fragmentation" approach. From the perspective of policy ecology, Y. Liu (2020) constructs an optimal holistic path for poverty alleviation.
This study is different from previous studies in the following ways. First, in establishing model screening indicators, previous studies focus only on the structure of the model, its inclusion dimensions, and the selection of measurement indicators, ignoring the weights of particular indicators. In the practice of precision poverty alleviation, many indicators can reflect the poverty level of a certain area or region, and having different weight settings for indicators will greatly impact the evaluation results. This study thus takes into account the weight setting of the multidimensional poverty evaluation index to accurately and objectively evaluate the degree of poverty and the main causes for its "stickiness." Second, most of the existing multidimensional poverty research is based on qualitative analysis. There is limited data to comprehensively describe specific poverty problems. Therefore, this study intends to use scientific and rigorous methods to study the weight-setting problem in the multidimensional poverty evaluation index by taking into account the geographical characteristics of different poverty-stricken areas in the selection of evaluation indicators; this will allow a more comprehensive and objective assessment of the aspects of poverty. The remainder of the study proceeds as follows. Section 2 covers the related literature and research status, Section 3 introduces the material and methods, Section 4 presents the results and analysis, Section 5 presents the discussion of results, and, finally, Section 6 concludes and offers policy recommendations. Data and Variable Description This paper collects data on 15 poverty-alleviation indicators for the 10 major cities in Shaanxi Province from 2016 to 2017. Shaanxi Province is one of the provinces with the highest rate and deepest level of poverty in China and, therefore, the most significant poverty-alleviation task ahead. This paper uses the province's poverty-alleviation work as an example and evaluates the performance of these efforts through investigation and analysis of the statistical data. The study uses entropy weight and fuzzy analysis methods to ensure the sustainability of poverty-alleviation work in the future. There are various methods to determine index weights, such as the index value, frequency analysis, and expert scoring methods. In particular, the entropy weight and fuzzy analysis methods quantify subjective and objective weightings to study multidimensional poverty. This approach allows a choice of practical evaluation indicators and extensive poverty-related statistical data support. Moreover, the use of the entropy weight method helps us determine the weight of multidimensional indicators or complicated structures when there is a lack of availability of important data; this method is therefore widely applied in the theory of fuzzy mathematics (T. Chen et al., 2014).
Most of the current research on multidimensional poverty focuses on the identification and measurement of poverty, and attention to the weighting of multidimensional poverty indicators is relatively simplistic. This paper selects five dimensions: economic, developmental, social, ecological, and life. These dimensions allow an assessment of poverty, and the selected indicators have a certain representativeness at the classified level. The combination of dimensional factors and indicators allows us to establish a poverty-alleviation performance-evaluation index system using the location characteristics of Shaanxi Province. We also utilize the fuzzy evaluation method to evaluate the overall poverty-alleviation performance of different cities in Shaanxi Province. To do so, we use a combination of the decision-making trial and evaluation laboratory (DEMATEL) and the technique for order preference by similarity to ideal solution (TOPSIS) methods. Following the establishment of a relevant poverty-alleviation performance-evaluation index, the actual situation of poverty alleviation in Shaanxi Province is determined using data on the five dimensions: economic level, development level, social level, ecological level, and life. The indicators are selected after fully considering the availability of data and ensuring the representativeness and authenticity of the selected data. The number of students in ordinary middle schools is a representative indicator selected from education-related poverty alleviation. Education plays a fundamental role in sustainable and precise poverty alleviation. Education-related poverty alleviation promotes the balanced development of compulsory education and targets improved levels of education for the poor. At the same time as providing cultural knowledge, education enhances the skills of labor and can also improve the productive capacity of poor people and their ability to work and start businesses. The total volume of post and telecommunications business is representative of the status of regional industrial structures. Disordered post and telecommunications restrict the inclusive growth of the economy. The growth of postal and telecommunications services allows the communication needs of poor areas to be met. The per capita road area of road infrastructure and the total growth rate of post and telecommunications business together reflect the effectiveness of infrastructure construction for poverty alleviation.
(iv) Ecological level. The ecological indicators include the output of agricultural products, the total power of agricultural machinery, and the forestation of previously barren hills and wasteland. Both the output of agricultural products and the total power of agricultural machinery are measures of industrial poverty alleviation; they have potential poverty-alleviating effects due to their natural attributes. The forestation of barren hills and wasteland reflects the effectiveness of efforts to improve the local ecosystem. The efforts at this level are mainly aimed at the rural economic structure and at improving the income of farmers to alleviate poverty. (v) Life level. The indicators for this dimension include cable TV coverage, growth in the rate of Internet broadband use, and energy consumption savings per unit of GDP. The first two indicators are related to poverty alleviation through cultural undertakings. Poverty alleviation at this level can enhance the well-being of the poor. The indicator of energy consumption savings per unit of GDP is part of energy-related poverty alleviation, the reduction of energy consumption per unit of GDP, improvement of energy efficiency, and promotion of green development. There is a brief description of each indicator in Table 1. The Application of Evaluation Methods Entropy Weight Method. The objective assignment method is used to determine the weight of each attribute. This paper employs the entropy weight method to calculate the weight of each attribute (H. Zhang & Yu, 2012) using Excel software. The specific method is as follows. a. There are m items to be evaluated and n evaluation indicators, forming the original data matrix, where r_ij is the evaluation value of the ith item under the jth index. In this paper, j represents the 15 evaluation indicators selected from the five dimensions, and i represents the 10 main cities in Shaanxi Province. The process of finding the weight value of each indicator follows. b. Calculate the proportion of the indicator value of the ith item under the jth indicator. c. Calculate the entropy value e_j of the jth indicator. d. Calculate the entropy weight w_j of the jth indicator. The larger the entropy value e_j of an index, the smaller the variation of the index value; that is, the smaller the amount of information provided and the smaller the influence of the index on the comprehensive evaluation. Therefore, such an indicator should be given a small weight. In the actual analysis, based on the range of variation of the index values, the weight of each index is defined by the entropy weight method, and finally a reasonable and effective result is obtained. The calculation process is as follows. First, the above 15 indicators are selected to evaluate the overall poverty-alleviation performance of the 10 major cities in Shaanxi Province. The specific data are shown in Table 2. In the second step, we undertake a non-dimensionalization process. This step in the entropy weight method is taken to maintain the original variability of the data and, at the same time, remove the dimensionality. In the standardization of the data, each value is divided by the maximum value of the same index to eliminate the dimension; using formula (5), standardized data are obtained and shown in Table 3. In the third step, we calculate the weight matrix of the data and then use formula (3) to obtain the entropy values of each index, as shown in Table 4.
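A compact sketch of the entropy-weight steps a-d above (proportions, entropy values e_j, and weights w_j) is given below. The epsilon guard against log(0) and the random stand-in for the Table 2 data are assumptions; the paper performs the same calculation in Excel.

```python
import numpy as np

def entropy_weights(R: np.ndarray) -> np.ndarray:
    """Entropy weight method for an m x n decision matrix R (items x indicators).

    Follows the steps described above: compute each indicator's column of
    proportions, the entropy e_j of every indicator, and weights from (1 - e_j).
    """
    eps = 1e-12
    P = R / R.sum(axis=0, keepdims=True)                 # proportion of item i under indicator j
    m = R.shape[0]
    E = -(P * np.log(P + eps)).sum(axis=0) / np.log(m)   # entropy e_j in [0, 1]
    d = 1.0 - E                                          # degree of diversification
    return d / d.sum()                                   # entropy weights w_j (sum to 1)

# Example: 10 cities x 15 indicators (random stand-in for the Table 2 data).
R = np.random.rand(10, 15)
w = entropy_weights(R)
print(np.round(w, 3), w.sum())
```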
The information in Table 4 is used to obtain the entropy weight of each index, as shown in Table 5. Finally, the comprehensive weights of the five levels are obtained and shown in Table 6. Analysis of the Results If the values of all items in index j are consistent (the degree of variation is negligible), the entropy value of this index is one, and the entropy weight is zero. This indicates that the metric does not provide valid information, and it can be removed. For specific indicators, from Tables 4 and 5, the entropy weight of TV population coverage (a3) is 0; that is, it has little effect on poverty alleviation, and this indicator can be ignored. The output of agricultural products (a2), investment in fixed assets (a5), and the growth rate of the total post and telecommunications business (a9) account for the largest proportions of all indicators; that is, these factors have the most significant impact on poverty alleviation. This may be because credit initiatives aimed at poverty alleviation focus on agricultural infrastructure, science and technology, the scientific and technological content of agricultural products, and the total output of agricultural products. That is, the output of agricultural products has a clear impact on poverty alleviation, and without outside investment, it may be more difficult to eliminate poverty. Therefore, it is important to direct efforts to areas with low levels of investment. In poverty-alleviation work, fixed asset investment has a greater impact on poverty-alleviation performance than other forms of investment. Correspondingly, the per capita disposable income growth rate of farmers (a1) and the rate of increase in energy consumption savings per unit of GDP (a11) have no significant effect on poverty alleviation. From Table 6, among the five dimensions of the evaluation index, the ecological level has the greatest impact on poverty-alleviation performance, followed by the social level, while the life level has the least impact. Fuzzy DEMATEL Evaluation Method The decision-making trial and evaluation laboratory (DEMATEL) is a method that uses system theory and matrix tools to analyze system factors. Through the logical relationships between the various factors in the system and the direct influence matrix, the degree of mutual influence between the factors is measured, and the overall key factors are then identified (Altuntas & Dereli, 2015). In order to enhance the timeliness of the research, we selected first-line experts from relevant industries to score the 15 poverty-alleviation indicators in the 10 cities (districts) of Shaanxi Province. We used MATLAB R2017 software to assess the importance of the various factors affecting poverty-alleviation performance and the importance of the criteria. The language variables are shown in Table 7. Figure 1 shows the membership functions of the IT2FS linguistic terms. We combine interval type-2 fuzzy set (IT2FS) theory and the fuzzy DEMATEL method to study the complex relationships between different criteria and identify the main factors affecting the success of poverty-alleviation efforts. We integrate the expert evaluations and normalize the results to obtain the initial direct influence matrix. As for calculating the weights of the poverty-alleviation indicators, the expert evaluations are defuzzified, and the calculation process and results are shown in Table 9.
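Once the IT2FS expert scores have been defuzzified into a crisp direct-influence matrix, the total-relation matrix and the prominence/relation of each factor follow from the standard DEMATEL formulas. The sketch below illustrates that crisp step under this assumption; it does not reproduce the interval type-2 fuzzy arithmetic itself, and the example influence values are hypothetical.

```python
import numpy as np

def dematel(A: np.ndarray):
    """Crisp DEMATEL on a direct-influence matrix A (after defuzzification).

    Returns the total-relation matrix T plus the prominence (D + R) and
    relation (D - R) of each factor.
    """
    n = A.shape[0]
    X = A / max(A.sum(axis=1).max(), A.sum(axis=0).max())  # normalized direct influence
    T = X @ np.linalg.inv(np.eye(n) - X)                    # total relation: X (I - X)^-1
    D, R = T.sum(axis=1), T.sum(axis=0)                     # influence given / received
    return T, D + R, D - R

# Hypothetical 4-factor direct-influence matrix (0 = no influence, 4 = very high).
A = np.array([[0, 3, 2, 1],
              [1, 0, 3, 2],
              [2, 1, 0, 3],
              [1, 2, 1, 0]], dtype=float)
T, prominence, relation = dematel(A)
print(np.round(prominence, 2), np.round(relation, 2))
```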
TOPSIS Method The TOPSIS method (Si & Sun, 2011) normalizes the original data. The method neglects the interaction between the indicators so as to measure the differences accurately and objectively and thereby reflect the essential situation. Assume that there are m items (a finite number of items) and n conditions, and that the expert evaluates the jth attribute of the ith target as x_ij, giving the initial judgment matrix V. Data normalization: in formula (8), "original" refers to the value of the original indicator that has not been normalized; a high-quality (benefit) indicator is one for which a larger value is better, and a lower-quality (cost) indicator is one for which a smaller value is better. Six indicators are located between the high and low values. These are normalized to obtain matrix Z. This matrix is used to identify the most and least effective practices. The optimal solution consists of the maximum values of the above-defined schemes, and the worst solution consists of the minimum values. These values are denoted as Z+ and Z−, and Z+ and Z− form new vectors. We then calculate the Euclidean distances D_i+ and D_i− between each evaluation object and Z+ and Z−, respectively. The proximity of each evaluation object to the optimal solution, C_i, is calculated next. The closer C_i is to 1, the better the evaluation object is. The objects are sorted according to the degree of proximity C_i to form a decision basis. The calculation process using the TOPSIS method for the decision matrix (Table 9) is as follows. First, we combine the normalized decision matrix (Table 9) with formulas (8) and (9) to obtain the matrix Z (Table 10). Second, we determine Z+ and Z− according to formula (10), as shown in Table 11. Third, following formula (11), we obtain D_i+ and D_i− as shown in Table 12. Fourth, by substituting values into formula (12), we calculate C_i and sort the results as shown in Table 13. The final result from the combined fuzzy DEMATEL and TOPSIS methods is that the higher the calculated C_i value, the higher the poverty-alleviation performance of the city, and vice versa. Fifth, we grade the poverty-alleviation performance of Shaanxi Province by employing the Kapetanios, Shin, and Snell (KSS) test using SPSS 20.0 software. The estimated value falls within the 95% confidence interval (subject to a normal distribution), with a standard deviation (s) of .07 and a mean (m) of .49. According to the definition of the normal distribution, a ratio below (m − 0.44s) is set as a low performance level for poverty alleviation in the urban areas of Shaanxi Province, a ratio above (m + 0.44s) is set as a high performance level, and a ratio located between the high and low thresholds is defined as a medium performance level, as shown in Table 14. Finally, the results of the performance evaluation for the various cities' poverty-alleviation efforts are shown in Table 15.
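The TOPSIS steps above (ideal and anti-ideal solutions Z+ and Z−, Euclidean distances D_i+ and D_i−, and closeness C_i) can be summarized in a few lines of code. The sketch below assumes an already-normalized (and, if desired, weighted) decision matrix and treats the benefit/cost split of the indicators as a user-supplied input; the random example data are not the paper's Table 9 values.

```python
import numpy as np

def topsis(Z: np.ndarray, benefit: np.ndarray) -> np.ndarray:
    """TOPSIS closeness coefficients C_i for a normalized decision matrix Z.

    Rows are evaluation objects (cities), columns are criteria. `benefit`
    marks criteria for which larger values are better; cost criteria use the
    column minimum as the ideal value.
    """
    z_best = np.where(benefit, Z.max(axis=0), Z.min(axis=0))   # optimal solution Z+
    z_worst = np.where(benefit, Z.min(axis=0), Z.max(axis=0))  # worst solution Z-
    d_best = np.linalg.norm(Z - z_best, axis=1)                # distance D_i+
    d_worst = np.linalg.norm(Z - z_worst, axis=1)              # distance D_i-
    return d_worst / (d_best + d_worst)                        # closeness C_i, higher is better

# Example: 10 cities x 15 normalized criteria, all treated as benefit criteria.
Z = np.random.rand(10, 15)
C = topsis(Z, benefit=np.ones(15, dtype=bool))
ranking = np.argsort(-C)   # cities sorted from best to worst performance
print(np.round(C, 3), ranking)
```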
Discussion In this section, we discuss the poverty-alleviation performance-evaluation index developed using the TOPSIS method and the DEMATEL model to assess the poverty-alleviation performance of the 10 cities (districts) in Shaanxi Province. The evaluation shows variation in the indicators and in the comprehensive performance of the different urban areas. We can see that only three urban areas achieve a relatively high level of poverty-alleviation performance, and the overall poverty-alleviation performance of Shaanxi Province needs to be improved. Even under the same conditions of poverty, the effects of poverty alleviation may differ. These efforts are greatly affected by factors such as being at different levels of economic development and having different social conditions. Generally speaking, the poverty-alleviation effect is better for economically developed regions than for underdeveloped regions. The poverty-alleviation efforts in Shaanxi Province need to be further improved and optimized. According to the principle of the normal distribution, the performance of poverty-alleviation efforts in Shaanxi Province is divided into three levels: low, medium, and high. Xianyang City, Weinan City, and Ankang City have the highest level of performance, Xi'an City and Baoji City are at a medium level, and Tongchuan City and Yan'an City are at a relatively low level. At the same time, the 2017 assessment of GDP growth in the various urban areas of Shaanxi Province (2017 Shaanxi GDP rankings, 2018) showed that the GDP growth rate of Tongchuan City and Yan'an City was 7.6%, the lowest level in the province. This also confirms that the poverty-alleviation performance of the two cities is not good. The list of poverty-stricken counties in Shaanxi Province in 2017 (Table 16) is used for a comprehensive analysis of the specific situation of poverty alleviation in Shaanxi Province. From Table 16, we see that Hanzhong, Yulin, and Ankang are the three most impoverished areas in the province, but these three cities have better poverty-alleviation performance, namely middle and high. In 2017, Hanzhong City proposed "eight batches" of preferential poverty-alleviation policies and implemented its preferential poverty-alleviation approach that targeted the protection of the poorest. The policies focused on relocation, education and poverty alleviation, industry, entrepreneur-based employment, poverty-stricken housing renovation, ecological compensation, and health. As shown in Table 16, the greatest number of poverty-stricken provincial counties are in Ankang City, but a combination of planning and research has led to the performance of poverty alleviation in Ankang City being at a high level. In 2017, the GDP growth rate for Ankang City was the highest in the province, up 10.5% year-on-year (2017 Shaanxi GDP rankings, 2018). In 2017, Ankang promoted the scientific quality, production skills, and industrial development of poor farmers by introducing new methods of farming, the latest technologies, and new models. In addition, substantial efforts were made concerning the endogenous motivation of the poor, their "hematopoietic" (self-development) capabilities, and boosting self-confidence. The ability to develop and help the poor by accelerating the pace of poverty alleviation has contributed to the fight against poverty in Ankang City.
The fuzzy evaluation reveals that the poverty-alleviation performance of Yan'an and Tongchuan is relatively poor. Of these two cities, the C_i value of Yan'an City is 0.34, which is much lower than the low-performance threshold of 0.4592 in the evaluation index constructed in this paper. A detailed analysis of poverty alleviation in Yan'an City is carried out next. The first part of the entropy method generates the weight of each indicator and its overall contribution to poverty alleviation. The five dimensions of poverty alleviation are ranked in order as follows: ecological level, social level, developmental level, economic level, and the level of life. A specific analysis is made of the proportion of indicators in each dimension. At the ecological level, the output level of agricultural products in Yan'an City is the lowest among the 10 urban districts selected, accounting for only 9.29% of the highest output, that of Hanzhong City in Shaanxi Province, and showing negative growth of 20.02 from 2016 to 2017. The total power level of agricultural machinery in the same dimension is also low. The levels of total postal and telecommunications services and the number of students in ordinary secondary schools are relatively low. Therefore, it is recommended that Yan'an City increase its investment in fixed assets. The government should coordinate and promote poverty-alleviation work when allocating fixed assets across departments. At the same time, at the living level, the number of Internet broadband users should be increased. In 2016, Shaanxi Province included a "Special Plan for Poverty Alleviation and Broadband in Rural Areas" as a key part of poverty-alleviation projects in the province. A special fund was allocated for poverty alleviation in the provinces and municipalities, and poor households are able to purchase basic cable TV services and WIFI hotspots. By 2017, Internet broadband users in Yan'an City accounted for 4.16% of the province's users, and the city accounted for 7.62% of the province's growth. Yan'an City should still increase investment in this area. Conclusions and Policy Recommendations This paper comprehensively evaluates the performance of precision poverty alleviation in Shaanxi Province with a multidimensional poverty evaluation index system. Based on the statistical yearbook data of Shaanxi Province from 2016 to 2017, the multidimensional poverty indicators and their weights are studied using the entropy weight and fuzzy evaluation methods. We have drawn the following insights from the evaluation of precision poverty alleviation. First, the multidimensional measurement of poverty-alleviation performance is a scientific, effective, and comprehensive method of evaluation, which can be applied to measure poverty at the macro and micro levels. Second, we conclude that it is important to rely on more than a single indicator or dimension of poverty to identify poverty-stricken households; a household's poverty status should be assessed according to the five aspects of economy, development, society, ecology, and living ability. It is for this reason that, in the multidimensional survey of poverty alleviation in Shaanxi Province, we found that the impact of different indicators in the same dimension on poverty-alleviation performance is also very different. Therefore, a single indicator may overestimate or underestimate the impact of the dimension on poverty alleviation. Finally, dynamic analysis should be used to adjust the poverty file-registration database in a timely manner, to avoid the waste of resources and give full play to the maximum benefits of poverty-alleviation funds.
Based on the above results, we make the following policy recommendations for poverty alleviation. First, to strengthen investment in poverty alleviation and development, it is important to support the coordination of the province's strengths and poverty-alleviation investment by setting up an industrial development fund. For example, on May 3, 2018, the ''International Agricultural Development Fund Loan Shaanxi Rural Characteristic Industry Development Project'' declared by Shaanxi Province was approved by the UN IFAD Board of Directors and received a loan of US$72 million from the United Nations International Fund for Agricultural Development (Shaanxi, 2018). The project focused on the development of technology linkages in various economic sectors of the poverty-alleviation industry, the allocation and optimization of basic shared facilities, the control and support of related projects, and impetus and support for precise poverty alleviation in Shaanxi Province.

Second, superior industries and a mechanism of interest linkages should be developed by focusing on improving the economic base of targeted areas, relying on key projects, and giving priority to the development of featured industries. Moreover, we encourage leading enterprises to cooperate with poverty-stricken areas to create new products, brands, and production bases. For example, in October 2017, Weinan City had developed 428 agricultural enterprises and 1,646 family farms at the municipal level and above. In addition, Weinan City actively promoted the policy of regional public branding, and the emergence of the fruit industry accelerated the development of the regional economy.

Third, to enhance the welfare of urban and rural residents, it is necessary to closely integrate urban and rural development, promote integrated construction, optimize levels of infrastructure service, strengthen population agglomeration capabilities, improve traffic conditions, and broaden employment paths. For example, Ankang City has established a health insurance network, focusing on telemedicine, contracting services, and other measures to reduce the number of people suffering from poverty due to illness. Similarly, Xianyang City took the initiative to help the poor, allocated funding for deserving students, and reduced the school-dropout rate caused by poverty.

Finally, the concept of ''green development, ecology, and enriching the people'' should be implemented in Shaanxi Province, which was the hardest-hit area for soil erosion in China. Shaanxi Province has already made great efforts to improve the situation. Ecology-based poverty alleviation in Shaanxi Province should be pursued by focusing on increasing green resources, improving ecological carrying capacity, identifying the right direction for the development of a green industry as a pillar industry, coordinating the development of a poverty-alleviation mechanism, and achieving a win-win situation for poverty alleviation and ecological civilization construction.

Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Table 1. Selection of Indicators. Table 6. The Average Level of Weights at Five Levels. Table 4. Multidimensional Indicator Entropy Value e j. Table 5. Multidimensional Indicator Entropy Weight w j. Table 7. Language Variables for Assessing Relevance Criteria. Table 13. C i Values and Sequences. Table 15.
Results of Poverty-Alleviation Performance Evaluation. The first part of the entropy method generates the weight of each indicator and its overall contribution to poverty alleviation. The five dimensions of poverty alleviation are ranked in order as follows: ecological level, social level, developmental level, economic level, and the level of life. A specific analysis is made of the proportion of indicators in each dimension.
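The entropy-weighting step referred to here (the e j values of Table 4 and the w j weights of Table 5) follows a standard formula, sketched below on a hypothetical indicator matrix; the small constant added inside the logarithm is a numerical convenience, not something specified in the paper.

```python
import numpy as np

# Hypothetical matrix: rows = evaluated units (cities), columns = indicators.
X = np.array([
    [0.82, 0.64, 0.71, 0.20],
    [0.45, 0.58, 0.39, 0.55],
    [0.67, 0.72, 0.55, 0.80],
    [0.30, 0.41, 0.28, 0.35],
])
n, m = X.shape

# Proportion of each unit under each indicator.
P = X / X.sum(axis=0)

# Entropy e_j of each indicator; the 1/ln(n) factor keeps e_j within [0, 1].
eps = 1e-12                                  # avoids log(0) for zero proportions
e = -(P * np.log(P + eps)).sum(axis=0) / np.log(n)

# Degree of divergence and entropy weights w_j.
d = 1.0 - e
w = d / d.sum()

print("entropy values e_j:", np.round(e, 4))
print("entropy weights w_j:", np.round(w, 4))
```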
6,364.4
2023-07-01T00:00:00.000
[ "Economics", "Agricultural and Food Sciences" ]
Evolution of morphological and climatic adaptations in Veronica L. (Plantaginaceae) Perennials and annuals apply different strategies to adapt to the adverse environment, based on ‘tolerance’ and ‘avoidance’, respectively. To understand lifespan evolution and its impact on plant adaptability, we carried out a comparative study of perennials and annuals in the genus Veronica from a phylogenetic perspective. The results showed that ancestors of the genus Veronicawere likely to be perennial plants. Annual life history of Veronica has evolved multiple times and subtrees with more annual species have a higher substitution rate. Annuals can adapt to more xeric habitats than perennials. This indicates that annuals are more drought-resistant than their perennial relatives. Due to adaptation to similar selective pressures, parallel evolution occurs in morphological characters among annual species of Veronica. INTRODUCTION Flowering plants have repeatedly evolved a shorter life history of less than a year, with a record of less than three weeks from germination to seed set (Cloudsley-Thompson & Chadwick, 1964). The evolution of annual life cycles is combined with a monocarpic habit (i.e., death of the plant after first and only reproduction). Such plants are called annuals irrespective of considerable differences in their life histories (Mortimer, Hance & Holly, 1990) related to different ecology and habitats. The independent evolution of annuality in more than 100 different families from more than 30 orders of angiosperms (sensu The Angiosperm Phylogeny Group, 2016) and often even multiple times independently among closely related species (e.g., Albach, Martinez-Ortega & Chase, 2004;Andreasen & Baldwin, 2001;Hellwig, 2004;Jakob, Meister & Blattner, 2004;Kadereit, 1984) has made characterization of the annual habit difficult. Furthermore, the necessity to complete the life cycle within one season puts enormous constraints on plants that evolutionarily resulted in reduction in size to reach reproductive age faster and more reliably. Such a scenario has led to convergent evolution in several traits in annuals, especially a selfing breeding system but also a variety of other morphological, physiological, karyological and genomic traits (Silvertown & Dodd, 1996). This widespread convergence has given rise to misconceptions about the evolution of annuals, particularly in cases when a rigorous phylogenetic hypothesis is lacking and comparative methods are not employed (Albach, Martinez-Ortega & Chase, 2004). Several environmental factors that are not mutually exclusive can cause circumstances under which annuals have advantages over perennials, and most of these are related to the ability of annuals to survive unfavorable periods as seeds. Proposed factors include seasonal stress such as drought (Macnair, 2007;Whyte, 1977), heat (Evans et al., 2005), frost (Tofts, 2004;Whyte, 1977), unpredictable environment (Stearns, 1976), grazing/seed predation (Klinkhamer, Kubo & Iwasa, 1997;Vesk, Leishman & Westoby, 2004), flooding (Kadereit, Mucina & Freitag, 2006), limited maternal resources (Hensel et al., 1994), low competition (Lacey, 1988) and escape from pathogens over time (Clay & Van der Putten, 1999;Thrall, Antonovics & Hall, 1993). Even anthropogenic selection factors such as regular mowing and cultivation techniques may induce annual life history (Baker, 1974;Hautekèete, Piquot & Van Dijk, 2002). 
Therefore, it is often unclear whether evolutionary change is associated with annual life history per se or whether it is a reaction to a specific environmental condition. Advances in phylogeny reconstruction and comparative analyses allow investigation of the processes and the pattern of life history variation in more detail. Whereas a number of taxa have been analyzed in detail to infer the number of origins of annual life history and infer climatic circumstances of the shifts (e.g., Datson, Murray & Steiner, 2008;Turini, Bräuchler & Heubl, 2010) few employed rigorous comparative methods to analyze these shifts in life history. For example, Drummond et al. (2012) demonstrated increased speciation rates in derived montane perennial clades of Lupinus compared to lowland annuals. Ogburn & Edwards (2015) found perennials occupying cooler climatic niches than related annuals. Veronica is a good model system to investigate this issue since annual life history has been shown to have evolved with convergent morphological characteristics multiple times in the same geographical region (Albach, Martinez-Ortega & Chase, 2004). Veronica comprises about 450 species and is the largest genus in the flowering plant family Plantaginaceae (Albach & Meudt, 2010). Most species-including all annuals-are distributed in the Northern Hemisphere but there is also an additional prominent radiation in the Australasian region (but without annuals). Life forms include herbaceous annuals or perennials, and also shrubs or small trees. About 10% of Veronica species are annuals, a life history which has originated at least six times independently in the genus (Albach, Martinez-Ortega & Chase, 2004). Chromosome numbers, phytochemistry and DNA sequence data support the polyphyly of annuals in the genus (Albach & Chase, 2001;Müller & Albach, 2010) However, despite the fact that many species of Veronica are widespread in accessible regions of the world, climate data has thus far not been included in any analysis of the genus. Also, morphological characters were mostly mapped on phylogenetic trees (e.g., Albach, Martinez-Ortega & Chase, 2004) but not included in a comparative analysis. Thus, crucial information to understand the evolution of the genus has, thus far, been excluded from analyses. In this study, we implemented a comparative analysis of morphological and climate data using phylogenetic methods to address the following two questions: (1) What convergent morphological trends are displayed in annuals? (2) Are there climatic factors that may favor annual life history? By answering these questions, we aim to expand our understanding of the evolution of life history and its impact on the adaptability of plants. More specifically, we address the hypothesis that annual life history and selfing evolved in parallel in adaptation to drought. Therefore, we tested a correlation of life history with a number of characters, such as corolla diameter, known to be correlated with selfing in Veronica (Scalone, Kolf & Albach, 2013) and contrasted these with characters considered unrelated to mating system, such as leaf length. For environmental parameters, we specifically tested a number of bioclimatic parameters associated with precipitation and temperature to test the alternative hypothesis that annual life history is related to hot temperature. By including a range of morphological and climatological data, we want to infer more exactly, which characters are associated with the annual-selfing-syndrome. 
MATERIAL AND METHODS A total of 81 individuals, representing 81 species and all 12 subgenera of Veronica, were used to establish the phylogenetic tree in this study. Of these, sequences from 67 species were downloaded from GenBank from previous studies (Albach & Meudt, 2010), whereas sequences from 14 species, which were collected in Xinjiang Province of China, were newly generated for this study (see Table S1). Six individuals of five other genera of Veroniceae (Lagotis, Picrorhiza, Wulfeniopsis, Wulfenia, and Veronicastrum) were designated as outgroups. Genomic DNA extraction and purification were carried out using commercial kits according to the manufacturer's instructions (D2485-02, OMEGA bio-tek). In accordance with the methods of Albach & Meudt (2010), we carried out PCR, sequencing and phylogenetic tree reconstruction. DNA sequences of four regions were PCR-amplified, including the nuclear ribosomal internal transcribed spacer region (ITS) with primers ITSA (Blattner, 1999) and ITS4 (White et al., 1990), plastid DNA (cpDNA) trnL-trnL-trnF with primers c and f (Taberlet et al., 1991), rps16 with primers rpsF and rpsR2 (Oxelman, Lidén & Berglund, 1997), and psbA-trnH with primers psbA (Sang, Crawford & Stuessy, 1997) and trnH (Tate & Simpson, 2003). A PCR program of 95 °C for 2 min, 36 cycles of: 95 °C for 1 min, 50-55 °C for 1 min, and 72 °C for 1.5-2 min, and finally 72 °C for 5 min and a 10 °C hold, was used for all markers. DNA sequencing was performed by Sangon Biotech Co., Ltd (Shanghai, PR China). Bayesian inference methods were used to analyze the combined data set. Best fitting substitution models for the datasets were inferred using jModelTest 2.1.7 (Darriba et al., 2012). The Bayesian inference tree was built using MrBayes 3.2.5 (Ronquist et al., 2012) with the GTR+ model using the Markov chain Monte Carlo (MCMC) for 1,000,000 generations with a burn-in of 250,000. The posterior probability (PP) was used to estimate nodal robustness. The stationarity of the runs was assessed using Tracer version 1.6 (Rambaut et al., 2014). We approximated divergence times using the function chronopl in the R package ''ape'' (Paradis et al., 2015). We obtained morphological traits from field measurements and from various floras, such as Flora of China (Hong & Fischer, 1998), Flora d'Italia (Pignatti, 1982), Flora of New Zealand (Allan, 1961), and the New Zealand Plant Conservation Network (http://nzpcn.org.nz/default.aspx). Plant traits were coded for each species according to characters and character states used by Saeidi-Mehrvarz & Zarre (2004). In total, 9 binary characters related to resource acquisition and reproductive characteristics were taken into consideration (character states and the scoring matrix are shown in Tables S2 and S3). We obtained GPS latitude/longitude data from the GBIF website (http://www.gbif.org/) for up to 500 occurrence records for each species using the function occ in the R package ''spocc'' (Chamberlain, Ram & Hart, 2016). Invalid, low-accuracy or duplicate data were removed. GPS data of species collected by us were also added to the analysis. Bioclimatic variables were obtained for each of the geographical coordinates from WorldClim (www.worldclim.org) and processed using ArcGIS version 10.0. Climate data from each locality were acquired using the toolbox function ''Extract Values to Points'' and average values for each bioclimatic variable were calculated for each species.
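As a rough, open-source counterpart to the ArcGIS ''Extract Values to Points'' step just described, the snippet below samples WorldClim layers at occurrence coordinates and averages them per species; it assumes cleaned occurrence records in a CSV file and bioclimatic layers saved as GeoTIFFs, and the file and column names are illustrative rather than those used in the study.

```python
import pandas as pd
import rasterio

# Hypothetical inputs: cleaned occurrence records (species, lon, lat) and a few
# WorldClim bioclimatic layers downloaded as GeoTIFFs (file names are assumptions).
occ = pd.read_csv("veronica_occurrences.csv")        # columns: species, lon, lat
layers = {
    "bio5_max_temp_warmest_month": "wc2.1_10m_bio_5.tif",
    "bio14_precip_driest_month": "wc2.1_10m_bio_14.tif",
}

coords = list(zip(occ["lon"], occ["lat"]))
for name, path in layers.items():
    with rasterio.open(path) as src:
        # src.sample yields one array per coordinate; band 1 holds the variable.
        values = [v[0] for v in src.sample(coords)]
    occ[name] = values

# Average each bioclimatic variable per species, as done before the comparative tests.
species_means = occ.groupby("species")[list(layers)].mean()
print(species_means.head())
```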
Drought and heat can affect annual and perennial relative fitness (Macnair, 2007;Whyte, 1977;Evans et al., 2005;Pérez-Camacho et al., 2012), and 7 related bioclimatic variables were selected for analysis (GBIF localities and corresponding climate data, average data were shown in Tables S4 and S5). We used the function ace in the R package ''ape'' (Paradis et al., 2015) to estimate ancestral character states and the associated uncertainty for life history. Additionally, we also calculated phylogenetic signal using the function phylo.d in the package ''caper'' (Orme et al., 2012). The R package ''iteRates'' was used to implement the parametric rate comparison test and visualize areas on a tree undergoing differential substitution (Fordyce, Shah & Fitzpatrick, 2014). We have conducted phylogenetic comparative analysis. The function binaryPGLMM in the R package ''ape'' was used to perform comparative tests of morphological traits between annual and perennial plants. We tested climate data differences between annual and perennial plants using the function aov.phylo in the package ''geiger' ' (Harmon et al., 2008). RESULTS The phylogenetic relationships of Veronica from Bayesian inference of the four-marker dataset are shown in Fig. S1. The result of Bayesian phylogenetic analyses was assessed using Tracer with all ESSs >200 (after discarding a burn-in of 25%). The main clades of the phylogenetic tree were consistent with previous studies. The evolution and inferred ancestral life history in Veronica are shown in Fig. 1. Scaled likelihood of perennial life history at the root was 0.99. The D value as calculated in caper is a measure of phylogenetic signal in a binary trait, for which a value smaller than 0 indicates high correlation of the trait with phylogenetic differentiation and greater than 1 corresponds to a random or convergent pattern of evolution. The value of D for life history was −0.55, thus demonstrating relatively strong phylogenetic conservatism. This implies that lifespan is a relatively conservative trait and the change from perennial to annual, despite seven origins in the genus, is not a frequent occurrence. Substitution rates (as measured by branch lengths) differ among clades within Veronica (Fig. 2). In general, clades with more annual species have faster substitution rates. The only significant increase in substitution rates subtends the clade of annual subgenera Cochlidiosperma and Pellidosperma, whereas most of the significant decreases in There are obvious differences in some morphological traits between annual and perennial plants ( Table 1). Analysis of the morphological traits measured here shows that perennials have larger leaves, longer stamens and larger corollas, whereas annuals tend to have larger bracts and capsules with deeply emarginated apices. Differences in habitats between annual and perennial plants are summarized in Table 2. The results demonstrated that annuals can withstand higher temperature (in warmest month). In terms of precipitation, there are also significant differences in precipitation of driest month. Perennials are found in areas of higher precipitation compared to annuals. DISCUSSION The evolution of annual life history is a common evolutionary transition in angiosperms having occurred in more than 100 families. In angiosperms, the perennial habit is believed to be the ancestral condition (Melzer et al., 2008). 
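The ancestral-state estimation reported above (a scaled likelihood of 0.99 for a perennial root) was obtained with the ace function of the R package ''ape''; as a rough, self-contained illustration of the underlying idea, the toy sketch below reconstructs ancestral life history on an invented five-species tree with Fitch parsimony instead of maximum likelihood. The topology and tip states are made up and are not taken from the Veronica dataset.

```python
# Toy illustration of ancestral-state reconstruction for a binary life-history trait
# (annual vs. perennial) using Fitch parsimony on a made-up five-tip tree. The study
# itself used maximum-likelihood reconstruction (ape::ace in R); this sketch only
# shows the idea of propagating tip states toward the root.

tree = ("root", [                     # each node: (name, list of children)
    ("cladeA", [("sp1", []), ("sp2", [])]),
    ("cladeB", [("sp3", []), ("cladeC", [("sp4", []), ("sp5", [])])]),
])
tip_states = {                        # hypothetical observed life histories
    "sp1": {"perennial"}, "sp2": {"perennial"},
    "sp3": {"perennial"}, "sp4": {"annual"}, "sp5": {"annual"},
}

def fitch(node):
    """Bottom-up pass: return (state set, number of implied changes) for a node."""
    name, children = node
    if not children:
        return tip_states[name], 0
    sets, changes = [], 0
    for child in children:
        s, c = fitch(child)
        sets.append(s)
        changes += c
    inter = set.intersection(*sets)
    if inter:
        return inter, changes
    return set.union(*sets), changes + 1   # no shared state -> one extra change

root_states, n_changes = fitch(tree)
print("state set at the root:", root_states)
print("minimum number of life-history changes:", n_changes)
```

On this toy tree the root resolves to perennial with a single inferred shift to annual life history; the likelihood-based analysis used in the study additionally quantifies uncertainty at each node, which parsimony does not.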
Nevertheless, secondary evolution of perennial life history from annual herbaceous ancestors has been shown to occur in certain environments, such as islands (Bohle, Hilger & Martin, 1996; Kim et al., 1996) and mountains (Karl et al., 2012). Here, we analyzed a number of hypotheses regarding the evolution of annual life history in more detail based on comprehensive information on morphology and ecological data based on an explicit phylogenetic hypothesis. (Figure caption: The blue nodes mean that substitution rates of that clade are faster than that of the remainder of the tree, whereas red nodes express the opposite. The sizes of the colored nodes indicate the likelihood of rate-shifts. The asterisk means that a rate-shift is significant. The results are based on limited sampling (<20%).) While many of these hypotheses were inferred in previous studies, modern comparative analytical tools allow us to check these hypotheses in more detail. In this study, the ancestral condition of the genus Veronica has been inferred to have been perenniality, and the annual life history has evolved multiple times with a single reversal in V. filiformis of the Caucasus Mountains, consistent with previous conclusions (Albach, Martinez-Ortega & Chase, 2004). Overall, we inferred seven origins of annuals. An additional three origins of annuality (in V. hispidula, V. peregrina and V. anagalloides, all subgenus Beccabunga; Albach, Martinez-Ortega & Chase, 2004; Müller & Albach, 2010) are not included in the analysis here. The seven to ten independent shifts between life histories are associated with considerable morphological diversity among annual species. However, certain characters are characteristic for annuals (the annuality syndrome) associated with the rapid completion of the life cycle. For example, the generation-time hypothesis, which assumes that mutations are mostly accumulated during recombination, states that organisms that reproduce faster, such as annuals, also have more DNA substitutions over time (Page & Holmes, 2009). Results of this study demonstrate that clades including annuals have a higher substitution rate and are, thus, consistent with this theory and previous analyses for Veronica (Müller & Albach, 2010), although this is significant only for the oldest clade of annuals (V. subg. Cochlidiosperma (Rchb.) M. M. Mart. Ort. & Albach). On the other hand, the perennial clade with the lowest substitution rate (V. subg. Pseudoveronica, see above) is also the one with the highest diversification rate (Meudt et al., 2015). However, the impact of life history transformation is not restricted to substitution rate. Two of the correlations detected are most likely associated with the smaller stature of annuals. These are the larger leaves of perennials and the larger bracts in annuals (especially in subgenera Pocilla and Cochlidiosperma) that compensate for the reduced number and size of stem leaves in smaller plants. Also, reduction to a single, terminal inflorescence is likely to be a consequence of small size but may also be related to differences in breeding system. Other inflorescence characters are more clearly associated with differences in breeding system between annuals and perennials. Estimates for selfing among angiosperms as a whole are 25-30% (Barrett & Eckert, 1990), with estimates for annuals alone going up to 50% (Hamrick & Godt, 1996).
The association between annual life history and selfing has been known for some time (Henslow, 1879) and has also been thoroughly discussed in the literature (e.g., Barrett, Harder & Worley, 1996;Stebbins, 1957). Annual species invest fewer resources into their sexual organs (e.g., number of lateral inflorescences; density of inflorescence, corolla size) than perennials (although not necessarily relative to overall size of the plants). Such changes are likely to be associated with parallel changes in life history and breeding system. A larger corolla and longer stamens have previously been demonstrated to be correlated with an outcrossing breeding system in the genus (Scalone, Kolf & Albach, 2013). Surprisingly, a longer style is here not associated with perenniality as inferred by Scalone, Kolf & Albach (2013). In contrast, we infer that selfing is facilitated by lowering the stigma below the anthers through emargination of the capsule. By that means, the stigma is removed from the anthers without shortening the style. Other characters that may have an influence on breeding system in perennials is the trend towards tubular corollas, which may contain more nectar, and the longer pedicels in perennials that allows better presentation of the flower. Thus, our analysis supports the notion that outcrossing is associated with perennial life history in Veronica (Albach & Greilhuber, 2004). Such a correlation in the evolution of annual life history is often argued to be due to reproductive assurance in annuals, depending on reproduction in their single season of flowering (Busch & Delph, 2012). However, to understand the basis for this association, one needs to move beyond such correlation and understand the ecological circumstances of transitions in life history. Several such circumstances have been inferred to be responsible for the evolution of annual life history (see 'Introduction'). Here, we inferred higher temperature, higher temperature variation and lower precipitation to be the characteristic environmental conditions for annuals in comparison with perennials. This is consistent with previous suggestions that inferred drought, heat or unpredictable environment are responsible for the evolution of annual life history (Evans et al., 2005;Stearns, 1976;Whyte, 1977). Thus, despite the multiple origins of annuals in the genus, annual clades in Veronica may have reacted to the same climatic circumstances favoring a change in life history. Although we did not specifically test for differences among clades of annuals, markedly different climatic circumstances in one clade of annuals should have led to differences between inferences based on phylogenetically informed and non-phylogenetic analyses. Consequently, it is likely that parallel evolution in different groups of Veronica led to the evolution of annual life history and a characteristic set of related characters. Parallel evolution is more likely if occurring in the same region at the same time because of the same selection pressure. Based on the molecular dating of Veronica in Meudt et al. (2015), however, annual lineages originated over a range of dates starting in the Miocene, similar to other Mediterranean annuals inferred to have originated in response to the evolution of the Mediterranean climate evolution and the Messinian salinity crisis (Fiz, Valcárcel & Vargas, 2002). With the exception of V. peregrina, not included here, all groups of annual Veronica originated from ancestors in the Mediterranean and southwest Asia. 
Thus, progressing aridification may have spurred evolution of annual life history at different times in the same region in different groups of Veronica. During aridification, competition from related species decreased, and environmental filtering became a major limiting effect on species. Under such circumstances, the avoidance strategy of annuals by drought-tolerant seeds is favored by natural selection (De Bello, Lepš & Sebastia, 2005). However, this hypothesis will need to be investigated further in the different clades of annual Veronica through more detailed study of character evolution and ancestral habitat estimation. ADDITIONAL INFORMATION AND DECLARATIONS Funding This work was supported by the West Light Talents Cultivation Program of Chinese Academy of Sciences (XBBS201202) and the National Natural Science Foundation of China (Grant No. 31400208). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Data Availability The following information was supplied regarding data availability: All the data acquired from the GBIF website (http://www.gbif.org/) and the GPS data collected by the authors are contained in Table S4. Table S5 is a summary table with the average data used for each of the 81 species. Supplemental Information Supplemental information for this article can be found online at http://dx.doi.org/10.7717/peerj.2333#supplemental-information.
4,620
2016-08-16T00:00:00.000
[ "Biology", "Environmental Science" ]
NKT cells in the antitumor response: the β version? NKT cells recognize glycolipids presented by CD1d-expressing antigen-presenting cells (APCs) and include type I NKT cells with antitumor function and type II NKT cells, which have been reported to suppress the antitumor response. Some type II NKT cells recognize sulfatide, a glycosphingolipid with a sulfate modification of the sugar. Type I NKT cells recognize different glycosphingolipids. In this issue of the JCI, Nishio and colleagues showed that APCs could process sulfatide antigens, analogous to protein processing for peptide-reactive T cells. Antigen processing in lysosomes removed sulfate to generate a glycosphingolipid that stimulated type I NKT cells and thereby turned an antigen with no antitumor activity into one that not only stimulated type I NKT cells but also stimulated antitumor responses. These findings may extend to the development of glycolipid antigens that could stimulate anticancer responses via antigen processing by APCs. Type I and type II NKT cells recognize different glycolipids NKT cells, which are T lymphocytes that are distinct from innate immune NK cells, have been tested extensively for anticancer responses on the basis of the initial positive findings that they prevent cancer metastases in mice (1). NKT cells are defined generally as T lymphocytes that recognize glycolipid antigens presented by CD1d, a nonpolymorphic antigen-presenting molecule (2). Because CD1d is not polymorphic, agents that stimulate NKT cells should activate these lymphocytes in all individuals, and not be limited by the vast polymorphisms in HLA antigen-presenting molecules that define the responses of other T cells. In this issue of the JCI, Nishio and colleagues define a cellular mechanism in DCs that converts a synthetic glycosphingolipid antigen into a more effective anticancer agent (3) (Figure 1).
Structural studies of the NKT cell antigen receptor (TCR) in a complex with CD1d and antigen reveal a shape in which the lipid chains are buried within the hydrophobic CD1d antigen-binding groove (4). The sugar sits at the surface of the glycolipid-CD1d complex, where it provides contact points with the NKT TCR (4). A complicating factor in analyzing the anticancer response of NKT cells, however, is that there are two categories, and only one of them, so-called type I NKT cells, has anticancer activity in mouse studies. The other NKT cell subset, type II NKT cells, is reported to be suppressive of the tumor response (5). Type I NKT cells express a limited TCR repertoire with an invariant TCRα chain (6). These cells recognize several types of glycolipids, mainly glycosphingolipids (GSLs) that are composed of a ceramide lipid with an α stereochemical linkage of the sugar to the lipid. This α-linked structure is found in some types of bacteria (7,8), while in contrast, most mammalian GSLs have a different stereochemistry, a β-linked sugar. GSL antigens with β-linked sugars minimally activate type I NKT cells, but can still activate them (9,10). Notably, the structural data indicate that the α-linkage of the sugar in GSL antigens is critical for the optimal fit with the type I NKT cell TCR containing an invariant α chain (4). Because strong antigen activation of mouse type I NKT cells by GSLs with α-linked sugars showed anticancer efficacy in mice, and these compounds activate human type I NKT cells (11), cancer clinical trials were initiated (12), and many trials followed. Although antitumor efficacy was limited in patients (13), attempts are underway to synthesize more effective GSLs for cancer treatment and as vaccine adjuvants for infectious diseases (14). Type II NKT cells exhibit a more diverse TCR repertoire than do type I NKT cells. Although studies suggest type II NKT cells can recognize different types of glycolipids (15), recognition of sulfatide, a GSL with a sulfated β-linked galactose sugar, has been reported to activate at least some type II NKT cells (16). Studies of NKT cell sulfatide reactivity have used natural sulfatides, which have the same carbohydrate moiety but complex mixtures of ceramide lipids. This natural heterogeneity could be important, however, because subtle changes in the ceramide lipid structure, such as the addition of one or more double bonds, or alteration of the hydrocarbon chain length, have been shown to lead to dramatic changes in the NKT cell immune response (17). Therefore, a key feature of the Nishio et al.
study was the use of synthetic sulfatide antigens with defined ceramide fatty acid chains, either with zero, one, or two double bonds, and immune assays using well-characterized immortalized mouse type I and type II NKT cell hybridomas. Lipid structure affects NKT cell activation The authors analyzed the reactivity of a type II NKT cell hybridoma and several type I NKT cell hybridomas to synthetic sulfatides in culture wells coated with soluble CD1d protein. Therefore, they could directly assess TCR reactivity in the absence of costimulatory or other signals from the antigen-presenting cells (APCs), and they confirmed that the structure of the lipid influenced TCR-mediated activation. Importantly, sulfatide C24:1, with a single unsaturated bond in the fatty acid, and C24:2, with two unsaturated bonds, activated type II but not the type I NKT cell hybridomas. The results were reversed, however, when the NKT cell hybridomas were stimulated with the synthetic sulfatide antigens that were cultured with bone marrow-derived DCs that expressed CD1d. In this case, only type I, not type II, NKT cells were stimulated. The stimulation of type I NKT cells was more effective with C24:2 than C24:1, although the structural basis for this remains unclear. These data suggest that the DCs altered or processed the sulfatide to change the NKT cell type they stimulated. Lysosomal antigen processing stimulates type I NKT cells CD1d is most often loaded with stimulatory antigens in lysosomal compartments. In lysosomes, aryl sulfatase A can cleave the sulfate from sulfatides to generate a GSL with a β-linked sugar, β-galactosyl ceramide (βGalCer). Nishio and colleagues showed that βGalCer, the presumptive product of aryl sulfatase A-mediated antigen processing, stimulated the type I, but not the type II, NKT cell hybridomas when it was cultured on CD1d-coated plates, although it was less stimulatory than a synthetic αGalCer counterpart. Additional experiments showed that either the inhibition of endosomal acidification, the inhibition of the aryl sulfatase A enzyme with sulfite, or stimulation with APCs deficient for the enzyme all abolished the ability of the synthetic sulfatide to stimulate type I NKT cell hybridomas. The response to βGalCer was not affected, ruling out nonspecific effects. Presumably, the lack of aryl sulfatase A activity would have allowed the DCs to stimulate the type II NKT cell hybridomas in the presence of C24:2, although this was not shown. Regardless, overall, the data are consistent with a model in which glycolipid antigen processing, meaning sulfate cleavage from the sulfatide GSL in lysosomes, changes antigen structure from sulfatide GSL to βGalCer and, consequently, antigen reactivity from type II to type I NKT cells.
Sulfatide antigen processing stimulates the anticancer response The reductionist approach undertaken by Nishio and colleagues has the weakness that only a few mouse T cell hybridomas were analyzed, but the authors followed up by showing that human type I NKT cells could be activated in vitro by C24:2. Antigen stimulation was decreased by inhibiting endosomal acidification, suggesting that stimulation of human type I NKT cells by C24:2 required antigen processing. Furthermore, in vivo experiments showed that C24:2 provided protection from metastases of the CT26 colon cancer to the lung, although C24:1, less stimulatory for type I NKT cells, did not have an effect on tumor nodules. The tumor protection by C24:2 injection was dependent on IFN-γ synthesis. Spleen and lung mononuclear cells from mice injected with C24:2 produced more cytokines, especially IFN-γ, than did those from mice injected with C24:1, and cytokine secretion was dependent on CD1d expression. Activation by C24:2 led to an increased number of type 1 conventional DCs (cDC1s), which are important for tumor immunity and production of IL-12. These are all features earlier shown to be important for the antitumor response by type I NKT cells (18). Mice deficient for Traj18, the single TCR Jα segment required for type I NKT cells, express CD1d and therefore also retain type II NKT cells in the absence of type I NKT cells. C24:2 or βGalCer injection did not reduce tumor metastases in these mice, confirming the requirement for processed C24:2 to induce the anticancer response by type I NKT cells. Although the processing of glycolipid antigens by APCs had been defined previously in a few studies (19), the work by Nishio et al. definitively shows that antigen processing can affect the type of glycolipid-reactive T cell that is stimulated, with corresponding effects on the antitumor response. C24:2 and βGalCer are not particularly strong stimulators of type I NKT cells, however, and are therefore unlikely to be important in unmodified form for human cancer immunotherapy. Still, the findings may lead to the design of more effective glycolipid antigens that DCs might process to provide an anticancer therapy through NKT cell stimulation. Related Article: https://doi.org/10.1172/JCI165281 Conflict of interest: MK is a member of the Scientific Advisory Board of Appia Bio Inc. MK and his spouse each own shares in the privately held diagnostic company Invivoscribe, which specializes in the diagnosis of hematologic tumors. Copyright: © 2024, Kronenberg et al. This is an open access article published under the terms of the Creative Commons Attribution 4.0 International License. Reference information: J Clin Invest. 2024;134(4):e177663. https://doi.org/10.1172/JCI177663.
Figure 1. APCs stimulate type I NKT cells and the antitumor response via lysosomal generation of a modified glycosphingolipid. APCs exposed to C24:2 generate βGalCer via lysosomal processing, which removes a sulfate. NKT cells recognize glycolipids presented by CD1d-expressing APCs. Notably, βGalCer stimulates type I NKT cells to have an antitumor response through the production of IFN-γ.
2,391.2
2024-02-15T00:00:00.000
[ "Biology", "Medicine" ]
A standardized non-instrumental tool for characterizing workstations concerned with exposure to engineered nanomaterials The French national epidemiological surveillance program EpiNano aims at surveying mid- and long-term health effects possibly related with occupational exposure to either carbon nanotubes or titanium dioxide nanoparticles (TiO2). EpiNano is limited to workers potentially exposed to these nanomaterials including their aggregates and agglomerates. In order to identify those workers during the in-field industrial hygiene visits, a standardized non-instrumental method is necessary especially for epidemiologists and occupational physicians unfamiliar with nanoparticle and nanomaterial exposure metrology. A working group, Quintet ExpoNano, including national experts in nanomaterial metrology and occupational hygiene reviewed available methods, resources and their practice in order to develop a standardized tool for conducting company industrial hygiene visits and collecting necessary information. This tool, entitled “Onsite technical logbook”, includes 3 parts: company, workplace, and workstation allowing a detailed description of each task, process and exposure surrounding conditions. This logbook is intended to be completed during the company industrial hygiene visit. Each visit is conducted jointly by an industrial hygienist and an epidemiologist of the program and lasts one or two days depending on the company size. When all collected information is computerized using friendly-using software, it is possible to classify workstations with respect to their potential direct and/or indirect exposure. Workers appointed to workstations classified as concerned with exposure are considered as eligible for EpiNano program and invited to participate. Since January 2014, the Onsite technical logbook has been used in ten company visits. The companies visited were mostly involved in research and development. A total of 53 workstations with potential exposure to nanomaterials were pre-selected and observed: 5 with TiO2, 16 with single-walled carbon nanotubes, 27 multiwalled carbon nanotubes. Among the tasks observed there were: nanomaterial characterisation analysis (8), weighing (7), synthesis (6), functionalization (5), and transfer (5). The manipulated quantities were usually very small. After analysis of the data gathered in logbooks, 30 workstations have been classified as concerned with exposure to carbon nanotubes or TiO2. Additional tool validity as well as inter-and intra-evaluator reproducibility studies are ongoing. The first results are promising. Introduction 1.1. EpiNano, the French surveillance program for workers potentially exposed to engineered nanomaterials The development of EpiNano surveillance program is conducted by the French Institute for Public Health Surveillance (Institut de Veille Sanitaire, InVS) at a joint request of the French Ministries of Health and of Labour [1,2]. EpiNano aims at surveying mid-and long-term health effects possibly related with occupational exposure to either carbon nanotubes or titanium dioxide (TiO 2 ) nanoparticles, aggregates and agglomerates in workers employed in the nanotechnology-related industrial or research and development facilities in France. EpiNano consists of a registry of workers likely to be exposed to engineered nanomaterials and a prospective epidemiological cohort study [3]. 
The protocol of the EpiNano program received approval from the French authority for privacy and individual rights protection (Commission nationale de l'informatique et des libertés, CNIL) for the next 20 years of follow-up. Carbon nanotubes and TiO 2 nanoparticles, aggregates and agglomerates were chosen as priority engineered nanomaterials based on the following considerations [2,3]: available toxicological data; quantities manufactured in France and projected for production development; the choice of France in the framework of the sponsorship program for the testing of engineered nanomaterials run by the Organisation for Economic Co-operation and Development (OECD); and social perception factors. Identification of workers eligible for EpiNano program Workers potentially exposed to carbon nanotubes or TiO 2 are identified using a 3-level approach [3]: 1. identification and selection of companies dealing with the corresponding engineered nanomaterials (based on compulsory declaration and questionnaires [3,4]), 2. an in-field company visit and identification of the workstations concerned with exposure to engineered nanomaterials, 3. identification of workers involved in jobs and tasks performed at workstations identified as concerned with exposure to engineered nanomaterials; these workers are invited into the program. Aim and methodological development To cope with the high variability of companies in terms of size, activity, industrial process and workforce size, as well as with operation and exposure conditions, standardization of the second step of the worker identification method was extremely important. Moreover, additional criteria the EpiNano method had to meet were: 1. non-instrumental assessment based on state-of-the-art methodology [5], 2. ease of use by EpiNano team members (i.e. epidemiologists and industrial hygienists not specialized in engineered nanomaterial exposure), 3. low cost and a user-friendly format, 4. coverage of all information necessary for workers' individual exposure assessment in upcoming epidemiological studies, and 5. potential usefulness for company occupational safety and health staff (managers and/or occupational physicians). A working group, Quintet ExpoNano, was created including national experts from the leading French institutes (the French institute for public health surveillance (InVS), the French institute for occupational health and safety (INRS), the Atomic energy commission (CEA), the French institute for industrial safety and environmental protection (INERIS), and the University of Bordeaux Segalen) specialized in nanoparticle metrology, industrial hygiene, occupational medicine, and epidemiology. The working group reviewed available methods and tools for in-field observations and inspections, measurement techniques and exposure measurement data, and compared their respective practices of in-field studies. The recommendations for characterizing potential emissions and exposure to aerosols released from nanomaterials in workplace operations published by INRS, INERIS and CEA [5] were respected. The method integrated the first three stages of the general five-stage procedure, the fourth and fifth stages being dedicated to a measurement campaign [5]. After six months of collaboration, a first version of the tool was proposed and tested in the field. After a series of additional format improvements and rewordings, the working group reached agreement on a final version of the tool.
This version was used on 10 workplaces during four months, before its final validation [6]. Tool description The method consists of identifying within each company the workrooms and activities that work with engineered nanomaterials in order to identify the workstations possibly causing exposure to them and to assess this potential exposure semi-quantitatively. It is based on a technical inspection of the plant (in-field visit), interviews with workroom supervisors, and observation of the activity at each workstation. This inspection is based on the Onsite technical logbook [4,6]. This tool enables evaluators to standardize the in-field observation and data collection. Two versions of the Onsite technical logbook, in French and in English, are available [4,6]. The tool is structured in 3 parts: 1. Company: activity and process description; 2. Workrooms: type and dimensions, air flow, efficacy of the ventilation system, local maintenance, staff and workstations, potential sources of non-manufactured ultrafine particles emissions (background aerosols); 3. Workstations: instruments, techniques, equipment, process enclosure, details about incoming and outgoing products; presence of collective protection, personal protective equipment (PPE), tasks and operation performed in the workstation, quantity of product handled per operation, frequency and duration of operation. A short questionnaire [3,4] sent to a company occupational safety and health manager prior to the onsite company visit allows to prepare the in-field visit, and to gather all potentially useful documents (the plant's blueprints, certificates of control and maintenance of the collective protective equipment, annual declaration reports and supplementary materials such as nanomaterial characterization data and results of the exposure measurement campaigns) to be consulted onsite during the in-field visit. An in-field visit is generally organized over one or two days. It begins with an exchange of information with representatives of the company in a conference room, about the EpiNano project (objectives, procedures) and about the company (its activities and work processes). The discussion makes it possible to fill in the first part of the Onsite technical logbook on company's activities and processes implemented. The discussion is followed by a study of the plant's blueprints to locate the circulation of materials in the premises and thus identify the workrooms where nanomaterials are present. The technical inspection, in the strict sense of the term, makes it possible to visit workrooms and to observe the workstations and real activity. This step enables to describe the use of nanomaterials in detail. During the inspection, the EpiNano team members (2 or 3 people, including at least one industrial hygienist and one epidemiologist) must be accompanied by the plant's director of hygiene and safety, the laboratory or department director, and the occupational physician. During the inspection, the items of the second and third parts of the Onsite technical logbook are completed, in order, so that the workstations possibly causing exposure can be identified and the potential exposure further assessed [6]. After the inspection, verification and data entry of the information in the logbook, a report of the inspection is sent to the company. This report includes the conclusions of the workstation evaluations and a list of the workstations that potentially cause exposure to nanomaterials, aggregates and agglomerates. 
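As a rough indication of how the three parts of the logbook might look once computerized, the sketch below models the company/workroom/workstation hierarchy with simple record types; every field name, and the single "not fully enclosed process" rule used to flag workstations, are illustrative assumptions rather than the actual EpiNano schema or classification criteria.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Workstation:
    label: str
    nanomaterials: List[str]              # e.g., ["MWCNT"] or ["TiO2"]
    tasks: List[str]                      # e.g., ["weighing", "transfer"]
    process_enclosed: bool
    collective_protection: bool
    quantity_per_operation_g: float
    operations_per_week: int

@dataclass
class Workroom:
    name: str
    ventilation_checked: bool
    background_ultrafine_sources: List[str]
    workstations: List[Workstation] = field(default_factory=list)

@dataclass
class CompanyVisit:
    company: str
    activity: str
    workrooms: List[Workroom] = field(default_factory=list)

    def workstations_concerned_with_exposure(self) -> List[Workstation]:
        # Illustrative rule only: flag a workstation when the process is not fully
        # enclosed, i.e., a direct contact with the nanomaterial is possible.
        return [ws for room in self.workrooms for ws in room.workstations
                if not ws.process_enclosed]
```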
A copy of the computerized data from the logbook is attached to the report. Implementation of the method and first results The method is designed for non-instrumental exposure assessment by non-specialized users. It was tested and further used for tracking workstations concerned with exposure to engineered nanomaterials and recruiting potentially exposed workers. Ten first companies which accepted to participate in EpiNano program were visited from January through May 2014 [6]. The visited workplaces had in average six workrooms (Min=1, Max=13) and 2 workstations per workroom (Min=1, Max=4). The mean number of workstations where carbon nanotubes or TiO 2 nanoparticles, their aggregates or agglomerates could be handled is around eight depending on company activity, with up to 27 workstations in a largest industrial workplace. In total, fifty three workstations were observed and resulted in completed Onsite technical logbooks. Among these workstations, there were twenty-five (47%) workstations in private companies and 28 (53%) in public workplaces, mostly academic research and development laboratories. Carbon nanotubes were most frequently handled material encountered in 43 of the observed workstations (single-wall carbon nanotubes in 16 (30%) and multiwall carbon nanotubes in 27 (51%) workstations respectively), while TiO 2 was handled in 5 (9.4%) workstations. In 18 workstations (34%) multiple types of engineered nanomaterial were handled. 2.3.1. In epidemiology and qualitative exposure assessment In EpiNano system the identification of workstations with exposure concern is preformed regardless the use of personal protective equipment [2,6]. Workstations where a worker could experience a direct contact with engineered nanomaterial (including aggregates and agglomerates) that gives potential for inhalation or cutaneous contamination are classified as workstations concerned with exposure. The information about personal protective equipment, amount of engineered nanomaterial handled during an operation as well as frequency and duration of handling is gathered from workers' individual EpiNano inclusion questionnaire. This information will be accounted for in workers' individual exposure score for workers involved in workstations identified as concerned with exposure to engineered nanomaterials [2,3,6]. Overall, in ten workplaces visited till May 2014, 30 workstations (57%) were classified as concerned with exposure to either carbon nanotubes or TiO 2 . Figure 1 presents the types of operations and tasks performed in the observed workstations and in workstations classified as concerned with exposure to engineered nanomaterial. Among the parameters assessed during the in-field visits, dustiness and humidity of the engineered nanomaterial seem to be the most important determinants of the possible exposure in a workstation [6]. 2.3.2. In industrial hygiene and risk management. The data collected through the Onsite technical logbook are computerized and sent to the company. This data might be directly used by companies for risk management proposes, for instance by implementing control banding approach to assess and control exposure to engineered nanomaterials in different workstations. Several tools of control banding have been proposed specifically for engineered nanomaterials [7][8][9]. The Onsite technical logbook contains all essential parameters for implementing any of these tools for assessing exposure bands in workplaces. 
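As a purely illustrative example of how logbook parameters could feed a control-banding-style assessment, the snippet below combines a hazard band with an emission-potential band derived from dustiness and handled quantity; the thresholds, scores, and recommended control levels are invented for the example and do not reproduce any of the published control-banding tools cited above.

```python
# Illustrative only: toy banding of a workstation from logbook-style parameters.
# Thresholds and band labels are invented; published control-banding tools
# (such as those cited as [7]-[9]) define their own, more detailed rules.

def emission_potential_band(dustiness: str, quantity_g: float) -> int:
    """Return a 1-3 emission-potential band from dustiness and handled quantity."""
    dust_score = {"low": 1, "medium": 2, "high": 3}[dustiness]
    if quantity_g < 0.1:
        qty_score = 1
    elif quantity_g < 10:
        qty_score = 2
    else:
        qty_score = 3
    return max(dust_score, qty_score)

def control_band(hazard_band: int, emission_band: int) -> str:
    """Combine hazard and emission-potential bands into a control recommendation."""
    score = hazard_band + emission_band
    if score <= 3:
        return "general ventilation"
    if score <= 5:
        return "local exhaust ventilation / fume hood"
    return "containment or expert review"

# Example workstation: weighing a few grams of a dusty powder, hazard band 3.
print(control_band(hazard_band=3, emission_band=emission_potential_band("high", 2.0)))
```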
While there is no consensus on an appropriate exposure metric to be measured for assessing individual exposure to engineered nanomaterials in workers, International organization for standardization (ISO) recommends using control banding approach in workplaces dealing with engineered nanomaterials [9]. Consequently, our method may be straightforward and helpful for both exposure characterization and risk management which might be further improved with more accurate and quantitative exposure measurement data. Method validation A validation study is required in order to address the reliability of the proposed method and reproducibility of the exposure assessment results based on it in order to prevent bias in risk estimator in the epidemiological study [10]. The inter-method [11] and intra-method [12] comparisons of the exposure assessment were performed; the methods and results will be presented in the upcoming publication (manuscript in preparation). According to the results of these comparisons, the proposed method presents a substantial agreement with a more precise expert exposure assessment (Cohen's Kappa=0.69) and a good agreement based on intra-method repeatability test. . In conclusion, the method and the tool (the Onsite technical logbook) presented in this paper were developed by the French institute for public health surveillance (InVS), the French institute for occupational health and safety (INRS), the atomic energy commission (CEA), the French institute for industrial safety and environmental protection (INERIS), and the University of Bordeaux Segalen, as part of the partnership entitled ExpoNano Quintet. This tool makes it possible to collect all of the information necessary to identify and characterize workstations that might cause occupational exposure to carbon nanotubes or TiO 2 nanoparticles, aggregates, and agglomerates. It is part of a semi- quantitative method to characterize the potential for exposure to intentionally produced nanomaterials in different workstations [2,3]. This practical method makes it possible to follow the recommended procedure for assessing potential emissions and characterizing occupational exposure during operations involving nanomaterials [5]. The results of validation studies are promising and will be provided in a forthcoming publication. This method, which is simple and does not require an instrument (no sampling, no aerosol measurements), is designed to be usable as part of the EpiNano program of epidemiologic surveillance of workers potentially exposed to nanomaterials in France [3,6]. Moreover, it can be useful for risk management purposes in companies, for instance in frame of implementation of the control banding approach to assign exposure bands to the different workstations concerned with exposure to carbon nanotubes or TiO 2 nanoparticles, aggregates and agglomerates.
3,283
2015-01-01T00:00:00.000
[ "Environmental Science", "Engineering" ]
Dynamic Carpooling in Urban Areas: Design and Experimentation with a Multi-Objective Route Matching Algorithm: This paper focuses on dynamic carpooling services in urban areas to address the needs of mobility in real time by proposing a two-fold contribution: a solution with novel features with respect to the current state-of-the-art, which is named CLACSOON and is available on the market; and the analysis of the performance of carpooling services in the urban area of the city of Cagliari through emulations. Two new features characterize the proposed solution: partial ridesharing, according to which the riders can walk to reach the driver along his/her route when driving to the destination; and the possibility to share the ride when the driver has already started the ride, by modeling the mobility to reach the driver's destination. To analyze which features of the user population bring better performance as the characteristics of the users change, we also conducted emulations. When compared with current solutions, CLACSOON allows for achieving a decrease in the waiting time of around 55% and an increase in the driver and passenger success rates of around 4% and 10%, respectively. Additionally, the proposed features allowed the reduction in CO 2 emissions to be increased by more than 10% with respect to the traditional carpooling service. Introduction Vehicular traffic congestion is one of the main problems of most of our cities and towns [1]: it degrades the quality of life, leading to a wide set of social, economic and environmental impacts. It calls for a great effort in studying and deploying innovative and ambitious urban transport modes to reach a less car-dependent life-style, since car dependence is one of the main causes of urban traffic congestion. The particular vehicles used for transport, the source of energy and the infrastructure used to implement the transport play a critical role in the evaluation of the social, environmental and climate impact [2]. Different alternative transport modes have been implemented for reducing air pollution, most of the time based on public transport services, where several options have been proposed and deployed according to the specific configurations of the cities' public transportation infrastructures, e.g., city buses, light rails, trains, subways, and ferries. The use of public transport infrastructures is indeed certainly one of the best solutions to face the challenge of vehicular traffic congestion. However, conventional public transport denotes a service that follows fixed routes and schedules; it may not be available in certain areas, and usually, the distance to the stop locations may be great in sparse areas. Therefore, public transport cannot accommodate all types of mobility needs, which would inevitably be met by the use of personal transportation means. Unfortunately, a significant number of people still prefer the use of a personal car (or other personal transportation means) over public transportation; they should be persuaded to change their mobility style. A report by the U.S.
Environmental Protection Agency revealed that light-duty vehicles are the source of nearly 25% of the country's greenhouse gas emissions [3].Consequently, cutting this significant source of emissions is crucial, and a shift of single-occupancy vehicles towards shared cars could help to address these problems.In this context, carpooling has been a widely-accepted concept to implement Intelligent Transport Systems (ITS) in smart cities and to reduce the gas emissions caused by the use of the personal car.Carpooling (also known as ridesharing) is defined as the sharing of car journeys so that more than one person travels in one car, thus reducing travel costs, such as fuel costs, tolls, etc., but most importantly, from the societal and environmental point of view, reducing air pollution.While the concept of carpooling has existed for decades [4], currently, this service is having a lift thanks to the advancements in the ICT sector.In particular, the wide-spread availability of broadband Internet services allows for the deployment of powerful tools for carpoolers to meet potential companions and reach an agreement on the shared trips [5,6].Most of the ridesharing systems operating today allow the passenger and drivers to find convenient trip arrangements over the Internet, to support trust building between registered users and to implement billing systems to charge passengers and compensate drivers [7].These procedures must allow for quick and easy matching of carpoolers' needs, as well as for assisting in establishing itineraries, prices and payment methods [8]. Nowadays, the most widespread implementations of ridesharing services rely on a static approach: the carpoolers post the requests and the offers several hours in advance for a future transportation need, and shared rides have to be arranged before the trip starts.On the other hand, dynamic ridesharing is a relatively new type of carpooling: it is a system where an automated process employed by a ridesharing provider matches up drivers and riders on a very short notice [9], which can range from a few minutes to a few hours before the departure time.Dynamic ridesharing clearly brings several advantages over the static ridesharing approach. This paper focuses on the challenge of a dynamic ridesharing service in urban areas.The major contributions of this paper are the following: • A new carpooling platform named CLACSOON is presented [10], which is intended to make simple the interaction of the clacsooners, i.e., the platform users, to find a trip companion and interact during all of the phases of the sharing experience.This platform is currently working in the area of Cagliari, and it is available for the iOS and Android operating systems. • A novel matching algorithm is proposed, which is a route matching algorithm that has two novel features with respect to the state-of-the-art: partial ridesharing, according to which the riders can walk to reach the driver along his/her route when driving to the destination; the possibility to share the ride when the driver has already started the ride by modeling the mobility to reach the driver destination. 
• Due to the impossibility to control the characteristics of the real users, an emulation system is deployed to analyze the key parameters that affect the Quality of Experience (QoE) provided to the users when changing the characteristics of the population.The objective is to have key information on how to drive the creation of the population of users (through marketing operations) to reach the desired usage targets.The performance has been evaluated considering the ridesharing success rate for both driver and passenger, the waiting time and the total system CO 2 saved.The results have been compared with the case for which the novel proposed features were not used, showing significant improvements. The remaining of the paper is organized as follows: Section 2 presents relevant past works and highlights the novelty of the proposed system; Section 3 describes the implemented platform; Section 4 presents CLACSOON's matching algorithm; Section 5 discusses the experimental results for the case study; conclusions and future work are drawn in last section. Past Works and Introduced Innovation Nowadays, most of the carpooling services implement a "static" approach: when using such a service, the carpoolers have to post ridesharing requests and offers several hours before their desired departure time, and the shared ride has to be arranged before the trip starts.This approach is shown to be effective for mid/long distance trips, while it is not suitable for short distance trips, which often occur in an urban scenario: in this case, a real-time approach fits better.Dynamic ridesharing is a relatively new type of carpooling: it is a system where an automated process employed by a ridesharing provider matches up drivers and riders on a very short notice [9], which can range from a few minutes to a few hours before departure time.In addition to using communication technologies, dynamic ridesharing systems must establish a procedure that enables travelers to form ridesharing instantaneously [11].This type of carpooling generally makes use of three recent technological advances: GPS navigation devices to determine a driver's route and arrange the shared ride, smartphones for riders to request a ride from wherever they happen to be and social networks to establish trust and accountability between drivers and passengers.These elements are coordinated by a ridesharing provider, which can match rides using opportune route matching algorithms. In the following two subsections, we review the past works, and we present the introduced innovation, respectively. 
Past Works The idea of dynamic ridesharing is not new, and major initiatives have been tried in the past in the field of business, for example by Flinc (www.flinc.org),Carma Carpooling (https://www.gocarma.com/) and Commutr (www.getcommutr.com).Dynamic ridesharing clearly brings several advantages over the static ridesharing approach, especially in terms of flexibility in satisfying the users needs.Because of its potential, also several research efforts have been conducted in the last few years, but the problem of matching ride requests and ride offers at a large scale remains challenging.Several matching agencies tried different approaches, but what constitutes the best procedure is still a matter of debate [8].The ridesharing matching problem in the literature is often modeled as an optimization problem [8,11].A commonly-used objective is to minimize the overall travel distances in the optimization problem or considering multiple objectives, including the minimization of the overall travel times, the maximization of the number of ride-matches and the minimization of the system response time.The main technical challenge is the complexity of the optimization problem and the matching process itself, along with the complexity of accurately modeling the carpoolers behavior.On the practical side, one of the main challenges regards the critical mass issue, which is faced by dynamic ridesharing services, in particular, in their startup phase; this problem consists of the difficulty in achieving a critical mass of users in order for the service to find an appropriate mate for the users requests, bringing an adequate matching success rate.This challenge is also related to the QoE perceived by the users, which depends on factors, such as safety, social discomfort and time flexibility.In Xing et al. [12], a ridesharing concept for short-distance travel within metropolitan areas is designed as a multi-agent system to handle spontaneous ridesharing requests of prospective passengers with transport opportunities available on a short call bases.The work in Arnould et al. [13] illustrates WiSafeCa (Wireless Traffic Safety Network Between Cars), a Eureka/Celtic-founded European project that consists of researching and prototyping efficient car-to-car and car-to-infrastructure networking mechanisms striving to reduce accidents and traffic congestion.In the scope of the project, a dynamic ridesharing system was designed, in order to serve real-time transport requests.In Agatz et al. 
[14] is considered the problem of matching drivers and riders for a dynamic ridesharing scenario, presenting a simulation study based on travel demand data for the city of Atlanta.The matching problem is described as the minimization of the total system-wide vehicle miles incurred by users and their individual travel costs.The simulation results indicated that the use of sophisticated optimization methods based on a rolling horizon approach substantially improve the performance of ridesharing systems over a greedy matching algorithm.In the definition of their study, an important assumption is that a driver could make only one pickup and one delivery: this constraint makes the problem easier to solve, but it prevents the driver from serving some riders even if they are on his/her desired route.Another important assumption for the study was that a shared ride must be agreed before the start of the driver's trip; moreover, the dynamics of the positions and the speeds of all of the shared vehicles are omitted.The work in Herbawi et al. [15] addresses the dynamic ride-matching problem with time windows, optimizing a multi-criteria objective function.Extending the work proposed by Agatz, they propose a genetic and insertion-based heuristic algorithm for solving the optimization problem, also considering the multiple ride problem (i.e., more than one rider for a single driver).The problem is represented using a maximum-weight bipartite matching model, and the optimization software CPLEX is used to solve it.In Di Febbraro et al. [16], the proposed ridesharing system considers the interactions between drivers, riders and the system manager using a model based on mixed continuous-integer linear programming to maximize the performance of dynamic ridesharing systems.The dynamics of the positions and the speeds of all of the shared vehicles are omitted for simplicity, and it is assumed that users can meet only at a priori fixed delivery stations, such as near bus stops, intersections and the corners of squares.The performance of the proposed model has been analyzed through a simulation based on the modeling framework for Discrete Event Systems (DES).In Mallig et al. [17], the authors describe a former implementation of the agent-based travel demand model mobiTopp, with the aim of realizing a realistic model for ridesharing as an agent-based travel demand model.The model has the limitation that it currently supports only end-to-end ridesharing, i.e., only matching between origin-destination (O/D) zones. Some works have analyzed the benefits of the proposed carpooling solutions.In Cho [18], the authors present a case study analyzing 12 carpooling services in Europe and the United States and conclude that interpersonal interactions in the service encounter (which depend on the application/service interface) play a significant role in the QoE perceived by the users.In Cici et al. [19], the authors investigate and assess the potentials of ridesharing by developing an algorithm that matches users characterized by similar mobility patterns on the basis of departure time, O/D locations and social distance based on data from popular social networks.The results provide an upper bound to the potential of ridesharing performance, indicating that the decrease in the number of cars in a city can be as high as about 30% when the users are willing to share a ride with friends of friends.In Tsao et al. 
[20], the authors present the potentials of carpooling for reducing traffic congestion in a hypothetical metropolitan area, assuming a uniform distribution of O/D locations.This model attempts to measure the potentials of ridesharing based on spatial and temporal factors, but assumes that only people who live in common home/work zones would consider sharing a ride with one another.Whereas this study is one of the most comprehensive studies in estimating the ridesharing performance in terms of traffic reduction, it has made important simplifications that have most probably underestimated the achievable results with respect to more realistic deployment scenarios [21]. Sharing a ride can also lead to some side effects: for drivers, making a detour to reach the riders' pick-up and drop-off points could represent a waste of time and money when these points are not close to the driver's route, since that behavior increases the total miles traveled by the driver.This drawback has generally a lower impact when compared to the total savings in CO 2 emissions due to the sharing of the ride, but it points out some fields of improvement for ridesharing systems.For example, this side effect can be mitigated by a carpooling system that evaluates only pickup points on the driver's route. Proposed Innovation As it resulted from the previous review, many works have focused on the optimization problem, but only a few have worked on the modeling of the driver mobility to find better matches.One common assumption is that a shared ride must be agreed before the starting of the driver's trip, whereas the partial ridesharing (a partial ridesharing [8] happens when the pick-up and drop-off locations are located on the driver's original route, either if their origins and destinations are located on major streets or determined by negotiations) mode is not currently facilitated by matching agencies [8], and to the best of our knowledge, its benefits have not been investigated in the literature yet. Based on these considerations, the novel carpooling solution for dynamic ridesharing service proposed in this paper and named CLACSOON includes the partial ridesharing mode.In this way, the driver avoids taking a detour whenever possible; therefore, it leads to an increment in the total system-wide CO 2 savings.Clearly, it calls for the riders to walk to reach the driver along his/her route when driving to the destination.Additionally, by introducing the modeling of the position of the driver's vehicle, only the remaining part of the route that a driver has to travel is considered when evaluating the matching.Therefore, this approach enables the possibility for shared rides to be agreed on the fly after the starting of the driver's trip, when a rider happens to be close to the remaining part of a driver's route.This approach leads to an increment in the number of total shared rides. 
Another important contribution of this paper is that, to evaluate the impact of changing population characteristics on the performance of the system, an emulation system has been deployed to generate increasing numbers of users that interact with the CLACSOON platform, and extensive trials have been implemented to analyze some performance indicators while varying the characteristics of the population in the city of Cagliari (Italy). In particular, the passenger success rate, the driver success rate and the total system-wide CO2 saved have been evaluated with respect to the characteristics of the population. The results show that introducing the aforementioned features in a route matching algorithm leads to a substantial performance improvement. The CLACSOON System Architecture The CLACSOON system has been designed and implemented considering an urban scenario where the aim is to offer a real-time, i.e., dynamic, carpooling service. The objective is to satisfy the needs of users that have an unplanned (i.e., not predictable) need for mobility in the city that could not be scheduled in advance. Accordingly, the system architecture needs to implement a service that simplifies and automates the provisioning of the carpooling processes, also considering the users' QoE and creating an incisive user persuasion strategy. The main functional requirements to develop such a service can be briefly described as follows: • Accounting: to allow the user to access the service. Each user has a profile where various kinds of information are stored, such as name, age, type of car and received feedback, which are very important to build the reputation level. • Request and offer insertion: to allow each user to insert an offer or request a ride. Each ride is identified by a departure point, an arrival point, time flexibility parameters and a search radius representing the maximum detour from the scheduled trip. • Automatic matching: the server dynamically evaluates the possible matching between a ride (either an offer or request) and the sets of complementary rides. • Matching notification: if a matching is found, the system notifies the users. Each matching notification contains the pick-up point, the drop-off point and the expected driver arrival time. Each user can accept or refuse the notification. The system has to be used by users in mobility, so access to the system has to be guaranteed through mobile devices. Accordingly, the design of the system architecture considers this requirement, and the front-end layer has been designed for mobile devices, considering the major operating systems. As for the back-end side of the system, it is deployed in the cloud to offer good reliability considering the high number of expected connections and to provide good availability and capability features [22]. In the implementation of the CLACSOON platform, the chosen technology is Google App Engine and its tools for cloud solutions. Other third-party services (e.g., Facebook APIs, Direction APIs) are used to build the proposed service. As already mentioned, the system follows the mobile-cloud paradigm. Figure 1 shows the major components: • The mobile client allows the user to access the carpooling service in mobility. Its sensors (e.g., GPS) are used to simplify access to the service and to enhance the user experience [23]. For all communications toward the server, the JSON format is used.
• The cloud application server is the core of the system.It enables the access of users, processes all requests and offers for rides and calculates the matching between requests and offers. • The cloud database has the task to store all data useful for the service: user profile, ride offers, ride requests, trips, payments, feedback and other information. • The Facebook APIs are used to simplify the process of registration by offering a quick and easy service to access the system.Using the Facebook social graph, the aim is to increase the social participation of users. • A directions provider is used to evaluate the information concerning the route between departure and destination locations chosen by the user for his/her ride.This information includes travel directions, estimated path length, estimated travel time and likely speeds derived from road types. • The push notification services are used to enable the push notification toward the smartphones.This feature is a milestone to obtain the real-time requirement [24]. The CLACSOON application can be downloaded from the iOS and Android markets. The Route Matching Algorithm This section describes the CLACSOON's route matching algorithm and its ridesharing model.The proposed algorithm contemplates the partial ridesharing mode, i.e., it takes into account the possibility for a rider to reach the driver along his/her route, thus avoiding that the driver takes a detour whenever possible.Furthermore, the algorithm implements a method for estimating the position of the driver's vehicle in an urban context, which enables the possibility for sharing rides after the starting of the driver's trip.During the design of the matching algorithm, we considered a dynamic ridesharing scenario in which a ridesharing provider receives all of the trip announcements for each participant.We also assume that the ridesharing provider relies on the availability of a directions provider, which provides the information concerning the route between a departure and a destination location.This information includes travel directions, estimated path length, estimated travel time, expected speed derived from road types, which may or may not depend on the historical average speed data over certain time periods.Such a service is provided by many agencies; an example is the Google Maps Directions API [25], which is a service that calculates directions between locations using HTTP requests.Bing Map [26] is another map service provider, which calculates and display directions and routes on the Map with Direction API module or with Bing Map Rest Services.Several alternatives can also be used for those ridesharing providers who opt for a self-hosted direction provider: a great example is The Open Source Routing Machine [27], which is a high performance routing engine written in C++ designed to run on OpenStreetMap data. 
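The announcement structure handled by the matching algorithm (departure and arrival locations, timeout, and search radius, described in the next paragraphs) can be summarized in a small data model. This is a sketch with illustrative field names, not the platform's actual API:

```python
# Sketch of the ride announcement structure assumed by the matching logic
# described below; field names are illustrative, not the platform's API.
from dataclasses import dataclass
from typing import List, Optional, Tuple

Point = Tuple[float, float]  # (latitude, longitude)

@dataclass
class Announcement:
    user_id: str
    departure: Point          # desired departure location
    arrival: Point            # desired arrival location
    timeout_min: float        # T: maximum time the user waits for a mate
    search_radius_km: float   # R: max detour (driver) or max walk (rider)

@dataclass
class RideOffer(Announcement):
    spare_seats: int = 1
    route: Optional[List[Point]] = None   # polyline alpha_d from the directions provider
    travel_duration_min: float = 0.0      # tau_d estimated by the directions provider

@dataclass
class RideRequest(Announcement):
    busy: bool = False        # set once the request is committed to a driver
```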
Each rider and driver request includes the desired departure and arrival locations.Each ride offer or request includes a timeout, which has to be intended as the maximum time the user is willing to wait before finding a mate.Furthermore, each announcement includes a search radius, which has to be intended as the maximum detour that the driver is willing to make from his/her original route or the maximum distance the rider is willing to walk to reach the pickup point.With this information, the proposed service automatically establishes shared rides over time, matching potential drivers with riders.For the purpose of describing the route matching algorithm, we assume that at a given time t: The problem of finding the matching between drivers and riders can be formulated as described in the following.The matching algorithm has to satisfy the following constraints: 1.The total number of riders in a vehicle must not exceed the number of spare seats specified by the driver; 2. The entire commuting route must start at the departure and end at the destination locations specified by the driver; 3.Each rider must be picked up before he/she can be dropped off.This constraint seems obvious, but it must be made explicit in a carpool matching algorithm. 4. The maximum distance that a rider p has to walk for reaching the pickup point cannot exceed the search radius R p ; 5. The maximum detour that a driver d has to take with respect to his/her route, for picking up a rider, cannot exceed the search radius R d ; 6.The rider and the driver can wait to find a matched mate for a shared ride at most the timeouts T p and T d , respectively. The constraints from 1 to 3 are usual for a commute process [28], while the constraints from 4 to 6 are specific for the proposed dynamic ridesharing system.As mentioned previously, the potential route of a driver is encoded with a polyline α d , which is a matrix with two columns where each row represents the coordinate of each point in the polyline.Accordingly: where i indexes the points in the route and The number of points in this matrix (n) is clearly variable and depends on the departure and arrival points, as well as on the route solution proposed by direction providers.The ridesharing service should be implemented in a way to require the minimal intervention from the users to maximize usability, but at the same time giving him/her the freedom to chose among a possible list of mates.This implies that the ridesharing service finds all the matches and notifies the user with a list of suitable travel companions.The proposed matching algorithm works as follows: the algorithm first searches for one (or more) suitable matching and then, when the matching is found, the arrangement of the shared ride is proposed to the participants.The driver and the rider then can accept or refuse it.In most studies, the objective of the matching algorithm is the maximizing of the system-wide miles saved, the maximizing of the success rate (the percentage of satisfied drivers and riders) or the minimizing of the waiting time of drivers and riders.Clearly, these objectives partially conflict with each other.Depending on the policy of the ridesharing provider, one (or a combination) of the aforementioned objectives is selected for the implementation of the matching algorithm.In our solution, we consider a weighting of the length of the shared trip and needed detour.As already stated, this differentiates with respect to the alternative proposals, as we consider the partial ridesharing mode and 
the detour of the riders. The proposed route matching algorithm relies on the following three sequential functions: • Temporal matching: for each new user (either a rider or a driver), the system evaluates whether the time constraint is satisfied for each possible travel companion, given the timeout T, the driver's travel duration and the current shared-ride allocation, but without considering any geographical constraint; • Geographical matching: this is the evaluation of the matching between a driver and a rider on the basis of the distance from their paths. This step is performed for each pair (d, p) of drivers and riders that satisfied the previous matching. This step also takes into account the theoretical future position of the driver's vehicle, from the beginning till the end of his/her ride. • Cost function evaluation: this evaluates the cost C_{d,p} for a shared ride between each driver d and rider p that satisfied both the temporal matching and the geographical matching constraints. The details of these steps are explained in the following sections. The list of possible travel companions is then ordered by the value of the cost C_{d,p}. This result represents the output of CLACSOON's matching algorithm. This list is then proposed to riders and drivers. Temporal Matching For the temporal matching, it is necessary to consider the effect of the timeout (T_d and T_p), which is the maximum time the user is willing to wait to find a mate; after this amount of time, the ride request is considered to be expired. For the drivers, it is also important to consider the estimated travel duration τ_d, since after this amount of time, the ride offer is considered to be over. In case the rider starts the ride after the driver, two conditions must be verified, which check that the rider arrives before the driver's timeout expires and before the driver's trip ends. Conversely, in case the driver starts the ride after the rider, one condition must be verified, which checks that the driver arrives before the rider's timeout expires. Each pair (d, p) that satisfies this step is then evaluated in cascade by the geographical matching algorithm. Geographical Matching Each pair (d, p) that satisfies the temporal matching constraints is evaluated by the geographical matching algorithm. For this purpose, we propose a method for estimating the driver's position at the time t on the basis of his/her desired route. Modeling the position of the driver's vehicle enables sharing rides even after the starting of the driver's trip. As mentioned before, α_d is the desired route for a driver d, which connects the driver's departure point to his/her destination. Recall that the polyline consists of n points (and hence n − 1 segments). Assuming that τ_{d_i} is the travel duration between point i and point i + 1, we can decompose the total travel duration as τ_d = Σ_{i=1}^{n−1} τ_{d_i}. For simplicity, we can assume that all segments take the same time, i.e., τ_{d_i} = τ_d/(n − 1). We then choose to estimate the driver's position at the time t = t_DEP_d + ∆t as the point of the route with index K_d(t), obtained by mapping the elapsed time ∆t onto the cumulated segment durations. Given that the constraints discussed in Section 4.1 have just been satisfied, note that this estimate has to be considered in the range t_DEP_d ≤ t ≤ (t_DEP_d + τ_d), i.e., the position of the driver is considered to be undefined before the beginning of the ride and after the ride is over.
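A minimal sketch of this position estimate, under the uniform segment-duration assumption stated above (function and variable names are illustrative):

```python
# Sketch: estimate the index K_d(t) of the driver's position on the polyline
# alpha_d at time t, assuming each of the n-1 segments takes an equal share
# of the total travel duration tau_d (the simplifying assumption above).
def position_index(n_points: int, tau_d: float, t_dep: float, t: float) -> int:
    """Index of the route point reached at time t; -1 outside the trip."""
    if t < t_dep or t > t_dep + tau_d or n_points < 2:
        return -1
    segment_duration = tau_d / (n_points - 1)
    return min(int((t - t_dep) // segment_duration), n_points - 1)

def remaining_route(route, k):
    """beta_d(t): the points of alpha_d with index >= K_d(t)."""
    return route[k:] if k >= 0 else []
```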
The remaining part of the path that a driver d has to travel at a time t can then be expressed as β_d(t) = (α_d(K_d(t)), α_d(K_d(t) + 1), ..., α_d(n)). Accordingly, β_d(t) is a subset of α_d which does not contain the points with index i < K_d(t) that the driver d should have passed by at time t. In other words, β_d(t) represents the part of the route that the driver theoretically still has to travel after time t. When evaluating the matching at time t between an offer d and a request p, only the remaining route points β_d(t) are considered, against the departure D_p and destination A_p points of the rider: this feature enables drivers to pick up riders on the fly, if the pickup point is close to the remaining route points in β_d(t). This situation is depicted in Figure 2. For the purpose of describing the geographical matching constraints, we assume that ∆dep_{d,p}(t) is the minimum distance between the set of route points in β_d(t) and the rider's departure D_p; ∆dst_{d,p}(t) is the minimum distance between the set of route points in β_d(t) and the rider's destination A_p; and β^dep_d(t) and β^dst_d(t) are the two points on the driver's route with the minimum distance from D_p and A_p, respectively (these points represent the candidate pick-up and drop-off points on the driver's route). A first constraint for the matching to be found is that the index of the point β^dst_d(t) has to be greater than the index of β^dep_d(t), so that the rider is picked up before he/she can be dropped off. To take the partial ridesharing mode into account, when the search radius R_p specified by a rider allows him/her to reach a departure pickup point on the driver's route, the system places the pickup point on the point β^dep_d(t). A similar method is used for the evaluation of the destination drop-off point β^dst_d(t). In this way, this setting avoids the driver taking a detour when possible, and thus, it is expected to lead to an increase of the total system-wide CO2 savings. Depending on the values of the rider's and the driver's search radius, the matching algorithm assigns the departure pickup point P_dep(t) and the destination drop-off point P_dest(t) according to the following three conditions: Condition (1), if the search radius of the rider R_p is greater than or equal to the distance ∆dep_{d,p}(t) between his/her departure and the driver's route, the matching algorithm specifies the location β^dep_d(t) to be the departure pickup point. Condition (2), if the search radius of the rider R_p is lower than the distance ∆dep_{d,p}(t), but the search radius of the driver R_d is greater than or equal to the distance ∆dep_{d,p}(t), the pickup point is assigned to be on the rider's departure point D_p. Condition (3), if both Conditions (1) and (2) are not satisfied, a pickup point does not exist, and then a matching between p and d does not exist. An analogous approach is followed in calculating the drop-off point. The total driver's deviation from his/her original path (for simplicity, we consider the deviation to be in a straight line) can then be expressed in terms of the distances ∆dep_{d,p}(t) and ∆dst_{d,p}(t), where w^D_{d,p} is a binary variable set to one if P_dep ≡ D_p, i.e., if a detour from the driver's original path is needed at pick-up, and likewise w^A_{d,p} is a binary variable set to one if P_dest ≡ A_p and set to zero otherwise. The last constraint to be satisfied is that the total detour that a driver should take in order to reach the pick-up and drop-off points cannot be higher than the distance ∆km_{p,d} covered by the shared route, i.e., the shared ride provides positive cost savings. If this constraint is not satisfied, there would be no benefit for the driver in taking the detour in order to share the ride. If both P_dest(t) and P_dep(t) are defined, the matching is assumed to be found. Cost Function For the pairs of offers and requests (d, p) which satisfy the temporal and geographical constraints, the value of a cost function C_{d,p}(t) for a shared ride is evaluated. It takes into account the following two elements: ∆km_{p,d}, which is the length of the shared ride; and dev_{p,d}, which is the length of the needed detour.
The cost function is defined as a weighted combination of these two elements, where Θ and Ψ are tuning parameters that determine the relative importance of the detour from the original path and of the shared portion of the trip, respectively. Accordingly, a list of suitable travel companions is produced and ordered; each entry is associated with: • the departure pickup point P_dep(t); • the destination drop-off point P_dest(t). The list of suitable travel companions for a user represents the output of CLACSOON's matching algorithm. The user is then left with the option to select the best mates according to his/her personal interests. In the next section on the performance evaluation, we assume that the user always selects the mate corresponding to the lowest cost function. Analysis of the Experimental Results The CLACSOON platform has been implemented, and the relevant service is publicly available for the major mobile operating systems (iOS and Android). Currently, the service operates at a small scale and has attracted three thousand users, mostly located within the area of Cagliari. Since we were facing the critical mass issue, we were interested in analyzing the performance of the system in relation to the population characteristics. For this purpose we implemented an emulation system, as the current population of CLACSOON users is limited and because we were interested in analyzing the performance with different population characteristics, which cannot be controlled in real scenarios. The place we selected for the emulation scenario is the city of Cagliari, an Italian municipality with nearly 150,000 inhabitants and a metropolitan area (including the surrounding 15 municipalities) of more than 420,000 inhabitants [29]. Considering a real area, we were able to emulate the mobility patterns in real urban conditions, including real roads in the city and real paths between any departure and destination (e.g., pedestrian zones, one-way roads, limited traffic zones). Three Key Performance Indicators (KPIs) have been analyzed: the number of shared rides, the waiting time to find a ride and the average total system-wide CO2 savings. The following subsections describe the emulation system, present the experimental setup, analyze the achieved performance results and provide a comparison with alternative approaches. Description of the Executed Emulations In this section, we describe the experimental setup and the emulation system. We implemented an agent-based emulator that generates the ride offers and requests on behalf of real users, evaluates the matching between them and emulates the sharing of rides. The emulator is implemented in Java and is based on the core of the CLACSOON platform (indeed, the matching is exactly the service in production, but executed in the emulation environment). In our experiments, we ran several scenarios, each one characterized by a combination of parameter settings. We now describe the processes we followed for the configuration, setup, run and evaluation phases of our experiments.
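Before detailing these phases, the candidate-ranking step of Section 4.3 can be sketched as follows. The exact form of C_{d,p} is not reproduced above, so the linear combination below (a longer shared ride lowers the cost, a longer detour raises it) and the weight values are assumptions for illustration only:

```python
# Sketch: ranking candidate mates by an assumed linear cost. THETA and PSI
# play the role of the tuning parameters Theta and Psi; the linear form is
# an assumption, not the paper's exact formula.
THETA, PSI = 1.0, 1.0

def cost(shared_km: float, detour_km: float) -> float:
    return THETA * detour_km - PSI * shared_km   # lower is better

def rank_candidates(candidates):
    """candidates: iterable of (driver_id, shared_km, detour_km) tuples that
    already satisfy the temporal and geographical constraints."""
    return sorted(candidates, key=lambda c: cost(c[1], c[2]))

# Example: the rider is proposed the lowest-cost mate first.
print(rank_candidates([("d1", 4.2, 0.8), ("d2", 6.0, 0.0), ("d3", 3.1, 0.4)]))
```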
Configuration During the configuration step, a list of scenarios is generated: each scenario represents the configuration of a population.During the experiments, we have changed some parameters of the population to evaluate the effects on the KPIs; these parameters are listed in Table 1.The performed experiments have been conducted by selecting an area of interest in the city of Cagliari (centered at: 39.23, 9.14) and with an area (A) of about 64 km 2 , which is where the users can operate.This area is of interest for this study since the majority of CLACSOON'S users mainly operate inside this boundary.Furthermore, this area is representative of medium-small cities with numerous residential areas, commercial sites, factories and historic neighborhoods within its metropolitan boundaries.Figure 3 shows the area selected for this case study where the area of interest is delimited by a black line.Each run lasts for S hours, during which a total of N users act as either passengers or drivers.When evaluating the performance of the system with respect to the spatial clacsooners density, we have varied both the population density (N k ) and the ratio between the number of drivers and the number of passengers, i.e., L d /L p .As shown in the table, N ranges from 600 to 2500, which correspond to a different population density given the size of the reference geographical area, and the ratio L d /L p ranges from 1/8 to eight.In the performed emulations, we also refer to the timeout T, which ranges from 1 to 30 min.For simplicity, we assume the same timeout T for each rider and each driver.Each scenario represents a combination of the parameters listed in Table 1.To perform the simulations proposed in this case study, we evaluated the KPIs for three sizes of the population N, eight levels for the ratio L p /L d and eight values for the timeout T, for a total of 192 scenarios. Setup The setup step consists of the generation of each member of the population for a single run.During this step, the total population N is divided into L d drivers and L p passengers.The emulator assigns each user departure and destination locations chosen randomly and uniformly in the selected area, with the following two constraints: (1) both locations fall on a street (2) it is actually possible to travel from the departure to the destination, i.e., a path exists between these points Each trip is assigned the shortest path between the departure and destination points, which is calculated by the directions provider.Generating random paths within this area leads to an average travel duration of approximately 13 min with a standard deviation of approximately 6 min (Figure 4).We chose to select randomly and uniformly the starting and the arrival points due to the lack of mobility models for ridesharing users for the city of Cagliari.A similar simplifying assumption has been made in Tsao et al. [20], where the authors, due to the lack of data, assumed a uniform distribution of departure and destination locations in a hypothetical metropolitan area.In Cici et al. [19] and Amey [21] , the authors pointed out that this simplifying assumption should lead to an underestimation of the carpooling performance.Therefore, recent mobility surveys for the city of Cagliari would be needed to assess more accurately the performance of the proposed solution. 
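A sketch of the setup step described above: the population is split into drivers and passengers and each user receives random, uniformly distributed departure and destination points inside the area of interest. Snapping points to streets and computing the shortest path, which the real emulator delegates to the directions provider, are omitted here, and the bounding box is illustrative:

```python
# Sketch of the setup step: split N users into drivers and passengers and
# draw uniform departure/destination points in a bounding box around the
# area of interest (centre 39.23 N, 9.14 E). Street snapping and path
# existence checks, done via the directions provider, are omitted.
import random

def setup_population(n_users: int, driver_share: float, seed: int = 0):
    rng = random.Random(seed)
    lat_min, lat_max = 39.19, 39.27   # illustrative ~64 km^2 box
    lon_min, lon_max = 9.08, 9.20

    def random_point():
        return (rng.uniform(lat_min, lat_max), rng.uniform(lon_min, lon_max))

    n_drivers = int(n_users * driver_share)
    users = []
    for i in range(n_users):
        users.append({
            "id": i,
            "role": "driver" if i < n_drivers else "passenger",
            "departure": random_point(),
            "destination": random_point(),
        })
    return users

population = setup_population(n_users=1200, driver_share=0.5)
```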
The emulator also assigns to each driver and each rider the desired departure time t DEP u .The time interval between two successive departure times is set to have an exponential distribution within the time window S. If we consider this period and a given number of drivers L d and of riders L p , we obtain an expected time interval between offers µ d and an expected time interval between requests µ p : Run Once the population's details have been set, the run step is executed.Each run for a given scenario has been repeated for 20 cycles, in order to reduce the width of the confidence interval.In particular, we checked the 95% confidence interval for one of the most important KPI, i.e., the passenger success rate, whose results are shown in Figure 5, and we checked that it was very small, so that we almost had no overlaps among the curves.Specifically, it was lower than 0.01, which was very low.The emulator models a situation in which a user u joins the population at his/her desired departure time t dep u , simulating the publication of an offer or request through the CLACSOON mobile application.Strictly after the user u joins the population, the matching algorithm is evaluated between this user and the set of complementary users: if the user is a driver, the matching is evaluated against a set of riders, and vice versa.If no matching is found, the user is given a time T to be contacted by another user.When a new rider joins the population, a set of offers that satisfy the constraints specified by the temporal matching is retrieved from the database.These offers are then evaluated by the geographical matching algorithm.If one or more offers satisfy the geographical matching, the value of the cost function is evaluated for these offers, and the ride is agreed upon with the offer, which leads to less cost.Moreover, the number of empty seats for the offer is reduced by one, and the ride request is marked as busy, i.e., it is not possible for other drivers to give a lift for this request.On the other side, when a new driver joins the population, the list of the existing ride requests is retrieved from the database.Those rides have to satisfy the constraint of not being busy, i.e., the ride request must not be committed to another driver.If the aforementioned constraint is satisfied, then the matching between the offer and the request is evaluated by the temporal matching and the geographical matching algorithm.If a matching is found, the shared ride is considered to be agreed upon; the ride request is marked as busy; and the spare seats for the ride offer are decremented by one. 
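The departure-time generation described at the beginning of this step can be sketched as follows; the expected intervals μ_d = S/L_d and μ_p = S/L_p are an illustrative reading, since the exact expressions are not reproduced in the text above:

```python
# Sketch: draw desired departure times with exponentially distributed
# inter-arrival times over a time window of S hours; mu = S / L is taken as
# the expected interval between two successive offers (or requests).
import random

def departure_times(num_users: int, window_hours: float, seed: int = 0):
    rng = random.Random(seed)
    mu = window_hours / num_users          # expected inter-arrival time
    t, times = 0.0, []
    for _ in range(num_users):
        t += rng.expovariate(1.0 / mu)     # exponential inter-arrival
        if t > window_hours:
            break
        times.append(t)
    return times

offer_times = departure_times(num_users=600, window_hours=8.0)
```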
Evaluation After the end of a run, the following KPIs are computed: • Passenger waiting time: the average of the time that the riders that found a match had to wait before finding that match • Passenger success rate: the percentage of riders that found a ride • Driver success rate: the percentage of drivers that shared a ride • Total system-wide CO 2 saved: the sum of the estimation of the CO 2 saved for each shared ride Like the cost function (13) defined in the Section 4.3, the estimation of the total system-wide CO 2 saved is based on the following two elements: (i) the length of the shared ride, i.e., the distance that the riders should have traveled alone if the shared ride was not agreed upon; (ii) the length of the needed detour that the drivers had to take to reach the pick-up and drop-off point.Therefore, this estimation takes into account that drivers could drive longer distances to pick up the riders.The results of the emulations from this case study have been computed as an average of the KPIs obtained from 20 runs for each scenario.As mentioned before in Section 5.1.1,a total number of 192 scenarios has been executed, leading to a total number of about 4000 runs. Experimental Results The performance in relation with the time distribution of the service utilization has been evaluated varying the rate of ride offers ( f d ) and the rate of ride requests ( f p ).The performance has also been evaluated in relation with the maximum waiting time of the users T. Figures 5 to 8 show the results according to the mentioned KPIs varying the parameters with the values indicated in Table 1. Passenger Success Rate Figure 5 shows the passenger success rate, which is the percentage of passengers that find a ride.This chart shows the trend of this indicator in relation with the timeout, for three levels of the ratio L d /L p and three levels of the population N. The first highlighted trend is that the success rate increases with the population, which is something that is expected, since if the spatial density of users is low, the probability to have a matching is small, as well.The growth in relation with the timeout is significant until the value of T is around 15 min; after this value, any further increase in the timeout does not have a big impact.This is due to the fact that the random paths generated in the selected area result in an average travel length of 13 min.This value of T is comparable to the bus transit frequency within the city of Cagliari.If we consider a threshold in the success rate of 80%, we see that this can be achieved with a balance between the numbers of drivers and of passengers (i.e., L d /L p = 1) if the latter have the patience to wait for up to 13 min in the case that the clacsooner density is 40 users/km 2 .Otherwise, if the percentage of drivers is high, having a ratio L d /L p = 4, then the passengers would have to wait only 6 min.This result tells us that depending on the patience of the customers, a different marketing action should be followed to reach the needed percentage of drivers in the clacsooner population. 
Driver Success Rate Figure 6 shows the driver success rate. The first highlighted trend is that the success rate increases when the population increases and when the ratio L_d/L_p decreases (i.e., the more riders request a ride, the higher the driver success rate). Figures 5 and 6 show that, with the same population level, the rider success rate is higher than the driver success rate. This difference is related to the different nature of these two agents: a single driver could give a ride to more than one passenger. This situation is more likely to happen if the number of drivers is higher than the number of riders and when the number of users is high. Moreover, an increase in the timeout always leads to an increase in the success rate for riders, but for drivers this effect is limited: the driver success rate increases slowly after 15 min. This is due to the fact that the random paths generated in the selected area result in an average travel length of 13 min. The emulator models a situation in which riders insert their trip when they arrive at the desired pickup point, and then they can wait for a match at most until their waiting time reaches the timeout. Drivers, as opposed to riders, insert their trip when they are ready to start the trip, and they could obtain a match at most until their trip is over. Therefore, a further increase in the timeout does not lead to a big benefit for drivers. Moreover, when L_d becomes higher than L_p and for a high level of population, there is another important trend. The system becomes unbalanced in favor of the riders, so most of them can find a ride. The remaining drivers have a low probability of finding a passenger, because the majority of passengers have already agreed to take a ride from a driver. Passenger Waiting Time Figure 7 shows the rider's average waiting time needed to find a ride. The trend is linear, and the waiting time decreases when the number of drivers increases. If the ratio L_d/L_p is high (i.e., more drivers than riders are in the system), the waiting time is low, and vice versa. In case there are more drivers than riders, and therefore many offers, the probability of finding a ride quickly becomes high. Therefore, if the number of drivers is higher than or equal to the number of passengers, the waiting time grows only slowly. This figure does not consider the waiting time of riders that have not found a ride (for these riders, the waiting time is undefined), so it has to be considered in conjunction with Figure 5. Total System-Wide CO2 Saved The average success rate and the waiting time represent the performance from the single-trip point of view. By collecting the travel length of every shared ride, we are able to compute the global system CO2 saved. The result is shown in Figure 8. This chart shows the trend of this indicator in relation to the ratio L_d/L_p, for three levels of the population N and for three levels of the timeout T.
It is important to note that the CO2 saved is a KPI that describes the performance of the whole system. The CO2 saved is estimated as the product between the total travel shared (evaluated in km) and the average CO2 emitted by a car (140 g of CO2 per km) [30]. Since this parameter is assumed to be proportional to the shared travel length, it is also representative of the total cost savings generated by the carpooling system. The curve reaches its maximum value when the number of riders is close to the number of drivers, that is, when the system is balanced. It increases when the timeout increases, as well as when the population increases. Assuming the aforementioned value of CO2 emitted per km, we can compute the value of emission savings in the time window S used in the simulation. It is notable that with a timeout value of 10 min and for L_p ≈ L_d, the emission savings in this scenario are about 80 kg for 10 users/km2, 250 kg for 20 users/km2 and 720 kg for 40 users/km2. It is clear that the trend is not linear, but follows an exponential increase with respect to the population. Performance Comparison Following the approach of [11], to assess the value of CLACSOON's matching algorithm, in this section we compare its performance with that of an alternative matching algorithm we developed (named "DUMMY"), which introduces the two following simplifications with respect to CLACSOON's matching algorithm: • when a match is found, the pickup and the drop-off points are assigned to be on the rider's desired departure (D_p) and destination (A_p) points, respectively; • a driver cannot accept requests after his/her trip has started: each shared ride must be agreed upon before the starting of the driver's trip, and the dynamics of the positions and the speeds of the vehicles are not taken into account. Note that the DUMMY algorithm does not only contemplate identical ridesharing (i.e., rides in which the departure is the same for riders and drivers); it also contemplates intermediate meeting points that can (i) be on the original route of the driver or (ii) not be on the way of the driver's original route, so that a detour is needed to reach the pick-up and drop-off points. The difference with the CLACSOON algorithm is that, when the rider's desired departure and destination locations are not on the way of the driver's original route, the driver has to take a detour to reach the rider's departure and destination points. Therefore, the DUMMY matching algorithm does not contemplate the partial ridesharing mode and prevents drivers from picking up riders on the fly, even if the pickup points lie on the driver's route, since the shared ride has to be arranged before the starting of the driver's trip. With this comparison, we intend to specifically evaluate the two major novel features introduced by our proposal. The following results show the comparison of the indicators for a population density of 40 users/km2. In the following, if not explicitly stated otherwise, the simulation environment parameters are equal to those listed in Table 1.
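For reference, the emission-saving estimate used for this KPI (described earlier in this section) can be sketched as follows; the per-ride bookkeeping is illustrative:

```python
# Sketch: total system-wide CO2 saved, using the 140 g/km emission factor.
# Each shared ride contributes the distance the rider would have driven
# alone minus the extra kilometres the driver had to detour.
CO2_G_PER_KM = 140.0

def co2_saved_kg(shared_rides):
    """shared_rides: iterable of (rider_km_avoided, driver_detour_km)."""
    saved_km = sum(max(r - d, 0.0) for r, d in shared_rides)
    return saved_km * CO2_G_PER_KM / 1000.0

# Example: three shared rides.
print(co2_saved_kg([(5.0, 0.0), (7.5, 1.2), (4.0, 0.5)]))  # ~2.07 kg
```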
Figure 9 clearly demonstrates that the CLACSOON matching algorithm performed better than the DUMMY matching algorithm in terms of all of the KPIs we computed. Figures 9a-c show the computed results in relation to the timeout T and for L_d/L_p = 1, i.e., the number of riders is set to be equal to the number of drivers. For instance, for a timeout of 10 min, the CLACSOON matching algorithm leads to a decrease in the waiting time of around 55% and an increase in the driver and passenger success rates of around 4% and 10%, respectively. However, note that the relative advantage in the success rate decreases with the timeout for both drivers and riders. Figure 9d shows the total system-wide CO2 saved for a timeout of 10 min, in relation to the ratio L_d/L_p. It is notable that, for L_p = L_d, the CLACSOON algorithm leads to an increase (+21%) of the CO2 saved, which corresponds to 120 kg of CO2 emission savings over the value computed for the DUMMY matching algorithm. For the selected scenario, the overall CO2 that riders would have emitted if each one had driven his/her own car (estimated with respect to the total length of their desired routes) is approximately 1150 kg, so a participation rate of 40 users/km2 leads to 64% emission savings for the CLACSOON matching algorithm and 54% emission savings for the DUMMY algorithm. Conclusions and Future Works In this work, we presented the CLACSOON platform, which introduces some novel features with respect to the state-of-the-art. This platform has been implemented and is operating mostly in the area of Cagliari. We introduced an important novel functionality according to which the route matching algorithm contemplates the partial ridesharing mode, i.e., riders can walk to reach the driver along his/her route when driving to the destination, resulting in higher total system-wide CO2 savings. Moreover, we introduced the novel functionality according to which the matching algorithm models the position of the driver's vehicle in an urban context so that shared rides can be agreed upon after the starting of the driver's trip. An emulation system has also been implemented to analyze the performance of the proposed matching algorithm in a simulated smart urban scenario, with respect to the characteristics of the users. The performance has been evaluated considering some key parameters that affect the Quality of Experience (QoE) provided to the users, i.e., the ridesharing success rate for both driver and passenger and the waiting time. We have also analyzed the total system CO2 saved. An interesting result is that the system shows the best level of performance, i.e., the maximum of the total system-wide CO2 saved, when the system is balanced, i.e., when the number of drivers is near or equal to the number of riders. The results have also been compared with the case in which the novel proposed features were not used. We observed that with CLACSOON, in the case that the users are willing to wait up to 10 min from the service request, it is possible to achieve a decrease in the waiting time of around 55% and an increase in the driver and passenger success rates of around 4% and 10%, respectively. Additionally, the proposed features yield a further reduction of the CO2 emissions of more than 10% with respect to the traditional carpooling service.
The results presented in this paper can be used to evaluate the requirements for building a successful urban carpooling service. Future work will focus on the use of even more realistic scenarios. One aspect to be considered is the generation of trips according to mobility surveys or studies that analyze the travel demand. These data would be needed to further validate the proposed analysis.
Figure 1. The CLACSOON system: a sketch of the functional blocks.
Figure 3. The area for the case study.
Figure 4. Average travel duration within the selected area.
Table 1. Values of parameters varied during the experiment.
13,115.2
2017-02-10T00:00:00.000
[ "Engineering", "Computer Science" ]
Integral Distinguishers of the Full-Round Lightweight Block Cipher SAT_Jo Integral cryptanalysis based on the division property is a powerful cryptanalytic method whose range of successful applications was recently extended through the use of Mixed-Integer Linear Programming (MILP). Although this technique was demonstrated to be efficient in specifying distinguishers of reduced-round versions of several families of lightweight block ciphers (such as SIMON, PRESENT, and a few others), we show that this method provides distinguishers for the full-round block cipher SAT_Jo. The SAT_Jo cipher is very similar to the well-known PRESENT block cipher, which has successfully withstood the known cryptanalytic methods. The main difference compared to PRESENT, which turns out to induce severe weaknesses of the SAT_Jo algorithm, is its different choice of substitution boxes (S-boxes) and of the bit-permutation layer, made for the sake of making the cipher highly resource-efficient. Even though the designers provided a security analysis of this scheme against some major generic cryptanalytic methods, an application of the bit-based division property in combination with MILP was not considered. By specifying integral distinguishers for the full-round SAT_Jo algorithm using this method, we essentially rule out its use in the intended applications. Using a 30-round distinguisher, we also describe a subkey recovery attack on the SAT_Jo algorithm whose time complexity is about 2^66 encryptions (noting that SAT_Jo is designed to provide 80 bits of security). Moreover, it seems that the choice of bit-permutation induces weak division properties, since replacing the original bit-permutation of SAT_Jo by the one used in PRESENT immediately renders the integral distinguishers inefficient. Introduction Lightweight block ciphers play an important role in providing security in various constrained environments (referring to different applications of the Internet of Things). In recent years, many resource-efficient block ciphers have been proposed, such as MIDORI [1], PICCOLO [2], MIBS [3], PRIDE [4], PRESENT [5], and LBLOCK [6]. Recently, many new lightweight ciphers (candidates) in the second round of NIST's lightweight cryptography standardization process were also proposed [7]. However, because of restricted design rationales, certain lightweight designs sometimes fail to deliver a reasonable resistance to certain cryptanalytic methods. Although designers of new schemes provide a security analysis against the well-known attacks (e.g., integral attacks [8], differential attacks [9], and linear attacks [10]), it may happen that not all attacks are taken into consideration. In this work, we consider the lightweight block cipher SAT_Jo [11] (proposed in 2018) and search for integral distinguishers based on the division property using the MILP technique [12] introduced in [13]. Before describing the contribution of this work in more detail, we briefly summarize the development of integral attacks and the division property. Namely, in 1997, Daemen et al. [14] proposed the square attack on the block cipher SQUARE. In 2001, Lucks et al. [15] proposed a saturation attack on the TWOFISH cipher, which generalizes the square attack. Biryukov et al. [16] introduced a multiset attack on SPN-based block ciphers. Then, in 2002, Knudsen et al. [8] proposed the so-called integral analysis, which generalizes the previous three attacks. In fact, from the point of view of Boolean functions, this attack is also closely related to the higher-order differential attack proposed in [17].
Some further versions of this attack were derived in 2008 by Z'aba et al. [18], who proposed the bit-pattern-based integral attack. It has been shown that one can derive integral distinguishers by analyzing the propagation of the integral property, where one tracks the positions of active, constant, and balanced bits. More specifically, the opponent selects a set of plaintexts having a portion of bits fixed at certain positions (called constant bits), whereas the remaining bits take all possible values and are called active. Moreover, the XOR sum of the corresponding ciphertexts is computed (alternatively, a suitable subset is considered). Now, if the XOR sum at certain positions is always 0, regardless of the choice of secret key, such bits are called balanced. On the other hand, if the XOR sum changes at some positions (depending on the secret key value), such bits are commonly called unknown. This integral property can then be used to distinguish the real encryption algorithm from a random permutation. A further generalization of integral attacks was introduced by Todo [19] at EUROCRYPT 2015, by developing a cryptanalytic framework based on the so-called division property. Later, Todo and Morii [20] proposed the bit-based division property, which was utilized for the construction of a 15-round integral distinguisher for SIMON32 [21]. Finally, at ASIACRYPT 2016, Xiang et al. [13] proposed a method which combines the bit-based division property with a search for division trails that employs the MILP method. Consequently, this combination successfully overcomes the main issue of the bit-based division property, namely its relatively high time and memory complexity, which is bounded above by 2^n, where n is the block length. In what follows, we describe the contribution and structure of the subsequent sections. Our contribution: in this paper, we analyze the lightweight block cipher SAT_Jo, which is built as a substitution-permutation (SP) network and processes plaintext blocks of length 64 bits through an iterative application of 31 identical rounds, using a secret key of size 80 bits. We emphasize that the designers of this algorithm provided a security evaluation [22] of the cipher by considering some main cryptanalytic tools such as differential and linear cryptanalysis, as well as the resistance against algebraic attacks. However, to the best of our knowledge, the robustness of this scheme with respect to integral attacks has not been evaluated so far. We consider the three basic operations used in the SAT_Jo algorithm, which then give rise to a set of linear inequalities that characterize the propagation of the bit-based division property for the SAT_Jo algorithm. Similar to the analysis performed in [13], by employing the Gurobi MILP solver, an automated search for integral distinguishers is performed. Most notably, this MILP solver returns an integral distinguisher for the full-round SAT_Jo algorithm within a few seconds on a standard personal computer. Consequently, the bit-permutation of the SAT_Jo algorithm (linear layer) appears not to be well designed, and its increased efficiency turns out to be traded off against lower security margins. Though our cryptanalysis does not substantially differ from the security evaluation in [13] (performed on SIMON, PRESENT, and a few more lightweight block ciphers), the results are quite dramatic due to the possibility of specifying integral distinguishers for a full-round block cipher, which is not common.
Moreover, we show that an efficient subkey recovery attack, whose time complexity corresponds to 2^66 encryptions, can be easily mounted using our distinguisher. Outline of the paper: Section 2 mainly introduces notations and definitions related to the division property. In Section 3, we discuss the MILP method and the propagation rules of the division property. In Section 4, an MILP model for the SAT_Jo algorithm is derived, and its application is summarized in Section 5. In Section 6, the conclusion is given. Preliminaries By F_2^n, we denote the binary vector space of all n-tuples x = (x_1, ..., x_n), where x_i denotes the i-th coordinate of x. Throughout this work, the following definitions will be used. Definition 1 (bit product function [19]). For u ∈ F_2^n, the bit product function π_u: F_2^n -> F_2 is defined as π_u(x) = ∏_{i: u_i = 1} x_i. Definition 2 (algebraic normal form (ANF) [19]). A Boolean function f: F_2^n -> F_2 can be uniquely represented by its algebraic normal form (ANF) as f(x) = ⊕_{u ∈ F_2^n} a_u^f · π_u(x), where a_u^f ∈ F_2 are the binary constants that depend on u and specify f. In 2015, Todo [19] introduced the division property (as a generalization of the integral property), which was utilized to efficiently construct integral distinguishers (mainly applicable to S-box-oriented block ciphers). This concept was later refined in [20] by introducing the bit-based division property, which applies to block ciphers that do not necessarily employ S-boxes. The following definitions capture the essence of the bit-based division property. Definition 3 (ordering "≺"). For two binary vectors k = (k_1, ..., k_n) ∈ F_2^n and k* = (k*_1, ..., k*_n) ∈ F_2^n, the ordering "≺" between k and k* is defined as k ≺ k* if and only if k_i ≤ k*_i for all i = 1, ..., n. Definition 4 (bit-based division property [20]). Let X be a multiset whose elements belong to the space F_2^n. Then, X is said to satisfy the division property D^{1^n}_{k^(0), k^(1), ..., k^(q-1)} if, for every u ∈ F_2^n, the sum ⊕_{x ∈ X} π_u(x) is unknown whenever k^(i) ≺ u for some i, and the parity of π_u(x) over all x ∈ X is always even (i.e., the sum equals 0) otherwise. By 1^n, we denote the binary all-one vector of size n (i.e., 1^n = (1, 1, ..., 1)), where for simplicity the all-one vector of size one will be denoted simply by 1 instead of 1^1. To provide more clarity about the bit-based division property, consider an example multiset X ⊂ F_2^4 for which the sum ⊕_{x ∈ X} π_u(x) is exactly equal to 0 for any u ∈ {0000, 1000, 0100, 0010, 0001, 1001, 0110, 0101}. In addition, the propagation rules for the bit-based division property in SPN schemes were also derived in [19,20]. Nevertheless, since these rules are not relevant in our context, we omit their specification. Definition 5 (division trail [13]). Let f_r denote the round function of an iterated block cipher. Assume that an input multiset to the block cipher has the initial division property D^{1^n}_{{k}}, and denote the division property after propagating through f_r for i rounds by D^{1^n}_{K_i}. Thus, we have the following chain of division property propagations: {k} := K_0 -> K_1 -> K_2 -> ... -> K_r. Moreover, for any vector k*_i ∈ K_i (i ≥ 1), there must exist a vector k*_{i-1} ∈ K_{i-1} such that k*_{i-1} can propagate to k*_i by the division property propagation rules. Furthermore, a sequence (k_0, k_1, ..., k_r) with k_0 = k and k_i ∈ K_i, in which every k_{i-1} can propagate to k_i, is called an r-round division trail. Example 2 (Proposition 5 in [13]). Denote by D^{1^n}_{{k}} the division property of the input multiset of an iterated block cipher, and let f_r be its round function. Denote also by {k} := K_0 -> K_1 -> ... -> K_r the r-round propagation of the division property. Thus, the set of the last vectors in this chain, over all r-round division trails which start with k, is equal to K_r.
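The balanced-bit test that underlies these definitions can be checked empirically on a small construction. The sketch below (a minimal Python illustration, assuming a hypothetical 16-bit toy SPN that uses the PRESENT S-box as a stand-in and an arbitrary toy bit-permutation, since SAT_Jo's own tables are not reproduced here) encrypts a structure of plaintexts in which one nibble is active, XORs the ciphertexts, and reports which output positions remain zero for every tried key.

```python
import random

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]      # PRESENT S-box, used only as a stand-in
PBOX = [(5 * i) % 16 for i in range(16)]              # toy 16-bit permutation, not SAT_Jo's

def encrypt(p, round_keys):
    s = p
    for k in round_keys:
        s ^= k                                                               # key addition
        s = sum(SBOX[(s >> (4 * j)) & 0xF] << (4 * j) for j in range(4))     # S-box layer
        s = sum(((s >> i) & 1) << PBOX[i] for i in range(16))                # bit permutation
    return s

balanced = (1 << 16) - 1                              # candidate balanced positions (all bits)
for _ in range(50):                                   # intersect over 50 random keys
    round_keys = [random.getrandbits(16) for _ in range(3)]
    const = random.getrandbits(16) & 0xFFF0           # constant bits; nibble 0 is active
    xor_sum = 0
    for v in range(16):                               # 2^4 plaintexts covering the active nibble
        xor_sum ^= encrypt(const | v, round_keys)
    balanced &= ~xor_sum                              # keep positions whose XOR sum stayed 0
print([i for i in range(16) if (balanced >> i) & 1])
```

For one or two rounds of this toy construction every position is balanced, and the property decays as rounds are added; the distinguisher search described next automates exactly this kind of test without enumerating keys.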
A Brief Overview of the MILP Method. Many classical cryptanalytic methods can be converted into optimization problems, where the main goal is to achieve an optimal solution (minimum or maximum) of the objective function under certain constraints. Mixed-integer linear programming is a well-known optimization method also used in the field of cryptanalysis, in particular for finding division trails in block ciphers [13,20]. In general, the objective function is a linear function of the variables x_0, ..., x_{n-1} that is to be minimized or maximized, and the linear constraints (including the requirement that the variables x_0, ..., x_I are integers) are given as a system of linear inequalities. Notice that the MILP problem turns into an integer programming (IP) problem if I = n - 1, i.e., if all variables are required to be integers. In particular, it has been verified that IP problems, in general, are somewhat easier to solve than MILP problems of similar kind [12]. For our purpose, the parameters involved in the MILP method are all positive integers. An MILP model is denoted by M, the variables involved are denoted by M.var, the constraints are denoted by M.con, and the objective function is denoted by M.obj. A simple example of an MILP instance can be described as follows. The set of linear inequalities, denoted by L, consists of two inequalities over the variables x, y, z ∈ Z^+, and the objective function is q = x + y + 2z. The goal is then to find the maximum value of q. In this example, the domain of the objective function is determined by the two inequalities of L together with the constraint x, y, z ∈ Z^+, and the feasible solutions of the objective function in this domain are then obtained. The maximum value of q is 3, and it corresponds to (x, y, z) = (1, 0, 1). On the other hand, a closely related problem is to provide a set of points, say A, and to obtain a set of linear inequalities L (using, for instance, the inequality_generator() function in the Sage software) for which all the solutions satisfying L are included in this set of points A. For further details on how this method works, the reader is referred to Appendix A in [13], where a detailed example is elaborated. As noticed in [13], the main problem with this approach is that the number of linear inequalities returned can be quite large, which then makes the MILP instance computationally infeasible. The solution to this was provided by Sun et al. through a greedy algorithm which selects a subset of linear inequalities in L that still efficiently describes A (see [23] and Algorithm 1 in [13]). Usually, the goal of an MILP problem is to quickly find a feasible (or optimal) solution to the given problem. In the context of the bit-based division property, one constructs an MILP model such that it describes the propagation trails of the integral property. This procedure then represents an automatic search for integral distinguishers, where solutions of the MILP problem are interpreted as follows (see also [13]): (i) Each feasible solution to the system of linear inequalities corresponds to a division trail. In other words, these feasible solutions do not contain any impossible division trail. (ii) Conversely, each division trail must satisfy all linear inequalities in the system. That is, each division trail corresponds to a feasible solution of the linear inequality system. Note that, in our work, the constructed MILP model will be solved by the mathematical optimization software Gurobi (https://www.gurobi.com).
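The small example above can be reproduced with the freely available PuLP/CBC solver rather than Gurobi. The two inequalities in the sketch below are placeholders (the original set L is not reproduced above); they were chosen only so that the optimum matches the stated value q = 3 at (x, y, z) = (1, 0, 1).

```python
from pulp import LpMaximize, LpProblem, LpVariable, value

prob = LpProblem("toy_milp", LpMaximize)
x = LpVariable("x", lowBound=0, cat="Integer")
y = LpVariable("y", lowBound=0, cat="Integer")
z = LpVariable("z", lowBound=0, cat="Integer")

prob += x + y + 2 * z, "q"            # objective function q
prob += 2 * x + y + 2 * z <= 4        # placeholder inequality (not from the paper)
prob += x + 2 * y + 2 * z <= 3        # placeholder inequality (not from the paper)

prob.solve()
print(value(prob.objective), value(x), value(y), value(z))   # expected: 3.0 1.0 0.0 1.0
```

The same model/variable/constraint pattern carries over to the gurobipy interface when a Gurobi licence is available.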
Bit-Based Division Property in Terms of MILP. The main reason behind the use of MILP tools in the context of the bit-based division property is to improve the time complexity when searching for division trails. In essence, a division trail of an encryption algorithm is obtained by converting the basic operations (involved in the round function) into corresponding linear inequalities, which satisfy the propagation rules of the division property. Initial division property and stopping rule: let us consider a multiset X with division property D^{1^n}_K, and let e_i denote the vector of length n (also called a unit vector) whose i-th coordinate is the only nonzero coordinate. In [13], it was illustrated how to determine the existence of an r-round integral distinguisher by checking whether K_{r+1} contains all e_i (i ∈ {1, 2, ..., n}). More precisely, if one can find all the unit vectors e_i in the set K_{r+1} (thus, each e_i ∈ K_{r+1}), then there does not exist any r-round integral distinguisher. Equivalently, if there exists some e_i such that e_i ∉ K_{r+1}, then one can find an r-round integral distinguisher. In terms of Definition 4, the previously described termination test (condition) for the division property can be explained as follows. Let Y denote the output of r encryption rounds performed on the input set X. If Y does not have any useful integral property, then the XOR sum of all vectors of Y is unknown at each bit position. This means that ⊕_{y ∈ Y} π_{e_i}(y) is unknown for any unit vector e_i ∈ F_2^n, where i ∈ {1, 2, ..., n}. On the contrary, if there exists at least one unit vector e_i which does not belong to K, then the value at the i-th position of ⊕_{y ∈ Y} π_{e_i}(y) is always equal to zero, i.e., we can find an r-round integral distinguisher. For an iterated block cipher with a round function f_r, let D^{1^n}_{{k}} denote the division property of an input multiset. Also, let {k} := K_0 -> K_1 -> ... -> K_r be the r-round division property propagation, where K_r denotes the set of last vectors of all r-round division trails which start with k. Now, if we denote an r-round division trail by (a_0^0, ..., a_{n-1}^0) -> ... -> (a_0^r, ..., a_{n-1}^r), then the set of linear inequalities (which constitutes the MILP model) depends on the variables a_i^j ∈ F_2 (i ∈ [0, n-1], j ∈ [0, r]). In addition, the objective function is set to Obj: Min (a_0^r + a_1^r + ... + a_{n-1}^r). Notice that the feasible solutions of the given MILP model are all division trails and, furthermore, if K_i does not contain the all-zero vector, then the objective function will never take the zero value. At the end of the search, the balanced and unknown positions of the integral distinguisher can be determined. More precisely, those unit vectors e_i which are not in K_r indicate the balanced positions in the distinguisher. When performing integral analysis on a given block cipher based on the division property and using the MILP model (whose round function consists of a composition of the S-box and linear layer), the search for an effective integral distinguisher is the main goal of the attack. In general, this analysis can be roughly divided into the following three steps: Step 1: determine the division property of the initial input, that is, the specific number of active and constant bits of the input. Step 2: using the division property mentioned in Step 1, the MILP model of the division path through the round function is constructed according to the structural characteristics of the cryptographic algorithm itself, including both the linear and the nonlinear layer. Step 3: let the bit-based division property of r identical encryption rounds of a given block cipher, using the MILP model, be denoted by M.
In order to obtain M, one needs to consider the r-round propagation of the bit-based division property in the MILP model of the single round function operation. This is basically done by using the division trail specified by (a_0^0, ..., a_{n-1}^0) -> ... -> (a_0^r, ..., a_{n-1}^r). As previously mentioned, the system of linear inequalities L depends on the binary variables a_i^j, where i = 0, ..., n-1 and j = 0, ..., r (thus, the MILP becomes a 0-1 integer programming problem). However, many of these variables are automatically removed (assigned the constant value 0) when running Algorithm 3 in [13]. This algorithm uses the set of inequalities L and the objective function to find feasible solutions of the MILP instance M, and the model is constantly updated by adding new constraints with respect to a_i^j, more precisely by setting a_i^j = 0 when needed. The reader is referred to [13] for further details on how Algorithm 3 works. Notice, however, that the MILP instance that models the search for bit-based distinguishers is executed several times (this is an intrinsic property of Algorithm 3 in [13]), since we need to check whether all the unit vectors are included in K_{r+1} as a stopping rule. Finally, if the solver can find a feasible solution for a particular MILP instance, then the existence of an r-round distinguisher for a given cipher is established (in our case, for the SAT_Jo encryption algorithm). Since some specific cryptographic operations such as key addition and adding a round constant do not affect the propagation of the division property, these operations will not be considered here. An MILP Model of SAT_Jo Algorithm In this section, we describe the process of modelling the SAT_Jo algorithm as an MILP instance for the purpose of specifying integral distinguishers. A Description of SAT_Jo. The schematic structure of the SAT_Jo block cipher is shown in Figure 1, whereas a precise description of its encryption process is given in Algorithm 1. The round function of SAT_Jo is similar to the one in the PRESENT block cipher, and it is defined as a composition of the S-box layer (applying 16 times the S-box defined in Table 1) and the bit-permutation function defined in Table 2. As mentioned earlier, SAT_Jo iterates the round function 31 times, where in addition the round key is applied at the end (as a post-whitening step). We omit the definition of the newRoundKey function because it is not important for the division property. Remark 1. Notice that the permutation layer uses a simple rule P(x + i) = P(x) + 8i mod 64, which simplifies the design but at the same time induces serious security issues (bad diffusion properties). An Integral Attack on SAT_Jo Using Division Property. In order to apply the MILP method, one first has to derive a set of linear inequalities Ax ≤ b (defined in Section 3.1, where A = [a_ij]) to describe the propagation of the division property based on the structure of the round function. We note that both the S-box and the permutation layer (P-box) affect the division property when deriving the MILP model. On the other hand, the division property is not affected by the AddRoundKey step in Algorithm 1, and thus the MILP model of a round function is constructed without considering this operation.
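The starting point for the S-box model derived in the next subsection is the algebraic normal form of the S-box, which can be computed from its lookup table with a binary Moebius transform. The sketch below uses the PRESENT S-box as a stand-in, since Table 1 of SAT_Jo is not reproduced here, and the helper name anf_coeffs is illustrative only.

```python
# PRESENT S-box as a stand-in for Table 1 of SAT_Jo
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def anf_coeffs(bit):
    """ANF coefficients of output bit `bit` via the binary Moebius transform."""
    f = [(SBOX[x] >> bit) & 1 for x in range(16)]   # truth table of that output bit
    for i in range(4):                              # in-place fast Moebius transform
        step = 1 << i
        for x in range(16):
            if x & step:
                f[x] ^= f[x ^ step]
    return f                                        # f[u] = coefficient of the monomial x^u

for bit in range(4):
    coeffs = anf_coeffs(bit)
    # u is read as a bit mask over the input bits x0..x3 (x0 = least significant bit)
    print(f"y{bit}:", [u for u in range(16) if coeffs[u]])
```

From the ANF (or directly from the truth table) one can enumerate the valid division trails of the S-box and pass them to inequality_generator(), as described next.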
Modelling S-Box of SAT_Jo. Now, in order to derive the set of inequalities for the S-box layer of SAT_Jo, we only have to consider the S-box defined in Table 1. Let x = (x_0, x_1, x_2, x_3) denote the input of this S-box and y = (y_0, y_1, y_2, y_3) denote its corresponding output. The ANF of the S-box (given in Table 1) expresses each output bit y_i as a sum of monomials in x_0, ..., x_3, where modulo-two addition is performed. Then, utilizing Algorithm 2 in [13], we obtain 45 division trails (shown in Table 3) of the SAT_Jo S-box. Each division trail of a 4-bit S-box can be viewed as an 8-dimensional vector in F_2^8; thus, the 45 division trails form a subset T of F_2^8. Next, by taking T as an input to the inequality_generator() function of the SageMath software, a set of 162 linear inequalities is returned. Using the greedy selection algorithm of [23], this set is reduced to 10 inequalities. If the division path through the S-box is described by (a_0, a_1, a_2, a_3) -> (b_0, b_1, b_2, b_3), these 10 inequalities constrain the corresponding eight binary variables. In order to obtain the solutions of the linear inequalities restricted to F_2^8, we only need to specify that all variables can take values in {0, 1} only. Modelling the Permutation Layer of SAT_Jo. In order to describe the permutation layer as an MILP instance, some intermediate variables are introduced to describe the basic operations in the permutation layer. Since the design of the permutation layer of the SAT_Jo encryption algorithm is relatively simple and described on the bit level in [5] (bit i of the internal state is moved to bit position j in accordance with Table 2 and follows the rule given in Remark 1), the division path of the input/output through the permutation layer is easily embedded in the MILP model. A Search Algorithm for Integral Distinguishers for SAT_Jo Algorithm. To summarize the whole procedure, an automatic search algorithm for integral distinguishers of SAT_Jo is given by Algorithm 2 (which is similar to Algorithm 3 in [13]). Note that the notation M(L, Obj) (used in Algorithm 2) denotes the MILP model M for r rounds, composed of the set of inequalities L and an objective function Obj. Also, the set of output bits after r rounds is denoted by S = {a_0^r, ..., a_63^r}. The Results By specifying and solving the MILP instance that models the full-round SAT_Jo algorithm (having 31 encryption rounds), we can specify different integral distinguishers. Table 4 shows how many active bits can be set in the input and how many balanced bits are obtained in the output for the SAT_Jo algorithm. Note that all these results are practically confirmed on a personal computer within a few seconds. Moreover, integral distinguishers could be found for up to 151 encryption rounds, which indicates a serious design flaw regarding the choice of bit-permutation employed in the SAT_Jo algorithm. Recall that, for active bits at the input, denoted by "a", we essentially take all possible input values at these positions. For instance, if we have 5 active bits in the input, then in total we require 2^5 plaintexts that cover all the possible values at these specific positions. Other input bits that are kept fixed are denoted by "c". The balanced bits at the output, denoted by "b", correspond to those positions of the ciphertexts whose XOR sum is always zero, whereas unbalanced cases are denoted by "?". Table 5 shows other cryptanalytic results for SAT_Jo. The key recovery attack on SAT_Jo: in order to perform a key recovery attack on the full-round SAT_Jo cipher, one can use the 30-round distinguisher specified in the first row of Table 4. More precisely, a set of 2^2 plaintexts which satisfies the input of the integral distinguisher is selected.
Moreover, one needs to guess the last round subkey bits (64 bits in total), which are then used together with the ciphertexts to calculate the output of the 30th round (the so-called one-round partial decryption). For a guessed 64-bit subkey k_31, if the XOR sum of the state bits at the output of the 30th round is zero, then it is considered a valid candidate for the correct subkey; otherwise, the guessed value is considered incorrect. In order to identify the correct one among these candidates, one selects another set of four plaintexts P_1, ..., P_4 (again varying the first two bits) and obtains the corresponding ciphertexts C_1, ..., C_4. For each candidate subkey, the new ciphertexts are partially decrypted and the balancedness check is repeated, so that only the correct subkey survives. ALGORITHM 2: An automatic search of integral distinguishers for the SAT_Jo algorithm. Step 1: after 31 rounds of encryption are performed on the 4 selected plaintexts according to the 30-round integral distinguisher, the opponent obtains 4 ciphertexts. Step 2: the opponent guesses the 64-bit last-round subkey k_31 and partially decrypts the ciphertexts to the output of the 30th round. Step 3: similarly, the opponent guesses the 4 key bits k_{0~1}, k_{2~3} so that she can decrypt the 30th-round data state. In this case, she can further calculate the XOR sum of the state bits at the output of the 30th round. This attack requires 8 chosen plaintexts, and its time complexity is about 2^{64+2} = 2^66 encryption operations. The success rate of this attack is 1. Notice that the master key is of length 80 bits, and after recovering the 64 bits of k_31, a similar procedure can be performed to retrieve the other subkeys. Remark 2. The simulations have been conducted on a computer with the following specification: Intel(R) Core(TM) i5-8300H CPU @ 2.30 GHz, 8 GB RAM, 64-bit Windows 10. In addition, the Python programming language, the Sage software, and the Gurobi solver have been used to implement the search algorithm. Conclusion We remark that the choice of bit-permutation used in the SAT_Jo algorithm appears to be the main reason for the existence of full-round integral distinguishers. Indeed, replacing the bit-permutation used in the SAT_Jo algorithm by the one employed in the PRESENT block cipher implies that there are no integral distinguishers for the full-round SAT_Jo. In particular, if the original permutation layer of SAT_Jo is replaced by the bit-permutation given in Table 6, one can verify that the SAT_Jo variant cipher achieves a quite good integral property. More precisely, an integral distinguisher can then be specified for at most 9 encryption rounds. The main weakness of the SAT_Jo algorithm, as already mentioned in Remark 1, is an inappropriate choice of its bit-permutation, which does not provide sufficient diffusion. The permutation layer of SAT_Jo uses the simple rule P(x + i) = P(x) + 8i mod 64, which simplifies the design but at the same time induces serious security issues (bad diffusion properties). The new permutation layer (see Table 6), by contrast, uses the different rule P(x + i) = P(x) + 4i mod 64. Data Availability The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest The authors declare that there are no conflicts of interest.
6,819.4
2021-09-18T00:00:00.000
[ "Computer Science", "Mathematics" ]
Mechanism of Power Quality Deterioration Caused by Multiple Load Converters for the MVDC System Medium-voltage direct current (MVDC) systems are widely used to ship power-distributed systems, wind farms, and photovoltaic power plants. With the increase of load converters interfacing into the MVDC system, the power quality deteriorates. Few research studies focused on the factors affecting the MVDC power quality, and effects caused by multiple load converters are often neglected. In this study, the mechanism of power quality deterioration caused by interfacing multiple load converters on the MVDC system has been discussed. The impedance model of the MVDC system is developed with the state-space averaging method and the small-signal analysis method. A three-level H-bridge DC/DC converter is employed as the load converter. The results by the analysis of the impedance model show that the more the load converters connect to the MVDC system, the more fragile the MVDC system is to background harmonics. Simulation cases are implemented to verify this conclusion. INTRODUCTION In recent years, medium-voltage direct current (MVDC) systems have been gradually applied to ship power-distributed systems (Su et al., 2016;Mo and Li, 2017). The rated voltage levels of the MVDC system include 1.5, 3, 6, 12, 18, 24, and 30 kV. The power quality of the MVDC system starts to receive attention. The research on this field mainly focused on the measurement and evaluation of the power quality (Crapse et al., 2007;Ouyang and Li, 1646;Shin et al., 2004) and the way to improve it (Xie and Zhang, 2010;Puthalath and Bhuvaneswari, 2018;Arcidiacono et al., 2007). Few references discuss the factors that degrade the power quality. The reference by Steurer et al., (2007) explored the impact of the pulsed power charging loads on power quality. This study used high-precision modeling and simulation to analyze the problem without a deeper theoretical analysis. The reference by Sulligoi et al., (2017) mentioned that the multi-load converter connected to the MVDC system may lead to unstable bus voltage and deteriorate the power quality, yet the impact mechanism was not explained in detail. On this basis, this study discusses the mechanism of the multi-load converter's influence on power quality. In addition to the influence of the number of load converters on the power quality, the characteristics of the load converter itself are also considered. At present, there are mainly three types of converters used in MVDC systems: the modular multilevel converter (MMC), three-level DC converter, and dual active bridge (DAB) converter. The power switches in the MMC structure withstand less voltage stress and generate less electromagnetic interference (Mo et al., 2015;Kenzelmann et al, 2011;Ferreira, 2013), which is conducive to better power quality. The application of wide bandgap devices such as SiC MOSFETs can reduce the stages of the MMC, thereby reducing the complexity of the MVDC system (Zhao et al., 2020;Zhao et al., 2021). The DAB has a good soft-switching performance and can achieve higher efficiency (Yanhui Xie et al., 2010;Zhao et al., 2017). The circuit topology of the three-level DC converter is relatively simple, easy to control, and more stable (Xiao et al., 2014;Xinbo Ruan et al., 2008). These three types of converters have their own characteristics. As for load converters, they can all be regarded as constant power loads with negative resistance, which introduce the system instability concern. 
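The negative-resistance behaviour mentioned above follows directly from linearizing a constant power load around its operating point; a minimal numerical check, with purely hypothetical ratings, is sketched below.

```python
P = 1.0e6          # hypothetical constant load power, W
V0 = 6.0e3         # hypothetical DC bus operating voltage, V

# A tightly regulated load converter draws i = P / v, so a small rise in voltage
# reduces the current it draws.  Linearizing around V0 gives di/dv = -P / V0**2.
i0 = P / V0
r_inc = -V0**2 / P                         # incremental (small-signal) resistance, ohm
print(f"operating point: {i0:.1f} A, incremental resistance: {r_inc:.1f} ohm")

# crude numerical check of the slope
dv = 1.0
di = P / (V0 + dv) - P / V0
print(f"numerical dv/di = {dv / di:.1f} ohm")
```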
Prior to analyzing the influence of the network formed by the connection of multiple load converters on power quality, a suitable system model should be established. Many references have proposed modeling methods for MVDC systems. The reference by Khan et al., (2017) divided the MVDC system into three parts, including the power system, load system, and energy storage system, and established a detailed transient simulation model. The reference by Ji et al., (2018) described the system with an adjacency matrix and proposed a hierarchical control based on the system matrix. The reference by Tan et al., (2017) proposed a convex model for MVDC systems to study the transmission losses. The modeling methods mentioned in the studies by Khan et al., (2017); Tan et al., (2017); and Ji et al., (2018) were all intended for specific research purposes and could not be used to analyze the system state in general. References by Shi et al., (2015) and Bosich et al., (2017) used the state-space averaging model and the small-signal analysis method to analyze the dynamic process of the system and then proposed a corresponding control strategy to maintain the stability of the bus voltage. Among them, the load converter was modeled as a constant power load, represented by a controlled current source connected in parallel with a capacitor. The parallel connection of multiple constant power load models is equivalent to a single constant power load model, so this representation is not appropriate for investigating the interactions of different load converters. The reference by Liu et al., (2017) utilized impedance modeling to analyze the stability and harmonics of the MVDC system, including the power generation system and the motor drive system, while the influence of multiple load converters was again ignored. Although the models in the references by Shi et al., (2015); Bosich et al., (2017); Liu et al., (2017); and Sulligoi et al., (2017) cannot be used to describe the effects of multiple load converters, their modeling methods can be used to analyze the system state. In view of the above problems, this study explores the mechanism by which multiple load converters affect power quality, based on the impedance network analysis method. An MVDC system with four load regions is taken as an example. A three-level H-bridge DC converter is used as the load converter. The state-space averaging method and the small-signal analysis method are used to establish the impedance model of the load converter; then, the impedance network of the system is established. By comparing the system impedance spectrum under different numbers of load converters, the influence of the number of load converters on power quality is revealed. The contributions of this study are as follows. (1) This study reveals for the first time that an increase in the number of load converters will increase the probability of background harmonics being amplified in the MVDC system and make the system more susceptible to low-frequency background harmonics. (2) The impedance model of the MVDC system is established by using the state-space averaging method and the small-signal analysis method to analyze the spectrum change of the system resonance points, and the mechanism of the power quality deterioration of the MVDC system caused by multiple load converters is revealed. The rest of this study is organized as follows. A modeling method for MVDC systems is proposed in Section 2. In Section 3, the input impedance model of the three-level H-bridge DC converter is introduced.
On this basis, the influence of load converters on power quality is analyzed in Section 4, and the mechanism of the influence is verified in Section 5. Section 6 concludes the full text. Figure 1 shows the network architecture of the MVDC system. Its configuration includes the following parts: 1) one power generation module (PGM); 2) one MVDC system bus; and 3) one to four load areas. The PGM is connected to the bus through a three-phase rectifier bridge, and each load area is connected to the bus through a three-level H-bridge DC converter. It is assumed that there are background harmonics on the output side of the three-phase rectifier bridge, which affect the power quality of the DC bus. To simplify the analysis, the output impedance of the PGM is ignored, and the load on the output side of the three-level H-bridge is replaced by a pure resistance. Finally, the small-signal model of the MVDC system shown in Figure 2 is obtained. The line inductance and resistance are represented by the series impedance Z_line in Figure 2. The input impedance of the load converter can be derived from Equations (3) and (4). INPUT IMPEDANCE MODEL OF THE THREE-LEVEL H-BRIDGE DC CONVERTER The topology of the three-level H-bridge DC converter is shown in Figure 3. C_g is the voltage equalizing capacitor on the output side. R_C is the equivalent resistance of the voltage equalizing capacitor. S_1 ~ S_8 are the switching tubes on the inverter side. D_c1 ~ D_c4 are the clamping diodes. The transformation ratio of the intermediate frequency transformer T_m is 1:N_T. L and C are the output filter parameters, and R is the load. u_d is the input voltage, and i_d is the input current. i_L is the current through L. u_o is the output voltage. u_P is the voltage of the upper-end equalizing capacitor, and u_N is the voltage of the lower-end equalizing capacitor. u_T1 is the primary side voltage of the transformer, and its direction is specified as shown in Figure 3. In this model, it is assumed that the bandwidth of the voltage-equalizing control loop is high, so the influence of this control loop can be ignored. As a result, the switching devices in the figure are all ideal devices, and the transformer is an ideal transformer. Through the analysis, the working waveforms of the converter can be obtained, as shown in Figure 4, and the simplified model of Figure 3 can be obtained, as shown in Figure 5 (Zhao et al., 2017). According to Figures 4 and 5, the state equations for the eight operating states (a~h) of the three-level H-bridge converter can be listed in Table 1. Based on the previous assumptions, Equation (1) can be obtained. Assuming that the converter is controlled by a single voltage loop, the relationship between the conduction angle d_α and the output voltage u_o can be expressed as in Equation (2), where k_p and k_i are the parameters of the PI controller and u_o^p is the reference of the output voltage. With the state-space averaging method and the small-signal analysis method, the transfer function from the input voltage to the input current can be obtained as Equation (3). Therefore, the input impedance of the three-level H-bridge converter can be expressed as in Equation (4).
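How such an input impedance shapes the disturbance seen at the bus can be sketched numerically. The snippet below is only an illustration: the converter input model, the line impedance, and all numerical values are placeholders rather than the Table 2 parameters, and the load areas are simply lumped in parallel at the bus. It evaluates the voltage-divider ratio from the source-side harmonic to the bus (the analogue of the transfer functions introduced in the next section) and reports where its magnitude peaks as the number of connected load areas changes.

```python
import numpy as np

# placeholder parameters, not the values of Table 2
L_LINE, R_LINE = 10e-6, 5e-3        # source-side line inductance / resistance
L_F, C_F = 1e-3, 2e-3               # crude converter input filter
R_CPL = -36.0                       # negative incremental resistance of the regulated load

f = np.logspace(1, 4, 4000)         # 10 Hz to 10 kHz
s = 1j * 2 * np.pi * f

def z_converter(s):
    """Input impedance of one load area: filter inductor in series with C parallel to R_CPL."""
    return s * L_F + 1.0 / (s * C_F + 1.0 / R_CPL)

z_line = R_LINE + s * L_LINE
for n in (1, 2, 3, 4):
    z_n = z_converter(s) / n                       # n identical load areas in parallel
    divider = z_n / (z_line + z_n)                 # ratio of bus voltage to source harmonic
    peak = np.argmax(np.abs(divider))
    print(f"n = {n}: divider magnitude peaks near {f[peak]:.0f} Hz")
```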
INFLUENCE OF THE LOAD CONVERTER ON POWER QUALITY In order to analyze the influence of multiple load converters on power quality, the input voltage u_t and input current i_t of the load area closest to the PGM (hereinafter referred to as load area 1) are taken as an example for analysis. The equivalent input impedance of n load regions is denoted by Z_n. Thus, the expression of Z_n can be deduced from Figure 2, and u_t and i_t can be expressed as in Equations (6) and (7), where u_g is the background harmonic and n ranges from 1 to 4. Equations (6) and (7) reflect that the input voltage and current in load region 1 are affected by its self-impedance, the impedance of the other load regions, and the background harmonics. The transfer function from u_g to u_t is denoted by T_U(s), and the transfer function from u_g to i_t is denoted by T_I(s); their expressions follow directly from Equations (6) and (7). The spectral changes of T_U(s) and T_I(s) reflect the degree to which multiple load converters influence power quality. The two transfer functions are calculated for different n, and their Bode plots are shown in Figure 6. The parameters of the converter are listed in Table 2. It can be seen from Figure 6 that, as the number of load converters increases, the number of resonance points in the Bode diagram increases, and the original resonance peak frequency becomes lower. A resonance peak in the figure indicates that background harmonics are amplified at that frequency. More resonance points mean that the system is more susceptible to the influence of background harmonics, and lower resonant peak frequencies mean that the system is more susceptible to low-frequency disturbances, which are often difficult or expensive to filter out. In order to verify the above analysis results, a simulation model of the MVDC system based on the MATLAB/Simulink platform is established with the architecture shown in Figure 1. The PGM is replaced by an ideal voltage source, and broad-spectrum white noise is superimposed on the ideal voltage source as background harmonics. The number of load zones varies from 1 to 4. The voltage and current on the input side of load area 1 are measured, and the measured data are subjected to fast Fourier transform (FFT) analysis (Li, 2021a; Li, 2021b; Li, 2022). The analysis results are shown in Figures 7 and 8. It can be seen from Figures 7 and 8 that the ripple frequencies with high content in the simulation results are basically consistent with the resonance point frequencies in the Bode plots obtained from T_U(s) and T_I(s). When one load zone is connected to the system, the ripple content at the frequency of 470 Hz is the highest. When two load areas are connected to the system, there are two frequencies with higher ripple content, namely 270 and 750 Hz. As the number of load zones increases, the number of ripple components with higher content gradually increases, while the frequencies of the high-content ripples become lower. CONCLUSION This study analyzes the mechanism of power quality deterioration caused by multiple load converters connected to the MVDC system. In this study, the load converter is modeled and analyzed by the state-space averaging method and the small-signal analysis method, and then the impedance network model of the MVDC system is established. When the number of load converters changes, the voltage and current on the input side of load area 1 are affected differently by the background harmonics. Finally, the influence of the number of load converters on power quality is analyzed. Two main conclusions are drawn: (1) As the number of load converters increases, background harmonics are more likely to be amplified in the MVDC system. (2) The increase of load converters makes the MVDC system more susceptible to low-frequency background harmonics.
DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
3,281.6
2022-03-17T00:00:00.000
[ "Engineering", "Physics" ]
A Multi-Scale Wavelet 3D-CNN for Hyperspectral Image Super-Resolution Super-resolution (SR) is significant for hyperspectral image (HSI) applications. In single-frame HSI SR, how to reconstruct detailed image structures in high resolution (HR) HSI is challenging, since there is no auxiliary image (e.g., an HR multispectral image) providing structural information. Wavelets can capture image structures in different orientations, and an emphasis on predicting the high-frequency wavelet sub-bands is helpful for recovering the detailed structures in HSI SR. In this study, we propose a multi-scale wavelet 3D convolutional neural network (MW-3D-CNN) for HSI SR, which predicts the wavelet coefficients of HR HSI rather than directly reconstructing the HR HSI. To exploit the correlation in the spectral and spatial domains, the MW-3D-CNN is built with 3D convolutional layers. An embedding subnet and a predicting subnet constitute the MW-3D-CNN; the embedding subnet extracts deep spatial-spectral features from the low resolution (LR) HSI and represents the LR HSI as a set of feature cubes. The feature cubes are then fed to the predicting subnet. There are multiple output branches in the predicting subnet, each of which corresponds to one wavelet sub-band and predicts the wavelet coefficients of HR HSI. The HR HSI can be obtained by applying the inverse wavelet transform to the predicted wavelet coefficients. In the training stage, we propose to train the MW-3D-CNN with the L1 norm loss, which is more suitable than the conventional L2 norm loss for penalizing the errors in different wavelet sub-bands. Experiments on both simulated and real spaceborne HSI demonstrate that the proposed algorithm is competitive with other state-of-the-art HSI SR methods. Introduction Hyperspectral image (HSI) is collected in contiguous bands over a certain electromagnetic spectrum, and the spectral and spatial information in HSI is helpful for identifying and discriminating different materials in the scene. HSI has been applied to many fields, including target detection [1], environment monitoring [2], and land-cover classification [3]. However, the spatial resolution of HSI is often limited due to the trade-off between the spatial and spectral resolutions. Some Earth Observation applications, such as urban mapping [4] and fine mineral mapping [5], require high resolution (HR) HSI. Therefore, enhancing the spatial resolution of HSI is of significance for the application of HSI. There are several ways to enhance the spatial resolution of HSI. Some auxiliary images, e.g., panchromatic images and multispectral images (MSI), often have higher spatial resolution [6]. Hyperspectral pan-sharpening reconstructs HR HSI by fusing the low resolution (LR) HSI with an HR panchromatic image taken over the same area at the same time (or a similar period). Pan-sharpening could be implemented by various fusion schemes. The main contributions of this work are summarized as follows: • Unlike the previous deep learning models that reconstruct HR HSI directly [29][30][31][32], the proposed network predicts the wavelet coefficients of the latent HR HSI, which is beneficial for reconstructing detailed textures in HSI. • In the predicting subnet, different branches corresponding to different wavelet sub-bands are trained jointly in a unified network, and the inter sub-band correlation can be utilized. • The network is built based on 3D convolutional layers, which could exploit the correlation in both the spectral and spatial domains of HSI.
• Instead of the conventional L2 norm, we propose to train the network with the L1 norm loss, which is fit for both low-and high-frequency wavelet sub-bands. The remainder of the paper is organized as follows. In Section 2, we introduce some related works. In Section 3, we present the proposed HSI SR method, including the architecture and training of the network. The experimental results are given in Section 4. In Section 5, we present some analyses and discussion on the experiment. In Section 6, we conclude with observations specific to the potential of our approach to single-frame HSI SR. CNN Based Single Image SR CNN could extract features from the local neighborhood of image by convolving with trainable kernels, which makes it easy to exploit spatial correlation in an image. CNN has become the most popular deep learning model in many image processing tasks, particularly in image SR [40][41][42][43][44][45][46]. In [38], Dong et al. proposed to learn the mapping between the LR and HR images using a CNN. The HR image can be inferred from its LR version with the trained network. Inspired by this idea, several CNN based single image SR methods have been proposed [41][42][43][44][45][46]. In [41], a very deep CNN for SR was proposed and trained with a residual learning strategy. Trainable parameters would drastically increase in very deep CNN, and a recursive CNN was proposed to address this issue by sharing the parameters of different layers in [42]. Most CNN SR methods employed the high-level features for reconstruction and neglected the low-and mid-level features. In [43,44], the authors proposed a residual dense network for SR, in which layers were densely connected to make full use of the hierarchical features. To address the challenge of super-resolving an image by large factors, the authors in [45] proposed progressive deep learning models to upscale the image gradually. Similarly, a Laplacian Pyramid SR CNN (LapSRN) was proposed in [46], which could progressively reconstruct the high-frequency details of different sub-bands of the latent HR image. Application of Wavelet in SR Wavelet describes image structures in different orientations. Employing wavelet in image SR, particularly the high-frequency wavelet sub-bands, is beneficial for preserving the detailed image structures. Many wavelet based SR methods have been proposed [47][48][49][50]. In [47], the LR image was decomposed into different wavelet sub-bands, the high-frequency sub-bands were interpolated and then combined with the LR image to generate HR image via inverse wavelet transformation. Similarly, the LR image was decomposed by two types of wavelets, and the high-frequency sub-bands of the two wavelets were then combined and followed by inverse wavelet transformation [48]. In [49,50], edge prior was utilized in the high-frequency sub-bands estimation to make the SR result sharper. Wavelet could also be used in CNN to better infer image details and enhance the sparsity of the network. For example, in [34,35], the mapping between the LR and HR images was learned by a CNN in wavelet domain for single image SR. However, these SR methods were designed for a single image, therefore applying these methods to HSI in band-by-band fashion would neglect the spectral correlation in HSI and lead to high spectral distortion. Multi-Scale Wavelet 3D CNN For HSI SR In this study, we transform the HSI SR problem into predicting the wavelet coefficients of HSI. 
In this section, we first introduce some basics on wavelet package analysis and 3D CNN, and then we propose the MW-3D-CNN for HSI SR, including the architecture and the loss function. Wavelet Package Analysis Wavelet package transformation (WPT) could transform an image into a series of wavelet coefficient sub-bands with the same size. An example of WPT with the Haar wavelet function is given in Figure 1. The one-level decomposition is shown in Figure 1b. It can be found that the low-frequency sub-band (i.e., the top-left patch) describes the global topology. The detailed structures in vertical, horizontal, and diagonal orientations can be captured by the different high-frequency sub-bands (i.e., the rest of the patches). By repeating the decomposition on each sub-band recursively, we can obtain higher-level WPT results, such as the two-level decomposition in Figure 1c. It is noted that the decomposition is applied to both the low- and high-frequency sub-bands, so the sub-bands of a higher-level decomposition are of the same size. The original image can be reconstructed from these sub-bands via inverse WPT.
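Both the decomposition and its inverse are available off the shelf; a minimal sketch with the PyWavelets package, applied to a random array standing in for one HSI band, is given below.

```python
import numpy as np
import pywt

band = np.random.rand(64, 64)   # stand-in for a single HSI band

# one-level wavelet packet decomposition with the Haar wavelet
wp = pywt.WaveletPacket2D(data=band, wavelet="haar", mode="periodization", maxlevel=1)
subbands = {node.path: node.data for node in wp.get_level(1)}
print({path: data.shape for path, data in subbands.items()})   # 'a', 'h', 'v', 'd', each 32 x 32

# the band is recovered (up to numerical error) by the inverse transform
reconstructed = wp.reconstruct(update=False)
print(np.allclose(reconstructed, band, atol=1e-8))
```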
3D CNN For HSI, both the spatial and spectral domains should be exploited in feature extraction. By convolving with 3D kernels, a 3D CNN could extract features from different domains of volumetric data. Following the formulation in [51], the activity of the k-th feature cube in the d-th layer can be written as F_{d,k}(x, y, z) = g( b_{d,k} + Σ_c Σ_{u=0}^{U-1} Σ_{v=0}^{V-1} Σ_{w=0}^{W-1} ω_{d,k,c}(u, v, w) · F_{d-1,c}(x+u, y+v, z+w) ), where c ranges over the set of feature cubes in the (d-1)-th layer connected to the k-th feature cube in the d-th layer, ω_{d,k,c}(u, v, w) is the value at position (u, v, w) of the 3D kernel associated with the k-th feature cube, and the size of the 3D kernel is U × V × W. F_{d,k}(x, y, z) is the value at position (x, y, z) of the k-th feature cube in the d-th layer, b_{d,k} is the bias term, and g(·) is a non-linear activation function such as the Rectified Linear Unit (ReLU) or the Sigmoid function. By convolving with different kernels, several 3D feature cubes can be extracted in each layer of the 3D CNN, as shown in Figure 2b. Pixels of the spatial neighborhood and adjacent bands are involved in 3D convolution, so the spectral-spatial correlation in HSI can be jointly exploited in feature extraction [52,53].
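In a deep learning framework this operation is a standard layer. The following minimal PyTorch sketch applies one 3D convolution to a toy LR HSI patch; the kernel size and the number of feature cubes are assumptions rather than the settings of the paper.

```python
import torch
import torch.nn as nn

# toy LR HSI patch: batch of 1, one input "channel", 16 bands, 16 x 16 pixels
x = torch.randn(1, 1, 16, 16, 16)

# the 3D kernels slide jointly over the spectral dimension and the two spatial
# dimensions, so each feature cube mixes neighbouring bands and neighbouring
# pixels; padding keeps the feature cubes the same size as the input
conv = nn.Conv3d(in_channels=1, out_channels=32, kernel_size=3, padding=1)
features = torch.relu(conv(x))
print(features.shape)   # torch.Size([1, 32, 16, 16, 16]): 32 spectral-spatial feature cubes
```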
Network Architecture of MW-3D-CNN The correlation exists not only in the spatial and spectral domains, but also among the wavelet package sub-bands of HSI. Considering the inter wavelet package sub-band correlation, an embedding subnet is designed to learn shared features for the different wavelet package sub-bands. These shared features are then fed to a predicting subnet to infer the wavelet package coefficients. Both the embedding and predicting subnets are built based on 3D convolutional layers, which could naturally exploit the spectral-spatial correlation in HSI. The overall architecture of MW-3D-CNN is shown in Figure 3. Embedding Subnet The embedding subnet projects the LR HSI into a deep feature space and represents it as a set of feature cubes that are shared by the different wavelet package sub-bands. 3D convolutional layers and non-linear activation layers are alternately stacked in the embedding subnet. The embedding subnet extracts feature cubes from the LR HSI X ∈ R^{m×n×L}, where m, n, and L are the number of rows, columns, and spectral bands, respectively. Both spectral and spatial information of HSI can be encoded by 3D convolution during the feature extraction; after several 3D convolutional layers, the LR HSI X can be represented by a series of spectral-spatial feature cubes, expressed as ψ(X) ∈ R^{m×n×L×S}, where S is the number of feature cubes and ψ: R^{m×n×L} -> R^{m×n×L×S} denotes the function of the embedding subnet. It is noted that zero padding is adopted in each convolutional layer to make the feature cubes the same size as the LR HSI. Predicting Subnet The embedding subnet is followed by a predicting subnet, which infers the wavelet package coefficients. There are multiple output branches in the predicting subnet, each of which corresponds to one wavelet package sub-band. The predicting subnet takes the feature cubes extracted by the embedding subnet as input, and each branch of the predicting subnet is trained to infer the wavelet coefficients at its sub-band. Similar to the embedding subnet, each branch in the predicting subnet is also stacked with 3D convolutional layers and non-linear activation layers with the zero padding strategy adopted, and the predicted wavelet coefficients have the same spatial size as the LR HSI. The desired HR HSI is obtained by applying inverse WPT to the predicted wavelet coefficients, so the upscaling factor of SR depends on the number of WPT levels.
Specifically, suppose the number of WPT levels is l; then there are N_w = 4^l wavelet package sub-bands, and the number of output branches in the predicting subnet is also 4^l. Taking the shared feature cubes ψ(X) as input, the i-th branch ϕ_i predicts the i-th wavelet package sub-band as ϕ_i(ψ(X)) ∈ R^{m×n×L}, where ϕ_i: R^{m×n×L×S} -> R^{m×n×L}, i = 1, 2, ..., N_w, denotes the function of the i-th branch. The output of MW-3D-CNN can be denoted as the set of wavelet package coefficients {ϕ_1(ψ(X)), ϕ_2(ψ(X)), ..., ϕ_i(ψ(X)), ..., ϕ_{N_w}(ψ(X))}. In the training stage, the MW-3D-CNN learns the mapping between the LR HSI and the wavelet package coefficients of the latent HR HSI. In the testing stage, given the LR HSI, the MW-3D-CNN infers the wavelet package coefficients at each sub-band. Applying inverse WPT to the predicted wavelet package coefficients, the HR HSI can be obtained as Ŷ = φ({ϕ_1(ψ(X)), ..., ϕ_{N_w}(ψ(X))}), where φ denotes the inverse WPT, Ŷ ∈ R^{(r×m)×(r×n)×L} is the estimated HR HSI, and r = 2^l is the upscaling factor of SR. Different wavelet sub-bands share the common deep layers in the embedding subnet due to the inter wavelet sub-band correlation. The embedding subnet learns the shared feature cubes, and the predicting subnet optimizes with respect to each wavelet package sub-band. The embedding subnet connects the different branches into a unified predicting subnet and allows them to be jointly optimized.
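A compact PyTorch sketch of this shared-embedding, multi-branch layout is given below. It follows the layer counts indicated for Figure 3 (three embedding layers, four layers per predicting branch), while the kernel sizes and the number of feature cubes are assumptions.

```python
import torch
import torch.nn as nn

class MW3DCNN(nn.Module):
    """Sketch of the MW-3D-CNN layout; widths and kernel sizes are assumptions."""
    def __init__(self, n_subbands=4, feat=32):
        super().__init__()
        # embedding subnet: shared 3D convolutional layers
        self.embed = nn.Sequential(
            nn.Conv3d(1, feat, 3, padding=1), nn.ReLU(),
            nn.Conv3d(feat, feat, 3, padding=1), nn.ReLU(),
            nn.Conv3d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        # predicting subnet: one branch per wavelet packet sub-band (4^l branches)
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv3d(feat, feat, 3, padding=1), nn.ReLU(),
                nn.Conv3d(feat, feat, 3, padding=1), nn.ReLU(),
                nn.Conv3d(feat, feat, 3, padding=1), nn.ReLU(),
                nn.Conv3d(feat, 1, 3, padding=1),
            )
            for _ in range(n_subbands)
        ])

    def forward(self, x):                       # x: (N, 1, L, h, w) LR HSI
        shared = self.embed(x)                  # shared spectral-spatial feature cubes
        return [branch(shared) for branch in self.branches]   # one coefficient cube per sub-band

coeffs = MW3DCNN()(torch.randn(2, 1, 16, 16, 16))
print(len(coeffs), coeffs[0].shape)             # 4 sub-bands, each (2, 1, 16, 16, 16)
```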
Figure 3. The architecture of the proposed multi-scale wavelet (MW)-3D-CNN; the number and the size of the convolutional kernels are denoted at each layer, and the embedding subnet and predicting subnet have three and four layers, respectively.

Our MW-3D-CNN focuses on predicting the wavelet package coefficients of the HR HSI; compared with predicting the HR HSI directly, we consider three advantages. Firstly, wavelet coefficients describe the detailed textural information in HSI. Training the MW-3D-CNN to predict the wavelet coefficients is beneficial for recovering the detailed structures in HSI [33,36]. Secondly, a network with sparse activations is easier to train [34,35]. Wavelet coefficients have sparsity characteristics in the high-frequency sub-bands, and predicting wavelet coefficients promotes the sparsity of the MW-3D-CNN, which makes the training easier and the trained network more robust. Finally, the MW-3D-CNN extracts features from the LR HSI directly. Compared with extracting features from the interpolated LR HSI, such as in [40,41], information in a larger receptive field can be exploited.

Training of MW-3D-CNN
All the convolutional kernels and biases in the embedding and predicting subnets are trained in an end-to-end manner. The L2 norm, which measures the mean square error, is often used as the loss function in conventional CNN-based image SR methods. However, the output of our network is the wavelet coefficients, which have larger values in the low-frequency sub-band and smaller values in the high-frequency sub-bands, as shown in the histograms in Figure 4. The L2 norm loss penalizes larger errors heavily and is less sensitive to smaller errors [37]. On the contrary, the L1 norm loss penalizes larger and smaller errors equally, and it is more suitable than the L2 norm loss for wavelet coefficient prediction. In addition, compared with the L2 norm loss, the L1 norm loss is helpful for recovering sharper image structures with faster convergence [38]. Therefore, we propose to train the MW-3D-CNN with the L1 norm loss; the loss function is written as

L = Σ_{j=1}^{N} Σ_{i=1}^{N_w} λ_i ||C_i^j − Ĉ_i^j||_1,

where C_i^j and Ĉ_i^j = ϕ_i(ψ(X_j)) are the ground truth and the predicted wavelet package coefficients of the i-th sub-band, respectively; j = 1, 2, . . . , N, with N the number of training samples; i = 1, 2, . . . , N_w, with N_w = 4^l the number of sub-bands; X_j is the LR HSI of the j-th training sample; and λ_i is the weight balancing the trade-off between the different wavelet sub-bands, which is set to 1 for simplicity in the experiment. The loss function is optimized using the adaptive moment estimation (ADAM) method with standard back propagation.
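As a concrete illustration, a minimal sketch of this weighted L1 loss in TensorFlow is given below; the per-sub-band weights default to 1, as in the experiment.

```python
# Sketch of the weighted L1 loss described above, assuming the network outputs a
# list of predicted sub-band cubes and the ground-truth wavelet coefficients are
# supplied in the same order.
import tensorflow as tf

def weighted_l1_loss(true_coeffs, pred_coeffs, weights=None):
    """true_coeffs, pred_coeffs: lists of tensors, one per wavelet package sub-band."""
    if weights is None:
        weights = [1.0] * len(true_coeffs)
    loss = 0.0
    for w, c_true, c_pred in zip(weights, true_coeffs, pred_coeffs):
        loss += w * tf.reduce_mean(tf.abs(c_true - c_pred))  # L1 error of one sub-band
    return loss
```

When the network is built as a multi-output Keras model, compiling with loss='mae' for every branch (and unit loss weights) has essentially the same effect.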
The trainable convolutional kernels and biases are updated according to the following rule [54]:

θ^(t+1) = θ^(t) − α · m^(t) / (√v^(t) + ε),

where θ^(t) denotes the trainable parameters (i.e., convolutional kernels and biases) at the t-th iteration, α is the learning rate, and ε is a constant to stabilize the updating, which is set to 10^−6. m^(t) and v^(t) are the bias-corrected first and second moment estimates, respectively, accumulated from the gradient of the loss with respect to the trainable parameters θ, and β_1 and β_2 are the two exponential decay rates used for the moment estimation. In our implementation, the learning rate α is initially set to 0.001 and halved every 50 training epochs. The exponential decay rates β_1 and β_2 are set to 0.9 and 0.999, respectively. The batch size is set to 64. The number of training epochs is 200.

Experimental Results
In this section, we compare the MW-3D-CNN with other state-of-the-art HSI SR methods on several simulated HSI datasets. In order to demonstrate the applicability of MW-3D-CNN, we also validate it on real spaceborne Hyperion HSI. Since there is no reference HSI for SR assessment in the real data case, we use the no-reference HSI assessment method in [55] to evaluate the SR performance.

Experiment Setting
Three datasets were used in the experiment. The first one is the Reflective Optics System Imaging Spectrometer (ROSIS) dataset, which contains two images taken over Pavia University and Pavia Center with sizes of 610 × 340 and 1096 × 715, respectively. The spatial resolution is 1.3 m. After discarding the noisy bands, 100 bands remain in the spectral range of 430~860 nm. The second dataset was collected by the Headwall Hyperspec-VNIR-C imaging sensor over Chikusei, Japan, on July 29, 2014 [56]. The size is 2517 × 2335 with a spatial resolution of 2.5 m. There are 128 bands in the spectral range of 363~1018 nm. The third dataset is the 2018 IEEE GRSS Data Fusion Contest data (denoted as "grss_dfc_2018"), which was acquired by the National Center for Airborne Laser Mapping (NCALM) over Houston University, on February 16, 2017 [57]. The size of this data is 1202 × 4172. The spatial resolution is 1 m. It has 48 bands in the spectral range of 380~1050 nm. The above data was treated as the original HR HSI; the LR HSI was simulated via Gaussian down-sampling, which is a process of simulating LR HSI by applying a Gaussian filter to the HR HSI and then down-sampling it in both the vertical and horizontal directions. The Gaussian down-sampling was implemented using the "Hyperspectral and Multispectral Data Fusion Toolbox" [16]. For down-sampling by a factor of two, the Gaussian filter was of size 2 × 2 with zero mean and standard deviation 0.8493; for down-sampling by a factor of four, the Gaussian filter was of size 4 × 4 with zero mean and standard deviation 1.6986. All these down-sampling parameters are suggested in [16,17]. We cropped three sub-images with rich textures from the original HSI as testing data, and the remainder was used as training data. About 100,000 LR-HR pairs were extracted as training samples to train the MW-3D-CNN. Each LR HSI sample was of size 16 × 16 × 16. For training the MW-3D-CNN by an upscaling factor of two, there were four branches in the predicting subnet, the output wavelet coefficients in each branch were of size 16 × 16 × 16, and the corresponding HR HSI sample was of size 32 × 32 × 16.
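A minimal sketch of these optimization settings, assuming the Keras model sketched earlier, might look as follows; the data pipeline that supplies the (LR patch, sub-band targets) pairs is not shown and is therefore left commented out.

```python
# Sketch of the training configuration described above: ADAM with beta_1 = 0.9,
# beta_2 = 0.999, epsilon = 1e-6, an initial learning rate of 0.001 halved every
# 50 epochs, batch size 64 and 200 training epochs.
import tensorflow as tf

def halve_every_50_epochs(epoch, lr):
    return 1e-3 * (0.5 ** (epoch // 50))     # recompute from the initial rate

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3, beta_1=0.9,
                                     beta_2=0.999, epsilon=1e-6)
model.compile(optimizer=optimizer, loss='mae')            # L1 error on every output branch
schedule = tf.keras.callbacks.LearningRateScheduler(halve_every_50_epochs)
# model.fit(lr_patches, subband_targets, batch_size=64, epochs=200, callbacks=[schedule])
```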
For training the MW-3D-CNN by an upscaling factor of four, there were 16 branches in the predicting subnet, the output wavelet coefficients in each branch were also of size 16 × 16 × 16, and the corresponding HR HSI sample was of size 64 × 64 × 16. It is noted that there was no overlapping between the training and testing regions. The network parameters of MW-3D-CNN were set according to the parameters shown in Figure 3. The Haar wavelet function was used in WPT.
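To make the sample preparation concrete, the following sketch shows how an LR patch and its single-level Haar wavelet package targets could be generated from an HR patch. For one WPT level the sub-bands coincide with the standard 2D DWT sub-bands, so PyWavelets' dwt2/idwt2 can be used band by band; the SciPy Gaussian blur is an approximation of the toolbox filter in [16], not the exact 2 × 2 kernel.

```python
# Sketch of training-sample preparation under the settings above (assumptions noted).
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

def simulate_lr(hr_patch, r=2, sigma=0.8493):
    blurred = np.stack([gaussian_filter(hr_patch[:, :, b], sigma)
                        for b in range(hr_patch.shape[2])], axis=2)
    return blurred[::r, ::r, :]                    # decimate rows and columns

def haar_targets(hr_patch):
    """Return the four sub-band cubes (cA, cH, cV, cD), each of half spatial size."""
    subbands = [[], [], [], []]
    for b in range(hr_patch.shape[2]):
        cA, (cH, cV, cD) = pywt.dwt2(hr_patch[:, :, b], 'haar')
        for store, coeff in zip(subbands, (cA, cH, cV, cD)):
            store.append(coeff)
    return [np.stack(s, axis=2) for s in subbands]

def inverse_haar(subbands):
    """Recombine four predicted sub-band cubes into the HR HSI estimate."""
    cA, cH, cV, cD = subbands
    bands = [pywt.idwt2((cA[:, :, b], (cH[:, :, b], cV[:, :, b], cD[:, :, b])), 'haar')
             for b in range(cA.shape[2])]
    return np.stack(bands, axis=2)

hr_patch = np.random.rand(32, 32, 16).astype(np.float32)   # stand-in for a real HR sample
lr_patch = simulate_lr(hr_patch)                            # 16 x 16 x 16 network input
targets = haar_targets(hr_patch)                            # four 16 x 16 x 16 target cubes
```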
Comparison with State-of-the-Art SR Methods
In this sub-section, we compare the proposed method with other state-of-the-art HSI SR methods. The spectral-spatial group sparse representation HSI SR method (denoted as SSG) [27] and two CNN-based SR algorithms, i.e., SRCNN [40] and 3D-CNN [32], were used for comparison. As an often used benchmark, bicubic interpolation was also compared. All the parameters of SSG, SRCNN, and 3D-CNN followed the default settings described in [27,40], and [32]. The training samples and training epochs of SRCNN and 3D-CNN were the same as those of MW-3D-CNN, which guarantees the fairness of the comparison. The SR performance was assessed using the peak signal-to-noise ratio (PSNR, dB), structural similarity index measurement (SSIM) [58], feature similarity index measurement (FSIM) [59], and spectral angle mean (SAM). We computed the PSNR, SSIM, and FSIM indices on each band, and then calculated the mean values over all the spectral bands. The assessment indices of the different SR methods are given in Tables 1 and 2. The scores of our method are better than those of the compared methods in most cases. The 3D-CNN in [32] could extract spectral-spatial features from HSI and jointly reconstruct different spectral bands, so it leads to less spectral distortion than the SRCNN, as shown in Tables 1 and 2. Both 3D-CNN and MW-3D-CNN are in the framework of 3D CNN, and the MW-3D-CNN predicts the wavelet coefficients of the HR HSI rather than directly predicting the HR HSI. Focusing on the wavelet coefficients makes the MW-3D-CNN more effective in preserving structures in the HR HSI, so the results of MW-3D-CNN have higher PSNR values. In order to test the robustness of MW-3D-CNN over a larger upscaling factor, we also implemented the SR by a factor of four and report the indices in Table 2. It can be found that the MW-3D-CNN also achieves competitive results in most cases by an upscaling factor of four. In Figure 5, we plot the PSNR indices of the different SR methods on each band. It is clear that the proposed method outperforms the other methods on most spectral bands.
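For reference, a minimal sketch of the band-wise PSNR (averaged over all bands) and the SAM index used above is given below; the inputs are assumed to be HSIs scaled to [0, 1], and SSIM/FSIM would follow the same band-wise averaging.

```python
# Sketch of the band-wise PSNR and the spectral angle mean (SAM, in degrees).
import numpy as np

def mean_psnr(reference, estimate):
    psnrs = []
    for b in range(reference.shape[2]):
        mse = np.mean((reference[:, :, b] - estimate[:, :, b]) ** 2)
        psnrs.append(10 * np.log10(1.0 / mse))
    return np.mean(psnrs)

def sam(reference, estimate, eps=1e-12):
    ref = reference.reshape(-1, reference.shape[2])
    est = estimate.reshape(-1, estimate.shape[2])
    cos = np.sum(ref * est, axis=1) / (np.linalg.norm(ref, axis=1) *
                                       np.linalg.norm(est, axis=1) + eps)
    return np.degrees(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))
```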
In Figures 9 and 11, we also give the residual maps of the SR results, in which the reconstruction error at each pixel can be observed. In Figure 6, it is clear that the result of MW-3D-CNN is closer to the reference image, while the results of the other compared methods are much brighter than the original HR image, which means that the spectral distortion is heavier. We also display some small areas by enlarging them to highlight the details of the SR results. In Figures 6 and 10, both the SSG and SRCNN results suffer from artifacts with stripe-like patterns. By comparing the details in Figure 10, it can be found that our MW-3D-CNN SR results are sharper than the 3D-CNN results. In the residual maps, it can be observed that all the SR results contain errors at the edges and details. Compared with the other methods, our MW-3D-CNN method generates fewer errors. For example, in Figure 11, the error values in the MW-3D-CNN residual map are much sparser, which also demonstrates that predicting the wavelet coefficients is helpful for recovering the edges and detailed structures in the HR HSI. We also present a running time comparison of the different SR methods in Tables 3 and 4. Most of the SR methods could infer the HR HSI quickly. In the SSG method, dictionary learning and sparse coding are time consuming, so SSG takes the longest time to reconstruct the HR HSI. The running time of MW-3D-CNN is comparable to that of 3D-CNN, as both of them can super-resolve HSI within 2 s. The running time comparison in Tables 3 and 4 indicates that our proposed method achieves competitive performance in both SR accuracy and running time.

Figure captions (fragment): SSG result [27], (c) SRCNN result [40], (d) 3D-CNN result [32], and (e) the proposed MW-3D-CNN result; the residual maps are displayed by scaling to the minimum and maximum errors.

Application on Real Spaceborne HSI
In this sub-section, we also apply the MW-3D-CNN to real spaceborne HSI SR to demonstrate its applicability. Earth Observing-1 (EO-1)/Hyperion HSI was used as testing data. The spatial resolution of Hyperion HSI is 30 m. There are 242 spectral bands in the spectral range of 400~2500 nm. The Hyperion HSI suffers from noise, and after removing the noisy bands and water absorption bands, 83 bands remain. The Hyperion HSI in this experiment was taken over Lafayette, LA, USA in October 2015. We cropped a sub-image with a size of 341 × 365 from it as the study area. As there is no HR HSI in the real application, we used the Wald protocol to train the networks [24]. The original 30 m HSI was regarded as the HR HSI, and an LR HSI with a resolution of 60 m was simulated via down-sampling.
The LR-HR HSI pairs were used to train the MW-3D-CNN that could super-resolve HSI by a factor of two. The trained MW-3D-CNN was then applied to the 30 m Hyperion HSI, and an HR HSI with 15 m resolution could be obtained. The super-resolved Hyperion HSIs are shown in Figure 12. In Figures 13 and 14, we show, in zoom, the results of the compared methods. The resolution of the Hyperion HSI is enhanced significantly through SR. Compared with the other methods, the proposed MW-3D-CNN generates HSI with sharper edges and clearer structures, as indicated by the areas highlighted in the dashed boxes. Since there is no reference image for assessment, traditional evaluation indices such as PSNR cannot be used here. We used the no-reference HSI quality assessment method in [55], which measures the deviation of the reconstructed HSI from pristine HSI, to evaluate the super-resolved Hyperion HSIs. The original Hyperion images were first screened for noisy bands and water absorption bands. The remaining bands were used as training data, quality-sensitive features were extracted from the training data, and a benchmark multivariate Gaussian model was learned for the no-reference HSI assessment. The no-reference HSI quality scores after SR are listed in Table 5. It shows that, for an upscaling factor of two where the SR image is at 15 m resolution, the proposed MW-3D-CNN performs better than the other methods with a lower score, which means that its SR result deviates less from pristine HSI than the other SR results.
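As a rough illustration of the Wald-protocol setup just described, the sketch below degrades the 30 m image to 60 m and cuts aligned LR-HR patch pairs from the two versions; the trained factor-two network would then be applied to the original 30 m image to obtain the 15 m estimate. The loader, patch size and stride are assumptions.

```python
# Sketch of Wald-protocol pair generation for the Hyperion experiment.
import numpy as np
from scipy.ndimage import gaussian_filter

def wald_pairs(hr_30m, patch=32, stride=32, sigma=0.8493):
    """Cut aligned (60 m, 30 m) patch pairs from a single Hyperion sub-image."""
    lr_60m = np.stack([gaussian_filter(hr_30m[:, :, b], sigma)[::2, ::2]
                       for b in range(hr_30m.shape[2])], axis=2)
    pairs = []
    for i in range(0, hr_30m.shape[0] - patch + 1, stride):
        for j in range(0, hr_30m.shape[1] - patch + 1, stride):
            hr_patch = hr_30m[i:i + patch, j:j + patch, :]
            lr_patch = lr_60m[i // 2:(i + patch) // 2, j // 2:(j + patch) // 2, :]
            pairs.append((lr_patch, hr_patch))
    return pairs
```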
Sensitivity Analysis on Network Parameters
It is theoretically hard to estimate the optimal network parameters of a deep learning architecture. We empirically tuned the network parameters and present them in Figure 3. In this sub-section, we give a sensitivity analysis of MW-3D-CNN over the network parameters. We vary one network parameter while fixing the others, and then observe the SR performance. The sensitivity analysis over the size of the 3D convolutional kernel is given in Table 6. A properly large convolutional kernel size is necessary for collecting spatial and spectral information for HSI SR. It is clear that the best performance is achieved with a convolutional kernel size of 3 × 3 × 3. The performance decreases when the convolutional kernel size is set to 5 × 5 × 5. More spatial and spectral information can be exploited by a larger convolutional kernel, but a higher complexity of the network is caused, and more parameters need to be trained. This may explain why the performance drops with the increase of the kernel size. The number of 3D convolutional kernels determines the number of feature cubes extracted by each layer. In our MW-3D-CNN, we set 32 convolutional kernels for each layer of the embedding subnet and 16 convolutional kernels for each layer of the predicting subnet, which leads to the best performance in most cases, as shown in Table 7. With an increase in the number of convolutional kernels, more feature cubes can be extracted, but the complexity of the network increases. Usually, the deeper the network, the better the performance. With a deeper architecture, the network has a larger capacity. In Table 8, it is shown that the best performance is obtained in most cases when the numbers of convolutional layers in the embedding subnet and predicting subnet are set to three and four.

The Rationality Analysis of L1 Norm Loss
In order to verify the rationality of the L1 norm loss, we trained the MW-3D-CNN using the L2 norm loss written as

L2 = Σ_{j=1}^{N} Σ_{i=1}^{N_w} λ_i ||C_i^j − Ĉ_i^j||_2^2,

and then compared it with the one trained using the L1 norm loss in Equation (4). The comparison is presented in Table 9. The L1 norm loss could mitigate the unbalance in penalizing the low- and high-frequency wavelet package sub-bands caused by the L2 norm loss, so the MW-3D-CNN trained with the L1 norm loss performs better than the one trained with the L2 norm loss on the testing data, as shown in Table 9. In the training stage, the errors of the i-th wavelet package sub-band predicted by the MW-3D-CNN can be expressed as (C_i^j − Ĉ_i^j), where j = 1, 2, . . . , N and N is the number of training samples. We present the histograms of the errors after 200 training epochs in Figure 15. It is clear that the errors of the different wavelet package sub-bands have similar statistics, as most of the errors are close to zero and tend to follow Laplacian distributions. Compared with the L2 norm, the L1 norm is more suitable for penalizing the Laplacian-like errors, which demonstrates the rationality of the L1 norm loss as well.
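For completeness, the L2 variant used in this comparison can be sketched in the same form as the weighted L1 loss given earlier, so the two can be swapped when training.

```python
# Sketch of the L2 norm loss used in the comparison above.
import tensorflow as tf

def weighted_l2_loss(true_coeffs, pred_coeffs, weights=None):
    """true_coeffs, pred_coeffs: lists of tensors, one per wavelet package sub-band."""
    if weights is None:
        weights = [1.0] * len(true_coeffs)
    loss = 0.0
    for w, c_true, c_pred in zip(weights, true_coeffs, pred_coeffs):
        loss += w * tf.reduce_mean(tf.square(c_true - c_pred))  # squared error of one sub-band
    return loss
```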
Figure 15. The histograms of errors in different wavelet sub-bands after 200 training epochs. The training data is extracted from Pavia University, the MW-3D-CNN is trained with the L1 norm loss, and the upscaling factor is two.

The Rationality Analysis of 3D Convolution
In this sub-section, in order to analyze the advantage of 3D convolution over 2D convolution for HSI SR, we replaced all the 3D convolutional layers in the MW-3D-CNN with 2D convolutional layers. In this case, it reduces to the same architecture as the wavelet-SRNet method in [36]. We then compared the MW-3D-CNN with the wavelet-SRNet. The loss function of wavelet-SRNet was originally designed with the L2 norm in [36]. Here, we also trained the wavelet-SRNet with the L1 norm as the loss function, and the corresponding results are denoted as wavelet-SRNet-L2 and wavelet-SRNet-L1. The comparison between the MW-3D-CNN and the wavelet-SRNet is presented in Table 10. In Table 10, it can be found that the MW-3D-CNN performs better than the wavelet-SRNet on the three datasets. The MW-3D-CNN is based on 3D convolutional layers, which can naturally exploit the spectral correlation and reduce the spectral distortion in HSI SR. We can also find that when the L1 norm is used as the loss function for the wavelet-SRNet, the SR performance is slightly better than with the L2 norm, which also demonstrates the effectiveness of the L1 norm.
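As an illustration of this ablation, the sketch below swaps the 3D convolutions for 2D convolutions that treat the L spectral bands as input channels; how the bands are handled in the 2D case is an assumption, since the text only states that the 3D layers were replaced.

```python
# Minimal sketch of the 2D counterpart used in the comparison above.
from tensorflow.keras import layers, Model

def build_2d_variant(m=16, n=16, L=16, n_subbands=4):
    x_in = layers.Input(shape=(m, n, L))                 # bands treated as channels (assumption)
    x = x_in
    for _ in range(3):                                   # embedding subnet
        x = layers.Conv2D(32, (3, 3), padding='same', activation='relu')(x)
    outputs = []
    for _ in range(n_subbands):                          # one branch per sub-band
        b = x
        for _ in range(3):
            b = layers.Conv2D(16, (3, 3), padding='same', activation='relu')(b)
        outputs.append(layers.Conv2D(L, (3, 3), padding='same')(b))
    return Model(x_in, outputs)
```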
Robustness over Wavelet Functions
In the experiment, we used the Haar wavelet function in WPT. In this sub-section, we also run the MW-3D-CNN with two other wavelet functions, the Daubechies-2 and biorthogonal wavelet functions, to evaluate the robustness of MW-3D-CNN over the wavelet function. In Table 11, it can be found that the SR performance with the different wavelet functions is close. The SR performance changes only slightly with different wavelet functions, which demonstrates the robustness of MW-3D-CNN over the wavelet functions. The MW-3D-CNN is implemented in TensorFlow [60], with an NVIDIA GTX 1080Ti graphics card. It takes about 7 h and 20 h to train the MW-3D-CNN with upscaling factors of two and four, respectively. In the testing stage, inferring an HR HSI takes less than two seconds; it is fast because only a feed-forward operation is involved.

Conclusions
In this study, a MW-3D-CNN for HSI SR was proposed. Instead of predicting the HR HSI directly, we predicted the wavelet package coefficients of the latent HR HSI, and then reconstructed the HR HSI via inverse WPT. The MW-3D-CNN is constituted by an embedding subnet and a predicting subnet, both of which are built on 3D convolutional layers. The embedding subnet projects the input LR HSI into a feature space and represents it with a set of feature cubes. These feature cubes are then fed to the predicting subnet, which consists of several output branches. Each branch corresponds to a wavelet package sub-band and predicts the wavelet package coefficients of that sub-band. The HR HSI can then be reconstructed via inverse WPT. The experimental results on both simulated and real spaceborne HSI demonstrate that the proposed MW-3D-CNN achieves competitive performance. The MW-3D-CNN learns knowledge from the external training data for HSI SR. HSI has prior information in both the spectral and spatial domains, such as structural self-similarity [26] and the low rank prior [61][62][63]. Exploiting this prior information helps regularize the ill-posed HSI SR problem. How to combine such internal priors with the externally learned knowledge in deep learning will need to be examined in future work. Furthermore, integrating an adversarial loss [64] in training the network is another direction to boost the SR performance.
Using Intercritical CCT Diagrams and Multiple Linear Regression for the Development of Low-Alloyed Advanced High-Strength Steels

The present work presents a theoretical and experimental study regarding the microstructure, phase transformations and mechanical properties of advanced high-strength steels (AHSS) of the third generation produced by thermal cycles similar to those used in a continuous annealing and galvanizing (CAG) process. The evolution of microstructure and phase transformations was discussed from the behavior of intercritical continuous cooling transformation diagrams calculated with the software JMatPro, and further characterization of the steel by scanning electron microscopy, optical microscopy and dilatometry. Mechanical properties were estimated with a mathematical model obtained as a function of the alloying element concentrations by multiple linear regression, and then compared to the experimental mechanical properties determined by uniaxial tensile tests. It was found that AHSS of the third generation can be obtained by thermal cycles simulating CAG lines through modifications in the chemistry of a commercial AISI-1015 steel, having an ultimate tensile strength of UTS = 1020–1080 MPa and an elongation to fracture of Ef = 21.5–25.3%, and microstructures consisting of a mixture of ferrite phase, bainite microconstituent and retained austenite/martensite islands. The determination coefficient obtained by multiple linear regression for UTS and Ef was R 2 = 0.94 and R 2 = 0.84, respectively. In addition, the percentage error for UTS and Ef was 2.45–7.87% and 1.18–16.27%, respectively. Therefore, the proposed model can be used with a good approximation for the prediction of mechanical properties of low-alloyed AHSS.

Introduction
Recent trends in vehicle production are characterized by the application of lightweight principles to fulfill both the customer demands and increased legal requirements [1][2][3][4][5]. AHSS are classified in three generations [8]. The first generation was developed out of mild steel by adding certain alloying elements. High-strength low-alloy steels (HSLA) were developed by changes in chemical composition and combining different strengthening mechanisms, causing an increase in strength, but with lower elongation [9]. HSLA steels led to DP, TRIP, and martensitic steels, all with increased strength at the expense of lower elongation to fracture [10,11]. The second generation of AHSS (TWIP) is characterized by fully austenitic microstructures obtained by adding significant amounts of alloying elements such as Mn, Si or Al [12][13][14][15].
Even though the goal of significantly increasing both the strength and elongation characteristics can be met in these materials, they are hardly used in the automobile industry due to their high costs and challenges related to weldability, galvanizing, elevated wear on forming dies, increased springback, flange stretching, edge cracking and fatigue compared to other steels [16,17]. Recently, there has been increased funding and research for the development of the "3rd Generation" of advanced high-strength steels (AHSS) [5,8,[18][19][20][21][22][23]. The third generation of AHSS seeks to provide ductility and high strength without the joining problems and high costs associated with the previous generations [5,8]. The recent concerns of environmental protection and current policies to reduce greenhouse gas emissions have driven steel manufacturers to reduce the weights of their components. In this context, using thinner steel sheets of third generation AHSS may result in a mass reduction, which in turn can lead to lower consumption with increased environmental protection [1,5]. However, to obtain thin sheets of AHSS for automobile applications, it is necessary to overcome several and often contradictory constraints including a good combination of high formability, lightness, high mechanical strength, production possibilities and durability, all under strong economic constraints. Overcoming these contradictory objectives is often the task of metallurgists, such combination in steel, a material considered so well known, remains a real challenge [23]. Amongst the different types of AHSS, DP, TRIP and CP are considered as attractive steels to be extended into 3rd generation advanced high-strength steels [8,23]. TRIP-aided multiphase steels may exhibit an improved combination between strength and ductility, thus satisfying the demands of the automotive industry for high-strength steels with good formability [24]. TRIP steels are characterized for having a triple-phase microstructure consisting of ferrite, bainite and retained austenite, which needs to be obtained during thermal treatments. Chemistry and heat treatment parameters play an important role in the kinetics of phase transformations that may occur during the heat treatment of steel, and thus on the resulting microstructure and mechanical properties. The designers, researchers of material sciences and manufacturers are usually contingent on results of experiments conducted in a testing laboratory to identify mechanical properties [25]. Therefore, to obtain the desired properties of a specific material, the composition and processing parameters need to be customized prior to conducting the experiment, which demands massive expenditure and time to figure out the properties of materials [25]. Commercial softwares (e.g., JMatPro, MatCalc) represent a potential tool to predict the kinetics of phase transformations in multicomponent alloys based on sound physical principles rather than purely statistical methods. 
They have been employed in the modelling of creep and precipitation hardening [26]; for the prediction of phase transformation temperatures on heating and cooling to design quenching and partitioning (Q&P) processing routes [27]; to predict the evolution of phases during the double-step heat treatment of medium-Mn AHSS [28]; to explain the evolution of the phases as a function of both time and temperature parameters during solidification and homogenization [29]; for the calculation of the temperature-dependent thermal properties, i.e., density, conductivity and specific heat capacity, in DP and TRIP steels [30]; and for the analysis and the prediction of the kinetics of precipitation in microalloyed steel grades subjected to different processing steps [31]. Recently, some authors evaluated the feasibility of obtaining dual-phase (DP) steels from the intercritical temperature range [32], by thermal cycles that simulate continuous annealing lines. Although a microstructure of ferrite + martensite was expected by conducting different thermal treatments, a large amount of bainite was obtained with the proposed heat treatments. It was reported that for the austenitization conditions and the particular case of the steel investigated, the viability of producing DP steels under the conditions mentioned above was limited. Cooling rates greater than 100 °C/s were required to obtain the specific ferrite-martensite microstructures, which cannot be reproduced at an industrial level [32]. These results indicate that apart from thermal treatments, chemical composition also plays an important role in the development of particular microstructures and properties. All the above-mentioned works have reported the use of software for a better understanding of microstructural changes and properties of a specific steel grade. However, since small changes in the concentration of the alloying elements can cause significant changes in the phase transformation kinetics and in the resulting properties, investigating the effects of chemical composition becomes of great importance. Multiple regression has been widely employed in the steel and cast-iron industry, for instance, to predict the mechanical behavior of hot-rolled, low carbon steels as a function of the concentration of alloying elements and rolling conditions [33]; to predict fatigue strength in structural steels based on variations in the chemical composition and processing parameters [34]; to predict the mechanical behavior of cast-iron rolls by variations in the chemical composition [35]; to predict the yield ratio and uniform elongation in high-strength bainitic steels as a function of microstructural characteristics [36]; and to predict the yield strength of different steel rebars with different chemical compositions and thermomechanical variables [37]. The chemical compositions of the steel rebars were characterized as having low contents of C, Mn, Si and Al. These works show that statistical methods can be used as potential tools for the prediction of mechanical properties when variations in chemical composition are involved. It has been recently reported that CCT diagrams constructed from intercritical temperatures are practically unavailable in the open literature [32]. Most CCT diagrams in steels have been constructed from temperatures where austenite is the stable phase (full austenitization) [32], which does not allow a precise estimation of the microstructures resulting from processing routes like the ones used to fabricate multiphase high-strength TRIP steels.
In addition, as far as the authors' knowledge, the use of multiple linear regression combined with computer simulations of the behavior of intercritical CCT diagrams calculated as a function of the concentration of the alloying elements has not been reported, which could represent a potential tool to propose new chemistries that allow the development of third generation TRIP steels. This work presents a novel methodology to obtain third generation low-alloyed TRIP-AHSS under thermal cycles similar than those used in a CAG process. Mechanical properties were predicted by a multiple linear regression (MLR) model obtained from data reported in the literature for AHSS-TRIP steels (with a wide range in the concentration of alloying elements). The mechanical properties reported for a specific steel grade, and the mechanical properties reported for different processing conditions, but for the same chemical composition, were considered to obtain the mathematical model. The behavior of intercritical CCT diagrams and mechanical properties were monitored for each modification. This methodology allowed us to propose a chemical composition to obtain advanced high-strength TRIP steels under conditions similar than those used in a CAG process. Theoretical and experimental results were compared to evaluate the capability of the proposed methodology to obtain reproducible results. Computational Study to Evaluate the Behavior of Intercritical CCT Diagrams as a Function of the Concentration of the Alloying Elements The combined effects of alloying elements (C, Ni, Mn, Mo, Al, Cr, Si, P, Nb, Cu, Ti and S) on the behavior of the pearlite, ferrite, martensite and bainite were followed through the variations of the intercritical CCT diagrams calculated with the software JMatPro. Diagrams were obtained at a temperature required to produce 50% ferrite (α) + 50% austenite (γ), the proportion of phases expected in the annealing process to optimize the strength to ductility ratio in TRIP steels [52,53]. A commercial steel with 0.12 wt.% C, 0.75 wt.% Mn, 0.26 wt.% Si, 0.23 wt.% Cu, 0.090 wt.% Cr, 0.080 wt.% Ni, 0.013 wt.% Mo, 0.003 wt.% Al, 0.011 wt.% P and 0.005 wt.%S (determined by optical emission spectroscopy), was used as the raw material since in this steel, the amount of the alloying elements to be investigated was low. This allowed for the adjustment of the chemical composition to modify both the kinetics of phase transformations and the mechanical properties. The carbon content was adjusted to 0.16 wt.% to reduce the negative effect of this element on weldability; manganese and nickel were employed to promote austenite stabilization at room temperature; silicon and aluminum were used to suppress the precipitation of cementite during the isothermal bainitic treatment (IBT); niobium and copper were used to promote the precipitation strengthening effect and molybdenum was added to favor both precipitation and solution hardening. Changes in the steel chemistry were made to adjust the critical transformation temperatures Ac 1 (temperature from which austenite is formed) and Ac 3 (temperature from which the stable phase is austenite) as well as the M s (temperature from which martensite is formed) and M f (ending temperature for the austenite to martensite transformation), to produce low-alloyed advanced high-strength TRIP steels by thermal cycles simulating CAG processes. Changes in chemistry were also conducted to modify the mechanical properties sought out to obtain third generation steels. 
Prediction of Mechanical Properties as a Function of the Concentration of the Alloying Elements Using Multiple Linear Regression
A multiple linear regression model was obtained from data of alloying elements and mechanical properties reported in the literature for first, second and third generation AHSS (Table 1). The indirect effects of processing parameters, the mechanical properties reported for a specific steel grade and the mechanical properties reported for different processing conditions, but the same chemical composition, were considered to generate the mathematical model. The model was obtained with the program Minitab 18, using the independent variables (concentrations of alloying elements) to explain the behavior of the dependent variables (mechanical properties). Equation (1) shows a multiple linear regression model [54]:

y_i = b_0 + b_1 x_1i + b_2 x_2i + . . . + b_k x_ki, (1)

where (b_0, b_1, . . . , b_k) are the estimations of the coefficients of the multiple linear regression model, (x_1i, x_2i, . . . , x_ki) are the independent variables and y_i is the dependent variable. The independent variables were the concentrations of C, Mn, Si, Al, P, Cr, Nb, Ni, Cu, Mo, Ti and S, and the dependent variables were the ultimate tensile strength and the elongation to fracture. To evaluate the reliability of the model, the behavior of the residuals or errors obtained from the multiple linear regression is analyzed, in addition to the coefficient of determination, R 2 , which determines the quality of the model in replicating results and the variation in the response that can be explained by the model.

Table 1. Chemical composition and mechanical properties of AHSS used to obtain the multiple linear regression model.

Experimental Work Conducted for Validation: Fabrication, Processing and Characterization
Fabrication of the steel consisted of vacuum fusion and casting in metal ingot molds. The composition considered to fabricate the steel was selected considering both the results of the computational study and the results of the linear regression model, aiming to obtain third generation multiphase TRIP steels by thermal cycles similar to those used in a CAG process. Ingots with dimensions of 9.3 cm width × 6.8 cm length × 2.54 cm height were subjected to homogenization at 960 °C for 1 h. Homogenized ingots were processed by hot rolling at 1100 °C to obtain hot-rolled steel strips of 2.3 mm thickness. Hot-rolled samples were pickled and cold-rolled to obtain thin steel sheets of 1.2 mm thickness. Cold-rolled samples were subjected to a final heat treatment simulating the conditions of a CAG process to obtain the desirable phases expected in a TRIP steel: ferrite, bainite and retained austenite. Thermal cycles were conducted in a LINSEIS L78 quenching dilatometer, which allowed the determination of the critical temperatures Ac 1 , Ac 3 and M s , M f , on heating and cooling, respectively. Figure 1 shows a schematic diagram of a CAG line indicating the parameters used in the present work to obtain TRIP steels, which were set considering the analysis of phase transformations and the behavior of the CCT diagrams. The thermal treatment was proposed aiming to conduct (i) the annealing stage at temperatures in the two-phase field region (Ac 1 < T < Ac 3 ) and (ii) the isothermal bainitic treatment (IBT) at temperatures above the martensite start temperature (T > Ms). According to Figure 1, TRIP steels constituted of bainite, ferrite and austenite were expected.
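As a hedged illustration of the regression step of Equation (1), the following sketch fits the same kind of model with statsmodels rather than Minitab; the column names and the CSV file holding the Table 1 data are assumptions, and the candidate composition values are taken from the experimental chemistry discussed later purely for illustration.

```python
# Sketch of the multiple linear regression of Equation (1) (assumed data layout).
import pandas as pd
import statsmodels.api as sm

elements = ['C', 'Mn', 'Si', 'Al', 'P', 'Cr', 'Nb', 'Ni', 'Cu', 'Mo', 'Ti', 'S']
data = pd.read_csv('ahss_literature.csv')          # hypothetical file with the Table 1 data

X = sm.add_constant(data[elements])                # adds the intercept b0
fit_uts = sm.OLS(data['UTS'], X).fit()             # ultimate tensile strength model
fit_ef = sm.OLS(data['Ef'], X).fit()               # elongation to fracture model
print(fit_uts.rsquared, fit_ef.rsquared)           # coefficients of determination R^2

# Predicting the properties of a candidate chemistry (wt.%, illustrative values):
candidate = pd.DataFrame([{'C': 0.14, 'Mn': 1.9, 'Si': 1.1, 'Al': 0.31, 'P': 0.011,
                           'Cr': 0.06, 'Nb': 0.12, 'Ni': 0.06, 'Cu': 0.21, 'Mo': 0.20,
                           'Ti': 0.0, 'S': 0.004}])
X_new = sm.add_constant(candidate[elements], has_constant='add')
print(fit_uts.predict(X_new), fit_ef.predict(X_new))
```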
The melting point of zinc is about 420 °C; therefore, the industrial galvanizing process is usually conducted above this temperature. Cold-rolled and heat-treated samples were prepared by conventional metallographic techniques including grinding and polishing. The microstructure was revealed by chemical etching using 2% Nital and the LePera reactant. Microscopical observation by optical microscopy was made in an Olympus GX51 inverted metallurgical microscope, while observations by scanning electron microscopy were made in a PHILIPS XL30 microscope. Mechanical properties were determined from engineering stress vs deformation curves obtained by uniaxial tensile tests conducted according to ASTM E-8. The concentration of carbon and sulfur in the experimental steel was determined in a simultaneous carbon/sulfur LECO CS230 analyzer by infrared absorption spectroscopy based on ASTM E-1019. The concentration of the other elements was determined by optical emission spectrometry using a SPECTROLAB M11 spectrometer based on the procedures of standard ASTM E-415.

CCT Diagrams Behavior as a Function of Chemistry
The behavior of continuous cooling transformation (CCT) diagrams calculated from intercritical temperatures as a function of the concentration of alloying elements is shown in Figure 2. The diagram of Figure 2a was calculated using the chemical composition of the raw material. As can be seen, the diagram shows the pearlite and ferrite curves at the left part. As mentioned above, these diagrams were calculated at a temperature required to form 50% ferrite + 50% austenite, which means at a temperature within the intercritical region. Therefore, the transformations observed during cooling correspond to the transformation of intercritical austenite. In this way, microstructures formed after the thermal cycles can have intercritical ferrite (~50%), proeutectoid ferrite, bainite and/or martensite depending on the cooling rate [32]. To calculate the diagram of Figure 2b, the content of Cu, Al, Si and Mo in the starting material (0.23 wt.%, 0.003 wt.%, 0.26 wt.% and 0.013 wt.%) was increased to 0.50 wt.%, 0.03 wt.%, 0.50 wt.% and 0.10 wt.%, respectively. As can be seen, temperatures A 1 and A 3 slightly increased from 712 °C and 835 °C to 714 °C and 846 °C, respectively. The transformation curves of ferrite, pearlite and bainite as well as the martensitic transformation temperatures do not show a significant change compared to the diagram of the raw material (compare Figure 2a with Figure 2b). Subsequent changes include the adjustment of Ni and Cu to 1.5 wt.%, and an increase in the Mo and Si contents from 0.1 wt.% to 0.3 wt.%, and 0.5 wt.% to 1.5 wt.%, respectively. As shown in Figure 2d, this modification causes a contraction of the intercritical range and a displacement of the pearlite and bainite transformations to the right of the diagram and to lower temperatures. The transformation from austenite to proeutectoid ferrite is practically avoided for the cooling rates shown in the diagrams, and the martensitic temperatures decrease. The use of Ni and Cu is not particularly crucial for third generation AHSS, and raising their content has a direct impact on the cost of the steel. Typical temperatures for annealing during a CAG process range from 800 to 875 °C depending on the steel thickness, and galvanizing is usually conducted between 450 and 475 °C.
Therefore, to obtain TRIP steels in a CAG process it is necessary to have the presence of ferrite and austenite during annealing, and to promote the austenite to bainite transformation at temperatures similar to those used in the galvanizing stage. Additions of Cu (austenite former), Al, Mo and Si (ferrite formers, ferrite stabilizers) to the raw material cause a slight increase in the A 1 and A 3 temperatures. This effect is attributed to Al, Mo and Si, which stabilize the ferrite phase and retard the ferrite to austenite phase transformation upon heating, shifting the transformation temperatures to higher values [70]. The region between the ferrite and bainite transformation curves upon cooling also increases slightly due to the addition of the alphagene elements (compare Figure 2a with Figure 2b). With increments in the C (austenite former) and Mn concentrations (austenite former, austenite stabilizer), the transformation temperatures A 1 and A 3 decrease, and the intercritical range is cut short to ∆T = 66 °C (Figure 2c). This behavior is associated with the stabilization of austenite, which in turn shortens the intercritical range [71]. Similar to the behavior observed in the Fe-C metastable equilibrium diagram, the length of the intercritical region is reduced for carbon contents above 0.02 wt.%, until it practically disappears for carbon contents of about 0.8 wt.%. Increasing the amount of Cu (austenite former), Mo and Si (ferrite formers and ferrite stabilizers) shortens the intercritical range even more, to ∆T = 49.2 °C (Figure 2d), which suggests that the combined effects of C, Mn and Cu predominate over the effects of Mo and Si. The contents of Al, Cu, Mn, Mo, Ni and C were reduced to open the intercritical range, reducing the extension of the martensite transformation temperature and shifting the bainite transformation curve to the left of the diagram (Figure 2e). Finally, to adapt the transformation temperatures to conditions that could be reproducible at the industrial level, the contents of Ni and C were reduced, while that of Al was increased, to produce the CCT diagram shown in Figure 2f. The main conclusion that can be drawn from the computer simulation is that the behavior of CCT diagrams is influenced not only by the presence of alloying elements and their relative amounts, but also by how they interact with one another. Variations in the Vickers hardness (HV) are also related to the phases resulting after cooling and to the solution hardening caused by the alloying elements. Furthermore, it can be observed from this study that the cooling rate and the chemical composition have an influence on the resulting microstructures and hardness. The effect of chemical composition for constant cooling conditions can be analyzed from the CCT diagrams of Figure 2e,f. Keeping in mind that the calculated CCT diagrams show the decomposition of intercritical austenite, it is observed that for a cooling rate of 100 °C/s (first main cooling curve from left to right in the diagram), austenite decomposes to martensite in both steels; however, the hardness produced from the austenite to martensite phase transformation varies from 374 HV (Figure 2e) to 534 HV (Figure 2f).
When decomposition of austenite in both steels promotes the formation of bainite and martensite on continuous cooling, i.e., when the cooling is conducted at 10 • C/s (second main cooling curve from left to right in the diagram), the hardness calculated by the transformation of austenite is about 373 HV and 526 HV (Figure 2e,f, respectively). These results reflect the importance of alloying elements on hardness. The cooling conditions have also an influence in the resulting microstructure and hardness for a given composition. For instance, for the chemical composition employed to calculate the diagram of Figure 2f, the hardness calculated for cooling rates of 100 • C/s, 10 • C/s, 1 • C/s, 0.1 • C/s and 0.01 • C/s (main cooling curves from left to right in the diagram) was 534 HV, 526 HV, 354 HV, 247 HV and 220 HV, respectively. The variation of this property is related to the resulting microstructures: martensite, bainite + martensite, pearlite + bainite, pearlite and ferrite + pearlite, respectively. For this reason, for the development of new AHSS grades, it is necessary to consider both thermal treatment conditions and chemical composition. Due to the importance of chemical composition on the kinetics of phase transformations and properties, some authors determined the continuous cooling transformation (CCT) diagrams for eight Cr-Mo steels [72]. These steels are used for the manufacture of automotive parts such as differential and transmission gears, transmission shafts, steeringknuckle pins, rear-axle shafts, steering-knuckles, and the like. [72]. The higher contents of carbon, manganese, chromium and molybdenum contribute to the increase in hardness [72]. This behavior is similar to the one observed in the present investigation where the highest hardness was observed in simulations with higher contents of carbon and manganese (Figure 2c). The decrease in C and Mn (gamma gene elements) move the ferrite transformation curve to the left of the diagram leading to lower hardness values [72]. Similar behavior is observed in the present work for lower concentrations of C and Mn (compare Figure 2c with Figure 2b). Decreasing the amount of Mo also contributes to the reduction in hardness [72], as can be also seen by comparing Figure 2c with Figure 2b. The variation in the Ac 1 and Ac 3 transformation temperatures indicates that the intercritical range opens when the contents of Mn and C are lower [72], this behavior is similar to the one observed in the present research (compare Figure 2a,b with Figure 2c,d). Although the determination of CCT diagrams by dilatometry is more precise than computer simulations, the time and cost associated with the application of thermal treatments, preparation and characterization of samples and evaluation of mechanical properties are significantly higher. The results of the present work show that computer simulations can be used as a potential tool to investigate, at first instance, the influence of the concentration of the alloying elements and cooling conditions on the resulting microstructure and hardness. Hardness can be related to mechanical strength, but not with elongation to fracture; this represents the main disadvantage of using just CCT diagrams for the development of AHSS. 
In this context, multiple linear regression analysis can be complementary to predict the ultimate tensile strength-to-elongation to fracture ratio as a function of the concentration of the alloying elements, which in addition to the behavior of CCT diagrams can help to develop advanced high-strength TRIP steels by using thermal cycles similar than those of a CAG process. Prediction of Mechanical Properties As mentioned above, to obtain the mathematical model by multiple linear regression, the indirect effects of processing parameters were considered including both the mechanical properties reported for a specific steel grade, and the mechanical properties reported for different processing conditions, but for the same chemical composition. Figure 3 shows residual plots for elongation to fracture (Ef), four in one, obtained from the multiple linear regression. They are presented for detecting non-random variation, non-normality, non-constant variance and outliers of the data. As can be seen, the residuals exhibit an approximately straight line in the plot of normal probability (Figure 3a), and the histogram shows an approximate symmetric nature (Figure 3c) indicating a normal distribution of residuals. Residuals are scattered randomly in the plot of residuals versus the fitted values and the vertical width of the scatter does not appear to increase or decrease across the fitted values, so we can assume a constant variance in the error terms (Figure 3b). Residuals do not show a clear pattern in the residual versus order plot, which indicates that there is no undesirable effect (Figure 3d). The normal probability plot, histogram plot, residuals versus the fitted values and residual versus observation order plot do not exhibit any abnormal behavior of the residuals. Figure 4 shows residual plots for ultimate tensile strength (UTS), four in one, obtained from the MLR. The behavior of the residuals is similar to the one observed for elongation to fracture. The normal probability plot shows an approximately straight line (Figure 4a), with the approximate symmetric nature of the histogram (Figure 4c) indicating a normal distribution of the residuals. The residuals are scattered randomly around zero, which allow us to assume a constant variance (Figure 4b). A clear pattern is not observed for residuals in the residual versus order plot, suggesting that there is no undesirable effect (Figure 4d). The normal distribution of residuals in the normal probability plot, histogram plot, residuals versus the fitted values and residual versus observation order plot is one condition that must be met in the multiple linear regression model. The general MLR models proposed to predict the elongation to fracture (Ef) and ultimate tensile strength (UTS) can be written as Equations (2) and (3), respectively: The coefficient of determination obtained from the multiple regression was R 2 = 0.84 and R 2 = 0.94, for Ef and UTS, respectively. Furthermore, the mechanical properties calculated from these equations with the chemical composition employed to obtain the CCT diagram of Figure 2f were Ef = 25% and UTS = 995 MPa, properties that can classify the steel within the third generation AHSS. MLR analysis has been employed widely in the cast-iron and steel industry to predict certain physical and mechanical properties especially when several processing variables are involved. 
This method has been used successfully to predict the properties of hot-rolled low-carbon steel strips [33], components for structural applications [34], cast-iron rolls [35], high-strength bainitic steels for pipeline applications [36] and steel rebars [37]. The most common variables used in those works were the chemical composition and processing variables [33-35,37], and microstructural characteristics [36]. The use of MLR to predict the mechanical properties of thin steel sheets of advanced high-strength TRIP steels has scarcely been investigated. The work reported in Ref. [37] is of special interest since statistical methods were employed for the prediction of the mechanical properties of several low-carbon steels. However, the compositions selected in that work were intended for steel rebars rather than for steel sheet products. The work does not consider the use of CCT diagrams, which makes it difficult to propose processing routes to produce the required microstructures and mechanical properties for a desired chemical composition. In addition, yield strength was the only property predicted; equations for UTS or Ef were not reported [37]. The coefficient of determination obtained in the present work from the multiple regression was R² = 0.84 and R² = 0.94 for Ef and UTS, respectively. The lower value of the coefficient of determination obtained for elongation to fracture suggests that this property is more sensitive to changes in the microstructural characteristics of the steels investigated, which were not considered in the present work when obtaining the equations. It is important to mention that, although this method can be used as a first step along with intercritical CCT diagrams to propose chemical compositions for developing AHSS by thermal cycles simulating a CAG process, it will be necessary to consider the microstructural aspects in future work to obtain more reliable results.
Microstructural Characteristics and Mechanical Properties of Experimental Steel
To validate the results obtained by the analysis of CCT diagrams and MLR, the experimental steel was produced at laboratory scale. The chemical composition considered to fabricate the steel was that needed to obtain the diagram shown in Figure 2f, which, according to MLR, could lead to the following mechanical properties: Ef = 25% and UTS = 995 MPa. The chemical composition obtained after melting and casting was C: 0.14%, Mn: 1.9%, Si: 1.1%, Al: 0.31%, Mo: 0.20%, Ni: 0.06%, Cu: 0.21%, Nb: 0.12%, Cr: 0.06%, P: 0.011% and S: 0.004% (all in wt.%), which shows minor variations in the alloying elements compared with the composition proposed. Figure 5 shows the transformed fraction of austenite during continuous heating determined by dilatometry and the lever rule. As can be seen, as the temperature of the steel increases above Ac1, the amount of ferrite (α) decreases and the amount of austenite (γ) increases up to Ac3. Austenite is stable above Ac3, which is characterized by a linear relationship between dilation and temperature. Within the two-phase (ferrite + austenite) field, the variation of the BC/AC ratio gives the transformed fraction of austenite. The temperature needed to obtain 50% α + 50% γ on continuous heating is about 805 °C, which is similar to that calculated by JMatPro to obtain the same proportion of phases (Figure 2f).
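The lever-rule construction mentioned above can be expressed compactly in code; the following generic Python sketch (not the authors' procedure) fits the linear single-phase branches of a dilation curve below Ac1 and above Ac3 and returns the transformed austenite fraction at each temperature.

```python
# Generic lever-rule sketch for a heating dilatometry curve: the ferrite and
# austenite branches are fitted as straight lines and extrapolated, and the
# relative position of the measured dilation between them (the BC/AC ratio)
# gives the transformed austenite fraction.
import numpy as np

def austenite_fraction(temp, dilation, ac1, ac3):
    temp, dilation = np.asarray(temp, float), np.asarray(dilation, float)
    lo = temp < ac1                                   # single-phase ferrite range
    hi = temp > ac3                                   # single-phase austenite range
    p_alpha = np.polyfit(temp[lo], dilation[lo], 1)   # ferrite branch
    p_gamma = np.polyfit(temp[hi], dilation[hi], 1)   # austenite branch
    d_alpha = np.polyval(p_alpha, temp)
    d_gamma = np.polyval(p_gamma, temp)
    frac = (d_alpha - dilation) / (d_alpha - d_gamma)  # lever rule
    return np.clip(frac, 0.0, 1.0)
```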
The transformation temperatures calculated by the software JMatPro for the chemical composition proposed were A1 = 714.4 °C and A3 = 891.8 °C (Figure 2f). The critical transformation temperatures determined on continuous heating for the experimental steel were about Ac1 = 696 °C and Ac3 = 903 °C (Figure 5), corresponding to differences of 18.4 °C and 11.2 °C, respectively. These differences can be associated with the variations between the proposed chemical composition and the composition obtained in the experimental steel after melting and casting. These results suggest that calculations by JMatPro can provide a good approximation of the transformation temperatures in low-alloyed steels. Other authors have reported a comparative study between the transformation temperatures calculated by JMatPro and the ones obtained by dilatometry in three different spring steel grades [73]. It was concluded that empirical heat-treatment data are helpful for guidance; however, for optimisation purposes, the exact parameters are a requirement. In the present work, intercritical annealing was conducted at 800 °C based on the results of Figures 2f and 5. Isothermal bainitic treatment (IBT) was conducted at 450 °C, considering that this temperature is similar to the one used in a CAG process. Figure 6a,c show the temperature vs. time plots obtained experimentally for IBT times of 30 s and 120 s, respectively. The corresponding dilation curves are presented in Figure 6b,d. Samples with IBT = 30 s show a first change in the dilation curve during annealing at 800 °C, which relates to the ferrite-to-austenite transformation (Figure 6b). During IBT, there is another change in the dilation curve, which relates to the austenite-to-bainite phase transformation. Furthermore, an additional change is observed during the final cooling, which corresponds to the austenite-to-martensite phase transformation. All these observations are supported by the CCT diagram of Figure 2f and the results of Figure 5. As can be seen in Figure 6d, samples with IBT = 120 s also exhibit three changes, which are associated with the ferrite-to-austenite, austenite-to-bainite and austenite-to-martensite phase transformations. However, the latter change is more significant than in samples with IBT = 30 s. The results suggest that the lower amount of bainite formed at shorter IBT times causes carbon enrichment of the austenite, favoring its retention at room temperature (Figure 6b). For longer IBT times, the amount of bainite is higher and its formation is accompanied by the precipitation of carbides; therefore, the carbon content of the austenite is reduced, causing the transformation of austenite to martensite (Figure 6d). Some authors investigated the evolution of phase transformations in cold-rolled steel sheets (2.47% Mn, 1.51% Si and 0.22% C) with a thickness of 1.2 mm (similar to the steel thickness used in the present work) [74]. Steel samples were thermally treated in a dilatometer and the phase transformations were followed through the changes in the dilation curve. Austenitization was conducted at 900 °C (1 min), followed by quenching at 100 °C/s to 350 °C, 375 °C and 400 °C. The holding time at these temperatures was varied from 1000 s to 3600 s, followed by a second rapid cooling to room temperature conducted at 100 °C/s [74]. The results show two changes in the slope of the dilation curve. The first change was associated with the austenite formation and the second was related to the austenite-to-bainite phase transformation [74].
No further transformation was observed during the final cooling. The absence of a third transformation during the second quenching was attributed to the austenite-to-bainite transformation already being complete, even after only 1000 s of holding. The temperatures used to follow the evolution of bainite were equal to or lower than 400 °C [74], meaning that the bainite transformation in that work was promoted at temperatures even lower than the melting point of zinc (about 420 °C). Galvanizing of steel is usually conducted at temperatures equal to or higher than 450 °C and thus, to obtain multiphase steels under similar conditions, bainite formation should be promoted at temperatures near 450 °C. The results of the present work give evidence of the austenite-to-bainite phase transformation at 450 °C (expansion in the dilation curve during IBT, Figure 6b,d). A third change is also observed during the final cooling to room temperature, even though the samples were cooled down at a slow cooling rate (2 °C/s). The transformation of austenite to martensite observed during cooling is more significant in samples with 120 s of IBT. It has been reported that unstable austenite may transform to martensite during final cooling if the carbon enrichment of the austenite is not sufficient [75]. It appears, then, that carbon depletion by the formation of carbides is more significant for longer IBT times, leading to a more significant change during final cooling. The third dilation change observed in Figure 6 is consistent with the shorter IBT times used here (30 s and 120 s); in the work reported in [74], no austenite remains after 1000 s or 3600 s of IBT, and thus no further transformation can occur even if a rapid cooling rate (100 °C/s) is used. The microstructures resulting from the thermal cycles are presented in Figure 7. Figure 7a,c show images obtained by optical microscopy (OM), while Figure 7b,d show images obtained by scanning electron microscopy (SEM). Ferrite (α), bainite (αB) and retained austenite/martensite (γ/α') appear gray, brown and white, respectively, when observed by OM (Figure 7a,c). Their morphology is observed in Figure 7b,d, which show a dark gray phase (ferrite), light gray + white (bainite) and white (austenite/martensite), as indicated by the yellow, green and blue arrows, respectively. It is important to mention that both austenite and martensite acquire the same color (white) when etched with LePera reagent; however, considering the results of Figure 6b,d, it is possible to conclude that aggregates of these phases are mainly constituted by austenite in samples with IBT = 30 s, and by martensite in samples with IBT = 120 s. According to the CCT diagram of Figure 2f, ferrite and austenite can be obtained during heating of the steel at 800 °C, and bainite can be formed during IBT at 450 °C. These results are consistent with the changes in the dilation curve and with the resulting microstructure (Figures 6 and 7, respectively). Other authors have designed the microstructure of low-alloyed multi-phase TRIP steels by combining computer simulations with experimental data [76], reporting morphologies similar to the ones obtained in the present work. Thermodynamic and kinetic calculations were employed in that work to establish a methodology for predicting the maximum ferrite and retained austenite fractions obtainable by a two-stage thermal cycle consisting of intercritical annealing and a subsequent isothermal bainitic treatment.
In that study [76], processing of the steel used for the validation stage was done by equal channel angular pressing (ECAP). ECAP methods are very effective in deforming metals in the severe plastic deformation range, but so far they do not seem easy to make industrially viable [77]. In the present work, the chemical composition was designed to develop multi-phase low-alloy TRIP steels based on intercritical CCT diagrams and MLR analysis, considering the possibility of obtaining these steels by thermal cycles simulating CAG lines, which offers an advantage over the methodology reported in [76]. Figure 8 shows the stress vs. strain curves of cold-rolled samples and of heat-treated samples subjected to the thermal cycles shown in Figure 6a,c. As can be observed, cold-rolled samples have the highest UTS values and the lowest values of elongation to fracture (Ef). After thermal treatment, a significant reduction in UTS and an increase in Ef are observed. The UTS of cold-rolled samples was about 1655 MPa with an Ef lower than 5% (average obtained from two tests). In the case of thermally treated samples, strength decreases with a significant increase in Ef. Samples subjected to an IBT of 30 s show UTS values of about 1020 MPa and Ef around 25.3%. Increasing the IBT time to 120 s causes an increase in the average UTS value to 1080 MPa and a reduction in the average Ef to 21.5%. According to the results of Figures 6 and 7, the thermal treatment causes the formation of multiphase structures consisting mainly of ferrite + bainite + austenite/martensite islands for an IBT time of 30 s, and of ferrite + bainite + martensite/austenite islands for an IBT time of 120 s. This result suggests that a larger amount of martensite favors the increase in strength and the reduction in elongation to fracture. Table 2 shows a comparison between the mechanical properties obtained from the multiple linear regression model (Equations (2) and (3)) and the ones obtained experimentally in thermally treated samples. The percentage error is also included; as can be seen, the proposed model can be satisfactorily used for the prediction of ultimate tensile strength, which is consistent with the coefficient of determination obtained for this property (R² = 0.94); the percentage error for this property is less than 8%. Elongation to fracture shows a higher percentage error (less than 17%), which is consistent with the lower coefficient of determination (R² = 0.84). These results suggest that elongation to fracture is more sensitive to microstructural features (grain size and the type, amount and morphology of second phases), which were not considered when obtaining the model. The mechanical properties of the AHSS-TRIP steels predicted by the proposed equations are closer to the measured values when the typical microconstituents (ferrite, bainite, retained austenite) are obtained. This observation is supported by the lower relative errors observed in the presence of ferrite + bainite + retained austenite/martensite in samples subjected to IBT = 30 s. When carbide precipitation is more significant (IBT = 120 s), unstable austenite transforms to martensite, as observed in Figure 6d, leading to a microstructure consisting mainly of ferrite + bainite + martensite/austenite, which results in a higher relative error (Table 2).
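The percentage errors in Table 2 can be reproduced directly from the values quoted above; the short check below (illustrative only, not the authors' code) uses the MLR predictions (Ef = 25%, UTS = 995 MPa) and the measured averages for IBT = 30 s and 120 s.

```python
# Reproduce the relative errors discussed for Table 2 from the values in the text.
predicted = {"UTS": 995.0, "Ef": 25.0}
measured = {
    "IBT 30 s":  {"UTS": 1020.0, "Ef": 25.3},
    "IBT 120 s": {"UTS": 1080.0, "Ef": 21.5},
}

for condition, props in measured.items():
    for name, exp_value in props.items():
        err = abs(predicted[name] - exp_value) / exp_value * 100.0
        print(f"{condition:>10} {name}: {err:.2f}% error")
# Gives roughly 2.45% and 7.87% for UTS, and about 1.2% and 16.3% for Ef,
# consistent with the ranges reported in the conclusions.
```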
Conclusions
• The use of intercritical CCT diagrams and the proposed multiple linear regression model, both obtained as a function of the concentration of the alloying elements, provides an approach for the prediction of both the microstructures and the mechanical properties of low-alloyed, third-generation AHSS-TRIP steels.
• The theoretical study of phase transformations based on CCT diagrams shows good agreement with the results obtained by dilatometry.
• The percentage error for ultimate tensile strength varies from 2.45% to 7.87%, while that for elongation to fracture varies from 1.18% to 16.27%, which suggests that the latter is more sensitive to microstructural changes that were not considered when obtaining the model.
• The methodology presented in this investigation represents a potential tool for the development of low-alloyed advanced high-strength steels obtained under conditions that simulate continuous annealing and galvanizing lines.
9,667
2021-11-03T00:00:00.000
[ "Materials Science" ]
Predicting Charging Time of Battery Electric Vehicles Based on Regression and Time-Series Methods: A Case Study of Beijing
Battery electric vehicles (BEVs) reduce energy consumption and air pollution as compared with conventional vehicles. However, the limited driving range and potential long charging time of BEVs create new problems. Accurate charging time prediction of BEVs helps drivers determine travel plans and alleviate their range anxiety during trips. This study proposed a combined model for charging time prediction based on regression and time-series methods according to the actual data from BEVs operating in Beijing, China. After data analysis, a regression model was established by considering the charged amount for charging time prediction. Furthermore, a time-series method was adopted to calibrate the regression model, which significantly improved the fitting accuracy of the model. The parameters of the model were determined by using the actual data. Verification results confirmed the accuracy of the model and showed that the model errors were small. The proposed model can accurately depict the charging time characteristics of BEVs in Beijing.
Introduction
With the rapid development of the global automobile industry, the increasing vehicle ownership has resulted in worsening problems of environmental pollution and energy shortage. Battery electric vehicles (BEVs) have become a mainstream technology direction that promotes energy conservation, emission reduction and environmental protection due to their good environmental protection and energy adjustment effects. They have an unparalleled advantage over conventionally fueled vehicles to realise the automobile industry's technological transformation, upgrading and development [1]. However, BEVs have the disadvantage of short driving range compared to conventional fuel vehicles. Therefore, BEV drivers need to charge their vehicles during trips [2]. Charging processes take a long time, so the charging behaviour during trips prolongs the driver travel times [3]. These problems pose a serious obstacle for drivers in choosing BEVs to travel. The accurate prediction of charging time can effectively alleviate the inconvenience to drivers caused by the inevitable charging behaviour during trips. Drivers can predict the charging time in advance according to the state of the vehicle to plan for efficient travel. Therefore, attention should be given to the charging time prediction in the charging behaviour of BEV drivers. Charging time accounts for a relatively large proportion in the total travel time, which is an important factor for charging behaviour [4]. Recently, few studies on the problem of charging time prediction for BEVs have been conducted. In practice, accurately predicting charging time is difficult because environmental and other unobservable factors can affect the charging time of BEVs. With the recent development of data collection techniques, large volumes of data for BEV charging events can be obtained. Besides the observable factors, many environmental and unobservable factors are hidden in the data. Establishing a model through the data is an effective method to realize the accurate charging time prediction.
In this study, a prediction model of BEV charging time was built based on charge and discharge data that were collected from 70 BEVs in Beijing, China. The data were representative because the BEVs used to collect data operate like other vehicles in the road network and the charging behaviour in real-world condition can be reflected in the data. Notably, compared with the data used in previous literature, the data used in this study to fit the prediction model of charging time were collected from BEVs in Beijing. The data used in the previous studies were mainly collected from BEVs in the USA. The traffic and charging environment in other countries and areas, such as China, differ from those in the USA. For example, most BEV drivers in the USA charge their vehicles by using private chargers at home. However, in China, private chargers are not widely installed at residences due to the constrained residential and traffic conditions. Public charging stations are the main mode used to charge BEVs in China. This finding indicates that the characteristic of charging behaviour in the USA and China is different. Therefore, most of the previous research results on charging behaviour are not suitable for the condition in Beijing. The proposed model is suitable to be applied in predicting charging time of BEVs in Beijing and other similar areas. Moreover, in real-world scenarios, the charging time of BEVs is affected by several factors, such as battery capacity, residual energy, battery health state, battery charging efficiency and charged amount. Some factors cannot be directly observed, such as battery health state and battery charging efficiency, and thus cannot be used to build a prediction model of charging time. However, the unobservable factors have significant impacts on the charging time of BEVs. Therefore, the charging time prediction model without the unobservable factors leads to a significant prediction error. To address the problem, a time-series method was used to fit the prediction error that results from the unobservable factors, thereby reducing the prediction error and improving the prediction accuracy. A combined model for charging time prediction, which simultaneously adopts the regression and time-series methods for modelling, was established based on the actual data. The proposed model may be used by BEV drivers to determine their travel plan or by city planners to design public charging infrastructure considering the charging behaviour of BEV drivers. The rest of the paper is organised as follows: in Section 2, the literature review is presented. In Section 3, the data sources are described and the data processing is introduced. In Section 4, a combined model for predicting charging time of BEVs is built based on the regression and time-series methods by using the actual data. In Section 5, the conclusions and directions for future research are presented.
Literature Review
Charging time is one of the most important factors for charging behaviour. It has a significant impact on the travel time of BEVs. However, few studies have explored the charging time and its prediction. In recent years, several studies have explored the charging behaviour and its impacts from the perspective of a power grid system operation, because the charging behavior of BEV drivers has significant impacts on power grid systems. He et al.
[5] established the dynamic models for the BEV battery and power systems, and the impacts of the charging behaviour of BEV drivers on the power systems were explored based on the dynamic models.Clement-Nyns et al. [6] discussed the impacts of charging behavior on a residential distribution grid.A coordinated charging strategy was proposed to minimize the power losses and to maximize the main grid load factor.An et al. [7] proposed a computational framework for decision-making process of charging behaviour.The vehicle-to-grid services were considered in the framework, which aimed to improve the operational efficiency and security of power grid systems.Habib et al. [8] analysed the impacts of various conditions of charging behaviour on power grid systems.The coordinated/un-coordinated charging, delayed charging and off-peak charging were analysed to explore their impacts on power grid systems.Zhang et al. [9] proposed a decentralized BEV charging strategy to ensure high-efficiency charging and reduce load variations for power grid systems during charging periods.An extensive set of simulations and case studies with real-world data were used to demonstrate the benefits of the proposed strategies and the impacts of proposed charging strategy on power grid systems were discussed.Cui et al. [10] established a multi agent-based simulation framework to model the spatial distribution of BEV ownership at local residential level.Based on the framework, the impacts of the charging behavior resulting from the increasing BEV ownership on the local power grid system were explored by considering different charging strategies. Moreover, related to the charging behaviour of BEVs, there are several studies that have discussed the methods to mitigate the concentrated charging.Kumar and Tseng [11] examined the impacts of demand response management on the chargeability of BEVs and proposed a scheduling driven algorithm to mitigate the concentrated charging.Aziz et al. [12] developed a battery-assisted charging system to improve the charging performance of a quick charger for BEVs.In addition, the effects of proposed system on mitigating the concentrated charging in different seasons were demonstrated by charging experiments.Mukherjee et al. [13] explored the problems of BEV concentrated charging and established a bounded maximum energy usage maximum BEV charging problem.A pseudo-polynomial algorithm was proposed to obtain an upper bound for the energy usage.The strategy for mitigating the concentrated charging was proposed based on the simulation results.Besides mitigating the concentrated charging, the charger distribution is correlated strongly with the charging behaviour.Several studies have discussed the charger distribution problem based on charging behaviour of BEV drivers.Sun et al. [14] adopted the mixed logit models to explore the fast-charging station choice behaviour.The results provided a basis for early planning of a public fast charging infrastructure.Oda et al. [15] proposed a model for quick charging service and analysed the charging behaviour for quick charging to estimate future waiting times.Based on the results, the charger distribution problem was discussed to reduce waiting times for charging.Awasthi et al. 
[16] proposed a method to deal with the optimal planning of charger distribution by considering charging behaviour of BEV drivers.A hybrid algorithm based on genetic algorithm and improved version of conventional particle swarm optimization was utilized for finding optimal locations of charging station. However, in the studies as mentioned above, the impacts of charging behaviour are not analysed from the perspective of drivers.The charging behavior of BEVs has significant impacts on drivers' trips.Recently, there exist several studies that have explored the charging behaviour from the perspective of BEV drivers.Jabeen et al. [17] conducted a survey on driver charging start time, charging time and charging cost by considering the charging behaviour of BEVs.The results show that most drivers charge their BEVs at home.When charging at a public charging station, the drivers are concerned with the charging time.Azadfar et al. [18] studied the main factors affecting the charging behaviour of BEV drivers based on the data of resident travels and charging behaviour.The results show that the penetration rate of BEVs, charging station facilities, battery performance and charging costs have become the main factors affecting charging behaviour.Among them, charging station facilities and battery performance are the two most important factors affecting charging behaviour.Axsen and Kurani [19] applied a network survey to collect data on the BEV purchase rate, parking habits, location of charging piles and selection of charging piles to further understand charging demands in the USA.Bunce et al. [20] conducted a survey on charging behaviour of residents and found that most drivers tend to plan travel and charging times in advance rather than finding a possible charging opportunity at any time.Franke and Krems [21] established a user-battery interaction style variable based on the information about charging and driving of 79 BEV drivers during six months to explore the effect of driver psychological state on their charging behaviour.The results show that familiarity with BEV, acceptable price range and use efficiency of BEV affect the charging behaviour of BEV drivers.Adornato et al. 
[22] conducted a follow-up survey on several BEV drivers in Southeastern Michigan, USA, to obtain possible charging times and locations of BEVs and to establish models of energy consumption and charging demand prediction. The results show that drivers often charge their BEVs at shopping malls, home or work, with average charging times of 30 min, 3.8 h (excluding night time) and 9.4 h, respectively. However, the affecting factors and prediction methods for charging time have not been explored in these studies. The existing studies have mainly analysed the charging behaviour of BEVs based on statistical methods, describing the statistical regularity of charging behaviour. Moreover, a few studies have explored the charging behaviour based on data collected from running BEVs. However, the data used in the existing studies were mainly collected from BEVs operating in the USA and other similar developed countries. The results are unsuitable for BEVs operating in Beijing or other similar areas due to the different traffic and charging environment. Furthermore, several studies have previously analysed characteristic quantities of charging time, such as the average charging time in specified areas. However, charging time prediction was not involved in the previous studies. Charging a BEV is time consuming, which significantly prolongs the total travel time. Thus, the accurate prediction of the charging time will provide decision support for BEV drivers in determining travel plans. Notably, compared to the existing methods in the previous literature, the proposed method is developed based on actual data collected from BEVs operating in Beijing. The impacts of the traffic and environment in Beijing on the charging processes of BEVs are thus involved in the charging time prediction. Moreover, in the proposed model for charging time prediction, a time-series method is adopted to reduce the prediction error that results from the unobservable impacting factors of charging processes. To the best of our knowledge, this is the first time that a time-series method is applied to establish a model for charging time prediction.
Data Collection and Processing
The data used in this study were collected from 70 BEVs produced by BAIC Motor Corporation, Ltd. (Beijing, China). These BEVs are widely used in Beijing. The vehicle type is BJ5020XXYV3R-BEV. They are mainly used for short-distance travel in the city. The maximum driving range of the BEVs is 128 km at normal atmospheric temperature. The maximum speed at which the BEVs can operate is 93 km/h. The nominal capacity of their batteries is 24 kWh, which is a common capacity level for BEV batteries in recent years. During the data collection process, the BEVs operated in the road network as regular vehicles. During their operation, their charge and discharge data were collected from the operation monitoring and scheduling platform. The charge and discharge data, collected by an information collection and transmission terminal installed in each BEV, were regularly sent to the platform via GPRS wireless transmission at a fixed period (e.g., 5 s) and then logged by the platform into the database. Thus far, the total amount of charge and discharge data collected from March 2015 to April 2017 is approximately 30 GB.
The data included timestamp, car number, total current, speed, total voltage, mileage and state of charge, among others. The state of charge (SOC) is one of the important parameters used to describe the state of a battery [23]. It indicates the ratio of the current capacity of the battery to its rated capacity and is a relative quantity between 0 and 1:

SOC = Q_c / Q_m,

where Q_m is the rated capacity of the battery (kWh) and Q_c is the current capacity of the battery (kWh). The rated capacity of the battery was 24 kWh. The complete charging and discharging data for 2016 were selected as the research object because of their completeness and stability. The research object contained a large amount of data not related to charging behaviour; hence, the raw data needed to be filtered to obtain complete charging processes. Firstly, five variables, namely time, vehicle number, total current, total voltage and SOC, were selected from the charge and discharge data for 2016. Secondly, each continuous growth interval of a vehicle battery's SOC was extracted and taken as an initial charging process. Finally, 41,400 sets of initial charging processes were obtained. In the data acquisition process, duplicate and abnormal records usually result from unstable receiving and sending of data; both kinds of anomalies were deleted directly. Moreover, if the wireless signal intensity is weak when the in-vehicle terminal uses GPRS to send data, the data cannot be sent to the platform and are lost. The deletion operation can also result in data loss. The Lagrange interpolation method was used to ensure the integrity of the data and improve the accuracy and credibility of the model [24]. In addition, because of data loss during charging, the original charging records may not truly reflect the actual charging process. Thus, after the data were deleted and interpolated, it was necessary to identify the charge state of each initial charging process and to select the effective charging processes. The steps for selecting an effective charging process from the original charging data are as follows:
Step 1: Selection of Time_start and SOC_start: when I_total < 0 and Speed = 0, Time_start = Time_min and SOC_start = SOC_min.
Step 2: Selection of Time_end and SOC_end: records with 30 min < Time_operate − Time_max < 2 h account for only 2.4% of the total data and were therefore deleted. The following cases should be considered: (1) when Time_operate − Time_max ≤ 30 min, the vehicle begins to operate after it is fully charged; in this case, Time_end = Time_max and SOC_end = SOC_max; (2) when Time_operate − Time_max ≥ 2 h, the vehicle cannot record data because the in-vehicle data collection device is turned off, so it cannot simply be assumed that the vehicle stops charging at Time_max. Instead, the SOC at the start of the next operation is taken as the SOC at the completion of charging, i.e., SOC_end = SOC_operate. Averaging over the historical charging records shows that an SOC increase of 0.4% takes approximately 2 min of charging, and the stop-charging time is estimated accordingly. The variables are defined as follows: Time_start (start charging time), Time_end (stop charging time), SOC_start (start charging SOC), SOC_end (stop charging SOC) and SOCc (SOCc = SOC_end − SOC_start); SOC_min (minimum SOC) and SOC_max (maximum SOC) correspond to Time_min and Time_max, respectively; SOC_operate (SOC at the start of the vehicle's next operation) corresponds to Time_operate. In total, 40,350 sets of effective charging process data were obtained, laying a foundation for the subsequent modelling.
Data Analysis
The data of the effective charging processes were analysed to determine the main factors that influence the charging time. During charging, the charging time increases with the charged amount SOCc, suggesting a positive linear relationship. To verify the linear relationship between the charged amount SOCc and the charging time, the partial correlation analysis method [25] was applied. The data, which include 10 sets of BEV charging processes at different periods in August 2016, were adopted to obtain the partial correlation coefficients and significance values, as shown in Table 1. As shown in Table 1, 10 different sets of BEV charging processes at different periods were analysed to obtain the partial correlation coefficients between SOCc and charging time. The values of all the partial correlation coefficients were greater than 0.980, indicating a strong positive linear relationship between them. For the significance, values less than 0.05 indicate that the partial correlation results are significant; in the table, the significance values are well below 0.05. Moreover, to explore the impacts of charge voltage and current on the charging processes, six sets of charging data were extracted to obtain the variation of voltage and current with increasing SOC, as shown in Figure 1. As shown in Figure 1a, the total voltage rises slowly during most of the charging process and rises sharply when the battery is nearly full. Figure 1b shows that the total battery current is always negative and fluctuates within a range of about 0.4 during the charging process. Therefore, the voltage has little effect on the charging time, and the charging current is relatively stable during the charging process.
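As an illustration of the session-selection steps described above (Steps 1 and 2), the following Python sketch extracts candidate charging sessions from the raw records; the column names are assumptions, and the sketch omits the interpolation and end-of-charging corrections applied in this study.

```python
# Minimal pandas sketch (assumed column names) of charging-session extraction:
# charging records have negative total current and zero speed, and a session is
# a maximal run of non-decreasing SOC for one vehicle.
import pandas as pd

def extract_sessions(df: pd.DataFrame) -> list:
    """df columns (assumed): time, vehicle_id, I_total, speed, SOC."""
    df = df.sort_values(["vehicle_id", "time"])
    charging = df[(df["I_total"] < 0) & (df["speed"] == 0)].copy()
    # A new session starts when SOC stops increasing or the vehicle changes.
    soc_drop = charging.groupby("vehicle_id")["SOC"].diff() < 0
    vehicle_change = charging["vehicle_id"] != charging["vehicle_id"].shift()
    charging["session_id"] = (soc_drop | vehicle_change).cumsum()
    # Keep sessions with at least two records (a continuous SOC growth interval).
    return [g for _, g in charging.groupby("session_id") if len(g) > 1]
```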
Basic Regression Model
According to the data analysis results, the charged amount SOCc has a significant linear relationship with the charging time. The basic regression prediction model between the charging time and the charged amount SOCc is shown in Equation (3):

y = kx + b + ε, (3)

where y represents the charging time of the BEV in hours (h); x indicates the battery charged amount SOCc (%) during the charging process; k and b are the parameters to be identified; and ε represents white noise. A set of representative BEV charging process data from 10:12:15 to 17:06:20 on 2 September 2016 was selected to obtain the undetermined parameters of the charging time model in Equation (3). The forgetting-factor recursive least-squares algorithm [26] was adopted to identify the model parameters, considering the characteristics of the BEV charging data. The parameter identification results are shown in Figure 2. They show that parameter k changed significantly in the initial iterations but soon converged and tended to be stable, while parameter b varied little throughout the iterations. As shown in Table 2, the convergence value of parameter k is 0.0871 and the convergence value of parameter b is 0.0127. To obtain accurate parameter values, 15 arbitrary sets of BEV charging data from 1 September 2016 to 25 October 2016 were chosen and the parameter identification method presented above was applied repeatedly to obtain the estimated parameters shown in Table 3. After the parameter identification tests, the mean value of parameter k was 0.0854 and the mean value of parameter b was 0.0091. Therefore, the relationship model between the charging time y and the charged amount SOCc x can be expressed as

y = 0.0854x + 0.0091.

To verify the fitting effect of the model, the goodness-of-fit test and the significance test were carried out [27]. The statistical test results are shown in Table 4. As shown in Table 4, the fitting coefficient R² is 0.951 and the standard deviation s_e is 0.043, indicating that the fitting effect of the model is good; F = 79.88 > F0.05, indicating a significant linear relationship between the charged amount SOCc and the charging time; and T = 14.40 > T0.05, indicating that the regression coefficient k = 0.0854 between the charged amount SOCc and the charging time is significant.
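A minimal sketch of the forgetting-factor recursive least-squares identification used above is given below (an assumed implementation, not the authors' code); the synthetic data are generated from the fitted relationship y = 0.0854x + 0.0091, so the estimates should converge near those values.

```python
# Recursive least squares with a forgetting factor for the model y = k*x + b.
import numpy as np

def ffrls(x, y, lam=0.98):
    """Return the estimated [k, b] after processing all (x, y) samples in order."""
    theta = np.zeros(2)                  # parameters [k, b]
    P = np.eye(2) * 1e4                  # large initial covariance
    for xi, yi in zip(x, y):
        phi = np.array([xi, 1.0])        # regressor [x, 1]
        K = P @ phi / (lam + phi @ P @ phi)
        theta = theta + K * (yi - phi @ theta)
        P = (P - np.outer(K, phi) @ P) / lam
    return theta

# Synthetic example based on the fitted relationship quoted above.
x = np.linspace(1, 80, 200)              # charged amount SOCc (%)
y = 0.0854 * x + 0.0091 + np.random.normal(0, 0.04, x.size)
print(ffrls(x, y))                       # should converge near [0.0854, 0.0091]
```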
To further verify the error of the model, the mean error (EMean), root mean square error (RMSE) and root mean square relative error (RMSRE) were used as indexes to test the model [28] on the basis of three sets of charging data not used for modelling. The results are shown in Table 5, where the mean error is less than 0.07, the root mean square error is less than 0.09 and the root mean square relative error is less than 0.016. In other words, the prediction error of the charging time was within 6 min. Therefore, the charging time model based on the charged amount SOCc can accurately reflect the actual charging process and is practical. In addition, the errors of the three groups of data were standardised to observe their distribution. The equation used to standardise the errors is

z_e,i = (ŷ_i − y_i) / s_e,

where z_e,i represents the standardised error of the i-th observation; ŷ_i and y_i are the predicted and actual values, respectively; and s_e represents the standard deviation of the error sequence. Figure 3 presents the standardised errors of the charging time prediction. As shown in the figure, for the three BEVs, 65.5%, 79.6% and 82.6% of the standardised errors were found to be between −2 and 2, respectively. If the errors followed a normal distribution, approximately 95% of the normalised errors should fall between −2 and 2. The errors of the charging time against the charged amount SOCc did not satisfy this condition; in other words, the errors did not follow a normal distribution. Therefore, the model needs to be calibrated to obtain accurate predictions. Each value of the charged amount SOCc was regarded as an observation point, and the corresponding error was taken as the observed value at that point. The time-series method was then applied to further modify the model.
Combined Model Based on Regression and Time-Series Methods
In statistics, a set of random variables ordered in time is called a time series. The time-series modelling process includes a stationarity test, differencing, a white noise test and autoregressive moving average (ARMA) model fitting [29]. Following this process, the stationarity of the prediction errors of data set 1 in Table 5 was tested first. The main stationarity tests are graphical inspection and the augmented Dickey-Fuller (ADF) test [30]. Figure 4 shows the sequence diagram of the estimated error of the charging time. As shown in the figure, the estimated error of the charging time shows a cyclical growth trend. Moreover, the results of the autocorrelation and partial correlation analysis indicate that the autocorrelation coefficients of the sequence decreased slowly and remained greater than zero over a long range of lags, which is consistent with the growth trend presented in Figure 4. Furthermore, the ADF unit root test results are shown in Table 6; p = 1 (>0.05), as represented by Prob.* in the table. Combining the graphical and ADF test results, the original sequence is non-stationary, and thus a p-order difference was used to eliminate its trend. First- and second-order difference results of the original sequence are presented in Figure 5. The variances of the sequence values after the first- and second-order differences were calculated; the results are shown in Table 7.
Difference            Variance
First order           0.000159
Second order          0.000469

The second-order difference was judged to be over-differencing (0.000469 > 0.000159); the first-order difference of the original sequence is sufficient to eliminate its trend. After eliminating the trend of the original sequence, a further difference with a lag of three steps was applied to eliminate the cyclical effect. Figure 6 presents the new sequence after this three-step difference. As shown in Figure 6, the new sequence has no significant trend or periodicity after differencing, but presents the characteristics of random fluctuation. However, whether this new sequence is stationary depends on the autocorrelation (partial correlation) analysis and the ADF test. The ADF test results for the sequence are shown in Table 8; p = 0 (<0.05). The autocorrelation and partial correlation results of the differenced sequence indicate that its autocorrelation coefficients drop rapidly to 0 after a very short lag. Therefore, combining the graphical and ADF test results, the differenced sequence is stationary. Table 9 shows the results of the white noise test after differencing. The value of Q is obtained from the autocorrelation coefficients of all the samples up to the considered lag order, and the value of P reflects the significance level of the autocorrelation coefficients. By comparing the value of P with the significance level of 0.05, it can be determined whether the sequence is white noise. In the table, all the P values are less than the significance level of 0.05, so the differenced sequence is not white noise and therefore needs to be modelled further. In the model identification process, the characteristics of the sample autocorrelation and partial correlation plots were used to estimate the model order [31]. As shown in Figure 6, the autocorrelation coefficients of the differenced sequence decay very quickly after a short lag, and the clear majority fall within two standard deviations; thus, the autocorrelation function can be regarded as cutting off, while the partial correlation coefficients present an obvious tailing behaviour. According to the basic principles of ARMA modelling, a moving average model MA (1) was selected initially. However, a sparse-coefficient model was finally selected due to the sudden increase of the autocorrelation and partial correlation coefficients in the decay process. Test results are shown in Tables 10 and 11.
Model              SSR     SE
MA (1)             0.004   0.005
MA (24)            0.004   0.005
MA (25)            0.002   0.004
MA (26)            0.004   0.005
MA (1,24)          0.004   0.005
MA (1,25)          0.003   0.004
MA (1,26)          0.003   0.005
MA (24,25)         0.002   0.004
MA (24,26)         0.004   0.005
MA (25,26)         0.002   0.003
MA (1,24,25)       0.002   0.004
MA (1,24,26)       0.002   0.004
MA (1,25,26)       0.001   0.003
MA (24,25,26)      0.002   0.004
MA (1,24,25,26)    0.001   0.003
ARMA (24,1)        0.003   0.005
ARMA (24,24)       0.003   0.005
ARMA (24,25)       0.002   0.004
ARMA (24,26)       0.003   0.005
ARMA (24

According to the test results presented above, MA (1,24,25,26) and MA (1,25,26) were selected because of their good fitting accuracy and small residuals. Corresponding to their structures, MA (1,24,25,26) has four parameters and MA (1,25,26) has three parameters. The parameters of MA (1,24,25,26) are denoted as θ(1), θ(24), θ(25) and θ(26); the parameters of MA (1,25,26) are denoted as θ(1), θ(25) and θ(26). Furthermore, least-squares estimation was adopted for model parameter estimation, and the parameter estimation results are shown in Table 12. The time-series model test included parameter and model significance tests [32]. Table 13 shows the results of the parameter significance test. As shown in Table 13, θ(24) is not significant (P = 0.67 > 0.05), so MA (1,24,25,26) did not meet the requirements; hence, MA (1,25,26) was selected as the final model. Based on the differencing operation, the combined prediction model is obtained, where y_t and x_t are the charging time and the charged amount SOCc in the combined model; u(t) denotes the error at time step t; u_{t−1}, u_{t−3} and u_{t−4} are the lagged error terms that result from reversing the differencing; and v_t, v_{t−1}, v_{t−25} and v_{t−26} are the stochastic disturbance terms. A model significance test was used to verify the validity of the model. A good model should fully extract the relevant information in the sequence; in other words, the residual sequence should be white noise. As shown in the prediction results, the trend of the real values is basically in line with the prediction curves. As shown in Figure 7b, the errors between the predicted and real values are distributed between −0.0112 and 0.0110; as shown in Figure 7d, the errors between the predicted and real values are distributed between −0.0101 and 0.0066; and as shown in Figure 7f, the errors between the predicted and real values are between −0.0145 and 0.0135. The error analysis results are shown in Table 16. As shown in Table 16, the mean errors are all less than 0.007, the root mean square errors are all less than 0.008 and the root mean square relative errors are all less than 0.002. In other words, the prediction error of the charging time was controlled within 0.42 min. The accuracy of the charging time prediction was improved by 92% compared with the basic regression prediction. Figure 8 shows the standardised errors of the charging time prediction. As shown in Figure 8, the standardised errors of the three BEVs fall between −2 and 2, and the errors are approximately normally distributed. To sum up, the combined model makes the prediction of charging time accurate and practical.
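For illustration, the differencing and sparse MA fitting described above can be reproduced with standard time-series tooling; in the Python sketch below the input file and column name are hypothetical, and statsmodels' SARIMAX is used because it accepts an explicit list of MA lags (here 1, 25 and 26).

```python
# Hypothetical sketch of the time-series step: the regression-error sequence is
# differenced at lag 1 and then at lag 3, and an MA model with sparse lags
# (1, 25, 26) is fitted to the resulting stationary sequence.
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

errors = pd.read_csv("regression_errors.csv")["error"]   # hypothetical file/column
w = errors.diff(1).diff(3).dropna()                        # remove trend and cycle

model = SARIMAX(w, order=(0, 0, [1, 25, 26]), trend="n")   # sparse MA lags
result = model.fit(disp=False)
print(result.summary())                                    # theta(1), theta(25), theta(26)
print("SSR:", (result.resid ** 2).sum())
```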
Conclusions
The accurate prediction of charging time is an important issue for drivers of BEVs; it is very useful for determining travel plans and alleviating range anxiety during trips. In this study, actual data from 70 BEVs operating in Beijing, China, were used to explore charging time prediction. The data completely recorded the charging processes of the BEVs. After data processing, the experimental data were used for the relationship analysis. The results indicated a significant linear relationship between the charged amount SOCc and the charging time. A basic regression model was established by using the experimental data for parameter identification based on the data analysis results. The fitting effect of the model was verified by using the goodness-of-fit and significance tests. Moreover, to further improve the accuracy of the prediction results for charging time, a time-series method was applied to calibrate the proposed model based on the actual data. A combined model for charging time prediction based on regression and time-series methods was thereby built. An experiment was designed to verify the combined model based on the experimental data. The standard error of the model was within a reasonable range, and the errors were normally distributed, thereby confirming that the proposed model possesses good prediction accuracy. Notably, all experimental data were collected from BEVs operating in Beijing, and mainly record the charging processes of the vehicles. However, the external environment (such as charging station state and temperature) when charging a BEV may affect the charging time. Therefore, future research can consider the impact of the external environment on charging time prediction to further improve model performance.
Figure 1. Charge voltage and current of six charging processes. (a) Charge voltage and SOC; (b) charge current and SOC.
Figure 2. Identification result of parameters for the basic regression prediction model. (A) Identification result of parameter k; (B) identification result of parameter b.
Figure 3. Standard error of charging time prediction.
Figure 5. First- and second-order difference results of the original sequence. (a) First-order differential sequence; (b) second-order differential sequence.
Figure 7. Prediction results and errors of the combined prediction model of charging time. (a) Prediction of charging time of data 1; (b) charging time prediction error of data 1; (c) prediction of charging time of data 2; (d) charging time prediction error of data 2; (e) prediction of charging time of data 3; (f) charging time prediction error of data 3.
Figure 8. Standard error of charging time prediction.
Table 1. Partial correlation analysis of SOCc and charging time.
Table 2. Parameters of charging time model.
Table 3. Parameters of charging time model.
Table 4. Statistical test results of regression equation.
Table 8. ADF test result after differencing.
Table 9. White noise test results after differencing.
Table 10. SSR and SE of ARMA (p, q).
7,544.6
2018-04-24T00:00:00.000
[ "Engineering", "Environmental Science" ]
High-performance solutions of geographically weighted regression in R
ABSTRACT As an established spatial analytical tool, Geographically Weighted Regression (GWR) has been applied across a variety of disciplines. However, its usage can be challenging for large datasets, which are increasingly prevalent in today's digital world. In this study, we propose two high-performance R solutions for GWR via Multi-core Parallel (MP) and Compute Unified Device Architecture (CUDA) techniques, respectively GWR-MP and GWR-CUDA. We compared GWR-MP and GWR-CUDA with three existing solutions available in Geographically Weighted Models (GWmodel), Multi-scale GWR (MGWR) and Fast GWR (FastGWR). Results showed that all five solutions perform differently across varying sample sizes, with no single solution a clear winner in terms of computational efficiency. Specifically, the solutions given in GWmodel and MGWR provided acceptable computational costs for GWR studies with a relatively small sample size. For a large sample size, GWR-MP and FastGWR provided coherent solutions on a Personal Computer (PC) with a common multi-core configuration, where GWR-MP provided more efficient computing capacity per core or thread than FastGWR. For cases when the sample size was very large, and for these cases only, GWR-CUDA provided the most efficient solution, although its I/O cost with small samples should be noted. In summary, GWR-MP and GWR-CUDA provide complementary high-performance R solutions to existing ones, and for certain data-rich GWR studies they should be preferred.
Introduction
Geographically Weighted Regression (GWR) (Brunsdon, Fotheringham, and Charlton 1996, 1998; Fotheringham, Charlton, and Brunsdon 1998; Fotheringham, Brunsdon, and Charlton 2002) is a technique specifically developed to explore spatial heterogeneities in a regression's "response to predictor variable" relationships. Unlike a fixed coefficient regression, such as an Ordinary Least Squares (OLS) regression, GWR allows regression coefficients to vary spatially; the resultant coefficient maps allow an investigation into their change (if any) across space. The GWR methodology has been extensively developed in terms of its usage and extensions (Comber et al. 2022), although inference in GWR is not always as stable as that found with, say, an OLS regression, and as such, GWR adaptations exist to counter this (da Silva and Fotheringham 2016; Harris et al. 2017). GWR has been widely applied in many scientific domains, including regional economics (e.g. Jin, Xu, and Huang 2019), urban planning (e.g. Cao et al. 2019b), sociology (e.g. Yin et al. 2018), ecology (e.g. Liu et al. 2019), public health (e.g. Wang et al. 2019; Xu et al. 2021), agriculture (e.g. Harris et al. 2017), and environmental science (e.g. Cao et al. 2019a; Huang and Wang 2020). Our increasingly digital world continues to generate huge volumes of data, many of which are spatially indexed (Lee and Kang 2015; Ivan et al. 2017). However, in order to attribute process understanding to such "Big Spatial Data", almost all spatial models require adaptation so they can be efficiently calibrated and validated within tolerable time frames. GWR is one such computationally demanding model and in this respect has benefitted from high-performance computing solutions (Harris et al. 2010; Murakami et al. 2020; Li et al. 2019b). Commonly, such solutions only exist for the conventional forms of GWR, where many extended GWR models are more computationally demanding still - for example, multiscale GWR (Lu et al.
2018; Li and Fotheringham 2020), which requires a complex iterative solution to its calibration. Similarly, Geographically and Temporally Weighted Regression (GTWR) (Huang, Wu, and Barry 2010; Fotheringham, Crespo, and Yao 2015) for space-time processes has a higher computational demand than that found with conventional GWR. Unsurprisingly, there are an increasing number of (conventional) GWR applications exploring "Big Data" (e.g. Cao, Diao, and Wu 2019). Here, we conducted a bibliometric study, searching the keyword "Geographically Weighted Regression" via Web of Science (WoS), where in total, 2014 articles were found from 1999 to 2019, and their keywords are visualized in a word cloud form (Figure 1). Observe the frequency (size) of "Big Data", which appears second only to "GWR". Thus, the demand for high-performance solutions for GWR is clear, where its application in "Big Data" problems can be limited (Murakami et al. 2020), even with the employment of the existing solutions listed above (section 1.2).
Existing implementations of GWR
There are a number of standalone implementations with GWR enabled, such as GWR3 (Charlton, Fotheringham, and Brunsdon 2003), GWR4 (Nakaya et al. 2009), the GWR tool in ESRI ArcGIS (ESRI Corp 2011), and Multi-scale GWR (MGWR). GWR is also available through scripting platforms with: the mgwr module of the PySal package in Python (Oshan et al. 2019); as part of the econometrics toolbox in MATLAB (LeSage and Pace 2009); and five R packages - spgwr (Bivand and Yu 2006), Geographically Weighted Models (GWmodel) (Lu et al. 2014b; Gollini et al. 2015), gwrr (Wheeler 2013), McSpatial (McMillen 2015) and lctools (Kalogirou 2016). The five R packages considered as a whole provide the richest suite of GWR forms (e.g. conventional, robust, heteroskedastic, multiscale, space-time and more) and therefore development here is most appropriate. However, all suffer computationally, particularly given the strict memory limit for specific operating systems (R Core Team 2020). Workarounds to exceeding computational limits exist, such as coarse-scaling the observations, or the use of aggregations via upscaling (e.g. Yang et al. 2019), all prior to a GWR fit, but none are ideal given that important sources of information, fine-scale detail and variability are lost.
Existing high-performance solutions
Efforts to improve the computational efficiency of GWR exist. Firstly, through Harris et al. (2010), who implemented a grid-based (parallelization) approach to conventional GWR. More recently, Li et al. (2019a) developed a Python implementation (FastGWR) that optimizes the conventional GWR algorithm together with embedding multi-core parallel computing technology. This computational scheme has also been transplanted for use with multiscale GWR (Li and Fotheringham 2020). Wang et al. (2020) proposed a high-performance solution of GWR with the Compute Unified Device Architecture (CUDA), namely Fast-Parallel-GWR (FPGWR), which was developed with Microsoft Visual Studio 2015 and the CUDA development kit. Finally, a mathematical approach was taken by Murakami et al. (2020), who proposed Scalable GWR (ScaGWR), which saves on computational overheads via the pre-compressing of large matrices and vectors with polynomial kernels. For ScaGWR, the computational cost presents a linear relationship with the sample size, while a quadratic order appears for the usual un-adapted GWR form. The ScaGWR routine can be found in the R package scgwr (Murakami et al. 2019) and GWmodel.
ScaGWR provides approximate coefficient estimates in comparison with conventional GWR, where the results from ScaGWR might vary slightly when different parameters are specified - for example, the chosen degree or order of the polynomials (Murakami et al. 2020). Our study's approach in R In this sense, the computational bottleneck is still problematic for GWR (and its extensions) in the R environment, particularly for the geographically weighted functions in GWmodel. However, generic high-performance computing options have been incorporated in many packages since R release 2.14.0 (Eddelbuettel 2020), where grid computing, cloud computing, multi-core and Graphic Processing Unit (GPU) computing are commonly invoked. In this respect, this study investigates high-performance solutions for (conventional) GWR within the R package GWmodel, where our workflow consists of three hierarchies: 1) optimize the algorithm for a GWR calibration to address out-of-memory issues with "Big Data"; 2) adopt multi-thread parallel computing for a GWR calibration (GWR-MP), which enables analysis on a standard Personal Computer (PC) with a multi-core processor; 3) apply parallel computing on GPU devices via CUDA (GWR-CUDA). For performance evaluation, we compare the performances of the new solutions proposed (i.e. GWR-MP and GWR-CUDA) with existing solutions found in GWmodel, MGWR and FastGWR using varying sample sizes, where the latter two are outside of the R environment. We have not included FPGWR, where CUDA was also adopted, as: 1) the source code or tool is not available; and 2) key aspects of FPGWR are not clear, such as the distance calculation and the kernel function implementation, making FPGWR difficult to fully reproduce. This study is organized as follows. Firstly, we provide a description of conventional GWR methodology and the new high-performance techniques proposed. Secondly, competing high-performance solutions to GWR are objectively compared through a designed experiment. Thirdly, we summarize and suggest future research. Basics of GWR The conventional GWR model characterizes spatially varying relationships via location-specific regressions whose coefficients are estimated by (geographically) weighted least squares. The model can be expressed as (Brunsdon, Fotheringham, and Charlton 1996; Fotheringham, Brunsdon, and Charlton 2002):

y_i = \beta_0(u_i, v_i) + \sum_{k=1}^{l} \beta_k(u_i, v_i)\, x_{ik} + \varepsilon_i   (1)

where y_i is the dependent variable at location i on a two-dimensional space; x_ik is the value of the k-th independent variable at location i; l is the number of independent variables; \beta_0(u_i, v_i) is the intercept parameter at location i; \beta_k(u_i, v_i) is the local regression coefficient for the k-th independent variable at location i; (u_i, v_i) are the spatial coordinates of location i; and \varepsilon_i is the independent random error at location i. In line with Tobler's first law of geography (Tobler 1970), extended to consider situations in which nearby regression relationships are more similar than distant ones, GWR consists of a series of local regressions where observations are weighted (i.e. given decreasing influence) via a distance-decay kernel function (Lu et al. 2014a).
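To make the location-wise calibration concrete, the following minimal R sketch fits a single local regression with a Gaussian distance-decay kernel. It is only an illustration of the weighted least-squares fit behind Equation (1) and the estimator formalized in the next subsection; the function name gwr_point and its arguments are ours and are not part of the GWmodel API.

gwr_point <- function(X, y, coords, i, bw) {
  # Euclidean distances from location i to all n observations
  d <- sqrt(rowSums((coords - matrix(coords[i, ], nrow(coords), 2, byrow = TRUE))^2))
  w <- exp(-0.5 * (d / bw)^2)        # Gaussian distance-decay weights w_ij
  XtW <- t(X * w)                    # t(X) %*% diag(w) without forming an n x n matrix
  solve(XtW %*% X, XtW %*% y)        # local coefficient estimates at (u_i, v_i)
}

Repeating this call over all n locations yields the coefficient surfaces, with the bandwidth bw normally chosen by cross-validation or AICc as described below.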
The estimator of the coefficients at location i has the following matrix expression:

\hat{\beta}(u_i, v_i) = \left( X^{T} W(u_i, v_i)\, X \right)^{-1} X^{T} W(u_i, v_i)\, y   (2)

where X is the matrix of the independent variables with a column of 1s for the intercept; y is the vector of observations of the dependent variable; and W(u_i, v_i) is an n x n diagonal matrix denoting the geographical weights of each observation for calibrating the local regression at location i, and is defined as:

W(u_i, v_i) = \mathrm{diag}\left( w_{i1}, w_{i2}, \ldots, w_{in} \right)   (3)

where w_{ij} (j = 1, ..., n) is calculated via a kernel function decaying with respect to Euclidean distance, or some other distance metric (Lu et al. 2014a), between locations i and j, and n represents the number of observations. Gaussian, exponential, bi-square, box-car and tricube are among the many kernel functions that can be specified (Gollini et al. 2015), where an optimal kernel bandwidth is commonly found by leave-one-out cross-validation or by a corrected Akaike Information Criterion (AICc) procedure. The kernel bandwidth relays the chosen spatial scale of the regression relationships. Diagnostics for a GWR model's fit are essential, where R-squared, adjusted R-squared and AICc are commonly reported. These can be expressed as (Fotheringham, Brunsdon, and Charlton 2002):

R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}   (4)

R^2_{adj} = 1 - (1 - R^2)\,\frac{n - 1}{n - 2\,\mathrm{tr}(S) + \mathrm{tr}(S^{T} S) - 1}   (5)

\mathrm{AICc} = 2 n \ln(\hat{\sigma}) + n \ln(2\pi) + n\,\frac{n + \mathrm{tr}(S)}{n - 2 - \mathrm{tr}(S)}   (6)

where \hat{y}_i is the fitted value at location i; \bar{y} is the mean value of y; and \hat{\sigma} is the estimated standard deviation of the error term:

\hat{\sigma} = \sqrt{\frac{y^{T} (I - S)^{T} (I - S)\, y}{n - 2\,\mathrm{tr}(S) + \mathrm{tr}(S^{T} S)}}   (7)

where I is an n x n identity matrix and tr(S) and tr(S^T S) denote the traces of the hat matrix S and of S^T S. For GWR, each row S_i of the hat matrix can be found as follows:

S_i = X_i \left( X^{T} W(u_i, v_i)\, X \right)^{-1} X^{T} W(u_i, v_i)

where X_i is the i-th row of the matrix X of independent variables. Furthermore, t statistics at each individual regression point can be produced along with the coefficient estimates. For each location-specific calibration, the standard errors of the estimated regression coefficients are obtained from:

\mathrm{SE}\big(\hat{\beta}_k(u_i, v_i)\big) = \hat{\sigma}\,\sqrt{\big[ C_i C_i^{T} \big]_{kk}}, \quad \text{with } C_i = \left( X^{T} W(u_i, v_i)\, X \right)^{-1} X^{T} W(u_i, v_i)

The given calculations are commonly reported in most GWR software tools, where the algebraic matrix operations are programmed in a straightforward manner. However, their computational cost is expensive, particularly when dataset size (number of observations n) is large. The computational burden is primarily a consequence of: 1) complex matrix operations, particularly the n x n matrices involved, like the hat matrix S; and 2) a large number of matrix operations that are repeated in the location-wise calibrations, in kernel bandwidth optimization and when calculating the model fit diagnostics. Reducing memory cost for GWR A variety of GWR forms and extensions are present in GWmodel, making it the most comprehensive GWR R package (Comber et al. 2022). In early versions of GWmodel, all GWR functions were developed directly from the algebraic formulations in Section 2.1 above. This requires a number of n x n matrices to be calculated and stored, specifically for calculating diagnostic information and enabling statistical inference (Leung, Mei, and Zhang 2000). Note here that it is almost impossible to allocate as much as 2 GB to a single vector in a 32-bit or 64-bit build of R due to predefined allocations of address space on Windows (R Core Team 2020). Allocating memory for a 16,000 x 16,000 numeric matrix in R will normally be an upper limit. This means the maximum n for any of the conventional GWR functions in R is around 16,000. However, in practice, the maximum number of observations a conventional GWR tool can handle is likely to be much smaller (i.e. n ≪ 16,000).
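As a quick back-of-the-envelope check of this limit (our own arithmetic, not a figure from the paper), a single dense double-precision matrix of that size already approaches the 2 GB mark in R:

n <- 16000
n^2 * 8 / 1024^3   # ~1.9 GiB for one n x n numeric (double) matrix

Since a calibration with diagnostics touches several such n x n objects (S, S^T S and Q), the practical ceiling is well below n = 16,000.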
It is therefore necessary to first relieve these memory constraints when developing high-performance solutions for GWR, and to support GWR analyses of very large datasets. In this respect, Li et al. (2019a) optimized the calculations of AICc and localised standard errors by avoiding the storage of the entire hat matrix, which reduced the memory storage size from O(n^2) to O(nm). This strategy of avoiding any n x n matrix operation or storage is effective and makes it workable to deal with a large dataset on any basic PC. Therefore, as a potential approach for reducing memory costs, we re-formulated the algebraic operations of a GWR calibration as follows, which are essentially the same as the optimizations proposed by Li et al. (2019a). First observe that Equation (2) can be divided into the following two parts:

X^{T} W(u_i, v_i)\, X = \sum_{j=1}^{n} w_{ij}\, X_j^{T} X_j

X^{T} W(u_i, v_i)\, y = \sum_{j=1}^{n} w_{ij}\, X_j^{T} y_j

where X_j is the j-th row of the X matrix. In this sense, the point-wise estimator of GWR can be regarded as a cross manipulation between the inverse of an (m+1) x (m+1) matrix and an (m+1) x 1 vector. Accordingly, the weight matrix W(u_i, v_i) can be stored as a vector holding only its diagonal elements. For the diagnostics of GWR, more complicated computations are involved, particularly for the n x n matrices, including the hat matrix S, its square S^T S and the matrix Q = (I - S)^T (I - S). As shown in Equations (5-7), the traces of S and S^T S are needed, but they can be found in these two steps:

\mathrm{tr}(S) = \sum_{i=1}^{n} X_i\, C_i^{(i)}, \quad \text{with } C_i = \left( X^{T} W(u_i, v_i)\, X \right)^{-1} X^{T} W(u_i, v_i)

\mathrm{tr}(S^{T} S) = \sum_{i=1}^{n} S_i S_i^{T}, \quad \text{with } S_i = X_i C_i

where C_i^{(i)} means the i-th column of matrix C_i. Moreover, the matrix Q can also be expressed as follows:

Q = \sum_{i=1}^{n} (e_i - S_i)^{T} (e_i - S_i)

where e_i is the i-th row of the identity matrix I. Observe that the matrix Q is also required in many statistical tests for GWR (i.e. for spatial non-stationarity), such as the F-tests proposed by Leung, Mei, and Zhang (2000) and Fotheringham, Brunsdon, and Charlton (2002), which are similarly included in most GWR software tools (GWR3, GWR4, as well as GWmodel). In this sense, it is natural for these F-tests to benefit from the high-performance solutions proposed. According to the above equations, the storage of all n x n matrices required for a GWR calibration and associated diagnostics can be avoided, by storing only vectors of length n and matrices of size n x (m+1) in the location-wise computations. Thus, the memory cost of GWR can be similarly reduced to O(nm), essential for working with "Big Spatial Data" in R. In the current release of GWmodel, the GWR functions have already been optimized in this respect. Therefore, for this study, the next steps are an assessment of high-performance solutions embedded in parallel computing techniques. Parallelization solutions for GWR The two parallelization solutions adopted were: (a) multi-core Central Processing Unit (CPU) parallelism via multi-threading (GWR-MP) and (b) GPU acceleration via CUDA (GWR-CUDA), respectively. As illustrated in Figure 2, the procedure of GWR-MP was carried out in the following steps: (1) Create the coefficient matrix of size n x (m+1) and the vectors of n dimensions (S2) for recording the diagonal elements of S^T S, if diagnostic information is calculated (see Note 1); (2) Create c threads (see Note 2), and divide the n point-wise operations among them, i.e. n_t operations are conducted on the t-th thread, where \sum_{t=1}^{c} n_t = n; (3) On each thread, for every assigned location i, calibrate the local regression, calculate the i-th row of the hat matrix and assign it to S_i in memory, then renew the i-th element of S2 as S_i S_i^T; (4) Repeat Step 3 until all the location-wise operations are finished, when the coefficient estimates \hat{\beta} of size n x (m+1), tr(S), tr(S^T S) and \hat{\sigma}^2 are ready for the final output.
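A minimal R sketch of this memory-reduced scheme is given below: it accumulates tr(S) and tr(S^T S) row by row, so no n x n matrix is ever formed. It is a serial illustration of the per-location work that GWR-MP divides among threads; the names are ours and do not reproduce the exact GWmodel, FastGWR or GWR-MP implementations.

gwr_fit_lowmem <- function(X, y, coords, bw) {
  n <- nrow(X); m1 <- ncol(X)
  beta <- matrix(0, n, m1)
  trS <- 0; trStS <- 0; yhat <- numeric(n)
  for (i in seq_len(n)) {
    d   <- sqrt(rowSums((coords - matrix(coords[i, ], n, 2, byrow = TRUE))^2))
    w   <- exp(-0.5 * (d / bw)^2)         # Gaussian kernel weights for location i
    XtW <- t(X * w)                       # t(X) %*% W_i stored as an (m+1) x n matrix
    C_i <- solve(XtW %*% X, XtW)          # C_i = (X'W_iX)^{-1} X'W_i
    beta[i, ] <- C_i %*% y                # local coefficient estimates
    S_i <- as.numeric(X[i, ] %*% C_i)     # i-th row of the hat matrix, kept only transiently
    trS   <- trS + S_i[i]                 # accumulate tr(S)
    trStS <- trStS + sum(S_i^2)           # accumulate tr(S'S)
    yhat[i] <- sum(X[i, ] * beta[i, ])
  }
  list(beta = beta, trS = trS, trStS = trStS, residuals = y - yhat)
}

In GWR-MP the iterations of this loop are, in essence, what is divided among the c threads in the steps described above.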
By contrast, the parallelizing strategy for GWR-CUDA is designed specifically to fit GPU devices. As illustrated in Figure 3, the detailed procedure of GWR-CUDA includes the following steps: (1) Read the data matrices or vectors (i.e. X,Y and coordinates) from memory into GPU; (2) Divide the n point-wise operations into groups, and within each group, g or fewer point-wise calibrations are conducted in parallel, where g should meet the following condition: where b is the number of bytes needed for each element in the matrix or vector (commonly set as 8), M is the GPU memory size, α is the memory reserved for intermediate calculations. In practice, the number of variables k is far less than the number of observations, n (i.e. k � n), the number of g is more dependent on the term, bgkn. (3) Create arrays Δ g�k�n , Ω g�k�k and a matrix � g�k , and conduct the following location-wise calibrations (i = 1, . . ., g) in parallel within the current group: a. Calculate the weight vector w i ¼ w i1 ; � � � ; w in ð Þ from the corresponding distances of the observations from the location i; b. Calculate X T W i ¼ P n j¼1 w ij X T j and assign it to the i th k � n component of Δ; w ij X T j X j and assign it to the i th k � k component of Ω, It is important to note that both GWR-MP and GWR-CUDA procedures are designed to include the GWR model's diagnostic information calculations, with the calibration points the same as the observations and an assessment of model fit is required (which includes kernel bandwidth optimisation). Otherwise, the above procedures would be greatly simplified with only coefficient estimates returned. We implemented GWR-MP and GWR-CUDA in R, coded the parallel part via C++ and wrapped them via the Rcpp package (Eddelbuettel 2013). General information For this study, we compared the computational performances of GWmodel (version 2.1-4), FastGWR (updated on 12 August 2019), MGWR (version 2.1.1), GWR-MP and GWR-CUDA for implementations of conventional GWR only to ensure the same results. As shown in Table 1, we adopted two devices for running GWR where both devices had a GPU specification for running GWR-CUDA. In terms of experimental data, we produced a series of simulated datasets of size n ranging from 1000 to 10,000 with increments of 1000, from 10,000 to 100,000 with increments of 10,000 and from 100,000 to 1,000,000 with increments of 100,000. For MGWR, FastGWR and GWR-MP, we specified a different number of cores ranging from 2 to 48 to run them in parallel. We didn't test them with all the combinations, but selectively adopted 2, 3, 4, 5, 6, 7, 8, 12, 24, 36 and 48 cores for typical tests. Note that the number of physical cores on the experimental device is 24, but the number of logic cores could be up to 48 through the hyper-threading technology. Notably, the current GWR routine in GWmodel is a serial program, so that the setting of multicores will not work differently for it. As shown in Figure 3, we adopted a different computing strategy for GWR-CUDA, where all samples are divided into groups, and location-wise calibrations within each group are conducted in parallel on the GPU device. Thus, the number g is the parallel computation counts for executing GWR-CUDA, and we took g = 384 (or less if samples were insufficient for the final group) according to Equation (16). For each sample size n and GWR implementation, 10 experiments were conducted independently on the two devices, respectively. 
Moreover, samples sizes ranging from 100,000 to 1,000,000 are adopted only for the extreme performance tests of GWR-CUDA, not for tests on the other four GWR solutions, as relatively inefficient solutions result due to unacceptably long time frames. Performance indicators Two indicators were used to evaluate performance -the average time cost and the "speedup" of the parallel computations. For each sample size n and GWR implementation, the average time cost was calculated as follows: where T i n;GWR j represents the time cost of running the j th GWR implementation with a sample size n, m (in this case, taken as 10) is the number of individual experiments and � T n;GWR j refers to the average time cost. Note that in all cases, the time costs include both the (automated) kernel bandwidth optimization (by AICc) and the GWR model calibration. Speedup is an important indicator to evaluate the performance of parallel computations (Hill and Marty 2008). According to its original definition, we take a simple expression for its calculation, as. . where k is the speedup, T S is the time cost of serial computing and T M is the time cost of parallel computing with multi-cores. For this study, we repeated 100 independent experiments for each scenario, meaning speedup could be calculated using average time costs from m individual runs, i.e.: where � T S and � T M are the average time costs of serial and parallel computations, respectively. We can verify that the estimation of speedup is significantly valid and reliable, by assuming the time cost for each individual test is a random variable subject to a normal distribution. Results and discussion In Figure 4, we present the averaged time costs of GWmodel, FastGWR, MGWR, GWR-MP and GWR-CUDA with a different number of cores with samples of sizes ranging from 1000 to 100,000. As the GWR implementation in GWmodel is a serial program, the time cost will not be affected by increasing cores, but averaged time costs grow exponentially as sample size increases. This indicates that the basic GWR function in the latest release of GWmodel is not working efficiently with a large dataset, even though the function benefits from algorithmic optimization and code implementation with C++. It can handle a relatively large dataset, say greater than 100,000, but its running time will be incredibly long, particularly when bandwidth optimization is additionally conducted. MGWR is applicable for running on multi-cores, but it failed to calibrate the GWR models with sample sizes greater than 60,000. In the limited number of visible tests, it performed similarly to FastGWR, which is also developed via Python by the same research team (Oshan et al. 2019;Li et al. 2019a). Thus, results for MGWR can be represented by those for FastGWR and are not discussed further. Both FastGWR and GWR-MP are naturally designed for multi-core parallelism. From Figure 4, FastGWR and GWR-MP always outperform the serial routine in GWmodel, and these advantages grow as the number of cores increase and as sample size increases. From Figure 4, GWR-MP performs similarly to FastGWR in most cases, but where GWR-MP tends to outperform FastGWR for samples greater than 60,000. Relative to GWR-MP, the time costs of FastGWR become exponentially large when the number of cores exceeds 24 for sample sizes greater than 60,000, which means that all the physical cores will be employed and the logic cores will be used via the hyper-threading technology. 
On this condition, the performance and stability of each physical core could worsen due to frequent switches between two logic cores on each physical core. In addition, FastGWR was developed with the Message Passing Interface (MPI), a standard and portable message-passing system for parallel programming (Dalcín et al. 2008). The MPI was originally designed for distributed memory systems, then extended to shared memory parallel computing for effectively utilizing node-level architecture (i.e. stand-alone machine with multicores). Its communication efficiency could be more or less affected by the memory capacity pressure, particularly when all the cores are fully occupied (Brinskiy, Lubin, and Dinan 2015). That could be the main reason of the relatively weak performance of FastGWR when the number of cores exceeds 24. From Figure 4, GWR-CUDA consistently performs the best of all across all scenarios. For critically testing GWR-CUDA, we extend the size of samples up to one million with two different versions of GPU devices. In Figure 5, we present the average time costs of GWmodel and GWR-CUDA. The time consumption of GWmodel increases exponentially, particularly when the sample size is larger than 8000; in contrast, the time cost of GWR-CUDA grows much more slowly as sample size gets larger, but a dramatic increase occurs for sample sizes of around 800,000 or more. The two versions of GPU devices present different performances for running GWR-CUDA, dealing with samples of one million for around 3.5 h (12,766 s) on GPU-1, and around 5 h (17,828 s) on GPU-2. As one of the world's most advanced GPU, NVIDIA® Tesla® V100 (GPU-1) renders a great advantage over the GeForce RTX 2060 Mobile (GPU-2), a mobile graphics chip embedded in a laptop. Given that a laptop cannot run stably with full capacity for a long period, we only tested GWR-CUDA on GPU-2 with samples of sizes ranging from 1000 to 100,000, and 1,000,000 only. Moreover, results indicate that the physical parameters of the CPU and the GPU device will affect the performances of the chosen high-performance solutions. Equipment (laptop or PC) with high-end CPU or GPU devices will provide better performances, and where High- Performance Computing (HPC) infrastructure should provide a better scope for potential improvement in this aspect. The average time costs could be specific for the devices adopted, and almost impossible to be reproduced with a different device. From an objective assessment, we used the speedup indicator to evaluate how improvements benefitted from the parallel strategies implemented in FastGWR, MGWR, GWR-MP and GWR-CUDA. A larger speedup means the parallel solution for a specific GWR implementation makes a greater optimization in computational efficiency than running it serially. As shown in Figure 6, the performance of GWR-MP and FastGWR (and MGWR) improves as more cores are used for running them in parallel. Again GWR-MP demonstrates better usage of multi-core equipment for samples of sizes ranging from 3000 to 10,000, while performance does not show an improvement when the number of cores exceed 24, i.e. all the physical cores are fully occupied. For GWR-CUDA, its superiority in parallel performance is apparent when the sample size is greater than 10,000, but note that the speedup falls sharply to 40 when the sample size reaches 90,000. 
In Figure 5, we can see that the average time costs of GWR-CUDA also increase exponentially as the sample size becomes large, where a sharper increase occurs around a sample size of 80,000. Note in the inset figure of Figure 5, GWR-CUDA is not always the best performer in comparison with the serial solution in GWmodel. GWR-CUDA takes more time than the serial solution when the sample size is less than 3000. To implement GWR-CUDA, all the predefined data matrices or vectors (i.e. X,Y and coordinates) are transferred from memory into the GPU, and the results, including hat matrix S and coefficient estimates b β are transferred from GPU back to memorywidely known as I/O issue important for GPU performance (Fujii et al. 2013). In other words, the I/O cost is predominant when the sample size is less than 3000, and the computational advantage of GWR-CUDA starts to emerge when the size is getting larger than 3000. The I/O cost could be affected by the physical parameters of GPU, CPU and protocol type, so GWR-CUDA will perform differently with different devices. Thus, the critical value (i.e. 3000 in this study) could fluctuate marginally width different computational configurations. The results also reveal a fact that the highperformance solutions would be not be recommended for samples with relatively small sizes, say less than 5000, the most common data volume in the previous GWR applications. On the flip side, GWR applications with a relatively large data set (e.g. large than 20,000) were rarely found due to the lack of and universal access to high-performance tools. The findings in the context of rich scenarios are beneficial to both development and optimization of the high-performance solutions. Summary In this study, we have proposed two highperformance solutions for GWR via multi-core parallel and CUDA techniques: GWR-MP and GWR-CUDA, respectively. We objectively compared them with existing GWR implementations found in GWmodel, MGWR and another highperformance solution FastGWR. Results indicate that no solution was always the best in terms of computational efficiency, as summarized in Figure 7 by their relative speeds for four sample size intervals (less than 2000; greater than 2000, but less than 10,000; greater than 10,000 but less than 100,000; greater than 10,000). As (effectively) serial solutions, both GWmodel and MGWR provide adequate GWR implementations for (small) sample sizes < 10,000, as computational costs were considered acceptable. For multi-core parallel solutions, GWR-MP provided a commensurate solution with GWR-CUDA for dealing with (large) sample sizes between 10,000 to 100,000 on a computer of common multi-core configuration, where GWR-MP demonstrated more efficient computing capacity for each core or thread than FastGWR, whose design is more suited to non-shared memory clusters. For example, Li et al. (2019a) adopted FastGWR with a dataset of 1.28 million points on a 512-core computing cluster. However, highperformance computing clusters are usually too expensive and too few in number to be accessed by many researchers. Conspicuously, GWR-CUDA provided a relatively cheap but highly efficient solution for analyzing a very large dataset, of which the size could be much larger than 1,000,000, the upper number in this study. The study GPU (NVIDIA GeForce RTX 2060 (Mobile)) only cost around $350, but we found we could implement a GWR model (including bandwidth optimization and model calibration) with one million data points in around 5 h. 
A better configuration of the GPU, such as the NVIDIA Tesla V100, could reduce this time to around 3.5 h, but at a cost of around $9000. Note, however, that GWR-CUDA should only be preferred when the sample size is very large, in terms of balancing cost with speed (as clearly seen in Figure 7). Note also that Figure 7 only roughly shows the comparative performances of these solutions, which could vary to some extent when different devices are adopted. Both GWR-MP and GWR-CUDA were implemented in R with wrappers on the C++ code, which has been incorporated into the latest release of GWmodel (say, version GWmodel_2.2-8). Note that nowadays it is straightforward to execute R from Python, and vice versa; therefore, this is not a black-or-white choice between running these solutions in R or Python. Moreover, all the C++ code could be easily transferred to a standalone application, which we are currently working on. Inspired by Figure 7, an important feature of this is to adaptively set a computational strategy according to sample size and the computing environment, and this study provides direct support for such a strategic optimization. An ultimate solution could be an application developed under a service-oriented architecture with powerful computers or clusters, and the algorithms proposed here would provide fundamental support. Moreover, the solutions proposed here are directly applicable to extended GWR forms beyond the conventional GWR form, such as GTWR; and also directly applicable to other geographically weighted models (Lu et al. 2014b) outside of those for regression (e.g. GW PCA). Further, more pertinent issues, such as robust statistical inference in GWR with a massive data set (Griffith 2015), would also be worthy of investigation. Notes 1. Note that the diagnostic information cannot be calculated when an individual set of regression locations is adopted. 2. Theoretically, the number of threads c could be larger than the number of cores available, but we would suggest creating no more than the number of cores, for ensuring the performance of each thread. Disclosure statement No potential conflict of interest was reported by the author(s). Funding This research. Chris Brunsdon is a Professor of Geocomputation and Director of the National Centre for Geocomputation at Maynooth University, Ireland. His research interests include spatial statistics, data science and spatial analysis. Alexis Comber is a Professor of Spatial Data Analytics at the University of Leeds and Leeds Institute for Data Analytics (LIDA). His research activities cover all areas of spatial data: remote sensing, land cover/use, demographics, public health, agriculture, bio-energy and accessibility. Martin Charlton was an Associate Professor at the National Centre for Geocomputation at Maynooth University. He was one of the leading pioneers of quantitative geography and geocomputation whose work helped inspire the recent resurgence of spatial analysis and geographic data science. Paul Harris is a Professor of Spatial Statistics at Rothamsted Research. His research includes methodological development with applied studies in agriculture and encompasses all scales (from the plot and field, to the continent and global). Data availability statement The data that support the findings of this study are available with the identifier(s) at the link (https://figshare.com/s/13f325af1e37c3bc15fc).
7,754.6
2022-05-20T00:00:00.000
[ "Computer Science", "Geography", "Mathematics" ]
Statistical convergence of integral form of modified Szász–Mirakyan operators: an algorithm and an approach for possible applications In this study, we take into account the of modified Szász–Mirakyan–Kantorovich operators to obtain their rate of convergence using the modulus of continuity and for the functions in Lipschitz space. Then, we obtain the statistical convergence of this form. In addition, we determine the weighted statistical convergence and compare it with the statistical one for the same operator. Medical applications and traditional mathematics; one way to get a close approximation of the Riemann integrable functions is through the use of the Kantorovich modification of positive linear operators. The use of Kantorovich operators is tremendously helpful from a medical point of view. Their application is shown as an approximation of the rate of convergence in respect of modulus of continuity. Introduction 1.Problem context First introduced by Fast in [1], statistical convergence can be applied to either real or complex number sequences.This is strongly connected to the idea of the natural/ asymptotic density of subsets of positive integers N. In [2], Zygmund refers to this as "almost convergence" and identifies the association between statistical convergence and strong summability.This notation has been further investigated in the number of papers [3][4][5][6][7].In [7], D -lim κ r = L represents the statistical convergence and is called D-convergence.Salat [5] showed that if D -lim κ r = L holds, then the number L is unique.Conversely, if lim r→∞ κ r = L holds, the D -lim κ r = L holds too, since the set {r ≤ n : |κ r -L| ≥ } is finite in this case for all > 0. As stated in [8], this approach has significant applications in the theory of approximating polynomials, functional analysis, numerical solutions to differential equations, integral equations, and others.We used the work to study the following: (1) apply statistical convergence on Kantorovich form of Szász-Mirakyan operators, (2) statistical convergence for approximation theorem of the Korovkin type, (3) the convergence of the same operators in weighted statistical sense. Limitations of existing literature and algorithms Statistical convergence was introduced as a solution to problems arising from series summation.This concept has been widely applied in diverse branches of mathematics, with a particular focus on estimating the characteristics of linear positive operators.The limitations of the existing literature and algorithms are the following: First, classical convergence and statistical convergence are not compared for the same operators in the existing literature.Second, the main drawback of statistical convergence is that it does not ensure convergence of the sequence, while the converse is true.In relation to issues of series summation, statistical convergence was introduced to the theory of approximation to be applied in several areas of pure and applied mathematics to estimate the properties of linear positive operators, and the present study refines this motive for better application in the different areas of mathematics where the rate of convergence needs to be significant.The replacement of uniform convergence with statistical convergence has the benefit of modeling and enhancing the signal approximation technique in different function spaces. 
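To make the natural-density idea above concrete, here is a small, self-contained R illustration (our own example, not taken from the cited works). The sequence below equals 1 everywhere except on the perfect squares, where it grows without bound; it is not convergent in the ordinary sense, yet the density of the "bad" indices vanishes, so its statistical (D-) limit is 1.

n   <- 100000
kap <- rep(1, n)
sq  <- (1:floor(sqrt(n)))^2          # a set of indices with natural density zero
kap[sq] <- sq                        # large outliers placed only on that sparse set
eps <- 0.5
sapply(c(100, 1000, 10000, 100000),
       function(N) sum(abs(kap[1:N] - 1) >= eps) / N)
# the proportions decrease towards 0, so D-lim kap_r = 1 even though lim kap_r does not exist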
Using summability and sequence to function methods, in various ways, the Cauchy Convergence classical notion has been generalized.In 1932, the first generalization was initiated by Banach, which was later studied in detail by Lorentz (1948).Many authors have used statistical convergence for the operators, but the convergence for the same operators is missing.Unless the two convergences are compared, the purpose of using statistical convergence is of no use.Main limitation of statistical convergence is that any convergent sequence is statistically convergent, but the statistically convergent sequence is not always convergent.Nevertheless, neither the limits nor the statistical limits can be computed or quantified with exact precision in the general case.Many mathematical approaches have been developed in order to model this imprecision using mathematical structures and to account for its imprecision.As is well-known, real-world sequences are not convergent in a strict mathematical sense.The algorithm used here is to calculate the auxiliary results for the test functions using the Korovkin theorem, which proves it as a linear positive operator, then to find the rate of convergence in the classical, statistical sense and also in weighted space. Motivation and objectives of this paper Different approaches have been applied by various methods to reduce rate of convergence.Therefore, this study uses modulus of continuity to get an efficient rate of convergence and compares classical, statistical and weighted statistical convergence.Most of the problem arises when we do not have a proper algorithm / methodology to reach the required result.The present work provides this for statistical convergence, where the behavior of the elements become irrelevant. Contribution of this study The following are the primary contributions of this study: (i) In this study, the rate of convergence of the operators is identified both in the classical sense and from a statistical perspective. (ii) The idea that the majority, or something close to it, of a sequence's elements, will converge via statistical convergence; this means that the behavior of the sequence's other elements becomes less significant to the analysis.It was understood at the time that sequences originating from sources in the real world do not converge mathematically, and the work proposes approximation results of weighted statistical convergence. (iii) We deployed different techniques to find the rate of convergence in classical, statistical and weighted statistical sense and hence thereby giving a comparative look of all three convergence. (iv) Providing a proper methodology and algorithm gives a way out to reach the result using the modulus of continuity tool. Paper organization The remaining components of this investigation are laid out in the following manner: Sect. 
2 examines the related articles that already exist in the literature relevant to our research.Section 3 focuses on preliminaries required for understanding the results obtained in the paper.Section 4 concentrates on determining the rate of convergence used in classical sense and for functions in Lipschitz class.Section 5 presents the proposed methodology to reach the required result and the specified smart algorithm used in the paper to reach at the desired result.Section 6 discusses some previous applications of Kantorovich form of modified Szász-Mirakyan operators to show the possible cases where our result may be applied.Section 6.1 explores application from previous publications in the area of convergence in sustainability, whereas Sect.6.2 discusses applications in the area of medical diagnosis.Section 7 focuses the findings of this study and a comparative analysis with the existing work.Section 8 concludes the research with research scope. Literature review In the field of approximation theory, Szász-Mirakyan operators have been used as a fundamental tool for the approximation of functions.In particular, the integral form of modified Szász-Mirakyan operators has been analyzed in several studies. Duman et al. [9] proposed a new approach to investigate the statistical convergence of these operators.The authors derived the conditions to prove the modified Szász-Mirakyan operators integral form convergence in a probabilistic sense.They also presented an algorithm for computing the convergence in statistical sense for these operators.Kizmaz and Karagoz [10] considered the statistical convergence of the integral form of modified Szász-Mirakyan operators for the continuous functions.The authors proved that these operators converge to the functions uniformly on compact subsets of the interval [0, 1] and established the order of the convergence. Altin and Karacik [11] have introduced the statistical convergence of modified Szász-Mirakyan operators with respect to a new sequence of weights.They obtained some approximation results for the operators.They also proved that the sequence of modified Szász-Mirakyan operators is statistically convergent with respect to the new sequence of weights.Pehlivan and Duman [12] introduced a new type of Szász-Mirakyan operators, namely the exponential form of the same.They studied the convergence properties in statistical sense for these operators and proved that the sequence of exponential Szász-Mirakyan operators converges statistically to the function f for the functions continuous on [0, 1]. Several mathematicians extended Korovkin-type approximation theorems by incorporating Banach spaces, Banach algebras, function spaces and abstract Banach lattices, as well as utilising other test functions as in [13][14][15][16][17]. Statistical convergence plays a very important part in approximation theory out of all the approaches available to determine the rate of convergence of different linear positive operators.Statistical convergence is a conventional method for achieving sequence summability. 
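Before recalling the formal definitions below, a rough numerical aside may help to fix ideas. The sketch uses the classical Szász-Mirakyan-Kantorovich construction rather than the modified operator D_i^[ν] studied in this paper (whose explicit form is not reproduced here), so it only illustrates the general Korovkin-type behaviour on a test function.

szasz_kantorovich <- function(f, x, n, kmax = 2000) {
  k <- 0:kmax
  w <- dpois(k, lambda = n * x)        # Poisson weights e^{-nx} (nx)^k / k!, computed stably
  int_f <- f((k + 0.5) / n) / n        # midpoint approximation of the integral of f over [k/n, (k+1)/n]
  sum(n * w * int_f)                   # n * sum_k weight_k * integral_k
}
sapply(c(10, 50, 250), function(n) szasz_kantorovich(function(t) t^2, x = 1, n = n))
# the values approach f(1) = 1 as n grows, as a Korovkin-type result predicts for the test function t^2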
We recall the definition of statistical convergence: a sequence κ = (κ_r) is statistically convergent to a real number L if, for each ε > 0,

\lim_{n \to \infty} \frac{1}{n} \left| \{ r \le n : |\kappa_r - L| \ge \varepsilon \} \right| = 0.

Then, in this context, we write st-lim κ_r = L. Gadjiev and Orhan were the pioneers in analyzing convergence in statistical sense in approximation theory using Korovkin's approximation theory. Their study focused on approximating a function, specifically addressing the problem of approximating a function z using a sequence of positive linear operators (B_i(z; κ)) [18, 19]. They stated Korovkin's approximation Theorem 2.1 using convergence in statistical sense of a sequence of positive linear operators, with the following notation: consider C_M[c, d] as the space of all functions z continuous on the interval [c, d] and bounded on the entire line, i.e. |z(κ)| ≤ M_z < ∞ for all κ, where M_z is a constant for every z. Let (B_i) be a sequence of positive linear operators acting on C_M[c, d]. In addition, Karakaya and Chishti [25] proposed the idea of convergence in weighted statistical sense, the definition of which was later refined by Mursaleen et al. [26]. This research investigates the convergence in statistical sense of the modified Szász-Mirakyan operators in the Kantorovich form [27]. Furthermore, we analyze the weighted convergence in statistical sense and the rate of convergence of these operators in the Lipschitz space. As defined in [28], let the function z be defined on the interval [0, ∞); then S_i, the Szász-Mirakyan operator applied to z, is

S_i(z; \kappa) = e^{-i\kappa} \sum_{k=0}^{\infty} \frac{(i\kappa)^k}{k!}\, z\!\left(\frac{k}{i}\right).   (2.6)

In 1977, Jain and Pethe [29] generalized (2.6) to the operator referred to below as (2.7), where z is any function of exponential type, i.e. |z(t)| ≤ K e^{At} for some finite constants K, A > 0. Here ν = (ν_i)_{i∈N} is a suitably chosen sequence. Therefore, for any bounded and integrable function z defined on [0, ∞), Dhamija et al. [27] modified the operator (2.7) into the Kantorovich form, denoted D_i^[ν] and referred to below as (2.8). The objective of this paper is to study the statistical approximation properties and rate of convergence of the modified Kantorovich operator (2.8). In this paper, we have used the following methodology: 1. First, take all the functions from C_M[c, d], where C_M[c, d] is the space of all continuous functions z on the interval [c, d]. 3. Calculate the auxiliary results for the test functions z(κ) = 1, κ, κ^2 using the Korovkin theorem, which proves that it is a linear positive operator. 4. Determine the rate of convergence through the modulus of continuity, also in Lipschitz space. 5. Also, find statistical limits to determine convergence in statistical sense via D_i^[ν]. 6. Lastly, estimate the results in classical and weighted space. The following algorithm provides the method for selecting the function from C_M[c, d] according to our requirements. Algorithm Input the operator D_i^[ν] ⇓ Collect the auxiliary results for the test functions z(κ) = 1, κ, κ^2 ⇓ Calculate the statistical limit using these auxiliary results ⇓ Compute convergence in statistical sense in classical and weighted space using these statistical limits and estimate the results. If the sequence of positive linear operators D_i^[ν] is defined as above, where C_M[0, μ] represents the space of all real bounded functions z continuous in [0, ∞). Rate of convergence Let C_M[0, ∞) be the space of all continuous and bounded functions on [0, ∞) and κ ≥ 0; then the modulus of continuity of z is defined as

\omega(z, \delta) = \sup_{u, \kappa \in [0, \infty),\; |u - \kappa| \le \delta} |z(u) - z(\kappa)|,   (4.1)

where δ > 0.
We can see from (4.1), for z ∈ C M [0, ∞), For any δ > 0 and for each u, κ ≥ 0, we have i is defined by (2.8), then we have where Notice by Theorem 3.3, we can say that st -lim i ω(z, √ δ i ) = 0.This provides us the pointwise rate of convergence in statistical sense, of the operators D [ν] i (z; κ) to z(κ).Now we analyze the rate of convergence of the operator D [ν] i using functions of the Lipschitz class Lip M (β), where M > 0 and 0 (4.7) Theorem 4.2 Let D [ν] i be defined as in (2.8) and let z ∈ Lip M (β) with β ∈ (0, 1], then where δ i is defined as above. So, Theorem 4.1 and Theorem 4.2 give us the rate of convergence of operators D [ν] i to z. Statistical convergence-weighted This section focuses on studying the properties of the weighted approximation of D [ν] i using the weighted Korovkin-type theorem proposed by Gadjiev in [19].The aim is to obtain approximation properties on infinite intervals.In the context of this study, the following notations are used for ρ(κ) = 1 + κ 2 . Let B ρ denote the set of all functions z defined on [0, ∞) that satisfy the condition |z(κ)| ≤ M z ρ(κ), where M z is a constant associated with each z.Consider C ρ as the subspace of continuous functions in the space B ρ .Additionally, let C * ρ be the subspace of functions z ∈ C ρ for which the finite limit of lim κ→∞ z(κ) ρ(κ) exists.The space C * ρ can be regarded as a linear normed space with the norm defined as: In this section, the norm defined in (5.1) is used.Now, for convergence in weighted statistical sense, we recollect Gadjiev's stated theorem in [19] as follows: where M ρ is a constant that depends only on ρ. So, for the operators D [ν] i defined by (2.8), we obtain the main result. Theorem 5.3 Let D [ν] i be the sequence of positive linear operators defined by (2.8), then for all z ∈ C 0 ρ , we have Proof To show this, we can prove the condition of Theorem 5.1.First, we need to show that D [ν] i : so by using Lemma 3.1 So, there exists a positive constant M such that M < 1.Hence, D [ν] i (ρ; .)≤ M. Hence, by Theorem 5.1, our proof completes. Applications of modified Kantorovich operators In this section, we have reviewed various applications in previously published papers on the applications of Kantorovich form of modified Szász-Mirakyan operators to demonstrate the significance and potential uses of the operators that have been developed and used in this research work.Our results can potentially be used for applications in the areas with analogous trends, which are discussed below. Applications in the area of convergence in sustainability In the cited work, [31] Turturean et al. 
explain the convergence in the long-term viability of the economies of the EU's constituent nations.The sustainability and economic policy factors have been examined in terms of both beta and sigma convergence.In order to estimate the beta equation, conditional beta convergence takes into account both absolute convergence and the factors that influence economic growth.Baumol established a methodology for the analyzing beta convergence in 1986 [32], and Sala-i-Martin proposed the idea of sigma convergence for the very first time in his PhD dissertation in 1990 [33].This was also the very first time that it was employed.The phrase "sigma convergence" refers to the gradual shrinking of the difference between the mean of a set of countries or regions and their means over time.The phrase "sigma convergence" refers to the gradual reduction over time of the gap between the mean of a collection of countries or regions and their means.However, as discussed in [34], it is not possible to calculate or measure limits or statistical limits with absolute precision.Various mathematical methodologies, like fuzzy set theory, fuzzy logic, interval analysis, set-valued analysis, etc., have been created to reflect and describe this imprecision.Among these methods is the neoclassical analysis.Fuzzy concepts, such as fuzzy limits, fuzzy continuity, and fuzzy derivatives, are applied to study various ordinary analysis structures, including functions, sequences, series, and operators.In neoclassical analysis, for instance, the set of fuzzy continuous functions encompasses the set of continuous functions studied in classical analysis.The techniques of traditional calculus are extended by neoclassical analysis to account for uncertainties that exist in computations and observations. Applications in the area of medical diagnosis Costarelli and Vinti [35] utilized sampling Kantorovich operators in enhancing the diagnosis of certain vascular apparatus disorders in the medical field.A concrete example is provided by processing a section of a CT (computerized tomography) image representing theaorta artery.The family of bivariate sampling Kantorovich operators permits picture reconstruction and enhancement.A precise diagnosis of vascular apparatus pathology can be made using augmented biomedical imaging.The region of interest is the vessel's lumen, which is essential from a medical standpoint since it helps doctors identify thrombotic zones(areas of blood clot)from the vessel's lumen and correct diagnoses of diseases.The aforementioned problem could be solved by using a contrast medium, which is used to improve images of the inside of the body in CT investigations; however, we can make a more accurate diagnosis from the original CT pictures taken without contrast media, as contrast medium is too invasive to utilize.The primary objective of image processing is to highlight the lumen in the vessel (Fig. 2), which is delineated by the red square (240240 pixels) on the CT image (Fig. 1). Figure 3 depicts the augmented image produced by operators.The increase of the final image relative to the original image indicates that the final image has been built with twice the resolution (960960 pixels).Figure 3 is generally more detailed than the image in Fig. 2. The image reconstructed using Kantorovich algorithm depicts the lumen of the blood artery more accurately.Enhancing images with sampling Kantorovich operators is extremely beneficial from a medical standpoint, enabling doctors to make more accurate diagnoses. 
Findings and implications The convergence in statistical sense of the Kantorovich form of the modified Szász-Mirakyan operators has been obtained in classical and weighted space. The finding is important in medical applications and in traditional mathematics; one way to get a close approximation of Riemann integrable functions is through the use of the Kantorovich modification of positive linear operators. According to the discussion of Sect. 6, the use of Kantorovich operators is tremendously helpful from a medical standpoint; in this work, the rate of convergence has been approximated through the modulus of continuity. The results obtained in this study can be compared with other approximation results that were obtained using different tools for the same operators. Conclusion We examined the convergence in statistical sense of the modified Szász-Mirakyan operators in the Kantorovich form. The rate of convergence of the operators is determined for those functions that are continuous and bounded on the interval [0, ∞) as well as those that belong to the Lipschitz class. Functional analysis is significantly aided by the contributions made by the theory and practice of summability. Therefore, it would be worthwhile to investigate various statistical convergence and summability approaches for the operators. Additionally, we also consider the topic of convergence in statistical sense of the operators in a weighted space. The potential future applications of this study might include looking at the rate of convergence in statistical sense of a Bézier variation of the operators that have been established, as well as their respective blended forms in weighted space, with the assistance of suitable functions. Theorem 5.1 Let (B_i) be a sequence of positive linear operators from C_ρ to C_ρ (or B_ρ) which satisfies the conditions lim_{i→∞} ||B_i(e_r; ·) − e_r||_ρ = 0, r = 0, 1, 2; then lim_{i→∞} ||B_i(z; ·) − z||_ρ = 0 for any function z ∈ C^0_ρ, where e_r = κ^r. Lemma 5.2 ([19]) For a sequence of positive linear operators, we have B_i : C_ρ → B_ρ if and only if ||B_i(ρ; ·)||_ρ ≤ M for some positive constant M. Figure 1 Depicting a CT scan of the abdomen without contrast medium, with a red square highlighting the aorta artery. Figure 3 Showing an enhanced version of Fig. 2; Kantorovich operators are used to enhance Fig. 2 with more clarity and detail.
4,627
2024-05-22T00:00:00.000
[ "Mathematics", "Medicine" ]
PRINCIPAL COMPONENT ANALYSIS IN DETERMINING THE BEST SETUP FOR ACOUSTIC MEASUREMENTS IN SOUND CONTROL ROOMS Original scientific paper Measuring process of acoustic quality parameters in sound control room in order to determine the best setup is described. Measurements of six sound control rooms impulse response have been made. The measurements are executed in accordance with the standard ISO3382. In all sound control rooms the same measurement method is used, but the measurement setups are changed. In the first scenario built-in monitor loudspeakers were used. In the second scenario, omnidirectional sound source was used. Omnidirectional measuring microphone and an artificial head were used as receivers. They were placed at the optimal listening position. Principal components analysis method is used to get the most accurate result from measured data obtained under different scenarios and measuring setups. Hence, the measuring conditions and setups which determine the value of subjective assessments of the sound control room are obtained. The results shall be used to calculate correlation between objective measurements and subjective assessments. Introduction Impulse response measurement method is one of the basic measurement methods in the field of objective acoustical study of the room [1].Since a linear part of transfer function between two points in the room is considered, it is assumed that principles valid for linear systems are automatically applied to acoustic room measurements.Analysis of the energy in the room is usually performed at a constant percentage bandwidth, generally an octave or one-third of octave.The problem with impulse measurement of the room is to achieve an adequate signal to noise ratio, which results in praxis with application of several ways of acoustical impulse measurements of the room. Methods for measuring the objective parameters of acoustical quality of the room Objective measured data are obtained by measuring the impulse response of the room, whereas measurements are executed with the use of a personal computer, i.e. the software package Easera [2,3] which is in accordance with the standard of ISO3382.The frequency range is from 63 Hz to 8 kHz, with standard octave bands.The excitation signal with sweep frequency and Maximum Length Sequence Signal -MLS is used.The rooms are excited in two scenarios.In the first scenario the installed equipment and built-in monitor loudspeakers are used, separately the left monitor speaker and separately the right monitor speaker.In the second scenario omnidirectional sound source is used, which is placed in front of the monitor loudspeakers and in front of the optimal positions provided for listening, Fig. 
1.The signal from the personal computer (which is the signal source) is directly connected to the output stage of the installed electroacoustic equipment.In such a way all devices for signal processing are bypassed and their eventual impact on the signal is eliminated.Although the final analysis is done in the frequency range determined with central frequency of octave bands from 63 Hz to 8 kHz, sweep tone is generated in the frequency range 0 Hz to 24 kHz.The omnidirectional sound source is selected as a reference sound source that is used in all areas.Thus, on the one hand, the influence of different sound sources on the measurement results of the acoustic parameters when comparing the results of measurements in different areas is avoided [4].On the other hand, the use of reference omnidirectional sound source gives us the illustration of the impact of sound sources that are used in daily work in those sound control rooms on the measurement results [5,6]. The measurements were made in two ways -onechannel and binaural measurements, i.e. with one measuring microphone and with an artificial head, which were set on the position and at height in the room corresponding to the usual position for listening, i.e. the optimal listening position.As it is desired to reduce the possible measurement error to a minimum, five measurement cycles are executed for each measured parameter, and the final result is the mean value of such five values for each measured parameter.Prior to start of the five measurement cycles one pre-testing measurement is executed to check whether equipment setup is properly prepared and ready to start with the measurement process.The duration of the test signal is 5,5 s, and the sampling frequency is 48 kHz.Thus, for each measured parameter 262 144 signal samples are obtained. The microphone is placed at the height of 1,40 m from the floor, at the exact position of the "sound engineer".Artificial head was always in the same position relative to the sound source and the microphone is placed on the medial plane, at the height of the artificial ears [7]. The known problem in measuring of acoustic parameters is repeatability.Due to measurement errors and measurement uncertainties it is sometimes difficult to get exactly the same results even with exactly the same measuring conditions.The exact repeatability is even more difficult to obtain using different types of signals and measuring conditions.Therefore, in this research measurement conditions and the measurement signals which will give the most accurate result are determined.Thus, good measurements repeatability is also obtained.This has been achieved with here shown and described measurement conditions and the applied excitation signal.Finally, the results were analysed using PCA method. Therefore, the same measurement conditions in all rooms for all parameters are selected, as described above.The values of the following objective parameters of room acoustic quality are calculated in accordance with ISO3382 standard [8÷11]: Listed professional sound control rooms have a minimum floor area of 25,9 m 2 to a maximum of 46,20 m 2 , with a corresponding volume of at least 75,11 m 3 to a maximum of 157,08 m 3 , which is consistent with standardized sizes of the average professional sound control rooms.All rooms are appropriately acoustically treated, i.e. 
insulated walls to protect from outside noise, separated with a window from the studio area and are equipped with special doors that meet the needs of acoustic insulation from outside noise.In each room a mixing console is placed on the best listening position, and sound is radiated through professional loudspeaker systems [12]. Analysis of the measured objective parameters of acoustic quality of sound control room includes analysis of the results across measured frequency band.It includes the range of octave bands with central frequencies from 125 Hz to 8 kHz.With the statistical method Principal Component Analysis (PCA) measurement setup conditions are determined, for which the most accurate measurement results are obtained. PCA is a statistical method, described in detail in [13], [14] which combines a large number of variables (results) to new, virtual variables called principal components.Those variables incorporate all existing and actually measured values, but their number is far lower than the actual.The method of calculating the principal components includes getting the data, subtracting the mean, calculating the covariance matrix, calculating the eigenvectors and eigenvalues of the covariance matrix, choosing components and forming a feature vector, and finaly, deriving the new data set [13].In this analysis, the number of principal components is limited to two (Principal Component One -PC1 and Principal Component Two -PC2) and it is found which actual measured parameters and measurement conditions have the greatest effect on the two selected (first two) components.Selecting those two principal components only the biggest impact of actual results is analysed.Also, a further increase in the number of observed principal components would increase the complexity of the analysis and at the same time, their influence on the results is not significant. The above principle enables to determine conditions necessary to measure the acoustic properties of the room in the best way.It is assumed that every measured room meets the requirements required for analysis, and they are: all variables were measured under the same conditions, the amplitude of excitation area is such that there is a linear relationship between the variables, the number of samples is large enough to be assumed they fairly represent the corresponding measured value, all data are suitable for analysis, and no inappropriate deviations occurs in results.Mentioned inappropriate deviations in measurement results are also monitored during the measurements. Objective parameters are measured in each room nine times with nine different conditions.For the purpose of Prior to making the PCA analysis, a statistical analysis of the results was made for each measured parameter of the sound control rooms, which includes: As this is an extremely large amount of data, this paper presents only the results of PCA analysis, which are the goal of this research. Multichannel music control room R1 Principal component analysis of the reverberation time measured values shows that the greatest impact on the first two principal components PC1 and PC2 for the parameter EDT have measurements M3 and M9; for the parameter RT 10 have measurements M11 and M10; for the parameter RT 20 have measurements M21 and M19, while for the parameter RT 30 have measurements M36 and M32. 
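Before continuing with the room-by-room results, the principal-component step described above can be sketched in a few lines of R. The matrix below is purely illustrative (random values standing in for one acoustic parameter measured over nine setups M1-M9 and seven octave bands), and prcomp is used here in place of the actual software employed in the study.

set.seed(1)
bands <- c("125 Hz", "250 Hz", "500 Hz", "1 kHz", "2 kHz", "4 kHz", "8 kHz")
X <- matrix(rnorm(9 * 7, mean = 0.35, sd = 0.05), nrow = 9,
            dimnames = list(paste0("M", 1:9), bands))   # rows = measurement setups
pca <- prcomp(X, center = TRUE)       # subtract the mean, then decompose the covariance structure
summary(pca)$importance[, 1:2]        # share of variance carried by PC1 and PC2
round(abs(pca$x[, 1:2]), 3)           # scores: the setups with the largest absolute
                                      # scores have the greatest impact on PC1 and PC2

Identifying, for each acoustic parameter, which measurement setups dominate PC1 and PC2 is the reading applied to the results reported next.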
Objective parameters are measured in each room nine times, under nine different conditions. For the purpose of the mathematical analysis each condition of measurement is indicated by a special label, as follows: Prior to the PCA analysis, a statistical analysis of the results was made for each measured parameter of the sound control rooms, which includes: As this is an extremely large amount of data, this paper presents only the results of the PCA analysis, which are the goal of this research.

Multichannel music control room R1

Principal component analysis of the measured reverberation time values shows that the greatest impact on the first two principal components PC1 and PC2 comes, for the parameter EDT, from measurements M3 and M9; for RT10 from measurements M11 and M10; for RT20 from measurements M21 and M19; and for RT30 from measurements M36 and M32. Since in most cases the biggest impact on the first two principal components comes from measurements performed with the omnidirectional sound source, it can be concluded that reverberation time measurements made with the omnidirectional sound source give results that best correspond to the actual values.

For the clarity parameters C, in all cases the greatest impact on the first two principal components comes from measurements obtained with the monitor loudspeakers excited with the sweep signal, so it can be concluded that these measurement conditions give results that best correspond to the actual values. Additionally, during the calculation of the principal components related to clarity C, no significant impact of the measurements with the omnidirectional sound source is noticed.

Principal component analysis of the Definition D values indicates that the greatest impact on the first two principal components PC1 and PC2 comes from measurements M4 and M5.

Principal component analysis of the interaural cross-correlation coefficient IACC values shows that the first two principal components PC1 and PC2 are mainly defined, for the parameter IACC Early, by measurements M3 and M1; for IACC Late by measurements M13 and M9; and for the overall coefficient IACC Full by measurements M19 and M17. Clearly, measurements carried out with the omnidirectional sound source have a significant impact on the determination of the first two principal components for the IACC coefficient.

Multichannel music control room R4

Principal component analysis of the reverberation time values shows that the greatest impact on the first two principal components PC1 and PC2 comes, for the parameter EDT, from measurements M8 and M9; for RT10 from measurements M16 and M17; for RT20 from measurements M21 and M23; and for RT30 from measurements M31 and M35. Since in most cases the biggest impact on the first two principal components comes from measurements performed using the built-in speakers, it can be concluded that reverberation time measurements made with the built-in speakers excited with the sweep tone give results that best match the actual values.

Principal component analysis of the Clarity C values in the multichannel music control room R4 indicates that the greatest impact on the first two principal components PC1 and PC2 comes, for the parameter C7, from measurements M9 and M7; for C50 from measurements M15 and M13; for C80 from measurements M24 and M25; and for C35 from measurements M34 and M35. Since in all cases the greatest impact on the first two principal components comes from measurements made with the monitor loudspeakers excited with the sweep signal, it can be concluded that these measurement conditions produce results that most closely match the actual values. Additionally, during the calculation of the principal components related to clarity C, no significant impact of measurements done with the omnidirectional sound source is noticed.

Principal component analysis of the Definition D values indicates that the greatest impact on the first two principal components PC1 and PC2 comes from measurements M4 and M6.

Principal component analysis of the interaural cross-correlation coefficient IACC values shows that the first two principal components PC1 and PC2 are mainly defined, for the parameter IACC Early, by measurements M8 and M7; for IACC Late by measurements M14 and M13; and for the overall coefficient IACC Full by measurements M26 and M25.
Clearly, measurements done with the built-in monitor loudspeakers have a dominant impact on the calculation of the first two principal components for the IACC coefficient in sound control room R4.

Speech and music control room R5

Principal component analysis of the reverberation time values shows that the greatest impact on the first two principal components PC1 and PC2 comes, for the parameter EDT, from measurements M1 and M6; for RT10 from measurements M11 and M18; for RT20 from measurements M26 and M27; and for RT30 from measurements M35 and M33. It is important that in sound control room R5 a measurement executed with the omnidirectional sound source has the greatest impact on the first principal component PC1 for the parameters EDT and RT10, while measurements executed with the built-in control monitors primarily affect the parameters RT20 and RT30. As the second principal component PC2 is determined by measurements done with the monitor loudspeakers in all cases, it can be stated that the results of the reverberation time measurements are most accurate when the monitor loudspeakers are used.

Principal component analysis of the Clarity C values in the speech and music control room R5 indicates that the greatest impact on the first two principal components PC1 and PC2 comes, for the parameter C7, from measurements M5 and M6; for C50 from measurements M15 and M14; for C80 from measurements M23 and M22; and for C35 from measurements M33 and M32. Principal component analysis of the clarity C results in sound control room R5 shows in all cases that the biggest impact on the first two principal components PC1 and PC2 occurred when the room was excited with the right-hand control monitor. Therefore, it can be concluded for room R5 that the best results are obtained with measurements performed with a control monitor loudspeaker, specifically, in this case, the right-hand one.

Principal component analysis of the Definition D values indicates that the greatest impact on the first two principal components PC1 and PC2 comes from measurements M6 and M5.

Principal component analysis of the interaural cross-correlation coefficient IACC values shows that the first two principal components PC1 and PC2 are mainly defined, for the parameter IACC Early, by measurements M5 and M6; for IACC Late by measurements M10 and M15; and for the overall coefficient IACC Full by measurements M23 and M24. Clearly, measurements carried out with a control monitor, especially the right-hand one, have the most significant effect on the calculation of the first two principal components for the IACC coefficient. Only in one case do measurements with the omnidirectional sound source have a significant impact on the principal components PC1 and PC2.

Control room for audio editing R13

Principal component analysis of the reverberation time values shows that the greatest impact on the first two principal components PC1 and PC2 comes, for the parameter EDT, from measurements M7 and M9; for RT10 from measurements M11 and M10; for RT20 from measurements M24 and M22; and for RT30 from measurements M31 and M30.
In this case the impact on the first two principal components PC1 and PC2 comes both from measurements carried out with the omnidirectional sound source and from measurements with the monitor loudspeakers, depending on the measured dynamic range. It is interesting that the biggest impact of the measurements with the omnidirectional sound source is found for the parameter RT10, where the dynamics is the smallest. Although there is some impact from excitation with the MLS signal, in most cases excitation with the sweep tone is more significant.

Principal component analysis of the Clarity C values in the control room for audio editing R13 indicates that the greatest impact on the first two principal components PC1 and PC2 comes, for the parameter C7, from measurements M7 and M8; for C50 from measurements M15 and M13; for C80 from measurements M25 and M26; and for C35 from measurements M28 and M29. The principal components in the case of control room R13 are in most cases determined by measurements performed with excitation via the monitor speakers. Only the results of measurements carried out with the omnidirectional sound source for the parameter C35 have a significant impact on the first two principal components PC1 and PC2.

Principal component analysis of the Definition D values indicates that the greatest impact on the first two principal components PC1 and PC2 comes from measurements M4 and M5.

Principal component analysis of the interaural cross-correlation coefficient IACC values shows that the first two principal components PC1 and PC2 are mainly defined, for the parameter IACC Early, by measurements M8 and M9; for IACC Late by measurements M14 and M10; and for the overall coefficient IACC Full by measurements M26 and M27. Clearly, measurements taken with the monitor loudspeakers significantly impact the calculation of the first two principal components for the IACC coefficient.

Large control room of "Studio Bajsić" R11

Principal component analysis of the reverberation time values shows that the greatest impact on the first two principal components PC1 and PC2 comes, for the parameter EDT, from measurements M8 and M9; for RT10 from measurements M16 and M17; for RT20 from measurements M21 and M26; and for RT30 from measurements M33 and M35. The analysis and definition of the first two principal components PC1 and PC2 for control room R11 show that in most cases a significant impact comes from measurements with the monitor loudspeakers, mostly the right-hand speaker. There is only one case with a significant impact of measurements with the omnidirectional sound source. In most cases the sweep signal has a significant impact.

Principal component analysis of the Clarity C values in the large control room of "Studio Bajsić" R11 indicates that the greatest impact on the first two principal components PC1 and PC2 comes, for the parameter C7, from measurements M5 and M4; for C50 from measurements M14 and M13; for C80 from measurements M24 and M22; and for C35 from measurements M31 and M33. The calculation of the principal components PC1 and PC2 for the clarity measurements in sound control room R11 shows that the biggest impact comes from measurements carried out with the right-hand control monitor, in most cases with the sweep tone. There is no case where measurements carried out with the omnidirectional sound source have a large significance for the principal components PC1 and PC2.

Principal component analysis of the Definition D values indicates that the greatest impact on the first two principal components PC1 and PC2 comes from measurements M4 and M5. Also in the case of the parameter Definition D, a significant impact on the calculation of the first two principal components PC1 and PC2 comes from measurements done with the right-hand control monitor.
Principal component analysis of the interaural cross-correlation coefficient IACC values shows that the first two principal components PC1 and PC2 are mainly defined, for the parameter IACC Early, by measurements M7 and M9; for IACC Late by measurements M13 and M14; and for the overall coefficient IACC Full by measurements M25 and M27. Clearly, measurements taken with a control monitor have a significant impact on the calculation of the first two principal components for the IACC coefficient.

Small control room of "Studio Bajsić" R12

Principal component analysis of the reverberation time values for control room R12 shows that the greatest impact on the first two principal components PC1 and PC2 comes, for the parameter EDT, from measurements M9 and M7; for RT10 from measurements M12 and M15; for RT20 from measurements M27 and M24; and for RT30 from measurements M28 and M36. The calculation of the principal components PC1 and PC2 for the reverberation time parameters shows that measurements with all sound sources are present, i.e. the omnidirectional sound source as well as the monitor loudspeakers.

Principal component analysis of the Clarity C values in control room R12 indicates that the greatest impact on the first two principal components PC1 and PC2 comes, for the parameter C7, from measurements M8 and M9; for C50 from measurements M18 and M17; for C80 from measurements M22 and M23; and for C35 from measurements M29 and M31. In most cases a significant impact on the calculation of the principal components related to the clarity parameter C of control room R12 comes from measurements performed with a control monitor.

Principal component analysis of the Definition D values indicates which measurements have the greatest impact on the first two principal components PC1 and PC2.

Principal component analysis of the interaural cross-correlation coefficient IACC values shows that the first two principal components PC1 and PC2 are mainly defined, for the parameter IACC Early, by measurements M8 and M9; for IACC Late by measurements M11 and M14; and for the overall coefficient IACC Full by measurements M26 and M27. Clearly, measurements taken with a control monitor have in most cases a significant impact on the calculation of the first two principal components for the IACC coefficient. A measurement taken with the omnidirectional sound source is significant in only one case.

The final result - standardized measurement setup matrix

After a detailed analysis of the measurement results of the objective parameters of professional sound control room acoustic quality, a standardized measurement setup matrix is produced as the final result. The setup matrix shows which sound source and which source signal have to be used for the measurements that give the best results. The sound source can be either the omnidirectional source or the built-in monitor loudspeaker used as the operational sound control monitor. The exciting signal can be the sweep signal or the MLS signal. From the standardized measurement setup matrix for professional sound control rooms it should be noted that measurement with the omnidirectional sound source makes sense when the measured parameters are related to the reverberation time. The energy parameters clarity C and definition D should be measured with the operational control monitor loudspeakers that are used in the control room.
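A compact way to encode these recommendations is a simple lookup table keyed by the measured parameter. The pairing below only reflects the conclusions stated in this section (reverberation parameters with the omnidirectional source, energy parameters with the control monitors, sweep excitation in most cases); it is not the full published matrix, which differentiates further by room.

```python
# Minimal sketch of a standardized measurement setup lookup.
# Values are illustrative and follow only the general conclusions above.
SETUP_MATRIX = {
    "EDT":  ("omnidirectional source", "sweep"),
    "RT10": ("omnidirectional source", "sweep"),
    "RT20": ("omnidirectional source", "sweep"),
    "RT30": ("omnidirectional source", "sweep"),
    "C7":   ("control monitor loudspeaker", "sweep"),
    "C35":  ("control monitor loudspeaker", "sweep"),
    "C50":  ("control monitor loudspeaker", "sweep"),
    "C80":  ("control monitor loudspeaker", "sweep"),
    "D50":  ("control monitor loudspeaker", "sweep"),
}

def recommended_setup(parameter):
    source, signal = SETUP_MATRIX[parameter]
    return f"Measure {parameter} with the {source}, excited by a {signal} signal."

print(recommended_setup("RT30"))
```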
Conclusion

Sound control rooms are special rooms in which acoustic quality is of particular importance, and it is crucial to define the room layout exactly and to achieve the best acoustic quality parameters. The analysis of such requirements facilitates achieving this quality. This paper presents a mathematical analysis of the measurement results using Principal Component Analysis. The measured values of the objective room acoustic quality parameters are exactly specified by the conditions under which they are measured, and they will be correlated with subjective assessments of the sound control rooms in further research. It is therefore extremely important to determine exactly the measurement conditions and setup that give the most accurate measurement results. The final results show the measurement setup for each observed parameter, separately for each sound control room; in this way results are obtained that support the subjective assessment of the room in the best possible way. The presented results also provide a basis for future statistical analysis of correlations between the size and shape of the room, the type of sound source and/or signal, and particular parameters of the acoustic quality of the sound control room. The method also allows the optimization of the acoustic characteristics of sound control rooms towards adequate acoustic quality.

Figure 1 Measurement setup (OSS - Omnidirectional Sound Source; ML - Monitor Loudspeaker; HATS - Head and Torso Simulator)
Figure 2 Layout of the multichannel music control room R1
Figure 3 Layout of the multichannel music control room R4
Figure 5 Layout of the control room for audio editing R13
Figure 6 Layout of the large control room of "Studio Bajsić" R11
Table 1 Measurement conditions and labels for the statistical method of PCA (Source: OSS, OSS, OSS, ML, ML, ML, ML, ML, ML)
Table 2 Construction parameters of the multichannel music control room R1
Table 3 Coefficients of principal components of the objective room quality parameters for the multichannel music control room R1
Table 4 Construction parameters of the multichannel music control room R4
Table 8 Construction parameters of the control room for audio editing R13: width 5,58 m, depth 4,71 m, height 2,90 m, floor area 25,90 m², volume 75,11 m³
Table 5 Coefficients of principal components of the objective room quality parameters for the multichannel music control room R4
Table 6 Construction parameters of the speech and music control room R5
Table 7 Coefficients of principal components of the objective room quality parameters for the speech and music control room R5
Table 9 Coefficients of principal components of the objective room quality parameters for the control room for audio editing R13
Table 10 Construction parameters of the large control room of "Studio Bajsić" R11
Table 11 Coefficients of principal components of the objective room quality parameters for the large control room of "Studio Bajsić" R11
Table 12 Construction parameters of the small control room of "Studio Bajsić" R12
Table 13 Coefficients of principal components of the objective room quality parameters for the small control room of "Studio Bajsić" R12
5,808.6
2015-10-22T00:00:00.000
[ "Physics" ]
Inhibition of STAT3-interacting protein 1 (STATIP1) promotes STAT3 transcriptional up-regulation and imatinib mesylate resistance in chronic myeloid leukemia Background Signal transducer and activator of transcription 3 (STAT3) is an important transcription factor frequently associated with the proliferation and survival of a large number of distinct cancer types. However, the signaling pathways and mechanisms that regulate STAT3 activation remain to be elucidated. Methods In this study we took advantage of existing cellular models of chronic myeloid leukemia resistance, together with western blot, in vitro signaling, real-time PCR, flow cytometry approaches for cell cycle and apoptosis evaluation, and siRNA assays, in order to investigate the possible relationship between STATIP1, STAT3 and CML resistance. Results Here, we report the characterization of STAT3 protein regulation by STAT3-interacting protein (STATIP1) in the leukemia cell line K562, which demonstrates constitutive BCR-ABL TK activity. K562 cells exhibit high levels of phosphorylated STAT3 accumulated in the nucleus and enhanced BCR-ABL-dependent STAT3 transcriptional activity. Moreover, we demonstrate that STATIP1 is not involved in either BCR-ABL or STAT3 signaling but that STATIP1 is involved in the down-regulation of STAT3 transcription levels; STATIP1-depleted K562 cells display increased proliferation and increased levels of the anti-apoptosis STAT3 target genes CCND1 and BCL-XL, respectively. Furthermore, we demonstrated that Lucena, an Imatinib (IM)-resistant cell line, exhibits lower STATIP1 mRNA levels and undergoes apoptosis/cell cycle arrest in response to STAT3 inhibition together with IM treatment. We provide evidence that STATIP1 siRNA could confer therapy resistance in the K562 cells. Moreover, analysis of CML patients showed an inverse expression of STATIP1 and STAT3 mRNA levels, confirming that IM-resistant patients present low STATIP1/high STAT3 mRNA levels. Conclusions Our data suggest that STATIP1 may be a negative regulator of STAT3 and demonstrate its involvement in IM therapy resistance in CML. Background The signal transducer and activator of transcription 3 (STAT3) protein belongs to a class of transcription factors that are activated by a number of growth factors and oncogenic proteins [1]. The activation of STAT3, which is regulated by the phosphorylation of tyrosine 705, is driven by receptor and non-receptor protein tyrosine kinases (TK), such as EGFR, gp130, Ras, Src and Abl [2][3][4][5]. Once activated, STAT3 forms homodimers, translocates to the cell nucleus and binds to specific regulatory DNA elements to induce transcription. Under physiologic conditions, the activation of STAT3 is transient and rapid [6]. However, the persistent activation of the STAT3 protein has been associated with several hematological cancers and solid tumors [7]. Previous data suggest that the constitutive activation of STAT3 induces cell transformation by the upregulation of anti-apoptotic and cell proliferation-related genes, such as BCL-XL and CCND1 [7], and oncogenes, such as PIM1 and c-Myc [8,9]. Furthermore, STAT3 activation has been associated with the up-regulation of VEGF and TWIST1, genes related to angiogenesis and metastasis [10]. These findings suggest a direct relationship between STAT3 activation and cancer development. In chronic myeloid leukemia (CML), the chimeric oncoprotein BCR-ABL, a constitutively activated TK, promotes the malignant transformation of hematopoietic cells [11].
BCR-ABL leads to the constitutive activation of the JAK/STAT, Ras/Raf/MEK/ERK and PI3K/PTEN/Akt/mTOR signaling pathways [12][13][14]. In CML, persistent STAT3 phosphorylation mediated by BCR-ABL has been associated with cellular proliferation, the inhibition of apoptosis and chemotherapy resistance [5,[15][16][17][18][19]. Although it is clear that the signaling activity of BCR-ABL is the main cause of the neoplastic transformation, the precise mechanisms by which BCR-ABL transforms cells remain largely unknown. Thus, strategies designed to understand the transcriptional activity of STAT3 may be important tools for discovering the next generation of anti-leukemia therapies. STAT3 is negatively regulated by the suppressors of cytokine signaling proteins, known as SOCS, by the protein inhibitors of activated STAT, known as PIAS, or by phosphatases, known as SHP. However, the regulatory mechanisms that negatively modulate STAT3 are ineffective in cancers [20]. Thus, several studies have tried to identify proteins that could interact with and positively or negatively regulate STAT3 activity [21][22][23][24][25][26][27][28]. Although many proteins are known to interact with STAT3 and regulate its activity, the mechanisms surrounding such regulation of the STAT3 protein remain to be elucidated in CML. Collum and cols. [29] described STAT3-interacting protein 1 (STATIP1) as a STAT3-associated protein. STATIP1 contains 12 WD40 domains that mediate protein-protein interactions, which play important roles in the regulation of signal transduction, transcription and proteolysis [30]. STATIP1 overexpression blocked STAT3 activation in the human hepatocellular carcinoma cell line HepG2 [29], suggesting a negative role for STATIP1 in STAT3 regulation. However, neither the STATIP1 expression nor its potential to regulate STAT3 activity has been assessed to date in other cancer types, such as leukemia cells. To address this issue, the aim of this study was to evaluate the STATIP1 and STAT3 status in the well-characterized CML model. Using the K562 cell line, we report that STATIP1 may act as a negative regulator of STAT3 transcriptional activity in CML and reduce the effects of Imatinib (IM) in K562 cells. Moreover, using a CML multidrug resistance (MDR)/Imatinib-resistant cell line (Lucena) and CML patients' samples, we address the relationship of STATIP1 and STAT3 in IM resistance. Our results suggest a new role for STATIP1 in CML therapeutic resistance. Cell lines and drug treatments A CML model cell line, K562, was cultured in RPMI-1640 medium containing 10% fetal bovine serum, 100 U/ml penicillin and 100 μg/ml streptomycin in 5% CO2 at 37°C. Lucena cells [a K562 MDR/IM-resistant cell line induced by vincristine] overexpressing ABCB1 were kindly provided by Dr. Vivian Rumjanek (Departamento de Bioquímica Médica, Universidade Federal do Rio de Janeiro, Brazil) [31]. The Lucena cells were cultured under the same conditions as the K562 cells, but their medium was supplemented with 60 nM VCR (Sigma). The K562 cells were plated at 1 × 10 5 cells/ml. The inhibition of BCR-ABL activity by treatment with IM (imatinib mesylate, Novartis) was performed using a final concentration of 1 μM for 24 h. For STAT3 inhibition, 40 μM LLL-3 was applied to the culture for 24 h. The LLL-3 was kindly provided by Dr. Pui-Kai Li from Ohio State University, USA. Patient samples This study was approved by the ethics committee of the National Cancer Institute Hospital (INCA, Rio de Janeiro, Brazil).
Patients were admitted or registered at the National Cancer Institute Hospital, according to the guidelines of its Ethics Committee and the Helsinki declaration. All patients and healthy donors were adults and signed the consent form. Bone marrow samples were obtained from CML patients in all disease phases (chronic, accelerated and blastic phases) at the time of diagnosis and during follow-up: IM-responsive patients (3 to 6 mo follow-up) and IM-resistant patients or patients who relapsed after an initial response (3 to 24 mo follow-up). We selected 6 healthy donors (mean age = 30, range = 20-37, male:female ratio = 4:2), 6 IM-responsive patients (mean age = 45, range = 35-68, male:female ratio = 1:5) and 8 IM-resistant patients (mean age = 51, range = 24-59, male:female ratio = 6:2). Diagnoses and follow-ups were based on hematologic, cytogenetic and molecular assays. IM-responsive patients exhibited a major molecular response and complete hematologic and cytogenetic responses, whereas IM-resistant patients lacked hematologic, cytogenetic and molecular responses. The inclusion criterion was CML patients who received IM as first-line therapy. The exclusion criterion was CML patients with BCR-ABL mutations. Marrow aspirates were collected in heparinized tubes and processed on the day they were collected. Bone marrow mononuclear cells were isolated from 2-5 mL of aspirate on a Ficoll-Hypaque density gradient (Ficoll 1.077 g/mL; GE, Sweden) according to the manufacturer's protocol. Cells were washed 3 times in PBS and subsequently used for RNA extraction. Small interfering RNA (siRNA) K562 cells were plated at 1 × 10 5 cells/ml in a 24-well plate and left overnight in RPMI-1640 medium without antibiotics. STATIP1 siRNA (100 nM) (SC-44436, Santa Cruz) and 2 μL of Lipofectamine™ RNAiMAX (Invitrogen) were incubated separately in a final volume of 50 μL of RPMI-1640 medium for 5 min. Subsequently, the siRNA and Lipofectamine were mixed, incubated for 30 min and then applied dropwise onto the cell cultures. Scrambled siRNA (100 nM) (SC-37007, Santa Cruz) was used as an siRNA negative control. FITC-conjugated siRNA (SC-36869, Santa Cruz) was used to evaluate the transfection efficiency by FACS. siRNA transfections were conducted for up to 72 h. Proliferation assay K562 cells (1 × 10 5 ) were transfected with scrambled or STATIP1 siRNA in a 24-well plate for 72 h. After transfection, cell cultures were treated with 1 μM IM for 24 h. A WST-1 assay was performed to determine the number of viable cells. The relative number of viable cells was expressed as a percentage of the untreated cells. Western blot Whole-cell protein extracts were obtained from cell lines in lysis buffer containing 50 mM Tris pH 7.5, 5 mM EDTA, 10 mM EGTA, 50 mM NaF, 20 mM β-glycerophosphate, 250 mM NaCl, 0.1% Triton X-100, 20 mM Na 3 VO 4 and protease inhibitor mix (Amersham). The protein concentrations were determined using the Bradford assay, and 30 μg of the cell lysate proteins was subjected to separation by 10% SDS-PAGE. The protein extracts were electrophoretically transferred to a nitrocellulose membrane (GE) and probed with the appropriate antibodies. The western blots were developed with ECL Plus (Amersham). The following antibodies were used at 1:1000 dilutions: anti-STATIP1, anti-STAT3, anti-STAT3-Y705 and anti-ACTNB (Santa Cruz). Immunofluorescence K562 cells were fixed to glass slides using cytospin and further fixed by immersion in methanol:acetic acid (1:1) for 10 min at -20°C.
Fixed cells were permeabilized in 0.5% Triton X-100 for 10 minutes and blocked with 5% BSA for 1 h. Primary antibody incubation was performed at 4°C for 16 h. The cell nuclei were stained with DAPI (Santa Cruz). The images were analyzed using a LSM 510 Meta (Carl Zeiss) microscope equipped with a 63×/ 1.4 NA Plan-Apochromat oil immersion objective. Apoptosis assay To determine the percentage of apoptotic cells, we analyzed phosphatidyl serine externalization and membrane integrity by double staining with Annexin V PE and 7-AAD (PE Annexin V Apoptosis Detection Kit I, BD Pharmingen, USA) according to manufacturer's instructions. Briefly, after treatment, 1.0 × 10 5 cells were harvested, washed twice with cold PBS and resuspended in 100 μL of 1× binding buffer. Annexin V PE (5 μL) and 7-AAD (5 μL) were added, and samples were incubated for 15 min in the dark. After incubation, 400 μL of 1X binding buffer was added to each sample. Cells positive for Annexin V PE and 7-AAD were considered apoptotic. For every condition, 20.000 events were acquired using a FACSCalibur Flow Cytometer (Becton Dickinson, USA) and analyzed using CellQuest v.3.1 Software (Becton Dickinson, USA). All experiments were performed in triplicate. Statistical analysis All of the experiments were repeated at least three times, and the data are expressed as the mean ± SD. Statistical analyses (ANOVA and t-test) were performed using GraphPad Prism® v.5 software (GraphPad). A P-value (p) <0.05 was considered statistically significant (*p <0.05, **p <0.01, ***p <0.001). Evaluation of STAT3 expression and phosphorylation in CML K562 cells Previous studies have demonstrated that STAT3 is constitutively activated in a variety of cancer cell types [7], including leukemic cells [34]. First, we evaluated the STAT3 expression and phosphorylation status and subcellular localization in our CML cell line, K562. For this, immunofluorescence assays and western blot analyses were performed. Our results indicate that STAT3 is preferentially localized in the K562 cytoplasm, while a very strong nuclear accumulation of phosphorylated STAT3 is observed in these cells ( Figure 1F, 1I). These findings indicate that when STAT3 is phosphorylated, it accumulates in the K562 cell nucleus. These data validate our model as a STAT3-activated leukemic cell line, as reported by Benekli and cols. [7], who described STAT3 phosphorylation as a common finding in leukemic and other cancer cells. Inhibition of BCR-ABL interferes with STAT3 modifications but does not alter STATIP1 protein expression To demonstrate the role of BCR-ABL in STAT3 phosphorylation and the possible consequence of this signaling on STATIP1 expression, we first investigated the status of STAT3 and STATIP1 expression and STAT3 tyrosine-705 phosphorylation in BCR-ABL-inhibited K562 cells by immunofluorescence assays and western blotting. We inhibited BCR-ABL activity with 1 μM IM ( Figure 1J-R), as previously described [35]. Although BCR-ABL coordinates several molecular alterations, the STATIP1 protein levels remained unaltered following BCR-ABL inhibition using 1 μM IM for 24 h (Figure 1C, 1L). However, the STAT3 protein levels, phosphorylation status and nuclear accumulation were decreased in IM-treated cells compared with non-treated K562 cells ( Figures 1R and 2A, C-D). Unlike STAT3, our data suggested that STATIP1 expression is not related to BCR-ABL signaling ( Figures 1L and 2A, C-D). 
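The triplicate comparisons reported in the Results rely on the t-test/ANOVA procedure described in the Methods (mean ± SD, significance at p < 0.05/0.01/0.001), which the authors ran in GraphPad Prism. Purely for illustration, an equivalent minimal sketch in Python with SciPy is shown below, using made-up triplicate values rather than the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical triplicate read-outs (e.g. relative mRNA levels), not study data
control    = np.array([1.00, 0.95, 1.05])
scrambled  = np.array([0.98, 1.02, 1.01])
si_statip1 = np.array([2.10, 1.85, 2.30])

# One-way ANOVA across the three groups
f_stat, p_anova = stats.f_oneway(control, scrambled, si_statip1)

# Two-sample t-test for a single pairwise comparison
t_stat, p_ttest = stats.ttest_ind(control, si_statip1)

def stars(p):
    """Significance labels matching the paper's convention."""
    return "***" if p < 0.001 else "**" if p < 0.01 else "*" if p < 0.05 else "ns"

print(f"ANOVA:  F = {f_stat:.2f}, p = {p_anova:.4f} ({stars(p_anova)})")
print(f"t-test: t = {t_stat:.2f}, p = {p_ttest:.4f} ({stars(p_ttest)})")
```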
Imatinib treatment induces down-regulation of STAT3 target genes but not alteration of STATIP1 transcript levels Several genes listed as STAT3 targets play a relevant role in cancer [7][8][9][10]. STAT3 target genes mainly include cellular growth promoters and inhibitors of apoptosis [36]. Moreover, STAT3 has been described as an activator of its own transcription [37]. Here, we investigated the regulation of STAT3 target genes in K562 cells in response to IM treatment. The mRNA levels of the CCND1, BCL-XL and STAT3 genes were measured by RT-qPCR. Our results suggest that STAT3 target genes were downregulated 24 h after IM treatment (Figure 2A). To assess the direct activity of STAT3 on its gene targets, we directly inhibited STAT3 using LLL-3. Corroborating the previous results, the CCND1, BCL-XL and STAT3 mRNA levels were down-regulated in K562 cells after 24 h of LLL-3 treatment compared to untreated cells (Figure 2B). These findings indicate that STAT3 inhibition, either indirectly by IM or directly by LLL-3, induces a decrease in STAT3 transcriptional activity. Additionally, STAT3 inhibition with LLL-3 also did not interfere with the STATIP1 mRNA levels (Figure 2B). Our data indicated that STATIP1 is not correlated with either the BCR-ABL or STAT3 signaling pathways but that it may be related to STAT3 activity in the CML cell line. STATIP1 depletion results in increased STAT3 transcriptional activity in K562 cells Previous studies have demonstrated that STAT3 activity can be regulated by STAT3 protein interactions [23,27,38]. To determine the potential of STATIP1 in regulating the transcriptional activity of STAT3, K562 cells were transfected with siRNA against STATIP1. The mRNA levels were analyzed and compared to untransfected or scrambled-transfected K562 cells. By RT-qPCR, significant decreases in the STATIP1 mRNA and protein levels were observed 72 h after siRNA transfection (Figure 3A,B). Interestingly, the increase in STAT3 mRNA levels after STATIP1 inhibition was inversely proportional, showing significant elevation at 72 h (Figure 3C). This result suggests that with transient STATIP1 depletion, STAT3 is more transcriptionally activated. To validate this hypothesis, we investigated STAT3 target gene mRNA levels. Surprisingly, in response to STATIP1 inhibition, a significant two-fold increase of CCND1 mRNA levels and a three-fold increase of BCL-XL mRNA levels were observed 72 h after siRNA transfection (Figure 3D). These findings showed that STATIP1 down-regulation in K562 cells augments the STAT3 mRNA levels and those of its target genes, demonstrating that STATIP1 is involved (directly or indirectly) in the negative regulation of STAT3 transcription. STATIP1 is involved in imatinib resistance in CML The role of STATIP1, both physiologically and in cancer cells, is completely unknown. In an effort to determine the mechanism of STATIP1-mediated CML therapy resistance, we used the Lucena cell line as a model of IM resistance [39].
Figure 1 Immunofluorescence analyses of STAT3, STAT3-Y and STATIP1 proteins. STAT3, STAT3-Y and STATIP1 FITC-labeled antibodies (green), DAPI-stained DNA (blue) and merged images. Protein labeling was observed in untreated K562 cells (A-I) and K562 cells treated with 1 μM IM (J-R). The slides were analyzed using an LSM confocal system, and the images were processed using AxioVision-LE software (Carl Zeiss).
Lucena cells were subjected to IM, LLL-3, and co-treatment (as previously reported) [35], and the STAT3, STATIP1 and ABCB1 mRNA levels were evaluated after 24 h and compared to untreated cells. The STATIP1 mRNA levels were lower in the Lucena cells compared to untreated K562 cells (Figure 4A). Additionally, the STAT3 mRNA levels decreased by 60% in the Lucena cells with each of the different treatments (Figure 4B), but the ABCB1 mRNA levels only decreased with the LLL-3 treatment (≅50%) (Figure 4C). No differences were observed regarding the ABCB1 mRNA levels in K562 cells (data not shown). Interestingly, STAT3 inhibition by LLL-3 treatment sensitized Lucena cells to IM treatment (Figure 4D) in a cell cycle arrest-independent manner (Figure 4E-F). Together, these results suggest that IM resistance may be associated not only with STAT3 overexpression/activation but also with STATIP1 downregulation. To address this hypothesis, we assessed K562 cell viability after IM treatment in STATIP1-depleted cells 72 h after siRNA transfection. Our results indicated a decrease in the IM sensitivity of the K562 cell line with reduced STATIP1 expression compared to the control or scrambled K562 cells (Figure 5). After 24 h of 1 μM IM treatment, approximately 25% of the STATIP1-depleted K562 cells remained viable compared to the control or scrambled cells (Figure 5). Additionally, we also analyzed a total of 14 CML patients with different responses to IM (6 IM-responsive and 8 IM-resistant) and 6 healthy bone marrow donors. RT-qPCR analyses showed that IM-resistant patients presented down-regulated STATIP1 mRNA levels compared to IM-responsive patients (Figure 6A). Moreover, the STAT3 mRNA levels were inversely expressed, being up-regulated in IM-resistant patients compared to IM-responsive patients (Figure 6B). These data suggest that the decreased expression of STATIP1 may promote IM resistance in the K562 cell line and could be an important component of in vivo IM-resistance development in CML. Discussion Although the BCR-ABL oncoprotein, a hallmark of CML, constitutively activates multiple signaling pathways [3], our group was particularly interested in STAT3 signaling activation, as the constitutive activation of STAT3 is associated with oncogenic transformation induced by the viral Src oncoprotein [2]. Furthermore, many in vivo and in vitro assays have demonstrated the association of STAT3 activation with the development and maintenance of several cancer types [1,4,10,16]. Despite the known association of STAT3 phosphorylation with cancer, the mechanisms that regulate STAT3 are not well understood. In this study, we were able to clarify the relationship between BCR-ABL signaling and STAT3 activation. Our data indicated strong STAT3 phosphorylation and nuclear accumulation in untreated K562 cells. K562 treatment with IM, an inhibitor of BCR-ABL activity, not only promoted a decrease in the mRNA and protein levels of STAT3 but also inhibited STAT3 phosphorylation. Moreover, our results also showed a transcriptional positive feedback loop, suggesting that STAT3 promotes its own over-expression, which may be important for signaling intensification. In summary, our findings suggest that STAT3 is phosphorylated and transcriptionally activated by BCR-ABL activity in K562 cells.
Several studies have demonstrated that STAT3 signaling can regulate the expression of numerous genes that are frequently involved in proliferation and apoptosis [34], angiogenesis, metastasis and differentiation [36], some of which are capable of positively regulating STAT3 through protein-protein interactions. To probe the diminished STAT3 activation in BCR-ABL-inhibited cells, we assessed the expression of representative known STAT3 target genes involved in proliferation and cellular survival, CCND1 and BCL-XL, respectively, and STATIP1, a protein identified in a two-hybrid assay as interacting with STAT3 [29]. As expected, CCND1 and BCL-XL were down-regulated in response to IM treatment, but unlike STAT3, the STATIP1 mRNA and protein levels were unaltered in the treated cells. Accordingly, our results indicated that STATIP1 was not affected by the molecular alterations promoted by BCR-ABL signaling. Hawkes and cols. characterized the STATIP1 levels in the cytoplasm and nuclei of cancer cell lines, where it exercises multiple distinct roles that depend on its subcellular localization [40]. To further investigate the relationship between STAT3 and STATIP1 in the context of BCR-ABL, we inhibited STAT3 activity with LLL-3, a more direct approach that has been previously used by our group [35]. Similar to the BCR-ABL inhibition experiments, the previously investigated STAT3 target genes demonstrated decreased mRNA levels compared to untreated cells. STATIP1 remained unchanged in K562 cells treated with the STAT3 inhibitor LLL-3. This corroborates our previous results, again suggesting that STATIP1 expression is not related to molecular signaling changes driven by either BCR-ABL or STAT3. Moreover, our findings showed that STATIP1 is present in both the cytoplasm and nuclei of K562 cells. Further characterization of the localized STATIP1 pools could reveal its precise role in these cellular compartments. It is known that STATIP1 contains 12 WD40 domains that are responsible for mediating protein-protein interactions that play important roles in signal transduction regulation, transcription and proteolysis [30]. In this context, investigation of the role of STATIP1 in signal transduction showed that its forced over-expression is able to block STAT3 activation [29]. However, the regulation of STAT3 transcriptional activity by STATIP1 was only observed in the human hepatocellular carcinoma cell line HepG2 [29]. In this study, we characterized STATIP1 in the K562 cell line and investigated its role in STAT3 transcriptional activity in a distinct cell line established from another cancer type, chronic myeloid leukemia. Instead of over-expressing STATIP1, as was performed by Collum and cols. [29], we depleted the STATIP1 mRNA and protein levels to investigate the role of STATIP1 in regulating STAT3 transcriptional activity in K562 cells. Our results showed a gradual increase of STAT3 target gene mRNA levels, such as those of STAT3, CCND1 and BCL-XL, in K562 cells subjected to STATIP1 inhibition. Similarly to Collum and cols. [29], our findings also indicated that STATIP1 may work as a negative regulator of STAT3 transcriptional activity. Because STATIP1 interacts with STAT3, we inferred that this may be a direct regulation mechanism. Indeed, existing data have already characterized the STATIP1 protein as a scaffolding protein that regulates the activity of interacting proteins [40]. Based on this finding, we propose that STATIP1 may interact with STAT3 in K562 cells and regulate STAT3 activation.
However, additional investigation is required to address the intricate mechanism by which STAT3 is inhibited by STATIP1. Nevertheless, independent of whether it is a direct or indirect regulation and how precisely it works, our results demonstrated that negative regulation of STAT3 by STATIP1 appears to be a common feature of distinct cancer cell types. If this result is validated in other diverse cancer cell types, we propose that STAT3 regulation may be important for cancer development and that it may also be an interesting target for the design of new drug strategies against cancer cells. Because STAT3 over-expression is closely related to CML drug resistance and has been implicated in a poor prognosis [17,41], we evaluated the role of STATIP1 in IM resistance. We took advantage of an IM-resistant cell model, the Lucena cell line. Lucena cells exhibit a multidrug resistance phenotype (with ABCB1 over-expression) and have been shown to also be IM resistant compared to K562 cells [39]. We investigated the STATIP1, STAT3 and ABCB1 mRNA levels, together with apoptosis and cell cycle arrest, in Lucena cells with the inhibition of BCR-ABL and STAT3. We observed decreased STATIP1 mRNA levels in Lucena cells compared to K562 cells. Because Lucena cells are resistant to IM, we observed STAT3 down-regulation in all of the treatments; additionally, we observed a decrease in the ABCB1 mRNA levels. This result was expected because it is known that ABCB1 is a STAT3 target [42,43]. Moreover, direct STAT3 inhibition (LLL-3 treatment) induced Lucena cells to undergo apoptosis, in contrast to indirect inhibition (IM treatment), and this effect was independent of cell cycle arrest. This result demonstrated that STAT3 over-expression together with STATIP1 down-regulation could be involved in IM resistance. To validate this hypothesis, we depleted STATIP1 and inhibited BCR-ABL activity in K562 cells and assessed proliferation and survival. Interestingly, our results demonstrated that STATIP1-depleted K562 cells have a higher survival percentage than control or scrambled-transfected cells. STAT3 can overcome sensitivity to BCR-ABL inhibition by driving proliferation, anti-apoptosis and MDR gene expression, increasing CML cell survival [15-19,43]. Moreover, although we analyzed a small cohort of healthy donors and patient samples, our in vivo analyses suggested that the STAT3 and STATIP1 genes are inversely expressed in relation to the IM response, which corresponds to our findings in the K562 and Lucena cell lines. The present study is the first report of STATIP1 expression in CML patients with different responses to IM therapy. Further studies may reveal the details of STATIP1's role in IM resistance. Conclusions Our data suggest that STATIP1 may be a negative regulator of STAT3 and that it could be involved in the acquisition of therapeutic resistance to IM in CML. Competing interests The authors declare that they have no competing interests. Authors' contributions ALM and SC performed experiments and statistical analysis, drafted the manuscript and contributed to the study conception and intellectual content. DS and MFS participated in the acquisition and analysis of the immunofluorescence experiments. BDR participated in the acquisition and analysis of the flow cytometry experiments. EA made substantial contributions to the study conception and design and critically revised the manuscript for intellectual content. All authors read and approved the final manuscript.
5,780.6
2014-11-23T00:00:00.000
[ "Biology" ]
Justification of the NLS Approximation for the KdV Equation Using the Miura Transformation It is the purpose of this paper to give a simple proof of the fact that solutions of the KdV equation can be approximated via solutions of the NLS equation. The proof is based on an elimination of the quadratic terms of the KdV equation via the Miura transformation.

Introduction The NLS equation describes slow modulations in time and space of an oscillating and advancing spatially localized wave packet. There exist various approximation results, cf. [1-4], showing that the NLS equation makes correct predictions of the behavior of the original system. Systems with quadratic nonlinearities and zero eigenvalues at the wave number k = 0 turn out to be rather difficult for the proof of such approximation results, cf. [5, 6]. The water wave problem falls into this class. Very recently, this long outstanding problem [7] has been solved [8] for the water wave problem in the case of no surface tension and infinite depth by using special properties of this problem. Another equation which falls into this class is the KdV equation. The connection between the KdV and the NLS equation has been investigated for a long time, cf. [9]. In [10, 11] the NLS equation has been derived as a modulation equation for the KdV equation, and its inverse scattering scheme has been related to the one of the KdV equation. It is the purpose of this paper to give a simple proof of the fact that solutions of the KdV equation can be approximated via solutions of the NLS equation. Among other things this has been shown by numerical experiments in [12]. An analytical approximation result has been given by a rather complicated proof in [5], with a small correction explained in [6]. The much simpler proof of this fact presented here is based on an elimination of the quadratic terms of the KdV equation via the Miura transformation.

Following [13], the KdV equation can be transferred with the help of the Miura transformation into the mKdV equation (1.4). In order to derive the NLS equation we make an ansatz (1.5) for the solutions v = v(x, t) of (1.4), where 0 < ε ≪ 1 is a small perturbation parameter. Equating the coefficient at εe^{i(kx−ωt)} to zero yields the linear dispersion relation ω = −k³. At ε²e^{i(kx−ωt)} we find the linear group velocity c = −3k², and at ε³e^{i(kx−ωt)} we find that the complex-valued amplitude A satisfies the NLS equation (1.6).

Approximation of the mKdV Equation via the NLS Equation Our first approximation result is as follows.

Theorem 2.1. Fix s ≥ 2 and let A ∈ C([0, T₀], H^{s+3}) be a solution of the NLS equation (1.6). Then there exist ε₀ > 0 and C > 0 such that for all ε ∈ (0, ε₀) there are solutions of the mKdV equation (1.4) satisfying sup_{t ∈ [0, T₀/ε²]} ‖v(·, t) − εψ_v(·, t)‖_{H^s} ≤ Cε^{3/2}.

Proof. The error function R, defined by v(x, t) = εψ_v(x, t) + ε^{3/2}R(x, t), satisfies an evolution equation in which E = e^{i(kx−ωt)}. In order to eliminate the O(ε³) terms we modify the previous ansatz (1.5) by adding higher-order terms. After this modification the residual Res(εψ_v) is of formal order O(ε⁴). When evaluated in H^s there is a loss of ε^{−1/2} due to the scaling properties of the L²-norm. Hence there exist ε₀ > 0 and C_res > 0 such that the residual can be bounded for all ε ∈ (0, ε₀). By partial integration we find, for s ≥ 2 and all m ∈ {0, . . . , s}, estimates with ε-independent constants C_j. Hence, using a ≤ 1 + a², the energy y(t) = ‖R(·, t)‖²_{H^s} satisfies a differential inequality. Rescaling time T = ε²t and using Gronwall's inequality immediately shows the O(1) boundedness of y for all T ∈ [0, T₀], respectively all t ∈ [0, T₀/ε²]. Therefore, we are done.
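Because the displayed equations are not reproduced in this excerpt, the following SymPy sketch assumes the standard normalizations u_t − 6uu_x + u_xxx = 0 (KdV), v_t − 6v²v_x + v_xxx = 0 (mKdV) and the Miura map u = v² + v_x; the paper's own (1.1)-(1.4) may differ by signs and scalings. The sketch verifies the classical factorization identity behind the Miura transformation and the linear dispersion relation quoted above.

```python
import sympy as sp

x, t, k, omega = sp.symbols('x t k omega')
v = sp.Function('v')(x, t)

# Miura transformation (standard normalization, assumed): u = v^2 + v_x
u = v**2 + sp.diff(v, x)

kdv_residual  = sp.diff(u, t) - 6*u*sp.diff(u, x) + sp.diff(u, x, 3)
mkdv_residual = sp.diff(v, t) - 6*v**2*sp.diff(v, x) + sp.diff(v, x, 3)

# Classical identity: the KdV residual of u factors through the mKdV residual,
# so the Miura map sends every mKdV solution to a KdV solution.
factorized = 2*v*mkdv_residual + sp.diff(mkdv_residual, x)
print(sp.simplify(sp.expand(kdv_residual - factorized)))   # -> 0

# Linearized mKdV v_t + v_xxx = 0 with the wave ansatz e^{i(kx - omega*t)}
wave = sp.exp(sp.I*(k*x - omega*t))
disp = sp.simplify((sp.diff(wave, t) + sp.diff(wave, x, 3)) / wave)
print(sp.solve(disp, omega))      # [-k**3]  -> omega = -k^3
print(sp.diff(-k**3, k))          # -3*k**2  -> group velocity c
```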
Transfer to the KdV Equation Applying the Miura transformation (1.2) to the approximation εψ_v defines an approximation for the KdV equation, from which the approximation theorem in the original variables follows.
973
2011-05-17T00:00:00.000
[ "Physics" ]
Zero-field NMR and NQR measurements of the antiferromagnet URhIn 5 The antiferromagnet URhIn 5 with the Néel temperature T N = 98 K has been investigated by nuclear magnetic/quadrupole resonance (NMR/NQR). 115 In-NQR spectra in the paramagnetic state give the respective electric field gradient parameters for the locally tetragonal and orthorhombic In(1) and In(2) sites. In the antiferromagnetic state at 4.5 K, 115 In-NMR spectra in the zero external field indicate a commensurate antiferromagnetic structure. The internal field at the In(1) sites is found to be zero and that at the In(2) sites is 21.1 kOe at 4.5 K. The temperature (T) dependence of the nuclear relaxation rates 1/T 1 in the paramagnetic state shows a distinct site dependence: Korringa-type constant (T 1 T) −1 behavior below ∼150 K for the In(1) sites and a divergent behavior of 1/T 1 toward T N for In(2). The plausible antiferromagnetic structure is discussed based on these observations. I. INTRODUCTION The large family of "115" intermetallic compounds with the tetragonal HoCoGa 5 -type structure has opened an interesting avenue in the field of strongly correlated f electron systems. 1 The successive discovery of the plutonium superconductors PuCoGa 5 (Ref. 2) and PuRhGa 5 (Ref. 3) with transition temperatures T c = 18 and 9 K, respectively, sparked interest in the actinide-based (An115) compounds AnT Ga 5 (An = U, Np, Pu; T = transition-metal elements). The isomorphic indium compound PuCoIn 5 has been recently reported to be a new superconductor with T c = 2.5 K. 4 In the closely related isostructural heavy-fermion Ce115 series CeT In 5 (T = Co, Rh, Ir), CeCoIn 5 and CeIrIn 5 become superconducting below 2.3 and 0.4 K, respectively. 5-7 CeRhIn 5 , an antiferromagnet with Néel temperature T N = 3.8 K at ambient pressure, becomes superconducting near 2 K, with suppression of the antiferromagnetism, at an applied pressure of P * ∼ 2 GPa. 8 Systematic nuclear magnetic resonance (NMR) investigations of the Ce115 systems have established that these superconductors have d-wave superconducting gaps, [9][10][11][12] and that antiferromagnetic (AFM) spin interactions play an active role in the superconducting pairing. [13][14][15][16] In the Pu115 systems, NMR measurements show d-wave-like superconducting gap behavior, 17,18 with T c 's an order of magnitude higher than those for the Ce115. In heavy-fermion 115 systems, recent systematic NMR experiments have suggested that AFM XY-type anisotropy is more favorable for d-wave superconductivity than Ising-type or isotropic fluctuations. [19][20][21] On the other hand, in the U115 and Np115 series, which are Pauli paramagnets or antiferromagnets, no superconductivity has been found. [22][23][24][25][26][27][28][29][30][31][32][33][34] In U115 systems, only gallium compounds have been reported. A search for isomorphic U115 indium compounds succeeded in the discovery and growth of single crystals of URhIn 5 . 35 URhIn 5 is found from measurements of resistivity, magnetic susceptibility, and specific heat to be an antiferromagnet with T N ∼ 98 K. In order to microscopically characterize the 5f electronic state in this new antiferromagnet URhIn 5 , 115 In-NMR and nuclear quadrupole resonance (NQR) measurements in zero field have been performed using approximately one dozen small single crystals. In Sec. II, the experimental details are given and the hyperfine parameters are defined.
In Sec. III, we report the NQR spectra in the paramagnetic (PM) state, and the NMR spectra in zero field in the Néel state of URhIn 5 . Nuclear relaxation rates 1/T 1 in URhIn 5 are presented. Here, the apparent nature of the 5f electrons in URhIn 5 is found to vary from rather localized for temperatures above ∼150 K to itinerant for temperatures below ∼150 K, i.e., the AFM state in URhIn 5 appears to be driven by itinerant 5f electrons. Finally, in Sec. IV, the possible AFM structure is discussed based on the NMR spectra for the respective In sites.

II. EXPERIMENTS

Single-crystal samples of URhIn 5 were prepared by the In-flux method. 35 For the NMR/NQR measurements, a dozen single crystals were used without grinding in order to avoid spectral broadening due to lattice distortions. NMR measurements were carried out in the temperature range 4-300 K using a phase-coherent, pulsed spectrometer installed in a special area for handling radioisotopes. Frequency-swept NMR/NQR spectra were measured in zero field by stepwise summing of the spin-echo signal intensity with an autotuning NMR probe. Both 113,115 In nuclei have nuclear spin I = 9/2, so there are nuclear quadrupolar interactions. Using conventional notation, the quadrupole frequency parameter is defined as ν Q ≡ 3e²qQ/[2I(2I−1)h], where eQ is the nuclear quadrupolar moment ( 113,115 Q are given as 1.14 and 1.16, respectively), and eq ≡ V ZZ is the principal component of the electric field gradient (EFG) tensor. Here, V ii denotes the EFG tensor components in the principal coordinate system, such that |V XX | ≤ |V YY | ≤ |V ZZ | for each ionic site. The EFG components satisfy Laplace's equation, i.e., V XX + V YY + V ZZ = 0. The nuclear gyromagnetic ratio values used here are 115 γ N /2π = 0.93301 MHz/kOe for the major 115 In isotope with the natural abundance of 95.72%, while 113 γ N /2π = 0.93099 MHz/kOe for the minor isotope 113 In with a small abundance of 4.28%. Due to the small abundance and the closeness of γ N and Q, the 113 In signal is usually buried by the adjoining 115 In signal. URhIn 5 crystallizes in the tetragonal HoCoGa 5 -type structure (space group P 4/mmm), illustrated in Fig. 1. This crystal structure is quasi-two-dimensional in character, i.e., it can be regarded as a sequential stacking of UIn 3 and RhIn 2 layers along the c axis. There are two inequivalent crystallographic In sites: the locally tetragonal In(1) (1c site) and the orthorhombic In(2) (4i site), as shown in Fig. 1. Due to the local symmetry, the EFG asymmetry parameter η must be zero for the In(1) and nonzero for the In(2) sites. The local coordinates based on the principal axes of the EFG can be determined, as denoted in Fig. 1: V ZZ for In(1) is parallel to the c axis, and V ZZ for In(2) is perpendicular to the ac plane while V XX is parallel to the c axis. These local coordinates for each site in the 115 compounds are well established by symmetry and experimentally. 36 The nuclear spin-lattice relaxation time T 1 was measured using the inversion-recovery method with a π pulse. Values of T 1 were obtained from fits to an appropriate relaxation function 37 for the In(1) and In(2) sites, respectively.

A. NQR spectra

Figure 2 displays all 115 In-NQR spectra for the In(1) and In(2) sites in the PM state at 115 K in URhIn 5 . The signal intensities are corrected by the frequencies squared to deduce the transition probabilities.
FIG. 2. 115 In-NQR spectra in zero external field for the In(1) and In(2) sites in the PM state of URhIn 5 taken at 115 K. The assignments are also denoted by arrow sets. (b) and (c) are the expansions of each line for 1ν Q and 2ν Q of the In(1) sites, and (d) for 1ν Q and 2ν Q of the In(2) sites. The line marked by an asterisk could not be assigned.

Correction by the nuclear spin-spin relaxation times T 2 is unnecessary since the data were taken with a very short separation τ of 12 μs between the first and second rf pulses. As shown in Figs. 2(b)-2(d), each line is quite sharp with a linewidth of ∼60 kHz. These sharp NQR lines indicate the high quality and homogeneity of the crystals. A weak resonance was also observed, as denoted by the asterisk in Fig. 2(a), with a nuclear relaxation time two orders of magnitude longer than that of the main signals. This is probably a small contribution from a nonmagnetic binary compound, e.g., Rh-In, although it could not be identified.

From crystallographic considerations, the assignments for In(1) and In(2) have been determined. The local tetragonal symmetry of In(1), i.e., η = 0, requires the NQR lines to be equally separated by ν Q , and the remaining four lines with high intensities then arise from In(2), as denoted in Fig. 2(a). Thus, ν Q for 115 In(1) is easily determined to be 9.276 MHz at 115 K. In the case of finite η, the electric quadrupole Hamiltonian matrix H Q = (hν Q /6)[3I Z ² − I(I+1) + (η/2)(I + ² + I − ²)] can be diagonalized to obtain the EFG parameters. As usual, the four allowed (Δm = ±1) and the associated forbidden (|Δm| > 1) transitions would be observed if η is finite. In URhIn 5 , however, only the four allowed transitions are observed, as seen similarly for the In(2) sites in Ce115 compounds. The numerical diagonalization has been done to fit the frequencies of these transitions. As a result, ν Q and η for 115 In(2) are obtained from the fit.

B. NMR spectra in zero field below T N

Figure 3 shows the 115 In-NQR/NMR spectra in the AFM state of URhIn 5 at 4.5 K, well below T N , obtained by frequency sweep in zero external field. It is noted that the fast repetition of pulses (∼200 ms) weakened the signals coming from the nonmagnetic impurity (marked by the asterisk in Fig. 2). Therefore, all the visible resonance lines in Fig. 3 originate from 115 In in URhIn 5 . Here, in order to compare with the simulated transition probabilities, the signal intensities were again divided by the carrier frequencies squared, but no T 2 correction was made since τ was very short. The noisy background below ∼30 MHz in Fig. 3 is due to this correction. From Fig. 3, the AFM order is concluded to be commensurate since the spectral lines remain as sharp as in the PM state, i.e., there is no characteristic line broadening due to a distribution of internal fields from incommensurate AFM ordering, such as seen in the related materials CeRhIn 5 (Ref. 38) or CePt 2 In 7 (Ref. 39). The In(1) lines remain in nearly the same position as in the PM state relative to the simulated lines plotted together in Fig. 3, so these lines are NQR lines with no internal field on the In(1) sites. On the other hand, the NMR spectra for In(2) show a characteristic line splitting away from the NQR lines of the PM state. In the AFM state in zero external field, the NMR occurs via the internal (hyperfine) field H int on the ligand In sites transferred from the magnetic uranium ions. In such a case, one needs to diagonalize the effective Hamiltonian matrix H Z + H Q , where H Z = γ N ħ I · H int is the Zeeman term.
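To make the diagonalization step concrete, the sketch below builds the I = 9/2 quadrupole-plus-Zeeman Hamiltonian in the EFG principal frame and returns the level scheme. The ν Q of In(1) and the 21.1 kOe internal field along V YY used in the example are taken from the text; the In(2) asymmetry parameter is a placeholder, since its fitted value is not quoted in this excerpt.

```python
import numpy as np

def spin_matrices(I):
    """Angular-momentum matrices Ix, Iy, Iz for spin quantum number I."""
    m = np.arange(I, -I - 1, -1)                      # m = I, I-1, ..., -I
    dim = len(m)
    Iz = np.diag(m)
    Ip = np.zeros((dim, dim))                         # raising operator I+
    for k in range(1, dim):
        Ip[k - 1, k] = np.sqrt(I * (I + 1) - m[k] * (m[k] + 1))
    Im = Ip.T
    Ix, Iy = (Ip + Im) / 2, (Ip - Im) / 2j
    return Ix, Iy, Iz

def zero_field_levels(nuQ, eta, Hint=0.0, n_dir=(0.0, 1.0, 0.0), gamma=0.93301):
    """Energy levels (MHz) of H_Q + H_Z for a spin-9/2 nucleus.

    nuQ, eta : quadrupole frequency (MHz) and EFG asymmetry parameter
    Hint     : internal field (kOe) along the unit vector n_dir, given in the
               EFG principal frame (X, Y, Z); gamma is gamma_N/2pi in MHz/kOe.
    """
    I = 4.5
    Ix, Iy, Iz = spin_matrices(I)
    HQ = (nuQ / 6.0) * (3 * Iz @ Iz - I * (I + 1) * np.eye(int(2 * I + 1))
                        + eta * (Ix @ Ix - Iy @ Iy))
    HZ = -gamma * Hint * (n_dir[0] * Ix + n_dir[1] * Iy + n_dir[2] * Iz)
    return np.linalg.eigvalsh(HQ + HZ)

# In(1): eta = 0, no internal field -> pure NQR lines equally separated by nuQ
levels = np.unique(np.round(zero_field_levels(nuQ=9.276, eta=0.0), 6))
print(np.diff(levels))   # -> [ 9.276 18.552 27.828 37.104] MHz, i.e. 1-4 x nuQ

# In(2): placeholder eta, H_int = 21.1 kOe parallel to V_YY (n_dir along Y)
print(zero_field_levels(nuQ=17.89, eta=0.45, Hint=21.1, n_dir=(0.0, 1.0, 0.0)))
```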
As shown in Fig. 3, by fitting to the diagonalized resonances, the remaining spectra can be explained simply by taking H_int = 21.1 kOe, parallel to V_YY on each In(2) site. We also note that no other field orientation can explain the experimental resonances. In this fitting procedure, ν_Q and η for In(2) are also obtained, as shown in the following. The result is schematically illustrated in Fig. 4. The magnetic structure is discussed later in Sec. IV based on these experimental facts: (i) no internal field on the In(1) sites, and (ii) a finite internal field with a unique magnitude, parallel to V_YY, on the In(2) sites, transferred from the uranium sites.

FIG. 3. 115In-NMR spectra in zero external field for the In(1) and In(2) sites in the AFM state of URhIn5 taken at 4.5 K. The simulated resonance lines given by diagonalization of the effective Hamiltonian are also plotted for the In(1) and In(2) sites, respectively.

FIG. 4. (Color online) Schematic illustration indicating by bold lines the directional axes of the (transferred) internal fields H_int at the In(2) sites. The amplitude of H_int is unique on the In(2) sites. No internal field is transferred to the In(1) sites.

Fitting the temperature dependence of H_int below T_N yields an exponent β = 0.27. The development of H_int just below T_N is found to vary more rapidly than the conventional mean-field result. The saturated value of H_int can be estimated to be 21.1 kOe at T → 0.

Figure 6 shows the temperature dependence of ν_Q for 115In(1) and 115In(2); the EFG asymmetry parameter η for In(2) is shown in the inset of Fig. 6(b). η is nearly T independent. In most PM solids, the temperature dependence arises from lattice vibrations (phonons), for which the phenomenological relation ν_Q(T) ∝ T^(3/2) generally holds.40 In URhIn5, however, this T^(3/2) term is found to be very small. In particular, ν_Q(T) for 115In(1) appears to be nearly linear in T, as seen in Fig. 6(a). In order to fit the data in the PM region, the empirical formula ν_Q(T) = ν_Q0 + kT + lT^(3/2) is used. The obtained parameters for 115In(1) and 115In(2) are ν_Q0 = 9.39 MHz, k = −0.0014 MHz/K, l = 1 × 10^−5 MHz/K^(3/2) and ν_Q0 = 17.89 MHz, k = −0.00075 MHz/K, l = −4 × 10^−5 MHz/K^(3/2), respectively. The fits reproduce the data well in the PM region, as shown in Fig. 6. Interestingly, ν_Q(T) for In(1) below T_N shows the opposite tendency to that for In(2), i.e., a decrease of ν_Q(T) for In(1) and an increase for In(2); the ν_Q for In(1) is related mainly to the lattice parameter along the a axis, and that for In(2) to those along both the a and c axes. Probably, the anisotropic ν_Q(T) between In(1) and In(2) below T_N is associated with a characteristic magnetovolume effect in URhIn5.

Figure 7 shows the temperature dependence of the nuclear spin-lattice relaxation rate 1/T1 measured at the NQR lines for In(1) in URhIn5. Since the internal field is found to be canceled at the In(1) sites, the NQR lines remain even below T_N. Therefore, 1/T1 below T_N can be determined using the same nuclear magnetization recovery functions as in the PM state. We also note that the values of 1/T1 for the 1ν_Q, 2ν_Q, 3ν_Q, and 4ν_Q lines are equal at each temperature. In the PM region, 1/T1 just above T_N is proportional to temperature, i.e., (T1T)^−1 is constant in the temperature range from T_N to T* ∼ 150 K, as clearly seen in the inset of Fig. 7.
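Two of the fits quoted above, written out explicitly; the power-law form for H_int is an assumption, since the extracted text gives only the exponent β = 0.27 and the conventional order-parameter form is used here:

```latex
% Empirical nu_Q(T) fit in the PM region (parameters as quoted above), and the
% conventional order-parameter form assumed for the internal-field fit below T_N.
\nu_Q(T) = \nu_{Q0} + kT + lT^{3/2},
\qquad
H_{\mathrm{int}}(T) \propto \left(1 - \frac{T}{T_N}\right)^{\beta},\quad \beta = 0.27 .
```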
D. Nuclear relaxation rates

In general, 1/T1 on the ligand sites can be written in terms of the transferred hyperfine coupling and the dynamical susceptibility of the magnetic ions [Eq. (2)],41,42 where γ_n and γ_e are the nuclear and electronic gyromagnetic ratios, A is the transferred hyperfine coupling constant, f_α(q) is the hyperfine form factor, Imχ(q,ω_0) is the imaginary part of the dynamical susceptibility generated by the magnetic atoms, ω_0 is the NQR frequency, and the suffix ⊥ refers to the component perpendicular to the quantization axis. The hyperfine coupling constants mainly originate from the hybridization between the U 5f and the ligand 5s/5p states. Therefore, the q dependence of the transferred hyperfine coupling is imposed by f(q), since the transferred hyperfine fields are locally produced by the nearest U ions. For example, f²(q) = 16 cos²(q_x a/2) cos²(q_y a/2) for the tetragonal In(1) sites. Indeed, f²(q) = 0 at the specific AFM propagation vectors q_x = π/a or q_y = π/a, although the q dependence of f²(q) is trigonometrically weak. However, since spin fluctuations in the PM state usually have broad widths in q space, (T1T)^−1 can sense AFM fluctuations beyond the moderate filtering by the trigonometrical f²(q) term. In the case of cubic UIn3 with the AFM propagation vector 2πQ/a ≡ (π/a, π/a, π/a), 1/T1 just above T_N can sense a critical increase of AFM fluctuations beyond such f²(q) filtering.43 Hereafter, since f²(q) is not important for the following discussion, f²(q) is assumed to be unity for simplicity. It is noted here that the simple approximation of neglecting the q dependence of f²(q) is more relevant to the In(2) sites, since the site symmetry is lower and the hyperfine fluctuations do not vanish at any particular field orientation; in this case, f²(q) becomes a more complicated trigonometrical function.44

An additional consequence of Eq. (2) is that 1/T1 is sensitive only to the spin components perpendicular to the quantization axis. Namely, 1/T1 for In(1)-NQR can detect only the in-plane fluctuations of the 5f electrons, since the quantization axis is the c axis, parallel to the principal axis V_ZZ of the EFG. On the other hand, 1/T1 for In(2)-NQR can sense the fluctuations along both the a and c axes, because V_ZZ for In(2) is perpendicular to the ac plane. As shown in Fig. 8, (T1T)^−1 measured on the In(2)-NQR line in URhIn5 shows a critical increase just above T_N, while (T1T)^−1 for In(1)-NQR does not exhibit such an enhancement. Therefore, the anisotropic AFM enhancement of (T1T)^−1 between the ligand sites In(1) and In(2) originates from strong 5f fluctuations along the c axis, i.e., the ordered moment in the AFM state tends to be oriented along the c axis. Such an anisotropic AFM enhancement due to a tendency for c-oriented moments is also detected by NMR 1/T1 measurements in NpCoGa5.36

If itinerant electrons dominate the magnetic relaxation, then in an electron-gas model the q summation of the imaginary part of the dynamical susceptibility reduces to an expression proportional to πγ_e² times an energy integral over N(E)² f(E)[1 − f(E)] (with ħ and k_B taken as unity), where f(x) and N(x) are the Fermi distribution function and the density of states. Then, from Eq. (2), (T1T)^−1 becomes T independent (the so-called Korringa behavior), and the value of (T1T)^−1 is proportional to the square of N(E_F). Even if electronic correlations exist, (T1T)^−1 is proportional to N²(E_F) and the magnetic correlation factor K(α) as long as the random-phase approximation (RPA) is applicable.42 In the case with localized character, 1/T1 is known to reach a constant value.41
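Eq. (2) referred to above is missing from the extracted text. A standard form consistent with the quantities defined there (Moriya's expression for relaxation at a ligand site through a transferred hyperfine coupling), together with the Korringa limit discussed next, is sketched below; the prefactors are an assumption and should be checked against the original references 41 and 42:

```latex
% Assumed standard forms of Eq. (2) and of the Korringa limit; not copied from the paper.
\frac{1}{T_1} \;=\; \frac{2\gamma_n^{2} k_B T}{(\gamma_e \hbar)^{2}}
   \sum_{\mathbf q} \left|A_{\perp}\right|^{2} f_{\perp}^{2}(\mathbf q)\,
   \frac{\mathrm{Im}\,\chi_{\perp}(\mathbf q,\omega_0)}{\omega_0},
\qquad
f^{2}(\mathbf q)\big|_{\mathrm{In}(1)} = 16\cos^{2}\!\frac{q_x a}{2}\,\cos^{2}\!\frac{q_y a}{2},
\qquad
\left(T_1 T\right)^{-1} \propto N^{2}(E_F)\,K(\alpha).
```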
Such a constant behavior of 1/T1 in the localized regime has been observed in the paramagnetic state of UIn3.43 Since a constant behavior of (T1T)^−1 is clearly observed below T* ∼ 150 K, the 5f electrons acquire itinerant character by hybridization with conduction electrons below T*, although the AFM enhancement factor K(α) is uncertain. Above T*, 1/T1 for In(1) deviates downward and that for In(2) approaches a constant value, indicating a loss of the 5f electrons' itinerancy. As evidence that 1/T1 for In(1) reacts to the 5f magnetism, a drop of 1/T1 just below T_N is observed, corresponding to a decrease of the density of states (DOS) at the Fermi surface after the AFM ordering opens an energy gap. Since there is no reason for the correlation factor K(α) to increase below T_N after AFM ordering, the unusual increase of (T1T)^−1 below ∼50 K (see the inset of Fig. 7) should be attributed to a recovery of the DOS at the Fermi surface at temperatures well below T_N. We note that (T1T)^−1 seems to saturate at the lowest temperature, near 4 K, as seen in the inset of Fig. 7. Such a recovery of the DOS below T_N may be connected with AFM nesting effects on the Fermi surface, which would cause an increase of the residual DOS by self-polarization of the up- and down-spin bands.

IV. DISCUSSION

The experimental results are briefly enumerated. (1) The AFM propagation vector is commensurate. (2) No internal field is transferred to the In(1) sites. (3) Finite internal fields with a unique magnitude parallel to V_YY are transferred to the In(2) sites from the uranium sites, as illustrated in Fig. 4. (4) PM moments on the U sites tend to orient parallel to the c axis. (5) The 5f electrons acquire itinerant character through hybridization with conduction electrons below T* ∼ 150 K.

A plausible AFM structure in URhIn5 can be proposed based on results 1-3 of the static spectral information. First, item 1 makes the puzzle simple: we can conclude that neither a spin density wave nor an incommensurate spiral AFM, as observed in CeRhIn5 (Ref. 45), occurs in URhIn5, i.e., all the U atoms carry the same moment μ_ord in a simple AFM arrangement. Therefore, we need only determine the simple AFM propagation vectors which reproduce the observed internal fields on the ligand sites. From item 2, we can conclude that the AFM propagation vector should have at least an in-plane component, i.e., q_x = π/a and/or q_y = π/a, because an in-plane ferromagnetic arrangement with q_x = q_y = 0 would give a finite internal field at the In(1) sites. Thus, the possible AFM propagation vectors are narrowed to Q_0 = (1/2, 0, 0), Q_1 = (1/2, 0, 1/2), Q_2 = (1/2, 1/2, 0), and Q_3 = (1/2, 1/2, 1/2).

The internal fields at nonmagnetic ligand sites originate from the spin-density distribution of the magnetic ions through the dipolar and transferred hyperfine interactions. In principle, if the complete hyperfine tensor were determined in the ordered state through quantification of the c-f mixing effect, the internal fields could be calculated assuming possible magnetic structures. In many cases, however, the hyperfine coupling tensor in the ordered state cannot be resolved experimentally. Instead, even without such a complete solution, we can deduce possible directions of the internal field at a nonmagnetic ligand site on the basis of symmetry:46-49 the induced magnetic field at a ligand site never breaks the symmetry of the magnetic sublattice. Let us consider the in-plane μ_ord cases to begin with.
If μ ord were parallel to the a axis, any simple AFM propagation vector can not give a unique magnitude of the internal field parallel to the V Y Y axis at the In(2) site by this symmetry principle because such an AFM structure breaks the fourfold-rotational symmetry leading to at least two kinds of hyperfine fields at the In(2) sites in magnitude or in direction. This situation is the same even if the μ ord is parallel to 110 with in-plane stripe type Q 0 or Q 1 . Only in the case of μ ord parallel to 110 with Q 2 or Q 3 do the two kinds of hyperfine fields on the In(2) sites accord in magnitude and direction. But, it is parallel to the c axis, i.e., V XX . These are inconsistent with item 3. Based on the foregoing considerations, even more general cases of μ ord uv0 (u,v = 0,1 and u = v) with AFM arrangement even including multi-k cases (noncollinear AFM structure) can not give a solution with a unique |H int | V Y Y on the In(2) sites. Similarly, the case of μ ord uvw (u,v,w = 0) is also impossible for explaining the observed internal fields at the In(2) sites. As a consequence, symmetry considerations preclude the possibility of in-plane μ ord , i.e., the ordered moments μ ord on U sites must be parallel to the c axis. This is also consistent with item 4 from 1/T 1 as well. In the AFM structure of Q 0 , Q 1 , Q 2 , or Q 3 with μ ord c, as shown in Figs. 9(a) -9(d), the possible directions of hyperfine fields on the In(2) sites are already discussed in our previous works for NpFeGa 5 (Ref. 48) and TbCoGa 5 (Ref. 49). For example, in the case of Q 0 or Q 1 as shown in Figs. 9(a) and 9(b), the In(2) sites magnetically split into two sites again from the differing local directions of H int , i.e., one is parallel to V XX and another is parallel to V Y Y . Of course, this is also inconsistent with the experimental observation. Therefore, a possible AFM structure for URhIn 5 consistent with items 1-3 requires either Q 2 or Q 3 with Ising-type moments along the c axis in view of the symmetry requirement, as shown in Figs. 9(c) and 9(d). For further identification via NMR, however, 103 Rh-NMR (I = 1 2 ) experiments will be necessary with external fields. If the local field on the Rh sites is transferred (or canceled), the AFM structure can be determined by which of the two possibilities is realized. Next, we roughly estimate the size of ordered moments assuming a similar hyperfine coupling constant to that in related UIn 3 . In UIn 3 , the hyperfine coupling A ⊥ is experimentally obtained as 54 kOe/μ B in the PM state. 43 In this case, A ⊥ is produced mainly by four nearest-neighbor U atoms, while the A ⊥ on the In(2) sites in URhIn 5 comes from two nearest neighbors. So, assuming half of A ⊥ , the size of the ordered moment in URhIn 5 can be roughly estimated to be ∼1 μ B /U from H int = 21.1 kOe on the In(2) sites. This value is quite reduced from the ∼3.6 μ B of the U 3+ or U 4+ free ion. This reduction of the ordered moment is consistent with item 5 in the experimental results. Regarding item 5, we also note that the resistivity as well as the susceptibility show a broad hump around T * with increasing temperature above T N , indicating development of c-f hybridization around T * . 35 Finally, the lattice properties of URhIn 5 can be examined to check consistency with the possible AFM structures. 
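For completeness, the moment estimate quoted above can be written out explicitly before the lattice properties are examined; it is a simple scaling of the UIn3 coupling constant by the number of nearest-neighbor U ions:

```latex
% Order-of-magnitude estimate as described above: two nearest U neighbors for In(2)
% in URhIn5 versus four in UIn3.
A_\perp^{\mathrm{URhIn_5}} \approx \tfrac{1}{2} A_\perp^{\mathrm{UIn_3}}
  = \tfrac{1}{2}\times 54\ \mathrm{kOe}/\mu_B = 27\ \mathrm{kOe}/\mu_B,
\qquad
\mu_{\mathrm{ord}} \approx \frac{H_{\mathrm{int}}}{A_\perp}
  = \frac{21.1\ \mathrm{kOe}}{27\ \mathrm{kOe}/\mu_B} \approx 0.8\,\mu_B \sim 1\,\mu_B .
```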
Above all, it should be noted that there is no compound having the same AFM structure of Q 2 = ( 1 2 , 1 2 ,0) with μ ord c among the antiferromagnets of the Ln115 (Ln = Ce, Nd, Tb, Dy, Ho) and An115 (An = U, Np) family, so far as we know. On the other hand, the same AFM structure of Q 3 = ( 1 2 , 1 2 , 1 2 ) with c-oriented moments has been found in the related compound UNiGa 5 . 23 Systematic neutron diffraction studies of the antiferromagnets UT Ga 5 (T = Ni, Pd, Pt) with c-directed ordered moments 50 reveal that a local tetragonal factor defined by t ≡ 1 − (2cz/a) can predict the stable AFM structure, where z is the positional parameter of the crystallographic In(2) sites (0, 1 2 ,z). The local tetragonal factor t represents the local compression of U-In cages along the c axis. The t for URhIn 5 determined by x-ray diffraction is ∼3.1% near T N , which is closer to the 1.7% seen in UNiGa 5 with Q 3 than the 5.4% seen in UPdGa 5 with a different Q 4 = (0,0, 1 2 ), and much smaller than the 7% seen in UPtGa 5 with Q 4 . Characteristically, the basal plane lattice constant a for URhIn 5 contracts below T N (Ref. 35) as seen in UNiGa 5 , while it is known to expand in UPtGa 5 . 50 Thus, the magnetic response of lattice in URhIn 5 may suggest a similar AFM structure to UNiGa 5 . V. SUMMARY We have performed NQR/NMR measurement in the zero external field for single crystals of the antiferromagnet URhIn 5 with T N = 98 K. The complete In-NQR spectra have been obtained. The NMR spectra below T N can be interpreted with no internal field on the In(1) sites and a finite internal field on the In(2) sites parallel to the local V Y Y axis of the EFG. The nuclear spin-lattice relaxation rates 1/T 1 indicate that the AFM state is driven by itinerant 5f electrons, which are hybridized with conduction electrons below T * ∼ 150 K. The difference in 1/T 1 between In(1) and In(2) sites indicates that the ordered moments have an Ising character along the c axis. A recovery of DOS well below T N is indicated by a gradual increase of (T 1 T ) −1 , which may be connected with Fermisurface properties of URhIn 5 . From our results and lattice properties, the AFM structure in URhIn 5 appears to be the same AFM structure found in UNiGa 5 . The most plausible AFM structure in URhIn 5 is Q 3 = ( 1 2 , 1 2 , 1 2 ) in Fig. 9(d). In order to completely identify this structure, the further 103 Rh-NMR experiment will be performed with external fields in the near future. A complementary neutron diffraction study will be necessary as well.
Drug-target binding affinity prediction method based on a deep graph neural network : The development of new drugs is a long and costly process, Computer-aided drug design reduces development costs while computationally shortening the new drug development cycle, in which DTA (Drug-Target binding Affinity) prediction is a key step to screen out potential drugs. With the development of deep learning, various types of deep learning models have achieved notable performance in a wide range of fields. Most current related studies focus on extracting the sequence features of molecules while ignoring the valuable structural information; they employ sequence data that represent only the elemental composition of molecules without considering the molecular structure maps that contain structural information. In this paper, we use graph neural networks to predict DTA based on corresponding graph data of drugs and proteins, and we achieve competitive performance on two benchmark datasets, Davis and KIBA. In particular, an MSE of 0.227 and CI of 0.895 were obtained on Davis, and an MSE of 0.127 and CI of 0.903 were obtained on KIBA. Introduction With the rapid development of machine learning in recent years, artificial intelligence has also been applied to various fields, most of the time achieving valuable results [1][2][3]. Computer-aided drug design is one of these areas and is of great interest. The high cost and time-consuming nature of drug development makes the research of new drugs extremely difficult [4]. Drug-target affinity (DTA) prediction is one of the important subtasks that helps to reduce the time-consuming pre-drug selection phase of drug development [5,6]. There are three main types of computational approaches for DTA prediction, including molecular docking [7], traditional machine learning [8][9][10], and deep learning [11][12][13]. Molecular docking is based on protein structure to explore the main binding modes of ligands when binding to proteins [14], but it requires the crystallized structure of proteins that are difficult to obtain, which also indirectly affects its final performance [15]. Traditional machine learning, on the other hand, uses onedimensional sequence representations in drug and protein sequences to train neural networks; however, these models represent drugs as strings. Such representations can reflect the atomic composition of molecules and the chemical bonds between atoms, but they cannot retain the structural information of molecules [16]. The structural information of the molecule, in turn, affects its chemical properties, which may impair the predictive power of the model as well as the functional relevance of the learned potential space. Deep learning models that have been widely used in recent years also perform well in the DTA prediction task, and learning using deep molecular modeling functions is gradually becoming more common because it can capture hidden information that is difficult to model by human experience. One of the models that may be most suitable for the DTA prediction task is the graph neural network (GNN). The GNN can directly process graph data that can preserve structural information, and this approach has already made research progress. GraphDTA [17] introduces GNN into the DTA prediction task by constructing a drug molecule graph and performs feature extraction for drug molecules based on the graph data, while protein molecules as organic macromolecules can still only use CNN to extract features. 
DGraphDTA [18] builds graph data for protein molecules based on protein structure prediction, which allows both molecules to be represented by a graph and enables the application of GNN to extract features. However, it only compares two models, GCN [19] and GAT [20], and cannot fully evaluate the performance of GNN in the DTA prediction task. Datasets The data for training generally contains the dataset of drug and protein molecules and the values of corresponding drug-target affinity. This research applies Davis [21] and KIBA (Kinase Inhibitor BioActivity) datasets for specific experiments, and the two datasets are typically used as benchmarks. The Davis dataset completely covered the binding affinity between all 68 drugs and 442 targets included, which was measured by Kd values (kinase dissociation constants). The KIBA dataset integrates information from IC50, Ki (inhibition constant) and Kd (dissociation constant) measurements into a single bioactivity score containing a bioactivity matrix of 2111 kinase inhibitors and 229 human kinases. The drug and protein molecule entities in the two datasets are shown in Table 1. The PDBBind [22,23] dataset is a comprehensive database of drug-target 3D structure binding affinities in the Protein Data Bank (PDB). The PDBBind dataset provides 3D coordinates of target proteins and ligand structures. We use the general set as the training set and the refined set of the PDBBind dataset as the test set. In order to have a stable training process, we only select samples from the general set with a protein sequence length less than 1000 amino acids. The general set is divided into a training set and a validation set with a ratio of 4:1. Finally, we have 8391 training samples, 1680 validation samples and 3940 test samples. To reduce random errors, we trained and tested the model for performance evaluation using a 5-fold training set and test set of the benchmark dataset, while introducing various methods and metrics for comparison. The drug molecules are represented in the sequence format called SMILES (Simplified Molecular Input Line Entry System) [24], and this form is a chemical notation language that uses the element symbols of atoms and chemical bonds between them to represent molecules. However, because the sequence data only retain the structural information of molecules, the performance may be worse if the original SMILES string is directly applied. In addition, graph data can store more 3D features by comparison. Therefore, in this method, the molecular graph is constructed by sequence to meet the requirement for the model. Specifically, the molecular graph is constructed according to the drug SMILES string, with atoms as nodes and bonds as edges. In order to ensure that the features of the nodes are fully considered in the graph convolution process, self-loops are also added to the graph construction to improve the feature performance of the drug molecules. Protein molecules are also processed into graph data similarly. The raw data in the datasets are also sequence strings. However, the above idea that processing drugs does not work while proteins are biological macromolecules, the protein graphs will be too large to satisfy model conditions if atoms are constructed as nodes and chemical bonds are designed as edges. Owing to the development of protein structure prediction, it is feasible to utilize predicted structural information to approximate the real 3D structure of proteins. 
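Before turning to the protein graphs in more detail, the drug-graph construction described above (atoms as nodes, bonds as edges, plus self-loops) can be made concrete with a minimal sketch. It assumes RDKit is available; the two node features used here are placeholders, not the feature set actually used in DGraphDTA:

```python
# Minimal sketch of building a drug molecule graph from a SMILES string.
# Assumes RDKit is installed; node features here are illustrative only.
from rdkit import Chem
import numpy as np

def smiles_to_graph(smiles: str):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Invalid SMILES: {smiles}")
    n = mol.GetNumAtoms()
    # Toy node features: atomic number and degree (the paper uses richer features).
    x = np.array([[a.GetAtomicNum(), a.GetDegree()] for a in mol.GetAtoms()],
                 dtype=np.float32)
    # Undirected edges from bonds, stored as two directed edges each.
    edges = []
    for b in mol.GetBonds():
        i, j = b.GetBeginAtomIdx(), b.GetEndAtomIdx()
        edges += [(i, j), (j, i)]
    # Self-loops so every node keeps its own features during convolution.
    edges += [(i, i) for i in range(n)]
    edge_index = np.array(edges, dtype=np.int64).T  # shape (2, num_edges)
    return x, edge_index
```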
In this paper, the Pconsc4 [25] method is used to predict the protein contact map [26], which serves as an approximation of the protein structure. The graph takes the residues of the protein as nodes, and an edge is created between a residue pair when the Euclidean distance between their Cβ atoms (Cα atoms for glycine) is below a specified threshold [26]. Because the graph is constructed with residues as nodes, features are selected around the residues, which exhibit different properties due to their R groups. These features include polarity, chargedness, aromaticity, etc. In this paper, we use PSSM [27] to represent the characteristics of the residue nodes. The selected node features of drug and protein molecules and the detailed data preprocessing measures are the same as those in DGraphDTA [18].

Method

Almost every drug development study focuses on how to deal with drug and target molecules, and the data are preprocessed, by applying other algorithms, into special forms that fit the chosen approach [28]. However, the models in these studies are relatively simple. DeepDTA uses only three CNN layers to extract molecular features, and DGraphDTA employs two GNN layers and two fully connected layers to extract node representations. With the aim of representing a large amount of knowledge and obtaining high accuracy, a model with high expressiveness requires a larger training set. To this extent, a model with more parameters and higher complexity is advantageous. On the other hand, an overly complex model may be difficult to train and may lead to unnecessary resource consumption [29]. So there is a need to balance the model complexity so that the model has high expressive power while unnecessary consumption is reduced.

CNNs and RNNs generally perform well in handling Euclidean data, but the structural information of molecular graphs cannot be expressed in Euclidean space. This is why CNN and RNN methods rarely achieve optimal results here. Therefore, most researchers apply graph neural networks to extract the features of graph data. Let G = (V, E) denote a simple, connected, undirected graph with n nodes and m edges. Let A be the adjacency matrix, D the diagonal degree matrix, and I_N an n-order identity matrix. Let Ã = A + I_N denote the adjacency matrix with self-loops added, and let D̃ denote the corresponding diagonal degree matrix. Subsequently, let X ∈ R^(n×d) denote the node feature matrix, where the ith row of the matrix is the d-dimensional feature vector of the ith node.

The GNN is a network designed to work directly with graph data and exploit its structural information, and there are now many variants available after years of development. The most well-known one is the GCN, which is widely used in graph data problems. For the GCN, each layer implements a convolution operation through Eq (1):

H^(l+1) = σ(D̃^(−1/2) Ã D̃^(−1/2) H^(l) W^(l)),   (1)

where H^(l) represents the lth layer of node embeddings with H^(0) = X, σ(·) is a nonlinear activation function, and W^(l) denotes a learnable parameter matrix. Qimai Li et al. [30] prove that graph convolution is essentially Laplacian smoothing [31]; however, repeated application of Laplacian smoothing may mix the features of vertices from different clusters and make them indistinguishable [32]. In the case of symmetric Laplacian smoothing, they will converge to values proportional to the square root of the vertex degree [33]. This is why a deeper GCN leads to a performance decrease.
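To make Eq (1) concrete, a single GCN propagation step can be written in a few lines of numpy; the ReLU activation is an illustrative choice, and a dense adjacency matrix is used purely for readability:

```python
import numpy as np

def gcn_layer(A: np.ndarray, H: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One GCN layer, Eq (1): H' = sigma(D~^-1/2 A~ D~^-1/2 H W).

    A: (n, n) adjacency matrix, H: (n, d_in) node embeddings, W: (d_in, d_out).
    """
    n = A.shape[0]
    A_tilde = A + np.eye(n)                      # add self-loops
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    H_next = D_inv_sqrt @ A_tilde @ D_inv_sqrt @ H @ W
    return np.maximum(H_next, 0.0)               # ReLU as the nonlinearity sigma
```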
However, we find that models that apply GCNs commonly set the number of GNN layers to 2 or 3, and consequently these models cannot extract high-order neighborhood information. This is because stacking more layers tends to degrade the performance of these models, a phenomenon called "oversmoothing" [30]. In other words, during the training of a graph neural network, as the number of layers and iterations increases, the hidden-layer representations of the nodes within the same connected component tend to converge to the same value. This results in the final node representations having no actual meaning when the network is deep. Even the addition of residual connections, an effective technique widely used in deep CNNs, merely mitigates the oversmoothing problem in GCNs [19].

The purpose of the GNN here is to extract a graph embedding of the entire molecular graph. However, if only two or three GCN layers are used, then for a given node only 2-hop or 3-hop neighbor information can be perceived, which is unfavorable when information about the whole graph is needed to construct the graph embedding [34]. Thus, a GCN of only a few layers can only be regarded as local information aggregation, and the obtained graph embedding is not sufficiently accurate to represent the graph.

Graph Diffusion Convolution (GDC) [35] replaces the multilayer convolution operation in the GCN with graph diffusion; the graph diffusion process is given by the generalized diffusion matrix

S = Σ_{k=0}^{∞} θ_k T^k,   (2)

with weighting coefficients θ_k and transition matrix T. The selection of θ_k and T must ensure that Eq (2) converges. One of the main options for the weighting coefficients θ_k follows Personalized PageRank [36]:

θ_k^PPR = α_PPR (1 − α_PPR)^k,   (3)

where α_PPR is the teleport probability, and the greater α_PPR is, the more information is diffused.

Simple Graph Convolution (SGC) [37] takes the view that the complexity GCNs inherit from neural networks is burdensome and unnecessary for less demanding applications. Thus, SGC reduces the additional complexity of GCNs by removing the nonlinearities between the GCN layers and collapsing the resulting function into a single linear transformation. Experiments show that the resulting model is comparable to GCNs while having higher computational efficiency and fewer fitted parameters. Simple spectral graph convolution (S2GC) [38] further analyzes the Markov Diffusion Kernel [39] and improves on the GCN to obtain the following iterative function:

Ŷ = softmax( (1/K) Σ_{k=1}^{K} ((1 − α) T̃_sym^k X + αX) Θ ),   (4)

where T̃_sym is the special case of T_sym with self-loop weight 1, i.e., D̃^(−1/2) Ã D̃^(−1/2), the parameter α ∈ [0,1] balances the self-information of each node against that of consecutively larger neighborhoods, and the value of K represents the receptive field size of the model. For example, each node can update its own feature representation based on information from its farthest 4-hop neighbors if K = 4. The above iterative formula is designed by S2GC for the node classification task, in which the GNN extracts node features, fully connected layers are subsequently added to fit the output channels, and the last layer of the model uses a softmax classifier. Therefore, when S2GC is applied in this paper, the softmax layer is removed and the iterative function retains only the part inside the softmax function.
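A compact sketch of the S2GC propagation in Eq (4), with the softmax classifier and the learnable weights omitted as described above; a dense adjacency matrix is used purely for readability:

```python
import numpy as np

def s2gc_features(A: np.ndarray, X: np.ndarray, K: int = 4, alpha: float = 0.05):
    """Simple Spectral Graph Convolution feature propagation (no softmax head).

    A: (n, n) adjacency matrix, X: (n, d) node features.
    K and alpha follow the settings quoted below (K = 4, alpha = 0.05).
    """
    n = A.shape[0]
    A_tilde = A + np.eye(n)                       # add self-loops
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    T_sym = D_inv_sqrt @ A_tilde @ D_inv_sqrt     # normalized transition matrix
    out = np.zeros_like(X, dtype=float)
    Tk_X = X.astype(float).copy()
    for _ in range(K):
        Tk_X = T_sym @ Tk_X                       # T_sym^k X for k = 1..K
        out += (1.0 - alpha) * Tk_X + alpha * X
    return out / K
```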
Figure 1 illustrates the flow of the process used in this paper, which consists of four main parts: the raw molecular sequence data, the processed drug molecule graph and the predicted protein contact map, feature extraction using graph models, and finally DTA prediction.

Experiment setting

In this paper, we replace the GCN layers of the DGraphDTA model with other graph neural networks and evaluate the impact of the GNNs on the final results. However, limited by computational resources, we first compare the parameter counts of various networks and identify algorithms with lower complexity for the subsequent experiments. The algorithms are built on the PyTorch [40] library, and PyTorch Geometric (PyG) [41] provides a variety of readily usable algorithms for graph data tasks to choose from. The first step is to compare the parameter counts of the graph models, because the amount of data is large and the models should not become too complex. Assume the input and output dimensions are m and n, respectively, and let k = m*n; the results of a partial comparison for the case of a single-layer network are shown in Table 2.

Table 2. Number of parameters of a single-layer model.
Model            Number of parameters
GCN [19]         k + n
GAT [20]         k + 3n
GraphConv [42]   2k + n
SAGE [43]        2k + n
SGC [37]         k + n

From Table 2, we can see that these models differ little in complexity. There are other models with complexity far beyond these, such as Molecular Fingerprints (MFConv) [44], whose number of parameters is up to 20 times k; it is obvious that using this kind of algorithm will reduce the training efficiency of the model. Like S2GC, SGC is not applied throughout the model when comparing the performance of the various algorithms. As mentioned before, S2GC uses the adjacency matrix of the graph to construct the transition matrix when computing T̃_sym, and processing the transition matrix consumes considerable time when applied to the DTA prediction task. Therefore, in this experiment we extracted the features of drug molecules using S2GC, while the SAGE algorithm was used for the feature extraction of protein molecules.

Training the model requires setting several parameters and tuning details within the model. One of these details is to adjust the proportion of drug features and protein features entering the final prediction layers, so as to adjust the influence of both molecules on the final prediction results. Since drug and protein molecules are encoded with 54- and 78-dimensional features, respectively, and their feature dimension ratio is approximately 2:3, the ratio of the feature dimensions produced by the graph neural networks for the two molecules is also set to 2:3. In addition, we set the value of K, the number of propagation iterations of the network, to 4 and the value of α to 0.05 in the S2GC model. Because S2GC requires processing adjacency and transition matrices for the graphs, and DTA prediction is a task involving many graphs, model training is time-consuming and cannot be repeated over a large number of different parameter settings; the various parameters therefore need to be set based on experience. A few important hyperparameters used in the model are as follows: a learning rate of 0.001, 2000 training iterations, feature dimensions of 54 and 78 for drug and protein molecules, respectively, and corresponding output dimensions of 112 (16*7) and 144 (16*9) for the two types of molecules in the feature extraction phase, where 112:144 approximates 54:78.
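To make the division of labor between the two branches concrete, the sketch below shows one way the 54/78-dimensional inputs and 112/144-dimensional graph embeddings quoted above could be wired together with PyTorch Geometric. The layer counts, the single SAGEConv protein layer, the linear drug projection (standing in for S2GC propagation applied beforehand), and the FC head sizes are all illustrative assumptions, not the authors' exact architecture:

```python
import torch
import torch.nn as nn
from torch_geometric.nn import SAGEConv, global_mean_pool

class DTAHead(nn.Module):
    """Illustrative two-branch DTA model.

    Drug branch: assumes drug node features have already been propagated
    (e.g., by the dense S2GC sketch above) and are pooled to a graph vector.
    Protein branch: one SAGEConv layer as a stand-in for the protein extractor.
    Dimensions (54/78 in, 112/144 out) follow the hyperparameters quoted above.
    """
    def __init__(self):
        super().__init__()
        self.drug_proj = nn.Linear(54, 112)
        self.prot_conv = SAGEConv(78, 144)
        self.fc = nn.Sequential(
            nn.Linear(112 + 144, 512), nn.ReLU(),
            nn.Linear(512, 1),                      # predicted affinity
        )

    def forward(self, drug_x, drug_batch, prot_x, prot_edge_index, prot_batch):
        d = global_mean_pool(torch.relu(self.drug_proj(drug_x)), drug_batch)
        p = global_mean_pool(torch.relu(self.prot_conv(prot_x, prot_edge_index)),
                             prot_batch)
        return self.fc(torch.cat([d, p], dim=1)).squeeze(-1)
```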
Metrics

The same metrics as in the benchmark studies are used, namely the concordance index (CI) [45] and the mean squared error (MSE) [46]. The CI is essentially an estimate of the probability that the predicted values are ordered consistently with the actual values; it measures whether the predicted affinities of two randomly chosen drug-target pairs are in the same order as their true values, and it is calculated by Eq (5):

CI = (1/Z) Σ_{α_x > α_y} h(b_x − b_y),   (5)

where b_x is the predicted value for the larger affinity α_x, b_y is the predicted value for the smaller affinity α_y, Z is a normalization constant, and h(x) is the step function

h(x) = 1 if x > 0, 0.5 if x = 0, 0 if x < 0.   (6)

MSE measures the difference between the predictions and the labels:

MSE = (1/n) Σ_{i=1}^{n} (p_i − y_i)²,   (7)

where p_i is the predicted value and y_i is the corresponding actual label; a smaller MSE means that the predicted values of the model are closer to the true values. In addition, another metric, the Pearson correlation coefficient, is also used in some articles for performance comparison and is calculated by Eq (8):

Pearson = cov(p, y) / (σ(p) σ(y)),   (8)

where cov is the covariance between the predicted values p and the true values y, and σ(·) is the standard deviation.

Experimental results

In this study, we introduced S2GC and SAGE to extract drug and protein features, respectively, and then used several FC layers to make the predictions. Figures 2 and 3 show the distributions of the labels versus the predicted values of the samples in the two datasets. Each plot contains a straight line with a slope of 1; the closer the sample points are to this line, the closer the DTA predicted by the model is to the true value, indicating better model performance. The distribution of sample points in these two plots also shows that our method is able to predict the DTA values accurately, with the vast majority of predicted values differing from the actual labels by 1 or less.

We conducted several experiments to predict DTA and report the performance of the models on two metrics, MSE and CI. Tables 3 and 4 show the MSE and CI values on the independent test sets of the two benchmark datasets, respectively. On the Davis dataset, our model has the best performance on both the MSE and CI metrics. GraphDTA, another model based on graph neural networks, also performs well, obtaining an MSE of 0.229, second only to the method in this paper. The rest of the models extract features mainly from sequence data; these models are weaker than the graph models in terms of performance because they do not handle structural information well. For the KIBA dataset, our model performs slightly worse than Affinity2Vec, obtaining only the second-best result. However, our model still obtained an MSE of 0.127 and a CI of 0.903, which are not far from Affinity2Vec's MSE of 0.124 and CI of 0.91. We argue that the main reason for this is the nature of the dataset: although this dataset is large, it does not contain all binding values, and known DTA values account for only 24.4% of the affinity matrix. Training on this dataset uses semi-supervised learning and is more suitable for Affinity2Vec's seq2seq [47] and ProtVec [48], which are two unsupervised, data-driven models. On the other hand, this may be due to the more complex method in Affinity2Vec: in terms of the model, it uses a multilayer GRU [49] and ProtVec to extract sequence features and XGBoost [50] for the final DTA prediction; in terms of features, it applies not only embedding features but also drug-target meta-path score features [51], as well as hybrid combinations of both.
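For reference, the metrics in Eqs. (5)-(8) can be computed directly; a small, unoptimized implementation is given below (the O(n²) CI loop is adequate at these dataset sizes):

```python
import numpy as np

def mse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean((y_pred - y_true) ** 2)

def concordance_index(y_true, y_pred):
    """CI over all pairs with distinct true affinities, per Eqs. (5)-(6) above."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    num, pairs = 0.0, 0
    n = len(y_true)
    for i in range(n):
        for j in range(n):
            if y_true[i] > y_true[j]:          # alpha_x > alpha_y
                pairs += 1
                diff = y_pred[i] - y_pred[j]   # b_x - b_y
                num += 1.0 if diff > 0 else (0.5 if diff == 0 else 0.0)
    return num / pairs if pairs else float("nan")

def pearson(y_true, y_pred):
    # Equivalent to cov(p, y) / (sigma(p) * sigma(y)) in Eq. (8).
    return np.corrcoef(np.asarray(y_pred, float), np.asarray(y_true, float))[0, 1]
```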
This fits the characteristics of the large data volume in the KIBA dataset, and there is enough data in KIBA to ensure a smooth fit of the Affinity2Vec training process. In other words, the model and complexity of Affinity2Vec are more suitable for training in a large dataset such as KIBA, which can also explain why the performance of Affinity2Vec on the Davis dataset is not optimal, i.e., due to the insufficient amount of data in Davis. As the PDBBind dataset, the method in this paper achieves 1.231, 0.756 and 0.683 on MSE, CI and Pearson indicators, respectively, which also has some performance improvement relative to some other methods. The detailed comparison is shown in Table 5. As the benchmark model for this paper, DGraphDTA obtained better results on the Davis and KIBA datasets, achieving 0.904, 0.202, and 0.867 for CI, MSE, and Pearson metrics on Davis, and 0.904, 0.126, 0.903 for CI, MSE, and Pearson metrics on KIBA, respectively. However, DGraphDTA only uses three layers of GCNs for feature extraction, and it cannot continue to increase the number of layers subsequently, and simply increasing GCNs can also lead to oversmoothing problems and may also degrade the model performance. The S2GC algorithm used in this paper can be set up to k = 16, which is the effect of 16 layers of GCN. However, due to the lack of computational resources, this paper can only use k = 4 for experiments, and the difference is not obvious compared to the three-layer GCN of DGraphDTA, which is why the performance of this paper's method decreases instead of rising compared to DGraphDTA. In general, GNN-based models commonly outperform other models in terms of performance with little difference in model complexity, and models that rely only on sequence information indeed struggle to outperform graph models in terms of feature extraction. Although other methods can stack models and features to obtain better results, graph models are certainly a more suitable class of architectures for the DTA prediction task. In addition, the performance of the same model may vary on different datasets. For the variation of molecular size and whether all DTA values are covered in both Davis and KIBA datasets, different model architectures can be subsequently set according to different datasets to meet the matching relationship between data volume and model complexity. Conclusions Predicting the strength of drug-target binding affinity is more informative and challenging than just classifying drug-target interactions. The prediction of DTA requires molecular graph data based on drugs and proteins. For such graph data, it is more appropriate to use graph neural networks instead of other neural network models to extract features. To solve the oversmoothing problem of the most widely used GCN when the model layers are deepened, we compared several networks used to alleviate the oversmoothing, from which we selected an S2GC model that performs well in the node classification task to extract molecular features. We also found that the graph neural network-based model outperformed the other algorithms with a small difference in model complexity, so we can continue to follow this idea in our subsequent work to explore the performance of graph neural networks in the DTA prediction task.
Studies on the Restriction of Murine Leukemia Viruses by Mouse APOBEC3 APOBEC3 proteins function to restrict the replication of retroviruses. One mechanism of this restriction is deamination of cytidines to uridines in (−) strand DNA, resulting in hypermutation of guanosines to adenosines in viral (+) strands. However, Moloney murine leukemia virus (MoMLV) is partially resistant to restriction by mouse APOBEC3 (mA3) and virtually completely resistant to mA3-induced hypermutation. In contrast, the sequences of MLV genomes that are in mouse DNA suggest that they were susceptible to mA3-induced deamination when they infected the mouse germline. We tested the possibility that sensitivity to mA3 restriction and to deamination resides in the viral gag gene. We generated a chimeric MLV in which the gag gene was from an endogenous MLV in the mouse germline, while the remainder of the viral genome was from MoMLV. This chimera was fully infectious but its response to mA3 was indistinguishable from that of MoMLV. Thus, the Gag protein does not seem to control the sensitivity of MLVs to mA3. We also found that MLVs inactivated by mA3 do not synthesize viral DNA upon infection; thus mA3 restriction of MLV occurs before or at reverse transcription. In contrast, HIV-1 restricted by mA3 and MLVs restricted by human APOBEC3G do synthesize DNA; these DNAs exhibit APOBEC3-induced hypermutation. Introduction Mammals have evolved a number of ''restriction'' factors that function to block infection by retroviruses and other pathogens. One of these is the APOBEC3 restriction system. The best-studied member of the APOBEC3 family is human APOBEC3G (hA3G). Briefly, hA3G protein is incorporated into HIV-1 particles produced by infected cells. When these virions infect new target cells, hA3G deaminates cytidines to uridines in minus-strand DNA (the initial product of reverse transcription); this results in replacement of guanosine with adenosine in the coding strand of proviral DNA. In turn, HIV-1 encodes a protein, ''Vif'', which binds to hA3G in the infected cell and brings it to the proteasome for degradation, thereby interfering with its inclusion in assembling progeny virions [1,2]. The high frequency of G to A (''G:A'') mutations is a major, but not the only, mechanism by which hA3G restricts HIV-1 [3,4,5]. Mice encode only a single APOBEC3 species (''mA3''). The overall architecture of mA3 is apparently ''reversed'' in mA3 relative to hA3G [6,7]. The natural expression of mA3 is known to function to limit the spread within infected mice of both murine leukemia viruses (MLVs) [8,9,10,11], which are gammaretroviruses, and mouse mammary tumor virus, a betaretrovirus [12,13,14]. In both of these viruses, mA3 exerts this restriction without inducing detectable G:A mutations. Thus, it is clear that mA3 can inhibit retrovirus infection by some mechanism other than cytidine deamination; this additional mechanism is not yet understood. On the other hand, some MLV isolates are sensitive to cytidine deamination by mA3 [15,16,17]; it is striking that the sequences of endogenous MLV genomes, present within normal mouse DNA, do contain G:A mutations, indicating that the MLVs that infected the mouse germline and gave rise to these endogenous virus genomes were sensitive to mA3-induced mutation when they infected the germline [18]. The effects of mA3 on Moloney MLV (MoMLV), which has a long history of passage and selection for robust replication in mice, are somewhat different from its effects on many MLVs [19,20]. 
By comparing the restriction of both MoMLV and Vifdeficient HIV-1 by both mA3 and hA3G, we found that MoMLV is partially resistant to inactivation by mA3, and that inactivation of MoMLV by mA3 does not involve G:A mutation. In contrast, mA3 induces high levels of G:A mutation in Vif-deficient HIV-1. The mechanism of the partial resistance of MoMLV is completely unknown, but it does not involve exclusion of mA3 from the virion, or indeed from the mature core within the virion [19,20]. As Gag is the most abundant protein in the virus particle and determines the structure of the particle, it seemed possible that some difference between the Gag proteins of MoMLV and those of endogenous MLVs might be responsible for the apparent difference in sensitivity of the viruses to mA3-induced mutation. We have tested this possibility in the present work. We created a chimeric MLV in which the gag gene of MoMLV was replaced by that of a polytropic endogenous MLV. This chimera is fully infectious, indicating that this ''fossil'' gag gene is fully functional. Somewhat surprisingly, the responses of this chimera to both mA3 and hA3G were qualitatively indistinguishable from those of MoMLV itself: thus Gag does not control sensitivity to restriction by these APOBEC3s. The data also show that mA3 blocks infection by MLVs before or at the initiation of reverse transcription. Creation of Chimeric MLV The sequences of the ''polytropic'' and ''modified polytropic'' endogenous MLVs show clear evidence of mA3 action during infection of the mouse germline [18]. We chose PMV19, a polytropic endogenous MLV in C57BL/6 DNA, as a representative endogenous MLV; while this genome contains G:A mutations, the Gag protein that it encodes has the consensus PMV amino-acid sequence. As described in Materials and Methods, the PMV19 gag gene was amplified from C57BL/6 DNA and cloned; the gag gene in an infectious MoMLV molecular clone was then precisely excised, from the AUG initiator codon to the UAG termination codon, and replaced with the PMV19 gag gene. Infectivity of Chimeric MLV To test the ability of the chimeric MLV genome to produce infectious MLV particles, we transfected this molecular clone, together with pBABE-Luc, an MLV-derived retroviral vector encoding luciferase [20], into 293T cells. Our MoMLV clone was used as a control in this experiment. Culture fluids were collected from the transfected cells. The level of MLV particles in the two samples was quantitated by assaying reverse transcriptase (RT) activity; the chimera was found to produce approximately the same amount of virus as MoMLV (data not shown). Cultures of 293 cells expressing the ecotropic MLV receptor, mCAT1, were then infected with these samples and assayed for luciferase activity. As shown in Fig. 1, the specific infectivity of the chimeric virus, as measured by the ratio of luciferase activity to RT activity, was virtually identical to that of the MoMLV control. Restriction of Chimeric MLV by mA3 and hA3G To test the susceptibility of the chimeric MLV to restriction by mA3 and hA3G, we co-transfected the chimeric MLV clone with pBABE-Luc and different doses of plasmids encoding the two APOBEC3s; again, the MoMLV clone was tested in parallel. As shown in Fig. 2, the chimeric MLV is inactivated by mA3, but appears to be far more sensitive to hA3G than to mA3; the curves showing the inactivation of the chimeric virus by both mA3 and hA3G are very similar to those for MoMLV itself. 
We also tested the incorporation of mA3 and hA3G proteins into chimeric MLV particles by immunoblotting. Samples of chimeric MLV and MoMLV, prepared by transfection together with 0, 3, or 10 mg of mA3 or hA3G plasmid, were analyzed; as both mA3 and hA3G are tagged with a hemagglutinin (HA) epitope, their levels can be compared in a single immunoblot using anti-HA antiserum. Profiles of the virion preparations, analyzed with a broadly reactive anti-CA antiserum, show that there were similar levels of virus in all of the samples (Fig. 3A). As shown in Fig. 3B, the two APOBEC3 proteins are packaged at similar levels in the two viruses; virus made in the presence of 10 mg of APOBEC3 plasmid contains more APOBEC3 protein than that made in 3 mg plasmid. Fig. 3A also shows that the APOBEC3 proteins interfere to some extent with the normal processing of Pr65 Gag during virus maturation. Effect of APOBEC3s upon Viral DNA Synthesis The point in the MLV replication cycle that is blocked by mA3 is not known. We tested the ability of MoMLV and the chimeric MLV, produced in the presence of mA3 or hA3G, to synthesize viral DNA upon infecting new cells. The viruses used in these assays were produced by transfection of 293T cells that had previously been stably transfected with pLXSH, an MLV-derived vector encoding the hygromycin phosphotransferase (hph) gene. 293T cells were transiently transfected with an infectious MoMLV or chimera proviral genome together with the reporter plasmid pBABE-Luc. 293T-mCAT1 cells were then infected with culture supernatants from the transfectants, and lysates of these cells were assayed for luciferase activity. Virions were assayed for RT activity following precipitation from the culture supernatants with polyethylene glycol. The graph shows the luciferase activity divided by the RT activity of the viruses, with the value for MoMLV set to 100%; thus the data represent the relative specific infectivities of the samples. doi:10.1371/journal.pone.0038190.g001 This vector is rescued into infectious particles by the viral proteins encoded by the MLV plasmids. The released virions were then used to infect new 293-mCAT cells and assayed for their ability to synthesize hph DNA. This protocol helps to eliminate the background representing DNA from plasmids used to produce the viruses. Viruses were produced by transient co-transfections of the pLXSH-bearing cells with plasmids containing viral genomes; the MLV-derived luciferase vector pBABE-Luc; and APOBEC3s. The viral populations produced by the transfected cells will include some particles with luciferase-vector genomes and some with pLXSH genomes (as well as some with MLV genomes). Infectivities were quantitated by the luciferase assay, as in Fig. 2 above, and the infected cells were lysed and assayed for hph DNA by real-time PCR. Results of these assays are shown in Fig. 4. With hA3G ( Fig. 4A and 4B), the loss of infectivity in both MLVs was far greater than the reduction in viral DNA synthesis. In contrast, mA3 inhibited viral DNA synthesis by both viruses to virtually the same extent as it inhibited infectivity ( Fig. 4C and 4D). Somewhat similar results with MoMLV were presented earlier [20]. We also assayed the cell lysates for minus-strand strong-stop DNA, the initial product of reverse transcription (Fig. 4, green lines). In all cases, the reduction in strong-stop DNA closely resembled the reduction in total DNA, as assessed by the hph values. 
As a control, we also tested the effect of the APOBEC3s on DNA synthesis by, as well as infectivity of, DVif HIV-1. As shown in Fig. 5, the two APOBEC3s were very similar in their effects on DVif HIV-1: in both cases, the reduction in infectivity far exceeded the reduction in DNA synthesis. Sequence Analysis of Viral DNAs Synthesized in Presence of APOBEC3s The results in Fig. 4 show that both MoMLV and the chimeric MLV have not entirely lost the ability to undergo reverse transcription when they infect fresh cells, despite the presence of mA3 or hA3G in the virions. We also performed sequence analysis of these DNAs to look for G:A mutations, which would be evidence of cytidine deamination in minus-strand DNA. A stretch of hph DNA was amplified from the cell lysates and PCR products were individually cloned and subjected to sequence analysis. As shown in Table 1, the chimera, like MoMLV [20], does not undergo a notable increase in the frequency of G:A mutations when it is inactivated by mA3. In contrast, hA3G increases this frequency .30-fold. The high level of G:A mutations seen in the assays of DNA produced by MLV particles containing hA3G (Table 1, 5 th to 7 th and 11 th and 12 th rows) shows that our experimental techniques are suitable for detection of these mutations. Discussion It has been previously reported that MoMLV, a laboratory strain of MLV previously subjected to extensive selection for replication in mice, is partially resistant to restriction by mA3 and fully resistant to G:A mutation induced by mA3 [19,20]. The mechanism by which mA3 restricts MoMLV without cytidine deamination is unknown, as is the mechanism by which MoMLV evades mA3 restriction: its resistance, unlike the mechanisms by which other retroviruses resist APOBEC3s, does not involve exclusion of mA3 from the assembling virion. The sequences of ''polytropic'' and ''modified polytropic'' endogenous MLVs show clear evidence of cytidine deamination by mA3 at the time that they were inserted into the mouse germline [18]. It seemed possible that the resistance of MoMLV to mA3 is attributable to its Gag protein, and that the sensitivity of endogenous MLVs could be traced to a difference between their Gag proteins and that of MoMLV. However, we found (Fig. 2) that a chimeric MLV, identical to MoMLV except that its gag gene was from a polytropic endogenous virus, showed the same responses to mA3 (and hA3G) as MoMLV. In future experiments, we will determine whether sensitivity to mA3 can be mapped to the pol gene of a sensitive MLV. Many MLVs also produce an alternative, Nterminally extended and glycosylated form of the Gag protein, called ''glyco-Gag'' [21,22]; we will also test whether, as recently suggested [23], this protein is involved in mA3 resistance. As the mechanism by which mA3 inactivates MLVs is not known, it was of interest to determine whether these viruses can undergo reverse transcription when they infect new host cells. We found that the extent to which MoMLV and the chimeric MLV lost infectivity under the influence of mA3 was very similar to the extent to which the viruses lost the ability to synthesize viral DNA (Fig. 4). Moreover, MLVs inactivated by mA3 are evidently unable to synthesize even minus strand strong-stop DNA, the initial product of reverse transcription (Fig. 4, green lines). Thus, mA3 interferes with infection by MLVs either before or at the initial stages of viral DNA synthesis. 
In contrast, restriction of MLVs by hA3G and of DVif HIV-1 by mA3 is not at the level of DNA synthesis, as the degree of viral inactivation in these systems is far greater than the inhibition of DNA synthesis ( Fig. 4A and 4B, Fig. 5). The viral DNAs synthesized in these cases are, however, characterized by very high levels of G:A mutation; thus cytidine deamination is presumably a major contributor to virus inactivation in these cases. Taken together, these results highlight the distinctive nature of restriction of MLVs by mA3. Construction of chimeric MLV To generate a chimeric MLV, we replaced the gag gene in our infectious clone of MoMLV [20] with the gag gene of the endogenous MLV PMV19 [18]. Using the specific primer flank-7 (59-GGCAGGAGCCAGGTGTAATGG-39) that anneals in the PMV19 flanking region and a reverse primer PMV19R1 (59-GGGGGGCTCCTGACCCTGACCTCCCTAGTCACC-39), that anneals at the end of PMV19 gag, the PMV19 gag sequence was isolated from C57BL/6 mouse DNA (Jackson Laboratory, Bar Harbor, ME). The chimeric DNA was created using the sequential PCR [24] approach. The specific PMV19 gag sequence and the MoMLV proviral genome were used as templates; the amplification primers contain 59 extensions that are homologous to a portion of the other target gene. Specifically, the sequences of these primers were 59- Cells and viruses Virus particles were produced by transient transfection of 293T cells, or 293T cells that had previously been stably transfected with pLXSH plasmid DNA, as previously described [20,25,26]. Plasmids expressing either hA3G or mA3 were a kind gift from Nathaniel Landau (New York University School of Medicine). The proteins were both tagged at their C termini with the hemagglutinin (HA) epitope [27]. The mA3 protein encoded by the plasmid Figure 5. Effect of APOBEC3s on DVif HIV-1 DNA synthesis. 293-mCAT1 cells were infected with DVif HIV-1. Infectivity and RT activity were assayed as described [25,29,30]. Twenty-four hours after infection, the cells were lysed and assayed by real-time PCR for Luciferase DNA (black and green lines) as described in Materials and Methods [32]. Specific infectivity is represented with red and blue lines. doi:10.1371/journal.pone.0038190.g005 used here is the isoform lacking exon 5. HIV-1 was prepared as previously described [20,25]. Luciferase activity assay 293 cells expressing mouse cationic amino acid transporter 1 (mCAT1, the receptor for ecotropic MLVs; a kind gift of J. Cunningham, Harvard Medical School) [28] were infected with the filtered culture supernatants. Forty-eight hours after infection, cell extracts were assayed for luciferase activity with the Luciferase Assay System (Promega, Madison, WI) as previously described [20,25]. Luciferase assays were performed in triplicate; the three values were, in general, within 10% of each other. Reverse Transcriptase (RT) Assay The samples were analyzed for RT activity as previously described [29,30] following concentration with polyethylene glycol (PEG) [30]. RT assays were performed in triplicate and the three values were, in general, within 10% of each other. ''Specific infectivity'' is the mean of the luciferase values divided by the mean of the RT values. Immunoblotting Virus particles were isolated from filtered culture fluids by centrifugation through 20% sucrose (w/w) in phosphate-buffered saline at 110,0006g for 1 hour at 4uC. The virus pellet was resuspended in 2xNuPAGE sample buffer (Invitrogen, Carlsbad, CA). 
Immunoblotting against MLV p30 CA was performed with rabbit polyclonal anti-MLV p30 CA antiserum and against HA-tagged APOBEC3 proteins with mouse anti-HA monoclonal antibody 16B12 (Covance, Princeton, New Jersey) as previously described [20]. Western Lightning Plus-ECL (Perkin Elmer, Waltham, MA) was used for detection. Analysis of viral DNA synthesis The ability of MLV-derived virus particles to perform DNA synthesis upon infecting new host cells in the presence or absence of APOBEC3 proteins was assayed as previously described [20,31,32,33]. Viruses were produced by transient transfection of 293T hygro cells [26] as mentioned above. The culture supernatants were treated after filtration with 10 U/ml of DNase I (Ambion, Austin, TX) and 4 mM MgCl2 for 1 h at 37°C to eliminate contaminating plasmid DNA from the virus before infecting 293-mCAT1 cells. An aliquot of the DNase-treated virus was inactivated by incubation at 68°C for 20 min and used as a control in the infection. Twenty-four hours after infection, the cells were lysed and the genomic DNA was extracted with the QIAamp DNA Mini Kit (Qiagen, Hilden, Germany). The genomic DNA was then treated with DpnI (New England Biolabs, Ipswich, MA) for 1 h prior to PCR amplification to further eliminate contaminating parental plasmid DNA. DNA copy numbers obtained with the heated virus were indistinguishable from those obtained by "infecting" 293-mCAT1 cells with culture fluid from mock-transfected 293T hygro cells, and were <10⁻⁴ of the values obtained in the infected cultures. All reactions were performed using a DNA Engine Opticon 2 instrument (MJ Instruments, now Bio-Rad, Hercules, CA). DNA copy numbers were measured in triplicate and the three values were, in general, within 20% of each other. In the plots of "DNA synthesis", the mean of the copy numbers is divided by the mean of the RT values. G to A hypermutation DNA was collected from 293-mCAT1 cells that had been infected with virus carrying the pLXSH vector as described above. The hph DNA was amplified from 100 ng of the genomic DNA with hph 2050F (5′-AAAGCCTGAACTCACCGCGACGTC-3′) and hph 3030R (5′-CACGAGTGCTGGGGCGTCGGTTTC-3′) primers using Taq polymerase (Invitrogen, Carlsbad, CA) for 35 cycles under the following PCR conditions: 95°C, 45 s; 67.5°C, 1 min; and 72°C, 1 min. The PCR products were cleaned up using G-50 columns (GE Healthcare, Buckinghamshire, United Kingdom). The 1-kb PCR product was then ligated into the pCR 2.1-TOPO TA vector (Invitrogen, Carlsbad, CA) and transformed into Top 10 cells following the manufacturer's conditions. Colonies were selected by growth on ampicillin-containing medium. In order to minimize repeated amplification and cloning of the same DNA, transformed bacteria were only grown for 30 min before plating. Selected colonies were grown in 1 ml Terrific Broth as described before [20]. The purified DNAs were sequenced with the M13R primer. Sequence data were analyzed for mutations by trimming all sequences to the same length using MEGA version 4 [34] and aligning them with ClustalW (EMBL-EBI, http://www.ebi.ac.uk/clustalw/).
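The tallying of G:A changes against the reference can be illustrated with a short script. This is only a hedged sketch of the kind of comparison described above, not the authors' pipeline: it assumes the trimmed, aligned, equal-length sequences have already been exported as plain strings, with the reference hph sequence held separately.

```python
def count_ga_mutations(reference: str, clones: list[str]) -> dict[str, int]:
    """Count positions where the reference has G but a cloned sequence has A."""
    counts = {}
    for i, clone in enumerate(clones):
        counts[f"clone_{i}"] = sum(
            1 for ref_base, base in zip(reference.upper(), clone.upper())
            if ref_base == "G" and base == "A"
        )
    return counts

# Toy example with short made-up sequences (real input would be the aligned hph clones):
ref = "ATGGGCGTAGGC"
clones = ["ATGGACGTAAGC", "ATGGGCGTAGGC"]
print(count_ga_mutations(ref, clones))   # {'clone_0': 2, 'clone_1': 0}
```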
4,557
2012-05-29T00:00:00.000
[ "Biology" ]
Heaving modes in the world oceans Part of climate changes on decadal time scales can be interpreted as the result of adiabatic motions associated with the adjustment of wind-driven circulation, i.e., the heaving of the isopycnal surfaces. Heat content changes in the ocean, including the hiatus of global surface temperature and other phenomena, can be interpreted in terms of heaving associated with the adjustment of wind-driven circulation induced by decadal variability of wind. A simple reduced gravity model is used to examine the consequence of adiabatic adjustment of the wind-driven circulation. Decadal changes in wind stress forcing can induce three-dimensional redistribution of warm water in the upper ocean. In particular, wind stress change can generate baroclinic modes of heat content anomaly in the vertical direction; in fact, changes in stratification observed in the ocean may be induced by wind stress changes in local or remote parts of the world oceans. Intensification of the equatorial easterly can induce cooling in the upper layer and warming in the subsurface layer. The combination of this kind of heat content anomaly with the general trend of warming of the whole water column under the increasing greenhouse effect may offer an explanation for the hiatus of global surface temperature and the accelerating subsurface warming over the past 10–15 years. Furthermore, the meridional transport of warm water in the upper ocean can lead to sizeable transient meridional overturning circulation, poleward heat flux and vertical heat flux. Thus, heaving plays a key role in the oceanic circulation and climate. Introduction The oceanic general circulation consists of two major components: the wind-driven circulation in the upper kilometer and the thermohaline circulation over the entire depth of the ocean. Thermohaline circulation is intimately related to the thermohaline forcing at the air-sea interface; the mechanical energy required for sustaining the thermohaline circulation comes from tidal dissipation, wind stress, and geothermal heat flux. However, wind-driven circulation is not directly linked to surface thermohaline forcing. In fact, wind-driven circulation in the ocean can be idealized as the adiabatic motion of sea water. Since motions in the upper ocean are mostly dominated by wind-driven circulation, many changes in the upper ocean can be understood in terms of the adiabatic motions associated with wind-driven circulation. Climate variability is intimately related to many critically important aspects of our society; thus, it is desirable to identify the sources of climate variability. However, climate variability is complicated because it involves many dynamical processes. These include the low-frequency variability in the atmosphere (such as solar insolation), wind stress, air-sea heat flux and freshwater flux, plus internal variability of the climate system itself. 
Many studies have focused on the internal variability of the atmosphere-ocean coupled system, such as the basin modes (Cessi and Paparella 2001; Cessi and Primeau 2001), the oceanic response to stochastic wind (Cessi and Louazel 2001) and the oceanic teleconnections (Cessi and Otheguy 2003). Johnson and Marshall (2004) explored the global teleconnections of meridional overturning anomalies. They concluded that meridional overturning circulation (MOC hereafter) anomalies on decadal and shorter time scales are confined to the hemisphere/basin where the perturbations originated. Thus, the simple ocean model used in our study may be able to capture the major part of the response induced by wind stress anomalies. The impact of thermohaline forcing anomalies on the thermohaline circulation has been studied by many investigators, e.g., Johnson and Marshall (2002a, b), Zhai and Sheldon (2012), Zhai et al. (2013); however, this is not our focus in this study. Due to the existence of the periodic channel and the relatively small radius of deformation, the dynamics of the Antarctic Circumpolar Current (ACC) is quite complicated. To date, a clear theory for the ACC remains a great challenge, e.g., Rintoul et al. (2001). This is still a research frontier; for example, the spin-up and adjustment of the ACC have recently been examined by Allison et al. (2011). In general, climate anomaly signals in the world oceans can be classified into three basic categories: warming (cooling), freshening (salinification), and heaving. Isopycnal motions subject to no heat or salinity exchange with the environment are called heaving; such motions are induced by the adjustment of wind-driven circulation, e.g., Bindoff and McDougall (1994), and they can be idealized as the movement of the main thermocline (called the thermocline hereafter). In particular, wind stress is one of the most variable components in the climate system. As wind forcing changes, the wind-driven circulation adjusts in response. Wind stress changes take place in many different parts of the world oceans, and the corresponding adjustments of the wind-driven circulation evolve with time in rather complicated ways. As a result, identifying individual adjustment processes and their sources from climate datasets for the world oceans can be a great challenge. Instead, we set a modest goal in this study: the examination of fundamental heaving modes in the world oceans. Recently, the hiatus in global warming has become a hotly debated issue related to global climate change. 
Over the past several decades, the global sea surface temperature (SST) has kept increasing; in addition, there is decadal variability of global ocean heat content, e.g., Levitus et al. (2005, 2009, 2012), Easterling and Wehner (2009), Katsman and van Oldenborgh (2011), Kaufmann et al. (2011), Guemas et al. (2013), Watanabe et al. (2013). However, ocean warming is far from uniform. For example, in the North Atlantic, the tropics/subtropics have warmed, but the subpolar ocean has cooled. These changes can be linked to the NAO, and they are also directly linked to the gyre-scale circulation changes and the MOC (Lozier et al. 2008, 2010). In particular, over the past 10-15 years, the change in global SST has more or less leveled off, i.e., there has been a hiatus in the global SST record; on the other hand, subsurface heat content keeps increasing, at a seemingly even higher rate, e.g., Meehl et al. (2011, 2013), Balmaseda et al. (2013), Kosaka and Xie (2013) and Chen and Tung (2014). As discussed in previous studies, the hiatus of global SST may be due to many mechanisms. For example, stratospheric water vapor and the aerosol layer may contribute, e.g., Solomon et al. (2010, 2011). However, our focus in this study is to explore the possible mechanism directly linked to oceanic circulation, in particular the linkage to the variability in wind-driven circulation. In fact, early studies suggested that this may be an important connection, e.g., McGregor et al. (2012), Jones et al. (2011), England et al. (2014). As will be explained in this study, the hiatus in the global SST record, in combination with the accelerating increase of subsurface heat content, may be explained in terms of the general trend of warming of the whole water column plus stratification changes induced by the adjustment of the wind-driven circulation in response to global decadal wind stress perturbations. In this study we will use idealized geometry and simple wind stress perturbations and focus on the dynamical consequence of large-scale adjustment of wind-driven gyres. One of the most important consequences of such adjustment is the basin-scale quasi-horizontal transport of water mass. We assume that such movements take place within a relatively short time, on the order of inter-annual and decadal time scales. Neglecting the contributions due to the surface thermohaline forcing and internal diapycnal diffusion, such movements can be idealized as adiabatic, and they are commonly called heaving. Heaving can induce changes in the basin-mean vertical stratification and the corresponding mean vertical heat content profile. For example, a change in wind-induced Ekman pumping can lead to a shifting of warm water in the upper ocean and thus baroclinic modes of heat content anomaly, as illustrated in Fig. 1. When the Ekman pumping rate is enhanced, the slope of the thermocline increases, leading to a three-dimensional redistribution of warm water in the basin (red curve and arrows in Fig. 1a). On the x-z plane, the anomalous circulation appears in the form of a zonal overturning cell rotating anticlockwise. In particular, this induces a westward shifting and a downward shifting of heat content, as depicted in Fig. 1b, c. It is readily seen that if the wind-induced Ekman pumping rate is weakened, an opposite process should take place. 
Similarly, if we look at the meridional plane, there is an anomalous MOC induced by the wind stress anomaly, and this should give rise to a meridional shifting of heat content and the associated poleward heat flux. Since our model is adiabatic, heat content in the whole basin has zero net gain or loss; thus, in the vertical direction the heat content anomaly must appear in the form of baroclinic modes. These baroclinic modes of heat content anomaly are entirely due to the adiabatic adjustment of wind-driven circulation in the ocean, i.e., heaving. The basic idea discussed above can be extended to a model ocean with multiple gyres. For an idealized two-hemisphere basin, there are several wind-driven gyres, including the subpolar gyres, the subtropical gyres, and the equatorial gyres; the fundamental structure of the thermocline in a quasi-steady state is sketched by black curves in Fig. 2. Assuming the amount of warm water in the upper ocean remains unchanged, during the adjustment of the wind-driven circulation warm water from one gyre is redistributed to other gyres in the model ocean. For example, if the equatorial easterly relaxes, the equatorial thermocline moves up in response. The upward movement of the equatorial thermocline leads to the transport of warm water in the upper ocean toward middle/high latitudes, red arrows in Fig. 2a. Neglecting the small change in sea level, the total water column height at each station is nearly constant. Hence, in compensation, cold water in the lower layer moves toward the Equator (blue arrows) to fill up the space left behind by the poleward transport of warm water. The movement of water in the upper and lower layers implies two very important physical processes. First, the poleward flow in the upper layer and the equatorward flow in the lower layer give rise to the anomalous MOC and poleward heat flux. Second, the stratification in the ocean is changed due to such exchange. Since the wind-driven circulation is considered as adiabatic, the total heat content for the whole ocean must be constant. Consequently, the basin-mean vertical heat content anomaly must appear in the form of baroclinic modes. (Fig. 2: Symmetric and asymmetric modes of heaving-induced motions in a two-hemisphere basin. Black arrows indicate the vertical movement of the thermocline induced by wind stress changes; black curves depict the undisturbed thermocline and red curves the disturbed thermocline; red arrows indicate the movement of warm water above the thermocline and blue arrows the movement of cold water below the thermocline.) As shown in Fig. 1, in a single-gyre basin heaving motions can induce a first baroclinic mode; for a two-hemisphere basin, the situation becomes more complex, and the second or even higher baroclinic modes can be generated as well. The second example is for the case with a wind stress forcing change in the subtropical basin of the Northern Hemisphere, Fig. 2b. If Ekman pumping in the subtropical basin is reduced, the thermocline in the subtropical basin shoals; this leads to warm water transport toward both higher and lower latitudes, depicted by the red arrows. In compensation, cold water in the lower layer flows toward the subtropical basin of the Northern Hemisphere, depicted by the blue arrows. Similarly, if wind forcing in the subpolar basin in the Northern Hemisphere is enhanced, the cyclonic gyration is intensified, and the thermocline moves upward. 
The cyclonic gyre can no longer hold up the same amount of warm water; hence, warm water is transported from the northern subpolar basin southward to the rest of the basin. As a result, the thermocline in other parts of the basin deepens in response, Fig. 2c. In addition, wind stress in both hemispheres can change simultaneously. Such changes can be idealized in terms of the symmetric and asymmetric modes induced by symmetric and asymmetric wind stress perturbations, sketched in Fig. 2d, e. From the point of view of global water mass and heat distribution, one of the most important consequences associated with such basin-scale adjustment of wind-driven circulation is the meridional (vertical) redistribution of mass and heat, which can contribute a substantial portion of the variability of the meridional (vertical) transport of mass and heat. Many relevant and interesting phenomena will be explored in this study. The rest of this paper is organized as follows. In Sect. 2, the formulation of a two-hemisphere reduced gravity model is presented, and the results from a series of numerical experiments forced by idealized wind stress perturbations are discussed. In Sect. 3, the formulation of a Southern Hemisphere ocean model is presented and the results from a series of numerical experiments forced by idealized wind stress perturbations are discussed. Finally, we conclude in Sect. 4. Model set up With the focus on the wind-driven circulation mostly confined to above the thermocline, the density structure in the ocean can be idealized as a step function in density coordinates. This two-hemisphere (labeled as 2H hereafter) model ocean consists of two layers: the upper (lower) layer has a constant density of ρ0 − Δρ (ρ0). The upper layer thickness is denoted as h, and the lower layer is infinitely deep and thus motionless. The model is based on the rigid-lid approximation. As such, the effect of a free surface ζ ≠ 0 is replaced by a non-constant hydrostatic pressure p = pa at the flat surface z = 0. The pressure in the layers beneath can be calculated using the hydrostatic relation. The pressure gradient for a one-and-a-half-layer model is ∇p = ρ0 g′ ∇h, where g′ = gΔρ/ρ0 is the reduced gravity. To a good approximation, the sea surface level is linked to the upper layer thickness through ζ ≈ (g′/g) h + constant. Thus, the sea level can be inferred from the thermocline depth, with an unknown constant determined by requiring that the basin-integrated sea surface level be zero, i.e., ∬ ζ dA = 0. Since the model has one active layer only, the time-dependent momentum and continuity equations for the active upper layer are ∂u/∂t + (u·∇)u + f k × u = −g′∇h + τ/(ρ0 h) − κu/h + Am ∇²u and ∂h/∂t + ∇·(hu) = 0, where τ = (τx, τy) are the zonal and meridional wind stress components, and (κ, Am) are the parameters for the vertical and horizontal momentum dissipation, set to κ = 0.0015 m/s and Am = 1.5 × 10⁴ m²/s. Note that an interfacial friction linearly proportional to the velocity shear is used, which can be interpreted as a crude parameterization of baroclinic instability. There are many different ways of parameterizing eddy effects. For example, Greatbatch (1998) and Greatbatch and Lamb (1990) postulated a more sophisticated formulation for parameterizing the vertical momentum dissipation associated with eddies, which is essentially equivalent to the now commonly used lateral mixing induced by meso-scale eddies of Gent and McWilliams (1990). 
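To make the model formulation concrete, the sketch below shows how a 1.5-layer reduced gravity model of this type can be stepped forward in time. It is only an illustrative implementation of the equations as written above, not the authors' code: the forward-Euler stepping, the crude centered differences, the boundary treatment and the wind profile are all simplifications of the B-grid, leap-frog model described in the text.

```python
import numpy as np

# Minimal sketch of a 1.5-layer (reduced gravity) model on an equatorial beta-plane.
nx, ny = 152, 142            # grid points (as quoted for the 2H model)
dx = dy = 110.0e3            # ~1-degree grid spacing, m
dt = 3153.6                  # time step, s
g_prime = 0.015              # reduced gravity, m/s^2
rho0 = 1035.0                # reference density, kg/m^3
kappa = 0.0015               # interfacial friction, m/s
Am = 1.5e4                   # horizontal viscosity, m^2/s
beta = 2.2367e-11            # df/dy, 1/(m s)

y = (np.arange(ny) - ny // 2) * dy                    # meridional coordinate, Equator near y = 0
f = beta * y[None, :]                                  # Coriolis parameter f = beta * y
h = np.full((nx, ny), 350.0)                           # upper-layer thickness, m (state of rest)
u = np.zeros((nx, ny))
v = np.zeros((nx, ny))
taux = -0.05 * np.cos(np.pi * y / y.max())[None, :]    # illustrative zonal wind stress, N/m^2

def ddx(a):
    """Centered x-derivative, zero at the walls (crude closed-boundary treatment)."""
    out = np.zeros_like(a)
    out[1:-1, :] = (a[2:, :] - a[:-2, :]) / (2 * dx)
    return out

def ddy(a):
    out = np.zeros_like(a)
    out[:, 1:-1] = (a[:, 2:] - a[:, :-2]) / (2 * dy)
    return out

def lap(a):
    out = np.zeros_like(a)
    out[1:-1, 1:-1] = ((a[2:, 1:-1] - 2 * a[1:-1, 1:-1] + a[:-2, 1:-1]) / dx**2
                       + (a[1:-1, 2:] - 2 * a[1:-1, 1:-1] + a[1:-1, :-2]) / dy**2)
    return out

for step in range(1000):
    # Momentum equations: advection, Coriolis, pressure gradient, wind stress,
    # interfacial friction and lateral viscosity, as in the formulation above.
    du = (-u * ddx(u) - v * ddy(u) + f * v
          - g_prime * ddx(h) + taux / (rho0 * h) - kappa * u / h + Am * lap(u))
    dv = (-u * ddx(v) - v * ddy(v) - f * u
          - g_prime * ddy(h) - kappa * v / h + Am * lap(v))
    dh = -(ddx(h * u) + ddy(h * v))        # continuity equation
    u += dt * du
    v += dt * dv
    h += dt * dh
    u[[0, -1], :] = v[[0, -1], :] = 0.0    # closed zonal boundaries
    u[:, [0, -1]] = v[:, [0, -1]] = 0.0    # closed meridional boundaries
```

A leap-frog implementation, as used in the paper, would carry the fields at two previous time levels instead of the single-level Euler update sketched here.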
The model ocean is formulated on an equatorial beta-plane, with the Coriolis parameter defined as a linear function of latitude, f = βy. The 2H model is 150° wide in the zonal direction, and extends from 70°S to 70°N. Following the common practice of non-eddy-resolving modeling, the model is based on the B-grid, with 152 × 142 grid points and 1° × 1° resolution, the grid size being set to 110 km. In this model we chose β = 2.2367 × 10⁻¹¹ /m/s, so that the Coriolis parameter on a grid point at 35°N equals f(35°N) = 8.36552 × 10⁻⁵ /s, corresponding to the Coriolis parameter at this latitude in spherical coordinates. In this study, we focus on the role of zonal wind stress only. The undisturbed zonal wind stress profile (in N/m²) applied to the 2H model is a prescribed function of the latitude θ. The numerical model is based on the traditional leap-frog scheme with a time step of 3153.6 s for both the momentum equations and the continuity equation. Details of the numerical model can be found in Huang (1987). To prevent the upper layer from outcropping in the subpolar basin, a relatively large amount of warm water is specified in the initial state of rest, corresponding to a mean layer depth of 350 m. The reduced gravity model used in this study makes use of observations. According to WOA09 data (Antonov et al. 2010), the annual mean potential density referred to the sea surface, averaged over the depth range 0-300 m for the global oceans, is estimated at σ0 = 25.96 kg/m³, while the corresponding value over 400-5500 m is estimated at σ0 = 27.66 kg/m³; thus, g′ ≈ 1.7 cm/s². Hence a rounded value of g′ = 1.5 cm/s² is used in this model. The annual mean potential temperature averaged over 0-300 m is 13.36 °C, and the corresponding value over 400-5500 m is 2.555 °C; thus the temperature difference between the upper and lower layers is 10.8 °C. Hence, in this model the temperature difference is set at 10 °C. The model was run for 300 years to reach a quasi-equilibrium reference state. This reference state is symmetric to the Equator, and it has three gyres in each hemisphere. The wind stress, the thermocline depth and the streamfunction of the reference state are shown in Fig. 3. To explain the vertical profile of heat content anomaly, we also plot a meridional profile of the thermocline depth 7 grid points (770 km) east of the western boundary, which can represent the layer depth maximum or minimum as a function of latitude, Fig. 3b. The relatively wide western boundary layer is due to the low horizontal resolution and large frictional parameters used in our model. In the subtropical basin, the maximum depth of the thermocline of the anticyclonic gyre is approximately 603 m, mimicking the situation in the North Pacific Ocean. The strength of the subtropical gyre is about 24 Sv (1 Sv = 10⁶ m³/s), somewhat weaker than in the North Pacific Ocean. This relatively weak gyre is due to the idealized wind stress profile used in the model. In the subpolar basin, there is a weak cyclonic gyre, giving rise to a dome-shaped thermocline. Our main focus is to explore the fundamental structure induced by decadal variation of small-amplitude wind stress perturbations; thus, the choice of the model parameters and wind stress profile should not qualitatively affect the main results of this study. Zonal wind stress variability in the world oceans Wind stress in the world oceans varies over a broad spectrum in space and time. 
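The parameter choices quoted above can be checked with a few lines of arithmetic. The sketch below simply recomputes the reduced gravity from the quoted WOA09 density values (g′ = gΔρ/ρ0) and the beta-plane Coriolis parameter; the exact value of g′ depends slightly on which reference density is used in the denominator, so the numbers are indicative only.

```python
g = 9.81                                   # m/s^2
sigma_upper, sigma_lower = 25.96, 27.66    # kg/m^3, WOA09 estimates quoted above
rho_ref = 1000.0 + sigma_lower
g_prime = g * (sigma_lower - sigma_upper) / rho_ref
print(f"g' ~ {g_prime * 100:.2f} cm/s^2")  # ~1.6-1.7 cm/s^2; the model rounds this to 1.5

beta = 2.2367e-11                          # 1/(m s)
y_grid = 34 * 110.0e3                      # grid distance (m) that reproduces the quoted f(35N)
print(f"f ~ {beta * y_grid:.5e} 1/s")      # ~8.366e-5 1/s, close to the quoted 8.36552e-5 1/s
```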
As an example, zonal wind stress in the central Pacific Ocean is shown in Fig. 4, taken from the GODAS data provided by the NOAA/OAR/ESRL PSD, Boulder, Colorado, USA, from their Web site at http://www.esrl.noaa.gov/psd/. As shown in Fig. 4b, c, on decadal time scales zonal wind stress in the central Pacific varies with an amplitude on the order of 0.015-0.03 N/m². Thus, it is reasonable to use decadal wind stress perturbations on the order of 0.015-0.02 N/m² in numerical experiments exploring the dynamical consequences of heaving. Numerical experiments From the reference state, we carried out a series of numerical experiments. In each experiment, the model was restarted from the quasi-equilibrium reference state and forced by wind stress with small perturbations in the form of a Gaussian profile, where Δτ = ±0.015 N/m² is the amplitude and Δy = 1100 km is the width. The wind stress profiles used in the numerical experiments are shown in Fig. 5. Note that because the amplitude of the wind stress anomaly is relatively small, perturbations to the wind-driven circulation are almost linearly proportional to the amplitude (including the sign) of the wind stress anomaly; thus, the corresponding results for wind stress perturbations with opposite signs can be inferred from the results presented in this study. In each experiment wind stress perturbations were linearly increased from 0 at t = 0 to the specified strength at the end of 20 years. Afterward, the wind stress perturbations were kept constant and the model run for an additional 20 years. Such numerical experiments may represent typical cases of variability induced by decadal wind stress perturbations. The pivotal case, Exp. 2H-A In this case the equatorial easterly was enhanced. The adjustment of the wind-driven circulation induces a three-dimensional redistribution of warm water in the model ocean. The time evolution of the volume anomaly and heat content anomaly is shown in Fig. 6. Due to the intensification of the equatorial easterly, the slope of the equatorial thermocline increases; the warm water above the thermocline is pushed toward the Equator from both hemispheres. Thus, the amount of warm water in the equatorial band increases with time, but it declines at middle/high latitudes in both hemispheres, Fig. 6a. At the end of the 40-year experiment, the meridional distribution of the volume anomaly is shown in Fig. 6b. A basic assumption made in the reduced gravity model is that the lower layer is infinitely deep; hence, the pressure gradient and velocity in this layer are negligible. In the ocean the poleward mass flux in the upper layer must be compensated by the equatorward return flow in the lower layer; thus, the adjustment of the wind-driven gyre in the upper layer should induce an anomalous MOC. The equivalent MOC rate can be diagnosed as M_moc(y, t) = ∫ from x_w to x_e of h(x, y, t) v(x, y, t) dx (10), where x_w and x_e are the western and eastern boundaries, and v is the meridional velocity. By convention, the MOC induced by a northward flow of warm water in the upper layer is defined as positive. As shown in Fig. 6c, the wind-driven circulation adjustment induces a pair of anomalous MOC cells asymmetric to the Equator, with a maximum rate of more than 0.3 Sv at year 20, when the wind stress perturbations reach their peak amplitude. Afterward, the anomalous MOC declines gradually. If the numerical experiment were run for a much longer time, the model ocean would gradually reach a new quasi-equilibrium state, in which the anomalous MOC vanishes. 
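A compact way to see how Eq. (10) is evaluated in practice is sketched below. The Gaussian perturbation is written in an assumed form, exp(−((y − y0)/Δy)²), since the text only quotes its amplitude and width; the MOC diagnostic itself is simply the zonal sum of h·v times the grid spacing, converted to Sverdrups.

```python
import numpy as np

dx = 110.0e3                                   # zonal grid spacing, m

def wind_perturbation(y, y0, dtau=0.015, dy=1100.0e3):
    """Assumed Gaussian zonal wind stress perturbation (N/m^2) centered at y0 (m)."""
    return dtau * np.exp(-((y - y0) / dy) ** 2)

def equivalent_moc_sv(h, v):
    """Eq. (10): zonal integral of h*v across the basin, returned in Sv."""
    return (h * v).sum(axis=0) * dx / 1.0e6    # axis 0 is longitude

# Toy illustration with synthetic fields on the (nx, ny) model grid:
nx, ny = 152, 142
h = np.full((nx, ny), 350.0)                                       # layer thickness, m
v = 1.0e-5 * np.sin(np.linspace(0, np.pi, ny))[None, :] * np.ones((nx, 1))  # synthetic v, m/s
print(equivalent_moc_sv(h, v).max(), "Sv")                         # ~0.06 Sv for this toy field
```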
Since the MOC varies so much during the adjustment, it is more meaningful to use the MOC rate averaged over the entire 40 years of the numerical experiment. The corresponding meridional profile of the mean MOC rate, with a maximum amplitude of 0.2 Sv, is shown in Fig. 6d. The MOC in the ocean is primarily associated with surface thermal forcing, in particular with the thermohaline circulation; in a steady state the wind-driven circulation in combination with surface heating/cooling can also contribute to the MOC. Recent studies revealed a close link between the MOC and surface thermohaline and wind forcing. For example, the NAO cycle plays a vital role in generating the variability of the MOC in the Atlantic Ocean, e.g., Lozier et al. (2010), Zhai et al. (2014). However, a major point in our study is that during the adiabatic adjustment of the wind-driven circulation an anomalous MOC appears which is not directly linked to the surface thermohaline forcing. Our numerical experiment indicates that, even taking the value averaged over the entire 40 years, the MOC associated with adiabatic adjustment of the wind-driven circulation may constitute a substantial portion of the variable MOC in the world oceans. The anomalous MOC inferred from the model also gives rise to a poleward heat flux, defined as PHF(y, t) = ρ0 Cp ΔT M_moc(y, t), where ρ0 = 1035 kg/m³ is the mean reference density, Cp = 4186 J/kg/°C is the mean heat capacity at constant pressure, and ΔT = 10 °C is the temperature difference between the two layers. The heat content anomaly is defined relative to the reference state in terms of ρ0 Cp [T(x, y, z, t) − T_ref(x, y, z)], where T(x, y, z, t) is the instantaneous temperature and T_ref(x, y, z) is the temperature in the reference state. The time evolution of the heat content anomaly is shown in Fig. 6e. The heat content anomaly at the end of the 40-year experiment is shown in Fig. 6f. In the reference state, the thermocline near the western boundary at low latitudes is the deepest, approximately 603 m, Fig. 3b. Hence, positive thermocline perturbations at low latitudes induce a positive anomaly of the basin-mean heat content at this depth range. On the other hand, the thermocline along the eastern boundary and at high latitudes shoals, leading to a negative heat content anomaly at shallow depth, shown in the upper part of Fig. 6f. Since these motions are adiabatic, the heat content anomaly must appear in the form of baroclinic modes. Taking the time difference of the heat content anomaly over the experiment gives the mean heat content anomaly rate; with T = 40 years, this gives the mean rate (per meter of depth) averaged over the 40-year experiment. The vertical integration of this time rate gives the vertical heat flux. As shown in Fig. 6g, for the present case the vertical heat flux is on the order of −8 TW (the negative sign indicates a downward shifting of heat content). Dividing by the total area of the model ocean leads to the vertical heat flux per unit area, on the order of 0.03 W/m². It should be emphasized that such a vertical heat flux is due to adiabatic adjustment of the water masses in the ocean; viewed in a one-dimensional potential temperature coordinate, there is no change at all. At the end of the experiment, the horizontal structure of the perturbations to the wind-driven circulation is shown in Fig. 7. The equatorial thermocline deepens, in particular 10° off the Equator and near the western boundary, with a maximum amplitude of 15 m, Fig. 7a; on the other hand, the thermocline shoals at high latitudes, with a maximum value of −9.8 m. The streamfunction anomaly appears as a pair of gyres asymmetric to the Equator, with a maximum amplitude of 2 Sv. 
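Under the two-layer interpretation used here, converting an equivalent MOC into a poleward heat flux is a one-line calculation. The sketch below assumes the relation PHF = ρ0·Cp·ΔT·M_moc written above and reproduces the order of magnitude of the numbers quoted for the experiments (for example, a 0.64 Sv cell with ΔT = 10 °C corresponds to roughly 28 TW).

```python
def poleward_heat_flux_tw(moc_sv, rho0=1035.0, cp=4186.0, dT=10.0):
    """Heat flux (TW) carried by an equivalent MOC (Sv), assuming a two-layer
    temperature contrast dT (degrees C) between the upper and lower layers."""
    return rho0 * cp * dT * (moc_sv * 1.0e6) / 1.0e12

print(poleward_heat_flux_tw(0.64))           # ~27.7 TW for a 0.64 Sv cell with dT = 10 C
print(poleward_heat_flux_tw(0.23, dT=7.5))   # ~7.5 TW for a 0.23 Sv cell with dT = 7.5 C
```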
The streamfunction of the anomalous circulation in the Northern Hemisphere is positive, i.e., it is an anomalous anticyclonic circulation. As a result, the original anticyclonic circulation, including its western boundary current, is intensified, Fig. 7b. Within the framework of the reduced gravity model, the corresponding sea level anomaly in the final state can be inferred from the upper layer depth anomaly Δh, Eq. (2). The sea level anomaly at the end of the experiment has the same pattern as the thermocline thickness anomaly. For the present case, a negative equatorial wind anomaly induces a positive sea level anomaly in the western part of the basin at low latitudes, and a negative sea level anomaly in the eastern part of the basin. Observations, e.g., Merrifield (2011), Qiu and Chen (2012), indicate that the sea level anomaly in the Pacific Ocean over the past 10-20 years has the same signs as those shown in Fig. 7c. Hence, the strong positive (negative) sea level anomaly in the western (eastern) North Pacific may be linked to the stronger-than-normal easterly in the Equatorial Pacific. However, if this strong anomalous easterly relaxes, the long-lived strong sea level anomaly in the Pacific Ocean may swing in the opposite direction. It is well known that the adjustment of wind-driven circulation in a closed basin is carried out through wave motions, including Rossby waves and Kelvin waves. In particular, Kelvin waves and long baroclinic Rossby waves play vital roles in establishing the circulation, e.g., Anderson and Gill (1975), Hsieh et al. (1983), Wajsowicz and Gill (1986), Wajsowicz (1986), Hsieh and Bryan (1996), Marshall and Johnson (2013). Furthermore, for simplified geometry, the adjustment of the global ocean has been studied by Huang et al. (2000), Primeau (2002), Cessi et al. (2004). In the present case, the wind forcing anomaly along the Equator and low latitude band induces a decline of the east-west slope of the thermocline at low latitudes. These signals reach the western boundary and form coastally trapped Kelvin waves, which move toward the Equator and then propagate eastward along the Equator. After reaching the eastern boundary, these waves reflect and bifurcate into poleward propagating waves. Although the waves move along the eastern boundary with a speed close to that of Kelvin waves, recent studies suggested that such waves should be interpreted in terms of long cyclonic Rossby waves, e.g., Marshall and Johnson (2013). On their poleward propagation path along the eastern boundary, these waves gradually shed their energy and mass, forming the westward baroclinic Rossby waves that carry the signals through the ocean interior. Since the dissipation along the eastern boundary is relatively low, the thermocline thickness perturbation is nearly constant along the entire eastern boundary, as shown at the right edge of Fig. 7a. Because the total amount of warm water in the upper layer is conserved, thermocline perturbations along the eastern boundary must be negative in order to compensate for the increase of warm water in the ocean interior at low latitudes; in the present case, the layer thickness anomaly along the eastern boundary is about −4.8 m, Fig. 7a. Due to the negative wind stress anomaly applied to the Equator and low latitude band, the thermocline slope anomaly in this region is positive. 
The sharp reduction of layer thickness perturbations near the western boundary indicates that the northward transport of the western boundary current at this latitude is enhanced. On the other hand, the thermocline anomaly is negative at high latitudes, Fig. 7a. Cases with wind stress perturbations at one latitudinal band We continue with the results from the first set of experiments, in which wind stress perturbations apply to a single latitudinal band, as shown in Fig. 5a-e. Exp. 2H-A has been discussed above; in Exp. 2H-B, a positive wind stress anomaly was applied to the Equator. Since the wind stress perturbations applied in these experiments have small amplitude, results from Exp. 2H-B are very close to those in Exp. 2H-A, but with opposite signs. For example, the volume anomaly is now negative for the equatorial band, and it is positive at middle and high latitudes, black curve in Fig. 8a. In Exp. 2H-C, a positive wind stress anomaly was applied to 20°N, reducing the Ekman pumping in the subtropical basin, and thus creating a negative volume anomaly in the latitudinal band from 15°N to 45°N; at other latitudes the volume anomaly is positive, blue curve in Fig. 8a. In Exp. 2H-D, a positive wind stress anomaly was applied to the 40°N band, creating a negative volume anomaly in the latitudinal band from 35°N to 60°N, red curve in Fig. 8a. Note that the volume anomaly created in this case is much larger than in the previous two cases. This will also lead to other, stronger anomalous features. In compensation, at other latitudes the volume anomaly is positive. In Exp. 2H-E, a positive wind stress anomaly was applied to the 60°N band, creating a negative volume anomaly north of 57°N. For the latitudinal band from 30°N to 56°N the volume anomaly is positive; however, below 30°N, the volume anomaly is negative again, magenta curve in Fig. 8a. The corresponding southward transport of warm water in the upper layer creates a southward MOC in Exp. 2H-C and 2H-D, with the maximum amplitude near the latitude of the wind stress anomaly maximum, i.e., 20°N and 40°N respectively, Fig. 8b. In Exp. 2H-D, the mean MOC maximum reaches a large value of −0.64 Sv. The MOC associated with the thermohaline circulation in such a two-hemisphere model basin is likely to be on the order of 10 Sv; thus, the amplitude of the perturbations in Exp. 2H-D is a few percent of the climatological mean MOC. In Exp. 2H-E, a positive wind stress anomaly was applied to 60°N, creating a relatively weak negative MOC, with its maximum value around 56°N; however, south of this negative MOC there is a large positive MOC occupying the rest of the basin, with a maximum value of 0.23 Sv. The anomalous MOC gives rise to a sizeable poleward heat flux in the basin, Fig. 8b. In particular, in Exp. 2H-D, the maximum southward heat flux is about 27.5 TW, which is a sizeable fraction of the heat flux variability for such a model basin. Another critically important aspect of the wind-driven circulation adjustment is the redistribution of warm water in the vertical direction. The heat content anomaly in the vertical direction in these cases appears in the form of first baroclinic modes, as shown in Fig. 8c. In Exp. 2H-B, the heat content anomaly is positive above the depth of 420 m, and it is negative below this depth. In Exp. 2H-C, the zero-crossing of the first baroclinic mode is moved downward to the depth of 470 m. The rate of warming in the upper ocean has been discussed in many recent publications, e.g., Lyman and Johnson (2008), Lyman et al. (2010), Abraham et al. 
(2013), Chen and Tung (2014). Due to the relatively sparse data coverage, the rate of warming remains uncertain, and for the following discussion we will use the mean rate reported in the comprehensive review by Abraham et al. (2013). According to their analysis, over the period 1970-2012, the planetary heat storage in the upper 700 m is 0.27 ± 0.04 W/m², equivalent to an increase of heat content in the global upper ocean of 1.9 × 10²³ J (or 27 × 10¹⁹ J/m) or a temperature change of 0.2 °C. This is a mean barotropic mode of warming, which can be used as a benchmark. It is readily seen that the magnitude of the baroclinic modes of heat content anomaly obtained from this set of experiments is about one third of the mean global warming rate inferred from observations. In Exp. 2H-D warm water at latitudes higher than 35°N is pushed toward lower latitudes, red curve in Fig. 8a. As a result, the thermocline at lower latitudes deepens, so that the heat content anomaly is positive below 390 m, but it is negative in the upper ocean, red curve in Fig. 8b. In Exp. 2H-E, the situation is the opposite, i.e., the heat content anomaly is negative below 380 m, but it is positive above this depth. Note that in terms of the vertical heat content anomaly, over most of the depth range the perturbations created in Exp. 2H-D are of opposite sign compared with the other three cases in this set of experiments. It is clear that the shape of the heat content anomaly created by wind stress perturbations is the result of the delicate competition between the zonal mean thermocline depth and the wind stress perturbations applied to the model ocean. The strong heat content anomaly in the vertical direction corresponds to an equivalent vertical heat flux. As shown in Fig. 8d, the vertical heat fluxes averaged over 40 years in Exp. 2H-B, 2H-C, and 2H-E are all positive, indicating an upward shifting of heat content; the vertical heat flux is on the order of 2-8 TW. On the other hand, the vertical heat flux in Exp. 2H-D is −11 TW (−0.04 W/m²), about 1/7 of the mean warming rate of 0.27 ± 0.04 W/m² estimated by Abraham et al. (2013). The slope of the thermocline is also changed. When the wind stress perturbations apply at 20°N (Exp. 2H-C), the thermocline depth along the eastern boundary is increased in compensation for the shoaling of the thermocline at middle latitudes, Fig. 9a; the slope of the thermocline is reduced at the Equator (black curve in Fig. 9d), 20°N and 40°N (blue and red curves in Fig. 9a). In particular, the slope is greatly reduced. However, the slope of the thermocline at 60°N is slightly enhanced in the eastern basin, Fig. 9a. The meridional mean of the layer depth anomaly is positive in the eastern basin, but it is negative in the western basin. This indicates that warm water is pushed from the western basin to the eastern basin. In Exp. 2H-D (wind perturbations apply at 40°N) the thermocline shoals greatly at 40°N. In compensation, the thermocline depth along the eastern boundary is increased, and the thermocline depth at the Equator and 20°N is slightly increased, Fig. 9b, e. The thermocline depth at 60°N is slightly reduced, Fig. 9b. In Exp. 2H-E (wind perturbations apply at 60°N) the thermocline at 40°N deepens. In compensation, the thermocline depth along the eastern boundary is reduced, and the thermocline depth at other latitudes is slightly reduced, except for a small region near the eastern boundary at 60°N, Fig. 9c. 
Thermocline perturbations in the Southern Hemisphere are quite different from those in the Northern Hemisphere, lower panels in Fig. 9. First, perturbations in this hemisphere are much weaker because this region is remote from the forcing. The thermocline depth along the eastern boundary is globally constant, and this sets up the thermocline perturbations in the Southern Hemisphere. Thus, when the wind perturbations apply to 20°N and 40°N, thermocline depth perturbations in the Southern Hemisphere are positive; on the other hand, when wind stress perturbations apply to 60°N, thermocline perturbations in the Southern Hemisphere are negative and small. Cases with symmetric wind stress perturbations In this set of experiments, wind stress perturbations symmetric to the Equator apply to two latitudinal bands, labeled as Exp. 2H-F, 2H-G, and 2H-H, shown in Fig. 10a. When wind stress perturbations apply to 60°N/60°S, the warm water volume anomaly is negative around the 65°N/65°S latitude band, positive around 45°N/45°S, and negative between 35°N and 35°S, red curve in Fig. 10a. The meridional transport of warm water induces a transient MOC and poleward heat flux, anti-symmetric to the Equator, Fig. 10b. The maximum anomalous MOC (0.48 Sv) and poleward heat flux (20 TW) appear in the case when wind stress perturbations apply to the 40°N/40°S latitude bands. The combination of wind stress perturbations at two latitude bands induces a baroclinic mode of heat content anomaly in the vertical direction, whose amplitude is larger than in the cases with a single wind perturbation band discussed above, Fig. 10c. These heat content anomalies correspond to relatively large vertical heat fluxes, Fig. 10d. As discussed above, wind stress perturbations also induce changes of the thermocline depth, Fig. 11. Since the wind stress anomaly is symmetric to the Equator, the response in both hemispheres is the same; hence only changes in the Northern Hemisphere are discussed here. In all three experiments, changes of the thermocline at latitudes 20°N and 40°N have features similar to those produced in Exp. 2H-C, 2H-D and 2H-E. On the other hand, changes along the equatorial band and 60°N are much larger than in Exp. 2H-C, 2H-D and 2H-E. As a result, the basin-mean zonal thermocline depth change is about double the size produced in Exp. 2H-C, 2H-D, and 2H-E. Cases with asymmetric wind stress perturbations In this set of experiments (Exp. 2H-I, 2H-J, and 2H-K in Fig. 5), wind stress perturbations asymmetric to the Equator were applied; the induced transient MOC and poleward heat flux are shown in Fig. 12b. These can make a sizeable contribution to climate variability on decadal time scales. Furthermore, the asymmetric wind perturbations can induce heat content anomalies in the form of second baroclinic modes, Fig. 12c. The heat content anomaly corresponds to a vertical heat flux; but under the asymmetric wind perturbations, the heat content anomaly and vertical heat flux are much weaker than in the cases with symmetric wind perturbations, Fig. 12d. The reason for the higher baroclinic modes is as follows. Due to the asymmetric wind perturbations, the thermocline in one hemisphere deepens, but shoals in the other hemisphere. For example, in Exp. 2H-I, the zonal wind stress is reduced along 20°S, leading to stronger Ekman pumping for the subtropical gyre in the Southern Hemisphere. With warm water transported from the Northern Hemisphere, the thermocline in the whole Southern Hemisphere deepens, with the maximum gain in the lower part of the water column, the black curve in Fig. 13a. 
On the other hand, around 20°N the zonal wind is enhanced, leading to weaker Ekman pumping for the subtropical gyre in the Northern Hemisphere. As a result, warm water is removed from the subtropical gyre in the Northern Hemisphere, and there is a negative heat content anomaly for the Northern Hemisphere, the blue curve in Fig. 13a. Due to the slight nonlinearity associated with the inverse of the reduced gravity, the gain and loss of the heat content profiles in the two hemispheres are not exactly antisymmetric, resulting in a relatively small residual heat content profile, the red curve in Fig. 13a. When the wind stress perturbations apply to higher latitudes, the heat content profiles in each hemisphere gradually change their shape. For example, in Exp. 2H-J the heat content profile in each hemisphere appears in the form of a baroclinic mode, but in Exp. 2H-I and 2H-K the heat content profile in each hemisphere has a single sign, i.e., there is no zero-crossing of the heat content profile in each hemisphere. The combination of the heat content anomalies in the two hemispheres leads to different shapes of the net heat content anomaly for the whole basin, as shown in Fig. 13. The layer depth anomaly at different latitudes is shown in Fig. 14. Because the wind stress perturbations are asymmetric to the Equator, the layer thickness anomaly is asymmetric too, and the basin-mean layer thickness anomaly along the Equator is zero; hence only the layer thickness anomaly in the Northern Hemisphere is shown in Fig. 14. In Exp. 2H-I, the layer thickness along 20°N and 40°N is greatly reduced because of the weakened Ekman pumping due to the positive zonal wind stress perturbations at low latitudes. The corresponding layer thickness anomaly along 20°S and 40°S (not shown in this figure) should have large positive values. At high latitudes (60°N), there is virtually no change in layer thickness. When zonal wind stress perturbations apply to 40°S/40°N (Exp. 2H-J), the layer thickness anomaly along 40°N becomes more negative, while the layer thickness anomaly along 20°N becomes slightly positive, depicted by the blue curve in Fig. 14b. There is now a small negative layer thickness anomaly along 60°N and near the eastern boundary. In Exp. 2H-K, the layer thickness anomaly is mostly confined to the latitude band around 40°N (40°S) due to the decline of the polar easterly and the weakening of the Ekman upwelling. A Southern Hemisphere model ocean Wind-driven circulation in the Southern Hemisphere has very special features because all sub-basins in the Southern Ocean are linked through the ACC. The existence of this periodic channel gives rise to some unique features of the heaving modes. As sketched in Fig. 15, in addition to the inter-gyre modes for a single basin discussed above, there are two new types of basic modes unique to the Southern Oceans, namely the annular modes and the inter-basin modes. First, we assume that zonal wind stress in the Southern Oceans changes along certain latitudinal bands. For example, if zonal wind stress over the ACC is intensified, the slope of the front in the ACC increases and the thermocline shoals. As a result, the mean depth of the thermocline in the ACC declines, depicted as the change from the solid line to the dashed line, Fig. 15b. Warm water in the ACC band is pushed toward low latitudes, leading to a slightly deeper thermocline in the three sub-basins, depicted by the dashed lines. A concrete example is as follows. We assume the zonal wind near the southern boundary of a model ocean is intensified, red curve in Fig. 16a. 
The front in the ACC moves northward and becomes steeper. The change of thermocline shape in the ACC pushes warm water in the upper ocean from the ACC toward lower latitudes. As a result, the warm water volume in the ACC declines, but it increases at middle and lower latitudes, red curve in Fig. 16b. As discussed above, the meridional movement of warm water in the upper layer implies that there must be a compensating return flow below the thermocline, and thus an equivalent MOC in the reduced gravity model. Note that the MOC is zero at the beginning and end of the transition associated with the wind stress perturbations, and it is non-zero only during the transient state of the reduced gravity model. Therefore, we will use the rate averaged over the whole transition period to evaluate the equivalent MOC during the adjustment, the red curve in Fig. 16c. As discussed above, the MOC induced by the wind stress change also carries an equivalent poleward heat flux, which also contributes to global climate change. Another important consequence of the wind-stress-induced adjustment is the change of the vertical stratification or the heat content profile. Due to the intensification of the westerly over the ACC, warm water is transported from the ACC to lower latitudes. Since the thermocline is deep at lower latitudes and shallow at high latitudes, such movement of warm water in the upper ocean induces a negative volume anomaly at shallow depth and a positive volume anomaly at deep levels, the red curve in Fig. 16d. The shifting of the heat content in the vertical direction implies a vertical heat flux, and the corresponding profile averaged over the adjustment is shown in Fig. 16e. If the zonal wind stress is weakened, an opposite process should take place, as depicted by blue curves in Fig. 16. Since the amplitude of the wind stress perturbations is relatively small, the anomalies produced have nearly the same patterns, but with opposite signs. The details of these solutions will be explained in the following sections. Second, we assume that zonal wind stress in individual sub-basins changes along certain latitudinal bands. For example, if zonal wind stress in the Pacific basin is intensified, the thermocline there deepens, moving from the solid line to the dashed line in the upper middle part of Fig. 15c. Assuming the total volume of warm water in the upper ocean remains unchanged, the thermocline in both the Indian and Atlantic basins moves upward in compensation, the dashed lines in the upper left and right parts of Fig. 15c. The thermocline in the ACC band may move slightly, but such change is excluded in this sketch. Model set up Using the same basic equations as in Sect. 2, a second model is formulated for an idealized Southern Hemisphere ocean (SH model hereafter). The model basin includes a 360°-wide channel subject to a periodic boundary condition, and it extends from 60°S to the Equator. The northern part of the model ocean is divided into three sub-basins, separated by three continents. The Indian and Atlantic basins are 60° wide, and the Pacific basin is 150° wide; each continent is 30° wide in longitude, Fig. 17. The southern part of the model ocean is occupied by a 15°-wide periodic channel, corresponding to the ACC. The model is also an equatorial beta-plane model, with β = 2.1 × 10⁻¹¹ /m/s; this gives a Coriolis parameter that matches the value in spherical coordinates at 45°S, the northern edge of the periodic channel. 
The other parameters of the model are set as follows: Am = 1.5 × 10⁴ m²/s, κ = 0.005 m/s. For numerical stability, a rather high interfacial friction parameter is used for the SH model. The same time step of 3153.6 s was used in the numerical experiments. To capture the deep thermocline in the Southern Oceans, a relatively large amount of warm water is specified in the initial state, corresponding to a mean layer depth of 750 m. The reduced gravity model used here makes use of observations. According to WOA09 data (Antonov et al. 2010), the annual mean potential density, averaged over 0-700 m for the global oceans, is estimated at σ0 = 26.61 kg/m³, while the corresponding value over 800-5500 m is estimated at σ0 = 27.72 kg/m³; thus, g′ ≈ 1.11 cm/s². Hence a rounded value of g′ = 1.0 cm/s² is used in this model. The annual mean potential temperature averaged over 0-700 m is 9.55 °C, and the corresponding value over 400-5500 m is 1.87 °C; thus the temperature difference between the upper and lower layers is 7.68 °C. Hence, in this model the temperature difference is set at 7.5 °C. The zonal wind stress profile (Fig. 17a) is taken from the zonal-mean zonal wind stress averaged over the 51 years of SODA 2.1.6 data (Carton and Giese 2008). The model was run for 300 years to reach a reference state. As will be shown shortly, due to the selection of parameters, the model ocean can reach a quasi-equilibrium state within 100 years. The thermocline depth and the streamfunction of the reference state are shown in Fig. 17. This reference state has three subtropical gyres north of the periodic channel. The thermocline thickness is nearly constant along all the eastern boundaries of the model ocean. Along the southern edge of the model ocean the thermocline shoals to a depth of less than 200 m. The thermocline is the deepest in the Pacific basin, reaching 956 m, red curve in Fig. 17b. The subtropical gyres in the three sub-basins are quite weak, with maximum transports of 11 Sv (Indian basin), 16 Sv (Pacific basin) and 11 Sv (Atlantic basin); the transport of the modelled ACC is 26 Sv, Fig. 17d. It is clear that these values are much smaller than the corresponding values in the world oceans. For example, diagnosis based on climatological hydrographic data indicates that the thermocline depth is the deepest in the South Indian Ocean; on the other hand, the transport of the ACC is on the order of 100 Sv. Such large differences between our model ocean and the world oceans are due to the fact that the model is formulated for a rather idealized geometry with uniform reduced gravity and forced by a simple zonal wind stress that is zonally constant over the whole Southern Oceans. Since the goal of this study is to explore the fundamental structure of the heaving modes, the difference in thermocline depth between the model and observations should not qualitatively affect the basic results of our study. From this reference state, we carried out a series of numerical experiments. In the first set of experiments, the zonal wind stress was perturbed along certain latitudinal bands; such wind stress perturbations were chosen to explore the annular modes sketched in Fig. 15b. In the second set of experiments, the zonal wind stress perturbations were confined to individual basins; such a choice of wind stress perturbations aims to explore the inter-basin modes depicted in Fig. 15c. Exp. 
SH-A, SH-B and SH-C In this set of experiments the model was restarted from the reference state and forced by additional small positive perturbations in zonal wind stress, as defined in Eq. (9). The wind stress perturbations were linearly increased from 0 at t = 0 to the specified strength at the end of 20 years. Afterward, the wind stress perturbations were kept constant and the model run for an additional 100 years. The reason for running the experiments for 120 years is as follows. Adjustment of the wind-driven circulation is carried out by wave-like perturbations, mostly the baroclinic Rossby waves, which move quite slowly at high latitudes, typically on the order of a few centimeters per second. Due to the large interfacial friction imposed in the model, the adjustment time scale of the wind-driven circulation in our model ocean is primarily determined by the high-latitude basin-crossing time of the first baroclinic Rossby waves. The theoretical speed for these long baroclinic Rossby waves is c = βg′H/f² (15). The southern tips of the continents in the model ocean are at 45°S, where the Coriolis parameter and beta in our beta-plane model are f = 1.0395 × 10⁻⁴ /s and β = 2.1 × 10⁻¹¹ /m/s. Since g′ = 0.01 m/s², assuming H = 800 m gives a typical wave speed of 0.0168 m/s at high latitudes. For a beta-plane model ocean of 360 grid points with a zonal grid size of 110 km, the corresponding time for the first baroclinic Rossby waves to travel through the southern tip of the South Pacific basin in the model is about 31 years, and the corresponding time for the waves to travel around the whole Southern Oceans in the model is around 75 years. Thus, running the model for an additional 100 years after the wind stress perturbations reach their final amplitude is long enough for the circulation to reach a quasi-equilibrium state. We begin with Exp. SH-A, the case of a weakened equatorial easterly. As shown in Fig. 18a, due to the weakening of the equatorial easterly, the zonal-mean equatorial thermocline moves upward and the warm water above the equatorial thermocline is pushed poleward. Thus, the warm water volume in the equatorial band declines with time; since the total amount of warm water in the model ocean is conserved, the warm water volume at middle/high latitudes increases. The adjustment of the wind-driven gyre in the upper layer implies an anomalous MOC. When wind stress perturbations apply to the equatorial band, the adjustment of the wind-driven circulation induces a negative MOC because warm water in the upper ocean is pushed southward. This anomalous MOC grows with time, and its peak value at year 20 is around 0.4 Sv; however, as soon as the wind perturbation is no longer increased, the MOC quickly diminishes, Fig. 18b. The adjustment of the wind-driven circulation also induces warm water redistribution in the vertical direction. In the reference state, the thermocline near the low-latitude western boundary is the deepest, reaching 956 m in the Pacific basin. Hence, a negative thermocline perturbation at low latitude bands induces a negative heat content anomaly at this depth range, but the heat content anomaly in the shallow layers is positive, Fig. 18c. When wind stress perturbations apply to the 30°S latitudinal band, the warm water volume anomaly south of 30°S is negative; warm water from this latitudinal band is pushed to lower latitudes, Fig. 18d. The movement of warm water induces a northward MOC, which reaches the peak value of 1 Sv at year 20. 
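The time-scale argument above can be reproduced directly from Eq. (15). The sketch below recomputes the long Rossby wave speed from the quoted parameters and the corresponding basin-crossing times; the results depend somewhat on the exact latitude at which f is evaluated, so they should be read as order-of-magnitude estimates.

```python
beta, g_prime, H, f = 2.1e-11, 0.01, 800.0, 1.0395e-4   # quoted model parameters
c = beta * g_prime * H / f**2                            # Eq. (15), m/s
year = 365.0 * 86400.0
pacific_width = 150 * 110.0e3                            # model Pacific basin width, m
southern_ocean = 360 * 110.0e3                           # full zonal extent of the channel, m
print(f"c ~ {c:.4f} m/s")                                          # ~0.016 m/s (text quotes 0.0168 m/s)
print(f"Pacific crossing ~ {pacific_width / c / year:.0f} yr")     # ~34 yr (cf. ~31 yr quoted)
print(f"Circum-channel transit ~ {southern_ocean / c / year:.0f} yr")  # ~81 yr (cf. ~75 yr quoted)
```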
However, as soon as the wind stress perturbations level off, the MOC diminishes and quickly drops to a quite low level, Fig. 18e. In fact, for the last 20 years, the MOC becomes negative, as indicated by the white area on the right-hand side of Fig. 18e. Since the amplitude of the negative MOC is so small, it cannot be shown in the contours of Fig. 18e; instead, we include a refined figure for this area of negative MOC in Fig. 19a. This figure indicates that for the last 30 years of the numerical experiment the MOC reverses to a rather small negative value, which indicates that the solution enters an oscillating mode. Oscillation modes in the world oceans have been discussed in many previous studies. For example, Cessi and Primeau (2001) explored the low-frequency modes in closed basins. Recent studies indicated that for the case with low friction, the oscillations associated with these eigen-modes may take a long time, on the order of multiple decades or even centuries, to decay, e.g., Allison et al. (2011), Jones et al. (2011) and Samelson (2011). However, due to the strong dissipation imposed by the interfacial friction and lateral friction in our model, these oscillations are strongly damped. In Exp. SH-B, the redistribution of warm water in the ocean leads to a downward shifting of warm water. As a result, warm water volume below 850 m increases, but it is reduced above this depth, Fig. 18f. In Exp. SH-C, the positive zonal wind stress perturbations apply to the 60°S latitude band, so the wind-induced Ekman pumping rate in the subtropical basins is greatly enhanced. Consequently, this induces a large positive warm water volume anomaly north of 50°S and a negative anomaly to the south, Fig. 18g. The movement of warm water induces a northward anomalous MOC, which reaches the peak value of 1.2 Sv at 50°S in year 20. However, as soon as the wind stress perturbations level off, the MOC diminishes and quickly drops to a quite low level, Fig. 18h. Similar to Exp. SH-B, for the last 20 years in Exp. SH-C the MOC becomes negative, as indicated by the white area on the right-hand side of Fig. 18h; a refined figure for this area of negative MOC is shown in Fig. 19b. The redistribution of warm water in the ocean again leads to a downward shifting of warm water. As a result, warm water volume below 850 m increases, but it is reduced above this depth. In particular, the decline in warm water volume is largest near the 200 m level, corresponding to the strong shoaling of the front in the modeled ACC, Fig. 18g, i. Since the solutions vary greatly with time, it is also meaningful to examine the final states of these three experiments at the end of the 120-year experiments. The meridional distribution of the volume anomaly is shown in Fig. 20a. In Exp. SH-A, the volume anomaly is negative north of 30°S, but it is positive south of 30°S. The meridional shifting of warm water implies a MOC. Although the MOC averaged over the 120-year experiments is much smaller than its peak value at year 20, it is still quite sizeable. For Exp. SH-A, the MOC minimum (averaged over the 120-year run) is −0.059 Sv, Fig. 20b. Due to the anomalous MOC, there is a poleward heat flux, with a mean value of nearly −1.8 TW averaged over the 120 years of the experiment. In comparison, the volume anomaly in Exp. SH-B is much larger and of opposite sign, blue curve in Fig. 20a. The MOC averaged over 120 years reaches its peak of 0.19 Sv around 30°S. This MOC also carries a sizeable poleward heat flux with a peak value of 5.7 TW, blue curve in Fig. 20b. In Exp.
SH-C, there is a negative volume anomaly in the periodic channel, giving rise to a MOC (maximum value of 0.23 Sv) and poleward heat flux (maximum value of 7.4 TW), red curve in Fig. 20b. The anomalous MOC and poleward heat flux diagnosed from these experiments are much smaller than the corresponding values inferred from long-term mean observations. For example, Talley (2013) put the estimate of the global overturning cell associated with bottom water formation in the world oceans at 29 Sv, with the associated poleward heat flux on the order of 0.1-0.2 PW (1 PW = 10¹⁵ W). Nevertheless, these transient MOC and poleward heat flux values suggest that adiabatic adjustment of the wind-driven circulation may contribute a substantial portion of the variability in the MOC and poleward heat flux diagnosed from observations or numerical simulations of the world oceans. The adjustment of the wind-driven circulation also induces a vertical redistribution of heat content. At the end of the 120-year experiments, the heat content anomaly appears in the form of first baroclinic modes, Fig. 20c. In Exp. SH-A, the basin mean heat content anomaly is positive above 850 m; it is negative below 860 m, and reaches the minimum of −5.1 × 10¹⁹ J/m at a depth of 910 m. In Exp. SH-B, the basin mean heat content anomaly has signs opposite to Exp. SH-A: it is negative above 800 m, positive below 820 m, and reaches the maximum of 13.5 × 10¹⁹ J/m at a depth of 870 m. In Exp. SH-C, the basin mean heat content anomaly pattern is similar to Exp. SH-B; however, the upper branch looks quite different: this negative branch reaches its minimum of −9.3 × 10¹⁹ J/m at the depth of 220 m; the heat content profile has a zero crossing at the depth of 610 m, and it reaches its maximum of 17.1 × 10¹⁹ J/m at a depth of 880 m. The basin-scale redistribution of heat content in the vertical direction implies a vertical heat flux, defined in Eq. (14). For Exp. SH-A, the vertical heat flux peak is 1.1 TW (0.005 W/m²); for Exp. SH-B, it is −2.6 TW (−0.012 W/m²); for SH-C, it is −4.9 TW (−0.024 W/m²), which is much stronger than in the previous two cases, Fig. 20d. The baroclinic mode structure shown in Fig. 20c is for the profile averaged over the whole model ocean, but the heat content profile in each sub-basin can be quite different, depending on the spatial distribution of the wind perturbations. Due to the assumption of adiabatic motions, the basin mean heat content anomaly must appear in the form of baroclinic modes. For each sub-basin, however, the net heat content anomaly is non-zero due to inter-basin shifting of warm water. Thus, the heat content anomaly in a sub-basin can contain a barotropic component. In Exp. SH-A, the heat content anomaly in both the Atlantic and Indian basins (blue and green curves in Fig. 21a) has a single positive lobe below 750 m; the heat content anomaly in the Pacific basin (black curve in Fig. 21a) is positive between 750 m and 830 m, but it is negative below 840 m (in Fig. 21, dashed black lines denote the mean warming rate in the upper ocean inferred from observations). The warm water from the deep part of the Pacific basin is pushed toward the ACC and piled up at shallower levels, red curve in Fig. 21a. Heat content anomaly in the ACC is zero above 160 m, and it is positive below, except below 910 m, where it has a rather small negative lobe (not visible in this figure).
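The vertical heat flux peaks above are quoted both as totals (TW) and per unit area (W/m²); assuming each pair refers to the same horizontal area, a minimal consistency check recovers the implied area of the model ocean.

```python
# Consistency check: total flux (TW) versus flux per unit area (W/m^2).
# The implied area should be roughly the same in all three experiments.
cases = {
    "SH-A": (1.1e12, 0.005),    # (total flux in W, flux density in W/m^2)
    "SH-B": (2.6e12, 0.012),
    "SH-C": (4.9e12, 0.024),
}

for name, (total_w, per_m2) in cases.items():
    area = total_w / per_m2
    print(f"{name}: implied ocean area ~ {area:.2e} m^2")
# All three give ~2e14 m^2, a plausible surface area for the idealized Southern Oceans domain.
```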
Note that above 750 m, the heat content anomaly in all sub-basins is zero; hence the global heat content anomaly profile (magenta curve) overlaps with the heat content anomaly profile in the ACC. Below this depth, the global heat content profile is dominated by the contribution from the Pacific basin (black curve). The heat content profiles in the individual sub-basins have quite different baroclinic structures in the present case. Thus, our results demonstrate that the heat content anomaly profile induced by adiabatic motions of the wind-driven circulation in the Southern Oceans can have a complex baroclinic structure. In Exp. SH-B, the heat content anomaly is quite different from Exp. SH-A. The maximum depth (>950 m) of the thermocline in the reference state is located near 30°S, where wind stress perturbations can induce a large positive heat content anomaly in all three sub-basins around the depth of 900 m. On the other hand, the heat content anomaly is mostly negative above 760 m, Fig. 21b. Thus, the baroclinic structure of the global heat content is in the form of a first baroclinic mode. If wind stress perturbations apply to the latitude band around 20°S, the global heat content anomaly appears in the form of a third baroclinic mode (figure not shown). In Exp. SH-C wind stress perturbations apply to the 60°S latitude band, where the thermocline is much shallower. The transport of warm water from high latitudes to lower latitudes creates a positive heat content anomaly below 800 m. Although the basin mean heat content anomaly is in the form of a first baroclinic mode, the heat content anomaly profiles in all three sub-basins have no zero-crossing, and they seem to be a combination of a barotropic mode and a second baroclinic mode; the heat content anomaly in the ACC, however, appears in a form close to a first baroclinic mode, Fig. 21c. The transported warm water originates mostly from the ACC at much shallower levels. Hence, the heat content anomaly above 630 m in the ACC is negative, with a peak near the 210 m level, indicating that the southern edge of the ACC loses a lot of warm water at shallow levels (red curve in Fig. 21c). The combination of heat content anomalies from these four sub-basins creates a first baroclinic mode with sharp peaks at the 200 m and 890 m levels, magenta curve in Fig. 21c. It is interesting to compare the amplitude of the baroclinic modes inferred from our simple model with observations. According to Abraham et al. (2013), over the period 1970-2012 the planetary heat storage in the upper 700 m is estimated at 27 × 10¹⁹ J/m. Accordingly, the baroclinic modes of heat content inferred from these experiments are smaller, but may be comparable with the mean warming rate inferred from observations. The corresponding time evolution of the volume anomaly in each sub-basin is shown in Fig. 22. It is well known that the adjustment of wind-driven circulation in a closed basin is carried out through wave motions, in particular the first baroclinic mode of Rossby waves. As discussed above, the time scale of adjustment for the model ocean is estimated at 75 years, and the volume anomaly ratio (instantaneous volume anomaly/final volume anomaly) gradually reaches the final value of 1 after 80-100 years of integration, lower panels of Fig. 22. Although the total amount of warm water at low/middle latitudes declines, the situation in each sub-basin is different. In Exp. SH-A, the Pacific basin loses warm water quickly, but the Atlantic basin actually gains warm water slowly.
The Indian basin also loses warm water during the first 20 years, but it starts to gain warm water afterward, ending up with more warm water at the end of the 120-year run, Fig. 22a. In this case, the wind forcing anomaly along the Equator and low latitudes leads to a decline of the east-west slope of the thermocline at low latitudes. These signals reach the western boundary and form coast-trapped Kelvin waves, which move toward the Equator and then propagate eastward along the Equator. After reaching the eastern boundary, these waves reflect and bifurcate into poleward-propagating waves. On their poleward propagation path along the eastern boundary, these waves gradually shed their energy and mass, forming westward Rossby waves which carry the signal through the ocean interior, e.g., Huang et al. (2000). In this experiment, the Pacific basin sets the pace of the adjustment because of its large size. Only after the adjustment of the Pacific basin is nearly complete do the final signals propagate downstream, i.e., eastward, leading to the completion of the adjustment in the Atlantic basin, and then finally the Indian basin. In fact, the warm water volume anomaly in the Indian basin reaches 95% of its final value (at year 120) only after 74.7 years. Note that the warm water volume anomaly in the Atlantic basin actually overshoots the final value reached at year 120, lower panel of Fig. 22a. The oscillations in the model solutions discussed here are similar to the oscillatory solutions discussed in previous studies (e.g., Cessi and Paparella 2001; Cessi et al. 2004). Due to the selection of parameters, however, these oscillations are strongly damped. In Exp. SH-B, all sub-basins gain warm water in the final state, except the ACC, which loses warm water. The corresponding time evolution of the volume anomaly is shown in Fig. 22b. The total volume of warm water in the Pacific basin actually overshoots and then turns back to the final value around year 80, lower panel of Fig. 22b. In Exp. SH-C, warm water in the upper ocean in the ACC band is pushed northward; thus, the volume anomaly in the Indian, Pacific and Atlantic basins increases, but it is greatly reduced in the ACC, Fig. 22c. Since the wind stress perturbations apply to rather high latitudes, the blockage by continents does not play an important role; thermocline adjustment in all basins is mostly synchronized and nearly completed within 60 years, without the delays in individual basins seen in the previous cases. Exp. SH-D, SH-E, SH-F, SH-G and SH-H In these experiments, the model was restarted from the reference state and forced by wind stress with small perturbations of the form τ_x′ = Δτ exp[−((x−x₀)/Δx)² − ((y−y₀)/Δy)²] (Eq. 16), where Δτ = 0.02 N/m² is the amplitude, Δx = 3300 km and Δy = 1100 km set the zonal and meridional widths, and (x₀, y₀) are the longitude and latitude of the center of the wind stress perturbations. The wind stress perturbations were linearly increased from 0 at t = 0 to the full scale of the specified strength at the end of 20 years. Afterward, the wind stress perturbations were kept constant and the model was run for an additional 100 years. The aim of these experiments is to explore the inter-basin modes associated with warm water adjustment due to wind stress perturbations applied to individual basins.
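Equation (16) defines the localized wind stress perturbation used in these experiments. The sketch below shows one way such a field could be constructed on a beta-plane grid; the grid and domain sizes are illustrative assumptions rather than the exact model configuration.

```python
import numpy as np

# Beta-plane Cartesian grid, roughly matching the setup described in the text:
# 360 zonal grid points with 110 km spacing; the meridional extent is an assumption.
dx_grid = 110e3
x = np.arange(360) * dx_grid              # m, zonal coordinate
y = np.arange(-70, 1) * 110e3             # m, meridional coordinate (~70S to the Equator)
X, Y = np.meshgrid(x, y)

# Parameters of Eq. (16)
dtau = 0.02                               # amplitude, N/m^2
Dx, Dy = 3300e3, 1100e3                   # zonal and meridional widths, m
x0, y0 = 60 / 360 * x[-1], 0.0            # centre roughly at 60E on the Equator (Exp. SH-D)

tau_x_prime = dtau * np.exp(-((X - x0) / Dx) ** 2 - ((Y - y0) / Dy) ** 2)
print(f"max perturbation: {tau_x_prime.max():.3f} N/m^2")   # ~0.02 near the centre
```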
Exp. SH-D and SH-E In this set of experiments, positive zonal wind stress perturbations apply to the equatorial band. In Exp. SH-D the wind stress perturbations apply to the Indian basin, (x₀, y₀) = (60°E, 0°S); in Exp. SH-E the wind stress perturbations apply to the Pacific basin, (x₀, y₀) = (180°E, 0°S). A similar experiment was carried out with zonal wind stress perturbations applied to the Atlantic basin; however, the results are rather similar to Exp. SH-D and are therefore not included here. These wind stress perturbations induce a warm water volume decline at low latitudes (with the peak near 10°S) and a warm water increase at high latitudes, Fig. 23a. The pattern of changes in the circulation is similar in both cases; however, when wind stress perturbations apply to the Pacific basin (Exp. SH-E), the changes in the circulation are much larger. The meridional transport of warm water implies an anomalous MOC and poleward heat flux. When wind stress perturbations apply to the Pacific basin, the amplitudes of the MOC (−0.017 Sv) and poleward heat flux (−0.52 TW) are much larger than when wind stress perturbations apply to the Indian basin (with corresponding values of −0.0073 Sv and −0.23 TW), Fig. 23b. In addition, the heat content anomaly in the vertical direction also changes in response. When wind perturbations apply to the Indian basin, the heat content anomaly appears in the form of a second baroclinic mode, black curve in Fig. 23c. However, when wind stress perturbations apply to the Pacific basin, the heat content anomaly appears in the form of a first baroclinic mode, but with much larger amplitude, which is clearly due to the fact that the Pacific basin is much larger than the Indian basin. As a result, the corresponding vertical heat flux in Exp. SH-E is also much larger than that in Exp. SH-D, Fig. 23d. Changes in the vertical stratification in each sub-basin differ between these two cases. When wind stress perturbations apply to a sub-basin, the thermocline there shoals, indicated by the large negative heat content anomaly in the corresponding sub-basin. In Exp. SH-D, the heat content anomaly in the Indian basin is positive above 800 m and negative over the depth range 800-940 m, Fig. 24a. On the other hand, in the Pacific and Atlantic basins it is positive over the depth range 750-950 m; in the ACC it is mostly positive over 170-900 m, with an extremely small negative lobe over 910-960 m (not visible in Fig. 24a). The contributions from all sub-basins give rise to a global heat content anomaly in the form of a second baroclinic mode, indicated by the magenta curve in Fig. 24a. On the other hand, when wind stress perturbations apply to the Pacific basin (Exp. SH-E), the warm water volume anomaly is much stronger (Fig. 24b), basically double the amplitude of the previous case. Since the wind stress perturbations apply to the Pacific basin, they induce a large negative volume anomaly in the Pacific basin over the depth range 850-960 m, with a small positive value over 750-840 m. In both the Indian and Atlantic basins, the heat content anomaly is non-negative, with positive values over the depth range 750-950 m. The heat content anomaly in the ACC has a positive lobe over 170-900 m (there is a very weak negative heat content anomaly over 910-940 m, not visible in Fig. 24b). The contributions from these four sub-basins generate a global heat content anomaly in the form of a first baroclinic mode, magenta curve in Fig. 24b. The time evolution of the warm water volume anomaly for these two cases is shown in Fig. 25. When wind stress perturbations apply to the Indian basin (Exp.
SH-D), the warm water in this basin is reduced (red curve in the upper panel of Fig. 25a) and pushed downstream to the other basins. With warm water arriving directly from the upstream basin, the adjustment in the Pacific basin is the fastest, nearly completed within 30 years. The adjustment in the Indian basin is completed second, and that of the Atlantic basin is the last because it is located at the end of the downstream direction. Before the thermocline in the Atlantic basin is close to its final state, the adjustment of the circulation and thermocline in the Indian basin cannot be completed, and this is part of the reason why the full adjustment takes so long. In Exp. SH-E, wind stress perturbations apply to the Pacific basin; thus, warm water in this basin is reduced (blue curve in the upper panel of Fig. 25b) and pushed downstream to the other basins. With warm water arriving directly from the upstream basin, the adjustment in the Atlantic basin is the fastest: it reaches its final value before year 30 and, after overshooting, approaches the final state first. The adjustment in the Pacific basin and the ACC is completed second. The adjustment of the Indian basin is the last because it is located at the end of the downstream direction. Exp. SH-F, SH-G and SH-H In this set of experiments, wind stress perturbations apply to the 30°S latitude band. In Exp. SH-F, the wind stress perturbations apply to the Indian basin, (x₀, y₀) = (60°E, 30°S). Changes in the circulation are opposite to those in Exp. SH-D and Exp. SH-E. First of all, warm water is now transported from high latitudes (south of 30°S) to low latitudes, Fig. 26a. Meridional transport of warm water in the reduced gravity model is equivalent to a MOC and poleward heat flux in the ocean. In this case, there is a northward MOC, Fig. 26b. Averaged over the 120 years of the model run, the mean MOC rate is 0.024 Sv, and the mean poleward heat flux is 0.75 TW. The meridional transport of warm water from middle/high latitudes creates baroclinic modes of heat content anomaly, Figs. 26c and 27. When wind perturbations apply to the Pacific basin, the global heat content anomaly appears in the form of a second baroclinic mode; when wind perturbations apply to either the Indian or Atlantic basins, the global heat content anomaly appears in the form of a first baroclinic mode, Fig. 26a, c. The amplitude of the heat content anomaly and the corresponding vertical heat flux for the case with wind perturbations applied to the Pacific basin are much larger than for the cases with wind perturbations applied to the Indian or Atlantic basins, Fig. 26d. When wind stress perturbations apply to the Indian basin, a negative heat content anomaly is created in this basin over the depth range 740-950 m (Fig. 27a), but its amplitude in the Indian basin is only half of that in Exp. SH-D, when wind stress perturbations apply at the same longitude but along the Equator. On the other hand, the heat content anomaly in the Pacific basin is positive over the depth range 750-950 m, and its amplitude is double that in Exp. SH-D. The heat content anomaly in the Atlantic basin is in the form of a first baroclinic mode, with a small negative value over the depth range 750-780 m and a relatively larger positive value over 790-950 m. The heat content anomaly in the ACC has a negative lobe at depths of 170-760 m; over 770-890 m it is positive; and over 910-950 m it has an extremely small negative value (not visible at the scale used in this figure).
The contributions from all sub-basins give rise to a global heat content profile in the form of a first baroclinic mode, magenta curve in Fig. 27a. The time evolution of warm water volume in these sub-basins is shown in Fig. 28a. Both the Indian basin and the ACC lose warm water, but the Pacific and Atlantic basins gain warm water. The adjustment in the ACC leads the process and completes first; the Indian basin is the last to complete. In Exp. SH-G, wind stress perturbations apply to the Pacific basin. The warm water volume anomaly north of 30°S is positive. Because the Pacific basin is much larger than the other basins, the induced anomalies are correspondingly larger. In the ACC the major part of the heat content signal is negative over the depth range 170-750 m, red curve in Fig. 27b. Over 760-900 m it is positive, and there is a very weak negative segment over 910-940 m (its magnitude is too small to be visible in Fig. 27b). The contributions from these four basins combine into a global heat content anomaly in the form of a second baroclinic mode, magenta curve in Fig. 27b. The time evolution of the volume anomaly is shown in Fig. 28b. Similar to Exp. SH-F, the adjustment in the ACC leads that in the whole model ocean; it is nearly completed by year 35. The adjustment of the Atlantic basin is the second, and that in the Indian basin is the third. It is interesting to note that the adjustment in the Pacific basin is the last one to finish. In fact, the volume anomaly in the Pacific basin initially has the opposite sign to its final value. After the wind stress perturbations reach their full amplitude at year 20 and increase no more, the volume anomaly in the Pacific basin evolves toward its final value; it enters the regime of values with the final sign only after year 32.2. In comparison, the volume anomaly ratio in the ACC crosses the 95% level at year 30.8. It is clear that the adjustment in the Pacific basin is completed only after the completion of the adjustment in all other basins. The dynamical detail of this adjustment remains unclear at this time and is left for further study. In Exp. SH-H, wind stress perturbations apply to the Atlantic basin. The results from this experiment are rather similar to Exp. SH-F, with wind stress perturbations applied to the Indian basin. As shown in Fig. 26, the red curves and the black curves are almost the same. In the vertical direction, the basic features of the heat content signals are quite similar to those in Exp. SH-F. Since the wind stress perturbations now apply to the Atlantic basin, the heat content anomaly there is negative below 740 m, green curve in Fig. 27c. The heat content anomaly in the Indian basin is positive and its amplitude is larger than that in the Pacific basin, because the Indian basin is directly downstream from the wind stress perturbations. The heat content anomaly in the ACC is quite similar to that in Exp. SH-F. Similar to Exp. SH-F, the contributions from individual basins give rise to a first baroclinic mode of global heat content anomaly, magenta curve in Fig. 27c. The time evolution of the volume anomaly in each basin is shown in Fig. 28c. The general features are somewhat similar to Exp. SH-F. Of course, in this case, the volume anomaly in the Atlantic basin and the ACC is negative. Furthermore, the amplitude of the volume anomaly is now much smaller than in Exp. SH-F. Discussion Using a simple reduced gravity model, we carried out several sets of numerical experiments.
These experiments demonstrated that adiabatic movement of warm water in the upper ocean induced by decadal wind stress perturbations can lead to many important dynamical consequences. One of the most important phenomena is the redistribution of warm water in three-dimensional space. In particular, heaving induced by the adjustment of the wind-driven circulation can lead to vertical heat content anomalies in the form of baroclinic modes. Since the structure of the modes depends on the two-dimensional shape of the thermocline and on the wind stress perturbations, these modes may appear in different forms. These baroclinic modes may be used to interpret the time evolution of heat content anomalies diagnosed from observations or computer-generated climate datasets. In particular, since the Southern Oceans are connected through the ACC, changes of stratification observed in a local region may be caused by changes of the local wind or by changes of wind stress in a remote area of the world oceans. Note that the heat content variability inferred from our model is at least one order of magnitude smaller than the mean warming rate inferred from observations. It is reasonable to expect that such a simple model may not be able to accurately reproduce the heat content variability; nevertheless, heat content variability induced by adiabatic motions in the ocean can have an amplitude that is not negligible when diagnosing the general long-term trend of climate warming or cooling. Our model showed that if negative zonal wind stress perturbations are imposed along the Equator, the corresponding heat content anomaly is a cooling in the upper ocean and a warming in the deep layers. A combination of such a baroclinic mode of heat content anomaly with the general trend of warming over the whole water column may be a cause of the hiatus of global SST and the accelerating warming in the deep layers reported in many recent studies. In addition, the heat content anomaly profile reproduced in Exp. 2H-B indicates that the deep layers may be cooled down due to adiabatic motions. Although there is a general trend of global warming, deep ocean cooling also occurs over certain time periods; such a phenomenon may seem odd, but our model results showed that it is dynamically quite conceivable. For climate study, another important consequence of water mass redistribution is the transient MOC, poleward heat flux and vertical heat flux. Results obtained from our simplified model indicate that they can reach quite non-negligible levels, and may constitute a sizeable component of the transient MOC and poleward heat flux diagnosed from in situ observations and numerical simulations of climate change. In particular, our results indicated that there is an equivalent vertical heat flux induced by adiabatic motions of water masses. The contribution to the vertical heat content anomaly and the associated vertical heat flux has not yet been thoroughly explored; thus, our analysis may stimulate further study in this direction. In this study we did not include the case with the upper layer outcropping; hence, the heat content anomaly discussed here is limited to the subsurface layers only. It is, however, a straightforward step to carry out experiments including upper layer outcropping. In fact, the numerical model used in this study is based on the so-called positive-definite scheme, which can handle the case with outcropping and volume conservation of the moving layer, as discussed by Huang (1987).
In our simple reduced gravity model, the hierarchy of stratification and the wave motions are greatly simplified. For example, the barotropic Rossby waves are excluded, and this may distort the adjustment process, especially on short time scales. Furthermore, representing the continuous stratification in the ocean with one moving layer of constant density is certainly an over-simplification. As such, the MOC, poleward/vertical heat flux and heat content profiles inferred from this study should be viewed as a first step in quantifying the corresponding components in the ocean. Our initial testing based on a two-moving-layer model suggested that the MOC and poleward heat flux inferred from a reduced gravity model tend to be exaggerated. To obtain a more accurate picture one should run models based on a more realistic formulation, including more density layers. Whether larger amplitude oscillations in the MOC, poleward heat flux and vertical heat flux exist in more realistic model simulations, and how to identify such anomalous fields, require further analysis of climate datasets generated from computer simulations. In addition, it is well known that the circulation in the ACC is closely linked to eddy motions. As such, results from numerical simulations are rather sensitive to the resolution of the numerical models. There is a well-known phenomenon, the so-called eddy saturation. For example, Hallberg and Gnanadesikan (2006) and Meredith and Hogg (2006) showed that in low-resolution models the transport of the ACC increases with the amplification of the zonal wind stress; however, for eddy-permitting models, the transport of the ACC reaches a plateau with increasing wind stress and is no longer enhanced even with a further increase of the zonal wind stress. Thus, results obtained from our simplified reduced gravity model are not expected to portray the dynamics of the Southern Oceans accurately. Nevertheless, we hope the results obtained from our simple model can shed light on the fundamental physics related to the change of stratification in the world oceans induced by heaving. Further studies based on eddy-resolving models should provide much more accurate information related to such important issues.
Material challenges for solar cells in the twenty-first century: directions in emerging technologies Abstract Photovoltaic generation has stepped up within the last decade from outsider status to one of the important contributors to the ongoing energy transition, with about 1.7% of world electricity provided by solar cells. Progress in materials and production processes has played an important part in this development. Yet, there are many challenges to overcome before photovoltaics can provide clean, abundant, and cheap energy. Here, we review this research direction, with a focus on the results obtained within a Japan-French cooperation program, NextPV, working on promising solar cell technologies. The cooperation was focused on efficient photovoltaic devices, such as multijunction, ultrathin, intermediate band, and hot-carrier solar cells, and on printable solar cell materials such as colloidal quantum dots. Introduction Material research has come a long way since the photovoltaic effect was discovered by Becquerel in 1839 [1]. It is only with the discovery of the photosensitivity of selenium, the first technological semiconductor, and the fabrication of wafers with about 1% conversion efficiency that the perspective of large-scale applications was first mentioned [2]. In 1954, the first silicon 'photocell for converting solar radiation into electrical power' was reported, with an efficiency of 6% [3]; this triggered the development of the technology, as the performance turned photovoltaics into one of the best contenders to power the nascent satellite industry. Achievements of mature technologies Within the last 60 years, in a context of depletion of oil deposits and increasing pressure from global warming, solar cells have emerged to offer a credible alternative to fossil fuels, providing large amounts of renewable energy at affordable prices. PV systems already provide 1.7% of the gross electricity production in the Organisation for Economic Co-operation and Development area [4], while their contribution was negligible (below 0.01%) in 1990. According to the International Energy Agency, photovoltaics (PV) is the energy technology with the fastest growth and should pass 300 GW of global installed capacity in 2017. This rapid evolution illustrates the complementarity of fundamental research, dedicated to reaching the highest performances, and industrial development, turning laboratory results into commercial systems. It is customary to distinguish three generations of solar cells: (1) Silicon-based solar cells (mono- and poly-crystalline silicon) constituted the first PV sector to emerge, taking advantage of the processing feedback and supply feedstock provided by the microelectronics industry [5]. Silicon-based solar cells cover over 80% of the world installed capacity today [6,7] and currently represent 90% of the market share [8]. (2) Thin-film solar cells based on CdTe, copper indium gallium selenide (CIGS), or amorphous silicon were developed as a cheaper alternative to crystalline silicon cells. They provide better mechanical properties, allowing for flexible uses at the cost of a lower efficiency. While the first generation of solar cells was essentially a case for microelectronics, the development of thin films involved new growth methods and opened the sector to other fields, such as electrochemistry.
(3) 'Third generation' solar cells (tandem, perovskite, dye-sensitized, organic, new concepts, …) account for a broad spectrum of concepts, ranging from low-cost low-efficiency systems (dye-sensitized, organic solar cells) to high-cost high-efficiency systems (III-V multijunction), with various purposes from building integration to space applications. Third-generation solar cells are sometimes referred to as 'emerging concepts' because of their low market penetration, although some of them have been investigated for over 25 years. The ability of any solar technology to address the issue of energy production relies on the balance between the investments required to produce and install a module (i.e. several cells assembled in an operable device) and the total energy provided by the module, which in turn depends on its lifetime and conversion efficiency. Research and development in solar technologies has led to remarkable improvements in all aspects, which we briefly present to describe the situation of the field. This review will focus on the achievements of the mature first- and second-generation technologies to introduce the challenges currently faced by new concepts. Production costs The more developed a technology is, the less expensive it becomes to increase the installed capacity by an additional watt [9]. With the largest production by far, silicon PV cells exhibit an impressive reduction of production expenses over the last decades. Recent studies [9] indeed estimate that each doubling of the installed capacity of mono- or polycrystalline-based photovoltaic systems leads to a reduction of 12% in the cumulative energy demand, a reduction of the selling price by 20%, and a reduction of greenhouse gas emissions by 20%. As a result, while one watt of peak power amounted to a production cost of 100 USD, 100 MJ, and 10 kg CO₂ in 1975, it is now estimated at 0.5 USD, 15 MJ, and 0.8 kg CO₂ [9,10], corresponding to an average energy payback time of about 2 years [11] under an insolation of 1700 kWh/m²/yr. In comparison with silicon-based solar cells, thin-film systems require less material per unit surface, avoid costly purification steps, and do not necessitate silver electric contacts. As a result, despite a much smaller production volume, thin-film systems compare favorably with silicon-based solar cells in terms of embedded energy and energy payback time [12], as well as in terms of cost. Lifetime This payback duration is to be compared to the lifetime of the module, which is notably limited by deterioration induced by heat and humidity. Despite initial concerns, thin-film systems appear to have lifetimes comparable to those of silicon panels, albeit generally lower. A typical degradation rate for the efficiency of both technologies is estimated at −0.5% to −1% per year [13], corresponding to a lifetime of 25-40 years before the nominal efficiency drops by 20%. This long lifespan enables solar technologies to produce, over their period of use, around 14-20 times the energy invested in the production of the device [11,12]. Efficiency The most symbolic indicator used to assess and compare PV technologies is certainly the conversion efficiency, expressing the ratio between solar energy input and electrical energy output. Efficiency aggregates many constituting parameters of the system, such as the short-circuit current, the open-circuit voltage, and the fill factor, which in turn depend on the fundamental properties of the material as well as on manufacturing defects.
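Since the efficiency aggregates the short-circuit current, open-circuit voltage, and fill factor mentioned above, the relation can be written as η = Jsc·Voc·FF/Pin; a minimal sketch with illustrative (assumed, not measured) values follows.

```python
# Efficiency from the standard cell parameters: eta = (Jsc * Voc * FF) / P_in.
# Illustrative values, roughly in the range of a good crystalline-silicon cell.
J_sc = 42.0e-3   # short-circuit current density, A/cm^2
V_oc = 0.72      # open-circuit voltage, V
FF = 0.83        # fill factor, dimensionless
P_in = 100e-3    # standard incident power density (1000 W/m^2), W/cm^2

eta = J_sc * V_oc * FF / P_in
print(f"conversion efficiency ~ {eta*100:.1f} %")   # ~25 % with these assumed values
```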
For commercially available technologies, an upper bound for the efficiency is set by the celebrated Shockley-Queisser (SQ) limit [14], which accounts for the balance between photogeneration and radiative recombination of thermalized carriers. Any optical, conversion, or electrical loss results in an efficiency lower than predicted by the SQ formula [15]. In addition to the defects affecting individual solar cells, the engineering required to mount solar cells into a solar module can induce a further deficit in the efficiency. The efficiencies of the best module and cell of each major technology are compared to the corresponding SQ limit in Figure 1, using data from [16]. The discrepancy between the best cell efficiency and the upper SQ limit gives a picture of the room for improvement left for fundamental research, while the difference between the best cell and the best module indicates how much could be gained by improving the transfer of laboratory prototypes into integrated systems. It appears that most mature technologies are not limited by integration losses (η_module ≃ η_cell), while perovskite systems exhibit the largest shortfall (η_module ≃ 0.5 η_cell). Mature technologies already approach the SQ limit, but most of them show little evolution over the last five years [17,18]. While the efficiency holds information on the overall quality of the materials, it does not indicate whether the material at stake can be readily expanded to larger surfaces. By contrast with small samples, which can be extracted from carefully selected defect-free regions of a larger plate, this up-scaling indeed requires an excellent control of the homogeneity of the growth processes. In addition to the efficiency analysis, a complementary approach is therefore used to compare the power output delivered by actual modules, or equivalently the product of the surface of the module by its efficiency. The result, displayed in Figure 2, draws a clear line between industrially mature technologies, upscaled to large surfaces and hence able to deliver more than 100 W, and emergent technologies, which are for the time being limited to smaller surfaces. Finally, a last indicator is the real amount of kWh produced in operation as compared to the nominal power of the modules (measured under 1000 W/m² incident power and at 25 °C). Here, Si technologies are handicapped by the more rapid decrease in efficiency of Si solar cells when either the temperature increases (operating temperatures are in the range of 60-80 °C) or the illumination intensity decreases, as compared to competing technologies. Material challenges for the twenty-first century We have presented the state of the art of photovoltaics and the impressive progress made in the previous decades. With about 300 GW of peak capacity installed and, likely very soon, 2% of the electric energy production, there are still significant challenges before photovoltaic energy can become a significant fraction of the overall energy production: (i) Efficiencies are still significantly below those allowed by thermodynamics. As explained below, photovoltaic devices have a high theoretical energy conversion limit: above 33% for single junctions, and ultimately close to 90% if suitable materials can be found. Especially relevant are materials for tandems and for new conversion processes such as intermediate band or hot-carrier solar cells. Efficiencies are highly dependent on the quality of the materials and extremely sensitive to chemical and structural defects, even at low concentration.
(ii) Materials availability, and processability to achieve low cost. Extremely low cost could be achieved if highly scalable and low-cost processes (e.g. printing technologies) could be used to produce high-quality materials. Materials availability was also seen to enter the equation in the past decade, as the growth of production causes concerns about the long-term sustainability of the technology. (Figure 1 caption: Si-c, Si-c(×80), and aSi-ncSi stand for multicrystalline, crystalline, crystalline under 80 suns illumination, and amorphous/nanocrystalline silicon, respectively; data from [16,18].) (iii) Durability and material aging at the solar cell and module level are also an issue, as this affects the reliability of the technology and ultimately the cost. This largely concerns structural materials and encapsulation, but the intrinsic stability of the active materials was often found to be an issue to be solved first, and caused the failure of some technologies in the past, e.g. Cu₂S/CdS. (iv) Life cycle constraints (toxicity, recyclability, including structural materials) may become prevalent as production reaches volumes (terawatt scale) where concerns about the supply chain, including environmental ones, come to the front. An increasing number of research efforts are oriented toward addressing these issues. (v) Finally, integration into the global energy system (system, storage) and into the built environment (storage, aspect) is becoming a hot topic as the penetration in the energy production (2% of electricity) is getting close to the point where power management is critical, and synergies with power electronics and electrochemical storage are considered. Again, with both increased performance and better affordability, a larger number of applications appear. They often require (e.g. architectural integration) that other properties (e.g. esthetics) are considered. Most of the challenges listed above will likely require radically new pathways. While today performance (including efficiency and reliability) and competitiveness have been sufficient to reach a visible penetration in the energy production, a significant contribution to this mix will require new levels of performance. Performance does not only have an impact on cost, it also contributes positively to sustainability. At the terawatt level, environmental footprint and life cycle issues will grow in importance as well. For these reasons we have chosen to focus in this issue on emerging approaches that may help solve these issues. We will consider first how the efficiency frontier could be approached. We will then discuss new materials for solar competitiveness and improved integration, especially molecular, colloidal or hybrid materials. We will finally give some examples of how material or device characterization, and modeling, are instrumental for the advent of the above emerging technologies. New highly efficient materials The main energy losses during the conversion of solar energy to electrical power by a solar cell are transmission losses and thermalization losses, with different relative contributions as a function of the semiconductor bandgap (see Figure 3 below). Several strategies exist to reduce those losses, such as multijunction cells, hot-carrier solar cells, intermediate band solar cells, or multiple exciton generation. Multijunctions As of today, multijunction cells are the only concept which has demonstrated performances overcoming the single-junction Shockley-Queisser limit and reached industrial applications.
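The Shockley-Queisser limit invoked above can be estimated by detailed balance. The sketch below uses a 6000 K blackbody sun instead of the AM1.5 spectrum used for the reference values in the text, so the result is indicative only.

```python
import numpy as np

# Detailed-balance (Shockley-Queisser style) estimate with a blackbody sun.
k = 1.380649e-23      # J/K
h = 6.62607015e-34    # J s
c = 2.99792458e8      # m/s
q = 1.602176634e-19   # C

T_sun, T_cell = 6000.0, 300.0
omega_sun = 6.8e-5                      # solid angle subtended by the sun, sr
E = np.linspace(0.02, 10.0, 20000) * q  # photon energy grid, J
dE = E[1] - E[0]

def photon_radiance(T):
    """Blackbody photon spectral radiance, photons / (m^2 s sr J)."""
    with np.errstate(over="ignore"):
        return (2.0 / (h**3 * c**2)) * E**2 / np.expm1(E / (k * T))

L_sun, L_cell = photon_radiance(T_sun), photon_radiance(T_cell)
P_in = omega_sun * np.sum(E * L_sun) * dE            # incident power, W/m^2

def efficiency(Eg_eV):
    mask = E >= Eg_eV * q
    J_sc = q * omega_sun * np.sum(L_sun[mask]) * dE  # photogenerated current
    J_0 = q * np.pi * np.sum(L_cell[mask]) * dE      # radiative recombination current
    V = np.linspace(0.0, Eg_eV, 2000)
    J = J_sc - J_0 * np.expm1(q * V / (k * T_cell))
    return np.max(J * V) / P_in

best_eta, best_gap = max((efficiency(Eg), Eg) for Eg in np.arange(0.6, 2.2, 0.05))
print(f"maximum efficiency ~ {best_eta*100:.0f}% near Eg ~ {best_gap:.2f} eV")
# ~31-33% near 1.1-1.3 eV, consistent with the 'above 33%' single-junction bound cited above.
```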
(Figure 2 caption: for each technology, the surface of the best module is indicated in cm²; mature technologies (1st and 2nd generations) are depicted in yellow, 3rd generation in blue; data from [16,18].) Efficient fabrication methods are available, and many compounds provide direct bandgaps (which are preferred, since their high absorption allows fabricating thinner cells, requiring less material and relaxing the constraints on transport properties). Nevertheless, one limitation is the requirement of lattice matching for keeping high material quality. A material combination that fulfills this condition is the Germanium/GaAs/InGaP cell. Although it allows reaching high performances (41.6% under 364 suns has been reported [23]), this cell is far from the condition of current matching. The bandgap of the germanium cell being rather low, it produces a current almost twice as large as the limiting subcell current. A device with the optimum combination of materials is therefore not readily available, but can be approached thanks to various technological strategies: wafer bonding [24][25][26], metamorphic growth [27], inverted stacks [28][29][30], dilute nitrides [31], and multiquantum wells [32][33][34]. A few of these strategies have been successful and have led to the highest reported conversion efficiency to date, 46% under 508 suns ([26], using wafer bonding). Nevertheless, the high cost of those devices prevents their application as flat panels [35,36]. They can find commercial applications in concentrated photovoltaics (CPV), where the surface of the cell is much reduced, or in space applications, where the meaningful indicator is the power produced for a given weight. Because the crystalline substrate is an important part of the final cost, it has been proposed to replace the Ge substrate and cell with silicon. Silicon cells can cover the relevant bandgap values by combining different phases, from crystalline to completely amorphous [37][38][39]. Compared to III-V materials, silicon is abundant and low-cost processing is available, but it reaches much lower efficiencies (13.6% [38]). Those cells can be made flexible, with various colors and shapes, so that they can find application in specific cases such as building-integrated PV (BIPV). Nevertheless, research on micro-crystalline and amorphous silicon devices has attracted less interest in recent years due to the rise of attractive alternatives such as perovskite cells. Around 90% of the market is constituted by silicon cells [40]. In laboratories, the 25% efficiency threshold was reached in 1999 [41,42], and slight improvements have been observed since, up to more than 26.6% [43]. Those values approach the theoretical efficiency of silicon cells (29.4% [44]). The silicon technology is therefore reaching a plateau, pointing toward the need for new conversion concepts such as multijunctions, considering that the bandgap of silicon is not far from the ideal for the bottom cell of a dual-junction device. Experimentally, only the integration of III-V on silicon has reached efficiencies higher than the record silicon cell alone [25,45]. The idea of multijunction devices is based on the fact that a cell reaches its maximum conversion efficiency for photons with an energy equal to its bandgap, for which thermalization and non-absorption losses vanish. Dividing the solar spectrum into several wavelength ranges, and converting those wavelength ranges with distinct cells of suited bandgap, allows reaching higher efficiencies, as shown in Figure 4.
Splitting of the incident spectrum could be achieved with bandpass filters (not reviewed here, see e.g. [19][20][21]), or by stacking different materials, the highest bandgap material being on top of the device. Stacked junctions can be considered in two configurations, the so-called 2-wire and 4-wire configurations (also commonly referred to as 2- and 4-terminals). In the 2-wire configuration, the different cells of the stack are connected in series. As a consequence, the device reaches its optimal performance when the currents in the different subcells are equal. Otherwise, the cell that delivers the lowest current limits the current delivered by the complete device, and the excess current produced by the other cells is wasted. In order to achieve a series connection between the different junctions of the stack, highly recombinative layers such as tunnel junctions are used, which should allow large currents with moderate voltage drops. In the 4-wire configuration, the subcells are separately connected. This relaxes the necessity of current matching, but other difficulties emerge. Because current is extracted separately from each cell, low-resistivity layers and contact grids are required in between to allow lateral currents; in addition, those layers need to be highly transparent. In both the 2- and 4-wire configurations, complexity also arises from the compatibility of the different materials included in the device, in terms of intrinsic properties (such as the lattice constant for monolithic III-V cells) or processing steps. Different families of materials have been used for multijunction cell fabrication, in particular III-V materials (compounds of elements of columns III and V of the periodic table). (Figure 4 caption: output power provided by multijunction cells in the radiative limit, as a function of the number of subcells, under AM1.5D at 1000-sun concentration; data from [22].) By the combination of a chemical reaction and a strain, the release layer is etched, resulting in a device separated from the substrate. The difficulty lies in the ability to produce crack-free, low-roughness layers in a reasonable amount of time. The main path relies on the use of an AlAs sacrificial layer in combination with a high-selectivity HF etchant for the release of layers grown on a GaAs substrate. In order to be able to under-etch over the whole wafer, a mechanism is required to maintain a path open for the etching solution to reach the release layer. A possible method is called weight-assisted ELO, whereby the device layer is fixed to a support and a weight is added to the substrate layer [50]. Other approaches are currently considered, like the replacement of the release layer/etchant couple with InAlP/HCl. It leads to potentially cleaner surfaces after etching, but necessitates rethinking the materials used in the device. Also, other strains can be applied to separate the device layer, for example surface-tension ELO [51]. Light trapping As seen earlier, ultrathin absorbers imply light concentration. A mirror at the back of the cell doubles the effective distance traveled by light. With a random surface texturing, the limit for light-path enhancement is F = 4n², where n is the real part of the refractive index of the material [52]. Multiresonant absorption is another way of trapping light in direct bandgap semiconductors with sub-wavelength thickness [53]. All those light-path enhancement strategies require the presence of a mirror at the back of the cell, which is yet another motivation for developing ELO.
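The 4n² bound quoted above is straightforward to evaluate; the sketch below does so for a few absorbers, using typical near-bandgap refractive indices that are assumed values for illustration.

```python
# Lambertian (random texturing) limit for light-path enhancement: F = 4 n^2,
# with n the real part of the refractive index of the absorber.
typical_indices = {"GaAs": 3.5, "Si": 3.6, "InP": 3.3}   # assumed near-bandgap values

for material, n in typical_indices.items():
    print(f"{material}: n ~ {n}  ->  max path enhancement F ~ {4 * n**2:.0f}x")
# For n ~ 3.5 this gives ~49x, which is why even sub-100 nm absorbers can remain viable
# when combined with a good back mirror and light trapping.
```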
Multiresonant absorption requires periodic or pseudo-periodic nanopatterns with dimensions close to the wavelength. The grid can be implemented in a number of ways, at the top or the bottom of the cell, and can be made of metallic or dielectric material [54]. A classical approach is to use a metallic pattern at the back side of the cell, as we need the back surface to be a mirror anyway [55,56]. This back mirror is deposited before the ELO process, for example using soft nanoimprint lithography. First, a thin (about 100 nm) layer of dielectric material (TiO₂ sol-gel) is spin-coated over the device, and a soft PDMS mold, replicated from a silicon master, is applied onto it. The solvent containing the TiO₂ is evaporated through the mold, and the remaining TiO₂ is solidified by application of a heat treatment. Then, the mold is removed, leaving nanopatterns on the surface. The whole substrate is then covered by a 200-nm layer of metal (gold or silver). Finally, by applying the ELO process presented earlier, we release the device layer and obtain a cell with a nano-structured back mirror. Light management is especially interesting for solar cells with quantum structures like multiple quantum wells (MQW), superlattices [57] or multi-stacked quantum dots [58]. Indeed, a smaller number of quantum layers is favorable for improved carrier transport and for the reduction of dislocation density. Theoretically, current perovskite solar cells should also lead to efficiency improvements [46]. Although this has not been reported so far, encouraging results have been demonstrated (with a current record at 23.6% [47]). Thin-film multicrystalline large-gap CIGS alloys present rather low efficiencies, so that their integration on silicon is not currently an option [46,48]. Ultrathin cells In conventional cells, the limiting factor for the cell thickness is related to its absorptivity: the cell needs to be thick enough so that, over the desired bandwidth, most photons are absorbed and generate an exciton. For direct bandgap III-V semiconductors, this translates into a thickness of a few microns. Our objective is to gain 1-2 orders of magnitude in thickness without a detrimental effect on the solar cell absorption, which means an absorber thickness below 100 nm for GaAs solar cells. The interests that drive the research toward ultrathin solar cells are numerous and can be grouped along two main axes. First, a thinner cell can contribute to a higher efficiency. The same absorption over a smaller thickness means a higher optical intensity, which has a positive influence on the open-circuit voltage (V_oc) of the cell. Concentration moreover enhances the efficiency of nonlinear processes like two-photon absorption in intermediate band solar cells (IBSCs). Having thinner cells also means the carriers are generated closer to the contacts, limiting volume recombination. In addition, this is a mandatory requirement for hot-carrier solar cells (HCSCs), for which the extraction time has to be reduced drastically in order to prevent the thermalization of the conduction electrons. Secondly, materials used for III-V solar cells can be quite expensive, and a reduction of the thickness has a direct impact on the overall cost of the cell. In addition, those cells being usually grown epitaxially at a growth rate of about 0.1 nm/s for molecular beam epitaxy, the growth duration can be considerably reduced in the case of ultrathin cells, further improving their economic viability.
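As a rough illustration of the growth-time argument above, the following sketch compares MBE growth durations at the quoted rate of 0.1 nm/s; the 3 µm reference thickness is an assumed example, not a value from the text.

```python
# Epitaxial growth time at a typical MBE rate of ~0.1 nm/s.
growth_rate_nm_per_s = 0.1

for label, thickness_nm in [("conventional absorber (~3 um, assumed)", 3000),
                            ("ultrathin absorber (100 nm)", 100)]:
    t_s = thickness_nm / growth_rate_nm_per_s
    print(f"{label}: ~{t_s/3600:.1f} h of growth")
# ~8.3 h versus ~0.3 h: a large reduction in reactor time per device.
```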
Also, the complex material arrangement often required for high-efficiency concepts makes the cells difficult to grow with a high crystallinity over more than 100 nm. Epitaxial lift-off High-quality crystalline thin films are grown on thick substrates (a few hundred micrometers), which account for a significant part of the total cost of the cell. In order for the absorber thickness reduction to have an economic impact on material usage, it is thus essential that the substrate can be used for multiple growth runs [49]. This is the purpose of epitaxial lift-off (ELO), by which a release layer is grown in between the substrate and the device. Conclusion Achieving ultrathin solar cells is a goal relevant to the whole field of III-V cells, provided they can be made cost effective and very absorbing. Ultrathin technology will lead to better material usage, better carrier collection, and higher open-circuit voltage, ultimately increasing the efficiency and reducing the cost of the cells. Finally, it is a necessary brick for the development of IBSCs and HCSCs. Hot-carrier solar cells - concept Hot-carrier solar cells are a remarkably elegant concept to achieve a solar energy conversion close to the Carnot efficiency [60,61]. A simple yet challenging idea to avoid thermalization losses while keeping a narrow bandgap to increase absorption (see the losses in Figure 3) would be to selectively collect carriers before their relaxation with the lattice vibrations, i.e. while they are still 'hot'. Such a HCSC concept was first described by Ross and Nozik [60], ignoring Auger mechanisms, and Würfel later revisited the concept considering impact ionization as a dominant process [61]. Two main requirements are needed to achieve power generation from the extraction of hot carriers: a slow carrier cooling absorber combined with energy-selective contacts (ESC). Relaxation of the photocarrier population toward an equilibrium state, through elastic carrier-carrier interaction, should be the dominant process so that a hot-carrier population is obtained under continuous illumination. While this is the case in most photovoltaic materials, and especially for epitaxially grown III-V materials such as those on which we will focus in the following discussion, this condition may not always be fulfilled, especially in the case of quantum structures where Auger phenomena could be the dominant process [62]. This aspect being of marginal concern in research on HCSCs, it will not be discussed further in this paper. Our approach at the NextPV lab is to develop and optimize in parallel hot-carrier absorbers and ESCs, to understand and validate the operation of each element before combining both in a future complete proof-of-concept device. Hot-carrier solar cells - absorber Slow carrier cooling absorbers are necessary to obtain a stable population at a temperature higher than that of the lattice. It is now well understood that photocarriers lose their excess energy through interaction with longitudinal optical (LO) phonons, as demonstrated in GaAs by Shah and Leite [63]. Figure 7 shows the time evolution of a carrier population after a laser excitation.
We apply this approach to several potential applications, especially for the spectral region covered by quantum dots (QDs) where absorption is notoriously weak (less than 1% per quantum confined layer). Fabrication of MQW solar cells has been reported [59]. Those MQW are comprised In 0.18 Ga 0.82 As wells surrounded by GaAs 0.78 P 0.22 barriers, and were inserted in the i-region of a GaAs-p-i-n junction. A special care was taken to balance the strain induced by wells that have some lattice mismatch with GaAs. In Figure 5, the absorption of those structures is compared before and after transfer, and for different nano-structured back mirrors. The difference between 'Flat' and 'Transferred' is the presence of a 100-nm layer of TiO 2 at the back of the former; p indicates the period of the nanostructures. Compared to the non-transferred solar cell, a maximum of ×8 external quantum efficiency (EQE) ratio enhancement is obtained for a wavelength of 965 nm, while the flat EQE indicates a maximum of ×5.6 ratio enhancement for the same wavelength position. Therefore, the addition of the nanogrid at the back results in a maximum of ×1.5 ratio enhancement. These results are coherent with the electromagnetic calculation made using rigorous coupled wave analysis (RCWA). This structure still requires numerous improvements to hopefully reach the full potential of multiple resonance, such as deposition of an anti-reflection coating (ARC), and optimization of the nanogrid parameters and deposition method. Several options are considered in order to develop ultrathin heterostructures. For QDSCs based on the concept of intermediate absorption, the absorption must be enhanced in three spectral domains covering the transitions between valence and conduction, valence and intermediate, and intermediate and conduction bands. Taking advantage of different types of of GaAs/AlGaAs multiquantum wells for high carrier injection (above 10 18 cm −3 ), and ascribed the phenomenon to a hot-phonon bottleneck effect. Hot-carrier relaxation dynamics in quantum systems was later summarized by Nozik [70], also highlighting the importance of Auger mechanisms as they could potentially break the phonon bottleneck. In the previous decade, the bulk of the work on hot-carrier absorber has been focusing on ways to quench Klemens mechanism through a so-called phononic bandgap (between the highest LA and lowest LO phonon energy), which can occur in some crystals with a large mass difference between the cation and the anion [71]. Quantum structures based on III-V materials, particularly multiquantum wells, have remained the most studied pathway to obtain slow carrier cooling absorbers. Le Bris et al. investigated on GaSb-based heterostructures under continuous wave illumination [72], from which the influence of the nanostructuration of the absorber on the carriers' temperature was studied. It was also noted that confinement of photocarriers in a limited volume with the use of large bandgap barriers ('claddings') helps to limit the heat flow at each extremities of the absorber, thus significantly enhancing the temperature of the photocarriers' distribution. 
More recently, alternative materials have been considered as hot-carrier absorbers; metals [73], in the absence of an optical bandgaps, can potentially absorb the full solar spectrum and transfer the resulting high-energy photoelectrons (only relevant carriers in this specific illumination (1), a small number of electrons exist in the conduction band, and are in thermal equilibrium with the lattice. Just after absorption, the photocarrier distribution is a non-equilibrium one and corresponds to that of the absorbed radiation: a temperature cannot be determined at this stage (2). The distribution then reaches an equilibrium state (3), (4) through elastic carrier-carrier scattering, with a characteristic time in the order of less than a picosecond [64]. The carrier's distribution at this stage has a higher temperature than the lattice, and it is the step at which HCSC should operate to overcome the Shockley-Queisser limit. Interaction with LO phonons then cools down the distribution toward the lattice's temperature (steps (5), (6)) before the onset of radiative recombination shown steps (6), (7) (t ≥ 1 ns), finally leading to state (8), similar to configuration (1). One of the challenges of HCSC is therefore limiting the thermalization rate of the hot carriers' distribution (3), (4), and phonon engineering is a straightforward idea to do so. The question of the decay of LO phonons into longitudinal acoustic (LA) phonons through the Klemens mechanism [65] is critical as it provides a pathway to achieve a so-called bottleneck effect, leading to a hot-phonons population in thermal equilibrium with the hot photocarriers [66]. Using III-V materials, quantum structures were shown to significantly decrease the cooling rate of photocarriers [67,68]; particularly, Rosenwaks et al. [69] observed a significantly broader emission distribution in the time-resolved photoluminescence (TR-PL) spectra absorptivity, Δμ e−h is the difference in chemical potential between electrons and holes, and T e−h is the temperature of the radiation (mostly ascribed to an electronic temperature). A thermalization factor Q (in W.K −1 /cm 2 ) is introduced to characterize and was later used as a benchmark characterizing the capacity of a material to slow the cooling of a high-energy photocarriers' population [77,78]. An absorber is generally considered to be slow cooling for a Q value below 100 W.K −1 .cm −2 [79]. In most cases, one assumes that a significantly larger part of the thermal energy is distributed to the electron rather than the holes, owing to their difference in effective masses. This approximation has been challenged by Gibelli et al. [80] by introducing a two temperatures generalized Planck law, as well as the necessity of a full-spectrum fitting for a more quantitative determination of the carrier cooling properties of a material [81]. While the realization of hot-carrier absorbers and the understanding of cooling mechanisms involved in bulk and nano-structured materials remain of major interest for the field, it is already possible to obtain materials that could readily be used in a complete proof-of-concept device with the aforementioned slow cooling properties. We oriented our researches toward nanostructured III-V materials, deposited by molecular beam epitaxy (MBE), owing to their demonstrated excellent and easily tunable carrier cooling properties, and their straightforward integration in a tentative complete HCSC device [82]. 
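In its most commonly quoted simplified form, the thermalization coefficient mentioned above relates the power density lost to the lattice to the carrier temperature excess (some formulations add a phonon-occupation factor, so this should be read as a simplified expression rather than the authors' exact definition):

\[
P_{\mathrm{th}} \;\approx\; Q\,\bigl(T_{H}-T_{L}\bigr),
\]

where P_th is the thermalized power density (W cm−2), T_H the carrier temperature, and T_L the lattice temperature, consistent with the W K−1 cm−2 unit quoted above; the smaller Q, the slower the carrier cooling.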
In Figure 8(a) the band diagram of a typical hot-carrier absorber test structure is shown. The material stack consists of a five-MQW region of In0.15Ga0.85As/Al0.05Ga0.95As (5 nm/7 nm, respectively) with a total thickness of 100 nm (spacer Al0.05Ga0.95As layers are included at each extremity) between two 100-nm Al0.4Ga0.6As claddings. The continuous illumination comes from a 532-nm laser in a classical confocal setup at room temperature, and the photoluminescence signal is analyzed using the generalized Planck law, from which the carriers' temperature T_H and the thermalization coefficient Q are extracted. The illumination is varied from 100 suns to 40,000 suns. Figure 8(b) shows the evolution of the photocarriers' temperature as a function of the concentration, comparing different heterostructures: MQWs with claddings, MQWs without claddings, bulk GaAs with claddings, and bulk GaAs. Consistent with what was suggested by Le Bris [72], the combination of MQWs and large barriers embedding the absorber leads to higher temperatures (520 K for 40,000 suns); the role of the claddings is to prevent carriers from scattering out of the absorber. Hot carriers are also observed in MQWs without claddings, albeit at a markedly lower temperature (400 K). It should also be noted that a roughly similar observation can be made for bulk GaAs embedded between two claddings, showing that hot carriers can be generated in a relatively simple design. While not the highest performing solution, this provides a pathway simplifying the realization of a proof-of-concept case) to a semiconductor with an electronic affinity smaller than the work function of the absorbing metal. More recently, very interesting results have been obtained in perovskite materials with the observation of a hot-phonon bottleneck [74] and thermalization properties comparing favorably even to bulk GaAs. Also, colloidal perovskite nanocrystals were recently found to be of major interest in the scope of HCSC realization, with a tentative full device presented and characterized by Grätzel's group [75]. A straightforward path to determine the carriers' temperature derives from the generalized Planck law, which accounts for the photoluminescence signal [76]. Under sufficient pump power, a broadening of the high-energy tail of the PL spectrum is a marker of a hot-carrier population [66]. The temperature can be determined with a rough precision of about ±20 K from a linear fit of the high-energy tail of the PL signal, using a first-order development of equation (1), where I_PL is the PL intensity, A is the differential resistance (NDR) effect is observed, the peak current then corresponding to the resonance current at the first order. Other original pathways to energy selectivity have also been considered, such as 'optical selectivity' [92], which consists in a narrow-bandgap hot-carrier absorbing material with a broadband absorption profile and a narrow-band emission profile; the resulting emitted light is then collected in an integrated PV solar cell with a larger bandgap, matched with the maximum of the narrow-band emission. Although seemingly complicated to realize experimentally, such a concept shows the potential variety of solutions that could be considered for converting the photocarriers' excess heat into potential energy.
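To make the tail-fitting procedure described earlier in this section concrete, the following minimal Python sketch illustrates the principle of the carrier-temperature extraction: in the Boltzmann limit of the generalized Planck law, and for a slowly varying absorptivity, ln(I_PL/E^2) is linear in photon energy with a slope of -1/(k_B T_H). The function name, fit window, and synthetic spectrum are illustrative assumptions, not the authors' analysis code.

```python
import numpy as np

KB = 8.617333262e-5  # Boltzmann constant in eV/K

def carrier_temperature(energy_eV, pl_intensity, fit_window_eV):
    """Estimate the photocarrier temperature from the high-energy tail of a
    PL spectrum: in the Boltzmann limit, I_PL(E) ~ A(E) E^2 exp(-(E - dmu)/(KB*T)),
    so ln(I_PL/E^2) is linear in E with slope -1/(KB*T) if A varies slowly."""
    lo, hi = fit_window_eV
    sel = (energy_eV >= lo) & (energy_eV <= hi) & (pl_intensity > 0)
    slope, _ = np.polyfit(energy_eV[sel],
                          np.log(pl_intensity[sel] / energy_eV[sel] ** 2), 1)
    return -1.0 / (KB * slope)

# Synthetic check: a 500 K population with a 1.2 eV quasi-Fermi-level splitting
E = np.linspace(1.35, 1.70, 400)                 # eV, above the emission edge
I = E ** 2 * np.exp(-(E - 1.2) / (KB * 500.0))
print(f"T_H ~ {carrier_temperature(E, I, (1.45, 1.65)):.0f} K")   # ~500 K
```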
DRTBs are an attractive solution in the scope of realizing a proof-of-concept HCSC, owing to their apparent simplicity, the extensive knowledge of such heterostructures, and their compatibility with an MBE process based on III-V materials. Most approaches consider that the amplitude of the transmission remains unchanged by the application of an external voltage on the heterostructure [93]. We were particularly interested in the symmetry breaking of the DRTB resulting from such a bias; indeed, as the maximum of the transmission is related to the ratio of the transmissions of the individual barriers [94], a substantial decrease of the DRTB performance is expected, which may limit the resonant current, particularly at room temperature where HCSCs are supposed to operate. This issue has previously been addressed by Allen [95], who proposed a concept of 'effective barrier symmetry' by offsetting the symmetry breaking through asymmetric structures. A numerical modeling of the electronic transmission of symmetric and asymmetric Al0.6Ga0.4As/GaAs/Al0.6Ga0.4As structures [96] is shown in Figure 9. As expected, the maximum of the transmission is equal to unity for the symmetric structure, but the polarization of the structure leads to a significant decrease. For the asymmetric structure, while the transmission maximum is much lower than 1 when unbiased, the application of an external voltage leads to a full recovery of the transmission maximum close to 1. This highlights an often-overlooked characteristic of DRTBs which is of major importance for an application to HCSCs: the tunnel resonance (the energy at which the photocarrier's energy matches that of the confined state) and the feed resonance (the voltage at which the resonant population difference between the absorber and collector is maximal) are not the only resonances that need to be taken into account, and the amplitude resonance (the voltage at which the amplitude of the transmission is maximum) is a critical parameter for optimizing carrier transport through a DRTB. We also showed that the simultaneous optimization of these three resonances allows for a higher level of energy selectivity through the contact, which is of major importance to maximize the potential efficiency increase that HCSCs may bring as compared to conventional solar cells [97]. device. The bulk GaAs layer is shown for comparison, and we see that in the absence of nanostructuration or carrier confinement from claddings, the temperature of the photocarriers remains that of the lattice (room temperature). The dominant interpretation of the effect of the slowed photocarrier cooling in MQW absorbers is that quantum confinement may inhibit the thermalization pathway [71,83] through a phonon bottleneck, though recent theoretical work has challenged this interpretation, instead asserting that a change in the carrier-to-LO-phonon interaction is a more likely process [84]. As the absorbers that were investigated in our laboratory were based on nanostructured III-V materials, it makes sense to develop energy-selective contacts epitaxially grown on similar substrates and using the same deposition method, thus allowing a one-step realization of a complete structure. Hot-carrier solar cells - Selective Energy Contacts As compared to the absorber part, the research on selective energy contacts is somewhat scarcer, although different concepts have been considered.
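As an illustration of the kind of transmission calculation discussed above, the sketch below computes the electron transmission of a double rectangular barrier with the transfer-matrix method, under strong simplifications (single parabolic band, constant effective mass, the applied bias treated as a crude staircase). The barrier height and layer thicknesses are assumed values for illustration only, not the parameters used in [96].

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
M0 = 9.1093837015e-31    # electron mass, kg
QE = 1.602176634e-19     # J per eV

def transmission(E_eV, layers, m_eff=0.067, bias_V=0.0):
    """Electron transmission through piecewise-constant barriers by the
    transfer-matrix method. 'layers' is a list of (height_eV, thickness_nm);
    the leads are flat (0 eV on the left, -bias_V on the right) and the bias
    is applied as a crude staircase over the layers. Single parabolic band,
    constant effective mass: a deliberately simplified toy model."""
    m, E, n = m_eff * M0, E_eV * QE, len(layers)
    V_list = [0.0] + [V - bias_V * (j + 0.5) / n
                      for j, (V, _) in enumerate(layers)] + [-bias_V]
    d_list = [0.0] + [d for _, d in layers] + [0.0]
    k = [np.sqrt(2.0 * m * (E - V * QE + 0j)) / HBAR for V in V_list]
    M = np.eye(2, dtype=complex)
    for j in range(len(V_list) - 1):
        kappa = k[j] / k[j + 1]
        ph = 1j * k[j] * d_list[j] * 1e-9        # phase accumulated in region j
        S = 0.5 * np.array([[(1 + kappa) * np.exp(ph), (1 - kappa) * np.exp(-ph)],
                            [(1 - kappa) * np.exp(ph), (1 + kappa) * np.exp(-ph)]])
        M = S @ M
    t = np.linalg.det(M) / M[1, 1]               # transmitted amplitude
    return float(abs(t) ** 2 * np.real(k[-1]) / np.real(k[0]))  # flux-normalised

# Symmetric double barrier (heights/thicknesses are illustrative assumptions)
symmetric = [(0.5, 3.0), (0.0, 5.0), (0.5, 3.0)]   # barrier / well / barrier
energies = np.linspace(0.01, 0.45, 800)
T0 = np.array([transmission(E, symmetric) for E in energies])
print("unbiased resonance near %.3f eV, T_max = %.2f"
      % (energies[T0.argmax()], T0.max()))
print("same structure under 0.1 V bias, T_max = %.2f"
      % max(transmission(E, symmetric, bias_V=0.1) for E in energies))
```

Within this toy model, the symmetric structure reaches near-unity transmission at resonance when unbiased, and the maximum drops once a bias is applied, which is the qualitative trend discussed above for symmetric DRTBs.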
Ideally, the hot photocarrier would transit from the absorber to the collector (typically having a wider bandgap than the absorber) through a single energy state, and transforming the excess in kinetic energy (heat) into potential energy (voltage); an energy-selective contact can be viewed as a thermoelectric leg for which it is possible to calculate a Seebeck coefficient [85]. Theoretical evaluations of the required properties of an SEC, as well as the influence of the contact's parameters on the performance of a final device have been extensively studied by Le Bris [79,86], as well as O'Dwyer et al. [87], mainly insisting on the importance of carrier selectivity (energy width of the contact) and extraction level in regard to the photocarrier's temperature in the absorber. Also, Takeda [88] used the example of resonant tunneling diodes (RTDs) in the detailed balance model to discuss on the requisite of the SEC conductance, concluding that it must be within a suitable range, and the expected efficiency of an HCSC would be negatively impacted for a too high or too low conductivity. While a too low conductance can easily be understood in terms of series resistance effect, a too high conductance is also detrimental for the efficiency as the temperature of the photocarriers' population will decrease as compared to the optimum extraction energy. Various ideas have been considered to achieve the feature of energy selectivity. A straightforward approach is to use low-dimensional structures such as quantum dots [89,90] for a 3D confinement, or double-resonant tunneling barriers (DRTBs) [82,88,91] for a 1D confinement. Both structures lead to the apparition of a confined state through which photocarriers can selectively be extracted. Experimentally, the well-known negative structure while completely vanishing for the symmetric one, in excellent qualitative agreement with the conclusions from the model. This further confirms the beneficial effect of using asymmetric structure for a better matching of the transmission's amplitude resonance with the tunnel and feed resonances [97], and allows a room temperature operation of the SEC. In the future, selective contact modeling should be conducted within the non-equilibrium Green's functions framework which Experimental samples have been realized by MBE and analyzed by temperature-dependent current-voltage analysis (J-V-T). Figure 10(a) presents a comparison of modeled J-V curves for symmetric and asymmetric structures, showing an expected higher resonant current for the latter. Experimental curves are shown in Figure 10(b) for 60 K and 300 K. While both samples have a roughly similar behavior at low temperature, the NDR is fully preserved at room temperature for the asymmetric used. The results reached with this kind of solid HTM have caught up those with liquid cells, with power conversion efficiencies over 10% [104]. The concept of hybrid organic-inorganic solar cells is based on the sensitization of porous semiconducting metal oxide films with organic or metallo-organic dyes. DSC's technology benefits from fabrication through abundant, nontoxic, and cheap materials, as well as tunable color depending upon the dye employed and relatively high-power conversion efficiencies (PCEs up to 14.0% [105] for liquid and 7.5% [106] for solid state devices at the laboratory scale). 
In particular, advances over past years have been achieved by optimization of the redox mediator nature along with the development of new ruthenium-free dyes leading to record PCEs over 13% for small area devices, making them competitive with amorphous silicon-based devices [100,107]. To push further the power conversion efficiencies, co-sensitization with complementary dyes of semiconducting oxide porous layers combined with a fine tuning of the electronic properties of each dye appeared to be a very promising route [105,107,108]. Many orange and red chromophores have been therefore designed for DSCs, whereas examples of efficient green or blue dyes are rarer [109] and usually involved long and costly preparation and purification routes. In this context, the ISM team has developed an original D-π-[M]-π-A architecture including a ruthenium-diacetylide, i.e. [M] = [Ru(dppe) 2 ] where dppe is bisdiphenylphosphinoethane, embedded within a push-pull structure which were prepared using convergent synthetic routes [109,110]. Fine control of the electron-withdrawing properties of the π-A units led to a new family of chromophores showing red, violet, blue, or greenish color and the optoelectronic properties of which are suitable for using in DSCs ( Figure 11). Thus, these dyes led to PCEs above 6% in standard single-dye DSC and up to 7.5% in devices involving photoanodes co-sensitized with dyes 1 and 3. Presently, new organometallic dyes built from the D-π-[M]-π-A structure including various acceptor moieties based on the 2,1,3-benzothiadizole unit are under investigation. First of all, three new acceptor building blocks have been synthesized (Figure 12). After coupling with a ruthenium-vinylidene complex endowed with a N-(phenyl)-carbazole group and deprotection of the carboxylic acid function, both green and blue dyes were obtained, the electronic properties of which were fully characterized by UV-visible spectroscopy, cyclic voltammetry and density functional theory calculations. Preliminary characterization in DSCs revealed promising performances for green or blue dyes [111]. As liquid electrolyte, yielding to more efficient devices, were found to induce difficult issues for commercialization. The NextPV laboratory has focused its work on allows to add the electron-phonon scattering in order to be more realistic. Hot-carrier solar cells -conclusion and perspectives It has been illustrated that the two critical parts of a HCSC are already functional under the normal solar cell operation conditions (continuous illumination and room temperature). Carriers' temperatures significantly higher than that of the lattice have been experimentally measured, and electrons' transport through a single energy state has been observed, both features on sample stemming from a similar experimental MBE process on GaAs substrate. The next logical step is the combination of both parts in a single-step device where hot carriers (electrons) would be optically generated in a hot-carrier absorber, then extracted through a double-resonant tunneling barrier with an optimized geometry to maximize the resonant current at room temperature. The extraction of energetic particles from a hot-carrier population also brings several questions regarding the influence it will have regarding the said population, and especially potential changes of its equilibrium temperature. Those questions, along with an experimental realization of a complete proof-of-concept device, will hopefully be addressed during the upcoming year. 
Dye-sensitized solar cells Dye-sensitized solar cells (DSCs) [98], regarded as one of the most promising photovoltaic devices, have been extensively investigated due to their high theoretical efficiency, facile fabrication processes, and potentially low cost. These solar cells use molecular dyes to absorb the solar spectrum efficiently. The dye molecules are bound to mesoporous TiO2 and the porous network is then infiltrated by either an electrolyte or a conducting polymer. The exciton generated in the dye is then dissociated as the electron is very quickly (tens of fs) injected into the oxide. The dye cation is regenerated by either a redox shuttle in the electrolyte or a hole-conducting polymer. Over the past two decades, continuous research efforts have contributed to significant advances in DSC performance [99]. To date, DSC devices employing a zinc-porphyrin-based co-sensitized system have shown a record efficiency of 13% using liquid electrolytes, which may limit their outdoor applications [100]. Practical advantages have been gained by replacing the liquid electrolyte with an organic hole-transporting material (HTM) [101][102][103] such as spiro-OMeTAD (2,2′,7,7′-tetrakis-(N,N-di-p-methoxyphenylamine)-9,9′-spirobifluorene), which is the most widely the molecular design of new types of HTMs in organic light-emitting diodes (OLEDs) and as an electron donor group in organic sensitizers of the D-π-A type in DSCs. Another fascinating advantage is the versatility of the carbazole reactive sites, which can be substituted with a wide variety of functional groups, allowing fine-tuning of its optical and electrical properties. Solution-processed quantum dot solar cells Materials small enough for the quantum confinement effect to come into play show unique optoelectronic properties different from those of bulk materials, such as size-dependent emission and optical absorption bands [113,114]. These properties make QDs attractive in the fields of science and technology. Colloidal quantum dots (CQDs) are one such class of nanomaterials. They can be dispersed in a solution, and are useful active materials for LEDs [115], solar cells [116], biosensors, and biomarkers [117]. In addition to the unique optical properties originating from the quantum confinement effect, CQDs are compatible with solution-based methods such as spin-coating, dip-coating [118], spray-coating, and microstamping [119]. Various types of solar cells have been proposed so far [116,[120][121][122][123][124]. Among them, interest in Pb-based QD heterojunction solar cells, which are typically formed by depositing the PbS QD layer on top of a flat ZnO layer (referred to herein as planar-type cells) (Figure 13(a)), has rapidly increased in recent years, and a power conversion efficiency of over 11% has been reported for these solar cells [125,126]. However, the short carrier diffusion lengths of CQD films (having a typical diffusion length solid-state solar cells and on process optimization for the preparation of DSCs and perovskite solar cells. In a first step, simple and efficient dyes based on indoline donor units having a fluorene substituent group and acceptor units have been applied as light-harvesting materials in DSCs [112]. For further improvement of the DSC performance, additional substituent groups with long alkylene chains were introduced to the fluorene substituent group or to acceptor units in order to extend the absorption spectrum and to suppress the π-stacked aggregation of the dye on the TiO2 surface.
The alkyl chains effectively suppress electron recombination between electrons in the conduction band of TiO 2 and electrolyte, resulting in higher open-circuit voltage and short-circuit current. Such materials having a fluorene substituent group are already used in organic photoconductor (OPC) drums, such as charge transport materials (CTM), and are essential to copier and printer technology. Therefore, due to their high stability and low toxicity, these dyes could be used in DSCs albeit with a somewhat lower efficiency as compared to best Ru-based dyes. Previous studies have demonstrated than the onerous synthesis and low charge-carrier mobility of spiro-OMeTAD significantly limit its potential of up-scaling for application on DSCs and perovskite solar cells (PSCs). Therefore, the development of hole-transporting material with higher charge mobility would be also a key approach to achieve higher performance in the solid state solar cell. Carbazole-based derivatives have attracted much attention because of their interesting photochemical properties. Recent interest in the carbazole derivatives has been caused by its good charge-transport function, which can be exploited in of the planar-type cells increased until the active layer thickness reached 200 nm owing to the increase in light-harvesting efficiency, and then decreased on further increase in layer thickness. This behavior is mainly attributed to the limitation of carrier diffusion length. In contrast, the J sc of NW-type cells (L1 = 400-1500 nm, L2 = 300 nm) increased with active layer thickness (L1 + L2) and reached a maximum at 1800 nm, approximately six times as thick as the typical carrier diffusion length of PbS QD-based cells. Figure 13 shows that electron transport in the PbS QD region is a limiting step in the photoelectron conversion process of PbS QD-based solar cells. Once good electron pathways are established by incorporating ZnO NWs, holes left behind in the PbS QD region can diffuse over 1 μm (Figure 14). We also achieved EQE values of approximately 60% at the first exciton absorption peak (1.02 μm) and over 80% in the visible region by optimizing the morphologies of the ZnO NWs [134,135]. The bandgap of bulk PbS is located at approximately 3.1 μm (0.4 eV) and can be tuned from the visible to the short-wavelength infrared by adjusting quantum dot sizes. This property makes PbS CQDs attractive materials for the middle and/or bottom subcells of multijunction solar cells [136,137]. To confirm the usefulness of PbS CQD as photoactive materials for multijunction solar cells. PbS CQDs giving different first exciton absorption peaks were synthesized by following a method reported in Ref. [138] (Figure 15). NW-type solar cells were constructed by a layer-by-layer deposition method. The EQE spectra of the NW-type solar cells give the EQE peaks originating from the first exciton absorption. From the EQE spectra, we confirmed that the solar cells can convert photon energy to electricity in a wide range of the solar spectrum ( Figure 16). In the solar cell (a), the EQE originating from the 1300-nm-exciton absorption peak reaches 47%, which is, to the best of our knowledge, the record value for PbS QD solar cells. While the solar cell (b) gives a power conversion efficiency of 5.5%, which is, to the best of our knowledge, the highest efficiency is 200-300 nm) makes it difficult to increase power conversion efficiency by thickening CQD active layers. 
To address this issue, many solar cell structures have been proposed [127][128][129]. As an example, we will focus in the following on PbS QD-based solar cells with ZnO nanowires (NWs) (referred to as NW-type solar cells) ( Figure 13(b)). This solar cell structure allows us to achieve efficient electron transportation as well as high light-harvesting efficiency simultaneously [130][131][132]. PbS QD solar cells were constructed by combining ZnO NWs with PbS CQDs using a spin-coating method ( Figure 13(b)) [132]. The PbS QD/ZnO NW cell structure is denoted by 'NW-type cell (L1 = X, L2 = Y)' , where X (nm) is the length of ZnO NW and Y (nm) is the thickness of the PbS QD overlayer. Planar-type cells were also constructed in a similar way. The short-circuit current density (J sc ) and absorbance at 1020 nm (@ the first exciton absorption peak) of the two different types of solar cells are plotted as a function of active layer thickness in Figure 14. The J sc the initial value at the end of the 3000-h light soaking test [140]. We have been developing PbS QD/ZnO NW solar cells (NW-type solar cells) that include spatially separated pathways for electrons and holes. The PbS QD/ ZnO NW hybrid structures allow us to increase light harvesting efficiency and carrier collection efficiency simultaneously. Based on this strategy, we succeeded in extending an EQE onset wavelength to the SWIR region. PbS QD/ZnO NW solar cells were verified to convert solar energy to electricity in a wide range of solar spectrum from 300 nm to approximately 2000 nm. Moreover, PbS CQDs are compatible with solution-based solar cell technology. These characteristic features show that PbS colloidal quantum dots are one of the most promising photovoltaic materials not only for single-junction solar cells but also for the middle and/or bottom cells of multijunction solar cells. Besides Pb-and Cd-based QDs, there are a wide range of options of CQDs [130,131]. Finally, to develop eco-friendly quantum dot solar cells CuInS 2 and AgSiS 2 can be used [141,142]. Organic photovoltaics (OPV) Organic photovoltaics is part of the so-called third generation of photovoltaic panels together with dye-sensitized solar cells (DSSC), hybrid perovskite solar cells (hPSC), and quantum-dots solar cells (QDSC). Since the introduction of the bulk-heterojunction concept in 1995 [143], the continuous development of novel chemical structures led to the increase of the power conversion efficiency (PCE) up to 13% for single-junction cell [144]. While still less efficient than the other solar cell technologies, organic photovoltaics has advantages which make them interesting for specific applications. The flexibility and lightweight make them useful for nomad applications, while the possibility to tune the color, the shape, and the transparency open the route of the integration in modern and esthetic features [145]. Additionally, organic photovoltaics panels have a low-energy pay-back time [146], low carbon print, [147] and are also efficient with low light intensity (for indoor applications) [148]. Non-fullerene acceptors (NFA) Since the development of the bulk heterojunction concept [143], the most commonly used acceptor materials in OPV devices are fullerenes and their derivatives. 
[6,6]-phenyl-C 61 -butyric acid methyl ester (PC 60 BM) has been for 15 years the benchmark for the acceptor material [149] while other derivatives such as [6,6]-phenyl-C 71 -butyric acid methyl ester (PC 70 BM) [150,151] or indene-C 60 bisadduct (IC 60 BA) [152], were used to improve the absorption in the visible region and the open-circuit voltage, respectively. High efficiencies, up to 11.7%, have been reached with such acceptor [153]. However, these materials present several drawbacks: their poor absorption in the visible region [150], the important energy losses [154] and the instability they ever reported on the solution processed solar cells whose EQE onset is located in the short-wave infrared (SWIR) region. Long-term stability is one of the most important properties in practical applications. We constructed NW-type solar cells using the 1.2-μm-thick nanowire layer infiltrated with PbS QDs bearing Br ligands (the inset figure of Figure 17). The solar cells were verified to achieve not less than 4-year air stability, and show no noticeable degradation ( Figure 17) [139]. Continuous light soaking tests were also carried out on the solar cells using a solar simulator (AM1.5G 100 mW cm −2 ). The power conversion efficiency was found to keep approximately 90% of Figure 15. absorption spectra of PbS cQd solutions. cQd diameters obtained from an empirical equation [133] are shown in the legend along with the first exciton absorption peaks. absorption spectra are normalized with respect to the exciton peak. state-of-the art performances. But more interestingly, due to their contribution in the charge generation and the lower energy loss, such new n-type organic semiconductor opens the route to organic photovoltaics with very high performances. A recent report already published a record power conversion efficiency of 13% with non-fullerene acceptor [144]. Stability improvement In order to turn the organic photovoltaic into a mature technology, the high performances must remain stable over time. It is commonly accepted that the aging of an OPV device can be divided into 3 steps: (1) a rapid loss of efficiency known as the burn-in, (2) a linear loss of long duration, and (3) an eventual catastrophic failure [162]. The causes of the different losses are multiple and depend not only on the architecture of the device but also on the nature of the organic semiconductor used. The development of the inverted architecture in 2006 has been an important step forward in order to design long-lived solar cells [163][164][165]. It resulted in air-stable devices and allowed to perform the fabrication process in air. The burn-in is a major issue in organic photovoltaics as it can lead to 20-60% loss of the initial performances [166][167][168]. Recently, it has been proposed to classify the burn-in period following two phenomena: a loss of short-circuit current (J sc ) or a loss of open-circuit voltage (V oc ). Heumueller et al. have investigated the burn-in involving a loss of J sc [169]. They attributed this degradation to the dimerization of the fullerene derivative. Interestingly, they show that the dimerization rate, and, as a consequence the J sc burn-in, depends on the active layer morphology. Active layers with well-defined non-crystalline fullerene domains present the highest rate of dimerization and the stronger burn-in. 
Concerning the burn-in involving a loss of V_oc, it has been proposed that this phenomenon is the consequence of an increase in the energetic disorder in the donor polymer, leading to a redistribution of charges in a broader density of states [170]. Crystalline polymers, having higher charge carrier densities, are less sensitive to increased energetic disorder, thereby reducing the consequences on the V_oc of such devices [171]. The fullerene derivative PCBM has also been identified as another cause of performance loss. Indeed, this small molecule tends to crystallize upon aging, thereby modifying the fine morphology of the active layer [155]. Several routes have been identified to freeze the active layer morphology. Derue et al. developed an additive to crosslink the fullerene derivatives and suppress the phase segregation phenomenon for several donor-acceptor blends [172]. To summarize, a lot of progress has been made in the identification of the degradation mechanisms in OPV devices. The fullerene derivatives are a cause of several degradation routes (long-term crystallization, dimerization…) and the development of efficient non-fullerene acceptors can be a solution to induce in the bulk heterojunction [155]. Recent advances have led to the development of alternative acceptor materials for OPV devices. Holliday et al. developed small molecules based on an indacenodithiophene core flanked with benzothiadiazole and rhodanine groups (IDTBR), which allowed the limitations of P3HT:PCBM-based solar cells to be overcome [156,157]. Indeed, such an acceptor contributes efficiently to the photocurrent generation, as it presents an absorption complementary to that of P3HT and boosts the open-circuit voltage owing to lower energy losses. As a consequence, P3HT:IDTBR solar cells reached 6.4% power conversion efficiency [157]. These authors also showed that such an acceptor can be efficient with the low-bandgap donor polymer PffBT4T-2DT. Using IDTBR instead of PCBM allows the energy losses in the device to be reduced. Open-circuit voltages above 1 V were achieved and power conversion efficiencies close to 10% were reached, compared to 7.5% with PCBM [158]. Similarly, another n-type organic semiconductor, known as ITIC, presented state-of-the-art performance (PCE = 11%) in combination with PBDB-T, a donor polymer semiconductor [159]. The enhancement of the performance in comparison with devices made with PCBM (PCE = 7.5%) is mainly attributed to the additional charge generation due to the complementary absorption of both organic semiconductors. N-type polymeric semiconductors have also seen strong development in recent years [160]. Perylene diimide- and naphthalene diimide-containing polymers have been developed and optimized to replace PCBM. Among them, naphthalene diimide bithiophene, also known as N2200, has allowed the fabrication of all-polymer solar cells with a PCE of 8.27% due to the complementary absorption between N2200 as acceptor and benzodithiophene-alt-benzotriazole, a medium-bandgap copolymer, as donor [161]. In summary, a lot of effort has been made to find a suitable replacement for fullerene derivatives. Photovoltaic devices made with non-fullerene acceptors reach efficiencies of OPV devices prepared with water-based inks remained very low, below 1% [180,181], but in 2013, 2 and 2.5% were achieved with P3HT:PCBM [182] and P3HT:ICBA [183] donor-acceptor active layers.
They identified a core-shell morphology, with PCBM-rich core and P3HT-rich shell in accordance with the Flory-Huggins free energy of mixing theory [183,184]. Since then, several other polymer donor materials were used to improve the efficiency, a copolymer based on thiophene and quinoxaline units (TQ1) [185] and a low bandgap copolymer (PBDTTPD) which gives 7% average PCE in standard condition (deposited from chlorinated solvents) [186]. Efficiencies up to 3.8% were achieved using such water-based inks [187]. Another method can be used to prepare colloidal inks: the nanoprecipitation. Such technique, based on the supersaturation principle, leads in some specific conditions to the spontaneous generation of nanoparticles dispersed in the anti-solvent [188]. Contrary to the mini-emulsion technique, the solvent and the anti-solvent have to be miscible. The organic semiconductors dissolved in the solvent are injected in the anti-solvent. The materials, which solubility decreases in the solvent mixture, precipitate first and form nanoparticles (generation step), which then percolate (growth step), until they reach a critical diameter which allow them to remain stable in the medium (Figure 18(b)) [188]. Compared to the mini-emulsion technique, such those issues. Several report already show that devices with NFA present better stability than those based on fullerene derivatives [173,174]. Toxicity Another point to be addressed in order to send this technology a step further is the replacement of the commonly used toxic solvent. Indeed, most of the high-efficiency OPV devices were obtained using chlorinated and/or aromatic solvents. The substitution of such solvents with less toxic ones, such as xylene [175,176], while compatible with the industry, has still a non-negligible impact on the environment and working conditions. Eco-friendly processes are under investigation and the development of nanoparticulate organic photovoltaics (NPOPV) is one of them. It began with the work of Thomas Kietzke and Katharina Landfester in 2002 [177][178][179]. They applied the mini-emulsion technique to generate organic semiconductors nanoparticles dispersed in an aqueous medium with poly(9,9-dioctylfluorene-co-N,N-bis(4-butylphenyl)-N,N-diphenyl-1,4-phenylenediamine) (PFB) as the donor material and poly(9,9-dioctylfluorene-co-benzothiadiazole (F8BT) as the acceptor one. This technique implies the use of two non-miscible solvents (chloroform and water) and a surfactant (Figure 18(a)). Even though the efficiency was low (4% IPCE), those articles were the first proofs of concept for the fabrication of organic solar cells with water-based inks. For several years the The combination of those different strategies (efficient donor-acceptor systems, control of the size, and the morphology) can be a solution to develop highly efficient OPV devices using eco-friendly processes. Perovskite solar cells Perovskite solar cells (PSCs) have recently attracted great attention due to the large diversity of low-cost processes existing to produce them, the versatility of structure and materials that can be used (see Figure 19) and the excellent opto-electronic properties (strong absorption and high carrier diffusion length) of perovskite material such as methyl ammonium lead iodide (MAPbI 3 ) [103,[194][195][196][197]. The power conversion efficiency (PCE) of PSCs has dramatically improved to over 20% in a relatively short period [198]. 
Despite their interesting characteristics (low production cost and high efficiencies), several important challenges remain for their commercialization such as: hysteresis in I-V curves; cells stability; ecotoxicity due to the presence of soluble lead compound in the cells and the scaling up of the spin coating process used to produce high efficiencies PSCs [195,199]. However, recently progresses on these main issues have been made, giving the hope that solar cells including perovskite layers will be soon commercialized [197,[200][201][202][203][204]. These progresses have been realized on the three main layers composing PSCs (the perovskite layer, the electron, and the hole transport layers) and on the PSCs architecture, improving their efficiency, stability, and reducing the hysteresis. Moreover, many improvements have been made on tandem structure which could compete with single-junction crystalline silicon solar cells in terms of market throughput. In this section, challenges and recent progresses on PSCs hysteresis reduction, efficiency, processing, and stability will be presented. Hysteresis reduction As briefly introduced before, a major problem in PSCs is the hysteretic behavior observed in current method allows the formulation of surfactant-free dispersions. Using such procedure, Gärtner et al. generated P3HT:ICBA NP dispersion in ethanol and methanol. They fabricated P3HT:ICBA-based organic solar cells and obtained a record PCE of 4% using an optimized thermal annealing step to induce the coalescence of the NP and the formation of a homogeneous film [189,190]. Perspectives This literature survey on OPV showed that the recent advances allow nowadays the fabrication of devices with power conversion efficiencies around 10% for many donor-acceptor systems. The development of novel n-type organic semiconductors is very promising as it can be a solution to overcome the limitation of devices made with fullerene derivatives and reach even higher power conversion efficiency. The stability issue has been investigated deeply and some strategies have been developed to eliminate or mitigate some degradation mechanisms (phase-segregation, dimerization…). The replacement of aromatic toxic solvent by water or alcohol to develop eco-friendly processes is still at the first stage and most of the devices were fabricated with P3HT as donor polymer. In order to improve the efficiency of NPOPV devices, it will be important in the future to shift to efficient donor-acceptor systems using low bandgap donor polymer and non-fullerene acceptors. Additionally, the PCEs reached with NPOPV devices are still a step lower (30-40%) than those obtained with chlorinated and/or aromatic solvents. These lower performances are due to non-optimized morphology. Indeed, most of the nanoparticles generated are donor-acceptor mixtures with uncontrolled domain sizes. There are some solutions to overcome those limitations. The size of the NP can be tuned by modification of the mixing conditions (ratio solvent/ anti-solvent, concentration) [191], or using microfluidic systems and supercritical anti-solvent (SAS) [192]. The morphology of the NP can be controlled using the solvent selectivity, which allows the fabrication of well-defined core-shell donor-acceptor NP [193]. explained by the passivation of defects and the reduction of charge accumulation at the interfaces [205,207,211]. 
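As a minimal illustration of how an interfacial charge-accumulation capacitance can make the measured J-V curve depend on the scan direction and scan rate (in the spirit of the equivalent-circuit picture discussed in this section, but not a reimplementation of the model of [208]), the sketch below adds a capacitive current C dV/dt to a single-diode characteristic; all parameter values are illustrative assumptions.

```python
import numpy as np

def jv_scan(v, scan_rate, j_ph=22e-3, j0=1e-12, n_id=1.6, c_int=5e-3, temp=300.0):
    """Toy hysteresis model: a single-diode J-V characteristic plus an
    interfacial capacitance c_int (F/cm^2) whose charging current C*dV/dt
    subtracts from the current on the forward scan (dV/dt > 0) and adds to it
    on the backward scan (dV/dt < 0). Illustrative values, not fitted data."""
    v_t = 8.617e-5 * temp                          # thermal voltage kT/q in V
    j_steady = j_ph - j0 * (np.exp(v / (n_id * v_t)) - 1.0)
    j_cap = c_int * scan_rate                      # A/cm^2
    return j_steady - j_cap, j_steady + j_cap      # forward scan, backward scan

v = np.linspace(0.0, 1.1, 300)
for rate in (0.05, 0.5):                           # scan rates in V/s
    jf, jb = jv_scan(v, rate)
    print("scan rate %.2f V/s: |J_fwd - J_bwd| = %.2f mA/cm^2"
          % (rate, 1e3 * abs(jf[0] - jb[0])))
```

Even this crude picture reproduces the basic phenomenology: the mismatch between forward and backward scans grows with the scan rate and with the interfacial capacitance, and vanishes in the quasi-static limit.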
Efficiency and processing improvement of single-junction and tandem PSCs Since the first use of lead halide-based perovskite as a light harvester in Dye-sensitized solar cells (DSCs) different approaches have been used to improve the PSCs optoelectronic properties [194]. One of the first approaches that have led to high-efficiency PSCs was to use a solid type hole transport material such as spiro-OMeTAD [212]. Regarding the work on the interfacial contact, different methods have been developed, such as UV(O 3 )/TiCl 4 surface treatments, to passivate defects at the compact TiO 2 /MAPbI 3 interface and improve the efficiency of PSCs [213]. Also, several optimization steps have been performed, notably introducing an anti-solvent step during the MAPbI 3 spin coating. This approach leads to smoother perovskite layer having larger grain size [214]. Using this method efficiency over 18% has been reached. It has been shown that the perovskite morphology plays an important role on the PSCs efficiency. Larger grain size usually leading to better performance [200,215]. Another strategy was to play on the perovskite composition which allows to tune its energy bandgap, its conduction and valence band energy levels to have a better energy levels alignment with the ETL and the HTL. Using mixed halide perovskite introducing Br into MAPbI 3 PCE over 20% has been obtained and the device stability has been improved [216]. Recent studies on the Cs 0.05 (MA 0.17 FA 0.83 ) 0.95 PbI 0.83 Br 0.17 compound have shown the great potential of this alloy by achieving efficiency above 20% with a high reproducibility [200]. Since the emergence of PSCs, and thanks to the perovskite large tunable range of bandgap, many groups have started to develop tandem solar cells. Among the main structure developed very interesting results have been achieved using CuInGaSe/perovskite, all perovskite and silicon/perovskite tandem solar cells, with record efficiency reaching 17.8, 20.3, and 23.6%, respectively [217][218][219]. Also using a panchromatic sensitizer density-voltage (J-V) curves [199]. This phenomenon is a notable mismatch between J-V curves measured during a forward scan (from negative to positive voltage) and a backward scan (from positive to negative voltage). The main consequence is the difficulty to determine the PSCs efficiency. Until now, the origin of hysteresis has been mainly discussed as coming from ferroelectric polarization, charge accumulation at the interfaces due to trapping de-trapping, and/or ionic migration of the perovskite [205][206][207]. We have proposed an equivalent circuit model that includes a capacitance induced by charge accumulations at the interface [208]. This model presented in Figure 20(a) can reproduce the large hysteresis observed in experimental data as it can be seen in Figure 20(b). We are now working on understanding the capacitance origin using the software SILVACO ATLAS [209]. Since the first identification of the hysteresis phenomenon in PSCs in 2014, no direct proof on the mechanism at the origin of the hysteresis has been published. Nevertheless, many strategies have been developed successfully to overcome this detrimental effect. Most of these strategies focus on modifying the interface between the selective contacts and the perovskite layer. Among the strategies developed to suppress the hysteresis, one of the first approaches proposed was to introduce fullerene (C 60 ) at the compact TiO 2 /MAPbI 3 interface in the standard planar structure [210]. 
Using this approach, efficiencies above 19% have recently been obtained [204]. Another strategy was to introduce a mesoporous TiO2 layer at the compact TiO2/MAPbI3 interface [202]. Using this mesoscopic structure, an efficiency above 20% has been reached by doping the mp-TiO2 with Li [202]. Another successful approach that has been proposed was to use an inverted structure with NiOx and PTAA as the hole and electron transport layers, respectively [203]. With this structure, the hysteresis has been strongly reduced and efficiencies over 18% were achieved [203]. As one can notice, engineering the PSC structure, and especially both interfaces of the perovskite layer, has been an efficient way to reduce the hysteresis in PSCs. This has been PSCs stability [227]. Concerning the compositional engineering, it has been shown that by modifying the halide and cation composition, introducing Br, Cs, and formamidinium in MAPbI3, highly stable PSCs could be obtained [200]. Regarding the architecture of the cells, a lot of work has been done on replacing materials such as spiro-OMeTAD due to its limited stability [230]. In this context, we specifically designed a series of functionalized poly(vinylcarbazole) bearing alkyl (PVK-[R2]2) or p-substituted diphenylamine moieties (PVK-[N(PhR′)2]2, see Figure 22) [231]. In a typical SnO2:F/c-TiO2/CH3NH3PbI3−xClx/HTM/Au planar PSC, functionalized PVK and spiro-OMeTAD led to similar photovoltaic parameters, i.e. V_OC, J_SC, and FF, with overall energy conversion efficiencies at 1 sun of about 14%. But a remarkable gain in stability was achieved when functionalized PVK was used as HTM. The efficiency remained stable for more than ten days (under dry-air conditions) with functionalized PVK as HTM, compared with one day with spiro-OMeTAD (Figure 23). While not yet fully understood, this effect could be due to the hydrophobicity of the polymer, which is able to protect the perovskite from atmospheric moisture. Another material reducing the stability of PSCs in the classical architecture is TiO2, due to its photocatalytic activity [227]. In order to replace TiO2, many candidates have been assessed; among them, SnO2 and BaSnO3 are two promising materials for the electron transport layer. Using these materials, efficiencies above 20% have been achieved with enhanced stability [201,232]. Another strategy that has been used to enhance PSC stability is interface engineering. One approach was to introduce an ultrathin Al2O3 layer in the PSC structure to prevent undesirable chemical reactions. Using this approach, several groups have successfully enhanced the stability of PSCs, which retained 90% of their initial efficiency after storage in air for 24 days [233,234]. Finally, interesting preliminary results have recently been obtained using perovskite layers having a Ruddlesden-Popper structure [235]. Even though the efficiency of these cells is still low (12%) compared to the classical architecture, they show promising stability against moisture [235]. presented in Figure 21(a), coded DX3, which exhibits a broad response into the near-infrared, up to ~1100 nm, in DSSC/perovskite tandem solar cells has been developed. A tandem structure presented in Figure 21(b) has been realized, achieving a conversion efficiency of 21.5% using a spectral splitting system [220].
These results approach the record efficiencies obtained with single-junction silicon solar cells, but they have been obtained by depositing the perovskite layer via spin coating, which cannot be used for the large-scale production of solar cells. Nevertheless, it is expected that if large-scale processes can be used for producing PSCs, perovskite-based tandem solar cells could soon be commercialized. Concerning the large-scale processing of perovskite solar cells, several techniques have been investigated but lead to lower efficiencies. Among them, we can mention spray-coating, blade-coating, roll-to-roll printing, vapor deposition, and blow-drying [221][222][223][224][225]. This last approach is very promising as it allows the fabrication of high-performance devices compared to spray-coating, blade-coating, and roll-to-roll printing, while having a lower cost than the vapor deposition technique [226]. Stability Many studies have reported that moisture leads to an irreversible degradation of the perovskite layers [227]. This leads to the formation of PbI2, which is soluble in water and could thus contaminate the environment, polluting the field and causing eco-toxicological problems. A reduced stability under UV light has been observed and attributed to reactions occurring at the interface of the c-TiO2 and/or mesoporous TiO2 with the perovskite layer [228]. One last issue is the thermal stability. It is very important to take it into account since, during solar cell operation, the temperature can strongly increase. The crystallographic structure being sensitive to temperature, degradation of the active layer can occur after exposing the device to elevated temperatures for medium to long times [229]. To improve the stability of PSCs, many strategies have been successfully developed. Structural and compositional engineering have been two efficient ways to dramatically improve the Luminescence-based characterization The light emitted from the radiative recombination of an electron-hole pair, called luminescence, is a signal that can be detected and that has been shown to carry numerous material and device properties. Measuring the luminescence is therefore a widely used characterization method. A non-exhaustive list of accessible material properties may include the following: lifetimes, bandgaps, defect positions within the bandgap, absorption, carrier temperature (discussed in more detail in Section 2.3), and radiative efficiency. Device-specific quantities can also be determined: series resistances, voltage, saturation currents, and external quantum efficiencies. After briefly recalling some theoretical elements of luminescence, a few examples from the literature illustrating the above list will be given. The light emission ϕ from a semiconductor volume element can be described by the generalized Planck law [236]: where E is the photon energy, α the absorptivity, n the optical index, ℏ the reduced Planck constant, c0 the speed of light in vacuum, T the temperature, and Δμ the quasi-Fermi-level splitting. Experimentally, only the light emitted from the material surface can be assessed, so that an integration of equation (2) over the emitting volume, along the light paths leading to surface emission, is needed [237][238][239]. In case the quasi-Fermi-level splitting, the absorption or the temperature is not constant in this volume, the integration of equation (2) is not straightforward.
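For reference, the generalized Planck law referred to above as equation (2) is commonly written in the following form, shown here as a proportionality since the exact numerical prefactor, which involves ℏ and c0, depends on whether the flux is expressed per unit volume or per unit solid angle; equation (3) corresponds to the same expression written for the surface emission, with the unitless absorbance A(E) in place of the absorptivity:

\[
\phi(E)\;\propto\;\alpha(E)\,n^{2}\,E^{2}\left[\exp\!\left(\frac{E-\Delta\mu}{k_{B}T}\right)-1\right]^{-1},
\qquad
\phi_{\mathrm{surf}}(E)\;\propto\;A(E)\,E^{2}\left[\exp\!\left(\frac{E-\Delta\mu}{k_{B}T}\right)-1\right]^{-1}.
\]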
For the sake of simplicity, let us first consider a case in which all those properties Conclusions and outlook In this section, the main challenges and recent progresses in the development of PSCs have been discussed. It has been shown that interface engineering was an efficient strategy to reduce the hysteresis impeding the efficiency assessment of PSCs. Concerning the efficiency, it has been shown that interface passivation and morphology control through compositional and process engineering was of tremendous importance to produce high-performance perovskite solar cells. Also, different large-scale processing techniques have been presented. Finally, different successful strategies used to improve the PSCs stability have been presented. All these recent developments show that PSCs have a strong potential. They are credible alternative to silicon solar cells for the large-scale development of the solar energy thanks to their promising efficiency and low-cost processing. The quasi-Fermi-level splitting, related to the cell voltage, can be obtained by photoluminescence, i.e. without electrical contacts. This has been used to investigate cells at different stages of the fabrication process [249][250][251][252]. Combined with the possibility of recording the luminescence with a spatial resolution, inhomogeneous behaviors have been investigated. This can be related to intrinsic (multicrystalline materials) or extrinsic properties (defects and cracks). Examples can be found for multiple technologies, such as multicrystalline silicon [250,253,254], CIGS [255][256][257][258][259][260], perovskites [260][261][262], III-V [263,264], dye-sensitized solar cells [265], or multijunction [266]. We can also note a particularly relevant use of the luminescence spectral information, with an absolute calibration, in the case of multijunction devices in the 2-wire configuration. In this geometry, the different subcells are connected in series, so that the voltage delivered by each of them is challenging to access. Nevertheless, since they absorb light in different spectral ranges, their luminescence emissions can be distinguished, from which the subcell voltages are accessed [267][268][269][270][271][272][273]. By combining electrical and light excitations of a semiconductor, the emitted luminescence is subject to carrier transport, allowing its properties to be accessed. Series resistances [274][275][276], and collection efficiency mapping have been reported [239,[277][278][279][280]. Using the reciprocity relation (4), the EQE could be determined from luminescence [281][282][283]. Interestingly, close to the bandgap, the current generated by a light excitation is low, so that the signal to noise ratio for the direct measurement of the EQE is low. By contrast, the light emission is the strongest in this energy range, so that the EQE determined by luminescence is more precise. Oppositely, for energies higher than the bandgap, no luminescence can be recorded, but the generated electrical current is stronger. Therefore, improved EQE results can be obtained by combining electrical measurements for energies higher than the bandgap, and luminescence measurements for energies close to the bandgap. The absorption, entering the luminescence equation (1), illustrates that electron and hole states are coupled radiatively. Thanks to this phenomenon, information on the density of states (conduction and valence bands, dopant, defects, excitons) can be investigated [248,284,285]. 
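For reference, the two reciprocity relations invoked above (equations (4) and (5) of this section) are commonly written as follows, where φ_EL is the electroluminescence photon flux, φ_bb the black-body photon flux at the cell temperature, V the applied voltage, and EQE_EL the external luminescence quantum efficiency (so that EQE_EL = 1 recovers the radiative limit V_OC = V_OC^rad):

\[
\phi_{\mathrm{EL}}(E)\;=\;\mathrm{EQE}_{\mathrm{PV}}(E)\,\phi_{\mathrm{bb}}(E)\left[\exp\!\left(\frac{qV}{k_{B}T}\right)-1\right],
\qquad
V_{\mathrm{OC}}\;=\;V_{\mathrm{OC}}^{\mathrm{rad}}\;+\;\frac{k_{B}T}{q}\,\ln\!\left(\mathrm{EQE}_{\mathrm{EL}}\right).
\]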
In the derivation of the light emitted from the surface, we assumed a constant quasi-Fermi-level splitting, absorption and temperature, for the sake of simplicity. However, this may not be verified, e.g. due to diffusion length in the order of the sample thickness, or absorption properties variations (such as bandgap grading in CIGS [286], or MQW shape grading during growth [287,288]). This can be used to convert a wavelength resolution into a depth-resolved information; in the same manner as an EQE can carry information on are position-independent, so that the surface emission can be written: In this equation, the absorptivity α (in m −1 units) has been replaced by the absorbance A (unitless). Although we assumed a constant absorption, quasi-Fermi-level splitting and temperature, we can still consider cases where this is not valid. In those cases, the luminescence can also carry a depth-resolved information, as will be illustrated later. Equation (3) describes the light emission in a general case, when a semiconductor is brought out of equilibrium by any excitation source. This excitation can be an illumination, so that the luminescence will be termed as photoluminescence, or an applied electrical bias, which is the case of the electroluminescence. In this latter case, a general framework has been developed in terms or reciprocity relations [238,240], which draws a link between the reciprocal mechanisms of a solar cell (conversion of light to electrical energy) and an LED (conversion of electrical energy to light). Considering that phenomena governing carrier transport in the dark and under illumination are similar [238,[241][242][243][244], and that the light entering and leaving the device follows the same optical path, a reciprocity relation can be derived that relates the external quantum efficiency of a solar cell (i.e. the ratio of collected electrons by incident photons) to the electroluminescence emitted normally from its surface: Following the same idea, a second reciprocity theorem relates the external quantum efficiency in electroluminescence EQE (i.e. ratio of emitted photons by injected electrons), to the distance of the cell V OC to the ideal V rad OC under the limit of 100% radiative efficiency [238]: The set of equations from (2) to (5) provides us with a toolbox, which can be used to access, from the luminescence, the material, and device properties listed in the introduction of this section. The quasi-Fermi-level splitting, governing the luminescence intensity (equation (2)), is related to the excess carrier population that a semiconductor can maintain. This is therefore a good indicator of the material/device quality, as reflected by the second reciprocity relation (5). Therefore, the luminescence efficiency is considered as a benchmark for comparing different solar cell technologies [238,[245][246][247][248]. photoelectron, and Auger electron spectroscopies combined with appropriate sputtering conditions) can provide crucial information to photovoltaic issues. It is widely recognized that as complex systems are developed, surfaces and interfaces begin to dominate and control the properties of nano-structured and multilayered materials. However, the chemical and physical nature of these surfaces and interfaces are often not rigorously measured or reported. 
Surface analysis techniques need to be extensively used to characterize significant fraction of atoms or molecules associated with surfaces and interfaces, like impurities, surface contamination, surface enrichment, or depletion which can dominate material properties. The combination of X-ray photoelectron spectroscopy (XPS), Auger electron spectroscopy (AES), and low-energy ion spectroscopy with a respective escape depth of less than 10, 4, and 1 nm is of particular interest. A particular case is presented herein on the highly efficient III-V multijunction cells, for which the capacity to investigate the surface and interface phenomena is particularly crucial to precisely optimize each step of the global cell elaboration process. This is especially true in heterostructures with multiple quantum wells, where the MQW are used to adjust the absorption edge keeping the lattice strain balanced [295]. Figure 24 shows that such achievement is possible by XPS Ar + depth profile on III-V multilayer stacks of InGaAs/GaAsP, where each layer is about 10-nm thick (13.5 nm/7.8 nm). The layers appear well separated and characterized with really low levels of oxygen, which could be related to some artifact during etching. Conventional and time-of-flight secondary ion mass spectrometry (TOFS-SIMS) profiling and imaging techniques bring complementary chemical information to the one obtained by such sequential depth-profiling analysis. Indeed, instead of the abraded surface, the removed matter during abrasion is measured. These mass spectrometry techniques are particularly suitable for organic PV devices characterization [296]. On all materials, they take advantages from the high spatial resolution and detection limit and can be quantitative with the implementation of XPS and nano-Auger techniques, allowing to probe chemical environments. The use of physical erosion is a classical way to perform depth analyses but may likely generate artifacts such as preferential sputtering, surface reconstruction or increased surface reactivity. To confirm the results arising from such a profiling method a common approach consists in the implementation of complementary techniques. Nano-Auger spectroscopy is a good candidate as the inherent high spatial resolution of the technique (12-nm routine spot size) enables the direct characterization of elements depth distribution on cross sections. We have made the demonstration of the capabilities of the new generation Auger nanoprobes on reference Al 0.7 Ga 0.3 As/GaAs superlattices and photovoltaic samples [297]. High spatial resolution can be surface recombination or diffusion length [289][290][291]. Luminescence wavelength resolution can be achieved with a spectrometer [288,292], or with two detectors of different wavelength response [282,293,294] (e.g. with cameras made of different materials, or a camera with a set of filters). Accurate chemical characterization In parts of the research community, there is growing recognition that studies and published reports on the properties and behaviors of nanomaterials and cutting-edge architectures often have reported inadequate or incomplete characterization. With the increasing importance of complex and miniaturized system in fundamental research for photovoltaic technological applications (heterostructures, tandems systems, quantum dots…) new characterization requirements emerge, such as fine depth probing to analyze buried interfaces or high spatial resolution, to address multimaterials or textured-based structures. 
Characterization platforms and strategies combining advanced analytical resources are widely developed with the objective to get a better understanding of the surfaces and interfaces properties and their roles in meeting some of the current challenges and concerns. In addition, it is desirable to recognize the nature of unexpected or underestimated challenges associated with reproducible synthesis and characterization of materials, including the difficulties of maintaining desired materials properties during handling and processing due to their dynamic nature. From characterization point of view, the question of the preservation and the reliability of the original information during material transfer or analysis need also to be carefully considered. Chemical characterizations of solar materials are a key step to improve the fundamental knowledge of physicochemical processes and constitute a robust base to optimize modules elaboration and to reach ultimate solar cells performance. The use of surface sensitive analysis methods (including scanning probe microscopy, X-ray investigated [315]. This approach is efficient thanks to strain-balanced architecture which consist of alternating layers of wells and barriers under compressive and tensile stress. This permits to consider a large number of wells while preventing the formation of dislocations during crystal growth. On the other hand, the use of barriers is a drawback for the collection of the photo-generated carriers and more generally for the electronic transport quality in the MQW. Indeed, since transport is a succession of thermal escape, assisted tunnel escape, and, at best, direct tunneling across a barrier, the average carrier velocity is low (≈10 4 cm/s) [316]. Finally, the recombination rate is large, and impacts both open-circuit voltage and short-circuit current. Furthermore, thanks to barriers some minibands can occur. The wave functions of carriers in minibands are Bloch waves, meaning that propagation is efficient. Our theoretical study in InGaAs/GaAs/GaAsP cells sheds light on minibands in which the average velocity of carriers is around 10 7 cm/s. However, we also show that, without an adapted design, such minibands are inefficient since they connect only a few wells. We show that a graded interlayer thickness allows to largely increase the extraction of carriers by the minibands [317]. More generally quantum engineering based on behaviors such as resonant tunneling, miniband, and phonon-assisted tunneling will be necessary to design new concepts like quantum ratchet in intermediate band solar cell or contact in hot-carrier solar cells. For this quantum engineering modeling is essential. Conclusions and outlook Previous sections have described how emerging materials and technologies may help to push solar energy to the next level. We have seen that the margin of progression of photovoltaic solar energy is still very important in terms of achievable efficiencies: even with the current remarkable advances we are only about half way to reach our goal. Today, multijunctions hold the efficiency records, and are benefiting from progresses made possible by nanotechnologies. These are still quite costly and are a complex stack, but simpler alternatives do exist and may ultimately provide a pathway for competitiveness of highest efficiency devices. 
Other remarkable progresses are also made in terms of reducing active material usage, here again, the limit is quite far away according to the latest advances of nanotechnologies and nanophotonics. In both cases, more proofs of principle are needed as well as the technologies to make them affordable. The margin of progression is also very large with new sustainable materials (OPV, DSC, hybrid perovskite) because they use very abundant elements. These materials moreover can be prepared at relatively low temperatures, under atmospheric pressure. They can be printed on many types of substrates and have sometimes esthetic appearance so that new applications and new ways of achieved (≈10 nm), depending on the surface topography, Auger process yield of element considered (element sensitivity), and the global chemical state (superficial contamination or oxidation level) and obviously the acquisition mode employed (point, line, or element mapping). Based on the state of the art of Auger performances and limitations published by Martinez et al. [298], nano-Auger represent a powerful tool for numerous photovoltaic devices characterization and is totally compatible with XPS, SIMS and TOFS-SIMS when localized chemical analyses are needed [299]. Nevertheless, the surface preservation and preparation prior to analysis is a key step, even more than for XPS. Different methodologies can be employed: crosssection polishing (using a chemical polisher or focused ion beam) to reduce the roughness, chemical engineering or physical Ar + abrasion to eliminate or reduce surface superficial oxidation layer and carbon contamination [300]. But, it cannot be excluded once again that such surface preparation methods may perturb the original information. We are currently developing cross-chemical characterization methodologies to demonstrate the representativeness of the information and to reach a multiscale characterization. An important challenge in the future will be the development of new generation Ar + profiling, with adaptable cluster sizes, enabling to preserve the initial information [301]. Such approach allows to accurately follow the diffusion of particular elements in bulk materials or to access buried interfaces. Proofs of concept have been recently achieved on functional oxides which are particularly known to be modified and damaged by traditional monoatomic etching modes [302][303][304]. Atomic force microscopy experiments could be performed to monitor the effect of the various bombardments on the remaining surface morphology. Quantum electronic transport modeling In the past decades, researchers, including the group at IM2NP, have developed models and corresponding numerical calculations suited to the study of the quantum electron transport in nano-scaled semiconductor devices [305][306][307]. These tools have been adapted to photonic devices and especially to PV cells [308][309][310]. They could model an ultrathin solar cell [311,312], at the cost of introducing some approximations, such as the radiative limit. It has been also possible to restrict the investigation to a small active region of the device in order to assume a more realistic approach [313,314]. Generally, such approaches are numerically challenging but will be essential to find new concepts. Quantum phenomena open a wide door to innovation because they can lead to nonlinear and counterintuitive behaviors, and possibly to keys to go beyond the SQ limit. 
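As a toy illustration of the tunneling step invoked above, the sketch below evaluates the textbook transmission probability of an electron through a single rectangular barrier; the barrier height, width and effective mass are generic illustrative values, not parameters of the InGaAs/GaAs/GaAsP structures discussed here.

```python
import numpy as np

HBAR = 1.054571817e-34    # J*s
M0 = 9.1093837015e-31     # electron rest mass, kg
EV = 1.602176634e-19      # J per eV

def barrier_transmission(E_eV, V0_eV, width_nm, m_eff=0.067):
    """Textbook transmission probability through one rectangular barrier,
    valid for 0 < E < V0, with effective mass m_eff * m0."""
    E, V0 = E_eV * EV, V0_eV * EV
    a, m = width_nm * 1e-9, m_eff * M0
    kappa = np.sqrt(2.0 * m * (V0 - E)) / HBAR
    return 1.0 / (1.0 + (V0 ** 2 * np.sinh(kappa * a) ** 2) / (4.0 * E * (V0 - E)))

# Illustrative numbers: 0.3 eV barrier, carrier at 0.1 eV, GaAs-like effective mass.
for width in (2.0, 5.0, 8.0):   # barrier thickness in nm
    print(f"width {width:>4.1f} nm -> T = {barrier_transmission(0.1, 0.3, width):.3e}")
```

The rapid decay of the transmission with barrier thickness is the reason why, without minibands or thin graded barriers, carrier extraction from deep wells is slow.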
As an example MQW materials which allow to tailor the optical absorption of the solar cell has been integration can be proposed. Remaining issues include the long-term stability and possible environmental concerns due to the use of toxic elements (e.g. in some perovskites). Progress is also observed in materials characterization. To give an example, luminescence-based technologies, which can directly map energy conversion properties (via quasi-Fermi-level splitting) from the microscale to the macroscale (even at the solar farm scale) have seen a strong development recently. Those contribute highly to understanding of material, rapid development of new devices and reliability studies of modules. So, what is left? One of the next big challenges is related to the variability of the solar source, and its ability to meet the demand minute by minute is often questioned. The complementary answers being developed at the moment range from a better management of the source (and sometimes of the usages) to better and cheaper energy storage capabilities. In this way, solar to hydrogen conversion is a promising direction for solar energy. Storing solar energy into hydrogen via water splitting in electrolyzers is considered [318]. The produced hydrogen can also be transported to compensate geographical fluctuation of the solar resources. Solar cells coupled to electrolyzer are currently an active topic, with recent achievement of world records of solar to hydrogen production, at 24.4% in outdoor conditions (using CPV InGaP/GaAs/Ge cells and proton exchange membrane (PEM) electrolyzers) [319], and 30% under normalized illumination (using an InGaP/GaAs/GaInNAsSb triple junction and PEM electrolyzers) [320]. Here as well, emerging technologies may be leading the way for the undergoing global energy transition. Disclosure statement No potential conflict of interest was reported by the authors. Funding This work was supported by the New Energy and Industrial Technology Development Organization (NEDO, Japan) and the Japan Society for the Promotion of Science (JSPS), the CNRS and RCAST (LIA NextPV).
21,327.6
2018-04-10T00:00:00.000
[ "Engineering", "Environmental Science", "Materials Science" ]
Effects of the temperature on the fatigue lifetime reinforcement of a filled NR : Natural rubber (NR) exhibits extraordinary physical properties. Among them, its remarkable fatigue resistance was reported by Cadwell et al. as soon as 1940 (Cadwell et al. 1940). In particular, they found that NR exhibits a strong lifetime reinforcement for non-relaxing loadings ( i.e. for R > 0). Since it was not observed in the case of non-crystallizable rubbers, such reinforcement is generally attributed to strain-induced crystallization (SIC). In automotive applications, NR is used in anti-vibratory systems subjected to high temperatures. Surprisingly, few studies investigated the effect of temperature on the fatigue properties of NR, and more particularly on the lifetime reinforcement (Bathias et al. 1998), while SIC is a highly thermosensitive phenomenon (Trabelsi et al. 2002). The present study aims therefore at investigating how temperature affects the fatigue life reinforcement due to SIC under non-relaxing loading conditions. Fatigue experiments are first carried out at 23 ◦ C for loading ratios ranging from -0.25 to 0.35, before being compared to results obtained at 90 ◦ C and 110 ◦ C where the ability of NR to crystallize is reduced or cancelled. Fatigue damage has been analyzed at both the macro and the microscopic scales. As expected, the material exhibits a strong lifetime reinforcement at 23 ◦ C and the fracture surfaces are peopled with SIC markers (wrenchings (Le Cam et al. 2004), striations (Le Cam and Toussaint 2010, Ruellan et al. 2018) and cones (Ruellan et al INTRODUCTION Numerous studies investigated fatigue of elastomers at room temperature, especially in the case of the natural rubber (NR).Nevertheless, the methodologies used differed in terms of sample geometry, material tested and loading conditions applied.Volumetric samples were most of the time preferred since they are more representative of rubber parts: Cadwell et al. (1940) introduced cylindrical samples before Beatty (1964) defined a Diabolo-like geometry that will be used in the following as the reference one.In the case of multiaxial fatigue experiments, samples with lower radius of curvature were used.Concerning the end-oflife criterion applied, it differed from a study to another: (i) the sample failure (Cadwell et al. 1940), (ii) the appearance of a crack of a given length at the sample surface (Svensson 1981, Saintier 2000, Le Cam 2005, Ostoja-Kuczynski 2005), (iii) a drop of a physical parameter to a critical value (Mars 2001, Neuhaus et al. 2017), (iv) the brutal decrease of the maximal reaction force (Ostoja-Kuczynski et al. 2003, Ruellan et al. 2018).All these differences in terms of experimental conditions highlighted in this brief overview makes difficult any comparison between studies (for further information, the reader can refer to the state of the art in reference (Ruellan et al. 2018)). However, general comments on the fatigue behavior of elastomers can be drawn with respect to the loading condition and the temperature.The fatigue life of NR strongly depends on the mean stress.Indeed, a lifetime reinforcement is observed for positive loadings ratios.It was first reported in the pioneering work by Cadwell et al. 
(1940).As no such lifetime reinforcement was observed in the case of noncrystallizable rubber (Lindley 1974, Fielding 1943), the reinforcement was commonly attributed to straininduced crystallization (SIC).The fatigue properties of NR are also influenced by temperature since it affects its ability to crystallize under tension.In the case of static loadings, Treloar (1975) initially estimated that the crystallites would completely disappear between 75 and 100 • C. Later, the development of crystallinity measurements performed by WAXS enabled the determination of this threshold more accurately: 75 • C (Bruening et al. 2015), 80 • C (Albouy et al. 2005, Candau et al. 2015) or 100 • C (Trabelsi et al. 2002), even for large strains.Considering fatigue loadings, several studies investigated the effect of temperature in the case of relaxing loadings (Cadwell et al. 1940, Beatty 1964, Neuhaus et al. 2017).Generally, a decrease in fatigue life was measured with the increase in temperature on plane samples (Duan et al. 2016) and Diabolos samples (Lu 1991, Neuhaus et al. 2017).Under non-relaxing loadings, where the lifetime reinforcement occurs at room temperature, Bathias et al. (1998) showed that the lifetime reinforcement was still present at 80 • C but occurred for more important mean stress than at 23 • C. Therefore, the investigation of the fatigue behavior of NR under more exhaustive loading conditions and temperatures is required to bring additional information on the effect of temperature on the lifetime reinforcement.It should be noted that filled NR is subjected to a significant self-heating, which is all the more important in the case of volumetric samples.For example, Lu (1991) measured a difference of 50 • C between the bulk and the surface of a Diabolo sample tested under fatigue loadings.Therefore, self-heating has also to be taken into account in the investigation of fatigue of NR.This brief overview highlighted the lack of studies investigating the effect of temperature on the lifetime reinforcement of NR due to SIC.The present study therefore addresses this topic.The next section gives the experimental setup, then the results are presented and discussed.Concluding remarks close the paper. Material and sample geometry The material considered is a carbon black filled natural rubber (cis-1,4 polyisoprene) vulcanized with sulphur.Samples tested are Diabolo samples.All the details are provided in reference (Ruellan et al. 2018). Loading conditions The fatigue tests were performed with a uni-axial MTS Landmark equipped with a homemade experimental apparatus.This apparatus enables us to test simultaneously and independently eight Diabolo samples, which compensates the fatigue tests duration and the dispersion in the fatigue live. The tests were performed under prescribed displacement.The corresponding local deformation was calculated by FEA at the sample surface in the median zone.In order to investigate the effect of temperature on the fatigue properties, a Servathin heating chamber was used and three temperatures were applied: 23, 90 and 110 • C. 
A pyrometer tracked the temperature of a material point located in the Diabolo median surface.For that purpose, a second homemade system has been developed to provide the suitable kinematics to the pyrometer.The frequency was chosen in such a way that the global strain rate ε was kept constant, ranging between 1.8 and 2.4 s −1 for one test to another one in order to limit self-heating.In practice, the frequency ranged between 1 and 4 Hz, depending on the strain amplitude.Five different loading ratios R ε = ε min εmax were used: -0.25; 0; 0.125; 0.25 and 0.35.It should be noted that loading ratios inferior, equal and superior to zero correspond to tension-compression, repeated tension and tension-tension, respectively. End-of-life criterion Considering the crack initiation approach, a physical criterion based on the maximal reaction force evolution was chosen.Three regimes were obtained during the fatigue test: the softening corresponding to a decrease in the maximal reaction force, its stabilization and its drop (brutal or not, depending on the loading and the environmental conditions).The number of cycles at crack initiation is denoted N i .It stands for the number of cycles at the end of the plateau, when the derivative of F max is no longer constant.In practice, it corresponds to the occurrence of a macroscopic crack whose length is at the most 5 mm at the sample surface. Scanning Electron Microscopy Fracture surfaces were observed with a JSM JEOL 7100 F scanning electron microscope (SEM).In addition, the SEM is coupled with an Oxford Instrument X Max Energy Dispersive Spectrometer of Xrays (EDS) and an Aztzec software in order to determine the surface fracture composition, especially in the crack initiation zone.The fracture surfaces to be analyzed were previously metallized by vapour deposition of an Au-Pd layer. RESULTS As the reference, results are first presented for fatigue tests carried out at 23 • C in terms of fatigue life and damage mechanism.They are then compared to results obtained at 90 and 110 • C. Finally, a discussion on the effect of the temperature on the lifetime reinforcement is proposed. 23 • C The iso-lifetime curves are provided with respect to the loading conditions in the Haigh diagram presented in Figure 1.The black squares correspond to experimental fatigue tests defined according to the amplitude of force and the mean force applied.Even though the tests are prescribed in terms of displacement, the Haigh diagram is plotted in terms of force.Further discussion on the quantity to use for lifetime representation are given in Ruellan et al. (2018).Each square corresponds to a mean fatigue life value calculated from eight individual tests.In this diagram, the fatigue lifetimes range from 10 4 to 10 6 cycles, with N 1 < N 2 < N 3 < N 4 .The fatigue life of the material at 23 • C can be described with respect to the loading ratio: • for tension-compression loadings (-0.25 < R F < 0), the iso-lifetime curves are decreasing monotonously.Therefore, it is concluded that the fatigue damage is driven by the maximum of loading, • in the case of tension-tension loadings, the slope of the iso-lifetime curves increases in absolute value for loading ratios between 0 < R F < 0.125.For R F > 0.125, their slope becomes positive.For a given F amp , an increase in F mean increases the fatigue life, which illustrates the classical fatigue life reinforcement observed under nonrelaxing loadings.Note that for loading ratios R F > 0.25, the reinforcement is less pronounced. 
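The end-of-life criterion described above (N_i taken at the end of the F_max plateau, where the derivative of F_max stops being constant) lends itself to a simple numerical detection. The sketch below generates a synthetic F_max history with softening, plateau and final-drop phases, and locates N_i as the first cycle after the plateau window where the derivative departs from its plateau statistics; the signal shape, noise level and threshold are assumptions made for illustration only.

```python
import numpy as np

def detect_initiation(cycles, f_max, plateau_frac=(0.3, 0.6), tol=8.0):
    """Estimate the number of cycles at crack initiation N_i as the first cycle,
    after the plateau window, where the derivative of F_max deviates from its
    plateau mean by more than `tol` plateau standard deviations."""
    dF = np.gradient(f_max, cycles)
    lo, hi = (int(p * len(cycles)) for p in plateau_frac)
    mu, sd = dF[lo:hi].mean(), dF[lo:hi].std() + 1e-12
    for i in range(hi, len(cycles)):
        if abs(dF[i] - mu) > tol * sd:
            return cycles[i]
    return None   # no initiation detected within the record

# Synthetic F_max history (illustrative): softening, plateau, then final decrease.
N = np.arange(1, 200001, 100)
f_max = 1.0 - 0.05 * np.exp(-N / 2.0e4)                           # softening to a plateau
f_max = np.where(N > 1.6e5, f_max - 2.0e-5 * (N - 1.6e5), f_max)  # drop after ~160k cycles
f_max = f_max + 2.0e-4 * np.random.default_rng(1).normal(size=N.size)

print("estimated N_i:", detect_initiation(N, f_max))
```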
Damage analysis performed at both the macro and the microscopic scales on the failure surface of Diabolo samples after fatigue failure put into light the strong effect of SIC on the damage mechanisms.In the reinforcement zone (i.e. for R F > 0), wrenchings, fatigue striations and cones were observed.They are referred to as SIC markers in the following.This result confirms the role of SIC in the lifetime reinforcement process.More details on damage analysis are given in reference (Ruellan et al. 2018).The results can be summed up as follows: • for tension-compression loadings (-0.25 < R F < 0), the iso-fatigue lives are monotonously decreasing, as for 23 • C.However, it is to note that the fatigue lives for R F = 0 loadings are inferior by a factor 2 to the ones measured at 23 • C.This result corroborates previous results in this field (Lake andLindley 1964, Neuhaus et al. 2017), • under tension-tension loadings, a slight lifetime reinforcement occurs for -0.25 < R F < 0 loadings since the slope of the iso-lifetime curves is positive.It is more important for the shortest lifetimes.Even though the reinforcement is less pronounced than at 23 • C, it does not disappear.This result can appear surprising since the crystallites are assumed to be melted at such temperature in the case of static loadings (Candau et al. 2015).For R F > 0.25 loadings, the reinforcement decreases.This result was already observed by Cadwell et al. (1940) at 23 • C, for a sufficient mean loading.It could indicate that damage induced by the important maximal loading overcomes the reinforcing effect of SIC. Damage analysis revealed the disappearance of SIC markers (wrenchings, striations and cones), while a lifetime reinforcement is still observed.Therefore it is concluded that SIC markers require a certain level of crystallinity to form.A less pronounced lifetime reinforcement was observed at 90 • C.However, the markers of SIC disappeared from the Diabolo samples fracture surface. Since the crystallinity is assumed to be very small or equal to zero under static loadings, fatigue loadings appear therefore as a promoter of SIC.This was discussed in Beurrot-Borgarino et al. (2013).This result also means that even a small crystallinity value is sufficient to induce the fatigue reinforcement.Furthermore, the fact that no SIC marker forms at such temperature while a lifetime reinforcement is still observed could point out the existence of a crystallinity threshold in the formation of these markers.Our results differ from those presented in Bathias et al. (1998).Indeed, the authors measured a similar reinforcement at 80 • C and at 23 • C, but occurring under a higher mean stress.Considering the Haigh diagram we obtained, this would have corresponded to a positive shift of the F mean value at which the reinforcement is observed, which is not the case for the filled natural rubber we have tested.Of course, if the Haigh diagram is plotted in terms of displacement instead of force, the mean displacement at which the reinforcement is observed at 90 • C increases.This is due to the fact that the loading is considered as nonrelaxing in terms of displacement, but the minimum force strongly decreases during the test at elevated temperature, becomes inferior to zero, and the reinforcement is lost.At 110 • C, the lifetime reinforcement vanishes and the material behaves as a non crystallizable rubber, as presumed by Lindley (1974). 
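For readers reconstructing such Haigh diagrams, the small helper below converts the (F_min, F_max) description of a cycle into the mean force, force amplitude and loading ratio R_F used above; the force values are arbitrary illustrations, not the experimental levels of this study.

```python
import numpy as np

def haigh_coordinates(f_min, f_max):
    """Convert the (F_min, F_max) description of a fatigue cycle into the
    Haigh-diagram coordinates (F_mean, F_amp) and the loading ratio R_F."""
    f_min, f_max = np.asarray(f_min, float), np.asarray(f_max, float)
    return 0.5 * (f_max + f_min), 0.5 * (f_max - f_min), f_min / f_max

# Cycles spanning tension-compression (R_F < 0), repeated tension (R_F = 0) and
# tension-tension (R_F > 0), the regime where the SIC-driven reinforcement appears.
f_max = np.full(4, 400.0)                        # arbitrary maximal force (N)
f_min = np.array([-0.25, 0.0, 0.125, 0.35]) * f_max
f_mean, f_amp, r_f = haigh_coordinates(f_min, f_max)
for vals in zip(r_f, f_mean, f_amp):
    print("R_F = %+.3f -> F_mean = %6.1f N, F_amp = %6.1f N" % vals)
```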
3.2 90 °C Fatigue life and damage mechanisms at 90 °C differ from those obtained at 23 °C. Figure 2 presents the Haigh diagram obtained after fatigue experiments performed at 90 °C. 3.3 110 °C The Haigh diagram obtained at 110 °C is presented in Figure 3. The iso-lifetime curves decrease monotonically with F mean. This indicates that F max drives the fatigue life and that the reinforcement is lost. The damage mechanisms are similar to the ones observed at 90 °C. Figure 4: Haigh diagram at 23, 90 and 110 °C in red, blue and green, respectively. At 23 °C, a strong lifetime reinforcement is measured and attributed to SIC. The presence of wrenchings, striations and cones on the fracture surfaces of the Diabolo samples testifies to the activation of crystallization. CONCLUSION This paper investigates the role of SIC in the lifetime reinforcement of NR at different temperatures. Uni-axial fatigue experiments were carried out under loading ratios ranging from -0.25 to 0.35 at both room and elevated temperatures (90 and 110 °C). The effect of the loading on the fatigue life is highlighted in the Haigh diagram, and post-mortem analyses at both the micro- and the macroscopic scales are used to better understand the damage mechanisms. As expected, a strong lifetime reinforcement is observed at 23 °C under positive loading ratios. At 90 °C, the reinforcement is less pronounced but still observed. This suggests that fatigue loading is a promoter of SIC and that only a small amount of crystallites is required to activate the reinforcement. Finally, the lifetime reinforcement vanishes at 110 °C.
3,346
2019-06-07T00:00:00.000
[ "Materials Science", "Engineering" ]
Design of Optimal Output Regulators for Dual-Rate Linear Discrete-Time Systems Based on the Lifting Technique A design strategy of optimal output regulators for dual-rate discrete-time systems, whose output sampling period is an integer multiple of the input updating period, is proposed. At first, by using the discrete lifting technique, the dual-rate discrete-time system is converted to a single-rate augmented system in form and the lifted state-space model is constructed. Correspondingly, the performance index of the original system is modified to the performance index of the single-rate augmented system. And the original problem is transformed into an output regulation problem for the augmented system. Then, according to the optimal regulator theory, an optimal output regulator for the dual-rate discrete-time system is derived. In the meantime, the existence conditions of the optimal output regulator are discussed. Finally, a numerical example is included to illustrate the effectiveness of the proposed method. Introduction Sampling systems are obtained by discretization of a continuous signal for an actual system.For sampling systems, multirate systems arise when the components of the same system have several different sampling rates [1].In many complex systems, it is unrealistic or sometimes impossible to sample all the physical signals uniformly at one single rate.For example, in chemical industrial process control, the output sampling rate is much slower than the input updating rate because the control output (such as measuring gas molecular weight, etc.) is obtained from laboratory analysis.As the input and the output are sampled in two different sampling periods, the system is often described as a dual-rate system [2,3].The dual-rate system is a special and simple case of multirate systems.In recent years, multirate systems in petrochemical processes [3], hard disk drives [4], and optimal filtering [5] have obtained full application.At the same time, multirate systems have also gained numerous theoretical developments in predictive control [6], stabilization [7], repetitive control [8], robust control [9], and so on. An optimal output regulator is designed by using linear quadratic regulator theory.The output regulation problem aims to find the optimal control law that can minimize the sum of the dynamic deviation of output and the dissipated energy of the control variables with the given weight [10].Research based on linear quadratic regulator theory for multirate system has always been the focus.By solving the linear quadratic regulation problem, a new adaptive technique is proposed for the control of the temperature in a greenhouse in [11].A time-invariant, single-rate design approach for a multirate optimal regulator is presented, and, as an example, a typical problem in the research of control, the control of an inverted pendulum on a cart, is studied in [12].In addition, the lifting technique is a standard tool to handle multirate systems in [2].By using the lifting technique, the multirate system can be converted into a single-rate system.Thus, the control problem for multirate systems can be solved by applying the methods of single-rate systems. 
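As a concrete illustration of the lifting idea just described, the sketch below lifts a fast-rate state-space model x(k+1) = A x(k) + B u(k), whose input is updated every fast period while the state and output are sampled only every q fast periods, into a single-rate model on the slow time scale by stacking the q intersample inputs; the numerical matrices are illustrative, not the paper's example.

```python
import numpy as np

def lift_dual_rate(A, B, q):
    """Lift a fast-rate model x[k+1] = A x[k] + B u[k] whose state/output are
    sampled only every q input-updating periods.  Returns (A_hat, B_hat) with
    x[(j+1)q] = A_hat x[jq] + B_hat u_tilde[j],
    u_tilde[j] = [u[jq]; u[jq+1]; ...; u[jq+q-1]]  (stacked fast-rate inputs)."""
    A_hat = np.linalg.matrix_power(A, q)
    # Column blocks: A^(q-1) B, A^(q-2) B, ..., A B, B
    B_hat = np.hstack([np.linalg.matrix_power(A, q - 1 - i) @ B for i in range(q)])
    return A_hat, B_hat

# Illustrative fast-rate model (not the paper's numerical example):
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
q = 3   # output sampling period = 3 x input updating period

A_hat, B_hat = lift_dual_rate(A, B, q)
print(A_hat.shape, B_hat.shape)   # (2, 2) and (2, 3): one column block per fast input
```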
In this regard, preview control based on the optimal regulation theory for discrete-time multirate systems has produced very good research results recently [13][14][15].An optimal preview controller design method for discrete-time multirate input systems was proposed in [13].References [14,15] applied this method to descriptor systems; the optimal preview control problem for discrete-time descriptor causal systems in a multirate setting and discrete-time descriptor noncausal multirate systems was studied, separately.But the studies on designing the optimal output regulator for multirate systems were relatively few.Reference [16] studied the robustness of optimal output regulators based on multirate sampling of plant output.Reference [17] introduced the design problem of the optimal output regulator for discretetime descriptor causal multirate input systems. On the basis of [16,17] and by using the method of [13][14][15], this paper studies the design problem of optimal output regulators for dual-rate discrete-time systems whose output sampling period is an integer multiple of the input updating period.The basic method is as follows.First, by using the lifting technique, the normal dual-rate system is converted to a single-rate augmented system.Second, by transformation, this problem becomes an optimal control problem of the augmented system.Finally, returning to the original system, an optimal output regulator for the dual-rate discrete-time system is obtained.Furthermore, the stabilizability and detectability of the single-rate augmented system are discussed, and their rigorous mathematical proofs are given.The following lemmas will be used in this paper (see [18,19]). Description and Related Assumptions Consider the following linear discrete-time system: where () ∈ , () ∈ , and () ∈ represent the state vector, the control input vector, and the output vector, respectively., , are known constant matrices with appropriate dimensions. Remark 3. If (A1) and (A2) hold, the system is dual-rate sampled.That is, the state vector () and output vector () can be measured once during every sampling interval.The input vector () can only be refreshed once during every 1 sampling interval. (A3) Assume / 1 = in this paper, where is a positive integer and > 1. Remark 4. (A5) guarantees the existence of the state feedback during the design of the optimal output regulator. We introduce the quadratic performance index function for system (1): where the weight matrices satisfy > 0, > 0. We would like to design an optimal output regulator for system (1) with dual-rate setting under the performance index (2). Design of the Optimal Output Regulator In this section, the optimal output regulator for system (1) with a dual-rate setting is obtained. Derivation of the Lifting System. Based on the multirate study methods, the dual-rate discrete-time system is converted into a single-rate augmented system in form by using the lifting technique.Lifting technique is a typical approach to multirate control.By using this technique, a fast-rate signal can be mapped to a slow-rate signal with increased dimensionality.While this operation maps a fast-rate signal to a slow-rate signal, the inverse operation maps a lifted signal to a fast-rate signal in [20,21]. Next, a lifted state-space model is constructed.First, the lifting technique is applied to the input vectors.According to (A2), (A3), and (A4), the input vector () of system (1) can be input at = 1 ( = 0, 1, 2, . ..) 
and ( Second, we apply the lifting technique to the state vectors and output vectors.According to (A1), the state vectors can only be measured at = ( = 0, 1, 2, . ..),where is a positive integer.That is, the state vector () cannot be used in the state feedback if ̸ = .Using the first group of equations of (3), we get By that analogy, we have Continuing the process of lifting by using the second group of equations of (3) and ( 5), we get then we have By using the other equations of (3), we continue lifting and obtain We introduce the vectors as follows: Then ( 9) can be written as where Correspondingly, the output equation is Similarly, by using the lifting technique, the output equation can be written as . . . . . . Now we have successfully transformed the discrete-time system (1) with a dual-rate setting into the single-rate system (18). Substituting ỹ() = Ĉx() + Dũ() into ( 19), we get Because of F > 0 and Ĥ > 0, we have Ĥ + D FD > 0 and Further, by means of contract transformation in the matrix, we can eliminate the cross term of x() and ũ() in (20); that is, Thus (20) becomes As we all know, contract transformation cannot change the positive definiteness of the matrix.So Ĉ FĈ − Ĉ FD [ Ĥ + D FD ] −1 D FĈ ≥ 0 and Ĥ + D FD > 0. Now the problem becomes an optimal control problem of the augmented system (18) under the performance index (23). By using the optimal regulator theory in [10], the following theorem is obtained. and is the unique semipositive definite solution of the algebraic Riccati equation: Proof of Theorem 5 To prove Theorem 5, the following two lemmas are needed. Noticing that elementary transformation of the matrix cannot change the rank of the matrix, and applying the PBH test, we have ) and such that = , ̸ = 0. Obviously, we can derive Mathematical Problems in Engineering 7 So If 1 + + ⋅ ⋅ ⋅ + 1 −1 ̸ = 0, B ̸ = 0 is obtained.Using Lemma 2 again, ( Â B) is stabilizable.This completes the proof. Lemma 7. One has the following: is detectable if and only if ( ) is detectable. So ( Because is reversible and we have Mathematical Problems in Engineering Therefore, is detectable if and only if ( Ĉ Â) is detectable.And notice Applying the result of [22], ( Ĉ Â) is detectable if and only if ( ) is detectable.This completes the proof. Proof of Theorem 5.In Section 3.2, the output regulation problem of system (1) with dual-rate setting (satisfying (A1)-(A4)) has been transformed into an optimal control problem of the augmented system (18) with the performance index (23). According to Lemmas 6 and 7, if (A5) and (A6) hold, the optimal control input of system (18) minimizing the performance index (23) is where is determined by (27) and is the unique semipositive definite solution of the algebraic Riccati equation (28). Through verifying, all of the conditions of Theorem 5 are satisfied.According to Theorem 5, we obtain the feedback matrix: Next we perform MATLAB simulation results.The output response of the discrete-time system with dual-rate setting is shown in Figure 1 and the corresponding control input is shown in Figure 2. In the upper right of these two figures, there are magnified simulated images around = 60.Note that the output response of the system can reach steady state rapidly from Figure 1, and the designed optimal output regulator is effective.The input curve of the closed-loop system is shown in Figure 2. 
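Once the system has been lifted, the design reduces to a standard discrete-time LQR problem, so a numerical sketch can rely on an off-the-shelf Riccati solver. The example below reuses the illustrative lifted matrices from the earlier sketch, penalizes the slow-rate output and the stacked input (ignoring, for simplicity, the state/input cross terms that the paper eliminates by a contract transformation), solves the discrete algebraic Riccati equation with SciPy, and simulates a few slow-rate steps; matrices and weights are placeholders, not the paper's numerical example.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative fast-rate model and lifting (same construction as the earlier sketch).
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
C = np.array([[1.0, 0.0]])     # slow-rate output y = C x
q = 3                          # output sampling period = 3 x input updating period

A_hat = np.linalg.matrix_power(A, q)
B_hat = np.hstack([np.linalg.matrix_power(A, q - 1 - i) @ B for i in range(q)])

# Simplified lifted weights: penalize the sampled output and each fast-rate input.
Q = C.T @ C
R = 0.1 * np.eye(B_hat.shape[1])

# Discrete algebraic Riccati equation and optimal state feedback for the lifted system.
P = solve_discrete_are(A_hat, B_hat, Q, R)
K = np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)

# Closed-loop simulation: one lifted step applies q fast-rate inputs, after which
# the state and output are sampled again at the slow rate.
x = np.array([1.0, -0.5])
for j in range(5):
    u_stacked = -K @ x            # optimal stacked input u(jq), ..., u(jq+q-1)
    x = A_hat @ x + B_hat @ u_stacked
    print(f"slow step {j}: y = {(C @ x).item():+.4f}")
```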
We notice that the output response in Figure 1 is not quite smooth. The response shows a small oscillation, which reflects the multirate nature of the system. In addition, the input curve in Figure 2 shows a stair-step feature because a zero-order hold (ZOH) is used for the input. Conclusion In this paper, we studied the optimal output regulator for a class of linear discrete-time systems with a dual-rate setting. By using the discrete lifting technique, the dual-rate discrete-time system is converted in form to a single-rate augmented system, and the lifted state-space model is constructed. The optimal regulator problem of the lifted system can then be studied with the methods available for single-rate systems. By using optimal regulator theory, an optimal output regulator for the dual-rate discrete-time system is finally derived. This approach is also a guideline for future studies of the optimal preview control problem for dual-rate systems. Furthermore, when assumption (A3) does not hold, for example, when the ratio of the output sampling period to the input updating period is not an integer or when the input updating period exceeds the output sampling period, this approach can still be applied; the lifting process is different, however, and the corresponding lifted system needs to be reconstructed. Finally, the numerical simulation showed the effectiveness and validity of the conclusions in this paper. Figure 1: The output response of the dual-rate discrete-time system. Figure 2: The input curve of the closed-loop system.
2,563.4
2016-06-28T00:00:00.000
[ "Mathematics", "Computer Science" ]
Hierarchical Bayesian Spatio-Temporal Modeling for PM 10 Prediction. Over the past few years, hierarchical Bayesian models have been extensively used for modeling the joint spatial and temporal dependence of big spatio-temporal data, which commonly involve a large number of missing observations. This article presents, assesses, and compares some recently proposed Bayesian and non-Bayesian models for predicting the daily average particulate matter with a diameter of less than 10 micrometers (PM 10 ) measured in Qatar during the years 2016-2019. The disaggregating technique together with a Markov chain Monte Carlo method with Gibbs sampler is used to handle the missing data. Based on the obtained results, we conclude that the Gaussian predictive processes with autoregressive terms of the latent underlying space-time process is the best model, compared with the Bayesian Gaussian processes and non-Bayesian generalized additive models. Introduction Many environmental data sets contain different scales of variability over space and time. For example, scientists from the environmental and public health sciences are typically interested in modeling how air pollution evolves over time at specified locations. Such a stochastic process is often high-dimensional, large, and complicated with nonstationary structures, so traditional statistical methods fall short and advanced statistical techniques are needed to specify the spatio-temporal dependency. This becomes practical with modern computers and high-level computational programming. The spatio-temporal modeling of PM 10 and PM 2.5 (particulate matter with diameters of less than 10 and 2.5 micrometers, respectively) is rapidly becoming an important component of most air quality studies [1][2][3]. Particulate matter (also called particle pollution) is a mixture of solid particles and liquid droplets found in the air as a result of dust, soot, dirt, smoke caused by road transportation, and complex chemical reactions in the atmosphere involving, for example, sulfur dioxide and nitrogen oxides. Exposure to particle pollution is a public health hazard and can cause acute and chronic heart and lung diseases [4]. The larger the particulate matter (PM) concentrations are, the greater the short- and long-term harm to public health. The World Health Organization's (WHO) air quality guidelines recommend that the annual and 24-hour mean concentrations should not exceed, respectively, 20 and 50 micrograms per cubic metre (μg/m 3 ) for PM 10 , and 10 and 25 μg/m 3 for PM 2.5 . Countries with fast-developing infrastructure such as Qatar usually suffer from relatively high levels of PM air pollution. In this regard, the WHO classified the air quality in Qatar as poor and unsafe. In this paper, we focus our attention on modeling PM 10 in Qatar because the PM 2.5 data are inaccessible. The most recent data indicate that the country's annual average 24-hour PM 10 concentration levels ranged from 126.69 μg/m 3 to 184.55 μg/m 3 , which exceeds the recommended maximum of 50 μg/m 3 [4][5][6]. Several authors have developed spatio-temporal models for analyzing ambient air pollution. Research in this field dates back to Cressie [7] and Goodall and Mardia [8]. Cressie et al. [9] compared the performance of Markov random fields with the familiar geostatistical approach for predicting the PM 10 concentrations around the city of Pittsburgh, United States of America. The authors did not model the joint temporal and spatial structure of observations.
They first modeled the purely spatial structure of observations for one particular day, then they incorporated the temporal component in their final statistical modeling. Sun et al. [10] developed a spatial forecasting distribution for unmeasured daily log PM 10 average concentration given from ten locations in Vancouver, Canada. At each monitoring site, Sun et al. [10] showed that the autoregressive model of order one (AR(1)) described quite well the daily log PM 10 average values. The authors did not consider the Bayesian models in their approach. Golam Kibria et al. [11] proposed a multivariate spatial prediction methodology in a Bayesian approach that is suited for spatio-temporal data observed at a small set of ambient monitoring stations at successive time points. They demonstrated the usefulness of their approach by mapping PM 2.5 at monitoring sites with different start-up times in the city of Philadelphia, USA. Golam Kibria et al. [11] did not compare the performance of their model with other non-Bayesian models that are commonly used in literature. Shaddick and Wakefield [12] and Sahu and Mardia [13] used short-term spatio-temporal predictive analysis for modeling the PM 2.5 and PM 10 concentration levels. Zidek et al. [3] presented predictive distributions on nonmonitored PM 10 values in Vancouver, Canada. Smith et al. [14] proposed a spatio-temporal model to predict the weekly averages of particulate matter concentrations within three southeastern states in the USA. Sahu et al. [15] modeled the PM 2.5 by combining the rural background and the urban areas into one process. Cocchi et al. [1] developed a hierarchical Bayesian model for the daily average PM 10 values. Pollice and Lasinio [2] developed a Bayesian-based kriging method for estimating the daily average PM 10 concentration levels. Wikle et al. [16] provided an excellent review of classical and Bayesian approaches for analyzing spatio-temporal data. Although Taylor et al. [6], Ahmadi et al. [5], and others have studied the relationship between some environmental conditions and particulate matter levels assessing the air quality based on particulate matter levels in different locations in Qatar, they did not study the spatiotemporal variability of PM 10 . The main objective of this research is to develop space-time models for daily PM 10 air pollution levels in Qatar for the four years, 2016-2019 comparing the hierarchical Bayesian approach with other spatiotemporal recent methods. We develop spatial interpolation and forecasting model using iterative Markov chain Monte Carlo (MCMC) computation setup which is an effective method for modeling a data with large number of missing values [17]. To the best of our knowledge, this will be the first study in Qatar, and we hope that this research will be helpful to protect the environment and public health in Qatar. The rest of this article is organized as follows: Section 2 provides a brief review of two-stage hierarchical Bayesian models that have been used for modeling spatio-temporal data. A numerical example is given in Section 3 to demonstrate that the Bayesian approach accurately predicts the daily average PM 10 values with a large number of missing values comparing with non-Bayesian models. Finally, a conclusion is given in Section 4. 
Hierarchical Spatio-Temporal Model When data is collected at different points in space and time, we should use a model that can, simultaneously, describe the dependency structure coming from the three sources of variations: time variation, space variation, and joint variability between time and space. Such a model is called a spacetime model (or spatio-temporal model, where spatio refers to space and temporal refers to time). In this article, we develop hierarchical models to predict the daily PM 10 concentration levels which vary over time and locations. The PM 10 concentration levels do not often follow the normal distribution. Thus, we usually model these values on the square-root scale or we use the logtransformation to stabilize the variance and enforce normality and to stabilize the variance [18]. We consider the squareroot scale to alleviate the departure from normality in our research data. Let ℓ and t denote the two units of time where ℓ = 1, ⋯, r represents the longer unit (e.g., year), and t = 1, ⋯, T ℓ represents the shorter unit (e.g., day). Let Z ℓ ðs, tÞ denote the observed value of the PM 10 concentration, after any necessary transformation, at a given location s and over a given discrete time t. We assume that the spatial location s is a two-dimensional vector describing the latitude-longitude (or equivalently northing and easting coordinates), and the time unit is typically hour, day, month, or year. We also assume that the Z ℓ ðs, tÞ is observed at n monitoring sites denoted by s i , i = 1, ⋯, n and at time points denoted by two indices ℓ and t so that the total number of observations is denoted by N = n∑ r ℓ=1 T ℓ . In this article, we denote all the missing data by z ⋆ , whereas all the observed data will be denoted by z. The first stage of the hierarchy assumes that the observed values Z ℓt , where Z ℓt = ðZ ℓ ðs 1 , tÞ,⋯,Z ℓ ðs n , tÞÞ′, can be decomposed into a true (latent) spatio-temporal process Y ℓt = ðY ℓ ðs 1 , tÞ,⋯,Y ℓ ðs n , tÞÞ ′ with an error term ε ℓt = ðε ℓ ðs 1 , tÞ,⋯,ε ℓ ðs n , tÞÞ ′ . More specifically, the data (or measurement error) model in the first stage of the hierarchy is The error term ε ℓt is assumed to be a Gaussian white noise process with mean zero and constant variance σ 2 ε , which is often called the nugget effect absorbing microscale variability. The second stage assumes that the true process Y ℓt has a systematic mean μ ℓt and a spatio-temporal error term. The mean can be modeled based on the past values of the unobserved variable or/and based on some relevant covariates. Typically, Y ℓt can be specified in the following formula: where η ℓt = ðη ℓ ðs 1 , tÞ,⋯,η ℓ ðs n , tÞÞ′ is an spatio-temporal residual random intercept assumed to follow N ð0, CÞ, where C = σ 2 η H η , σ 2 η is the site invariant spatial variance, and H η is the spatial correlation matrix. In this article, we consider 2 Journal of Applied Mathematics four modelings for fη ℓt g. The first one assumes that the random effects, η ℓ ðs i , tÞ = 0, for all locations s i and times t. This implies that the model in (1)-(2) will be the simple regression model. The other models for fη ℓt g are assumed to be Gaussian process, independent over the time, which is specified in Sections 2.1, 2.2, 2.3, and 2.4. Matérn Spatio-Temporal Covariance Function. For spatio-temporal modeling, we usually assume that the random effects process is a weakly stationary Gaussian with a zero mean and a valid isotropic covariance function. 
A valid covariance function implied that the covariance matrix is positive definite, and isotropic means that the separation vector between the two locations only depends on the distance and not on the direction. The class of the spatiotemporal covariance functions can be separable, productsum, metric, and sum-metric [19]. In this paper, we use the separable covariance model which is simply the product of the pure spatial covariance function, C s ðhÞ, by the pure temporal one, C t ðuÞ, given by where h = ks − s ′ k is the separating spatial distance, and u = jt − t′j is the temporal distance for any pair of points ðs, tÞ × ðs′, t′Þ in the spatial and temporal study domain. The Matérn family provides a general choice of covariance functions. For each time t, the Matérn covariance is given by: where K κ ð·Þ is the modified Bessel function of second kind of order κ which is a parameter controlling the smoothness of the realized random field [20], ΓðκÞ is the standard gamma function, and ϕ is a parameter which controls the decay rate in the correlation as the distance h increases. Popular special cases of Matérn model are (i) κ = 0:5 leads to exponential covariance function CðhÞ = σ 2 η exp ð−ϕhÞ and (ii) Gaussian model, (2), where ℓ = 1, ⋯, r, t = 1, ⋯, T ℓ , X ℓt is the n × ðk + 1Þ design matrix of spatially and temporally varying k -covariates and β = ðβ 0 , β 1 ,⋯,β k Þ′ is the k + 1-dimensional vector of regression coefficients. Thus, the Gaussian process (GP) two-stage model can be written as We assume that the random effect process fηð·Þg is independent from the white noise process fεð·Þg. Note that some of the covariates may vary spatially and not temporally or vice versa. One advantage of using Bayesian model is that we can use it to handle any missing data. This can be done by using the Markov chain Monte Carlo, MCMC, computation where the missing data is simulated from the N ðY ℓt , σ 2 ε Þ distribution defined by (5) at each MCMC iteration using Gibbs sampling. The Gibbs sampler requires that the full conditional distributions of the parameters θ = ðβ, σ 2 ε , σ 2 η , ϕ, κÞ are given in a closed form. The logarithm of the joint posterior distribution of the missing data and the parameters in this case is given by: where pðθÞ is the prior distribution [21] and we refer the readers to Bakar and Sahu ([21], Appendix A) to obtain the Gibbs sampling using the full conditional distributions of θ. (1) and (2) can be written in the hierarchical form (e.g., [18] model) as follows: Autoregressive Model Specification. The autoregressive process (AR) model in Equations where −1 < ρ < 1 is parameter of the first-order autoregressive model and μ ℓ = ρY ℓt−1 + X ℓt β. Note that if ρ = 0, we get the GP model given by (5). Also, note that the autoregressive model requires to specify an independent spatial model with initial values of Y ℓ0 for each ℓ = 1, ⋯, r, with mean μ ℓ and covariance σ 2 ℓ H 0 obtained from the Matérn covariance function in (4) with the same set of parameters. In this case, logarithm of the joint posterior distribution of the missing data and the parameters in this case is given by: 3 Journal of Applied Mathematics where C ⋆ ðϕ, κÞ is the n × m crosscorrelation matrix between η t and η ⋆ t having elements ½C ⋆ ij = Cðks i − s ⋆ j kÞ, i = 1, ⋯, n and j = 1, ⋯, m, and H η ⋆ ðϕ, κÞ is the m × m correlation matrix of η ⋆ t so that ½H η ⋆ ij = ½Cðks ⋆ k − s ⋆ j k ; ϕ, κÞ, for k, j = 1, ⋯, m. 
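A direct implementation of the Matérn spatial correlation described above (with decay parameter ϕ and smoothness κ, reducing to the exponential model at κ = 0.5), combined with an exponential temporal correlation to form a separable space-time covariance, might look like the sketch below; the parameter values are arbitrary illustrations, not the estimates reported later.

```python
import numpy as np
from scipy.special import gamma, kv

def matern_corr(h, phi, kappa):
    """Matern spatial correlation (2^(1-kappa)/Gamma(kappa)) * (phi*h)^kappa * K_kappa(phi*h);
    equals 1 at h = 0 and reduces to exp(-phi*h) when kappa = 0.5."""
    h = np.asarray(h, dtype=float)
    out = np.ones_like(h)
    pos = h > 0
    x = phi * h[pos]
    out[pos] = (2.0 ** (1.0 - kappa) / gamma(kappa)) * x ** kappa * kv(kappa, x)
    return out

def separable_cov(coords, times, sigma2, phi_s, kappa, phi_t):
    """Separable space-time covariance sigma2 * C_s(h) * C_t(u) with a Matern
    spatial correlation and an exponential temporal correlation."""
    h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    u = np.abs(times[:, None] - times[None, :])
    return sigma2 * np.kron(np.exp(-phi_t * u), matern_corr(h, phi_s, kappa))

# Sanity check: kappa = 0.5 recovers the exponential model quoted in the text.
h = np.array([0.0, 1.0, 2.0, 5.0])
print(np.allclose(matern_corr(h, phi=0.3, kappa=0.5), np.exp(-0.3 * h)))

# Small illustrative covariance for 3 sites and 4 time points (12 x 12 matrix).
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
Sigma = separable_cov(coords, np.arange(4.0), sigma2=2.0, phi_s=0.3, kappa=1.5, phi_t=0.1)
print(Sigma.shape)
```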
Clearly, the process fη ℓt g shows nonstationary structure with variance function that is given by The advantage of using the nonstationary model in (10) is the flexibility in theη t surface which is based on m ≪ n linear functions of η ⋆ t . When m is very small compared with n, this will lead to reduce the computational burden, especially for big data which is usually the case of the spatio-temporal data. Moreover, using nonstationary models usually provides more accurate results in prediction the nonstationary PM 10 process. We specify the hierarchical Gaussian predictive processes (GPP) as follows: whereη ℓt is given in (10). The process fη ⋆ ℓt g, at the S ⋆ m knots, can be modeled according to the autoregressive model where η ⋆ ℓt~N m ð0, Σ η ⋆ Þ for ℓ = 1, ⋯, r, t = 1, ⋯, T ℓ and Σ η ⋆ = σ 2 η H η ⋆ . We assume that the initial conditions η ⋆ ℓ0 has normal distribution with mean zero and covariance matrix Σ 0 = σ 2 ℓ H 0 , and both Σ η ⋆ , Σ 0 can be obtained from the Matérn covariance function defined in (4), where Σ η ⋆ is an m × m matrix much lower dimensional than Σ η of dimension n × n. In this case, logarithm of the joint posterior distribution of the missing data and the parameters in this case is given by: where pðθÞ is the prior distribution for the parameter θ = ðβ, ρ, σ 2 ε , σ 2 η ⋆ , ϕ, κ, σ 2 ℓ Þ and the Gibbs sampling procedure of these parameters can be obtained from the full conditional distributions required provided in the Appendix. Numerical Example 3.1. Data Description. The data used in this article is obtained from three different sources. The first one was collected by the air pointer and the meteorological station and is managed by the Environmental Science Center at Qatar University over the years 2016-2019. It has hourly pollutant including PM 10 measured in μg/m 3 , temperature, Temp, measured in degree Celsius, and relative humidity, RH, with several missing values. First, we sort the data into an ascending order by date, and then, we impute the missing values of the PM 10 by averaging the two nonmissing values before and after this missing value. When two or more successive missing values exist, we impute them by the corresponding monthly average. We do the same for the hourly missing values of temperature and relative humidity. After that, we aggregate the hourly data into daily data for which we fit our models. The amended data from this resource is irregular time series which started on the fourth of January 2016 and ended on the twenty-eighth of August 2019. Table 1 provides a summary statistics of this data. Results suggest that the PM 10 levels increased during the time, and the majority of missing observations (with more than 52%) have been occurred during the year 2018. The second source of the data was the regular monthly observations obtained from nine meteorological stations over the years 2016-2019. The total number of observations is 432 where the stations are located in the following sites: Qatar University, Abu Samra, Al Khor, Al Ruwais, Al Wakrah, Doha Airport, Dukhan, Mukenis-Al Karanaah, and Umm Said. Figure 1 shows the map locations of 9 meteorological sites. Although this data provides the monthly average values of temperature in degree Celsius and relative humidity (RH), it does not include measures for any type of pollutant. Table 2 summarizes the descriptive statistics of this data. We use this data to interpolate the daily temperature and relative humidity by disaggregating technique at each location. 
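The imputation rule used for the hourly records above (the average of the neighbouring non-missing values for an isolated gap, the corresponding monthly mean for longer runs of missing values) can be written as a short pandas routine; the series below is a toy illustration, not the Qatar University data.

```python
import numpy as np
import pandas as pd

def impute_hourly(series: pd.Series) -> pd.Series:
    """Fill gaps following the rule described in the text: an isolated missing
    value is replaced by the mean of its two non-missing neighbours; runs of
    two or more missing values fall back to the corresponding monthly mean."""
    s = series.copy()
    isolated = s.isna() & s.shift(1).notna() & s.shift(-1).notna()
    s[isolated] = (s.shift(1)[isolated] + s.shift(-1)[isolated]) / 2.0
    monthly_mean = s.groupby([s.index.year, s.index.month]).transform("mean")
    return s.fillna(monthly_mean)

# Toy hourly PM10 series with one isolated gap and one longer gap (illustrative only).
idx = pd.to_datetime("2016-01-01") + pd.to_timedelta(np.arange(12), unit="h")
pm10 = pd.Series([80, 82, np.nan, 85, 90, np.nan, np.nan, np.nan, 95, 97, 96, 94.0],
                 index=idx)
print(impute_hourly(pm10))
```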
We also use our own portable devices to collect daily PM 10 values, as a third source of data, over the period from November 5th to December 31st of 2020, close to these sites. We use these devices to collect and simulate additional daily PM 10 concentrations over the aforementioned locations. In particular, we recorded the average of PM 10 readings taken at random times in the morning and evening of each day over this period. Then, we use this collected data to simulate new data at each location. Finally, we merge the simulated data with the previous data and obtain the dataset used in this article. The final dataset has 1037 observations (see Figure 2) with three variables (PM 10, temperature, and relative humidity) measured at irregular times over the years 2016-2020 at nine spatial locations. The top panel in Figure 3 shows the distribution of the daily average PM 10 concentrations over the years 2016-2020, whereas the bottom panel in the same figure shows these averages by the nine sites. Clearly, the distributions of the PM 10 concentrations at all locations are right skewed with very high extreme values. Thus, to stabilize the variance and reduce the departure from normality, we transform the original scale to the square root scale. Parameter Estimates, Model Validation, Prediction, and Comparison Results. The main objective of this research is to develop a hierarchical Bayesian model that can be used to select and validate the best model for the daily average PM 10 air pollutant levels over different locations in Qatar. We consider the two covariates, temperature and relative humidity, for spatial interpolation of the PM 10 concentration at a new location and any time. First, we use the MCMC algorithm with the Gibbs sampler, based on 5,000 iterations with the first 1,000 discarded as burn-in, to impute the missing data z⋆ as explained in the Appendix, and then we utilize these data to predict the values of Z(s′, t) at a new location s′ ∉ {s_1, ⋯, s_n}. The posterior predictive distribution involves the model parameters θ = (ρ, σ²_ε, σ²_η, ϕ)′ and η⋆ = (η⋆_1, ⋯, η⋆_n)′ (see [17]). We fit the Bayesian GP, AR, and GPP models described in Section 2 using the spTimer package [21, 23] and compare these models with the non-Bayesian generalized additive model (GAM) using the R package mgcv [24][25][26]. We use the cross-validation method to evaluate the predictive performance of these models. Here, data at locations (s_1, ⋯, s_m), where m < n, are used to fit the model, while data at the other sites (s_m+1, ⋯, s_n) are used to assess the model via the mean squared error (MSE), mean absolute error (MAE), mean absolute prediction error (MAPE), relative bias (rBIAS), and relative mean separation (rMSEP), where N is the total number of nonmissing observations, ẑ_i is the posterior predictive value of z_i, and z̄ and the corresponding mean of the ẑ_i denote the arithmetic means of z_i and ẑ_i for i = 1, ⋯, N. Specifically, we use the data from the Abu Samra and Doha Airport stations, with 2 × 1037 = 2074 observations, for validation purposes. The data from the remaining seven stations, with 7 × 1037 = 7259 observations in total, are used for model fitting (see Figure 1). The parameter estimates (posterior) for the Bayesian spatio-temporal models are given in Table 3.
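The five validation statistics named above were lost from the extracted text, so the sketch below uses the definitions commonly adopted with spTimer-style validation (treat the rBIAS and rMSEP forms, and the percentage convention for MAPE, as assumptions rather than the paper's exact formulas):

```python
import numpy as np


def validation_stats(z_obs, z_hat):
    """MSE, MAE, MAPE, rBIAS and rMSEP over the N nonmissing held-out observations."""
    z_obs = np.asarray(z_obs, dtype=float)
    z_hat = np.asarray(z_hat, dtype=float)
    ok = ~np.isnan(z_obs)
    z, zh = z_obs[ok], z_hat[ok]
    err = zh - z
    return {
        "MSE":   float(np.mean(err ** 2)),
        "MAE":   float(np.mean(np.abs(err))),
        "MAPE":  float(np.mean(np.abs(err) / np.abs(z)) * 100.0),
        "rBIAS": float(np.mean(err) / np.mean(z)),
        "rMSEP": float(np.sum(err ** 2) / np.sum((np.mean(zh) - z) ** 2)),
    }
```

On this convention, the MSE reductions quoted below correspond to comparing each Bayesian model's MSE with the GAM baseline, i.e. 1 − MSE_model/MSE_GAM.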
The 95% credible intervals for the parameters suggest that most of the regression parameters for the AR and GP models are statistically significant, whereas for the GPP model all of the variables are significant. In all models, the nugget effect σ²_ε is small. On the other hand, the spatial decay parameter is ϕ = 0.0033 for the GP model and ϕ = 0.001 for the GPP model, suggesting that the effective ranges are approximately 909 and 3000 kilometers, respectively (for an exponential-type decay, the correlation falls to 0.05 at a distance of about 3/ϕ, i.e., 3/0.0033 ≈ 909 km and 3/0.001 = 3000 km). These two values are unusually large and are associated with very large spatial variance values (σ²_η), suggesting that the PM 10 concentrations in Qatar do not differ significantly over space. After fitting the three models, we perform the cross-validation and select the best one. Table 4 summarizes the validation statistics for the GAM, GP, AR, and GPP models. Clearly, the MSE, MAE, MAPE, rBIAS, and rMSEP are smaller for the Bayesian spatio-temporal methods, which therefore provide better predictive performance than the non-Bayesian additive model. For example, the Bayesian AR, GP, and GPP models reduce the MSE by about 63.7%, 66.4%, and 79.6%, respectively, compared with the non-Bayesian GAM model. We conclude from this table that the Bayesian spatio-temporal GPP model is the best model for predicting the PM 10 concentrations. Figure 4 shows the time series plot of the true and fitted values for the Qatar University location using the GPP model. We clearly see that the fitted values are very close to the true values. To further demonstrate the usefulness of the spatio-temporal GPP model for predicting the linear trend surfaces of the PM 10 values together with their standard errors, we illustrate prediction maps for the 29th of March 2018 and the 15th of May 2019 over a one-kilometer square grid (see Figure 5). The maps show that the GPP model correctly represents the PM 10 concentration level over space and time. Conclusion The potential of applying nonstationary hierarchical Bayesian spatio-temporal models to PM 10 prediction with a large number of missing values is presented in this paper. The predictive model is developed by comparing the Gaussian predictive process (GPP) model with the Gaussian process (GP) and autoregressive (AR) Bayesian models and the non-Bayesian generalized additive model (GAM), using datasets from the state of Qatar. The numerical results show that the GPP model outperforms the alternative models, providing forecasts with good accuracy and interpretability. We applied the disaggregation technique and simulated daily spatio-temporal PM 10 data using the available and collected data. Then, we used Markov chain Monte Carlo with the Gibbs sampler to impute the missing data in the real collected data. We believe that our statistical data analysis approach will give similar results for real data that become available in the future. In many applications, the support vector machine (SVM) algorithm has shown superior forecasting performance compared with several evolutionary algorithms. An interesting extension of this article would therefore be to compare the performance of SVM algorithms with the Bayesian models in predicting the daily average PM 10 concentration levels.
5,776.4
2021-09-11T00:00:00.000
[ "Computer Science" ]
Double Trace Interfaces We introduce and study renormalization group interfaces between two holographic conformal theories which are related by deformation by a scalar double trace operator. At leading order in the 1/N expansion, we derive expressions for the two point correlation functions of the scalar, as well as the spectrum of operators living on the interface. We also compute the interface contribution to the sphere partition function, which in two dimensions gives the boundary g factor. Checks of our proposal include reproducing the g factor and some defect overlap coefficients of Gaiotto's RG interfaces at large N, and the two-point correlation function whenever conformal perturbation theory is valid. 1 Introduction Conformal defects have played an important role in the development of conformal field theory (CFT). Of particular interest for many purposes are conformal interfaces, those interfaces separating two different CFTs that preserve a maximal subgroup of the conformal group. A particularly interesting class of interfaces is given by renormalization group (RG) interfaces [1], which are associated to a renormalization group flow from CFT 1 to CFT 2 . In addition to being of intrinsic interest, such defects may provide new tools to study the behavior of renormalization group flows. Various such interfaces, both approximate [2][3][4] and (in the presence of supersymmetry) exact [1,5,6] have been constructed, but in general it is difficult to compute observables that are not protected by symmetry. In particular, we are not aware of computations of two-point correlation functions in the particular case of RG interfaces. Within the AdS/CFT correspondence, conformal interfaces are typically realized using the Janus construction [7], 1 for which various approximate and (in the supersymmetric case) exact solutions are known (see e.g. [8][9][10][11])". The construction takes advantage of the SO(d, 1) symmetry preserved by the interface to slice the bulk geometry by copies of hyperbolic space, 2 Pure hyperbolic space corresponds to f (β) = cosh 2 β. The deformation of f (y) away from this is sourced by scalar field gradients φ(β), the details of which depend on the scalar potential. The bulk equations of motion are in general difficult to solve, and even in those cases where solutions are available, simple observables such as two-point correlation functions are difficult to compute: the only computation of a (non-protected) holographic two-point function we are aware of was performed in [12]. Holographic realizations of RG interfaces have appeared in the literature: see, for example, [13,14]. The purpose of this paper is to introduce and study a type of holographic RG interface that we refer to as (holographic) double trace interfaces. It has been known since the work of [15] that whenever the gravitational dual of a CFT has a scalar field whose mass lies in the unitarity window − d 2 4 ≤ m 2 ≤ − d 2 4 + 1, there are two consistent choices of boundary asymptotics. 3 The two different choices lead to two different CFTs on the boundary, with different spectra. For one choice, the scalar field φ is dual to a gauge-invariant (single trace) operator ϕ + of dimension ∆ + , while the other choice leads to an operator ϕ − of dimension ∆ − . These two CFTs are related by RG flow from CFT − to CFT + , which is initiated on the CFT side through perturbation by the "double trace" operator (ϕ − ) 2 . 
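For orientation, the relation between the bulk scalar mass and the two admissible boundary dimensions can be summarized as follows (a standard AdS/CFT statement, written here in our notation and consistent with the d = 2 value m² = ν² − 1 quoted later in the paper):

\[
m^2 \;=\; \nu^2 - \frac{d^2}{4}, \qquad 0<\nu<1, \qquad \Delta_\pm \;=\; \frac{d}{2}\pm\nu, \qquad \Delta_+ + \Delta_- \;=\; d ,
\]

so that the flow from CFT\(_-\) to CFT\(_+\) is triggered by the relevant double trace deformation \(\lambda \int d^d x\,(\varphi_-)^2\), whose dimension \(2\Delta_- = d - 2\nu\) lies below \(d\).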
This is implemented holographically by imposing on the scalar field boundary conditions of mixed Dirichlet-Neumann type; renormalization group flow from the UV to the IR is realized in terms of the dominant asymptotics in the near-boundary and deep bulk regions, respectively. These RG flows are particularly simple at large N . This can be traced to the fact that their effects are due entirely to the asymptotics of quantum fluctuations, and as a result, gravitational backreaction occurs only at loop level. As a result, the leading contribution to any computation takes place on a pure AdS background. This fact makes feasible, at least at leading order, computations that are impractical in the general case. Consider hyperbolic space H d+1 , with its boundary divided into two regions, A + and A − (figure 1), where the local physics is described by CFT + and CFT − respectively. Near boundary region A + , the quantum fluctuations of the bulk scalar field φ should have scaling dimension ∆ + , while those near boundary region A − should have dimension ∆ − . The asymptotics of quantum fluctuations contribute to diagrammatic computations through the particular choice of bulk Green's function G for φ. The choice of Green's function is therefore what determines all properties of the holographic interface, and sections 2 and 3 are devoted to its analysis. With the Green's function in hand, in principle all observables associated to the interfaces can be computed by using this Green's function in all Witten diagrams. In this paper, we focus on the simplest observables that can be derived from G: (1) the two-point correlation function at tree level, and (2) the one-loop partition function. From the correlation function one can further extract the spectrum of non-trivial defect operators. The two-point function can also be compared with CFT results. In particular, we show that our bulk expressions reproduce results that we derive in conformal perturbation theory. Furthermore, from the conformal block expansion we can read off relations between the bulk and bulk-boundary OPE coefficients, which allows us to derive an expression for the defect overlap coefficients for certain operators in the large N limit. As an example, we reproduce the large N behavior of overlap coefficients for the interfaces of [5] between adjacent W N minimal models. We further compute the contribution of Gaiotto's interface to the sphere partition function in the large N limit, and show that it exactly matches the interface contribution to the bulk one-loop partition function. The logic of this paper is as follows. Section 2 gives a detailed definition of holographic double trace interfaces, and discusses methods available for deriving the bulk Green's functions for an arbitrary interface geometry. Section 3 turns to the explicit evaluation of the Green's function in the case of a spherical interface, which is done using the more powerful tools of harmonic analysis on H d ; these tools also allow us to derive the spectrum of interface operators. Section 4 treats the evaluation of the CFT two-point function at leading order in the 1/N expansion, from which we extract the dimensions of a sequence of primary operators living on the interface, matching the results from section 3. Section 5 computes the leading contribution of the interface to the partition function by evaluating the one-loop vacuum bubble diagram. Section 6 is devoted to computations in CFT, which provide two tests of our results. 
As the first, we derive the CFT two-point function in the presence of double trace interfaces within conformal perturbation theory, and show that it matches our bulk computation in parameter regimes where both descriptions are valid. The second is to derive within the higher spin gravity/WCFT duality of [16] the boundary g-factor and several overlap coefficients for the RG interfaces of [5] joining the W N,k and W N,k−1 minimal models. We find that both match the results of sections 4 and 5. We close with a summary of our conclusions and a list of interesting questions and problems for the future. Double trace interfaces The construction of a double trace interface begins with a pair of d-dimensional unitary CFTs, CFT ± , which have dual descriptions in terms of a single gravitational theory on a weakly curved AdS space, and are related by the choice of boundary condition for a bulk scalar field φ with m 2 = − d 2 4 + ν. We take the mass to lie in the unitarity window, defined by 0 < ν < 1. The two CFTs therefore differ at leading order in the 1/N expansion by the choice of dimension ∆ ± = d 2 ± ν for a single operator ϕ ± . Our goal is to describe a conformal interface separating a region A + whose local physics is that of CFT + , and the complementary region A − described by CFT − . How does one realize such an interface? The AdS/CFT dictionary says that the field φ should have boundary condition ∆ + near the CFT + boundary, and boundary condition ∆ − near the CFT − region. To be more precise about what we mean, consider Poincaré patch coordinates X = (u, χ) with metric 4 By boundary condition, we mean that any configuration of the field φ appearing in the path integral must fall off near the boundary as This is accomplished in Witten diagrams by making a particular choice of inverse for the kinetic operator. The interface is therefore implemented in the bulk by choosing the appropriate bulk Green's function. To be explicit, a double trace interface is obtained by imposing the following conditions on the Green's function G: The form of K ± is not important for this definition, but it is in fact the bulk-boundary propagator associated to the region A ± . The factor of ± 1 2ν is the standard prefactor 1 2∆ ± −d . We consider here two methods of solving these conditions. The first is harmonic analysis: when the bulk geometry can be expressed as a warped product of a symmetric space over an interval, the decomposition in terms of Laplacian eigenfunctions reduces the above equations to an ODE. This method works whenever the wave equation is separable in coordinates respecting the boundary geometry of the defect, as happens when the interface is spherical or planar. Otherwise, one must use the more general methods developed to deal with mixed boundary value problems for partial differential equations; for a thorough treatment of this subject, see for example [17]. As a simple example, we will outline at the end of this section the application of such methods to the derivation of the bulk-boundary propagator in the case of spherical defects; a full derivation using these methods is offered in appendix B. Double trace interfaces as a mixed boundary value problem The Green's function solves a boundary value problem in which the boundary is split into two regions A + and A − , such that the function in question has Dirichlet-like boundary conditions on A + , but Neumann-like boundary conditions on A − . Such problems are known as mixed boundary value problems. 
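As a toy illustration of what such a mixed (Zaremba-type) problem looks like — flat space, Laplace's equation, and arbitrary made-up data, so purely a sketch and not the AdS problem treated in this paper — one can prescribe Dirichlet data on part of a boundary (the analogue of A_+) and a homogeneous Neumann condition on the rest (the analogue of A_−):

```python
import numpy as np


def solve_mixed_bvp(n=121, m=61, iters=5000):
    """Laplace equation on a rectangle; Dirichlet data on {y=0, x<1/2} (analogue of A_+),
    homogeneous Neumann du/dy = 0 on {y=0, x>=1/2} (analogue of A_-), u = 0 elsewhere."""
    u = np.zeros((m, n))                         # u[j, i] ~ u(x_i, y_j)
    x = np.linspace(0.0, 1.0, n)
    dirichlet = x < 0.5
    g = np.sin(2.0 * np.pi * x) ** 2             # made-up Dirichlet data on the A_+ patch
    for _ in range(iters):
        new = u.copy()
        # Jacobi sweep: 5-point Laplacian in the interior
        new[1:-1, 1:-1] = 0.25 * (u[1:-1, :-2] + u[1:-1, 2:] + u[:-2, 1:-1] + u[2:, 1:-1])
        new[0, dirichlet] = g[dirichlet]         # value prescribed on the Dirichlet patch
        new[0, ~dirichlet] = new[1, ~dirichlet]  # zero normal derivative on the Neumann patch
        u = new
    return u


field = solve_mixed_bvp()
print(field[0, ::30])  # boundary row: fixed on the Dirichlet patch, free on the Neumann patch
```

The only point of the sketch is the structure of the boundary data; the paper's actual problem is posed on H^{d+1} for the propagator K_+ and, for the spherical interface, is solved analytically below.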
(This is not to be confused with "mixed boundary conditions", otherwise known as Robin boundary conditions, which refer to a spatially homogeneous linear combination of Dirichlet and Neumann boundary conditions.) We begin by writing the mixed Green's function in the form where G ∆ − is the homogeneous Green's function for ∆ − asymptotics. Then H satisfies the free scalar equation, so it can be written as the convolution of a function on the boundary of H d+1 with K ∆ − (X, x ), the bulk-boundary propagator for CFT − . Let K + (u, χ; χ ) be the mixed bulk-boundary propagator associated to a boundary point χ ∈ A + . This function is determined by the following properties: Here, by [f ] ∆ we mean the coefficient of u ∆ in the expansion of f as u → 0. K + is given in terms of the Green's function by the standard relation where K ∆ − is the bulk-boundary propagator for the ∆ − CFT. Recalling that it is straightforward to verify that as a function of (u, χ), H satisfies the asymptotic conditions We see thus that observables of double trace interfaces can be expressed in terms of the bulk-boundary propagator, and thus it is this object that will be the primary focus of what follows. We focus in particular on the case of a spherical defect. This case is special because it preserves a maximal subgroup of the conformal group, allowing us to solve the problem as an ODE using harmonic analysis on H d . This is done in section 3. For any other shape, it is necessary to solve for K as a mixed boundary value problem. To illustrate this process, we show in detail how this can be done for the spherical interface in appendix B. The remainder of the section will be occupied with holographic renormalization and the extraction of correlation functions in section 2.2, and some comments on the case of general interface shapes in section 2.3. Holographic renormalization and correlation functions Let us now consider the question of how to extract correlation functions from the bulk-boundary propagator associated to a general double trace interface. The AdS/CFT dictionary states that for each bulk field φ dual to a scalar operator ϕ, the solution to the equations of motion can be expanded in the form where J denotes a source, ψ(χ) is proportional to the one-point function ϕ(χ) J in the presence of J, and all other terms are local functionals of J and ϕ; a J is a free parameter that we will fix later. Note ∆ J + ∆ ϕ = d. J and ψ are locally independent, but are determined by each other upon requiring nonsingular behavior in the bulk. Correlation functions are obtained by the statement that the gravitational partition function with boundary conditions J is equal to the generating functional of the CFT with source J: (2.12) Defining W = log Z, the connected correlation functions are , (2.13) where the value of a ϕ is determined by the effective action. Note that, due to the presence of a J in (2.11), this equation differs from the standard one by a factor of a J . This is a matter of the normalization of the operator dual to J. It would be most natural to choose a J such that a ϕ = 1. The standard normalization, however, sets a J = 1. If φ has the ∆ − quantization, a ϕ is negative, which flips the sign of certain correlators relative to the natural expectation in CFT. Because the a ϕ = 1 normalization is ubiquitous in the literature, we choose a J = 1 for the ∆ + quantization; to obtain the natural sign for the mixed two-point functions, we therefore choose a J = −1 for the ∆ − quantization. 
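Since the displayed expansions were lost in extraction, it may help to record the near-boundary form and the identification of sources that the discussion above relies on (our reconstruction of (2.11) and (2.14) and of the standard dictionary; only the schematic structure is claimed):

\[
\phi(u,\chi) \;=\; u^{\Delta_-}\big[\varphi_-(\chi) + O(u^2)\big] \;+\; u^{\Delta_+}\big[\varphi_+(\chi) + O(u^2)\big],
\]

with \(J_+ = \varphi_-\) sourcing the dimension-\(\Delta_+\) operator in CFT\(_+\) and \(J_- = -\varphi_+\) sourcing the dimension-\(\Delta_-\) operator in CFT\(_-\); the relative sign is exactly the choice \(a_{J_-} = -1\) made above.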
We will see in section 4 that this convention reproduces the sign of ϕ + ϕ − that is natural in conformal perturbation theory. In the semi-classical limit W is expressed in terms of the on-shell classical gravitational action S os , W = −S os , so this is the quantity we deal with for the rest of the section. To render the variations welldefined, one requires a well-behaved variational principle. In particular, this implies that if φ = φ c + δφ, where φ c solves the bulk equations and δφ has u ∆ϕ asymptotics, then the variation of the action must be finite. As is well known, to accomplish this requires the inclusion of local counterterms (holographic renormalization), and the counterterms we add determine the allowed fluctuations. Since our system involves both boundary conditions for φ, let us first briefly review how this works when there is no interface. We restrict to 0 < ν < 1 as before, and expand near u = 0 in the form (2.14) where (· · · ) is irrelevant to what follows. Start with the variation of the bare on-shell action. Introduce a cutoff surface u = , and let S (φ) be the cut-off bulk action. As usual, for φ on-shell we write where γ is the induced metric on the cutoff surface, n is the outward-pointing unit normal, and we have dropped the term proportional to the equations of motion. Expanding in (' ' means up to terms that vanish as → 0), we find We now add counterterms, which must render the variation finite. Furthermore, if we want ∆ + boundary conditions, then the variation of the action should depend only on δφ − , while for ∆ − boundary conditions, it should depend on δφ + only. The first can be accomplished by the counterterm Note that this gives a ϕ = −2ν. We can obtain ∆ − boundary conditions by instead using the counterterm which gives The bulk values of φ are determined by either one of φ + or φ − in terms of the bulk-boundary propagator: (the minus sign is due to a J − = −1). In CFT + , the source is J + = φ − , and φ + is given by the relation From this we may obtain the standard result for the CFT + two-point function: The same applied to CFT − (with J − = −φ + ) gives the usual value Let us now turn to our case of interest, where the boundary is divided into a region A + of CFT + and a region A − of CFT − . We can still expand any on-shell field configuration φ as in equation (2.14). Our counterterms, and thus our identification of sources, is however different. We must therefore use the counterterm S ∆ + ct in A + , and S ∆ − ct in A − . Using this counterterm, the variation of the on-shell action becomes: As before, φ + and φ − are determined everywhere determined by these sources: and similarly for φ − . Here we see a J − = −1 appearing again in the second term. Let us use this to find the two-point function G ++ (χ, χ ) for χ, χ ∈ A + . Assume we only have a source in A + , so that J − = 0. The expression for the variation of the on-shell action tells us that now giving and hence Replication of this procedure yields the three independent two-point functions: for χ ± , χ ± ∈ A ± , , as it should be. Interface fusion and other generalizations As we have emphasized, the mixed boundary value problem approach can deal with more general geometries than the Janus approach. Let us take a moment to touch on a geometry relevant to a topic of particular interest for the theory of conformal interfaces: interface fusion. The methods discussed above can be used to understand the fusion properties of two double trace interfaces with the opposite orientation. 
As a simple example, consider the case of two concentric spherical interfaces with opposite orientations, corresponding to CFT − on region A − , which is interrupted by an The Green's function is obtained using the tools outlined in this section (a detailed example is worked out in appendix B): expand the bulk-boundary propagator (or the Green's function) using spherical wave solutions of the bulk wave equation. The region where we impose condition [K2] is different from that of appendix B, and so the ansatz relevant to the spherical interface -found in equation (B.6) -must be replaced by an ansatz appropriate to the new A − . Similarly, the analog of (B.14), required to satisfy [K3], will now give a more complicated integral equation that must be solved to obtain K. Carrying out this procedure explicitly is complicated, and we leave it for future work. Configurations with even smaller symmetry groups can in principle be considered, but the difficulty of solving the mixed boundary value problem increases quickly as the degree of symmetry is reduced. Green's function from harmonic methods Let us now turn to the explicit computation of the interface propagators in the case of a spherical interface. In this section we take the boundary to be spherical, and A + to be a hemisphere. The computation is simplest in Janus coordinates on H d+1 [7], which make the SO(d, 1) symmetry of the defect manifest. We will mostly use the coordinates and reserve x, x , . . . to refer to points on the H d slice. (For a summary of the relationship of these to other useful coordinate systems on H d+1 , see appendix A.) In these coordinates, the boundary is split into two components: A + , which lies at z → 1, and A − , at z → 0. The interface lies at the boundary of H d , with the limit taken along any surface of constant z. To solve (A,B) we begin by decomposing G with respect to eigenfunctions of the Laplacian on H d . We choose a basis Ψ s (x) for the eigenfunctions, indexed by some parameters s. The index set is equipped with a measure dµ(s), with respect to which Ψ s (x) satisfies the normalization conditions: with δ(s, s ) the normalized delta function satisfying dµ(s) δ(s, s )f (s) = f (s ). One explicit basis and its measure are given in detail in appendix C. This basis picks a point p in H d and decomposes in spherical waves centered around this point. In this case, s = (σ, ) where indexes the spherical harmonics on S d−1 , and σ = σ s ≥ 0 is defined by Since all the functions we use in this paper involve symmetric functions F (x, x ) of two variables on H d , it is also useful to have a basis for these functions that are Laplacian eigenfunctions. As discussed in detail in appendix C, this is straightforward in the spherical basis: is just such an eigenfunction. It depends only on the SO(d, 1)-invariant cross-ratio ξ, which in Poincaré patch coordinates (A.9) on H d is (x−x ) 2 4yy . It further satisfies the useful identity A basis for the functions on H d in hand, our first task is to find the general solution to the wave equation on H d+1 adapted to the Janus decomposition. Wave equation on H d+1 We use the metric (3.1). 
Performing separation of variables with respect to the Janus slicing, we look for solutions to the wave equation The space of solutions is two-dimensional, but in what follows we will be interested in four different solutions: have the property that as we approach the left boundary (z → 0), while as we approach the right boundary (z → 1), Therefore, Φ ± L and Φ ± R give bases with definite asymptotics z ∆ ± /2 on left-and right-hand boundaries, respectively. Having identified a basis of solutions, we can decompose any solution to the wave equation in the form (3.14) We are free to choose as we like whether to expand in terms of Φ ± L or Φ ± R . Note that when it will cause no confusion, we will frequently abbreviate Φ + L (σ|z) by Φ + L (z), and so forth. Connection coefficients In what follows, we will need the linear transformation between the bases Φ ± L and Φ ± R . This is given by Kummer's connection formulae (E.8a): . (3.16) Note that the connection coefficients are symmetric under the exchange of L ↔ R. Applying the change of basis twice implies the consistency relation (3.18) Green's function We are now in a position to decompose the Green's function with respect to the functions Ψ s and Φ ± R,L . Actually, there are four linearly independent Green's functions G ab with a, b = ±: Thus the standard Green's function G ++ has ∆ + asymptotics on both boundary components, while that with ∆ − asymptotics on the left boundary and ∆ + asymptotics on the right boundary is G −+ . Any Green's function satisfies the condition where δ(X, X ) is the covariant delta function on H d+1 . We begin with an ansatz for G ab in terms of eigenfunctions on H d , which is solved by The Wronskians are found from together with the values for the connection coefficients all others being determined by w ba N M = −w ab M N . This gives the final form for the Green's function: Green's function and the bulk-boundary propagator Consider now the bulk-boundary propagator, which is obtained from the Green's function as follows: if ρ is a defining function on H d+1 , then where ∆ is the scaling dimension of the operator living at the boundary point x . In the coordinate system (3.1) (and in the conformal frame such that the boundary metric is H d ), the defining function is Alternatively, the bulk-boundary propagator can be characterized by the bulk-boundary propagator for the (ab) interface, with insertion at the point x on boundary M . In the notation of section 2 this means, for example, that K − = K −+ L and K + = K −+ R . We give expressions for K ab L ; the generalization to K ab R is obvious. In the Janus conformal frame the conditions become 2. The coefficient of z ∆ −a /2 near the left-hand boundary z → 0 is the covariant delta function δ(x, x ). The first and third properties imply that K can be expanded in the form To impose the second property, we use the connection If we are to get the covariant delta function, the coefficient of this term must give the resolution of the delta function (3.3), implying (3.33) Hence, A simple example is given by the standard bulk-boundary propagator with insertion on the left boundary, In the spherical basis, dµ(s) = dσ . Carrying out the sum over gives This integral can be evaluated straightforwardly by expanding in a power series in (1 − z) and using the integral identity 6 (3.36) Summing the series in (1 − z), we find is related, as it should be, by a Weyl transformation to the usual Poincaré patch expression. 
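The connection coefficients just used descend from the classical Gauss/Kummer relation between the hypergeometric bases adapted to z = 0 and z = 1; a small numerical sketch of that underlying identity (the paper's specific formulae (3.15)–(3.16) and (E.8a) are not reproduced here, and the parameter values below are arbitrary):

```python
import mpmath as mp


def connect_0_to_1(a, b, c, z):
    """Classical Gauss/Kummer connection formula expressing 2F1 about z = 0
    through the two solutions about z = 1 (DLMF 15.8.4); illustrative only."""
    t1 = mp.gamma(c) * mp.gamma(c - a - b) / (mp.gamma(c - a) * mp.gamma(c - b)) \
        * mp.hyp2f1(a, b, a + b - c + 1, 1 - z)
    t2 = mp.gamma(c) * mp.gamma(a + b - c) / (mp.gamma(a) * mp.gamma(b)) \
        * (1 - z) ** (c - a - b) * mp.hyp2f1(c - a, c - b, c - a - b + 1, 1 - z)
    return t1 + t2


a, b, c, z = mp.mpf("0.3"), mp.mpf("0.7"), mp.mpf("1.25"), mp.mpf("0.4")
print(mp.hyp2f1(a, b, c, z))       # direct evaluation about z = 0
print(connect_0_to_1(a, b, c, z))  # same value assembled from the z = 1 basis
```

The propagator (3.37) obtained this way is, as noted above, related by a Weyl transformation to the usual Poincaré patch expression.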
This can be seen by noting that Ξ 2 is a conformal covariant factor associated to our choice of defining functional, 4z(1 − z). The corresponding object for the Poincaré patch is Replacing Ξ by Ξ p.p. in (3.37) gives the standard bulk-boundary propagator. Interface bulk-boundary propagator K +− L The bulk-boundary propagator for a non-trivial defect is found in the same way. Equation (3.34) now takes the form Using Euler's transformation we can write Power expanding in (1 − z), the integral can be carried out using (3.36), and summing gives Using Euler's transformation gives the form which can also be nicely represented as Other bulk-boundary propagators All other propagators can be obtained from these two using the relations Interface operator spectrum One of the key features to understand in any interface CFT is the spectrum of operators living on the defect. Fortunately, from the holographic point of view there is a simple and elegant way to identify the interface operators [18]. Say we have a single scalar field φ which couples only to the background geometry. If the background corresponds to a conformal interface, SO(d, 1) invariance implies the linearized equation of motion can be written in the form where x is the coordinate on H d , and D is a differential operator built using only the transverse coordinate z. If we expand φ in eigenmodes of the operator D, then φ a (x) satisfies the standard scalar field equation on H d with mass m 2 a . Each φ a is now the bulk dual to a defect operator of dimension For double trace interfaces the analysis is particularly simple, as the equation of motion is simply the standard bulk equation of motion in Janus coordinates. The relevant eigenmodes can be found by making the substitution iσ → ν a in (3.10) and (3.11), giving us two convenient bases for the solution space, Our problem now is to identify the allowed values of ν a . Let us say that the left boundary has ∆ − asymptotics, and the right, ∆ + . An allowed eigenmode must satisfy these same asymptotics, which is only possible if ψ + a,R is proportional to ψ − a,L . Using the connection coefficients (3.16) (once again replacing iσ → ν a ), we find that this is true when cos(πν a ) = 0. Throwing out redundant choices, the allowed values of ν a are ν a = 1 2 + a (with a = 0, 1, 2, . . .), yielding the interface operator spectrum: Of course, above we only considered those operators descending from the bulk field φ. However, at O(1) in the 1/N expansion this is the only bulk field modified by the defect. Boundary primaries built from other fields simply have dimensions of the form ∆ + n, with ∆ the dimension of a CFT bulk operator O; these operators are merely descendants ∂ n y O, where y is the coordinate transverse to the interface. Only at O(1/N ) does a generic primary O develop singularities as it is brought to the defect, giving rise to a shift in the conformal dimension of the corresponding boundary operator. Of course, there are also the multi-trace operators, whose dimensions in the large N limit are simply the sum of the dimensions of their component operators. Finally, note that in the above we have chosen the standard quantization for all operators. However, there is one operator which lies in the unitarity window: the operator O 0 dual to φ 0 , which has dimension d 2 . The corresponding double trace operator has dimension d, matching that of the interface displacement operator [19], which can be used to generate deformations in the interface shape. 
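Putting the pieces of this subsection together (the mass-to-dimension step for the H^d modes uses the standard hyperbolic-slicing relation and is our reconstruction of the elided formula), the spectrum of interface primaries descending from φ is

\[
\cos(\pi\nu_a)=0 \;\Longrightarrow\; \nu_a = a + \tfrac{1}{2}, \quad a = 0,1,2,\dots, \qquad \Delta_a \;=\; \frac{d-1}{2} + \nu_a \;=\; \frac{d}{2} + a ,
\]

so the leading interface operator O_0 has dimension d/2 and its double trace :O_0^2: has dimension d, the dimension of the displacement operator.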
This strongly suggests that this double trace operator should be identified with the displacement operator. Since O 0 is the leading boundary operator in the expansion of the bulk operator ϕ, this is consistent with the CFT expectation that the displacement operator takes the form #ϕ 2 + · · · . Correlation functions With the interface bulk-boundary propagator in hand, we turn now to the computation of CFT observables. This section will deal with the two-point functions. Recall that the bulk field φ is dual to a boundary operator ϕ + of dimension ∆ + in A + , and to an operator ϕ − of dimension ∆ − in A − . There are therefore three different correlation functions that we can compute: We begin in section 4.1 by deriving explicit expressions for these two-point functions from the results of sections 2.2 and 3.3. Section 4.2 uses the conformal block expansion of the two-point function to give an alternate derivation of the spectrum of interface primaries at large N . Evaluation of the two-point functions Section 2.2 showed how to extract two-point functions from the bulk-boundary propagators. This can be done using the closed form expressions of section 3, and we do so for G ++ and G −− in section 4.1.1. It is, however, also instructive to work with the representation obtained from solving the dual integral equation, as this approach is more general. To illustrate this procedure, we therefore derive G −+ in section 4.1.2 using the integral representation of appendix B. To evaluate the two-point function G ++ for operator insertions in the A + region, recall that in the standard holographic normalization, G ++ = 2ν[K + ] ∆ + . The bulk-boundary propagator in Janus frame was given in equation (3.44). We make our computation in Poincaré patch coordinates on the H d slices, , corresponding to a planar interface. We wish to compute the correlation function in a flat conformal frame, which requires including the additional Weyl factor (yy ) −∆ + . Combining this factor with equation (2.30a) gives the correlator is the standard holographic normalization factor for scalar correlators. For the planar interface, the conformal cross ratio takes the form ξ = (x−x ) 2 4yy . When comparing with CFT we will use the canonically normalized correlation function The ϕ − ϕ − correlator is obtained from this correlation function by combining the reflection y → −y together with the replacement ν → −ν. We evaluate this propagator using the results of appendix B, which are derived in Poincaré patch coordinates (u, χ) on H d+1 . The boundary points χ can be expressed in spherical coordinates with radial coordinate r; the interface is located on the sphere r = R, and A + is in the interior region. The evaluation of this two-point function can be reduced by SO(d, 1) transformation to the case where r = 0. Equation (2.30c) tells us we should compute [K + ] ∆ − . Due to equation (B.21), as r → 0 the only harmonic that contributes is = 0. We will therefore evaluate the = 0 contribution for r > 0, and then send r → 0. (We must perform the process this way: it involves a distributional integral for which the limit does not commute with the integral.) Set = 0 and take r > R. We take Y 0 = 1, in which case c 0 = (vol S d−1 ) −1 . 
Inserting (B.18) into (B.11) and using (B.12) gives Once we integrate by parts, we can take the limit r → 0 to obtain and using equation (2.30c) gives us the correlator itself, Now, SO(d, 1) invariance imples that the two-point function at general χ can be written in the form 7 with the conformal cross ratio for a spherical defect given by . . We can find [K + ] ∆ − at general values of χ simply by making this replacement in the above expression. (Note that when r < R < r, ξ < −1.) Setting r = 0 and equating (4.10) and (4.9) gives The correlator thus becomes If we perform a conformal transformation to planar interface coordinates x = ( x, y) such that ∆ + is the region given by y > 0, the correlator takes the form where now ξ = (x−x ) 2 4yy . For some purposes it is useful to work with the folded picture correlatorĜ −+ . Witĥ Finally, for comparison with CFT it is useful to give the canonically normalized folded correlator (4.16) Fusion channels and defect spectrum Bulk correlation functions in CFT are well known to be completely determined by the structure coefficients in the theory C p qr . If ϕ p denote the quasi-primary operators of the theory, holds as an operator equation, where C[x − x , ∂ x ] are operators depending only on conformal dimension. Inserting this expansion into correlation functions reduces their computation to a knowledge of C q pp , which are model-dependent, and conformal blocks, which are universal. The requirement of crossing symmetrythat the answer be independent of the order in which OPEs are taken -puts powerful constraints on the spectrum and couplings of a CFT, and underlies the recent success of the numerical conformal bootstrap methods initiated in [20]. Using the folding trick, any interface can be thought of as a boundary of the product CFT. In the presence of a planar boundary any primary ϕ p has the boundary OPE where ψ a runs over the SO(d, 1) quasi-primaries living on the boundary, and D is a function depending only on the dimension ∆ a . Here we have decomposed x = ( x, y), with y the distance to the boundary. In the presence of an interface, this expansion can be used to evaluate any bulk object in terms of interface correlators. In particular, interface two-point functions can be decomposed in terms of boundary conformal blocks, which were first derived in [21]. The requirement that this process yields the same result as the bulk OPE imposes constraints on the CFT and its boundary. In the presence of an interface, non-trivial constraints arise already at the level of two-point functions, and so the structure implied by the bulk and boundary OPEs should be realized in the two-point functions G ab . Since at leading order in the 1/N expansion double trace interfaces do not see coupling to any other fields, the conformal block structure at this order should only involve operators realized holographically in terms of the field φ itself. We will show in this section that the operator dimensions predicted by the conformal block decomposition of the two-point functions match those derived in section 3.4, and so indeed satisfy this condition. Furthermore, we use our results to derive relations between OPE coefficients, which we will compare in specific cases to known CFT results in section 6. In what follows we work with the canonically normalized correlation functiosn G ab . Bulk fusion channel We begin with the bulk fusion channel, derived from the OPE as ξ → 0. 
The correlator of two scalar bulk operators O and O has the bulk conformal block decomposition [21] O where q runs over bulk quasi-primaries, and the bulk channel conformal block is (4.20) When the argument δ = 0 we simply omit it. In the case of 2d CFT this is the expression for the global conformal block; these are the only blocks that will be visible in our decomposition even in 2d CFT, since Virasoro blocks degenerate to global conformal blocks at large central charge. G ++ : The bulk fusion channel is obtained from an inspection of (4.5). The first term corresponds to the identity block, while the leading behavior of the second term corresponds to an operator of dimension 2∆ + . A closed form for the conformal block decomposition of the second term follows from the formulae of appendix E.1.3, (ν) n (ν + 1) n (∆ + ) n n!(∆ + + 1) n (∆ + + ν + n) n F(2∆ + + 2n | x) . Therefore the ϕ + ϕ + OPE contains a quasiprimary O n with non-vanishing one point function for every dimension ∆ n = 2∆ + +2n (n = 0, 1, . . .). This result has a straightforward interpretation: the only operators contributing to the exchange channel at this level are double trace operators built from the descendants of ϕ + . Such an interpretation is consistent with the fact that the interface is built from only one bulk field Φ. We can be much more precise: at leading order in the 1/N expansion, the OPE coefficients satisfy the relation The same analysis applies to G −− under ν → −ν. G −+ : To apply the BCFT formulae we work with a planar interface in the folded picture on the upper half plane. WriteĜ This implies that there is a contribution from fusion channels containing operators O n of dimension ∆ n = d + 2n. These have a quite transparent interpretation in terms of the ϕ − ϕ + OPE: since ϕ − and ϕ + live in different sectors of the product CFT their OPE is non-singular, and clearly closes in terms of the double trace operators built from descendants of ϕ − and ϕ + . In particular, we can read off the coefficient product Obviously, the operator O 0 can simply be chosen as the normal-ordered coincidence limit O 0 = (ϕ + ϕ − ) − (divergence). In this normalization, Boundary fusion channel The bulk-boundary OPE allows bulk operators to be expanded in terms of boundary primary operators ψ a and their descendants, which we take to be orthogonal (4.29) Inserting this OPE into a two-point function, one can derive the representation [21] O where the boundary channel conformal block F ∂ is given by G ++ : Using the hypergeometric indentity (E.8b), we can write which is in the appropriate form to apply (4.31). The decomposition follows from the results of appendix (E.1.3) and takes the form so that we have a contribution from a pair of boundary operators of dimension d 2 + k for each k = 0, 1, . . .. This is the same as the boundary operator spectrum found in section 3.4. Note that as we approach the boundary, the dominant contribution comes from a boundary operator ψ of dimension d 2 , As discussed in section 3.4, it is very natural to guess that ψψ fuses into the displacement operator, which has dimension d. In particular, we expect that the displacement operator two-point function is determined at leading order by the φφφφ four-point function. G −+ : Write the folded picture correlator The first hypergeometric function can be evaluated by replacing the "c" parameter 1 by 1 + , using Gauss' summation formula, and taking the limit → 0, giving (−) k . We therefore obtain matching the spectrum derived in 3.4. 
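For reference, the operator content read off from the two channels at leading order in 1/N can be collected as follows (our summary of the results just derived; n and k are non-negative integers):

\[
\begin{aligned}
\text{bulk channel:}\quad & G_{++}:\ \{\mathbf{1}\}\cup\{2\Delta_+ + 2n\}, \qquad G_{--}:\ \{\mathbf{1}\}\cup\{2\Delta_- + 2n\}, \qquad G_{-+}:\ \{d + 2n\},\\
\text{boundary channel:}\quad & \Delta^{\partial}_k \;=\; \frac{d}{2} + k ,
\end{aligned}
\]

with the boundary-channel dimensions matching the interface spectrum ν_a = a + 1/2 of section 3.4.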
The fusion coefficients satisfy (4.40) Interface partition function We now turn to the computation of the simplest quantum effect of double trace interfaces: the leading contribution to the sphere free energy due to a double trace interface on the equator, at large N . In the specific case d = 2, this quantitiy coincides with the boundary entropy, or g factor [22], of 2d CFT. The defect free energy is the leading non-extensive contribution to the thermal free energy in the expansion in β/L, where β −1 is the temperature and L is the length of a very long semi-infinite cylinder. Thus, for example in 2d BCFT one can write log Z = c 12 Computing the overall one-loop correction to the free energy requires both UV and IR regulators. The defect contribution to the free energy, however, can be expressed as the difference of two free energies defined using the same UV regulator, which is a UV finite quantity. Our construction is as follows. Take the bulk theory to be CFT + ⊗ CFT − . Into this theory we can introduce the double trace interface joining CFT + on the left to CFT − on the right, and vice versa. Consider the difference ∆F of the free energy of this theory with the defect, F D +⊗− , and without the defect, F +⊗− . The bulk contribution to the free energy cancels between these two terms, and so we have Here F ab denotes the free energy of a theory with a single copy of CF T a on the left and CF T b on the right. The g factor is given by with [det D] ab the functional determinant of D = (−2 + m 2 ) with (a, b) boundary conditions. Using Since when ν = 0 the defect is trivial (and hence g = 1), the value of g is given by the integral log g 2 = ν 0 dν d dν log g 2 . Equation (5.5) is infrared divergent and must be regularized by cutting off the bulk integral. Expressing the metric in the form we choose the cutoff surface defined by ρ = ρ * , which corresponds to computing the CFT partititon function on the sphere. To compute the one-loop contribution of the interface, we need to express the cutoff surface in Janus coordinates [23]. For our purposes the coordinate system is useful; note however that the function τ (z) is 2-to-1 and symmetric about z = 1/2. Writing the metric on H d in the form the Poincaré ball coordinates and Janus coordinates are related by The intersection of the cutoff surface with a leaf of given τ is therefore defined by the relation w = w * (τ ), where Note that w * ≥ 0, which means that the minimum value of τ is given by Sphere free energy and the g-factor To proceed, we use equation (3.28) to write H in the form Kummer's formulae (3.15) allow us to write this as The trace now takes the form To evaluate the inner integral, note that the integral over H d in Janus coordinates simply gives the regulated volume (vol H d ) * . For z < 1 2 , a quadratic transformation of 2 F 1 allows us to express Φ ± L in the form Since the integral is symmetric under z → 1 − z, in the above integral we may make the replacement the factor of 2 is required since τ only covers half the geometry. Using the identities together with the doubling formula for Γ, this becomes with · · · vanishing as (and thus τ * ) approaches 0. We obtain This expression has two sources of IR divergence. The first is from the volume (vol H d ) * , while the second is due to the term proportional to −ν . (vol H d ) * has an expansion (for d ∈ 2N) in powers 1−d+2m * , m = 0, 1, 2, . . .. 
Provided d is not an odd integer, the divergences fall into two non-overlapping series, which can presumably be eliminated by counterterms that do not affect the finite part of the trace. Alternatively, we can define the integral with s = − by analytic continuation to ν < 0. Either way, the −2ν divergence can be dropped, and the regularized volume replaced by the standard renormalized hyperbolic volume (vol H d ) ren . We do this from now on. The σ integral now takes the form sin πν π . We can evaluate the σ integral using the results of section E.3. The renormalized volume integral then takes the form Under dimensional regularization the volume of H d becomes [24] (vol It is interesting to compare this to the value of the difference between the renormalized action of CFT + and CFT − [24]: from which one can extract the shift in central charge. It is amusing to speculate that the similarity of these expression may indicate some deeper relation between the change in central charge under RG flow, and the g factor for the corresponding RG defect. Note that our result diverges as d approaches odd integers, corresponding to a logarithmic divergence with respect to . This reflects the fact that in odd dimensions, the defect free energy is associated to a conformal anomaly localized on the interface locus [25][26][27]. Explicit values As examples, we give explicit expressions in several cases where the g factor has no ambiguities. Comparison to field theory results In this section we check our bulk results against computations we make directly in the CFT. We are interested in particular in the coefficients appearing in the correlation functions of section 4.1, and in the g factor of section 5.2. We will compute two-point functions for small ν by means of conformal perturbation theory in section 6.1, and show they coincide at large N with the results of section 4.1. We further calculate the g factor and several overlaps of the solvable RG interfaces constructed in d = 2 coset models by Gaiotto in [5]. We will show in section 6.2 that, assuming the higher spin/W-CFT correspondence of [16], these coincide at large N with our bulk results in two dimensions for all values 0 ≤ ν < 1. Coefficients from conformal perturbation theory A check of the coefficients appearing in the correlation functions of section 4.1 can be made against conformal perturbation theory. A CFT can be perturbed by adding a term to the Euclidean action, where O is an operator of conformal dimension ∆ O , κ is a dimensionless coupling constant, and is a (scheme-dependent) length scale which we will take to be a position space short-distance cutoff. S c.t. is the counterterm action arising during the renormalization procedure. Correlation functions of (renormalized) local operators O i of the perturbed CFT can be expressed schematically in terms of the correlation functions of the CFT as For short flows, the right-hand side can be expanded in powers of the renormalized coupling constants. We are interested in deforming by an operator of the form ϕ 2 − , the normal-ordered product of ϕ − with itself, in the case where ∆ ϕ − = d 2 − ν with 0 ≤ ν < 1. When ν = 0 the interface is trivial, while small values of ν give rise to short RG flows. If the CFT has a weakly curved bulk dual, and if ϕ − is dual to a bulk scalar appearing in the path integral, then in the large N limit ϕ − is a "generalized free field" (see [28] and reference [29] therein). 
This means that correlation functions factorize into two-point functions by Wick contraction. The conformal dimension of this operator is then given by twice the dimension ∆ − of ϕ − , making ϕ 2 − a marginally relevant operator for small values of ν. In the large N limit it is also expected that ϕ 2 − is the only non-trivial relevant operator in the OPE of ϕ 2 − with itself. Denote the coefficient of ϕ 2 − in this OPE by C. In the OPE (position-space cut-off) scheme, the beta function corresponding to κ of the double trace deformation reads is the volume of S d . The value of κ at the IR fixed point (where β = 0) is therefore such that perturbative results in κ correspond to perturbative results in ν. Let us consider a planar interface. Like in section 4.1.1 we will use the coordinates x = ( x, y) but work in the flat conformal frame. Recall that in section 4.1.1 we compute the correlation function for two scalar insertions ϕ − at points x and x , whose distance from the interface is denoted y and y . To first order in κ, this correlation function is perturbatively given by The integral runs over the half-space y < 0, which does not include the two points x and x . Conformal invariance allows us to take both x and x to lie on the positive y axis. The correlator inside the integral has the form so that the right-hand side of (6.5) is proportional to the integral Using spherical coordinates parallel to the interface, and z = −y , we have The angular integral yields the volume A d−2 of S d−2 , while the integral over r is of the form valid for a, b > 0. For the remaining integral over z we use Using (6.9), (6.10) and (6.11), (6.7) is Combining (6.4), (6.6), (6.12), and restoring the x and x dependence, (6.5) becomes is the conformal cross ratio. To first order in ν, this formula coincides with the one obtained in section 4.1.1, which was sin πν π , (6.14) provided that C = 2C . (6.15) This relation is due to the fact that at leading order, ϕ − is a generalized free field. Using Wick contraction it is simple to verify that (6.15) is satisfied. Let us illustrate this in the context of the large-N free/Wilson-Fisher interface of the O(N ) vector model in d dimensions. 8 The theory contains N scalar fields φ 1 , . . . , φ N . The scalar field ϕ − , corresponding up to normalization to the operator φ i φ i , and the double trace operator ϕ 2 − , corresponding to (φ i φ i ) 2 , have the OPEs (6.16) where ellipses stand for omitted irrelevant operators. For N → ∞, we obtain (6.17) in agreement with (6.15). The two-point function of ϕ + can be obtained, to first order in perturbation theory, in a manner analogous to the one just described by perturbing the IR action with the marginally irrelevant operator ϕ 2 + . If the analogous conditions apply for the OPEs of ϕ + and ϕ 2 + , we indeed obtain the result (4.5) (and (4.6)). To compute the perturbative overlap across the interface, which we will compare with section 4.1.2, we start with two insertions of the operator ϕ − on the y axis at positions y > 0 and −y < 0. Let the perturbation run over the half space {x | y > 0}, so that This time we need to cut off the integral over x at radius away from y . In order to compute this, let us split the integral into two parts: one where the coordinate y is outside of the slab s = (y − , y + ), and one where it is inside the slab. 
Outside of the slab we have to compute the integral for which we use (6.9) again to obtain The first integral in the square brackets evaluates to In the other integral we can split the integrand and shift y , such that . (6.22) Using the analogue of (6.11) one has where ψ is the digamma function and γ is Euler's constant, together with such that the contribution from outside the slab becomes Inside the slab we must cut off the integral over the directions parallel to the interface at an appropriate distance from the y axis, depending on the value of y . Rescaling integration variables by y + y , we have where η is the rescaled y , and is the rescaled cut-off. As η is very small, we can expand the last factor of the integrand. All odd powers of η will drop out in the integration, so that we can write Changing coordinates to τ 2 = η 2 + r 2 and expanding the factor (1 + r 2 ) = (1 + τ 2 − η 2 ) in η again, this expression can be written as We now employ the binomial series which is valid on the domain of integration. Note that the η integral of the k th term of the sum yields a suppression by 2k+1 , while its leading contribution to the τ integral is We therefore find For the sum we have 32) and thus Combining the contributions I out and I in , and using the identity together with the value (6.4) of the coupling constant in the IR, the value of the perturbed correlation function (6.18) becomes This expression still contains a divergence in , which is eliminated by an appropriate counterterm. Conformal invariance dictates that to first order in ν, the correlation function must take the form where ξ = (x − x ) 2 /(4yy ) = (y + y ) 2 /(4yy ) is the conformal cross ratio. Since the case ν = 0 corresponds to the identity interface, the function f ν (ξ) must satisfy f 0 (ξ) = ξ − d 2 . Expanding (6.36) to first order in ν and using the dimensions ∆ ± = d 2 ± ν leads to the condition where "c.t." stands for the counterterm contribution. We observe that the condition C = 2C leads to the cancellation of the log y/y term on both sides. The remaining part of the left-hand side must then be a function of ξ alone. In the OPE scheme the counterterm can only depend on the distance y + y of the two field insertions, and therefore cannot do anything other than precisely eliminate the logarithmic divergence. We therefore conclude that which makes (6.36) indeed agree with the gravitational result (4.14). Checks from minimal model holography in d = 2 In d = 2, the duality between Vasiliev Higher Spin theory in the bulk and Minimal Model CFTs on the boundary belongs to the best-understood examples of non-supersymmetric holography [16,30]. The classical bulk contains one massless field of spin s for every integer s ≥ 2, which transform under the higher spin algebra hs(ν), depending on (the square of) an a priori arbitrary complex number ν. The theory includes a complex scalar field (with a propagating degree of freedom, unlike the topological higher spin fields) of mass m 2 = ν 2 − 1, with −1 < ν < 1. A single higher spin gravity gives rise to two boundary theories: under ν → −ν the algebra hs(ν) remains unchanged, while in the unitary window the scalar field of dimension ∆ + acquires the alternate quantization ∆ − . The asymptotic quantum symmetry algebra W ∞ (ν) associated to hs(ν) also arises as the 't Hooft limit of the algebra W N,k . This is the chiral algebra of the CFT M N,k based on the coset which has central charge The 't Hooft limit takes N, k → ∞ at fixed ν = N N +k . 
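The central charge of the coset (6.39) did not survive extraction; the standard W_N minimal-model expression (quoted here as an assumption of the usual conventions) and its behaviour in this limit are

\[
c \;=\; (N-1)\left[\,1 - \frac{N(N+1)}{(N+k)(N+k+1)}\,\right] \;\longrightarrow\; N\,(1-\nu^2) + O(1), \qquad \nu = \frac{N}{N+k}\ \text{fixed},\ N,k\to\infty ,
\]

so the central charge grows linearly with N in the 't Hooft limit while ν plays the role of the bulk mass parameter.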
9 Irreducible representations of the coset (6.39) are labelled by a pair Λ = (λ + , λ − ) of representation labels of su(N ) k and su(N ) k+1 , respectively. 10 We will only consider charge-conjugate theories with diagonal modular invariant: i.e., the theory contains only left-right symmetric pairs of representations, Λ ⊗Λ with Λ Λ , and for each such pair in the Hilbert space, the charge conjugate pair Λ ⊗Λ is present as well. Despite the left-right symmetry, we write tildes over right-movers for the purpose of clarity. The large-level limit of such theories is in general a rather subtle issue [33,34] and leads to continuous orbifold theories [35][36][37]. However, the equivalence with the W ∞ algebras in fact holds for finite N and k -and therefore finite c -since an extension of level-rank duality identifies W N, [30,38]. For the unitary theories (where N and k are positive integers) there exists a well-known relevant deformation of the CFT M N,k which has M N,k−1 as its IR fixed point [39]. Gaiotto introduced interfaces corresponding to this RG flow and gave a recipe for computing its UV-IR overlaps in [5]. The renormalization group flow from M N,k to M N,k−1 was proposed in [16] to be the double trace flow from CFT − to CFT + , and the one-loop computations of [40] support this proposal. Therefore, we expect Gaiotto's interface to be realized holographically as a double trace interface. Because the bulk scalar field is complex, we must take care to include additional factors of 2 when comparing log g and the overlap coefficients with our bulk computations. RG interface construction at finite N and k Before we come to the results in the 't Hooft limit let us briefly explain how the interface is constructed at finite (positive integer) N and k. We give the interface as a boundary condition in the folded theory CFT UV ⊗ CFT IR . The chiral algebra of the folded theory is where we distinguish the IR copy of the level 1 algebra from the UV copy by a prime. The Hilbert spaces of the UV and IR theories decompose into products of representations Λ i ⊗Λ i , where Λ i andΛ i (i = UV, IR) are representations of the left-and right-moving chiral algebra respectively. The Hilbert space of the folded theory will then contain the product of representations Λ U V ⊗Λ IR for the left-moving andΛ U V ⊗ Λ IR for the right-moving degrees of freedom. The boundary condition corresponding to the RG interface consists of a projection in the su(N ) k sector, a permutation brane in the su(N ) 1 sectors, and a Cardy state in the sector su(N ) k−1 /su(N ) k+1 of (6.41) [3]. The projection can be implemented by the topological interface [5] of the product theory. Here S (k) 0λ is a modular S matrix entry of the su(N ) k WZW model. The operators Π project onto the subscript representation 11 (λ, λ − ) U V ⊗ (λ + , λ) IR , which are the products of UV and IR representations sharing a common label λ of su(N ) k . When summed over λ, these operators implement the isomorphism 12 [41] {λ where the left-hand side denotes a representation of the diagonal coset In particular, this isomorphism identifies the su(N ) k current operators J (k)a = J (k−1)a + J (1 )a of the two copies of su(N ) k in the product theory. The boundary condition corresponding to the RG interface is given by the fusion product of the topological interface I with the boundary state The prime in this expression indicates that the sum only runs over representations {λ + , λ − } where the (suppressed) labels of the two su(N ) 1 parts are identical. 
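As a point of reference for the boundary-state formulas above, whose displays are partly garbled here, the generic Cardy ansatz for a rational CFT reads (this is the standard template, not the specific state (6.45)):
$$|B_\mu\rangle=\sum_{\Lambda}\frac{S_{\mu\Lambda}}{\sqrt{S_{0\Lambda}}}\,|\Lambda\rangle\rangle,$$
where $|\Lambda\rangle\rangle$ are Ishibashi states. In the RG interface state the Ishibashi states are additionally twisted by the $\mathbb{Z}_2$ permutation of the two level-1 factors, as described next.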
The Ishibashi states |{λ + , λ − } Z 2 are defined such that they implement a permutation (indicated by the subscript Z 2 ) of these su(N ) 1 parts [42][43][44]. 11 Only the left-moving degrees of freedom are indicated here. 12 Notice that two representation labels of su(N )1 are suppressed on both sides in equation (6.43). The prescription for computing the operator overlaps is therefore as follows. Suppose we want to compute the overlap of the UV operator Φ U V and the conjugate of the IR operator Φ IR , which are composed of left-and right-moving parts The operators φ U VφIR andφ U V φ IR then constitute the left-and right-moving part of the corresponding operator in the doubled theory, respectively. If φ U V is an operator in the representation (λ + , λ − ), andφ IR is in the representation conjugate to (λ + ,λ − ), we write φ U VφIR as a state in the representation {λ + , λ − } of the left-hand side of (6.43). This image only exists if the representation labels of su(N ) k agree, i.e. if λ + =λ − . After the projection we compute the inner product of φ U VφIR and the Z 2 flipped image ofφ U V φ IR , where the latter is obtained by exchanging all degrees of freedom of su(N ) 1 and su(N ) 1 . This requires that the (suppressed) representation labels of su(N ) 1 and su(N ) 1 agree. Finally, the resulting inner product must be multiplied with the corresponding coefficient of the boundary state, leading to the formula λ) ) . (6.47) The RG interface in the 't Hooft limit For finite N and k, one way to quantify the length of an RG flow is to consider the reflectivity of the RG interface [3]. Reflectivity is measured here with respect to specific parts of the chiral symmetry algebra, and different definitions exist. A coefficient which exists for any conformal interface measures reflection and transmission of energy and momentum [45]. From the matrix one defines the reflection and transmission coefficients where These coefficients have the property that R+T = 1. Also, 0 ≤ R ≤ 1 for interfaces between unitary CFTs, with R = 0 for topological interfaces and R = 1 for interfaces which are (totally reflective) conformal boundary states. For our RG interfaces, the matrix R of (6.48) is rather easy to compute. The (left-moving) energymomentum tensor components of the UV and the IR are given by where is the standard Sugawara energy momentum tensor of the su(N ) k WZW model. Following the prescription of identifying J (k)a = J (k−1)a + J (1 )a and applying the Z 2 transformation J (1)a ↔ J (1 )a one obtains [3] We observe that in the 't Hooft limit, the entries R 11 and R 22 (related to reflection) remain finite, while the off-diagonal entries R 12 and R 21 (related to transmission) diverge. The coefficients R and T , however, remain finite, and asymptote to R = 0 and T = 1, as for a topological interface. Notice that in spite of the finite change in central charge, there is in fact no contradiction here, since the central charges of the UV and the IR theory are both infinite in the 't Hooft limit. The RG interface in the 't Hooft limit is in general not the identity, as shown by the non-trivial boundary entropy computed in section 5.2 and confirmed in the next subsection. One could at this point also compute the overlaps -the matrix R -for higher spin fields W s instead of T . Each higher spin field of the bulk corresponds to a descendent of the vacuum representation of the boundary CFT. 
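The Sugawara and coset stress tensors invoked above are garbled in this extraction; their standard forms under the GKO construction, with the prime denoting the IR copy of the level-1 algebra as in (6.41), are (a reconstruction, up to convention differences):
$$T^{(k)}(z)=\frac{1}{2(k+N)}\sum_{a=1}^{N^2-1}:\!J^{(k)a}J^{(k)a}\!:(z),\qquad T_{UV}=T^{(k)}+T^{(1)}-T^{(k+1)},\quad T_{IR}=T^{(k-1)}+T^{(1')}-T^{(k)}.$$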
In the coset numerator theory, the state corresponding to the field of spin s has the form [46] where s c 1 ...cs is proportional to the totally symmetric invariant tensor of rank s present for 2 ≤ s ≤ N in su(N ). The coefficients A n are determined by requiring that |W s transforms trivially under the denominator subalgebra, and by the normalization condition W s |W s = c/s. For the example s = 3 one finds , which leads to the overlap matrix The fact that R 11 is negative for unitary theories is an indication that the conformal RG interface breaks the higher spin algebra. Also, the four entries do not sum up to (c N,k + c N,k−1 )/3, that they do not provide a sensible measure of reflection and transmission. RG interface boundary entropy In the boundary state formalism, the g factor of the RG interface is the coefficient of the vacuum Ishibashi state in the defect boundary state, i.e., (6.58) The modular S matrix elements of the right-hand side can be found in the standard literature (see e.g. [47]), and are reproduced for convenience in appendix F. We observe that the g factor can be written as a product g 2 = P 1 P 2 (6.59) with In the 't Hooft limit, the logarithm of P 1 only contributes at subleading order in 1/N , In order to compute the logarithm of P 2 , define The following expansion in δx holds: With (6.62) and (6.63) we can express the leading contribution to log P 2 for large k and N as The sum is convergent as long as every x is smaller than π, which means that the expansion is valid in the case 0 ≤ ν < 1. In the 't Hooft limit the sum becomes an integral. Since the error term is of order 1/N 2 , the sum will yield the correction up to first order in 1/N . By the Euler-Maclaurin formula we obtain Combining the results (6.61) and (6.65) we find that In the Hooft limit we therefore have d dν log g 2 = πν 2 cot(πν) . (6.67) After including the factor of 2 for the complex field, this agrees precisely with the bulk result (5.29). Matching of coefficients for two-point functions We can also use the recipe of section 6.2.2 to check the coefficients in the two-point functions of section 4.1. The bulk scalar field is dual on the IR side of the interface to the CFT operator ϕ + = Φ IR (f,0) , and to ϕ − = Φ U V (0,f ) on the UV side, where f denotes the fundamental representation of su(N ). The conformal dimensions of Φ IR (f,0) and Φ U V (0,f ) for finite N and k are The first coefficient we would like to match is the constant B in (4.5). Writing the OPE of the scalar field in the IR as Φ IR (f,0) × Φ IR (f,0) ∼ 1 + C IR Φ IR (adj,0) , the constant B is given by the expression The operator Φ IR (adj,0) corresponds to the double trace perturbation in the IR. For finite N and k, the value of C IR can be obtained, e.g., from Coulomb gas methods [48,49]. Since this calculation is not in the focus of this paper we refrain from performing it here, and only point out that in the 't Hooft limit the OPE coefficients of UV and IR coincide, with C approaching 1. The coefficient C goes to 2, in agreement with condition (6.15). Now consider the overlap of the IR operator Φ IR (adj,0) with the identity in the UV. In the numerator su(N ) k−1 ⊗ su(N ) 1 of the IR coset, the chiral state corresponding to this operator can be written as In (6.70), a sum over the indices a of the currents is implied, and indices are raised and lowered with the Killing form K ab . The state |J is the corresponding Virasoro highest-weight state present in the su(N ) k−1 adjoint representation. 
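For reference, the finite $N$ and $k$ conformal dimensions of $\Phi^{IR}_{(f,0)}$ and $\Phi^{UV}_{(0,f)}$ quoted earlier in this subsection are garbled; the standard coset weights (reconstructed from the usual $W_N$ minimal-model results, e.g. Gaberdiel-Gopakumar, up to convention differences) are
$$h_{(f,0)}=\frac{N-1}{2N}\Big(1+\frac{N+1}{N+k}\Big)\ \xrightarrow{\ \text{'t Hooft}\ }\ \frac{1+\nu}{2},\qquad h_{(0,f)}=\frac{N-1}{2N}\Big(1-\frac{N+1}{N+k+1}\Big)\ \xrightarrow{\ \text{'t Hooft}\ }\ \frac{1-\nu}{2},$$
so that $\Delta_\pm=2h=1\pm\nu$, consistent with $\Delta_\pm=\tfrac{d}{2}\pm\nu$ at $d=2$.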
Following the recipe of section 6.2.1 we compute the overlap The Ishibashi state coefficient is S Its computation is similar to that of the g factor in the section above. Using equation (F.1) of the appendix we find that S k−1 adj,0 S k+1 where we used the expression for the g factor from the previous section. In the 't Hooft limit, the right-most factor goes as The two factors (6.72) and (6.74) therefore combine into Comparing with (4.6) we observe that we have a precise match. Choosing the two insertions to be on the UV side can be done in the analogous way, and in the limit merely results in the replacement ν → −ν. It is also straightforward to verify the overlap of the scalar across the interface we found in section 4.1.2. In the UV theory, the chiral part of the scalar ϕ − , corresponding to Φ U V (0,f ) , can be written as a state in the numerator of the UV coset as where ω 1 denotes the first fundamental weight (which is the highest weight in the fundamental representation) of su(N ), and |ω 1 (1) is the highest weight state of the fundamental representation of su(N ) 1 . In order to have a non-vanishing overlap we insert the conjugate of the scalar ϕ + in the IR, corresponding to Φ IR (f ,0) . The chiral state in the IR coset numerator lies in the productf (k−1) ⊗ f (1 ) of the antifundamental representation of su(N k−1 ) and the fundamental representation of su(N 1 ). It is given by where ω N −1 is the highest weight of the antifundamental representation, α i are the simple roots of su(N ), and |λ (k) denotes the basis state of weight λ in the fundamental (or anti-fundamental) representation of su(N ) k . For finite N and k, the overlap coefficient of one scaler field insertion on each side of the interface is In the prefactor of modular S matrices we notice that for any level k, Using the explicit expressions (6.79) and (6.80), the other factor in (6.81) becomes The RG overlap in the 't Hooft limit is therefore Dividing by g = 1 and including a factor of 2 for the complex scalar, this is indeed what we obtain as coefficient from (4.16). Conclusions and Discussion In this paper, we gave a semi-classical holographic construction of double trace interfaces -RG interfaces associated to an RG flow initiated by double trace deformation. We discussed methods for constructing double trace interfaces of any shape and computing observables using mixed boundary value problem techniques. We gave a simple integral representation for the bulk Green's function associated to a spherical interface, as well as the bulk-boundary propagators and CFT two-point correlation functions in closed form. From these results we obtained the leading contribution of the spherical defect to the CFT partition function (yielding for d = 2 the boundary entropy). Double trace interfaces have arisen previously in concrete systems of interest, allowing us to test our gravitational results against CFT computations. We derived the two-point function in the presence of double trace interfaces in conformal perturbation theory, and showed that the result matches the weak-coupling limit of our gravitational computation in the large-N limit, where the single trace operator becomes a generalized free field. This result generalizes the special case of a Wilson-Fisher/free field interface near d = 4, studied in [4] using bootstrap methods. It would be interesting to compute the correlator at large N in the most physically relevant dimension d = 3. This should be doable by standard methods; we leave this to future work. 
In d = 2, the W N minimal model RG defects constructed in [5] are realized as double trace interfaces within the higher spin gravity/WCFT proposal of [16]. Using our results, we were able to compute several interface overlap coefficients in the semi-classical limit. We computed the same coefficients using the exact results of [5] and showed that they coincide at large N . Furthermore, we computed the exact boundary g-factor in these models, and showed that its large N limit is reproduced by our one-loop gravitational result. Questions and future directions There are several further observables associated to double trace interfaces that would be interesting to compute. One is the leading (one-loop) correction to the stress tensor two-point function (which in d = 2 reduces to the transmission/reflection coefficients of [45]) and other operators, and the leading (classical) contribution to the higher-point functions. A further question, of interest for the theory of conformal interfaces, would be to study the fusion of double trace interfaces. This computation was outlined in section 2.3. There are two further general points of possible interest we would like to mention. The first is related to defect conformal bootstrap. For free fields, the work of [50] showed that the two-point function for Dirichlet and Neumann boundary conditions of a free field could be reproduced by imposing crossing symmetry. We constructed the large-N spectrum of non-trivial defect operators, and saw that the conformal block decomposition of our two-point functions closes on these operators in the boundary channel, and on double trace operators in the bulk channel. It is interesting to ask whether our two-point functions are the unique solution to the crossing equations that can be generated in this way at large N ; it is further possible that, using this boundary spectrum as a starting point, one could push the analytic bootstrap results of [4] past leading order in . It is also tempting to apply Mellin bootstrap [51] methods to this problem, since there the effects of double trace operators are included automatically in the Mellin space representation. The second point is that the match between the gravitational partition function and the g-factor for Gaiotto's defect provides further evidence for the proposal of [16] that Zamolodchikov's integrable RG flow is implemented holographically as a double trace deformation. The starting-point of this RG flow is described on the one hand by the alternative quantization, but according to the original duality it should be described also by a higher spin gravity with the standard quantization but (at finite N ) a slightly different value of ν. This suggests a duality between distinct higher spin gravity theories. It was shown in [40] that the one-loop correction to the central charge is also consistent with this hypothesis. It would be of interest to pursue this question further. we see that The inner integral is a Weber-Schafheitlin discontinuous integral (see e.g. (11.4.33) of [52]), and takes the value We thus find (B.10) In particular, [K ] ∆ + vanishes for r > R, and so our ansatz guarantees that [K2] is satisfied. We must also impose the condition [K ] ∆ − = δ (d) (x − x ) for r < R. Inserting our ansatz gives Once again the inner integral is a Weber-Schafheitlin discontinous integral, and takes the value so that for r < R, Let us now impose the condition that (B.14) This is an integral equation of Abel type. For 0 < α < 1, the equation (see e.g. (2.3.2) of [17]). 
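The Abel-type equation and its solution quoted from [17] are garbled here; the classical inversion formula presumably being invoked reads: for $0<\alpha<1$,
$$\int_0^x\frac{\varphi(t)}{(x-t)^{\alpha}}\,dt=f(x)\qquad\Longrightarrow\qquad \varphi(t)=\frac{\sin\pi\alpha}{\pi}\,\frac{d}{dt}\int_0^t\frac{f(x)\,dx}{(t-x)^{1-\alpha}}\,.$$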
With the substitutions we obtain Our particular condition is . The delta function can be expanded This means that φ (r) = c r d−1 δ(r − r ), and therefore Together with equations (B.2) and (B.6), this yields an explicit integral representation for the mixed bulkboundary propagator K + . The relevant integrals can be evaluated in terms of hypergeometric functions, and one can verify that (up to a change in conformal frame) the result matches (3.44). C Spectral decomposition on H d Start with the metric (A.11) on H d , with γ coordinates on S d−1 . We look for solutions to the equation Let index the harmonics Y on the unit sphere S d−1 , and denote by L the eigenvalues of −∇ 2 S d−1 ; L = k(k + d − 2) for an integer k. Decomposing Ψ = Y ψ(w), we have This equation is hypergeometric, and has a unique solution that is finite as w → 0: where we have expressed the eigenvalue in the form λ = σ 2 + (d−1) 2 4 . We can find the normalized eigenfunctions in the following way. First of all, let Y be normalized, (C.5) The Olevskii transform gives a resolution of the radial delta function of the form and Ψ ,σ (w, γ) = Y (γ)ψ k,σ (w), we obtain the identity Similarly, from the inverse Olevskii transform we find that where x i denote the coordinates on H d . Thus the functions Ψ ,σ form a complete basis for the normalizable functions on H d . SO(d, 1)-invariant bifunctions Our primary interest is in bifunctions on H d , i.e. functions u(x, x ) of two points x, x ∈ H d that are symmetric and invariant under SO(d, 1), that are also eigenfunctions of the Laplacian. As with any function, it is possible to expand it with respect to eigenfunctions of the Laplacian u λ (x, x ) satisfying the same properties. At fixed eigenvalue λ σ , such functions can be decomposed as a sum over spherical harmonics of the Ψ ,σ functions: u λσ (x, x ) = c (x )Ψ ,σ (x) . (C.10) Such a function depends only on the hyperbolic distance, and so it suffices to set x = 0 (i.e. w = 0). The expression is further rotationally invariant around x = 0, which implies that only the = 0 mode contributes. We thus find u(x, 0) = c Ψ 0,σ (w) ; can be given in terms of the cross-ratio χ 2 d : For w = 0, χ 2 d = w, so u(x, 0) above can be covariantized to general x by replacing w → χ 2 d . A bifunction of particular interest for us is J σ (x, x ) = Ψ ,σ (x)Ψ ,σ (x ) . (C.14) Equation (C.8) implies that dσ J σ (x, x ) = δ(x, x ), which is invariant under SO(d, 1) transformations; because SO(d, 1) doesn't mix eigenvalues of the Laplacian, this implies that J σ (x, x ) itself is invariant under SO(d, 1) transformations. By acting with a conformal transformation, we can set x = 0, in which case all modes but k = 0 drop out. With Y 0 (γ) = (vol S d−1 ) −1/2 , As we saw above, we can find its value at general x by replacing w → χ 2 d : We often require the value at coincidence: 2 . (C.18) D Integral transforms In Janus coordinates, we make extensive use of a hypergeometric index integral transform. The transform in question is a generalization of the Mehler-Fock transform that was first discovered by Weyl [53]. His work was largely forgotten, and the same integral transform was later rediscovered by Titchmarsh [54] and Olevskii [55]. 
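For orientation, the classical Mehler-Fock pair that this index transform generalizes is, with $P_{-\frac12+i\tau}$ the Legendre function,
$$\hat f(\tau)=\int_1^\infty f(x)\,P_{-\frac12+i\tau}(x)\,dx,\qquad f(x)=\int_0^\infty \tau\tanh(\pi\tau)\,P_{-\frac12+i\tau}(x)\,\hat f(\tau)\,d\tau.$$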
The inversion theorem (see, e.g., [56]) states that f (x) is recovered by the following formula: f (x) = J −1 a,c {g}(x) = 1 π ∞ 0 ds Γ(a + is)Γ(c + is) Γ(2is)Γ(a + c) E.1.3 Sum relations The decomposition into conformal blocks of section 4.2 is accomplished using equation (4.3.11) of [57], (E.9) valid for any choice of α, β, γ such that the identity makes sense. For our two-point function, 4 F 3 reduces to 3 F 2 , and we apply Saalschütz's theorem F Conventions for su(N ) Our conventions for su(N ) and its affine algebras follow [47]. Here we collect some facts which are important for our section 6.2. The dimension of su(N ) is N 2 − 1, and the dual Coxeter number is g ∨ = N . Bases of generators are denoted J a , a = 1, . . . , N 2 − 1. The weight and root lattice of su(N ) can be realized in R N with standard basis e 1 , . . . e N : The roots are given by α = e i − e j for i = j, and we define the positive roots to be those with i < j. A set of simple roots is then provided by α i = e i − e i+1 for i = 1, . . . , N − 1. The root lattice consists of all vectors of the form N i=1 n i e i with n i ∈ Z and i n i = 0. The Weyl vector, given by half the sum of all positive roots, is represented by ρ = 1 2 N i=1 (N + 1 − 2i)e i . The fundamental weights are ω i = i j=1 e j − i N N j=1 e j for i = 1, . . . , N − 1; every weight is given by λ = N −1 i=1 λ i ω i with Dynkin labels λ i . In our case we need in particular the fundamental and antifundamental representations with highest weights ω 1 and ω N −1 , respectively. The fundamental representation contains the weights ω 1 − i j=1 α j , and the antifundamental representation contains the weights ω N −1 − i j=1 α N −j for i = 0, 1, . . . , N − 1 (empty sums are 0). We also need the adjoint representation θ, which has Dynkin labels 1, 0, . . . , 0, 1. In the su(N ) k WZW model, the chiral fields J a (z) can be decomposed into modes J a n , n ∈ Z, where J a 0 act as −J a on the Virasoro highest weight states |λ labeled by su(N ) weights, and J a n |λ = 0 for n > 0. The |λ have (chiral) conformal dimension (λ, λ + 2ρ)/2(k + N ), where the inner product coincides with the standard one on R N . Highest weight operators with respect to the su(N ) k algebra only occur if (λ, θ) ≤ k.
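As a worked example of the dimension formula just quoted: for the fundamental representation $\lambda=\omega_1$ of $su(N)_k$ one has $(\omega_1,\omega_1+2\rho)=2C_2(f)=\tfrac{N^2-1}{N}$, so
$$h_{\omega_1}=\frac{N^2-1}{2N(k+N)},$$
and the highest-weight condition $(\lambda,\theta)\le k$ is satisfied for all $k\ge 1$, since $(\omega_1,\theta)=1$.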
19,324.2
2017-07-11T00:00:00.000
[ "Physics" ]
Mesoscopic Interference for Metric and Curvature (MIMAC)&Gravitational Wave Detection A compact detector for space-time metric and curvature is highly desirable, especially if it could also detect gravitational waves. Here we show that quantum spatial superpositions of mesoscopic objects, of the type feasible with potential advancement of techniques, can be exploited to create such a detector. By using Stern-Gerlach (SG) interferometry with masses much larger than atoms, where the interferometric signal is extracted by measuring spins, we show that accelerations as low as $5\times10^{-16}\textrm{ms}^{-2}\textrm{Hz}^{-1/2}$, as well as the frame dragging effects caused by the Earth, can be sensed. The apparatus is constructed to be non-symmetric so as to enable the direct detection of curvature and gravitational waves (GWs). In the latter context, we find that it can be used as meter sized, orientable and vibrational (thermal/seismic) noise resilient detector of mid and low frequency GWs from massive binaries (same regimes as those targeted by atom interferometers and LISA). Matter wave interferometry, very successful with atoms [1], and implemented already with macromolecules (10 4 amu mass) [2], is gradually progressing towards ever more macroscopic masses. Several viable ideas have been proposed to date to demonstrate quantum interferometry with larger masses, primarily with foundational motivations such as testing the limits of the superposition principle [3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18] or exploring the quantum nature of gravity [19,20]. It is thus worthwhile to question the extent to which a large object matter wave interferometer can detect the full classical gravitational effects in a location as quantified by the metric and curvature. This comes against a backdrop of proposals of smaller particle interferometers [21][22][23] or larger quantum optomechanical systems [24,25] to detect a g 00 metric component arising from a Newtonian potential, whose variations can be used to infer the associated component of curvature or to detect the Earth's rotation [26,27] or general relativistic effects [28]. The most challenging entities to detect are the GWs, the g ij metric components, whose detection has been a huge recent success using kilometre long optical interferometers [29,30], with future devices proposed in space [31]. On the other hand there are also proposals for usage of atomic interferometers [32][33][34][35] and various quantum resonators [36,37], but nothing yet on the potential of interferometers for propagating (untrapped) objects which are much more macroscopic than single atoms. In this paper, we will employ mesoscopic-object interference for detecting metric and curvature (MIMAC), based on the Stern-Gerlach principle [13,38]. Here, although a spatial interferometry involving superpositions of separated motional states takes place, the output signal of the interferometer is encoded in a spin degree of freedom in a manner which is insensitive to noise in the motional state (which may have thermal or seismic origin). We demonstrate that it can be used to observe the metric and, by using a non-symmetric set-up, also "directly" observe the derivatives in the interferometric signal which determine the curvature of a perturbed Minkowski metric (as opposed to measuring the metric in proximal regions and inferring the curvature by taking appropriate derivatives). 
Additionally, these interferometers enable the measuring of the Earth's frame dragging and gravitational waves of certain strength and frequency range. In all these cases, it is remarkable, and indeed directly due to the high masses of the objects undergoing interferometry, that the interferometer is very compact (not needing to be any larger than a meter), as well as highly sensitive at a single object level, i.e., not requiring high fluxes of the interfered objects. Interferometric setup: The interferometric system we consider is an asymmetric modification of that proposed by Wan et. al. [13] as shown in Fig. 1. We use a mesoscopic mass containing an embedded spin 1 degree of freedom (three spin states |+1 , |0 , |−1 ). An example is a diamond crystal of nanometer to micrometer dimensions with a NV-centre spin, which is being widely probed as a candidate for the type of experiment we propose [10,12,39,40]. Imagine it to be leaving a source in a motional wavepacket |ψ(0) centred at the velocity v = (0, v y , 0) and in a spin state |0 . At time t = 0 it is initialised (by the application of a sudden microwave pulse) in a superposition of spin eigenstates 1 √ 2 (|+1 + |0 ). The presence of a magnetic field gradient in the x direction induces an acceleration a = (a, 0, 0) on the |+1 spin state (i.e., couples the spin and motional states). The acceleration of the |+1 component is reversed at a time t = τ 1 by applying a microwave pulse which accomplishes |+1 ↔ |−1 and reversed again at t = τ 2 = 3τ 1 by another identical pulse so that at any generic time while a object traverses the interferometer, the combined state of its spin and motion is 1 √ 2 (|0 |ψ 0 (t) + |σ |ψ 1 (t) ), where σ = +1 for 0 < t < τ 1 and τ 2 < t ≤ τ 3 , and σ = −1 for τ 1 ≤ t ≤ τ 2 . Thus there are two interferometric spatial paths: our procedure will first lead to a maximum spatial superposition at time t = 2τ 1 so that at that time, the centres of the spatial states |ψ 0 (2τ 1 ) and |ψ σ (2τ 1 ) are separated by ∆x = aτ 2 1 and then automatically bring the two compo- nents back so that their motional states exactly overlap at time t = τ 3 = 4τ 1 , i.e., |ψ 0 (4τ 1 ) = |ψ σ (4τ 1 ) . This has two striking consequences [13]: (i) The relative phase ∆φ between the interferometric arms is mapped on to the spin state in the form 1 √ 2 |+1 + e i∆φ |0 , so that it can be measured by measuring the spin state alone (for example, from the probability of the state to be brought to the spin state |0 by the application of a third microwave pulse). (ii) The ∆φ depends solely on the difference between phases accumulated in the interferometric paths, and is quite independent of the initial motional state |ψ(0) of the object. Thus, as long as the condition of a uniform magnetic gradient can be fulfilled, the interferometric signal is unaffected by an initial mixed thermal state or other noise (e.g. seismic) in the initial motional state as these can always be modeled as probabilistic choices of |ψ(0) . The non-relativistic action: The phase difference between the two interferometric paths ∆φ = ∆S/h, where ∆S is the difference in action between the two arms. Consider the space-time metric g µν = η µν + h µν where η µν is the standard Minkowski metric of signature (− + ++) and h µν is some small perturbation that may have space and time dependencies. 
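The weak-field expansion discussed in the next passage (the paper's Eq. (1)) does not survive this extraction. A standard reconstruction, consistent with the scalings described below (signs depend on the convention for $h_{\mu\nu}$, and the constant rest-mass term is dropped), is
$$S\simeq\int dt\left[\tfrac12 m v^2+\tfrac{m}{2}\left(c^2 h_{00}+2c\,h_{0j}v^j+h_{ij}v^i v^j\right)\right].$$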
Then the action for a particle of mass m along a trajectory in the nonrelativistic limit (where the laboratory time t will be used as a replacement for the proper time due to the fact we are in the non-relativistic limit) is (1) Already by inspection of the above formula for action it is evident that compared to the term with h 00 , the terms h 0j are harder to detect as c is replaced by a nonrelativistic velocity v j , while h ij is the most difficult to detect with c 2 replaced with v i v j . On the other hand, a high value of m helps in amplifying the action and hence the phase difference. We expand S to the second order in derivatives of h µν assuming a static and slowly varying metric. This gives the difference in the action between the two interferometric paths due to the different components h µν (µ, ν = 0, x, y, z) as where all truncated terms are not pertinent to the effects explored in this letter. Note that we detect the second derivatives of h µν in the phase as well, so that spacetime curvature characterised by the Riemann tensor can be extracted from these derivatives of the perturbation as . Newtonian potential: If we only consider the first non-Minkowski term we can make the substitution h 00 = 2M G/c 2 R with the x-axis being vertical, the experiment taken to be performed at ground level so that R will be the radius of the Earth, and M Earth's mass, a difference in action between the two arms up to the second order in Eq.5 is consistent with the expectation discussed in [41] that any curvature detection will be of the form U (L/R) 2 where U is the gravitational potential and L is the characteristic laboratory length (in the above case, L ∼ ∆x). Despite this quadratic suppression of the curvatures effect, it is still detectable due to the inverse Plank's constant factor in the phase difference it leads to. As such, we can expect to observe even second order effects (curvature effects) as large phase shifts. Fig. 2 shows how these results scale with the mass of the object in the interferometer assuming certain spatial separation ∆x between the interferometric paths being possible due to the requirements to create and maintain the coherence of such a superposition for relevant time-scales, see the discussion below. From Fig.2 it can be seen that a mass of 10 −16 kg in a ∼ 1mm interferometer with interrogation time τ 1 ∼ 100ms gives a detection of acceleration with sensitivity down to 5 × 10 −16 ms −2 Hz −1/2 where, additionally, a flux of N = 200 objects at a time is used (in this case, inter-particle interactions give only a 5% error in the phase). Frame Dragging:To explore the detection of frame dragging effects the slowly rotating metric has to be considered, see [42] ds 2 = −H (r) c 2 dt 2 + J (r) dr 2 + r 2 dθ 2 + r 2 sin 2 (θ) (dφ − Ω dt) 2 (7) In descending order first order Newtonian (blue), second order Newtonian/curvature (orange), first order frame dragging (green) and second order Frame Dragging/curvature (red). As the mass m increases, the phase change increases as ∆x = aτ 2 1 can be kept to its highest value by allowing more time τ1. However, an optimal point is reached slightly after about m = 10 −16 kg after which the ∆x obtained with the maximum τ1 starts decreasing in inverse proportion to mass even for the fixed maximum feasible values of magnetic fields (10 6 Tm −1 ). 
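A minimal order-of-magnitude sketch of the quoted acceleration sensitivity, assuming the parameters given above; the O(1) geometric factor of the diamond-shaped trajectory and the exact time averaging are dropped, so this only reproduces the scale of the 5 x 10^-16 m s^-2 Hz^-1/2 figure:

```python
import numpy as np

hbar = 1.054571817e-34  # J s

# Parameters quoted in the text; the geometry factor of order one is dropped (assumption).
m    = 1e-16   # kg, interfering mass
dx   = 1e-3    # m, maximum arm separation
tau1 = 0.1     # s, quarter of the total interferometer time (tau3 = 4*tau1)
N    = 200     # objects traversing the interferometer per shot

# Phase from a differential acceleration delta_a along x: delta_phi ~ m*delta_a*dx*tau1/hbar.
# Setting delta_phi = 1/sqrt(N) per shot and converting to a per-root-Hz figure
# with a repetition time of order tau3 = 4*tau1:
tau3 = 4 * tau1
a_min_shot = hbar / (m * dx * tau1) / np.sqrt(N)   # per shot
a_min_hz   = a_min_shot * np.sqrt(tau3)            # per sqrt(Hz), roughly

print(f"single-shot sensitivity ~ {a_min_shot:.1e} m/s^2")        # ~ 7e-16
print(f"per-root-Hz sensitivity ~ {a_min_hz:.1e} m/s^2/Hz^0.5")   # ~ 5e-16, as quoted
```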
where where the binomial expansion approximation has been used for being in the linearized limit, and Ω = 2M Gν/c 2 R is the scaled angular velocity of the central rotating mass, where once again M is the mass of the Earth, R is its radius and ν is its angular velocity. The relevant component of Eq. 7 is the cross term dφdt. The apparatus can be aligned along ∆r = ∆x and ∆θ = 0 which is to say, the x direction is 'up' and y direction is aligned parallel to earth's lines of latitude, specifically we will later assume to be located at the equator. We also have ∆y giving rise to dφ, we will make the small angle approximation given the trajectories length is relatively short, hence dφ/dτ ≈ v y /r. The phase difference, again to the second order in aτ 2 1 /R , is thus given by: This phase difference is plotted as a function of mass with a fixed interferometer size in Fig. 2. Substituting all known constants, assuming the interferometer is located on the surface of the Earth, gives ∆φ (h 0j ) = 4 × 10 21 − 6 × 10 3 mav y τ 3 1 + 6 × 10 −4 ma 2 v y τ 5 1 . Once again we note that greater sensitivities can be achieved with larger mass particles used in the interferometer. We can also see that the frame dragging caused by Earth has a significantly more modest effect on phase. It also suggests high precision measurements would be needed to be able to measure the second order derivatives of the metric perturbations due to frame dragging due to Earth's rotation. In Fig. 2 we have plotted the phase ∆φ with respect to the mass of the object for our apparatus situated on Earth, where we have taken Earth's rotation and the radius. Gravitational waves (GWs): Our setup can also extract the phase from the transverse traceless perturbations around the Minkowski background: we assume that a GW is propagating along the x 3 = z direction perpendicularly to the interferometer with angular frequency ω, the two helicity states of the GWs are h + , h × 1. We will also ignore the kinetic energy component of the atoms in action, see Eq.(1), as it is not relevant for the purpose of detecting the phase. The GW induced phase difference for our apparatus is thus given by: where ψ 0 is the wave's phase at t = 2τ 1 and the expansion is around ωτ 3 ≈ 0, i.e., the GW is assumed to not vary appreciably over the length of the interferometer. Note that the h × component is not recorded in our interferometer, as it is proportional to v x v y which varies between positive and negative values, thus cancelling itself out. The h + component is a function of v 2 x , and therefore does not cancel in this way. Essentially to detect the h × component, one has to rotate our apparatus by 45 degrees. At this point, it is worthwhile to compare our proposal with other interferometric schemes for GW detection, although we acknowledge that our scheme has much to develop as here we are only showing its "in principle" feasibility with certain achievable advances in technology. In the domain of atomic interferometry, one of the most advanced of these suggestions is the Atomic GW Interferometric Sensor (AGIS) as discussed in [43] which generates an approximate phase difference of ∼ 10 16 h + for the space based detector [35] with baseline size L ∼ 10 7 m compared to our value of ∼ 10 17 h + for a baseline size of ∆x = 1 m as shown in Fig. 3. 
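A rough estimate of the quoted phase response of order 10^17 h+ per unit strain; tau1 of order 1 s is an assumption (it is not stated at this point in the text), and O(1) trajectory factors are again dropped:

```python
hbar = 1.054571817e-34  # J s

# Assumed parameters consistent with the text: m ~ 1e-17 kg, dx ~ 1 m baseline, tau1 ~ 1 s.
m, dx, tau1 = 1e-17, 1.0, 1.0

# The h+ term enters the action as (m/2) * h_plus * integral(v_x^2 dt), and along the
# accelerated arm integral(v_x^2 dt) ~ (dx/tau1)^2 * tau1 = dx**2/tau1 up to O(1) factors.
phi_per_strain = 0.5 * m * dx**2 / (tau1 * hbar)
print(f"phase per unit strain ~ {phi_per_strain:.1e}")  # ~ 5e16, i.e. of order 1e17 h+
```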
Note however, that our proposal differs significantly from AGIS and so the phase difference they are referring to is between two different atom interferometers, while our value is the phase difference between the two arms of the one interferometer. As such this comparison, though worth making, is not intended to capture the entire effectiveness of these two proposals. Indeed single atom interferometers have also been suggested for GW detection [32][33][34]. With respect to those, our advantage stems purely from the much larger m of our massive particle interferometers as our Stern-Gerlach methodology opens up the scope to create a high enough ∆x, even as the mass is increased. As far as optical interferometric setups such as LISA are concerned, which is the frequency domain in which our interferometer is most effective, one can make a comparison by noting that in our case, the path length differences of ∼ h + L are essentially being measured in units of the matter wave de Broglie wavelength, which can be 10 −17 times smaller than typical optical wavelengths through our Stern-Gerlach scheme. Thus the lengths L required can be much smaller (a meter suffices). With respect to the frequency spectrum observable using this technique, one can see from Eq.13 that the phase output will be independent of GW frequency provided ωτ 3 ∼ ωτ 1 1 as seen in Fig. 3. This and the higher frequency detectability scaling can be understood by noting it is susceptible to the average wave amplitude over the time-frame of the interferometer, which tends to zero for higher frequency waves. Note that here we define a detectable strain as one that gives ∆φ (h ij ) = 1. However, if there are several particles traversing the interferometer at once, as well as several interferometers in parallel, so that the phase signal is to be read from N atoms in one shot of the apparatus run, then smaller strain causing ∆φ (h ij ) = 1/ √ N is detectable. Further note that around 10 − 10000 Hz, at which LIGO is performing [44], our setup will not be able to compete. However it will serve as a complementary procedure in the range of eLISA [45] (10 −6 − 10 Hz). Practical implementation: In the proposed system, a magnetic field gradient ∂ x B is used to create the spatial superposition of size ∆x = aτ 2 1 with a = g N V µ B ∂ x B/m where g N V is the Landé g factor, µ B is the Bohr magneton. For large mass interferometry to carry advantage over its atomic counterpart, ∆x must be kept significant even while m increases. Thus in proportion to the value of m we want to use, we have to increase (i) the magnetic field gradient, (ii) the coherence time of the spatial and the spin states. A magnetic gradient as high as 10 6 Tm −1 can be achieved at a distance of 1 µm from a 10µm sized superconducting magnet trapping a flux of 5 T (larger values have been shown to be feasible in experiments [46]). The difficult task of keeping the magnet consistently about 1µm from the interfering object can be achieved by shaping an appropriate elongated magnet or by moving the magnet in tandem with the motion of the object corresponding to the |+1 spin state. The spatial coherence offers a huge window under low values of pressures P = 10 −15 Pa and low internal temperatures 10 mK, as already used in previous proposals [19,47], being for a mass of ∼ 10 −17 kg (100 nm radius), using the results of [47], γ air ≈ 0.006Hz due to scattering of air molecules and γ rad ≈ 6 × 10 −4 Hz due to black-body photon emission. 
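A small numerical check of the Stern-Gerlach acceleration a = g_NV mu_B dB/dx / m and the resulting superposition size dx = a*tau1^2 for the parameters quoted above (g_NV of about 2 is assumed):

```python
muB  = 9.2740100783e-24   # J/T, Bohr magneton
gNV  = 2.0                # Lande g-factor of the NV centre (approximate, assumption)
dBdx = 1e6                # T/m, magnetic field gradient quoted in the text
m    = 1e-17              # kg, mass quoted in the text

a = gNV * muB * dBdx / m  # acceleration of the |+1> spin component
print(f"a = {a:.2f} m/s^2")
for tau1 in (0.1, 0.5, 1.0):          # s
    dx = a * tau1**2
    print(f"tau1 = {tau1:4.1f} s  ->  dx = {dx:.3f} m")
# ~2 cm for tau1 = 0.1 s, and meter-scale separations once tau1 approaches 1 s,
# consistent with the coherence times discussed in the text.
```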
The electron spin coherence at 10 mK can also reach one second with dynamical decoupling [48,49] which is naturally present here due to the spin flipping pulses. This can be further extended by switching the accelerating/decelerating path from the |±1 state to the |0 state (for the times that it is in the state |0 , the mass essentially undergoes free motion with its acquired velocity used to increase/decrease separations). Considering the most difficult metric component to detect, namely the GWs, the greatest sensitivity of detecting h + ∼ 10 −17 / √ Hz will occur for the mass ∼ 10 −17 kg. We can further stretch this sensitivity to h + ∼ 10 −19 / √ Hz by considering a flux of N = 400 particles traversing the interferometer at once, which is consistent with the effect of their mutual gravitational and Casimir-Polder interactions on the phase to be negligible (∼ 10 −3 radians) over their apparatus traversal time. For detecting the frame dragging, we need a v y , which can be 10 ms −1 . These speeds can be achieved for polarizable particle such as nanodiamond using rapid acceleration in a pulsed optical field [50]. Note that the small mass flux was taken to be N = 10 6 taken from [28] for 87Rb atoms. For higher frequencies the relative phase between the paths undergoes several oscillations while the object traverses the interferometer, so cancelling itself out. The the final phase difference is then something that accumulates over a lower time, leading to to a lower sensitivity. We have presented a protocol for a compact (meter scale) interferometer for objects of mass ∼ 10 −17 kg which can not only detect metric components of Newtonian potentials, but also the Earth's frame dragging and GWs of low frequency. The Stern-Gerlach principle implies that simply by changing the orientation of a magnet, the whole interferometer is re-oriented to identify the angular origin of sources. Moreover, the compactness also implies that a large number of interferometers can be built to identify localized noise sources such as gravity gradients and cancel them. On site, the sensi-tivity can be modulated by changing the magnetic field gradient (say, by moving the magnet) so as to identify terms of decreasing strength in succession starting from the Newtonian term and reaching up to the gravitational waves (re-orientation can also aid this). By construction, our interferometric signal only depends on the relative phase between the two arms and thereby is immune to thermal and seismic noise in the initial wavepacket of the mass. Moreover, here the two paths, separated at most by a meter, are unlikely to suffer independent motional noise, while, if a static (and appropriately shaped) magnet is rigidly connected to the source of the objects, then fluctuations of the combined system during the interferometry will not matter. We leave a quantitative anal-ysis of noises, following, for example, the procedures of Refs. [35,51] for the future. Though the proposed implementation seriously stretches the magnetic field gradients and coherence times, much lower values of both should suffice to detect the less demanding components such as h 00 or for functioning as an accelerometer (for example, B = 10 4 Tm −1 and τ 1 ∼ 70 ms can already detect both the Newtonian curvature, as well as the Earth's frame dragging, where the mass used is 10 −18 kgs and ∆x = 1 mm.). 
Furthermore, we may be able to test modifications of gravity at short distances [52,53], and aspects of self-localization of the wavefunction in its own gravitational potential [54,55].
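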
5,089
2018-07-27T00:00:00.000
[ "Physics" ]
Assessing the Potential Improvement of Fine-grained Clayey Soils by Plastic Wastes Because of progressively dumping of plastic wastes (PWs) obtained from beverage industry it is of interest to use them as reinforcement material in civil engineering projects. For assessing potential use of plastic wastes in improvement of shear strength of fine-grained soils, two clayey soils were mixed with different amount of plastic wastes (i.e. 0.5%, 1.0%, 1.5% and 3.0% by weight) and consolidated undrained triaxial tests were performed on the compacted samples. Test results indicate that variations of shear strength and pore water pressure depend on the amount and type of plastic waste. It is observed that, irrespective of clay plasticity, adding plastic waste to the fine-grained soils improves their shear strength and plastic waste content (PWC) of 3.0%, within the range of used amounts, has the best effect on the shear strength. Moreover, adding plastic waste causes to decrease shear-induced pore water pressure slowly. Furthermore, deformability of samples changes in term of plastic waste content, type of plastic and clay type. It can be concluded that there is a possible usage of clay-plastic waste mixtures as construction materials and, thereby, plastic wastes can be managed by recycling them in the field of geotechnical engineering, thus contributing to clean up the environment. Introduction The bottled water is the fastest growing beverage industry in the world.International Bottled Water Association (IBWA) reported that 1.5 million tons of plastic are annually used to bottle water and 1500 bottles are dumped as garbage every second.Polyethylene terephthalate (PET) is one of the most abundant plastics in solid urban waste (de Mello et al., 2009).It has been reported that annual consumption of PET bottles is approximately 10 million tons in the world and it grows about up to 15% every year.On the other hand, the number of recycled or returned bottles is very low (ECO PET, 2007).Global bottled water consumption is estimated about 61.4 billion gallons in 2011 and total consumption swelled by 8.6 percent in 2011.Per capita consumption of 8.8 gallons represented a gain of 1.2 gallons over the course of five years (Rodwan Jr., 2012). Biodegradation process of plastics is very slow, because plastics mainly are synthesized using non-renewable fossil resources.Therefore, the plastic wastes should be recycled to decrease these effects.For the management of plastic waste, recently their use in the civil engineering projects is taken into consideration.The advantages are the reuse of these materials and the reduction of using natural material like soil in geotechnical engineering applications.Adding polyethylene fibers of waste plastics to soil-cement mixtures showed that it improves the stress-strain response of uncemented and cemented sands (Consoli et al., 2002).A field application of fiber-reinforced cemented sand pro-posed for increasing the bearing capacity of spread foundations has been reported previously (Consoli et al., 2003).Consoli et al. (2004) by performing triaxial compression tests on cemented and uncemented sand reinforced with various types of fibers indicated that the mode of failure changes from brittle to ductile due to inclusion of fibers.Consoli et al. 
(2009) found that both cement and fiber insertions affect dramatically the stress-dilatancy behavior of the sand.Dutta & Rao (2007) proposed some regression based models for predicting the behavior of sand-waste plastic mixture.Numerical simulation also indicates that pull-out resistance of fibers governs the stress-strain response of random-reinforced soil (Sivakumar Babu et al., 2008).Comprehensive experimental studies on compacted soilfiber samples showed improvement in strength and stiffness response, reduction in compression indices, reduction in swelling behavior of soil.It is also observed that fibers reduce the seepage velocity of plain soil considerably and thus increase the piping resistance of soil (Sivakumar Babu & Vasudevan, 2008a, b, c).Based on critical state concepts, a constitutive model was proposed to obtain stress-strain response of coir fiber-reinforced soil as a function of fiber content (Sivakumar Babu & Chouksey, 2010).Sivakumar Babu & Chouksey (2011) investigated the effects of plastic waste on the soil behavior by performing a series of triaxial compression and one dimensional compression tests.They found that there is significant improvement in the strength of plastic waste mixed soils due to increase in friction be-tween soil and plastic waste and development of tensile stress in the plastic waste.Compression behavior of plastic waste mixed soil indicates significant reduction in compression parameters. The main objective of the present study is to obtain the geotechnical properties of fine-grained cohesive soils by partially replacing them with plastic waste.To this end, experimental tests were conducted on clayey soils and mixtures of clayey soils with different amount of plastic waste. The tests include a series of consolidated undrained (CU) triaxial tests to determine stress-strain and pore water pressure behavior of plastic waste mixed clayey soils.The obtained results are compared with the associated behavior of plain clays and an analysis is performed in terms of plastic waste content (PWC), type of plastic waste and clay plasticity. Materials The two fine-grained clayey soils used in this study were retrieved from two distinct borrow areas, namely Malekan and Roshdiyyeh areas in East Azerbaijan province.For abbreviation, these soils were denoted with MC and RC letters, respectively.According to Unified Soil Classification System (USCS), both of the clays were categorized as CL (ASTM, 2011).Some index properties of the clayey soils have been listed in Table 1.As well as, grading curves of these materials have been presented in Fig. 1. Two types of plastic wastes obtained from water bottles with different flexibility are used as reinforcing material.Plastic wastes chips were named PW1 and PW2, the PW2 type being more flexible than the PW1 type.The size of pieces for both types of plastics were selected 8 mm in length and 4 mm in width, and their specific gravities are 1.452 and 1.36, respectively. Sample preparation Soil mixtures were prepared by mixing clayey soils with 0%, 0. 
5%, 1.0%, 1.5% and 3.0% of plastic wastes by dry weight.To study the effect of plastic flexibility, both PW1 and PW2 plastics were added to Malekan clay.In order to model samples for the triaxial tests which would re-produce field conditions as closely as possible, standard Proctor compaction tests were performed on both the clayey soils and mixtures of MC clay with PW2 plastic to determine maximum dry unit weight (g dmax ) and optimum water content (w opt ) (ASTM, 2012).Compaction test results showed that plastic waste does not significantly affect compaction parameters of MC clay.Therefore, compaction tests were not performed on the other mixtures.Triaxial samples of MC and RC clays mixed with PW1 plastic were prepared according to 0.98g dmax and w opt values of MC and RC clays, respectively.Required materials for samples made of MC clay and PW2 plastic were calculated based on 0.98g dmax and w opt values of associated sample. To obtain a homogenous mixture, required quantity of plastic wastes was distributed over the soil and mixed uniformly and, then, required water was sprayed onto the surface of the materials and after mixing it was placed in sealed plastic bags and stored overnight in a controlled humidity room.Figures 2(a) and 2(b) show typical photos of PW2 plastic waste chips and mixture of this plastic with MC clay, respectively.The entire mixture was statically compacted in the mold, with 50 mm diameter and 100 mm in height, in four layers, and samples for triaxial testing were obtained.Table 2 shows some specifications of tested samples. Shear testing After extruding the samples from the mold, they were set up in triaxial cell and standard consolidated undrained (CU) triaxial testing procedures were followed (ASTM, 2004).To saturate the samples, distilled water was transmitted through them and then incremental backpressure saturation with a pressure differential of 30 kPa was applied.The backpressure was raised to a maximum of 400 kPa and B value was calculated for each increment.Saturation of the samples took approximately 4-6 days to complete until reaching a B value of at least 0.96.The sam- ples were consolidated under effective consolidation stresses of 200 kPa and then shearing was applied to the samples at a rate of 0.04 mm/min until reaching up to 20-24% strain by simultaneously measuring shear-induced pore water pressure. Results and Discussions Figures 3, 4, and 5 illustrate stress-strain curve, changes in pore water pressure and stress paths of MC-PW1, RC-PW1 and MC-PW2 mixtures, respectively.These figures include variations of deviatoric stress vs. axial strain (e a ), excess pore water pressure (Du) vs. e a , and deviatoric stress (q' = s' 1 -s' 3 ) vs. mean normal effective stress (p' = (s' 1 + 2s' 3 )/3).It is clearly observed that the plastic waste influences the behavior of natural soils; so that by increasing the plastic waste content the samples exhibit higher shear strength (Figs. 3(a), 4(a) and 5(a)). Undrained shear strength The correlation between undrained shear strength and plastic waste content is shown in Fig. 
6(a).The figure shows the shear strength of samples with PWC = 0.5% is approximately equal to the shear strength of plain clay and when the amount of plastic waste changes from 0.5% to 3.0% shear strength increases gradually.The maximum improvement in the shear strength of different mixtures was obtained at plastic content of 3.0%.Maximum increments in shear strength of MC clay mixed with 3.0% PW1 and 3.0% PW2 plastics are about 49.80% and 25.73%, respectively.The increment value for RC clay mixed with 3.0% PW1 plastic was about 55.20%.In addition, this figure illustrates that the effect of PW1 plastic on the improvement of shear strength is almost twice in comparison with that of PW2 plastic. It is observed that the effect of plastic wastes on the shear strength of clayey soils depends on the clay plasticity so that plastic wastes improve the shear strength of RC clay better than MC clay, but the difference is not noticeable. Excess pore water pressures (Du) Change of pore water pressure during shearing (Du) is presented in Figs.3(b), 4(b) and 5(b).It is obvious that as the strain of samples increases to a specific value Du rises; thereafter its value reduces with straining.The rate of decline is steep in the samples with high plastic content.Also variation of maximum pore water pressure due to shearing (Du max ) (Fig. 6(b)) shows that when PC increases within the samples Du max decreases gradually.The maximum reduction takes place for MC-PW1, MC-PW2 and RC-PW1 mixtures including 3.0% PW and their values are 26.34%,15.96% and 18.24%, respectively. Stress paths Stress paths of the tests (Figs.3(c), 4(c), and 5(c)) explain that, at low level of strain, behavior of all the samples is contractive, but with developing shearing the samples exhibit dilative behavior.Moreover, as plastic waste increases the paths tend to move rightward; i.e. they exhibit more dilative behavior.For example, the behavior of MC clay including 3.0% plastic is completely dilative.Therefore, it can be concluded that adding plastic waste to the clay changes the tendency of samples during shearing. Deformability Secant deformation modulus (E 50 ) is an index of soil deformability.Therefore, the values of E 50 for all the samples obtained from the associated stress-strain curves and their variations vs. PWC have been plotted in Fig. 6c.This figure shows that the effect of plastic waste on the values of E 50 completely depends on the type of clayey soil and type of plastic.In MC clay by adding PW1 the values of secant deformation modulus increases; in other words, the deformability of samples reduces as PWC increases.The rate of increase in E 50 is considerable and it is about 186%.For the RC clay the trend is quite opposite to that for the MC clay, so that the values of E 50 decrease by increasing PWC in the mixtures; in other words, the deformability of samples increases as PWC increases.The rate of decrease is about 67% for the sample with 3.0% of PW1 plastic.It can be concluded that the plastic waste increases deformability of relatively stiff clay and reduces deformability of soft clay.A comparison between the curves of MC-PW1 and MC-PW2 in Fig. 6c indicates that plastic type strongly influences the trend of E 50 variations: while stiff plastic causes E 50 values of MC clay to increase, flexible plastic does not have any meaningful effect on clay deformability.Moreover, Figs. 
7 and 8 show photographic views of plain and plastic mixed samples after failure, respectively, for mixture of MC and RC clays with PW1 plastic.It can be noted that angle of sliding surface is higher in mixed sample in comparison with those of plain soil.This is a sign of change in behavior from cohesive soil to frictional one. Conclusions Experiments were conducted to investigate the mechanical behavior of clayey soils mixed with plastic wastes obtained from water bottles.The compaction tests showed that the dry unit weight and optimum water content of mixed samples are not much different from that of associated clay. The findings from this research show that the maximum shear strength for plain MC clay is 101 kPa, whereas for 3.0% PW1 plastic waste mixed clay it is about 151 kPa.The results indicate that there is 49.8% increase in the shear strength of 3.0% plastic waste mixed MC clay as compared with plain clay.The maximum increase in the shear strength of RC clay is 55.20%. It can be concluded that the plastic wastes influences the behavior of clay but this effect varies depending on the clay plasticity and flexibility of plastic wastes.The increase in the shear strength of soil is mainly due to development of tensile stress in the plastic waste.Pore water pressure due to shearing decreases slowly with an increase in plastic waste content. The effect of plastic waste on the deformability of natural soils completely depends on the clay plasticity and plastic types.So that for MC clay with intermediate plasticity adding relatively stiff plastic causes a decrease in deformability, while for RC clay with low plasticity adding the same plastic causes an increase in deformability.In addition, it is observed that the type of plastic strongly influences the of deformation modulus variations. Finally, it can be concluded that it is possible to use clay-plastic mixtures as construction materials, because of some increase in shear strength of clayey soils, and thus help to clean up the environment from the waste plastic materials. Figure 2 -(a) PW2 plastic chips used in the research, and (b) mixture of MC clay with PW2 plastic before compaction. Figure 6 - Figure 6 -Effect of plastic wastes on the: (a) shear strength, (b) maximum pore water pressure, and (c) secant deformation modulus. Table 1 - Some index properties of clayey soils. Table 2 - List of samples with some specifications.
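To make the two quantities used in this discussion concrete, the sketch below computes the secant deformation modulus E50 from a digitized stress-strain curve and the percentage gain in undrained shear strength from the values quoted above; the function name and array handling are illustrative, not taken from the paper:

```python
import numpy as np

def secant_modulus_E50(strain, deviator_stress):
    """Secant deformation modulus E50: slope of the secant to the stress-strain
    curve at 50% of the peak deviator stress (as used for Fig. 6c).
    Assumes the rising branch of the curve is monotonic."""
    i_peak = int(np.argmax(deviator_stress))
    q_50 = 0.5 * deviator_stress[i_peak]
    # strain at which the deviator stress first reaches q_50 (linear interpolation)
    eps_50 = np.interp(q_50, deviator_stress[: i_peak + 1], strain[: i_peak + 1])
    return q_50 / eps_50

# Percent improvement in undrained shear strength for MC clay + 3.0% PW1,
# using the (rounded) strengths quoted in the conclusions.
q_plain_MC, q_mixed_MC = 101.0, 151.0   # kPa
improvement = 100.0 * (q_mixed_MC - q_plain_MC) / q_plain_MC
print(f"shear strength gain for MC + 3.0% PW1: {improvement:.1f} %")
# ~49.5%; the text quotes 49.8%, the difference coming from rounding of the strengths.
```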
3,365.6
2016-09-01T00:00:00.000
[ "Environmental Science", "Engineering" ]
Simple and effective calculations about spectral power distributions of outdoor light sources for computer vision The Spectral Power Distributions (SPD) of outdoor light sources are not constant over time and atmospheric conditions, which causes the appearance variation of a scene and common natural illumination phenomena, such as twilight, shadow, and haze/fog. Calculating the SPD of outdoor light sources at different time (or zenith angles) and under different atmospheric conditions is of interest to physically-based vision. In this paper, for computer vision and its applications, we propose a feasible, simple, and effective SPD calculating method based on analyzing the transmittance functions of absorption and scattering along the path of solar radiation through the atmosphere in the visible spectrum. Compared with previous SPD calculation methods, our model has less parameters and is accurate enough to be directly applied in computer vision. It can be applied in computer vision tasks including spectral inverse calculation, lighting conversion, and shadowed image processing. The experimental results of the applications demonstrate that our calculation methods have practical values in computer vision. It establishes a bridge between image and physical environmental information, e.g., time, location, and weather conditions. © 2016 Optical Society of America OCIS codes: (330.3795) Low-vision optics; (010.1320) Atmospheric transmittance. References and links 1. N. Jacobs, N. Roman, and R. Pless, “Toward fully automatic geo-location and geo-orientation of static outdoor cameras,” in IEEE Workshop on Applications of Computer Vision, (IEEE, 2008), pp. 1–6. 2. J.-F. Lalonde, S. G. Narasimhan, and A. A. Efros, “What does the sky tell us about the camera?,” in Proc. ECCV, (Springer, 2008), pp. 354–367. 3. J.-F. Lalonde, S. G. Narasimhan, and A. A. Efros, “What do the sun and the sky tell us about the camera?,” Int. J. Comput. Vis. 88, 24–51 (2010). 4. Y. Liu, X. Qin, S. Xu, E. Nakamae, and Q. Peng, “Light source estimation of outdoor scenes for mixed reality,” The Visual Computer 25, 637–646 (2009). 5. K. Sunkavalli, F. Romeiro, W. Matusik, T. Zickler, and H. Pfister, “What do color changes reveal about an outdoor scene?” in Proc. CVPR (IEEE, 2008), pp. 1–8. 6. J. Haber, M. Magnor, and H.-P. Seidel, “Physically-based simulation of twilight phenomena,” ACM Trans. Graph. 24, 1353–1373 (2005). 7. R. Perez, R. Seals, and J. Michalsky, “All-weather model for sky luminance distribution—preliminary configuration and validation,” Sol. Energy 50, 235–245 (1993). 8. K.-J. Yoon, E. Prados, and P. Sturm, “Joint estimation of shape and reflectance using multiple images with known illumination conditions,” Int. J. Comput. Vis. 86, 192–210 (2010). #252414 Received 2 Dec 2015; revised 27 Jan 2016; accepted 16 Feb 2016; published 28 Mar 2016 © 2016 OSA 4 Apr 2016 | Vol. 24, No. 7 | DOI:10.1364/OE.24.007266 | OPTICS EXPRESS 7266 9. D. Wu, J. Tian, B. Li, Y. Wang, and Y. Tang, “Recovering sensor spectral sensitivity from raw data,” J. Electron. Imaging 22, 023032-1 023032-8 (2013). 10. G. D. Finlayson and S. D. Hordley, “Color constancy at a pixel,” J. Opt. Soc. Am. A 18, 253–264 (2001). 11. X. Xing, W. Dong, X. Zhang, and J.-C. Paul, “Spectrally-based single image relighting,” in Entertainment for Education. Digital Techniques and Systems, (Springer, 2010), pp. 509–517. 12. T. Gierlinger, D. Danch, and A. Stork, “Rendering techniques for mixed reality,” J. Real-Time Image Processing 5, 109–120 (2010). 13. J. 
Wither, S. DiVerdi, and T. Höllerer, “Annotation in outdoor augmented reality,” Computers & Graphics 33, 679–689 (2009). 14. G. D. Finlayson, M. S. Drew, and C. Lu, “Entropy minimization for shadow removal,” Int. J. Comput. Vis. 85, 35–57 (2009). 15. J. Tian, J. Sun, and Y. Tang, “Tricolor attenuation model for shadow detection,” IEEE Trans. Image Process. 18, 2355–2363 (2009). 16. D. B. Judd, D. L. MacAdam, G. Wyszecki, H. Budde, H. Condit, S. Henderson, and J. Simonds, “Spectral distribution of typical daylight as a function of correlated color temperature,” J. Opt. Soc. Am. A. 54, 1031–1040 (1964). 17. A. J. Preetham, P. Shirley, and B. Smits, “A practical analytic model for daylight,” in Proceedings of the 26th annual conference on Computer graphics and interactive techniques, (ACM Press/Addison-Wesley Publishing Co., 1999), pp. 91–100. 18. J. Jung, J. Lee, and I. Kweon, “One-day outdoor photometric stereo via skylight estimation,”in Proc. CVPR (IEEE, 2015), pp. 4521–4529. 19. R. Kawakami, H. Zhao, R. T. Tan, and K. Ikeuchi, “Camera spectral sensitivity and white balance estimation from sky images,” Int. J. Comput. Vis. 105,187—204 (2013). 20. C. Gueymard, SMARTS2: A Simple Model of the Atmospheric Radiative Transfer of Sunshine: Algorithms and Performance Assessment (Florida Solar Energy Center Cocoa, FL, 1995). 21. A. Berk, L. S. Bernstein, and D. C. Robertson, Modtran: A moderate resolution model for lowtran, Tech. Rep., DTIC Document (1987). 22. R. E. Bird and C. Riordan, “Simple solar spectral model for direct and diffuse irradiance on horizontal and tilted planes at the earth’s surface for cloudless atmospheres,” J. Climate Appl. Meteor. 25, 87–97 (1986). 23. International Electrotechnical Commission , Multimedia systems and equipment Colour measurement and management Part 2-1: Colour management Default RGB colour space sRGB, Tech. Rep. IEC 619966-2-1(1999). 24. B. Leckner, “The spectral distribution of solar radiation at the earth’s surface—elements of a model,” Sol. Energy 20, 143–150 (1978). 25. R. Schroeder and J. Davies,“Significance of nitrogen dioxide absorption in estimating aerosol optical depth and size distributions,” Atmosphere-Ocean 25, 107–114 (1987). 26. J. H. Pierluissi and C.-M. Tsai, “New lowtran models for the uniformly mixed gases,” App. Opt. 26, 616–618 (1987). 27. L. Zhou, P. Guo, and Y. Tan, “A new way to study water-vapor absorption coefficient,” Marine Science Bulletin 7 (2005). 28. M. Iqbal, An Introduction to Solar Radiation (Elsevier, 2012). 29. J. Jiang, D. Liu, J. Gu, and S. Susstrunk, “What is the space of spectral sensitivity functions for digital color cameras?,” in IEEE Workshop on Applications of Computer Vision (IEEE, 2013), pp. 168–179. 30. J. Tian and Y. Tang, “Linearity of each channel pixel values from a surface in and out of shadows and its applications,” in Proc. CVPR (IEEE, 2011), pp. 985–992. 31. L. Qu, J. Tian, Z. Han, and Y. Tang, “Pixel-wise Orthogonal Decomposition for Color Illumination Invariant and Shadow-free Image,” Opt. Express 23,2220—2239 (2015). 32. J. Tian and Y. Tang, “Wavelength-sensitive-function controlled reflectance reconstruction,” Opt. Lett.38, 2818– 2820 (2013). Introduction Solar irradiance is changed by atmospheric transmittance effects including absorption, reflecting, and scattering, which causes the spectral power distribution (SPD) of the light that ultimately reaches the Earth's surface to vary with time and air conditions.As shown in Fig. 
1, atmospheric transmittance effects lead to many natural things related to computer vision applications such as the variation of sun and sky appearance, twilight, shadow, haze/fog, and cloud.Therefore, in computer vision, a SPD calculation method of outdoor light sources is useful to deal with the problems caused by different time (or zenith angles) and weather conditions. Nature light modeling and image illumination processing have received a lot of attention in the computer vision and computer graphics community.Based on the fact that the luminance varies with the different parts of the sky, a physically-based sky luminance model is employed to infer the camera azimuth in [1] and to recover camera focal length in [2].Lalonde et al. [3] presented how they can geolocate a camera by using the sky appearance and sun position annotated in images.Liu et al. [4] proposed a method to estimate the lighting condition of outdoor scenes through learning a set of images captured at the same sun position.Sunkavalli et al. [5] presented that sunlight and skylight SPDs can be recovered by analyzing a time-lapse video of outdoor scenes.Haber et al. [6] presented an approach to compute the colors of the sky during twilight period before sunrise and after sunset based on the theory of light scattering by small particles.Perez et al. [7] proposed a popular sky model that has been widely used in computer vision and graphics.In general, most of these methods focus on modeling or applying the sky spatial radiance distributions rather than spectral information.Spectral information, i.e., the SPD knowledge of light sources, is useful in some computer vision tasks, such as reflectance recovery [8], camera spectral sensitivities estimation [9], color constancy [10], relighting [11], image rendering [12], and augmented reality [13]. In the computer vision community, two methods are usually applied to estimate SPD illuminations.The first one employs Planck's blackbody radiation law to approximately calculate the SPDs of outdoor light sources.Finlayson et al. [14] and Tian et al. [15] applied Planck's blackbody radiation law to approximately estimate the SPDs of daylight and skylight for deriving shadow invariant images and for detecting shadows, respectively.The second one employs daylight characteristic vectors to recover the SPDs of outdoor light sources.Judd et al. [16] proposed the characteristic vector analysis method on 622 samples of daylight measured at 10 nm intervals over the visible range.Their results suggested that most of the daylight samples can be approximated accurately by a linear combination of the three fixed characteristic vectors.Based on Perez sky model [7], Preetham et al. [17] proposed an improved sky model.It suggests an analytical approximation of the five distribution coefficients in Perez sky model to be a linear function of a single parameter, turbidity, by fitting the Perez formulas to the reference images.With another parameter, sun angle, the absolute sky luminance Y as well as the CIE chromaticities x and y of a sky element can be calculated.They also provide a way to convert the outputs to spectral radiance data (See Appendix 5 in [17]).This analytic model of spectral sky-dome radiance is widely-used.Using images captured during a single day, Jung et al. [18] presented an outdoor photometric stereo method via skylight estimation according to the Preetham sky model.Given sky images and viewing geometry as the input, Kawakami et al. 
[19] proposed a method to estimate the turbidity of the sky by fitting image intensities to the Preetham sky model.Then, the sky spectrum can be calculated.Having the sky RGB values and their corresponding spectrum, the method estimates the camera spectral sensitivities and white balance setting.In the experimental section, we will analyze and compare with the Planck's blackbody radiation and the Preetham sky model. In meteorology, there are SPD calculation methods based on analyzing the transmittance functions of absorption and scattering along the path of solar radiation through the atmosphere.The latest versions of two representative SPD calculation models with wavelength resolution are SMARTS2 [20] and MODTRAN [21].The SPDs calculated by these two methods can approximate real measured ones well.However, they may be too complex to be used in computer vision, because they contain many parameters to be determined and unfortunately most of these meteorological parameters are not readily available for computer vision applications.Bird et al. [22] proposed a simpler method to calculate the SPD of outdoor light sources.Though Bird's method is much simpler than those in [20,21], it is still difficult to be used in computer vision.In general, the existing methods are not developed for computer vision.They consider the full spectrum and involve many factors and parameters that have no effects on the visible spectrum.Besides, in all existing methods, the characteristics of human vision are not taken into account.Therefore, they may be more suitable for the research on meteorology than computer vision.Because most computer vision tasks concentrate on analyzing images captured in the visible spectrum, it is possible for us to develop the simpler and effective SPD calculation method that can be easily applied in computer vision tasks, such as illumination processing, shadow removal, and image relighting. The main contribution of this paper is that we propose a very simple method to calculate the SPDs of sunlight and skylight in the visible spectrum.Different from previous calculations, we first calculate the atmosphere's absorption, since some light emitted from the sun never reach the Earth's surface neither as direct sunlight nor as diffuse skylight due to the atmosphere's absorption.We propose a simple new absorption calculation method and provide the new "total absorption coefficient".During the development of our method, we pay more attention to the wavelengths near the peaks of the human color matching functions (CMFs), to guarantee the accuracy of color reproduction when using our calculated SPDs to substitute the real ones. 2. A simple method for computing the spectral power distribution of sunlight and skylight Light and image Because our SPDs calculations are for imaging and computer vision, we develop our method based on image formation theories.Figure 2 shows the image formation procedure in outdoor environments.Since both our eyes and conventional cameras are only sensitive to the visible spectrum, all the calculations and analysis in this paper will be done within the spectrum 400-700nm.Given illumination SPD E(λ ) and object's reflectance S(λ ), the tristimulus values are, where Q H (λ ) denotes the XYZ or sRGB color matching functions in three channels.In this paper, tristimulus values in XYZ will be used in Section 2.4 and those in sRGB will be used in Section 4. The detail about XYZ to sRGB conversion can be found in [23]. 
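Equation (1) is a plain numerical integration over the 400-700 nm range. A minimal sketch of this image-formation step is shown below (Python); the sampling grid and the CMF table are placeholders, since the paper does not fix a particular tabulation.

```python
import numpy as np

# Visible-range wavelength grid used throughout the paper (400-700 nm);
# the 10 nm step is an assumption for illustration.
wavelengths = np.arange(400.0, 701.0, 10.0)

def tristimulus(E, S, cmf):
    """Eq. (1): per-channel integral of illumination SPD E(lambda) times
    surface reflectance S(lambda) times the colour matching function Q_H(lambda).

    E, S : arrays sampled on `wavelengths`
    cmf  : array of shape (len(wavelengths), 3), e.g. CIE XYZ CMFs
    """
    dlam = np.gradient(wavelengths)
    return np.array([np.sum(E * S * cmf[:, h] * dlam) for h in range(3)])

# Usage sketch: with an ideal white surface, S(lambda) = 1, the tristimulus
# values are those of the illuminant itself - this is how the calculated and
# measured skylight SPDs are compared (via CIELAB) when fitting kappa.
S_white = np.ones_like(wavelengths)
```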
Absorption of extraterrestrial irradiance passing through the atmosphere When the extraterrestrial irradiance passes through the atmosphere, it is attenuated by absorption, reflection, and scattering processes, and thereby its spectral composition is changed considerably.Some solar radiation will never reachs the Earth's surface neither as direct sunlight nor as diffuse skylight due to the atmosphere's absorption by ozone, nitrogen dioxide, mixed gases, and water vapor.We denote E oλ as extraterrestrial irradiance at mean earth-sun distance for wavelength λ .After absorption, the irradiance becomes, where T oλ , T Nλ , T wλ , and T uλ denote the transmittance functions for molecular ozone, nitrogen dioxide, molecular water vapor, and mixed gases, respectively.Extraterrestrial irradiance also varies with the sun-earth distance which is determined by the Day Number of a year.Since this factor is independent of wavelength, it can be taken outside the integration and can be consider as exposure time in Eq. (l).Therefore this factor can be neglected for computer vision application.T oλ and T Nλ can be represented by, T oλ = exp(−0.35aoλ m) (Re f er to [24]) T Nλ = exp(−0.0016anλ m) (Re f er to [20,25]) where a oλ and a nλ are the absorption coefficients of molecular ozone and nitrogen dioxide, respectively.In [22,24], T wλ and T uλ are given by, where a uλ and a wλ are the absorption coefficients of molecular water vapor, and mixed gases, respectively.W is water vapor amount, and its typical value is 1.6.m is air mass that is calculated by, m = sec(θ ) θ denotes zenith angle in this paper.Unlike that T oλ and T Nλ take noticeable effects in the visible spectrum, T uλ mainly takes effect on wavelengths longer than 1000nm and in the visible spectrum takes effect in 687nm∼691nm [26]; T wλ mainly takes effect on wavelengths that are longer than 800nm and in the visible spectrum takes effect in 690nm∼700nm [27].T uλ and T wλ have little effect in the visible spectrum but their computations are more complex than those of T oλ and T Nλ .More importantly, T uλ and T wλ have different forms with T oλ and T Nλ , which brings difficulties for merging these four terms.Therefore, we here want to seek simple and effective expressions for T uλ and T wλ .Eq. ( 5) and Eq. ( 6) have similar formulas, thus we first merge T uλ into T wλ , i.e., Where ε can be determined by, The result is ε = 4.3.We have, The two attenuations caused by water vapor and mixed gases are merged into a single attenuation coefficient.We define the new term as where a ′ wλ = a wλ + 4.3a uλ .For different m, we find that data counterparts of [a ′ wλ , log(T ′ wλ )]satisfy a power function.So in the following we write T ′ wλ as the form of T oλ and T Nλ .We define a new transmittance function of water vapor and uniform mixture gas as, with, The results of Eq. ( 13) are p = −0.055and q = 0.56.We apply the simple exhaustive search method to solve the optimization problem in Eq. ( 9) and Eq. ( 13).The new attenuation expression of water vapor and mixed gases is approximated as a decreasing exponential function of the combined coefficient. Then since all the absorption factors can be expressed by a decreasing exponential function, we rewrite Eq. ( 2) as, where, The results in Fig. 
3 show that our new simple expression produces similar results compared with the combined effects of the original two terms in the visible spectrum.We name the pro- posed T λ as "total absorption transmittance function" and τ ′ (λ ) as "total absorption coefficient" that is represented by, τ ′ (λ ) = 0.35a oλ + 0.0016a nλ + 0.055a The total absorption coefficient can be found in the Appendix (Table 4) and is plotted in Fig. 4, in which a oλ , a nλ , a uλ , and a wλ are derived from [20] with a oλ = A oλ , a nλ = A nλ , a uλ = 100A gλ , and a wλ = 100A wλ .The SPD of extraterrestrial irradiance and an example of its result after absorption at m = 2 is shown in Fig. 5. Using our proposed total absorption coefficient and transmittance function, the absorption calculation of ozone, nitrogen dioxide, water vapor, and mixed gases can be considered all together.Eq. ( 16) is much simpler to use. Computing direct sunlight The direct irradiance at Earth's surface normal to the direction of the sun can be calculated by, where E in is the SPD of extraterrestrial irradiance after absorption, and T rλ , T aλ denote the transmittance functions of Rayleigh scattering and Aerosol scatting, respectively.The transmittance function for the Rayleigh scattering in [28] is adopted in this paper. The transmittance function for the aerosol scattering in [28] is adopted in this paper. where α is wavelength exponent and the value of 1.3 is commonly employed.The parameter β is the cleanliness index (turbidity) which varies from 0.0 to 0.4.For clear weather, turbid weather, and very turbid weather, its value is about 0.1, 0.2, and 0.3 respectively [28]. Computing diffuse skylight The calculations of diffuse skylight in other works are complicated.Here we propose a very simple method.From Eq. ( 19) and Eq. ( 20), we know that T rλ and T aλ describe the direct transmittance functions of Rayleigh scattering and Aerosol scattering respectively.Thus (1-T rλ • T aλ ) actually describes the sum of scattered light that can reach the Earth's surface and that are lost in the atmosphere.So the key point becomes how to determine the proportion of the scattered light that can reach the ground.Therefore, the diffuse skylight on the horizontal surface can be calculated by, where κ accounts for the proportion of the scattered light that can reach the ground.As shown in Fig. 6, κ should be zenith-dependent.The longer light passes through the atmosphere, the more light will be lost, i.e., less scattered light can reach the ground.We determine κ by fitting CIE ∆E Lab between the calculated SPDs by Eq. ( 21) and those from the true measurements by a SOC710 hyperspectral imager.In detail, the ideal white reflectance, S(λ ) = 1, is applied in calculation.For the calculated SPD by our method, the XYZ tristimulus values are, For the measured SPD, the XYZ tristimulus values are, The XYZ tristimulus values are converted to CIELAB color space and ∆E Lab is applied to evaluate the error between E sλ and E ′ sλ . κ can be determined by, κ = argmin ∆E Lab (25) The reason that we apply XYZ values and ∆E Lab to determine κ is to keep higher accuracy near the peaks of CMFs.The values of κ with different zenith angles are tabulated in Table 1. Experiments and comparisons Figure 7 shows the comparisons of the calculated SPDs by our method with those by SMARTS2 method [20] and Bird's method [22] in clear weather, i.e. 
β = 0.1.The results generally denote that there are no significant discrepancies between the calculated SPDs by our simple method and by the other two complex meteorology ones, especially for direct sunlight.To evaluate the influences of the discrepancies on imagery, we simulate sRGB values (see the 2 nd and 4 th columns in Fig. 7) of Xrite colorchecker which contains 24 common colors in our daily life.From the results, it is hard to find differences by our naked eyes on the simulated images under SPDs calculated by the three methods.The last column in Fig. 7 shows the quantitative Fig. 8. Comparisons of the calculated SPDs by our method and those by Bird's method [22] and SMARTS2 method [20] under different turbidities.From top to bottom are β = 0, β = 0.2, and β = 0.3, respectively. discrepancies of pixel values of the simulated images.For each color patch, in three channels, max differences versus max values are plotted as percent errors. The results in Fig. 7 also show that the SPDs do not vary significantly from 20 degree to 40 degree, which indicates that the outdoor light sources are quite stable during noon.The variation of intensity is larger than that of spectrum.Because variation of intensity can be taken outside the integration and can be put into exposure time in Eq. (1), it has no effect on imagery. Figure 8 shows the comparison of the calculated SPDs by our method with those by S-MARTS2 and Bird's method under different turbidities.The consistency by the three methods on sunlight is better than that on skylight.We can see that for very clear weather β = 0, the outputs of the three methods are quite similar for skylight.The similarity degrades with increasing β .For very turbid weather, β = 0.3, the three outputs are obvious different.Overall, our results are more close to those of SMARTS2 method in the skylight calculation. The comparison on the formula expressions of our method with those of SMARTS2 and Bird's method are listed in Table 2. From the table, we can see that our expressions are much simpler than those in the other two methods.More importantly, in the other two methods there are many parameters that may be hard to be determined in computer vision. Comparison with blackbody radiation Planck's blackbody radiation law is usually applied to calculate the SPDs of outdoor light sources.It is well known that extraterrestrial solar irradiance can be approximated by the blackbody radiation at 5777 K.However, from Fig. 9, we can see that the extraterrestrial solar irradiance roughly follows the blackbody radiation at 5777 K in the near infrared spectrum while it does not follow blackbody radiation law well enough in the visible region.Figure 10 shows the chromaticity of the blackbody radiation and that of sunlight, skylight, and daylight.We find that the chromaticity of daylight is very near to Planckian locus and so it can be accurately approximated by the blackbody radiation.In contrast, the chromaticity of sunlight and skylight deviate from that calculated by blackbody radiation noticeably, which indicates that noticeable errors occur when blackbody radiation is employed to model the SPDs of sunlight and skylight. 
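To make the structure of the simplified model concrete before the remaining comparisons, the sketch below strings together the terms introduced in Sections 2.2-2.4: the merged absorption transmittance, Rayleigh and Angström-type aerosol scattering, and the zenith-dependent factor κ for the diffuse component. It is an illustration under stated assumptions, not the paper's exact formulas: the Rayleigh expression follows the common Bird/Riordan form, and the power-law refinement fitted for the merged water-vapour term is omitted.

```python
import numpy as np

def air_mass(zenith_deg):
    """m = sec(theta), as used in the paper."""
    return 1.0 / np.cos(np.radians(zenith_deg))

def spd_sun_sky(lam_um, E0, tau_total, zenith_deg, beta, kappa, alpha=1.3):
    """Sketch of the simplified outdoor-SPD model.

    lam_um    : wavelengths in micrometres (0.4-0.7)
    E0        : extraterrestrial SPD on the same grid
    tau_total : "total absorption coefficient" table (Appendix Table 4)
    beta      : turbidity (about 0.1 clear, 0.2 turbid, 0.3 very turbid)
    kappa     : zenith-dependent fraction of scattered light reaching the
                ground (Table 1)
    """
    m = air_mass(zenith_deg)
    # Merged absorption term; the paper fits a power-law correction for the
    # water-vapour/mixed-gas part, which is omitted in this simplification.
    E_in = E0 * np.exp(-tau_total * m)
    # Rayleigh transmittance, Bird/Riordan-style expression (assumed form).
    T_r = np.exp(-m / (lam_um**4 * (115.6406 - 1.335 / lam_um**2)))
    # Angstrom aerosol transmittance.
    T_a = np.exp(-beta * lam_um**(-alpha) * m)
    E_sun = E_in * T_r * T_a                   # direct sunlight
    E_sky = kappa * (1.0 - T_r * T_a) * E_in   # diffuse skylight, Eq. (21)
    return E_sun, E_sky, E_sun + E_sky         # daylight taken as sun + sky
```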
Comparison with Preetham sky model The Preetham model [17] also only requires the zenith angle and turbidity as varying parameters.Using Judd's characteristic vectors, this model can produce radiance spectral information of all points on the skyphere.Details about recovering radiance spectral information from sky chromaticity values based on Judd's characteristic vectors can be found in the Appendix 5 in [17].After integration of the incoming light from all the directions in the hemisphere, SPD of skylight irradiance can be calculated.Figure 11 shows the comparison between our method, S-MARTS2, Bird's method, and Preetham method.We can find that, compared with our method, Preetham method is not accurate enough to match the two meteorology methods.This is caused by that some errors may arise when recovering full spectra from sky XYZ values.It only uses three principle basis.Another disadvantage of using the Preetham sky model to recover SPD is that it mainly recover skylight SPD while it is difficult to recover SPD of direct sunlight and daylight. The recovery of camera capture information We first demonstrate estimating illumination, zenith angle, turbidity, and camera spectral sensitivities from a single 24 color checker image.Rewriting E(λ ) in Eq. ( 1), we have, Jiang's et al. [29] shows that camera spectral sensitivities (CSS) can be decomposed as T is the eigenvector matrix that is precomputed by a dataset including different CSS.Given a 24 color checker image and the reflectance spectrum as shown in Fig. 12, parameters σ H , θ , T can be recovered by iteratively minimizing the following RMS, and then illumination SPDs and CSS can be recovered. 24 Compared with Jiang's method that can recover CSS and one illuminations-SPD, as shown in Fig. 13, our method can recover CSS and SPDs of three lights, i.e., sunlight, daylight, and skylight simultaneously.More importantly, using our method, sun angle and turbidity can be recovered from the image while Jiang's method can only recover color temperature. Shadow features for shadow detection Shadow is an active research topic in computer vision.Its generation and characteristics have close relation to direct sunlight and diffuse skylight.Tian et al. [30] show that the pixel values of a surface illuminated by daylight (in non-shadow region) are proportional to those of the same surface illuminated by skylight (in shadow region).If f H denote pixel values of a surface in area and F H denote those of the same surface in non-shadow area, [30] shows, where Since our calculating method can simultaneously calculate SPDs of daylight and skylight.Based on our calculated SPDs, we can easily obtain K H for any sun angles and turbidity.As shown in Fig. 14, we capture three images at 43 degree sun angle and in clear weather with 0.15 turbidity (estimated by section 4.1).We calculate the normalized K H in the Canon 5D Mark II RAW images without Gamma correction.We can find these values are quite near to the calculated K H = [0.69,0.57, 0.45] by our calculated SPDs, indicating that our SPD calculation method can predict K H in shadowed images.We also calculated K H under different sun angles and turbidity to find some rules that will be useful for shadow detection.We list these K H values in Table 3. From the table we can find three properties of shadows.Property 1. K H satisfy K R > K G > K B Property 2. K H decrease as turbidity increases.Property 3. 
K_H decrease as sun angle increases. To test property 1, we collected 563 real-world images, downloaded from the web or captured by ourselves. In particular, we carefully captured images at different sun angles and under different weather conditions (including clear sky, slight overcast, and heavy clouds that did not cover the sun). For each image, shadow regions were manually labeled. We obtained 3672 shadow & non-shadow counterparts and calculated all the K_H from these counterparts. We find that 90.50% of the K_H satisfy property 1. Preliminary analysis shows that overexposure, thin shadows, and overly large contrast enhancement applied in post-processing usually cause the calculated K_H not to satisfy property 1. Since we usually do not know at what sun angle and turbidity an image was captured, properties 2 and 3 are not convenient to use directly. They may, however, be useful for inverse problems, e.g., using detected shadows to infer the sun angle and turbidity. In detail, if we know the CSS of a camera, we can obtain a table like Table 3 and coarsely infer the sun angle and turbidity by comparing the K_H in the table with the K_H calculated from the detected shadows. The application in this section shows that our SPD calculation is valuable for finding new shadow features, which may be hard to accomplish with other SPD calculation methods, since those methods cannot calculate Eq. (28) and Table 3.

Deriving intrinsic images

Our SPD calculation method can be used to derive intrinsic images. If we convert the linear RGB to gamma-corrected sRGB values (according to [23]), Eq. (28) becomes Eq. (30), from which Eq. (31) follows. For a pixel in an image, Eq. (31) represents a shadow invariant: for an arbitrary pixel and its RGB value vector, this quantity remains the same whether or not the pixel lies in a shadow region. In this way, a grayscale intrinsic image is obtained; one result is shown in the middle of Fig. 15. To obtain a color intrinsic image, for an RGB value vector (v_R, v_G, v_B)^T, we first define a grayscale intrinsic value I_1 according to Eq. (31), as in Eq. (33). Similarly, two other grayscale intrinsic values I_2 and I_3 can be obtained (Eqs. (34) and (35)). A color intrinsic image can then be obtained from the unique particular solution of the equation set generated by Eqs. (31), (34), and (35) (more details can be found in [31]). One color intrinsic image result is shown on the right of Fig. 15. When deriving the intrinsic images, K_H are the key parameters, and more accurate estimation of K_H produces better intrinsic image results. As shown in Fig. 16, using K_H computed by the SPD calculation method proposed in this paper, better intrinsic image results are obtained; in [31], K_H is only approximately computed from the mean value of some real measured SPDs.

A further application is image lighting conversion, i.e., converting an image from one illumination to another. In detail, using our SPD calculation method, we first recover the reflectance from the original image using the method in [32], and then relight the same scene according to Eq. (1) and convert it to the sRGB color space. As shown in Fig. 17, we first convert the illumination from skylight to daylight at a zenith angle of 60 degrees. Both the color and the intensity of the converted image are quite similar to the real captured one to our naked eyes. The errors of the conversion are shown in Fig.
18.The mean error of pixel value is 9.The max error of pixel value is 36, which occurs at patch No.15 in Red channel.The error mainly arises from that commercial cameras apply some post processing such as tone mapping and gamut mapping on the images.These factors are not considered when we convert RAW images to sRGB JPG images in our calculation.Figure 19 shows two more results of image lighting conversion.The two images in the first row of Fig. 19 show that image lighting conversion from afternoon to noon.The color of the converted image is more similar to the scene at noon than the original one.The second row of Fig. 19 shows image lighting converted from sunset to afternoon.Both the color and the clarity of the converted image are better than that of the original one. Conclusion Illumination is one of the important components of imaging.Understanding the properties of spectral distributions of outdoor light and their dynamical changes at different times and under different atmospheric conditions is of interest to computer vision.In this paper, we proposed a simple method for computing SPDs of sunlight and skylight for a given zenith angle and turbidity.In the computer vision community, researchers usually apply the blackbody radiation or the eigenvectors of daylight [16] to approximate real SPDs.Compared with these two kinds of methods, our method has two advantages.First of all, SPDs calculated by our method can approximate those calculated by meteorology ones.The second is that our method can simultaneously calculate SPDs of daylight, sunlight, and skylight.The advantages especially the second one are important for our applications.These applications are difficult to accomplish by the blackbody or the eigenvector model since they cannot simultaneously calculate the corresponding SPDs of daylight, sunlight, and skylight.Therefore, it is difficult to employ these models to simultaneously recover the SPDs of daylight and skylight in Sec.4.1, estimate K H in Sec.4.2 and 4.3, and relight the ColorChecker image from skylight to daylight under the same sun angle in Sec.4.4. Our method is much simpler than the existing atmospheric methods and shows its advantages in a number of computer vision applications.Different from other models that need many parameters such as ozone, nitrogen dioxide, mixed gases, and water vapor that have no close relation to computer vision and are not easily obtained in computer vision, our model only requires two parameters that are most related to physically-based computer vision: sun angle (geometry) and turbidity (weather).Our model establishes a bridge between image and physical environmental information, e.g., time, location, and weather conditions.Since we focus on computer vision applications rather than atmosphere physics or meteorology, the basic calculations in atmosphere physics, e.g., transmission functions of scattering, are still based on previous atmospheric sciences literature. Fig. 1 . Fig. 1.Outdoor appearances vary with time and air conditions. Fig. 3 . Fig. 3. New attenuation and transmittance expressions of water vapor and uniformly mixed gases produce similar results as the original expressions. Fig. 5 . Fig. 5.The SPD of extraterrestrial irradiance reported by World Radiation Center and that after absorption at air mass equals to 2. )Fig. 6 . Fig. 6.Schematic diagram of the solar radiation onto the Earth surface. Fig. 9 . Fig. 9. 
Using blackbody radiation to approximate the true extraterrestrial data. The right image is the close-up view of the left one.
Fig. 10. CIE chromaticity of sunlight, skylight, and daylight compared with the Planckian locus formed by color temperatures from 1000 K to 500000 K.
Fig. 12. A 24 color checker image captured by a Canon 5D Mark II camera under skylight and its corresponding spectral reflectance.
Fig. 13. The recovered skylight spectrum and the recovered CSS of the Canon 5D Mark II. In the second figure, M and E are the abbreviations of "Measured" and "Estimated", respectively.
Fig. 16. More accurate estimation of K_H can produce better intrinsic image results. Left: original images; middle: intrinsic image results using K_H calculated by Eq. (29) and the SPDs calculation method proposed in this paper; right: intrinsic image results using K_H introduced in [31], calculated from the mean value of some real measured SPDs.
Fig. 17. Converting illumination from skylight to daylight. Left: image illuminated by skylight; middle: the image with illumination converted to daylight; right: the real image captured in daylight.
Fig. 18. Errors between the converted image and the real captured one.
Fig. 19. Two more results of image lighting conversions. Left: original images; right: converted images.
Table 1. Proportion of the scattered light that can reach the ground with different zenith angles.
Table 2. Comparison of the formula expressions of our method with those of SMARTS2 and Bird's method.
Table 3. The calculated K_H at different sun angles and under different turbidities.
Fig. 14. Our SPD calculation method can predict K_H in shadowed images.
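As a closing usage note on the shadow features of Sections 4.2-4.3, the ratio K_H can be computed directly once the skylight and daylight SPDs and the camera spectral sensitivities are available. The sketch below is illustrative only: the paper's exact Eq. (29) is not reproduced in this extract, so K_H is shown as the ratio of channel responses to the two illuminants for a spectrally flat surface.

```python
import numpy as np

def shadow_ratio_K(E_sky, E_day, css, wavelengths):
    """Per-channel ratio K_H between pixel values of the same surface lit by
    skylight (shadow) and by daylight (non-shadow), i.e. f_H = K_H * F_H.

    Illustrative definition: ratio of channel responses to the two illuminants
    for a spectrally flat surface (the paper's Eq. (29) is not reproduced here).
    """
    dlam = np.gradient(wavelengths)
    resp_sky = np.array([np.sum(E_sky * css[:, h] * dlam) for h in range(3)])
    resp_day = np.array([np.sum(E_day * css[:, h] * dlam) for h in range(3)])
    return resp_sky / resp_day

def satisfies_property_1(K):
    """Property 1 from Table 3: K_R > K_G > K_B."""
    return K[0] > K[1] > K[2]

# For a Canon 5D Mark II at a 43-degree sun angle and turbidity ~0.15 the paper
# reports K_H close to [0.69, 0.57, 0.45], which satisfies property 1.
```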
8,148
2016-04-04T00:00:00.000
[ "Computer Science" ]
Solving Fuzzy Nonlinear Equations with a New Class of Conjugate Gradient Method In this paper, we study the performance of a new conjugate gradient (CG) method for fuzzy nonlinear equations. This method is simple and converges globally to the solution. The parameterized fuzzy coefficients are transformed into an unconstrained optimization problem (UOP), and the CG method under exact line search is employed to solve the equivalent optimization problem. The method is discussed in detail, followed by a simplification for easy analysis. Numerical results on some benchmark problems illustrate the efficiency of the proposed method.

Introduction

Over the past decades, fuzzy nonlinear equations have been playing a major role in medicine, engineering, the natural sciences, and many other fields. However, the main setback is that of using numerical methods to obtain the solution of such problems. This is due to the fact that the standard analytical techniques by Buckley and Qu (1990, 1991) are limited to solving the linear and quadratic cases of fuzzy equations. Recently, numerous researchers have proposed various numerical methods for solving fuzzy nonlinear equations, i.e., for the nonlinear equation F(x) = 0 (1) whose parameters are all or partially represented by fuzzy numbers. Abbasbandy and Asady (2004) investigated the performance of Newton's method for obtaining the solution of fuzzy nonlinear equations, and this was extended to systems of fuzzy nonlinear equations by Abbasbandy and Ezzati (2006). Newton's method converges rapidly if the initial guess is chosen close to the solution point; its main drawback is computing the Jacobian at every iteration. One of the simplest variants of Newton's method was considered by Waziri and Moye (2016) for solving dual fuzzy nonlinear equations. Another variant of Newton's method, known as the Levenberg-Marquardt modification, was used to solve fuzzy nonlinear equations by Ibrahim et al. (2018). Also, Amirah et al. (2010) applied Broyden's method to investigate fuzzy nonlinear equations. All these methods are Newton-like, requiring the computation and storage of either the Jacobian or an approximate Jacobian matrix at every iteration or after every few iterations. Recently, a diagonal updating scheme for the solution of fuzzy nonlinear equations was proposed by Ibrahim et al. (2018). A gradient-based method by Abbasbandy and Jafarian (2006) was employed for obtaining the root of fuzzy nonlinear equations. This method is simple and requires no Jacobian evaluation during computation. However, its convergence is linear and very slow toward the solution (Chong and Zack, 2013). The steepest descent method is also badly affected by ill-conditioning (Wenyu and Ya-Xiang, 2006). Lately, a derivative-free approach by Sulaiman et al.
(2016) was applied to obtain the solution of fuzzy nonlinear equations.This bracketing method saves the computational cost of evaluating the derivate of a function, and it is also bound to converge because it brackets the root any problem.On the other hand, the convergence is very slow towards the solution due to lack of derivative information (Touati-Ahmed and Storey, 1990).Motivated by this, we proposed a new CG coefficient and applied it to solve fuzzy nonlinear equations.The conjugate gradient method is known to be simple and very efficient in solving optimization problem (Ghani et al., 2016;Sulaiman et al., 2015).The idea of this paper is to transform the parametric form of fuzzy nonlinear equation into an unconstrained optimization problem before applying the new CG method to obtain the solution. This paper is structured as follows; some preliminaries are given in section 2. Section 3 presents a brief overview and the proposed CG method.The CG method for solving fuzzy nonlinear equations is presented in section 4. Numerical experiments and implementation are in section 5. Finally, in section 6, we give the conclusion. New Conjugate Gradient method for Unconstrained Optimization To overcome the computational burden of other iterative methods, the conjugate gradient method was suggested as an alternative.This is due to its simplicity, low memory requirement, and global convergence properties.The CG methods are very important for solving large-scale unconstrained optimization problems (Mamat et al., 2010).Starting with an initial point 0 , the CG method compute through a search direction with a step size ∝ obtain by line search procedure to obtain the next iterative given as and ∈ ℝ is the conjugate gradient parameter that characterizes various CG methods.The classical CG methods are Fletcher-Reeves (FR) (Fletcher and Reeves, 1964), Polak-Ribiere-Polyak (PRP) (Polak and Ribière, 1969), Hestenes-Stiefel (HS) (Hestenes and Stiefel, 1952), and a recent coefficient by Rivaie et al. (2014).These methods are defined as follows . The convergence of these methods under different line search techniques have been discussed by Zoutendijk (1970), Al-Baali (1985), Touti-Ahmed and Storey (1990), Gilbert and Nocedal (1992), and Rivaie et al. (2014).Studying the global convergence of the CG method under exact line search technique would be very interesting.Hence, we proposed a new CG coefficient known as where SM denotes Sulaiman Mustafa and define as follows For the convergence, we need to simplify (4) as follows (5) Since for exact line search, −1 = 0 (Rivaie et al., 2012).Also, ‖ −1 ‖ 2 has been proved to converge globally by Rivaie et al. (2014Rivaie et al. ( , 2012)).Next, we apply this method to solve fuzzy nonlinear equations. New Conjugate Gradient Method for Solving Fuzzy Nonlinear Equations Given a fuzzy nonlinear equation () = 0, and the parametric form defined as The idea is to obtain the solution of ( 6) using conjugate gradient method.We need to transform ( 6) into an Unconstrained optimization problem.We start by defining a function : ℝ 2 → ℝ as follows (Abbasbandy and Asady, 2004) whose gradient ∇ () at point () = ((), ()) is also define as From the definition of (, , ) in ( 7), then (6) can be transformed to the following unconstrained optimization problem; We define an appropriate CG method for () = ( (), ()) as () = −1 () +∝ −1 (10) where ∝ −1 is obtained by exact line search produces, i.e. 
and From the above description, it can be observed that for the same parameters ∈ [0,1] the solution ( * , * ) which satisfies ( * , * ) = 0 is the same solution for (6) and vice versa. Numerical Examples In this section, we present the numerical solution of some examples using the CG method for fuzzy nonlinear equations.This is to illustrate the efficiency of the method.All computations are carried out on MATLAB 7.0 using a double precision computer.Also, details of the solutions are presented in Figure 1 . Conclusion Recently, the area of fuzzy nonlinear equation has been enjoying a vivid growth with focus on innovative numerical techniques for obtaining its solution.In this paper, we proposed a new conjugate gradient method under exact line search for solving the fuzzy nonlinear equation.This method is simple, requires less memory and hence reduces the computational cost during the iteration process.The parameterized fuzzy quantities are transformed into unconstrained optimization problem.Numerical results on some benchmark problems illustrate the efficiency of the new method.
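Because the explicit formula for the proposed SM coefficient is not reproduced in this extract, the sketch below illustrates the overall scheme with the classical Fletcher-Reeves coefficient instead: the parametric fuzzy equation is turned into a least-squares objective in the spirit of Eq. (7) and minimized by a CG iteration, with SciPy's scalar minimizer standing in for the exact line search. The objective used in the example is hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def cg_minimize(F, grad, x0, tol=1e-8, max_iter=500):
    """Conjugate gradient iteration x_{k+1} = x_k + alpha_k * d_k with a
    one-dimensional (near-exact) line search. The Fletcher-Reeves coefficient
    is used purely for illustration; the paper's SM coefficient is not
    reproduced in this extract."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = minimize_scalar(lambda a: F(x + a * d)).x  # exact-line-search stand-in
        x = x + alpha * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)                   # Fletcher-Reeves
        d = -g_new + beta * d
        g = g_new
    return x

# Hypothetical objective in the spirit of Eq. (7): sum of squared residuals of
# the lower- and upper-bound parametric equations at a fixed r-level.
def F(x):
    r1 = x[0] ** 2 + x[1] - 2.0
    r2 = x[0] + x[1] ** 2 - 3.0
    return r1 ** 2 + r2 ** 2

def gradF(x):
    r1 = x[0] ** 2 + x[1] - 2.0
    r2 = x[0] + x[1] ** 2 - 3.0
    return np.array([4.0 * x[0] * r1 + 2.0 * r2,
                     2.0 * r1 + 4.0 * x[1] * r2])

x_star = cg_minimize(F, gradF, x0=[1.0, 1.0])
```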
1,586.2
2018-06-29T00:00:00.000
[ "Mathematics", "Computer Science" ]
Amplification of pressure waves in laser-assisted endodontics with synchronized delivery of Er:YAG laser pulses When attempting to clean surfaces of dental root canals with laser-induced cavitation bubbles, the resulting cavitation oscillations are significantly prolonged due to friction on the cavity walls and other factors. Consequently, the collapses are less intense and the shock waves that are usually emitted following a bubble’s collapse are diminished or not present at all. A new technique of synchronized laser-pulse delivery intended to enhance the emission of shock waves from collapsed bubbles in fluid-filled endodontic canals is reported. A laser beam deflection probe, a high-speed camera, and shadow photography were used to characterize the induced photoacoustic phenomena during synchronized delivery of Er:YAG laser pulses in a confined volume of water. A shock wave enhancing technique was employed which consists of delivering a second laser pulse at a delay with regard to the first cavitation bubble-forming laser pulse. Influence of the delay between the first and second laser pulses on the generation of pressure and shock waves during the first bubble’s collapse was measured for different laser pulse energies and cavity volumes. Results show that the optimal delay between the two laser pulses is strongly correlated with the cavitation bubble’s oscillation period. Under optimal synchronization conditions, the growth of the second cavitation bubble was observed to accelerate the collapse of the first cavitation bubble, leading to a violent collapse, during which shock waves are emitted. Additionally, shock waves created by the accelerated collapse of the primary cavitation bubble and as well of the accompanying smaller secondary bubbles near the cavity walls were observed. The reported phenomena may have applications in improved laser cleaning of surfaces during laser-assisted dental root canal treatments. Introduction Laser-induced cavitation bubbles have already been proposed for surface cleaning [1]. The cleaning of surfaces is carried out by fluid flow generated when bubbles expand and collapse close to boundaries [2]. An example of the use of laser-induced cavitation bubbles is the laser activated irrigation (LAI) during the dental root canal therapy, using an erbium laser (2940 or 2780 nm) [3][4][5][6]. The treatment is based on the delivery of erbium laser pulses into the liquid-filled canal through a fiber tip. The erbium laser light is highly absorptive in water (approximately 1-3 μm penetration depth) [7], which leads to explosive boiling that induces cavitation bubbles. Photon-induced photoacoustic streaming (PIPS™) is the latest application of LAI, which uses the Er:YAG (2940 nm) laser equipped with a conical and stripped fiber tip [8][9][10][11][12][13][14]. With the PIPS technique, the fiber tip is held in the coronal aspect of the access preparation, and very short bursts of very low laser energy are directed down into the canal to stream irrigants throughout the entire root canal system. This technique results in much deeper irrigation than traditional methods (syringe, ultrasonic needle) [9][10][11][12][13], being capable of reaching lateral canals and other outlying structures also in the apical part of the root canal [7,14], with the major cleaning mechanism being attributed to the liquid vorticity resulting from the laser-induced oscillations of the cavitation bubbles [15,16]. 
Also of major concern in root canal irrigation is the effective removal of the biofilm and of the smear layer, which is produced during root canal instrumentation and consists of inorganic and organic material including bacteria and their byproducts [17][18][19][20]. When LAI was first introduced it was believed that shock waves generated during the bubbles' collapse would contribute to the efficacy of debridement and removal of the biofilm and organic tissue remains [18,21]. However, as opposed to within infinite liquid reservoirs, shock waves are considerably diminished or are not present at all when bubbles are created in confined reservoirs such as dental root canals [7,15]. This is because in confined liquid cavities, the resulting cavitation oscillations are significantly prolonged due to friction on the cavity walls and other factors. Consequently, the collapses are not intense enough to generate shock waves. Current procedures thus still rely on the use of ethylenediaminetetraacetic acid (EDTA) and sodium hypochlorite solutions and are only partially effective in removing the smear layer and biofilm [18][19][20][21][22]. Therefore, further optimization of laser-assisted irrigation and cleaning procedures is called for. Recently, a synchronized delivery of laser pulses was studied in an infinite liquid reservoir, showing that a resonance effect can be achieved by applying a second laser pulse shortly after the collapse of the primary cavitation bubble to increase the mechanical energy of the secondary oscillation [23]. However, these results have limited value for endodontic applications, as the oscillations of cavitation bubbles in the confined geometry of the root canal vary significantly from the infinite liquid reservoir scenario. In confined reservoirs, secondary oscillations are diminished or not present at all and the collapses happen 2-3 mm below the fiber tip. Therefore, subsequent laser pulses lead to the generation of new cavitation bubbles, physically separate from the primary bubble, and the resonance effect does not take place. In this paper, we report on a new SWEEPS (shock waveenhanced emission photoacoustic streaming) technique of synchronized laser-pulse delivery intended to enhance shock waves emitted by collapsed bubbles in confined spaces such as root canals. As the collapse of the laser-induced cavitation bubble is initiated, a second pulse is delivered into the liquid, forming a second cavitation bubble. The growth of the second cavitation bubble accelerates the collapse of the first cavitation bubble, leading to a violent collapse, during which shock waves are emitted. Furthermore, shock waves are also emitted from the collapsing secondary cavitation bubbles that form naturally throughout the entire length of the canal during laser-induced irrigation. Unlike the main cavitation bubbles, the secondary bubbles are in close proximity to canal walls during their collapses, generating shear flows that are able to remove particles from the surface [1]. Additionally, because of their proximity to the canal walls, the emitted shock waves are still propagating at super-sonic speeds as they reach the smear layer, potentially increasing the cleaning mechanism even further. The proposed SWEEPS technique shares similarities with extracorporeal shock wave lithotripsy (ESWL), where focused ultrasonic waves are used to break kidney stones into smaller pieces [24,25]. 
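The timing logic of SWEEPS is simple to express: each second pulse should arrive while the first bubble is still collapsing, i.e. at a delay somewhat below the bubble's oscillation period. The sketch below generates such a trigger schedule for a signal generator; the repetition period and the 0.95 fraction are illustrative values only, since the optimal delay is determined experimentally in the Results section.

```python
def sweeps_trigger_times(t_osc_us, n_pairs=10, pair_period_us=50_000.0, fraction=0.95):
    """Trigger schedule (microseconds) for SWEEPS pulse pairs.

    t_osc_us       : expected oscillation period of the first bubble
    pair_period_us : repetition period between successive pulse pairs (illustrative)
    fraction       : places the second pulse shortly before the expected first
                     collapse; in practice the optimal delay T_p* is found by
                     sweeping T_p over a range of values.
    """
    delay_us = fraction * t_osc_us
    return [(k * pair_period_us, k * pair_period_us + delay_us) for k in range(n_pairs)]

# e.g. for a ~600 us oscillation period in a narrow canal, each pair's second
# pulse would be scheduled roughly 570 us after its first pulse.
print(sweeps_trigger_times(600.0, n_pairs=2))
```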
Materials and methods The cavitation bubbles and the corresponding pressure waves were generated with an Er:YAG laser (LightWalker ATS, Fotona d.o.o, λ = 2.94 μm) fitted with an articulated arm and a fiber tip handpiece (H14, Fotona d.o.o). Laser pulses were delivered into liquid-filled canals through fiber tips (flat Fotona VARIAN 600 fiber tip or conical Fotona PIPS 600 fiber) with 600 μm fiber diameter. Although both types of fiber tips were tried in most of the experiments, the presented data is mainly for the experiments obtained with the flat fiber tip. This is because under the SWEEPS conditions the cone of the conical fiber tip became very quickly damaged, making the collected data unreliable. We attribute this observation to the significant amplification of the pressure waves in the vicinity of the accelerated collapse of the first bubble under the SWEEPS conditions. The induced photoacoustic phenomena in the confined liquid space were characterized with two experimental setups. One of the setups was a laser beam deflection probe (LBDP), which measured the amplitudes of pressure waves based on changes in the refractive index gradient at a single point with high temporal resolution [26]. The other setup involved highspeed camera acquisition and shadow photography used to visualize cavitation bubbles and the emission of resulting shock waves. Bubble oscillation periods and volumes were determined from the captured sequences of images. Measurement of pressure waves using laser beam deflection probe The experimental setup for measuring the amplitudes of pressure waves is shown schematically in Fig. 1. A block of aluminum with L = 25 mm long open-ended canals of different diameters (2, 3, 6, and 8 mm) was submerged 3 mm deep in a basin of distilled water (100 × 100 × 70 mm). A flat fiber tip (VARIAN 600), positioned in the center of the cross section of the canal, 5 mm below the water surface, was used to deliver the excitation laser pulses. A signal generator controlled by a personal computer was used to trigger the excitation laser. A 60-MHz InAs photodiode was used to detect and characterize the temporal profiles of the Er:YAG laser pulses. The laser beam deflection probe consisted of a He-Ne laser beam (λ = 633 nm) focused to a measuring spot 1 mm below the lower edge of the canal (26 mm below the fiber tip) and centered on a quadrant photodiode (QPD). The refractive index gradient produced by the propagation of a pressure wave through the water caused the deflection of the probe laser beam and a change in the signal of the QPD. Because the probe laser beam was positioned directly below the source of the pressure wave, only the vertical deflection was measured by subtracting the sum of signals from the upper two quadrants from the sum of signals from the lower two quadrants of the photodiode. Figure 2 shows a typical LBDP signal (black line) produced by the propagation of a pressure wave following a single laser pulse with energy E p = 20 mJ and pulse width t p = 50 μs. The temporal profile of the laser pulse is represented by the red line on the same graph. Two particular regions of interest are distinguishable from the LBDP signal: the first is the result of the rapid expansion of the laser-induced oscillation bubble (see the dashed rectangle on the left side of Fig. 2), and the second is the result of the oscillation bubble's collapse (see the dashed rectangle on the right side of Fig. 2). 
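The LBDP read-out described above reduces to a difference of quadrant sums followed by a peak-to-peak measurement inside the expansion or collapse window. A small sketch follows (Python, with hypothetical names for the four quadrant signals).

```python
import numpy as np

def vertical_deflection(q_ul, q_ur, q_ll, q_lr):
    """Vertical LBDP signal from the four quadrant-photodiode channels:
    sum of the lower two quadrants minus sum of the upper two, so that only
    the vertical deflection of the probe beam is retained."""
    return (q_ll + q_lr) - (q_ul + q_ur)

def peak_to_peak(signal, t_us, t_start_us, t_end_us):
    """Peak-to-peak amplitude of the LBDP signal inside a time window, e.g.
    around the bubble's collapse (the collapse amplitude A)."""
    w = (t_us >= t_start_us) & (t_us <= t_end_us)
    return float(signal[w].max() - signal[w].min())
```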
The first peak in the LBDP signal at expansion corresponds to the direct pressure wave, while the second peak (approximately 40 us after the first one) is the reflection from the bottom of the water reservoir. The same pair of peaks can be seen during the collapse. Figure 3 shows a typical LBDP signal when two individual laser pulses (E p = 20 mJ and pulse width t p = 50 μs for each pulse) were delivered into the liquid, separated by a temporal delay T p . In the particular case shown in Fig. 3, the second laser pulse was delivered at a time just before the collapse of the first cavitation bubble. Under such conditions, the collapse of the first bubble is accelerated, which leads to a more intense pressure wave generation in comparison to when a second laser pulse is absent (see Fig. 2). This amplification was characterized by measuring the collapse amplitude (A), defined as the peak-to-peak amplitude of the LBDP signal during the first bubble's collapse phase. The temporal separation (the LBDP oscillation time; T′ OSC ) between the 'expansion' and 'collapse' LBDP signals was also measured. Note that the LBDP oscillation time T′ OSC corresponds only approximately to the actual oscillation period of the bubble since the temporal separation between the LBDP signals can be affected by the Fig. 1 A schematic overview of the experimental setup for laser beam deflection probe measurements. A block of aluminum with canals of different diameters was submerged 3 mm deep in a basin of distilled water. A flat fiber positioned in the center of the cross section of the canal, 5 mm below the water surface, was used to deliver the Er:YAG laser pulses with pulse energy E p = 20 mJ and pulse width t p = 50 μs. A signal generator (SG) controlled by a personal computer (PC) was used to trigger the excitation lasers. A photodiode (PD) was used to detect the Er:YAG laser pulses. The measuring system consisted of a He-Ne laser beam (λ = 633 nm) focused to a measuring spot 1 mm below the lower edge of the canal and centered on a quadrant photodiode (QPD). Signals from the QDP were recorded using an oscilloscope (OSC) Fig. 2 Typical signal from the LBDP (upper black signal) following a single Er:YAG laser pulse (lower red signal) with pulse energy E p = 20 mJ and pulse width t p = 50 μs, delivered through a flat fiber tip in a 6-mmdiameter canal. The two marked regions represent the expansion and collapse phase of the laserinduced oscillation bubble spatial movement of the oscillating bubble relative to the LBDP probe (and by other factors further discussed below). In order to find the optimal delay (T p * ) where the collapse amplitude is maximal (A * ), a series of measurements was conducted by varying T p in a range from 200 to 800 μs in 1 μs intervals and recording A and T′ OSC for different canal diameters (2, 3, 6, and 8 mm). High-speed camera and shadow photography Two additional experimental systems were used to record the generated shock waves during the synchronized delivery of laser pulses and to measure the dependence of the bubble's oscillation period (T OSC ) on the laser pulse energy, cavity diameter, and fiber tip position. The shock waves were recorded with a shadow-graphic setup using 30 ps long frequency-doubled Nd:YAG (λ = 532 nm) illumination pulses (Ekspla, Lithuania, PL2250-SH-TH), imaged through a microscope by a charge-coupled device (CCD) camera (Basler AG, Germany, scA1400-17 fm, 1.4 Mpx). The experimental system is basically the same as described in ref. [15]. 
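The roughly 40 μs spacing between the direct wave and its reflection is consistent with simple acoustic path-length arithmetic; the short sketch below converts the delay into the extra distance travelled by the reflected wave, using a nominal speed of sound in water that the paper does not state.

```python
# Extra path travelled by the bottom-reflected pressure wave relative to the
# direct wave, from the ~40 us spacing between the two LBDP peaks.
c_water = 1480.0           # m/s, nominal speed of sound in water (assumed value)
delta_t = 40e-6            # s, observed delay between direct and reflected peaks
extra_path_mm = c_water * delta_t * 1e3
print(extra_path_mm)       # ~59 mm of additional acoustic path
```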
Figure 4 shows the experimental system for measuring the dependence of the cavitation bubble's oscillation period (T OSC ) on different parameters (laser pulse energy, cavity diameter, and fiber tip position). A block of acrylic glass with canals of varying diameters (1.5-6 mm) and lengths (10 mm and 20 mm closed-ended and 30 mm open-ended canals) was used to simulate various cavity dimensions. The block was submerged 3 mm deep in a basin of distilled water, and a conical fiber tip (PIPS 600 μm, Fotona) was positioned in the center of the cross section of the canal, 5 mm below the water surface, to deliver the Er:YAG laser pulses. A signal generator (SG; Tektronix, US, AFG 3102) was used to trigger the excitation laser and the camera.
Fig. 3 Typical signal from the LBDP (upper black signal) following two Er:YAG laser pulses with pulse energy E p = 20 mJ and pulse width t p = 50 μs (lower red signal) separated by a delay (T p ), delivered through a flat fiber tip, in a 6-mm-diameter canal. The peak-to-peak amplitude (A) of the LBDP signal at the time of the oscillation bubble's collapse was measured. The oscillation period of the bubble, measured by the LBDP, is denoted as T′ OSC
Fig. 4 Experimental system for measuring the dependence of the cavitation bubble's oscillation period (T OSC ) on different parameters (laser pulse energy, cavity diameter, and fiber tip position). A block of plexiglass with canals of varying diameters (1.5-6 mm) and lengths (10 and 20 mm closed-ended and 30 mm open-ended canals) was used to simulate various cavity dimensions. The block was submerged 3 mm deep in a basin of distilled water and a conical fiber tip was positioned in the center of the cross section of the canal, 5 mm below the water surface, to deliver the Er:YAG laser pulses with pulse energies ranging from 5 to 30 mJ and pulse width t p = 50 μs. A signal generator (SG) was used to trigger the excitation laser and the camera (Photron Fastcam SA-Z)
Figure 5 shows a typical sequence of a cavitation bubble's oscillation caused by a single 8 mJ laser pulse in a 20-mm-long, closed-ended canal with a diameter of 3 mm. The bubble oscillation period T OSC was measured as the time from the beginning of the growth of the cavitation bubble to its first collapse (marked by a yellow rectangle).
Fig. 5 Typical sequence of a cavitation bubble's oscillation, following an Er:YAG pulse with pulse energy E p = 8 mJ and pulse width t p = 50 μs, delivered through a conical fiber tip, in a 3-mm-diameter, 20-mm-long, closed-ended canal. The sequence was recorded using a high-speed camera at 100,000 frames per second and an exposure time of 250 ns. The first collapse of the bubble can be observed approximately 380 μs after the beginning of the laser pulse
Results
The first part of the experimental results demonstrates the amplification of pressure waves in confined canals when a second laser pulse is delivered at a proper delay. Since the required optimal delay depends on the first bubble's oscillation period (T OSC ), we also measured the dependence of T OSC on laser energy, cavity diameter and length, and on the position of the fiber tip within the cavity. In the second part, the presence of shock waves during the first bubble's collapse phase is demonstrated for the optimally synchronized laser pulse pair.
Amplification of pressure waves
Figure 6 depicts the measured collapse amplitude (A) for various T p , ranging from 450 to 740 μs, in a canal with a diameter D = 2 mm. In the case of a single laser pulse, the average (baseline) amplitude (A 1 ) was 116 mV. For T p below approximately 550 μs, the collapse amplitude is significantly diminished in comparison with what it would be in the absence of a second pulse. For T p in the optimal range from 560 to 630 μs, the pressure waves are amplified.
For T p longer than 630 μs, the collapse amplitude returns to the baseline level of a single laser pulse, since at longer delays the second pulse is delivered after the collapse of the first oscillation bubble has already occurred. The average maximal collapse amplitude at the optimal delay of T * p = 583 μs was A * = 241 mV, which is higher than A 1 by a factor of 2.08. The optimal delay T * p was determined as the midpoint of the class interval with the highest mean collapse amplitude. The dependence of the collapse amplitude A on T p was also measured for other canal diameters (see Fig. 7). As can be seen from the obtained results, both A * and T * p are strongly dependent on the canal dimensions. Generally, T * p and the pressure wave amplification factor (A f = A * /A 1 ) decrease with increasing canal diameter. The optimal delay times and amplification factors for different canal diameters are collected in Table 1. The amplification of pressure waves (A f ) is most pronounced in smaller-diameter cavities, ranging from A f = 1.09 for the D = 8 mm cavity to A f = 2.2 and A f = 2.08 for the D = 3 and 2 mm cavities, respectively. It is worth noting that the shock waves are emitted at supersonic speeds close to the collapsing bubble but become considerably slower as they travel the approximately 26 mm down the canal to where the measurement of the pressure waves was made. Therefore, it is expected that the actual amplification of the pressure waves in the vicinity of the collapsing bubble is much larger than shown in Table 1. This was also confirmed by our observation that, when a standard conical PIPS fiber tip was used, the fiber's cone was quickly damaged when the optimal pulse separation was used.
Figure 8 shows, for different diameter canals, the difference in the first bubble's LBDP oscillation times (T′ OSC ), both for cases when only a single laser pulse is emitted and for when the first pulse is followed by an optimally delayed second laser pulse. As can be seen, the optimally delayed second pulse (i.e., separated by T * p from the first pulse) accelerates the first bubble's collapse, resulting in a reduced LBDP oscillation time T*′ OSC . The reduction ranges from 1 μs in the D = 8 mm diameter canal to 30 μs in the D = 2 mm diameter canal. The difference between the means of the first bubble LBDP oscillation times depending on whether a second laser pulse is present or not is significant at P < 0.001 for the 2, 3, and 6 mm diameter canals and at P < 0.05 for the 8 mm diameter canal.
Fig. 6 Amplitudes of the LBDP signal at the collapse of the bubble for various T p , ranging from 450 to 740 μs, in a 2-mm-diameter canal. Each dot represents a measurement of the peak-to-peak amplitude of the pressure wave caused by the cavitation bubble's collapse. The diamonds represent the means of grouped data with respective standard deviations. Er:YAG laser pulses with pulse energy E p = 20 mJ and pulse width t p = 50 μs were delivered through a flat fiber tip in this experiment
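The following is a small sketch of how T * p and the amplification factor A f could be obtained from such a (T p , A) scan; the bin width and variable names are illustrative assumptions rather than the actual analysis code used for Table 1.

import numpy as np

def optimal_delay(tp, amp, a_single, bin_width=10.0):
    # Group the scanned delays tp (in us) into classes of width bin_width and take
    # T_p* as the midpoint of the class with the highest mean collapse amplitude.
    tp, amp = np.asarray(tp, float), np.asarray(amp, float)
    edges = np.arange(tp.min(), tp.max() + bin_width, bin_width)
    idx = np.digitize(tp, edges) - 1
    means = np.array([amp[idx == i].mean() if np.any(idx == i) else -np.inf
                      for i in range(len(edges) - 1)])
    best = int(means.argmax())
    t_star = 0.5 * (edges[best] + edges[best + 1])   # midpoint of the best class
    a_star = means[best]
    return t_star, a_star, a_star / a_single          # T_p*, A*, amplification factor A_f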
Figure 9 shows the dependence of the first bubble's collapse amplitude A on the first laser pulse's energy. The pulse energy was controlled with a series of apertures of different diameters to keep the temporal profile of the laser pulse constant. The circles represent single-pulse results, and the diamond represents the collapse amplitude A for a case when the first laser pulse was followed by an optimally delayed second laser pulse with the same laser pulse energy. As can be seen from Fig. 9, increasing the individual laser pulse energy does not result in a significant increase in the collapse amplitude. In fact, the collapse amplitude gets even smaller when the laser energy is increased from 10 to 50 mJ, which we attribute to the increase of the bubble's volume relative to the dimensions of the canal. It is only when a second, optimally delayed laser pulse is added to the first pulse that the collapse amplitude of the first bubble gets significantly amplified.
Dependence of the bubble oscillation period on experimental conditions
The optimal separation of a synchronized laser pulse pair (T p ) depends on the bubble oscillation period (T OSC ), which further depends on the specific experimental conditions. Figure 10 shows T OSC as a function of the depth of the fiber tip inside an L = 10-mm-long open-ended canal with a diameter of D = 4 mm. The fiber tip depth represents the distance from the upper edge of the canal to the exit end of the fiber tip. For depths ranging from 3 to 6.5 mm, we observed no significant influence on T OSC . The small variations in T OSC (ranging from 695 to 725 μs) can be attributed to slight differences in the radius along the canal and to the measurement error. The absolute depth of the fiber tip (distance from the water surface) in the measured range would only cause an increase in the hydrostatic pressure of approximately 0.35 mbar (the change in hydrostatic pressure over the 3.5 mm range of tip depths), and therefore, any effect of the absolute depth on T OSC is expected to be insignificant. Figure 11 shows T OSC as a function of the cavity diameter (ranging from D = 1.5 to 6 mm) for closed-ended cavities of different lengths (L = 10 and 20 mm) and in the case of an open-ended 30-mm-long canal. Results show that there is a strong negative correlation between the diameter of the canal and T OSC . At small canal diameters (2 and 1.5 mm), the cavitation bubble expands beyond the upper edge of the canal, which results in a shorter T OSC . Figure 12 shows T OSC as a function of laser pulse energy in 3 and 6 mm diameter closed-ended canals and in an infinite liquid. Results show that there is a strong positive correlation between the laser pulse energy and T OSC . Figure 13 shows typical shadow-graphic images of shock waves as observed during the collapse of a single cavitation bubble in an infinite liquid reservoir. Since the shock wave causes a strong disturbance of the water's refractive index, it can be visualized as a sharp circular edge on the shadow-graphic images (yellow arrows are pointing to some of them). It is interesting to observe that multiple shock waves are generated as a consequence of a divided bubble's collapse. This is especially evident when a flat fiber tip is used.
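As an illustration of how T OSC can be read off the high-speed recordings (e.g., the 100,000 fps sequence of Fig. 5), the sketch below estimates the period from the projected bubble area in a frame stack; the simple thresholding used here is an assumption made for the example, not the actual image-processing pipeline.

import numpy as np

def oscillation_period(frames, fps=100_000, dark_fraction=0.5):
    # frames: array of shape (N, H, W); frame 0 is assumed bubble-free.
    # The bubble is taken as pixels darker than dark_fraction * reference frame.
    area = (frames.astype(float) < dark_fraction * frames[0].astype(float)).reshape(len(frames), -1).sum(axis=1)
    start = int(np.argmax(area > 0))                 # first frame in which a bubble is visible
    peak = int(np.argmax(area))                      # maximum expansion
    collapse = peak + int(np.argmin(area[peak:]))    # smallest area after the maximum, taken as the first collapse
    return (collapse - start) / fps                  # T_OSC in seconds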
Images of shockwaves generated during bubble collapse
As opposed to a single bubble collapse in an infinite reservoir, no shock waves were observed during the collapse of a single cavitation bubble in spatially limited closed-ended canals, in agreement with previous reports [27]. However, when a subsequent laser pulse is emitted during the initial bubble's collapse, the growth of the subsequent bubble exerts pressure on the collapsing initial bubble. This accelerates the collapse of the initial bubble and causes the emission of shock waves even in spatially limited water reservoirs. Figure 14a shows shadow-graphic images of shock waves being emitted during the collapse of an initial cavitation bubble in a narrow canal. The beginning of a subsequent bubble expansion can be noticed in all images, which indicates that the collapse of the initial bubble was accelerated by a properly delayed subsequent laser pulse. Smaller secondary bubbles are also formed along the entire canal. The violent collapse of the initial bubble also initiates the collapses of the secondary bubbles. Figure 14b shows the emission of shock waves from the collapsing secondary bubbles.
Discussion
A major mechanism of action of currently used laser-activated root canal irrigation techniques is believed to be the rapid fluid motion in the canal as a result of the expansion and implosion of vapor bubbles, resulting in a more effective delivery of the irrigants throughout the complex root canal system [7,15]. An additional mechanism which contributes to the efficacy of LAI is the improved removal of the smear layer, microorganisms, and biofilm as a result of the physical action of the turbulent irrigant [7,15]. In addition, chemical action seems to play a role as well [18,28]. For example, an increased reaction rate of NaOCl was found upon activation by a pulsed erbium laser [28]. By being able to generate shock waves within narrow root canals, we hypothesize that both the physical and chemical actions of LAI can be further enhanced by using the SWEEPS technique. Experimental results of the SWEEPS technique show that significant amplification of pressure waves can be achieved with optimal delay times of the second laser pulses (see Fig. 7 and Table 1). It is important to note that the amplitude of the collapse is significantly higher if a double-pulse regime is used compared to a single pulse with the same cumulative energy (see Fig. 9), because increased single-pulse energy leads to an increase in the volume of the cavitation bubble relative to the cavity dimensions, which in turn leads to a weakened collapse. The main mechanism of this amplification is, in our opinion, the acceleration of the initial bubble collapse, which would otherwise be significantly diminished in confined spaces (like root canals). This hypothesis is confirmed by the results shown in Fig. 8, where the oscillation time as measured by the LBDP in the case of a single pulse (T′ OSC ) and in the case of a synchronized pulse pair (T*′ OSC ) is shown. Slight differences between T′ OSC and T*′ OSC could be explained by the increased speed of propagation of the pressure waves. However, the distance between the source of the pressure wave and the probe laser beam is approximately 25 mm, which means a travel time of roughly 17 μs at the speed of sound in water [29,30]. Therefore, the increased speed of propagation cannot account for the 30 μs (see Fig. 8, 2-mm-diameter canal) difference between T′ OSC and T*′ OSC .
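A quick numerical check of the travel-time estimate above, assuming a sound speed in water of about 1480 m/s at room temperature:

c_water = 1480.0                 # m/s, approximate speed of sound in water at room temperature
distance = 25e-3                 # m, source-to-probe distance
print(distance / c_water * 1e6)  # ~16.9 us, consistent with the ~17 us quoted above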
Furthermore, since shock waves traveling at supersonic speed quickly converge towards the speed of sound [31,32], we do not expect a significant effect on the average speed of propagation of the pressure wave over a relatively great distance (25 mm). Similarly, slight differences in T′ OSC could be the result of the collapse of the bubble happening closer to the probe laser beam, perhaps being pushed downward by the expanding second bubble. High-speed camera observations confirm that this effect is not large enough to contribute significantly to the difference between T′ OSC and T*′ OSC . However, the differences between T′ OSC and T*′ OSC are consistent with the collapse happening earlier due to the pressure exerted by the second expanding bubble on the collapsing bubble, accelerating the collapse. The acceleration factor (A cc ) was defined as the increase in the average speed of the collapse after the initiation of the second pulse. The strong covariance between A f and A cc for various canal diameters, which is shown in Fig. 15, supports the hypothesis that the amplification of the shock waves is a result of the acceleration of the collapse of the bubble. The results for A f and A cc in the 2-mm-diameter canal are consistent with measurements of the actual bubble oscillation period T OSC (see Fig. 11) and are likely caused by the cavitation bubble partially extending outside the boundaries of the canal during its growth, changing the observed dynamics. It is important to note that the enhanced emission of shock waves does not appear to result in an increased apical irrigant extrusion. Recently, a study of the potential apical irrigant extrusion during SWEEPS laser irrigation was carried out [33], in which irrigation using two standard endodontic irrigation needles (notched open-end and side-vented) was compared with the PIPS and SWEEPS laser irrigation procedures. Both the PIPS and SWEEPS irrigation procedures resulted in a significantly lower apical extrusion compared to the conventional irrigation with endodontic irrigation needles, in agreement with a previous report [34]. Finally, in our experiments, the single pulses or pairs of pulses were delivered at low repetition rates of up to 0.2 Hz. A potential dependence of the SWEEPS phenomena on an increased pulse pair repetition rate was not explored.
Conclusion
A laser beam deflection probe, a high-speed camera, and shadow photography were used to characterize the effects of synchronized delivery of Er:YAG pulses in a confined volume of water. In contrast to infinite liquid reservoirs, shock waves are typically not emitted by laser-induced cavitation bubbles in confined liquid spaces. This limits the surface cleaning efficacy of the laser-induced cavitation bubbles. However, as our study shows, pressure waves caused by the collapse of a laser-induced cavitation bubble can be significantly amplified (P < 0.001) also in a confined reservoir. This is achieved by delivering a subsequent laser pulse, separated from the initial pulse by a proper temporal delay. It should be noted that a similar amplification cannot be achieved by simply increasing the laser pulse energy. Larger single-pulse energies lead to larger cavitation bubbles relative to the cavity dimensions, which in turn results in a weakened collapse of the bubbles.
On the other hand, applying a subsequent laser pulse during the initial bubble's collapse leads to the growth of a second bubble, which exerts pressure on the collapsing initial bubble, accelerating its collapse and causing the emission of shock waves. Results show that the optimal delay between the two laser pulses is strongly correlated with the cavitation bubble's oscillation period. The resulting amplification is most pronounced in smaller diameter canals (< 3 mm). Measurements with a high-speed camera show that the oscillation periods of cavitation bubbles depend strongly on laser pulse energy and canal diameter, as opposed to the canal length and fiber tip depth, which have only a minor influence on the bubbles' oscillation period. The observed shock wave-enhanced emission photoacoustic streaming (SWEEPS) phenomenon could be used to improve the efficacy of laser-assisted root canal treatment, especially with respect to the smear layer and biofilm removal. Because of the variability of root canal geometries, further methods of improvement may be needed in order to achieve a reliable synchronization between the bubble oscillation and the laser pulse pair timing. One potential improvement may be a special laser modality in which the temporal separation between the pairs of laser pulses is continuously swept back and forth in order to ensure that during each sweeping cycle the optimal separation between the pulse pair is achieved, as required for shock wave generation [35].
7,481.6
2018-01-11T00:00:00.000
[ "Physics" ]
Sub-TeV quintuplet minimal dark matter with left-right symmetry A detailed study of a fermionic quintuplet dark matter in a left-right symmetric scenario is performed in this article. The minimal quintuplet dark matter model is highly constrained from the WMAP dark matter relic density (RD) data. To elevate this constraint, an extra singlet scalar is introduced. It introduces a host of new annihilation and co-annihilation channels for the dark matter, allowing even sub-TeV masses. The phenomenology of this singlet scalar is studied in detail in the context of the Large Hadron Collider (LHC) experiment. The production and decay of this singlet scalar at the LHC give rise to interesting resonant di-Higgs or diphoton final states. We also constrain the RD allowed parameter space of this model in light of the ATLAS bounds on the resonant di-Higgs and diphoton cross-sections. Introduction and Motivation Dark matter (DM) and its large abundance compared to baryonic matter has been a long standing puzzle without any definite answer as of yet.The Standard Model (SM) of particle physics is bereft of any such particles and hence the need to extend the SM becomes imperative in order to incorporate a DM candidate in the particle spectrum.Numerous approaches have been made to come up with consistent DM models that can explain the experimental observations.A small class of such models is what is referred to as Minimal dark matter (MDM) [1][2][3][4][5][6] models.MDM models postulate a new fermionic or bosonic multiplet, an n-tuplets of the SU (2) group.Being color neutral these new multiplets have no strong interactions, and can only weakly interact with other SM particles mainly through gauge interactions.The stability of the quintuplet, on the other hand, is either ensured by some discrete symmetry or they can be accidentally stable.In a scenario where the lightest component of the new multiplet is electrically neutral, it could be a good candidate for the DM.In this work we will study a MDM model where the DM is coming from the neutral component of a quintuplet fermion. A dark matter coming from SU (2) L quintuplet has severe limitations.Firstly, only a hypercharge zero quintuplet can evade the strong direct detection limits with all other cases being very highly constrained.All the states in the quintuplet fermion are mass degenerate at the tree level.This degeneracy is lifted (only by few hundred MeV) by radiative corrections.This makes the collider phenomenology for the quintuplet extremely challenging.Quintuplets, being charged under the SM gauge group, are produced in pairs at the collider experiments via the gauge interactions.Subsequently, they decay into lightest component of the quintuplet in association with very soft SM leptons and jets.The lightest component of the quintuplet, being the candidate for the DM, remains invisible in the detectors whereas, the final state leptons or jets will be too soft (due to the extremely small mass splitting) to observe at the collider.Attempts have been made to overcome this difficulty by introducing an additional quadruplet scalar [7,8] in order to write a dimension-4 decay term for the quintuplet fermion.However, in this case, the dark matter candidate would then be lost or only be there in an extremely fine-tuned region of the parameter space.We thus take an alternate approach to this problem by choosing a left-right symmetric model where the DM is neutral component of an SU (2) R quintuplet fermion. 
Left-right symmetric (LRS) models [9,10] by themselves are a very well motivated extension of the SM.They are gauge extensions of the SM with the gauge group being SU (3) C × SU (2) L × SU (2) R × U (1) B−L .At a fundamental level, LRS models preserve parity (P) symmetry.The spontaneous breaking of the right-handed symmetry at some high scale leads to the observed parity violation at the weak scale.Another important consequence of this fact is that the P-violating terms in the QCD Lagrangian leading to the strong-CP problem [11][12][13][14][15][16][17][18][19] are absent in these class of models and hence naturally solving the strong-CP without a global Peccei-Quinn symmetry [20].The gauge structure here compels us to have a right-handed neutrino in the particle spectrum, thus allowing for a small neutrino mass generation through the seesaw mechanism [21][22][23][24][25][26]. In this work, we have considered SU (3) C × SU (2) L × SU (2) R × U (1) B−L gauge symmetry and enlarged the fermion spectrum by introducing a vector-like fermion multiplet which is a quintuplet under SU (2) R .The neutral component of the SU (2) R quintuplet could be good candidate for the dark matter.As the fundamental gauge group in this case does not include U (1) Y , it is only produced after the right-handed symmetry breaking of SU (2) R ×U (1) B−L → U (1) Y .The hypercharge quantum number is thus a derived quantity and allows for many different combinations of charge assignment for the quintuplet to get the zero hypercharge for the DM particle.This model would thus have a much richer phenomenology with many different possibilities for the DM and other particles of the quintuplet.The tree-level masses of the quintuplet particles are still degenerate but the radiative corrections are much larger in this case.The mass degeneracy among the quintuplet fermions of different charges are now produced at the right-handed symmetry breaking scale (heavy right-handed gauge bosons are running in the loops) resulting in much larger mass splitting among them.Thus the production and subsequent decay of the high charge-multiplicity components of the quintuplet produce interesting signatures at the collider experiments without sacrificing the dark matter aspect of the model. The dark matter candidate in the present scenario is the neutral component of the vectorlike SU (2) R quintuplet fermion.Therefore, the dark matter has gauge interactions with the SU (2) R gauge bosons namely, the W R and Z R -bosons.The self-annihilation and coannihilation of the dark matter mainly proceed through a Z R and W R exchange in the s-channel, respectively.It is important to note that the lower values for the masses of W R and Z R are highly constrained from the LHC data as well as other low energy observables.As a result, the self-annihilation and co-annihilation cross-sections are, in general, small.The dark matter relic density measured by WMAP [27] and PLANCK [28] collaborations can only be satisfied for some particular values of DM masses (at around half the W R and Z R masses) where self-annihilation and/or co-annihilation cross-sections are enhanced by the resonant production of Z R and/or W R , respectively.It has been shown in Ref. 
[3] that the dark matter relic density can only be satisfied at quite large DM masses (around 4 TeV or higher) if one also accounts for the direct detection constraints.A way to circumvent this problem, as has been discussed later in this paper, is to introduce a singlet scalar which can open up a lot of new annihilation and co-annihilation channels allowing for a correct DM relic density for almost any DM mass while also satisfying the direct detection bounds.We have studied this in details in this paper, focusing on the DM and singlet scalar phenomenology in the model. The scalar, being singlet under both SU (2) L and SU (2) R , has Yukawa coupling only with the vector like quintuplet fermions.The couplings with the other scalars in the model arise from the scalar potential.However, couplings with the SM gauge bosons, namely W ± , Z and photon, arise at one loop level via the loops involving quintuplet fermions.The collider phenomenology of the singlet scalar crucially depends on its loop induced coupling with a pair of photons.In the framework of this model, singlet scalar-photon-photon coupling is enhanced due to multi-charged quintuplet fermions in the loop.Therefore, at the LHC experiment, statistically significant number of singlet scalar could be produced via photon-photon1 initial state.Depending on the parameters in the scalar potential, the singlet scalar dominantly decays into a pair of SM Higgses or photons, giving rise to interesting di-Higgs or di-photon resonance signatures at the LHC.We have also studied the collider signatures of the singlet scalar and bounds on the singlet scalar masses from the di-Higgs and di-photon resonance searches by the ATLAS collaboration with 36 inverse femtobarn integrated luminosity data of the LHC running at 13 TeV center-of-mass energy. The rest of the paper is organized as follows.In Sec. 2 we have introduced the model.The particle spectrum is listed along with the interactions among the particles.We have computed the gauge boson, fermion and the scalar masses and mixings in this section.In Sec. 3 we have studied the dark matter phenomenology of the model along with the bounds from direct detection experiments.In Sec. 4 we study the phenomenology of the singlet scalar.Sec. 5 has the collider bounds from the most recent diphoton and di-Higgs results from the Large Hadron Collider (LHC).Finally we conclude in Sec.6 with some discussions. 
Quintuplet Minimal Dark Matter Model In this work, we consider a minimal model for dark matter (DM) in the framework of SU (3) C × SU (2) L ×SU (2) R ×U (1) B−L gauge symmetry, where B and L are baryon and lepton numbers respectively.Due to the left-right symmetric nature of the model, the chiral fermions are now doublets for both left and right-handed sectors and are given as: where the numbers in the bracket corresponds to SU (3) C , SU (2) L , SU (2) R and U (1) B−L quantum numbers respectively.The electric charge Q for a particle in this model is given as: , where T 3 L/R are the third component of the isospin for SU (2) L/R .A minimal scalar sector requires a right-handed doublet Higgs boson to break the SU (2) R symmetry and a bidoublet Higgs field to break the electroweak symmetry and produce the quark and lepton masses along with the CKM mixings.They are given as: The absence of a triplet scalar in the Higgs sector prevents us from writing a lepton number violating term in the Yukawa Lagrangian and hence a light neutrino mass generation is not possible in this scenario without introducing unnaturally small Yukawa couplings.We thus introduce a singlet fermion N(1,1,1,0) which will help generate light neutrino mass through inverse seesaw mechanism. We introduce an additional SU (2) R vector-like fermion quintuplet given by φ 0 1 = v 1 and φ 0 2 = v 2 .For simplicity, we will assume the VEV of the S-field to be zero. 2he gauge bosons of SU (2) L , SU (2) R , and U (1) B−L mix among themselves to give four massive (W R , Z R , and the SM W and Z-boson) and one massless (the SM photon) gauge bosons.We denote the left-handed (right-handed) triplet gauge state as W i L (W i R ) while the B − L gauge boson is B. The mass-squared matrix for the charged gauge boson M 2 W in the basis (W R , W L ) and the neutral gauge bosons M 2 Z in the basis (W 3R , W 3L , B) is given as where is the EW VEV ∼ 174 GeV and g R = g L = 0.653.This gives the masses of the new right-handed heavy gauge bosons as: while the left-handed W and Z boson masses are the same as in the SM with the effective hypercharge gauge coupling given as . The relevant couplings of the gauge bosons with χ are given as: Here ), χ i are the particles in the quintuplet Σ with Q χ i being the corresponding electric charge and s W = sin θ W where θ W is the Weinberg angle.These couplings are particularly important from the perspective of DM phenomenology as they can lead to self annihilation (through Z R ) and co-annihilation (through W R ) of the DM particle so as to satisfy the RD bounds at these points.The fermion masses are generated from the following Yukawa Lagrangian: where Y and f are the Yukawa couplings and Φ The quark and charged lepton masses in this model would then be given as: (2.8) For simplicity, we will choose a large tan β (= v 1 /v 2 ) limit which requires Y q 33 ∼ 1 to explain the top quark mass while Y q 33 < 10 −2 .The neutrino mass matrix, on the other hand, is a 3 × 3 matrix in the basis (ν L , ν R , N ) given as: where . This is the inverse seesaw mechanism of neutrino mass generation. If we assume that f R v R >> m D , µ N the approximate expressions for the neutrino mass eigenvalues (for one generation) are given as (2.10) So it is easy to get a light neutrino mass by appropriately choosing all the parameters in the neutrino sector. 
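As a numerical illustration of the inverse seesaw structure described above, the sketch below diagonalizes a one-generation mass matrix with the standard inverse-seesaw texture in the (ν L , ν R , N) basis; the texture and the input values are illustrative assumptions, not the benchmark parameters of this model.

import numpy as np

# Illustrative one-generation inputs (GeV): Dirac mass m_D, heavy scale M = f_R*v_R,
# and a small lepton-number-violating mass mu_N. These are NOT fit to this model.
m_D, M, mu_N = 1.0, 1.0e4, 1.0e-2

# Assumed standard inverse-seesaw texture in the (nu_L, nu_R, N) basis.
M_nu = np.array([[0.0, m_D, 0.0],
                 [m_D, 0.0, M  ],
                 [0.0, M,   mu_N]])

eigenvalues = np.sort(np.abs(np.linalg.eigvalsh(M_nu)))
print("mass eigenvalues (GeV):", eigenvalues)
print("light-mass estimate m_D^2 * mu_N / M^2 =", m_D**2 * mu_N / M**2, "GeV")  # ~1e-10 GeV ~ 0.1 eV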
The most general scalar potential involving the bidoublet field, an SU (2) R doublet field, and a real singlet field is given by: (2.11) The physical Higgs spectrum consists of four CP-even scalars, one CP-odd pseudoscalar, and one charged Higgs boson.Two charged states and two CP-odd states are eaten up by the four massive gauge bosons.Using the Higgs potential given in Eqn.2.11 to eliminate µ 2 1 , µ 2 2 and µ 2 R from the minimization conditions, we get the CP-even scalar mass-squared matrix as: where Diagonalizing this mass-squared matrix gives four mass eigenstates.Table 1 gives one benchmark point for a set of parameters which can easily give a light SM-like 125 GeV Higgs boson denoted by h, consisting almost entirely of the real part of φ 0 1 field.We also get a 500 GeV scalar denoted by H 1 , consisting of almost purely the singlet S with negligible mixing with the others.This state is the one most important from the dark matter point of view and it is easy to see here that the mass of this state can be easily increased (decreased) by just decreasing (increasing) the value of µ 2 S and very slightly modifying value of λ 1 accordingly.Such a change does not significantly alter the composition of this H 1 eigenstate till about a mass of 200 GeV.Further decreasing the mass of H 1 (just by increasing µ 2 S ) results in significant mixing of the singlet with the SM-like state and is ruled out from Higgs data.Though one can then alter the other parameters of the model to still keep the mixing low.Two very heavy states H 2 and H 3 with masses of the order of v R consisting of real part of φ 0 2 and H 0 R states are also present in the spectrum.The heavy states are required to be heavier than 15 TeV in order to suppress flavor changing neutral currents [29][30][31][32][33].This can be easily satisfied in our model by choosing a high value (> 10 TeV) for the right-handed symmetry breaking scale, v R .The mass of the pseudo-scalar A 1 is given as: while the charged Higgs boson H ± mass is GeV 2 , v1 = 173.9GeV, v2 = 5 GeV, vR = 13 TeV.Subscript R and I stands for the real and imaginary parts of the field respectively. Dark Matter Phenomenology The motivation for introducing the vector-like quintuplet fermion χ(1, 1, 5, X) was to obtain a candidate for DM.Since all the components of Σ get mass from the same term they are all mass degenerate at tree-level, but radiative corrections remove this degeneracy.Radiative corrections to the masses of the quintuplet fermions will be introduced from the gauge sector and the singlet scalar but since the coupling of the singlet scalar to all the quintuplet particles are the same, it does not introduce any splitting between their masses.The mass splitting due to quantum corrections is thus given by, where Fig. 1 gives a plot of the mass differences between the neutral and the various charged states for all three cases with B − L = 0, 2, 4. 
For a major portion of the parameter space, the masses of the charged components of the quintuplet get a positive contribution from the radiative corrections and hence χ 0 becomes the lightest among the quintuplet fermions 3 . Thus the lightest component of the quintuplet, χ 0 , can be a good candidate for dark matter. The stability of χ 0 is automatically ensured by virtue of its gauge quantum numbers. As χ 0 is part of the quintuplet, it can decay to the SM particles only via interactions with dimension-6 or higher operators, resulting in a decay width suppressed at least by a factor of 1/Λ 2 . Taking the mass of χ 0 to be at the TeV scale, the decay width via the dimension-6 operator is of the order of M 3 /Λ 2 . This corresponds to a lifetime greater than the age of the universe for Λ ≳ 10 14 GeV.
Relic Density
The dark matter relic density as a function of the DM mass for the B − L = 4 and B − L = 0 cases is given in Fig. 2. We have varied the DM mass from 100 GeV to 9 TeV and plotted the relic density for three values of λ corresponding to λ = 0.1, 1.0, 2.0 and two fixed values of the scalar masses of 500 GeV and 1.5 TeV respectively, while keeping α 3 µ 3 = v EW . The other important numbers required to fully understand the plots are M W R = 6 TeV and M Z R = 7.14 TeV. Considering first the B − L = 4 case, it is easy to see that there are five dips in the plot, with three of them being very sharp while the two others are shallower. These are the regions where a sudden enhancement in the cross-section of either the annihilation of two DM particles or the co-annihilation of a DM with a singly charged χ ± gives rise to a sudden decrease in the relic density. The three regions with sharp fall-off correspond to three s-channel processes while the two flatter ones correspond to two t-channel processes 4 . The first dip at M DM = M scalar /2 corresponds to the s-channel process where two DM particles annihilate through an H 1 into SM particles. This process reaches its resonance at a DM mass of half of the scalar mass and the sharp fall is because of the s-channel process. Careful analysis of the plots will show that the dip in the left plot is actually deeper than in the right one. This is because the total decay width of the 500 GeV scalar particle is smaller than in the 1.5 TeV case, resulting in a larger resonant cross-section and hence a deeper valley in the left plot. The second dip is at M DM = M H 1 and corresponds to the t-channel process of two DM particles annihilating into two H 1 bosons. An interesting thing to note is that at larger values of λ the relic density can easily satisfy the experimentally observed value, while for smaller λ values the relic density is greater than the experimental limits. This is because the annihilation cross-section σ χ 0 χ 0 →h 1 h 1 ∼ λ 4 , and only large values of λ lead to a large enough annihilation rate for the required decrease in relic density. If we take λ → 0 then this dip will disappear altogether. Another consequence of being a t-channel process is that in the limit t → 0 the cross-section σ ∼ M −2 H 1 and hence the larger (smaller) the scalar mass, the smaller (larger) the cross-section. This is the reason why the plot on the right with M H 1 = 1.5 TeV has a much smaller decrease in the relic density at this point compared to the left plot with M H 1 = 500 GeV.
The third fall-off corresponds to co-annihilation of the DM particle through a W ± (χ 0 χ ± → W ± R → SM ) resonance while the fourth is DM annihilation through a Z R resonance (χ 0 χ 0 → Z R → SM ).It is easy to see that the dips are exactly at M W ± R /2 and M Z R /2 respectively.The sharp fall-off again is an indication that both are s-channel processes. The fifth dip corresponds to t-channel annihilation process of χ 0 χ 0 → Z R H 1 and is exactly at a DM mass equal to half of the combined masses of Z R and H 1 bosons.The annihilation cross-section σ ∼ λ 2 in this case and hence this dip will again disappear in the limit λ → 0. Actually there is another t-channel co-annihilation process corresponding to that is masked by the dip corresponding to the Z R -mediated annihilation channel. The relic density plot for the B−L = 0 case is given in the right panels of Fig. 2.This plot has only four dips, the ones involving Z R are absent here.This is because there is no χ 0 χ 0 Z R vertex as can be obtained from eqn. 2.6 by putting Q χ i = 0 along with Q B−L = 0. Similar to the previous case, the first dip corresponds to the resonance of s-channel annihilation process mediated by H 1 .The larger (smaller) total decay width in the heavier (lighter) H 1 mass case leads to a shallower (deeper) dip like before.The second dip corresponds to the t-channel annihilation process χ 0 χ 0 → H 1 H 1 .The third dip is the s-channel co-annihilation mediated by a W ± R boson.The fourth dip corresponds to the t-channel co-annihilation of χ 0 χ ± → W ± R H 1 which was not visible in the previous case.It is clearly visible here as the Z R boson couplings with the DM particle is absent.This cross-section is again proportional to λ 2 and hence the dip decreases with λ and eventually vanishes as λ → 0. The relic density plot for the B − L = 2 case is given in Fig. 3.Here the DM mass is only taken till around 3 TeV as after that the negatively charged component of the quintuplet becomes the lightest and stable and hence, ruled out.The plot is very similar to the previous ones.The three dips visible here are due to s-channel H 1 mediated annihilation, the t-channel annihilation of two DM particles into two H 1 s and the s-channel W ± R mediated co-annihilation processes respectively.We have included two plots here for a DM mass of 500 GeV for two different values of α 3 µ 3 being v EW and 0.1v EW respectively.It is easy to see that the only difference between the two plots are in width of the s-channel annihilation region.The lower value of α 3 µ 3 leads to a much narrower region with the relic density being satisfied by two points which are very close in DM mass while for the larger coupling there are two quite distinct values of DM mass possible.This is because at the resonance point, a smaller value of α 3 µ 3 will lead to a smaller annihilation cross-section resulting in a narrower and shallower dip in the relic density plot.Similarly for the other two cases (B − L = 0, 4), the effect of this trilinear coupling would only be seen in the scalar mediated annihilation channel as that is the only relevant process involving this coupling. 
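For orientation, the positions at which these dips are expected follow directly from the masses quoted above; a short numerical sketch (all values in GeV, for the two scalar-mass benchmarks):

m_wr, m_zr = 6000.0, 7140.0          # GeV, the W_R and Z_R masses used in the scans
for m_h1 in (500.0, 1500.0):         # GeV, the two scalar-mass benchmarks
    print(m_h1, {
        "s-channel H1 resonance":         m_h1 / 2,
        "t-channel chi0 chi0 -> H1 H1":   m_h1,
        "W_R co-annihilation resonance":  m_wr / 2,
        "Z_R annihilation resonance":     m_zr / 2,
        "t-channel chi0 chi0 -> Z_R H1":  (m_zr + m_h1) / 2,
    })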
The introduction of the scalar singlet S has a huge influence on the allowed dark matter masses which can satisfy the observed relic density. As has been discussed earlier, the majority of the dips in the dark matter relic density plots would disappear in the absence of this singlet scalar. In fact, if S is removed from the spectrum there would be no possible dark matter satisfying the observed relic density for the B − L = 2 case with a W R mass of 2 TeV [3]. The introduction of a singlet even in this constrained parameter space could provide at least two points with the correct DM relic density for a small enough singlet scalar mass. We would thus like to examine what happens if we keep the singlet boson mass as a free parameter while also varying λ. As has been discussed in Sec. 2, the singlet-like Higgs boson mass can be easily changed by just varying the value of µ 2 S ; hence it is quite natural that the singlet mass is not a fixed quantity but a variable in this analysis.
The scatter plots in Fig. 4 represent the allowed points which satisfy the experimentally observed relic density as a function of the DM mass, the H 1 mass and λ. We have just considered a relatively low DM mass benchmark region with 0.1 TeV ≤ M DM ≤ 2.5 TeV. The left plots are for the B − L = 4 case while the right panels are for B − L = 0. The plot for the B − L = 2 case is very similar to the one for B − L = 0 and hence has not been included here. To understand this similarity we need to look at the quintuplet spectrum for each of them. For B − L = 4 there is only one singly charged particle in the quintuplet spectrum, while for both B − L = 0, 2 there are two singly charged quintuplet fermions each. The masses of the charged particles are also quite close, resulting in a very similar behavior for both these cases at least till the point where the neutral fermion is the lightest.
Let us first understand the B − L = 4 case. Looking at the top left plot in Fig. 4 we clearly see that there are three well-defined distinct regions: a narrow straight line with M DM ≈ M H 1 /2, a triangular region bounded from above by a straight line M DM = M H 1 , and a rectangular region around 1.9 TeV ≲ M DM ≲ 2.05 TeV. The narrow straight line corresponds to the s-channel annihilation of two DM particles through a H 1 boson. There are actually two lines here with two points for each H 1 mass. If we look back at Fig. 2 we see that at a DM mass of M H 1 /2 there is a sharp dip with two points satisfying the correct relic density, one before and another after the resonance point. Similarly, for any scalar mass there should be two such points and hence two narrow straight lines. The triangular region is the one corresponding to the t-channel annihilation of two DM particles into two H 1 bosons. A close inspection of this region will show that the upper boundary of the plot is lined by points with λ ∼ 2. These are the points where the DM mass is just equal to the scalar mass and the annihilation is only possible for a very large cross-section due to the phase space suppression. All the underlying points are where M DM > M H 1 . In this region there is a monotonic increase in λ as we move from low to high DM mass (for a fixed M H 1 ). Since this process is t-channel, the annihilation cross-section decreases as the mass difference M DM − M H 1 increases, and a larger value of λ is needed to compensate for this decrease. The value of λ in this region should also increase as we move downward towards a lower M H 1 for a constant DM mass, for the exact same reason.
The rectangular region around 2 TeV is due to the co-annihilation of the DM with a charged particle through a W R boson.This region should be independent of λ but the plot shows something completely different.We see that in the parts of this region overlapping with the other two, only small values of λ are allowed.Actually in the overlapping regions there are two processes contributing to the decrease in relic density and if the W R co-annihilation process has to dominate, the other two processes which are both proportional to some power of λ (λ 4 and λ 2 for the t-and s-channel processes respectively) should be small.Hence the low λ points are only allowed in these regions. The B −L = 2 plot in Fig. 4 is similar in nature to the previous case except there is a new region here which is a line parallel to the Y-axis around a DM mass of 200 GeV.If we look back at Fig. 3 we see that the relic density is initially increasing and crosses the experimentally observed line at around the 200 GeV DM mass.This point is independent of λ and gives rise to the vertical line here.Another important new observation here is that now a part of the triangular region can never satisfy the relic density constraints irrespective of the value of α 3 µ 3 .As has been discussed earlier, moving from low to high masses in the triangular region requires the annihilation cross-section to progressively increase as well.Thus we require a larger value of λ but since we only allow λ ≤ 2 there are some parts which simply cannot produce enough annihilation and the relic density is always higher that the observed limit.This situation is remedied as we move closer to the W R mediated co-annihilation region as now both the t-channel annihilation and the s-channel co-annihilation together can contribute to decrease the relic density to the correct experimental limit.Independently just by increasing the value of λ to include points upto λ < 3 will result in complete disappearance of this empty patch.The relic density constraints can then be satisfied over the entire parameter region considered here. The case with a smaller α 3 µ 3 = 0.1 v EW are plotted in the lower panel of Fig. 4. The only difference compared to the upper panel plots (with α 3 µ 3 = v EW ) is that the narrow straight line here is indeed one line instead of two.This, from our earlier observation, is due to the much narrower dip in the s-channel scalar annihilation region for a smaller value of trilinear coupling. Direct Detection This model can lead to quite significant DM-nucleon scattering cross-section via the Z R -boson exchange diagram, resulting in stringent constraints from DM direct detection experiments.This constraint would be most severe for higher B − L cases while for B − L = 0, the χ 0 − Z R interaction itself is absent resulting in no significant bounds in this case.We thus study the case of maximal B − L(= 4) where the DM-nucleon scattering, mediated by Z R , would be suppressed by 1/M 4 Z R .The left panel of Fig. 5 gives the scattering cross-section of χ 0 -proton and the χ 0 -neutron as a function of m Z R for two different values of g R /g L .A smaller value of g R /g L leads to a larger cross-section and hence requires a larger Z R mass to evade the direct detection limits.This fact can be easily seen in the right panel of Fig. 
5 where we have plotted the direct detection bound from LUX [34] in the m DM -m Z R plane.The shaded region is consistent with the LUX data and hence a Z R mass greater than 7 TeV is safe for a DM mass above 100 GeV for g R /g L = 1 as has been chosen throughout this paper. Phenomenology of the Singlet Scalar Although the singlet scalar was introduced to satisfy the dark matter relic density for almost any DM mass compared to only a few points in its absence, it gives rise to interesting signatures at the collider experiments.before going into the details of production cross-section and collider signature it is important to study the decays of singlet scalar (H 1 ).At tree level, H 1 couples with a pair of SM Higgs bosons or with a pair of quintuplet fermions.Therefore, if kinematically allowed, H 1 dominantly decays into a pair of Higgses of a pair of quintuplet fermions.H 1 also has loop induced couplings with a pair of photons, Z-bosons and photon-Z pair.The coupling of both photon and Z-boson with the quintuplet fermions being proportional to the electric charge of the fermions (see Eq. 2.6), the loop induced decays can be quite significant as it involves the multi-charged quintuplet fermions running in the loop.In particular, the diphoton decay could be as significant as other decay modes in certain parts of the parameter space.The loop induced interactions (in particular interaction with a pair of photons) play the most crucial role in the production and phenomenology of H 1 in the context of hadron collider experiments.Since the B − L = 4 quintuplet would contain fermions of highest charge multiplicity, we will only consider this case for our analysis in this section as it will lead to the strongest constraints on the model.The decay width of the singlet scalar where χ i ⊂ {χ ++++ , χ +++ , χ ++ , χ + and χ 0 }, M χ i and Q χ i are the mass and charge of the corresponding χ i respectively.The loop function x for x ≤ 1.In Fig. 6 we have plotted the scalar decay branching ratios as a function of the singlet mass for a fixed value of DM mass, λ and for two different values of α 3 .The left panel corresponds to α 3 = 1.0 while the right panel is for α 3 = 0.1.For M H 1 < 250 GeV, the di-Higgs decay is kinematically forbidden and hence, the only possible decay modes are the loop induced decays into a pair of SM gauge bosons.The decay into a pair of W ± -bosons are highly suppressed by the small W L -W R mixing and hence, not shown in Fig. 6.The di-Higgs decay mode becomes kinematically allowed for M H 1 > 250 GeV for a 125 GeV SM Higgs.In this region of the parameter space, the branching ratios depends on two parameters, namely the Yukawa coupling λ (which determines the strength of the loop induced diboson-singlet scalar interactions) and α 3 (which determines the di-Higgs decay width) in the scalar potential.We clearly see that as we decrease the value of α 3 , the diphoton branching ratio increases compared to the di-Higgs.A similar phenomenon will also take place if one increases the value of λ keeping α 3 constant.Once the quintuplet decay channel opens up, the entire decay is almost into the quintuplet fermions with all other channels completely disappearing.It is important to notice the enhancement of the loop induced decay branching ratios into two gauge bosons at around 400 GeV which is the threshold of quintuplet on-shell contribution in the loop. 
The singlet scalar has a loop-induced coupling with a pair of photons. The production of H 1 at the LHC proceeds through the photon-fusion process and hence is suppressed by the small parton density of the photon inside a proton. In fact, the parton density of the photon is so small that most of the older versions of PDFs do not include the photon as a parton. However, photo-production is the only way to produce H 1 at the LHC. Moreover, if we want to include QED corrections to the PDF, the inclusion of the photon as a parton with an associated parton distribution function is necessary. And in the era of precision physics at the LHC, when PDFs are determined up to NNLO in QCD, NLO QED corrections are important (since α 2 s is of the same order of magnitude as α) for the consistency of calculations. In view of these facts, NNPDF [35,36], MRST [37] and CTEQ [38] have already included the photon PDF into their PDF sets. However, different groups used different approaches for modeling the photon PDF. For example, the MRST group used a parametrization for the photon PDF based on radiation off of primordial up and down quarks, with the photon radiation cut off at low scales by constituent or current quark masses. The CT14QED variant of this approach constrains the effective mass scale using ep → eγ + X data, sensitive to the photon in a limited momentum range through the reaction eγ → eγ. The NNPDF group used a more general photon parametrization, which was then constrained by high-energy W, Z and Drell-Yan data at the LHC. For computing the photon luminosity at the pp collision with 13 TeV center-of-mass energy, we have used the NNPDF23_lo_as_0130 PDF with the factorization scale chosen to be fixed at M H 1 . The resonant photo-production cross-section of H 1 at the LHC can be computed from its di-photon decay width and the LHC photon-photon luminosity evaluated at τ = M 2 H 1 /s, where f γ (x) is the photon PDF and s is the pp center-of-mass energy. The production cross-section for the heavy singlet is shown in Fig. 7 as a function of the singlet mass for a few different DM masses. The DM mass is important here because the masses of the other quintuplet fermions are determined by the DM mass and radiative corrections. A larger DM mass implies that the masses of the charged states running in the photon-fusion loop are also larger and hence a smaller cross-section for singlet production. It is important to notice the bump around M H 1 ∼ 2M χ 4+ due to the threshold enhancement of the diphoton decay width.
Collider Bounds
After production, the singlet scalar dominantly decays into a pair of photons or Higgs bosons as long as the decays into a pair of quintuplet fermions are kinematically forbidden. Therefore, the production and decay of H 1 give rise to interesting resonant diphoton and/or di-Higgs signatures at the LHC. The ATLAS and CMS collaborations of the LHC experiment have already searched for any new physics signatures in the diphoton [39] and di-Higgs [40] invariant mass distributions. In the absence of any significant deviation from the SM prediction, limits are imposed on the production cross-section times branching ratio of a resonance giving rise to the above-mentioned signatures. These limits could lead to significant bounds on the DM RD allowed scalar mass (for example, see Fig. 4).
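A minimal numerical sketch of the resonant photo-production estimate discussed above, in the narrow-width approximation; the placeholder photon PDF and the assumed normalization of the partonic cross-section are for illustration only and are not the exact expression or PDF interface used in the paper.

import numpy as np
from scipy.integrate import quad

GEV2_TO_PB = 3.894e8          # 1 GeV^-2 in pb

def photon_pdf(x, q):
    # Placeholder photon PDF f_gamma(x, Q); replace with a real set for an actual estimate.
    return 0.01 * (1.0 - x) ** 4 / x

def sigma_photoproduction(m_h1, gamma_aa, sqrt_s=13e3):
    # Narrow-width photo-production: sigma = 8*pi^2*Gamma_aa/(M*s) * L_gg(tau),
    # with L_gg(tau) = int_tau^1 dx/x f_gamma(x) f_gamma(tau/x); masses and energies in GeV.
    s = sqrt_s ** 2
    tau = m_h1 ** 2 / s
    lum, _ = quad(lambda x: photon_pdf(x, m_h1) * photon_pdf(tau / x, m_h1) / x, tau, 1.0)
    return 8.0 * np.pi ** 2 * gamma_aa / (m_h1 * s) * lum * GEV2_TO_PB

print(sigma_photoproduction(m_h1=500.0, gamma_aa=1e-3))   # illustrative numbers only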
In our analysis we found that the diphoton bound is a lot more severe, especially for the α 3 = 0.1 case. In Fig. 8 we have plotted the H 1 production cross-section times the diphoton branching ratio (σ pp→H 1 × BR(H 1 → γγ)) and compared it with the ATLAS observed limit [39]. The diphoton production cross-section is plotted for two different values of the DM mass of 200 GeV and 600 GeV for fixed values of λ = 1.0 and α 3 = 0.1. For the 200 GeV DM mass, any scalar mass below 430 GeV is ruled out from the experiments. For the 600 GeV DM mass, a small region from 630 GeV to 680 GeV along with 730 GeV to 1220 GeV scalar masses would be ruled out. It is interesting to note that larger values of the scalar masses are excluded for the higher DM mass while the smaller values of M H 1 remain allowed. This is a consequence of the fact that a larger DM mass corresponds to a smaller H 1 production cross-section and hence the diphoton signal cross-sections in the smaller M H 1 region are smaller than the ATLAS bound. On the other hand, a larger DM mass also corresponds to a threshold enhancement of the diphoton decay width and hence of the diphoton signal rate at larger M H 1 . As a result, some part of the M H 1 region around M H 1 ∼ 2M χ 0 is excluded for M χ 0 ∼ 600 GeV.
Experimental bounds on the resonant di-Higgs [40] and diphoton [39] signal cross-sections have a significant impact on the DM allowed regions given in Fig. 4. We have scanned the DM allowed points in Fig. 4 to check the consistency of those points with the ATLAS di-Higgs and diphoton searches and the results are presented in Fig. 9. The pink points in the plots are ruled out from the diphoton search while the black points are ruled out by the di-Higgs searches. As one would expect, the di-Higgs bounds are only applicable for the case with α 3 = 1.0, as the two-Higgs final state scalar decay branching ratio is quite large in this case. The diphoton bounds are much stronger for the lower α 3 , as the diphoton decay branching ratio is much larger here. Even though a part of the parameter space is ruled out, a large part of it still remains which can explain all the observations from both the collider and dark matter experiments.
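A sketch of the exclusion test applied to the scanned points: interpolate the observed upper limit on σ × BR as a function of the resonance mass and flag points above it. The limit values below are placeholders, not the published ATLAS numbers.

import numpy as np

# Placeholder observed 95% CL upper limits on sigma x BR(gamma gamma) [fb]
# versus resonance mass [GeV]; substitute the published ATLAS values.
limit_mass = np.array([300.0, 500.0, 800.0, 1200.0, 1600.0])
limit_xsec = np.array([ 12.0,   6.0,   2.5,    1.2,    0.6])

def excluded(m_h1, sigma_br):
    # True for points whose predicted sigma x BR exceeds the interpolated limit.
    lim = np.interp(m_h1, limit_mass, limit_xsec)
    return sigma_br > lim

# Example: hypothetical model points (M_H1 [GeV], sigma x BR [fb]) from the scan
points = np.array([[400.0, 10.0], [700.0, 1.0], [1000.0, 2.5]])
print(excluded(points[:, 0], points[:, 1]))   # -> [ True False  True]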
Summary and Conclusions To summarize, we have performed the dark matter and collider phenomenology of a leftright symmetric (SU (3) C × SU (2) L × SU (2) R × U (1) B−L gauge symmetry) model with an additional SU (2) R quintuplet fermion and a singlet scalar.The motivation for introducing the quintuplet fermion is to obtain a viable candidate for cold dark matter.The neutral component of the quintuplet fermion, being weakly interacting and stable (if lightest among the other components of the quintuplet), could be a good candidate for dark matter.The dark matter, in this model, can interact with ordinary matter via the exchange of a SU (2) R gauge boson (in particular, Z R ).The bounds on the dark matter-nucleon scattering cross-sections from the direct dark matter detection experiments such as LUX exclude M Z R below few TeV for a sub-TeV dark matter.Moreover, the gauge interactions of the neutral quintuplet fermion with massive (> few TeV) SU (2) R gauge bosons result into small annihilation and co-annihilation cross-sections and thus, predict relic density which is much larger than the observed WMAP/PLANCK results.The observed relic density can only be satisfied for few discrete values of the dark matter mass near W R /Z R resonance region (near M W R /2 and M Z R /2).Therefore, in the framework of left-right symmetry with a quintuplet dark matter candidate, sub-TeV dark matter masses are ruled out from the direct detection experiments and relic density constraints.Moreover, an experimentally consistent dark matter candidate in the range of few TeV is only possible for B − L = 4 case.To resolve these issues we introduce a singlet scalar in the above mentioned framework.The Yukawa coupling of the singlet scalar with the quintuplet fermion gives rise to a host of new annihilation channels for the Dark matter.We perform in detail the dark matter phenomenology in this singlet scalar extended scenario.We show that the WMAP/PLANCK measured dark matter relic density can be satisfied over a large range of dark matter masses including sub-TeV range.Moreover, the neutral component of the quintuplet fermion with B − L = 2 and 0 cases can give rise to an experimentally consistent candidate for dark matter as long as they are the lightest member of the quintuplet. 
We also study the collider signatures of the singlet scalar in detail. Being a singlet, it has no tree-level interactions with the SM gauge bosons and no Yukawa interactions with the ordinary leptons and quarks. However, the Yukawa interaction with the quintuplet fermion is allowed by the gauge symmetry. The interactions of the singlet scalar with a pair of EW gauge bosons arise from loop-induced higher-dimensional operators. On the other hand, the scalar potential contains the interactions involving the singlet scalar and a pair of SM Higgs bosons. This enables us to study the loop-induced (γγ, ZZ and Zγ) as well as tree-level decays (hh and a pair of quintuplet fermions) of the singlet scalar. We find that as long as the decay of the scalar to a pair of quintuplet fermions is kinematically forbidden, it only decays to a pair of Higgs bosons or a pair of photons with significant branching ratios. In view of this, we study the photo-production (photon-photon fusion process) of the singlet scalar at the LHC with 13 TeV centre-of-mass energy. The photo-production of the singlet scalar and its subsequent decay give rise to interesting resonant diphoton and di-Higgs signatures at the LHC. New physics contributions to the resonant diphoton and di-Higgs productions have already been studied in detail by the ATLAS and CMS Collaborations. We use the most recent ATLAS bounds on the resonant di-Higgs and diphoton cross-sections to show that some part of the dark matter relic density allowed parameter space in our model could be ruled out. It is worthwhile to mention that a significant part of the parameter space in our model is still consistent with the dark matter direct detection constraints, the WMAP/PLANCK results, as well as the LHC bounds. Future LHC data will be able to probe this part of the parameter space which is still allowed. Therefore, the singlet scalar extended quintuplet MDM left-right symmetric model will be able to explain any future LHC excesses either in the resonant di-Higgs or diphoton channels. On the other hand, the absence of any such excesses could lead to more stringent bounds on the parameter space of our model.

Figure 1: Mass difference between various charged states as a function of the neutral state mass for different B−L cases for M_WR = 6 TeV and M_ZR = 7.14 TeV. Note that the y-axis ranges are different in all the panels.
Figure 2: Relic density as a function of the dark matter mass. The left panel is for the B−L = 4 case, whereas the right panel is for the B−L = 0 case. The upper (lower) panels are for the scalar mass 0.5 TeV (1.5 TeV). In all the panels, we take M_WR = 6 TeV, M_ZR = 7.14 TeV, α_3µ_3 = v_EW.
Figure 3: Relic density as a function of the dark matter mass for the B−L = 2 case with M_WR = 6 TeV, M_ZR = 7.14 TeV.

Let us first understand the B − L = 4 case. Looking at the top left plot in Fig. 4, we clearly see that there are three well-defined distinct regions: a narrow straight line with M_DM ≈ M_H1/2, a triangular region bounded from above by the straight line M_DM = M_H1, and a rectangular region around 1.9 TeV ≲ M_DM ≲ 2.05 TeV. The narrow straight line corresponds to the s-channel annihilation of two DM particles through an H_1 boson. There are actually two lines here, with two points for each H_1 mass. If we look back at Fig. 2, we see that near the resonance the relic density dips and crosses the observed value at two dark matter masses, one on either side of M_H1/2, which accounts for the two points for each H_1 mass.

Figure 4: Scatter plots in the m_DM–m_scalar plane showing the allowed parameter space satisfying the relic density. We vary λ in the range 0 to 2 in each panel. The left panel is for the B−L = 4 case, whereas the right panel is for the B−L = 0 case. The upper (lower) panels are for α_3µ_3 = v_EW (0.1 v_EW). In all the panels, we take M_WR = 6 TeV, M_ZR = 7.14 TeV.
Figure 5: χ0–proton and χ0–neutron scattering cross-sections are shown as a function of m_ZR considering two different values of g_R/g_L = 0.6 and 1.0 in the left panel. The top right panel depicts the coloured region in the m_DM–m_ZR plane which satisfies the LUX [34] upper bound on the DM–nucleon scattering cross-section for g_R/g_L = 0.6. We show the same for g_R/g_L = 1.0 in the bottom right panel.
Figure 6: The decay branching ratios of the singlet scalar (H_1) as a function of its mass. The left (right) panel is for α_3µ_3 = v_EW (0.1 v_EW). In both panels we consider the B−L = 4 case with λ = 1.0 and m_χ0 = 200 GeV.
Figure 7: The production cross-section of the singlet scalar (H_1) as a function of its mass for various choices of the DM mass (m_χ0). The curves are shown for the B−L = 4 case with λ = 0.5.
Figure 8: The diphoton production cross-section is shown as a function of the singlet scalar mass for two different choices of the DM mass. We draw these curves for the B−L = 4 case with α_3µ_3 = 0.1 v_EW and λ = 1.0. For comparison, the solid black line shows the ATLAS bound [39].
Figure 9: The allowed parameter space in the m_DM–m_scalar plane satisfying the relic density for the B−L = 4 case. The left (right) panel is for α_3µ_3 = v_EW (0.1 v_EW). In both panels we vary λ in the range 0 to 2. The pink (black) points show the regions excluded by the diphoton [39] (di-Higgs [40]) search at the LHC.

Y. The heavy gauge boson masses are thus naturally generated at this scale. The electroweak (EW) symmetry breaking and the fermion masses and mixings, on the other hand, are generated by the neutral components of the Φ field once they acquire a non-zero VEV
11,302.2
2018-03-05T00:00:00.000
[ "Physics" ]
Authentication of Satellite-Based Augmentation Systems with Over-the-Air Rekeying Schemes Here we delineate a complete satellite-based augmentation system (SBAS) authentication scheme, including over-the-air rekeying (OTAR), that uses the elliptic curve digital signature algorithm (ECDSA) and timed efficient stream loss-tolerant authentication (TESLA) without the quadrature (Q) channel. This scheme appends two new message types to the SBAS scheduler without over-burdening the message schedule. We have taken special care to ensure that our scheme (1) meets the appropriate security requirements needed to prevent and deter spoofing; (2) is compatible with existing cryptographic standards; (3) is flexible, expandable, and future-proof to different cryptographic and implementation schemes; and (4) is backward compatible with legacy receivers. The scheme accommodates a diverse set of features, including authenticating core-constellation ephemerides. We discuss the SBAS provider and receiver machine state and its startup, including its use by aircraft that traverse differing SBAS coverage areas. We tested our scheme with existing SBAS simulation and analysis tools and found that it had negligible effects on current SBAS availability and continuity requirements. SBASs, such as the wide-area augmentation system (WAAS) used in the United States, among other international equivalents, have become integral to the global navigation satellite system (GNSS) used in civilian aviation.International parties that choose to implement an SBAS (each is known as a Provider) use listening stations around their service volume to assess GNSS satellite positioning data and broadcast corrections widely via geostationary satellites.This information includes wide-area differential GNSS corrections and GNSS satellite information such as its health and integrity.Similar to most GNSS core-constellation signals, the SBAS signal is open and susceptible to spoofing.Given its ubiquitous use in civilian aviation, SBAS should be augmented with spoofing-resistant capabilities to ensure ongoing civilian aviation safety.As Providers agree to share a common SBAS message standard, our work seeks to specify how SBASs can be augmented to provide authenticated service that is resistant to spoofing. 
SBAS is primarily a data service.It broadcasts data that assists GNSS users.Therefore, appending additional cryptographic data to the SBAS data would be a natural way to authenticate SBAS data for civilian users.Additional SBAS broadcast messages could deliver cryptographic signatures and key values to its users.Using the mathematical primitives underlying cryptographic authentication methods, users could assert that only a Provider was capable of generating a given set of SBAS data as well as the accompanying authenticating pseudorandom data.In this work, we refer to the authenticating data as "signatures".Signatures, together with the associated key data, are either "authenticating pseudorandom data" or "OTAR Segment", which are terms that refer to the cryptographic pseudorandom data itself or the chunks separated for transmission to a receiver, respectively.The term "authenticating pseudorandom data" is used because the data are not human-readable nor are they predictable without the use of private secrets.The security of the authenticating pseudorandom data assumes that (1) the Provider is the exclusive holder of certain secret identifying information and (2) there are no known efficient algorithms that can generate the authenticating pseudorandom data without the secret identifying information.If the identifying information (e.g., keys) is leaked, that information is then compromised and must be revoked.If an efficient algorithm is discovered, the relevant cryptographic primitives are known as broken and must be replaced. The use of cryptographic authentication methods poses challenges to SBASs.The main challenge relates to the delivery of authenticating pseudorandom data via SBAS given current data-bandwidth constraints.Because SBAS is an open signal, secure SBAS authentication must rely on asymmetric cryptographic algorithms, for example, the elliptic curve digital signature algorithm (ECDSA).In this paper, we use the term ECDSA to include other, similar asymmetric cryptographic algorithms (e.g., EC-Schnorr).However, we will specify ECDSA without losing generality for concreteness, noting that certain parameters and characteristic security strengths listed here would be different if we were not using the ECDSA.A single ECDSA signature requires 512 bits to achieve the standard 128-bit security level, which dwarfs the 216 data bits permitted per SBAS message. 
Some prior art has suggested the use of the quadrature (Q) channel to deliver authenticating pseudorandom data (Fernandez-Hernandez et al., 2021; Neish, Walter, & Powell, 2019); however, those solutions would require power currently used by the in-phase (I) channel. Use of the Q channel would strain the availability and continuity of SBAS systems at coverage area boundaries and would thus be undesirable to SBAS stakeholders. Other prior art suggested the use of a combination of ECDSA with another algorithm known as timed efficient stream loss-tolerant authentication (TESLA) (Fernández-Hernández et al., 2016; Neish, 2020; Various, 2021). This combination makes more efficient use of authenticating pseudorandom data and is loss-tolerant. TESLA uses a delayed-release mechanism to authenticate data and requires less authenticating pseudorandom data than ECDSA. However, TESLA requires the Provider and the user to be loosely time-synchronized (Perrig et al., 2005). SBAS cannot use TESLA exclusively; TESLA must be used in tandem with ECDSA to achieve authentication security. In this work, we establish the following relationship within the proposed TESLA-ECDSA scheme: TESLA authenticates the SBAS messages, and ECDSA authenticates the SBAS's use of TESLA for periodic maintenance. While prior art has identified the TESLA and ECDSA scheme parameters required to achieve authentication, the maintenance scheme and its requirements, such as how best to perform OTAR, remain largely unaddressed. This work addresses this challenge by suggesting a more efficient authentication maintenance scheme that does not require use of the Q channel. Moreover, this work leverages specific features of the TESLA scheme to assert security efficiently (Caparra et al., 2016) and to permit the introduction of additional features relevant to SBAS stakeholders.

Another challenge lies with receiver computational considerations. Some prior art has explored how TESLA and ECDSA computations would fare when used in GNSS, SBAS, and smartphone contexts (Cancela et al., 2019). TESLA typically requires a more intense, one-time startup hashing computation upon receiver start, followed by minimal hashing operations during standard operation. We note that modern commodity electronics frequently perform these operations and often include hardware-specific acceleration functions in their chips to increase computational efficiency and facilitate parallelism. Therefore, we expect that if these methods burden current receivers, manufacturers could augment their chips at minimal cost to accommodate the desired computational loads.
Prior art suggested appending a single message type (MT) to SBAS for authentication and maintenance (Neish, 2020).Findings from this work suggest that SBAS might send this specific MT every six messages to deliver 190-bits of TESLA authentication data and 26-bits for scheme maintenance.While the scheme requires frequent delivery of the authentication-message, it will not overburden the SBAS MT schedule.We modified this earlier work by appending another MT to SBAS to replace the 26-bits mentioned above that was to be dedicated to maintenance.Our proposed additional MT was designed to be modular for the exclusive purpose of delivering all authenticating pseudorandom data related to scheme maintenance.This design allows for reasonable TFAF requirements and is agnostic to the ECDSA and TESLA scheme parameters.Therefore, minimal changes will be needed in the event of cryptographic primitive breakage.Moreover, the additional MT scales well with increases to security level requirements.It is also flexible and future-proof to accommodate anticipated feedback from Providers and SBAS stakeholders. To evaluate the proposed method against prior art, we implement a full-stack SBAS simulation of our design by augmenting an existing SBAS simulation tool known as the Matlab Algorithm Availability Simulation Tool (MAAST) (Jan et al., 2001).MAAST was used previously to evaluate SBAS design and provides the results of a Monte Carlo simulation performed to evaluate how our design performs under message loss over the WAAS coverage area.Results using this tool revealed that our proposed design outperforms key performance indicators (KPIs), such as a shorter TFAF, a shorter time to authentication per message, and less sensitivity to loss tolerance, compared to other ideas currently under consideration. When using TESLA, the Provider and the Users must be loosely timesynchronized.This poses an interesting "Catch-22" situation because the GNSS provides the time function.Therefore, our scheme must also include a mechanism to resist a Replay Attack.A Replay Attack describes a situation in which a spoofer can listen and then replay messages at a slightly delayed rate.After some time, these induced time delays will allow the spoofer to violate the loose time-synchronized assumption and thus break the TESLA scheme.There are mechanisms described in prior art that receivers can use to establish trust in GNSS ranging signals (Fernandez-Hernandez et al., 2019;Psiaki & Humphreys, 2016); however, GNSS ranging signals have not yet been rigorously authenticated with cryptography.Prior art has also described how an onboard receiver clock might be used to detect this type of attack, given clock uncertainty (Fernandez-Hernandez et al., 2020).Other studies have investigated clock hardware and models that could be incorporated into receivers to enforce the synchronization assumption (Ardizzon et al., 2022).This work extends this prior art by discussing how onboard clocks, external clocks, and maintenance conditions can be used to mitigate the threat of Replay Attacks. 
Elliptic Curve Digital Signature Algorithm (ECDSA)
ECDSA is a standardized asymmetric authentication protocol. As we have not included all details in the following summary, we refer the reader to several widely-available detailed definitions in the cryptographic literature or on the Internet (Boneh & Shoup, 2017). The protocol specifies a signing function and a verifying function. For this protocol, let n be the security level of an instance of the protocol, which linearly describes the required computation that will exhaustively break the instance. The Provider generates a secure random 2n-bit integer for long-term use as a secret private key. The Provider then derives a 2n-bit integer from the private key and distributes it to receivers as a public key. The Provider then uses the signing function with the secret private key to derive signatures on messages. Each signature is a 2-tuple of 2n-bit integers, for a total of 4n bits. Receivers use the verifying function with the public key and signatures to assert that the private key holder generated the message and the signature. Because the protocol is secure, there is no known efficient algorithm that can compute the private key given the public key, nor any that can compute the signature on a message without the private key. The protocol assumes that the receiver trusts that the public key is from the Provider. The protocol is not loss-tolerant; if a single bit of a signature or message is lost, the protocol will fail to provide verification. Likewise, this protocol is not future-proof against attacks that might be conducted with theoretical quantum computers, as described by Neish, Walter, & Enge (2019).

Timed Efficient Stream Loss-Tolerant Authentication (TESLA)
TESLA is an authentication protocol that allows a receiver to authenticate messages from a Provider when used in tandem with other asymmetric authentication protocols. This protocol poses several relevant advantages over the purely asymmetric systems used by GNSS and SBAS. First, the protocol requires less authenticating pseudorandom data to authenticate messages from a Provider. Second, the protocol is loss-tolerant. Third, the computation required for receiver authentication is less strenuous. Figure 1 presents a conceptual diagram of the TESLA description to follow. Algorithms 1 and 2 provide a more concrete description of the protocol.

TESLA relies on only a single cryptographic primitive: a cryptographically-secure hash function. In this proposal, we select a salted SHA-256 that has been truncated to the left-most 128 bits (described here as the "Hash Function" or H(·)). Thus, we can describe the TESLA protocol concretely based on this selection without loss of generality. We truncate the Hash Function output to its 128 most significant bits to generate 128-bit integers. Because the audience for this work primarily includes experts in navigation, we use the geometric terms "path" and "point" instead of "key chain" and "key", and describe TESLA geometrically as a "one-way path". For this case, let each 128-bit integer be identified as a Hash Point, and let a collection of Hash Points that are consecutively related via the Hash Function be defined as a Hash Path. Non-consecutive Hash Points further along the Hash Path will relate via repeated application of the Hash Function.
(A note from Algorithm 2: the verification procedure can be augmented to apply to cached messages saved while awaiting a complete Authentication Stack.) Given the security of the Hash Function, there is no efficient algorithm for finding an input Hash Point that yields a specific output Hash Point. In this manuscript, we refer to a specific input Hash Point as the "preimage" Hash Point of a specific output Hash Point. In other words, while it is trivially easy to compute the output Hash Point of the Hash Function given the preimage Hash Point, one will only be able to locate a preimage Hash Point after an exhaustive search. This is a one-way path. The domain of 128-bit integers, together with the inclusion of a randomized 128-bit salt in the Hash Function, renders pre-computation attacks (also known as Rainbow Table Attacks) infeasible with modern supercomputers because the construction meets 128-bit security. Our use of the term Hash Point also serves to avoid confusion with the overuse of the term "key" in TESLA and ECDSA applications. Private and public keys relate to ECDSA. Many keys will be derived from TESLA Hash Points to achieve the required authentication data-bandwidth efficiency.

To use a Hash Path to authenticate messages via TESLA, the Provider computes a Hash Path derived from a random starting Hash Point. The Hash Path must remain secret. The Provider broadcasts the final Hash Point along the Hash Path (i.e., the "Hash Path End") together with an ECDSA signature derived therefrom. Receivers recognize the Hash Path End as authenticated based on the ECDSA signature. The Provider uses the secret preimage Hash Point of the Hash Path End to derive hash-based message authentication code (HMAC) keys and sends symmetric authentication signatures along with the standard message set. We propose using keyed-hash message authentication codes that use the Hash Function as their primitive for message signatures (i.e., the function "HMAC" and the HMAC signatures themselves, which are known as "HMACs"). We will continue to describe the protocol concretely using our selection without loss of generality. We truncate the HMACs to their left-most bits so that they will fit into SBAS messages. Providers and receivers agree on a schedule in which the Provider will (1) stop using the preimage Hash Point of the Hash Path End to authenticate messages, (2) broadcast that preimage Hash Point for receivers to authenticate messages, and (3) use the next preimage Hash Point along the Hash Path to authenticate new messages. Once a particular preimage Hash Point has been broadcast, receivers cannot accept new signatures derived therefrom. Because the HMACs were received when a specific preimage Hash Point was known only to the Provider, it is understood that the Provider must have generated the messages. Each time the Provider releases a Hash Point, the Provider moves back one Hash Point along the secret Hash Path to derive new HMAC keys. Given the security of the Hash Function, the Hash Point along the Hash Path located just before the released Hash Point remains secret, and thus becomes the new HMAC key for the next set of messages. The authentication security along the Hash Path hinges on (1) the security of the Hash Function and (2) the loose time-synchronization of the Provider and the receivers.
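The one-way-path and delayed-release ideas above can be sketched in a few lines of Python using the 128-bit truncated SHA-256 chosen in this proposal. The sketch below omits the time counter of Equation (1) for brevity, and the function and variable names are ours rather than part of the proposed standard.

```python
import hashlib
import os

def hash_point(data: bytes) -> bytes:
    """Truncated Hash Function H(.): left-most 128 bits of SHA-256."""
    return hashlib.sha256(data).digest()[:16]

def build_hash_path(start: bytes, salt: bytes, length: int) -> list[bytes]:
    """Provider side: hash from a secret random start down to the Hash Path End.
    Points are later released in reverse order of generation."""
    path = [start]
    for _ in range(length):
        path.append(hash_point(path[-1] + salt))
    return path  # path[-1] is the Hash Path End (public, ECDSA-signed)

def verify_released_point(released: bytes, trusted: bytes, salt: bytes, max_steps: int) -> bool:
    """Receiver side: a released point is genuine if repeated hashing reaches an
    already-trusted point (ultimately the ECDSA-signed Hash Path End)."""
    point = released
    for _ in range(max_steps):
        if point == trusted:
            return True
        point = hash_point(point + salt)
    return False

salt = os.urandom(16)
path = build_hash_path(os.urandom(16), salt, length=1000)
print(verify_released_point(path[3], trusted=path[-1], salt=salt, max_steps=1000))  # True
```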
To complete authenticating security, TESLA must be used in tandem with an asymmetric authentication protocol. TESLA is secure along the Hash Path length. However, Hash Paths are finite and must be generated periodically. An asymmetric authentication protocol must sign the Hash Path End, which is the first Hash Point known to the receiver. In other words, for every Hash Path generated by the Provider, the Provider must use an asymmetric signature to ensure authentication security along the entire Hash Path. Moreover, the Provider and the receiver must be loosely time-synchronized. This poses a type of "Catch-22" problem because GNSS and SBAS Providers are the source of time information. This suggests that it might be helpful to avoid using TESLA in any form and to focus only on an asymmetric protocol. Later in this work, we will show that the use of TESLA leads to superior loss-tolerance and requires less authenticating pseudorandom data while accounting for the time-synchronization issues.

DEFINITION OF THE SBAS AUTHENTICATION SCHEME
The SBAS authentication proposal appends two MTs to the schedule, identified here as MT50 and MT51. MT50 is used to authenticate the actual SBAS messages via the TESLA protocol. MT51 is used (1) to authenticate the Hash Path Ends via ECDSA as the Provider uses a series of Hash Paths in its standard operation, and (2) to provide OTAR and perform system-level maintenance of the cryptographic authentication scheme. Figure 2 is a conceptual diagram giving an overview of the scheme and the relationships between MT50, MT51, and the cryptographic authentication methods employed. In Section 2, we present precise definitions of our proposed SBAS authentication scheme. Sections 3 and 4 provide explanations and reasoning for our proposed definitions. While the definitions presented here are for use in SBAS L5 signals, given the spare bits remaining in each definition, this scheme can also be used for SBAS L1 signals by modifying the preambles (noted with reserved bits in the definitions).

ECDSA Key Structure
We propose a two-level ECDSA key structure, as suggested by Neish (2020). Level-1 keys will be 256-bit-security ECDSA keys managed internationally by a trusted Certificate Authority (CA). Each level-1 key will be in use for 100 weeks. The CA will compute a large number of level-1 keys for use in the perpetual future and will encrypt each key individually via AES-128 with a different AES encryption key (one for each level-1 key) maintained as secret by the CA. The CA will distribute the AES-ciphertext to receiver manufacturers. Receivers will be preloaded with the collection of 512-bit public keys, each encrypted via AES-128 with a 128-bit key. As level-1 keys expire, the CA will distribute the keys needed to decrypt the AES-ciphertext, one at a time, for the Provider to distribute via MT51. Once the receiver receives the key to decrypt its onboard AES-ciphertext, it will update its current level-1 ECDSA public key. Each level-1 public key will be 512 bits. Any signature derived therefrom will be 1024 bits.
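To make the asymmetric step of this hierarchy concrete, the following minimal Python sketch (using the third-party `cryptography` package) signs a stand-in level-2 public key with a level-1 private key and verifies it on the receiver side. The library, curve choices, and encodings here are our illustrative assumptions; the proposal itself fixes only the security levels and bit lengths.

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# CA side: a long-term level-1 key pair (P-521 approximates the 256-bit security level).
level1_private = ec.generate_private_key(ec.SECP521R1())
level1_public = level1_private.public_key()  # preloaded (AES-encrypted) in receivers

# Provider side: a new level-2 key pair (P-256 gives the 128-bit security level).
level2_private = ec.generate_private_key(ec.SECP256R1())
level2_public_bytes = level2_private.public_key().public_bytes(
    serialization.Encoding.X962, serialization.PublicFormat.UncompressedPoint)

# The CA signs the level-2 public key (in practice, the full MT51 messages carrying it).
signature = level1_private.sign(level2_public_bytes, ec.ECDSA(hashes.SHA512()))

# Receiver side: verify the new level-2 key against the trusted level-1 public key.
try:
    level1_public.verify(signature, level2_public_bytes, ec.ECDSA(hashes.SHA512()))
    print("level-2 key accepted")
except InvalidSignature:
    print("level-2 key rejected")
```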
Level-2 keys will be 128-bit-security ECDSA keys managed by the Provider. Each level-2 key will be in use for ten weeks. To create a new level-2 key, the Provider will generate a secure-random private ECDSA key and an associated public key. The Provider will then submit the new public key to the CA for a signature from the CA's current level-1 key. The Provider will then distribute the new public key and the associated authenticating signature over the SBAS. The receiver will receive the new level-2 public key and the authenticating signature, verifying the received level-2 public key with the associated decrypted level-1 public key.

The Provider will use the level-2 keys to authenticate the TESLA Hash Path Ends. Keys derived from the TESLA Hash Paths will be used to authenticate the bulk of SBAS messages with HMACs. For all levels, the authenticating pseudorandom data delivered will accompany data (e.g., SBAS message preamble, MTs, and other data) that must be sent as per the definitions described below. A specific signature must be derived from the entire SBAS message used to deliver that particular key. Concretely, when a level-1 key authenticates a level-2 key, the level-1 signature must be derived from the entire set of messages used to deliver the level-2 key and the expiration time of the accompanying key. In other words, the level-1 signature must be derived from the complete messages containing overhead data, not just from the level-2 key itself. If this does not take place, then the accompanying data, most notably the key expiration times, will not be secured by the cryptographic primitives.

TESLA Hash Path and HMAC Keys
Each TESLA Hash Path will be used over one week. The Provider will generate an entire Hash Path before its actual use and then broadcast the Hash Path End, signed by the current level-2 ECDSA key, via MT51. Each Hash Point, except the Hash Path End, will be associated with at least five HMAC keys that will be used to authenticate at least five messages with HMACs, depending on the number of iterations of Equation (2). Therefore, a Hash Path will include 100,801 Hash Points: one for every sixth second of the week, plus one for the Hash Path End.

To generate a Hash Path P, the Provider will derive a secure random 128-bit salt S_P from the level-2 ECDSA authentication, as described in Section 3.2.1. Let the Hash Points of P be denoted p_i^P, and let t_i be the time at which the Provider publicly releases p_i^P via broadcast. Here, t_i is an integer time (e.g., time in seconds since the GPS epoch). We propose Equation (1) to define the Hash Path, where || denotes bit concatenation and ⌊·⌋ denotes integer division:

p_i^P = H( p_{i+1}^P || S_P || ⌊t_i / 6⌋ )    (1)

The purpose of the integer division is explained in Section 3.2.2. We propose a left-most-16-bit truncated HMAC signature, delivered via MT50, that authenticates each SBAS message. Each message m_j, sent at t_j, will be provided with a unique HMAC key k_j. The key k_j for each message m_j will be generated according to Equation (2), and the signature s_j derived therefrom will be generated according to Equation (3):

k_j = HMAC( p_i^P , t_j || PRN || Frequency )    (2)

s_j = HMAC( k_j , m_j )    (3)

Here, t_j is the integer time at which the authenticated message will be broadcast and received, PRN is the pseudorandom code associated with the broadcasting geostationary satellite, and Frequency is the frequency band of the particular transmission (e.g., a string containing L1 or L5). We discuss the necessity of the concatenation and HMAC operations of Equation (2) in Sections 3.2 and 3.3. Equation (2) ensures that the HMAC for each message from each satellite has its own key.
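A minimal Python sketch of Equations (1)-(3) is given below, assuming the salted, 128-bit-truncated SHA-256 Hash Function and the HMAC-SHA-256 primitive selected in this proposal. The byte widths used to encode the time, PRN, and frequency fields are our own illustrative assumptions, as is the placeholder Hash Point.

```python
import hashlib
import hmac

def H(data: bytes) -> bytes:
    """Hash Function: left-most 128 bits of SHA-256."""
    return hashlib.sha256(data).digest()[:16]

def next_hash_point(p_next: bytes, salt: bytes, t_i: int) -> bytes:
    """Equation (1): p_i = H(p_{i+1} || S_P || floor(t_i / 6))."""
    return H(p_next + salt + (t_i // 6).to_bytes(8, "big"))

def hmac_key(p_i: bytes, t_j: int, prn: int, frequency: str) -> bytes:
    """Equation (2): a cryptographically independent key per message, satellite, and band."""
    context = t_j.to_bytes(8, "big") + prn.to_bytes(2, "big") + frequency.encode()
    return hmac.new(p_i, context, hashlib.sha256).digest()[:16]

def message_signature(k_j: bytes, m_j: bytes) -> bytes:
    """Equation (3): 16-bit (2-byte) truncated HMAC carried in MT50."""
    return hmac.new(k_j, m_j, hashlib.sha256).digest()[:2]

# Example: sign one SBAS message for PRN 133 on L5 at integer GPS time t_j.
p_i = bytes(16)  # placeholder for a secret Hash Point known only to the Provider
k_j = hmac_key(p_i, t_j=1234567890, prn=133, frequency="L5")
print(message_signature(k_j, b"example 216-bit SBAS message payload").hex())
```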
The output of Equation (3) is truncated to its 16 most-significant bits. We note that Equation (1) can use a simple concatenation operation because the input data are less than 512 bits, the block size of the selected Hash Function. Equation (1) would need to be modified if the input data were larger than 512 bits; this would be critical to mitigate length-extension attacks (Boneh & Shoup, 2017).

Section 2.5 and Algorithms 1 and 2 describe how the Provider and receivers should perform authentication, queueing, and caching to verify messages.

Message Type (MT) 50
Table 1 presents our proposed definition of MT50 with bit allocations. The Provider will send an MT50 with every six messages. The delayed key release used to authenticate messages means that each message will be authenticated between 7 and 11 seconds after its broadcast. The Provider should set the cadence of the integrity messages so that each immediately precedes a scheduled MT50 to minimize the time needed for its authentication. If the receiver cannot authenticate a message because of a lost MT50, it generally disregards the message. (Information that alerts the receivers to decrease the level of trust need not be disregarded.) Within the message definition, there are five 16-bit HMACs and one 128-bit Hash Point. Once the receiver receives an MT50, the five HMACs included correspond to the previous messages, keyed with a secret Hash Point known only to the Provider at the message sending time. The 128-bit Hash Point included corresponds to the HMACs included in the previous MT50 message sent six seconds earlier. Figure 3 provides a conceptual diagram of the delayed Hash Point key release. As per the SBAS specifications, in the event of a GNSS integrity alert, an alert message must be sent by the Provider for four messages in a row. Therefore, occasionally an alert message will take priority over an MT50. The salted Hash Function described in Equation (1) accommodates small perturbations to the schedule resulting from alert messages, as described in Section 3.2.2 and Figure 4. Even with an MT50 delay, each MT50 must sign the messages that it would have signed without an alert, as described in Section 3.2.2.

Message Type (MT) 51
In this work, we provide two MT51 definitions with a 128-bit OTAR Payload Segment and an 84-bit metadata section. This version includes many features that could be relevant to SBAS stakeholders. We have not specified which features should be incorporated; this will be deferred until all SBAS stakeholders have had their considerations heard. Section 4.1 discusses how to modify the 128/84-bit allocation if SBAS stakeholders would prefer not to use the additional features described later in the text. However, given the academic context of this work, and because our design meets the key performance indicators (KPIs), we choose to publish the 128-bit feature-rich design. Tables 2 and 3 provide our proposed definition of MT51 with bit allocations. The Provider must broadcast an MT51 for approximately 1 in every 18 messages, as described in Section 4.4. These messages do not need to be sent on a rigid schedule; they can be sent in the extra space within the current SBAS schedule.
The payload section for MT51 is only 128 bits, which leaves 84 bits for metadata that describes how a receiver should interpret the 128-bit payload. The comparatively large 84-bit metadata section allows us to introduce additional features, such as parallel and redundant key management, as well as several other features explained later in the text. (Note to Tables 2 and 3: the metadata specifies how a receiver should interpret the payload. The Authentication Stack, defined in Section 2.5, comprises a total of 2048 bits and requires the receipt of 16 unique messages for OTAR.) Table 4 provides a sample set of unique MT51 messages, each containing one 128-bit segment per message, that together form an Authentication Stack (defined in Section 2.5). To distinguish the key updated with a specific MT51 from the key used to authenticate that MT51, we call the key associated with the MT51-delivered payload the Germane Key and the key used to authenticate that delivered payload the Authenticating Key. The metadata specifies the following, matching the order shown in Table 3: (1) it will identify the system to which the Germane Key applies; (2) it will specify whether the payload is an ECDSA public key, an AES decryption key for an ECDSA public key, or a TESLA Hash Path End; (3) it will provide a 16-bit hash of the entire Germane Key so that the receiver can immediately associate the OTAR Payload Segment with a specific key; (4) it will designate the expiration time of the Germane Key; (5) it will provide a 16-bit hash of the entire Authenticating Key so that the receiver can immediately associate the authenticating pseudorandom data OTAR Payload Segment with a specific Authenticating Key; (6) it will specify whether the payload itself is a key or an authentication signature; and (7) it will identify the segment number of the authenticating pseudorandom data so that the receiver can aggregate the authenticating pseudorandom data segments over time.

The signatures used to authenticate a particular key must be derived from the entire set of full MT51 messages used to deliver them. The metadata associated with authenticating pseudorandom data includes the expiration time. These keys are only secure for specific lengths of time, as described in Sections 2.1 and 2.2; hence, the expiration time must be authenticated together with the corresponding authenticating pseudorandom data so that each key is retired securely. After the expiration of a particular key, receivers must reject all messages signed with the expired key.
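For illustration, the seven metadata items above can be pictured as a small record; the sketch below is a hedged Python rendering in which only the 16-bit key hashes are fixed by the text, while the remaining field widths and names are our assumptions.

```python
from dataclasses import dataclass

@dataclass
class MT51Metadata:
    """Illustrative container for the seven metadata items listed above.
    Field widths other than the 16-bit key hashes are assumptions, not the standard."""
    sbas_provider_id: int          # (1) system to which the Germane Key applies
    payload_kind: int              # (2) ECDSA public key, AES decryption key, or Hash Path End
    germane_key_hash: int          # (3) 16-bit hash of the entire Germane Key
    germane_key_expiration: int    # (4) expiration time of the Germane Key
    authenticating_key_hash: int   # (5) 16-bit hash of the entire Authenticating Key
    payload_is_signature: bool     # (6) payload is a key or an authentication signature
    segment_number: int            # (7) position of this 128-bit OTAR Payload Segment
```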
Procedures Algorithm 1 describes the procedure needed by the Providers to use TESLA securely.In addition, the Providers must assemble OTAR data by coordinating with the CA as described in Section 2.1.Algorithm 2 captures how a receiver should operate beginning with a cold start and includes specifications of some of the onboard data structures that should be used to track key maintenance.The term "cold start" is used to describe use of a receiver that has been off for an extended period with onboard Level-2 ECDSA or TESLA information that has expired according to its onboard clock.Upon cold start, a receiver must track and record incoming SBAS messages.A receiver should not use any unauthenticated data.NB: the complete set of MT51s is self-authenticating.Therefore, it must track the incoming MT51 messages until it has received a complete set of unexpired Level-1 ECDSA public key, Level-2 ECDSA public key, TESLA Hash Path End, and the associated signatures (collectively known as the "Authentication Stack").The receiver must track and store MT51 messages until the aggregate OTAR Payloads of authenticating pseudorandom data provides a successful ECDSA authentication of the entire Authentication Stack, including a level-1 key onto a level-2 key and a level-2 key onto the Hash Path End.Until it has received and verified a complete set of unique MT51s included in the Authentication Stack, the receiver cannot assert an authenticated fix and must ignore the SBAS corrections and integrity data (i.e., non-MT51 messages) derived from incoming messages.Once the Authentication Stack is received and verified by ECDSA, the receiver can process and authenticate MT50 messages via TESLA and can also associate the MT50-delivered HMACs with messages to authenticate and process the SBAS correction and integrity data. The Provider repeatedly broadcasts the current Authentication Stack, as discussed in Section 4.4 to accommodate random receiver startups.Algorithms 1 and 2 delineate only the single satellite and single frequency cases.To augment those algorithms to handle the multiple satellite or multiple frequencies, five messages should be signed and verified for each satellite and each frequency, as described by Equations ( 2) and ( 3).This means that the MT51 Authentication Stack is used for all satellites and frequencies, but each message for each satellite and frequency is provided with its own authentication via TESLA.By reusing he same Authentication Stack for all satellites and frequencies, we exploit the scalability of TESLA allowing for smaller TFAFs (see Section 3.2). 
Upon full receipt of the Authentication Stack, receivers must hash the Hash Point from the most recent MT50 down to the MT51-provided Hash Path End. In the special case in which the receiver has only been off for a short time and is turned back on during the use of the same Hash Path, the receiver TFAF is the time required to hash to the Hash Path End. The worst-case number of hash computations is the length of the Hash Path (about 100,000) and will occur when a receiver first turns on at or near the expiration of the Hash Path. With standard commodity hardware (e.g., an Intel Core i5 processor), a worst-case time of approximately 10 seconds will be observed if a receiver is turned on immediately before a Hash Path expires. We measured this time by experimenting with personal laptops that were not specifically built for this process. With hardware acceleration, this processing time could decrease, and the hashing could be evaluated in parallel with other standard receiver processes. This initial hashing computation only occurs when the receiver is turned on and should not hinder processing SBAS navigation data in real time after an authenticated fix.

Modification for Metadata Removal
In Section 4.1, we discuss whether certain pieces of the metadata are strictly necessary and whether certain features of the design of this work are useful to all SBAS stakeholders. If the metadata are stripped from MT51 in a final design, then Section 2.5 must be modified. For instance, a minimum-metadata MT51 design could contain only the page number as aggregate metadata.

In Algorithm 2, the receiver stores keys in hash tables because the OTAR is not rigidly managed. Different keys can authenticate other keys. For MT51, the receiver must check that the TESLA Hash Path End is signed correctly by the metadata-specified level-2 key, and so on from level-2 to level-1. For another MT51 design with only the page number as the metadata, either the unique OTAR Payload Segments do or do not aggregate to generate a consistent Authentication Stack. Rather than store keys in hash tables, the receiver will simply aggregate the OTAR Payload Segments into a complete Authentication Stack in an order specified by the Provider, such as TESLA Hash Path End || ECDSA Level-2 Authentication of the TESLA Hash Path End || ECDSA Level-2 Key || ECDSA Level-1 Authentication of the Level-2 Key. Once the entire Authentication Stack aggregate authenticates via ECDSA, the receiver achieves its TFAF and can begin authenticating the bulk of SBAS messages via MT50.

TESLA Loss Tolerance
TESLA allows receivers to derive missed Hash Points from Hash Points released later along the Provider's pre-computed Hash Path, which has been kept secret. For example, suppose a receiver misses an MT50 message and therefore does not receive a Hash Point. The receiver can derive that missing Hash Point by computing the Hash of the next released Hash Point. In another example, suppose a receiver misses several days' worth of Hash Points; upon receipt of a new Hash Point, the receiver can hash all the way down to the Hash Path End, as described in Equation (1), to establish the new messages' immediate authenticity. This property scales along the length of the Hash Path. If a receiver is off for longer than the Hash Path's length or applicability, it will need to OTAR the Hash Path End via MT51 and ECDSA.
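The worst-case hashing described above, whether at cold start or when recovering from a long outage, is just a loop of truncated hashes down to the Hash Path End. The following timing sketch is illustrative only; the salt and starting point are random stand-ins, and the counter term of Equation (1) is omitted for brevity.

```python
import hashlib
import os
import time

def H(data: bytes) -> bytes:
    """Left-most 128 bits of SHA-256."""
    return hashlib.sha256(data).digest()[:16]

salt = os.urandom(16)
point = os.urandom(16)   # stand-in for the most recently delivered Hash Point
steps = 100_000          # roughly the length of a one-week Hash Path

start = time.perf_counter()
for _ in range(steps):
    point = H(point + salt)
elapsed = time.perf_counter() - start
# Measured time depends heavily on hardware and implementation language.
print(f"{steps} hashes in {elapsed:.2f} s")
```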
Regarding loss tolerance of messages and the security of 16-bit HMACs, we implement a main idea previously described by Neish (2020). The smaller 16-bit HMAC design of MT50 aids in general tolerance of message loss. Suppose, instead, that each MT50 contained a single HMAC that authenticated the previous five messages as a group. If any of the five earlier messages were lost, the receiver would not be able to verify any of the messages in this group. Therefore, to accommodate message loss, we specify that each HMAC from the set of five smaller HMACs individually authenticates one of the five previous messages. Because the 16-bit length of the HMACs is unusually small with respect to cryptographic authentication, we take special care to specify our spoofing detection procedures and ensure cryptographic independence of keys, as described in Section 3.3.

TESLA Efficiency
The number of authenticating pseudorandom data bits required to perform OTAR of a Hash Path does not increase with Hash Path length. While complete OTAR requires a daunting 2048 bits, as described in Table 5, the bits do not scale with Hash Path length because the Provider sends only a Hash Path End authenticated with ECDSA. If the Provider finds that the overhead required to perform OTAR of a Hash Path has become too burdensome on the schedule, the Provider can increase the Hash Path's length and decrease the OTAR transmission frequency. Increasing the Hash Path length requires the Provider to compute a longer Hash Path for each OTAR and requires receivers to hash more in the event of a cold start. However, this burden is negligible because commodity hardware can compute Hash Paths of the lengths specified in this work within seconds, as described in Section 2.5. In any case, MT51 can reassign Hash Points midway through a Hash Path to decrease this burden. For instance, while a Provider could generate week-long Hash Paths, it could assign Hash Path Ends each day or each hour to alleviate the burden of the initial hashing operation. Given the 128-bit Hash Point length, a Hash Path length on the order of decades is safe from attack (Neish, 2020). Therefore, the OTAR transmission frequency is primarily driven by the desired TFAF, as discussed in Section 4.4.

To meet a standard 128-bit security level for TESLA, we must use cryptographically-independent 128-bit keys for each HMAC. It is generally desirable to minimize the number of bits required for TESLA Hash Path distribution. We achieve this by deriving all cryptographic keys from the same 128-bit Hash Path. Specifically, each iteration of Equation (2), for each combination of time, satellite, and frequency, is derived from the same distributed 128-bit Hash Point. Thus, we do not need to generate a separate Hash Path for each stream of authenticated information. Provided each of the derived keys is cryptographically independent, the outcome is cryptographically secure. Ensuring cryptographically-independent keys is achieved by the intermediate HMAC operation shown in Equation (2), as described in Section 3.3. This allows a single Hash Point to authenticate multiple messages and a single Hash Path to serve a set of SBAS satellites.
While the Providers could maintain separate Hash Paths for each satellite, using a single Hash Path may decrease the time required for cold-start receivers to receive the Authentication Stack. For instance, WAAS uses three geostationary satellites. WAAS could use the same Hash Path for all of its satellites and broadcast the set of unique MT51 messages that make up the Authentication Stack out of phase, thereby decreasing the TFAF by 66.6%, while accommodating receivers that do not track each satellite. We can extend this argument to the L1 and L5 frequency bands, decreasing the TFAF by 83.3%. Our selection of a TFAF for a receiver tracking a single geostationary SBAS satellite on a single frequency is discussed in Section 4.4.

ECDSA-derived Hash Path Salt
The security of each TESLA Hash Path hinges on the difficulty of computing any earlier preimage Hash Point before it is released by the Provider. (Note to Table 5: Level-1 keys require only the 128-bit AES decryption key, hence a single MT51. Level-2 keys require a 256-bit key and 1024 bits of authenticating pseudorandom data (twice the level-1 public key length), hence 1280 bits and 10 MT51s. TESLA Hash Path Ends require a 128-bit Hash Point and 512 bits of authenticating pseudorandom data (twice the length of the level-2 public key), hence 640 bits and 5 MT51s. Each unique MT51 is required for OTAR.) Unsalted Hash Paths are susceptible to pre-computation attacks, also known as Rainbow Table Attacks (Boneh & Shoup, 2017). To perform these attacks, an attacker pre-computes a large number of Hash Paths and stores them in the hope that one of the pre-computed Hash Paths contains a currently secure mid-path Hash Point. If this were to occur, the attacker would then have saved Hash Points located earlier on the no-longer-secure Hash Path and could thus spoof SBAS-authenticated messages. To prevent this from occurring, we must introduce random variation (known as "salt") into the Hash Function, which renders pre-computation attacks infeasible.

A well-designed salt scheme will have the following characteristics: (1) the salt scheme must be sufficiently strong to deter pre-computation attacks; (2) the scheme must accommodate spontaneous and episodic message loss (e.g., receiver interference or receiver offline cold start); and (3) the scheme must not impose a burden on the message scheduler. A 128-bit salt would suffice for the security requirements. Given the modular design of the proposed MT51, one could append a designation for the salt of a particular Hash Path to the metadata definitions shown in Table 3. This would require an additional message to OTAR a Hash Path, which would be unlikely to pose a burden on the SBAS schedule. However, we propose an alternative that saves an MT51 message by basing the Hash Path salt on the level-2 signature protocol. We suggest that the Provider compute a Hash Path salt S_P for Equation (1) via Algorithm 3. In this case, the Provider computes a cryptographically-secure nonce for every ECDSA signature. Using Algorithm 3, the salt is a public quantity derived from that nonce. For ECDSA, this is the curve point C. The analogous quantity in EC-Schnorr signatures is usually called r. This C is cryptographically secure and random because it is derived from the cryptographically-secure nonce generated at signing. Use of this quantity as the Hash Path salt saves an MT51 message without compromising the signing key or the Hash Path.
While this scheme provides the advantage of one less message for OTAR, it impedes the scheme's flexibility. Using a nonce more than once will reveal the secret private key used to authenticate the data. This means that the Hash Path and its authentication signatures are immutable. The Provider cannot remove a Hash Point partway along a Hash Path to save receivers the computation of hashing down to the signed Hash Path End, because the salt derived from it would then be different. Providers also cannot change any of the metadata (e.g., the expiration time). To reincorporate these two features, as mentioned above, the Provider could augment the MT51 metadata and distribute the salt as a separate MT51.

TESLA Hash Path Counter from Time
In the standard TESLA formulation, the Hash Function uses an integer counter denoting the number of Hash Points from the Hash Path start. We propose an analogous approach involving the integer time of message dispatch and arrival. Dispatch and arrival times, rounded down to the nearest integer, are the same for the Provider and the receiver. This is because messages are sent each second and the transmission time-of-flight is less than one second. There are several advantages to using the time instead of the integer number of points from the Hash Path start. Upon start, a receiver is capable of verifying a Hash Point, since Equation (1) is a function of the current integer time and the ECDSA authenticating pseudorandom data. Moreover, it allows Hash Path switching to occur on a non-rigid schedule, which also helps to maintain security (Caparra et al., 2016). While we specified a one-week interval, and it would be natural to have a new Hash Path begin at the beginning of a GNSS week, the Provider need not communicate the start and length of Hash Paths as overhead or metadata for a non-rigid schedule because of the hashing properties of the Hash Point. If the Provider were to switch Hash Paths arbitrarily, there is only a constant-time cost for the receiver, which needs to check whether a new Hash Point hashes to the current Hash Path or to a new one, as shown in Algorithm 2.

The time-based counter also aids security, given the loose time-synchronization assumption. The TESLA protocol assumes that the Provider and receiver are loosely time-synchronized. To break the TESLA protocol security, an attacker must shift the receiver time to six seconds behind the Provider time. In Equations (1) and (2), we proposed including the times t_i and t_j in the TESLA counter and the Hash-Point-to-HMAC-key derivations. As proposed, if the Provider and the receiver are not time-synchronized within one second, the authentication scheme fails to certify messages as authenticated because all the keys were derived from the integer time in seconds. Thus, SBAS message spoofing becomes more complex, since any SBAS spoofer must also spoof the GNSS time.
During standard operation, the Provider will send an MT50 every six seconds. However, the SBAS alert requirements specify that, upon an alert, alert messages must be sent immediately for four consecutive seconds. Since the Provider computes the entire Hash Path before its use, including assuming the t_i's associated with each Hash Point, alerts will interfere with the six-second MT50 schedule and the TESLA counter that we have proposed. To accommodate perturbations of the authentication schedule, we propose (1) nominally sending each MT50 when t_i mod 6 = 0 and (2) performing an integer division by six on the time t_i, as shown in Equation (1). If an alert occurs, leading to four consecutive messages that displace a TESLA authentication between one and four seconds after t_i mod 6 = 0, the Hash Path is preserved because ⌊(t_i + d)/6⌋ = ⌊t_i/6⌋ for a delay d ≤ 4 when t_i mod 6 = 0. Figure 4 provides a conceptual diagram of how the TESLA counter is preserved in the event of an alert message.

In the event of an alert, we must modify the scheme to maintain the security of the time-synchronization schedule. Nominally, the earliest that a receiver will authenticate a message is six seconds after its transmission. This serves to protect the scheme and delay an attack for up to six seconds. That length of time is the minimum spread between a delivered HMAC and the corresponding Hash Point used to generate it. As shown in Figure 4, this six-second minimum is violated unless one of the following two proposed modifications is implemented. Option 1: In the event of an alert, the Hash Point after the normal Hash Point authenticates the messages sent during the alert. Consistent with Figure 4, the MT50 in column 168 (not shown) would authenticate the messages of Alerts 3 through 6. Option 2: The receiver does not accept the delayed MT50 until it has been authenticated by a 16-bit HMAC like any other message. Consistent with Figure 4, the receiver does not use the delayed MT50s shown in columns 157 through 160 until it receives a valid HMAC from the MT50 in column 162 and the Hash Point delivered in column 168 (not shown). Both Option 1 and Option 2 provide the same level of security by returning the minimum HMAC-to-Hash-Point delay back to six seconds. Based on our implementation work with MAAST, we claim that Option 2 may be easier to implement with current receivers and software. To limit the information lost, a delayed MT50 should sign the original messages corresponding to the loosely-synchronized schedule. In Figure 4, in the row labeled Alert 6: (1) the MT50 of column 160 must sign the messages of columns 151 through 155; (2) the MT50 of column 162 must sign the messages of columns 157 through 161; and (3) the non-MT50 message of column 156 will remain unauthenticated. For the rows labeled Alerts 3 through 6, the integrity messages of column 156 will also remain unauthenticated. This is acceptable because those messages tell receivers not to use the service; the surrounding integrity messages will be authenticated regardless. Without this requirement, messages of substance would not be authenticated.
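The counter-preservation property is simple to check numerically; the snippet below verifies that a delay of one to four seconds after a nominal MT50 slot leaves ⌊t/6⌋ unchanged.

```python
# A delayed MT50 (1-4 s after a nominal slot with t_i mod 6 == 0) still maps to the
# same TESLA counter value floor(t/6), so the same Hash Point applies.
t_i = 1234560               # nominal slot: t_i % 6 == 0
assert t_i % 6 == 0
for delay in range(1, 5):
    assert (t_i + delay) // 6 == t_i // 6
print("counter preserved for delays of 1-4 seconds")
```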
TESLA Security
Previous work identified the appropriate security-level lengths under conservative adversary models (Neish, 2020). These findings assert that a TESLA Hash Point length of 115 bits and an HMAC length of 15 bits will be sufficient to deter a supercomputer-level attack over the time-between-authentication interval and assert a sufficiently low probability of success. In this work, we have rounded these numbers up to the nearest power-of-two lengths, at 128 and 16, respectively. Increasing these lengths adds additional security-level protection. The primary reason for the 128-bit Hash Point length is that, as specified, MT51 can accommodate 128 bits. Section 4 discusses why 128 bits was selected to aid in the scheme's maintenance. The selection of the 16-bit HMAC length follows our general strategy of power-of-two bit lengths and aids in the flexibility of the scheme, as discussed in Section 4.1.

Until the Hash Point is released, the HMACs are indistinguishable from random bits. Given the security of the salted Hash Path, the probability that an adversary could generate a preimage Hash Point is 2^-128. This probability is sufficiently low that it should never be expected to occur, even with the support of vast computational resources. As described in Section 3.2, given the desire to use the 128-bit Hash Path efficiently at a particular Hash Point time interval, we wish to authenticate many different pieces of information simultaneously (e.g., five messages per Hash Point, Section 4.2). Thus, we have taken a conservative approach in our construction to mitigate any anticipated vulnerabilities when implementing potential feature extensions in this SBAS TESLA design or any other GNSS-TESLA constellation concepts (Anderson et al., 2022; O'Hanlon et al., 2022). To discourage implementation errors when applying these design concepts, we ensure that each piece of information authenticated with an HMAC has its own cryptographically-independent HMAC key, as described in Equation (2).

Equation (2) also takes the secure Hash Point and uses HMAC to derive cryptographically-independent keys by applying HMAC with the Hash Point in the HMAC key field together with unique contextual information in the HMAC message field. In the specific case of Equation (2), the HMAC message field includes the message time, the satellite PRN code, and the frequency. The time parameter allows a single Hash Point to authenticate the five messages individually because each message is sent at a different time. The PRN code allows all of the satellites to share the same Hash Path because each satellite has its own unique PRN code; the same is the case with the frequency. Our choices of time, PRN code, and frequency band are arbitrary, except that our selection guarantees uniqueness; each message from each satellite and from each band is provided with a cryptographically-independent key that can be used to derive the HMAC. Any unique identifying information, such as a counter, would suffice to ensure that the output of Equation (2) is cryptographically independent.
There are several ways to construct a secure scheme without the intermediate HMAC operation of Equation (2), for example, prepending the context to the signed data. Careful consideration must be taken to avoid implementation errors that might introduce vulnerabilities, including, but not limited to, (1) allowing an adversary to spoof a message from one satellite so that it would appear to be coming from another satellite (or another frequency); (2) allowing an adversary to spoof a message from another time (given that we sign five messages at a time using this scheme); (3) allowing an adversary to prefix the context of one message onto another, enabling swap and context-confusion attacks (especially when the signed data are not of fixed length); (4) allowing an adversary to engage in related-key attacks (Peyrin et al., 2012); or (5) deriving additional data using this key (e.g., in signal watermarking concepts as described in Anderson et al. (2022)).

Because the Hash Point is not known to an adversary when the HMACs are released, the probability that an adversary can forge a 16-bit HMAC is 2^-16. To provide adequate protection against forgery, we must specify a conservative approach for forgery detection. For example, if any of the smaller HMACs fails the verification algorithm, the receiver must discard all non-MT51 information from that particular SBAS satellite and restart collecting new SBAS data. While 2^-16 is a relatively high probability, this is sufficient for the SBAS context because (1) an adversary does not yet have access to the delay-released key nor an HMAC verification oracle (in the cryptographic security sense), and (2) once forgery has been detected, all prior SBAS data will be immediately discarded. While the details will be presented in a forthcoming work, the receiver logic will be set up so that a single message forgery event will not result in an integrity failure. This will decrease the likelihood of harmful forgery to 2^-32 < 10^-9. The likelihood that an adversary could forge so many messages successfully is small enough to meet the security level required by the stakeholders.

Our selection of SHA-256 for the Hash Path and HMAC generation is not necessarily required. We select SHA-256 because it is standard and widely used. However, the Providers may wish to consider other standardized hashing functions, such as SHA-384 or those from the SHA-3 group, to address other concerns, for example, future-proofing and hardware concerns. We selected HMAC-SHA-256, which includes SHA-256 as its primitive, to simplify the protocol. The widespread and straightforward use of SHA-256 aids in the continued security of the proposed scheme. If SHA-256 is broken, this will be widely publicized, and the function can be quickly exchanged for another hash function. Providers would need to replace only a single function in their implementation to continue operation, much like the recent SHA-1 deprecation.
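The conservative discard-on-failure policy described above can be sketched in a few lines; the function and cache structure below are illustrative assumptions rather than the receiver logic of a particular implementation.

```python
import hmac

def verify_mt50_hmacs(expected: list[bytes], received: list[bytes], cache: list) -> bool:
    """Sketch of conservative forgery handling: if any 16-bit HMAC fails, discard all
    cached non-MT51 data from this satellite and restart SBAS data collection."""
    for exp, rec in zip(expected, received):
        if not hmac.compare_digest(exp, rec):   # constant-time comparison
            cache.clear()                        # drop all prior SBAS data
            return False
    return True
```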
OVER-THE-AIR REKEYING (OTAR) DESIGN METHODOLOGY
The proposed design of MT51 is fundamentally modular. MT51 serves to deliver pseudorandom data so that its use can be flexible and recycled among different OTAR designs and applications that require delivery of pseudorandom data, including keys, signatures, and Hash Points. Its purpose is to deliver large chunks of pseudorandom data to maintain the SBAS authentication scheme, along with the associated metadata that tells the receiver how to interpret that authenticating pseudorandom data. The same message definition delivers TESLA Hash Path Ends, level-2 keys, level-1-key decryption keys, and the authenticating pseudorandom data required for authentication.

Our choice to allow 128 bits per MT51 serves several purposes. We selected 128- and 256-bit security level ECDSA keys because there exist standardized, secure elliptic curves at these security levels, with each length divisible by 128. The public keys and derived signatures are two and four times the security-level length, respectively. These quantities are divisible by 128, meaning that there will be integer numbers of OTAR Payload Segments without wasted zero padding. Table 5 exhibits the number of messages required to perform OTAR at each of the key levels. We recognize that we could use the 192-bit security level without modifying our scheme because all of the 192-bit-level data are also divisible by 128. The design is agnostic to the asymmetric scheme and is recycled for OTAR of the Hash Path Ends, thereby expanding its use to maintenance of the entire scheme, not just the asymmetric portion. The proposed MT51 standard would require no changes if a more efficient authentication scheme (e.g., EC-Schnorr), a quantum-secure scheme, or a different security level replaced the proposed asymmetric authentication scheme. The modular design also facilitates easy expansion with the additional features described in Sections 4.1 and 4.2 that may be desirable to several of the SBAS stakeholders.

MT50 and MT51 are agnostic to the asymmetric cryptographic security scheme and hash function primitives. The SBAS MT scheme need not change when the security of a primitive becomes compromised. However, the Providers and receivers will need to change the primitives that are used, in modular fashion. Since MT51 provides 128 bits of authenticating pseudorandom data per message, and standardized cryptographic primitive lengths are generally integer multiples of 128, changes to the scheme will only increase or decrease the number of segments required to transmit information. If the security of the 128-bit truncated SHA-256 is compromised, SBAS could double the Hash Point length without affecting the MT50 frequency. Suppose the Hash Point space is the set of 256-bit integers, analogous to untruncated SHA-256. Each MT50 would then transmit half of a Hash Point, and each HMAC key would be derived from two consecutive MT50 messages. This increases the time between authentication events by a few seconds; however, it doubles the Hash Path security and maintains its loss-tolerant properties.
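The "no wasted zero padding" claim is simple counting; the sketch below checks it for the two ECDSA security levels, using only the two-times/four-times length relations stated above (the helper name is an illustrative assumption).

```python
def otar_segments(data_bits: int, segment_bits: int = 128) -> int:
    """Number of 128-bit OTAR Payload Segments needed, including any zero padding."""
    return -(-data_bits // segment_bits)  # ceiling division

# Public keys are 2x and signatures 4x the ECDSA security-level length (see text).
for security_level in (128, 256):
    for label, bits in (("public key", 2 * security_level), ("signature", 4 * security_level)):
        segments = otar_segments(bits)
        padding = segments * 128 - bits
        print(f"{security_level}-bit level {label}: {bits} bits -> {segments} segments, "
              f"{padding} padding bits")
# Every length above is a multiple of 128, so the padding is always 0 bits.
```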
MT51 Metadata Design
We specified the inclusion of a Germane Key hash and an Authenticating Key hash within the authenticating pseudorandom data metadata. Since the payload is pseudorandom, metadata must exist so that the receiver can associate the unique MT51 payloads. Any identifying feature would suffice, for example, an ordered key number that rolls over every 2^{16} = 65536 keys. However, using a hash of the entire key provides additional features. The key schedule need not be rigid, linear, or sequential. The Provider and the CA can maintain several redundant level-2 and level-1 keys, respectively. Multiple keys, potentially managed in isolation by the Provider and the CA, might provide redundant security. The Provider would need to check that no two unexpired keys share the same 16-bit identifying hash; however, a collision would be a rare occurrence, and the Provider would simply need to draw another key at random in that event.

SBAS Providers could broadcast each other's Hash Path Ends and key maintenance features to promote service continuity. Hence, we included the SBAS Service Provider ID in the metadata. SBAS Providers would not need access to one another's secret data; they would only need to serve as repeaters. Whereas Section 4.4 discusses a higher MT51 frequency to support local SBAS authentication from a cold receiver start, this feature would be low frequency and would serve to support the local Provider schedule. Having Providers broadcast each other's keys at low frequency would improve the TFAF when transferring to a new SBAS Provider. For example, consider the case of two adjacent SBAS systems, WAAS and EGNOS, and an aircraft traversing from Europe to North America. It will take several hours for an aircraft to make this journey. If EGNOS broadcasts the WAAS keys once an hour, any westward-bound aircraft will receive the WAAS keys it needs to operate before reaching North America. This decreases the WAAS TFAF to zero. At a minimum, SBAS Providers could broadcast keys from adjacent SBAS Providers. Without this feature, when an aircraft enters a new SBAS service volume for the first time in a given week (or 10 or 100 weeks), it must act as a cold-start receiver for as long as required to collect the Authentication Stack, as specified in Section 4.4.
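The 16-bit identifying hash and the redraw-on-collision rule above can be sketched as follows; truncating SHA-256 to its first two bytes is an assumption for illustration, not the normative identifier definition.

```python
import hashlib
import secrets

def key_id_hash(key: bytes) -> bytes:
    """16-bit identifying hash of a key (illustrative: first 2 bytes of SHA-256)."""
    return hashlib.sha256(key).digest()[:2]

def draw_unexpired_key(existing_unexpired_keys: list, key_len: int = 32) -> bytes:
    """Draw a fresh random key, redrawing on the rare 16-bit identifier collision."""
    used_ids = {key_id_hash(k) for k in existing_unexpired_keys}
    while True:
        candidate = secrets.token_bytes(key_len)
        if key_id_hash(candidate) not in used_ids:
            return candidate
```

With only a handful of unexpired keys held at any time, a collision occurs with probability on the order of 2^{-16} per candidate, so the expected number of redraws is essentially zero.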
Spare bits in the authenticating pseudorandom data metadata could accommodate other features. These include scheme hyperparameters, such as the choice of hash function or the key or HMAC length. MT51 messages could authenticate specific messages immediately with ECDSA, including messages identified by 16-bit hashes. If keys are ever compromised, MT51 could also disseminate key revocations. A revocation MT51 would include eight 16-bit hashes of the affected keys and would only need to be broadcast until the keys have been removed using the standard procedure. Because each MT51 broadcast is accompanied by a germane expiration time, the Provider can arbitrarily shorten the applicability of a particular key by rebroadcasting the MT51 with a different expiration time, under the assumption that receivers actually receive and record that updated broadcast and that the Hash Path salt is not disrupted, as described in Section 3.2.1. Receivers would need to remember key expiration time changes and key revocations. Receivers that did not receive these updates would remain vulnerable. If the Provider can remove data in this way, the metadata must include parameters that identify a given specific authentication: without the germane key hash, authenticating key hash, segment number, and authentication instance number, the individual segments of pseudorandom data could not be associated. MT51 can also manage other authentication schemes, including one adopted by a core GNSS constellation as described in Section 4.2. A GNSS authentication, especially one built on TESLA, would save bandwidth by deferring its maintenance to SBAS.

There are many features that can be accommodated by the scheme's modularity; however, if SBAS stakeholders prefer not to use these features, then the spare metadata bits could instead be used to improve resistance to message loss. Providers could add redundant HMACs to the MT51 message and thus authenticate messages redundantly to alleviate MT50 message loss. Moreover, the delivery of 128 bits of authenticating pseudorandom data could be augmented via the use of fountain codes (Fernandez-Hernandez et al., 2017). The use of fountain codes may increase the reliability of delivering MT51 messages and decrease the number of transmitted messages required for an authenticated first fix. The selection of 128 bits supports the scheme's efficiency because all data lengths are integrally divisible by 128. Special care must be taken to ensure that using fountain codes in the spare bits to augment those 128 bits does not change this integral-divisibility property, thereby maintaining efficiency.
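The revocation payload described above fits exactly because eight 16-bit identifiers fill the 128-bit field; a minimal packing sketch, with assumed helper names and a zero-fill convention for unused slots, follows.

```python
def pack_revocation_payload(revoked_key_hashes: list) -> bytes:
    """Pack up to eight 16-bit key-identifying hashes into one 128-bit MT51 payload.

    8 slots x 16 bits = 128 bits exactly; unused slots are zero-filled.
    """
    assert len(revoked_key_hashes) <= 8
    assert all(len(h) == 2 for h in revoked_key_hashes)
    payload = b"".join(revoked_key_hashes)
    return payload + b"\x00" * (16 - len(payload))   # 16 bytes = 128 bits

def unpack_revocation_payload(payload: bytes) -> list:
    """Recover the 16-bit identifiers; zero-filled slots are ignored by convention."""
    assert len(payload) == 16
    slots = [payload[i:i + 2] for i in range(0, 16, 2)]
    return [s for s in slots if s != b"\x00\x00"]
```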
A final MT51 design must accommodate all the features relevant to SBAS stakeholders while maintaining the KPIs. The most relevant KPI for this discussion is TFAF, because the use of more features requires more metadata, which in turn increases the number of unique MT51s required for OTAR. We proposed MT51 with a minimum OTAR Payload Segment size of 128 bits. The conveniences associated with this choice are discussed in other sections. This choice permitted us to specify maximal metadata. However, as defined in Table 3, there could be redundant identifying information given certain assumptions about key management. Furthermore, some of the metadata bits are never used (e.g., delivery of a 128-bit salt will never require more than one page). Because this work reflects an academic context, we are concerned with the breadth of possible features, given that our 128-bit design already meets the KPIs. Therefore, we acknowledge that there are other ways of constructing MT51 metadata that would avoid definition redundancy and spare bits. SBAS Stakeholders will discuss their preferences (e.g., for the metadata to become more or less crowded), noting that even MT51 has some spare bits. Therefore, we offer several suggestions on how to balance the design while maintaining scheme modularity, simplicity, and flexibility.

For example, a plausible MT51 design could include a 192-bit OTAR payload or a larger payload. This can be achieved via any of the following: (1) the SBAS Provider could identify keys by their expiration time, thus removing the need for the Germane and Authenticating Key Hashes in the metadata; however, this would eliminate the possibility of parallel and redundant key management; (2) any metadata regarding the keys could be placed on its own page, provided the authenticating signatures were derived from that data to ensure its security; and (3) the SBAS Provider and receiver could agree on a defined ordering, for example, that depicted in Table 4, for an aggregate OTAR payload. We note that the 128-bit MT51 design does not require the receiver to presume anything about the OTAR schedule or use. In our simulation, the receiver implementation proceeds without knowledge of the 100-, 10-, and 1-week cadence, and without any insight into whether the keys are managed rigidly or linearly.

A 192-bit OTAR Payload Segment length almost maintains the integer divisibility. An Authentication Stack from Table 5 that uses AES-256 to encrypt level-1 keys and carries a separate TESLA salt MT51 would be integer-divisible by 192, with 2304 total bits and 12 192-bit MT51s. Alternatively, as features are removed, rather than shifting bit allocations from the metadata to the OTAR Payload Segment, SBAS Providers could instead replace metadata bits with fountain codes. This suggestion would maintain the integer divisibility of MT51 on the authenticating pseudorandom data, thereby ensuring there is no wasted zero padding on the delivered data, and would also add loss-tolerant properties to MT51.

We implore the SBAS Stakeholders to adopt a future-proofing mindset that includes consideration of attributes that may be needed decades in the future. For example, cryptographic primitives will break and will need to be replaced. Likewise, quantum-computer-resistant algorithms may be needed. The best way to maintain a future-proof mindset is to make MT51 agnostic to the data it is delivering and capable of expansion to any future pseudorandom data delivery requirements.
At a 128-bit OTAR Payload Segment, the required delivery of 16 unique MT51s already meets the KPIs. However, increasing the OTAR Payload Segment size would decrease the number of unique MT51s. This would decrease the burden on the SBAS schedule, decrease the TFAF, and have less of an effect on continuity and availability, among other considerations. Therefore, we expect the results shown in Section 6 to be repeatable in designs that feature a larger-than-128-bit OTAR Payload Segment.

Core Constellation Ephemerides Authentication
Another feature that is accessible because of this scheme's modularity is authentication of navigation messages from a GNSS application. GNSS messages have limited bandwidth and must maintain backward compatibility. MT51 provides a natural pathway that might be used to authenticate GNSS data with the 128-bit payload and spare metadata bits. Table 3 provides the required metadata designations, including (1) the germane core constellation system; (2) the type of authenticating pseudorandom data; and (3) the authenticating pseudorandom data segment number. However, the payload would need to be modified to accommodate realistic operations.

Given the satellite-receiver-Earth geometry, we suggest that the 128-bit payload might be split into eight 16-bit payloads, as shown in Table 6. Each 16-bit HMAC within the MT51 payload corresponds to a specific satellite's broadcast ephemerides. The MT51 metadata informs the receiver which satellites correspond to the payload HMACs. By splitting the payload into smaller HMACs, the receiver need not have access to the entire set of ephemerides to authenticate data from individual satellites.

Authenticating core constellation ephemerides exploits a main feature of TESLA, namely its bandwidth efficiency. Core constellation ephemeris keys can derive from the same Hash Path as those distributed by MT50, meaning that this feature would require minimal additional SBAS bandwidth. Equations (4) and (5) provide formulae that can be used to generate the necessary HMACs, where k_j^{Ephemeris,SVN} is the HMAC key for a particular ephemeris' HMAC and s_j^{Ephemeris,SVN} is the HMAC that is broadcast and used for authentication. The Hash Point p_i, the time t_j, and the PRN, frequency, and SVN share the same definitions as those presented in Equations (1) and (2). Other information should be incorporated into these definitions (e.g., whether the signature is for the current or previous ephemeris). These details will be considered in our future work.

Three important security details remain to be addressed. First, similar to MT50, if any one of the HMACs fails to authenticate, the entire set of ephemerides data must be discarded. Second, the HMACs derive from cryptographically-independent keys derived from a TESLA Hash Point, similar to the construction described in Section 3.3; Equations (4) and (5) use the sending time t_j, the SBAS PRN and frequency, and the core constellation satellite vehicle number. Third, the ephemeris HMACs must be sent before the release of the corresponding Hash Point, according to the loose-time-synchronized schedule (i.e., at least six seconds in advance).
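In the spirit of Equations (4) and (5), the following sketch derives a per-satellite ephemeris key from the TESLA Hash Point and then tags the broadcast ephemeris with a 16-bit HMAC. The field encodings and helper names are illustrative assumptions, not the normative equations.

```python
import hmac
import hashlib

def ephemeris_key(hash_point: bytes, t_j: int, sbas_prn: int, freq_id: int, svn: int) -> bytes:
    """Analog of Equation (4): a cryptographically-independent key per core-constellation SVN."""
    context = (
        t_j.to_bytes(4, "big")         # sending time
        + sbas_prn.to_bytes(1, "big")  # SBAS satellite PRN
        + freq_id.to_bytes(1, "big")   # SBAS frequency band
        + svn.to_bytes(2, "big")       # core constellation satellite vehicle number
    )
    return hmac.new(hash_point, context, hashlib.sha256).digest()

def ephemeris_tag(hash_point: bytes, t_j: int, sbas_prn: int, freq_id: int,
                  svn: int, ephemeris_bits: bytes) -> bytes:
    """Analog of Equation (5): the 16-bit HMAC broadcast within the MT51 payload."""
    key = ephemeris_key(hash_point, t_j, sbas_prn, freq_id, svn)
    return hmac.new(key, ephemeris_bits, hashlib.sha256).digest()[:2]
```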
SBAS could authenticate the broadcast ephemerides of every satellite globally, and the receiver could draw from the entire set of HMACs, including the few within view of the receiver. The metadata segment number informs the receiver from which set of eight satellites the HMACs are derived. For GPS, this would mean four MT51s. However, MT1 and MT31 already provide an issue-of-data mask of the 92 most relevant satellites currently under correction for a particular SBAS. Therefore, we propose that the order of the eight HMACs might correspond to the order prescribed in the issue of data already provided to the receiver. The issue-of-data index, either the IODP or IODM, would be placed in the unused sections of the metadata, and the metadata segment number field would determine the relevant eight HMACs, in order, among the set of 92. Since the issue-of-data index requires only two bits, SBAS could use the unused Germane Key Hash, Germane Key Expiration, and Authenticating Key Hash fields to send 12 HMACs per MT51. Alternatively, one could define a new message type that delivers 13 HMACs, the issue-of-data index, and the segment number.

MT50 Redundant Authentication
MT51s are self-authenticating messages as a result of the use of ECDSA. Upon receipt of the entire Authentication Stack, the receiver can assert authenticity up to the level-1 key maintained by the CA. However, the MT51 messages are also among the SBAS messages that are authenticated by TESLA and MT50. If the receiver holds an ECDSA-verified Authentication Stack that can authenticate the bulk of SBAS messages via MT50, the receiver can use TESLA to verify the next Authentication Stack when it is delivered by MT51. In this manner, a receiver can assert the authenticity of the next Hash Path End without awaiting the associated authenticating pseudorandom data, since an HMAC in the following MT50 will assert authenticity down to the current Hash Path End through the current level-1 key. In Section 4.4, we suggest that the next Authentication Stack might be rebroadcast each hour. The ECDSA- and TESLA-based authentication of the next Authentication Stack will then be redundant for a receiver that has already achieved an authenticated fix. Therefore, an argument can be made that the hourly broadcast of the next Authentication Stack adds little to the scheme. Alternatively, MT50s could refrain from authenticating MT51s and instead use those HMACs to authenticate other messages redundantly. These considerations will ultimately depend on the manufacturers' implementation preferences and on whether this logic should be incorporated into the process. To avoid implementation errors that might be exploited against the authentication scheme, we suggest that, given stakeholder agnosticism, the simplest version be selected. We believe that the scheme delineated above, in which MT51 data are redundantly authenticated with MT50, is the simpler scheme overall.
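A small sketch of the indexing just described: the issue-of-data mask fixes an ordering of up to 92 satellites, and the metadata segment number selects which consecutive block of eight HMAC slots a given MT51 carries. The function name and the assumption of exactly eight slots per message are illustrative.

```python
def satellites_for_segment(iod_mask_order: list, segment_number: int,
                           slots_per_mt51: int = 8) -> list:
    """Return the satellite IDs (in mask order) covered by one ephemeris-authentication MT51.

    iod_mask_order : satellites in the order prescribed by the MT1/MT31 issue-of-data mask.
    segment_number : metadata field selecting which block of eight HMAC slots this message holds.
    """
    start = segment_number * slots_per_mt51
    return iod_mask_order[start:start + slots_per_mt51]

# Example: with 32 GPS satellites under correction, segments 0..3 cover all of them.
gps_order = list(range(1, 33))
assert satellites_for_segment(gps_order, 3) == list(range(25, 33))
```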
MT51 Schedule Frequency
As discussed earlier, we propose that the cryptoperiods of the level-1 keys, level-2 keys, and TESLA Hash Paths might be 100, 10, and 1 week, respectively. Our selections are somewhat arbitrary and were chosen in an attempt to balance the bandwidth required to rotate key and Hash Path instances against security considerations, such as the resources required for a brute-force attack and the likelihood that a key becomes compromised (e.g., leaked). Feasible scheme designs might permit level-1 cryptoperiods that are years longer, level-2 cryptoperiods as short as one month or one week, and Hash Paths that rotate every hour or at least once a day. While we leave this consideration for future work, any choice made must meet minimum security requirements associated with the computational time needed for exhaustive guessing of the keys. Upon cold receiver startup, we would like the TFAF to be reasonably short. Therefore, the Authentication Stack must be sent out periodically. As shown in Table 5, a complete Authentication Stack requires 16 MT51s. We propose that the Provider might broadcast the current Authentication Stack every five minutes. In this way, a cold-start receiver achieves its first authenticated fix within the first five minutes, assuming no message loss. This requires a message from the 16 unique MT51s to be transmitted at a rate of 1 out of every 18 messages. With multiple geostationary satellites and frequencies, the satellites can broadcast the unique MT51s out of phase to decrease the TFAF, as discussed in Section 3.2.

Furthermore, the next keys, applicable immediately upon expiration of the current keys, must be sent well before they are needed to facilitate a seamless transition as the current keys expire. To protect the security of a specific new key, it would be prudent to send the new key only shortly before it will be put into use. For instance, we suggest that the Provider might send the next 100-week level-1 key repeatedly beginning five weeks before its actual use. The next 10-week level-2 key and 1-week TESLA Hash Path End might be sent one week before their actual use. To demonstrate the minimal impact on the MT51 SBAS bandwidth: if a Provider sent the next Authentication Stack once every hour, together with the current Authentication Stack once every five minutes, the Provider would need to send a message from the set of 32 unique MT51s at a rate of 1 out of every 17 messages, rounded up. This assumes, as a baseline, that each OTAR Payload Segment of the Authentication Stack is broadcast at the same frequency. Since some sections of the Authentication Stack change less frequently, further optimization could be performed to balance the TFAF against the likelihood that receivers are off for longer than one week, 10 weeks, or 100 weeks. For instance, the slowly varying OTAR Payload Segments could be broadcast once every 15 minutes, with the weekly-varying OTAR Payload Segments broadcast every two minutes.

We note that Providers and receivers may not need to adhere to a set schedule for the key periods or the Hash Paths. As discussed above, the MT51 metadata and modularity afford flexibility with respect to the expiration of particular keys and the agreed-upon schedule for their expiration. Moreover, switching Hash Paths without warning does not add computational complexity to receiver verification. Each Hash Point verification is a constant-time hash look-up, as presented in Algorithm 2.
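The quoted broadcast rates follow from simple counting, assuming one SBAS message per second; the sketch below reproduces the 1-in-18 and 1-in-17 figures under that assumption.

```python
import math

MSGS_PER_SECOND = 1          # one 250-bit SBAS message per second (assumed)
STACK_SIZE = 16              # unique MT51s in one Authentication Stack (Table 5)

# Current stack repeated every 5 minutes:
window = 5 * 60 * MSGS_PER_SECOND                  # 300 message slots
rate_current = math.floor(window / STACK_SIZE)     # -> 18, i.e., 1 MT51 per 18 messages

# Add the next stack once per hour on top of the 5-minute current-stack cadence:
hour = 3600 * MSGS_PER_SECOND
mt51_per_hour = STACK_SIZE * (hour // window) + STACK_SIZE   # 192 + 16 = 208
rate_both = math.floor(hour / mt51_per_hour)       # -> 17, i.e., 1 MT51 per 17 messages

print(rate_current, rate_both)   # 18 17
```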
This means that if the Provider were to switch Hash Paths without warning, the receiver would need only one additional computation to check that the new Hash Point is the preimage of another ECDSA-verified Hash Path End. SBAS could therefore incorporate unscheduled or stochastic Hash Path switches, provided the appropriate MT51s are sent in advance, noting that the MT51s specify only Hash Path End expiration times and not their actual use times. If stakeholders elect not to have a rigid Hash Path schedule, a maximum Hash Path length upper bound must be specified so that receivers do not get stuck in an infinite loop as they attempt to hash down to an ECDSA-authenticated Hash Path End.

The selection of the level-1 cryptoperiod, level-2 cryptoperiod, and Hash Path length poses trade-offs among the various SBAS stakeholders. With a very long Hash Path, aircraft are less likely to find themselves without the current Hash Path End via MT51, since the Hash Path End expires less frequently. Upon startup after a shutdown shorter than the Hash Path End cryptoperiod, receivers need not await delivery of an updated Authentication Stack via MT51; however, they must accommodate a more intense startup hashing operation to hash from the current MT50 Hash Point down to the authenticated Hash Path End. With a very short Hash Path (e.g., a Hash Path cryptoperiod of one day), receivers are more likely to be missing part of the Authentication Stack after a shutdown because the missing parts expired while the receiver was shut down. However, a shorter Hash Path means that less hash processing is required at startup. These considerations become especially germane for aircraft that periodically traverse different SBAS service volumes. For instance, consider aircraft that regularly travel a transatlantic route. When outside a specific service volume, their receivers behave as shutdown receivers. Our selection described in Section 4.4 was made to accommodate a short time to first authenticated fix. With a longer Hash Path, as flights travel across service volumes, the TFAF will more likely be bounded by the receiver's computational hash capability rather than by the MT51 delivery schedule.

TESLA TIME SYNCHRONIZATION
The HMAC keys authenticating all SBAS messages are derived from the Hash Path, which is delay-released on an assumed schedule. Therefore, it is critical to the Hash Path security that the Provider and the receiver be loosely time-synchronized. Loosely time-synchronized means that the receiver has sufficient externally-trusted time to reject HMACs that arrive after the release of the associated Hash Point. This poses a "Catch-22" for aircraft that use an isolated GNSS receiver, given that GNSS provides the time function. GNSS ranging signals allow receivers to derive an accurate, atomic-clock-synchronized time. This time measurement is certainly accurate enough to support the delay-release schedule. The prior art details many ways in which receivers can establish trust in GNSS ranging signals (Fernandez-Hernandez et al., 2019; Psiaki & Humphreys, 2016); however, GNSS ranging signals are not yet rigorously authenticated with cryptography.
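The loose time-synchronization rule can be stated as a one-line acceptance test: an HMAC is usable only if, by the receiver's externally-trusted clock, the corresponding Hash Point could not yet have been released. A minimal sketch, with the 6-second delay-release interval of this design assumed, follows.

```python
def hmac_timely(receive_time: float, hashpoint_release_time: float,
                clock_uncertainty: float) -> bool:
    """Accept an HMAC only if it arrived before the associated Hash Point release.

    receive_time           : receiver's trusted estimate of when the HMAC arrived.
    hashpoint_release_time : scheduled release time of the Hash Point keying this HMAC.
    clock_uncertainty      : bound on the receiver clock error; it must stay well under
                             the 6-second delay-release interval for this check to matter.
    """
    return receive_time + clock_uncertainty < hashpoint_release_time
```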
To damage SBAS security by breaking the loosely time-synchronized assumption, an adversary must spoof the receiver by delaying the receiver's time estimate. In our case, this delay is six seconds. If the receiver clock were six seconds behind the Provider's clock, an adversary listening to the Provider could identify the delay-released Hash Point, derive the keys, and generate forged HMACs. The longer the delay, the more HMAC forging an adversary can accomplish. One could ensure that the receiver does not accept sudden six-second time jumps; however, this strategy would not combat a Creeping Replay Attack, the worst-case attack model examined in this work. In a Creeping Replay Attack, an adversary listens to and replays the GNSS and SBAS signals but introduces an incremental delay slowly enough to avoid detection by the receiver. There are several strategies that can be used to mitigate the Creeping Replay Attack.

The first mitigation strategy is to use the onboard clock to evaluate a spoofed-time hypothesis. The prior art has explored several mechanisms that might be used to examine clock tolerances and to generate and use tolerance bounds to assert clock trust (Fernandez-Hernandez et al., 2020). We have explored a few methods that might be used to fuse GNSS and the onboard clock to determine whether a receiver is actively subject to a Creeping Replay Attack, such as Pearson hypothesis tests and Kalman-filter-based hypothesis tests. Explicit hypothesis methods and their evaluation are left for future work. These strategies can detect a Creeping Replay Attack before the six-second breakage boundary, provided the delay rate is faster than the uncertainty bound of the onboard clock. For instance, a consumer quartz clock oscillator will gain or lose approximately 15 seconds each month. This means that any hypothesis that uses the onboard clock exclusively to estimate authentic time is information-bound to 15 seconds per month. Aircraft technicians and airport security officials could consider implementing procedures that guarantee that the aircraft receiver periodically receives trusted GNSS and SBAS signals, for example, while taxiing on the runway, where security officials ostensibly monitor for spoofed signals and jamming. Operators would need to communicate to the receiver that the current estimate can be trusted in order to reset the information bound. Receiver manufacturers could also invest in more accurate clocks and tie time-trust events to periodic maintenance.
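A back-of-the-envelope check of the drift-bound argument above, assuming a 15 s/month quartz drift, a linear drift model, and the 6-second TESLA delay: an onboard-clock-only hypothesis can expose a creeping delay only if the 6-second offset exceeds the clock's accumulated uncertainty since the last trusted-time event. The names and the drift model are simplifying assumptions, not the explicit hypothesis tests deferred to future work.

```python
SECONDS_PER_MONTH = 30 * 24 * 3600
QUARTZ_DRIFT = 15.0 / SECONDS_PER_MONTH   # ~5.8e-6 s of uncertainty per second
TESLA_DELAY = 6.0                          # seconds of delay needed to forge HMACs

def creep_detectable(adversary_delay_rate: float, elapsed_since_trusted_time: float) -> bool:
    """Can an onboard-clock-only test notice the creeping delay before it reaches 6 s?

    adversary_delay_rate       : seconds of replay delay introduced per second of real time.
    elapsed_since_trusted_time : seconds since the receiver last held an externally trusted time.
    """
    time_to_break = TESLA_DELAY / adversary_delay_rate
    clock_uncertainty_at_break = QUARTZ_DRIFT * (elapsed_since_trusted_time + time_to_break)
    return TESLA_DELAY > clock_uncertainty_at_break

# With trusted time refreshed one day earlier, a creep of 1 ms per second reaches 6 s in
# ~100 minutes while the quartz uncertainty is still ~0.5 s, so the clock can expose it:
print(creep_detectable(1e-3, 86400))   # True
# A much slower creep (1 microsecond per second) takes ~69 days to reach 6 s, by which time
# the clock's own uncertainty exceeds 6 s, so the clock alone cannot detect it:
print(creep_detectable(1e-6, 86400))   # False
```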
Another method is to compare against an external-to-GNSS trusted time. Every day, billions of devices establish time via rigorous cryptographic authentication over the Internet. Such systems address tangential security concerns, such as Internet-based banking, and typically work well because the Internet does not face SBAS-like bandwidth limitations. A receiver could compare its own time to an Internet- or cellular-based time. This comparison need only loosely bound the GNSS-based time. In other words, the receiver can still utilize the accurate time derived from the GNSS positioning regression, but it can check that this time matches the Internet-based time within a boundary, such as one second. This Internet- or data-link-based time can be rigorously authenticated cryptographically according to the standards of the medium. The original TESLA protocol provides a secure time-synchronization procedure: the Provider would establish an Internet server that accepts and operates on receiver-generated nonces that work within the secure Hash Path (Perrig et al., 2005). Our choice of the word "compare" is intentional here, given the need to address concerns about allowing external time inputs to have access to an otherwise isolated SBAS receiver. This comparison can be achieved without exposing the receiver time to hacking strategies. Other strategies could be used to alert the pilot that the SBAS time is not trusted, such as incorporating the GNSS time estimate into the Air Traffic Control Radar Beacon System. If the beacon-broadcast time were more than six seconds behind the trusted time on the ground, ground services could alert the aircraft of the discrepancy.

Because the ranging signal remains unauthenticated despite rigorous methods established with cryptography, SBAS must take a nuanced approach to assert the loosely time-synchronized assumption.

FULL STACK SIMULATION AND KEY PERFORMANCE INDICATORS (KPIS)
To examine the efficacy of this scheme, we implemented a full-stack simulation of this scheme in MAAST, a Matlab-based SBAS availability simulation tool, and fit the 16 unique MT51 messages within open slots of the SBAS schedule. Therefore, our simulation indicates that SBAS availability and continuity are not affected by this authentication scheme. Figure 5 displays availability maps that compare two cases: one with no authentication and another with faithful authentication via MT50 and MT51, with message information accepted only after authentication. While the availability maps appear slightly different from one another, most noticeably at the service volume boundaries over Alaska and Canada, given the reasonable fidelity of MAAST we consider the results indistinguishable. The pie graphs below the maps indicate the distribution of the different messages with no authentication (left) and with authentication (right). In the authentication case, MT51s are sent in 1 of every 6 messages. While our requirement specifies that MT51 be sent at a rate of 1 of every 17 messages (i.e., about 6%), this simulation schedule replaces the remaining MT63s with MT51s. Given the nearly identical results under the reasonable fidelity of MAAST, we find that these results demonstrate the viability of the scheme presented in this work.
CONCLUSION
This work delineates a complete TESLA-based SBAS authentication scheme that includes OTAR. The scheme relies on three levels of security: (1) 128-bit-security TESLA; (2) 128-bit-security ECDSA; and (3) 256-bit-security ECDSA. Under this scheme, two message types are appended to the schedule. It does not require removing power from the I-channel to support a Q-channel strategy. The strategy is immediately backward compatible because older receivers can ignore the new message types. It is also flexible and can be expanded according to the needs of, and additional feedback from, the SBAS Stakeholders. The flexibility of the scheme derives from the observation that a single message can be used for all cryptographic maintenance. Given reasonable use of onboard clocks, external clocks, and maintenance patterns, we assert the existence of reasonable, nuanced strategies that can mitigate attacks against the loose time-synchronization assumption required by TESLA.

We tested the scheme with a faithful, full-stack simulation that includes full encoding and decoding of messages and the use of an appropriate cryptographic library. Since the appended information fits within the unused slots in the message schedule, our simulation revealed only negligible differences in the performance of the simulated receivers. Moreover, since the scheme uses space within the I-channel and not the Q-channel, no associated power decrease or loss of service at volume boundaries was observed. Therefore, we find that this scheme, or one substantially similar, will be acceptable for use in SBAS authentication.

ACKNOWLEDGMENTS
We gratefully acknowledge the support of the FAA Satellite Navigation Team for funding this work under Memorandum of Agreement #693KA8-19-N-00015.

FIGURE 1 Conceptual diagram of TESLA demonstrating the delayed-release key secrecy schedule. The right section of the diagram follows the left section in time. The diagonally cross-hatched boxes contain information held secret by the Provider. The box recedes each time a Hash Point is released.

FIGURE 2 Conceptual diagram depicting an overview of the entire scheme presented in this work. The objects depicted are defined and described in the sections that follow. Multiple levels of ECDSA authenticate a Hash Path End (HPE in the diagram). Preimage Hash Points (HP in the diagram), together with HMACs, are used to authenticate SBAS messages. Black arrows represent the direction of authentication; blue arrows, hashing operations; and red arrows, HMAC operations. The diagram reads from left to right with increasing time of release by the Provider, i.e., items to the right are released later by the Provider than items to the left.

FIGURE 3 Conceptual diagram of how consecutive MT50 messages relate to each other. The colors correspond to a specific Hash Point along the Hash Path. Each MT50 includes the HMACs of the five previous messages and the Hash Point used for the HMACs sent with the six earlier messages.

FIGURE 4 Conceptual diagram of accommodations made by the counter scheme to perturbations in the MT50 schedule. In the diagram, "m" denotes a standard message and "I" an integrity message. Integrity messages that would be sent on a nominal schedule are marked in blue, and additional integrity messages during an alert are marked in yellow. Note that T_6 does not change in the event of an alert-driven MT50 delay, as shown in red; thus, the Hash Path is preserved.
ALGORITHM 2 Receiver procedures for single-satellite, single-frequency authenticated message distribution with TESLA. While on, the receiver collects MT51 payload segments, associates them via the MT51 metadata, and stores them in three hash tables H1, H2, and H3: H1 holds data for level-1 ECDSA keys; H2 holds data for level-2 ECDSA keys and the associated level-1 signatures on those keys; H3 holds data for TESLA Hash Path Ends and the associated level-2 signatures on those Hash Path Ends. Each element in the hash tables stores metadata such as the key expiration time and the relevant higher-level authenticating key. The receiver awaits receipt of all needed unique MT51 OTAR Payload Segments, called the Authentication Stack, to assert authenticated Hash Path Ends. A Hash Path End element stored in H3 is authenticated if it and its signature are ECDSA-verified by an authenticated element in H2. A public ECDSA key element stored in H2 is authenticated if it and its signature data are ECDSA-verified by an authenticated element in H1. A public ECDSA key element stored in H1 is authenticated if it was prestored from the CA. Upon receipt of an MT50, the receiver derives the candidate Hash Point via Equation (1) and, bounded by the maximum Hash Path length, hashes it down toward an unexpired Hash Path End in H3 that is ECDSA-authenticated through level-1; if the chain verifies, the associated messages and their HMACs are checked and accepted. The hash function is assumed secure: there is no known efficient algorithm to compute an input Hash Point from its hash output.

TABLE 1 Bit allocation for the proposed MT50: preamble, MT, reserved bits, HMAC1-HMAC5, and Hash Point. Note: per the SBAS definitions, there are 250 bits per message.

TABLE 3 Bit allocation of the MT51 payload metadata.

TABLE 4 List of all unique MT51s in an example Authentication Stack as defined in Section 2.5 (e.g., unique MT51 number 1 carries the AES-128 key used to decrypt the receiver-stored level-1 ECDSA public key). Note: this set is broadcast repeatedly by the Provider for cold-start receivers. A receiver must receive all of the unique MT51s to initiate authentication.

TABLE 5 Delineation of the number of MT51s required to complete a single OTAR for a specific key.

TABLE 6 Bit allocation for a specific MT51 that authenticates a subset of ephemerides with a 128-bit payload.
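To make the Algorithm 2 hash-down step concrete, the following sketch hashes a received Hash Point toward an ECDSA-authenticated Hash Path End, bounded by a maximum Hash Path length so the receiver cannot loop forever. The salted, 128-bit-truncated hash step and the set-based table of authenticated Hash Path Ends are simplifying assumptions, not the normative receiver procedure.

```python
import hashlib

def hash_step(hash_point: bytes, salt: bytes) -> bytes:
    """One (salted) step along the Hash Path, truncated to 128 bits as in this design."""
    return hashlib.sha256(salt + hash_point).digest()[:16]

def authenticate_hash_point(candidate: bytes, salt: bytes,
                            authenticated_hash_path_ends: set,
                            max_path_length: int) -> bool:
    """Return True if the candidate hashes down to an ECDSA-authenticated Hash Path End."""
    point = candidate
    for _ in range(max_path_length):          # bound prevents an infinite loop
        if point in authenticated_hash_path_ends:
            return True
        point = hash_step(point, salt)
    return point in authenticated_hash_path_ends
```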
19,398.4
2023-01-01T00:00:00.000
[ "Engineering", "Computer Science" ]
From p-Values to Posterior Probabilities of Null Hypotheses

Minimum Bayes factors are commonly used to transform two-sided p-values into lower bounds on the posterior probability of the null hypothesis, in particular the bound −e·p·log(p). This bound is easy to compute and explain; however, it does not behave as a Bayes factor. For example, it does not change with the sample size. This is a very serious defect, particularly for moderate to large sample sizes, which is precisely the situation in which p-values are the most problematic. In this article, we propose adjusting this minimum Bayes factor with the sample size information to approximate an exact Bayes factor, not only when p is a p-value but also when p is a pseudo-p-value. Additionally, we develop a version of the adjustment for linear models using the recent refinement of the Prior-Based BIC.

Introduction
By now, it is well known by practitioners that p-values are not posterior probabilities of a null hypothesis, which is what science would need to declare a scientific finding. So p-values, and particularly the threshold of 0.05, need to be recalibrated. Two widespread practical attempts are (i) the so-called Robust Lower Bound on Bayes factors, BF ≥ −e · p · log(p) [1], and (ii) the replacement of the ubiquitous α = 0.05 by α* = 0.005 [2]. These suggestions, which are an improvement over usual practice, fall short of being a real solution, mainly because the dependence of the evidence on the sample size is not considered. Still, the Robust Lower Bound is useful, since it is valid from small sample sizes onward and depends only on the p-value.

It is known that the evidence of a p-value against a point null hypothesis depends on the sample size. In [3], the authors consider p-values in linear models and propose new monotonic minimum Bayes factors that depend on the sample size and converge to −e · p · log(p) as the sample size approaches infinity, which implies that the bound is not consistent, unlike actual Bayes factors. It turns out that the maximum evidence for an exact two-tailed p-value increases with decreasing sample size. There are several proposals in the literature; most do not depend on the sample size, and those that do remain Robust Lower Bounds; however, none of them behaves like a real Bayes factor. In this article, we propose to adjust the Robust Lower Bound −e · p · log(p) so that it behaves in a similar or approximate way to actual Bayes factors for any sample size.

A further complication arises, however, when the null hypotheses are not simple, that is, when they depend on unknown nuisance parameters. In this situation, what are usually called p-values are only pseudo-p-values [4] (p. 397). So, we first need to extend the validity of the Robust Lower Bound to pseudo-p-values. The effect of adjusting this minimum Bayes factor with the sample size is shown in a simulation in Section 5.1.

The outline of the article is as follows: In Section 2, we define pseudo-p-values using the p-value definition of [4] (p. 397) and extend the validity of the Robust Lower Bound to them. In Section 3, we present the adaptive significance levels that will be used for incorporating the sample size in the lower bound: the general adaptive significance level presented in [5] and the refined version for linear models developed in [6]; in both cases, we use versions calibrated with the Prior-Based BIC (PBIC) [7]. In Section 4, we derive adaptive approximate Bayes factors, and we apply them to pseudo-p-values in Section 5. We close in Section 6 with some final comments.
Valid p-Values and Robust Lower Bound
Under the null hypothesis, p-values are well known to have a Uniform(0, 1) distribution; in [4] (p. 397), a more general definition is given.

Definition 1. A p-value p(X) is a statistic satisfying 0 ≤ p(x) ≤ 1 for every sample point x. Small values of p(X) give evidence that H_1: θ ∈ Θ_0^c is true, where Θ_0 is some subset of the parameter space and Θ_0^c is its complement. A p-value is valid if, for every θ ∈ Θ_0 and every 0 ≤ α ≤ 1, P_θ(p(X) ≤ α) ≤ α.

Based on this definition, we can say that there are valid p-values that are uniformly distributed on (0, 1), that is,

P_θ(p(X) ≤ α) = α for every θ ∈ Θ_0 and every 0 ≤ α ≤ 1,  (1)

and others that are not, that is, for which there is at least one α such that

P_θ(p(X) ≤ α) < α.  (2)

Remark 1. We consider any valid p-value complying with (2) a pseudo-p-value.

The "Robust Lower Bound" (RLB), as we call it here, proposed by [1], is

B(p) ≥ −e · p · log(p),  for p < 1/e.  (3)

The authors consider that, under the null hypothesis, the distribution of the p-value p(X) is Uniform(0, 1). Alternatives are typically developed by considering alternative models for X, but the results then end up being quite problem-specific. An attractive approach is instead to directly consider alternative distributions for p itself. In effect, they consider that, under H_1, the density of p is f(p|ξ), where ξ is an unknown parameter. So, consider testing H_0: p has a Uniform(0, 1) density versus H_1: p has density f(p|ξ). If the test statistic T has been appropriately chosen so that large values of T(X) are evidence in favor of H_1, then the density of p under H_1 should be decreasing in p. A class of decreasing densities for p that is very easy to work with is the class of Beta(ξ, 1) densities, for 0 < ξ ≤ 1, given by f(p|ξ) = ξ p^{ξ−1}. The uniform distribution (i.e., H_0) arises from the choice ξ = 1 [1]. The bound is B_L(p) = inf_{π} B_π(p), where B_π(p) is the Bayes factor of H_0 to H_1 for a given prior density π(ξ) on this alternative.

Theorem 1. The RLB_ξ is a valid p-value for ξ ≥ 1. Proof: see Appendix A.

Adaptive α with PBIC Strategy
The Bayesian literature has, for several decades, criticized the implementation of hypothesis testing with fixed significance levels and, in particular, the use of the scale p-value < 0.05. An adaptive α allows us to adjust the statistical significance with the amount of information; see [5,11,12]. The adaptive values we work with in this section were calculated so that they lead to results equivalent to those obtained with a Bayes factor. In [5], the authors present an adaptive α based on BIC, in which C_α is a calibration constant; strategies for calculating it are presented in [5]. It yields a consistent procedure; it alleviates the problem of the divergence between practical and statistical significance; and it makes it possible to perform Bayesian testing by computing intervals with the calibrated α-levels.

An adaptive α is also presented in [6], but this time in a version refined for nested linear models, with calibration based on the Prior-Based Bayesian Information Criterion (PBIC) [7]. Here, b = |X_j^t X_j| / |X_i^t X_i|, where X_i and X_j are the design matrices, and d_{m_l}(1 + n_{e_{m_l}}), with l = i, j, corresponds to each model. Here, n_{e_{m_l}}, with l = i, j, refers to The Effective Sample Size (TESS) corresponding to that parameter; see [7]. The adaptive α in (5) can also be presented using the PBIC strategy (a strategy not considered in [5]), and the corresponding expression is obtained. Note that this adaptive α still has the BIC structure, since the expression χ²_α(q) + q log(n) remains.
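For reference, the Robust Lower Bound of Equation (3) is trivial to compute; a small sketch follows, using the standard convention that the bound applies for p < 1/e, together with the lower bound on the posterior probability of H0 that it implies under equal prior odds.

```python
import math

def robust_lower_bound(p: float) -> float:
    """Robust Lower Bound on the Bayes factor B_01: -e * p * log(p), valid for p < 1/e."""
    if not 0.0 < p < 1.0 / math.e:
        raise ValueError("the bound -e*p*log(p) applies for 0 < p < 1/e")
    return -math.e * p * math.log(p)

def posterior_prob_null(bayes_factor_01: float, prior_odds: float = 1.0) -> float:
    """Posterior probability of H0 implied by a Bayes factor (equal prior odds by default)."""
    posterior_odds = prior_odds * bayes_factor_01
    return posterior_odds / (1.0 + posterior_odds)

# Example: p = 0.05 gives a minimum Bayes factor of about 0.41 and a posterior
# probability of H0 of at least about 0.29 -- far from the "significant" impression.
print(robust_lower_bound(0.05), posterior_prob_null(robust_lower_bound(0.05)))
```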
Example: Binomial Models
Consider comparing two binomial models, S_1 ∼ binomial(n_1, p_1) and S_2 ∼ binomial(n_2, p_2), via the test H_0: p_1 = p_2 versus H_1: p_1 ≠ p_2. Defining n = n_1 + n_2 and p̂ as the MLE of p_1 − p_2, then (7) gives the corresponding adaptive level α_n. Table 1 shows the behavior of this adaptive α_n for α = 0.05 and different values of n_1 and n_2.

Adjusting RLB_ξ Using the Adaptive α
In this section, we combine (3) with the formulas for the adaptive α in (6) and (7) to adjust RLB_ξ and obtain an approximation to an objective Bayes factor. Indeed, we adjust the RLB_ξ through the expression B(α) = B_L(α, ξ_0) · g(·), where g is determined in such a way that, when B(α) is evaluated at (6) or (7), it converges to a constant (this allows us to obtain equivalent results from the frequentist and Bayesian points of view; that is, the decision does not change). Substituting p in (3) by the adaptive α value in (7) results in expression (9). For a Uniform(0, 1) p-value with ξ_0 = 1, this expression simplifies to (10). The refined version of this calibration for linear models, (11), is obtained when (3) is evaluated at (6); in this case, we only consider ξ_0 = 1.

Balanced One-Way ANOVA
Suppose we have k groups with r observations each, for a total sample size of kr, and let H_0: µ_1 = · · · = µ_k = µ vs. H_1: at least one µ_i is different. The design matrices for both models follow, and the adaptive α for the linear model is obtained in accordance with what was presented in [6]. Here, the number of replicates r is The Effective Sample Size (TESS). Therefore, the approximate Bayes factor for this test is calculated with (8). A very important case arises when k = 2; for this situation, the formula simplifies further.

Obtaining Bounds for P(H_0 | Data)
In this section, we use (9) and (11) to produce bounds for the posterior probability of the null hypothesis H_0, since for any Bayes factor B_01 a lower bound for the posterior probability of the null hypothesis (assuming equal prior probabilities of the hypotheses) can be obtained as P(H_0 | data) ≥ (1 + 1/B_01)^{−1}. (13) Figure 2 shows these posterior probabilities (called P_{RLB_{ξ_0}}) for different values of ξ_0. To simplify the use of these Bayes factors, we call BFG_{ξ_0} the Bayes factor of Equation (9), BFG the Bayes factor of Equation (10), and BFL the Bayes factor of Equation (11).

Testing Equality of Two Means
Consider comparing two normal means via the test H_0: µ_1 = µ_2 versus H_1: µ_1 ≠ µ_2, first when the associated known variances σ²_1 and σ²_2 are not equal, and then with unknown variance, assuming the priors π(σ²) ∝ 1/σ² under both H_0 and H_1. The Bayes factor (14) is obtained, where t = |Ȳ|/(s/√n) is a t-statistic with l = n − 1 degrees of freedom and n = n_1 + n_2; see [13]. Figure 3 shows the posterior probability of the null hypothesis H_0 when n = 50 and n = 100 for the Robust Lower Bound with ξ_0 = 1 (called P_RLB), the Bayes factor BFL (called P_BFL), the Bayes factor BFG (called P_BFG), and the Bayes factor BF_01 (called P_{BF_01}). Note that the posterior probability with BF_01 when τ_0 = 6 looks very similar to the results obtained using the Bayes factors BFL and BFG.

We now present a simulation showing that our adjustment, or calibration, of RLB_ξ works quite similarly to an exact Bayes factor. We perform the following experiment: we simulate r data points from each of two normal distributions, N(µ_1, σ) and N(µ_2, σ), and we reproduce this K times. For all K simulations, µ_1 − µ_2 = 0. For all K replicates, we test the hypotheses H_0: µ_1 = µ_2 vs. H_1: µ_1 ≠ µ_2, and then we count how many of the p-values lie between 0.05 − ε and 0.05.
Note that all of these p-values would be considered sufficient to reject H_0 if α = 0.05 were selected. Finally, we determine the proportion of these "significant" p-values obtained from samples where H_0 is true.

Figure 3. Posterior probability of the null hypothesis H_0 for n = 50 and n = 100 using the Bayes factor RLB_{ξ_0} with ξ_0 = 1, the Bayes factor BF_01, and the Bayes factors BFL and BFG.

Table 2 presents the mean percentage of these significant p-values coming from samples where H_0 is true, for 100 iterations of the simulation scheme with K = 8000, σ = 1, and ε = 0.05, for r = 10, 50, 100, 500, and 1000. As expected, the distribution of the p-values behaved as Uniform(0, 1) under H_0, since H_0 was assumed true in the K replicates. Table 2 also presents the proportion of posterior probabilities of H_0 greater than or equal to 0.5 (50%) when using the RLB_ξ, when it is corrected according to the method suggested in this document (Equations (10) and (11)), and when an exact Bayes factor (Equation (14)) is used. It is clear that the method suggested here behaves very similarly to an exact Bayes factor.

Fisher's Exact Test
This is an example in which the p-value is a pseudo-p-value (see Example 8.3.30 in [4]). Let S_1 and S_2 be independent observations with S_1 ∼ binomial(n_1, p_1) and S_2 ∼ binomial(n_2, p_2). Consider testing H_0: p_1 = p_2 versus H_1: p_1 ≠ p_2. Under H_0, if we let p be the common value of p_1 = p_2, the joint pmf of (S_1, S_2) can be written down, and the conditional pseudo-p-value (15) is a sum of hypergeometric probabilities, conditioning on s = s_1 + s_2.

Remark 2. It does not seem simple to estimate the appropriate ξ_0 that best fits the pseudo-p-value in (15); in Figure 4, some arbitrary possibilities are given.

It is important to note that in Bayesian tests with a point null hypothesis, it is not possible to use continuous prior densities, because these distributions (as well as the posterior distributions) assign zero probability to the event p_1 = p_2. A reasonable approximation is to give the event p_1 = p_2 a positive probability π_0 and to give p_1 ≠ p_2 the prior distribution π_1 g_1(p), where π_1 = 1 − π_0 and g_1 is proper. One can think of π_0 as the mass that would be assigned to the real null hypothesis H_0 if one had not preferred to approximate it by the point null hypothesis. The Bayes factor then follows. Now, if we take g_1(p) = Beta(a, b) such that E(p) = a/(a + b) equals the common value under p_1 = p_2, then

BF_Test = [B(a, b) / B(s + a, n_1 + n_2 − s + b)] · p^s (1 − p)^{n_1 + n_2 − s}.

Figure 4 shows the posterior probability of the null hypothesis H_0 when n = n_1 + n_2 = 50 and 100, for the Robust Lower Bound, the Bayes factor BFG_{ξ_0} (called P_{BFG_{ξ_0}}), the Bayes factor BFG (called P_BFG), and the Bayes factor BF_Test (called P_{BF_Test}). We can note that all the P_{BFG_{ξ_0}} are comparable, even though in the case ξ_0 = 1 (P_BFG) it is a p-value and not a pseudo-p-value.

Figure 4. Posterior probability of the null hypothesis H_0 for n = 50 and n = 100 using the Bayes factor RLB_{ξ_0} with ξ_0 = 1, the Bayes factor BF_Test, the Bayes factor BFG_{ξ_0}, and the Bayes factor BFG.
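Referring back to Fisher's exact test above, the conditional pseudo-p-value in (15) is just a tail sum of hypergeometric probabilities; a minimal one-sided sketch, with an invented numerical example, follows.

```python
from math import comb

def fisher_conditional_p(s1: int, n1: int, s2: int, n2: int) -> float:
    """One-sided conditional pseudo-p-value for H0: p1 = p2 (Fisher's exact test).

    Conditions on the total number of successes s = s1 + s2 and sums the
    hypergeometric probabilities of outcomes at least as extreme as s1.
    """
    s = s1 + s2
    denom = comb(n1 + n2, s)
    upper = min(n1, s)
    return sum(comb(n1, k) * comb(n2, s - k) for k in range(s1, upper + 1)) / denom

# Example: 9 successes out of 12 in the first group versus 4 out of 12 in the second
# gives a pseudo-p-value of roughly 0.05.
print(fisher_conditional_p(9, 12, 4, 12))
```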
Linear Regression Models
Consider comparing two nested linear models, M_3: y_l = λ_1 + λ_2 x_{l2} + λ_3 x_{l3} + ε_l and M_2: y_l = λ_1 + λ_2 x_{l2} + ε_l, via the test H_0: M_2 versus H_1: M_3, with 1 ≤ l ≤ n, where the errors ε_l are assumed to be independent and normally distributed with unknown residual variance σ². According to Equation (3) in [6,7], the corresponding adaptive level is obtained, where s²_3 is the variance of x_{l3}, ρ_23 is the correlation between x_{l2} and x_{l3}, and X* = (1_n | x_{l2}).

As an example, we analyze a data set taken from [14], which can be accessed at http://academic.uprm.edu/eacuna/datos.html (accessed on 13 January 2022). We want to predict the average mileage per gallon (denoted by mpg) of a set of n = 82 vehicles using four possible predictor variables: cabin capacity in cubic feet (vol), engine power (hp), maximum speed in miles per hour (sp), and vehicle weight in hundreds of pounds (wt). Through the Bayes factors BFG and BFL, we want to choose the best model to predict the average mileage per gallon by calculating the posterior probability of the null hypothesis of the following test: H_0: M_2: mpg_l = λ_1 + λ_2 wt_l + ε_l vs. H_1: M_3: mpg_l = λ_1 + λ_2 wt_l + λ_3 sp_l + ε_l. With α = 0.05, q = 1, and j = 3, the posterior probabilities for the null hypothesis H_0 are P_BFL = 0.9253192 and P_BFG = 0.7209449. The use of this posterior probability changes the inference in both cases, since the p-value of the F test is p = 0.0325, which is smaller than 0.05.

Findley's Counterexample
Consider the following simple linear model [15], with l = 1, 2, 3, . . . , n, and compare the models H_0: θ = 0 and H_1: θ ≠ 0. This is a classical and challenging counterexample against BIC and the Principle of Parsimony. In [7], BIC is shown to be inconsistent in this problem, whereas PBIC is shown to be consistent. Here, we show, through the posterior probabilities of the null hypothesis, that the Bayes factor BFG (based on BIC) is inconsistent, while the Bayes factor BFL (based on PBIC) is consistent. We perform the analysis in two contexts: first, when n grows and α = 0.05 or α = 0.01 is fixed; second, when n is fixed and 0 < α < 0.05. Figures 5 and 6 show, through the posterior probability of the null hypothesis H_0, the consistency of the Bayes factor based on PBIC (P_BFL), as well as the inconsistency of the Bayes factor based on BIC (P_BFG).

Final Comments
1. Lower bounds have been an important development to give practitioners alternatives to classical testing with fixed α levels. A deep-seated problem with the useful bound −e · p · log(p) is that it depends on the p-value, which it should, but it is static, not a function of the sample size n. This limitation makes the bound of little use for moderate to large sample sizes, which is precisely where a correction to p-values is arguably most needed.
2. The approximation developed here, as a function of the p-value and the sample size, has a distinct advantage over other approximations, such as BIC, in that it is a valid approximation for any sample size.
3. The (approximate) Bayes factors (9) and (11) are simple to use and provide results equivalent to sample-size-sensitive Bayes factors for hypothesis tests.

In this article, we also extended the validity of the approximation to "pseudo-p-values," which are ubiquitous in statistical practice. We hope that this development gives the practice of statistics tools that bring the posterior probability of hypotheses closer to everyday statistical practice, in which p-values (or pseudo-p-values) are calculated routinely. This allows an immediate and useful comparison between raw p-values and (approximate) posterior odds.
4,746
2023-04-01T00:00:00.000
[ "Mathematics" ]
Dosimetric and radiobiological comparison of simultaneous integrated boost radiotherapy for early stage right side breast cancer between three techniques: IMRT, hybrid IMRT and hybrid VMAT

Purpose: This study aimed at evaluating the clinical impact of full intensity-modulated radiotherapy (IMRT), hybrid IMRT (H-IMRT), and hybrid volumetric-modulated arc therapy (H-VMAT) for early-stage breast cancer with a simultaneous integrated boost (SIB), in terms of plan quality and second cancer risk (SCR). Methods: Three different plans were designed in full IMRT, hybrid IMRT, and hybrid VMAT for each of twenty patients with early-stage breast cancer. Target quality, organs-at-risk (OARs) sparing, and SCR were compared among the three plans for each case. Results: Compared with H-IMRT, IMRT plans showed deterioration in terms of the D2% of the SIB, the V10 of the ipsilateral lung, and the excess absolute risk (EAR) to the contralateral lung (C-Lung) and esophagus. The D2% and homogeneity index (HI) of the SIB, the V5 of the ipsilateral lung (I-Lung), the Dmean of the esophagus, and the EAR to the C-Lung and the esophagus with hybrid VMAT increased markedly, by 0.63%, 10%, 17.99%, 149.27%, 230.41%, and 135.29%, respectively (p = 0.024; 0.025; 0.046; 0.011; 0.000; 0.014). The Dmean of the heart and the EAR to the contralateral breast (C-Breast) and C-Lung with full IMRT were significantly decreased in comparison with H-VMAT (4.67%, p = 0.033; 26.76%, p = 0.018; 48.05%, p = 0.036). Conclusion: The results confirmed that H-IMRT achieves better target quality and OARs sparing than IMRT and H-VMAT for SIB radiotherapy of early-stage right breast cancer. H-IMRT was the best treatment option, while H-VMAT performed the worst among the three plans in terms of SCR to peripheral OARs.

Introduction
Breast cancer is usually diagnosed as an early-stage female cancer, and its 5-year specific survival rate is up to 98.9% [1]. Whole-breast radiotherapy (RT) with a boost to the tumor bed is considered the adjuvant therapy after breast-conserving surgery for early-stage breast cancer [2,3]. Studies have confirmed that patients benefit from RT and tumor-bed boosting [3,4]. Various RT techniques, such as three-dimensional conformal radiation therapy (3D-CRT), intensity-modulated radiation therapy (IMRT), and volumetric-modulated arc therapy (VMAT), have been adopted for treating breast cancer. Utilizing two opposed, wedged, tangential fields, 3D-CRT treats the whole breast with multi-leaf collimators (MLCs) used to shield the adjacent normal tissue. Many studies [5-7] have confirmed that tangential-field techniques, such as the dynamic wedge and field-in-field techniques used for whole-breast radiation, can improve dose uniformity in the tumour. 3D-CRT has the advantage of improving local control, but the toxicities associated with radiation to the organs at risk (OARs) are a concern [8]. Dividing each beam into smaller beamlets, IMRT delivers a non-uniform fluence to optimize the dose distribution [8]. VMAT rotates the gantry while radiating continuously and simultaneously modulates the dose rate (DR) and the shape of the MLCs to achieve highly conformal dose coverage [9]. IMRT and VMAT have been reported to have clear advantages in dose homogeneity and coverage compared with 3D-CRT [9,10]. However, IMRT might be more susceptible to setup error and shape changes of the breast in whole-breast RT [10]. To reduce the effects of these geometrical uncertainties, Nakamura et al.
[11] proposed a hybrid IMRT plan comprising two opposed tangential open beams and two inverse-planned IMRT beams. They showed that the hybrid IMRT had excellent performance in target quality and in offsetting the geometrical uncertainties for patients who underwent whole breast RT [12]. With the advancement of medical technology, systemic therapy and radiation therapy techniques have greatly lengthened the life span of women with breast cancer. This, however, may increase the likelihood of radiation-induced secondary cancers. RT inevitably results in radiation damage to normal tissue and therapy-related second cancer risk (SCR), as confirmed by previous studies [12,13]. With the improvement of the efficacy and overall survival of breast cancer patients, the SCR and radiation toxicity caused by RT have gradually become a research focus. Although IMRT, hybrid IMRT, hybrid VMAT and VMAT have been shown to improve dose conformity and reduce dose to organs at risk (OARs) compared with 3D-CRT, organ doses to out-of-field regions were greater with IMRT or VMAT than with 3D-CRT, because the former methods produce more scatter and require more monitor units (MUs) [14-17]. Early studies showed that 3D-CRT possesses a lower SCR than IMRT and VMAT [18,19]. In clinical breast cancer treatment, however, the uniformity of the target area and the dose to normal tissue should be considered simultaneously. When considering early stage breast cancer with SIB radiotherapy treatment, the 3D-CRT technique may result in worse target uniformity compared with IMRT and VMAT techniques. Therefore, 3D-CRT has usually been replaced by modern intensity modulation technology in SIB treatment for early stage breast cancer. To pursue excellent target dose coverage and OARs sparing, and also to lower the SCR and radiation toxicity, selecting a reasonable RT modality is critical for treating breast cancer. To the best of our knowledge, the clinical impact of hybrid VMAT for early stage breast cancer with SIB treatment has not been studied. This study aimed to assess the plan quality and SCR among three treatment modalities (full IMRT, hybrid IMRT, and hybrid VMAT) for SIB treatment of early stage breast cancer. Patient preparation Twenty females aged between 31 and 64 years, with early-stage right-sided breast cancer after breast-conserving surgery, were randomly selected. None of the patients had contraindications for RT. This study was approved by the ethics committee of National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, and informed consent was acquired from each enrolled patient. All of the patients were positioned with a breast bracket and a fixed foam plate on the affected side of the lower limbs. The computed tomography (CT) scans were acquired on a Philips Brilliance Big Bore CT simulator (Philips, Holland) in 5-mm-thick slices, in the supine position, with the scan scope from the mandible to the thorax. In addition, all of the adjacent normal tissues, such as the heart, lung, esophagus, and contralateral breast, were completely covered. Contouring of target volumes and OARs Target volumes and OARs were delineated on the Eclipse treatment planning system (Version 13.6, Varian Medical Systems Inc.). The clinical target volume (CTV) and the boost region were delineated by the same radiation oncologist on each CT data set.
The CTV was the whole breast tissue identifiable on the CT scan, assisted by wire markers that were placed around the palpable breast tissue during the simulation. The CTV was limited posteriorly by the intercostal front and retracted 5 mm from the skin. The boost region encompassed the surgical bed or seroma. The planning target volume (PTV) was expanded 5 mm based on the CTV, excluding the heart. The PTV was then retracted 5 mm from the skin and limited posteriorly by the intercostal front. The boost region was expanded by 5 mm in all directions to create the SIB (simultaneous integrated boost) volume. The contoured OARs were the contralateral breast (C-Breast), heart, spinal cord, esophagus, and ipsilateral (I-Lung) and contralateral lungs (C-Lung). Before RT planning, the lead wire marked on the body surface during CT positioning was also dealt with by overriding its CT values to -1000 HU to reduce its impact on the dose distribution. To avoid the target receiving an insufficient radiation dose because of changes in target size due to edema during treatment, or residual displacement when breathing is not properly controlled, a 10 mm artificial expansion with soft-tissue-equivalent HU was added to the breast region and the PTV contours in the external direction. Figure 1 shows the field distributions on CT images for the three RT techniques. Three different RT plans (full IMRT, hybrid IMRT, and hybrid VMAT) were created for each case in the Eclipse TPS. Utilizing 6 MV photon beams generated by a Varian IX linear accelerator, dose optimization and calculations were done in the Eclipse TPS for all of the plans. The Dose-Volume Optimizer and Progressive Resolution Optimizer algorithms were used for IMRT and VMAT dose optimization, respectively, and the Anisotropic Analytical Algorithm was adopted for final dose calculations [20,21]. For the purpose of comparison, all the plans were normalized so that 95% of the PTV was covered by 43.5 Gy. All the plans were optimized with the same dose constraints [22], as detailed in Table 1. Full IMRT The full IMRT plans contained two opposed tangential fields plus four additional fields at angles of 10° or 20° to the two tangential fields, directed away from the body. The collimator angles and jaw positions of all of the fields were adjusted before dose optimization to maximize the protection of the lungs. All of the fields were delivered with the dynamic sliding-window IMRT delivery technique and a fixed DR of 600 monitor units (MUs)/min. Hybrid IMRT The hybrid IMRT plans comprised two opposed tangential open beams plus three IMRT beams. Two of the three IMRT beams were at angles of 10° to the two tangential fields, directed away from the body, and the third IMRT beam had an angle of about 30° to 45° to the upper tangential field, avoiding exposure to the heart and contralateral breast. To maximize the protection of the lungs, the collimator angles of the three IMRT beams were adjusted, and the jaw positions of the third IMRT beam were adjusted and fixed, adapting to the shape of the SIB before dose optimization and calculation. The adopted delivery technique and DR were the same as those of the full IMRT plans. The open beams contributed 80% of the total dose, whereas the inversely optimized IMRT beams contributed the remaining prescription dose. Hybrid VMAT The hybrid VMAT plans comprised two opposed tangential open beams and a half arc beam.
The gantry of the arc beam rotated from one tangential angle to the other tangential angle. The maximum DR of the arc beam was set to 600 MUs/min. The open beams contributed 80% of the total dose, whereas the inversely optimized arc beam contributed the remaining prescribed dose. For the SIB and PTV-SIB of all of the plans, the prescribed doses were 49.50 and 43.50 Gy in 15 fractions, respectively. The prescribed 95% isodose covered no less than 95% of the target volume [23], and the percentage volume of the target volume receiving over 110% of the prescribed dose was no more than 2%. The dose constraints for the adjacent OARs of contralateral breast, heart, ipsilateral lung, contralateral lung, spinal cord, and esophagus were defined according to published literature [24]. According to the planning method of Giorgia Nicolini [25], in order to avoid the target receiving an insufficient radiation dose because of changes in target size due to edema during treatment, or residual displacement when breathing is not properly controlled, we gave a 10 mm artificial expansion with soft-tissue-equivalent HU to the body in the breast region and to the PTV contours in the external direction for the full IMRT and VMAT plans. Treatment plan evaluation The data collected from the Dose-Volume Histograms (DVHs) of all of the plans were evaluated in terms of target coverage and OARs sparing. SIB: the maximum dose (Dmax), the mean dose (Dmean), and V95% of SIB were assessed. The Dmax of SIB, also named D2%, is defined as the dose received by 2% of the target volume, and V95% is defined as the percentage volume of the target volume receiving 95% of the prescribed dose. The conformity index (CI) and homogeneity index (HI) were also evaluated. The CI of SIB is defined, following the Paddick conformity index, as CI = TV_PTV^2 / (TV × PIV), where TV_PTV is the SIB volume receiving 95% of the prescription dose, TV is the total volume of the SIB, and PIV is the total volume covered by the prescribed 95% isodose. The HI of SIB was assessed using HI = (D5% − D95%)/Dmean, where D5% and D95% are the minimum doses delivered to 5% and 95% of the SIB, respectively. PTV-SIB: the D2%, the Dmean, V95%, and CI of PTV-SIB were assessed. These indicators were defined as described above. OARs: the Dmax and Dmean of the contralateral breast, heart, spinal cord and esophagus, and the Dmean of the contralateral lung were used for dosimetric analysis. The V5 (the percentage volume receiving 5 Gy), V10 (the percentage volume receiving 10 Gy), V20 (the percentage volume receiving 20 Gy), V30 (the percentage volume receiving 30 Gy), and Dmean of the ipsilateral lung and combined lung were also evaluated. SCR calculations The SCR caused by RT to normal tissues can be assessed by the excess absolute risk (EAR) model, as proposed by Schneider [23,26]. The EAR of developing a solid cancer after exposure to radiation has been estimated from data of the atomic bomb survivors for different kinds of solid cancer and describes the absolute difference in cancer rates between persons exposed to a dose d and those not exposed beyond the natural background exposure, per 10,000 person-years per Gy. Equation (1) can be utilized to calculate the SCR of an organ [27,28], where V_T is the total organ volume assessed for secondary carcinogenesis, V(D_i) represents the organ volume receiving the dose D_i, and the parameter β_EAR is the slope of the dose-response curve in the low-dose region.
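As a worked illustration of the conformity and homogeneity indices just defined, the following minimal Python sketch evaluates both (the numbers are invented for illustration only and are not patient data; the function names are ours):

def paddick_ci(tv_ptv, tv, piv):
    # Paddick conformity index: CI = TV_PTV^2 / (TV * PIV), with TV_PTV the target
    # volume covered by the 95% prescription isodose, TV the total target volume,
    # and PIV the total volume enclosed by that isodose (all in the same units).
    return tv_ptv ** 2 / (tv * piv)

def homogeneity_index(d5, d95, d_mean):
    # HI = (D5% - D95%) / Dmean; values closer to 0 indicate a more homogeneous dose.
    return (d5 - d95) / d_mean

# Illustrative (non-clinical) values for an SIB volume, in cm^3 and Gy:
print(round(paddick_ci(tv_ptv=58.2, tv=60.0, piv=63.5), 3))          # ~0.889; closer to 1 is more conformal
print(round(homogeneity_index(d5=51.0, d95=48.3, d_mean=49.6), 3))   # ~0.054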
Equation (2), RED(D_i), represents the mechanistic dose-response model, which describes the fractionation effects and cell killing; R is a parameter that represents the repopulation or repair ability of normal tissues between two dose fractions, and the parameter α′ was calculated by Eq. (3), where D_T is the prescribed dose of 49.50 Gy to the SIB in this study and d_T represents the corresponding fraction dose of 3.3 Gy. Given by Eq. (4), µ(x, a) expresses the modifying function, where γ_e and γ_a are the age-modifying parameters. In this study, the EAR was investigated for the organs of the contralateral breast, contralateral lung, ipsilateral lung, and esophagus. A value of α/β = 3 Gy was assumed for all of the organs needed to evaluate EAR, and all of the other parameters used in the EAR calculation were taken from previous research [27] and are shown in Table 2. Statistical analysis All the parameters were calculated from the DVHs. Statistical analyses were carried out using IBM SPSS Statistics version 21 (SPSS Inc., Armonk, NY). A paired t-test was performed to analyze the differences among the three techniques, and a p value < 0.05 was considered statistically significant. Target volume The comparison of isodose lines from 500 to 4950 cGy for a selected case is illustrated in Fig. 2. The DVHs of SIB and PTV-SIB of one representative case are displayed in Fig. 3a, b, respectively. The parameters D2%, Dmean, V95%, CI, and HI were compared to evaluate the quality of target dose coverage. For SIB, the hybrid IMRT obtained a lower D2% than both full IMRT and hybrid VMAT (p < 0.05) and achieved a better HI than the hybrid VMAT (p < 0.05). For the PTV-SIB, the V95% of the hybrid IMRT (99.37 ± 0.51) was better than that of the full IMRT and the hybrid VMAT (98.99 ± 0.42, 99.03 ± 0.67). The findings on SIB and PTV-SIB are listed in Table 3. OARs The DVHs of the ipsilateral lung (I-Lung), contralateral lung (C-Lung), heart, contralateral breast (C-Breast), esophagus, and spinal cord of one representative case are displayed in Fig. 4a-f, respectively. The delivered doses to the OARs are listed in Table 4. Compared with the hybrid IMRT, the V5 of the ipsilateral lung and the Dmean of the esophagus with hybrid VMAT increased by 17.99% and 149.27%, respectively (p = 0.046; 0.011), the V10 of the ipsilateral lung with full IMRT increased by 18.52% (p = 0.013), and the Dmean of the heart with hybrid VMAT increased markedly, by 4.67%, compared with full IMRT (p = 0.033). SCR calculations The EAR of the organs of the contralateral breast, contralateral lung, ipsilateral lung, and esophagus with the three treatment modalities are shown in Table 5. Compared with hybrid VMAT, the EAR to the contralateral breast with full IMRT and hybrid IMRT was decreased by 26.76% and 33.48%, respectively (p = 0.018; 0.031), and the EAR to the C-Lung with full IMRT was decreased by 48.05% (p = 0.036). Discussion and conclusion Since studies evaluating the hybrid IMRT and hybrid VMAT for early-stage breast cancer with SIB are rare, a comparison of the target dose coverage, OARs sparing, and SCR among full IMRT, hybrid IMRT, and hybrid VMAT for treating early-stage breast cancer with SIB is extremely relevant. This study aimed to evaluate the three RT plans, with the expectation of bringing more clinical options to RT with SIB for early-stage right-sided breast cancer. IMRT showed a significant advantage in target dose coverage and surrounding OARs sparing for left-sided breast cancer after breast-conserving surgery [8-10].
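For reference, the EAR machinery of Eqs. (1)-(4) reduces in practice to a volume-weighted sum over the bins of the differential DVH. The following minimal Python sketch assumes the standard Schneider-type form EAR = (β_EAR / V_T) · Σ_i V(D_i) · RED(D_i) · µ(x, a); the function and variable names are ours, and RED and µ are passed in as callables rather than re-derived here:

def ear_from_dvh(dose_bins, vol_bins, beta_ear, red, mu, age_x, age_a):
    # dose_bins: doses D_i of the differential DVH (Gy)
    # vol_bins:  organ volumes V(D_i) receiving each dose (any consistent unit)
    # red:       callable implementing the mechanistic RED(D) model of Eq. (2)
    # mu:        callable implementing the age-modifying function of Eq. (4)
    v_total = sum(vol_bins)
    weighted = sum(v * red(d) for d, v in zip(dose_bins, vol_bins))
    return beta_ear * mu(age_x, age_a) * weighted / v_total

# Toy usage with placeholder models (not the fitted parameters of Table 2):
red_linear = lambda d: d                # stand-in for RED(D)
mu_unity = lambda age_x, age_a: 1.0     # stand-in for mu(x, a)
print(ear_from_dvh([1.0, 5.0, 20.0], [300.0, 150.0, 50.0],
                   beta_ear=8.0, red=red_linear, mu=mu_unity,
                   age_x=45, age_a=70))  # excess cases per 10,000 person-years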
Such an advantage could result in a better tumor control rate and lower toxicity and late effects compared with the conventional tangential-pair treatment beams. However, IMRT has inherent geometrical uncertainties arising from setup error and target motion, which offset the merits of IMRT for breast cancer [10,12,29]. Combining two opposed tangential open beams and IMRT beams, the hybrid IMRT plan might resolve the geometrical uncertainties of IMRT. Nakamura et al. [12] compared the plan quality and the robustness of the dose distributions against setup and motion uncertainties among four RT plans. They confirmed that hybrid IMRT was more robust against these uncertainties than full IMRT and offered superior plan quality. Fogliata et al. [30] compared the dosimetric differences for the involved OARs among a 3D-CRT plan with field-in-field technique and two VMAT plans (VMAT_full and VMAT_tang, gantry rotation partial arc from about 295 to 173° without and with a sector of 0 MU, respectively) for breast cancer. They showed that full VMAT had an obvious weakness, delivering a higher mean dose to the nearby OARs compared with VMAT_tang. Considering the excellent characteristics of hybrid plans and the lack of studies on the hybrid VMAT plan, here we studied the clinical dosimetric characteristics and SCR of full IMRT, hybrid IMRT, and hybrid VMAT, and we found that hybrid IMRT was superior to full IMRT and hybrid VMAT in target quality and OARs sparing for early-stage right-sided breast cancer. Had we adopted the VMAT_tang (partial arcs with a sector of 0 MU) method from Fogliata et al.'s study, instead of two opposed tangential open beams plus a complete half arc as in our study, the performance of hybrid VMAT in protecting peripheral OARs might have been improved. However, unlike Fogliata et al.'s study, which irradiated only the target PTV, the hybrid VMAT in our study delivered a boost dose to the tumor bed and achieved better CI and HI for both the tumor bed and the PTV. Thus, the hybrid VMAT with a complete half arc beam might be reasonable in this study. However, the half arc beam delivered only 20% of the total dose over a continuous 180° rotation, and the dose to the surrounding OARs inevitably increased. Previous studies indicate that the plan quality of the IMRT and VMAT techniques depends strongly on the optimization process applied and the beam angles selected. In this study, in order to reduce the dose to the lung during RT, the 3D-CRT field directions were still taken as the basis, an additional 10 degrees were given toward the outside, and the gantry angle range was about 200°, so as to increase the field modulation ability and achieve better uniformity in the target area. Because breast cancer has a better therapeutic outcome and longer life expectancy than most other tumors, radiation-related risk is among the most serious sequelae for breast cancer survivors, as confirmed by numerous epidemiological cohort studies [31]. The occurrence of secondary cancer is closely related to the tissues and organs themselves. Studies have shown that fatal secondary cancer mainly occurs in the stomach, lungs, and colon, and that the thyroid has a particularly low threshold for SCR (mean dose as low as 0.05 Gy in children and young adults) [31,32]. In addition, the occurrence of secondary cancer depends on the radiation dose. Secondary cancer tends to occur in, or near, volumes receiving total doses from 2 to 50 Gy [31,33].
Several studies demonstrated that SCR dramatically increases when the dose reaches a certain range in the kidney (from 1 to 15 Gy), stomach and pancreas (from 1 to 45 Gy), and bladder and rectum (from 1 to 60 Gy) [30,34]. In our study, seeking the least toxic radiation modality for breast cancer, we compared the SCR of the three modalities for the contralateral breast, contralateral lung, ipsilateral lung, and esophagus. Recently, Schneider proposed a calculation model, namely the EAR model, which can be adopted for SCR calculation and evaluation utilizing DVH data from the RT plan and related radiobiological parameters [25,28]. The EAR model has proved its feasibility for assessing the SCR for patients with nasal natural killer T-cell lymphoma and breast cancer [28,30]. Fogliata et al. [30] applied the EAR model to compare the SCR among 3D-CRT, VMAT_full, and VMAT_tang for breast cancer. They confirmed that VMAT_tang had advantages in reducing RT toxicity for the ipsilateral organs compared with 3D-CRT with field-in-field technique when both delivered the same SCR to the contralateral organs. In this study, we also adopted the EAR model to calculate the SCR for right-sided breast cancer, and our results demonstrated that the hybrid IMRT performed best in target quality, OARs sparing, and SCR to peripheral OARs. However, if the half arc had included a sector of 0 MU in hybrid VMAT, the performance of hybrid VMAT regarding SCR to adjacent OARs would probably have approached or matched that of hybrid IMRT. The percentage of delivered dose and the effective dose-delivery angle of the arc beam were quite different between the VMAT_tang in Fogliata's study and the hybrid VMAT in our study. This could translate into different radiation doses and SCR to the nearby healthy tissue. Of course, the results of the EAR model in predicting SCR depend on the accuracy of commercial TPS modeling and the related biological parameters. In this study, EAR was used to quantify radiation-induced cancer. However, EAR is originally based on risk calculations for extremely inhomogeneous dose distributions in the Hodgkin's cohort and from the Japanese A-bomb survivors [26,27], not on a breast cancer cohort. It is also assumed that the total absolute risk in an organ is the volume-weighted sum of the risks of the partial volumes that are irradiated homogeneously. In addition, uncertainties such as out-of-field low-dose calculation, as well as the effect of voxel-size selection on dose calculation, inevitably exist in commercial TPS. Combining the results of previous studies with the results of this study, the following can be concluded: compared with 3D-CRT, IMRT and VMAT improved target uniformity in SIB treatment for early breast cancer but increased second cancer risk. Barbara Dobler et al. [18,35] found that, compared with techniques limited to short arcs or fields around the tangents for whole-breast and SIB treatment, IMRT and VMAT are associated with a higher second cancer risk when exploiting a larger gantry angle range of around 200°. This conclusion is consistent with our research results. In addition, in our study, hybrid IMRT improved target uniformity and also had a lower second cancer risk compared with full IMRT and hybrid VMAT at the same gantry angle range. Hybrid IMRT combined the advantages of 3D-CRT and IMRT in treating early-stage right-sided breast cancer. Hybrid IMRT was shown to have significant advantages in target dose coverage, OARs sparing, and SCR to nearby normal tissues.
Hybrid IMRT is worthy of clinical application and promotion.
5,657.4
2022-03-28T00:00:00.000
[ "Medicine", "Physics" ]
What Is Natural about Natural Capital during the Anthropocene? The concept of natural capital denotes a rich variety of natural processes, such as ecosystems, that produce economically valuable goods and services. The Anthropocene signals a diminished state of nature, however, with some scholars claiming that no part of the Earth’s surface remains untouched. What are ecological economists to make of natural capital during the Anthropocene? Is natural capital still a coherent concept? What is the conceptual relationship between nature and natural capital? This article wrestles with John Stuart Mill’s two concepts of nature and argues that during the Anthropocene, natural capital should be understood as denoting economically valuable processes that are not absolutely—but relatively—detached from intentional human agency. Introduction Hardly anyone would deny that natural capital is fundamental to the transdisciplinary field of ecological economics, even if the concept remains deeply contested [1]. After all, the canonical debate between weak and strong sustainability, for instance, hinges on the putative substitutability of natural capital, and the vast majority of ecological economists and their life scientist colleagues maintain that instances of natural capital, such as ecosystems, produce goods and services essential to human well-being and the continued existence of our species [2][3][4]. The concept of natural capital typically denotes a rich variety of economically valuable production processes that are afforded to human agents, gratis. Nature not only affords human agents passive materials and raw resources to be improved by labor, but endows them with various production processes that generate valuable goods and services in a manner that is detached from human agency. One classic study concluded that the Earth's entire biosphere, including a wide range of services generated by natural capital-such as the purification of water, nutrient cycling and the detoxification of wastes-is worth between $14 trillion and $54 trillion annually [5,6]. Despite the significance of natural capital for ecological economics, the relationship between this concept and the concept of nature remains unsettled [7,8]. What concept of nature, if any, is presupposed by the concept of natural capital? Specifically, what is meant by "natural" with respect to natural capital? What ought to be meant by it? Perhaps one reason why the answers to these questions have not been forthcoming is that the concept of nature is fraught with ambiguity and confusion. As the Scottish Enlightenment philosopher and friend to Adam Smith, David Hume, remarked in his A Treatise of Human Nature, there is no more ambiguous and equivocal word in the English language than the term "nature" [9]. While a world replete with nature might safely ignore the conceptual entanglement between nature and natural capital, we no longer live in such a world. Increasingly, our world is one that is dominated, or at least heavily influenced, by intentional human agency and the unintended consequences that arise from it. Last year, a multidisciplinary body of scholars within the International Commission on Stratigraphy, the Anthropocene Working Group, endorsed this claim when they recommended that the world officially recognize the Anthropocene as the new geological epoch [10]. The most striking feature of the Anthropocene is that humans are a major geological and environmental force on par with natural forces.
While the Anthropocene emphasizes the scale of human agency and its consequence for various Earth systems, it simultaneously conveys a diminished state of nature or "natural agency" on the planet. Ecological economists have begun to wrestle with the implications that the Anthropocene has for their transdisciplinary field but the conceptual relationship between nature and natural capital has been left mostly unanalyzed [11][12][13]. What consequence does the Anthropocene-and the diminished state of nature it represents-have for the concept of natural capital, a concept that is supposed to denote the economic value of non-human agency? Is natural capital still a coherent concept during this new geological epoch? If so, how should we understand the conceptual relationship between nature and natural capital during the Anthropocene? To tackle this set of questions, this article grapples with John Stuart Mill's two concepts of nature. For Mill, nature is either everything actual and possible, or a realm of phenomena that has not yet been affected by human agency. At first glance, neither option appears desirable for making sense of natural capital. After all, nature as everything actual and possible includes the items denoted by the concept of manufactured capital, and if ecological economists wish to continue distinguishing between natural and manufactured capital, this concept of nature seems unacceptable. On the other hand, nature construed as a realm of phenomena detached from human agency ignores the sheer magnitude of intentional human agency during the Anthropocene. This article champions Mill's first concept of nature and shows that, even if we accept that everything actual and possible is natural, we can still distinguish between human activity and the rest of nature for operational purposes. It will be argued that, during the Anthropocene, the concept of natural capital denotes a rich variety of natural and economically valuable processes that are relatively-not absolutely-detached from intentional human agency. Mill's Two Concept of Nature and Natural Capital In one of his Three Essays on Religion-the essay entitled Nature-Mill considers a variety of possible meanings of "nature" and eventually boils his analysis down to two distinct concepts. He states: It . . . appears that we must recognize at least two principle meanings in the word 'nature'. In one sense, it means all powers existing in either the outer or inner world and everything which takes place by means of those powers. In another sense, it means, not everything which happens, but only what takes place without the agency, or without the voluntary and intentional agency, of man. This distinction is far from exhausting the ambiguities of the word; but it is the key to most of those on which important consequences depend. [14] Mill's first concept of nature denotes everything actual and everything possible, including human agents and their intentional activities. Because this concept includes Homo sapiens, it dovetails nicely with Charles Darwin's concept of nature in The Origin of Species, in which he describes nature as a "web of complex relations", whereby no single organism can live independently of that web [15,16]. The second concept of nature, the one that Mill prefers, drives a wedge between intentional human agency and that realm of phenomena that has not yet been affected by human agency [17]. Perhaps the first thing to acknowledge is that G.F. 
Hegel and Karl Marx also recognized these same two concepts of nature but placed them under the same general heading of "Nature." As Leo Marx explains, for these scholars, "First Nature" is the biophysical world as it existed before the evolution of Homo sapiens and "Second Nature" is what most would refer to as the artificial-the material and cultural environment that our species has imposed upon "First Nature" [18]. This view of nature sustains a division between human activity and everything else, but ultimately it is in agreement with Mill's first concept of nature as denoting everything actual and possible. It is also worth noting that Mill's two concepts elide the normativity of nature. In The Moral Authority of Nature, Lorraine Daston and Fernando Vidal take "nature" to task by exposing the hidden normative authority of this concept [19]. To suggest that specific social conventions and political arrangements are "by nature" or "natural," is often to assert that such institutional arrangements are either irrevocable or optimal. All too often, Daston and Vidal state, "Nature appears as an external authority, even if its imperatives are lodged deep in the body or psyche. Nature's authority can also be internalized, made "natural" in the sense of seeming inevitable or effortless" [19]. Bernadette Bensaude-Vincent and William R. Newman echo Daston and Vidal's analysis. In their The Artificial and the Natural, they state "the concept of nature functions and has always been used as a cultural value, a social norm and a moral authority" [20]. Since terms such as "nature" and "natural" do not merely possess a descriptive component but a normative one as well, it is crucial to recognize that the concept of natural capital is not exempt from such influences. Indeed, the concept of natural capital would appear to be one instance of what the philosopher Bernard Williams refers to as a "thick concept"-a concept that is both descriptive and action-guiding [21]. In any case, for the purposes of this article, it will be sufficient to point out, as a cautionary note, the inherent normativity of nature. Prima facie, Mill's first concept of nature is attractive because it is clearly compatible with naturalism-a thesis that states there are no supernatural phenomena. As the philosopher of mind, Daniel Dennett, reminds us, "artificial environments are themselves a part of nature, after all" [22]. The problem with this all-encompassing concept of nature, however, is that it seems to be discordant with the concept of natural capital for the obvious reason that everything actual and everything possible includes the items denoted by manufactured capital, including capital goods. If this first Millian concept of nature requires that everything is part of nature, then it would appear to be a poor fit for shedding light on what ecological economists mean by natural capital during the Anthropocene. After all, what good is a concept of nature if, by deploying it, it destroys the very feature that makes natural and manufactured capital distinct from one another in the first place? We will return to Mill's first concept of nature below, in Section 3. Mill's second concept of nature-which denotes processes that take place independent of human agency-has roots in the writings of the ancient Greeks, particularly those of Aristotle. For Aristotle, the concept of nature had several meanings [23,24]. In one sense, it denotes specific items that exist by nature and not by any other causes. 
This concept emphasizes the origin or genesis of an item and requires that natural objects exist by non-human causes, thus making a firm distinction between human and non-human agency. This particular concept of nature presumes that there are things that exist by skill (the artificial) and things that exist on their own when left to themselves (natural objects). Aristotle has this sense of nature in mind when, for example, he reviles usury or the charging of interest. As Joel Kaye explains, "Aristotle believed usury was the most despicable and unnatural, because in the usurious loan, money, which was invented solely as an instrument of exchange, is made to generate itself, to give unnatural birth to itself" [25]. Money does not exist by nature but by law or convention [26]. The charging of interest involves money begetting money and is unnatural because this activity is not in accordance with the end for which money was originally created (to facilitate exchange). In his Physics, Aristotle affirms that nature denotes an inner principle of change that is characteristic of self-moving things. Unlike artificial objects, natural ones are involved in a process of growth, change and flux. Nature, in this sense, is deeply intertwined with how things behave when left to themselves, free from intentional human agency. Since instances of natural capital can produce in a self-generative way whereby production processes materialize from within-without the need for external causes-this concept of nature is particularly fitting for understanding what ecological economists mean by nature when they invoke the concept of natural capital to denote ecosystems. Moreover, this concept of nature can account for the fact that instances of natural capital can be manipulated, modified and generally controlled by human agents without losing their essential identity as items of natural capital (becoming purely artificial). Aristotle gives the example of a wooden bed in Book 2, Chapter 1 of Physics. While the shape and structure of the bed has been fashioned by an intentional human agent, the carpenter, this formal cause is merely "human impositions on the unchanged matter that remains a natural product" [20]. If one were to plant the bed in the ground and that bed were to sprout anything at all, it would not generate beds but trees. In this case, the inner principle of change or motion is independent of the form that is imposed on it by the carpenter and the nature of the object is associated with the unchanged matter. In this sense of "nature," the natural world would be one that owed its entire existence to natural causes and, therefore, would exclude all intentional human activity. This world would be one populated by objects, whether biotic or abiotic, without any forms imposed on them from without. It would be a world that was left entirely to itself, independent of human agency. Indeed, this Aristotelian concept of nature can serve us with a good thought experiment to give shape to what a bona fide natural world would look like independent of any form imposed on them by intentional agents. Of course, one can easily imagine a contrary world as well, one where there is no biotic or abiotic items that are left to be naturally expressed, where every last object and bit of material has been subject to the intentional activity of human agents. 
Indeed, the environmental philosopher, Alan Holland, has described such a world as a "human-made world," since it would be one where every object owed its form to human causes. Mill's second concept of nature appears to fit the concept of natural capital particularly well since ecological economists are wont to claim that specific instances of natural capital, unlike manufactured capital, are production processes that generate welfare-enhancing benefits to economic agents in a manner that is more or less independent from intentional human agency. Moreover, at least some of the time, ecological economists distinguish natural capital from manufactured capital by emphasizing materials or processes that have not yet been subject to direct human agency. This is especially true when it is acknowledged that the items denoted by natural capital are generally unproduced means of production that do not have to be intentionally built or constructed by human labor. However, there appear to be at least two problems with Mill's second concept of nature as it applies to natural capital. First, some instances of natural capital are modified and improved by intentional human agency but Mill's second concept of nature would deny that such processes are genuinely natural given this causal intervention. Whether through ecosystem engineering and ecosystem restoration, it is difficult to deny that many instances of natural capital are intentionally constructed or built by human agents for some intended effect (consider, for example, the Catskills watershed which is discussed below). In many cases, the productivity of natural capital can be improved or enhanced by direct human intervention, as would be the case when invasive species and other undesirables are extricated from ecosystems to facilitate the production of specific ecosystem goods and services. Natural capital denotes unproduced means of production that are capable of producing in a manner that is detached from human agency but not every instance of natural capital is separated or detached from human agency in this way. A strict application of Mill's second concept of nature would entail excluding the latter processes as genuine instances of natural capital. Mill's second concept of nature is beset with another related problem. During the Anthropocene, the extension of this concept appears to be empty. A growing number of scholars argue that there is no longer any part of the Earth that remains completely unaffected by human technologies [20,27,28]. In his The End of Nature Bill Mckibben states, An idea, a relationship, can go extinct just like an animal or a plant. The idea in this case is 'nature,' the separate and wild province, the world apart from man to which he has adapted, under whose rules he was born and died. In the past we have spoiled and polluted parts of that nature, inflicted environmental 'damage' . . . We never thought we had wrecked nature. Deep down, we never really thought that we could: it was too big and too old. Its forces, the wind, the rain, the sun-were too strong, too elemental. But, quite by accident, it turned out that the carbon dioxide and other gases we were producing in pursuit of a better life-in pursuit of warm houses and eternal economic growth and agriculture so productive it would free most of us for other work-could alter the power of the sun, could increase its heat. And that increase could change the patterns of moisture and dryness, breed storms in new places, breed deserts. 
Those things may or may not have begun to happen but it is too late to prevent them from happening. We have produced carbon dioxide-we have ended nature. We have not ended rainfall or sunlight . . . But the meaning of the wind, the sun, the rain-of nature-has already changed. [29,30] McKibben's (1990) claim that "we have ended nature" is obviously not meant to suggest that there is nothing left that is actual and possible-Mill's first concept of nature. Rather, there is simply no longer any part of the Earth that can be truly described as detached from human agency. More recently, Paul Wapner draws a similar conclusion when, in his book Living through the End of Nature, he remarks, "the wildness of nature has indeed largely disappeared as humans have placed their signature on all the earth's ecosystems" [28]. Wapner continues: Empirically, a growing human population, unparalleled technological prowess, increasing economic might and an insatiable consumptive desire are propelling us to reach further across, dig deeper into and more intensively exploit the earth's resources, sinks and ecosystem services . . . the cumulative force of our numbers, power and technological mastery has swept humans across and deeply into all ecosystems to the point where one can no longer easily draw a clean distinction between the human and nonhuman realms. Whether one looks at urban sprawl, deforestation, loss of biological diversity, or ocean pollution, it is clear that humans have been progressively overtaking large swaths of nature and thereby imprinting themselves everywhere. [28] Indeed, the technology of our species is now so vast that it has extended far beyond the sub-lunar region to include Cydonia (a region of Mars) [20]. While this may be news to some, even Karl Marx had once remarked that, "the nature which preceded human history no longer anywhere exists" [31]. In any case, it would appear that the claim that there is some realm of phenomena on Earth that remains unaffected by human agency is simply false, and if there is nothing left on Earth that remains unaffected by human agency, then the very processes denoted by the concept of natural capital could not be considered genuinely "natural" in this Millian sense. The Millian Dilemma? Mill's two concepts of nature appear to present us with a dilemma. After all, his first concept would judge that every instance of natural and manufactured capital is equally part of nature. On the other hand, a staunch defender of Mill's second concept of nature would insist that-strictly speaking-there is no more nature and, therefore, no more natural capital left on Earth, full stop. Neither horn of this dilemma is particularly attractive to anyone wishing to establish a coherent concept of natural capital during the Anthropocene. Fortunately, this dilemma is more apparent than real. The way out of this predicament is to concede that while everything, metaphysically, is natural we can still operationalize the concept of nature by insisting that those items which remain relatively detached from human agency, including those items that do not possess significant features caused by intentional human agents, are natural. In taking this pragmatic approach, I am following the philosopher of conservation biology, Sahotra Sarkar, when he states: Even if humans are conceptualized as part of nature, we can coherently distinguish between humans and the rest of nature.
There is at least an operational distinction; that is, one that we can straightforwardly make in practical contexts. We can distinguish between anthropogenic features (those largely brought about by human action) and non-anthropogenic ones. [32] By making this operational distinction, Mill's two concepts of nature are treated as compatible (in a sense) because one does not necessarily preclude the other. On this view, Mill's first concept of nature is more fundamental because even the most artificial of objects, including atomic bombs and jumbo jets, are natural. On the other hand, for practical purposes, these same items are deemed artificial since they were intentionally built by human agents and possess a variety of essential anthropogenic features. The same is true for items denoted by the concept of natural capital. Since everything actual and everything possible is natural, every instance of natural capital must also be natural. However, in light of the empirical claim that no phenomena denoted by the concept of natural capital is completely insulated from human agency during the Anthropocene, it is always a question about the relative detachment that such processes have in relation to intentional human agency. On this account, the natural and artificial are located along a spectrum or continuum with the most natural objects being those that remain relatively detached from human agency and the most artificial objects are those that have been built and constructed by intentional human agents. There is no sui generis difference between artificial and natural objects since the difference is always a matter of degree. In other words, there is a blending of the natural and the artificial. This approach to the natural/artificial distinction has the virtue of preserving the practically significant distinction between, for example, intentionally modified environments such as city centers, from environments that have been subject to relatively little human agency, such as remote uninhabited islands in the Pacific Ocean that were recently generated by purely natural causes. Rather than imposing a strict division between natural and artificial objects, I propose that the artificial/natural distinction be described as a continuum whereby phenomena are branded as more (less) natural or more (less) artificial, depending on their degree of detachment from intentional human agency. It will be useful to distinguish objects that remain completely detached from human agency from those which have a first or second degree of detachment. These divisions are represented in Figure 1, below.
To be clear, the Artificial/Natural Continuum depicted in Figure 1 is almost certainly not shared across all cultures. In his Beyond Nature and Culture, the anthropologist Philippe Descola convincingly argues against the universality of the artificial/natural distinction, insisting that the distinction is specific to Western culture alone [33]. However, with this qualification in mind, the Artificial/Natural Continuum is still helpful for understanding the concept of natural capital during the Anthropocene. Figure 1 shows that the most natural objects in the universe are those found to the left side of the continuum.
They remain completely detached from human agency in the sense that they have not causally interacted with intentional human agents: they have been affected neither directly nor indirectly by human agents. Such objects might include, for example, distant astronomical or celestial objects-such as distant galaxies or stars. It seems reasonable to suppose that the unobservable part of the universe is certainly natural in this sense. If there truly is no longer any part of the sub-lunar region that is completely detached from human agency, then it follows that describing the Earth, or parts of the Earth, as entirely natural, would be incorrect. However, parts of the Earth might well still be described as relatively natural when compared to objects that owe their forms to human agency and have been completely instrumentalized to serve human ends. Only that part of the universe which has not yet been affected by human agents is a candidate for complete detachment from human agency. On the opposite end of this continuum, to the right side of Figure 1, are those items that have been completely instrumentalized by intentional human agents to serve their own ends. This category includes ordinary technical artifacts, such as tables and chairs, which consist of found materials that have been subject to the intentional modifications of human agents.
Until the early 1990s, the natural water purification processes, by root systems and soil microorganisms, together with filtration and sedimentation, cleansed the water to such a degree that the Environmental Protection Agency's (EPA) standards were met [35]. However, housing development and the pollution from vehicles and agriculture threatened the water quality of the region and in 1991 the EPA ordered New York City to build a water filtration plant, unless the city could somehow maintain water quality without it [3]. By 1996, New York City was confronted with a choice between restoring the Catskills watershed and constructing a water-purification plant. This choice has been construed as one between investing in either natural or manufactured capital. As it turned out, restoring the ecological integrity of the Catskills or investing in the "machinery of the watershed" was less costly than constructing a "human-constructed" water filtration system [34]. While protecting and restoring the Catskills was estimated to cost 250 million dollars over ten years (mainly to purchase and set aside over 140,000 hectares in the watershed), the overall cost was expected to reach up to 1.5 billion dollars; by contrast, the total cost of pursuing the alternative path, building and operating the filtration system, was estimated to cost between 6 and 8 billion dollars [3]. New York City opted for the former option and, since 1997, has invested nearly 2 billion dollars in "land management changes and innovative tactics such as purchasing land around reservoirs to preserve forests and wetlands that buffer against pollution, paying landowners to restore forest along streams and offering technical aid and infrastructure to farmers and foresters" [34]. Where might the Catskills fall along the spectrum between natural and artificial objects depicted in Figure 1? While there may be no definitive way to determine the exact location of the Catskills along this spectrum, it seems reasonable to claim that the Catskills is located somewhere between the two extremes of natural objects completely detached from human agency and objects that have a first degree of detachment from human agency. The Catskills is a strong candidate for possessing a second degree of detachment from intentional human agency because while it has been modified and improved by human agents for the production of specific ecosystem goods and services, it has not been completely instrumentalized for human purposes. After all, unlike technical artifacts used in the production process of traditional economic goods and services, the Catskills remains characterized by unpredictable spontaneous productions of the Earth, a feature that makes it distinct from most genuine artifacts. Claiming that the Catskills has a second degree of detachment from human agency is fully compatible with Mark Sagoff's claim that the ecosystem goods and services produced by the Catskills cannot be characterized as "natural" in Mill's second sense [36]. As Sagoff makes clear, no one should deny that the Catskills has been and continues to be intentionally modified and improved, through both action and omission, to bring about certain desired effects. To claim otherwise would be mistaken. The way forward is to accept Mill's first concept of nature while simultaneously allowing for the operational distinction between relative and absolute detachment from intentional human agency. 
On this proposal, the Catskills remains one instance of natural capital, even though it has been subject to various degrees of intentional human agency. Conclusions The Anthropocene presents a challenge to the concept of natural capital. While this concept has traditionally denoted natural production processes that operate independently of intentional human agency, this new geological epoch renders it controversial whether any such process is wholly detached from human agency. The main purpose of this article has been to develop a coherent concept of natural capital for the Anthropocene by analyzing the conceptual relationship between nature and natural capital. I proposed collapsing Mill's two concepts of nature. While Mill's first concept denotes everything actual and possible, including human agents and their intentional activities, his second concept denotes that realm which has not yet been affected by human agency. The main problem with adopting Mill's first concept of nature-without qualification-is that it would not equip ecological economists with the conceptual resources to distinguish between manufactured and natural capital, for the obvious reason that everything is natural. It was then argued that Mill's second concept of nature is equally problematic because it denotes processes detached from human agency, but there is virtually no part of (the surface of) the Earth that is completely insulated from human activity during the Anthropocene. In response to this dilemma, I proposed accepting Mill's first concept of nature as the most fundamental while, for operational purposes, allowing for a distinction between humans and their activities and the rest of nature. This unassuming move enables one to distinguish between anthropogenic and non-anthropogenic features for practical purposes. Properly understood, the concept of natural capital during the Anthropocene remains a coherent concept. The various processes denoted by this concept, including the Catskills watershed, generate economically valuable goods and services in a manner that is relatively detached from human agency.
Synthesis, In Vitro Profiling, and In Vivo Evaluation of Benzohomoadamantane-Based Ureas for Visceral Pain: A New Indication for Soluble Epoxide Hydrolase Inhibitors The soluble epoxide hydrolase (sEH) has been suggested as a pharmacological target for the treatment of several diseases, including pain-related disorders. Herein, we report further medicinal chemistry around new benzohomoadamantane-based sEH inhibitors (sEHI) in order to improve the drug metabolism and pharmacokinetics properties of a previous hit. After an extensive in vitro screening cascade, molecular modeling, and in vivo pharmacokinetics studies, two candidates were evaluated in vivo in a murine model of capsaicin-induced allodynia. The two compounds showed an anti-allodynic effect in a dose-dependent manner. Moreover, the most potent compound presented robust analgesic efficacy in the cyclophosphamide-induced murine model of cystitis, a well-established model of visceral pain. Overall, these results suggest painful bladder syndrome as a new possible indication for sEHI, opening a new range of applications for them in the visceral pain field. INTRODUCTION Arachidonic acid (AA) is an essential ω-6 20 carbon polyunsaturated fatty acid that is abundant in the phospholipids of cellular membrane. In response to a stimulus, phospholipase A2 promotes its cleavage from the membrane and release into the cytosol, where it can be metabolized, leading to different classes of eicosanoids via three pathways ( Figure 1). 1, 2 The cyclooxygenase (COX) pathway catalyzes the production of prostaglandins, prostacyclins, and thromboxanes, endowed with inflammatory properties. The lipoxygenase (LOX) pathway generates leukotrienes, which play a significant part in the onset of asthma, arthritis, allergy, and inflammation. 3 Both pathways have been extensively studied and targeted pharmaceutically. 4−6 More recently, increasing attention is being paid to the third branch of the AA cascade, the cytochrome P450 (CYP) pathway that notably converts AA to epoxyeicosatrienoic acids (EETs). 7 EETs exhibit antihypertensive, anti-inflammatory, and anti-nociceptive properties, 8 but they are rapidly degraded by the soluble epoxide hydrolase (sEH, EPHX2, E.C. 3.3.2.10) to the less active or inactive dihydroxyeicosatrienoic acids (DHETs). Therefore, sEH inhibition may lead to elevated levels of EETs thereby maintaining their beneficial properties. 9,10 Indeed, the use of selective sEH inhibitors (sEHI) in vivo models resulted in an increase of EETs levels and the reduction of blood pressure and inflammatory and pain states. Thus, sEH has been suggested as a pharmacological target for the treatment of several diseases, including pain-related disorders. 11−16 Given that sEH presents a hydrophobic pocket, several potent sEHI developed in the last years feature an adamantane moiety or an aromatic ring in their structure, such as AR9281, 1, and EC5026, 3, two of the sEHI that have reached clinical trials. 17,18 The first to enter was the adamantane-based AR9281, by Arete Therapeutics, for the treatment of hypertension in diabetic patients. However, it failed largely because of its poor pharmacokinetic properties but also poor target residence time on sEH and only moderate potency on the target. 17 Very recently, EicOsis has replaced the adamantane moiety of AR9281 by an aromatic ring for its drug candidate EC5026, currently in phase 1 clinical trials for the treatment of neuropathic pain. 
18 Interestingly, both clinical candidates present similar structures: a left-hand side (lhs) hydrophobic moiety (black), a urea group (green), a piperidine residue (blue), and a right-hand side (rhs) acyl group (red). Also, EicOsis is currently advancing the analogue t-TUCB, 4, for veterinary clinical trials ( Figure 2). 19 Our recent observation that the lipophilic cavity of the enzyme is flexible enough to accommodate polycyclic units larger than adamantane, 20 led to the discovery of a new family of benzohomoadamantane-based ureas, such as 5 and 6, endowed with low nanomolar or even subnanomolar potencies ( Figure 2). 21 Further in vitro studies with these compounds demonstrated that while compound 5 presented moderate experimental solubility and very poor stability in human and mouse microsomes, compound 6 was endowed with favorable drug metabolism and pharmacokinetics (DMPK) properties and showed efficacy in an in vivo murine model of acute pancreatitis. 21 Later on, in an effort for improving the DMPK properties of piperidine 5, we designed a series of analogues where the urea core was replaced by an amide group. Although most of these amides retained or even improved the inhibitory activity of their urea counterparts at the human and mouse enzymes (e.g., compound 7, Figure 2), only moderate improvements in microsomal stabilities were found. 22 Herein, we report further medicinal chemistry around inhibitor 5. New piperidine derivatives retaining the urea group as the main pharmacophore, different substituents in the C-9 position of the polycyclic scaffold (R in I), and a broad selection of substituents at the nitrogen atom of the piperidine (R′ in I) were synthesized (Figure 2). After a screening cascade, two selected candidates with highly improved DMPK properties were subsequently studied in the murine model of capsaicin-induced allodynia. Finally, the best compound was evaluated in a murine model of visceral pain. isocyanates II, followed by the addition of the required substituted aminopiperidine of general structure III to form the final ureas 9−25 (Scheme 1). All the new compounds were fully characterized through their spectroscopic data and elemental analyses or highperformance liquid chromatography (HPLC)/mass spectrometry (MS) (see the Experimental Section and the Supporting Information for further details). sEH Inhibition and Microsomal Stability. Compound 5 presented high inhibitory activities against the human and murine enzymes and moderate experimental aqueous solubility (38 μM), but unacceptable stability in human and murine microsomes (Table 1). 21 Because the acyl chain of piperidine-based sEHI is known to be a suitable position for metabolism, 27 we decided to explore first new piperidine derivatives replacing the acetyl group of 5 by other fragments selected from previous other series of known sEHI to improve the microsomal stability. 28,29 Compounds 9−12 were synthesized maintaining the methyl group in the position R of the benzohomoadamantane scaffold I and replacing the acetyl group of 5 by the propionyl, tetrahydro-2H-pyran-4-carbonyl, isopropylsufonyl, and cyclopropanecarbonyl groups, respectively (Scheme 1). The inhibitory activity against the human and murine enzymes of the new ureas was evaluated, as well as their stabilities in human and mouse microsomes (Table 1). 
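The IC50 values discussed throughout this section are obtained by fitting concentration-response data from the fluorescence-based inhibition assay. As a minimal, hypothetical sketch of that fitting step (the concentrations, responses, and four-parameter logistic form below are illustrative assumptions, not the authors' protocol):

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc_nM, bottom, top, ic50_nM, hill):
    """Four-parameter logistic dose-response curve (remaining activity vs inhibitor concentration)."""
    return bottom + (top - bottom) / (1.0 + (conc_nM / ic50_nM) ** hill)

# Illustrative data: remaining enzyme activity (%) at increasing inhibitor concentrations.
conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100])      # nM
activity = np.array([97, 90, 72, 45, 22, 9, 4])      # % of DMSO control

popt, _ = curve_fit(four_pl, conc, activity,
                    p0=[0, 100, 3, 1],               # initial guesses
                    bounds=([0, 80, 0.01, 0.3], [20, 120, 1000, 3]))
bottom, top, ic50, hill = popt
print(f"IC50 = {ic50:.2f} nM (Hill slope {hill:.2f})")
```

In practice the assay's stated 10-20% standard error means that only two-fold or larger differences in the fitted IC50 values should be treated as meaningful.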
Gratifyingly, regardless of the substituent on the piperidine ring, all the compounds showed potency in the low nanomolar or even subnanomolar range for both the human and murine enzymes (Table 1). Indeed, the most potent compound, 12, presented inhibitory activities in the subnanomolar range for both enzymes. However, except for 12, the microsomal stability of these new ureas was very poor and not improved over that of 5 (Table 1). Consequently, we moved to another strategy for improving the microsomal stability of the compounds, exploring the C-9 position of the benzohomoadamantane scaffold and replacing the methyl group in 5 and 9−12 by other substituents, such as halogen atoms or polar groups. The potency of these compounds was measured against the human and murine enzymes (Table 2). On the one hand, as expected considering that the catalytic center of sEH is highly hydrophobic, the compounds bearing a polar group at C-9, 23 and 24, presented higher IC 50 values than 5. Of note, the largest drop in inhibitory activity was produced by the replacement of the methyl group of 5 by the polar hydroxyl group, compound 23. On the other hand, when the methyl group was replaced by chlorine or fluorine atoms, the inhibitory activities against the human and murine enzymes were maintained or even improved, as most of the compounds presented IC 50 values in the low nanomolar or subnanomolar range (Table 2). Next, the microsomal stability of the most potent compounds was evaluated. Pleasingly, all the compounds featuring halogen atoms in the R position of the benzohomoadamantane scaffold presented better stabilities in human and mouse microsomes than their methyl counterparts (Table 2). In particular, the chlorinated compounds 16, 18, and 19 exhibited excellent microsomal stabilities in the two species. In Silico Study: Molecular Basis of Benzohomoadamantane/Piperidine-Based Ureas as sEH Inhibitors. Next, the mechanism of binding of two compounds with high inhibitory activity, that is, 15 (R = Cl, R′ = tetrahydro-2H-pyran-4-carbonyl) and 21 (R = F, R′ = tetrahydro-2H-pyran-4-carbonyl), was investigated with molecular dynamics (MD) simulations. sEHs present a flexible L-shaped active site pocket divided into three regions: the lhs and the rhs pockets, which are connected by a central narrow channel defined by the catalytic residues Asp335, Tyr383, and Tyr466 (see Figure 4). Recently, we showed that bulky benzohomoadamantane groups occupy the lhs in urea-based sEHIs that present both adamantyl and phenyl moieties, for example, compound 6. 21 However, available X-ray structures of sEH in complex with piperidine-based ureas show that the piperidine group can also occupy the lhs. 31 To determine the preferred binding mode of 15 and 21, which present both benzohomoadamantane and piperidine groups, we performed conventional MD simulations starting from two possible orientations in the sEH active site predicted by molecular docking calculations (see the Experimental Section): (a) with the benzohomoadamantane in the lhs and the piperidine in the rhs (see Figure 4a, similar to the adamantyl-based urea in PDB 5AM3), and (b) with the piperidine group in the lhs and the benzohomoadamantane in the rhs (similar to the piperidine-based urea in PDB 5ALZ). 31
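The orientation comparison reported in the next paragraph rests on averaging per-frame MM/GBSA energies over the MD replicas and taking the difference between the two docked poses. Purely as an illustrative sketch of that post-processing step (the per-frame energies below are synthetic stand-ins, roughly matched to the values reported for compound 15; in practice they would come from the MM/GBSA output of each replica, and this is not a description of the authors' scripts):

```python
import numpy as np

def mean_binding_energy(per_frame_kcal):
    """Average per-frame MM/GBSA total energies (kcal/mol); also return the
    standard error of the mean over the sampled frames."""
    e = np.asarray(per_frame_kcal, dtype=float)
    return e.mean(), e.std(ddof=1) / np.sqrt(e.size)

# Synthetic per-frame DELTA TOTAL energies for the two docked orientations.
rng = np.random.default_rng(0)
frames_a = rng.normal(-68.0, 4.0, size=1500)   # orientation (a): benzohomoadamantane in lhs
frames_b = rng.normal(-62.3, 4.0, size=1500)   # orientation (b): piperidine in lhs

dg_a, sem_a = mean_binding_energy(frames_a)
dg_b, sem_b = mean_binding_energy(frames_b)
print(f"dG(a) = {dg_a:.1f} +/- {sem_a:.1f} kcal/mol")
print(f"dG(b) = {dg_b:.1f} +/- {sem_b:.1f} kcal/mol")
print(f"ddG(a - b) = {dg_a - dg_b:.1f} kcal/mol  (negative -> orientation (a) more stable)")
```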
From these MD simulations, the binding affinity of 15 and 21 was estimated with molecular mechanics with generalized Born and surface area solvation (MM/GBSA) calculations, showing that the orientation shown in Figure 4a is −5.7 and −10.2 kcal/mol more stable than the opposite orientation for compounds 15 and 21, respectively (see Table S2). When the benzohomoadamantane occupies the lhs and the piperidine the rhs, both compounds present similar absolute binding affinities (−68.0 and −69.4 kcal/mol for 15 and 21, respectively), which is in line with their similar IC 50 values. To corroborate these results, accelerated MD (aMD) simulations were performed to completely reconstruct the binding pathway of compound 15 into the sEH active site pocket (see Movie S1, Figure S1, and the Experimental Section). This strategy is frequently used to predict substrate and inhibitor binding pathways in enzymes. 32,33 Spontaneous binding aMD simulations show how the inhibitor is recognized in the lhs pocket by the benzohomoadamantane scaffold and then extends through the sEH binding site, accommodating the benzohomoadamantane moiety in the lhs while the piperidine counterpart lies in the rhs pocket. [Table 1 footnotes: (a) IC 50 values are the average of three replicates. The fluorescent assay as performed here has a standard error between 10 and 20%, suggesting that differences of twofold or greater are significant. Because of limitations of the assay, it is difficult to distinguish among potencies <0.5 nM. 30 (b) Percentage of remaining compound after 60 min of incubation with pooled human and mouse microsomes in the presence of NADPH at 37 °C. (c) ND: not determined.] Considering these results, we conclude that the orientation shown in Figure 4a is the preferred binding mode of compounds 15 and 21. To understand in more detail the molecular basis of the inhibitory mechanism of the benzohomoadamantane/piperidine-based ureas 15 and 21, the non-covalent interactions between the selected inhibitors and the active site residues of sEH were studied (see Figure 4 for compound 21 and Figure S2 for compound 15). MD simulations show that the inhibitor is retained in the active site through three strong hydrogen bond interactions between the urea moiety and the central channel residues Asp335, Tyr383, and Tyr466 (see Figures 4b and S3). In the rhs pocket, the piperidine group is stabilized through persistent hydrophobic interactions with His494 and Met419, while the tetrahydro-2H-pyran moiety is retained by the side chains of Leu417 and Trp525. [Figure 4 caption, panels b and c: Hydrogen bonds between the oxygens of the tetrahydropyran group of 21 and the hydrogen of the OH group of Ser415 are shown. The hydrophobic interaction average distances are computed between the terminal heavy atom of the amino acid side chains and the centroid of each ring. Hydrogen bond distances are measured between the carboxylic group of the catalytic Asp335 and the amide groups of the inhibitor, and between the carbonyl group of the urea inhibitor and the OH groups of the Tyr383 and Tyr466 residues. (c) Most relevant molecular interactions in the lhs. Average distances (in Å) obtained from the three replicas of 500 ns of MD simulations are represented. The CH−π interaction is calculated between the hydrogens of the benzohomoadamantane unit and the centroid of the benzoid ring of Trp336. The NH−π interaction is monitored between the amide hydrogen of Gln384 and the center of the aromatic ring of the benzohomoadamantane scaffold.] The oxygen of the tetrahydro-2H-
pyran ring establishes transient hydrogen bonds with Ser415 and is relatively solvent exposed (see Figure 4b). In the lhs pocket, the orientation of the benzohomoadmantane moiety is directed by the NH···π interaction between the Gln384 and the aromatic ring of the polycyclic scaffold, which is maintained along the MD simulations. Additionally, hydrophobic interactions are established with the side chains of Met339 and Trp336. This extensive network of hydrophobic interactions and hydrogen bonds in the sEH pocket is key to recognize and bind the inhibitor in the active site. Introducing a polar hydroxy group in the polycyclic scaffold (compound 23) significantly decreases the resulting inhibitory activity (see Table 2). To determine the molecular basis of this drop in activity, the binding modes of compounds 13 (R = Cl and R′ = acetyl and IC 50 = 1.6 nM) and 23 (R = OH and R′ = acetyl and IC 50 = 207 nM) were compared with MD simulations. The incorporation of OH in the polycyclic scaffold causes a series of rearrangements in the lhs pocket that destabilize the inhibitor bounds with the enzyme in the active site (see Figure S4). In particular, the Thr360 side chain establishes a hydrogen bond with the oxygen of the hydroxyl substituent of compound 23 that induces the rotation of the benzohomoadamantane scaffold in the lhs pocket. This breaks the NH−π interaction between Gln384 and the aromatic ring of 23 providing more flexibility to the benzohomadamantane moiety as compared to 13, 15, and 21, which may be related to the decreased activity (see Figure S5). In addition, the enhanced dynamism of the polycyclic scaffold allows the transient entrance of few water molecules into the lhs pocket (average number of water molecules 0.97 ± 0.96 for 23 and 0.31 ± 0.5 for 21, see Figure S6). Compound 24 (R = OCH 3 and R′ = acetyl, IC 50 = 48 nM) that also present reduced activity shows a similar behavior as 23 (see Figures S5 and S6). Therefore, the above-mentioned results and those previously reported with related compounds, 21 reveal that the presence of a small, lipophilic group at C-9 of the benzohomodamantane scaffold is key for the stability and activity of benzohomoadamantane-based sEHIs at the molecular level. 2.4. Further DMPK Profiling of the Selected Inhibitors. The halogen-substituted sEHI compounds that exhibited outstanding inhibitory activities and had more than 50% of the parent compound unaltered after incubation with human and/or murine microsomes were selected for further evaluation. Solubility, permeability through the blood−brain barrier (BBB), cytotoxicity, and cytochrome inhibition of the selected compounds 14−19, 21, 22, and 25 were experimentally measured. In addition, we evaluated all the synthesized compounds as pan assay interference compounds (PAINS) using SwissADME and FAFDrugs4 web tools. 34,35 None of them gave positive as PAINS. While compounds 14, 16, 17, 18, and 19 exhibited limited solubility, with values lower than 20 μM, compounds 15, 21, 22, and 25 displayed good to excellent solubility values. Additionally, the selected compounds were further tested for predicted brain permeation in the widely used in vitro parallel artificial membrane permeability assay−BBB (PAMPA−BBB) model. 36 Compounds 14,15,22, and 25 showed CNS+ proving their potential capacity to reach CNS, whereas the other compounds presented uncertain BBB permeation (CNS +/−). Next, the cytotoxicity of the new sEHI was tested using the propidium iodide (PI) and MTT assays in SH-SY5Y cells. 
Interestingly, none of the selected compounds appeared cytotoxic at the highest concentration tested (100 μM) ( Table 3). Finally, inhibition of several cytochrome P450 enzymes were measured, giving special attention to CYPs 2C19 and 2C9, as these isoforms are two of the main producers of EETs, the substrates of the sEH. 8 Unfortunately, compounds 16, 17, 18, and 19 inhibited significantly CYP 2C19. In contrast, compounds 14, 15, 21, 22, and 25 did not significantly inhibit these subfamilies of cytochromes (Table 3). Additionally, CYPs 2D6, 1A2, and 3A4 were also evaluated (Table S3). With the only exception of 25, which inhibited CYP3A4 in the submicromolar range, all the compounds showed IC 50 values higher than 10 μM (Tables 3 and S3). After performing the above-mentioned screening cascade, three compounds, 15, 21, and 22, emerged as the more promising candidates. These compounds exhibited excellent inhibitory activities against the human and murine enzymes, improved metabolic stability, good solubility, and did not significantly inhibit cytochromes. Notwithstanding, hERG inhibition and Caco-2 assays were also performed in order to additionally characterize them. None of the compounds significantly inhibit hERG at 10 μM, and they displayed moderate permeability in Caco-2 cells. Finally, they were tested for selectivity against hCOX-2 and hLOX-5, two enzymes involved in the AA cascade. Gratifyingly, they did not present significant inhibition of these enzymes (Table 4). sEH Engagement and Off-Target Profile. Compound 28 was designed as a chemical probe with the objective to disturb the parent compound structure as little as possible. Important in this design was the knowledge that the piperidine nitrogen atom can be substituted without loss of biological activity. Therefore, a butynyl diazirinyl propionic acid minimalistic linker was coupled, via a straightforward amide coupling reaction, to the piperidine nitrogen of 27, in turn obtained from 8d through urea formation and Boc-removal (Scheme 2). The probe 28 was found to be a potent inhibitor with IC 50 of 0.5 and 0.4 nM, for the human and mouse enzymes, respectively. Next, we tested whether probe 28 could covalently bind endogenously expressed human sEH in a complex proteome. Hence, photoaffinity labeling was followed by incorporation of an azide-TAMRA-Biotin tag via copper(I) azide alkyne cycloaddition (CuAAc). This tag allows both visualization and isolation of the probe's protein targets. A fluorescent band at 72 KDa was identified as sEH via immunoblotting (Figures 5, S7, S9 and S10). Once the probe engagement of EPHX2 was confirmed, we determined the minimal probe labeling concentration using purified recombinant human EPHX2 ( Figure S8). The minimal probe concentration was found to be 100 nM, which was then used to get insights in the selectivity of the probe 28 and compound 15. Although it was observed that probe 28 labeled multiple bands, competition with the parent compound 15 shows competition of only EPHX2, illustrating that this is the sole target with high occupancy and that the other bands are non-specific labeling events by the probe ( Figure 5). To further confirm the selective character of 15, we wanted to exclude p38 mitogen-activated protein kinase (p38 MAPK) and pro-angiogenic kinase vascular endothelial growth factor receptor-2 (VEGFR2) as targets because some ureabased sEHI are reported to show cross-reactivity with these proteins. 
37−39 In addition, we also aimed to exclude membrane bound microsomal epoxide hydrolase as a possible off-target. 40 To this end, we performed pull-down experiments and immunoblotting with specific antibodies. These experiments confirmed that none of these proteins are targets of 28, underlining its selectivity ( Figure 5b). Pharmacokinetic Study of Compounds 15 and 21. Overall, compounds 15 and 21, with similar DMPK properties and structures, were selected for in vivo studies. First, a study was conducted in order to determine the pharmacokinetic profile in the plasma of compounds 15 and 21 when administered by a subcutaneous (sc) route at a single dose of 5 mg/kg. As shown in Table 5, absorption of 21 is fast, reaching C max (19.1 μg/mL) at 15 min after dosing. The compound disappeared from the plasma progressively and halflife (HL) was calculated to be around 0.7 h. In the case of 15, C max (1.2 μg/mL) was 15 times lower than that of 21, however, showing a higher HL (3.4 h). For both compounds, the narrow differences in AUC 0 t and AUC 0 ∞ showed complete exposure and good bioavailability. Although 21 demonstrated better bioavailability characteristics than 15 both compounds were subsequently evaluated in vivo efficacy studies. In Vivo Efficacy Studies. A first in vivo efficacy study was performed in a capsaicin-induced secondary mechanical hypersensitivity (allodynia) model in mice. It is well known that the increase in sensitivity to mechanical stimulation in the area surrounding capsaicin injection results from central sensitization, 41 which is a key process in chronic pain development and maintenance. 42 In our experimental conditions, mice markedly decreased their paw withdrawal latency to mechanical stimulation after capsaicin administration ( Figure 6), denoting the development of mechanical allodynia. The sc administration of the prototypic, brain-penetrant, 43−46 sEHI AS2586114 induced a dose-dependent reversion of the capsaicin-induced mechanical hypersensitivity reaching a full reversal of sensory hypersensitivity at 10 mg/kg ( Figure 6). The sc administration of compounds 15 and 21 fully inhibited mechanical hypersensitivity in a dose-dependent manner and with a much higher potency than AS2586114, reaching full reversal of sensory gain with 5 mg/kg for compound 15 and even with a dose as low as 1.25 mg/kg for compound 21 ( Figure 6), in spite of its limited predicted BBB permeability (as previously commented). Importantly, the administration of N-methanesulfonyl-6-(2-proparyloxyphenyl)hexanamide (MS-PPOH), an inhibitor of microsomal CYP450s, which is responsible for the production of EETs, 47 fully abolished the effect of not only AS2586114 but also those induced by compounds 15 and 21 ( Figure 6). These results strongly suggest that the three tested compounds induced the reversal of capsaicin-induced mechanical hypersensitivity through the in vivo inhibition of sEH. Section and Tables S4 and S5 and Figures S12 and S13 in the Supporting Information. Given that the tested compounds induced ameliorative effects on this behavioral model of central sensitization attributable to sEH inhibition, we tested the effect of compound 21 (the most potent compound among the sEHI evaluated), in a model of pathological pain. Specifically, cyclophosphamide (CTX)-induced cystitis because it has been used as a model of interstitial cystitis/bladder pain syndrome, 48 and it is known that pain induced by this disease has a strong component of central sensitization in both humans and rodents. 
49,50 In our experimental conditions, mice treated with CTX showed a significant increase in the pain behavioral score in comparison to mice treated with the vehicle (Figure 7a). The sc administration of compound 21 (0.63−2.5 mg/kg) significantly reduced this pain-related score in a dose-dependent manner (Figure 7a). In addition, animals administered with CTX showed a marked reduction in their mechanical threshold in the abdomen, denoting the development of referred hyperalgesia (Figure 7b). The sc treatment with compound 21 also reversed, in a dose-dependent manner, the mechanical referred hyperalgesia induced by CTX (Figure 7b). The administration of MS-PPOH fully reversed the effects of compound 21 on both the pain-related behaviors and the referred hyperalgesia (Figure 7a,b, respectively), mirroring the results obtained on capsaicin-induced secondary hyperalgesia and suggesting that compound 21 exerted its in vivo effects on pain through sEH inhibition. To our knowledge, there are no previous studies exploring the role of sEHI in visceral pain. Therefore, our results suggest interstitial cystitis/painful bladder syndrome as a possible new indication for inhibitors of sEH. CONCLUSIONS sEH is a suitable target for several inflammatory and pain-related diseases. In this work, we report further medicinal chemistry around new benzohomoadamantane-based piperidine derivatives, analogues of the clinical candidates AR9281 and EC5026. The introduction of a halogen atom at position C-9 of the benzohomoadamantane scaffold led to very potent compounds with improved DMPK properties. The in vitro profiling of these new sEHI (solubility, cytotoxicity, metabolic stability, CYP450s, hLOX-5, hCOX-2, and hERG inhibition) allowed the selection of two suitable candidates for in vivo efficacy studies. The administration of compounds 15 and 21 reduced pain in the capsaicin-induced murine model of allodynia in a dose-dependent manner and outperformed AS2586114. Moreover, compound 21 was tested in a CTX-induced murine model of cystitis, revealing its robust analgesic effect. Hence, this study opens a whole range of applications of the benzohomoadamantane-based sEHIs in pain and likely other fields. Chemical Synthesis. Commercially available reagents and solvents were used without further purification unless stated otherwise. Preparative normal phase chromatography was performed on a CombiFlash Rf 150 (Teledyne Isco) with pre-packed RediSep Rf silica gel cartridges. Thin-layer chromatography was performed with aluminum-backed sheets with silica gel 60 F254 (Merck, ref 1.05554), and spots were visualized with UV light and a 1% aqueous solution of KMnO 4 . HPLC purification was performed on a Prominence ultrafast liquid chromatography system (Shimadzu) using a Waters Xbridge 150 mm C18 prep column with a gradient of acetonitrile in water (with 0.1% trifluoroacetic acid) over 32 min. All compounds showed a sharp melting point and a single spot on TLC. Purity >95% of all final compounds was assessed by the integration of LC chromatograms. Melting points were determined in open capillary tubes with an MFB 595010M Gallenkamp apparatus. 400 MHz 1 H and 100.6 MHz 13 C NMR spectra were recorded on a Varian Mercury 400 or a Bruker Avance III 400 spectrometer. 500 MHz 1 H NMR spectra were recorded on a Varian Inova 500 spectrometer. The chemical shifts are reported in ppm (δ scale) relative to internal tetramethylsilane, and coupling constants are reported in Hertz (Hz).
Assignments given for the NMR spectra of selected new compounds have been carried out on the basis of DEPT, COSY 1 H/ 1 H (standard procedures), and COSY 1 H/ 13 C (gHSQC and gHMBC sequences) experiments. [Figure 7 caption: Effects of compound 21 on pain-related behaviors and referred mechanical hyperalgesia induced by CTX. (a) Behavioral score was recorded at 30 min intervals over the 150−240 min observation period after the intraperitoneal (ip) injection of CTX (300 mg/kg) or its vehicle. (b) The 50% mechanical threshold was evaluated by stimulation of the abdomen with von Frey filaments at 240 min after the administration of CTX or its vehicle and was used as an index of referred hyperalgesia. Each bar and vertical line represents the mean ± SEM of values obtained in at least six animals per group. Statistically significant differences: **p < 0.01, between nonsensitized mice (open bar) and the other experimental groups; #p < 0.05, ##p < 0.01 between CTX-treated mice injected with the sEHI or their solvent (black bar); ++p < 0.01 mice injected with compound 21 associated or not with MS-PPOH (one-way ANOVA followed by Student−Newman−Keuls test).] IR spectra were run on PerkinElmer Spectrum RX I, PerkinElmer Spectrum TWO, or Nicolet Avatar 320 FT-IR spectrophotometers. Absorption values are expressed as wavenumbers (cm −1 ); only significant absorption bands are given. High-resolution mass spectrometry (HRMS) analyses were performed with an LC/MSD TOF Agilent Technologies spectrometer. The elemental analyses were carried out in a Flash 1112 series Thermo Finnigan elemental microanalyzer (A5) to determine C, H, N, and S. The structure of all new compounds was confirmed by elemental analysis and/or accurate mass measurement, IR, 1 H NMR, and 13 C NMR. The analytical samples of all the new compounds, which were subjected to pharmacological evaluation, possessed purity ≥95% as evidenced by their elemental analyses (Table S1) or HPLC/UV. HPLC/UV purities were determined with an Agilent 1260 Infinity II HPLC LC/MSD coupled to a photodiode array. 5 μL of a 0.5 mg/mL sample in methanol/acetonitrile were injected, using an Agilent Poroshell 120 EC-C18, 2.7 μm, 50 mm × 4.6 mm column at 40 °C. The mobile phase was a mixture of A = water with 0.05% formic acid and B = acetonitrile with 0.05% formic acid, with the method described as follows: flow 0.6 mL/min, 5% B−95% A 3 min, 100% B 4 min, and 95% B−5% A 1 min. Purity is given as % of absorbance at 220 nm. annulen-7-yl)-3-(piperidin-4-yl)urea (27). t- Each system was immersed in a pre-equilibrated truncated octahedral box of water molecules with an internal offset distance of 10 Å. All systems were neutralized with explicit counterions (Na + or Cl − ). A two-stage geometry optimization approach was performed. First, a short minimization of the positions of the water molecules was carried out with positional restraints on the solute applied through a harmonic potential with a force constant of 500 kcal mol −1 Å −2 . The second stage was an unrestrained minimization of all the atoms in the simulation cell. Then, the systems were gently heated in six 50 ps steps, increasing the temperature by 50 K each step (0−300 K) under constant-volume, periodic-boundary conditions, using the particle-mesh Ewald approach 60 to introduce long-range electrostatic effects. For these steps, a 10 Å cutoff was applied to the Lennard−Jones and electrostatic interactions. Bonds involving hydrogen were constrained with the SHAKE algorithm.
61 Harmonic restraints of 10 kcal mol −1 were applied to the solute, and the Langevin equilibration scheme was used to control and equalize the temperature. 62 The time step was kept at 2 fs during the heating stages, allowing potential inhomogeneities to self-adjust. Each system was then equilibrated for 2 ns with a 2 fs timestep at a constant pressure of 1 atm (NPT ensemble). Finally, conventional MD trajectories at a constant volume and temperature (300 K) were collected. In total, we carried out three replicas of 500 ns MD simulations for sEH in the presence of 13, 15, 21, 23, and 24 gathering a total of 7.5 μs of MD simulation time. Each MD simulation was clusterized based on active site residues, and the structures corresponding to the most populated clusters were used for the noncovalent interactions analysis. We monitored the presence of water molecules using the watershell function of the cpptraj MD analysis program. 63 aMD simulations 64,65 were used to study the spontaneous binding of 15 in the active site of sEH. Standard dualboost aMD simulations were performed using the same simulation protocols and aMD parameters as described in our previous works. 21 To reconstruct the spontaneous binding process, we placed one molecule of 15 in the solvent with a minimum distance of 25 Å from catalytic Asp335. First, we performed 250 ns of conventional MD followed by 10 replicas of 2 μs of aMD capturing one binding event (see Movie S1 comprising only the aMD simulation part). Binding affinities (kcal/mol) of compounds 13, 15, 21, and 23 were computed using the MM/GBSA method as implemented in AMBER 18. Preparation of HEK 293T Lysates. HEK293T cells were grown in DMEM media (D6546-500ML Sigma) supplemented with 10% FBS, 2 mM glutamax, 100 units/mL penicillin, and 0.1 mg/mL streptomycin. They were maintained at 37°C with 5% CO 2 . Cells were split every 3 to 4 days according to an ATCC protocol. The cells were harvested and collected by centrifugation (500 g for 5 min at 4°C ) and the supernatant was removed. The pellets were washed twice with ice-cold PBS and resuspended in 2 vol of ice-cold lysis buffer (50 mM HEPES pH 7.5, 150 mM NaCl, 1 mM DTT and 0.5% NP-40). After 30 min on ice, the cells were centrifuged to remove cell debris for 5 min at 4°C. The supernatant was aliquoted and flash frozen in liquid N 2 for use as lysates, with a total protein concentration of 1 mg/mL. Protein concentrations were determined using the BCA assay (Fisher Scientific). Labeling in HEK 293T Lysates. HEK 293T lysates were spiked or not with 100 ng of recombinant purified sEH, treated either with 100 nM probe 28 or DMSO, and incubated for 30 min at 37°C. After this time, the samples were irradiated for 6 min at 365 nm using a 100 W UV lamp. Subsequently, a bi-functional tag containing a TAMRA dye and a biotin was incorporated using copper(I)-catalyzed azide−alkyne cycloaddition (CuAAC). The photoaffinity labeling was analyzed by in-gel analysis, mixing the samples with 4× SDS-loading buffer, and separating using 12% SDS-PAGE after which the gel was scanned on a Typhoon FLA 9500. 4.6. Labeling Purified Soluble Epoxide Hydrolase for Minimal Probe Concentration Determination. Purified recombinant sEH was produced and purified as indicated previously. 30 Of the pure active enzyme 100 or 200 ng were incubated for 30 min at 37°C with decreasing concentrations of probe 3, namely: 10 μM, 1 μM, 100 nM, 10 nM, and 1 nM. After this time, the compounds were irradiated for 6 min at 365 nm using a 100 W UV lamp. 
Subsequently, a bifunctional tag containing a TAMRA dye and a biotin was incorporated using copper(I)-catalyzed azide−alkyne cycloaddition (CuAAC). The photoaffinity labeling was analyzed by in-gel analysis by mixing the samples with 4× SDS-loading buffer and separating using 12% SDS-PAGE after which the gel was scanned on a Typhoon FLA 9500. 4.7. EPHX2 Target Engagement Confirmation and Off-Target Elucidation by Pull Down. Untreated HEK293T whole cell lysates were normalized to a concentration of 1 mg/mL in a volume of 100 μL, per condition. Lysates were then treated with DMSO, 10 μM of probe 28 or 10 μM of probe 28, and 100 μM of 15 (for competition experiments), and incubated at 37°C for 30 min. After this time, the whole was irradiated for 6 min at 365 nm using a 100 W UV lamp. Subsequently, a bi-functional tag containing a TAMRA dye and a biotin was incorporated via CuAAC. The excess reagents from the samples were then removed by acetone precipitation. Following resuspension of the pellets to a final volume of 100 μL, half of the sample was kept as the input control. The remaining 50 μL were incubated with 20 μL of pre-washed streptavidin beads (Thermo Fisher) for 1 h with mixing at RT. The supernatant was removed, and the beads were sequentially washed with 0.33% SDS in PBS (2 × 50 μL), 1 M NaCl (2 × 50 μL) and PBS (2 × 50 μL). Bound proteins were eluted by boiling (95°C) the beads with 60 μL of 1× SDS loading buffer for 10 min. Samples were resolved by 12% SDS-PAGE. Following visualization using a Typhoon FLA 9500, the gel was transferred onto a nitrocellulose membrane and probed with VEGF2 (cell signaling), p38 MAPK (cell signaling), EPHX1 (Elabscience), and EPHX2 (Abcam) for detection. This experiment was also carried out using lower probe and parent compound concentrations of 1 and 10 μM, respectively, yielding the same results. Affinity-Based Probe and Parent Compound Off-Target Profile Elucidation. To HEK293T cell lysates at 1 mg/mL protein concentration spiked or not with 100 ng of recombinant human sEH and 100 ng of purified recombinant enzyme were treated with either 100 nM probe 28, 10 μM urea 15, and 100 nM probe 28 or DMSO to a concentration of 1% of the total sample. After 30 min of incubation of the compounds at 37°C, the whole was irradiated for 6 min at 365 nm using a 100 W UV lamp. Subsequently, a bi-functional tag containing a TAMRA dye and a biotin was incorporated via CuAAC. The samples were analyzed by in-gel analysis by mixing the samples with 4× SDS-loading buffer and separating using 12% SDS-PAGE after which the gel was scanned on a Typhoon FLA 9500 and/or submitted to Western blot analysis using human sEH antibody for detection (Abcam). The comparison of labeling patterns via fluorescence showed the inability of the parent compound to compete out the probe 28 for most of the targets, which pointed out that except for the sEH the other labeled proteins are not targets of the parent compound but of the probe 28. The plasma was separated by centrifugation for 10 min and stored at −80°C until analysis by HPLC. Frozen plasma samples were thawed at room temperature and 25 μL of acetonitrile were added to a 100 μL of plasma sample. The sample was vortexed for 30 s and centrifuged (14,000 rpm/min) for 5 min. The supernatant was transferred to an injection bottle and 25 μL was injected into the chromatographic system. Instruments and Analysis Conditions. The HPLC system was a PerkinElmer LC (PerkinElmer INC, Massachusetts, U.S.) 
consisting of a Flexar LC pump, a chromatography interface (NCI 900 network), a Flexar LC autosampler PE, and a Waters 2487 dual λ absorbance detector. The chromatographic column was a Kromasil 100-5-C18 (4.0 × 200 mm; Teknokroma Analítica S.A., Sant Cugat, Spain). The flow was 0.8 mL/min, and the mobile phase consisted of 0.05 M KH 2 PO 4 (30%)/acetonitrile (70%) under isocratic conditions. The elution times of 15 and 21 were 5.6 and 4.4 min, respectively. Compounds were detected at 220 nm. The assay had a range of 0.015−25 μg mL −1 . The calibration curves were constructed by plotting the peak area of the analyzed compound against known concentrations. Compound 22 was analyzed under the same chromatographic conditions, but its response in the analysis was 10 times lower than that of 15 and 21. 4.9.4. Pharmacokinetic Analysis. The 15 and 21 plasma concentration versus time curves for the means of the animals were analyzed by a noncompartmental model based on statistical moment theory using the "PK Solutions" computer program. The pharmacokinetic parameters calculated were as follows: the area under the plasma concentration versus time curve (AUC), calculated using the trapezoidal rule in the interval 0−6 h; the HL (t 1/2β ), determined as ln 2/β, with β calculated from the slope of the linear, least-squares regression line; and C max and T max , read directly from the mean concentration curves. 4.10. In Vivo Efficacy Studies. 4.10.1. Experimental Animals. Experiments were performed in female WT-CD1 (Charles River, Barcelona, Spain) mice weighing 25−30 g. Mice were acclimated in our animal facilities for at least 1 week before testing and were housed in a room under controlled environmental conditions: 12/12 h day/night cycle, constant temperature (22 ± 2 °C), and air replacement every 20 min; they were fed a standard laboratory diet (Harlan Teklad Research Diet, Madison, WI, USA) and tap water ad libitum until the beginning of the experiments. The behavioral tests were conducted during the light phase (from 9.00 to 15.00 h), and randomly throughout the oestrous cycle. Animal care was in accordance with institutional (Research Ethics Committee of the University of Granada, Spain), regional (Junta de Andalucía, Spain), and international standards (European Communities Council Directive 2010/63). Drugs and Drug Administration. The sEHI were dissolved in 5% DMSO (Merck KGaA, Darmstadt, Germany) in physiological sterile saline (0.9% NaCl). Drug solutions were prepared immediately before the start of the experiments and injected sc in a volume of 5 mL/kg into the interscapular area. To test for the effects of MS-PPOH (Cayman Chemical Company, Ann Arbor, MI, USA), a selective inhibitor of microsomal CYP450 epoxidase, 45 on the effects induced by the sEHI tested, this compound was dissolved in 5% DMSO and 40% cyclodextrin in saline and administered 5 min before the sEHI injection. When the effect of the association of several drugs was assessed, each injection was performed in a different area of the interscapular zone to avoid mixing of the drug solutions and any physicochemical interaction between them. In all cases, the researchers who performed the experiments were blinded to the treatment received by each animal. As will be detailed below, we used two different algogenic substances to explore the effects of sEHI on nociception: capsaicin was used to induce somatic mechanical hypersensitivity, and CTX to induce visceral pain. Capsaicin (Sigma-Aldrich Química S.A.)
was dissolved in 1% DMSO in physiological sterile saline to a concentration of 0.05 μg/μL (i.e., 1 μg per mouse). Capsaicin solution was injected intraplantarly (i.pl.) into the right hind paw proximate to the heel, in a volume of 20 μL using a 1710 TLL Hamilton microsyringe (Teknokroma, Barcelona, Spain) with a 30 1/2gauge needle. Control animals were injected with the same volume of the vehicle of capsaicin. CTX (Sigma-Aldrich, Madrid, Spain), which was used to induce a painful cystitis, was dissolved in saline and injected ip at a dose of 300 mg/kg, in a volume of 10 ml/kg. The same volume of solvents was injected in control animals. Evaluation of Capsaicin-Induced Secondary Mechanical Hypersensitivity. Animals were placed into individual test compartments for 2 h before the test to habituate them to the test conditions. The test compartments had black walls and were situated on an elevated mesh-bottomed platform with a 0.5 cm 2 grid to provide access to the ventral surface of the hind paws. In all experiments, punctate mechanical stimulation was applied with a dynamic plantar aesthesiometer (Ugo Basile, Varese, Italy) at 15 min after the administration of capsaicin or its solvent. Briefly, a nonflexible filament (0.5 mm diameter) was electronically driven into the ventral side of the paw previously injected with capsaicin or solvent (i.e., the right hind paw), at least 5 mm away from the site of the injection toward the fingers. The intensity of the stimulation was fixed at 0.5 g force, as described previously. 66 When a paw withdrawal response occurred, the stimulus was automatically terminated, and the response latency time was automatically recorded. The filament was applied three times, separated by intervals of 0.5 min, and the mean value of the three trials was considered the withdrawal latency time of the animal. A cutoff time of 50 s was used. The compounds tested, or their solvent, were administered sc 30 min before the i.pl. administration of capsaicin or DMSO 1% (i.e., 45 min before we evaluated the response to the mechanical punctate stimulus). Evaluation of Cyclophosphamide-Induced Visceral Pain. CTX-evoked pain behaviors and referred hyperalgesia were examined following a previously described protocol with slight modifications. 48 Animals were placed into the same individual test compartments described above for 40 min to habituate them to the test conditions. Then, mice were injected ip with CTX or saline. Compound 21 or its solvent was sc injected at 120 min after CTX ip administration, and pain behaviors were recorded for 2 min every 30 min in the period from 150 to 240 min. These pain-related behaviors were coded according to the following scale: 0 = normal, 1 = piloerection, 2 = labored breathing, 3 = licking of the abdomen, and 4 = stretching and contractions of the abdomen. At the end of the 2 h observation period (i.e., 4 h after the CTX injection), the sensory threshold in the abdomen was measured 240 min after CTX administration, using a series of von Frey filaments with bending forces ranging from 0.02 to 2 g (Stoelting, Wood Dale, USA). Testing was always initiated with the 0.4 g filament. The response to the filament was considered positive if immediate licking/scratching of the application site, sharp retraction of the abdomen, or jumping was observed. If there was a positive response, a weaker filament was used; if there was no response, a stronger stimulus was then selected. 
The 50% withdrawal threshold was determined using the up-and-down method and calculated using the Up−Down Reader software. 67 ■ ASSOCIATED CONTENT Complete details of in vitro biological methods, 1 H and 13
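The up-and-down estimate of the 50% threshold used above is, in the standard Dixon/Chaplan formulation, computed from the last filament applied, the pattern of positive and negative responses, and the log-spacing of the filament set. The study delegates this calculation to the Up−Down Reader software; purely as an illustrative sketch (the κ value and the 0.224 log-spacing below are assumptions tied to Dixon's tables and the standard von Frey filament set, not values taken from this study):

```python
import math

def fifty_percent_threshold(final_filament_g, kappa, delta_log=0.224):
    """Chaplan-style 50% withdrawal threshold (grams) from an up-and-down von Frey
    sequence: 10**(Xf + kappa*delta) / 10**4, where Xf is log10 of the final
    filament value expressed in 0.1 mg units, kappa is the tabulated Dixon value
    for the observed response pattern, and delta is the mean log-spacing of the set."""
    xf = math.log10(final_filament_g * 10_000)   # convert grams to 0.1 mg units
    return (10 ** (xf + kappa * delta_log)) / 10_000

# Example: final filament 0.4 g; kappa would be read from Dixon's table for the
# observed X/O response pattern (the value here is purely illustrative).
print(round(fifty_percent_threshold(0.4, kappa=-0.5), 3), "g")
```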
An Improved Pedestrian Tracking Method Based on Wi-Fi Fingerprinting and Pedestrian Dead Reckoning Wi-Fi based positioning has great potential for use in indoor environments because Wi-Fi signals are near-ubiquitous in many indoor environments. With a Reference Fingerprint Map (RFM), fingerprint matching can be adopted for positioning. Various types of assisting information can be used to increase the accuracy of Wi-Fi based positioning. One of the most commonly used types of assisting information is the Pedestrian Dead Reckoning (PDR) information derived from inertial measurements. This is widely adopted because the inertial measurements can be acquired through a Commercial Off The Shelf (COTS) smartphone. To integrate Wi-Fi fingerprinting and PDR information, many methods have adopted filters, such as Kalman filters and particle filters. In this paper, a new methodology for the integration of Wi-Fi fingerprinting and PDR using graph optimization is proposed. For the Wi-Fi based fingerprinting part, our method adopts the state-of-the-art hierarchical structure and the Penalized Logarithmic Gaussian Distance (PLGD) metric. In the integration part, a simple extended Kalman filter (EKF) is first used for the integration of Wi-Fi fingerprinting and PDR results. Then, the tracking results are adopted as initial values for the optimization block, where Wi-Fi fingerprinting and PDR results are adopted to form a concentrated cost function (CCF). The CCF can be minimized with the aim of finding the optimal poses of the user and thus better tracking results. With both real-scenario experiments and simulations, we show that the proposed method performs better than classical Kalman filter based and particle filter based methods, with lower average and maximum positioning errors. Additionally, the proposed method is more robust to outliers in both Wi-Fi based and PDR based results, which are commonly seen in practical situations. Introduction While the problem of outdoor positioning has been extensively solved thanks to the development of the Global Navigation Satellite System (GNSS), the indoor localization problem is still waiting for a satisfactory and reliable solution. Currently, many techniques have been applied to the area of indoor positioning. According to their different degrees of dependence on external infrastructures, these techniques can roughly be categorized into three types: • Fully independent of external infrastructures. This type of positioning method relies on the inertial readings from waist-mounted [1,2] or foot-mounted [3][4][5] Inertial Measurement Units (IMUs) to calculate the relative position changes of the pedestrian. As self-contained IMUs are adopted, no external signals or infrastructures are needed. Although the Pedestrian Dead Reckoning (PDR) algorithm [6] (corresponding to the waist-mounted case) or the Zero Velocity Update (ZVU) algorithm [7] (corresponding to the foot-mounted case) can reduce the positioning error growth from cubic to linear, this type of method still has accumulating errors without upper bounds, although it does not need preinstalled devices. This type of method is thus suitable for emergency response cases, such as fire fighting. • Fully dependent on special external infrastructures. This type of method or system relies on specially designed and deployed hardware (such as beacons or tags) in the positioning areas to provide the positioning signals.
For example, the active bat system [8] and the Cricket system [9] adopt the deployed ultrasonic signals for positioning. The system SpotON [10] adopts the Radio Frequency Identification (RFID) technique to locate the pedestrian. Some other positioning systems, such as [11,12], adopt the Ultra Wide Band (UWB) system to locate the pedestrian according to Time of Arrival (ToA) measurements provided by the UWB systems. As this type of positioning system needs to deploy sensors or beacons all over the positioning areas, it is normally high cost and with limited coverage. • Dependent on the near-ubiquitous signals in indoor environments. An example of this type is Wi-Fi based indoor positioning. As the Wi-Fi signals are already abundant in a wide range of buildings nowadays, they can be adopted without considering peculiar deployment. In this way, the cost can be lowered and the coverage significantly expanded. In Wi-Fi based positioning, the user carries a Wi-Fi signal receiver-in most scenarios, a smartphone--and needs to be located. From the receiver, Received Signal Strength Indication (RSSI) readings can be acquired for positioning use. The methods for Wi-Fi based positioning methods can roughly be categorized into two types: model based [13][14][15] and fingerprinting based [16][17][18]. Model based methods assume a path loss model to estimate the distances from the Access Points (APs) to the user. With the derived ranges, the position of the user can be estimated. However, as both the accuracy of the path loss model and the positions of APs are not always perfectly known, the overall accuracy of such a system can be greatly deteriorated. Fingerprinting based methods are arguably more accurate and more suitable for large scale use cases. Fingerprints in this context denote the RSSI readings from different APs and are considered unique, indicating different positions. The receiver can tell the RSSI readings from different APs because the APs are constantly sending messages containing information of their own Media Access Control (MAC) addresses. In this type of method, two steps are essential: the offline training phase and the online localization phase. In the offline training phase, a Reference Fingerprint Map (RFM) is established with fingerprints collected at different known locations called Reference Points (RPs). Then, in the localization step, a recently collected fingerprint is compared against the fingerprints in the RFM to solve for the position where the fingerprint is collected. Much research has been done on the localization step. The RADAR system [19] proposed a simple k Nearest Neighbor (kNN) method for matching the RSSI fingerprints and solving for the location. Bayes' rule is implemented in [20] and the localization problem is regarded as a Maximum A Posterior (MAP) estimation problem. However, these methods are computationally expensive with large Regions of Interest (RoI). To solve the problem of positioning latency due to larger RoI and to enhance accuracy, hierarchical positioning methods are proposed in [21,22]. In these methods, the RFMs are clustered into smaller batches or subregions either in the RSSI space or the coordinate space. The position of the user is first confined in one or several batches (coarse localization). Then comes the fine localization. As coarse localization can limit the search space to smaller regions, the positioning efficiency is increased. 
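As a concrete illustration of the fingerprint matching step just described, the sketch below implements a generic coarse-then-fine matcher: a coarse step keeps only reference points that share enough visible APs with the observed fingerprint, and a fine step applies weighted kNN in RSSI space. This is only a simplified stand-in under stated assumptions (Euclidean RSSI distance, a fixed fill-in value for unheard APs, and an AP-overlap rule); the method proposed later in this paper instead uses a hierarchical structure with the PLGD metric.

```python
import numpy as np

def match_fingerprint(obs, rfm_rssi, rfm_xy, k=3, min_shared_aps=2, missing=-100.0):
    """Coarse-then-fine fingerprint matching.

    obs      : (n_ap,) observed RSSI vector, NaN where an AP was not heard.
    rfm_rssi : (n_rp, n_ap) reference fingerprint map, NaN for unheard APs.
    rfm_xy   : (n_rp, 2) coordinates of the reference points (RPs).
    """
    seen = ~np.isnan(obs)
    shared = (~np.isnan(rfm_rssi) & seen).sum(axis=1)
    candidates = np.flatnonzero(shared >= min_shared_aps)    # coarse localization
    if candidates.size == 0:                                  # fall back to all RPs
        candidates = np.arange(rfm_rssi.shape[0])

    obs_f = np.where(seen, obs, missing)
    ref_f = np.where(np.isnan(rfm_rssi[candidates]), missing, rfm_rssi[candidates])
    dist = np.linalg.norm(ref_f - obs_f, axis=1)              # Euclidean distance in RSSI space
    nearest = candidates[np.argsort(dist)[:k]]
    w = 1.0 / (np.sort(dist)[:k] + 1e-6)                      # closer RPs get larger weights
    return (rfm_xy[nearest] * w[:, None]).sum(axis=0) / w.sum()

# Toy RFM: four RPs along a corridor, three APs.
rfm_rssi = np.array([[-40., -70., np.nan], [-55., -60., -75.],
                     [-70., -50., -65.], [np.nan, -45., -55.]])
rfm_xy = np.array([[0., 0.], [5., 0.], [10., 0.], [15., 0.]])
print(match_fingerprint(np.array([-57., -58., -74.]), rfm_rssi, rfm_xy, k=2))
```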
As the only hardware on the user side is normally a smartphone, other sensors mounted on the phone, e.g., inertial sensors, can be adopted to get superior positioning accuracy than pure Wi-Fi fingerprinting based positioning. Additionally, some other available information, e.g., indoor floorplans, can also be adopted. Generally, to include multiple sources of information to enhance Wi-Fi based fingerprinting is called assisted positioning. The authors in [23] adopted a particle filter (PF) to fuse the information of Wi-Fi fingerprinting positioning results, Pedestrian Dead Reckoning (PDR) information estimated from inertial sensors and indoor floorplans. The methodology proposed to form a discretized representation of the indoor map and then adopt it as a priori information in the PF probabilistic model. As some irrelevant degrees of freedom in the state space are removed, the methods can sufficiently decrease the number of particles needed while boosting accuracy. In [24], the authors proposed a particle filter method to integrate the fingerprinting and step counter results. This method is calibration-free for the step counter model and works transparently for heterogeneous devices and users. In [25], the authors combined the notions of hierarchical positioning and assisted positioning by designing a two-filter approach. The final positioning results are adopted for constraining the search space for fingerprinting matching. The first stage filter is a Kalman filter (KF) to achieve a smoothed fingerprinting matching. The second stage KF is adopted to integration of fingerprint matching results and PDR results. In [26], a PF was also adopted. The two primary sources needed to be integrated in the method are the Wi-Fi RSSI and PDR results. However, iBeacons are also deployed in areas with limited AP coverage and can be adopted to occasionally correct positioning errors. In [27], the authors again adopted a particle filter to integrate the information of Wi-Fi fingerprinting, PDR results and an indoor map. Differently, both fixed lag and fixed interval PF based smoothing processes were studied in the implementation, both of which can enhance accuracy and reliability compared to a conventional particle filter. In the context of indoor pedestrian tracking from Wi-Fi fingerprint matching and other sources of information, most methods rely on filters such as KF and PF. Although some publications have made some modifications on the standard filter processes, e.g., by adding smoothing [27] and by adding multi-stage filter integration [25], they are still variants of filter based approaches. For these methods, no matter how the filters are implemented, e.g., Kalman filter or particle filter, they are assumed to be first-order Markov processes, denoting that the next estimation is only based on the previous estimation. All the previous information or observations are represented as the covariance matrix in the Kalman filter and a group of particles in the particle filter. In the structure of filter based methods, as the filtering process goes on, previous information is gradually "forgotten." In this paper, a novel methodology for integration of Wi-Fi fingerprinting and PDR results is proposed. The methodology is graph optimization based rather than filter based. 
In the proposed graph-optimization based approach, all available observations, whether old or current, contribute equally to the final concentrated cost function (CCF) and thus enable reaching globally optimized parameters, rather than the sub-optimal estimations of filter based methods. The method only uses sensors from a Commercial Off The Shelf (COTS) smartphone. The framework of the methodology is shown in Figure 1, consisting of two blocks. The filter block is essentially a simple extended Kalman filter (EKF) integrating Wi-Fi fingerprinting and PDR results for tracking. The tracking results are adopted as initial values for the optimization block, while Wi-Fi fingerprinting and PDR results are adopted to form a CCF. The CCF can be minimized with the aim of finding the optimal poses of the user and thus better tracking results. As the EKF tracking results can normally provide sufficiently good initial values, the optimization can reach convergence in only a few iterations. This means that the optimization has the ability to run in real time. Experiments were designed to verify that the proposed methodology can outperform classical KF and PF based methods in terms of pedestrian tracking accuracy. Simulations were carried out, showing that the proposed methodology is more robust to noise or errors in both Wi-Fi fingerprint matching results and PDR results. The fingerprinting errors are by far the most prevalent in practical situations for many reasons, such as constant changes in the RFM and RSSI measurement noise. PDR errors are also very common in real applications, possibly due to unconventional movement of the smartphone carriers. The remainder of the paper is arranged as follows: Section 2 gives the related works on pedestrian tracking, including the PDR algorithm and graph optimization, which are used as tools for solving the pedestrian tracking problem in this paper. Section 3 is the methods section, describing the details of the proposed method (Figure 1). Section 4 is the experiment section with both simulations and real-scenario experiments. Then come the conclusion (Section 5) and the discussion (Section 6). Related Works The PDR algorithm has become more and more common in many smartphone based applications because it is an efficient way of tracking the user based on inertial readings [28,29]. Graph optimization, as an alternative to filter based methods, has gained more and more attention over the years, and has been proven to give more accurate results than filter based methods in vision based positioning [30][31][32]. The PDR algorithm and graph based optimization are adopted as two tools for solving the pedestrian tracking problem in this paper. They are introduced as follows. Pedestrian Dead Reckoning Unlike closed-form inertial calculations, the PDR algorithm for pedestrian tracking assumes a step-wise motion model of the user; i.e., the user walks step by step. This can be described in Equation (1): x_t = x_{t-1} + L_t·cos ϕ_t, y_t = y_{t-1} + L_t·sin ϕ_t, (1) where x_t and y_t denote the user's position at the t-th step (with the heading ϕ_t measured from the x-axis). The position can be derived from the previous step if the step length L_t and the heading ϕ_t are known. In order to track the user's positions, the PDR algorithm must provide these three types of information in the following ways: • Detecting new steps. This involves activity recognition based on inertial readings. The user can carry the smartphone while at a standstill, while moving forward, and during other irregular movements.
This issue has been studied in publications such as [28]. If new steps are detected falsely, the tracking error will increase. • Step length estimation. This corresponds to the estimation of L_t in Equation (1). It cannot be done through inertial integration because there is too much noise in the inertial readings and the error accumulates cubically. In our paper, we adopt the a priori model, which assumes that the step length is linear in the step frequency: L_t = a·f_t + b, (2) where f_t is the frequency of new steps and the parameters a and b are user-dependent. Following [33], these parameters can be estimated adaptively during walking. According to that method, only minimal training data are needed to relate a, b, step frequency, and step length for different individuals; therefore, only a minimal "offline" training phase is required for the step length estimation. • Orientation or heading estimation. This corresponds to the estimation of ϕ_t in Equation (1). In this paper, we adopt the method proposed in [34], where magnetometer readings are used to improve heading estimation. The method in [34] also has the advantage of a low computational load and can operate at low sensor sampling rates, which limits the required computing power and enables real-time implementation. Although many methods have been proposed to enhance the accuracy of the PDR algorithm, tracking results based on pure PDR are still quite unreliable because the inertial sensors on a smartphone are low-end sensors. In general, the tracking results from the PDR algorithm are integrated with other types of information. In our paper, that other type of information is Wi-Fi based fingerprint matching (a minimal code sketch of this step-wise update is given below). Graph Optimization Graph optimization can be regarded as the counterpart of recursive filters. In recursive filters, such as the KF or the PF, the newest state estimate is made from the latest observation and the previous state estimates. In graph optimization, by contrast, the states at different times are estimated in a batch by minimizing a CCF composed of squared error terms between the observations and the states at different times. It is a state-of-the-art technique widely adopted in the visual positioning area, especially for visual Simultaneous Localization and Mapping (SLAM) problems. In the graph optimization problem, the CCF can be represented as a sum of squares and can be related to a graph, where the nodes represent the states or variables to be estimated and the edges between the nodes represent known relationships between them. Minimizing the CCF essentially means finding the configuration of nodes that best satisfies the known relationships (i.e., the observations) [35]. As the CCF consists of square sum terms, the optimization problem can be regarded as a non-linear least-squares minimization. The typical solution for such a problem is to linearize around the current state, solve a linear system, and then iterate. Typical methods of this kind include the Gauss–Newton and Levenberg–Marquardt algorithms. The solution of the graph optimization problem was surveyed in detail in [36], and we directly took the functional tools from it. In the context of pedestrian positioning, graph optimization has mostly been used for building the RFM. In [37], the authors adopted the tool for building the RFM with a foot-mounted Inertial Measurement Unit (IMU) and a smartphone.
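To make the step-wise PDR model of Section 2.1 concrete, the following is a minimal Python sketch that applies Equations (1) and (2) to a sequence of detected steps. The coefficient values and input arrays are illustrative assumptions, not values from this paper.

```python
import numpy as np

def pdr_track(step_freqs, headings, x0=0.0, y0=0.0, a=0.35, b=0.15):
    """Dead-reckon positions from detected steps (Equations (1) and (2)).

    step_freqs : per-step frequencies f_t in Hz (from step detection)
    headings   : per-step headings phi_t in radians (from the orientation filter)
    a, b       : illustrative coefficients of the linear step-length model L_t = a*f_t + b
    Returns an (N+1) x 2 array of positions, starting at (x0, y0).
    """
    positions = [(x0, y0)]
    x, y = x0, y0
    for f_t, phi_t in zip(step_freqs, headings):
        L_t = a * f_t + b             # Equation (2): step length from step frequency
        x += L_t * np.cos(phi_t)      # Equation (1): advance along the heading
        y += L_t * np.sin(phi_t)
        positions.append((x, y))
    return np.array(positions)

# Example: five steps walked roughly north-east at about 1.8 Hz.
track = pdr_track(step_freqs=[1.8] * 5, headings=[np.pi / 4] * 5)
print(track)
```

Because the update is purely incremental, any error in the step length model or the heading estimate propagates into all later positions, which is why pure PDR drifts and is integrated with Wi-Fi fingerprinting in this paper.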
In [38], graph optimization was adopted to merge crowd-sourced trajectories and generate the RFM with the help of some known landmarks. In those methods, other external devices or information, e.g., Bluetooth beacons, external inertial measurement units (IMUs), or landmarks with known positions, are needed, which increases system complexity and hinders widespread usage in indoor environments. The proposed method, however, focuses on the positioning part and relies only on a smartphone on the user side, which makes it suitable for ubiquitous indoor positioning. In our paper, we focus on pedestrian tracking, adopting graph optimization to integrate PDR results and Wi-Fi fingerprint matching. Here, we do not go into detail on the optimization process itself but rather focus on how to cast the pedestrian tracking problem in the graph optimization framework. Method As shown in Figure 1, two types of readings from the smartphone are adopted for positioning: RSSI values and inertial readings. The RSSI values are adopted for the Wi-Fi based fingerprint matching algorithm, which estimates the locations of the users. The matching algorithm implemented here is discussed further in Section 3.1. The inertial readings are processed through the PDR algorithm. As mentioned in Section 2.1, step detection and step length are estimated adaptively following [33], and the orientation estimation is taken from the method in [34]. Because these are widely adopted, mature methods, we do not include their details in the scope of this paper. For the Wi-Fi based fingerprinting results, the position estimates may contain outliers due to noise in both the RFM and the measured RSSI. For the PDR based tracking results, the positioning error may accumulate severely over time. These two estimations are considered independent because they come from different sensors on the phone. They also have different error characteristics and can therefore be integrated to achieve better results. The EKF (introduced in Section 3.2) is first carried out to integrate the Wi-Fi based fingerprinting results and the PDR results, giving a rough fusion output. This output acts as the initial value for the graph optimization block (comprising the establishment of the CCF and the optimization process). The outputs of the graph optimization block are the final tracking results, which prove to be more accurate than the output of the EKF. The graph optimization block is introduced in Section 3.3. Wi-Fi Based Fingerprint Matching The Wi-Fi based fingerprint matching algorithm is the key to Wi-Fi based positioning. Without any additional data to assist positioning, the matching results can be directly adopted as the users' location estimates. In our implementation, we adopt a hierarchical structure for Wi-Fi based positioning, which can significantly lower the computational cost if the RFM covers a large area [21]. The structure of the hierarchical Wi-Fi fingerprint matching is shown in Figure 2. It has two stages: the coarse positioning stage and the accurate positioning stage. In the coarse stage, the positioning area is partitioned into large grids. The representative fingerprint of a grid is the set of all APs available in that grid, O_{a_i} = ∪_{F_j ∈ a_i} AP(F_j), where F_j is a fingerprint (a vector of RSSIs from different APs), a_i denotes grid i, and the function AP(·) returns the set of available APs of a fingerprint (a short code sketch of this grid representation is given below).
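The following minimal sketch illustrates this grid-level representation: RFM fingerprints are grouped into square grid cells and, for each cell, the set of APs observed there is stored. The grid length and the data layout are assumptions made only for illustration.

```python
from collections import defaultdict

def build_grid_ap_sets(rfm, grid_len=1.5):
    """Coarse-stage representation: for each grid cell, the set of APs seen in it.

    rfm      : list of (x, y, fingerprint) tuples, where fingerprint maps AP id -> RSSI (dBm)
    grid_len : side length of a square grid cell in metres (assumed value)
    Returns a dict mapping grid index (i, j) -> set of AP ids available in that grid.
    """
    grid_aps = defaultdict(set)
    for x, y, fingerprint in rfm:
        cell = (int(x // grid_len), int(y // grid_len))
        grid_aps[cell] |= set(fingerprint.keys())  # union of AP sets of all fingerprints in the cell
    return dict(grid_aps)

# Example with two reference fingerprints falling into different cells.
rfm = [(0.4, 0.7, {"ap1": -52, "ap2": -71}),
       (3.2, 0.5, {"ap2": -60, "ap3": -80})]
print(build_grid_ap_sets(rfm))
```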
For a newly collected fingerprint F_new, the task is to compare it with the representative fingerprints of all grid areas and choose several of the closest grids as the candidate area. For this comparison, we adopt a simple Jaccard distance, d_J(F_new, a_i) = 1 − |O_new ∩ O_{a_i}| / |O_new ∪ O_{a_i}|, where O_new denotes the AP set of the newly collected fingerprint F_new and the operator |·| denotes the cardinality of a set. The Jaccard distance is adopted in a similar manner in [39]. This distance is generally faster to compute because it does not need to pad the fingerprints into vectors of the same length. However, because the Jaccard distance only takes the availability of APs into consideration, it is not as accurate as metrics that also use the actual RSSI values. The metric is suitable for the coarse localization stage because there we only need to narrow the potential area down to a small number of grids. As mentioned, in the accurate localization stage the matching process is accelerated because the search space has been narrowed. Here we implement the Penalized Logarithmic Gaussian Distance (PLGD) as the metric between two fingerprints. The PLGD metric is proposed in [40] and performs better than the traditional Logarithmic Gaussian Distance (LGD) defined in Equation (5), where the index k runs over the APs shared by F_i and F_j, with respective RSSI values F_i,k and F_j,k, and the function G(·) compares the two RSSI values (Equation (6)). The max operation in Equation (5) provides an upper bound when G(F_i,k, F_j,k) is close to zero. However, the LGD metric does not consider the situation where some APs are available in F_i but not in F_j, or vice versa. Therefore, PLGD adds penalty terms for this situation (Equation (7)), where the coefficient α is a hyperparameter indicating the weight of the penalty terms and is studied in the experimental section. The penalty function φ(·) (Equation (8)) is defined over the indexes k of APs present in F_i but not in F_j, and λ is a default RSSI value indicating a missing measurement; in our implementation, this value was set to −100 dBm. The PLGD metric considers both the AP availability information and the RSSI values in the fingerprints and was thus adopted for our method. EKF for Integration To integrate the information of the PDR based estimates and the Wi-Fi fingerprint matching based estimates, an EKF is adopted here. The EKF normally includes two phases: the prediction phase and the measurement correction phase. In our implementation, the prediction phase uses the step length and heading change from the PDR algorithm to predict the person's position at the next step. The measurement correction phase corrects the errors in the PDR based tracking according to the positions estimated by Wi-Fi based fingerprint matching. The person's state is represented by the horizontal position and the heading, s_t = [x_t, y_t, ϕ_t]^T. The prediction can be written as Equation (10): x_t = x_{t−1} + L_t·cos(ϕ_{t−1} + δϕ_t), y_t = y_{t−1} + L_t·sin(ϕ_{t−1} + δϕ_t), ϕ_t = ϕ_{t−1} + δϕ_t, (10) where L_t and δϕ_t are the estimated step length and heading change from the PDR algorithm. Then the covariance of the state, P_t, is updated as P_t = A_t·P_{t−1}·A_t^T + W_t·Q_t·W_t^T. Since Equation (10) describes a non-linear process, it must be linearized for this covariance update: matrix A_t is the Jacobian of Equation (10) with respect to the state s, and matrix W_t is the Jacobian with respect to the "driven vector" [L_t, δϕ_t].
In this implementation, A_t and W_t follow directly from differentiating Equation (10) with respect to the state and the driven vector, respectively. Q_t is the process noise, and here we assume that it is linearly related to the estimated step length and heading change from the PDR algorithm, with linear coefficients a and b. In the measurement correction phase, two elements of the state s_t (the horizontal positions) are directly observable, so the measurement model is linear, with observation matrix H = [[1, 0, 0], [0, 1, 0]]. The measurement correction then follows the standard Kalman update, K_t = P_t·H^T·(H·P_t·H^T + R)^{−1}, s_t ← s_t + K_t·(Z_t − H·s_t), P_t ← (I − K_t·H)·P_t, where • R is the noise variance matrix of the Wi-Fi based position estimation; • Z_t is the position estimate from the Wi-Fi based fingerprint matching process; • K_t is the Kalman gain. In this process, the predicted s and P from the prediction step are corrected according to the observation Z_t. There are some outliers in the Wi-Fi based fingerprint matching results; however, we do not remove these outliers in the EKF. The integration results from the EKF are only rough estimates and are refined further afterwards. CCF Forming and Graph Optimization As mentioned, unlike recursive filters, graph optimization can adopt all the available measurements to estimate the positions in a batch. The key to graph optimization is to construct a graph, or CCF, to be minimized by varying the state estimates. In the problem of integrating PDR and Wi-Fi based fingerprinting, the overall CCF should include both types of sub-terms in the form of squared errors. The CCF can also be represented as the graph shown in Figure 3. Each node (circle) denotes a state variable to be estimated; here the nodes indicate the states, or poses, of the person. The squares denote the available measurements. There are two types of measurements: the pose changes (step length and heading change) from the PDR algorithm and the position estimates from the Wi-Fi based fingerprint matching. An edge connecting a measurement to nodes means that the corresponding observation represents some relationship between those nodes; in other words, a cost term can be formed between the observation and the corresponding nodes. The symbols used in the graph representation are shown in Figure 3. As can be seen in Figure 3, there are two types of edges; correspondingly, there are also two types of error terms in the CCF. We first form the PDR based error terms. From the variables of s_t, we can derive the pose change u_t^s between the previous time and the current time, u_t^s = [√((x_t^s − x_{t−1}^s)² + (y_t^s − y_{t−1}^s)²), ϕ_t^s − ϕ_{t−1}^s]^T, where [x_t^s, y_t^s, ϕ_t^s] are taken from the variables of s_t. Then we can form the cost (in square sum form) denoting the differences between the pose changes derived from PDR and the pose changes derived from the state variables, C_PDR = Σ_k (e_k^PDR)^T·W_PDR·e_k^PDR, where k indexes all steps, W_PDR denotes a 2 × 2 weight matrix for the cost, and e_k^PDR is the residual composed of the step length and heading differences. If the state variables are exactly consistent with the PDR derived results, C_PDR is 0. The costs derived from Wi-Fi based fingerprint matching are similar. They represent the differences between the state variables and the position estimates from Wi-Fi positioning, C_WiFi = Σ_k (e_k^WiFi)^T·W_WiFi·e_k^WiFi, with e_k^WiFi the difference between the state position and the Wi-Fi estimated position at step k. For the weights of the Wi-Fi based cost function, as there should be no difference in error contribution between the x-axis and y-axis errors, we give both axes equal weight. For the PDR based cost function, we define the weight matrix according to the contribution in [41], which implies that the heading error contributes 10 times more than the step size error.
As the weights for the PDR cost function were already studied in [41], we directly take those values. The CCF is then the weighted sum of the PDR terms and the Wi-Fi terms, CCF = a·C_PDR + (1 − a)·C_WiFi. For the weights of the CCF terms, we can assume that they sum to 1; this assumption is reasonable because any common scale factor of the weights does not affect the minimizer of the CCF. The symbol a is a hyper-parameter defining the relative weight of the CCF terms and should lie between 0 and 1. If a is 0, the fusion result degenerates to pure Wi-Fi fingerprinting based results; if a is 1, it degenerates to pure PDR results. In our method, we vary the value of a from 0 to 1 in steps of 0.1. Figure 4 shows the average positioning errors for different values of a. We can see that the curve is flat in the center and peaks at the two ends; the peaks correspond to adopting only one type of information, either Wi-Fi fingerprinting or PDR results. We can also see from the figure that the flat area ranges from about 0.3 to about 0.6, where the average positioning error shows only minor fluctuations. This means that the average positioning error is not sensitive to the value of a in this range. In our implementation, we adopted the value 0.4. To sum up, only one of the weight factors was treated as a hyper-parameter to tune prior to applying the approach; the other weights were taken directly from previous publications. This reflects only the current implementation, and how the overall weights affect our method will be studied in the future. The minimization of the CCF is a classical sum-of-squares minimization problem and can be solved with compact tools such as G2O [36]. Another point to note is that there are outliers in the measurements, including PDR results affected by false step detection and Wi-Fi positioning results affected by changes in the RFM and noise in the collected RSSI fingerprints. To overcome the outlier problem, we apply a Huber loss function to the CCF, CCF = Σ_n ρ(r_n(x)), (25) where ρ(·) is the Huber loss function, n indexes all the square terms, r_n(·) is the residual error, and x denotes all the variables to be estimated. The Huber loss function is added to prevent outliers from introducing excessively large residuals. Applying such loss functions to the CCF is a robust (iteratively reweighted) least-squares technique and is introduced in detail in [36]. The complexity of the optimization is O(N²), where N is the dimension of the state space, here the number of poses to be estimated. A minimal code sketch illustrating this robust CCF construction is given below. Experiment Both real-scenario experiments and simulations were carried out to verify the effectiveness of the proposed method. Since Wi-Fi based fingerprint matching is fundamental to the proposed methodology, it was tested first, showing that the implementation here (specifically, the hierarchical structure and the PLGD metric) effectively increases the positioning accuracy obtained from Wi-Fi fingerprints alone. Then, the performance of the proposed integrated positioning methodology adopting PDR and Wi-Fi fingerprint matching was tested and compared with traditional integration methods such as the EKF and particle filtering. Besides experiments using data collected in real scenarios, we also carried out simulations in which outliers were manually added to the positioning results from Wi-Fi fingerprint matching and PDR, and we tested the robustness of the proposed graph optimization based methodology.
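The following is a minimal, deliberately simplified sketch of the robust CCF described above: PDR pose-change residuals and Wi-Fi position residuals are stacked into one vector and minimized under a Huber loss, starting from EKF-like initial poses. It uses scipy's generic least-squares solver rather than G2O, and the weights, data, and state layout are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def make_residuals(pdr_steps, pdr_dphi, wifi_xy, w_pdr=0.4, w_wifi=0.6):
    """Residual vector of the CCF: PDR pose-change terms plus Wi-Fi position terms.

    pdr_steps, pdr_dphi : per-step length (m) and heading change (rad) from PDR
    wifi_xy             : per-step position estimates from fingerprint matching, shape (N, 2)
    w_pdr, w_wifi       : illustrative CCF weights (roughly the role of a and 1 - a)
    The state vector packs the poses [x_1, y_1, phi_1, ..., x_N, y_N, phi_N].
    """
    def residuals(state):
        poses = state.reshape(-1, 3)
        res = []
        # Wi-Fi terms: state positions should stay close to the matched positions.
        res.append(np.sqrt(w_wifi) * (poses[:, :2] - wifi_xy).ravel())
        # PDR terms: pose changes derived from the state should match the PDR measurements.
        dx, dy = np.diff(poses[:, 0]), np.diff(poses[:, 1])
        res.append(np.sqrt(w_pdr) * (np.hypot(dx, dy) - pdr_steps[1:]))
        res.append(np.sqrt(w_pdr) * (np.diff(poses[:, 2]) - pdr_dphi[1:]))
        # Weak prior pinning the first heading (removes the gauge freedom of the headings).
        res.append(0.1 * np.array([poses[0, 2]]))
        return np.concatenate(res)
    return residuals

# Toy example: 4 poses, EKF-like initial values, Huber loss for robustness to outliers.
wifi_xy = np.array([[0.0, 0.0], [0.9, 0.1], [2.1, 0.0], [15.0, 0.2]])  # last fix is an outlier
pdr_steps = np.array([0.0, 1.0, 1.0, 1.0])
pdr_dphi = np.array([0.0, 0.0, 0.0, 0.0])
init = np.column_stack([np.arange(4.0), np.zeros(4), np.zeros(4)]).ravel()  # rough EKF output
sol = least_squares(make_residuals(pdr_steps, pdr_dphi, wifi_xy), init, loss="huber", f_scale=1.0)
print(sol.x.reshape(-1, 3)[:, :2])  # optimized positions
```

In this toy run, the Huber loss keeps the single outlying Wi-Fi fix from dragging the last pose away from the PDR-consistent trajectory, which is the behaviour the simulations in Section 4 probe at larger scale.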
Note that positioning outliers in both the Wi-Fi based fingerprinting results and the PDR results can be very common in practical situations. Experimental Settings To test the performance of the positioning methodology, two fundamental issues should be considered. • The prerequisites for Wi-Fi based fingerprint matching must be in place; most importantly, the RFM should be established in the experimental scene. • Positioning benchmarks should be established; this raises the issue of how to acquire ground truth positions from which the positioning errors and their statistics can be computed. We designed the experiments with these fundamental issues in mind. The experiments were carried out in part of a mall, as shown in Figure 5. Abundant APs were already installed in the mall, so no further APs needed to be installed for the experiments. To address the first issue, we adopted the method in [40] for generating the RFM. Specifically, we acquired the map of the mall, as shown in Figure 5. A Nexus 6P device (Android OS) was adopted to collect the Wi-Fi signal fingerprints. The person carrying the device was asked to walk around the mall and to mark positions by clicking on the device's screen. Then, according to the marking times of the positions and the Wi-Fi fingerprint collection times, the locations of the reference points were interpolated. The RFM was established from the fingerprints and the interpolated positions. For the second issue, the ground truth positions were obtained in a similar way, by interpolation according to the marking times and the fingerprint collection times. In Figure 5, it can be seen that the positions are spread over nearly the whole floorplan. In our experiment, the total number of points was 2373. This number includes the reference points (70%) adopted for establishing the RFM and the ground truth positions (30%) used for testing accuracy. The locations of the points look somewhat random for the following reasons: (1) the sampling interval of the Wi-Fi fingerprints is not constant in the actual data collection process due to the scanning mechanism; (2) the walking trajectories of the person have some randomness. The aforementioned scheme for labeling coordinate positions is not perfect: the coordinate accuracy depends heavily on each data collector's marking accuracy. However, this method is widely adopted because the effort needed for labeling is much less than that of site surveying with a total station. For data collection, an Android application was developed that records the Wi-Fi fingerprints, the inertial sensor readings, and the user's marked positions. Most importantly, the different types of data have synchronized timestamps and can thus be used for the interpolation described above. It should also be noted that the positioning error statistics are all based on positions at the fingerprint collection times, not at the step times (PDR epochs). Even though the position integration returns position estimates at each step, we used these estimates to derive the positions at the Wi-Fi fingerprint collection times when computing the error statistics. Wi-Fi Based Fingerprint Matching The Wi-Fi fingerprint matching in our methodology implements the hierarchical structure and the PLGD metric. Wi-Fi fingerprint matching is fundamental to our methodology, providing the Wi-Fi based position estimates for integration (a minimal sketch of the query-time matching flow is given below for reference).
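The sketch below illustrates, under assumed data structures, the query-time flow of such hierarchical matching: a Jaccard-based coarse stage over the grid AP sets, followed by a fine stage restricted to the fingerprints of the selected grids. The fine-stage distance is left as a plug-in, and the simple metric shown is only a placeholder; in the paper's pipeline this role is played by the PLGD metric.

```python
import numpy as np

def jaccard_distance(aps_a, aps_b):
    """Coarse-stage metric: 1 - |A ∩ B| / |A ∪ B| over the AP sets of two fingerprints."""
    union = aps_a | aps_b
    return 1.0 if not union else 1.0 - len(aps_a & aps_b) / len(union)

def hierarchical_match(query_fp, grid_aps, grid_fingerprints, fine_distance, n_grids=3, k=3):
    """Two-stage matching: pick the n_grids closest grids by Jaccard distance, then run a
    (pluggable) fine-stage distance only on the fingerprints of those grids and return the
    average position of the k best reference points.

    query_fp          : dict AP id -> RSSI of the newly collected fingerprint
    grid_aps          : dict grid index -> set of AP ids (coarse representation)
    grid_fingerprints : dict grid index -> list of (x, y, fingerprint) reference points
    fine_distance     : callable(fp_a, fp_b) -> float, e.g., an RSSI-value-based metric
    """
    query_aps = set(query_fp.keys())
    coarse = sorted(grid_aps, key=lambda g: jaccard_distance(query_aps, grid_aps[g]))[:n_grids]
    candidates = [p for g in coarse for p in grid_fingerprints.get(g, [])]
    candidates.sort(key=lambda p: fine_distance(query_fp, p[2]))
    best = candidates[:k]
    return np.mean([[x, y] for x, y, _ in best], axis=0)

# Placeholder fine-stage metric: mean absolute RSSI difference over shared APs.
def mad_distance(fp_a, fp_b):
    shared = set(fp_a) & set(fp_b)
    return np.mean([abs(fp_a[ap] - fp_b[ap]) for ap in shared]) if shared else np.inf
```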
Here we carried out experiments showing that the proposed fingerprint matching performs better when adopting the hierarchical structure and the PLGD metric. Figure 6 shows the cumulative distribution function (CDF) comparisons of the different combinations: hierarchical+PLGD, non-hierarchical+PLGD, hierarchical+LGD, and non-hierarchical+LGD. We can see that adopting both the hierarchical structure and the PLGD metric increases the positioning performance. Specifically, applying the hierarchical structure brings an average error decrease of about 2.0 m, applying PLGD about 2.1 m, and applying both about 3.5 m. In the proposed Wi-Fi based fingerprinting, several hyper-parameters affect the positioning accuracy: the grid length, σ, and α (used in the metric definition in the fingerprint space). To find the best hyper-parameters, a three-dimensional search would normally be needed. However, instead of performing a three-dimensional optimal hyper-parameter search, we only performed a two-dimensional search (over σ and α) assuming typical grid length values (1, 1.5, and 2 m). This is because dividing the coordinate space into grids is not new in Wi-Fi based positioning; grids have been adopted in publications such as [21,39,42], where typical grid lengths range from 1 to 2 m. The two-dimensional search results are shown in Figure 7; we found that the average positioning errors do not change much when the grid length is within these three typical values. This might be why many publications do not list the grid length as one of the hyper-parameters to tune prior to positioning. In our method, we adopted 1.5 m as the grid length. The corresponding two-dimensional hyper-parameter search results for a grid length of 1.5 m are shown in Figure 7b. Recall that σ and α denote the variance of the LGD and the penalty coefficient, respectively. The ranges of σ and α were limited according to their typical values in other implementations [40]. The search resolution of σ is about 0.67 and that of α is about 7.8. From Figure 7b, we can see that the best hyper-parameters here are σ = 6.7 and α = 33.3. Integration of PDR and Wi-Fi Based Fingerprinting By walking in the experimental scene and marking on the map, we can establish an RFM, as mentioned before. We can then collect test data that include Wi-Fi fingerprints, inertial measurements, and marked positions. With these data, we can obtain the ground truth positions of the person (through linear interpolation over time) and test the performance of the proposed methodology. Both a real-scenario experiment and simulations were carried out for this purpose. Figure 8 shows the positioning error CDF comparison of the proposed methodology and pure Wi-Fi based positioning without the assistance of PDR information. It shows that the PDR information is very useful for increasing the positioning performance. Figure 9 shows the positioning error CDF comparisons between the proposed graph optimization methodology, the Kalman filter based integration method (implemented as described in Section 3.2), and a particle filter based integration method. Note that the Kalman filter based integration provides the initial values for the graph optimization in the proposed methodology.
Not surprisingly, our method provides the best positioning performance, with a CDF curve above those of the other methods. This is because our method can be regarded as adopting all the available information for estimation, while the Kalman filter based method and the particle filter based method only use the previous measurements for the position estimate at the current time. Figure 9. CDF comparisons between the proposed methodology, the Kalman filter based methodology, and the particle filter based methodology. Real-Scenario Data Experiment The results from Figures 8 and 9 are summarized in Table 1. We can see that integrating PDR information does increase positioning performance regardless of which integration method is adopted. The average positioning error of the proposed method is 0.8 m and 0.7 m lower than those of the Kalman filter based method and the particle filter based method, respectively. Additionally, with the proposed method the maximum positioning error is greatly decreased: by 1.3 m compared with the Kalman filter based method and by 1.0 m compared with the particle filter based method. The drop in maximum positioning error shows that the proposed method is more robust than the other two integration methods. Simulations In practical Wi-Fi fingerprinting and PDR based positioning, there may be many outliers, for a large variety of reasons. For Wi-Fi based fingerprinting, the positioning results from fingerprint matching are often not as good as in Section 4.2, because the RFM changes constantly in the environment. Many factors cause such changes, including moving objects and people, the installation of new APs, the relocation of existing APs, and so on. These changes can increase the positioning errors of Wi-Fi based fingerprint matching and, in particular, can introduce outliers into the position estimates. For PDR based positioning, outliers may originate from the poor accuracy of the devices' inertial sensors and from irregular walking patterns or device placement. These factors are common in practical applications and need to be taken into consideration. Therefore, to examine the robustness of the proposed method, we performed simulations that introduced different levels of outliers into both the Wi-Fi fingerprinting and the PDR positioning results. Specifically, for Wi-Fi positioning, we manually added positioning outliers to 5% and 10% of the data; Tables 2 and 3 show the corresponding comparisons of the three integration methods. The outliers were added by randomly choosing the given percentage of Wi-Fi based positioning results and adding errors uniformly distributed between 10 and 20 m. For PDR based positioning, outliers were added by introducing larger errors into the pose changes: we took 5% and 10% of all PDR based pose changes as outliers and applied a uniformly distributed step length scale factor between 0.5 and 2, together with a heading change scale factor, also between 0.5 and 2. Tables 4 and 5 show the respective error comparisons of the proposed method, the traditional Kalman filter based method, and the particle filter based method.

Method             Average Error (m)   Max Error (m)
proposed method    3.7                 8.5
Kalman filter      4.5                 11.4
particle filter    4.5                 11.9

Figure 10 shows the average error curve comparisons of the three methods under the different outlier conditions.
The horizontal axis denotes the different outlier conditions and the vertical axis the average positioning error; Table 6 shows how the outlier conditions correspond to the numbers on the horizontal axis. From this figure, we can see that under all outlier conditions the average positioning error curve of the proposed method generally lies below those of the Kalman filter and particle filter based methods, showing that our method is more robust to outliers in both Wi-Fi fingerprinting and PDR positioning. Conclusions The Wi-Fi fingerprinting based positioning method has great potential for use in indoor environments. To enhance the positioning accuracy, many types of assisting information can be adopted. The most common assisting information is PDR, because it is normally available from a COTS smartphone. Filters such as the Kalman filter and the particle filter are adopted in many methods for the integrated positioning. A new methodology is proposed in this paper which combines hierarchical Wi-Fi fingerprinting, Kalman filtering, and graph optimization. According to the experiments: (1) the adoption of the hierarchical structure and the PLGD metric improves the Wi-Fi based positioning accuracy; (2) with real-scenario data, our method proved to be more accurate than classical Kalman filter and particle filter based methods (with lower average positioning errors); (3) both the real-scenario experiments and the simulations show that the proposed method is more robust than the classical Kalman filter based and particle filter based methods (with a lower maximum positioning error). In particular, the proposed method performs better than classical Kalman filter and particle filter based methods when many outliers exist in both the Wi-Fi based fingerprinting and the PDR results: the average position error is 1.7 m and 2.0 m lower than those of traditional Kalman filter based integration and particle filter based integration when 10% outliers are added to both the Wi-Fi fingerprinting and PDR results. Discussions and Future Work The aim of this paper was to provide an approach for ubiquitous indoor positioning, especially for scenarios in public areas such as malls, stations, and airports. The proposed approach requires only a COTS smartphone and already-installed APs; no extra hardware is needed and large-scale deployment costs are minimized. Therefore, it is well suited for ubiquitous indoor positioning in many public areas. As mentioned, the complexity of the proposed optimization is O(N²), where N is the number of poses to be estimated. Therefore, as the positioning time grows, the number of poses also grows, which can lead to significant latency in positioning. One simple strategy to alleviate the growing latency is to gradually discard error terms related to early poses, thereby bounding the size of the optimization problem (a minimal sketch of such a windowing scheme is given below). However, in our experiments the latency was not significant, since our longest trajectory has only 137 poses. Detailed strategies for lowering the positioning latency will be studied in the future. Other assistive techniques, such as RFID, UWB, foot-mounted positioning, and so on, are also very helpful for improving positioning accuracy. However, compared with Wi-Fi networks, they normally need to be installed on purpose, thereby increasing the system cost. The cost-efficiency of such assistive techniques will be studied in the future.
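A minimal sketch of the windowing strategy mentioned above, assuming simple list-based bookkeeping: only the most recent poses and their attached measurements are kept in the optimization, so the problem size, and hence the O(N²) cost per solve, stays bounded. The window length is an assumed value.

```python
def sliding_window(poses, pdr_measurements, wifi_measurements, window=50):
    """Keep only the error terms attached to the last `window` poses.

    poses             : list of pose estimates (oldest first)
    pdr_measurements  : list of per-step PDR pose changes aligned with `poses`
    wifi_measurements : list of per-step Wi-Fi position fixes aligned with `poses`
    Older poses are frozen (dropped from the optimization) so N never exceeds `window`.
    """
    if len(poses) <= window:
        return poses, pdr_measurements, wifi_measurements
    start = len(poses) - window
    return poses[start:], pdr_measurements[start:], wifi_measurements[start:]
```

Simply truncating old terms discards information that marginalization would preserve, but it already bounds the per-update latency and keeps the implementation trivial.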
In particular, if satellite based results are available in outdoor environments, they can be adopted in the proposed method: the CCF can then include terms derived from the satellite based observations, thereby improving the overall accuracy. The adoption of satellite based information in outdoor environments will also be studied in the future.
JIB-04, a Pan-Inhibitor of Histone Demethylases, Targets Histone-Lysine-Demethylase-Dependent AKT Pathway, Leading to Cell Cycle Arrest and Inhibition of Cancer Stem-Like Cell Properties in Hepatocellular Carcinoma Cells JIB-04, a pan-histone lysine demethylase (KDM) inhibitor, targets drug-resistant cells, along with colorectal cancer stem cells (CSCs), which are crucial for cancer recurrence and metastasis. Despite the advances in CSC biology, the effect of JIB-04 on liver CSCs (LCSCs) and the malignancy of hepatocellular carcinoma (HCC) has not been elucidated yet. Here, we showed that JIB-04 targeted KDMs, leading to the growth inhibition and cell cycle arrest of HCC, and abolished the viability of LCSCs. JIB-04 significantly attenuated CSC tumorsphere formation, growth, relapse, migration, and invasion in vitro. Among KDMs, the deficiency of KDM4B, KDM4D, and KDM6B reduced the viability of the tumorspheres, suggesting their roles in the function of LCSCs. RNA sequencing revealed that JIB-04 affected various cancer-related pathways, especially the PI3K/AKT pathway, which is crucial for HCC malignancy and the maintenance of LCSCs. Our results revealed KDM6B-dependent AKT2 expression and the downregulation of E2F-regulated genes via JIB-04-induced inhibition of the AKT2/FOXO3a/p21/RB axis. A ChIP assay demonstrated a JIB-04-induced reduction in H3K27me3 at the AKT2 promoter and the enrichment of KDM6B within this promoter. Overall, our results strongly suggest that the inhibitory effect of JIB-04 on HCC malignancy and the maintenance of LCSCs is mediated via targeting the KDM6B-AKT2 pathway, indicating the therapeutic potential of JIB-04. Introduction Liver cancer is one of the leading causes of cancer-related deaths globally [1]. Among all primary liver cancers, hepatocellular carcinoma (HCC) is the most common subtype, accounting for approximately 78% of the total cases [2]. The prognosis of patients with HCC largely depends on the tumor stage. Early-stage HCC can be treated using surgical resection, transplantation, and radiofrequency ablation techniques [1]. However, in many cases, HCC is diagnosed at an advanced stage, thereby leading to a very poor survival rate [3,4]. A better understanding of the mechanisms underlying HCC malignancy and metastasis may contribute to the development of more efficacious therapeutic strategies primarily aimed at treating advanced-stage HCC. JIB-04 Caused Reduced Cell Proliferation and Cell Cycle Arrest in HCC Cells Because our previous report showed the effect of JIB-04 on the viability and cell cycle of human colorectal cancer cells [19], we attempted to determine whether JIB-04 could influence those of human HCC cells. When three different human HCC lines, namely PLC/PRF/5, Huh7, and HepG2, were treated with 6 µM JIB-04 for 4 days, the viability of all three cell lines was significantly decreased in a time-dependent manner compared with DMSO-treated controls (Figure 1A). The effect of JIB-04 on cell viability was comparable to that of trichostatin A (TSA), which was used as a control drug because of its known ability to target liver CSCs [29,30]. In addition, we investigated the effect of JIB-04 on cell cycle progression in HCC cells using a FACS analysis to assess whether the reduced cell viability (Figure 1A) was related to any defects in the cell cycle.
Compared with DMSO-treated controls, all the JIB-04-treated HCC cells showed increased G1-phase subpopulations and decreased G2/M-phase subpopulations (Figure 1B), suggesting the occurrence of G1/S arrest in JIB-04-treated HCC cells. Thus, these data suggested that the reduced cell viability in JIB-04-treated HCC cells might be partly caused by JIB-04-induced cell cycle defects. JIB-04 Treatment Interrupted HCC Cell Migration and Invasion We carried out transwell assays to determine whether JIB-04 affected the migratory and invasive abilities of HCC cells. PLC/PRF/5 and Huh7 cells treated with JIB-04 displayed reduced migration and invasion compared with control cells treated with DMSO or TSA (Figure 2A,B). Moreover, the effect of JIB-04 on cell migration was further confirmed using a wound-healing assay. Consistent with the data depicted in Figure 2A, the migratory ability of both PLC/PRF/5 and Huh7 cells was decreased upon JIB-04 treatment compared with that of the DMSO-treated control cells (Figure 2C). Therefore, these results demonstrated that JIB-04 inhibits the migratory and invasive capacities of HCC cells in vitro. Figure 1. (A) The viability of HCC cells cultured after treatment with DMSO (mock control), 6 µM JIB-04, or 4 µM TSA for the indicated intervals was analyzed by CCK assay. The error bars indicate standard deviation; n = 3 in PLC/PRF/5, Huh7, and HepG2 cells. (B) (Top panels) Representative histograms of the cell cycle phase distribution in PLC/PRF/5, Huh7, and HepG2 cells after treatment with DMSO (control) or 6 µM JIB-04 for 24 h. Cells were stained with propidium iodide to detect their DNA content. (Bottom panels) Bar graphs representing relative cell populations in cell cycle phases G1, S, and G2/M. The data demonstrate JIB-04-dependent effects on cell cycle phases. Data are represented as mean ± SEM derived from triplicate measurements. * p < 0.05, ** p < 0.01, and *** p < 0.001 (DMSO vs. JIB-04); # p < 0.05, ## p < 0.01, and ### p < 0.001 (DMSO vs. TSA positive control). Figure 2. (C) Cell migration was assessed with a wound-healing assay. Cells were scraped with a yellow pipette tip and treated with DMSO, 6 µM JIB-04, or 4 µM TSA for 24 h. Cells were imaged under a light microscope after injury.
Cell migration was quantified by measuring the remaining wound areas before and after treatment with DMSO, 6 µM JIB-04, or 4 µM TSA. Data are represented as mean ± SEM of triplicate measurements. * p < 0.05, ** p < 0.01 compared with DMSO-treated controls. JIB-04 Treatment Reduced the Expression Levels of CSC Markers To assess the effect of JIB-04 on the stemness of liver CSCs, we investigated the mRNA expression levels of liver CSC marker genes, including CD44, CD133, CD90, LGR5, EpCAM, CD24, and CD13, in three HCC cell lines, namely PLC/PRF/5, Huh7, and HepG2, after JIB-04 treatment. Our qRT-PCR data showed that the mRNA expression levels of CSC marker genes were generally reduced in all three JIB-04-treated cell lines compared to DMSO-treated control cells, implying a significant reduction in the liver CSC population following JIB-04 treatment (Figure 3A). Next, we analyzed the protein levels of CSC markers in JIB-04-treated HCC cells. A Western blot analysis demonstrated that the protein levels of CD44, LGR5, and EpCAM were decreased in all three HCC cell lines. Figure 3. (A) The mRNA expression of cancer stem cell marker genes was analyzed using qRT-PCR in PLC/PRF/5, Huh7, and HepG2 cultures after treatment with DMSO (control), 6 µM JIB-04, or 4 µM TSA (positive control) for 24 h. All data are normalized to GAPDH and plotted relative to the expression level in control cells. Data are represented as mean ± SEM derived from triplicate measurements (n = 3); * p < 0.05, ** p < 0.01, and *** p < 0.001. (B) (Top panels) Protein levels of CD133, CD44, LGR5, and EpCAM in HCC cells after treatment with 6 µM JIB-04 for 48 h were confirmed with Western blot analysis. GAPDH was used as a loading control. (Bottom panels) Quantification based on densitometry of the Western blotting data from the top panels in (B). All data are normalized to GAPDH. Data are represented as mean ± SEM of triplicate measurements; * p < 0.05 and ** p < 0.01.
CD133 protein levels were downregulated in PLC/PRF/5 and Huh7 cells (Figure 3B). In addition, fluorescent immunocytochemistry showed that the expression levels of CSC markers were decreased in PLC/PRF/5 and Huh7 cells after JIB-04 treatment (Supplementary Figure S1). Together, our data indicate that JIB-04 treatment affected the expression of CSC marker genes, suggesting its potential role in anti-LCSC activity. Many lines of evidence support that CSCs are able to initiate and drive tumorigenesis, which may contribute to the chance of relapse. First, we focused on the CD133+/CD13+ population in HCC cells, as CD133+/CD13+ hepatocytes are known to possess CSC characteristics [29]. We investigated the effect of JIB-04 treatment on CD133+/CD13+ cells using a FACS analysis. Our FACS data showed that the CD133+/CD13+ population was lower in JIB-04-treated cells than in DMSO-treated control cells (Figure 4A), suggesting the potency of JIB-04 as a selective drug for targeting liver CSCs. Furthermore, we performed tumorsphere formation assays to investigate the effect of JIB-04 on tumor initiation, growth, and relapse. Since CSCs can form tumorspheres in serum-free culture medium on non-adherent culture dishes, tumorsphere formation assays are very useful for studying the stem-cell-like properties of CSCs. To confirm whether JIB-04 affected stem-cell-like properties such as tumor initiation, growth, and relapse abilities, we established an LCSC sphere-forming culture system under three-dimensional culture conditions, as previously described [19]. To begin, the effect of JIB-04 on the tumor initiation ability of CSCs was evaluated using three different HCC cell lines. We cultured PLC/PRF/5, Huh7, and HepG2 tumorspheres on non-adherent culture dishes after treatment with DMSO, 6 µM JIB-04, or 4 µM TSA for 24 h. After 7 days, the DMSO-treated control cells developed dense and round-shaped tumorspheres; however, the JIB-04-treated cells failed to form tumorspheres, remaining almost as tiny spheres (Figure 4B, left panel). Quantitative analysis with a cell-counting kit (CCK) assay demonstrated that JIB-04 treatment decreased the percentage of sphere-initiating cells to about 20% of that of control cells and was comparable to the TSA treatment in inhibiting the initiation of tumorspheres (Figure 4B, right panel). Furthermore, to evaluate the ability of JIB-04 to diminish tumorsphere growth, we induced tumorsphere formation for 5 days and subsequently treated the spheres with DMSO (mock control), 6 µM JIB-04, or 4 µM TSA for 2 days. The size of the JIB-04-treated tumorspheres was found to be reduced (Figure 4C, left panel). Consistent with these morphological changes, CCK assays further supported that JIB-04 treatment reduced the viability of tumorsphere cells, suggesting that JIB-04 inhibited the growth of the tumorspheres by targeting CSCs (Figure 4C, right panel).
To assess the effect of JIB-04 on relapse, we developed secondary tumorspheres by culturing the surviving cells shown in Figure 4C (left panel) in a standard stem cell medium without additional drug treatment for 12 days. The control cells derived from mock-treated primary tumorspheres were able to form secondary tumorspheres comparable to the primary tumorspheres. In contrast, the surviving cells derived from JIB-04-treated primary tumorspheres mostly failed to develop secondary tumorspheres (Figure 4D, left panel). As evident from the CCK assays, the number of viable cells in the secondary tumorspheres derived from JIB-04-treated primary tumorspheres was also reduced compared with that observed for DMSO-treated controls, implying that the regrowth of tumorspheres was efficiently suppressed by pretreatment with JIB-04 (Figure 4D, right panel). As mentioned earlier, JIB-04 is a pan-inhibitor of Jumonji histone demethylases [15]. The global levels of several histone modifications in colorectal cancer cells were found to be influenced by JIB-04 treatment [19]. To confirm the effect of JIB-04 on global histone H3 modification levels in human HCC cells, we performed a Western blot analysis. As expected, we found that JIB-04 upregulated the tri-methylation of H3K4, as well as the di- and tri-methylation of H3K36, which are known as active chromatin markers (Figure 4E). In addition, the di- and tri-methylation of H3K9, as well as the tri-methylation of H3K27, which are hallmarks of inactive chromatin, were increased in JIB-04-treated HCC cells (Figure 4E). Thus, these data suggested that JIB-04 inhibits the KDM4 and KDM6 histone demethylase families in HCC cells. Based on the data above, we attempted to identify the histone demethylases directly related to the JIB-04-based inhibition of tumorsphere formation. To accomplish this, we investigated whether a deficiency in histone demethylases could influence the tumorsphere formation ability of liver CSCs. Tumorsphere formation assays were performed using various histone-demethylase-depleted knockdown cells. Our results demonstrated that the sphere-forming abilities of KDM4B-, KDM4D-, and KDM6B-knockdown cells were significantly decreased compared to that of the shLuc control knockdown cells (Figure 4F). In contrast, the CCK assay showed that the viabilities of tumorsphere cells in KDM4A-, KDM4C-, and KDM6A-depleted cells were increased compared to that in the control knockdown cells (Supplementary Figure S2). Taken together, our data suggested that JIB-04 exerts anti-LCSC effects via the inhibition of KDM4B, KDM4D, and KDM6B. Transcriptome Analysis Revealed JIB-04-Targeted Pathways in HCC Cells To uncover the mechanism by which JIB-04 affects the malignancy of HCC cells and liver CSCs, we performed an RNA-sequencing analysis using RNA extracts from PLC/PRF/5 cells after treatment with 6 µM JIB-04 for 24 h. JIB-04 treatment resulted in the upregulation of 700 genes and the downregulation of 459 genes (Supplementary Figure S3A). A Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis revealed that JIB-04 treatment altered the expression of genes involved in the cell cycle, apoptosis, and cellular senescence that were also related to several cancers, including HCC (Supplementary Figure S3B).
JIB-04 also altered the expression of genes involved in various signaling pathways, such as the FOXO signaling pathway, the MAPK signaling pathway, the Wnt signaling pathway, and the PI3K-Akt signaling pathway (Supplementary Figure S3B). Among these genes, we focused on genes related to the cell cycle and the PI3K-Akt signaling pathway because the PI3K-Akt pathway is known to contribute to cancer progression via the regulation of the G1/S cell cycle transition, as well as the maintenance and viability of cancer stem-like cells [27,28,30,31]. JIB-04 Induced Anti-Cancer Effects by Targeting the AKT-FOXO3a-p21-RB-E2F Axis Based on our findings, we next questioned whether the PI3K-Akt signaling pathway was responsible for the JIB-04-induced G1/S cell cycle arrest in HCC cell lines shown in Figure 1B. To achieve this, AKT protein levels in HCC cell lines after treatment with DMSO or JIB-04 were first analyzed by Western blotting using a pan-total AKT antibody. As shown in Figure 5A, JIB-04 treatment reduced AKT protein levels in a dose-dependent manner. We then investigated the nuclear translocation of FOXO proteins in JIB-04-treated HCC cells because the AKT-dependent phosphorylation of FOXO factors results in their cytoplasmic accumulation and subsequent degradation, thereby eliminating their transactivation capacity [32,33]. To determine whether JIB-04 caused the nuclear translocation of FOXO3a, we performed a cell fractionation analysis in PLC/PRF/5 and HepG2 cell lines after treatment with DMSO or JIB-04. In JIB-04-treated PLC/PRF/5 and HepG2 cells, the treatment induced the accumulation of FOXO3a in the nucleus in a dose-dependent manner (Figure 5B). Since FOXOs control cell cycle progression through the transcriptional regulation of cyclin-dependent kinase (CDK) inhibitors such as p21, p27, and p15 [34][35][36][37], we expected increased expression of CDK inhibitors upon the JIB-04-induced nuclear enrichment of FOXOs. Our qRT-PCR analysis revealed that the mRNA expression of the p21 (CDKN1A) gene was induced by JIB-04 treatment, but the expressions of p15 and p27 did not increase (Figure 5C, Supplementary Figure S4). Finally, we investigated RB activation and the expression of E2F target genes because p21 can lead to the activation of RB and subsequently cause the suppression of E2F-mediated transcription in the nucleus [38,39]. A Western blot analysis revealed that JIB-04 treatment reduced the phosphorylation levels of RB protein, indicating RB activation (Figure 5D). In addition, the qRT-PCR assay showed that the mRNA expressions of E2F target genes in JIB-04-treated PLC/PRF/5 and HepG2 cell lines were significantly decreased in a dose-dependent manner, suggesting RB-mediated inactivation of the E2F transcription factor (Figure 5E). Thus, these results suggested that the AKT-FOXO3a-p21-RB-E2F axis is involved in JIB-04-mediated anti-cancer effects.
KDM6B Regulated AKT2 Expression through H3K27me3 Modifications Since JIB-04 treatment reduced the expression of AKT protein (Figure 5A), we hypothesized that the JIB-04-induced inhibition of certain histone demethylases may cause the downregulation of the AKT gene. Therefore, we investigated the expression of the AKT family genes (AKT1, AKT2, and AKT3) in JIB-04-treated PLC/PRF/5 and HepG2 cells. The qRT-PCR analysis revealed that the mRNA expressions of AKT1 and AKT2 were decreased upon JIB-04 treatment, whereas that of the AKT3 gene was not altered (Figure 6A). To determine which histone demethylase was related to this AKT transcriptional regulation, we investigated the mRNA expression levels of AKT1 and AKT2 in KDM4B-, KDM4D-, and KDM6B-knockdown cells. Our data showed that only KDM6B depletion significantly decreased AKT2 mRNA expression in both the PLC/PRF/5 and HepG2 cell lines (Figure 6B). Moreover, correlation analysis indicated that the expressions of AKT2 and KDM6B were positively correlated (Figure 6C). In addition, decreased AKT2 mRNA levels were observed in two different KDM6B-depleted cell lines (Figure 6D). These results suggested that KDM6B is responsible for AKT2 gene regulation. Since KDM6B is known to be responsible for erasing the tri-methylation of lysine 27 on histone H3 (H3K27me3) [40], we investigated whether JIB-04 treatment influenced the H3K27me3 modification level at the AKT2 gene promoter in PLC/PRF/5 cells. Our ChIP assay revealed that the H3K27me3 levels at the AKT2 gene promoter were increased by JIB-04 treatment, whereas the H3K27ac levels were reduced (Figure 6E).
These data suggested that the increased levels of H3K27me3 upon JIB-04 treatment may contribute to the downregulation of AKT2 expression. To determine whether KDM6B could associate directly with the AKT2 gene promoter, we performed a ChIP assay using PLC/PRF/5 cells stably expressing an empty vector or FLAG-tagged KDM6B. The ChIP assay confirmed that KDM6B protein was significantly enriched in the promoter region of AKT2 (Figure 6F). Collectively, these results suggested that KDM6B is involved in the transcriptional activation of AKT2 via the downregulation of H3K27me3 at the AKT2 gene promoter. Discussion Recently, the small molecule JIB-04, a pan-inhibitor of Jumonji demethylases, has been shown to inhibit growth and induce apoptosis in drug-resistant glioblastoma and lung cancer cells [17,18]. In addition, we previously reported that JIB-04 targeted colorectal cancer stem cells (CSCs) via the selective inhibition of the Wnt/β-catenin signaling pathway [19]. Although HCC is the most common subtype of liver cancer with a poor prognosis, and its high recurrence rate is correlated with the presence of liver CSCs [41,42], the role of JIB-04 in the malignancy of HCC and liver CSC function has not been elucidated. In this study, we identified a novel role of JIB-04 in the viability and maintenance of liver CSCs, as well as in the cell cycle progression of HCC cells, using three different HCC cell lines. As described previously [19], JIB-04 largely induced G2/M cell cycle arrest in colorectal cancer cells and preferentially eradicated colorectal CSCs by inhibiting the Wnt/β-catenin signaling pathway. In contrast, JIB-04 caused G1/S cell cycle arrest in HCC cells, and the KDM-AKT2 pathway was found to be important for cell cycle progression in HCC cells. In this study, the effect of JIB-04 on the tumorigenicity of liver CSCs was assessed using a tumorsphere formation assay, as described elsewhere [43,44].
Similar to the role of JIB-04 in colorectal CSC function, JIB-04 treatment resulted in the downregulation of several CSC marker genes, as well as the inhibition of the tumor-initiating ability of liver CSCs. Interestingly, the growth and relapse of secondary tumorspheres derived from the primary tumorspheres after pre-treatment with JIB-04 were significantly abolished, as shown in Figure 4D. More importantly, tumorsphere formation was reduced by deficiencies in KDM4B, KDM4D, and KDM6B. Thus, these results suggested that the JIB-04-mediated effect on liver CSC function was mediated by targeting histone demethylases such as KDM4B, KDM4D, and KDM6B. Moreover, our data suggested that the JIB-04-dependent inhibition of KDMs directly or indirectly affected the expression of CSC marker genes, including CD133, CD44, CD90, LGR5, EpCAM, CD24, and CD13. Taken together, our results indicated that JIB-04 is a promising candidate for improving the survival rate in patients with liver cancer by targeting liver CSCs, as the CSC markers are known to be responsible for chemoresistance, metastasis, and tumor relapse [10,[45][46][47][48][49][50][51]. In this study, we found that the tri-methylation levels of H3K4, H3K36, H3K9, and H3K27 were globally increased in JIB-04-treated HCC cells ( Figure 4E). The patterns of JIB-04-induced histone H3 methylation in HCC cells were similar to those observed in JIB-04-treated colorectal cancer cells, but the increase in H3K27me3 levels in JIB-04-treated HCC cells was more prominent than that observed in JIB-04-treated colorectal cancer cells. To understand the effect of JIB-04 on the malignancy of HCC and the function of LCSCs, we aimed to determine the KDM(s) responsible for the JIB-04-treatment-related phenotypes in HCC cells. As previously reported in an interesting study [15], JIB-04 can inhibit the demethylase activity of JMJD2A (KDM4A), JMJD2B (KDM4B), JMJC2C (KDM4C), JMJD2D (KDM4D), JMJD2E (KDM4E), JARID1A (KDM5A), and JMJD3 (KDM6B), implying the potential role of the KDM4, KDM5, and KDM6 families in JIB-04-related phenotypes. Based on these results, we propose that KDM4B, KDM4D, and KDM6B are involved in the cell growth and cell cycle progression of HCC cells, as well as in the survival and maintenance of LCSCs. In particular, our data highlighted the importance of KDM6B in HCC malignancy and functions of LCSCs because the tumor-initiating ability of LCSCs was reduced by KDM6B deficiency (Figure 4F), and the JIB-04-induced elevation of H3K27me3 levels could be reduced by the demethylase activity of KDM6B [15]. Although most of the data in this study supported the concept that KDM6B may be one of the main targets underlying the effect of JIB-04 on HCC malignancy and LCSC function, we needed to understand the mechanism of action of KDM6B in the JIB-04-related phenotypes of HCC cells. Therefore, RNA-sequencing analysis was performed to uncover various cancer-related signaling pathways, including the FOXO signaling pathway, the MAPK signaling pathway, the HIF-1 signaling pathway, and the PI3K-AKT signaling pathway (Supplementary Figure S3). Among these, we focused on the PI3K-AKT signaling pathway because it is known to be involved in both the regulation of cell cycle progression [52] and the development of many cancer types, including HCC [53]. Interestingly, it has been reported that the PI3K-AKT pathway plays an important role in CSC biology [54,55]. 
Consistent with the clues derived from the genome-wide analysis, our results showed that JIB-04 significantly decreased AKT protein levels and subsequently affected the FOXO3a-p21-RB-E2F axis, the downstream regulatory cascade of AKT (Figure 5). AKT is an evolutionarily conserved serine/threonine protein kinase that regulates numerous signaling pathways by controlling downstream effectors that are required for the maintenance of cell homeostasis [56]. Three AKT isoforms, namely AKT1, AKT2, and AKT3, play a crucial role in cellular functions such as transcription, protein synthesis, cell cycle progression, and cell death regulation [21,[56][57][58][59]. Interestingly, aberrant AKT expression has been observed in various human cancers [60], and the relationship between AKT and cancer stem cells has also been revealed [26,27,61,62]. Of note, among the three AKT isoforms, the mRNA expression levels of AKT1 and AKT2 were downregulated by JIB-04 treatment, as shown in Figure 6A. AKT2 expression was significantly affected only by KDM6B deficiency, revealing AKT2 as a potential target of KDM6B (Figure 6B,D). Given this, it is important to understand how KDM6B regulates AKT2 expression. Since KDM6B showed demethylase activity specific for the tri-methylation of histone H3K27, it was speculated that KDM6B positively regulates AKT2 gene transcription via the removal of inactive chromatin markers, such as H3K27me3. As expected, our ChIP assay revealed a significant increase in the H3K27me3 levels at AKT2 promoter regions and the further enrichment of KDM6B within the chromatin domains of the AKT2 promoter, as shown in Figure 6E,F. Thus, we concluded that KDM6B could associate directly with the chromatin domains at AKT2 promoter regions and remove the tri-methylation of histone H3K27, an inactive chromatin marker, from the AKT2 promoter, leading to increased AKT2 expression. Subsequently, the KDM6B-mediated upregulation of AKT2 was required for cell cycle progression via the regulatory cascade of the AKT2-FOXO3a-p21-RB-E2F axis, as was evident from the experimental analyses carried out in this study. Moreover, the KDM6B-AKT2 pathway is known to be involved in the survival and maintenance of liver CSCs, as has been described previously in several studies [26,61,62]. In summary, in this study, we presented a plausible model to explain how JIB-04 treatment induced growth inhibition and cell cycle arrest in HCC cells, as well as the impairment of liver CSC functions. JIB-04 interfered with the cell cycle progression of HCC cells and cancer stem-like cell properties via the AKT2-FOXO3a-RB axis by inhibiting the histone demethylase activity of KDM6B on the AKT2 promoter. Overall, our results suggested that strategies targeting the KDM6B-AKT2 pathway with JIB-04, a pan-histone demethylase inhibitor, may contribute to the development of therapeutics against liver cancer. Materials and Methods Retroviral Production and Infection For retrovirus production, either the pMSCV empty vector or pMSCV-based FLAG-KDM6B was used to transfect 293FT cells together with 4.5 µg of gag-pol and 0.5 µg of VSVG plasmids. The viral supernatant was collected and used to infect HCC cells 48 h after transfection. PLC/PRF/5 cells were infected with retrovirus overnight with 6 µg/mL polybrene. After 24 h, the infected cells were selected with 2.5 µg/mL puromycin [64]. Cell Growth Assay Cell proliferation was measured using a cell-counting kit (CCK-8 assay kit; Dojindo Corporation, Kumamoto, Japan).
Twenty-four hours prior to experiments, cells were plated in the wells of a 96-well plate and treated with either DMSO, JIB-04, or TSA for 24, 48, 72, or 96 h. CCK-8 solution was then added to 100 µL of culture medium, and the cells were incubated at 37 °C for 2 h. Optical density was measured at 450 nm using a microplate reader (Sunrise-Basic Tecan, Tecan Austria GmbH, Grodig, Austria). Cell Cycle Analysis Approximately 8 × 10⁵ cells were seeded in a 60-mm dish. After 24 h, the cells were treated with 6 µM JIB-04, and the dish was incubated for 24 h. The cells were then fixed, stained with propidium iodide, and analyzed using flow cytometry (FACS), as described previously [19]. Western Blotting A Western blot analysis was performed as described previously [65]. Briefly, cells were treated with either vehicle (DMSO), JIB-04, or TSA for 48 h and then were harvested by centrifugation. The pellets were resuspended in radioimmunoprecipitation assay (RIPA) buffer comprising 150 mM NaCl, 50 mM Tris (pH 8.0), 1% NP-40, 0.5% sodium deoxycholate, 0.1% sodium dodecyl sulfate (SDS), and protease inhibitors and were incubated on ice. Supernatants containing proteins of interest were collected after centrifugation. The protein concentration was determined using a Bradford assay. For Western blot analysis, proteins were separated with 10% SDS-PAGE and transferred to polyvinylidene fluoride (PVDF) membranes. Fluorescent Immunocytochemistry For the fluorescent immunocytochemistry assay, HCC cells were cultured on coated coverslips in 6-well plates. After 24 h, HCC cells were treated with JIB-04 for 48 h. The cells were fixed with 3.7% paraformaldehyde and treated with 0.2% Triton X-100 for permeabilization. Then, the cells were incubated with CD133, CD44, and GAPDH primary antibodies. After 24 h, the cells were further incubated with secondary antibodies (Alexa goat anti-mouse) and DAPI for 2 h. Confocal image acquisition was performed using a Carl Zeiss confocal microscope. Chromatin Isolation Assay Chromatin isolation was performed to separate the nuclear and cytoplasmic fractions of the cells. The cells were harvested, and the pellets were resuspended in hypotonic buffer containing 1% NP40, 5 mM MgCl2, 10 mM NaCl, 20 mM Tris (pH 8.0), and protease inhibitors and were incubated on ice for 1 h. The lysate was centrifuged at 14,000 rpm (4 °C) for 25 min, and the supernatant was collected as the cytosolic fraction. The pellet was lysed with hypertonic buffer containing 1% NP40, 5 mM MgCl2, 150 mM NaCl, 20 mM Tris (pH 8.0) and was sonicated to efficiently isolate the nuclei. The sonicated pellet was centrifuged at 14,000 rpm (4 °C) for 25 min, and the supernatant was collected as the nuclear fraction. ChIP Assay A ChIP assay was performed using a SimpleChIP Enzymatic Chromatin IP Kit (Cell Signaling #9002S) according to the manufacturer's instructions. Briefly, cells were cross-linked in situ by the addition of 37% formaldehyde to a final concentration of 1%, incubated at room temperature for 10 min, and then were incubated with glycine for 5 min. Chromatin was digested and immunoprecipitated using IgG (Cell Signaling #2729), H3 (Cell Signaling #4620), H3K27me3 (Cat. #ab6002; Abcam), and H3K27ac (Cat. #ab4729; Abcam) antibodies overnight at 4 °C. The purified DNA was used for PCR amplification using primers specific to promoter fragments of the AKT2 gene [66].
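ChIP-qPCR enrichment such as that shown in Figure 6E,F is commonly reported as a percent of input chromatin. The text does not spell out the quantification step, so the following is only a minimal sketch of the standard percent-input calculation, with hypothetical Ct values and an assumed 2% input fraction.

```python
import math

def percent_input(ct_ip: float, ct_input: float, input_fraction: float = 0.02) -> float:
    """ChIP-qPCR enrichment as percent of input chromatin.

    ct_ip          : Ct of the immunoprecipitated sample
    ct_input       : Ct measured on the saved input aliquot
    input_fraction : fraction of chromatin kept as input (assumed 2%)
    """
    # Adjust the input Ct to 100% input, then compare with the IP Ct.
    ct_input_adj = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_adj - ct_ip)

# Hypothetical Ct values for the AKT2 promoter amplicon (not from the paper).
samples = {
    "IgG (DMSO)":        (33.1, 25.0),
    "H3K27me3 (DMSO)":   (29.8, 25.0),
    "H3K27me3 (JIB-04)": (27.6, 25.1),
}
for name, (ct_ip, ct_in) in samples.items():
    print(f"{name}: {percent_input(ct_ip, ct_in):.3f}% of input")
```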
Isolation of RNA and Quantitative Reverse Transcription-Polymerase Chain Reaction (qRT-PCR) Analysis qRT-PCR analysis was performed as described previously [63]. Total RNA was isolated using TRI-Reagent (Cat. #TR118; Molecular Research Center, Inc., Cincinnati, OH, USA). Expression levels were analyzed by qRT-PCR with SYBR Premix Ex Taq II (Takara Bio, Shiga, Japan) and a 7300 Real-Time PCR system (Applied Biosystems, Franklin Lakes, NJ, USA) using primer sets for target genes. qRT-PCR was performed with the following primers: human CD133, (sense Cell Migration and Invasion Assay Transwell cell migration assays were performed using Falcon cell number inserts (Cat. #353097; Corning, Corning, NY, USA). Cells were pre-incubated with the indicated drugs for 24 h, after which 7 × 10 4 cells were placed in the insert in serum-free medium and allowed to migrate for 2 days. The outer chamber was filled with 750 µL of medium containing 10% FBS. After incubation, non-migrating cells on the upper surface of the insert were removed using a cotton swab. Migrating cells were fixed and stained with crystal violet (Sigma-Aldrich, St. Louis, MO, USA). For the invasion assay, 2 × 10 5 cells were seeded in Matrigel-coated inserts and allowed to invade for 2 days. Invasive cells were stained as described above. Wound-Healing Assay HCC cells were cultured in 12-well plates at 3.6 × 10 5 cells to form confluent monolayers. Wounds were created using a sterile pipette tip after the HCC cells were cultured for 24 h. The wounded areas were recorded at 0 h and 24 h and were measured using ImageJ software. Tumorsphere Formation Tumorspheres derived from cancer cells have been proved to display characteristics of CSCs. CSCs are significant causes of metastasis and drug resistance [67]. Tumorspheres are grown in serum-free and non-adherent conditions. Only cancer stem cells can survive and proliferate in these conditions. Therefore, the formation of tumorspheres is a helpful method for the enrichment of CSCs [68]. To examine the effect of JIB-04 on tumor growth, we cultured primary tumorspheres for 5 days and then treated them with DMSO, 6 µM JIB-04, or 4 µM TSA for 2 days. We then measured the viability of the tumorspheres using a CCK assay. To examine the effect of JIB-04 on tumor recurrence, we trypsinized primary tumorspheres and re-seeded them without drug treatment to produce secondary tumorspheres. After 12 days, we measured the viability of the secondary tumorspheres using a CCK assay [19]. RNA Sequencing RNA sequencing was conducted using Macrogen Inc. (Seoul, Korea). RNA extracts from PLC/PRF/5 cells treated with DMSO or 6 µM JIB-04 for 24 h were subjected to cDNA library construction using a TruSeq Stranded mRNA LT Sample Prep Kit (Illumina, San Diego, CA, USA). The samples were checked for quality using FastQC v0.11.5 software and subjected to sequencing using a HiSeq 4000 sequencer (Illumina). The Kyoto Encyclopedia of Genes and Genomes (KEGG) database was used to determine the pathways of differentially expressed genes. The pathways were ranked using Fisher's exact test with a threshold of significance set by p-value. The data discussed in this publication were deposited in NCBI's Gene Expression Omnibus and are accessible under GEO series accession number GSE179345. Gene Correlation Analysis Gene Expression Profiling Interactive Analysis (GEPIA) is an online TCGA-based tool used to analyze RNA sequencing expression data. Pearson's correlation analysis for KDM6B and AKT2 was performed using GEPIA. 
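GEPIA reports Pearson's correlation on TCGA expression data; the equivalent calculation on a locally exported expression table is a one-liner with SciPy. The values below are hypothetical placeholders standing in for log-transformed KDM6B and AKT2 expression across tumor samples, not TCGA data.

```python
import numpy as np
from scipy import stats

# Hypothetical log2(TPM + 1) expression values across ten tumor samples.
kdm6b = np.array([3.1, 2.4, 4.0, 3.6, 2.9, 4.4, 3.3, 2.1, 3.8, 4.1])
akt2  = np.array([5.0, 4.2, 5.9, 5.5, 4.6, 6.3, 5.1, 4.0, 5.6, 6.0])

r, p = stats.pearsonr(kdm6b, akt2)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")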
Statistical Analysis Data are expressed as means ± SEM or means ± SD. The data between controls and experimental groups were analyzed with two-tailed Student's t-tests. Significance levels were set as follows: * p < 0.05, ** p < 0.01, and *** p < 0.001. Raw data were used in Western blot figures.
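As a minimal illustration of the statistical convention above, the sketch below runs a two-tailed Student's t-test and maps the p-value to the star notation used in the figures. The replicate values are hypothetical.

```python
import numpy as np
from scipy import stats

def significance_label(p: float) -> str:
    """Map a p-value to the star notation used in the figures."""
    if p < 0.001:
        return "***"
    if p < 0.01:
        return "**"
    if p < 0.05:
        return "*"
    return "ns"

# Hypothetical replicate measurements (e.g. relative tumorsphere viability).
control = np.array([1.00, 0.96, 1.05, 0.99])
treated = np.array([0.61, 0.55, 0.66, 0.58])

t, p = stats.ttest_ind(control, treated)  # two-tailed by default
print(f"t = {t:.2f}, p = {p:.4f} ({significance_label(p)})")
print(f"mean ± SD: control {control.mean():.2f} ± {control.std(ddof=1):.2f}, "
      f"treated {treated.mean():.2f} ± {treated.std(ddof=1):.2f}")
```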
9,013.2
2022-07-01T00:00:00.000
[ "Medicine", "Biology" ]
Uncertainties in Life Cycle Greenhouse Gas Emissions from Advanced Biomass Feedstock Logistics Supply Chains in Kansas To meet Energy Independence and Security Act (EISA) cellulosic biofuel mandates, the United States will require an annual domestic supply of about 242 million Mg of biomass by 2022. To improve the feedstock logistics of lignocellulosic biofuels in order to access available biomass resources from areas with varying yields, commodity systems have been proposed and designed to deliver quality-controlled biomass feedstocks at preprocessing “depots”. Preprocessing depots densify and stabilize the biomass prior to long-distance transport and delivery to centralized biorefineries. The logistics of biomass commodity supply chains could introduce spatially variable environmental impacts into the biofuel life cycle due to the need to harvest, move, and preprocess biomass over varying distances from areas of variable spatial density. This study examines the uncertainty in greenhouse gas (GHG) emissions of corn stover logistics within a bio-ethanol supply chain in the state of Kansas, where sustainable biomass supply varies spatially. Two scenarios were evaluated, each having a different number of depots of varying capacity and location within Kansas relative to a central commodity-receiving biorefinery, to test GHG emissions uncertainty. The first scenario sited four preprocessing depots evenly across the state of Kansas but within the vicinity of counties having high biomass supply density. The second scenario located five depots based on the shortest depot-to-biorefinery rail distance and biomass availability. The logistics supply chain consists of corn stover harvest, collection and storage, feedstock transport from field to biomass preprocessing depot, preprocessing depot operations, and commodity transport from the biomass preprocessing depot to the biorefinery. Monte Carlo simulation was used to estimate the spatial uncertainty in the feedstock logistics gate-to-gate sequence. Within the logistics supply chain, GHG emissions are most sensitive to the transport of the densified biomass, which introduces the highest variability (0.2–13 g CO2e/MJ) to life cycle GHG emissions. Moreover, depending upon the biomass availability and its spatial density and surrounding transportation infrastructure (road and rail), logistics can increase the variability in life cycle environmental impacts for lignocellulosic biofuels. Within Kansas, life cycle GHG emissions could range from 24 g CO2e/MJ to 41 g CO2e/MJ depending upon the location, size and number of preprocessing depots constructed. However, this range can be minimized through optimizing the siting of preprocessing depots where ample rail infrastructure exists to supply biomass commodity to a regional biorefinery supply system. 
Introduction Post-industrialized economies rely on energy for almost all fundamental needs including food production, heat, transportation, manufacturing, and communication.Since the Industrial Revolution, fossil energy resources such as coal, petroleum, and natural gas have become the dominant sources of energy because they are readily accessible and inexpensive.The use of fossil fuels have enabled large-scale industrial development, but growing concerns regarding energy security and the environment, particularly climate change, have inspired the development of mandates for renewable energy from wind, biomass, and solar energy sources.In the United States for example, by 2022, the Energy Independence and Security Act (EISA) of 2007 requires that 61 billion L/year cellulosic ethanol replace petroleum-based transportation fuels.To meet this demand, an estimated 242 million Mg/year of biomass will need to be supplied to biorefineries that can process lignocellulose [1].Sufficient biomass supply has been identified to meet these requirements through large-scale national assessments [2], and research is on-going concerning the logistics required to cultivate, harvest, transport, and process such large quantities of biomass into fuel.Our study focuses on the supply and logistics chain of the lignocellulosic ethanol produced from corn stover, an agricultural residue. Biomass supply chains currently use equipment and infrastructure designed for existing agriculture and forest industries.These supply chains are designed to move biomass short distances, store it for limited periods of time, and are constrained in their ability to address biomass quality issues like moisture and ash content.Most widely cited lignocellulosic biorefinery designs [3][4][5][6] that utilize agricultural residues (e.g., corn stover) or dedicated energy crops (e.g., switchgrass) for biochemical conversion to alcohol have been designed and priced to take in baled biomass feedstocks, assuming the existing (conventional) infrastructure.Although these conventional systems are cost-effective in high biomass-yielding areas, such as supplying corn stover in Iowa or forest resources in the Alabama, they are limited in their ability to support and meet long term national biofuels production goals [7].For example, there are ample biomass resources that would be considered stranded under this model due to having a high transport distance that would result in prohibitive costs.A strong driver for the conventional system is to minimize transportation costs as biomass characteristics make them expensive to handle and transport.Examples of these characteristics include high moisture content, low bulk density, low energy density, high variability, and multiple formats.Richard [8] reviewed the challenges of establishing bioenergy systems from low energy-density biomass resources and discusses different technologies for increasing the energy density of agricultural feedstocks through different preprocessing steps, including pelleting, pyrolysis, and torrefaction.Such densification systems may be more appropriate for thermochemical (e.g., torrefaction and pyrolysis) as opposed to biochemical (pelleting) conversion.One approach to addressing these challenges for biochemical conversion platforms, while also bringing in stranded resources and reducing risk to the biorefinery, is to transition to a commodity-based feedstock supply system, such as that proposed by Idaho National Laboratory (INL) [7].The commodity system incorporates distributed 
biomass preprocessing depots located near the point of production.The depots can provide the biorefineries with a quality-controlled biomass supply, which is sourced from a variety of biomass types [9].Based on the availability of sufficient corn stover in Kansas to meet the biorefinery's capacity (2000 dry metric tons/day based on the work of [6]), the biorefinery could take in corn stover feedstock to meet annual supply.In the commodity system, the variability in feedstock characteristics such as moisture and ash content can be addressed by the local preprocessing depot to therefore supply the biorefinery with a dense, stable, quality controlled feedstock [8]. Most life cycle assessment (LCA) studies of biochemically-derived ethanol from lignocellulose have assumed the conventional model of delivering baled biomass (for agricultural residues or dedicated energy crops/purpose-grown grasses) directly to the lignocellulosic biorefinery [10][11][12][13][14]; however, recent LCA literature has compared conventional and commodity systems.Eranki et al. [15] compared the energy inputs and greenhouse gas (GHG) emissions of commodity and conventional systems delivered to a 5000 ton/day centralized biorefinery.The commodity system consists of nine preprocessing depots with a fixed capacity (500 tons/day).The mass fraction of biomass feedstock (corn stover, switchgrass and miscanthus) was varied in order to account for the uncertainty in energy requirements for feedstock production.The study concluded that the commodity system's GHG emissions are about 4% lower, while consuming approximately the same total energy as the conventional system.The study also emphasized that the processing technology was critical to cost-reduction for the commodity system, a point also addressed by Shastri et al. [16] and Uria-Martinez et al. [17].Recently, Argo et al. [18] evaluated several environmental sustainability impacts (100-year global warming potential (GWP), rainfall (green water) and groundwater through irrigation (blue water) footprints), and costs for advanced logistics designs that employ densification steps for preprocessing agricultural residues and grasses in depots for long-haul transport to centralized biorefineries designed on the biochemical platform.Their results showed that the commodity system reduced both spatial and temporal variability and thus stabilized the cost of the feedstock logistics and supply chain.Egbendewe-Mondzozo et al. [19] analyzed the cost and GHG emissions of conventional and commodity systems practiced in Southwest Michigan.The supply and logistics chain of the commodity system included seven preprocessing depots located in nine counties, and a centralized biorefinery.The authors examined different processing technology in order to evaluate their effects on biofuel production cost and energy inputs.The study concluded that the commodity system reduced the net life cycle GHG emissions; however, the profitability varied with the type of biomass feedstock and processing technology.Ray et al. [20] and Hess et al. [21] tested the effects of corn stover pellet densification on low-and high-solids pretreatment performance within biochemical conversion systems.The study concluded that pelletizing corn stover did not have a negative impact on pretreatment efficacy.Limited investigation from literature on the effects of densified feedstock on downstream processes suggests there is no adverse impact or possible improvement on pretreatment efficacy [22][23][24]. 
Select literature has examined spatial contributions to life cycle environmental impacts of ethanol from different feedstocks, mainly pertaining to the infrastructure and logistics of moving the product (ethanol fuel) to demand centers.For example, Wakeley et al. [25] concluded that at higher production scales, ethanol long-haul transport costs and environmental emissions would decline through use of rail infrastructure to transport due to the majority of supply (in the Midwest) needing to access demand centers (on east and west coasts of the U.S.).Strogen and colleagues evaluated the costs and emissions of bio-ethanol distribution on a larger scale than previously studied, and concluded that annual ethanol production scale critically impacts the average transport distance to end use markets [26].The authors found that more than 300,000 tons of CO2e could be avoided if all unnecessary transportation were eliminated.Moreover, Argo et al. [18] found that the logistics of the commodity system results in lower production costs than the conventional one when the biorefinery capacities are above 5000 Mg/day.A study by INL acknowledged that the transportation cost savings in the commodity system does not completely offset the costs associated with the pelleting and regrinding at all transport distances.If the benefits of handling and storing pellets are quantified, the transportation cost savings can be increased and thus can balance or exceed the cost of densification.However, the total cost of the commodity supply chain system could be higher than the conventional system due to the addition of preprocessing operations and equipment [16,27].Our objective is to evaluate the spatial variability of life cycle environmental impacts owing to characteristics along the biomass feedstock supply chain (i.e., from field to depot to biorefinery with intermediate transportation and preprocessing steps) that incur variability as a result of the quantity of biomass harvested, collected, stored, moved, and preprocessed prior to long-distance transport to a centralized biorefinery.Thus, our focus is on identifying processes within the feedstock logistics sequence that introduce the most significant uncertainty in life cycle impact assessment (LCIA).We focus on one LCIA metric, the 100-year GWP applied to a case study of a commodity system design in the state of Kansas (United States) through several configurations for depot location siting, and discuss the relevance to other important environmental impacts within agricultural bioenergy supply systems.Kansas was chosen in INL's 2017 [28] design report as an area that could support a uniform format "depot" supply system design mainly because of resource density (i.e., the presence of sufficient biomass supply) and mix of different feedstocks in different supply regions of the U.S. 
Our study focuses on Kansas in order to leverage assumptions from INL's 2017 report for the Midwest region, whose supply of corn stover could support depots supplying biomass commodities to a centralized biorefinery; and to demonstrate the depot design where it may likely take place, given resource availability.Our specific objective in this paper is to evaluate the uncertainty in life cycle GWP of the harvest, collection, storage, preprocessing, and transportation stages with two different configurations of depot size and location, whereby the location, size, and number of depots for a particular feedstock supply system design may incur significant dominance and/or variability to the life cycle GWP. Methods LCA, following the International Organization for Standardization (ISO) 14040 methods [29], was applied to evaluate environmental aspects of the bio-ethanol logistics and supply chain.Data for the life cycle inventory (LCI) model, which include the energy resources consumed to process corn stover at different scales, were derived using simulations from the Biomass Logistics Model (BLM) developed using Powersim™ at INL (Idaho Falls, ID, USA) [27].The BLM incorporates information from a number of databases which include all the data related to: (1) the engineering performance data of biomass pre-processing equipment; (2) labor costs; and (3) local tax and regulation data [27].ArcGIS was employed in this study in order to site the biorefinery and preprocessing depots and define all transportation distances [30].The spatial data are publicly available on the website of the Aerial Photography & GIS Data for the Professional & Novice (for counties) [31], and the Kansas Data Access & Support Center (DASC) (for railroads and highways) (Lawrence, KS, USA) [32,33].We use an attributional LCA (aLCA) approach in this study to investigate the spatial variability in LCIA metrics; however, we note the limitations of this approach raised by Andersen [34], who discussed potentially negative environmental impacts resulting from agricultural residue diversion, in that case, bagasse, for biofuel production. Through a focused analysis of agricultural residue harvest, collection, storage, transport, and preprocessing, this study builds on prior work aimed at understanding and characterizing uncertainties within the life cycle supply chain of lignocellulosic ethanol (bio-ethanol) [13,35,36].With the exception of a recent study [18], most prior LCA studies of bio-ethanol [10,11,13,14,37] have assumed a conventional harvest and biomass delivery in bale format, resulting in relatively low (approximately 10%) net life cycle GHG emissions [11].Here, we focus on identifying and characterizing uncertainties in advanced agricultural residue harvest, collection, storage, transport, pre-processing, and delivery operations to bio-ethanol facilities in the U.S. 
Midwest that arise due to variability in: (1) the sustainable harvest yield, defined as the quantity of corn stover removal set to maintain erosion and soil carbon within tolerable levels [2]; (2) transportation of the agricultural residue to depots that process and densify the biomass; (3) depot facility size, which influences equipment and energy throughput per unit of biomass; and (4) long-distance transport of the densified biomass delivered to the bio-ethanol facility.While we note the significant variability in GHG emissions from feedstock production noted in literature, and in particular the possible risks to loss of soil organic carbon (SOC) with corn stover removal [38,39], here we focus exclusively on uncertainties that could arise due to the spatial variability of corn stover feedstocks available at different spatial densities in the U.S. Midwest. The commodity system allows lignocellulosic biomass to be traded and supplied to biorefineries in a commodity-format market.A number of preprocessing depots can be located within or around the vicinity of biomass and are deployed to convert a diverse, low-density, perishable feedstock resource into an aerobically stable, dense, uniform-format [9,21].The preprocessing steps and equipment include: loaders, horizontal grinders, grinder in-feed systems, and dust collection and conveyor systems [7,9].The energy sources for depot equipment are described in Table S1.A commodity system and depots would likely first appear in areas without enough resources to supply a single biorefinery, requiring resources to be brought in beyond the local area.The incorporation of depots would allow stranded resources to enter the system which would otherwise be economically inaccessible.For this reason, corn stover located in the state of Kansas outside of high yielding areas within the U.S. corn-belt was chosen for analysis. Life Cycle Assessment of the Corn Stover to Lignocellulosic Ethanol Logistics and Supply Chain A LCA model was developed to evaluate uncertainties in GHG emissions for a corn stover-to-ethanol commodity system.A gate-to-gate LCI model was developed followed by LCIA evaluation of the 100-year GWP metric for corn stover feedstock logistics.The feedstock logistics we model in this work fits into a gate-to-gate segment of the full lignocellulosic ethanol life cycle (Figure 1), which refers to the sequence of processes within the life cycle of biomass production, conversion, and in-use consumption.The system boundary for the full life cycle of corn stover-to-ethanol (Figure 1) consists of: (1) feedstock production (i.e., crop production including nutrient replacement and soil GHG emissions); (2) feedstock harvest, collection, and storage; (3) feedstock transport from field to biomass preprocessing depot; (4) preprocessing depot operations; (5) commodity transport from biomass preprocessing depot to biorefinery; (6) biofuel conversion at the biorefinery; (7) ethanol transport, distribution, and blending; and (8) vehicle operation.The functional unit of the analysis is 1 MJ of ethanol.We rely on literatures' estimates for Segments 1, 6, 7, and 8 (see [36,40] for development of the biorefinery model and for discussion of biorefinery inputs, respectively).The assumptions from literature [37] correspond to conventional biomass sourcing with on-site power production and an electricity credit from the Midwest electricity grid mix. 
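The segment list above implies a simple additive accounting: per-segment GHG intensities, expressed per the 1 MJ ethanol functional unit, sum to the life cycle total, with the gate-to-gate logistics segments (2-5) being the ones modeled in this study. The sketch below illustrates that bookkeeping with placeholder values; they are not the study's Table 3 figures.

```python
# Illustrative aggregation of per-segment GHG intensities into a life cycle
# total for 1 MJ of ethanol. Numbers are placeholders, not the study's values.
segments_g_co2e_per_mj = {
    "1 feedstock production":            14.0,
    "2 harvest, collection, storage":     0.4,
    "3 transport field -> depot":         0.3,
    "4 preprocessing depot":              1.0,
    "5 transport depot -> biorefinery":   2.0,
    "6 biofuel conversion":               9.0,
    "7 ethanol transport/distribution":   1.0,
    "8 vehicle operation":                0.8,
    "electricity co-product credit":     -3.0,
}

# Segments 2-5 form the gate-to-gate logistics boundary of this study.
logistics = [k for k in segments_g_co2e_per_mj if k[0] in "2345"]
gate_to_gate = sum(segments_g_co2e_per_mj[k] for k in logistics)
total = sum(segments_g_co2e_per_mj.values())
print(f"gate-to-gate logistics: {gate_to_gate:.1f} g CO2e/MJ")
print(f"life cycle total:       {total:.1f} g CO2e/MJ")
```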
This paper focuses on identifying significant uncertainties in environmental metrics within an advanced logistics configuration of the corn stover-to-ethanol supply chain, which was designed to reduce transportation distance and costs.A single-variable sensitivity analysis was conducted to examine the significance of LCI model parameters for harvesting, transporting from field, preprocessing at the depot and transporting from the depot in the advanced commodity system.Commodity transportation (as baled corn stover or densified biomass) is a function of both transport distance and feedstock density.Gate-to-gate system boundary for the corn stover commodity feedstock supply and logistics system within the life cycle of bio-ethanol production. Data Management and Analysis ArcGIS tools were used to identify depot and biorefinery locations and to measure the transport distances within the commodity supply chain.The criteria for location selection consisted of the presence of transportation infrastructure (railroads and road systems) and annual biomass availability (i.e., sustainable harvest yield) in the state of Kansas [2].Significant factors such as access to water, and availability of utilities and labor were assumed sufficient for the model scenarios constructed.For this study, factors affecting policy such as political districts, voting locations and school systems were not considered.Transportation data, including road and rail networks, and freight and truck stations, were obtained from the Aerial Photography & GIS Data for the Professional & Novice [31] and the DASC [32,33].Biomass supply data were assumed to correspond to the marginal access (farm-to-gate) cost, which is proportionally related to biomass demand.The US Department of Energy's U.S. Billion-Ton Update Report (BT2) [2] estimates biomass marginal access costs in $5/ton increments.For the counties we consider in Kansas, biomass marginal access costs begin at $40/ton and can be as high as $80/ton depending upon the available supply within each county.Wakeley et al. [25] and Argo et al. 
[18] varied biorefinery capacity in their studies in order to examine the range of transportation cost and GHG emissions.In this study, the biorefinery capacity was fixed at 800,000 dry matter tons (DMT)/year.The supply chain was designed to provide 900,000 DMT to account for possible losses due to the converting, handling and transporting processes.We assumed that multiple depots would feed one centralized biorefinery.Given these boundary conditions, a master map was developed joining all transportation and biomass availability data (Figure S1).A radius of 80-km (50-miles) radius is assumed for feedstock transportation from the corn farms to the preprocessing depots.Several tools within ArcGIS were used to identify the geographic boundary of the LCI model.The centroid of each county was identified by using the Spatial Analysis (Feature to Point) tool.The ArcGIS Find Route tool was used to measure the rail distances between two locations based on the available railroad networks.The biomass supply data for each Kansas County were imported to ArcGIS and converted to raster format.The biomass supply data were presented in the master map in order to demonstrate the biomass density and distribution across Kansas.Two depot configurations were considered: (1) equal spatial distance between depots and infrastructure (rail network) availability; and (2) high biomass density and infrastructure (rail network) availability.The rationale for selecting the two configurations was to test the impact that the number of depots and their relative location within a biorefinery supply radius have on life cycle GHG emissions. Configuration 1: Equal Spatial Distance between Depots and Infrastructure Availability The first configuration divides the state of Kansas into six equally spaced primary regions: (1) Northwest; (2) North; (3) Northeast; (4) Southeast; (5) South; and (6) Southwest.However, according to resource assessment data from Oak Ridge National Laboratory (ORNL) [2], the counties in Regions 3 and 4 would not supply biomass for ethanol production (Figure S1).Therefore, counties in these regions were eliminated from the scope of analysis.Four depots are located in Regions 1, 2, 5, and 6 within the vicinity of biomass supply of each region.They are also cited close to road and rail transportation infrastructure to take in baled feedstock and export (via long distance) densified biomass (Figure 2, spatial boundary).Note that the sizing criteria for depots used in this study are different than those used by INL. Configuration 2: High Biomass Density and Infrastructure Availability The second depot configuration places depots according to biomass availability and the shortest railroad distance from the depot to the biorefinery, and is derived from the existing Kansas rail systems (Figure 3) in the central-to-west range of the state.Argo et al. 
[18] assumed in their model that the feedstock was transported from field to depot by semi-truck and from depot to biorefinery by rail, given that railcars are capable of carrying a greater volume of goods at the minimal variable cost per mile. Thus, we assumed that distribution by truck is preferred for transport from field to depot, and rail is preferred for long-distance transportation from depot to biorefinery. Based on the distribution of biomass and the feedstock transport distance from field to depot, five biomass preprocessing depots were located in the state in counties surrounded by dense feedstock supply. The red point at the centroid of Reno County corresponds to the location of the biorefinery. Life Cycle Assessment Modeling Approach The BLM inventories all equipment and transport modes used to process and move biomass from the field to the biorefinery reactor infeed [29]. The transport distance, specified using ArcGIS software, and the depot capacity were varied and analyzed by the BLM model in order to generate the relative energy consumption from each field supplying feedstock to the depot and densified biomass to the biorefinery. The cumulative energy consumption in advanced logistics is the product of the normalized energy consumption from the BLM (in gallons of diesel per DMT), which includes a semi-truck (2.88 L) to transport one DMT of biomass and a baler (1.36 L) to compress one DMT of biomass, and the biomass input, which was scaled according to the depot capacity (a simplified sketch of this scaling step is given below). Energy consumption data were input into the SimaPro V.7.3.3 LCI model [41] in order to compute the life cycle GHG emissions. The SimaPro model accounts for all upstream cradle-to-gate inputs of process energy (diesel and electricity). In order to expedite the gate-to-gate LCIA computation for the two scenarios, a Matlab script [42] was written to batch-process the BLM energy and resource data outputs into SimaPro to generate the GHG emissions for the 46 supply counties (see Section S4 for more detail). Biorefinery Location The biorefinery was located in Reno County based on its proximity to road and railway infrastructure. Two counties were identified originally, Sedgwick (Figure 4a) and Reno (Figure 4b), given their locations within the state in areas with high highway and railway access. Reno County was selected based on it having five highways (K14, K17, U50, K61, K96) and four railroads (St. Louis Southwestern; Missouri Pacific; Atchison, Topeka & Santa Fe; and Chicago, Rock Island and Pacific), whereas Sedgwick has nine highways (I235, K96, I135, I35, K15, U54, U81, K42, K254) but a less extensive railway network of three railroads (Missouri Pacific; Chicago, Rock Island and Pacific; and St. Louis-San Francisco). Thus, the biorefinery was sited in Reno County because of its proximity to the biomass-supplying counties and because of its better access to four railroad networks that could facilitate energy-efficient rail transport between the depots and the biorefinery, thus giving access to all depot locations selected in Scenarios 1 and 2 (Figure S1). 
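A minimal sketch of the BLM scaling step referenced above: per-DMT diesel use for the logistics equipment is multiplied by an assumed annual depot throughput and converted to GHG emissions. The 2.68 kg CO2e/L factor is an assumed combustion-only diesel value; the study itself passes the energy data to SimaPro, which also accounts for upstream burdens.

```python
# Per-DMT diesel use quoted in the text for two logistics operations.
DIESEL_L_PER_DMT = {
    "semi-truck (field -> depot)": 2.88,
    "baler":                       1.36,
}
DIESEL_KG_CO2E_PER_L = 2.68          # assumed combustion-only emission factor
depot_throughput_dmt = 250_000       # hypothetical annual depot capacity

total_litres = sum(v * depot_throughput_dmt for v in DIESEL_L_PER_DMT.values())
emissions_t = total_litres * DIESEL_KG_CO2E_PER_L / 1000.0
print(f"diesel use: {total_litres:,.0f} L/year")
print(f"GHG:        {emissions_t:,.0f} t CO2e/year")
```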
The Capacity of the Biomass Preprocessing Depots The available biomass supply in Kansas exceeds the biorefinery feedstock demand of 900,000 DMT/year.The maximum capacity of each preprocessing depot is assumed to be less than 400,000 DMT/year in order to increase the network of the distribution systems.The depot size limitation allows for more depots to be distributed across the state to supply the annual constant corn stover needed for the centralized biorefinery.To be conservative, we set the size of the pre-processing depots based on the local feedstock density within an 80-km (50-miles) radius.A simple mathematical rule was applied to estimate the depot capacity (Tables S2 and S3, and Equation (S1)) and farm supply (Tables S4 and S5, and Equation (S2)) based on the availability of biomass supply.We assumed that neighboring counties would transport biomass to the nearest depot within an 80-km (50-miles) radius.The distance from the depot to the biorefinery is the distance from the centroid of the county with the depot to the centroid of the county with the biorefinery for Scenario 1 (Table 1) and Scenario 2 (Table 2).The sizing criteria for depots used in this study are different than that used by INL. Uncertainty Analysis We use Monte Carlo methods [43,44] to project the range of probable life cycle GWPs for the two scenarios.The gate-to-gate LCIA model developed in SimaPro V.7.3.3 estimates the GHG emissions (GWP) based on the energy resource inputs derived from the BLM.SimaPro GWP output data for each process in the supply chain were best fit to probability distributions using Oracle Crystal Ball Statistical Software (Oracle Corporation, Redwood City, CA, USA) [45] and two distributions for Scenarios 1 and 2, respectively, were aggregated (Figures S2 and S3).With the scenario probability distributions, a Monte Carlo Simulation (1000 iterations) was run to generate stochastic GWP estimates (Table S6, and Figures S2 and S3).The statistical mean, range, and probability distributions for the stochastic input processes (feedstock harvest, collection and storage, feedstock transport, preprocessing, and commodity transport) are noted in Table S7.This uncertainty analysis procedure was adopted from Venkatesh et al. [46], who investigated uncertainties in life cycle GWP of U.S. coal; the authors fit probability distributions to coal model parameter inputs and used Monte Carlo simulation to examine the effect of spatial and temporal variability on life cycle GWP.Finally, we used single-variable sensitivity analysis to determine the relative significance of the four logistics gate-to-gate processes with respect to the final GHG emission given the expected low and high ranges in order to understand the variability within the system boundary and thus evaluate and mitigate potential environmental risks as noted by Venkatesh et al. [46]. Life Cycle Impact Assessment The average corn stover-to-ethanol life cycle GHG emissions for the two scenarios we examine in Kansas (Table 3) are 26 g CO2e/MJ ethanol and 25 g CO2e/MJ ethanol for Scenarios 1 and 2, respectively, 66% and 63% of the conventional system's life cycle GWP.These average values are 34% and 37% (for Scenarios 1 and 2, respectively) lower than the "conventional system" analog of 39.7 g CO2e/MJ ethanol (Table 3) taken from literature from Pourhashem et al. 
[37], because of the significantly lower corn stover harvest, collection, storage, preprocessing, and transport from depot to biorefinery sequence estimate that we assume through the BLM model.Review of literature (Table S8) indicated a wide range of results for feedstock harvest, collection, and storage activities within the logistics supply chain of conventional systems.Among the studies reviewed, which presented GWP estimates ranging from 3 g CO2e/MJ [35] to 17.5 g CO2e/MJ [47], aggregate energy input for both collection and harvest were larger than our study findings, suggesting that the GWP for the logistics supply chain will depend on equipment performance, including energy efficiency and age.The equipment and energy inputs for the harvest, collection and storage were detailed in Table S8.The data shows that the energy consumption for feedstock harvest, collection and storage in our study is 58%, 66% and 94% lower than that of Wang et al. [35], Larson et al. [47] (which Pourhashem et al. [37] used), and Eranki et al. [15], respectively (Table S8 compares assumptions and energy inputs for our and literature results for feedstock harvest and collection).The GHG emissions of the corn stover harvest, collection, and storage processes for Scenarios 1 and 2 are significantly lower than those suggested in Pourhashem's study [37] as well as work by Larson et al. [47], who estimates that corn stover harvest, collection, and storage processes emit 403 kg CO2e/DMT, which is equivalent to 17.5 g CO2e/MJ ethanol assuming our fuel conversion assumptions.Other estimates from literature have estimated farm operations for harvesting, collecting, and storing corn stover to be significantly smaller.For example, Wang et al. [35] estimate farm operations for the harvest of corn stover for ethanol production to be 3 g CO2e/MJ.Among the main processes of the supply chain, the transport step from depot to biorefinery contributes the most to the net GHG emissions at 2 g CO2e/MJ ethanol and 1.5 g CO2e/MJ ethanol in Scenarios 1 and 2, respectively, but is modest compared to most other life cycle inputs aside from logistics processes.This contribution is estimated as 8% of the net GHG emissions in Scenario 1 and 6% of the net GHG emissions in Scenario 2. Table 3. Scenarios 1 and 2 weighted average greenhouse gas (GHG) emissions sources and sinks of life cycle components: (1) feedstock production (i.e., crop production including nutrient replacement and soil GHG emissions); (2) feedstock harvest, collection, and storage; (3) feedstock transport from field to biomass preprocessing depot; (4) preprocessing depot operations; (5) commodity transport from biomass preprocessing depot to biorefinery; (6) biofuel conversion at the biorefinery; (7) The GHG emissions from the preprocessing depot contribute 4% and 3% of the net GHG emissions in Scenarios 1 and 2, respectively.This contribution may be low as the logistics design only considered comminution and densification while further processing to address feedstock quality in terms of fuel conversion (moisture, ash, etc.) 
was outside the scope of this paper.The feedstock harvest, collection, and transport from field processes contribute less than 2% of the net GHG emission for both scenarios.The total GHG emissions for transport steps, including transport of biomass from fields and transport of commodity from depots for Scenario 1, is 2.2 g CO2e/MJ ethanol, which is 50% lower than the GHG emissions of the transport step reported for the conventional case in Pourhashem's paper [37], 4.4 g CO2e/MJ ethanol; while the GHG emissions from these two transportation steps for Scenario 2, 1.7 g CO2e/MJ ethanol, is 64% lower than that of the conventional case. Uncertainties in Greenhouse Gas Emissions in Corn Stover-to-Ethanol Production Most LCA studies of agricultural residue-to-biofuel have assumed the conventional logistics system, which consists of a single transportation step from the field to the biorefinery [10,11,13,14,37,48].These studies indicated that the transport step contributed minimally to the environmental impact of the lignocellulosic ethanol supply chain.For example, Pourhashem's study [37] showed that the transport step contributes approximately 11% to the net life cycle emissions.For the advanced commodity system studied, the transport of densified corn stover from the preprocessing depot to the biorefinery could add significantly to the variability in life cycle GHG emissions for the entire supply chain in both scenarios.Both variability in transportation distance from depot to biorefinery and preprocessing operations at the depot contribute to variation in net GHG emissions from logistics (Figure 5).The transportation of densified biomass from the preprocessing depot to the biorefinery introduces the highest uncertainty, ranging from 0.02 g CO2e/MJ to 13 g CO2e/MJ ethanol for Scenario 1 (Figure 5a) and from 0.02 g CO2e/MJ to 9 g CO2e/MJ ethanol for Scenario 2 (Figure 5b) over the 90% confidence interval (CI).This wide range comes primarily from the variability in transported biomass from each depot and the long, though efficient, transport distance by rail.The transport rail distances range from 14 miles to 349 miles and from 47 miles to 349 miles in Scenarios 1 and 2, respectively (Tables 1 and 2).Scenario 2 has more preprocessing depots distributed around the central biorefinery than Scenario 1, which decreases the size of each depot and thus reduces the feedstock mass being transported per route from the depot to the biorefinery.As a result, the range of uncertainty for logistics in Scenario 2 is smaller than that for Scenario 1. GHG emissions are sensitive to the size of preprocessing depots.This is evident from the wider range of emissions in both feedstock harvest, collection and storage (0.1-2.3 g CO2e/MJ ethanol) and preprocessing depot processes (0.02-9.3 g CO2e/MJ ethanol) (Figure 5a) in Scenario 1, compared to the narrower range of emissions for both feedstock harvest, collection and storage (0.1-1 g CO2e/MJ) and the preprocessing depot processes (0.2-5 g CO2e/MJ) in Scenario 2 (Figure 5b).The larger size of the preprocessing depots in Scenario 1 (Tables 1 and 2) raises the upper bounds of cumulative energy consumption in transportation and biomass densification processes.Argo et al. [18] also concluded that the commodity system resulted in higher GHG emissions (10%-15%) than the conventional system due to having additional transportation steps.The "transport from field" process shows the least significant impact because the biomass supply radius is limited to 80 km (50-miles). 
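The stochastic ranges quoted above come from a 1000-iteration Monte Carlo simulation over fitted probability distributions. A simplified sketch of that procedure is shown below; the distributions and the fixed non-logistics contribution are assumed for illustration and are not the study's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1000  # iterations, as in the study

# Hypothetical distributions for the four gate-to-gate logistics processes
# (g CO2e/MJ); the study fit distributions to SimaPro output per county.
harvest_storage = rng.triangular(0.1, 0.5, 2.3, N)
transport_field = rng.triangular(0.05, 0.1, 0.3, N)
depot           = rng.lognormal(mean=np.log(1.0), sigma=0.8, size=N)
transport_depot = rng.lognormal(mean=np.log(2.0), sigma=1.0, size=N)

OTHER_LIFE_CYCLE = 22.0  # assumed fixed contribution of the non-logistics segments

total = OTHER_LIFE_CYCLE + harvest_storage + transport_field + depot + transport_depot
lo, med, hi = np.percentile(total, [5, 50, 95])
print(f"mean {total.mean():.1f} g CO2e/MJ, 90% CI [{lo:.1f}, {hi:.1f}]")
```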
Single variable sensitivity analysis was conducted to identify significant parameters in the logistics segment of the life cycle (Figure 6).The results confirm the significance of transporting the densified biomass in both scenarios.The transportation of densified biomass from depot to biorefniery can impose a variation in GHG emissions of more than 20% from the average (Table 3) for Scenario 1 (Figure 6a) and Scenario 2 (Figure 6b), whereas all of the other parameters may cause life cycle GHG emissions to vary by up to 10% (Figure 6) above the average.The calculated 90% CI shows that the GHG emissions in Scenario 1 can be higher by at least 3.4 g CO2e/MJ, and by as much as 3.6 g CO2e/MJ compared to Scenario 2 GHG emissions (see Tables S9 and S10 for further information on the paired-samples T-test).The uncertainty in life cycle GHG emission varies from 24 g CO2e/MJ to 41 g CO2e/MJ in Scenario 1 and from 24 g CO2e/MJ to 33 g CO2e/MJ in Scenario 2 (Figure 7).Only the upper bound of Scenario 1 could cause the biorefinery to fall out of compliance with the U.S. Federal Renewable Fuel Standard Program (RFS2) [49] barring any deviation in all other life cycle inputs that we assume to be fixed do not vary significantly.For example, Pourhashem et al. [37] found that soil carbon loss, if not managed, can add significantly to uncertain and positive GHG emissions, and electricity crediting (a negative GHG emission in most lignocellulosic ethanol life cycle studies) is also uncertain when a biorefinery co-produces electricity to sell to the regional electricity grid because of uncertainties in the displacement of marginal electricity supply [36].Both of these factors could increase the life cycle GHG emission range above the RFS2 compliance level.Under these circumstances, uncertainty in logistics inputs would add further to the uncertainty of meeting RFS2; however, for the most part, the efficiency gained by densification of the biomass improves life cycle GHG emissions and offers a means of overcoming potentially "stranded" biomass supply.The wider range in Scenario 1 primarily results from the greater uncertainty in the "transport from depot" and the "preprocessing depot" processes while the other two operations contribute much less (Figure 5).Compared to the life cycle GHG emissions of the average conventional case, 39.7 g CO2e/MJ [37], the GHG emissions at the 90th percentile in Scenario 2 falls under the GHG emission of the conventional case while it is ~1 g CO2e/MJ higher than the conventional case in Scenario 1.This finding suggests that equally spacing depots across the state of Kansas can possibly surpass GHG emissions of the conventional case, albeit by a small margin. 
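The single-variable (tornado) sensitivity analysis can be reproduced schematically by swinging one logistics process at a time between its low and high bounds while holding the others at their means. In the sketch below the bounds loosely follow the Scenario 1 ranges quoted earlier, while the baseline means and the non-logistics contribution are assumed.

```python
baseline = {  # assumed mean contributions, g CO2e/MJ
    "harvest/collection/storage": 0.7,
    "transport from field":       0.15,
    "preprocessing depot":        1.2,
    "transport from depot":       2.0,
}
bounds = {    # (low, high) g CO2e/MJ, loosely following the Scenario 1 ranges
    "harvest/collection/storage": (0.1, 2.3),
    "transport from field":       (0.05, 0.3),
    "preprocessing depot":        (0.02, 9.3),
    "transport from depot":       (0.02, 13.0),
}
OTHER = 22.0  # assumed non-logistics contribution

base_total = OTHER + sum(baseline.values())
swings = []
for name, (lo, hi) in bounds.items():
    low_total = base_total - baseline[name] + lo
    high_total = base_total - baseline[name] + hi
    swings.append((high_total - low_total, name, low_total, high_total))

# Print the "tornado" ordering: widest swing first.
for swing, name, lo_t, hi_t in sorted(swings, reverse=True):
    print(f"{name:30s} {lo_t:5.1f} - {hi_t:5.1f} g CO2e/MJ (swing {swing:.1f})")
```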
Conclusions This study examines the variability in life cycle GHG emissions of a bio-ethanol feedstock logistics supply chain in Kansas, specifically examining uncertainties in advanced commodity systems that are found to mitigate risks associated with weather, pests, and disease [18,27].In this study, the uncertainty in agricultural residue (corn stover) logistics was examined by testing the location siting and sizing of pre-processing depots in two different but plausible supply chain configurations in the state of Kansas.In Scenario 1, the depots were equally spaced and sited within the vicinity of counties that have high biomass supply density.In Scenario 2, the depot siting was leveraged to consider the shortest rail transport distance to a centralized biorefinery while considering the vicinity of high biomass supply counties.The stochastic results show that the GHG emissions for Scenario 1 have a wider variability and higher mean emissions (See Table S9) than those of Scenario 2. This result illustrates the benefit of locating preprocessing depots in the vicinity of direct railroad lines to the biorefinery since rail transportation has the least environmental impacts and lowest costs [27].It demonstrates that we expect to find a range of environmental impacts (in this case, GHG emissions) depending upon the quantity of biomass moved and upon the distance from biomass-supplying farms to preprocessing depots and from depots to biorefinery central facilities.Other environmental LCIA metrics that depend on energy consumption are expected to follow the variability trend exhibited by GHG emissions. The depot-to-biorefinery transportation segment makes the largest contribution to life cycle GHG emissions from logistics and to life cycle GHG uncertainty for both scenarios.This suggests that the transport distance and the volume of transported commodity from depots are the significant parameters for supply regions like Kansas that have sufficient biomass supply in a feedstock commodity supply chain.Results may be quite different in regions that have many stranded biomass resources that need to be transported a long distance and collected at smaller centralized depots, the siting of which could depend on the presence of both road and rail infrastructure.Results also suggest that the uncertainty in GHG emissions of the depot-to-biorefinery and the pre-processing depot processes declines with increasing number of depots in a region.Future uncertainty analysis for feedstock logistics should focus on improving some of the model boundary settings, particularly the feedstock supply radius and the depot sizing method.The variability in the field-to-depot stage is minimized because the feedstock supply is limited to the region within an 80-km (50-miles) radius and thus the transport distance is assumed to be uniform.However, restricting the feedstock collection radius may impose a limitation in meeting biofuels production as noted by Argo et al. [18].Therefore, future work should consider a larger biomass supply radius.In our study, the depot capacity was simply determined based on the feedstock supply ratio; however, a more precise optimizing of depot capacity could be used to test its significance on uncertainty in logistics chains that may require multiple stranded supply locations using a variety of lignocellulosic sources needed to fit a biorefinery's annual supply wheel. Figure 1 . 
Figure 1. Gate-to-gate system boundary for the corn stover commodity feedstock supply and logistics system within the life cycle of bio-ethanol production.
Figure 2. Scenario 1 depot configuration based on equal spatial siting between depots and infrastructure availability. The red circle shows the biomass supply radius. The depots are located at the center of the circle and receive feedstock from counties within an 80-km (50-mile) radius. The red point at the centroid of Reno County corresponds to the location of the biorefinery.
Figure 3. Scenario 2 depot configuration based on biomass density and infrastructure availability. The red circle shows the biomass supply radius. The depots are located at the center of the circle and receive feedstock from counties within an 80-km (50-mile) radius. The red point at the centroid of Reno County corresponds to the location of the biorefinery.
Figure 4. Highway (red) and rail systems (black) of (a) Sedgwick County and (b) Reno County.
Figure 5. Stochastic gate-to-gate logistics GHG emissions over the 90% confidence interval (CI) and interquartile ranges for: (a) Scenario 1; and (b) Scenario 2. Stochastic estimates based on Monte Carlo simulation (1000 iterations) are presented as box and whisker plots. The top of the box represents the 75th percentile, the middle line represents the median (50th percentile) and the bottom of the box represents the 25th percentile. The whiskers correspond to the 5th and 95th percentiles.
Figure 6. Sensitivity analysis of processes (feedstock harvest, collection, and storage; transport from field; preprocessing depot; and transport from depot to biorefinery) on life cycle GHG emissions for: (a) Scenario 1; and (b) Scenario 2. The tornado diagram shows that GHG emissions (measured as global warming potential, GWP) are most sensitive to the transport from depot (top process) and least sensitive to transport from field (bottom process). An increase in GHG emissions sensitivity is shown in 50% gray; a decrease in GHG emissions sensitivity is shown in 30% gray.
Figure 7. Stochastic life cycle GHG emission results from Monte Carlo simulations for two depot scenarios for the corn stover-to-ethanol logistics and supply chain. The GHG emission for the conventional biorefinery case was obtained from the literature [37]. Stochastic estimates based on Monte Carlo simulation are presented as box and whisker plots. The top of the box represents the 75th percentile, the middle line represents the median (50th percentile) and the bottom of the box represents the 25th percentile. The whiskers correspond to the 5th and 95th percentiles.
Table 1. Preprocessing depot capacities and true rail distance from depots to the single biorefinery in Scenario 1. DMT: dry matter ton.
Table 2. Preprocessing depot capacities and true rail distance from depots to the single biorefinery in Scenario 2.
Table (column headers): Scenario 1 commodity system (equal spatial distance between depots and infrastructure availability); Scenario 2 commodity system (high biomass density and infrastructure availability); Conventional system (Pourhashem et al. [37]); stages include ethanol transport, distribution, and blending; and (8) vehicle operation. All units in g CO2e/MJ ethanol.
9,570.4
2014-11-04T00:00:00.000
[ "Engineering", "Environmental Science" ]
The hidden diversity of the potato cyst nematode Globodera pallida in the south of Peru Abstract Our knowledge of the diversity of potato cyst nematodes in their native areas still remains patchy and should be improved. A previous study based on 42 Peruvian Globodera pallida populations revealed a clear south to north phylogeographic pattern, with five well‐supported clades and maximum diversity observed in the south of Peru. In order to investigate this phylogeographic pattern more closely, we genotyped a larger collection of Peruvian populations using both cathepsin L gene sequence data and a new set of 13 microsatellite loci. Using different genetic analyses (STRUCTURE, DAPC), we consistently obtained the same results that led to similar conclusions: the presence of a larger genetic diversity than previously known suggesting the presence of cryptic species in the south of Peru. These investigations also allowed us to clarify the geographic borders of the previously described G. pallida genetic clades and to update our knowledge of the genetic structure of this species in its native area, with the presence of additional clades. A distance‐based redundancy analysis (dbRDA) was also carried to understand whether there was a correlation between the population genetic differentiation and environmental conditions. This analysis showed that genetic distances observed between G. pallida populations are explained firstly by geographic distances, but also by climatic and soil conditions. This work could lead to a revision of the taxonomy that may have strong implications for risk assessment and management, especially on a quarantine species. Potato cyst nematodes (PCN) are a major pest of potato native to South America. An extensive sampling campaign was carried out in Peruvian potato fields in 2002 in order to improve our understanding of the evolutionary history of Globodera pallida (Picard, Plantard, Scurrah, & Mugniery, 2004), one of the two well-known Andean potato cyst nematode species. However, as we progress towards an understanding of the evolutionary history of this particular species, the general idea that the orogeny of the Andes has triggered a variety of adaptive biotic radiations has become a key notion regarding Globodera species evolution and specialization (Grenier, Fournet, Petit, & Anthoine, 2010). At this time, only four Globodera species parasitizing potato have been identified. Among them G. leptonepia was found only one time in a ship-borne consignment of potatoes. It is presumed to be a South American species parasitizing potato, but extensive field collections of potato cyst nematodes in the Andean highlands (Evans, Franco, & Descurrah, 1975) have not resulted in its rediscovery. As a result, G. leptonepia remains a rare and poorly known PCN. G. ellingtonae is a recently described PCN species (Handoo, Carta, Skantar, & Chitwood, 2012). Initially found and described from a potato field sampled in Oregon (USA), this species seems to be restricted geographically to the Americas at this time (Skantar et al., 2011). The last two PCN species are the well-known G. pallida and G. rostochiensis species that both originate from the Andes (Grenier et al., 2010). Cryptic species should not be ignored as they are important for a number of applied reasons regarding in particular food security, risk assessment or nonchemical management technologies. 
In the case of quarantine nematodes like PCN, failure to recognize cryptic species might complicate efforts towards their eradication or management and also has strong economic consequences for potato export. The question of whether G. pallida should be considered a species complex rather than a single species has been raised by several authors (Madani et al., 2010;Subbotin, Prado Vera, Mundo-Ocampo, & Baldwin, 2011). Interestingly, previous investigations carried out on G. pallida populations sampled along the Andean Cordillera in Peru have revealed a phylogeographic pattern from south to north, with five distinct clades (named I-V) (Picard, Sempere, & Plantard, 2007), and high nucleotide divergence (10%-11% based on cytochrome B sequencing) between populations belonging to the southern and northern clades (Picard et al., 2007). This first study on the genetic diversity of G. pallida was conducted on a limited population set (44 along a 3000-km transect (Picard et al., 2007;Plantard et al., 2008)), and made use of a set of seven microsatellite loci available at that time. Since then, novel microsatellite loci have been developed directly from G. pallida genome (Cotton et al., 2014), selected and combined together to develop a new robust genotyping tool based on 13 microsatellite loci (Montarry et al., 2019(Montarry et al., , 2015. We worked here on a set of 117 PCN populations sampled in the geographic area where European PCN originate (Boucher et al., 2013;Plantard et al., 2008), and where the highest genetic diversity was observed for G. pallida (Picard et al., 2007). We used two types of molecular markers. First, populations were genotyped using the intron length polymorphism of the cathepsin L gene. The cathepsin L gene is involved in nematode nutrition and is made up of 12 introns (Blanchard, 2006). Considerable intron length polymorphism has already been reported for several nematode genes among different Globodera species (Alenda, Gallot-Legrand, Fouville, & Grenier, 2013;Geric Stare et al., 2011). The cathepsin L gene is no exception regarding this intron length polymorphism, and in particular, the region spanning introns 4 and 5 were found to be particularly polymorphic among PCN species (Blanchard, 2006). In fact, based solely on the amplification length polymorphism of this cathepsin region, it is possible to distinguish G. pallida from G. rostochiensis or G. ellingtonae. Second, populations were genotyped using the new set of 13 microsatellite markers. Contrary to the cathepsin L gene, microsatellite loci are often species-specific markers and are located in the noncoding part of the genome (Li, Korol, Fahima, Beiles, & Nevo, 2002;Selkoe & Toonen, 2006). Thanks to the increase in the number of populations from South Peru investigated and to the use of two different genotyping tools with different tempos of evolution, our objectives were (a) to explore the geographic distribution of G. pallida, G. rostochiensis or G. ellingtonae in South Peru and (b) to reinvestigate more in-depth the genetic diversity of G. pallida in its native area. | Cathepsin genotyping A total of 117 Peruvian and two Chilean populations were studied and are listed in Table S1 along with their geographic location. All are from the laboratory collection and were multiplied on the potato cv "Désirée." We sampled populations from different multiplication years to avoid any effect of the year of multiplication on our data. 
In all, eight juveniles at the L2 stage from eight different cysts of each population were pooled, and their DNA was extracted using the NaOH protocol as describe in Boucher et al. (2013) and genotyped using the cathepsin L primers (Forward: 5' AATCKGTRGATTGGCGTGAC 3'; Reverse 5' GGGCCTTGDGTKGCAACAGC 3'). PCR was carried out in a total volume of 25 µl (12.25 µl of ultra-purified H 2 O, 5 µl of 5 × buffer, 3 µl of MgCl 2 at 25 mM, 1 µl of each primer at 10 µM, 0.25 µl of Taq Golfex 5 U/µl, 0.5 µl of dNTPs at 10 µl each and 2 µl of DNA). The PCR conditions were a denaturation step at 95°C for 1 min, followed by 30 cycles (denaturation: 95°C for 30 s, annealing: 63°C for 50 s, elongation: 72°C for 1 min) and a final elongation step at 72°C for 5 min. Amplification products were observed after migration on an agarose gel (1.5%). The amplification products obtained for European G. rostochiensis populations are about 705 bp, amplification products for European G. pallida populations are about 690 bp, and those for G. ellingtonae are about 880 bp ( Figure S1). | Microsatellite genotyping Globodera pallida populations studied with microsatellites were selected based on the results obtained following cathepsin L genotyping. Where a mix of species was observed in one population, we excluded G. rostochiensis individuals when there was a majority of G. pallida. Not all populations had sufficient individuals to be genotyped by this tool, and finally, 84 populations were studied and none had less than 24 individuals; the precise number of individuals retained per population for each analysis is specified in Table S1. The genotyping protocol, including the loci and primers used, was the same as in Montarry et al. (2015Montarry et al. ( , 2019. Briefly, for each population, 40 cysts were randomly collected and one juvenile (L2 stage) sampled from each cyst. DNA extraction was performed as described in Boucher et al. (2013). All individuals were genotyped at 13 microsatellite loci: Gp106, Gp108, Gp109, Gp111, Gp112, Gp116, Gp117, Gp118, Gp122, Gp126, Gp135, Gp145 and Gr67, and genotyping was performed on the Gentyane INRA platform. Allele sizes were identified using the automatic calling and binning procedure of Genemapper v 5.0 (Do & Rahm, 2004) and completed by a manual examination of irregular results. We used two different data sets for analysis. In the first data set, we chose to retain the maximum number of populations and then removed loci that did not amplify enough individuals in each population. This first data set consisted of 84 populations genotyped by 10 microsatellite loci (loci Gp112, Gp135 and Gp145 were excluded) and was named "10locix84pop." In the second data set used for further analysis, we chose to retain all the genetic information (all 13 loci) and then removed populations where no or too few individual amplifications were obtained (populations TDF, CAS, 309, 384, 264 and 224 were excluded). This second data set was named "13locix78pop." | Population genetic descriptors and genetic structure analysis Different population descriptors like the observed and expected heterozygosity, the fixation index (F IS (Wright, 1978)), and the allele diversity, were calculated using Poppr package (Kamvar, Tabima, & Grünwald, 2014) in R and using the rarefaction index (ElMousadik & Petit, 1996). 
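The descriptors named above (observed and expected heterozygosity and the fixation index F_IS) have simple closed forms, so the Poppr-based workflow can be cross-checked with a few lines of code. The sketch below is a minimal illustration in Python rather than the R/Poppr pipeline actually used, and the genotype matrix is an invented placeholder, not data from the studied populations.

```python
import numpy as np

# Placeholder genotype matrix for one population at one microsatellite locus:
# each row is a diploid individual, the two columns are its two allele sizes (bp).
genotypes = np.array([
    [212, 216], [212, 212], [216, 220], [212, 216],
    [220, 220], [212, 220], [216, 216], [212, 216],
])

alleles, counts = np.unique(genotypes, return_counts=True)
freqs = counts / counts.sum()

# Observed heterozygosity: fraction of individuals carrying two different alleles.
Ho = np.mean(genotypes[:, 0] != genotypes[:, 1])

# Expected heterozygosity (Nei's gene diversity) with the small-sample correction 2n/(2n-1).
n = genotypes.shape[0]
He = (2 * n / (2 * n - 1)) * (1.0 - np.sum(freqs ** 2))

# Wright's fixation index F_IS = 1 - Ho / He.
Fis = 1.0 - Ho / He
print(f"Ho = {Ho:.3f}, He = {He:.3f}, F_IS = {Fis:.3f}")
```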
To observe the genetic differentiation between populations, we calculated pairwise F ST (Weir & Cockerham, 1996) using HIERFSTAT (Goudet, 2005) in R software (R Development Core Team 2018), which is based on allelic frequencies. Genetic structure was also investigated using a Bayesian model based on the Markov chain Monte Carlo (MCMC) clustering method and implemented using the STRUCTURE program, v 2.2.3 (Pritchard, Stephens, & Donnelly, 2000). We performed five independent runs; each had 1,000,000 cycles of burn-in and 3,000,000 cycles of MCMC, with the number of clusters tested (K) ranking from 2 to 50. This analysis assumes an admixture ancestry model based on allele frequency. Results were analysed with pophelper (Francis, 2017), implemented in R. To assess the optimal values of K, we used the ΔK method described by Evanno, Regnaut, & Goudet (2005), as well as a visual comparison between the different K values tested. To supplement the output from STRUCTURE, we also analysed the data sets trough a discriminant analysis of principal components (DAPC) (Jombart, Devillard, & Balloux, 2010), using the ADEGENET package for R (Jombart, 2008). To assess the optimal values of K, we used the Bayesian information criterion (Bickford et al., 2007). We checked the stability of individual group membership probabilities and the correct number of principal components (PCs) using the α-score (Jombart et al., 2010) and cross-verification. | Phylogenetic analysis To investigate the relationship between populations, we conducted a phylogenetic analysis using Poppr (Kamvar et al., 2014) and phytools (Revell, 2012) in the R package. We used Nei dissimilarity (Nei, 1978) and built trees with the unweighted pair group method with arithmetic mean (UPGMA) (Sokal & Michener, 1958). We calculated a node statistical support, running 999 bootstraps of resampling loci. | Geographic distances and the impact of climatic and soil conditions We tested the isolation-by-distances (IBD) hypothesis following instruction of Rousset (1997) and using the Vegan package (Oksanen, Blanchet, Kindt, Legendre, & O'Hara, 2018). The statistical significance of the correlation between the genetic distances (F ST /(1 − F ST )) (Slatkin, 1995) and the natural logarithm of geographic distances were estimated with a Mantel test (10,000 permutations) using Vegan package (Oksanen et al., 2018). We also conducted a distance-based redundancy analysis (dbRDA; Legendre & Anderson, 1999) to correlate F ST values with abiotic and climatic variables. We chose six variables which, from our knowledge, could impact the distribution of PCN. These variables could be geographic (latitude and longitude), climatic (mean annual temperature, mean annual precipitation) and pedologic (content in organic carbon, cation exchange capacity), or they could reflect plant diversity (Shannon index). These variables were extracted from WorldClim v2 (Hijmans, Cameron, Parra, Jones, & Jarvis, 2005), SoilGrid (Hengl et al., 2017) and EarthEnv (http://www.earth env. org) based on population GPS coordinates. 
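As an illustration of the isolation-by-distance test described above (before turning to the dbRDA itself), the following sketch applies Rousset's F_ST/(1 − F_ST) transform and a Mantel permutation test to two distance matrices. It is a schematic Python stand-in for the Vegan-based analysis, and the matrices are random placeholders rather than the actual pairwise F_ST and geographic distances.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pop = 10  # placeholder number of populations

# Placeholder symmetric matrices: pairwise F_ST and geographic distances (km).
fst = rng.uniform(0.05, 0.4, size=(n_pop, n_pop))
fst = (fst + fst.T) / 2; np.fill_diagonal(fst, 0)
geo = rng.uniform(10, 1500, size=(n_pop, n_pop))
geo = (geo + geo.T) / 2; np.fill_diagonal(geo, 0)

iu = np.triu_indices(n_pop, k=1)

# Rousset's linearised genetic distance and natural-log geographic distance.
gen = np.zeros_like(fst); gen[iu] = fst[iu] / (1.0 - fst[iu]); gen += gen.T
lgeo = np.zeros_like(geo); lgeo[iu] = np.log(geo[iu]); lgeo += lgeo.T

def mantel(a, b, n_perm=10_000):
    """Mantel test: Pearson r between upper triangles, p-value by permuting matrix b."""
    obs = np.corrcoef(a[iu], b[iu])[0, 1]
    hits = 0
    for _ in range(n_perm):
        p = rng.permutation(n_pop)
        if np.corrcoef(a[iu], b[np.ix_(p, p)][iu])[0, 1] >= obs:
            hits += 1
    return obs, (hits + 1) / (n_perm + 1)

r, pval = mantel(gen, lgeo)
print(f"Mantel r = {r:.3f}, permutation p = {pval:.4f}")
```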
We performed the dbRDA using the capscale function implemented in the Vegan package (Oksanen et al., 2018) in R, and we finally chose the best model using stepwise model selection based on an adjusted R² and p-value (ordistep function, Vegan package), starting from a saturated model including all the variables cited above plus latitude and longitude, to take the geographic position into account.

| Cathepsin analysis results

Amplification of the cathepsin marker on 119 populations revealed either one or two bands of different sizes. Overall, four amplification products of different sizes were observed, with lengths of approximately 690, 705, 800 and 845 bp (Table S1). In PCN populations, maximum diversity was observed in the south of our study area, around Lake Titicaca, where three out of the four sizes of amplification products were found. The 800 bp profile was mostly found in the north of our study area, and usually in association with the 690 bp amplification corresponding to G. pallida. Only one population (196) was found with solely the 800 bp amplification product. This 800 bp product size was subsequently observed in other Peruvian populations from clades IV and V (Table S1). The 690 bp product was the most commonly observed in our data set and was also found in association with all the other amplification products in mixed populations. Since the PCRs were conducted on a pool of eight juveniles, we cannot determine whether the two amplification products observed in some populations appeared because of hybrids or because of the heterogeneity of the pool. By contrast, the 845 and 800 bp amplification products were each shared by less than 16% of the populations. Using additional PCN populations from South America, we were able to identify the 845 bp amplification product in two PCN populations sampled in the south of Chile, in Patagonia (TDF and CAS). Finally, we noted a clear geographic distribution of the cathepsin L gene diversity, with the 800 bp amplification product located in the north. Going south, we then found mostly the 690 bp allele, together with additional amplification products (i.e. 705 and 845 bp).

| Microsatellite missing data and genetic differentiation

Among all populations, few missing data were observed except for populations 309, 284 and 264, which were not amplified by the loci Gp112, Gp135 and Gp145; only 60% of individuals from population 224 were amplified by these loci. Importantly, these populations were those showing the 845 bp cathepsin L amplification product. We also examined the two populations from south Chile (TDF and CAS) and found that they also did not amplify these three loci. These Chilean populations were therefore included in some of the subsequent analyses, and the group formed by populations TDF, CAS, 309, 284, 264 and 224 will from now on be referred to as the "pallida Chilean type."

| Observed structuration using the "10locix84pop" data set

The best model in STRUCTURE separated genetic variation into five clusters (Figure 1a), with the ΔK criterion clearly favouring this number of clusters (Figure S3). Once again, populations from the south of Chile were grouped with populations 309, 284, 264 and 224. The exact same structuration into five groups was observed with both STRUCTURE (Figure 1a) and DAPC (Figure 2a). We also found perfect congruence with the cathepsin L genotyping results. Interestingly, when there was a switch of the cathepsin allele, there was also a switch of the group determined by the STRUCTURE analysis (Figure 1).
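Because the choice of K above relies on the Evanno ΔK statistic, a compact sketch of that calculation may be useful. The snippet below is an illustrative Python version of the ΔK computation (ΔK = mean|L''(K)| / sd(L(K))) applied to hypothetical STRUCTURE log-likelihoods, not output from the actual runs.

```python
import numpy as np

# Hypothetical mean log-likelihoods lnP(D) from five replicate STRUCTURE runs per K.
lnp = {
    2: [-52000, -52010, -51990, -52005, -51995],
    3: [-49500, -49520, -49480, -49510, -49490],
    4: [-48200, -48190, -48230, -48210, -48205],
    5: [-47400, -47410, -47395, -47405, -47398],   # plateau starts here
    6: [-47380, -47500, -47300, -47450, -47350],
    7: [-47370, -47600, -47250, -47500, -47330],
}

ks = sorted(lnp)
mean_L = np.array([np.mean(lnp[k]) for k in ks])
sd_L = np.array([np.std(lnp[k], ddof=1) for k in ks])

# Evanno et al. (2005): L'(K) = L(K) - L(K-1); L''(K) = |L'(K+1) - L'(K)|;
# deltaK = |L''(K)| / sd(L(K)); only defined for interior K values.
for i in range(1, len(ks) - 1):
    second_diff = abs((mean_L[i + 1] - mean_L[i]) - (mean_L[i] - mean_L[i - 1]))
    print(f"K = {ks[i]}: deltaK = {second_diff / sd_L[i]:.1f}")
```

With these placeholder likelihoods the ΔK value peaks sharply at the K where the likelihood curve plateaus, which is how the optimal number of clusters is read off in practice.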
Overall, individuals and populations were both clearly assigned to only one group, except for populations 233 and 234, which were assigned to both group 2 and group 3. | Observed structuration using the "13locix78pop" data set This data set of 78 populations, all from Peru and including all loci, showed an optimal structuration at K = 5 (Figures 1b, and S4). The (Table S1), except for the allelic richness, which is lower for group 3. Some particularly interesting cases are observed from STRUCTURE outputs. Most individuals from populations 233 and 234 are assigned half to group 2 and half to group 3 (Figure 3), indicating some admixture between these two groups. Considering geographical data, these populations are located exactly at the edge of these two groups on a plateau delimited by two valleys. For comparison, another case is provided in Figure 3 with populations 191, 208, 206, 210, 215 and 229 all located in the same geographic area. In this case, we did not observe any admixture between groups 2 and 4. We also observed that there are major geographic barriers between all groups, except between groups 1a and 1b ( Figure S5). | Phylogenetic approach We chose to incorporate as the out-group for this analysis, individuals from G. mexicana, the closest relative to G. pallida, and individuals from population 224 belonging to the "pallida Chilean type." After generating trees from different genetic distances, we chose to present a UPGMA dendrogram built with Nei distances, the most supported tree (Figure 3). Population 224 seems to be as different from the other Peruvian populations as G. mexicana. A high bootstrap value supports this result, as well as the first phylogenetic partition between branches corresponding to groups 1a, 1b and 4 on one side, and 2 and 3 on the other. Bootstrap values are also high for the divergence between groups 1a and 1b. The other nodes are less well supported; nevertheless, we can note that the groups identified by the phylogenetic analysis match those identified by STRUCTURE or DAPC analyses. | Geographic distances and the impact of climatic and soil conditions We found a highly significant IBD pattern when considering either, all populations or all populations without the ones from group 4 F I G U R E 1 STRUCTURE clustering analysis obtained with the two different data sets. (a) Results obtained with the "10locix84pop" data set where three loci were removed and all populations were taken into account (b) Results obtained with the "13locix78pop" data sets where all loci were taken into account and "pallida Chilean type" populations were removed; the y-axis shows the assignation rate of each individual displayed on the x-axis. White vertical lines show the transition between different cathepsin alleles indicated on the top of each STRUCTURE graph | D ISCUSS I ON The first aim of this study was to explore more precisely the genetic diversity within G. pallida in the cradle of the species. We found, as previously shown, a clear genetic structure in this area, linked to the geographic but also pedoclimatic conditions. We identified two groups (i.e. the "pallida Chilean type" and group 4) that may be cryptic species within the G. pallida species complex. Furthermore, we recorded the presence of G. rostochiensis in the studied area and the absence of G. ellingtonae. All populations of G. rostochiensis were found in the south of Peru, which is consistent with previous observations (Evans et al., 1975;Franco, 1977). 
The absence | Alignment with previous results The microsatellite genotyping results showed that our populations are structured in a maximum of six groups (1a, 1b, 2, 3, 4 and "pallida Chilean type"), which are geographically distributed from south to north Peru. In parallel, the cathepsin L marker identified three groups, the first corresponding to groups 1a, 1b, 2 and 3, the second to group 4, and the third to the "pallida Chilean type." Compared with the conclusions reached by Picard et al. (2007), groups 1b and 4 were detected for the first time here. Our groups 1a and 1b correspond in fact to clade I from Picard et al. (2007). However, only one population belonging to group 1b (i.e. pop 320) was included in the data set used by Picard et al. (2007). Located northward, group 2 corresponds to clade II, and group 3 to clade III described in Picard et al. (2007). The "pallida Chilean type" was detected by all analyses and with two types of markers; however, it was not identified by Picard et al. (2007) who did not consider any of these populations in their study. Similarly, Subbotin et al. (2011) carried out a phylogenetic analysis of South American populations of G. pallida using ITS-rRNA sequences. Subclade 1 in Subbotin et al. (2011) seems to correspond to groups 1a and 1b identified here. They also suggested that three of their subclades may represent putative undescribed species. First, subclade 3 that contains populations also studied here and that was identified as belonging to the "pallida Chilean type." Second, subclade 5 which contains northern Peruvian populations that were not studied here, but that are known to show the same | A possible differentiation pattern With the exception of the ambiguous delimitation between groups Clearly, if they were not, and if hybrid depletion was in place, the hybrid proportion in the population would be lower than the proportion of individuals in each group. As a result, the hypothesis of viable and fertile hybrids is retained here. Comparatively, we observed no similar assignation between groups 3 and 4, while they are geographically close. Only very few individuals can be considered hybrids. This could be due to hybrid depletion and lead to high differentiation between these two groups, comparable to the distinction between G. pallida and G. rostochiensis, species for which hybrids are observed but are not fertile (Mugniéry, Bossis, & Pierre, 1992). Furthermore, group 4 presents a unique cathepsin amplification product of 800 bp. This leads us to suspect that group 4 is more genetically differentiated than the others. This view is also supported by the IBD patterns observed as the correlation coefficient is lower when population from group 4 is included in the analysis. This suggests that group 4 did not follow exactly the same evolution compared with the other groups. This hypothesis is also supported by the outputs from the phylogenetic analysis. The question now is to determine whether this differentiation observed here has engendered speciation. Genetic studies are helpful when considering cryptic species, but it is too speculative to conclude about species delimitation without information from other (Padial, Miralles, Riva, & Vences, 2010;Wheeler, 2004;Will & Rubinoff, 2004). For example, crossing test has already provided clarification for the Globodera genus (Mugniéry et al., 1992) and could be carried out in this case. 
| Climatic and soil conditions potentially explain genetic differentiation Beside the impact of geographic distance and relief, our results also showed the importance of climatic conditions in the genetic differentiation observed. The dbRDA analysis separated mostly group 4 from group 2 by precipitation conditions, and group 1 from the others by temperature. Group 4 evolved in drier conditions, while group 1 is located in a more temperate zone around Lake Titicaca. We could imagine that these particular conditions lead to some genomic adaptations, and thereby to genetic differentiation. We note that the variable plant diversity seems to have no effect on the differentiation between groups. This finding should be considered with caution, because in parasite species, hosts are known to be a driver of speciation (de Vienne et al., 2013). The variable chosen here to reflect host plant diversity (Shannon index) could appear to be approximative because it is built to reflect habitat heterogeneity but not intra-taxon heterogeneity, such as in Solanaceae, which is the host of G. pallida. However, we could imagine that the climatic conditions (AMT and AMP) affect the diversity and distribution of Solanaceae and thereby the differentiation of G. pallida. Moreover, the available climatic and soil data used here are from the last 30 years, while the differentiation is probably older. Therefore, we have to hypothesize that different climatic conditions were at least conserved or even higher at the time the genetic differentiation of these groups occurred. | The "pallida Chilean type" as a candidate for a new species Populations forming the "pallida Chilean type" were clearly identified by both DAPC and STRUCTURE analysis. This group also appears to be the most genetically distant in our data set, and several results support the idea that the divergence is as high as that between dif- (Gamel, Letort, Fouville, Folcher, & Grenier, 2017) are not able to distinguish these "pallida Chilean type" populations from the other G. pallida populations, while they are able to distinguish G. mexicana from G. pallida. Geographically, the "pallida Chilean type" populations are found in the south of our study area, in sympatry with populations belonging to groups 1a and 1b. One surprising result is that this "pallida Chilean type" has never been identified in Europe despite the fact that (a) both G. rostochiensis and G. pallida were introduced from the same geographic area (Boucher et al., 2013;Plantard et al., 2008), and (b) the cathepsin genotyping results showed that the "pallida Chilean type" is found even more often in sympatry with G. pallida (17 out of 26 co-occurrences) than G. rostochiensis is found in sympatry with G. pallida (9 out of 26 co-occurrences). It could therefore be hypothesized that this group was in fact also introduced into Europe, but failed to establish itself and survive in European conditions. This once again suggests strong differences with G. pallida also in terms of fitness and life-history traits that would rather support the view of different species. Clearly, further investigations are required before being able to conclude on a novel PCN species. In a perspective of integrative taxonomy (Dayrat, 2005;Padial et al., 2010), the morphologic and behavioural fields should now also be explored. The evolutionary history of the "pallida Chilean type" remains mostly unknown. At this time, populations belonging to this group have only been reported in Chile (Patagonia) and south Peru. 
The genetic proximity observed between these populations, which are more than 6,000 km apart, is quite surprising considering the divergence observed for G. pallida in Peru. The best explanation is therefore that, in contrast to the G. pallida phylogeographic history that is mostly due to Andes orogenesis, the phylogeographic history of the "pallida Chilean type" is rather the result of anthropic dispersion between these two countries. Considering the low levels of heterozygosity and allelic richness observed in the CAS and TDF populations, compared with the 224, 264, 284 and 309 populations, it can even be assumed that these populations were introduced to Chile from south Peru. These two countries were at war until 1884 (end of the War of the Pacific), with battles that were fought in the Pacific Ocean. As has been suggested for other PCN introductions associated with the World War (Brodie & Mai, 1989), it is possible that the War of the Pacific was the event that allowed for such long-distance dispersion.

The possible presence of a species complex may be important in terms of economic impact on crop harvests and must be taken into account if a quarantine species is involved. This is particularly true for G. pallida, which is listed as a quarantine species because of the significant damage that it could produce in potato crops. A description of the potential new species would imply a revision of its taxonomy and would expand G. pallida sensu stricto to G. pallida sensu lato. The groups identified as potential new species appear to be absent from Europe, but they could be introduced. The current regulations and controls on G. pallida, that is, the ban on the introduction into Europe of soils originating from non-EU countries, are able to prevent this risk of new introduction. Nonetheless, reinforced controls applied to at-risk regions and maintaining G. pallida on the list of quarantine nematode species appear important, if not essential.

ACKNOWLEDGEMENTS
We wish to thank all the colleagues who helped with the material multiplication or during the microsatellite and dbRDA work, either by sharing some scripts or through fruitful discussions on the results: Lionel Renault, Lucie Mieuzet, Nathan Garcia and Sylvain Fournet. We gratefully acknowledge GLOBAL (Globodera Alliance) for useful discussions and access to some nematode populations, and would like to thank the INRA Plant Health Department and the ANSES Plant Health Laboratory for their support to the partnership agreement NemAlliance in Brittany.

CONFLICT OF INTEREST
The authors have no competing interests to declare.

DATA AVAILABILITY STATEMENT
The microsatellite genotyping data sets are available at Data Inra
6,654.2
2019-12-03T00:00:00.000
[ "Environmental Science", "Biology" ]
Adsorption Studies on Red Mud: Removal and Recovery of Cr (VI) from Wastewater The addition of any organic, inorganic, biological, or radiological substance to water that changes its physical and chemical properties is known as water pollution. These pollutants come into contact with water through rapid and unplanned industrial progress, overpopulation, discharge of sewage into water bodies, etc., and such water is unfit for use and very harmful to public health, animals, plants, and aquatic life. Water pollution due to heavy metals and organic pollutants has been a major concern for a long time. Heavy metals are non-biodegradable and cause many dreadful disorders in the long run; on the other hand, many organic pollutants are carcinogenic. Sources, toxicity, and hazardous effects of some important heavy metals are also summarized. Wastewater treatment stages include preliminary treatment, primary treatment, secondary treatment, and tertiary treatment. The main objective of preliminary treatment and primary treatment is the removal of gross solids such as large floating and suspended solid matter, grit, oil, and grease if present in considerable quantities. Secondary treatment is a biological process involving bacteria and other microorganisms removing the dissolved and colloidal organic matter present in the wastewater. Tertiary treatment is the final treatment, meant for "polishing" the effluent from the secondary treatment processes to improve the quality further.

Introduction
The unthoughtful race for industrial development and the unlimited exploitation of natural resources have adversely affected all forms of life in the biosphere and threatened the survival of all living organisms. Air has become dangerous to breathe, water is unfit to drink, soil is unsuitable for crops, and aquatic ecosystems have become hostile to marine creatures. Industrialization has given us radioactivity, dangerous effluents, poisonous gases, and heavy metals that degrade our environment; all of these are by-products of industrialization in modern civilization. The term pollution is defined as an undesirable change in the chemical, physical, and biological characteristics of air, water and soil which creates a potential health hazard to living organisms and harmfully affects life.

Classification of pollution
Environmental pollution can be classified as air pollution, water pollution, soil pollution, radioactive pollution, noise pollution, and thermal pollution.
Water pollution: The addition of any organic, inorganic, biological, or radiological substance to water that changes its physical and chemical properties is known as water pollution. These pollutants come into contact with water through rapid and unplanned industrial progress, overpopulation, discharge of sewage into water bodies, etc., and such water is unfit for use and very harmful to public health, animals, plants, and aquatic life. Water pollutants can be classified into four major categories: organic pollutants, inorganic pollutants, suspended solids and sediments, and radioactive materials.
Heavy metals pollution: Heavy metals can be defined in many ways, based on their physical, chemical, and biological properties. Metals with a specific gravity of about 5 g/cm3 or greater are generally defined as heavy metals, and these include metals from groups IIA, IIIB, IVB, VB, and VIB of the periodic table.
Environmental effects of heavy metals: From the environmental point of view, the metals of greatest concern are those which, either by their presence or their accumulation, can have a toxic or inhibitory effect on living things. The most efficient methods for heavy metal removal are ion exchange and adsorption. In these methods, various types of adsorbents are used with simple and convenient procedures to achieve high removal efficiency. However, ion exchange is expensive, and the use of this technique is limited because of its selectivity for specified metal ions in comparison with other metal ions. In adsorption, a suitable adsorbent is used for the removal of heavy metals from wastewater; it is considered the most effective process because of its high efficiency. Adsorption also produces high-quality effluent, and adsorbents can be regenerated using suitable desorption processes. Adsorption is one of the most effective processes of advanced wastewater treatment, which industries employ to reduce the hazardous metals present in effluents.

Equations
ADSORPTION ISOTHERMS: The amount of adsorbate adsorbed per amount of adsorbent is determined as a function of concentration at constant temperature. The resulting function is called the adsorption isotherm. Adsorption experiments are carried out to develop adsorption isotherms in which the amount of adsorbent is changed while the initial concentration and volume of the adsorbate are kept constant. The adsorbent (i.e., solid) phase concentration after equilibrium is computed using the equation
qe = (Co − Ce) V / m
where qe = adsorbent phase concentration after equilibrium, mg adsorbate/g adsorbent; Co = initial concentration of adsorbate, mg/L; Ce = final equilibrium concentration of adsorbate after adsorption has occurred, mg/L; V = volume of liquid in the reactor, L; m = mass of adsorbent, g. Commonly used adsorption isotherms are the Langmuir adsorption isotherm and the Freundlich adsorption isotherm.

Langmuir Adsorption Isotherm: The Langmuir adsorption isotherm is given as
x/m = a b Ce / (1 + b Ce)
where x/m = mass of adsorbate adsorbed per unit mass of adsorbent, mg adsorbate/g adsorbent; a, b = empirical constants; Ce = equilibrium concentration of adsorbate in solution after adsorption, mg/L. The assumptions in the Langmuir isotherm are: (1) maximum adsorption of the adsorbate occurs as a homogeneous saturated monolayer on the adsorbent surface; and (2) the energy of adsorption is constant. Equilibrium in the adsorption process occurs when all the adsorption sites of the adsorbent are saturated with adsorbate (atoms, molecules, ions) or when the rate of adsorption of the adsorbate molecules becomes equal to the rate of desorption of the adsorbate molecules from the surface of the adsorbent.

Freundlich Adsorption Isotherm: The Freundlich adsorption isotherm is an empirical relation and is given as
x/m = Kf Ce^(1/n)

Red mud is relatively toxic and always poses a serious pollution hazard; it also has significant alkaline properties. Depending upon the number of mud-washing stages, the water associated with the mud may contain 3-10 g dm-3 alkalinity (expressed as Na2CO3). In addition, the amount produced is significant: around 1-2 tons of residue are generated per ton of alumina produced. It is estimated that in Greece around 10,000 metric tonnes of red mud are produced annually. The fineness of the solid particles and the large quantity of residue make further disposal or utilization very difficult.
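As a worked illustration of how the isotherm constants are obtained from batch data, the sketch below fits the Langmuir and Freundlich forms given above to equilibrium points. The (Ce, x/m) values are invented placeholders, not measurements from this study, and scipy's curve_fit is used as one reasonable fitting choice among several.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder batch-equilibrium data: Ce in mg/L, q = x/m in mg adsorbate / g adsorbent.
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
q  = np.array([1.8, 3.6, 5.5, 7.6, 9.0, 9.8])

def langmuir(Ce, a, b):
    # x/m = a*b*Ce / (1 + b*Ce); a is the monolayer capacity, b the affinity constant.
    return a * b * Ce / (1.0 + b * Ce)

def freundlich(Ce, Kf, n):
    # x/m = Kf * Ce**(1/n)
    return Kf * Ce ** (1.0 / n)

pL, _ = curve_fit(langmuir, Ce, q, p0=[10, 0.1])
pF, _ = curve_fit(freundlich, Ce, q, p0=[1, 2])
print(f"Langmuir:   a = {pL[0]:.2f} mg/g, b = {pL[1]:.3f} L/mg")
print(f"Freundlich: Kf = {pF[0]:.2f}, n = {pF[1]:.2f}")

# Example: predicted uptake at Ce = 15 mg/L, and the mass balance qe = (Co - Ce) V / m
# rearranged to back-calculate Co for a hypothetical batch (V = 0.1 L, m = 0.5 g).
qe = langmuir(15.0, *pL)
Co = 15.0 + qe * 0.5 / 0.1
print(f"Predicted qe at Ce = 15 mg/L: {qe:.2f} mg/g (consistent with Co = {Co:.1f} mg/L)")
```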
Red mud is composed mainly of non-toxic fine particles of silica, aluminum, iron, calcium, and titanium oxides along with some other minor components.

The objective of the present thesis is to evaluate the performance of red mud and fly ash as low-cost adsorbents for the removal of lead from wastewater, and it includes the following aspects:
• A study of the use of fly ash and red mud to remove heavy metal contaminants from industrial wastes.
• Batch study analysis and isotherm model experiments to assess the adsorption potential of fly ash and red mud for lead.
• Elution of metal ions from the adsorbents after adsorption.

I am highly privileged to thank my guide, Dr. Ramprasad Naik Desavathu, Professor in the Department of Civil Engineering, G.I.E.T University, Gunupur, for his valuable guidance, significant suggestions, and help in every aspect of accomplishing the project work. His novel association of ideas, encouragement, appreciation, and intellectual zeal motivated us to bring this work to a successful conclusion. I would also like to take this opportunity to thank our Dean Academics, Dr. A.V.N.L. Sharma, and Head of Department, Prof. Ashish Kumar Samal, for permitting me to do this project and providing me with the laboratory facilities for it. I am also thankful to all the esteemed faculty of the Department of Civil Engineering and the laboratory technicians of our college. Finally, I am grateful to our families for their constant encouragement and to my friends who helped in one way or another in completing the project.

2.2 TECHNOLOGIES FOR TREATMENT OF WASTEWATER CONTAINING HEAVY METALS: Metals like cadmium, chromium, copper, zinc, nickel, lead, and mercury enter the environment through industrial waste. Common treatment methods include reverse osmosis, electrochemical treatment, evaporative recovery, ion exchange, chemical precipitation, membrane filtration, chemical oxidation-reduction, electrodialysis, ultrafiltration, and solvent extraction. The applicability and use of all these common techniques are limited by their low efficiency, critical operating parameters, and production of secondary sludge.

The project work entitled "Adsorption studies on Red Mud: removal and recovery of Cr (VI) from wastewater", carried out under the guidance of Dr. Ramprasad Naik Desavathu in the Department of Civil Engineering, is a bonafide work carried out by us, and the results embodied in this report have not been reproduced or copied from any other source and have not been submitted to any other university or institution for the award of any degree. We also thank our co-guide, Dr. Rakesh Roshan Dash, Professor, Department of Civil Engineering, VSS University of Technology, Burla, for his valuable guidance, significant suggestions, and help in every aspect of accomplishing the project work.
2,035.6
2024-01-20T00:00:00.000
[ "Environmental Science", "Chemistry" ]
Evolution of Entanglement Wedge Cross Section Following a Global Quench We study the evolution of entanglement wedge cross section (EWCS) in the Vaidya geometry describing a thin shell of null matter collapsing into the AdS vacuum to form a black brane. In the holographic context, it is proposed that this quantity is dual to different information measures including entanglement of purification, reflected entropy, odd entropy and logarithmic negativity. In $2+1$ dimensions, we present a combination of numerical and analytic results on the evolution and scaling of EWCS for strip shaped boundary subregions after a thermal quench. In the limit of large subregions, we find that the time evolution of EWCS is characterized by three different scaling regimes: an early time quadratic growth, an intermediate linear growth and a late time saturation. Further, in $3+1$ dimensions, we examine the scaling behavior by considering thermal and electromagnetic quenches. In the case of a thermal quench, our numerical analysis supply results similar to observations made for the lower dimension. On the other hand, for electromagnetic quenches, an interesting feature is a departure from the linear behavior of the evolution to logarithmic growth. Introduction Non-equilibrium dynamics and thermalization of a strongly coupled system is a long-standing problem in many areas of physics. In the holographic context, equilibration from a highly excited initial state is expected to be dual to black hole formation under a gravitational collapse. So in this scenario, issues about the black hole physics are tightly connected to the physics of thermalization in the dual strongly coupled system [1] (see [2] for review). A simple setting that shows the general features of equilibration in a far-from-equilibrium system is a global quench. In this setup, one considers the creation of a homogeneous and isotropic highly excited state from the vacuum state by an abrupt change in the Hamiltonian of a closed quantum system. It is expected that this excited state evolves towards the equilibrium and shows some aspects of a thermalization process [3,4] (refer to [5][6][7] for review). Remarkably, the holographic dual of this dynamics is simply described by the Vaidya geometry that shows the collapse of a thin shell of null matter and black hole formation which is an exact solution to Einstein's theory of gravity [1,[8][9][10]. Its worth to mention that the Vaidya geometry is not the only possible holographic model describing the quantum quench, in particular, it can be modeled namely by the time evolution of black hole interior [11]. One may study the dynamics of a globally quenched system by evaluating the correlation between the subsystems of a given system [12]. Among other things, it is known that the entanglement entropy (EE) is a useful probe to capture this dynamics [4]. EE measures the quantum correlations between a subsystem A and its complement A c and is defined as von Neuman entropy of a reduced density matrix ρ A = − tr A c (ρ) as S A = − tr(ρ A ln ρ A ). (1.1) As the non-equilibrium system evolves towards the equilibrium, the EE grows with time linearly and saturates at the equilibrium value which is equal to thermal entropy. This behavior of EE growth has a simple description in terms of propagating entangled pairs of quasi-particles [4,5]. In quantum field theories with holographic dual, there is a very interesting prescription for computing EE. 
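Before turning to the holographic prescription, the definition of EE in eq. (1.1) can be made concrete on a two-qubit toy state. The snippet below only illustrates the reduced density matrix and the von Neumann entropy for a maximally entangled pair; it has nothing to do with the holographic computation itself.

```python
import numpy as np
from numpy.linalg import eigvalsh

# Two-qubit Bell state |psi> = (|00> + |11>)/sqrt(2), written as a coefficient matrix
# psi[a, b] = <a b|psi>; the reduced density matrix is rho_A = tr_{A^c}(rho) = psi @ psi^dagger.
psi = np.array([[1.0, 0.0],
                [0.0, 1.0]]) / np.sqrt(2)

rho_A = psi @ psi.conj().T

def von_neumann_entropy(rho):
    """S = -tr(rho ln rho), dropping numerically zero eigenvalues."""
    evals = eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))

print(von_neumann_entropy(rho_A), np.log(2))   # both equal ln 2 for a maximally entangled pair
```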
According to Ryu and Takayanagi (RT) seminal proposal [13], EE corresponding to a spatial subregion A in the CFT is given by the area of a codimension-2 minimal surface Γ A where the bulk minimal RT surface Γ A is homologous to the subregion A such that its boundary anchored to the boundary of A (∂A = ∂Γ A ). The authors of [9] generalized this prescription to the time-dependent backgrounds by assuming Γ A is an extremal surface (HRT surface) subject to the same boundary condition. It is worthwhile to mention that both proposals have been derived in the context of AdS/CFT in [14,15]. In general, it shows a quadratic growth in the pre-local-equilibration and it follows by a linear growth regime in the past-local-equilibration. After that and before saturation, the system evolves to memory loss regimes in which the EE forgets the size and shape of the entangling region. This behavior may suggest a simple geometric interpretation for the growth of entanglement based on the propagation of an entanglement wave with a sharp wavefront inward the entangling region from the entangling boundary. This model is dubbed as an entanglement tsunami [20,21]. As mentioned, one way to study the equilibration process is the evaluation of the correlation between subsystems. For a pure state, EE measures the quantum part of the correlations between a subsystem and its complement. However, to analyze the correlation of two disjoint intervals, A and B, EE is not a convenient quantity. This is because EE is a measure of the (quantum) correlation when the total system is pure (subsystem and its complement) while for two disjoint regions A and B, ρ A∪B is not pure. In this case, one useful quantity is mutual information (MI) that measures the total correlation between two subsystems A and B As MI defined in terms of EE, one may use the HEE proposal to study MI and its time evolution in the holographic setups [28]. Further generalizations of MI to systems consisting of more (disjoint) subsystems, e.g. tripartite and n-partite information is studied in several directions in [29,30]. In the framework of holographic theories, EE and MI are related to (H)RT surfaces. Recently, a generalization of (H)RT surface which is called the entanglement wedge cross section (EWCS) has attracted a lot of attention. This geometrical quantity is defined as [31,32] E W (ρ AB ) = area(Σ min AB ) 4G N , (1.4) where Σ min AB is the minimal cross-sectional area of the entanglement wedge [33][34][35] corresponding to the boundary region A ∪ B. One may argue that EWCS takes into account the correlation between the boundary subsystems A and B even for a mixed state [31,32,36]. Therefore, it should be useful to probe the equilibration in a holographic system. The main goal of this paper is to study the dynamics of E W in the Vaidya background as a dual description of a global quench in a CFT. There are several proposals for the CFT counterpart of E W . Initially, it was introduced as a possible dual of the entanglement of purification. This conjecture was based on some information theoretic properties and intuition from holographic tensor networks [31,32]. However, it is turned out that several other correlation measures such as reflected entropy [37], logarithmic negativity [38,39] and odd entropy [40] also relate to E W . Interestingly, all of these measures are useful for analyzing the correlation between A and B where ρ A∪B is a mixed state. See [41][42][43][44][45][46][47] for related progress. 
In the following, we review each one of them briefly. The entanglement of purification is a measure of classical and quantum correlations between two subsystems [48]. To define entanglement of purification let us assume that ρ AB is (a mixed) density matrix for A ∪ B in the total Hilbert space H = H A ⊗ H B . By adding some auxiliary degrees of freedom to H, it is possible to construct a pure state |ψ ψ| such that ρ AB = tr A B (|ψ ψ|) and |ψ ∈ H AA ⊗H BB . Although this purification is not unique, one may consider a specific purification that minimizes the EE between A and its auxiliary partner A . Therefore, the entanglement of purification is defined as Clearly, the above definition reduces to EE when ρ AB is pure. Note that in general, this quantity should be minimized over all possible |ψ so it is not an easy task to compute it in an arbitrary quantum theory but under some assumptions, one may investigate it in certain situations [32,49]. Moreover, for holographic theories, it has been conjectured that entanglement of purification is dual to the area of entanglement wedge cross section E P = E W . In this case, minimization is restricted to states which have holographic dual [31,32]. Recently, reflected entropy as a new measure of the correlation between two disjoint regions has been highlighted. To define this measure, note that one can canonically purify the mixed state One may note that similar to the entanglement of purification, the reflected entropy also reduces to EE for pure states. It is possible to calculate this mesuare by using the replica method and some holographic argument shows that S R = 2E W [37]. The logarithmic negativity is another quantity that captures the correlation between A and B but unlike mutual information, the entanglement of purification and reflected entropy, it is monotonic under local operations and classical communication (LOCC) and so is appropriate to capture quantum correlations for mixed states. It is defined as E(A, B) = log tr ρ T B AB , where ρ T B AB represents the partial transpose of ρ AB with respect to B [50]. Initially, based on the holographic quantum error-correcting code, authors of [38] conjectured relation between the logarithmic negativity and the area of a brane with tension in the entanglement wedge. For the vacuum state and ball-shaped subregions, this reduces to E = χ d E W where χ d is a dimensional dependent constant. Remarkably, this relation has been derived by noting the connection between logarithmic negativity and Rényi reflected entropy and using the holographic prescription for computing Rényi entropy [39]. Finally, it is also worthwhile mentioned that there is also a new measure of correlations for mixed states which is called odd entropy [40] Based on holographic replica trick, odd entropy is related to the E W (A, B) and HEE between A and B as follows One may note that E W (A, B) vanishes either for product state ρ AB = ρ A ⊗ ρ B or pure state ρ AB = |ψ ψ|, so S o (A, B) reduces to the von Neumann entropy in the former and to the EE in the latter. Refer to [51] for recent study on this topic. As mentioned, the area of EWCS should be a good geometrical quantity to capture correlations of mixed states in the dual quantum field theory and so it should be a useful tool to analyze the equilibration process after a quantum quench. Some authors have investigated aspects of this scenario mainly in two-dimensional CFTs. 
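To make the mixed-state measures above more tangible, the sketch below evaluates the logarithmic negativity E = log ||ρ^{T_B}||_1 for a simple two-qubit family of states. This is purely a finite-dimensional toy illustration of the partial-transpose definition quoted above, not a holographic computation; the Werner-type states are placeholders chosen for convenience.

```python
import numpy as np

def partial_transpose_B(rho, dA=2, dB=2):
    """Partial transpose of a (dA*dB x dA*dB) density matrix with respect to subsystem B."""
    r = rho.reshape(dA, dB, dA, dB)
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

def log_negativity(rho):
    """E = log ||rho^{T_B}||_1, the logarithmic negativity."""
    evals = np.linalg.eigvalsh(partial_transpose_B(rho))
    return float(np.log(np.sum(np.abs(evals))))

# Werner-like mixed state: p * (Bell projector) + (1 - p) * (maximally mixed state).
bell = np.zeros((4, 4))
bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5
for p in (1.0, 0.75, 0.25):
    rho = p * bell + (1 - p) * np.eye(4) / 4
    print(f"p = {p}: E = {log_negativity(rho):.3f}")
```

For p = 1 the state is pure and E = ln 2, while for sufficiently small p the partial transpose is positive and the negativity vanishes, showing how this measure isolates quantum correlations in a mixed state.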
The author of [52] has investigated the time evolution of reflected entropy and its holographic dual after a global quench in the context of the thermal double model. In a related study, the authors of [53] have studied the dynamics of logarithmic negativity, odd entropy and reflected entropy as well as their holographic counterpart EWCS via AdS/BCFT after local and (in)homogenous global quenches. Furthermore, the time evolution of reflected and odd entropies under local quenches has been analyzed in [54,55] where local quench is modeled by a falling particle in the holographic bulk theory. Also, the time evolution of EWCS in a two-sided black hole and Vaidya geometry has been studied in [56]. In the current article, we aim to provide a detailed analysis of the time evolution of EWCS in various time dependent geometries using holographic prescription. In particular, we are interested in various scaling regimes in the EWCS dynamics during the thermalization process. For this purpose, we investigate EWCS for a strip-shaped region in the Vaidya geometry describing the collapse of a thin shell of null (charged) matter into the AdS vacuum to form a black brane as a holographic description of thermal (electromagnetic) quench. The organization of the present paper is as follows. In section 2, we give the general framework in which we are working, establishing our notation and the general form of the HEE and EWCS functionals both in static and time dependent geometries. Section 3 contains a brief summary about EWCS in static backgrounds which are dual to the initial and final equilibrium states. We review old results for AdS and AdS black brane geometries and also find new ones for the case of extremal black branes. In section 4, we investigate the time evolution of EWCS in 2 + 1 dimensions, where we present both numerical and analytic results. Next, we study the higher dimensional cases by considering both thermal and electromagnetic quenches in section 5. We review our main results and discuss their physical implications in section 6, where we also present some future directions. Set-up We consider Einstein gravity coupled to a Maxwell field in (d+1)-dimensional asymptotically AdS spacetime. The action is where R is the Ricci scalar and Λ = − d(d−1) 2L 2 is the cosmological constant, with L being the AdS radius. The equations of motion following from this action are solved by the geometry of a charged black brane where r h denotes the horizon radius determined by the largest positive root of the blackening factor and µ corresponds to the chemical potential. This geometry is dual to a boundary theory at a finite density with the following expressions for energy, entropy and charge densities, respectively Further, the Hawking temperature of the black brane is given by (2.5) 1 Without loss of generality we will from now on consider L = 1. Figure 1: Schematic minimal surfaces for computing S A∪B in disconnected (left) and connected (middle) configurations. In the right panel, we show the EWCS (Σ in orange). Here we only consider the connected configuration where the EWCS is non-zero. Next, we will consider the extremal limit of the charged black branes where the temperature vanishes. It is straightforward to show that in this limit the blackening factor becomes L r h . 
Promoting the mass and charge in eq.(2.2) to time dependent functions m(v) and q(v), the Vaidya solution is obtained where in the Eddington-Finkelstein coordinate is given by Here v is a new coordinate defined by which coincides with the boundary time, i.e., t, at r = 0. This geometry describes an infalling null shell in an asymptotically AdS background. In the next sections, we apply holographic prescription to find the time evolution of EWCS using eq. (1.4) for configurations consisting of thin long strips. Figure 1 shows the entangling regions that we consider for computing HEE and EWCS in the static geometry. When the entangling region in the boundary theory is a strip the corresponding domain is specified by where X and˜ is the width and length of the strip, respectively. Note that in our set-up where we consider two different extremal hypersurfaces, i.e, Γ h and Γ 2 +h , X is replaced with h and 2 + h, respectively. In X ˜ limit, the translation invariance implies that the minimal hypersurface for computing HEE, i.e., Γ X , is completely specified by x(r). The HEE functional for static geometries eq.(2.2) then becomes where r t is the turning point of the minimal hypersurface, i.e., tip of Γ X . Of course, using the equation of motion, r t can be implicitly expressed in terms of X as follows . On the other hand, due to the reflection symmetry the corresponding hypersurface for EWCS, i.e., Σ, lies entirely on x = 0 slice. In this case the EWCS functional becomes is the turning point of the minimal hypersurface Γ X corresponding to a boundary region with width X. In the following we will denote the r In [31,32] it was shown that EWCS exhibits a discontinuous phase transition which is due to the competition between two different configurations for computing S A∪B . At small distances, i.e., h , a connected configuration (Γ A∪B = Γ 2 +h + Γ h ) has the minimal area, while for large separations the RT surface changes topology and the disconnected configuration (Γ A∪B = 2Γ ) is favored. In the latter case Σ becomes empty and hence the EWCS vanishes (see figure 1). Indeed, this behavior is similar to the continuous phase transition of HMI and the corresponding critical points are exactly the same. In order to have a nontrivial Σ and nonvanishing E W , we consider the small separation limit h in the following. Let us now turn to the time dependent case where the geometry in the bulk is given by a Vaidya spacetime. Once again, the translation invariance implies that the extremal hypersurface is completely specified by r(x) and v(x). Using eq. (2.7), the HEE functional can be written as where˙≡ d dx . Extremizing the above expression yields the equations of motion for r(x) and v(x), which readv In this case the corresponding boundary conditions for the extremal hypersurface are given as follows where (r t , v t ) is the location of the turning point. On the other hand, the EWCS can be parametrized where due to the reflection symmetry the corresponding hypersurface, i.e., Σ will be symmetric with respect to the midpoint x = 0. In this case the EWCS functional becomes The equation of motion obtained extremizing the above functional is where the hypersurfaces of interest satisfy the following boundary conditions Now we are equipped with all we need to calculate the time dependence of HEE and EWCS using Eqs.(2.13) and (2.16), respectively. Unfortunately it is not possible to find the time evolution of HEE and EWCS during the thermalization process analytically in general dimensions. 
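Before turning to the numerics, a short sketch of the static strip integrals discussed above may be useful. It evaluates the width X(r_t) and the regulated HEE area for a strip in a planar black brane, assuming a metric of the form ds^2 = (1/r^2)[-f dt^2 + dr^2/f + dx^2] with the boundary at r = 0; the paper's own functionals (the expressions around its eqs. (2.9) and (2.10)) may use different normalizations, so this is an illustration under stated assumptions rather than a reproduction of them.

```python
# Hedged sketch: strip width X(r_t) and regulated HEE area by quadrature for a
# static planar black brane, assuming ds^2 = (1/r^2)[-f dt^2 + dr^2/f + dx^2].
import numpy as np
from scipy.integrate import quad

def strip_width(r_t, f, d):
    """X = 2 * int_0^{r_t} (r/r_t)^(d-1) / sqrt(f(r)(1-(r/r_t)^(2(d-1)))) dr,
    with the substitution r = r_t sin(theta) to tame the endpoint at r = r_t."""
    def integrand(theta):
        u = np.sin(theta)
        r = r_t * u
        return r_t * np.cos(theta) * u**(d - 1) / np.sqrt(f(r) * (1.0 - u**(2 * (d - 1))))
    val, _ = quad(integrand, 0.0, np.pi / 2)
    return 2.0 * val

def hee_area(r_t, f, d, eps=1e-3):
    """Regulated area: 2 * int_eps^{r_t} dr / (r^(d-1) sqrt(f(r)(1-(r/r_t)^(2(d-1)))))."""
    def integrand(theta):
        u = np.sin(theta)
        r = r_t * u
        return r_t * np.cos(theta) / (r**(d - 1) * np.sqrt(f(r) * (1.0 - u**(2 * (d - 1)))))
    theta_min = np.arcsin(eps / r_t)
    val, _ = quad(integrand, theta_min, np.pi / 2, limit=200)
    return 2.0 * val

# Example: Schwarzschild-AdS brane with r_h = 1 in d = 3 and a hypothetical turning point.
d, r_h = 3, 1.0
f = lambda r: 1.0 - (r / r_h)**d
print("X    =", strip_width(0.8, f, d))
print("Area =", hee_area(0.8, f, d))
```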
In the following we will present the numerical results in the thin shell regime. Assuming this condition, the background that we consider is given by eq. (2.7) with where v 0 1 is the parameter that controls the thickness of the null shell. Note that in this setup v = 0 denotes the location of the null shell. Also note that for electromagnetic quench where the system is entirely non-thermal, the blackening factor at late times is given by eq. (2.6) corresponds to that of an extremal solution and we have (2.20) Preliminaries: EWCS for static backgrounds Before examining the full time-dependence of E W , we would like to study its asymptotic behaviors where the geometry is static. This study plays an important role in our analysis in the next sections because according to eqs. (2.7) and (2.19), the early and late time geometries correspond to a pure AdS and a charged AdS black brane, respectively. So in the following we review the computation of E W in these backgrounds. In this case the corresponding extremal hypersurface, i.e., Σ lies entirely on a constant time slice inside the bulk. In the next subsections, we present two specific examples in which we evaluate the behavior of E W . We will consider AdS-Schwarzschild and Extremal AdS black brane geometries for which semi-analytic results can be obtained. AdS-Schwarzschild Black Brane For the AdS-Schwarzschild black branes, the EWCS can be evaluated analytically in different scaling regimes. In this case we consider Evaluating eq. (2.12) gives an exact result [56] On the other hand, the relation between the position of the turning point r t and the strip width X can be written as follows [57] where the infinite series converges for r t < r h . In principle, we can invert this formula to write eq. (3.1) in terms of the boundary quantities, h, 2 + h and T . For the sake of simplicity, in the following we will focus on the low and high temperature behavior of EWCS. As demonstrated in [58] considering low temperature with respect to the separation scale corresponds to h T −1 . On the other hand, one might also regard the h T −1 case where we have low temperature with respect to both the subregion sizes and the separation between them. Further we note that, Now using eqs. (3.1) and (3.2) one can find that the low and high temperature expansion of EWCS for d > 2 is given by (see [58] for details) where α, β, γ and λ are some constants that depend on d and E vac. contribution which can be written in the following form (3.4) The above result eq. (3.3) shows that EWCS is a monotonically decreasing function of temperature and obeys an area law even in finite temperature where the HEE shows a volume law. On the other hand, solving numerically for the turning points r d and r u using eq. It is instructive to analyze the particular case of BTZ black holes with d = 2, since E W can be determined analytically even at finite temperature. In this case the EWCS functional becomes Performing the above integral, we are left with Also the relation between the width of the entangling region and the corresponding turning point at finite temperature is known [59] Now combining the above two equations, as well as eq. (2.5) for d = 2 with zero charge, yields the where c = 3L 2G N is the central charge. 
We can evaluate the zero temperature limit of the above result to find the vacuum contribution as follows Extremal AdS Black Brane In this case plugging eq.(2.6) into eq.(2.12), we find While the above integral cannot be carried out analytically for general d, in a very similar manner to the analysis of the thermal correction to EWCS in [58], we can obtain the scaling behavior of E W for extremal black branes. First, using binomial expansion we rewrite eq.(3.10) as follows which can be integrated to give Further, the relation between the position of the turning point r t and the strip width X can be written as follows [60] (3.13) Now we would like to invert this formula to write eq. (3.12) in terms of the boundary quantities, h, 2 + h and µ. A similar derivation to the one presented for AdS-Schwarzschild black brane holds in this case. Again, to perform an exact estimation, we will focus on the behavior of EWCS in small and large chemical potential limits. As demonstrated in [60], considering small chemical potential with respect to the separation scale corresponds to h µ −1 . Once again, one might also regard the h µ −1 case where we have small chemical potential with respect to both the subregion sizes and the separation between them. Next we note that h µ −1 limit corresponds to r d r h and r u → r h , while for h µ −1 we have r d , r u r h . Further using eqs. (3.12) and (3.13) one can find that the small and large chemical potential expansion of EWCS for d > 3 is given by (3.14) where α , β , γ and λ are some constants that depends on d. This result shows that EWCS is a monotonically increasing function of µ and obeys an area law even in finite chemical potential. Again, solving numerically for the turning points using eq. EWCS in Vaidya backgrounds: 2 + 1 dimensions In this section, we study the time evolution of EWCS by considering the case where d = 2 and the final equilibrium state is given by the BTZ black hole. First, we provide a numerical analysis and examine the various regimes in the growth of EWCS in the thin shell limit. Next, we will show that Σ is a geodesic whose length can be expressed analytically in closed form, which enables us to directly extract its scaling behavior in various regimes. Numerical analysis We start by evaluating E W (t), defined in eq. (2.16), numerically for several values of h, and T . We will consider subsystems consisting of equal width intervals as depicted in figure 1. For simplicity, we set r h = 1 and work with the rescaled quantityẼ W = 4G N E W throughout the following. We will focus on thermal quench where the corresponding geometry is given by eq. In this case the disconnected configuration is favored at late times and E W saturates to zero. Note that the most straightforward way to choose the minimal area configuration is by comparing the corresponding entanglement entropies. Another way is to compute the mutual information noting that in the disconnected phase the HMI vanishes. Regarding the evolution of EWCS and assuming that the connected configuration is always favored for any boundary time, there are three different scaling regimes 2 (see which the growth of the E W is quadratic. We will examine this observation further in the following. Note that in the period of linear growth, the slope seems more or less the same independent of h. This regime in fact persists all the way up to t ∼ O( + h) where E W reaches its maximum value. 
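The connected/disconnected competition mentioned above can be illustrated with a short sketch. Using the standard two-dimensional thermal-CFT result for a single interval, S(x) = (c/3) log[(beta/(pi eps)) sinh(pi x / beta)], it finds the critical separation at which the RT surface for the union changes topology, which is also where the HMI and the EWCS drop to zero. The central charge, cutoff, and interval sizes below are placeholders, and the single-interval formula is the textbook result rather than a quotation of the paper's eq. (3.7).

```python
# Hedged sketch: connected vs. disconnected RT configurations for two equal
# intervals of width l separated by h in a thermal state dual to BTZ.
import numpy as np
from scipy.optimize import brentq

def S_interval(x, beta, c=1.0, eps=1e-3):
    """Standard single-interval thermal entanglement entropy in 2d CFT."""
    return (c / 3.0) * np.log(beta / (np.pi * eps) * np.sinh(np.pi * x / beta))

def mutual_information(l, h, beta, c=1.0, eps=1e-3):
    connected = S_interval(2 * l + h, beta, c, eps) + S_interval(h, beta, c, eps)
    disconnected = 2 * S_interval(l, beta, c, eps)
    # S_{A u B} is the smaller of the two candidate configurations.
    return max(2 * S_interval(l, beta, c, eps) - min(connected, disconnected), 0.0)

def critical_separation(l, beta):
    """Separation h* where the two configurations exchange dominance
    (beyond it the EWCS is empty and vanishes)."""
    g = lambda h: (S_interval(2 * l + h, beta) + S_interval(h, beta)) - 2 * S_interval(l, beta)
    return brentq(g, 1e-6, l)

l, beta = 2.0, 1.0          # hypothetical interval width and inverse temperature
h_star = critical_separation(l, beta)
print("critical separation h* =", h_star)
print("I(A:B) just below h*   =", mutual_information(l, 0.9 * h_star, beta))
```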
Further, at late times, E W decreases and very quickly saturates to a constant value corresponding to the BTZ geometry given by eq. (3.8). We present the time dependence of the EWCS for the case of h < T −1 < in figure 7. The left panel shows the competition between the contribution to HEE due to the connected and disconnected configurations. Based on this figure, although the connected configuration has the minimal area at early time, the late time behavior is governed by the disconnected configuration. The critical time when this transition happens is approximately given by t ∼ O( − h). We show E W (t) for the same values of the parameters in the right panel. Once again, at early times the EWCS starts growing quadratically from the vacuum value and approaches the linear growth regime. It seems that, in this regime, the slope is independent of . Finally, at late times, E W displays a discontinuous transition and immediately saturates to zero where the saturation time is approximately To conclude this section let us comment on the essential role of the minimal hypersurface Γ 2 +h . As noted above, one can identify the three positions, i.e., r d , r u and r w which are important in studying the evolution of EWCS. We can also consider the time dependence of these points, as Analytic treatment When the final equilibrium state is given by the BTZ black brane, most of the expressions can be evaluated analytically. In particular, in the thin shell approximation, we demonstrate that the problem admits semi-analytic solution. In fact, this provides a check on our numerical results and also allow us to derive in detail several general features in the time evolution of EWCS. As we explain in the previous section, the evolution of EWCS can be divided into three different scaling regimes (see Fig. 5). In cases (i) and (iii) corresponding to the early and late time static geometries, the EWCS lies entirely on a constant time slice and we can use the previous expressions derived in sec. 3.1 to find E W . On the other hand, in case (ii) the part of Σ that is inside the shell is given by the geodesic in the pure AdS geometry, and the part of it that is outside the shell is given by the geodesic in the pure black brane geometry. In this case, the geodesic gets refracted at the null shell and it does not have to be in a constant time slice. In the following we focus on case (ii) which is more involved. Using eq. (2.7), we write the metric in zero charge limit as Further, we consider a thin null shell such that m(v) = θ(v) Note we have used the subscript a (b) to refer to quantities on the AdS (black brane) side of the null shell. In this case, using eq. (2.8), the boundary time reads where Q b is some integration constant,˙≡ d dλ and λ parametrizing the geodesic length. Note that we consider geodesics that lie on x = 0 slice, so comparing to [19] there is only one integration constant. Combining these two equations together yieldṡ which can be solved as follows (4.7) Now using the above result we can solve eq. (4.5) for t(r) to find where c ± are integration constants. In the following we only consider the + branch of the geodesics without loss of generality. Next, using eq. (4.4) for v b (r), we obtain which can be rewrite as follows . (4.10) Note that the expression for the part of geodesic that lies in the pure AdS geometry inside the null shell can be obtained from the r h → ∞ limit of eq. 
(4.9) as follows v a (r) = c a − r − 1 + Q 2 a r 2 Q a , (4.11) where in doing this, we should scale Q b with r h at the same time because in (4.5) we have defined the integration constant with an extra factor of horizon radius. The integration constants , i.e., c a and c b , will be fixed by setting v a (r u ) = v u and v b (r d ) = v d , respectively. Imposing these conditions, we have On the other hand, Q a and Q b can be found using the matching conditions at the null shell. Denoting value of r at the intersection of Σ and the null shell v = 0 as r w , we note that v (r) will be discontinuous at this point because of the refraction condition noted above. To find the matching condition for the derivative we integrate the equation of motion across the null shell which reads (4.13) Solving the above condition, we obtain (4.14) On the other hand, at the intersection of Σ and the null shell we have v a (r w ) = 0 = v b (r w ), (4.15) which means that v remains continuous along r = r w . The above conditions can be solved analytically in closed form as follows Now we are equipped with all we need to calculate the time dependence of EWCS analytically. Upon substituting the profiles of v a (r) and v b (r) into eq. (2.16), EWCS can be evaluated by separately evaluating the integral on the portion of the geodesic above the shell and that below the shell as follows The final result then becomes In principle we should write the above result in terms of the boundary quantities such as h and 2 +h. There is a subtlety in analytically solving for the turning points of the HRT extremal surfaces, i.e., (v t , r t ), in terms of the width of the entangling regions, i.e., X, at arbitrary time. Here we recall that it was shown in [19] that the relation between r t and X as a function of boundary time, i.e., t, can be expressed analytically in closed form as follows where ρ = r h rc and s = rc rt . Note that r c is the value of r at which Γ X intersects the null shell. Also v t can be find as follows which is due to the matching condition for Γ X [21]. With these expressions, the profile of the minimal geodesic and EWCS in eqs. (4.9), (4.11) and (4.20) are implicitly expressed entirely in terms of boundary quantities, e.g., h, and T . In the left panel of Fig. 9, the profile for EWCS which is determined from the analytic expressions in the (v, r) plane are plotted for different values of boundary time. Further, we show the full evolution of r d , r w and r u for a fixed h and in the right panel of Fig. 9. The markers in this figure show the numerical results which coincide with our analytical expressions in the thin shell limit. We will explore some universal features in the time evolution of EWCS using the closed form expression in detail in the next section. Regimes in the growth of EWCS The closed form expression for EWCS given by eq. (4.20) enables us to directly extract the different scaling behavior in various regimes during the thermalization process. In this section we will study these scaling regimes in more detail. In our setup, the main boundary quantities which may affect the behavior of holographic dual for EWCS during the thermalization process are h, and T . On the other hand, the dual geometric entities in the bulk which may govern the evolution of EWCS are r d , r u and r h . 
Also our numerical and analytical results in the previous sections suggest that the time dependence of E W could be associated to r w , that ranges from r d (i.e., close to the turning point of Γ h ) at early times, to r u (i.e., close to the turning point of Γ 2 +h ) at late times. In fact, Here we have set r h = 1 r d and r u are also depend on time, so that r w is not a monotonically increasing function of time to ensure r d ≤ r w ≤ r u at all times. Based on these results, an immediate conclusion is that the evolution of EWCS is characterized by different scaling regimes depending on r w (t) which we will examine further in the following. Early Growth At early times, the null shell does not reach Σ which lies entirely in AdS geometry, and hence the EWCS is a fixed constant given by the vacuum value. The early growth of EWCS starts immediately after the shell intersects with Σ, i.e., r w ∼ r d and v d ∼ v shell = 0. In other words, there exists a sharp time t 1 after which Γ h lies entirely in the black brane region and hence reduces to that in a static BTZ geometry. Indeed, t 1 is the saturation time for HEE corresponding to a boundary region with width h. Further, the boundary quantities t and h can be fixed in terms of bulk parameters, i.e., v d and r d using eqs. (3.7) and (4.4) as follows Combining the above equations then yields and (4.22), we can expand r c and r u for early times to find (4.26) In the large limit we can approximate the value of Q a , Q b and r w in eqs. (4.14), (4.16) and (4.17) by that at r u → ∞. In this way, using v u = r c − r u we find where r w is given by The above solution for r w , shows that in eqs. (4.14) and (4.16) one must pick the minus and plus sign for Q b and Q a , respectively. To see this, note that when Γ h starts intersecting the null shell, i.e., v d ∼ 0, we should have r w ∼ r d as satisfied by (4.28). Indeed choosing other signs this condition is not satisfied. Using the above relations for the early time limit (in which case, t r h ), eq. (4.20) yields We can use eqs. (4.24) and (4.26) as well as h condition, to rewrite it as follows where E vac. W is the vacuum contribution given by eq. (3.9) and E is the energy density given by eq. (2.4) with d = 2. Therefore at early times, i.e., h t r h , EWCS grows quadratically and the rate of growth is a fixed constant proportional to the energy density. Similar scaling behavior was found for the early growth of entanglement entropy in [21]. Linear growth As we discussed above, for t > t 1 , Γ h lies on a constant time slice outside the horizon and is time independent. Hence, r d remains fixed and we can use eq. (4.24) to find r d and v d . One might note that in fact the time evolution of Σ is then largely governed by properties of Γ 2 +h which go through both AdS and black brane regions. Based on our numerical results we expect that this regime corresponds to h r h t . In this regime we can expand eqs. (4.21) and (4.22) to find (4.31) Solving these at leading order then yields (4.32) Once again in the r u → ∞ limit we can use eqs. (4.27) and (4.28) for Q a , Q b and r w , noting that in this case we should consider the above expressions for r u and r c . Upon substituting these results into eq. (4.20) and expand for large , the resulting EWCS is then which can be rewrite as follows where s eq. is the equilibrium thermal entropy density given by eq. (2.4). It is worth to mention that the above constant rate precisely matches with the previous results of [52,53]. 
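A simple way to check the quadratic and linear regimes discussed above against numerical data is to fit each window separately. The sketch below is purely illustrative: the arrays `times` and `ew` are placeholders mimicking the qualitative shape of an E_W(t) curve, and the fit windows would in practice be chosen from the actual numerical output.

```python
# Hedged sketch: extracting the early quadratic coefficient and the intermediate
# linear slope from a sampled E_W(t) curve.  Data below are placeholders.
import numpy as np

def fit_early_quadratic(times, ew, t_max):
    """Fit E_W(t) - E_W(0) ~ a t^2 on t < t_max; returns the coefficient a."""
    mask = times < t_max
    a, _ = np.polyfit(times[mask]**2, ew[mask] - ew[0], 1)
    return a

def fit_linear_slope(times, ew, t_min, t_max):
    """Fit E_W(t) ~ v t + b on t_min < t < t_max; returns the slope v."""
    mask = (times > t_min) & (times < t_max)
    v, _ = np.polyfit(times[mask], ew[mask], 1)
    return v

# Placeholder data with the qualitative shape quadratic -> linear -> saturation.
times = np.linspace(0.0, 6.0, 400)
ew = np.piecewise(times,
                  [times < 1.0, (times >= 1.0) & (times < 4.0), times >= 4.0],
                  [lambda t: 0.5 * t**2, lambda t: t - 0.5, lambda t: 3.5])

print("early quadratic coefficient:", fit_early_quadratic(times, ew, t_max=0.5))
print("linear-regime slope:        ", fit_linear_slope(times, ew, t_min=1.5, t_max=3.5))
```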
Similar scaling behavior was found during intermediate stages of time evolution of entanglement entropy in [21]. Saturation At late times, the tip of Γ 2 +h approaches the null shell, i.e., r w → r u from below and v u → 0. Thus, we can expand the relevant quantities in small r u − r w and v u . Indeed, in this case Σ lies entirely in the black brane region and hence reduces to that in a static BTZ geometry. Recall that an essential assumption in evaluating EWCS is that both HRT hypersurfaces, i.e., Γ h and Γ 2 +h , correspond to the same boundary time t. We use this condition to simplify the calculation since for v u ∼ 0, v d can be expressed in terms of r d and r u as follows where we have used eq. (4.4). Inserting the above expression in eqs. (4.14) and (4.17) and simplify the resultant equations yields Combining the above relation with eq. (4.16) and expand for small v u , we have Upon substituting these results into eq. (4.20), one finds that at leading order the resulting EWCS is then given by the same expression as in (3.8). Also note that according to eq. (4.37) the v u → 0 limit coincides with r w → r u as expected. Producing the different scaling behavior of EWCS shows that the results based on our analytic treatment are consistent with the previouslly numerical results. 22 In this section we generalise our studies to higher dimensional cases in specific directions. We will mainly focus on three dimensional boundary theory, because the interesting qualitative features of the thermalization process are independent of the dimensionality of the QFT. In order to investigate the behavior of EWCS during the thermalization process more generally, we consider two different types of global quench: a thermal quench and an electromagnetic quench. Once again, we consider subsystems consisting of equal width intervals as depicted in figure 1. For simplicity, we set r h = 1 and work with the rescaled quantityẼ W = 4G Ñ d−2 E W throughout the following. Evolution after a thermal quench (q = 0) Let us begin with the case of a thermal quench where the dual gravitional geometry is given by eqs. In figure 11 we demonstrate the same boundary quantities for the case of hT < 1 < T . larger, the region with linear dependence becomes more pronounced. If we fit the curves in the right panel of figure 11 in the linear growth regime, we find ∆Ẽ W ∼ v w t where the best fit gives v w ≈ 0.68. Note that, the slope seems more or less the same independent of h. Evolution after a electromagnetic quench (q = 0) In this section, we study the case of an electromagnetic quench where a thin shell of charged null fluid collapsing in empty AdS to form a black brane. We choose the system to be entirely nonthermal by approaching the extremal black brane solution whose blackening factor at late times is given by eq. (2.6). In this case, m(v) and q(v) are not independent and we consider eq. (2.20) as the time dependent profile for the horizon radius. Before we proceed, let us recall that for an extremal geometry the event horizon has a double zero. This feature plays an important role in the evolution of EWCS after an electromagnetic quench as we detail below. In figure 12, we show the numerical results for a fixed and several values of h. In this case the connected configuration is always favored for any boundary time and EWCS saturates to a finite value. There is some interesting differences when comparing the behavior of the evolution after a electromagnetic quench here to the thermal quench in the previous section. 
For extremal cases, the regime of linear growth is replaced by a logarithmic growth. This behavior is inherited from the logarithmic scaling in the static extremal geometries as previously discussed in [17], which has its origin in the double zero at the horizon. Conclusions and Discussions In this paper, we explored the time evolution of entanglement wedge cross section after a global quantum quench for a strip entangling region in various geometries. we considered subsystems consisting of equal width intervals as depicted in figure 1. First, we focused on the simple case of d = 2 and consider a thermal quench in detail where the final equilibrium state is dual to a BTZ geometry. In this case much of the analysis could be carried out analytically. We have also extended these studies to 3+1 dimensions, where we considered two different types of global quench: a thermal quench and an electromagnetic quench. In the following we would like to summarize our main results and also discuss some further problems. • In a (2 + 1)-dimensional bulk geometry, we found that the time evolution of EWCS is characterized by three different scaling regimes: an early time quadratic growth, an intermediate linear growth and a late time saturation. The main behavior in the evolution depends on Γ 2 +h while Γ h is fixed and do not influence the time dependence of EWCS. To confirm these behaviors, we provided a numerical analysis and examined the various regimes in the growth of EWCS in the thin shell limit. We have also found an analytically closed form expression for EWCS, which enables us to directly extract its scaling behavior in various regimes. Our results here shows that at early times, i.e., t h, the EWCS starts at the same value of the pure AdS geometry, then at t ∼ O(h) grows quadratically and approaches a regime of linear growth. We found that as the width of the entangling region becomes larger, the region with linear dependence becomes more pronounced. In analogy to the analysis in [21] and motivated by this linear growth we introduce a dimensionless rate of growth as follows It is worth to mention that the value of the constant rate corresponding to the linear growth regime precisely matches with the previous results of [52,53]. Also note that this value is the same as the entanglement velocity found for a BTZ black brane in [21]. • In higher dimensions, considering a thermal quench where the final equilibrium state is an AdS black brane, the qualitative behaviors of E W during the evolution are very similar to d = 2 case discussed above. In this case, our numerical results, allowing us to identify regimes of early time quadratic growth, an intermediate linear growth and a saturation regime. Moreover, we found that for small entangling regions, EWCS transitions between vacuum and thermal values, without much of a linear regime in between and as the width of the entangling region becomes larger, the region with linear scaling becomes more pronounced. In particular, for a four dimensional bulk theory, in the linear growth regime, we found R W (t) ∼ v w where the best fit gives v w ≈ 0.68. Once again, this constant rate precisely matches the entanglement velocity found for a (3 + 1)-dimensional AdS black brane in [21]. We expect that the same matching happens in higher dimensional cases. It would be interesting to understand whether this indeed happens employing an analytic approach similar to three dimensional case. 
• Considering an electromagnetic quench in higher dimensions and choosing the system to be entirely non-thermal by approaching the extremal black brane solution, there are some interesting differences when comparing the behavior of the evolution to the thermal quench. In this case, the regime of linear growth is replaced by a logarithmic growth. This behavior is inherited from the logarithmic scaling in the static extremal geometries, which has its origin in the double zero at the horizon.

• As we have mentioned before, it has been proposed that the EWCS is dual to different information measures, including the entanglement of purification, reflected entropy, odd entropy and logarithmic negativity. Considering E_W as a measure of correlations dual to the reflected entropy, figure 13 indicates that the correlation keeps growing even after the HMI (a natural measure of correlations) has reached its maximum value. This implies that E_W captures more correlations than the HMI. This behavior is consistent with the results of [36,44,55], which point out that E_W is more sensitive to classical correlations. However, this interpretation is in conflict with the relation between the EWCS and negativity (which is sensitive only to quantum correlations) [47]. Similar behavior for E_W has been noted in [47] for a different model of the quench in a (1+1)-dimensional CFT. Here, we emphasize that our study shows this is also true for thermal and electromagnetic quenches in (1+1) and (2+1) dimensions.

We can extend this study in different interesting directions. Although in higher dimensions we carried out a numerical analysis, we expect that an analytic treatment similar to the three-dimensional case is most simply done by considering the thin shell and large entangling region limits. One could then extract the analytic behavior of EWCS in the different scaling regimes during the evolution, which may be useful for further investigating interesting features of the thermalization process. In particular, it would enable us to study various scaling regimes, generalizing the tsunami picture [21]. In this paper we restricted our discussion to the equilibration following a global quench in a relativistic setup. It would be interesting to consider more general backgrounds, in particular those with Lifshitz and hyperscaling violating exponents [26,27]. We leave the details of some interesting problems for future study [61].
10,850
2020-05-12T00:00:00.000
[ "Physics" ]
Effect of Electron-Beam Irradiation on Functional Compounds and Biological Activities in Peanut Shells Peanut shells, rich in antioxidants, remain underutilized due to limited research. The present study investigated the changes in the functional compound content and skin aging-related enzyme inhibitory activities of peanut shells by electron-beam treatment with different sample states and irradiation doses. In addition, phenolic compounds in the peanut shells were identified and quantified using ultra-performance liquid chromatography with ion mobility mass spectrometry–quadrupole time-of-flight and high-performance liquid chromatography with a photodiode array detector, respectively. Total phenolic compound content in solid treatment gradually increased from 110.31 to 189.03 mg gallic acid equivalent/g as the irradiation dose increased. Additionally, electron-beam irradiation significantly increased 5,7-dihydroxychrome, eriodictyol, and luteolin content in the solid treatment compared to the control. However, liquid treatment was less effective in terms of functional compound content compared to the solid treatment. The enhanced functional compound content in the solid treatment clearly augmented the antioxidant activity of the peanut shells irradiated with an electron-beam. Similarly, electron-beam irradiation substantially increased collagenase and elastase inhibitory activities in the solid treatment. Mutagenicity assay confirmed the stability of toxicity associated with the electron-beam irradiation. In conclusion, electron-beam-irradiated peanut shells could serve as an important by-product with potential applications in functional cosmetic materials. Introduction Peanuts (Arachis hypogaea L.) are an important crop cultivated worldwide for seed and oil production.Typically regarded as discarded by-products of peanut processing, peanut shells have gained recent recognition for their versatile applications in feedstock, food, and fuel [1,2].Moreover, peanut shells are utilized as a source of natural antioxidants with high phenolic and flavonoid content, especially luteolin [3,4].Studies have underscored the beneficial effects of luteolin in maintaining human health by controlling antioxidative stress, aging, and inflammation [5,6].Agricultural by-products have also been used as natural antioxidants in skin care formulations [7].Peanut shells are a rich source of functional compounds and have piqued interest as an ingredient in natural cosmetics. 
Functional cosmetics are generally classified into whitening, wrinkle improvement, and ultraviolet ray (UV)-protection products.In particular, the main effects of wrinkleimproving cosmetics are known to promote collagen synthesis, strengthen skin elasticity, and promote epidermal metabolism and fibroblast production [8].Excessive melanin accumulation catalyzed by tyrosinase results in hyperpigmentation disorders including melisma, freckles, and age spots [9].Additionally, degradation of elastin fiber and collagen complex by elastase and collagenase, respectively, leads to decrease in skin elasticity, flexibility, resiliency, and strength [10].The development of tyrosinase, collagenase, and elastase inhibitors for use in functional cosmetics is therefore important for the control of whitening, wrinkles, and skin aging.In previous studies, raw materials for functional cosmetics were reported in medicinal plants and herb extract [8,9]; however, the skin-agingrelated enzyme (i.g., anti-tyrosinase, anti-collagenase, and anti-elastase) inhibitory activities of food crops and their by-products have not yet been elucidated. Ionizing radiation, including gamma rays, X-rays, and electron beams, is an effective technology for food preservation [11].Recently, ionizing radiation technology has been employed to improve bioactive compounds in natural ingredients, as it has the potential to enhance the biological activity of phenolic compounds [12].Instead of radioisotopes, electron-beam irradiation uses an electrical source to generate ionizing energy.In addition, electron-beam irradiation offers several advantages, such as easy handling, reduced logistics costs, and fewer unexpected adverse effects on the irradiated product [13,14]. In most previous studies, quantitative and qualitative analyses of the phenolic compounds in peanut shell extract were conducted using high-performance liquid chromatography (HPLC) by comparing retention time of peaks with those of standard compounds [15].To the best of our knowledge, no prior study has extensively characterized the phenolic compound composition of peanut shells using ultra-performance liquid chromatographyion mobility mass spectrometry-quadrupole time-of-flight (UPLC-IMS-QTOf).Therefore, this study was conducted to evaluate the overall phenolic and flavonoid contents, as well as antioxidant and anti-aging properties of electron-beam-irradiated peanut shells in different sample states (i.e., solid and liquid) and dose levels (i.e., 0, 5, 10, and 20 kGy).This study aimed to determine the polyphenols in peanut shell extracts using UPLC-IMS-QTOf, and changes in response to electron-beam treatment.We established that electron-beam irradiation to be a secure method for enhancing peanut shell biological activity before industrial utilization. 
Evaluation of Extract Color and Functional Compound Content in Electron-Beam-Irradiated Peanut Shell

The change in the color of the extract obtained from electron-beam-irradiated peanut shells is presented in Figure 1. Hunter color 'L', 'a', and 'b' represent the degree of lightness, greenness to redness, and blueness to yellowness, respectively. Lower values of 'L', 'a', and 'b' signify increased darkness, greenness, and blueness, respectively, which are primarily influenced by chemical changes or degradation [16]. In the solid treatment, only minimal changes were observed in the Hunter color values, whereas the liquid treatment showed a remarkable shift in the 'a' and 'b' values. In the liquid treatment, the 'L' value increased from 37.02 at 0 kGy to 40.10 at 20 kGy, while the 'a' and 'b' values changed from −0.01 and −1.66 at 0 kGy to 14.40 and 5.70 at 20 kGy, respectively, resulting in discoloration of the extract as the electron-beam dose level increased. Thus, it was assumed that electron-beam irradiation of the liquefied peanut shell might have a negative influence on the functional compound content or composition. A previous study reported a more pronounced darkening of almonds, hazelnuts, pine nuts, and peanuts when irradiated with an electron-beam; however, these observations were not consistently reproducible [17].

Changes in the functional compound (total phenolic and flavonoid) content in peanut shells treated with an electron-beam in different states of the sample and at different dose levels are shown in Figure 2. The total phenolic and flavonoid contents in the solid treatment were dramatically increased by electron-beam treatment, from 110.31 mg gallic acid equivalent (GAE)/g at 0 kGy to 189.03 mg GAE/g at 20 kGy; however, the liquid treatment was less effective than the solid treatment (107.07-122.83 mg GAE/g). As the irradiation dose increased, the total phenolic compound content in the solid treatment increased. Electron-beam irradiation also enhanced the flavonoid concentration in the solid treatment (137.28-142.40 mg catechin equivalent (CE)/g) compared to the control value of 72.84 mg CE/g, but there was no statistical difference between the dose levels. Han et al. [15] reported that the total phenolic and flavonoid contents in peanut shell extract were 253.94 mg GAE/g and 111.74 mg CE/g, respectively, which were higher than those observed in this study. These differences may be attributed to variations in experimental methods, cultivation environments, and climatic conditions. Previous studies have applied thermal (i.e., boiling and roasting) and/or non-thermal treatments (i.e., gamma and far-infrared radiation) to enhance the functional compound content in nuts, such as peanuts, hazelnuts, pine nuts, and almonds [1,17,18]. Zhang et al. [19] reported that the total flavonoid content of peanut skin increased following ozone treatment. Recently, we reported that atmospheric pressure plasma treatment increases the total phenolic and flavonoid content in peanut shells [3]. In this study, it was postulated that the increase in the functional compound content in the solid treatment may be attributed to cell wall modification and/or decomposition of the chemical bonds of the polyphenolic compounds induced by electron-beam irradiation. In addition, it was also assumed that electron-beam irradiation of the liquid-type sample might cause an excessive breakdown of phenolic compounds in the peanut shell extracts, resulting in decreased yellow color and flavonoid content.

Evaluation of Biological Activities in Electron-Beam-Irradiated Peanut Shell

Antioxidant activity is a typical indicator of the various polyphenolic compounds present in peanuts. Thus, the antioxidant activities of polyphenolic compounds can be determined based on their free radical scavenging activities towards 2,2-diphenyl-1-picrylhydrazyl (DPPH) and 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS), and their reductive potential by ferric ion reducing antioxidant potential (FRAP). The antioxidant activities of irradiated peanut shells were affected by the state of the sample (Figure 3). The solid treatment exhibited higher antioxidant activity than the control and liquid treatments. Hwang et al.
[11] observed that electron-beam irradiation increased the antioxidant activity of mugwort extracts as the irradiation dose increased from 2 to 10 kGy. However, in our study, electron-beam irradiation at doses between 5 and 20 kGy showed no statistical difference in antioxidant activities. This could be attributed to the fact that there was no notable variation in the functional compound content, which was highly correlated with antioxidant activity. DPPH, ABTS, and FRAP activities of the peanut shells were strongly correlated with the total phenolic (rDPPH = 0.946, rABTS = 0.952, and rFRAP = 0.956) and flavonoid (rDPPH = 0.993, rABTS = 0.976, and rFRAP = 0.986) contents (p < 0.001). Hwang et al. [11] also demonstrated that irradiation treatment modifies the cell wall and facilitates the emission of extractable substances such as polyphenols, resulting in increased antioxidant activity. Our findings were consistent with these results.
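The kind of Pearson correlation reported above (for example rDPPH between total phenolic content and DPPH activity) can be computed as in the short sketch below. The arrays are hypothetical placeholders, not the study's data, and are included only to illustrate the calculation.

```python
# Hedged sketch: Pearson correlation between functional compound content and
# antioxidant activity.  Values below are placeholders, not the study's data.
import numpy as np
from scipy.stats import pearsonr

total_phenolics = np.array([110.3, 148.2, 165.7, 189.0])   # mg GAE/g (placeholder)
dpph_activity   = np.array([41.5, 55.2, 60.1, 66.8])       # mg TE/g  (placeholder)

r, p_value = pearsonr(total_phenolics, dpph_activity)
print(f"r_DPPH = {r:.3f}, p = {p_value:.4f}")
```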
We also evaluated the skin anti-aging activities of electron-beam-irradiated peanut shells, given their reputation as a valuable source of antioxidants (Figure 4). A tyrosinase activity assay was primarily implemented to assess skin whitening, and collagenase and elastase inhibitory activities are usually measured to determine wrinkle improvement [10,20]. Tyrosinase inhibitory activity in peanut shells showed no statistical difference in the range of 61.35-66.63%, regardless of the state of the sample or dose levels. Collagenase inhibitory activity in the solid treatment (50.00-65.12%) was significantly enhanced by electron-beam irradiation compared to that in the control (42.42%), whereas no statistical difference was observed between the control and liquid treatments (42.62-44.45%). Similar to the results of the collagenase inhibition assay, the most potent anti-elastase effect was observed in the solid treatment, especially at a dose of 10 kGy. The enhanced anti-collagenase and anti-elastase activities in the solid treatment were likely influenced by the increase in phenolic compound content induced by the irradiated electron-beam.

Identification of Phenolic Compounds in Peanut Shell Using Ultra-Performance Liquid Chromatography-Ion Mobility Mass Spectrometry-Quadrupole Time-of-Flight (UPLC-IMS-QTOf) and Relative Quantification of Major Phenolic Compounds in Electron-Beam-Irradiated Peanut Shell Using High-Performance Liquid Chromatography Coupled with Photodiode Array Detector (HPLC-PDA)

Phenolic compounds in the peanut shells were identified using UPLC-IMS-QTOf, and the chromatograms are shown in Figure 5.
Nine compounds were tentatively identified in the peanut shells from the polyphenol database, with most of them belonging to flavonoids and flavonoid subclasses (Table 1). Among the phenolic compounds, 5,7-dihydroxychromone, eriodictyol, and luteolin were the major compounds, consistent with previous studies [15,21]. One phenolic acid, 5-hydroxyferulic acid, was identified in peanut shells; however, its peak was lower than the flavonoid peaks detected. Compounds 6 and 7 showed the same [M−H]− at m/z 299.0561, suggesting the possibility of being an isomeric pair; they were tentatively identified as chrysoeriol and pratensein, respectively. In this study, the UPLC chromatogram confirmed that the composition of the extracted phenolic compounds in electron-beam-irradiated peanut shells was similar to that of the control, but the relative peak area differed depending on the treatment. The 5-hydroxyferulic acid, apigenin, chrysoeriol, pratensein, 8-prenyl luteolin, and caflanone identified by UPLC-IMS-QTOf analysis were not detected in the high-performance liquid chromatography (HPLC) coupled with photodiode array (PDA) chromatogram, possibly because of their low concentrations. Thus, except for these six compounds, the relative quantification of 5,7-dihydroxychromone, eriodictyol, and luteolin in the electron-beam-irradiated peanut shells was performed using HPLC-PDA by comparing them with their individual standards (Figure 6). The method validation showed linearity, with a correlation coefficient of 0.999. Among the three chemicals, luteolin was predominant in the control at 41.56 mg/g, followed by eriodictyol (19.08 mg/g) and 5,7-dihydroxychromone (6.66 mg/g). Regardless of the dose levels, the contents of these three chemicals significantly increased in the solid treatment following electron-beam irradiation compared with the control.
Conversely, all the contents decreased by approximately 30.3-100% in the liquid treatment as the irradiation dose increased. As previously mentioned, electron-beam irradiation at doses of 5-20 kGy did not significantly affect the phenolic compound content in peanut shells. Thus, additional studies are needed to compare the changes in phenolic compounds in peanut shells after electron-beam treatment at various doses. Qiu et al. [21] identified the antioxidants in peanut shell as 5,7-dihydroxychromone, eriodictyol, and luteolin, with lower contents of 0.95, 0.92, and 2.36 mg/g, respectively, compared to the findings in this study. The variation in the content of these compounds reported in the literature is probably due to differences in sample genotypes and extraction methods.

Mutagenicity Assay

Mutagenicity is evaluated as positive when the number of revertant colonies in the treatment is more than twice that of the control, showing a dose-dependent trend [22]. As shown in Table 2, the number of revertant colonies in the positive control with the TA98 (−S9), TA98 (+S9), TA100 (−S9), and TA100 (+S9) strains was approximately 11.5, 12.2, 2.1, and 3.6 times higher than that of the negative control, respectively. However, there was no difference between the sample and the negative control at concentrations up to 4 mg/plate, regardless of electron-beam irradiation.

Reagents and Standards

All standard chemicals, enzymes, substrates, buffers, and positive controls used in this study were purchased from Sigma Aldrich (St. Louis, MO, USA). HPLC-grade water, ethanol, and acetonitrile were purchased from J.T. Baker, Inc. (Phillipsburg, NJ, USA). Distilled water was obtained using a Milli-Q Advantage A10 water-purification system (Merck Millipore, Billerica, MA, USA).
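The decision rule described in the Mutagenicity Assay section above (a positive call requires revertant counts above twice the control together with a dose-dependent trend) can be written as a small function, sketched below. The dose levels and colony counts are hypothetical placeholders, not values from Table 2.

```python
# Hedged sketch of the mutagenicity decision rule described above.
import numpy as np

def is_mutagenic(doses_mg_per_plate, revertants, negative_control):
    """Positive only if some count exceeds 2x the negative control and the
    response is non-decreasing with dose."""
    revertants = np.asarray(revertants, dtype=float)
    exceeds_twofold = np.any(revertants > 2.0 * negative_control)
    dose_dependent = np.all(np.diff(revertants[np.argsort(doses_mg_per_plate)]) >= 0)
    return exceeds_twofold and dose_dependent

doses = [0.5, 1.0, 2.0, 4.0]          # mg/plate (placeholder)
sample_counts = [22, 25, 24, 26]      # revertant colonies (placeholder)
print("mutagenic:", is_mutagenic(doses, sample_counts, negative_control=23))
```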
Plant Materials and Sample Preparation Peanuts (Arachis hypogaea cv.Sinpalkwang) were sourced from a peanut farmhouse (Gochang, Republic of Korea).The peanuts were washed with tap water, and the peanut shells and kernels were separated.The resulting peanut shells (solid treatment) and peanut shell extracts (liquid treatment) were used in the experiments.Peanut shell extracts were prepared as previously described by Han et al. [15] with slight modifications.The ground peanut shell (4 g) was mixed with 40 mL of 100% ethanol and incubated with shaking at 25 • C for 24 h.The mixture was centrifuged (CR22N; Eppendorf Himac Technologies Co., Ltd., Ibaraki, Japan) at 10,000× g for 20 min.The supernatant was then collected and used for further experiments. Electron-Beam Irradiation Two states of the sample (i.e., raw materials for the solid state and extracts for the liquid state) were placed in a 50 mL tube with a screw cap at room temperature and exposed to four absorbed doses, i.e., 0 (non-irradiated), 5, 10, and 20 kGy, with electronbeam sources.A UELV-10-10S electron-beam accelerator (10 MeV, 0.2 mA, Moscow, Russia) was used at the Advanced Radiation Technology Institute, Korea Atomic Energy Research Institute (Jeongeup, Republic of Korea).The radiation source was set at a rate of 10 kGy/h.The absorbed doses were evaluated with alanine dosimeters (diameter 5 mm, Bruker Instruments, Bremen, Germany), and the actual dose was within ±5% of the target dose. Extracts of the electron-beam-treated peanut shells (solid treatment) were prepared as described in Section 3.2.The extracts of irradiated peanut shell (solid treatment) and irradiated peanut shell extracts (liquid treatment) were evaporated using a rotary evaporator (SB-1200, EYELA Co., Ltd., Tokyo, Japan).The samples were redissolved in 100% ethanol for UPLC analysis and in dimethyl sulfoxide (DMSO) for functional compound, biological activity, and mutagenicity assays, following previous studies [3,10]. Determination of Extract Color The color of the extracts was evaluated using a chromameter (CM-3500d; Minolta, Tokyo, Japan) with three replicates.The measurements are recorded in L (darknesswhiteness), a (greenness-redness), and b (blueness-yellowness) spectra. Determination of Functional Compound Contents The electron-beam-irradiated peanut shells were quantified for total phenolic and flavonoid contents using the modified Folin-Ciocalteu [23] and aluminum chloride methods [24], respectively.Total phenolic and flavonoid contents were expressed as mg GAE/g extract and mg CE/g extract, respectively. Evaluation of Antioxidant Activities The radical scavenging activities of DPPH, ABTS, and FRAP were analyzed to evaluate the antioxidant activity of the electron-beam-irradiated peanut shell extracts, as described in our previous study [3].These assessments were expressed as mg Trolox equivalent (TE)/g extract, and the FRAP activity was expressed as mM/g extract. Evaluation of Biological Activities Tyrosinase, collagenase, and elastase inhibitory activities were analyzed to evaluate the anti-aging potential of the extracts following the enzymatic method described by Han et al. 
[15]. To evaluate tyrosinase inhibitory activity, the dopachrome method with L-3,4-dihydroxyphenylalanine as the substrate was used. Collagenase inhibitory activity was evaluated using a spectrofluorometric method with metalloproteinase-2 as the substrate. In addition, elastase inhibitory activity was evaluated by detecting the p-nitroaniline released from N-succinyl-Ala-Ala-Ala-p-nitroanilide by elastase. Kojic acid, chlorhexidine, and elastatinal were used as positive controls for the tyrosinase, collagenase, and elastase inhibition assays, respectively. The inhibition (%) was calculated as follows:

Inhibition (%) = [1 − (A sample − A sample blank) / (A control − A control blank)] × 100

where A sample is the absorbance or fluorescence of a mixture consisting of the sample, enzyme, and substrate; A sample blank is the absorbance or fluorescence of a mixture without the enzyme; A control is the absorbance or fluorescence of a mixture without the sample; and A control blank is the absorbance or fluorescence of a mixture without the sample or enzyme. The samples were tested at concentrations of 1, 0.1, and 1 mg/mL for the tyrosinase, collagenase, and elastase inhibitory activity assays, respectively.

Identification and Relative Quantification of Phenolic Compounds in Peanut Shell Using UPLC-IMS-QTOf and HPLC-PDA
Phenolic compounds in peanut shells were identified using an ACQUITY UPLC equipped with IMS-QTOf via electrospray ionization (ESI) (Vion IMS, Waters, Milford, MA, USA). The compounds were separated using an ACQUITY UPLC BEH C18 column (2.1 × 100 mm, 1.7 µm particle size; Waters). The mobile phases were water with 0.1% formic acid (A) and acetonitrile with 0.1% formic acid (B), applied using a gradient method (5% B for 0-1 min, 5-100% B for 1-20 min, 100% B for 20-22.5 min, and 100-5% B for 22.5-25 min) at 1 mL/min. To identify all possible phenolic compounds, total ion spectra were collected over a mass range of m/z 100-1500 in the negative mode. The gas temperature and gas flow rate were 350 °C and 800 L/min, respectively. The ESI conditions were a capillary voltage of −2300 V and a collision voltage of 40 V. The accurate mass of each phenolic compound was calculated from its molecular formula in the database, and the compounds were identified by comparing their observed accurate masses with the calculated theoretical masses.

The major phenolic compounds in the peanut shells identified by UPLC-IMS-QTOf, namely 5,7-dihydroxychromone, eriodictyol, and luteolin, were quantified using a Chromaster HPLC (Hitachi Ltd., Tokyo, Japan) coupled with a PDA detector. The stationary and mobile phases used in the UPLC-IMS-QTOf detection were also used for the relative quantification. The retention times of the peaks in the HPLC chromatograms were compared with those of commercial standards. Quantification was performed using three different standard curves, and the concentrations were expressed as mg/g of extract.

Statistical Analysis
All data are presented as the average of the values of three replicates (n = 3) with the standard deviation, using SigmaPlot software (version 14.0; Systat Software, San Jose, CA, USA). Differences between irradiation doses (5, 10, and 20 kGy) in the raw materials and extracts were evaluated using Tukey's multiple range test at p < 0.05 in SPSS statistical software (version 18.0, SPSS Inc., Chicago, IL, USA).
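For readers who want to reproduce the inhibition calculation described for the biological activity assays above, a minimal sketch is given below. It assumes a Python environment; the function name and the illustrative readings are hypothetical and not part of the original protocol.

def percent_inhibition(a_sample, a_sample_blank, a_control, a_control_blank):
    # Correct the sample and control signals by their respective blanks,
    # then express inhibition as the fractional reduction of the control signal.
    corrected_sample = a_sample - a_sample_blank
    corrected_control = a_control - a_control_blank
    if corrected_control == 0:
        raise ValueError("corrected control signal must be non-zero")
    return (1.0 - corrected_sample / corrected_control) * 100.0

# Example with hypothetical absorbance readings (placeholders, not study data):
# percent_inhibition(0.42, 0.05, 0.90, 0.04) gives about 57% inhibition.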
Conclusions
In this study, nine phenolic compounds in peanut shells were identified using UPLC-IMS-QTOf, and the changes in the major compounds detected in peanut shells treated with electron-beam irradiation were evaluated. The solid treatment improved the antioxidant properties and skin aging-related enzyme inhibitory capacities of peanut shells by increasing their polyphenolic content. The toxicological safety of electron-beam irradiation was supported by the mutagenicity assay, suggesting that electron-beam irradiation could be a safe technique for boosting the biological activity of peanut shells prior to their use in industrial applications. Further studies are needed to confirm the optimal irradiation dose for maximizing the biological activity of peanut shells.

Figure 1. Change in the color of the extract obtained from electron-beam-irradiated peanut shell depending on the sample state and irradiation dose level. The values are presented as the mean ± standard deviation of three replicates. Different letters in the same treatment (i.e., solid or liquid) indicate a significant difference between electron-beam dose levels according to Duncan's multiple range test at p < 0.05.

Figure 2. Total phenolic compound and flavonoid contents in electron-beam-irradiated peanut shell depending on the state of the sample and dose level. The values are presented as the mean ± standard deviation of three replicates. Different letters in the same treatment (i.e., solid or liquid) indicate a significant difference between electron-beam dose levels according to Duncan's multiple range test at p < 0.05. ns, not significant.

Figure 3. Antioxidant activities in electron-beam-irradiated peanut shell depending on the sample state and irradiation dose level. The values are presented as the mean ± standard deviation of three replicates. Different letters in the same treatment (i.e., solid or liquid) indicate a significant difference between electron-beam dose levels according to Duncan's multiple range test at p < 0.05. ns, not significant.

Figure 4. Skin aging-related enzyme inhibition effect of electron-beam-irradiated peanut shell depending on the state of the sample and dose level. The values are presented as the mean ± standard deviation of three replicates. Different letters in the same treatment (i.e., solid or liquid) indicate a significant difference between electron-beam dose levels according to Duncan's multiple range test at p < 0.05. ns, not significant.

2.3. Identification of Phenolic Compounds in Peanut Shell Using Ultra-Performance Liquid Chromatography-Ion Mobility Mass Spectrometry-Quadrupole Time-of-Flight (UPLC-IMS-QTOf) and Relative Quantification of Major Phenolic Compounds in Electron-Beam-Irradiated Peanut Shell Using High-Performance Liquid Chromatography Coupled with a Photodiode Array Detector (HPLC-PDA)

Figure 6. Content of the major phenolic compounds in electron-beam-irradiated peanut shell depending on the state of the sample and irradiation dose level. The values are presented as the mean ± standard deviation of three replicates. Different letters in the same treatment (i.e., solid or liquid) indicate a significant difference between electron-beam dose levels according to Duncan's multiple range test at p < 0.05. ns, not significant.

Table 1. Identified phytochemical compounds in peanut shell extracts by using UPLC coupled to QTOf MS/MS.
6,931.4
2023-10-25T00:00:00.000
[ "Environmental Science", "Materials Science" ]
Green Gold—Dirty Gold, Tadó, Dept. Chocó, Colombia

In place of mercury, small-scale alluvial gold miners in Tadó, Dept. Chocó, Colombia produce "green gold" (oroverde) using locally available plant extracts. The leaves of Balso (Ochroma pyramidale) and Malva (Hibiscus furcellatus) are crushed by hand and are mixed with water to make a foamy liquid that is added to the gold pan (batea) instead of mercury. After the plant extract is added, the gold, magnetite, and other heavy minerals sink and the lighter minerals are floated out of the gold pan. For final clean-up, a combination of other methods may be used. However, ICP (Inductively Coupled Plasma) analyses indicate that even green gold contains 208-4530 ppm Hg; this mercury may have been released from dragas or other small-scale gold mining operations that continue to use mercury, from coal burning, from volcanism, or from native mercury released from cinnabar occurrences. ICP also indicates 308-106,000 ppm Ag and 452-585 ppm Pt.

Introduction
Gold has been mined from alluvial sources since ancient times using gravity methods in combination with mercury to amalgamate the gold [1] [2]; however, since the 1880s cyanide has also been used to leach gold from disseminated gold-silver-copper ores [3] and gold-bearing pyrite. Even though mercury and the mercury vapors that result from smelting the amalgam are toxic, mercury is widely used in Perú [4], Colombia [5], and elsewhere in South America for small-scale gold mining. Colombia is considered to be one of the top three users of mercury in the world [6], and mercury is openly sold and used in Remedios, Dept. Antioquia for small-scale gold mining [5]; however, cyanide is used on gold-silver-bearing pyrite ores at Marmato, Dept. Caldas. Because of the environmental issues and human health problems caused by the use of toxic chemicals, specifically mercury, the United Nations awarded alluvial miners in Dept. Chocó, Colombia, the Seed Award for their exemplary production of green gold, their aggressive efforts to reduce the use of mercury in the region, their attention to conservation, and their elimination of mining practices such as the use of backhoes and dredges that pollute and destroy the streams [7].

Small-Scale Gold Mining in Chocó
Since the 17th century, people of African origin (afrodescendientes) have lived in Dept. Chocó in western Colombia. Chocó's inhabitants were originally brought to the region as slaves to work the alluvial gold mines [8] [9]; however, in the 1980s the Colombian government began a number of social programs to improve conditions in the remote area [10], and now the government backs eco-friendly gold production [11]. The Amichocó (Friends of Chocó) Foundation and Corporación Oro Verde developed the Certified Green Gold Program (GGP) in Chocó and are expanding it to other regions. These programs provide a sustainable alternative to the use of mercury in underprivileged communities and guarantee socially and environmentally responsible small-scale gold mining. The gold and platinum (platinum group metals, or PGMs, include platinum, palladium, rhodium, ruthenium, osmium, and iridium) mined in the region are sold to local and international fair trade markets, and the miners receive a bonus on the market value of the gold [12]. The green gold programs also minimize the miners' exposure to the toxic mercury vapors released during alluvial gold mining and amalgam burning [5] [13].
The Green Gold Process in the Field In Tadó, Dept.Chocó, Colombia, alluvial gold (oroverde) is produced using plant extracts in place of mercury.The panned, alluvial gold concentrate, which also contains silver combined with the gold as electrum, may also include mm-sized platinum nuggets with PGMs, is treated with the extract from readily available local plants such as Balso (Ochroma pyramidale), Malva (Hibiscus furcellatus), Guácimo blanco (Goethalsia meiantha) and Yarumo (Cecropia virgusa) or other plants (Figure 1).These may include Cedro playero and Yarumo (Colombia) [14] and Murmuncho and Cuiguyum (Perú) [15].In other regions, Cedro playero (Pseudosamanea guachapele) may also be known as Iguá, Tobaco, or Cedro amarillo [16].The green gold process is a traditional African technique that was handed down to Chocó's modern small-scale miners by their ancestors and allows the use of plant extracts in place of mercury amalgamation to recover alluvial gold [11]. One or two leaves from the plant, for example, Balso, Malva, Cedro playero or other, are crushed by hand and mixed with water to make a foamy, sticky liquid (Figures 2-7).Mainly the liquid is added to the gold concentrate in the gold pan (batea) in place of mercury, the coarse gold and heavy minerals (jagua), such as magnetite, sink, and the remaining sedimentary material that may include quartz, feldspars, and lighter minerals remains suspended and are poured out of the batea (Figure 6, Figure 7) leaving a gold concentrate [14] [16] [17].The green gold process is effective because: 1) it produces a soapy mixture that traps and floats lighter minerals effectively separating them from the denser gold [11] (Table 1); and 2) at the same time, the plant juice-water mixture breaks the surface tension of the water and allows precipitation of the very fine-grained gold (Figure 6) that would normally float away; and therefore, the green gold process increases gold production.The process is analogous to minerals separation by the use of heavy liquids, that is, the soapy plant juice allows the high specific gravity minerals such as gold, platinum, and magnetite to sink while the low specific gravity minerals (quartz, feldspars, and micas) remain suspended and can be poured out of the gold pan leaving a gold concentrate.The gold concentrate may then be further cleaned of magnetite by the use of a hand magnet to remove the black magnetite-rich sand (jagua) from the panned, dried gold concentrate.The aventadero method is another cleaning method in which the dried gold concentrate is tossed into the air allowing waste mineral material to be removed by the wind leaving a more pure gold concentrate [18] or the concentrate may be placed on an inclined pan that is gently tapped and the more rounded waste material tumbles away leaving the gold.The use of borax as a substitute for mercury, mainly in hard-rock small-scale gold mining in Bolivia, the Philippines, and Indonesia, is a relatively new method that has also increased gold recovery [19]. Typically, the alluvial gold produced by non-mercury methods can be easily recognized-the grains are individual, mm-sized, flattened, and may be very shiny [5].Green gold samples analyzed for this study (Table 2) were obtained from gold shops in the respective areas and not in the field because of the presence of armed groups (Figure 8) [20].2. 
Small amounts of gold, palladium, and silver may be used in memory cards.Tau-alluvial green gold concentrate, platinum removed, from gold shop in Tadó, Dept Chocó, Colombia; Rau-alluvial green gold concentrate from gold shop in Remedios, Dept.Antioquia, Colombia [5]; Qau-alluvial green gold concentrate from gold shop in Quibdó, Dept.Chocó, Colombia [5]; Inductively Coupled Plasma (ICP) analyses, in parts per million (ppm), by American Assay Laboratories, Sparks, NV. nr-not reported. Dirty Gold However, despite the use of this sustainable, environmentally-friendly plant-based method, Inductively Coupled Plasma analyses (ICP) indicate that even green gold produced without any mercury may still contain 208 -4530 ppm Hg (Table 2).This mercury is interpreted to be mercury released from small-scale gold mines in the region that used or continue to use mercury.For example, even as the market moves toward green gold, the jungle in Chocó is already pock-marked with craters where gold-bearing sediments were extracted and treated with mercury [21].Gold dredges, or dragas [5] used in the waterways in Dept.Chocó use a copper plate smeared with mercury over which the gold-bearing sediments were washed and this process also released mercury to the environment.Mercury may also be released during volcanic eruptions, from coal-burning, from epithermal mineral occurrences, and some hot springs.Cinnabar occurrences in the region [5] [22] may also provide some native mercury that would readily amalgamate with the alluvial gold.The ICP analyses also indicated 308 -106,000 ppm Ag and 452 -585 ppm Pt (Table 2). Discussion The use of plant extracts in Tadó for alluvial gold production is a relatively new, environmentally sound, and sustainable method for small-scale gold mining in Colombia with applications to small-scale gold mining in the region.However, the properties of the liquid extracted from these plants may not be unique to the species described herein.Therefore, it is necessary to consider other species of the Malvaceae family (ex.genus Malachra, Matisa or others) as well as other plants that may have similar properties.These include Clausiaceae (Chrysochlamys, Clusia), Euphorbiaceae (Acalypha, Alchornea, Croton, Hyeronima), and Moraceae (Ficus). Conclusion The green gold method is inexpensive, sustainable, eliminates the use of mercury, and helps recover finegrained gold that can only be trapped by mercury.However, because of several geologic and environmental factors, the green gold still contains contaminant mercury that must be retorted and removed at the smelter-this recovered mercury may then be recycled or sold.In addition to the mercury, the gold from the Chocó region also contains silver and platinum (PGMs) that must be parted at the smelter before sale.But most importantly, the advent and use of green gold methods is sustainable; inexpensive; reduces the exposure of the small-scale miner to toxic mercury and mercury fumes during amalgam burning; and reduces anthropogenic mercury releases related to small-scale gold mining to the environment. Figure 3 . Figure 3. Crushing the Malva leaf by hand with water makes a foamy liquid. Figure 5 . Figure 5.The liquid that results from hand crushing the plant with water is added to the batea in place of mercury. Figure 7 . Figure 7. Green gold concentrate, see Table2.Small amounts of gold, palladium, and silver may be used in memory cards. Figure 8 . Figure 8. 
Security in the gold mining areas is a serious concern.The graffiti on the backhoe indicates the presence of ELN (Ejercito de Liberación Nacional/National Liberation Army), an armed paramilitary group, in the mining area [5] [20]. Table 1 . Specific gravity of common minerals. Table 2 . Inductively Coupled Plasma (ICP) analyses of three alluvial green gold concentrates from Colombia.
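To make the reported concentrations easier to interpret, note that a parts-per-million value corresponds directly to milligrams of the element per kilogram of concentrate; the worked conversion below for the highest mercury value is only an illustration, and the one-kilogram basis is an assumption rather than a reported sample mass.

4530 ppm Hg = 4530 mg Hg per kg of concentrate = 0.453 wt%

so one kilogram of such concentrate would carry roughly 4.5 g of mercury, whereas the lowest reported value (208 ppm) corresponds to about 0.02 wt%.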
2,301.6
2015-11-16T00:00:00.000
[ "Geology" ]
Prevalence of pvmrp1 Polymorphisms and Its Contribution to Antimalarial Response As more sporadic cases of chloroquine resistance occur (CQR) in Plasmodium vivax (P. vivax) malaria, molecular markers have become an important tool to monitor the introduction and spread of drug resistance. P. vivax multidrug resistance-associated protein 1 (PvMRP1), as one of the members of the ATP-binding cassette (ABC) transporters, may modulate this phenotype. In this study, we investigated the gene mutations and copy number variations (CNVs) in the pvmrp1 in 102 P. vivax isolates from China, the Republic of Korea (ROK), Myanmar, Papua New Guinea (PNG), Pakistan, the Democratic People’s Republic of Korea (PRK), and Cambodia. And we also obtained 72 available global pvmrp1 sequences deposited in the PlasmoDB database to investigate the genetic diversity, haplotype diversity, natural selection, and population structure of pvmrp1. In total, 29 single nucleotide polymorphisms reflected in 23 non-synonymous, five synonymous mutations and one gene deletion were identified, and CNVs were found in 2.9% of the isolates. Combined with the antimalarial drug susceptibility observed in the previous in vitro assays, except the prevalence of S354N between the two CQ sensitivity categories revealed a significant difference, no genetic mutations or CNVs associated with drug sensitivity were found. The genetic polymorphism analysis of 166 isolates worldwide found that the overall nucleotide diversity (π) of pvmrp1 was 0.0011, with 46 haplotypes identified (Hd = 0.9290). The ratio of non-synonymous to synonymous mutations (dn/ds = 0.5536) and the neutrality tests statistic Fu and Li’s D* test (Fu and Li’s D* = −3.9871, p < 0.02) suggests that pvmrp1 had evolved under a purifying selection. Due to geographical differences, genetic differentiation levels of pvmrp1 in different regions were different to some extent. Overall, this study provides a new idea for finding CQR molecular monitoring of P. vivax and provides more sequences of pvmrp1 in Asia for subsequent research. However, further validation is still needed through laboratory and epidemiological field studies of P. vivax samples from more regions. Introduction Plasmodium vivax was responsible for up to 4.5 million cases of malaria in 2020 [1]. Although it is rarely fatal, P. vivax malaria is the leading cause of malaria-related deaths outside of Africa [2]. In most vivax-endemic areas, a combination of chloroquine (CQ) is the first-line treatment for uncomplicated vivax malaria. Despite inexpensive, well tolerated, and widely available chloroquine resistance (CQR), vivax malaria was first reported from Papua New Guinea (PNG) in 1989 [3], and has been documented in more than ten countries [4], especially in multiple regions of Myanmar [5][6][7]. P. vivax resistance to other antimalarial drugs, such as mefloquine (MQ), sulfadoxine-pyrimethamine (SP) and primaquine (PQ), has also been reported widely [8,9]. The rise of these drug-resistant parasites threatens the global efforts to control malaria. However, molecular markers of drug resistance in P. vivax remain elusive [10]. Generally, genetic variation in the expression of transporter proteins could contribute to evading antimalarial action [11]. ATP-binding cassette (ABC) transporters are transmembrane proteins that can carry various substrate types, such as drugs and metabolic products [12]. The overexpression or mutation of many ABC transporters can lead to drug resistance [13]. 
As one of the members of the ABC subfamily C, the multidrug resistanceassociated proteins (MRPs) are associated with antimalarial resistance. The increased expression of pfmrp1 has been associated with the resistance of MQ and CQ, and gene polymorphisms in pfmrp1 with in vivo selection after SP and artemether/lumefantrine treatment [14][15][16]. Furthermore, the deletion of this gene in the CQR P. falciparum strain results in increased sensitivity to CQ, Quinine (QN), artemisinin, piperaquine (PPQ) and PQ [17]. Since P. vivax cannot be cultured continuously in vitro [18], most research on the molecular mechanism of drug resistance in P. vivax has focused on the homologous genes related to the drug resistance of P. falciparum [10,19]. Hence, we chose to study the association between pvmrp1 and the development of P. vivax resistance to antimalarial drugs. In this study, we genotyped 94 P. vivax isolates from China, the Republic of Korea (ROK), Myanmar, PNG, Pakistan, the Democratic People's Republic of Korea (PRK), and Cambodia, and combined the 72 available global pvmrp1 sequences deposited in the PlasmoDB database to analyze the characterization of genomic variation and population genomics methods, including genetic differentiation, haplotype network, linkage disequilibrium (LD) and the phylogenetic tree. Meanwhile, the correlation between pvmrp1 and antimalarial drug susceptibility observed in the previous in vitro assays was further analyzed. Study Sites and Participants Clinical blood samples with P. vivax infections (n = 102) were obtained from seven countries, including China (n = 46), ROK (n = 27), Myanmar (n = 21), PNG (n = 3), Pakistan (n = 3), PRK (n = 1), and Cambodia (n = 1). The Chinese samples were collected from local hospitals or centers for disease control and prevention in central China from 2005 to 2008 [20]; The South Korean samples were from local hospitals in endemic areas, such as the ROK, from 2007 to 2009 [21]; The samples of Myanmar were collected from Wet-Won Station Hospital, Yangon, Myanmar, in 1999 [21]; Other samples of P. vivax isolates were imported malaria cases in China. The protocol was reviewed and approved by the National Institute of Parasitic Diseases, the Chinese Center for Disease Control and Prevention, the Kangwon National University Hospital Human Ethics Committee, and the Myanmar Department of Health. The sequence of pvmrp1 from 72 P. vivax isolates deposited in the PlasmoDB database were downloaded and analyzed, and originated from 10 countries, including South America: Columbia (n = 21), Peru (n = 16), Mexico (n = 13), and Brazil (n = 3); Southeast Asia: Myanmar (n = 4) and Thailand (n = 7); Oceania: PNG (n = 4); East Asia: PRK (n = 1); South Asia: India (n = 2) and Africa: Mauritania (n = 1). All sample information was listed in Spreadsheet S1, and all sequences have been uploaded to Genbank with accession numbers from ON933478 to ON933571. Single Nucleotide Polymorphisms (SNPs) Identification in pvmrp1 Gene According to the manufacturer's instructions, genomic DNAs from the 102 whole blood samples were individually extracted using a QIAamp DNA blood kit (Qiagen, Valencia, CA, USA) and were stored at −20 • C in the previous studies. The pvmrp1 gene was amplified by nested or semi-nested PCR using specific primers (Table 1). 
All reactions were performed in 20 µL containing 4 µL of 5× Phusion HF Buffer (7.5 mM Mg 2+ plus), 0.2 mM of each dNTP, 0.25 µM of each outer primer, 0.4 U Phusion High Fidelity DNA Polymerase (New England Biolabs, Ipswich, MA, USA) and 1 µL of genomic DNA or the amplicon from the first PCR. The PCR was performed with initial denature at 98 • C for 30 s, followed by 35 cycles of 98 • C for 10 s, 56-61 • C for 30 s (Table 1), and 72 • C for 1.5 min, and a final extension period at 72 • C for 10 min. The second round of PCR amplification products was purified and sequenced by GenScript (Nanjing, China). The nucleotide sequences were compared and spliced using Lasergene software (DNASTAR, Madison, WI, USA). Determination of pvmrp1 Copy Number (CN) The copy number variations (CNVs) of pvmrp1 were measured by TaqMan-BHQ1 probe quantitative PCR assays performed on the Roche LC480 thermal cycler. A reference plasmid was constructed with pvmrp1 (nt, 1762-1872) and pvtubulin (nt, 1644-1765) fragments in a ratio of 1:1 similar as described previously [21]. Probes and primers used for amplification of both genes were listed in Table 1. The probe target pvmdr1 was labeled at the 5 end with the FAM reporter dye, and pvtubulin was the HEX reporter dye, both of which were labeled at the 3 end with the quencher dye BHQ1. A real-time PCR was conducted in 10 µL volumes containing 5 µL of Lightcycler ® 480 Probes Master 2 × (Roche Applied Science, Penzberg, Germany), 400 nM of each forward and reverse primer, 250 nM of each probe, and 1 µL of template DNA. Amplifications were performed in triplicate and the cycling parameters were as follows: 95 • C for 10 min, then 40 cycles of 95 • C for 15 s and 58 • C for 30 s. The single copy of the pvtubulin gene served as an internal control. The relative CN of pvmrp1 was calculated using a relative standard curve method as normal, and the amplifications were repeated according to specifications [20,21]. Data Analysis The multiple sequence alignments of worldwide isolates containing the wild reference sequence of pvmrp1 were obtained using the MUSCLE in the MEGA v7.0.18 program to obtain SNPs [25,26]. Moreover, the average nucleotide diversity (π), haplotype diversity (Hd), and the neutrality tests (Tajima's D test and Fu and Li's D* test) were further analyzed to identify pvmrp1 gene polymorphism and determined whether it is under the neutral evolution model [26,27], in order to evaluate the evolutionary relationship of the pvmrp1 gene. The estimation of genetic differentiation (F ST ) of the pvmrp1 was analyzed. Haplotype network was also implemented to identify the genetic association of the pvmrp1 haplotypes by means of the Median-Joining method in the NETWORK v5.0 program [28]. The phylogenetic tree of the aligned sequences was constructed using the Neighbor-Joining method in the MEGA v7.0.18 program [29]. In addition, pairwise LD of pvmrp1 gene at different polymorphic sites was calculated using the DNASP v5.10.01 program [30]. Compared to the previous in vitro drug sensitivity test [20,21], Fisher's exact test and chi-square analysis were performed to analyze whether it was related to the polymorphism of pvmrp1. As the sample size is too small to carry out statistical analysis, the data would not be displayed. A value of p < 0.05 was considered significant. Identification of Gene Mutations and CN in pvmrp1 among Collected Blood Samples in Asia Of the 102 P. 
vivax isolates, 94 samples (92.1%) were sequenced successfully for the pvmrp1 gene, including China (n = 43), ROK (n = 27), Myanmar (n = 18), PNG (n = 2), Pakistan (n = 2), PRK (n = 1) and Cambodia (n = 1). Compared with Sal-I as the wild reference type, 29 polymorphic sites were observed, 23 (79.3%) of which resulted in nonsynonymous mutations, one site E533 (GAA) deletion, and five (17.2%) synonymous mutations were identified. Compared with the previously reported SNPs [10], nine nonsynonymous mutations were repeatable, and the present analysis revealed 20 different SNPs ( Figure 1B). Three non-synonymous substitutions, including T259R (97.87%), Y1393D (97.87%) and V1478I (95.74%), were high prevalence and approached fixation. The wild type T259 was found in two isolates from Oceania, and the other two wild types were found in isolates from South Asia and East Asia ( Table 2). Of the 29 mutations found in this study, 20 were shown to be region-specific, such as mutations R281K, S354N, E787D, A853, G949D, and V1360 which were only observed in East Asia (range: 12.7-38.0%), V879 and L1207I were unique to the Southeast Asia at high frequency (89.5%). Three mutations (T234M, Q906E and I1232) were more frequent in isolates from Southeast Asia compared to the other areas. Although only 2 isolates were from South Asia, four mutations (F271Y, T282M, F560I and G1419A) were identified exclusively. Also, two isolates were confirmed, I1620T specifically in Oceania (Table 2). In this study, the amplification of pvmrp1 was determined by the relative CN, which was calculated by the standard curve method. Except for one isolate from China, the CN of pvmrp1 was assessed successfully from the other 101 isolates. The estimates of pvmrp1 CN for these isolates ranged from 0.68 to 2.55. Most of the isolates carried one copy of the gene, and only three isolates had double CNs, two from the ROK and one from Pakistan ( Figure 2). Correlation between Polymorphisms and In Vitro Drug Susceptibilities Of the 102 sequenced isolates mentioned above, partial samples from China and ROK were tested for antimalarial drugs susceptibility in the previous study [20,21]. Combined, a total of 39, 34, 39, and 13 isolates were cultured for more than 24 h and assayed for the susceptibility to CQ, QN, MQ and pyrimethamine (PYR), respectively (Table S1, Figure S1). For CQ, the geometric mean IC 50 was 20.92 nM (95% CI: G949D, K1219N, Y1393D, V1478I, and H1586Y), did not appear to be correlated with the IC 50 values of the four antimalarial drugs (Table S2). Using an IC 50 value ≤ 220 nM as the sensitivity standard, the chi-square analysis was performed to analyze the correlation between the CQ sensitive/insensitive isolates and the mutation sites, the results showed that only the prevalence of S354N between these two CQ sensitivity categories revealed a significant difference (p < 0.05; Table 3). With regard to the variation of pvmrp1 CN, increased pvmrp1 CN did not appear to significantly alter parasites' susceptibilities to CQ, MQ and QN ( Figure S2). The Polymorphism of pvmrp1 from Different Regions To estimate the degree of genetic differentiation of the pvmrp1 in global isolates, the sequences of pvmrp1 obtained from the studied regions were compared with homologous sequences from the PlasmoDB database. 
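The association test described above, which compares the prevalence of a mutation between the CQ-sensitive and CQ-insensitive categories defined by the IC 50 ≤ 220 nM cut-off, reduces to a 2 × 2 contingency-table test. The sketch below is only an outline of such a calculation and assumes SciPy is available; the counts shown are hypothetical placeholders, not the study data, and the study's actual analysis used Fisher's exact test and chi-square analysis as stated in the Data Analysis section.

from scipy.stats import chi2_contingency, fisher_exact

# Rows: CQ-sensitive vs CQ-insensitive isolates (IC50 <= 220 nM cut-off);
# columns: wild-type S354 vs mutant S354N. Hypothetical counts for illustration only.
table = [[20, 3],
         [5, 11]]

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)  # preferable when expected counts are small
print(p_chi2, p_fisher)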
An additional 72 pvmrp1 sequences from PlasmoDB were downloaded for further analysis, including Myanmar (n = 4), PNG (n = 4), PRK (n = 1), Columbia (n = 21), Peru (n = 16), Mexico (n = 13), Thailand (n = 7), Brazil (n = 3), India (n = 2) and Mauritania (n = 1). All sequences were aligned and cut to 4095 bp (1087-5181 bp) by MEGA7.0. An analysis of the polymorphism of pvmrp1 within the 166 global isolates revealed low nucleotide diversity (π = 0.0011) and high haplotype diversity (Hd = 0.9290). We also found significant differences in the gene polymorphism of pvmrp1 in different regions. Excluding Africa, the polymorphism of the pvmrp1 gene was the highest in South Asia (π = 0.0015) and the lowest in Southeast Asia (π = 0.0006). These results suggested that pvmrp1 in different regions was subjected to different natural selection pressures and showed different levels of gene polymorphism (Table 4).

Natural Selection of Polymorphic Region of pvmrp1 from Different P. vivax Isolates
To determine whether natural selection promoted the generation of pvmrp1 gene diversity in global isolates, we calculated the ratio of non-synonymous to synonymous mutations (dn/ds). The dn/ds for pvmrp1 of all the 166 isolates was 0.5536, indicating that the pvmrp1 gene was affected by purifying selection. The overall Tajima's D test value for pvmrp1 of all the isolates was negative (Tajima's D = −1.1863, p > 0.1), and the Fu and Li's D* value of all the isolates was −3.9871 (p < 0.02) (Table 4). This suggests that the departure of pvmrp1 from a neutral model of polymorphism was due either to a recent population expansion or to genetic hitchhiking. In addition, significant Fu and Li's D* values (< 0; 0.01 < p < 0.05) were found in isolates from South America and Southeast Asia.

Genetic Differentiation, Haplotype Network and LD Analysis of Polymorphic Region of pvmrp1
The level of genetic differentiation of pvmrp1 was estimated by F ST values. As there was only one isolate from Africa, it was not considered. A low level of genetic differentiation was found between the isolates from South America and South Asia (F ST = 0.0809), while the other comparisons showed moderate to high levels of genetic differentiation (0.1661-0.5498). This was especially true of the Southeast Asian isolates, which showed great genetic differentiation based on the F ST values (Table 5). To further verify the differential selection between groups, the 11 non-synonymous SNPs (E787D, Q906E, G949D, C1018Y, L1207I, L1287I, Y1393D, G1419A, V1478I, T1525I, and H1586Y), which occurred twice without deletion, were selected to construct a haplotype network among all 166 samples. The haplotype analysis of pvmrp1 in this study revealed 22 distinct haplotypes (Figure 3). Eight of the haplotypes were singleton haplotypes, of which H_4 and H_5 were found exclusively in East Asian isolates, H_10 and H_11 existed only in South Asian isolates, H_12 was found in Southeast Asia, and H_17, H_18, and H_19 were found specifically in South America. The mutant types H_2: EEGCLLDGITY (18.7%) and H_7: EEGCLLDGITH (18.1%) were the most common. H_2, H_3 (DQDCLLDGITH), and H_13 (EEGCLLDAITH) were the dominant haplotypes in East Asia (81.8%, 100.0%, and 69.6%, respectively), H_14 (DQGCLLDGVTH) was mainly distributed in South America (63.6%), and H_7 was distributed worldwide, mainly in East Asia (40%), South America (33.3%) and Oceania (16.6%) (Figure 4, Table S3).
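As a minimal sketch of how the nucleotide diversity value reported above can be computed, the function below averages the per-site proportion of pairwise differences over all pairs of aligned sequences of equal length. It is an illustrative outline only (the function name and the short example sequences are hypothetical); the study itself obtained π, Hd and the neutrality statistics with MEGA and DnaSP.

from itertools import combinations

def nucleotide_diversity(aligned_seqs):
    # Average pairwise proportion of differing sites (pi) across an alignment.
    n = len(aligned_seqs)
    if n < 2:
        raise ValueError("at least two sequences are required")
    length = len(aligned_seqs[0])
    total = 0.0
    for s1, s2 in combinations(aligned_seqs, 2):
        diffs = sum(a != b for a, b in zip(s1, s2))
        total += diffs / length
    return total / (n * (n - 1) / 2)

# Example with three short hypothetical sequences (not real pvmrp1 data):
# nucleotide_diversity(["ATGCAT", "ATGAAT", "ATGCAA"]) gives about 0.222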
In particular, 14 haplotypes found in the isolates from South America, which indicated the highest haplotype diversity, and nine haplotypes in East Asia, which was consistent with the results of haplotype diversity in Table 4. In addition, by pairwise LD of the pvmrp1 gene at 11 non-synonymous SNPs using the DNASP v5.10.01 program, we observed that there was a strong LD between E787D and Q906E, G949D, and H1586Y (p < 0.0001) ( Figure 5). Meanwhile, the strong LD was present in pvmrp1 gene between G949D and H1586Y, Q906E (p < 0.0001), and existed in the following pairs also: Q906E/L1207I, V1478I/T1525I (p < 0.0001). We found that most of the genes with LD occurred around the ABC transporter domain, such as E787D and Q906E, G949D. Furthermore, phylogenetic analysis of the 166 isolates indicated that the genetic evolution of pvmrp1 varies in different regions, which have obvious population genetic structures related to geographical isolates. The genetic differentiation of P. vivax isolates indicated that the geographically distant was high; for example, the genetic differences between pvmrp1 gene from East Asia and South America were the highest ( Figure 6). Discussion and Conclusions Plasmodium vivax is the most geographically widespread cause of human malaria. Due to the lack of an in vitro culture and transgenic system, molecular markers represent a more practical tool to monitor the introduction and spread of drug resistance in P. vivax [31]. In our study, as a member of ABC transporters, PvMRP1, localizes at the parasite plasma membrane, and the predicted primary protein structure includes 11 TM and 2 NBDs, which function primarily as drug transports [32]. Consistent with the potential involvement of PvMRP1, it has been observed as a transporter with a broad range of substrates, including important endogenous substances such as glutathione and a lot of drugs with diverse structures. CQ was identified as a substrate for this ABC transporter, and the mutations of pfmrp1 were found to be associated with reduced susceptibility to CQ and also to QN [13]. In addition, pvmrp1 transcription level is decreased in the trophozoite stage, which is consistent with the phenotype that P. vivax trophozoites insensitive to CQ [33]. All of these suggest that pvmrp1 may be a potential molecular marker of drug resistance. In this study, we amplified and sequenced the pvmrp1 gene of 102 whole blood samples originating from Asia, and 29 genetic mutations were found out of the 94 successfully sequenced isolates. Among them, most mutations have not been hitherto reported, and most of these mutations are found only in East Asia, which is likely due to the low number of east Asian isolates used in previous studies. The mutations T259R (97.87%), Y1393D (97.87%) and V1478I (95.74%) were approaching fixation in the sequenced samples, and the mutations V879 (89.5%) and L1207I (89.5%) were highly prevalent and present in Southeast Asia exclusively. A small number of imported isolates, such as South Asia (n = 2) and Oceania (n = 2) were not considered. We found significant differences in the types of mutations prevalent in East and Southeast Asia except for the mutations approaching fixation. A sequence similarity analysis of pvmrp1 and pfmrp1 indicated that the Y1393D and G1419A mutations of pvmrp1 overlap with pfmrp1 locations residing between the ABC transmembrane and the second ABC transporter domains which were associated with drug resistance [10]. 
Furthermore, a recent study has provided evidence that G1419A and V1478I had a significant association with the IC 50 to CQ and artesunate, and G1419A was also associated with the decreased susceptibilities to PPQ, MQ, and QN [34]. However, in combination with our previous in vitro drug susceptibility studies [20,21], we could not find a correlation between the polymorphisms of pvmrp1 and antimalarial drug susceptibilities (p > 0.05). The isolates used in our previous drug susceptibility studies, Y1393D and V1478I, were fixed, and no G1419A was observed in the pvmrp1 gene, which may limit the correlation analysis. Moreover, we found that the prevalence of S354N between the two CQ sensitivity categories revealed a significant difference by chi-square analysis, although more clinical samples are needed to confirm this conclusion. Numerous studies have shown that gene CN polymorphism is related to genetic and phenotypic variation, which is as important as SNPs [35,36]. Three CNVs were also determined from two ROK isolates and one Pakistan isolate in this study, which indicates the variation in pvmrp1 gene amplification in Asian isolates. However, we did not find a statistically significant correlation between CNVs and drug sensitivity. The 94 sequenced samples combined with the 72 known sequences from the PlasmoDB database formed pvmrp1 sequences worldwide. The result displayed a low genetic diversity at the pvmrp1 with an average π of 0.0011. Haplotype analysis of pvmrp1 showed that haplotype diversity varies in different regions, and haplotype polymorphism was higher in South America and Oceania than in the other areas. These results suggested that pvmrp1 of P. vivax from different areas was subjected to various natural selection pressures and showed different levels of gene polymorphism and haplotype polymorphism. In this study, the rates of non-synonymous mutation to synonymous mutation of pvmrp1 were < 1, and the neutral evolutionary test statistic Tajima's D test was negative (Tajima's D = −1.1863, p > 0.1), indicating that the pvmrp1 gene was affected by purifying selection [27]. Fu and Li's D* test further confirmed that the pvmrp1 gene was under purifying selection (Fu and Li's D* = −3.9871, p < 0.02), which suggested that the mutations and accumulate at silent sites, there were likely to be lots of segregating sites, but not much heterozygosity. This could explain that nucleotide diversity (π) was small and the average number of nucleotide differences (K) was high in Table 4. Furthermore, genetic differentiation, the haplotype network, and phylogenetic analysis of the 166 isolates indicated that the genetic evolution of pvmrp1 varies in different regions, which has obvious population genetic structures related to geographical isolates. The genetic differentiation between P. vivax isolates that were geographically distant was high, for example, the genetic differences between the pvmrp1 gene from Southeast Asia and South America were the highest. Perhaps parasite movement could be controlled by different factors, including geographical barriers, distance, poor road infrastructure, cultural and language barriers, and the effectiveness of malaria control interventions. Overall, PvMRP1 localizes on the P. vivax plasma membrane with 11 TM, the gene nucleotide polymorphism, and CNVs were analyzed with the field P. vivax isolates in our study. We found some unreported point mutations of pvmrp1 in the collected samples, and S354N substitution may lead to CQR in P. vivax. 
In combination with the worldwide isolates, the analysis showed that the pvmrp1 gene had a high haplotype diversity, low nucleotide diversity and was under purifying selection. This study showed the polymorphisms of the pvmrp1 gene from worldwide isolates, and pointed to the potential contribution of the pvmrp1 gene in the CQR of P. vivax. However, the lack of correlation between the pvmrp1 polymorphisms and the IC 50 values of the four antimalarial drugs (CQ, MQ, QN and PYR), highlights the need for more informative tools to function the role of pvmrp1 gene. Supplementary Materials: The following supporting information can be downloaded at: https: //www.mdpi.com/article/10.3390/microorganisms10081482/s1, Table S1: IC 50 values to four antimalarial drugs of P. vivax isolates; Figure S1: Dot plots of in vitro susceptibilities of P. vivax isolates to four antimalarial drugs; Table S2: Association of SNPs in pvmrp1 with in vitro susceptibilities to CQ, MQ, QN and PYR; Figure S2: Comparison of IC 50 values for three antimalarials between parasites with single copy and multicopies of pvmrp1 gene; Table S3: Distribution of the number of 22 pvmrp1 haplotypes in worldwide isolates; Spreadsheet S1: Sample information.
5,626.6
2022-07-22T00:00:00.000
[ "Biology" ]
Labor Investment in a New International Mixed Market This paper considers a continuous-time dynamic mixed market model of labor investment decisions of a domestic public firm and a foreign private firm. The paper studies the optimal levels of preemptive investment for the long-run structure of the international mixed market. It is then demonstrated that there are no perfect equilibria in which neither firm invests to its steady-state reaction curve. Some studies include foreign firms.For example, Fjell and Pal [20] extend the analysis to an international context by considering a model where a state-owned public firm competes with both domestic and foreign private firms and examine the effects of entry by an additional private firm.Pal and White [21] examine the effects of privatization on strategic trade policy by incorporating strategic trade policy instruments in an international mixed model where a state-owned public firm competes with both domestic and foreign private firms.Fjell and Heywood [22] consider a mixed oligopoly in which a public Stackelberg leader competes with both domestic and foreign private firms.Matsumura [23] examines a Stackelberg mixed duopoly where a public firm competes against a foreign private firm.Furthermore, Fernández-Ruiz [24] studies firm' decisions to hire managers when a public firm with social welfare objectives competes against a foreign private firm with profit objectives. As is well known, international mixed oligopolies are common in developed and developing countries as well as in former communist countries.Public firms compete against foreign private firms in many industries, such as banking, life insurance, automobiles, airlines, steel, shipbuilding, and tobacco. 2herefore, we examine a continuous-time dynamic model of the strategic investment decisions of a domestic welfare-maximizing public firm and a foreign profit-maximizing private firm.The possibility of firms using excess capacity to strategic investment was studied by [25][26][27], and was also extended in two-stage models by [28][29][30][31][32][33][34].Furthermore, Spence [35] examines the strategic investment decisions of profit-maximizing private firms in a new industry or market by using a continuous-time asymmetric dynamic model.In Spence's model, there exist the leading and the following firms.He shows that the equilibrium is for the leading firm to invest as quickly as possible to some capital level and then stop.Fudenberg and Tirole [36] establish the existence of a set of perfect equilibria by using Spence's dynamic model and suggest that the steady state of the game is usually on neither firm's steady-state reaction curve; that is, there are early-stopping equilibria where neither firm invests to its steady-state reaction curve.Ohnishi [11] studies the perfect equilibria of a continuous-time mixed market model of the strategic investment decisions of welfare-maximizing public and profit-maximizing private firms and shows that the equilibrium outcomes of the mixed market model differ from those of Fudenberg and Tirole's private mar-ket model; that is, there are no early-stopping equilibria where neither firm invests to its steady-state reaction curve. All these studies focus on capital as strategic investment.On the other hand, we focus on labor; that is, we use lifetime employment contracts as strategic investment. 
The purpose of this study is to construct a set of perfect equilibria of a continuous-time dynamic model in which a domestic welfare-maximizing public firm and a foreign profit-maximizing private firm compete in labor investment.

The remainder of this paper is organized as follows. In Section 2, the elements of the continuous-time model are formulated. Section 3 characterizes the equilibrium outcomes of the continuous-time model. Section 5 concludes the paper.

The Model
Let us consider an international mixed market with one domestic welfare-maximizing public firm (firm D) and one foreign profit-maximizing private firm (firm F). In the remainder of this paper, when i and j are used to refer to firms in an expression, they should be understood to refer to D and F with i ≠ j. Time t is continuous, and the horizon is infinite. At each time, each firm can employ new employees and legally enters into a lifetime employment contract with all of its employees.

Firm i's net profit at time t depends on both firms' current labor stocks and on firm i's own labor investment: revenue is earned at the market price, which is a decreasing function of the total labor stock (an increase in employment reduces the price through an increase in output), while costs consist of firm i's cost per employee multiplied by its labor stock plus its labor investment at time t. The price function is assumed to be strictly concave. Labor stocks cannot decrease, and each firm has a constant upper bound on the amount of its labor investment at every time t. At time zero, each firm enters the market with its initial labor stock and can start investing. At each time, each firm employs new employees, legally enters into lifetime employment contracts with them, and expands its scale; there is therefore an upper bound on the number of employees whom each firm can newly employ at each time.

Domestic social welfare at time t is the sum of domestic consumer surplus at time t and firm D's net profit at time t. Each firm's objective is a net present value: firm F maximizes the net present value of its profits, discounted at the common discount rate r, while firm D maximizes the net present value of domestic social welfare. Firm F's profit is not included in domestic social welfare because firm F is a foreign competitor; therefore, firm D can be expected to behave more aggressively toward firm F. If r is high, then future values receive a lower weight than under a lower r. If r tends to zero, then firm D maximizes time-average social welfare, and firm F maximizes its time-average profit. Since the arguments for the equilibrium outcomes in the discounting case are the same as in the no-discounting case, we devote our attention to the case in which firms do not discount their objectives.

We examine the perfect equilibrium outcomes of a state-space game. A state-space game is a game in which both the payoffs and the strategies depend on the history only through the current state. A perfect equilibrium is a strategy combination that induces a Nash equilibrium for the subgame starting from every possible initial state in the state space.
Firm 's steady-state reaction function is defined as the locus of points which give the final optimal level of for each final value of The equilibrium occurs where each firm maximizes its objective with respect to its own labor level, given the labor level of its rival.Firm D's steady-state reaction function is derived as follows.Firm D aims to maximize social welfare with respect to , given .The equilibrium must satisfy the following conditions.The firstorder condition for firm D is Furthermore, we have Since , is upward sloping.This means that firm D responds to more aggressive play with more aggressive play. Next, we derive firm F's steady-state reaction function.Firm F aims to maximize its profit with respect to , given .The equilibrium must satisfy the following conditions.The first-order condition for firm F is and the second-order condition for firm F is Furthermore, we have Since and , is downward sloping.This means that firm F's optimal response to more aggressive play by firm D is to be less aggressive.We assume that and have a unique intersection which will be the Nash equilibrium of the state-space game. Equilibrium Outcomes In this section, we analyze the perfect equilibrium outcomes of the continuous-time model.First, we consider the case shown in Figure 1, where represents firm 's steady-state reaction curve.The figures in this paper are drawn with straight lines for simplicity.Spence [35] and Fudenberg and Tirole [36] define the industrial growth path (IGP) as a locus on which each firm invests as quickly as possible.Firms are willing to invest as quickly as possible if there are only profit-maximizing firms in a market and the reaction curves are downward sloping.However, in this paper, we examine the case of a mixed market.As understood from this figure, social welfare increases as firm F increases its investment.Firm D hopes that firm F will invest more.Hence, firm D does not have the incentive to invest as early as firm F does.Therefore, we will not introduce the IGP. We discuss each firm's actual investment paths by using Figure 1.Let A be each firm's initial labor stock.That is, each firm has an exogenously given labor stock, . Each firm can start investing at time zero.Each firm can employ new employees, given the constraints.Social welfare increases as firm F increases its investment, and therefore firm D hopes that firm F will invest more.Firm D will not have an incentive to invest as early as firm F does.Each firm continues to invest, given the constraints.If firm F continue to invest, then the industry continues to grow along (0) 0 i l  AB , and each firm will stop investing at a point where it find optimal. 
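Because the symbolic expressions were lost from this copy of the text, the following sketch restates, in one standard specification consistent with the verbal description, the objective functions and first-order conditions that the argument relies on. The author's exact functional forms may differ, so this should be read as an assumption-based reconstruction rather than the original equations.

\pi_i(l_D, l_F, a_i) = p(l_D + l_F)\, l_i - c_i l_i - a_i, \qquad p' < 0, \ p'' < 0,

W(l_D, l_F, a_D) = \int_0^{l_D + l_F} p(s)\, ds - p(l_D + l_F)(l_D + l_F) + \pi_D(l_D, l_F, a_D).

Under this specification, with L = l_D + l_F, firm D's steady-state first-order condition is p(L) - p'(L)\, l_F - c_D = 0, and firm F's is p(L) + p'(L)\, l_F - c_F = 0 with second-order condition 2p'(L) + p''(L)\, l_F < 0. Given p' < 0 and p'' < 0, these conditions yield, respectively, the upward-sloping reaction curve for firm D and the downward-sloping reaction curve for firm F described in the text.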
The industry continues to grow along AB C , and reaches on .At , if firm F continues to invest further, then its profit decreases.Hence, firm F invests up to and then stops.However, social welfare increases if firm D invests whether firm F invests or not.Therefore, firm D continues to invest, given the constraints.If firm D continue to invest, then the industry continues to grow along , and firm D will stop investing at a point where it find optimal.The industry continues to grow along , and reaches on .If firm D continues to invest further, then social welfare decreases.Hence, firm D invests up to C and then stops.Neither firm will have an incentive to invest at .This investment path becomes Firm F's profit decreases as the industry grows along .Therefore, firm F may try to stop firm D's investment before the investment path reaches .Even though firm F invests further, the best firm D can do is to invest to .Since this profit of firm F is lower than its profit at , this behavior of firm F is not a credible threat. R C Second, we consider the case shown in Figure 2. Firm D has an exogenously given labor stock, , while firm F has no labor, .In this case, firm D's initial labor stock level is equal to or larger than firm D's labor stock level associated with the intersection of both reaction curves.Since labor stocks cannot decrease, the equilibrium will never occur at any point to the left of .Firm F can increase its own profit and N social welfare by investing, and therefore it will invest.Firm D hopes that firm F will invest more.On the other hand, since firm D decreases social welfare by investing, the best it can do is not to invest.Therefore, firm F unilaterally continues to invest, given the constraints.The industry continues to grow along AE and reaches on .At , if firm F continues to invest further, then its profit decreases.Hence, firm F invests up to and then stops.Neither firm will have an incentive to invest at .This investment path is AE .Social welfare increases as firm F increases its investment, and thus an incentive by which firm F's investment is stopped before the investment path reaches does not happen to firm D. E Third, we consider the case depicted in Figure 3.In this case, each firm has an exogenously given labor stock, . 
Each firm can start investing at time zero.Since firm D can increases social welfare by investing, the best it can do is to invest.On the other hand, firm D decrease its own profit by investing, and therefore it will not invest.Firm D unilaterally continues to invest, given the constraints.The industry continues to grow along (0)  0 i l AG and reaches on .At , if firm D continues to invest further, then social welfare decreases.Hence, firm D invests up to and then stops.Since firm F decreases its own profit by investing, the best it can do is not to invest.Neither firm will have an incentive to invest at .This investment path is From above discussions, we can see that there are no early-stopping equilibria in the international mixed market model.The main result of this study is described by the following proposition.wishes to be as close to its reaction curve as possible.Therefore, the best firm F can do is to invest whether firm D invests or not.Both social welfare and firm F's profit increase as firm F increases its investment.Therefore, firm F continues to invest, and firm D does not have an incentive to stop firm F from investing.In Region I, since at least firm F continues to invest, the state will reach from Region I to either Region II or Region III. Second, we show each firm's strategy in Region II.Since F D F F ( , , ) l l a  is assumed to be concave in , firm F wishes to be as close to its reaction curve as possible.Firm F's profit decreases if firm F invests whether firm D invests or not.Hence, firm F, which maximizes its own profit, never invests in this region.Firm D wishes to be as close to its reaction curve as possible.Therefore, the best firm D can do is to invest.Firm D will invest up to a point on its reaction curve.In Region II, since firm D unilaterally continues to invest, the state will reach from Region II to Region III.F l Third, we show each firm's strategy in Region III.Each firm wishes to be as close to its own reaction curve as possible.If only firm F or both firms continue to invest, then firm F's profit will decrease.Hence, firm F does not invest.If only firm D continues to invest, then social welfare will decrease, and therefore firm D does not invest either.Each firm's best response to the other firm's strategy at any point of this region is not to invest.Consequently, each firm's optimization problem at any point in this region, given the other firm's strategy, induces a Nash strategy at any point of this region.Thus, the strategies are in perfect equilibrium, and the result follows.Q.E.D. Conclusions We have examined continuous-time dynamic competition of labor investment decisions of a domestic welfare-maximizing public firm and a foreign profit-maximizing private firm.Fudenberg and Tirole [36] examine continuous-time dynamic competition of capital investment decisions of private firms and show that there are early-stopping equilibria in which neither firm invests up to its steady-state reaction curve.On the other hand, we have demonstrated that there are no equilibria in which neither firm invests up to its steady-state reaction curve.There are many studies dealing with mixed markets that incorporate welfare-maximizing public firms.We will pursue further research on these studies in the future. Figure 1 . Figure 1.This investment path is ABC.
3,420.2
2010-05-25T00:00:00.000
[ "Economics" ]
Meiotic behavior of several Brazilian soybean varieties Despite the importance of soybeans, little cytogenetic work has traditionally been done, due to the small size and apparent similarity of the chromosomes. Fifteen soybean [Glycine max (L.) Merrill] varieties adapted for cultivation in two distinct regions of Brazil were analyzed cytogenetically. A low frequency of meiotic abnormalities was noted in all varieties, although they were not equally affected. Irregular chromosome segregation, chromosome stickiness, cytoplasmic connections between cells, cytomixis and irregular spindles were the main abnormalities observed, none of which had been described previously in soybeans. All of these abnormalities can affect pollen fertility. Pollen fertility was high in most varieties and was correlated with meiotic abnormalities. Although soybean is not a model system for cytological studies, we found that it is possible to conduct cytogenetic studies on this species, though some modifications in the standard methods for meiotic studies were necessary to obtain satisfactory results. INTRODUCTION The genus Glycine, which includes the cultivated soybean, comprises predominantly diploid (2n = 2x = 40) and tetraploid (2n = 4x = 80) species. Soybean contains 2n = 40 small (1.42-2.82 µm), morphologically similar somatic chromosomes (Sen and Vidyabhusan, 1960) that do not show sufficiently different banding patterns to allow chromosome identification (Ladizinsky et al., 1979). Palmer (1976) has pointed out the usefulness of cytogenetic methods for the improvement of soybeans. However, information on the cytogenetics of cultivated soybean is minimal when compared with other important crops. The causes of this lack of information include: i) the small but numerous chromosomes, which are indistinguishable from each other, and ii) the fact that the techniques usually used for cytological studies in other plant species are inadequate for soybean. In recent years, several important male-sterile soybean mutants have been described (see Graybosch and Palmer, 1988; Palmer et al., 1992) and a cytogenetic map of the 20 soybean chromosomes has been constructed for the relatively uncondensed pachytene chromosomes (Singh and Hymowitz, 1988). More recently, in situ hybridization has been used to characterize individual soybean metaphase chromosomes (Griffor et al., 1991). Despite its considerable economic importance for Brazil, there have been no detailed cytogenetic studies of soybeans in this country. Paraná State, which is home to the National Center for Soybean Research, is responsible for a large part of Brazilian soybean production. Current research at this center involves the analysis of spontaneous meiotic mutants that cause male sterility, which may be useful in hybridization programs. To further our understanding of soybean cytology and to improve the technique for meiotic studies, we have examined the meiotic behavior and pollen fertility of 15 varieties of soybean adapted for cultivation in two different regions of Brazil. MATERIAL AND METHODS Plants were grown in Maringá, PR, where the soil was prepared for soybean cultivation. Flower buds were collected from five plants of each variety for meiotic analysis and were fixed in FAA (ethanol:formaldehyde:acetic acid, 2:1:1 v/v) for 24 h, after which they were transferred to 70% alcohol and stored at 4 °C. Pollen mother cells (PMCs) were prepared by the squash technique and stained with 1% acetic carmine.
At least 250 PMCs in different phases of meiosis were evaluated for each plant and any abnormalities seen were recorded. The same procedures and stain used for meiotic analysis were employed with open flowers to test pollen sterility. One thousand pollen grains/plant were examined. The data were analyzed statistically by analysis of variance in a completely randomized design. Initially, the varieties were compared within the same group, and then the two groups were compared. The mean percentage of normal PMCs/variety in each group was compared using the Duncan test. RESULTS The 15 soybean varieties had a low frequency of meiotic abnormalities (Table I). Analysis of variance revealed significant differences (P < 0.05) in meiotic behavior among the varieties in group I, cultivated in Paraná State. In this group, the variety EMBRAPA 48 was the most affected by meiotic abnormalities. The varieties in group II, adapted for cultivation in central Brazil, had a more normal meiotic behavior than those in group I, though analysis of variance showed differences among them. There was also a significant difference between the meiotic behavior of the two groups (P < 0.05) as determined by analysis of variance. The meiotic abnormalities observed among the varieties included irregular chromosome segregation, chromosome stickiness, cytoplasmic connections among cells, cytomixis and irregular spindles. The meiotic phases generally most affected by these abnormalities were prophase I and metaphase I. Precocious migration of univalents to the poles (Figure 1a) was observed in all varieties, with the exception of EMGOPA 314, in which all of the cells had normal meiosis. Another frequent segregational abnormality observed in metaphase I of two varieties was non-oriented bivalents at the equatorial plate (Figure 1b). This abnormality occurred in the varieties OCEPAR 14 and EMBRAPA 48, with a significantly higher frequency in the latter. Laggard chromosomes in anaphases I and II were observed at a low frequency in some varieties. As a consequence of precocious migration of univalents, non-oriented bivalents and laggard chromosomes, some micronuclei (Figure 1c-e) were observed in telophase I and meiosis II. These micronuclei gave rise to microcytes in the tetrads (Figure 1f). Chromosome stickiness was observed only in the MT/BR-45, EMBRAPA 48 and EMBRAPA 62 varieties, and affected all meiotic phases (Figure 2a-d). The phenomenon ranged from slight stickiness to an indistinct compact chromatin mass involving the entire complement (Figure 2a,b), which impaired chromosome segregation. Bridges were observed in telophase I (Figure 2c) and pycnosis also occurred in some cells (Figure 2d). The most common abnormality in all varieties was cytoplasmic connections involving two or more microsporocytes (Figure 3a). The mean percentage of cytoplasmic connections ranged from 3.2 to 24.5% (Table II). Analysis of variance showed significant differences (P < 0.05) in this characteristic among the varieties. Although cytoplasmic connections were frequent, only one case of true chromosome transfer among cells (cytomixis) was observed (Figure 3b), although evidence of chromosome transfer was found in some cells with extra chromosomes (Figure 3c,d). Irregular spindles were observed in only a few cells. In meiosis I, tripolar rather than bipolar spindles were present (Figure 4a,b), whereas in meiosis II the spindles were convergent (Figure 4c). The test for pollen fertility showed a low percentage of sterile pollen grains (Figure 4d).
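The analysis-of-variance and mean-comparison procedure described in the Material and Methods can be illustrated with a short script. This is a minimal sketch, not the authors' code: the file name and column names are assumptions, and Tukey's HSD is used as a stand-in for the Duncan test, which has no standard implementation in the common Python statistics libraries.

```python
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical input: one row per plant, with the percentage of normal PMCs.
# Assumed columns: "variety", "group" (I or II), "pct_normal_pmc".
df = pd.read_csv("meiosis_scores.csv")

# One-way ANOVA among varieties within one group (completely randomized design).
group1 = df[df["group"] == "I"]
samples = [g["pct_normal_pmc"].values for _, g in group1.groupby("variety")]
f_stat, p_value = f_oneway(*samples)
print(f"Group I varieties: F = {f_stat:.2f}, P = {p_value:.4f}")

# Pairwise comparison of variety means (Tukey HSD standing in for the Duncan test).
tukey = pairwise_tukeyhsd(group1["pct_normal_pmc"], group1["variety"], alpha=0.05)
print(tukey.summary())

# Comparison between the two groups of varieties.
f_stat, p_value = f_oneway(
    df.loc[df["group"] == "I", "pct_normal_pmc"],
    df.loc[df["group"] == "II", "pct_normal_pmc"],
)
print(f"Group I vs Group II: F = {f_stat:.2f}, P = {p_value:.4f}")
```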
In most of the varieties, pollen fertility was significantly correlated with meiotic abnormalities, although in a few cases there was no relationship (Table I), as in the case of variety EMGOPA 314, which had the highest meiotic stability but the lowest pollen fertility. DISCUSSION Spontaneous chromosomal aberrations are relatively rare in Glycine compared with other important genera (Singh and Hymowitz, 1991a) and generally involve polyploidization and aneuploidy. Spontaneous meiotic mutations that cause male sterility have also been reported in soybeans (see Graybosch and Palmer, 1988;Palmer et al., 1992). We found meiosis to be relatively normal in 15 varieties of cultivated soybeans with few abnormalities when compared with other crops (Moraes-Fernandes, 1982;Souza et al. 1997;Baptista-Giacomelli, 1999). The abnormalities involved chromosome segregation, chromosome stickiness, irregular spindle formation and connections among cells, and had not been described in soybeans. The observed precocious chromosome migration to the poles may have resulted from univalent chromosomes at the end of prophase I or precocious chiasma terminalization in diakinesis or metaphase I. Univalents may originate from an absence of crossing-over in pachytene or from synaptic mutants. However, prophase I stages were not analyzed because of the poor quality of the squash preparations. Chiasmata are responsible for the maintenance of bivalents which permit normal chromosome segregation. This process ensures pollen fertility. While precocious migration of univalents to the poles is a very common abnormality among plants (Pagliarini, 1990;Pagliarini and Pereira, 1992;Defani-Scoarize et al., 1995a,b;, the other segregational abnormality (non-oriented bivalents) observed in the varieties OCEPAR 14 and EMBRAPA 48 is rare, but is known to occur in Chlorophytum comosum (Pagliarini et al., 1993). The behavior of these and of the laggard chromosomes is characteristic in that they generally lead to micronucleus formation (Koduru and Rao, 1981). In soybean, the percentage of cells with meiotic abnormalities was higher in metaphase I and decreased until telophase II, indicating that some chromosomes were included in the main nucleus. This seems to be normal behavior for many species (Koduru and Rao, 1981). Sticky chromosomes were first reported in maize (Beadle, 1932) and are seen as intense chromatin clustering in the pachytene stage. The phenotypic manifestation of stickiness may vary from mild, when only a few chromosomes of the genome are involved, to intense, with the formation of pycnotic nuclei that may involve the entire genome, culminating in chromatin degeneration (for a review, see . In the soybean varieties, the stickiness was of both types. Some cells showed mild stickiness, in which case it was possible to identify the meiotic stage. In other cells, the intense phenotypic manifestation led to the formation of pycnotic nuclei. Chromosome stickiness may be caused by genetic or environmental factors. Genetically controlled stickiness has been described in other cultivated plants such as maize (Beadle, 1932;Golubovskaya, 1989;Caetano-Pereira et al., 1995), pearl millet (Rao et al., 1990) and wheat (Zanella et al., 1991). 
Several agents have been reported to cause chromosome stickiness, including X-rays (Steffensen, 1956), gamma rays (Rao and Rao, 1977;Al Achkar et al., 1989), temperature (Erikisson, 1968), herbicides (Badr and Ibrahim, 1987) and some chemicals present in soil (Levan, 1945;Steffensen, 1955;Caetano-Pereira et al., 1995). However, the primary cause and biochemical basis of chromosome stickiness are still unknown. Gaulden (1987) postulated that sticky chromosomes may result from the defective functioning of one or two types of specific nonhistone proteins involved in chromosome organization, which are needed for chromatid separation and segregation. The altered functioning of these proteins leading to stickiness is caused by mutations in the structural genes coding for them (hereditary stickiness) or by the action of mutagens on the proteins (induced stickiness). Cytoplasmic connections, the most common abnormality observed in soybeans, is a phenomenon widely described in angiosperms (see Heslop-Harrison, 1966;Risueño et al., 1969;Whelan, 1974). The first description was made by Gates (1908), who observed delicate threads of cytoplasm connecting adjacent pollen mother cells in Oenothera. Gates (1911) subsequently suggested that these connections must form an important avenue of exchange between PMCs, and described the transfer of nuclear material through them from one meiocyte to another, calling the process "cytomixis". According to Heslop-Harrison (1966) and Risueño et al. (1969), the role of cytoplasmic channels is related to the transport of nutrients between meiocytes. Investigations in angiosperms have provided evidence that massive protoplasmic connections are formed among microsporocytes. Our study showed that the frequency of cytoplasmic connections among varieties varied from 3.2 to 24.5%. Although cytoplasmic connections are very common in angiosperms, the movement of nuclear material through them is rare. In the soybean varieties studied here, only one case of chromosome transfer (cytomixis) among microsporocytes was observed. In general, cytomixis has been detected at a higher frequency in genetically imbalanced species such as hybrids, as well as in apomictic, haploid and polyploid species (see Yen et al., 1993). Among the factors proposed to cause cytomixis are the influence of genes, fixation effects, pathological conditions, herbicides and temperature (see Caetano-Pereira and Pagliarini, 1997). Cytomixis may have serious genetic consequences by causing deviations in chromosome number and may represent an additional mechanism for the origin of aneuploidy and polyploidy (Sarvella, 1958). The abnormal spindles observed in a few cells have also been reported for other genera (see Harlan and De Wet, 1975;Veilleux, 1985). The spindle apparatus is normally bipolar and acts as a single unit, playing a crucial role in the alignment of metaphase chromosomes and their poleward movement during anaphase. Distortion in meiotic spindles may be responsible for unreduced gamete formation. While the tripolar spindles seen in metaphase I of some cells may cause genome fractionation, convergent spindles in metaphase II rejoin the homologues segregated in meiosis I, leading to the formation of unreduced gametes. 
Although the formation of unreduced gametes has been investigated in studies of evolution (Harlan and De Wet, 1975) and in breeding programs (Veilleux, 1985), the frequency of convergent spindles in metaphase II in soybean was very low (0.3 to 1.4%) and not enough to be useful in breeding programs. In normal soybean genotypes meiotic abnormalities are rare whereas they are common in meiotic mutants that cause male sterility. Chromatin bridges and micronuclei were described for the first time in interspecific hybrids of Glycine max x Glycine soja by Ahmad et al. (1977), who found that the extent of abnormalities was influenced by environmental conditions. The same abnormalities were reported by Ahmad et al. (1984), who concluded that chromosome behavior and fertility depended on the parentage of the hybrids and on environmental temperature. Their results, obtained in greenhouse and controlled environmental studies, suggest that at least three factors (genotype, temperature and genotype x temperature interaction) influence chromosome behavior and fertility. All of the meiotic abnormalities found in the soybean varieties analyzed here have been reported to be responsible for pollen sterility. Fertility depends on the efficiency of the meiotic process. Studies on different plant species have shown that the decline in seed production is correlated with meiotic irregularities (La Fleur and Jalal, 1972;Dewald and Jalal, 1974;Moraes-Fernandes, 1982;Smith and Murphy, 1986;Pagliarini and Pereira, 1992;Pagliarini et al., 1993;Khazanehdari and Jones, 1997). In most of the soybean varieties, pollen fertility showed a close relationship with meiotic abnormalities. Most of the varieties had few meiotic abnormalities and, as a consequence, a high pollen fertility. Soybean is an autogamous, diploid and genetically stable species that produces a low number (300 to 800) of pollen grains per anther (Palmer et al., 1978). For this reason high meiotic stability is required in order to guarantee seed production. From our study, we suggest that the differential seed production observed among varieties is due to genetic control and not only to meiotic abnormality. Soybean has not been considered a model system for cytological studies. According to Singh and Hymowitz (1991b), this may explain why soybean cytogenetics has lagged behind genetic studies of maize, barley and tomato. Our experience with soybean cytogenetics confirms this conception. Squash preparations of PMCs routinely employed for other species did not give good results. Some small modifications in the smear and stain in relation to the standard methods were necessary to obtain satisfactory results. The fact that the plants were cultivated in fields probably affected the analysis since, according to Palmer and Kilen (1987), greenhouse-grown plants yield a higher percentage of acceptable preparations, whereas plants grown under hot and dry conditions give very poor results. Despite the difficulties, we conclude that it is possible to conduct cytogenetic studies on soybean.
3,416
2000-09-01T00:00:00.000
[ "Agricultural And Food Sciences", "Biology" ]
An in silico Approach Reveals Associations between Genetic and Epigenetic Factors within Regulatory Elements in B Cells from Primary Sjögren’s Syndrome Patients Recent advances in genetics have highlighted several regions and candidate genes associated with primary Sjögren’s syndrome (SS), a systemic autoimmune epithelitis that combines exocrine gland dysfunctions, and focal lymphocytic infiltrations. In addition to genetic factors, it is now clear that epigenetic deregulations are present during SS and restricted to specific cell type subsets, such as lymphocytes and salivary gland epithelial cells. In this study, 72 single nucleotide polymorphisms (SNPs) associated with 43 SS gene risk factors were selected from publicly available and peer reviewed literature for further in silico analysis. SS risk variant location was tested revealing a broad distribution in coding sequences (5.6%), intronic sequences (55.6%), upstream/downstream genic regions (30.5%), and intergenic regions (8.3%). Moreover, a significant enrichment of regulatory motifs (promoter, enhancer, insulator, DNAse peak, and expression quantitative trait loci) characterizes SS risk variants (94.4%). Next, screening SNPs in high linkage disequilibrium (r2 ≥ 0.8 in Caucasians) revealed 645 new variants including 5 SNPs with missense mutations, and indicated an enrichment of transcriptionally active motifs according to the cell type (B cells > monocytes > T cells ≫ A549). Finally, we looked at SS risk variants for histone markers in B cells (GM12878), monocytes (CD14+) and epithelial cells (A548). Active histone markers were associated with SS risk variants at both promoters and enhancers in B cells, and within enhancers in monocytes. In conclusion and based on the obtained in silico results that need further confirmation, associations were observed between SS genetic risk factors and epigenetic factors and these associations predominate in B cells, such as those observed at the FAM167A–BLK locus. Introduction Primary Sjögren's syndrome (SS) is a systemic autoimmune epithelitis affecting exocrine glands, such as salivary and lacrimal glands (1). The clinical manifestations of SS include dry mouth (xerostomia), dry eyes (keratoconjunctivitis sicca), systemic features, and patients have a 20-to 40-fold increased risk of developing lymphoma (2)(3)(4). Histological examination shows focal and peri-epithelial T and B cell infiltration plus macrophages in exocrine glands and parenchymal organs, such as kidney, lung, and liver (5). SS is characterized by the presence of circulating autoantibodies (Ab) against the sicca syndrome (SS)A/Ro and SSB/La ribonucleoprotein particles (6). It is estimated that there are over 120 million single nucleotide polymorphisms (SNPs) in the human genome (NCBI dbSNP database, Build 143) and, among them, hundreds are disease risk variants for autoimmune diseases (AID) with the particularity that they are for the vast majority excluded from proteincoding regions (exon) and present within regulatory areas (7,8). Regulatory SNPs control genes through an effect on (i) the transcriptional machinery when present within a gene regulatory region [promoter, enhancer, insulator (a gene regulatory element that blocks interaction between enhancers and promoters), and expression quantitative trait loci (eQTL)], (ii) the spliceosomal complex formation that controls intron excision, (iii) the activation of mRNA non-sense-mediated decay (NMD), and (iv) the control of messenger RNA stability through microRNA (3 ′ -UTR). 
In SS, the list of genetic variations is growing with the particularity that the odds ratio (OR) is usually modest (OR < 1.5) with the exception of the HLA genes that have a significant OR (usually >2) (9). The associated risk genes analysis supports immunopathological pathways in SS, such as antigen presentation, cytokine signaling, and the NF-κB pathway (10). The characterization of regulatory SNPs in SS remains to be established. In SS, several arguments support a role for epigenetic deregulation in disease initiation and progression (11,12). The first clue was that two drugs, procainamide and hydralazine, induced SS in humans by blocking DNA methylation (13). Moreover, defects in DNA methylation characterize T cells, B cells, and salivary gland epithelial cells from SS patients (14)(15)(16), and such defects were associated with the expression of genes usually repressed by DNA methylation, such as transposons and miRNAs in salivary glands from SS patients (17,18). Last, but not least, histone epigenetic markers and ribonucleoprotein post-translational modifications are immunogenic leading to autoAb production (14). Accordingly, the aim of this work was to test the association between genetic and epigenetic determinants in SS. In the following, we pursue a two-staged analysis. First, we characterized a large panel of SS risk variants to reveal that they are predominantly present within regulatory elements. Second, we further explored the striking associations of those regulatory elements with cellular specificity and particularly in immune cells. SS Genetic Risk Factors Data mining based on peer reviewed literature information (PubMed) and publicly available databases (centralgwas.org) served in the compilation of a list of 43 gene risk factors and their reported variants (n = 72) in SS (Table S1 in Supplementary Material) . The number of SS patients and controls were also reported as well as the OR average (95%), when available. The gene list used in this study was manually updated further to include gene function, SNP number (dbSNP database), and genomic location according to the human genome reference GRCh38. Genetic variants and their observed associations with clinical and functional phenotype were submitted to The National Center for Biotechnology Information (NCBI) ClinVar database 1 . The gene list was tested with the FatiGO web interface AmiGO2 2 for functional enrichment. Functional/Regulatory Genome Annotation Data The variant effect predictor (VEP) tool 3 was used to determine the location of the variants (exon, intron, 5 ′ /3 ′ -UTR, Up/Downstream genic sequence, and intergenic section) and their consequences [missense, non-coding transcript, splice donor variant, and target of non-sense-mediated mRNA decay (NMD)]. Linkage Disequilibrium Following SNP selection, the HaploRegV2 web portal was used to identify SNPs in linkage disequilibrium (LD, R 2 ≥ 0.80) in Europeans from the 1000 genome project using a maximum distance between variants of 200 kb in order to cover the enhancer elements (51). Statistical Analysis Pearson's Chi-squared test with Yate's continuity correction, when appropriate, was used to evaluate the significance of differences between the regulatory motifs and the histone chromatin immunoprecipitation (ChIP) experiments. A probability (P) of <0.05 was considered significant. 
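The statistical comparison described above, Pearson's chi-squared test with Yates' continuity correction applied to counts of variants overlapping regulatory motifs or histone ChIP peaks, can be reproduced in a few lines. This is a minimal sketch under assumed numbers: the 2x2 table below uses placeholder counts for illustration, not the actual contingency tables built by the authors.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table:
# rows    = cell type (e.g., GM12878 B cells vs. A549 epithelial cells),
# columns = promoter SS risk variants with / without an active histone mark.
# The counts are placeholders for illustration only.
table = np.array([
    [18, 3],   # cell type 1: variants with mark / without mark
    [6, 15],   # cell type 2: variants with mark / without mark
])

# Pearson's chi-squared test with Yates' continuity correction (applied for 2x2 tables).
chi2, p, dof, expected = chi2_contingency(table, correction=True)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.4g}")
if p < 0.05:
    print("Enrichment is significant at the 5% level.")
```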
Autoimmune-Related Genes Associated with SS A list of 43 SS-associated gene risk factors corresponding to 72 SNPs, referred to as SS risk variants, was extracted from the scientific literature (Figure 1). Among the risk factors, half (36/72) were associated with another AID (systemic lupus erythematosus, rheumatoid arthritis, systemic sclerosis, inflammatory bowel disease, autoimmune thyroiditis disease, insulin-dependent diabetes, primary biliary cirrhosis, autoimmune hepatitis), allergy, infections, and cancer, including B/T cell lymphomas. This partial overlap suggests that both common and distinct genetic traits are present in SS and equally distributed. Regulatory Regions and DNA Binding Molecules We then used a combination of three tools based on information from the ENCODE program (VEP) and from both the ENCODE and the Roadmap Epigenome programs (RegulomDB and Hap-loReg v2) to determine whether SS risk variants are likely to be within promoters, enhancers, or insulators. These regulatory motifs were defined according to the available ChIP results from multi-cell analysis showing 21/72 (29.2%) promoters, 41/72 (56.9%) enhancers, and 5/72 (6.9%) insulators. Of particular note, within the four SNPs with missense mutations, one promoter and two insulators were detected (Figure 3). Moreover, 34/72 (47.2%) DNase hypersensitive regions (DNase peak) and 12/72 (16.7%) eQTL were recovered. Looking specifically at promoters and enhancers, data from ChIP experiments revealed that NF-κB (n = 5), STATs (n = 3), and EGR-1 (n = 3) were predominant in promoters, and NF-κB (n = 3) in enhancers. For the remaining 5/72 (6.9%) SNPs, no regulatory functions were assigned which is significantly lower than the expected rate of 56.2% (P < 10 −6 ) (50). Genes in High Linkage Disequilibrium In order to improve the analysis, we used the HaploReg v2 tool to include 645 new SNPs that were identified to be in high LD with the 72 annotated SNPs ( Table 1). This tool identifies 34 new genes, including one microRNA (Mir4752), five SNPs with FIGURE 2 | Occurrences of SS risk variants according to the protein-coding gene location. Cell Type-Specific Analysis Revealed Activated Enhancer and Promoter Histone Markers at SS Risk Variants in B Cells To further explore cell type specific activation in promoters and enhancers at SS risk variants and according to the critical role played in the disease by epithelial cells, lymphocytes, and macrophages, we selected from the 18 ENCODE available cells: the human lung adenocarcinoma cell line A549 for epithelial cells, the GM12878 lymphoblastoid cells for B cells, and the peripheral blood CD14 + monocytes for macrophages. For these three cell types, we mapped SS risk variants to markers of active promoters (H3K4me2, H3K4me3, and H3K9Ac), and to markers of active enhancers (H3K36me3 and H3K4me1) (52). In addition, H3K27Ac was selected as a marker of activity, and H3K27me3 as an inactive marker of enhancers. As shown in Figure 4A and with regards to the 21 promoter SS risk variants, the three active promoter markers (H3K4me2, H3K4me3, and H3K9Ac) were significantly enriched in B cells (GM12878) in contrast to the epithelial cells (A549) and monocytes (0.01 < P < 0.0006, Chi square with Yate's correction). The active marker H3K27Ac was enriched in B cells and monocytes in contrast to epithelial cells (P = 0.0001 and P = 0.02, respectively). 
The same analysis was performed with the 41 enhancer SS risk variants ( Figure 4B) revealing an enrichment of the enhancer active marker H3K36me3 in both B cells and monocytes in contrast to A459 cells (P = 0.02 and P = 0.005, respectively). The active marker H3K27Ac was enriched in B cells (P = 0.0001), and, although not significant, there is a trend for a monocyte enrichment in contrast to epithelial cells. In summary, these findings highlight the critical role of epigenetic factors in B cells to control both promoter and enhancer SS risk variants, and in monocytes to control enhancer SS risk variants. FAM167A-BLK Locus In order to validate our observations, and based on three reports, including the genome wide association study (GWAS) performed by Lessard et al., in 395 patients with SS and 1975 controls from European origins (31,35,41), the FAM167A-BLK locus (Chr 8:11421463-11564604) was selected to position the 8 FAM167A-BLK SS risk variants plus two 5 ′ -UTR variants selected from the LD analysis and previously identified as lupus risk variants (53). These two SNPs are in high LD with 4/8 SS risk variants [rs922483 is in high LD with rs2736340 (r 2 = 0.81), rs13277113 (r 2 = 0.83), and rs2736345 (r 2 = 0.96); and rs2250788 is in high LD with rs2254546 (r 2 = 0.98)]. As shown in Figure 4C, the 10 selected SNPs were positioned in the FAM167A-BLK locus revealing three groups. The first group contains an isolated SNP (rs12549796) that was present in an intronic part of the FAM167A gene. A second group (n = 7) was present in the vicinity of the BLK promoter and exon 1, and a third group (n = 2) was present~35 kb downstream BLK promoter. Next, as revealed by querying the Ensembl database using H3K27Ac to mark active promoters and enhancers, SNPs were positioned within 9/10 H3K27Ac active motifs in B cells (GM12878), which is in contrast to 2/10 H3K27Ac active motifs in monocytes, and none in epithelial cells. Such associations between genetic and epigenetic factors within regulatory elements in B cells for the FAM167A-BLK locus were further reinforced by using the RegulomeDB tool that summarizes results from the ENCODE and Epigenetic Roadmap programs. As indicated Table 2, the RegulomeDB tool supports that SS risk factors at FAM167A-BLK locus would predominantly affect B cells (lymphoblastoid and naive B cells) and, to a lesser extent, monocytes, T cells (naïve, TH2, and Treg), mesenchymal stem cells, and fibroblasts. Discussion Primary SS is an autoimmune disease with a genetic basis in which at least 40 gene risk factors may be involved, including BLK, IRF5, STAT4, and the HLA locus. However, these genetic risk factors alone cannot explain all of the disease risk factors and, in particular, environmental risk factors (e.g., viruses, hormones . . .) that are likely to play a critical role in the process of the disease. Given the complexity of the disease, epigenetic analyses are conducted to provide new insights into the disease as DNA methylation patterns, chromatin structures, and microRNA are influenced both by the genetic machinery and by environmental factors (13,54,55). The primary role of the epigenome is to regulate, in a cell-specific manner, cellular development, and differentiation and such effects vary between individuals with age as revealed by testing identical twins (56), or between smokers and non-smokers (57). 
Furthermore, genetic variants and, in particular, non-coding and regulatory SNPs can influence cell type specific regions marked by accessible regions, thus opening new perspectives to better characterize disease risk factors and cell types contributing to the diseases which was the aim of the present in silico analysis. Applied to SS, such strategy was fruitful in suggesting the existence of associations between genetic and epigenetic alterations in the setting of the disease. Indeed, a cell-specific overlap These results also suggest that there is an effect on some common pathways (NF-κB, STATs) previously described to be affected in SS (10). FIGURE 4 | Analysis of histone modifications in the promoters (A) and enhancers (B) of SS risk variants within A549 epithelial cells, B cell lymphobastoid GM12878 cells, and CD14 The genetic and epigenetic fine mapping of autoimmune risk factors was recently performed in 21 AID with the notable exception of SS (7). In line with our observations, it was observed that autoimmune risk variants were mostly non-coding (90%) and map predominantly to H3K27Ac positive immune-cell enhancers (60%) and promoters (8%). Next, a T cell signature was observed in nearly all of the AID tested except in lupus and primary billiary cirrhosis (two AID frequently associated with SS) that present a B cell signature, and type I diabetes with pancreatic islets. Finally, it was reported that autoimmune risk factors were enriched within binding sites for immune-related TFs, such as Pu-1 and NF-κB. As a consequence, the physiopathology of AID needs to be updated according to the recent progress in epigenetics (54). Some limitations are inherent in this type of study. First, cells used in the ENCODE program are predominantly cell lines that are different from primary cells, such as the lymphoblastoid GM12878 B cell line, that results from EBV transformation of peripheral blood mononuclear cell using phytohemagglutinin as a mitogen. New results using primary cells, which are available from the Epigenome Roadmap program further supports similarities between lymphoblastoid GM12878 B cells and purified human CD20 + B cells as we observed for the FAM167A-BLK locus when using the RegulomeDB tool. Second, although the ENCODE program is an extensive resource; the program is limited to certain cell types and DNA binding elements that limit the interpretation. Third, many SNPs are in tight genetic linkage and, as a consequence, genetic risk variants may not be causal, but rather reveal the presence of a linked SNP that is functionally relevant to the pathogenesis. Such a situation may be suspected for different SNPs tested from our selection since the LD analysis has revealed new missense mutations as well as new gene risk factors that need to be tested, such as chemokines (CCL7 and CCL11), cytokines (IL2) and the miRNA4752. Two SNPs in CCL11 have been associated with germinal center-like structure formation in SS patients (47), and CCL11 (Eotaxin) circulating levels were reduced in SS patients (58). While the function of the protein encoded by FAM167A is unknown, the tyrosine kinase BLK controls B cell development and is activated after B cell receptor engagement. The FAM167A-BLK locus is associated with several AID, such as SS, lupus, rheumatoid arthritis, scleroderma, and vasculitis. Among them, two risk alleles (rs132771113 and rs9222483) are known to control BLK transcription during B cell development (53,59). 
Moreover, by integrating epigenetic fine mapping, we further observed that all BLK-associated SS risk variants, including the two previously described, were all present within epigenetic marks in B cells. Altogether, this example illustrates the value of integrating epigenetic resources for investigating the complex mechanisms by which non-coding risk variants could modulate gene expression. Last but not least, the B cell subset identified from our in silico study deserves several comments. First, B cell qualitative abnormalities have been reported in SS with important perturbations in peripheral blood B cell profiling and B cell migration within exocrine glands (5,60). Second, the association between the incidences of B cells in salivary gland epithelial cells has been addressed as well as the formation of ectopic germinal centers and transformation to B cell lymphoma (61). Third, non-HLA genetic associations in SS are predominantly related to B cell genes (BTK, CD40, EBF-1 . . .) as we observed in our selection. Fourth, a recent study reported DNA methylation changes in B cells and such changes predominate within loci containing SS risk factors (16). Altogether, these observations provide rationale for targeting B cells in SS along with the observations that depleting B cells with Rituximab or targeting BAFF with Belimumab are both effective (62,63). In conclusion, we have tested, as a proof of concept, a novel approach that integrates both epigenetic information and results from genomic analysis to further enhance the value of the genetic risk factors highlighted in complex diseases, such as SS. Future work needs to be done in order to confirm experimentally the cellular specificity and the functional role of the characterized regulatory SNPs. Another consequence is that such approach could be used to select and/or propose future therapeutic drugs in SS as epigenetic mechanisms are reversible.
4,088
2015-08-26T00:00:00.000
[ "Biology", "Medicine" ]
Localization of seismic waves with submarine fiber optics using polarization-only measurements Monitoring seismic activity on the ocean floor is a critical yet challenging task, largely due to the difficulties of physical deployment and maintenance of sensors in these remote areas. Optical fiber sensing techniques are well-suited for this task, given the presence of existing transoceanic telecommunication cables. However, current techniques capable of interrogating the entire length of transoceanic fibers are either incompatible with conventional telecommunication lasers or are limited in their ability to identify the position of the seismic wave. In this work, we propose and demonstrate a method to measure and localize seismic waves in transoceanic cables using only conventional polarization optics, by launching pulses of changing polarization. We demonstrate our technique by measuring and localizing seismic waves from a magnitude Mw 6.0 earthquake (Guerrero, Mexico) using a submarine cable connecting Los Angeles, California and Valparaiso, Chile. Our approach introduces a cost-effective and practical solution that can potentially increase the density of geophysical measurements in hard-to-reach regions, improving disaster preparedness and response, with minimal additional demands on existing infrastructure. I am very excited for this work, and I genuinely believe this technology and algorithm proposed will be crucial in leveraging long range telecommunications fibers for seismology.The results are timely, and although the concept of leveraging loop-back points builds off another cited recent work (Marra, Science 2022), application here to state-of-polarization sensors will be crucial moving forwards.The possibility of such instruments being minimally intrusive and affordable means this may be the path forward for large scale application. The article is generally well written and I see very few problems in the way of grammar or explanations.Importantly, however, I feel strongly the authors need to be careful in their wording about having located an earthquake: identifying a fiber span where the signals are strongest is absolutely not equivalent to "precise localization."It is a great leap forward, granted, but for any seismologist reader this is vastly different from a useful earthquake location; even the title of the article could be seen as questionable in this regard. Other specific questions: (These are points I offer to potentially improve the manuscript; none do I feel as strongly about and should not impede publication) Relating to my point about claiming to have located the earthquake, I appreciate the authors' transparency and honesty at line 118, about how some closer spans failed to see the signals.Also, it is clear from the map view (Fig 3a) that Span 41 is not the closest.Nevertheless, this is further evidence the authors should be careful in claims and wording. The authors list some possible reasons for why other spans may not see the earthquake, additionally potentially there could be something relating to fiber curvature or geometry as mentioned in Fichtner et al (2022, "Introduction to phase transmission…").Notably, that straight segments should see nothing, though I admit I don't know if this is applicable also for SOP-based measurements, nor how loop-back repeaters would affect this claim. 
Line 42: Grammar: "On the other hand, polarization-based methods with less stringent hardware requirements have previously been unable of single-span localization" -> unable to [leverage / achieve / take advantage of] single-span approaches? Figure 2: Comment: Some of the symbols and acronyms in the figure are unfamiliar to me as a seismologist.For example, I don't understand what is going on inside the HLLB (loopback inset).This is OK and potentially normally readers of Communication Engineering will be more used to the symbols and acronyms, I only mention it for perspective. Line 85: When describing the instrumentation: are signals at each HLLB reflector sent back along the same single fiber, or does each get a unique fiber?I guess it's the former, but as someone less familiar with the technologies involved I can't see how you separate the different HLLB paths. Fig 3e (and other analyses): The frequency range shown to have high energy at 0.25 to 0.35 Hz is rather narrower than I would expect for a M6.0 earthquake.Are there limitations on the instrument noise that limit these observations, or maybe some aspect of the eigenvalue approach?A spectrum or seismogram from a nearby land station (or one of comparable epicentral distance) would help convince me of the measurement capability. Line 123: Clarification: "The eigenvalue method's insensitivity to changes…" -> does this imply the eigenvalue method would further outperform SOP approaches, or the other way around?What is meant by "specific stimuli"?If the eigenvalue approach is less sensitive but the direct SOP is more noisy, what will win?I sincerely appreciate this type of analysis and discussion, just I want to make sure I (and readers) understand the implications. Related to the comment above: It would be great if the authors can comment on the sensitivity of such measurements as relating to earthquakes.Detecting a M6.0 is great for proof-of-concept, but such undersea fibers will only really add seismological value if they can detect things below the range of current traditional instruments (e.g.M1, M2, M3, etc. 
offshore).I realize this may be beyond the scope of this initial study, so this is not required, I only mention it would be interesting to comment on.The paper presents an interesting, telecom-compatible method for localizing geophysical disturbances across a potentially trans-continental fiber cable.The method is an adaptation of what had been already demonstrated in Ref 13 (using the loop-back channel in amplified submarine links to localize disturbances) except that in this case, the measurement is done using polarization changes and conventional telecom lasers (i.e.no need of high-coherence lasers as in Marra's paper).This implies some advantages as telecom lasers themselves can be used (however, the use of dedicated polarization synthesizers and polarimeters in the measurement channel is needed, which means some unusual hardware in these nodes).In essence the authors use long pulses with a selectable polarization state (pulses are the size of the span length) and the reflection from each loop-back channel is analyzed as a function of the time of flight of these long pulses.I think the paper is interesting and has to be published.I have several concerns and questions that the authors can surely address in a relatively easy way: -I had to read too much through the methods section to actually understand the measurement procedure, I am not sure if some of this information could be shifted to the Results section considering that people read Results before Methods.In this case, for people with some skill in optical measurements, the information in Methods is of key importance to understand the process.I suggest to move some of the hardware operation to Results and possibly move part of the matrix treatment to the Methods section. -When it comes to localization, obviously the golden standard in all these systems is using a DAS.Overall, the pulses used here are 300 microseconds long, which is comparatively very long for a DAS (3 orders of magnitude larger).I wonder if a DAS-like architecture with such a relatively long pulse could also give a measurable signal.DAS would have the advantage of being more quantitative and linear than this scheme.I think that an evaluation of the backscattered energy in such a case could help the authors decide if a poor resolution DAS could also do the same measurement (of course with a more expensive laser). -I am sure that during the measurement campaign there were other disturbances of smaller magnitude that could be recorded along the used cable (this is a seismically very active region).Please provide information of what is the minimmum magnitude of event that could be detected in the measurement campaign done here.Showing the magnitude 6 event is interesting, but giving the actual sensitivity threshold would be necessary to comparatively assess this method and the others published in the literature. -Sampling is very low (sub Hz in this case, potentially 2-3 Hz if the hardware had no delay times) as the reflections from all the repeater spans have to be collected and 3 polarization states have to be swept.Please comment if there is any room for increasing the sampling while keeping the same constraints in terms of fiber size. 
-Of course the interest of gathering measurements across many points is using array methods.However, considering the "nonlinear" nature of these polarization measurments, would this be compatible with array processing?Reviewer #3 (Remarks to the Author): Review of "Localization of Seismic Waves in Submarine Fiber Optics Using Polarization-only Measurements," by Luis Costa et al. The manuscript presents a report on the detection and localization of an earthquake using an undersea fiber optics infrastructure.It appears to be a valuable addition to the rapidly growing literature on this topic.I am inclined to endorse its publication pending the proper addressing of my concerns outlined below: 1) My primary concern relates to the authors' use of singular value decomposition (SVD) as an intermediate step to obtain a polar decomposition for extracting the unitary part of the transmission matrix.While this procedure is standard in Jones space, it may not be suitable in Stokes space.To illustrate this, consider the simple case of combining a (partial) polarizer represented in Jones space by a (positive definite) matrix A and a concatenation of waveplates represented by an arbitrary unitary matrix U. In Jones space, the transmission matrix T is given by T = UA.The polar decomposition of T is either T = U A (right polar decomposition) or T = B U (left polar decomposition) and is unique.Consequently, U is also unique, and applying SVD would yield the exact result, providing the unitary matrix U and, in Stokes space, the rotation matrix corresponding to U.However, if the SVD is directly applied in Stokes space, it would not return the 3 by 3 rotation matrix corresponding to U.This limitation arises because representing a pure polarizer as a linear operator is not possible within the 3-dimensional Stokes space.To maintain the linearity of the representation, it becomes necessary to extend the Stokes space with an extra dimension representing the total power and replace the matrices that represent rotations in Stokes space with 4 by 4 Mueller matrices.In the extended space, the unitary component of the decomposition is the direct sum of a rotation in the 3dimensional Stokes space and an identity in the fourth coordinate.This makes not straightforward the application of the SVD to extract from the transmission matrix the unitary part of the concatenation. Of course, the use of the SVD in the 3-dimensional Stokes space would still produce, for the concatenation polarizer-waveplates, a unitary matrix, but in most cases this unitary matrix includes the polarization rotation induced by the partial polarizer, which would instead be filtered out if the SVD is applied in Jones space. Earthquakes primarily affect fiber propagation by inducing changes in the fiber's refractive index and birefringence, thereby impacting the unitary part of the transmission matrix.On the other hand, polarization-dependent loss is mainly caused by lumped devices and remains substantially timeindependent.By applying the SVD in Stokes space, crosstalk is generated between the time-independent polarization-dependent loss and the time-dependent unitary part of polarization rotation.Consequently, this crosstalk has the potential to significantly reduce the sensitivity to time-dependent birefringence changes. 
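Reviewer 3's point, that the unitary part of the channel should be extracted by a polar decomposition of the Jones-space transmission matrix rather than by an SVD applied directly in Stokes space, can be illustrated numerically. The sketch below is not the authors' processing code; the synthetic polarizer/waveplate example and matrix values are assumptions used only to show the decomposition step.

```python
import numpy as np
from scipy.linalg import polar, svd

# Synthetic 2x2 Jones matrix: T = U_true @ A, with U_true unitary (waveplates)
# and A positive semi-definite (a partial polarizer).
theta = 0.3
U_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]], dtype=complex)
A = np.array([[1.0, 0.0],
              [0.0, 0.4]])          # partial polarizer (diattenuation)
T = U_true @ A

# Right polar decomposition in Jones space: T = U_polar @ P, with U_polar unitary
# and P positive semi-definite.
U_polar, P = polar(T, side="right")

# Equivalently via SVD: T = W @ diag(s) @ Vh  =>  unitary factor = W @ Vh.
W, s, Vh = svd(T)
U_svd = W @ Vh

print(np.allclose(U_polar, U_true))   # True: the unitary part is recovered exactly
print(np.allclose(U_svd, U_true))     # True: the SVD route gives the same factor
```

For a nonsingular Jones matrix the unitary polar factor is unique and equals W @ Vh from the SVD; the reviewer's concern is that applying the same recipe to the 3x3 Stokes-space matrix does not, in general, return the rotation associated with that unitary factor, because a partial polarizer has no linear representation in the 3-dimensional Stokes space.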
Given that it is not challenging to extract the transmission matrix in Jones space from the data, the authors should reconsider their data processing approach and extract the unitary part of the fiber propagation by applying the SVD in Jones space rather than in Stokes space. 2) The experiment's specific details regarding the system where it was performed have not been provided in the report.However, it appears that the system under test bears a striking resemblance to Curie, the system described in [14].The only discernible difference is the location of one of the system's terminals, with one being in Santiago instead of Valparaiso.To ensure transparency and enable readers to thoroughly understand the characteristics of the system under test, it is crucial to provide this information.Additionally, the report should explicitly state whether the data were collected from the Santiago or Los Angeles terminal. Minor comments: Line 70: (disregard if the paper is modified following the suggestion in comment 1).The authors' analysis is conducted in Stokes space, not in Jones space.Consequently, U represents an arbitrary matrix with real entries describing a (proper) rotation in Stokes space, which is a special case of an orthogonal matrix, not an arbitrary (complex) unitary matrix.This distinction is important as it ensures that readers are given the immediate perception that the analysis takes place in Stokes space, not in Jones space. Line 112: Would be beneficial that the definition of crosstalk is given the first time it is introduced and discussed.The reader is not exposed to the mathematical definition of crosstalk until hitting the figure caption of Fig. 4 of the supplementary material. Line 157: (disregard if the paper is modified following the suggestion in comment 1) Again, the U and V matrix are real, so that V^* should be the transpose of V. Since the star is usually reserved for Hermitian conjugate, I would suggest using another symbol for it. Line 18 of the supplementary: (disregard if the paper is modified following the suggestion in comment 1) The outcome of the singular value decomposition should be the closest orthogonal matrix, and V' is not defined but it should be defined as the transpose of V. Line 77 of the supplementary: The sentence "Note that while the variance of the applied perturbation was constant, the observed variance in the perturbed span due to the nonlinear nature of the measurement" appears to be incomplete. Reviewer #1: I am very excited for this work, and I genuinely believe this technology and algorithm proposed will be crucial in leveraging long range telecommunications fibers for seismology.The results are timely, and although the concept of leveraging loop-back points builds off another cited recent work (Marra, Science 2022), application here to state-of-polarization sensors will be crucial moving forwards.The possibility of such instruments being minimally intrusive and affordable means this may be the path forward for large scale application. 
The article is generally well written and I see very few problems in the way of grammar or explanations.Importantly, however, I feel strongly the authors need to be careful in their wording about having located an earthquake: identifying a fiber span where the signals are strongest is absolutely not equivalent to "precise localization."It is a great leap forward, granted, but for any seismologist reader this is vastly different from a useful earthquake location; even the title of the article could be seen as questionable in this regard. Other specific questions: (These are points I offer to potentially improve the manuscript; none do I feel as strongly about and should not impede publication) Relating to my point about claiming to have located the earthquake, I appreciate the authors' transparency and honesty at line 118, about how some closer spans failed to see the signals.Also, it is clear from the map view (Fig 3a) that Span 41 is not the closest.Nevertheless, this is further evidence the authors should be careful in claims and wording. Thank you for the positive general comments and for the valuable input regarding clarity.Indeed, we have not demonstrated localization of an earthquake, and while the technique may in principle be able to achieve this by identifying the arrival times of the seismic waves at several spans along the cable (as shown by Marra et al. 2022), we did not demonstrate this, nor did we intend to suggest that we had. Concerning this, we have noticed the following: • In the discussion section, we mistakenly used 'seismic event' instead of 'seismic wave'. We have, therefore corrected the following passage: "We successfully accomplished the precise localization of a seismic event, identifying its location to a single span of fiber between two optical repeaters."to "We successfully accomplished the localization of a seismic wave, identifying its location to a single span of fiber between two optical repeaters."• We also revised the following sentence for clarity: "The ability to localize the seismic wave to within a span enables the observation of the seismic wave move-out (as demonstrated in figure 3e), and may lead to further benefits, such as the determination of the epicenter of a seismic event using a single fiber, and reduced influence of environmental noise compared to cumulative approaches." We aim to eliminate any potential confusion by the reader with these changes. Regarding the title, we believe that we have been careful enough by mentioning the ability to localize seismic waves but not the earthquake itself.The localization of the seismic wave within the fiber cable is one of the central claims of the paper and one of the main advantages of the technique, so we feel strongly that it should remain in the title. The authors list some possible reasons for why other spans may not see the earthquake, additionally potentially there could be something relating to fiber curvature or geometry as mentioned in Fichtner et al (2022, "Introduction to phase transmission…").Notably, that straight segments should see nothing, though I admit I don't know if this is applicable also for SOP-based measurements, nor how loop-back repeaters would affect this claim. Thank you for bringing the work by Fichtner and colleagues to our attention, which was missing in our initial bibliography and provides valuable insights towards interpreting effects that may affect local sensitivity of fiber spans to the earthquake wave. 
We must note, however, that Fichtner's work is written with phase-measurements in mind, which predominantly measure the effective elongation of the cable.As such, these results do not directly translate to polarization or birefringence-based techniques (as indicated in your question). These following reasons may play a part on why some spans are unable to detect the seismic wave: 1. The layout and geometry of cable at those spans (e.g.strain coupling and the geometry of the fiber).Regarding the geometry of the fiber layout, as studied by Fichtner et al, note that the treatment for a birefringence-based measurement (such as the eigenvalue method) would have to be different than the simpler, optical path length case explored in their article.First-order effects like elongation, as considered by Fichtner, have a minimal impact on birefringence. 2. Intrinsic features of polarization-based approaches, or the eigenvalue method: a. Changes to the birefringence may depend on the previously existing intrinsic birefringence of the cable.As such, depending on the orientation of the fast and slow axis of the fiber relative to the fiber displacement, the birefringence may be locally strengthened or attenuated. b. In addition to the previous point, the eigenvalue method returns only incomplete information about the local changes to the birefringence of the cable: it reports on the change in effective magnitude of the birefringence (eigenvalue), but misses changes to the orientation of the birefringence vector (eigenvector), which, however still affects the output state of polarization.c.Unlike a DAS system, the forward propagating and reflected light travel through two distinct fibers in the HLLB configuration.Each of the fibers has different intrinsic birefringence magnitudes, different orientation of the birefringence vector, and may experience the earthquake signal differently.Though unlikely, it is possible for the birefringence in the return path to cancel the effects of the forward-path birefringence. Due to these factors,each span may have a different sensitivity to seismic wave (which, for long-term deployments, may be possible to calibrate) Note that point 2b is exclusive to the eigenvalue method and distinguishes it from standard SOP measurements.However, in the particular instance of our work, we do not think is the dominant reason for the blind-spots: "Notably, both approaches [SOP and Eigenvalue Method] failed to detect the earthquake at spans closer to the epicenter than the 41st.This suggests that the dominant contribution to the detection limit in any SOP-based method could be due to the complex sensitivity of the local birefringence to different environmental stimuli (e.g., bends, twists, or other effects), the non-linear nature of SOP-based measurements, or variations in mechanical coupling along the cable.". We've updated the text to include the role of fiber geometry and cited Fichtner's work for comprehensiveness. "Notably, both approaches failed to detect the earthquake at spans closer to the epicenter than the 41st.This suggests that the dominant contribution to the detection limit in any SOP-based method could be due to the complex sensitivity of the local birefringence to different environmental stimuli (e.g., bends, twists, or other effects ), the non-linear nature of SOP-based measurements, variations in mechanical coupling along the cable, or the geometry/layout of the cable with respect to the induced deformation by the seismic wave [Fichtener et al].". 
Further study is needed to understand the underlying cause of the sensitivity variations and blind spots.For example, a long-term study using simultaneously a phase-based technique (such as the one in Marra et al., 2022) and polarization-based approaches for comparison.Line 42: Grammar: "On the other hand, polarization-based methods with less stringent hardware requirements have previously been unable of single-span localization" -> unable to [leverage / achieve / take advantage of] single-span approaches? We have rephrased to hopefully clarify this sentence. "On the other hand, polarization-based methods with less stringent hardware requirements have previously been unable of single-span localization, limiting their application to either full-span approaches or rudimentary localization techniques limited to a single dominant perturbation in the cable." To "On the other hand, polarization-based methods benefit from less stringent hardware requirements but have thus far been unable to localize the seismic wave to a single-span.The non-commutative nature of birefringence operations has limited these methods to full-cable measurements or, at most, to the localization of a single dominant perturbation occurring along the cable."Otherwise what is needed or gained by the eigenvalue approach?Some limited localization can be achieved with SOP-based methods, as shown in figure 3c (right) -The first span that senses the earthquake can be determined, but all spans following it will be affected.The SOP measurement is, however, fundamentally cumulative.Any environmental perturbation at position M acting on a cable comprised of N spans, will appear on all spans from M to N. Actual localization of the seismic wave enables (for example) the measurements in sub-figure e (where we see the move-out of the seismic wave to a neighboring span). As for what is needed by the eigenvalue approach: It requires a set of three measurements at distinct input polarizations, while the SOP method can be done with a single laser shot. Figure 2: Comment: Some of the symbols and acronyms in the figure are unfamiliar to me as a seismologist.For example, I don't understand what is going on inside the HLLB (loopback inset).This is OK and potentially normally readers of Communication Engineering will be more used to the symbols and acronyms, I only mention it for perspective. We added a legend to the figure, specifying what are Erbium Doped Fiber Amplifiers and Fiber Bragg Gratings, so that unfamiliar readers can more easily research these. Line 85: When describing the instrumentation: are signals at each HLLB reflector sent back along the same single fiber, or does each get a unique fiber?I guess it's the former, but as someone less familiar with the technologies involved I can't see how you separate the different HLLB paths. The HLLB reflects a tiny portion of the launched (forward-propagating) optical wave back through a second fiber.All repeaters have a HLLB which routes light through this second fiber, and discrimination of signals from each repeater is done by time-domain reflectometry. 
Assuming that each HLLB is spaced ~100 km (200 km roundtrip), and the phase velocity of light in an optical fiber is 2x10^8 m/s, we can expect each reflection to be separated by about 1 ms.We added a few sentences to the methods section, under "Choice of pulse width and repetition rate": "The pulse width and repetition rate were selected to accommodate the length and repeater spacing of the FUT.The signals from all repeaters (HLLB) paths are transmitted through the same fiber and are discriminated by their time-of-arrival since the respective input pulse launch (see figure 1b).In order to ensure that all reflections arrive before launching the next pulse into the FUT, the repetition rate must be selected as:" Fig 3e (and other analyses): The frequency range shown to have high energy at 0.25 to 0.35 Hz is rather narrower than I would expect for a M6.0 earthquake.Are there limitations on the instrument noise that limit these observations, or maybe some aspect of the eigenvalue approach?A spectrum or seismogram from a nearby land station (or one of comparable epicentral distance) would help convince me of the measurement capability. The narrow bandwidth is indeed puzzling and demands further study.In a previous work by some of the authors, strong energy was also observed within a narrow frequency band, with a full-span polarization sensing method (see Fig. 3E from Zhan et al, Science, 2021).It may originate, for example, from the generation of a coupled wave which strongly modulates the fiber birefringence.Citing the work by Zhan et al: "Somewhat unexpectedly, 350 s after the earthquake origin time, another package of strong but lower-frequency (0.3 to 0.8 Hz) waves arrived at the Curie cable (Fig. 3, E and F).Given the waves' slow average speed (~2 km/s) and the non-excitation of short-period surface waves from the earthquake at 97 km depth (see fig.S4 for an example of surface waves on SOP), we believe that these late waves are either ocean acoustic waves or Scholte waves converted from the direct P and S waves near bathymetric features (e.g., slopes, trench) (22, 23) and subseafloor heterogeneities (e.g., fault zones) (7)."It is possible that the narrow frequency band (0.25-0.35Hz) observed here is also where strong coupled waves are excited.Reducing the noise overall may help reveal the weaker energy outside the band.We will explore this in a future study.Line 123: Clarification: "The eigenvalue method's insensitivity to changes…" -> does this imply the eigenvalue method would further outperform SOP approaches, or the other way around?What is meant by "specific stimuli"?If the eigenvalue approach is less sensitive but the direct SOP is more noisy, what will win?I sincerely appreciate this type of analysis and discussion, just I want to make sure I (and readers) understand the implications. Related to the comment above: It would be great if the authors can comment on the sensitivity of such measurements as relating to earthquakes.Detecting a M6.0 is great for proof-of-concept, but such undersea fibers will only really add seismological value if they can detect things below the range of current traditional instruments (e.g.M1, M2, M3, etc. offshore).I realize this may be beyond the scope of this initial study, so this is not required, I only mention it would be interesting to comment on. In that specific case, that sentence means to account for the possibility of SOP measurements outperforming the eigenvalue method (in detection limit, not in localization) in some situations. 
Since the eigenvalue method only measures changes to the eigenvalues (birefringence strength) but not to the eigenvector orientation (birefringence vector), one can conceive of a case in which a span undergoes a series of deformations along its length, with the net result of not changing the birefringence strength, but changing the orientation of the birefringence vector.In such a case, the eigenvalue method would not be sensitive to any perturbation, but direct measurements of the SOP would. In a more realistic scenario, any perturbation to the cable will affect both birefringence strength and the orientation of the birefringence vector.SOP changes as a result of both contributions, while the eigenvalue method only measures changes to the strength.We want to be clear about possible limitations of the eigenvalue technique, and clarify that a comparison in detection limit between the two approaches may be case dependent.In our work, and as we describe in the text, we did not see a relevant difference in SNR from both techniques in our measurements at span 41 to draw any conclusions.For clarification, we added an example to "specific stimuli", in order to hopefully make that passage more clear. "Regarding the detection limit of the two tested approaches, we observed no significant SNR differences between the eigenvalue method and direct SOP measurements.Notably, both approaches failed to detect the earthquake at spans closer to the epicenter than the 41st.This suggests that the dominant contribution to the detection limit in any SOP-based method could be due to the complex sensitivity of the local birefringence to different environmental stimuli (e.g., bends, twists , or other effects), the non-linear nature of SOP-based measurements, or variations in mechanical coupling along the cable, or the geometry/layout of the cable with respect to the induced deformation by the seismic wave [Fichtener].Nonetheless, it is not easy to draw a direct comparison between the detection limit of both approaches, given the fundamental differences between the eigenvalue and direct-SOP methods: on the one hand, the detection limit when using direct-SOP methods with HLLB will likely be in part determined by the accumulated length of cable up to the interrogated span and the environmental noise acting on the cable (due to the cumulative nature of environmental noise).On the other hand, the eigenvalue method's insensitivity to changes to the birefringence vector orientation suggests potentially lower sensitivity in some scenarios, where the net effect along the span predominantly rotates the birefringence vector, without a great net effect on birefringence strength." 
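To make this distinction concrete, the following minimal numerical sketch (purely illustrative; it is not part of our processing pipeline, and all variable names and values are assumptions for the example) models a span as a rotation in Stokes space and compares the two observables: a perturbation that only re-orients the birefringence axis leaves the rotation angle (the quantity the eigenvalue method reports) unchanged, while the output SOP still moves.

```python
import numpy as np

def stokes_rotation(axis, angle):
    """Birefringent span modelled as an SO(3) rotation (Rodrigues' formula)."""
    a = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0, -a[2], a[1]],
                  [a[2], 0, -a[0]],
                  [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def retardance(R):
    """Rotation angle recovered from the matrix: the 'eigenvalue' observable."""
    return np.arccos(np.clip((np.trace(R) - 1) / 2, -1, 1))

s_in = np.array([1.0, 0.0, 0.0])                  # input SOP (normalized Stokes vector)
R_before = stokes_rotation([0, 0, 1], 0.8)        # span before the perturbation
R_after = stokes_rotation([0, 1, 0], 0.8)         # axis re-oriented, same rotation angle

print(retardance(R_before), retardance(R_after))  # 0.8, 0.8 -> eigenvalue method sees no change
print(R_before @ s_in, R_after @ s_in)            # different vectors -> direct SOP still responds
```

In a realistic span both the angle and the axis change together, which is why, as stated above, a direct comparison of the detection limits of the two approaches remains case dependent.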
Regarding the second comment, unfortunately our time with the fiber while implementing the eigenvalue method was relatively short, and as a consequence did not store any data from smaller earthquakes.With the current performance of our proof-of-principle demonstrations, it is unlikely that very low magnitude earthquakes (like M1,M2 or M3) at a similar distance from the cable would be detectable, however.There is nothing fundamental about combining the 9 SOP signals.In fact, there seems to be some misunderstanding by the reviewer, as the 9 SOP time-series themselves are not averaged (and that would be wrong).Each is obtained and processed independently to find the signal power in the earthquake band, over time, for each span (i.e., the moving variance of the signal obtained from each repeater).The resulting 2D plots are then averaged.The processing is detailed in the supplementary and methods section. "The resulting nine time-series are processed independently. The 2D plot displayed in Fig. 3c (right) is an average of the nine 2D plots obtained from processing each time series." On the origin of the nine time series: in a normal implementation of the direct SOP approach there wouldn't be 9 time series, but 3. The 3 time series correspond to the 3 polarization components in the normalized Stokes representation, (polarization of light can be represented as 3-dimensional vector, assuming a perfectly polarized wave).As such, launching one state into the fiber results in 3 time series, one for each of the components of the output Stokes vector. In our implementation, however, the input is changing between 3 states.So, for each of the 3 possible input polarizations, 3 outputs are generated (one for each of the 3 output Stokes states).This results in nine time series. In the case of the eigenvalue method, there is just one time series. Reviewer #2 The paper presents an interesting, telecom-compatible method for localizing geophysical disturbances across a potentially trans-continental fiber cable.The method is an adaptation of what had been already demonstrated in Ref 13 (using the loop-back channel in amplified submarine links to localize disturbances) except that in this case, the measurement is done using polarization changes and conventional telecom lasers (i.e.no need of high-coherence lasers as in Marra's paper). This implies some advantages as telecom lasers themselves can be used (however, the use of dedicated polarization synthesizers and polarimeters in the measurement channel is needed, which means some unusual hardware in these nodes). In essence the authors use long pulses with a selectable polarization state (pulses are the size of the span length) and the reflection from each loop-back channel is analyzed as a function of the time of flight of these long pulses.I think the paper is interesting and has to be published.I have several concerns and questions that the authors can surely address in a relatively easy way: -I had to read too much through the methods section to actually understand the measurement procedure, I am not sure if some of this information could be shifted to the Results section considering that people read Results before Methods.In this case, for people with some skill in optical measurements, the information in Methods is of key importance to understand the process.I suggest to move some of the hardware operation to Results and possibly move part of the matrix treatment to the Methods section. 
We appreciate and thank the reviewer for the positive comments.The reviewer raises a good point regarding the organization of the paper. We moved the post-processing of the eigenvalue method to the results section, after the theoretical section on how to perform single-span localization.We think that this aids in comprehension of the technical part with minimal alteration to the structure of the paper.We also changed the sub-title of that section to "Measurement and Post-processing" -When it comes to localization, obviously the golden standard in all these systems is using a DAS.Overall, the pulses used here are 300 microseconds long, which is comparatively very long for a DAS (3 orders of magnitude larger).I wonder if a DAS-like architecture with such a relatively long pulse could also give a measurable signal.DAS would have the advantage of being more quantitative and linear than this scheme.I think that an evaluation of the backscattered energy in such a case could help the authors decide if a poor resolution DAS could also do the same measurement (of course with a more expensive laser).This is an interesting idea that we have been discussing internally, but it is very different from our current approach, and has several different considerations. As the reviewer points out, it is true that signals are much longer (higher energy) which could potentially compensate for the added losses from the HLLB (~20 dB).In principle, one could further conceive of performing techniques such as digital pulse compression to further increase the SNR and circumvent these issue. Compared to a DAS system, however, the peak power of the signals at the input and after each amplification stage is lower than what is often used for DAS (usually close to the modulation instability threshold of 23 dBm/200 mW).Coexistence of high peak-power DAS pulses and telecom channels on the same fiber raises concerns of higher bit-error rates due to cross-phase modulation. Furthermore, depending on the DAS architecture used, the non-linear phase contribution of the reflector regions to the strain signal depends on the input pulse width.The pulse width and gauge length need to be carefully considered so that the non-linear phase contribution (originating from displacement in scattering centers within the pulse regions) does not dominate the linear contribution due to the phase evolution over the gauge length, between the two pulse positions (start and end of each gauge length).Pulse widths are typically selected to be shorter than the gauge length for this reason, and it is unclear if such long pulses may lead to high non-linearity in phase measurements.This is also addressable, however, as there have been a few recent works on using multi-frequency measurements to mitigate the nonlinear contribution (Ogden, et al, 2021, Scientific Reports). One final concern is the increased noise floor due to the constant stream of ASE light being backscattered and transmitted by the HLLB path (from the whole fiber).This may further increase the power demands for retrieving a measurable signal.Also, each span is roughly of the length of the full range capable of being interrogated by current DAS systems.It is conceivable that, even in an optimistic scenario, there are some blind spots near the end of each span. 
Finally, as the reviewer pointed out, this would imply using a high coherence laser -coherence length of at least the spatial resolution.In this work, we were aiming for an implementation that was compatible with telecom lasers and not reliant on coherent detection schemes. Again, this is an interesting idea which could possibly lead to a publication at a later point, but it is far beyond the current work, and would require a significant research effort. -I am sure that during the measurement campaign there were other disturbances of smaller magnitude that could be recorded along the used cable (this is a seismically very active region).Please provide information of what is the minimmum magnitude of event that could be detected in the measurement campaign done here.Showing the magnitude 6 event is interesting, but giving the actual sensitivity threshold would be necessary to comparatively assess this method and the others published in the literature.This comment echoes one of the comments made by Reviewer 1.Unfortunately, our time with the fiber since having the eigenvalue method implemented correctly was limited, and we did not store data from other events -We only retrieved data from the system when we saw a relatively strong earthquake happening, as a means of validating the technique. For this proof of concept, it seems unlikely that this method would be sensitive to lower-magnitude earthquakes (M1 to M4) occurring at similar distances, judging from the magnitude of the received signals against the observed noise.Perhaps with further optimization or additional processing this could be achieved, but goes beyond the scope of this proof of concept. -Sampling is very low (sub Hz in this case, potentially 2-3 Hz if the hardware had no delay times) as the reflections from all the repeater spans have to be collected and 3 polarization states have to be swept.Please comment if there is any room for increasing the sampling while keeping the same constraints in terms of fiber size.This is a limitation of the eigenvalue method (as of DAS system or other roundtrip time-of-flight techniques).In this case, there is an additional penalty as the sampling will be 3 times slower than single-shot techniques. Fundamentally, the measurement rate is limited by the fiber length, as pointed out by the reviewer.Increasing the complexity of the hardware may enable some mitigation of this limitation (as can be for other roundtrip techniques, using multifrequency probing or coding, for example), but that would require further research. -Of course the interest of gathering measurements across many points is using array methods.However, considering the "nonlinear" nature of these polarization measurments, would this be compatible with array processing? Coherent array signal processing techniques, such as beamforming, are not usable unless the nonlinearity can be calibrated or accounted for.Similarly to the previous answer, one possible option is to perform the same measurement at multiple optical frequencies to attempt to overcome the nonlinearity, at the cost of increasing hardware cost and processing complexity. Nevertheless, there is something to be gained from having localization information, even with incoherent measurements between channels.Namely, the time-of-arrival of the seismic wave at different spans can be measured.We were able to observe the seismic wave move-out through two spans, which may eventually lead to localization of the earthquake origin. 
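As a rough illustration of the round-trip sampling limit mentioned above (the numbers are assumptions for the example, not system specifications: we take an end-to-end length of roughly 10,500 km and a group velocity of 2x10^8 m/s), a single-shot interrogation is limited by the full round-trip time of the pulse, and the eigenvalue method pays an additional factor of three for the three input polarization states:

```python
# Illustrative round-trip sampling limit (assumed numbers, not system specifications)
L_km = 10_500                         # assumed end-to-end fiber length, km
v_g = 2.0e8                           # assumed group velocity in fiber, m/s

t_rt = 2 * L_km * 1e3 / v_g           # full round-trip time of one pulse, s
f_single_shot = 1.0 / t_rt            # maximum rate with one launch per measurement
f_eigenvalue = f_single_shot / 3.0    # three input SOPs needed per eigenvalue estimate

print(f"round trip: {t_rt * 1e3:.0f} ms")                 # ~105 ms
print(f"single-shot limit: {f_single_shot:.1f} Hz")       # ~9.5 Hz
print(f"eigenvalue-method limit: {f_eigenvalue:.1f} Hz")  # ~3 Hz, before hardware delays
```

Under these assumed numbers the hardware-free limit lands in the 2-3 Hz range quoted by the reviewer, with the sub-hertz rates observed in practice explained by the additional hardware delay times.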
Reviewer #3: Review of "Localization of Seismic Waves in Submarine Fiber Optics Using Polarization-only Measurements," by Luis Costa et al. The manuscript presents a report on the detection and localization of an earthquake using an undersea fiber optics infrastructure.It appears to be a valuable addition to the rapidly growing literature on this topic.I am inclined to endorse its publication pending the proper addressing of my concerns outlined below: 1) My primary concern relates to the authors' use of singular value decomposition (SVD) as an intermediate step to obtain a polar decomposition for extracting the unitary part of the transmission matrix.While this procedure is standard in Jones space, it may not be suitable in Stokes space.To illustrate this, consider the simple case of combining a (partial) polarizer represented in Jones space by a (positive definite) matrix A and a concatenation of waveplates represented by an arbitrary unitary matrix U. In Jones space, the transmission matrix T is given by T = UA.The polar decomposition of T is either T = U A (right polar decomposition) or T = B U (left polar decomposition) and is unique.Consequently, U is also unique, and applying SVD would yield the exact result, providing the unitary matrix U and, in Stokes space, the rotation matrix corresponding to U.However, if the SVD is directly applied in Stokes space, it would not return the 3 by 3 rotation matrix corresponding to U.This limitation arises because representing a pure polarizer as a linear operator is not possible within the 3-dimensional Stokes space.To maintain the linearity of the representation, it becomes necessary to extend the Stokes space with an extra dimension representing the total power and replace the matrices that represent rotations in Stokes space with 4 by 4 Mueller matrices.In the extended space, the unitary component of the decomposition is the direct sum of a rotation in the 3-dimensional Stokes space and an identity in the fourth coordinate.This makes not straightforward the application of the SVD to extract from the transmission matrix the unitary part of the concatenation. Of course, the use of the SVD in the 3-dimensional Stokes space would still produce, for the concatenation polarizer-waveplates, a unitary matrix, but in most cases this unitary matrix includes the polarization rotation induced by the partial polarizer, which would instead be filtered out if the SVD is applied in Jones space. Earthquakes primarily affect fiber propagation by inducing changes in the fiber's refractive index and birefringence, thereby impacting the unitary part of the transmission matrix.On the other hand, polarization-dependent loss is mainly caused by lumped devices and remains substantially time-independent.By applying the SVD in Stokes space, crosstalk is generated between the time-independent polarization-dependent loss and the time-dependent unitary part of polarization rotation.Consequently, this crosstalk has the potential to significantly reduce the sensitivity to time-dependent birefringence changes. Given that it is not challenging to extract the transmission matrix in Jones space from the data, the authors should reconsider their data processing approach and extract the unitary part of the fiber propagation by applying the SVD in Jones space rather than in Stokes space. Thank you for the careful analysis and suggestion. 
The reviewer raises a great point concerning the processing of the data in Stokes space, which we had not taken into consideration.In our manuscript, we treated the normalization of the Stokes vectors and the SVD as simple processing steps to obtain unitary (orthogonal) matrices from noisy estimations and partially polarized light. As the reviewer correctly points out, there are physical implications to this kind of processing, due to the presence of polarization dependent loss on the cable.By not working with the full Muller matrices, and instead a partial representation of transmission matrices in Stokes space, we include polarization changes from both rotation of the polarization state and from polarization dependent loss.This may contribute as a loss of sensitivity on our measurements We appreciate the reviewer's suggestion to perform the SVD in Jones space or attempt to work with the full 4x4 Muller matrix.However, as we have used a Stokes receiver in our experiments, we do not have access to the Jones data.Additionally, we are unable to recover the 4D Muller matrices from our measurements since we are building each matrix out of only 3 acquisitions (or a 3D basis).As such, it is not immediately clear to us how we could perform that processing from the current data. Nevertheless, we do agree that the reviewer is fundamentally correct in his assessment, and this must be addressed in the manuscript.We added the following paragraph to the discussion section of the text. Additionally, the sensitivity of the eigenvalue technique may potentially be improved by using a Jones receiver or by recovering the full Muller matrix.Currently, by acquiring three sets of Stokes components, normalizing each of them, and calculating the closest unitary matrix, we are making an assumption of no polarization dependent loss in each span.The polarization rotation originating from environmental changes (i.e., changes to the birefringence) will be combined with the apparent rotation originating from polarization-dependent loss of the lumped elements as the same signal (which is largely time-independent, and not directly correlated to environmental changes). We would like to highlight that this does not invalidate our method and results, but does bring forward an underlying assumption to our processing that we had not previously mentioned in the manuscript, and points a clear path towards optimization and further work which we had not previously considered. 2) The experiment's specific details regarding the system where it was performed have not been provided in the report.However, it appears that the system under test bears a striking resemblance to Curie, the system described in [14].The only discernible difference is the location of one of the system's terminals, with one being in Santiago instead of Valparaiso.To ensure transparency and enable readers to thoroughly understand the characteristics of the system under test, it is crucial to provide this information.Additionally, the report should explicitly state whether the data were collected from the Santiago or Los Angeles terminal. We thank the reviewer for the careful read of the paper.This is in fact, a mistake (which we have now corrected).Indeed, one of the system's terminals is in Valparaiso. We also added the information on the name of the cable, and on which of the terminals the interrogation setup is located at. 
In the results section: "On December 11, 2022, at 14:31:29 UTC, a magnitude 6.0 earthquake occurred in Guerrero, Mexico, which we captured on the Curie transoceanic fiber cable, which connects Los Angeles (California) to Valparaiso (Chile).The interrogation setup (situated in the Los Angeles terminal) is depicted in Fig. 2 , and includes a telecommunication transponder used to send linearly polarized optical pulses through a polarization synthesizer on the emitter side, and a polarimeter on the receiver side, which is used to evaluate the state of polarization of the received reflections." Minor comments: Line 70: (disregard if the paper is modified following the suggestion in comment 1).The authors' analysis is conducted in Stokes space, not in Jones space.Consequently, U represents an arbitrary matrix with real entries describing a (proper) rotation in Stokes space, which is a special case of an orthogonal matrix, not an arbitrary (complex) unitary matrix.This distinction is important as it ensures that readers are given the immediate perception that the analysis takes place in Stokes space, not in Jones space.This is a good point, for the reasons described of avoiding confusion with a Jones space implementation. We changed the following sentence: where s is the normalized Stokes vector representing the SOP of the input pulse, and Am is the real-valued rotation (orthogonal) matrix that describes the cumulative birefringence effects of the complete round-trip to and from the m-th repeater (Fig. 1a). In the following sentence, however, we kept the word unitary, because that is a general result.The matrix can be any unitary, but in this case will always be a rotation matrix. "where U =A^{fwd}_{(m-1)} can be any (unknown) unitary matrix."Line 112: Would be beneficial that the definition of crosstalk is given the first time it is introduced and discussed.The reader is not exposed to the mathematical definition of crosstalk until hitting the figure caption of Fig. 4 of the supplementary material. We changed the first mention of crosstalk in the main text to the following."However, while the earthquake signal is visible in every span following the 41st when using the direct SOP approach, our eigenvalue method localizes the measurement to a single span with minimal crosstalk (defined as the increase in signal noise power in the earthquake frequency band to subsequent fiber locations).We observe a median value of ~1 dB of crosstalk, Fig. 3d."Line 157: (disregard if the paper is modified following the suggestion in comment 1) Again, the U and V matrix are real, so that V^* should be the transpose of V. Since the star is usually reserved for Hermitian conjugate, I would suggest using another symbol for it. Changed to V^T, as per the reviewer's suggestion. Line 18 of the supplementary: (disregard if the paper is modified following the suggestion in comment 1) The outcome of the singular value decomposition should be the closest orthogonal matrix, and V' is not defined but it should be defined as the transpose of V. Changed to V^T, in accordance to the main text. Line 77 of the supplementary: The sentence "Note that while the variance of the applied perturbation was constant, the observed variance in the perturbed span due to the nonlinear nature of the measurement" appears to be incomplete. 
We rewrote that paragraph, in the hope of making it clearer: "We define crosstalk as the median variance of the signal observed in all (unperturbed) spans located after the perturbed span, normalized to the variance of the signal observed in the perturbed span (which may change between runs of the simulation, due to the nonlinearity of eigenvalue measurements). In figure S4a, we plot the crosstalk against the orthogonality figure of merit (Q) and the maximum birefringence change between consecutive acquisitions (as a measure of non-stationarity)." Line 42: Conceptual: You claim that polarization-based methods have been unusable for single-span localization until now, thus motivating the need for the eigenvalue method you propose. But Fig 3C-right, if I understand correctly, is using more traditional direct SOP methods, so what is the limitation there? I guess you cannot isolate Span 41 from that figure? Otherwise what is needed or gained by the eigenvalue approach? Line 166: Again, as one less familiar with the technologies, I don't understand why 9 SOP time-series are generated and why averaging them for Fig 3C-right is appropriate. But possibly this is given in background / cited literature?
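As a minimal illustration of the "closest rotation" projection discussed in the response to Reviewer #3 above (this sketch is purely illustrative and is not the processing code used in the paper; the synthetic noise level and all names are assumptions for the example), the standard SVD-based step that projects a noisily estimated 3x3 matrix, built from three measured output Stokes vectors, onto the nearest proper rotation can be written as:

```python
import numpy as np

def nearest_rotation(M):
    """Project a noisy 3x3 matrix onto the closest proper rotation (SO(3))
    in the Frobenius sense, via the singular value decomposition M = U S V^T."""
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:      # enforce det = +1 (proper rotation, not a reflection)
        U[:, -1] *= -1
        R = U @ Vt
    return R

rng = np.random.default_rng(0)
true_R = nearest_rotation(rng.normal(size=(3, 3)))   # some exact rotation
noisy = true_R + 0.05 * rng.normal(size=(3, 3))      # synthetic measurement noise
R_hat = nearest_rotation(noisy)

print(np.allclose(R_hat @ R_hat.T, np.eye(3)))       # True: orthogonality restored
print(np.linalg.norm(R_hat - true_R))                # small: close to the true rotation
```

The determinant correction matters here because, unlike in the Jones-space case discussed by the reviewer, the Stokes-space matrices of interest are proper rotations rather than general orthogonal matrices.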
Resumming subleading Sudakov logarithms in saturation regime We investigate the scale dependence of transverse momentum dependent(TMD) gluon distribution in saturation regime. We found that in the Collins-2011 scheme, the scale dependence of small x gluon TMD is governed by the same renormalization group(RG) equation that holds at moderate or large $x$. Following the standard procedure, one then can resum both double leading logarithm and single leading logarithm in saturation regime by jointly solving the Collins-Soper equation and the RG equation. I. INTRODUCTION One of the central scientific goals to be achieved at the current and future facilities, including JLab 12 Gev upgrade, RHIC and the planned electron-ion collider(EIC) is to reveal the three-dimensional structure of nucleon/nuclei by measuring final state produced particle transverse momentum spectrum in high energy scatterings. The extraction of parton transverse momentum dependent(TMD) distributions that encode information on the internal structure of nucleon/nuclei from physical observables relies on the QCD factorization theorem. As a leading power approximation, TMD factorization in moderate or large x region has been well established during the past few decades [1][2][3][4]. However, since high twist contributions arise from multiple gluon re-scattering is no longer negligible at small x, it is nontrivial to justify TMD factorization in saturation regime. Some recent attempts to address this issue have been made in Refs. [5][6][7][8][9][10][11] The purpose of the current work is to further extend and refine the previous analysis presented in Refs. [10,11]. The key idea of unifying small x formalism and TMD approach is a two step evolution procedure, which can be best demonstrated using a color neutral scalar particle production through gluon fusion(gg → H) process as an example. Below the produced particle mass and transverse momentum are denoted as Q and k ⊥ respectively. For a comparison, we first review the conventional treatment in moderate x region, where the calculation of the differential cross section can be formulated in collinear factorization. At high order, the collinear divergence and various large logarithm terms show up. After absorbing the collinear divergence and the associated logarithm ln Q 2 µ 2 into the renormalized gluon PDF, we are still left with large double logarithm term α n s ln 2n Q 2 To facilitate resumming these large logarithms to all orders, one can introduce gluon TMD distribution. The large k ⊥ logarithms then can be resummed by solving the Collins-Soper evolution equation [1,4] that governs lnζ c dependence of gluon TMD. Here ζ c is a parameter introduced in the Collins 2011 scheme [4] for regulating the light cone divergence. It plays a role of varying hard scale which allows one to smoothly evolve from the scale Q 2 down to k 2 ⊥ . When the center mass of energy S is much larger than Q 2 , the large logarithm ln S Q 2 ∼ ln 1 x appears in high order calculation could be more important than the logarithm ln Q 2 µ 2 . One thus should formulate the calculation in color glass condensate(CGC) effective theory [12,13] to first take care of the logarithm ln S Q 2 . They can be summed by solving the Balitsky-Kovchegov(BK) [14,15] equation that describes ln 1 x dependence of multiple point functionthe basic nonperturbative ingredient in CGC calculation. In the kinematic region where terms also arise in high order calculations. 
When these logarithms are much larger than leading order but high twist contributions suppressed by the power of k 2 ⊥ Q 2 [7], TMD factorization should be employed where all subleading power contributions are systematically ignored. The equivalence between the leading power part of the CGC result and TMD factorization calculation at tree level can be verified by utilizing the operator relation between the derivative of multiple point function and gluon TMD matrix element [5,6]. What we gain by making the leading power approximation is that large α n s ln 2n Q 2 k 2 ⊥ logarithm terms can be resummed to all orders in the context of TMD factorization. To achieve such a resummation, it is necessary to show that the properly defined gluon TMD accommodates the similar large logarithm and satisfies the Collins-Soper equation in the small x limit. It has indeed been verified in a recent work [10] that gluon TMD computed at the next to leading order in a quark target model satisfies both the Balitsky-Fadin-Kuraev-Lipatov(BFKL) equation [16,17] and the Collins-Soper equation in the small x limit. The similar analysis was later extended to saturation regime by calculating small x gluon TMD in terms of multiple point functions using CGC approach [11]. Schematically, the derived Weizsäcker-Williams (WW) type gluon TMD takes the following form, and µ F is the factorization scale. F is the Fourier transform of the WW gluon distribution, where U(x ⊥ ) is the Wilson line in the fundamental representation. It absorbs all large logarithm ln 1 x terms from the hard part with the help of the BK equation. The remaining logarithms are resummed into the exponentiation known as the Sudakov factor by solving the Collins-Soper equation. We are eventually left with a hard coefficient H(α s (Q)) that only has finite contributions. The Similar result holds for the dipole-type gluon distribution. The hard coefficients A and B can be calculated perturbatively. In the previous work [11], we only took into account the double leading logarithm contribution in CGC framework, and determined the coefficient A to be A = αsC A π at leading order, which is the same as the one in the standard Collins-Soper-Sterman(CSS) formalism [3]. The purpose of the current work is to sort out the single leading logarithm contribution in saturation regime, i.e. fixing the coefficient B. To this end, one has to study not only the lnζ c dependence but also the factorization scale µ F dependence of small x gluon TMD. In other words, we aim at deriving a renormalization group(RG) equation for the gluon TMD in saturation regime. By jointly solving the RG equation and the Collins-Soper equation, one is able to resum both the double leading logarithm and single leading logarithm contributions. From a theoretical point of view, completing the previous analysis on k ⊥ resummation in saturation regime is interesting in its own right. On the other hand, the present work is further motivated by the fact that very rich polarization dependent phenomenology in saturation regime has been discovered in recent years [18][19][20][21][22][23][24][25][26]. It is time to lay down a solid theoretical ground for performing phenomenological studies of the relevant physical observables which can be measured at RHIC, LHC, and the planned EIC. The rest of this paper is organized as follows. In the next section, we compute the anomalous dimension of small x gluon TMD by isolating the ultraviolet(UV) divergent part. 
The most nontrivial part of our analysis is to investigate how the UV divergence is affected in the presence of multiple gluon re-scattering. The detailed derivation will be presented. The paper is summarized in Sec.III. II. DERIVATION There are two widely used k ⊥ dependent unpolarized gluon distributions with different gauge link structures: (1) the WW type distribution with a staple like gauge link, and (2) the dipole type distribution with a close loop gauge link. These two type gluon distributions can be directly probed through two-particle correlation in different high energy scattering reactions [5,6,[27][28][29][30]. In Ref. [11], we demonstrated that both gluon TMDs computed in CGC formalism satisfy the Collins-Soper equation after matching them onto the renormalized quadrupole and dipole amplitudes respectively. At leading order, the Collins-Soper equation reads [4], where G(x, b ⊥ , µ 2 , ζ 2 c ) is the Fourier transform of gluon TMD, which can be related to the derivative of the quadrupole amplitude for the WW case. The logarithm ln 1 x dependence of the operator G(x, b ⊥ , µ 2 , ζ 2 c ) is described by the BK equation. On the other hand, the factorization scale dependence of gluon TMD is governed by the renormalization group equation which takes form, The anomalous dimension γ G is also the function of ζ c . Its ζ c dependence can be explicitly separated out as the following [4], at one loop order. Following the standard procedure, it can be readily deduced from the evolution equations 1 , where the large logarithm is resummed into the Sudakov factor. In the dilute limit, it is shown [10] that both the double leading and single leading logarithms can be resummed. In the saturation regime, is only the double leading logarithm contribution Exp − took into account in the previous analysis [11]. The value of γ G (g(µ), 1) is not yet fixed for the saturation case. The purpose of the present work is to compute the anomalous dimension of small x gluon TMD in saturation regime. Once the anomalous dimension is worked out, the single leading logarithm can be resummed to all orders by solving the CS and RG equations as shown above. As an example, we focus on the WW case in this paper. The calculation can be straightforwardly extended to the dipole case. Our starting point is the the matrix element definition of the WW gluon distribution, where F +i a (ξ − , ξ ⊥ ) is the gauge field strength tensor and the gauge link is further fixed to be the past pointing one in the adjoint representation, The WW type gluon TMD also can be defined in the fundamental representation, where U denotes the gauge link in the fundamental representation. One can readily determine the anomalous dimension by isolating the ultraviolet(UV) divergence of small x gluon TMD. To do so, one has to go beyond the conventional treatment of small x formalism in which the Eikonal approximation is applied everywhere. This is because the UV divergent part does not have 1/x enhancement and could be missed in the leading power small x approximation. We thus have to carry out the calculation in the full QCD. The extra care should be taken when performing the Eikonal approximation to simplify calculation. In order to fix conventions and to do a warm up exercise, we start with the tree level calculation, though the UV divergence is absent at the tree level. Diagrams illustrated in Fig.1 give rise to the leading order contributions. In the small x limit, the dominant contribution is from the A + component. 
It is trivial to compute the graph Fig.1(a) and its conjugate part, which lead to, It is well known that the gauge link is built through gluon re-scattering. The diagram Fig.1(b) gives rise to the first nontrivial term of the Taylor expansion of the gauge link, where g 0 associated with the gauge potential A + b (z − , y ⊥ ) is the bare strong coupling constant , which will be renormalized after including one loop correction. Similarly, the graph Fig.1(c) results in, It is straightforward to resum gluon re-scattering to all orders. The WW type gluon TMD in the small x limit eventually can be cast into the following form in the fundamental representation, where the strong coupling constant appear in the Wilson lines is the bare one. The above matrix element obtained through tree diagram calculation is consistent with the matrix element definition given in Eq.2, which only captures the leading contribution in the power of 1/x. In contrast, the gluon TMD definition Eq.7(or Eq.10) is valid at arbitrary x, and keeps not only the leading ln 1 x terms but also the leading contributions in the power of µ 2 /ζ 2 c and k 2 ⊥ /ζ 2 c . Therefore, the correct UV behavior and the lnζ c dependence of gluon TMD only can be obtained by computing the expectation value of the matrix element in Eq.7(or Eq.10), rather than the matrix element given in Eq.14. UV divergence only arises in virtual corrections. There are four virtual graphs without gluon re-scattering as shown in Fig.2. It is easy to verify that the contribution from the graph Fig.2(d) vanishes. We now start with computing the vertex correction shown in Fig.2(a). To avoid the interaction between the radiated gluon and color source inside target, our calculation is performed in the light cone gauge (A − = 0), in which gluon propagator reads, where the prescription 1 l·p−iǫ for regulating the light cone divergence is proven to be the most convenient choice for our calculation. The contribution from the graph Fig.2(a) is expressed as the product of the corresponding hard part and the gluon TMD matrix element without gauge link being included, The hard part is given by, where k 2 ⊥ in the denominator arises when we make the following conversion by partial integration, We proceed by performing contour integration on l − , As we are only interested in the UV behavior of the small x gluon TMD, the external transverse momentum k ⊥ can be neglected. It is then trivial to carry out the elementary integration for l + . One arrives at, In contrast to a covariant gauge calculation, the vertex correction we obtained is free from the Collins-Soper type light cone divergence. But the second integration ∞ k + dl + l + has the BFKL/BK type light cone divergence when l + goes infinity, and leads to the small x evolution of gluon TMD, that is beyond the scope of the current work. We turn to discuss the Wilson line self energy correction, with the hard part, which contains the light cone divergence when l + goes to zero. Such end point singularity can be cured by introducing a soft factor in the Collins-2011 scheme. The gluon self energy graph Fig.2(c) gives, According to the LSZ reduction formula, half the one loop correction of the gluon propagator contributes to the anomalous dimension of the gluon TMD, while another half contributes to the renormalization of gauge field. That is why we include a factor 1 2 in the above equation. Once again, we use the residue theorem to perform the l − integration in the hard part. 
This gives, As before, to explicitly isolate the UV pole contribution, the external transverse momentum k ⊥ is set to be zero. The l + integration can be done by very elementary methods, Put all contributions from Fig.2 together, where dimensional regularization is introduced. Some of finite terms might be missed at intermediate steps. However, such treatment is sufficient as we only need to compute UV pole terms for the current purpose. It is interesting to notice that the UV divergence cancels out in the phase space region k + ≤ l + ≤ ∞. This is consistent with the observation that the evolution kernels of the BFKL/BK equations are UV finite. This is also the precise reason why one has to formulate the calculation in full QCD rather than in small x formalism where many sub-leading terms in the power of 1/x are missed, including UV pole terms. The end point singularity in the second term in Eq.26 is canceled by the soft factor in the Collins-2011 scheme. Combining with contributions from the hermitian conjugate diagrams, the subtracted gluon TMD then takes form One finds that both the factorization scale µ and the parameter ζ c dependence of the gluon TMD show up at the next to leading order. The UV counterterm is added in the above equation to give a finite result at ǫ = 0. Note that the collinear divergence in our calculation is absent once the incoming gluon transverse momentum k ⊥ is restored. In a conventional collinear factorization calculation, the remaining collinear divergence can be removed after matching TMD onto gluon PDF. The UV counterterm in the MS scheme is determined as [4], where S ǫ = (4π) ǫ Γ(1−ǫ) . The anomalous dimension of the gluon TMD can be computed accordingly, which is the same as the standard one. With this anomalous dimension, one reproduces the common Sudakov factor including both double leading logarithm and single leading logarithm contributions in the dilute limit. We now calculate virtual correction in the presence of gluon re-scattering. As argued before, it is not appropriate to first resum all order gluon re-scattering into the Wilson lines by applying the Eikonal approximation because the UV divergent part is the sub-leading contribution without 1/x enhancement. Instead, one should work out the UV part before resumming gluon re-scattering. Here we start with one gluon re-scattering case. First of all, it is easy to check that diagrams with four gluon vertex have vanishing contribution in the gauge we specified above. We start evaluating the vertex correction from Fig.3(a), Fig.3(b), and Fig.3(c). It is convenient to calculate the following combination, 1 2 Fig.3(a) + Fig.3 with the hard part being given by, As stated previously, we do not aim at getting the complete result. To clearly extract the UV divergent part associated with the leading power contribution, we set k 1⊥ to be zero and make the Taylor expansion in terms of the power k ⊥ /l ⊥ , We rearrange the kinematic factor k 2 ⊥ into the soft part and combine it with the gluon TMD matrix element by partial integration, Since the hard part is no longer dependent of k 1⊥ , one can carry out the integration over k 1⊥ . This produces a delta function δ 2 (z ⊥ − y ⊥ ). The integration for z ⊥ then can be trivially done. After performing the integral over k + 1 by the residue theorem, one obtains, 1 2 Fig.3(a) + Fig.3 which differs from the Fig.2(a) by a factor 1/2. We will show that another half contribution comes from the combination 1 2 Fig.3(a) + Fig.3(b). 
To simplify the calculation of 1 2 Fig.3(a) + Fig.3(b), we play the following trick. One can treat the k 1 gluon as the collinear one(the error is power suppressed), and thus apply the Ward identity to the internal gluon line in Fig.3(b). The hard part from Fig.3(b) can be subsequently separated into two parts, Fig.3 Note that we relabeled the gluon momentum flow in the above equation. The internal gluon line sandwiched by the two incoming gluon lines carries momentum l. It is easy to check that the second term in the above equation is canceled out by the half of the contribution from Fig.3(a). We are left with the first term once combing Fig.3(b) with the half of Fig.3(a), which can be further simplified by changing integration variable l → l + k 1 and neglecting k 1⊥ in the numerator, This turns out to be the same as the hard part of Fig.2(a) except for the color factor. Following the procedure outlined above, the UV pole term extracted from Fig.3(a), Fig.3(b) and Fig.3(c) is given by, Fig.3 The vertex correction now is correctly reproduced with one gluon re-scattering being taken into account. Diagrams Fig.3(f) and Fig.3(g) also represent vertex correction, which however, do not contribute to the scale evolution of gluon TMD. Instead, they are responsible for the running of the strong coupling constant in the gauge link together with gluon self energy diagram Fig.3(i) and the Wilson line self energy diagram Fig.3(h). The similar calculation for diagram Fig.3(f) leads to, The external transverse momentum k 1⊥ is set to be zero, one has, By carrying out the contour integration for k + 1 , the lower limit of the second integration is constrained to be zero. We arrive at, The hard part of Fig.3(g) is written as, After integrating over k + 1 , one obtains, The contributions from the Wilson line self energy are listed as the follows, and gluon self energy diagrams, We are now ready to assemble all pieces together. First, one notices that the light cone divergence is canceled out among Fig.3(f), Fig.3(g) and Fig.3(h). Including gluon self energy diagram, we have, from which, one can reproduce the one loop beta function that describes the scale dependence of the strong coupling constant. The summation of the rest diagrams in Fig.3 gives, Adding up hermitian conjugate contributions and the soft factor, the final result reads, The extra UV divergence can be removed by replacing the bare strong coupling constant g 0 with a renormalized one g in the gauge link, with Here quark loop contribution is not included. The one loop virtual correction to the gluon TMD now takes form where xG1 denotes the gluon TMD with the gauge link −g It is easy to see that the anomalous dimension is not affected by gluon re-scattering effect. This is more or less expected because the short distance physics(UV divergence) can not be altered by physics happens in long distance(gluon re-scattering). We now proceed to compute virtual correction with two gluon re-scattering. To generalize the calculation to the two gluon re-scattering case, let us reexamine the evaluations of Fig.3(a), Fig.3(b) and Fig.3(c) from a different aspect of view. We start with investigating the k + 1 pole structure of these three diagrams. Fig.3(b) and Fig.3(c) generate the pole; 1 while a double pole emerges in Fig.3(a), If one picks up 1 k + 1 k + 1 +l + +iǫ contributions from three diagrams are canceled out due to the Ward identity. We are left with the 1 k + 1 +iǫ pole contribution from Fig.3(a). 
At this step, one can directly isolate the 1 k + 1 +iǫ pole contribution using the residue theorem. The rest calculation is exactly same as that for the standard vertex correction represented by Fig.2(a). In an analogous way, the calculation of Fig.4 can be simplified by playing the same trick. For instance, the 1 k + 2 +l + +iǫ pole contributions from Fig.4(j), Fig.4(k) and Fig.4(l) are canceled out. Such cancelation also occurs among Fig.4(m), Fig.4(n) and Fig.4(o). We are left with the 1 k + 2 +iǫ pole contributions from Fig.4(j) and Fig.4(m). After carrying out integration over k + 2 using the residue theorem, it becomes obvious that the calculations of Fig.4(j) and Fig.4(m) are the same as that for Fig.3(f) and Fig.3(g). We end up with, Eventually, the same vertex correction is reproduced with graphs Fig.4(a-g). Combining with the Wilson line self energy diagram Fig.4(h) and gluon self energy diagram Fig.4(i), one obtains, Now it is easy to see that the identical UV pole structure of the gluon TMD is recovered for the two gluon re-scattering case. The method introduced above can be recursively applied to multiple gluon re-scattering case starting from the right-most gluon attachment. The evaluation of diagrams with n gluon re-scattering always can be reduced to the calculation of diagrams with n − 1 gluon attachment. The UV structure is not affected no matter how many soft gluons are exchanged. This is expected because as a quite general principle, long distance physics does not cause any impact on short distance physics. We verified this statement by explicit calculations for this specific case. Now let us summarize our calculation. We computed the gluon TMD defined in Eq. 7 in term of the operator given in Eq. 14. The calculation is compatible with the conventional treatment of the small x formalism. To study the scale dependence of the gluon TMD, the UV pole terms from virtual corrections are explicitly worked out. Apart from leading to the scale evolution of gluon TMD, virtual correction also results in other effects: 1) the running of strong coupling constant, all the bare strong coupling constant in Eq. 14 should be replaced with the renormalized ones; 2) the bare gauge fields in Eq. 14 are replaced with the renormalzied fields. The derived anomalous dimension of gluon TMD is γ G (g(µ), 1) = αsC A π 11 6 (it is straightforward to include quark loop contribution), which is the same as the one calculated without taking into account multiple gluon re-scattering. Therefore, we conclude that both the double leading and single leading logarithms can be resummed to all orders in saturation regime by solving the CS evolution equation and the RG equation. III. SUMMARY This work is devoted to the study of the resummation of the single leading logarithms in saturation regime. In the Collins-2011 scheme, the double leading logarithm and single leading logarithm can be resummed into an exponentiation i.e. the Sudakov factor by solving the Collins-Soper equation and the RG equation. In a previous publication, we showed that small x gluon TMDs do satisfy the Collins-Soper equation. To derive the RG equation, we compute the one loop virtual corrections to the WW type gluon TMD in the presence of multiple gluon re-scattering. As expected, the UV divergence structure of virtual corrections are not affected by multiple gluon re-scattering effect. 
As a consequence, the anomalous dimension of the small x gluon TMD determined through the UV pole terms is found to be the same as the one calculated in the conventional way at one loop order. Our analysis can be straightforwardly applied to other cases, for instance, the WW type gluon distribution with a future pointing gauge link and the dipole type gluon distribution. We reached the same conclusion that the perturbative part of the resulting Sudakov factor takes the same form in a dense medium. However, it is not yet clear whether the non-perturbative part of the Sudakov factor is affected by saturation effects. We leave this for future study. Nevertheless, it is now clear that the full k_t resummation machinery can be employed to perform phenomenological studies of physical observables involving two well separated scales in the saturation regime.
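For the reader's convenience, the resummed result referred to above can be written schematically in the standard CSS form (the symbols b_perp and mu_b, with mu_b of order 1/b_perp, are the usual impact-parameter variable and b-space scale; the precise scheme-dependent constants are those defined in the text):

\[
\tilde{G}(x, b_\perp; Q) \;\propto\; \exp\!\left\{-\int_{\mu_b^2}^{Q^2}\frac{d\bar{\mu}^2}{\bar{\mu}^2}\left[A\big(\alpha_s(\bar{\mu})\big)\ln\frac{Q^2}{\bar{\mu}^2}+B\big(\alpha_s(\bar{\mu})\big)\right]\right\}\tilde{G}(x, b_\perp; \mu_b),
\qquad A^{(1)}=\frac{\alpha_s C_A}{\pi},
\]

where the coefficient A resums the double leading logarithms and the coefficient B, fixed at one loop by the anomalous dimension \(\gamma_G\) computed in this work, resums the single leading logarithms.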
5,965.4
2018-07-02T00:00:00.000
[ "Physics", "Mathematics" ]
The stability and catalytic performance of K-modified molybdena supported on a titanate nanostructured catalyst in the oxidative dehydrogenation of propane Titanate nanotube supported molybdena was evaluated as a catalyst in the oxidative dehydrogenation of propane to propylene. The synthesized titanate nanotubes with high specific surface area were prepared by a hydrothermal method. The characterization of pristine nanotubes was performed via XRD, Raman, SEM, TEM and BET. The presence of hydrogen titanate nanostructure was confirmed in the bare support. Incipient wetness impregnation method was used to prepare MoTNT-x (x = 5, 10, and 15 wt% molybdena). The as-prepared catalysts' characterization was investigated using Raman, XRD, SEM, EDS, TEM, BET, TGA, and CHNS. Furthermore, H2-TPR was performed to explore reducibility of the catalysts. XRD and Raman results indicated development of the anatase phase in MoTNT-x catalysts upon calcination, along with specific surface area loss according to BET. Study of the catalytic performance of the samples showed an increase in catalytic activity and a significant drop in propylene selectivity with rising molybdena content. The maximum yield of propylene (about 9.3%) was obtained in 10 wt% of Mo content. The effect of potassium loading as a promoter in K/MoTNT-10 catalyst was also explored through characterization of the surface molybdena species and catalytic performance. Due to the presence of potassium, propylene yield increased from 9.3% to 11.3% at 500 °C. The stabilities of both catalysts were considered for 3000 min and showed only slight drops in propane conversion and propylene selectivity. Introduction The rapid development of human societies in the second half of the twentieth century was made possible by oil and gas, either as fuel or as a raw material. Unfortunately, fossil fuels do not contain olens and mainly consist of saturated hydrocarbons and aromatics. Olen production requires sophisticated technologies that are costly and require a large investment. Conversion of light alkanes to olens has become one of the most interesting subjects for research over the last two decades. [1][2][3][4] Olens, because of their high reactivity, have a large role in producing polymers and other more valuable materials. Propylene is a key product in the petrochemical industry, used as a feedstock to produce different polymers and intermediate products. Increasing demand for propylene in the global market, as well as general efforts to convert cheap and abundant raw materials and byproducts of petroleum rening processes into more valuable products, have resulted in substantial research into the oxidative dehydrogenation of propane. 5,6 Catalytic dehydrogenation of alkanes is an endothermic reaction which requires a comparatively high temperature to achieve high yield. However, this high reaction temperature causes high thermal cracking, lowering alkane and coke formation and resulting in a drop in product yield and quick catalyst deactivation. 7 Oxidative dehydrogenation of propane is a viable alternative to the catalytic dehydrogenation process with several benets, such as being exothermic without any thermodynamic limitations. However, this approach suffers from problematic over-oxidation (combustion), which can decrease propylene productivity. 8 A suitable catalyst for the ODH 1 of propane must be able to effectively activate the C-H bond of propane and hamper unfavorable deep oxidation of propene to CO x . 
9 Transition metal oxides are the most important catalysts used in oxidative dehydrogenation of propane. [10][11][12][13] The most extensively studied catalysts involve Mo/V/Ce based oxides. [14][15][16][17][18][19] Much research has been done into molybdenum oxide catalysts supported on different metal oxides. [20][21][22][23][24] Catalytic performance of the molybdena catalysts depends on the specic support, promoters, molybdena loading, calcination temperature, etc. 20,21,25 It has been suggested that titaniasupported molybdena catalysts are highly active in propane ODH, 21 although conventional anatase titania suffers from low surface area. 10,21 Recently, Kasuga 26 presented a hydrothermal method to produce titanate nanotubes and TiO 2 with large specic surface area and ion-exchange ability, appropriate for use as supports for active sites in catalysts. The procedure is uncomplicated, simple and cost-efficient; furthermore, it is an eco-friendly technique in comparison to the template method or anodic oxidation. 27 In this article, we propose hydrothermally synthesized titanate nanotubes as a novel support, with a surprisingly high specic surface area, for K-doped molybdena catalyst to be used in oxidative dehydrogenation of propane versus conventional catalytic systems. Molybdena loading, potassium addition, calcination temperature and reaction temperature impacts were investigated through structure and catalytic performance of titania-supported K/Mo catalysts. Catalyst deactivation phenomenon was explored in the system to study stability of the K-promoted and non-promoted catalysts for 3000 min. Synthesis of titanate nanotube Generally, to prepare the catalyst support, 1.7 g of Degussa TiO 2 P25 was added to 150 ml of 10 M aqueous solution of NaOH (Merck) in an exothermic mixing process. The prepared mixture was stirred for 30 min, then transferred into a sealed Teonlined stainless-steel autoclave, lling about 80% of the volume. The sample was kept in an oven at 140 C for 24 h, then the resulting mixture cooled at room temperature and was placed in a centrifuge for 15 min. The materials were washed using a weak acid solution of 0.1 M HNO 3 until the pH of the rinsing solution attained about 1. The sediment was then rinsed with doubly deionized distillated water until the passing water reached pH 7. The obtained sample was dried at 110 C for 12 h. Catalyst preparation Two types of catalysts were prepared by the incipient wetness impregnation method. The rst type of catalyst (MoTNT-x, where x is the wt% of MoO 3 ) involved a certain amount of MoO 3 supported on titanate nanotubes. Briey, a calculated amount of ammonium heptamolybdate was added to a measured volume of doubly deionized water that corresponded to the total pore volume of the support. Then, the support was added to the solution. The mixture was stirred at 70 C until forming a paste. The resulting sample dried for 12 h at 110 C to make a powder. The powder was calcined in static air for 3 h at 500 C. Aer cooling to room temperature, the calcined sample underwent a forming process and 60-100 mesh size was chosen for the catalytic activity and deactivation tests. In addition to MoO 3 , the second type of catalyst (MoKyTNTx, where x and y are the wt% of MoO 3 and the K : Mo molar ratio, respectively) involved a specic amount of KOH supported on titanate nanotubes. In the rst step of catalyst preparation, a calculated amount of KOH was added to the measured ammonium heptamolybdate before stirring in deionized water. 
Next steps were identical to the preparation procedure for the rst type of catalyst. Catalyst characterization X-ray diffraction (XRD) patterns of the catalysts and titanate nanotubes were recorded on a Philips PW1800 diffractometer using Cu Ka radiation (l ¼ 0.15418 nm). The intensities were determined for all of the synthesized samples with 2q range from 5 to 70 at a step-size D(2q) of 0.03 and a count time of 2 s per step. The indexing of attained spectra was carried out by comparison with JCPDS les (Joint Committee on Powder Diffraction Standards). The mean crystallite size of the sample was estimated by Scherrer's equation, from the XRD linebroadening measurement, as follows: where k is a constant equal to 0.9 (shape factor), l is the wavelength of the X-ray in nanometers, q is the diffraction angle and b is the true half-peak width. Raman spectra were recorded with a Bruker (model SEN-TERRA (2009)) spectrophotometer. A diode laser (l ¼ 785 nm) operating at 25 mW was employed as Raman excitation source with a germanium thermoelectrically cooled charged couple device (Andorf) as detector. Specic surface areas of the catalysts were determined by N 2 adsorption/desorption at À196.15 C using BET method (BEL-SORP Mini II apparatus) with a ten point-isotherm. The samples were degassed for 2 h at 200 C prior to nitrogen adsorption. For transmission electron microscopy (TEM), the material was dispersed at room temperature in isopropanol and an aliquot of the prepared sample was deposited onto perforated carbon foil supported on a copper grid. The investigations were made on a Zeiss EM 900 microscope. Scanning electron microscopy (SEM) was performed by a TESCAN MIRA3 Model apparatus equipped with an analytical system for energy dispersive X-ray spectrometry (EDS) to determine the morphology of the prepared samples. The H 2 -temperature programmed reduction (H 2 -TPR) experiments were carried out in a Quantachrome CHEMBET-3000 apparatus using 20 mg of samples with 10 sccm of 7.0% H 2 in air with concomitant temperature leveling up to 700 C at a heating rate of 10 C min À1 . A thermal conductivity detector (TCD) monitored hydrogen consumption by analyzing the TPR reactor effluent. For quantitative purposes, the TCD signal was calibrated by reduction of Ag 2 O under similar conditions. Prior to this analysis, the sample was oxidized in owing air at 200 C for 1 h. Thermal gravimetric analysis (TGA) was performed using a Netzsch-TGA 209 F1 thermo-gravimetric analyzer in air atmosphere from 25 to 900 C with a heating rate of 10 C min À1 . Elemental analysis was performed using Perkin Elmer 2400 Series II CHNS analyzer. The CHNS analysis was based on the classical Pregl-Dumas technique using a furnace temperature of 1100 C. Catalytic activity and deactivation study Activities of the catalysts in oxidative dehydrogenation of propane were investigated in a microow xed-bed quartz reactor ( Fig. 1) with an internal diameter of 6 mm, external diameter of 7 mm and length of 50 cm at atmospheric pressure. An electric furnace was used to heat the reactor together with a thermocouple type K inside the catalyst bed. The catalytic bed was placed in the middle of the furnace (low thermal variations within a suitable longitudinal range). Blank runs were executed with the reactor packed with quartz wool at 500 C to show the negligibility of propane conversion in the vacant reactor. 
100 mg of catalyst with a mesh size of 60-100 diluted with 100 mg silicon carbide for better thermal distribution of the samples were utilized for each test. The samples were kept in the center of the reactor on a piece of quartz wool. The catalyst was pretreated using 20 sccm dry air ow at atmospheric pressure with heating rate of 10 C min À1 up to 300 C; it was then cooled under air ow to 200 C. Next, a mixture of propane (99.8%) and air (99.995%) was fed into the catalytic reactor bed using calibrated mass ow rate controllers. The feed ow was mixed in a chamber with molar ratio of propane/O 2 equal to 1 and ow rate of 100 sccm before contact with the catalyst. The temperature was increased in a stepwise fashion, with steps of 50 C up to 500 C, such that every step of the reaction lasted 30 min. Analysis of the composition of the reaction's output products was performed on-line by a VARIAN CP-3800 gas chromatography device equipped with two ame ionization detectors (FID) and a methanizer (Ru/Al 2 O 3 ). Carbon balance was established for all the catalytic tests, within 5%. Conversion, selectivity, and yield are all calculated based on carbon atoms as follows: Here, X Propane represents propane conversion. S i and Y i stand for product i selectivity and yield, respectively; n i and c i indicate the moles of molecule i and the number of carbon atoms in molecule i, respectively. The turnover frequency (TOF) per molybdena atomic unit is determined according to eqn (5), with a diagram provided in Fig. 8d. In this relation, _ n Propane is the molar ow rate of propane in the feed (1.17547 Â 10 À5 mol s À1 ), X propane represents the propane conversion, M V is the molecular weight of molybdena (95.96 g mol À1 ), m Cat denotes the catalyst's mass (g), W v is the weight fraction of molybdena in the catalyst, and TOF is the turnover frequency (s À1 ). Deactivation studies of the catalysts were performed at 500 C for 3000 min continuously under feed ow with similar conditions to the activity tests. The reactor's effluent was sampled with a time step of 150 min. Following completion of the deactivation study, the feed ow was changed to He to conserve the catalyst state for further studies. The catalyst was cooled to room temperature in the reactor. XRD analysis XRD spectra related to the titanate nanotubes and developed catalysts are provided in Fig. 2. Understanding the crystalline structure of titanate nanotubes is usually difficult due to broad reections in XRD pattern, ascribed to the small size of nanotubes. 10 In addition, titanate nanotubes are relatively unstable and can experience various phase alterations during catalyst preparation methods such as acid washing and calcination, resulting in ambiguity of the exact crystal structure of titanate nanotubes. 28 In Fig. 2a, the peaks at 10.7 , 24.2 , 28.6 , and 48.3 can be attributed to planes 200, 110, 310, and 020, respectively, in H 2 T i5 O 11 $H 2 O (JCPDS: 44-0131). Accordingly, it seems that the hydrothermal process decomposes the structure of the precursor Degussa TiO 2 P25 in the presence of alkaline solution, thereby forming a completely new structure. The XRD pattern related to MoTNT-5 in Fig. 2b only includes the anatase phase (JCPDS: 21-1272). Absence of any MoO 3 diffraction pattern might be a result of full dispersion of molybdena species on the support surface or formation of very tiny molybdena crystals on the surface, whose size is below the device detection capacity. 
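The crystallite sizes reported below (Table 1) follow from the Scherrer relation introduced in the characterization section, D = Kλ/(β cos θ) with K = 0.9 and Cu Kα radiation, evaluated at the anatase peak near 2θ = 24.2°. A minimal numerical sketch is given here; the FWHM value is a placeholder chosen for illustration, not a width measured in this work.

```python
import numpy as np

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15418, k=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)), with beta in radians."""
    theta = np.radians(two_theta_deg / 2.0)   # Bragg angle
    beta = np.radians(fwhm_deg)               # line broadening (FWHM)
    return k * wavelength_nm / (beta * np.cos(theta))

# Illustrative FWHM (not a measured value from this work)
print(f"D = {scherrer_size(24.2, 0.45):.1f} nm")
```

With these constants, a full width at half maximum of roughly half a degree corresponds to a crystallite size of order 20 nm, i.e. the few-tens-of-nanometre range reported in Table 1.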
23,25,29 The molecular structure of molybdena is such that, with increase in molybdena loading and following mono-molybdena species, we observe growth of two-dimensional and three-dimensional species of polymolibdate 30 and a propensity for MoO 3 crystals to develop, though these species can coexist as well. The probability of formation of polymeric molybdena species increases as a monolayer coating is approached. A monolayer coating with a specic surface area of 41 m 2 g À1 for commercial anatase support has been obtained in 3.9% loading of molybdena. 21 The XRD pattern of MoTNT-10 catalyst represents the presence of molybdena oxide along with anatase phase (JCPDS: 35-0609), as the peaks shown in Fig. 2c can be attributed to planes 020, 110, 021, and 060, respectively. The relevant peaks have very low intensity, suggesting formation of few molybdena crystals in conjunction with mono-and poly-molybdate species. Peaks associated with rutile phase were not detectable. Presence of potassium in the catalyst K0.1MoTNT-10 improved dispersion of molybdena species on the support surface. Further, peaks related to molybdena crystals were not detectable, suggesting formation of no molybdena crystals or only microcrystals of size smaller than the detection capacity of the device. Moreover, the intensity of the peak related to anatase phase increased in comparison to the catalyst MoTNT-10. It can be stated that the presence of potassium diminished the conversion of anatase phase into rutile. Supercial species of potassium were not detected, due to its low level in the sample. The XRD pattern of MoTNT-15 catalyst suggests increased intensity of peaks related to molybdena crystals and diminished intensity of the anatase phase peaks compared to MoTNT-10 catalyst. It can be concluded that the number and size of MoO 3 crystals increase with increasing molybdena loading on the titania support surface. Further, as shown in Fig. 2e, peaks associated with rutile phase (planes 110 and 111) were detectable in the XRD pattern of the catalyst with increase in molybdena loading (JCPDS:21-1276). Disappearance of the index peak 2q ¼ 10.6 , related to the structure of titanate hydrogen nanotubes, refers to the molybdena loading and catalyst calcination resulting in alteration of the nanotube structure and its phase conversion to the anatase phase. Using Scherrer's equation (eqn (1)), the average crystallite size of the catalysts was calculated considering the peak located at 2q ¼ 24.2 as the characteristic peak and the results are shown in Table 1. Crystallite size shows a growing trend with increasing Mo loading, which can be attributed to further formation of polymeric and crystalline molybdena species and destruction of the support structure because of anatase to rutile conversion. Lower crystallite size in MoK0.1TNT-10 can be ascribed to the presence of potassium in the support surface structure. The XRD pattern of spent K0.1MoTNT-10 catalyst in Fig. 2f shows lower intensity of anatase characteristic peak at 2q ¼ 24.2 compared to the unspent one. Moreover, the presence of molybdena oxide in the catalyst is proved, as the peaks can be attributed to planes 020, 021, and 060, suggesting formation and growth of both molybdena crystals and its polymeric species. It should also be mentioned that no rutile phase was seen in the XRD pattern of the spent catalyst. Raman analysis The Raman spectra of titanate nanotubes are provided in Fig. 3a. 
The shoulders at 145 cm À1 and 402 cm À1 can be attributed to the E g mode of the anatase phase. The intensity of these bands increases during the acid washing stage with the decrease in sodium present in the titanate nanotube structure. 31 This suggests the effect of sodium in prevention of conversion of titanate nanotubes to anatase. Considering the intensity of the 145 cm À1 and 402 cm À1 bands observed in the Raman spectrum of the synthesized sample, it can be stated that there is little sodium in the structure. The band at 266 cm À1 can be assigned to Ti-OH bonds, which are important for formation and stability of titanate nanotubes. By calcination of the nanotubes at high temperatures, two Ti-OH bonds are merged to form a Ti-O-Ti bond by releasing H 2 O. This is the cause of destruction of the tubular structure of the nanotube and formation of anatase phase. 10,32 The band at 448 cm À1 is related to vibrations of the Ti-O-Ti bond and that at 903 cm À1 is associated with Na-O-Ti bond. 33 This latter band did not exist in the Raman spectra of our synthesized support, suggesting absence or minimal existence of sodium in the structure of the synthesized titanate nanotube. The bands above 650 cm À1 , including the bands at around 830 and 926 cm À1 , are sensitive to humidity and, during the drying process of the support, they become weaker and move to higher frequencies. These bands can be assigned to supercial vibrational modes. 31 The bands at 200, 510, and 635 cm À1 can be attributed to the anatase phase. 25,29,34,35 The Raman spectra related to MOTNT-5 catalyst in Fig. 3b has bands at 410, 520, and 645 cm À1 , the dominant bands of anatase. No band related to rutile phase was observed. Moreover, no peak suggesting the presence of crystal molybdena species was observed, in line with the obtained results from XRD. The band at 975 cm À1 is related to the Mo]O bond in monomer and polymer species of molybdena. 25,36 The band at 265 cm À1 disappeared from the Raman spectra of MoTNT-x catalysts, suggesting the effect of molybdena loading in destruction of the titanate nanotube structure and its conversion to anatase phase. The reduction in the intensity of the bands associated with anatase phase with increase in the molybdena loading on the support surface is notable in Raman spectra of catalysts in Fig. 3c and d. The broad and not very intense band at 820 cm À1 in the MoTNT-10 catalyst spectrum in Fig. 3c can be attributed to Mo-O-Mo bonds in the crystalline or polymeric structure of molybdena species. 37,38 Comparison of the catalyst MoTNT-10 with MoTNT-5 indicated that the presence of a band at 820 cm À1 is followed by reduction in the intensity of the band related to mono-molybdena species and its transference to the higher frequency of 980 cm À1 , which is in line with the results of other reports. 20,25 This suggests reduction in the presence of monomer molybdena species on the support surface with the increase in its loading. This transition is due to the altered length of the Mo]O bond and can be attributed to decreased interaction with the support and production of threedimensional polymer species of molybdena. 20,39 This is fully compatible with the obtained results from XRD. No band related to rutile phase was observed. Presence of potassium in the structure of K0.1MoTNT-10 catalyst, as shown in Fig. 
3d, caused the disappearance of the 820 cm À1 band associated with crystal species of molybdena from its Raman spectra and improvement of dispersion of molybdena species on the support surface, such that the intensity of the band related to Mo]O bond increased signicantly and transferred to the lower frequency of 966 cm À1 . 29 The intensity of bands related to anatase phase increased in response to presence of potassium compared to MoTNT-10 catalyst. No band associated with rutile phase was observed. Due to the low level of potassium in the catalyst, no band suggesting the presence of supercial species of potassium was observed. Watson et al. 29 attributed the bands at 900-950 cm À1 to K 2 MoO 4 and K 2 Mo 2 O 7 species in catalysts with larger amounts of potassium. The Raman spectrum of MoTNT-15 catalyst (Fig. 3e) showed increased intensity of the bands related to molybdena crystals in comparison to MoTNT-10 catalyst. It can be concluded that the number and/or the size of MoO 3 crystals increases with elevation of molybdena loading on the titania support surface. Eventually, by developing a bulk phase on the surface, it causes diminished access to the active sites of the catalyst. 20 Further, a band associated with rutile phase was observed at 230 cm À1 , in line with XRD results. The specic surface area of BET The BET surface area of the acid-washed nanotubes was found using a BET isotherm. It is remarkable that the nanotubes reached a specic surface area of 401 m 2 g À1 , while the area of TiO 2 P25, precursor of the nanotubes, was calculated to be 49 m 2 g À1 , as shown in Table 1. Hydrothermal method is successful in preparation of a support with a high specic surface area. The acid treating process causes increase of the surface area by a signicant value, but the issue of thermal stability also comes into play. 40,41 The specic surface area of acid washed nanotubes obtained through hydrothermal method has been reported at 404 m 2 g À1 , 408 m 2 g À1 and 325 m 2 g À1 in the literature. 10,42,43 As can be observed in Table 1, the BET specic surface areas of prepared catalysts are lower than that of the synthesized nanotube, where the area diminishes with increase in molybdena loading. As the nanotubes have not been calcined, calcined Mo-catalysts showed a surface area much lower than that of the support. The specic surface areas of MoTNT-x with loading of 5, 10, and 15 wt% reach 76, 69, and 44 m 2 g À1 , respectively. It is also observed that the presence of potassium in K0.1MoTNT-10 catalyst causes the surface area to reach 74 m 2 g À1 , an enhancement compared to MOTNT-10 catalyst. This could be due to improved dispersion of molybdena species on the surface (increased level of monomeric species of molybdena), also observed in XRD and Raman analyses. The probability of pore plugging is considerable in response to polymer and crystal species of molybdena. Smaller species cause better dispersion and a higher specic surface area. 20 Watson et al. 29 reported elevation of the specic surface area of catalysts synthesized by hydrothermal method in response to the addition of potassium, which is in line with our observations. Further decline in specic surface area of MoTNT-15 could be due to higher level of molybdena so that it's more than monolayer coverage of the support surface, 21 because further loading of molybdena causes weaker interaction between its species and the support surface. 
This causes formation of crystal species of molybdena, which are the cause of support pore blocking. 20,21 Furthermore, this surface area reduction can be attributed to destruction of the nanotube structure, which includes stages such as thinning of the walls, blending of nanotubes, breakdown of nanotubes, and conversion to nanoparticles or nanorods. 44 SEM, TEM and EDS SEM imaging was carried out to investigate the morphology of the synthesized support, shown in Fig. 4a, conrming the formation of nanotubes without discernible impurities. TEM images of uncalcined TNT, provided in Fig. 4b and c, show a tubular morphology which proves titanate nanotube formation. Existence of this phase in the support structure has already been proven in XRD and Raman analyses. The prepared nanotubes have an external diameter of around 10 nm and lengths of 40-80 nm. As shown in Fig. 5a, the tubular structure of the support changed completely aer addition of 10 wt% molybdenum to TNT and calcination at 500 C. In fact, some nanotubes broke into nanoparticles and a random mix of nanotubes and nanoparticles is observed. In contrast, the presence of potassium in MoK0.1TNT-10 catalyst increased the structural stability of TNT. The TEM image in Fig. 5b shows that the TNT tubular structure is fairly stable and fewer nanoparticles are formed. EDS analysis, presented in Fig. 5c, proves the presence of potassium ions in the structure of the catalyst, strong evidence that the impregnation method successfully added potassium to the catalyst support structure. H 2 -TPR analysis Temperature programmed reduction with hydrogen is a method to investigate the reducibility of catalysts, as presented in Fig. 6 and Table 1. Based on the literature, reduction of pure molybdena oxide involves several stages. 45,46 The TPR results of bulk molybdena oxide show the presence of two peaks at 1040 and 1270 K, which were related to reduction of MoO 3 to MoO 2 (Mo 6+ to Mo 4+ ) and MoO 2 to Mo. Compared to the bulk MoO 3 TPR prole, TiO 2 -supported MoO 3 displays a noticeable decrease in the maximum hydrogen consumption temperature. 47 This enhancement suggests formation of more monomeric and polymeric molybdena species on the support surface in response to increased interaction with the surface. Note that monomeric, polymeric, and crystalline genera can coexist in MoO x /TiO 2 catalysts. As can be observed in Fig. 6 and Table 1, with increase in the molybdena loading, the maximum hydrogen consumption temperature increases from around 485 C to 542 C, which is consistent with polymeric and crystalline MoO x presence. It is Fig. 4 SEM (a) and TEM (b and c) images of titanate nanotubes. This journal is © The Royal Society of Chemistry 2019 difficult to precisely distinguish MoO x species peaks in H 2 -TPR proles, because different species can reduce simultaneously. The maximum reduction temperature for MoK0.1TNT-10 catalyst was 521.5 C. It is noteworthy that this temperature is higher than the MoTNT-10 catalyst maximum reduction temperature, which can be attributed to presence of potassium within the sample structure. The increase of the reduction temperature can be attributed to reduction of K-affected monomeric species. 10 3.6. TGA and CHNS of spent MoK0.1TNT-10 Spent MoK0.1TNT-10 was heated from 25 to 900 C at a heating rate of 10 C min À1 in a thermogravimetric analyzer to measure its mass over time. Fig. 
7 shows that the sample had a negligible mass loss below 450 C, which was expected, as it had previously participated in the ODH reaction at 500 C for 3000 min. A mass loss of about 4% was observed in the range of 450 C to 600 C that is attributed to the burning of coke formed through the ODH reaction. The low amount of mass loss is because the nature of oxidative dehydrogenation of propane does not allow signicant coke formation. No obvious weight loss was seen at temperatures above 600 C, demonstrating high thermal stability of the spent catalyst. CHNS analysis was performed to determine the elemental composition of spent MoK0.1TNT-10. The results, shown in Table 2, imply that the carbon content in the sample is less than 5%, which is compatible with the observation in the TGA analysis. Indeed, the higher the level of carbon in the catalyst, the higher the probability of surface pore blockage, which results in specic surface area reduction. The analyses prove that coke formation had a notable role in neither the pore plugging nor the reduction of active sites on the surface of the catalyst. Catalytic test results The catalytic performance of titanate nanotubes and calcinated catalysts in the oxidative dehydrogenation of propane was examined within the thermal range of 200-500 C, as shown in Fig. 8. In addition to propylene, other products, including H 2 O, CO, and CO 2 , were also produced. Increasing the reaction temperature to 500 C, slight amounts of methane and ethylene were observed in the output products, but their selectivities remained below 0.2%. The oxidative dehydrogenation process of propane takes place according to an oxidation-reduction mechanism (Mars-Van-Krevelen) which involves two major stages, illustrated in Fig. 9. In the rst stage, propane recovers the catalyst. At this stage, the network's oxygen begins to react with propane. In the second stage, the oxygen detached from the catalyst structure is substituted by oxygen adsorbed onto the catalyst. The products, which have been adsorbed chemically, go through two different paths: they are either discharged or remain on the surface and then oxidize into other products such as CO x . 48,49 As the catalytic test results in Fig. 8 show, propane conversion grows as the reaction temperature increases. Furthermore, propylene selectivity diminishes, followed by elevation of selectivity of CO x (CO + CO 2 ). According to the proposed Paper reaction mechanism, propylene is produced during the oxidative dehydrogenation reaction of propane, where there is a possibility for oxidation of propane and/or propylene to CO x , as well. 10 Calcined TNT alone showed negligible activity in the reaction and propane conversion remained almost zero in the temperature range of 200 C to 400 C. However, it reached propane conversions of 1.9% and 2.6% at 450 C and 500 C, respectively, while propylene selectivities were 85.2% and 77.3%. The activity of the nanotubes at these temperatures can be attributed to thermal cracking arising at 450 C and above. As molybdena loading increased from 5 wt% to 10 wt%, greater conversion was observed at a constant temperature. This observation has also been reported in other literature. 25,30 Compared to the other catalysts, MoTNT-15 showed less activity as the degree of propane conversion diminished. This can be attributed to the presence of rutile phase as well as crystalline species of molybdena. This issue has also been proven by XRD and Raman analyses. 
It has been reported that rutile phase has a lower capacity to maintain metal oxide species on its surface compared to anatase. Consequently, lower activity was also observed in ODH of alkanes in presence of rutile phase. 50,51 Propylene selectivity showed a descending trend with molybdena loading increase at a constant temperature. This decrease is low for MoTNT-10 catalyst. However, MoTNT-15, for the previously mentioned reasons, displayed an obvious reduction in selectivity of propylene compared to MoTNT-10 catalyst. Propylene selectivity increased for the synthesized catalysts compared to VO x /TNT catalysts, 10 while VO x /CeTNT showed a better selectivity towards propylene. 52 The MoTNT-10 catalyst with propane conversion of 23.8% and selectivity of 39% has the highest propylene yield of 9.3% among the catalysts tested at 500 C. It can be stated that an optimal presence of molybdena species is required to achieve desirable efficiency in oxidative dehydrogenation of propane in the presence of MoO x /TNT catalysts. In fact, in addition to adequate and effective presence, good dispersion of monomeric and polymeric genera should be observed, in addition to the absence of crystalline species of molybdena which cause less access to active sites of the surface. 20 The catalytic test results of MoK0.1TNT-10 catalyst can be observed in Fig. 8. The presence of potassium caused lower conversion of propane and more selectivity for propylene compared to MoTNT-10 catalyst, which is consistent with results obtained previously. 23,29,53 Propylene molecules may have interactions at the catalyst surface through formation of weak hydrogen bonds with surface OH groups. If these groups show Brønsted acidic property, their protons form hydrogen bonds with the p bonds of olens. Further, when the acidic property is very powerful, proton transfer may occur from the surface to olen, thereby forming a carbocation. This begins a set of reactions which cause reduction of selectivity. 54 Indeed, by decreasing the acidity of the catalyst, potassium facilitates discharge of propylene (as a nucleophile) off the catalyst surface. In this way, it prevents further oxidation of intermediate products such as propylene, which causes increase in its selectivity. 53,55 In addition, it has been suggested that the presence of potassium causes lower electrophilic oxygen concentration (O 2 -, O À ). These species are highly radical and cause progressive oxidation of the products. It has also been stated that nucleophilic species of O 2À (as a factor of partial oxidation) increase in the presence of potassium. 53,55,56 All the aforementioned points cause decreased catalytic activity and increased selectivity toward propylene. These results are in agreement with those observed in H 2 -TPR analysis, where the maximum reduction temperature increased in the presence of potassium. MoK0.1TNT-10 catalyst, with a propane conversion of 21.2% and propylene selectivity of 53.3%, has a yield of around 11.3% at 500 C, which is an increase of over 20% in comparison with MoTNT-10 catalyst. According to Fig. 8d, at a constant temperature, the largest TOF is related to MoTNT-5, the catalyst with the least molybdena loading amount, and the lowest TOF belongs to MoTNT-15 catalyst. A relationship between the maximum reduction temperature (T max ) determined by H 2 -TPR and the TOF values may be established, such that the highest T max corresponds to the sample with the lowest TOF. 
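Before turning to the deactivation tests, the activity metrics used throughout this section can be restated compactly. The displayed definitions of conversion, selectivity and yield and eqn (5) for the TOF did not survive text extraction above, so the script below implements the standard carbon-basis definitions together with a plausible reading of eqn (5); the feed molar flow and the molybdena molar mass are the values quoted in the experimental section, while the outlet mole numbers are placeholders, not measured data.

```python
# Carbon-based selectivity/yield and a TOF consistent with the parameters of eqn (5).
N_DOT_PROPANE = 1.17547e-5   # mol/s, feed molar flow of propane (from the text)
M_MO = 95.96                 # g/mol, "molecular weight of molybdena" as quoted in the text

def selectivity(n_products, carbons, i):
    """Carbon-basis selectivity of product i: n_i*c_i / sum_j n_j*c_j."""
    total = sum(n * c for n, c in zip(n_products, carbons))
    return n_products[i] * carbons[i] / total

def tof(x_propane, m_cat_g, w_mo):
    """TOF per Mo unit, reading eqn (5) as converted propane per mole of Mo per second."""
    moles_mo = m_cat_g * w_mo / M_MO
    return N_DOT_PROPANE * x_propane / moles_mo

# --- illustrative outlet composition (placeholders, not measured data) ---
products = [3.0e-6, 1.0e-6, 2.0e-6]   # mol: propylene, CO, CO2
carbons  = [3, 1, 1]
x = 0.238                              # propane conversion of MoTNT-10 at 500 °C (from the text)
s_c3h6 = selectivity(products, carbons, 0)
print(f"S(C3H6) = {s_c3h6:.2f}, Y(C3H6) = {x * s_c3h6:.3f}, "
      f"TOF = {tof(x, 0.100, 0.10):.4f} s^-1")
```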
Deactivation of catalysts Performances of MoTNT-10 and MoK0.1TNT-10 catalysts, which had the greatest yield among the catalysts, were evaluated for 3000 min at 500 C. Balance of carbon did not obviously change. Propylene selectivity and conversion of propane are provided in Fig. 10 in terms of time for both catalysts. As is shown, the catalysts had slight drops in activity. The drops in propane conversion for MoTNT-10 and MoK0.1TNT-10 are 13% and 9.1%, respectively, while both catalysts showed steady increases in propylene selectivity. This proves that the presence of potassium within the structure of molybdena increases catalyst stability. Based on the results obtained from BET analysis provided in Table 1, the specic surface area of the MoTNT-10 catalyst is reduced to 56 m 2 g À1 following 3000 min time on-stream. This reduction can be attributed to increase of the mean diameter of the crystals on the surface of the catalyst, as well as destruction of anatase phase. The specic surface area of MoK0.1TNT-10 catalyst diminished to 67 m 2 g À1 aer the deactivation test, showing superior stability compared to MoTNT-10 catalyst. It seems that potassium doping is absolutely effective in prevention of structural changes and reduction of surface area compared to non-doped sample. Molybdena crystal development plays the main role in specic surface area reduction of the spent catalyst, while anatase to rutile phase conversion was not observed based on the XRD pattern of spent MoK0.1TNT-10 catalyst in Fig. 2f. Conclusion Titanate nanotubes, synthesized by hydrothermal method, were employed as a support for molybdena and K-doped molybdena species as a catalyst in ODH of propane. Several different characterization methods conrmed the presence of titanate nanotubes and the high surface area of the support. Catalyst preparation was accompanied by support surface area loss, mainly due to calcination. The surface area reduction intensi-ed as molybdena polymeric and crystalline species formation and support phase destruction were observed through molybdena loading up to 15% on the support. All of the tested catalysts were active, but the highest yield of propylene was obtained for the catalyst MoK0.1TNT-10, clearly different from other catalyst yields, especially at high temperatures. Addition of potassium led to much better dispersion of molybdena species on the support surface, as well as a superior yield of propylene and lower propane conversion. The catalysts showed a slight drop in propane conversion and a slighter enhancement in propylene selectivity aer 3000 min on-stream, which is in agreement with specic surface area reduction through the deactivation test. MoK0.1TNT-10 catalyst showed enhanced stability in comparison to MoTNT-10 catalyst, which is attributed to better dispersion of molybdena species in the crystal structure. Conflicts of interest There are no conicts to declare.
8,540
2019-04-12T00:00:00.000
[ "Chemistry" ]
Chromosome 8p engineering reveals increased metastatic potential targetable by patient-specific synthetic lethality in liver cancer Large-scale chromosomal aberrations are prevalent in human cancer, but their function remains poorly understood. We established chromosome-engineered hepatocellular carcinoma cell lines using CRISPR-Cas9 genome editing. A 33–mega–base pair region on chromosome 8p (chr8p) was heterozygously deleted, mimicking a frequently observed chromosomal deletion. Using this isogenic model system, we delineated the functional consequences of chr8p loss and its impact on metastatic behavior and patient survival. We found that metastasis-associated genes on chr8p act in concert to induce an aggressive and invasive phenotype characteristic for chr8p-deleted tumors. Genome-wide CRISPR-Cas9 viability screening in isogenic chr8p-deleted cells served as a powerful tool to find previously unidentified synthetic lethal targets and vulnerabilities accompanying patient-specific chromosomal alterations. Using this target identification strategy, we showed that chr8p deletion sensitizes tumor cells to targeting of the reactive oxygen sanitizing enzyme Nudix hydrolase 17. Thus, chromosomal engineering allowed for the identification of novel synthetic lethalities specific to chr8p loss of heterozygosity. The PDF file includes: Figs. S1 to S8 Legends for data S1 to S5 Other Supplementary Material for this manuscript includes the following: Figure S2 : Figure S2: Screening and validation of chr8pLOH cell clones.(A) Schematic illustration of dual-guided chromosome engineering using CRISPR/Cas9 technology.(B) Agarose gel images of the PCR products using primer pairs flanking the targeted region at 2 Mb and 35 Mb of chr8p.PCR product size is expected at 1500 bp or lower depending on the breakpoint repair.(C) Sanger sequencing results of PCR products using primer pairs flanking the targeted region of the deleted allele.Shown are the sequencing chromatograms.In addition, sequencing of the undeleted alleles by primers flanking each cut site (2 Mb and 35 Mb) revealed alterations, as indicated in the box, at each cut side individually.(D) Representative multiplex FISH of chr8pWT and chr8pLOH clones of HLF, HLE and HCC68 cells.Chromosome 8 is colored in light grey and highlighted with a white rectangle (WT) or circle (LOH).(E) WES-based paired copy number analysis of chr8 in HLF, HLE and HCC68 cells relative to their isogenic cell clone pair depicted as rainfall plots.(F) Brightfield microscopy images showing chr8pWT and chr8pLOH cell clone morphology in HLF, HLE and HCC68 cells.(G) Crystal violet stain of HLF, HLE and HCC68 cells before Cas9-Blast transduction and Cas9expressing chr8pWT and chr8pLOH clones after selection with 2 µg/mL Puromycin (Puro) or 10 µg/mL Blasticidin (Blast). Figure S3 : Figure S3: Chr8pLOH is associated with metastasis.(A) Scatter plot showing ratios of patients with chr8p-deleted tumors at metastatic and primary sites.Data obtained from the Hartwig Medical Foundation dataset (27).(B) Frequency of patients with chr8pLOH or chr8pWT at primary site, local and distant metastases of selected cancer entities.(C) Proliferation of chr8pLOH cell clones determined by BrdU incorporation ELISA relative to respective chr8pWT clones.Data are represented as mean ± SD of two to three independent experiments.Single dots represent technical replicates.Student t-test was performed to determine p-values (p-value > 0.05, ns). 
Figure S4 : Figure S4: Scheme of metastasis suppressor screening approach and validation of metastasis-related genes on chr8p in HCC68 cells.(A) Graphical scheme of identification of metastasis suppressor candidate genes on chr8p.(B) RNAi migration screen of chr8p candidate metastasis suppressors in HCC68 cells.Exemplary transwell migration images (top) are shown with respective quantification (bottom) of cell migration in four independent experiments.Knockdown was performed with two different siRNAs targeting each gene and quantified relative to Allstar and siGFP control (siRNA #1light grey, siRNA #2dark grey).Data are shown as floating bars with line indicating median and single dots representing each replicate of four independent experiments.(C) Western blot of HLF cell clones after transfection with empty vector (CTRL) or target gene overexpression (OE) and detection of HA tag and GAPDH or DLC1 and Vinculin.(D) Representative images of transwell migration assay in chr8pWT or chr8pLOH HCC68 cells after transfection with empty vector (CTRL) or target gene overexpression vectors (MSRA-HA, NAT1-HA, PPP2CB-HA, DLC1-V5).(E) Quantification of transwell migration in chr8pWT and chr8pLOH HCC68 cells after gene overexpression.Data are represented as mean ± SD of four independent experiments shown by single dots.(F) Heatmap of metastasis-associated gene expression after siRNA-mediated target gene knockdown in HCC68 cells compared to Allstar control (RT-qPCR data) and of chr8pWT and chr8pLOH HLF cells (RNAseq data).Z-scores are shown for gene expression relative to Allstar control (RT-qPCR) and relative to mean gene expression (RNAseq).Two-way ANOVA was performed for comparison of multiple groups.P-values are indicated above the graphs (pvalue > 0.05, ns). Figure Figure S5: Extended data of the CRISPR/Cas9 knockout screen and the chr8pLOH DepMap analysis.(A) Ranked gene essentiality scores of the CRISPR knockout screen in chr8pWT (left) and chr8pLOH (right) HLF cells.(B) Analysis flowcharts for the CRISPR knockout screen and DepMap analyses.(C) Scatter plot representation depicting gene essentiality scores in chr8pWT and chr8pLOH cells at day 7 (left) and mean gene essentiality scores of chr8pWT and chr8pLOH groups in the DepMap liver cancer (center) and pan-cancer (right) datasets. 
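The screen and DepMap comparisons summarized in fig. S5 amount to contrasting gene essentiality scores between chr8pWT and chr8pLOH contexts and flagging genes that become selectively essential upon chr8p loss. A minimal pandas sketch of that kind of contrast is shown below; the column names, threshold and scores are illustrative only and do not reproduce the authors' actual analysis pipeline.

```python
import pandas as pd

# Toy table of gene essentiality scores (more negative = more essential).
# Real screens use MAGeCK/CERES-style scores; these numbers are made up.
scores = pd.DataFrame(
    {"gene": ["NUDT17", "GENE_A", "GENE_B"],
     "essentiality_WT":  [-0.05, -0.90, 0.10],
     "essentiality_LOH": [-0.80, -0.95, 0.05]}
)

# Differential essentiality: candidate synthetic-lethal partners of chr8pLOH are
# genes that are dispensable in WT cells but strongly depleted in LOH cells.
scores["delta"] = scores["essentiality_LOH"] - scores["essentiality_WT"]
candidates = scores[(scores["essentiality_WT"] > -0.3) & (scores["delta"] < -0.5)]
print(candidates.sort_values("delta")[["gene", "delta"]])
```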
Figure S6: Validation of NUDT17 dependency in chr8pLOH cells. (A) Growth curve for HCC68 chr8pWT and chr8pLOH cells after transduction with non-targeting sgRNA (NTsgRNA) or sgNUDT17 and cell viability measurement by resazurin assay for four consecutive days. Measurements of two independent sgRNAs for NUDT17 were combined. Out of four independent replicates, one representative growth curve is shown. Data are represented as mean ± SD of technical triplicates. (B) Relative cell viability of chr8pWT and chr8pLOH HCC68 cells following NUDT17 knockout 96 h post seeding. Data are represented as mean ± SD of four independent experiments with each dot representing the mean of one experiment. Colony formation of chr8pWT and chr8pLOH (C-D) HCC68 and (E-F) HLE cells. Cells were transduced with either NTsgRNA or two independent sgRNAs targeting NUDT17 and cultured for 14 days. Representative images of four replicates are shown. Quantification of colony area after NUDT17 knockout in chr8pWT and chr8pLOH (D) HCC68 and (F) HLE cells relative to NTsgRNA transduction. Data are represented as mean ± SD of four independent experiments with each dot representing the mean of one experiment. Two-way ANOVA was performed for comparison of multiple groups. P-values are indicated above the graphs (p-value > 0.05, ns).

Figure S7: Analysis of NUDT17 and NUDT18 deregulation. (A) NUDT17 and NUDT18 gene expression in four chr8pWT and four chr8pLOH cell clones as determined by RNAseq. (B) NUDT18 gene expression in TCGA-LIHC for normal liver tissue (NT) and HCC tumor (T) samples. (C) Kaplan-Meier survival curve of TCGA-LIHC patients with high (red, N = 185) or low (blue, N = 184) NUDT18 gene expression. Hazard ratio (HR) with 95% confidence interval and p-values were calculated by log-rank test. (D) Colony formation of chr8pWT and chr8pLOH HLF cells. Cells were transfected with siPools targeting NUDT17, NUDT18 or negative control (siCTRL) and cultured for 14 days. Representative images of five replicates are shown. Quantification of colony area after NUDT17 and NUDT18 knockdown in chr8pWT and chr8pLOH cells relative to siCTRL. Data are represented as mean ± SD of five independent experiments with each dot representing the mean of one experiment. (E) NUDT17 and NUDT18 gene expression in pTRIPZ-NUDT18-infected chr8pLOH HLF and HCC68 cells after transfection with siNUDT17 or siCTRL and treatment with doxycycline (DOX). (F) NUDT17 and NUDT18 gene expression in chr8pWT HLF and HCC68 cells after siPool-mediated knockdown of NUDT17 and NUDT18 alone or in combination. Gene expression was determined by quantitative RT-PCR and analyzed with the comparative Ct method. Two-way ANOVA was performed for comparison of multiple groups. P-values are indicated above the graphs (p-value > 0.05, ns; p < 0.001 ***).
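Gene expression in figs. S6 and S7 is quantified with the comparative Ct method, i.e. the standard ΔΔCt calculation sketched below for readers unfamiliar with it. The Ct values and the choice of reference gene are placeholders, not data from the study.

```python
# Standard comparative Ct (delta-delta Ct) calculation; Ct values are made up.
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative expression of the target gene, sample vs. control, as 2^(-ddCt)."""
    d_ct_sample  = ct_target_sample  - ct_ref_sample    # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# e.g. target gene after knockdown vs. control, normalized to a housekeeping gene
print(f"fold change = {fold_change(26.0, 18.0, 23.5, 18.2):.2f}")
```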
1,712.2
2023-12-22T00:00:00.000
[ "Medicine", "Biology" ]
The nature of domain walls in ultrathin ferromagnets revealed by scanning nanomagnetometry The recent observation of current-induced domain wall (DW) motion with large velocity in ultrathin magnetic wires has opened new opportunities for spintronic devices. However, there is still no consensus on the underlying mechanisms of DW motion. Key to this debate is the DW structure, which can be of Bloch or N\'eel type, and dramatically affects the efficiency of the different proposed mechanisms. To date, most experiments aiming to address this question have relied on deducing the DW structure and chirality from its motion under additional in-plane applied fields, which is indirect and involves strong assumptions on its dynamics. Here we introduce a general method enabling direct, in situ, determination of the DW structure in ultrathin ferromagnets. It relies on local measurements of the stray field distribution above the DW using a scanning nanomagnetometer based on the Nitrogen-Vacancy defect in diamond. We first apply the method to a Ta/Co40Fe40B20(1 nm)/MgO magnetic wire and find clear signature of pure Bloch DWs. In contrast, we observe left-handed N\'eel DWs in a Pt/Co(0.6 nm)/AlOx wire, providing direct evidence for the presence of a sizable Dzyaloshinskii-Moriya interaction (DMI) at the Pt/Co interface. This method offers a new path for exploring interfacial DMI in ultrathin ferromagnets and elucidating the physics of DW motion under current. The recent observation of current-induced domain wall (DW) motion with large velocity in ultrathin magnetic wires has opened new opportunities for spintronic devices [1]. However, there is still no consensus on the underlying mechanisms of DW motion [1][2][3][4][5][6]. Key to this debate is the DW structure, which can be of Bloch or Néel type, and dramatically affects the efficiency of the different proposed mechanisms [7][8][9]. To date, most experiments aiming to address this question have relied on deducing the DW structure and chirality from its motion under additional in-plane applied fields, which is indirect and involves strong assumptions on its dynamics [2][3][4]10]. Here we introduce a general method enabling direct, in situ, determination of the DW structure in ultrathin ferromagnets. It relies on local measurements of the stray field distribution above the DW using a scanning nanomagnetometer based on the Nitrogen-Vacancy defect in diamond [11][12][13]. We first apply the method to a Ta In wide ultrathin wires with perpendicular magnetic anisotropy (PMA), magnetostatics predicts that the Bloch DW, a helical rotation of the magnetization, is the most stable DW configuration because it minimizes volume magnetic charges [14]. However, the unexpectedly large velocities of current-driven DW motion recently observed in ultrathin ferromagnets [1], added to the fact that the motion can be found against the electron flow [2,3], has cast doubt on this hypothesis and triggered a major academic debate regarding the underlying mechanism of DW motion [4][5][6][7][8][9]. Notably, it was recently proposed that Néel DWs with fixed chirality could be stabilized by the Dzyaloshinskii-Moriya interaction (DMI) [7], an indirect exchange possibly occurring at the interface between a magnetic layer and a heavy metal substrate with large spin-orbit coupling [15]. 
For such chiral DWs, hereafter termed Dzyaloshinskii DWs, a damping-like torque due to spin-orbit terms (spin-Hall effect and Rashba interaction) would lead to efficient current-induced DW motion along a direction fixed by the chirality [7]. In order to validate unambiguously these theoretical predictions, a direct, in situ, determination of the DW structure in ultrathin ferromagnets is required. However, the relatively small number of spins at the wall center makes direct imaging of its inner structure a very challenging task. So far, only spin-polarized scanning tunnelling microscopy [16] and spin-polarized low energy electron microscopy [17] have allowed a direct determination of the DW structure, demonstrating homochiral Néel DWs in Fe double layer on W(110) and in (Co/Ni) n multilayers on Pt or Ir, respectively. However, these techniques are intrinsically limited to model samples due to high experimental constraints and the debate remains open for widely used trilayer systems with PMA such as Pt/Co/AlO x [1], Pt/Co/Pt [4] or Ta/CoFeB/MgO [18]. Here we introduce a general method which enables determining the nature of a DW in virtually any ultrathin ferromagnet. It relies on local measurements of the stray magnetic field produced above the DW using a scanning nanomagnetometer. To convey the basic idea behind our method, we start by deriving analytical formulas of the magnetic field distribution at a distance d above a DW placed at x = 0 in a perpendicularly magnetized film [ Fig. 1a]. The main contribution to the stray field, denoted B ⊥ , is provided by the abrupt variation of the out-of-plane magnetization M z (x) = −M s tanh(x/∆ DW ) [14], where M s is the saturation magnetization and ∆ DW is the DW width parameter. The resulting stray field components can be expressed as where t is the film thickness. These approximate formulas are valid in the limit of (i) t d, (ii) ∆ DW d and (iii) for an infinitely long DW along the y axis. On the other hand, the in-plane magnetization, with amplitude M (x) = M s / cosh(x/∆ DW ), can be oriented with an angle ψ with respect to the x axis [ Fig. 1b]. This angle is linked to the nature of the DW: ψ = ±π/2 for a Bloch DW, whereas ψ = 0 or π for a Néel DW. The two possible values define the chirality (right or left) of the DW. The spatial variation of the in-plane magnetization adds a contribution B cos ψ to the stray field, whose components are given The net stray field above the DW is finally expressed as which indicates that a Néel DW (cos ψ = ±1) produces an additional stray field owing to extra magnetic charges on each side of the wall. Using Eqs. (1) and (2), we find a maximum relative difference in stray field between Bloch and Néel DWs scaling as ≈ π∆ DW /2d. Local measurements of the stray field above a DW can therefore reveal its inner structure, characterized by the angle ψ. This is further illustrated in Figs. 1(c,d), which show the stray field components B ψ x (x) and B ψ z (x) for various DW configurations while using d = 120 nm and ∆ DW = 20 nm, which are typical parameters of the experiments considered below on a Ta/CoFeB(1nm)/MgO trilayer system. We now demonstrate the effectiveness of the method by employing a single Nitrogen-Vacancy (NV) defect hosted in a diamond nanocrystal as a nanomagnetometer operating under ambient conditions [11][12][13]. 
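The Bloch/Néel contrast illustrated in Figs. 1(c,d) can be reproduced with a few lines of code. The sketch below uses the thin-film step-wall field plus a dipole-line term for the wall's integrated in-plane moment; these closed forms are a simplification introduced here for illustration, consistent with the t ≪ d, Δ_DW ≪ d limit and with the πΔ_DW/2d contrast quoted above, but they are not a transcription of the paper's Eqs. (1)-(3). The parameter values (d ≈ 120 nm, Δ_DW = 20 nm, I_s ≈ 926 µA) are those mentioned in the text for the Ta/CoFeB/MgO sample.

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability, T*m/A

def stray_field(x, d, I_s, delta_dw, psi):
    """Estimate of (Bx, Bz) at height d above a DW located at x = 0.

    Step-wall (out-of-plane) term plus a dipole-line term for the wall's
    in-plane moment; an illustrative simplification, not the paper's Eqs. (1)-(2)."""
    r2 = x**2 + d**2
    bx = MU0 * I_s / np.pi * d / r2
    bz = -MU0 * I_s / np.pi * x / r2
    # In-plane (Neel-like) contribution; vanishes for a Bloch wall (cos psi = 0)
    m = np.pi * delta_dw * I_s * np.cos(psi)   # in-plane moment per unit wall length
    bx += MU0 * m * (x**2 - d**2) / (2 * r2**2)
    bz += MU0 * m * x * d / r2**2
    return bx, bz

x = np.linspace(-500e-9, 500e-9, 1001)
d, I_s, delta = 120e-9, 926e-6, 20e-9   # values quoted in the text for Ta/CoFeB/MgO
for label, psi in [("Bloch", np.pi / 2), ("Neel right", 0.0), ("Neel left", np.pi)]:
    bx, _ = stray_field(x, d, I_s, delta, psi)
    print(f"{label:10s}: max |Bx| = {1e3 * np.max(np.abs(bx)):.3f} mT")
```

At the wall position the in-plane term changes the field by a factor of order πΔ_DW/2d ≈ 26% for these parameters, which is why the measurement can discriminate between the wall types.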
Here, the local magnetic field is evaluated within an atomic-size detection volume by monitoring the Zeeman shift of the NV defect electron spin sublevels through optical detection of the magnetic resonance. After grafting the diamond nanocrystal onto the tip of an atomic force microscope (AFM), we obtain a scanning nanomagnetometer which provides quantitative maps of the stray field emanating from nanostructured samples [19][20][21] with a magnetic field sensitivity in the range of 10 µT.Hz −1/2 [22]. In this study, the Zeeman frequency shift ∆f NV of the NV spin is measured while scanning the AFM tip in tapping mode, so that the mean distance between the NV spin and the sample surface is kept constant with a typical tip oscillation amplitude of a few nanometers [20]. Each recorded value of ∆f NV is a function of B NV, and B NV,⊥ , which are the parallel and perpendicular components, respectively, of the local magnetic field with respect to the NV spin's quantization axis (Supplementary Section I). Note that a frequently found approximation is ∆f NV ≈ gµ B B NV, /h, where gµ B /h ≈ 28 GHz/T. This indicates that scanning-NV magnetometry essentially measures the projection B NV, of the magnetic field along the NV center's axis. The latter is characterized by spherical angles (θ,φ), measured independently in the (xyz) reference frame of the sample [ Fig. 2a]. We first applied our method to determine the structure of DWs in a 1.5-µm-wide magnetic wire made of a Ta(5 nm)/Co 40 Fe 40 B 20 (1 nm)/MgO(2 nm) trilayer stack (Supplementary Section II). This system has been intensively studied in the last years owing to low damping parameter and depinning field [23]. Before imaging a DW, it is first necessary to determine precisely (i) the distance d between the NV probe and the magnetic layer and (ii) the product I s = M s t, which are both directly involved in Eq. (3). These parameters are obtained by performing a calibration measurement above the edges of an uniformly magnetized wire, as shown in Fig. 2a. Here we use the fact that the stray field profile B edge (x) above an edge placed at x = 0 can be easily expressed analytically in a form similar to Eq. (10), which only depends on d and I s . An example of a measurement obtained by scanning the magnetometer across a Ta/CoFeB/MgO stripe is shown in Fig. 2b. The data are fitted with a function corresponding to the Zeeman shift induced by the stray field B edge (x) − B edge (x + w c ), where w c is the width of the stripe (Supplementary Section III-A). Repeating this procedure for a set of independent calibration linecuts, we obtain d = 123 ± 3 nm and I s = 926 ± 26 µA, in good agreement with the value measured by other methods [24]. Having determined all needed parameters, it is now possible to measure the stray field above a DW [ Fig. 2c] and compare it to the theoretical prediction, which only depends on the angle ψ that characterizes the DW structure. To this end, an isolated DW was nucleated in a wire of the same Ta/CoFeB/MgO film and imaged with the scanning-NV magnetometer under the same conditions as for the calibration measurements. The resulting distribution of the Zeeman shift ∆f NV is shown in Fig. 2d together with the AFM image of the magnetic wire. Within the resolving power of our instrument, limited by the probe-to-sample distance d ∼ 120 nm [20], the DW appears to be straight with a small tilt angle with respect to the wire long axis, determined to be 2 ± 1 • (Supplementary Section III-B). 
Taking into account this DW spatial profile, the stray field above the DW was computed for (i) ψ = 0 (righthanded Néel DW), (ii) ψ = π (left-handed Néel DW) and (iii) ψ ± π/2 (Bloch DW). Here we used the micromagnetic OOMMF software [25,26] rather than the analytical formula described above in order to avoid any approximation in the calculation. The computed magnetic field distributions were finally converted into Zeeman shift distribution taking into account the NV spin's quantization axis. A linecut of the experimental data across the DW is shown in Fig. 2e, together with the predicted curves in the three above-mentioned cases. Excellent agreement is found if one assumes that the DW is purely of Bloch type. The same conclusion can be drawn by directly comparing the full two-dimensional theoretical maps to the data [ Fig. 2d and f]. As described in detail in the Supplementary Section III-C, all sources of uncertainty in the theoretical predictions were carefully analysed, yielding the 1 standard error (s.e.) intervals shown as shaded areas in Fig. 2e. Based on this analysis, we find a 1 s.e. upper limit | cos ψ| < 0.07. This corresponds to an upper limit for the DMI parameter D DMI , as defined in Ref. [7], of |D DMI | < 0.01 mJ/m 2 (Supplementary Section III-C). This result was confirmed on a second DW in the same wire. In addition, the measurements were reproduced for different projection axes of the NV probe. The results are shown in Fig. 3 for four NV defects with different quantization axes, showing excellent agreement between experiment and theory if one assumes a Bloch-type DW. These experiments provide an unambiguous confirmation of the Bloch nature of the DWs in our sample, but are also a striking illustration of the vector mapping capability offered by NV microscopy, allowing for robust tests of theoretical predictions. We conclude that there is no evidence for the presence of a sizable interfacial DMI in a Ta(5nm)/Co 40 Fe 40 B 20 (1nm)/MgO trilayer stack. This is in contrast with recent experiments reported on similar samples with different compositions, such as Ta(5nm)/Co 80 Fe 20 (0.6nm)/MgO [3,27] and Ta(0.5 nm)/Co 20 Fe 60 B 20 (1nm)/MgO [18], where indirect evidence for Néel DWs was found through current-induced DW motion experiments. We note that contrary to these studies, our method indicates the nature of the DW at rest, in a direct manner, without any assumption on the DW dynamics. Our results therefore motivate a systematic study of the DW structure upon modifications of the composition of the trilayer stack. In a second step, we explored another type of sample, namely a Pt(3nm)/Co(0.6 nm)/AlO x (2nm) trilayer grown by sputtering on a thermally oxidized silicon wafer (Supplementary Section II). The observation of current-induced DW motion with unexpectedly large velocities in this asymmetric stack has attracted considerable interest in the recent years [1]. Here, the DW width is ∆ DW ≈ 6 nm, leading to a relative field difference between Bloch and Néel cases of ≈ 8% at a distance d ≈ 120 nm. We followed a procedure similar to that described above (Supplementary Section III). After a preliminary calibration of the experiment, a DW in a 500-nm-wide magnetic wire was imaged [ Fig. 4a,b] and linecuts across the DW were compared to theoretical predictions [ Fig. 4c]. Here the experimental results clearly indicate a Néel-type DW structure with left-handed chirality. The same result was found for two other DWs. 
This provides direct evidence of a strong DMI at the Pt/Co interface, with a lower bound |D_DMI| > 0.1 mJ/m². This result is consistent with the conclusions of recent field-dependent DW nucleation experiments performed in similar films [28]. In addition, we note that the observed left-handed chirality, once combined with a damping-like torque induced by the spin-orbit terms, could explain the characteristics of DW motion under current in this sample [8]. In conclusion, we have shown how scanning-NV magnetometry enables direct discrimination between competing DW configurations in ultrathin ferromagnets. This method, which is not sensitive to possible artifacts linked to the DW dynamics, will help clarify the physics of DW motion under current, a necessary step towards the development of DW-based spintronic devices. In addition, this work opens a new avenue for studying the mechanisms at the origin of interfacial DMI in ultrathin ferromagnets, by measuring the DW structure while tuning the properties of the magnetic material [18,29]. This is a key milestone in the search for systems with large DMI that could sustain magnetic skyrmions [30]. Acknowledgements. This research has been partially funded by the European Commu- (Figure caption fragment: theoretical two-dimensional Zeeman shift maps for the same three DW configurations; in both e and f, the Bloch hypothesis is the one that best reproduces the data.) I. SCANNING-NV MAGNETOMETRY The experimental setup combines a tuning-fork-based atomic force microscope (AFM) and a confocal optical microscope (attoAFM/CFM, Attocube Systems), all operating under ambient conditions. A detailed description of the setup as well as of the method to graft a diamond nanocrystal onto the apex of the AFM tip can be found in Ref. [22]. A. Characterization of the magnetic field sensor The data reported in this work were obtained with NV center magnetometers hosted in three different nanodiamonds, labeled ND74 (data of Figure 3 of the main paper), ND75 (Figure 2) and ND79 (Figure 4). All nanodiamonds were ≈ 50 nm in size, as measured by AFM before grafting the nanodiamond onto the AFM tip. The magnetic field was inferred by measuring the Zeeman shift of the electron spin resonance (ESR) of the NV center's ground state [13]. This is achieved by monitoring the spin-dependent photoluminescence (PL) intensity of the NV defect while sweeping the frequency of a CW radiofrequency (RF) field generated by an antenna fabricated directly on the sample. The Hamiltonian used to describe the magnetic-field dependence of the two ESR transitions of this S = 1 spin system is given by H = hD S_Z² + hE (S_X² − S_Y²) + gµ_B B · S, where D and E are the zero-field splitting parameters that characterize a given NV center, h is the Planck constant, gµ_B/h = 28.03(1) GHz/T [31], B is the local magnetic field and S is the dimensionless S = 1 spin operator. Here, the (XYZ) reference frame is defined by the diamond crystal orientation, with Z being parallel to the NV center's symmetry axis u_NV, as shown in Figure 5. For magnetic fields that are weak compared with the zero-field splitting [32], the two ESR frequencies depend only on the magnetic field projection along the NV axis, B_NV,∥, following the relation f_± ≈ D ± [E² + (gµ_B B_NV,∥/h)²]^(1/2). The parameters D and E were extracted from ESR spectra recorded at zero magnetic field. In all the data shown in this work, only the upper frequency f_+ was measured; thereafter, the corresponding Zeeman shift is denoted ∆f_NV. The nanodiamonds were recycled several times to be used with different orientations u_NV with respect to the (xyz) reference frame of the sample.
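Before turning to the probe orientations, note that the two ESR frequencies of the Hamiltonian written above are easily obtained numerically, without the weak-field approximation. A minimal sketch; the D, E and field values are illustrative rather than the calibrated ones of Table I.

```python
import numpy as np

GAMMA = 28.03e9   # g*muB/h in Hz/T
# Spin-1 operators in the |+1>, |0>, |-1> basis (dimensionless).
SX = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
SY = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
SZ = np.diag([1.0, 0.0, -1.0])

def esr_frequencies(d_zfs, e_zfs, b_vec):
    """ESR frequencies f- and f+ (Hz) of the NV ground state.
    d_zfs, e_zfs: zero-field splittings (Hz); b_vec: field in the NV (XYZ) frame (T)."""
    ham = (d_zfs * SZ @ SZ + e_zfs * (SX @ SX - SY @ SY)
           + GAMMA * (b_vec[0] * SX + b_vec[1] * SY + b_vec[2] * SZ))  # H/h, in Hz
    ev = np.sort(np.linalg.eigvalsh(ham))
    return ev[1] - ev[0], ev[2] - ev[0]   # transitions from the lowest sublevel

f_minus, f_plus = esr_frequencies(2.87e9, 5e6, np.array([0.0, 0.0, 1e-3]))
print(f_minus * 1e-9, f_plus * 1e-9)      # about 2.84 and 2.90 GHz for 1 mT along Z
```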
The various orientations are labeled with small letters: ND74a, ND74d, ND74e, ND74g, ND75c, ND79c. The spherical angles (θ, φ) that characterize the direction u_NV were obtained by applying an external magnetic field of known direction and amplitude with a three-axis coil system, following the procedure described in Ref. [19]. The measurement uncertainty of 2° (standard error) is related to the precision of the calibration of the coils and their alignment with respect to the (xyz) reference frame. Table I indicates the parameters D, E, θ and φ measured for each NV magnetometer used in this work, with the associated standard errors. The stray field components above a single abrupt edge parallel to the y direction, positioned at x = 0 (magnetization pointing upward for x < 0), are given by B_x^edge(x) = (µ_0 I_s/2π) d/(x² + d²), B_z^edge(x) = −(µ_0 I_s/2π) x/(x² + d²), and B_y^edge = 0 (6). These formulas correspond to the thin-film limit (d ≫ t) of exact formulas, but the relative error introduced by the approximation is < 10⁻⁵ in our case (d/t ∼ 100), which is negligible compared with other sources of error (see below). The field above a stripe is then obtained by simply adding the contributions of the two edges, namely B^stripe(x) = B^edge(x) − B^edge(x + w_c) (7), where w_c is the stripe width. Using Eqs. (6) and (7), we obtain an analytical formula for the stray field above the stripe. A fit function ∆f_NV^stripe(x) is then obtained by converting the field distribution into a Zeeman shift of the NV defect after diagonalization of the spin Hamiltonian introduced in Section I. This neglects the rotation of the magnetization near the stripe edges induced by the Dzyaloshinskii-Moriya interaction (DMI) [26]; the effect of this rotation will be discussed in Section III D. The fit function depends on a set of parameters p_i, whose nominal values p̄_i and standard errors σ_p_i are listed in Table II. The uncertainty and reproducibility of the fit procedure were first analyzed by fitting independent calibration linecuts while fixing the parameters p_i to their nominal values p̄_i. As an example, the histograms of the fit outcomes for X = {I_s, d} are shown in Figure 9(a,b) for a set of 13 calibration linecuts recorded on the Ta/CoFeB/MgO sample with ND75c. From this statistic, we obtain I_s,p̄_i = 926.3 ± 2.8 µA and d_p̄_i = 122.9 ± 0.7 nm. Here the error bar is given by the standard deviation of the statistic. The relative uncertainty of the fit procedure is therefore ε_d/fit = 0.6% for the probe-sample distance and ε_Is/fit = 0.3% for the product I_s = M_s t. We now estimate the relative uncertainty on the fit outcomes (ε_d/p_i, ε_Is/p_i) linked to each independent parameter p_i. For that purpose, the set of calibration linecuts was fitted with one parameter p_i fixed at p_i = p̄_i ± σ_p_i, all the other five parameters remaining fixed at their nominal values. The resulting mean values of the fit parameters X = {d, I_s} are denoted X_p̄_i+σ_p_i and X_p̄_i−σ_p_i, and the relative uncertainty introduced by the error on parameter p_i is finally defined as ε_X/p_i = |X_p̄_i+σ_p_i − X_p̄_i−σ_p_i| / (2 X_p̄_i) (8). To illustrate the method, we plot in Figure 9(c,d) the histograms of the fit outcomes while changing the zero-field splitting parameter D from D̄ − σ_D to D̄ + σ_D. For this parameter, the relative uncertainties on d and I_s are ε_d/D = 1.0% and ε_Is/D = 1.6%. The same analysis was performed for all parameters p_i and the corresponding uncertainties are summarized in Table II. The cumulative uncertainty is finally given by the quadrature sum ε_X = [ε_X/fit² + Σ_i ε_X/p_i²]^(1/2) (9), where all errors are assumed to be independent. Following this procedure, we finally obtain d = 122.9 ± 3.1 nm and M_s t = 926 ± 26 µA. B. Micromagnetic calculations While the calibration linecuts were fitted with analytic formulas, the predictions of the stray field above the DWs were obtained using micromagnetic calculations, in order to accurately account for the actual sample geometry and DW profile. Table II.
Summary of the uncertainty ε_X/p_i on the value of the fit parameter X (X = d and X = I_s) related to parameter p_i, for the experiments on Ta/CoFeB/MgO with ND75c (a) and on Pt/Co/AlO_x with ND79c (b). The overall uncertainty ε_X is estimated with Eq. (9), assuming that all errors are independent. The standard deviations obtained from a series of 13 linecuts on Ta/CoFeB/MgO (resp. 9 linecuts on Pt/Co/AlO_x) are ε_d/fit = 0.6% and ε_Is/fit = 0.3% (resp. ε_d/fit = 1.4% and ε_Is/fit = 0.5%). The micromagnetic calculations were performed with the OOMMF code [25,26] to obtain the equilibrium magnetization of the structure. For the Ta/CoFeB/MgO sample, the nominal values used in OOMMF are: anisotropy constant K = 5.9 · 10⁵ J/m³ (obtained from the measured effective anisotropy field of 107 mT [33]), exchange constant A = 20 pJ/m, film thickness t = 1 nm, stripe width w = 1500 nm, cell size 2.5 × 2.5 × 1 nm³. For the Pt/Co/AlO_x sample, we used: K = 1. We considered a straight DW with a tilt angle φ_DW with respect to the y axis [Fig. 10(a)]. As illustrated in Figs. 10(b) and 10(c), this angle was directly inferred from the Zeeman shift images, leading to φ_DW ≈ 2 ± 1° for the DW studied in Fig. 2 of the main paper, and φ_DW ≈ 6 ± 2° for the DW studied in Fig. 4 of the main paper. The uncertainty on φ_DW enables us to account for the fact that the DW is not necessarily rigorously straight. This point will be discussed in Section III C. The calculation of the stray field was then performed with four different initializations of the DW magnetization: (i) right-handed Bloch, (ii) left-handed Bloch, (iii) right-handed Néel and (iv) left-handed Néel. To stabilize the Néel configuration, DMI at one of the interfaces of the ferromagnet was added, as described in Ref. [26]. The value of the DMI parameter was set to |D_DMI| = 0.5 mJ/m², which is large enough to fully stabilize a Néel DW. The additional consequences of a stronger DMI will be discussed in Section III D. Once the equilibrium magnetization was obtained, the stray field distribution B(x, y) at the probe height was computed; the relative field difference between left- and right-handed Bloch DWs is predicted to be < 0.5% [Fig. 10(d)]. Since this is much smaller than the standard error [cf. Section III C], we plotted the mean of these two cases, which is simply referred to as a Bloch DW, and added the deviation induced by the two possible chiralities to the displayed standard error. Figure 10 (caption fragment, referring to the DWs of Fig. 2 and Fig. 4 of the main paper): the simulation assumes a straight DW with φ_DW = 2° and ψ = π/2 in (b), and φ_DW = 6° and ψ = π in (c). (d) Linecuts taken from the simulation of (c), illustrating the small effect of the chirality of the Bloch DW. Near the maximum, the field is changed by ±0.5% with respect to the mean value. In the case of (b), the change is even smaller (±0.3%). C. Uncertainties on the DW stray field predictions In this Section, we analyze how the uncertainties on the preliminary measurements affect the predicted stray field above the DW. Throughout, we use the approximation ∆f_NV ≈ gµ_B B_NV,∥/h, which is quite accurate near the stray field maximum and allows us to consider the magnetic field B_NV,∥ rather than the Zeeman shift ∆f_NV. For clarity the subscript ∥ will be dropped and the projected field will be simply denoted B_NV. Figure 11. To estimate the uncertainty in the DW stray field prediction, we analyze how the error on a calibration measurement above an edge (a) and on other parameters translates into an error on the DW field (b). The calibration edge defines the (xyz) axis system. The DW is assumed to be infinitely long, with its plane tilted by an angle φ_DW with respect to the (yz) plane.
The angle ψ defines the rotation of the in-plane magnetization of the DW with respect to the DW normal. Top panels: side view; Bottom panels: top view. Out-of-plane contribution B ⊥ Let us first consider the out-of-plane contribution to the DW stray field, B ⊥ (x). The stray field components above the DW can be written, in the (xyz) axis system (Fig. 11), as where x DW is the position of the DW (for a given y). This is simply twice the stray field above an edge [see Eq. (6)] expressed in a rotated coordinate system. The projection along the NV center's axis is We now link B ⊥ NV (x) to the calibration measurement. For simplicity, we consider only one of the two edges of the calibration stripe, e.g. the edge at x = 0. We can thus write the projected field above the edge, at a distance d, as Comparing Eqs. (12) and (14), one finds the relation where we define Since B edge NV (x) is experimentally measured, in principle one can use Eq. (15) to predict B ⊥ NV (x) by simply evaluating the function Θ d,θ,φ,φ DW (x) as defined by Eq. (16). As φ DW ∼ 0 implies Θ d,θ,φ,φ DW (x) ∼ 1, it comes that, in a first approximation, B ⊥ NV (x) can be obtained without the need for precise knowledge of any parameter. In other words, the calibration measurement, performed under the same conditions as for the DW measurement, allows us to accurately predict the DW field even though those conditions are not precisely known. This is the key point of our analysis. Strictly speaking, Θ d,θ,φ,φ DW (x), hence B ⊥ NV (x), does depend on some parameters as soon as φ DW = 0, namely on {q i } = {d, θ, φ, φ DW }. To get an insight into how important the knowledge of {q i } is, we need to examine how sensitive Θ d,θ,φ,φ DW (x) is with respect to errors on {q i }. Owing to the sine and cosine functions in Eq. (16), the smallest sensitivity to parameter variations (vanishing partial derivatives) is achieved when either (i) θ ∼ 0 (projection axis perpendicular to the sample plane) or (ii) θ ∼ π/2 (projection axis parallel to the sample plane) combined with φ ∼ 0 and φ − φ DW ∼ 0. However, case (i) cannot be achieved in our experiment, because the out-of-plane RF field cannot efficiently drive ESR of a spin pointing out-of-plane. We therefore target case (ii), that is, θ ∼ π/2 and φ − φ DW ∼ 0. For that purpose, we use a calibration edge that is as parallel to the DW as possible (φ DW → 0) and we seek to have a projection axis that is as perpendicular to the DW plane as possible (θ → π/2 and φ → 0). This is why we employ two perpendicular wires for the calibration and the DW measurements, respectively [cf. Section II]. Conversely, in the worst case of φ DW ∼ π/2 (calibration edge perpendicular to the DW) with θ ∼ π/2, one would have Θ d,θ,φ,φ DW (x) ∼ φ − φ DW , directly proportional to the errors on φ and φ DW . To be more quantitative, we use Eq. (15) to express the uncertainty on the prediction B ⊥ NV (x) as a function of the uncertainties on the various quantities, which gives Here, B edge is given by the measurement error of B edge NV (x), whereas Θ/q i is the uncertainty on Θ {q i } introduced by the error on the parameter q i ∈ {d, θ, φ, φ DW }, the other parameters being fixed at their nominal values, as defined by The results are summarized in Table III for the cases considered in Figs. 2 (Ta/CoFeB/MgO sample) and 4 (Pt/Co/AlO x sample) of the main paper. Θ/q i is evaluated for x = x max , which is the position where the field B ⊥ NV (x) is maximum. 
It can be seen that the dominating source of uncertainty, though small (≈ 1%), is the error on φ DW , while the errors on d, θ and φ have a negligible impact. In practice, to obtain the theoretical predictions shown in the main paper and in Fig. 10, we do not use explicitly Eq. (15), but rather use the set of parameters {I s , d, θ, φ} determined following the calibration step, and put it into the stray field computation [cf. Section III B]. This allows us to simulate more complex structures than the idealized infinitely long DW considered above [ Fig. 11(b)], in particular the finite-width wires studied in this work. However, we stress that, as far as the uncertainties are concerned, this is completely equivalent to using Eq. (15), since B edge NV (x) is fully characterized by the set {I s , d, θ, φ} [cf. Section III A]. The main difference comes from the influence of the edges of the wire, of width w, on the DW stray field. The standard error σ w then translates into a relative error B ⊥ /w on the DW field B ⊥ NV . For the Ta/CoFeB/MgO sample, w = 1500 ± 30 nm, which gives a negligible error B ⊥ /w < 0.1% for the field calculated at the center of the stripe. For the Pt/Co/AlO x sample, the stripe is narrower, w = 470 ± 20 nm, leading to B ⊥ /w = 0.9%. The overall uncertainty on the prediction B ⊥ NV , for a DW confined in a wire, then becomes The overall errors are indicated in Table III. For Ta/CoFeB/MgO (Fig. 2 of the main paper ), the overall standard error is found to be ≈ 1.5%, whereas for Pt/Co/AlO x (Fig. 4) 1 Table III. Summary of the uncertainty Θ/q i on the value of Θ related to parameter q i for the experiments on Ta/CoFeB/MgO with ND75c (a) and on Pt/Co/AlO x with ND79c (b). The overall uncertainty B ⊥ is estimated with Eq. (19), assuming that all errors are independent. The relative error on the calibration field B edge NV (x) is estimated to be B edge ≈ 1.0% in (a) and B edge ≈ 1.5% in (b). The effect of the stripe width uncertainty leads to an additional error B ⊥ /w < 0.1% in (a) and B ⊥ /w = 0.9% in (b). For an arbitrary angle ψ of the in-plane magnetization of the DW, the projected stray field writes where it is assumed that |B NV | < |B ⊥ NV |. We deduce the expression of the absolute uncertainty for B ψ where σ B ⊥ = B ⊥ B ⊥ NV and σ B = B B NV . This is how the confidence intervals shown in Figs. 2 and 4 of the main paper were obtained. Finally, the confidence intervals for cos ψ were defined as the values of cos ψ such that the data points remain in the interval The interval for the DMI parameter was deduced using the relation [7] D DMI = 2µ 0 M 2 s t ln 2 π 2 cos ψ , which holds for an up-down DW provided that | cos ψ| < 1. D. Effects of a large DMI constant So far, we have only considered, for simplicity and to avoid introducing additional parameters, the effect of DMI on the angle ψ of the in-plane DW magnetization. In doing so, two other effects of DMI have been neglected: (i) the DMI induces a rotation of the magnetization near the edges of the ferromagnetic structure [26] and (ii) the DW profile in the presence of DMI slightly deviates from the profile M z (x) = −M s tanh(x/∆ DW ) [7]. The first (second) effect modifies the stray field above the calibration stripe (above the DW). Here we quantify these effects for the case of Pt/Co/AlO x , for which the DMI is expected to be strong. [28]. This is ≈ 70% of the threshold value D c above which the DW energy becomes negative and a spin spiral develops. 
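For reference, converting the |cos ψ| bound obtained above into the DMI bound quoted in the main text is a direct application of the relation D_DMI = 2µ_0 M_s² t ln 2 / π² · cos ψ. The sketch below uses M_s inferred from the calibrated I_s = M_s t ≈ 926 µA and t = 1 nm of the Ta/CoFeB/MgO sample; the strong-DMI Pt/Co/AlO_x case is taken up again just below.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def dmi_from_psi(cos_psi, ms, t):
    """DMI constant (J/m^2) for an up-down DW: D = 2*mu0*Ms^2*t*ln(2)/pi^2 * cos(psi)."""
    return 2 * MU0 * ms**2 * t * np.log(2) / np.pi**2 * cos_psi

t = 1e-9                    # CoFeB thickness
ms = 926e-6 / t             # Ms from the calibrated Is = Ms*t
print(dmi_from_psi(0.07, ms, t) * 1e3, "mJ/m^2")   # ~0.01, the quoted upper bound
```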
Taking D_DMI = −2.5 mJ/m², we predict that the magnetization rotation at the edges reaches ≈ 20° [26]. As a result, the field maximum above the edge is slightly increased [Fig. 12(a)], by an amount of the order of our measurement error, so that this DMI-induced magnetization rotation cannot be directly detected in our experiment. In fitting the data of Fig. 8(b), the outcome for I_s and d is changed by a similar amount: we found d = 119.0 ± 3.4 nm and I_s = 671 ± 18 µA without DMI, as compared with d = 121.0 ± 3.4 nm and I_s = 670 ± 17 µA if D_DMI = −2.5 mJ/m² is included. The difference is below the uncertainty, therefore it does not affect the interpretation of the data measured above the DW. To quantify the second effect, we performed the OOMMF calculation with two different values of D_DMI that stabilize a left-handed Néel DW: D_DMI = −0.5 mJ/m², as used for the simulations shown in the main paper, and D_DMI = −2.5 mJ/m². The stray field calculations, under the same conditions as in Fig. 4 of the main paper, show an increase of the field maximum by ≈ 0.5% for the stronger DMI [Fig. 12(b)]. Again, this is well below the uncertainty [cf. Section III C]. Besides, it is worth pointing out that these two effects tend to compensate each other, since the first one tends to increase the estimated distance d, thereby decreasing the predicted DW field, while the second one tends instead to increase the predicted DW field. Overall, we conclude that neglecting the additional effects of DMI provides predictions for the Néel DW stray field that are correct within the uncertainty, even with a DMI constant as large as 70% of D_c. We note finally that the predictions for the Bloch case, as plotted in Figs. 2 and 4 of the main paper, are not affected by the above considerations, since the Bloch case implies no DMI at all.
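As a cross-check of the field-difference estimates quoted above (for instance the ≈ 8% relative difference between Bloch and Néel stray fields for Pt/Co/AlO_x at d ≈ 120 nm), the stray field of an idealized straight DW can be evaluated by a direct sum of line dipoles over a one-dimensional tanh profile, without micromagnetics. This is only a sketch: M_s is inferred from the I_s ≈ 670 µA and t = 0.6 nm quoted above, the DW width is the ∆_DW ≈ 6 nm of the main text, and the discretization and truncation are arbitrary choices.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def dw_field(x_obs, d, psi, ms=1.1e6, t=0.6e-9, delta=6e-9,
             cell=2e-9, half_width=20e-6):
    """Stray field (Bx, Bz) at height d above an up/down DW centred at x = 0.
    The film is treated as a chain of line dipoles (invariant along y); psi is
    the in-plane angle of the DW core magnetization (0 or pi: Neel, +-pi/2: Bloch)."""
    xs = np.arange(-half_width, half_width, cell) + cell / 2
    mz = -np.tanh(xs / delta)                 # up domain at x < 0
    mx = np.cos(psi) / np.cosh(xs / delta)    # Neel core component
    # The Bloch component (along y) produces no stray field for a straight wall.
    mom = ms * t * cell                       # line-dipole moment per unit length
    bx, bz = np.empty_like(x_obs), np.empty_like(x_obs)
    for i, x0 in enumerate(x_obs):
        rx = x0 - xs
        rho2 = rx**2 + d**2
        mdotr = mx * rx + mz * d
        pref = MU0 * mom / (2 * np.pi * rho2)
        bx[i] = np.sum(pref * (2 * mdotr * rx / rho2 - mx))
        bz[i] = np.sum(pref * (2 * mdotr * d / rho2 - mz))
    return bx, bz

x = np.linspace(-0.6e-6, 0.6e-6, 241)
b_bloch = np.hypot(*dw_field(x, 120e-9, psi=np.pi / 2))
b_neel = np.hypot(*dw_field(x, 120e-9, psi=np.pi))
rel = (b_neel.max() - b_bloch.max()) / b_bloch.max()
print("Neel vs Bloch relative change of the field maximum: %.1f %%" % (100 * rel))
# The magnitude of this number should come out close to the ~8% quoted in the text.
```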
8,230.6
2014-10-06T00:00:00.000
[ "Physics", "Materials Science" ]
Continuum theory for two-dimensional complex plasma clusters We develop a theoretical approach to obtain a new differential equation together with a new boundary condition for the density profile of two-dimensional clusters and apply it to the complex plasma case. In addition, we use the local-density approximation for the interaction energy and consider finite size effects. In this case, our differential equation and the previously used reduce to the same. By using the new boundary condition, a scale invariance appears and the obtained scale function can be used in many systems beyond complex plasmas. The obtained equations are confronted with molecular dynamics simulations. We find that the dependence of the system's size and the density profile with the number of particles, N, agrees very well with the results obtained from simulations. The theory has a surprisingly good accuracy for small systems (8 < N < 500). Moreover, we find by simulations that, for a given external potential, the final configuration from an experiment or simulation can provide us, at most, with two possible values for each interaction parameter, the charge and the screening length. Introduction In complex (dusty) plasma, monodisperse microspheres of dust can be made to float in a circular monolayer. Electrostatic and gravitational forces are used to confine them vertically. With radial confinement and low temperature, the particles form finite crystalline clusters [1,2]. The radial potential well is approximately parabolic and is responsible for the circular shape of the cluster. The interaction between dust particles in the plasma is typically subject to shielding by electrons and ions and is therefore frequently described by the Yukawa (or Debye-Hückel) interaction [3,4]. It is hoped that, given the external potential's parameter, the spatial particle distribution obtained from an experiment or simulation can give values for the interparticle potential parameters by using a suitable theoretical model [5]. In isotropically confined three-dimensional (3D) systems [6,7], much research has been made toward this direction. The behaviors of the density distribution for many particles [8,9] and of the shells' structures for few particles (the so-called Yukawa/Coulomb balls) [10] follow well known phenomenological and ab initio equations. These theories give a good accuracy in almost the entire range of the parameters. In two-dimensional (2D) large systems with moderate and strong screening, the continuum density profile was obtained with a satisfactory accuracy [5,11,12]. However, the density profile or the system's radius for relatively small number of particles (8-500) and for weak screening (or strong confinement) were not well predicted theoretically. There are other methods for determination of interaction parameters in Yukawa systems (see e.g. [13]) but macroscopic properties, such as density and radius, can also be investigated in other 2D systems, and the same procedure can be used in many of them. For instance, similar studies were made in systems with pure Coulomb [14], dipolar [15] and superconductor vortex-vortex [16] interactions. In this paper, we derive good approximations for the density and the radius of 2D complex plasma clusters. The local-density approximation (LDA) with finite size effects (FSE), which were not considered previously in 2D, was used for the interaction energy. 
By considering deformations in the triangular lattice, which is the predominant symmetry in the system, a new differential equation for the spatial dependence of density is obtained. The general form of this differential equation is different from the ones obtained from others approaches used in the literature [5,11,12]. The procedure to obtain this equation helps us to find a new boundary condition which, with the necessary careful considerations, is essential to the accuracy of the final equation. We compare the theory with results of molecular dynamics (MD) simulations and find that it has a surprisingly good accuracy for a great range of the number of particles N , even when the density cannot be considered continuum (8 < N < 500). Moreover, from simulations we found that this system has two regimes where the system's size and density profile are the same but the interaction parameters are different. Formally, the paper is structured as follows. In the next section the model system and details concerning the MD simulations are given. In section 3, our theoretical approach is developed and, in section 4, their results are compared with those obtained from simulations. Finally, section 5 contains a summary of the main results. Model With a Yukawa type pair potential V pair (r i j ) = q 2 e −κr i j /r i j and an external potential V ext (r ) = V 0 r 2 /2, the Hamiltonian of the system is given by where N is the number of particles, q is the dust charge, κ is the screening parameter, V 0 is the confining potential parameter and α = κ 3 q 2 /V 0 . The rescaled particles' radial positions κr i at the ground state are determined uniquely by α and N . To search for equilibrium configurations of this system, we use a simulated annealing scheme. This is summarized as follows: first, for a given set of parameters, particles are placed at random positions and the solvent is set at a high initial temperature. Then, the solvent temperature is slowly decreased down to T = 0 at a constant rate. The time evolution of the system at a temperature T is modeled by overdamped Langevin equations of motion. These are integrated via Euler finite difference steps following the algorithm where is the total force applied to the particle, t is the time step and the components of the 2D vector g are independent random variables with standard normal distribution which accounts for the Langevin kicks. Interaction energy and the local density approximation In a continuum approximation for the particle density ρ(r), the total potential over a particle located at r is given by where R m is the system's radius. When the range of the potential, 1/κ, is small compared to the system size (i.e. when κ R m 1), a small region around r gives the main contribution to the integral of equation (4). In this case, the effective region of integration is approximately a circle with radius of the order of 1/κ and centered at the position vector r. For such a circular symmetry in the region of integration, we can expand ρ(r ) around r (i.e. ρ(r ) = ρ(r) + (r − r) · ∇ρ(r) + |r − r| 2 ∇ 2 ρ(r)/4 + · · ·) and one can see that the second term vanishes in the integral of ρ(r ) e −κ|r−r | /|r − r |. Therefore, the use of just the first term, i.e. of the local density ρ(r), may give a good approximation for φ(r). This is called the LDA. On the other hand, when R m is equal to a few units of 1/κ or less, FSE must be considered. 
These effects were taken into account only in the 3D case by Henning et al [9] and we will use a similar FSE consideration in 2D. Even with a constant density, the integral of equation (4) cannot be given in terms of elementary functions. In this case, we consider a position independent FSE by simply integrating over the region |r − r| R m , i.e. a disc of radius R m centered at r, which gives As the system has locally a predominant triangular lattice symmetry, the density is related to the nearest particles' distance x by ρ = 2/( √ 3x 2 ). In the limit of strong screening, i.e. a big value of κ and therefore a short range of the pair potential, each particle effectively interacts only with its nearest-neighbors. In this case, the total potential φ(r) is approximately equal to 6 (the number of nearest-neighbors of the triangular lattice) times V pair (x(r)). Now we make the consideration that each particle in the cluster always interacts with only six effective particles at a distance x (see figure 1(a)) through an effective potential given by V eff (x) = φ(x)/6, even when the interaction has a long-range character. In fact, each effective particle is in a nearest-neighbor position but represents the interaction with all the particles in one sixth of the system as is shown in figure 1(a). In the short-range case, the effective potential becomes the pair potential and the effective particles become the nearest-neighbor particles. We considered that the total potential over a particle can be given explicitly in terms of x, i.e. φ(r) = φ(ρ(r)) = φ(x(r)), which is always true for the LDA without a position dependent FSE. Equation (5) is a continuum approximation and therefore it is accurate only when the interaction range 1/κ is much bigger than the particles' spacing x (κ x 1). In fact, for Yukawa systems, this approximation fails when κ x 1, or even κ x ∼ 1 [18], and one should consider correlation (discretization) effects. In this case, each particle interacts, effectively, only with few particles in its neighborhood. As those particles have a triangular lattice-like arrangement, we approximate their interaction energy by that given by a perfect triangular lattice. A good approximation for this energy was obtained in [18] by summing up a set of well-defined rings. To include the FSE, we must perform the sum up to the ring of radius ≈ R m , which yields Notice that in the regime of κ x 1 (or ρ/κ 2 1) equation (5) is recovered. In the rest of this article, we will use equation (5) and concentrate on 10 −4 α 10 1 . Nevertheless, the already good results obtained for average and great values of α (∝ κ 3 ) [11,12] can still be further improved by using equation (6). Differential equation for the density In order to compensate the external force, F ext = −V 0 rr, one must consider a deformation of the original hexagon formed by the effective particles. To do so, we choose the dislocation shown in figure 1(b). In this case, the distance between the central particle and the three effective particles in the positive (negative) direction ofr was increased (decreased) by x/2. The angle between two neighbor particles remains π/3 and the total potential remains the same up to the first order in x. The magnitude of the force due to an effective particle is ∂φ ∂ x . Therefore, from figure 1(b) and from the equilibrium of forces, we have which, for small x, becomes For a smooth dependence of x with r we associate x/x to the derivative of x with respect to r . 
This is based on the fact that the variation of the nearest-neighbor distance from the bottom rhombus to the top one, i.e. the difference between their sides, in figure 1(b), is x and the radial distance between their centers is x. By substituting x = x(dx/dr ) in equation (8), we obtain the following differential equation: This last equation written in terms of the density becomes where V ext (r) represents a general external potential. The obtained differential equation has a general form different from other well-known approaches. In a variational approach [11], the minimization of the total potential energy functional under the constraint ρ(r) d 2 r = N , gives Whilst in a hydrostatic approach [12], the Euler equation where P = −∂(φ/2)/∂(1/ρ) = (ρ 2 /2)∂φ/∂ρ is the pressure, gives In spite of the differences between equations (10), (12) and (14), by using an approximation for φ of the form φ(r) = Bρ(r), where B is a constant, they provide the same result for the density, i.e. ρ(r) = ρ 0 − V ext (r)/B where ρ 0 is a constant of integration. The differences between these equations appear when φ has other dependences with ρ and r. These differences will be investigated in future work. Now we use the result ρ(r) = ρ 0 − V ext (r)/B together with the approximation of equation (5) and the external potential V ext (r ) = V 0 r 2 /2 to obtain the following equation for the density: The values of ρ(0)/κ 2 and κ R m are obtained from the boundary and normalization conditions. As remarked in section 2, they must depend only on the parameters α and N . Boundary and normalization conditions A continuum approximation for the density must satisfy the normalization condition ρ(r) d 2 r = N , which in our case is written as This equation can be used to eliminate one unknown term of equation (15), while the second one can be eliminated from a boundary condition. The most simple boundary condition is to say that ρ(R m ) = 0. However it does not indeed happen, although becomes a good approximation in the large N limit where, as observed in experiments and simulations, lim N →∞ ρ(R m )/ρ(0) = 0 ∀α. The use of this condition and equation (16) in (15) gives where Except by the factor (1 − e −κ R m ) descendant from FSE, this result was obtained by Totsuji et al [11] by using a variational approach. There, the result ρ(R m ) = 0 comes out from the minimization of the total energy obtained from the LDA without FSE and correlation effects. When the latter effects were considered [11], a non-zero density at the edge came out naturally, satisfying lim N →∞ ρ(R m )/ρ(0) = 0. Notice that, from the effective particles approach of figure 1(b), a straightforward boundary condition appears. When the central particle of figure 1(b) is at the boundary of the cluster, the three effective particles at the top do not exist. Therefore, the equilibrium of forces considered in equation (7) is now written as (1 + 2 cos(π/3))F(x m ) = V 0 R m , where x m is the nearest-neighbor distance at the boundary, or equivalently Despite the consideration of just three effective particles in this case, the effective potential continues to be given by φ(x)/6. This is in fact a correction for our choice of the term (1 − e −κ R m ) for the FSE. This choice overestimates the potential at the edge by a factor that approximates the number 2 as κ R m increases. 
By placing the potential φ of equation (5) into equation (19) we obtain an expression relating x m and R m − 1 3 An important and non-trivial consideration must be made here: for a particle at the edge, just a fraction of its Wigner-Seitz cell is inside the circle of radius R m (see figure 2). This fraction is approximately one half for R m /x m > 1. Due to this fact, the density (given by the inverse of Wigner-Seitz cell area) at the edge is related to x m by This correction in the density should be made always that the superior limit of integration of the normalization condition shown in equation (16) is R m . In this case, the total area occupied by all Wigner-Seitz cells must be equal to πR 2 m but it is overestimated if we always have A WS = 1/ρ = √ 3x 2 /2 (see figure 2). This problem has not been considered previously in the literature but it becomes important in small systems. Equation (21) is not exact but gives a good approximation to correct this problem. Using equations (21) and (20), we obtain The above equation is one of the main theoretical results obtained in this paper. It will be determinant for the good accuracy of the theory at small number of particles. By using equation (22) in equation (15), we can find ρ(0) in terms of R m . Finally, the density at any point is then given by where κ R m can be found by using equation (16), which gives Scale invariance For a given number of particles, equations (23) and (24) inform that the distances in the system scale with R m , i.e. where f (N , r/R m ) is independent on the potentials' parameters. This would imply that the values of κ and α cannot be obtained separately from the final configuration of an experiment, but only from a relation between them. Another experiment with a different N must be done in order to get a new relation. A density profile obeying equation (25) can be obtained when the potential φ is, up to a constant, approximated by φ(r) = Bρ(r). By using this approximation, the method of the previous two subsections (i.e. any of equations (10), (12) and (14) together with ρ(R m ) = 4/( √ 3x 2 m ) and equation (19)) gives where c = c(N ) is obtained from and R m is related to B by The factor B can be a function of N , R m and of the potentials' parameters; and may be obtained, for example, by: (a) an integral of the pairwise potential B = V p (r ) d 2 r , as it was done in equation (5), or (b) by a derivative of the Madelung energy (equation (6)), B = ∂φ/∂ρ, evaluated at the mean densityρ = N /π R 2 m . Indeed, the approximation φ = Bρ is very general and can be used in many systems beyond Yukawa [16,17], and so does equations (26) and (27). [12] and [11]. The scale invariance of the theory is broken when position dependent FSE (small κ R m ) or correlation effects (small ρ/κ 2 ) are taken into account. Using the latter, Totsuji et al [19] were able to obtain approximate values of κ and q 2 , for a given V 0 , by considering the α-dependence of R 2 m ρ(0). Their results are applicable with the assumption that α 1. But, as we will see in section 4, if all values of α are possible a priori, α can be a bi-valued function of R 2 m ρ(0). Results Equations (24) and (26) for the radius and the density, respectively developed in the latter section, were compared with the results obtained from MD simulations in order to verify the accuracy of our theory. We defined the system's radius as the radial distance of the outermost particle. 
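For orientation, the dependence of κR_m on N under the simplest boundary condition ρ(R_m) = 0 can be reproduced in a few lines. Combining the parabolic density ρ(r) = ρ_0 − V_0 r²/(2B) with B = 2πq²(1 − e^(−κR_m))/κ and the normalization condition yields the closed fixed-point relation (κR_m)⁴ = 8Nα(1 − e^(−κR_m)); this is a Totsuji-type estimate in the spirit of Eqs. (17)-(18), derived here independently as a sketch, and it does not include the improved boundary condition of Eq. (22).

```python
import numpy as np

def kappa_rm_simple(alpha, n, tol=1e-12, max_iter=200):
    """Fixed point of (kappa*Rm)**4 = 8*N*alpha*(1 - exp(-kappa*Rm)),
    i.e. the LDA+FSE cluster radius with the simple condition rho(Rm) = 0."""
    x = (8.0 * n * alpha) ** 0.25            # start from the no-FSE limit
    for _ in range(max_iter):
        x_new = (8.0 * n * alpha * (1.0 - np.exp(-x))) ** 0.25
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

for n in (50, 500, 5000):
    print(n, kappa_rm_simple(alpha=1e-2, n=n))
```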
Although, in the theory, we have made considerations that demand great values of κ R m and ρ/κ 2 , which implies in the need of great number of particles since N = π R 2 mρ , we still found satisfactory results even for N < 500. Figure 3(a) shows the N -dependence of κ R m , obtained from equation (24) (lines) and simulations (symbols) for α = 10 1 , 1, 10 −1 , 10 −2 , 10 −3 and 10 −4 . In figure 3(b), we compare equation (24), equation (24) using ρ(R m ) = 2/( √ 3x 2 m ) instead of equation (21), equation (18) and the equations obtained in [12] and [11] (with cohesive energy) in the case of α = 10 −2 . The surprisingly good agreement of the theory for small number of particles is evidenced in figure 4 where α = 10 −2 and no logarithm scale is used. The dotted and dashed lines represent equations (18) and (24), respectively. One can see that the boundary condition of equation (22) was responsible for a great improvement in the theory. The inset of figure 4, which is a zoom for 2 < N < 21, shows that, although the radius obtained from simulations has a discontinuous dependence with N , the theory gives a good smooth approximation. A good accuracy starts as N > 8, when the structural transition (1, 7) → (2, 7) occurs for all α [20]. For N 8, the particles are located at the vertices of a regular polygon centered at the origin with one or none particle at the center. In these cases, the exact values of κ R m can be easily obtained theoretically. For α = 10 −2 , the relative error of equation (24) is less than 5% for any N > 8. The radial density profile was calculated by dividing the system in annuli and counting the number of particles in each annulus. This number is then divided by the area of the respective annulus and the result is taken as the density at the mean radius of the annulus. For a smooth profile, the width of the annuli cannot be too small. Such smoothness can only be obtained for N 100. Since the accuracy of the theory for the radius is already known from figure 3, we now investigate the accuracy for the density by showing only how R 2 m ρ depends on r/R m . By doing this, the scale invariance predicted by the theory in section 3.4 is also investigated. (26) and does not depend on α, and from the one developed in [11] (with cohesive energy) for α = 10 −4 and 10 1 (the curves for other values of α are between the two and so are the ones obtained by the theory of [12]). As it can be seen, our theory gives a good approximation even for values of N between 100 and 1000, which are more easy to be achieved in experiments. We can see from figures 5(a) and (b) that the density behavior is approximately parabolic and depends on α. We interpolate this density using a one-parameter fitting function given by where d = d(α, N ) is the fitting parameter. This function is parabolic on r and attends the conditions of equation (16) and of continuity of its gradient for |r| < R m . Figure 6(a) shows the values of d for N = 10 000 as a function of α obtained by interpolations of equation (29) with simulation results (symbols) compared with the value predicted by the theory (line). We can see that the fitting results have a well-behaved dependence on α. However, it is not injective, i.e. it does not have a one-to-one correspondence. For instance, the scaled densities of α = 10 −4 and 10 5 , for N = 10 000, are shown in figure 6(b) and one can see that they are almost the same. 
This implies in a limitation of any theory for the density: there can be two possible values for each of the system's parameters (κ, q and V 0 ) which can result in the same values of R m and ρ(r ) coming from an experiment or simulation. Also, just a theory which accounts non-local, finite size and correlational effects can predict these two regimes at the same time, giving the two possible values of the parameters. Conclusions The density profile and the system's size of 2D complex plasma clusters confined by a parabolic potential are obtained analytically. For the calculation of the interaction energy, we used the LDA with a position independent FSE. A differential equation for the density was obtained by a new method. By using the proposed interaction energy, this differential equation gives the same result of the variational [11] and pressure [12] approaches, however our method gives light to an important new boundary condition. The density resulted from our approximation scales with the radius, implying that the values of κ and α cannot be calculated separately from the final configuration of a single experiment. In fact, the real density does not have this scale invariance but it is showed that there are systems with different α but identical normalized density profiles. The boundary condition and the FSE were determinant to provide surprisingly good results in systems with relatively small number of particles (8 < N < 500). For instance, for α = 10 −2 , the relative error of the theoretical maximum radius is less than 5% for any N > 8.
5,576.2
2013-09-01T00:00:00.000
[ "Physics" ]
Highest achievable detection range for SPR based sensors using gallium phosphide (GaP) as a substrate: a theoretical study In the present study, we have theoretically modelled a surface plasmon resonance (SPR) based sensing chip utilizing a prism made up of gallium phosphidee. It has been found in the study that a large range of refractive index starting from the gaseous medium to highly concentrated liquids can be sensed by using a single chip in the visible region of the spectrum. The variation of the sensitivity as well as detection accuracy with sensing region refractive index has been analyzed in detail. The large value of the sensitivity along with the large dynamic range is the advantageous feature of the present sensing probe. Introduction The surface plasmon resonance (SPR) technique is the most promising tool for the detection of various chemical and biological species [1][2][3][4][5][6][7][8][9][10][11][12]. At the present time, the SPR technique is being used not only in the biochemical species detection and gas sensing but also in imaging, terahertz plasmonics, artificially structured materials lithography, and many other areas [2][3][4]. In SPR based sensors, the famous Kretschmann-Reather configuration is utilized [1]. In this configuration, a thin layer of metal such as silver or gold is deposited onto the base of a prism as shown in Fig. 1. A p-polarized light is allowed to fall from the substrate side. The medium to be sensed is kept in contact with the metal layer. Under the resonance condition, the reflected light beam causes a sharp dip into the intensity when the wavevector of the incident wave matches with the wave vector of the surface plasmon wave (Fig. 1 insight). If n s and ε m represent the refractive index and the dielectric function of the sensing medium and metal layer, respectively, the resonance condition can be written as The expression on the left hand side represents the propagation constant (k) of the evanescent wave, and the right hand side represents the propagation constant of the surface plasmon wave existing at the gold sensing layer interface. θ SPR is the resonance angle, i.e. the angle at which two wave vectors match. Here, o  is the wavelength of light being used. In SPR based sensors, two interrogation schemes are generally used. One is called the angular interrogation, and the other is called the spectral interrogation. In the angular interrogation technique, we fix the wavelength of the incident light, i.e. we use a monochromatic light source and vary the angle of incidence from the critical angle to 90 o . This change in the angle changes the propagation constant of the incident wave. At a particular value of the angle of incidence, this propagation constant becomes equal to the SPR propagation constant leading to the condition of the resonance. This resonance angle is quite sensitive to the changes in the sensing region environment. A slight change in the surrounding refractive index causes a corresponding change in the resonance angle. By measuring the change in the resonance angle, the change in the surrounding region refractive index can be measured. In another scheme called the wavelength interrogation scheme, we use a polychromatic source of light and fix the angle of incidence to any particular value between  c and 90 o . This again changes the value of the propagation constant of the incident beam. 
At a particular value of the incident wavelength, the propagation constant of the incident wave may become equal to the surface plasmon wave vector. This is called the resonance condition, and the wavelength at which this occurs is called the resonance wavelength. At this wavelength, the intensity of the reflected light shows the minimum. For a change in the refractive index of the sensing medium, the resonance wavelength changes. By measuring the change in the resonance wavelength, the corresponding change in the refractive index of the medium can be measured. There are other techniques also for measuring changes in the refractive index such as the Mach-Zhander interferometer and Febry-Perot interferometer, however, SPR sensors are also getting progressed with the equal pace such as in the sensing of hydrogen gas using palladium [13][14][15]. In the present work, an angular interrogation technique is used to exploit the famous Krestchmann-Reather configuration. People have devised various sensing probes which are applicable either for the gas sensing or for the liquid medium. We here are proffering a sensing chip which can detect various gases as well as highly concentrated liquid in the visible region of the spectrum thereby producing a large dynamic range of the sensing medium refractive index. This could be possibly realized by utilizing the sensor chip which can be fabricated by depositing a thin layer of gold layer (50 nm) onto the base of a prism made up of a semiconducting material called gallium phosphide. Gallium phosphide is a transparent semiconducting material having refractive index around 3.3 in the visible range of the spectrum. Also it is a wide band gap material. The detailed theoretical analysis is carried out in terms of the sensitivity and detection accuracy. The most advantageous feature of having large detection range has been addressed. In order to realize the practical design of the sensor, consider a GaP prism coated with a gold layer of about 50 nm as shown in Fig. 1. For coating the gold layer, any vacuum deposition technique such as thermal evaporation, sputtering or electron beam deposition can be used. The chemical or the gaseous medium which is to be sensed is kept in contact with the gold layer. A p-polarized light with the intensity I and wavelength 632 nm from He-Ne laser is allowed to fall on the substrate. The intensity of the reflected light is measured with respect to the angle of incidence. A sharp dip is observed at a particular value of the angle of the incidence. This will be the resonance angle. A slight change in the sensing region will reflect in terms of the corresponding change in the resonance angle. In mathematical modelling, we require the value of the dielectric constant of the metal layer for the He-Ne laser wavelength which can be calculated from the Drude formula [6]: where mr  and mi  are the real and imaginary parts of the dielectric constants of the gold layer. Also  p and  c are the plasma and collision frequencies of the gold layer. Their values are 1.6826×10 -7 m and 8.9342×10 -6 m, respectively. To get a feel of the refractive index variation, one should note that most of the gases possess the refractive index close to unity whereas highly concentrated liquid chemicals such as solutions of C 7 H 6 Cl 2 have their refractive index values around 1.6 to 1.7 [11]. For the calculation of the reflected light beam intensity, one should be very precise because it is the resonance angle which decides the sensitivity of the sensor. 
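The dielectric constant of the gold layer entering the calculation can be evaluated from the wavelength-dependent Drude form commonly used in SPR modelling, ε_m(λ) = 1 − λ²λ_c/[λ_p²(λ_c + iλ)], with the plasma and collision wavelengths quoted above. The explicit expression is the standard one from the SPR literature and is assumed here to be the formula meant in the text; a small check at the He-Ne wavelength:

```python
import numpy as np

LAMBDA_P = 1.6826e-7   # plasma wavelength of gold (m), as quoted in the text
LAMBDA_C = 8.9342e-6   # collision wavelength of gold (m)

def eps_gold(lam):
    """Drude dielectric function of gold at free-space wavelength lam (m)."""
    return 1 - lam**2 * LAMBDA_C / (LAMBDA_P**2 * (LAMBDA_C + 1j * lam))

eps = eps_gold(632e-9)
print(eps.real, eps.imag)   # roughly -13 + 1.0j at the He-Ne line
```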
In the present case, we have used a N-layer matrix method for the calculation of the reflection coefficient. The first medium is the GaP prism, the second medium is the gold layer, and the third medium is the sensing region itself which is to be sensed. These layers are assumed to be stacked along the z axis as shown in Fig. 2. The arbitrary medium layer has the thickness d i , the dialectic constant ε i , and the permeability µ i . The relationship between the tangential electric field and magnetic field at z=z 1 =0 and at z= z N-1 are related by 1 1 1 1 where E 1 and H 1 are the tangential electric and magnetic fields at the first boundary. E N-1 and H N-1 are the similar fields at the Nth layer. M is called the characteristic matrix of the combined structure and is given by and In the present case, there are three media and two layers N=2. Therefore, n 1 is the refractive index of the substrate, n 2 = m  is the refractive index of the metal layer (gold), n 3 is the refractive index of the sensing medium, and d 2 will be the thickness of the gold layer which is 50 nm in the present case. If  1 is the angle of incidence and  o is the wavelength of light in the free space which is 632 nm for He-Ne laser, the amplitude reflection coefficient r p for the p-polarized incident wave is given as All the M 11 , M 12 , M 21 , and M 24 are the four matrix coefficients of the matrix in (4). This will be a complex number. The reflection coefficient is the absolute square of r p as 2 . We shall evaluate the performance of the sensor in terms of the two parameters: Sensitivity: if the refractive index of the sensing medium is altered by n s , the resonance angle will also change, and if the change in the corresponding angle is  res , we define the sensitivity as Detection accuracy: for each SPR curve, we have to determine the exact location of the SPR angle which will be more correct if the SPR curve is sharp. So we define a parameter called the detection accuracy which is just the reciprocal of the full width at half maximum (FWHM) of an SPR curve: In this particular section of the article, we shall present various theoretical results obtained from the theory discussed above. Now since we are using here the technique of angular interrogation, so we shall check first the possibility of the SP wave excitation by incident light. For this, we have plotted in Fig. 3 the propagation constant of the surface plasmon wave with the angle of incidence along with the propagation constant of the incident wave from glass as well as from the GaP substrate. As it is quite clear from (1) that the surface plasmon wave vector is independent of the angle of incidence, hence the curve comes out to a straight line parallel to the angle axis as shown in Fig. 3. The surface plasmon wave vector is plotted for n s =1.62. In the same graph, we have also plotted the propagation constant of the direct light, i.e. light in the glass prism, i.e. the evanescent wave and also the propagation constant of the wave in the GaP prism. It is quite obvious from the figure that the propagation constant of light through the glass prism does not intersect the SP wave propagation constant which indicates that the SP wave cannot be excited for these waves. However, the SP dispersion curve intersects the light wave for GaP showing the possibility that the SP can be excited by these waves at a particular angle of incidence called the resonance angle. 
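Before turning to the computed reflectivity curves, the N-layer recipe above can be condensed into a short script for the three-layer case (GaP prism / 50 nm gold / sensing medium). The matrix elements are written in the standard characteristic-matrix form for p-polarized light, which is assumed here to coincide with the N-layer method referred to above; the GaP index is fixed at 3.3 and the gold permittivity is taken from the Drude sketch. The dip positions it returns can be compared with the dispersion-matching argument just discussed.

```python
import numpy as np

def rp_three_layer(theta, n_prism, eps_metal, d_metal, n_sense, lam):
    """|r_p|^2 for prism / metal film / sensing medium (characteristic matrix)."""
    k0 = 2 * np.pi / lam
    eps = [n_prism**2, eps_metal, n_sense**2]
    kx = n_prism * np.sin(theta)                   # conserved tangential index
    kz = [np.sqrt(e - kx**2 + 0j) for e in eps]    # normalized kz in each medium
    q = [kzi / e for kzi, e in zip(kz, eps)]       # p-polarization admittances
    beta = k0 * d_metal * kz[1]                    # phase thickness of the film
    m11, m12 = np.cos(beta), -1j * np.sin(beta) / q[1]
    m21, m22 = -1j * q[1] * np.sin(beta), np.cos(beta)
    num = (m11 + m12 * q[2]) * q[0] - (m21 + m22 * q[2])
    den = (m11 + m12 * q[2]) * q[0] + (m21 + m22 * q[2])
    return abs(num / den) ** 2

lam, n_gap, d_au, eps_au = 632e-9, 3.3, 50e-9, -13.0 + 1.0j
angles = np.deg2rad(np.linspace(15, 60, 4501))
for ns in (1.30, 1.50):
    refl = np.array([rp_three_layer(t, n_gap, eps_au, d_au, ns, lam) for t in angles])
    print("n_s = %.2f -> SPR dip at %.2f deg" % (ns, np.degrees(angles[np.argmin(refl)])))
# The dips land near 25 deg and 30 deg, and their shift with n_s gives the
# angular sensitivity defined in the text.
```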
Thus, it is clear that the SP wave at the gold and high refractive index medium interface can be excited by using a GaP substrate. Now we shall calculate the reflection coefficient. In Fig. 4, we have plotted the variation of reflected light intensity with the angle of incidence. As it is obvious, the intensity shows a sharp dip at a particular value of the angle of incidence. For n s =1.30, the resonance angle comes out to be 24.9788 o . As we increase the value of n s from 1.30 to 1.50, i.e. n s =0.2, the SPR dip shifts to the higher value of angle of incidence  SPR = 29.9641° giving rise to a shift of = 4.9853°. The larger the value of shift is, the greater the sensitivity is. This cannot be compared for the Si based prism as the SPR can not be excited for such a high value of sensing region refractive index. In the same figure, we have also plotted the SPR curve for n s = 1.7 and 1.9, and the four curves are well depict the quite high value of the sensitivity separately. To check the variation of sensitivity with the sensing region refractive index, we have calculated the resonance angle for a large range of refractive index starting from the gaseous medium to highly concentrated liquids. The curve between the sensitivity and sensing region refractive index is shown in Fig. 5. The sensitivity increases with an increase in the sensing region refractive index as for higher concentrations the resonance condition will be satisfied at higher values of SPR angles. One more parameter is the detection accuracy. For the calculation of the detection accuracy, first we have calculated the full width at half maximum FWHM of each SPR curve, and then the DA is evaluated as per the definition given in (10). It is visible from the SPR curves that with an increase in the sensing region the refractive index, the SPR curves get broadened giving rise to poor detection accuracy. However, the broadening in SPR curves as given in Fig. 4 is not too much so that it may affect  SPR . The variation of detection accuracy with refractive index of the sensing layer is plotted in Fig. 6. More importantly, we are using the visible light source, i.e. a He-Ne laser with the wavelength 632 nm, and we are able to sense a large spectrum of the sensing region refractive index. The sensor reported till now either uses the infra red (IR) source or uses a buffer layer to bring the SPR dip in the visible region, but in the present case, we have modelled a single sensing chip which can be used to sense the gases as well as the liquid medium in the visible region of the spectrum. The large range of n s is the most advantageous feature of the present study. The theoretical study given in the current study has already been verified experimentally by Motogaito et al. [11]. However, a detailed theoretical analysis in terms of detection accuracy and sensitivity is missing. The current study provides a material to fill that gap. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http:// creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
3,339
2016-06-01T00:00:00.000
[ "Materials Science", "Physics", "Engineering" ]